  • Optimizing common Redis issues

    maxclients
    This is the maximum number of clients Redis can handle simultaneously. The default limit is 10000.

    maxclients 20000
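
    You can also check or change the effective value at runtime with redis-cli; this is a minimal sketch assuming a local instance on the default port, and note that CONFIG SET does not survive a restart unless redis.conf is updated as well:

    redis-cli config get maxclients
    redis-cli config set maxclients 20000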

    However, you may see a warning like the following in the error log:
    # You requested maxclients of 10000 requiring at least 10032 max file descriptors.
    # Redis can't set maximum open files to 10032 because of OS error: Operation not permitted.
    # Current maximum open files is 4096. maxclients has been reduced to 4064 to compensate for low ulimit. If you need higher maxclients increase 'ulimit -n'.
    Please try increasing the open file limit first.
    

    If you still see the warning above after restarting Redis, try the following:

    Open /lib/systemd/system/redis-server.service
    Add a line LimitNOFILE=64000 under the [Service] section
    Then restart the Redis service
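
    A minimal sketch of that change, assuming the unit is named redis-server.service and the commands are run as root; checking ulimit -n first shows the limit the current shell has:

    ulimit -n
    # add under [Service] in /lib/systemd/system/redis-server.service:
    #   LimitNOFILE=64000
    systemctl daemon-reload
    systemctl restart redis-server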

    tcp-backlog
    If Redis handles a large number of connections, it is recommended to raise this parameter.
    Changing it also requires adjusting the somaxconn and tcp_max_syn_backlog OS parameters:

    echo "net.core.somaxconn=65536" >> /etc/sysctl.conf
    echo "net.ipv4.tcp_max_syn_backlog=8192" >> /etc/sysctl.conf
    sysctl -p
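
    On the Redis side, the matching directive in redis.conf is tcp-backlog; the value below is only an example, and the effective backlog is capped by the somaxconn value set above:

    tcp-backlog 65536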

    WARNING: Transparent Huge Pages
    For a warning like the following:
    # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
    As described in the warning, run the following command:
    

    echo never > /sys/kernel/mm/transparent_hugepage/enabled
    Also, add the above line to /etc/rc.local so the change persists across reboots. You do NOT need to reboot now.
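
    To confirm THP is now disabled, read the same sysfs file back; the active setting is shown in brackets:

    cat /sys/kernel/mm/transparent_hugepage/enabled
    # expected output: always madvise [never]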

    WARNING: overcommit_memory
    For a warning like the following:
    WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
    Run the following commands:
    

    echo "vm.overcommit_memory=1" >> /etc/sysctl.conf
    sysctl -p
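
    To verify the setting took effect, query it back with sysctl (or read /proc/sys/vm/overcommit_memory directly):

    sysctl vm.overcommit_memory
    # expected output: vm.overcommit_memory = 1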

  • Original article: https://www.cnblogs.com/jonnyan/p/12967244.html