  • Redis configuration explained

    For Redis V2.8.21

    # Redis configuration file example

    # Note on units: when memory size is needed, it is possible to specify
    # it in the usual form of 1k 5GB 4M and so forth:
    #
    # 1k => 1000 bytes
    # 1kb => 1024 bytes
    # 1m => 1000000 bytes
    # 1mb => 1024*1024 bytes
    # 1g => 1000000000 bytes
    # 1gb => 1024*1024*1024 bytes
    #
    # units are case insensitive so 1GB 1Gb 1gB are all the same.

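
    The unit rules above can be sketched as a tiny parser. This is an illustrative
    helper (the name `parse_memory` is not part of Redis): bare suffixes are
    decimal, `b` suffixes are binary, and matching is case-insensitive.

```python
# Sketch of the redis.conf memory-unit convention described above.
# 1k = 1000 bytes, 1kb = 1024 bytes, and so on; case does not matter.
UNITS = {
    "": 1,
    "k": 1000, "kb": 1024,
    "m": 1000**2, "mb": 1024**2,
    "g": 1000**3, "gb": 1024**3,
}

def parse_memory(value: str) -> int:
    value = value.strip().lower()
    digits = value.rstrip("kmgb")   # strip the unit suffix, keep the number
    suffix = value[len(digits):]
    return int(digits) * UNITS[suffix]
```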

    ################################## INCLUDES ###################################

    # Include one or more other config files here. This is useful if you
    # have a standard template that goes to all Redis servers but also need
    # to customize a few per-server settings. Include files can include
    # other files, so use this wisely.
    #
    # Notice option "include" won't be rewritten by command "CONFIG REWRITE"
    # from admin or Redis Sentinel. Since Redis always uses the last processed
    # line as value of a configuration directive, you'd better put includes
    # at the beginning of this file to avoid overwriting config change at runtime.
    #
    # If instead you are interested in using includes to override configuration
    # options, it is better to use include as the last line.
    #
    # include /path/to/local.conf
    # include /path/to/other.conf



    ################################ GENERAL #####################################

    # By default Redis does not run as a daemon. Use 'yes' if you need it.
    # Note that Redis will write a pid file in /var/run/redis.pid when daemonized.

    daemonize no

    # When running daemonized, Redis writes a pid file in /var/run/redis.pid by
    # default. You can specify a custom pid file location here.
    pidfile /var/run/redis.pid

    # Accept connections on the specified port, default is 6379.
    # If port 0 is specified Redis will not listen on a TCP socket.
    port 6379

    # TCP listen() backlog.
    #
    # In high requests-per-second environments you need a high backlog in order
    # to avoid slow clients connections issues. Note that the Linux kernel
    # will silently truncate it to the value of /proc/sys/net/core/somaxconn so
    # make sure to raise both the value of somaxconn and tcp_max_syn_backlog
    # in order to get the desired effect.

    #
    # (Practical note: the kernel keeps separate SYN and accept queues; slow
    # clients make the accept queue back up, so raise the backlog if your
    # clients are slow.)
    tcp-backlog 511
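
    As the note above says, Linux silently caps the backlog at
    `net.core.somaxconn`. A sketch of the matching kernel settings (the values
    are illustrative; apply with `sysctl -p`, or at runtime with `sysctl -w`):

```conf
# /etc/sysctl.conf (illustrative values, sized to cover tcp-backlog 511)
net.core.somaxconn = 1024
net.ipv4.tcp_max_syn_backlog = 1024
```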

    # By default Redis listens for connections from all the network interfaces
    # available on the server. It is possible to listen to just one or multiple
    # interfaces using the "bind" configuration directive, followed by one or
    # more IP addresses.
    #
    # Examples:
    #
    # bind 192.168.1.100 10.0.0.1
    # bind 127.0.0.1


    # Specify the path for the Unix socket that will be used to listen for
    # incoming connections. There is no default, so Redis will not listen
    # on a unix socket when not specified.
    #
    # unixsocket /tmp/redis.sock
    # unixsocketperm 700


    # Close the connection after a client is idle for N seconds (0 to disable)
    timeout 0

    # TCP keepalive.
    #
    # If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
    # of communication. This is useful for two reasons:
    #
    # 1) Detect dead peers.
    # 2) Take the connection alive from the point of view of network
    # equipment in the middle.
    #
    # On Linux, the specified value (in seconds) is the period used to send ACKs.
    # Note that to close the connection the double of the time is needed.
    # On other kernels the period depends on the kernel configuration.
    #
    # A reasonable value for this option is 60 seconds.

    tcp-keepalive 0

    # Specify the server verbosity level.
    # This can be one of:
    # debug (a lot of information, useful for development/testing)
    # verbose (many rarely useful info, but not a mess like the debug level)
    # notice (moderately verbose, what you want in production probably)
    # warning (only very important / critical messages are logged)

    loglevel notice

    # Specify the log file name. Also the empty string can be used to force
    # Redis to log on the standard output. Note that if you use standard
    # output for logging but daemonize, logs will be sent to /dev/null

    logfile ""

    # To enable logging to the system logger, just set 'syslog-enabled' to yes,
    # and optionally update the other syslog parameters to suit your needs.
    # syslog-enabled no

    # Specify the syslog identity.
    # syslog-ident redis

    # Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
    # syslog-facility local0

    # Set the number of databases. The default database is DB 0, you can select
    # a different one on a per-connection basis using SELECT <dbid> where
    # dbid is a number between 0 and 'databases'-1
    databases 16

    ################################ SNAPSHOTTING ################################
    #
    # Save the DB on disk:
    #
    # save <seconds> <changes>
    #
    # Will save the DB if both the given number of seconds and the given
    # number of write operations against the DB occurred.
    #
    # In the example below the behaviour will be to save:
    # after 900 sec (15 min) if at least 1 key changed
    # after 300 sec (5 min) if at least 10 keys changed
    # after 60 sec if at least 10000 keys changed
    #
    # Note: you can disable saving completely by commenting out all "save" lines.
    #
    # It is also possible to remove all the previously configured save
    # points by adding a save directive with a single empty string argument
    # like in the following example:
    #
    # save ""


    save 900 1
    save 300 10
    save 60 10000
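
    The three save lines above mean "snapshot when any save point fires". A
    minimal sketch of that decision, assuming illustrative names (this is a
    model of the rule, not Redis internals):

```python
# Decide whether a background save is due, given the configured save points,
# the seconds elapsed since the last save, and the writes since the last save.
SAVE_POINTS = [(900, 1), (300, 10), (60, 10000)]  # (seconds, min changes)

def snapshot_due(elapsed: int, changes: int, points=SAVE_POINTS) -> bool:
    # A snapshot triggers as soon as any single point is satisfied.
    return any(elapsed >= secs and changes >= min_changes
               for secs, min_changes in points)
```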


    # By default Redis will stop accepting writes if RDB snapshots are enabled
    # (at least one save point) and the latest background save failed.
    # This will make the user aware (in a hard way) that data is not persisting
    # on disk properly, otherwise chances are that no one will notice and some
    # disaster will happen.
    #
    # If the background saving process will start working again Redis will
    # automatically allow writes again.
    #
    # However if you have setup your proper monitoring of the Redis server
    # and persistence, you may want to disable this feature so that Redis will
    # continue to work as usual even if there are problems with disk,
    # permissions, and so forth.

    stop-writes-on-bgsave-error yes

    # Compress string objects using LZF when dump .rdb databases?
    # For default that's set to 'yes' as it's almost always a win.
    # If you want to save some CPU in the saving child set it to 'no' but
    # the dataset will likely be bigger if you have compressible values or keys.
    rdbcompression yes

    # Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
    # This makes the format more resistant to corruption but there is a performance
    # hit to pay (around 10%) when saving and loading RDB files, so you can disable it
    # for maximum performances.
    #
    # RDB files created with checksum disabled have a checksum of zero that will
    # tell the loading code to skip the check.

    rdbchecksum yes

    # The filename where to dump the DB
    dbfilename dump.rdb

    # The working directory.
    #
    # The DB will be written inside this directory, with the filename specified
    # above using the 'dbfilename' configuration directive.
    #
    # The Append Only File will also be created inside this directory.
    #
    # Note that you must specify a directory here, not a file name.
    dir ./

    ################################# REPLICATION #################################

    # Master-Slave replication. Use slaveof to make a Redis instance a copy of
    # another Redis server. A few things to understand ASAP about Redis replication.
    #
    # 1) Redis replication is asynchronous, but you can configure a master to
    # stop accepting writes if it appears to be not connected with at least
    # a given number of slaves.
    # 2) Redis slaves are able to perform a partial resynchronization with the
    # master if the replication link is lost for a relatively small amount of
    # time. You may want to configure the replication backlog size (see the next
    # sections of this file) with a sensible value depending on your needs.
    # 3) Replication is automatic and does not need user intervention. After a
    # network partition slaves automatically try to reconnect to masters
    # and resynchronize with them.

    #
    # Note: the slave replicates from the remote master, so it can keep its own
    # database file, bind to a different address, and listen on a different
    # port than the master.
    #
    # slaveof <masterip> <masterport>

    # If the master is password protected (using the "requirepass" configuration
    # directive below) it is possible to tell the slave to authenticate before
    # starting the replication synchronization process, otherwise the master will
    # refuse the slave request.
    #
    # masterauth <master-password>

    # When a slave loses its connection with the master, or when the replication
    # is still in progress, the slave can act in two different ways:
    #
    # 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will
    # still reply to client requests, possibly with out of date data, or the
    # data set may just be empty if this is the first synchronization.
    #
    # 2) if slave-serve-stale-data is set to 'no' the slave will reply with
    # an error "SYNC with master in progress" to all the kind of commands
    # but to INFO and SLAVEOF.
    #
    slave-serve-stale-data yes

    # You can configure a slave instance to accept writes or not. Writing against
    # a slave instance may be useful to store some ephemeral data (because data
    # written on a slave will be easily deleted after resync with the master) but
    # may also cause problems if clients are writing to it because of a
    # misconfiguration.
    #
    # Since Redis 2.6 by default slaves are read-only.
    #
    # Note: read only slaves are not designed to be exposed to untrusted clients
    # on the internet. It's just a protection layer against misuse of the instance.
    # Still a read only slave exports by default all the administrative commands
    # such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
    # security of read only slaves using 'rename-command' to shadow all the
    # administrative / dangerous commands.
    #
    slave-read-only yes

    # Replication SYNC strategy: disk or socket.
    #
    # -------------------------------------------------------
    # WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
    # -------------------------------------------------------
    #
    # New slaves and reconnecting slaves that are not able to continue the replication
    # process just receiving differences, need to do what is called a "full
    # synchronization". An RDB file is transmitted from the master to the slaves.
    # The transmission can happen in two different ways:
    #
    # 1) Disk-backed: The Redis master creates a new process that writes the RDB
    # file on disk. Later the file is transferred by the parent
    # process to the slaves incrementally.
    # 2) Diskless: The Redis master creates a new process that directly writes the
    # RDB file to slave sockets, without touching the disk at all.
    #
    # With disk-backed replication, while the RDB file is generated, more slaves
    # can be queued and served with the RDB file as soon as the current child producing
    # the RDB file finishes its work. With diskless replication instead once
    # the transfer starts, new slaves arriving will be queued and a new transfer
    # will start when the current one terminates.
    #
    # When diskless replication is used, the master waits a configurable amount of
    # time (in seconds) before starting the transfer in the hope that multiple slaves
    # will arrive and the transfer can be parallelized.
    #
    # With slow disks and fast (large bandwidth) networks, diskless replication
    # works better.
    #

    repl-diskless-sync no

    # When diskless replication is enabled, it is possible to configure the delay
    # the server waits in order to spawn the child that transfers the RDB via socket
    # to the slaves.
    #
    # This is important since once the transfer starts, it is not possible to serve
    # new slaves arriving, that will be queued for the next RDB transfer, so the server
    # waits a delay in order to let more slaves arrive.
    #
    # The delay is specified in seconds, and by default is 5 seconds. To disable
    # it entirely just set it to 0 seconds and the transfer will start ASAP.
    #

    repl-diskless-sync-delay 5
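
    The waiting behaviour can be modelled as batching slave arrivals: a transfer
    starts `delay` seconds after the first queued slave arrived, and slaves that
    show up after the start queue for the next round. This is an illustrative
    model of the scheduling rule only (it ignores transfer duration) and not
    Redis internals:

```python
def batch_transfers(arrivals, delay):
    """Group sorted slave arrival times (seconds) into RDB transfers.

    A transfer starts `delay` seconds after the first queued slave arrived;
    slaves arriving before that start time join the same transfer.
    """
    batches, i = [], 0
    while i < len(arrivals):
        start = arrivals[i] + delay            # wait in hope more slaves arrive
        batch = [t for t in arrivals[i:] if t <= start]
        batches.append(batch)
        i += len(batch)                        # later arrivals go to the next round
    return batches
```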

    # Slaves send PINGs to server in a predefined interval. It's possible to change
    # this interval with the repl_ping_slave_period option. The default value is 10
    # seconds.
    #

    # repl-ping-slave-period 10

    # The following option sets the replication timeout for:
    #
    # 1) Bulk transfer I/O during SYNC, from the point of view of slave.
    # 2) Master timeout from the point of view of slaves (data, pings).
    # 3) Slave timeout from the point of view of masters (REPLCONF ACK pings).
    #
    # It is important to make sure that this value is greater than the value
    # specified for repl-ping-slave-period otherwise a timeout will be detected
    # every time there is low traffic between the master and the slave.
    #
    # repl-timeout 60

    # Disable TCP_NODELAY on the slave socket after SYNC?
    #
    # If you select "yes" Redis will use a smaller number of TCP packets and
    # less bandwidth to send data to slaves. But this can add a delay for
    # the data to appear on the slave side, up to 40 milliseconds with
    # Linux kernels using a default configuration.
    #
    # If you select "no" the delay for data to appear on the slave side will
    # be reduced but more bandwidth will be used for replication.
    #
    # By default we optimize for low latency, but in very high traffic conditions
    # or when the master and slaves are many hops away, turning this to "yes" may
    # be a good idea.


    repl-disable-tcp-nodelay no
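
    This directive toggles Nagle's algorithm on the replication socket. For
    reference, the equivalent option on a plain socket via the standard socket
    API (not Redis code):

```python
import socket

# TCP_NODELAY disables Nagle's algorithm: small segments are sent immediately
# instead of being coalesced, trading extra packets/bandwidth for lower latency.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
s.close()
```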

    # Set the replication backlog size. The backlog is a buffer that accumulates
    # slave data when slaves are disconnected for some time, so that when a slave
    # wants to reconnect again, often a full resync is not needed, but a partial
    # resync is enough, just passing the portion of data the slave missed while
    # disconnected.
    #
    # The bigger the replication backlog, the longer the time the slave can be
    # disconnected and later be able to perform a partial resynchronization.
    #
    # The backlog is only allocated once there is at least a slave connected.
    #
    # repl-backlog-size 1mb
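
    Conceptually the backlog is a fixed-size ring buffer indexed by a global
    byte offset; a reconnecting slave gets a partial resync only if its offset
    is still inside the buffer. A toy sketch of that idea (illustrative, not
    Redis's implementation):

```python
class ReplBacklog:
    """Toy fixed-size replication backlog indexed by a global byte offset."""

    def __init__(self, size: int):
        self.size = size
        self.buf = bytearray()
        self.offset = 0                          # global offset of buf[0]

    def feed(self, data: bytes):
        self.buf += data
        if len(self.buf) > self.size:            # drop the oldest bytes
            drop = len(self.buf) - self.size
            del self.buf[:drop]
            self.offset += drop

    def partial_resync(self, slave_offset: int):
        """Return the bytes the slave missed, or None if it needs a full resync."""
        if slave_offset < self.offset:           # data already fell out of the buffer
            return None
        return bytes(self.buf[slave_offset - self.offset:])
```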

    # After a master has no longer connected slaves for some time, the backlog
    # will be freed. The following option configures the amount of seconds that
    # need to elapse, starting from the time the last slave disconnected, for
    # the backlog buffer to be freed.
    #
    # A value of 0 means to never release the backlog.
    #
    # repl-backlog-ttl 3600

    # The slave priority is an integer number published by Redis in the INFO output.
    # It is used by Redis Sentinel in order to select a slave to promote into a
    # master if the master is no longer working correctly.
    #
    # A slave with a low priority number is considered better for promotion, so
    # for instance if there are three slaves with priority 10, 100, 25 Sentinel will
    # pick the one with priority 10, that is the lowest.
    #
    # However a special priority of 0 marks the slave as not able to perform the
    # role of master, so a slave with priority of 0 will never be selected by
    # Redis Sentinel for promotion.
    #
    # By default the priority is 100.
    #

    slave-priority 100
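
    The selection rule above can be sketched as: ignore priority 0, then take
    the lowest number. This is an illustrative reduction; real Sentinel also
    weighs replication offset and run ID when breaking ties:

```python
def pick_promotion_candidate(slaves: dict):
    """slaves maps slave name -> slave-priority.

    Returns the promotable slave with the lowest non-zero priority,
    or None if no slave is eligible (all priorities are 0).
    """
    eligible = {name: p for name, p in slaves.items() if p > 0}
    if not eligible:
        return None
    return min(eligible, key=eligible.get)
```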

    # It is possible for a master to stop accepting writes if there are less than
    # N slaves connected, having a lag less or equal than M seconds.
    #
    # The N slaves need to be in "online" state.
    #
    # The lag in seconds, that must be <= the specified value, is calculated from
    # the last ping received from the slave, that is usually sent every second.
    #
    # This option does not GUARANTEE that N replicas will accept the write, but
    # will limit the window of exposure for lost writes in case not enough slaves
    # are available, to the specified number of seconds.
    #
    # For example to require at least 3 slaves with a lag <= 10 seconds use:
    #
    # min-slaves-to-write 3
    # min-slaves-max-lag 10
    #
    # Setting one or the other to 0 disables the feature.
    #
    # By default min-slaves-to-write is set to 0 (feature disabled) and
    # min-slaves-max-lag is set to 10.
    #
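
    The write gate described above boils down to counting online slaves whose
    last ACK is recent enough (an illustrative sketch of the rule, not Redis
    internals):

```python
def writes_allowed(slave_lags, min_to_write=3, max_lag=10):
    """slave_lags: seconds since each online slave's last REPLCONF ACK.

    Writes are allowed when at least `min_to_write` slaves have a lag
    <= `max_lag`. Setting either threshold to 0 disables the feature.
    """
    if min_to_write == 0 or max_lag == 0:
        return True
    good = sum(1 for lag in slave_lags if lag <= max_lag)
    return good >= min_to_write
```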


    ################################## SECURITY ###################################

    # Require clients to issue AUTH <PASSWORD> before processing any other
    # commands. This might be useful in environments in which you do not trust
    # others with access to the host running redis-server.
    #
    # This should stay commented out for backward compatibility and because most
    # people do not need auth (e.g. they run their own servers).
    #
    # Warning: since Redis is pretty fast an outside user can try up to
    # 150k passwords per second against a good box. This means that you should
    # use a very strong password otherwise it will be very easy to break.
    #
    # requirepass foobared
    #


    # Command renaming.
    #
    # It is possible to change the name of dangerous commands in a shared
    # environment. For instance the CONFIG command may be renamed into something
    # hard to guess so that it will still be available for internal-use tools
    # but not available for general clients.
    #
    # Example:
    #
    # rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
    #
    # It is also possible to completely kill a command by renaming it into
    # an empty string:
    #
    # rename-command CONFIG ""
    #
    # Please note that changing the name of commands that are logged into the
    # AOF file or transmitted to slaves may cause problems.
    #

    ################################### LIMITS ####################################

    # Set the max number of connected clients at the same time. By default
    # this limit is set to 10000 clients, however if the Redis server is not
    # able to configure the process file limit to allow for the specified limit
    # the max number of allowed clients is set to the current file limit
    # minus 32 (as Redis reserves a few file descriptors for internal uses).
    #
    # Once the limit is reached Redis will close all the new connections sending
    # an error 'max number of clients reached'.
    #
    # maxclients 10000

    # Don't use more memory than the specified amount of bytes.
    # When the memory limit is reached Redis will try to remove keys
    # according to the eviction policy selected (see maxmemory-policy).
    #
    # If Redis can't remove keys according to the policy, or if the policy is
    # set to 'noeviction', Redis will start to reply with errors to commands
    # that would use more memory, like SET, LPUSH, and so on, and will continue
    # to reply to read-only commands like GET.
    #
    # This option is usually useful when using Redis as an LRU cache, or to set
    # a hard memory limit for an instance (using the 'noeviction' policy).
    #
    # WARNING: If you have slaves attached to an instance with maxmemory on,
    # the size of the output buffers needed to feed the slaves are subtracted
    # from the used memory count, so that network problems / resyncs will
    # not trigger a loop where keys are evicted, and in turn the output
    # buffer of slaves is full with DELs of keys evicted triggering the deletion
    # of more keys, and so forth until the database is completely emptied.
    #
    # In short... if you have slaves attached it is suggested that you set a lower
    # limit for maxmemory so that there is some free RAM on the system for slave
    # output buffers (but this is not needed if the policy is 'noeviction').
    #
    # maxmemory <bytes>

    # MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
    # is reached. You can select among five behaviors:
    #
    # volatile-lru -> remove the key with an expire set using an LRU algorithm
    # allkeys-lru -> remove any key according to the LRU algorithm
    # volatile-random -> remove a random key with an expire set
    # allkeys-random -> remove a random key, any key
    # volatile-ttl -> remove the key with the nearest expire time (minor TTL)
    # noeviction -> don't expire at all, just return an error on write operations
    #
    # Note: with any of the above policies, Redis will return an error on write
    # operations, when there are no suitable keys for eviction.
    #
    # At the date of writing these commands are: set setnx setex append
    # incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
    # sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
    # zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
    # getset mset msetnx exec sort
    #
    # The default is:
    #
    # maxmemory-policy volatile-lru

    # LRU and minimal TTL algorithms are not precise algorithms but approximated
    # algorithms (in order to save memory), so you can select as well the sample
    # size to check. For instance for default Redis will check three keys and
    # pick the one that was used less recently, you can change the sample size
    # using the following configuration directive.
    #
    # maxmemory-samples 3
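
    Approximated LRU as described above: sample N keys (`maxmemory-samples`)
    and evict the one idle longest. A toy sketch of the sampling idea, assuming
    an illustrative `last_access` map (not Redis's actual eviction code):

```python
import random

def evict_candidate(last_access: dict, samples: int = 3, rng=random):
    """Pick an eviction victim by sampling `samples` keys at random and
    returning the least recently used one (smallest last-access time)."""
    keys = rng.sample(list(last_access), min(samples, len(last_access)))
    return min(keys, key=last_access.get)
```

    Raising `samples` makes the choice closer to exact LRU at the cost of more
    work per eviction.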

    ############################## APPEND ONLY MODE ###############################

    # By default Redis asynchronously dumps the dataset on disk. This mode is
    # good enough in many applications, but an issue with the Redis process or
    # a power outage may result into a few minutes of writes lost (depending on
    # the configured save points).
    #
    # The Append Only File is an alternative persistence mode that provides
    # much better durability. For instance using the default data fsync policy
    # (see later in the config file) Redis can lose just one second of writes in a
    # dramatic event like a server power outage, or a single write if something
    # wrong with the Redis process itself happens, but the operating system is
    # still running correctly.
    #
    # AOF and RDB persistence can be enabled at the same time without problems.
    # If the AOF is enabled on startup Redis will load the AOF, that is the file
    # with the better durability guarantees.
    #
    # Please check http://redis.io/topics/persistence for more information.
    #
#
# By default Redis dumps the dataset to disk asynchronously. This mode is
# good enough for many applications, but a problem with the Redis process or
# a power outage may result in a few minutes of lost writes (depending on the
# configured save points).
#
# AOF is an alternative persistence mode that provides much better
# durability. For instance, using the default fsync policy (see later in this
# file), Redis loses at most one second of writes in a dramatic event like a
# server power outage, or a single write if the Redis process itself fails
# while the operating system keeps running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If AOF is enabled, Redis will load the AOF file at startup, since it offers
# the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.

    appendonly no

    # The name of the append only file (default: "appendonly.aof")

    appendfilename "appendonly.aof"

    # The fsync() call tells the Operating System to actually write data on disk
    # instead of waiting for more data in the output buffer. Some OS will really flush
    # data on disk, some other OS will just try to do it ASAP.
    #
    # Redis supports three different modes:
    #
    # no: don't fsync, just let the OS flush the data when it wants. Faster.
    # always: fsync after every write to the append only log. Slow, Safest.
    # everysec: fsync only one time every second. Compromise.
    #
    # The default is "everysec", as that's usually the right compromise between
    # speed and data safety. It's up to you to understand if you can relax this to
    # "no" that will let the operating system flush the output buffer when
    # it wants, for better performances (but if you can live with the idea of
    # some data loss consider the default persistence mode that's snapshotting),
    # or on the contrary, use "always" that's very slow but a bit safer than
    # everysec.
    #
    # More details please check the following article:
    # http://antirez.com/post/redis-persistence-demystified.html
    #
    # If unsure, use "everysec".

# The fsync() call tells the operating system to actually write data to disk
# instead of waiting for more data in the output buffer. Some operating
# systems really flush the data to disk right away; others just try to do it
# as soon as possible.
#
# Redis supports three different modes:
#
# no: don't fsync; let the OS flush the data when it wants. Fastest.
# always: fsync after every write to the append only log. Slow, but safest.
# everysec: fsync once per second. A compromise.
#
# The default "everysec" is usually the right compromise between speed and
# data safety. It is up to you to decide whether to relax this to "no" for
# better performance (if you can live with some data loss, also consider the
# default snapshotting persistence mode), or, on the contrary, to use
# "always", which is very slow but a bit safer than everysec.
#
# For more details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

    # appendfsync always
    appendfsync everysec
    # appendfsync no
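The three policies differ only in when fsync() is called after a write. A minimal sketch of the idea (illustrative Python; the class and file layout are made up, not Redis code):

```python
import os
import time

class AppendLog:
    """Toy append-only log illustrating the appendfsync policies."""
    def __init__(self, path, policy="everysec"):
        self.f = open(path, "ab")
        self.policy = policy          # "always" | "everysec" | "no"
        self.last_fsync = time.monotonic()

    def append(self, data: bytes):
        self.f.write(data)
        self.f.flush()                # hand the data to the OS
        if self.policy == "always":
            os.fsync(self.f.fileno())            # safest, slowest
        elif self.policy == "everysec":
            now = time.monotonic()
            if now - self.last_fsync >= 1.0:     # at most ~1s of writes at risk
                os.fsync(self.f.fileno())
                self.last_fsync = now
        # "no": never fsync here; the OS flushes whenever it wants
```

With "always" every append pays the fsync cost; with "everysec" at most about one second of appends sits only in the OS buffers.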


    # When the AOF fsync policy is set to always or everysec, and a background
    # saving process (a background save or AOF log background rewriting) is
    # performing a lot of I/O against the disk, in some Linux configurations
    # Redis may block too long on the fsync() call. Note that there is no fix for
    # this currently, as even performing fsync in a different thread will block
    # our synchronous write(2) call.
    #
    # In order to mitigate this problem it's possible to use the following option
    # that will prevent fsync() from being called in the main process while a
    # BGSAVE or BGREWRITEAOF is in progress.
    #
    # This means that while another child is saving, the durability of Redis is
    # the same as "appendfsync none". In practical terms, this means that it is
    # possible to lose up to 30 seconds of log in the worst scenario (with the
    # default Linux settings).
    #
    # If you have latency problems turn this to "yes". Otherwise leave it as
    # "no" that is the safest pick from the point of view of durability.
    #
# When the AOF fsync policy is set to "always" or "everysec" and a background
# saving process (a background save or AOF log rewrite) is performing a lot
# of disk I/O, on some Linux configurations Redis may block for a long time
# on the fsync() call. Note that there is currently no fix for this: even an
# fsync() in a different thread will block our synchronous write(2) call.
#
# To mitigate this problem, the following option prevents the main process
# from calling fsync() while a BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while a child process is saving, the durability of Redis is
# the same as "appendfsync no". In practice this means it is possible to lose
# up to 30 seconds of log in the worst case (with the default Linux
# settings).
#
# If you have latency problems set this to "yes"; otherwise leave it as "no",
# which is the safest choice from the point of view of durability.
#

    no-appendfsync-on-rewrite no

    # Automatic rewrite of the append only file.
    # Redis is able to automatically rewrite the log file implicitly calling
    # BGREWRITEAOF when the AOF log size grows by the specified percentage.
    #
    # This is how it works: Redis remembers the size of the AOF file after the
    # latest rewrite (if no rewrite has happened since the restart, the size of
    # the AOF at startup is used).
    #
    # This base size is compared to the current size. If the current size is
    # bigger than the specified percentage, the rewrite is triggered. Also
    # you need to specify a minimal size for the AOF file to be rewritten, this
    # is useful to avoid rewriting the AOF file even if the percentage increase
    # is reached but it is still pretty small.
    #
    # Specify a percentage of zero in order to disable the automatic AOF
    # rewrite feature.
    #
# Automatic rewrite of the append only file.
# Redis can automatically rewrite the AOF log file (by implicitly calling
# BGREWRITEAOF) when its size grows by the specified percentage.
#
# How it works: Redis remembers the size of the AOF file after the last
# rewrite (if no rewrite has happened since restart, the size of the AOF at
# startup is used).
#
# This base size is compared to the current size. If the current size exceeds
# the specified percentage, a rewrite is triggered. You also need to specify
# a minimal size for the AOF file to be rewritten; this avoids rewriting the
# file when the percentage increase is reached but the file is still small.
#
# Specify a percentage of zero to disable the automatic AOF rewrite feature.
#

    auto-aof-rewrite-percentage 100
    auto-aof-rewrite-min-size 64mb
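The trigger condition described above can be expressed directly (a sketch; the 100 / 64mb defaults mirror the two directives):

```python
def should_rewrite_aof(current_size, base_size,
                       percentage=100, min_size=64 * 1024 * 1024):
    """True when the AOF has grown by at least `percentage` percent over the
    size recorded after the last rewrite, and is at least `min_size` bytes."""
    if percentage == 0:          # 0 disables automatic rewrites
        return False
    if current_size < min_size:  # still too small to bother rewriting
        return False
    growth = (current_size - base_size) * 100 / base_size
    return growth >= percentage

MB = 1024 * 1024
# 128 MB file whose base was 60 MB after the last rewrite: >100% growth
assert should_rewrite_aof(128 * MB, 60 * MB)
# 50 MB file: below the 64 MB minimum, so no rewrite yet
assert not should_rewrite_aof(50 * MB, 20 * MB)
```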


    # An AOF file may be found to be truncated at the end during the Redis
    # startup process, when the AOF data gets loaded back into memory.
    # This may happen when the system where Redis is running
    # crashes, especially when an ext4 filesystem is mounted without the
    # data=ordered option (however this can't happen when Redis itself
    # crashes or aborts but the operating system still works correctly).
    #
    # Redis can either exit with an error when this happens, or load as much
    # data as possible (the default now) and start if the AOF file is found
    # to be truncated at the end. The following option controls this behavior.
    #
    # If aof-load-truncated is set to yes, a truncated AOF file is loaded and
    # the Redis server starts emitting a log to inform the user of the event.
    # Otherwise if the option is set to no, the server aborts with an error
    # and refuses to start. When the option is set to no, the user requires
    # to fix the AOF file using the "redis-check-aof" utility before to restart
    # the server.
    #
    # Note that if the AOF file will be found to be corrupted in the middle
    # the server will still exit with an error. This option only applies when
    # Redis will try to read more data from the AOF file but not enough bytes
    # will be found.
# An AOF file may be found truncated at the end when Redis loads it back into
# memory at startup. This can happen when the system Redis runs on crashes
# (especially when an ext4 filesystem is mounted without the data=ordered
# option); it cannot happen when Redis itself crashes or aborts while the
# operating system keeps working correctly.
#
# When this happens, Redis can either exit with an error, or load as much
# data as possible (now the default) and start. The following option controls
# this behavior.
#
# If aof-load-truncated is set to yes, the truncated AOF file is loaded and
# the server emits a log to inform the user of the event. If set to no, the
# server aborts with an error and refuses to start; the user must then fix
# the AOF file with the "redis-check-aof" utility before restarting the
# server.
#
# Note that if the AOF file is found corrupted in the middle, the server will
# still exit with an error. This option only applies when Redis tries to read
# more data from the AOF file but not enough bytes are found.
#
    aof-load-truncated yes

    ################################ LUA SCRIPTING ###############################

    # Max execution time of a Lua script in milliseconds.
    #
    # If the maximum execution time is reached Redis will log that a script is
    # still in execution after the maximum allowed time and will start to
    # reply to queries with an error.
    #
    # When a long running script exceeds the maximum execution time only the
    # SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
    # used to stop a script that did not yet called write commands. The second
    # is the only way to shut down the server in the case a write command was
    # already issued by the script but the user doesn't want to wait for the natural
    # termination of the script.
    #
    # Set it to 0 or a negative value for unlimited execution without warnings.
    #
# Max execution time of a Lua script, in milliseconds.
#
# If the maximum execution time is reached, Redis logs that a script is still
# running after the maximum allowed time and starts replying to queries with
# an error.
#
# When a long-running script exceeds the maximum execution time, only the
# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be
# used to stop a script that has not yet called any write command. The second
# is the only way to shut down the server when the script has already issued
# write commands but the user does not want to wait for it to terminate
# naturally.
#
# Set it to 0 or a negative value for unlimited execution without warnings.
#
    lua-time-limit 5000

    ################################## SLOW LOG ###################################

    # The Redis Slow Log is a system to log queries that exceeded a specified
    # execution time. The execution time does not include the I/O operations
    # like talking with the client, sending the reply and so forth,
    # but just the time needed to actually execute the command (this is the only
    # stage of command execution where the thread is blocked and can not serve
    # other requests in the meantime).
    #
    # You can configure the slow log with two parameters: one tells Redis
    # what is the execution time, in microseconds, to exceed in order for the
    # command to get logged, and the other parameter is the length of the
    # slow log. When a new command is logged the oldest one is removed from the
    # queue of logged commands.

    # The following time is expressed in microseconds, so 1000000 is equivalent
    # to one second. Note that a negative number disables the slow log, while
    # a value of zero forces the logging of every command.
    #
# The Redis slow log records queries that exceed a specified execution time.
# The execution time does not include I/O operations such as talking with the
# client or sending the reply, but just the time needed to actually execute
# the command (the only stage during which the thread is blocked and cannot
# serve other requests).
#
# You can configure the slow log with two parameters: one tells Redis the
# execution time, in microseconds, above which a command gets logged; the
# other is the length of the slow log. When a new command is logged, the
# oldest one is removed from the queue of logged commands.
#
# The time below is expressed in microseconds, so 1000000 is one second. Note
# that a negative number disables the slow log, while zero forces logging of
# every command.
#
    slowlog-log-slower-than 10000

    # There is no limit to this length. Just be aware that it will consume memory.
    # You can reclaim memory used by the slow log with SLOWLOG RESET.
    #
# There is no limit to this length, but be aware that it consumes memory. You
# can reclaim the memory used by the slow log with SLOWLOG RESET.
#
    slowlog-max-len 128
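The two parameters work together like a threshold plus a bounded queue. A sketch of that behavior (illustrative Python, not the server's implementation):

```python
from collections import deque

class SlowLog:
    """Toy slow log: keep commands that took longer than `slower_than_us`
    microseconds, retaining at most `max_len` entries (oldest dropped)."""
    def __init__(self, slower_than_us=10000, max_len=128):
        self.slower_than_us = slower_than_us
        self.entries = deque(maxlen=max_len)   # bounded queue

    def record(self, command, duration_us):
        # A negative threshold disables logging entirely; zero logs
        # everything with a positive duration.
        if self.slower_than_us >= 0 and duration_us > self.slower_than_us:
            self.entries.append((command, duration_us))

log = SlowLog(slower_than_us=10000, max_len=2)
log.record("GET foo", 50)            # fast, not logged
log.record("KEYS *", 250000)         # slow, logged
log.record("SORT big", 30000)        # slow, logged
log.record("LRANGE l 0 -1", 12000)   # slow; the oldest entry falls off
assert [c for c, _ in log.entries] == ["SORT big", "LRANGE l 0 -1"]
```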

    ################################ LATENCY MONITOR ##############################

    # The Redis latency monitoring subsystem samples different operations
    # at runtime in order to collect data related to possible sources of
    # latency of a Redis instance.
    #
    # Via the LATENCY command this information is available to the user that can
    # print graphs and obtain reports.
    #
    # The system only logs operations that were performed in a time equal or
    # greater than the amount of milliseconds specified via the
    # latency-monitor-threshold configuration directive. When its value is set
    # to zero, the latency monitor is turned off.
    #
    # By default latency monitoring is disabled since it is mostly not needed
    # if you don't have latency issues, and collecting data has a performance
    # impact, that while very small, can be measured under big load. Latency
# monitoring can easily be enabled at runtime using the command
    # "CONFIG SET latency-monitor-threshold <milliseconds>" if needed.
    #
# The Redis latency monitoring subsystem samples different operations at
# runtime in order to collect data about possible sources of latency.
#
# Via the LATENCY command this information is available to the user, who can
# print graphs and obtain reports.
#
# The system only logs operations that take a time equal to or greater than
# the number of milliseconds specified via the latency-monitor-threshold
# directive. When it is set to zero, the latency monitor is turned off.
#
# By default latency monitoring is disabled, since it is mostly not needed if
# you have no latency issues, and collecting data has a performance impact
# that, while very small, can be measured under big load. If needed, latency
# monitoring can easily be enabled at runtime with
# "CONFIG SET latency-monitor-threshold <milliseconds>".
    latency-monitor-threshold 0

    ############################# Event notification ##############################

    # Redis can notify Pub/Sub clients about events happening in the key space.
    # This feature is documented at http://redis.io/topics/notifications
    #
    # For instance if keyspace events notification is enabled, and a client
    # performs a DEL operation on key "foo" stored in the Database 0, two
    # messages will be published via Pub/Sub:
    #
    # PUBLISH __keyspace@0__:foo del
    # PUBLISH __keyevent@0__:del foo
    #
    # It is possible to select the events that Redis will notify among a set
    # of classes. Every class is identified by a single character:
    #
    # K Keyspace events, published with __keyspace@<db>__ prefix.
    # E Keyevent events, published with __keyevent@<db>__ prefix.
    # g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ...
    # $ String commands
    # l List commands
    # s Set commands
    # h Hash commands
    # z Sorted set commands
    # x Expired events (events generated every time a key expires)
    # e Evicted events (events generated when a key is evicted for maxmemory)
    # A Alias for g$lshzxe, so that the "AKE" string means all the events.
    #
    # The "notify-keyspace-events" takes as argument a string that is composed
    # of zero or multiple characters. The empty string means that notifications
    # are disabled.
    #
    # Example: to enable list and generic events, from the point of view of the
    # event name, use:
    #
    # notify-keyspace-events Elg
    #
    # Example 2: to get the stream of the expired keys subscribing to channel
    # name __keyevent@0__:expired use:
    #
    # notify-keyspace-events Ex
    #
    # By default all notifications are disabled because most users don't need
    # this feature and the feature has some overhead. Note that if you don't
    # specify at least one of K or E, no events will be delivered.
    #
# Redis can notify Pub/Sub clients about events happening in the key space.
# This feature is documented at http://redis.io/topics/keyspace-events
#
# For instance, if keyspace event notification is enabled and a client
# performs a DEL operation on key "foo" stored in database 0, two messages
# will be published via Pub/Sub:
#
# PUBLISH __keyspace@0__:foo del
# PUBLISH __keyevent@0__:del foo
#
# You can select the classes of events Redis will notify. Each class is
# identified by a single character:
#
# K Keyspace events, published with the __keyspace@<db>__ prefix
# E Keyevent events, published with the __keyevent@<db>__ prefix
# g Generic (non type-specific) commands such as DEL, EXPIRE, RENAME, ...
# $ String commands
# l List commands
# s Set commands
# h Hash commands
# z Sorted set commands
# x Expired events (generated every time a key expires)
# e Evicted events (generated when a key is evicted for maxmemory)
# A Alias for g$lshzxe, so the string "AKE" means all events
#
# "notify-keyspace-events" takes as argument a string composed of zero or
# more characters. The empty string means notifications are disabled.
#
# Example: to enable list and generic events:
# notify-keyspace-events Elg
#
# Example 2: to get the stream of expired keys by subscribing to the channel
# __keyevent@0__:expired, use:
# notify-keyspace-events Ex
#
# By default all notifications are disabled because most users don't need
# this feature and it has some overhead. Note that if you don't specify at
# least one of K or E, no events will be delivered.
#
    notify-keyspace-events ""

    ############################### ADVANCED CONFIG ###############################

    # Hashes are encoded using a memory efficient data structure when they have a
    # small number of entries, and the biggest entry does not exceed a given
    # threshold. These thresholds can be configured using the following directives.
# When a hash has only a few entries and the biggest entry does not exceed a
# given threshold, it is encoded with a memory-saving data structure. The
# limits can be set with the directives below.
    hash-max-ziplist-entries 512
    hash-max-ziplist-value 64


    # Similarly to hashes, small lists are also encoded in a special way in order
    # to save a lot of space. The special representation is only used when
    # you are under the following limits:
# Similarly to hashes, lists with few elements can be encoded in a special
# way to save a lot of space. This special representation is only used when
# the list is within the following limits:
    list-max-ziplist-entries 512
    list-max-ziplist-value 64


    # Sets have a special encoding in just one case: when a set is composed
    # of just strings that happen to be integers in radix 10 in the range
    # of 64 bit signed integers.
    # The following configuration setting sets the limit in the size of the
    # set in order to use this special memory saving encoding.
# Sets have a special encoding in just one case: when the set consists only
# of strings that are base-10 64-bit signed integers. The following setting
# limits the set size for which this memory-saving encoding is used.
    set-max-intset-entries 512

    # Similarly to hashes and lists, sorted sets are also specially encoded in
    # order to save a lot of space. This encoding is only used when the length and
    # elements of a sorted set are below the following limits:
# Similarly to hashes and lists, sorted sets can also use a special encoding
# to save a lot of space. This encoding is only used when the length and the
# elements of the sorted set are below the following limits:
    zset-max-ziplist-entries 128
    zset-max-ziplist-value 64
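All four limits above follow the same rule: stay compact while both the entry count and every element's size are under the thresholds. A sketch of that decision (illustrative Python, not the encoding logic in the server):

```python
def uses_compact_encoding(elements, max_entries, max_value_len):
    """True if a hash/list/zset would stay in its memory-saving (ziplist)
    encoding: few enough entries, and every element short enough."""
    return (len(elements) <= max_entries and
            all(len(str(v)) <= max_value_len for v in elements))

# A 3-element value with short strings fits the 512 / 64 defaults:
assert uses_compact_encoding(["red", "green", "blue"], 512, 64)
# A single oversized element forces conversion to the general encoding:
assert not uses_compact_encoding(["x" * 100], 512, 64)
```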


    # HyperLogLog sparse representation bytes limit. The limit includes the
    # 16 bytes header. When an HyperLogLog using the sparse representation crosses
    # this limit, it is converted into the dense representation.
    #
    # A value greater than 16000 is totally useless, since at that point the
    # dense representation is more memory efficient.
    #
    # The suggested value is ~ 3000 in order to have the benefits of
    # the space efficient encoding without slowing down too much PFADD,
    # which is O(N) with the sparse encoding. The value can be raised to
    # ~ 10000 when CPU is not a concern, but space is, and the data set is
    # composed of many HyperLogLogs with cardinality in the 0 - 15000 range.
    #
# HyperLogLog sparse representation byte limit, including the 16-byte header.
# When a HyperLogLog using the sparse representation crosses this limit, it
# is converted to the dense representation.
#
# A value greater than 16000 is useless, because at that point the dense
# representation is more memory efficient.
#
# The suggested value is about 3000, in order to get the benefits of the
# space-saving encoding without slowing down PFADD too much (which is O(N)
# with the sparse encoding). The value can be raised to about 10000 when CPU
# is not a concern but space is, and the data set is composed of many
# HyperLogLogs with cardinality in the 0 - 15000 range.

    hll-sparse-max-bytes 3000

    # Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in
    # order to help rehashing the main Redis hash table (the one mapping top-level
    # keys to values). The hash table implementation Redis uses (see dict.c)
    # performs a lazy rehashing: the more operation you run into a hash table
    # that is rehashing, the more rehashing "steps" are performed, so if the
    # server is idle the rehashing is never complete and some more memory is used
    # by the hash table.
    #
    # The default is to use this millisecond 10 times every second in order to
    # actively rehash the main dictionaries, freeing memory when possible.
    #
    # If unsure:
    # use "activerehashing no" if you have hard latency requirements and it is
    # not a good thing in your environment that Redis can reply from time to time
    # to queries with 2 milliseconds delay.
    #
    # use "activerehashing yes" if you don't have such hard requirements but
    # want to free memory asap when possible.
    #
#
# Active rehashing uses 1 millisecond out of every 100 milliseconds of CPU
# time to rehash the main Redis hash table (the one mapping top-level keys to
# values). The hash table implementation Redis uses (see dict.c) performs
# lazy rehashing: the more operations you run on a hash table that is
# rehashing, the more rehashing "steps" are performed. So if the server is
# idle, the rehashing never completes and the hash table keeps using some
# extra memory.
#
# The default is to use this millisecond 10 times per second to actively
# rehash the main dictionaries, freeing memory when possible.
#
# If you have hard latency requirements and occasional replies with a 2 ms
# delay are not acceptable in your environment, use "activerehashing no". If
# you don't have such hard requirements but want to free memory as soon as
# possible, use "activerehashing yes".
    activerehashing yes
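Lazy plus active rehashing can be sketched with a toy model (illustrative Python only, nothing like dict.c): migration happens one entry per "step", where a step runs on every access and, when active rehashing is on, also periodically while idle.

```python
class RehashingDict:
    """Toy incremental rehash: migrate one entry per step instead of moving
    everything at once (the idea behind lazy/active rehashing)."""
    def __init__(self, old_items):
        self.old = dict(old_items)   # table being drained
        self.new = {}                # table being filled

    def step(self):
        """One rehash step: move a single entry. Called on every access
        (lazy), and by a periodic timer when activerehashing is on."""
        if self.old:
            k, v = self.old.popitem()
            self.new[k] = v

    def get(self, key):
        self.step()                  # each operation helps the migration
        return self.new.get(key, self.old.get(key))

d = RehashingDict({"a": 1, "b": 2, "c": 3})
d.get("a"); d.get("b"); d.get("c")
assert not d.old                     # three operations finished the migration
```

With no accesses and no active steps, `old` is never drained, which is exactly why an idle server without active rehashing keeps the extra memory around.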

    # The client output buffer limits can be used to force disconnection of clients
    # that are not reading data from the server fast enough for some reason (a
    # common reason is that a Pub/Sub client can't consume messages as fast as the
    # publisher can produce them).
    #
    # The limit can be set differently for the three different classes of clients:
    #
    # normal -> normal clients including MONITOR clients
    # slave -> slave clients
    # pubsub -> clients subscribed to at least one pubsub channel or pattern
    #
    # The syntax of every client-output-buffer-limit directive is the following:
    #
    # client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
    #
    # A client is immediately disconnected once the hard limit is reached, or if
    # the soft limit is reached and remains reached for the specified number of
    # seconds (continuously).
    # So for instance if the hard limit is 32 megabytes and the soft limit is
    # 16 megabytes / 10 seconds, the client will get disconnected immediately
    # if the size of the output buffers reach 32 megabytes, but will also get
    # disconnected if the client reaches 16 megabytes and continuously overcomes
    # the limit for 10 seconds.
    #
    # By default normal clients are not limited because they don't receive data
    # without asking (in a push way), but just after a request, so only
    # asynchronous clients may create a scenario where data is requested faster
    # than it can read.
    #
    # Instead there is a default limit for pubsub and slave clients, since
    # subscribers and slaves receive data in a push fashion.
    #
    # Both the hard or the soft limit can be disabled by setting them to zero.
    #
#
# The client output buffer limits can be used to force the disconnection of
# clients that, for some reason, are not reading data from the server fast
# enough (a common reason is that a Pub/Sub client can't consume messages as
# fast as the publisher produces them).
#
# The limit can be set differently for three classes of clients:
#
# normal -> normal clients, including MONITOR clients
# slave -> slave clients
# pubsub -> clients subscribed to at least one pubsub channel or pattern
#
# The syntax of every client-output-buffer-limit directive is:
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
#
# A client is disconnected immediately once the hard limit is reached, or if
# the soft limit is reached and stays reached for the specified number of
# seconds (continuously). For example, with a hard limit of 32 megabytes and
# a soft limit of 16 megabytes / 10 seconds, the client is disconnected
# immediately if the output buffer reaches 32 megabytes, but also if it
# reaches 16 megabytes and continuously stays over that limit for 10 seconds.
#
# By default normal clients are not limited, because they only receive data
# after a request (not in a push fashion), so only asynchronous clients may
# request data faster than they can read it.
#
# pubsub and slave clients have a default limit instead, since subscribers
# and slaves receive data in a push fashion.
#
# Set both the hard and the soft limit to 0 to disable the feature.
#
    client-output-buffer-limit normal 0 0 0
    client-output-buffer-limit slave 256mb 64mb 60
    client-output-buffer-limit pubsub 32mb 8mb 60
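The hard/soft limit logic can be sketched as follows (illustrative Python; the numbers in the example mirror the slave defaults above):

```python
import time

class OutputBufferGuard:
    """Toy version of the hard/soft output-buffer limit check."""
    def __init__(self, hard, soft, soft_seconds, clock=time.monotonic):
        self.hard, self.soft, self.soft_seconds = hard, soft, soft_seconds
        self.clock = clock
        self.soft_since = None       # when the buffer first exceeded soft

    def should_disconnect(self, buffer_bytes):
        if self.hard and buffer_bytes >= self.hard:
            return True              # hard limit: disconnect immediately
        if self.soft and buffer_bytes >= self.soft:
            now = self.clock()
            if self.soft_since is None:
                self.soft_since = now   # start the soft-limit timer
            return now - self.soft_since >= self.soft_seconds
        self.soft_since = None       # dropped below the soft limit: reset
        return False

MB = 1024 * 1024
t = [0.0]                            # fake clock for the example
guard = OutputBufferGuard(256 * MB, 64 * MB, 60, clock=lambda: t[0])
assert guard.should_disconnect(300 * MB)       # over the hard limit
assert not guard.should_disconnect(100 * MB)   # over soft; timer starts
t[0] = 61.0
assert guard.should_disconnect(100 * MB)       # over soft for 60+ seconds
```

Passing 0 for both limits makes every check return False, matching the "disable by setting them to zero" behavior.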


    # Redis calls an internal function to perform many background tasks, like
    # closing connections of clients in timeout, purging expired keys that are
    # never requested, and so forth.
    #
    # Not all tasks are performed with the same frequency, but Redis checks for
    # tasks to perform according to the specified "hz" value.
    #
    # By default "hz" is set to 10. Raising the value will use more CPU when
    # Redis is idle, but at the same time will make Redis more responsive when
    # there are many keys expiring at the same time, and timeouts may be
    # handled with more precision.
    #
    # The range is between 1 and 500, however a value over 100 is usually not
    # a good idea. Most users should use the default of 10 and raise this up to
    # 100 only in environments where very low latency is required.
    #
# Redis calls an internal function to perform many background tasks, such as
# closing timed-out client connections and purging expired keys that are
# never requested.
#
# Not all tasks are performed with the same frequency, but Redis checks for
# tasks to perform according to the specified "hz" value.
#
# By default "hz" is set to 10. Raising the value uses more CPU while Redis
# is idle, but at the same time makes Redis more responsive when many keys
# expire at the same time, and allows timeouts to be handled with more
# precision.
#
# The range is between 1 and 500, but a value over 100 is usually not a good
# idea. Most users should use the default of 10 and raise it up to 100 only
# in environments where very low latency is required.
#
    hz 10

    # When a child rewrites the AOF file, if the following option is enabled
    # the file will be fsync-ed every 32 MB of data generated. This is useful
    # in order to commit the file to the disk more incrementally and avoid
    # big latency spikes.
    #
# When a child process rewrites the AOF file, if the following option is
# enabled the file will be fsync-ed every 32 MB of data generated. This is
# useful to commit the file to disk more incrementally and avoid big latency
# spikes.
#
    aof-rewrite-incremental-fsync yes

     

     

  • Original article: https://www.cnblogs.com/linuxbug/p/5125337.html