  • Throughput limit on data sender process

    https://www.zabbix.com/forum/showthread.php?t=51258

    We have 10 proxies pointing to a single central server (v2.4.5). As far as I can tell, the data sender process is what sends data from the proxy to the server's trapper processes, which then write it to the server's database.

    There seems to be some limit past which the data sender can no longer keep up with the vps rate, and data collected by that proxy starts to go missing or lag behind on graphs.

    Any suggestions as to how best to deal with such an issue? Most of the proxies are very quiet CPU-wise, so I was hoping to have each one take on more polling and thereby need fewer proxies. At one location we are running 3 proxies just to deal with this issue. Given that our overall vps is 2434, I'm a bit concerned about how well this solution will scale.
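    For scale, a quick back-of-envelope split of the numbers quoted above (2434 vps across 10 proxies) — a small shell sketch, not from the thread itself:

```shell
# Rough average load per proxy if polling were spread evenly.
# Figures come from the post above; integer division is close enough here.
total_vps=2434
proxies=10
echo $(( total_vps / proxies ))   # prints 243 (average vps per proxy)
```

    So consolidating onto fewer proxies means each one would carry several hundred vps, which is exactly the range where the data sender limit discussed below starts to matter.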

    grep -i DataSender /etc/zabbix/zabbix_proxy.conf 
    ### Option: DataSenderFrequency
    # DataSenderFrequency=1

    I kept bumping up StartDBSyncers, thinking that was the process doing this syncing. When I first read about DataSenderFrequency I always saw it as a pointless option: why would I want to introduce a delay to the time it takes to get data from a proxy to the server?

    In other news we got the queues down, it turns out that there was high load on the server's mysql instance.

    Active proxies have a hard-coded data sender limit of 1000 values per connection. However, with 400+ nvps on a proxy, that limit is no longer enough to keep up.
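    To see why the 1000-value batch limit matters, here is a rough ceiling estimate; the round-trip time is an assumption for illustration, not a figure from the thread:

```shell
# Assuming the data sender ships one batch of up to ZBX_MAX_HRECORD values
# per connection, and each connect/send/commit round trip takes ~200 ms,
# the sustainable ceiling is roughly batch_size / round_trip_time.
batch=1000     # hard-coded values per connection (ZBX_MAX_HRECORD)
rtt_ms=200     # assumed round-trip + server-side processing time
echo $(( batch * 1000 / rtt_ms ))   # prints 5000 (theoretical nvps ceiling)
```

    With a slower effective round trip (e.g. a loaded MySQL taking 2 seconds to accept a batch), the same formula gives only 500 nvps, which matches the kind of lag described above.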

    Each DB syncer can likewise process up to ~1000 nvps, so increasing the number of syncers beyond the default of 4 can even make things worse unless you run at over 4k nvps.

    Network and database performance are usually the most common problems.

    If a data sender on a proxy is busy, check ALL internal process and cache graphs on the Zabbix server; most likely you will see some issues there. And always log slow queries taking longer than 3000 milliseconds.
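    Slow-query logging is controlled by the LogSlowQueries option in zabbix_server.conf (value in milliseconds, 0 disables it); a fragment matching the 3000 ms suggestion above:

```
### Option: LogSlowQueries
#	How long a database query may take before being logged (in milliseconds).
#	0 disables slow query logging.
LogSlowQueries=3000
```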

    This limit can be increased by changing ZBX_MAX_HRECORD in include/proxy.h in the source code. FYI: Alexey told me that the Zabbix dev team is thinking about adding this parameter as a proxy configuration variable (they are aware that a fixed #define value may be a little problematic).
    I have been using 5000 as the limit for a few weeks (so far this change has made it possible to fix a few nasty issues).
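    A sketch of the change described above, using a stand-in file so the edit can be dry-run safely; in a real Zabbix 2.4 source tree you would edit include/proxy.h and rebuild (the configure flags below are illustrative, not prescribed by the thread):

```shell
# Stand-in for include/proxy.h so the sed command can be tested in isolation.
printf '#define ZBX_MAX_HRECORD 1000\n' > proxy.h
# Bump the hard-coded data sender batch size from 1000 to 5000.
sed -i 's/\(#define ZBX_MAX_HRECORD[[:space:]]*\)1000/\15000/' proxy.h
grep ZBX_MAX_HRECORD proxy.h   # prints: #define ZBX_MAX_HRECORD 5000
# In the real source tree, rebuild the proxy afterwards, e.g.:
#   ./configure --enable-proxy --with-mysql && make && make install
```

    Note this is a source patch, so the change is lost on every upgrade until the value becomes a proper configuration variable.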

    Yes, this ZBX_MAX_HRECORD value can indeed be increased, and it is recommended to do so on large setups. However, that will not be the case for our friend with a few hundred vps; he is not likely to see any difference from it. Having this setting configurable in the config file would be great, though.

    But the last thing, network and performance tuning, is overlooked way too many times. People might forget to check it, so it can be useful to remind them.

  • Original article: https://www.cnblogs.com/xianguang/p/7054752.html