  • PostgreSQL Load Balancing Analysis: CPU

    Before we can even begin to decide on a processor count, we need a baseline. With a working PostgreSQL server to base our numbers on, we can simply use the number of existing users during a busy period. Without that, we need to guess. This guess can actually be pretty close, depending on how the application was targeted. If the intent is to service 1000 users per second, we should start there, since that is the same assumption the company is using to buy application and web servers.

    After that, we apply a formula that PostgreSQL administrators have relied on for a very long time: the ideal number of active connections is equal to twice the number of available processor cores, plus the number of disk spindles. Amusingly, disk spindles increase the ideal connection count because they contribute seek times, which force the processor to wait for information. While a processor is waiting on data for one connection, the operating system may lend it to another connection until that data is retrieved.
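    As a quick illustration of that formula (the 8 cores and 4 spindles below are hypothetical numbers, not figures from this analysis), a shell can do the arithmetic directly:

    $ cores=8; spindles=4                 # hypothetical box: 8 physical cores, 4 spindles
    $ echo $(( 2 * cores + spindles ))    # prints 20: the ideal active connection count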

    The processor count is only part of the story. Intel CPUs have a few added elements we need to consider.

    Hyperthreading

    Newer generations of Intel processors often provide a feature called hyperthreading, which splits each physical processor core into two virtual cores. Historically, this was not well received, as benchmarks often illustrated performance degradation when the feature was enabled.

    Since the introduction of Nehalem-based architecture in 2008, this is no longer the case. While doubling the processor count does not result in a doubling of throughput, we've run several tests that show up to 40 percent improvement over using physical cores alone. This may not be universal, but it does apply to PostgreSQL performance tests. What this means is that the commonly accepted formula for determining ideal connection count requires modification.

    Current advice is to multiply only the physical core count by two; the hyperthreaded virtual cores are instead accounted for with a separate factor. Assuming a 40 percent increase from enabling hyperthreading, the new formula becomes: 2 * 1.4 * CPUs + spindles. With that in mind, if we wanted to serve 1000 connections per second and used SSDs to host our data, our minimum virtual core count would be 1000 / 50 / 1.4, or about 14. Half of that is seven, but no CPU has seven physical cores, so we would need at least eight. If we used the physical cores alone for our calculation, we would need 10.
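    The arithmetic above can be checked in a shell with bc. One assumption is made explicit here: the divisor of 50 is read as the number of requests a single connection is expected to serve per second, and the spindle term is zero because the data lives on SSDs.

    $ echo "1000 / 50 / 1.4" | bc -l      # ~14.3 virtual cores, rounded to 14
    $ echo "1000 / 50 / 1.4 / 2" | bc -l  # ~7.1 physical cores, so buy at least 8
    $ echo "1000 / 50 / 2" | bc -l        # 10 physical cores without hyperthreading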

    Clearing memory and cache on Linux with /proc/sys/vm/drop_caches

    Frequent file access can drive the system's cache usage up dramatically.

    The cache can be freed by writing to the drop_caches entry in the proc filesystem:
    $ echo 3 > /proc/sys/vm/drop_caches

    The kernel documentation for drop_caches reads as follows:
    Writing to this will cause the kernel to drop clean caches, dentries and inodes from memory, causing that memory to become free.
    To free pagecache:
    * echo 1 > /proc/sys/vm/drop_caches
    To free dentries and inodes:
    * echo 2 > /proc/sys/vm/drop_caches
    To free pagecache, dentries and inodes:
    * echo 3 > /proc/sys/vm/drop_caches
    As this is a non-destructive operation, and dirty objects are not freeable, the user should run "sync" first in order to make sure all cached objects are freed.
    This tunable was added in 2.6.16.
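
    Putting the documentation's advice together, a minimal sequence (run as root) that drops as much cache as possible looks like this:

    $ sync                                # flush dirty pages so they become clean, droppable cache
    $ echo 3 > /proc/sys/vm/drop_caches   # free pagecache, dentries, and inodes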

    Edit /etc/sysctl.conf and add the following options; memory usage will then no longer grow continuously:
    vm.dirty_ratio = 1
    vm.dirty_background_ratio = 1
    vm.dirty_writeback_centisecs = 2
    vm.dirty_expire_centisecs = 3
    vm.drop_caches = 3
    vm.swappiness = 100
    vm.vfs_cache_pressure = 163
    vm.overcommit_memory = 2
    vm.lowmem_reserve_ratio = 32 32 8
    kern.maxvnodes = 3
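
    Once the options above are in /etc/sysctl.conf, they can be applied without a reboot as sketched below. Two caveats: vm.drop_caches behaves as a one-shot trigger rather than a persistent tunable, and kern.maxvnodes is a BSD-style key that a Linux sysctl will report as unknown.

    $ sysctl -p                 # reload /etc/sysctl.conf as root; unknown keys produce errors
    $ sysctl vm.dirty_ratio     # spot-check that an individual value took effect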

     

  • Original article: https://www.cnblogs.com/songyuejie/p/5212217.html