  • PostgreSQL high availability, part 1: managing three PostgreSQL nodes with Patroni

    Environment: CentOS Linux release 7.6.1810 (Core), kernel 3.10.0-957.10.1.el7.x86_64

    node1:192.168.216.130

    node2:192.168.216.132

    node3:192.168.216.134

    PostgreSQL tuning reference: https://github.com/digoal/blog/blob/master/201611/20161121_01.md?spm=a2c4e.10696291.0.0.660a19a4sIk1Ok&file=20161121_01.md

    I. Install PostgreSQL

    yum install https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-7-x86_64/pgdg-centos11-11-2.noarch.rpm
    yum install postgresql11
    yum install postgresql11-server
    yum install postgresql11-libs
    yum install postgresql11-contrib 
    yum install postgresql11-devel
    See: https://www.jianshu.com/p/b4a759c2208f
    

     After installation, run rpm -qa | grep postgres to check which packages were installed:

    postgresql11-libs-11.5-1PGDG.rhel7.x86_64
    postgresql10-libs-10.10-1PGDG.rhel7.x86_64
    postgresql11-11.5-1PGDG.rhel7.x86_64
    postgresql11-contrib-11.5-1PGDG.rhel7.x86_64
    postgresql11-server-11.5-1PGDG.rhel7.x86_64
    postgresql11-devel-11.5-1PGDG.rhel7.x86_64
    

     There is no need to run initdb after installation: Patroni performs the initialization itself. If the data directory has already been initialized and Patroni should not re-initialize it, point Patroni at the existing data and binary directories with the following settings in its configuration file, then set ownership and permissions with the shell commands below:

    data_dir: /var/lib/pgsql/11/data
    bin_dir: /usr/pgsql-11/bin
    config_dir: /var/lib/pgsql/11/data
    stats_temp_directory: /var/lib/pgsql_stats_tmp
    mkdir -p /var/lib/pgsql_stats_tmp
    chown -Rf postgres:postgres /var/lib/pgsql/11/data
    chmod -Rf 700 /var/lib/pgsql/11/data
    chown -Rf postgres:postgres /var/lib/pgsql_stats_tmp
    chmod -Rf 700 /var/lib/pgsql_stats_tmp

    II. Install Patroni. It is recommended to switch pip to a faster (e.g. local) mirror first; otherwise the installation may hit many timeouts.

    See: https://www.cnblogs.com/caidingyu/p/11566690.html

    yum install gcc
    yum install python-devel.x86_64
    curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
    python get-pip.py
    pip install psycopg2-binary
    pip install patroni[etcd,consul]

    III. Install the etcd service

    See: https://www.cnblogs.com/caidingyu/p/11408389.html
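The linked post walks through the etcd setup in detail. As a minimal sketch (the member names, data directory, and cluster token here are assumptions, not taken from that post), a static three-member cluster on these hosts could be configured on node1 with environment-style settings along these lines:

```ini
# /etc/etcd/etcd.conf on node1 (192.168.216.130)
ETCD_NAME="etcd1"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.216.130:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.216.130:2379,http://127.0.0.1:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.216.130:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.216.130:2379"
ETCD_INITIAL_CLUSTER="etcd1=http://192.168.216.130:2380,etcd2=http://192.168.216.132:2380,etcd3=http://192.168.216.134:2380"
ETCD_INITIAL_CLUSTER_TOKEN="patroni-etcd"
ETCD_INITIAL_CLUSTER_STATE="new"
```

node2 and node3 use the same file with their own ETCD_NAME and IP addresses; the client port 2379 is what the `etcd.hosts` list in the Patroni configuration below points at.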

    IV. Create the Patroni configuration files

    node1: Patroni configuration file:

    [root@localhost tmp]# cat /etc/patroni/patroni.yml 
    
    scope: postgres-cluster
    name: pgnode01
    namespace: /service/
    
    restapi:
      listen: 192.168.216.130:8008
      connect_address: 192.168.216.130:8008
    #  certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
    #  keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
    #  authentication:
    #    username: username
    #    password: password
    
    etcd:
      hosts: 192.168.216.130:2379,192.168.216.132:2379,192.168.216.134:2379
    
    bootstrap:
      # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
      # and all other cluster members will use it as a `global configuration`
      dcs:
        ttl: 30
        loop_wait: 10
        retry_timeout: 10
        maximum_lag_on_failover: 1048576
        master_start_timeout: 300
        synchronous_mode: false
        synchronous_mode_strict: false
        #standby_cluster:
          #host: 127.0.0.1
          #port: 1111
          #primary_slot_name: patroni 
        postgresql:
          use_pg_rewind: true
          use_slots: true
          parameters:
            max_connections: 100
            superuser_reserved_connections: 5
            max_locks_per_transaction: 64
            max_prepared_transactions: 0
            huge_pages: try
            shared_buffers: 512MB
            work_mem: 128MB
            maintenance_work_mem: 256MB
            effective_cache_size: 4GB
            checkpoint_timeout: 15min
            checkpoint_completion_target: 0.9
            min_wal_size: 2GB
            max_wal_size: 4GB
            wal_buffers: 32MB
            default_statistics_target: 1000
            seq_page_cost: 1
            random_page_cost: 4
            effective_io_concurrency: 2
            synchronous_commit: on
            autovacuum: on
            autovacuum_max_workers: 5
            autovacuum_vacuum_scale_factor: 0.01
            autovacuum_analyze_scale_factor: 0.02
            autovacuum_vacuum_cost_limit: 200
            autovacuum_vacuum_cost_delay: 20
            autovacuum_naptime: 1s
            max_files_per_process: 4096
            archive_mode: on
            archive_timeout: 1800s
            archive_command: cd .
            wal_level: replica
            wal_keep_segments: 130
            max_wal_senders: 10
            max_replication_slots: 10
            hot_standby: on
            wal_log_hints: on
            shared_preload_libraries: pg_stat_statements,auto_explain
            pg_stat_statements.max: 10000
            pg_stat_statements.track: all
            pg_stat_statements.save: off
            auto_explain.log_min_duration: 10s
            auto_explain.log_analyze: true
            auto_explain.log_buffers: true
            auto_explain.log_timing: false
            auto_explain.log_triggers: true
            auto_explain.log_verbose: true
            auto_explain.log_nested_statements: true
            track_io_timing: on
            log_lock_waits: on
            log_temp_files: 0
            track_activities: on
            track_counts: on
            track_functions: all
            log_checkpoints: on
            logging_collector: on
            log_truncate_on_rotation: on
            log_rotation_age: 1d
            log_rotation_size: 0
            log_line_prefix: '%t [%p-%l] %r %q%u@%d '
            log_filename: 'postgresql-%a.log'
            log_directory: /var/log/postgresql
     
     #      recovery_conf:
    #        restore_command: cp ../wal_archive/%f %p
     
      # some desired options for 'initdb'
      initdb:  # Note: It needs to be a list (some options need values, others are switches)
      - encoding: UTF8
      - locale: en_US.UTF-8
      - data-checksums
     
      pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
      - host replication replicator 0.0.0.0/0 md5
      - host all all 0.0.0.0/0 md5
    
     # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
    # post_init: /usr/local/bin/setup_cluster.sh
     
      # Some additional users which need to be created after initializing the new cluster
    #  users:
    #    admin:
    #      password: admin-pass
    #      options:
    #        - createrole
    #        - createdb
    
      
    postgresql:
      listen: 192.168.216.130,127.0.0.1:5432
      connect_address: 192.168.216.130:5432
      use_unix_socket: true
      data_dir: /var/lib/pgsql/11/data
      bin_dir: /usr/pgsql-11/bin
      config_dir: /var/lib/pgsql/11/data
      pgpass: /var/lib/pgsql/.pgpass
      authentication:
        replication:
          username: replicator
          password: replicator-pass
        superuser:
          username: postgres
          password: postgres-pass
    #    rewind:  # Has no effect on postgres 10 and lower
    #      username: rewind_user
    #      password: rewind_password
      parameters:
        unix_socket_directories: /var/run/postgresql
        stats_temp_directory: /var/lib/pgsql_stats_tmp
     
    #  callbacks:
    #    on_start:
    #    on_stop:
    #    on_restart:
    #    on_reload:
    #    on_role_change:
     
      create_replica_methods:
    #   - pgbackrest
    #   - wal_e
       - basebackup
    # pgbackrest:
    #   command: /usr/bin/pgbackrest --stanza=<Stanza_Name> --delta restore
    #   keep_data: True
    #   no_params: True
    #  wal_e
    #    command: patroni_wale_restore
    #    no_master: 1
    #    envdir: /etc/wal_e/envdir
    #    use_iam: 1
      basebackup:
        max-rate: '100M'
     
     #watchdog:
    #  mode: automatic # Allowed values: off, automatic, required
    #  device: /dev/watchdog
    #  safety_margin: 5
     
    tags:
        nofailover: false
        noloadbalance: false
        clonefrom: false
        nosync: false
    # specify a node to replicate from. This can be used to implement a cascading replication.
    #    replicatefrom: (node name)
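The memory parameters in this configuration (shared_buffers: 512MB, work_mem: 128MB, effective_cache_size: 4GB) are fixed starting points, not derived values. A common rule of thumb, from general PostgreSQL tuning guidance rather than from this post, sizes shared_buffers at roughly 25% of RAM and effective_cache_size at roughly 75%; a quick sketch:

```shell
# Rule-of-thumb sizing (assumption: the common ~25%-of-RAM guidance for
# shared_buffers and ~75% for effective_cache_size; tune per workload)
ram_gb=8
echo "shared_buffers = $(( ram_gb * 1024 / 4 ))MB"    # ~25% of RAM
echo "effective_cache_size = $(( ram_gb * 3 / 4 ))GB" # ~75% of RAM
```

Because these live under bootstrap.dcs, they are written to etcd once at cluster creation and afterwards should be changed with `patronictl edit-config` rather than by editing patroni.yml.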
    

    node2: Patroni configuration file:

    [root@localhost postgresql]# cat /etc/patroni/patroni.yml 
    
    scope: postgres-cluster
    name: pgnode02
    namespace: /service/
    
    restapi:
      listen: 192.168.216.132:8008
      connect_address: 192.168.216.132:8008
    #  certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
    #  keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
    #  authentication:
    #    username: username
    #    password: password
    
    etcd:
      hosts: 192.168.216.130:2379,192.168.216.132:2379,192.168.216.134:2379
    
    bootstrap:
      # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
      # and all other cluster members will use it as a `global configuration`
      dcs:
        ttl: 30
        loop_wait: 10
        retry_timeout: 10
        maximum_lag_on_failover: 1048576
        master_start_timeout: 300
        synchronous_mode: false
        synchronous_mode_strict: false
        #standby_cluster:
          #host: 127.0.0.1
          #port: 1111
          #primary_slot_name: patroni 
        postgresql:
          use_pg_rewind: true
          use_slots: true
          parameters:
            max_connections: 100
            superuser_reserved_connections: 5
            max_locks_per_transaction: 64
            max_prepared_transactions: 0
            huge_pages: try
            shared_buffers: 512MB
            work_mem: 128MB
            maintenance_work_mem: 256MB
            effective_cache_size: 4GB
            checkpoint_timeout: 15min
            checkpoint_completion_target: 0.9
            min_wal_size: 2GB
            max_wal_size: 4GB
            wal_buffers: 32MB
            default_statistics_target: 1000
            seq_page_cost: 1
            random_page_cost: 4
            effective_io_concurrency: 2
            synchronous_commit: on
            autovacuum: on
            autovacuum_max_workers: 5
            autovacuum_vacuum_scale_factor: 0.01
            autovacuum_analyze_scale_factor: 0.02
            autovacuum_vacuum_cost_limit: 200
            autovacuum_vacuum_cost_delay: 20
            autovacuum_naptime: 1s
            max_files_per_process: 4096
            archive_mode: on
            archive_timeout: 1800s
            archive_command: cd .
            wal_level: replica
            wal_keep_segments: 130
            max_wal_senders: 10
            max_replication_slots: 10
            hot_standby: on
            wal_log_hints: on
            shared_preload_libraries: pg_stat_statements,auto_explain
            pg_stat_statements.max: 10000
            pg_stat_statements.track: all
            pg_stat_statements.save: off
            auto_explain.log_min_duration: 10s
            auto_explain.log_analyze: true
            auto_explain.log_buffers: true
            auto_explain.log_timing: false
            auto_explain.log_triggers: true
            auto_explain.log_verbose: true
            auto_explain.log_nested_statements: true
            track_io_timing: on
            log_lock_waits: on
            log_temp_files: 0
            track_activities: on
            track_counts: on
            track_functions: all
            log_checkpoints: on
            logging_collector: on
            log_truncate_on_rotation: on
            log_rotation_age: 1d
            log_rotation_size: 0
            log_line_prefix: '%t [%p-%l] %r %q%u@%d '
            log_filename: 'postgresql-%a.log'
            log_directory: /var/log/postgresql
     
     #      recovery_conf:
    #        restore_command: cp ../wal_archive/%f %p
     
      # some desired options for 'initdb'
      initdb:  # Note: It needs to be a list (some options need values, others are switches)
      - encoding: UTF8
      - locale: en_US.UTF-8
      - data-checksums
     
      pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
      - host replication replicator 0.0.0.0/0 md5
      - host all all 0.0.0.0/0 md5
    
     # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
    # post_init: /usr/local/bin/setup_cluster.sh
     
      # Some additional users which need to be created after initializing the new cluster
    #  users:
    #    admin:
    #      password: admin-pass
    #      options:
    #        - createrole
    #        - createdb
    
      
    postgresql:
      listen: 192.168.216.132,127.0.0.1:5432
      connect_address: 192.168.216.132:5432
      use_unix_socket: true
      data_dir: /var/lib/pgsql/11/data
      bin_dir: /usr/pgsql-11/bin
      config_dir: /var/lib/pgsql/11/data
      pgpass: /var/lib/pgsql/.pgpass
      authentication:
        replication:
          username: replicator
          password: replicator-pass
        superuser:
          username: postgres
          password: postgres-pass
    #    rewind:  # Has no effect on postgres 10 and lower
    #      username: rewind_user
    #      password: rewind_password
      parameters:
        unix_socket_directories: /var/run/postgresql
        stats_temp_directory: /var/lib/pgsql_stats_tmp
     
    #  callbacks:
    #    on_start:
    #    on_stop:
    #    on_restart:
    #    on_reload:
    #    on_role_change:
     
      create_replica_methods:
    #   - pgbackrest
    #   - wal_e
       - basebackup
    # pgbackrest:
    #   command: /usr/bin/pgbackrest --stanza=<Stanza_Name> --delta restore
    #   keep_data: True
    #   no_params: True
    #  wal_e
    #    command: patroni_wale_restore
    #    no_master: 1
    #    envdir: /etc/wal_e/envdir
    #    use_iam: 1
      basebackup:
        max-rate: '100M'
     
     #watchdog:
    #  mode: automatic # Allowed values: off, automatic, required
    #  device: /dev/watchdog
    #  safety_margin: 5
     
    tags:
        nofailover: false
        noloadbalance: false
        clonefrom: false
        nosync: false
    # specify a node to replicate from. This can be used to implement a cascading replication.
    #    replicatefrom: (node name)
    

    node3: Patroni configuration file:

    [root@localhost tmp]# cat /etc/patroni/patroni.yml 
    
    scope: postgres-cluster
    name: pgnode03
    namespace: /service/
    
    restapi:
      listen: 192.168.216.134:8008
      connect_address: 192.168.216.134:8008
    #  certfile: /etc/ssl/certs/ssl-cert-snakeoil.pem
    #  keyfile: /etc/ssl/private/ssl-cert-snakeoil.key
    #  authentication:
    #    username: username
    #    password: password
    
    etcd:
      hosts: 192.168.216.130:2379,192.168.216.132:2379,192.168.216.134:2379
    
    bootstrap:
      # this section will be written into Etcd:/<namespace>/<scope>/config after initializing new cluster
      # and all other cluster members will use it as a `global configuration`
      dcs:
        ttl: 30
        loop_wait: 10
        retry_timeout: 10
        maximum_lag_on_failover: 1048576
        master_start_timeout: 300
        synchronous_mode: false
        synchronous_mode_strict: false
        #standby_cluster:
          #host: 127.0.0.1
          #port: 1111
          #primary_slot_name: patroni 
        postgresql:
          use_pg_rewind: true
          use_slots: true
          parameters:
            max_connections: 100
            superuser_reserved_connections: 5
            max_locks_per_transaction: 64
            max_prepared_transactions: 0
            huge_pages: try
            shared_buffers: 512MB
            work_mem: 128MB
            maintenance_work_mem: 256MB
            effective_cache_size: 4GB
            checkpoint_timeout: 15min
            checkpoint_completion_target: 0.9
            min_wal_size: 2GB
            max_wal_size: 4GB
            wal_buffers: 32MB
            default_statistics_target: 1000
            seq_page_cost: 1
            random_page_cost: 4
            effective_io_concurrency: 2
            synchronous_commit: on
            autovacuum: on
            autovacuum_max_workers: 5
            autovacuum_vacuum_scale_factor: 0.01
            autovacuum_analyze_scale_factor: 0.02
            autovacuum_vacuum_cost_limit: 200
            autovacuum_vacuum_cost_delay: 20
            autovacuum_naptime: 1s
            max_files_per_process: 4096
            archive_mode: on
            archive_timeout: 1800s
            archive_command: cd .
            wal_level: replica
            wal_keep_segments: 130
            max_wal_senders: 10
            max_replication_slots: 10
            hot_standby: on
            wal_log_hints: on
            shared_preload_libraries: pg_stat_statements,auto_explain
            pg_stat_statements.max: 10000
            pg_stat_statements.track: all
            pg_stat_statements.save: off
            auto_explain.log_min_duration: 10s
            auto_explain.log_analyze: true
            auto_explain.log_buffers: true
            auto_explain.log_timing: false
            auto_explain.log_triggers: true
            auto_explain.log_verbose: true
            auto_explain.log_nested_statements: true
            track_io_timing: on
            log_lock_waits: on
            log_temp_files: 0
            track_activities: on
            track_counts: on
            track_functions: all
            log_checkpoints: on
            logging_collector: on
            log_truncate_on_rotation: on
            log_rotation_age: 1d
            log_rotation_size: 0
            log_line_prefix: '%t [%p-%l] %r %q%u@%d '
            log_filename: 'postgresql-%a.log'
            log_directory: /var/log/postgresql
     
     #      recovery_conf:
    #        restore_command: cp ../wal_archive/%f %p
     
      # some desired options for 'initdb'
      initdb:  # Note: It needs to be a list (some options need values, others are switches)
      - encoding: UTF8
      - locale: en_US.UTF-8
      - data-checksums
     
      pg_hba:  # Add following lines to pg_hba.conf after running 'initdb'
      - host replication replicator 0.0.0.0/0 md5
      - host all all 0.0.0.0/0 md5
    
     # Additional script to be launched after initial cluster creation (will be passed the connection URL as parameter)
    # post_init: /usr/local/bin/setup_cluster.sh
     
      # Some additional users which need to be created after initializing the new cluster
    #  users:
    #    admin:
    #      password: admin-pass
    #      options:
    #        - createrole
    #        - createdb
    
      
    postgresql:
      listen: 192.168.216.134,127.0.0.1:5432
      connect_address: 192.168.216.134:5432
      use_unix_socket: true
      data_dir: /var/lib/pgsql/11/data
      bin_dir: /usr/pgsql-11/bin
      config_dir: /var/lib/pgsql/11/data
      pgpass: /var/lib/pgsql/.pgpass
      authentication:
        replication:
          username: replicator
          password: replicator-pass
        superuser:
          username: postgres
          password: postgres-pass
    #    rewind:  # Has no effect on postgres 10 and lower
    #      username: rewind_user
    #      password: rewind_password
      parameters:
        unix_socket_directories: /var/run/postgresql
        stats_temp_directory: /var/lib/pgsql_stats_tmp
     
    #  callbacks:
    #    on_start:
    #    on_stop:
    #    on_restart:
    #    on_reload:
    #    on_role_change:
     
      create_replica_methods:
    #   - pgbackrest
    #   - wal_e
       - basebackup
    # pgbackrest:
    #   command: /usr/bin/pgbackrest --stanza=<Stanza_Name> --delta restore
    #   keep_data: True
    #   no_params: True
    #  wal_e
    #    command: patroni_wale_restore
    #    no_master: 1
    #    envdir: /etc/wal_e/envdir
    #    use_iam: 1
      basebackup:
        max-rate: '100M'
     
     #watchdog:
    #  mode: automatic # Allowed values: off, automatic, required
    #  device: /dev/watchdog
    #  safety_margin: 5
     
    tags:
        nofailover: false
        noloadbalance: false
        clonefrom: false
        nosync: false
    # specify a node to replicate from. This can be used to implement a cascading replication.
    #    replicatefrom: (node name)
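The three patroni.yml files above differ only in `name`, the restapi addresses, and the postgresql listen/connect addresses. One way to keep them consistent across nodes (the template file and the `{{...}}` placeholders are assumptions for illustration, not part of the original setup) is to stamp out each node's file from a shared template with sed:

```shell
# Shared template with placeholders for the per-node values
cat > patroni.yml.tpl <<'EOF'
name: {{NODE_NAME}}
restapi:
  listen: {{NODE_IP}}:8008
  connect_address: {{NODE_IP}}:8008
EOF

# Render node2's fragment; redirect to /etc/patroni/patroni.yml in practice
NODE_NAME=pgnode02
NODE_IP=192.168.216.132
sed -e "s/{{NODE_NAME}}/$NODE_NAME/" \
    -e "s/{{NODE_IP}}/$NODE_IP/g" patroni.yml.tpl
```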
    

      V. On each of the three nodes, create /etc/systemd/system/patroni.service so the Patroni service can be managed with systemctl

    First run the following to confirm where patroni was installed:

    which patroni
    

    If the installed location does not match the ExecStart path in patroni.service, symlinks are one fix:

    1. Create the symlinks (or instead edit ExecStart in patroni.service to point at the actual patroni path):

    ln -s /usr/bin/patronictl /usr/local/bin/patronictl
    ln -s /usr/bin/patroni /usr/local/bin/patroni
    

    2. Create /etc/systemd/system/patroni.service on node1

    cat /etc/systemd/system/patroni.service
    [Unit]
    Description=Runners to orchestrate a high-availability PostgreSQL - patroni
    After=syslog.target network.target
     
    [Service]
    Type=simple
     
    User=postgres
    Group=postgres
    
    # Read in configuration file if it exists, otherwise proceed
    EnvironmentFile=-/etc/patroni_env.conf
    
    WorkingDirectory=~
     
    # Where to send early-startup messages from the server
    # This is normally controlled by the global default set by systemd
    # StandardOutput=syslog
    
    # Pre-commands to start watchdog device
    # Uncomment if watchdog is part of your patroni setup
    #ExecStartPre=-/usr/bin/sudo /sbin/modprobe softdog
    #ExecStartPre=-/usr/bin/sudo /bin/chown postgres /dev/watchdog
    
    # Start the patroni process
    ExecStart=/usr/local/bin/patroni /etc/patroni/patroni.yml
    
    # Send HUP to reload from patroni.yml
    ExecReload=/bin/kill -s HUP $MAINPID
     
    # only kill the patroni process, not its children, so it will gracefully stop postgres
    KillMode=process
     
    # Give a reasonable amount of time for the server to start up/shut down
    TimeoutSec=60
     
    # Do not restart the service if it crashes, we want to manually inspect database on failure
    Restart=no
     
    [Install]
    WantedBy=multi-user.target
    

    3. Create /etc/systemd/system/patroni.service on node2

    cat /etc/systemd/system/patroni.service
    [Unit]
    Description=Runners to orchestrate a high-availability PostgreSQL - patroni
    After=syslog.target network.target
     
    [Service]
    Type=simple
     
    User=postgres
    Group=postgres
    
    # Read in configuration file if it exists, otherwise proceed
    EnvironmentFile=-/etc/patroni_env.conf
    
    WorkingDirectory=~
     
    # Where to send early-startup messages from the server
    # This is normally controlled by the global default set by systemd
    # StandardOutput=syslog
    
    # Pre-commands to start watchdog device
    # Uncomment if watchdog is part of your patroni setup
    #ExecStartPre=-/usr/bin/sudo /sbin/modprobe softdog
    #ExecStartPre=-/usr/bin/sudo /bin/chown postgres /dev/watchdog
    
    # Start the patroni process
    ExecStart=/usr/local/bin/patroni /etc/patroni/patroni.yml
    
    # Send HUP to reload from patroni.yml
    ExecReload=/bin/kill -s HUP $MAINPID
     
    # only kill the patroni process, not its children, so it will gracefully stop postgres
    KillMode=process
     
    # Give a reasonable amount of time for the server to start up/shut down
    TimeoutSec=60
     
    # Do not restart the service if it crashes, we want to manually inspect database on failure
    Restart=no
     
    [Install]
    WantedBy=multi-user.target
    

     4. On node3 the steps are the same; the unit file is identical and can simply be copied over.
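With the unit file in place on all three nodes, Patroni can be brought up under systemd. The first node to start bootstraps the cluster (it runs initdb and starts the primary); the other two initialize themselves as replicas via basebackup. On each node:

```shell
systemctl daemon-reload
systemctl enable patroni
systemctl start patroni

# Verify membership and roles (Leader / Replica); patronictl reads the
# member list from etcd, so this can be run from any node
patronictl -c /etc/patroni/patroni.yml list
```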

  • Source: https://www.cnblogs.com/caidingyu/p/11566791.html