  • Deploying an etcd cluster on Ubuntu 20.04

    etcd is a critical component of a Kubernetes cluster: it stores all of the cluster's network configuration and object state. Every piece of persistent state in Kubernetes is kept in etcd as key-value pairs, and etcd also provides the distributed coordination service. The reason the individual Kubernetes components can be described as stateless is precisely that all of their data lives in etcd.
    Since etcd supports clustering, this walkthrough deploys etcd on all three hosts.
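    To illustrate what ends up there: once the full Kubernetes control plane is running against this etcd cluster, its objects can be listed under the /registry key prefix. A minimal sketch using the v3 API and the certificates generated in the previous post (the endpoint and certificate paths are the ones used throughout this setup):

    ETCDCTL_API=3 etcdctl \
      --endpoints=https://192.168.1.106:2379 \
      --cacert=/opt/kubernetes/ssl/ca.pem \
      --cert=/opt/kubernetes/ssl/etcd.pem \
      --key=/opt/kubernetes/ssl/etcd-key.pem \
      get /registry --prefix --keys-only | head
    # prints keys such as /registry/pods/... and /registry/services/... once an API server is using the cluster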

    1. Create the etcd configuration files

    Port 2379 is used for client (external) communication and port 2380 for peer (internal) communication between cluster members.
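    If a host firewall such as ufw is active on these machines, both ports must be reachable between the nodes; a minimal sketch, assuming ufw is the firewall in use (skip this if no firewall is enabled):

    sudo ufw allow 2379/tcp   # client traffic
    sudo ufw allow 2380/tcp   # peer traffic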

    This continues from the previous post: https://www.cnblogs.com/yangzp/p/15692046.html

    On master:

    yang@master:/opt/kubernetes/$ sudo mkdir -p /opt/kubernetes/cfg
    yang@master:/opt/kubernetes/cfg$ sudo nano /opt/kubernetes/cfg/etcd.conf 
    #[member]
    ##etcd node name: ETCD_NAME must be different on every node
    ETCD_NAME="etcd-node1"
    #etcd data directory
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    
    #ETCD_SNAPSHOT_COUNTER="10000"
    #ETCD_HEARTBEAT_INTERVAL="100"
    #ETCD_ELECTION_TIMEOUT="1000"
    #
    ##Peer URL etcd listens on; change this on every node
    ETCD_LISTEN_PEER_URLS="https://192.168.1.106:2380"
    
    #Client URL etcd listens on for external access; change this on every node
    ETCD_LISTEN_CLIENT_URLS="https://192.168.1.106:2379,https://127.0.0.1:2379"
    #ETCD_MAX_SNAPSHOTS="5"
    #ETCD_MAX_WALS="5"
    #ETCD_CORS=""
    #[cluster]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.106:2380"
    # if you use different ETCD_NAME (e.g. test),
    # set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
    ETCD_INITIAL_CLUSTER="etcd-node1=https://192.168.1.106:2380,etcd-node2=https://192.168.1.108:2380,etcd-node3=https://192.168.1.109:2380"
    ETCD_INITIAL_CLUSTER_STATE="new"
    ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
    ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.106:2379"
    #[security]
    CLIENT_CERT_AUTH="true"
    ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
    ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
    ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
    PEER_CLIENT_CERT_AUTH="true"
    ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
    ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
    ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem" 
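    The three configuration files differ only in ETCD_NAME and in the node's own IP inside the *_URLS values; ETCD_INITIAL_CLUSTER and the certificate paths stay identical. As an optional shortcut, the master's file can be copied to node1 and node2 (for example with scp) and the per-node values adjusted with a short sed sketch like the one below (the NAME and IP variables are illustrative; set them to the target node's values before running):

    NAME=etcd-node2          # etcd-node3 on the third host
    IP=192.168.1.108         # 192.168.1.109 on the third host
    sudo cp /opt/kubernetes/cfg/etcd.conf /opt/kubernetes/cfg/etcd.conf.bak
    sudo sed -i \
      -e "s/^ETCD_NAME=.*/ETCD_NAME=\"${NAME}\"/" \
      -e "s|^ETCD_LISTEN_PEER_URLS=.*|ETCD_LISTEN_PEER_URLS=\"https://${IP}:2380\"|" \
      -e "s|^ETCD_LISTEN_CLIENT_URLS=.*|ETCD_LISTEN_CLIENT_URLS=\"https://${IP}:2379,https://127.0.0.1:2379\"|" \
      -e "s|^ETCD_INITIAL_ADVERTISE_PEER_URLS=.*|ETCD_INITIAL_ADVERTISE_PEER_URLS=\"https://${IP}:2380\"|" \
      -e "s|^ETCD_ADVERTISE_CLIENT_URLS=.*|ETCD_ADVERTISE_CLIENT_URLS=\"https://${IP}:2379\"|" \
      /opt/kubernetes/cfg/etcd.conf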

    On node1:

    yang@node1:/opt/kubernetes/$ sudo mkdir -p /opt/kubernetes/cfg
    yang@node1:/opt/kubernetes/cfg$ sudo nano /opt/kubernetes/cfg/etcd.conf 
    #[member]
    ##etcd node name: ETCD_NAME must be different on every node
    ETCD_NAME="etcd-node2"
    #etcd data directory
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    #ETCD_SNAPSHOT_COUNTER="10000"
    #ETCD_HEARTBEAT_INTERVAL="100"
    #ETCD_ELECTION_TIMEOUT="1000"
    ##Peer URL etcd listens on; change this on every node
    ETCD_LISTEN_PEER_URLS="https://192.168.1.108:2380"
    #Client URL etcd listens on for external access; change this on every node
    ETCD_LISTEN_CLIENT_URLS="https://192.168.1.108:2379,https://127.0.0.1:2379"
    #ETCD_MAX_SNAPSHOTS="5"
    #ETCD_MAX_WALS="5"
    #ETCD_CORS=""
    #[cluster]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.108:2380"
    # if you use different ETCD_NAME (e.g. test),
    # set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
    # initial cluster membership (all three nodes)
    ETCD_INITIAL_CLUSTER="etcd-node1=https://192.168.1.106:2380,etcd-node2=https://192.168.1.108:2380,etcd-node3=https://192.168.1.109:2380"
    ETCD_INITIAL_CLUSTER_STATE="new"
    ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
    ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.108:2379"
    #[security]
    CLIENT_CERT_AUTH="true"
    ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
    ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
    ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
    PEER_CLIENT_CERT_AUTH="true"
    ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
    ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
    ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"

    On node2:

    yang@node2:/opt/kubernetes/$ sudo mkdir -p /opt/kubernetes/cfg
    yang@node2:/opt/kubernetes/cfg$ cat /opt/kubernetes/cfg/etcd.conf 
    #[member]
    ##etcd node name: ETCD_NAME must be different on every node
    ETCD_NAME="etcd-node3"
    #etcd data directory
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"  
    #ETCD_SNAPSHOT_COUNTER="10000"
    #ETCD_HEARTBEAT_INTERVAL="100"
    #ETCD_ELECTION_TIMEOUT="1000"
    ##Peer URL etcd listens on; change this on every node
    ETCD_LISTEN_PEER_URLS="https://192.168.1.109:2380"
    #Client URL etcd listens on for external access; change this on every node
    ETCD_LISTEN_CLIENT_URLS="https://192.168.1.109:2379,https://127.0.0.1:2379"
    #ETCD_MAX_SNAPSHOTS="5"
    #ETCD_MAX_WALS="5"
    #ETCD_CORS=""
    #[cluster]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.109:2380"
    # if you use different ETCD_NAME (e.g. test),
    # set ETCD_INITIAL_CLUSTER value for this name, i.e. "test=http://..."
    # initial cluster membership (all three nodes)
    ETCD_INITIAL_CLUSTER="etcd-node1=https://192.168.1.106:2380,etcd-node2=https://192.168.1.108:2380,etcd-node3=https://192.168.1.109:2380"
    ETCD_INITIAL_CLUSTER_STATE="new"
    ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
    ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.109:2379"
    #[security]
    CLIENT_CERT_AUTH="true"
    ETCD_CA_FILE="/opt/kubernetes/ssl/ca.pem"
    ETCD_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
    ETCD_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
    PEER_CLIENT_CERT_AUTH="true"
    ETCD_PEER_CA_FILE="/opt/kubernetes/ssl/ca.pem"
    ETCD_PEER_CERT_FILE="/opt/kubernetes/ssl/etcd.pem"
    ETCD_PEER_KEY_FILE="/opt/kubernetes/ssl/etcd-key.pem"
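    A quick sanity check on each node before moving on: the lines below are the only ones that should differ between the three files, and they must all carry the node's own name or IP.

    grep -E '^ETCD_(NAME|LISTEN|INITIAL_ADVERTISE|ADVERTISE)' /opt/kubernetes/cfg/etcd.conf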

    2. Create the etcd systemd service

    On master:

    yang@master:/opt/kubernetes/cfg$ sudo nano /etc/systemd/system/etcd.service 
    
    [Unit]
    Description=Etcd Server
    After=network.target
    
    [Service]
    Type=notify
    WorkingDirectory=/var/lib/etcd
    EnvironmentFile=-/opt/kubernetes/cfg/etcd.conf
    # set GOMAXPROCS to the number of processors
    ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /opt/kubernetes/bin/etcd"
    
    [Install]
    WantedBy=multi-user.target
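    Type=notify is used because etcd reports readiness to systemd via sd_notify once it is serving, so systemd only marks the unit as started after etcd is actually up. As a sanity check, the unit file can be checked for typos before it is enabled:

    sudo systemd-analyze verify /etc/systemd/system/etcd.service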
    

    On node1:

    yang@node1:/opt/kubernetes/cfg$ sudo nano /etc/systemd/system/etcd.service
    [Unit]
    Description=Etcd Server
    After=network.target
    
    [Service]
    Type=notify
    WorkingDirectory=/var/lib/etcd
    EnvironmentFile=-/opt/kubernetes/cfg/etcd.conf
    # set GOMAXPROCS to the number of processors
    ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /opt/kubernetes/bin/etcd"
    
    [Install]
    WantedBy=multi-user.target
    

    On node2:

    yang@node2:/opt/kubernetes/cfg$ sudo nano /etc/systemd/system/etcd.service
    [Unit]
    Description=Etcd Server
    After=network.target
    
    [Service]
    Type=notify
    WorkingDirectory=/var/lib/etcd
    EnvironmentFile=-/opt/kubernetes/cfg/etcd.conf
    # set GOMAXPROCS to the number of processors
    ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /opt/kubernetes/bin/etcd"
    
    [Install]
    WantedBy=multi-user.target

    3. Reload systemd and start etcd

    1. On all three servers (master, node1, node2), reload systemd and enable the etcd service to start at boot (the commands are shown on master; run the same on node1 and node2):

    yang@master:/opt/kubernetes/cfg$ sudo systemctl daemon-reload
    yang@master:/opt/kubernetes/cfg$ sudo systemctl enable etcd

    2. The etcd data directory is not created by default, so create it on all three nodes, then start etcd and check its status:

    yang@master:/opt/kubernetes/cfg$ sudo  mkdir /var/lib/etcd
    yang@master:/opt/kubernetes/cfg$ sudo systemctl start etcd
    yang@master:/opt/kubernetes/cfg$ sudo systemctl status etcd
    
    yang@node1:/opt/kubernetes/cfg$ sudo  mkdir /var/lib/etcd
    yang@node1:/opt/kubernetes/cfg$ sudo systemctl start etcd
    yang@node1:/opt/kubernetes/cfg$ sudo systemctl status etcd
    
    yang@node2:/opt/kubernetes/cfg$ sudo  mkdir /var/lib/etcd
    yang@node2:/opt/kubernetes/cfg$ sudo systemctl start etcd
    yang@node2:/opt/kubernetes/cfg$ sudo systemctl status etcd
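    Because the unit uses Type=notify, systemctl start on the very first member blocks until a second member joins and the cluster has quorum, so start the three nodes close together. If a node refuses to start, following its journal is the fastest way to see why:

    sudo journalctl -u etcd -f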
    

    3. Check that ports 2379 and 2380 are listening on all three servers

    On master:

    yang@master:/opt/kubernetes/cfg$ sudo apt install net-tools
    yang@master:/opt/kubernetes/cfg$ netstat -antpl
    (Not all processes could be identified, non-owned process info
     will not be shown, you would have to be root to see it all.)
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 192.168.1.106:2379      0.0.0.0:*               LISTEN      -                   
    tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      -                   
    tcp        0      0 192.168.1.106:2380      0.0.0.0:*               LISTEN      -                   
    tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      -                   
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -                   
    tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      -                   
    tcp        0      0 192.168.1.106:53248     192.168.1.108:2380      ESTABLISHED -                   
    tcp        0      0 192.168.1.106:53250     192.168.1.108:2380      ESTABLISHED -                   
    tcp        0     36 192.168.1.106:22        192.168.1.114:64179     ESTABLISHED -                   
    tcp        0      0 192.168.1.106:2380      192.168.1.108:46790     ESTABLISHED -                   
    tcp        0      0 127.0.0.1:38788         127.0.0.1:2379          ESTABLISHED -                   
    tcp        0      0 192.168.1.106:2380      192.168.1.109:36012     ESTABLISHED -                   
    tcp        0      0 192.168.1.106:2380      192.168.1.108:46778     ESTABLISHED -                   
    tcp        0      0 192.168.1.106:2380      192.168.1.108:46780     ESTABLISHED -                   
    tcp        0      0 192.168.1.106:53260     192.168.1.108:2380      ESTABLISHED -                   
    tcp        0      0 192.168.1.106:2380      192.168.1.109:36008     ESTABLISHED -                   
    tcp        0      0 192.168.1.106:2380      192.168.1.109:36000     ESTABLISHED -                   
    tcp        0      0 192.168.1.106:33486     192.168.1.109:2380      ESTABLISHED -                   
    tcp        0      0 127.0.0.1:2379          127.0.0.1:38788         ESTABLISHED -                   
    tcp        0      0 192.168.1.106:2379      192.168.1.106:39984     ESTABLISHED -                   
    tcp        0      0 192.168.1.106:33476     192.168.1.109:2380      ESTABLISHED -                   
    tcp        0      0 192.168.1.106:2380      192.168.1.109:35998     ESTABLISHED -                   
    tcp        0      0 192.168.1.106:33478     192.168.1.109:2380      ESTABLISHED -                   
    tcp        0      0 192.168.1.106:39984     192.168.1.106:2379      ESTABLISHED -                   
    tcp6       0      0 :::22                   :::*                    LISTEN      -                   
    tcp6       0      0 ::1:6010                :::*                    LISTEN      -  
    

    On node1:

    yang@node1:/opt/kubernetes/cfg$ netstat -antpl
    (Not all processes could be identified, non-owned process info
     will not be shown, you would have to be root to see it all.)
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      -                   
    tcp        0      0 192.168.1.108:2379      0.0.0.0:*               LISTEN      -                   
    tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      -                   
    tcp        0      0 192.168.1.108:2380      0.0.0.0:*               LISTEN      -                   
    tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      -                   
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -                   
    tcp        0      0 192.168.1.108:44020     192.168.1.108:2379      ESTABLISHED -                   
    tcp        0      0 192.168.1.108:2380      192.168.1.109:60836     ESTABLISHED -                   
    tcp        0      0 192.168.1.108:2380      192.168.1.106:53260     ESTABLISHED -                   
    tcp        0      0 192.168.1.108:2379      192.168.1.108:44020     ESTABLISHED -                   
    tcp        0      0 192.168.1.108:2380      192.168.1.106:53248     ESTABLISHED -                   
    tcp        0      0 192.168.1.108:53444     192.168.1.109:2380      ESTABLISHED -                   
    tcp        0      0 192.168.1.108:53452     192.168.1.109:2380      ESTABLISHED -                   
    tcp        0      0 192.168.1.108:46778     192.168.1.106:2380      ESTABLISHED -                   
    tcp        0      0 192.168.1.108:46780     192.168.1.106:2380      ESTABLISHED -                   
    tcp        0      0 192.168.1.108:46790     192.168.1.106:2380      ESTABLISHED -                   
    tcp        0      0 192.168.1.108:2380      192.168.1.109:60842     ESTABLISHED -                   
    tcp        0      0 192.168.1.108:2380      192.168.1.109:60848     ESTABLISHED -                   
    tcp        0      0 192.168.1.108:2380      192.168.1.106:53250     ESTABLISHED -                   
    tcp        0      0 192.168.1.108:53442     192.168.1.109:2380      ESTABLISHED -                   
    tcp        0      0 192.168.1.108:2380      192.168.1.109:60856     ESTABLISHED -                   
    tcp        0      0 127.0.0.1:60782         127.0.0.1:2379          ESTABLISHED -                   
    tcp        0     36 192.168.1.108:22        192.168.1.114:64184     ESTABLISHED -                   
    tcp        0      0 127.0.0.1:2379          127.0.0.1:60782         ESTABLISHED -                   
    tcp6       0      0 ::1:6010                :::*                    LISTEN      -                   
    tcp6       0      0 :::22                   :::*                    LISTEN      -  
    

    On node2:

    yang@node2:/opt/kubernetes/cfg$ netstat -antpl
    (Not all processes could be identified, non-owned process info
     will not be shown, you would have to be root to see it all.)
    Active Internet connections (servers and established)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 127.0.0.53:53           0.0.0.0:*               LISTEN      -                   
    tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -                   
    tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      -                   
    tcp        0      0 192.168.1.109:2379      0.0.0.0:*               LISTEN      -                   
    tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      -                   
    tcp        0      0 192.168.1.109:2380      0.0.0.0:*               LISTEN      -                   
    tcp        0      0 192.168.1.109:2380      192.168.1.106:33478     ESTABLISHED -                   
    tcp        0      0 192.168.1.109:36008     192.168.1.106:2380      ESTABLISHED -                   
    tcp        0      0 192.168.1.109:2380      192.168.1.106:33476     ESTABLISHED -                   
    tcp        0      0 192.168.1.109:36000     192.168.1.106:2380      ESTABLISHED -                   
    tcp        0      0 192.168.1.109:2380      192.168.1.106:33486     ESTABLISHED -                   
    tcp        0      0 192.168.1.109:60842     192.168.1.108:2380      ESTABLISHED -                   
    tcp        0      0 192.168.1.109:2379      192.168.1.109:37924     ESTABLISHED -                   
    tcp        0      0 192.168.1.109:36012     192.168.1.106:2380      ESTABLISHED -                   
    tcp        0      0 192.168.1.109:37924     192.168.1.109:2379      ESTABLISHED -                   
    tcp        0     36 192.168.1.109:22        192.168.1.114:64185     ESTABLISHED -                   
    tcp        0      0 127.0.0.1:2379          127.0.0.1:57926         ESTABLISHED -                   
    tcp        0      0 192.168.1.109:60856     192.168.1.108:2380      ESTABLISHED -                   
    tcp        0      0 192.168.1.109:2380      192.168.1.108:53452     ESTABLISHED -                   
    tcp        0      0 192.168.1.109:35998     192.168.1.106:2380      ESTABLISHED -                   
    tcp        0      0 192.168.1.109:60848     192.168.1.108:2380      ESTABLISHED -                   
    tcp        0      0 192.168.1.109:60836     192.168.1.108:2380      ESTABLISHED -                   
    tcp        0      0 127.0.0.1:57926         127.0.0.1:2379          ESTABLISHED -                   
    tcp        0      0 192.168.1.109:2380      192.168.1.108:53442     ESTABLISHED -                   
    tcp        0      0 192.168.1.109:2380      192.168.1.108:53444     ESTABLISHED -                   
    tcp6       0      0 :::22                   :::*                    LISTEN      -                   
    tcp6       0      0 ::1:6010                :::*                    LISTEN      -  
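    The same check can be done without installing net-tools, using ss, which ships with Ubuntu 20.04:

    sudo ss -lntp | grep -E ':(2379|2380)'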

    4. Verify the etcd cluster

    On master:

    yang@master:/opt/kubernetes/bin$ etcdctl --endpoints=https://192.168.1.106:2379 --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/etcd.pem  --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health
    
    Command 'etcdctl' not found, but can be installed with:
    
    sudo apt install etcd-client
    
    yang@master:/opt/kubernetes/bin$ sudo apt install etcd-client
    Reading package lists... Done
    Building dependency tree       
    Reading state information... Done
    The following NEW packages will be installed:
      etcd-client
    0 upgraded, 1 newly installed, 0 to remove and 44 not upgraded.
    Need to get 4,563 kB of archives.
    After this operation, 17.2 MB of additional disk space will be used.
    Get:1 http://mirrors.aliyun.com/ubuntu focal/universe amd64 etcd-client amd64 3.2.26+dfsg-6 [4,563 kB]
    Fetched 4,563 kB in 2s (2,398 kB/s)      
    Selecting previously unselected package etcd-client.
    (Reading database ... 71849 files and directories currently installed.)
    Preparing to unpack .../etcd-client_3.2.26+dfsg-6_amd64.deb ...
    Unpacking etcd-client (3.2.26+dfsg-6) ...
    Setting up etcd-client (3.2.26+dfsg-6) ...
    Processing triggers for man-db (2.9.1-1) ...
    yang@master:/opt/kubernetes/bin$ sudo etcdctl --endpoints=https://192.168.1.106:2379 --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/etcd.pem  --key-file=/opt/kubernetes/ssl/etcd-key.pem cluster-health
    member 1baa0a58b574d69b is healthy: got healthy result from https://192.168.1.108:2379
    member 39133e181b350a4e is healthy: got healthy result from https://192.168.1.106:2379
    member 69fb6a35f1ce3d83 is healthy: got healthy result from https://192.168.1.109:2379
    cluster is healthy
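    The cluster-health subcommand above goes through etcdctl's legacy v2 API. The same binary can also speak the v3 API (the one Kubernetes itself uses); a roughly equivalent health check (note the differently named TLS flags) would be:

    sudo ETCDCTL_API=3 etcdctl \
      --endpoints=https://192.168.1.106:2379,https://192.168.1.108:2379,https://192.168.1.109:2379 \
      --cacert=/opt/kubernetes/ssl/ca.pem \
      --cert=/opt/kubernetes/ssl/etcd.pem \
      --key=/opt/kubernetes/ssl/etcd-key.pem \
      endpoint health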

    At this point, the etcd deployment is complete.

    Reference: https://blog.csdn.net/qq_34261373/article/details/90052220
