  • Installing and Deploying Kubernetes with kubeadm

    I. Network Topology

     

    II. System Configuration

    OS System:
    [root@localhost ~]# cat /etc/redhat-release 
    CentOS Linux release 7.8.2003 (Core)
    Kernel version:
    [root@localhost ~]# uname -r
    5.4.109-1.el7.elrepo.x86_64
    
    k8s-master-VIP:172.168.32.248
    haproxy+keepalived-master:172.168.32.208
    haproxy+keepalived-slave:172.168.32.209
    #etcd01:172.168.32.211
    #etcd02:172.168.32.212
    #etcd03:172.168.32.213
    k8s-master01:172.168.32.201 
    k8s-master02:172.168.32.202
    k8s-master03:172.168.32.203
    k8s-node01:172.168.32.204
    k8s-node02:172.168.32.205
    k8s-node03:172.168.32.206
    harbor+ansible+nfs:172.168.32.41
    
    Access domain:
    172.168.32.248 www.ywx.net
    Harbor domain:
    172.168.32.41 harbor.ywx.net

    III. CentOS Kernel Upgrade

    # Import the public key
    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    # Install the ELRepo repository
    rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
    # Load the elrepo-kernel metadata
    yum --disablerepo=* --enablerepo=elrepo-kernel repolist
    # List the available kernel packages
    yum --disablerepo=* --enablerepo=elrepo-kernel list kernel*
    # Install the long-term-support kernel
    yum --disablerepo=* --enablerepo=elrepo-kernel install -y kernel-lt.x86_64
    # Remove the old kernel tools packages
    yum remove kernel-tools-libs.x86_64 kernel-tools.x86_64 -y
    # Install the new kernel tools package
    yum --disablerepo=* --enablerepo=elrepo-kernel install -y kernel-lt-tools.x86_64
    
    # Check the boot entries and the default boot order
    [root@localhost tmp]# awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
    CentOS Linux (5.4.109-1.el7.elrepo.x86_64) 7 (Core)
    CentOS Linux (3.10.0-327.el7.x86_64) 7 (Core)
    CentOS Linux (0-rescue-d7e33d2d499040e5ab09a6182a7175d9) 7 (Core)
    # Boot entries are numbered from 0 and the new kernel is inserted at the top (it is now entry 0, the old 3.10 kernel is entry 1), so select entry 0.
    grub2-set-default 0  
    # Reboot and verify
    reboot
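    After the reboot, a quick sanity check that the new kernel is running and is the grub default (a sketch; the exact entry list varies per host):

    # verify the running kernel and the saved grub default entry
    uname -r              # should print 5.4.109-1.el7.elrepo.x86_64
    grub2-editenv list    # shows saved_entry
    awk -F\' '$1=="menuentry " {print i++" : "$2}' /etc/grub2.cfg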

    Troubleshooting

    Error: Package: kernel-lt-tools-5.4.109-1.el7.elrepo.x86_64 (elrepo-kernel)
               Requires: libpci.so.3(LIBPCI_3.3)(64bit)
    Error: Package: kernel-lt-tools-5.4.109-1.el7.elrepo.x86_64 (elrepo-kernel)
               Requires: libpci.so.3(LIBPCI_3.5)(64bit)
     You could try using --skip-broken to work around the problem
     You could try running: rpm -Va --nofiles --nodigest
    
    Fix:
    yum install -y pciutils-libs

    IV. Installing and Deploying keepalived + haproxy

    1. Deploy keepalived

    Deploy keepalived on 172.168.32.208 and 172.168.32.209.

    yum install -y keepalived

    keepalived configuration on haproxy+keepalived-master 172.168.32.208:

    ! Configuration File for keepalived
    
    global_defs {
       notification_email {
         acassen@firewall.loc
         failover@firewall.loc
         sysadmin@firewall.loc
       }
       notification_email_from Alexandre.Cassen@firewall.loc
       smtp_server 172.168.200.1
       smtp_connect_timeout 30
       router_id LVS_DEVEL
       vrrp_skip_check_adv_addr
       vrrp_strict
       vrrp_garp_interval 0
       vrrp_gna_interval 0
    }
    
    vrrp_instance VI_1 {
        state MASTER
        interface eth0
        virtual_router_id 51
        priority 100
        advert_int 3
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            172.168.32.248 dev eth0 label eth0:1
        }
    }

    keepalived configuration on haproxy+keepalived-slave 172.168.32.209:

    ! Configuration File for keepalived
    
    global_defs {
       notification_email {
         acassen@firewall.loc
         failover@firewall.loc
         sysadmin@firewall.loc
       }
       notification_email_from Alexandre.Cassen@firewall.loc
       smtp_server 172.168.200.1
       smtp_connect_timeout 30
       router_id LVS_DEVEL
       vrrp_skip_check_adv_addr
       vrrp_strict
       vrrp_garp_interval 0
       vrrp_gna_interval 0
    }
    
    vrrp_instance VI_1 {
        state BACKUP
        interface eth0
        virtual_router_id 51
        priority 80
        advert_int 1
        authentication {
            auth_type PASS
            auth_pass 1111
        }
        virtual_ipaddress {
            172.168.32.248 dev eth0 label eth0:1
        }
    }

    Start keepalived and enable it at boot:

    systemctl start keepalived
    systemctl enable keepalived

    Verify keepalived

    keepalived master:172.168.32.208

    [root@haproxy01 ~]# ip a|grep eth0
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        inet 172.168.32.208/16 brd 172.168.255.255 scope global eth0
        inet 172.168.32.248/32 scope global eth0:1

    keepalived slave:172.168.32.209

    [root@haproxy02 ~]# ip a|grep eth0
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        inet 172.168.32.209/16 brd 172.168.255.255 scope global eth0

    Stop the master and verify that the VIP fails over to the slave.

    keepalived master:172.168.32.208

    [root@haproxy01 ~]# systemctl stop keepalived.service 
    [root@haproxy01 ~]# ip a |grep eth0
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        inet 172.168.32.208/16 brd 172.168.255.255 scope global eth0

    keepalived slave:172.168.32.209

    [root@haproxy02 ~]# ip a|grep eth0
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        inet 172.168.32.209/16 brd 172.168.255.255 scope global eth0
        inet 172.168.32.248/32 scope global eth0:1

    The VIP failover succeeded.

    2. Fixing the VIP that cannot be pinged

    Apply this on both haproxy01 (172.168.32.208) and haproxy02 (172.168.32.209).

    The yum-installed keepalived automatically creates a firewall rule, which can be removed or prevented from being generated:
    [root@haproxy02 ~]# iptables -vnL
    Chain INPUT (policy ACCEPT 46171 packets, 3069K bytes)
     pkts bytes target     prot opt in     out     source               destination         
        0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set keepalived dst
    # The yum-installed keepalived adds an iptables rule on the INPUT chain that blocks access to the VIP.

    Modify the iptables rule

    [root@haproxy02 ~]# iptables-save > /tmp/iptables.txt
    [root@haproxy02 ~]# vim /tmp/iptables.txt
    # Generated by iptables-save v1.4.21 on Tue Apr  6 02:57:11 2021
    *filter
    :INPUT ACCEPT [47171:3118464]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [46521:2350054]
    -A INPUT -m set --match-set keepalived dst -j DROP
    COMMIT
    # Completed on Tue Apr  6 02:57:11 2021
    
    # Change "-A INPUT -m set --match-set keepalived dst -j DROP" to "-A INPUT -m set --match-set keepalived dst -j ACCEPT"
    
    # Reload the iptables rules
    [root@haproxy02 ~]# iptables-restore /tmp/iptables.txt 
    [root@haproxy02 ~]# iptables -vnL
    Chain INPUT (policy ACCEPT 115 packets, 5732 bytes)
     pkts bytes target     prot opt in     out     source               destination         
        0     0 ACCEPT     all  --  *      *       0.0.0.0/0            0.0.0.0/0            match-set keepalived dst
    
    
    The VIP can now be pinged.

    Load the iptables rules automatically at boot:

    [root@haproxy02 ~]#vim /etc/rc.d/rc.local
    /usr/sbin/iptables-restore  /tmp/iptables.txt
    
    [root@haproxy02 ~]#chmod +x /etc/rc.d/rc.local

    3. Install and deploy haproxy

    Deploy haproxy on 172.168.32.208 and 172.168.32.209.

    yum install -y haproxy

    Append the following to haproxy.cfg:

    listen stats
        mode http
        bind 172.168.32.248:9999 
        stats enable
        log global
        stats uri         /haproxy-status
        stats auth        haadmin:123456
    
    
    listen k8s_api_nodes_6443
        bind 172.168.32.248:6443
        mode tcp
        #balance leastconn
       server 172.168.32.201 172.168.32.201:6443 check inter 2000 fall 3 rise 5
       server 172.168.32.202 172.168.32.202:6443 check inter 2000 fall 3 rise 5
       server 172.168.32.203 172.168.32.203:6443 check inter 2000 fall 3 rise 5

    Start haproxy

    systemctl start haproxy
    systemctl enable haproxy

    Verification:

    The stats page can be accessed at http://www.ywx.net:9999/haproxy-status (www.ywx.net resolves to the VIP 172.168.32.248).
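    A quick check that haproxy is actually serving (a hedged sketch; the stats credentials come from the listen stats block above):

    # from any host that can reach the VIP
    curl -u haadmin:123456 http://www.ywx.net:9999/haproxy-status | head
    # on the haproxy nodes, confirm the listeners are up; binding the VIP on the
    # BACKUP node may additionally require net.ipv4.ip_nonlocal_bind=1
    ss -tnlp | grep -E '9999|6443'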

    V. Installing Harbor with HTTPS

    1. Software versions

    harbor 172.168.32.41

    Harbor version:
    harbor-offline-installer-v2.1.2
    
    docker:
    19.03.9-ce
    
    OS
    [root@harbor ~]# cat /etc/redhat-release 
    CentOS Linux release 7.8.2003 (Core)
    [root@harbor ~]# uname -r
    3.10.0-1127.el7.x86_64

    2. Install docker

    docker installation script:

    cat >> /tmp/docker_install.sh << 'EOF'
    #! /bin/bash
    ver=19.03.9
    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
    yum makecache fast
    yum install -y docker-ce-$ver docker-ce-cli-$ver
    systemctl start docker
    systemctl enable docker
    EOF

    Install docker:

    bash /tmp/docker_install.sh

    Install Docker Compose:

    Option 1:
    https://github.com/docker/compose/releases
    mv docker-compose-Linux-x86_64 /usr/bin/docker-compose
    chmod +x /usr/bin/docker-compose
    Option 2:
    yum install -y docker-compose

    3. Create the certificate files

    mkdir /certs
    cd /certs
    # Generate the private key
    openssl genrsa -out /certs/harbor-ca.key
    # Issue a self-signed certificate
    openssl req -x509 -new -nodes -key /certs/harbor-ca.key -subj "/CN=harbor.ywx.net" -days 7120 -out /certs/harbor-ca.crt

    4. Install Harbor with HTTPS

    mkdir /apps
    cd /apps
    # Upload harbor-offline-installer-v2.1.2.tgz to /apps
    tar -xf harbor-offline-installer-v2.1.2.tgz
    # Edit the harbor configuration file
    cd harbor 
    cp harbor.yml.tmpl harbor.yml
    # Configuration file contents
    vim harbor.yml
    # Configuration file of Harbor
    
    # The IP address or hostname to access admin UI and registry service.
    # DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
    hostname: 172.168.32.41
    
    # http related config
    http:
      # port for http, default is 80. If https enabled, this port will redirect to https port
      port: 80
    
    # https related config
    https:
      # https port for harbor, default is 443
      port: 443
      # The path of cert and key files for nginx
      certificate: /certs/harbor-ca.crt
      private_key: /certs/harbor-ca.key
    
    # # Uncomment following will enable tls communication between all harbor components
    # internal_tls:
    #   # set enabled to true means internal tls is enabled
    #   enabled: true
    #   # put your cert and key files on dir
    #   dir: /etc/harbor/tls/internal
    
    # Uncomment external_url if you want to enable external proxy
    # And when it enabled the hostname will no longer used
    # external_url: https://reg.mydomain.com:8433
    
    # The initial password of Harbor admin
    # It only works in first time to install harbor
    # Remember Change the admin password from UI after launching Harbor.
    harbor_admin_password: 123456
    ......
    
    # Install harbor
    ./install.sh

    Access https://172.168.32.41 or, via the domain name, https://harbor.ywx.net.
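    For docker clients (for example the k8s nodes) to log in to and push to this registry over HTTPS with the self-signed certificate, the CA has to be trusted per registry host. A minimal sketch, assuming the default library project that Harbor creates and the harbor.ywx.net /etc/hosts entry distributed in section VI:

    # on each docker client
    mkdir -p /etc/docker/certs.d/harbor.ywx.net
    scp 172.168.32.41:/certs/harbor-ca.crt /etc/docker/certs.d/harbor.ywx.net/ca.crt
    docker login harbor.ywx.net -u admin -p 123456
    # push a test image
    docker pull alpine:3.13
    docker tag alpine:3.13 harbor.ywx.net/library/alpine:3.13
    docker push harbor.ywx.net/library/alpine:3.13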

     

    VI. Kubernetes Cluster System Initialization

    Applies to all Kubernetes master and node nodes.

    1. Install ansible

    Deploy ansible on 172.168.32.41.

    yum install -y ansible
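    The rest of this document drives the hosts with ad-hoc shell scripts; if you prefer ansible, a minimal inventory sketch for this topology could look like the following (group names are illustrative, and it relies on the password-less SSH configured in step 4 below):

    cat >> /etc/ansible/hosts << 'EOF'
    [k8s-master]
    172.168.32.201
    172.168.32.202
    172.168.32.203

    [k8s-node]
    172.168.32.204
    172.168.32.205
    172.168.32.206
    EOF
    ansible all -m ping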

    2. System initialization and kernel parameter tuning

    System initialization

    yum install vim iotop bc gcc gcc-c++ glibc glibc-devel pcre \
    pcre-devel openssl openssl-devel zip unzip zlib-devel net-tools \
    lrzsz tree ntpdate telnet lsof tcpdump wget libevent libevent-devel \
    bc systemd-devel bash-completion traceroute -y
    
    ntpdate time1.aliyun.com
    cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
    echo "*/5 * * * * ntpdate time1.aliyun.com &> /dev/null && hwclock -w" >> /var/spool/cron/root
    
    systemctl stop firewalld
    systemctl disable firewalld
    systemctl stop NetworkManager
    systemctl disable NetworkManager

    Kernel parameter tuning

    cat > /etc/modules-load.d/ipvs.conf <<EOF
    # Load IPVS at boot
    ip_vs
    ip_vs_rr
    ip_vs_wrr
    ip_vs_sh
    nf_conntrack_ipv4
    EOF
    systemctl enable --now systemd-modules-load.service
    # Confirm the kernel modules loaded successfully
    lsmod | grep -e ip_vs -e nf_conntrack_ipv4
    # Install ipset and ipvsadm
    yum install -y ipset ipvsadm
    # Configure kernel parameters
    cat <<EOF >  /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl --system
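    Note (an assumption not spelled out in the original text): the two net.bridge.* keys only exist once the br_netfilter module is loaded, so if sysctl --system complains about them, load the module and make it persistent:

    modprobe br_netfilter
    echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # load at every boot
    sysctl --system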

    3. Configure /etc/sysctl.conf

    vim /etc/sysctl.conf
    
    # sysctl settings are defined through files in
    # /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
    #
    # Vendors settings live in /usr/lib/sysctl.d/.
    # To override a whole file, create a new file with the same in
    # /etc/sysctl.d/ and put new settings there. To override
    # only specific settings, add a file with a lexically later
    # name in /etc/sysctl.d/ and put new settings there.
    #
    # For more information, see sysctl.conf(5) and sysctl.d(5).
    
    # Controls source route verification
     net.ipv4.conf.default.rp_filter = 1
     net.ipv4.ip_nonlocal_bind = 1
     net.ipv4.ip_forward = 1
     # Do not accept source routing
     net.ipv4.conf.default.accept_source_route = 0
     # Controls the System Request debugging functionality of the kernel
     kernel.sysrq = 0
     # Controls whether core dumps will append the PID to the core filename.
     # Useful for debugging multi-threaded applications.
     kernel.core_uses_pid = 1
     # Controls the use of TCP syncookies
     net.ipv4.tcp_syncookies = 1
     # Disable netfilter on bridges.
     net.bridge.bridge-nf-call-ip6tables = 1
     net.bridge.bridge-nf-call-iptables = 1
     net.bridge.bridge-nf-call-arptables = 0
     # Controls the default maximum size of a message queue
     kernel.msgmnb = 65536
     # Controls the maximum size of a message, in bytes
     kernel.msgmax = 65536
     # Controls the maximum shared segment size, in bytes
     kernel.shmmax = 68719476736
     # Controls the maximum number of shared memory segments, in pages
     kernel.shmall = 4294967296
     # TCP kernel paramater
     net.ipv4.tcp_mem = 786432 1048576 1572864
     net.ipv4.tcp_rmem = 4096 87380 4194304
     net.ipv4.tcp_wmem = 4096 16384 4194304
     net.ipv4.tcp_window_scaling = 1
     net.ipv4.tcp_sack = 1
     # socket buffer
     net.core.wmem_default = 8388608
     net.core.rmem_default = 8388608
     net.core.rmem_max = 16777216
     net.core.wmem_max = 16777216
     net.core.netdev_max_backlog = 262144
     net.core.somaxconn = 20480
     net.core.optmem_max = 81720
     # TCP conn
     net.ipv4.tcp_max_syn_backlog = 262144
     net.ipv4.tcp_syn_retries = 3
     net.ipv4.tcp_retries1 = 3
     net.ipv4.tcp_retries2 = 15
     # tcp conn reuse
     net.ipv4.tcp_timestamps = 0
     net.ipv4.tcp_tw_reuse = 0
     net.ipv4.tcp_tw_recycle = 0
     net.ipv4.tcp_fin_timeout = 1
     net.ipv4.tcp_max_tw_buckets = 20000
     net.ipv4.tcp_max_orphans = 3276800
     net.ipv4.tcp_synack_retries = 1
     net.ipv4.tcp_syncookies = 1
     # keepalive conn
     net.ipv4.tcp_keepalive_time = 300
     net.ipv4.tcp_keepalive_intvl = 30
     net.ipv4.tcp_keepalive_probes = 3
     net.ipv4.ip_local_port_range = 10001 65000
     # swap
     vm.overcommit_memory = 0
     vm.swappiness = 10
     #net.ipv4.conf.eth1.rp_filter = 0
     #net.ipv4.conf.lo.arp_ignore = 1
     #net.ipv4.conf.lo.arp_announce = 2
     #net.ipv4.conf.all.arp_ignore = 1
     #net.ipv4.conf.all.arp_announce = 2

    sysctl.conf distribution script:

    #!/bin/bash
    # Target host list
    IP="
    172.168.32.201
    172.168.32.202
    172.168.32.203
    172.168.32.204
    172.168.32.205
    172.168.32.206
    172.168.32.211
    172.168.32.212
    172.168.32.213"
    
    
    for node in ${IP};do
    
    #sshpass -p 123456 ssh-copy-id ${node}  -o StrictHostKeyChecking=no &> /dev/null
     scp -r /apps/sysctl.conf ${node}:/etc/
     ssh ${node} "/usr/sbin/sysctl --system"
      if [ $? -eq 0 ];then
        echo "${node} iptable_k8s.conf copy完成" 
      else
        echo "${node} iptable_k8s.conf copy失败" 
      fi
    done

    4. Configure password-less SSH from the ansible host 172.168.32.41 to all Kubernetes cluster hosts

    Key distribution script:

    cat scp.sh
    #!/bin/bash
    # Target host list
    IP="
    172.168.32.201
    172.168.32.202
    172.168.32.203
    172.168.32.204
    172.168.32.205
    172.168.32.206
    172.168.32.211
    172.168.32.212
    172.168.32.213"
    
    for node in ${IP};do
    
    sshpass -p 123456 ssh-copy-id ${node}  -o StrictHostKeyChecking=no &> /dev/null
      if [ $? -eq 0 ];then
        echo "${node} 秘钥copy完成" 
      else
        echo "${node} 秘钥copy失败" 
      fi
    done

    Configure the keys:

    [root@harbor tmp]# ssh-keygen
    [root@harbor tmp]# bash scp.sh 
    172.168.32.201 key copied successfully
    172.168.32.202 key copied successfully
    172.168.32.203 key copied successfully
    172.168.32.204 key copied successfully
    172.168.32.205 key copied successfully
    172.168.32.206 key copied successfully
    172.168.32.211 key copied successfully
    172.168.32.212 key copied successfully
    172.168.32.213 key copied successfully

    5. Distribute the hosts file

    Run on 172.168.32.41.

    Copy the hosts file to all cluster nodes:

    vim /etc/hosts
    172.168.32.201 k8s-master01
    172.168.32.202 k8s-master02
    172.168.32.203 k8s-master03
    172.168.32.204 k8s-node01
    172.168.32.205 k8s-node02
    172.168.32.206 k8s-node03
    #172.168.32.211 etcd01
    #172.168.32.212 etcd02
    #172.168.32.213 etcd03
    172.168.32.41 harbor.ywx.net
    172.168.32.248 www.ywx.net

    hosts copy script:

    #!/bin/bash
    # Target host list
    IP="
    172.168.32.201
    172.168.32.202
    172.168.32.203
    172.168.32.204
    172.168.32.205
    172.168.32.206
    172.168.32.211
    172.168.32.212
    172.168.32.213"
    
    
    for node in ${IP};do
    
    scp -r /etc/hosts root@${node}:/etc/hosts
      if [ $? -eq 0 ];then
        echo "${node} hosts copy完成" 
      else
        echo "${node} hosts copy失败" 
      fi
    done

    6. Cluster time synchronization

    vim time_tongbu.sh
    
    #!/bin/bash
    # Target host list
    IP="
    172.168.32.201
    172.168.32.202
    172.168.32.203
    172.168.32.204
    172.168.32.205
    172.168.32.206
    172.168.32.211
    172.168.32.212
    172.168.32.213"
    
    
    for node in ${IP};do
    ssh ${node} "/usr/sbin/ntpdate time1.aliyun.com &> /dev/null && hwclock -w"
      if [ $? -eq 0 ];then
         echo "${node}--->time sysnc success!!!"
      else
         echo "${node}--->time sysnc false!!!"
      fi
    done

    Synchronize cluster time:

    [root@harbor apps]# bash time_tongbu.sh
    172.168.32.201--->time sync success!!!
    172.168.32.202--->time sync success!!!
    172.168.32.203--->time sync success!!!
    172.168.32.204--->time sync success!!!
    172.168.32.205--->time sync success!!!
    172.168.32.206--->time sync success!!!
    172.168.32.211--->time sync success!!!
    172.168.32.212--->time sync success!!!
    172.168.32.213--->time sync success!!!

    Time synchronization test script:

    #!/bin/bash
    # Target host list
    IP="
    172.168.32.201
    172.168.32.202
    172.168.32.203
    172.168.32.204
    172.168.32.205
    172.168.32.206
    172.168.32.211
    172.168.32.212
    172.168.32.213"
    
    
    for node in ${IP};do
    echo "------------"
    echo ${node} 
    ssh ${node} 'echo "$(hostname)-$(/usr/bin/date)"'
    done

    Verify that cluster time is in sync:

    [root@harbor apps]# bash date.sh 
    ------------
    172.168.32.201
    -Sat May 22 06:12:56 CST 2021
    ------------
    172.168.32.202
    -Sat May 22 06:12:56 CST 2021
    ------------
    172.168.32.203
    -Sat May 22 06:12:56 CST 2021
    ------------
    172.168.32.204
    -Sat May 22 06:12:57 CST 2021
    ------------
    172.168.32.205
    -Sat May 22 06:12:57 CST 2021
    ------------
    172.168.32.206
    -Sat May 22 06:12:57 CST 2021
    ------------
    172.168.32.211
    -Sat May 22 06:12:57 CST 2021
    ------------
    172.168.32.212
    -Sat May 22 06:12:57 CST 2021
    ------------
    172.168.32.213
    -Sat May 22 06:12:57 CST 2021

    7. Disable swap

    vim swapoff.sh
    
    #!/bin/bash
    # Target host list
    IP="
    172.168.32.201
    172.168.32.202
    172.168.32.203
    172.168.32.204
    172.168.32.205
    172.168.32.206
    172.168.32.211
    172.168.32.212
    172.168.32.213"
    
    
    for node in ${IP};do
    
    #sshpass -p 123456 ssh-copy-id ${node}  -o StrictHostKeyChecking=no &> /dev/null
     ssh ${node} "swapoff -a && sed -i '/swap/s@UUID@#UUID@g' /etc/fstab"
      if [ $? -eq 0 ];then
        echo "${node} swap关闭成功" 
      else
        echo "${node} swap关闭失败" 
      fi
    done

    Run the swap-off script:

    [root@harbor apps]# bash swapoff.sh 
    172.168.32.201 swap disabled successfully
    172.168.32.202 swap disabled successfully
    172.168.32.203 swap disabled successfully
    172.168.32.204 swap disabled successfully
    172.168.32.205 swap disabled successfully
    172.168.32.206 swap disabled successfully
    172.168.32.211 swap disabled successfully
    172.168.32.212 swap disabled successfully
    172.168.32.213 swap disabled successfully
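    A quick cross-check that swap really is off on every host (a hypothetical helper using the same IP list as the scripts above):

    #!/bin/bash
    # print the total swap size per host; it should be 0 after swapoff -a
    for node in 172.168.32.20{1..6} 172.168.32.21{1..3};do
      echo -n "${node}: "
      ssh ${node} "free -m | awk '/Swap/{print \$2\" MB swap\"}'"
    done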

    VII. Deploying Kubernetes v1.20 with kubeadm

    Run on all master and node nodes.

    1. Install docker 19.03.8

    docker_install.sh

    #! /bin/bash
    ver=19.03.8
    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
    yum makecache fast
    yum install -y docker-ce-$ver docker-ce-cli-$ver
    systemctl start docker
    systemctl enable docker

    docker_scp.sh

    #!/bin/bash
    # Target host list
    IP="
    172.168.32.201
    172.168.32.202
    172.168.32.203
    172.168.32.204
    172.168.32.205
    172.168.32.206
    172.168.32.211
    172.168.32.212
    172.168.32.213"
    
    
    for node in ${IP};do
    
    #sshpass -p 123456 ssh-copy-id ${node}  -o StrictHostKeyChecking=no &> /dev/null
     scp -r /apps/docker_install.sh ${node}:/tmp/
     ssh ${node} "/usr/bin/bash /tmp/docker_install.sh"
      if [ $? -eq 0 ];then
        echo "${node}----> docker install success完成" 
      else
        echo "${node}----> docker install false失败" 
      fi
    done

    From 172.168.32.41, use the script to install docker on all Kubernetes master and node nodes in batch:

    [root@harbor apps]#bash docker_scp.sh

    Configure the Aliyun registry mirror for docker:

    cat > /etc/docker/daemon.json <<EOF
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2",
      "storage-opts": [
        "overlay2.override_kernel_check=true"
      ],
      "registry-mirrors": ["https://uyah70su.mirror.aliyuncs.com"]
    }
    EOF
    
     systemctl daemon-reload 
     systemctl restart docker
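    Confirm that the daemon.json changes took effect (kubeadm expects the systemd cgroup driver):

    docker info 2>/dev/null | grep -i "cgroup driver"         # Cgroup Driver: systemd
    docker info 2>/dev/null | grep -A1 -i "registry mirrors"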

    2. Add the Aliyun Kubernetes yum repository

     cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    yum makecache

    3. Install kubeadm, kubelet and kubectl

    Install version v1.20:

    yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0
     systemctl enable kubelet
     # Do not start kubelet yet; kubeadm will start it later
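    A quick version check on each node before proceeding:

    kubeadm version -o short            # v1.20.0
    kubelet --version                   # Kubernetes v1.20.0
    kubectl version --client --short    # Client Version: v1.20.0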

    4. Deploy the Kubernetes master

    4.1 Kubernetes initialization

    kubeadm init \
    --apiserver-advertise-address=172.168.32.201 \
    --control-plane-endpoint=172.168.32.248 \
    --apiserver-bind-port=6443 \
    --kubernetes-version=v1.20.0 \
    --pod-network-cidr=10.244.0.0/16 \
    --service-cidr=10.96.0.0/16 \
    --service-dns-domain=cluster.local \
    --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
    --upload-certs \
    --ignore-preflight-errors=swap
    
    --apiserver-advertise-address=172.168.32.201 # local IP the API server listens on
    --control-plane-endpoint=172.168.32.248 # a stable IP address or DNS name for the control plane, i.e. a long-lived, highly available VIP or domain; kubeadm multi-master HA is built on this flag
    
    --apiserver-bind-port=6443 # port the API server binds to, default 6443
    --kubernetes-version=v1.20.0 # Kubernetes version to install, normally the same as "kubeadm version"
    --pod-network-cidr=10.244.0.0/16 # pod IP address range
    --service-cidr=10.96.0.0/16 # service IP address range
    --service-dns-domain=cluster.local # internal cluster domain, default cluster.local; the DNS service (kube-dns/coredns) creates records under it
    
    --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers # image repository to pull from, default k8s.gcr.io
    --upload-certs # upload the certificates so additional master nodes can be added
    --ignore-preflight-errors=swap # ignore specific preflight errors, e.g. swap; "all" ignores everything

    Run this on only one master node; here it is done on k8s-master01 (172.168.32.201).

    [root@k8s-master01 ~]# kubeadm init \
    --apiserver-advertise-address=172.168.32.201 \
    --control-plane-endpoint=172.168.32.248 \
    --apiserver-bind-port=6443 \
    --kubernetes-version=v1.20.0 \
    --pod-network-cidr=10.244.0.0/16 \
    --service-cidr=10.96.0.0/16 \
    --service-dns-domain=cluster.local \
    --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
    --upload-certs \
    --ignore-preflight-errors=swap
    
    ......
    Your Kubernetes control-plane has initialized successfully!
    # Steps to start using the cluster
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
      export KUBECONFIG=/etc/kubernetes/admin.conf
      
    # Deploy the cluster network
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    # Join additional master nodes
    You can now join any number of the control-plane node running the following command on each as root:
    
      kubeadm join 172.168.32.248:6443 --token o1yuan.zos9j6zkar9ldlbc \
        --discovery-token-ca-cert-hash sha256:7a0c77a53025f28d34033337a0e17971899cbf6cdfdb547aa02b73f99f06005b \
        --control-plane --certificate-key a552bdb7c4844682faeff86f6a4eaedd28c5ca52769cc9178e56b5bc245e9fc7
    # If the token or certificate key expires, create a new one
    Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
    As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
    "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
    
    # Join worker nodes
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 172.168.32.248:6443 --token o1yuan.zos9j6zkar9ldlbc \
        --discovery-token-ca-cert-hash sha256:7a0c77a53025f28d34033337a0e17971899cbf6cdfdb547aa02b73f99f06005b 

    4.2 Configure the environment to use the cluster

    [root@k8s-master01 ~]#  mkdir -p $HOME/.kube
    [root@k8s-master01 ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    [root@k8s-master01 ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config
    [root@k8s-master01 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
    [root@k8s-master01 ~]# kubectl get nodes
    NAME           STATUS     ROLES                  AGE    VERSION
    k8s-master01   NotReady   control-plane,master   5m6s   v1.20.0
    # The NotReady status is because the cluster network plugin has not been deployed yet

    4.3 Deploy the Calico cluster network plugin

    https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network

    Note: deploy only one of the network plugins listed there; Calico is recommended.

    Calico is a pure layer-3 data center networking solution that supports a wide range of platforms, including Kubernetes and OpenStack.

    On every compute node, Calico uses the Linux kernel to implement an efficient virtual router (vRouter) that handles data forwarding, and each vRouter advertises the routes of the workloads running on it to the rest of the Calico network via BGP.

    In addition, Calico implements Kubernetes network policy, providing ACL functionality.

    https://docs.projectcalico.org/getting-started/kubernetes/quickstart

    Download calico.yaml:

    mkdir /apps
    cd /apps
    wget https://docs.projectcalico.org/manifests/calico.yaml

    After downloading, modify the pod network (CALICO_IPV4POOL_CIDR) defined inside so that it matches the value passed to kubeadm init.

    CALICO_IPV4POOL_CIDR must match the --pod-network-cidr=10.244.0.0/16 used when the master was initialized; change the following:
    
     #- name: CALICO_IPV4POOL_CIDR
     #             value: "192.168.0.0/16"
     to
    - name: CALICO_IPV4POOL_CIDR
                 value: "10.244.0.0/16"

    Apply calico.yaml:

    [root@k8s-master01 apps]# kubectl apply -f calico.yaml 
    configmap/calico-config created
    customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
    customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
    clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
    clusterrole.rbac.authorization.k8s.io/calico-node created
    clusterrolebinding.rbac.authorization.k8s.io/calico-node created
    daemonset.apps/calico-node created
    serviceaccount/calico-node created
    deployment.apps/calico-kube-controllers created
    serviceaccount/calico-kube-controllers created
    poddisruptionbudget.policy/calico-kube-controllers created
    
    [root@k8s-master01 apps]# kubectl get pods -n kube-system
    NAME                                       READY   STATUS    RESTARTS   AGE
    calico-kube-controllers-7f4f5bf95d-c9gn8   1/1     Running   0          5m43s
    calico-node-882fl                          1/1     Running   0          5m43s
    coredns-54d67798b7-5dc67                   1/1     Running   0          22m
    coredns-54d67798b7-kxdgz                   1/1     Running   0          22m
    etcd-k8s-master01                          1/1     Running   0          22m
    kube-apiserver-k8s-master01                1/1     Running   0          22m
    kube-controller-manager-k8s-master01       1/1     Running   0          22m
    kube-proxy-5g22z                           1/1     Running   0          22m
    kube-scheduler-k8s-master01                1/1     Running   0          22m
    # Calico has been deployed
    
    [root@k8s-master01 apps]# kubectl get nodes
    NAME           STATUS   ROLES                  AGE   VERSION
    k8s-master01   Ready    control-plane,master   23m   v1.20.0
    # With the Calico network deployed, the node status becomes Ready

    5. Join the other master nodes

    Run the following command on the other master nodes, 172.168.32.202 and 172.168.32.203:

    kubeadm join 172.168.32.248:6443 --token o1yuan.zos9j6zkar9ldlbc \
        --discovery-token-ca-cert-hash sha256:7a0c77a53025f28d34033337a0e17971899cbf6cdfdb547aa02b73f99f06005b \
        --control-plane --certificate-key a552bdb7c4844682faeff86f6a4eaedd28c5ca52769cc9178e56b5bc245e9fc7

    k8s-master02 172.168.32.202

    To start administering your cluster from this node, you need to run the following as a regular user:
    
        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Run 'kubectl get nodes' to see this node join the cluster.

    k8s-master03 172.168.32.203

    ......
    To start administering your cluster from this node, you need to run the following as a regular user:
    
        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Run 'kubectl get nodes' to see this node join the cluster.

    Configure the cluster environment on k8s-master02 and k8s-master03:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    The master nodes have been added:

    [root@k8s-master01 apps]# kubectl get nodes
    NAME           STATUS   ROLES                  AGE     VERSION
    k8s-master01   Ready    control-plane,master   37m     v1.20.0
    k8s-master02   Ready    control-plane,master   2m34s   v1.20.0
    k8s-master03   Ready    control-plane,master   3m29s   v1.20.0

    6. Join the node (worker) nodes

    Run the following command on all node nodes (172.168.32.204, 172.168.32.205 and 172.168.32.206):

    kubeadm join 172.168.32.248:6443 --token o1yuan.zos9j6zkar9ldlbc \
        --discovery-token-ca-cert-hash sha256:7a0c77a53025f28d34033337a0e17971899cbf6cdfdb547aa02b73f99f06005b 

    k8s-node01 172.168.32.204

    [root@k8s-node01 ~]# kubeadm join 172.168.32.248:6443 --token o1yuan.zos9j6zkar9ldlbc \
    >     --discovery-token-ca-cert-hash sha256:7a0c77a53025f28d34033337a0e17971899cbf6cdfdb547aa02b73f99f06005b 
    [preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

    k8s-node02 172.168.32.205

    [root@k8s-node02 ~]# kubeadm join 172.168.32.248:6443 --token o1yuan.zos9j6zkar9ldlbc \
    >     --discovery-token-ca-cert-hash sha256:7a0c77a53025f28d34033337a0e17971899cbf6cdfdb547aa02b73f99f06005b 
    [preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

    k8s-node03 172.168.32.206

    [root@k8s-node03 ~]# kubeadm join 172.168.32.248:6443 --token o1yuan.zos9j6zkar9ldlbc \
    >     --discovery-token-ca-cert-hash sha256:7a0c77a53025f28d34033337a0e17971899cbf6cdfdb547aa02b73f99f06005b 
    [preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

    On a master node, check the Kubernetes cluster status with "kubectl get nodes":

    [root@k8s-master01 apps]# kubectl get nodes
    NAME           STATUS   ROLES                  AGE   VERSION
    k8s-master01   Ready    control-plane,master   52m   v1.20.0
    k8s-master02   Ready    control-plane,master   17m   v1.20.0
    k8s-master03   Ready    control-plane,master   18m   v1.20.0
    k8s-node01     Ready    <none>                 10m   v1.20.0
    k8s-node02     Ready    <none>                 10m   v1.20.0
    k8s-node03     Ready    <none>                 10m   v1.20.0
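    Optionally, run a small smoke test to confirm that scheduling and networking work end to end (deployment and service names are illustrative):

    kubectl create deployment nginx-test --image=nginx:alpine --replicas=2
    kubectl expose deployment nginx-test --port=80 --type=NodePort
    kubectl get pods,svc -o wide
    # curl http://<any-node-ip>:<assigned-nodeport> should return the nginx welcome page
    kubectl delete svc,deployment nginx-test    # clean up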

    VIII. Deploying the Dashboard

    Download the dashboard manifest from https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml and manually change the Service to expose NodePort 30001.

    dashboard.yaml manifest:

    # Copyright 2017 The Kubernetes Authors.
    #
    # Licensed under the Apache License, Version 2.0 (the "License");
    # you may not use this file except in compliance with the License.
    # You may obtain a copy of the License at
    #
    #     http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    
    apiVersion: v1
    kind: Namespace
    metadata:
      name: kubernetes-dashboard
    
    ---
    
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    
    ---
    
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      type: NodePort
      ports:
        - port: 443
          targetPort: 8443
          nodePort: 30001
      selector:
        k8s-app: kubernetes-dashboard
    
    ---
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-certs
      namespace: kubernetes-dashboard
    type: Opaque
    
    ---
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-csrf
      namespace: kubernetes-dashboard
    type: Opaque
    data:
      csrf: ""
    
    ---
    
    apiVersion: v1
    kind: Secret
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-key-holder
      namespace: kubernetes-dashboard
    type: Opaque
    
    ---
    
    kind: ConfigMap
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard-settings
      namespace: kubernetes-dashboard
    
    ---
    
    kind: Role
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    rules:
      # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
      - apiGroups: [""]
        resources: ["secrets"]
        resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
        verbs: ["get", "update", "delete"]
        # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
      - apiGroups: [""]
        resources: ["configmaps"]
        resourceNames: ["kubernetes-dashboard-settings"]
        verbs: ["get", "update"]
        # Allow Dashboard to get metrics.
      - apiGroups: [""]
        resources: ["services"]
        resourceNames: ["heapster", "dashboard-metrics-scraper"]
        verbs: ["proxy"]
      - apiGroups: [""]
        resources: ["services/proxy"]
        resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
        verbs: ["get"]
    
    ---
    
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
    rules:
      # Allow Metrics Scraper to get metrics from the Metrics server
      - apiGroups: ["metrics.k8s.io"]
        resources: ["pods", "nodes"]
        verbs: ["get", "list", "watch"]
    
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: kubernetes-dashboard
    subjects:
      - kind: ServiceAccount
        name: kubernetes-dashboard
        namespace: kubernetes-dashboard
    
    ---
    
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: kubernetes-dashboard
    subjects:
      - kind: ServiceAccount
        name: kubernetes-dashboard
        namespace: kubernetes-dashboard
    
    ---
    
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kubernetes-dashboard
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          k8s-app: kubernetes-dashboard
      template:
        metadata:
          labels:
            k8s-app: kubernetes-dashboard
        spec:
          containers:
            - name: kubernetes-dashboard
              image: kubernetesui/dashboard:v2.0.3
              imagePullPolicy: Always
              ports:
                - containerPort: 8443
                  protocol: TCP
              args:
                - --auto-generate-certificates
                - --namespace=kubernetes-dashboard
                # Uncomment the following line to manually specify Kubernetes API server Host
                # If not specified, Dashboard will attempt to auto discover the API server and connect
                # to it. Uncomment only if the default does not work.
                # - --apiserver-host=http://my-address:port
              volumeMounts:
                - name: kubernetes-dashboard-certs
                  mountPath: /certs
                  # Create on-disk volume to store exec logs
                - mountPath: /tmp
                  name: tmp-volume
              livenessProbe:
                httpGet:
                  scheme: HTTPS
                  path: /
                  port: 8443
                initialDelaySeconds: 30
                timeoutSeconds: 30
              securityContext:
                allowPrivilegeEscalation: false
                readOnlyRootFilesystem: true
                runAsUser: 1001
                runAsGroup: 2001
          volumes:
            - name: kubernetes-dashboard-certs
              secret:
                secretName: kubernetes-dashboard-certs
            - name: tmp-volume
              emptyDir: {}
          serviceAccountName: kubernetes-dashboard
          nodeSelector:
            "kubernetes.io/os": linux
          # Comment the following tolerations if Dashboard must not be deployed on master
          tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
    
    ---
    
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      name: dashboard-metrics-scraper
      namespace: kubernetes-dashboard
    spec:
      ports:
        - port: 8000
          targetPort: 8000
      selector:
        k8s-app: dashboard-metrics-scraper
    
    ---
    
    kind: Deployment
    apiVersion: apps/v1
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      name: dashboard-metrics-scraper
      namespace: kubernetes-dashboard
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          k8s-app: dashboard-metrics-scraper
      template:
        metadata:
          labels:
            k8s-app: dashboard-metrics-scraper
          annotations:
            seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
        spec:
          containers:
            - name: dashboard-metrics-scraper
              image: kubernetesui/metrics-scraper:v1.0.4
              ports:
                - containerPort: 8000
                  protocol: TCP
              livenessProbe:
                httpGet:
                  scheme: HTTP
                  path: /
                  port: 8000
                initialDelaySeconds: 30
                timeoutSeconds: 30
              volumeMounts:
              - mountPath: /tmp
                name: tmp-volume
              securityContext:
                allowPrivilegeEscalation: false
                readOnlyRootFilesystem: true
                runAsUser: 1001
                runAsGroup: 2001
          serviceAccountName: kubernetes-dashboard
          nodeSelector:
            "kubernetes.io/os": linux
          # Comment the following tolerations if Dashboard must not be deployed on master
          tolerations:
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
          volumes:
            - name: tmp-volume
              emptyDir: {}

    Apply dashboard.yaml:

    [root@k8s-master01 apps]# kubectl apply -f dashboard.yaml 
    namespace/kubernetes-dashboard created
    serviceaccount/kubernetes-dashboard created
    service/kubernetes-dashboard created
    secret/kubernetes-dashboard-certs created
    secret/kubernetes-dashboard-csrf created
    secret/kubernetes-dashboard-key-holder created
    configmap/kubernetes-dashboard-settings created
    role.rbac.authorization.k8s.io/kubernetes-dashboard created
    clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
    rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
    clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
    deployment.apps/kubernetes-dashboard created
    service/dashboard-metrics-scraper created
    deployment.apps/dashboard-metrics-scraper created

    The dashboard Service and Pods have been created in the kubernetes-dashboard namespace:

    [root@k8s-master01 apps]# kubectl get svc -n kubernetes-dashboard
    NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
    dashboard-metrics-scraper   ClusterIP   10.96.110.28   <none>        8000/TCP        3m11s
    kubernetes-dashboard        NodePort    10.96.56.144   <none>        443:30001/TCP   3m11s
    
    [root@k8s-master01 apps]# kubectl get pod -n kubernetes-dashboard
    NAME                                         READY   STATUS    RESTARTS   AGE
    dashboard-metrics-scraper-79c5968bdc-tlzbx   1/1     Running   0          2m44s
    kubernetes-dashboard-9f9799597-l6p4v         1/1     Running   0          2m44s

    Test with https://nodeIP:30001.
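    The dashboard login page asks for a bearer token, which the manifest above does not create. A common approach (an addition, not part of the original manifest) is to create an admin ServiceAccount and read its token:

    kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard
    kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin \
        --serviceaccount=kubernetes-dashboard:dashboard-admin
    kubectl -n kubernetes-dashboard describe secret \
        $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin-token | awk '{print $1}')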

    IX. Common Kubernetes Issues

    1. cgroupfs vs systemd

    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

    Fix 1:

    On all master and node nodes:

    sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
    systemctl daemon-reload && systemctl restart kubelet

    Fix 2:

    On all master and node nodes, modify the docker configuration file:

    cat > /etc/docker/daemon.json <<EOF
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2",
      "storage-opts": [
        "overlay2.override_kernel_check=true"
      ],
      "registry-mirrors": ["https://uyah70su.mirror.aliyuncs.com"]
    }
    EOF
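    As with the daemon.json change earlier in section VII, reload and restart docker (and kubelet) afterwards so the new cgroup driver takes effect:

    systemctl daemon-reload
    systemctl restart docker
    systemctl restart kubelet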

    2. Expired certificate key when adding a master node

    # Regenerate the certificate key used to add new master nodes
    kubeadm init phase upload-certs --upload-certs
    
    I0509 18:08:59.985444    7521 version.go:251] remote version is much newer: v1.21.0; falling back to: stable-1.20
    [upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
    [upload-certs] Using certificate key:
    f22f123c33009cd57d10b6eebc3d78d7a204cc22eff795298c43c5dd1b452e74
    
    Master node (run on the new master):
    kubeadm join 192.168.32.248:6443 --token edba93.h7pb7iygmvn2kgrm \
        --discovery-token-ca-cert-hash sha256:647a8ed047a042d258a0ef79faeb5f458e01e2b5281d553bf5c524d32c65c106 \
        --control-plane --certificate-key f22f123c33009cd57d10b6eebc3d78d7a204cc22eff795298c43c5dd1b452e74

    3. Join command not recorded or token expired when adding a node

    # If the join command printed by kubeadm init was not recorded, it can be regenerated with: kubeadm token create --print-join-command

    # Generate a new token on a master
     kubeadm token create --print-join-command
    
    kubeadm join 172.168.32.248:6443 --token x9dqr4.r7n0spisz9scdfbj     --discovery-token-ca-cert-hash sha256:7a0c77a53025f28d34033337a0e17971899cbf6cdfdb547aa02b73f99f06005b 
    
    Node (run on the node):
    kubeadm join 192.168.32.248:6443 --token edba93.h7pb7iygmvn2kgrm \
        --discovery-token-ca-cert-hash sha256:647a8ed047a042d258a0ef79faeb5f458e01e2b5281d553bf5c524d32c65c106 

    4. Cluster time out of sync

    Synchronize the time across the cluster servers.

    5. Recreate a token to join a node

    1. Regenerate the token

    [root@k8s-master ~]# kubeadm token create
    kk0ee6.nhvz5p85avmzyof3
    [root@k8s-master ~]# kubeadm token list
    TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
    bgis60.aubeva3pjl9vf2qx   6h        2020-02-04T17:24:00+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
    kk0ee6.nhvz5p85avmzyof3   23h       2020-02-05T11:02:44+08:00   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token

    2. Get the SHA-256 hash of the CA certificate

    [root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
    9db128fe4c68b0e65c19bb49226fc717e64e790d23816c5615ad6e21fbe92020

    3. Join the new node k8s-node4

    [root@k8s-node4 ~]# kubeadm join --token kk0ee6.nhvz5p85avmzyof3 --discovery-token-ca-cert-hash sha256:9db128fe4c68b0e65c19bb49226fc717e64e790d23816c5615ad6e21fbe92020  192.168.31.35:6443
    [preflight] Running pre-flight checks
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Activating the kubelet service
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

     
