  • Adding a brand-new node to the cluster

    Preface

    Building on all of the previous articles, this post describes how to add a completely new node to the cluster.

    Set up name resolution for the new node

    Method 1: hosts file resolution

    If you use hosts-file resolution, add the following entry to the hosts file on every node:

    10.0.20.15 node05 node05.k8s.com
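
    Since the entry has to land on every node, a small loop can push it everywhere at once. This is a sketch, assuming the node names below match your cluster and that root SSH access is already set up:

    # Append the entry on each node unless it is already present
    for host in node01 node02 node03 node04 node05; do
      ssh root@${host} "grep -q 'node05.k8s.com' /etc/hosts || echo '10.0.20.15 node05 node05.k8s.com' >> /etc/hosts"
    done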
    

    Method 2: BIND resolution

    If you use an internal BIND server for DNS resolution, you only need to add one A record:

    node05  IN  A   10.0.20.15
    

    Reload the BIND configuration:

    rndc reload
    

    Testing

    Run a test from node01:

    [root@node01 work]# ping -c 1 node05
    PING node05.k8s.com (10.0.20.15) 56(84) bytes of data.
    64 bytes from 10.0.20.15 (10.0.20.15): icmp_seq=1 ttl=64 time=0.122 ms
    
    --- node05.k8s.com ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms
    [root@node01 work]# ping -c 1 node05.k8s.com
    PING node05.k8s.com (10.0.20.15) 56(84) bytes of data.
    64 bytes from 10.0.20.15 (10.0.20.15): icmp_seq=1 ttl=64 time=0.121 ms
    
    --- node05.k8s.com ping statistics ---
    1 packets transmitted, 1 received, 0% packet loss, time 0ms
    rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms
    

    Distribute the SSH key pair

    This step is optional; it just makes pushing files to the new node more convenient.

    ssh-copy-id -i ~/.ssh/id_rsa.pub node05
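
    If node01 does not already have a key pair (it normally will from the earlier articles), generate one first; a minimal sketch:

    # Create an RSA key pair non-interactively, only if none exists yet
    [ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa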
    

    Push the CA certificates

    ssh node05 "mkdir -p /etc/kubernetes/cert"
    scp ca*.pem ca-config.json node05:/etc/kubernetes/cert
    

    Deploying flanneld

    All of the following steps are executed on node01.

    Push the flanneld binaries

    scp flannel/{flanneld,mk-docker-opts.sh} node05:/opt/k8s/bin/
    

    Push the flanneld certificates and keys

    ssh node05 "mkdir -p /etc/flanneld/cert"
    scp flanneld*.pem node05:/etc/flanneld/cert
    

    Push the flanneld systemd unit file

    scp flanneld.service node05:/etc/systemd/system/
    

    Start flanneld

    ssh node05 "systemctl daemon-reload && systemctl enable flanneld && systemctl restart flanneld"
    ssh node05 "systemctl status flanneld|grep Active"
    

    Check the flannel network interface

    ssh node05 "/usr/sbin/ip addr show flannel.1|grep -w inet"
    

    Check the network data in etcd

    Check etcd to verify that a new CIDR allocation has been added:

    source /opt/k8s/bin/environment.sh
    etcdctl \
      --endpoints=${ETCD_ENDPOINTS} \
      --ca-file=/etc/kubernetes/cert/ca.pem \
      --cert-file=/etc/flanneld/cert/flanneld.pem \
      --key-file=/etc/flanneld/cert/flanneld-key.pem \
      ls ${FLANNEL_ETCD_PREFIX}/subnets
    

    Sample output:

    [root@node01 work]# etcdctl \
    >   --endpoints=${ETCD_ENDPOINTS} \
    >   --ca-file=/etc/kubernetes/cert/ca.pem \
    >   --cert-file=/etc/flanneld/cert/flanneld.pem \
    >   --key-file=/etc/flanneld/cert/flanneld-key.pem \
    >   ls ${FLANNEL_ETCD_PREFIX}/subnets
    /kubernetes/network/subnets/172.30.80.0-21
    /kubernetes/network/subnets/172.30.48.0-21
    /kubernetes/network/subnets/172.30.216.0-21
    /kubernetes/network/subnets/172.30.224.0-21
    /kubernetes/network/subnets/172.30.160.0-21
    

    The output above shows that flanneld on the new node is working correctly.

    Installing and configuring Docker

    These steps are performed directly on node05.

    Install

    yum install docker-ce-18.09.6 -y
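
    This assumes the docker-ce yum repository is already configured on node05, as it was for the other nodes. If it is not, it can be added first:

    # Add the upstream docker-ce repository (only needed if it is missing)
    yum install -y yum-utils
    yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo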
    

    Create the configuration file

    mkdir -p /etc/docker/
    cat > /etc/docker/daemon.json <<EOF
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "registry-mirrors": ["https://hjvrgh7a.mirror.aliyuncs.com"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2"
    }
    EOF
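
    A quick way to confirm the file is valid JSON before starting Docker (a sketch, using the Python that ships with CentOS 7):

    python -m json.tool /etc/docker/daemon.json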
    

    Modify the Docker startup parameters

    # Edit the unit file so that the [Service] section contains these two lines:
    vim /usr/lib/systemd/system/docker.service
    EnvironmentFile=-/run/flannel/docker
    ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
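
    Editing the packaged unit file works, but the change is overwritten on package upgrades. An alternative (a sketch, not part of the original steps) is a systemd drop-in:

    # Override only the relevant settings; the empty ExecStart= clears the packaged one
    mkdir -p /etc/systemd/system/docker.service.d
    cat > /etc/systemd/system/docker.service.d/flannel.conf <<EOF
    [Service]
    EnvironmentFile=-/run/flannel/docker
    ExecStart=
    ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
    EOF
    systemctl daemon-reload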
    

    Start and verify

    [root@localhost ~]# vim /usr/lib/systemd/system/docker.service 
    [root@localhost ~]# systemctl enable docker
    Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
    [root@localhost ~]# systemctl start docker
    

    Check that the docker0 bridge is the gateway of the flannel network

    [root@localhost ~]# ip addr show flannel.1 && /usr/sbin/ip addr show docker0
    5: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
        link/ether fe:64:95:d1:b5:7c brd ff:ff:ff:ff:ff:ff
        inet 172.30.216.0/32 scope global flannel.1
           valid_lft forever preferred_lft forever
    6: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
        link/ether 02:42:79:3b:10:f3 brd ff:ff:ff:ff:ff:ff
        inet 172.30.216.1/21 brd 172.30.223.255 scope global docker0
           valid_lft forever preferred_lft forever
    

    Installing kubelet

    These steps are performed on node01.

    Push the kubelet binary

    scp kubernetes/server/bin/kubelet node05:/opt/k8s/bin/
    

    Create the kubelet bootstrap kubeconfig file

    export BOOTSTRAP_TOKEN=$(kubeadm token create \
      --description kubelet-bootstrap-token \
      --groups system:bootstrappers:node05 \
      --kubeconfig ~/.kube/config)
    # Set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/cert/ca.pem \
      --embed-certs=true \
      --server=https://vip.k8s.com:8443 \
      --kubeconfig=kubelet-bootstrap-node05.kubeconfig
    # Set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=kubelet-bootstrap-node05.kubeconfig
    # Set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=kubelet-bootstrap-node05.kubeconfig
    # Set the default context
    kubectl config use-context default --kubeconfig=kubelet-bootstrap-node05.kubeconfig
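
    To confirm the generated kubeconfig points at the expected cluster and user, it can be inspected before distribution (an optional check):

    kubectl config view --kubeconfig=kubelet-bootstrap-node05.kubeconfig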
    

    Distribute the bootstrap kubeconfig file to the new node

    scp kubelet-bootstrap-node05.kubeconfig root@node05:/etc/kubernetes/kubelet-bootstrap.kubeconfig
    

    Check the token kubeadm created for the new node

    [root@node01 work]# kubeadm token list --kubeconfig ~/.kube/config
    TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION               EXTRA GROUPS
    cu4q2e.ogvim78s3p252ysg   7h        2019-12-06T17:44:24+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:node03
    nrypmb.35fyygbwr9failr5   7h        2019-12-06T17:44:23+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:node01
    r5luwb.6x6c5lnit5utyotz   7h        2019-12-06T17:44:23+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:node02
    # The entry below is the new one for node05
    ss66d3.yse8ia5bt1s06jmg   23h       2019-12-07T10:01:58+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:node05
    sx8n4m.vlltkkv8m23ogxg9   7h        2019-12-06T17:44:24+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:node04
    

    Check the Secret associated with the token

    [root@node01 work]# kubectl get secrets  -n kube-system|grep bootstrap-token
    bootstrap-token-cu4q2e                           bootstrap.kubernetes.io/token         7      16h
    bootstrap-token-nrypmb                           bootstrap.kubernetes.io/token         7      16h
    bootstrap-token-r5luwb                           bootstrap.kubernetes.io/token         7      16h
    # Matching the token listed above, the entry below is the newly created one
    bootstrap-token-ss66d3                           bootstrap.kubernetes.io/token         7      99s
    bootstrap-token-sx8n4m                           bootstrap.kubernetes.io/token         7      16h
    

    Create and distribute the kubelet configuration file

    cd /opt/k8s/work
    sed -e "s/##NODE_IP##/10.0.20.15/" kubelet-config.yaml.template > kubelet-config-10.0.20.15.yaml.template
    scp kubelet-config-10.0.20.15.yaml.template root@node05:/etc/kubernetes/kubelet-config.yaml
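
    An optional check that the ##NODE_IP## placeholder was substituted correctly on the new node:

    ssh root@node05 "grep 10.0.20.15 /etc/kubernetes/kubelet-config.yaml"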
    

    Create and distribute the kubelet systemd unit file

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    sed -e "s/##NODE_NAME##/node05/" kubelet.service.template > kubelet-node05.service
    scp kubelet-node05.service root@node05:/etc/systemd/system/kubelet.service
    

    Start the kubelet service

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    ssh root@node05 "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"
    ssh root@node05 "/usr/sbin/swapoff -a"
    ssh root@node05 "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"
    

    Manually approve the server cert CSR

    After a short wait, the certificate request has to be approved manually.
    For security reasons, the CSR-approving controllers do not automatically approve kubelet server certificate signing requests, so they must be approved by hand:

    kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
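
    The one-liner above approves every Pending CSR. To review the requests first and confirm they come from node05, they can be listed and approved individually (csr-xxxxx below is a placeholder name):

    kubectl get csr
    # approve a single request by name, e.g.:
    # kubectl certificate approve csr-xxxxx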
    

    Check the new node

    Listing the nodes again now shows that the new node was added successfully:

    [root@node01 work]# kubectl get nodes
    NAME     STATUS   ROLES    AGE   VERSION
    node01   Ready    <none>   16h   v1.15.6
    node02   Ready    <none>   16h   v1.15.6
    node03   Ready    <none>   16h   v1.15.6
    node04   Ready    <none>   16h   v1.15.6
    node05   Ready    <none>   74s   v1.15.6   # the new node shows as Ready
    

    Installing kube-proxy

    All of these steps are executed on node01.

    Push the kube-proxy binary

    cd /opt/k8s/work/
    scp kubernetes/server/bin/kube-proxy node05:/opt/k8s/bin/
    

    Distribute the kubeconfig file

    cd /opt/k8s/work/
    scp kube-proxy.kubeconfig root@node05:/etc/kubernetes/
    

    Create and distribute the kube-proxy configuration file

    cd /opt/k8s/work/
    sed -e "s/##NODE_NAME##/node05/" -e "s/##NODE_IP##/10.0.20.15/" kube-proxy-config.yaml.template > kube-proxy-config-node05.yaml.template
    scp kube-proxy-config-node05.yaml.template root@node05:/etc/kubernetes/kube-proxy-config.yaml
    

    Distribute the kube-proxy systemd unit file

    scp kube-proxy.service root@node05:/etc/systemd/system/
    

    Start the kube-proxy service

    cd /opt/k8s/work
    source /opt/k8s/bin/environment.sh
    ssh root@node05 "mkdir -p ${K8S_DIR}/kube-proxy"
    ssh root@node05 "modprobe ip_vs_rr"
    ssh root@node05 "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
    

    Check the startup result

    ssh root@node05 "systemctl status kube-proxy|grep Active"
    ssh root@node05 "netstat -lnpt|grep kube-prox"
    

    Check the IPVS routing rules

    ssh root@node05 "/usr/sbin/ipvsadm -ln"
    

    Output:

    [root@node01 work]# ssh root@node05 "/usr/sbin/ipvsadm -ln"
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  10.254.0.1:443 rr
      -> 10.0.20.11:6443              Masq    1      0          0         
      -> 10.0.20.12:6443              Masq    1      0          0         
      -> 10.0.20.13:6443              Masq    1      0          0         
    TCP  10.254.0.2:53 rr
      -> 172.30.48.2:53               Masq    1      0          0         
      -> 172.30.160.2:53              Masq    1      0          0         
    TCP  10.254.0.2:9153 rr
      -> 172.30.48.2:9153             Masq    1      0          0         
      -> 172.30.160.2:9153            Masq    1      0          0         
    UDP  10.254.0.2:53 rr
      -> 172.30.48.2:53               Masq    1      0          0         
      -> 172.30.160.2:53              Masq    1      0          0 
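
    As an extra check that traffic actually flows through these IPVS rules, the cluster DNS service seen above can be queried from the new node (a sketch, assuming bind-utils is installed on node05 and the default cluster.local domain is in use):

    ssh root@node05 "dig +short kubernetes.default.svc.cluster.local @10.254.0.2"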
    

    At this point, the process of adding a brand-new node to the cluster is complete.

    View the current cluster nodes

    [root@node01 work]# kubectl get nodes -o wide
    NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION              CONTAINER-RUNTIME
    node01   Ready    <none>   16h   v1.15.6   10.0.20.11    <none>        CentOS Linux 7 (Core)   5.4.1-1.el7.elrepo.x86_64   docker://18.9.6
    node02   Ready    <none>   16h   v1.15.6   10.0.20.12    <none>        CentOS Linux 7 (Core)   5.4.1-1.el7.elrepo.x86_64   docker://18.9.6
    node03   Ready    <none>   16h   v1.15.6   10.0.20.13    <none>        CentOS Linux 7 (Core)   5.4.1-1.el7.elrepo.x86_64   docker://18.9.6
    node04   Ready    <none>   16h   v1.15.6   10.0.20.14    <none>        CentOS Linux 7 (Core)   5.4.1-1.el7.elrepo.x86_64   docker://18.9.6
    node05   Ready    <none>   12m   v1.15.6   10.0.20.15    <none>        CentOS Linux 7 (Core)   5.4.1-1.el7.elrepo.x86_64   docker://18.9.6
    