  • Kubernetes Learning Path (4): Binary Deployment of the Node Nodes

    K8S Node Deployment

    • 1. Deploying kubelet

    (1) Prepare the binaries
    [root@linux-node1 ~]# cd /usr/local/src/kubernetes/server/bin/
    [root@linux-node1 bin]# cp kubelet kube-proxy /opt/kubernetes/bin/
    [root@linux-node1 bin]# scp kubelet kube-proxy 192.168.56.120:/opt/kubernetes/bin/
    [root@linux-node1 bin]# scp kubelet kube-proxy 192.168.56.130:/opt/kubernetes/bin/
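    As a quick sanity check (not part of the original walkthrough, and assuming passwordless SSH between the nodes is already set up), you can confirm the copied binaries are executable and report the expected version:

    [root@linux-node1 bin]# chmod +x /opt/kubernetes/bin/kubelet /opt/kubernetes/bin/kube-proxy
    [root@linux-node1 bin]# ssh 192.168.56.120 '/opt/kubernetes/bin/kubelet --version'
    [root@linux-node1 bin]# ssh 192.168.56.130 '/opt/kubernetes/bin/kube-proxy --version'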
    (2) Create the role binding

    When kubelet starts, it sends a TLS bootstrap request to kube-apiserver, so the bootstrap token must first be bound to the corresponding role; only then does the kubelet have permission to create that request.

    [root@linux-node1 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
    clusterrolebinding "kubelet-bootstrap" created
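    To double-check the binding (an optional verification, not in the original), you can inspect it with:

    [root@linux-node1 ~]# kubectl get clusterrolebinding kubelet-bootstrap -o yaml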
    (3) Create the kubelet bootstrapping kubeconfig file and set the cluster parameters
    [root@linux-node1 ~]# cd /usr/local/src/ssl
    [root@linux-node1 ssl]# kubectl config set-cluster kubernetes \
       --certificate-authority=/opt/kubernetes/ssl/ca.pem \
       --embed-certs=true \
       --server=https://192.168.56.110:6443 \
       --kubeconfig=bootstrap.kubeconfig
    Cluster "kubernetes" set.
    (4) Set the client authentication parameters
    [root@linux-node1 ssl]# kubectl config set-credentials kubelet-bootstrap \
       --token=ad6d5bb607a186796d8861557df0d17f \
       --kubeconfig=bootstrap.kubeconfig
    User "kubelet-bootstrap" set.
    (5) Set the context parameters
    [root@linux-node1 ssl]# kubectl config set-context default \
       --cluster=kubernetes \
       --user=kubelet-bootstrap \
       --kubeconfig=bootstrap.kubeconfig
    Context "default" created.
    (6) Select the default context
    [root@linux-node1 ssl]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
    Switched to context "default".
    [root@linux-node1 ssl]# cp bootstrap.kubeconfig /opt/kubernetes/cfg
    [root@linux-node1 ssl]# scp bootstrap.kubeconfig 192.168.56.120:/opt/kubernetes/cfg
    [root@linux-node1 ssl]# scp bootstrap.kubeconfig 192.168.56.130:/opt/kubernetes/cfg
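    The token passed to set-credentials must match the bootstrap token configured on kube-apiserver in the earlier master setup of this series, otherwise the kubelet's bootstrap request will be refused. If you want to inspect the generated file before distributing it (an optional check, not in the original), kubectl can print it with the embedded certificate redacted:

    [root@linux-node1 ssl]# kubectl config view --kubeconfig=bootstrap.kubeconfig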
    • 2. Deploying kubelet: set up CNI support

    (1) Configure CNI
    [root@linux-node2 ~]# mkdir -p /etc/cni/net.d
    [root@linux-node2 ~]# vim /etc/cni/net.d/10-default.conf
    {
            "name": "flannel",
            "type": "flannel",
            "delegate": {
                "bridge": "docker0",
                "isDefaultGateway": true,
                "mtu": 1400
            }
    }
    [root@linux-node3 ~]# mkdir -p /etc/cni/net.d
    [root@linux-node2 ~]# scp /etc/cni/net.d/10-default.conf 192.168.56.130:/etc/cni/net.d/10-default.conf
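    Since a malformed CNI config will keep the kubelet from setting up pod networking, it may be worth validating the JSON on both nodes (an optional check, assuming python is available on the hosts):

    [root@linux-node2 ~]# python -m json.tool /etc/cni/net.d/10-default.conf
    [root@linux-node3 ~]# python -m json.tool /etc/cni/net.d/10-default.conf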
    (2) Create the kubelet data directory
    [root@linux-node2 ~]# mkdir /var/lib/kubelet
    [root@linux-node3 ~]# mkdir /var/lib/kubelet
    (3) Create the kubelet service unit
    [root@linux-node2 ~]# vim /usr/lib/systemd/system/kubelet.service
    [Unit]
    Description=Kubernetes Kubelet
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=docker.service
    Requires=docker.service
    
    [Service]
    WorkingDirectory=/var/lib/kubelet
    ExecStart=/opt/kubernetes/bin/kubelet \
      --address=192.168.56.120 \
      --hostname-override=192.168.56.120 \
      --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
      --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
      --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
      --cert-dir=/opt/kubernetes/ssl \
      --network-plugin=cni \
      --cni-conf-dir=/etc/cni/net.d \
      --cni-bin-dir=/opt/kubernetes/bin/cni \
      --cluster-dns=10.1.0.2 \
      --cluster-domain=cluster.local. \
      --hairpin-mode hairpin-veth \
      --allow-privileged=true \
      --fail-swap-on=false \
      --v=2 \
      --logtostderr=false \
      --log-dir=/opt/kubernetes/log
    Restart=on-failure
    RestartSec=5
    
    
    
    [root@linux-node3 ~]# vim /usr/lib/systemd/system/kubelet.service
    [Unit]
    Description=Kubernetes Kubelet
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=docker.service
    Requires=docker.service
    
    [Service]
    WorkingDirectory=/var/lib/kubelet
    ExecStart=/opt/kubernetes/bin/kubelet \
      --address=192.168.56.130 \
      --hostname-override=192.168.56.130 \
      --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
      --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
      --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
      --cert-dir=/opt/kubernetes/ssl \
      --network-plugin=cni \
      --cni-conf-dir=/etc/cni/net.d \
      --cni-bin-dir=/opt/kubernetes/bin/cni \
      --cluster-dns=10.1.0.2 \
      --cluster-domain=cluster.local. \
      --hairpin-mode hairpin-veth \
      --allow-privileged=true \
      --fail-swap-on=false \
      --v=2 \
      --logtostderr=false \
      --log-dir=/opt/kubernetes/log
    Restart=on-failure
    RestartSec=5
    Restart=on-failure
    RestartSec=5
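    Since the two unit files differ only in the node IP, an alternative to retyping the file on linux-node3 is to copy it from linux-node2 and rewrite the address (a shortcut, not used in the original):

    [root@linux-node2 ~]# scp /usr/lib/systemd/system/kubelet.service 192.168.56.130:/usr/lib/systemd/system/kubelet.service
    [root@linux-node3 ~]# sed -i 's/192.168.56.120/192.168.56.130/g' /usr/lib/systemd/system/kubelet.service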
    (4) Start kubelet
    [root@linux-node2 ~]# systemctl daemon-reload
    [root@linux-node2 ~]# systemctl enable kubelet
    [root@linux-node2 ~]# systemctl start kubelet
    [root@linux-node2 kubernetes]# systemctl status kubelet
    
    [root@linux-node3 ~]# systemctl daemon-reload
    [root@linux-node3 ~]# systemctl enable kubelet
    [root@linux-node3 ~]# systemctl start kubelet
    [root@linux-node3 kubernetes]# systemctl status kubelet

    When checking the status of kubelet, the following error appears: Failed to get system container stats for "/system.slice/kubelet.service": failed to... In this case, the kubelet startup parameters need to be adjusted.

    Solution:
    In the [Service] section of /usr/lib/systemd/system/kubelet.service, add:
    Environment="KUBELET_MY_ARGS=--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
    Then modify ExecStart by appending $KUBELET_MY_ARGS as its last argument.
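    After editing the unit file on each node, reload systemd and restart kubelet so the new cgroup arguments take effect (these commands are implied but not shown in the original):

    [root@linux-node2 ~]# systemctl daemon-reload
    [root@linux-node2 ~]# systemctl restart kubelet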

    [root@linux-node2 system]# systemctl status kubelet
    ● kubelet.service - Kubernetes Kubelet
       Loaded: loaded (/usr/lib/systemd/system/kubelet.service; static; vendor preset: disabled)
       Active: active (running) since Thu 2018-05-31 16:33:17 CST; 16h ago
         Docs: https://github.com/GoogleCloudPlatform/kubernetes
     Main PID: 53223 (kubelet)
       CGroup: /system.slice/kubelet.service
               └─53223 /opt/kubernetes/bin/kubelet --address=192.168.56.120 --hostname-override=192.168.56.120 --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 --experiment...
    
    Jun 01 08:51:09 linux-node2.example.com kubelet[53223]: E0601 08:51:09.355765   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
    Jun 01 08:51:19 linux-node2.example.com kubelet[53223]: E0601 08:51:19.363906   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
    Jun 01 08:51:29 linux-node2.example.com kubelet[53223]: E0601 08:51:29.385439   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
    Jun 01 08:51:39 linux-node2.example.com kubelet[53223]: E0601 08:51:39.393790   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
    Jun 01 08:51:49 linux-node2.example.com kubelet[53223]: E0601 08:51:49.401081   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
    Jun 01 08:51:59 linux-node2.example.com kubelet[53223]: E0601 08:51:59.407863   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
    Jun 01 08:52:09 linux-node2.example.com kubelet[53223]: E0601 08:52:09.415552   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
    Jun 01 08:52:19 linux-node2.example.com kubelet[53223]: E0601 08:52:19.425998   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
    Jun 01 08:52:29 linux-node2.example.com kubelet[53223]: E0601 08:52:29.443804   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
    Jun 01 08:52:39 linux-node2.example.com kubelet[53223]: E0601 08:52:39.450814   53223 summary.go:102] Failed to get system container stats for "/system.slice/kubelet.service": failed to...
    Hint: Some lines were ellipsized, use -l to show in full.
    (5) Check the CSR requests. Note: run this on linux-node1.
    [root@linux-node1 ssl]# kubectl get csr
    NAME                                                   AGE       REQUESTOR           CONDITION
    node-csr-6Wc7kmqBIaPOw83l2F1uCKN-uUaxfkVhIU8K93S5y1U   1m        kubelet-bootstrap   Pending
    node-csr-fIXcxO7jyR1Au7nrpUXht19eXHnX1HdFl99-oq2sRsA   1m        kubelet-bootstrap   Pending
    (6) Approve the kubelet TLS certificate requests
    [root@linux-node1 ssl]#  kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs kubectl certificate approve
    certificatesigningrequest.certificates.k8s.io "node-csr-6Wc7kmqBIaPOw83l2F1uCKN-uUaxfkVhIU8K93S5y1U" approved
    certificatesigningrequest.certificates.k8s.io "node-csr-fIXcxO7jyR1Au7nrpUXht19eXHnX1HdFl99-oq2sRsA" approved
    
    [root@linux-node1 ssl]# kubectl get csr
    NAME                                                   AGE       REQUESTOR           CONDITION
    node-csr-6Wc7kmqBIaPOw83l2F1uCKN-uUaxfkVhIU8K93S5y1U   2m        kubelet-bootstrap   Approved,Issued
    node-csr-fIXcxO7jyR1Au7nrpUXht19eXHnX1HdFl99-oq2sRsA   2m        kubelet-bootstrap   Approved,Issued
    
    After the approval, the nodes show a Ready status:
    [root@linux-node1 ssl]# kubectl get node
    NAME             STATUS    ROLES     AGE       VERSION
    192.168.56.120   Ready     <none>    50m       v1.10.1
    192.168.56.130   Ready     <none>    46m       v1.10.1
    • 3. Deploying Kubernetes Proxy

    (1) Configure kube-proxy to use LVS (IPVS)
    [root@linux-node2 ~]# yum install -y ipvsadm ipset conntrack
    [root@linux-node3 ~]# yum install -y ipvsadm ipset conntrack
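    On some CentOS 7 kernels the IPVS modules are not loaded automatically, and kube-proxy falls back to iptables mode if they are missing. Loading them explicitly is a reasonable precaution (an extra step, not in the original; module names may vary with the kernel version):

    [root@linux-node2 ~]# modprobe ip_vs
    [root@linux-node2 ~]# modprobe ip_vs_rr
    [root@linux-node2 ~]# modprobe ip_vs_wrr
    [root@linux-node2 ~]# modprobe ip_vs_sh
    [root@linux-node2 ~]# modprobe nf_conntrack_ipv4
    [root@linux-node2 ~]# lsmod | grep ip_vs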
    (2) Create the kube-proxy certificate signing request
    [root@linux-node1 ~]# cd /usr/local/src/ssl/
    [root@linux-node1 ssl]# vim kube-proxy-csr.json
    {
      "CN": "system:kube-proxy",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "BeiJing",
          "L": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }
    (3) Generate the certificates
    [root@linux-node1 ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
       -ca-key=/opt/kubernetes/ssl/ca-key.pem \
       -config=/opt/kubernetes/ssl/ca-config.json \
       -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
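    If you want to confirm what was issued (an optional check, not in the original), openssl can print the certificate subject and validity period:

    [root@linux-node1 ssl]# openssl x509 -in kube-proxy.pem -noout -subject -dates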
    (4) Distribute the certificates to all Node nodes
    [root@linux-node1 ssl]# cp kube-proxy*.pem /opt/kubernetes/ssl/
    [root@linux-node1 ssl]# scp kube-proxy*.pem 192.168.56.120:/opt/kubernetes/ssl/
    [root@linux-node1 ssl]# scp kube-proxy*.pem 192.168.56.130:/opt/kubernetes/ssl/
    (5) Create the kube-proxy kubeconfig file
    [root@linux-node1 ssl]# kubectl config set-cluster kubernetes \
       --certificate-authority=/opt/kubernetes/ssl/ca.pem \
       --embed-certs=true \
       --server=https://192.168.56.110:6443 \
       --kubeconfig=kube-proxy.kubeconfig
    Cluster "kubernetes" set.
    
    [root@linux-node1 ssl]# kubectl config set-credentials kube-proxy \
       --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
       --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
       --embed-certs=true \
       --kubeconfig=kube-proxy.kubeconfig
    User "kube-proxy" set.
    
    [root@linux-node1 ssl]# kubectl config set-context default \
       --cluster=kubernetes \
       --user=kube-proxy \
       --kubeconfig=kube-proxy.kubeconfig
    Context "default" created.
    
    [root@linux-node1 ssl]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
    Switched to context "default".
    (6) Distribute the kubeconfig file
    [root@linux-node1 ssl]# cp kube-proxy.kubeconfig /opt/kubernetes/cfg/
    [root@linux-node1 ssl]# scp kube-proxy.kubeconfig 192.168.56.120:/opt/kubernetes/cfg/
    [root@linux-node1 ssl]# scp kube-proxy.kubeconfig 192.168.56.130:/opt/kubernetes/cfg/
    (7) Create the kube-proxy service unit
    [root@linux-node1 ssl]# mkdir /var/lib/kube-proxy
    [root@linux-node2 ssl]# mkdir /var/lib/kube-proxy
    [root@linux-node3 ssl]# mkdir /var/lib/kube-proxy
    
    [root@linux-node1 ~]# vim /usr/lib/systemd/system/kube-proxy.service
    [Unit]
    Description=Kubernetes Kube-Proxy Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target
    
    [Service]
    WorkingDirectory=/var/lib/kube-proxy
    ExecStart=/opt/kubernetes/bin/kube-proxy \
      --bind-address=192.168.56.120 \
      --hostname-override=192.168.56.120 \
      --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig \
      --masquerade-all \
      --feature-gates=SupportIPVSProxyMode=true \
      --proxy-mode=ipvs \
      --ipvs-min-sync-period=5s \
      --ipvs-sync-period=5s \
      --ipvs-scheduler=rr \
      --v=2 \
      --logtostderr=false \
      --log-dir=/opt/kubernetes/log
    
    Restart=on-failure
    RestartSec=5
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    
    [root@linux-node1 ssl]# scp /usr/lib/systemd/system/kube-proxy.service 192.168.56.120:/usr/lib/systemd/system/kube-proxy.service
    kube-proxy.service                                         100%  701   109.4KB/s   00:00    
    [root@linux-node1 ssl]# scp /usr/lib/systemd/system/kube-proxy.service 192.168.56.130:/usr/lib/systemd/system/kube-proxy.service
    kube-proxy.service                                         100%  701    34.9KB/s   00:00    
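    Note that the copied unit file still carries linux-node2's IP. On linux-node3 the --bind-address and --hostname-override values have to be changed to 192.168.56.130 before starting the service, for example with (a fix-up step the original leaves implicit):

    [root@linux-node3 ~]# sed -i 's/192.168.56.120/192.168.56.130/g' /usr/lib/systemd/system/kube-proxy.service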
    (8) Start Kubernetes Proxy
    [root@linux-node2 ~]# systemctl daemon-reload
    [root@linux-node2 ~]# systemctl enable kube-proxy
    [root@linux-node2 ~]# systemctl start kube-proxy
    [root@linux-node2 ~]# systemctl status kube-proxy
    
    [root@linux-node3 ~]# systemctl daemon-reload
    [root@linux-node3 ~]# systemctl enable kube-proxy
    [root@linux-node3 ~]# systemctl start kube-proxy
    [root@linux-node3 ~]# systemctl status kube-proxy
    
    Check the LVS status. An IPVS virtual server has been created that forwards requests for 10.1.0.1:443 to 192.168.56.110:6443, where 6443 is the kube-apiserver port:
    [root@linux-node2 ~]# ipvsadm -Ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  10.1.0.1:443 rr persistent 10800
      -> 192.168.56.110:6443          Masq    1      0          0         
    
    [root@linux-node3 ~]# ipvsadm -Ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  10.1.0.1:443 rr persistent 10800
      -> 192.168.56.110:6443          Masq    1      0          0         
    
    If kubelet and kube-proxy have been installed on both lab machines, the following command can be used to check their status:
    
    [root@linux-node1 ssl]#  kubectl get node
    NAME            STATUS    ROLES     AGE       VERSION
    192.168.56.120   Ready     <none>    22m       v1.10.1
    192.168.56.130   Ready     <none>    3m        v1.10.1

    At this point the K8S cluster is deployed. Because K8S does not provide a pod network of its own, a third-party network plugin is needed before Pods can be created; the next article covers the Flannel network, which provides networking support for K8S.

    (9) Problem encountered: kubelet fails to start, and kubectl get node reports: No resources found

    [root@linux-node1 ssl]#  kubectl get node
    No resources found.
    
    [root@linux-node3 ~]# systemctl status kubelet
    ● kubelet.service - Kubernetes Kubelet
       Loaded: loaded (/usr/lib/systemd/system/kubelet.service; static; vendor preset: disabled)
       Active: activating (auto-restart) (Result: exit-code) since Wed 2018-05-30 04:48:29 EDT; 1s ago
         Docs: https://github.com/GoogleCloudPlatform/kubernetes
      Process: 16995 ExecStart=/opt/kubernetes/bin/kubelet --address=192.168.56.130 --hostname-override=192.168.56.130 --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --cert-dir=/opt/kubernetes/ssl --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/kubernetes/bin/cni --cluster-dns=10.1.0.2 --cluster-domain=cluster.local. --hairpin-mode hairpin-veth --allow-privileged=true --fail-swap-on=false --logtostderr=true --v=2 --logtostderr=false --log-dir=/opt/kubernetes/log (code=exited, status=255)
     Main PID: 16995 (code=exited, status=255)
    
    May 30 04:48:29 linux-node3.example.com systemd[1]: Unit kubelet.service entered failed state.
    May 30 04:48:29 linux-node3.example.com systemd[1]: kubelet.service failed.
    [root@linux-node3 ~]# tailf /var/log/messages
    ......
    May 30 04:46:24 linux-node3 kubelet: F0530 04:46:24.134612   16207 server.go:233] failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
    
    The log shows that the cgroup driver used by kubelet ("cgroupfs") differs from the cgroup driver used by docker ("systemd"). Check docker.service:

    [Unit]
    Description=Docker Application Container Engine
    Documentation=http://docs.docker.com
    After=network.target
    Wants=docker-storage-setup.service
    Requires=docker-cleanup.timer

    [Service]
    Type=notify
    NotifyAccess=all
    KillMode=process
    EnvironmentFile=-/etc/sysconfig/docker
    EnvironmentFile=-/etc/sysconfig/docker-storage
    EnvironmentFile=-/etc/sysconfig/docker-network
    Environment=GOTRACEBACK=crash
    Environment=DOCKER_HTTP_HOST_COMPAT=1
    Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
    ExecStart=/usr/bin/dockerd-current \
              --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
              --default-runtime=docker-runc \
              --exec-opt native.cgroupdriver=systemd \
              --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
              $OPTIONS \
              $DOCKER_STORAGE_OPTIONS \
              $DOCKER_NETWORK_OPTIONS \
              $ADD_REGISTRY \
              $BLOCK_REGISTRY \
              $INSECURE_REGISTRY
    ExecReload=/bin/kill -s HUP $MAINPID
    LimitNOFILE=1048576
    LimitNPROC=1048576
    LimitCORE=infinity
    TimeoutStartSec=0
    Restart=on-abnormal
    MountFlags=slave

    [Install]
    WantedBy=multi-user.target

    Change native.cgroupdriver=systemd on the ExecStart line to native.cgroupdriver=cgroupfs so that it matches kubelet, then reload and restart:

    [root@linux-node3 ~]# systemctl daemon-reload
    [root@linux-node3 ~]# systemctl restart docker.service
    [root@linux-node3 ~]# systemctl restart kubelet
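    An alternative to changing docker's driver (not what this walkthrough does) is to keep docker on the systemd driver and start kubelet with a matching --cgroup-driver=systemd flag; either way, the two drivers must agree. The current docker setting can be checked with:

    [root@linux-node3 ~]# docker info | grep -i cgroup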