  • Building a Kubernetes (v1.17.3) cluster with kubeadm and a minimal KubeSphere installation

     Preface: continuous learning is a programmer's fate.

    I. Building the k8s cluster with kubeadm

    1. System preparation

     Three virtual machines:

    IP              Hostname   Spec
    192.168.56.100  node01     4 CPU / 4 GB RAM
    192.168.56.101  node02     4 CPU / 4 GB RAM
    192.168.56.102  node03     4 CPU / 4 GB RAM

    2. Environment configuration (all 3 nodes)

    2.1 Disable the firewall (all 3 nodes)

    systemctl stop firewalld
    systemctl disable firewalld
    

    2.2 Disable SELinux (all 3 nodes)

    sed -i 's/enforcing/disabled/' /etc/selinux/config
    setenforce 0
    

    2.3 Disable swap (all 3 nodes)

    swapoff -a   # disable swap temporarily
    sed -ri 's/.*swap.*/#&/' /etc/fstab   # disable permanently (comment out swap entries)
    free -g   # verify: Swap must be 0
                  total        used        free      shared  buff/cache   available
    Mem:              3           0           3           0           0           3
    Swap:             0           0           0
    
    

    2.4 Configure hostnames (all 3 nodes)

    Check the current hostname with the hostname command.
    If it is not correct, change it with "hostnamectl set-hostname <new-hostname>".
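    For example, to set the hostnames used later in this walkthrough (k8s-node1, k8s-node2, k8s-node3), run one of these on each node; a minimal sketch:

    hostnamectl set-hostname k8s-node1   # on the first node
    hostnamectl set-hostname k8s-node2   # on the second node
    hostnamectl set-hostname k8s-node3   # on the third node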

    [root@k8s-node1 ~]# ip route show 
    default via 10.0.2.1 dev eth0 proto dhcp metric 101 
    10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 metric 101 
    192.168.56.0/24 dev eth1 proto kernel scope link src 192.168.56.100 metric 100 
    [root@k8s-node1 ~]# ip addr
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 08:00:27:ac:35:31 brd ff:ff:ff:ff:ff:ff
        inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
           valid_lft 1184sec preferred_lft 1184sec
        inet6 fe80::a00:27ff:feac:3531/64 scope link 
           valid_lft forever preferred_lft forever
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 08:00:27:02:58:16 brd ff:ff:ff:ff:ff:ff
        inet 192.168.56.100/24 brd 192.168.56.255 scope global noprefixroute eth1
           valid_lft forever preferred_lft forever
        inet6 fe80::a00:27ff:fe02:5816/64 scope link 
           valid_lft forever preferred_lft forever
    [root@k8s-node1 ~]# cat /etc/hosts  # same on all 3 nodes
    127.0.0.1	k8s-node1	k8s-node1
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    
    10.0.2.15 k8s-node1
    10.0.2.4 k8s-node2
    10.0.2.5 k8s-node3
    

    2.5 Configure kernel parameters

    Pass bridged IPv4 traffic to the iptables chains:

    cat > /etc/sysctl.d/k8s.conf <<EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    [root@k8s-node1 ~]#  sysctl --system
    * Applying /usr/lib/sysctl.d/00-system.conf ...
    * Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
    kernel.yama.ptrace_scope = 0
    * Applying /usr/lib/sysctl.d/50-default.conf ...
    kernel.sysrq = 16
    kernel.core_uses_pid = 1
    net.ipv4.conf.default.rp_filter = 1
    net.ipv4.conf.all.rp_filter = 1
    net.ipv4.conf.default.accept_source_route = 0
    net.ipv4.conf.all.accept_source_route = 0
    net.ipv4.conf.default.promote_secondaries = 1
    net.ipv4.conf.all.promote_secondaries = 1
    fs.protected_hardlinks = 1
    fs.protected_symlinks = 1
    * Applying /etc/sysctl.d/99-sysctl.conf ...
    * Applying /etc/sysctl.d/k8s.conf ...
    * Applying /etc/sysctl.conf ...
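    Note (not in the original steps): on a stock CentOS 7 install the net.bridge.* keys above only exist once the br_netfilter kernel module is loaded; a hedged sketch to load it now and at every boot:

    modprobe br_netfilter                                        # load the bridge netfilter module now
    echo br_netfilter > /etc/modules-load.d/br_netfilter.conf    # load it automatically at boot
    sysctl --system                                              # re-apply the sysctl settings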
    

    3. Install Docker (all 3 nodes)

    Kubernetes uses Docker as the CRI (container runtime) here, so install Docker first.

    3.1 Remove old Docker packages (if any)

    $ sudo yum remove docker \
                      docker-client \
                      docker-client-latest \
                      docker-common \
                      docker-latest \
                      docker-latest-logrotate \
                      docker-logrotate \
                      docker-engine
    

    3.2 Install Docker CE (all 3 nodes)

     sudo yum install -y yum-utils
    
     sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo   # official repo
    
     yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo   # or the Aliyun mirror (either repo is enough)
        
     sudo yum -y install docker-ce docker-ce-cli containerd.io
    

    3.3 Configure a registry mirror (all 3 nodes)

    This uses the Alibaba Cloud registry mirror service.

    sudo mkdir -p /etc/docker
    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
      "registry-mirrors": ["https://0v8k2rvr.mirror.aliyuncs.com"]
    }
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker
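    Optional, and not part of the original steps: the same daemon.json can also switch Docker to the systemd cgroup driver, which avoids the kubeadm preflight warning about "cgroupfs" seen later; a sketch combining both settings:

    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
      "registry-mirrors": ["https://0v8k2rvr.mirror.aliyuncs.com"],
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker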
    

    3.4 Start Docker and enable it at boot (all 3 nodes)

    [root@node01 ~]# systemctl enable docker
    Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
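    The output above only shows enabling the service at boot; a minimal sketch to start Docker now and confirm it is running:

    sudo systemctl start docker
    systemctl is-active docker               # should print "active"
    docker info | grep -i 'cgroup driver'    # optional: confirm the cgroup driver in use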
    

    4. Install kubeadm, kubelet, and kubectl

    4.1 Add the Aliyun yum repository (all 3 nodes)

    For more details, see the Aliyun Kubernetes mirror documentation.

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
    

    4.2 Install kubeadm, kubelet, kubectl

    yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3 --setopt=obsoletes=0
    

    4.3 Enable kubelet at boot

     systemctl enable kubelet && systemctl start kubelet
    

    4.4 Check kubelet status (at this point kubelet keeps restarting because /var/lib/kubelet/config.yaml does not exist yet; this is expected until kubeadm init/join runs)

    #systemctl status kubelet
    ● kubelet.service - kubelet: The Kubernetes Node Agent
       Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
      Drop-In: /usr/lib/systemd/system/kubelet.service.d
               └─10-kubeadm.conf
       Active: activating (auto-restart) (Result: exit-code) since Fri 2020-06-26 14:53:12 CST; 4s ago
         Docs: https://kubernetes.io/docs/
      Process: 10192 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
     Main PID: 10192 (code=exited, status=255)
    
    Jun 26 14:53:12 node01 systemd[1]: Unit kubelet.service entered failed state.
    Jun 26 14:53:12 node01 systemd[1]: kubelet.service failed.
    
    

    4.5 Check the kubelet version

    kubelet --version
    Kubernetes v1.17.3
    

    5. Deploy the k8s master

    5.1 On the master node, create and run master_images.sh with the following content:

    #!/bin/bash
    
    images=(
        kube-apiserver:v1.17.3
        kube-proxy:v1.17.3
        kube-controller-manager:v1.17.3
        kube-scheduler:v1.17.3
        coredns:1.6.5
        etcd:3.4.3-0
        pause:3.1
    )
    
    for imageName in "${images[@]}" ; do
        docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    #   docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName  k8s.gcr.io/$imageName
    done
    [root@k8s-node1 ~]# ./master_images.sh    # pull the images
    # Uncomment the "docker tag" line in the script, then run it again to tag the images as k8s.gcr.io/*
    [root@k8s-node1 ~]# ./master_images.sh    # tag the images
    [root@k8s-node1 ~]# docker images    # list the images just pulled
    REPOSITORY                                                                    TAG                 IMAGE ID            CREATED             SIZE
    k8s.gcr.io/kube-proxy                                                         v1.17.3             ae853e93800d        4 months ago        116MB
    registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.17.3             ae853e93800d        4 months ago        116MB
    registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.17.3             b0f1517c1f4b        4 months ago        161MB
    k8s.gcr.io/kube-controller-manager                                            v1.17.3             b0f1517c1f4b        4 months ago        161MB
    k8s.gcr.io/kube-apiserver                                                     v1.17.3             90d27391b780        4 months ago        171MB
    registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.17.3             90d27391b780        4 months ago        171MB
    k8s.gcr.io/kube-scheduler                                                     v1.17.3             d109c0821a2b        4 months ago        94.4MB
    registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.17.3             d109c0821a2b        4 months ago        94.4MB
    k8s.gcr.io/coredns                                                            1.6.5               70f311871ae1        7 months ago        41.6MB
    registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   1.6.5               70f311871ae1        7 months ago        41.6MB
    k8s.gcr.io/etcd                                                               3.4.3-0             303ce5db0e90        8 months ago        288MB
    registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        8 months ago        288MB
    registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB
    k8s.gcr.io/pause                                                              3.1                 da86e6ba6ca1        2 years ago         742kB
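    As an alternative to editing the script and running it twice, a short loop (same image list as above) can add the k8s.gcr.io tags after the pull; a sketch:

    for imageName in kube-apiserver:v1.17.3 kube-proxy:v1.17.3 kube-controller-manager:v1.17.3 \
                     kube-scheduler:v1.17.3 coredns:1.6.5 etcd:3.4.3-0 pause:3.1; do
        # re-tag the Aliyun image so kubeadm finds it under the default k8s.gcr.io name
        docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    done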
    

    5.2 Initialize kubeadm (master node)

    Check the NIC addresses:

    [root@k8s-node1 ~]# ip addr   # initialize using the default NIC (eth0)
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 08:00:27:ac:35:31 brd ff:ff:ff:ff:ff:ff
        inet 10.0.2.15/24 brd 10.0.2.255 scope global noprefixroute dynamic eth0
           valid_lft 824sec preferred_lft 824sec
        inet6 fe80::a00:27ff:feac:3531/64 scope link 
           valid_lft forever preferred_lft forever
    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 08:00:27:02:58:16 brd ff:ff:ff:ff:ff:ff
        inet 192.168.56.100/24 brd 192.168.56.255 scope global noprefixroute eth1
           valid_lft forever preferred_lft forever
        inet6 fe80::a00:27ff:fe02:5816/64 scope link 
           valid_lft forever preferred_lft forever
    4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
        link/ether 02:42:2f:70:a1:f8 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
    
    

    Initialize the master node:

    kubeadm init --kubernetes-version=1.17.3 \
      --apiserver-advertise-address=10.0.2.15 \
      --image-repository registry.aliyuncs.com/google_containers \
      --service-cidr=10.10.0.0/16 --pod-network-cidr=10.122.0.0/16
    

    Note:

    • --apiserver-advertise-address=10.0.2.15: this is the master host's IP address, i.e. the eth0 address shown above.
      The output is as follows:
    W0627 05:59:23.420885    2230 validation.go:28] Cannot validate kube-proxy config - no validator is available
    W0627 05:59:23.420970    2230 validation.go:28] Cannot validate kubelet config - no validator is available
    [init] Using Kubernetes version: v1.17.3
    [preflight] Running pre-flight checks
    	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [k8s-node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.10.0.1 10.0.2.15]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [k8s-node1 localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [k8s-node1 localhost] and IPs [10.0.2.15 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    W0627 05:59:36.078833    2230 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    W0627 05:59:36.079753    2230 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 33.002443 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node k8s-node1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node k8s-node1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: z2roeo.9tzndilx8gnjjqfj
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 10.0.2.15:6443 --token z2roeo.9tzndilx8gnjjqfj \
        --discovery-token-ca-cert-hash sha256:7cfbf6693daa652f2af8e45594c4a66f45a8d081711e7e17a45cc42abfe7792f
    

    Because the default image registry k8s.gcr.io cannot be reached from mainland China, the Aliyun registry is specified here. You can also pull the images in advance with the master_images.sh script above.
    
    The registry registry.aliyuncs.com/google_containers works as well.
    Background: Classless Inter-Domain Routing (CIDR) is a method for allocating IP addresses and efficiently routing IP packets on the Internet.
    Image pulls may fail; pre-download the images if necessary.
    
    When init finishes, copy the printed kubeadm join command (the cluster join token) for later use.

    5.3 Configure and test kubectl (run on the master)

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    For more detailed deployment documentation, see the official Kubernetes docs.

    [root@node02 ~]# kubectl get nodes    # list all nodes; the master stays NotReady until the pod network is installed
    NAME     STATUS     ROLES    AGE   VERSION
    node02   NotReady   master   97s   v1.17.3
    
    [root@node02 ~]# journalctl -u kubelet    # view kubelet logs
    -- Logs begin at Fri 2020-06-26 22:32:06 CST, end at Fri 2020-06-26 15:31:07 CST. --
    Jun 26 14:52:40 node02 systemd[1]: Started kubelet: The Kubernetes Node Agent.
    Jun 26 14:52:40 node02 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
    Jun 26 14:52:40 node02 kubelet[11114]: F0626 14:52:40.203059   11114 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/c
    Jun 26 14:52:40 node02 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
    Jun 26 14:52:40 node02 systemd[1]: Unit kubelet.service entered failed state.
    Jun 26 14:52:40 node02 systemd[1]: kubelet.service failed.
    Jun 26 14:52:50 node02 systemd[1]: kubelet.service holdoff time over, scheduling restart.
    Jun 26 14:52:50 node02 systemd[1]: Started kubelet: The Kubernetes Node Agent.
    Jun 26 14:52:50 node02 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
    Jun 26 14:52:50 node02 kubelet[11128]: F0626 14:52:50.311073   11128 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/c
    Jun 26 14:52:50 node02 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
    Jun 26 14:52:50 node02 systemd[1]: Unit kubelet.service entered failed state.
    Jun 26 14:52:50 node02 systemd[1]: kubelet.service failed.
    Jun 26 14:53:00 node02 systemd[1]: kubelet.service holdoff time over, scheduling restart.
    Jun 26 14:53:00 node02 systemd[1]: Started kubelet: The Kubernetes Node Agent.
    Jun 26 14:53:00 node02 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
    Jun 26 14:53:00 node02 kubelet[11142]: F0626 14:53:00.562832   11142 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/c
    Jun 26 14:53:00 node02 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
    Jun 26 14:53:00 node02 systemd[1]: Unit kubelet.service entered failed state.
    Jun 26 14:53:00 node02 systemd[1]: kubelet.service failed.
    Jun 26 14:53:10 node02 systemd[1]: kubelet.service holdoff time over, scheduling restart.
    Jun 26 14:53:10 node02 systemd[1]: Started kubelet: The Kubernetes Node Agent.
    Jun 26 14:53:10 node02 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
    Jun 26 14:53:10 node02 kubelet[11157]: F0626 14:53:10.810988   11157 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/c
    Jun 26 14:53:10 node02 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
    Jun 26 14:53:10 node02 systemd[1]: Unit kubelet.service entered failed state.
    Jun 26 14:53:10 node02 systemd[1]: kubelet.service failed.
    Jun 26 14:53:21 node02 systemd[1]: kubelet.service holdoff time over, scheduling restart.
    Jun 26 14:53:21 node02 systemd[1]: Started kubelet: The Kubernetes Node Agent.
    Jun 26 14:53:21 node02 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
    Jun 26 14:53:21 node02 kubelet[11171]: F0626 14:53:21.061248   11171 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/c
    Jun 26 14:53:21 node02 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
    Jun 26 14:53:21 node02 systemd[1]: Unit kubelet.service entered failed state.
    Jun 26 14:53:21 node02 systemd[1]: kubelet.service failed.
    Jun 26 14:53:31 node02 systemd[1]: kubelet.service holdoff time over, scheduling restart.
    Jun 26 14:53:31 node02 systemd[1]: Started kubelet: The Kubernetes Node Agent.
    Jun 26 14:53:31 node02 systemd[1]: Starting kubelet: The Kubernetes Node Agent...
    Jun 26 14:53:31 node02 kubelet[11185]: F0626 14:53:31.311175   11185 server.go:198] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/c
    Jun 26 14:53:31 node02 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
    Jun 26 14:53:31 node02 systemd[1]: Unit kubelet.service entered failed state.
    

    5.4 Install the pod network add-on (CNI)

    Run the following on the master node to install the pod network add-on:

    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    

    The URL above may be blocked; you can apply a locally downloaded kube-flannel.yml instead, for example:
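    If GitHub is reachable (possibly through a proxy), the manifest can also be downloaded first and then applied locally; a sketch:

    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    kubectl apply -f kube-flannel.yml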

    Local kube-flannel.yml:
      
          ---
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: psp.flannel.unprivileged
      annotations:
        seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
        seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
        apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
        apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
    spec:
      privileged: false
      volumes:
        - configMap
        - secret
        - emptyDir
        - hostPath
      allowedHostPaths:
        - pathPrefix: "/etc/cni/net.d"
        - pathPrefix: "/etc/kube-flannel"
        - pathPrefix: "/run/flannel"
      readOnlyRootFilesystem: false
      # Users and groups
      runAsUser:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      # Privilege Escalation
      allowPrivilegeEscalation: false
      defaultAllowPrivilegeEscalation: false
      # Capabilities
      allowedCapabilities: ['NET_ADMIN']
      defaultAddCapabilities: []
      requiredDropCapabilities: []
      # Host namespaces
      hostPID: false
      hostIPC: false
      hostNetwork: true
      hostPorts:
      - min: 0
        max: 65535
      # SELinux
      seLinux:
        # SELinux is unused in CaaSP
        rule: 'RunAsAny'
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: flannel
    rules:
      - apiGroups: ['extensions']
        resources: ['podsecuritypolicies']
        verbs: ['use']
        resourceNames: ['psp.flannel.unprivileged']
      - apiGroups:
          - ""
        resources:
          - pods
        verbs:
          - get
      - apiGroups:
          - ""
        resources:
          - nodes
        verbs:
          - list
          - watch
      - apiGroups:
          - ""
        resources:
          - nodes/status
        verbs:
          - patch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: flannel
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: flannel
    subjects:
    - kind: ServiceAccount
      name: flannel
      namespace: kube-system
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: flannel
      namespace: kube-system
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: kube-flannel-cfg
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    data:
      cni-conf.json: |
        {
          "name": "cbr0",
          "cniVersion": "0.3.1",
          "plugins": [
            {
              "type": "flannel",
              "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      net-conf.json: |
        {
          "Network": "10.244.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-amd64
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: beta.kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: beta.kubernetes.io/arch
                        operator: In
                        values:
                          - amd64
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.11.0-amd64
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.11.0-amd64
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-arm64
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: beta.kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: beta.kubernetes.io/arch
                        operator: In
                        values:
                          - arm64
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.11.0-arm64
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.11.0-arm64
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-arm
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: beta.kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: beta.kubernetes.io/arch
                        operator: In
                        values:
                          - arm
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.11.0-arm
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.11.0-arm
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-ppc64le
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: beta.kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: beta.kubernetes.io/arch
                        operator: In
                        values:
                          - ppc64le
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.11.0-ppc64le
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.11.0-ppc64le
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-s390x
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: beta.kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: beta.kubernetes.io/arch
                        operator: In
                        values:
                          - s390x
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.11.0-s390x
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.11.0-s390x
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
      
    [root@node02 ~]# ll
    total 20
    -rw-r--r-- 1 root root 15016 Feb 26 23:05 kube-flannel.yml
    -rwx------ 1 root root   392 Jun 26 14:57 master_images.sh
    [root@node02 ~]# kubectl apply -f  kube-flannel.yml   ### run on the master
    podsecuritypolicy.policy/psp.flannel.unprivileged created
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.apps/kube-flannel-ds-amd64 created
    daemonset.apps/kube-flannel-ds-arm64 created
    daemonset.apps/kube-flannel-ds-arm created
    daemonset.apps/kube-flannel-ds-ppc64le created
    daemonset.apps/kube-flannel-ds-s390x created
    
    

    If the images referenced in flannel.yml also cannot be pulled, find a mirrored copy on Docker Hub, download the yml with wget, and edit every amd64 image reference with vi (see the sketch below).
    Then wait roughly 3 minutes.
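    One hedged way to do that edit in a single pass is a sed over the manifest; REGISTRY_MIRROR below is a placeholder for whatever mirror repository you find on Docker Hub, not a real one:

    # Replace quay.io/coreos/flannel with a hypothetical mirror repository, then re-apply
    REGISTRY_MIRROR=your-dockerhub-user/flannel    # placeholder, not a real repository
    sed -i "s#quay.io/coreos/flannel#${REGISTRY_MIRROR}#g" kube-flannel.yml
    kubectl apply -f kube-flannel.yml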

    [root@k8s-node1 k8s]#  kubectl get pods --all-namespaces  # list pods in all namespaces
    NAMESPACE     NAME                                READY   STATUS     RESTARTS   AGE
    kube-system   coredns-9d85f5447-bmwwg             1/1     Running    0          10m
    kube-system   coredns-9d85f5447-qwd5q             1/1     Running    0          10m
    kube-system   etcd-k8s-node1                      1/1     Running    0          10m
    kube-system   kube-apiserver-k8s-node1            1/1     Running    0          10m
    kube-system   kube-controller-manager-k8s-node1   1/1     Running    0          10m
    kube-system   kube-flannel-ds-amd64-cn6m9         0/1     Init:0/1   0          55s
    kube-system   kube-flannel-ds-amd64-kbbhz         1/1     Running    0          4m11s
    kube-system   kube-flannel-ds-amd64-lll8c         0/1     Init:0/1   0          52s
    kube-system   kube-proxy-df9jw                    1/1     Running    0          10m
    kube-system   kube-proxy-kwg4s                    1/1     Running    0          52s
    kube-system   kube-proxy-t5pkz                    1/1     Running    0          55s
    kube-system   kube-scheduler-k8s-node1            1/1     Running    0          10m
    

    If the network misbehaves, bring cni0 down with "ip link set cni0 down", reboot the VM, and test again.
    Run "watch kubectl get pod -n kube-system -o wide" to monitor pod progress.
    Wait 3-10 minutes and continue once everything is Running.

    List the namespaces:

    [root@k8s-node1 ~]#  kubectl get ns
    NAME              STATUS   AGE
    default           Active   8m43s
    kube-node-lease   Active   8m44s
    kube-public       Active   8m44s
    kube-system       Active   8m44s
    

    Check node status on the master:

    [root@k8s-node1 ~]# kubectl get nodes
    NAME     STATUS   ROLES    AGE   VERSION
    k8s-node1   Ready    master   13m   v1.17.3   # the status must be Ready before running the next commands
    

    5.5 Join node02 and node03 to the cluster

    Finally, run the join command (printed when the master finished initializing) on both "k8s-node2" and "k8s-node3":

    [root@k8s-node2 ~]# kubeadm join 10.0.2.15:6443 --token z2roeo.9tzndilx8gnjjqfj     --discovery-token-ca-cert-hash sha256:7cfbf6693daa652f2af8e45594c4a66f45a8d081711e7e17a45cc42abfe7792f
    W0626 15:56:39.223689   10631 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
    [preflight] Running pre-flight checks
    	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    

    Check from the master:

    [root@k8s-node1 ~]# kubectl get nodes
    NAME        STATUS     ROLES    AGE     VERSION
    k8s-node1   Ready      master   9m38s   v1.17.3
    k8s-node2   NotReady   <none>   9s      v1.17.3
    k8s-node3   NotReady   <none>   6s      v1.17.3
    

    Monitor pod progress:

    watch kubectl get pod -n kube-system -o wide
    

    Once every pod's status is Running, check the node information again:

    [root@k8s-node1 k8s]# kubectl get nodes
    NAME        STATUS   ROLES    AGE    VERSION
    k8s-node1   Ready    master   11m    v1.17.3
    k8s-node2   Ready    <none>   113s   v1.17.3
    k8s-node3   Ready    <none>   110s   v1.17.3
    

    5.6 Handling token expiration

    To add a new node to the cluster, run the kubeadm join command printed by kubeadm init on that node,
    and confirm the node joins successfully.
    If the token has expired, generate a new join command on the master:

    kubeadm token create --print-join-command
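    Existing tokens and their expiry times can be inspected on the master (an optional check, not in the original):

    kubeadm token list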
    

    5.7 Inspect the cluster

    [root@k8s-node1 k8s]# kubectl get nodes # run on the master to list all nodes
    NAME        STATUS   ROLES    AGE    VERSION
    k8s-node1   Ready    master   11m    v1.17.3
    k8s-node2   Ready    <none>   113s   v1.17.3
    k8s-node3   Ready    <none>   110s   v1.17.3
    [root@k8s-node1 k8s]# kubectl get pods --all-namespaces # list all pods from the master
    NAMESPACE     NAME                                READY   STATUS    RESTARTS   AGE
    kube-system   coredns-9d85f5447-bmwwg             1/1     Running   0          11m
    kube-system   coredns-9d85f5447-qwd5q             1/1     Running   0          11m
    kube-system   etcd-k8s-node1                      1/1     Running   0          11m
    kube-system   kube-apiserver-k8s-node1            1/1     Running   0          11m
    kube-system   kube-controller-manager-k8s-node1   1/1     Running   0          11m
    kube-system   kube-flannel-ds-amd64-cn6m9         1/1     Running   0          2m29s
    kube-system   kube-flannel-ds-amd64-kbbhz         1/1     Running   0          5m45s
    kube-system   kube-flannel-ds-amd64-lll8c         1/1     Running   0          2m26s
    kube-system   kube-proxy-df9jw                    1/1     Running   0          11m
    kube-system   kube-proxy-kwg4s                    1/1     Running   0          2m26s
    kube-system   kube-proxy-t5pkz                    1/1     Running   0          2m29s
    kube-system   kube-scheduler-k8s-node1            1/1     Running   0          11m
    
    
    • The k8s cluster is now complete; next comes the customized KubeSphere installation.

    II. Customized KubeSphere installation

    KubeSphere official site: https://kubesphere.io (see the official docs for details).
    
    The prerequisites are as follows:

    1. Install Helm (run on the master)

    Helm is the package manager for Kubernetes. Like apt on Ubuntu, yum on CentOS, or pip for Python, a package manager lets you quickly find, download, and install packages. Helm (v2) consists of the helm client and the Tiller server-side component; it packages a set of Kubernetes resources for unified management and is the standard way to find, share, and use software built for Kubernetes.
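    For orientation, once Helm is installed (below), day-to-day Helm v2 usage looks roughly like this; chart and release names here are examples only:

    helm repo update                                    # refresh chart repositories
    helm search nginx                                   # search charts by keyword
    helm install --name my-nginx stable/nginx-ingress   # install a chart as a named release (example)
    helm list                                           # list deployed releases
    helm delete --purge my-nginx                        # remove a release (example)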

    [root@k8s-node1 k8s]# curl -L https://git.io/get_helm.sh|bash
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
      0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
    100  7185  100  7185    0     0   1069      0  0:00:06  0:00:06 --:--:-- 12761
    Downloading https://get.helm.sh/helm-v2.16.9-linux-amd64.tar.gz
    Preparing to install helm and tiller into /usr/local/bin
    helm installed into /usr/local/bin/helm
    tiller installed into /usr/local/bin/tiller
    Run 'helm init' to configure helm
    
    # because the URL above may be blocked, use the provided get_helm.sh instead
    [root@node02 ~]# ./get_helm.sh 
    Downloading https://get.helm.sh/helm-v2.16.9-linux-amd64.tar.gz
    Preparing to install helm and tiller into /usr/local/bin
    helm installed into /usr/local/bin/helm
    tiller installed into /usr/local/bin/tiller
    Run 'helm init' to configure helm.
    [root@node02 ~]# helm version   # verify the version
    Client: &version.Version{SemVer:"v2.16.9", GitCommit:"8ad7037828e5a0fca1009dabe290130da6368e39", GitTreeState:"clean"}
    Error: could not find tiller
    [root@node02 ~]# kubectl apply -f helm-rabc.yml   # create RBAC for Tiller (run on the master)
    serviceaccount/tiller created
    clusterrolebinding.rbac.authorization.k8s.io/tiller created
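    The helm-rabc.yml file itself is not shown in the original; a typical Tiller RBAC manifest consistent with the two objects created above would look roughly like this (a reconstruction, not necessarily the author's exact file):

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: tiller
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: tiller
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: tiller
      namespace: kube-system
    EOF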
    

    2. Install Tiller (run on the master)

    [root@node02 ~]# helm init --service-account=tiller --tiller-image=sapcc/tiller:v2.16.3 --history-max 300 
    Creating /root/.helm 
    Creating /root/.helm/repository 
    Creating /root/.helm/repository/cache 
    Creating /root/.helm/repository/local 
    Creating /root/.helm/plugins 
    Creating /root/.helm/starters 
    Creating /root/.helm/cache/archive 
    Creating /root/.helm/repository/repositories.yaml 
    Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com 
    Adding local repo with URL: http://127.0.0.1:8879/charts 
    $HELM_HOME has been configured at /root/.helm.
    
    Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
    
    Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
    To prevent this, run `helm init` with the --tiller-tls-verify flag.
    For more information on securing your installation see: https://v2.helm.sh/docs/securing_installation/
    
    • --tiller-image specifies an alternative Tiller image (the default registry is blocked); then wait for the tiller pod to finish deploying.
    [root@node02 ~]# kubectl get pods -n kube-system
    NAME                             READY   STATUS    RESTARTS   AGE
    coredns-9d85f5447-798ss          1/1     Running   0          88m
    coredns-9d85f5447-pz4wr          1/1     Running   0          88m
    etcd-node02                      1/1     Running   0          88m
    kube-apiserver-node02            1/1     Running   0          88m
    kube-controller-manager-node02   1/1     Running   0          88m
    kube-flannel-ds-amd64-9x6lh      1/1     Running   0          74m
    kube-flannel-ds-amd64-s77xm      1/1     Running   0          83m
    kube-flannel-ds-amd64-t8wth      1/1     Running   0          60m
    kube-proxy-2vbcp                 1/1     Running   0          60m
    kube-proxy-bd7zp                 1/1     Running   0          74m
    kube-proxy-lk459                 1/1     Running   0          88m
    kube-scheduler-node02            1/1     Running   0          88m
    tiller-deploy-5fdc6844fb-zwbz7   1/1     Running   0          38s
    
    [root@node02 ~]# kubectl get node -o wide  # show detailed information for all cluster nodes
    NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
    node01   Ready    <none>   60m   v1.17.3   172.16.174.134   <none>        CentOS Linux 7 (Core)   3.10.0-514.26.2.el7.x86_64   docker://19.3.12
    node02   Ready    master   88m   v1.17.3   172.16.174.133   <none>        CentOS Linux 7 (Core)   3.10.0-693.2.2.el7.x86_64    docker://19.3.12
    node03   Ready    <none>   74m   v1.17.3   172.16.193.3     <none>        CentOS Linux 7 (Core)   3.10.0-514.26.2.el7.x86_64   docker://19.3.12
    

    2.1 Test issue:

    [root@k8s-node1 k8s]# helm install stable/nginx-ingress --name nginx-ingress
    Error: release nginx-ingress failed: namespaces "default" is forbidden: User "system:serviceaccount:kube-system:tiller" cannot get resource "namespaces" in API group "" in the namespace "default"
    
    

    2.2 Solution:

    [root@k8s-node1 k8s]# kubectl create serviceaccount --namespace kube-system tiller
    Error from server (AlreadyExists): serviceaccounts "tiller" already exists
    [root@k8s-node1 k8s]# kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
    clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created
    [root@k8s-node1 k8s]# 
    [root@k8s-node1 k8s]# kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
    deployment.apps/tiller-deploy patched (no change)
    [root@k8s-node1 k8s]#  helm install --name nginx-ingress --set rbac.create=true stable/nginx-ingress
    
    [root@k8s-node1 k8s]#  helm install --name nginx-ingress --set rbac.create=true stable/nginx-ingress
    NAME:   nginx-ingress
    LAST DEPLOYED: Sat Jun 27 10:52:53 2020
    NAMESPACE: default
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1/ClusterRole
    NAME           AGE
    nginx-ingress  0s
    
    ==> v1/ClusterRoleBinding
    NAME           AGE
    nginx-ingress  0s
    
    ==> v1/Deployment
    NAME                           READY  UP-TO-DATE  AVAILABLE  AGE
    nginx-ingress-controller       0/1    1           0          0s
    nginx-ingress-default-backend  0/1    1           0          0s
    
    ==> v1/Pod(related)
    NAME                                            READY  STATUS             RESTARTS  AGE
    nginx-ingress-controller-5989bf7f8f-p7rck       0/1    ContainerCreating  0         0s
    nginx-ingress-default-backend-5b967cf596-4s9qn  0/1    ContainerCreating  0         0s
    nginx-ingress-controller-5989bf7f8f-p7rck       0/1    ContainerCreating  0         0s
    nginx-ingress-default-backend-5b967cf596-4s9qn  0/1    ContainerCreating  0         0s
    
    ==> v1/Role
    NAME           AGE
    nginx-ingress  0s
    
    ==> v1/RoleBinding
    NAME           AGE
    nginx-ingress  0s
    
    ==> v1/Service
    NAME                           TYPE          CLUSTER-IP     EXTERNAL-IP  PORT(S)                     AGE
    nginx-ingress-controller       LoadBalancer  10.96.138.118  <pending>    80:31494/TCP,443:31918/TCP  0s
    nginx-ingress-default-backend  ClusterIP     10.96.205.77   <none>       80/TCP                      0s
    
    ==> v1/ServiceAccount
    NAME                   SECRETS  AGE
    nginx-ingress          1        0s
    nginx-ingress-backend  1        0s
    
    
    NOTES:
    The nginx-ingress controller has been installed.
    It may take a few minutes for the LoadBalancer IP to be available.
    You can watch the status by running 'kubectl --namespace default get services -o wide -w nginx-ingress-controller'
    
    An example Ingress that makes use of the controller:
    
      apiVersion: extensions/v1beta1
      kind: Ingress
      metadata:
        annotations:
          kubernetes.io/ingress.class: nginx
        name: example
        namespace: foo
      spec:
        rules:
          - host: www.example.com
            http:
              paths:
                - backend:
                    serviceName: exampleService
                    servicePort: 80
                  path: /
        # This section is only required if TLS is to be enabled for the Ingress
        tls:
            - hosts:
                - www.example.com
              secretName: example-tls
    
    If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:
    
      apiVersion: v1
      kind: Secret
      metadata:
        name: example-tls
        namespace: foo
      data:
        tls.crt: <base64 encoded cert>
        tls.key: <base64 encoded key>
      type: kubernetes.io/tls
    

    3. Install OpenEBS

    The installation process follows the official documentation.

    3.1 Check whether the master node has a taint; as shown below, it does

    [root@k8s-node1 k8s]# kubectl describe node k8s-node1 | grep Taint
    Taints:             node-role.kubernetes.io/master:NoSchedule
    

    3.2 Remove the taint from the master node

    [root@k8s-node1 k8s]# kubectl taint nodes k8s-node1 node-role.kubernetes.io/master:NoSchedule-
    node/k8s-node1 untainted
    

    3.3 Create the OpenEBS namespace; all OpenEBS resources will be created in it

    $ kubectl create ns openebs
    # A. If Helm is already installed in the cluster, OpenEBS can be installed with Helm:
    helm install --namespace openebs --name openebs stable/openebs --version 1.5.0
    # B. Alternatively, install it with kubectl:
    $ kubectl apply -f https://openebs.github.io/charts/openebs-operator-1.5.0.yaml
    
    [root@k8s-node1 k8s]# helm install --namespace openebs --name openebs stable/openebs --version 1.5.0   # Helm is used here
    NAME:   openebs
    LAST DEPLOYED: Sat Jun 27 11:12:58 2020
    NAMESPACE: openebs
    STATUS: DEPLOYED
    
    RESOURCES:
    ==> v1/ClusterRole
    NAME     AGE
    openebs  0s
    
    ==> v1/ClusterRoleBinding
    NAME     AGE
    openebs  0s
    
    ==> v1/ConfigMap
    NAME                DATA  AGE
    openebs-ndm-config  1     0s
    
    ==> v1/DaemonSet
    NAME         DESIRED  CURRENT  READY  UP-TO-DATE  AVAILABLE  NODE SELECTOR  AGE
    openebs-ndm  3        3        0      3           0          <none>         0s
    
    ==> v1/Deployment
    NAME                         READY  UP-TO-DATE  AVAILABLE  AGE
    openebs-admission-server     0/1    0           0          0s
    openebs-apiserver            0/1    0           0          0s
    openebs-localpv-provisioner  0/1    1           0          0s
    openebs-ndm-operator         0/1    1           0          0s
    openebs-provisioner          0/1    1           0          0s
    openebs-snapshot-operator    0/1    1           0          0s
    
    ==> v1/Pod(related)
    NAME                                          READY  STATUS             RESTARTS  AGE
    openebs-admission-server-5cf6864fbf-5fs2h     0/1    ContainerCreating  0         0s
    openebs-apiserver-bc55cd99b-n6f67             0/1    ContainerCreating  0         0s
    openebs-localpv-provisioner-85ff89dd44-n26ff  0/1    ContainerCreating  0         0s
    openebs-ndm-8w67w                             0/1    ContainerCreating  0         0s
    openebs-ndm-jj2vh                             0/1    ContainerCreating  0         0s
    openebs-ndm-operator-87df44d9-6lbcx           0/1    ContainerCreating  0         0s
    openebs-ndm-s8mbd                             0/1    ContainerCreating  0         0s
    openebs-provisioner-7f86c6bb64-56cmf          0/1    ContainerCreating  0         0s
    openebs-snapshot-operator-54b9c886bf-68nsf    0/2    ContainerCreating  0         0s
    
    ==> v1/Service
    NAME                TYPE       CLUSTER-IP   EXTERNAL-IP  PORT(S)   AGE
    openebs-apiservice  ClusterIP  10.96.7.135  <none>       5656/TCP  0s
    
    ==> v1/ServiceAccount
    NAME     SECRETS  AGE
    openebs  1        0s
    
    
    NOTES:
    The OpenEBS has been installed. Check its status by running:
    $ kubectl get pods -n openebs
    
    For dynamically creating OpenEBS Volumes, you can either create a new StorageClass or
    use one of the default storage classes provided by OpenEBS.
    
    Use `kubectl get sc` to see the list of installed OpenEBS StorageClasses. A sample
    PVC spec using `openebs-jiva-default` StorageClass is given below:"
    
    ---
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: demo-vol-claim
    spec:
      storageClassName: openebs-jiva-default
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5G
    ---
    
    For more information, please visit http://docs.openebs.io/.
    
    Please note that, OpenEBS uses iSCSI for connecting applications with the
    OpenEBS Volumes and your nodes should have the iSCSI initiator installed.
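
    As the NOTES point out, OpenEBS Jiva volumes are attached over iSCSI, so every node needs an iSCSI initiator. A sketch for CentOS 7 nodes, assuming the stock iscsi-initiator-utils package and the iscsid service (run on all 3 nodes):

    yum install -y iscsi-initiator-utils
    systemctl enable --now iscsid    # start the initiator daemon and enable it at boot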
    

    3.4 Installing OpenEBS automatically creates 4 StorageClasses; list them

    [root@k8s-node1 ~]# kubectl get sc
    NAME                        PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    openebs-device              openebs.io/local                                           Delete          WaitForFirstConsumer   false                  18m
    openebs-hostpath            openebs.io/local                                           Delete          WaitForFirstConsumer   false                  18m
    openebs-jiva-default        openebs.io/provisioner-iscsi                               Delete          Immediate              false                  18m
    openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  18m
    ### check pod status
    [root@k8s-node1 ~]# kubectl get pods --all-namespaces
    NAMESPACE       NAME                                             READY   STATUS    RESTARTS   AGE
    default         nginx-ingress-controller-5989bf7f8f-p7rck        0/1     Running   2          49m
    default         nginx-ingress-default-backend-5b967cf596-4s9qn   1/1     Running   1          49m
    ingress-nginx   nginx-ingress-controller-s2jmb                   0/1     Running   4          87m
    ingress-nginx   nginx-ingress-controller-tzhmm                   0/1     Running   3          29m
    ingress-nginx   nginx-ingress-controller-wrlcm                   0/1     Running   3          87m
    kube-system     coredns-7f9c544f75-bdrf5                         1/1     Running   3          122m
    kube-system     coredns-7f9c544f75-ltfw6                         1/1     Running   3          122m
    kube-system     etcd-k8s-node1                                   1/1     Running   3          122m
    kube-system     kube-apiserver-k8s-node1                         1/1     Running   3          122m
    kube-system     kube-controller-manager-k8s-node1                1/1     Running   4          122m
    kube-system     kube-flannel-ds-amd64-jfjlw                      1/1     Running   6          116m
    kube-system     kube-flannel-ds-amd64-vwjrd                      1/1     Running   4          117m
    kube-system     kube-flannel-ds-amd64-wqbhw                      1/1     Running   5          119m
    kube-system     kube-proxy-clts7                                 1/1     Running   3          117m
    kube-system     kube-proxy-wnq6t                                 1/1     Running   4          116m
    kube-system     kube-proxy-xjz7c                                 1/1     Running   3          122m
    kube-system     kube-scheduler-k8s-node1                         1/1     Running   4          122m
    kube-system     tiller-deploy-797955c678-gtblv                   0/1     Running   2          52m
    openebs         openebs-admission-server-5cf6864fbf-5fs2h        1/1     Running   2          28m
    openebs         openebs-apiserver-bc55cd99b-n6f67                1/1     Running   9          28m
    openebs         openebs-localpv-provisioner-85ff89dd44-n26ff     1/1     Running   2          28m
    openebs         openebs-ndm-8w67w                                1/1     Running   3          28m
    openebs         openebs-ndm-jj2vh                                1/1     Running   3          28m
    openebs         openebs-ndm-operator-87df44d9-6lbcx              0/1     Running   3          28m
    openebs         openebs-ndm-s8mbd                                1/1     Running   3          28m
    openebs         openebs-provisioner-7f86c6bb64-56cmf             1/1     Running   2          28m
    openebs         openebs-snapshot-operator-54b9c886bf-68nsf       2/2     Running   1          28m
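
    Some pods above (e.g. openebs-ndm-operator) are still not Ready. Before continuing, it can help to block until every OpenEBS pod reports Ready; one hedged way is kubectl wait:

    kubectl wait --for=condition=Ready pod --all -n openebs --timeout=300s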
    

    3.5 Set openebs-hostpath as the default StorageClass:

    [root@k8s-node1 ~]# kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
    storageclass.storage.k8s.io/openebs-hostpath patched
    [root@k8s-node1 ~]# kubectl get sc
    NAME                         PROVISIONER                                                RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
    openebs-device               openebs.io/local                                           Delete          WaitForFirstConsumer   false                  21m
    openebs-hostpath (default)   openebs.io/local                                           Delete          WaitForFirstConsumer   false                  21m
    openebs-jiva-default         openebs.io/provisioner-iscsi                               Delete          Immediate              false                  21m
    openebs-snapshot-promoter    volumesnapshot.external-storage.k8s.io/snapshot-promoter   Delete          Immediate              false                  21m
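
    To verify the new default, a test PVC can be created without a storageClassName; it should be served by openebs-hostpath. Because that class uses WaitForFirstConsumer, the claim stays Pending until a pod mounts it. A sketch (the name test-pvc is only an example):

    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    EOF
    kubectl get pvc test-pvc   # expect Pending until a consumer pod is scheduled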
    
    

    3.6 Re-apply the Taint to the master node so that business workloads are not scheduled onto it and do not compete for master resources

    [root@k8s-node1 ~]# kubectl taint nodes k8s-node1 node-role.kubernetes.io/master=:NoSchedule
    node/k8s-node1 tainted
    
    • At this point the prerequisites are satisfied; next, install KubeSphere.

    4. KubeSphere minimal installation

    # upstream manifest for the minimal installation:
    kubectl apply -f https://raw.githubusercontent.com/kubesphere/ks-installer/master/kubesphere-minimal.yaml
    # here it was saved locally and applied as kubespere-mini.yaml:
    [root@k8s-node1 k8s]# kubectl apply -f kubespere-mini.yaml 
    namespace/kubesphere-system created
    configmap/ks-installer created
    serviceaccount/ks-installer created
    clusterrole.rbac.authorization.k8s.io/ks-installer created
    clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
    deployment.apps/ks-installer created
    [root@node02 ~]#   kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f   # follow the installer log
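
    Once the installer log reports a successful result, the console should be reachable. A hedged check, assuming the usual KubeSphere 2.x minimal-install defaults (ks-console exposed as a NodePort on 30880, default account admin / P@88w0rd):

    kubectl get svc -n kubesphere-system   # look for the ks-console NodePort
    # then browse to http://<node-ip>:30880 and log in with the default account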
    