  CentOS 7.5: Installing and Configuring Kubernetes 1.12 with kubeadm (Part 4)

    In the previous articles we demonstrated installing Kubernetes from yum packages and from binaries. In this article we will use kubeadm, the officially recommended tool, to install and deploy the cluster.

    kubeadm is the tool the Kubernetes project provides for quickly installing a Kubernetes cluster. It is updated in step with every Kubernetes release, and it adjusts its cluster-configuration practices over time, so experimenting with kubeadm is a good way to learn the project's latest best practices for cluster configuration.

    I. Environment Preparation (All Nodes)

    1. Software Versions

    Software     Version
    Kubernetes   v1.12.2
    CentOS       7.5 (CentOS Linux release 7.5.1804)
    Docker       v18.06
    flannel      v0.10.0

    2. Node Plan

    IP             Role         Hostname
    172.18.8.200   k8s master   master.wzlinux.com
    172.18.8.201   k8s node01   node01.wzlinux.com
    172.18.8.202   k8s node02   node02.wzlinux.com

    (Figure omitted: node and network topology diagram.)

    3. System Configuration

    Disable the firewall.

    systemctl stop firewalld
    systemctl disable firewalld
    

    Configure /etc/hosts by adding the following entries.

    172.18.8.200 master.wzlinux.com master
    172.18.8.201 node01.wzlinux.com node01
    172.18.8.202 node02.wzlinux.com node02
    

    Disable SELinux.

    sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
    setenforce 0
    

    Disable swap.

    swapoff -a
    sed -i 's/.*swap.*/#&/' /etc/fstab
    

    Configure the kernel bridge/forwarding parameters.

    cat <<EOF >  /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl --system
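
    If sysctl reports that these net.bridge keys do not exist, the br_netfilter module is probably not loaded yet; a quick fix and check (assuming a standard CentOS 7.5 kernel):

    modprobe br_netfilter
    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables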
    

    Set up the Aliyun Kubernetes yum repository (a mirror reachable from inside China).

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    

    4. Installing Docker

    Both the master and the worker nodes need a container engine, so we install Docker on every machine up front.
    Add the Docker CE repository (via the Aliyun mirror).

    wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -P /etc/yum.repos.d/
    

    List the Docker versions currently available in the repository.

    [root@master ~]# yum list docker-ce.x86_64  --showduplicates |sort -r
    Loaded plugins: fastestmirror
    Available Packages
     * updates: mirrors.aliyun.com
    Loading mirror speeds from cached hostfile
     * extras: mirrors.aliyun.com
    docker-ce.x86_64            3:18.09.0-3.el7                     docker-ce-stable
    docker-ce.x86_64            18.06.1.ce-3.el7                    docker-ce-stable
    docker-ce.x86_64            18.06.0.ce-3.el7                    docker-ce-stable
    docker-ce.x86_64            18.03.1.ce-1.el7.centos             docker-ce-stable
    docker-ce.x86_64            18.03.0.ce-1.el7.centos             docker-ce-stable
    docker-ce.x86_64            17.12.1.ce-1.el7.centos             docker-ce-stable
    docker-ce.x86_64            17.12.0.ce-1.el7.centos             docker-ce-stable
    docker-ce.x86_64            17.09.1.ce-1.el7.centos             docker-ce-stable
    docker-ce.x86_64            17.09.0.ce-1.el7.centos             docker-ce-stable
    docker-ce.x86_64            17.06.2.ce-1.el7.centos             docker-ce-stable
    docker-ce.x86_64            17.06.1.ce-1.el7.centos             docker-ce-stable
    docker-ce.x86_64            17.06.0.ce-1.el7.centos             docker-ce-stable
    docker-ce.x86_64            17.03.3.ce-1.el7                    docker-ce-stable
    docker-ce.x86_64            17.03.2.ce-1.el7.centos             docker-ce-stable
    docker-ce.x86_64            17.03.1.ce-1.el7.centos             docker-ce-stable
    docker-ce.x86_64            17.03.0.ce-1.el7.centos             docker-ce-stable
     * base: mirrors.aliyun.com
    

    Per the official recommendation for this Kubernetes release, we install v18.06.

    yum install docker-ce-18.06.1.ce -y
    

    Configure a registry mirror (accelerator) to speed up image pulls inside China.

    sudo mkdir -p /etc/docker
    sudo tee /etc/docker/daemon.json <<-'EOF'
    {
      "registry-mirrors": ["https://hdi5v8p1.mirror.aliyuncs.com"]
    }
    EOF
    

    Enable and start Docker.

    systemctl daemon-reload
    systemctl enable docker
    systemctl start docker
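
    Optionally, verify that the daemon is running and the mirror is active (the exact docker info layout varies by Docker version):

    docker info | grep -A 1 'Registry Mirrors'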
    

    5. Installing the Kubernetes Components

    Install kubelet, kubeadm, and kubectl from the repository configured above, and enable the kubelet service.

    yum install kubelet kubeadm kubectl -y
    systemctl enable kubelet && systemctl start kubelet
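
    This installs the latest packages in the repository. If you want the binaries to match the v1.12.2 images used below exactly, pin the versions instead (assuming the 1.12.2 packages are still available in the Aliyun repo):

    yum install -y kubelet-1.12.2 kubeadm-1.12.2 kubectl-1.12.2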
    

    6. Loading the IPVS Kernel Modules

    Load the ipvs kernel modules so that kube-proxy on the nodes can use ipvs proxy rules.

    modprobe ip_vs_rr
    modprobe ip_vs_wrr
    modprobe ip_vs_sh
    

    Also add them to /etc/rc.local so they are loaded again at boot. Note that on CentOS 7, /etc/rc.d/rc.local is not executable by default, so make it executable as well:

    cat <<EOF >> /etc/rc.local
    modprobe ip_vs_rr
    modprobe ip_vs_wrr
    modprobe ip_vs_sh
    EOF
    chmod +x /etc/rc.d/rc.local
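
    Verify that the modules are loaded:

    lsmod | grep ip_vs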
    

    II. Installing the Master Node

    1. Initializing the Master

    The Google registry (k8s.gcr.io) is not reachable from inside China, so the workaround is to pull the images from another registry and re-tag them. Make sure the image versions match your kubeadm version as closely as possible; here we use v1.12.2. Running the shell script below takes care of it.

    #!/bin/bash
    # Pull the v1.12.2 control-plane images from an Aliyun mirror,
    # re-tag them as k8s.gcr.io so kubeadm can find them locally,
    # then remove the mirror-tagged copies.
    kube_version=:v1.12.2
    kube_images=(kube-proxy kube-scheduler kube-controller-manager kube-apiserver)
    addon_images=(etcd-amd64:3.2.24 coredns:1.2.2 pause-amd64:3.1)

    # Core components are published on the mirror with an -amd64 suffix.
    for imageName in ${kube_images[@]} ; do
      docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version
      docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version k8s.gcr.io/$imageName$kube_version
      docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version
    done

    # Add-ons: etcd, coredns, and pause.
    for imageName in ${addon_images[@]} ; do
      docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
      docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
      docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    done

    # kubeadm expects etcd and pause without the -amd64 suffix.
    docker tag k8s.gcr.io/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
    docker image rm k8s.gcr.io/etcd-amd64:3.2.24
    docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1
    docker image rm k8s.gcr.io/pause-amd64:3.1
    

    If you are unsure which image versions the script should use, you can run kubeadm init once and read the required versions from its error output, then fetch exactly those images.
    When kubeadm is upgraded, pick the matching newer versions and download those images instead.
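
    Since v1.11, kubeadm can also print exactly which images a given release needs, which avoids the guesswork:

    kubeadm config images list --kubernetes-version=v1.12.2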

    After running the script, all the required images are available locally. Here we rely on a public mirror that someone else maintains; you could of course also build your own private registry.

    [root@master ~]# docker images
    REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
    k8s.gcr.io/kube-proxy                v1.12.2             15e9da1ca195        4 weeks ago         96.5MB
    k8s.gcr.io/kube-apiserver            v1.12.2             51a9c329b7c5        4 weeks ago         194MB
    k8s.gcr.io/kube-controller-manager   v1.12.2             15548c720a70        4 weeks ago         164MB
    k8s.gcr.io/kube-scheduler            v1.12.2             d6d57c76136c        4 weeks ago         58.3MB
    k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        2 months ago        220MB
    k8s.gcr.io/coredns                   1.2.2               367cdc8433a4        3 months ago        39.2MB
    k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        11 months ago       742kB
    

    Use kubeadm init to install the master node automatically; the Kubernetes version must be specified.

    kubeadm init --kubernetes-version=v1.12.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
    
    [init] using Kubernetes version: v1.12.2
    [preflight] running pre-flight checks
    [preflight/images] Pulling images required for setting up a Kubernetes cluster
    [preflight/images] This might take a minute or two, depending on the speed of your internet connection
    [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [preflight] Activating the kubelet service
    [certificates] Generated etcd/ca certificate and key.
    [certificates] Generated etcd/server certificate and key.
    [certificates] etcd/server serving cert is signed for DNS names [master.wzlinux.com localhost] and IPs [127.0.0.1 ::1]
    [certificates] Generated etcd/peer certificate and key.
    [certificates] etcd/peer serving cert is signed for DNS names [master.wzlinux.com localhost] and IPs [172.18.8.200 127.0.0.1 ::1]
    [certificates] Generated apiserver-etcd-client certificate and key.
    [certificates] Generated etcd/healthcheck-client certificate and key.
    [certificates] Generated ca certificate and key.
    [certificates] Generated apiserver certificate and key.
    [certificates] apiserver serving cert is signed for DNS names [master.wzlinux.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.18.8.200]
    [certificates] Generated apiserver-kubelet-client certificate and key.
    [certificates] Generated front-proxy-ca certificate and key.
    [certificates] Generated front-proxy-client certificate and key.
    [certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
    [certificates] Generated sa key and public key.
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
    [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
    [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
    [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
    [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
    [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
    [init] this might take a minute or longer if the control plane images have to be pulled
    [apiclient] All control plane components are healthy after 20.005448 seconds
    [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
    [markmaster] Marking the node master.wzlinux.com as master by adding the label "node-role.kubernetes.io/master=''"
    [markmaster] Marking the node master.wzlinux.com as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master.wzlinux.com" as an annotation
    [bootstraptoken] using token: 3mfpdm.atgk908eq1imgwqp
    [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes master has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of machines by running the following on each node
    as root:
    
      kubeadm join 172.18.8.200:6443 --token 3mfpdm.atgk908eq1imgwqp --discovery-token-ca-cert-hash sha256:ff67ead9f43931f08e67873ba00695cd4b997f87dace5255ff45fc386b08941d
    
    

    After initialization succeeds, follow the prompts in the output to configure kubectl access:

    mkdir -p $HOME/.kube
    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
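
    Alternatively, when working as root you can simply point KUBECONFIG at the admin config:

    export KUBECONFIG=/etc/kubernetes/admin.conf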
    

    2. Configuring the Pod Network

    A pod network add-on is required so that pods can communicate with each other. The network must be deployed before applications are started and before kube-dns (CoreDNS) can come up; kubeadm only supports CNI-based networks.

    There are many pod network plugins to choose from, such as Calico, Canal, Flannel, Romana, and Weave Net. Because we passed --pod-network-cidr=10.244.0.0/16 during initialization, we use the flannel plugin here.

    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
    

    Check that everything starts correctly; pulling the flannel image can take a little while.

    [root@master ~]# kubectl get pods --all-namespaces
    NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
    kube-system   coredns-576cbf47c7-ptzmh                     1/1     Running   0          22m
    kube-system   coredns-576cbf47c7-q78r9                     1/1     Running   0          22m
    kube-system   etcd-master.wzlinux.com                      1/1     Running   0          21m
    kube-system   kube-apiserver-master.wzlinux.com            1/1     Running   0          22m
    kube-system   kube-controller-manager-master.wzlinux.com   1/1     Running   0          22m
    kube-system   kube-flannel-ds-amd64-vqtzq                  1/1     Running   0          5m54s
    kube-system   kube-proxy-ld262                             1/1     Running   0          22m
    kube-system   kube-scheduler-master.wzlinux.com            1/1     Running   0          22m
    

    Troubleshooting checklist (a concrete example follows the list):

    • Confirm that the ports and containers started correctly, and check the logs in /var/log/messages.
    • Use docker logs CONTAINER-ID to inspect container startup logs, especially for containers that keep getting re-created.
    • Use kubectl --namespace=kube-system describe pod POD-NAME to inspect pods stuck in an error state.
    • Use kubectl -n ${NAMESPACE} logs ${POD_NAME} -c ${CONTAINER_NAME} to see the specific error.
    • Calico, Canal, and Flannel have been validated by the Kubernetes project; other network plugins may have pitfalls, and whether you can climb out of them depends on your own skill.
    • The most common errors are a wrong image name or version, or images that cannot be downloaded.
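
    For example, to dig into the flannel pod from the listing above (substitute the pod name from your own cluster; kube-flannel is the main container of the flannel DaemonSet):

    kubectl -n kube-system describe pod kube-flannel-ds-amd64-vqtzq
    kubectl -n kube-system logs kube-flannel-ds-amd64-vqtzq -c kube-flannel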

    III. Installing the Worker Nodes

    1. Downloading the Required Images

    The worker nodes need images too, mainly kube-proxy and pause (plus coredns); the list is shorter than on the master.

    #!/bin/bash
    # Pull the images a worker node needs (kube-proxy, pause, coredns)
    # from an Aliyun mirror and re-tag them as k8s.gcr.io.

    kube_version=:v1.12.2
    coredns_version=1.2.2
    pause_version=3.1

    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version k8s.gcr.io/kube-proxy$kube_version
    docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version

    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version k8s.gcr.io/pause:$pause_version
    docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version

    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version k8s.gcr.io/coredns:$coredns_version
    docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version
    

    Check the downloaded images.

    [root@node01 ~]# docker images
    REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE
    k8s.gcr.io/kube-proxy   v1.12.2             15e9da1ca195        4 weeks ago         96.5MB
    k8s.gcr.io/pause        3.1                 da86e6ba6ca1        11 months ago       742kB
    

    2. Joining the Nodes (node01 as the example)

    When kubeadm init succeeded on the master, the final lines of its output contained a kubeadm join command; that is exactly what we use to add worker nodes.

    kubeadm join 172.18.8.200:6443 --token 3mfpdm.atgk908eq1imgwqp --discovery-token-ca-cert-hash sha256:ff67ead9f43931f08e67873ba00695cd4b997f87dace5255ff45fc386b08941d
    
    [preflight] running pre-flight checks
    [discovery] Trying to connect to API Server "172.18.8.200:6443"
    [discovery] Created cluster-info discovery client, requesting info from "https://172.18.8.200:6443"
    [discovery] Requesting info from "https://172.18.8.200:6443" again to validate TLS against the pinned public key
    [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.18.8.200:6443"
    [discovery] Successfully established connection with API Server "172.18.8.200:6443"
    [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
    [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [preflight] Activating the kubelet service
    [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01.wzlinux.com" as an annotation
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the master to see this node join the cluster.
    
    

    Tip: if the join command reports that the token has expired, run kubeadm token create on the master to generate a new one, as prompted.
    If you have lost the token, kubeadm token list will show the existing ones.
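
    A convenient variant, available in kubeadm v1.12, prints a complete join command with a fresh token; the CA certificate hash can also be recomputed from the CA certificate with openssl:

    kubeadm token create --print-join-command
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
      openssl dgst -sha256 -hex | sed 's/^.* //'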

    After running the join command on each node, check the node list on the master.

    [root@master ~]# kubectl get nodes
    NAME                 STATUS   ROLES    AGE   VERSION
    master.wzlinux.com   Ready    master   64m   v1.12.2
    node01.wzlinux.com   Ready    <none>   32m   v1.12.2
    node02.wzlinux.com   Ready    <none>   15m   v1.12.2
    

    You can copy the master's admin kubeconfig to the nodes so that kubectl works there as well (create /root/.kube on the node first):

    ssh 172.18.8.201 mkdir -p /root/.kube
    scp /etc/kubernetes/admin.conf  172.18.8.201:/root/.kube/config
    

    Create a few pods to try it out.

    [root@master ~]# kubectl run nginx --image=nginx --replicas=3
    
    [root@master ~]# kubectl get pods -o wide
    NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE                 NOMINATED NODE
    nginx-dbddb74b8-7qnsl   1/1     Running   0          27s   10.244.2.2   node02.wzlinux.com   <none>
    nginx-dbddb74b8-ck4l9   1/1     Running   0          27s   10.244.1.2   node01.wzlinux.com   <none>
    nginx-dbddb74b8-rpc2r   1/1     Running   0          27s   10.244.1.3   node01.wzlinux.com   <none>
    

    (Figure omitted: complete cluster architecture diagram.)

    IV. A Demo Application

    To help you better understand the Kubernetes architecture, let's deploy an application and walk through how the components cooperate.

    kubectl run httpd-app --image=httpd --replicas=2
    

    Check the deployed application.

    [root@master ~]# kubectl get  pod -o wide
    NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE                 NOMINATED NODE
    httpd-app-66cb7d499b-gskrg   1/1     Running   0          59s   10.244.1.2   node01.wzlinux.com   <none>
    httpd-app-66cb7d499b-km5t8   1/1     Running   0          59s   10.244.2.2   node02.wzlinux.com   <none>
    

    Kubernetes created the deployment httpd-app with two replica Pods, running on node01 and node02 respectively.

    The overall deployment flow is as follows:

    1. kubectl sends the deployment request to the API Server.
    2. The API Server notifies the Controller Manager to create a deployment resource.
    3. The Scheduler performs scheduling and assigns the two replica Pods to node01 and node02.
    4. The kubelet on node01 and node02 creates and runs the Pods on its node.

    The application's configuration and current state are stored in etcd; when you run kubectl get pod, the API Server reads that data from etcd.
    flannel assigns an IP to every Pod. Because no service has been created yet, kube-proxy is not involved at this point; see the example below.
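
    To see kube-proxy come into play, expose the deployment as a service; a minimal sketch using the httpd-app deployment from above:

    kubectl expose deployment httpd-app --port=80
    kubectl get svc httpd-app

    kube-proxy then programs the service's ClusterIP on every node (with iptables rules by default; the next section switches this to ipvs).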

    Everything is OK. At this point the cluster is fully deployed and you can start running applications on it.

    V. Enabling ipvs Mode in kube-proxy

    kube-proxy gained ipvs support in Kubernetes 1.8, and the feature graduated to GA in Kubernetes 1.11.

    In iptables mode, problems are hard to pin down, performance drops noticeably as the number of rules grows, and rules can even be lost; ipvs is much more stable by comparison.

    The default installation uses iptables, so we need to change the configuration to enable ipvs.

    1. Load the kernel modules.

    modprobe ip_vs_rr
    modprobe ip_vs_wrr
    modprobe ip_vs_sh
    

    2. Edit the kube-proxy configuration

    kubectl edit configmap kube-proxy -n kube-system
    

    Find the following section and set mode to "ipvs".

        kind: KubeProxyConfiguration
        metricsBindAddress: 127.0.0.1:10249
        mode: "ipvs"
        nodePortAddresses: null
        oomScoreAdj: -999
    

    mode is empty by default, which means iptables; change it to ipvs. scheduler is also empty by default, which selects the round-robin (rr) load-balancing algorithm (see the sketch below).
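
    The same ConfigMap also contains an ipvs block where the algorithm can be pinned explicitly; a sketch, assuming the v1.12 KubeProxyConfiguration layout (here pinning round-robin):

        ipvs:
          excludeCIDRs: null
          minSyncPeriod: 0s
          scheduler: "rr"
          syncPeriod: 30s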

    3. Delete all existing kube-proxy pods

    kubectl delete pod kube-proxy-xxx -n kube-system
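
    Instead of deleting the pods one by one, you can delete them all by label; the DaemonSet immediately recreates them with the new configuration:

    kubectl -n kube-system delete pod -l k8s-app=kube-proxy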
    

    4. Check the kube-proxy pod logs

    [root@master ~]# kubectl logs kube-proxy-t4t8j -n kube-system
    I1211 03:43:01.297068       1 server_others.go:189] Using ipvs Proxier.
    W1211 03:43:01.297549       1 proxier.go:365] IPVS scheduler not specified, use rr by default
    I1211 03:43:01.297698       1 server_others.go:216] Tearing down inactive rules.
    I1211 03:43:01.355516       1 server.go:464] Version: v1.13.0
    I1211 03:43:01.366922       1 conntrack.go:52] Setting nf_conntrack_max to 196608
    I1211 03:43:01.367294       1 config.go:102] Starting endpoints config controller
    I1211 03:43:01.367304       1 config.go:202] Starting service config controller
    I1211 03:43:01.367327       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
    I1211 03:43:01.367343       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
    I1211 03:43:01.467475       1 controller_utils.go:1034] Caches are synced for service config controller
    I1211 03:43:01.467485       1 controller_utils.go:1034] Caches are synced for endpoints config controller
    

    5. Install ipvsadm

    Use ipvsadm to inspect the ipvs rules; if the command is missing, install it with yum.

    yum install -y ipvsadm
    
    [root@master ~]# ipvsadm -ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  10.96.0.1:443 rr
      -> 172.18.8.200:6443           Masq    1      0          0         
    TCP  10.96.0.10:53 rr
      -> 10.244.0.4:53                Masq    1      0          0         
      -> 10.244.0.5:53                Masq    1      0          0         
    UDP  10.96.0.10:53 rr
      -> 10.244.0.4:53                Masq    1      0          0         
      -> 10.244.0.5:53                Masq    1      0          0         
    

    Appendix: The Generated Component Config Files

    The plain-text key material would take up too much space, so below it is replaced with the placeholder <key data>.

    admin.conf

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: <key data>
        server: https://172.18.8.200:6443
      name: kubernetes
    contexts:
    - context:
        cluster: kubernetes
        user: kubernetes-admin
      name: kubernetes-admin@kubernetes
    current-context: kubernetes-admin@kubernetes
    kind: Config
    preferences: {}
    users:
    - name: kubernetes-admin
      user:
        client-certificate-data: <key data>
        client-key-data: <key data>
    

    controller-manager.conf

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: <key data>
        server: https://172.18.8.200:6443
      name: kubernetes
    contexts:
    - context:
        cluster: kubernetes
        user: system:kube-controller-manager
      name: system:kube-controller-manager@kubernetes
    current-context: system:kube-controller-manager@kubernetes
    kind: Config
    preferences: {}
    users:
    - name: system:kube-controller-manager
      user:
        client-certificate-data: <key data>
        client-key-data: <key data>
    

    kubelet.conf

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: <key data>
        server: https://172.18.8.200:6443
      name: kubernetes
    contexts:
    - context:
        cluster: kubernetes
        user: system:node:master.wzlinux.com
      name: system:node:master.wzlinux.com@kubernetes
    current-context: system:node:master.wzlinux.com@kubernetes
    kind: Config
    preferences: {}
    users:
    - name: system:node:master.wzlinux.com
      user:
        client-certificate-data: <key data>
        client-key-data: <key data>
    

    scheduler.conf

    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: <key data>
        server: https://172.18.8.200:6443
      name: kubernetes
    contexts:
    - context:
        cluster: kubernetes
        user: system:kube-scheduler
      name: system:kube-scheduler@kubernetes
    current-context: system:kube-scheduler@kubernetes
    kind: Config
    preferences: {}
    users:
    - name: system:kube-scheduler
      user:
        client-certificate-data: <key data>
        client-key-data: <key data>
    

    Reference: https://kubernetes.io/docs/setup/independent/install-kubeadm/
