  • kubeadm install kubernetes

    Installation references

    1. https://www.kubernetes.org.cn/3805.html
    2. https://www.cnblogs.com/liangDream/p/7358847.html#undefined
    3. High-availability (HA) setup: https://www.kubernetes.org.cn/3536.html

    Installation process

    [root@node1 ~]# kubeadm init --kubernetes-version=v1.10.0 --pod-network-cidr=10.1.0.0/16 --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=192.168.0.30,192.168.0.31,192.168.0.36,127.0.0.1,node1,node2 --skip-preflight-checks
    Flag --skip-preflight-checks has been deprecated, it is now equivalent to --ignore-preflight-errors=all
    [init] Using Kubernetes version: v1.10.0
    [init] Using Authorization modes: [Node RBAC]
    [preflight] Running pre-flight checks.
        [WARNING Port-6443]: Port 6443 is in use
        [WARNING Port-10250]: Port 10250 is in use
        [WARNING Port-10251]: Port 10251 is in use
        [WARNING Port-10252]: Port 10252 is in use
        [WARNING FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [WARNING FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [WARNING FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [WARNING FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [WARNING FileExisting-crictl]: crictl not found in system path
    Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
        [WARNING Port-2379]: Port 2379 is in use
        [WARNING DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
    [certificates] Using the existing ca certificate and key.
    [certificates] Using the existing apiserver certificate and key.
    [certificates] Using the existing apiserver-kubelet-client certificate and key.
    [certificates] Using the existing etcd/ca certificate and key.
    [certificates] Using the existing etcd/server certificate and key.
    [certificates] Using the existing etcd/peer certificate and key.
    [certificates] Using the existing etcd/healthcheck-client certificate and key.
    [certificates] Using the existing apiserver-etcd-client certificate and key.
    [certificates] Using the existing sa key.
    [certificates] Using the existing front-proxy-ca certificate and key.
    [certificates] Using the existing front-proxy-client certificate and key.
    [certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
    
    [kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
    [kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
    [kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
    [kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
    [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
    [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
    [init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
    [init] This might take a minute or longer if the control plane images have to be pulled.
    [apiclient] All control plane components are healthy after 15.003265 seconds
    [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [markmaster] Will mark node node1 as master by adding a label and a taint
    [markmaster] Master node1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
    [bootstraptoken] Using token: 0jno1c.9zq6om8oilnkv5gw
    
    [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: kube-dns
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes master has initialized successfully!
    
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of machines by running the following on each node
    as root:
    
      kubeadm join 192.168.0.30:6443 --token 0jno1c.9zq6om8oilnkv5gw --discovery-token-ca-cert-hash sha256:d8ebdaece073af6205683d65d966079a8ea7102e47f8750d333571487b9312df
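The `--discovery-token-ca-cert-hash` value in the join command is just the SHA-256 digest of the cluster CA's public key in DER form. A minimal sketch of recomputing it, run here against a throwaway self-signed certificate so it can be tried anywhere openssl is installed (on a real master you would point it at /etc/kubernetes/pki/ca.crt):

```shell
# Generate a throwaway CA cert purely for demonstration.
workdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=kubernetes" \
  -keyout "$workdir/ca.key" -out "$workdir/ca.crt" 2>/dev/null

# SHA-256 of the DER-encoded public key, the format kubeadm expects
# for --discovery-token-ca-cert-hash.
hash=$(openssl x509 -pubkey -noout -in "$workdir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash"
rm -rf "$workdir"
```

If the hash in a saved join command has gone stale, recomputing it this way against the master's real ca.crt avoids having to re-run `kubeadm init`.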
    
    [root@node1 ~]# cd kubeadm/
    [root@node1 kubeadm]# ls
    get_k8simages.sh  nohup.out
    [root@node1 kubeadm]# vim token
    [root@node1 kubeadm]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
    [root@node1 kubeadm]# source ../.bash_profile
    [root@node1 kubeadm]# echo $KUBECONFIG
    /etc/kubernetes/admin.conf
    [root@node1 kubeadm]# mkdir -p /etc/cni/net.d/
    [root@node1 kubeadm]# cat <<EOF> /etc/cni/net.d/10-flannel.conf
    > {
    >   "name": "cbr0",
    >   "type": "flannel",
    >   "delegate": {
    >     "isDefaultGateway": true
    >   }
    > }
    > EOF
    [root@node1 kubeadm]# vim /etc/cni/net.d/10-flannel.conf
    [root@node1 kubeadm]# mkdir /usr/share/oci-umount/oci-umount.d -p
    [root@node1 kubeadm]# mkdir /run/flannel/
    [root@node1 kubeadm]# cat <<EOF> /run/flannel/subnet.env
    > FLANNEL_NETWORK=10.244.0.0/16
    > FLANNEL_SUBNET=10.244.1.0/24
    > FLANNEL_MTU=1450
    > FLANNEL_IPMASQ=true
    > EOF
    [root@node1 kubeadm]# vim /run/flannel/subnet.env
    [root@node1 kubeadm]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
    clusterrole.rbac.authorization.k8s.io "flannel" created
    clusterrolebinding.rbac.authorization.k8s.io "flannel" created
    serviceaccount "flannel" created
    configmap "kube-flannel-cfg" created
    daemonset.extensions "kube-flannel-ds" created
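The two files written by hand above (the CNI conf and flannel's subnet.env) are easy to get wrong; in particular, curly quotes picked up when copying JSON from a web page will break the CNI config. A small sketch that writes both files into a scratch directory and sanity-checks them (it assumes `python3` is available for the JSON check; on a real node the targets are /etc/cni/net.d/10-flannel.conf and /run/flannel/subnet.env):

```shell
dir=$(mktemp -d)

# The CNI network config, with plain ASCII quotes.
cat <<'EOF' > "$dir/10-flannel.conf"
{
  "name": "cbr0",
  "type": "flannel",
  "delegate": {
    "isDefaultGateway": true
  }
}
EOF

# The subnet file flannel's CNI plugin reads as shell-style variables.
cat <<'EOF' > "$dir/subnet.env"
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.0/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF

# Fails loudly if the conf is not valid JSON (e.g. curly quotes crept in).
python3 -m json.tool "$dir/10-flannel.conf" > /dev/null && echo "CNI conf: valid JSON"

# Source the env file the way a shell would, to confirm the values parse.
. "$dir/subnet.env"
echo "network=$FLANNEL_NETWORK mtu=$FLANNEL_MTU"
```

Note that FLANNEL_NETWORK here must match the `--pod-network-cidr` passed to `kubeadm init`; the transcript above used 10.1.0.0/16 at init but 10.244.0.0/24 addresses in subnet.env, which is the kind of mismatch this check makes visible.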
    

    Monitoring installation

    Reference: https://www.jianshu.com/p/3f803cd02d74

    Troubleshooting

    1. The kubelet fails to start
    ● kubelet.service - kubelet: The Kubernetes Node Agent
       Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
      Drop-In: /etc/systemd/system/kubelet.service.d
               └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Thu 2018-04-12 13:22:23 CST; 8s ago
         Docs: http://kubernetes.io/docs/
      Process: 4887 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
     Main PID: 4887 (code=exited, status=255)
       Memory: 0B
       CGroup: /system.slice/kubelet.service
    
    Apr 12 13:22:23 node1 systemd[1]: Unit kubelet.service entered failed state.
    Apr 12 13:22:23 node1 systemd[1]: kubelet.service failed.
    

    Solution:

    Edit the 10-kubeadm.conf file and change the cgroup-driver setting:

    [root@centos7-base-ok]# cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true"
    Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
    Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
    Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
    Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
    Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
    Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_EXTRA_ARGS
    

    Change "--cgroup-driver=systemd" to "--cgroup-driver=cgroupfs" so that the kubelet's cgroup driver matches Docker's, then restart the kubelet.

    Reference: https://www.cnblogs.com/liangDream/p/7358847.html#undefined
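The edit above can also be scripted. A sketch of the sed one-liner, exercised here against a scratch copy of the drop-in so it can be dry-run safely (on a real node the target is /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, followed by `systemctl daemon-reload && systemctl restart kubelet`):

```shell
# Build a scratch copy of the relevant line from 10-kubeadm.conf.
conf=$(mktemp)
cat <<'EOF' > "$conf"
[Service]
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
EOF

# Flip the kubelet's cgroup driver in place (GNU sed).
sed -i 's/--cgroup-driver=systemd/--cgroup-driver=cgroupfs/' "$conf"
grep -- '--cgroup-driver=cgroupfs' "$conf"
```

The same substitution works in either direction; the point is only that the kubelet's value must agree with what `docker info` reports as the cgroup driver.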

    2. The kube-dns-xx pod and the flannel pod fail to start

      NAME                            READY     STATUS     RESTARTS   AGE
      etcd-node1                      1/1       Running    0          27s
      kube-apiserver-node1            1/1       Running    0          48s
      kube-controller-manager-node1   1/1       Running    0          30s
      kube-dns-86f4d74b45-zfjzq       0/3       Pending    0          1m
      kube-flannel-ds-wvllr           0/1       Init:0/1   0          9s
      kube-proxy-zk42p                1/1       Running    0          1m
      kube-scheduler-node1            1/1       Running    0          31s
      

      Check /var/log/messages:

    Apr 12 15:46:31 node1 kubelet: W0412 15:46:31.289539    2223 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
    Apr 12 15:46:31 node1 kubelet: E0412 15:46:31.289934    2223 kubelet.go:2125] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
    
    Solution:
    
    mkdir -p /etc/cni/net.d/
    cat <<EOF> /etc/cni/net.d/10-flannel.conf
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
    EOF
    
    3. Problems when joining a new node
    
    [root@node1 kubeadm]# kubeadm join 192.168.0.31:6443 --token 3xzznv.n5cfqpng50z2aa1p
    [preflight] Running pre-flight checks.
        [WARNING FileExisting-crictl]: crictl not found in system path
    Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
    [preflight] Some fatal errors occurred:
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
        [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
        [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
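Each fatal error above points at leftover state from a previous kubeadm run on that node; running `kubeadm reset` (and `systemctl stop kubelet` for the port-10250 error) before re-running `kubeadm join` is the usual fix. A sketch of the equivalent manual cleanup, run against a throwaway root so it can be tested without touching a real node (on a real node, drop the $root prefix):

```shell
# Simulate the leftover files that trigger the preflight errors.
root=$(mktemp -d)
mkdir -p "$root/etc/kubernetes/manifests" "$root/etc/kubernetes/pki"
touch "$root/etc/kubernetes/manifests/etcd.yaml" \
      "$root/etc/kubernetes/pki/ca.crt" \
      "$root/etc/kubernetes/kubelet.conf"

# [ERROR DirAvailable--etc-kubernetes-manifests]: directory must be empty.
rm -rf "$root/etc/kubernetes/manifests"/*
# [ERROR FileAvailable--...]: these files must not pre-exist on a joining node.
rm -f "$root/etc/kubernetes/pki/ca.crt" "$root/etc/kubernetes/kubelet.conf"

ls "$root/etc/kubernetes/manifests" | wc -l   # 0 when clean
```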
    
    4. The kubelet service cannot be (re)started
    Failed to restart kubelet.service: Unit not found.
    

    Solution:

    systemctl daemon-reload
    
    5. Note that the dashboard is not necessarily installed on the master node

    Other references

    1. Kubernetes authentication and service discovery

    2. kubeadm source code analysis

  • Original post: https://www.cnblogs.com/codeblock/p/kubeadm-install-kubernetes.html