2. Quickly setting up a k8s cluster with kubeadm

Preparation:

Synchronize time across all nodes.
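For example, on CentOS 7 chrony keeps the clocks in sync (a sketch; any NTP client works):

chrony sync sketch:
yum -y install chrony
systemctl start chronyd
systemctl enable chronyd
chronyc sources    # verify that a time source is reachable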

    systemctl stop iptables.service
    systemctl stop firewalld.service
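To keep the firewall from coming back after a reboot, disable it as well:

systemctl disable iptables.service
systemctl disable firewalld.service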

Install Docker:

wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce-18.09*

    systemctl start docker
    systemctl enable docker

    vim /etc/docker/daemon.json

{
  "registry-mirrors": ["https://3s01e0d2.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}

    systemctl daemon-reload
    systemctl restart docker
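A quick way to confirm Docker picked up the new settings:

docker info | grep -iE 'cgroup driver|storage driver'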


1. Configure the Kubernetes yum repository.
Aliyun mirror index: https://opsx.alibaba.com/mirror

Save the repo definition as /etc/yum.repos.d/kubernetes.repo:
    [kubernetes]
    name=kubernetes Repo
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

    wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    rpm --import rpm-package-key.gpg
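Confirm the repo is visible to yum before installing:

yum repolist enabled | grep -i kubernetes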


On the master:
2. Install the packages:
yum install kubelet-1.14.3 kubeadm-1.14.3 kubectl-1.14.3 docker-ce-18.09*

    echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
    echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
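The echo commands above do not survive a reboot; to persist the settings, drop them into a sysctl file:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system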


Initialize with kubeadm

Tell kubelet to tolerate swap:
    vim /etc/sysconfig/kubelet
    [root@heaven00 ~]# cat /etc/sysconfig/kubelet
    KUBELET_EXTRA_ARGS="--fail-swap-on=false"
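Alternatively, disable swap outright, which is the only configuration kubeadm officially supports, and enable kubelet so it comes up when init starts it (the sed assumes a standard fstab layout; review before running):

swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab    # comment out the swap entry; check the file first
systemctl enable kubelet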

Initialization command:

    kubeadm init --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.14.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
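Image pulls are the slowest part of init; pre-pulling them surfaces registry problems early (a sketch, assuming kubeadm config images pull accepts the same --image-repository and --kubernetes-version flags as init):

kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.14.3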


If k8s.gcr.io is blocked, the images are also mirrored elsewhere (e.g. quay.io and the mirrorgooglecontainers repos on Docker Hub). For example, for v1.12.2:

docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.12.2
docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.12.2
docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.12.2
docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.12.2
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd-amd64:3.2.24
docker pull coredns/coredns:1.2.2

(See Error 2 below for the v1.14.3 equivalents and the retag step.)




When init succeeds, the tail of its output looks like this:

[addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy

    Your Kubernetes control-plane has initialized successfully!

    To start using your cluster, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/

    Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.250.0.89:6443 --token aji2ef.f103v7o45h7hjeld \
    --discovery-token-ca-cert-hash sha256:fd01c8ced3c470d3d1bf35350c8cb8bbba82fa808f14573654c9ffaf4e29fca6
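If you operate as root, you can skip the copy and point kubectl at admin.conf directly:

export KUBECONFIG=/etc/kubernetes/admin.conf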


Check the status:
[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   18m   v1.14.3
[root@k8s-master ~]#

The master stays NotReady until a pod network add-on is installed.

Install the flannel network add-on:
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml


[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-8686dcc4fd-27wg7             1/1     Running   0          26m
coredns-8686dcc4fd-ks8hc             1/1     Running   0          26m
etcd-k8s-master                      1/1     Running   0          25m
kube-apiserver-k8s-master            1/1     Running   0          25m
kube-controller-manager-k8s-master   1/1     Running   0          26m
kube-flannel-ds-amd64-kgfwd          1/1     Running   0          2m15s
kube-proxy-js82w                     1/1     Running   0          26m
kube-scheduler-k8s-master            1/1     Running   0          25m
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   27m   v1.14.3


    ===================================================================================
Worker node:

Run the same preparation on each node (time sync, firewall, Docker with the systemd cgroup driver, bridge sysctls, swap settings), then install the packages:

yum install kubelet-1.14.3 kubeadm-1.14.3 kubectl-1.14.3 docker-ce-18.09*
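Then join the cluster with the kubeadm join command printed by your own kubeadm init (the token and hash below are the example values from the init output above; --ignore-preflight-errors=Swap is needed if the node also keeps swap on):

kubeadm join 10.250.0.89:6443 --token aji2ef.f103v7o45h7hjeld \
    --discovery-token-ca-cert-hash sha256:fd01c8ced3c470d3d1bf35350c8cb8bbba82fa808f14573654c9ffaf4e29fca6 \
    --ignore-preflight-errors=Swap

Back on the master, the node shows up shortly: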

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   73m     v1.14.3
k8s-node1    Ready    <none>   2m25s   v1.14.3



    ==================================================
Image pulls during deployment are very slow from inside China.

Someone wrote a handy tool for this; I tested it and it works well:
https://github.com/xuxinkun/littleTools#azk8spull
https://www.cnblogs.com/xuxinkun/p/11025020.html








    =================================
Error 1:
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/


Fix:
Edit /etc/docker/daemon.json and set the cgroup driver to systemd:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart Docker:

    systemctl daemon-reload
    systemctl restart docker

Error 2: during kubeadm init, when k8s.gcr.io is unreachable, the image pulls fail like this:

    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.14.3: output: Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: i/o timeout
    , error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.14.3: output: Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: i/o timeout
    , error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.14.3: output: Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: i/o timeout
    , error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.14.3: output: Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: i/o timeout
    , error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.23.82:443: i/o timeout
    , error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.10: output: Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout
    , error: exit status 1
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.3.1: output: Error response from daemon: Get https://k8s.gcr.io/v1/_ping: dial tcp 64.233.188.82:443: i/o timeout
    , error: exit status 1


Fix:
Pull the images from an alternative registry first, then retag them with the k8s.gcr.io names kubeadm expects.

Pull the images:
    docker pull mirrorgooglecontainers/kube-apiserver-amd64:v1.14.3
    docker pull mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.3
    docker pull mirrorgooglecontainers/kube-scheduler-amd64:v1.14.3
    docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.14.3
    docker pull mirrorgooglecontainers/pause:3.1
    docker pull mirrorgooglecontainers/etcd:3.3.10
    docker pull coredns/coredns:1.3.1


Retag them:
    docker tag mirrorgooglecontainers/kube-apiserver-amd64:v1.14.3 k8s.gcr.io/kube-apiserver:v1.14.3
    docker tag mirrorgooglecontainers/kube-controller-manager-amd64:v1.14.3 k8s.gcr.io/kube-controller-manager:v1.14.3
    docker tag mirrorgooglecontainers/kube-scheduler-amd64:v1.14.3 k8s.gcr.io/kube-scheduler:v1.14.3
    docker tag mirrorgooglecontainers/kube-proxy-amd64:v1.14.3 k8s.gcr.io/kube-proxy:v1.14.3
    docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
    docker tag mirrorgooglecontainers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
    docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
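The pull-and-retag sequence can also be scripted; a minimal bash sketch over the same image list:

# Pull from the mirror and retag as k8s.gcr.io (coredns comes from its own repo)
for img in kube-apiserver-amd64:v1.14.3 kube-controller-manager-amd64:v1.14.3 \
           kube-scheduler-amd64:v1.14.3 kube-proxy-amd64:v1.14.3 \
           pause:3.1 etcd:3.3.10; do
    docker pull mirrorgooglecontainers/$img
    docker tag mirrorgooglecontainers/$img k8s.gcr.io/${img/-amd64/}   # strip the -amd64 suffix (bash substitution)
done
docker pull coredns/coredns:1.3.1
docker tag coredns/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1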

    =====================================
Problem 3:
    [root@heaven00 lib]# kubeadm init --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.14.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
    [init] Using Kubernetes version: v1.14.3
    [preflight] Running pre-flight checks
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
    [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
    [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
    [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
    [ERROR Port-10250]: Port 10250 is in use
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
    [root@heaven00 lib]# rm -rf /etc/kubernetes /var/lib/etcd -rf
    [root@heaven00 lib]# kubeadm init --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.14.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
    [init] Using Kubernetes version: v1.14.3
    [preflight] Running pre-flight checks
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR Port-10250]: Port 10250 is in use
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
    [root@heaven00 lib]# docker ps
    CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    [root@heaven00 lib]# netstat -tulnp | grep 10250
    tcp6 0 0 :::10250 :::* LISTEN 17007/kubelet
    [root@heaven00 lib]# systemctl stop kubelet
    [root@heaven00 lib]# netstat -tulnp | grep 10250
    [root@heaven00 lib]# kubeadm init --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.14.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
    [init] Using Kubernetes version: v1.14.3
    [preflight] Running pre-flight checks
    [WARNING Swap]: running with swap on is not supported. Please disable swap
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Activating the kubelet service
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [heaven00 localhost] and IPs [10.139.165.32 127.0.0.1 ::1]
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [heaven00 localhost] and IPs [10.139.165.32 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [heaven00 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.139.165.32]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 17.502305 seconds
    [upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
    [kubelet-check] Initial timeout of 40s passed.
    error execution phase upload-config/kubelet: Error writing Crisocket information for the control-plane node: timed out waiting for the condition

From the system message log (/var/log/messages):

    Jun 20 12:04:33 heaven00 kubelet: E0620 12:04:33.878745 18155 kubelet.go:2244] node "heaven00" not found
    Jun 20 12:04:33 heaven00 kubelet: E0620 12:04:33.978932 18155 kubelet.go:2244] node "heaven00" not found
    Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.035393 18155 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Unauthorized
    Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.079094 18155 kubelet.go:2244] node "heaven00" not found
    Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.179282 18155 kubelet.go:2244] node "heaven00" not found
    Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.235901 18155 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Unauthorized
    Jun 20 12:04:34 heaven00 kubelet: W0620 12:04:34.274044 18155 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
    Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.279465 18155 kubelet.go:2244] node "heaven00" not found
    Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.379615 18155 kubelet.go:2244] node "heaven00" not found
    Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.435968 18155 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Unauthorized
    Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.479820 18155 kubelet.go:2244] node "heaven00" not found
    Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.580004 18155 kubelet.go:2244] node "heaven00" not found
    Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.601351 18155 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
    Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.636902 18155 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Unauthorized
    Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.680184 18155 kubelet.go:2244] node "heaven00" not found
    Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.780347 18155 kubelet.go:2244] node "heaven00" not found
    Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.836914 18155 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
    Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.880525 18155 kubelet.go:2244] node "heaven00" not found
    Jun 20 12:04:34 heaven00 kubelet: E0620 12:04:34.980697 18155 kubelet.go:2244] node "heaven00" not found
    Jun 20 12:04:35 heaven00 kubelet: E0620 12:04:35.036733 18155 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Unauthorized
    Jun 20 12:04:35 heaven00 kubelet: E0620 12:04:35.080854 18155 kubelet.go:2244] node "heaven00" not found
    Jun 20 12:04:35 heaven00 kubelet: E0620 12:04:35.181033 18155 kubelet.go:2244] node "heaven00" not found
    Jun 20 12:04:35 heaven00 kubelet: E0620 12:04:35.237132 18155 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Unauthorized
    Jun 20 12:04:35 heaven00 kubelet: E0620 12:04:35.281221 18155 kubelet.go:2244] node "heaven00" not found
    Jun 20 12:04:37 heaven00 telegraf: 2019-06-20T04:04:37Z E! [inputs.ping]: Error in plugin: host www.google.com: signal: killed
    Jun 20 12:04:39 heaven00 systemd: Created slice libcontainer_25606_systemd_test_default.slice.
    Jun 20 12:04:39 heaven00 systemd: Removed slice libcontainer_25606_systemd_test_default.slice.
    Jun 20 12:04:39 heaven00 systemd: Created slice libcontainer_25606_systemd_test_default.slice.
    Jun 20 12:04:39 heaven00 systemd: Removed slice libcontainer_25606_systemd_test_default.slice.
    Jun 20 12:04:39 heaven00 systemd: Created slice libcontainer_25611_systemd_test_default.slice.
    Jun 20 12:04:39 heaven00 systemd: Removed slice libcontainer_25611_systemd_test_default.slice.
    Jun 20 12:04:39 heaven00 systemd: Created slice libcontainer_25611_systemd_test_default.slice.
    Jun 20 12:04:39 heaven00 systemd: Removed slice libcontainer_25611_systemd_test_default.slice.
    Jun 20 12:04:39 heaven00 systemd: Created slice libcontainer_25629_systemd_test_default.slice.

kubeadm also kept printing kubelet health-check failures while it waited:
    [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
    [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.


Fix:
Redeployed on a fresh machine; the problem did not recur.
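If rebuilding on a fresh machine is not an option, kubeadm reset is the supported way to wipe the state a failed init leaves behind before retrying (a sketch; the iptables flush assumes you have no other rules worth keeping):

kubeadm reset                          # add -f to skip the confirmation prompt
iptables -F && iptables -t nat -F      # clear rules kube-proxy may have left behind
rm -rf /etc/cni/net.d $HOME/.kube/config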

    ============================================================================================================
After a successful kubeadm init, the command for joining nodes to the master is printed, like this:

kubeadm join 10.239.44.68:6443 --token 8jxvj4.5lop20zjbu48h6kl \
    --discovery-token-ca-cert-hash sha256:1ca8f0a098601b94d7c2a9b4a3758ff0880a0213db813336dec0e9272ed55a78
Note: the token generated by kubeadm init is only valid for 24 hours. If kubeadm join on a node fails with an error like the following,

    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Unauthorized
check on the master whether the token you used is still valid: kubeadm token list

TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                 EXTRA GROUPS
49y4v3.jxq5w76jj5hh028u   <invalid>   2019-04-13T15:00:47-04:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
8jxvj4.5lop20zjbu48h6kl   23h         2019-04-25T10:21:41-04:00   authentication,signing   <none>                                                      system:bootstrappers:kubeadm:default-node-token
Generate a token that never expires:

    kubeadm token create --ttl 0 --print-join-command
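A never-expiring token is a standing credential for joining nodes, so on anything long-lived it is safer to mint a fresh 24-hour token whenever you add a node:

kubeadm token create --print-join-command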
    ============================================================================================================
