  • Kubernetes cluster deployment

    Configure the Kubernetes yum repository

     https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
     [root@node1 ~]# cd /etc/yum.repos.d/
     [root@node1 yum.repos.d]# ls
    CentOS-Base.repo       CentOS-fasttrack.repo  CentOS-Vault.repo  k8s.repo
    CentOS-CR.repo         CentOS-Media.repo      epel.repo
    CentOS-Debuginfo.repo  CentOS-Sources.repo    epel-testing.repo
    [root@node1 yum.repos.d]# vi k8s.repo 
     [kubernetes]
     name=kubernetes repo
     baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
     gpgcheck=0
     enabled=1
    Configure the yum repository

    [root@node1 yum.repos.d]# yum install kubelet kubeadm kubectl docker-ce

    Prepare the images

       Import the Docker images that k8s needs. By default these images are hosted on Google's registry, which is not reachable from here, so the required images have to be pulled from Docker Hub first, re-tagged, and only then can kubeadm init be run.

        kubeadm first checks whether the required Docker images already exist locally. If they do, it starts the local images directly; if not, it tries to pull them from the https://k8s.gcr.io registry.
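        To see up front exactly which images and tags a given kubeadm release expects, kubeadm can list (and pre-pull) them itself; a minimal sketch, assuming kubeadm v1.11 or newer where the config images subcommand is available:

        # print the image names and tags this kubeadm release will try to pull
        kubeadm config images list

        # optionally pre-pull them (only useful when k8s.gcr.io is reachable)
        kubeadm config images pull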

        

      [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
    [preflight] Some fatal errors occurred:
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.12.1: output: Trying to pull repository k8s.gcr.io/kube-apiserver ... 
    Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.204.82:443: getsockopt: connection refused
    , error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.12.1: output: Trying to pull repository k8s.gcr.io/kube-controller-manager ... 
    Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.204.82:443: getsockopt: connection refused
    , error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.12.1: output: Trying to pull repository k8s.gcr.io/kube-scheduler ... 
    Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.204.82:443: getsockopt: connection refused
    , error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.12.1: output: Trying to pull repository k8s.gcr.io/kube-proxy ... 
    Get https://k8s.gcr.io/v1/_ping: dial tcp 74.125.204.82:443: getsockopt: connection refused
    , error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Trying to pull repository k8s.gcr.io/pause ... 
    Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.125.82:443: getsockopt: connection refused
    , error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.2.24: output: Trying to pull repository k8s.gcr.io/etcd ... 
    Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.125.82:443: getsockopt: connection refused
    , error: exit status 1
        [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.2.2: output: Trying to pull repository k8s.gcr.io/coredns ... 
    Get https://k8s.gcr.io/v1/_ping: dial tcp 108.177.125.82:443: getsockopt: connection refused
    , error: exit status 1
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
    Image pull failure messages

    The messages above contain the names and tags of every required image. Use them to pull the corresponding images from Docker Hub to the local machine.

    images=(etcd-amd64:3.0.17 pause-amd64:3.0 kube-proxy-amd64:v1.7.2 kube-scheduler-amd64:v1.7.2 kube-controller-manager-amd64:v1.7.2 kube-apiserver-amd64:v1.7.2 kubernetes-dashboard-amd64:v1.6.1 k8s-dns-sidecar-amd64:1.14.4 k8s-dns-kube-dns-amd64:1.14.4 k8s-dns-dnsmasq-nanny-amd64:1.14.4)
    for imageName in ${images[@]} ; do
      docker pull cloudnil/$imageName
      docker tag cloudnil/$imageName gcr.io/google_containers/$imageName
      docker rmi cloudnil/$imageName
    done
    Batch image download (an older example for v1.7.x; newer kubeadm uses the k8s.gcr.io prefix instead of gcr.io/google_containers)
    Pull the image versions reported in the error messages from Docker Hub; check the mirror repositories first to see which tags are actually available.
    docker pull mirrorgooglecontainers/kube-apiserver:v1.12.1
    docker pull mirrorgooglecontainers/kube-controller-manager:v1.12.1
    docker pull mirrorgooglecontainers/kube-scheduler:v1.12.1
    docker pull mirrorgooglecontainers/kube-proxy:v1.12.1
    docker pull mirrorgooglecontainers/pause:3.1
    docker pull mirrorgooglecontainers/etcd-amd64:3.2.24
    docker pull coredns/coredns:1.2.2
    
    Re-tag the downloaded images to the exact names and tags that kubeadm asks for, for example:
    docker tag mirrorgooglecontainers/kube-apiserver:v1.12.1 k8s.gcr.io/kube-apiserver:v1.12.1
    docker tag mirrorgooglecontainers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
    Download and re-tag the images
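    The pull-and-retag step can also be scripted as a loop, in the same style as the earlier example; a minimal sketch, assuming the mirrorgooglecontainers and coredns repositories on Docker Hub carry the tags reported by kubeadm:

    images=(kube-apiserver:v1.12.1 kube-controller-manager:v1.12.1 kube-scheduler:v1.12.1 kube-proxy:v1.12.1 pause:3.1)
    for imageName in ${images[@]} ; do
      docker pull mirrorgooglecontainers/$imageName
      docker tag mirrorgooglecontainers/$imageName k8s.gcr.io/$imageName
      docker rmi mirrorgooglecontainers/$imageName
    done
    # etcd and coredns need a rename as well as a retag
    docker pull mirrorgooglecontainers/etcd-amd64:3.2.24
    docker tag mirrorgooglecontainers/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
    docker pull coredns/coredns:1.2.2
    docker tag coredns/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2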
    [root@node1 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
    [root@node1 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
    
    vi /etc/sysconfig/kubelet
    KUBELET_EXTRA_ARGS="--fail-swap-on=false"
    [root@node1 ~]# kubeadm init --kubernetes-version=v1.12.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
    
    Configuration changes and kubeadm init
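    Note that the two echo commands above only change the bridge settings until the next reboot; to make them persistent they can be written to a sysctl drop-in file (a small sketch; the file name /etc/sysctl.d/k8s.conf is just a convention):

    cat > /etc/sysctl.d/k8s.conf <<EOF
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    EOF
    sysctl --system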
    Stop all running containers
       docker stop $(docker ps -q)
    
    Remove all containers
       docker rm $(docker ps -aq)
    
    Stop and remove all containers in one line
       docker stop $(docker ps -q) && docker rm $(docker ps -aq)
    
    To remove untagged images (those whose repository shows as <none>):
       docker rmi $(docker images | grep "^<none>" | awk '{print $3}')
    
    To remove all images:
       docker rmi $(docker images -q)
    
    To force-remove all images:
       docker rmi -f $(docker images -q)
    Batch removal of containers and images
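    On Docker 1.13 and newer, most of this cleanup can also be done with the built-in prune commands instead of the $(...) pipelines:

       # remove all stopped containers, dangling images and unused networks
       docker system prune
       # additionally remove every unused (not just dangling) image
       docker system prune -a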
    # -*- coding: UTF-8 -*-
    
    import re
    import os
    import subprocess
    
    if __name__ == "__main__":
        os.system('rm -rf /tmp/k8s')
        os.system('mkdir -p /tmp/k8s')
    
        # list the local images and walk through the output line by line
        p = subprocess.Popen('docker images', shell=True, stdout=subprocess.PIPE)
        for line in p.stdout.readlines():
    
            # the regex matches images whose name starts with k8s;
            # adjust it to whatever prefix you actually need
            m = re.match(r'(^k8s[^\s]*)\s+([^\s]+)', line.decode("utf-8"))
            if not m:
                continue
    
            # image name
            iname = m.group(1).strip()
            # tag
            itag = m.group(2).strip()
    
            # name of the tar file
            tarname = iname.split('/')[-1]
            print(tarname)
            tarball = tarname + '.tar'
            ifull = iname + ':' + itag
            # save the image into a tar file
            cmd = 'docker save -o ' + tarball + ' ' + ifull
            os.system(cmd)
    
            # move the tar file to the temporary directory
            os.system('mv %s /tmp/k8s/' % tarball)
    Batch image export: save.py

      Run: python save.py
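      Once save.py has finished, the tarballs under /tmp/k8s can be copied to the other machines, for example with scp (node2 below is a placeholder hostname):

      ssh root@node2 'mkdir -p /tmp/k8s'
      scp /tmp/k8s/*.tar root@node2:/tmp/k8s/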

    import os
    import sys
    
    if __name__ == "__main__":
        # uncomment the next lines to accept a single .tar.gz archive as
        # argv[1] and unpack it into the working directory before loading
        #tarball = sys.argv[1]
        #print(tarball)
        workdir = '/tmp/k8s'
        #os.system('rm -rf %s'%workdir)
        #os.system('mkdir -p %s'%workdir)
        #os.system('tar -zxvf %s -C %s'%(tarball, workdir))
    
        os.chdir(workdir)
        files = os.listdir(workdir)
        for filename in files:
            print(filename)
            os.system('docker load -i %s'%filename)
    dockerload.py

         You can compress all the image tarballs into a single archive, upload it to the working directory, and run python dockerload.py Images.tar.gz (this mode requires uncommenting the argv/tar lines at the top of the script).

         Alternatively, upload the individual image tarballs to the working directory and simply run python dockerload.py.
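         The same import can also be done without Python, as a one-line shell loop over the working directory:

         for f in /tmp/k8s/*.tar; do docker load -i "$f"; done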

      kubeadm init --kubernetes-version=v1.12.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap

    [init] using Kubernetes version: v1.12.1
    [preflight] running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
        [WARNING Swap]: running with swap on is not supported. Please disable swap
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
    [preflight/images] Pulling images required for setting up a Kubernetes cluster
    [preflight/images] This might take a minute or two, depending on the speed of your internet connection
    [preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [preflight] Activating the kubelet service
    [certificates] Generated ca certificate and key.
    [certificates] Generated apiserver certificate and key.
    [certificates] apiserver serving cert is signed for DNS names [node1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.11.134]
    [certificates] Generated apiserver-kubelet-client certificate and key.
    [certificates] Generated etcd/ca certificate and key.
    [certificates] Generated etcd/peer certificate and key.
    [certificates] etcd/peer serving cert is signed for DNS names [node1 localhost] and IPs [192.168.11.134 127.0.0.1 ::1]
    [certificates] Generated etcd/server certificate and key.
    [certificates] etcd/server serving cert is signed for DNS names [node1 localhost] and IPs [127.0.0.1 ::1]
    [certificates] Generated etcd/healthcheck-client certificate and key.
    [certificates] Generated apiserver-etcd-client certificate and key.
    [certificates] Generated front-proxy-ca certificate and key.
    [certificates] Generated front-proxy-client certificate and key.
    [certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
    [certificates] Generated sa key and public key.
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
    [controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
    [controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
    [controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
    [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
    [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" 
    [init] this might take a minute or longer if the control plane images have to be pulled
    [apiclient] All control plane components are healthy after 33.015497 seconds
    [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
    [markmaster] Marking the node node1 as master by adding the label "node-role.kubernetes.io/master=''"
    [markmaster] Marking the node node1 as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation
    [bootstraptoken] using token: 8955q6.o8fopwcef7i3fn4w
    [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes master has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of machines by running the following on each node
    as root:
    
      kubeadm join 192.168.11.134:6443 --token 8955q6.o8fopwcef7i3fn4w --discovery-token-ca-cert-hash sha256:a3ec67c68a14b2e587aecb016c9050c36dc579069725eb261b27a0cd1eb024dc
    Initialization succeeded
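    The join command printed at the end embeds a bootstrap token that expires after 24 hours by default. If it is lost or has expired, a new join command can be generated on the master:

      kubeadm token create --print-join-command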
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    
    [root@node1 ~]#  mkdir -p $HOME/.kube
    [root@node1 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    [root@node1 ~]# kubectl get cs
    NAME                 STATUS    MESSAGE              ERROR
    controller-manager   Healthy   ok                   
    scheduler            Healthy   ok                   
    etcd-0               Healthy   {"health": "true"}
    Post-initialization steps

    Kubernetes v1.11.1 deployment

    At the time of writing, a cluster initialized with Kubernetes v1.12.0 could not start the flannel containers, leaving every node stuck in NotReady. Deploying v1.11.1 instead worked; the steps are below.

    1. Install Docker

    wget -c https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
    wget -c https://download.docker.com/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm
    yum install -y docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm
    
    [root@k8s-master ~]# vi /usr/lib/systemd/system/docker.service
    
     ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd
    
    
    systemctl daemon-reload && systemctl enable docker && systemctl start docker
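    Since kubelet will be configured with the systemd cgroup driver below, it is worth confirming that Docker actually picked up the same driver after the restart:

    docker info 2>/dev/null | grep -i "cgroup driver"
    # expected output: Cgroup Driver: systemd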

    2. Prepare the images

      Load the previously downloaded images that k8s needs into the local Docker daemon.

    [root@k8s-master images]# ls
    coredns.tar     kube-apiserver-amd64.tar           kube-proxy-amd64.tar      pause.tar
    etcd-amd64.tar  kube-controller-manager-amd64.tar  kube-scheduler-amd64.tar
    flannel.tar     kube-flannel.yml                   master.sh

    3. Add the kubeadm yum repository

    cat > /etc/yum.repos.d/kubernetes.repo <<EOF 
    [kubernetes] 
    name=Kubernetes 
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 
    enabled=1 
    gpgcheck=0 
    repo_gpgcheck=0 
    EOF
    
    # List all available versions
    yum list kubeadm --showduplicates
    
    # The latest version at the time was v1.12.0, which was problematic to install, so pin the install to v1.11.1
       yum install kubectl-1.11.1
       yum install kubelet-1.11.1
       yum install kubeadm-1.11.1
    
    
    Extra kubelet arguments: make sure kubelet and Docker use the same cgroup driver; systemd is used here.
     vi /etc/sysconfig/kubelet
    KUBELET_EXTRA_ARGS="--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
    
    systemctl enable kubelet && systemctl start kubelet

    4. Initialize the cluster

    kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 
    
    
    Grant kubectl access:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Configure the k8s network: flannel
    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    kubectl apply -f  kube-flannel.yml
    
    If the hosts have more than one network interface, use the --iface argument in kube-flannel.yml to specify the name of the cluster's internal NIC, similar to:
    
    args:
    - --ip-masq
    - --kube-subnet-mgr
    - --iface=eth1
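    Before moving on to verification, it helps to watch the flannel pods come up; a quick check, assuming the standard labels from kube-flannel.yml (app=flannel):

    kubectl -n kube-system get pods -l app=flannel -o wide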

    5. Verify the master components started

    [root@k8s-master ~]# kubectl get pod --all-namespaces
    NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
    kube-system   coredns-78fcdf6894-8fprv             1/1       Running   0          24m
    kube-system   coredns-78fcdf6894-rdrm7             1/1       Running   0          24m
    kube-system   etcd-k8s-master                      1/1       Running   0          23m
    kube-system   kube-apiserver-k8s-master            1/1       Running   0          23m
    kube-system   kube-controller-manager-k8s-master   1/1       Running   0          23m
    kube-system   kube-flannel-ds-amd64-xh8ln          1/1       Running   0          22m
    kube-system   kube-proxy-l2tbn                     1/1       Running   0          24m
    kube-system   kube-scheduler-k8s-master            1/1       Running   0          23m
    [root@k8s-master ~]# kubectl get cs
    NAME                 STATUS    MESSAGE              ERROR
    scheduler            Healthy   ok                   
    controller-manager   Healthy   ok                   
    etcd-0               Healthy   {"health": "true"}   
    [root@k8s-master ~]# kubectl get nodes
    NAME         STATUS    ROLES     AGE       VERSION
    k8s-master   Ready     master    29m       v1.11.1
    Output like the above means everything is running normally.

    6. Add nodes to the cluster

      1. Install Docker (same steps as on the master)

      2. Configure the kubeadm yum repository

      3. Load the local images into Docker

    [root@node2 ~]# docker load -i flannel.tar 
    [root@node2 ~]# docker load -i kube-proxy-amd64.tar 
    [root@node2 ~]# docker load -i pause.tar 
    [root@node2 ~]#  vi /etc/sysconfig/kubelet
    
    KUBELET_EXTRA_ARGS="--runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"
    
    [root@node2 ~]# vi /usr/lib/systemd/system/docker.service
    ExecStart=/usr/bin/dockerd --exec-opt native.cgroupdriver=systemd
    
    
    systemctl daemon-reload && systemctl enable docker && systemctl start docker
    Docker and kubelet startup parameters
    [root@node2 ~]# kubeadm join 192.168.11.141:6443 --token rumavf.aphute2alfbnqj8x --discovery-token-ca-cert-hash sha256:f51ee790dbef8e1f1e36315ba99bc476b0315bb4a861b3d6caef9cbc8b411397
    [preflight] running pre-flight checks
        [WARNING RequiredIPVSKernelModulesAvailable]: the IPVS proxier will not be used, because the following required kernel modules are not loaded: [ip_vs_wrr ip_vs_sh ip_vs ip_vs_rr] or no builtin kernel ipvs support: map[ip_vs:{} ip_vs_rr:{} ip_vs_wrr:{} ip_vs_sh:{} nf_conntrack_ipv4:{}]
    you can solve this problem with following methods:
     1. Run 'modprobe -- ' to load missing kernel modules;
    2. Provide the missing builtin kernel ipvs support
    
    I1004 04:55:53.603066    2118 kernel_validator.go:81] Validating kernel version
    I1004 04:55:53.603140    2118 kernel_validator.go:96] Validating kernel config
    [discovery] Trying to connect to API Server "192.168.11.141:6443"
    [discovery] Created cluster-info discovery client, requesting info from "https://192.168.11.141:6443"
    [discovery] Requesting info from "https://192.168.11.141:6443" again to validate TLS against the pinned public key
    [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.11.141:6443"
    [discovery] Successfully established connection with API Server "192.168.11.141:6443"
    [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.11" ConfigMap in the kube-system namespace
    [kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [preflight] Activating the kubelet service
    [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node2" as an annotation
    
    This node has joined the cluster:
    * Certificate signing request was sent to master and a response
      was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the master to see this node join the cluster.
    Run on the worker node
    [root@k8s-master images]# kubectl get nodes
    NAME         STATUS    ROLES     AGE       VERSION
    k8s-master   Ready     master    47m       v1.11.1
    node2        Ready     <none>    2m        v1.11.1
    On the master, verify that the node joined successfully