Installing a Kubernetes cluster on CentOS 7 with kubeadm, using ipvs for high availability

    1. Preparation

    1.1 System configuration

    Before installing, complete the following preparation. Three CentOS hosts are used; their IPs and hostnames appear in the /etc/hosts snippet below.
    Configure the yum repo (this example uses Tencent Cloud's mirror).

    Back up the old config before replacing it:
    mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
    Repo configs for each CentOS version:
    CentOS 5:
    wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos5_base.repo
    CentOS 6:
    wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos6_base.repo
    CentOS 7:
    wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos7_base.repo
    Refresh the yum cache:
    yum clean all
    yum makecache

    cat /etc/hosts
    
    192.168.233.251 k8smaster
    192.168.233.170 k8snode1
    192.168.233.35  k8snode2
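    For the names above to match what kubeadm registers, each machine's hostname should agree with /etc/hosts. A minimal sketch (run each line on the corresponding machine; the .novalocal suffix seen in the output later in this article comes from the cloud image, so adjust to your environment):

    # Kubernetes node names must be lowercase RFC 1123 names
    hostnamectl set-hostname k8smaster   # on the master
    hostnamectl set-hostname k8snode1    # on node 1
    hostnamectl set-hostname k8snode2    # on node 2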
    

    Disable swap:
    Temporarily:
    swapoff -a
    Permanently (delete or comment out the swap line, then reboot):
    vim /etc/fstab
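    If you would rather not edit /etc/fstab by hand, a sed one-liner can comment out the swap entry. A sketch, assuming the swap line contains the word "swap" and is not already commented:

    # Comment out any fstab line that mounts swap, then double-check
    sed -i '/\sswap\s/ s/^/#/' /etc/fstab
    grep swap /etc/fstab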

    Disable the firewall on all nodes:
    systemctl stop firewalld
    systemctl disable firewalld
    Disable SELinux:
    setenforce 0

    vi /etc/selinux/config
    SELINUX=disabled
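    The same edit can be scripted. A sketch, assuming the file still has the default SELINUX=enforcing value:

    # Permanently disable SELinux (takes full effect after a reboot)
    sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
    getenforce   # prints Permissive after setenforce 0, Disabled after a reboot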
    

    Pass bridged IPv4 traffic to iptables chains:

    cat > /etc/sysctl.d/k8s.conf << EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    

    Apply the settings (load the br_netfilter module first, then reload the sysctl file):
    modprobe br_netfilter && sysctl -p /etc/sysctl.d/k8s.conf
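    A module loaded with modprobe does not survive a reboot. One way to persist br_netfilter on CentOS 7 is a modules-load.d entry; a minimal sketch:

    # Load br_netfilter automatically at boot
    cat > /etc/modules-load.d/br_netfilter.conf << EOF
    br_netfilter
    EOF
    # Verify the two sysctl values took effect
    sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables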

    1.2 Prerequisites for enabling ipvs in kube-proxy

    Since ipvs has been merged into the kernel mainline, enabling ipvs mode in kube-proxy only requires that the following kernel modules be loaded:

    ip_vs
    ip_vs_rr
    ip_vs_wrr
    ip_vs_sh
    nf_conntrack_ipv4
    

    Run the following script on all Kubernetes nodes (the master as well as the workers):

    cat > /etc/sysconfig/modules/ipvs.modules <<EOF
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack_ipv4
    EOF
    chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
    

    The script creates /etc/sysconfig/modules/ipvs.modules so the required modules are loaded automatically after a node reboot. Run lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to confirm the modules loaded correctly.

    Install the ipset package on all nodes:
    yum install ipset -y
    To make ipvs rules easier to inspect, install ipvsadm as well (optional):
    yum install ipvsadm -y

    1.3 Install Docker (all nodes)

    Kubernetes' default container runtime (CRI) is Docker, so install Docker first, followed by kubeadm, kubelet, and kubectl.
    Configure a domestic Docker repo (Aliyun mirror):

    wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

    Note: if you need to install a specific docker-ce version, see the commands below.

    Installing a specific docker-ce version [optional]

    Query the available 18.09 docker-ce builds, then install one:
    
    yum list available 'docker-ce*' --showduplicates | grep 18.09
    
    docker-ce.x86_64                3:18.09.0-3.el7                 docker-ce-stable
    docker-ce.x86_64                3:18.09.1-3.el7                 docker-ce-stable
    docker-ce.x86_64                3:18.09.2-3.el7                 docker-ce-stable
    docker-ce.x86_64                3:18.09.3-3.el7                 docker-ce-stable
    docker-ce.x86_64                3:18.09.4-3.el7                 docker-ce-stable
    docker-ce.x86_64                3:18.09.5-3.el7                 docker-ce-stable
    docker-ce.x86_64                3:18.09.6-3.el7                 docker-ce-stable
    docker-ce.x86_64                3:18.09.7-3.el7                 docker-ce-stable
    docker-ce.x86_64                3:18.09.8-3.el7                 docker-ce-stable
    docker-ce-cli.x86_64            1:18.09.0-3.el7                 docker-ce-stable
    docker-ce-cli.x86_64            1:18.09.1-3.el7                 docker-ce-stable
    docker-ce-cli.x86_64            1:18.09.2-3.el7                 docker-ce-stable
    docker-ce-cli.x86_64            1:18.09.3-3.el7                 docker-ce-stable
    docker-ce-cli.x86_64            1:18.09.4-3.el7                 docker-ce-stable
    docker-ce-cli.x86_64            1:18.09.5-3.el7                 docker-ce-stable
    docker-ce-cli.x86_64            1:18.09.6-3.el7                 docker-ce-stable
    docker-ce-cli.x86_64            1:18.09.7-3.el7                 docker-ce-stable
    docker-ce-cli.x86_64            1:18.09.8-3.el7                 docker-ce-stable
    
    yum install -y docker-ce-18.09.8-3.el7
    systemctl enable docker && systemctl start docker
    docker --version
    

    Installing the latest docker-ce:

    yum -y install docker-ce
    systemctl enable docker && systemctl start docker
    docker --version
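    kubeadm works best when Docker and kubelet use the same cgroup driver, and the kubeadm docs recommend systemd over the cgroupfs default. An optional sketch, to be applied before kubeadm init so kubelet picks up the matching driver:

    # Optional: switch Docker to the systemd cgroup driver
    mkdir -p /etc/docker
    cat > /etc/docker/daemon.json << EOF
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF
    systemctl restart docker
    docker info | grep -i cgroup   # should now report systemd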
    

    Kubernetes repo (Aliyun mirror):

    cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    

    If GPG verification fails, import the keys manually, or set gpgcheck=0:
    rpmkeys --import https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
    rpmkeys --import https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    Now install kubeadm, kubelet, and kubectl:

    yum install -y kubelet kubeadm kubectl
    systemctl enable kubelet
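    The command above installs whatever version is newest in the mirror. Since the kubeadm init below pins v1.14.2, you may prefer to pin the packages to match; a sketch, assuming those package versions are still available in the mirror:

    # Optional: pin the packages to the cluster version used below
    yum install -y kubelet-1.14.2 kubeadm-1.14.2 kubectl-1.14.2
    systemctl enable kubelet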
    

    2. Deploying Kubernetes

    2.1 Initialize the master
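    Before running kubeadm init, you can optionally pre-pull the control-plane images; this makes init faster and surfaces registry problems earlier. A sketch using the same mirror and version as the init command below:

    # Optional: pre-pull the control-plane images
    kubeadm config images pull \
      --image-repository registry.aliyuncs.com/google_containers \
      --kubernetes-version v1.14.2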

    kubeadm init \
      --apiserver-advertise-address=192.168.233.251 \
      --image-repository registry.aliyuncs.com/google_containers \
      --kubernetes-version v1.14.2 \
      --pod-network-cidr=10.244.0.0/16
    

    Pay attention to this part of the output:

    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.233.251:6443 --token a9vg9z.dlboqvfuwwzauufq \
        --discovery-token-ca-cert-hash sha256:c2ade88a856f15de80240ff4994661a6daa668113cea0c4a4073f701f05192cb
    

    Run the following commands to set up the current user's kubeconfig; kubectl needs it:

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    2.2 Install the pod network add-on

    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
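    Flannel takes a short while to roll out, and the nodes flip to Ready once it is up. A quick check (the app=flannel label matches this manifest's DaemonSet, assuming it has not changed):

    # Watch the flannel pods come up, then confirm the nodes are Ready
    kubectl get pods -n kube-system -l app=flannel
    kubectl get nodes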
    

    2.3 Join the worker nodes

    Run the following join command on each worker node to add it to the cluster:
    kubeadm join 192.168.233.251:6443 --token a9vg9z.dlboqvfuwwzauufq --discovery-token-ca-cert-hash sha256:c2ade88a856f15de80240ff4994661a6daa668113cea0c4a4073f701f05192cb
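    The bootstrap token above expires after 24 hours by default. If you add a node later, generate a fresh join command on the master:

    # Print a new join command with a fresh token
    kubeadm token create --print-join-command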

    Check the cluster status:

    kubectl get cs
    NAME                 STATUS    MESSAGE              ERROR
    controller-manager   Healthy   ok
    scheduler            Healthy   ok
    etcd-0               Healthy   {"health": "true"}
    

    If cluster initialization runs into problems, the following commands clean things up so you can start over:

    kubeadm reset
    ifconfig cni0 down
    ip link delete cni0
    ifconfig flannel.1 down
    ip link delete flannel.1
    rm -rf /var/lib/cni/
    

    Use kubectl get pod --all-namespaces -o wide to confirm that all Pods are in the Running state.

    [root@k8smaster centos]# kubectl get pod --all-namespaces -o wide
    NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE     IP                NODE                  NOMINATED NODE   READINESS GATES
    kube-system   coredns-8686dcc4fd-5h9xc                      1/1     Running   0          15m     10.244.0.3        k8smaster.novalocal   <none>           <none>
    kube-system   coredns-8686dcc4fd-8w6l2                      1/1     Running   0          15m     10.244.0.2        k8smaster.novalocal   <none>           <none>
    kube-system   etcd-k8smaster.novalocal                      1/1     Running   0          14m     192.168.233.251   k8smaster.novalocal   <none>           <none>
    kube-system   kube-apiserver-k8smaster.novalocal            1/1     Running   0          14m     192.168.233.251   k8smaster.novalocal   <none>           <none>
    kube-system   kube-controller-manager-k8smaster.novalocal   1/1     Running   0          14m     192.168.233.251   k8smaster.novalocal   <none>           <none>
    kube-system   kube-flannel-ds-amd64-2mfgq                   1/1     Running   0          3m34s   192.168.233.35    k8snode2.novalocal    <none>           <none>
    kube-system   kube-flannel-ds-amd64-8twxz                   1/1     Running   0          3m34s   192.168.233.251   k8smaster.novalocal   <none>           <none>
    kube-system   kube-flannel-ds-amd64-sbd6n                   1/1     Running   0          3m34s   192.168.233.170   k8snode1.novalocal    <none>           <none>
    kube-system   kube-proxy-2m5jh                              1/1     Running   0          15m     192.168.233.251   k8smaster.novalocal   <none>           <none>
    kube-system   kube-proxy-nfzfl                              1/1     Running   0          10m     192.168.233.170   k8snode1.novalocal    <none>           <none>
    kube-system   kube-proxy-shxdt                              1/1     Running   0          9m47s   192.168.233.35    k8snode2.novalocal    <none>           <none>
    kube-system   kube-scheduler-k8smaster.novalocal            1/1     Running   0          14m     192.168.233.251   k8smaster.novalocal   <none>           <none>
    
    

    2.4 Letting the master node take on workloads

    For a cluster initialized with kubeadm, Pods are not scheduled onto the master node for security reasons; that is, the master does not take on workloads. This is because the master node carries the node-role.kubernetes.io/master:NoSchedule taint:
    View the taint:

    kubectl describe node k8smaster.novalocal |grep Taint
    Taints:             node-role.kubernetes.io/master:NoSchedule
    

    Run this command to remove the taint:

    kubectl taint nodes k8smaster.novalocal node-role.kubernetes.io/master:NoSchedule-
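    Should you later want the master to stop taking workloads, the taint can be put back. A sketch (the empty value before the colon recreates the taint exactly as kubeadm set it):

    kubectl taint nodes k8smaster.novalocal node-role.kubernetes.io/master=:NoSchedule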
    

    2.5 Test DNS

    [root@k8smaster centos]# kubectl run curl --image=radial/busyboxplus:curl -it
    kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
    If you don't see a command prompt, try pressing enter.
    [ root@curl-66bdcf564-4c42d:/ ]$ nslookup kubernetes.default
    Server:    10.96.0.10
    Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
    
    Name:      kubernetes.default
    Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
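    kubectl run with the deployment generator leaves a Deployment named curl behind; once the test passes it can be removed:

    # Clean up the DNS test deployment
    kubectl delete deployment curl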
    

    2.6 Enable ipvs in kube-proxy

    # Edit config.conf in the kube-system/kube-proxy ConfigMap: change mode: "" to mode: "ipvs", then save and exit
    [root@k8smaster centos]# kubectl edit cm kube-proxy -n kube-system
    configmap/kube-proxy edited
    ### Delete the existing kube-proxy pods so they restart in ipvs mode
    [root@k8smaster centos]# kubectl get pod -n kube-system |grep kube-proxy |awk '{system("kubectl delete pod "$1" -n kube-system")}'
    pod "kube-proxy-2m5jh" deleted
    pod "kube-proxy-nfzfl" deleted
    pod "kube-proxy-shxdt" deleted
    # Check that the new kube-proxy pods are running
    [root@k8smaster centos]# kubectl get pod -n kube-system | grep kube-proxy
    kube-proxy-54qnw                              1/1     Running   0          24s
    kube-proxy-bzssq                              1/1     Running   0          14s
    kube-proxy-cvlcm                              1/1     Running   0          37s
    # Check the logs; the line "Using ipvs Proxier." means ipvs mode was enabled successfully
    [root@k8smaster centos]# kubectl logs kube-proxy-54qnw -n kube-system
    I0518 20:24:09.319160       1 server_others.go:176] Using ipvs Proxier.
    W0518 20:24:09.319751       1 proxier.go:386] IPVS scheduler not specified, use rr by default
    I0518 20:24:09.320035       1 server.go:562] Version: v1.14.2
    I0518 20:24:09.334372       1 conntrack.go:52] Setting nf_conntrack_max to 131072
    I0518 20:24:09.334853       1 config.go:102] Starting endpoints config controller
    I0518 20:24:09.334916       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
    I0518 20:24:09.334945       1 config.go:202] Starting service config controller
    I0518 20:24:09.334976       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
    I0518 20:24:09.435153       1 controller_utils.go:1034] Caches are synced for service config controller
    I0518 20:24:09.435271       1 controller_utils.go:1034] Caches are synced for endpoints config controller
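    With ipvsadm installed earlier, you can also inspect the virtual-server table that kube-proxy now programs; each Service's ClusterIP should appear as a virtual server with its endpoints as real servers:

    # List the ipvs virtual servers and their backends
    ipvsadm -Ln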
    

    That essentially completes the installation.

    References:
    Official kubeadm docs:
    https://k8smeetup.github.io/docs/admin/kubeadm/
    On Taints and Tolerations:
    https://blog.51cto.com/newfly/2067531
    Kubernetes installation:
    https://www.kubernetes.org.cn/4956.html
