  • Deploying a highly available Kubernetes 1.16.2 master with kubeadm on CentOS 7.1

    Machine list and name resolution

    cat /etc/hosts
    192.168.200.210 k8s-master1
    192.168.200.211 k8s-master2
    192.168.200.212 k8s-node1
    192.168.200.213 k8s-node2
    192.168.200.214 k8s-master-vip
    Environment versions

    centos 7.1
    docker 19.03.4
    kubernetes 1.16.2
    Preparation

    1. Check the current kernel version

    uname -a
    Output

    Linux iZwz9d75c59ll4waz4y8ctZ 3.10.0-693.2.2.el7.x86_64 #1 SMP Tue Sep 12 22:26:13 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

    2. Check the current OS version information

    cat /proc/version
    Output

    Linux version 3.10.0-693.2.2.el7.x86_64 (builder@kbuilder.dev.centos.org) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) ) #1 SMP Tue Sep 12 22:26:13 UTC 2017

    Check the distribution release

    cat /etc/redhat-release
    Output

    CentOS Linux release 7.4.1708 (Core)

    3. Upgrade the kernel

    # Update packages and, ideally, the kernel; with a kernel of 4.10 or newer you can use the overlay2 storage driver
    # On my own VM running CentOS 7.6, overlay2 worked after upgrading the kernel
    # On the servers I could not upgrade the kernel for various reasons; the deployment still works on 3.10, just with the overlay storage driver instead (one way to upgrade the kernel is sketched below)
    yum update -y
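
    If you do want a 4.x kernel for overlay2, one common route is the ELRepo kernel packages. This is only a sketch of my own, not part of the original steps, and it assumes the ELRepo mirrors are reachable from your server:

    # Install the ELRepo repository and a long-term-support kernel
    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
    yum --enablerepo=elrepo-kernel install -y kernel-lt
    # Make the newly installed kernel the default boot entry, then reboot
    grub2-set-default 0
    grub2-mkconfig -o /boot/grub2/grub.cfg
    reboot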

    # Install basic tools
    yum install -y wget curl vim
    4. Set the hostname (do this on every machine, each with its own name)

    hostnamectl set-hostname master
    5. Disable the firewall, SELinux, and swap

    systemctl disable firewalld --now

    setenforce 0

    sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

    swapoff -a
    echo "vm.swappiness = 0">> /etc/sysctl.conf

    sed -i 's/.*swap.*/#&/' /etc/fstab

    sysctl -p

    6. Configure kernel parameters so that bridged IPv4 traffic is passed to the iptables chains

    cat > /etc/sysctl.d/k8s.conf << EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF

    sysctl --system
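
    If sysctl --system complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet. A quick fix (my addition, not in the original steps):

    modprobe br_netfilter
    echo br_netfilter > /etc/modules-load.d/br_netfilter.conf    # load the module again after a reboot
    sysctl --system
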
    I. Configure the package repositories

    1. Point the base yum repo at the Aliyun mirrors

    cd /etc/yum.repos.d
    mv CentOS-Base.repo CentOS-Base.repo.bak
    mv epel.repo epel.repo.bak
    curl https://mirrors.aliyun.com/repo/Centos-7.repo -o CentOS-Base.repo
    sed -i 's/gpgcheck=1/gpgcheck=0/g' /etc/yum.repos.d/CentOS-Base.repo
    curl https://mirrors.aliyun.com/repo/epel-7.repo -o epel.repo
    gpgcheck=0 means RPM packages downloaded from this repo are not signature-checked.

    2. Configure the Docker repo

    wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O/etc/yum.repos.d/docker-ce.repo
    3. Configure the Kubernetes repo to use the Aliyun mirror

    cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 points the Kubernetes repo at the Aliyun mirror.

    gpgcheck=0: RPM packages downloaded from this repo are not signature-checked.

    repo_gpgcheck=0: some hardening setups enable repo_gpgcheck globally in /etc/yum.conf so that the cryptographic signature of the repository metadata is verified; it is disabled here.

    If gpgcheck is set to 1, the check runs and fails with the error below, so it is set to 0 here:

    repomd.xml signature could not be verified for kubernetes
    4. Update the yum cache

    yum clean all
    yum makecache
    yum repolist
    II. Install Docker

    List the installable versions

    yum list docker-ce --showduplicates | sort -r
    Install

    yum install -y docker-ce
    Enable and start Docker

    systemctl enable docker && systemctl start docker
    Check the Docker version

    docker version
    Configure the Docker registry mirror; here I use the China mirror https://www.docker-cn.com

    vim /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-file": "3",
        "max-size": "100m"
      },
      "storage-driver": "overlay2",
      "storage-opts": [
        "overlay2.override_kernel_check=true"
      ],
      "registry-mirrors": ["https://www.docker-cn.com"]
    }
    Note: native.cgroupdriver=systemd is the configuration recommended upstream; see https://kubernetes.io/docs/setup/cri/
    Reload the daemon configuration and restart Docker

    systemctl daemon-reload
    systemctl restart docker
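
    A quick way to confirm the daemon.json settings took effect, using the fields docker info reports:

    docker info | grep -iE 'cgroup driver|storage driver'
    # Expect "Cgroup Driver: systemd" and "Storage Driver: overlay2" (or overlay on a 3.10 kernel)
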
    If Docker fails to restart with "start request repeated too quickly for docker.service", the CentOS kernel is probably too old (3.x) and does not support the overlay2 filesystem; remove the following settings from daemon.json:

      "storage-driver": "overlay2",
      "storage-opts": [
        "overlay2.override_kernel_check=true"
      ]
    III. Install kubeadm, kubelet, and kubectl

    kubeadm does not install or manage kubelet or kubectl for you, so install all three:

    yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
    kubelet communicates with the rest of the cluster and manages the lifecycle of the Pods and containers on its own node.

    kubeadm is Kubernetes' automated deployment tool; it lowers the difficulty of deployment and speeds it up.

    kubectl is the command-line tool for managing the cluster.
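
    The repo may by now carry a release newer than 1.16.2; to pin the exact version this article targets, a variant of the install command above is:

    yum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2 --disableexcludes=kubernetes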

    Finally, enable and start kubelet:

    systemctl enable kubelet --now
    Note: the certificates generated by the stock kubeadm are only valid for one year, so once the packages are installed I replace the default kubeadm binary with one I compiled earlier; the certificates generated by the later init are then valid for 100 years. See my other article, "Changing the certificate validity period by recompiling kubeadm", for how to build it.
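
    Whether or not you swap the binary, you can check the certificate lifetimes once the cluster is up; kubeadm 1.16 still keeps this under the alpha subcommand (run it on a master after /etc/kubernetes/pki exists):

    kubeadm alpha certs check-expiration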

    IV. Master high availability (only on k8s-master1 and k8s-master2)

    1. Passwordless SSH between all master nodes

    # First cd into /root/.ssh
    # If the .ssh directory does not exist yet, just ssh to localhost or any other host once and it will be created

    # First, on master1
    ssh-keygen -t rsa    # press Enter through all the prompts
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-master2

    # Try logging in
    ssh root@k8s-master2



    # Then, on master2
    ssh-keygen -t rsa    # press Enter through all the prompts
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@k8s-master1

    # Try logging in
    ssh root@k8s-master1
    2. Create the virtual IP with keepalived

    Install keepalived on the Kubernetes master nodes; it floats the virtual IP between them.

    # Install keepalived on both master nodes
    yum install -y keepalived
    On the k8s-master1 node, configure keepalived with the MASTER role:

    cat > /etc/keepalived/keepalived.conf << EOF
    ! Configuration File for keepalived

    global_defs {
        router_id uat-k8s-master1    # identifier for this node
    }

    vrrp_instance VI_1 {
        state MASTER                 # this node's role is MASTER
        interface eth0               # interface the VIP is bound to
        virtual_router_id 51         # MASTER and BACKUP must use the same virtual router id
        priority 150                 # the node with the higher priority becomes MASTER
        advert_int 1                 # heartbeat interval
        authentication {
            auth_type PASS           # authentication type
            auth_pass k8s            # password
        }
        virtual_ipaddress {
            192.168.200.214          # the virtual IP
        }
    }
    EOF

    # Enable and start keepalived
    systemctl enable keepalived.service && systemctl start keepalived.service
    On the k8s-master2 node, configure keepalived with the BACKUP role:

    cat > /etc/keepalived/keepalived.conf << EOF
    ! Configuration File for keepalived

    global_defs {
        router_id uat-k8s-master2
    }

    vrrp_instance VI_1 {
        state BACKUP                 # this node's role is BACKUP
        interface eth0
        virtual_router_id 51         # MASTER and BACKUP must use the same virtual router id
        priority 100                 # lower priority than the MASTER
        advert_int 1                 # heartbeat interval
        authentication {
            auth_type PASS           # authentication type
            auth_pass k8s            # password
        }
        virtual_ipaddress {
            192.168.200.214          # the virtual IP
        }
    }
    EOF


    # Enable and start keepalived
    systemctl enable keepalived.service && systemctl start keepalived.service
    Check that the keepalived virtual IP came up

    Run the following command on both k8s-master1 and k8s-master2; only k8s-master1 should return a result:

    ip a | grep 192.168.200.214

    The output looks like:
    inet 192.168.200.214/32 scope global eth0
    This means the virtual IP 192.168.200.214 currently lives on k8s-master1, because k8s-master1's keepalived is configured as MASTER.

    Test keepalived IP failover

    Reboot k8s-master1; while it is down, the virtual IP floats over to k8s-master2.

    Run ip a | grep 192.168.200.214 on k8s-master2 to see it.

    Once k8s-master1 has finished rebooting, the virtual IP floats back to k8s-master1.

    That shows keepalived is working correctly.
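
    A convenient way to watch the failover from k8s-master2 while k8s-master1 reboots:

    watch -n 1 'ip a | grep 192.168.200.214'    # the VIP should appear here while master1 is down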

    3. Load balancing with haproxy

    # Install haproxy on both master nodes
    yum install -y haproxy
    Configure haproxy on both k8s-master1 and k8s-master2:

    cat > /etc/haproxy/haproxy.cfg << EOF
    global
        chroot /var/lib/haproxy
        daemon
        group haproxy
        user haproxy
        log 127.0.0.1:514 local0 warning
        pidfile /var/lib/haproxy.pid
        maxconn 20000
        spread-checks 3
        nbproc 8

    defaults
        log global
        mode tcp
        retries 3
        option redispatch

    listen https-apiserver
        bind 0.0.0.0:8443    # listen on all addresses; I use port 8443 here
        mode tcp
        balance roundrobin
        timeout server 15s
        timeout connect 15s
        server apiserver1 192.168.200.210:6443 check port 6443 inter 5000 fall 5    # forward to the apiserver on k8s-master1 (default port 6443)
        server apiserver2 192.168.200.211:6443 check port 6443 inter 5000 fall 5    # forward to the apiserver on k8s-master2 (default port 6443)
    EOF



    # Enable and start haproxy
    systemctl start haproxy.service && systemctl enable haproxy.service
    In the configuration above only the bind line and the server forwarding entries need adjusting for your environment; everything else can stay as it is.

    Check haproxy's status on both nodes and make sure it is running:

    systemctl status haproxy
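
    Besides systemctl status, you can confirm haproxy is actually listening on 8443 on both masters:

    ss -lntp | grep 8443    # should show haproxy bound to 0.0.0.0:8443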
    4. Initialize the master

    Initialize k8s-master1 first. Before doing so, check that haproxy is running and keepalived is working properly.

    List the images that will be needed

    kubeadm config images list
    The output looks like:
    k8s.gcr.io/kube-apiserver:v1.16.2
    k8s.gcr.io/kube-controller-manager:v1.16.2
    k8s.gcr.io/kube-scheduler:v1.16.2
    k8s.gcr.io/kube-proxy:v1.16.2
    k8s.gcr.io/pause:3.1
    k8s.gcr.io/etcd:3.3.15-0
    k8s.gcr.io/coredns:1.6.2
    If you can reach k8s.gcr.io, you can pull the images ahead of time with:

    kubeadm config images pull
    If you cannot, the only option is to point the initialization at the Aliyun mirror registry.aliyuncs.com/google_containers (or pre-pull from it, as shown below).
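
    Pre-pulling from the Aliyun mirror also makes the init itself faster; kubeadm accepts the same flags here as at init time:

    kubeadm config images pull \
      --image-repository registry.aliyuncs.com/google_containers \
      --kubernetes-version v1.16.2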

    Start the initialization

    kubeadm init \
      --apiserver-advertise-address=<IP of the current master> \
      --image-repository registry.aliyuncs.com/google_containers \
      --kubernetes-version v1.16.2 \
      --service-cidr=10.1.0.0/16 \
      --pod-network-cidr=10.244.0.0/16 \
      --control-plane-endpoint 192.168.200.214:8443 \
      --upload-certs
    None of the flags above may be omitted.

    Flag descriptions
    --apiserver-advertise-address: the IP address kube-apiserver listens on, i.e. this master's own IP.
    --image-repository: use the Aliyun image registry.
    --kubernetes-version: the Kubernetes version to deploy.
    --pod-network-cidr: the Pod network range, 10.244.0.0/16.
    --service-cidr: the Service network range.
    --control-plane-endpoint: the keepalived virtual IP plus the haproxy port.
    --upload-certs: upload the control-plane certificates to the cluster.
    An equivalent config-file form of this init command is sketched below.
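
    For reference, the same settings can be written as a kubeadm config file instead of flags. This is only a sketch: kubeadm.k8s.io/v1beta2 is the config version used by kubeadm 1.16, kubeadm-config.yaml is just a file name I chose, and the flag form above is what I actually ran.

    cat > kubeadm-config.yaml << EOF
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 192.168.200.210     # IP of the current master (k8s-master1 here)
    ---
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: v1.16.2
    imageRepository: registry.aliyuncs.com/google_containers
    controlPlaneEndpoint: "192.168.200.214:8443"
    networking:
      serviceSubnet: 10.1.0.0/16
      podSubnet: 10.244.0.0/16
    EOF

    kubeadm init --config kubeadm-config.yaml --upload-certs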
    When initialization succeeds you will see output roughly like the following. Keep it: the join commands in it are needed later for adding the second master and the worker nodes.

    Your Kubernetes control-plane has initialized successfully!

    To start administering your cluster from this node, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    You can now join any number of the control-plane node running the following command on each as root:

    kubeadm join 192.168.200.214:8443 --token fo15mb.w2c272dznmpr1qil \
        --discovery-token-ca-cert-hash sha256:3455de957a699047e817f3c42b11da1cc665ee667f78661d20cfabc5abcc4478 \
        --control-plane --certificate-key bcd49ad71ed9fa66f6204ba63635a899078b1be908a69a30287554f7b01a9421

    Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
    As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
    "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

    Then you can join any number of worker nodes by running the following on each as root:

    kubeadm join 192.168.200.214:8443 --token fo15mb.w2c272dznmpr1qil \
        --discovery-token-ca-cert-hash sha256:3455de957a699047e817f3c42b11da1cc665ee667f78661d20cfabc5abcc4478
    At this point kubectl get node shows one master in the NotReady state; that is because no pod network has been installed yet.

    Note: if you got the init parameters wrong and have already run the command, you can reset with kubeadm reset. In my case, however, re-initializing after a reset kept failing, and docker ps showed none of the containers running; I suspect the reset left some Docker-related files behind. Removing Docker with yum remove docker-ce, rebooting the server, and reinstalling Docker solved it for me.
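
    When a reset is needed, the usual cleanup before re-initializing is roughly the following (a sketch; kubeadm reset itself prints similar reminders about iptables and CNI files). It did not save me from the reinstall above, but it is worth trying first:

    kubeadm reset -f
    # clean up iptables rules and leftover CNI / kubeconfig files
    iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
    rm -rf /etc/cni/net.d $HOME/.kube/config
    systemctl restart docker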

    Run the following commands from the output above, and kubectl becomes usable:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
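
    A quick sanity check that kubectl is talking to the new control plane:

    kubectl get nodes
    kubectl get pods -n kube-system
    # the master stays NotReady until the pod network below is installed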
     

    5. Deploy the pod network to the cluster

    I use the flannel network here.

    Download the kube-flannel.yml file

    curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    The downloaded file is shown below.

    Note: the network I am on seems to block that URL, so I downloaded the file on my own Aliyun host and brought it over; I also changed the NIC name in it. Here are the file contents:

    ---
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: psp.flannel.unprivileged
      annotations:
        seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
        seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
        apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
        apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
    spec:
      privileged: false
      volumes:
        - configMap
        - secret
        - emptyDir
        - hostPath
      allowedHostPaths:
        - pathPrefix: "/etc/cni/net.d"
        - pathPrefix: "/etc/kube-flannel"
        - pathPrefix: "/run/flannel"
      readOnlyRootFilesystem: false
      # Users and groups
      runAsUser:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      # Privilege Escalation
      allowPrivilegeEscalation: false
      defaultAllowPrivilegeEscalation: false
      # Capabilities
      allowedCapabilities: ['NET_ADMIN']
      defaultAddCapabilities: []
      requiredDropCapabilities: []
      # Host namespaces
      hostPID: false
      hostIPC: false
      hostNetwork: true
      hostPorts:
      - min: 0
        max: 65535
      # SELinux
      seLinux:
        # SELinux is unused in CaaSP
        rule: 'RunAsAny'
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: flannel
    rules:
      - apiGroups: ['extensions']
        resources: ['podsecuritypolicies']
        verbs: ['use']
        resourceNames: ['psp.flannel.unprivileged']
      - apiGroups:
          - ""
        resources:
          - pods
        verbs:
          - get
      - apiGroups:
          - ""
        resources:
          - nodes
        verbs:
          - list
          - watch
      - apiGroups:
          - ""
        resources:
          - nodes/status
        verbs:
          - patch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: flannel
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: flannel
    subjects:
    - kind: ServiceAccount
      name: flannel
      namespace: kube-system
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: flannel
      namespace: kube-system
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: kube-flannel-cfg
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    data:
      cni-conf.json: |
        {
          "cniVersion": "0.2.0",
          "name": "cbr0",
          "plugins": [
            {
              "type": "flannel",
              "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      net-conf.json: |
        {
          "Network": "10.244.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-amd64
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: beta.kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: beta.kubernetes.io/arch
                        operator: In
                        values:
                          - amd64
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.11.0-amd64
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.11.0-amd64
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            - --iface=eth0          # change this to the NIC name your server actually uses
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-arm64
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: beta.kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: beta.kubernetes.io/arch
                        operator: In
                        values:
                          - arm64
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.11.0-arm64
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.11.0-arm64
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-arm
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: beta.kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: beta.kubernetes.io/arch
                        operator: In
                        values:
                          - arm
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.11.0-arm
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.11.0-arm
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-ppc64le
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: beta.kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: beta.kubernetes.io/arch
                        operator: In
                        values:
                          - ppc64le
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.11.0-ppc64le
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.11.0-ppc64le
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-s390x
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: beta.kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: beta.kubernetes.io/arch
                        operator: In
                        values:
                          - s390x
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.11.0-s390x
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.11.0-s390x
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    Install the flannel network. Also make sure your nodes can pull the image quay.io/coreos/flannel:v0.11.0-amd64 (you can check this up front, as shown below).
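
    To verify image connectivity on each node ahead of time; if the pull fails, load the image from a machine that can reach quay.io (docker save / docker load):

    docker pull quay.io/coreos/flannel:v0.11.0-amd64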

    kubectl apply -f ./kube-flannel.yml
    Note that the flannel image referenced in kube-flannel.yml is v0.11.0: quay.io/coreos/flannel:v0.11.0-amd64.
    After the apply, output like the following means flannel was deployed successfully:

    podsecuritypolicy.policy/psp.flannel.unprivileged created
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.apps/kube-flannel-ds-amd64 created
    daemonset.apps/kube-flannel-ds-arm64 created
    daemonset.apps/kube-flannel-ds-arm created
    daemonset.apps/kube-flannel-ds-ppc64le created
    daemonset.apps/kube-flannel-ds-s390x created
    If flannel fails to come up, the server may have more than one NIC; in that case you currently need to use the --iface flag in kube-flannel.yml to name the host's internal NIC, otherwise DNS may fail to resolve. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments.

    Note: kube-flannel.yml defines several containers sections; set the flag in the startup arguments of the container that uses quay.io/coreos/flannel:v0.11.0-amd64:

    containers:
    - name: kube-flannel
      image: quay.io/coreos/flannel:v0.11.0-amd64
      command:
      - /opt/bin/flanneld
      args:
      - --ip-masq
      - --kube-subnet-mgr
      - --iface=eth0
    The args under containers specify --iface=eth0; check first (with ifconfig, or as shown below) which NIC carries the host's IP, eth0 in my case.
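
    If you are not sure which NIC to name, this shows the interface that carries each local IPv4 address:

    ip -o -4 addr show    # or: ifconfig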

    Edit the kube-flannel.yml you downloaded earlier and set the NIC name in the container's args.

    After editing kube-flannel.yml, first delete the previously deployed network:

    kubectl delete -f ./kube-flannel.yml
    Then create it again:

    kubectl apply -f ./kube-flannel.yml
    Afterwards, check the node status (kubectl get node):

    NAME STATUS ROLES AGE VERSION
    uat-k8s-master1 Ready master 5h42m v1.16.2
    Check the pods:

    kubectl get pod -n kube-system
    NAME READY STATUS RESTARTS AGE
    coredns-58cc8c89f4-pb8b4 1/1 Running 0 5h43m
    coredns-58cc8c89f4-qwpx5 1/1 Running 0 5h43m
    etcd-uat-k8s-master1 1/1 Running 1 5h42m
    kube-apiserver-uat-k8s-master1 1/1 Running 1 5h43m
    kube-controller-manager-uat-k8s-master1 1/1 Running 3 5h42m
    kube-flannel-ds-amd64-6zjrx 1/1 Running 0 4h30m
    kube-proxy-v6wcg 1/1 Running 1 5h43m
    kube-scheduler-uat-k8s-master1 1/1 Running 3 5h43m

    6. Add another control plane, i.e. a second master

    Run the join command saved from the earlier init output on k8s-master2, and it joins as a master:

    kubeadm join 192.168.200.214:8443 --token fo15mb.w2c272dznmpr1qil \
        --discovery-token-ca-cert-hash sha256:3455de957a699047e817f3c42b11da1cc665ee667f78661d20cfabc5abcc4478 \
        --control-plane --certificate-key bcd49ad71ed9fa66f6204ba63635a899078b1be908a69a30287554f7b01a9421
    After running that command, the second master node is added to the cluster.

    The init output also contains the note below: the certificates uploaded with --upload-certs are deleted after two hours, and the command shown can re-upload them (a sketch of regenerating the join command and certificate key is given in section V below):

    Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
    As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
    "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
    V. Add the worker nodes
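
    If the bootstrap token from the init output has expired (tokens last 24 hours by default) or the two-hour certificate window has passed, regenerate what you need on an existing master first:

    # fresh worker join command (new token + CA hash)
    kubeadm token create --print-join-command

    # re-upload the control-plane certificates and print a new certificate key
    kubeadm init phase upload-certs --upload-certs
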

    kubeadm join 192.168.200.214:8443 --token fo15mb.w2c272dznmpr1qil \
        --discovery-token-ca-cert-hash sha256:3455de957a699047e817f3c42b11da1cc665ee667f78661d20cfabc5abcc4478
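
    After the nodes join, verify from a master that they registered and eventually go Ready:

    kubectl get nodes -o wide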
     

    And that's it.

    I have not dealt with the keepalived split-brain problem yet; I will find time to explore that later.
    Copyright notice: this is an original article by the CSDN blogger 「不屑哥」, licensed under CC 4.0 BY-SA; please include the original link and this notice when reposting.
    Original link: https://blog.csdn.net/fuck487/java/article/details/102783300
