K8s - Kubernetes Cluster Installation and Deployment Tutorial (CentOS)

    This article demonstrates how to set up a three-node Kubernetes cluster (one master node and two worker nodes). All three servers run CentOS 7.
     

    I. Preparation (required on all three nodes)

    1. Install Docker

    Docker must be installed on every node; for the detailed steps, refer to the earlier article on installing Docker.

    2. Install kubelet, kubeadm, and kubectl

    (1) kubelet, kubeadm, and kubectl must be installed on all nodes. Their roles are as follows:
    • kubeadm: used to initialize the cluster.
    • kubelet: runs on every node in the cluster and is responsible for starting Pods and containers.
    • kubectl: the Kubernetes command-line tool. With kubectl you can deploy and manage applications, inspect resources, and create, delete, and update components.
     
    (2) Run the following commands to install the three tools (to avoid "network unreachable" errors, the Google repository is replaced here with a domestic mirror):
    $ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
     
    # Install kubelet, kubeadm and kubectl (a specific version can be pinned)
    $ yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
     
     
     
    # Enable kubelet at boot and start it now
    $ systemctl enable kubelet && systemctl start kubelet
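     
    After installation it is worth confirming that the three tools are actually in place; a quick check (a sketch, assuming the packages installed cleanly from the repository above):
    # Confirm the installed versions of the three tools
    $ kubeadm version -o short
    $ kubectl version --client
    $ rpm -q kubelet kubeadm kubectl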
     

    3. Adjust the sysctl configuration

    On RHEL/CentOS 7, network requests can be routed incorrectly because bridged traffic bypasses iptables. Run the following so that net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration.
    (1) Edit the configuration file with vi:
    vi /etc/sysctl.conf

    (2) Add the following lines to the file, then save and exit.
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1

    (3) Finally, apply the settings with:
    sysctl --system
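     
    These settings only take effect if the br_netfilter kernel module is loaded. A quick sanity check (a sketch; the module is normally loaded once Docker has created its bridge):
    # Load the bridge netfilter module and confirm the values took effect
    $ modprobe br_netfilter
    $ sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward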
     
     
     

    4. Disable swap

    (1) First, turn swap off:
    swapoff -a
     
    (2) Next, edit the /etc/fstab file.
    vi /etc/fstab

    (3) Comment out the line /dev/mapper/centos-swap swap swap defaults 0 0 by adding a # at the beginning of it.
    (4) Save and exit. Swap will no longer be re-enabled automatically after the machine reboots.
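     
    The same edit can also be done non-interactively; a sketch, assuming the default CentOS LVM layout with the centos-swap volume mentioned above:
    # Comment out the swap entry in /etc/fstab, then verify swap usage shows 0
    $ sed -i '/centos-swap/ s/^/#/' /etc/fstab
    $ free -h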

    5. Change the cgroup driver

    5.1 Edit daemon.json

    Edit daemon.json and add "exec-opts": ["native.cgroupdriver=systemd"]:

    [root@master ~]# more /etc/docker/daemon.json

    {
      "registry-mirrors": ["http://hub-mirror.c.163.com/"],
      "exec-opts": ["native.cgroupdriver=systemd"]
    }

     

    5.2 Reload Docker

    [root@master ~]# systemctl daemon-reload
    [root@master ~]# systemctl restart docker
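     
    To confirm that Docker picked up the new cgroup driver (it must match the kubelet's driver, otherwise kubeadm init will fail), check docker info:
    # Should report: Cgroup Driver: systemd
    [root@master ~]# docker info | grep -i "cgroup driver"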
    6. Set SELinux to permissive
     
    # Set SELinux to permissive mode (effectively disabling it)
    $ setenforce 0
    $ sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
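     
    To verify both the runtime mode and the persisted setting:
    # getenforce should print Permissive; the config file should read SELINUX=permissive
    $ getenforce
    $ grep '^SELINUX=' /etc/selinux/config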
     

    II. Installing and configuring the Master node

    1. Initialize the Master

    (1) Run the following command on the Master to initialize it:
    Note: --pod-network-cidr=10.244.0.0/16 is the configuration required by the network plugin, used to allocate a Pod subnet to each node. The network plugin used here is flannel, whose default manifest expects exactly this range.
    $ kubeadm init --pod-network-cidr=10.244.0.0/16
    # or, with the cluster version and API server address given explicitly
    # (if you use the default flannel manifest, keep --pod-network-cidr=10.244.0.0/16):
    $ kubeadm init --kubernetes-version=v1.21.0 --apiserver-advertise-address 192.168.37.101 --pod-network-cidr=10.10.0.0/16


    (2) During initialization kubeadm runs a series of preflight checks to verify that the server meets the installation requirements. Results are reported as [WARNING] or [ERROR]; every [ERROR] must be resolved.

    (3) In my case, three errors were detected:
    • The Master node needs at least two CPU cores: since I am using a virtual machine, I simply shut it down and changed its configuration.
    • The bridge-nf-call-iptables parameter must be set to 1: this is already handled if step 3 of the preparation above was done.
    • Swap must be disabled: run swapoff -a to turn it off.
     
    (4) Once every error is resolved, re-run the init command and kubeadm starts the installation. At this point it usually still fails, because kubeadm init pulls its images from k8s.gcr.io by default, which is not reachable from mainland China, so a workaround is needed.
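     
    As an alternative to pulling and re-tagging the images by hand (described next), kubeadm can be pointed directly at a domestic registry with the --image-repository flag. A sketch, reusing the Aliyun registry, version, and addresses from this article (the coredns image may still need manual handling, as described below):
    $ kubeadm init \
        --kubernetes-version=v1.21.0 \
        --apiserver-advertise-address 192.168.37.101 \
        --pod-network-cidr=10.244.0.0/16 \
        --image-repository registry.aliyuncs.com/google_containers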

    (5) The images can instead be downloaded through a domestic mirror of the Kubernetes registry and re-tagged to the names kubeadm expects (for example, k8s.gcr.io/kube-apiserver can be pulled from Aliyun). The following script automates this for all required images:
    # Create the script file images.sh
    $ vi images.sh

    # Script contents:
    #!/bin/bash
    # mirror registry address
    # Aliyun: registry.aliyuncs.com/google_containers
    url=registry.aliyuncs.com/google_containers
    version=v1.21.0
    images=(`kubeadm config images list --kubernetes-version=$version|awk -F 'io/' '{print $2}'`)
    for imagename in ${images[@]} ; do
      if [[ $imagename = coredns* ]] ;
         then
          docker pull $imagename
          docker tag  $imagename k8s.gcr.io/$imagename
          docker rmi -f $imagename
         else
          docker pull $url/$imagename
          docker tag  $url/$imagename k8s.gcr.io/$imagename
          docker rmi -f $url/$imagename
         fi
    done
    # Grant execute permission
    $ chmod +x ./images.sh
    # Run the script:
    $ ./images.sh
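     
    After the script finishes, it is worth confirming that every image kubeadm needs is now present locally:
    # List the images kubeadm expects, then check that they all exist under k8s.gcr.io
    $ kubeadm config images list --kubernetes-version=v1.21.0
    $ docker images | grep k8s.gcr.io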

     

    (6) The coredns/coredns:v1.8.0 image fails to download, because that tag cannot be found on Docker Hub. Download it manually instead: visit the coredns release page, find the matching version, and download it.

    Then import it as an image with the following command:

    $ cat coredns_1.8.0_linux_amd64.tgz | docker import - coredns:v1.8.0

    (7) Once the image has been downloaded, use docker tag to rename it to the image name kubeadm expects during installation:
    $ docker tag coredns:v1.8.0  k8s.gcr.io/coredns/coredns:v1.8.0
     
    (8) After all images are in place, re-run the original init command and kubeadm installs successfully. In the last lines of its output, kubeadm prints the command that other nodes must run to join the cluster, including the token required to join (make a note of it).
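     
    If the join command is lost, or the token expires (tokens are valid for 24 hours by default), a fresh join command can be printed on the Master at any time:
    $ kubeadm token create --print-join-command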

    2. Configure kubectl (using the root user also works)

    kubectl is the command-line tool for managing a Kubernetes cluster, and we already installed it on all nodes. After the Master has been initialized, a little configuration is needed before kubectl can be used.
    (1) Follow the first highlighted block of the kubeadm init output. Running kubectl as a regular user is recommended (root can run into some issues), so first create a regular user, here called hangge; for how to do this, see my earlier article.
     
    (2) Switch to the regular user (here named app):
    su - app

    (3) Run the following commands (from the kubeadm init output) to configure kubectl for that user:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    (4) For convenience, enable auto-completion for kubectl commands:
    echo "source <(kubectl completion bash)" >> ~/.bash_profile
    source ~/.bash_profile
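
    A quick check that kubectl can now reach the API server with the copied config:
    kubectl cluster-info
    kubectl get nodes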

    3. Install a Pod network

    For the Kubernetes cluster to work, a Pod network must be installed; otherwise Pods cannot communicate with each other (this is the second highlighted block of the kubeadm init output).
    Kubernetes supports multiple network solutions, such as flannel and calico. Run one of the following commands to deploy flannel or calico:
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

    kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
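
    After applying the flannel manifest, the DaemonSet should start one flannel pod per node; progress can be watched using the app=flannel label from the manifest:
    kubectl -n kube-system get pods -l app=flannel -o wide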

    4. Open ports (alternatively, the firewall can be disabled)

    Run the following firewall-cmd commands to open the relevant ports:
    firewall-cmd --permanent --add-port=6443/tcp
    firewall-cmd --permanent --add-port=2379/tcp
    firewall-cmd --permanent --add-port=2380/tcp
    firewall-cmd --permanent --add-port=10250/tcp
    firewall-cmd --permanent --add-port=10251/tcp
    firewall-cmd --permanent --add-port=10252/tcp
    firewall-cmd --reload

    III. Installing and configuring the Node machines

    1. Add the nodes

    (1) On each of the two node machines, run the following command (the last highlighted block of the kubeadm init output) to register it with the cluster:

    kubeadm join 192.168.37.101:6443 --token ethqh8.nmtfwcg88gnfwvsu --discovery-token-ca-cert-hash sha256:1319e8da4d083b5b2f40161045845674bdbe7823c93c6767326c39cf719cb0f1


    (2) Output such as "This node has joined the cluster" indicates the node was added successfully.
     
    (3) If an error occurs during the join, it can usually be resolved by running kubeadm reset on the node and then joining again.
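
    A typical recovery sequence on the failing node looks like the following sketch (the token and hash are the ones from the kubeadm init output above):
    # Wipe the partially configured state, then rerun the join command
    kubeadm reset -f
    kubeadm join 192.168.37.101:6443 --token ethqh8.nmtfwcg88gnfwvsu --discovery-token-ca-cert-hash sha256:1319e8da4d083b5b2f40161045845674bdbe7823c93c6767326c39cf719cb0f1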

    2. Install the images

    (1) Each node also needs three images: quay.io/coreos/flannel:v0.11.0-amd64, k8s.gcr.io/pause, and k8s.gcr.io/kube-proxy. The exact versions of the latter two can be checked with kubeadm config images list:

    (2) Due to network restrictions, the latter two images may not download automatically (the first one can be pulled directly). They can be pulled from a domestic mirror instead and then renamed with docker tag to the names kubeadm expects (adjust the version tags below to match the kubeadm config images list output, v1.21.0 in this article):
    docker pull quay.io/coreos/flannel:v0.11.0-amd64
     
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
     
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.1
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.1 k8s.gcr.io/kube-proxy:v1.14.1
     

    3. Open ports

    Port 10250 must always be open. If the control-plane node will also run containers, the corresponding NodePort range (30000-32767) needs to be opened as well:
    firewall-cmd --permanent --add-port=10250/tcp
    firewall-cmd --reload
     

    IV. Checking node status

    (1) On the master node, run kubectl get nodes to check the node status:
    (2) At this point the node machines are still in the NotReady state. This is because each node needs to start several components that run in Pods, and their images must be downloaded first.
     
    (3) Pod status can be checked with the command below. CrashLoopBackOff, ContainerCreating, Init:0/1, and similar states all mean the Pod is not ready; only Running means it is ready.
    kubectl get pod --all-namespaces
     
    (4) kubectl describe pod <Pod Name> shows the details of a specific Pod, for example why kube-proxy-96bz6 is not ready yet:
    kubectl describe pod kube-proxy-96bz6 --namespace=kube-system

    (5) The result shows that an image download failed. This is likely a network issue; we can simply keep waiting, because Kubernetes retries automatically, or pull the image manually with docker pull.
    Note: the failed download is not necessarily on the Master node; it may also be on a worker node, as the earlier part of the output indicates. Here, for instance, k8s.gcr.io/pause:3.1 was missing on a node.

    (6) Once all Pods are in the Running state, all nodes become ready as well. The Kubernetes cluster has now been created successfully.
     
     

    kubectl get cs

    If kubectl get cs reports the controller-manager and scheduler as Unhealthy, comment out the --port=0 line in kube-controller-manager.yaml and kube-scheduler.yaml under /etc/kubernetes/manifests; both components will then start successfully.
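
    A minimal sketch of that edit (assuming the default manifest paths on the Master; the kubelet watches this directory and recreates the static pods automatically):
    cd /etc/kubernetes/manifests
    sed -i 's/- --port=0/#- --port=0/' kube-controller-manager.yaml kube-scheduler.yaml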

     
     
     
     
     
    Appendix: the contents of kube-flannel.yml (flannel v0.11.0), the same manifest applied in the Pod network step above; it can also be applied from a local copy with:

    kubectl apply -f kube-flannel.yml
     
    apiVersion: policy/v1beta1
    kind: PodSecurityPolicy
    metadata:
      name: psp.flannel.unprivileged
      annotations:
        seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
        seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
        apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
        apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
    spec:
      privileged: false
      volumes:
        - configMap
        - secret
        - emptyDir
        - hostPath
      allowedHostPaths:
        - pathPrefix: "/etc/cni/net.d"
        - pathPrefix: "/etc/kube-flannel"
        - pathPrefix: "/run/flannel"
      readOnlyRootFilesystem: false
      # Users and groups
      runAsUser:
        rule: RunAsAny
      supplementalGroups:
        rule: RunAsAny
      fsGroup:
        rule: RunAsAny
      # Privilege Escalation
      allowPrivilegeEscalation: false
      defaultAllowPrivilegeEscalation: false
      # Capabilities
      allowedCapabilities: ['NET_ADMIN']
      defaultAddCapabilities: []
      requiredDropCapabilities: []
      # Host namespaces
      hostPID: false
      hostIPC: false
      hostNetwork: true
      hostPorts:
      - min: 0
        max: 65535
      # SELinux
      seLinux:
        # SELinux is unused in CaaSP
        rule: 'RunAsAny'
    ---
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: flannel
    rules:
      - apiGroups: ['extensions']
        resources: ['podsecuritypolicies']
        verbs: ['use']
        resourceNames: ['psp.flannel.unprivileged']
      - apiGroups:
          - ""
        resources:
          - pods
        verbs:
          - get
      - apiGroups:
          - ""
        resources:
          - nodes
        verbs:
          - list
          - watch
      - apiGroups:
          - ""
        resources:
          - nodes/status
        verbs:
          - patch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: flannel
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: flannel
    subjects:
    - kind: ServiceAccount
      name: flannel
      namespace: kube-system
    ---
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: flannel
      namespace: kube-system
    ---
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: kube-flannel-cfg
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    data:
      cni-conf.json: |
        {
          "name": "cbr0",
          "cniVersion": "0.3.1",
          "plugins": [
            {
              "type": "flannel",
              "delegate": {
                "hairpinMode": true,
                "isDefaultGateway": true
              }
            },
            {
              "type": "portmap",
              "capabilities": {
                "portMappings": true
              }
            }
          ]
        }
      net-conf.json: |
        {
          "Network": "10.244.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-amd64
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: kubernetes.io/arch
                        operator: In
                        values:
                          - amd64
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.11.0-amd64
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.11.0-amd64
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-arm64
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: kubernetes.io/arch
                        operator: In
                        values:
                          - arm64
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.11.0-arm64
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.11.0-arm64
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-arm
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: kubernetes.io/arch
                        operator: In
                        values:
                          - arm
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.11.0-arm
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.11.0-arm
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-ppc64le
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: kubernetes.io/arch
                        operator: In
                        values:
                          - ppc64le
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.11.0-ppc64le
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.11.0-ppc64le
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
    ---
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: kube-flannel-ds-s390x
      namespace: kube-system
      labels:
        tier: node
        app: flannel
    spec:
      selector:
        matchLabels:
          app: flannel
      template:
        metadata:
          labels:
            tier: node
            app: flannel
        spec:
          affinity:
            nodeAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: kubernetes.io/os
                        operator: In
                        values:
                          - linux
                      - key: kubernetes.io/arch
                        operator: In
                        values:
                          - s390x
          hostNetwork: true
          tolerations:
          - operator: Exists
            effect: NoSchedule
          serviceAccountName: flannel
          initContainers:
          - name: install-cni
            image: quay.io/coreos/flannel:v0.11.0-s390x
            command:
            - cp
            args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
            volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.11.0-s390x
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            resources:
              requests:
                cpu: "100m"
                memory: "50Mi"
              limits:
                cpu: "100m"
                memory: "50Mi"
            securityContext:
              privileged: false
              capabilities:
                 add: ["NET_ADMIN"]
            env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            volumeMounts:
            - name: run
              mountPath: /run/flannel
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
          volumes:
            - name: run
              hostPath:
                path: /run/flannel
            - name: cni
              hostPath:
                path: /etc/cni/net.d
            - name: flannel-cfg
              configMap:
                name: kube-flannel-cfg
     
     
     
     
     
Original article: https://www.cnblogs.com/pinghengxing/p/14665253.html