  Deploying Kubernetes with Flannel Networking on Ubuntu 18.04 Server

    Preparing the Servers

    Install Ubuntu 18.04 Server on ESXi 6.5. Three hosts are used, with planned hostnames kube01, kube02 and kube03, each configured with 2 cores, 4 GB RAM and a 160 GB disk; Kubernetes requires the CPU to have at least two cores.

    ESXi 6.5 has a bug that crashes Ubuntu VMs during remote SSH sessions. Following the workaround in https://kb.vmware.com/s/article/2151480, SSH into the ESXi host to modify the VM configuration. The files live under the /vmfs/volumes/584f7xxx-7xx749b4-3461-x0... / directory. Power off the VM, locate its directory, find the .vmx file inside, and append at the end:

    vmxnet3.rev.30 = FALSE
    

    Updating the Servers

    Point Ubuntu's apt sources at a mirror in China

    kube02:~$ more /etc/apt/sources.list
    deb https://mirrors.ustc.edu.cn/ubuntu bionic main
    deb https://mirrors.ustc.edu.cn/ubuntu bionic-security main
    deb https://mirrors.ustc.edu.cn/ubuntu bionic-updates main
    
    sudo apt update
    sudo apt upgrade
    

    Changing the Hostname

    Edit cloud.cfg

    sudo vi /etc/cloud/cloud.cfg
    # change
    preserve_hostname: false
    # to
    preserve_hostname: true
    

    Otherwise the hostname set with hostnamectl set-hostname will be reverted after a reboot.

    Set the hostname

    sudo hostnamectl set-hostname kube01
    

    Disabling the Swap Partition

    1. Turn off swap immediately

    sudo swapoff -a 
    

    2. Disable swap in fstab

    sudo vi /etc/fstab 
    

    Comment out the swap line with a leading #
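
    For example, a one-line sketch that comments out any active swap entry (the pattern assumes the entry has a whitespace-separated "swap" field; check /etc/fstab afterwards):

    sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab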

    3. Mask the swap unit in systemd; if you skip this step, the swap partition will reappear after a reboot

    # The swap partition may also be on sdb, sdc etc. depending on your disks; check which partition is swap. Here we assume /dev/sda2
    sudo fdisk -lu /dev/sda 
    # Based on the result of the previous step, run
    sudo systemctl mask dev-sda2.swap
    

    Installing and Configuring Docker

    Pick the Docker version matching the Kubernetes version you plan to install. The steps below install the latest version directly.

    # Prerequisites
    sudo apt install apt-transport-https ca-certificates curl software-properties-common
    # Add the repository GPG key; note the sudo after the pipe
    curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
    # Add the apt source for the current release
    lsb_release -cs
    sudo add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
    # Install Docker
    sudo apt install docker-ce
    # Check the version; this install got 19.03.5
    docker version
    # Add the current user to the docker group; log out and back in for it to take effect, verify with the id command
    sudo usermod -aG docker milton
    # Configure docker: add a registry mirror and other settings
    sudo vi /etc/docker/daemon.json
    

    The contents of daemon.json:

    {
      "registry-mirrors": ["https://registry.docker-cn.com"],
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {"max-size": "100m"},
      "storage-driver": "overlay2"
    }
    

    The exec-opts entry above changes Docker's cgroup driver to systemd.

    # Restart the docker service, then check that Cgroup Driver and Registry Mirrors are correct
    sudo systemctl restart docker
    docker info
    

    To install a specific version of Docker instead, use the following commands

    # List the available versions
    apt-cache madison docker-ce
    # Install a specific version
    sudo apt install docker-ce=18.06.3~ce~3-0~ubuntu
    

    Installing Kubernetes

    # Add the repository GPG key; note the sudo after the pipe
    curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
    # Add the apt source; there is no bionic repo, so use the xenial one
    cd /etc/apt/sources.list.d/
    sudo vi kubernetes.list
    

    Contents of the kubernetes.list file:

    deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
    

    Update and install

    sudo apt update
    sudo apt install kubelet kubeadm kubectl
    

    kubeadm: the command to bootstrap the cluster.
    kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
    kubectl: the command line util to talk to your cluster.

    Likewise, a specific version can be chosen here and installed with the following commands

    apt-cache madison kubelet
    sudo apt install kubelet=1.14.10-00 kubeadm=1.14.10-00 kubectl=1.14.10-00
    

    Pulling the Kubernetes Images That Cannot Be Downloaded Directly

    List the required images; the result is a set of entries prefixed with k8s.gcr.io/

    kubeadm config images list
    

    Write a script that changes the source to registry.aliyuncs.com/google_containers/ , pulls the images, then re-tags them back. The script below needs to be adjusted to match the list from the previous step, then executed.

    #!/bin/bash
    # The image names below have the "k8s.gcr.io/" prefix stripped; replace the versions with those reported by kubeadm config images list
    images=(
        kube-apiserver:v1.17.0
        kube-controller-manager:v1.17.0
        kube-scheduler:v1.17.0
        kube-proxy:v1.17.0
        pause:3.1
        etcd:3.4.3-0
        coredns:1.6.5
    )
    
    for imageName in ${images[@]} ; do
        docker pull registry.aliyuncs.com/google_containers/$imageName
        docker tag registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
        docker rmi registry.aliyuncs.com/google_containers/$imageName
    done
    

    Worker nodes need these images too, because joining also downloads and starts the corresponding containers; without pre-pulling, a node stays NotReady in the nodes list after joining. So pre-pull on the workers as well. For a worker node the setup ends here; for the master, continue below.
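
    A quick sanity check that the re-tagged images are now present locally:

    docker images | grep 'k8s.gcr.io'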

    Initializing the Master with kubeadm init

    With all the preparation above done, the master host can be initialized

    sudo kubeadm init --apiserver-advertise-address=0.0.0.0 --pod-network-cidr=172.16.0.0/16 --service-cidr=10.1.0.0/16
    

    Parameter notes

    • --apiserver-advertise-address The IP (interface) on which to serve the API; use the host's own IP, or 0.0.0.0 to leave it unspecified
    • --pod-network-cidr The IP range of the pod network; it must match the setting in the kube-flannel.yml configured later
    • --service-cidr The IP range of the service layer; these are virtual IPs that never appear in routing tables, they only need to be distinct from the ranges above

    The output

    W1231 08:57:05.495224   11297 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    W1231 08:57:05.495416   11297 version.go:102] falling back to the local client version: v1.17.0
    W1231 08:57:05.495703   11297 validation.go:28] Cannot validate kube-proxy config - no validator is available
    W1231 08:57:05.495735   11297 validation.go:28] Cannot validate kubelet config - no validator is available
    [init] Using Kubernetes version: v1.17.0
    [preflight] Running pre-flight checks
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [kube01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.11.129]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [kube01 localhost] and IPs [192.168.11.129 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [kube01 localhost] and IPs [192.168.11.129 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    W1231 08:57:14.315543   11297 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    W1231 08:57:14.318419   11297 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 37.004860 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node kube01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node kube01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: f3jgn2.5w8152dpifacihnj
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.11.129:6443 --token f3jgn2.5w8152dpifacihnj \
        --discovery-token-ca-cert-hash sha256:cc1ae32e0924dffa587b5d94b61005ae892db289f1a59f1ef71b45a7eda65ca3 
    

    Following the hints above, create the .kube directory, copy the config file, and change its owner. The kubeadm join command remains valid for 24 hours (including across shutdowns and reboots).
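
    The commands, copied from the output above:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config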

    Verify

    # List the pods
    kubectl get pods -n kube-system
    # Output
    NAME                             READY   STATUS    RESTARTS   AGE
    coredns-6955765f44-7dnqv         1/1     Running   0          71m
    coredns-6955765f44-pvlcp         1/1     Running   0          71m
    etcd-kube01                      1/1     Running   0          71m
    kube-apiserver-kube01            1/1     Running   0          71m
    kube-controller-manager-kube01   1/1     Running   0          71m
    kube-proxy-7c8f5                 1/1     Running   0          71m
    kube-scheduler-kube01            1/1     Running   0          71m
    


    Installing Flannel

    # Download kube-flannel.yml 
    wget https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
    # Edit the Network parameter in its net-conf.json section so it matches the --pod-network-cidr given to kubeadm init; here that is 172.16.0.0/16
    vi kube-flannel.yml 
    # Install
    kubectl apply -f kube-flannel.yml 
    # Output
    podsecuritypolicy.policy/psp.flannel.unprivileged created
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.apps/kube-flannel-ds-amd64 created
    daemonset.apps/kube-flannel-ds-arm64 created
    daemonset.apps/kube-flannel-ds-arm created
    daemonset.apps/kube-flannel-ds-ppc64le created
    daemonset.apps/kube-flannel-ds-s390x created
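
    For reference, the Network edit mentioned above can also be scripted; a sketch using sed, assuming your copy of kube-flannel.yml still ships with Flannel's default 10.244.0.0/16 (verify first):

    sed -i 's#"Network": "10.244.0.0/16"#"Network": "172.16.0.0/16"#' kube-flannel.yml
    # confirm the result
    grep '"Network"' kube-flannel.yml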
    

    The flannel container image is then downloaded and started in the background. After a short wait, check the flannel network information

    more /run/flannel/subnet.env 
    FLANNEL_NETWORK=172.16.0.0/16
    FLANNEL_SUBNET=172.16.0.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=true
    

    Check the flannel CNI configuration

    more /etc/cni/net.d/10-flannel.conflist 
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
    

    List the pods again; the newly added flannel pod is visible

    kube-flannel-ds-amd64-kkxlm      1/1     Running   0          3m5s
    

    Check a pod's logs

    kubectl logs coredns-6955765f44-7dnqv -n kube-system
    .:53
    [INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
    CoreDNS-1.6.5
    linux/amd64, go1.13.4, c2fd1b2
    

    List the nodes; at this point there is only the master

    kubectl get nodes
    NAME     STATUS   ROLES    AGE   VERSION
    kube01   Ready    master   78m   v1.17.0
    

    Joining Worker Nodes to the Cluster

    Use the command produced earlier by kubeadm init; it needs sudo. Contrary to some tutorials found online, no config files have to be copied from the master; in actual testing, running the command below was enough to join the cluster

    sudo kubeadm join 192.168.11.129:6443 --token f3jgn2.5w8152dpifacihnj --discovery-token-ca-cert-hash sha256:cc1ae32e0924dffa587b5d94b61005ae892db289f1a59f1ef71b45a7eda65ca3
    

    Output

    W1231 10:42:36.665020    6229 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
    [preflight] Running pre-flight checks
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    

    Check the newly joined node from the master

    kubectl get nodes
    # The new node shows NotReady at first
    NAME     STATUS     ROLES    AGE    VERSION
    kube01   Ready      master   105m   v1.17.0
    kube02   NotReady   <none>   10s    v1.17.0
    
    # After a while it becomes Ready
    NAME     STATUS   ROLES    AGE    VERSION
    kube01   Ready    master   107m   v1.17.0
    kube02   Ready    <none>   109s   v1.17.0
    

    Deploying a Test Container

    On the master, create a file nginx-deployment.yaml with the following content

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      selector:
        matchLabels:
          app: nginx
      replicas: 2 # tells deployment to run 2 pods matching the template
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.7.9
            ports:
            - containerPort: 80
    

    If you are in China, nginx:1.7.9 can be replaced with registry.cn-shanghai.aliyuncs.com/jovi/nginx:alpine, which pulls much faster; a sed sketch of the substitution follows.
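
    A sketch of that substitution:

    sed -i 's#nginx:1.7.9#registry.cn-shanghai.aliyuncs.com/jovi/nginx:alpine#' nginx-deployment.yaml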

    Run the deployment

    kubectl apply -f nginx-deployment.yaml
    

    Inspect the deployment and its pods. If a pod never becomes ready, use describe to read its event list and see which step it is stuck on; on some networks pulling the image takes quite a while.

    $ kubectl describe deployment nginx-deployment
    
    $ kubectl get pods
    NAME                               READY   STATUS              RESTARTS   AGE
    nginx-deployment-6dd86d77d-qwlmz   0/1     ContainerCreating   0          78s
    nginx-deployment-6dd86d77d-xk294   0/1     ContainerCreating   0          78s
    
    $ kubectl describe pod nginx-deployment-6dd86d77d-qwlmz
    

    Once describe reveals the pod's IP, curl http://IP from the master shows the nginx welcome page.
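
    The pod IP can also be read directly with kubectl instead of describe; a quick sketch (the IP will be inside the pod CIDR, 172.16.0.0/16 here):

    kubectl get pods -o wide
    # then, substituting the IP shown in the output
    curl http://<pod-ip>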

    Pod Network Connectivity Between Cluster Nodes

    Under K8s 1.17, a default install already lets the master ping pod IPs on the worker nodes; there is no connectivity problem.

    The problem exists under K8s 1.14. Most write-ups found through Baidu use the first method below, which is actually not the best option.

    Before k8s 1.12, pod-to-pod access across nodes was made to work by default with the following two steps (see the discussion at https://github.com/coreos/flannel/issues/699); a sketch of both follows the list.

    1. On the node, edit /etc/sysctl.conf to set net.ipv4.ip_forward=1 (uncommenting the line if needed), then apply it with sudo sysctl -p
    2. On the node, run sudo iptables --policy FORWARD ACCEPT
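
    A sketch of those two steps (the sed pattern assumes the ip_forward line is present, commented out or not):

    sudo sed -i 's/^#\?net.ipv4.ip_forward=.*/net.ipv4.ip_forward=1/' /etc/sysctl.conf
    sudo sysctl -p
    sudo iptables --policy FORWARD ACCEPT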

    After this, pod IPs on that node can be pinged from the master or from other nodes

    In 1.13 and later, k8s changed its network policy because this approach introduces security problems; the relevant discussions are at
    https://github.com/kubernetes/kubernetes/issues/40182 以及 https://github.com/moby/moby/pull/28257

    To restore connectivity, use rules specific to the cni0 interface instead, by running

    sudo iptables -A FORWARD -i cni0 -j ACCEPT
    sudo iptables -A FORWARD -o cni0 -j ACCEPT
    

    After that, pod IPs on this node can be pinged again. Under 1.14 there is no built-in way to make this persistent, so add the rules to a script executed at boot; one possible shape is sketched below.
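
    A sketch of one way to persist the rules, as a small systemd unit (the unit name cni0-forward.service is made up for this example):

    cat <<'EOF' | sudo tee /etc/systemd/system/cni0-forward.service
    [Unit]
    Description=Allow forwarding on cni0
    After=network.target

    [Service]
    Type=oneshot
    ExecStart=/sbin/iptables -A FORWARD -i cni0 -j ACCEPT
    ExecStart=/sbin/iptables -A FORWARD -o cni0 -j ACCEPT

    [Install]
    WantedBy=multi-user.target
    EOF
    sudo systemctl enable cni0-forward.service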

    Node Maintenance

    Node maintenance covers adding and deleting nodes, cordoning and uncordoning them, and so on

    Adding a New Node

    To add a node, run the kubeadm join command on the node host. If the token has expired, obtain a fresh token and the hash on the master as follows

    # List the currently valid tokens
    $ kubeadm token list
    TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
    pn2yvw.h2ffrw5goe0y8hoy   3h          2020-01-08T11:41:49Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
    
    # Run the following to get the sha256 hash; take the part after the equals sign
    $ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex 
    (stdin)= 165d0f5e60f569e8fbf558f61b8b3e823023cdba4e3d95aa55cc5b6e7a082841
    

    Assemble the kubeadm join command from the results above (the two values come from the two commands respectively) and run it on the node host

    sudo kubeadm join 192.168.11.129:6443 --token pn2yvw.h2ffrw5goe0y8hoy --discovery-token-ca-cert-hash sha256:165d0f5e60f569e8fbf558f61b8b3e823023cdba4e3d95aa55cc5b6e7a082841
    

    If there is no usable token, or they have all expired, create one with

    kubeadm token create --print-join-command
    

    Cordoning / Uncordoning a Node

    Run on the master

    # Cordon and drain a node, making it unschedulable
    kubectl drain <node-name>
    
    # Uncordon a node, making it schedulable again
    kubectl uncordon <node-name>
    

    Deleting a Node

    On the master, drain the node first, then remove it with kubectl delete node <node-name>; an example follows.
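
    For example, to remove kube02 (--ignore-daemonsets is typically needed because kube-proxy and flannel run as DaemonSets):

    kubectl drain kube02 --ignore-daemonsets
    kubectl delete node kube02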

    Shutting Down and Restarting the Cluster

    To shut the cluster down, simply running halt -p on every node works; if you care about the order of operations, see the sketch after this list:

    1. Remove the pods
    2. On the master, drain all the nodes
    3. On each node, stop the kubelet service, stop the docker service, then power off
    4. On the master, stop the kubelet service, stop the docker service, then power off
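
    A sketch of the ordered variant, assuming the workers kube02/kube03 and master kube01 from this setup:

    # on the master
    kubectl drain kube02 --ignore-daemonsets
    kubectl drain kube03 --ignore-daemonsets
    # on each worker, then finally on the master itself
    sudo systemctl stop kubelet
    sudo systemctl stop docker
    sudo halt -p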

    Because K8s treats all pods as ephemeral, the whole cluster should be seen as a group of services with no attachment to persistent data. So when shutting down, remove all pods (other than the system ones), and after the cluster comes back up let the deployment scripts recreate them.

    References

    https://kubernetes.io/docs/setup/production-environment/container-runtimes/
    http://pwittrock.github.io/docs/admin/kubeadm/
    https://github.com/coreos/flannel
    https://www.latelee.org/kubernetes/k8s-deploy-1.17.0-detail.html
    https://blog.csdn.net/liukuan73/article/details/83116271
