  • Kubernetes 1.17.3 binary installation

Introduction to K8s

Kubernetes (k8s for short) is a container cluster management system that Google open-sourced in June 2014. Written in Go, it manages containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and efficient: it provides resource scheduling, deployment management, service discovery, scaling, monitoring and maintenance as a complete feature set, aiming to be the platform for automatically deploying, scaling and running application containers across clusters of hosts. It supports a range of container tools, including Docker.


    # kubernetes

This document is the first part of the Kubernetes 1.17.3 binary installation series.

# Note: all operations in this part are executed on every node

Environment used in this document

| Item | Value |
| -------------------- | ------------- |
| Kubernetes version | v1.17.3 |
| Cluster Pod CIDR | 10.244.0.0/16 |
| Cluster Service CIDR | 10.250.0.0/24 |
| kubernetes service | 10.250.0.1 |
| dns service | 10.250.0.10 |

This document describes installing and deploying Kubernetes from binaries on bare-metal machines.

It targets Kubernetes version 1.17.3.

The main contents are environment preparation, installing and configuring docker, upgrading the kernel, and tuning system parameters.

# Environment preparation

| Hostname | IP | Components | Spec |
| ----------- | -------------- | ------------------------------------------------------------ | ------ |
| k8s-master1 | 200.200.100.71 | etcd, apiserver, controller-manager, scheduler, kube-proxy, kubelet, docker-ce | 2 cores, 4 GB |
| k8s-master2 | 200.200.100.72 | etcd, apiserver, controller-manager, scheduler, kube-proxy, kubelet, docker-ce | 2 cores, 4 GB |
| k8s-master3 | 200.200.100.73 | etcd, apiserver, controller-manager, scheduler, kube-proxy, kubelet, docker-ce | 2 cores, 4 GB |
| k8s-node1 | 200.200.100.74 | kube-proxy, kubelet, docker-ce | 2 cores, 4 GB |

Unless a step explicitly says otherwise, the following operations are executed on every node.

### ***Environment variables***

These environment variables hold the VIP and the hostnames and IP addresses of the master (control) nodes and the worker node.

    ```
    export VIP=200.200.100.70
    export MASTER1_HOSTNAME=k8s-master1
    export MASTER1_IP=200.200.100.71
    export MASTER2_HOSTNAME=k8s-master2
    export MASTER2_IP=200.200.100.72
    export MASTER3_HOSTNAME=k8s-master3
    export MASTER3_IP=200.200.100.73
    export NODE1_HOSTNAME=k8s-node1
    export NODE1_IP=200.200.100.74
    ```

### ***Passwordless SSH / time sync / hostname changes***

· Passwordless SSH

· NTP time synchronization

· Hostname changes

· Environment variable generation

· Hosts-file resolution

Note that all key distribution and later file copying is done from master1, because master1 has passwordless SSH to the other nodes.

Every machine in the k8s cluster needs hosts-file resolution.

    ```
    cat >> /etc/hosts << EOF
    200.200.100.71 k8s-master1
    200.200.100.72 k8s-master2
200.200.100.73 k8s-master3
    200.200.100.74 k8s-node1
    EOF
    ```

Batch passwordless SSH setup

# Before setting up passwordless SSH, make sure the hostnames are resolvable via the hosts entries above

    ```
    yum -y install expect
    ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
    for i in 200.200.100.71 200.200.100.72 200.200.100.73 200.200.100.74 k8s-master1 k8s-master2 k8s-master3 k8s-node1;do
expect -c "
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
expect {
\"*yes/no*\" {send \"yes\r\"; exp_continue}
\"*password*\" {send \"Bscadmin@8037\r\"; exp_continue}
\"*Password*\" {send \"Bscadmin@8037\r\";}
}"
    done
    ```

Make sure every machine has NTP configured; otherwise you will see not only apiserver communication problems under HA but all kinds of strange issues.

    ```
    yum -y install ntp
    systemctl enable ntpd
    systemctl start ntpd
    ntpdate -u time1.aliyun.com
    hwclock --systohc
    timedatectl set-timezone Asia/Shanghai
    ```

Batch hostname changes

    ```
    ssh 200.200.100.71 "hostnamectl set-hostname k8s-master1" &&
    ssh 200.200.100.72 "hostnamectl set-hostname k8s-master2" &&
    ssh 200.200.100.73 "hostnamectl set-hostname k8s-master3" &&
    ssh 200.200.100.74 "hostnamectl set-hostname k8s-node1"
    ```

After this finishes, run bash to refresh the shell.

Test connectivity

    ```
    for i in k8s-master1 k8s-master2 k8s-master3 k8s-node1 ; do ssh root@$i "hostname";done
    ```


### Upgrade the kernel

Install perl (it is used during the kernel installation steps below)

    yum -y install perl

Download the GPG key and the yum repository

Import the key

    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

Install the EL7 elrepo repository package

    yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm

Install the ml (mainline) kernel; the 5.x kernel package is named kernel-ml

    yum --enablerepo="elrepo-kernel" -y install kernel-ml.x86_64

Appendix: installing the 4.4 long-term (kernel-lt) kernel instead

    ```
    yum --enablerepo="elrepo-kernel" -y install kernel-lt.x86_64
    ```

Then configure the system to boot from the new kernel

    grub2-set-default 0

Then regenerate grub.cfg so the machine boots with the new kernel

    grub2-mkconfig -o /boot/grub2/grub.cfg

### Tune the kernel parameters

    ```shell
    cat > /etc/sysctl.conf <<EOF
    net.bridge.bridge-nf-call-iptables=1
    net.bridge.bridge-nf-call-ip6tables=1
    net.ipv4.ip_forward=1
    vm.swappiness=0
    vm.overcommit_memory=1
    vm.panic_on_oom=0
    fs.inotify.max_user_instances=8192
    fs.inotify.max_user_watches=1048576
    fs.file-max=52706963
    fs.nr_open=52706963
    net.ipv6.conf.all.disable_ipv6=1
    net.netfilter.nf_conntrack_max=2310720
    EOF
    ```

Then run sysctl -p to make the configured kernel parameters take effect

    sysctl -p
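If sysctl -p complains that the net.bridge.* keys do not exist, the br_netfilter module is most likely not loaded yet. A small sketch of the usual fix (assuming CentOS 7 with systemd, so /etc/modules-load.d is honored at boot):

```shell
# Load the bridge netfilter module now and make it load on every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl -p
```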

### Disable firewalld, SELinux, swap and NetworkManager

    ```shell
    systemctl stop firewalld
    systemctl disable firewalld
    iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
    iptables -P FORWARD ACCEPT
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
    setenforce 0
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
    systemctl stop NetworkManager
    systemctl disable NetworkManager
    ```

### Adjust resource limits

    ```shell
    echo "* soft nofile 65536" >> /etc/security/limits.conf
    echo "* hard nofile 65536" >> /etc/security/limits.conf
    echo "* soft nproc 65536" >> /etc/security/limits.conf
    echo "* hard nproc 65536" >> /etc/security/limits.conf
    echo "* soft memlock unlimited" >> /etc/security/limits.conf
    echo "* hard memlock unlimited" >> /etc/security/limits.conf
    ```

### Install commonly used packages

    ```shell
yum -y install bridge-utils chrony ipvsadm ipset sysstat conntrack libseccomp wget tcpdump screen vim nfs-utils bind-utils socat telnet sshpass net-tools lrzsz yum-utils device-mapper-persistent-data lvm2 tree nc lsof strace nmon iptraf iftop rpcbind mlocate
    ```

### Load the IPVS kernel modules

    ```shell
    cat > /etc/sysconfig/modules/ipvs.modules <<EOF

    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4 || modprobe -- nf_conntrack   # on kernels >= 4.19 nf_conntrack_ipv4 is merged into nf_conntrack

    EOF
    chmod 755 /etc/sysconfig/modules/ipvs.modules
    bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack
    ```

### Install docker-ce:

Add the yum repository

    tee /etc/yum.repos.d/docker-ce.repo <<-'EOF'
    [aliyun-docker-ce]
    name=aliyun-docker-ce
    baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/
enabled=1
    gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
    EOF

Install docker-ce

    yum -y install docker-ce

Restart docker and enable it at boot

    systemctl daemon-reload
    systemctl restart docker
    systemctl enable docker
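Optionally, many setups also drop a minimal /etc/docker/daemon.json at this point. The sketch below is only an example, not something this guide requires: the cgroup driver and the registry mirror URL are assumptions you should adapt, and whatever cgroup driver you choose must later match the kubelet's.

```shell
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"}
}
EOF
# apply the new daemon configuration
systemctl restart docker
```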

### Then reboot the system and verify that the kernel upgrade succeeded
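A minimal way to do that check (the reboot will drop your SSH session):

```shell
reboot
# after the node comes back up:
uname -r    # should now print the newly installed kernel version (5.x for kernel-ml)
```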

# Download the tools

### Download the cfssl and cfssljson binaries

This step downloads the commands needed to create the certificates and configuration files.

**PKI basics**

**What is PKI?**

Public Key Infrastructure (PKI) is a set of hardware, software, participants, management policies and processes whose purpose is to create, manage, distribute, use, store and revoke digital certificates. (Excerpted from Wikipedia)

PKI relies on a CA (a trusted certificate authority) to bind a user's identity to a public key, which guarantees that each identity is unique. The binding is established through registration and issuance processes and, depending on the assurance level, may be carried out by the CA, by software, or under human supervision. The PKI role that establishes this binding is the RA (Registration Authority); it guarantees the link between public key and identity, providing non-repudiation and tamper resistance. In Microsoft's PKI the RA is also called a CA, and nowadays most people simply say CA.

**PKI components**

From the above we can identify the main PKI components: users (the people or organizations using PKI), the certificate authority (the CA that issues certificates), and the repository (the database that stores certificates).

**Asymmetric encryption**

All keys mentioned in this document are asymmetric keys. They come in public/private pairs, and whatever one key encrypts can only be decrypted with the other, not even with the key that encrypted it: data encrypted with the public key is decrypted with the private key, and data encrypted with the private key is decrypted with the public key.

**Certificate Signing Request (CSR)**

A CSR is the request file used when applying to a CA for a digital certificate. The CSR itself is not a certificate; it is the application sent to the authority to obtain a signed certificate. When a certificate issued by the CA expires, the same CSR can be reused to request a new certificate while the key stays the same.

**Digital signatures**

A digital signature is "asymmetric encryption plus a digest algorithm". Its purpose is not confidentiality but non-repudiation and tamper detection. The core idea: when A sends data to B, A first computes a fingerprint (digest) of the data, then encrypts the fingerprint with A's private key; the encrypted fingerprint is A's signature. When B receives the data and the signature, B computes the fingerprint with the same digest algorithm, decrypts the signature with A's public key, and compares the two fingerprints. If they match, the data was not tampered with and really came from A. If C wants to tamper with the data A sent to B, the fingerprint changes, so C would have to forge a matching signature; without A's private key that is impossible, and if C signs with its own private key, B cannot decrypt it with A's public key. (From the internet)
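To make the sign-and-verify flow concrete, here is a small illustrative sketch using openssl on the command line (the file names such as alice.key and data.txt are made up for the example and are not part of the cluster setup):

```shell
# Alice's key pair: alice.key is the private key, alice.pub the public key
openssl genrsa -out alice.key 2048
openssl rsa -in alice.key -pubout -out alice.pub

# Alice signs the SHA-256 digest of data.txt with her private key
openssl dgst -sha256 -sign alice.key -out data.sig data.txt

# Bob verifies the signature against the data with Alice's public key
openssl dgst -sha256 -verify alice.pub -signature data.sig data.txt   # prints "Verified OK" if untampered
```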

**Certificate formats**

There are many digital certificate formats, such as .pem, .cer and .crt.

**PKI workflow**

In a typical PKI workflow (adapted from a diagram found online) the CA is the authority that issues certificates. A certificate subscriber first applies for a certificate: they register with the registration authority, stating who they are and which organization they belong to, and a CSR is then forwarded to the CA. Once the CA's checks pass, the certificate is issued (with the public key kept on file at the CA), and the subscriber deploys it on the server.

When a user visits our web server, the client requests the certificate; the server sends it to the client, and the client verifies its validity. How does the client check whether the certificate has been revoked or expired? The CA publishes revoked and expired certificates on a CRL server, which chains all of them together, so its performance is poor. OCSP was introduced so that a single certificate can be queried; the browser can query the OCSP responder directly, but that is still not very efficient, so web servers such as nginx provide an OCSP (stapling) switch: when it is enabled, nginx itself queries the OCSP responder, and large numbers of clients can learn whether the certificate is valid directly from the web server.

**About CFSSL**

**What is CFSSL?**

cfssl is a PKI/TLS toolkit written in Go and open-sourced by CloudFlare. The main programs are cfssl, the CFSSL command-line tool, and cfssljson, which takes the JSON output from cfssl and writes the certificates, keys, CSRs and bundles to files.

### Install CFSSL

The download may take a while.

    ```
    wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
    wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
    wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
    chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
    mv cfssl_linux-amd64 /usr/local/bin/cfssl
    mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
    mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
    cfssl version
    ```

The version should be 1.2.0 or later.

Output:

    ```
    Version: 1.2.0
    Revision: dev
    Runtime: go1.6
    ```

# Generate certificates

### Note: all operations in this part are executed on the master1 node

The main task is generating the various certificates the kubernetes cluster needs.

There are two parts: generating the etcd certificates and generating the certificates for the kubernetes components.

## Generate the etcd certificates

1 Create the directories for generating and temporarily storing the certificates

    ```
    mkdir /root/ssl/{etcd,kubernetes} -p
    ```

Change into the etcd directory

    ```
    cd /root/ssl/etcd/
    ```

2 Create the JSON config file used to generate the CA

This CA is used only for the etcd certificates

    ```
    cat << EOF | tee ca-config.json
    {
    "signing": {
    "default": {
    "expiry": "87600h"
    },
    "profiles": {
    "etcd": {
    "expiry": "87600h",
    "usages": [
    "signing",
    "key encipherment",
    "server auth",
    "client auth"
    ]
    }
    }
    }
    }
    EOF
    ```

server auth means a client can use this CA to verify certificates presented by servers

client auth means a server can use this CA to verify certificates presented by clients

3 Create the JSON config file for the CA certificate signing request (CSR)

    ```
    cat << EOF | tee ca-csr.json
    {
    "CN": "etcd CA",
    "key": {
    "algo": "rsa",
    "size": 2048
    },
    "names": [
    {
    "C": "CN",
    "L": "Beijing",
    "ST": "Beijing"
    }
    ]
    }
    EOF
    ```

4 Generate the CA certificate and private key

    ```
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca
    ```

Output

    ```
    2019/10/12 19:35:14 [INFO] generating a new CA key and certificate from CSR
    2019/10/12 19:35:14 [INFO] generate received request
    2019/10/12 19:35:14 [INFO] received CSR
    2019/10/12 19:35:14 [INFO] generating key: rsa-2048
    2019/10/12 19:35:14 [INFO] encoded CSR
    2019/10/12 19:35:14 [INFO] signed certificate with serial number 76399392328271693420688681207409409662642174207
    ```

View the generated CA certificate and private key

    ```
    ls ca*.pem
    ```

Output

    ```
    ca-key.pem ca.pem
    ```

5 Create the etcd certificate signing request

    ```
    cat << EOF | tee etcd-csr.json
    {
    "CN": "etcd",
    "hosts": [
    "200.200.100.71",
    "200.200.100.72",
    "200.200.100.73"
    ],
    "key": {
    "algo": "rsa",
    "size": 2048
    },
    "names": [
    {
    "C": "CN",
    "L": "Beijing",
    "ST": "Beijing"
    }
    ]
    }
    EOF
    ```

6 Generate the etcd certificate and private key

    ```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=etcd etcd-csr.json | cfssljson -bare etcd
    ```

Output

    ```
    2019/10/12 19:39:16 [INFO] generate received request
    2019/10/12 19:39:16 [INFO] received CSR
    2019/10/12 19:39:16 [INFO] generating key: rsa-2048
    2019/10/12 19:39:17 [INFO] encoded CSR
    2019/10/12 19:39:17 [INFO] signed certificate with serial number 276878925110307603699002043209122885766807800060
    2019/10/12 19:39:17 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
    ```

7 View all generated etcd certificates

    ```
    ls | grep pem
    ```

The 4 output files

    ```
    ca-key.pem
    ca.pem
    etcd-key.pem
    etcd.pem
    ```
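If you want to double-check what was actually issued (expiry, usages, hosts), the cfssl-certinfo tool installed earlier can dump a certificate as JSON, for example:

```shell
cfssl-certinfo -cert etcd.pem
```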

## Generate the kubernetes component certificates

Switch to the directory where the kubernetes component certificates are requested and stored

    ```
    cd /root/ssl/kubernetes/
    ```

### Create a new CA config file

It is used for the kubernetes cluster components and the admin role

    ```
    cat > ca-config.json <<EOF
    {
    "signing": {
    "default": {
    "expiry": "8760h"
    },
    "profiles": {
    "kubernetes": {
    "expiry": "8760h",
    "usages": [
    "signing",
    "key encipherment",
    "server auth",
    "client auth"
    ]
    }
    }
    }
    }
    EOF
    ```

Create the CA certificate signing request file

    ```
    cat > ca-csr.json <<EOF
    {
    "CN": "Kubernetes",
    "hosts": [
    "127.0.0.1",
    "200.200.100.71",
    "200.200.100.72",
    "200.200.100.73",
    "200.200.100.70"
    ],
    "key": {
    "algo": "rsa",
    "size": 2048
    },
    "names": [
    {
    "C": "China",
    "L": "Beijing",
    "O": "Kubernetes",
    "OU": "Beijing",
    "ST": "Beijing"
    }
    ]
    }
    EOF
    ```

Generate the CA certificate and private key

    ```
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca
    ```

Output

    ```
    2019/10/12 19:56:26 [INFO] generating a new CA key and certificate from CSR
    2019/10/12 19:56:26 [INFO] generate received request
    2019/10/12 19:56:26 [INFO] received CSR
    2019/10/12 19:56:26 [INFO] generating key: rsa-2048
    2019/10/12 19:56:26 [INFO] encoded CSR
    2019/10/12 19:56:26 [INFO] signed certificate with serial number 679742542757179200541008226092035525850208663173
    ```

View the created certificate and private key

    ```
    ls ca*.pem
    ```

Output files

    ```
    ca-key.pem ca.pem
    ```

### Client and server certificates

Create the admin client certificate signing request file

    ```
    cat > admin-csr.json <<EOF
    {
    "CN": "admin",
    "hosts": [],
    "key": {
    "algo": "rsa",
    "size": 2048
    },
    "names": [
    {
    "C": "China",
    "L": "Beijing",
    "O": "system:masters",
    "OU": "Kubernetes",
    "ST": "Beijing"
    }
    ]
    }
    EOF
    ```

Create the admin client certificate and private key

    ```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
    ```

Output

    ```
    2019/10/12 19:59:38 [INFO] generate received request
    2019/10/12 19:59:38 [INFO] received CSR
    2019/10/12 19:59:38 [INFO] generating key: rsa-2048
    2019/10/12 19:59:38 [INFO] encoded CSR
    2019/10/12 19:59:38 [INFO] signed certificate with serial number 514625224786356937263551808946632861542829130401
    2019/10/12 19:59:38 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
    ```

View the generated files

    ```
    ls admin*.pem
    ```

    ```
    admin-key.pem admin.pem
    ```

### Generate the kubelet client certificates

Kubernetes uses a special-purpose authorization mode, called the Node Authorizer, to authorize API requests coming from kubelets.

To be authorized by the Node Authorizer, a kubelet must use a credential whose identity is system:node:<NodeName>, proving that it belongs to the system:nodes group.

This section creates a certificate and private key for every node (including the master nodes).

Create the certificate signing request file for the master1 node

    ```
    cat << EOF | tee k8s-master1-csr.json
    {
    "CN": "system:node:${MASTER1_HOSTNAME}",
    "key": {
    "algo": "rsa",
    "size": 2048
    },
    "names": [
    {
    "C": "China",
    "L": "Beijing",
    "O": "system:nodes",
    "OU": "Kubernetes",
    "ST": "Beijing"
    }
    ]
    }
    EOF
    ```

Generate the certificate and private key for the master1 node

    ```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${MASTER1_HOSTNAME},${MASTER1_IP} \
-profile=kubernetes \
k8s-master1-csr.json | cfssljson -bare k8s-master1
    ```

Output

    ```
    2019/10/12 20:08:33 [INFO] generate received request
    2019/10/12 20:08:33 [INFO] received CSR
    2019/10/12 20:08:33 [INFO] generating key: rsa-2048

    2019/10/12 20:08:33 [INFO] encoded CSR
    2019/10/12 20:08:33 [INFO] signed certificate with serial number 340503546795644080420594727795505971193705840974
    ```

Output files

    ```
    ls k8s-master*.pem
    ```

    ```
    k8s-master1-key.pem k8s-master1.pem
    ```
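To confirm that the identity baked into the kubelet certificate matches what the Node Authorizer expects (CN system:node:<hostname>, organization system:nodes), a quick check with openssl looks like this:

```shell
openssl x509 -noout -subject -in k8s-master1.pem
# expected output (roughly): subject= ... O=system:nodes ... CN=system:node:k8s-master1
```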

Create the certificate signing request file for the master2 node

    ```
    cat << EOF | tee k8s-master2-csr.json
    {
    "CN": "system:node:${MASTER2_HOSTNAME}",
    "key": {
    "algo": "rsa",
    "size": 2048
    },
    "names": [
    {
    "C": "China",
    "L": "Beijing",
    "O": "system:nodes",
    "OU": "Kubernetes",
    "ST": "Beijing"
    }
    ]
    }
    EOF
    ```

Generate the certificate and private key for the master2 node

    ```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${MASTER2_HOSTNAME},${MASTER2_IP} \
-profile=kubernetes \
k8s-master2-csr.json | cfssljson -bare k8s-master2
    ```

Output

    ```
    2019/10/12 20:08:33 [INFO] generate received request
    2019/10/12 20:08:33 [INFO] received CSR
    2019/10/12 20:08:33 [INFO] generating key: rsa-2048

    2019/10/12 20:08:33 [INFO] encoded CSR
    2019/10/12 20:08:33 [INFO] signed certificate with serial number 340503546795644080420594727795505971193705840974
    ```

Output files

    ```
    ls k8s-master*.pem
    ```

    ```
    k8s-master1-key.pem k8s-master1.pem k8s-master2-key.pem k8s-master2.pem
    ```

Create the certificate signing request file for the master3 node

    ```
    cat << EOF | tee k8s-master3-csr.json
    {
    "CN": "system:node:${MASTER3_HOSTNAME}",
    "key": {
    "algo": "rsa",
    "size": 2048
    },
    "names": [
    {
    "C": "China",
    "L": "Beijing",
    "O": "system:nodes",
    "OU": "Kubernetes",
    "ST": "Beijing"
    }
    ]
    }
    EOF
    ```

Generate the certificate and private key for the master3 node

    ```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${MASTER3_HOSTNAME},${MASTER3_IP} \
-profile=kubernetes \
k8s-master3-csr.json | cfssljson -bare k8s-master3
    ```

Output

    ```
    2019/10/12 20:11:22 [INFO] generate received request
    2019/10/12 20:11:22 [INFO] received CSR
    2019/10/12 20:11:22 [INFO] generating key: rsa-2048
    2019/10/12 20:11:22 [INFO] encoded CSR
    2019/10/12 20:11:22 [INFO] signed certificate with serial number 329201759031912279536498320815194792351902510021
    ```

Output files

    ```
    ls k8s-master3*.pem
    ```

    ```
    k8s-master3-key.pem k8s-master3.pem
    ```

Create the certificate signing request file for the k8s-node1 node

    ```
    cat << EOF | tee k8s-node1-csr.json
    {
    "CN": "system:node:${NODE1_HOSTNAME}",
    "key": {
    "algo": "rsa",
    "size": 2048
    },
    "names": [
    {
    "C": "China",
    "L": "Beijing",
    "O": "system:nodes",
    "OU": "Kubernetes",
    "ST": "Beijing"
    }
    ]
    }
    EOF
    ```

Generate the certificate and private key for the k8s-node1 node

    ```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${NODE1_HOSTNAME},${NODE1_IP} \
-profile=kubernetes \
k8s-node1-csr.json | cfssljson -bare k8s-node1
    ```

Output

    ```
    2019/10/12 20:16:27 [INFO] generate received request
    2019/10/12 20:16:27 [INFO] received CSR
    2019/10/12 20:16:27 [INFO] generating key: rsa-2048
    2019/10/12 20:16:27 [INFO] encoded CSR
    2019/10/12 20:16:27 [INFO] signed certificate with serial number 11529605845303364851563251013549393798169113866
    ```

Output files

    ```
    ls k8s-node1*.pem
    ```

    ```
    k8s-node1-key.pem k8s-node1.pem
    ```

8 Create the certificates needed by the master components

### Create the kube-controller-manager client certificate

    ```
    cat > kube-controller-manager-csr.json <<EOF
    {
    "CN": "system:kube-controller-manager",
    "key": {
    "algo": "rsa",
    "size": 2048
    },
    "names": [
    {
    "C": "China",
    "L": "Beijing",
    "O": "system:kube-controller-manager",
    "OU": "Kubernetes",
    "ST": "Beijing"
    }
    ]
    }
    EOF
    ```

Generate the certificate and private key

    ```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
    ```

Output

    ```
    2019/10/12 20:29:06 [INFO] generate received request
    2019/10/12 20:29:06 [INFO] received CSR
    2019/10/12 20:29:06 [INFO] generating key: rsa-2048
    2019/10/12 20:29:06 [INFO] encoded CSR
    2019/10/12 20:29:06 [INFO] signed certificate with serial number 173346030426505912970345315612511532042452194730
    2019/10/12 20:29:06 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
    ```

Output files

    ```
    ls kube-con*.pem
    ```

    ```
    kube-controller-manager-key.pem kube-controller-manager.pem
    ```

### Create the kube-proxy client certificate

    ```
    cat <<EOF |tee kube-proxy-csr.json
    {
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
    "algo": "rsa",
    "size": 2048
    },
    "names": [
    {
    "C": "China",
    "L": "Beijing",
    "O": "system:node-proxier",
    "OU": "Kubernetes",
    "ST": "Beijing"
    }
    ]
    }
    EOF
    ```

Generate the certificate and private key

    ```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
    ```

Output

    ```
    2019/10/12 20:31:11 [INFO] generate received request
    2019/10/12 20:31:11 [INFO] received CSR
    2019/10/12 20:31:11 [INFO] generating key: rsa-2048
    2019/10/12 20:31:11 [INFO] encoded CSR
    2019/10/12 20:31:11 [INFO] signed certificate with serial number 3973180903081703880688638425637585151040946194
    2019/10/12 20:31:11 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
    ```

Output files

    ```
    ls kube-proxy*.pem
    ```

    ```
    kube-proxy-key.pem kube-proxy.pem
    ```

### Create the kube-scheduler certificate

    ```
    cat <<EOF | tee kube-scheduler-csr.json
    {
    "CN": "system:kube-scheduler",
    "key": {
    "algo": "rsa",
    "size": 2048
    },
    "names": [
    {
    "C": "China",
    "L": "Beijing",
    "O": "system:kube-scheduler",
    "OU": "Kubernetes",
    "ST": "Beijing"
    }
    ]
    }
    EOF
    ```

Generate the certificate

    ```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler
    ```

Output

    ```
    2019/10/12 20:18:57 [INFO] generate received request
    2019/10/12 20:18:57 [INFO] received CSR
    2019/10/12 20:18:57 [INFO] generating key: rsa-2048
    2019/10/12 20:18:57 [INFO] encoded CSR
    2019/10/12 20:18:57 [INFO] signed certificate with serial number 56094122509645103760584094055826646549201635795
    2019/10/12 20:18:57 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
    ```

Output files

    ```
    ls kube-sch*.pem
    ```

    ```
    kube-scheduler-key.pem kube-scheduler.pem
    ```

### Create the kubernetes API Server certificate

To secure authentication between clients and the Kubernetes API, the kubernetes API Server certificate must include the static IP addresses of the masters

These IP addresses come from the environment variables configured above

Create the kubernetes API Server certificate signing request file

    ```
    cat <<EOF | tee kubernetes-csr.json
    {
    "CN": "kubernetes",
    "hosts": [
    "127.0.0.1",
    "200.200.100.71",
    "200.200.100.72",
    "200.200.100.73",
    "200.200.100.70",
    "10.250.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
    ],
    "key": {
    "algo": "rsa",
    "size": 2048
    },
    "names": [
    {
    "C": "China",
    "L": "Beijing",
    "O": "Kubernetes",
    "OU": "Kubernetes",
    "ST": "Beijing"
    }
    ]
    }
    EOF
    ```

Generate the kubernetes API Server certificate and private key

    ```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
    ```

Output

    ```
    2019/10/12 20:23:03 [INFO] generate received request
    2019/10/12 20:23:03 [INFO] received CSR
    2019/10/12 20:23:03 [INFO] generating key: rsa-2048
    2019/10/12 20:23:03 [INFO] encoded CSR
    2019/10/12 20:23:03 [INFO] signed certificate with serial number 319608271292119912072742471756939391576493389087
    ```

Output files

    ```
    ls kubernetes*.pem
    ```

    ```
    kubernetes-key.pem kubernetes.pem
    ```
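Since clients will reach the apiserver through the VIP, the master IPs and the cluster service IP, it is worth confirming that all of those ended up in the certificate's SANs, for example:

```shell
openssl x509 -noout -text -in kubernetes.pem | grep -A1 "Subject Alternative Name"
# should list kubernetes, kubernetes.default..., 127.0.0.1, 200.200.100.70-73 and 10.250.0.1
```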

Service Account certificate

Create the certificate signing request file

    ```
    cat > service-account-csr.json <<EOF
    {
    "CN": "service-accounts",
    "key": {
    "algo": "rsa",
    "size": 2048
    },
    "names": [
    {
    "C": "China",
    "L": "Beijing",
    "O": "Kubernetes",
    "OU": "Kubernetes",
    "ST": "Beijing"
    }
    ]
    }
    EOF
    ```

Generate the certificate and private key

    ```
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
service-account-csr.json | cfssljson -bare service-account
    ```

Output

    ```
    2019/10/12 20:25:31 [INFO] generate received request
    2019/10/12 20:25:31 [INFO] received CSR
    2019/10/12 20:25:31 [INFO] generating key: rsa-2048
    2019/10/12 20:25:31 [INFO] encoded CSR
    2019/10/12 20:25:31 [INFO] signed certificate with serial number 538955391110960009078645942634491132767864895292
    2019/10/12 20:25:31 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
    ```

Output files

    ```
    ls service*.pem
    ```

    ```
    service-account-key.pem service-account.pem
    ```

9 Copy the etcd certificates to the corresponding directories on each node

Create the etcd directories

    ```
    for host in k8s-master1 k8s-master2 k8s-master3;do
    echo "---$host---"
    ssh root@$host "mkdir /usr/local/etcd/{bin,ssl,data,json,src} -p";
    done
    ```

Copy the etcd certificates

    ```

    cd ../etcd/
# Method 1:
    scp etcd-key.pem etcd.pem ca.pem ca-key.pem k8s-master1:/usr/local/etcd/ssl/
    scp etcd-key.pem etcd.pem ca.pem ca-key.pem k8s-master2:/usr/local/etcd/ssl/
    scp etcd-key.pem etcd.pem ca.pem ca-key.pem k8s-master3:/usr/local/etcd/ssl/


# Method 2:
    for host in k8s-master1 k8s-master2 k8s-master3;
    do
    echo "--- $host---"
    scp -r *.pem $host:/usr/local/etcd/ssl/
    done
    ```

# Create and configure the kubernetes kubeconfig files

This is the fourth part of the Kubernetes 1.17.3 binary installation series.

### Note: all operations in this part are executed on the master node

This part focuses on creating the kubeconfig files, which are what Kubernetes clients rely on for authentication and authorization against the API Server.

kubectl is the Kubernetes command-line client. Clusters normally run with TLS authentication enabled, so kubectl (or any other client) must authenticate on every interaction with kube-apiserver. The two common authentication methods are certificates and tokens; this section briefly shows how the kubectl client uses certificate authentication to access the cluster.

With certificate authentication we usually create a kubeconfig file, which organizes information about clusters, users, namespaces and authentication mechanisms. kubectl uses the kubeconfig file to find the information it needs to select a cluster and communicate with its kube-apiserver. By default kubectl looks for the config file under ${HOME}/.kube, but you can also set the KUBECONFIG environment variable or pass --kubeconfig on the command line.

**Configuration details**

| Step | Option | Description |
| ----------------------------- | ------------------------------------------------- | ------------------------------------- |
| 1. Set the cluster information | set-cluster <string> | used by kubectl config to set the cluster entry |
| --certificate-authority | path of the cluster root CA certificate | |
| --embed-certs | embed the --certificate-authority root certificate into the kubeconfig | |
| --server | address (socket) used to reach the cluster | |
| --kubeconfig | path of the kubeconfig file | |
| 2. Set the client credentials | set-credentials <string> | used by kubectl config to set client authentication info |
| --client-certificate | path of the certificate kubectl uses | |
| --client-key | path of the private key kubectl uses | |
| --embed-certs | embed kubectl's certificate and key into the kubeconfig | |
| --kubeconfig | path of the kubeconfig file | |
| 3. Set the context | set-context <string> | used by kubectl config to set context parameters |
| --cluster | which cluster entry to use (from set-cluster) | |
| --user | which client credentials to use (from set-credentials) | |
| --kubeconfig | path of the kubeconfig file | |
| 4. Choose the context to use | use-context <string> | used by kubectl config to select the active context |

Client authentication configuration

This section creates the kubeconfig files for kube-proxy, kube-controller-manager, kube-scheduler and the kubelet.

### kubelet kubeconfig files

To satisfy the Node Authorizer, the client certificate in a kubelet's kubeconfig must match the node name.

The node names again use the environment variables configured in the certificate section.

Create a kubeconfig for every node (all operations on the master node).

The kubeconfig files are generated in the same directory as the kubernetes component certificates from the previous section.

First install kubectl.

Here we install all the required binaries in one go.

    ```
wget --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl"
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl

# Or download everything at once
wget https://dl.k8s.io/v1.17.3/kubernetes-server-linux-amd64.tar.gz

# Unpack (adjust the path/filename if you saved the archive elsewhere)
tar -zxvf /root/kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

# Copy the binaries to the other two master nodes
for host in k8s-master2 k8s-master3; do
echo "---$host---"
scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $host:/usr/local/bin/
done


# Copy the binaries to the node
scp /usr/local/bin/kube{let,-proxy} k8s-node1:/usr/local/bin/
    ```
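A quick sanity check that the client binary landed on the PATH:

```shell
kubectl version --client --short   # expected: Client Version: v1.17.3
```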

    ```
    cd /root/ssl/kubernetes/
    ```

Method without using environment variables

    ```
kubectl config set-cluster kubernetes-training \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${VIP}:8443 \
--kubeconfig=k8s-master1.kubeconfig

kubectl config set-credentials system:node:k8s-master1 \
--client-certificate=k8s-master1.pem \
--client-key=k8s-master1-key.pem \
--embed-certs=true \
--kubeconfig=k8s-master1.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-training \
--user=system:node:k8s-master1 \
--kubeconfig=k8s-master1.kubeconfig

kubectl config use-context default --kubeconfig=k8s-master1.kubeconfig
    ```

Method using environment variables

    ```
kubectl config set-cluster kubernetes-training \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${VIP}:8443 \
--kubeconfig=${MASTER1_HOSTNAME}.kubeconfig

kubectl config set-credentials system:node:${MASTER1_HOSTNAME} \
--client-certificate=${MASTER1_HOSTNAME}.pem \
--client-key=${MASTER1_HOSTNAME}-key.pem \
--embed-certs=true \
--kubeconfig=${MASTER1_HOSTNAME}.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-training \
--user=system:node:${MASTER1_HOSTNAME} \
--kubeconfig=${MASTER1_HOSTNAME}.kubeconfig

kubectl config use-context default --kubeconfig=${MASTER1_HOSTNAME}.kubeconfig
    ```

Output files

    ```
    ls k8s-master*config
    ```

    ```
    k8s-master1.kubeconfig
    ```
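If you want to inspect what went into a generated kubeconfig, kubectl can print it (certificate data is shown as REDACTED by default):

```shell
kubectl config view --kubeconfig=k8s-master1.kubeconfig
```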

    ```
kubectl config set-cluster kubernetes-training \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${VIP}:8443 \
--kubeconfig=k8s-master2.kubeconfig

kubectl config set-credentials system:node:k8s-master2 \
--client-certificate=k8s-master2.pem \
--client-key=k8s-master2-key.pem \
--embed-certs=true \
--kubeconfig=k8s-master2.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-training \
--user=system:node:k8s-master2 \
--kubeconfig=k8s-master2.kubeconfig

kubectl config use-context default --kubeconfig=k8s-master2.kubeconfig
    ```

Output files

    ```
    ls k8s-master*config

    k8s-master1.kubeconfig k8s-master2.kubeconfig
    ```

kubeconfig for the k8s-master3 node

    ```
kubectl config set-cluster kubernetes-training \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${VIP}:8443 \
--kubeconfig=k8s-master3.kubeconfig

kubectl config set-credentials system:node:k8s-master3 \
--client-certificate=k8s-master3.pem \
--client-key=k8s-master3-key.pem \
--embed-certs=true \
--kubeconfig=k8s-master3.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-training \
--user=system:node:k8s-master3 \
--kubeconfig=k8s-master3.kubeconfig

kubectl config use-context default --kubeconfig=k8s-master3.kubeconfig
    ```

Output files

    ```
    ls k8s-master3*config
    ```

    ```
    k8s-master3.kubeconfig
    ```

kubeconfig for the k8s-node1 node

    ```
kubectl config set-cluster kubernetes-training \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${VIP}:8443 \
--kubeconfig=k8s-node1.kubeconfig

kubectl config set-credentials system:node:k8s-node1 \
--client-certificate=k8s-node1.pem \
--client-key=k8s-node1-key.pem \
--embed-certs=true \
--kubeconfig=k8s-node1.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-training \
--user=system:node:k8s-node1 \
--kubeconfig=k8s-node1.kubeconfig

kubectl config use-context default --kubeconfig=k8s-node1.kubeconfig
    ```

Output files

    ```
    ls k8s-node1*config
    ```

    ```
    k8s-node1.kubeconfig
    ```

### kube-proxy kubeconfig

Generate a kubeconfig file for the kube-proxy service

    ```
kubectl config set-cluster kubernetes-training \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${VIP}:8443 \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials system:kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-training \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
    ```

Output files

    ```
    ls kube-proxy*config
    ```

    ```
    kube-proxy.kubeconfig
    ```

### kube-controller-manager kubeconfig

    ```
kubectl config set-cluster kubernetes-training \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${VIP}:8443 \
--kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-training \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
    ```

Output files

    ```
    ls kube-con*config
    ```

    ```
    kube-controller-manager.kubeconfig
    ```

### kube-scheduler kubeconfig

    ```
kubectl config set-cluster kubernetes-training \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${VIP}:8443 \
--kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-training \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
    ```

Output files

    ```
    ls kube-sch*config
    ```

    ```
    kube-scheduler.kubeconfig
    ```

### Admin kubeconfig

    ```
kubectl config set-cluster kubernetes-training \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${VIP}:8443 \
--kubeconfig=admin.kubeconfig

kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=admin.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-training \
--user=admin \
--kubeconfig=admin.kubeconfig

kubectl config use-context default --kubeconfig=admin.kubeconfig
    ```

Output files

    ```
    ls ad*config
    ```

    ```
    admin.kubeconfig
    ```

### Create the data encryption config and key

Kubernetes stores many kinds of data, including cluster state, application configuration and secrets, and it supports encrypting cluster data at rest.

This part creates an encryption key and an encryption config file used to encrypt Kubernetes Secrets.

All operations are performed on the master node.

Encryption key
Generate the encryption key:

    ```
    ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
    ```
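Optionally verify that the key really is 32 random bytes, which is what the aescbc provider (AES-256-CBC) expects:

```shell
echo -n "$ENCRYPTION_KEY" | base64 -d | wc -c   # should print 32
```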

Encryption config file
Generate the encryption config file named encryption-config.yaml:

    ```
    cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
    EOF
    ```

### Distribute the certificate files

Copy the kubelet and kube-proxy kubeconfig files to every worker node:

Create the configuration directories

    ```
for host in k8s-master1 k8s-master2 k8s-master3 k8s-node1 ; do ssh root@$host "mkdir -p \
/opt/cni/bin \
/var/lib/kubelet \
/var/lib/kube-proxy \
/var/lib/kubernetes \
/var/run/kubernetes" ; done
    ```

Copy the admin, kube-controller-manager and kube-scheduler kubeconfig files to each control node:

    ```
# Method 1:
scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml \
kube-controller-manager.kubeconfig kube-scheduler.kubeconfig k8s-master1:/var/lib/kubernetes/
scp admin.kubeconfig k8s-master1:~/

scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml \
kube-controller-manager.kubeconfig kube-scheduler.kubeconfig k8s-master2:/var/lib/kubernetes/


scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml \
kube-controller-manager.kubeconfig kube-scheduler.kubeconfig k8s-master3:/var/lib/kubernetes/

# Method 2:
for NODE in k8s-master1 k8s-master2 k8s-master3; do
echo "-----$NODE------"
scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem \
encryption-config.yaml \
kube-controller-manager.kubeconfig kube-scheduler.kubeconfig $NODE:/var/lib/kubernetes/;
done
    ```

    ```
    scp k8s-master1-key.pem k8s-master1.pem k8s-master1:/var/lib/kubelet/
    scp k8s-master1.kubeconfig k8s-master1:/var/lib/kubelet/kubeconfig
    scp kube-proxy.kubeconfig k8s-master1:/var/lib/kube-proxy/kubeconfig
    scp kube-proxy.pem k8s-master1:/var/lib/kube-proxy/
    scp kube-proxy-key.pem k8s-master1:/var/lib/kube-proxy/
    scp kube-controller-manager-key.pem k8s-master1:/var/lib/kubernetes/kube-controller-manager-key.pem
    scp kube-controller-manager.pem k8s-master1:/var/lib/kubernetes/
    scp kube-scheduler.pem k8s-master1:/var/lib/kubernetes/kube-scheduler.pem
    scp kube-scheduler-key.pem k8s-master1:/var/lib/kubernetes/


    scp k8s-master2-key.pem k8s-master2.pem k8s-master2:/var/lib/kubelet/
    scp k8s-master2.kubeconfig k8s-master2:/var/lib/kubelet/kubeconfig
    scp kube-proxy.kubeconfig k8s-master2:/var/lib/kube-proxy/kubeconfig
    scp kube-proxy-key.pem k8s-master2:/var/lib/kube-proxy/
    scp kube-proxy.pem k8s-master2:/var/lib/kube-proxy/
    scp kube-controller-manager-key.pem k8s-master2:/var/lib/kubernetes/kube-controller-manager-key.pem
    scp kube-controller-manager.pem k8s-master2:/var/lib/kubernetes/
    scp kube-scheduler.pem k8s-master2:/var/lib/kubernetes/kube-scheduler.pem
    scp kube-scheduler-key.pem k8s-master2:/var/lib/kubernetes/

    scp k8s-master3-key.pem k8s-master3.pem k8s-master3:/var/lib/kubelet/
    scp k8s-master3.kubeconfig k8s-master3:/var/lib/kubelet/kubeconfig
    scp kube-proxy.kubeconfig k8s-master3:/var/lib/kube-proxy/kubeconfig
    scp kube-proxy-key.pem k8s-master3:/var/lib/kube-proxy/
    scp kube-proxy.pem k8s-master3:/var/lib/kube-proxy/
    scp kube-controller-manager-key.pem k8s-master3:/var/lib/kubernetes/kube-controller-manager-key.pem
    scp kube-controller-manager.pem k8s-master3:/var/lib/kubernetes/
    scp kube-scheduler.pem k8s-master3:/var/lib/kubernetes/kube-scheduler.pem
    scp kube-scheduler-key.pem k8s-master3:/var/lib/kubernetes/

    scp ca.pem k8s-node1:/var/lib/kubernetes/
    scp k8s-node1-key.pem k8s-node1.pem k8s-node1:/var/lib/kubelet/
    scp k8s-node1.kubeconfig k8s-node1:/var/lib/kubelet/kubeconfig
    scp kube-proxy.kubeconfig k8s-node1:/var/lib/kube-proxy/kubeconfig
    scp kube-proxy-key.pem k8s-node1:/var/lib/kube-proxy/
    scp kube-proxy.pem k8s-node1:/var/lib/kube-proxy/
    ```

The Kubernetes components are stateless; all cluster state is stored in the etcd cluster.


# Deploy the etcd cluster

etcd directory layout

Package extraction directory

/usr/local/etcd/src/

SSL certificate request files

/usr/local/etcd/json

SSL certificate files

/usr/local/etcd/ssl/

Executables

/usr/local/etcd/bin/

Working (data) directory

/usr/local/etcd/data/

The etcd directories were all created at the end of the certificate section, and the certificate files have already been copied into place.

### Download and unpack the etcd binary package

    ```
    cd
    wget https://github.com/etcd-io/etcd/releases/download/v3.4.1/etcd-v3.4.1-linux-amd64.tar.gz
    tar -zxvf etcd-v3.4.1-linux-amd64.tar.gz -C /usr/local/etcd/src/
    ```

Copy the executables

    ```
# Method 1:
scp -p /usr/local/etcd/src/etcd-v3.4.1-linux-amd64/etcd k8s-master1:/usr/local/etcd/bin/
scp -p /usr/local/etcd/src/etcd-v3.4.1-linux-amd64/etcdctl k8s-master1:/usr/local/etcd/bin/

scp -p /usr/local/etcd/src/etcd-v3.4.1-linux-amd64/etcd k8s-master2:/usr/local/etcd/bin/
scp -p /usr/local/etcd/src/etcd-v3.4.1-linux-amd64/etcdctl k8s-master2:/usr/local/etcd/bin/

scp -p /usr/local/etcd/src/etcd-v3.4.1-linux-amd64/etcd k8s-master3:/usr/local/etcd/bin/
scp -p /usr/local/etcd/src/etcd-v3.4.1-linux-amd64/etcdctl k8s-master3:/usr/local/etcd/bin/


# Method 2:
for host in k8s-master1 k8s-master2 k8s-master3; do
echo "----$host----"
scp -p /usr/local/etcd/src/etcd-v3.4.1-linux-amd64/etcd* $host:/usr/local/etcd/bin/;
ssh $host "ln -s /usr/local/etcd/bin/* /usr/bin/";
done
    ```

    ```
Alternatively, copy the binaries directly to /usr/bin; copying them to /usr/local/etcd/bin keeps things easier to manage.
    ```

### Set up the etcd systemd service files

Place an etcd.service file under /etc/systemd/system/

Configure the service file for each master node:

    ```

# Configure etcd on master1

    cat << EOF | tee /etc/systemd/system/etcd.service
    [Unit]
    Description=Etcd Server
    Documentation=https://coreos.com/etcd/docs/latest/
    After=network.target
    After=network-online.target
    Wants=network-online.target

    [Service]
    Type=notify
    Restart=on-failure
    LimitNOFILE=65536
    ExecStart=/usr/local/etcd/bin/etcd \
    --name=etcd00 \
    --data-dir=/usr/local/etcd/data/ \
    --listen-peer-urls=https://$MASTER1_IP:2380 \
    --listen-client-urls=https://$MASTER1_IP:2379,https://127.0.0.1:2379 \
    --advertise-client-urls=https://$MASTER1_IP:2379 \
    --initial-advertise-peer-urls=https://$MASTER1_IP:2380 \
    --initial-cluster=etcd00=https://$MASTER1_IP:2380,etcd01=https://$MASTER2_IP:2380,etcd02=https://$MASTER3_IP:2380 \
    --initial-cluster-token=etcd-cluster \
    --initial-cluster-state=new \
    --cert-file=/usr/local/etcd/ssl/etcd.pem \
    --key-file=/usr/local/etcd/ssl/etcd-key.pem \
    --peer-cert-file=/usr/local/etcd/ssl/etcd.pem \
    --peer-key-file=/usr/local/etcd/ssl/etcd-key.pem \
    --trusted-ca-file=/usr/local/etcd/ssl/ca.pem \
    --peer-trusted-ca-file=/usr/local/etcd/ssl/ca.pem


    [Install]
    WantedBy=multi-user.target
    EOF


# Configure etcd on master2

    cat << EOF | tee /etc/systemd/system/etcd2.service
    [Unit]
    Description=Etcd Server
    Documentation=https://coreos.com/etcd/docs/latest/
    After=network.target
    After=network-online.target
    Wants=network-online.target

    [Service]
    Type=notify
    Restart=on-failure
    LimitNOFILE=65536
    ExecStart=/usr/local/etcd/bin/etcd \
    --name=etcd01 \
    --data-dir=/usr/local/etcd/data/ \
    --listen-peer-urls=https://$MASTER2_IP:2380 \
    --listen-client-urls=https://$MASTER2_IP:2379,https://127.0.0.1:2379 \
    --advertise-client-urls=https://$MASTER2_IP:2379 \
    --initial-advertise-peer-urls=https://$MASTER2_IP:2380 \
    --initial-cluster=etcd00=https://$MASTER1_IP:2380,etcd01=https://$MASTER2_IP:2380,etcd02=https://$MASTER3_IP:2380 \
    --initial-cluster-token=etcd-cluster \
    --initial-cluster-state=new \
    --cert-file=/usr/local/etcd/ssl/etcd.pem \
    --key-file=/usr/local/etcd/ssl/etcd-key.pem \
    --peer-cert-file=/usr/local/etcd/ssl/etcd.pem \
    --peer-key-file=/usr/local/etcd/ssl/etcd-key.pem \
    --trusted-ca-file=/usr/local/etcd/ssl/ca.pem \
    --peer-trusted-ca-file=/usr/local/etcd/ssl/ca.pem

    [Install]
    WantedBy=multi-user.target
    EOF


    scp /etc/systemd/system/etcd2.service k8s-master2:/etc/systemd/system/etcd.service
    rm -rf /etc/systemd/system/etcd2.service


# Configure etcd on master3

    cat << EOF | tee /etc/systemd/system/etcd3.service
    [Unit]
    Description=Etcd Server
    Documentation=https://coreos.com/etcd/docs/latest/
    After=network.target
    After=network-online.target
    Wants=network-online.target

    [Service]
    Type=notify
    Restart=on-failure
    LimitNOFILE=65536
    ExecStart=/usr/local/etcd/bin/etcd \
    --name=etcd02 \
    --data-dir=/usr/local/etcd/data/ \
    --listen-peer-urls=https://$MASTER3_IP:2380 \
    --listen-client-urls=https://$MASTER3_IP:2379,https://127.0.0.1:2379 \
    --advertise-client-urls=https://$MASTER3_IP:2379 \
    --initial-advertise-peer-urls=https://$MASTER3_IP:2380 \
    --initial-cluster=etcd00=https://$MASTER1_IP:2380,etcd01=https://$MASTER2_IP:2380,etcd02=https://$MASTER3_IP:2380 \
    --initial-cluster-token=etcd-cluster \
    --initial-cluster-state=new \
    --cert-file=/usr/local/etcd/ssl/etcd.pem \
    --key-file=/usr/local/etcd/ssl/etcd-key.pem \
    --peer-cert-file=/usr/local/etcd/ssl/etcd.pem \
    --peer-key-file=/usr/local/etcd/ssl/etcd-key.pem \
    --trusted-ca-file=/usr/local/etcd/ssl/ca.pem \
    --peer-trusted-ca-file=/usr/local/etcd/ssl/ca.pem

    [Install]
    WantedBy=multi-user.target
    EOF


    scp /etc/systemd/system/etcd3.service k8s-master3:/etc/systemd/system/etcd.service
    rm -rf /etc/systemd/system/etcd3.service
    ```

| Option | Description |
| --------------------------- | ------------------------------------------------------------ |
| wal | write-ahead log; it records the complete history of data changes, and every modification must be written to the WAL before it is committed |
| data-dir | the node's data directory (node ID, cluster ID, initial cluster configuration, snapshot files, etc.); defaults to the current directory if not set |
| wal-dir | directory for the write-ahead log; if not set, it is written under --data-dir |
| name | node name; when --initial-cluster-state=new, the value of --name must appear in the --initial-cluster list |
| cert-file | path of the TLS certificate used between clients and the server |
| key-file | path of the TLS key used between clients and the server |
| trusted-ca-file | CA certificate that signed the client certificates, used to verify them |
| peer-cert-file | path of the TLS certificate used between peers |
| peer-key-file | path of the TLS key used between peers |
| peer-client-cert-auth | enable peer client certificate verification |
| client-cert-auth | enable client certificate verification |
| listen-peer-urls | addresses used to communicate with the other cluster members |
| initial-advertise-peer-urls | this node's peer URLs, advertised to the rest of the cluster |
| listen-client-urls | local addresses to listen on for client traffic |
| advertise-client-urls | client URLs advertised to the rest of the cluster |
| initial-cluster-token | cluster token; must be identical across the whole cluster |
| initial-cluster | all member nodes of the cluster |
| initial-cluster-state | initial cluster state, new by default |
| auto-compaction-mode | selects one of the (time-based) auto-compaction modes |
| auto-compaction-retention | how much history to retain, e.g. 1 hour |
| max-request-bytes | maximum client request size the server will accept |
| quota-backend-bytes | raise an alarm when the backend size exceeds the given quota |
| heartbeat-interval | heartbeat interval, in milliseconds |
| election-timeout | election timeout, in milliseconds |

### Start the etcd service on all nodes and enable it at boot

    ```
# Method 1 (run on each node separately):

    systemctl daemon-reload && systemctl enable etcd.service && systemctl restart etcd.service &

# Method 2 (run from master1):
    for NODE in k8s-master1 k8s-master2 k8s-master3; do
    echo "--- $NODE ---"
    ssh $NODE "systemctl daemon-reload"
    ssh $NODE "systemctl enable --now etcd" &
    done
    wait
    ```

If any node fails to start, start it manually:

    ```
    systemctl daemon-reload
    systemctl restart etcd.service

    for NODE in k8s-master1 k8s-master2 k8s-master3; do
    echo "--- $NODE ---"
    ssh $NODE "systemctl daemon-reload"
    ssh $NODE "systemctl start etcd" &
    done
    wait
    ```

    for NODE in k8s-master1 k8s-master2 k8s-master3; do
    echo "--- $NODE ---"
    ssh $NODE "systemctl daemon-reload"
    ssh $NODE "systemctl restart etcd" &
    done

### Check the etcd cluster status

Use the IPs of the three nodes

    ```
etcdctl \
--cacert=/usr/local/etcd/ssl/ca.pem \
--cert=/usr/local/etcd/ssl/etcd.pem \
--key=/usr/local/etcd/ssl/etcd-key.pem \
--endpoints="https://200.200.100.71:2379,https://200.200.100.72:2379,https://200.200.100.73:2379" endpoint health
    ```

Output

    ```
    https://200.200.100.71:2379 is healthy: successfully committed proposal: took = 14.100152ms
    https://200.200.100.72:2379 is healthy: successfully committed proposal: took = 29.074303ms
    https://200.200.100.73:2379 is healthy: successfully committed proposal: took = 29.074303ms
    ```

All endpoints reporting healthy means the etcd cluster has been set up successfully.
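For more detail (database size, which member is currently the leader), the same TLS flags work with other etcdctl subcommands, for example:

```shell
etcdctl \
--cacert=/usr/local/etcd/ssl/ca.pem \
--cert=/usr/local/etcd/ssl/etcd.pem \
--key=/usr/local/etcd/ssl/etcd-key.pem \
--endpoints="https://200.200.100.71:2379,https://200.200.100.72:2379,https://200.200.100.73:2379" \
endpoint status -w table
```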

# Deploy keepalived + HAProxy

1. You can adapt the values to your own environment, or use the same ones as here.

2. The NIC name is assumed to be the same on all nodes; if yours differs, adjust the configuration below or rename the [CentOS 7 NIC to eth0](https://i4t.com/3723.html).

3. If the cluster dns or domain changes, kubelet-conf.yml must be modified accordingly.

HA (haproxy + keepalived): skip HA if you only have a single master.

First install haproxy and keepalived on all masters

    ```
    for NODE in k8s-master1 k8s-master2 k8s-master3; do
    echo "--- $NODE---"
    ssh $NODE 'yum install haproxy keepalived -y' &
    done
    ```

After installing, remember to verify (check on every master):

    ```
    for NODE in k8s-master1 k8s-master2 k8s-master3;do
    echo "--- $NODE ---"
    ssh $NODE "rpm -qa|grep haproxy"
    ssh $NODE "rpm -qa|grep keepalived"
    done
    ```

Modify the configuration files on k8s-master1 and distribute them to the other masters

· haproxy configuration changes

    ```
    cat << EOF | tee /etc/haproxy/haproxy.cfg
    global
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon

    defaults
    mode tcp
    log global
    retries 3
    timeout connect 10s
    timeout client 1m
    timeout server 1m

    frontend kubernetes
    bind *:8443
    mode tcp
    option tcplog
    default_backend kubernetes-apiserver

    backend kubernetes-apiserver
    mode tcp
    balance roundrobin
    server k8s-master1 200.200.100.71:6443 check maxconn 2000
    server k8s-master2 200.200.100.72:6443 check maxconn 2000
    server k8s-master3 200.200.100.73:6443 check maxconn 2000
    EOF
    ```

# Add or adjust our master nodes in the backend section at the end; the apiserver's default port is 6443, while the haproxy frontend here listens on 8443 so it does not clash with a local apiserver.

· keepalived configuration changes

    ```
    cat << EOF | tee /etc/keepalived/keepalived.conf
    global_defs {
    router_id LVS_DEVEL
    }

    vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
    fall 10
    timeout 9
    rise 2
    }
    vrrp_instance VI_1 {
state MASTER #change to BACKUP on the standby servers
interface ens18 #change to your own interface
virtual_router_id 51
priority 100 #on the standby servers use a number lower than 100, e.g. 90, 80
advert_int 1
mcast_src_ip 200.200.100.71 #this node's own IP
nopreempt
authentication {
auth_type PASS
auth_pass 1111
}
unicast_peer {
200.200.100.72 #the IPs of the other two master nodes
200.200.100.73
}
virtual_ipaddress {
200.200.100.70 #the virtual VIP, set your own
}
    }
    track_script {
    check_haproxy
    }
    }
    EOF
    ```

In addition, on the other masters keepalived needs the following changes:

    ```
router_id LVS_DEVEL_1   # must be different on each node
state BACKUP            # standby role on the other masters
priority 90             # must be lower than the MASTER's 100, e.g. 90 or 80
mcast_src_ip / unicast_peer   # also adjust to each node's own IP and the other masters' IPs
    ```

#unicast_peer: the IPs of the other master nodes

#virtual_ipaddress: the VIP address, set your own

#interface: the physical NIC

Add the keepalived health-check script

    ```
cat > /etc/keepalived/check_haproxy.sh <<EOF
#!/bin/bash
A=\`ps -C haproxy --no-header | wc -l\`
if [ \$A -eq 0 ];then
systemctl stop keepalived
fi
EOF
    ```

    ```
    chmod +x /etc/keepalived/check_haproxy.sh
    ```

## Note: remember to adjust the VIP address

Distribute the keepalived and haproxy files to all masters

# Distribute the files

    ```
    for NODE in k8s-master1 k8s-master2 k8s-master3; do
    echo "--- $NODE ---"
    scp -r /etc/haproxy/haproxy.cfg $NODE:/etc/haproxy/
    scp -r /etc/keepalived/keepalived.conf $NODE:/etc/keepalived/
    scp -r /etc/keepalived/check_haproxy.sh $NODE:/etc/keepalived/
    done
    ```

Ping the VIP to check that it responds; wait about four or five seconds first for keepalived and haproxy to come up.

ping 200.200.100.70

Here .70 is our floating IP (VIP).

If the VIP does not come up, keepalived is not running: restart keepalived on each node, or check whether the NIC name and IP in /etc/keepalived/keepalived.conf were filled in correctly.
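To see which node currently holds the VIP, check the interface configured in keepalived (ens18 in the example above):

```shell
ip addr show ens18 | grep 200.200.100.70   # the VIP appears on exactly one master at a time
```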

    ```
    for NODE in k8s-master1 k8s-master2 k8s-master3; do
    echo "--- $NODE ---"
    ssh $NODE 'systemctl enable --now haproxy keepalived'
    ssh $NODE 'systemctl restart haproxy keepalived'
    done
    ```

    ```
    systemctl status keepalived
    systemctl status haproxy
    ```

# Deploy the master nodes

### Note: all operations in this part are executed on the master nodes

This part mainly covers installing the Kubernetes master components.

Appendix: download URL for the v1.17.3 package

    ```
    https://dl.k8s.io/v1.17.3/kubernetes-server-linux-amd64.tar.gz
    ```

This part deploys the Kubernetes control-plane services on the control nodes. The services to deploy on each control node include the Kubernetes API Server, Scheduler and Controller Manager.

### Download and install the Kubernetes component binaries

The kubectl binary was already downloaded earlier, so you can skip downloading it again.

    ```
wget --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.17.3/bin/linux/amd64/kubectl"
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl

# Unpack (adjust the path/filename if your archive is named differently)
tar -zxvf /root/kubernetes-server-linux-amd64.tar.gz --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

for host in k8s-master1 k8s-master2 k8s-master3; do
echo "---$host---"
cd /usr/local/bin/
scp kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $host:/usr/local/bin/
done


scp /usr/local/bin/kube{let,-proxy} k8s-node1:/usr/local/bin/

    ```

Enable tab completion for the kubectl command

    ```
    source /usr/share/bash-completion/bash_completion
    source <(kubectl completion bash)
    ```
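To make completion survive new logins, a small optional addition (assuming bash) is:

```shell
echo 'source <(kubectl completion bash)' >> ~/.bashrc
```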

    ```
    cat > /var/lib/kubernetes/audit-policy.yaml <<EOF
    apiVersion: audit.k8s.io/v1beta1
    kind: Policy
    rules:
    # The following requests were manually identified as high-volume and low-risk, so drop them.
    - level: None
    resources:
    - group: ""
    resources:
    - endpoints
    - services
    - services/status
    users:
    - 'system:kube-proxy'
    verbs:
    - watch
    - level: None
    resources:
    - group: ""
    resources:
    - nodes
    - nodes/status
    userGroups:
    - 'system:nodes'
    verbs:
    - get
    - level: None
    namespaces:
    - kube-system
    resources:
    - group: ""
    resources:
    - endpoints
    users:
    - 'system:kube-controller-manager'
    - 'system:kube-scheduler'
    - 'system:serviceaccount:kube-system:endpoint-controller'
    verbs:
    - get
    - update
    - level: None
    resources:
    - group: ""
    resources:
    - namespaces
    - namespaces/status
    - namespaces/finalize
    users:
    - 'system:apiserver'
    verbs:
    - get
    # Don't log HPA fetching metrics.
    - level: None
    resources:
    - group: metrics.k8s.io
    users:
    - 'system:kube-controller-manager'
    verbs:
    - get
    - list
    # Don't log these read-only URLs.
    - level: None
    nonResourceURLs:
    - '/healthz*'
    - /version
    - '/swagger*'
    # Don't log events requests.
    - level: None
    resources:
    - group: ""
    resources:
    - events
    # node and pod status calls from nodes are high-volume and can be large, don't log responses for expected updates from nodes
    - level: Request
    omitStages:
    - RequestReceived
    resources:
    - group: ""
    resources:
    - nodes/status
    - pods/status
    users:
    - kubelet
    - 'system:node-problem-detector'
    - 'system:serviceaccount:kube-system:node-problem-detector'
    verbs:
    - update
    - patch
    - level: Request
    omitStages:
    - RequestReceived
    resources:
    - group: ""
    resources:
    - nodes/status
    - pods/status
    userGroups:
    - 'system:nodes'
    verbs:
    - update
    - patch
    # deletecollection calls can be large, don't log responses for expected namespace deletions
    - level: Request
    omitStages:
    - RequestReceived
    users:
    - 'system:serviceaccount:kube-system:namespace-controller'
    verbs:
    - deletecollection
    # Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
    # so only log at the Metadata level.
    - level: Metadata
    omitStages:
    - RequestReceived
    resources:
    - group: ""
    resources:
    - secrets
    - configmaps
    - group: authentication.k8s.io
    resources:
    - tokenreviews
    # Get repsonses can be large; skip them.
    - level: Request
    omitStages:
    - RequestReceived
    resources:
    - group: ""
    - group: admissionregistration.k8s.io
    - group: apiextensions.k8s.io
    - group: apiregistration.k8s.io
    - group: apps
    - group: authentication.k8s.io
    - group: authorization.k8s.io
    - group: autoscaling
    - group: batch
    - group: certificates.k8s.io
    - group: extensions
    - group: metrics.k8s.io
    - group: networking.k8s.io
    - group: policy
    - group: rbac.authorization.k8s.io
    - group: scheduling.k8s.io
    - group: settings.k8s.io
    - group: storage.k8s.io
    verbs:
    - get
    - list
    - watch
    # Default level for known APIs
    - level: RequestResponse
    omitStages:
    - RequestReceived
    resources:
    - group: ""
    - group: admissionregistration.k8s.io
    - group: apiextensions.k8s.io
    - group: apiregistration.k8s.io
    - group: apps
    - group: authentication.k8s.io
    - group: authorization.k8s.io
    - group: autoscaling
    - group: batch
    - group: certificates.k8s.io
    - group: extensions
    - group: metrics.k8s.io
    - group: networking.k8s.io
    - group: policy
    - group: rbac.authorization.k8s.io
    - group: scheduling.k8s.io
    - group: settings.k8s.io
    - group: storage.k8s.io
    # Default level for all other requests.
    - level: Metadata
    omitStages:
    - RequestReceived
    EOF
    ```

    Copy the audit policy to the other master nodes:

    ```
    for host in k8s-master2 k8s-master3; do
    echo "---$host---"
    scp /var/lib/kubernetes/audit-policy.yaml $host:/var/lib/kubernetes/audit-policy.yaml
    done
    ```

    ### Generate the kube-apiserver.service unit file

    ```
    cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes

    [Service]
    ExecStart=/usr/local/bin/kube-apiserver \
    --advertise-address=$MASTER1_IP \
    --default-not-ready-toleration-seconds=360 \
    --default-unreachable-toleration-seconds=360 \
    --feature-gates=DynamicAuditing=true \
    --max-mutating-requests-inflight=2000 \
    --max-requests-inflight=4000 \
    --default-watch-cache-size=200 \
    --delete-collection-workers=2 \
    --encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \
    --etcd-cafile=/usr/local/etcd/ssl/ca.pem \
    --etcd-certfile=/usr/local/etcd/ssl/etcd.pem \
    --etcd-keyfile=/usr/local/etcd/ssl/etcd-key.pem \
    --etcd-servers=https://$MASTER1_IP:2379,https://$MASTER2_IP:2379,https://$MASTER3_IP:2379 \
    --bind-address=0.0.0.0 \
    --secure-port=6443 \
    --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
    --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
    --insecure-port=0 \
    --audit-dynamic-configuration \
    --audit-log-maxage=15 \
    --audit-log-maxbackup=3 \
    --audit-log-maxsize=100 \
    --audit-log-truncate-enabled \
    --audit-log-path=/var/log/audit.log \
    --audit-policy-file=/var/lib/kubernetes/audit-policy.yaml \
    --profiling \
    --anonymous-auth=false \
    --client-ca-file=/var/lib/kubernetes/ca.pem \
    --enable-bootstrap-token-auth \
    --requestheader-allowed-names="aggregator" \
    --requestheader-client-ca-file=/var/lib/kubernetes/ca.pem \
    --requestheader-extra-headers-prefix="X-Remote-Extra-" \
    --requestheader-group-headers=X-Remote-Group \
    --requestheader-username-headers=X-Remote-User \
    --service-account-key-file=/var/lib/kubernetes/service-account.pem \
    --authorization-mode=Node,RBAC \
    --runtime-config=api/all=true \
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
    --allow-privileged=true \
    --apiserver-count=3 \
    --event-ttl=168h \
    --kubelet-certificate-authority=/var/lib/kubernetes/ca.pem \
    --kubelet-client-certificate=/var/lib/kubernetes/kubernetes.pem \
    --kubelet-client-key=/var/lib/kubernetes/kubernetes-key.pem \
    --kubelet-https=true \
    --kubelet-timeout=10s \
    --proxy-client-cert-file=/var/lib/kube-proxy/kube-proxy.pem \
    --proxy-client-key-file=/var/lib/kube-proxy/kube-proxy-key.pem \
    --service-cluster-ip-range=10.250.0.0/16 \
    --service-node-port-range=30000-32767 \
    --logtostderr=true \
    --v=2

    Restart=on-failure
    RestartSec=5

    [Install]
    WantedBy=multi-user.target
    EOF
    ```

    Copy the unit file to the other master nodes; the --advertise-address value must then be changed to each node's own IP (a sed sketch follows the commands below):

    ```
    scp /etc/systemd/system/kube-apiserver.service k8s-master2:/etc/systemd/system/
    scp /etc/systemd/system/kube-apiserver.service k8s-master3:/etc/systemd/system/
    ```
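
    On k8s-master2 and k8s-master3 the --advertise-address still points at master1. A hedged sed sketch to fix it remotely, assuming the environment variables from the beginning of the document are still exported on k8s-master1:

    ```
    ssh k8s-master2 "sed -i 's/--advertise-address=$MASTER1_IP/--advertise-address=$MASTER2_IP/' /etc/systemd/system/kube-apiserver.service"
    ssh k8s-master3 "sed -i 's/--advertise-address=$MASTER1_IP/--advertise-address=$MASTER3_IP/' /etc/systemd/system/kube-apiserver.service"
    ```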

    The flags are explained below.

    | Flag | Description |
    | ------------------------------------------------------ | ------------------------------------------------------------ |
    | --advertise-address | IP address on which to advertise the apiserver to cluster members; it must be reachable by the rest of the cluster. If empty, --bind-address is used; if --bind-address is also unspecified, the host's default interface is used |
    | --default-not-ready-toleration-seconds | Toleration seconds for the notReady:NoExecute taint that is added by default to every Pod that does not already have such a toleration |
    | --default-unreachable-toleration-seconds | Toleration seconds for the unreachable:NoExecute taint that is added by default to every Pod that does not already have such a toleration |
    | --feature-gates=DynamicAuditing=true | Set of key=value pairs that toggle experimental feature gates |
    | --max-mutating-requests-inflight=2000 | Maximum number of mutating requests in flight at a given time; beyond this the server rejects requests. 0 means no limit (default 200) |
    | --max-requests-inflight=4000 | Maximum number of non-mutating requests in flight at a given time; beyond this the server rejects requests. 0 means no limit (default 400) |
    | --default-watch-cache-size=200 | Default watch cache size; 0 disables the watch cache for resources without a default watch size |
    | --delete-collection-workers=2 | Number of workers used for DeleteCollection calls, used to speed up namespace cleanup (default 1) |
    | --encryption-provider-config | Configuration file for encrypting Secret data at rest in etcd |
    | --etcd-cafile | SSL CA file used for etcd communication |
    | --etcd-certfile | SSL certificate file used for etcd communication |
    | --etcd-keyfile | SSL key file used for etcd communication |
    | --etcd-servers | List of etcd servers to connect to (scheme://ip:port), comma separated |
    | --bind-address | IP address on which to listen for --secure-port; the associated interface must be reachable by the other cluster nodes and by CLI/web clients. If empty, all interfaces (0.0.0.0) are used |
    | --secure-port=6443 | Port for HTTPS with authentication and authorization, default 6443 |
    | --tls-cert-file | File containing the default x509 certificate for HTTPS (the CA certificate, if any, concatenated after the server certificate). If HTTPS serving is enabled and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to /var/run/kubernetes |
    | --tls-private-key-file | File containing the x509 private key matching --tls-cert-file |
    | --insecure-port=0 | Insecure listening port, default 8080; setting it to 0 disables the insecure port |
    | --audit-dynamic-configuration | Enable dynamic audit configuration |
    | --audit-log-maxage=15 | Maximum number of days to retain old audit log files, based on the timestamp in the file name |
    | --audit-log-maxbackup=3 | Maximum number of old audit log files to retain |
    | --audit-log-maxsize=100 | Maximum size in megabytes of an audit log file before it is rotated |
    | --audit-log-truncate-enabled | Whether event and batch truncation is enabled |
    | --audit-log-path | If set, all requests to the apiserver are logged to this file; '-' means standard output |
    | --audit-policy-file | Path to the audit policy configuration file; requires the 'AdvancedAuditing' feature gate, which needs a configuration for auditing to be enabled |
    | --profiling | Enable profiling via the web interface host:port/debug/pprof/ (default true) |
    | --anonymous-auth | Enable anonymous requests to the API server's secure port; requests not rejected by another authentication method are treated as anonymous, with username system:anonymous and group system:unauthenticated (default true) |
    | --client-ca-file | If set, any request carrying a client certificate signed by one of the authorities in client-ca-file is authenticated with the identity of the certificate's CommonName |
    | --enable-bootstrap-token-auth | Allow secrets of type 'bootstrap.kubernetes.io/token' in the 'kube-system' namespace to be used for TLS bootstrap authentication |
    | --requestheader-allowed-names | List of client certificate common names allowed to provide usernames via the headers specified by --requestheader-username-headers; if empty, any client certificate validated by --requestheader-client-ca-file is allowed |
    | --requestheader-client-ca-file | Root certificate bundle used to verify client certificates on incoming requests before trusting the usernames in the headers specified by --requestheader-username-headers |
    | --requestheader-extra-headers-prefix="X-Remote-Extra-" | List of request header prefixes to inspect; X-Remote-Extra- is suggested |
    | --requestheader-group-headers=X-Remote-Group | List of request headers to inspect for groups; X-Remote-Group is suggested |
    | --requestheader-username-headers=X-Remote-User | List of request headers to inspect for usernames; X-Remote-User is suggested |
    | --service-account-key-file | File containing PEM-encoded x509 RSA or ECDSA private or public keys used to verify ServiceAccount tokens; if unset, --tls-private-key-file is used. The file may contain multiple keys and the flag may be given multiple times with different files |
    | --authorization-mode=Node,RBAC | Ordered, comma-separated list of authorization plugins for the secure port: AlwaysAllow, AlwaysDeny, ABAC, Webhook, RBAC, Node (default "AlwaysAllow") |
    | --runtime-config=api/all=true | Set of key=value pairs describing runtime configuration passed to the apiserver |
    | --enable-admission-plugins=NodeRestriction | Admission plugins to enable (resource-quota and policy related controls) |
    | --allow-privileged=true | If true, allow privileged containers |
    | --apiserver-count=3 | Number of apiservers running in the cluster; must be a positive number (default 1) |
    | --event-ttl=168h | Amount of time to retain events (default 1h0m0s) |
    | --kubelet-certificate-authority | Path to the certificate authority file used for the kubelet |
    | --kubelet-client-certificate | Path to the client certificate file used for TLS to the kubelet |
    | --kubelet-client-key | Path to the client key file used for TLS to the kubelet |
    | --kubelet-https=true | Use https for kubelet connections (default true) |
    | --kubelet-timeout=10s | Timeout for kubelet operations (default 5s) |
    | --proxy-client-cert-file | Client certificate used to prove the identity of the aggregator or kube-apiserver when it must call out, including proxying requests to a user api-server and calling webhook admission plugins. It is expected to be signed by the CA given in --requestheader-client-ca-file, which is published in the 'extension-apiserver-authentication' ConfigMap in the kube-system namespace; components receiving calls from kube-aggregator should use that CA for their half of the mutual TLS verification |
    | --proxy-client-key-file | Client certificate key used to prove the identity of the aggregator or kube-apiserver when it must call out, including proxying requests to a user api-server and calling webhook admission plugins |
    | --service-cluster-ip-range | CIDR range from which Service cluster IPs are allocated; it must not overlap the ranges assigned to nodes and pods |
    | --service-node-port-range | Port range used by NodePort Services |
    | --logtostderr=true | Log to standard error instead of files |
    | --v=2 | Log level verbosity |

    ### Generate the kube-controller-manager.service unit file

    kube-controller-manager (the Kubernetes controller manager) is a daemon that watches the shared state of the cluster through kube-apiserver (the cluster resource state that kube-apiserver collects or watches, which kube-controller-manager and other clients can watch) and keeps trying to move the current state toward the desired state. It is stateful and modifies cluster state, so several active controller managers at once would cause consistency problems; high availability for kube-controller-manager is therefore active/standby only. Kubernetes implements leader election with a lease lock, so --leader-elect=true must be added to the startup parameters.

    ```
    cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/kubernetes/kubernetes

    [Service]
    ExecStart=/usr/local/bin/kube-controller-manager \
    --profiling \
    --cluster-name=kubernetes \
    --controllers=*,bootstrapsigner,tokencleaner \
    --kube-api-qps=1000 \
    --kube-api-burst=2000 \
    --leader-elect \
    --use-service-account-credentials\
    --concurrent-service-syncs=2 \
    --bind-address=0.0.0.0 \
    --secure-port=10257 \
    --tls-cert-file=/var/lib/kubernetes/kube-controller-manager.pem \
    --tls-private-key-file=/var/lib/kubernetes/kube-controller-manager-key.pem \
    --port=10252 \
    --authentication-kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
    --client-ca-file=/var/lib/kubernetes/ca.pem \
    --requestheader-client-ca-file=/var/lib/kubernetes/ca.pem \
    --requestheader-extra-headers-prefix="X-Remote-Extra-" \
    --requestheader-group-headers=X-Remote-Group \
    --requestheader-username-headers=X-Remote-User \
    --authorization-kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
    --cluster-signing-cert-file=/var/lib/kubernetes/ca.pem \
    --cluster-signing-key-file=/var/lib/kubernetes/ca-key.pem \
    --experimental-cluster-signing-duration=876000h \
    --horizontal-pod-autoscaler-sync-period=10s \
    --concurrent-deployment-syncs=10 \
    --concurrent-gc-syncs=30 \
    --node-cidr-mask-size=24 \
    --service-cluster-ip-range=10.250.0.0/16 \
    --pod-eviction-timeout=6m \
    --terminated-pod-gc-threshold=10000 \
    --root-ca-file=/var/lib/kubernetes/ca.pem \
    --service-account-private-key-file=/var/lib/kubernetes/service-account-key.pem \
    --kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \
    --logtostderr=true \
    --v=2

    Restart=on-failure
    RestartSec=5

    [Install]
    WantedBy=multi-user.target
    EOF
    ```

    The flag below (originally on line 22 of the unit file) caused startup problems and has been left out:

    ```
    --requestheader-allowed-names="" \
    ```

    Copy the unit file to the other master nodes:

    ```
    scp /etc/systemd/system/kube-controller-manager.service k8s-master2:/etc/systemd/system/
    scp /etc/systemd/system/kube-controller-manager.service k8s-master3:/etc/systemd/system/
    ```

    **Flag reference**

    | Flag | Description |
    | ------------------------------------------------------------ | ------------------------------------------------------------ |
    | --profiling | Enable profiling via the web interface host:port/debug/pprof/ |
    | --cluster-name=kubernetes | Cluster name, default kubernetes |
    | --controllers=*,bootstrapsigner,tokencleaner | '*' enables all default controllers; bootstrapsigner and tokencleaner are disabled by default and must be enabled explicitly |
    | --kube-api-qps=1000 | QPS to use when talking to kube-apiserver |
    | --kube-api-burst=2000 | Burst to use when talking to kube-apiserver |
    | --leader-elect | Enable leader election for high availability |
    | --use-service-account-credentials | If true, use an individual service account for each controller |
    | --concurrent-service-syncs=2 | Number of Services allowed to sync concurrently; a larger number means faster service management at the cost of more CPU and network load |
    | --bind-address=0.0.0.0 | Listening address |
    | --secure-port=10257 | HTTPS port, default 10257; 0 disables the HTTPS service |
    | --tls-cert-file | x509 certificate file; if HTTPS is enabled but --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory given by --cert-dir |
    | --tls-private-key-file | Private key matching --tls-cert-file |
    | --port=10252 | HTTP port without authentication; 0 disables the HTTP service, default 10252 |
    | --authentication-kubeconfig | kube-controller-manager is itself a client of kube-apiserver and can access it through a kubeconfig file |
    | --client-ca-file | Enable client certificate authentication |
    | --requestheader-allowed-names="aggregator" | List of client certificate Common Names allowed to provide usernames via the headers in --requestheader-username-headers; if empty, any certificate validated by --requestheader-client-ca-file is allowed |
    | --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem | Root certificate used to verify client certificates on incoming requests before trusting usernames in the header given by --requestheader-username-headers |
    | --requestheader-extra-headers-prefix="X-Remote-Extra-" | Request header prefixes to inspect; X-Remote-Extra- is suggested |
    | --requestheader-group-headers=X-Remote-Group | Request headers to inspect for group names |
    | --requestheader-username-headers=X-Remote-User | Request headers to inspect for usernames |
    | --authorization-kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig | Path to the kubeconfig file used for authorization |
    | --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem | Cluster-wide certificate (root CA file) used to sign cluster certificates |
    | --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem | Key used to sign cluster certificates |
    | --experimental-cluster-signing-duration=876000h | Validity period of signed certificates |
    | --horizontal-pod-autoscaler-sync-period=10s | HPA controller check interval |
    | --concurrent-deployment-syncs=10 | Number of Deployment objects allowed to sync concurrently; a larger number means faster deployment response |
    | --concurrent-gc-syncs=30 | Number of garbage collector workers allowed to sync concurrently, default 20 |
    | --node-cidr-mask-size=24 | CIDR mask size for node subnets, default 24 |
    | --service-cluster-ip-range=10.254.0.0/16 | CIDR range of the cluster Services |
    | --pod-eviction-timeout=6m | Grace period for deleting Pods on failed nodes, default 300 seconds |
    | --terminated-pod-gc-threshold=10000 | Number of terminated Pods allowed to exist before the Pod garbage collector starts deleting them, default 12500 |
    | --root-ca-file=/etc/kubernetes/cert/ca.pem | If set, this root CA is included in service account token secrets; it must be a valid PEM-encoded CA bundle |
    | --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem | File containing the PEM-encoded RSA or ECDSA private key used to sign service account tokens |
    | --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig | kubeconfig file to use |
    | --logtostderr=true | Log errors to standard output instead of files |
    | --v=2 | Log level |

    1. The controller manager manages the individual controllers; each controller watches the cluster state exposed by the apiserver and keeps trying to move the current state toward the desired state;

    2. It is configured to access the kube-apiserver secure port with a kubeconfig file;

    3. By default the non-secure port is 10252 and the secure port is 10257;

    4. kube-controller-manager runs highly available on 3 nodes; the instances compete for the lock and one becomes leader (see the check sketched after this list);
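
    Once the services are started in the next section, the current leader can be inspected. With the default Endpoints-based election lock the holder identity is recorded in an annotation (a hedged check; depending on --leader-elect-resource-lock the record may also live in a Lease object):

    ```
    kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity
    ```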

    ### Generate the kube-scheduler.service unit file

    kube-scheduler runs on the master nodes as a core control-plane component. It watches kube-apiserver for Pods that have not yet been scheduled; when it finds one, its scheduling algorithm picks the most suitable Node and it writes the result (pod name, node name, and so on) back through kube-apiserver into etcd to complete the scheduling. Like kube-controller-manager, kube-scheduler achieves high availability through leader election.

    ```
    for host in k8s-master1 k8s-master2 k8s-master3; do
    echo "---$host---"
    ssh $host "mkdir /etc/kubernetes/config/ -p"
    done
    ```

    ```
    cat <<EOF | sudo tee /etc/kubernetes/config/kube-scheduler.yaml
    apiVersion: kubescheduler.config.k8s.io/v1alpha1
    kind: KubeSchedulerConfiguration
    bindTimeoutSeconds: 600
    clientConnection:
      burst: 200
      kubeconfig: "/var/lib/kubernetes/kube-scheduler.kubeconfig"
      qps: 100
    enableContentionProfiling: false
    enableProfiling: true
    hardPodAffinitySymmetricWeight: 1
    healthzBindAddress: 127.0.0.1:10251
    leaderElection:
      leaderElect: true
    metricsBindAddress: $MASTER1_IP:10251
    EOF
    ```

    ```
    cat <<EOF | tee /etc/systemd/system/kube-scheduler.service
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes

    [Service]
    ExecStart=/usr/local/bin/kube-scheduler \
    --config=/etc/kubernetes/config/kube-scheduler.yaml \
    --bind-address=$MASTER1_IP \
    --secure-port=10259 \
    --port=10251 \
    --tls-cert-file=/var/lib/kubernetes/kube-scheduler.pem \
    --tls-private-key-file=/var/lib/kubernetes/kube-scheduler-key.pem \
    --authentication-kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig \
    --client-ca-file=/var/lib/kubernetes/ca.pem \
    --requestheader-allowed-names="aggregator" \
    --requestheader-client-ca-file=/var/lib/kubernetes/ca.pem \
    --requestheader-extra-headers-prefix="X-Remote-Extra-" \
    --requestheader-group-headers=X-Remote-Group \
    --requestheader-username-headers=X-Remote-User \
    --authorization-kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig \
    --logtostderr=true \
    --v=2
    Restart=on-failure
    RestartSec=5

    [Install]
    WantedBy=multi-user.target
    EOF
    ```

    Copy the configuration files to the other nodes, then update the addresses in them to each node's own IP!

    ```
    scp /etc/kubernetes/config/kube-scheduler.yaml k8s-master2:/etc/kubernetes/config/
    scp /etc/systemd/system/kube-scheduler.service k8s-master2:/etc/systemd/system/


    scp /etc/kubernetes/config/kube-scheduler.yaml k8s-master3:/etc/kubernetes/config/
    scp /etc/systemd/system/kube-scheduler.service k8s-master3:/etc/systemd/system/
    ```

    **Flag reference**

    | Flag | Description |
    | ------------------------------------------------------------ | ------------------------------------------------------------ |
    | --config=/etc/kubernetes/kube-scheduler.yaml | Path to the configuration file |
    | --bind-address= | Listening address |
    | --secure-port=10259 | Secure port to listen on; 0 disables the secure port |
    | --port=10251 | Non-secure port to listen on; 0 disables the non-secure port |
    | --tls-cert-file=/etc/kubernetes/cert/kube-scheduler.pem | File containing the default HTTPS x509 certificate (the CA certificate, if any, concatenated after the server certificate); if HTTPS serving is enabled and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated and saved to the directory given by --cert-dir |
    | --tls-private-key-file=/etc/kubernetes/cert/kube-scheduler-key.pem | File containing the default x509 private key matching --tls-cert-file |
    | --authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig | kubeconfig file used when kube-scheduler acts as a client of kube-apiserver |
    | --client-ca-file=/etc/kubernetes/cert/ca.pem | If set, any request carrying a client certificate signed by an authority in client-ca-file is authenticated with the identity of the certificate's CommonName |
    | --requestheader-allowed-names="aggregator" | List of client certificate common names allowed to provide usernames via the headers specified by --requestheader-username-headers; if empty, any certificate validated by --requestheader-client-ca-file is allowed |
    | --requestheader-client-ca-file=/etc/kubernetes/cert/ca.pem | Root certificate bundle used to verify client certificates on incoming requests before trusting usernames in the headers specified by --requestheader-username-headers. Warning: do not rely on authorization having already been done for the incoming request |
    | --requestheader-extra-headers-prefix="X-Remote-Extra-" | Request header prefixes to inspect; X-Remote-Extra- is suggested |
    | --requestheader-group-headers=X-Remote-Group | Request headers to inspect for groups; X-Remote-Group is suggested |
    | --requestheader-username-headers=X-Remote-User | Request headers to inspect for usernames; X-Remote-User is common |
    | --authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig | kubeconfig file pointing to a 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io; optional, and if empty all requests not skipped by authorization are forbidden |
    | --logtostderr=true | Log to standard error instead of files |
    | --v=2 | Log level verbosity |

    1. kube-scheduler exposes non-secure port 10251 and secure port 10259;
    2. kube-scheduler is deployed highly available on 3 nodes, with the leader chosen by election;
    3. It watches the kube-apiserver watch interface and, through the predicate and priority phases, finds the best-fitting node and schedules the Pod onto it;

    ### Start the control-plane components

    ```
    for host in k8s-master1 k8s-master2 k8s-master3; do
    echo "---$host---"
    ssh $host "systemctl daemon-reload"
    ssh $host "systemctl enable --now kube-apiserver kube-controller-manager kube-scheduler"
    done
    ```

    ```
    # to restart the components later, if needed:
    for host in k8s-master1 k8s-master2 k8s-master3; do
    echo "---$host---"
    ssh $host "systemctl daemon-reload"
    ssh $host "systemctl restart kube-apiserver kube-controller-manager kube-scheduler"
    done

    # to stop the components, if needed:
    for host in k8s-master1 k8s-master2 k8s-master3; do
    echo "---$host---"
    ssh $host "systemctl daemon-reload"
    ssh $host "systemctl stop kube-apiserver kube-controller-manager kube-scheduler"
    done
    ```

    Wait about 10 seconds for the Kubernetes API server to initialize, then check the services:

    ```
    systemctl status kube-apiserver
    systemctl status kube-controller-manager
    systemctl status kube-scheduler
    ```
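
    Beyond systemctl status, a few hedged health checks against the ports configured above:

    ```
    # kube-controller-manager (non-secure port 10252) and kube-scheduler (non-secure port 10251)
    curl -s http://127.0.0.1:10252/healthz; echo
    curl -s http://127.0.0.1:10251/healthz; echo
    # the apiserver secure port should be listening on 6443
    ss -tlnp | grep 6443
    ```

    Both curl commands should print ok.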

    ### Copy the admin kubeconfig to .kube/config, the default file read by kubectl

    ```
    cd
    for host in k8s-master1 k8s-master2 k8s-master3; do
    echo "---$host---"
    ssh $host "mkdir /root/.kube -p"
    scp /root/ssl/kubernetes/admin.kubeconfig $host:/root/.kube/config
    done
    ```

    Check the cluster info:

    ```
    kubectl cluster-info
    ```

    Expected output

    ```
    Kubernetes master is running at https://127.0.0.1:8443

    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    ```

    Check the component status:

    ```
    kubectl get cs
    ```

    Expected output

    ```
    NAME STATUS MESSAGE ERROR
    controller-manager Healthy ok
    scheduler Healthy ok
    etcd-1 Healthy {"health":"true"}
    etcd-0 Healthy {"health":"true"}
    etcd-2 Healthy {"health":"true"}
    ```


    # Kubelet RBAC authorization

    ### Note: all operations in this part are performed on the master node

    This section configures RBAC authorization for the API Server to access the Kubelet API. Access to the Kubelet API is required to retrieve metrics and logs and to execute commands inside containers.

    All operations are performed on the master node.

    Here the Kubelet --authorization-mode is set to Webhook. Webhook mode uses the SubjectAccessReview API to determine authorization.
    Create the system:kube-apiserver-to-kubelet ClusterRole with permission to access the Kubelet API and perform the most common tasks for managing Pods:

    ```
    cd /root/ssl/kubernetes
    ```

    ```
    cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRole
    metadata:
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
      name: system:kube-apiserver-to-kubelet
    rules:
      - apiGroups:
          - ""
        resources:
          - nodes/proxy
          - nodes/stats
          - nodes/log
          - nodes/spec
          - nodes/metrics
        verbs:
          - "*"
    EOF
    ```

    Expected output

    ```
    clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
    ```

    The Kubernetes API Server authenticates to the Kubelet as the kubernetes user, using the client certificate defined by the --kubelet-client-certificate flag.

    Bind the system:kube-apiserver-to-kubelet ClusterRole to the kubernetes user:

    ```
    cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: system:kube-apiserver
      namespace: ""
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:kube-apiserver-to-kubelet
    subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: User
        name: kubernetes
    EOF
    ```

    Expected output

    ```
    clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
    ```
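
    A hedged way to confirm the binding works is to ask the API server whether the kubernetes user can now reach the Kubelet API subresources:

    ```
    kubectl auth can-i get nodes/proxy --as=kubernetes
    # expected output: yes
    ```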

    # Deploy the node components

    ### Note: the operations in this part are performed on all nodes

    This part deploys the Kubernetes worker components. The following services are installed on every node:
    container networking plugins (CNI)
    kubelet
    kube-proxy

    Install dependencies
    Install the OS dependency packages:

    ```
    for host in k8s-master1 k8s-master2 k8s-master3 k8s-node1;do
    echo "---$host---"
    ssh $host "yum install -y socat conntrack ipset";
    done
    ```

    The socat command is needed to support kubectl port-forward.

    Download the worker binaries (CNI plugins):

    ```
    wget https://github.com/containernetworking/plugins/releases/download/v0.8.2/cni-plugins-linux-amd64-v0.8.2.tgz
    ```

    Extract the CNI plugins and copy them to the other nodes:

    ```
    tar -zxvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin/

    for host in k8s-master2 k8s-master3 k8s-node1; do
    echo "---$host---"
    ssh $host "mkdir /opt/cni/bin/ -p"
    scp /opt/cni/bin/* $host:/opt/cni/bin/
    done
    ```
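
    A quick sanity check that the plugins landed on every node (a small sketch):

    ```
    for host in k8s-master1 k8s-master2 k8s-master3 k8s-node1; do
    echo "---$host---"
    ssh $host "ls /opt/cni/bin | wc -l"
    done
    ```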

    Generate the kubelet.service systemd unit file

    ```
    cat << EOF | sudo tee /etc/systemd/system/kubelet.service
    [Unit]
    Description=Kubernetes Kubelet
    Documentation=https://github.com/kubernetes/kubernetes
    After=docker.service
    Requires=docker.service

    [Service]
    ExecStart=/usr/local/bin/kubelet \
    --config=/var/lib/kubelet/kubelet-config.yaml \
    --kubeconfig=/var/lib/kubelet/kubeconfig \
    --pod-infra-container-image=cargo.caicloud.io/caicloud/pause-amd64:3.1 \
    --network-plugin=cni \
    --register-node=true \
    --v=2 \
    --container-runtime=docker \
    --container-runtime-endpoint=unix:///var/run/dockershim.sock \
    --image-pull-progress-deadline=15m

    Restart=on-failure
    RestartSec=5

    [Install]
    WantedBy=multi-user.target
    EOF
    ```

    Copy the unit file to the other nodes:

    ```
    for host in k8s-master2 k8s-master3 k8s-node1; do
    echo "---$host---"
    scp /etc/systemd/system/kubelet.service $host:/etc/systemd/system/
    done
    ```

    **Flag reference**

    | Flag | Description |
    | ------------------------------- | ------------------------------------------------------------ |
    | --bootstrap-kubeconfig | Points to the bootstrap token kubeconfig file |
    | --cert-dir | Directory where the kubelet stores generated certificates and private keys |
    | --cni-conf-dir= | Directory of CNI configuration files |
    | --container-runtime=docker | Container runtime engine to use |
    | --container-runtime-endpoint= | Unix socket the runtime listens on (a TCP port on Windows) |
    | --root-dir= | Directory where the kubelet stores its data, default /var/lib/kubelet |
    | --kubeconfig= | kubeconfig authentication file the kubelet uses as a client |
    | --config= | kubelet configuration file |
    | --hostname-override= | Hostname the node reports to the cluster; if set on the kubelet, kube-proxy must be set to the same value, otherwise the Node will not be found |
    | --pod-infra-container-image= | Image used by the network/ipc namespace container of every pod |
    | --image-pull-progress-deadline= | Maximum time an image pull may show no progress before it is cancelled, default 1m0s |
    | --volume-plugin-dir= | Full search path for third-party volume plugins, default "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/" |
    | --logtostderr=true | Log to standard error instead of files |
    | --v=2 | Log level verbosity |

    kubelet configuration for the k8s-master1 node:

    ```
    cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    authentication:
    anonymous:
    enabled: false
    webhook:
    enabled: true
    x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
    authorization:
    mode: Webhook
    clusterDomain: "cluster.local"
    clusterDNS:
    - "10.250.0.10"
    runtimeRequestTimeout: "15m"
    tlsCertFile: "/var/lib/kubelet/k8s-master1.pem"
    tlsPrivateKeyFile: "/var/lib/kubelet/k8s-master1-key.pem"
    address: "$MASTER1_IP"
    staticPodPath: ""
    syncFrequency: 1m
    fileCheckFrequency: 20s
    httpCheckFrequency: 20s
    staticPodURL: ""
    port: 10250
    readOnlyPort: 0
    rotateCertificates: true
    serverTLSBootstrap: true
    registryPullQPS: 0
    registryBurst: 20
    eventRecordQPS: 0
    eventBurst: 20
    enableDebuggingHandlers: true
    enableContentionProfiling: true
    healthzPort: 10248
    healthzBindAddress: "$MASTER1_IP"
    nodeStatusUpdateFrequency: 10s
    nodeStatusReportFrequency: 1m
    imageMinimumGCAge: 2m
    imageGCHighThresholdPercent: 85
    imageGCLowThresholdPercent: 80
    volumeStatsAggPeriod: 1m
    kubeletCgroups: ""
    systemCgroups: ""
    cgroupRoot: ""
    cgroupsPerQOS: true
    cgroupDriver: cgroupfs
    runtimeRequestTimeout: 10m
    hairpinMode: promiscuous-bridge
    maxPods: 220
    podCIDR: "10.244.0.0/16"
    podPidsLimit: -1
    resolvConf: /etc/resolv.conf
    maxOpenFiles: 1000000
    kubeAPIQPS: 1000
    kubeAPIBurst: 2000
    serializeImagePulls: false
    evictionHard:
    memory.available: "100Mi"
    nodefs.available: "10%"
    nodefs.inodesFree: "5%"
    imagefs.available: "15%"
    evictionSoft: {}
    enableControllerAttachDetach: true
    failSwapOn: true
    containerLogMaxSize: 20Mi
    containerLogMaxFiles: 10
    systemReserved: {}
    kubeReserved: {}
    systemReservedCgroup: ""
    kubeReservedCgroup: ""
    enforceNodeAllocatable: ["pods"]
    EOF
    ```

    kubelet configuration for the k8s-master2 node (written on master1 and then copied over):

    ```
    cat << EOF | sudo tee /var/lib/kubelet/k8s-master2.yaml
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    authentication:
    anonymous:
    enabled: false
    webhook:
    enabled: true
    x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
    authorization:
    mode: Webhook
    clusterDomain: "cluster.local"
    clusterDNS:
    - "10.250.0.10"
    runtimeRequestTimeout: "15m"
    tlsCertFile: "/var/lib/kubelet/k8s-master2.pem"
    tlsPrivateKeyFile: "/var/lib/kubelet/k8s-master2-key.pem"
    address: "$MASTER2_IP"
    staticPodPath: ""
    syncFrequency: 1m
    fileCheckFrequency: 20s
    httpCheckFrequency: 20s
    staticPodURL: ""
    port: 10250
    readOnlyPort: 0
    rotateCertificates: true
    serverTLSBootstrap: true
    registryPullQPS: 0
    registryBurst: 20
    eventRecordQPS: 0
    eventBurst: 20
    enableDebuggingHandlers: true
    enableContentionProfiling: true
    healthzPort: 10248
    healthzBindAddress: "$MASTER2_IP"
    nodeStatusUpdateFrequency: 10s
    nodeStatusReportFrequency: 1m
    imageMinimumGCAge: 2m
    imageGCHighThresholdPercent: 85
    imageGCLowThresholdPercent: 80
    volumeStatsAggPeriod: 1m
    kubeletCgroups: ""
    systemCgroups: ""
    cgroupRoot: ""
    cgroupsPerQOS: true
    cgroupDriver: cgroupfs
    runtimeRequestTimeout: 10m
    hairpinMode: promiscuous-bridge
    maxPods: 220
    podCIDR: "10.244.0.0/16"
    podPidsLimit: -1
    resolvConf: /etc/resolv.conf
    maxOpenFiles: 1000000
    kubeAPIQPS: 1000
    kubeAPIBurst: 2000
    serializeImagePulls: false
    evictionHard:
    memory.available: "100Mi"
    nodefs.available: "10%"
    nodefs.inodesFree: "5%"
    imagefs.available: "15%"
    evictionSoft: {}
    enableControllerAttachDetach: true
    failSwapOn: true
    containerLogMaxSize: 20Mi
    containerLogMaxFiles: 10
    systemReserved: {}
    kubeReserved: {}
    systemReservedCgroup: ""
    kubeReservedCgroup: ""
    enforceNodeAllocatable: ["pods"]
    EOF


    scp /var/lib/kubelet/k8s-master2.yaml k8s-master2:/var/lib/kubelet/kubelet-config.yaml
    rm -rf /var/lib/kubelet/k8s-master2.yaml
    ```

    Configuration file for the k8s-master3 node, written on the master host:

    ```
    cat << EOF | sudo tee /var/lib/kubelet/k8s-master3.yaml
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    authentication:
    anonymous:
    enabled: false
    webhook:
    enabled: true
    x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
    authorization:
    mode: Webhook
    clusterDomain: "cluster.local"
    clusterDNS:
    - "10.250.0.10"
    runtimeRequestTimeout: "15m"
    tlsCertFile: "/var/lib/kubelet/k8s-master3.pem"
    tlsPrivateKeyFile: "/var/lib/kubelet/k8s-master3-key.pem"
    address: "$MASTER3_IP"
    staticPodPath: ""
    syncFrequency: 1m
    fileCheckFrequency: 20s
    httpCheckFrequency: 20s
    staticPodURL: ""
    port: 10250
    readOnlyPort: 0
    rotateCertificates: true
    serverTLSBootstrap: true
    registryPullQPS: 0
    registryBurst: 20
    eventRecordQPS: 0
    eventBurst: 20
    enableDebuggingHandlers: true
    enableContentionProfiling: true
    healthzPort: 10248
    healthzBindAddress: "$MASTER3_IP"
    nodeStatusUpdateFrequency: 10s
    nodeStatusReportFrequency: 1m
    imageMinimumGCAge: 2m
    imageGCHighThresholdPercent: 85
    imageGCLowThresholdPercent: 80
    volumeStatsAggPeriod: 1m
    kubeletCgroups: ""
    systemCgroups: ""
    cgroupRoot: ""
    cgroupsPerQOS: true
    cgroupDriver: cgroupfs
    runtimeRequestTimeout: 10m
    hairpinMode: promiscuous-bridge
    maxPods: 220
    podCIDR: "10.244.0.0/16"
    podPidsLimit: -1
    resolvConf: /etc/resolv.conf
    maxOpenFiles: 1000000
    kubeAPIQPS: 1000
    kubeAPIBurst: 2000
    serializeImagePulls: false
    evictionHard:
    memory.available: "100Mi"
    nodefs.available: "10%"
    nodefs.inodesFree: "5%"
    imagefs.available: "15%"
    evictionSoft: {}
    enableControllerAttachDetach: true
    failSwapOn: true
    containerLogMaxSize: 20Mi
    containerLogMaxFiles: 10
    systemReserved: {}
    kubeReserved: {}
    systemReservedCgroup: ""
    kubeReservedCgroup: ""
    enforceNodeAllocatable: ["pods"]
    EOF
    ```

    ```
    scp /var/lib/kubelet/k8s-master3.yaml k8s-master3:/var/lib/kubelet/kubelet-config.yaml
    rm -rf /var/lib/kubelet/k8s-master3.yaml
    ```

    kubelet configuration for the k8s-node1 node (written on master1 and then copied over):

    ```
    cat << EOF | sudo tee /var/lib/kubelet/k8s-node1.yaml
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    authentication:
    anonymous:
    enabled: false
    webhook:
    enabled: true
    x509:
    clientCAFile: "/var/lib/kubernetes/ca.pem"
    authorization:
    mode: Webhook
    clusterDomain: "cluster.local"
    clusterDNS:
    - "10.250.0.10"
    runtimeRequestTimeout: "15m"
    address: "$NODE1_IP"
    staticPodPath: ""
    syncFrequency: 1m
    fileCheckFrequency: 20s
    httpCheckFrequency: 20s
    staticPodURL: ""
    port: 10250
    readOnlyPort: 0
    rotateCertificates: true
    serverTLSBootstrap: true
    registryPullQPS: 0
    registryBurst: 20
    eventRecordQPS: 0
    eventBurst: 20
    enableDebuggingHandlers: true
    enableContentionProfiling: true
    healthzPort: 10248
    healthzBindAddress: "$NODE1_IP"
    nodeStatusUpdateFrequency: 10s
    nodeStatusReportFrequency: 1m
    imageMinimumGCAge: 2m
    imageGCHighThresholdPercent: 85
    imageGCLowThresholdPercent: 80
    volumeStatsAggPeriod: 1m
    kubeletCgroups: ""
    systemCgroups: ""
    cgroupRoot: ""
    cgroupsPerQOS: true
    cgroupDriver: cgroupfs
    runtimeRequestTimeout: 10m
    hairpinMode: promiscuous-bridge
    maxPods: 220
    podCIDR: "10.244.0.0/16"
    podPidsLimit: -1
    resolvConf: /etc/resolv.conf
    maxOpenFiles: 1000000
    kubeAPIQPS: 1000
    kubeAPIBurst: 2000
    serializeImagePulls: false
    evictionHard:
    memory.available: "100Mi"
    nodefs.available: "10%"
    nodefs.inodesFree: "5%"
    imagefs.available: "15%"
    evictionSoft: {}
    enableControllerAttachDetach: true
    failSwapOn: true
    containerLogMaxSize: 20Mi
    containerLogMaxFiles: 10
    systemReserved: {}
    kubeReserved: {}
    systemReservedCgroup: ""
    kubeReservedCgroup: ""
    enforceNodeAllocatable: ["pods"]
    tlsCertFile: "/var/lib/kubelet/k8s-node1.pem"
    tlsPrivateKeyFile: "/var/lib/kubelet/k8s-node1-key.pem"
    EOF
    ```

    ```
    scp /var/lib/kubelet/k8s-node1.yaml k8s-node1:/var/lib/kubelet/kubelet-config.yaml
    rm -rf /var/lib/kubelet/k8s-node1.yaml
    ```

    **Parameter reference**

    | Parameter | Description |
    | ------------------------------------------- | ------------------------------------------------------------ |
    | address | Address the kubelet service listens on |
    | staticPodPath: "" | Directory the kubelet periodically scans for YAML/JSON files to create/delete static Pods; very useful with kubeadm installs |
    | syncFrequency: 1m | Maximum interval between syncing running containers and configuration, default 1m |
    | fileCheckFrequency: 20s | Period for checking configuration files for new data, default 20s |
    | httpCheckFrequency: 20s | Period for checking http for new data, default 20s |
    | staticPodURL: "" | |
    | port: 10250 | Port of the kubelet service, default 10250 |
    | readOnlyPort: 0 | Read-only kubelet port without authentication/authorization; 0 disables it, default 10255 |
    | rotateCertificates | Certificate rotation, default false |
    | serverTLSBootstrap: true | Kubelet secure bootstrap of its serving certificate; after restarting the kubelet, new CSRs appear |
    | authentication: | The authentication methods are the following |
    | anonymous: | anonymous authentication |
    | enabled: false | set to false |
    | webhook: | webhook authentication |
    | enabled: true | set to true |
    | x509: | x509 authentication |
    | clientCAFile: "/etc/kubernetes/cert/ca.pem" | cluster CA certificate |
    | authorization: | authorization |
    | mode: Webhook | webhook authorization |
    | registryPullQPS | Limit of image pulls per second; 0 means unlimited, default 5 |
    | registryBurst | Only used when --registry-qps is greater than 0; maximum number of concurrent image pulls, may not exceed registry-qps, default 10 |
    | eventRecordQPS | Limit of events created per second; 0 means unlimited, default 5 |
    | eventBurst | When --event-qps is greater than 0, events may temporarily exceed that value but not event-burst, default 10 |
    | enableDebuggingHandlers | Enable server endpoints for log collection and for running containers and commands locally, default true |
    | enableContentionProfiling | If profiling is enabled, also enable lock contention profiling |
    | healthzPort | Port of the local healthz endpoint; 0 disables it, default 10248 |
    | healthzBindAddress | Address the healthz port listens on |
    | clusterDomain: "cluster.local" | Cluster domain; the kubelet configures all containers to search this domain in addition to the host search domains |
    | clusterDNS: | List of DNS server IP addresses |
    | - "10.254.0.2" | one DNS server address |
    | nodeStatusUpdateFrequency: 10s | How often the node status is updated and reported, default 10s |
    | nodeStatusReportFrequency: 1m | How often the node reports its own status, default 1m |
    | imageMinimumGCAge: 2m | Minimum age an unused image must reach before it may be garbage collected |
    | imageGCHighThresholdPercent: 85 | Disk usage percentage above which image garbage collection runs, default 85 |
    | imageGCLowThresholdPercent: 80 | Disk usage percentage below which image garbage collection stops, default 80 |
    | volumeStatsAggPeriod: 1m | Interval at which the kubelet calculates and caches disk usage of all pods and volumes; 0 disables volume calculation, default 1m |
    | kubeletCgroups: "" | Optional absolute cgroup name to create and run the kubelet in |
    | systemCgroups: "" | Optional absolute cgroup name used to place all non-kernel processes that are not already inside a cgroup under the root /; changing it requires a restart |
    | cgroupRoot: "" | Optional root cgroup for Pods, handled on a best-effort basis by the container runtime; default '' means use the container runtime's default |
    | cgroupsPerQOS: true | Enable creation of the QoS cgroup hierarchy; if true, top-level QoS cgroups are created |
    | cgroupDriver: systemd | Driver the kubelet uses to manipulate host cgroups |
    | runtimeRequestTimeout: 10m | Timeout for all runtime requests except long-running ones (pull, logs, exec, attach); when the timeout triggers, the kubelet cancels the request, raises an error and retries later, default 2m0s |
    | hairpinMode: promiscuous-bridge | How the kubelet sets up hairpin NAT. This lets a Service endpoint reach itself through its own Service. If the network is not correctly configured for "hairpin" traffic, which typically happens when kube-proxy runs in iptables mode and Pods use bridge networking, endpoints cannot reach themselves. The hairpin-mode flag lets Service endpoints load-balance back to themselves when they access their own Service VIP; it must be set to hairpin-veth or promiscuous-bridge |
    | maxPods | Number of pods the kubelet can run, default 110 |
    | podCIDR | CIDR range used by pods |
    | podPidsLimit: -1 | PID limit per pod |
    | resolvConf | Resolver configuration file used as the basis for container DNS resolution, default "/etc/resolv.conf" |
    | maxOpenFiles: 1000000 | Number of files the kubelet process may open, default 1000000 |
    | kubeAPIQPS | QPS when talking to kube-apiserver, default 15 |
    | kubeAPIBurst | Burst when talking to kube-apiserver, default 10 |
    | serializeImagePulls: false | Disable pulling only one image at a time |
    | evictionHard: | Set of thresholds that trigger a hard pod eviction when reached |
    | memory.available: "100Mi" | evict when available memory falls below 100Mi |
    | nodefs.available: "10%" | |
    | nodefs.inodesFree: "5%" | |
    | imagefs.available: "15%" | |
    | evictionSoft: {} | Set of thresholds that trigger eviction after a grace period |
    | enableControllerAttachDetach: true | Let the attach/detach controller manage attach/detach of volumes scheduled to this node and disable attach/detach operations by the kubelet itself, default true |
    | failSwapOn: true | Fail kubelet startup if swap is enabled on the node |
    | containerLogMaxSize: 20Mi | Maximum size of a container log file |
    | containerLogMaxFiles: 10 | Maximum number of container log files |
    | systemReserved: {} | Resource reservation for OS system daemons (see https://kubernetes.io/zh/docs/tasks/administer-cluster/reserve-compute-resources/); defaults are kept here |
    | kubeReserved: {} | Resource reservation for kubernetes system daemons; defaults are kept here |
    | systemReservedCgroup: "" | To optionally enforce system-reserved on OS system daemons, set the kubelet --system-reserved-cgroup flag to the parent control group of the OS system daemons; defaults are kept here |
    | kubeReservedCgroup: "" | To optionally enforce kube-reserved on kube daemons, set the kubelet --kube-reserved-cgroup flag to the parent control group of the kube daemons; defaults are kept here |
    | enforceNodeAllocatable: ["pods"] | Whenever the total usage of all pods exceeds Allocatable, pod eviction is enforced |

    The kubelet uses an active polling mechanism: it periodically queries kube-apiserver for the tasks assigned to its node and, when a task has been assigned to it (such as creating a Pod), it carries that task out.

    The kubelet exposes two ports: 10248, an HTTP healthz service, and 10250, the HTTPS service. There is also a read-only port 10255, which is disabled here.

    ### Generate the kube-proxy service files

    This section installs kube-proxy for the binary Kubernetes v1.17.3 cluster. To explain what kube-proxy is, we first need the Service: a Service is an abstraction over a set of Pods, acting as a load balancer that distributes requests to the corresponding Pods. kube-proxy is what implements Services: when a request reaches a Service, kube-proxy forwards it to a backend Pod selected through labels. kube-proxy offers three load-balancing modes: userspace, iptables, and ipvs; their differences are widely documented elsewhere and are not covered here. This document uses ipvs.

    kube-proxy must run on all nodes (because the master nodes also run Pods here; if they did not, it could be deployed only on the non-master nodes). It watches kube-apiserver for changes to Services and Endpoints and, according to the configured mode, creates routing rules and provides the Service IP (headless Services have no IP) and load balancing. Note: ipvsadm and ipset must be installed and the ip_vs kernel modules loaded on all nodes; this was already done in the preparation chapter.

    Operations on the master node:

    ```
    cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
    kind: KubeProxyConfiguration
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    clientConnection:
      burst: 200
      kubeconfig: "/var/lib/kube-proxy/kubeconfig"
      qps: 100
    bindAddress: $VIP
    healthzBindAddress: $VIP:10256
    metricsBindAddress: $VIP:10249
    enableProfiling: true
    clusterCIDR: 10.244.0.0/16
    mode: "ipvs"
    portRange: ""
    kubeProxyIPTablesConfiguration:
      masqueradeAll: false
    kubeProxyIPVSConfiguration:
      scheduler: rr
      excludeCIDRs: []
    EOF


    for host in k8s-master2 k8s-master3;do
    echo "---$host---"
    scp /var/lib/kube-proxy/kube-proxy-config.yaml $host:/var/lib/kube-proxy/
    done


    # configuration for the k8s-node1 node
    cat << EOF | sudo tee k8s-node1.yaml
    kind: KubeProxyConfiguration
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    clientConnection:
      burst: 200
      kubeconfig: "/var/lib/kube-proxy/kubeconfig"
      qps: 100
    bindAddress: $NODE1_IP
    healthzBindAddress: $NODE1_IP:10256
    metricsBindAddress: $NODE1_IP:10249
    enableProfiling: true
    clusterCIDR: 10.244.0.0/16
    mode: "ipvs"
    portRange: ""
    kubeProxyIPTablesConfiguration:
      masqueradeAll: false
    kubeProxyIPVSConfiguration:
      scheduler: rr
      excludeCIDRs: []
    EOF


    scp k8s-node1.yaml k8s-node1:/var/lib/kube-proxy/kube-proxy-config.yaml
    rm -rf k8s-node1.yaml
    ```

    **Parameter reference**

    | Parameter | Description |
    | ------------------------------- | ------------------------------------------------------------ |
    | clientConnection | Settings for talking to kube-apiserver |
    | burst: 200 | Allows requests to temporarily exceed the qps setting |
    | kubeconfig | Path to the kubeconfig file kube-proxy uses to connect to kube-apiserver |
    | qps: 100 | QPS when talking to kube-apiserver, default 5 |
    | bindAddress | Address kube-proxy listens on |
    | healthzBindAddress | IP address and port of the health check service, default 0.0.0.0:10256 |
    | metricsBindAddress | IP address and port of the metrics service, default 127.0.0.1:10249 |
    | enableProfiling | If true, enable profiling via the web interface at /debug/pprof |
    | clusterCIDR | kube-proxy uses --cluster-cidr to distinguish traffic inside and outside the cluster; SNAT for requests to Service IPs is only performed when --cluster-cidr or --masquerade-all is set |
    | hostnameOverride | Must match the kubelet's value, otherwise kube-proxy will not find the Node after starting and will not create any ipvs rules |
    | mode | Use ipvs mode |
    | portRange | Range of host ports (beginPort-endPort, a single port, or beginPort+offset) that may be used for proxying service traffic; if unspecified, 0, or 0-0, ports are chosen randomly |
    | kubeProxyIPTablesConfiguration: | |
    | masqueradeAll: false | If using the pure iptables proxy, SNAT all traffic sent via the Service cluster IP |
    | kubeProxyIPVSConfiguration: | |
    | scheduler: rr | ipvs scheduling algorithm when the proxy mode is ipvs |
    | excludeCIDRs: [] | |

    ```
    cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
    [Unit]
    Description=Kubernetes Kube Proxy
    Documentation=https://github.com/kubernetes/kubernetes

    [Service]
    ExecStart=/usr/local/bin/kube-proxy \
    --config=/var/lib/kube-proxy/kube-proxy-config.yaml \
    --logtostderr=true \
    --v=2
    Restart=on-failure
    RestartSec=5

    [Install]
    WantedBy=multi-user.target
    EOF
    ```

    Copy the unit file to the other nodes:

    ```
    for host in k8s-master2 k8s-master3 k8s-node1; do
    echo "---$host---"
    scp /etc/systemd/system/kube-proxy.service $host:/etc/systemd/system/
    done
    ```

    Start the worker services

    ```
    for host in k8s-master1 k8s-master2 k8s-master3 k8s-node1; do
    echo "---$host---"
    ssh $host "systemctl daemon-reload"
    ssh $host "systemctl enable --now kubelet kube-proxy"
    done
    ```

    ```
    systemctl status kubelet
    systemctl status kube-proxy
    ```
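
    Because kube-proxy runs in ipvs mode here, the generated virtual servers can be inspected with ipvsadm (installed during the preparation chapter):

    ```
    ipvsadm -Ln
    # the kubernetes service IP 10.250.0.1:443 should appear with the three apiserver endpoints
    ```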

    Once the kubelet on each node has started, the nodes join the cluster automatically.

    List the cluster nodes:

    ```
    kubectl get nodes
    ```

    Expected output

    ```
    NAME STATUS ROLES AGE VERSION
    k8s-master1 NotReady <none> 9s v1.17.3
    k8s-master2 NotReady <none> 9s v1.17.3
    k8s-master3 NotReady <none> 39m v1.17.3
    k8s-node1 NotReady <none> 39m v1.17.3
    ```

    All nodes have successfully joined the cluster.

    The status is NotReady because no network plugin has been configured yet; once the network plugin is deployed, the nodes become Ready.

    ```
    for host in k8s-master1 k8s-master2 k8s-master3 k8s-node1; do
    echo "---$host---"
    ssh $host "systemctl daemon-reload"
    ssh $host "systemctl restart kubelet kube-proxy"
    done
    ```
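
    Because the kubelet configurations above set serverTLSBootstrap: true, each kubelet requests a serving certificate through the CSR API, and these requests are not approved automatically. If node metrics or kubectl logs/exec fail later, pending CSRs may need manual approval (a hedged sketch; approve selectively in production):

    ```
    kubectl get csr
    kubectl get csr -o name | xargs -r kubectl certificate approve
    ```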

    # Deploy the network addons

    ### Note: all operations in this part are performed on the master node

    This part covers deploying the network and DNS addons.

    The network plugin used here is calico, running as containers.
    The DNS addon used here is CoreDNS, also running as containers.

    ### Install the calico network plugin

    Download the calico manifest.

    For Kubernetes 1.16 and later, calico 3.9 or newer should be used.

    This document uses 3.10.

    ```
    curl https://docs.projectcalico.org/v3.10/manifests/calico.yaml -O
    ```

    Change the pod CIDR in the manifest:

    ```
    sed -i 's|192.168.0.0|10.244.0.0|' calico.yaml
    ```

    Apply the manifest:

    ```
    kubectl apply -f calico.yaml
    ```

    Check the pod status:

    ```
    kubectl get pods -n kube-system
    ```

    Expected output

    ```
    NAME READY STATUS RESTARTS AGE
    calico-kube-controllers-6d85fdfbd8-8v2hs 1/1 Running 0 7m40s
    calico-node-984fd 1/1 Running 0 7m40s
    calico-node-n5kn8 1/1 Running 0 7m40s
    calico-node-q4p7c 1/1 Running 0 7m40s
    ```

    The network plugin is installed.

    ### Install the CoreDNS addon

    First download the jq command (the deployment script provided by CoreDNS uses jq to process the generated YAML):

    ```
    wget https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64 -O /usr/bin/jq
    chmod a+x /usr/bin/jq
    ```

    Clone the GitHub repository:

    ```
    yum -y install git
    ```

    ```
    git clone https://github.com/coredns/deployment.git
    ```

    Run the deploy script:

    ```
    cd deployment/kubernetes/
    ```

    ```
    ./deploy.sh -i 10.250.0.10 | kubectl apply -f -
    ```

    The -i IP is the cluster DNS service IP defined at the beginning of this document.

    Expected output

    ```
    serviceaccount/coredns created
    clusterrole.rbac.authorization.k8s.io/system:coredns created
    clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
    configmap/coredns created
    deployment.apps/coredns created
    service/kube-dns created
    ```

    Scale CoreDNS to 3 replicas and check the pods:

    ```
    kubectl scale -n kube-system deployment coredns --replicas=3
    kubectl get pods -n kube-system
    ```

    Expected output

    ```
    NAME READY STATUS RESTARTS AGE
    calico-kube-controllers-6d85fdfbd8-8v2hs 1/1 Running 0 91m
    calico-node-984fd 1/1 Running 0 91m
    calico-node-n5kn8 1/1 Running 0 91m
    calico-node-q4p7c 1/1 Running 0 91m
    coredns-68567cdb47-6j7bb 1/1 Running 0 5m31s
    coredns-68567cdb47-7nwcg 1/1 Running 0 5m31s
    ```

    Both the calico network and the CoreDNS addon are running.

    ### Verification

    Exec into a pod and ping baidu.com.

    If the ping succeeds, the network and DNS addons are working.
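
    An equivalent check without entering an existing pod is to start a throw-away busybox pod and resolve the kubernetes service through CoreDNS (a hedged sketch):

    ```
    kubectl run dns-test --image=busybox:1.28 --restart=Never -it --rm -- nslookup kubernetes.default
    # expected: the name resolves to the kubernetes service IP 10.250.0.1
    ```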

    ### Install the Ingress-nginx addon

    YAML [file](https://github.com/mytting/kubernetes/blob/master/B-kubernetes%E5%9F%BA%E7%A1%80/yaml/nginx-ds.yaml)

    Open the link above, copy the full contents of the YAML file and save it on the master node as nginx-ds.yaml.

    Then apply it:

    ```
    kubectl apply -f nginx-ds.yaml
    ```

    Expected output

    ```
    namespace/ingress-nginx created
    configmap/nginx-configuration created
    configmap/tcp-services created
    configmap/udp-services created
    serviceaccount/nginx-ingress-serviceaccount created
    clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
    role.rbac.authorization.k8s.io/nginx-ingress-role created
    rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
    clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
    daemonset.apps/nginx-ingress-controller created
    ```

    Then check the Pods:

    ```
    kubectl get pods -n ingress-nginx -o wide
    ```

    All of them should show Running:

    ```
    NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
    nginx-ingress-controller-8xzrd 1/1 Running 0 2m13s 200.200.100.73 k8s-node <none> <none>
    nginx-ingress-controller-c84dv 1/1 Running 0 2m13s 200.200.100.74 k8s-node1 <none> <none>
    nginx-ingress-controller-qlpn5 1/1 Running 0 2m13s 200.200.100.71 master <none> <none>
    ```
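
    The pod IPs above match the node IPs, which suggests the controller runs with host networking. A minimal hedged smoke test is to send a request to any node on port 80; with no Ingress rules defined yet, the default backend should answer with 404:

    ```
    curl -i http://200.200.100.74/ -H 'Host: test.example.com'
    # HTTP/1.1 404 Not Found
    ```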

    The Ingress-nginx addon is installed.

    # Cluster deployment is complete
