  • Installing a K8S cluster from binary packages, versions etcd-v3.3.10, flannel-v0.11.0, kubernetes-server-linux-amd64

    The officially provided Kubernetes deployment methods

    • minikube

    Minikube is a tool that quickly runs a single-node Kubernetes locally, for users trying out Kubernetes or doing day-to-day development. It cannot be used in production.

    Official docs: https://kubernetes.io/docs/setup/minikube/

    • kubeadm

    Kubeadm is also a tool; it provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster.

    Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

    • Binary packages

    Download the release binary packages from the official site and deploy each component by hand to assemble a Kubernetes cluster.

    Summary:
    For a production Kubernetes cluster, only kubeadm and binary packages are realistic options. Kubeadm lowers the barrier to entry but hides a lot of detail, which makes problems hard to troubleshoot. Here we deploy the cluster from binary packages, and that is also the approach I recommend: manual deployment is more work, but you learn how the pieces fit together, which pays off in later maintenance.

    Environment

    Software environment
    Software            Version
    Operating system    CentOS 7.6_x64
    Docker              18-ce
    Kubernetes          1.12

    Server roles
    Role    IP              Components
    master 192.168.75.64 kube-apiserver,kube-controller-manager,kube-scheduler,etcd
    node1 192.168.75.65 kubelet,kube-proxy,docker,flannel,etcd
    node2 192.168.75.66 kubelet,kube-proxy,docker,flannel,etcd

    Initialization:
    Disable SELinux
    Disable the firewall
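
    On CentOS 7, for example, both can be done with the following (a minimal sketch):

    systemctl stop firewalld && systemctl disable firewalld
    setenforce 0
    sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config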

    Component        Certificates used
    etcd ca.pem,server.pem,server-key.pem
    flannel ca.pem,server.pem,server-key.pem
    kube-apiserver ca.pem,server.pem,server-key.pem
    kubelet ca.pem,ca-key.pem
    kube-proxy ca.pem,kube-proxy.pem,kube-proxy-key.pem
    kubectl ca.pem,admin.pem,admin-key.pem

    Caveat:
    Keep the clocks of the three hosts synchronized as closely as possible; otherwise messages like the following appear in the logs:

    Nov  1 09:13:42 bogon etcd: the clock difference against peer e4ba0635cb718aa3 is too high [1.321146676s > 1s]
    Nov  1 09:13:42 bogon etcd: the clock difference against peer e4ba0635cb718aa3 is too high [1.316524004s > 1s]
    Nov  1 09:13:57 bogon etcd: the clock difference against peer a3174a13e9f88ee8 is too high [1.139050363s > 1s]
    Nov  1 09:13:57 bogon etcd: the clock difference against peer a3174a13e9f88ee8 is too high [1.143273312s > 1s]
    

    Have all three hosts sync against a public time server, or designate one of them as the time server and let the other two sync from it, e.g.:

    time.windows.com
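
    A minimal sketch using ntpdate against that server (assumes the ntpdate package is available from your repos):

    yum install -y ntpdate
    ntpdate time.windows.com
    # optionally re-sync every 5 minutes via cron
    echo '*/5 * * * * /usr/sbin/ntpdate time.windows.com' >> /var/spool/cron/root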

    One more note:
    flannel v0.11 does not support the etcd v3 API.

    Deploying the Etcd Cluster

    etcd must be deployed on all three hosts.

    1. Use cfssl to generate self-signed certificates. First download the cfssl tools:

    Use the shell script cfssl.sh,

    or run the following commands manually:

    wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
    wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
    wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
    chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
    mv cfssl_linux-amd64 /usr/local/bin/cfssl
    mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
    mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
    

    These three binaries are also saved in the archive cfssl证书生成命令.7z.

    2. Generate the certificates

    Use the shell script etcd-cert.sh,

    or run the following commands manually.

    Create the following three files:

    # cat ca-config.json
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "www": {
             "expiry": "87600h",
             "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"
            ]
          }
        }
      }
    }
    
    # cat ca-csr.json
    {
        "CN": "etcd CA",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "Beijing",
                "ST": "Beijing"
            }
        ]
    }
    
    # cat server-csr.json
    # Note: adjust the hosts entries to match your environment
    {
        "CN": "etcd",
        "hosts": [
        "192.168.75.64",
        "192.168.75.65",
        "192.168.75.66"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "BeiJing",
                "ST": "BeiJing"
            }
        ]
    }
    
    
    # Generate the certificates
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
    
    # List the generated certificates
    # ls *pem
    ca-key.pem  ca.pem  server-key.pem  server.pem
    
    

    For the certificates, knowing how to generate and use them is enough; there is no need to study them in depth for now.
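
    If you do want to inspect what was issued, the cfssl-certinfo tool installed earlier can decode a certificate, for example:

    cfssl-certinfo -cert server.pem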

    3. Deploy Etcd

    Binary package download: https://github.com/coreos/etcd/releases

    The deployment steps below are identical on all three planned etcd nodes; the only differences are that the server IPs in the etcd config file must be the current node's, and ETCD_NAME must name the current node.

    # Unpack the binary package
    mkdir /opt/etcd/{bin,cfg,ssl} -p
    tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
    cp etcd-v3.3.10-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
    
    # Create the etcd config file
    
    # cat /opt/etcd/cfg/etcd
    
    #[Member]
    ETCD_NAME="etcd01"
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://192.168.75.64:2380"
    ETCD_LISTEN_CLIENT_URLS="https://192.168.75.64:2379"
    
    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.75.64:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://192.168.75.64:2379"
    ETCD_INITIAL_CLUSTER="etcd01=https://192.168.75.64:2380,etcd02=https://192.168.75.65:2380,etcd03=https://192.168.75.66:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
    
    #[Security]
    ETCD_CERT_FILE="/opt/etcd/ssl/server.pem"
    ETCD_KEY_FILE="/opt/etcd/ssl/server-key.pem"
    ETCD_TRUSTED_CA_FILE="/opt/etcd/ssl/ca.pem"
    ETCD_CLIENT_CERT_AUTH="true"
    ETCD_PEER_CERT_FILE="/opt/etcd/ssl/server.pem"
    ETCD_PEER_KEY_FILE="/opt/etcd/ssl/server-key.pem"
    ETCD_PEER_TRUSTED_CA_FILE="/opt/etcd/ssl/ca.pem"
    ETCD_PEER_CLIENT_CERT_AUTH="true"
    
    

    etcd config file reference:

    • ETCD_NAME: node name
    • ETCD_DATA_DIR: data directory
    • ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
    • ETCD_LISTEN_CLIENT_URLS: client listen address
    • ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
    • ETCD_ADVERTISE_CLIENT_URLS: client address advertised to the cluster
    • ETCD_INITIAL_CLUSTER: addresses of the cluster members
    • ETCD_INITIAL_CLUSTER_TOKEN: cluster token
    • ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; new for a new cluster, existing to join an existing one

    # Manage etcd with systemd
    
    # cat /usr/lib/systemd/system/etcd.service 
    [Unit]
    Description=Etcd Server
    After=network.target
    After=network-online.target
    Wants=network-online.target
    
    [Service]
    Type=notify
    EnvironmentFile=/opt/etcd/cfg/etcd
    ExecStart=/opt/etcd/bin/etcd
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    
    # Copy the certificates generated earlier to the paths referenced in the config
    cp ca.pem server*pem /opt/etcd/ssl
    
    # Start etcd and enable it at boot:
    systemctl start etcd
    systemctl enable etcd
    
    # Set up passwordless SSH to the other two nodes so files can be copied over
    ssh-keygen -t rsa
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.75.65
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.75.66
    
    # Copy everything to the other nodes
    
    # The config file must then be adjusted on each node
    scp -r /opt/etcd/ root@192.168.75.65:/opt/
    scp -r /opt/etcd/ root@192.168.75.66:/opt/
    
    
    scp /usr/lib/systemd/system/etcd.service root@192.168.75.65:/usr/lib/systemd/system/
    scp /usr/lib/systemd/system/etcd.service root@192.168.75.66:/usr/lib/systemd/system/
    
    # Start
    systemctl daemon-reload
    systemctl start etcd
    systemctl enable etcd
    
    # The three config files, for reference
    
    # 192.168.75.64
    #[Member]
    ETCD_NAME="etcd01"
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://192.168.75.64:2380"
    ETCD_LISTEN_CLIENT_URLS="https://192.168.75.64:2379"
    
    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.75.64:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://192.168.75.64:2379"
    ETCD_INITIAL_CLUSTER="etcd01=https://192.168.75.64:2380,etcd02=https://192.168.75.65:2380,etcd03=https://192.168.75.66:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
    
    #[Security]
    ETCD_CERT_FILE="/opt/etcd/ssl/server.pem"
    ETCD_KEY_FILE="/opt/etcd/ssl/server-key.pem"
    ETCD_TRUSTED_CA_FILE="/opt/etcd/ssl/ca.pem"
    ETCD_CLIENT_CERT_AUTH="true"
    ETCD_PEER_CERT_FILE="/opt/etcd/ssl/server.pem"
    ETCD_PEER_KEY_FILE="/opt/etcd/ssl/server-key.pem"
    ETCD_PEER_TRUSTED_CA_FILE="/opt/etcd/ssl/ca.pem"
    ETCD_PEER_CLIENT_CERT_AUTH="true"
    
    # 192.168.75.65
    #[Member]
    ETCD_NAME="etcd02"
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://192.168.75.65:2380"
    ETCD_LISTEN_CLIENT_URLS="https://192.168.75.65:2379"
    
    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.75.65:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://192.168.75.65:2379"
    ETCD_INITIAL_CLUSTER="etcd01=https://192.168.75.64:2380,etcd02=https://192.168.75.65:2380,etcd03=https://192.168.75.66:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
    
    #[Security]
    ETCD_CERT_FILE="/opt/etcd/ssl/server.pem"
    ETCD_KEY_FILE="/opt/etcd/ssl/server-key.pem"
    ETCD_TRUSTED_CA_FILE="/opt/etcd/ssl/ca.pem"
    ETCD_CLIENT_CERT_AUTH="true"
    ETCD_PEER_CERT_FILE="/opt/etcd/ssl/server.pem"
    ETCD_PEER_KEY_FILE="/opt/etcd/ssl/server-key.pem"
    ETCD_PEER_TRUSTED_CA_FILE="/opt/etcd/ssl/ca.pem"
    ETCD_PEER_CLIENT_CERT_AUTH="true"
    
    # 192.168.75.66
    #[Member]
    ETCD_NAME="etcd03"
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://192.168.75.66:2380"
    ETCD_LISTEN_CLIENT_URLS="https://192.168.75.66:2379"
    
    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.75.66:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://192.168.75.66:2379"
    ETCD_INITIAL_CLUSTER="etcd01=https://192.168.75.64:2380,etcd02=https://192.168.75.65:2380,etcd03=https://192.168.75.66:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
    
    #[Security]
    ETCD_CERT_FILE="/opt/etcd/ssl/server.pem"
    ETCD_KEY_FILE="/opt/etcd/ssl/server-key.pem"
    ETCD_TRUSTED_CA_FILE="/opt/etcd/ssl/ca.pem"
    ETCD_CLIENT_CERT_AUTH="true"
    ETCD_PEER_CERT_FILE="/opt/etcd/ssl/server.pem"
    ETCD_PEER_KEY_FILE="/opt/etcd/ssl/server-key.pem"
    ETCD_PEER_TRUSTED_CA_FILE="/opt/etcd/ssl/ca.pem"
    ETCD_PEER_CLIENT_CERT_AUTH="true"
    
    # Once all nodes are deployed, check the etcd cluster health
    
    /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.75.64:2379,https://192.168.75.65:2379,https://192.168.75.66:2379" cluster-health
    
    
    member a3174a13e9f88ee8 is healthy: got healthy result from https://192.168.75.65:2379
    member d6f32b054860cf2b is healthy: got healthy result from https://192.168.75.64:2379
    member e4ba0635cb718aa3 is healthy: got healthy result from https://192.168.75.66:2379
    cluster is healthy
    
    # If etcdctl complains about unknown flags, run /opt/etcd/bin/etcdctl --help to check them
    # Different etcd versions may take different flags
    

    If you see the output above, the cluster was deployed successfully. If there is a problem, check the logs first: /var/log/messages or journalctl -u etcd

    Installing Docker on the Nodes

    Deploy Docker on the node1 and node2 hosts.

    wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
    yum install -y yum-utils device-mapper-persistent-data lvm2
    # K8S does not support the latest Docker release, so pin the Docker version
    yum -y install docker-ce-18.06.1.ce-3.el7
    
    systemctl start docker && systemctl enable docker
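    
    # Optionally verify the pinned version took effect (should report 18.06.1-ce):
    docker version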
    
    

    The following step can be run on any one of the hosts; its purpose is to write data into the etcd cluster.
    (The etcdctl v3.4.3 commands would return different results.)

    # Flannel stores its own subnet information in etcd, so make sure etcd is reachable, then write the predefined subnet
    /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.75.64:2379,https://192.168.75.65:2379,https://192.168.75.66:2379" set /coreos.com/network/config  '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
    
    {"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}
    
    # Read it back
    /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.75.64:2379,https://192.168.75.65:2379,https://192.168.75.66:2379" get /coreos.com/network/config  
    
    {"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}
    
    # Delete (if ever needed)
    /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.75.64:2379,https://192.168.75.65:2379,https://192.168.75.66:2379" del /coreos.com/network/config    
    
    

    Run the following deployment steps on every planned node.

    wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
    tar zxvf flannel-v0.11.0-linux-amd64.tar.gz
    mkdir -p /opt/flannel/{bin,cfg}
    cp flanneld mk-docker-opts.sh /opt/flannel/bin
    
    

    Use the script flannel.sh:
    Usage: bash flannel.sh https://192.168.75.64:2379,https://192.168.75.65:2379,https://192.168.75.66:2379

    Or perform the steps manually as follows.

    # Configure Flannel
    
    # cat /opt/flannel/cfg/flanneld
    FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.75.64:2379,https://192.168.75.65:2379,https://192.168.75.66:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
    
    # Manage Flannel with systemd
    
    # cat /usr/lib/systemd/system/flanneld.service 
    [Unit]
    Description=Flanneld overlay address etcd agent
    After=network-online.target network.target
    Before=docker.service
    
    [Service]
    Type=notify
    EnvironmentFile=/opt/flannel/cfg/flanneld
    ExecStart=/opt/flannel/bin/flanneld --ip-masq $FLANNEL_OPTIONS
    ExecStartPost=/opt/flannel/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
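    
    # For reference, after flanneld starts, mk-docker-opts.sh writes /run/flannel/subnet.env,
    # which on this setup would contain a line like (exact values vary per node):
    # DOCKER_NETWORK_OPTIONS=" --bip=172.17.69.1/24 --ip-masq=false --mtu=1450"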
    
    # Modify the docker.service file as follows:

    # Configure Docker to start on the flannel-assigned subnet
    
    # cat /usr/lib/systemd/system/docker.service
    [Unit]
    Description=Docker Application Container Engine
    Documentation=https://docs.docker.com
    After=network-online.target firewalld.service
    Wants=network-online.target
    
    [Service]
    Type=notify
    EnvironmentFile=/run/flannel/subnet.env
    ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
    ExecReload=/bin/kill -s HUP $MAINPID
    LimitNOFILE=infinity
    LimitNPROC=infinity
    LimitCORE=infinity
    TimeoutStartSec=0
    Delegate=yes
    KillMode=process
    Restart=on-failure
    StartLimitBurst=3
    StartLimitInterval=60s
    
    [Install]
    WantedBy=multi-user.target
    
    
    # Restart flannel and docker
    
    systemctl daemon-reload
    systemctl start flanneld
    systemctl enable flanneld
    
    systemctl restart docker
    
    # Verify it took effect
    
    # Make sure docker0 and flannel.1 are in the same subnet
    
    # ps -ef | grep docker
    root       6879      1  0 14:14 ?        00:00:01 /usr/bin/dockerd --bip=172.17.69.1/24 --ip-masq=false --mtu=1450
    
    # ip addr
    3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
        link/ether 02:42:77:67:ce:78 brd ff:ff:ff:ff:ff:ff
        inet 172.17.69.1/24 brd 172.17.69.255 scope global docker0
           valid_lft forever preferred_lft forever
    4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default 
        link/ether 52:96:0d:2d:ab:08 brd ff:ff:ff:ff:ff:ff
        inet 172.17.69.0/32 scope global flannel.1
           valid_lft forever preferred_lft forever
        inet6 fe80::5096:dff:fe2d:ab08/64 scope link 
           valid_lft forever preferred_lft forever
    

    Test connectivity between the different nodes:

    • node to container
    • container to node
    • container to container

    # From node1, ping the local docker IP
    # ping -c 3 172.17.69.1
    PING 172.17.69.1 (172.17.69.1) 56(84) bytes of data.
    64 bytes from 172.17.69.1: icmp_seq=1 ttl=64 time=0.055 ms
    64 bytes from 172.17.69.1: icmp_seq=2 ttl=64 time=0.030 ms
    64 bytes from 172.17.69.1: icmp_seq=3 ttl=64 time=0.034 ms
    
    --- 172.17.69.1 ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2000ms
    rtt min/avg/max/mdev = 0.030/0.039/0.055/0.013 ms
    
    # From inside a Docker container, ping node1's host IP
    # Pull a minimal image, busybox, to test with
    # docker run -it busybox 
    Unable to find image 'busybox:latest' locally
    latest: Pulling from library/busybox
    0f8c40e1270f: Pull complete 
    Digest: sha256:1303dbf110c57f3edf68d9f5a16c082ec06c4cf7604831669faf2c712260b5a0
    Status: Downloaded newer image for busybox:latest
    / # ip addr # check the container's IP (172.17.69.2)
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
    5: eth0@if6: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue 
        link/ether 02:42:ac:11:45:02 brd ff:ff:ff:ff:ff:ff
        inet 172.17.69.2/24 brd 172.17.69.255 scope global eth0
           valid_lft forever preferred_lft forever
    / # ping 192.168.75.65 -c 3 # ping the host's IP
    PING 192.168.75.65 (192.168.75.65): 56 data bytes
    64 bytes from 192.168.75.65: seq=0 ttl=64 time=0.168 ms
    64 bytes from 192.168.75.65: seq=1 ttl=64 time=0.056 ms
    64 bytes from 192.168.75.65: seq=2 ttl=64 time=0.063 ms
    
    --- 192.168.75.65 ping statistics ---
    3 packets transmitted, 3 packets received, 0% packet loss
    round-trip min/avg/max = 0.056/0.095/0.168 ms
    / # ping -c 3 192.168.75.66 # ping node2's IP
    PING 192.168.75.66 (192.168.75.66): 56 data bytes
    64 bytes from 192.168.75.66: seq=0 ttl=63 time=0.609 ms
    64 bytes from 192.168.75.66: seq=1 ttl=63 time=0.434 ms
    64 bytes from 192.168.75.66: seq=2 ttl=63 time=0.315 ms
    
    --- 192.168.75.66 ping statistics ---
    3 packets transmitted, 3 packets received, 0% packet loss
    round-trip min/avg/max = 0.315/0.452/0.609 ms
    / # 
    

    Deploying Components on the Master Node

    Before deploying Kubernetes, make sure etcd, flannel, and docker are all working properly; if not, fix those problems before continuing.

    Use the script k8s-cert.sh,

    or generate the certificates with the following commands.

    # Generate the certificates
    
    # Create the CA certificate
    # cat ca-config.json
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "kubernetes": {
             "expiry": "87600h",
             "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"
            ]
          }
        }
      }
    }
    
    # cat ca-csr.json
    {
        "CN": "kubernetes",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "Beijing",
                "ST": "Beijing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }
    
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    
    # Generate the apiserver certificate
    
    # cat server-csr.json
    {
        "CN": "kubernetes",
        "hosts": [
          "10.0.0.1",
          "127.0.0.1",
          "192.168.75.64",
          "kubernetes",
          "kubernetes.default",
          "kubernetes.default.svc",
          "kubernetes.default.svc.cluster",
          "kubernetes.default.svc.cluster.local"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "BeiJing",
                "ST": "BeiJing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }
    
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
    
    # Generate the kube-proxy certificate
    
    # cat kube-proxy-csr.json
    {
      "CN": "system:kube-proxy",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "BeiJing",
          "ST": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }
    
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
    
    # The following certificate files are produced in the end
    
    # ls *.pem
    ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem
    

    Deploying the apiserver Component

    
    # Download the binary package from: https://github.com/kubernetes/kubernetes/releases
    # The kubernetes-server-linux-amd64.tar.gz package alone is enough; it contains all required components.
    
    mkdir /opt/kubernetes/{bin,cfg,ssl} -p
    tar zxvf kubernetes-server-linux-amd64.tar.gz
    cd kubernetes/server/bin
    cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin
    
    # Create the token file
    
    cat /opt/kubernetes/cfg/token.csv
    674c457d4dcf2eefe4920d7dbb6b0ddc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
    
    # Column 1: random string, generate your own
    # Column 2: user name
    # Column 3: UID
    # Column 4: user group
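    
    # A fresh random token can be generated, for example, with:
    head -c 16 /dev/urandom | od -An -t x | tr -d ' '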
    
    # Create the apiserver config file
    # Point it at the certificates generated earlier and make sure it can reach etcd
    cat /opt/kubernetes/cfg/kube-apiserver
    KUBE_APISERVER_OPTS="--logtostderr=true \
    --v=4 \
    --etcd-servers=https://192.168.75.64:2379,https://192.168.75.65:2379,https://192.168.75.66:2379 \
    --bind-address=192.168.75.64 \
    --secure-port=6443 \
    --advertise-address=192.168.75.64 \
    --allow-privileged=true \
    --service-cluster-ip-range=10.0.0.0/24 \
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
    --authorization-mode=RBAC,Node \
    --enable-bootstrap-token-auth \
    --token-auth-file=/opt/kubernetes/cfg/token.csv \
    --service-node-port-range=30000-50000 \
    --tls-cert-file=/opt/kubernetes/ssl/server.pem \
    --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
    --client-ca-file=/opt/kubernetes/ssl/ca.pem \
    --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
    --etcd-cafile=/opt/etcd/ssl/ca.pem \
    --etcd-certfile=/opt/etcd/ssl/server.pem \
    --etcd-keyfile=/opt/etcd/ssl/server-key.pem"
    
    

    Parameter notes:

    • --logtostderr: log to standard error
    • --v: log level
    • --etcd-servers: etcd cluster endpoints
    • --bind-address: listen address
    • --secure-port: https secure port
    • --advertise-address: address advertised to the cluster
    • --allow-privileged: allow privileged containers
    • --service-cluster-ip-range: Service virtual IP range
    • --enable-admission-plugins: admission control plugins
    • --authorization-mode: authorization modes; enables RBAC authorization and Node self-management
    • --enable-bootstrap-token-auth: enable TLS bootstrapping, covered below
    • --token-auth-file: token file
    • --service-node-port-range: port range allocated for NodePort Services

    # Manage apiserver with systemd
    
    # cat /usr/lib/systemd/system/kube-apiserver.service
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
    ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    
    # Start
    systemctl daemon-reload
    systemctl enable kube-apiserver
    systemctl start kube-apiserver
    
    

    Deploying the scheduler Component

    # Create the scheduler config file
    
    # cat /opt/kubernetes/cfg/kube-scheduler
    KUBE_SCHEDULER_OPTS="--logtostderr=true \
    --v=4 \
    --master=127.0.0.1:8080 \
    --leader-elect"
    

    Parameter notes:

    • --master: connect to the local apiserver
    • --leader-elect: when several instances of this component run, elect a leader automatically (HA)

    # Manage the scheduler with systemd
    # cat /usr/lib/systemd/system/kube-scheduler.service
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
    ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    
    # Start
    systemctl daemon-reload
    systemctl enable kube-scheduler
    systemctl start kube-scheduler
    

    Deploying the controller-manager Component

    # Create the controller-manager config file
    
    # cat /opt/kubernetes/cfg/kube-controller-manager
    KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
    --v=4 \
    --master=127.0.0.1:8080 \
    --leader-elect=true \
    --address=127.0.0.1 \
    --service-cluster-ip-range=10.0.0.0/24 \
    --cluster-name=kubernetes \
    --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
    --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
    --root-ca-file=/opt/kubernetes/ssl/ca.pem \
    --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"
    
    # Manage controller-manager with systemd
    
    # cat /usr/lib/systemd/system/kube-controller-manager.service
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
    ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    
    
    # Start
    systemctl daemon-reload
    systemctl enable kube-controller-manager
    systemctl start kube-controller-manager
    

    With all components started successfully, check the current cluster component status with kubectl:

    # /opt/kubernetes/bin/kubectl get cs
    NAME                 STATUS    MESSAGE             ERROR
    controller-manager   Healthy   ok                  
    scheduler            Healthy   ok                  
    etcd-2               Healthy   {"health":"true"}   
    etcd-0               Healthy   {"health":"true"}   
    etcd-1               Healthy   {"health":"true"} 
    
    # Output like the above means all components are healthy
    

    Alternatively, run the shell scripts in the master directory one by one; note that they take arguments.

    Deploying Components on the Nodes

    Once the Master apiserver has TLS authentication enabled, a Node's kubelet must present a valid CA-signed certificate to communicate with the apiserver. Signing certificates by hand becomes tedious when there are many Nodes, hence the TLS Bootstrapping mechanism: the kubelet automatically requests a certificate from the apiserver as a low-privilege user, and the apiserver signs the kubelet's certificate dynamically.

    # Bind the kubelet-bootstrap user to the system cluster role
    
    /opt/kubernetes/bin/kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
    
    # Result
    clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
    
    # Create the kubeconfig files
    
    # Run the following in the directory where the kubernetes certificates were generated
    # Create the kubelet bootstrapping kubeconfig
    cd /opt/k8s_2
    # Set the following two variables
    BOOTSTRAP_TOKEN=674c457d4dcf2eefe4920d7dbb6b0ddc
    KUBE_APISERVER="https://192.168.75.64:6443"
    
    # Set the cluster parameters
    /opt/kubernetes/bin/kubectl config set-cluster kubernetes --certificate-authority=./ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=bootstrap.kubeconfig
    # Result
    Cluster "kubernetes" set.
    
    # Set the client authentication parameters
    /opt/kubernetes/bin/kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=bootstrap.kubeconfig
    # Result
    User "kubelet-bootstrap" set.
    
    # Set the context parameters
    /opt/kubernetes/bin/kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
    # Result
    Context "default" created.
    
    # Set the default context
    /opt/kubernetes/bin/kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
    # Result
    Switched to context "default".
    
    # Create the kube-proxy kubeconfig file
    /opt/kubernetes/bin/kubectl config set-cluster kubernetes --certificate-authority=./ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=kube-proxy.kubeconfig
    # Result
    Cluster "kubernetes" set.
    
    /opt/kubernetes/bin/kubectl config set-credentials kube-proxy --client-certificate=./kube-proxy.pem --client-key=./kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
    # Result
    User "kube-proxy" set.
    
    /opt/kubernetes/bin/kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
    # Result
    Context "default" created.
    
    /opt/kubernetes/bin/kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
    # Result
    Switched to context "default".
    
    # ls
    bootstrap.kubeconfig  kube-proxy.kubeconfig
    # Copy these two files to /opt/kubernetes/cfg on the Node hosts
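    # e.g., using this lab's node IPs:
    scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.75.65:/opt/kubernetes/cfg/
    scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.75.66:/opt/kubernetes/cfg/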
    

    Deploying the kubelet Component

    Copy kubelet and kube-proxy from the binary package downloaded earlier into /opt/kubernetes/bin.

    cd /opt/k8s_2/kubernetes/server/bin/
    cp kubelet kube-proxy /opt/kubernetes/bin/
    
    # Create the kubelet config file
    # cat /opt/kubernetes/cfg/kubelet
    KUBELET_OPTS="--logtostderr=true \
    --v=4 \
    --hostname-override=192.168.75.65 \
    --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
    --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
    --config=/opt/kubernetes/cfg/kubelet.config \
    --cert-dir=/opt/kubernetes/ssl \
    --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
    
    
    # The /opt/kubernetes/cfg/kubelet.config file referenced above:
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    address: 192.168.75.65
    port: 10250
    readOnlyPort: 10255
    cgroupDriver: cgroupfs
    clusterDNS: ["10.0.0.2"]
    clusterDomain: cluster.local.
    failSwapOn: false
    authentication:
      anonymous:
        enabled: true
    
    
    # Manage kubelet with systemd
    # cat /usr/lib/systemd/system/kubelet.service 
    [Unit]
    Description=Kubernetes Kubelet
    After=docker.service
    Requires=docker.service
    
    [Service]
    EnvironmentFile=/opt/kubernetes/cfg/kubelet
    ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
    Restart=on-failure
    KillMode=process
    
    [Install]
    WantedBy=multi-user.target
    
    # Start
    systemctl daemon-reload
    systemctl enable kubelet
    systemctl start kubelet
    
    

    Parameter notes:

    • --hostname-override: the hostname shown in the cluster
    • --kubeconfig: location of the kubeconfig file, which is generated automatically
    • --bootstrap-kubeconfig: the bootstrap.kubeconfig file generated earlier
    • --cert-dir: where issued certificates are stored
    • --pod-infra-container-image: image used to manage the Pod network

    Approving the Node Join Request on the Master

    # After kubelet starts, the node has not yet joined the cluster; it must be approved manually. On the Master, list the Nodes requesting certificate signing:
    [root@bogon cfg]# /opt/kubernetes/bin/kubectl get csr
    NAME                                                   AGE   REQUESTOR           CONDITION
    node-csr-5O5xP__kXZ1UaDABvbe9u90WrV1EMwEYRYYeFLtO-7w   48s   kubelet-bootstrap   Pending
    
    [root@bogon cfg]# /opt/kubernetes/bin/kubectl certificate approve node-csr-5O5xP__kXZ1UaDABvbe9u90WrV1EMwEYRYYeFLtO-7w
    certificatesigningrequest.certificates.k8s.io/node-csr-5O5xP__kXZ1UaDABvbe9u90WrV1EMwEYRYYeFLtO-7w approved
    
    [root@bogon cfg]# /opt/kubernetes/bin/kubectl get node
    NAME            STATUS   ROLES    AGE   VERSION
    192.168.75.65   Ready    <none>   12s   v1.12.1
    [root@bogon cfg]# 
    

    Deploying the kube-proxy Component

    # Create the kube-proxy config file
    # cat /opt/kubernetes/cfg/kube-proxy
    KUBE_PROXY_OPTS="--logtostderr=true \
    --v=4 \
    --hostname-override=192.168.75.65 \
    --cluster-cidr=10.0.0.0/24 \
    --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
    
    # Manage kube-proxy with systemd
    # cat /usr/lib/systemd/system/kube-proxy.service
    [Unit]
    Description=Kubernetes Proxy
    After=network.target
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
    ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    
    
    # Start
    systemctl daemon-reload
    systemctl enable kube-proxy
    systemctl start kube-proxy
    

    Node2 is deployed in exactly the same way.
    Note that the IP addresses in the config files must be changed to the current node's.
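
    A minimal sketch of that substitution on node2 (assuming the config files were copied over from node1 unchanged):

    sed -i 's/192.168.75.65/192.168.75.66/g' /opt/kubernetes/cfg/kubelet /opt/kubernetes/cfg/kubelet.config /opt/kubernetes/cfg/kube-proxy
    systemctl restart kubelet kube-proxy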

    Checking Cluster Status

    # Check from the master host
    /opt/kubernetes/bin/kubectl get node
    NAME            STATUS   ROLES    AGE     VERSION
    192.168.75.65   Ready    <none>   14m     v1.12.1
    192.168.75.66   Ready    <none>   2m54s   v1.12.1
    
    # Running this on a node host yields: The connection to the server localhost:8080 was refused - did you specify the right host or port?
    
    /opt/kubernetes/bin/kubectl get cs
    NAME                 STATUS    MESSAGE             ERROR
    scheduler            Healthy   ok                  
    etcd-1               Healthy   {"health":"true"}   
    etcd-0               Healthy   {"health":"true"}   
    controller-manager   Healthy   ok                  
    etcd-2               Healthy   {"health":"true"}  
    

    Running a Test Example

    # Create an Nginx web deployment to test that the cluster works
    /opt/kubernetes/bin/kubectl run nginx --image=nginx --replicas=3
    # Result
    kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
    deployment.apps/nginx created
    
    
    /opt/kubernetes/bin/kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
    # Result
    service/nginx exposed
    
    # Check the Pods and the Service
    /opt/kubernetes/bin/kubectl get pods
    # Result
    NAME                    READY   STATUS    RESTARTS   AGE
    nginx-dbddb74b8-4bd8v   1/1     Running   0          90s
    nginx-dbddb74b8-5kjns   1/1     Running   0          90s
    nginx-dbddb74b8-tbzhl   1/1     Running   0          90s
    
    /opt/kubernetes/bin/kubectl get svc
    # Result
    NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
    kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        100m
    nginx        NodePort    10.0.0.116   <none>        88:37027/TCP   66s
    
    # To access the Nginx deployed in the cluster, open a browser at http://192.168.75.65:37027 or http://192.168.75.66:37027
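    # or verify from the command line, e.g.:
    curl -I http://192.168.75.65:37027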
    

    Caveats

    flannel v0.11 does not work with etcd v3.4.3; it works with etcd v3.3.10.

    This is because etcd has both a v2 and a v3 API; different versions take different command flags and return different results.

    If flannel v0.11 is pointed at etcd v3.4.3, the command for writing the predefined subnet (flannel stores its subnet information in etcd, so it must be able to connect and write it) changes, and the write itself succeeds. But when flannel starts, it fails with: Couldn't fetch network config: client: response is invalid json. The endpoint is probably not valid etcd cluster endpoint.

    That is what a flannel/etcd version mismatch looks like.
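
    If you must run a newer etcd with flannel v0.11, a commonly cited workaround (an assumption to verify against your etcd release; this deployment does not use it) is to re-enable etcd's v2 API and write the subnet config through it:

    # add to the etcd config on every member, then restart etcd
    ETCD_ENABLE_V2="true"
    
    # write the subnet config through the v2 API
    ETCDCTL_API=2 /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.75.64:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'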
