  • Kubernetes Binary Deployment (Part 1): Single-Node Deployment (Master and Node on the Same Machine)

    0. Preface

    • Recently, affected by the COVID-19 outbreak, I have been staying at home with my start date postponed, working and studying from home
    • I tried to deploy a k8s environment on a single node (Master and Node on the same machine) from binaries compiled from source; the steps and scripts are organized below
    • Original reference: Kubernetes二进制部署(一)单节点部署

    1. Related Concepts

    1.1 Basic Architecture

    1.2 Core Components

    1.2.1 Master

    1.2.1.1 kube-apiserver

    • The unified entry point of the cluster and the coordinator of all components
    • Exposes its interface as a RESTful API
    • All create/update/delete and watch operations on object resources go through kube-apiserver
    • State is then persisted in the distributed storage component etcd
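    • Every client interaction is ultimately an HTTPS call against this API; for example, once the cluster built in section 2 is running, the health endpoint can be queried directly (a minimal sketch; the paths assume the admin client certificates generated by k8s-cert.sh in section 2.7):
    $ curl --cacert k8s/k8s-cert/ca.pem --cert k8s/k8s-cert/admin.pem --key k8s/k8s-cert/admin-key.pem https://127.0.0.1:6443/healthz
    ok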

    1.2.1.2 kube-controller-manager

    • Handles routine background tasks in the cluster
    • Each resource has a corresponding controller, and kube-controller-manager is responsible for managing these controllers

    1.2.1.3 kube-scheduler

    • Selects a Node for newly created Pods according to the scheduling algorithm
    • Can be deployed anywhere: on the same node as the other components or on a separate one

    1.2.1.4 etcd

    • A distributed key-value storage system
    • Holds the cluster state data, such as Pod and Service objects
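    • Once the cluster is up, the stored objects can be listed straight out of etcd (a sketch assuming the v3 API of the etcdctl installed in section 2.5; kubernetes keeps its objects under the /registry prefix):
    $ ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://127.0.0.1:2379" get /registry --prefix --keys-only | head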

    1.2.2 Node

    1.2.2.1 kubelet

    • kubelet is the Master's agent on each Node
    • Manages the lifecycle of the containers running on the local machine: creating containers, mounting Pod volumes, downloading Secrets, reporting container and node status, and so on
    • kubelet turns each Pod into a set of containers

    1.2.2.2 kube-proxy

    • Implements the Pod network proxy on each Node, maintaining network rules and layer-4 load balancing
    • For traffic leaving the host, it can resolve the target service from the request address
    • and route the data correctly; in some cases it distributes requests across multiple instances in the cluster using round-robin scheduling
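    • kube-proxy is started in ipvs mode in section 2.11, so the layer-4 rules it maintains can be inspected with ipvsadm (a sketch; assumes the ipvsadm package is available on the host):
    $ yum install -y ipvsadm
    $ ipvsadm -Ln    # lists each Service virtual IP and the Pod endpoints behind it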

    1.2.2.3 docker

    • The container engine

    2. Deployment Procedure

    2.1 Building from Source

    • Install the golang environment
    • kubernetes v1.18 requires golang 1.13
    $ wget https://dl.google.com/go/go1.13.8.linux-amd64.tar.gz
    $ tar -zxvf go1.13.8.linux-amd64.tar.gz -C /usr/local/
    • Add the following environment variables to ~/.bashrc or ~/.zshrc
    export GOROOT=/usr/local/go
     
    # GOPATH
    export GOPATH=$HOME/go
    
    # GOROOT bin
    export PATH=$PATH:$GOROOT/bin
    
    # GOPATH bin
    export PATH=$PATH:$GOPATH/bin
    • Reload the environment variables
    $ source ~/.bashrc
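    • Verify that the toolchain is now on the PATH
    $ go version
    go version go1.13.8 linux/amd64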
    • Download the latest kubernetes source from github
    $ git clone https://github.com/kubernetes/kubernetes.git
    • Build the binaries
    $ make KUBE_BUILD_PLATFORMS=linux/amd64
    +++ [0215 22:16:44] Building go targets for linux/amd64:
        ./vendor/k8s.io/code-generator/cmd/deepcopy-gen
    +++ [0215 22:16:52] Building go targets for linux/amd64:
        ./vendor/k8s.io/code-generator/cmd/defaulter-gen
    +++ [0215 22:17:00] Building go targets for linux/amd64:
        ./vendor/k8s.io/code-generator/cmd/conversion-gen
    +++ [0215 22:17:12] Building go targets for linux/amd64:
        ./vendor/k8s.io/kube-openapi/cmd/openapi-gen
    +++ [0215 22:17:25] Building go targets for linux/amd64:
        ./vendor/github.com/go-bindata/go-bindata/go-bindata
    +++ [0215 22:17:27] Building go targets for linux/amd64:
        cmd/kube-proxy
        cmd/kube-apiserver
        cmd/kube-controller-manager
        cmd/kubelet
        cmd/kubeadm
        cmd/kube-scheduler
        vendor/k8s.io/apiextensions-apiserver
        cluster/gce/gci/mounter
        cmd/kubectl
        cmd/gendocs
        cmd/genkubedocs
        cmd/genman
        cmd/genyaml
        cmd/genswaggertypedocs
        cmd/linkcheck
        vendor/github.com/onsi/ginkgo/ginkgo
        test/e2e/e2e.test
        cluster/images/conformance/go-runner
        cmd/kubemark
        vendor/github.com/onsi/ginkgo/ginkgo
    • KUBE_BUILD_PLATFORMS specifies the target platform of the generated binaries, e.g. darwin/amd64, linux/amd64 and windows/amd64
    • Running make cross builds binaries for all platforms
    • Cloud servers tend to have limited resources, so it is advisable to build locally and upload the binaries to the server
    • The generated _output directory holds the build artifacts; the core binaries are in _output/local/bin/linux/amd64
    $ pwd
    /root/Coding/kubernetes/_output/local/bin/linux/amd64
    $ ls
    apiextensions-apiserver genman                  go-runner               kube-scheduler          kubemark
    e2e.test                genswaggertypedocs      kube-apiserver          kubeadm                 linkcheck
    gendocs                 genyaml                 kube-controller-manager kubectl                 mounter
    genkubedocs             ginkgo                  kube-proxy              kubelet
    • Of these, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kube-proxy and kubelet are the binaries needed for the installation
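    • As a quick sanity check, the freshly built binaries can report their versions (the exact version string depends on the checked-out commit):
    $ cd _output/local/bin/linux/amd64
    $ ./kubectl version --client
    $ ./kube-apiserver --version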

    2.2 Installing docker

    • docker is already installed on the cloud server used here, so this deployment skips that step
    • For installation details, see the official documentation
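    • On a fresh CentOS 7 machine the steps boil down to the following (a sketch based on the docker-ce packages; repo and package choices are assumptions):
    $ yum install -y yum-utils
    $ yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    $ yum install -y docker-ce docker-ce-cli containerd.io
    $ systemctl enable --now docker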

    2.3 Downloading the Installation Scripts

    • All scripts used in the following installation steps have been uploaded to a github repository; feel free to download them
    • Create the working directory k8s and the script directory k8s/scripts, then copy all scripts from the repository into the script folder under the working directory
    $ git clone https://github.com/wangao1236/k8s_single_deploy.git
    $ chmod +x k8s_single_deploy/scripts/*.sh
    $ mkdir -p k8s/scripts
    $ cp k8s_single_deploy/scripts/* k8s/scripts

    2.4 Installing cfssl

    • Install cfssl by running the k8s/scripts/cfssl.sh script, or by executing the following commands:
    $ curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
    $ curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
    $ curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
    $ chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
    • The k8s/scripts/cfssl.sh script contains:
    $ cat k8s_single_deploy/scripts/cfssl.sh
    curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
    curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
    curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
    chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
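    • Confirm the tools are installed and executable:
    $ cfssl version
    $ which cfssljson cfssl-certinfo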

    2.5 Installing etcd

    • Create the target directories
    $ mkdir -p /opt/etcd/{cfg,bin,ssl}
    • Download the latest etcd release
    $ wget https://github.com/etcd-io/etcd/releases/download/v3.3.18/etcd-v3.3.18-linux-amd64.tar.gz
    $ tar -zxvf etcd-v3.3.18-linux-amd64.tar.gz
    $ cp etcd-v3.3.18-linux-amd64/etcdctl etcd-v3.3.18-linux-amd64/etcd /opt/etcd/bin
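    • Verify the binaries:
    $ /opt/etcd/bin/etcd --version       # should report etcd Version: 3.3.18
    $ /opt/etcd/bin/etcdctl --version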
    • Create the folder k8s/etcd-cert; k8s is the root directory for the deployment files and scripts, and etcd-cert temporarily holds the certificates for etcd's https endpoints
    $ mkdir -p k8s/etcd-cert
    • Copy the etcd-cert.sh script into the etcd-cert directory, then run it
    $ cp k8s/scripts/etcd-cert.sh k8s/etcd-cert 
    • The script contains:
    $ cat k8s/scripts/etcd-cert.sh
    cat > ca-config.json <<EOF
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "www": {
             "expiry": "87600h",
             "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"
            ]
          }
        }
      }
    }
    EOF
    
    cat > ca-csr.json <<EOF
    {
        "CN": "etcd CA",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "Beijing",
                "ST": "Beijing"
            }
        ]
    }
    EOF
    
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    
    #-----------------------
    
    cat > server-csr.json <<EOF
    {
        "CN": "etcd",
        "hosts": [
        "10.206.240.188",
        "10.206.240.189",
        "10.206.240.111"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "BeiJing",
                "ST": "BeiJing"
            }
        ]
    }
    EOF
    
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
    • Note: change the hosts list in the server-csr.json section to 127.0.0.1 and the server's IP address
    • Run the script
    $ ./etcd-cert.sh
    2020/02/16 00:49:37 [INFO] generating a new CA key and certificate from CSR
    2020/02/16 00:49:37 [INFO] generate received request
    2020/02/16 00:49:37 [INFO] received CSR
    2020/02/16 00:49:37 [INFO] generating key: rsa-2048
    2020/02/16 00:49:38 [INFO] encoded CSR
    2020/02/16 00:49:38 [INFO] signed certificate with serial number 18016478413052532961889837653710495342880481812
    2020/02/16 00:49:38 [INFO] generate received request
    2020/02/16 00:49:38 [INFO] received CSR
    2020/02/16 00:49:38 [INFO] generating key: rsa-2048
    2020/02/16 00:49:38 [INFO] encoded CSR
    2020/02/16 00:49:38 [INFO] signed certificate with serial number 83852304780368923308324941155403278584239347004
    2020/02/16 00:49:38 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
    websites. For more information see the Baseline Requirements for the Issuance and Management
    of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
    specifically, section 10.2.3 ("Information Requirements").
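    • Before copying the certificates, the SAN list of the issued server certificate can be double-checked (a sketch using the cfssl-certinfo tool installed in section 2.4):
    $ cfssl-certinfo -cert server.pem | grep -A 4 '"sans"'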
    • Copy the certificates
    $ cp *.pem /opt/etcd/ssl
    • Run the k8s/scripts/etcd.sh script
    $ ./k8s/scripts/etcd.sh etcd01 127.0.0.1
    # or
    $ ./k8s/scripts/etcd.sh etcd01 ${SERVER_IP}
    • The k8s/scripts/etcd.sh script contains:
    $ cat k8s/scripts/etcd.sh
    #!/bin/bash
    # example: ./etcd.sh etcd01 192.168.1.10
    
    ETCD_NAME=$1
    ETCD_IP=$2
    
    WORK_DIR=/opt/etcd
    
    cat <<EOF >$WORK_DIR/cfg/etcd
    #[Member]
    ETCD_NAME="${ETCD_NAME}"
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
    ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
    
    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
    ETCD_INITIAL_CLUSTER="${ETCD_NAME}=https://${ETCD_IP}:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
    EOF
    
    cat <<EOF >/usr/lib/systemd/system/etcd.service
    [Unit]
    Description=Etcd Server
    After=network.target
    After=network-online.target
    Wants=network-online.target
    
    [Service]
    Type=notify
    EnvironmentFile=${WORK_DIR}/cfg/etcd
    ExecStart=${WORK_DIR}/bin/etcd \
    --name=\${ETCD_NAME} \
    --data-dir=\${ETCD_DATA_DIR} \
    --listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
    --listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
    --advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
    --initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
    --initial-cluster=\${ETCD_INITIAL_CLUSTER} \
    --initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
    --initial-cluster-state=new \
    --cert-file=${WORK_DIR}/ssl/server.pem \
    --key-file=${WORK_DIR}/ssl/server-key.pem \
    --peer-cert-file=${WORK_DIR}/ssl/server.pem \
    --peer-key-file=${WORK_DIR}/ssl/server-key.pem \
    --trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
    --peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl enable etcd
    systemctl restart etcd
    • Since the certificate already includes 127.0.0.1 and the server IP, the second script argument can be either 127.0.0.1 or the server IP
    • For privacy, this article hides the server IP and uses 127.0.0.1 as the node address wherever possible
    • Check whether the installation succeeded by running:
    $ /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://127.0.0.1:2379" cluster-health
    member f6947f26c76d8a6b is healthy: got healthy result from https://127.0.0.1:2379
    cluster is healthy
    # or
    $ /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://${SERVER_IP}:2379" cluster-health
    member f6947f26c76d8a6b is healthy: got healthy result from https://${SERVER_IP}:2379
    cluster is healthy
    • Since this is a single-node cluster, only one endpoint is given for the cluster address; a line of the form "member ...... is healthy: got healthy result from ......." means etcd started correctly
    • The script generates the following configuration file
    $ cat /opt/etcd/cfg/etcd
    #[Member]
    ETCD_NAME="etcd01"
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://127.0.0.1:2380"
    ETCD_LISTEN_CLIENT_URLS="https://127.0.0.1:2379"
    
    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://127.0.0.1:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://127.0.0.1:2379"
    ETCD_INITIAL_CLUSTER="etcd01=https://127.0.0.1:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
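    • The cluster membership can also be listed with the same v2-style etcdctl flags:
    $ /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://127.0.0.1:2379" member list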

    2.6 Deploying flannel

    • Write the allocated subnet into etcd for flannel to use:
    $ /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://127.0.0.1:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
    { "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}} 
    • Read back the written value
    $ /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://127.0.0.1:2379" get /coreos.com/network/config
    { "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
    • Download the latest flannel release
    $ wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
    $ tar -zxvf flannel-v0.11.0-linux-amd64.tar.gz
    $ mkdir -p /opt/kubernetes/{cfg,bin,ssl}
    $ mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
    • Run the k8s/scripts/flannel.sh script; the first argument is the etcd endpoint
    $ ./k8s/scripts/flannel.sh https://127.0.0.1:2379
    • The script contains:
    $ cat k8s/scripts/flannel.sh
    #!/bin/bash
    
    ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
    
    cat <<EOF >/opt/kubernetes/cfg/flanneld
    
    FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
    -etcd-cafile=/opt/etcd/ssl/ca.pem \
    -etcd-certfile=/opt/etcd/ssl/server.pem \
    -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
    
    EOF
    
    cat <<EOF >/usr/lib/systemd/system/flanneld.service
    [Unit]
    Description=Flanneld overlay address etcd agent
    After=network-online.target network.target
    Before=docker.service
    
    [Service]
    Type=notify
    EnvironmentFile=/opt/kubernetes/cfg/flanneld
    ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
    ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    
    EOF
    
    systemctl daemon-reload
    systemctl enable flanneld
    systemctl restart flanneld
    • Check the subnet assigned at startup
    $ cat /run/flannel/subnet.env
    DOCKER_OPT_BIP="--bip=172.17.23.1/24"
    DOCKER_OPT_IPMASQ="--ip-masq=false"
    DOCKER_OPT_MTU="--mtu=1450"
    DOCKER_NETWORK_OPTIONS=" --bip=172.17.23.1/24 --ip-masq=false --mtu=1450"
    • Run vim /usr/lib/systemd/system/docker.service to adjust the docker configuration
    [Unit]
    Description=Docker Application Container Engine
    Documentation=https://docs.docker.com
    BindsTo=containerd.service
    After=network-online.target firewalld.service containerd.service
    Wants=network-online.target
    Requires=docker.socket
     
    [Service]
    Type=notify
    # the default is not to use systemd for cgroups because the delegate issues still
    # exists and systemd currently does not support the cgroup feature set required
    # for containers run by docker
    EnvironmentFile=/run/flannel/subnet.env
    ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H unix:///var/run/docker.sock
    #ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.soc
    ExecReload=/bin/kill -s HUP $MAINPID
    TimeoutSec=0
    RestartSec=2
    Restart=always
    ......
    • Restart the docker service
    $ systemctl daemon-reload
    $ systemctl restart docker
    • Check the flannel network: docker0 now sits inside the subnet allocated by flannel
    $ ifconfig
    docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
            inet 172.17.23.1  netmask 255.255.255.0  broadcast 172.17.23.255
            ether 02:42:c4:96:b7:e3  txqueuelen 0  (Ethernet)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 0  bytes 0 (0.0 B)
            TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
    eth1: ......
    flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
            inet 172.17.23.0  netmask 255.255.255.255  broadcast 0.0.0.0
            ether 1e:7a:e8:a0:4d:a5  txqueuelen 0  (Ethernet)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 0  bytes 0 (0.0 B)
            TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            loop  txqueuelen 0  (Local Loopback)
            RX packets 2807  bytes 220030 (214.8 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 2807  bytes 220030 (214.8 KiB)
            TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
    • Start a container and inspect its network
    $ docker run -it centos:7 /bin/bash
    [root@f04f38dfa5ec /]# yum install -y  net-tools
    [root@f04f38dfa5ec /]# ifconfig
    eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
            inet 172.17.23.2  netmask 255.255.255.0  broadcast 172.17.23.255
            ether 02:42:ac:11:17:02  txqueuelen 0  (Ethernet)
            RX packets 10391  bytes 14947016 (14.2 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 6295  bytes 445445 (435.0 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
            inet 127.0.0.1  netmask 255.0.0.0
            loop  txqueuelen 0  (Local Loopback)
            RX packets 0  bytes 0 (0.0 B)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 0  bytes 0 (0.0 B)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    [root@f04f38dfa5ec /]# ping 172.17.23.1
    PING 172.17.23.1 (172.17.23.1) 56(84) bytes of data.
    64 bytes from 172.17.23.1: icmp_seq=1 ttl=64 time=0.056 ms
    64 bytes from 172.17.23.1: icmp_seq=2 ttl=64 time=0.056 ms
    64 bytes from 172.17.23.1: icmp_seq=3 ttl=64 time=0.046 ms
    64 bytes from 172.17.23.1: icmp_seq=4 ttl=64 time=0.048 ms
    64 bytes from 172.17.23.1: icmp_seq=5 ttl=64 time=0.049 ms
    64 bytes from 172.17.23.1: icmp_seq=6 ttl=64 time=0.046 ms
    64 bytes from 172.17.23.1: icmp_seq=7 ttl=64 time=0.055 ms
    • The container can ping the docker0 interface, which shows that flannel is routing traffic correctly
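    • The vxlan device and routes that flannel installed can also be inspected from the host (diagnostic iproute2 commands):
    $ ip -d link show flannel.1    # vxlan details: VNI, local address, UDP port
    $ ip route | grep 172.17       # the pod subnet route(s) created by docker and flannel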

    2.7 Installing kube-apiserver

    • In k8s/scripts/k8s-cert.sh, change the hosts list of the server-csr.json section to 127.0.0.1 and the server IP
    cat > server-csr.json <<EOF
    {
        "CN": "kubernetes",
        "hosts": [
          "10.0.0.1",
          "127.0.0.1",
          "${SERVER_IP}",
          "kubernetes",
          "kubernetes.default",
          "kubernetes.default.svc",
          "kubernetes.default.svc.cluster",
          "kubernetes.default.svc.cluster.local"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {   
                "C": "CN",
                "L": "BeiJing",
                "ST": "BeiJing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }
    EOF
    • Generate the certificates with the k8s/scripts/k8s-cert.sh script:
    $ mkdir -p k8s/k8s-cert
    $ cp k8s/scripts/k8s-cert.sh k8s/k8s-cert
    $ cd k8s/k8s-cert
    $ ./k8s-cert.sh
    $ ls
    admin.csr admin.pem ca-csr.json k8s-cert.sh kube-proxy-key.pem server-csr.json
    admin-csr.json ca-config.json ca-key.pem kube-proxy.csr kube-proxy.pem server-key.pem
    admin-key.pem ca.csr ca.pem kube-proxy-csr.json server.csr server.pem
    $ cp ca*pem server*pem /opt/kubernetes/ssl/
    • The script contains:
    $ cat k8s-cert.sh
    cat > ca-config.json <<EOF
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "kubernetes": {
             "expiry": "87600h",
             "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"
            ]
          }
        }
      }
    }
    EOF
    
    cat > ca-csr.json <<EOF
    {
        "CN": "kubernetes",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "Beijing",
                "ST": "Beijing",
                  "O": "k8s",
                "OU": "System"
            }
        ]
    }
    EOF
    
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    
    #-----------------------
    
    cat > server-csr.json <<EOF
    {
        "CN": "kubernetes",
        "hosts": [
          "10.0.0.1",
          "127.0.0.1",
          "10.206.176.19",
          "10.206.240.188",
          "10.206.240.189",
          "kubernetes",
          "kubernetes.default",
          "kubernetes.default.svc",
          "kubernetes.default.svc.cluster",
          "kubernetes.default.svc.cluster.local"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "BeiJing",
                "ST": "BeiJing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }
    EOF
    
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
    
    #-----------------------
    
    cat > admin-csr.json <<EOF
    {
      "CN": "admin",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "BeiJing",
          "ST": "BeiJing",
          "O": "system:masters",
          "OU": "System"
        }
      ]
    }
    EOF
    
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
    
    #-----------------------
    
    cat > kube-proxy-csr.json <<EOF
    {
      "CN": "system:kube-proxy",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "BeiJing",
          "ST": "BeiJing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }
    EOF
    
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
    • Copy the previously mentioned kube-apiserver, kubectl, kube-controller-manager, kube-scheduler, kubelet and kube-proxy binaries to /opt/kubernetes/bin/
    $ cp kube-apiserver kubectl kube-controller-manager kube-scheduler kubelet kube-proxy /opt/kubernetes/bin/
    • Generate a random token
    $ head -c 16 /dev/urandom | od -An -t x | tr -d ' '
    20cd735bd334f4334118f8be496df49d
    $ cat /opt/kubernetes/cfg/token.csv            
    20cd735bd334f4334118f8be496df49d,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
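    • The file itself is written by hand from the generated token (the CSV format is token,user,uid,"group"):
    $ echo '20cd735bd334f4334118f8be496df49d,kubelet-bootstrap,10001,"system:kubelet-bootstrap"' > /opt/kubernetes/cfg/token.csv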
    • Run the k8s/scripts/apiserver.sh script to start the kube-apiserver.service unit; the first argument is the Master address, the second the etcd endpoint
    $ ./k8s/scripts/apiserver.sh ${SERVER_IP} https://127.0.0.1:2379
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
    • The script contains:
    $ cat k8s/scripts/apiserver.sh
    #!/bin/bash
    
    MASTER_ADDRESS=$1
    ETCD_SERVERS=$2
    
    cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
    
    KUBE_APISERVER_OPTS="--logtostderr=true \
    --v=4 \
    --etcd-servers=${ETCD_SERVERS} \
    --bind-address=${MASTER_ADDRESS} \
    --secure-port=6443 \
    --advertise-address=${MASTER_ADDRESS} \
    --allow-privileged=true \
    --service-cluster-ip-range=10.0.0.0/24 \
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
    --authorization-mode=RBAC,Node \
    --kubelet-https=true \
    --enable-bootstrap-token-auth \
    --token-auth-file=/opt/kubernetes/cfg/token.csv \
    --service-node-port-range=30000-50000 \
    --tls-cert-file=/opt/kubernetes/ssl/server.pem  \
    --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
    --client-ca-file=/opt/kubernetes/ssl/ca.pem \
    --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
    --etcd-cafile=/opt/etcd/ssl/ca.pem \
    --etcd-certfile=/opt/etcd/ssl/server.pem \
    --etcd-keyfile=/opt/etcd/ssl/server-key.pem"
    
    EOF
    
    cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
    ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl enable kube-apiserver
    systemctl restart kube-apiserver
    • The script creates the kube-apiserver.service unit; check its status
    $ systemctl status kube-apiserver.service
    * kube-apiserver.service - Kubernetes API Server
       Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
       Active: active (running) since Sun 2020-02-16 03:10:06 CST; 3min 10s ago
         Docs: https://github.com/kubernetes/kubernetes
     Main PID: 28740 (kube-apiserver)
        Tasks: 14
       Memory: 244.5M
       CGroup: /system.slice/kube-apiserver.service
               `-28740 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://127.0.0.1:...
    
    Feb 16 03:13:10 VM_121_198_centos kube-apiserver[28740]: I0216 03:13:10.491629   28740 available_controller.g...ons
    Feb 16 03:13:10 VM_121_198_centos kube-apiserver[28740]: I0216 03:13:10.514914   28740 httplog.go:90] verb="G…668":
    Feb 16 03:13:10 VM_121_198_centos kube-apiserver[28740]: I0216 03:13:10.516879   28740 httplog.go:90] verb="G...8":
    Feb 16 03:13:10 VM_121_198_centos kube-apiserver[28740]: I0216 03:13:10.525747   28740 httplog.go:90] verb="G...8":
    Feb 16 03:13:10 VM_121_198_centos kube-apiserver[28740]: I0216 03:13:10.527263   28740 httplog.go:90] verb="G...8":
    Feb 16 03:13:10 VM_121_198_centos kube-apiserver[28740]: I0216 03:13:10.528568   28740 httplog.go:90] verb="G…668":
    Feb 16 03:13:11 VM_121_198_centos kube-apiserver[28740]: I0216 03:13:11.609546   28740 httplog.go:90] verb="G...8":
    Feb 16 03:13:11 VM_121_198_centos kube-apiserver[28740]: I0216 03:13:11.611355   28740 httplog.go:90] verb="G...8":
    Feb 16 03:13:11 VM_121_198_centos kube-apiserver[28740]: I0216 03:13:11.619297   28740 httplog.go:90] verb="G...8":
    Feb 16 03:13:11 VM_121_198_centos kube-apiserver[28740]: I0216 03:13:11.624253   28740 httplog.go:90] verb="G…668":
    Hint: Some lines were ellipsized, use -l to show in full.
    • Inspect the kube-apiserver.service configuration file
    $ cat /opt/kubernetes/cfg/kube-apiserver
    
    KUBE_APISERVER_OPTS="--logtostderr=true 
    --v=4 
    --etcd-servers=https://127.0.0.1:2379 
    --bind-address=${SERVER_IP} 
    --secure-port=6443 
    --advertise-address=${SERVER_IP} 
    --allow-privileged=true 
    --service-cluster-ip-range=10.0.0.0/24 
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction 
    --authorization-mode=RBAC,Node 
    --kubelet-https=true 
    --enable-bootstrap-token-auth 
    --token-auth-file=/opt/kubernetes/cfg/token.csv 
    --service-node-port-range=30000-50000 
    --tls-cert-file=/opt/kubernetes/ssl/server.pem  
    --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem 
    --client-ca-file=/opt/kubernetes/ssl/ca.pem 
    --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem 
    --etcd-cafile=/opt/etcd/ssl/ca.pem 
    --etcd-certfile=/opt/etcd/ssl/server.pem 
    --etcd-keyfile=/opt/etcd/ssl/server-key.pem"

    2.8 Installing kube-scheduler

    • Run the k8s/scripts/scheduler.sh script to create and start the kube-scheduler.service unit; the first argument is the Master address
    $ ./k8s/scripts/scheduler.sh 127.0.0.1
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
    • The script contains:
    $ cat k8s/scripts/scheduler.sh
    #!/bin/bash
    
    MASTER_ADDRESS=$1
    
    cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
    
    KUBE_SCHEDULER_OPTS="--logtostderr=true \
    --v=4 \
    --master=${MASTER_ADDRESS}:8080 \
    --leader-elect"
    
    EOF
    
    cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
    ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl enable kube-scheduler
    systemctl restart kube-scheduler
    • The script creates the kube-scheduler.service unit; check its status
    $ systemctl status kube-scheduler.service    
    * kube-scheduler.service - Kubernetes Scheduler
       Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
       Active: active (running) since Sun 2020-02-16 03:22:55 CST; 5h 35min ago
         Docs: https://github.com/kubernetes/kubernetes
     Main PID: 31963 (kube-scheduler)
        Tasks: 14
       Memory: 12.4M
       CGroup: /system.slice/kube-scheduler.service
               `-31963 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-...
    
    Feb 16 08:46:36 VM_121_198_centos kube-scheduler[31963]: I0216 08:46:36.873191   31963 reflector.go:494] k8s....ved
    Feb 16 08:50:33 VM_121_198_centos kube-scheduler[31963]: I0216 08:50:33.873911   31963 reflector.go:494] k8s....ved
    Feb 16 08:51:34 VM_121_198_centos kube-scheduler[31963]: I0216 08:51:34.876413   31963 reflector.go:494] k8s....ved
    Feb 16 08:52:05 VM_121_198_centos kube-scheduler[31963]: I0216 08:52:05.874120   31963 reflector.go:494] k8s....ved
    Feb 16 08:52:38 VM_121_198_centos kube-scheduler[31963]: I0216 08:52:38.873990   31963 reflector.go:494] k8s....ved
    Feb 16 08:53:47 VM_121_198_centos kube-scheduler[31963]: I0216 08:53:47.869403   31963 reflector.go:494] k8s....ved
    Feb 16 08:54:16 VM_121_198_centos kube-scheduler[31963]: I0216 08:54:16.876848   31963 reflector.go:494] k8s....ved
    Feb 16 08:54:24 VM_121_198_centos kube-scheduler[31963]: I0216 08:54:24.873540   31963 reflector.go:494] k8s....ved
    Feb 16 08:55:52 VM_121_198_centos kube-scheduler[31963]: I0216 08:55:52.876115   31963 reflector.go:494] k8s....ved
    Feb 16 08:57:42 VM_121_198_centos kube-scheduler[31963]: I0216 08:57:42.874884   31963 reflector.go:494] k8s....ved
    Hint: Some lines were ellipsized, use -l to show in full. 

    2.9 Installing kube-controller-manager

    • Run the k8s/scripts/controller-manager.sh script to create and start the kube-controller-manager.service unit; the first argument is the Master address
    $ ./k8s/scripts/controller-manager.sh 127.0.0.1
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
    • The script contains:
    $ cat ./k8s/scripts/controller-manager.sh
    #!/bin/bash
    
    MASTER_ADDRESS=$1
    
    cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
    
    
    KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
    --v=4 \
    --master=${MASTER_ADDRESS}:8080 \
    --leader-elect=true \
    --address=127.0.0.1 \
    --service-cluster-ip-range=10.0.0.0/24 \
    --cluster-name=kubernetes \
    --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
    --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  \
    --root-ca-file=/opt/kubernetes/ssl/ca.pem \
    --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
    --experimental-cluster-signing-duration=87600h0m0s"
    
    EOF
    
    cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/kubernetes/kubernetes
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
    ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl enable kube-controller-manager
    systemctl restart kube-controller-manager
    • The script creates the kube-controller-manager.service unit; check its status
    $ systemctl status kube-controller-manager.service
    * kube-controller-manager.service - Kubernetes Controller Manager
       Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
       Active: active (running) since Sun 2020-02-16 09:28:07 CST; 54s ago
         Docs: https://github.com/kubernetes/kubernetes
     Main PID: 17119 (kube-controller)
        Tasks: 13
       Memory: 24.9M
       CGroup: /system.slice/kube-controller-manager.service
               `-17119 /opt/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 ...
    
    Feb 16 09:28:57 VM_121_198_centos kube-controller-manager[17119]: I0216 09:28:57.577189   17119 pv_controller_...er
    Feb 16 09:29:00 VM_121_198_centos kube-controller-manager[17119]: I0216 09:29:00.128738   17119 request.go:556...2s
    Feb 16 09:29:00 VM_121_198_centos kube-controller-manager[17119]: I0216 09:29:00.178743   17119 request.go:556...2s
    Feb 16 09:29:00 VM_121_198_centos kube-controller-manager[17119]: I0216 09:29:00.228729   17119 request.go:556...2s
    Feb 16 09:29:00 VM_121_198_centos kube-controller-manager[17119]: I0216 09:29:00.278737   17119 request.go:556...2s
    Feb 16 09:29:00 VM_121_198_centos kube-controller-manager[17119]: I0216 09:29:00.635791   17119 request.go:556...2s
    Feb 16 09:29:00 VM_121_198_centos kube-controller-manager[17119]: I0216 09:29:00.685804   17119 request.go:556...2s
    Feb 16 09:29:00 VM_121_198_centos kube-controller-manager[17119]: I0216 09:29:00.735807   17119 request.go:556...2s
    Feb 16 09:29:00 VM_121_198_centos kube-controller-manager[17119]: I0216 09:29:00.785776   17119 request.go:556...2s
    Feb 16 09:29:00 VM_121_198_centos kube-controller-manager[17119]: I0216 09:29:00.786828   17119 resource_quota...nc
    Hint: Some lines were ellipsized, use -l to show in full.
    • At this point, all Master components are installed
    • Add the binary directory to the PATH: export PATH=$PATH:/opt/kubernetes/bin/
    $ vim ~/.zshrc
    ......
    export PATH=$PATH:/opt/kubernetes/bin/
    $ source ~/.zshrc
    • Check the status of the Master components
    $ kubectl get cs 
    NAME                 STATUS    MESSAGE             ERROR
    controller-manager   Healthy   ok                  
    scheduler            Healthy   ok                  
    etcd-0               Healthy   {"health":"true"}
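    • With no kubeconfig configured, kubectl talks to the local insecure port 8080 here, which can be confirmed with:
    $ kubectl cluster-info    # should report the master running at http://localhost:8080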

    2.10 Installing kubelet

    • From this section on, the components being installed belong to the Node role; since this is a single-node setup, they go on the same machine
    • Create the working directory and copy the k8s/scripts/kubeconfig.sh script
    $ mkdir -p k8s/kubeconfig
    $ cp k8s/scripts/kubeconfig.sh k8s/kubeconfig
    $ cd k8s/kubeconfig
    • Look up the kube-apiserver token
    $ cat /opt/kubernetes/cfg/token.csv
    20cd735bd334f4334118f8be496df49d,kubelet-bootstrap,10001,"system:kubelet-bootstrap" 
    • Modify the "set client authentication parameters" section of the script, replacing the following:
    ......
    # set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=bootstrap.kubeconfig
    ......
    • with the token from /opt/kubernetes/cfg/token.csv:
    # set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
       --token=20cd735bd334f4334118f8be496df49d \
       --kubeconfig=bootstrap.kubeconfig
    • Run the kubeconfig.sh script; the first argument is the IP address kube-apiserver listens on, the second the k8s-cert directory created above; then copy the generated files into the configuration directory
    $ ./kubeconfig.sh ${SERVER_IP} ../k8s-cert
    f920d3cad77834c418494860695ea887
    Cluster "kubernetes" set.
    User "kubelet-bootstrap" set.
    Context "default" modified.
    Switched to context "default".
    Cluster "kubernetes" set.
    User "kube-proxy" set.
    Context "default" modified.
    Switched to context "default".
    $ cp bootstrap.kubeconfig kube-proxy.kubeconfig /opt/kubernetes/cfg/ 
    • The script contains:
    $ cat kubeconfig.sh
    # create the TLS Bootstrapping token
    BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
    echo ${BOOTSTRAP_TOKEN}
    
    cat > token.csv <<EOF
    ${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
    EOF
    
    #----------------------
    
    APISERVER=$1
    SSL_DIR=$2
    
    # create the kubelet bootstrapping kubeconfig
    export KUBE_APISERVER="https://$APISERVER:6443"
    
    # set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=$SSL_DIR/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=bootstrap.kubeconfig
    
    # set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=20cd735bd334f4334118f8be496df49d \
      --kubeconfig=bootstrap.kubeconfig
    
    # set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=bootstrap.kubeconfig
    
    # set the default context
    kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
    
    #----------------------
    
    # create the kube-proxy kubeconfig file
    
    kubectl config set-cluster kubernetes \
      --certificate-authority=$SSL_DIR/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config set-credentials kube-proxy \
      --client-certificate=$SSL_DIR/kube-proxy.pem \
      --client-key=$SSL_DIR/kube-proxy-key.pem \
      --embed-certs=true \
      --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kube-proxy \
      --kubeconfig=kube-proxy.kubeconfig
    
    kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
    • Run the k8s/scripts/kubelet.sh script to create and start the kubelet.service unit; the first argument is the Node address (127.0.0.1 or the server IP)
    $ ./k8s/scripts/kubelet.sh 127.0.0.1
    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
    • The script contains:
    $ cat ./k8s/scripts/kubelet.sh
    #!/bin/bash
    
    NODE_ADDRESS=$1
    DNS_SERVER_IP=${2:-"10.0.0.2"}
    
    cat <<EOF >/opt/kubernetes/cfg/kubelet
    
    KUBELET_OPTS="--logtostderr=true \
    --v=4 \
    --hostname-override=${NODE_ADDRESS} \
    --node-labels=node.kubernetes.io/k8s-master=true \
    --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
    --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
    --config=/opt/kubernetes/cfg/kubelet.config \
    --cert-dir=/opt/kubernetes/ssl \
    --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
    
    EOF
    
    cat <<EOF >/opt/kubernetes/cfg/kubelet.config
    
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    address: ${NODE_ADDRESS}
    port: 10250
    readOnlyPort: 10255
    cgroupDriver: cgroupfs
    clusterDNS:
    - ${DNS_SERVER_IP} 
    clusterDomain: cluster.local.
    failSwapOn: false
    authentication:
      anonymous:
        enabled: true
    EOF
    
    cat <<EOF >/usr/lib/systemd/system/kubelet.service
    [Unit]
    Description=Kubernetes Kubelet
    After=docker.service
    Requires=docker.service
    
    [Service]
    EnvironmentFile=/opt/kubernetes/cfg/kubelet
    ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
    Restart=on-failure
    KillMode=process
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl enable kubelet
    systemctl restart kubelet
    • Create the bootstrap clusterrolebinding that allows kubelet to request certificate signing from kube-apiserver
    $ kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
    clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
    • Check the pending request
    $ kubectl get csr
    NAME                                                   AGE   REQUESTOR           CONDITION
    node-csr-080q0dk5uSm0JxED14c6UB7q4jeUSCUHUxqosnqtnpA   16s   kubelet-bootstrap   Pending
    • Approve the request and issue the certificate
    $ kubectl certificate approve node-csr-080q0dk5uSm0JxED14c6UB7q4jeUSCUHUxqosnqtnpA
    certificatesigningrequest.certificates.k8s.io/node-csr-080q0dk5uSm0JxED14c6UB7q4jeUSCUHUxqosnqtnpA approved
    $ kubectl get csr                                                                 
    NAME                                                   AGE     REQUESTOR           CONDITION
    node-csr-080q0dk5uSm0JxED14c6UB7q4jeUSCUHUxqosnqtnpA   2m51s   kubelet-bootstrap   Approved,Issued
    • List the cluster nodes
    $ kubectl get node
    NAME        STATUS   ROLES    AGE   VERSION
    127.0.0.1   Ready    <none>   77s   v1.18.0-alpha.5.158+1c60045db0bd6e 
    • The node is already in Ready state, so it joined successfully
    • Since this Node also acts as the Master, label it accordingly
    $ kubectl label node 127.0.0.1 node-role.kubernetes.io/master=true
    node/127.0.0.1 labeled
    $ kubectl get node                                                
    NAME        STATUS   ROLES    AGE     VERSION
    127.0.0.1   Ready    master   4m21s   v1.18.0-alpha.5.158+1c60045db0bd6e

    2.11 Installing kube-proxy

    • Run the k8s/scripts/proxy.sh script to create and start the kube-proxy.service unit; the first argument is the Node address
    $ ./k8s/scripts/proxy.sh 127.0.0.1  
    Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service. 
    • The script contains:
    $ cat ./k8s/scripts/proxy.sh
    #!/bin/bash
    
    NODE_ADDRESS=$1
    
    cat <<EOF >/opt/kubernetes/cfg/kube-proxy
    
    KUBE_PROXY_OPTS="--logtostderr=true \
    --v=4 \
    --hostname-override=${NODE_ADDRESS} \
    --cluster-cidr=10.0.0.0/24 \
    --proxy-mode=ipvs \
    --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
    
    EOF
    
    cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
    [Unit]
    Description=Kubernetes Proxy
    After=network.target
    
    [Service]
    EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
    ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    systemctl daemon-reload
    systemctl enable kube-proxy
    systemctl restart kube-proxy
    • The script creates the kube-proxy.service unit; check its status
    $ systemctl status kube-proxy.service             
    * kube-proxy.service - Kubernetes Proxy
       Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
       Active: active (running) since Sun 2020-02-16 10:39:16 CST; 4min 13s ago
     Main PID: 3474 (kube-proxy)
        Tasks: 10
       Memory: 7.6M
       CGroup: /system.slice/kube-proxy.service
               `-3474 /opt/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=127.0.0.1 --cluste...
    
    Feb 16 10:43:20 VM_121_198_centos kube-proxy[3474]: I0216 10:43:20.716190    3474 config.go:169] Calling han...date
    Feb 16 10:43:20 VM_121_198_centos kube-proxy[3474]: I0216 10:43:20.800724    3474 config.go:169] Calling han...date
    Feb 16 10:43:22 VM_121_198_centos kube-proxy[3474]: I0216 10:43:22.724946    3474 config.go:169] Calling han...date
    Feb 16 10:43:22 VM_121_198_centos kube-proxy[3474]: I0216 10:43:22.809768    3474 config.go:169] Calling han...date
    Feb 16 10:43:24 VM_121_198_centos kube-proxy[3474]: I0216 10:43:24.733676    3474 config.go:169] Calling han...date
    Feb 16 10:43:24 VM_121_198_centos kube-proxy[3474]: I0216 10:43:24.818662    3474 config.go:169] Calling han...date
    Feb 16 10:43:26 VM_121_198_centos kube-proxy[3474]: I0216 10:43:26.743754    3474 config.go:169] Calling han...date
    Feb 16 10:43:26 VM_121_198_centos kube-proxy[3474]: I0216 10:43:26.830673    3474 config.go:169] Calling han...date
    Feb 16 10:43:28 VM_121_198_centos kube-proxy[3474]: I0216 10:43:28.755816    3474 config.go:169] Calling han...date
    Feb 16 10:43:28 VM_121_198_centos kube-proxy[3474]: I0216 10:43:28.838915    3474 config.go:169] Calling han...date
    Hint: Some lines were ellipsized, use -l to show in full.

    2.12 Verifying the Installation

    • Create a yaml file
    $ mkdir -p k8s/yamls
    $ cd k8s/yamls
    $ vim nginx-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
    spec:
      selector:
        matchLabels:
          app: nginx
      replicas: 2
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.7.9
            ports:
            - containerPort: 80
    • Create the deployment object and check the generated Pods; Running status means they were created successfully
    $ kubectl apply -f nginx-deployment.yaml
    $ kubectl get pods
    NAME                                READY   STATUS    RESTARTS   AGE
    nginx-deployment-54f57cf6bf-c24p4   1/1     Running   0          12s
    nginx-deployment-54f57cf6bf-w2pqp   1/1     Running   0          12s
    • Inspect a Pod in detail
    $ kubectl describe pod  nginx-deployment-54f57cf6bf-c24p4
    Name:         nginx-deployment-54f57cf6bf-c24p4
    Namespace:    default
    Priority:     0
    Node:         127.0.0.1/127.0.0.1
    Start Time:   Sun, 16 Feb 2020 10:49:05 +0800
    Labels:       app=nginx
                  pod-template-hash=54f57cf6bf
    Annotations:  <none>
    Status:       Running
    IP:           172.17.23.2
    IPs:
      IP:           172.17.23.2
    Controlled By:  ReplicaSet/nginx-deployment-54f57cf6bf
    Containers:
      nginx:
        Container ID:   docker://ffdfdefa8a743e5911634ca4b4d5c00b3e98955799c7c52c8040e6a8161706f9
        Image:          nginx:1.7.9
        Image ID:       docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
        Port:           80/TCP
        Host Port:      0/TCP
        State:          Running
          Started:      Sun, 16 Feb 2020 10:49:06 +0800
        Ready:          True
        Restart Count:  0
        Environment:    <none>
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-x7gjm (ro)
    Conditions:
      Type              Status
      Initialized       True 
      Ready             True 
      ContainersReady   True 
      PodScheduled      True 
    Volumes:
      default-token-x7gjm:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-x7gjm
        Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:  <none>
    Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                     node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
      Type    Reason     Age        From                Message
      ----    ------     ----       ----                -------
      Normal  Scheduled  <unknown>  default-scheduler   Successfully assigned default/nginx-deployment-54f57cf6bf-c24p4 to 127.0.0.1
      Normal  Pulled     2m11s      kubelet, 127.0.0.1  Container image "nginx:1.7.9" already present on machine
      Normal  Created    2m11s      kubelet, 127.0.0.1  Created container nginx
      Normal  Started    2m11s      kubelet, 127.0.0.1  Started container nginx
    • If the inspection fails with the following error:
    Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) 
    • then grant the cluster a cluster-admin binding:
    $ kubectl create clusterrolebinding system:anonymous   --clusterrole=cluster-admin   --user=system:anonymous
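    • As a final end-to-end check, the deployment can be exposed through a NodePort Service and queried from the host (a sketch; the assigned port falls in the 30000-50000 range configured for kube-apiserver and differs per run):
    $ kubectl expose deployment nginx-deployment --type=NodePort --port=80
    $ kubectl get svc nginx-deployment          # note the mapped port, e.g. 80:3xxxx/TCP
    $ curl -I http://127.0.0.1:${NODE_PORT}     # nginx should answer with HTTP/1.1 200 OK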

    3. Summary

    • Compared with kubeadm, the binary installation is considerably more complex
    • However, development and testing against the source tree require deploying from binaries
    • So mastering binary deployment of a k8s cluster is well worth the effort
    • All of the scripts above have been uploaded to the github repository
    • Questions and criticism are welcome
