  • [Repost] Binary deployment of Kubernetes v1.17.0

    This article is reposted from:

    https://shanhy.blog.csdn.net/article/details/104837622

    https://shanhy.blog.csdn.net/article/details/104862955

    https://shanhy.blog.csdn.net/article/details/104837634

    (Part 1) ETCD cluster deployment
    (Part 2) Installing and configuring Flannel and Docker
    (Part 3) Manual deployment of kubernetes-1.17.0
    (Part 4) A K8S HelloWorld


    ETCD Cluster Deployment

    Files

    /opt/soft/etcd/etcd-v3.4.4-linux-amd64.tar.gz
    Download: https://github.com/etcd-io/etcd/releases

    Servers

    192.168.1.54、192.168.1.65、192.168.1.105

    Installation

    1. Extract the package (on every machine)

    ETCD_VER=v3.4.4
    cd /opt/soft/etcd
    tar xzvf etcd-${ETCD_VER}-linux-amd64.tar.gz --strip-components=1
    rm -f etcd-${ETCD_VER}-linux-amd64.tar.gz
    ./etcd --version
    ./etcdctl version

    2. Create the etcd configuration file (on every machine)

    /opt/soft/etcd/etcd.conf
    In the file below, change the IP addresses to this machine's IP on each host, and set ETCD_NAME to etcd01, etcd02 and etcd03 respectively (the name must match this host's entry in ETCD_INITIAL_CLUSTER). A per-host generator sketch is shown after the file.

    #[Member]
    #ETCD_CORS=""
    ETCD_NAME="etcd01"
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://192.168.1.65:2380"
    ETCD_LISTEN_CLIENT_URLS="https://192.168.1.65:2379,https://127.0.0.1:2379"
    #ETCD_WAL_DIR=""
    #ETCD_MAX_SNAPSHOTS="5"
    #ETCD_MAX_WALS="5"
    #ETCD_SNAPSHOT_COUNT="100000"
    #ETCD_HEARTBEAT_INTERVAL="100"
    #ETCD_ELECTION_TIMEOUT="1000"
    #ETCD_QUOTA_BACKEND_BYTES="0"
    #ETCD_MAX_REQUEST_BYTES="1572864"
    #ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
    #ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
    #ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
    #
    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.65:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.65:2379"
    #ETCD_DISCOVERY=""
    #ETCD_DISCOVERY_FALLBACK="proxy"
    #ETCD_DISCOVERY_PROXY=""
    #ETCD_DISCOVERY_SRV=""
    ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.54:2380,etcd02=https://192.168.1.65:2380,etcd03=https://192.168.1.105:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    #ETCD_INITIAL_CLUSTER_STATE="new"
    #ETCD_STRICT_RECONFIG_CHECK="true"
    #ETCD_ENABLE_V2="true"
    
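    If you prefer not to edit each host's file by hand, the sketch below (my own addition, not part of the original steps; set NAME and IP for each machine) regenerates the active lines of /opt/soft/etcd/etcd.conf while keeping ETCD_INITIAL_CLUSTER identical on every member:

    # set this member's name and IP, then regenerate /opt/soft/etcd/etcd.conf
    NAME=etcd01              # etcd01 / etcd02 / etcd03, matching ETCD_INITIAL_CLUSTER below
    IP=192.168.1.54          # this machine's IP address
    cat > /opt/soft/etcd/etcd.conf <<EOF
    ETCD_NAME="${NAME}"
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="https://${IP}:2380"
    ETCD_LISTEN_CLIENT_URLS="https://${IP}:2379,https://127.0.0.1:2379"
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${IP}:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://${IP}:2379"
    ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.54:2380,etcd02=https://192.168.1.65:2380,etcd03=https://192.168.1.105:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    # ETCD_ENABLE_V2="true" can be added here later (see the etcd 3.4 notes below)
    EOF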

     3. Create the etcd.service unit file (on every machine)
    /usr/lib/systemd/system/etcd.service
    This is the systemd unit file for etcd; once it is in place you can start and stop the etcd service with systemd.

    [Unit]
    Description=Etcd Server
    After=network.target
    After=network-online.target
    Wants=network-online.target
    
    [Service]
    Type=notify
    WorkingDirectory=/opt/soft/etcd/
    EnvironmentFile=/opt/soft/etcd/etcd.conf
    ExecStart=/opt/soft/etcd/etcd \
    --initial-cluster-state=new \
    --cert-file=/opt/soft/etcd/ssl/server.pem \
    --key-file=/opt/soft/etcd/ssl/server-key.pem \
    --peer-cert-file=/opt/soft/etcd/ssl/server.pem \
    --peer-key-file=/opt/soft/etcd/ssl/server-key.pem \
    --trusted-ca-file=/opt/soft/etcd/ssl/ca.pem \
    --peer-trusted-ca-file=/opt/soft/etcd/ssl/ca.pem
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    

      4. Create TLS certificates
    Generate the certificates on a machine that has Internet access; otherwise download the required files elsewhere and copy them over manually.

    mkdir -p /opt/soft/etcd/ssl && cd $_
    wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
    wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
    wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
    mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
    mv cfssl_linux-amd64 /usr/local/bin/cfssl
    mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
    chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssl-certinfo /usr/local/bin/cfssljson

    The full content of tls.sh is shown below (change the IP addresses first, and add as many entries to hosts as you might ever need):

    # etcd
    # cat ca-config.json
    cat > ca-config.json <<EOF
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "www": {
             "expiry": "87600h",
             "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"
            ]
          }
        }
      }
    }
    EOF
    
    # cat ca-csr.json
    cat > ca-csr.json <<EOF
    {
        "CN": "etcd CA",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "Beijing",
                "ST": "Beijing"
            }
        ]
    }
    EOF
    
    # cat server-csr.json
    cat > server-csr.json <<EOF
    {
        "CN": "etcd",
        "hosts": [
        "127.0.0.1",
        "192.168.1.65",
        "192.168.1.54",
        "192.168.1.105"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "BeiJing",
                "ST": "BeiJing"
            }
        ]
    }
    EOF
    

      Run the following commands

    sh tls.sh
    
    cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
    
    ls *.pem
    

      

    Then copy the four generated .pem files to /opt/soft/etcd/ssl on every machine.
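
    For example, from the machine where the certificates were generated, a small loop like this (just a sketch; it assumes root SSH access, that the .pem files are in /opt/soft/etcd/ssl, and that the host list contains the other members) can distribute them:

    cd /opt/soft/etcd/ssl
    for host in 192.168.1.54 192.168.1.105; do
      ssh root@${host} "mkdir -p /opt/soft/etcd/ssl"
      scp ca.pem ca-key.pem server.pem server-key.pem root@${host}:/opt/soft/etcd/ssl/
    done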

    5. Start the service

    systemctl daemon-reload
    systemctl enable etcd
    systemctl start etcd

    6. Check the cluster status
    Cluster status is checked mainly with two commands: etcdctl endpoint status and etcdctl endpoint health (a status example is shown after the health output below).

    cd /opt/soft/etcd/ && ./etcdctl \
    --endpoints="https://192.168.1.54:2379,https://192.168.1.65:2379,https://192.168.1.105:2379" \
    --cacert=ssl/ca.pem \
    --key=ssl/server-key.pem \
    --cert=ssl/server.pem \
    endpoint health

    Each endpoint should report is healthy:

    https://192.168.1.105:2379 is healthy: successfully committed proposal: took = 29.874089ms
    https://192.168.1.65:2379 is healthy: successfully committed proposal: took = 29.799246ms
    https://192.168.1.54:2379 is healthy: successfully committed proposal: took = 39.710904ms
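
    The endpoint status command mentioned above is used the same way; etcdctl 3.4 can render the result as a table with -w table, for example:

    cd /opt/soft/etcd/ && ./etcdctl \
    --endpoints="https://192.168.1.54:2379,https://192.168.1.65:2379,https://192.168.1.105:2379" \
    --cacert=ssl/ca.pem \
    --key=ssl/server-key.pem \
    --cert=ssl/server.pem \
    -w table endpoint status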

    Test (and write the flannel network configuration while we are at it)
    Because flanneld (latest version v0.11.0 at the time of writing) only supports the v2 API, use the v2 commands below.

    # v2 commands (set writes a value, ls lists keys)
    ETCDCTL_API=2 etcdctl \
    --endpoints="https://192.168.1.54:2379,https://192.168.1.65:2379,https://192.168.1.105:2379" \
    --ca-file=ssl/ca.pem \
    --key-file=ssl/server-key.pem \
    --cert-file=ssl/server.pem \
    set /flannel/network/config '{"Network":"10.244.0.0/16", "SubnetMin": "10.244.1.0", "SubnetMax": "10.244.254.0", "SubnetLen": 24, "Backend": {"Type": "vxlan"}}'
    
    ETCDCTL_API=2 etcdctl \
    --endpoints="https://192.168.1.54:2379,https://192.168.1.65:2379,https://192.168.1.105:2379" \
    --ca-file=ssl/ca.pem \
    --key-file=ssl/server-key.pem \
    --cert-file=ssl/server.pem \
    ls /flannel/network
    
    # output like the following indicates everything is working
    /flannel/network/config
    /flannel/network/subnets
    

      

    Notes on etcd 3.4

    In etcd 3.4, ETCDCTL_API=3 and etcd --enable-v2=false are the defaults. To use the v2 API you must set the ETCDCTL_API environment variable when running etcdctl, e.g. ETCDCTL_API=2 etcdctl.
    etcd 3.4 reads its parameters from environment variables automatically, so a parameter that is already set in the EnvironmentFile must not be repeated as an ExecStart flag; use one or the other. Setting both triggers an error like: "etcd: conflicting environment variable "ETCD_NAME" is shadowed by corresponding command-line flag (either unset environment variable or disable flag)".
    flannel talks to etcd through the v2 API, while Kubernetes uses the v3 API.
    Note: because flannel needs the v2 API, it must be enabled by setting ETCD_ENABLE_V2="true" in /opt/soft/etcd/etcd.conf (a quick way to apply this is shown after these notes).
    When used with K8S, the Network value written to etcd must be a /16 range and must match the cluster-cidr parameter of kube-controller-manager and kube-proxy (strictly speaking it is the other way round: those components must be set to match what is written here).
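
    A quick way to apply the ETCD_ENABLE_V2 setting on every etcd member (a small sketch; it assumes the commented-out line from the template above is still present in etcd.conf) is:

    sed -i 's/^#ETCD_ENABLE_V2=.*/ETCD_ENABLE_V2="true"/' /opt/soft/etcd/etcd.conf
    systemctl restart etcd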

    In addition, the flanneld configuration file must contain the cluster endpoints and the certificates, for example:

    FLANNEL_ETCD_ENDPOINTS="https://192.168.1.54:2379,https://192.168.1.65:2379,https://192.168.1.105:2379"
    FLANNEL_ETCD_PREFIX="/flannel/network"
    FLANNEL_OPTIONS="-etcd-cafile=/opt/soft/etcd/ssl/ca.pem -etcd-keyfile=/opt/soft/etcd/ssl/server-key.pem  -etcd-certfile=/opt/soft/etcd/ssl/server.pem"

    And the corresponding fragment of flanneld.service:

    ExecStart=/opt/soft/flannel/flanneld -ip-masq -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} $FLANNEL_OPTIONS -etcd-prefix=${FLANNEL_ETCD_PREFIX}
    

      

    Installing and Configuring Flannel

    Preliminary notes

    Flannel must be configured on every Docker host; flanneld runs as a systemd service on each Docker host.

    Installing flannel is very simple: just download the binary release (you can, of course, also build it yourself).
    Open https://github.com/coreos/flannel/releases and download the latest version for your architecture, usually amd64 (my hosts run CentOS 7.6).
    For example, my download URL is: https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz

    Installation and configuration

    Then run the following commands:

    wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
    tar -zxvf flannel-v0.11.0-linux-amd64.tar.gz

    Note where the extracted files are, for example /opt/soft/flannel/flanneld.
    Add a systemd unit for the flannel service; a simple one is enough.

    # edit the file
    vi /usr/lib/systemd/system/flanneld.service
    # with the following content
    [Unit]
    Description=Flanneld overlay address etcd agent
    After=network-online.target network.target
    Before=docker.service
    
    [Service]
    Type=notify
    EnvironmentFile=/etc/default/flanneld.conf
    ExecStart=/opt/soft/flannel/flanneld -ip-masq -etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} -etcd-prefix=${FLANNEL_ETCD_PREFIX} $FLANNEL_OPTIONS
    ExecStartPost=/opt/soft/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /etc/default/docker
    Restart=on-failure
    
    [Install]
    WantedBy=multi-user.target
    

      flanneld.service references two configuration files: one has to be written by hand, the other is generated automatically by mk-docker-opts.sh.
    The hand-written one, /etc/default/flanneld.conf, has the following content:

    # Flanneld configuration options
    
    # etcd url location.  Point this to the server where etcd runs
    #FLANNEL_ETCD_ENDPOINTS="http://etcd.goodcol.com:2379"
    FLANNEL_ETCD_ENDPOINTS="https://192.168.1.54:2379,https://192.168.1.65:2379,https://192.168.1.105:2379"
    
    # etcd config key.  This is the configuration key that flannel queries
    # For address range assignment
    FLANNEL_ETCD_PREFIX="/flannel/network"
    
    # Any additional options that you want to pass
    FLANNEL_OPTIONS="-etcd-cafile=/opt/soft/etcd/ssl/ca.pem -etcd-keyfile=/opt/soft/etcd/ssl/server-key.pem  -etcd-certfile=/opt/soft/etcd/ssl/server.pem"
    

      Next edit the file: vim /usr/lib/systemd/system/docker.service
    Find the ExecStart line and add a line EnvironmentFile=/etc/default/docker above it,
    then append the variable $DOCKER_NETWORK_OPTIONS to the end of ExecStart (any other flags were added by your existing Docker installation for its own reasons; leave them alone, the only change needed here is this variable).
    An example is shown below (--graph sets the Docker data root; only set it if you want the data stored in a custom location). An equivalent systemd drop-in is sketched after the example.

    (preceding lines omitted)
    EnvironmentFile=/etc/default/docker
    ExecStart=/usr/bin/dockerd --graph=/opt/soft/docker -H fd:// --containerd=/run/containerd/containerd.sock $DOCKER_NETWORK_OPTIONS
    (following lines omitted)
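
    Instead of editing docker.service in place, an equivalent approach (my own suggestion, not from the original article) is a systemd drop-in; the ExecStart below mirrors the example above, so keep whatever flags your installation already uses:

    mkdir -p /etc/systemd/system/docker.service.d
    cat > /etc/systemd/system/docker.service.d/flannel.conf <<'EOF'
    [Service]
    EnvironmentFile=/etc/default/docker
    # an empty ExecStart= clears the packaged setting before redefining it
    ExecStart=
    ExecStart=/usr/bin/dockerd --graph=/opt/soft/docker -H fd:// --containerd=/run/containerd/containerd.sock $DOCKER_NETWORK_OPTIONS
    EOF
    systemctl daemon-reload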
    

      Run the following command

    touch /etc/default/docker

    Once everything is in place, restart flannel and Docker:

    systemctl daemon-reload
    systemctl start flanneld
    systemctl enable flanneld
    systemctl restart docker

    Repeat the same flannel configuration on every other Docker host.

    Verification

    1. Check the subnets flannel has registered, for example:

    ETCDCTL_API=2 /opt/soft/etcd/etcdctl \
    --endpoints="https://192.168.1.54:2379,https://192.168.1.65:2379,https://192.168.1.105:2379" \
    --ca-file=/opt/soft/etcd/ssl/ca.pem \
    --key-file=/opt/soft/etcd/ssl/server-key.pem \
    --cert-file=/opt/soft/etcd/ssl/server.pem \
    ls /flannel/network/subnets
    

    2. On every flannel host, check the subnet information with cat /run/flannel/subnet.env; the auto-generated content looks like this:

    [root@host02 etcd]# cat /run/flannel/subnet.env
    FLANNEL_NETWORK=10.244.0.0/16
    FLANNEL_SUBNET=10.244.21.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=true

    3. On every flannel host, check the auto-generated Docker startup parameters with cat /etc/default/docker; the content looks like this:

    [root@host02 etcd]# cat /etc/default/docker
    DOCKER_OPT_BIP="--bip=10.244.21.1/24"
    DOCKER_OPT_IPMASQ="--ip-masq=false"
    DOCKER_OPT_MTU="--mtu=1450"
    DOCKER_NETWORK_OPTIONS=" --bip=10.244.21.1/24 --ip-masq=false --mtu=1450"

    4. On each flannel host, run ifconfig and check that the flannel.1 and docker0 interfaces are in the same subnet, and that this subnet matches the one in /etc/default/docker; if so, everything is OK. Example:

    docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
            inet 10.244.21.1  netmask 255.255.255.0  broadcast 10.244.21.255
            inet6 fe80::42:38ff:fe0a:39a2  prefixlen 64  scopeid 0x20<link>
            ether 02:42:38:0a:39:a2  txqueuelen 0  (Ethernet)
            RX packets 24  bytes 5294 (5.1 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 53  bytes 6379 (6.2 KiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
            inet 192.168.1.65  netmask 255.255.255.0  broadcast 192.168.1.255
            inet6 fe80::fe5d:7f87:1b5:2290  prefixlen 64  scopeid 0x20<link>
            ether 00:0c:29:85:c6:57  txqueuelen 1000  (Ethernet)
            RX packets 144773824  bytes 50608795240 (47.1 GiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 141257989  bytes 76184941003 (70.9 GiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    
    flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
            inet 10.244.21.0  netmask 255.255.255.255  broadcast 0.0.0.0
            inet6 fe80::80ea:28ff:fecb:a9af  prefixlen 64  scopeid 0x20<link>
            ether 82:ea:28:cb:a9:af  txqueuelen 0  (Ethernet)
            RX packets 670  bytes 43380 (42.3 KiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 774  bytes 62322 (60.8 KiB)
            TX errors 0  dropped 48 overruns 0  carrier 0  collisions 0

    If the network does not look right, it is probably due to a mistake in the steps above; rerun the restarts as follows:

    systemctl daemon-reload
    systemctl restart flanneld
    systemctl enable flanneld
    systemctl restart docker

    To look up a specific container's IP address, run docker inspect --format='{{.NetworkSettings.IPAddress}}' <ID or NAME>, or simply docker inspect <ID or NAME> for the full details.
    To verify cross-host networking, start an arbitrary container on each of two different flannel hosts and ping one container's IP address from inside the other; if a container runs an HTTP service, a curl request works just as well.
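
    A minimal cross-host test might look like this (just a sketch; it assumes the busybox image is available on both hosts, and the IP shown is only an example):

    # on host A: start a container and print its IP
    docker run -d --name pingtest busybox sleep 3600
    docker inspect --format='{{.NetworkSettings.IPAddress}}' pingtest   # e.g. 10.244.21.2
    
    # on host B: ping the IP printed above from inside another container
    docker run --rm busybox ping -c 3 10.244.21.2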

    Binary Deployment of kubernetes-1.17.0

    IP address       Label    Components
    192.168.1.54     master   apiserver, scheduler, controller-manager, etcd, docker, flannel
    192.168.1.65     node     kubelet, kube-proxy, docker, flannel
    192.168.1.105    node     kubelet, kube-proxy, docker, flannel

    Environment initialization

    Downloading the files

    A note on the three archives below: kubernetes-server contains everything in kubernetes-node, and kubernetes-node contains everything in kubernetes-client, so kubernetes-server is the most complete. The other two exist only so that you can avoid downloading the full server package when you just need the node or client files; download them only if you actually need them.

    wget https://dl.k8s.io/v1.17.0/kubernetes-server-linux-amd64.tar.gz
    wget https://dl.k8s.io/v1.17.0/kubernetes-node-linux-amd64.tar.gz
    wget https://dl.k8s.io/v1.17.0/kubernetes-client-linux-amd64.tar.gz

    Disable the firewall and SELinux

    systemctl stop firewalld
    systemctl disable firewalld
    
    setenforce 0
    sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux

    Disable swap and configure kernel parameters for Docker/Kubernetes

    swapoff -a
    # comment out the swap line in /etc/fstab
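    # one possible way to do that automatically (an assumption; double-check /etc/fstab afterwards):
    sed -i -E '/[[:space:]]swap[[:space:]]/ s/^/#/' /etc/fstab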
    echo 0 > /proc/sys/vm/swappiness # make swappiness=0 take effect immediately
    cat >  /etc/sysctl.d/k8s.conf <<-EOF
    vm.swappiness = 0
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl -p /etc/sysctl.d/k8s.conf # apply the configuration

    Install cfssl for generating certificates (if this machine has no Internet access, download the files elsewhere and copy them over)

    wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
    wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
    wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
    chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
    mv cfssl_linux-amd64 /usr/local/bin/cfssl
    mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
    mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

    Create the k8s CA certificate

    mkdir -p /etc/kubernetes/ssl && cd $_
    cat << EOF | tee ca-config.json
    {
      "signing": {
        "default": {
          "expiry": "87600h"
        },
        "profiles": {
          "kubernetes": {
             "expiry": "87600h",
             "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"
            ]
          }
        }
      }
    }
    EOF
    cat << EOF | tee ca-csr.json
    {
        "CN": "kubernetes",
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "Beijing",
                "ST": "Beijing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }
    EOF

    Generate the certificate and private key.
    This produces the files the CA requires, ca-key.pem (private key) and ca.pem (certificate), as well as ca.csr (the certificate signing request), which can be used for cross-signing or re-signing.

    cfssl gencert -initca ca-csr.json | cfssljson -bare ca - && ll
    

    Create the etcd server certificate (this etcd certificate is not used in this article, so the step was not executed; you can skip it)

    cat << EOF | tee etcd-csr.json
    {
        "CN": "etcd",
        "hosts": [
        "192.168.1.54",
        "192.168.1.65",
        "192.168.1.105"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "Beijing",
                "ST": "Beijing"
            }
        ]
    }
    EOF
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
    

    Generate the apiserver certificate and private key.
    Note that hosts lists the apiserver host's IP address and also 10.96.0.1, the first IP of the service-cluster-ip-range defined for the cluster later on.

    cat << EOF | tee server-csr.json
    {
        "CN": "kubernetes",
        "hosts": [
          "192.168.1.54",
          "127.0.0.1",
          "10.96.0.1",
          "kubernetes",
          "kubernetes.default",
          "kubernetes.default.svc",
          "kubernetes.default.svc.cluster",
          "kubernetes.default.svc.cluster.local"
        ],
        "key": {
            "algo": "rsa",
            "size": 2048
        },
        "names": [
            {
                "C": "CN",
                "L": "Beijing",
                "ST": "Beijing",
                "O": "k8s",
                "OU": "System"
            }
        ]
    }
    EOF
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

    Create the kube-proxy certificate and private key

    cat << EOF | tee kube-proxy-csr.json
    {
      "CN": "system:kube-proxy",
      "hosts": [],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "L": "Beijing",
          "ST": "Beijing",
          "O": "k8s",
          "OU": "System"
        }
      ]
    }
    EOF
    cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

    Install the kube-apiserver service

    # extract the downloaded binary archive kubernetes-server-linux-amd64.tar.gz
    mkdir -p /opt/soft/k8s && cd $_
    tar -xzvf kubernetes-server-linux-amd64.tar.gz
    cd kubernetes/server/bin/
    cp -p kube-apiserver /usr/bin/
    mkdir -p /etc/kubernetes && mkdir -p /var/log/kubernetes

    Generate the token.csv file

    echo "$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /etc/kubernetes/token.csv
    cat /etc/kubernetes/token.csv
    # content of the generated token.csv
    7a348d935970b45991367f8f02081535,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

    Create the apiserver configuration file, setting the etcd addresses in it

    cat > /etc/kubernetes/apiserver <<-EOF
    KUBE_API_OPTS="--etcd-servers=https://192.168.1.54:2379,https://192.168.1.65:2379,https://192.168.1.105:2379 \
    --service-cluster-ip-range=10.96.0.0/24 \
    --service-node-port-range=30000-32767 \
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
    --logtostderr=true \
    --log-dir=/var/log/kubernetes \
    --authorization-mode=Node,RBAC \
    --enable-bootstrap-token-auth=true \
    --token-auth-file=/etc/kubernetes/token.csv \
    --v=2 \
    --etcd-cafile=/opt/soft/etcd/ssl/ca.pem \
    --etcd-certfile=/opt/soft/etcd/ssl/server.pem \
    --etcd-keyfile=/opt/soft/etcd/ssl/server-key.pem \
    --tls-cert-file=/etc/kubernetes/ssl/server.pem \
    --tls-private-key-file=/etc/kubernetes/ssl/server-key.pem \
    --client-ca-file=/etc/kubernetes/ssl/ca.pem \
    --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
    --allow-privileged=true"
    EOF

    Configure the kube-apiserver systemd service

    cat > /usr/lib/systemd/system/kube-apiserver.service <<-EOF
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=etcd.service
    Wants=etcd.service
     
    [Service]
    EnvironmentFile=/etc/kubernetes/apiserver
    ExecStart=/usr/bin/kube-apiserver \$KUBE_API_OPTS
    Restart=on-failure
    Type=notify
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    EOF

    Start the service

    systemctl daemon-reload
    systemctl enable kube-apiserver
    systemctl restart kube-apiserver

    Check that it is running

    ps -ef |grep kube-apiserver
    systemctl status kube-apiserver

    Install the kube-controller-manager service

    Copy the binary

    cd /opt/soft/k8s/kubernetes && cp -p server/bin/kube-controller-manager /usr/bin

    Configure the startup parameters

    cat > /etc/kubernetes/controller-manager <<-EOF
    KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
    --v=2 \
    --master=127.0.0.1:8080 \
    --leader-elect=true \
    --bind-address=127.0.0.1 \
    --service-cluster-ip-range=10.96.0.0/24 \
    --cluster-name=kubernetes \
    --allocate-node-cidrs=true \
    --cluster-cidr=10.244.0.0/16 \
    --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
    --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
    --root-ca-file=/etc/kubernetes/ssl/ca.pem \
    --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem"
    EOF

    Configure the systemd service

    cat << EOF | tee /usr/lib/systemd/system/kube-controller-manager.service
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=kube-apiserver.service
    Requires=kube-apiserver.service
     
    [Service]
    EnvironmentFile=/etc/kubernetes/controller-manager
    ExecStart=/usr/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
    Restart=on-failure
    LimitNOFILE=65536
     
    [Install]
    WantedBy=multi-user.target
    EOF

    Start the service

    systemctl daemon-reload
    systemctl enable kube-controller-manager
    systemctl restart kube-controller-manager

    Check that it is running

    ps -ef |grep kube-controller-manager
    systemctl status kube-controller-manager

    Install the kube-scheduler service

    Copy the binary

    cd /opt/soft/k8s/kubernetes && cp -p server/bin/kube-scheduler /usr/bin

    Configure the startup parameters

    cat << EOF | tee /etc/kubernetes/scheduler
    KUBE_SCHEDULER_OPTS="--logtostderr=true --v=2 --master=127.0.0.1:8080 --leader-elect"
    EOF

    Configure the systemd service

    cat > /usr/lib/systemd/system/kube-scheduler.service <<-EOF
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes
     
    [Service]
    EnvironmentFile=/etc/kubernetes/scheduler
    ExecStart=/usr/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
    Restart=on-failure
     
    [Install]
    WantedBy=multi-user.target
    EOF

    Start the service

    systemctl daemon-reload
    systemctl enable kube-scheduler
    systemctl restart kube-scheduler

    Check that it is running

    ps -ef |grep kube-scheduler
    systemctl status kube-scheduler

    Configure environment variables

    vim /etc/profile
    # add the binaries to PATH
    export PATH=/opt/soft/k8s/kubernetes/server/bin:$PATH
    # apply the change
    source /etc/profile
    

    Check the cluster status

    [root@server1 kubernetes]# kubectl get cs,nodes
    NAME                                 STATUS    MESSAGE             ERROR
    componentstatus/scheduler            Healthy   ok                  
    componentstatus/controller-manager   Healthy   ok                  
    componentstatus/etcd-1               Healthy   {"health":"true"}   
    componentstatus/etcd-2               Healthy   {"health":"true"}   
    componentstatus/etcd-0               Healthy   {"health":"true"} 

    PS: cs in the command is the abbreviation for componentstatus.
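
    In other words, the following two commands are equivalent:

    kubectl get componentstatuses
    kubectl get cs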

    Deploying the node components

    Copy the kubelet and kube-proxy binaries from the archive to the /usr/bin directory of each node:

    cd /opt/soft/k8s/kubernetes/server/bin
    scp kubelet kube-proxy root@192.168.1.65:/usr/bin
    scp kubelet kube-proxy root@192.168.1.105:/usr/bin

    Create the script file environment.sh (still working on the master)

    mkdir -p /etc/kubernetes-node && cd $_ && touch environment.sh

    The content of environment.sh is as follows:

    # BOOTSTRAP_TOKEN is the value from /etc/kubernetes/token.csv
    BOOTSTRAP_TOKEN=7a348d935970b45991367f8f02081535
    
    # KUBE_APISERVER runs on the master, so use the master's IP address
    KUBE_APISERVER="https://192.168.1.54:6443"
    
    # Create the kubelet bootstrapping kubeconfig
     
    # set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/ssl/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=bootstrap.kubeconfig
     
    # set client authentication parameters
    kubectl config set-credentials kubelet-bootstrap \
      --token=${BOOTSTRAP_TOKEN} \
      --kubeconfig=bootstrap.kubeconfig
     
    # set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kubelet-bootstrap \
      --kubeconfig=bootstrap.kubeconfig
     
    # use the default context
    kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
    
    # Create the kube-proxy kubeconfig
     
    kubectl config set-cluster kubernetes \
      --certificate-authority=/etc/kubernetes/ssl/ca.pem \
      --embed-certs=true \
      --server=${KUBE_APISERVER} \
      --kubeconfig=kube-proxy.kubeconfig
     
    kubectl config set-credentials kube-proxy \
      --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \
      --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \
      --embed-certs=true \
      --kubeconfig=kube-proxy.kubeconfig
     
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=kube-proxy \
      --kubeconfig=kube-proxy.kubeconfig
     
    kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
    

    Run environment.sh to generate the config files bootstrap.kubeconfig and kube-proxy.kubeconfig:

    sh environment.sh && ll

    Create the kubelet parameter configuration file

    cat << EOF | tee kubelet.config
    kind: KubeletConfiguration
    apiVersion: kubelet.config.k8s.io/v1beta1
    address: 192.168.1.54
    port: 10250
    readOnlyPort: 10255
    cgroupDriver: cgroupfs
    clusterDNS: ["10.96.0.2"]
    clusterDomain: cluster.local.
    failSwapOn: false
    authentication:
      anonymous:
        enabled: true
    EOF

    Create the kubelet configuration file.
    Note: you may not be able to pull the k8s.gcr.io/pause:3.1 image referenced below because of network restrictions; for an offline install, import the image onto each host manually, or consider using docker.io/xzxiaoshan/pause:3.1 instead (a retag sketch is shown below).
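
    If a node cannot reach k8s.gcr.io, one possible workaround (a sketch based on the mirror image mentioned above) is to pull the mirror and retag it locally so that the --pod-infra-container-image value below still resolves:

    docker pull docker.io/xzxiaoshan/pause:3.1
    docker tag docker.io/xzxiaoshan/pause:3.1 k8s.gcr.io/pause:3.1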

    cat << EOF | tee kubelet
    KUBELET_OPTS="--logtostderr=true \
    --v=2 \
    --hostname-override=192.168.1.54 \
    --kubeconfig=/etc/kubernetes-node/kubelet.kubeconfig \
    --bootstrap-kubeconfig=/etc/kubernetes-node/bootstrap.kubeconfig \
    --config=/etc/kubernetes-node/kubelet.config \
    --cert-dir=/etc/kubernetes-node/ssl \
    --pod-infra-container-image=k8s.gcr.io/pause:3.1"
    EOF
    

      Create the kubelet.service file

    cat << EOF | tee /usr/lib/systemd/system/kubelet.service
    [Unit]
    Description=Kubernetes Kubelet
    After=docker.service
    Requires=docker.service
     
    [Service]
    EnvironmentFile=/etc/kubernetes-node/kubelet
    ExecStart=/usr/bin/kubelet \$KUBELET_OPTS
    Restart=on-failure
    KillMode=process
     
    [Install]
    WantedBy=multi-user.target
    EOF
    

      Create the kube-proxy-config.yaml file

    cat << EOF | tee /etc/kubernetes-node/kube-proxy-config.yaml
    kind: KubeProxyConfiguration
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    clientConnection:
      burst: 200
      kubeconfig: "/etc/kubernetes-node/kube-proxy.kubeconfig"
      qps: 100
    bindAddress: 192.168.1.54
    healthzBindAddress: 192.168.1.54:10256
    metricsBindAddress: 192.168.1.54:10249
    enableProfiling: true
    clusterCIDR: 10.244.0.0/16
    hostnameOverride: 192.168.1.54
    mode: "ipvs"
    portRange: ""
    kubeProxyIPTablesConfiguration:
      masqueradeAll: false
    kubeProxyIPVSConfiguration:
      scheduler: rr
      excludeCIDRs: []
    EOF
    

      Create the kube-proxy.service file

    cat << EOF | tee /usr/lib/systemd/system/kube-proxy.service
    [Unit]
    Description=Kubernetes Proxy
    After=network.target
     
    [Service]
    ExecStart=/usr/bin/kube-proxy \
      --config=/etc/kubernetes-node/kube-proxy-config.yaml \
      --logtostderr=true \
      --v=2
    Restart=on-failure
     
    [Install]
    WantedBy=multi-user.target
    EOF

    At this point the /etc/kubernetes-node directory contains the following files:

    ll /etc/kubernetes-node
    bootstrap.kubeconfig  environment.sh  kubelet  kubelet.config  kube-proxy.kubeconfig  kube-proxy-config.yaml

    Copy the files to the nodes

    # copy the /etc/kubernetes-node directory to /etc/ on each node
    scp -r /etc/kubernetes-node root@192.168.1.65:/etc/
    scp -r /etc/kubernetes-node root@192.168.1.105:/etc/
    # copy the kubelet.service and kube-proxy.service files to /usr/lib/systemd/system/ on each node
    scp /usr/lib/systemd/system/{kubelet.service,kube-proxy.service} root@192.168.1.65:/usr/lib/systemd/system/
    scp /usr/lib/systemd/system/{kubelet.service,kube-proxy.service} root@192.168.1.105:/usr/lib/systemd/system/

    Adjust the address and hostname-override parameters on each node

    On each node, modify the following files:
    set address in /etc/kubernetes-node/kubelet.config to the node's own IP address;
    set hostname-override in /etc/kubernetes-node/kubelet to the node's own IP address;
    set the corresponding IP addresses in /etc/kubernetes-node/kube-proxy-config.yaml to the node's own IP address.
    PS: on a node you can replace them all in one go with sed -i "s/192.168.1.54/192.168.1.65/g" /etc/kubernetes-node/{kubelet.config,kubelet,kube-proxy-config.yaml} (substituting that node's own IP).

    On the master, bind the kubelet-bootstrap user to the cluster role system:node-bootstrapper:

    kubectl create clusterrolebinding kubelet-bootstrap \
      --clusterrole=system:node-bootstrapper \
      --user=kubelet-bootstrap
    # the command prints the following line
    clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
    

    Start the kubelet service on every node

    systemctl daemon-reload
    systemctl enable kubelet
    systemctl restart kubelet

    Check the service status

    ps -ef|grep kubelet
    systemctl status kubelet

    Approve the kubelet CSR requests; all of the kubectl commands below are run on the master.

    # after kubelet starts on a node it automatically sends a join (certificate) request to the master; every CSR shown in Pending state must be approved by the master
    [root@server1 ~]# kubectl get csr
    NAME                                                   AGE         REQUESTOR           CONDITION
    node-csr-FyLBxx1d5NsScmsd9H8jHOEy4qnKR9IzddbeDKq1KmA   47s         kubelet-bootstrap   Pending
    node-csr-KgAYOBy3eXgDYAQi-44ElYOo6pnNqUEQuiIIKBoMcg8   67s         kubelet-bootstrap   Pending
    
    # manually approve the CSR requests (the argument is the NAME column from the kubectl get csr output)
    [root@server1 ~]# kubectl certificate approve node-csr-FyLBxx1d5NsScmsd9H8jHOEy4qnKR9IzddbeDKq1KmA
    certificatesigningrequest.certificates.k8s.io/node-csr-FyLBxx1d5NsScmsd9H8jHOEy4qnKR9IzddbeDKq1KmA approved
    [root@server1 ~]# kubectl certificate approve node-csr-KgAYOBy3eXgDYAQi-44ElYOo6pnNqUEQuiIIKBoMcg8
    certificatesigningrequest.certificates.k8s.io/node-csr-KgAYOBy3eXgDYAQi-44ElYOo6pnNqUEQuiIIKBoMcg8 approved
    
    # list the CSRs again: the CONDITION has changed
    [root@server1 ~]# kubectl get csr
    NAME                                                   AGE         REQUESTOR           CONDITION
    node-csr-FyLBxx1d5NsScmsd9H8jHOEy4qnKR9IzddbeDKq1KmA   3m20s       kubelet-bootstrap   Approved,Issued
    node-csr-KgAYOBy3eXgDYAQi-44ElYOo6pnNqUEQuiIIKBoMcg8   3m40s       kubelet-bootstrap   Approved,Issued

    List the nodes

    [root@cib-server1 ~]# kubectl get nodes
    NAME            STATUS   ROLES    AGE     VERSION
    192.168.1.105   Ready    <none>   2m16s   v1.17.0
    192.168.1.65    Ready    <none>   113s    v1.17.0

    Label the nodes
    The example below labels 192.168.1.54 as master and the other nodes as node (any other names you need will work too):

    kubectl label node 192.168.1.54  node-role.kubernetes.io/master='master'
    kubectl label node 192.168.1.65  node-role.kubernetes.io/node='node'
    kubectl label node 192.168.1.105 node-role.kubernetes.io/node='node'

    Run kubectl get nodes again and the ROLES column will now be populated.
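
    To double-check, list the nodes again; a label applied by mistake can be removed by appending a dash to its key, for example:

    kubectl get nodes
    # remove an incorrect label (example only):
    # kubectl label node 192.168.1.54 node-role.kubernetes.io/master-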

    Start the kube-proxy service on every node

    systemctl daemon-reload
    systemctl enable kube-proxy
    systemctl restart kube-proxy

    Check the service status

    ps -ef|grep kube-proxy
    systemctl status kube-proxy
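
    Because kube-proxy-config.yaml above sets mode: "ipvs", you can optionally confirm that IPVS rules were created (a sketch; it assumes the ipvsadm tool is installed, e.g. via yum install -y ipvsadm):

    ipvsadm -Ln
    # a virtual server for the kubernetes service, e.g. 10.96.0.1:443 -> 192.168.1.54:6443, should appear in the list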

    This completes the single-master, multi-node deployment.
    The steps in this article work on both Red Hat 7.4 and CentOS 7.6.

    End

  • Original post: https://www.cnblogs.com/hailun1987/p/14186667.html