  • K8S: Deploying a Highly Available Cluster from Binaries

    Overview

    Install a highly available Kubernetes 1.20.x cluster from binaries on CentOS|RHEL.

    Environment

    Hardware and Software

    Kubernetes version selection

    • Kubernetes v1.21.2-alpha.1: alpha (internal test) release

    • Kubernetes v1.21.0-beta.1: beta (public test) release

    • Kubernetes v1.20.2: stable release


    Use the CHANGELOG to pick the versions of related components

    • Search the changelog for the default etcd version ('Update default etcd server version to'), for example:

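      For example, a quick way to pull that line out of the 1.20 changelog from the shell (assumes direct access to raw.githubusercontent.com):

      curl -sL https://raw.githubusercontent.com/kubernetes/kubernetes/master/CHANGELOG/CHANGELOG-1.20.md \
        | grep -i 'Update default etcd server version to'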

    Network Planning

    Network              Hostname      Role    Components
    192.168.10.221/24    k8s-master01  master  kube-apiserver, kube-controller-manager, kube-scheduler, etcd
    192.168.10.222/24    k8s-master02  master  kube-apiserver, kube-controller-manager, kube-scheduler, etcd
    192.168.10.223/24    k8s-master03  master  kube-apiserver, kube-controller-manager, kube-scheduler, etcd
    192.168.10.231/24    k8s-node01    node    kubelet, kube-proxy, docker
    192.168.10.232/24    k8s-node02    node    kubelet, kube-proxy, docker
    192.168.10.225/32    VIP
    172.16.0.0/16        Pod CIDR
    10.96.0.0/12         Service CIDR
    • It is recommended that the host, Pod, and Service networks use different address ranges

    Architecture Diagram


    Base Environment Configuration (all nodes)

    Unless otherwise noted, apply the configuration on all nodes.

    Upgrade the System

    On CentOS 7|RHEL 7, the kernel must be upgraded to 4.18+ because of docker|kubernetes bugs.

    Background

    The stock 3.10.x kernel shipped with CentOS|RHEL 7.x has bugs that make Docker and Kubernetes unstable.

    Solutions
    • Upgrade the kernel to 4.4.x (kernel-lt) or 4.18.x (kernel-ml) or later

    • Or compile the kernel manually with the CONFIG_MEMCG_KMEM feature disabled

    • Or install Docker 18.09.1 or later. However, since kubelet also sets kmem (it vendors runc), kubelet must be recompiled with GOFLAGS="-tags=nokmem":

      git clone --branch v1.14.1 --single-branch --depth 1 https://github.com/kubernetes/kubernetes
      cd kubernetes
      KUBE_GIT_VERSION=v1.14.1 ./build/run.sh make kubelet GOFLAGS="-tags=nokmem"
      

    Upgrade System Packages

    yum update -y --exclude=kernel*
    reboot
    

    Upgrade the Kernel (RHEL7|CentOS7)

    # Download the rpm packages
    ## Official mirror: http://elrepo.reloumirrors.net/kernel/el7/x86_64/RPMS/
    wget http://hkg.mirror.rackspace.com/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
    wget http://hkg.mirror.rackspace.com/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
    
    
    # Install
    yum localinstall -y kernel-ml*
    
    # Set the newly installed kernel as the default boot entry
    grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
    
    # Enable user namespaces on CentOS/RHEL 7
    grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
    
    # Check which namespaces are enabled (=y means enabled)
    grep -E "CONFIG_(USER|IPC|PID|UTS|NET)_NS" $(ls /boot/config*|tail -1)
    
    # Disable user namespaces on CentOS/RHEL 7
    grubby --remove-args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
    

    Upgrade the Kernel (CentOS8|RHEL8)

    # Either upgrade with dnf, or follow the EL7 steps above
    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    yum install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
    
    dnf --disablerepo=* --enablerepo=elrepo-kernel -y install kernel-ml kernel-ml-devel
    
    grubby --default-kernel && reboot
    
    • With dnf, the new kernel is automatically set as the default boot entry

    Verify

    grubby --default-kernel
    reboot
    uname -r
    

    Configure Hostnames and the hosts File

    hostnamectl --static set-hostname  k8s-master01
    hostnamectl --static set-hostname  k8s-master02
    hostnamectl --static set-hostname  k8s-master03
    hostnamectl --static set-hostname  k8s-node01
    hostnamectl --static set-hostname  k8s-node02
    
    
    cat >> /etc/hosts <<-'EOF'
    
    # k8s hosts
    192.168.10.221 k8s-master01
    192.168.10.222 k8s-master02
    192.168.10.223 k8s-master03
    
    192.168.10.231 k8s-node01
    192.168.10.232 k8s-node02
    
    192.168.10.225 k8s-vip
    EOF
    

    Disable Unneeded Services

    /sbin/chkconfig rhnsd off
    systemctl stop rhnsd
    systemctl disable --now NetworkManager
    systemctl disable --now firewalld
    systemctl disable --now postfix
    systemctl disable --now rhsmcertd
    systemctl disable --now irqbalance.service
    

    Disable the Firewall

    systemctl disable --now firewalld.service
    systemctl disable --now dnsmasq
    systemctl disable --now NetworkManager  # RHEL7|CentOS7 only
    # dnsmasq breaks DNS resolution inside docker containers, so disable it
    
    • If dnsmasq is enabled, docker containers cannot resolve domain names, so it must be disabled

    • On RHEL7|CentOS7, disable NetworkManager

    Disable SELinux

    setenforce 0
    # Edit /etc/selinux/config and change SELINUX=enforcing to SELINUX=disabled
    sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
    # This change only takes effect after a reboot.
    

    Disable Swap

    cp /etc/fstab /etc/fstab_bak
    swapoff -a && sysctl -w vm.swappiness=0
    sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
    # Alternative ways to comment out the swap entry (pick one)
    sed -i 's/.*swap.*/#&/' /etc/fstab
    sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
    

    Configure Passwordless SSH Login

    mkdir -p ~/.ssh
    chmod 700 ~/.ssh
    cd ~/.ssh
    rm -f ~/.ssh/*
    
    ssh-keygen -b 2048 -q -t rsa -P '' -f ~/.ssh/id_rsa
    # or
    # ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa
    
    ssh-keygen -q -t dsa -P '' -f ~/.ssh/id_dsa
    
    cat ~/.ssh/*.pub >> ~/.ssh/authorized_keys
    ssh-keyscan -t ecdsa -H "$host_ip" >> ~/.ssh/known_hosts
    
    chmod 600 id_dsa id_rsa
    chmod 644 id_dsa.pub id_rsa.pub 
    chmod 644 authorized_keys
    

    Verify

    alias mssh='ssh -o ConnectTimeout=3 -o ConnectionAttempts=5 -o PasswordAuthentication=no -o StrictHostKeyChecking=no'
    
    for host in $(grep 'k8s' /etc/hosts | grep -Ev '^#|vip' | awk '{print $2}'); do
      echo "------ ${host} ------"
      ping $host -c 1 >/dev/null && mssh $host date
    done
    

    Install Dependency Packages

    RHEL|CentOS (yum)

    Repo file contents
    # Aliyun docker-ce repo for CentOS 7
    cat > /etc/yum.repos.d/ali-docker-ce.repo <<-'EOF'
    [docker-ce-stable]
    name=Docker CE Stable - $basearch
    baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
    enabled=1
    gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
    EOF
    
    # Tsinghua mirror repo
    cat > /etc/yum.repos.d/th-docker-ce.repo <<-'EOF'
    [docker-ce-stable]
    name=Docker CE Stable - $basearch
    baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/$basearch/stable
    enabled=1
    gpgcheck=1
    gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg
    EOF
    
    Install base packages
    # Aliyun CentOS 7 base repo
    curl -o /etc/yum.repos.d/aliyun.repo https://mirrors.aliyun.com/repo/Centos-7.repo
    sed -ri -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' -e 's/$releasever/7/g' /etc/yum.repos.d/aliyun.repo
    
    # Install the EPEL repo (needed for container-selinux)
    yum -y install epel-release
    
    ## Install base packages
    yum -y install bash-completion net-tools tree wget curl make cmake gcc gcc-c++ createrepo yum-utils device-mapper-persistent-data lvm2 jq psmisc vim lrzsz git vim-enhanced ntpdate ipvsadm ipset sysstat conntrack-tools libseccomp
    
    ## Verify
    rpm -q --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" yum-utils container-selinux device-mapper-persistent-data lvm2 git wget jq psmisc vim net-tools conntrack-tools ipvsadm ipset jq sysstat curl libseccomp ntpdate
    ipvsadm -l -n
    
    # On the master nodes, install the high-availability software
    yum -y install keepalived haproxy
    
    Kubernetes yum repo (optional, not used here)
    # Kubernetes repo
    cat > /etc/yum.repos.d/kubernetes.repo <<-'EOF'
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
    • The value in [] is the repository id; it must be unique and identifies the repository
    • name: repository name, free-form
    • baseurl: repository URL
    • enabled: whether the repository is enabled; 1 (the default) means enabled
    • gpgcheck: whether to verify packages downloaded from this repository; 1 means verify
    • repo_gpgcheck: whether to verify the repository metadata (the package list); 1 means verify
    • gpgkey=URL: location of the public key used for signature verification; required when gpgcheck=1, unnecessary when gpgcheck=0. A quick repo check follows below.
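    A quick way to confirm the repos written above are picked up:

    yum clean all && yum makecache fast
    yum repolist enabled | grep -E 'docker-ce-stable|kubernetes'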

    Time Configuration

    Set the Time Zone (if needed)

    # Set the system time zone
    timedatectl set-timezone Asia/Shanghai
    
    # Write the current UTC time to the hardware clock
    timedatectl set-local-rtc 0
    

    Time Synchronization

    • master01 synchronizes time from the Internet; the other nodes synchronize from master01
    • the chrony server node runs the time service; the nodes that sync from it disable their own ntpd service
    # Sync the clock once
    ntpdate cn.pool.ntp.org
    
    # chrony
    yum install -y chrony
    
    ## Configure the time servers in /etc/chrony.conf (server side)
    # Note: comment out the default NTP servers; Aliyun's public NTP servers are used here
    server ntp.aliyun.com iburst
    server ntp1.aliyun.com iburst
    server ntp2.aliyun.com iburst
    server ntp3.aliyun.com iburst
    server ntp4.aliyun.com iburst
    server ntp5.aliyun.com iburst
    server ntp6.aliyun.com iburst
    server ntp7.aliyun.com iburst
    
    # Allow NTP client access from local network.
    allow 192.168.10.0/24
    
    
    ## On the other nodes, configure master01 as the time source
    vi /etc/chrony.conf
    server 192.168.10.221 iburst
    
    ## Start the service
    systemctl enable --now chronyd
    
    # Verify (a source prefixed with ^* is synchronized)
    chronyc sources
    
    

    Configure Kernel Parameters

    cat > /etc/sysctl.d/99-k8s.conf <<-'EOF'
    vm.swappiness=0
    vm.overcommit_memory=1
    vm.panic_on_oom=0
    fs.may_detach_mounts = 1
    fs.inotify.max_user_watches=89100
    fs.inotify.max_user_instances=8192
    fs.file-max=52706963
    fs.nr_open=52706963
    
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.netfilter.nf_conntrack_max=2310720
    net.core.somaxconn = 16384
    net.ipv4.ip_forward = 1
    net.ipv4.ip_conntrack_max = 65536
    net.ipv4.tcp_keepalive_time = 600
    net.ipv4.tcp_keepalive_probes = 3
    net.ipv4.tcp_keepalive_intvl =15
    net.ipv4.tcp_max_tw_buckets = 36000
    net.ipv4.tcp_max_orphans = 327680
    net.ipv4.tcp_max_syn_backlog = 16384
    net.ipv4.tcp_orphan_retries = 3
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_tw_reuse = 1
    # net.ipv4.tcp_tw_recycle=0
    net.ipv4.tcp_timestamps = 0
    net.ipv4.neigh.default.gc_thresh1=1024
    net.ipv4.neigh.default.gc_thresh2=2048
    net.ipv4.neigh.default.gc_thresh3=4096
    net.ipv6.conf.all.disable_ipv6=1
    EOF
    
    # Make sure the br_netfilter module is loaded and apply the parameters
    sysctl -p /etc/sysctl.d/99-k8s.conf
    sysctl --system && modprobe br_netfilter
    
    • tcp_tw_recycle conflicts with the NAT used by Kubernetes and must be disabled, otherwise services become unreachable
      • the tcp_tw_recycle parameter was removed in Linux kernel 4.12
      • on kernels older than 4.12, add net.ipv4.tcp_tw_recycle=0 (see the conditional sketch below)
    • Disable IPv6 to avoid triggering a docker bug
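    A small sketch for the kernel-version caveat above: only add the parameter when the running kernel still exposes it (it assumes the 99-k8s.conf file created earlier):

    if sysctl net.ipv4.tcp_tw_recycle >/dev/null 2>&1; then
        # key still exists, i.e. kernel < 4.12
        echo 'net.ipv4.tcp_tw_recycle = 0' >> /etc/sysctl.d/99-k8s.conf
        sysctl -p /etc/sysctl.d/99-k8s.conf
    fi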

    Configure Resource Limits

    cat > /etc/security/limits.d/97-k8s.conf <<-'EOF'
    *    soft    nofile    655360
    *    hard    nofile    655360
    *    soft    nproc    655350
    *    hard    nproc    655350
    *    soft    memlock    unlimited
    *    hard    memlock    unlimited
    EOF
    

    Load Kernel Modules

    IPVS module configuration

    Prerequisites for enabling IPVS in kube-proxy

    Source: https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/ipvs/README.md

    Reference: https://www.qikqiak.com/post/how-to-use-ipvs-in-kubernetes/

    Create the configuration file

    On kernel 4.19+ the module has been renamed to nf_conntrack; on 4.18 and older use nf_conntrack_ipv4 instead

    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack
    
    cat > /etc/modules-load.d/ipvs.conf <<-'EOF'
    ip_vs
    ip_vs_lc
    ip_vs_wlc
    ip_vs_rr
    ip_vs_wrr
    ip_vs_lblc
    ip_vs_lblcr
    ip_vs_dh
    ip_vs_sh
    ip_vs_fo
    ip_vs_nq
    ip_vs_sed
    ip_vs_ftp
    nf_conntrack
    ip_tables
    ip_set
    xt_set
    ipt_set
    ipt_rpfilter
    ipt_REJECT
    ipip
    EOF
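    A small sketch that picks the right conntrack module for the running kernel and adjusts the file written above accordingly:

    # 4.19+ ships nf_conntrack; older kernels ship nf_conntrack_ipv4
    kver=$(uname -r | cut -d. -f1-2)
    if [ "$(printf '%s\n4.19\n' "$kver" | sort -V | head -1)" = "4.19" ]; then
        modprobe nf_conntrack                 # kernel >= 4.19
    else
        sed -i 's/^nf_conntrack$/nf_conntrack_ipv4/' /etc/modules-load.d/ipvs.conf
        modprobe nf_conntrack_ipv4            # kernel < 4.19
    fi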
    
    Reload the configuration
    # Load the modules listed under /etc/modules-load.d at boot
    systemctl enable --now systemd-modules-load.service
    
    Verify
    # Check the loaded modules
    lsmod | grep --color=auto -e ip_vs -e nf_conntrack
    lsmod |grep -E "ip_vs|nf_conntrack"
    # or
    cut -f1 -d " "  /proc/modules | grep -e ip_vs -e nf_conntrack
    

    Create Directories

    mkdir -p /ups/app/kubernetes/{bin,pki,log,cfg,manifests}
    
    mkdir -p /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes
    
    # CNI plugin and configuration directories
    mkdir -p /opt/cni/bin  /etc/cni/net.d
    
    # etcd data and WAL directories
    mkdir -p /ups/data/k8s/{etcd,wal}
    chmod 700 /ups/data/k8s/etcd
    

    Install a CRI (Container Runtime Interface) Component

    containerd (choose one of the two runtimes)

    containerd implements the Kubernetes Container Runtime Interface (CRI) and provides the core container runtime features, such as image and container management; compared with dockerd it is simpler, more robust, and more portable.

    Load modules
    cat> /etc/modules-load.d/containerd.conf <<EOF
    overlay
    br_netfilter
    EOF
    
    modprobe overlay
    modprobe br_netfilter
    
    
    Install containerd with yum
    yum install -y containerd.io
    
    # Configure containerd
    mkdir -p /etc/containerd
    containerd config default > /etc/containerd/config.toml
    
    # Patch the configuration file
    sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g"  /etc/containerd/config.toml 
    sed -i '/containerd.runtimes.runc.options/a            SystemdCgroup = true' /etc/containerd/config.toml 
    sed -i "s#https://registry-1.docker.io#https://registry.cn-hangzhou.aliyuncs.com#g"  /etc/containerd/config.toml
    
    ## Use the systemd cgroup driver
    ### To use the systemd cgroup driver with runc, set the following in /etc/containerd/config.toml
    
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      ...
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
    
    # Start
    systemctl daemon-reload 
    systemctl enable --now containerd 
    
    Download the binaries
    wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.17.0/crictl-v1.17.0-linux-amd64.tar.gz \
      https://github.com/opencontainers/runc/releases/download/v1.0.0-rc10/runc.amd64 \
      https://github.com/containernetworking/plugins/releases/download/v0.8.5/cni-plugins-linux-amd64-v0.8.5.tgz \
      https://github.com/containerd/containerd/releases/download/v1.3.3/containerd-1.3.3.linux-amd64.tar.gz
    
    Extract
    mkdir containerd
    # containerd-1.3.3.linux-amd64.tar.gz does not include the runc binary
    tar -xvf containerd-1.3.3.linux-amd64.tar.gz -C containerd
    tar -xvf crictl-v1.17.0-linux-amd64.tar.gz
    
    mkdir cni-plugins
    sudo tar -xvf cni-plugins-linux-amd64-v0.8.5.tgz -C cni-plugins
    
    sudo mv runc.amd64 runc
    
    # this tarball contains all the binaries Kubernetes needs
    tar -C / -xf cri-containerd-cni-1.4.3-linux-amd64.tar.gz
    
    Distribute the binaries to all worker nodes
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        scp containerd/bin/*  crictl  cni-plugins/*  runc  root@${node_ip}:/opt/k8s/bin
        ssh root@${node_ip} "chmod a+x /opt/k8s/bin/* && mkdir -p /etc/cni/net.d"
      done
    
    Create and distribute the containerd configuration file
    cat > /etc/containerd/containerd-config.toml<<EOF
    version = 2
    root = "${CONTAINERD_DIR}/root"
    state = "${CONTAINERD_DIR}/state"
    
    [plugins]
      [plugins."io.containerd.grpc.v1.cri"]
        sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2"
        [plugins."io.containerd.grpc.v1.cri".cni]
          bin_dir = "/opt/k8s/bin"
          conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.runtime.v1.linux"]
        shim = "containerd-shim"
        runtime = "runc"
        runtime_root = ""
        no_shim = false
        shim_debug = false
    EOF
    
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        ssh root@${node_ip} "mkdir -p /etc/containerd/ ${CONTAINERD_DIR}/{root,state}"
        scp containerd-config.toml root@${node_ip}:/etc/containerd/config.toml
      done
    
    Create the containerd systemd unit file
    cat > /usr/lib/systemd/system/containerd.service <<EOF
    [Unit]
    Description=containerd container runtime
    Documentation=https://containerd.io
    After=network.target local-fs.target
    
    [Service]
    # Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"
    ExecStartPre=-/sbin/modprobe overlay
    ExecStartPre=-/sbin/modprobe br_netfilter
    ExecStart=/usr/bin/containerd
    
    Type=notify
    Delegate=yes
    KillMode=process
    Restart=always
    RestartSec=5
    # Having non-zero Limit*s causes performance problems due to accounting overhead
    # in the kernel. We recommend using cgroups to do container-local accounting.
    LimitNPROC=infinity
    LimitCORE=infinity
    LimitNOFILE=1048576
    # Comment TasksMax if your systemd version does not supports it.
    # Only systemd 226 and above support this version.
    TasksMax=infinity
    OOMScoreAdjust=-999
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    [Unit]
    Description=Lightweight Kubernetes
    Documentation=https://containerd.io
    After=network-online.target
    
    [Service]
    ExecStartPre=-/sbin/modprobe br_netfilter
    ExecStartPre=-/sbin/modprobe overlay
    ExecStartPre=-/bin/mkdir -p /run/k8s/containerd
    ExecStart=/usr/local/bin/containerd \
             -c /apps/k8s/etc/containerd/config.toml \
             -a /run/k8s/containerd/containerd.sock \
             --state /apps/k8s/run/containerd \
             --root /apps/k8s/containerd
    
    KillMode=process
    Delegate=yes
    OOMScoreAdjust=-999
    # The file-descriptor limit for containers can be set here
    LimitNOFILE=1024000
    LimitNPROC=1024000
    LimitCORE=infinity
    TasksMax=infinity
    TimeoutStartSec=0
    Restart=always
    RestartSec=5s
    
    [Install]
    WantedBy=multi-user.target
    
    Start the containerd service
    for node_ip in ${NODE_IPS[@]}
      do
        echo ">>> ${node_ip}"
        scp containerd.service root@${node_ip}:/etc/systemd/system
        ssh root@${node_ip} "systemctl enable containerd && systemctl restart containerd"
      done
    
    Create and distribute the crictl configuration file

    crictl is a command-line tool for CRI-compatible container runtimes; it provides docker-like commands.

    cat > /etc/crictl.yaml <<EOF
    runtime-endpoint: unix:///run/containerd/containerd.sock
    image-endpoint: unix:///run/containerd/containerd.sock
    timeout: 10
    debug: false
    EOF
    
    # Distribute to all worker nodes
    for node_ip in ${NODE_IPS[@]}; do
        echo ">>> ${node_ip}"
        scp crictl.yaml root@${node_ip}:/etc/crictl.yaml
    done
    
    Image management
    Import local images
    # before containerd 1.3
    ctr cri load image.tar
    
    # containerd 1.3 and later
    ctr -n=k8s.io image import pause-v3.2.tar
    
    Verify the imported images
    ctr images list
    
    crictl images
    

    docker (choose one of the two runtimes)

    Here docker 19.03.x is used as the container runtime; it only needs to be installed on the worker nodes.

    On RHEL8|CentOS8, containerd must be installed separately.

    Installation
    Install from the binary tarball
    wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.15.tgz
    
    tar -xf docker-19.03.15.tgz -C /usr/local/bin --no-same-owner --strip-components=1
    
    # 配置服务文件
    cat> /usr/lib/systemd/system/docker.service <<-'EOF'
    [Unit]
    Description=Docker Application Container Engine
    Documentation=https://docs.docker.com
    After=network-online.target firewalld.service
    Wants=network-online.target
      
    [Service]
    Type=notify
    # the default is not to use systemd for cgroups because the delegate issues still
    # exists and systemd currently does not support the cgroup feature set required
    # for containers run by docker
    ExecStart=/usr/local/bin/dockerd
    ExecReload=/bin/kill -s HUP $MAINPID
    # Having non-zero Limit*s causes performance problems due to accounting overhead
    # in the kernel. We recommend using cgroups to do container-local accounting.
    LimitNOFILE=infinity
    LimitNPROC=infinity
    LimitCORE=infinity
    # Uncomment TasksMax if your systemd version supports it.
    # Only systemd 226 and above support this version.
    #TasksMax=infinity
    TimeoutStartSec=0
    # set delegate yes so that systemd does not reset the cgroups of docker containers
    Delegate=yes
    # kill only the docker process, not all processes in the cgroup
    KillMode=process
    # restart the docker process if it exits prematurely
    Restart=on-failure
    StartLimitBurst=3
    StartLimitInterval=60s
      
    [Install]
    WantedBy=multi-user.target
    
    EOF
    
    Install from the yum repo
    # Configure the docker-ce repo
    yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    sed -ri -e 's/$releasever/7/g' /etc/yum.repos.d/docker-ce.repo
    
    # yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    
    # Or write the repo file directly
    cat > /etc/yum.repos.d/docker-ce.repo <<-'EOF'
    [docker-ce-stable]
    name=Docker CE Stable - $basearch
    baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/$basearch/stable
    enabled=1
    gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
    EOF
    
    yum clean all && yum makecache fast
    
    # List the available docker versions
    yum list docker-ce.x86_64 --showduplicates | sort -r 
    
    # Check whether the dependencies are installed
    rpm -q --queryformat "%{NAME}-%{VERSION}-%{RELEASE} (%{ARCH})\n" docker-ce-19.03.* containerd.io container-selinux
    
    # Install a specific docker version
    yum -y install docker-ce-19.03.15 docker-ce-cli-19.03.15
    
    Configure docker daemon options
    mkdir -p /etc/docker
    mkdir -p /ups/data/docker
    cat > /etc/docker/daemon.json <<EOF
    {
      "graph": "/ups/data/docker",
      "storage-driver": "overlay2",
      "insecure-registries": [ "registry.access.redhat.com" ],
      "registry-mirrors": [ 
        "https://hub-mirror.c.163.com",
        "https://docker.mirrors.ustc.edu.cn",
        "https://registry.docker-cn.com",
        "https://mirror.baidubce.com"
      ],
      "exec-opts": ["native.cgroupdriver=systemd"],
      "max-concurrent-downloads": 10,
      "max-concurrent-uploads": 5,
      "live-restore": true,
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m",
        "max-file": "2"
      }
    }
    EOF
    
    • To access a registry such as gcr.io without HTTPS verification, start the docker service with the --insecure-registry <registry> flag (or list it under insecure-registries as above)
    • max-concurrent-downloads # maximum number of concurrent downloads
    • max-concurrent-uploads # maximum number of concurrent uploads
    • max-size # log file size at which the log is rotated
    • max-file # number of rotated log files to keep
    • live-restore # with this enabled, restarting the docker daemon does not stop running containers
    • native.cgroupdriver=systemd # Kubernetes recommends the systemd cgroup driver
    Start the service
    systemctl daemon-reload && systemctl enable --now docker  # enable at boot and start now
    
    # If the service misbehaves, check its logs
    journalctl -u docker
    
    Verify
    docker version
    docker info
    # kubelet recommends the systemd cgroup driver
    docker info | grep "Cgroup Driver"
    
    ps -elfH|grep docker
    
    Images
    Pull images
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
    
    # domestic (China) mirror
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.0
    # upstream image
    docker pull coredns/coredns:1.8.0
    docker tag  coredns/coredns:1.8.0  k8s.gcr.io/coredns:1.8.0
    docker rmi  coredns/coredns:1.8.0
    
    Save images
    docker save -o coredns-1.7.0.tar k8s.gcr.io/coredns:1.7.0
    docker save -o pause-3.2.tar k8s.gcr.io/pause:3.2
    
    docker save -o calico-v3.16.6.tar calico/kube-controllers:v3.16.6 calico/node:v3.16.6 calico/pod2daemon-flexvol:v3.16.6 calico/cni:v3.16.6
    docker save -o coredns-v1.8.0.tar coredns/coredns:1.8.0
    docker save -o dashboard-v2.1.0.tar kubernetesui/dashboard:v2.1.0 kubernetesui/metrics-scraper:v1.0.6
    docker save -o metrics-server-v0.4.1.tar k8s.gcr.io/metrics-server/metrics-server:v0.4.1
    
    Load images
    for images in pause-v3.2.tar calico-v3.15.3.tar coredns-1.7.0.tar dashboard-v2.1.0.tar metrics-server-v0.4.1.tar ;do
      docker load -i $images
    done
    
    • Or use the Aliyun mirror images


    Deploy the Binaries

    Download

    Kubernetes packages
    https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG
    
    https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md
    

    Open the page and download the Server Binaries for the desired version.

    # server package (already contains the binaries needed on worker nodes)
    curl -sSL -C - -O https://dl.k8s.io/v1.20.5/kubernetes-server-linux-amd64.tar.gz
    
    etcd package
    wget https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
    # or
    curl -sSL -C - -O https://github.com/etcd-io/etcd/releases/download/v3.4.13/etcd-v3.4.13-linux-amd64.tar.gz
    
    CNI plugins

    At startup, kubelet selects the CNI plugin via the --network-plugin=cni option. It automatically searches the directory given by --cni-bin-dir (default /opt/cni/bin) for network plugins and uses the configuration files under --cni-conf-dir (default /etc/cni/net.d) to set up each Pod's network. The plugins referenced by the CNI configuration files must exist in --cni-bin-dir.

    Newer Kubernetes releases do not need the CNI plugins installed separately; Calico ships with its own CNI plugin.
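    For reference, a rough sketch of how the directories and kubelet flags fit together once the plugin tarball has been unpacked (see the deployment section below); the paths match the kubelet defaults mentioned above:

    # kubelet is later started with:
    #   --network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-conf-dir=/etc/cni/net.d
    ls /opt/cni/bin      # bridge, host-local, loopback, portmap, ...
    ls /etc/cni/net.d    # Calico drops its own conflist here once it is deployed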

    wget https://github.com/containernetworking/plugins/releases/download/v0.9.0/cni-plugins-linux-amd64-v0.9.0.tgz
    
    export CNI_VER='v0.9.0'
    curl -sSL -C - -O https://github.com/containernetworking/plugins/releases/download/${CNI_VER}/cni-plugins-linux-amd64-${CNI_VER}.tgz
    
    https://github.com/projectcalico/cni-plugin/releases/tag/v3.16.8
    https://github.com/projectcalico/calicoctl/releases
    
    

    Installation

    etcd
    # Extract
    tar -xf etcd-v3.4.13-linux-amd64.tar.gz --no-same-owner --strip-components=1 -C /ups/app/kubernetes/bin/ etcd-v3.4.13-linux-amd64/etcd{,ctl}
    
    # Check the version
    etcdctl version
    
    k8s
    # Extract
    tar -xf kubernetes-server-linux-amd64.tar.gz --no-same-owner --strip-components=3 -C /ups/app/kubernetes/bin/ kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy,adm}
    
    ## tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C /ups/app/kubernetes/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy,-aggregator,adm} kubernetes/server/bin/{mounter,apiextensions-apiserver}
    
    # Check the version
    kubectl version
    kubectl version --client=true --short=true
    
    • --strip-components=N: strip N leading path components from file names when extracting (see the listing below)
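    The server tarball nests its binaries three directories deep, which is why --strip-components=3 is used above:

    tar -tf kubernetes-server-linux-amd64.tar.gz | grep '/bin/kubectl$'
    # kubernetes/server/bin/kubectl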
    CNI plugins
    mkdir -p /opt/cni/bin
    tar -xf cni-plugins-linux-amd64-v0.9.0.tgz -C /opt/cni/bin/
    

    TLS Tooling (master01 only)

    Introduction

    CFSSL is an open-source PKI/TLS toolkit from CloudFlare. It includes a command-line tool and an HTTP API service for signing, verifying, and bundling TLS certificates, and is written in Go.

    Install cfssl

    cfssl is used here to generate the required certificates and private keys.

    Binary install
    export TLS_BIN_DIR="/usr/local/bin"
    curl -s -L -o ${TLS_BIN_DIR}/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 && 
    curl -s -L -o ${TLS_BIN_DIR}/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 && 
    curl -s -L -o ${TLS_BIN_DIR}/cfssl-certinfo https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
    
    chmod +x ${TLS_BIN_DIR}/cfssl{,json,-certinfo}
    
    Build from source
    # set up the Go environment
    yum install go
    vi ~/.bash_profile
    GOBIN=/root/go/bin/
    PATH=$PATH:$GOBIN:$HOME/bin
    export PATH
    go get  github.com/cloudflare/cfssl/cmd/cfssl
    go get  github.com/cloudflare/cfssl/cmd/cfssljson
    
    Check the version
    # check the version
    cfssl version
    

    Required Image Registries

    When installing Kubernetes, the official images are hosted on gcr.io by default, which cannot be reached directly from mainland China.

    Use the Aliyun mirror registries

    1. registry.aliyuncs.com/google_containers
    2. registry.cn-hangzhou.aliyuncs.com/google_containers

    Use mirrorgooglecontainers on Docker Hub

    # To get kube-proxy-amd64:v1.11.3, pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.3 and then re-tag it
    
    # 1. pull the image
    docker pull mirrorgooglecontainers/kube-proxy-amd64:v1.11.3
    
    # 2. re-tag it
    docker tag docker.io/mirrorgooglecontainers/kube-proxy-amd64:v1.11.3   k8s.gcr.io/kube-proxy-amd64:v1.11.3
    
    # 3. list the image; it can now be used directly
    docker images | grep k8s.gcr.io/kube-proxy-amd64
    
    

    Use images rebuilt on domestic mirrors

     https://github.com/zhangguanzhang/gcr.io
    

    etcd Cluster Configuration

    Kubernetes uses an etcd cluster to persist all API objects and runtime data.

    An external etcd cluster (i.e. not on the Kubernetes master nodes) can be used; here etcd is deployed on the master nodes (an odd number of members, 3, 5, 7, ..., is recommended).

    Generate Certificates and Private Keys (master01)

    The etcd cluster and the Kubernetes cluster use two independent sets of certificates.

    Create the etcd certificates

    Generate the etcd certificates on master01, then distribute them to the other master nodes.

    Certificate signing request files
    cat > etcd-ca-csr.json <<-'EOF'
    {
      "CN": "etcd",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "GD",
          "L": "GZ",
          "O": "etcd",
          "OU": "Etcd Security"
        }
      ],
      "ca": {
        "expiry": "876000h"
      }
    }
    EOF
    
    cat > etcd-csr.json <<-'EOF'
    {
      "CN": "etcd",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "GD",
          "L": "GZ",
          "O": "etcd",
          "OU": "Etcd Security"
        }
      ]
    }
    EOF
    
    Generate the certificates and private keys
    # directory from the unpacked k8s-ha-install-manual-installation-v1.19.x.zip archive
    cd k8s-ha-install/pki
    
    # Generate the CA certificate
    cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /ups/app/kubernetes/pki/etcd-ca
    
    # Generate the etcd server certificate
    cfssl gencert \
       -ca=/ups/app/kubernetes/pki/etcd-ca.pem \
       -ca-key=/ups/app/kubernetes/pki/etcd-ca-key.pem \
       -config=ca-config.json \
       -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.10.221,192.168.10.222,192.168.10.223,m01,m02,m03,k8s001,k8s002,k8s003 \
       -profile=etcd \
       etcd-csr.json | cfssljson -bare /ups/app/kubernetes/pki/etcd
    
    

    Distribute the Certificate Files

    MasterNodes='k8s-master02 k8s-master03 m02 m03'
    WorkNodes='k8s-node01 k8s-node02 n01 n02'
    
    for NODE in $MasterNodes; do
      ping -c 1 $NODE >/dev/null 2>&1
      if [[ "$?" = "0" ]]; then
    	ssh $NODE "mkdir -p /ups/app/kubernetes/pki"
    	for FILE in etcd-ca-key.pem  etcd-ca.pem  etcd-key.pem  etcd.pem; do
    		scp /ups/app/kubernetes/pki/${FILE} $NODE:/ups/app/kubernetes/pki/${FILE}
    	done
      fi
    done
    

    Configure the Cluster

    etcd configuration file on each master node

    Be sure to adjust the member name and IP addresses for each node (a sed shortcut for deriving the other nodes' files follows the master01 example).

    master01
    cat > /ups/app/kubernetes/cfg/etcd.config.yml<<EOF
    name: 'k8s-master01'
    data-dir: /ups/data/k8s/etcd
    wal-dir: /ups/data/k8s/wal
    snapshot-count: 5000
    heartbeat-interval: 100
    election-timeout: 1000
    quota-backend-bytes: 0
    listen-peer-urls: 'https://192.168.10.221:2380'
    listen-client-urls: 'https://192.168.10.221:2379,http://127.0.0.1:2379'
    max-snapshots: 3
    max-wals: 5
    cors:
    initial-advertise-peer-urls: 'https://192.168.10.221:2380'
    advertise-client-urls: 'https://192.168.10.221:2379'
    discovery:
    discovery-fallback: 'proxy'
    discovery-proxy:
    discovery-srv:
    initial-cluster: 'k8s-master01=https://192.168.10.221:2380,k8s-master02=https://192.168.10.222:2380,k8s-master03=https://192.168.10.223:2380'
    initial-cluster-token: 'etcd-k8s-cluster'
    initial-cluster-state: 'new'
    strict-reconfig-check: false
    enable-v2: true
    enable-pprof: true
    proxy: 'off'
    proxy-failure-wait: 5000
    proxy-refresh-interval: 30000
    proxy-dial-timeout: 1000
    proxy-write-timeout: 5000
    proxy-read-timeout: 0
    client-transport-security:
      cert-file: '/ups/app/kubernetes/pki/etcd.pem'
      key-file: '/ups/app/kubernetes/pki/etcd-key.pem'
      client-cert-auth: true
      trusted-ca-file: '/ups/app/kubernetes/pki/etcd-ca.pem'
      auto-tls: true
    peer-transport-security:
      cert-file: '/ups/app/kubernetes/pki/etcd.pem'
      key-file: '/ups/app/kubernetes/pki/etcd-key.pem'
      peer-client-cert-auth: true
      trusted-ca-file: '/ups/app/kubernetes/pki/etcd-ca.pem'
      auto-tls: true
    debug: false
    log-package-levels:
    log-outputs: [default]
    force-new-cluster: false
    EOF
    
    • In etcd v3.4+, note that log-outputs is a slice (list) type
    • --cert-file, --key-file: certificate and private key used for etcd server/client communication
    • --trusted-ca-file: CA certificate that signed the client certificates, used to verify them
    • --peer-cert-file, --peer-key-file: certificate and private key used for peer (member-to-member) communication
    • --peer-trusted-ca-file: CA certificate that signed the peer certificates, used to verify them
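    The master02/master03 files below differ from this one only in the member name and the node-local URLs, so they can also be derived from the master01 file with a small sed sketch (an optional shortcut, assuming the same paths on every node):

    for pair in "k8s-master02:192.168.10.222" "k8s-master03:192.168.10.223"; do
      node=${pair%%:*}; ip=${pair##*:}
      # swap the member name and the node-local IPs, but leave initial-cluster untouched
      sed -e "s/^name: 'k8s-master01'/name: '${node}'/" \
          -e "/^initial-cluster:/! s/192\.168\.10\.221/${ip}/g" \
          /ups/app/kubernetes/cfg/etcd.config.yml > /tmp/etcd.config.yml.${node}
      scp /tmp/etcd.config.yml.${node} root@${node}:/ups/app/kubernetes/cfg/etcd.config.yml
    done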
    master02
    cat > /ups/app/kubernetes/cfg/etcd.config.yml<<EOF
    name: 'k8s-master02'
    data-dir: /ups/data/k8s/etcd
    wal-dir: /ups/data/k8s/wal
    snapshot-count: 5000
    heartbeat-interval: 100
    election-timeout: 1000
    quota-backend-bytes: 0
    listen-peer-urls: 'https://192.168.10.222:2380'
    listen-client-urls: 'https://192.168.10.222:2379,http://127.0.0.1:2379'
    max-snapshots: 3
    max-wals: 5
    cors:
    initial-advertise-peer-urls: 'https://192.168.10.222:2380'
    advertise-client-urls: 'https://192.168.10.222:2379'
    discovery:
    discovery-fallback: 'proxy'
    discovery-proxy:
    discovery-srv:
    initial-cluster: 'k8s-master01=https://192.168.10.221:2380,k8s-master02=https://192.168.10.222:2380,k8s-master03=https://192.168.10.223:2380'
    initial-cluster-token: 'etcd-k8s-cluster'
    initial-cluster-state: 'new'
    strict-reconfig-check: false
    enable-v2: true
    enable-pprof: true
    proxy: 'off'
    proxy-failure-wait: 5000
    proxy-refresh-interval: 30000
    proxy-dial-timeout: 1000
    proxy-write-timeout: 5000
    proxy-read-timeout: 0
    client-transport-security:
      cert-file: '/ups/app/kubernetes/pki/etcd.pem'
      key-file: '/ups/app/kubernetes/pki/etcd-key.pem'
      client-cert-auth: true
      trusted-ca-file: '/ups/app/kubernetes/pki/etcd-ca.pem'
      auto-tls: true
    peer-transport-security:
      cert-file: '/ups/app/kubernetes/pki/etcd.pem'
      key-file: '/ups/app/kubernetes/pki/etcd-key.pem'
      peer-client-cert-auth: true
      trusted-ca-file: '/ups/app/kubernetes/pki/etcd-ca.pem'
      auto-tls: true
    debug: false
    log-package-levels:
    log-outputs: [default]
    force-new-cluster: false
    EOF
    
    master03
    cat > /ups/app/kubernetes/cfg/etcd.config.yml<<EOF
    name: 'k8s-master03'
    data-dir: /ups/data/k8s/etcd
    wal-dir: /ups/data/k8s/wal
    snapshot-count: 5000
    heartbeat-interval: 100
    election-timeout: 1000
    quota-backend-bytes: 0
    listen-peer-urls: 'https://192.168.10.223:2380'
    listen-client-urls: 'https://192.168.10.223:2379,http://127.0.0.1:2379'
    max-snapshots: 3
    max-wals: 5
    cors:
    initial-advertise-peer-urls: 'https://192.168.10.223:2380'
    advertise-client-urls: 'https://192.168.10.223:2379'
    discovery:
    discovery-fallback: 'proxy'
    discovery-proxy:
    discovery-srv:
    initial-cluster: 'k8s-master01=https://192.168.10.221:2380,k8s-master02=https://192.168.10.222:2380,k8s-master03=https://192.168.10.223:2380'
    initial-cluster-token: 'etcd-k8s-cluster'
    initial-cluster-state: 'new'
    strict-reconfig-check: false
    enable-v2: true
    enable-pprof: true
    proxy: 'off'
    proxy-failure-wait: 5000
    proxy-refresh-interval: 30000
    proxy-dial-timeout: 1000
    proxy-write-timeout: 5000
    proxy-read-timeout: 0
    client-transport-security:
      cert-file: '/ups/app/kubernetes/pki/etcd.pem'
      key-file: '/ups/app/kubernetes/pki/etcd-key.pem'
      client-cert-auth: true
      trusted-ca-file: '/ups/app/kubernetes/pki/etcd-ca.pem'
      auto-tls: true
    peer-transport-security:
      cert-file: '/ups/app/kubernetes/pki/etcd.pem'
      key-file: '/ups/app/kubernetes/pki/etcd-key.pem'
      peer-client-cert-auth: true
      trusted-ca-file: '/ups/app/kubernetes/pki/etcd-ca.pem'
      auto-tls: true
    debug: false
    log-package-levels:
    log-outputs: [default]
    force-new-cluster: false
    EOF
    

    Configure the systemd Unit File

    cat > /usr/lib/systemd/system/etcd.service <<EOF
    [Unit]
    Description=Etcd Service
    Documentation=https://coreos.com/etcd/docs/latest/
    After=network.target
    
    [Service]
    Type=notify
    ExecStart=/ups/app/kubernetes/bin/etcd --config-file=/ups/app/kubernetes/cfg/etcd.config.yml
    Restart=on-failure
    RestartSec=10
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    Alias=etcd3.service
    EOF
    

    Start the Service

    systemctl daemon-reload; systemctl enable --now etcd
    systemctl status etcd -l
    
    systemctl daemon-reload && systemctl restart etcd
    
    # If the service fails to start, check the logs
    journalctl -u etcd
    

    Verify the Cluster

    Using etcdctl
    export ETCDCTL_API=3
    etcdctl --endpoints="192.168.10.223:2379,192.168.10.222:2379,192.168.10.221:2379" \
      --cacert=/ups/app/kubernetes/pki/etcd-ca.pem \
      --cert=/ups/app/kubernetes/pki/etcd.pem \
      --key=/ups/app/kubernetes/pki/etcd-key.pem \
      endpoint status \
      --write-out=table
      
    etcdctl --endpoints="192.168.10.223:2379,192.168.10.222:2379,192.168.10.221:2379" \
      --cacert=/ups/app/kubernetes/pki/etcd-ca.pem \
      --cert=/ups/app/kubernetes/pki/etcd.pem \
      --key=/ups/app/kubernetes/pki/etcd-key.pem \
      endpoint health -w table
    
    etcdctl --endpoints="192.168.10.223:2379,192.168.10.222:2379,192.168.10.221:2379" \
      --cacert=/ups/app/kubernetes/pki/etcd-ca.pem \
      --cert=/ups/app/kubernetes/pki/etcd.pem \
      --key=/ups/app/kubernetes/pki/etcd-key.pem \
      member list -w table
    


    Using curl
    curl http://127.0.0.1:2379/v2/members|jq
    


    High-Availability Components

    HAProxy configuration file

    The HAProxy configuration is identical on all master nodes.

    cat >/etc/haproxy/haproxy.cfg<<EOF
    global
      maxconn  2000
      ulimit-n  16384
      log  127.0.0.1 local0 err
      stats timeout 30s
    
    defaults
      log global
      mode  http
      option  httplog
      timeout connect 5000
      timeout client  50000
      timeout server  50000
      timeout http-request 15s
      timeout http-keep-alive 15s
    
    frontend monitor-in
      bind *:33305
      mode http
      option httplog
      monitor-uri /monitor
    
    listen stats
      bind    *:8666
      mode    http
      stats   enable
      stats   hide-version
      stats   uri       /stats
      stats   refresh   30s
      stats   realm     Haproxy\ Statistics
      stats   auth      admin:admin
    
    frontend k8s-master
      bind 0.0.0.0:8443
      bind 127.0.0.1:8443
      mode tcp
      option tcplog
      tcp-request inspect-delay 5s
      default_backend k8s-master
    
    backend k8s-master
      mode tcp
      option tcplog
      option tcp-check
      balance roundrobin
      default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
      server k8s-master01    192.168.10.221:6443  check
      server k8s-master02    192.168.10.222:6443  check
      server k8s-master03    192.168.10.223:6443  check
    EOF
    

    Nginx Proxy (optional)

    Configuration file
    cat > kube-nginx.conf << "EOF"
    worker_processes 1;
    events {
        worker_connections  1024;
    }
    stream {
        upstream backend {
            hash $remote_addr consistent;
            server 192.168.10.221:6443        max_fails=3 fail_timeout=30s;
            server 192.168.10.222:6443        max_fails=3 fail_timeout=30s;
            server 192.168.10.223:6443        max_fails=3 fail_timeout=30s;
        }
        server {
            listen *:16443;
            proxy_connect_timeout 1s;
            proxy_pass backend;
        }
    }
    EOF
    
    cat > /etc/nginx/nginx.conf << "EOF"
    user nginx;
    worker_processes auto;
    error_log /var/log/nginx/error.log;
    pid /run/nginx.pid;
    
    include /usr/share/nginx/modules/*.conf;
    
    events {
        worker_connections 1024;
    }
    
    # Layer-4 load balancing for the master kube-apiserver instances
    stream {
    
        log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    
        access_log  /var/log/nginx/k8s-access.log  main;
    
        upstream k8s-apiserver {
           server 192.168.10.221:6443;   # Master1 APISERVER IP:PORT
           server 192.168.10.222:6443;   # Master2 APISERVER IP:PORT
       server 192.168.10.223:6443;   # Master3 APISERVER IP:PORT
        }
        
        server {
           listen 6443;
           proxy_pass k8s-apiserver;
        }
    }
    
    http {
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        sendfile            on;
        tcp_nopush          on;
        tcp_nodelay         on;
        keepalive_timeout   65;
        types_hash_max_size 2048;
    
        include             /etc/nginx/mime.types;
        default_type        application/octet-stream;
    
        server {
            listen       80 default_server;
            server_name  _;
    
            location / {
            }
        }
    }
    EOF
    
    Service unit file
    cat > kube-nginx.service <<EOF
    [Unit]
    Description=kube-apiserver nginx proxy
    After=network.target
    After=network-online.target
    Wants=network-online.target
    [Service]
    Type=forking
    ExecStartPre=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -t
    ExecStart=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx
    ExecReload=/opt/k8s/kube-nginx/sbin/kube-nginx -c /opt/k8s/kube-nginx/conf/kube-nginx.conf -p /opt/k8s/kube-nginx -s reload
    PrivateTmp=true
    Restart=always
    RestartSec=5
    StartLimitInterval=0
    LimitNOFILE=65536
    [Install]
    WantedBy=multi-user.target
    EOF
    

    Configure Keepalived

    Note the state, priority, IP, and network interface on each node

    • mcast_src_ip: the local IP of each master node
    • priority: higher values take precedence
    • virtual_router_id: must be unique within the broadcast domain
    • interface: the network interface name
    • state: MASTER or BACKUP

    master01

    cat > /etc/keepalived/keepalived.conf <<EOF
    ! Configuration File for keepalived
    global_defs {
        router_id LVS_DEVEL
    }
    vrrp_script chk_apiserver {
        script "/etc/keepalived/check_apiserver.sh"
        interval 2
        weight -5
        fall 3
        rise 2
    }
    vrrp_instance VI_1 {
        state MASTER
        interface ens32
        mcast_src_ip 192.168.10.221
        virtual_router_id 51
        priority 200
        advert_int 2
        authentication {
          auth_type PASS
          auth_pass K8SHA_KA_AUTH
        }
        virtual_ipaddress {
          192.168.10.225
        }
        track_script {
          chk_apiserver
    } }
    EOF
    

    master02

    cat > /etc/keepalived/keepalived.conf <<EOF
    ! Configuration File for keepalived
    global_defs {
        router_id LVS_DEVEL
    }
    vrrp_script chk_apiserver {
        script "/etc/keepalived/check_apiserver.sh"
        interval 2
        weight -5
        fall 3
        rise 2
    }
    vrrp_instance VI_1 {
        state BACKUP
        interface ens32
        mcast_src_ip 192.168.10.222
        virtual_router_id 51
        priority 150
        advert_int 2
        authentication {
          auth_type PASS
          auth_pass K8SHA_KA_AUTH
        }
        virtual_ipaddress {
          192.168.10.225
        }
        track_script {
          chk_apiserver
    } }
    EOF
    

    master03

    cat > /etc/keepalived/keepalived.conf <<EOF
    ! Configuration File for keepalived
    global_defs {
        router_id LVS_DEVEL
    }
    vrrp_script chk_apiserver {
        script "/etc/keepalived/check_apiserver.sh"
        interval 2
        weight -5
        fall 3
        rise 2
    }
    vrrp_instance VI_1 {
        state BACKUP
        interface ens32
        mcast_src_ip 192.168.10.223
        virtual_router_id 51
        priority 100
        advert_int 2
        authentication {
          auth_type PASS
          auth_pass K8SHA_KA_AUTH
        }
        virtual_ipaddress {
          192.168.10.225
        }
        track_script {
          chk_apiserver
    } }
    EOF
    

    Health check script

    cat > /etc/keepalived/check_apiserver.sh <<-'EOF'
    #!/bin/bash
    err=0
    for k in $(seq 1 5)
    do
        check_code=$(pgrep kube-apiserver)
        if [[ $check_code == "" ]]; then
          err=$(expr $err + 1)
          sleep 5
          continue
        else
          err=0
          break
        fi
    done
    
    if [[ $err != "0" ]]; then
        echo "systemctl stop keepalived"
        /usr/bin/systemctl stop keepalived
        exit 1
    else
        exit 0
    fi
    EOF
    
    # Improved version: check health via the /healthz endpoint instead of the process list
    cat > check_apiserver.sh <<'EOF'
    #!/bin/sh
    
    err=0 
    for k in $(seq 1 5); do
      check_code=$(curl -k -s https://127.0.0.1:6443/healthz)
      if [[ $check_code != "ok" ]]; then
        err=$(expr $err + 1)
        sleep 5
        continue
      else
        err=0
        break
      fi
    done
    
    if [[ $err != "0" ]]; then
        echo "systemctl stop keepalived"
        /usr/bin/systemctl stop keepalived
        exit 1
    else
        exit 0
    fi
    EOF
    

    Start the services

    systemctl enable --now haproxy; systemctl status haproxy -l
    systemctl enable --now keepalived; systemctl status keepalived -l
    

    Verify

    # Check status
    systemctl status haproxy keepalived -l
    
    # The VIP should respond to ping
    ping 192.168.10.225
    
    # Port 8443 should be listening
    netstat -tnlp|grep -v sshd
    

    Authentication and Certificate Configuration

    Generate the Kubernetes Certificates

    The Kubernetes components encrypt their communication with TLS certificates; every cluster needs its own independent CA hierarchy.

    Generate the CA certificate

    CA policy configuration file
    cat > ca-config.json <<-'EOF'
    {
      "signing": {
        "default": {
          "expiry": "8760h"
        },
        "profiles": {
          "kubernetes": {
            "expiry": "876000h",
            "usages": [
              "signing",
              "key encipherment",
              "server auth",
              "client auth"
            ]
          },
          "etcd": {
            "expiry": "876000h",
            "usages": [
              "signing",
              "key encipherment",
              "server auth",
              "client auth"
            ]
          },
          "server": {
            "expiry": "876000h",
            "usages": [
              "signing",
              "key encipherment",
              "server auth"
            ]
          },
          "client": {
            "expiry": "876000h",
            "usages": [
              "signing",
              "key encipherment",
              "client auth"
            ]
          },
          "peer": {
            "expiry": "876000h",
            "usages": [
              "signing",
              "key encipherment",
              "server auth",
              "client auth"
            ]
          }
        }
      }
    }
    EOF
    
    Certificate signing request file
    cat > ca-csr.json <<-'EOF'
    {
      "CN": "kubernetes",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "GD",
          "L": "GZ",
          "O": "Kubernetes",
          "OU": "system"
        }
      ],
      "ca": {
        "expiry": "876000h"
      }
    }
    EOF
    
    Generate the certificate and private key
    cd k8s-ha-install/pki    # directory from the unpacked k8s-ha-install archive
    
    cfssl gencert -initca ca-csr.json | cfssljson -bare /ups/app/kubernetes/pki/ca
    

    Generate the apiserver certificate

    Certificate signing request file
    cat > apiserver-csr.json <<-'EOF'
    {
      "CN": "kube-apiserver",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "GD",
          "L": "GZ",
          "O": "Kubernetes",
          "OU": "system"
        }
      ]
    }
    EOF
    
    Generate the certificate and private key
    cfssl gencert -ca=/ups/app/kubernetes/pki/ca.pem \
      -ca-key=/ups/app/kubernetes/pki/ca-key.pem \
      -config=ca-config.json \
      -hostname=10.96.0.1,192.168.10.225,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.10.221,192.168.10.222,192.168.10.223,k8s-master01,k8s-master02,k8s-master03,m01,m02,m03 \
      -profile=kubernetes apiserver-csr.json | cfssljson -bare /ups/app/kubernetes/pki/apiserver
    
    
    • 10.96.0.1: the first IP of the service-cluster-ip-range configured for kube-apiserver

    • 192.168.10.225: the Kubernetes API service IP (the VIP)

    • The hosts (-hostname) field lists the IPs and domain names authorized to use this certificate; here it contains the master node IPs plus the Kubernetes service IPs and names. The resulting SAN list can be checked as shown below.
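    A quick confirmation of the SAN list of the freshly generated certificate:

    openssl x509 -noout -text -in /ups/app/kubernetes/pki/apiserver.pem \
      | grep -A1 'Subject Alternative Name'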

    Generate the token file (optional, skipped)
    cat > /ups/app/kubernetes/cfg/token.csv <<-EOF
    $(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
    EOF
    

    Deprecated.

    Generate the aggregation layer (front-proxy) certificates

    Certificate signing request files
    cat > front-proxy-ca-csr.json <<EOF
    {
      "CN": "kubernetes",
      "key": {
         "algo": "rsa",
         "size": 2048
      }
    }
    EOF
    
    cat > front-proxy-client-csr.json <<EOF
    {
      "CN": "front-proxy-client",
      "key": {
         "algo": "rsa",
         "size": 2048
      }
    }
    EOF
    
    Generate the client certificate and private key
    cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /ups/app/kubernetes/pki/front-proxy-ca
    
    cfssl gencert \
      -ca=/ups/app/kubernetes/pki/front-proxy-ca.pem \
      -ca-key=/ups/app/kubernetes/pki/front-proxy-ca-key.pem \
      -config=ca-config.json \
      -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /ups/app/kubernetes/pki/front-proxy-client
    

    Generate the controller-manager certificate

    Certificate signing request file
    cat > manager-csr.json <<-'EOF'
    {
      "CN": "system:kube-controller-manager",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "GD",
          "L": "GZ",
          "O": "system:kube-controller-manager",
          "OU": "system"
        }
      ]
    }
    EOF
    
    • CN and O are both system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.
    Client certificate and private key
    cfssl gencert \
       -ca=/ups/app/kubernetes/pki/ca.pem \
       -ca-key=/ups/app/kubernetes/pki/ca-key.pem \
       -config=ca-config.json \
       -profile=kubernetes \
       manager-csr.json | cfssljson -bare /ups/app/kubernetes/pki/controller-manager
    

    Generate the scheduler certificate

    Certificate signing request file
    cat> scheduler-csr.json <<-'EOF'
    {
      "CN": "system:kube-scheduler",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "GD",
          "L": "GZ",
          "O": "system:kube-scheduler",
          "OU": "system"
        }
      ]
    }
    EOF
    
    • CN and O are both system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs
    Client certificate and private key
    cfssl gencert \
       -ca=/ups/app/kubernetes/pki/ca.pem \
       -ca-key=/ups/app/kubernetes/pki/ca-key.pem \
       -config=ca-config.json \
       -profile=kubernetes \
       scheduler-csr.json | cfssljson -bare /ups/app/kubernetes/pki/scheduler
    

    Generate the admin certificate

    Certificate signing request file
    cat> admin-csr.json <<-'EOF'
    {
      "CN": "admin",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "GD",
          "L": "GZ",
          "O": "system:masters",
          "OU": "system"
        }
      ]
    }
    EOF
    
    • O: system:masters: when kube-apiserver receives a request made with this certificate, it adds the group identity system:masters to the request
    • The predefined ClusterRoleBinding cluster-admin binds the group system:masters to the role cluster-admin, which grants full cluster permissions (this can be confirmed after generation, see the check below)
    • This certificate is only used by kubectl as a client certificate, so the hosts field is empty
    Client certificate and private key
    cfssl gencert \
      -ca=/ups/app/kubernetes/pki/ca.pem \
      -ca-key=/ups/app/kubernetes/pki/ca-key.pem \
      -config=ca-config.json \
      -profile=kubernetes \
      admin-csr.json | cfssljson -bare /ups/app/kubernetes/pki/admin
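    A quick check of the group carried by the certificate; the subject should contain O = system:masters and CN = admin:

    openssl x509 -noout -subject -in /ups/app/kubernetes/pki/admin.pem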
    

    Generate the kube-proxy certificate (skipped)

    Certificate signing request file
    cat> kube-proxy-csr.json <<EOF
    {
      "CN": "system:kube-proxy",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "GD",
          "L": "GZ",
          "O": "system:kube-proxy",
          "OU": "system"
        }
      ]
    }
    EOF
    
    Client certificate and private key
    
    

    Verify the Certificates

    Using openssl

    openssl x509 -noout -text -in /ups/app/kubernetes/pki/etcd.pem
    
    • Confirm that the Issuer field matches etcd-ca-csr.json

    • Confirm that the Subject field matches etcd-csr.json

    • Confirm that the X509v3 Subject Alternative Name field matches etcd-csr.json

    • Confirm that the X509v3 Key Usage and Extended Key Usage fields match the corresponding profile in ca-config.json

    Using cfssl-certinfo

    cfssl-certinfo -cert /ups/app/kubernetes/pki/etcd.pem
    

    Create the kubeconfig Files

    Create controller-manager.kubeconfig

    # Set the cluster entry
    kubectl config set-cluster kubernetes \
      --certificate-authority=/ups/app/kubernetes/pki/ca.pem \
      --embed-certs=true \
      --server=https://192.168.10.225:8443 \
      --kubeconfig=/ups/app/kubernetes/cfg/controller-manager.kubeconfig
    
    # Set the credentials entry
    kubectl config set-credentials system:kube-controller-manager \
      --client-certificate=/ups/app/kubernetes/pki/controller-manager.pem \
      --client-key=/ups/app/kubernetes/pki/controller-manager-key.pem \
      --embed-certs=true \
      --kubeconfig=/ups/app/kubernetes/cfg/controller-manager.kubeconfig
      
    # Set the context
    kubectl config set-context system:kube-controller-manager@kubernetes \
      --cluster=kubernetes \
      --user=system:kube-controller-manager \
      --kubeconfig=/ups/app/kubernetes/cfg/controller-manager.kubeconfig
    
    # Use it as the default context
    kubectl config use-context system:kube-controller-manager@kubernetes \
      --kubeconfig=/ups/app/kubernetes/cfg/controller-manager.kubeconfig
    

    --server: in a non-HA setup, use master01's IP and port instead (--server=https://192.168.10.221:6443).

    Create scheduler.kubeconfig

    kube-scheduler accesses the apiserver through a kubeconfig file that provides the apiserver address, the embedded CA certificate, and the kube-scheduler client certificate.

    kubectl config set-cluster kubernetes \
      --certificate-authority=/ups/app/kubernetes/pki/ca.pem \
      --embed-certs=true \
      --server=https://192.168.10.225:8443 \
      --kubeconfig=/ups/app/kubernetes/cfg/scheduler.kubeconfig
      
    kubectl config set-credentials system:kube-scheduler \
      --client-certificate=/ups/app/kubernetes/pki/scheduler.pem \
      --client-key=/ups/app/kubernetes/pki/scheduler-key.pem \
      --embed-certs=true \
      --kubeconfig=/ups/app/kubernetes/cfg/scheduler.kubeconfig
      
    kubectl config set-context system:kube-scheduler@kubernetes \
      --cluster=kubernetes \
      --user=system:kube-scheduler \
      --kubeconfig=/ups/app/kubernetes/cfg/scheduler.kubeconfig
      
    kubectl config use-context system:kube-scheduler@kubernetes \
      --kubeconfig=/ups/app/kubernetes/cfg/scheduler.kubeconfig
    

    Create admin.kubeconfig

    The kubeconfig is kubectl's configuration file; it contains everything needed to access the apiserver, such as the apiserver address, the CA certificate, and the client certificate.

    # Set the cluster entry
    kubectl config set-cluster kubernetes \
      --certificate-authority=/ups/app/kubernetes/pki/ca.pem \
      --embed-certs=true \
      --server=https://192.168.10.225:8443 \
      --kubeconfig=/ups/app/kubernetes/cfg/admin.kubeconfig
    
    # Set the credentials entry
    kubectl config set-credentials kubernetes-admin \
      --client-certificate=/ups/app/kubernetes/pki/admin.pem \
      --client-key=/ups/app/kubernetes/pki/admin-key.pem \
      --embed-certs=true \
      --kubeconfig=/ups/app/kubernetes/cfg/admin.kubeconfig
    
    # Set the context
    kubectl config set-context kubernetes-admin@kubernetes \
      --cluster=kubernetes \
      --user=kubernetes-admin \
      --kubeconfig=/ups/app/kubernetes/cfg/admin.kubeconfig
    
    # Use it as the default context
    kubectl config use-context kubernetes-admin@kubernetes \
      --kubeconfig=/ups/app/kubernetes/cfg/admin.kubeconfig
    
    • --certificate-authority: the root CA certificate used to verify the kube-apiserver certificate;
    • --client-certificate, --client-key: the admin certificate and private key just generated, used for HTTPS communication with kube-apiserver;
    • --embed-certs=true: embed ca.pem and admin.pem into the generated kubeconfig file (otherwise only the file paths are written, and the certificate files would have to be copied separately whenever the kubeconfig is moved to another machine);
    • --server: the kube-apiserver address. In an HA setup (lb, keepalived+haproxy|nginx), use the VIP address and port (${CLUSTER_VIP}:8443); otherwise use the first master's IP and port (${MASTER_IPS[0]}:6443). A quick check of the generated files follows below.
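    A rough sanity check of the three kubeconfig files generated above (kubectl config view redacts the embedded certificate data):

    for f in admin controller-manager scheduler; do
      echo "--- ${f}.kubeconfig ---"
      kubectl config view --kubeconfig=/ups/app/kubernetes/cfg/${f}.kubeconfig \
        | grep -E 'server:|client-certificate-data|certificate-authority-data'
    done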

    Create the ServiceAccount Key Pair

    openssl genrsa -out /ups/app/kubernetes/pki/sa.key 2048
    
    openssl rsa -in /ups/app/kubernetes/pki/sa.key -pubout -out /ups/app/kubernetes/pki/sa.pub
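    
    # Optionally confirm that sa.pub is the public half of sa.key (no diff output means they match)
    openssl rsa -in /ups/app/kubernetes/pki/sa.key -pubout 2>/dev/null \
      | diff - /ups/app/kubernetes/pki/sa.pub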
    

    Distribute the Files

    # Copy the certificates and private keys under /ups/app/kubernetes/pki to the other master nodes
    for NODE in k8s-master02 k8s-master03; do
      for FILE in $(ls /ups/app/kubernetes/pki | grep -v etcd);do
        scp /ups/app/kubernetes/pki/${FILE} $NODE:/ups/app/kubernetes/pki/${FILE}
      done
      # Copy the kubeconfig files to the other master nodes
      for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do
        scp /ups/app/kubernetes/cfg/${FILE} $NODE:/ups/app/kubernetes/cfg/${FILE};
      done
    done
    
    

    Kubernetes Component Configuration (Master)

    • The Service CIDR is 10.96.0.0/12; it must not overlap with the host network or the Pod CIDR
    • The Pod CIDR is 172.16.0.0/16

    Required components on master nodes

    • kube-apiserver
    • kube-scheduler
    • kube-controller-manager
    1. kube-apiserver, kube-scheduler, and kube-controller-manager all run as multiple instances
    2. kube-scheduler and kube-controller-manager elect a leader; the other instances stay on standby. When the leader fails, a new one is elected, which keeps the service available

    Note: if the three master nodes serve purely as control-plane/management nodes, docker, kubelet, and kube-proxy are not strictly required on them; however, add-ons such as metrics-server or istio may then fail to run, so deploying docker, kubelet, and kube-proxy on the masters as well is still recommended.
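    Once the control plane is up, the elected leaders can be observed through the Lease objects in kube-system (a quick check, assuming kubectl uses the admin.kubeconfig generated earlier):

    kubectl --kubeconfig=/ups/app/kubernetes/cfg/admin.kubeconfig -n kube-system get leases
    kubectl --kubeconfig=/ups/app/kubernetes/cfg/admin.kubeconfig -n kube-system \
      get lease kube-scheduler -o jsonpath='{.spec.holderIdentity}{"\n"}'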

    kube-apiserver

    Create the kube-apiserver service

    Create the kube-apiserver service unit on all master nodes.

    Flag notes
    • --advertise-address:apiserver 对外通告的 IP(kubernetes 服务后端节点 IP);
    • --default-*-toleration-seconds:设置节点异常相关的阈值;
    • --max-*-requests-inflight:请求相关的最大阈值;
    • --etcd-*:访问 etcd 的证书和 etcd 服务器地址;
    • --bind-address: https 监听的 IP,不能为 127.0.0.1,否则外界不能访问它的安全端口 6443;
    • --secret-port:https 监听端口;
    • --insecure-port=0:关闭监听 http 非安全端口(8080);
    • --tls-*-file:指定 apiserver 使用的证书、私钥和 CA 文件;
    • --audit-*:配置审计策略和审计日志文件相关的参数;
    • --client-ca-file:验证 client (kue-controller-manager、kube-scheduler、kubelet、kube-proxy 等)请求所带的证书;
    • --enable-bootstrap-token-auth:启用 kubelet bootstrap 的 token 认证;
    • --requestheader-*:kube-apiserver 的 aggregator layer 相关的配置参数,proxy-client & HPA 需要使用;
    • --requestheader-client-ca-file:用于签名 --proxy-client-cert-file--proxy-client-key-file 指定的证书;在启用了 metric aggregator 时使用;
    • --requestheader-allowed-names:不能为空,值为逗号分割的 --proxy-client-cert-file 证书的 CN 名称,这里设置为 "aggregator";
    • --service-account-key-file:签名 ServiceAccount Token 的公钥文件,kube-controller-manager 的 --service-account-private-key-file 指定私钥文件,两者配对使用;
    • --runtime-config=api/all=true: 启用所有版本的 APIs,如 autoscaling/v2alpha1;
    • --authorization-mode=Node,RBAC、--anonymous-auth=false: 开启 Node 和 RBAC 授权模式,拒绝未授权的请求;
    • --enable-admission-plugins:启用一些默认关闭的 plugins;
    • --allow-privileged:允许运行拥有 privileged 权限的容器;
    • --apiserver-count=3:指定 apiserver 实例的数量;
    • --event-ttl:指定 events 的保存时间;
    • --kubelet-*:如果指定,则使用 https 访问 kubelet APIs;需要为该证书对应的用户定义 RBAC 规则,否则访问 kubelet API 时提示未授权;
    • --proxy-client-*:apiserver 访问 metrics-server 使用的证书;
    • --service-cluster-ip-range: 指定 Service Cluster IP 地址段;
    • --service-node-port-range: 指定 NodePort 的端口范围;

    如果 kube-apiserver 机器没有运行 kube-proxy,则还需要添加 --enable-aggregator-routing=true 参数;

    关于 --requestheader-XXX 相关参数,参考:

    注意:

    1. --requestheader-client-ca-file 指定的 CA 证书,必须具有 client auth and server auth
    2. 如果 --requestheader-allowed-names 不为空,且 --proxy-client-cert-file 证书的 CN 名称不在 allowed-names 中,则后续查看 node 或 pods 的 metrics 失败,提示:
    $ kubectl top nodes
    Error from server (Forbidden): nodes.metrics.k8s.io is forbidden: User "aggregator" cannot list resource "nodes" in API group "metrics.k8s.io" at the cluster scope
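
    # 可先用 openssl 查看 --proxy-client-cert-file 证书的 CN,确认其在 --requestheader-allowed-names(aggregator)之列
    openssl x509 -in /ups/app/kubernetes/pki/front-proxy-client.pem -noout -subject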
    
    master1
    cat> /usr/lib/systemd/system/kube-apiserver.service <<EOF
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target
    
    [Service]
    ExecStart=/ups/app/kubernetes/bin/kube-apiserver \
      --advertise-address=192.168.10.221 \
      --allow-privileged=true  \
      --authorization-mode=Node,RBAC  \
      --bind-address=0.0.0.0  \
      --client-ca-file=/ups/app/kubernetes/pki/ca.pem  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --enable-bootstrap-token-auth=true  \
      --enable-aggregator-routing=true \
      --etcd-cafile=/ups/app/kubernetes/pki/etcd-ca.pem  \
      --etcd-certfile=/ups/app/kubernetes/pki/etcd.pem  \
      --etcd-keyfile=/ups/app/kubernetes/pki/etcd-key.pem  \
      --etcd-servers=https://192.168.10.221:2379,https://192.168.10.222:2379,https://192.168.10.223:2379 \
      --insecure-port=0  \
      --kubelet-client-certificate=/ups/app/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/ups/app/kubernetes/pki/apiserver-key.pem  \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --logtostderr=false  \
      --log-dir=/ups/app/kubernetes/log \
      --proxy-client-cert-file=/ups/app/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/ups/app/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-client-ca-file=/ups/app/kubernetes/pki/front-proxy-ca.pem  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-username-headers=X-Remote-User \
      --secure-port=6443  \
      --service-account-key-file=/ups/app/kubernetes/pki/sa.pub  \
      --service-cluster-ip-range=10.96.0.0/12  \
      --service-node-port-range=30000-32767  \
      --tls-cert-file=/ups/app/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/ups/app/kubernetes/pki/apiserver-key.pem  \
      --v=2
      # --token-auth-file=/ups/app/kubernetes/cfg/token.csv
    
    Restart=on-failure
    RestartSec=10s
    LimitNOFILE=65535
    
    [Install]
    WantedBy=multi-user.target
    
    EOF
    

    V1.20.X 中还需在上面 ExecStart 的参数列表中新增以下 2 个参数(建议放在 --v=2 之前,注意行尾以 \ 续行):

    --service-account-signing-key-file=/ups/app/kubernetes/pki/sa.key

    --service-account-issuer=https://kubernetes.default.svc.cluster.local

    master2
    cat> /usr/lib/systemd/system/kube-apiserver.service <<EOF
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target
    
    [Service]
    ExecStart=/ups/app/kubernetes/bin/kube-apiserver \
      --advertise-address=192.168.10.222 \
      --allow-privileged=true  \
      --authorization-mode=Node,RBAC  \
      --bind-address=0.0.0.0  \
      --client-ca-file=/ups/app/kubernetes/pki/ca.pem  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --enable-bootstrap-token-auth=true  \
      --enable-aggregator-routing=true \
      --etcd-cafile=/ups/app/kubernetes/pki/etcd-ca.pem  \
      --etcd-certfile=/ups/app/kubernetes/pki/etcd.pem  \
      --etcd-keyfile=/ups/app/kubernetes/pki/etcd-key.pem  \
      --etcd-servers=https://192.168.10.221:2379,https://192.168.10.222:2379,https://192.168.10.223:2379 \
      --insecure-port=0  \
      --kubelet-client-certificate=/ups/app/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/ups/app/kubernetes/pki/apiserver-key.pem  \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --logtostderr=false  \
      --log-dir=/ups/app/kubernetes/log \
      --proxy-client-cert-file=/ups/app/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/ups/app/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-client-ca-file=/ups/app/kubernetes/pki/front-proxy-ca.pem  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-username-headers=X-Remote-User \
      --secure-port=6443  \
      --service-account-key-file=/ups/app/kubernetes/pki/sa.pub  \
      --service-cluster-ip-range=10.96.0.0/12  \
      --service-node-port-range=30000-32767  \
      --tls-cert-file=/ups/app/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/ups/app/kubernetes/pki/apiserver-key.pem  \
      --v=2
      # --token-auth-file=/ups/app/kubernetes/cfg/token.csv
    
    Restart=on-failure
    RestartSec=10s
    LimitNOFILE=65535
    
    [Install]
    WantedBy=multi-user.target
    
    EOF
    
    master3
    cat> /usr/lib/systemd/system/kube-apiserver.service <<EOF
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target
    
    [Service]
    ExecStart=/ups/app/kubernetes/bin/kube-apiserver \
      --advertise-address=192.168.10.223 \
      --allow-privileged=true  \
      --authorization-mode=Node,RBAC  \
      --bind-address=0.0.0.0  \
      --client-ca-file=/ups/app/kubernetes/pki/ca.pem  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --enable-bootstrap-token-auth=true  \
      --enable-aggregator-routing=true \
      --etcd-cafile=/ups/app/kubernetes/pki/etcd-ca.pem  \
      --etcd-certfile=/ups/app/kubernetes/pki/etcd.pem  \
      --etcd-keyfile=/ups/app/kubernetes/pki/etcd-key.pem  \
      --etcd-servers=https://192.168.10.221:2379,https://192.168.10.222:2379,https://192.168.10.223:2379 \
      --insecure-port=0  \
      --kubelet-client-certificate=/ups/app/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/ups/app/kubernetes/pki/apiserver-key.pem  \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --logtostderr=false  \
      --log-dir=/ups/app/kubernetes/log \
      --proxy-client-cert-file=/ups/app/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/ups/app/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-client-ca-file=/ups/app/kubernetes/pki/front-proxy-ca.pem  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-username-headers=X-Remote-User \
      --secure-port=6443  \
      --service-account-key-file=/ups/app/kubernetes/pki/sa.pub  \
      --service-cluster-ip-range=10.96.0.0/12  \
      --service-node-port-range=30000-32767  \
      --tls-cert-file=/ups/app/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/ups/app/kubernetes/pki/apiserver-key.pem  \
      --v=2
      # --token-auth-file=/ups/app/kubernetes/cfg/token.csv
    
    Restart=on-failure
    RestartSec=10s
    LimitNOFILE=65535
    
    [Install]
    WantedBy=multi-user.target
    
    EOF
    

    启动kube-apiserver服务

    systemctl daemon-reload && systemctl enable --now kube-apiserver
    systemctl status kube-apiserver -l
    

    检查服务

    验证
    curl --insecure https://192.168.10.221:6443/
    curl --insecure https://192.168.10.225:8443/
    
    检查集群状态
    $ kubectl cluster-info
    Kubernetes master is running at https://192.168.10.225:8443
    
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    
    $ kubectl get all --all-namespaces
    NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    default     service/kubernetes   ClusterIP   10.96.0.1   <none>        443/TCP   3m53s
    
    $ kubectl get componentstatuses
    NAME                 AGE
    controller-manager   <unknown>
    scheduler            <unknown>
    etcd-0               <unknown>
    etcd-2               <unknown>
    etcd-1               <unknown>
    
    $ kubectl get cs -o yaml 
    
    检查 kube-apiserver 监听的端口
    netstat -lnpt|grep kube
    
    查看集群组件状态
    kubectl cluster-info
    kubectl get componentstatuses
    kubectl get all --all-namespaces
    

    kube-controller-manager组件配置

    启动后将通过竞争选举机制产生一个 leader 节点,其它节点为阻塞状态。当 leader 节点不可用时,阻塞的节点将再次进行选举产生新的 leader 节点,从而保证服务的可用性

    kube-controller-manager 在以下两种情况下使用证书:

    1. 与 kube-apiserver 的安全端口通信;
    2. 安全端口(https,10257) 输出 prometheus 格式的 metrics

    创建kube-controller-manager service

    所有Master节点配置kube-controller-manager service

    cat >  /usr/lib/systemd/system/kube-controller-manager.service <<EOF
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target 
    After=kube-apiserver.service
    Requires=kube-apiserver.service
    
    [Service]
    ExecStart=/ups/app/kubernetes/bin/kube-controller-manager \
      --address=0.0.0.0 \
      --allocate-node-cidrs=true \
      --cluster-cidr=172.16.0.0/16 \
      --cluster-signing-cert-file=/ups/app/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/ups/app/kubernetes/pki/ca-key.pem \
      --controllers=*,bootstrapsigner,tokencleaner \
      --kubeconfig=/ups/app/kubernetes/cfg/controller-manager.kubeconfig \
      --leader-elect=true \
      --logtostderr=false \
      --log-dir=/ups/app/kubernetes/log \
      --node-cidr-mask-size=24 \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --requestheader-client-ca-file=/ups/app/kubernetes/pki/front-proxy-ca.pem \
      --root-ca-file=/ups/app/kubernetes/pki/ca.pem \
      --service-account-private-key-file=/ups/app/kubernetes/pki/sa.key \
      --use-service-account-credentials=true \
      --v=2
      # --cluster-signing-duration=876000h0m0s \
    
    Restart=always
    RestartSec=10s
    
    [Install]
    WantedBy=multi-user.target
    EOF
    

    启动kube-controller-manager 服务

    所有Master节点启动kube-controller-manager

    systemctl daemon-reload 
    systemctl enable --now kube-controller-manager
    systemctl status kube-controller-manager -l
    
    # 异常检查原因
    journalctl -u kube-apiserver
    

    检查

    检查服务监听端口
    • kube-controller-manager 监听 10252 端口,接收 http 请求

    • kube-controller-manager 监听 10257 端口,接收 https 请求

    netstat -lnpt | grep kube-control
    
    查看输出的 metrics

    在 kube-controller-manager 节点上执行

    curl -s --cacert /ups/app/kubernetes/pki/ca.pem \
      --cert /ups/app/kubernetes/pki/admin.pem \
      --key /ups/app/kubernetes/pki/admin-key.pem https://192.168.10.221:10257/metrics |head
    
    
    检查当前的leader
    kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yaml
    

    kube-scheduler组件配置

    kube-scheduler 在以下两种情况下使用该证书:

    1. 与 kube-apiserver 的安全端口通信;
    2. 安全端口(https,10259) 输出 prometheus 格式的 metrics

    创建kube-scheduler service

    所有Master节点配置kube-scheduler service

    cat > /usr/lib/systemd/system/kube-scheduler.service <<EOF
    [Unit]
    Description=Kubernetes Scheduler
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target
    After=kube-apiserver.service
    Requires=kube-apiserver.service
    
    [Service]
    ExecStart=/ups/app/kubernetes/bin/kube-scheduler \
      --address=0.0.0.0 \
      --kubeconfig=/ups/app/kubernetes/cfg/scheduler.kubeconfig \
      --leader-elect=true \
      --logtostderr=false \
      --log-dir=/ups/app/kubernetes/log \
      --v=2
      # --secure-port=10259
    
    Restart=on-failure
    RestartSec=10s
    
    [Install]
    WantedBy=multi-user.target
    EOF
    
    • --kubeconfig:指定 kubeconfig 文件路径,kube-scheduler 使用它连接和验证 kube-apiserver
    • --leader-elect=true:集群运行模式,启用选举功能;被选为 leader 的节点负责处理工作,其它节点为阻塞状态

    启动kube-scheduler 服务

    systemctl daemon-reload 
    systemctl enable --now kube-scheduler
    systemctl status kube-scheduler -l
    
    # 异常时检查
    journalctl -u kube-scheduler
    

    检查

    查看输出的 metrics

    在 kube-scheduler 节点上执行

    kube-scheduler 监听 10251 和 10259 端口:

    • 10251:接收 http 请求,非安全端口,不需要认证授权
    • 10259:接收 https 请求,安全端口,需要认证授权
      两个接口都对外提供 /metrics 和 /healthz 的访问
     netstat -lnpt |grep kube-sched
    
    curl -s http://192.168.10.221:10251/metrics |head
    
    curl -s --cacert /ups/app/kubernetes/pki/ca.pem \
      --cert /ups/app/kubernetes/pki/admin.pem \
      --key /ups/app/kubernetes/pki/admin-key.pem \
      https://192.168.10.221:10259/metrics |head
    
    
    查看当前的 leader
    kubectl get endpoints kube-scheduler --namespace=kube-system  -o yaml
    

    TLS Bootstrapping配置

    在master1创建bootstrap

    注意: 如果不是高可用集群,192.168.10.225:8443改为master01的地址,8443改为apiserver的端口,默认是6443

    创建bootstrap-kubelet.kubeconfig文件

    cd /root/k8s-ha-install/bootstrap
    
    kubectl config set-cluster kubernetes \
      --certificate-authority=/ups/app/kubernetes/pki/ca.pem \
      --embed-certs=true \
      --server=https://192.168.10.225:8443 \
      --kubeconfig=/ups/app/kubernetes/cfg/bootstrap-kubelet.kubeconfig

    kubectl config set-credentials tls-bootstrap-token-user \
      --token=c8ad9c.2e4d610cf3e7426e \
      --kubeconfig=/ups/app/kubernetes/cfg/bootstrap-kubelet.kubeconfig
    
    • c8ad9c.2e4d610cf3e7426e是生成的随机序列,可通过以下命令生成,把生成的token写入bootstrap.secret.yaml文件中。建议自行修改

      echo "$(head -c 6 /dev/urandom | md5sum | head -c 6)"."$(head -c 16 /dev/urandom | md5sum | head -c 16)"
      c8ad9c.2e4d610cf3e7426e
      

      image-20210411205209729

      • token 格式: [a-z0-9]{6}.[a-z0-9]{16}
        • 第一部分是token_id, 它是一种公开信息,用于引用令牌并确保不会泄露认证所使用的秘密信息
        • 第二部分是“令牌秘密(Token Secret)”,属于敏感信息,只应共享给受信任的一方
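
      下面是一个按上述格式生成并校验 token 的小示例(仅作示意;token 请自行生成,并同步修改后文的 bootstrap.secret.yaml):

      TOKEN_ID=$(head -c 6 /dev/urandom | md5sum | head -c 6)
      TOKEN_SECRET=$(head -c 16 /dev/urandom | md5sum | head -c 16)
      BOOTSTRAP_TOKEN="${TOKEN_ID}.${TOKEN_SECRET}"
      # 校验是否满足 [a-z0-9]{6}.[a-z0-9]{16} 格式
      echo "${BOOTSTRAP_TOKEN}" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$' && echo "格式正确: ${BOOTSTRAP_TOKEN}"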
    kubectl config set-context tls-bootstrap-token-user@kubernetes \
      --cluster=kubernetes \
      --user=tls-bootstrap-token-user \
      --kubeconfig=/ups/app/kubernetes/cfg/bootstrap-kubelet.kubeconfig

    kubectl config use-context tls-bootstrap-token-user@kubernetes \
      --kubeconfig=/ups/app/kubernetes/cfg/bootstrap-kubelet.kubeconfig
    

    配置kubectl登陆认证文件

    mkdir -p /root/.kube
    cp -i /ups/app/kubernetes/cfg/admin.kubeconfig /root/.kube/config
    chown $(id -u):$(id -g) $HOME/.kube/config
    

    创建secret配置文件

    若修改token,记住同时修改下面的文件并记录后续使用

    cat> bootstrap.secret.yaml<<-'EOF'
    apiVersion: v1
    kind: Secret
    metadata:
      name: bootstrap-token-c8ad9c
      namespace: kube-system
    type: bootstrap.kubernetes.io/token
    stringData:
      description: "The default bootstrap token generated by 'kubelet '."
      token-id: c8ad9c
      token-secret: 2e4d610cf3e7426e
      usage-bootstrap-authentication: "true"
      usage-bootstrap-signing: "true"
      auth-extra-groups:  system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
     
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: kubelet-bootstrap
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:node-bootstrapper
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:bootstrappers:default-node-token
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: node-autoapprove-bootstrap
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:bootstrappers:default-node-token
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: node-autoapprove-certificate-rotation
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
    subjects:
    - apiGroup: rbac.authorization.k8s.io
      kind: Group
      name: system:nodes
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
      name: system:kube-apiserver-to-kubelet
    rules:
      - apiGroups:
          - ""
        resources:
          - nodes/proxy
          - nodes/stats
          - nodes/log
          - nodes/spec
          - nodes/metrics
        verbs:
          - "*"
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: system:kube-apiserver
      namespace: ""
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:kube-apiserver-to-kubelet
    subjects:
      - apiGroup: rbac.authorization.k8s.io
        kind: User
        name: kube-apiserver
    
    EOF
    

    k8s集群创建资源

    kubectl create -f bootstrap.secret.yaml
    

    查看secret

    kubectl get secret bootstrap-token-c8ad9c -n kube-system -oyaml
    
    • 输出的 token-id、token-secret 都经过 base64 编码
    base64 解码查看 token-id 和 token-secret
    echo "YzhhZDlj" | base64 -d
    
    echo "MmU0ZDYxMGNmM2U3NDI2ZQ==" |base64 -d
    
    • 得到的结果与上面bootstrap.secret.yaml 文件内容中一致

      image-20210413224112357

    image-20210414122633247

    配置Kubectl

    kubectl 使用 https 协议与 kube-apiserver 进行安全通信,kube-apiserver 对 kubectl 请求包含的证书进行认证和授权。

    kubectl 后续用于集群管理,所以这里创建具有最高权限的 admin 证书。

    # 把master01节点上的admin.kubeconfig分发到其他节点
    for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
      ssh $NODE "mkdir -p $HOME/.kube"
      scp /ups/app/kubernetes/cfg/admin.kubeconfig $NODE:$HOME/.kube/config
      ssh $NODE "chmod 660 $HOME/.kube/config"
    done
    

    tab键命令补全

    echo "source <(kubectl completion bash)" >> ~/.bash_profile
    source ~/.bash_profile
    source /usr/share/bash-completion/bash_completion
    source <(kubectl completion bash)
    

    Worker|Node 节点配置

    Worker 节点组件

    • containerd | docker
    • kubelet
    • kube-proxy
    • calico

    同步证书和私钥文件

    把master1上的证书复制到Node节点

    cd /ups/app/kubernetes/pki
    for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
      ssh $NODE mkdir -p /ups/app/kubernetes/pki

      scp etcd-ca.pem etcd.pem etcd-key.pem ca.pem ca-key.pem front-proxy-ca.pem $NODE:/ups/app/kubernetes/pki/
      scp /ups/app/kubernetes/cfg/bootstrap-kubelet.kubeconfig $NODE:/ups/app/kubernetes/cfg/
    done
    

    kubelet组件配置

    注意:从v1.19.X +版本开始,master节点建议都启动kubelet服务,然后通过配置污点的方式让master节点不运行Pod

    kubelet 运行在每个 worker 节点上,接收 kube-apiserver 发送的请求,管理 Pod 容器,执行交互式命令,如 exec、run、logs 等。

    kubelet 启动时自动向 kube-apiserver 注册节点信息,内置的 cadvisor 统计和监控节点的资源使用情况。

    为确保安全,部署时关闭了 kubelet 的非安全 http 端口,对请求进行认证和授权,拒绝未授权的访问(如 apiserver、heapster 的请求)
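
    kubelet 启动后,可在节点本机简单验证安全端口需要认证(示例,匿名请求应返回 Unauthorized):

    curl -sk https://127.0.0.1:10250/healthz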

    创建kubelet service

    所有节点配置kubelet service(若 Master 节点不运行 Pod,也可不配置)

    cat > /usr/lib/systemd/system/kubelet.service <<EOF
    [Unit]
    Description=Kubernetes Kubelet
    Documentation=https://github.com/kubernetes/kubernetes
    After=docker.service
    Requires=docker.service
    
    [Service]
    WorkingDirectory=/var/lib/kubelet
    ExecStart=/ups/app/kubernetes/bin/kubelet \
      --bootstrap-kubeconfig=/ups/app/kubernetes/cfg/bootstrap-kubelet.kubeconfig \
      --cni-conf-dir=/etc/cni/net.d \
      --cni-bin-dir=/opt/cni/bin \
      --config=/ups/app/kubernetes/cfg/kubelet-conf.yml \
      --image-pull-progress-deadline=30m \
      --kubeconfig=/ups/app/kubernetes/cfg/kubelet.kubeconfig \
      --logtostderr=false \
      --log-dir=/ups/app/kubernetes/log \
      --network-plugin=cni \
      --node-labels=node.kubernetes.io/node='' \
      --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2 \
      --v=2
    
    Restart=always
    StartLimitInterval=0
    RestartSec=10
    
    [Install]
    WantedBy=multi-user.target
    EOF
    

    注意:使用非docker作为容器运行时的情况,需要添加以下配置项

    • --container-runtime 参数为 remote,

    • 设置 --container-runtime-endpoint 为对应的容器运行时的监听地址

    示例:使用containerd
    --container-runtime=remote
    --runtime-request-timeout=30m
    --container-runtime-endpoint=unix:///run/containerd/containerd.sock

    配置10-kubelet.conf

    cat > /etc/systemd/system/kubelet.service.d/10-kubelet.conf <<-'EOF'
    [Service]
    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/ups/app/kubernetes/cfg/bootstrap-kubelet.kubeconfig --kubeconfig=/ups/app/kubernetes/cfg/kubelet.kubeconfig"
    Environment="KUBELET_SYSTEM_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
    Environment="KUBELET_CONFIG_ARGS=--config=/ups/app/kubernetes/cfg/kubelet-conf.yml --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
    Environment="KUBELET_EXTRA_ARGS=--node-labels=node.kubernetes.io/node='' --image-pull-progress-deadline=30m "
    ExecStart=
    ExecStart=/ups/app/kubernetes/bin/kubelet --logtostderr=false --log-dir=/ups/app/kubernetes/log $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_SYSTEM_ARGS $KUBELET_EXTRA_ARGS
    EOF
    
    • bootstrap-kubeconfig:首次启动时向apiserver申请证书
    • kubeconfig:通过bootstrap自动生成kubelet.kubeconfig文件,用于连接apiserver
    • pod-infra-container-image:管理Pod网络容器的镜像

    配置kubelet-conf.yml

    注意:

    • 如果更改了k8s的service网段,需要更改kubelet-conf.yml 的clusterDNS:配置,改成k8s Service网段的第十个地址,比如10.96.0.10(k8s的service网段开始设置的是10.96.0.0/12)
    • cgroupDriver 改成 systemd,必须与 docker 配置文件 /etc/docker/daemon.json 中 "exec-opts": ["native.cgroupdriver=systemd"]配置一致。
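
    可先核对本机 docker 的 cgroup driver(示例),确保与下面 kubelet-conf.yml 中的 cgroupDriver 一致:

    docker info 2>/dev/null | grep -i 'cgroup driver'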
    cat > /ups/app/kubernetes/cfg/kubelet-conf.yml <<EOF
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    address: 0.0.0.0
    port: 10250
    readOnlyPort: 10255
    authentication:
      anonymous:
        enabled: false
      webhook:
        cacheTTL: 2m0s
        enabled: true
      x509:
        clientCAFile: /ups/app/kubernetes/pki/ca.pem
    authorization:
      mode: Webhook
      webhook:
        cacheAuthorizedTTL: 5m0s
        cacheUnauthorizedTTL: 30s
    cgroupDriver: systemd
    cgroupsPerQOS: true
    clusterDNS:
    - 10.96.0.10
    clusterDomain: cluster.local
    containerLogMaxFiles: 5
    containerLogMaxSize: 10Mi
    contentType: application/vnd.kubernetes.protobuf
    cpuCFSQuota: true
    cpuManagerPolicy: none
    cpuManagerReconcilePeriod: 10s
    enableControllerAttachDetach: true
    enableDebuggingHandlers: true
    enforceNodeAllocatable:
    - pods
    eventBurst: 10
    eventRecordQPS: 5
    evictionHard:
      imagefs.available: 15%
      memory.available: 100Mi
      nodefs.available: 10%
      nodefs.inodesFree: 5%
    evictionPressureTransitionPeriod: 5m0s
    failSwapOn: true
    fileCheckFrequency: 20s
    hairpinMode: promiscuous-bridge
    healthzBindAddress: 127.0.0.1
    healthzPort: 10248
    httpCheckFrequency: 20s
    imageGCHighThresholdPercent: 85
    imageGCLowThresholdPercent: 80
    imageMinimumGCAge: 2m0s
    iptablesDropBit: 15
    iptablesMasqueradeBit: 14
    kubeAPIBurst: 10
    kubeAPIQPS: 5
    makeIPTablesUtilChains: true
    maxOpenFiles: 1000000
    maxPods: 110
    nodeStatusUpdateFrequency: 10s
    oomScoreAdj: -999
    podPidsLimit: -1
    registryBurst: 10
    registryPullQPS: 5
    resolvConf: /etc/resolv.conf
    rotateCertificates: true
    runtimeRequestTimeout: 2m0s
    serializeImagePulls: true
    staticPodPath: /ups/app/kubernetes/manifests
    streamingConnectionIdleTimeout: 4h0m0s
    syncFrequency: 1m0s
    volumeStatsAggPeriod: 1m0s
    EOF
    
    kubeadm打印默认配置

    kubeadm config print init-defaults --help

    This command prints objects such as the default init configuration that is used for 'kubeadm init'.

    Note that sensitive values like the Bootstrap Token fields are replaced with placeholder values like {"abcdef.0123456789abcdef" "" "nil" [] []} in order to pass validation but
    not perform the real computation for creating a token.

    Usage:
    kubeadm config print init-defaults [flags]

    Flags:
    --component-configs strings A comma-separated list for component config API objects to print the default values for. Available values: [KubeProxyConfiguration KubeletConfiguration]. If this flag is not set, no component configs will be printed.
    -h, --help help for init-defaults

    Global Flags:
    --add-dir-header If true, adds the file directory to the header of the log messages
    --kubeconfig string The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. (default "/etc/kubernetes/admin.conf")
    --log-file string If non-empty, use this log file
    --log-file-max-size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
    --one-output If true, only write logs to their native severity level (vs also writing to each lower severity level
    --rootfs string [EXPERIMENTAL] The path to the 'real' host root filesystem.
    --skip-headers If true, avoid header prefixes in the log messages
    --skip-log-headers If true, avoid headers when opening log files
    -v, --v Level number for the log level verbosity

    kubeadm config print init-defaults --component-configs KubeletConfiguration
    

    启动kubelet

    所有节点启动kubelet

    systemctl daemon-reload
    systemctl enable --now kubelet
    
    systemctl status kubelet -l
    
    # 若启动失败,检查原因
    journalctl -xefu kubelet
    

    此时查看系统日志 /var/log/messages,若只出现如下信息则属正常(CNI 网络插件尚未安装):
    Unable to update cni config: no networks found in /etc/cni/net.d

    查看集群状态

    kubectl get node
    kubectl get csr
    

    Kube-Proxy组件配置

    创建kube-proxy.kubeconfig文件(在master1节点上)

    # 创建账号
    kubectl -n kube-system create serviceaccount kube-proxy
    # 角色绑定
    kubectl create clusterrolebinding system:kube-proxy \
      --clusterrole system:node-proxier \
      --serviceaccount kube-system:kube-proxy

    SECRET=$(kubectl -n kube-system get sa/kube-proxy \
      --output=jsonpath='{.secrets[0].name}')
    JWT_TOKEN=$(kubectl -n kube-system get secret/${SECRET} \
      --output=jsonpath='{.data.token}' | base64 -d)

    kubectl config set-cluster kubernetes \
      --certificate-authority=/ups/app/kubernetes/pki/ca.pem \
      --embed-certs=true \
      --server=https://192.168.10.225:8443 \
      --kubeconfig=/ups/app/kubernetes/cfg/kube-proxy.kubeconfig

    kubectl config set-credentials kubernetes \
      --token=${JWT_TOKEN} \
      --kubeconfig=/ups/app/kubernetes/cfg/kube-proxy.kubeconfig

    kubectl config set-context kubernetes \
      --cluster=kubernetes \
      --user=kubernetes \
      --kubeconfig=/ups/app/kubernetes/cfg/kube-proxy.kubeconfig

    kubectl config use-context kubernetes \
      --kubeconfig=/ups/app/kubernetes/cfg/kube-proxy.kubeconfig
    

    创建kube-proxy system unit文件

    cat > /usr/lib/systemd/system/kube-proxy.service <<EOF
    [Unit]
    Description=Kubernetes Kube Proxy
    Documentation=https://github.com/kubernetes/kubernetes
    After=network.target
    
    [Service]
    ExecStart=/ups/app/kubernetes/bin/kube-proxy \
      --config=/ups/app/kubernetes/cfg/kube-proxy.conf \
      --logtostderr=false \
      --log-dir=/ups/app/kubernetes/log \
      --v=2
    
    Restart=always
    RestartSec=10s
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
    EOF
    

    创建配置文件

    如果更改了集群Pod的网段,需要把下面 kube-proxy.conf 中的 clusterCIDR: 172.16.0.0/16 参数改为实际的 Pod 网段。

    cat > /ups/app/kubernetes/cfg/kube-proxy.conf <<EOF
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 10
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: /ups/app/kubernetes/cfg/kube-proxy.kubeconfig
      qps: 5
    clusterCIDR: 172.16.0.0/16
    configSyncPeriod: 15m0s
    conntrack:
      max: null
      maxPerCore: 32768
      min: 131072
      tcpCloseWaitTimeout: 1h0m0s
      tcpEstablishedTimeout: 24h0m0s
    enableProfiling: false
    healthzBindAddress: 0.0.0.0:10256
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: 14
      minSyncPeriod: 0s
      syncPeriod: 30s
    ipvs:
      masqueradeAll: true
      minSyncPeriod: 5s
      scheduler: "rr"
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"
    nodePortAddresses: null
    oomScoreAdj: -999
    portRange: ""
    udpIdleTimeout: 250ms
    EOF
    

    分发配置文件

    for NODE in k8s-master01 k8s-master02 k8s-master03 k8s-node01 k8s-node02; do
        scp /ups/app/kubernetes/cfg/kube-proxy.kubeconfig $NODE:/ups/app/kubernetes/cfg/kube-proxy.kubeconfig
        scp /ups/app/kubernetes/cfg/kube-proxy.conf $NODE:/ups/app/kubernetes/cfg/kube-proxy.conf
        scp /usr/lib/systemd/system/kube-proxy.service $NODE:/usr/lib/systemd/system/kube-proxy.service
    done
    

    启动kube-proxy

    systemctl daemon-reload
    systemctl enable --now kube-proxy
    
    systemctl status kube-proxy -l
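
    # 验证代理模式已切换为 ipvs(示例;/proxyMode 由 metricsBindAddress 127.0.0.1:10249 提供,ipvsadm 需已安装)
    curl -s http://127.0.0.1:10249/proxyMode
    ipvsadm -Ln | head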
    

    安装插件

    安装calico网络插件(master01)

    Calico 是一款纯 Layer 3 的数据中心网络方案
    kubernetes 要求集群内各节点(包括 master 节点)能通过 Pod 网段互联互通。
    calico 使用 IPIP 或 BGP 技术(默认为 IPIP)为各节点创建一个可以互通的 Pod 网络。
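
    calico 部署完成后,可在任一节点粗略验证 IPIP 隧道是否建立(示例,假设保持默认的 CALICO_IPV4POOL_IPIP=Always):

    ip -d link show tunl0        # 各节点应出现 tunl0 隧道接口
    ip route | grep tunl0        # 到其它节点 Pod 网段的路由应经 tunl0 转发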

    安装文档

    https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico-with-kubernetes-api-datastore-more-than-50-nodes

    将calico安装到etcd中

    image-20210414112556009

    下载默认配置文件
    # 最新版本
    curl https://docs.projectcalico.org/manifests/calico-etcd.yaml -o calico-etcd.yaml
    
    # 指定版本
    curl https://docs.projectcalico.org/archive/v3.15/manifests/calico-etcd.yaml -O
    
    curl https://docs.projectcalico.org/archive/v3.16/manifests/calico-etcd.yaml -o calico-etcd-v3.16.yaml
    
    wget https://docs.projectcalico.org/v3.16.6/manifests/calico.yaml -O calico.yaml 
    
    
    默认yaml文件内容
    ---
    # Source: calico/templates/calico-etcd-secrets.yaml
    # The following contains k8s Secrets for use with a TLS enabled etcd cluster.
    # For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
    apiVersion: v1
    kind: Secret
    type: Opaque
    metadata:
      name: calico-etcd-secrets
      namespace: kube-system
    data:
      # Populate the following with etcd TLS configuration if desired, but leave blank if
      # not using TLS for etcd.
      # The keys below should be uncommented and the values populated with the base64
      # encoded contents of each file that would be associated with the TLS data.
      # Example command for encoding a file contents: cat <file> | base64 -w 0
      # etcd-key: null
      # etcd-cert: null
      # etcd-ca: null
    ---
    # Source: calico/templates/calico-config.yaml
    # This ConfigMap is used to configure a self-hosted Calico installation.
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: calico-config
      namespace: kube-system
    data:
      # Configure this with the location of your etcd cluster.
      etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"
      # If you're using TLS enabled etcd uncomment the following.
      # You must also populate the Secret below with these files.
      etcd_ca: ""   # "/calico-secrets/etcd-ca"
      etcd_cert: "" # "/calico-secrets/etcd-cert"
      etcd_key: ""  # "/calico-secrets/etcd-key"
      # Typha is disabled.
      typha_service_name: "none"
      # Configure the backend to use.
      calico_backend: "bird"
    
      # Configure the MTU to use for workload interfaces and tunnels.
      # By default, MTU is auto-detected, and explicitly setting this field should not be required.
      # You can override auto-detection by providing a non-zero value.
      veth_mtu: "0"
    
      # The CNI network configuration to install on each node. The special
      # values in this config will be automatically populated.
      cni_network_config: |-
        {
          "name": "k8s-pod-network",
          "cniVersion": "0.3.1",
          "plugins": [
            {
              "type": "calico",
              "log_level": "info",
              "log_file_path": "/var/log/calico/cni/cni.log",
              "etcd_endpoints": "__ETCD_ENDPOINTS__",
              "etcd_key_file": "__ETCD_KEY_FILE__",
              "etcd_cert_file": "__ETCD_CERT_FILE__",
              "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
              "mtu": __CNI_MTU__,
              "ipam": {
                  "type": "calico-ipam"
              },
              "policy": {
                  "type": "k8s"
              },
              "kubernetes": {
                  "kubeconfig": "__KUBECONFIG_FILEPATH__"
              }
            },
            {
              "type": "portmap",
              "snat": true,
              "capabilities": {"portMappings": true}
            },
            {
              "type": "bandwidth",
              "capabilities": {"bandwidth": true}
            }
          ]
        }
    
    ---
    # Source: calico/templates/calico-kube-controllers-rbac.yaml
    
    # Include a clusterrole for the kube-controllers component,
    # and bind it to the calico-kube-controllers serviceaccount.
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: calico-kube-controllers
    rules:
      # Pods are monitored for changing labels.
      # The node controller monitors Kubernetes nodes.
      # Namespace and serviceaccount labels are used for policy.
      - apiGroups: [""]
        resources:
          - pods
          - nodes
          - namespaces
          - serviceaccounts
        verbs:
          - watch
          - list
          - get
      # Watch for changes to Kubernetes NetworkPolicies.
      - apiGroups: ["networking.k8s.io"]
        resources:
          - networkpolicies
        verbs:
          - watch
          - list
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: calico-kube-controllers
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: calico-kube-controllers
    subjects:
    - kind: ServiceAccount
      name: calico-kube-controllers
      namespace: kube-system
    ---
    
    ---
    # Source: calico/templates/calico-node-rbac.yaml
    # Include a clusterrole for the calico-node DaemonSet,
    # and bind it to the calico-node serviceaccount.
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1
    metadata:
      name: calico-node
    rules:
      # The CNI plugin needs to get pods, nodes, and namespaces.
      - apiGroups: [""]
        resources:
          - pods
          - nodes
          - namespaces
        verbs:
          - get
      - apiGroups: [""]
        resources:
          - endpoints
          - services
        verbs:
          # Used to discover service IPs for advertisement.
          - watch
          - list
      # Pod CIDR auto-detection on kubeadm needs access to config maps.
      - apiGroups: [""]
        resources:
          - configmaps
        verbs:
          - get
      - apiGroups: [""]
        resources:
          - nodes/status
        verbs:
          # Needed for clearing NodeNetworkUnavailable flag.
          - patch
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: calico-node
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: calico-node
    subjects:
    - kind: ServiceAccount
      name: calico-node
      namespace: kube-system
    
    ---
    # Source: calico/templates/calico-node.yaml
    # This manifest installs the calico-node container, as well
    # as the CNI plugins and network config on
    # each master and worker node in a Kubernetes cluster.
    kind: DaemonSet
    apiVersion: apps/v1
    metadata:
      name: calico-node
      namespace: kube-system
      labels:
        k8s-app: calico-node
    spec:
      selector:
        matchLabels:
          k8s-app: calico-node
      updateStrategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
      template:
        metadata:
          labels:
            k8s-app: calico-node
        spec:
          nodeSelector:
            kubernetes.io/os: linux
          hostNetwork: true
          tolerations:
            # Make sure calico-node gets scheduled on all nodes.
            - effect: NoSchedule
              operator: Exists
            # Mark the pod as a critical add-on for rescheduling.
            - key: CriticalAddonsOnly
              operator: Exists
            - effect: NoExecute
              operator: Exists
          serviceAccountName: calico-node
          # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
          # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
          terminationGracePeriodSeconds: 0
          priorityClassName: system-node-critical
          initContainers:
            # This container installs the CNI binaries
            # and CNI network config file on each node.
            - name: install-cni
              image: docker.io/calico/cni:v3.18.1
              command: ["/opt/cni/bin/install"]
              envFrom:
              - configMapRef:
                  # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
                  name: kubernetes-services-endpoint
                  optional: true
              env:
                # Name of the CNI config file to create.
                - name: CNI_CONF_NAME
                  value: "10-calico.conflist"
                # The CNI network config to install on each node.
                - name: CNI_NETWORK_CONFIG
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: cni_network_config
                # The location of the etcd cluster.
                - name: ETCD_ENDPOINTS
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: etcd_endpoints
                # CNI MTU Config variable
                - name: CNI_MTU
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: veth_mtu
                # Prevents the container from sleeping forever.
                - name: SLEEP
                  value: "false"
              volumeMounts:
                - mountPath: /host/opt/cni/bin
                  name: cni-bin-dir
                - mountPath: /host/etc/cni/net.d
                  name: cni-net-dir
                - mountPath: /calico-secrets
                  name: etcd-certs
              securityContext:
                privileged: true
            # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
            # to communicate with Felix over the Policy Sync API.
            - name: flexvol-driver
              image: docker.io/calico/pod2daemon-flexvol:v3.18.1
              volumeMounts:
              - name: flexvol-driver-host
                mountPath: /host/driver
              securityContext:
                privileged: true
          containers:
            # Runs calico-node container on each Kubernetes node. This
            # container programs network policy and routes on each
            # host.
            - name: calico-node
              image: docker.io/calico/node:v3.18.1
              envFrom:
              - configMapRef:
                  # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
                  name: kubernetes-services-endpoint
                  optional: true
              env:
                # The location of the etcd cluster.
                - name: ETCD_ENDPOINTS
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: etcd_endpoints
                # Location of the CA certificate for etcd.
                - name: ETCD_CA_CERT_FILE
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: etcd_ca
                # Location of the client key for etcd.
                - name: ETCD_KEY_FILE
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: etcd_key
                # Location of the client certificate for etcd.
                - name: ETCD_CERT_FILE
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: etcd_cert
                # Set noderef for node controller.
                - name: CALICO_K8S_NODE_REF
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName
                # Choose the backend to use.
                - name: CALICO_NETWORKING_BACKEND
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: calico_backend
                # Cluster type to identify the deployment type
                - name: CLUSTER_TYPE
                  value: "k8s,bgp"
                # Auto-detect the BGP IP address.
                - name: IP
                  value: "autodetect"
                # Enable IPIP
                - name: CALICO_IPV4POOL_IPIP
                  value: "Always"
                # Enable or Disable VXLAN on the default IP pool.
                - name: CALICO_IPV4POOL_VXLAN
                  value: "Never"
                # Set MTU for tunnel device used if ipip is enabled
                - name: FELIX_IPINIPMTU
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: veth_mtu
                # Set MTU for the VXLAN tunnel device.
                - name: FELIX_VXLANMTU
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: veth_mtu
                # Set MTU for the Wireguard tunnel device.
                - name: FELIX_WIREGUARDMTU
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: veth_mtu
                # The default IPv4 pool to create on startup if none exists. Pod IPs will be
                # chosen from this range. Changing this value after installation will have
                # no effect. This should fall within `--cluster-cidr`.
                # - name: CALICO_IPV4POOL_CIDR
                #   value: "192.168.0.0/16"
                # Disable file logging so `kubectl logs` works.
                - name: CALICO_DISABLE_FILE_LOGGING
                  value: "true"
                # Set Felix endpoint to host default action to ACCEPT.
                - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
                  value: "ACCEPT"
                # Disable IPv6 on Kubernetes.
                - name: FELIX_IPV6SUPPORT
                  value: "false"
                # Set Felix logging to "info"
                - name: FELIX_LOGSEVERITYSCREEN
                  value: "info"
                - name: FELIX_HEALTHENABLED
                  value: "true"
              securityContext:
                privileged: true
              resources:
                requests:
                  cpu: 250m
              livenessProbe:
                exec:
                  command:
                  - /bin/calico-node
                  - -felix-live
                  - -bird-live
                periodSeconds: 10
                initialDelaySeconds: 10
                failureThreshold: 6
              readinessProbe:
                exec:
                  command:
                  - /bin/calico-node
                  - -felix-ready
                  - -bird-ready
                periodSeconds: 10
              volumeMounts:
                - mountPath: /lib/modules
                  name: lib-modules
                  readOnly: true
                - mountPath: /run/xtables.lock
                  name: xtables-lock
                  readOnly: false
                - mountPath: /var/run/calico
                  name: var-run-calico
                  readOnly: false
                - mountPath: /var/lib/calico
                  name: var-lib-calico
                  readOnly: false
                - mountPath: /calico-secrets
                  name: etcd-certs
                - name: policysync
                  mountPath: /var/run/nodeagent
                # For eBPF mode, we need to be able to mount the BPF filesystem at /sys/fs/bpf so we mount in the
                # parent directory.
                - name: sysfs
                  mountPath: /sys/fs/
                  # Bidirectional means that, if we mount the BPF filesystem at /sys/fs/bpf it will propagate to the host.
                  # If the host is known to mount that filesystem already then Bidirectional can be omitted.
                  mountPropagation: Bidirectional
                - name: cni-log-dir
                  mountPath: /var/log/calico/cni
                  readOnly: true
          volumes:
            # Used by calico-node.
            - name: lib-modules
              hostPath:
                path: /lib/modules
            - name: var-run-calico
              hostPath:
                path: /var/run/calico
            - name: var-lib-calico
              hostPath:
                path: /var/lib/calico
            - name: xtables-lock
              hostPath:
                path: /run/xtables.lock
                type: FileOrCreate
            - name: sysfs
              hostPath:
                path: /sys/fs/
                type: DirectoryOrCreate
            # Used to install CNI.
            - name: cni-bin-dir
              hostPath:
                path: /opt/cni/bin
            - name: cni-net-dir
              hostPath:
                path: /etc/cni/net.d
            # Used to access CNI logs.
            - name: cni-log-dir
              hostPath:
                path: /var/log/calico/cni
            # Mount in the etcd TLS secrets with mode 400.
            # See https://kubernetes.io/docs/concepts/configuration/secret/
            - name: etcd-certs
              secret:
                secretName: calico-etcd-secrets
                defaultMode: 0400
            # Used to create per-pod Unix Domain Sockets
            - name: policysync
              hostPath:
                type: DirectoryOrCreate
                path: /var/run/nodeagent
            # Used to install Flex Volume Driver
            - name: flexvol-driver-host
              hostPath:
                type: DirectoryOrCreate
                path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
    ---
    
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: calico-node
      namespace: kube-system
    
    ---
    # Source: calico/templates/calico-kube-controllers.yaml
    # See https://github.com/projectcalico/kube-controllers
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
    spec:
      # The controllers can only have a single active instance.
      replicas: 1
      selector:
        matchLabels:
          k8s-app: calico-kube-controllers
      strategy:
        type: Recreate
      template:
        metadata:
          name: calico-kube-controllers
          namespace: kube-system
          labels:
            k8s-app: calico-kube-controllers
        spec:
          nodeSelector:
            kubernetes.io/os: linux
          tolerations:
            # Mark the pod as a critical add-on for rescheduling.
            - key: CriticalAddonsOnly
              operator: Exists
            - key: node-role.kubernetes.io/master
              effect: NoSchedule
          serviceAccountName: calico-kube-controllers
          priorityClassName: system-cluster-critical
          # The controllers must run in the host network namespace so that
          # it isn't governed by policy that would prevent it from working.
          hostNetwork: true
          containers:
            - name: calico-kube-controllers
              image: docker.io/calico/kube-controllers:v3.18.1
              env:
                # The location of the etcd cluster.
                - name: ETCD_ENDPOINTS
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: etcd_endpoints
                # Location of the CA certificate for etcd.
                - name: ETCD_CA_CERT_FILE
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: etcd_ca
                # Location of the client key for etcd.
                - name: ETCD_KEY_FILE
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: etcd_key
                # Location of the client certificate for etcd.
                - name: ETCD_CERT_FILE
                  valueFrom:
                    configMapKeyRef:
                      name: calico-config
                      key: etcd_cert
                # Choose which controllers to run.
                - name: ENABLED_CONTROLLERS
                  value: policy,namespace,serviceaccount,workloadendpoint,node
              volumeMounts:
                # Mount in the etcd TLS secrets.
                - mountPath: /calico-secrets
                  name: etcd-certs
              readinessProbe:
                exec:
                  command:
                  - /usr/bin/check-status
                  - -r
          volumes:
            # Mount in the etcd TLS secrets with mode 400.
            # See https://kubernetes.io/docs/concepts/configuration/secret/
            - name: etcd-certs
              secret:
                secretName: calico-etcd-secrets
                defaultMode: 0400
    
    ---
    
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
    
    ---
    
    # This manifest creates a Pod Disruption Budget for Controller to allow K8s Cluster Autoscaler to evict
    
    apiVersion: policy/v1beta1
    kind: PodDisruptionBudget
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
    spec:
      maxUnavailable: 1
      selector:
        matchLabels:
          k8s-app: calico-kube-controllers
    
    ---
    # Source: calico/templates/calico-typha.yaml
    
    ---
    # Source: calico/templates/configure-canal.yaml
    
    ---
    # Source: calico/templates/kdd-crds.yaml
    
    
    

    修改配置

    sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.10.221:2379,https://192.168.10.222:2379,https://192.168.10.223:2379"#g' calico-etcd.yaml
    
    ETCD_CA=`cat /ups/app/kubernetes/pki/etcd-ca.pem | base64 | tr -d '\n'`
    ETCD_CERT=`cat /ups/app/kubernetes/pki/etcd.pem | base64 | tr -d '\n'`
    ETCD_KEY=`cat /ups/app/kubernetes/pki/etcd-key.pem | base64 | tr -d '\n'`
    
    sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml
    
    
    sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml
    
    # 更改此处为自己的pod网段
    POD_SUBNET="172.16.0.0/16"
    
    sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml
    
    • CALICO_IPV4POOL_IPIP 为 Never 使用 BGP 模式
      • 它会以daemonset方式安装在所有node主机,每台主机启动一个bird(BGP client),它会将calico网络内的所有node分配的ip段告知集群内的主机,并通过本机的网卡eth0或者ens33转发数据
    • cni插件默认安装目录
      • image-20210414134601838

    创建资源

    kubectl apply -f calico-etcd.yaml
    

    检查Pod状态

    # 检查Pod状态
    kubectl get pod -n kube-system
    kubectl get po -n kube-system -owide
    kubectl get pods -A
    
    # 查看状态信息
    kubectl describe po calico-node-k7cff -n kube-system
    # 查看容器日志
    kubectl logs -f calico-node-k7cff -n kube-system
    

    安装CoreDNS

    服务发现插件

    下载默认配置文件

    # 软件地址
    https://github.com/coredns/deployment/tree/master/kubernetes
    
    wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed -O coredns.yaml.template
    
    • 修改yaml文件
      1. 将 kubernetes CLUSTER_DOMAIN REVERSE_CIDRS 改成 kubernetes cluster.local in-addr.arpa ip6.arpa
      2. 将 forward . UPSTREAMNAMESERVER 改成 forward . /etc/resolv.conf
      3. 将 clusterIP: CLUSTER_DNS_IP 改成 k8s service 网段的第10个IP 地址。例如:clusterIP:10.96.0.10(kubelet配置文件中的clusterDNS)
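
    上述三处修改也可用 sed 一次完成(示例,基于上面下载的 coredns.yaml.template;CLUSTER_DNS_IP 为自行设置的变量,按实际 Service 网段调整;若模板中还有 STUBDOMAINS 占位符,则一并清空):

    CLUSTER_DNS_IP="10.96.0.10"
    sed -e 's/CLUSTER_DOMAIN REVERSE_CIDRS/cluster.local in-addr.arpa ip6.arpa/' \
        -e 's@forward . UPSTREAMNAMESERVER@forward . /etc/resolv.conf@' \
        -e "s/CLUSTER_DNS_IP/${CLUSTER_DNS_IP}/" \
        -e 's/STUBDOMAINS//' \
        coredns.yaml.template > coredns.yaml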

    创建配置文件

    如果更改了k8s service的网段需要将coredns的serviceIP改成k8s service网段的第十个IP

    cat > coredns.yaml <<-'EOF'
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: coredns
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
      name: system:coredns
    rules:
    - apiGroups:
      - ""
      resources:
      - endpoints
      - services
      - pods
      - namespaces
      verbs:
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
      labels:
        kubernetes.io/bootstrapping: rbac-defaults
      name: system:coredns
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:coredns
    subjects:
    - kind: ServiceAccount
      name: coredns
      namespace: kube-system
    ---
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
    data:
      Corefile: |
        .:53 {
            errors
            health {
              lameduck 5s
            }
            ready
            kubernetes cluster.local in-addr.arpa ip6.arpa {
              fallthrough in-addr.arpa ip6.arpa
            }
            prometheus :9153
            forward . /etc/resolv.conf {
              max_concurrent 1000
            }
            cache 30
            loop
            reload
            loadbalance
        }
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: coredns
      namespace: kube-system
      labels:
        k8s-app: kube-dns
        kubernetes.io/name: "CoreDNS"
    spec:
      # replicas: not specified here:
      # 1. Default is 1.
      # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1
      selector:
        matchLabels:
          k8s-app: kube-dns
      template:
        metadata:
          labels:
            k8s-app: kube-dns
        spec:
          priorityClassName: system-cluster-critical
          serviceAccountName: coredns
          tolerations:
            - key: "CriticalAddonsOnly"
              operator: "Exists"
          nodeSelector:
            kubernetes.io/os: linux
          affinity:
             podAntiAffinity:
               preferredDuringSchedulingIgnoredDuringExecution:
               - weight: 100
                 podAffinityTerm:
                   labelSelector:
                     matchExpressions:
                       - key: k8s-app
                         operator: In
                         values: ["kube-dns"]
                   topologyKey: kubernetes.io/hostname
          containers:
          - name: coredns
            image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.7.0
            imagePullPolicy: IfNotPresent
            resources:
              limits:
                memory: 170Mi
              requests:
                cpu: 100m
                memory: 70Mi
            args: [ "-conf", "/etc/coredns/Corefile" ]
            volumeMounts:
            - name: config-volume
              mountPath: /etc/coredns
              readOnly: true
            ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9153
              name: metrics
              protocol: TCP
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                add:
                - NET_BIND_SERVICE
                drop:
                - all
              readOnlyRootFilesystem: true
            livenessProbe:
              httpGet:
                path: /health
                port: 8080
                scheme: HTTP
              initialDelaySeconds: 60
              timeoutSeconds: 5
              successThreshold: 1
              failureThreshold: 5
            readinessProbe:
              httpGet:
                path: /ready
                port: 8181
                scheme: HTTP
          dnsPolicy: Default
          volumes:
            - name: config-volume
              configMap:
                name: coredns
                items:
                - key: Corefile
                  path: Corefile
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: kube-dns
      namespace: kube-system
      annotations:
        prometheus.io/port: "9153"
        prometheus.io/scrape: "true"
      labels:
        k8s-app: kube-dns
        kubernetes.io/cluster-service: "true"
        kubernetes.io/name: "CoreDNS"
    spec:
      selector:
        k8s-app: kube-dns
      clusterIP: 10.96.0.10
      ports:
      - name: dns
        port: 53
        protocol: UDP
      - name: dns-tcp
        port: 53
        protocol: TCP
      - name: metrics
        port: 9153
        protocol: TCP
    EOF
    
    • If you changed the k8s Service CIDR, point the CoreDNS Service at the 10th IP of the new CIDR, e.g. clusterIP: 10.96.0.10

      # replace the default 10.96.0.10 with your own DNS Service IP (the value below is a placeholder)
      sed -i "s#10.96.0.10#<10th-IP-of-your-service-CIDR>#g" CoreDNS/coredns.yaml
      

    集群中创建资源

    kubectl apply -f coredns.yaml
    

    检查确认

    # 查看状态
    kubectl get po -n kube-system -l k8s-app=kube-dns
    kubectl logs -f coredns-6ccb5d565f-nsfq4 -n kube-system
    kubectl get pods -n kube-system -o wide
    

    安装最新版本

    git clone https://github.com/coredns/deployment.git
    
    # pre-pull the images referenced in the generated manifest (assuming it was written to coredns.yaml)
    for img in $(awk '/image:/{print $NF}' coredns.yaml); do echo "pulling $img ..."; docker pull $img; done
    
    cd deployment/kubernetes
    # install or upgrade coredns
    ## with -s, deploy.sh skips converting the existing kube-dns ConfigMap to CoreDNS format
    ## -i sets the cluster DNS Service IP (the 10th IP of the Service CIDR); -r sets the reverse-DNS CIDR (here the Service network 10.96.0.0/12)
    ./deploy.sh -s -i 10.96.0.10 | kubectl apply -f -
    ./deploy.sh -i 10.96.0.10 -r "10.96.0.0/12" -s -t coredns.yaml.sed | kubectl apply -f -
    
    # 安装并替换 kube-dns
    ./deploy.sh | kubectl apply -f -
    kubectl delete --namespace=kube-system deployment kube-dns
    
    # 将 coredns 回滚到 kube-dns
    ./rollback.sh | kubectl apply -f -
    kubectl delete --namespace=kube-system deployment coredns
    
    # 查看状态
    kubectl get po -n kube-system -l k8s-app=kube-dns
    

    升级最新版coredns

    查看当前版本
     kubectl get pod -n kube-system coredns-867bfd96bd-f8ffj -oyaml|grep image
                f:image: {}
                f:imagePullPolicy: {}
        image: coredns/coredns:1.7.0
        imagePullPolicy: IfNotPresent
      - image: coredns/coredns:1.7.0
        imageID: ""
    
    
    备份原来的cm、deploy、clusterrole、clusterrolebinding
    [root@k8s-master01 ~]# kubectl get cm -n kube-system coredns -oyaml > coredns-config.yaml
    [root@k8s-master01 ~]# kubectl get deploy -n kube-system coredns -oyaml > coredns-controllers.yaml
    [root@k8s-master01 ~]# kubectl get clusterrole system:coredns -oyaml > coredns-clusterrole.yaml
    [root@k8s-master01 ~]# kubectl get clusterrolebinding  system:coredns -oyaml > coredns-clusterrolebinding.yaml
    
    升级
    git clone https://github.com/coredns/deployment.git
    
    cd deployment/kubernetes/
    ./deploy.sh -s | kubectl apply -f -
    

    image-20210413160038885

    检查确认
    kubectl get pod -n kube-system coredns-867bfd96bd-f8ffj -oyaml|grep image
    

    image-20210413160113155

    安装Metrics Server

    在新版的Kubernetes中系统资源的采集均使用Metrics-server,可以通过Metrics采集节点和Pod的内存、磁盘、CPU和网络的使用率

    注意:

    If your certificates are not under the default path /etc/kubernetes/pki, update metrics-server-v0.4.1.yaml to point at your own certificate path, as shown below:

    image-20210419130313666

    拉取代码

    https://github.com/kubernetes-sigs/metrics-server/releases
    

    在线安装

    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.1/components.yaml
    
    components.yaml 文件内容
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        k8s-app: metrics-server
        rbac.authorization.k8s.io/aggregate-to-admin: "true"
        rbac.authorization.k8s.io/aggregate-to-edit: "true"
        rbac.authorization.k8s.io/aggregate-to-view: "true"
      name: system:aggregated-metrics-reader
    rules:
    - apiGroups:
      - metrics.k8s.io
      resources:
      - pods
      - nodes
      verbs:
      - get
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      labels:
        k8s-app: metrics-server
      name: system:metrics-server
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      - nodes
      - nodes/stats
      - namespaces
      - configmaps
      verbs:
      - get
      - list
      - watch
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server-auth-reader
      namespace: kube-system
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: extension-apiserver-authentication-reader
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server:system:auth-delegator
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:auth-delegator
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      labels:
        k8s-app: metrics-server
      name: system:metrics-server
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: system:metrics-server
    subjects:
    - kind: ServiceAccount
      name: metrics-server
      namespace: kube-system
    ---
    apiVersion: v1
    kind: Service
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    spec:
      ports:
      - name: https
        port: 443
        protocol: TCP
        targetPort: https
      selector:
        k8s-app: metrics-server
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        k8s-app: metrics-server
      name: metrics-server
      namespace: kube-system
    spec:
      selector:
        matchLabels:
          k8s-app: metrics-server
      strategy:
        rollingUpdate:
          maxUnavailable: 0
      template:
        metadata:
          labels:
            k8s-app: metrics-server
        spec:
          containers:
          - args:
            - --cert-dir=/tmp
            - --secure-port=4443
            - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
            - --kubelet-use-node-status-port
            image: k8s.gcr.io/metrics-server/metrics-server:v0.4.1
            imagePullPolicy: IfNotPresent
            livenessProbe:
              failureThreshold: 3
              httpGet:
                path: /livez
                port: https
                scheme: HTTPS
              periodSeconds: 10
            name: metrics-server
            ports:
            - containerPort: 4443
              name: https
              protocol: TCP
            readinessProbe:
              failureThreshold: 3
              httpGet:
                path: /readyz
                port: https
                scheme: HTTPS
              periodSeconds: 10
            securityContext:
              readOnlyRootFilesystem: true
              runAsNonRoot: true
              runAsUser: 1000
            volumeMounts:
            - mountPath: /tmp
              name: tmp-dir
          nodeSelector:
            kubernetes.io/os: linux
          priorityClassName: system-cluster-critical
          serviceAccountName: metrics-server
          volumes:
          - emptyDir: {}
            name: tmp-dir
    ---
    apiVersion: apiregistration.k8s.io/v1
    kind: APIService
    metadata:
      labels:
        k8s-app: metrics-server
      name: v1beta1.metrics.k8s.io
    spec:
      group: metrics.k8s.io
      groupPriorityMinimum: 100
      insecureSkipTLSVerify: true
      service:
        name: metrics-server
        namespace: kube-system
      version: v1beta1
      versionPriority: 100
    
    

    利用自定义配置文件创建资源

    cd k8s-ha-install/metrics-server-0.4.x/
    kubectl apply -f comp.yaml
    
    kubectl get pod -n kube-system -l k8s-app=metrics-server
    kubectl get pod -n kube-system 
    
    • 注意:comp.yaml配置内容

      image-20210413145208995

    检查确认

    # 等待metrics server启动然后查看状态
    kubectl  top node
    
    kubectl get pod -n kube-system -l k8s-app=metrics-server
    

    查看输出metric

    # 查看 metrics-server 输出的 metrics
    kubectl get --raw https://192.168.10.221:6443/apis/metrics.k8s.io/v1beta1/nodes | jq .
    kubectl get --raw https://192.168.10.221:6443/apis/metrics.k8s.io/v1beta1/pods | jq .
    kubectl get --raw https://192.168.10.221:6443/apis/metrics.k8s.io/v1beta1/nodes/<node-name> | jq .
    kubectl get --raw https://192.168.10.221:6443/apis/metrics.k8s.io/v1beta1/namespaces/<namespace-name>/pods/<pod-name> | jq .
    

    metric api 接口

    kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes |jq
    

    image-20210512163829452

    安装Dashboard

    Dashboard用于展示集群中的各类资源,同时也可以通过Dashboard实时查看Pod的日志和在容器中执行一些命令等

    cd dashboard/
    kubectl apply -f .
    
    

    安装最新版

    # 官方GitHub地址:https://github.com/kubernetes/dashboard
    # 可以在官方dashboard查看到最新版dashboard
    wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml -O dashboard.yaml
    
    curl https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml -o kube-dashboard-v2.2.0.yaml
    
    kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.1.0/aio/deploy/recommended.yaml
    
    # 创建管理员用户vim admin.yaml
    cat > dashboard-admin.yaml <<-'EOF'
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: admin-user
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding 
    metadata: 
      name: admin-user
      annotations:
        rbac.authorization.kubernetes.io/autoupdate: "true"
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: admin-user
      namespace: kube-system
    EOF
    # 安装
    kubectl apply -f dashboard-admin.yaml -n kube-system
    

    登录

    # 更改dashboard的svc为NodePort (将ClusterIP更改为NodePort)
    kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
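    # equivalent one-liner without opening an editor (optional)
    kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'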
    
    # 查看端口号
    kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
    NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
    kubernetes-dashboard   NodePort   10.96.77.112   <none>        443:30902/TCP   4m10s
    
    # 根据上面得到的实例端口号,通过任意安装了kube-proxy的宿主机或者VIP的IP+端口即可访问到dashboard
    如浏览器中打开web访问:https://192.168.10.225:30902/ 并使用token登录
    
    # Browser note: if Chrome refuses to open the Dashboard because of the self-signed certificate,
    # add the following startup flags to the Chrome shortcut:
    #   --test-type --ignore-certificate-errors
    
    # 获取token
    kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
    

    image-20210419130649682

    image-20210419131033543

    image-20210419130957790

    用token的 kubeconfig文件登陆 dashboard
    # 创建登陆token
    kubectl create sa dashboard-admin -n kube-system
    kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
    
    ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')
    DASHBOARD_LOGIN_TOKEN=$(kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}')
    
    # set cluster parameters
    kubectl config set-cluster kubernetes \
      --certificate-authority=/ups/app/kubernetes/pki/ca.pem \
      --embed-certs=true \
      --server=https://192.168.10.225:8443 \
      --kubeconfig=dashboard.kubeconfig
     
    # set client credentials, using the token created above
    kubectl config set-credentials dashboard_user \
      --token=${DASHBOARD_LOGIN_TOKEN} \
      --kubeconfig=dashboard.kubeconfig
     
    # set context parameters
    kubectl config set-context default \
      --cluster=kubernetes \
      --user=dashboard_user \
      --kubeconfig=dashboard.kubeconfig
     
    # set the default context
    kubectl config use-context default --kubeconfig=dashboard.kubeconfig
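    # (optional) verify the generated kubeconfig before importing it on the Dashboard login page
    kubectl --kubeconfig=dashboard.kubeconfig get nodes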
    

    image-20210419130528610

    安装 kube-prometheus

    项目地址

    image-20210414140351137

    软件下载

    git clone https://github.com/coreos/kube-prometheus.git
    

    安装配置

    cd kube-prometheus/
    find . -name "*.yaml" -exec grep 'image: ' {} ;|awk '{print $NF}'|sort|uniq
    find . -name "*.yaml" -exec grep 'quay.io' {} ;|awk '{print $NF}'|sort|uniq
    
    # 使用中科大的 Registry
     sed -i -e 's#quay.io#quay.mirrors.ustc.edu.cn#g' manifests/*.yaml manifests/setup/*.yaml    
     
    # 安装 prometheus-operator
    kubectl apply -f manifests/setup   
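    ## (optional) wait until the operator's CRDs are registered before applying the rest
    until kubectl get servicemonitors --all-namespaces >/dev/null 2>&1; do date; sleep 1; done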
     
    # 安装 promethes metric adapter
    kubectl apply -f manifests/   
    
    

    image-20210414141830060

    检查运行状态

    kubectl get pods -n monitoring
    

    安装traefik

    服务暴露用插件

    创建命名空间

    kubectl create ns ingress-traefik
    

    创建CRD 资源

    在 traefik v2.0 版本后,开始使用 CRD(Custom Resource Definition)来完成路由配置等,所以需要提前创建 CRD 资源

    cat > traefik-crd.yaml <<-'EOF'
    ## IngressRoute
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: ingressroutes.traefik.containo.us
    spec:
      scope: Namespaced
      group: traefik.containo.us
      version: v1alpha1
      names:
        kind: IngressRoute
        plural: ingressroutes
        singular: ingressroute
    ---
    ## IngressRouteTCP
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: ingressroutetcps.traefik.containo.us
    spec:
      scope: Namespaced
      group: traefik.containo.us
      version: v1alpha1
      names:
        kind: IngressRouteTCP
        plural: ingressroutetcps
        singular: ingressroutetcp
    ---
    ## Middleware
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: middlewares.traefik.containo.us
    spec:
      scope: Namespaced
      group: traefik.containo.us
      version: v1alpha1
      names:
        kind: Middleware
        plural: middlewares
        singular: middleware
    ---
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: tlsoptions.traefik.containo.us
    spec:
      scope: Namespaced
      group: traefik.containo.us
      version: v1alpha1
      names:
        kind: TLSOption
        plural: tlsoptions
        singular: tlsoption
    ---
    ## TraefikService
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: traefikservices.traefik.containo.us
    spec:
      scope: Namespaced
      group: traefik.containo.us
      version: v1alpha1
      names:
        kind: TraefikService
        plural: traefikservices
        singular: traefikservice
     
    ---
    ## TraefikTLSStore
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: tlsstores.traefik.containo.us
    spec:
      scope: Namespaced
      group: traefik.containo.us
      version: v1alpha1
      names:
        kind: TLSStore
        plural: tlsstores
        singular: tlsstore
     
    ---
    ## IngressRouteUDP
    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: ingressrouteudps.traefik.containo.us 
    spec:
      scope: Namespaced
      group: traefik.containo.us
      version: v1alpha1
      names:
        kind: IngressRouteUDP
        plural: ingressrouteudps
        singular: ingressrouteudp
    EOF
    
    # 创建资源
    kubectl apply -f traefik-crd.yaml
    # 查看crd资源
    kubectl get crd | grep traefik
    

    安装helm

    Helm 是 Kubernetes 的包管理器。使用Helm 能够从Chart repository(Helm应用仓库)快速查找、下载安装软件包并通过与K8s API Server交互构建应用。

    架构图

    image-20210423154448692

    组成

    image-20210423154640414

    • Charts: Helm使用的打包格式,一个Chart包含了一组K8s资源集合的描述文件。Chart有特定的文件目录结构,如果开发者想自定义一个新的 Chart,只需要使用Helm create命令生成一个目录结构即可进行开发。
    • Release: 通过Helm将Chart部署到 K8s集群时创建的特定实例,包含了部署在容器集群内的各种应用资源。
    • Tiller: Helm 2.x版本中,Helm采用Client/Server的设计,Tiller就是Helm的Server部分,需要具备集群管理员权限才能安装到K8s集群中运行。Tiller与Helm client进行交互,接收client的请求,再与K8s API Server通信,根据传递的Charts来生成Release。而在最新的Helm 3.x中,据说是为了安全性考虑移除了Tiller。
    • Chart Repository: Helm Chart包仓库,提供了很多应用的Chart包供用户下载使用,官方仓库的地址是https://hub.helm.sh。

    Helm的任务是在仓库中查找需要的Chart,然后将Chart以Release的形式安装到K8S集群中。

    image-20210423155312025

    下载地址

    https://github.com/helm/helm/releases/tag/v3.5.3
    
    https://get.helm.sh/helm-v3.5.3-linux-amd64.tar.gz
    

    配置

    #  helm
    wget https://get.helm.sh/helm-v3.4.1-linux-amd64.tar.gz
    tar -zxvf helm-v3.4.1-linux-amd64.tar.gz
    cd linux-amd64/
    cp helm /usr/local/bin
    chmod a+x /usr/local/bin/helm
    
    # Install Tiller (Helm 2.x only; helm init was removed in Helm 3, so skip this step for the v3.x binary installed above)
    helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.16.6  --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
    

    With Helm 2.x, check the Tiller pod with kubectl get po -n kube-system; Helm 3 has no Tiller, so simply run helm version to confirm the client works.
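    A quick smoke test for the Helm 3 client (the chart repository below is only an example):

    helm version
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update
    helm search repo nginx
    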

    k8s集群验证

    • Pods must be able to resolve Services

    • Pods must be able to resolve Services in other namespaces

    • Every node must be able to reach the cluster's kubernetes Service (port 443) and the kube-dns Service (port 53); see the check after this list

    • Pod-to-Pod communication must work:

      • within the same namespace
      • across namespaces
      • across nodes
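    A minimal node-side check for the Service reachability item above; it assumes the default Service IPs used in this document (kubernetes = 10.96.0.1, kube-dns = 10.96.0.10) and needs only bash:

    # run on every node; each line prints "ok" when the port is reachable
    timeout 2 bash -c '</dev/tcp/10.96.0.1/443' && echo "kubernetes svc 443 ok"
    timeout 2 bash -c '</dev/tcp/10.96.0.10/53' && echo "kube-dns svc 53 ok"
    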

    创建一个busybox的pod

    cat > busybox.yaml <<EOF
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox
      namespace: default
    spec:
      containers:
      - name: busybox
        image: busybox:1.28
        command:
        - sleep
        - "3600"
        imagePullPolicy: IfNotPresent
      restartPolicy: Always
    EOF
    
    kubectl apply -f busybox.yaml
    

    首先查看po是否安装成功

    kubectl get po
    

    image-20210419131237075

    查看svc是否正常

    kubectl get svc
    

    Check whether Pods can resolve Services

    # check whether the Pod can resolve a Service in the same namespace
    kubectl exec busybox -n default -- nslookup kubernetes
    
    # check whether the Pod can resolve a Service in another namespace
    kubectl exec busybox -n default -- nslookup kube-dns.kube-system
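    # Pod-to-Pod connectivity across namespaces/nodes: look up a real Pod IP first,
    # then ping it from busybox (172.16.85.2 below is a hypothetical example IP)
    kubectl get po -A -owide
    kubectl exec busybox -n default -- ping -c 3 172.16.85.2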
    

    设置污点

    污点(Taint)的组成

    key=value:effect
    

    每个污点有一个key和value作为污点的标签,其中value可以为空,effect描述污点的作用。当前taint effect支持如下三个选项:

    • NoSchedule:表示k8s将不会将Pod调度到具有该污点的Node上
    • PreferNoSchedule:表示k8s将尽量避免将Pod调度到具有该污点的Node上
    • NoExecute:表示k8s将不会将Pod调度到具有该污点的Node上,同时会将Node上已经存在的Pod驱逐出去

    污点设置和去除

    # 设置污点
    kubectl taint nodes node1 key1=value1:NoSchedule
    
    # 去除污点
    kubectl taint nodes node1 key1:NoSchedule-
    

    示例

    The Kubernetes cluster has three master nodes, named k8s-master01, k8s-master02 and k8s-master03. To keep the cluster stable while still making some use of the masters, one of them is tainted node-role.kubernetes.io/master:NoSchedule and the other two node-role.kubernetes.io/master:PreferNoSchedule. This guarantees that one of the three nodes never runs business Pods under any circumstances, while the other two avoid business Pods as long as the cluster has enough spare capacity.

    kubectl taint nodes k8s-master01 node-role.kubernetes.io/master=:NoSchedule
    
    kubectl taint nodes k8s-master02 node-role.kubernetes.io/master=:PreferNoSchedule
    
    kubectl taint nodes k8s-master03 node-role.kubernetes.io/master=:PreferNoSchedule
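    # confirm the taints that were just applied
    kubectl describe node k8s-master01 | grep -i taints
    kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints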
    
    

    问题

    calico-kube-controllers启动失败

    错误信息

    kubectl logs calico-kube-controllers-8599495c57-bnqgp -n kube-system
    # -- 输出日志
    [FATAL][1] main.go 105: Failed to start error=failed to build Calico client: could not initialize etcdv3 client: open /calico-secrets/etcd-cert: permission denied
    
    

    image-20210419124204653

    处理

    参考文档

    image-20210419124710097

    Modify the configuration file
    # in calico-etcd.yaml, change
    defaultMode: 0400  to  defaultMode: 0040
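    # one-liner for the same change (assuming calico-etcd.yaml is in the current directory)
    sed -i 's/defaultMode: 0400/defaultMode: 0040/' calico-etcd.yaml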
    

    image-20210419125232170

    重新应用资源
    kubectl apply -f calico-etcd.yaml
    
    检查确认
    kubectl get po -A -owide
    

    image-20210419125431320

    附录

    签名证书

    基础概念

    CA(Certification Authority)证书,指的是权威机构给我们颁发的证书。

    密钥就是用来加解密用的文件或者字符串。密钥在非对称加密的领域里,指的是私钥和公钥,他们总是成对出现,其主要作用是加密和解密。常用的加密强度是2048bit。

    RSA即非对称加密算法。非对称加密有两个不一样的密码,一个叫私钥,另一个叫公钥,用其中一个加密的数据只能用另一个密码解开,用自己的都解不了,也就是说用公钥加密的数据只能由私钥解开。

    证书的编码格式

    PEM(Privacy Enhanced Mail),通常用于数字证书认证机构(Certificate Authorities,CA),扩展名为.pem, .crt, .cer, 和 .key。内容为Base64编码的ASCII码文件,有类似"-----BEGIN CERTIFICATE-----""-----END CERTIFICATE-----"的头尾标记。服务器认证证书,中级认证证书和私钥都可以储存为PEM格式(认证证书其实就是公钥)。Apache和nginx等类似的服务器使用PEM格式证书。

    DER (Distinguished Encoding Rules) differs from PEM in that it is binary rather than Base64-encoded ASCII. The usual extension is .der, although .cer is also common; all types of certificates and private keys can be stored in DER format. Java platforms are its typical users.

    证书签名请求CSR

    A CSR (Certificate Signing Request) is the request file used when applying to a CA for a digital certificate. Before generating it, prepare an asymmetric key pair: keep the private key yourself; the request carries the public key together with information such as country, city, domain name and e-mail, plus a signature. Once the CSR has been submitted to the CA and signed, you receive a crt file, i.e. the certificate.

    注意:CSR并不是证书。而是向权威证书颁发机构获得签名证书的申请。

    把CSR交给权威证书颁发机构,权威证书颁发机构对此进行签名,完成。保留好CSR,当权威证书颁发机构颁发的证书过期的时候,你还可以用同样的CSR来申请新的证书,key保持不变.

    数字签名

    数字签名就是"非对称加密+摘要算法",其目的不是为了加密,而是用来防止他人篡改数据。

    其核心思想是:比如A要给B发送数据,A先用摘要算法得到数据的指纹,然后用A的私钥加密指纹,加密后的指纹就是A的签名,B收到数据和A的签名后,也用同样的摘要算法计算指纹,然后用A公开的公钥解密签名,比较两个指纹,如果相同,说明数据没有被篡改,确实是A发过来的数据。假设C想改A发给B的数据来欺骗B,因为篡改数据后指纹会变,要想跟A的签名里面的指纹一致,就得改签名,但由于没有A的私钥,所以改不了,如果C用自己的私钥生成一个新的签名,B收到数据后用A的公钥根本就解不开。

    常用的摘要算法有MD5、SHA1、SHA256。

    使用私钥对需要传输的文本的摘要进行加密,得到的密文即被称为该次传输过程的签名。
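    As an illustration of the "asymmetric encryption + digest" idea, the same flow can be reproduced with openssl (the file names below are arbitrary examples):

    # generate a key pair, sign a file's SHA256 digest with the private key, then verify with the public key
    openssl genrsa -out demo.key 2048
    openssl rsa -in demo.key -pubout -out demo.pub
    echo "hello" > data.txt
    openssl dgst -sha256 -sign demo.key -out data.sig data.txt
    openssl dgst -sha256 -verify demo.pub -signature data.sig data.txt   # prints "Verified OK"
    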

    数字证书和公钥

    数字证书则是由证书认证机构(CA)对证书申请者真实身份验证之后,用CA的根证书对申请人的一些基本信息以及申请人的公钥进行签名(相当于加盖发证书机 构的公章)后形成的一个数字文件。实际上,数字证书就是经过CA认证过的公钥,除了公钥,还有其他的信息,比如Email,国家,城市,域名等。

    证书类型分类
    • client certificate: 用于服务端认证客户端(例如etcdctl、etcd proxy、fleetctl、docker 客户端 等等)
    • server certificate: 服务端使用,客户端以此验证服务端身份(例如 docker服务端、kube-apiserver 等等)
    • peer certificate: 双向证书,用于etcd 集群成员间通信
    证书分类
    • 服务器证书 :server cert
    • 客户端证书 : client cert
    • 对等证书 : peer cert (表示既是server cert又是client cert)
    在kubernetes 集群中需要的证书
    • etcd 节点需要标识自己服务的server cert,也需要client cert与etcd集群其他节点交互,当然可以分别指定2个证书,也可以使用一个对等证书
    • master 节点需要标识 apiserver服务的server cert,也需要client cert连接etcd集群,这里也使用一个对等证书
    • kubectl, calico and kube-proxy only need a client cert, so the hosts field in their certificate requests can be left empty
    • kubelet证书比较特殊,(自动生成) 它由node节点TLS BootStrap向apiserver请求,由master节点的controller-manager 自动签发,包含一个client cert 和一个server cert
    工具使用方法
    生成证书和私钥
    # cfssl gencert --help
    	cfssl gencert -- generate a new key and signed certificate
    
    Usage of gencert:
        Generate a new key and cert from CSR:
            cfssl gencert -initca CSRJSON
            cfssl gencert -ca cert -ca-key key [-config config] [-profile profile] [-hostname hostname] CSRJSON
            cfssl gencert -remote remote_host [-config config] [-profile profile] [-label label] [-hostname hostname] CSRJSON
    
        Re-generate a CA cert with the CA key and CSR:
            cfssl gencert -initca -ca-key key CSRJSON
    
        Re-generate a CA cert with the CA key and certificate:
            cfssl gencert -renewca -ca cert -ca-key key
    
    Arguments:
            CSRJSON:    JSON file containing the request, use '-' for reading JSON from stdin
    
    Flags:
      -initca=false: initialise new CA
      -remote="": remote CFSSL server
      -ca="": CA used to sign the new certificate
      -ca-key="": CA private key
      -config="": path to configuration file
      -hostname="": Hostname for the cert, could be a comma-separated hostname list
      -profile="": signing profile to use
      -label="": key label to use in remote CFSSL server
    
    cfssljson

    从cfssl和multirootca程序获取JSON输出,并将证书,密钥,CSR和捆绑写入文件

    # cfssljson --help
    Usage of cfssljson:
      -bare
        	the response from CFSSL is not wrapped in the API standard response
      -f string
        	JSON input (default "-")
      -stdout
        	output the response instead of saving to a file
    
    
    创建证书所需的配置文件
    CA配置文件 (ca-config.json)

    从 模板文件 中生成 ca-config.json 文件

    ## cfssl print-defaults config > ca-config.json
    # cfssl print-defaults config
    {
        "signing": {
            "default": {
                "expiry": "168h"
            },
            "profiles": {
                "www": {
                    "expiry": "8760h",
                    "usages": [
                        "signing",
                        "key encipherment",
                        "server auth"
                    ]
                },
                "client": {
                    "expiry": "8760h",
                    "usages": [
                        "signing",
                        "key encipherment",
                        "client auth"
                    ]
                }
            }
        }
    }
    

    修改默认json文件,适用特定场景的配置

    cat > ca-config.json <<-'EOF'
    {
      "signing": {
        "default": {
          "expiry": "876000h"
        },
        "profiles": {
          "kubernetes": {
            "usages": [
                "signing",
                "key encipherment",
                "server auth",
                "client auth"
            ],
            "expiry": "876000h"
          }
        }
      }
    }
    EOF
    

    上面配置了一个default默认的配置,和一个kubernetes profiles,profiles可以设置多个profile

    字段说明:

    • default: the default signing policy; the validity period here is 876000h (about 100 years)
    • kubernetes: this profile is used to issue certificates for kubernetes and the related verification work
      • signing: the certificate can be used to sign other certificates; the generated ca.pem will have CA=TRUE
      • server auth: the CA can be used to verify certificates presented by servers
      • client auth: the CA can be used to verify certificates presented by clients
    • expiry: the expiry time; if omitted, the value from default applies
    CA证书签名请求 (ca-csr.json)

    用于生成CA证书和私钥(root 证书和私钥)

    从 模板文件 生成 CSR

    ## cfssl print-defaults csr > csr.json
    # cfssl print-defaults csr
    {
        "CN": "example.net",
        "hosts": [
            "example.net",
            "www.example.net"
        ],
        "key": {
            "algo": "ecdsa",
            "size": 256
        },
        "names": [
            {
                "C": "US",
                "L": "CA",
                "ST": "San Francisco"
            }
        ]
    }
    

    将默认csr.json 文件修改成适合指定场景环境

    cat > etcd-ca-csr.json <<-'EOF'
    {
      "CN": "etcd",
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "Guangdong",
          "L": "Guangzhou",
          "O": "etcd",
          "OU": "Etcd Security"
        }
      ],
      "ca": {
        "expiry": "876000h"
      }
    }
    EOF
    

    参数字段说明

    • CN: Common Name. Browsers use this field to check whether a website is legitimate; it is usually the domain name, and it is very important.
    • key:生成证书的算法
    • hosts:表示哪些主机名(域名)或者IP可以使用此csr申请的证书,为空或者""表示所有的都可以使用(上面这个没有hosts字段)
    • names:一些其它的属性
      • C: Country, 国家
      • ST: State,州或者是省份
      • L: Locality Name,地区,城市
      • O: Organization Name,组织名称,公司名称(在k8s中常用于指定Group,进行RBAC绑定)
      • OU: Organization Unit Name,组织单位名称,公司部门
    客户端证书请求文件 (client-crs.json)
    cat > etcd-csr.json <<-'EOF'
    {
      "CN": "etcd",
      "hosts": [
          "127.0.0.1",
          "192.168.10.221",
          "192.168.10.222",
          "192.168.10.223",
          "192.168.10.224",
          "192.168.10.225",
          "192.168.10.226",
          "k8s01",
          "k8s02",
          "k8s03",
          "k8s04",
          "k8s05",
          "k8s06"
      ],
      "key": {
        "algo": "rsa",
        "size": 2048
      },
      "names": [
        {
          "C": "CN",
          "ST": "Guangdong",
          "L": "Guangzhou",
          "O": "etcd",
          "OU": "Etcd Security"
        }
      ]
    }
    EOF
    
    创建证书文件

    Generating the CA produces ca-key.pem (the private key) and ca.pem (the certificate), as well as ca.csr (a certificate signing request) that can be used for cross-signing or re-signing.

    创建签名证书或私钥
    # 生成 CA 证书和私钥
    ## 初始化
    cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca
    ## 使用现有私钥, 重新生成
    cfssl gencert -initca -ca-key etcd-ca-key.pem etcd-ca-csr.json | cfssljson -bare etcd-ca
    
    # 生成私钥证书
    cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
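    # (optional) inspect the issued certificate: validity period, subject and SANs
    openssl x509 -in etcd.pem -noout -dates -subject
    openssl x509 -in etcd.pem -noout -text | grep -A1 "Subject Alternative Name"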
    

    gencert: 生成新的key(密钥)和签名证书

    • -initca:初始化一个新ca
    • -ca:指明ca的证书
    • -ca-key:指明ca的私钥文件
    • -config:指明请求证书的json文件
    • -profile:与-config中的profile对应,是指根据config中的profile段来生成证书的相关信息
    查看证书信息
    查看cert(证书信息)
    cfssl certinfo -cert ca.pem
    
    查看CSR(证书签名请求)信息
    cfssl certinfo -csr etcd-ca.csr
    

    etcd配置方式

    环境变量方式

    cat > /ups/app/etcd/cfg/etcd.conf.sample <<-'EOF'
    #[Member]
    ETCD_NAME="etcd01"
    ETCD_DATA_DIR="/data/etcd/data/"
    ETCD_WAL_DIR="/data/etcd/wal/"
    ETCD_MAX_WALS="5"
    ETCD_LISTEN_PEER_URLS="https://192.168.10.221:2380"
    ETCD_LISTEN_CLIENT_URLS="https://192.168.10.221:2379,http://127.0.0.1:2379"
    
    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.10.221:2380"
    ETCD_ADVERTISE_CLIENT_URLS="https://192.168.10.221:2379"
    ETCD_INITIAL_CLUSTER="etcd01=https://192.168.10.221:2380,etcd02=https://192.168.10.222:2380,etcd03=https://192.168.10.223:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
    ETCD_INITIAL_CLUSTER_STATE="new"
    
    #[Security]
    ETCD_CERT_FILE="/ups/app/etcd/pki/etcd.pem"
    ETCD_KEY_FILE="/ups/app/etcd/pki/etcd-key.pem"
    ETCD_CLIENT_CERT_AUTH="true"
    ETCD_TRUSTED_CA_FILE="/ups/app/etcd/pki/etcd-ca.pem"
    ETCD_AUTO_TLS="true"
    ETCD_PEER_CERT_FILE="/ups/app/etcd/pki/etcd.pem"
    ETCD_PEER_KEY_FILE="/ups/app/etcd/pki/etcd-key.pem"
    ETCD_PEER_CLIENT_CERT_AUTH="true"
    ETCD_PEER_TRUSTED_CA_FILE="/ups/app/etcd/pki/etcd-ca.pem"
    ETCD_PEER_AUTO_TLS="true"
    
    #[Log]
    ETCD_LOGGER="zap"
    ETCD_LOG_OUTPUTS="stderr"
    ETCD_LOG_LEVEL="error"
    EOF
    

    yml 配置文件

    cat > etcd.conf.yml.sample <<-'EOF'
    # This is the configuration file for the etcd server.
    
    # Human-readable name for this member.
    name: 'default'
    
    # Path to the data directory.
    data-dir:
    
    # Path to the dedicated wal directory.
    wal-dir:
    
    # Number of committed transactions to trigger a snapshot to disk.
    snapshot-count: 10000
    
    # Time (in milliseconds) of a heartbeat interval.
    heartbeat-interval: 100
    
    # Time (in milliseconds) for an election to timeout.
    election-timeout: 1000
    
    # Raise alarms when backend size exceeds the given quota. 0 means use the
    # default quota.
    quota-backend-bytes: 0
    
    # List of comma separated URLs to listen on for peer traffic.
    listen-peer-urls: http://localhost:2380
    
    # List of comma separated URLs to listen on for client traffic.
    listen-client-urls: http://localhost:2379
    
    # Maximum number of snapshot files to retain (0 is unlimited).
    max-snapshots: 5
    
    # Maximum number of wal files to retain (0 is unlimited).
    max-wals: 5
    
    # Comma-separated white list of origins for CORS (cross-origin resource sharing).
    cors:
    
    # List of this member's peer URLs to advertise to the rest of the cluster.
    # The URLs needed to be a comma-separated list.
    initial-advertise-peer-urls: http://localhost:2380
    
    # List of this member's client URLs to advertise to the public.
    # The URLs needed to be a comma-separated list.
    advertise-client-urls: http://localhost:2379
    
    # Discovery URL used to bootstrap the cluster.
    discovery:
    
    # Valid values include 'exit', 'proxy'
    discovery-fallback: 'proxy'
    
    # HTTP proxy to use for traffic to discovery service.
    discovery-proxy:
    
    # DNS domain used to bootstrap initial cluster.
    discovery-srv:
    
    # Initial cluster configuration for bootstrapping.
    initial-cluster:
    
    # Initial cluster token for the etcd cluster during bootstrap.
    initial-cluster-token: 'etcd-cluster'
    
    # Initial cluster state ('new' or 'existing').
    initial-cluster-state: 'new'
    
    # Reject reconfiguration requests that would cause quorum loss.
    strict-reconfig-check: false
    
    # Accept etcd V2 client requests
    enable-v2: true
    
    # Enable runtime profiling data via HTTP server
    enable-pprof: true
    
    # Valid values include 'on', 'readonly', 'off'
    proxy: 'off'
    
    # Time (in milliseconds) an endpoint will be held in a failed state.
    proxy-failure-wait: 5000
    
    # Time (in milliseconds) of the endpoints refresh interval.
    proxy-refresh-interval: 30000
    
    # Time (in milliseconds) for a dial to timeout.
    proxy-dial-timeout: 1000
    
    # Time (in milliseconds) for a write to timeout.
    proxy-write-timeout: 5000
    
    # Time (in milliseconds) for a read to timeout.
    proxy-read-timeout: 0
    
    client-transport-security:
      # Path to the client server TLS cert file.
      cert-file:
    
      # Path to the client server TLS key file.
      key-file:
    
      # Enable client cert authentication.
      client-cert-auth: false
    
      # Path to the client server TLS trusted CA cert file.
      trusted-ca-file:
    
      # Client TLS using generated certificates
      auto-tls: false
    
    peer-transport-security:
      # Path to the peer server TLS cert file.
      cert-file:
    
      # Path to the peer server TLS key file.
      key-file:
    
      # Enable peer client cert authentication.
      client-cert-auth: false
    
      # Path to the peer server TLS trusted CA cert file.
      trusted-ca-file:
    
      # Peer TLS using generated certificates.
      auto-tls: false
    
    # Enable debug-level logging for etcd.
    debug: false
    
    logger: zap
    
    # Specify 'stdout' or 'stderr' to skip journald logging even when running under systemd.
    log-outputs: [stderr]
    
    # Force to create a new one member cluster.
    force-new-cluster: false
    
    auto-compaction-mode: periodic
    auto-compaction-retention: "1"
    EOF
    
    ## 过滤空行或注释行 
    grep -Ev "^[ 	]*(#|$)" etcd.conf.yml.sample > etcd.conf.yml
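    # start etcd with the YAML config (the binary path is an example consistent with this document's /ups/app layout)
    /ups/app/etcd/bin/etcd --config-file=/ups/app/etcd/cfg/etcd.conf.yml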
    

    系统内核相关

    系统内核相关参数参考:https://docs.openshift.com/enterprise/3.2/admin_guide/overcommit.html

    3.10.x 内核 kmem bugs 相关的讨论和解决办法:
    https://github.com/kubernetes/kubernetes/issues/61937
    https://support.mesosphere.com/s/article/Critical-Issue-KMEM-MSPH-2018-0006
    https://pingcap.com/blog/try-to-fix-two-linux-kernel-bugs-while-testing-tidb-operator-in-k8s/

    kubelete认证

    1. 关于 controller 权限和 use-service-account-credentials 参数:https://github.com/kubernetes/kubernetes/issues/48208
    2. kubelet 认证和授权:https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-authorization

    kubectl 命令行工具

    kubectl 是 kubernetes 集群的命令行管理工具,它默认从 ~/.kube/config 文件读取 kube-apiserver 地址、证书、用户名等信息。需要 admin 证书权限对集群进行管理。

    证书

    Required certificates

    | Default CN | Parent CA | O (in Subject) | kind | hosts (SAN) |
    | --- | --- | --- | --- | --- |
    | kube-etcd | etcd-ca | | server, client | localhost, 127.0.0.1 |
    | kube-etcd-peer | etcd-ca | | server, client | <hostname>, <host_IP>, localhost, 127.0.0.1 |
    | kube-etcd-healthcheck-client | etcd-ca | | client | |
    | kube-apiserver-etcd-client | etcd-ca | system:masters | client | |
    | kube-apiserver | kubernetes-ca | | server | <hostname>, <host_IP>, <advertise_IP>, [1] |
    | kube-apiserver-kubelet-client | kubernetes-ca | system:masters | client | |
    | front-proxy-client | kubernetes-front-proxy-ca | | client | |

    Certificate paths

    | Default CN | recommended key path | recommended cert path | command | key argument | cert argument |
    | --- | --- | --- | --- | --- | --- |
    | etcd-ca | etcd/ca.key | etcd/ca.crt | kube-apiserver | | --etcd-cafile |
    | kube-apiserver-etcd-client | apiserver-etcd-client.key | apiserver-etcd-client.crt | kube-apiserver | --etcd-keyfile | --etcd-certfile |
    | kubernetes-ca | ca.key | ca.crt | kube-apiserver | | --client-ca-file |
    | kubernetes-ca | ca.key | ca.crt | kube-controller-manager | --cluster-signing-key-file | --client-ca-file, --root-ca-file, --cluster-signing-cert-file |
    | kube-apiserver | apiserver.key | apiserver.crt | kube-apiserver | --tls-private-key-file | --tls-cert-file |
    | kube-apiserver-kubelet-client | apiserver-kubelet-client.key | apiserver-kubelet-client.crt | kube-apiserver | --kubelet-client-key | --kubelet-client-certificate |
    | front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-apiserver | | --requestheader-client-ca-file |
    | front-proxy-ca | front-proxy-ca.key | front-proxy-ca.crt | kube-controller-manager | | --requestheader-client-ca-file |
    | front-proxy-client | front-proxy-client.key | front-proxy-client.crt | kube-apiserver | --proxy-client-key-file | --proxy-client-cert-file |
    | etcd-ca | etcd/ca.key | etcd/ca.crt | etcd | | --trusted-ca-file, --peer-trusted-ca-file |
    | kube-etcd | etcd/server.key | etcd/server.crt | etcd | --key-file | --cert-file |
    | kube-etcd-peer | etcd/peer.key | etcd/peer.crt | etcd | --peer-key-file | --peer-cert-file |
    | etcd-ca | | etcd/ca.crt | etcdctl | | --cacert |
    | kube-etcd-healthcheck-client | etcd/healthcheck-client.key | etcd/healthcheck-client.crt | etcdctl | --key | --cert |

    Service account key pair

    | private key path | public key path | command | argument |
    | --- | --- | --- | --- |
    | sa.key | | kube-controller-manager | --service-account-private-key-file |
    | | sa.pub | kube-apiserver | --service-account-key-file |

    Certificate types used in this document

    | Certificate | Config files | Purpose |
    | --- | --- | --- |
    | ca.pem | ca-csr.json | CA root certificate |
    | kube-proxy.pem | ca-config.json, kube-proxy-csr.json | certificate used by kube-proxy |
    | admin.pem | admin-csr.json, ca-config.json | certificate used by kubectl |
    | kubernetes.pem | ca-config.json, kubernetes-csr.json | certificate used by kube-apiserver |

    Which certificates each component uses

    | Component | Certificates |
    | --- | --- |
    | kube-apiserver | ca.pem, kubernetes-key.pem, kubernetes.pem |
    | kube-controller-manager | ca-key.pem, ca.pem |
    | kubelet | ca.pem |
    | kube-proxy | ca.pem, kube-proxy-key.pem, kube-proxy.pem |
    | kubectl | ca.pem, admin-key.pem, admin.pem |

    etcd证书:

    • peer.pem、peer-key.pem:etcd各节点相互通信的对等证书及私钥(hosts指定所有etcd节点IP)
    • server.pem、server-key.pem:etcd各节点自己的服务器证书及私钥(hosts指定当前etcd节点的IP)
    • client.pem、client-key.pem:命令行客户端访问etcd使用的证书私钥(hosts可以不写或者为空)
    • apiserver-etcd-client.pem、apiserver-etcd-client-key.pem:apiserver访问etcd的证书及私钥;
    • 注:其中peer.pem和server.pem可以使用一个,因为都是服务端证书(hosts指定所有etcd节点IP)

    client.pem和apiserver-etcd-client.pem可以使用一个,因为都是客户端证书(hosts都为空或不写)

    k8s证书:

    • kube-apiserver.pem:kube-apiserver节点使用的证书(每个master生成一个,hosts为当前master的IP)
    • kubelet.pem:kube-apiserver访问kubelet时的客户端证书(每个master一个,hosts为当前master的IP)
    • aggregator-proxy.pem:kube-apiserver使用聚合时,客户端访问代理的证书(hosts为空)
    • admin.pem:kubectl客户端的证书(hosts为空或者不写)

    kubernetes组件启动参数说明

    日志级别(-v)

    • --v=0 : Generally useful for this to ALWAYS be visible to an operator.
    • --v=1 : A reasonable default log level if you don’t want verbosity.
    • --v=2 : Useful steady state information about the service and important log messages that may correlate to significant changes in the system. This is the recommended default log level for most systems.
    • --v=3 : Extended information about changes.
    • --v=4 : Debug level verbosity.
    • --v=6 : Display requested resources.
    • --v=7 : Display HTTP request headers.
    • --v=8 : Display HTTP request contents
  • 原文地址:https://www.cnblogs.com/binliubiao/p/14823221.html