Deploying a k8s Cluster on CentOS 7.6


    Environment:

    Hostname     OS               IP              Docker version   kubelet version
    k8s-master   CentOS 7.6.1810  192.168.20.128  18.09.6          v1.16.9
    19db1        CentOS 7.6.1810  192.168.20.126  18.09.6          v1.16.9
    19db2        CentOS 7.6.1810  192.168.20.127  18.09.6          v1.16.9

    I. Docker Installation

    Docker must be installed on all nodes.

    1. Install dependency packages

    [root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
    

    2. Configure the Docker repository

    [root@k8s-master ~]# yum-config-manager     --add-repo     https://download.docker.com/linux/centos/docker-ce.repo
    Loaded plugins: fastestmirror
    adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
    grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
    repo saved to /etc/yum.repos.d/docker-ce.repo
    

    3. Install Docker CE

    3.1 List available Docker versions

    [root@k8s-master ~]# yum list docker-ce --showduplicates | sort -r
    * updates: mirrors.aliyun.com
    Loading mirror speeds from cached hostfile
    Loaded plugins: fastestmirror
    * extras: mirrors.aliyun.com
    docker-ce.x86_64            3:20.10.5-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:20.10.4-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:20.10.3-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:20.10.2-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:20.10.1-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:20.10.0-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:19.03.9-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:19.03.8-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:19.03.7-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:19.03.6-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:19.03.5-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:19.03.4-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:19.03.3-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:19.03.2-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:19.03.15-3.el7                    docker-ce-stable
    docker-ce.x86_64            3:19.03.14-3.el7                    docker-ce-stable
    docker-ce.x86_64            3:19.03.1-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:19.03.13-3.el7                    docker-ce-stable
    docker-ce.x86_64            3:19.03.12-3.el7                    docker-ce-stable
    docker-ce.x86_64            3:19.03.11-3.el7                    docker-ce-stable
    docker-ce.x86_64            3:19.03.10-3.el7                    docker-ce-stable
    docker-ce.x86_64            3:19.03.0-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:18.09.9-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:18.09.8-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:18.09.7-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:18.09.6-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:18.09.5-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:18.09.4-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:18.09.3-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:18.09.2-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:18.09.1-3.el7                     docker-ce-stable
    docker-ce.x86_64            3:18.09.0-3.el7                     docker-ce-stable
    docker-ce.x86_64            18.06.3.ce-3.el7                    docker-ce-stable
    docker-ce.x86_64            18.06.2.ce-3.el7                    docker-ce-stable
    docker-ce.x86_64            18.06.1.ce-3.el7                    docker-ce-stable
    docker-ce.x86_64            18.06.0.ce-3.el7                    docker-ce-stable
    docker-ce.x86_64            18.03.1.ce-1.el7.centos             docker-ce-stable
    docker-ce.x86_64            18.03.0.ce-1.el7.centos             docker-ce-stable
    .........
    
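    Note that `sort -r` compares lexically, which is why 19.03.15 and 19.03.14 land between 19.03.2 and 19.03.1 in the listing above. If the ordering matters, GNU sort's `-V` flag compares version numbers numerically; a quick sketch:

```shell
# Lexical reverse sort (-r) would put 3:19.03.2 above 3:19.03.15;
# version sort (-V) orders the releases correctly.
printf '3:19.03.2-3.el7\n3:19.03.15-3.el7\n3:19.03.9-3.el7\n' | sort -rV
# → 3:19.03.15-3.el7
#   3:19.03.9-3.el7
#   3:19.03.2-3.el7
```

    The same caveat applies to the kubelet version listing later in this article.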

    3.2 Install Docker

    [root@k8s-master ~]#  yum install docker-ce-18.09.6 docker-ce-cli-18.09.6 containerd.io -y
    

    3.2.1 Uninstall Docker (if needed)

    --1. List the installed Docker packages
    [root@19db1 ~]# yum list installed | grep docker
    containerd.io.x86_64                       1.4.3-3.1.el7               @docker-ce-stable
    docker-ce.x86_64                           3:20.10.4-3.el7             @docker-ce-stable
    docker-ce-cli.x86_64                       1:20.10.4-3.el7             @docker-ce-stable
    docker-ce-rootless-extras.x86_64           20.10.4-3.el7               @docker-ce-stable
    --2. Remove the installed Docker packages
    [root@19db1 ~]# yum remove docker-ce.x86_64 docker-ce-cli.x86_64 docker-ce-rootless-extras.x86_64 containerd.io.x86_64 -y
    --3. Delete leftover files
    [root@19db1 ~]# rm -rf /var/lib/docker
    

    4. Start Docker

    [root@k8s-master ~]# systemctl start docker
    [root@k8s-master ~]# systemctl enable docker
    Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
    

    5. Registry mirror

    Because Docker Hub's servers are outside China, pulling images can be slow, so a registry mirror can be configured. The main options are Docker's official China registry mirror, the Alibaba Cloud accelerator, and the DaoCloud accelerator; this article uses the Alibaba Cloud accelerator as the example.

    5.1 Log in to the Alibaba Cloud Container Registry console

    The login URL is https://cr.console.aliyun.com ; if you have no Alibaba Cloud account yet, register one first.

    5.2 Configure the registry mirror

    Create the daemon.json file:

    [root@k8s-master ~]# mkdir -p /etc/docker
    [root@k8s-master ~]# tee /etc/docker/daemon.json <<-'EOF'
    {
      "registry-mirrors": ["https://4yh6llq4.mirror.aliyuncs.com"]
    }
    EOF
    [root@k8s-master ~]# systemctl daemon-reload
    [root@k8s-master ~]# systemctl restart docker
    [root@k8s-master ~]# systemctl status docker
    

    6. Verify

    [root@k8s-master ~]# docker --version
    Docker version 18.09.6, build 481bc77156
    

    Verify that Docker installed correctly by checking the version and by running the hello-world container.
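    As a small sketch, the version number can also be extracted programmatically, e.g. to check that the pinned 18.09.6 is what actually got installed. The helper name below is made up for the example; against a live daemon you would pipe `docker --version` instead of the echo:

```shell
# Pull the bare version number out of `docker --version` style output:
# field 3 is "18.09.6,", and tr strips the trailing comma.
parse_docker_version() { awk '{print $3}' | tr -d ','; }

echo "Docker version 18.09.6, build 481bc77156" | parse_docker_version
# → 18.09.6
```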

    II. Kubernetes Installation Preparation

    The firewall and SELinux were already disabled and the Aliyun yum repository configured when CentOS was installed. Perform all steps in this part on both the master and the node machines.

    1. Configure hostnames

    1.1 Set the hostname

    [root@k8s-master ~]# hostnamectl set-hostname k8s-master
    [root@k8s-master ~]# more /etc/hostname        
    k8s-master
    

    Log out and log back in for the prompt to show the newly set hostname k8s-master.

    1.2 Update the hosts file

    [root@k8s-master ~]# cat >> /etc/hosts << EOF
    192.168.20.128   k8s-master
    192.168.20.126   19db1
    192.168.20.127   19db2
    EOF
    [root@k8s-master ~]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.20.128   k8s-master
    192.168.20.126   19db1
    192.168.20.127   19db2
    

    2. Verify MAC address and product_uuid

    [root@k8s-master ~]# cat /sys/class/net/ens192/address
    00:50:56:a9:ba:f5
    [root@k8s-master ~]# cat /sys/class/dmi/id/product_uuid
    32412942-7B08-BB9E-4B79-A63CC68DF861
    
    [root@19db1 ~]# cat /sys/class/net/ens192/address
    00:50:56:a9:64:1f
    [root@19db1 ~]# cat /sys/class/dmi/id/product_uuid
    F52B2942-EAEC-0559-3F49-E84A0B544692
    
    [root@19db2 ~]# cat /sys/class/net/ens192/address
    00:50:56:a9:ec:57
    [root@19db2 ~]# cat /sys/class/dmi/id/product_uuid
    7A822942-A765-27CB-C12D-4E1EE5F60639
    

    Make sure the MAC address and product_uuid are unique on every node.

    3. Disable swap

    3.1 Disable temporarily

    [root@k8s-master ~]# swapoff -a
    [root@k8s-master ~]# free -m
                  total        used        free      shared  buff/cache   available
    Mem:          16027         330       14967          11         730       15321
    Swap:             0           0           0
    

    3.2 Disable permanently

    For the change to survive a reboot, also edit /etc/fstab after disabling swap and comment out the swap entry:

    [root@k8s-master ~]# sed -i.bak '/swap/s/^/#/' /etc/fstab
    [root@k8s-master ~]# more /etc/fstab
    
    
    #
    # /etc/fstab
    # Created by anaconda on Mon Sep  9 11:54:13 2019
    #
    # Accessible filesystems, by reference, are maintained under '/dev/disk'
    # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
    #
    /dev/mapper/centos-root /                       xfs     defaults        0 0
    UUID=5e104207-a2fd-4110-b965-8a7259001e22 /boot                   xfs     defaults        0 0
    #/dev/mapper/centos-swap swap                    swap    defaults        0 0
    
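    The sed command above works like this: `-i.bak` edits /etc/fstab in place (keeping a .bak backup), and `/swap/s/^/#/` prepends a `#` to every line matching `swap`. A minimal sketch on a sample line rather than the real file:

```shell
# Comment out a swap entry, demonstrated on a sample fstab line.
echo '/dev/mapper/centos-swap swap swap defaults 0 0' | sed '/swap/s/^/#/'
# → #/dev/mapper/centos-swap swap swap defaults 0 0
```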

    4. Kernel parameter changes

    4.1 Temporary change

    [root@k8s-master ~]# sysctl net.bridge.bridge-nf-call-iptables=1
    net.bridge.bridge-nf-call-iptables = 1
    [root@k8s-master ~]# sysctl net.bridge.bridge-nf-call-ip6tables=1
    net.bridge.bridge-nf-call-ip6tables = 1
    

    4.2 Permanent change

    [root@k8s-master ~]# cat <<EOF >  /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    [root@k8s-master ~]# sysctl -p /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    

    5. Change the cgroup driver

    5.1 Edit daemon.json

    Edit daemon.json and add '"exec-opts": ["native.cgroupdriver=systemd"]':

    [root@k8s-master ~]# more /etc/docker/daemon.json
    {
      "registry-mirrors": ["https://4yh6llq4.mirror.aliyuncs.com"],
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    

    5.2 Reload Docker

    [root@19db1 ~]# systemctl daemon-reload
    [root@19db1 ~]# systemctl restart docker
    

    Changing the cgroup driver eliminates this warning:
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

    6. Configure the Kubernetes repository

    6.1 Add the Kubernetes repository

    Any previously installed Kubernetes packages can be uninstalled first:
    [root@19db2 ~]# yum list installed | grep kube
    kubernetes-client.x86_64              1.5.2-0.7.git269f928.el7        @extras   
    kubernetes-master.x86_64              1.5.2-0.7.git269f928.el7        @extras
    [root@19db2 ~]# yum remove kubernetes-client.x86_64 kubernetes-master.x86_64 -y
    
    [root@k8s-master yum.repos.d]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    

    []            the bracketed text is the repository id; it must be unique and identifies the repository
    name          the repository name (free-form)
    baseurl       the repository URL
    enabled       whether the repository is enabled; 1 (the default) means enabled
    gpgcheck      whether to verify signatures of packages fetched from this repository; 1 means verify
    repo_gpgcheck whether to verify the repository metadata (the package list); 1 means verify
    gpgkey=URL    location of the public key used for signature checking; required when gpgcheck is 1, unnecessary when gpgcheck is 0

    6.2 Refresh the cache

    [root@k8s-master ~]# yum clean all
    [root@k8s-master ~]# yum -y makecache
    

    III. Master Node Installation

    1. Check available versions

    [root@k8s-master ~]# yum list kubelet --showduplicates | sort -r
    * updates: mirrors.ustc.edu.cn
    Loading mirror speeds from cached hostfile
    Loaded plugins: fastestmirror
    ...............
    kubelet.x86_64                       1.16.9-0                         kubernetes        <== selected; changelog: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.16.md
    kubelet.x86_64                       1.16.8-0                         kubernetes
    kubelet.x86_64                       1.16.7-0                         kubernetes
    kubelet.x86_64                       1.16.6-0                         kubernetes
    kubelet.x86_64                       1.16.5-0                         kubernetes
    kubelet.x86_64                       1.16.4-0                         kubernetes
    kubelet.x86_64                       1.16.3-0                         kubernetes
    kubelet.x86_64                       1.16.2-0                         kubernetes
    kubelet.x86_64                       1.16.15-0                        kubernetes
    kubelet.x86_64                       1.16.14-0                        kubernetes
    kubelet.x86_64                       1.16.13-0                        kubernetes
    kubelet.x86_64                       1.16.12-0                        kubernetes
    kubelet.x86_64                       1.16.11-1                        kubernetes
    kubelet.x86_64                       1.16.11-0                        kubernetes
    kubelet.x86_64                       1.16.1-0                         kubernetes
    kubelet.x86_64                       1.16.10-0                        kubernetes
    kubelet.x86_64                       1.16.0-0                         kubernetes
    
    ................
    Version 1.16.9 is selected; it supports Docker versions 1.13.1, 17.03, 17.06, 17.09, 18.06, and 18.09.
    

    2. Install kubelet, kubeadm, and kubectl

    2.1 Install the three packages

    [root@k8s-master ~]# yum install -y kubelet-1.16.9 kubeadm-1.16.9 kubectl-1.16.9
    

    If you run 'yum install -y kubelet kubeadm kubectl' without specifying a version, the latest release (1.20.4 at the time of writing) is installed by default, which may be incompatible with this Docker version.

    2.2 What the packages do

    kubelet    runs on every node in the cluster; starts Pods, containers, and other objects
    kubeadm    the command-line tool that initializes and bootstraps the cluster
    kubectl    the command line for talking to the cluster; deploy and manage applications, inspect resources, and create, delete, and update components

    2.3 Enable kubelet

    Start kubelet and enable it at boot:

    [root@k8s-master ~]# systemctl enable kubelet && systemctl start kubelet
    Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
    

    Note: kubelet does not actually need to be started here; kubeadm init starts it automatically. If you start it now, the following error appears in the log (tail -f /var/log/messages) and can be ignored.

    failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory

    3. Download the images

    3.1 Image download script

    Almost all Kubernetes components and Docker images are hosted on Google's own servers, which may be unreachable directly. The workaround here is to pull the images from the Alibaba Cloud registry, then re-tag them locally with the default image names.

    [root@k8s-master ~]# vi  image.sh
    #!/bin/bash
    url=registry.cn-hangzhou.aliyuncs.com/google_containers
    version=v1.16.9
    images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
    for imagename in ${images[@]} ; do
      docker pull $url/$imagename
      docker tag $url/$imagename k8s.gcr.io/$imagename
      docker rmi -f $url/$imagename
    done
    

    Here url is the Alibaba Cloud registry address and version is the Kubernetes version being installed.
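    The awk step is the part worth understanding: kubeadm config images list prints fully qualified names such as k8s.gcr.io/kube-apiserver:v1.16.9; splitting on '/' and keeping field 2 leaves just the image name and tag, which the script then prefixes with the mirror URL. A sketch:

```shell
# Strip the registry prefix from a fully qualified image name.
echo "k8s.gcr.io/kube-apiserver:v1.16.9" | awk -F '/' '{print $2}'
# → kube-apiserver:v1.16.9
```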

    3.2 Pull the images

    Grant execute permission first, then run image.sh to pull the images for the specified version.

    [root@k8s-master ~]# chmod u+x image.sh
    [root@k8s-master ~]# ./image.sh
    [root@k8s-master ~]# docker images
    REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
    k8s.gcr.io/kube-apiserver            v1.16.9             dd3b6beaa554        10 months ago       160MB
    k8s.gcr.io/kube-proxy                v1.16.9             a197b1cf22e3        10 months ago       82.8MB
    k8s.gcr.io/kube-controller-manager   v1.16.9             b6f6512bb3ba        10 months ago       152MB
    k8s.gcr.io/kube-scheduler            v1.16.9             476ac3ab84e5        10 months ago       83.6MB
    hello-world                          latest              bf756fb1ae65        14 months ago       13.3kB
    k8s.gcr.io/etcd                      3.3.15-0            b2756210eeab        18 months ago       247MB
    k8s.gcr.io/coredns                   1.6.2               bf261d157914        19 months ago       44.1MB
    k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        3 years ago         742kB
    

    4. Initialize the Master

    4.1 Initialization

    apiserver-advertise-address specifies the master's interface and pod-network-cidr specifies the Pod network range; the flannel network add-on is used here.

    [root@k8s-master ~]# kubeadm init --kubernetes-version=v1.16.9 --apiserver-advertise-address 192.168.20.128 --pod-network-cidr=10.244.0.0/16
    [init] Using Kubernetes version: v1.16.9
    ........................
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.20.128:6443 --token 3g8kyh.tkm6udlrdeqd7ri5 \
        --discovery-token-ca-cert-hash sha256:88a0e2accde8a2cc640c3feb8e8516b16e5b2b265d6f3c8f4ffda47ab33f3b93
    

    Record the kubeadm join command from this output; it is needed later to join each node to the cluster.

    4.2 Load environment variables

    [root@k8s-master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
    [root@k8s-master ~]# source .bash_profile
    

    All operations in this article are performed as root; as a non-root user, run the following instead:

    mkdir -p $HOME/.kube
    cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    chown $(id -u):$(id -g) $HOME/.kube/config
    

    5. Install the Pod network

    [root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    podsecuritypolicy.policy/psp.flannel.unprivileged created
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    

    6. Master node configuration (optional)

    Taint: if a node carries a taint, Pods are not allowed to run on it (unless they tolerate the taint).

    6.1 Remove the master's default taint

    By default the cluster does not schedule Pods on the master node. If you do want Pods scheduled on the master, run the following.
    View the taint:

    [root@k8s-master ~]# kubectl describe node k8s-master|grep -i taints
    Taints:             node-role.kubernetes.io/master:NoSchedule
    

    Remove the default taint:

    [root@k8s-master ~]# kubectl taint nodes k8s-master node-role.kubernetes.io/master-
    

    6.2 How taints work

    Syntax:

    kubectl taint node [node] key=value:[effect]
         where [effect] is one of: [ NoSchedule | PreferNoSchedule | NoExecute ]
          NoSchedule: Pods must not be scheduled onto the node
          PreferNoSchedule: avoid scheduling Pods onto the node if possible
          NoExecute: refuses new Pods and also evicts the Pods already running on the node
    

    Apply a taint:

    [root@master ~]# kubectl taint node master key1=value1:NoSchedule
    node/master tainted
    [root@master ~]# kubectl describe node master|grep -i taints
    Taints:             key1=value1:NoSchedule
    

    The key is key1 and the value is value1 (the value may be empty); the effect NoSchedule means Pods must not be scheduled onto the node.
    Remove the taint:

    [root@master ~]# kubectl taint nodes master  key1-     
    node/master untainted
    [root@master ~]# kubectl describe node master|grep -i taints
    Taints:             <none>
    

    The trailing '-' removes every taint whose key is key1, regardless of effect.

    IV. Node Installation

    1. Install kubelet, kubeadm, and kubectl

    Same as on the master node.

    2. Download the images

    Same as on the master node.

    3. Join the cluster

    Run the following on the master.

    3.1 List tokens

    [root@k8s-master ~]# kubeadm token list
    TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
    3g8kyh.tkm6udlrdeqd7ri5   22h       2021-03-10T16:44:29+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
    

    If the token generated during the original kubeadm init has expired, generate a new one as follows.

    3.2 Generate a new token (optional)

    [root@k8s-master ~]# kubeadm token create
    1zl3he.fxgz2pvxa3qkwxln
    

    3.3 Generate a new CA certificate hash (optional)

    [root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | 
       openssl dgst -sha256 -hex | sed 's/^.* //'
    
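    The pipeline above hashes the DER-encoded public key of the cluster CA with SHA-256. As a sketch you can try the identical pipeline against a throwaway self-signed certificate (the file names and subject below are invented for the demo); the result is always a 64-character hex digest:

```shell
# Create a throwaway self-signed cert, then run the same pipeline on it.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" -days 1 2>/dev/null
openssl x509 -pubkey -in "$tmp/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
rm -rf "$tmp"
```

    Alternatively, 'kubeadm token create --print-join-command' on the master prints a ready-to-use join command (new token plus CA cert hash) in one step.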

    3.4 Join the nodes to the cluster

    Run the following on each node:

    [root@19db2 ~]# kubeadm join 192.168.20.128:6443 --token 3g8kyh.tkm6udlrdeqd7ri5 \
    >     --discovery-token-ca-cert-hash sha256:88a0e2accde8a2cc640c3feb8e8516b16e5b2b265d6f3c8f4ffda47ab33f3b93
    [preflight] Running pre-flight checks
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Activating the kubelet service
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    

    3.5 Check the joined nodes from the master

    [root@k8s-master ~]# kubectl get nodes
    NAME         STATUS   ROLES    AGE   VERSION
    19db1        Ready    <none>   45s   v1.16.9
    19db2        Ready    <none>   16h   v1.16.9
    k8s-master   Ready    master   17h   v1.16.9
    

    V. Kuboard Installation (run on the master)

    1. Create the StorageClass and PersistentVolumes

    mkdir -p /data-kuboard1
    mkdir -p /data-kuboard2
    mkdir -p /data-kuboard3
    
    vi kuboard-pv-sc.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: data-kuboard1
    spec:
      storageClassName: data-kuboard
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteMany
      hostPath:
        path: /data-kuboard1
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: data-kuboard2
    spec:
      storageClassName: data-kuboard
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteMany
      hostPath:
        path: /data-kuboard2
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: data-kuboard3
    spec:
      storageClassName: data-kuboard
      capacity:
        storage: 5Gi
      accessModes:
        - ReadWriteMany
      hostPath:
        path: /data-kuboard3
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: data-kuboard
    provisioner: fuseim.pri/ifs
    

    2. Fetch the YAML needed to deploy Kuboard

    curl -o kuboard-v3.yaml https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
    

    3. Set KUBOARD_ENDPOINT and storageClassName

    sed -i "s#KUBOARD_ENDPOINT.*#KUBOARD_ENDPOINT: 'http://192.168.20.128:30080'#g" kuboard-v3.yaml
    sed -i 's#storageClassName.*#storageClassName: data-kuboard#g' kuboard-v3.yaml
    
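    What the first sed does: it matches KUBOARD_ENDPOINT and everything after it on the line, and replaces that with the desired endpoint, keeping the indentation before the match. A sketch on an invented sample line (the actual placeholder value inside kuboard-v3.yaml may differ):

```shell
echo "  KUBOARD_ENDPOINT: 'http://your-node-ip:30080'" \
  | sed "s#KUBOARD_ENDPOINT.*#KUBOARD_ENDPOINT: 'http://192.168.20.128:30080'#g"
# → "  KUBOARD_ENDPOINT: 'http://192.168.20.128:30080'"
```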

    4. Create kuboard-v3

    kubectl apply -f kuboard-pv-sc.yaml
    kubectl apply -f kuboard-v3.yaml
    

    5. Access kuboard-v3 in a browser

    http://192.168.20.128:30080
    Username: admin
    Password: Kuboard123
    

    [screenshot: Kuboard dashboard]
    The dashboard provides cluster management, workloads, service discovery and load balancing, storage, configuration dictionaries, log views, and more.

    VI. Cluster Testing

    1. Deploy applications

    1.1 From the command line

    [root@k8s-master ~]# kubectl run httpd-app --image=httpd --replicas=3
    kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
    deployment.apps/httpd-app created
    

    This deploys the Apache service from the command line.

    1.2 From a manifest file

    [root@k8s-master ~]# cat >> nginx.yml << EOF
    > apiVersion: extensions/v1beta1
    > kind: Deployment
    > metadata:
    >   name: nginx
    > spec:
    >   replicas: 3
    >   template:
    >     metadata:
    >       labels:
    >         app: nginx
    >     spec:
    >       restartPolicy: Always
    >       containers:
    >       - name: nginx
    >         image: nginx:latest
    > EOF
    [root@k8s-master ~]# kubectl apply -f nginx.yml
    error: unable to recognize "nginx.yml": no matches for kind "Deployment" in version "extensions/v1beta1
    

    Because this cluster runs Kubernetes 1.16.9, and Deployment is no longer served from extensions/v1beta1 in that release:
    DaemonSet, Deployment, StatefulSet, and ReplicaSet resources are no longer served from extensions/v1beta1, apps/v1beta1, or apps/v1beta2 as of v1.16.

    [root@k8s-master ~]# cat > nginx.yml << EOF
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          restartPolicy: Always
          containers:
          - name: nginx
            image: nginx:latest
    EOF
    [root@k8s-master ~]# kubectl apply -f nginx.yml
    deployment.apps/nginx created
    

    2. Status checks

    2.1 Node status

    [root@k8s-master ~]# kubectl get nodes
    NAME         STATUS   ROLES    AGE     VERSION
    19db1        Ready    <none>   3h47m   v1.16.9
    19db2        Ready    <none>   20h     v1.16.9
    k8s-master   Ready    master   21h     v1.16.9
    

    2.2 Pod status

    [root@k8s-master ~]# kubectl get pod --all-namespaces
    NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
    default       httpd-app-c77bb8b47-b4fvx            1/1     Running   0          13m      <== the Apache pods just created
    default       httpd-app-c77bb8b47-h4htq            1/1     Running   0          13m      <==
    default       httpd-app-c77bb8b47-ptcrl            1/1     Running   0          13m      <==
    default       nginx-75b7bfdb6b-8msfp               1/1     Running   0          27s      <== the nginx pods just created
    default       nginx-75b7bfdb6b-lk4qc               1/1     Running   0          27s      <==
    default       nginx-75b7bfdb6b-tgbhc               1/1     Running   0          27s      <==
    kube-system   coredns-5644d7b6d9-2pw8z             1/1     Running   1          21h
    kube-system   coredns-5644d7b6d9-zj5b2             1/1     Running   0          21h
    kube-system   etcd-k8s-master                      1/1     Running   0          21h
    kube-system   kube-apiserver-k8s-master            1/1     Running   0          21h
    kube-system   kube-controller-manager-k8s-master   1/1     Running   0          21h
    kube-system   kube-flannel-ds-c774z                1/1     Running   1          20h
    kube-system   kube-flannel-ds-crjjn                1/1     Running   0          20h
    kube-system   kube-flannel-ds-lc9xk                1/1     Running   0          3h54m
    kube-system   kube-proxy-7kq5m                     1/1     Running   0          3h54m
    kube-system   kube-proxy-d57mv                     1/1     Running   1          20h
    kube-system   kube-proxy-xj4rh                     1/1     Running   0          21h
    kube-system   kube-scheduler-k8s-master            1/1     Running   0          21h
    kuboard       kuboard-agent-2-7f87d6864c-64wnc     1/1     Running   0          34m
    kuboard       kuboard-agent-74ff88749c-pk5lh       1/1     Running   0          34m
    kuboard       kuboard-etcd-0                       1/1     Running   0          3h23m
    kuboard       kuboard-etcd-1                       1/1     Running   0          3h23m
    kuboard       kuboard-etcd-2                       1/1     Running   0          3h23m
    kuboard       kuboard-v3-547688df6c-428pt          1/1     Running   0          3h23m
    

    2.3 Check replica counts

    [root@k8s-master ~]# kubectl get deployments
    NAME        READY   UP-TO-DATE   AVAILABLE   AGE
    httpd-app   3/3     3            3           14m
    nginx       3/3     3            3           2m3s
    [root@k8s-master ~]# kubectl get pod -o wide
    NAME                        READY   STATUS    RESTARTS   AGE     IP           NODE    NOMINATED NODE   READINESS GATES
    httpd-app-c77bb8b47-b4fvx   1/1     Running   0          14m     10.244.2.7   19db1   <none>           <none>
    httpd-app-c77bb8b47-h4htq   1/1     Running   0          14m     10.244.1.6   19db2   <none>           <none>
    httpd-app-c77bb8b47-ptcrl   1/1     Running   0          14m     10.244.2.6   19db1   <none>           <none>
    nginx-75b7bfdb6b-8msfp      1/1     Running   0          2m13s   10.244.1.7   19db2   <none>           <none>
    nginx-75b7bfdb6b-lk4qc      1/1     Running   0          2m13s   10.244.2.8   19db1   <none>           <none>
    nginx-75b7bfdb6b-tgbhc      1/1     Running   0          2m13s   10.244.2.9   19db1   <none>           <none>
    

    The three nginx replicas and three httpd replicas are spread evenly across the two worker nodes.

    2.4 Deployment details

    [root@k8s-master ~]#  kubectl describe deployments
    Name:                   httpd-app
    Namespace:              default
    CreationTimestamp:      Wed, 10 Mar 2021 13:50:43 +0800
    Labels:                 run=httpd-app
    Annotations:            deployment.kubernetes.io/revision: 1
    Selector:               run=httpd-app
    Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
    StrategyType:           RollingUpdate
    MinReadySeconds:        0
    RollingUpdateStrategy:  25% max unavailable, 25% max surge
    Pod Template:
      Labels:  run=httpd-app
      Containers:
       httpd-app:
        Image:        httpd
        Port:         <none>
        Host Port:    <none>
        Environment:  <none>
        Mounts:       <none>
      Volumes:        <none>
    Conditions:
      Type           Status  Reason
      ----           ------  ------
      Available      True    MinimumReplicasAvailable
      Progressing    True    NewReplicaSetAvailable
    OldReplicaSets:  <none>
    NewReplicaSet:   httpd-app-c77bb8b47 (3/3 replicas created)
    Events:
      Type    Reason             Age   From                   Message
      ----    ------             ----  ----                   -------
      Normal  ScalingReplicaSet  15m   deployment-controller  Scaled up replica set httpd-app-c77bb8b47 to 3
    
    
    
    
    Name:                   nginx
    Namespace:              default
    CreationTimestamp:      Wed, 10 Mar 2021 14:03:19 +0800
    Labels:                 <none>
    Annotations:            deployment.kubernetes.io/revision: 1
                            kubectl.kubernetes.io/last-applied-configuration:
                              {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx","namespace":"default"},"spec":{"replicas":3,"selec...
    Selector:               app=nginx
    Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
    StrategyType:           RollingUpdate
    MinReadySeconds:        0
    RollingUpdateStrategy:  25% max unavailable, 25% max surge
    Pod Template:
      Labels:  app=nginx
      Containers:
       nginx:
        Image:        nginx:latest
        Port:         <none>
        Host Port:    <none>
        Environment:  <none>
        Mounts:       <none>
      Volumes:        <none>
    Conditions:
      Type           Status  Reason
      ----           ------  ------
      Available      True    MinimumReplicasAvailable
      Progressing    True    NewReplicaSetAvailable
    OldReplicaSets:  <none>
    NewReplicaSet:   nginx-75b7bfdb6b (3/3 replicas created)
    Events:
      Type    Reason             Age    From                   Message
      ----    ------             ----   ----                   -------
      Normal  ScalingReplicaSet  2m45s  deployment-controller  Scaled up replica set nginx-75b7bfdb6b to 3
    

    2.5 Check core component status

    [root@k8s-master ~]# kubectl get cs
    NAME                 AGE
    scheduler            <unknown>
    controller-manager   <unknown>
    etcd-0               <unknown>
    

    The <unknown> values are a known display issue with 'kubectl get cs' in kubectl v1.16. This completes the Kubernetes (v1.16.9) cluster deployment on CentOS 7.6.

    VII. Common Commands

    --Stop k8s
    [root@k8s-master ~]# systemctl stop kubelet
    [root@k8s-master ~]# systemctl stop docker
    --Start k8s
    [root@k8s-master ~]# systemctl start docker
    [root@k8s-master ~]# systemctl start kubelet
    
    --Logs
    /var/log/messages
    
    
    --- 1. kubectl get - list resources
    # list Deployment resources
    kubectl get deployments
    # list Pod resources
    kubectl get pods
    # list Node resources
    kubectl get nodes
    # list Deployments in all namespaces
    kubectl get deployments -A
    kubectl get deployments --all-namespaces
    # list Deployments in the kube-system namespace
    kubectl get deployments -n kube-system
    
    
    
    --- 2. kubectl describe - show detailed information about a resource
    # kubectl describe <resource-type> <resource-name>
    
    # details of the Pod named nginx-XXXXXX
    kubectl describe pod nginx-XXXXXX
    # details of the Deployment named nginx
    kubectl describe deployment nginx
    
    --- 3. kubectl logs - view logs printed by a Pod's containers (similar to docker logs)
    # kubectl logs <pod-name>
    
    # follow the logs of the containers in the Pod named nginx-pod-XXXXXXX
    # (the nginx-pod in this example prints nothing, so the output is empty)
    kubectl logs -f nginx-pod-XXXXXXX
    
    --- 4. kubectl exec - run a command inside a Pod's container (similar to docker exec)
    # kubectl exec <pod-name> <command>
    
    # run bash in the Pod named nginx-pod-xxxxxx
    kubectl exec -it nginx-pod-xxxxxx /bin/bash
    
    --- 5. List resource types
    kubectl api-resources
    
    # list available apiVersions
    kubectl api-versions
    
    --- 6. kubectl explain - inspect API fields
    kubectl explain <resource>
    # kubectl api-resources (above) lists resource names; to see the fields of a resource:
    kubectl explain pod
    # The above lists only top-level fields; fields may nest two or three levels deep.
    # Add --recursive to list every possible field:
    kubectl explain svc --recursive
    # To see which Kubernetes objects live in a namespace and which do not:
    # namespaced
    kubectl api-resources --namespaced=true
    
    # not namespaced
    kubectl api-resources --namespaced=false
    
    --- 7. kubectl config - manage contexts
    # view the full configuration
    kubectl config view
    kubectl config get-contexts   # list all contexts
    # view the current context
    kubectl config current-context
    # create a new context
    kubectl config set-context dev --namespace=development --cluster=kubernetes --user=kubernetes-admin
    # switch contexts
    kubectl config use-context dev
    
    --- 8. Taints
    # view taints
    kubectl describe node k8s-master |grep Taint
    # apply a taint
    kubectl taint node master key1=value1:NoSchedule
    # remove a taint
    kubectl taint nodes k8s-master node-role.kubernetes.io/master-
    
    --- 9. Generate YAML files
    kubectl create deployment web --image=nginx -o yaml --dry-run > my1.yaml
    kubectl get deploy nginx -o yaml  > my2.yaml
    
    --- 10. Upgrade and roll back
    # upgrade the image version
    kubectl set image deployment web nginx=nginx:1.15
    # check the rollout status
    kubectl rollout status deployment web
    # view the rollout history
    kubectl rollout history deployment web
    # roll back to the previous version
    kubectl rollout undo deployment web
    # roll back to a specific revision
    kubectl rollout undo deployment nginx --to-revision=1
    # scale the replicas
    kubectl scale deployment nginx --replicas=5
    

    Author: bicewow

    Source: http://www.cnblogs.com/bicewow/

    The copyright of this article is shared by the author and cnblogs. Reposting is welcome, but this statement must be kept without the author's consent and a link to the original article given prominently on the page; otherwise the right to pursue legal liability is reserved.

Original article: https://www.cnblogs.com/bicewow/p/14577506.html