
    Installing a Multi-Master, Highly Available Kubernetes Cluster with kubeadm

    1. Environment Planning

    1.1 Lab Environment Plan

    K8s cluster role    IP              Hostname      Installed components
    Control node        192.168.40.180  k8s-master1   apiserver, controller-manager, scheduler, etcd, docker, kubelet, kube-proxy, keepalived, nginx, calico
    Control node        192.168.40.181  k8s-master2   apiserver, controller-manager, scheduler, etcd, docker, kubelet, kube-proxy, keepalived, nginx, calico
    Worker node         192.168.40.182  k8s-node1     kubelet, kube-proxy, docker, calico, coredns
    VIP                 192.168.40.199

    Lab environment:

    • OS: CentOS 7.6
    • Specs: 4 GiB RAM / 4 vCPU / 100 GB disk
    • Network: VMware NAT mode

    K8s network plan:

    • k8s version: v1.20.6

    • Pod CIDR: 10.244.0.0/16

    • Service CIDR: 10.10.0.0/16

    1.2 Node Initialization

    1) Configure a static IP address

    # Give the VM or physical machine a static IP so the address does not change after a reboot. The example below sets the static IP on the master1 host.
    ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
    TYPE=Ethernet
    BOOTPROTO=none
    NAME=eth0
    DEVICE=eth0
    ONBOOT=yes
    IPADDR=192.168.40.180    # adjust per the environment plan
    NETMASK=255.255.255.0
    GATEWAY=192.168.40.2
    DNS1=223.5.5.5
    
    # Restart the network
    ~]# systemctl restart network
    
    # Test network connectivity
    ~]# ping baidu.com
    PING baidu.com (39.156.69.79) 56(84) bytes of data.
    64 bytes from 39.156.69.79 (39.156.69.79): icmp_seq=1 ttl=128 time=63.2 ms
    64 bytes from 39.156.69.79 (39.156.69.79): icmp_seq=2 ttl=128 time=47.3 ms
    

    2) Set the hostname

    ~]# hostnamectl set-hostname <hostname> && bash
    

    3) Configure the hosts file

    # On all machines
    cat >> /etc/hosts << EOF 
    192.168.40.180 k8s-master1
    192.168.40.181 k8s-master2
    192.168.40.182 k8s-node1 
    EOF
    
    # Test
    ~]# ping k8s-master1
    PING k8s-master1 (192.168.40.180) 56(84) bytes of data.
    64 bytes from k8s-master1 (192.168.40.180): icmp_seq=1 ttl=64 time=0.015 ms
    64 bytes from k8s-master1 (192.168.40.180): icmp_seq=2 ttl=64 time=0.047 ms
    

    4) Configure passwordless SSH between hosts

    # Generate an SSH key pair; press Enter through the prompts and do not set a passphrase
    ssh-keygen -t rsa
    
    # Install the local SSH public key into the corresponding account on each remote host
    ssh-copy-id -i .ssh/id_rsa.pub k8s-master1
    ssh-copy-id -i .ssh/id_rsa.pub k8s-master2
    ssh-copy-id -i .ssh/id_rsa.pub k8s-node1
    

    5) Stop and disable firewalld

    systemctl stop firewalld && systemctl disable firewalld
    

    6) Disable SELinux

    # Disable temporarily
    setenforce 0
    # Disable permanently
    sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
    # Check
    getenforce
    

    7) Disable the swap partition

    # Disable temporarily
    swapoff -a
    # Disable permanently: comment out the swap mount in /etc/fstab
    sed -ri 's/.*swap.*/#&/' /etc/fstab
    # Note: on a cloned VM, also remove the UUID line
    

    Question 1: Why disable the swap partition?

    Swap is the swap partition: when a machine runs low on memory it spills over to swap, but swap performance is poor, so Kubernetes is designed not to allow it by default. kubeadm checks during initialization whether swap is off, and fails if it is not. If you really want to keep swap enabled, you can pass --ignore-preflight-errors=Swap when installing k8s, as sketched below.
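
    For illustration only (this guide disables swap instead): a hedged sketch of skipping kubeadm's swap preflight check, assuming the kubeadm-config.yaml created later in section 3.3.

    # Sketch: tell kubeadm to ignore the swap preflight check.
    # Note: the kubelet itself also refuses to run with swap enabled unless it is
    # started with --fail-swap-on=false (or failSwapOn: false in its config file).
    kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=Swap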

    8) Adjust kernel parameters

    # 1. Load the br_netfilter module
    modprobe br_netfilter
    
    # 2. Verify the module loaded successfully
    lsmod |grep br_netfilter
    
    # 3. Set the kernel parameters
    cat > /etc/sysctl.d/k8s.conf <<EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    
    # 4. Apply the new kernel parameters
    sysctl -p /etc/sysctl.d/k8s.conf
    

    9) Configure the Aliyun yum repo

    # Back up the existing repo file
    mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
    
    # Download the new CentOS-Base.repo
    wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
    
    # Rebuild the yum cache
    yum clean all && yum makecache
    

    10) Configure time synchronization

    # Install the ntpdate command
    yum install ntpdate -y
    
    # Sync with a public NTP source
    ntpdate cn.pool.ntp.org
    
    # Turn the time sync into a cron job
    crontab -e
    * */1 * * * /usr/sbin/ntpdate   cn.pool.ntp.org
    
    # Restart the crond service
    service crond restart
    

    11) Install iptables

    # Install iptables
    yum install iptables-services -y
    
    # Stop and disable iptables
    service iptables stop && systemctl disable iptables
    
    # Flush all firewall rules
    iptables -F
    

    12) Enable IPVS

    Without IPVS, kube-proxy falls back to iptables for packet forwarding, which is less efficient, so the official documentation recommends enabling IPVS.

    # Create the ipvs.modules file
    ~]# vim /etc/sysconfig/modules/ipvs.modules
    #!/bin/bash
    ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
    for kernel_module in ${ipvs_modules}; do
      # Only load modules that actually exist on this kernel
      /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
      if [ $? -eq 0 ]; then
        /sbin/modprobe ${kernel_module}
      fi
    done
    
    # Run the script and confirm the modules are loaded
    ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
    ip_vs_ftp              13079  0 
    nf_nat                 26787  1 ip_vs_ftp
    ip_vs_sed              12519  0 
    ip_vs_nq               12516  0 
    ip_vs_sh               12688  0 
    ip_vs_dh               12688  0 
    ip_vs_lblcr            12922  0 
    ip_vs_lblc             12819  0 
    ip_vs_wrr              12697  0 
    ip_vs_rr               12600  0 
    ip_vs_wlc              12519  0 
    ip_vs_lc               12516  0 
    ip_vs                 141092  22 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wlc,ip_vs_wrr,ip_vs_lblcr,ip_vs_lblc
    nf_conntrack          133387  2 ip_vs,nf_nat
    libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
    

    Notes:

    # What is IPVS?
    	IPVS (IP Virtual Server) implements transport-layer load balancing, often called layer-4 LAN switching, as part of the Linux kernel. Running on a host, IPVS acts as a load balancer in front of a cluster of real servers: it forwards TCP- and UDP-based service requests to the real servers and presents their services as a single virtual service on one IP address.
    
    # IPVS vs. iptables
    	kube-proxy supports both iptables and ipvs modes. The ipvs mode was introduced in Kubernetes v1.8, reached beta in v1.9, and became generally available in v1.11; iptables support was added back in v1.1 and has been kube-proxy's default mode since v1.2. Both ipvs and iptables are built on netfilter, but ipvs uses hash tables, so once the number of Services grows large its hash lookups give it a clear speed advantage and better Service performance. The main differences between the two modes:
    1. ipvs provides better scalability and performance for large clusters
    2. ipvs supports more sophisticated load-balancing algorithms than iptables (least load, least connections, weighted, and so on)
    3. ipvs supports server health checks, connection retries, and similar features

    13) Install base packages

    ~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel  python-devel epel-release openssh-server socat  ipvsadm conntrack ntpdate telnet rsync
    

    14) Install docker-ce

    ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    ~]# yum install docker-ce docker-ce-cli containerd.io -y
    ~]# systemctl start docker && systemctl enable docker.service && systemctl status docker
    

    15) Configure Docker registry mirrors

    # Note: change Docker's cgroup driver to systemd (the default is cgroupfs). The kubelet uses systemd by default, and the two must match.
    ~]# tee /etc/docker/daemon.json << 'EOF'
    {
     "registry-mirrors":["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com", "https://rncxm540.mirror.aliyuncs.com"],
      "exec-opts": ["native.cgroupdriver=systemd"]
    } 
    EOF
    
    ~]# systemctl daemon-reload && systemctl restart docker && systemctl status docker
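
    As an optional sanity check, you can verify that Docker now reports the systemd cgroup driver:

    ~]# docker info 2>/dev/null | grep -i 'cgroup driver'
     Cgroup Driver: systemd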
    

    2. Deploying nginx and keepalived

    1) Install nginx and keepalived

    # Install nginx as an active/standby pair on k8s-master1 and k8s-master2
    [root@k8s-master1 ~]#  yum install nginx keepalived -y
    [root@k8s-master2 ~]#  yum install nginx keepalived -y
    
    # Note: the nginx-mod-stream module is also required (install it on both masters); without it nginx fails with: nginx: [emerg] unknown directive "stream" in /etc/nginx/nginx.conf:13
    [root@k8s-master1 ~]# yum install nginx-mod-stream -y
    [root@k8s-master1 ~]# nginx -v
    nginx version: nginx/1.20.1
    

    2) Edit the nginx configuration (identical on active and standby)

    [root@k8s-master1 ~]# cat /etc/nginx/nginx.conf
    user nginx;
    worker_processes auto;
    error_log /var/log/nginx/error.log;
    pid /run/nginx.pid;
    
    include /usr/share/nginx/modules/*.conf;
    
    events {
        worker_connections 1024;
    }
    
    # Layer-4 load balancing for the apiservers on the two masters
    stream {
    
        log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    
        access_log  /var/log/nginx/k8s-access.log  main;
    
        upstream k8s-apiserver {
           server 192.168.40.180:6443;   # k8s-master1 APISERVER IP:PORT
           server 192.168.40.181:6443;   # k8s-master2 APISERVER IP:PORT
        }
        
        server {
       listen 16443; # nginx shares the host with the apiserver, so it cannot listen on 6443 or the ports would clash
           proxy_pass k8s-apiserver;
        }
    }
    
    http {
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        sendfile            on;
        tcp_nopush          on;
        tcp_nodelay         on;
        keepalive_timeout   65;
        types_hash_max_size 2048;
    
        include             /etc/nginx/mime.types;
        default_type        application/octet-stream;
    
        server {
            listen       80 default_server;
            server_name  _;
    
            location / {
            }
        }
    }
    
    [root@k8s-master2 ~]# cat /etc/nginx/nginx.conf
    user nginx;
    worker_processes auto;
    error_log /var/log/nginx/error.log;
    pid /run/nginx.pid;
    
    include /usr/share/nginx/modules/*.conf;
    
    events {
        worker_connections 1024;
    }
    
    # Layer-4 load balancing for the apiservers on the two masters
    stream {
    
        log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    
        access_log  /var/log/nginx/k8s-access.log  main;
    
        upstream k8s-apiserver {
           server 192.168.40.180:6443;   # k8s-master1 APISERVER IP:PORT
           server 192.168.40.181:6443;   # k8s-master2 APISERVER IP:PORT
    
        }
        
        server {
       listen 16443; # nginx shares the host with the apiserver, so it cannot listen on 6443 or the ports would clash
           proxy_pass k8s-apiserver;
        }
    }
    
    http {
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        sendfile            on;
        tcp_nopush          on;
        tcp_nodelay         on;
        keepalive_timeout   65;
        types_hash_max_size 2048;
    
        include             /etc/nginx/mime.types;
        default_type        application/octet-stream;
    
        server {
            listen       80 default_server;
            server_name  _;
    
            location / {
            }
        }
    }
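
    Before starting the services, it is worth validating the edited configuration on both masters (optional check):

    [root@k8s-master1 ~]# nginx -t
    nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
    nginx: configuration file /etc/nginx/nginx.conf test is successful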
    

    3) Configure keepalived

    # Configuration on the active (master) node
    [root@k8s-master1 ~]# cat /etc/keepalived/keepalived.conf 
    global_defs { 
       notification_email { 
         acassen@firewall.loc 
         failover@firewall.loc 
         sysadmin@firewall.loc 
       } 
       notification_email_from Alexandre.Cassen@firewall.loc  
       smtp_server 127.0.0.1 
       smtp_connect_timeout 30 
       router_id NGINX_MASTER
    } 
    
    vrrp_script check_nginx {
        script "/etc/keepalived/check_nginx.sh"
    }
    
    vrrp_instance VI_1 { 
        state MASTER 
        interface eth0  # change to the actual NIC name
        virtual_router_id 51 # VRRP router ID; must be unique per VRRP instance
        priority 100    # priority; set this to 90 on the backup server
        advert_int 1    # VRRP advertisement (heartbeat) interval, default 1 second
        authentication { 
            auth_type PASS      
            auth_pass 1111 
        }  
        # Virtual IP
        virtual_ipaddress { 
            192.168.40.199/24
        } 
        track_script {
            check_nginx
        } 
    }
    
    # Health-check script on the master node
    [root@k8s-master1 ~]# cat /etc/keepalived/check_nginx.sh 
    #!/bin/bash
    count=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$")
    if [ "$count" -eq 0 ];then
        systemctl stop keepalived
    fi
    
    [root@k8s-master1 ~]# chmod +x  /etc/keepalived/check_nginx.sh
    
    
    # Configuration on the backup node
    [root@k8s-master2 ~]# cat /etc/keepalived/keepalived.conf 
    global_defs { 
       notification_email { 
         acassen@firewall.loc 
         failover@firewall.loc 
         sysadmin@firewall.loc 
       } 
       notification_email_from Alexandre.Cassen@firewall.loc  
       smtp_server 127.0.0.1 
       smtp_connect_timeout 30 
       router_id NGINX_BACKUP
    } 
    
    vrrp_script check_nginx {
        script "/etc/keepalived/check_nginx.sh"
    }
    
    vrrp_instance VI_1 { 
        state BACKUP 
        interface eth0
        virtual_router_id 51 # VRRP router ID; must be unique per VRRP instance
        priority 90
        advert_int 1
        authentication { 
            auth_type PASS      
            auth_pass 1111 
        }  
        virtual_ipaddress { 
            192.168.40.199/24
        } 
        track_script {
            check_nginx
        } 
    }
    
    
    [root@k8s-master2 ~]# cat /etc/keepalived/check_nginx.sh 
    #!/bin/bash
    count=$(ps -ef |grep nginx | grep sbin | egrep -cv "grep|$$")
    if [ "$count" -eq 0 ];then
        systemctl stop keepalived
    fi
    [root@k8s-master2 ~]# chmod +x /etc/keepalived/check_nginx.sh
    # Note: keepalived decides whether to fail over based on the script's return code (0 = working normally, non-zero = faulty). A quick manual test of the script is sketched below.
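    
    A quick way to exercise the check script by hand (optional; with nginx running it should exit 0 and leave keepalived untouched):

    # Run the check manually and print its exit status
    [root@k8s-master1 ~]# bash /etc/keepalived/check_nginx.sh; echo $?
    0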
    

    4) Start the services

    [root@k8s-master1 ~]# systemctl daemon-reload && systemctl start nginx keepalived && systemctl enable nginx keepalived
    [root@k8s-master2 ~]# systemctl daemon-reload && systemctl start nginx keepalived && systemctl enable nginx keepalived
    

    5) Verify the VIP is bound

    [root@k8s-master1 ~]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:0c:29:52:bf:68 brd ff:ff:ff:ff:ff:ff
        inet 192.168.40.180/24 brd 192.168.40.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet 192.168.40.199/24 scope global secondary eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::20c:29ff:fe52:bf68/64 scope link 
           valid_lft forever preferred_lft forever
    3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
        link/ether 02:42:fc:92:c8:72 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
           
           
    [root@k8s-master2 ~]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:0c:29:f1:81:61 brd ff:ff:ff:ff:ff:ff
        inet 192.168.40.181/24 brd 192.168.40.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::20c:29ff:fef1:8161/64 scope link 
           valid_lft forever preferred_lft forever
    3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
        link/ether 02:42:c6:90:ba:4c brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
    

    6) Test keepalived failover

    # Stop nginx on k8s-master1; the VIP should float over to k8s-master2
    [root@k8s-master1 ~]# systemctl stop nginx
    [root@k8s-master1 ~]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:0c:29:52:bf:68 brd ff:ff:ff:ff:ff:ff
        inet 192.168.40.180/24 brd 192.168.40.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::20c:29ff:fe52:bf68/64 scope link 
           valid_lft forever preferred_lft forever
    3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
        link/ether 02:42:fc:92:c8:72 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
           
    [root@k8s-master2 ~]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:0c:29:f1:81:61 brd ff:ff:ff:ff:ff:ff
        inet 192.168.40.181/24 brd 192.168.40.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet 192.168.40.199/24 scope global secondary eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::20c:29ff:fef1:8161/64 scope link 
           valid_lft forever preferred_lft forever
    3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
        link/ether 02:42:c6:90:ba:4c brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
           
    # Restart nginx and keepalived on master1; the VIP floats back
    [root@k8s-master1 ~]# systemctl start nginx
    [root@k8s-master1 ~]# systemctl start keepalived
    [root@k8s-master1 ~]# ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host 
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
        link/ether 00:0c:29:52:bf:68 brd ff:ff:ff:ff:ff:ff
        inet 192.168.40.180/24 brd 192.168.40.255 scope global eth0
           valid_lft forever preferred_lft forever
        inet 192.168.40.199/24 scope global secondary eth0
           valid_lft forever preferred_lft forever
        inet6 fe80::20c:29ff:fe52:bf68/64 scope link 
           valid_lft forever preferred_lft forever
    3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN 
        link/ether 02:42:fc:92:c8:72 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever
    

    7) Check the listening port

    [root@k8s-master1 ~]# netstat -lntp|grep 16443
    tcp        0      0 0.0.0.0:16443           0.0.0.0:*               LISTEN      22461/nginx: master
    [root@k8s-master2 ~]# netstat -lntp|grep 16443
    tcp        0      0 0.0.0.0:16443           0.0.0.0:*               LISTEN      22461/nginx: master
    

    3. Deploying the Cluster with kubeadm

    3.1 Configure the Kubernetes yum repo

    [root@k8s-master1 ~]# vim  /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=0
    
    # Copy the Kubernetes repo file from k8s-master1 to k8s-master2 and k8s-node1
    [root@k8s-master1 ~]# scp /etc/yum.repos.d/kubernetes.repo k8s-master2:/etc/yum.repos.d/
    [root@k8s-master1 ~]# scp /etc/yum.repos.d/kubernetes.repo k8s-node1:/etc/yum.repos.d/
    

    3.2 Install kubeadm, kubelet, and kubectl

    # Note: kubelet will not show a running status yet; that is expected and can be ignored, since it becomes healthy once the k8s control-plane components come up
    [root@k8s-master1 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
    [root@k8s-master1 ~]# systemctl enable kubelet && systemctl start kubelet
    [root@k8s-master1 ~]# systemctl status kubelet
    
    [root@k8s-master2 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
    [root@k8s-master2 ~]# systemctl enable kubelet && systemctl start kubelet
    [root@k8s-master2 ~]# systemctl status kubelet
    
    [root@k8s-node1 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
    [root@k8s-node1 ~]# systemctl enable kubelet && systemctl start kubelet
    [root@k8s-node1 ~]# systemctl status kubelet
    

    3.3 Initialize the k8s cluster with kubeadm

    1) Create the kubeadm-config.yaml file

    [root@k8s-master1 ~]# vim kubeadm-config.yaml 
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: v1.20.6
    controlPlaneEndpoint: 192.168.40.199:16443
    imageRepository: registry.aliyuncs.com/google_containers
    apiServer:
     certSANs:
     - 192.168.40.180
     - 192.168.40.181
     - 192.168.40.182
     - 192.168.40.199
    networking:
      podSubnet: 10.244.0.0/16
      serviceSubnet: 10.10.0.0/16
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind:  KubeProxyConfiguration
    mode: ipvs
    

    Note on imageRepository: registry.aliyuncs.com/google_containers (equivalent to --image-repository on the command line): kubeadm pulls its images from k8s.gcr.io by default, but k8s.gcr.io is not reachable from this environment, so the images are pulled from the registry.aliyuncs.com/google_containers mirror instead. The images can also be pre-pulled, as sketched below.
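
    An optional step: pre-pull the control-plane images before running init so the init itself goes faster, using the same kubeadm-config.yaml:

    [root@k8s-master1 ~]# kubeadm config images pull --config kubeadm-config.yaml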

    2) Initialize the k8s cluster with kubeadm

    [root@k8s-master1 ~]# kubeadm init --config kubeadm-config.yaml
    [init] Using Kubernetes version: v1.20.6
    [preflight] Running pre-flight checks
    	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [k8s-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.10.0.1 192.168.40.180 192.168.40.199 192.168.40.181 192.168.40.182]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [k8s-master1 localhost] and IPs [192.168.40.180 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [k8s-master1 localhost] and IPs [192.168.40.180 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [kubelet-check] Initial timeout of 40s passed.
    [apiclient] All control plane components are healthy after 113.537013 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node k8s-master1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
    [mark-control-plane] Marking the node k8s-master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: d8j1ts.o62xh6zi98031f5l
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of control-plane nodes by copying certificate authorities
    and service account keys on each node and then running the following as root:
    
      kubeadm join 192.168.40.199:16443 --token d8j1ts.o62xh6zi98031f5l 
        --discovery-token-ca-cert-hash sha256:1fa4f67a0e1ee0c3277f10929df562b0fa621cc44ffb16ff8fadb52c667f0a1b 
        --control-plane 
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.40.199:16443 --token d8j1ts.o62xh6zi98031f5l 
        --discovery-token-ca-cert-hash sha256:1fa4f67a0e1ee0c3277f10929df562b0fa621cc44ffb16ff8fadb52c667f0a1b
    

    3) Configure the kubectl kubeconfig

    [root@k8s-master1 ~]# mkdir -p $HOME/.kube
    [root@k8s-master1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    [root@k8s-master1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
    [root@k8s-master1 ~]# kubectl get nodes
    NAME          STATUS     ROLES                  AGE     VERSION
    k8s-master1   NotReady   control-plane,master   2m27s   v1.20.6
    # The node is still NotReady at this point because no network plugin has been installed.
    

    3.4 Scaling the cluster: adding a master node

    1) Create the certificate directories

    [root@k8s-master2 ~]# cd /root && mkdir -p /etc/kubernetes/pki/etcd && mkdir -p ~/.kube/
    

    2) Copy the certificates

    [root@k8s-master1 ~]# scp /etc/kubernetes/pki/ca.crt k8s-master2:/etc/kubernetes/pki/
    [root@k8s-master1 ~]# scp /etc/kubernetes/pki/ca.key k8s-master2:/etc/kubernetes/pki/
    
    [root@k8s-master1 ~]# scp /etc/kubernetes/pki/sa.key k8s-master2:/etc/kubernetes/pki/
    [root@k8s-master1 ~]# scp /etc/kubernetes/pki/sa.pub k8s-master2:/etc/kubernetes/pki/
    
    [root@k8s-master1 ~]# scp /etc/kubernetes/pki/front-proxy-ca.crt k8s-master2:/etc/kubernetes/pki/
    [root@k8s-master1 ~]# scp /etc/kubernetes/pki/front-proxy-ca.key k8s-master2:/etc/kubernetes/pki/
    
    [root@k8s-master1 ~]# scp /etc/kubernetes/pki/etcd/ca.crt k8s-master2:/etc/kubernetes/pki/etcd/
    [root@k8s-master1 ~]# scp /etc/kubernetes/pki/etcd/ca.key k8s-master2:/etc/kubernetes/pki/etcd/
    

    3) Join the new control-plane node to the cluster

    # Print the join command; note that a control-plane node must append --control-plane when joining
    [root@k8s-master1 ~]# kubeadm token create --print-join-command
    kubeadm join 192.168.40.199:16443 --token qh0gw4.2brd8ioh2hyscd1a     --discovery-token-ca-cert-hash sha256:1fa4f67a0e1ee0c3277f10929df562b0fa621cc44ffb16ff8fadb52c667f0a1b
    
    # Join the new node
    [root@k8s-master2 ~]# kubeadm join 192.168.40.199:16443 --token qh0gw4.2brd8ioh2hyscd1a     --discovery-token-ca-cert-hash sha256:1fa4f67a0e1ee0c3277f10929df562b0fa621cc44ffb16ff8fadb52c667f0a1b --control-plane
    [preflight] Running pre-flight checks
    	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    [preflight] Running pre-flight checks before initializing the new control plane instance
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [k8s-master2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.10.0.1 192.168.40.181 192.168.40.199 192.168.40.180 192.168.40.182]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [k8s-master2 localhost] and IPs [192.168.40.181 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [k8s-master2 localhost] and IPs [192.168.40.181 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
    [certs] Using the existing "sa" key
    [kubeconfig] Generating kubeconfig files
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [check-etcd] Checking that the etcd cluster is healthy
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    [etcd] Announced new etcd member joining to the existing etcd cluster
    [etcd] Creating static Pod manifest for "etcd"
    [etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [mark-control-plane] Marking the node k8s-master2 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
    [mark-control-plane] Marking the node k8s-master2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    
    This node has joined the cluster and a new control plane instance was created:
    
    * Certificate signing request was sent to apiserver and approval was received.
    * The Kubelet was informed of the new secure connection details.
    * Control plane (master) label and taint were applied to the new node.
    * The Kubernetes control plane instances scaled up.
    * A new etcd member was added to the local/stacked etcd cluster.
    
    To start administering your cluster from this node, you need to run the following as a regular user:
    
    	mkdir -p $HOME/.kube
    	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    	sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Run 'kubectl get nodes' to see this node join the cluster.
    

    4) Create the kubeconfig

    [root@k8s-master2 ~]# mkdir -p $HOME/.kube
    [root@k8s-master2 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    [root@k8s-master2 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    5) Check the cluster state

    [root@k8s-master1 ~]# kubectl get nodes
    NAME          STATUS     ROLES                  AGE     VERSION
    k8s-master1   NotReady   control-plane,master   24m     v1.20.6
    k8s-master2   NotReady   control-plane,master   3m41s   v1.20.6
    

    3.5 Scaling the cluster: adding a worker node

    1) Join the cluster

    # Print the join command
    [root@k8s-master1 ~]# kubeadm token create --print-join-command
    kubeadm join 192.168.40.199:16443 --token ay4uyg.1x09kgx6ihjii29c     --discovery-token-ca-cert-hash sha256:1fa4f67a0e1ee0c3277f10929df562b0fa621cc44ffb16ff8fadb52c667f0a1b
    
    # Join the new worker node to the cluster
    [root@k8s-node1 ~]# kubeadm join 192.168.40.199:16443 --token ay4uyg.1x09kgx6ihjii29c     --discovery-token-ca-cert-hash sha256:1fa4f67a0e1ee0c3277f10929df562b0fa621cc44ffb16ff8fadb52c667f0a1b
    [preflight] Running pre-flight checks
    	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    

    2) Check node status and apply a worker label

    [root@k8s-master1 ~]# kubectl get nodes
    NAME          STATUS     ROLES                  AGE   VERSION
    k8s-master1   NotReady   control-plane,master   36m   v1.20.6
    k8s-master2   NotReady   control-plane,master   16m   v1.20.6
    k8s-node1     NotReady   <none>                 93s   v1.20.6
    [root@k8s-master1 ~]# kubectl label node k8s-node1 node-role.kubernetes.io/worker=worker
    node/k8s-node1 labeled
    [root@k8s-master1 ~]# kubectl get nodes
    NAME          STATUS     ROLES                  AGE    VERSION
    k8s-master1   NotReady   control-plane,master   36m    v1.20.6
    k8s-master2   NotReady   control-plane,master   16m    v1.20.6
    k8s-node1     NotReady   worker                 107s   v1.20.6
    
    # All nodes above are NotReady, which means no network plugin has been installed yet
    

    3) Check pod status

    [root@k8s-master1 ~]# kubectl get pods -n kube-system
    NAME                                  READY   STATUS    RESTARTS   AGE
    coredns-7f89b7bc75-5d8vn              0/1     Pending   0          37m   # network plugin not installed yet
    coredns-7f89b7bc75-xvkth              0/1     Pending   0          37m
    etcd-k8s-master1                      1/1     Running   0          37m
    etcd-k8s-master2                      1/1     Running   0          17m
    kube-apiserver-k8s-master1            1/1     Running   1          37m
    kube-apiserver-k8s-master2            1/1     Running   0          17m
    kube-controller-manager-k8s-master1   1/1     Running   1          37m
    kube-controller-manager-k8s-master2   1/1     Running   0          17m
    kube-proxy-4r7kf                      1/1     Running   0          37m
    kube-proxy-6mwh6                      1/1     Running   0          17m
    kube-proxy-qsbp5                      1/1     Running   0          2m44s
    kube-scheduler-k8s-master1            1/1     Running   1          37m
    kube-scheduler-k8s-master2            1/1     Running   0          17m
    

    3.6 Install Calico

    Manifest download: https://docs.projectcalico.org/manifests/calico.yaml
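
    A hedged sketch of fetching the manifest onto k8s-master1 before applying it (wget was installed with the base packages in section 1.2, step 13):

    [root@k8s-master1 ~]# wget https://docs.projectcalico.org/manifests/calico.yaml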

    [root@k8s-master1 ~]# kubectl apply -f calico.yaml
    [root@k8s-master1 ~]# kubectl get pod -n kube-system 
    NAME                                       READY   STATUS    RESTARTS   AGE
    calico-kube-controllers-6949477b58-lb52j   1/1     Running   0          3m58s
    calico-node-9rqdd                          1/1     Running   0          3m58s
    calico-node-xdr5t                          1/1     Running   0          3m58s
    calico-node-xvkv5                          1/1     Running   0          3m58s
    coredns-7f89b7bc75-5d8vn                   1/1     Running   0          49m
    coredns-7f89b7bc75-xvkth                   1/1     Running   0          49m
    etcd-k8s-master1                           1/1     Running   0          49m
    etcd-k8s-master2                           1/1     Running   0          29m
    kube-apiserver-k8s-master1                 1/1     Running   1          49m
    kube-apiserver-k8s-master2                 1/1     Running   0          29m
    kube-controller-manager-k8s-master1        1/1     Running   1          49m
    kube-controller-manager-k8s-master2        1/1     Running   0          29m
    kube-proxy-4r7kf                           1/1     Running   0          49m
    kube-proxy-6mwh6                           1/1     Running   0          29m
    kube-proxy-qsbp5                           1/1     Running   0          14m
    kube-scheduler-k8s-master1                 1/1     Running   1          49m
    kube-scheduler-k8s-master2                 1/1     Running   0          29m
    [root@k8s-master1 ~]# kubectl get nodes
    NAME          STATUS   ROLES                  AGE   VERSION
    k8s-master1   Ready    control-plane,master   50m   v1.20.6
    k8s-master2   Ready    control-plane,master   29m   v1.20.6
    k8s-node1     Ready    worker                 15m   v1.20.6
    

    Test network connectivity:

    [root@k8s-master1 ~]# kubectl get pod -n kube-system -o wide
    NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE          NOMINATED NODE   READINESS GATES
    calico-kube-controllers-6949477b58-lb52j   1/1     Running   0          5m49s   10.244.36.65     k8s-node1     <none>           <none>
    calico-node-9rqdd                          1/1     Running   0          5m49s   192.168.40.182   k8s-node1     <none>           <none>
    calico-node-xdr5t                          1/1     Running   0          5m49s   192.168.40.180   k8s-master1   <none>           <none>
    calico-node-xvkv5                          1/1     Running   0          5m49s   192.168.40.181   k8s-master2   <none>           <none>
    coredns-7f89b7bc75-5d8vn                   1/1     Running   0          51m     10.244.36.67     k8s-node1     <none>           <none>
    coredns-7f89b7bc75-xvkth                   1/1     Running   0          51m     10.244.36.66     k8s-node1     <none>           <none>
    etcd-k8s-master1                           1/1     Running   0          51m     192.168.40.180   k8s-master1   <none>           <none>
    etcd-k8s-master2                           1/1     Running   0          30m     192.168.40.181   k8s-master2   <none>           <none>
    kube-apiserver-k8s-master1                 1/1     Running   1          51m     192.168.40.180   k8s-master1   <none>           <none>
    kube-apiserver-k8s-master2                 1/1     Running   0          30m     192.168.40.181   k8s-master2   <none>           <none>
    kube-controller-manager-k8s-master1        1/1     Running   1          51m     192.168.40.180   k8s-master1   <none>           <none>
    kube-controller-manager-k8s-master2        1/1     Running   0          31m     192.168.40.181   k8s-master2   <none>           <none>
    kube-proxy-4r7kf                           1/1     Running   0          51m     192.168.40.180   k8s-master1   <none>           <none>
    kube-proxy-6mwh6                           1/1     Running   0          31m     192.168.40.181   k8s-master2   <none>           <none>
    kube-proxy-qsbp5                           1/1     Running   0          16m     192.168.40.182   k8s-node1     <none>           <none>
    kube-scheduler-k8s-master1                 1/1     Running   1          51m     192.168.40.180   k8s-master1   <none>           <none>
    kube-scheduler-k8s-master2                 1/1     Running   0          31m     192.168.40.181   k8s-master2   <none>           <none>
    
    # Note: use the busybox:1.28 image
    [root@k8s-master1 ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
    If you don't see a command prompt, try pressing enter.
    / # ping 10.244.36.67
    PING 10.244.36.67 (10.244.36.67): 56 data bytes
    64 bytes from 10.244.36.67: seq=0 ttl=63 time=0.113 ms
    64 bytes from 10.244.36.67: seq=1 ttl=63 time=0.203 ms
    / # ping baidu.com
    PING baidu.com (39.156.69.79): 56 data bytes
    64 bytes from 39.156.69.79: seq=0 ttl=127 time=47.840 ms
    64 bytes from 39.156.69.79: seq=1 ttl=127 time=62.833 ms
    

    3.7 Test deployment: a Tomcat service

    [root@k8s-master1 ~]# cat tomcat.yaml
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-pod
      namespace: default
      labels:
        app: myapp
        env: dev
    spec:
      containers:
      - name:  tomcat-pod-java
        ports:
        - containerPort: 8080
        image: tomcat:8.5-jre8-alpine
        imagePullPolicy: IfNotPresent
      - name: busybox
        image: busybox:latest
        command:
        - "/bin/sh"
        - "-c"
        - "sleep 3600"
    [root@k8s-master1 ~]# cat tomcat-service.yaml 
    apiVersion: v1
    kind: Service
    metadata:
      name: tomcat
    spec:
      type: NodePort
      ports:
        - port: 8080
          nodePort: 30080
      selector:
        app: myapp
        env: dev
        
    [root@k8s-master1 ~]# kubectl apply -f tomcat.yaml
    pod/demo-pod created
    [root@k8s-master1 ~]# kubectl apply -f tomcat-service.yaml
    service/tomcat created
    
    [root@k8s-master1 ~]# kubectl get pods
    NAME       READY   STATUS    RESTARTS   AGE
    demo-pod   2/2     Running   0          116s
    [root@k8s-master1 ~]# kubectl get svc
    NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
    kubernetes   ClusterIP   10.10.0.1       <none>        443/TCP          59m
    tomcat       NodePort    10.10.235.180   <none>        8080:30080/TCP   114s
    

    Browser access test:

    [Screenshot: Tomcat welcome page reachable via the NodePort]
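
    An equivalent command-line check (optional sketch; any node IP from the environment plan works, and 30080 is the nodePort from tomcat-service.yaml):

    # Expect an HTTP 200 response from the Tomcat welcome page
    [root@k8s-master1 ~]# curl -I http://192.168.40.182:30080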

    3.8 Test the CoreDNS service

    # Note: use the busybox:1.28 image specifically, not the latest tag; with the latest busybox, nslookup fails to resolve the service name and IP
    [root@k8s-master1 ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
    If you don't see a command prompt, try pressing enter.
    / # nslookup kubernetes.default.svc.cluster.local
    Server:    10.10.0.10
    Address 1: 10.10.0.10 kube-dns.kube-system.svc.cluster.local
    
    Name:      kubernetes.default.svc.cluster.local
    Address 1: 10.10.0.1 kubernetes.default.svc.cluster.local
    
    / # nslookup tomcat.default.svc.cluster.local
    Server:    10.10.0.10
    Address 1: 10.10.0.10 kube-dns.kube-system.svc.cluster.local
    
    Name:      tomcat.default.svc.cluster.local
    Address 1: 10.10.235.180 tomcat.default.svc.cluster.local
    