  • Getting Started: Setting Up a K8S Cluster Environment (Part 2)

    For Kubernetes beginners, the following configuration is recommended when purchasing servers on Alibaba Cloud or Tencent Cloud (you can also use your own virtual machines, a private cloud, or whatever Linux environment is easiest for you to obtain):

    • At least 2 servers with 2 CPU cores and 4 GB of RAM
    • CentOS 7.6

     1. Check the CentOS version / hostname

    # Run on both the master node and the worker nodes
    cat /etc/redhat-release
    
    # The hostname output here will be this machine's node name in the Kubernetes cluster
    # localhost cannot be used as a node name
    hostname
    
    # Use the lscpu command to verify the CPU information
    # Architecture: x86_64    this guide does not support the arm architecture
    # CPU(s):       2         the CPU core count must be at least 2
    lscpu

     2. Set the hostname

    # Set the hostname
    hostnamectl set-hostname your-new-host-name
    # Check the result
    hostnamectl status
    # Add a hosts entry for the new hostname
    echo "127.0.0.1   $(hostname)" >> /etc/hosts
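    Note that the echo above appends a new line every time it is re-run. A small sketch of an idempotent variant, using a temporary file as a stand-in for /etc/hosts and a hypothetical hostname:

```shell
# Sketch of an idempotent hosts append; the temp file stands in for /etc/hosts.
hosts_file=$(mktemp)
entry="127.0.0.1   myhost"                                         # hypothetical hostname
grep -qF "$entry" "$hosts_file" || echo "$entry" >> "$hosts_file"  # first run appends
grep -qF "$entry" "$hosts_file" || echo "$entry" >> "$hosts_file"  # second run is a no-op
wc -l < "$hosts_file"                                              # still one line
```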

    3. Check the network

    [root@guanbin-k8s-master ~]# ip route show
    default via 192.168.85.1 dev ens160
    blackhole 10.100.110.64/26 proto bird
    10.100.110.65 dev cali055187aeadb scope link
    10.100.110.66 dev cali84c4725d535 scope link
    10.100.110.67 dev cali5ed802dd6c6 scope link
    169.254.0.0/16 dev ens160 scope link metric 1002
    172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
    192.168.85.0/24 dev ens160 proto kernel scope link src 192.168.85.163
    [root@guanbin-k8s-master ~]# ip address
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
        link/ether 00:50:56:98:55:c5 brd ff:ff:ff:ff:ff:ff
        inet 192.168.85.163/24 brd 192.168.85.255 scope global ens160
           valid_lft forever preferred_lft forever
        inet6 fe80::250:56ff:fe98:55c5/64 scope link
           valid_lft forever preferred_lft forever
    3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
        link/ether 02:42:d0:41:25:02 brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 scope global docker0
           valid_lft forever preferred_lft forever
    4: cali055187aeadb@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
        link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet6 fe80::ecee:eeff:feee:eeee/64 scope link
           valid_lft forever preferred_lft forever
    5: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1440 qdisc noqueue state UNKNOWN group default qlen 1000
        link/ipip 0.0.0.0 brd 0.0.0.0
        inet 10.100.110.64/32 brd 10.100.110.64 scope global tunl0
           valid_lft forever preferred_lft forever
    6: cali84c4725d535@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
        link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
        inet6 fe80::ecee:eeff:feee:eeee/64 scope link
           valid_lft forever preferred_lft forever
    7: cali5ed802dd6c6@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP group default
        link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 2
        inet6 fe80::ecee:eeff:feee:eeee/64 scope link
           valid_lft forever preferred_lft forever

    The IP address used by kubelet

    • In the output of ip route show, you can identify the machine's default network interface, typically eth0, e.g. default via 172.21.0.23 dev eth0
    • In the output of ip address, you can find the IP address of that default interface; Kubernetes will use this IP address to communicate with the other nodes in the cluster, e.g. 172.17.216.80
    • The IP addresses Kubernetes uses must be mutually reachable across all nodes (no NAT mapping, and no isolation by security groups or firewalls)
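    The default interface can also be extracted mechanically from the route table. A small sketch, using the route line captured above as sample input (on a real machine, pipe the output of `ip route show default` in directly):

```shell
# Parse the outgoing interface name from a default-route line.
# Sample input copied from the `ip route show` output above.
route_line='default via 192.168.85.1 dev ens160'
default_if=$(printf '%s\n' "$route_line" | awk '{for (i=1; i<NF; i++) if ($i == "dev") print $(i+1)}')
echo "$default_if"   # ens160
```

    Scanning for the `dev` keyword is slightly more robust than picking a fixed field, since the field position can vary between route entries.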

    4. Disable the firewall and SELinux

     Run on both nodes

    systemctl stop firewalld && systemctl disable firewalld
    sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config  && setenforce 0
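    The sed expression rewrites the SELINUX line of /etc/selinux/config in place; its effect can be demonstrated on a sample line:

```shell
# Demonstrate the SELinux substitution on a sample line from /etc/selinux/config.
line='SELINUX=enforcing'
result=$(echo "$line" | sed 's/^SELINUX=.*/SELINUX=disabled/')
echo "$result"   # SELINUX=disabled
```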

    5. Disable swap

    swapoff -a # temporary
    sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab # permanent
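    The sed expression comments out every fstab line containing ` swap `, so the swap device is never remounted at boot. A demonstration on a sample entry (the device path is hypothetical):

```shell
# Comment out a sample swap entry the way the sed above rewrites /etc/fstab.
line='/dev/mapper/centos-swap swap swap defaults 0 0'
commented=$(echo "$line" | sed '/ swap / s/^\(.*\)$/#\1/g')
echo "$commented"   # #/dev/mapper/centos-swap swap swap defaults 0 0
```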

    6. Add hosts entries

    Run the following command on all hosts

    cat >> /etc/hosts << EOF
    192.168.85.163 guanbin-k8s-master
    192.168.12.38 guanbin-k8s-node
    EOF

    7. Set the system time zone and sync with a time server

    # timedatectl set-timezone Asia/Shanghai
    
    # yum install -y ntpdate
    
    # ntpdate time.windows.com

    8. Install Docker (all nodes)

    $ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
    $ yum -y install docker-ce-18.06.1.ce-3.el7
    $ systemctl enable docker && systemctl start docker
    $ docker --version
    Docker version 18.06.1-ce, build e68fc7a 
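    This step is not in the original text, but kubeadm expects the kubelet and the container runtime to agree on a cgroup driver and typically warns when Docker is left on the default cgroupfs. A hedged sketch of the commonly recommended adjustment (the init script used below may handle this for you, in which case it can be skipped):

```shell
# Optional: switch Docker to the systemd cgroup driver, as kubeadm recommends.
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker
```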

    9. Add the Kubernetes YUM repository

    $ cat > /etc/yum.repos.d/kubernetes.repo << EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF

    10. Install kubeadm, kubelet and kubectl (all hosts)

    $ yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0
    $ systemctl enable kubelet

    11. Deploy the K8S master

    # Run only on the master node
    # Replace x.x.x.x with the master node's actual IP (use the internal IP)
    # export only takes effect in the current shell session; if you open a new shell window and want to continue the installation, re-run these export commands
    export MASTER_IP=x.x.x.x
    # Replace apiserver.demo with the dnsName you want
    export APISERVER_NAME=apiserver.demo
    # The subnet used by Kubernetes pods; it is created by Kubernetes after installation and does not need to exist in your physical network beforehand
    export POD_SUBNET=10.100.0.1/16
    echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts
    curl -sSL https://kuboard.cn/install-script/v1.18.x/init_master.sh | sh -s 1.18.0
     

    The output log is:

    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/

    12. Run the commands suggested in the log

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config

    13. Check the master initialization result

    # Run only on the master node
    
    # Run the following command and wait 3-10 minutes until all pods are in the Running state
    watch kubectl get pod -n kube-system -o wide
    
    # Check the master node initialization result
    kubectl get nodes -o wide

    14. Add worker nodes

     Run on the master node to obtain the kubeadm join command and its arguments

    # Run only on the master node
    kubeadm token create --print-join-command

     The output looks like the following:

     The token is valid for 2 hours; within those 2 hours, you can use it to initialize any number of worker nodes.

    kubeadm join apiserver.demo:6443 --token mpfjma.4vjjg8flqihor4vt     --discovery-token-ca-cert-hash sha256:6f7a8e40a810323672de5eee6f4d19aa2dbdb38411845a1bf5dd63485c43d303

     Run the command printed above on each worker node:

    # Run only on the worker node
    # Replace x.x.x.x with the master node's internal IP
    export MASTER_IP=x.x.x.x
    # Replace apiserver.demo with the APISERVER_NAME used when initializing the master node
    export APISERVER_NAME=apiserver.demo
    echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts
    
    # Replace with the output of the kubeadm token create command on the master node
    kubeadm join apiserver.demo:6443 --token mpfjma.4vjjg8flqihor4vt     --discovery-token-ca-cert-hash sha256:6f7a8e40a810323672de5eee6f4d19aa2dbdb38411845a1bf5dd63485c43d303    

    15. Check whether the node was added successfully

    [root@guanbin-k8s-master ~]# kubectl get nodes
    NAME                 STATUS   ROLES    AGE   VERSION
    guanbin-k8s-master   Ready    master   59m   v1.18.0
    guanbin-k8s-node     Ready    <none>   36m   v1.18.0

     Note: a STATUS of Ready means the node joined successfully; if it is not Ready, run the following command to find out why:

    kubectl describe node guanbin-k8s-node | grep Ready

    Reference: https://kuboard.cn/install/install-k8s.html#%E5%AE%89%E8%A3%85-ingress-controller

  • Original article: https://www.cnblogs.com/guanbin-529/p/12729413.html