Setting Up a Kubernetes (K8s) Cluster with kubeadm

    OS version: CentOS Linux release 7.6.1810 (Core)
    Software versions: kubeadm, kubernetes-1.15, docker-ce-18.09
    Hardware requirements: at least 2 GB of RAM and at least 2 CPU cores
    Official documentation: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

    Host planning

    Hostname     Address          Role                    Services
    k8s-master   172.16.254.134   control plane (master)  kube-apiserver, etcd, kube-scheduler, kube-controller-manager, docker, kubelet
    k8s-node01   172.16.254.135   worker node             kubelet, kube-proxy, docker
    

    1. Configure the system environment

    We need to make the following changes on each host:
    - Set the hostname
    - Configure name resolution
    - Disable swap
    - Disable the firewall and SELinux
    - Enable the bridge-nf feature
    - Verify that every host in the cluster has a unique MAC address and product UUID
    - Check that the required ports are not already in use
    Ports required by the services running on cluster hosts:
    Host    Protocol  Service                   Port(s)
    Master  TCP       Kubernetes API server     6443
                      etcd server client API    2379-2380
                      Kubelet API               10250
                      kube-scheduler            10251
                      kube-controller-manager   10252
    Node    TCP       Kubelet API               10250
                      NodePort Services         30000-32767
    Why enable bridge-nf?
    By default, iptables does not process layer-2 bridged frames. To make sure Pod traffic is also subject to the rules on the iptables chains, we enable iptables' transparent bridge mode, so that bridged layer-2 traffic is filtered by iptables as well; otherwise Pod traffic could bypass iptables entirely and be routed incorrectly.
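    Note that the bridge-nf sysctl keys (written in step (1) below) only exist once the br_netfilter kernel module is loaded; on a minimal CentOS 7 install you may need to load it first. A quick sanity check, as a sketch (the explicit module-load step is an assumption not shown in the original procedure):

    [root@k8s-master ~]# modprobe br_netfilter
    [root@k8s-master ~]# lsmod |grep br_netfilter
    [root@k8s-master ~]# sysctl net.bridge.bridge-nf-call-iptables
    net.bridge.bridge-nf-call-iptables = 1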
    

    (1) On host 172.16.254.134 (k8s-master)

    [root@localhost ~]# echo "k8s-master" >/etc/hostname
    [root@localhost ~]# cat /etc/hostname |xargs hostname
    [root@localhost ~]# bash
    [root@k8s-master ~]# vim /etc/hosts
    172.16.254.134 k8s-master
    172.16.254.135 k8s-node01
    [root@k8s-master ~]# swapoff -a
    [root@k8s-master ~]# systemctl stop firewalld
    [root@k8s-master ~]# systemctl disable firewalld
    [root@k8s-master ~]# setenforce 0
    [root@k8s-master ~]# sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/sysconfig/selinux
    [root@k8s-master ~]# vim /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    [root@k8s-master ~]# sysctl --system
    [root@k8s-master ~]# ip link |grep 'link/ether'
        link/ether 00:0c:29:d1:7c:b1 brd ff:ff:ff:ff:ff:ff
    [root@k8s-master ~]# cat /sys/class/dmi/id/product_uuid
    8A2E4D56-EE76-A6CE-0E12-70F4B8D17CB1
    [root@k8s-master ~]# netstat -lnupt
    

    (2) On host 172.16.254.135 (k8s-node01)

    [root@localhost ~]# echo "k8s-node01" >/etc/hostname
    [root@localhost ~]# cat /etc/hostname |xargs hostname
    [root@localhost ~]# bash
    [root@k8s-node01 ~]# vim /etc/hosts
    172.16.254.134 k8s-master
    172.16.254.135 k8s-node01
    [root@k8s-node01 ~]# swapoff -a
    [root@k8s-node01 ~]# systemctl stop firewalld
    [root@k8s-node01 ~]# systemctl disable firewalld
    [root@k8s-node01 ~]# setenforce 0
    [root@k8s-node01 ~]# sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/sysconfig/selinux
    [root@k8s-node01 ~]# vim /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    [root@k8s-node01 ~]# sysctl --system
    [root@k8s-node01 ~]# ip link |grep 'link/ether'
        link/ether 00:0c:29:6d:40:2b brd ff:ff:ff:ff:ff:ff
    [root@k8s-node01 ~]# cat /sys/class/dmi/id/product_uuid
    4D854D56-E60A-69DD-CC05-4BF03A6D402B
    [root@k8s-node01 ~]# netstat -lnupt
    

    2. Install Docker (the container runtime for Kubernetes)

    Run the same steps on both hosts.

    [root@localhost ~]# yum -y install epel-release.noarch yum-utils
    [root@localhost ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    [root@localhost ~]# yum -y install device-mapper-persistent-data  lvm2
    [root@localhost ~]# yum -y install docker-ce-18.09.1
    [root@localhost ~]# systemctl start docker
    [root@localhost ~]# systemctl enable docker
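    Optionally, verify the installation before moving on (the output shown is what we would expect for this version):

    [root@localhost ~]# docker version --format '{{.Server.Version}}'
    18.09.1
    [root@localhost ~]# systemctl is-enabled docker
    enabled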
    

    3. Configure Docker and restart the Docker service

    Point Docker at a domestic registry mirror and set its cgroup driver to "systemd".
    Run the same steps on both hosts.

    [root@k8s-master ~]# vim /etc/docker/daemon.json
    {
      "registry-mirrors": ["http://hub-mirror.c.163.com"],
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2",
      "storage-opts": [
        "overlay2.override_kernel_check=true"
      ]
    }
    [root@k8s-master ~]# systemctl restart docker
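    You can confirm Docker picked up the new cgroup and storage drivers:

    [root@k8s-master ~]# docker info 2>/dev/null |grep -E 'Cgroup Driver|Storage Driver'
    Storage Driver: overlay2
    Cgroup Driver: systemd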
    

    4. Configure the Kubernetes YUM repository

    The Kubernetes YUM repository used here is provided by the Alibaba Cloud open-source mirror site.
    Run the same steps on both hosts.

    [root@k8s-master ~]# vim /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
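    An optional sanity check that the repository resolves and carries the 1.15.0 packages before installing:

    [root@k8s-master ~]# yum repolist kubernetes
    [root@k8s-master ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes |grep 1.15.0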
    

    5. Install kubelet, kubeadm, and kubectl

    Run the same steps on both hosts.

    [root@k8s-master ~]# yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0 --disableexcludes=kubernetes
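    After the install, confirm all three tools report the expected version:

    [root@k8s-master ~]# kubeadm version -o short
    v1.15.0
    [root@k8s-master ~]# kubelet --version
    Kubernetes v1.15.0
    [root@k8s-master ~]# kubectl version --client --short
    Client Version: v1.15.0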
    

    6. Configure the kubelet

    Prevent the kubelet from failing to start because swap is enabled.
    Run the same steps on both hosts.

    [root@k8s-master ~]# vim /etc/sysconfig/kubelet
    KUBELET_EXTRA_ARGS="--fail-swap-on=false"
    [root@k8s-master ~]# systemctl enable kubelet
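    At this point the kubelet cannot run stably yet: it restarts in a crash loop while waiting for kubeadm to generate its configuration. This is expected; you can observe it with:

    [root@k8s-master ~]# systemctl status kubelet |grep Active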
    

    7. List the Docker images kubeadm needs to create the cluster

    Kubeadm pulls its images from "k8s.gcr.io" by default, which is not reachable from every network, so we have to download the required images manually before we can create the cluster.
    Run on the master host (k8s-master).

    [root@k8s-master ~]# kubeadm config print init-defaults |grep imageRepository
    imageRepository: k8s.gcr.io
    [root@k8s-master ~]# kubeadm config images list
    W0708 15:58:04.237960   23951 version.go:99] falling back to the local client version: v1.15.0
    k8s.gcr.io/kube-apiserver:v1.15.0
    k8s.gcr.io/kube-controller-manager:v1.15.0
    k8s.gcr.io/kube-scheduler:v1.15.0
    k8s.gcr.io/kube-proxy:v1.15.0
    k8s.gcr.io/pause:3.1
    k8s.gcr.io/etcd:3.3.10
    k8s.gcr.io/coredns:1.3.1
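    As an alternative to the manual pull-and-retag procedure in the next step, kubeadm also accepts an --image-repository flag that pulls everything from a different registry directly. A sketch using the Alibaba Cloud mirror (the repository name below is a commonly used one, but verify it is still available before relying on it):

    [root@k8s-master ~]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.0

    If you go this route, pass the same --image-repository flag to "kubeadm init" as well, or retag the pulled images to their k8s.gcr.io names.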
    

    8. Manually pull the images from a third-party repository and retag them

    We can first search the online registry for suitable images.
    Run the same steps on both hosts.

    [root@k8s-master ~]# docker search kube-apiserver:v1.15.0
    [root@k8s-master ~]# docker search richarddockerimage
    NAME                                             DESCRIPTION                                     STARS              
    richarddockerimage/kube-apiserver-v15            k8s.gcr.io/kube-apiserver:v1.15.0               0                                       
    richarddockerimage/tomcat_env_image              Based on ubuntu 14.04, plus java7 and tomcat…  0                                       
    richarddockerimage/docker-whale                  Demo for docker                                 0                                       
    richarddockerimage/kube-controller-manager-v15   k8s.gcr.io/kube-controller-manager:v1.15.0      0                                       
    richarddockerimage/kube-proxy-v15                k8s.gcr.io/kube-proxy:v1.15.0                   0                                       
    richarddockerimage/kube-scheduler-v15            k8s.gcr.io/kube-scheduler:v1.15.0               0                                       
    richarddockerimage/coredns-v15                   k8s.gcr.io/coredns:1.3.1                        0                                       
    richarddockerimage/etcd                          k8s.gcr.io/etcd:3.3.10                          0                                       
    richarddockerimage/pause-v15                     k8s.gcr.io/pause:3.1                            0                                       
    richarddockerimage/oracle12                      Oracle database 12                              0                                       
    richarddockerimage/sqlserver                     sql server 2017                                 0                                       
    richarddockerimage/image_from_dockerfile                                                         0                           
    [root@k8s-master ~]# docker pull richarddockerimage/kube-apiserver-v15
    [root@k8s-master ~]# docker pull richarddockerimage/kube-controller-manager-v15
    [root@k8s-master ~]# docker pull richarddockerimage/kube-scheduler-v15
    [root@k8s-master ~]# docker pull richarddockerimage/kube-proxy-v15
    [root@k8s-master ~]# docker pull richarddockerimage/pause-v15
    [root@k8s-master ~]# docker pull richarddockerimage/etcd
    [root@k8s-master ~]# docker pull richarddockerimage/coredns-v15
    [root@k8s-master ~]# docker tag richarddockerimage/kube-apiserver-v15 k8s.gcr.io/kube-apiserver:v1.15.0
    [root@k8s-master ~]# docker tag richarddockerimage/kube-controller-manager-v15 k8s.gcr.io/kube-controller-manager:v1.15.0
    [root@k8s-master ~]# docker tag richarddockerimage/kube-scheduler-v15 k8s.gcr.io/kube-scheduler:v1.15.0
    [root@k8s-master ~]# docker tag richarddockerimage/kube-proxy-v15 k8s.gcr.io/kube-proxy:v1.15.0
    [root@k8s-master ~]# docker tag richarddockerimage/pause-v15 k8s.gcr.io/pause:3.1
    [root@k8s-master ~]# docker tag richarddockerimage/etcd k8s.gcr.io/etcd:3.3.10
    [root@k8s-master ~]# docker tag richarddockerimage/coredns-v15 k8s.gcr.io/coredns:1.3.1
    [root@k8s-master ~]# docker rmi richarddockerimage/kube-apiserver-v15
    [root@k8s-master ~]# docker rmi richarddockerimage/kube-controller-manager-v15
    [root@k8s-master ~]# docker rmi richarddockerimage/kube-scheduler-v15
    [root@k8s-master ~]# docker rmi richarddockerimage/kube-proxy-v15
    [root@k8s-master ~]# docker rmi richarddockerimage/pause-v15
    [root@k8s-master ~]# docker rmi richarddockerimage/etcd
    [root@k8s-master ~]# docker rmi richarddockerimage/coredns-v15
    [root@k8s-master ~]# docker images
    REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
    k8s.gcr.io/etcd                      3.3.10              aae65e9fad13        9 days ago          258MB
    k8s.gcr.io/coredns                   1.3.1               12dcba476018        9 days ago          40.3MB
    k8s.gcr.io/pause                     3.1                 f3120a7daf47        9 days ago          742kB
    k8s.gcr.io/kube-proxy                v1.15.0             b39aca5c3855        9 days ago          82.4MB
    k8s.gcr.io/kube-scheduler            v1.15.0             9270c92a5165        9 days ago          81.1MB
    k8s.gcr.io/kube-controller-manager   v1.15.0             79939977718a        9 days ago          159MB
    k8s.gcr.io/kube-apiserver            v1.15.0             6ea465931092        9 days ago          207MB
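    The repetitive pull/tag/rmi sequence above is easy to script. A minimal sketch, assuming the same richarddockerimage repositories and target tags used above:

    [root@k8s-master ~]# vim pull-k8s-images.sh
    #!/bin/bash
    # Map each third-party image to the k8s.gcr.io name kubeadm expects.
    declare -A images=(
      [richarddockerimage/kube-apiserver-v15]="k8s.gcr.io/kube-apiserver:v1.15.0"
      [richarddockerimage/kube-controller-manager-v15]="k8s.gcr.io/kube-controller-manager:v1.15.0"
      [richarddockerimage/kube-scheduler-v15]="k8s.gcr.io/kube-scheduler:v1.15.0"
      [richarddockerimage/kube-proxy-v15]="k8s.gcr.io/kube-proxy:v1.15.0"
      [richarddockerimage/pause-v15]="k8s.gcr.io/pause:3.1"
      [richarddockerimage/etcd]="k8s.gcr.io/etcd:3.3.10"
      [richarddockerimage/coredns-v15]="k8s.gcr.io/coredns:1.3.1"
    )
    for src in "${!images[@]}"; do
      docker pull "$src"                   # pull from the third-party repository
      docker tag "$src" "${images[$src]}"  # retag to the name kubeadm expects
      docker rmi "$src"                    # remove the intermediate tag
    done
    [root@k8s-master ~]# bash pull-k8s-images.sh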
    

    9. Create the Kubernetes cluster

    Run on the master host (k8s-master).
    The following command automatically installs and starts the control-plane (master) component services on this host.

    [root@k8s-master ~]# kubeadm init --kubernetes-version=v1.15.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --apiserver-advertise-address=0.0.0.0 --ignore-preflight-errors=Swap
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 172.16.254.134:6443 --token c4p317.ia0w2uc6m1f4pmnn \
        --discovery-token-ca-cert-hash sha256:ef2c778a8d7c6c2df000449249f45f55bf35356239fdaefa84822fde4b2f4b71
    

    10. Copy the kubectl configuration file

    Run on the master host (k8s-master).
    Once the cluster is up, we manage it with the "kubectl" client, which connects using the generated kubeconfig file; the source file is /etc/kubernetes/admin.conf.

    [root@k8s-master ~]# mkdir -p $HOME/.kube
    [root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    [root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
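    A quick check that kubectl can reach the API server with the copied config (the context name shown is kubeadm's default):

    [root@k8s-master ~]# kubectl config current-context
    kubernetes-admin@kubernetes
    [root@k8s-master ~]# kubectl get ns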
    

    11. Deploy the network plugin (flannel)

    Run on the master host (k8s-master).
    Pod-to-pod networking in Kubernetes is implemented by third-party extensions, so we need to install a network plugin. flannel is a common choice; other options are listed in the official documentation.
    After the install, check the Pods in the "kube-system" namespace with "kubectl get pods --all-namespaces"; once they are all in the "Running" state, the Kubernetes cluster is working properly.

    [root@k8s-master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
    [root@k8s-master ~]# kubectl get pods --all-namespaces
    NAMESPACE     NAME                                 READY   STATUS     RESTARTS   AGE
    kube-system   coredns-5c98db65d4-sj2j6             1/1     Running    0          93m
    kube-system   coredns-5c98db65d4-zhpdw             1/1     Running    0          93m
    kube-system   etcd-k8s-master                      1/1     Running    0          92m
    kube-system   kube-apiserver-k8s-master            1/1     Running    0          92m
    kube-system   kube-controller-manager-k8s-master   1/1     Running    0          92m
    kube-system   kube-flannel-ds-amd64-22fnl          1/1     Running    0          12m
    kube-system   kube-proxy-dlxwl                     1/1     Running    0          93m
    kube-system   kube-proxy-mtplf                     1/1     Running    0          100s
    kube-system   kube-scheduler-k8s-master            1/1     Running    0          92m
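    You can also confirm the flannel DaemonSet has a pod scheduled on every node (the DaemonSet name below matches this manifest revision; check it against your own output):

    [root@k8s-master ~]# kubectl -n kube-system get daemonset kube-flannel-ds-amd64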
    

    12. Join the worker node to the cluster

    The token and CA hash are generated on the master; the join command itself runs on k8s-node01.
    When the cluster is created, a token and CA certificate are generated so that nodes can connect and join. The token expires after 24 hours by default; past that point, joining another node requires creating a new token and recomputing the CA certificate hash by hand, as shown below.

    [root@k8s-master ~]# kubeadm token list
    [root@k8s-master ~]# kubeadm token create
    [root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
    [root@k8s-node01 ~]# kubeadm join 172.16.254.134:6443 --token c4p317.ia0w2uc6m1f4pmnn --discovery-token-ca-cert-hash sha256:ef2c778a8d7c6c2df000449249f45f55bf35356239fdaefa84822fde4b2f4b71 --ignore-preflight-errors=Swap
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
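    Note: instead of running "kubeadm token create" and the openssl pipeline separately, kubeadm can emit a fresh, ready-to-run join command in one step (placeholders shown instead of real values):

    [root@k8s-master ~]# kubeadm token create --print-join-command
    kubeadm join 172.16.254.134:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<ca-cert-hash>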
    

    13. Check the nodes in the cluster

    Run on the master host (k8s-master).
    The cluster build is complete: both the master and the worker node are in the Ready state.

    [root@k8s-master ~]#  kubectl get nodes
    NAME         STATUS   ROLES    AGE     VERSION
    k8s-master   Ready    master   95m     v1.15.0
    k8s-node01   Ready    <none>   2m57s   v1.15.0
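    The worker's ROLES column shows "<none>" because kubeadm only labels control-plane nodes. If you want a role displayed there, you can add the label yourself; this is purely cosmetic (a sketch, not part of the original procedure):

    [root@k8s-master ~]# kubectl label node k8s-node01 node-role.kubernetes.io/worker=
    node/k8s-node01 labeled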
    

    14. Check the control-plane (master) component status

    [root@k8s-master ~]# kubectl get cs
    NAME                 STATUS    MESSAGE             ERROR
    scheduler            Healthy   ok                  
    controller-manager   Healthy   ok                  
    etcd-0               Healthy   {"health":"true"}  
    

    15. View cluster status information

    [root@k8s-master ~]# kubectl cluster-info
    Kubernetes master is running at https://172.16.254.134:6443
    KubeDNS is running at https://172.16.254.134:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    

    16. View the cluster version

    [root@k8s-master ~]# kubectl version --short=true
    Client Version: v1.15.0
    Server Version: v1.15.0
    