  • Building a k8s Cluster on CentOS 7 with kubeadm

    This article installs Kubernetes using kubeadm. Other installation methods exist, but kubeadm is one of the simpler ones.

    A word about the steps before starting: because of environment and network issues, the installation may not succeed by simply following a single tutorial top to bottom, so it is worth understanding the overall procedure first and reading the official docs.

    Environment

                 Host 1               Host 2
    Hostname     k8s-master           k8s-node-1
    IP           144.34.220.135       106.14.141.90
    Specs        2 CPU / 2 GB RAM     1 CPU / 1 GB RAM
    Location     outside China        inside China
    OS           CentOS 7             CentOS 7

    Verification and preparation

    Perform this section on both hosts.

    Set the hostname and the hosts file

    Set the hostname:

    # master
    hostnamectl --static set-hostname  k8s-master
    
    # node
    hostnamectl --static set-hostname  k8s-node-1
    

    Check the system information:

    
    hostnamectl status
    

    Edit the /etc/hosts file; both hosts need this change:

    vi /etc/hosts
    
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    127.0.0.1 localhost
    # add the following two lines
    144.34.220.135 k8s-master
    106.14.141.90 k8s-node-1
    

    Disable the swap partition

    You MUST disable swap for the kubelet to work properly.

    • Use free -h to check whether swap is enabled:
    [root@k8s-master ~]# free -h
                  total        used        free      shared  buff/cache   available
    Mem:           2.0G         80M        334M         16M        1.6G        1.7G
    Swap:          511M          0B        511M
    

    How to turn swap off:

    swapoff -a
    
    vim /etc/fstab
    
    
    #
    # /etc/fstab
    # Created by anaconda on Mon Mar 13 20:42:02 2017
    #
    # Accessible filesystems, by reference, are maintained under '/dev/disk'
    # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
    #
    UUID=3437f1a0-f850-4f1b-8a7c-819c5f6a29e4 /                       ext4    defaults,discard,noatime        1 1
    UUID=ad1361f7-4ab4-4252-ba00-5d4e5d8590fb /boot                   ext3    defaults        1 2
    /swap none swap sw 0 0
    
    

    If your /etc/fstab contains a swap entry, such as the /swap line above, comment it out so swap stays disabled after a reboot.
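    To do this non-interactively, a one-line sketch (back up /etc/fstab first; the pattern assumes the standard whitespace-separated fstab layout):

    cp /etc/fstab /etc/fstab.bak                                # keep a backup before editing
    sed -i '/[[:space:]]swap[[:space:]]/ s/^/#/' /etc/fstab     # comment out every swap mount line

    Afterwards, swap should show all zeros: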

    [root@k8s-master ~]# free -h
                  total        used        free      shared  buff/cache   available
    Mem:           2.0G         80M        333M         16M        1.6G        1.7G
    Swap:            0B          0B          0B
    

    Disable SELinux

    # Set SELinux to permissive mode (effectively disabling it)
    setenforce 0
    sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
    

    This is required to allow containers to access the host filesystem, which Pod networks need. You have to do this until SELinux support is improved in the kubelet.
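    You can confirm the change took effect (getenforce ships with the SELinux user-space tools):

    getenforce   # should now print "Permissive"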

    Open the ports Kubernetes uses

    Here we simply disable the firewall outright; if you would rather open only the required ports, see the sketch after the commands below.

    # centos 7
    
    systemctl status firewalld.service
    ● firewalld.service - firewalld - dynamic firewall daemon
       Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
       Active: inactive (dead)
         Docs: man:firewalld(1)
         
    systemctl stop firewalld.service    # stop firewalld
    systemctl disable firewalld.service # keep firewalld from starting at boot
    
    
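    Alternatively, a sketch that keeps firewalld running and opens only the ports the Kubernetes documentation lists as required:

    # on the master (control-plane) node
    firewall-cmd --permanent --add-port=6443/tcp          # Kubernetes API server
    firewall-cmd --permanent --add-port=2379-2380/tcp     # etcd server client API
    firewall-cmd --permanent --add-port=10250-10252/tcp   # kubelet, controller-manager, scheduler
    firewall-cmd --reload

    # on the worker node
    firewall-cmd --permanent --add-port=10250/tcp         # kubelet API
    firewall-cmd --permanent --add-port=30000-32767/tcp   # NodePort services
    firewall-cmd --reload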

    Make sure the MAC address and product_uuid are unique on every node.

    Physical hardware generally has unique addresses, but some virtual machines may end up with duplicates.

    • MAC address: ip link or ifconfig -a
    • product_uuid: sudo cat /sys/class/dmi/id/product_uuid
    [root@localhost ~]# sudo cat /sys/class/dmi/id/product_uuid
    552404A0-F6C4-4A3A-BC3E-A3B4A032501B
    

    Verify the two hosts can reach each other

    ping k8s-node-1   # from the master; likewise, ping k8s-master from the node

    Install Docker

    Perform this section on both hosts.

    Since v1.6.0, Kubernetes has supported CRI, the Container Runtime Interface. The default container runtime is Docker, enabled through the kubelet's built-in CRI implementation, dockershim.

    Other container runtimes are also available:

    • containerd (via containerd's built-in CRI plugin)
    • cri-o
    • frakti
    • rkt

    Here we use Docker.

    Note: this article was written on 2019-08-09. The kubelet version installed at the time was 1.15.2, which supports docker-ce up to 18.09, while Docker itself had already moved on to 19.x, so you may need to pin the Docker version.

    ## Set up the repository
    ### Install required packages.
    yum install yum-utils device-mapper-persistent-data lvm2
    
    ### Add Docker repository.
    yum-config-manager \
      --add-repo \
      https://download.docker.com/linux/centos/docker-ce.repo
    
    ## Install Docker CE.
    yum update && yum install docker-ce-18.06.2.ce
    
    [root@k8s-master ~]# docker --version
    Docker version 18.06.2-ce, build 6d37f41
    
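    To keep a later yum update from dragging Docker past the version the kubelet supports, one option is to lock the package; a sketch using the yum-plugin-versionlock plugin:

    yum install -y yum-plugin-versionlock
    yum versionlock add docker-ce   # freeze docker-ce at the installed version
    yum versionlock list            # confirm the lock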

    Install kubeadm, kubelet and kubectl

    Perform this section on both hosts.

    • kubeadm: the command that bootstraps the cluster.

    • kubelet: runs on every node in the cluster and starts pods, containers, and so on.

    • kubectl: the command-line tool for talking to the cluster.

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    exclude=kube*
    EOF
    
    yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
    
    systemctl enable kubelet && systemctl start kubelet
    
    [root@k8s-master ~]# systemctl status kubelet
    ● kubelet.service - kubelet: The Kubernetes Node Agent
       Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
      Drop-In: /usr/lib/systemd/system/kubelet.service.d
               └─10-kubeadm.conf
       Active: activating (auto-restart) (Result: exit-code) since 五 2019-08-09 15:08:39 EDT; 740ms ago
         Docs: https://kubernetes.io/docs/
      Process: 26821 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
     Main PID: 26821 (code=exited, status=255)
    
    8月 09 15:08:39 k8s-master systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
    8月 09 15:08:39 k8s-master systemd[1]: Unit kubelet.service entered failed state.
    8月 09 15:08:39 k8s-master systemd[1]: kubelet.service failed.
    

    The kubelet fails to start; that is fine for now, ignore it. It will keep crash-looping until kubeadm init generates its configuration.
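    If you want to see why, the kubelet journal typically shows it failing to load /var/lib/kubelet/config.yaml, a file that kubeadm init writes later:

    journalctl -u kubelet -n 20 --no-pager   # show the last 20 kubelet log lines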

    Some RHEL/CentOS 7 users have reported traffic being routed incorrectly because iptables was bypassed. You must make sure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl configuration:

    cat <<EOF > /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    
    # apply the configuration
    sysctl --system
    
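    If sysctl --system reports that these keys do not exist, the br_netfilter kernel module is probably not loaded; a sketch of the usual fix:

    modprobe br_netfilter                                       # load the bridge netfilter module now
    echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # reload it on every boot
    sysctl net.bridge.bridge-nf-call-iptables                   # should print "... = 1"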

    Configure the cgroup driver used by the kubelet on the Master node

    When using Docker, kubeadm automatically detects the cgroup driver for the kubelet and writes it to /var/lib/kubelet/kubeadm-flags.env at runtime. If you are using a different CRI, you have to set the cgroup-driver value in /etc/default/kubelet instead, like this:

    KUBELET_EXTRA_ARGS=--cgroup-driver=<value>
    

    This file is used by kubeadm init and kubeadm join to source extra user-defined arguments for the kubelet.

    Note that you only need to do this if your CRI's cgroup driver is not cgroupfs, since cgroupfs is already the kubelet's default.

    Then restart the kubelet:

    systemctl daemon-reload
    systemctl restart kubelet
    
    # check the Docker cgroup driver
    docker info | grep -i cgroup
    

    If the query returns nothing, Docker is not running yet; start it (after starting the kubelet):

    # start docker
    [root@k8s-master ~]# systemctl start docker && systemctl enable docker
    Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
    [root@k8s-master ~]# docker info | grep -i cgroup
    Cgroup Driver: cgroupfs
    
    # restart kubelet
    systemctl restart kubelet
    

    Initialize the Master

    Pull the images

    This step is not strictly required, since kubeadm init performs it as well. It is called out separately because hosts inside China need to get around the firewall (for example via a proxy) to reach k8s.gcr.io; otherwise initialization fails with an error like this:

    [ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.4: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    , error: exit status 1
    

    For a workaround to pull the images from inside China, see the article linked here.

    Note: my master host is outside China. Hosts inside China need a proxy, but remember to unset it afterwards, or startup may fail. Another option is to pull copies of the images from a public mirror and re-tag them; see the link above.
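    For reference, the re-tag approach looks roughly like this (a sketch: the registry.aliyuncs.com/google_containers mirror is an assumption, substitute whatever mirror the linked article recommends, and match the tags reported by kubeadm config images list):

    # pull each image from a mirror, then re-tag it to the k8s.gcr.io name kubeadm expects
    for img in kube-apiserver:v1.15.2 kube-controller-manager:v1.15.2 \
               kube-scheduler:v1.15.2 kube-proxy:v1.15.2 \
               pause:3.1 etcd:3.3.10 coredns:1.3.1; do
        docker pull registry.aliyuncs.com/google_containers/$img
        docker tag  registry.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
        docker rmi  registry.aliyuncs.com/google_containers/$img
    done

    With network access sorted out, you can pull directly: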

    kubeadm config images pull
    
    [root@k8s-master kubelet.service.d]# kubeadm config images pull
    [config/images] Pulled k8s.gcr.io/kube-apiserver:v1.15.2
    [config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.15.2
    [config/images] Pulled k8s.gcr.io/kube-scheduler:v1.15.2
    [config/images] Pulled k8s.gcr.io/kube-proxy:v1.15.2
    [config/images] Pulled k8s.gcr.io/pause:3.1
    [config/images] Pulled k8s.gcr.io/etcd:3.3.10
    [config/images] Pulled k8s.gcr.io/coredns:1.3.1
    

    Check the pulled images with Docker:

    [root@k8s-master kubelet.service.d]# docker images
    REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
    k8s.gcr.io/kube-apiserver            v1.15.2             34a53be6c9a7        4 days ago          207MB
    k8s.gcr.io/kube-controller-manager   v1.15.2             9f5df470155d        4 days ago          159MB
    k8s.gcr.io/kube-scheduler            v1.15.2             88fa9cb27bd2        4 days ago          81.1MB
    k8s.gcr.io/kube-proxy                v1.15.2             167bbf6c9338        4 days ago          82.4MB
    k8s.gcr.io/coredns                   1.3.1               eb516548c180        6 months ago        40.3MB
    k8s.gcr.io/etcd                      3.3.10              2c4adeb21b4f        8 months ago        258MB
    k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        19 months ago       742kB
    

    Run kubeadm init

    kubeadm init --apiserver-advertise-address=144.34.220.135 --pod-network-cidr=10.244.0.0/16
    

    On success, it prints output like the following:

    [root@k8s-master kubelet.service.d]# kubeadm init --apiserver-advertise-address=144.34.220.135 --pod-network-cidr=10.244.0.0/16
    [init] Using Kubernetes version: v1.15.2
    [preflight] Running pre-flight checks
    	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Activating the kubelet service
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [144.34.220.135 127.0.0.1 ::1]
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [144.34.220.135 127.0.0.1 ::1]
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 144.34.220.135]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [kubelet-check] Initial timeout of 40s passed.
    [apiclient] All control plane components are healthy after 40.514041 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: 9f8hc3.mvm668g4rwmtiwpl
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 144.34.220.135:6443 --token 9f8hc3.mvm668g4rwmtiwpl \
        --discovery-token-ca-cert-hash sha256:e828f328183d747f2f9171476ddd3187d380c372080e0776dfd59165c7b99815
    [root@k8s-master kubelet.service.d]#
    

    As the console output says, to make kubectl work for a non-root user, run the following:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    Alternatively, if you are the root user:

    export KUBECONFIG=/etc/kubernetes/admin.conf
    
    
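    If you go the root route, a small sketch to persist the variable and sanity-check connectivity (kubectl cluster-info is a standard subcommand):

    echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /root/.bash_profile   # persist for future root shells
    kubectl cluster-info                                                         # quick check against the API server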

    Record the kubeadm join command printed by kubeadm init; you will need it to join nodes to the cluster.

    Install a Pod network add-on

    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
    
    [root@k8s-master kubelet.service.d]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml
    podsecuritypolicy.extensions/psp.flannel.unprivileged created
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.extensions/kube-flannel-ds-amd64 created
    daemonset.extensions/kube-flannel-ds-arm64 created
    daemonset.extensions/kube-flannel-ds-arm created
    daemonset.extensions/kube-flannel-ds-ppc64le created
    daemonset.extensions/kube-flannel-ds-s390x created
    

    Once a Pod network is installed, you can confirm it is working by checking that the CoreDNS pods show Running in the output of kubectl get pods --all-namespaces. Once the CoreDNS pods are up and running, you can continue joining your nodes.

    [root@k8s-master kubelet.service.d]# kubectl get pods --all-namespaces
    NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
    kube-system   coredns-5c98db65d4-kcvfp             1/1     Running   0          9m51s
    kube-system   coredns-5c98db65d4-p8fd8             1/1     Running   0          9m51s
    kube-system   etcd-k8s-master                      1/1     Running   0          9m13s
    kube-system   kube-apiserver-k8s-master            1/1     Running   0          9m10s
    kube-system   kube-controller-manager-k8s-master   1/1     Running   0          9m7s
    kube-system   kube-flannel-ds-amd64-zdsrf          1/1     Running   0          77s
    kube-system   kube-proxy-dkfq9                     1/1     Running   0          9m51s
    kube-system   kube-scheduler-k8s-master            1/1     Running   0          8m53s
    [root@k8s-master kubelet.service.d]#
    
    [root@k8s-master kubelet.service.d]# kubectl get nodes
    NAME         STATUS   ROLES    AGE   VERSION
    k8s-master   Ready    master   12m   v1.15.2
    

    Initialize the node

    ssh 106.14.141.90 -p <ssh port>
    

    Pull the images

    As with the master, pulling these images requires getting around the firewall. The difference is that the node only needs two images: kube-proxy and pause.

    If you don't want to use a proxy, refer to the article linked above.

    [root@k8s-node-1 docker]# docker images
    REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE
    k8s.gcr.io/kube-proxy   v1.15.2             167bbf6c9338        5 days ago          82.4MB
    k8s.gcr.io/pause        3.1                 da86e6ba6ca1        19 months ago       742kB
    [root@k8s-node-1 docker]#
    

    Join the cluster

    kubeadm join 144.34.220.135:6443 --token 9f8hc3.mvm668g4rwmtiwpl \
        --discovery-token-ca-cert-hash sha256:e828f328183d747f2f9171476ddd3187d380c372080e0776dfd59165c7b99815
    

    (You can also specify a version number here.)

    This command comes from the kubeadm init output on the master. The token has a time limit, 24 hours by default.

    You can first list the tokens on k8s-master with kubeadm token list:

    [root@k8s-master ~]# kubeadm token list
    TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
    9f8hc3.mvm668g4rwmtiwpl   6h        2019-08-10T15:39:03-04:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
    [root@k8s-master ~]#
    

    If there is none, create one on k8s-master with kubeadm token create.
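    A convenient shortcut is to have kubeadm mint a fresh token and print the complete join command in one step (the --print-join-command flag is available in this kubeadm version):

    kubeadm token create --print-join-command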

    If you don't have the value for --discovery-token-ca-cert-hash, you can obtain it by running the following command chain on the master node:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | 
       openssl dgst -sha256 -hex | sed 's/^.* //'
    

    Now run the join command on the node:

    [root@k8s-node-1 docker]# kubeadm join 144.34.220.135:6443 --token qekggk.bqsgueieehkhli45 \
    >     --discovery-token-ca-cert-hash sha256:16606f1055983847eaaba5f283b634a96160de22fc48a58c72b0045f78de3b31
    [preflight] Running pre-flight checks
    	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Activating the kubelet service
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    
    [root@k8s-node-1 docker]#
    

    Check the cluster state on the master host:

    [root@k8s-master ~]# kubectl get nodes
    NAME         STATUS   ROLES    AGE   VERSION
    k8s-master   Ready    master   10m   v1.15.2
    k8s-node-1   Ready    <none>   25s   v1.15.2
    
    [root@k8s-master ~]# kubectl get pod --all-namespaces
    NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
    kube-system   coredns-5c98db65d4-7ghr6             1/1     Running   0          10m
    kube-system   coredns-5c98db65d4-gl9kw             1/1     Running   0          10m
    kube-system   etcd-k8s-master                      1/1     Running   0          9m42s
    kube-system   kube-apiserver-k8s-master            1/1     Running   0          9m55s
    kube-system   kube-controller-manager-k8s-master   1/1     Running   0          9m35s
    kube-system   kube-flannel-ds-amd64-5bcg4          1/1     Running   0          54s
    kube-system   kube-flannel-ds-amd64-v7grj          1/1     Running   0          8m12s
    kube-system   kube-proxy-q52r2                     1/1     Running   0          10m
    kube-system   kube-proxy-z7687                     1/1     Running   0          54s
    kube-system   kube-scheduler-k8s-master            1/1     Running   0          9m27s
    
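    As a final smoke test (a sketch; nginx is just an arbitrary public image), you can deploy something and confirm it lands on the worker, since the master carries the NoSchedule taint:

    kubectl create deployment nginx --image=nginx               # schedule a test pod
    kubectl expose deployment nginx --port=80 --type=NodePort   # expose it on a node port
    kubectl get pods -o wide                                    # the pod should land on k8s-node-1
    kubectl get svc nginx                                       # note the mapped NodePort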
