  • [Original] Deploying Kubernetes with kubeadm (Part 1)

    #######################    Declaration  #####################

    On the WeChat official account 木子李的菜田

    enter the keyword: k8s

    to get the full series of installation documents.

    This document is based on notes from a hands-on deployment on two machines. Kubernetes is under constant development,

    so there is no guarantee that every step exactly matches the current upstream state; if you hit problems during installation, please Google them.

    This article is split into two parts:

    [Original] Deploying Kubernetes with kubeadm (Part 1)

    [Original] Deploying the Kubernetes dashboard (Part 2)

    What do you get by following the steps below?

    1. Two hosts: one as the master (server), the other as a worker node

    2. The dashboard add-on installed and deployed on the node, presented as the Kubernetes dashboard

    3. Solutions to the problems encountered along the way

    #######################    Main text  #####################
    ###
    OS: CentOS 7.5
    Kubernetes: v1.14.1
    network model: flannel
    ###
    [The following steps must be performed on BOTH machines]
     
    0. Add host entries for the nodes to the hosts file and stop the firewall
        
    systemctl stop firewalld && systemctl disable firewalld
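    For the hosts entries, a minimal sketch is below; the master IP 192.168.0.166 is the one used later in this article, while the node IP and both hostnames are placeholders you should replace with your own:

    cat >> /etc/hosts <<EOF
    192.168.0.166   k8s-master
    192.168.0.167   k8s-node1
    EOF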
    1. Set up passwordless SSH login between the two machines in both directions
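    A minimal sketch of setting this up (run on each machine; the peer address is a placeholder):

    ssh-keygen -t rsa                 # accept the defaults
    ssh-copy-id root@192.168.0.167    # copy the key to the other machine, then repeat in the other direction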
    2. Synchronize time on both machines (see https://www.cnblogs.com/horizonli/p/9539436.html)
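    One common way to do this on CentOS 7 is chrony (shown here as a sketch; the linked post may use a different method):

    yum install -y chrony
    systemctl enable chronyd && systemctl start chronyd
    chronyc sources    # verify that time sources are reachable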
    3. Disable SELinux
    # Set SELinux in permissive mode (effectively disabling it)
    setenforce 0
    sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
    4. Enable packet forwarding
    cat <<EOF >  /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    sysctl --system
    

      

    5. Load the bridge netfilter module (br_netfilter)
    modprobe br_netfilter
    lsmod | grep br_netfilter
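    modprobe alone does not persist across reboots; as an optional extra (not part of the original steps), the module can be loaded automatically at boot via systemd:

    echo br_netfilter > /etc/modules-load.d/br_netfilter.conf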
    6. Check the bridge-utils package (reinstall it if it is missing; see below)
    rpm -qa |grep bridge-utils
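    If the package is missing, it can be installed with:

    yum install -y bridge-utils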
    7. Disable swap
    swapoff -a
    
    vim /etc/fstab
    add a # at the beginning of this line:
    #/dev/mapper/centos-swap swap                    swap    defaults        0 0
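    If you prefer not to edit the file by hand, an equivalent one-liner (assuming the swap entry is the only fstab line containing the word swap) is:

    sed -ri 's/.*swap.*/#&/' /etc/fstab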
    

      

    8. (Optional; for reference only. If you skip it, kube-proxy uses iptables mode.) Prerequisites for enabling ipvs in kube-proxy
    cat > /etc/sysconfig/modules/ipvs.modules <<EOF
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack_ipv4
    EOF
    chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
    
    [Note] The script above creates /etc/sysconfig/modules/ipvs.modules so that the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check that the kernel modules are loaded correctly.
    You also need the ipset package installed on every node (yum install ipset). To inspect the ipvs proxy rules it is convenient to install the ipvsadm tool as well (yum install ipvsadm); see the command below. If these prerequisites are not met,
    kube-proxy will fall back to iptables mode even if its configuration enables ipvs.
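    The two packages mentioned above can be installed with:

    yum install -y ipset ipvsadm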

      

     
    9. Install Docker
    Since version 1.6, Kubernetes uses the CRI (Container Runtime Interface). The default container runtime is still Docker, via the dockershim CRI implementation built into the kubelet.
    Add the Docker yum repository:
    yum install -y yum-utils device-mapper-persistent-data lvm2
    sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    sudo yum makecache fast
    

     Or alternatively:

    yum install -y yum-utils device-mapper-persistent-data lvm2
    yum-config-manager \
        --add-repo \
        https://download.docker.com/linux/centos/docker-ce.repo
    

    [Note]

    Later, when kubeadm init runs, it may report an error similar to this:
    [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
    The supported Docker versions can be found in the Kubernetes changelog.
    Install a specific supported Docker version:
    • kubeadm now properly recognizes Docker 18.09.0 and newer, but still treats 18.06 as the default supported version.
    If an unsupported Docker version is already installed, remove it first: yum remove docker-ce* -y
    List the installable Docker versions:
    yum list docker-ce.x86_64  --showduplicates |sort -r
    docker-ce.x86_64            3:18.09.0-3.el7                    docker-ce-stable
    docker-ce.x86_64            18.06.3.ce-3.el7                   docker-ce-stable
    docker-ce.x86_64            18.06.2.ce-3.el7                   docker-ce-stable
    docker-ce.x86_64            18.06.1.ce-3.el7                   docker-ce-stable
    docker-ce.x86_64            18.06.0.ce-3.el7                   docker-ce-stable
     
    yum install docker-ce-18.06.3.ce-3.el7 -y
     
    systemctl start docker && systemctl enable docker  
     
    Edit or create /etc/docker/daemon.json to configure a registry mirror:
    [root@k8s-master flannel]# cat <<EOF  >/etc/docker/daemon.json
    {
    "registry-mirrors": ["https://72idtxd8.mirror.aliyuncs.com"]
    }
    EOF
     
    systemctl reset-failed docker.service && systemctl restart docker.service
     
    10. Install the Kubernetes packages
    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    If you can reach Google directly, use this repo instead:
    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    exclude=kube*
    EOF
    sudo yum makecache fast
    yum install -y kubelet kubeadm kubectl 
    The official site describes the three tools as follows:
    kubeadm: the command to bootstrap the cluster.
    kubelet: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
    kubectl: the command line util to talk to your cluster.
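    Two notes on the install command, taken from the official kubeadm setup docs rather than the original text: if you used the Google repo above, its exclude=kube* line means yum needs --disableexcludes=kubernetes, and a bare yum install pulls the newest release, so to match the v1.14.1 used in this article you can pin the versions and enable the kubelet service (the init log below also warns if kubelet is not enabled):

    yum install -y kubelet-1.14.1 kubeadm-1.14.1 kubectl-1.14.1 --disableexcludes=kubernetes
    systemctl enable kubelet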
    
    11. [On the master only] Check the cgroup driver:
    docker info | grep -i cgroup
    Cgroup Driver: cgroupfs
    If it is not cgroupfs, change it with the following commands:
    sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    systemctl daemon-reload
    
    systemctl restart kubelet    ---> at this stage this command fails with an error like the following:
    failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file
    "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
    It only succeeds once the cluster has been initialized, so kubeadm init has to be run first.
    %%%%%  Below is the official documentation for reference
    
    
    When using Docker, kubeadm will automatically detect the cgroup driver for the kubelet and set it in the /var/lib/kubelet/kubeadm-flags.env file during runtime.
    If you are using a different CRI, you have to modify the file /etc/default/kubelet with your cgroup-driver value, like so:
    
    KUBELET_EXTRA_ARGS=--cgroup-driver=<value>
    This file will be used by kubeadm init and kubeadm join to source extra user defined arguments for the kubelet.
    Please mind, that you only have to do that if the cgroup driver of your CRI is not cgroupfs, because that is the default value in the kubelet already.
    Restarting the kubelet is required:
    
    
    systemctl daemon-reload
    systemctl restart kubelet
    

    12. [Perform this step on every node, including the master]

    The previous step installed the Kubernetes packages; now we prepare the images Kubernetes needs. [Otherwise, during kubeadm init you may hit errors such as]:

      Failed to pull image "quay.io/coreos/flannel:v0.11.0-amd64"  and so on

    To avoid these problems, prepare the images in advance. Run kubeadm config images list to see which base images are required;

    they are called base images because kubeadm init needs them right away. Other add-ons will need additional images later;

    when a problem appears, check which image is missing.

    [root@k8s-master ~]# kubeadm config images list
    k8s.gcr.io/kube-apiserver:v1.13.4
    k8s.gcr.io/kube-controller-manager:v1.13.4
    k8s.gcr.io/kube-scheduler:v1.13.4
    k8s.gcr.io/kube-proxy:v1.13.4
    k8s.gcr.io/pause:3.1
    k8s.gcr.io/etcd:3.2.24
    k8s.gcr.io/coredns:1.2.6  

    Based on the image names listed above, I wrote a script to handle this:

    Edit the pull script: vim pull_image.sh
    #!/bin/bash
    #### base images #####
    images=(
        kube-apiserver:v1.14.1
        kube-controller-manager:v1.14.1
        kube-scheduler:v1.14.1
        kube-proxy:v1.14.1
        pause:3.1
        etcd:3.3.10
        coredns:1.3.1
        kubernetes-dashboard-amd64:v1.10.1
    )
    for imageName in ${images[@]}; do
        docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
        docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    done
    ### add-on image: flannel network ###
    docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64
    docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
    ### runtime pause image ###
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
    chmod a+x  pull_image.sh
    ./pull_image.sh
    docker images
    

    13. [Perform this step on the master]

     

     ~]# kubelet --version
    Kubernetes v1.14.1  

     Initialize the Kubernetes master:

    kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.166 --kubernetes-version=v1.14.1  > kube_init.log
    
    10.244.0.0/16 is the pod network CIDR required by flannel (the official docs say this exact value must be used)
    [Official docs]
    For flannel to work correctly, you must pass --pod-network-cidr=10.244.0.0/16 to kubeadm init.
    kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.50.10.10 --kubernetes-version=v1.13.4
    192.168.0.166 is the IP address of the master host
    v1.14.1 is the Kubernetes version found above

    Below is my session log, for reference only:

     

    [root@rancher ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.0.166 --kubernetes-version=v1.14.1
    [init] Using Kubernetes version: v1.14.1
    [preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". 
    Please follow the guide at https://kubernetes.io/docs/setup/cri/
    [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Activating the kubelet service
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [rancher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.166]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [rancher localhost] and IPs [192.168.0.166 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [rancher localhost] and IPs [192.168.0.166 127.0.0.1 ::1]
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 15.505486 seconds
    [upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --experimental-upload-certs
    [mark-control-plane] Marking the node rancher as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node rancher as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: dije5w.ipijm49d8c9isxie
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy

    Your Kubernetes control-plane has initialized successfully!

    To start using your cluster, you need to run the following as a regular user:

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config

    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/

    Then you can join any number of worker nodes by running the following on each as root:

    kubeadm join 192.168.0.166:6443 --token dije5w.ipijm49d8c9isxie --discovery-token-ca-cert-hash sha256:c1aaaafc79d85141e60a73c43562e6b06cb8d9cdc24cc0a649c8d6e0f24c5f42

      

    14. [Perform the following operations on the master]

    For the complete steps of this part, go to the WeChat official account 木子李的菜田 and enter: k8s

    To be able to use kubectl normally on the node, the admin config file on the master has to be copied to every node (a rough sketch follows below).

    。。。。。  
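    As a rough sketch of what this step typically involves (the first three commands come straight from the kubeadm init output in step 13 above; the scp target hostname is a placeholder, and the full steps are in the account post):

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    # copy the admin config to each node so kubectl can be used there as well
    scp /etc/kubernetes/admin.conf root@k8s-node1:/etc/kubernetes/admin.conf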

    [Perform the following on the node]

    。。。。。
    

      The next step is a very important one!!!!

    。。。。。。
    

    [Important] If the first kubeadm init failed and you need to re-run the initialization, you must first do the following:

     

    。。。。。。
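    The exact steps are elided here; what is commonly required before re-running the initialization (my assumption, not necessarily the author's full list) is to reset the previous attempt on the affected machine:

    kubeadm reset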

    Check whether Kubernetes is installed correctly:

    [root@master ~]# kubectl get pods --namespace=kube-system
    NAME                             READY   STATUS    RESTARTS   AGE
    coredns-86c58d9df4-srtht         0/1     Pending   0          3h
    coredns-86c58d9df4-tl7ww         0/1     Pending   0          3h
    etcd-master                      1/1     Running   0          179m
    kube-apiserver-master            1/1     Running   0          179m
    kube-controller-manager-master   1/1     Running   0          3h
    kube-proxy-2sdmn                 1/1     Running   1          3h
    kube-proxy-ln5tk                 1/1     Running   1          173m
    kube-scheduler-master            1/1     Running   0          3h
    

     

    Many of the pods are not healthy yet; don't panic at this point, just continue with steps 15 and 16.

     15. [On the master only] Configure CNI ----- this step is closely tied to networking

    vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    Add the following content:
    。。。。。。

      

    16. Install flannel

    #]kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.extensions/kube-flannel-ds-amd64 created
    daemonset.extensions/kube-flannel-ds-arm64 created
    daemonset.extensions/kube-flannel-ds-arm created
    daemonset.extensions/kube-flannel-ds-ppc64le created
    daemonset.extensions/kube-flannel-ds-s390x created
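    Once the flannel pods are up, the nodes should report Ready, which can be checked with:

    kubectl get nodes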
    

      

     17. Finally, check on both the master and the node:

    The deployment is only truly correct when everything shows Running.

    [root@master ~]# kubectl get pods --namespace=kube-system
    NAME                             READY   STATUS    RESTARTS   AGE
    coredns-86c58d9df4-srtht         1/1     Running   0          3h16m
    coredns-86c58d9df4-tl7ww         1/1     Running   0          3h16m
    etcd-master                      1/1     Running   1          3h15m
    kube-apiserver-master            1/1     Running   1          3h15m
    kube-controller-manager-master   1/1     Running   1          3h16m
    kube-flannel-ds-amd64-7mntm      1/1     Running   0          12m
    kube-flannel-ds-amd64-bxzdn      1/1     Running   0          12m
    kube-proxy-2sdmn                 1/1     Running   1          3h16m
    kube-proxy-ln5tk                 1/1     Running   1          3h9m
    kube-scheduler-master            1/1     Running   1          3h16m
    

     End

    Some of the pitfalls in this article have already been filled in by the steps above. If you still run into other problems, please Google them, or

    use kubectl describe to inspect the cause (an example follows below), paying particular attention to whether the cause lies on the master or on a node.
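    For example, to inspect a pod that stays in Pending or keeps restarting (the pod name here is taken from the listing above; substitute your own):

    kubectl describe pod coredns-86c58d9df4-srtht --namespace=kube-system
    kubectl logs coredns-86c58d9df4-srtht --namespace=kube-system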

     

     

      

      
