Kubernetes Deployment Notes

        A note before starting. Deployment environment: three Alibaba Cloud VMs running CentOS 7.7 (CentOS 6.10 does not work — the bridge module fails to load right from the start). Let's begin:

        1. Prepare the environment on every machine: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

    yum update
    modprobe br_netfilter
    cat <<EOF > /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl --system
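After `sysctl --system` it is worth verifying that the bridge-netfilter settings actually took effect — the keys only exist under /proc/sys once `br_netfilter` is loaded. A small check sketch:

```shell
# Verify the bridge-netfilter sysctls. The entries appear under /proc/sys
# only after the br_netfilter module has been loaded.
check_bridge_sysctl() {
    local key
    for key in net/bridge/bridge-nf-call-iptables net/bridge/bridge-nf-call-ip6tables; do
        if [ -r "/proc/sys/$key" ]; then
            echo "$key = $(cat "/proc/sys/$key")"
        else
            echo "$key missing -- load the module with 'modprobe br_netfilter'"
        fi
    done
}
check_bridge_sysctl
```

Both keys should report 1; a "missing" line means the module was not loaded (the symptom seen on CentOS 6.10 above).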

         2. Install Docker: https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker

    # Install Docker CE
    ## Set up the repository
    ### Install required packages.
    yum install -y yum-utils device-mapper-persistent-data lvm2
    
    ### Add Docker repository.
    yum-config-manager --add-repo \
      https://download.docker.com/linux/centos/docker-ce.repo
    
    ## Install Docker CE.
    yum update -y && yum install -y \
      containerd.io-1.2.10 \
      docker-ce-19.03.4 \
      docker-ce-cli-19.03.4
    
    ## Create /etc/docker directory.
    mkdir /etc/docker
    
    # Setup daemon.
    cat > /etc/docker/daemon.json <<EOF
    {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2",
      "storage-opts": [
        "overlay2.override_kernel_check=true"
      ]
    }
    EOF
    
    mkdir -p /etc/systemd/system/docker.service.d
    
    # Restart Docker
    systemctl daemon-reload
    systemctl restart docker
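A malformed daemon.json makes the Docker daemon fail to start after the restart, so it can be worth validating the JSON first. A sketch that checks a copy of the content in a temp file (python3 assumed available):

```shell
# Validate the daemon.json content before writing it to /etc/docker --
# a syntax error there makes "systemctl restart docker" fail.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"]
}
EOF
if python3 -m json.tool "$tmp" > /dev/null 2>&1; then
    verdict="valid"
else
    verdict="INVALID"
fi
echo "daemon.json content: $verdict"
rm -f "$tmp"
```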

     3. Install kubeadm and related components

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF

    # Set SELinux in permissive mode (effectively disabling it)
    setenforce 0
    sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

    yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

    systemctl enable --now kubelet

     4. Initialize kubeadm on the control-plane node. With the network plugin decided (Calico here, hence the pod CIDR), the init command is:

    kubeadm init --pod-network-cidr=192.168.0.0/16

    Copy the kubeconfig:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Or:

    export KUBECONFIG=/etc/kubernetes/admin.conf

    If pulling the images fails, pull them from a domestic mirror and retag them: https://www.cnblogs.com/pu20065226/p/10612607.html

    5. Install the network plugin

    kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml

     6. Remove the control-plane taint so pods can be scheduled on the master

    kubectl taint nodes --all node-role.kubernetes.io/master-

    7. Join the other nodes. The exact command is printed in the output of kubeadm init; it is best to wait until the network plugin is fully up before doing this.

    kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
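If the init output is lost, the `<hash>` part can be recomputed: it is the SHA-256 digest of the cluster CA's public key. On the control plane you would feed /etc/kubernetes/pki/ca.crt into the pipeline below; here a throwaway self-signed cert stands in for the real CA so the sketch is self-contained:

```shell
# Recompute the --discovery-token-ca-cert-hash value. On a real control
# plane, replace $workdir/ca.crt with /etc/kubernetes/pki/ca.crt.
workdir=$(mktemp -d)
# Throwaway CA cert, standing in for the cluster CA:
openssl req -x509 -nodes -newkey rsa:2048 -days 1 \
  -keyout "$workdir/ca.key" -out "$workdir/ca.crt" \
  -subj "/CN=demo-ca" 2>/dev/null
# Extract the public key, DER-encode it, and hash it:
hash=$(openssl x509 -pubkey -noout -in "$workdir/ca.crt" \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 \
  | awk '{print $NF}')
echo "sha256:$hash"
rm -rf "$workdir"
```

The token itself can be regenerated with `kubeadm token create --print-join-command` (see step 9 below).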

      7.1 Troubleshooting: after joining, the node reported the following error:

    network plugin is not ready: cni config uninitialized

     There are two likely causes: (1) the network plugin is not installed — but per the steps above it was already installed successfully on the control-plane node; (2) the network-related pods on this machine have not started yet. Here it was the second case, and the most likely reason the pods had not started is that the images had not been downloaded. Download them using the mirror method mentioned earlier and the problem resolves itself.


    HA Cluster

    4. The procedure above, from step 4 onward, builds a single-control-plane cluster. The sections below build a highly available cluster with stacked etcd.

    Deploy and configure haproxy: https://520mwx.com/view/51242

    yum install haproxy.x86_64
    haproxy -f /etc/haproxy/haproxy.cfg
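For reference, a minimal haproxy.cfg sketch that load-balances the apiservers in TCP mode. The bind port (8443) and the second and third node IPs are assumptions — only 172.26.177.162 appears in these notes; substitute your own addresses:

```
# TCP passthrough to the kube-apiservers; haproxy listens on 8443
frontend kube-apiserver
    bind *:8443
    mode tcp
    option tcplog
    default_backend kube-apiserver

backend kube-apiserver
    mode tcp
    option tcp-check
    balance roundrobin
    server master1 172.26.177.162:6443 check
    server master2 172.26.177.163:6443 check
    server master3 172.26.177.164:6443 check
```

The frontend's address and port are what goes into `--control-plane-endpoint` in the next step.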

    5. Initialize the first control-plane node

    sudo kubeadm init --control-plane-endpoint "172.26.177.162:haport"  --pod-network-cidr=192.168.0.0/16 --upload-certs

    Copy the kubeconfig:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Or:

    export KUBECONFIG=/etc/kubernetes/admin.conf

     6. Install the network plugin

    kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml

    7. Join the other control-plane nodes. Note that this command adds not only --control-plane but also the --certificate-key option; copy it from the init output.

    sudo kubeadm join 192.168.0.200:6443 --token 9vr73a.a8uxyaju799qwdjv --discovery-token-ca-cert-hash sha256:7c2e69131a36ae2a042a339b33381c6d0d43887e2de83720eff5359e26aec866 --control-plane --certificate-key f8902e114ef118304e561c3ecd4d0b543adc226b7a07f675f56564185ffe0c07

     8. Script to download the Kubernetes docker images from a domestic mirror and retag them as k8s.gcr.io:

    #!/bin/bash
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.4;
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.17.4 k8s.gcr.io/kube-apiserver:v1.17.4;
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.4;
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.17.4 k8s.gcr.io/kube-controller-manager:v1.17.4;
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.4;
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.17.4 k8s.gcr.io/kube-scheduler:v1.17.4;
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.4;
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.17.4 k8s.gcr.io/kube-proxy:v1.17.4;
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1;
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1;
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0;
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0;
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5;
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.5 k8s.gcr.io/coredns:1.6.5;

     9. The original token is valid for 24 hours; after it expires, create a new one:

    kubeadm token create --print-join-command

    Joining a worker node with this new token produced the following error:

    [kubelet-start] Checking for an existing Node in the cluster with name "iz8vbcrus31oj8ui5lmr2ez" and status "Ready"
    nodes "iz8vbcrus31oj8ui5lmr2ez" is forbidden: User "system:bootstrap:pey0jv" cannot get resource "nodes" in API group "" at the cluster scope
    cannot get Node "iz8vbcrus31oj8ui5lmr2ez"
    k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join.runKubeletStartJoinPhase
            /workspace/anago-v1.18.0-rc.1.21+8be33caaf953ac/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/join/kubelet.go:148

    Fix (note: the binding must target the user; granting the permission to the group system:bootstrap has no effect):

    kubectl create clusterrolebinding kubeadm-user-node-bootstrap --clusterrole=system:node --user system:bootstrap:pey0jv

     10. Documentation for deploying ingress-nginx. One caveat first: if the underlying IaaS layer provides no load-balancer service, then LoadBalancer services in Kubernetes — and any Ingress built on top of them — will not work.

    https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/

     11. Deploy Prometheus

    https://github.com/coreos/prometheus-operator

    https://github.com/helm/charts/tree/master/stable/prometheus-operator

    12. Generate an HTTPS TLS certificate (https://blog.csdn.net/m0_37518406/article/details/79380534)

    openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./tls.key -out ./tls.crt -subj "/CN=139.10.2.123"
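The generated pair is typically loaded into the cluster as a TLS secret for use by an Ingress. A sketch — the scratch directory keeps the keys out of the working tree, and the secret name `my-tls-secret` is a placeholder:

```shell
# Generate the self-signed key/cert pair, then (on a real cluster)
# load it as a TLS secret for the Ingress to reference.
certdir=$(mktemp -d)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout "$certdir/tls.key" -out "$certdir/tls.crt" \
  -subj "/CN=139.10.2.123" 2>/dev/null
# Confirm the certificate subject:
openssl x509 -in "$certdir/tls.crt" -noout -subject
# kubectl create secret tls my-tls-secret \
#   --key "$certdir/tls.key" --cert "$certdir/tls.crt"
```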

     13. If you need to change the port configuration of a single-port service, it is best to delete the service and recreate it. Applying the change as a YAML via `kubectl apply` fails with an error saying there are two ports even though your manifest defines only one — Kubernetes keeps the old port around when it merges the configurations.

Original post: https://www.cnblogs.com/dhcn/p/12551477.html