
    Step by Step To Create a K8S Cluster

    References:

    https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
    https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
    https://kubernetes.io/zh/docs/setup/production-environment/container-runtimes/
    https://github.com/maguowei/gotok8s
    https://www.cnblogs.com/xuxinkun/p/11025020.html

    1. Create a virtual machine from the image CentOS-8.3.2011-x86_64-minimal.iso
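    kubeadm's preflight checks (shown in step 6 below) require at least 2 CPUs and roughly 1700 MB of RAM, so give the VM at least 2 vCPUs and 2 GB of memory. A quick sanity check after first boot:

    # Verify the VM meets kubeadm's minimum requirements
    nproc     # expect 2 or more
    free -m   # "total" memory should be well above 1700 MB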

    2. Install a container runtime (Docker)

    refer: https://docs.docker.com/engine/install/centos/

    # Uninstall old versions  
    sudo yum remove docker \
                    docker-client \
                    docker-client-latest \
                    docker-common \
                    docker-latest \
                    docker-latest-logrotate \
                    docker-logrotate \
                    docker-engine
      
    # Install using the repository  
      
    #  1. Set up the repository  
    sudo yum install -y yum-utils  
    sudo yum-config-manager \
        --add-repo \
        https://download.docker.com/linux/centos/docker-ce.repo
    
    # Alternative repo mirror for users in China:
    ## yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
      
    #  2. Install Docker Engine  
    sudo yum install docker-ce docker-ce-cli containerd.io  
      
    #  3. Configure the Docker daemon, in particular to use systemd to manage the containers' cgroups
      
    sudo mkdir -p /etc/docker
    cat <<EOF | sudo tee /etc/docker/daemon.json  
    {  
      "exec-opts": ["native.cgroupdriver=systemd"],  
      "log-driver": "json-file",  
      "log-opts": {  
        "max-size": "100m"  
      },  
      "storage-driver": "overlay2",
      "registry-mirrors": ["http://hub-mirror.c.163.com"]
    }  
    EOF
      
    #  4. Restart Docker and enable it to start on boot
    sudo systemctl enable docker  
    sudo systemctl daemon-reload  
    sudo systemctl restart docker  
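    To confirm the daemon actually picked up the settings above, you can query docker info:

    # Expect "systemd" here; "cgroupfs" means daemon.json was not applied
    sudo docker info --format '{{.CgroupDriver}}'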
    

    3. Letting iptables see bridged traffic

    cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf  
    br_netfilter  
    EOF
      
    cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf  
    net.bridge.bridge-nf-call-ip6tables = 1  
    net.bridge.bridge-nf-call-iptables = 1  
    EOF  
    sudo sysctl --system  
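    The modules-load.d file is only read at boot, so load the module by hand once, then verify that the sysctls took effect:

    # Load br_netfilter now (modules-load.d only applies on the next boot)
    sudo modprobe br_netfilter
    lsmod | grep br_netfilter
    # Both values should print as 1
    sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables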
    

    4. Disable Firewalld (rather than opening the required ports one by one)

    sudo systemctl disable firewalld
    sudo systemctl stop firewalld
      
    

    Disable swap (refer: https://www.cnblogs.com/larrypeng/p/11950498.html); kubeadm refuses to run with swap on, as the preflight error in step 6 shows.
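    The linked post boils down to turning swap off for the current boot and commenting out the swap entry in /etc/fstab so it stays off; a minimal sketch, assuming a standard fstab swap line:

    # Turn swap off immediately
    sudo swapoff -a
    # Keep it off across reboots by commenting out the swap line in /etc/fstab
    sudo sed -i '/ swap / s/^/#/' /etc/fstab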

    5. Installing kubeadm, kubelet and kubectl

    cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo  
    [kubernetes]  
    name=Kubernetes  
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
    enabled=1  
    gpgcheck=1  
    repo_gpgcheck=1  
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg  
    exclude=kubelet kubeadm kubectl  
    EOF  
    
    # Alternative repo for CentOS/RHEL/Fedora users in China
    cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    exclude=kubelet kubeadm kubectl  
    EOF
      
    # Set SELinux to permissive mode (effectively disabling it) so that containers can access the host filesystem, which is needed by pod networks, for example
    sudo setenforce 0
    sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config  
      
    sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes  
      
    sudo systemctl enable --now kubelet  
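    A quick verification that the three tools installed; note the kubelet will crash-loop until kubeadm init runs, which is expected:

    kubeadm version
    kubectl version --client
    systemctl status kubelet   # "activating (auto-restart)" is normal before init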
      
    

    6. Configuring the kubelet cgroup driver

    cat <<EOF | sudo tee ./kubeadm-config.yaml  
    # kubeadm-config.yaml  
    kind: ClusterConfiguration  
    apiVersion: kubeadm.k8s.io/v1beta2  
    kubernetesVersion: v1.21.1
    imageRepository: registry.aliyuncs.com/google_containers  
    networking: 
      podSubnet: "10.100.0.0/24"  
    ---  
    kind: KubeletConfiguration  
    apiVersion: kubelet.config.k8s.io/v1beta1  
    cgroupDriver: systemd  
    EOF  
    
    # The Aliyun mirror does not serve the nested "coredns/coredns" image path
    # that kubeadm v1.21 expects, so pull the image from Docker Hub and re-tag it:
    docker pull coredns/coredns:1.8.0
    docker tag coredns/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
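    With the CoreDNS image in place, the remaining control-plane images can be pulled ahead of time (the preflight output below suggests the same), which makes the actual init faster:

    # Pre-pull all control-plane images referenced by the config
    kubeadm config images pull --config kubeadm-config.yaml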
      
    kubeadm init --config kubeadm-config.yaml

    On an undersized VM, this first attempt fails the preflight checks:
    [init] Using Kubernetes version: v1.21.0
    [preflight] Running pre-flight checks
            [WARNING FileExisting-tc]: tc not found in system path `dnf install -y iproute-tc` [CentOS-8]
    error execution phase preflight: [preflight] Some fatal errors occurred:
            [ERROR NumCPU]: the number of available CPUs 1 is less than the required 2
            [ERROR Mem]: the system RAM (744 MB) is less than the minimum 1700 MB
            [ERROR Swap]: running with swap on is not supported. Please disable swap `swapoff -a`
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`    
    To see the stack trace of this error execute with --v=5 or higher  
    

    After fixing the preflight errors (2 vCPUs, at least 2 GB of RAM, iproute-tc installed, swap disabled), run it again:

    kubeadm init --config kubeadm-config.yaml  
    
    [init] Using Kubernetes version: v1.21.1
    [preflight] Running pre-flight checks
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost.localdomain] and IPs [10.96.0.1 192.168.0.2]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [192.168.0.2 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [localhost localhost.localdomain] and IPs [192.168.0.2 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Starting the kubelet
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 14.006711 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
    [mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: qrjh2f.agl6zjgrhky7x105
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
      export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.0.2:6443 --token qrjh2f.agl6zjgrhky7x105 \
            --discovery-token-ca-cert-hash sha256:73e9c5cecfc31872a9c91c404ad92843f30c775d59651b81735fd76bb21eaf8d
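    As the output says, the node stays NotReady until a pod network add-on is deployed. As a sketch, Flannel is one common choice; the manifest URL below is an assumption that may have moved, and Flannel defaults to the 10.244.0.0/16 pod CIDR, so either set podSubnet to that in kubeadm-config.yaml or edit the network in the downloaded manifest to match:

    # Example only: deploy Flannel as the pod network (check the project's README for the current URL)
    kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

    # The control-plane node should move from NotReady to Ready once the network is up
    kubectl get nodes
    kubectl get pods -n kube-system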
    
    

    After a reboot the VM's IP address changed, which breaks the cluster, since the API server certificate and the kubeconfig files are bound to the old address. Configure a static IP instead:
    Set a CentOS 8 static IP: https://www.cnblogs.com/lidezhen/p/13520728.html
    Hyper-V virtual machine static IP: https://www.cnblogs.com/kasnti/p/11727755.html
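    On CentOS 8 the usual tool is nmcli; a minimal sketch, assuming the connection is named eth0 and that the gateway and addresses below match your network:

    # Assumptions: connection name eth0, gateway 192.168.0.1 -- adjust to your environment
    sudo nmcli connection modify eth0 ipv4.method manual \
        ipv4.addresses 192.168.0.2/24 ipv4.gateway 192.168.0.1 ipv4.dns 8.8.8.8
    sudo nmcli connection up eth0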
