  • Installing K8S in China without a VPN, Part 2: Installing Kubernetes

    Installing K8S in China without a VPN, Part 1: Installing Docker
    Installing K8S in China without a VPN, Part 2: Installing Kubernetes
    Installing K8S in China without a VPN, Part 3: Installing kubernetes-dashboard with Helm
    Installing K8S in China without a VPN, Part 4: Problems encountered during installation and their solutions

    2 Installing kubelet

    2.1 Environment preparation

    # Disable SELinux
    $ setenforce 0
    $ sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
    
    # Disable the firewall
    $ systemctl stop firewalld
    $ systemctl disable --now firewalld
    
    # Configure iptables (omitted)
    
    # Install kubelet, kubeadm, and kubectl
    # (requires the yum repository configured in section 2.2 below)
    $ yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
    
    # Enable and start kubelet at boot
    $ systemctl enable --now kubelet
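    
    To confirm the installation, a quick check of the versions and the kubelet service. Note that until kubeadm init (or kubeadm join) runs, kubelet sits in a restart loop waiting for instructions, so an "activating" status here is expected:
    
    # Print the installed versions and the service status
    $ kubeadm version -o short
    $ kubectl version --client --short
    $ systemctl status kubelet --no-pager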
    
    

    2.2 Configuring a domestic yum mirror

    Note: kubelet, kubeadm, and kubectl must be installed on the master and on every node.
    The official repository is packages.cloud.google.com, which is unreachable from mainland China, so we use the Aliyun mirror instead:

    $ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
           http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
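    
    With the repo file in place, you can refresh the cache and list the available package versions as a sanity check (--showduplicates shows every packaged release):
    
    $ yum makecache fast
    $ yum list --showduplicates kubeadm --disableexcludes=kubernetes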
    

    2.3 Important settings

    Confirm /etc/hosts (kub1 and kub2 are names we chose ourselves; node1, node2, and the like would work just as well. Do not remove the localhost entries.)

    $ cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    
    192.168.15.174    kub1
    192.168.15.175    kub2
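    
    Each machine's hostname should match its entry above; if it does not, set it (the hostnames here are the ones assumed in the hosts file; adjust to your own):
    
    # On 192.168.15.174
    $ hostnamectl set-hostname kub1
    # On 192.168.15.175
    $ hostnamectl set-hostname kub2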
    

    Create the /etc/sysctl.d/k8s.conf file

    $ cat <<EOF >  /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    

    Install the required packages

    $ yum install -y bridge-utils.x86_64
    # ipset and ipvsadm make it easier to inspect IPVS state later
    $ yum install -y ipset ipvsadm
    

    Apply the configuration

    $ modprobe br_netfilter
    $ sysctl -p /etc/sysctl.d/k8s.conf
    $ sysctl --system  
    

    Disable swap and remove it from the boot-time mounts

    $ swapoff -a && sysctl -w vm.swappiness=0
    $ sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
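    
    A quick check that swap is really off (the Swap line should read all zeros):
    
    $ free -h | grep -i swap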
    

    Load the required kernel modules

    $ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack_ipv4
    EOF
    $ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
    

    After modprobe, run lsmod to verify that the modules are loaded.

    2.4 Fetching the images

    List the required images

    $ kubeadm config images list
    W0809 11:32:51.518614   18214 version.go:98] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
    W0809 11:32:51.519080   18214 version.go:99] falling back to the local client version: v1.15.2
    k8s.gcr.io/kube-apiserver:v1.15.2
    k8s.gcr.io/kube-controller-manager:v1.15.2
    k8s.gcr.io/kube-scheduler:v1.15.2
    k8s.gcr.io/kube-proxy:v1.15.2
    k8s.gcr.io/pause:3.1
    k8s.gcr.io/etcd:3.3.10
    k8s.gcr.io/coredns:1.3.1
    

    Note: the output above warns that dl.k8s.io cannot be reached.
    Visiting https://storage.googleapis.com/kubernetes-release/release/stable-1.txt through a VPN confirms the stable version is the same: v1.15.2

    ### Pull the images from the gcr.azk8s.cn mirror (Azure China; reachable from mainland China, and reasonably fast)
    docker pull gcr.azk8s.cn/google_containers/kube-apiserver:v1.15.2
    docker pull gcr.azk8s.cn/google_containers/kube-controller-manager:v1.15.2
    docker pull gcr.azk8s.cn/google_containers/kube-scheduler:v1.15.2
    docker pull gcr.azk8s.cn/google_containers/kube-proxy:v1.15.2
    docker pull gcr.azk8s.cn/google_containers/pause:3.1
    docker pull gcr.azk8s.cn/google_containers/etcd:3.3.10
    docker pull gcr.azk8s.cn/google_containers/coredns:1.3.1
    
    # Re-tag the images with the names kubeadm expects
    docker tag gcr.azk8s.cn/google_containers/kube-proxy:v1.15.2 k8s.gcr.io/kube-proxy:v1.15.2
    docker tag gcr.azk8s.cn/google_containers/kube-controller-manager:v1.15.2  k8s.gcr.io/kube-controller-manager:v1.15.2
    docker tag gcr.azk8s.cn/google_containers/kube-scheduler:v1.15.2 k8s.gcr.io/kube-scheduler:v1.15.2
    docker tag gcr.azk8s.cn/google_containers/kube-apiserver:v1.15.2  k8s.gcr.io/kube-apiserver:v1.15.2
    docker tag gcr.azk8s.cn/google_containers/coredns:1.3.1  k8s.gcr.io/coredns:1.3.1
    docker tag gcr.azk8s.cn/google_containers/etcd:3.3.10  k8s.gcr.io/etcd:3.3.10
    docker tag gcr.azk8s.cn/google_containers/pause:3.1  k8s.gcr.io/pause:3.1
    
    # Remove the mirror-tagged images, which are no longer needed
    docker rmi gcr.azk8s.cn/google_containers/kube-apiserver:v1.15.2
    docker rmi gcr.azk8s.cn/google_containers/kube-controller-manager:v1.15.2
    docker rmi gcr.azk8s.cn/google_containers/kube-scheduler:v1.15.2
    docker rmi gcr.azk8s.cn/google_containers/kube-proxy:v1.15.2
    docker rmi gcr.azk8s.cn/google_containers/pause:3.1
    docker rmi gcr.azk8s.cn/google_containers/etcd:3.3.10
    docker rmi gcr.azk8s.cn/google_containers/coredns:1.3.1
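    
    The pull/tag/remove sequence above can also be written as a short loop (a minimal sketch assuming the same mirror and image list):
    
    MIRROR=gcr.azk8s.cn/google_containers
    for img in kube-apiserver:v1.15.2 kube-controller-manager:v1.15.2 \
               kube-scheduler:v1.15.2 kube-proxy:v1.15.2 \
               pause:3.1 etcd:3.3.10 coredns:1.3.1; do
        docker pull $MIRROR/$img           # fetch from the domestic mirror
        docker tag  $MIRROR/$img k8s.gcr.io/$img   # rename to what kubeadm expects
        docker rmi  $MIRROR/$img           # drop the mirror tag
    done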
    

    2.5 Initializing the cluster with kubeadm init

    View the cluster's default configuration

    $ kubeadm config print init-defaults
    (output omitted)
    

    A cluster initialized with kubeadm's default configuration taints the master node with node-role.kubernetes.io/master:NoSchedule, which prevents the master from being scheduled to run workloads. Since this test environment has only two nodes, we change the taint to node-role.kubernetes.io/master:PreferNoSchedule.

    Based on the output above, write the YAML file

    $ vi kubeadm.yaml
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: 192.168.15.174
      bindPort: 6443
    nodeRegistration:
      taints:
      - effect: PreferNoSchedule
        key: node-role.kubernetes.io/master
    ---
    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: v1.15.2 
    networking:
      podSubnet: 10.244.0.0/16
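    
    Before running init, it is worth confirming that the re-tagged images from section 2.4 are present locally, so kubeadm does not try to pull from the unreachable k8s.gcr.io:
    
    $ docker images | grep k8s.gcr.io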
    

    Initialize the cluster

    $ kubeadm init --config kubeadm.yaml --ignore-preflight-errors=Swap
    [init] Using Kubernetes version: v1.15.2
    [preflight] Running pre-flight checks
    	[WARNING Hostname]: hostname "kub1" could not be reached
    	[WARNING Hostname]: hostname "kub1": lookup kub1 on 114.114.114.114:53: no such host
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Activating the kubelet service
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [kub1 localhost] and IPs [192.168.15.174 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [kub1 localhost] and IPs [192.168.15.174 127.0.0.1 ::1]
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [kub1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.15.174]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 39.505847 seconds
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
    [kubelet-check] Initial timeout of 40s passed.
    [upload-certs] Skipping phase. Please see --upload-certs
    [mark-control-plane] Marking the node kub1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node kub1 as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
    [bootstrap-token] Using token: xzmioa.hnr8r2qrghsr9xje
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join 192.168.15.174:6443 --token xzmioa.hnr8r2qrghsr9xje \
        --discovery-token-ca-cert-hash sha256:779d4c9330409f67b584f36baf2e882c42ac9d6c9e2c3765904c341fb3b89d10
    
    

    Set up kubectl as the output instructs

    $ mkdir -p $HOME/.kube
    $ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    $ sudo chown $(id -u):$(id -g) $HOME/.kube/config
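    
    At this point kubectl can reach the cluster, but the master will typically report NotReady until the pod network from section 2.6 is installed; that is expected here:
    
    $ kubectl get node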
    

    Check the cluster status and confirm that every component is Healthy:

    $ kubectl get cs
    NAME                 STATUS    MESSAGE             ERROR
    scheduler            Healthy   ok                  
    controller-manager   Healthy   ok                  
    etcd-0               Healthy   {"health":"true"}   
    

    If kubeadm init fails, run the following commands to reset before trying again

    $ kubeadm reset
    $ ifconfig cni0 down
    $ ip link delete cni0
    $ ifconfig flannel.1 down
    $ ip link delete flannel.1
    $ rm -rf /var/lib/cni/
    

    2.6 Installing the Pod Network

    $ curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    $ kubectl apply -f  kube-flannel.yml
    clusterrole.rbac.authorization.k8s.io/flannel created
    clusterrolebinding.rbac.authorization.k8s.io/flannel created
    serviceaccount/flannel created
    configmap/kube-flannel-cfg created
    daemonset.extensions/kube-flannel-ds-amd64 created
    daemonset.extensions/kube-flannel-ds-arm64 created
    daemonset.extensions/kube-flannel-ds-arm created
    daemonset.extensions/kube-flannel-ds-ppc64le created
    daemonset.extensions/kube-flannel-ds-s390x created
    

    If a node has more than one network interface, edit kube-flannel.yml and add --iface to tell flannel which interface to use

    ......
    containers:
          - name: kube-flannel
            image: quay.io/coreos/flannel:v0.11.0-amd64
            command:
            - /opt/bin/flanneld
            args:
            - --ip-masq
            - --kube-subnet-mgr
            - --iface=eth1
    ......
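    
    Re-apply the manifest after editing so the change takes effect:
    
    $ kubectl apply -f kube-flannel.yml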
    

    Check the status (all pods must reach Running)

    $ kubectl get pod -n kube-system
    NAME                            READY   STATUS    RESTARTS   AGE
    coredns-5c98db65d4-dr8lf        1/1     Running   0          52m
    coredns-5c98db65d4-lp8dg        1/1     Running   0          52m
    etcd-node1                      1/1     Running   0          51m
    kube-apiserver-node1            1/1     Running   0          51m
    kube-controller-manager-node1   1/1     Running   0          51m
    kube-flannel-ds-amd64-mm296     1/1     Running   0          44s
    kube-proxy-kchkf                1/1     Running   0          52m
    kube-scheduler-node1            1/1     Running   0          51m
    

    2.7 Testing cluster DNS

    Once coredns is running normally, start a test pod

    $ kubectl run curl --image=radial/busyboxplus:curl -it
    kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
    If you don't see a command prompt, try pressing enter.
    [ root@curl-5cc7b478b6-r997p:/ ]$ nslookup kubernetes.default
    Server:    10.96.0.10
    Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
    
    Name:      kubernetes.default
    Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
    

    Note that nslookup kubernetes.default is executed inside the pod.
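    
    Exit the shell with Ctrl+D when done. Because this version of kubectl run creates a Deployment (as the warning above notes), clean up the test with:
    
    $ kubectl delete deployment curl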

    2.8 Adding worker nodes to the cluster

    On each additional node, run kubeadm join (this is exactly the command that kubeadm init printed on the master)

    $ kubeadm join 192.168.15.174:6443 --token xzmioa.hnr8r2qrghsr9xje \
        --discovery-token-ca-cert-hash sha256:779d4c9330409f67b584f36baf2e882c42ac9d6c9e2c3765904c341fb3b89d10
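    
    If the token has expired (bootstrap tokens are valid for 24 hours by default), generate a fresh join command on the master:
    
    $ kubeadm token create --print-join-command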
    

    List all the nodes in the cluster:

    $ kubectl get node
    NAME   STATUS   ROLES         AGE     VERSION
    kub1   Ready    master        5h51m   v1.15.2
    kub2   Ready    <none>        5h44m   v1.15.2
    

    2.9 Enabling IPVS mode in kube-proxy

    In the kube-proxy ConfigMap, change mode: "" to mode: "ipvs"

    $ kubectl edit cm kube-proxy -n kube-system
    .......
        ipvs:
          excludeCIDRs: null
          minSyncPeriod: 0s
          scheduler: ""
          strictARP: false
          syncPeriod: 30s
        kind: KubeProxyConfiguration
        metricsBindAddress: 127.0.0.1:10249
        mode: "ipvs"
        nodePortAddresses: null
        oomScoreAdj: -999
    ......
    

    Restart the kube-proxy pods so they pick up the new configuration

    $ kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
    

    Verify that IPVS mode took effect

    $ kubectl get pod -n kube-system | grep kube-proxy
    kube-proxy-7fsrg                1/1     Running   0          3s
    kube-proxy-k8vhm                1/1     Running   0          9s
    
    $ kubectl logs kube-proxy-7fsrg  -n kube-system
    I0703 04:42:33.308289       1 server_others.go:170] Using ipvs Proxier.
    ....
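    
    You can also inspect the IPVS virtual servers directly with the ipvsadm tool installed in section 2.3 (the cluster's Service IPs should appear in the listing):
    
    $ ipvsadm -Ln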
    

    If the switch did not take effect, kubectl logs will show "Using iptables Proxier" instead. There are two kube-proxy pods here (one per node); if only one of them switched, the most likely cause is that the /etc/sysconfig/modules/ipvs.modules step was not performed on the other node.
