Using the latest kubeadm to try out Kubernetes 1.12.0 on CentOS 7

1. Prepare the environment

CentOS 7, docker-ce 18.06.1.ce, kubeadm, kubelet, kubectl

2. Install

Install everything with yum. First prepare the repo files.

Docker:

    [docker-ce-stable]
    name=Docker CE Stable - $basearch
    baseurl=https://download.docker.com/linux/centos/7/$basearch/stable
    enabled=1
    gpgcheck=1
    gpgkey=https://download.docker.com/linux/centos/gpg
    
    [docker-ce-stable-debuginfo]
    name=Docker CE Stable - Debuginfo $basearch
    baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/stable
    enabled=0
    gpgcheck=1
    gpgkey=https://download.docker.com/linux/centos/gpg
    
    [docker-ce-stable-source]
    name=Docker CE Stable - Sources
    baseurl=https://download.docker.com/linux/centos/7/source/stable
    enabled=1
    gpgcheck=1
    gpgkey=https://download.docker.com/linux/centos/gpg

kubeadm, kubelet, kubectl:

    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

Then install the packages:

yum -y install docker-ce-18.06.1.ce-3.el7.x86_64
yum -y install kubelet-1.12.2-0.x86_64 kubectl-1.12.2-0.x86_64 kubeadm-1.12.2-0.x86_64
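
With the packages installed, enable and start Docker before configuring it (kubelet is enabled later, in step 4):

systemctl enable docker
systemctl start docker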

3. Configure Docker

# vim /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --graph=/data/docker --storage-driver=overlay2

# mkdir -p /etc/systemd/system/docker.service.d
# vim /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://10.10.23.74:8118" "NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.10.29.43,10.10.25.49,172.11.0.0,172.10.0.0,172.11.0.0/16,172.10.0.0/16,10.,172.,.evo.get.com,.kube.hpp.com,charts.gitlab.io,.mirror.ucloud.cn"

# cat /etc/systemd/system/docker.service.d/https-proxy.conf
[Service]
Environment="HTTPS_PROXY=http://10.10.23.74:8118" "NO_PROXY=localhost,127.0.0.1,0.0.0.0,10.10.29.43,10.10.25.49,172.11.0.0,172.10.0.0,172.11.0.0/16,172.10.0.0/16,10.,172.,.evo.get.com,.kube.hpp.com,charts.gitlab.io,.mirror.ucloud.cn"

For installing shadowsocks, see my other post: https://www.cnblogs.com/cuishuai/p/8463458.html

4. Initialize

Create the file /etc/sysctl.d/k8s.conf:

    cat /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    vm.swappiness=0

    sysctl -p /etc/sysctl.d/k8s.conf
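
If sysctl -p complains that the bridge-nf keys do not exist, the br_netfilter module is not loaded yet; loading it (and persisting it across reboots) fixes that:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # load at boot
sysctl -p /etc/sysctl.d/k8s.conf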
    
    
    swapoff -a 
    systemctl enable kubelet
    systemctl start kubelet
    kubeadm init --kubernetes-version=v1.12.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.18.1.12

--apiserver-advertise-address sets the address the master's API server advertises; it defaults to the host's primary IP.

After init completes, configure kubectl for your user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
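
The master now shows up but stays NotReady until a pod network is installed; the output looks roughly like:

kubectl get nodes
# NAME   STATUS     ROLES    AGE   VERSION
# ku     NotReady   master   1m    v1.12.2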

By default the master node carries the taint node-role.kubernetes.io/master:NoSchedule. Remove it temporarily, both for testing and so pods can be scheduled there when deploying flannel later:

    kubectl taint nodes ku node-role.kubernetes.io/master-

ku is the name of my master node; you can also pass --all to untaint every node.
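Once testing is done, the taint can be restored with the same command in its additive form (empty value, NoSchedule effect):

kubectl taint nodes ku node-role.kubernetes.io/master=:NoSchedule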

Install flannel

    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    
vim kube-flannel.yml
   args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth0   # bind flannel to a specific host interface (useful on multi-NIC hosts)
    
    kubectl apply -f kube-flannel.yml
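
Once applied, the flannel pods should appear in kube-system (the upstream manifest labels them app=flannel):

kubectl -n kube-system get pods -l app=flannel -o wide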

Because the master's taint was already removed above, flannel can be deployed directly. If you skipped that step, edit flannel's YAML file (the tolerations below) instead; otherwise the deployment will fail to schedule.

Modify the DaemonSet spec, either with a blanket toleration for NoSchedule taints:

spec:
  hostNetwork: true
  nodeSelector:
    beta.kubernetes.io/arch: amd64
  tolerations:
  - operator: Exists
    effect: NoSchedule

or with the relevant taints listed explicitly:

tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node.kubernetes.io/not-ready
    operator: Exists
    effect: NoSchedule

Test DNS

    kubectl run curl --image=radial/busyboxplus:curl -it
    [ root@curl-5cc7b478b6-6cfqr:/ ]$ nslookup kubernetes.default
    Server:    10.96.0.10
    Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
    
    Name:      kubernetes.default
    Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

Enter the pod:

    kubectl exec -it curl-5cc7b478b6-6cfqr -n default -- /bin/sh

Add nodes to the cluster

Join nodes with the command printed at the end of kubeadm init; each node needs kubelet and kubeadm installed first.

Tokens created by kubeadm expire after 24 hours by default, so adding a node later fails with an unauthorized error. There are two ways around this:

1. Generate a token that never expires (not recommended; tokens on a production cluster should still be rotated regularly):

    kubeadm  token create  --ttl 0 
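
kubeadm token list should then show the token with an unlimited TTL, roughly:

kubeadm token list
# TOKEN                     TTL         EXPIRES   ...
# abcdef.0123456789abcdef   <forever>   <never>   ...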

2. Rotate on a schedule: pick a reasonable TTL and refresh the cluster's token periodically.

# cat uptk.sh
#!/bin/bash
# Delete every existing bootstrap token, then issue a fresh one valid for 72 hours.
tokens=$(kubeadm token list | awk '{print $1}' | grep -v TOKEN)
for t in $tokens
do
    kubeadm token delete "$t"
done
kubeadm token create --ttl 72h

Add a crontab entry to run it every two days; adjust this to your own needs. My tokens live 72h (three days), so a two-day cycle always leaves a valid token in place.

crontab -e
0 0 */2 * * /data/scripts/uptk.sh >/dev/null 2>&1

Gather the information kubeadm join needs:

1) token:

    kubeadm token list | grep authentication,signing | awk '{print $1}'

2) discovery-token-ca-cert-hash:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

3) Join the node:

    kubeadm join --token c04f89.b781cdb55d83c1ef 10.10.3.4:6443 --discovery-token-ca-cert-hash sha256:986e83a9cb948368ad0552b95232e31d3b76e2476b595bd1d905d5242ace29af
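
Alternatively, kubeadm of this era can print a ready-made join command (token plus CA hash) in one step:

kubeadm token create --print-join-command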

Remove a node from the cluster

On the master node:

    kubectl drain node2 --delete-local-data --force --ignore-daemonsets
    kubectl delete node node2

node2 is the name of the node being removed.

Then, on node2 itself:

    kubeadm reset
    ifconfig cni0 down
    ip link delete cni0
    ifconfig flannel.1 down
    ip link delete flannel.1
    rm -rf /var/lib/cni/
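
kubeadm reset itself warns that it does not clean up iptables or IPVS rules; if the node will rejoin a cluster later, flushing them is a sensible extra step:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X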

Install helm

wget https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
tar xf helm-v2.11.0-linux-amd64.tar.gz
cp linux-amd64/helm linux-amd64/tiller /usr/local/bin

Create the ServiceAccount tiller needs. To let helm deploy into any namespace, bind it to the cluster-admin ClusterRole. Create rbac-tiller.yaml:

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: tiller
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: tiller
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
      - kind: ServiceAccount
        name: tiller
        namespace: kube-system

Then apply it:

kubectl apply -f rbac-tiller.yaml

Alternatively, tiller can be confined to a specific namespace, with everything helm deploys through it landing in that namespace too:

Reference: https://whmzsu.github.io/helm-doc-zh-cn/quickstart/rbac-zh_cn.html

Create the namespace, then a Role and RoleBinding scoped to it:

kubectl create namespace tiller-world

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: tiller
      namespace: tiller-world
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: Role
    metadata:
      name: tiller-manager
      namespace: tiller-world
    rules:
    - apiGroups: ["","extensions","apps"]
      resources: ["*"]
      verbs: ["*"]
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: RoleBinding
    metadata:
      name: tiller-binding
      namespace: tiller-world
    subjects:
    - kind: ServiceAccount
      name: tiller
      namespace: tiller-world
    roleRef:
      kind: Role
      name: tiller-manager
      apiGroup: rbac.authorization.k8s.io

Apply that manifest with kubectl apply -f as before, then initialize helm:

    helm init --service-account tiller  --upgrade

By default tiller lands in the kube-system namespace; the namespace, image, and so on can all be overridden.
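
A quick sanity check that tiller came up (the tiller deployment carries the label app=helm):

kubectl -n kube-system get pods -l app=helm
helm version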

If tiller was installed into tiller-world instead, initialize it with:

    helm init --service-account tiller --tiller-namespace tiller-world  --upgrade
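
Later helm commands must then be pointed at that tiller as well, either per command or via an environment variable (helm v2 honors TILLER_NAMESPACE):

helm ls --tiller-namespace tiller-world
# or once per shell:
export TILLER_NAMESPACE=tiller-world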

Images used:

    # kubernetes
    k8s.gcr.io/kube-apiserver:v1.12.0
    k8s.gcr.io/kube-controller-manager:v1.12.0
    k8s.gcr.io/kube-scheduler:v1.12.0
    k8s.gcr.io/kube-proxy:v1.12.0
    k8s.gcr.io/etcd:3.2.24
    k8s.gcr.io/pause:3.1
    
    # network and dns
    quay.io/coreos/flannel:v0.10.0-amd64
    k8s.gcr.io/coredns:1.2.2
    
    
    # helm and tiller
    gcr.io/kubernetes-helm/tiller:v2.11.0
    
    # nginx ingress
    quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0
    k8s.gcr.io/defaultbackend:1.4
    
    # dashboard and metric-sever
    k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
    gcr.io/google_containers/metrics-server-amd64:v0.3.0
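
If the nodes cannot reach gcr.io directly, a common alternative to the proxy is pulling from a mirror and retagging; a minimal sketch, assuming the Aliyun google_containers mirror carries these exact tags (verify before relying on it):

#!/bin/bash
# Assumed mirror prefix -- substitute any registry that mirrors these images.
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.12.0 kube-controller-manager:v1.12.0 \
           kube-scheduler:v1.12.0 kube-proxy:v1.12.0 \
           etcd:3.2.24 pause:3.1 coredns:1.2.2; do
    docker pull "$MIRROR/$img"                   # fetch from the mirror
    docker tag "$MIRROR/$img" "k8s.gcr.io/$img"  # retag to the name kubeadm expects
    docker rmi "$MIRROR/$img"                    # drop the mirror tag
done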

    QA

Version 1.12.1 was also tested, but coredns kept crash-looping at startup with:

    2018/10/04 11:04:55 [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769
    2018/10/04 11:04:55 [FATAL] plugin/loop: Seen "HINFO IN 3256902131464476443.1309143030470211725." more than twice, loop detected

Following suggestions turned up on Google, I added the following to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:

    Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --resolv-conf=/etc/resolv.conf"

That did not solve it, and the GitHub issues on this error were likewise unresolved. Some reporters stopped systemd-resolved, but those were Ubuntu systems; on CentOS that did not feel right, so I rolled back to 1.12.0, where the problem does not occur.
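
For the record, a workaround often cited in those threads (not attempted here) is to remove the loop plugin from the coredns ConfigMap, at the cost of losing loop detection, and then restart the pods:

kubectl -n kube-system edit configmap coredns   # delete the line that reads: loop
kubectl -n kube-system delete pod -l k8s-app=kube-dns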
