  • Kubernetes Setup: 1 Master, 2 Workers (dashboard + ingress)

    This article walks through building a Kubernetes cluster on what was the latest release at the time of writing (v1.15.2).

    The setup is split into the following steps:

    1. Basic configuration of every node
    2. Building the master node
    3. Building the worker nodes
    4. Installing the dashboard
    5. Installing ingress
    6. Common commands
    7. Trouble caused by Docker images

    Basic node configuration (run every command in this section on each node: master, worker1, worker2)

    Adjust the IP addresses below to match your own environment.

    systemctl stop firewalld && systemctl disable firewalld
    
    cat >>/etc/hosts<<EOF
    10.8.1.1 k8s-master1  api.k8s.cn
    10.8.1.2 k8s-slave1
    10.8.1.3 k8s-slave2
    EOF
    
    # Create a sysctl configuration file for the bridge/iptables settings
    cat <<EOF >  net.iptables.k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    
    # Disable swap
    sudo swapoff -a

    # Prevent swap from being remounted at boot by commenting out its fstab entry
    sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

    # Put SELinux into permissive mode for the current session
    sudo setenforce 0

    # Keep SELinux disabled after reboot
    sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

    # Apply the sysctl settings
    sudo mv net.iptables.k8s.conf /etc/sysctl.d/ && sudo sysctl --system
    
    # Add the Aliyun yum repository for Kubernetes packages
    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
    yum update
    
    # Install wget
    sudo yum install -y wget

    # Install Docker and make sure the daemon is running before pulling images
    yum install docker.x86_64 -y
    systemctl enable docker && systemctl start docker

    # Install the Kubernetes tools (kubelet, kubeadm, kubectl)
    yum install -y kubelet kubeadm kubectl

    # Enable and start the kubelet service
    systemctl enable kubelet && systemctl start kubelet  
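
    A quick sanity check at this point can save debugging later. The commands below are not part of the original steps, just a suggested verification of the swap, SELinux, and sysctl changes:

    free -m        # the Swap line should show 0 total
    getenforce     # should report Permissive or Disabled
    # may require the bridge/br_netfilter module to be loaded; should print "= 1"
    sysctl net.bridge.bridge-nf-call-iptables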

    Next, prepare the Docker images (this also needs to be done on each of the master, worker1, and worker2 nodes):

    vi init-docker-images.sh
    
    # Script content:
    docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.2
    docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.2
    docker pull registry.aliyuncs.com/google_containers/coredns:1.3.1
    docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.15.2
    docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.2
    docker pull registry.aliyuncs.com/google_containers/etcd:3.2.24
    docker pull registry.aliyuncs.com/google_containers/etcd:3.3.10
    docker pull registry.aliyuncs.com/google_containers/pause:3.1
    docker pull registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0
    docker pull registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
    docker pull docker.io/jmgao1983/flannel:v0.11.0-amd64
    docker pull quay-mirror.qiniu.com/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0
    
    docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.15.2 k8s.gcr.io/kube-apiserver:v1.15.2
    docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.15.2 k8s.gcr.io/kube-controller-manager:v1.15.2
    docker tag registry.aliyuncs.com/google_containers/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
    docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.15.2 k8s.gcr.io/kube-proxy:v1.15.2
    docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.15.2 k8s.gcr.io/kube-scheduler:v1.15.2
    docker tag registry.aliyuncs.com/google_containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
    docker tag registry.aliyuncs.com/google_containers/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
    docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
    docker tag registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
    docker tag registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
    docker tag docker.io/jmgao1983/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
    docker tag quay-mirror.qiniu.com/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0

    Make the script executable and run it until every image has been downloaded:

    chmod +x init-docker-images.sh
    ./init-docker-images.sh
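
    Once the script finishes, it is worth confirming that the re-tagged images are present under the upstream names kubeadm expects (a suggested check, not part of the original script):

    docker images | grep -E 'k8s.gcr.io|quay.io'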
    

      

    Building the master node

    kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=0.0.0.0  --kubernetes-version=v1.15.2

    Then record the join command that kubeadm prints; it looks like this:

    kubeadm join api.k8s.cn:6443 --token b5jxzg.5jtj2odzoqujikk1 \
        --discovery-token-ca-cert-hash sha256:90d0ad57b39bf47bface0c7f4edec480aaf8352cab872f4d52072f998cf45105

    At this point the master node will be in the NotReady state (check with kubectl get nodes on the master). First set up kubectl access:

        mkdir -p $HOME/.kube

        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

        sudo chown $(id -u):$(id -g) $HOME/.kube/config


    vi /var/lib/kubelet/kubeadm-flags.env

    # Remove the --network-plugin=cni flag from the file
    # Save, then restart the kubelet service
    service kubelet restart
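
    If you prefer a non-interactive edit, the same change can be made with sed; this is a sketch equivalent to the manual edit above, assuming the flag sits inside KUBELET_KUBEADM_ARGS in that file:

    # strip the CNI flag and restart kubelet
    sed -i 's/--network-plugin=cni//g' /var/lib/kubelet/kubeadm-flags.env
    systemctl restart kubelet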

    After a short wait, the master node switches to the Ready state (kubectl get nodes).

    Building the worker nodes (run on both worker1 and worker2)

    kubeadm join {master IP - replace with your own}:6443 --token rntn5f.vy9h28s4pxwx6eig \
        --discovery-token-ca-cert-hash sha256:62624adcc8aa5baa095dae607b8e57c8b619db956ad69e0e97f0e40c74542a92
    
    vi /var/lib/kubelet/kubeadm-flags.env
    
    # Remove the --network-plugin=cni flag from the file
    # Save, then restart the kubelet service
    
    
    service kubelet restart  
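
    After both workers have joined, verify from the master that all three nodes are listed and eventually become Ready (hostnames will match your /etc/hosts entries):

    kubectl get nodes -o wide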
    

    Installing the dashboard (master node only)

    wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
    
    
    vi kubernetes-dashboard.yaml
    # Change the dashboard Service to type NodePort; the Service definition is at the end of the file
    
    # ------------------- Dashboard Service ------------------- #
    
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
      name: kubernetes-dashboard
      namespace: kube-system
    spec:
      type: NodePort
      ports:
        - port: 443
          targetPort: 8443
          nodePort: 30001
      selector:
        k8s-app: kubernetes-dashboard
    
    kubectl apply -f kubernetes-dashboard.yaml  
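
    After applying, a quick check (suggested, not in the original post) confirms that the dashboard pod is running and that the Service is exposed on NodePort 30001:

    kubectl -n kube-system get pods -l k8s-app=kubernetes-dashboard
    kubectl -n kube-system get svc kubernetes-dashboard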

    Create an admin service account

    vi dashboard-account.yaml
    
    # Content:
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: aks-dashboard-admin
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: aks-dashboard-admin
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: aks-dashboard-admin
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: kubernetes-dashboard
      labels:
        k8s-app: kubernetes-dashboard
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: kubernetes-dashboard
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRoleBinding
    metadata:
      name: kubernetes-dashboard-head
      labels:
        k8s-app: kubernetes-dashboard-head
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:
    - kind: ServiceAccount
      name: kubernetes-dashboard-head
      namespace: kube-system
    
    
    # Apply it
    kubectl apply -f dashboard-account.yaml  

    You can then open https://<master-ip>:30001 in Firefox.

    Next, find the login token:

    [root@k8s-master1 ~]# kubectl -n kube-system get secrets|grep aks-dashboard-admin
    aks-dashboard-admin-token-gmjfv                  kubernetes.io/service-account-token   3      4h52m
    
    
    [root@k8s-master1 ~]# kubectl -n kube-system describe secret aks-dashboard-admin-token-gmjfv
    Name:         aks-dashboard-admin-token-gmjfv
    Namespace:    kube-system
    Labels:       <none>
    Annotations:  kubernetes.io/service-account.name: aks-dashboard-admin
                  kubernetes.io/service-account.uid: 87d4ec1b-1829-4420-98d6-e77c1519aed6
    
    Type:  kubernetes.io/service-account-token
    
    Data
    ====
    ca.crt:     1025 bytes
    namespace:  11 bytes
    token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJha3MtZGFzaGJvYXJkLWFkbWluLXRva2VuLWdtamZ2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFrcy1kYXNoYm9hcmQtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI4N2Q0ZWMxYi0xODI5LTQ0MjAtOThkNi1lNzdjMTUxOWFlZDYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWtzLWRhc2hib2FyZC1hZG1pbiJ9.ELpsYbmWhW1sr3DOZfyupOkb87AbJ7sVoXEBitoTD46kuuNYcn8ajvwJcdfGruwrM9LwDcvMN7jD5UFF7-rgz1MUBEOZCoAjXFRrM1-Jn59TlXMk9W9JRD3DhMtuBRh6XUgPRjf755qr7WzR_DC8aCwjywAvFE1_R4N2oMZIU8gdmG0BsqwACHIbBnLJDAElBvgnKl8Jm4_XzKZW5ls-C45PSu-GC-yszt8qSN2bO5Z_rIUXhvK13Es5d0nUBvcanFBOsLjotWry195SWKEAuLiMp7qm6RJRrYWEpObh81w3MvbtrycZGMP7g-9H3s5vmHgs7HAnvjTEQht4c0F5qA
    [root@k8s-master1 ~]# 
    

    Log in with the token shown above.
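
    The two lookups above can also be combined into one convenience command (assuming the secret name still starts with aks-dashboard-admin-token):

    kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep aks-dashboard-admin-token | awk '{print $1}')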

    Installing ingress

    kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml
    
    # Create an ingress Service of type NodePort
    # ingress-service.yaml
    
    apiVersion: v1
    kind: Service
    metadata:
      name: ingress-nginx
      namespace: ingress-nginx
    spec:
      type: NodePort
      ports:
      - name: http
        port: 80
        targetPort: 80
        protocol: TCP
        nodePort: 30080
      - name: https
        port: 443
        targetPort: 443
        protocol: TCP
        nodePort: 30443
      selector:
         "app.kubernetes.io/name": "ingress-nginx",
         "app.kubernetes.io/part-of": "ingress-nginx"
    
    
    kubectl apply -f ingress-service.yaml
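
    The controller pod and the NodePort Service can then be checked with (a suggested verification):

    kubectl -n ingress-nginx get pods
    kubectl -n ingress-nginx get svc ingress-nginx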
    

    Outside the cluster, deploy an nginx load balancer that forwards all incoming traffic to port 30080 on the worker nodes; the proxy_pass upstream should list the two worker IP addresses.
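
    As an illustration, a minimal nginx proxy in front of the cluster might look like the sketch below; the upstream IPs are the worker addresses from the /etc/hosts example above, and the file path and upstream name are assumptions to adapt to your setup. HTTPS traffic to NodePort 30443 would need an equivalent block.

    # /etc/nginx/conf.d/k8s-ingress.conf (sketch)
    upstream k8s_ingress_http {
        server 10.8.1.2:30080;
        server 10.8.1.3:30080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://k8s_ingress_http;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }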

    If CoreDNS is not working properly, delete the CoreDNS deployment and reinstall it:

    kubectl delete deploy coredns -n kube-system
    
    
    
    vim coredns-ha.yaml
    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        k8s-app: kube-dns
      name: coredns
      namespace: kube-system
    spec:
      # Adjust the replica count to suit your cluster size
      replicas: 2
      selector:
        matchLabels:
          k8s-app: kube-dns
      strategy:
        rollingUpdate:
          maxSurge: 25%
          maxUnavailable: 1
        type: RollingUpdate
      template:
        metadata:
          labels:
            k8s-app: kube-dns
        spec:
          affinity:
            podAntiAffinity:
              preferredDuringSchedulingIgnoredDuringExecution:
              - weight: 100
                podAffinityTerm:
                  labelSelector:
                    matchExpressions:
                    - key: k8s-app
                      operator: In
                      values:
                      - kube-dns
                  topologyKey: kubernetes.io/hostname
          containers:
          - args:
            - -conf
            - /etc/coredns/Corefile
            image: registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6
            imagePullPolicy: IfNotPresent
            livenessProbe:
              failureThreshold: 5
              httpGet:
                path: /health
                port: 8080
                scheme: HTTP
              initialDelaySeconds: 60
              periodSeconds: 10
              successThreshold: 1
              timeoutSeconds: 5
            name: coredns
            ports:
            - containerPort: 53
              name: dns
              protocol: UDP
            - containerPort: 53
              name: dns-tcp
              protocol: TCP
            - containerPort: 9153
              name: metrics
              protocol: TCP
            resources:
              limits:
                memory: 170Mi
              requests:
                cpu: 100m
                memory: 70Mi
            securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                add:
                - NET_BIND_SERVICE
                drop:
                - all
              readOnlyRootFilesystem: true
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
            volumeMounts:
            - mountPath: /etc/coredns
              name: config-volume
              readOnly: true
          dnsPolicy: Default
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext: {}
          serviceAccount: coredns
          serviceAccountName: coredns
          terminationGracePeriodSeconds: 30
          tolerations:
          - key: CriticalAddonsOnly
            operator: Exists
          - effect: NoSchedule
            key: node-role.kubernetes.io/master
          volumes:
          - configMap:
              defaultMode: 420
              items:
              - key: Corefile
                path: Corefile
              name: coredns
            name: config-volume
    
    
    
    
    kubectl apply -f coredns-ha.yaml 
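
    After applying, confirm that the two replicas come up and that in-cluster DNS resolution works. The busybox-based test below is a common pattern, not part of the original post; the image tag is an assumption:

    kubectl -n kube-system get pods -l k8s-app=kube-dns
    kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default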
    

    Deploying flannel

    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    
    kubectl apply -f kube-flannel.yml  

    Download the YAML first and adjust the subnet configuration inside it to match the pod network CIDR chosen earlier; rebooting the nodes afterwards is recommended.
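
    The part of kube-flannel.yml to check is the net-conf.json entry in its ConfigMap; its Network field should match the --pod-network-cidr passed to kubeadm init (10.244.0.0/16 above):

      net-conf.json: |
        {
          "Network": "10.244.0.0/16",
          "Backend": {
            "Type": "vxlan"
          }
        }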

    Common commands

    # Run this when kubeadm join or kubeadm init fails and you want to start over
    kubeadm reset
    
    # List all cluster nodes
    kubectl get nodes
    
    
    # List secrets
    kubectl get secrets
    kubectl -n kube-system get secrets
    
    # Show pod details, e.g. to find out why a pod is not in the Running state
    kubectl describe pod {pod-name}
    kubectl -n kube-system describe pod {pod-name}
    
    # Show Docker container resource usage
    docker stats -a
    
    # List bootstrap tokens
    kubeadm token list
    
    # Compute the CA certificate hash used by --discovery-token-ca-cert-hash
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
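
    The token list and certificate hash above are the two pieces of a kubeadm join command; kubeadm can also produce a ready-to-use join command in one step:

    kubeadm token create --print-join-command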

      

    Trouble caused by Docker images

    Some of the required Docker images are hosted on registries that are unreachable from mainland China, so a default installation stalls while pulling them. The workaround is to pull the images in advance from domestic mirrors.

    Images pulled from a domestic mirror end up under a different registry namespace, so once the downloads finish they must be re-tagged with docker tag back to the original upstream names (k8s.gcr.io, quay.io, etc.).
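
    To see exactly which upstream images a given kubeadm release expects (and therefore which mirror pulls and re-tags are needed), kubeadm can print the list itself:

    kubeadm config images list --kubernetes-version=v1.15.2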

    Shortcomings to address later:

    • High availability for the master node
    • High availability for etcd (really part of the item above)
    • Debugging kubelet with journalctl -f -u kubelet.service
      https://blog.csdn.net/wangmiaoyan/article/details/101216496
