  • Setting up a Kubernetes 1.10 cluster with kubeadm

    PS: On every node, remember to prepare the required images before installing; otherwise the components will fail to start, without reporting any error.

    $ cat /etc/hosts
    192.168.11.1 master
    192.168.11.2 node

    Disable the firewall:

    $ systemctl stop firewalld
    $ systemctl disable firewalld

    Disable SELinux:

    $ setenforce 0
    $ cat /etc/selinux/config
    SELINUX=disabled
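
    Note that setenforce 0 only disables SELinux until the next reboot; the SELINUX=disabled line in /etc/selinux/config is what makes the change permanent. A one-liner that makes this edit (a sketch, assuming the file still contains the default SELINUX=enforcing setting):

    $ sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config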

    Create the file /etc/sysctl.d/k8s.conf and add the following content:

    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1

    Run the following commands to make the changes take effect:

    $ modprobe br_netfilter
    $ sysctl -p /etc/sysctl.d/k8s.conf
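
    You can verify that the three values were applied, for example:

    $ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward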

    Images

    If your nodes can reach gcr.io directly, you can skip this step. Otherwise we need to download the required gcr.io images to the nodes ahead of time; the prerequisite, of course, is that Docker is already installed.

    On the master node, run the following commands:

    docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-apiserver-amd64:v1.10.0
    docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-scheduler-amd64:v1.10.0
    docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-controller-manager-amd64:v1.10.0
    docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-proxy-amd64:v1.10.0
    docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-kube-dns-amd64:1.14.8
    docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-dnsmasq-nanny-amd64:1.14.8
    docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-sidecar-amd64:1.14.8
    docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/etcd-amd64:3.1.12
    docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
    docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/pause-amd64:3.1

    docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0
    docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-scheduler-amd64:v1.10.0 k8s.gcr.io/kube-scheduler-amd64:v1.10.0
    docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-controller-manager-amd64:v1.10.0 k8s.gcr.io/kube-controller-manager-amd64:v1.10.0
    docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0
    docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8
    docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8
    docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8
    docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/etcd-amd64:3.1.12 k8s.gcr.io/etcd-amd64:3.1.12
    docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
    docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1


    You can save the commands above as a shell script and run it directly; a sketch of such a script follows. These are the images needed on the master node, and they must be downloaded in advance.
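
    Here is a minimal sketch of such a script. It assumes the same Aliyun mirror prefix as above, and simply pulls each image and retags it under the name kubeadm expects (k8s.gcr.io, or quay.io/coreos for flannel):

    #!/bin/bash
    # Sketch: pull each required master image from the Aliyun mirror and
    # retag it under the name kubeadm expects.
    set -e

    MIRROR=registry.cn-shenzhen.aliyuncs.com/cp_m

    # Each entry: "image:tag target-registry"
    images=(
      "kube-apiserver-amd64:v1.10.0 k8s.gcr.io"
      "kube-scheduler-amd64:v1.10.0 k8s.gcr.io"
      "kube-controller-manager-amd64:v1.10.0 k8s.gcr.io"
      "kube-proxy-amd64:v1.10.0 k8s.gcr.io"
      "k8s-dns-kube-dns-amd64:1.14.8 k8s.gcr.io"
      "k8s-dns-dnsmasq-nanny-amd64:1.14.8 k8s.gcr.io"
      "k8s-dns-sidecar-amd64:1.14.8 k8s.gcr.io"
      "etcd-amd64:3.1.12 k8s.gcr.io"
      "flannel:v0.10.0-amd64 quay.io/coreos"
      "pause-amd64:3.1 k8s.gcr.io"
    )

    for entry in "${images[@]}"; do
      read -r image target <<< "$entry"   # split into image name and target registry
      docker pull "${MIRROR}/${image}"
      docker tag "${MIRROR}/${image}" "${target}/${image}"
    done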

    On the other nodes (the workers), run the following commands:

    docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kube-proxy-amd64:v1.10.0
    docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
    docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/pause-amd64:3.1
    docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/kubernetes-dashboard-amd64:v1.8.3
    docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/heapster-influxdb-amd64:v1.3.3
    docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/heapster-grafana-amd64:v4.4.3
    docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/heapster-amd64:v1.4.2

    docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
    docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
    docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kube-proxy-amd64:v1.10.0 k8s.gcr.io/kube-proxy-amd64:v1.10.0

    docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
    docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/heapster-influxdb-amd64:v1.3.3 k8s.gcr.io/heapster-influxdb-amd64:v1.3.3
    docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/heapster-grafana-amd64:v4.4.3 k8s.gcr.io/heapster-grafana-amd64:v4.4.3
    docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/heapster-amd64:v1.4.2 k8s.gcr.io/heapster-amd64:v1.4.2


    These are the images needed on the worker nodes; they too must be downloaded before joining a node to the cluster.

    Installing kubeadm, kubelet, and kubectl

    With Docker installed, the environment configuration above complete, and the required images downloaded (skip the images if your nodes can reach gcr.io), we can now install kubeadm. Here we do it by configuring a yum repository:

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
           https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    EOF

    Of course, the yum repo above also requires access to Google's servers. If that is not available, we can install from the Alibaba Cloud mirror instead:

    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
           http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF

    The latest version in the Alibaba Cloud mirror is already 1.10, so we can install directly. Once the repo is configured, run the install command:

    $ yum makecache fast && yum install -y kubelet kubeadm kubectl

    Under normal circumstances all of the packages above install without problems.
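
    To confirm the installation succeeded, you can check the installed versions, for example:

    $ kubeadm version
    $ kubelet --version
    $ kubectl version --client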

    Configuring kubelet

    After installation we still need to configure the kubelet: the configuration file generated by the yum install sets the --cgroup-driver parameter to systemd, while Docker's cgroup driver is cgroupfs, and the two must match. We can check Docker's setting with the docker info command:

    $ docker info |grep Cgroup
    Cgroup Driver: cgroupfs

    Edit the kubelet configuration file /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and change the KUBELET_CGROUP_ARGS parameter to cgroupfs:

    Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

    There is one more issue, concerning swap. As mentioned in the earlier article on manually setting up a highly available Kubernetes cluster, Kubernetes requires the system swap to be disabled from 1.8 onwards; with the default configuration the kubelet will not start otherwise. We can lift this restriction with the kubelet startup parameter --fail-swap-on=false, so we need to add one more item to the configuration file above (before ExecStart):

    Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"

    That said, the best option is to turn swap off entirely, which also improves kubelet performance (a sketch follows below). After the changes, reload the configuration:

    $ systemctl daemon-reload
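
    If you prefer to disable swap outright instead of passing --fail-swap-on=false, something like the following works (a sketch; it backs up /etc/fstab to /etc/fstab.bak before editing):

    # Turn swap off immediately.
    $ swapoff -a
    # Comment out any swap entries in /etc/fstab so it stays off after a reboot.
    $ sed -ri.bak 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab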

    Cluster installation

    Initialization

    That completes the preparation; we can now initialize the cluster on the master node with the kubeadm command:

    $ kubeadm init --kubernetes-version=v1.10.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.11.1

    The command is quite simple: kubeadm init, followed by the version of the cluster to install. Because we chose flannel as the Pod network plugin, we need to specify --pod-network-cidr=10.244.0.0/16, and then the apiserver advertise address, which here is our master node's IP address. If running the command produces an error such as
    running with swap on is not supported. Please disable swap, we also need to add the parameter --ignore-preflight-errors=Swap to ignore the swap error message:

    $ kubeadm init \
      --kubernetes-version=v1.10.0 \
      --pod-network-cidr=10.244.0.0/16 \
      --apiserver-advertise-address=192.168.11.1 \
      --ignore-preflight-errors=Swap
    [init] Using Kubernetes version: v1.10.0
    [init] Using Authorization modes: [Node RBAC]
    [preflight] Running pre-flight checks.
       [WARNING FileExisting-crictl]: crictl not found in system path
    Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
    [preflight] Starting the kubelet service
    [certificates] Generated ca certificate and key.
    [certificates] Generated apiserver certificate and key.
    [certificates] apiserver serving cert is signed for DNS names [ydzs-master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.151.30.57]
    [certificates] Generated apiserver-kubelet-client certificate and key.
    [certificates] Generated etcd/ca certificate and key.
    [certificates] Generated etcd/server certificate and key.
    [certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
    [certificates] Generated etcd/peer certificate and key.
    [certificates] etcd/peer serving cert is signed for DNS names [ydzs-master1] and IPs [10.151.30.57]
    [certificates] Generated etcd/healthcheck-client certificate and key.
    [certificates] Generated apiserver-etcd-client certificate and key.
    [certificates] Generated sa key and public key.
    [certificates] Generated front-proxy-ca certificate and key.
    [certificates] Generated front-proxy-client certificate and key.
    [certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
    [kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
    [controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
    [controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
    [etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
    [init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
    [init] This might take a minute or longer if the control plane images have to be pulled.
    [apiclient] All control plane components are healthy after 22.007661 seconds
    [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [markmaster] Will mark node ydzs-master1 as master by adding a label and a taint
    [markmaster] Master ydzs-master1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
    [bootstraptoken] Using token: 8xomlq.0cdf2pbvjs2gjho3
    [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: kube-dns
    [addons] Applied essential addon: kube-proxy

    Your Kubernetes master has initialized successfully!

    To start using your cluster, you need to run the following as a regular user:

     mkdir -p $HOME/.kube
     sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
     sudo chown $(id -u):$(id -g) $HOME/.kube/config

    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
     https://kubernetes.io/docs/concepts/cluster-administration/addons/

    You can now join any number of machines by running the following on each node
    as root:

     kubeadm join 192.168.11.1:6443 --token 8xomlq.0cdf2pbvjs2gjho3 --discovery-token-ca-cert-hash sha256:92802317cb393682c1d1356c15e8b4ec8af2b8e5143ffd04d8be4eafb5fae368

    The output above records the entire process of kubeadm initializing the cluster: generating the various certificates, kubeconfig files, the bootstrap token, and so on. The kubeadm join command near the end is what we use later to add nodes to the cluster, and the following commands configure how kubectl accesses the cluster:

     mkdir -p $HOME/.kube
     sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
     sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Finally, it prints the command for joining a node to the cluster:

    kubeadm join 192.168.11.1:6443 --token 8xomlq.0cdf2pbvjs2gjho3 --discovery-token-ca-cert-hash sha256:92802317cb393682c1d1356c15e8b4ec8af2b8e5143ffd04d8be4eafb5fae368
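
    Note that the bootstrap token expires after 24 hours by default. If you need to add a node later, you can generate a fresh join command on the master:

    $ kubeadm token create --print-join-command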

    Once kubectl is configured according to the instructions above, we can use it to inspect the cluster:

    $ kubectl get cs
    NAME                 STATUS    MESSAGE              ERROR
    scheduler            Healthy   ok
    controller-manager   Healthy   ok
    etcd-0               Healthy   {"health": "true"}
    $ kubectl get csr
    NAME                                                   AGE       REQUESTOR                 CONDITION
    node-csr-8qygb8Hjxj-byhbRHawropk81LHNPqZCTePeWoZs3-g   1h        system:bootstrap:8xomlq   Approved,Issued
    $ kubectl get nodes
    NAME           STATUS    ROLES     AGE       VERSION
    ydzs-master1   Ready     master    3h        v1.10.0

    If you run into other problems during installation, you can reset everything with the following commands and start over:

    $ kubeadm reset
    $ ifconfig cni0 down && ip link delete cni0
    $ ifconfig flannel.1 down && ip link delete flannel.1
    $ rm -rf /var/lib/cni/

    Installing the Pod network

    Next we install the flannel network plugin. This is very simple, no different from deploying an ordinary Pod:

    $ wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    $ kubectl apply -f  kube-flannel.yml
    clusterrole.rbac.authorization.k8s.io "flannel" created
    clusterrolebinding.rbac.authorization.k8s.io "flannel" created
    serviceaccount "flannel" created
    configmap "kube-flannel-cfg" created
    daemonset.extensions "kube-flannel-ds" created

    Also note that if your nodes have more than one network interface, you need to use the --iface parameter in kube-flannel.yml to specify the name of the host's internal NIC; otherwise DNS resolution may fail. Add --iface=<iface-name> to the flanneld startup arguments:

    args:
    - --ip-masq
    - --kube-subnet-mgr
    - --iface=eth0

    After installation, the kubectl get pods command shows the running status of the cluster components. If they are all in the Running state, congratulations: your master node has been installed successfully.

    $ kubectl get pods --all-namespaces
    NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE
    kube-system   etcd-ydzs-master1                      1/1       Running   0          10m
    kube-system   kube-apiserver-ydzs-master1            1/1       Running   0          10m
    kube-system   kube-controller-manager-ydzs-master1   1/1       Running   0          10m
    kube-system   kube-dns-86f4d74b45-f5595              3/3       Running   0          10m
    kube-system   kube-flannel-ds-qxjs2                  1/1       Running   0          1m
    kube-system   kube-proxy-vf5fg                       1/1       Running   0          10m
    kube-system   kube-scheduler-ydzs-master1            1/1       Running   0          10m

    After kubeadm initialization, Pods are not scheduled onto the master node by default, so we cannot test an ordinary Pod yet; we first need to add a worker node. (Alternatively, for a single-machine test setup, you can remove the master taint, as sketched below.)
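
    If you do want Pods scheduled on the master, for example in a single-machine test environment, you can remove the master taint (optional, and not recommended for production):

    $ kubectl taint nodes --all node-role.kubernetes.io/master-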

    Adding a node

    Likewise, once the environment configuration above, the Docker installation, and kubeadm, kubelet, and kubectl have all been set up on the Node (192.168.11.2), we can run the kubeadm join command printed during initialization directly on the Node, again adding the --ignore-preflight-errors=Swap parameter:

    $ kubeadm join 192.168.11.1:6443 --token 8xomlq.0cdf2pbvjs2gjho3 --discovery-token-ca-cert-hash sha256:92802317cb393682c1d1356c15e8b4ec8af2b8e5143ffd04d8be4eafb5fae368 --ignore-preflight-errors=Swap
    [preflight] Running pre-flight checks.
       [WARNING Swap]: running with swap on is not supported. Please disable swap
       [WARNING FileExisting-crictl]: crictl not found in system path
    Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
    [discovery] Trying to connect to API Server "10.151.30.57:6443"
    [discovery] Created cluster-info discovery client, requesting info from "https://192.168.11.1:6443"
    [discovery] Requesting info from "https://10.151.30.57:6443" again to validate TLS against the pinned public key
    [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.151.30.57:6443"
    [discovery] Successfully established connection with API Server "10.151.30.57:6443"

    This node has joined the cluster:
    * Certificate signing request was sent to master and a response
     was received.
    * The Kubelet was informed of the new secure connection details.

    Run 'kubectl get nodes' on the master to see this node join the cluster.

    We can see that the node has joined the cluster. If we then copy the master's ~/.kube/config file to the corresponding location on this node, we can use the kubectl command line tool there as well.
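
    A sketch of copying the kubeconfig over, assuming root SSH access from the node to the master at 192.168.11.1:

    $ mkdir -p $HOME/.kube
    $ scp root@192.168.11.1:/etc/kubernetes/admin.conf $HOME/.kube/config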

    $ kubectl get nodes
    NAME           STATUS    ROLES     AGE       VERSION
    evjfaxic       Ready     <none>    1h        v1.10.0
    ydzs-master1   Ready     master    3h        v1.10.0

    Finally, let's create an nginx Pod to test the cluster. Pull the image first:

    docker pull registry.cn-hangzhou.aliyuncs.com/qinyujia-test/nginx

    Then save the following manifest as hello.yaml:
    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: registry.cn-hangzhou.aliyuncs.com/qinyujia-test/nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
      restartPolicy: Always
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-service
    spec:
      type: NodePort
      sessionAffinity: ClientIP
      selector:
        app: nginx
      ports:
        - port: 80
          nodePort: 30080

    kubectl create -f hello.yaml
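
    Once created, you can check that the Pod is running and reach nginx through the NodePort on either node, for example:

    $ kubectl get pods -o wide
    $ kubectl get svc nginx-service
    $ curl http://192.168.11.2:30080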


