  • Key k8s Concepts and Deploying a k8s Cluster (Part 1) -- 技术流ken

    Key Concepts

    1. cluster

    A cluster is a pool of compute, storage, and network resources; k8s uses these resources to run all kinds of container-based applications.

    2. master

    The master is the brain of the cluster. Its main job is scheduling: deciding where each application should run. The master runs a Linux operating system and can be a physical or virtual machine. Multiple masters can be run for high availability.

    3. node

    A node's job is to run container applications. Nodes are managed by the master; each node monitors and reports container status and manages container lifecycles according to the master's instructions. Nodes run a Linux operating system and can be physical or virtual machines.

    4. pod

    The pod is the smallest unit of work in k8s. Each pod contains one or more containers. The containers in a pod are scheduled onto a node by the master as a single unit.
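
    As a concrete illustration, here is a minimal pod manifest, written as a hedged sketch: the name my-pod, the busybox image, and the sleep command are placeholders chosen for this example, and in practice pods are almost always created through a controller rather than directly like this.

    [root@ken ~]# cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: my-pod                  # hypothetical name, for illustration only
    spec:
      containers:
      - name: busybox
        image: busybox              # any small image works here
        command: ["sleep", "3600"]  # keep the container alive for an hour
    EOF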

    5. controller

    k8s usually does not create pods directly; it manages them through controllers. A controller defines a pod's deployment characteristics, such as how many replicas to run and what kind of node to run them on. To cover different business scenarios, k8s provides several kinds of controllers, including deployment, replicaset, daemonset, statefulset, and job.

    6. deployment

    The most commonly used controller. A deployment manages multiple replicas of a pod and makes sure the pods run in the desired state.
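
    For example, a deployment that keeps three nginx replicas running might be sketched as follows (the name nginx-deployment and the nginx image are illustrative assumptions, not taken from the original text):

    [root@ken ~]# cat <<EOF | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment        # hypothetical name
    spec:
      replicas: 3                   # desired number of pod replicas
      selector:
        matchLabels:
          app: nginx                # must match the pod template labels below
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
    EOF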

    7. replicaset

    Implements multi-replica management of pods. A replicaset is created automatically when you use a deployment; in other words, a deployment manages its pod replicas through a replicaset, so we normally never need to use replicasets directly.
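
    You can see this relationship for yourself: after creating a deployment such as the sketch above, a replicaset with a generated name (something like nginx-deployment-<hash>) shows up automatically:

    [root@ken ~]# kubectl get replicaset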

    8. daemonset

    Used when each node should run at most one replica of a pod. As the name suggests, a daemonset is typically used to run daemons.
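
    The cluster built later in this article relies on daemonsets itself: kube-proxy and flannel each run exactly one pod per node. Once the cluster is up, you can verify this with:

    [root@ken ~]# kubectl get daemonset -n kube-system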

    9. statefulset

    Guarantees that every replica of a pod keeps the same name throughout its lifetime, which no other controller provides. With other controllers, when a pod fails and has to be deleted and restarted, its name changes. A statefulset also guarantees that replicas are started, updated, and deleted in a fixed order.

    10. job

    Used for applications that are removed once they finish running, whereas pods managed by the other controllers typically keep running continuously.
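
    A minimal job sketch (the name hello-job and the busybox image are assumptions for illustration); the pod runs once, prints a line, and the job completes:

    [root@ken ~]# cat <<EOF | kubectl apply -f -
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: hello-job               # hypothetical name
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["echo", "hello from a job"]
          restartPolicy: Never      # job pods must not restart in place
      backoffLimit: 4               # give up after 4 failed attempts
    EOF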

    11. service

    A deployment can run multiple replicas, and each pod gets its own IP. So how does the outside world access these replicas?

    The answer is a service.

    A k8s service defines how a particular set of pods is accessed from outside. A service has its own IP and port, and it load-balances traffic across the pods behind it.

    Running pods and providing access to pods are two separate tasks in k8s, carried out by controllers and services respectively.
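
    As a hedged sketch, exposing the hypothetical nginx-deployment from the deployment example above as a service takes one command; kubectl then shows the service's own cluster IP and port:

    [root@ken ~]# kubectl expose deployment nginx-deployment --port=80
    [root@ken ~]# kubectl get service nginx-deployment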

    12. namespace

    A namespace logically divides one physical cluster into multiple virtual clusters, each of which is a namespace. Resources in different namespaces are completely isolated from each other.
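
    For example (the namespace name test is an arbitrary choice): every cluster starts with the default and kube-system namespaces, and new ones are created like this:

    [root@ken ~]# kubectl create namespace test
    [root@ken ~]# kubectl get namespaces
    [root@ken ~]# kubectl get pods -n test       # resources are addressed per namespace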

    Installing kubelet, kubeadm, and kubectl

    master: 172.20.10.2

    node1: 172.20.10.7

    node2: 172.20.10.9

    The official installation guide is available at https://kubernetes.io/docs/setup/independent/install-kubeadm/

    Step 1: Install docker

    docker must be installed on all nodes.

    docker must also be enabled to start at boot on every node.

    [root@localhost yum.repos.d]# wget http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    [root@ken ~]# yum install docker-ce -y
    [root@ken ~]# mkdir /etc/docker
    [root@ken ~]# cat /etc/docker/daemon.json
    {
      "registry-mirrors": ["https://XXX.mirror.aliyuncs.com"]
    }
    [root@ken ~]# systemctl restart docker
    [root@ken ~]# systemctl enable docker

    Step 2: Configure the k8s yum repo

    [k8s]
    name=k8s
    enabled=1
    gpgcheck=0
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
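
    One way to create this repo file, assuming the path /etc/yum.repos.d/k8s.repo (the exact file name is a choice, not mandated):

    [root@ken ~]# cat <<EOF > /etc/yum.repos.d/k8s.repo
    [k8s]
    name=k8s
    enabled=1
    gpgcheck=0
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    EOF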

    Step 3: Install kubelet, kubeadm, and kubectl (run on all nodes)

    kubelet runs on every node in the cluster and is responsible for starting pods and containers.

    kubeadm is used to initialize the cluster.

    kubectl is the Kubernetes command-line tool. With kubectl you can deploy and manage applications, inspect all kinds of resources, and create, delete, and update components.

    [root@ken ~]# yum install kubelet kubeadm kubectl -y

    Step 4: Enable kubelet

    kubelet cannot actually be started at this point, because its configuration is generated only during cluster initialization; for now we can only enable it to start at boot.

    [root@ken ~]# systemctl enable kubelet

    Creating the Cluster with kubeadm

    Step 1: Prepare the environment (run the following on every node, master and nodes alike)

    1. Each machine needs at least two CPUs; otherwise kubeadm reports an error.

    2. Host names must resolve.

    [root@ken ~]# cat /etc/hosts
    127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
    172.20.10.2 ken
    172.20.10.7 host1
    172.20.10.9 host2

    3. The kernel's bridge netfilter must be enabled, so that bridged traffic is processed by iptables.

    [root@ken ~]# echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables
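
    The echo above does not survive a reboot. As an optional extra (the file name k8s.conf is a convention, not a requirement), you can persist the setting and make sure the br_netfilter module, which these sysctls depend on, is loaded:

    [root@ken ~]# modprobe br_netfilter
    [root@ken ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    EOF
    [root@ken ~]# sysctl --system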

    4. Swap must be disabled on every node; kubelet will not start while swap is enabled.

    [root@ken ~]# swapoff -a && sysctl -w vm.swappiness=0
    vm.swappiness = 0
    
    [root@ken ~]# free -m
                  total        used        free      shared  buff/cache   available
    Mem:            991         151         365           7         475         674
    Swap:             0           0           0
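
    Note that swapoff -a only disables swap until the next reboot. To keep it off permanently, you can also comment out the swap entry in /etc/fstab, for example with a sed one-liner (a sketch assuming a standard CentOS fstab layout):

    [root@ken ~]# sed -ri '/\sswap\s/ s/^/#/' /etc/fstab    # comment out the swap line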

    5. Turn off the firewall and SELinux.
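
    The commands for this are not shown in the original; on CentOS 7 this would typically look roughly like the following (run on every node):

    [root@ken ~]# systemctl stop firewalld && systemctl disable firewalld
    [root@ken ~]# setenforce 0                # disables SELinux until reboot
    [root@ken ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # keeps it off after reboot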

    Step 2: Initialize the master

    Version 1.13.1 may be rather old by now; you can choose a newer version when initializing, e.g. 1.14.1.

    [root@ken ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.1 --apiserver-advertise-address 172.20.10.2 --pod-network-cidr=10.244.0.0/16

    --image-repository string: specifies where to pull images from (available since 1.13). The default is k8s.gcr.io; here we point it at the domestic mirror registry.aliyuncs.com/google_containers.

    --kubernetes-version string: specifies the Kubernetes version. The default, stable-1, makes kubeadm download the latest version number from https://dl.k8s.io/release/stable-1.txt; pinning a fixed version (the latest at the time of writing was v1.13.2) skips that network request.

    --apiserver-advertise-address: specifies which of the master's interfaces to use for communicating with the other cluster nodes. If the master has more than one interface it is best to specify one explicitly; otherwise kubeadm picks the interface that has the default gateway.

    --pod-network-cidr: specifies the pod network range. Kubernetes supports several network add-ons, and each has its own requirements for --pod-network-cidr. We set it to 10.244.0.0/16 because we will use the flannel network add-on, which requires exactly this CIDR.

    Output like the following means the cluster was initialized successfully:

    [root@ken ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.1 --apiserver-advertise-address 172.20.10.2 --pod-network-cidr=10.244.0.0/16
    [init] Using Kubernetes version: v1.13.1
    [preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.0. Latest validated version: 18.06
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Activating the kubelet service
    [certs] Using certificateDir folder "/etc/kubernetes/pki"
    [certs] Generating "etcd/ca" certificate and key
    [certs] Generating "etcd/server" certificate and key
    [certs] etcd/server serving cert is signed for DNS names [ken localhost] and IPs [172.20.10.2 127.0.0.1 ::1]
    [certs] Generating "etcd/peer" certificate and key
    [certs] etcd/peer serving cert is signed for DNS names [ken localhost] and IPs [172.20.10.2 127.0.0.1 ::1]
    [certs] Generating "etcd/healthcheck-client" certificate and key
    [certs] Generating "apiserver-etcd-client" certificate and key
    [certs] Generating "front-proxy-ca" certificate and key
    [certs] Generating "front-proxy-client" certificate and key
    [certs] Generating "ca" certificate and key
    [certs] Generating "apiserver" certificate and key
    [certs] apiserver serving cert is signed for DNS names [ken kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.20.10.2]
    [certs] Generating "apiserver-kubelet-client" certificate and key
    [certs] Generating "sa" key and public key
    [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
    [kubeconfig] Writing "admin.conf" kubeconfig file
    [kubeconfig] Writing "kubelet.conf" kubeconfig file
    [kubeconfig] Writing "controller-manager.conf" kubeconfig file
    [kubeconfig] Writing "scheduler.conf" kubeconfig file
    [control-plane] Using manifest folder "/etc/kubernetes/manifests"
    [control-plane] Creating static Pod manifest for "kube-apiserver"
    [control-plane] Creating static Pod manifest for "kube-controller-manager"
    [control-plane] Creating static Pod manifest for "kube-scheduler"
    [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
    [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
    [apiclient] All control plane components are healthy after 26.507041 seconds
    [uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "ken" as an annotation
    [mark-control-plane] Marking the node ken as control-plane by adding the label "node-role.kubernetes.io/master=''"
    [mark-control-plane] Marking the node ken as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
    [bootstrap-token] Using token: rn816q.zj0crlasganmrzsr
    [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
    [bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    Your Kubernetes master has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
      https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of machines by running the following on each node
    as root:
    
      kubeadm join 172.20.10.2:6443 --token rn816q.zj0crlasganmrzsr --discovery-token-ca-cert-hash sha256:e339e4dbf6bd1323c13e794760fff3cbeb7a3f6f42b71d4cb3cffdde72179903

    If initialization fails, clean up with the following commands and then initialize again:

    # kubeadm reset
    # ifconfig cni0 down
    # ip link delete cni0
    # ifconfig flannel.1 down
    # ip link delete flannel.1
    # rm -rf /var/lib/cni/
    # rm -rf /var/lib/etcd/*

    Images pulled by docker after a successful initialization:

    [root@ken ~]# docker image ls
    REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
    registry.aliyuncs.com/google_containers/kube-proxy                v1.13.1             fdb321fd30a0        6 weeks ago         80.2MB
    registry.aliyuncs.com/google_containers/kube-controller-manager   v1.13.1             26e6f1db2a52        6 weeks ago         146MB
    registry.aliyuncs.com/google_containers/kube-apiserver            v1.13.1             40a63db91ef8        6 weeks ago         181MB
    registry.aliyuncs.com/google_containers/kube-scheduler            v1.13.1             ab81d7360408        6 weeks ago         79.6MB
    tomcat                                                            latest              48dd385504b1        7 weeks ago         475MB
    memcached                                                         latest              8230c836a4b3        2 months ago        62.2MB
    registry.aliyuncs.com/google_containers/coredns                   1.2.6               f59dcacceff4        2 months ago        40MB
    busybox                                                           latest              59788edf1f3e        3 months ago        1.15MB
    registry.aliyuncs.com/google_containers/etcd                      3.2.24              3cab8e1b9802        4 months ago        220MB
    registry.aliyuncs.com/google_containers/pause                     3.1                 da86e6ba6ca1        13 months ago       742kB

    Step 3: Configure kubectl

    kubectl is the command-line tool for managing a Kubernetes cluster; we already installed it on all the nodes. After the master finishes initializing, a little configuration is needed before kubectl can be used.

    [root@ken ~]#  mkdir -p $HOME/.kube
    [root@ken ~]#  cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    [root@ken ~]# chown $(id -u):$(id -g) $HOME/.kube/config

    For convenience, enable auto-completion for kubectl commands.

    [root@ken ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
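
    On a minimal CentOS install, the completion script also needs the bash-completion package; if completion does not take effect, this is likely the missing piece (an assumption about the base install):

    [root@ken ~]# yum install bash-completion -y
    [root@ken ~]# source ~/.bashrc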

    kubectl is now ready to use:

    [root@ken ~]# kubectl get cs
    NAME                 STATUS    MESSAGE              ERROR
    scheduler            Healthy   ok                   
    controller-manager   Healthy   ok                   
    etcd-0               Healthy   {"health": "true"}   

    Step 4: Install a pod network

    For the Kubernetes cluster to work, a pod network must be installed; without it, pods cannot communicate with each other.

    Kubernetes supports several network add-ons. Here we start with flannel; Canal will be discussed later.

    [root@ken ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

    Restart kubelet on every node:

    [root@ken ~]# systemctl restart kubelet

    Once the images have finished downloading, the node's status becomes Ready:

    [root@ken ~]# kubectl get nodes
    NAME   STATUS   ROLES    AGE   VERSION
    ken    Ready    master   17m   v1.13.2

    Pod information is visible now as well:

    [root@ken ~]# kubectl get pods -n kube-system
    NAME                          READY   STATUS    RESTARTS   AGE
    coredns-78d4cf999f-dbxpc      1/1     Running   0          19m
    coredns-78d4cf999f-q9vq2      1/1     Running   0          19m
    etcd-ken                      1/1     Running   0          18m
    kube-apiserver-ken            1/1     Running   0          18m
    kube-controller-manager-ken   1/1     Running   0          18m
    kube-flannel-ds-amd64-fd8mv   1/1     Running   0          3m26s
    kube-proxy-gwmr2              1/1     Running   0          19m
    kube-scheduler-ken            1/1     Running   0          18m

    Adding k8s-node1 and k8s-node2

    Step 1: Prepare the environment

    1. Turn off the firewall and SELinux on the nodes

    2. Disable swap

    3. Make the host names resolve

    4. Enable the kernel bridge netfilter

    Enable kubelet

    It only needs to be set to start at boot:

    [root@host1 ~]#  systemctl enable kubelet

    Step 2: Add the nodes

    The --token here comes from the earlier kubeadm init output; if you did not write it down at the time, you can look it up with kubeadm token list.

    kubeadm join 172.20.10.2:6443 --token rn816q.zj0crlasganmrzsr --discovery-token-ca-cert-hash sha256:e339e4dbf6bd1323c13e794760fff3cbeb7a3f6f42b71d4cb3cffdde72179903

    It prints output like the following:

    [root@host2 ~]# kubeadm join 172.20.10.2:6443 --token rn816q.zj0crlasganmrzsr --discovery-token-ca-cert-hash sha256:e339e4dbf6bd1323c13e794760fff3cbeb7a3f6f42b71d4cb3cffdde72179903
    [preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.1. Latest validated version: 18.06
    [discovery] Trying to connect to API Server "172.20.10.2:6443"
    [discovery] Created cluster-info discovery client, requesting info from "https://172.20.10.2:6443"
    [discovery] Requesting info from "https://172.20.10.2:6443" again to validate TLS against the pinned public key
    [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.20.10.2:6443"
    [discovery] Successfully established connection with API Server "172.20.10.2:6443"
    [join] Reading configuration from the cluster...
    [join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Activating the kubelet service
    [tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
    [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "host2" as an annotation
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the master to see this node join the cluster.

    Step 3: Check the nodes

    Following the hint in the last line of the output above, check the nodes on the master:

    [root@ken ~]# kubectl get nodes
    NAME    STATUS     ROLES    AGE     VERSION
    host1   NotReady   <none>   2m54s   v1.13.2
    host2   NotReady   <none>   2m16s   v1.13.2
    ken     Ready      master   38m     v1.13.2

    You need to wait a little while here before the new nodes become Ready, because each node has to download four images: flannel, coredns, kube-proxy, and pause.

    Checking the node status again a bit later:

    [root@ken ~]# kubectl get nodes
    NAME    STATUS   ROLES    AGE     VERSION
    host1   Ready    <none>   4m15s   v1.13.2
    host2   Ready    <none>   3m37s   v1.13.2
    ken     Ready    master   39m     v1.13.2

    Extra: How to remove a node

    Step 1: Put the node into maintenance mode (host1 is the node name)

    [root@ken ~]# kubectl drain host1 --delete-local-data --force --ignore-daemonsets
    node/host1 cordoned
    WARNING: Ignoring DaemonSet-managed pods: kube-flannel-ds-amd64-ssqcl, kube-proxy-7cnsr
    node/host1 drained

    Step 2: Delete the node

    [root@ken ~]# kubectl delete node host1
    node "host1" deleted

     

    Step 3: Check the nodes

    host1 has now been removed:

    [root@ken ~]# kubectl get nodes
    NAME    STATUS   ROLES    AGE   VERSION
    host2   Ready    <none>   13m   v1.13.2
    ken     Ready    master   49m   v1.13.2

    If you then want to add this node back, two clean-up operations are required on the node before it can rejoin:

    Step 1: Stop kubelet (on the node being re-added)

    [root@host1 ~]# systemctl stop kubelet

    Step 2: Delete the generated files

    [root@host1 ~]# rm -rf /etc/kubernetes/*

    Step 3: Join the node

    [root@host1 ~]# kubeadm join 172.20.10.2:6443 --token rn816q.zj0crlasganmrzsr --discovery-token-ca-cert-hash sha256:e339e4dbf6bd1323c13e794760fff3cbeb7a3f6f42b71d4cb3cffdde72179903

    Step 4: Check the nodes

    [root@ken ~]# kubectl get nodes
    NAME    STATUS   ROLES    AGE   VERSION
    host1   Ready    <none>   13s   v1.13.2
    host2   Ready    <none>   17m   v1.13.2
    ken     Ready    master   53m   v1.13.2

    Rejoining the cluster after losing the token

    Step 1: Run the following on the master

    Get the token:

    [root@ken-master ~]# kubeadm token list
    TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
    ojxdod.fb7tqipat46yp8ti   10h       2019-05-06T04:55:42+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

    Step 2: Get the sha256 hash of the CA certificate

    [root@ken-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
    2f8888cdb01191ff6dbca0edb02dbb21a14469028e4ff2598854a4544c5fa751
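
    As a shortcut, kubeadm can also print a ready-made join command that combines the token and the hash, so the two manual steps above can be skipped (assuming your kubeadm version supports the flag):

    [root@ken-master ~]# kubeadm token create --print-join-command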

    Step 3: Run the following on the node

    [root@ken-node1 ~]# systemctl stop kubelet

    Step 4: Delete the generated files

    [root@ken-node1 ~]# rm -rf /etc/kubernetes/*

    Step 5: Join the cluster

    Specify the master's IP; the port is 6443.

    Note that the certificate hash must be prefixed with sha256: on the command line.

    [root@ken-node1 ~]# kubeadm join 192.168.64.10:6443 --token ojxdod.fb7tqipat46yp8ti  --discovery-token-ca-cert-hash sha256:2f8888cdb01191ff6dbca0edb02dbb21a14469028e4ff2598854a4544c5fa751