  DaemonSet, Job and CronJob Controllers in the k8s Container Orchestration System

      In the previous post we covered the two most commonly used pod controllers, ReplicaSet and Deployment; for a refresher see https://www.cnblogs.com/qiuhom-1874/p/14149042.html. Today we look at the DaemonSet, Job and CronJob controllers.

      1. DaemonSet controller

      As the name suggests, this controller manages daemon-like pods. A DaemonSet is typically used when exactly one pod of a kind should run on every node, for example an agent that collects node logs and ships them to Elasticsearch. A DaemonSet (ds for short) resembles a Deployment, except that you never specify a replica count: the number of pods follows the number of cluster nodes. When a node joins, the ds automatically creates a pod on it; when a node is removed, the pod that ran there is not rescheduled elsewhere. In short, each node runs at most one pod of a given DaemonSet. A ds also supports a node selector for selective scheduling: pods are created only on nodes that carry a matching label, and nowhere else. Update operations otherwise work much like a Deployment.

      Example: creating a DaemonSet

    [root@master01 ~]# cat ds-demo-nginx-1.14.yaml
    apiVersion: apps/v1
    kind: DaemonSet
    metadata: 
      name: ds-demo
      namespace: default
    spec:
      selector: 
        matchLabels:
          app: ngx-ds
      template:
        metadata:
          labels:
            app: ngx-ds
        spec:
          containers:
          - name: nginx
            image: nginx:1.14-alpine
            ports:
            - name: http
              containerPort: 80
      minReadySeconds: 5
    [root@master01 ~]# 
    

      Tip: for a ds, the essential parts of spec are the selector and the pod template, defined exactly as for a Deployment. The manifest above runs one nginx pod per node, with the pod label app=ngx-ds.

      Apply the manifest

    [root@master01 ~]# kubectl apply -f ds-demo-nginx-1.14.yaml
    daemonset.apps/ds-demo created
    [root@master01 ~]# kubectl get ds -o wide
    NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES              SELECTOR
    ds-demo   3         3         3       3            3           <none>          14s   nginx        nginx:1.14-alpine   app=ngx-ds
    

      Tip: no pod count was specified; the controller creates one pod on every eligible node based on the node count.
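
      Note that DESIRED is 3 even though the cluster has four nodes: the control-plane node master01 carries the node-role.kubernetes.io/master:NoSchedule taint, so the ds skips it. If the daemon should run on the master as well, a toleration can be added to the pod template; a minimal sketch, not applied in this walkthrough:

    # fragment of the DaemonSet spec; only the tolerations part is new
    spec:
      template:
        spec:
          tolerations:
          - key: node-role.kubernetes.io/master   # control-plane taint set by kubeadm
            operator: Exists
            effect: NoSchedule
          containers:
          - name: nginx
            image: nginx:1.14-alpine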

      Verify: list the pods and check whether exactly one pod was scheduled onto each node.

    [root@master01 ~]# kubectl get pod -o wide
    NAME            READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
    ds-demo-fm9cb   1/1     Running   0          27s   10.244.1.57   node01.k8s.org   <none>           <none>
    ds-demo-pspbk   1/1     Running   0          27s   10.244.3.57   node03.k8s.org   <none>           <none>
    ds-demo-zvpbb   1/1     Running   0          27s   10.244.2.69   node02.k8s.org   <none>           <none>
    [root@master01 ~]#
    

      Tip: each node is indeed running exactly one of these pods.

      Defining a node selector

    [root@master01 ~]# cat ds-demo-nginx-1.14.yaml
    apiVersion: apps/v1
    kind: DaemonSet
    metadata: 
      name: ds-demo
      namespace: default
    spec:
      selector: 
        matchLabels:
          app: ngx-ds
      template:
        metadata:
          labels:
            app: ngx-ds
        spec:
          containers:
          - name: nginx
            image: nginx:1.14-alpine
            ports:
            - name: http
              containerPort: 80
          nodeSelector:
            app: nginx-1.14-alpine
      minReadySeconds: 5
    [root@master01 ~]# 
    

      Tip: a node selector is defined with the nodeSelector field under the pod template's spec; its value is a map. With the configuration above, a pod is created only on nodes labeled app=nginx-1.14-alpine; on all other nodes nothing is created.
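
      Before applying, you can check which labels the nodes currently carry; right now no node has the app=nginx-1.14-alpine label, which is easy to confirm with a label selector (output of the first command abridged):

    [root@master01 ~]# kubectl get nodes --show-labels
    [root@master01 ~]# kubectl get nodes -l app=nginx-1.14-alpine
    No resources found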

      Apply the manifest

    [root@master01 ~]# kubectl get ds
    NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    ds-demo   3         3         3       3            3           <none>          14m
    [root@master01 ~]# kubectl apply -f ds-demo-nginx-1.14.yaml
    daemonset.apps/ds-demo configured
    [root@master01 ~]# kubectl get ds
    NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR           AGE
    ds-demo   0         0         0       0            0           app=nginx-1.14-alpine   14m
    [root@master01 ~]# kubectl get pod
    NAME            READY   STATUS        RESTARTS   AGE
    ds-demo-pspbk   0/1     Terminating   0          14m
    [root@master01 ~]# kubectl get pod
    No resources found in default namespace.
    [root@master01 ~]# 
    

      Tip: after adding the node selector, all pods were deleted. No node in the cluster carries the label the selector asks for, so no node satisfies the scheduling constraint, and the controller removes the pods.

      Test: label node01.k8s.org with app=nginx-1.14-alpine and see whether a pod gets created on that node.

    [root@master01 ~]# kubectl label node node01.k8s.org app=nginx-1.14-alpine
    node/node01.k8s.org labeled
    [root@master01 ~]# kubectl get ds -o wide
    NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR           AGE   CONTAINERS   IMAGES              SELECTOR
    ds-demo   1         1         1       1            1           app=nginx-1.14-alpine   20m   nginx        nginx:1.14-alpine   app=ngx-ds
    [root@master01 ~]# kubectl get pod -o wide
    NAME            READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
    ds-demo-8hfnq   1/1     Running   0          18s   10.244.1.58   node01.k8s.org   <none>           <none>
    [root@master01 ~]# 
    

      Tip: as soon as a k8s node carries a label matching the node selector, the pod is scheduled precisely onto that node.
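
      The reverse also works: deleting the label evicts the pod from the node again. A label is removed by suffixing its key with a dash, shown here for illustration only; the walkthrough below instead drops the nodeSelector from the manifest:

    [root@master01 ~]# kubectl label node node01.k8s.org app-   # remove the app label from node01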

      Remove the node selector, re-apply the manifest, then add a new node and see whether a pod is created on it automatically.

      Remove the node selector and apply the manifest

    [root@master01 ~]# cat ds-demo-nginx-1.14.yaml
    apiVersion: apps/v1
    kind: DaemonSet
    metadata: 
      name: ds-demo
      namespace: default
    spec:
      selector: 
        matchLabels:
          app: ngx-ds
      template:
        metadata:
          labels:
            app: ngx-ds
        spec:
          containers:
          - name: nginx
            image: nginx:1.14-alpine
            ports:
            - name: http
              containerPort: 80
      minReadySeconds: 5
    [root@master01 ~]# kubectl apply -f ds-demo-nginx-1.14.yaml
    daemonset.apps/ds-demo configured
    [root@master01 ~]# kubectl get ds -o wide
    NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES              SELECTOR
    ds-demo   3         3         3       3            3           <none>          26m   nginx        nginx:1.14-alpine   app=ngx-ds
    [root@master01 ~]# 
    

      Prepare a new node with hostname node04.k8s.org; for the setup steps see https://www.cnblogs.com/qiuhom-1874/p/14126750.html

      Generate the join command on the master node

    [root@master01 ~]# kubeadm token create --print-join-command 
    kubeadm join 192.168.0.41:6443 --token 8rdaut.qeeyf9cw5e1dur8f     --discovery-token-ca-cert-hash sha256:330db1e5abff4d0e62150596f3e989cde40e61bdc73d6477170d786fcc1cfc67 
    [root@master01 ~]# 
    

      Copy the command and run it on node04

    [root@node04 ~]# kubeadm join 192.168.0.41:6443 --token 8rdaut.qeeyf9cw5e1dur8f     --discovery-token-ca-cert-hash sha256:330db1e5abff4d0e62150596f3e989cde40e61bdc73d6477170d786fcc1cfc67 --ignore-preflight-errors=Swap
    [preflight] Running pre-flight checks
            [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
            [WARNING Swap]: running with swap on is not supported. Please disable swap
            [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.1. Latest validated version: 19.03
            [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
    [kubelet-start] Starting the kubelet
    [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
    
    This node has joined the cluster:
    * Certificate signing request was sent to apiserver and a response was received.
    * The Kubelet was informed of the new secure connection details.
    
    Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
    
    [root@node04 ~]# 
    

      Tip: if swap is enabled on the node, the --ignore-preflight-errors=Swap option must be appended to the join command.
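
      Alternatively, disable swap on the node outright so the warning goes away for good; standard Linux commands (adjust the fstab edit to the actual swap entry):

    [root@node04 ~]# swapoff -a                                # turn swap off immediately
    [root@node04 ~]# sed -ri 's/(.*swap.*)/#\1/' /etc/fstab    # comment out swap so it stays off after reboot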

      Check the node status on the master to confirm node04 has joined the cluster

    [root@master01 ~]# kubectl get node
    NAME               STATUS     ROLES                  AGE    VERSION
    master01.k8s.org   Ready      control-plane,master   10d    v1.20.0
    node01.k8s.org     Ready      <none>                 10d    v1.20.0
    node02.k8s.org     Ready      <none>                 10d    v1.20.0
    node03.k8s.org     Ready      <none>                 10d    v1.20.0
    node04.k8s.org     NotReady   <none>                 117s   v1.20.0
    [root@master01 ~]# 
    

      Tip: node04 has joined but is not yet Ready. Once it becomes Ready, check whether the ds pod count grows and whether an nginx pod starts on node04 automatically.

      Check the ds to see how many pods are now running

    [root@master01 ~]# kubectl get node
    NAME               STATUS   ROLES                  AGE     VERSION
    master01.k8s.org   Ready    control-plane,master   10d     v1.20.0
    node01.k8s.org     Ready    <none>                 10d     v1.20.0
    node02.k8s.org     Ready    <none>                 10d     v1.20.0
    node03.k8s.org     Ready    <none>                 10d     v1.20.0
    node04.k8s.org     Ready    <none>                 8m10s   v1.20.0
    [root@master01 ~]# kubectl get ds
    NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    ds-demo   4         4         4       4            4           <none>          53m
    [root@master01 ~]# kubectl get pod -o wide
    NAME            READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
    ds-demo-g74s8   1/1     Running   0          72s   10.244.4.2    node04.k8s.org   <none>           <none>
    ds-demo-h4b77   1/1     Running   0          27m   10.244.2.70   node02.k8s.org   <none>           <none>
    ds-demo-hpmrg   1/1     Running   0          27m   10.244.3.58   node03.k8s.org   <none>           <none>
    ds-demo-kjf6f   1/1     Running   0          27m   10.244.1.59   node01.k8s.org   <none>           <none>
    [root@master01 ~]# 
    

      Tip: once the new node is Ready, the controller automatically creates a pod on it.

      Updating the pod image version

    [root@master01 ~]# cat ds-demo-nginx-1.14.yaml
    apiVersion: apps/v1
    kind: DaemonSet
    metadata: 
      name: ds-demo
      namespace: default
    spec:
      selector: 
        matchLabels:
          app: ngx-ds
      template:
        metadata:
          labels:
            app: ngx-ds
        spec:
          containers:
          - name: nginx
            image: nginx:1.16-alpine
            ports:
            - name: http
              containerPort: 80
      minReadySeconds: 5
    [root@master01 ~]# kubectl get ds -o wide
    NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES              SELECTOR
    ds-demo   4         4         4       4            4           <none>          55m   nginx        nginx:1.14-alpine   app=ngx-ds
    [root@master01 ~]# kubectl get pod -o wide
    NAME            READY   STATUS    RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
    ds-demo-g74s8   1/1     Running   0          3m31s   10.244.4.2    node04.k8s.org   <none>           <none>
    ds-demo-h4b77   1/1     Running   0          30m     10.244.2.70   node02.k8s.org   <none>           <none>
    ds-demo-hpmrg   1/1     Running   0          30m     10.244.3.58   node03.k8s.org   <none>           <none>
    ds-demo-kjf6f   1/1     Running   0          30m     10.244.1.59   node01.k8s.org   <none>           <none>
    [root@master01 ~]# kubectl apply -f ds-demo-nginx-1.14.yaml
    daemonset.apps/ds-demo configured
    [root@master01 ~]# kubectl get ds -o wide                  
    NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES              SELECTOR
    ds-demo   4         4         3       0            3           <none>          56m   nginx        nginx:1.16-alpine   app=ngx-ds
    [root@master01 ~]# kubectl get pod -o wide
    NAME            READY   STATUS              RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
    ds-demo-47gtq   0/1     ContainerCreating   0          7s    <none>        node04.k8s.org   <none>           <none>
    ds-demo-h4b77   1/1     Running             0          31m   10.244.2.70   node02.k8s.org   <none>           <none>
    ds-demo-jp9dz   1/1     Running             0          38s   10.244.1.60   node01.k8s.org   <none>           <none>
    ds-demo-t4njt   1/1     Running             0          21s   10.244.3.59   node03.k8s.org   <none>           <none>
    [root@master01 ~]# kubectl get pod -o wide
    NAME            READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
    ds-demo-47gtq   1/1     Running   0          37s   10.244.4.3    node04.k8s.org   <none>           <none>
    ds-demo-8txr9   1/1     Running   0          14s   10.244.2.71   node02.k8s.org   <none>           <none>
    ds-demo-jp9dz   1/1     Running   0          68s   10.244.1.60   node01.k8s.org   <none>           <none>
    ds-demo-t4njt   1/1     Running   0          51s   10.244.3.59   node03.k8s.org   <none>           <none>
    [root@master01 ~]# 
    

      Tip: after changing the image version in the pod template and applying, the pods are updated one node at a time.

      View the ds details

    [root@master01 ~]# kubectl describe ds ds-demo
    Name:           ds-demo
    Selector:       app=ngx-ds
    Node-Selector:  <none>
    Labels:         <none>
    Annotations:    deprecated.daemonset.template.generation: 4
    Desired Number of Nodes Scheduled: 4
    Current Number of Nodes Scheduled: 4
    Number of Nodes Scheduled with Up-to-date Pods: 4
    Number of Nodes Scheduled with Available Pods: 4
    Number of Nodes Misscheduled: 0
    Pods Status:  4 Running / 0 Waiting / 0 Succeeded / 0 Failed
    Pod Template:
      Labels:  app=ngx-ds
      Containers:
       nginx:
        Image:        nginx:1.16-alpine
        Port:         80/TCP
        Host Port:    0/TCP
        Environment:  <none>
        Mounts:       <none>
      Volumes:        <none>
    Events:
      Type    Reason            Age                From                  Message
      ----    ------            ----               ----                  -------
      Normal  SuccessfulCreate  59m                daemonset-controller  Created pod: ds-demo-fm9cb
      Normal  SuccessfulCreate  59m                daemonset-controller  Created pod: ds-demo-zvpbb
      Normal  SuccessfulCreate  59m                daemonset-controller  Created pod: ds-demo-pspbk
      Normal  SuccessfulDelete  44m (x2 over 44m)  daemonset-controller  Deleted pod: ds-demo-fm9cb
      Normal  SuccessfulDelete  44m (x2 over 44m)  daemonset-controller  Deleted pod: ds-demo-pspbk
      Normal  SuccessfulDelete  44m (x2 over 44m)  daemonset-controller  Deleted pod: ds-demo-zvpbb
      Normal  SuccessfulCreate  38m                daemonset-controller  Created pod: ds-demo-8hfnq
      Normal  SuccessfulCreate  33m                daemonset-controller  Created pod: ds-demo-h4b77
      Normal  SuccessfulCreate  33m                daemonset-controller  Created pod: ds-demo-hpmrg
      Normal  SuccessfulDelete  33m                daemonset-controller  Deleted pod: ds-demo-8hfnq
      Normal  SuccessfulCreate  33m                daemonset-controller  Created pod: ds-demo-kjf6f
      Normal  SuccessfulCreate  6m57s              daemonset-controller  Created pod: ds-demo-g74s8
      Normal  SuccessfulDelete  3m8s               daemonset-controller  Deleted pod: ds-demo-kjf6f
      Normal  SuccessfulCreate  2m58s              daemonset-controller  Created pod: ds-demo-jp9dz
      Normal  SuccessfulDelete  2m52s              daemonset-controller  Deleted pod: ds-demo-hpmrg
      Normal  SuccessfulCreate  2m41s              daemonset-controller  Created pod: ds-demo-t4njt
      Normal  SuccessfulDelete  2m35s              daemonset-controller  Deleted pod: ds-demo-g74s8
      Normal  SuccessfulCreate  2m27s              daemonset-controller  Created pod: ds-demo-47gtq
      Normal  SuccessfulDelete  2m13s              daemonset-controller  Deleted pod: ds-demo-h4b77
      Normal  SuccessfulCreate  2m4s               daemonset-controller  Created pod: ds-demo-8txr9
    [root@master01 ~]# 
    

      Updating the image with a command

    [root@master01 ~]# kubectl set image ds ds-demo nginx=nginx:1.18-alpine --record
    daemonset.apps/ds-demo image updated
    [root@master01 ~]# kubectl get ds -o wide
    NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE   CONTAINERS   IMAGES              SELECTOR
    ds-demo   4         4         3       0            3           <none>          84m   nginx        nginx:1.18-alpine   app=ngx-ds
    [root@master01 ~]# kubectl rollout status ds/ds-demo
    Waiting for daemon set "ds-demo" rollout to finish: 1 out of 4 new pods have been updated...
    Waiting for daemon set "ds-demo" rollout to finish: 2 out of 4 new pods have been updated...
    Waiting for daemon set "ds-demo" rollout to finish: 2 out of 4 new pods have been updated...
    Waiting for daemon set "ds-demo" rollout to finish: 2 out of 4 new pods have been updated...
    Waiting for daemon set "ds-demo" rollout to finish: 2 out of 4 new pods have been updated...
    Waiting for daemon set "ds-demo" rollout to finish: 3 out of 4 new pods have been updated...
    Waiting for daemon set "ds-demo" rollout to finish: 3 out of 4 new pods have been updated...
    Waiting for daemon set "ds-demo" rollout to finish: 3 out of 4 new pods have been updated...
    Waiting for daemon set "ds-demo" rollout to finish: 3 out of 4 new pods have been updated...
    Waiting for daemon set "ds-demo" rollout to finish: 3 of 4 updated pods are available...
    Waiting for daemon set "ds-demo" rollout to finish: 3 of 4 updated pods are available...
    daemon set "ds-demo" successfully rolled out
    [root@master01 ~]# kubectl get pod -o wide
    NAME            READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE   READINESS GATES
    ds-demo-6qr6g   1/1     Running   0          70s   10.244.2.77   node02.k8s.org   <none>           <none>
    ds-demo-7gnxd   1/1     Running   0          57s   10.244.3.66   node03.k8s.org   <none>           <none>
    ds-demo-g44bd   1/1     Running   0          24s   10.244.1.66   node01.k8s.org   <none>           <none>
    ds-demo-hb8vl   1/1     Running   0          43s   10.244.4.10   node04.k8s.org   <none>           <none>
    [root@master01 ~]# kubectl describe pod/ds-demo-6qr6g |grep Image
        Image:          nginx:1.18-alpine
        Image ID:       docker-pullable://nginx@sha256:a7bdf9e789a40bf112c87672a2495fc49de7c89f184a252d59061c1ae800ee52
    [root@master01 ~]# 
    

      Tip: the default update strategy deletes one pod and then creates its replacement, one node at a time.
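
      Because --record was used, the rollout history carries the commands, and a ds can be rolled back just like a Deployment; standard kubectl commands, output omitted:

    [root@master01 ~]# kubectl rollout history ds ds-demo
    [root@master01 ~]# kubectl rollout undo ds ds-demo --to-revision=1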

      Defining an update strategy

    [root@master01 ~]# cat ds-demo-nginx-1.14.yaml
    apiVersion: apps/v1
    kind: DaemonSet
    metadata: 
      name: ds-demo
      namespace: default
    spec:
      selector: 
        matchLabels:
          app: ngx-ds
      template:
        metadata:
          labels:
            app: ngx-ds
        spec:
          containers:
          - name: nginx
            image: nginx:1.16-alpine
            ports:
            - name: http
              containerPort: 80
      minReadySeconds: 5
      updateStrategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 2
    [root@master01 ~]# 
    

      Tip: the update strategy of a ds is set with the updateStrategy field under spec; its value is an object. type selects the strategy and accepts two values, OnDelete and RollingUpdate. The rollingUpdate field tunes the rollout and is only meaningful when type is RollingUpdate; its maxUnavailable sets how many pods may be deleted at once (the maximum number of unavailable pods). A ds can only delete first and create afterwards, never the reverse, because each node may run only one pod of the set; the default is delete one, create one. The configuration above deletes two at a time.
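
      For comparison, the OnDelete strategy leaves running pods untouched after apply; each pod picks up the new template only once it is deleted by hand. A sketch of that variant, not applied here:

    spec:
      updateStrategy:
        type: OnDelete   # pods are replaced only when deleted manually, giving full control over rollout pace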

      Apply the manifest and watch the rollout

    [root@master01 ~]# kubectl apply -f ds-demo-nginx-1.14.yaml && kubectl get pod -w
    daemonset.apps/ds-demo configured
    NAME            READY   STATUS        RESTARTS   AGE
    ds-demo-4k2x7   1/1     Terminating   0          15m
    ds-demo-b9djn   1/1     Running       0          16m
    ds-demo-bxkj7   1/1     Running       0          15m
    ds-demo-cg49r   1/1     Terminating   0          16m
    ds-demo-cg49r   0/1     Terminating   0          16m
    ds-demo-4k2x7   0/1     Terminating   0          15m
    ds-demo-cg49r   0/1     Terminating   0          16m
    ds-demo-cg49r   0/1     Terminating   0          16m
    ds-demo-dtsgc   0/1     Pending       0          0s
    ds-demo-dtsgc   0/1     Pending       0          0s
    ds-demo-dtsgc   0/1     ContainerCreating   0          0s
    ds-demo-dtsgc   1/1     Running             0          2s
    ds-demo-4k2x7   0/1     Terminating         0          15m
    ds-demo-4k2x7   0/1     Terminating         0          15m
    ds-demo-8d7g9   0/1     Pending             0          0s
    ds-demo-8d7g9   0/1     Pending             0          0s
    ds-demo-8d7g9   0/1     ContainerCreating   0          0s
    ds-demo-8d7g9   1/1     Running             0          1s
    ds-demo-b9djn   1/1     Terminating         0          16m
    ds-demo-b9djn   0/1     Terminating         0          16m
    ds-demo-bxkj7   1/1     Terminating         0          16m
    ds-demo-bxkj7   0/1     Terminating         0          16m
    ds-demo-b9djn   0/1     Terminating         0          16m
    ds-demo-b9djn   0/1     Terminating         0          16m
    ds-demo-dkxfs   0/1     Pending             0          0s
    ds-demo-dkxfs   0/1     Pending             0          0s
    ds-demo-dkxfs   0/1     ContainerCreating   0          0s
    ds-demo-dkxfs   1/1     Running             0          2s
    ds-demo-bxkj7   0/1     Terminating         0          16m
    ds-demo-bxkj7   0/1     Terminating         0          16m
    ds-demo-q6b5f   0/1     Pending             0          0s
    ds-demo-q6b5f   0/1     Pending             0          0s
    ds-demo-q6b5f   0/1     ContainerCreating   0          0s
    ds-demo-q6b5f   1/1     Running             0          1s
    

      Tip: with this strategy the update now deletes two pods at a time and then creates two replacements.

      2. Job controller

      A Job controller runs one or more pods to carry out a task, and the pods exit once the task completes. If a pod fails while working, the Job handles it according to the restart policy until the task finishes and the pod exits normally; with a restart policy of Never, a failed pod is not restarted, but since the task is still unfinished the controller creates a brand-new pod to run it again until it completes and exits normally.

      Pod states under a Job controller

      Tip: a pod that finishes its task and exits normally ends in the Completed state. If a pod exits abnormally (non-zero exit code) and the restart policy is Never, the pod is not restarted and its state becomes Failed; the task itself still has to be done, so in this case the Job controller creates a new pod and runs the task again. If a pod exits abnormally and the restart policy is OnFailure, the pod is restarted and the task retried until it eventually completes and the pod ends up Completed.
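
      In practice it is wise to bound the retries so a broken task cannot respawn pods forever; the Job spec offers backoffLimit and activeDeadlineSeconds for this. A fragment showing the fields (values are illustrative, not used in the demos below):

    spec:
      backoffLimit: 4              # mark the Job failed after 4 failed attempts
      activeDeadlineSeconds: 120   # mark the Job failed if it runs longer than 120s overall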

      Job execution modes

      Serial execution

      Tip: in serial execution only one pod exists at a time; the next pod is created only after the previous pod has finished its task.

      Parallel execution

      Tip: in parallel execution several pods are started and work on the task at the same time.

      Example: defining a Job

    [root@master01 ~]# cat job-demo.yaml
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: job-demo
    spec:
      template:
        metadata:
          labels:
            app: myjob
        spec:
          containers:
          - name: myjob
            image: alpine
            command: ["/bin/sh",  "-c", "sleep 10"]
          restartPolicy: Never
    [root@master01 ~]# 
    

      Tip: the essential part of a Job is the pod template, defined with the template field under spec, exactly as for the other pod controllers.

      Apply the manifest

    [root@master01 ~]# kubectl apply -f job-demo.yaml
    job.batch/job-demo created
    [root@master01 ~]# kubectl get jobs -o wide
    NAME       COMPLETIONS   DURATION   AGE   CONTAINERS   IMAGES   SELECTOR
    job-demo   0/1           7s         7s    myjob        alpine   controller-uid=4ded17a8-fc39-480e-8f37-6bb2bd328997
    [root@master01 ~]# kubectl get pod 
    NAME             READY   STATUS    RESTARTS   AGE
    ds-demo-8d7g9    1/1     Running   0          91m
    ds-demo-dkxfs    1/1     Running   0          91m
    ds-demo-dtsgc    1/1     Running   0          91m
    ds-demo-q6b5f    1/1     Running   0          91m
    job-demo-4h9gb   1/1     Running   0          16s
    [root@master01 ~]# kubectl get pod 
    NAME             READY   STATUS      RESTARTS   AGE
    ds-demo-8d7g9    1/1     Running     0          91m
    ds-demo-dkxfs    1/1     Running     0          91m
    ds-demo-dtsgc    1/1     Running     0          91m
    ds-demo-q6b5f    1/1     Running     0          91m
    job-demo-4h9gb   0/1     Completed   0          30s
    [root@master01 ~]# 
    

      Tip: after the Job is created, the pod runs its task, exits normally, and ends in the Completed state.
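
      A Completed pod is kept around until its Job is deleted, so the task output stays readable; here the container only sleeps, so the log is simply empty (pod name taken from the run above):

    [root@master01 ~]# kubectl logs job-demo-4h9gb
    [root@master01 ~]#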

      Defining a Job with multiple completions

    [root@master01 ~]# cat job-multi.yaml
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: job-multi-demo
    spec:
      completions: 6
      template:
        metadata:
          labels:
            app: myjob
        spec:
          containers:
          - name: myjob
            image: alpine
            command: ["/bin/sh",  "-c", "sleep 10"]
          restartPolicy: Never
    [root@master01 ~]# 
    

      Tip: the completions field under spec sets how many pods must finish successfully for the Job to be complete. The configuration above tells job-multi-demo to run the task to completion 6 times.

      Apply the manifest

    [root@master01 ~]# kubectl apply -f job-multi.yaml
    job.batch/job-multi-demo created
    [root@master01 ~]# kubectl get jobs -o wide
    NAME             COMPLETIONS   DURATION   AGE     CONTAINERS   IMAGES   SELECTOR
    job-demo         1/1           18s        9m49s   myjob        alpine   controller-uid=4ded17a8-fc39-480e-8f37-6bb2bd328997
    job-multi-demo   0/6           6s         6s      myjob        alpine   controller-uid=80f44cbd-f7e5-4eb4-a286-39bfcfc9fe39
    [root@master01 ~]# kubectl get pods -o wide
    NAME                   READY   STATUS      RESTARTS   AGE    IP            NODE             NOMINATED NODE   READINESS GATES
    ds-demo-8d7g9          1/1     Running     0          101m   10.244.3.69   node03.k8s.org   <none>           <none>
    ds-demo-dkxfs          1/1     Running     0          101m   10.244.1.69   node01.k8s.org   <none>           <none>
    ds-demo-dtsgc          1/1     Running     0          101m   10.244.4.13   node04.k8s.org   <none>           <none>
    ds-demo-q6b5f          1/1     Running     0          100m   10.244.2.80   node02.k8s.org   <none>           <none>
    job-demo-4h9gb         0/1     Completed   0          10m    10.244.3.70   node03.k8s.org   <none>           <none>
    job-multi-demo-rbw7d   1/1     Running     0          21s    10.244.1.70   node01.k8s.org   <none>           <none>
    [root@master01 ~]# kubectl get pods -o wide
    NAME                   READY   STATUS      RESTARTS   AGE    IP            NODE             NOMINATED NODE   READINESS GATES
    ds-demo-8d7g9          1/1     Running     0          101m   10.244.3.69   node03.k8s.org   <none>           <none>
    ds-demo-dkxfs          1/1     Running     0          101m   10.244.1.69   node01.k8s.org   <none>           <none>
    ds-demo-dtsgc          1/1     Running     0          101m   10.244.4.13   node04.k8s.org   <none>           <none>
    ds-demo-q6b5f          1/1     Running     0          101m   10.244.2.80   node02.k8s.org   <none>           <none>
    job-demo-4h9gb         0/1     Completed   0          10m    10.244.3.70   node03.k8s.org   <none>           <none>
    job-multi-demo-f7rz4   1/1     Running     0          21s    10.244.3.71   node03.k8s.org   <none>           <none>
    job-multi-demo-rbw7d   0/1     Completed   0          43s    10.244.1.70   node01.k8s.org   <none>           <none>
    [root@master01 ~]#
    

      Tip: when no parallelism is specified it defaults to 1, so the pods run the task one after another, serially.

      Defining parallelism

    [root@master01 ~]# cat job-multi.yaml
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: job-multi-demo2
    spec:
      completions: 6
      parallelism: 2
      template:
        metadata:
          labels:
            app: myjob
        spec:
          containers:
          - name: myjob
            image: alpine
            command: ["/bin/sh",  "-c", "sleep 10"]
          restartPolicy: Never
    [root@master01 ~]# 
    

      Tip: parallelism is set with the parallelism field under spec; it is the number of pods that run at once. The configuration above runs 2 pods at a time, i.e. 2 pods working in parallel.

      Apply the manifest

    [root@master01 ~]# kubectl apply -f job-multi.yaml
    job.batch/job-multi-demo2 created
    [root@master01 ~]# kubectl get jobs -o wide
    NAME              COMPLETIONS   DURATION   AGE     CONTAINERS   IMAGES   SELECTOR
    job-demo          1/1           18s        18m     myjob        alpine   controller-uid=4ded17a8-fc39-480e-8f37-6bb2bd328997
    job-multi-demo    6/6           116s       8m49s   myjob        alpine   controller-uid=80f44cbd-f7e5-4eb4-a286-39bfcfc9fe39
    job-multi-demo2   0/6           8s         8s      myjob        alpine   controller-uid=d40f47ea-e58d-4424-97bd-7fda6bdf4e43
    [root@master01 ~]# kubectl get pod -o wide
    NAME                    READY   STATUS      RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
    ds-demo-8d7g9           1/1     Running     0          109m    10.244.3.69   node03.k8s.org   <none>           <none>
    ds-demo-dkxfs           1/1     Running     0          109m    10.244.1.69   node01.k8s.org   <none>           <none>
    ds-demo-dtsgc           1/1     Running     0          110m    10.244.4.13   node04.k8s.org   <none>           <none>
    ds-demo-q6b5f           1/1     Running     0          109m    10.244.2.80   node02.k8s.org   <none>           <none>
    job-demo-4h9gb          0/1     Completed   0          18m     10.244.3.70   node03.k8s.org   <none>           <none>
    job-multi-demo-f7rz4    0/1     Completed   0          8m44s   10.244.3.71   node03.k8s.org   <none>           <none>
    job-multi-demo-hhcrm    0/1     Completed   0          7m23s   10.244.3.72   node03.k8s.org   <none>           <none>
    job-multi-demo-kjmld    0/1     Completed   0          8m20s   10.244.2.81   node02.k8s.org   <none>           <none>
    job-multi-demo-lfzrj    0/1     Completed   0          8m1s    10.244.2.82   node02.k8s.org   <none>           <none>
    job-multi-demo-rbw7d    0/1     Completed   0          9m6s    10.244.1.70   node01.k8s.org   <none>           <none>
    job-multi-demo-vdkrm    0/1     Completed   0          7m41s   10.244.2.83   node02.k8s.org   <none>           <none>
    job-multi-demo2-66tdd   0/1     Completed   0          25s     10.244.2.84   node02.k8s.org   <none>           <none>
    job-multi-demo2-fsl9r   0/1     Completed   0          25s     10.244.3.73   node03.k8s.org   <none>           <none>
    job-multi-demo2-js7qs   1/1     Running     0          9s      10.244.2.85   node02.k8s.org   <none>           <none>
    job-multi-demo2-nqmps   1/1     Running     0          12s     10.244.1.71   node01.k8s.org   <none>           <none>
    [root@master01 ~]# kubectl get pod -o wide
    NAME                    READY   STATUS      RESTARTS   AGE     IP            NODE             NOMINATED NODE   READINESS GATES
    ds-demo-8d7g9           1/1     Running     0          110m    10.244.3.69   node03.k8s.org   <none>           <none>
    ds-demo-dkxfs           1/1     Running     0          109m    10.244.1.69   node01.k8s.org   <none>           <none>
    ds-demo-dtsgc           1/1     Running     0          110m    10.244.4.13   node04.k8s.org   <none>           <none>
    ds-demo-q6b5f           1/1     Running     0          109m    10.244.2.80   node02.k8s.org   <none>           <none>
    job-demo-4h9gb          0/1     Completed   0          19m     10.244.3.70   node03.k8s.org   <none>           <none>
    job-multi-demo-f7rz4    0/1     Completed   0          8m57s   10.244.3.71   node03.k8s.org   <none>           <none>
    job-multi-demo-hhcrm    0/1     Completed   0          7m36s   10.244.3.72   node03.k8s.org   <none>           <none>
    job-multi-demo-kjmld    0/1     Completed   0          8m33s   10.244.2.81   node02.k8s.org   <none>           <none>
    job-multi-demo-lfzrj    0/1     Completed   0          8m14s   10.244.2.82   node02.k8s.org   <none>           <none>
    job-multi-demo-rbw7d    0/1     Completed   0          9m19s   10.244.1.70   node01.k8s.org   <none>           <none>
    job-multi-demo-vdkrm    0/1     Completed   0          7m54s   10.244.2.83   node02.k8s.org   <none>           <none>
    job-multi-demo2-5f5tn   1/1     Running     0          9s      10.244.1.72   node01.k8s.org   <none>           <none>
    job-multi-demo2-66tdd   0/1     Completed   0          38s     10.244.2.84   node02.k8s.org   <none>           <none>
    job-multi-demo2-fsl9r   0/1     Completed   0          38s     10.244.3.73   node03.k8s.org   <none>           <none>
    job-multi-demo2-js7qs   0/1     Completed   0          22s     10.244.2.85   node02.k8s.org   <none>           <none>
    job-multi-demo2-md84p   1/1     Running     0          9s      10.244.3.74   node03.k8s.org   <none>           <none>
    job-multi-demo2-nqmps   0/1     Completed   0          25s     10.244.1.71   node01.k8s.org   <none>           <none>
    [root@master01 ~]# 
    

      Tip: the pods now run two at a time; with completions of 6 and parallelism of 2, the Job finishes after three such waves.

      3. CronJob controller

      This type of controller creates pods for periodic, scheduled tasks.

      Example: defining a CronJob

    [root@master01 ~]# cat cronjob-demo.yaml
    apiVersion: batch/v1beta1
    kind: CronJob
    metadata:
      name: cronjob-demo
      labels:
        app: mycronjob
    spec:
      schedule: "*/2 * * * *"
      jobTemplate:
        metadata:
          labels:
            app: mycronjob-jobs
        spec:
          parallelism: 2
          template:
            spec:
              containers:
              - name: myjob
                image: alpine
                command:
                - /bin/sh
                - -c
                - date; echo Hello from the Kubernetes cluster; sleep 10
              restartPolicy: OnFailure
    [root@master01 ~]# 
    

      Tip: the heart of a CronJob is the Job template: a CronJob manages pods through Job controllers, much as a Deployment manages pods through ReplicaSets. The schedule field sets the recurrence, using the same syntax as crontab entries on Linux, and the jobTemplate is written exactly like a standalone Job. The configuration above runs the templated Job every 2 minutes, with 2 pods working in parallel on each run.
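
      For reference, the five schedule fields are minute, hour, day of month, month, and day of week, exactly as in crontab; a few illustrative values:

    */2 * * * *    # every 2 minutes (the schedule used above)
    0 3 * * *      # every day at 03:00
    30 2 * * 0     # every Sunday at 02:30
    0 */6 * * *    # every 6 hours, on the hour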

      Apply the manifest

    [root@master01 ~]# kubectl apply -f cronjob-demo.yaml
    cronjob.batch/cronjob-demo created
    [root@master01 ~]# kubectl get cronjob -o wide
    NAME           SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE   CONTAINERS   IMAGES   SELECTOR
    cronjob-demo   */2 * * * *   False     0        <none>          12s   myjob        alpine   <none>
    [root@master01 ~]# kubectl get pod 
    NAME                            READY   STATUS      RESTARTS   AGE
    cronjob-demo-1608307560-5hwmb   1/1     Running     0          9s
    cronjob-demo-1608307560-rgkkr   1/1     Running     0          9s
    ds-demo-8d7g9                   1/1     Running     0          125m
    ds-demo-dkxfs                   1/1     Running     0          125m
    ds-demo-dtsgc                   1/1     Running     0          125m
    ds-demo-q6b5f                   1/1     Running     0          125m
    job-demo-4h9gb                  0/1     Completed   0          34m
    job-multi-demo-f7rz4            0/1     Completed   0          24m
    job-multi-demo-hhcrm            0/1     Completed   0          23m
    job-multi-demo-kjmld            0/1     Completed   0          24m
    job-multi-demo-lfzrj            0/1     Completed   0          23m
    job-multi-demo-rbw7d            0/1     Completed   0          25m
    job-multi-demo-vdkrm            0/1     Completed   0          23m
    job-multi-demo2-5f5tn           0/1     Completed   0          15m
    job-multi-demo2-66tdd           0/1     Completed   0          16m
    job-multi-demo2-fsl9r           0/1     Completed   0          16m
    job-multi-demo2-js7qs           0/1     Completed   0          16m
    job-multi-demo2-md84p           0/1     Completed   0          15m
    job-multi-demo2-nqmps           0/1     Completed   0          16m
    [root@master01 ~]#
    

      Tip: two pods of the scheduled job are now running.

      Were Job controllers created behind the scenes?

    [root@master01 ~]# kubectl get job -o wide
    NAME                      COMPLETIONS   DURATION   AGE     CONTAINERS   IMAGES   SELECTOR
    cronjob-demo-1608307560   2/1 of 2      15s        3m18s   myjob        alpine   controller-uid=4a84b474-b890-4dd2-80d4-a6115130785a
    cronjob-demo-1608307680   2/1 of 2      17s        77s     myjob        alpine   controller-uid=affecad9-03e6-430c-8c58-c845773c8ff7
    job-demo                  1/1           18s        37m     myjob        alpine   controller-uid=4ded17a8-fc39-480e-8f37-6bb2bd328997
    job-multi-demo            6/6           116s       28m     myjob        alpine   controller-uid=80f44cbd-f7e5-4eb4-a286-39bfcfc9fe39
    job-multi-demo2           6/6           46s        19m     myjob        alpine   controller-uid=d40f47ea-e58d-4424-97bd-7fda6bdf4e43
    [root@master01 ~]# 
    

      Tip: there are two Job objects, each named after the CronJob plus a timestamp suffix. As the listing shows, every time the CronJob fires it creates a new Job, which in turn creates fresh pods.
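
      Since every tick spawns a new Job and its Completed pods linger, the CronJob spec has fields to keep history and concurrency in check; a fragment with those fields and their defaults (startingDeadlineSeconds has no default and is shown only as an illustration):

    spec:
      concurrencyPolicy: Allow           # Allow | Forbid | Replace: how to treat a run starting while the previous one is still active
      successfulJobsHistoryLimit: 3      # finished Jobs to keep (default 3)
      failedJobsHistoryLimit: 1          # failed Jobs to keep (default 1)
      startingDeadlineSeconds: 60        # skip a run that could not start within 60s of its scheduled time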
