  • (11) Kubernetes StatefulSet Controller

    StatefulSet Overview

    Pods created earlier with a Deployment are stateless. If such a Pod has a volume mounted and the Pod dies, the controller starts another Pod to keep the service available, but because Pods are stateless, the replacement loses the association with the old Pod's volume and cannot find the previous Pod's data. Users have no awareness that the underlying Pod was replaced, yet once it dies the previously mounted storage volume can no longer be used. StatefulSet was introduced to solve this problem by preserving a Pod's state information.

    StatefulSet is an implementation of the Pod resource controller, used to deploy and scale Pods for stateful applications while guaranteeing their startup order and the uniqueness of each Pod. Typical use cases include:

    • Stable persistent storage: after a Pod is rescheduled it can still access the same persisted data, implemented with PVCs.

    • Stable network identity: after a Pod is rescheduled its PodName and HostName stay the same, implemented with a Headless Service (a Service without a Cluster IP).

    • Ordered deployment and ordered scaling: Pods are ordered, and deployment or scale-out proceeds in the defined order (from 0 to N-1; before the next Pod starts, all preceding Pods must be Running and Ready), implemented with init containers.

    • Ordered scale-in and ordered deletion (from N-1 down to 0).
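
    The ordering guarantee in the two bullets above can be sketched as a tiny shell helper (illustrative only; the function names are hypothetical, not part of any Kubernetes tooling): Pods are created in ascending ordinal order 0..N-1 and removed in descending order N-1..0.

```shell
# Print the ordinal sequence a StatefulSet with N replicas uses for
# ordered creation (0..N-1) and ordered deletion (N-1..0).
startup_order() {
  out=""; i=0
  while [ "$i" -lt "$1" ]; do out="$out $i"; i=$((i + 1)); done
  echo "${out# }"
}
teardown_order() {
  out=""; i=$(( $1 - 1 ))
  while [ "$i" -ge 0 ]; do out="$out $i"; i=$((i - 1)); done
  echo "${out# }"
}

startup_order 3     # -> 0 1 2
teardown_order 3    # -> 2 1 0
```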

    A StatefulSet is composed of the following parts:

    • A Headless Service, which defines the network identity (DNS domain)

    • volumeClaimTemplates, used to create the PersistentVolumeClaims that bind PersistentVolumes

    • The StatefulSet definition of the application itself

    Each Pod in a StatefulSet gets a DNS name of the form statefulSetName-{0..N-1}.serviceName.namespace.svc.cluster.local, where:

    • serviceName: the name of the Headless Service

    • 0..N-1: the Pod's ordinal, ranging from 0 to N-1

    • statefulSetName: the name of the StatefulSet

    • namespace: the namespace the Service lives in; the Headless Service and the StatefulSet must be in the same namespace

    • .cluster.local: the cluster domain
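
    Putting the parts above together, the stable per-Pod DNS name can be sketched as a small shell helper (the names passed in below are placeholders for the formula, not values from a real cluster):

```shell
# Build the stable per-Pod DNS name:
#   <statefulSetName>-<ordinal>.<serviceName>.<namespace>.svc.cluster.local
pod_fqdn() {
  sts=$1; ordinal=$2; svc=$3; ns=$4
  echo "${sts}-${ordinal}.${svc}.${ns}.svc.cluster.local"
}

pod_fqdn web 0 nginx default
# -> web-0.nginx.default.svc.cluster.local
```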

    Why a headless Service?

    In a Deployment, Pods have no fixed names; they are named with random strings and are unordered. A StatefulSet requires ordering, and every Pod's name must be fixed: when a Pod dies, the rebuilt Pod keeps the same identifier, and no Pod's name ever changes. The Pod name serves as the Pod's unique identifier, so it must be stable and unique.

    To keep that identifier stable, a headless Service is needed so that DNS resolution goes directly to the Pod, and each Pod must be given a unique name.

    Why volumeClaimTemplate?

    Most stateful replica sets need persistent storage. In a distributed system, for example, each node holds different data, so each needs its own dedicated storage. A volume defined in a Deployment's Pod template is shared: multiple Pods use the same volume. In a StatefulSet, the Pods must not share a single volume, so creating Pods from a plain Pod template does not fit; volumeClaimTemplate is introduced instead. When the StatefulSet creates a Pod, a PVC is generated for it automatically, which requests and binds a PV, giving the Pod its own dedicated volume.

    The relationship among Pod names, PVCs, and PVs is illustrated in the original article's figure (not reproduced here).
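
    In place of the figure, the naming relationship can be sketched directly: each PVC created from a volumeClaimTemplate is named `<template name>-<StatefulSet name>-<ordinal>`, and each such PVC binds one PV. A minimal, hypothetical shell helper (the function name is illustrative):

```shell
# PVC name derived from a volumeClaimTemplate:
#   <templateName>-<statefulSetName>-<ordinal>
pvc_name() {
  tmpl=$1; sts=$2; ordinal=$3
  echo "${tmpl}-${sts}-${ordinal}"
}

# The PVC backing Pod 0 of the example used later in this article:
pvc_name nginxdata nginx-statefulset 0
# -> nginxdata-nginx-statefulset-0
```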

    Defining a StatefulSet

    The following objects must be prepared before creating a StatefulSet; the creation order is critical:

    1. Volume

    2. Persistent Volume

    3. Persistent Volume Claim

    4. Service

    5. StatefulSet

    A volume can be of many types, such as nfs or glusterfs; nfs is used below.

    StatefulSet field reference:

    [root@k8s-master ~]# kubectl explain statefulset
    KIND:     StatefulSet
    VERSION:  apps/v1
    
    DESCRIPTION:
         StatefulSet represents a set of pods with consistent identities. Identities
         are defined as: - Network: A single stable DNS and hostname. - Storage: As
         many VolumeClaims as requested. The StatefulSet guarantees that a given
         network identity will always map to the same storage identity.
    FIELDS:
       apiVersion    <string>
       kind    <string>
       metadata    <Object>
       spec    <Object>
       status    <Object>
    
    [root@k8s-master ~]# kubectl explain statefulset.spec
    podManagementPolicy    <string>    # Pod management policy
    replicas    <integer>    # number of Pod replicas
    revisionHistoryLimit    <integer>    # how many old revisions to keep
    selector    <Object> -required-    # label selector picking the managed Pods; required
    serviceName    <string> -required-    # name of the governing Service; required
    template    <Object> -required-    # template defining the Pod resources; required
    updateStrategy    <Object>    # update strategy
    volumeClaimTemplates    <[]Object>    # volume claim templates, a list of objects

    Example: defining a StatefulSet with a manifest

    Based on the description above, the example below defines a StatefulSet resource. Before defining it, PV objects must be prepared; NFS is again used as the backing store.

    1) Prepare NFS (installation of the software is omitted)

    (1) Create the directories backing the volumes
    [root@storage ~]# mkdir /data/volumes/v{1..5} -p
    
    (2) Edit the NFS exports file
    [root@storage ~]# vim /etc/exports
    /data/volumes/v1  192.168.1.0/24(rw,no_root_squash)
    /data/volumes/v2  192.168.1.0/24(rw,no_root_squash)
    /data/volumes/v3  192.168.1.0/24(rw,no_root_squash)
    /data/volumes/v4  192.168.1.0/24(rw,no_root_squash)
    /data/volumes/v5  192.168.1.0/24(rw,no_root_squash)
    
    (3) Apply the export configuration
    [root@storage ~]# exportfs -arv
    exporting 192.168.1.0/24:/data/volumes/v5
    exporting 192.168.1.0/24:/data/volumes/v4
    exporting 192.168.1.0/24:/data/volumes/v3
    exporting 192.168.1.0/24:/data/volumes/v2
    exporting 192.168.1.0/24:/data/volumes/v1
    
    (4) Verify the exports
    [root@storage ~]# showmount -e
    Export list for storage:
    /data/volumes/v5 192.168.1.0/24
    /data/volumes/v4 192.168.1.0/24
    /data/volumes/v3 192.168.1.0/24
    /data/volumes/v2 192.168.1.0/24
    /data/volumes/v1 192.168.1.0/24

    2) Create the PVs. Five PVs are created here with different capacities and access modes. First create a new directory to hold all of the StatefulSet manifests.

    [root@k8s-master ~]# mkdir statefulset && cd statefulset
    
    (1) Write the PV manifest
    [root@k8s-master statefulset]# vim pv-nfs.yaml
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-nfs-001
      labels:
        name: pv001
    spec:
      nfs:
        path: /data/volumes/v1
        server: 192.168.1.34
        readOnly: false 
      accessModes: ["ReadWriteOnce","ReadWriteMany"]
      capacity:
        storage: 2Gi
      persistentVolumeReclaimPolicy: Retain
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-nfs-002
      labels:
        name: pv002
    spec:
      nfs:
        path: /data/volumes/v2
        server: 192.168.1.34
        readOnly: false 
      accessModes: ["ReadWriteOnce"]
      capacity:
        storage: 5Gi
      persistentVolumeReclaimPolicy: Retain
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-nfs-003
      labels:
        name: pv003
    spec:
      nfs:
        path: /data/volumes/v3
        server: 192.168.1.34
        readOnly: false 
      accessModes: ["ReadWriteOnce","ReadWriteMany"]
      capacity:
        storage: 5Gi
      persistentVolumeReclaimPolicy: Retain
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-nfs-004
      labels:
        name: pv004
    spec:
      nfs:
        path: /data/volumes/v4
        server: 192.168.1.34
        readOnly: false 
      accessModes: ["ReadWriteOnce","ReadWriteMany"]
      capacity:
        storage: 5Gi
      persistentVolumeReclaimPolicy: Retain
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-nfs-005
      labels:
        name: pv005
    spec:
      nfs:
        path: /data/volumes/v5
        server: 192.168.1.34
        readOnly: false 
      accessModes: ["ReadWriteOnce","ReadWriteMany"]
      capacity:
        storage: 5Gi
      persistentVolumeReclaimPolicy: Retain
    
    (2) Create the PVs
    [root@k8s-master statefulset]# kubectl apply -f pv-nfs.yaml  
    persistentvolume/pv-nfs-001 created
    persistentvolume/pv-nfs-002 created
    persistentvolume/pv-nfs-003 created
    persistentvolume/pv-nfs-004 created
    persistentvolume/pv-nfs-005 created
    
    (3) View the PVs
    [root@k8s-master statefulset]# kubectl get pv 
    NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
    pv-nfs-001   2Gi        RWO,RWX        Retain           Available                                   3s
    pv-nfs-002   5Gi        RWO            Retain           Available                                   3s
    pv-nfs-003   5Gi        RWO,RWX        Retain           Available                                   3s
    pv-nfs-004   5Gi        RWO,RWX        Retain           Available                                   3s
    pv-nfs-005   5Gi        RWO,RWX        Retain           Available                                   3s

    3) Write the StatefulSet manifest. A Headless Service must be defined first; here the Headless Service and the StatefulSet are placed in the same file.

    [root@k8s-master statefulset]# vim statefulset-demo.yaml
    # Define a Headless Service
    apiVersion: v1
    kind: Service
    metadata:
      name: nginx-svc
      labels:
        app: nginx-svc
    spec:
      ports:
      - name: http
        port: 80
      clusterIP: None
      selector:
        app: nginx-pod
    ---
    # Define the StatefulSet
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: nginx-statefulset
    spec:
      serviceName: nginx-svc    # governing Service; must match the Service defined above
      replicas: 3    # number of Pod replicas
      selector:    # label selector; must match the Pod labels below
        matchLabels:
          app: nginx-pod
      template:    # template for the backend Pods
        metadata:
          labels:
            app: nginx-pod
        spec:
          containers:
          - name: nginx
            image: nginx:1.12
            imagePullPolicy: IfNotPresent
            ports:
            - name: http
              containerPort: 80
            volumeMounts:
            - name: nginxdata
              mountPath: /usr/share/nginx/html
      volumeClaimTemplates:    # volume claim templates
      - metadata: 
          name: nginxdata
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 5Gi
    
    Reading the manifest above: because a StatefulSet depends on a pre-existing Service, a Headless Service named nginx-svc is defined first, used to create a DNS record for each Pod. Then a StatefulSet named nginx-statefulset is defined, which creates three Pod replicas from the Pod template and, through volumeClaimTemplates, requests a dedicated 5Gi volume for each Pod from the PVs created earlier.

    4) Create the StatefulSet resource; open another window to watch the Pods in real time.

    [root@k8s-master statefulset]# kubectl apply -f statefulset-demo.yaml 
    service/nginx-svc created
    statefulset.apps/nginx-statefulset created
    
    [root@k8s-master statefulset]# kubectl get svc   # view the Headless Service nginx-svc
    NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
    kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   5d19h
    nginx-svc    ClusterIP   None         <none>        80/TCP    29s
    
    [root@k8s-master statefulset]# kubectl get pv     # check PV bindings
    NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                   STORAGECLASS   REASON   AGE
    pv-nfs-001   2Gi        RWO,RWX        Retain           Available                                                                   3m49s
    pv-nfs-002   5Gi        RWO            Retain           Bound       default/nginxdata-nginx-statefulset-0                           3m49s
    pv-nfs-003   5Gi        RWO,RWX        Retain           Bound       default/nginxdata-nginx-statefulset-1                           3m49s
    pv-nfs-004   5Gi        RWO,RWX        Retain           Bound       default/nginxdata-nginx-statefulset-2                           3m49s
    pv-nfs-005   5Gi        RWO,RWX        Retain           Available                                                                   3m48s
    [root@k8s-master statefulset]# kubectl get pvc     # check PVC bindings
    NAME                            STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    nginxdata-nginx-statefulset-0   Bound    pv-nfs-002   5Gi        RWO                           21s
    nginxdata-nginx-statefulset-1   Bound    pv-nfs-003   5Gi        RWO,RWX                       18s
    nginxdata-nginx-statefulset-2   Bound    pv-nfs-004   5Gi        RWO,RWX                       15s
    [root@k8s-master statefulset]# kubectl get statefulset    # view the StatefulSet
    NAME                READY   AGE
    nginx-statefulset   3/3     58s
    
    [root@k8s-master statefulset]# kubectl get pods    # view the Pods
    NAME                  READY   STATUS    RESTARTS   AGE
    nginx-statefulset-0   1/1     Running   0          78s
    nginx-statefulset-1   1/1     Running   0          75s
    nginx-statefulset-2   1/1     Running   0          72s
    
    
    [root@k8s-master ~]# kubectl get pods -w    # watch the Pods being created; they come up in order, from 0 to N-1
    nginx-statefulset-0   0/1   Pending   0     0s
    nginx-statefulset-0   0/1   Pending   0     0s
    nginx-statefulset-0   0/1   Pending   0     1s
    nginx-statefulset-0   0/1   ContainerCreating   0     1s
    nginx-statefulset-0   1/1   Running             0     3s
    nginx-statefulset-1   0/1   Pending             0     0s
    nginx-statefulset-1   0/1   Pending             0     0s
    nginx-statefulset-1   0/1   Pending             0     1s
    nginx-statefulset-1   0/1   ContainerCreating   0     1s
    nginx-statefulset-1   1/1   Running             0     3s
    nginx-statefulset-2   0/1   Pending             0     0s
    nginx-statefulset-2   0/1   Pending             0     0s
    nginx-statefulset-2   0/1   Pending             0     2s
    nginx-statefulset-2   0/1   ContainerCreating   0     2s
    nginx-statefulset-2   1/1   Running             0     4s

    5) Deletion test; again, watch the Pods from another window.

    [root@k8s-master statefulset]# kubectl delete -f statefulset-demo.yaml 
    service "nginx-svc" deleted
    statefulset.apps "nginx-statefulset" deleted
    
    [root@k8s-master ~]# kubectl get pods -w     # watch the deletion; termination starts from the highest ordinal
    NAME                  READY   STATUS    RESTARTS   AGE
    nginx-statefulset-0   1/1     Running   0          18m
    nginx-statefulset-1   1/1     Running   0          18m
    nginx-statefulset-2   1/1     Running   0          18m
    nginx-statefulset-2   1/1     Terminating   0          18m
    nginx-statefulset-0   1/1     Terminating   0          18m
    nginx-statefulset-1   1/1     Terminating   0          18m
    nginx-statefulset-2   0/1     Terminating   0          18m
    nginx-statefulset-0   0/1     Terminating   0          18m
    nginx-statefulset-1   0/1     Terminating   0          18m
    nginx-statefulset-2   0/1     Terminating   0          18m
    nginx-statefulset-2   0/1     Terminating   0          18m
    nginx-statefulset-2   0/1     Terminating   0          18m
    nginx-statefulset-1   0/1     Terminating   0          18m
    nginx-statefulset-1   0/1     Terminating   0          18m
    nginx-statefulset-0   0/1     Terminating   0          18m
    nginx-statefulset-0   0/1     Terminating   0          18m
    
    
    The PVCs still exist at this point; when the Pods are recreated, they bind to their original PVCs again.
    [root@k8s-master statefulset]# kubectl apply -f statefulset-demo.yaml 
    service/nginx-svc created
    statefulset.apps/nginx-statefulset created
    
    [root@k8s-master statefulset]# kubectl get pvc     # check PVC bindings
    NAME                            STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    nginxdata-nginx-statefulset-0   Bound    pv-nfs-002   5Gi        RWO                           30m
    nginxdata-nginx-statefulset-1   Bound    pv-nfs-003   5Gi        RWO,RWX                       30m
    nginxdata-nginx-statefulset-2   Bound    pv-nfs-004   5Gi        RWO,RWX                       30m

    6) Name resolution: each Pod's own name is resolvable inside the cluster, as shown below:

    [root@k8s-master statefulset]# kubectl get pods -o wide 
    NAME                  READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE   READINESS GATES
    nginx-statefulset-0   1/1     Running   0          12m   10.244.2.96   k8s-node2   <none>           <none>
    nginx-statefulset-1   1/1     Running   0          12m   10.244.1.96   k8s-node1   <none>           <none>
    nginx-statefulset-2   1/1     Running   0          12m   10.244.2.97   k8s-node2   <none>           <none>
    
    
    [root@k8s-master statefulset]# dig -t A nginx-statefulset-0.nginx-svc.default.svc.cluster.local @10.96.0.10
    ......
    ;; ANSWER SECTION:
    nginx-statefulset-0.nginx-svc.default.svc.cluster.local. 30 IN A 10.244.2.96
    
    [root@k8s-master statefulset]# dig -t A nginx-statefulset-1.nginx-svc.default.svc.cluster.local @10.96.0.10
    ......
    ;; ANSWER SECTION:
    nginx-statefulset-1.nginx-svc.default.svc.cluster.local. 30 IN A 10.244.1.96
    
    [root@k8s-master statefulset]# dig -t A nginx-statefulset-2.nginx-svc.default.svc.cluster.local @10.96.0.10
    ......
    ;; ANSWER SECTION:
    nginx-statefulset-2.nginx-svc.default.svc.cluster.local. 30 IN A 10.244.2.97
    
    You can also resolve the names from inside a container; resolving a Pod's name returns its IP:
    # pod_name.service_name.ns_name.svc.cluster.local
    eg: nginx-statefulset-0.nginx-svc.default.svc.cluster.local

    Scaling a StatefulSet

    Scaling a StatefulSet is similar to scaling a Deployment: change the resource's replica count to change the number of target Pods. For a StatefulSet, both kubectl scale and kubectl patch can do this; you can also edit the replica count directly with kubectl edit, or modify the manifest and re-apply it with kubectl apply.

    1) Use scale to scale nginx-statefulset out to 4 replicas

    [root@k8s-master statefulset]# kubectl scale statefulset/nginx-statefulset --replicas=4   # scale out to 4 replicas
    statefulset.apps/nginx-statefulset scaled
    [root@k8s-master statefulset]# kubectl get pods     # view the Pods
    NAME                  READY   STATUS    RESTARTS   AGE
    nginx-statefulset-0   1/1     Running   0          16m
    nginx-statefulset-1   1/1     Running   0          16m
    nginx-statefulset-2   1/1     Running   0          16m
    nginx-statefulset-3   1/1     Running   0          3s
    
    [root@k8s-master statefulset]# kubectl get pv   # check PV bindings
    NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                   STORAGECLASS   REASON   AGE
    pv-nfs-001   2Gi        RWO,RWX        Retain           Available                                                                   21m
    pv-nfs-002   5Gi        RWO            Retain           Bound       default/nginxdata-nginx-statefulset-0                           21m
    pv-nfs-003   5Gi        RWO,RWX        Retain           Bound       default/nginxdata-nginx-statefulset-1                           21m
    pv-nfs-004   5Gi        RWO,RWX        Retain           Bound       default/nginxdata-nginx-statefulset-2                           21m
    pv-nfs-005   5Gi        RWO,RWX        Retain           Bound       default/nginxdata-nginx-statefulset-3                           21m

    2) Use patch to scale nginx-statefulset in to 2 replicas

    [root@k8s-master statefulset]# kubectl patch sts/nginx-statefulset -p '{"spec":{"replicas":2}}'    # scale in by patching
    statefulset.apps/nginx-statefulset patched
    
    [root@k8s-master ~]# kubectl get pods -w    # watch the scale-in
    NAME                  READY   STATUS    RESTARTS   AGE
    nginx-statefulset-0   1/1     Running   0          17m
    nginx-statefulset-1   1/1     Running   0          17m
    nginx-statefulset-2   1/1     Running   0          17m
    nginx-statefulset-3   1/1     Running   0          1m
    nginx-statefulset-3   1/1     Terminating   0          20s
    nginx-statefulset-3   0/1     Terminating   0          20s
    nginx-statefulset-3   0/1     Terminating   0          22s
    nginx-statefulset-3   0/1     Terminating   0          22s
    nginx-statefulset-2   1/1     Terminating   0          24s
    nginx-statefulset-2   0/1     Terminating   0          24s
    nginx-statefulset-2   0/1     Terminating   0          36s
    nginx-statefulset-2   0/1     Terminating   0          36s

    Update strategy

    The default update strategy of a StatefulSet is a rolling update; updates can also be paused (staged).

    Rolling update example:

    [root@k8s-master statefulset]# kubectl patch sts/nginx-statefulset -p '{"spec":{"replicas":4}}'    # first scale out to 4 replicas to make the test easier to observe
    
    [root@k8s-master ~]# kubectl set image statefulset nginx-statefulset nginx=nginx:1.14    # update the image version
    statefulset.apps/nginx-statefulset image updated
    
    [root@k8s-master ~]# kubectl get pods -w    # watch the update
    NAME                  READY   STATUS    RESTARTS   AGE
    nginx-statefulset-0   1/1     Running   0          18m
    nginx-statefulset-1   1/1     Running   0          18m
    nginx-statefulset-2   1/1     Running   0          13m
    nginx-statefulset-3   1/1     Running   0          13m
    nginx-statefulset-3   1/1     Terminating   0          13m
    nginx-statefulset-3   0/1     Terminating   0          13m
    nginx-statefulset-3   0/1     Terminating   0          13m
    nginx-statefulset-3   0/1     Terminating   0          13m
    nginx-statefulset-3   0/1     Pending       0          0s
    nginx-statefulset-3   0/1     Pending       0          0s
    nginx-statefulset-3   0/1     ContainerCreating   0          0s
    nginx-statefulset-3   1/1     Running             0          2s
    nginx-statefulset-2   1/1     Terminating         0          13m
    nginx-statefulset-2   0/1     Terminating         0          13m
    nginx-statefulset-2   0/1     Terminating         0          14m
    nginx-statefulset-2   0/1     Terminating         0          14m
    nginx-statefulset-2   0/1     Pending             0          0s
    nginx-statefulset-2   0/1     Pending             0          0s
    nginx-statefulset-2   0/1     ContainerCreating   0          0s
    nginx-statefulset-2   1/1     Running             0          1s
    nginx-statefulset-1   1/1     Terminating         0          18m
    nginx-statefulset-1   0/1     Terminating         0          18m
    nginx-statefulset-1   0/1     Terminating         0          18m
    nginx-statefulset-1   0/1     Terminating         0          18m
    nginx-statefulset-1   0/1     Pending             0          0s
    nginx-statefulset-1   0/1     Pending             0          0s
    nginx-statefulset-1   0/1     ContainerCreating   0          0s
    nginx-statefulset-1   1/1     Running             0          2s
    nginx-statefulset-0   1/1     Terminating         0          18m
    nginx-statefulset-0   0/1     Terminating         0          18m
    nginx-statefulset-0   0/1     Terminating         0          18m
    nginx-statefulset-0   0/1     Terminating         0          18m
    nginx-statefulset-0   0/1     Pending             0          0s
    nginx-statefulset-0   0/1     Pending             0          0s
    nginx-statefulset-0   0/1     ContainerCreating   0          0s
    nginx-statefulset-0   1/1     Running             0          2s
    
    [root@k8s-master statefulset]# kubectl get pods -l app=nginx-pod -o custom-columns=NAME:metadata.name,IMAGE:spec.containers[0].image    # check the image versions after the update completes
    NAME                  IMAGE
    nginx-statefulset-0   nginx:1.14
    nginx-statefulset-1   nginx:1.14
    nginx-statefulset-2   nginx:1.14
    nginx-statefulset-3   nginx:1.14   

    As the example shows, the default behavior is a rolling update in reverse ordinal order, updating the next Pod only after the previous one has finished.

    Paused (staged) update example

    Sometimes you set up an update but do not want it to roll out all at once: you update a few Pods first, observe whether they are stable, and then update the rest. To do this, set the .spec.updateStrategy.rollingUpdate.partition field (its default is 0, which is why the update above rolled through everything). If partition is set to 2, only Pods whose ordinal is greater than or equal to 2 are updated, similar to a canary release. Example:

    [root@k8s-master ~]# kubectl patch sts/nginx-statefulset -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'     # set partition to 2
    statefulset.apps/nginx-statefulset patched
    
    [root@k8s-master ~]# kubectl set image statefulset nginx-statefulset nginx=nginx:1.12    # update the image version
    statefulset.apps/nginx-statefulset image updated
    
    [root@k8s-master ~]# kubectl get pods -w     # watch the update
    NAME                  READY   STATUS    RESTARTS   AGE
    nginx-statefulset-0   1/1     Running   0          11m
    nginx-statefulset-1   1/1     Running   0          11m
    nginx-statefulset-2   1/1     Running   0          11m
    nginx-statefulset-3   1/1     Running   0          11m
    nginx-statefulset-3   1/1     Terminating   0          12m
    nginx-statefulset-3   0/1     Terminating   0          12m
    nginx-statefulset-3   0/1     Terminating   0          12m
    nginx-statefulset-3   0/1     Terminating   0          12m
    nginx-statefulset-3   0/1     Pending       0          0s
    nginx-statefulset-3   0/1     Pending       0          0s
    nginx-statefulset-3   0/1     ContainerCreating   0          0s
    nginx-statefulset-3   1/1     Running             0          2s
    nginx-statefulset-2   1/1     Terminating         0          11m
    nginx-statefulset-2   0/1     Terminating         0          11m
    nginx-statefulset-2   0/1     Terminating         0          12m
    nginx-statefulset-2   0/1     Terminating         0          12m
    nginx-statefulset-2   0/1     Pending             0          0s
    nginx-statefulset-2   0/1     Pending             0          0s
    nginx-statefulset-2   0/1     ContainerCreating   0          0s
    nginx-statefulset-2   1/1     Running             0          2s
    
    [root@k8s-master statefulset]# kubectl get pods -l app=nginx-pod -o custom-columns=NAME:metadata.name,IMAGE:spec.containers[0].image    # check the image versions; only Pods with ordinal >= 2 were updated
    NAME                  IMAGE
    nginx-statefulset-0   nginx:1.14
    nginx-statefulset-1   nginx:1.14
    nginx-statefulset-2   nginx:1.12
    nginx-statefulset-3   nginx:1.12
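
    The partition rule demonstrated above can be sketched as a tiny shell check (a sketch of the rule itself, not of kubectl): a Pod is updated only when its ordinal is greater than or equal to the partition value.

```shell
# Return success (0) if a Pod with the given ordinal would roll
# under the given partition value.
will_update() {
  ordinal=$1; partition=$2
  [ "$ordinal" -ge "$partition" ]
}

# With partition=2 in a 4-replica StatefulSet, only ordinals 2 and 3 roll:
for i in 0 1 2 3; do
  if will_update "$i" 2; then
    echo "nginx-statefulset-$i: updated"
  else
    echo "nginx-statefulset-$i: unchanged"
  fi
done
```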
    
    
    
    To update the remaining Pods as well, simply set partition back to 0:
    [root@k8s-master ~]# kubectl patch sts/nginx-statefulset -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'    # set partition back to 0
    statefulset.apps/nginx-statefulset patched
    [root@k8s-master ~]# kubectl get pods -w    # watch the update
    NAME                  READY   STATUS    RESTARTS   AGE
    nginx-statefulset-0   1/1     Running   0          18m
    nginx-statefulset-1   1/1     Running   0          18m
    nginx-statefulset-2   1/1     Running   0          6m44s
    nginx-statefulset-3   1/1     Running   0          6m59s
    nginx-statefulset-1   1/1     Terminating   0          19m
    nginx-statefulset-1   0/1     Terminating   0          19m
    nginx-statefulset-1   0/1     Terminating   0          19m
    nginx-statefulset-1   0/1     Terminating   0          19m
    nginx-statefulset-1   0/1     Pending       0          0s
    nginx-statefulset-1   0/1     Pending       0          0s
    nginx-statefulset-1   0/1     ContainerCreating   0          0s
    nginx-statefulset-1   1/1     Running             0          2s
    nginx-statefulset-0   1/1     Terminating         0          19m
    nginx-statefulset-0   0/1     Terminating         0          19m
    nginx-statefulset-0   0/1     Terminating         0          19m
    nginx-statefulset-0   0/1     Terminating         0          19m
    nginx-statefulset-0   0/1     Pending             0          0s
    nginx-statefulset-0   0/1     Pending             0          0s
    nginx-statefulset-0   0/1     ContainerCreating   0          0s
    nginx-statefulset-0   1/1     Running             0          2s
  • Original article: https://www.cnblogs.com/yanjieli/p/11858397.html