Kubernetes ---- Pod Controllers: StatefulSet

    StatefulSet

    cattle: care about the herd
    pet: care about the individual

    Characteristics:
      1. Stable, unique network identifiers;
      2. Stable, persistent storage;
      3. Ordered, graceful deployment and scaling;
      4. Ordered, graceful termination and deletion;
      5. Ordered rolling updates;
    The three components a StatefulSet requires:
      1. A headless Service
      2. The StatefulSet itself
      3. A volumeClaimTemplate

      The role of the headless Service in a StatefulSet:

      Pods managed by a StatefulSet are started in order and stopped in reverse order. When a Pod in the cluster needs to be restarted, it must come back with the same name it had when first started: the Pod name is the identifier of the Pod's identity, so it must be stable, persistent, and valid.
    The headless Service ensures that name resolution goes straight to the backing Pod's IP address, and that every Pod is given a unique, resolvable name;
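
    The defining line of a headless Service is clusterIP: None; a minimal sketch (the full manifest used in step 2 below includes this):

      apiVersion: v1
      kind: Service
      metadata:
        name: myapp
      spec:
        clusterIP: None      # headless: DNS resolves the name to the Pod IPs, not to a virtual IP
        selector:
          app: myapp-pod     # must match the StatefulSet's Pod template labels
        ports:
        - name: web
          port: 80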

      The volumeClaimTemplate in a StatefulSet:

      In a distributed system the data stored on each node differs, so every Pod should have its own dedicated storage volume. When each Pod is created, a PVC is generated for it automatically; the PVC then requests and binds a matching PV, giving the Pod its own independent storage;
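      As an illustration using the names from this post, PVCs generated from a volumeClaimTemplate are named <template name>-<pod name>, so the same PVC keeps following the same Pod across restarts; a minimal sketch:

      volumeClaimTemplates:
      - metadata:
          name: myappdata        # the PVC for Pod myapp-0 will be named myappdata-myapp-0
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 5Gi       # a PV offering at least 5Gi and a matching access mode is bound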

    Creating a StatefulSet:

    1. Create the PVs (PVs and PVCs are covered in another post on this blog);

    $ vim pv-demo.yaml
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv001
        labels:
          name: pv0001
      spec:
        accessModes: ["ReadWriteOnce","ReadWriteMany"]
        capacity:
          storage: 5Gi
        nfs:
          path: /data/volumes/v1
          server: 192.168.222.103
      ---
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv002
        labels:
          name: pv0002
      spec:
        accessModes: ["ReadWriteOnce"]
        capacity:
          storage: 5Gi
        nfs:
          path: /data/volumes/v2
          server: 192.168.222.103
      ---
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv003
        labels:
          name: pv0003
      spec:
        accessModes: ["ReadWriteOnce","ReadWriteMany"]
        capacity:
          storage: 5Gi
        nfs:
          path: /data/volumes/v3
          server: 192.168.222.103
      ---
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv004
        labels:
          name: pv0004
      spec:
        accessModes: ["ReadWriteOnce","ReadWriteMany"]
        capacity:
          storage: 10Gi
        nfs:
          path: /data/volumes/v4
          server: 192.168.222.103
      ---
      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: pv005
        labels:
          name: pv0005
      spec:
        accessModes: ["ReadWriteOnce","ReadWriteMany"]
        capacity:
          storage: 10Gi
        nfs:
          path: /data/volumes/v5
          server: 192.168.222.103

      $ kubectl apply -f pv-demo.yaml
      $ kubectl get pv
      NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
      pv001   5Gi        RWO,RWX        Retain           Available                                   78m
      pv002   5Gi        RWO            Retain           Available                                   78m
      pv003   5Gi        RWO,RWX        Retain           Available                                   78m
      pv004   10Gi       RWO,RWX        Retain           Available                                   78m
      pv005   10Gi       RWO,RWX        Retain           Available                                   78m

    2. Create the StatefulSet application

    $ vim state-demo.yaml
      apiVersion: v1
      kind: Service
      metadata:
        name: myapp
        namespace: default
        labels:
          app: myapp
      spec:
        clusterIP: None
        ports:
        - name: web
          port: 80
        selector:
          app: myapp-pod
      ---
      apiVersion: apps/v1
      kind: StatefulSet
      metadata:
        name: myapp
        namespace: default
      spec:
        serviceName: myapp
        replicas: 3
        selector:
          matchLabels:
            app: myapp-pod
        template:
          metadata:
            labels:
              app: myapp-pod
          spec:
            containers:
            - name: myapp
              image: ikubernetes/myapp:v5
              imagePullPolicy: IfNotPresent
              ports:
              - name: web
                containerPort: 80
              volumeMounts:
              - name: myappdata
                mountPath: /usr/share/nginx/html
        volumeClaimTemplates:
        - metadata:
            name: myappdata
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 5Gi
    $ kubectl apply -f state-demo.yaml
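
    # To see the ordered, one-at-a-time startup, watch the Pods while the manifest is applied (a quick check, assuming the app=myapp-pod label from the manifest above):
    $ kubectl get pods -l app=myapp-pod -w
    # myapp-0 must be Running and Ready before myapp-1 is created, and so on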

    3. Results

    ## The three Pod names below follow a fixed pattern, unlike the random names produced by a Deployment controller; if we delete one of these Pods, Kubernetes recreates a Pod with exactly the same name as before;

    $ kubectl get pods
    NAME      READY   STATUS    RESTARTS   AGE
    myapp-0   1/1     Running   0          44m
    myapp-1   1/1     Running   0          44m
    myapp-2   1/1     Running   0          44m

    ## The myapp below is the StatefulSet controller we created

    $ kubectl get sts
    NAME READY AGE
    myapp 3/3 45m

    ## Each Pod automatically gets its own PVC when it is created; the PVC then looks for a PV on the system whose properties match its request and binds it, so every Pod mounts its own dedicated volume. Because each PVC name embeds the Pod name, the same PVC keeps serving the same Pod across restarts; this is what volumeClaimTemplates provides;

    $ kubectl get pvc
    NAME                STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    myappdata-myapp-0   Bound    pv002    5Gi        RWO                           51m
    myappdata-myapp-1   Bound    pv003    5Gi        RWO,RWX                       49m
    myappdata-myapp-2   Bound    pv004    10Gi       RWO,RWX                       49m

    Notes:
      When we delete a Pod by hand, its PVC is not deleted. Pods are deleted in reverse ordinal order, from 2 down to 0, and created in forward order, from 0 up to 2;
      Every Pod name can be resolved to an IP address:
      pod_name.service_name.ns_name.svc.cluster.local
      myapp-0.myapp.default.svc.cluster.local
      myapp-1.myapp.default.svc.cluster.local
      myapp-2.myapp.default.svc.cluster.local
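
      # These names can be checked from inside the cluster, e.g. with a throwaway busybox Pod (an illustrative command, not from the original demo):
      $ kubectl run test --image=busybox -it --rm --restart=Never -- nslookup myapp-0.myapp.default.svc.cluster.local
      # the answer should be the current Pod IP of myapp-0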

    Scaling out:

    # Scale to 5 replicas; Kubernetes adds the Pods one at a time and in order, myapp-3 first, then myapp-4.
    $ kubectl patch -f state-demo.yaml -p '{"spec":{"replicas":5}}'
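    # kubectl scale achieves the same result as the patch above:
    $ kubectl scale sts/myapp --replicas=5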
    
    # Check the PVCs: all of them are now bound;
    $ kubectl get pvc
    NAME                STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    myappdata-myapp-0   Bound    pv002    5Gi        RWO                           69m
    myappdata-myapp-1   Bound    pv003    5Gi        RWO,RWX                       67m
    myappdata-myapp-2   Bound    pv004    10Gi       RWO,RWX                       67m
    myappdata-myapp-3   Bound    pv005    10Gi       RWO,RWX                       2m50s
    myappdata-myapp-4   Bound    pv001    5Gi        RWO,RWX                       2m36s

    Scaling in:

    # Scale back down to 3 replicas and watch the Pods: myapp-4 is deleted first, then myapp-3, until the replica count is satisfied;
    $ kubectl get pods -o wide -w
    $ kubectl patch -f state-demo.yaml -p '{"spec":{"replicas":3}}'
    
    myapp-4 1/1 Terminating 0 5m21s 10.244.1.127 node3 <none> <none>
    myapp-4 0/1 Terminating 0 5m22s 10.244.1.127 node3 <none> <none>
    myapp-4 0/1 Terminating 0 5m28s 10.244.1.127 node3 <none> <none>
    myapp-4 0/1 Terminating 0 5m28s 10.244.1.127 node3 <none> <none>
    myapp-3 1/1 Terminating 0 5m41s 10.244.2.160 node2 <none> <none>
    myapp-3 0/1 Terminating 0 5m44s 10.244.2.160 node2 <none> <none>
    myapp-3 0/1 Terminating 0 5m45s 10.244.2.160 node2 <none> <none>
    myapp-3 0/1 Terminating 0 5m45s 10.244.2.160 node2 <none> <none>
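
    # Scaling in removes the Pods but, as noted earlier, not their PVCs; the data on the bound PVs is kept:
    $ kubectl get pvc
    # myappdata-myapp-3 and myappdata-myapp-4 should still show as Bound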

    Rolling updates:

      Canary updates:

        The update strategy is customized via
        sts.spec.updateStrategy.rollingUpdate:
        partition: N
        means that only Pods with an ordinal >= N are updated.
        For example, with partition: 2:
        myapp-0 is not updated
        myapp-1 is not updated
        myapp-2 is updated
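
        The same partition can also be set declaratively in the StatefulSet manifest instead of patching it; a sketch:

        spec:
          updateStrategy:
            type: RollingUpdate
            rollingUpdate:
              partition: 2   # only Pods with an ordinal >= 2 are updated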

      Demonstration:

    # Set partition to 4 so that only Pods with an ordinal >= 4 are updated;
    $ kubectl patch sts/myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":4}}}}'
    
    # Change the image
    $ kubectl set image sts/myapp myapp=ikubernetes/myapp:v2
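    # The partitioned rollout can be followed with kubectl rollout (output wording varies by kubectl version):
    $ kubectl rollout status sts/myapp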
    
    # Verify: only the image of myapp-4 has changed;
    $ kubectl describe pod myapp-4
    ....
    Image: ikubernetes/myapp:v2
    ....
    $ kubectl describe pod myapp-0
    ....
    Image: ikubernetes/myapp:v5
    ....

    Full upgrade:
    Once the canary Pod has been running without any problems, all Pods can be upgraded. Watching the process shows the update proceeds from the highest ordinal down to the lowest;

    $ kubectl get pods -o wide -w
    $ kubectl patch sts/myapp -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
    
    # The image of every Pod has now changed
    $ kubectl describe pod myapp-0
    ....
    Image: ikubernetes/myapp:v2
    ....
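
    # One way to confirm the image of every Pod at once (an illustrative jsonpath query; the label comes from the manifest above):
    $ kubectl get pods -l app=myapp-pod -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'
    # every line should now show ikubernetes/myapp:v2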