  • Kubernetes Full-Stack Architect (Resource Scheduling, Part 2) -- Study Notes

    Table of Contents

    • Scaling a StatefulSet
    • StatefulSet update strategies
    • StatefulSet canary release
    • Cascading and non-cascading deletion of a StatefulSet
    • DaemonSet: the daemon service
    • Using a DaemonSet
    • DaemonSet updates and rollbacks
    • Label&Selector
    • What is HPA?
    • HPA autoscaling in practice

    Scaling a StatefulSet

    Check the nginx replicas

    [root@k8s-master01 ~]# kubectl get po
    NAME                     READY   STATUS    RESTARTS       AGE
    web-0                    1/1     Running   1 (7h1m ago)   22h
    web-1                    1/1     Running   1 (7h1m ago)   22h
    web-2                    1/1     Running   1 (7h1m ago)   22h
    

    StatefulSet replicas start in order of their ordinal: 0, 1, 2. web-1 is only started after web-0 is fully up, and web-2 is only started after web-1 is fully up.

    Deletion happens in the reverse order, starting from the highest ordinal: 2, 1, 0. If web-0 goes down while web-2 is being deleted, web-1 will not be deleted; the controller waits until web-0 is back in the Ready state before it deletes web-1.
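
    This ordering is governed by the StatefulSet's podManagementPolicy field, which defaults to OrderedReady. A minimal sketch of the relevant manifest fragment (the Parallel value mentioned in the comment is an alternative that is not used in these notes):

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: web
    spec:
      # OrderedReady (default): create Pods 0,1,2... in order and delete them in reverse,
      # waiting for each Pod to become Ready before moving on
      # Parallel: create and delete all Pods at once, with no ordering guarantees
      podManagementPolicy: OrderedReady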

    Open another terminal window to watch the StatefulSet

    [root@k8s-master01 ~]# kubectl get po -l app=nginx -w
    NAME                     READY   STATUS    RESTARTS        AGE
    web-0                    1/1     Running   1 (7h14m ago)   22h
    web-1                    1/1     Running   1 (7h14m ago)   22h
    web-2                    1/1     Running   1 (7h14m ago)   22h
    

    Scale out to 5 replicas

    [root@k8s-master01 ~]# kubectl scale --replicas=5 sts web
    statefulset.apps/web scaled
    

    Watch output (the Pods start in order)

    [root@k8s-master01 ~]# kubectl get po -l app=nginx -w
    NAME                     READY   STATUS    RESTARTS        AGE
    web-3                    0/1     Pending   0               0s
    web-3                    0/1     Pending   0               0s
    web-3                    0/1     ContainerCreating   0               0s
    web-3                    1/1     Running             0               1s
    web-4                    0/1     Pending             0               0s
    web-4                    0/1     Pending             0               0s
    web-4                    0/1     ContainerCreating   0               0s
    web-4                    1/1     Running             0               1s
    

    Scale in to 2 replicas

    [root@k8s-master01 ~]# kubectl scale --replicas=2 sts web
    statefulset.apps/web scaled
    

    Watch output (the Pods are deleted in the reverse of the startup order)

    web-4                    1/1     Terminating         0               14m
    web-4                    0/1     Terminating         0               14m
    web-4                    0/1     Terminating         0               14m
    web-4                    0/1     Terminating         0               14m
    web-3                    1/1     Terminating         0               14m
    web-3                    0/1     Terminating         0               14m
    web-3                    0/1     Terminating         0               14m
    web-3                    0/1     Terminating         0               14m
    web-2                    1/1     Terminating         1 (7h29m ago)   22h
    web-2                    0/1     Terminating         1 (7h29m ago)   22h
    web-2                    0/1     Terminating         1 (7h29m ago)   22h
    web-2                    0/1     Terminating         1 (7h29m ago)   22h
    

    During a rolling update, a StatefulSet deletes the old replica first and then creates the new one. With only a single replica this makes the service unavailable during the update, so choose between StatefulSet and Deployment based on your actual situation; if you need a fixed hostname or Pod name, a StatefulSet is recommended.

    Check the hostname

    [root@k8s-master01 ~]# kubectl exec -ti web-0 -- sh
    # hostname
    web-0
    # exit
    
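
    Besides a stable hostname, each StatefulSet Pod also gets a stable DNS record through its governing headless Service. A small sketch, assuming the headless Service in this example is named nginx in the default namespace and that the busybox:1.28 image can be pulled:

    # DNS pattern: <pod-name>.<service-name>.<namespace>.svc.cluster.local
    # e.g. web-0.nginx.default.svc.cluster.local
    [root@k8s-master01 ~]# kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup web-0.nginx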

    StatefulSet update strategies

    • RollingUpdate
    • OnDelete

    Like a Deployment, a StatefulSet supports the update strategies listed above

    RollingUpdate

    Check the update strategy

    [root@k8s-master01 ~]# kubectl get sts -o yaml
        updateStrategy:
          rollingUpdate:
            partition: 0
          type: RollingUpdate # RollingUpdate is the default: Pods are updated from the highest ordinal to the lowest
    
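
    The full YAML is long; to print just the update strategy you can also use a jsonpath query (a convenience command, not part of the original notes):

    [root@k8s-master01 ~]# kubectl get sts web -o jsonpath='{.spec.updateStrategy}{"\n"}'
    # prints something like {"rollingUpdate":{"partition":0},"type":"RollingUpdate"}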

    Scale to 3 replicas

    [root@k8s-master01 ~]# kubectl scale --replicas=3 sts web
    statefulset.apps/web scaled
    

    Check the Pods

    [root@k8s-master01 ~]# kubectl get po
    NAME                     READY   STATUS    RESTARTS       AGE
    web-0                    1/1     Running   0              53m
    web-1                    1/1     Running   1 (8h ago)     23h
    web-2                    1/1     Running   0              15s
    

    The rolling update order is web-2, web-1, web-0, from the highest ordinal down. If web-0 goes down during the update, the controller waits until web-0 is back to Ready before it continues the rolling update.

    Open another terminal window to watch the StatefulSet

    [root@k8s-master01 ~]# kubectl get po -l app=nginx -w
    NAME                     READY   STATUS    RESTARTS     AGE
    web-0                    1/1     Running   0            13s
    web-1                    1/1     Running   0            23s
    web-2                    1/1     Running   0            33s
    

    Change the image to trigger an update

    [root@k8s-master01 ~]# kubectl edit sts web
    /image then press Enter
    # change the image
          - image: nginx:1.15.3
    

    Watch the update in progress

    [root@k8s-master01 ~]# kubectl get po
    NAME                     READY   STATUS        RESTARTS       AGE
    web-0                    1/1     Running       0              58m
    web-1                    0/1     Terminating   1 (8h ago)     23h
    web-2                    1/1     Running       0              4s
    

    Watch output

    web-2                    1/1     Terminating   0            101s
    web-2                    0/1     Terminating   0            101s
    web-2                    0/1     Terminating   0            110s
    web-2                    0/1     Terminating   0            110s
    web-2                    0/1     Pending       0            0s
    web-2                    0/1     Pending       0            0s
    web-2                    0/1     ContainerCreating   0            0s
    web-2                    1/1     Running             0            2s
    web-1                    1/1     Terminating         0            102s
    web-1                    0/1     Terminating         0            103s
    web-1                    0/1     Terminating         0            110s
    web-1                    0/1     Terminating         0            110s
    web-1                    0/1     Pending             0            0s
    web-1                    0/1     Pending             0            0s
    web-1                    0/1     ContainerCreating   0            0s
    web-1                    1/1     Running             0            1s
    web-0                    1/1     Terminating         0            101s
    web-0                    0/1     Terminating         0            102s
    web-0                    0/1     Terminating         0            110s
    web-0                    0/1     Terminating         0            110s
    web-0                    0/1     Pending             0            0s
    web-0                    0/1     Pending             0            0s
    web-0                    0/1     ContainerCreating   0            0s
    web-0                    1/1     Running             0            1s
    
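
    Instead of watching the Pod list, the rollout can also be followed directly (a convenience command, not part of the original notes); it blocks until every Pod has been replaced and is Ready:

    [root@k8s-master01 ~]# kubectl rollout status sts web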

    OnDelete

    Change the update strategy to OnDelete

    [root@k8s-master01 ~]# kubectl edit sts web
    # modify the following
      updateStrategy:
        type: OnDelete
    

    Change the image

    [root@k8s-master01 ~]# kubectl edit sts web
    /image then press Enter
    # change the image
          - image: nginx:1.15.2
    

    Check the Pods; nothing has been updated

    [root@k8s-master01 ~]# kubectl get po
    NAME                     READY   STATUS    RESTARTS       AGE
    web-0                    1/1     Running   0              3m26s
    web-1                    1/1     Running   0              3m36s
    web-2                    1/1     Running   0              3m49s
    

    Manually delete a Pod to trigger the update

    [root@k8s-master01 ~]# kubectl delete po web-2
    pod "web-2" deleted
    

    Check the Pods

    [root@k8s-master01 ~]# kubectl get po
    NAME                     READY   STATUS    RESTARTS       AGE
    web-0                    1/1     Running   0              5m6s
    web-1                    1/1     Running   0              5m16s
    web-2                    1/1     Running   0              9s
    

    Check the web-2 image; the update succeeded

    [root@k8s-master01 ~]# kubectl get po web-2 -oyaml | grep image
      - image: nginx:1.15.2
        imagePullPolicy: IfNotPresent
        image: nginx:1.15.2
        imageID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
    

    Check the web-1 image; it has not been updated, so with OnDelete the image is only updated when a Pod is deleted

    [root@k8s-master01 ~]# kubectl get po web-1 -oyaml | grep image
      - image: nginx:1.15.3
        imagePullPolicy: IfNotPresent
        image: nginx:1.15.3
        imageID: docker-pullable://nginx@sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3
    

    Delete the other two Pods

    [root@k8s-master01 ~]# kubectl delete po web-0 web-1
    pod "web-0" deleted
    pod "web-1" deleted
    

    Watch output; the Pods are recreated in the order they were deleted

    web-0                    0/1     Pending             0            0s
    web-0                    0/1     ContainerCreating   0            0s
    web-0                    1/1     Running             0            1s
    web-1                    0/1     Pending             0            0s
    web-1                    0/1     Pending             0            0s
    web-1                    0/1     ContainerCreating   0            0s
    web-1                    1/1     Running             0            1s
    

    Check the images of all Pods; all three Pods have now been updated

    [root@k8s-master01 ~]# kubectl get po -oyaml | grep image
          imageID: docker-pullable://nginx@sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3
        - image: nginx:1.15.2
          imagePullPolicy: IfNotPresent
          image: nginx:1.15.2
          imageID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
        - image: nginx:1.15.2
          imagePullPolicy: IfNotPresent
          image: nginx:1.15.2
          imageID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
        - image: nginx:1.15.2
          imagePullPolicy: IfNotPresent
          image: nginx:1.15.2
          imageID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
    

    StatefulSet canary release

    Modify the configuration

    [root@k8s-master01 ~]# kubectl edit sts web
    # modify the following
      updateStrategy:
        type: RollingUpdate
        rollingUpdate:
          partition: 2 # Pods with an ordinal lower than 2 will not be updated
    

    Open another terminal window to watch

    [root@k8s-master01 ~]# kubectl get po -l app=nginx -w
    NAME                     READY   STATUS    RESTARTS       AGE
    web-0                    1/1     Running   0              44h
    web-1                    1/1     Running   0              44h
    web-2                    1/1     Running   0              44h
    

    Change the image (nginx:1.15.2 -> nginx:1.15.3)

    [root@k8s-master01 ~]# kubectl edit sts web
    # modify the following
        spec:
          containers:
          - image: nginx:1.15.3
    

    Watch output; only Pods with an ordinal of 2 or higher are updated

    [root@k8s-master01 ~]# kubectl get po -l app=nginx -w
    NAME                     READY   STATUS    RESTARTS       AGE
    web-0                    1/1     Running   0              44h
    web-1                    1/1     Running   0              44h
    web-2                    1/1     Running   0              44h
    web-2                    1/1     Terminating   0              44h
    web-2                    0/1     Terminating   0              44h
    web-2                    0/1     Terminating   0              44h
    web-2                    0/1     Terminating   0              44h
    web-2                    0/1     Pending       0              0s
    web-2                    0/1     Pending       0              0s
    web-2                    0/1     ContainerCreating   0              0s
    web-2                    1/1     Running             0              3s
    

    Check the images; web-2 is now on nginx:1.15.3 while the other two are still on nginx:1.15.2

    [root@k8s-master01 ~]# kubectl get po -oyaml | grep image
        - image: nginx:1.15.2
          imagePullPolicy: IfNotPresent
          image: nginx:1.15.2
          imageID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
        - image: nginx:1.15.2
          imagePullPolicy: IfNotPresent
          image: nginx:1.15.2
          imageID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
        - image: nginx:1.15.3
          imagePullPolicy: IfNotPresent
          image: nginx:1.15.3
          imageID: docker-pullable://nginx@sha256:24a0c4b4a4c0eb97a1aabb8e29f18e917d05abfe1b7a7c07857230879ce7d3d3
    

    This mechanism can be used for canary (gray) releases: release one or two instances first, and only after confirming they are healthy release all instances. This is the StatefulSet partitioned (staged) update, which effectively gives you a canary release mechanism; other approaches, such as a service mesh, can also be used.
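
    Once the canary Pods look healthy, lowering partition back to 0 lets the remaining Pods roll to the new version as well. A minimal sketch using kubectl patch as an alternative to kubectl edit (not shown in the original notes):

    [root@k8s-master01 ~]# kubectl patch sts web -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
    # web-1 and then web-0 are now updated to nginx:1.15.3 as well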

    Cascading and non-cascading deletion of a StatefulSet

    • Cascading deletion: deleting the sts also deletes its Pods
    • Non-cascading deletion: deleting the sts does not delete its Pods

    Get the sts

    [root@k8s-master01 ~]# kubectl get sts
    NAME   READY   AGE
    web    3/3     2d20h
    

    Cascading deletion

    [root@k8s-master01 ~]# kubectl delete sts web
    statefulset.apps "web" deleted
    

    Check the Pods

    [root@k8s-master01 ~]# kubectl get po
    NAME                     READY   STATUS        RESTARTS       AGE
    web-0                    0/1     Terminating   0              45h
    web-1                    0/1     Terminating   0              45h
    web-2                    0/1     Terminating   0              11m
    

    Recreate the StatefulSet

    [root@k8s-master01 ~]# kubectl create -f nginx-sts.yaml
    statefulset.apps/web created
    Error from server (AlreadyExists): error when creating "nginx-sts.yaml": services "nginx" already exists
    

    Check the Pods

    [root@k8s-master01 ~]# kubectl get po
    NAME                     READY   STATUS    RESTARTS       AGE
    web-0                    1/1     Running   0              7s
    web-1                    1/1     Running   0              5s
    

    Non-cascading deletion

    [root@k8s-master01 ~]# kubectl delete sts web --cascade=false
    warning: --cascade=false is deprecated (boolean value) and can be replaced with --cascade=orphan.
    statefulset.apps "web" deleted
    
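
    As the warning suggests, on newer kubectl versions the equivalent non-cascading form is:

    [root@k8s-master01 ~]# kubectl delete sts web --cascade=orphan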

    Check the sts; it has been deleted

    [root@k8s-master01 ~]# kubectl get sts
    No resources found in default namespace.
    

    Check the Pods; they still exist, but they are no longer managed by an sts, so if they are deleted they will not be recreated

    [root@k8s-master01 ~]# kubectl get po
    NAME                     READY   STATUS    RESTARTS       AGE
    web-0                    1/1     Running   0              3m37s
    web-1                    1/1     Running   0              3m35s
    

    Delete web-1 and web-0

    [root@k8s-master01 ~]# kubectl delete po web-1 web-0
    pod "web-1" deleted
    pod "web-0" deleted
    

    Check the Pods; without an sts managing them, the deleted Pods are not recreated

    [root@k8s-master01 ~]# kubectl get po
    NAME                     READY   STATUS    RESTARTS         AGE
    

    DaemonSet: the daemon service

    DaemonSet: a daemon set, abbreviated ds, deploys one Pod on every node, or on every node that matches a selector.

    Typical DaemonSet use cases

    • Running a cluster storage daemon, such as ceph or glusterd
    • The node CNI network plugin, e.g. calico
    • Node-level log collection: fluentd or filebeat
    • Node monitoring: node exporter
    • Service exposure: deploying an ingress nginx

    Using a DaemonSet

    Create a new DaemonSet

    [root@k8s-master01 ~]# cp nginx-deploy.yaml nginx-ds.yaml
    [root@k8s-master01 ~]# vim nginx-ds.yaml 
    # modify the content as follows
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      labels:
        app: nginx
      name: nginx
    spec:
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          creationTimestamp: null
          labels:
            app: nginx
        spec:
          containers:
          - image: nginx:1.15.2
            imagePullPolicy: IfNotPresent
            name: nginx
            resources: {}
            terminationMessagePath: /dev/termination-log
            terminationMessagePolicy: File
          dnsPolicy: ClusterFirst
          restartPolicy: Always
          schedulerName: default-scheduler
          securityContext: {}
          terminationGracePeriodSeconds: 30
    

    Create the ds; since no nodeSelector is configured, it starts one Pod on every node

    [root@k8s-master01 ~]# kubectl create -f nginx-ds.yaml
    daemonset.apps/nginx created
    

    Check the Pods

    [root@k8s-master01 ~]# kubectl get po -owide
    NAME                     READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
    nginx-2xtms              1/1     Running   0          90s     172.25.244.196   k8s-master01   <none>           <none>
    nginx-66bbc9fdc5-4xqcw   1/1     Running   0          5m43s   172.25.244.195   k8s-master01   <none>           <none>
    nginx-ct4xh              1/1     Running   0          90s     172.17.125.2     k8s-node01     <none>           <none>
    nginx-hx9ws              1/1     Running   0          90s     172.27.14.195    k8s-node02     <none>           <none>
    nginx-mjph9              1/1     Running   0          90s     172.18.195.2     k8s-master03   <none>           <none>
    nginx-p64rf              1/1     Running   0          90s     172.25.92.67     k8s-master02   <none>           <none>
    

    Label the nodes that the DaemonSet should be deployed to

    [root@k8s-master01 ~]# kubectl label node k8s-node01 k8s-node02 ds=true
    node/k8s-node01 labeled
    node/k8s-node02 labeled
    

    Check the node labels

    [root@k8s-master01 ~]# kubectl get node --show-labels
    NAME           STATUS   ROLES    AGE   VERSION   LABELS
    k8s-master01   Ready    <none>   3d    v1.20.9   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master01,kubernetes.io/os=linux,node.kubernetes.io/node=
    k8s-master02   Ready    <none>   3d    v1.20.9   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master02,kubernetes.io/os=linux,node.kubernetes.io/node=
    k8s-master03   Ready    <none>   3d    v1.20.9   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master03,kubernetes.io/os=linux,node.kubernetes.io/node=
    k8s-node01     Ready    <none>   3d    v1.20.9   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ds=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node01,kubernetes.io/os=linux,node.kubernetes.io/node=
    k8s-node02     Ready    <none>   3d    v1.20.9   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ds=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node02,kubernetes.io/os=linux,node.kubernetes.io/node=
    

    Modify nginx-ds.yaml

    [root@k8s-master01 ~]# vim nginx-ds.yaml
    # modify the following
        spec:
          nodeSelector:
            ds: "true"
    

    Update the configuration

    [root@k8s-master01 ~]# kubectl replace -f nginx-ds.yaml
    

    Check the Pods; the Pods on nodes that do not match the label have been removed

    [root@k8s-master01 ~]# kubectl get po -owide
    NAME                     READY   STATUS    RESTARTS   AGE   IP               NODE           NOMINATED NODE   READINESS GATES
    nginx-66bbc9fdc5-4xqcw   1/1     Running   0          15m   172.25.244.195   k8s-master01   <none>           <none>
    nginx-gd6sp              1/1     Running   0          44s   172.27.14.196    k8s-node02     <none>           <none>
    nginx-pl4dz              1/1     Running   0          47s   172.17.125.3     k8s-node01     <none>           <none>
    

    DaemonSet updates and rollbacks

    StatefulSet and DaemonSet updates and rollbacks work the same way as for a Deployment

    The recommended update strategy is OnDelete

    updateStrategy:
        type: OnDelete
    

    Because a DaemonSet may be deployed on many nodes of the k8s cluster, it is safer to test on a few nodes first: deleting a Pod triggers the update only on that node and does not affect the other nodes
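
    A minimal sketch of that workflow, continuing the earlier example (nginx:1.15.3 is just an illustrative new version, and nginx-pl4dz is the Pod on the test node k8s-node01 from the output above):

    # change the image in the DaemonSet template; with OnDelete no Pod is restarted yet
    kubectl set image ds nginx nginx=nginx:1.15.3
    # delete the Pod on one test node so that only this node rolls to the new version
    kubectl delete po nginx-pl4dz
    # once verified, delete the remaining Pods to finish the rollout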

    Check the rollout history

    kubectl rollout history ds nginx
    

    Label&Selector

    Label: classifies and groups the various resources in k8s by attaching a tag with a particular attribute

    Selector: a filtering syntax used to find the resources that carry the corresponding labels

    When Kubernetes "groups" any API objects in the system, such as Pods and nodes, it attaches Labels (key-value pairs of the form key=value) to them so that the corresponding API objects can be selected precisely. A Selector (label selector) is the query method used to match those objects.

    For example, a commonly used label such as tier can distinguish a container's role, e.g. frontend or backend, and a release_track label can distinguish a container's environment, e.g. canary or production
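
    A hypothetical Pod template fragment carrying such labels could look like this (the values are purely illustrative):

    metadata:
      labels:
        app: productpage
        tier: frontend
        release_track: canary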

    Label

    Define a Label

    [root@k8s-master01 ~]# kubectl label node k8s-node02 region=subnet7
    node/k8s-node02 labeled
    

    Filter on it with a Selector

    [root@k8s-master01 ~]# kubectl get no -l region=subnet7
    NAME         STATUS   ROLES    AGE     VERSION
    k8s-node02   Ready    <none>   3d17h   v1.17.3
    

    In a Deployment or another controller, specify that the Pod should be scheduled onto that node

    containers:
      ......
    dnsPolicy: ClusterFirst
    nodeSelector:
      region: subnet7
    restartPolicy: Always
    ......
    

    Label a Service

    [root@k8s-master01 ~]# kubectl label svc canary-v1 -n canary-production env=canary version=v1
    service/canary-v1 labeled
    

    Check the Labels

    [root@k8s-master01 ~]# kubectl get svc -n canary-production --show-labels
    NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE   LABELS
    canary-v1   ClusterIP   10.110.253.62   <none>        8080/TCP   24h   env=canary,version=v1
    

    List all svc with version v1

    [root@k8s-master01 canary]# kubectl get svc --all-namespaces -l version=v1
    NAMESPACE           NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    canary-production   canary-v1   ClusterIP   10.110.253.62   <none>        8080/TCP   25h
    

    Selector

    A Selector is mainly used for resource matching: only resources that satisfy the conditions are selected or used, and this mechanism can be used to organise the various kinds of resources in the cluster

    Suppose we want to match conditions with a Selector; the Labels that currently exist are as follows

    [root@k8s-master01 ~]# kubectl get svc --show-labels
    NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE     LABELS
    details       ClusterIP   10.99.9.178      <none>        9080/TCP   45h     app=details
    kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP    3d19h   component=apiserver,provider=kubernetes
    nginx         ClusterIP   10.106.194.137   <none>        80/TCP     2d21h   app=productpage,version=v1
    nginx-v2      ClusterIP   10.108.176.132   <none>        80/TCP     2d20h   <none>
    productpage   ClusterIP   10.105.229.52    <none>        9080/TCP   45h     app=productpage,tier=frontend
    ratings       ClusterIP   10.96.104.95     <none>        9080/TCP   45h     app=ratings
    reviews       ClusterIP   10.102.188.143   <none>        9080/TCP   45h     app=reviews
    

    Select the svc whose app is details or productpage

    [root@k8s-master01 ~]# kubectl get svc -l  'app in (details, productpage)' --show-labels
    NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE     LABELS
    details       ClusterIP   10.99.9.178      <none>        9080/TCP   45h     app=details
    nginx         ClusterIP   10.106.194.137   <none>        80/TCP     2d21h   app=productpage,version=v1
    productpage   ClusterIP   10.105.229.52    <none>        9080/TCP   45h     app=productpage,tier=frontend
    

    Select the svc whose app is details or productpage but exclude those with version=v1

    [root@k8s-master01 ~]# kubectl get svc -l  version!=v1,'app in (details, productpage)' --show-labels
    NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE   LABELS
    details       ClusterIP   10.99.9.178     <none>        9080/TCP   45h   app=details
    productpage   ClusterIP   10.105.229.52   <none>        9080/TCP   45h   app=productpage,tier=frontend
    

    Select the svc that have a label with key app

    [root@k8s-master01 ~]# kubectl get svc -l app --show-labels
    NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE     LABELS
    details       ClusterIP   10.99.9.178      <none>        9080/TCP   45h     app=details
    nginx         ClusterIP   10.106.194.137   <none>        80/TCP     2d21h   app=productpage,version=v1
    productpage   ClusterIP   10.105.229.52    <none>        9080/TCP   45h     app=productpage,tier=frontend
    ratings       ClusterIP   10.96.104.95     <none>        9080/TCP   45h     app=ratings
    reviews       ClusterIP   10.102.188.143   <none>        9080/TCP   45h     app=reviews
    

    In practice, Labels change frequently; the --overwrite flag can be used to modify an existing label

    Modify a label, for example change version=v1 to version=v2

    [root@k8s-master01 canary]# kubectl get svc -n canary-production --show-labels
    NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE   LABELS
    canary-v1   ClusterIP   10.110.253.62   <none>        8080/TCP   26h   env=canary,version=v1
    [root@k8s-master01 canary]# kubectl label svc canary-v1 -n canary-production version=v2 --overwrite
    service/canary-v1 labeled
    [root@k8s-master01 canary]# kubectl get svc -n canary-production --show-labels
    NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE   LABELS
    canary-v1   ClusterIP   10.110.253.62   <none>        8080/TCP   26h   env=canary,version=v2
    

    Delete a label, for example remove version

    [root@k8s-master01 canary]# kubectl label svc canary-v1 -n canary-production version-
    service/canary-v1 labeled
    [root@k8s-master01 canary]# kubectl get svc -n canary-production --show-labels
    NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE   LABELS
    canary-v1   ClusterIP   10.110.253.62   <none>        8080/TCP   26h   env=canary
    

    What is HPA?

    Horizontal Pod Autoscaler

    The horizontal Pod autoscaler scales the number of Pod replicas out and in automatically

    k8s does not recommend VPA (vertical scaling), because a cluster has many nodes; it is better to spread traffic across different nodes than to pile it onto the same node

    • HPA v1 is the stable version of horizontal autoscaling and only supports the CPU metric
    • v2 is the beta version, split into v2beta1 (supporting CPU, memory and custom metrics)
    • and v2beta2 (supporting CPU, memory, custom metrics Custom and extra metrics ExternalMetrics); see the sketch after this list
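
    A sketch of what the v2beta2 equivalent of the kubectl autoscale command used below could look like (the hpa-nginx name, the 1-10 replica range and the 10% CPU target all mirror that command):

    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: hpa-nginx
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: hpa-nginx
      minReplicas: 1
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 10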

    HPA autoscaling in practice

    • metrics-server (or another custom metrics server) must be installed
    • the requests parameter must be configured
    • objects that cannot be scaled, such as a DaemonSet, cannot be autoscaled

    Use dry-run to export a yaml file so that it can be modified further

    kubectl create deployment hpa-nginx --image=registry.cn-beijing.aliyuncs.com/dotbalo/nginx --dry-run=client -oyaml > hpa-nginx.yaml
    

    Edit hpa-nginx.yaml and add resource requests to the containers section

    containers:
    - image: registry.cn-beijing.aliyuncs.com/dotbalo/nginx
      name: nginx
      resources:
        requests:
          cpu: 10m
    

    Create it

    kubectl create -f hpa-nginx.yaml
    

    Expose a Service

    kubectl expose deployment hpa-nginx --port=80
    

    Configure autoscaling

    kubectl autoscale deployment hpa-nginx --cpu-percent=10 --min=1 --max=10
    

    Run a loop to drive CPU usage up; once it is stopped, CPU usage drops back down

    while true; do wget -q -O- http://192.168.42.44 > /dev/null; done
    
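
    While the loop is running, the scaling can be watched from another window (a convenience command, not part of the original notes):

    kubectl get hpa hpa-nginx -w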

    Course link

    http://www.kubeasy.com/

    Creative Commons License

    This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

    You are welcome to repost, use and republish this article, but please keep the author attribution 郑子铭 (including the link: http://www.cnblogs.com/MingsonZheng/), do not use it for commercial purposes, and publish any work based on this article under the same license.
