
    Docker+K8s Basics (Part 4)


    • Pod controllers
      • A: Pod controller types
    • ReplicaSet controller
      • A: ReplicaSet controller introduction
      • B: Using the ReplicaSet controller
    • Deployment controller
      • A: Deployment controller introduction and basic usage
    • DaemonSet controller
      • A: DaemonSet controller introduction
      • B: Basic DaemonSet usage
      • C: Shared pod fields

    ♣ 1: Pod Controllers

    A: Pod controller types

    A pod created directly from a YAML manifest is an autonomous pod: once we delete it, nothing recreates it, because no controller owns it. The pods we started earlier with run were controller-managed: delete one and the controller rebuilds an identical replacement, because a controller strictly reconciles the number of pods it owns against the user's declared expectation.
    Deleting controller-managed pods directly is therefore discouraged; instead, change the replica count on the controller and let it converge to what we expect.
    A pod controller is the middle layer that manages pods on our behalf and keeps every pod resource in the state we declared. If a container inside a pod fails, the controller tries to restart it; if restarts keep failing, it re-orchestrates and redeploys the pod according to its internal policy.
    If the number of pods falls below the user's target, new ones are created; surplus pods are terminated.
    "Controller" is a generic term; the concrete controller resources come in several types:
    1: ReplicaSet: creates the user-specified number of pod replicas and keeps the count matching the user's expectation at all times. ReplicaSet also supports scaling up and down, and it has replaced the older ReplicationController.
    The three core pieces of a ReplicaSet:
    1: the user-defined pod replica count
    2: a label selector
    3: a pod template
    Powerful as ReplicaSet is, we should not use it directly; even Kubernetes itself advises users not to use ReplicaSet directly but to use Deployment instead.
    Deployment: also a controller, but it does not replace ReplicaSet to manage pods directly. It controls ReplicaSets, which in turn control the pods, so a Deployment is built on top of ReplicaSets, not on top of pods. Beyond the two capabilities a ReplicaSet already provides, Deployment adds powerful features such as rolling updates and rollback, plus declarative configuration: we define resources by declaring the target logic, and we can dynamically modify the desired target state recorded on the apiserver at any time.
    Deployment is currently one of the best controllers available.
    Deployment is meant for managing stateless applications, where we only care about the group and never about any individual pod.
    How controllers place pods:
    1: the pod count may exceed the node count; pods and nodes have no one-to-one mapping. Pods beyond the node count are spread across different nodes by the scheduler's policy, so one node may end up with 5 and another with 3. For some services, though, multiple identical pods on one node is simply unnecessary, e.g. an ELK log collector or a monitoring agent: a single pod per node can collect the logs produced by every pod on that node, and running more just burns resources.
    Deployment cannot express this requirement well. We want exactly one log-collector pod on each node, and if that pod dies, a precise rebuild on the node where it died. That calls for another controller: DaemonSet.
    DaemonSet:
    Ensures that each node of the running cluster runs exactly one copy of a specific pod. This not only avoids the problem above but also means that when a new node joins the cluster, it gets its copy of that pod automatically. The number of pods such a controller manages is therefore determined directly by the size of the cluster. A pod template and a label selector are, of course, still required.
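    The description above can be sketched as a minimal DaemonSet manifest. The name and the image (a node-level log agent) are illustrative assumptions, not from the original setup:

    ```yaml
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: log-agent              # hypothetical name
      namespace: default
    spec:
      selector:                    # the label selector is still mandatory
        matchLabels:
          app: log-agent
      template:                    # ...and so is the pod template
        metadata:
          labels:
            app: log-agent
        spec:
          containers:
          - name: agent
            image: fluentd:v1.9    # assumed image; any per-node agent follows the same pattern
    ```

    Note there is no replicas field: the pod count follows the number of eligible nodes, and a node that joins later gets its copy automatically.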
    Job:
    A Job is for tasks that run once at a planned point in time and exit when done, with no need to stay resident in the background, e.g. a database backup that should terminate as soon as the dump finishes. There are edge cases, though: if MySQL runs out of connections or crashes mid-task, the Job-managed pod must still finish the assigned work before it may end; if it exits early, it is rebuilt until the task actually completes. Jobs suit one-off tasks.
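    A one-off backup task like the one described might look like this minimal Job sketch; the database host, credentials, and paths are placeholders, not from the original:

    ```yaml
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: db-backup                  # hypothetical name
    spec:
      backoffLimit: 4                  # retry a failed pod a few times instead of giving up
      template:
        spec:
          restartPolicy: OnFailure     # keep re-running until the task actually completes
          containers:
          - name: backup
            image: mysql:5.7
            command: ["sh", "-c", "mysqldump -h mysql-host -uroot -p$PASS mydb > /backup/mydb.sql"]
    ```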
    CronJob:
    A CronJob implements much the same functionality as a Job, but is suited to periodic scheduled tasks. With periodic tasks we must also consider what to do when the next scheduled run arrives while the previous run has not finished yet.
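    The overlap question above is exactly what concurrencyPolicy answers in a CronJob. A sketch, with an illustrative schedule and job body:

    ```yaml
    apiVersion: batch/v1               # batch/v1beta1 on older clusters
    kind: CronJob
    metadata:
      name: nightly-backup             # hypothetical name
    spec:
      schedule: "0 2 * * *"            # every day at 02:00
      concurrencyPolicy: Forbid        # if the previous run is still going, skip this one
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: OnFailure
              containers:
              - name: backup
                image: mysql:5.7
                command: ["sh", "-c", "mysqldump mydb > /backup/mydb-$(date +%F).sql"]
    ```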
    StatefulSet:
    A StatefulSet is suited to managing stateful applications, where the individual matters. Take a Redis cluster we created: if one of its members dies, a freshly started pod cannot simply replace it, because the data the old Redis held may have been lost along with it.
    A StatefulSet manages each pod individually: every pod has its own unique identity and its own data set, and when a failure occurs, a new pod needs substantial initialization before it can join. Rebuilding these stateful, data-bearing applications after a failure is painful, because rebuilding Redis and configuring MySQL master-slave replication are completely different procedures. That logic has to be written as scripts embedded in the StatefulSet template, which demands a lot of human validation, since once the controller loads the template everything runs automatically, and one mistake can lose the data.
    Kubernetes or not, every stateful application faces this dilemma: after a failure, how do you guarantee no data is lost while a fresh instance comes up quickly and continues from the previous data? Even if you have solved it for a directly deployed application, porting it onto Kubernetes is a different situation again.
    Kubernetes also supported a special resource type, TPR (ThirdPartyResource), which was replaced after version 1.8 by CRD (CustomResourceDefinition). Its purpose is custom resources: you package the management logic specific to a target resource and pour that logic into an Operator. Doing so is hard enough that, to date, not many applications ship in this form.
    To make all of this simpler, Kubernetes later gained a tool called Helm, which works much like yum on CentOS: we only declare where the storage volume lives, how much memory to use and so on, then install directly. Helm already covers many mainstream applications, but the packaged charts often do not fit a particular environment as-is, which is why Helm adoption is still limited.
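    A hedged sketch of that yum-like workflow (Helm 3 syntax; the repository, chart name, and value key are examples and depend on the chart):

    ```shell
    # add a chart repository and install an application like a package
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update
    # only the desired values (storage size, memory, replicas ...) are supplied;
    # the chart renders all the Kubernetes manifests for us
    helm install my-redis bitnami/redis --set master.persistence.size=8Gi
    ```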

    ♣ 2: ReplicaSet Controller

    A: ReplicaSet controller introduction:

    We can inspect the fields with kubectl explain rc. (Careful: rc is actually the shorthand for the older ReplicationController, while ReplicaSet shortens to rs; the field layout of the two is nearly identical, so the output below still applies.)

    [root@www kubeadm]# kubectl explain rc  
    The top-level fields are shown:
    KIND:     ReplicationController
    VERSION:  v1
    
    DESCRIPTION:
         ReplicationController represents the configuration of a replication
         controller.
    
    FIELDS:
       apiVersion   <string>
         APIVersion defines the versioned schema of this representation of an
         object. Servers should convert recognized schemas to the latest internal
         value, and may reject unrecognized values. More info:
         https://git.k8s.io/community/contributors/devel/api-conventions.md#resources
    
       kind <string>
         Kind is a string value representing the REST resource this object
         represents. Servers may infer this from the endpoint the client submits
         requests to. Cannot be updated. In CamelCase. More info:
         https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
    
       metadata     <Object>
         If the Labels of a ReplicationController are empty, they are defaulted to
         be the same as the Pod(s) that the replication controller manages. Standard
         object's metadata. More info:
         https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
    
       spec <Object>
         Spec defines the specification of the desired behavior of the replication
         controller. More info:
         https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
    
       status       <Object>
         Status is the most recently observed status of the replication controller.
         This data may be out of date by some window of time. Populated by the
         system. Read-only. More info:
         https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
    spec:
    [root@www kubeadm]# kubectl explain rc.spec
    KIND:     ReplicationController
    VERSION:  v1
    
    RESOURCE: spec <Object>
    
    DESCRIPTION:
         Spec defines the specification of the desired behavior of the replication
         controller. More info:
         https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
    
         ReplicationControllerSpec is the specification of a replication controller.
    
    FIELDS:
       minReadySeconds      <integer>
         Minimum number of seconds for which a newly created pod should be ready
         without any of its container crashing, for it to be considered available.
         Defaults to 0 (pod will be considered available as soon as it is ready)
    
       replicas     <integer>
         Replicas is the number of desired replicas. This is a pointer to
         distinguish between explicit zero and unspecified. Defaults to 1. More
         info:
         https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#what-is-a-replicationcontroller
    
       selector     <map[string]string>
         Selector is a label query over pods that should match the Replicas count.
         If Selector is empty, it is defaulted to the labels present on the Pod
         template. Label keys and values that must match in order to be controlled
         by this replication controller, if empty defaulted to labels on Pod
         template. More info:
         https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors
    
       template     <Object>
         Template is the object that describes the pod that will be created if
         insufficient replicas are detected. This takes precedence over a
         TemplateRef. More info:
         https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#pod-template
    
    [root@www kubeadm]#
    ReplicaSet field reference

    The essential things to define in a ReplicaSet spec are:
    1: the replica count,
    2: the label selector,
    3: the pod template

    Example:
    apiVersion: apps/v1
    kind: ReplicaSet             # the resource type is ReplicaSet
    metadata:
      name: myapp
      namespace: default
    spec:
      replicas: 2                # create two pod replicas
      selector:                  # which label selector to use
        matchLabels:             # multiple labels under matchLabels are ANDed together
          app: myapp             # several labels may be listed
          release: public-survey # (label values may not contain spaces) declaring two labels means a pod must carry both to be selected
      template:                  # the pod template
        metadata:                # the template has two children, metadata and spec, used exactly as in a kind: Pod manifest
          name: myapp-pod
          labels:                # these labels must include every matchLabels label above; more is fine, fewer is not,
                                 # otherwise the pods created never satisfy the selector and the controller would create
                                 # pods endlessly (current API versions reject such a manifest at validation time)
            app: myapp
            release: public-survey
            time: current
        spec:
          containers:
          - name: myapp-test
            image: ikubernetes/myapp:v1
            ports:
            - name: http
              containerPort: 80

    B: Using the ReplicaSet controller:

    [root@www TestYaml]# cat pp.yaml
    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: myapp
      namespace: default
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          name: myapp-pod
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp-containers
            image: ikubernetes/myapp:v1
    
    [root@www TestYaml]# kubectl get pods
    NAME          READY   STATUS    RESTARTS   AGE
    myapp-7ttch   1/1     Running   0          3m31s
    myapp-8w2f2   1/1     Running   0          3m31s
    Notice how the controller automatically generates pod names from the name defined in the YAML file plus a random suffix
    [root@www TestYaml]# kubectl get rs
    NAME    DESIRED   CURRENT   READY   AGE
    myapp   2         2         2       3m35s
    [root@www TestYaml]# kubectl describe pods myapp-7ttch
    Name:               myapp-7ttch
    Namespace:          default
    Priority:           0
    PriorityClassName:  <none>
    Node:               www.kubernetes.node1.com/192.168.181.140
    Start Time:         Sun, 07 Jul 2019 16:07:42 +0800
    Labels:             app=myapp
    Annotations:        <none>
    Status:             Running
    IP:                 10.244.1.27
    Controlled By:      ReplicaSet/myapp
    Containers:
      myapp-containers:
        Container ID:   docker://17288f7aed7f62a983c35cabfd061a22f94c8e315da475fcfe4b276d49b22e33
        Image:          ikubernetes/myapp:v1
        Image ID:       docker-pullable://ikubernetes/myapp@sha256:9c3dc30b5219788b2b8a4b065f548b922a34479577befb54b03330999d30d513
        Port:           <none>
        Host Port:      <none>
        State:          Running
          Started:      Sun, 07 Jul 2019 16:07:45 +0800
        Ready:          True
        Restart Count:  0
        Environment:    <none>
        Mounts:
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-h5ddf (ro)
    Conditions:
      Type              Status
      Initialized       True
      Ready             True
      ContainersReady   True
      PodScheduled      True
    Volumes:
      default-token-h5ddf:
        Type:        Secret (a volume populated by a Secret)
        SecretName:  default-token-h5ddf
        Optional:    false
    QoS Class:       BestEffort
    Node-Selectors:  <none>
    Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                     node.kubernetes.io/unreachable:NoExecute for 300s
    Events:
      Type    Reason     Age   From                               Message
      ----    ------     ----  ----                               -------
      Normal  Scheduled  16m   default-scheduler                  Successfully assigned default/myapp-7ttch to www.kubernetes.node1.com
      Normal  Pulled     16m   kubelet, www.kubernetes.node1.com  Container image "ikubernetes/myapp:v1" already present on machine
      Normal  Created    16m   kubelet, www.kubernetes.node1.com  Created container myapp-containers
      Normal  Started    16m   kubelet, www.kubernetes.node1.com  Started container myapp-containers
    [root@www TestYaml]# kubectl delete pods myapp-7ttch  the moment we delete pod 7ttch, the controller immediately creates a replacement with the n8lt4 suffix
    pod "myapp-7ttch" deleted
    [root@www ~]# kubectl get pods -w
    NAME          READY   STATUS    RESTARTS   AGE
    myapp-7ttch   1/1     Running   0          18m
    myapp-8w2f2   1/1     Running   0          18m
    myapp-7ttch   1/1     Terminating   0          18m
    myapp-n8lt4   0/1     Pending       0          0s
    myapp-n8lt4   0/1     Pending       0          0s
    myapp-n8lt4   0/1     ContainerCreating   0          0s
    myapp-7ttch   0/1     Terminating         0          18m
    myapp-n8lt4   1/1     Running             0          2s
    myapp-7ttch   0/1     Terminating         0          18m
    myapp-7ttch   0/1     Terminating         0          18m
    What if we create a new pod and give it the same labels as myapp? How will the controller reconcile the replica count?
    [root@www ~]# kubectl get pods --show-labels
    NAME          READY   STATUS    RESTARTS   AGE     LABELS
    myapp-8w2f2   1/1     Running   0          26m     app=myapp
    myapp-n8lt4   1/1     Running   0          7m53s   app=myapp
    [root@www ~]#
    
    [root@www TestYaml]# kubectl create -f pod-test.yaml
    pod/myapp created
    [root@www TestYaml]# kubectl get pods --show-labels
    NAME          READY   STATUS              RESTARTS   AGE   LABELS
    myapp         0/1     ContainerCreating   0          2s    <none>
    myapp-8w2f2   1/1     Running             1          41m   app=myapp
    myapp-n8lt4   1/1     Running             0          22m   app=myapp,time=july
    mypod-g7rgq   1/1     Running             0          10m   app=mypod,time=july
    mypod-z86bg   1/1     Running             0          10m   app=mypod,time=july
    [root@www TestYaml]# kubectl label pods myapp app=myapp   label the newly created pod with app=myapp
    pod/myapp labeled
    [root@www TestYaml]# kubectl get pods --show-labels
    NAME          READY   STATUS        RESTARTS   AGE   LABELS
    myapp         0/1     Terminating   1          53s   app=myapp
    myapp-8w2f2   1/1     Running       1          42m   app=myapp
    myapp-n8lt4   1/1     Running       0          23m   app=myapp,time=july
    mypod-g7rgq   1/1     Running       0          11m   app=mypod,time=july
    mypod-z86bg   1/1     Running       0          11m   app=mypod,time=july
    [root@www TestYaml]# kubectl get pods --show-labels
    NAME          READY   STATUS    RESTARTS   AGE   LABELS
    myapp-8w2f2   1/1     Running   1          42m   app=myapp   as soon as a pod's labels match what the controller selects, it may be killed off as a surplus replica
    myapp-n8lt4   1/1     Running   0          23m   app=myapp,time=july
    mypod-g7rgq   1/1     Running   0          11m   app=mypod,time=july
    mypod-z86bg   1/1     Running   0          11m   app=mypod,time=july
    ReplicaSet example

    One defining trait of a ReplicaSet is that it cares only about the group, never the individual: it rigidly enforces the pod count and labels defined inside it. So when writing a ReplicaSet, make the selector conditions specific enough to avoid the accidental kill shown above.
    When a group of ReplicaSet-created pods serves traffic, remember that a rebuilt pod will come up with a different address. Put a Service in front whose labels match the ReplicaSet's; the Service then tracks the backend pods through its label selector, and address changes no longer interrupt access.
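    A minimal sketch of such a Service, assuming the ReplicaSet example above (the Service name is made up):

    ```yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-svc            # hypothetical name
    spec:
      selector:
        app: myapp               # same label the ReplicaSet selects, so rebuilt pods are picked up automatically
      ports:
      - port: 80
        targetPort: 80
    ```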
    Dynamic manual scaling of a ReplicaSet is also straightforward.

    [root@www TestYaml]# kubectl edit rs myapp  edit opens myapp's live template; simply change the replicas value
    .....
    spec:
      replicas: 5
      selector:
        matchLabels:
          app: myapp
          ........
    replicaset.extensions/myapp edited
    [root@www TestYaml]# kubectl get pods
    NAME          READY   STATUS    RESTARTS   AGE
    myapp-6d4nd   1/1     Running   0          10s
    myapp-8w2f2   1/1     Running   1          73m
    myapp-c85dt   1/1     Running   0          10s
    myapp-n8lt4   1/1     Running   0          54m
    myapp-prdmq   1/1     Running   0          10s
    mypod-g7rgq   1/1     Running   0          42m
    mypod-z86bg   1/1     Running   0          42m
    Scaling a ReplicaSet
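    Besides kubectl edit, the same scaling can be done in a single command with kubectl scale, e.g.:

    ```shell
    # scale the ReplicaSet named myapp up to 5 replicas, then back down to 2
    kubectl scale rs myapp --replicas=5
    kubectl scale rs myapp --replicas=2
    ```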
    [root@www TestYaml]# curl 10.244.2.8
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    [root@www TestYaml]# kubectl edit rs myapp
    .......
     spec:
          containers:
          - image: ikubernetes/myapp:v2  upgraded to the v2 version
            imagePullPolicy: IfNotPresent
        .......
    replicaset.extensions/myapp edited
    NAME    DESIRED   CURRENT   READY   AGE   CONTAINERS         IMAGES                 SELECTOR
    myapp   3         3         3       79m   myapp-containers   ikubernetes/myapp:v2   app=myapp
    The image version is now v2
    [root@www TestYaml]# curl 10.244.2.8
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    But requests still return v1, because the existing pods keep running and were never rebuilt; only rebuilt pod resources will carry v2
    [root@www TestYaml]# kubectl get pods -o wide
    NAME          READY   STATUS    RESTARTS   AGE   IP            NODE                       NOMINATED NODE   READINESS GATES
    myapp-6d4nd   1/1     Running   0          10m   10.244.1.30   www.kubernetes.node1.com   <none>           <none>
    myapp-8w2f2   1/1     Running   1          83m   10.244.2.8    www.kubernetes.node2.com   <none>           <none>
    myapp-n8lt4   1/1     Running   0          64m   10.244.1.28   www.kubernetes.node1.com   <none>           <none>
    mypod-g7rgq   1/1     Running   0          52m   10.244.1.29   www.kubernetes.node1.com   <none>           <none>
    mypod-z86bg   1/1     Running   0          52m   10.244.2.9    www.kubernetes.node2.com   <none>           <none>
    [root@www TestYaml]# curl 10.244.1.30  pod myapp-6d4nd still serves v1
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    [root@www TestYaml]# kubectl delete pods myapp-6d4nd  delete this pod so the controller rebuilds it
    pod "myapp-6d4nd" deleted
    [root@www TestYaml]# kubectl get pods -o wide   the rebuilt pod is myapp-bsdlk
    NAME          READY   STATUS    RESTARTS   AGE   IP            NODE                       NOMINATED NODE   READINESS GATES
    myapp-8w2f2   1/1     Running   1          83m   10.244.2.8    www.kubernetes.node2.com   <none>           <none>
    myapp-bsdlk   1/1     Running   0          17s   10.244.2.16   www.kubernetes.node2.com   <none>           <none>
    myapp-n8lt4   1/1     Running   0          65m   10.244.1.28   www.kubernetes.node1.com   <none>           <none>
    mypod-g7rgq   1/1     Running   0          52m   10.244.1.29   www.kubernetes.node1.com   <none>           <none>
    mypod-z86bg   1/1     Running   0          52m   10.244.2.9    www.kubernetes.node2.com   <none>           <none>
    [root@www TestYaml]# curl 10.244.2.16  the new pod's address now serves v2
    Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
    [root@www TestYaml]# curl 10.244.2.8 pods not yet rebuilt still serve v1
    Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
    Rolling a ReplicaSet to a new version
    [root@www TestYaml]# kubectl delete rs myapp mypod
    replicaset.extensions "myapp" deleted
    replicaset.extensions "mypod" deleted
    Deleting ReplicaSets

    The nice side effect is a smooth transition during the upgrade: we keep a buffer period, and once the users hitting the v2 pods report no problems, we quickly update the remaining v1 pods and publish v2, e.g. via a script. This is a canary release.
    (figure: canary release — image not preserved)

    For some important pods a canary may not be a good update style. We can use a blue-green release instead: create a second group of pods with an identical template and a similar label selector. This situation does affect the access path, though, so the Service must be able to select both the old and the new pod groups at once.

    (figure: blue-green release — image not preserved)

    We can also have a deployment front several Services, with each Service selecting its own pods. Say there are 3 replica pods: shut one down while creating a v2 pod attached to a new Service; part of the user requests are then steered by the
    deployment to the v2 backends of the new Service; then stop another v1 pod while creating another v2 pod, and repeat until every pod resource is updated.

    A deployment by default retains at most 10 old rs (ReplicaSet) revisions under its control; the number can, of course, be adjusted manually.
    deployment also provides declarative configuration: instead of create we create the pods with apply, and resources created this way need no edit session to change the pod template; with patch we can modify the resource's internals directly on the pure command line.
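    For example, patching the deployment used later in this section (mydeploy) could look like this; the values are illustrative:

    ```shell
    # change the replica count without opening an editor
    kubectl patch deployment mydeploy -p '{"spec":{"replicas":5}}'
    # tune the rolling-update granularity the same way
    kubectl patch deployment mydeploy \
      -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'
    ```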
    During an update a deployment also lets us control the pace and the logic.
    Suppose the ReplicaSet on the servers runs 5 pods and those 5 only just satisfy the user traffic. The delete-one-then-rebuild-one approach above is then a poor idea: deleting and creating takes time, and in that window the traffic can overwhelm the surviving pods and crash them.
    Instead we can allow a few temporary extra pods during the rolling update, fully under our control: we set how many pods may exceed the defined replica count and how many may fall below it. If we allow at most 1 extra, the update first starts one new pod, then deletes an old one, then starts another new one, then deletes another old one.
    If there are many pods and one-at-a-time is too slow, we can surge several new ones at once, e.g. create 5 new and delete 5 old, again controlling the update granularity ourselves.
    The "at most N below" style is the mirror image of the "at most N above" one: delete an old pod first, then create its replacement; shrink first, then grow.
    With at most 1 above and at most 1 below on a base of 5, the count stays between 4 and 6, so the update can go add 1 delete 2, then add 2 delete 2.
    With a base of 5, none allowed below and up to 5 allowed above, we simply add 5 new and then delete the 5 old in one go: that is a blue-green deployment.
    The default for all of these update styles is the rolling update.
    Whichever style is used, take readiness and liveness into account, so that an old pod is not deleted before its extra replacement is actually ready.
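    The "at most one extra, none below" rhythm described above maps onto two fields in a Deployment spec fragment:

    ```yaml
    spec:
      replicas: 5
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1            # up to 6 pods may exist during the update: add one new, then kill one old
          maxUnavailable: 0      # never fewer than 5 ready pods, protecting the capacity floor
    ```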

    ♣ 3: Deployment Controller

    A: Deployment controller introduction and basic usage:

    We described several update styles that rely on Deployment above; these are the main fields a Deployment uses:

    [root@www TestYaml]# kubectl explain deploy  (deploy is the shorthand for Deployment)
    KIND:     Deployment
    VERSION:  extensions/v1beta1
    
    DESCRIPTION:
         DEPRECATED - This group version of Deployment is deprecated by
         apps/v1beta2/Deployment. See the release notes for more information.
         Deployment enables declarative updates for Pods and ReplicaSets.
    
    FIELDS:
       apiVersion   <string>
         APIVersion defines the versioned schema of this representation of an
         object. Servers should convert recognized schemas to the latest internal
         value, and may reject unrecognized values. More info:
         https://git.k8s.io/community/contributors/devel/api-conventions.md#resources
    
       kind <string>
         Kind is a string value representing the REST resource this object
         represents. Servers may infer this from the endpoint the client submits
         requests to. Cannot be updated. In CamelCase. More info:
         https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
    
       metadata     <Object>
         Standard object metadata.
    
       spec <Object>
         Specification of the desired behavior of the Deployment.
    
       status       <Object>
         Most recently observed status of the Deployment.
    
    The top-level field names are the same as a ReplicaSet's. Note that the group VERSION: extensions/v1beta1 is special: the built-in docs lag behind the actual release, and Deployment has since moved to another group;
    apps/v1beta2/Deployment belongs to the apps group (apps/v1 in current releases)
    
    [root@www TestYaml]# kubectl explain deploy.spec  the spec fields, again, differ little from a ReplicaSet's.
    KIND:     Deployment
    VERSION:  extensions/v1beta1
    
    RESOURCE: spec <Object>
    
    DESCRIPTION:
         Specification of the desired behavior of the Deployment.
    
         DeploymentSpec is the specification of the desired behavior of the
         Deployment.
    
    FIELDS:
       minReadySeconds      <integer>
         Minimum number of seconds for which a newly created pod should be ready
         without any of its container crashing, for it to be considered available.
         Defaults to 0 (pod will be considered available as soon as it is ready)
    
       paused       <boolean>
         Indicates that the deployment is paused and will not be processed by the
         deployment controller.
    
       progressDeadlineSeconds      <integer>
         The maximum time in seconds for a deployment to make progress before it is
         considered to be failed. The deployment controller will continue to process
         failed deployments and a condition with a ProgressDeadlineExceeded reason
         will be surfaced in the deployment status. Note that progress will not be
         estimated during the time a deployment is paused. This is set to the max
         value of int32 (i.e. 2147483647) by default, which means "no deadline".
    
       replicas     <integer>
         Number of desired pods. This is a pointer to distinguish between explicit
         zero and not specified. Defaults to 1.
    
       revisionHistoryLimit <integer>
         The number of old ReplicaSets to retain to allow rollback. This is a
         pointer to distinguish between explicit zero and not specified. This is set
         to the max value of int32 (i.e. 2147483647) by default, which means
         "retaining all old RelicaSets".
    
       rollbackTo   <Object>
         DEPRECATED. The config this deployment is rolling back to. Will be cleared
         after rollback is done.
    
       selector     <Object>
         Label selector for pods. Existing ReplicaSets whose pods are selected by
         this will be the ones affected by this deployment.
    
       strategy     <Object>
         The deployment strategy to use to replace existing pods with new ones.
    
       template     <Object> -required-
         Template describes the pods that will be created.
    Besides the fields it shares with ReplicaSet, there are several important additions, notably strategy (defines the update strategy)
    Update strategies supported by strategy:
    [root@www TestYaml]# kubectl explain deploy.spec.strategy
    KIND:     Deployment
    VERSION:  extensions/v1beta1
    
    RESOURCE: strategy <Object>
    
    DESCRIPTION:
         The deployment strategy to use to replace existing pods with new ones.
    
         DeploymentStrategy describes how to replace existing pods with new ones.
    
    FIELDS:
       rollingUpdate        <Object>
         Rolling update config params. Present only if DeploymentStrategyType =
         RollingUpdate.
    
       type <string>
         Type of deployment. Can be "Recreate" or "RollingUpdate". Default is
         RollingUpdate.
    1: Recreate (recreate-style update: the existing pods are killed and new ones created in their place; with this type the rollingUpdate parameters have no effect)
    2: RollingUpdate (rolling update; only when type is RollingUpdate can the rollingUpdate block below be used)
    rollingUpdate (its whole job is defining the update granularity)
    [root@www TestYaml]# kubectl explain deploy.spec.strategy.rollingUpdate
    KIND:     Deployment
    VERSION:  extensions/v1beta1
    
    RESOURCE: rollingUpdate <Object>
    
    DESCRIPTION:
         Rolling update config params. Present only if DeploymentStrategyType =
         RollingUpdate.
    
         Spec to control the desired behavior of rolling update.
    
    FIELDS:
       maxSurge (how many pods may temporarily exceed the defined target replica count during an update)   <string>
         The maximum number of pods that can be scheduled above the desired number
         of pods. Value can be an absolute number (ex: 5) or a percentage of desired
         pods (ex: 10%). This can not be 0 if MaxUnavailable is 0. Absolute number
         is calculated from percentage by rounding up. By default, a value of 1 is
         used. Example: when this is set to 30%, the new RC can be scaled up
         immediately when the rolling update starts, such that the total number of
         old and new pods do not exceed 130% of desired pods. Once old pods have
         been killed, new RC can be scaled up further, ensuring that total number of
         pods running at any time during the update is at most 130% of desired pods.
         maxSurge takes two kinds of values: an absolute number (ex: 5), or a percentage of desired pods (ex: 10%)
       maxUnavailable (how many pods may be unavailable at most during the update)     <string>
         The maximum number of pods that can be unavailable during the update. Value
         can be an absolute number (ex: 5) or a percentage of desired pods (ex:
         10%). Absolute number is calculated from percentage by rounding down. This
         can not be 0 if MaxSurge is 0. By default, a fixed value of 1 is used.
         Example: when this is set to 30%, the old RC can be scaled down to 70% of
         desired pods immediately when the rolling update starts. Once new pods are
         ready, old RC can be scaled down further, followed by scaling up the new
         RC, ensuring that the total number of pods available at all times during
         the update is at least 70% of desired pods.
    If both fields were set to 0, nothing could ever be updated, so at most one of the two may be 0 while the other carries a number
    
    revisionHistoryLimit (how many historical versions are kept after rolling updates, so that we can roll back)
    [root@www TestYaml]# kubectl explain deploy.spec.revisionHistoryLimit
    KIND:     Deployment
    VERSION:  extensions/v1beta1
    
    FIELD:    revisionHistoryLimit <integer>
    
    DESCRIPTION:
         The number of old ReplicaSets to retain to allow rollback. This is a
         pointer to distinguish between explicit zero and not specified. This is set
         to the max value of int32 (i.e. 2147483647) by default, which means
         "retaining all old RelicaSets".
         in apps/v1 the default is 10
    
    paused (after triggering a rolling update, if we do not want it to proceed immediately, paused lets us hold it for a while; the default is to not pause)
    [root@www TestYaml]# kubectl explain deploy.spec.paused
    KIND:     Deployment
    VERSION:  extensions/v1beta1
    
    FIELD:    paused <boolean>
    
    DESCRIPTION:
         Indicates that the deployment is paused and will not be processed by the
         deployment controller.
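    In practice, pausing and resuming a rollout is driven from the command line; a sketch using the mydeploy example from below:

    ```shell
    kubectl rollout pause deployment mydeploy     # hold the rollout, e.g. to canary the first new pod
    kubectl rollout resume deployment mydeploy    # let the remaining pods update
    kubectl rollout status deployment mydeploy    # watch the progress
    ```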
    
    template (the Deployment drives its ReplicaSet to create pods from this template automatically)
    [root@www TestYaml]# kubectl explain deploy.spec.template
    KIND:     Deployment
    VERSION:  extensions/v1beta1
    
    RESOURCE: template <Object>
    
    DESCRIPTION:
         Template describes the pods that will be created.
    
         PodTemplateSpec describes the data a pod should have when created from a
         template
    
    FIELDS:
       metadata     <Object>
         Standard object's metadata. More info:
         https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
    
       spec <Object>
         Specification of the desired behavior of the pod. More info:
         https://git.k8s.io/community/contributors/devel/api-conventions.md#spec-and-status
    Deployment field reference
    [root@www TestYaml]# cat deploy.test.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mydeploy
      namespace: default
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: mydeploy
          release: Internal-measurement
      template:
        metadata:
          labels:
            app: mydeploy
            release: Internal-measurement
        spec:
          containers:
          - name: myapp-containers
            image: ikubernetes/myapp:v1
    
    [root@www TestYaml]# kubectl apply -f deploy.test.yaml  this time the pod resources are created declaratively with apply rather than with create
    deployment.apps/mydeploy created
    [root@www TestYaml]# kubectl get deploy  
    NAME       READY   UP-TO-DATE   AVAILABLE   AGE
    mydeploy   2/2     2            2           2m
    [root@www TestYaml]# kubectl get pods
    NAME                        READY   STATUS    RESTARTS   AGE
    mydeploy-74b7786d9b-kq88g   1/1     Running   0          2m4s
    mydeploy-74b7786d9b-mp2mb   1/1     Running   0          2m4s
    [root@www TestYaml]# kubectl get rs  creating the deployment automatically created an rs resource, and the naming alone reveals the deployment → rs → pod relationship
    NAME                  DESIRED   CURRENT   READY   AGE
    mydeploy-74b7786d9b   2         2         2       2m40s
    [root@www TestYaml]#
    The deployment's name is mydeploy, the rs's name is mydeploy-74b7786d9b (note the random-looking string: it is a hash of the pod template), and the pod's name is mydeploy-74b7786d9b-kq88g
    So the rs and the pods are created automatically under the deployment's control
    Deployment example
    Scaling a deployment differs from scaling an rs: we simply modify the YAML template and declare it with apply to achieve the scaling.
    [root@www TestYaml]# cat deploy.test.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: mydeploy
      namespace: default
    spec:
      replicas: 3  # bumped straight to three
      selector:
        matchLabels:
          app: mydeploy
          release: Internal-measurement
      template:
        metadata:
          labels:
            app: mydeploy
            release: Internal-measurement
        spec:
          containers:
          - name: myapp-containers
            image: ikubernetes/myapp:v1
    [root@www TestYaml]# kubectl get pods
    NAME                        READY   STATUS    RESTARTS   AGE
    mydeploy-74b7786d9b-4bcln   1/1     Running   0          7s  a new pod resource was added immediately
    mydeploy-74b7786d9b-kq88g   1/1     Running   0          13m
    mydeploy-74b7786d9b-mp2mb   1/1     Running   0          13m
    [root@www TestYaml]# kubectl get deploy
    NAME       READY   UP-TO-DATE   AVAILABLE   AGE
    mydeploy   3/3     3            3           14m
    [root@www TestYaml]# kubectl get rs
    NAME                  DESIRED   CURRENT   READY   AGE
    mydeploy-74b7786d9b   3         3         3       14m
    The deployment and rs status counts follow the change
    After modifying the template we declare the change with apply; the change is persisted through the apiserver into etcd, and the downstream nodes are then notified to make the corresponding change
    [root@www TestYaml]# kubectl describe deploy mydeploy
    Name:                   mydeploy
    Namespace:              default
    CreationTimestamp:      Sun, 07 Jul 2019 21:31:01 +0800
    Labels:                 <none>
    Annotations:            deployment.kubernetes.io/revision: 1   every change is recorded in the annotations, maintained automatically
                            kubectl.kubernetes.io/last-applied-configuration:
                              {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"mydeploy","namespace":"default"},"spec":{"replicas":3,"se...
    Selector:               app=mydeploy,release=Internal-measurement
    Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
    StrategyType:           RollingUpdate  the default update strategy is the rolling update
    MinReadySeconds:        0
    RollingUpdateStrategy:  25% max unavailable, 25% max surge  both bounds default to 25% here
    Pod Template:
      Labels:  app=mydeploy
               release=Internal-measurement
      Containers:
       myapp-containers:
        Image:        ikubernetes/myapp:v1
        Port:         <none>
        Host Port:    <none>
        Environment:  <none>
        Mounts:       <none>
      Volumes:        <none>
    Conditions:
      Type           Status  Reason
      ----           ------  ------
      Progressing    True    NewReplicaSetAvailable
      Available      True    MinimumReplicasAvailable
    OldReplicaSets:  <none>
    NewReplicaSet:   mydeploy-74b7786d9b (3/3 replicas created)
    Events:
      Type    Reason             Age    From                   Message
      ----    ------             ----   ----                   -------
      Normal  ScalingReplicaSet  17m    deployment-controller  Scaled up replica set mydeploy-74b7786d9b to 2
      Normal  ScalingReplicaSet  3m42s  deployment-controller  Scaled up replica set mydeploy-74b7786d9b to 3
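The two 25% bounds above are resolved against the desired replica count: maxSurge is rounded up, maxUnavailable is rounded down. A quick sketch of that arithmetic for the 3 replicas here, runnable in plain shell with no cluster:

```shell
# Resolve the default 25%/25% rolling-update bounds for 3 replicas.
replicas=3
percent=25
# maxSurge rounds up: ceil(3 * 0.25) = 1 extra pod may be created
max_surge=$(( (replicas * percent + 99) / 100 ))
# maxUnavailable rounds down: floor(3 * 0.25) = 0 pods may be missing
max_unavailable=$(( replicas * percent / 100 ))
echo "maxSurge=$max_surge maxUnavailable=$max_unavailable"
```

So with 3 replicas the rollout may briefly run 4 pods but never fewer than 3, which matches the create-first, terminate-second behavior of the rolling update.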
    Updating a Deployment is simple: for a pure image change you can use set image directly, or modify the manifest and re-apply it.
    [root@www TestYaml]# cat deploy.test.yaml
    .......
        spec:
          containers:
          - name: myapp-containers
        image: ikubernetes/myapp:v2  upgrade to the v2 image
    
    [root@www TestYaml]# kubectl apply -f deploy.test.yaml
    deployment.apps/mydeploy configured
    [root@www ~]# kubectl get pods -w
    NAME                        READY   STATUS    RESTARTS   AGE
    mydeploy-74b7786d9b-8jjvv   1/1     Running   0          82s
    mydeploy-74b7786d9b-mp84r   1/1     Running   0          84s
    mydeploy-74b7786d9b-qdzc5   1/1     Running   0          86s
    mydeploy-6fbdd45d4c-kbcmh   0/1     Pending   0          0s   the update logic creates one new pod first
    mydeploy-6fbdd45d4c-kbcmh   0/1     Pending   0          0s
    mydeploy-6fbdd45d4c-kbcmh   0/1     ContainerCreating   0          0s  then terminates an old one, round after round until all pods are replaced
    mydeploy-6fbdd45d4c-kbcmh   1/1     Running             0          1s
    mydeploy-74b7786d9b-8jjvv   1/1     Terminating         0          99s
    mydeploy-6fbdd45d4c-qqgb8   0/1     Pending             0          0s
    mydeploy-6fbdd45d4c-qqgb8   0/1     Pending             0          0s
    mydeploy-6fbdd45d4c-qqgb8   0/1     ContainerCreating   0          0s
    mydeploy-74b7786d9b-8jjvv   0/1     Terminating         0          100s
    mydeploy-6fbdd45d4c-qqgb8   1/1     Running             0          1s
    mydeploy-74b7786d9b-mp84r   1/1     Terminating         0          102s
    mydeploy-6fbdd45d4c-ng99s   0/1     Pending             0          0s
    mydeploy-6fbdd45d4c-ng99s   0/1     Pending             0          0s
    mydeploy-6fbdd45d4c-ng99s   0/1     ContainerCreating   0          0s
    mydeploy-74b7786d9b-mp84r   0/1     Terminating         0          103s
    mydeploy-6fbdd45d4c-ng99s   1/1     Running             0          2s
    mydeploy-74b7786d9b-qdzc5   1/1     Terminating         0          106s
    mydeploy-74b7786d9b-qdzc5   0/1     Terminating         0          107s
    mydeploy-74b7786d9b-qdzc5   0/1     Terminating         0          113s
    mydeploy-74b7786d9b-qdzc5   0/1     Terminating         0          113s
    mydeploy-74b7786d9b-8jjvv   0/1     Terminating         0          109s
    mydeploy-74b7786d9b-8jjvv   0/1     Terminating         0          109s
    mydeploy-74b7786d9b-mp84r   0/1     Terminating         0          113s
    mydeploy-74b7786d9b-mp84r   0/1     Terminating         0          113s
    The rollout completes automatically; we only had to specify the new image version.
    [root@www TestYaml]# kubectl get rs -o wide
    NAME                  DESIRED   CURRENT   READY   AGE   CONTAINERS         IMAGES                 SELECTOR
    mydeploy-6fbdd45d4c   3         3         3       25m   myapp-containers   ikubernetes/myapp:v2   app=mydeploy,pod-template-hash=6fbdd45d4c,release=Internal-measurement
    mydeploy-74b7786d9b   0         0         0       33m   myapp-containers   ikubernetes/myapp:v1   app=mydeploy,pod-template-hash=74b7786d9b,release=Internal-measurement
    We now have two image versions: three pods run v2 and none run v1, and the two templates carry nearly identical labels. The old ReplicaSet is kept around so we can roll back at any time.
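How many of these old ReplicaSets are retained is controlled by the Deployment's revisionHistoryLimit field, which defaults to 10. A minimal manifest fragment pinning it (the value 5 is only an example):

```yaml
# Fragment of a Deployment spec: keep at most 5 old ReplicaSets for rollback
spec:
  revisionHistoryLimit: 5
  replicas: 3
```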
    [root@www TestYaml]# kubectl rollout history deployment mydeploy  rollout history shows the number of rollouts and their traces (CHANGE-CAUSE is <none> because --record was not used)
    deployment.extensions/mydeploy
    REVISION  CHANGE-CAUSE
    3         <none>
    4         <none>
    
    [root@www TestYaml]# kubectl rollout undo deployment mydeploy  rollout undo rolls back using the retained old template; the rollback follows the same logic as an upgrade, adding one new pod and stopping one old pod at a time
    deployment.extensions/mydeploy rolled back
    [root@www TestYaml]# kubectl get rs -o wide
    NAME                  DESIRED   CURRENT   READY   AGE   CONTAINERS         IMAGES                 SELECTOR
    mydeploy-6fbdd45d4c   0         0         0       34m   myapp-containers   ikubernetes/myapp:v2   app=mydeploy,pod-template-hash=6fbdd45d4c,release=Internal-measurement
    mydeploy-74b7786d9b   3         3         3       41m   myapp-containers   ikubernetes/myapp:v1   app=mydeploy,pod-template-hash=74b7786d9b,release=Internal-measurement
    [root@www TestYaml]#
    The v1 version is back.
    Scaling a Deployment
    [root@www TestYaml]# kubectl patch --help
    Update field(s) of a resource using strategic merge patch, a JSON merge patch, or a JSON patch.
    
     JSON and YAML formats are accepted.
    
    Examples:
      # Partially update a node using a strategic merge patch. Specify the patch as JSON.
      kubectl patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}'
    
      # Partially update a node using a strategic merge patch. Specify the patch as YAML.
      kubectl patch node k8s-node-1 -p $'spec:
     unschedulable: true'
    
      # Partially update a node identified by the type and name specified in "node.json" using strategic merge patch.
      kubectl patch -f node.json -p '{"spec":{"unschedulable":true}}'
    
      # Update a container's image; spec.containers[*].name is required because it's a merge key.
      kubectl patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}'
    
      # Update a container's image using a json patch with positional arrays.
      kubectl patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new
    image"}]'
    
    Options:
          --allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in
    the template. Only applies to golang and jsonpath output formats.
          --dry-run=false: If true, only print the object that would be sent, without sending it.
      -f, --filename=[]: Filename, directory, or URL to files identifying the resource to update
      -k, --kustomize='': Process the kustomization directory. This flag can't be used together with -f or -R.
          --local=false: If true, patch will operate on the content of the file, not the server-side resource.
      -o, --output='': Output format. One of:
    json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file.
      -p, --patch='': The patch to be applied to the resource JSON file.
          --record=false: Record current kubectl command in the resource annotation. If set to false, do not record the
    command. If set to true, record the command. If not set, default to updating the existing annotation value only if one
    already exists.
      -R, --recursive=false: Process the directory used in -f, --filename recursively. Useful when you want to manage
    related manifests organized within the same directory.
          --template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. The
    template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
          --type='strategic': The type of patch being provided; one of [json merge strategic]
    
    Usage:
      kubectl patch (-f FILENAME | TYPE NAME) -p PATCH [options]
    
    Use "kubectl options" for a list of global command-line options (applies to all commands).
    patch can do more than scaling; it can modify other fields as well
    [root@www TestYaml]# kubectl patch deployment mydeploy -p '{"spec":{"replicas":5}}'
    the -p option specifies changes to fields nested one or more levels deep; note that the whole patch is wrapped in single quotes while the JSON field names inside must use double quotes
    deployment.extensions/mydeploy patched
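The quoting rule can be checked locally without a cluster: the shell's single quotes deliver the JSON untouched, and every key inside keeps its mandatory double quotes. A small sketch:

```shell
# The patch kubectl receives is this exact JSON string.
patch='{"spec":{"replicas":5}}'
echo "$patch"
# Deeper fields nest one JSON object per level, same quoting throughout.
deep='{"spec":{"strategy":{"rollingUpdate":{"maxUnavailable":0}}}}'
echo "$deep"
```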
    [root@www ~]# kubectl get pods -w
    NAME                        READY   STATUS    RESTARTS   AGE
    mydeploy-74b7786d9b-qnqg2   1/1     Running   0          8m41s
    mydeploy-74b7786d9b-tz6xk   1/1     Running   0          8m43s
    mydeploy-74b7786d9b-vt659   1/1     Running   0          8m45s
    mydeploy-74b7786d9b-hlwbp   0/1     Pending   0          0s
    mydeploy-74b7786d9b-hlwbp   0/1     Pending   0          0s
    mydeploy-74b7786d9b-zpcxb   0/1     Pending   0          0s
    mydeploy-74b7786d9b-zpcxb   0/1     Pending   0          0s
    mydeploy-74b7786d9b-hlwbp   0/1     ContainerCreating   0          0s
    mydeploy-74b7786d9b-zpcxb   0/1     ContainerCreating   0          0s
    mydeploy-74b7786d9b-hlwbp   1/1     Running             0          2s
    mydeploy-74b7786d9b-zpcxb   1/1     Running             0          2s
    You can watch the scale-up. Since we rolled back earlier while the manifest still defines v2, you might expect three v1 pods and two v2 pods
    [root@www TestYaml]# kubectl get rs -o wide
    NAME                  DESIRED   CURRENT   READY   AGE   CONTAINERS         IMAGES                 SELECTOR
    mydeploy-6fbdd45d4c   0         0         0       45m   myapp-containers   ikubernetes/myapp:v2   app=mydeploy,pod-template-hash=6fbdd45d4c,release=Internal-measurement
    mydeploy-74b7786d9b   5         5         5       52m   myapp-containers   ikubernetes/myapp:v1   app=mydeploy,pod-template-hash=74b7786d9b,release=Internal-measurement
    but that is not actually the case: patching a single field leaves every other field untouched, so all five pods stay on v1 unless the image is also changed to v2.
    The advantage of patch is that you can change a few field values without editing the YAML template; it is a poor fit for changing many fields at once, because the command-line structure becomes complex.
    [root@www TestYaml]# kubectl patch deployment mydeploy -p '{"spec":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'  for example, setting maxUnavailable=0 and maxSurge=1 already makes the JSON unwieldy; for many values, apply is more convenient (note this command has an extra closing brace, and for a Deployment rollingUpdate actually lives under spec.strategy, which is why the next line reports no change)
    deployment.extensions/mydeploy patched (no change)
    [root@www TestYaml]# kubectl set image deployment mydeploy myapp-containers=ikubernetes/myapp:v2 && kubectl rollout pause deployment mydeploy  update the image with set image, then pause the rollout right after it starts
    deployment.extensions/mydeploy image updated
    deployment.extensions/mydeploy paused  the rollout pauses after updating one pod
    [root@www ~]# kubectl get pods -w
    NAME                        READY   STATUS    RESTARTS   AGE
    mydeploy-74b7786d9b-hlwbp   1/1     Running   0          30m
    mydeploy-74b7786d9b-qnqg2   1/1     Running   0          40m
    mydeploy-74b7786d9b-tz6xk   1/1     Running   0          40m
    mydeploy-74b7786d9b-vt659   1/1     Running   0          40m
    mydeploy-74b7786d9b-zpcxb   1/1     Running   0          30m
    mydeploy-6fbdd45d4c-phcp4   0/1     Pending   0          0s
    mydeploy-6fbdd45d4c-phcp4   0/1     Pending   0          0s
    mydeploy-74b7786d9b-hlwbp   1/1     Terminating   0          33m
    mydeploy-6fbdd45d4c-wllm7   0/1     Pending       0          0s
    mydeploy-6fbdd45d4c-wllm7   0/1     Pending       0          0s
    mydeploy-6fbdd45d4c-wllm7   0/1     ContainerCreating   0          0s
    mydeploy-6fbdd45d4c-dc84z   0/1     Pending             0          0s
    mydeploy-6fbdd45d4c-dc84z   0/1     Pending             0          0s
    mydeploy-6fbdd45d4c-phcp4   0/1     ContainerCreating   0          0s
    mydeploy-6fbdd45d4c-dc84z   0/1     ContainerCreating   0          0s
    mydeploy-74b7786d9b-hlwbp   0/1     Terminating         0          33m
    mydeploy-6fbdd45d4c-wllm7   1/1     Running             0          2s
    mydeploy-6fbdd45d4c-phcp4   1/1     Running             0          3s
    mydeploy-6fbdd45d4c-dc84z   1/1     Running             0          3s
    mydeploy-74b7786d9b-hlwbp   0/1     Terminating         0          33m
    mydeploy-74b7786d9b-hlwbp   0/1     Terminating         0          33m
    [root@www TestYaml]# kubectl rollout status deployment mydeploy  rollout status can also be used to monitor the update progress
    Waiting for deployment "mydeploy" rollout to finish: 3 out of 5 new replicas have been updated...
    Because we paused earlier, the rollout stops after updating a few pods. If the new version has been running for hours with no user-reported problems and we want to update the rest, resume continues the rollout
    [root@www ~]# kubectl rollout resume deployment mydeploy  continue the paused rollout
    deployment.extensions/mydeploy resumed
    [root@www TestYaml]# kubectl rollout status deployment mydeploy
    Waiting for deployment "mydeploy" rollout to finish: 3 out of 5 new replicas have been updated...
    Waiting for deployment spec update to be observed...
    Waiting for deployment spec update to be observed...
    Waiting for deployment "mydeploy" rollout to finish: 3 out of 5 new replicas have been updated...
    Waiting for deployment "mydeploy" rollout to finish: 3 out of 5 new replicas have been updated...
    Waiting for deployment "mydeploy" rollout to finish: 1 old replicas are pending termination...
    Waiting for deployment "mydeploy" rollout to finish: 1 old replicas are pending termination...
    Waiting for deployment "mydeploy" rollout to finish: 1 old replicas are pending termination...
    Waiting for deployment "mydeploy" rollout to finish: 4 of 5 updated replicas are available...
    deployment "mydeploy" successfully rolled out
    Everything is now updated; this is a canary release.
    Updating with a patch
    [root@www TestYaml]# kubectl rollout undo --help
    Rollback to a previous rollout.
    
    Examples:
      # Rollback to the previous deployment
      kubectl rollout undo deployment/abc
    
      # Rollback to daemonset revision 3
      kubectl rollout undo daemonset/abc --to-revision=3  you can roll back to a specific revision
    
      # Rollback to the previous deployment with dry-run
      kubectl rollout undo --dry-run=true deployment/abc  without --to-revision the default is the previous revision
    
    Options:
          --allow-missing-template-keys=true: If true, ignore any errors in templates when a field or map key is missing in
    the template. Only applies to golang and jsonpath output formats.
          --dry-run=false: If true, only print the object that would be sent, without sending it.
      -f, --filename=[]: Filename, directory, or URL to files identifying the resource to get from a server.
      -k, --kustomize='': Process the kustomization directory. This flag can't be used together with -f or -R.
      -o, --output='': Output format. One of:
    json|yaml|name|go-template|go-template-file|template|templatefile|jsonpath|jsonpath-file.
      -R, --recursive=false: Process the directory used in -f, --filename recursively. Useful when you want to manage
    related manifests organized within the same directory.
          --template='': Template string or path to template file to use when -o=go-template, -o=go-template-file. The
    template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].
          --to-revision=0: The revision to rollback to. Default to 0 (last revision).
    
    Usage:
      kubectl rollout undo (TYPE NAME | TYPE/NAME) [flags] [options]
    
    Use "kubectl options" for a list of global command-line options (applies to all commands).
    [root@www TestYaml]# kubectl rollout undo deployment mydeploy --to-revision=1  quickly roll back to a specific revision
    Using kubectl rollout undo

    ♣四:DaemonSet Controller

    A:Introduction to the DaemonSet controller:

    A DaemonSet runs exactly one copy of a given pod on every node in the cluster, or only on the nodes matched by its selector (for example, some machines are physical and some are virtual and run different programs, so a selector chooses where the pod runs).
    It can also map host directories into the pod to implement node-level functions.

    [root@www TestYaml]# kubectl explain ds.spec   (DaemonSet is abbreviated ds; like other resources it is defined with the five top-level fields)
    KIND:     DaemonSet
    VERSION:  extensions/v1beta1
    
    RESOURCE: spec <Object>
    
    DESCRIPTION:
         The desired behavior of this daemon set. More info:
         https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
    
         DaemonSetSpec is the specification of a daemon set.
    
    FIELDS:
       minReadySeconds      <integer>
         The minimum number of seconds for which a newly created DaemonSet pod
         should be ready without any of its container crashing, for it to be
         considered available. Defaults to 0 (pod will be considered available as
         soon as it is ready).
    
       revisionHistoryLimit (number of old revisions to retain) <integer>
         The number of old history to retain to allow rollback. This is a pointer to
         distinguish between explicit zero and not specified. Defaults to 10.
    
       selector     <Object>
         A label query over pods that are managed by the daemon set. Must match in
         order to be controlled. If empty, defaulted to labels on Pod template. More
         info:
         https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors
    
       template     <Object> -required-
         An object that describes the pod that will be created. The DaemonSet will
         create exactly one copy of this pod on every node that matches the
         template's node selector (or on every node if no node selector is
         specified). More info:
         https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#pod-template
    
       templateGeneration   <integer>
         DEPRECATED. A sequence number representing a specific generation of the
         template. Populated by the system. It can be set only during the creation.
    
       updateStrategy (update strategy)     <Object>
         An update strategy to replace existing DaemonSet pods with new pods.
    DaemonSet field reference

    B:Simple use of the DaemonSet controller:

    [root@www TestYaml]# cat ds.test.yaml
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: myds
      namespace: default
    spec:
      selector:
        matchLabels:
          app: myds
          release: Only
      template:
        metadata:
          labels:
            app: myds
            release: Only
        spec:
          containers:
          - name: mydaemonset
            image: ikubernetes/filebeat:5.6.5-alpine
        env:   # filebeat needs the redis address and log level at startup; they cannot be passed in afterwards, so we define them up front
        - name: REDIS_HOST
          value: redis.default.svc.cluster.local   # service name + namespace (default) + cluster domain
        - name: REDIS_LOG
          value: info   # log level set to info
    [root@www TestYaml]# kubectl get ds
    NAME   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    myds   2         2         1       2            1           <none>          4m28s
    [root@www TestYaml]# kubectl get pods
    NAME         READY   STATUS             RESTARTS   AGE
    myds-9kt2j   0/1     ImagePullBackOff   0          2m18s
    myds-jt8kd   1/1     Running            0          2m14s
    [root@www TestYaml]# kubectl get pods -o wide
    NAME         READY   STATUS             RESTARTS   AGE     IP            NODE                       NOMINATED NODE   READINESS GATES
    myds-9kt2j   0/1     ImagePullBackOff   0          2m24s   10.244.1.43   www.kubernetes.node1.com   <none>           <none>
    myds-jt8kd   1/1     Running            0          2m20s   10.244.2.30   www.kubernetes.node2.com   <none>           <none>
    Across the two nodes exactly two pods are running, one per node, no more and no fewer; no matter how we define it, each node runs at most one pod controlled by a given DaemonSet.
    Filebeat log-collection example
    [root@www TestYaml]# cat ds.test.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: redis
      namespace: default
    spec:
      replicas: 1
      selector:
        matchLabels:
           app: redis
           role: loginfo
      template:
         metadata:
            labels:
              app: redis
              role: loginfo
         spec:
           containers:
           - name: redis
             image: redis:4.0-alpine
             ports:
             - name: redis
               containerPort: 6379
    ---   # multiple resource definitions can share one YAML file, separated by ---; this is best reserved for related resources, and unrelated ones should stay in separate files
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: myds
      namespace: default
    spec:
      selector:
        matchLabels:
          app: myds
          release: Only
      template:
        metadata:
          labels:
            app: myds
            release: Only
        spec:
          containers:
          - name: mydaemonset
            image: ikubernetes/filebeat:5.6.5-alpine
            env:
            - name: REDIS_HOST
              value: redis.default.svc.cluster.local
            - name: REDIS_LOG
              value: info
    With this manifest, filebeat collects the redis logs.
    Combining multiple resources in one YAML file
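Note that REDIS_HOST is the DNS name redis.default.svc.cluster.local, which only resolves inside the cluster if a Service named redis exists in the default namespace. The manifests above do not show one, so here is a hypothetical minimal Service whose selector mirrors the redis Deployment:

```yaml
# Hypothetical Service so that redis.default.svc.cluster.local resolves
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: default
spec:
  selector:
    app: redis
    role: loginfo
  ports:
  - port: 6379
    targetPort: 6379
```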
    [root@www TestYaml]# kubectl explain ds.spec.updateStrategy
    KIND:     DaemonSet
    VERSION:  extensions/v1beta1
    
    RESOURCE: updateStrategy <Object>
    
    DESCRIPTION:
         An update strategy to replace existing DaemonSet pods with new pods.
    
    FIELDS:
       rollingUpdate        <Object>
         Rolling update config params. Present only if type = "RollingUpdate".
    
       type <string>  two update types are supported: rolling update, or update only when the old pod is deleted
         Type of daemon set update. Can be "RollingUpdate" or "OnDelete". Default is
         OnDelete.
    
    
    The rollingUpdate strategy
    [root@www TestYaml]# kubectl explain ds.spec.updateStrategy.rollingUpdate
    KIND:     DaemonSet
    VERSION:  extensions/v1beta1
    
    RESOURCE: rollingUpdate <Object>
    
    DESCRIPTION:
         Rolling update config params. Present only if type = "RollingUpdate".
    
         Spec to control the desired behavior of daemon set rolling update.
    
    FIELDS:
       maxUnavailable       <string>  a DaemonSet can only delete first and then create when updating, because each node runs a single pod; this number is relative to the node count and controls how many nodes' pods are updated at a time
         The maximum number of DaemonSet pods that can be unavailable during the
         update. Value can be an absolute number (ex: 5) or a percentage of total
         number of DaemonSet pods at the start of the update (ex: 10%). Absolute
         number is calculated from percentage by rounding up. This cannot be 0.
         Default value is 1. Example: when this is set to 30%, at most 30% of the
         total number of nodes that should be running the daemon pod (i.e.
         status.desiredNumberScheduled) can have their pods stopped for an update at
         any given time. The update starts by stopping at most 30% of those
         DaemonSet pods and then brings up new DaemonSet pods in their place. Once
         the new pods are available, it then proceeds onto other DaemonSet pods,
         thus ensuring that at least 70% of original number of DaemonSet pods are
         available at all times during the update.
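Putting this into a manifest, an explicit update strategy for the DaemonSet might look like the following sketch of a spec fragment (maxUnavailable: 1 is the default anyway):

```yaml
# Fragment of a DaemonSet spec: delete-then-create, one node's pod at a time
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # absolute number; a percentage such as "10%" also works
```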
    
    [root@www TestYaml]# kubectl set image --help
    Update existing container image(s) of resources.
    
     Possible resources include (case insensitive):
    
      pod (po), replicationcontroller (rc), deployment (deploy), daemonset (ds), replicaset (rs)
    Controller types that set image can currently update
    [root@www TestYaml]# kubectl set image daemonsets myds mydaemonset=ikubernetes/filebeat:5.6.6-alpine
    daemonset.extensions/myds image updated
    [root@www TestYaml]# kubectl get ds
    NAME   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    myds   2         2         1       0            1           <none>          19m
    [root@www TestYaml]# kubectl get pods
    NAME                    READY   STATUS              RESTARTS   AGE
    myds-lmw5d              0/1     ContainerCreating   0          7s
    myds-mhw89              1/1     Running             0          19m
    redis-fdc8c666b-spqlc   1/1     Running             0          19m
    [root@www TestYaml]# kubectl get pods -w
    NAME                    READY   STATUS              RESTARTS   AGE
    myds-lmw5d              0/1     ContainerCreating   0          15s  during the update one pod is stopped first, then the new image is pulled to replace it
    myds-mhw89              1/1     Running             0          19m
    redis-fdc8c666b-spqlc   1/1     Running             0          19m
    .......
    myds-546lq              1/1     Running             0          46s  the update has completed
    Rolling update of a DaemonSet

    C:Shared pod fields:

    A container can share the host's network namespace, in which case the ports it listens on are bound directly on the host
    [root@www TestYaml]# kubectl explain pod.spec.hostNetwork
    KIND: Pod
    VERSION: v1

    FIELD: hostNetwork <boolean>

    DESCRIPTION:
    Host networking requested for this pod. Use the host's network namespace.
    If this option is set, the ports that will be used must be specified.
    Default to false.
    Since the pod uses the host's network namespace directly, a DaemonSet created with this option can be reached via the node IP itself, with no need to expose a port through a Service.
    Other shareable fields include hostPID and hostIPC.
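A sketch of turning this on in a DaemonSet's pod template; the container name and port are only illustrative:

```yaml
# Hypothetical pod template fragment: containers bind directly on the node's IP
spec:
  template:
    spec:
      hostNetwork: true
      containers:
      - name: node-agent            # assumed name
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80         # reachable at <node-ip>:80, no Service needed
```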

  • Original: https://www.cnblogs.com/ppc-srever/p/11129070.html