  • [k8s] k8s-ceph-statefulsets-storageclass-nfs: deploying stateful applications in practice

    k8s StatefulSet / StorageClass: deploying stateful applications in practice, v2

    Copyright 2017-05-22 xiaogang(172826370@qq.com)


    The articles available online all just copy the official docs; there is a pile of bad write-ups that mislead people and have little practical value, hence this article. NFS is relatively
    simple and is usually already installed, so here is the NFS material first; a Ceph RBD dynamic-volume document will follow, along with several Redis and MySQL master/slave examples.

    For stateful containers, storage is a key problem, and Kubernetes provides strong support for managing it. Kubernetes' dynamic volume provisioning feature creates storage volumes on
    demand. Before this feature existed, a cluster administrator first had to call the cloud or storage provider to request a new volume, and then create a PersistentVolume to make it
    visible in Kubernetes. Dynamic provisioning automates both steps, so administrators no longer need to pre-allocate storage. Volumes are provisioned according to the definition of a
    StorageClass, which is an abstraction over the underlying storage and carries storage-related parameters, for example the disk type (standard or SSD).

    StorageClass provisioners give Kubernetes access to specific physical or cloud storage backends. A number of provisioners are supported out of the box, and more are available from the
    Kubernetes incubator.

    In Kubernetes 1.6, dynamic volume provisioning was promoted to stable (it entered beta in 1.4). This is an important step in automating storage in Kubernetes: administrators control how
    resources are provisioned, while users can focus on their applications. Beyond the benefits above, you should also review the user-facing changes involved before upgrading to
    Kubernetes 1.6.
    Stateful applications
    In general, nginx or a web server (MySQL excluded) does not need to keep data itself; for a web server, the data lives on nodes dedicated to persistence. Such servers can therefore be
    scaled out or in at will simply by changing the replica count. Many stateful programs, however, must be deployed as a cluster: the nodes form a group, and each node needs a unique ID
    (for example a Kafka broker.id or ZooKeeper myid) that identifies it to the other members and is used for communication inside the cluster. Traditionally, administrators deploy such
    programs onto stable, long-lived nodes with persistent storage and static IP addresses, which couples an application instance to a particular piece of the underlying infrastructure,
    such as a specific machine or IP address. The goal of StatefulSet in Kubernetes is to break this coupling by giving a particular instance of an application an identity that does not
    depend on the underlying infrastructure (consumers find a specific member through a DNS name instead of a static IP).

    StatefulSet

    Prerequisites

    Prerequisites for using StatefulSet:

    • Kubernetes cluster version >= 1.5
    • The cluster DNS add-on is installed, version >= 15

    Features

    Why is StatefulSet (called PetSet before 1.5) a good fit for stateful programs? Compared with a Deployment it has the following properties:

    • A stable, unique network identity that members can use to discover the other members of the cluster. If the StatefulSet is named kafka, the first Pod is kafka-0 (likewise mysql-0),
      the second kafka-1 (mysql-1), and so on.
    • Stable persistent storage, provided through Kubernetes PV/PVC or pre-provisioned external storage.
    • Ordered startup and shutdown, i.e. graceful deployment and scaling: Pod n is only acted on once Pods 0..n-1 are running and ready; deletion and termination are likewise ordered and
      graceful, proceeding in the order n, n-1, ... 1, 0.
    • "Stable" above means stable across Pod reschedules: the storage, DNS name and hostname are bound to the Pod itself, no matter which node the Pod is scheduled to.

    So for applications such as ZooKeeper, etcd or Elasticsearch that need stable cluster membership, you can use StatefulSet. Querying the A records of the headless service's domain gives
    you the DNS names of the members in the cluster.

    Limitations

    StatefulSet also has some limitations:

    • A Pod's storage must be provided by a PersistentVolume provisioner based on a StorageClass, or pre-provisioned by an administrator.
    • Deleting or scaling down a StatefulSet does not delete the volumes associated with it; this is deliberate, to keep the data safe.
    • A StatefulSet currently requires a headless service to generate the unique network identity of its Pods, and you have to create that Service yourself.
    • Upgrading a StatefulSet is a manual process.

    Headless Service

    To make a Service headless, set the ClusterIP field in the Service spec to None: spec.clusterIP: None.
    Compared with an ordinary Service, a headless Service has no ClusterIP (and therefore no load balancing). Instead it gives every member of the group a unique DNS name to use as its
    network identity, and the members talk to each other by these names. The domain managed by a headless service has the form $(service_name).$(k8s_namespace).svc.cluster.local, where
    "cluster.local" is the cluster domain; unless configured otherwise, the default cluster domain is cluster.local. Each Pod created under the StatefulSet gets a DNS subdomain of the form
    $(podname).$(governing_service_domain), where the governing service domain is set by the serviceName field of the StatefulSet. For example, if the headless service managing kafka has
    the domain kafka.test.svc.cluster.local, a Pod gets a subdomain such as kafka-1.kafka.test.svc.cluster.local.
    Note that all of these domains are cluster-internal and are managed by the kube-dns add-on; they can be queried with a command such as the one below.
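
    A hedged sketch of such a query, using a throwaway busybox Pod and the kafka/test example names from above (busybox and the Pod name dnstest are just illustrative choices; substitute
    your own service and namespace):

    # A record of the headless service itself: returns the IPs of all members
    kubectl run -i -t dnstest --image=busybox --restart=Never --rm -- nslookup kafka.test.svc.cluster.local
    # A record of one specific member
    kubectl run -i -t dnstest --image=busybox --restart=Never --rm -- nslookup kafka-1.kafka.test.svc.cluster.local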

    1. nfs-client StorageClass dynamic volumes

    On the NFS server host, configure the export permissions. cat /etc/exports
    
    /data/nfs-storage/k8s-storage/ssd *(rw,insecure,sync,no_subtree_check,no_root_squash)
    
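    If the NFS server is already running when you edit /etc/exports, re-export and verify before moving on; a quick sketch using standard NFS server tooling:

    exportfs -ra     # re-read /etc/exports and apply the export
    showmount -e     # confirm the ssd path is now exported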

    Pull the nfs-client provisioner image (re-tag and push it to the private registry)

    docker pull quay.io/kubernetes_incubator/nfs-client-provisioner:v1
    docker tag quay.io/kubernetes_incubator/nfs-client-provisioner:v1 192.168.1.103/k8s_public/nfs-client-provisioner:v1
    docker push 192.168.1.103/k8s_public/nfs-client-provisioner:v1
    

    Deploy the provisioner; the Deployment mounts the NFS export and dynamically creates PVs on it for claims that reference the StorageClass.

     cat deployment-nfs.yaml
     
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: nfs-client-provisioner
    spec:
      replicas: 1
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          containers:
            - name: nfs-client-provisioner
              image: 192.168.1.103/k8s_public/nfs-client-provisioner:v1
              volumeMounts:
                - name: nfs-client-root
                  mountPath: /persistentvolumes
              env:
                - name: PROVISIONER_NAME
                  value: fuseim.pri/ifs
                - name: NFS_SERVER
                  value: 192.168.1.103
                - name: NFS_PATH
                  value: /data/nfs-storage/k8s-storage/ssd
          volumes:
            - name: nfs-client-root
              nfs:
                server: 192.168.1.103
                path: /data/nfs-storage/k8s-storage/ssd # NFS export path; fill in according to your environment
    
    [root@master3 deploy]#  kubectl create -f  deployment-nfs.yaml
    
    kubectl get pod 
    nfs-client-provisioner-4163627910-fn70d   1/1       Running             0          1m
    
    
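    Optionally, tail the provisioner's logs to confirm it started cleanly (the Pod name is taken from the output above; yours will differ):

    kubectl logs -f nfs-client-provisioner-4163627910-fn70d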

    Deploy the StorageClass (storageclass.yaml)

    
    [root@master3 deploy]# cat nfs-class.yaml 
    apiVersion: storage.k8s.io/v1beta1
    kind: StorageClass
    metadata:
      name: managed-nfs-storage 
    provisioner: fuseim.pri/ifs # or choose another name, but it must match the PROVISIONER_NAME env var in the nfs-client-provisioner Deployment
    
    [root@master3 deploy]#  kubectl create -f nfs-class.yaml
    
    [root@master3 deploy]# kubectl get storageclass 
    NAME                  TYPE
    ceph-web              kubernetes.io/rbd   
    managed-nfs-storage   fuseim.pri/ifs  
    
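    Before wiring the class into a StatefulSet, you can sanity-check dynamic provisioning with a standalone PVC. A minimal sketch (the claim name test-claim is made up for illustration; it
    uses the same beta storage-class annotation as the examples below):

    cat test-claim.yaml
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: test-claim
      annotations:
        volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Mi

    kubectl create -f test-claim.yaml
    kubectl get pvc test-claim    # should reach Bound once the provisioner creates a matching PV
    kubectl delete -f test-claim.yaml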

    Create a StatefulSet that references the StorageClass

    [root@master3 stateful-set]# cat nginx.yaml 
    apiVersion: apps/v1beta1
    kind: StatefulSet
    metadata:
      name: web
    spec:
      serviceName: "nginx1"
      replicas: 2
      volumeClaimTemplates:
      - metadata:
          name: test 
          annotations:
            volume.beta.kubernetes.io/storage-class: "managed-nfs-storage" # reference the StorageClass name here
        spec:
          accessModes: [ "ReadWriteOnce" ]
          resources:
            requests:
              storage: 2Gi 
      template:
        metadata:
          labels:
            app: nginx1
        spec:
          containers:
          - name: nginx1
            image: 192.168.1.103/k8s_public/nginx:latest
            volumeMounts:
            - mountPath: "/mnt"
              name: test
          imagePullSecrets:
            - name: "registrykey" #注意此处注名了secret安全连接registy 本地镜相服务器
    
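    Note that the StatefulSet above sets serviceName: "nginx1", and as discussed earlier a StatefulSet needs a headless Service to own its Pods' network identity; that Service is not shown
    here, so the following is only a minimal sketch of what it might look like (port 80 and the port name web are assumptions; the selector matches the app: nginx1 label from the Pod
    template):

    apiVersion: v1
    kind: Service
    metadata:
      name: nginx1
      labels:
        app: nginx1
    spec:
      ports:
      - port: 80
        name: web
      clusterIP: None      # headless: no ClusterIP, only per-Pod DNS records
      selector:
        app: nginx1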

    Verify that the PV and PVC were created automatically

    [root@master3 stateful-set]# kubectl get pv |grep web
    default-test-web-0-pvc-6b82cdd6-3ed4-11e7-9818-525400c2bc59                                 2Gi        RWO           Delete          Bound     default/test-web-0                                           1m
    default-test-web-1-pvc-6bbec6a0-3ed4-11e7-9818-525400c2bc59                                 2Gi        RWO           Delete          Bound     default/test-web-1                                           1m
    [root@master3 stateful-set]# kubectl get pvc |grep web
    test-web-0                                 Bound     default-test-web-0-pvc-6b82cdd6-3ed4-11e7-9818-525400c2bc59                                 2Gi        RWO           1m
    test-web-1                                 Bound     default-test-web-1-pvc-6bbec6a0-3ed4-11e7-9818-525400c2bc59                                 2Gi        RWO           1m
    [root@master3 stateful-set]# kubectl get storageclass |grep web
    ceph-web              kubernetes.io/rbd   
    [root@master3 stateful-set]# kubectl get storageclass 
    NAME                  TYPE
    ceph-web              kubernetes.io/rbd   
    managed-nfs-storage   fuseim.pri/ifs  
    
    [root@master3 stateful-set]# kubectl get pod |grep web
    web-0                                     1/1       Running             0          2m
    web-1                                     1/1       Running             0          2m
    
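    You can also confirm the stable network identity: each Pod's hostname equals its Pod name. A quick check (assuming the nginx image provides /bin/sh):

    for i in 0 1; do kubectl exec web-$i -- sh -c 'hostname'; done
    # expected output: web-0, web-1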

    Scale up the Pods

    [root@master3 stateful-set]#  kubectl scale statefulset web --replicas=3
    [root@master3 stateful-set]# kubectl get pod |grep web
    web-0                                     1/1       Running             0          10m
    web-1                                     1/1       Running             0          10m
    web-2                                     1/1       Running             0          1m
    

    Scale down to 1 Pod

    kubectl scale statefulset web --replicas=1
    [root@master3 stateful-set]# kubectl get pod |grep web
    web-0                                     1/1       Running             0          11m
    

    OK, creation is complete and the Pods are healthy.

    Exec into web-0 and verify the PVC mount

    [root@master3 stateful-set]# kubectl exec -it web-0 /bin/bash
    root@web-0:/# 
    root@web-0:/# df -h
    Filesystem                                                                                                   Size  Used Avail Use% Mounted on
    /dev/mapper/docker-253:0-654996-18a8b448ce9ebf898e46c4468b33093ed9a5f81794d82a271124bcd1eb27a87c              10G  230M  9.8G   3% /
    tmpfs                                                                                                        1.6G     0  1.6G   0% /dev
    tmpfs                                                                                                        1.6G     0  1.6G   0% /sys/fs/cgroup
    192.168.1.103:/data/nfs-storage/k8s-storage/ssd/default-test-web-0-pvc-6b82cdd6-3ed4-11e7-9818-525400c2bc59  189G   76G  104G  43% /mnt
    /dev/mapper/centos-root                                                                                       37G  9.1G   26G  27% /etc/hosts
    shm                                                                                                           64M     0   64M   0% /dev/shm
    tmpfs                                                                                                        1.6G   12K  1.6G   1% /run/secrets/kubernetes.io/serviceaccount
    root@web-0:/# 
    
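    To see the "stable storage" promise in action, write a file into the PVC-backed mount and recreate the Pod; a hedged sketch (the file name test.txt is arbitrary):

    kubectl exec web-0 -- sh -c 'echo hello > /mnt/test.txt'
    kubectl delete pod web-0                  # the StatefulSet controller recreates web-0
    # wait until web-0 is Running again, then:
    kubectl exec web-0 -- cat /mnt/test.txt   # prints "hello": the same PVC was re-attached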

    Check the PVC directories on the NFS server

    root@pxt:/data/nfs-storage/k8s-storage/ssd# ll
    total 40
    drwxr-xr-x 10 root root 4096 May 22 17:53 ./
    drwxr-xr-x  7 root root 4096 May 12 17:26 ../
    drwxr-xr-x  3 root root 4096 May 16 16:19 default-data-mysql-0-pvc-3954b59e-3a10-11e7-b646-525400c2bc59/
    drwxr-xr-x  3 root root 4096 May 16 16:20 default-data-mysql-1-pvc-396bd26f-3a10-11e7-b646-525400c2bc59/
    drwxr-xr-x  3 root root 4096 May 16 16:21 default-data-mysql-2-pvc-39958611-3a10-11e7-b646-525400c2bc59/
    drwxr-xr-x  2 root root 4096 May 17 17:49 default-redis-primary-volume-redis-primary-0-pvc-bb19aa13-3ad3-11e7-b646-525400c2bc59/
    drwxr-xr-x  2 root root 4096 May 17 17:56 default-redis-secondary-volume-redis-secondary-0-pvc-16c8749d-3ae7-11e7-b646-525400c2bc59/
    drwxr-xr-x  2 root root 4096 May 17 17:58 default-redis-secondary-volume-redis-secondary-1-pvc-16da7ba5-3ae7-11e7-b646-525400c2bc59/
    drwxr-xr-x  2 root root 4096 May 22 17:53 default-test-web-0-pvc-6b82cdd6-3ed4-11e7-9818-525400c2bc59/
    drwxr-xr-x  2 root root 4096 May 22 17:53 default-test-web-1-pvc-6bbec6a0-3ed4-11e7-9818-525400c2bc59/
    
    root@pxt:/data/nfs-storage/k8s-storage/ssd# showmount -e
    Export list for pxt.docker.agent103:
    /data/nfs_ssd                          *
    /data/nfs-storage/k8s-storage/standard *
    /data/nfs-storage/k8s-storage/ssd      *
    /data/nfs-storage/k8s-storage/redis    *
    /data/nfs-storage/k8s-storage/nginx    *
    /data/nfs-storage/k8s-storage/mysql    *
    
    
    root@pxt:/data/nfs-storage/k8s-storage/ssd# cat /etc/exports 
    # /etc/exports: the access control list for filesystems which may be exported
    #		to NFS clients.  See exports(5).
    #
    # Example for NFSv2 and NFSv3:
    # /srv/homes       hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
    #
    # Example for NFSv4:
    # /srv/nfs4        gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
    # /srv/nfs4/homes  gss/krb5i(rw,sync,no_subtree_check)
    #/data/nfs-storage/k8s-storage *(rw,insecure,sync,no_subtree_check,no_root_squash)
    /data/nfs-storage/k8s-storage/mysql *(rw,insecure,sync,no_subtree_check,no_root_squash)
    /data/nfs-storage/k8s-storage/nginx *(rw,insecure,sync,no_subtree_check,no_root_squash)
    /data/nfs-storage/k8s-storage/redis *(rw,insecure,sync,no_subtree_check,no_root_squash)
    /data/nfs-storage/k8s-storage/ssd *(rw,insecure,sync,no_subtree_check,no_root_squash)
    /data/nfs-storage/k8s-storage/standard *(rw,insecure,sync,no_subtree_check,no_root_squash)
    /data/nfs_ssd *(rw,insecure,sync,no_subtree_check,no_root_squash)
    

    2. Deploy a scalable MySQL master/slave cluster based on MySQL 5.7, one master and multiple slaves; prepare three YAML files


    mysql-configmap.yaml mysql-services.yaml mysql-statefulset.yaml 
    
    [root@master3 setateful-set-mysql]# cat mysql-configmap.yaml 
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: mysql
      labels:
        app: mysql
    data:
      master.cnf: |
        # Apply this config only on the master.
        [mysqld]
        log-bin
      slave.cnf: |
        # Apply this config only on slaves.
        [mysqld]
        super-read-only
    
    [root@master3 setateful-set-mysql]# cat mysql-services.yaml
    # Headless service for stable DNS entries of StatefulSet members.
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql
      labels:
        app: mysql
    spec:
      ports:
      - name: mysql
        port: 3306
      clusterIP: None
      selector:
        app: mysql
    ---
    # Client service for connecting to any MySQL instance for reads.
    # For writes, you must instead connect to the master: mysql-0.mysql.
    apiVersion: v1
    kind: Service
    metadata:
      name: mysql-read
      labels:
        app: mysql
    spec:
      ports:
      - name: mysql
        port: 3306
      selector:
        app: mysql
    
    [root@master3 setateful-set-mysql]# cat mysql-statefulset.yaml
    apiVersion: apps/v1beta1
    kind: StatefulSet
    metadata:
      name: mysql
    spec:
      serviceName: mysql
      replicas: 3
      template:
        metadata:
          labels:
            app: mysql
          annotations:
            pod.beta.kubernetes.io/init-containers: '[
              {
                "name": "init-mysql",
                # original image: mysql:5.7
                "image": "192.168.1.103/k8s_public/mysql:5.7",
                "command": ["bash", "-c", "
                  set -ex
    
                  # Generate mysql server-id from pod ordinal index.
    
                  [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
    
                  ordinal=${BASH_REMATCH[1]}
    
                  echo [mysqld] > /mnt/conf.d/server-id.cnf
    
                  # Add an offset to avoid reserved server-id=0 value.
    
                  echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
    
                  # Copy appropriate conf.d files from config-map to emptyDir.
    
                  if [[ $ordinal -eq 0 ]]; then
    
                    cp /mnt/config-map/master.cnf /mnt/conf.d/
    
                  else
    
                    cp /mnt/config-map/slave.cnf /mnt/conf.d/
    
                  fi
    
                "],
                "volumeMounts": [
                  {"name": "conf", "mountPath": "/mnt/conf.d"},
                  {"name": "config-map", "mountPath": "/mnt/config-map"}
                ]
              },
              {
                "name": "clone-mysql",
    			#"image": gcr.io/google-samples/xtrabackup:1.0 原始镜相自己打tag push 到私库
                "image": "192.168.1.103/k8s_public/xtrabackup:1.0",
                "command": ["bash", "-c", "
                  set -ex
    
                  # Skip the clone if data already exists.
    
                  [[ -d /var/lib/mysql/mysql ]] && exit 0
    
                  # Skip the clone on master (ordinal index 0).
    
                  [[ `hostname` =~ -([0-9]+)$ ]] || exit 1
    
                  ordinal=${BASH_REMATCH[1]}
    
                  [[ $ordinal -eq 0 ]] && exit 0
    
                  # Clone data from previous peer.
    
                  ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
    
                  # Prepare the backup.
    
                  xtrabackup --prepare --target-dir=/var/lib/mysql
    
                "],
                "volumeMounts": [
                  {"name": "data", "mountPath": "/var/lib/mysql", "subPath": "mysql"},
                  {"name": "conf", "mountPath": "/etc/mysql/conf.d"}
                ]
              }
            ]'
        spec:
          containers:
          - name: mysql
            image: 192.168.1.103/k8s_public/mysql:5.7
            env:
            - name: MYSQL_ALLOW_EMPTY_PASSWORD
              value: "1"
            ports:
            - name: mysql
              containerPort: 3306
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
            resources:
              requests:
                cpu: 1
                memory: 1Gi
                #memory: 500Mi
            livenessProbe:
              exec:
                command: ["mysqladmin", "ping"]
              initialDelaySeconds: 30
              timeoutSeconds: 5
            readinessProbe:
              exec:
                # Check we can execute queries over TCP (skip-networking is off).
                command: ["mysql", "-h", "127.0.0.1", "-e", "SELECT 1"]
              initialDelaySeconds: 5
              timeoutSeconds: 1
          - name: xtrabackup
            image: 192.168.1.103/k8s_public/xtrabackup:1.0
            ports:
            - name: xtrabackup
              containerPort: 3307
            command:
            - bash
            - "-c"
            - |
              set -ex
              cd /var/lib/mysql
    
              # Determine binlog position of cloned data, if any.
              if [[ -f xtrabackup_slave_info ]]; then
                # XtraBackup already generated a partial "CHANGE MASTER TO" query
                # because we're cloning from an existing slave.
                mv xtrabackup_slave_info change_master_to.sql.in
                # Ignore xtrabackup_binlog_info in this case (it's useless).
                rm -f xtrabackup_binlog_info
              elif [[ -f xtrabackup_binlog_info ]]; then
                # We're cloning directly from master. Parse binlog position.
                [[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
                rm xtrabackup_binlog_info
                echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',
                      MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
              fi
    
              # Check if we need to complete a clone by starting replication.
              if [[ -f change_master_to.sql.in ]]; then
                echo "Waiting for mysqld to be ready (accepting connections)"
                until mysql -h 127.0.0.1 -e "SELECT 1"; do sleep 1; done
    
                echo "Initializing replication from clone position"
                # In case of container restart, attempt this at-most-once.
                mv change_master_to.sql.in change_master_to.sql.orig
                mysql -h 127.0.0.1 <<EOF
              $(<change_master_to.sql.orig),
                MASTER_HOST='mysql-0.mysql',
                MASTER_USER='root',
                MASTER_PASSWORD='',
                MASTER_CONNECT_RETRY=10;
              START SLAVE;
              EOF
              fi
    
              # Start a server to send backups when requested by peers.
              exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
                "xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
            volumeMounts:
            - name: data
              mountPath: /var/lib/mysql
              subPath: mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
            resources:
              requests:
                cpu: 100m
                memory: 100Mi
          nodeSelector:
            zone: mysql
          volumes:
          - name: conf
            emptyDir: {}
          - name: config-map
            configMap:
              name: mysql
      volumeClaimTemplates:
      - metadata:
          name: data
          annotations:
            #volume.alpha.kubernetes.io/storage-class: "managed-nfs-storage" # note: whether the alpha or the beta annotation is required depends on your Kubernetes version
            volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi
    
    [root@master3 setateful-set-mysql]# kubectl create -f mysql-configmap.yaml  -f mysql-services.yaml  -f mysql-statefulset.yaml
    
    [root@master3 setateful-set-mysql]# kubectl get storageclass,pv,pvc,statefulset,pod,service |grep mysql
    
    
    
    pv/default-data-mysql-0-pvc-3954b59e-3a10-11e7-b646-525400c2bc59                               10Gi       RWO           Delete          Bound     default/data-mysql-0                                         6d
    pv/default-data-mysql-1-pvc-396bd26f-3a10-11e7-b646-525400c2bc59                               10Gi       RWO           Delete          Bound     default/data-mysql-1                                         6d
    pv/default-data-mysql-2-pvc-39958611-3a10-11e7-b646-525400c2bc59                               10Gi       RWO           Delete          Bound     default/data-mysql-2                                         6d
    pvc/data-mysql-0                               Bound     default-data-mysql-0-pvc-3954b59e-3a10-11e7-b646-525400c2bc59                               10Gi       RWO           6d
    pvc/data-mysql-1                               Bound     default-data-mysql-1-pvc-396bd26f-3a10-11e7-b646-525400c2bc59                               10Gi       RWO           6d
    pvc/data-mysql-2                               Bound     default-data-mysql-2-pvc-39958611-3a10-11e7-b646-525400c2bc59                               10Gi       RWO           6d
    
    statefulsets/mysql             3         3         5d
    
    po/mysql-0                                   2/2       Running             0          5d
    po/mysql-1                                   2/2       Running             0          5d
    po/mysql-2                                   2/2       Running             0          5d
    
    svc/mysql                    None            <none>        3306/TCP       6d  # within the same namespace the member names resolve: ping mysql-0.mysql ; ping mysql-1.mysql
    svc/mysql-read               172.1.11.160    <none>        3306/TCP       6d
    

    OK, all Pods are created. Note that the mysql service above has no ClusterIP; that is the headless service type. Also note that after kubectl delete statefulset, the PVs and PVCs
    still remain.

    Scale out the MySQL slaves; after scaling you can see that the corresponding PVs and PVCs were created automatically

    kubectl scale --replicas=5 statefulset mysql
    kubectl get pod|grep mysql
    po/mysql-0                                   2/2       Running             0          5d
    po/mysql-1                                   2/2       Running             0          5d
    po/mysql-2                                   2/2       Running             0          5d
    po/mysql-3                                   2/2       Running             0          5m
    po/mysql-4                                   2/2       Running             0          5m
    
    Scale down: kubectl scale --replicas=2 statefulset mysql
    
    kubectl get pod|grep mysql
    
    po/mysql-0                                   2/2       Running             0          5d
    po/mysql-1                                   2/2       Running             0          5d
    
    

    Testing

    Test connecting to MySQL

    Method 1: connect from a container
    Start a mysql-client Pod

    #start a client container; in my test the statements executed successfully but the command did not return, so press Ctrl+C; kubectl get pod then shows the mysql-client Pod
    
    kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --\
      mysql -h mysql-0.mysql <<EOF
    CREATE DATABASE test;
    CREATE TABLE test.messages (message VARCHAR(250));
    INSERT INTO test.messages VALUES ('hello');
    EOF
    
    
    kubectl exec -it mysql-client bash
    
    #connect to a slave (any read replica via the mysql-read service)
    root@mysql-client:/# mysql -h mysql-read
    
    #connect to the master
    mysql -h mysql-0.mysql
    
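    To confirm that the test data written to the master was replicated, you can read it back through the mysql-read service; a sketch along the lines of the commands used later in this
    article (the Pod name mysql-client-read is arbitrary, chosen to avoid clashing with the mysql-client Pod above):

    kubectl run mysql-client-read --image=mysql:5.7 -i --rm --restart=Never --\
      mysql -h mysql-read -e "SELECT * FROM test.messages"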

    Method 2: install a MySQL client on a host machine

    #install
    yum install mysql -y
    
    #look up the Pod IPs
    [root@node131 images]# kubectl get po -o wide|grep mysql
    mysql-0                                  2/2       Running   0          25m       172.30.2.4    192.168.6.133
    mysql-1                                  2/2       Running   1          24m       172.30.28.4   192.168.6.132
    mysql-2                                  2/2       Running   1          24m       172.30.2.5    192.168.6.133
    mysql-client                             1/1       Running   0          22m       172.30.28.5   192.168.6.132
    
    #log in with the local mysql client using a Pod IP
    mysql -h 172.30.2.5
    

    Check the mysql-read service

    kubectl run mysql-client-loop --image=mysql:5.7 -i -t --rm --restart=Never --\
      bash -ic "while sleep 1; do mysql -h mysql-read -e 'SELECT @@server_id,NOW()'; done"
    
    [root@node131 images]# kubectl run mysql-client-loop --image=mysql:5.7 -i -t --rm --restart=Never --\
    >   bash -ic "while sleep 1; do mysql -h mysql-read -e 'SELECT @@server_id,NOW()'; done"
    If you don't see a command prompt, try pressing enter.
                                                          +-------------+---------------------+
    +-------------+---------------------+
    | @@server_id | NOW()               |
    +-------------+---------------------+
    |         100 | 2017-05-23 08:58:31 |
    +-------------+---------------------+
    +-------------+---------------------+
    | @@server_id | NOW()               |
    +-------------+---------------------+
    |         101 | 2017-05-23 08:58:32 |
    +-------------+---------------------+
    +-------------+---------------------+
    | @@server_id | NOW()               |
    +-------------+---------------------+
    |         102 | 2017-05-23 08:58:33 |
    +-------------+---------------------+
    
    ^C
    

    Keep the window above open.

    Simulate a MySQL node going down (this breaks the readiness probe by renaming the mysql binary, so the Pod is removed from the mysql-read endpoints)

    kubectl exec mysql-2 -c mysql -- mv /usr/bin/mysql /usr/bin/mysql.off
    

    In the window kept open above you can now see only server ids 100 and 101:

    +-------------+---------------------+
    | @@server_id | NOW()               |
    +-------------+---------------------+
    |         100 | 2017-05-23 09:03:05 |
    +-------------+---------------------+
    +-------------+---------------------+
    | @@server_id | NOW()               |
    +-------------+---------------------+
    |         100 | 2017-05-23 09:03:06 |
    +-------------+---------------------+
    
    

    Restore 102 and it is automatically added back to the read pool

    kubectl exec mysql-2 -c mysql -- mv /usr/bin/mysql.off /usr/bin/mysql
    

    Delete a Pod:

    kubectl delete pod mysql-2
    

    After the deletion, the StatefulSet controller automatically recreates mysql-2.

    Node maintenance: when a node needs to be maintained, all Pods on that node have to be evicted; they are automatically rescheduled onto other nodes.

    kubectl drain <node-name> --force --delete-local-data --ignore-daemonsets
    kubectl get pod mysql-2 -o wide --watch
    

    Once the node has been maintained, rejoin it to the cluster

    kubectl uncordon <node-name>
    kubectl get pods -l app=mysql --watch
    

    Scale out

    kubectl scale --replicas=5 statefulset mysql
    
    kubectl get pods -l app=mysql --watch
    
    kubectl run mysql-client --image=mysql:5.7 -i -t --rm --restart=Never --\
      mysql -h mysql-3.mysql -e "SELECT * FROM test.messages"
      
    kubectl scale --replicas=3 statefulset mysql
    
    kubectl get pvc -l app=mysql
    

    Scale down:
    This shows that all 5 PVCs still exist, despite having scaled the StatefulSet down to 3:

    
    NAME           STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
    data-mysql-0   Bound     pvc-8acbf5dc-b103-11e6-93fa-42010a800002   10Gi       RWO           20m
    data-mysql-1   Bound     pvc-8ad39820-b103-11e6-93fa-42010a800002   10Gi       RWO           20m
    data-mysql-2   Bound     pvc-8ad69a6d-b103-11e6-93fa-42010a800002   10Gi       RWO           20m
    data-mysql-3   Bound     pvc-50043c45-b1c5-11e6-93fa-42010a800002   10Gi       RWO           2m
    data-mysql-4   Bound     pvc-500a9957-b1c5-11e6-93fa-42010a800002   10Gi       RWO           2m
    

    If you don’t intend to reuse the extra PVCs, you can delete them:

    kubectl delete pvc data-mysql-3
    kubectl delete pvc data-mysql-4
    

    Clean up the environment:

    kubectl delete pod mysql-client-loop --now
    kubectl delete statefulset mysql
    kubectl get pods -l app=mysql
    kubectl delete configmap,service,pvc -l app=mysql
    

    Fixing authorization (RBAC)

    This is needed because Kubernetes 1.6 enables RBAC authorization.

    After creating the StatefulSet, the provisioner Pod's logs showed:

    kubectl logs -f  nfs-client-provisioner-2387627438-hs250 
    ...
    E0523 02:47:32.695718       1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:397: Failed to list *v1.PersistentVolume: User "system:serviceaccount:default:default" cannot list persistentvolumes at the cluster scope. (get persistentvolumes)
    E0523 02:47:32.696305       1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:369: Failed to list *v1.StorageClass: User "system:serviceaccount:default:default" cannot list storageclasses.storage.k8s.io at the cluster scope. (get storageclasses.storage.k8s.io)
    E0523 02:47:32.697326       1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:396: Failed to list *v1.PersistentVolumeClaim: User "system:serviceaccount:default:default" cannot list persistentvolumeclaims at the cluster scope. (get persistentvolumeclaims)
    E0523 02:47:33.697467       1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:397: Failed to list *v1.PersistentVolume: User "system:serviceaccount:default:default" cannot list persistentvolumes at the cluster scope. (get persistentvolumes)
    E0523 02:47:33.697967       1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:369: Failed to list *v1.StorageClass: User "system:serviceaccount:default:default" cannot list storageclasses.storage.k8s.io at the cluster scope. (get storageclasses.storage.k8s.io)
    E0523 02:47:33.699042       1 reflector.go:201] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:396: Failed to list *v1.PersistentVolumeClaim: User "system:serviceaccount:default:default" cannot list persistentvolumeclaims at the cluster scope. (get persistentvolumeclaims)
    ...
    ^C
    
    

    Fix:

    [root@node131 rbac]# cat serviceaccount.yaml 
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: nfs-provisioner
    
    [root@node131 rbac]# cat clusterrole.yaml 
    kind: ClusterRole
    apiVersion: rbac.authorization.k8s.io/v1alpha1
    metadata:
      name: nfs-provisioner-runner
    rules:
      - apiGroups: [""]
        resources: ["persistentvolumes"]
        verbs: ["get", "list", "watch", "create", "delete"]
      - apiGroups: [""]
        resources: ["persistentvolumeclaims"]
        verbs: ["get", "list", "watch", "update"]
      - apiGroups: ["storage.k8s.io"]
        resources: ["storageclasses"]
        verbs: ["get", "list", "watch"]
      - apiGroups: [""]
        resources: ["events"]
        verbs: ["watch", "create", "update", "patch"]
      - apiGroups: [""]
        resources: ["services", "endpoints"]
        verbs: ["get"]
      - apiGroups: ["extensions"]
        resources: ["podsecuritypolicies"]
        resourceNames: ["nfs-provisioner"]
        verbs: ["use"]
    
    [root@node131 rbac]# cat  clusterrolebinding.yaml
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1alpha1
    metadata:
      name: run-nfs-provisioner
    subjects:
      - kind: ServiceAccount
        name: nfs-provisioner
        namespace: default
    roleRef:
      kind: ClusterRole
      name: nfs-provisioner-runner
      apiGroup: rbac.authorization.k8s.io
    [root@node131 rbac]# 
    

    Note

    [root@node131 nfs]# cat nfs-stateful.yaml 
    kind: Deployment
    apiVersion: extensions/v1beta1
    metadata:
      name: nfs-client-provisioner
    spec:
      replicas: 1
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccount: nfs-provisioner # the Deployment must use the ServiceAccount created above
    
    

    Create these in order, then redo the provisioner and PV steps above

    kubectl create -f serviceaccount.yaml -f clusterrole.yaml -f clusterrolebinding.yaml
    
  • Original article: https://www.cnblogs.com/iiiiher/p/7159810.html