https://www.cnblogs.com/kevingrace/p/13969995.html
StatefulSet is designed for stateful services running in containers, whereas Deployment and ReplicaSet are designed for stateless services.
Typical StatefulSet use cases:
- Stable persistent storage: after a Pod is rescheduled it can still access the same persisted data, implemented via PVCs.
- Stable network identity: after a Pod is rescheduled its PodName and HostName stay the same, implemented via a Headless Service (a Service without a Cluster IP).
- Ordered deployment and ordered scaling: Pods are created in a defined order (from 0 to N-1; before the next Pod starts, all preceding Pods must be Running and Ready), implemented via init containers.
- Ordered shutdown and ordered deletion (from N-1 down to 0).
- A StatefulSet requires Pod names to be ordinal: no Pod can be arbitrarily replaced, and even after a Pod is rebuilt its name stays the same.
A StatefulSet setup consists of:
- A Headless Service that defines the network identity (headless-svc: since it has no Cluster IP, it provides no load balancing).
- volumeClaimTemplates used to create PersistentVolumes.
- The StatefulSet object that defines the application itself.
Workflow for configuring NFS dynamic persistent storage for a StatefulSet:
- Create the NFS server.
- Create a Service Account, which controls the permissions the NFS provisioner has in the k8s cluster.
- Create a StorageClass, which is responsible for creating PVCs, invoking the NFS provisioner to do the actual work, and binding PVs to PVCs.
- Create the NFS provisioner. It does two things: it creates mount points (volumes) under the NFS shared directory, and it creates PVs and associates them with those NFS mount points.
Access address for a StatefulSet pod:
pod_name.service_name.namespace_name.svc.cluster.local:pod_app_port
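The address pattern above can be sketched as a small shell helper. The `pod_fqdn` function is hypothetical (just for illustration); the service, namespace, and port values match the StatefulSet defined later in this article:

```shell
# Hypothetical helper: build the stable DNS name of a StatefulSet pod.
# Args: statefulset name, ordinal, headless service name, namespace, app port
pod_fqdn() {
  echo "${1}-${2}.${3}.${4}.svc.cluster.local:${5}"
}

# Pod 0 of the mobile-decision-server StatefulSet in namespace kevin:
pod_fqdn mobile-decision-server 0 mobile-decision-server kevin 8080
# → mobile-decision-server-0.mobile-decision-server.kevin.svc.cluster.local:8080
```

Because the name embeds the ordinal, each replica gets its own stable address, which is exactly what the Headless Service provides.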
1. NFS Deployment
Reference: NFS dual-node hot-standby HA setup
NFS server: 172.16.60.194
NFS export directory: /data/storage/mobile-decision-server_data
NFS configuration:
```
[root@k8s-storage01 ~]# cat /etc/exports
/data/storage 172.16.60.197(rw,sync,no_root_squash)
/data/storage 172.16.60.193(rw,sync,no_root_squash)
/data/storage 172.16.60.48(rw,sync,no_root_squash)
/data/storage 172.16.60.120(rw,sync,no_root_squash)
```
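After editing /etc/exports, the export list needs to be reloaded and can be checked from a client; something along these lines should work (exact output depends on your environment, and `showmount` requires the nfs-utils package):

```
# On the NFS server: reload /etc/exports without restarting the NFS service
exportfs -r
# Show what this server currently exports, with options
exportfs -v
# From a k8s node: confirm the share is visible over the network
showmount -e 172.16.60.194
```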
2. Configuring NFS Dynamic Persistent Storage for a StatefulSet
Below is a working record of using NFS as persistent storage for a StatefulSet. We need to create, in turn, the nfs-provisioner RBAC objects, the StorageClass, the nfs-client-provisioner, and the StatefulSet pods.
1) Create nfs-rbac.yaml
```
[root@k8s-master01 mobile-decision-server]# cat nfs-rbac.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: kevin
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
  namespace: kevin
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get", "create", "list", "watch", "update"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: kevin
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
```
Apply and verify:
```
[root@k8s-master01 mobile-decision-server]# kubectl apply -f nfs-rbac.yaml
[root@k8s-master01 mobile-decision-server]# kubectl get sa -n kevin|grep nfs
nfs-provisioner          1         7h7m
[root@k8s-master01 mobile-decision-server]# kubectl get clusterrole -n kevin|grep nfs
nfs-provisioner-runner   7h7m
[root@k8s-master01 mobile-decision-server]# kubectl get clusterrolebinding -n kevin|grep nfs
run-nfs-provisioner      7h7m
```
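Whether the RBAC objects actually grant what the provisioner needs can be spot-checked with `kubectl auth can-i`, impersonating the service account; each command should print `yes` for the verbs listed in the ClusterRole above:

```
# Can the nfs-provisioner service account create PVs?
kubectl auth can-i create persistentvolumes \
  --as=system:serviceaccount:kevin:nfs-provisioner
# Can it update PVCs?
kubectl auth can-i update persistentvolumeclaims \
  --as=system:serviceaccount:kevin:nfs-provisioner
```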
2) Create nfs-class.yaml
```
[root@k8s-master01 mobile-decision-server]# cat nfs-class.yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  namespace: kevin
provisioner: mobile-decision-server/nfs
reclaimPolicy: Retain
```
Apply:
```
[root@k8s-master01 mobile-decision-server]# kubectl apply -f nfs-class.yaml
```
3) Create mobile-decision-server-nfs.yml
The value of PROVISIONER_NAME must be identical to the provisioner field in the StorageClass!
The /persistentvolumes mountPath must not be changed!
```
[root@k8s-master01 mobile-decision-server]# cat mobile-decision-server-nfs.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: kevin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: mobile-decision-server/nfs
            - name: NFS_SERVER
              value: 172.16.60.194
            - name: NFS_PATH
              value: /data/storage/mobile-decision-server_data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.16.60.194
            path: /data/storage/mobile-decision-server_data
```
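The PROVISIONER_NAME requirement can be double-checked once both objects exist: the two commands below should print the same string, mobile-decision-server/nfs. If they differ, the StorageClass will never find the provisioner and PVCs will stay Pending:

```
# provisioner recorded in the StorageClass
kubectl get storageclass managed-nfs-storage -o jsonpath='{.provisioner}'; echo
# PROVISIONER_NAME env var in the provisioner deployment
kubectl -n kevin get deployment nfs-client-provisioner \
  -o jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="PROVISIONER_NAME")].value}'; echo
```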
Apply and verify:
```
[root@k8s-master01 mobile-decision-server]# kubectl apply -f mobile-decision-server-nfs.yml
[root@k8s-master01 mobile-decision-server]# kubectl get pods -n kevin|grep nfs
nfs-client-provisioner-6d78cd7874-krpvj   1/1   Running   0   5h3m
```
4) Create the StatefulSet
```
[root@k8s-master01 mobile-decision-server]# cat mobile-decision-server.yml
apiVersion: v1
kind: Service
metadata:
  name: mobile-decision-server
  namespace: kevin
  labels:
    app: mobile-decision-server
spec:
  clusterIP: None
  selector:
    app: mobile-decision-server
  ports:
    - port: 8080
      name: server
      targetPort: 8080
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mobile-decision-server
  namespace: kevin
spec:
  serviceName: mobile-decision-server
  replicas: 2
  selector:
    matchLabels:
      app: mobile-decision-server
  template:
    metadata:
      labels:
        app: mobile-decision-server
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: nodetype
                    operator: In
                    values:
                      - mobile-decision-server
      serviceAccount: nfs-provisioner
      containers:
        - name: mobile-decision-server
          image: 172.16.60.196/finhub/mobile-decision-server__20
          imagePullPolicy: Always
          ports:
            - name: dserverport
              containerPort: 8080
          resources:
            requests:
              cpu: 200m
              memory: 600Mi
            limits:
              cpu: 200m
              memory: 1024Mi
          lifecycle:
            postStart:
              exec:
                command: ["/bin/sh","-c","touch /tmp/health"]
            preStop:
              exec:
                command: ["/bin/sh","-c","kill 1"]
          livenessProbe:
            exec:
              command: ["test","-e","/tmp/health"]
            initialDelaySeconds: 5
            timeoutSeconds: 5
            periodSeconds: 10
          readinessProbe:
            tcpSocket:
              port: dserverport
            initialDelaySeconds: 15
            timeoutSeconds: 5
            periodSeconds: 20
          volumeMounts:
            - name: data
              mountPath: /onestop/app
  volumeClaimTemplates:
    - metadata:
        name: data
        annotations:
          volume.beta.kubernetes.io/storage-class: managed-nfs-storage
      spec:
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 20Gi
```
Apply and verify:
```
[root@k8s-master01 mobile-decision-server]# kubectl apply -f mobile-decision-server.yml
[root@k8s-master01 mobile-decision-server]# kubectl get pods -n kevin|grep mobile-decision-server
mobile-decision-server-0   1/1   Running   0   5h4m
mobile-decision-server-1   1/1   Running   0   5h4m
```
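The ordered scaling behavior described at the start of this article can be observed directly by scaling this StatefulSet: pods are created one at a time from ordinal 0 upward, and removed from the highest ordinal downward:

```
# Scale out: mobile-decision-server-2 is only created after 0 and 1 are Ready
kubectl -n kevin scale statefulset mobile-decision-server --replicas=3
kubectl -n kevin get pods -w -l app=mobile-decision-server
# Scale back in: pod 2 is terminated first (N-1 down to 0)
kubectl -n kevin scale statefulset mobile-decision-server --replicas=2
```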
Check the PVs and PVCs:
```
[root@k8s-master01 mobile-decision-server]# kubectl get pvc -n kevin
NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
data-mobile-decision-server-0   Bound    pvc-9433d725-bc0a-4fae-a2dc-5c22b2c3928a   20Gi       RWX            managed-nfs-storage   5h10m
data-mobile-decision-server-1   Bound    pvc-3615cf99-31ea-491b-8808-8800057d0a21   20Gi       RWX            managed-nfs-storage   5h9m

[root@k8s-master01 mobile-decision-server]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                 STORAGECLASS          REASON   AGE
pvc-3615cf99-31ea-491b-8808-8800057d0a21   20Gi       RWX            Delete           Bound    kevin/data-mobile-decision-server-1   managed-nfs-storage            5h9m
pvc-9433d725-bc0a-4fae-a2dc-5c22b2c3928a   20Gi       RWX            Delete
```
5) Check the NFS shared directory
As long as the PVC is not deleted, the directory names under the NFS share do not change.
```
[root@k8s-storage01 ~]# cd /data/storage/mobile-decision-server_data/
[root@k8s-storage01 mobile-decision-server_data]# ll
total 8
drwxrwxrwx 4 root root 4096 Oct 10 11:53 kevin-data-mobile-decision-server-0-pvc-9433d725-bc0a-4fae-a2dc-5c22b2c3928a
drwxrwxrwx 4 root root 4096 Oct 10 11:53 kevin-data-mobile-decision-server-1-pvc-3615cf99-31ea-491b-8808-8800057d0a21
```
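The nfs-client-provisioner names each volume directory as ${namespace}-${pvcName}-${pvName}, which is why the folders above carry the kevin- prefix and stay stable as long as the PVC exists. A minimal sketch of that naming convention (the `vol_dir` helper is hypothetical, not part of the provisioner):

```shell
# Sketch of nfs-client-provisioner's directory naming:
# ${namespace}-${pvcName}-${pvName}
vol_dir() {
  echo "${1}-${2}-${3}"
}

vol_dir kevin data-mobile-decision-server-0 pvc-9433d725-bc0a-4fae-a2dc-5c22b2c3928a
# → kevin-data-mobile-decision-server-0-pvc-9433d725-bc0a-4fae-a2dc-5c22b2c3928a
```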
PVCs and PVs can also be deleted manually:
```
# kubectl delete pvc pvc-name -n namespace
# kubectl delete pv pv-name
```