  • 08-kubernetes Storage Volumes

    Storage Volumes

    Applications fall into four categories:

    1. Stateful, storage required
    2. Stateful, no storage required
    3. Stateless, storage required
    4. Stateless, no storage required

    A Pod can mount the node's local disk or memory as a volume called emptyDir, a temporary empty directory; when the Pod is deleted, the volume is deleted with it.

    hostPath mounts a directory from the host, so containers in the Pod can see the data in that directory on the node.
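
    A later transcript also shows a pod-hostpath-vol Pod. As a minimal sketch (the path /data/pod/volume1 and the names here are illustrative assumptions, not the original manifest), a hostPath volume looks like this:

    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-hostpath-vol
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: html
        hostPath:
          path: /data/pod/volume1         # directory on the node (assumed example path)
          type: DirectoryOrCreate         # create the directory if it does not exist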

    Distributed storage:

    glusterfs, ceph-rbd, cephfs
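
    The volume types supported by the API server can be inspected with kubectl explain:

    [root@master volume]# kubectl explain pods.spec.volumes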

    Testing and using emptyDir

    emptyDir uses the node's local disk or memory (when backed by memory, it is effectively used as a cache).
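
    As a hedged sketch of the in-memory case (the volume name and size limit here are illustrative), the volume definition would look like:

    volumes:
    - name: cache
      emptyDir:
        medium: Memory      # back the volume with tmpfs (node RAM)
        sizeLimit: 64Mi     # cap the size; memory used counts against the Pod's memory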

    Create the corresponding manifest file as follows:

    [root@master volume]# cat pod-vol-demo.yaml 
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-demo
      namespace: default
      labels:
        app: myapp
        tier: frontend
      annotations:
        jubaozhu.com/created-by: "cluster admin"
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html/         # in the myapp container, mount the volume named html at /usr/share/nginx/html/
      - name: busybox
        image: busybox:latest
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - name: html
          mountPath: /data/                         # in the busybox container, mount the volume named html at /data/
        command: ["/bin/sh", "-c", "while true; do echo $$(date) >> /data/index.html; sleep 2; done"]       # appends the current time to /data/index.html every 2s, giving the myapp container content to serve over HTTP
      volumes:
      - name: html          # define a volume named html
        emptyDir: {}        # an empty map: default medium and no sizeLimit (unbounded)
    

    Create it

    [root@master volume]# kubectl apply -f pod-vol-demo.yaml 
    pod/pod-demo created
    [root@master volume]# kubectl get pods -o wide
    NAME                             READY   STATUS    RESTARTS   AGE     IP            NODE                NOMINATED NODE   READINESS GATES
    pod-demo                         2/2     Running   0          29s     10.244.2.25   node02.kubernetes   <none>           <none>
    

    Test access against the Pod's IP

    [root@master volume]# curl 10.244.2.25
    Thu Aug 1 08:41:18 UTC 2019
    Thu Aug 1 08:41:20 UTC 2019
    Thu Aug 1 08:41:22 UTC 2019
    Thu Aug 1 08:41:24 UTC 2019
    Thu Aug 1 08:41:26 UTC 2019
    Thu Aug 1 08:41:28 UTC 2019
    Thu Aug 1 08:41:30 UTC 2019
    Thu Aug 1 08:41:32 UTC 2019
    Thu Aug 1 08:41:34 UTC 2019
    Thu Aug 1 08:41:36 UTC 2019
    Thu Aug 1 08:41:38 UTC 2019
    Thu Aug 1 08:41:40 UTC 2019
    Thu Aug 1 08:41:42 UTC 2019
    Thu Aug 1 08:41:44 UTC 2019
    Thu Aug 1 08:41:46 UTC 2019
    Thu Aug 1 08:41:48 UTC 2019
    Thu Aug 1 08:41:50 UTC 2019
    Thu Aug 1 08:41:52 UTC 2019
    

    Both the writes and the HTTP access work, which is the expected behavior.
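
    To confirm that both containers see the same volume, one could also read the file from inside the busybox container with kubectl exec (a sketch):

    [root@master volume]# kubectl exec pod-demo -c busybox -- tail -n 2 /data/index.html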

    Testing a Pod mounting a shared NFS volume

    For this test, NFS is installed on the master node (10.0.20.20, the server address used in the manifests below). The export is configured as follows; `rw` makes it writable and `no_root_squash` lets a root user on the client write as root:

    [root@master volume]# cat /etc/exports
    /data/volumes	*(rw,no_root_squash)
    

    Start the services and verify the export

    [root@master data]# systemctl start rpcbind
    [root@master data]# systemctl start nfs
    [root@master data]# showmount -e localhost
    Export list for localhost:
    /data/volumes 0.0.0.0/0
    

    Note

    `nfs-utils` must be installed on every node; otherwise, when a Pod is scheduled to a node without it, the Pod fails to start because `mount.nfs` is missing.
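
    On CentOS/RHEL nodes the installation would be something like (node name assumed):

    [root@node01 ~]# yum install -y nfs-utils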
    

    Write a test page

    [root@master volume]# echo '<h1>NFS stor01</h1>' > /data/volumes/index.html
    

    Write the test manifest

    [root@master volume]# cat pod-vol-nfs.yaml 
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-vol-nfs
      namespace: default
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: html
        nfs:
          path: /data/volumes
          server: 10.0.20.20
    

    Create and check

    [root@master volume]# kubectl apply -f pod-vol-nfs.yaml 
    pod/pod-vol-nfs created
    [root@master volume]# kubectl get pods -o wide
    NAME               READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
    pod-hostpath-vol   1/1     Running   0          88m   10.244.3.32   node01.kubernetes   <none>           <none>
    pod-vol-nfs        1/1     Running   0          5s    10.244.1.29   node03.kubernetes   <none>           <none>
    

    The Pod was scheduled on node03.

    Test access

    [root@master volume]# curl 10.244.1.29
    <h1>NFS stor01</h1>     # access works
    

    Delete the Pod, then create it again to test

    [root@master volume]# kubectl delete -f pod-vol-nfs.yaml 
    pod "pod-vol-nfs" deleted
    [root@master volume]# kubectl apply -f pod-vol-nfs.yaml 
    pod/pod-vol-nfs created
    [root@master volume]# kubectl get pods -o wide
    NAME               READY   STATUS    RESTARTS   AGE   IP            NODE                NOMINATED NODE   READINESS GATES
    pod-hostpath-vol   1/1     Running   0          90m   10.244.3.32   node01.kubernetes   <none>           <none>
    # the output below shows the Pod is now scheduled on node02
    pod-vol-nfs        1/1     Running   0          2s    10.244.2.27   node02.kubernetes   <none>           <none>
    [root@master volume]# curl 10.244.2.27
    <h1>NFS stor01</h1>     # access still works: the content persisted on NFS across rescheduling
    

    PV and PVC

    A PV is a cluster-level resource, available to all namespaces in the cluster; the full name is PersistentVolume.

    A PVC is namespace-scoped, i.e. a standard resource type; the full name is PersistentVolumeClaim.

    A Pod references a PVC by name; based on the requested capacity (and access modes), the PVC automatically binds to a PV whose capacity is greater than or equal to the request.

    Create several PVs

    [root@master volumes]# mkdir /data/volumes/v{1,2,3,4,5} -p
    [root@master volume]# cat pv-demo.yaml 
    apiVersion: v1
    kind: PersistentVolume              # resource kind
    metadata:
      name: pv001                       # PV name
      labels:
        name: pv001                     # label
    spec:
      nfs:
        path: /data/volumes/v1          # directory on the NFS server backing this PV
        server: 10.0.20.20
      accessModes: ["ReadWriteMany", "ReadWriteOnce"]   # access modes: RWX=ReadWriteMany, RWO=ReadWriteOnce
      capacity:
        storage: 2Gi                    # PV capacity
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata: 
      name: pv002
      labels:
        name: pv002
    spec:
      nfs:
        path: /data/volumes/v2
        server: 10.0.20.20
      accessModes: ["ReadWriteMany"]
      capacity:
        storage: 5Gi
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata: 
      name: pv003
      labels:
        name: pv003
    spec:
      nfs:
        path: /data/volumes/v3
        server: 10.0.20.20
      accessModes: ["ReadWriteMany", "ReadWriteOnce"]
      capacity:
        storage: 20Gi
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata: 
      name: pv004
      labels:
        name: pv004
    spec:
      nfs:
        path: /data/volumes/v4
        server: 10.0.20.20
      accessModes: ["ReadWriteMany", "ReadWriteOnce"]
      capacity:
        storage: 10Gi
    ---
    apiVersion: v1
    kind: PersistentVolume
    metadata: 
      name: pv005
      labels:
        name: pv005
    spec:
      nfs:
        path: /data/volumes/v5
        server: 10.0.20.20
      accessModes: ["ReadWriteMany", "ReadWriteOnce"]
      capacity:
        storage: 10Gi
    

    Create and check

    [root@master volume]# kubectl apply -f pv-demo.yaml 
    persistentvolume/pv001 created
    persistentvolume/pv002 created
    persistentvolume/pv003 created
    persistentvolume/pv004 created
    persistentvolume/pv005 created
    [root@master volume]# kubectl get pv -o wide
    NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE   VOLUMEMODE
    pv001   2Gi        RWO,RWX        Retain           Available                                   51s   Filesystem
    pv002   5Gi        RWX            Retain           Available                                   29s   Filesystem
    pv003   20Gi       RWO,RWX        Retain           Available                                   29s   Filesystem
    pv004   10Gi       RWO,RWX        Retain           Available                                   29s   Filesystem
    pv005   10Gi       RWO,RWX        Retain           Available                                   51s   Filesystem
    

    Create a test Pod and PVC

    [root@master volume]# cat pod-vol-pvc.yaml 
    apiVersion: v1
    kind: PersistentVolumeClaim     # PVC resource
    metadata:
      name: mypvc
      namespace: default            # namespace
    spec:
      accessModes: ["ReadWriteMany"]    # access modes
      resources:
        requests:
          storage: 6Gi              # requested storage size
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-vol-pvc
      namespace: default
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html/
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: mypvc
    

    Create them and check the PVC and PV status

    [root@master volume]# kubectl apply -f pod-vol-pvc.yaml 
    persistentvolumeclaim/mypvc created
    pod/pod-vol-pvc created
    [root@master volume]# kubectl get pods
    NAME          READY   STATUS    RESTARTS   AGE
    pod-vol-pvc   1/1     Running   0          3s
    [root@master volume]# kubectl get pvc
    NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    mypvc   Bound    pv005    10Gi       RWO,RWX                       36s          # the PVC requested 6Gi with RWX and bound to pv005, a 10Gi PV that satisfies both
    [root@master volume]# kubectl get pv
    NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
    pv001   2Gi        RWO,RWX        Retain           Available                                           9m37s
    pv002   5Gi        RWX            Retain           Available                                           9m15s
    pv003   20Gi       RWO,RWX        Retain           Available                                           9m15s
    pv004   10Gi       RWO,RWX        Retain           Available                                           9m15s
    pv005   10Gi       RWO,RWX        Retain           Bound       default/mypvc                           9m37s        # status is now Bound, and the reclaim policy is Retain
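
    With the Retain policy, a PV keeps its data after its PVC is deleted and moves to the Released state; it must be cleaned up and made Available manually before it can be bound again. As a sketch, the policy can also be changed after creation (note that a Delete policy only reclaims automatically if the storage plugin supports it; a statically provisioned NFS PV like these generally still needs manual cleanup):

    [root@master volume]# kubectl patch pv pv005 -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'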
    