SUSE Ceph iSCSI Gateway Management

    The iSCSI gateway integrates Ceph storage with the iSCSI standard to provide a highly available (HA) iSCSI target that exports RADOS Block Device (RBD) images as SCSI disks. The iSCSI protocol allows clients (initiators) to send SCSI commands to SCSI storage devices (targets) over a TCP/IP network, which lets heterogeneous clients access the Ceph storage cluster.

    Each iSCSI gateway runs the Linux I/O Target kernel subsystem (LIO) to provide iSCSI protocol support. LIO uses the userspace passthrough framework (TCMU) to interact with Ceph's librbd library and expose RBD images to iSCSI clients. With Ceph's iSCSI gateway you can run a fully integrated block-storage infrastructure with all the features and benefits of a conventional Storage Area Network (SAN).

    Is RBD supported as a VMware ESXi datastore?

    (1) Not at present; RBD cannot be consumed directly as a datastore.

    (2) iSCSI, however, does work as a datastore and can provide storage to VMware ESXi virtual machines; it is a very cost-effective option.

    1. Create a pool and images

    (1) Create the pool

    # ceph osd pool create iscsi-images 128 128 replicated
    # ceph osd pool application enable iscsi-images rbd

    (2) Create the images

    # rbd --pool iscsi-images create --size=2048 'iscsi-gateway-image001'
    # rbd --pool iscsi-images create --size=4096 'iscsi-gateway-image002'
    # rbd --pool iscsi-images create --size=2048 'iscsi-gateway-image003'
    # rbd --pool iscsi-images create --size=4096 'iscsi-gateway-image004'

    (3) List the images

    # rbd ls -p iscsi-images
    iscsi-gateway-image001
    iscsi-gateway-image002
    iscsi-gateway-image003
    iscsi-gateway-image004
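
    As a quick sanity check (not part of the original procedure), rbd info shows each image's size, object layout, and enabled features before it is exported:

    # rbd info iscsi-images/iscsi-gateway-image001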

    2. Install the iSCSI gateway with DeepSea

    (1) To install on nodes node001 and node002, edit the policy.cfg file

    vim /srv/pillar/ceph/proposals/policy.cfg
      ......
    # IGW
    role-igw/cluster/node00[1-2]*.sls
      ......

    (2) Run stage 2 and stage 4

    # salt-run state.orch ceph.stage.2
    # salt 'node001*' pillar.items
        public_network:
            192.168.2.0/24
        roles:
            - mon
            - mgr
            - storage
            - igw
        time_server:
            admin.example.com
    # salt-run state.orch ceph.stage.4
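
    To confirm that the igw role landed on the intended minions, a pillar-match ping (an optional check, assuming standard Salt targeting) should answer from node001 and node002:

    # salt -I 'roles:igw' test.ping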

    3. Install the iSCSI gateway manually

    (1) Install the iSCSI packages on node003

    # zypper -n in -t pattern ceph_iscsi
    # zypper -n in tcmu-runner tcmu-runner-handler-rbd \
        ceph-iscsi patterns-ses-ceph_iscsi python3-Flask python3-click python3-configshell-fb \
        python3-itsdangerous python3-netifaces python3-rtslib-fb \
        python3-targetcli-fb python3-urwid targetcli-fb-common

    (2) On the admin node, create the key and copy it to node003

    # ceph auth add client.igw.node003 mon 'allow *' osd 'allow *' mgr 'allow r'
    # ceph auth get client.igw.node003
    client.igw.node003
            key: AQC0eotdAAAAABAASZrZH9KEo0V0WtFTCW9AHQ==
            caps: [mgr] allow r
            caps: [mon] allow *
            caps: [osd] allow *
    # ceph auth get client.igw.node003 >> /etc/ceph/ceph.client.igw.node003.keyring
    # scp /etc/ceph/ceph.client.igw.node003.keyring node003:/etc/ceph
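
    To verify that node003 can authenticate with the new identity, a status call using that keyring (a sanity check, not in the original steps) should succeed:

    node003:~ # ceph -n client.igw.node003 --keyring /etc/ceph/ceph.client.igw.node003.keyring status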

    (3) Start the service on node003

    # systemctl start tcmu-runner.service
    # systemctl enable tcmu-runner.service 

    (4) Create the configuration file on node003

    # vim /etc/ceph/iscsi-gateway.cfg
    [config]
    cluster_client_name = client.igw.node003
    pool = iscsi-images
    trusted_ip_list = 192.168.2.42,192.168.2.40,192.168.2.41
    minimum_gateways = 1
    fqdn_enabled = true
    
    # Additional API configuration options are as follows, defaults shown.
    api_port = 5000
    api_user = admin
    api_password = admin
    api_secure = false
    
    # Log level
    logger_level = WARNING

    (5) Start the RBD target service

    # systemctl start rbd-target-api.service
    # systemctl enable rbd-target-api.service
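
    Once the service is up, its REST endpoint can be probed directly; assuming the api_user/api_password from the config above and that the gateway exposes /api/config (as recent ceph-iscsi releases do), a request like this returns the current gateway configuration as JSON:

    # curl --user admin:admin -X GET http://192.168.2.42:5000/api/config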

    (6) Display the configuration

    # gwcli info
    HTTP mode          : http
    Rest API port      : 5000
    Local endpoint     : http://localhost:5000/api
    Local Ceph Cluster : ceph
    2ndary API IP's    : 192.168.2.42,192.168.2.40,192.168.2.41
    # gwcli ls
    o- / ...................................................................... [...]
      o- cluster ...................................................... [Clusters: 1]
      | o- ceph ......................................................... [HEALTH_OK]
      |   o- pools ....................................................... [Pools: 1]
      |   | o- iscsi-images ........ [(x3), Commit: 0.00Y/15718656K (0%), Used: 192K]
      |   o- topology ............................................. [OSDs: 6,MONs: 3]
      o- disks .................................................... [0.00Y, Disks: 0]
      o- iscsi-targets ............................ [DiscoveryAuth: None, Targets: 0]

    4. Add the iSCSI gateways in the Dashboard

    (1) On the admin node, list the Dashboard's iSCSI gateways

    admin:~ # ceph dashboard iscsi-gateway-list
    {"gateways": {"node002.example.com": {"service_url": "http://admin:admin@192.168.2.41:5000"},
     "node001.example.com": {"service_url": "http://admin:admin@192.168.2.40:5000"}}}
    

    (2) Add the iSCSI gateway

    # ceph dashboard iscsi-gateway-add http://admin:admin@192.168.2.42:5000
    # ceph dashboard iscsi-gateway-list      
    {"gateways": {"node002.example.com": {"service_url": "http://admin:admin@192.168.2.41:5000"},
     "node001.example.com": {"service_url": "http://admin:admin@192.168.2.40:5000"},
     "node003.example.com": {"service_url": "http://admin:admin@192.168.2.42:5000"}}}  

    (3) Log in to the Dashboard to view the iSCSI gateways

    5. Export RBD images via iSCSI

    (1) Create the iSCSI target name

    # gwcli
    /> cd /iscsi-targets
    /iscsi-targets> create iqn.2019-10.com.suse-iscsi.iscsi01.x86:iscsi-gateway01

    (2) Add the iSCSI gateways

    /iscsi-targets> cd iqn.2019-10.com.suse-iscsi.iscsi01.x86:iscsi-gateway01/gateways
    /iscsi-target...tvol/gateways> create node001.example.com 172.200.50.40
    /iscsi-target...tvol/gateways> create node002.example.com 172.200.50.41
    /iscsi-target...tvol/gateways> create node003.example.com 172.200.50.42

    /iscsi-target...ay01/gateways> ls
    o- gateways ......................................................... [Up: 3/3, Portals: 3]
      o- node001.example.com ............................................. [172.200.50.40 (UP)]
      o- node002.example.com ............................................. [172.200.50.41 (UP)]
      o- node003.example.com ............................................. [172.200.50.42 (UP)]

    Note: gateways must be created using the hostname exactly as the system reports it, and the first gateway defined must be the local machine; creating another node first fails:

    /iscsi-target...tvol/gateways> create node002 172.200.50.41
    The first gateway defined must be the local machine

    (3) Attach the RBD images

    /iscsi-target...tvol/gateways> cd /disks
    /disks> attach iscsi-images/iscsi-gateway-image001
    /disks> attach iscsi-images/iscsi-gateway-image002

    (4) Map the RBD images to the target

    /disks> cd /iscsi-targets/iqn.2019-10.com.suse-iscsi.iscsi01.x86:iscsi-gateway01/disks
    /iscsi-target...teway01/disks> add iscsi-images/iscsi-gateway-image001
    /iscsi-target...teway01/disks> add iscsi-images/iscsi-gateway-image002

    (5) Disable authentication

    /> cd /iscsi-targets/iqn.2019-10.com.suse-iscsi.iscsi01.x86:iscsi-gateway01/hosts
    /iscsi-target...teway01/hosts> auth disable_acl
    /iscsi-target...teway01/hosts> exit
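
    disable_acl is convenient in a lab but lets any initiator attach; for per-initiator access with CHAP, a host entry would instead be created roughly as follows (the initiator IQN, username, and password below are placeholders):

    /iscsi-target...teway01/hosts> create iqn.1996-04.de.suse:01:example-client
    /iscsi-target...example-client> auth username=myiscsiuser password=myiscsipass12
    /iscsi-target...example-client> disk add iscsi-images/iscsi-gateway-image001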

    (6) View the configuration

    node001:~ # gwcli ls
    o- / ............................................................................... [...]
      o- cluster ............................................................... [Clusters: 1]
      | o- ceph .................................................................. [HEALTH_OK]
      |   o- pools ................................................................ [Pools: 1]
      |   | o- iscsi-images .................. [(x3), Commit: 6G/15717248K (40%), Used: 1152K]
      |   o- topology ...................................................... [OSDs: 6,MONs: 3]
      o- disks ................................................................ [6G, Disks: 2]
      | o- iscsi-images .................................................. [iscsi-images (6G)]
      |   o- iscsi-gateway-image001 ............... [iscsi-images/iscsi-gateway-image001 (2G)]
      |   o- iscsi-gateway-image002 ............... [iscsi-images/iscsi-gateway-image002 (4G)]
      o- iscsi-targets ..................................... [DiscoveryAuth: None, Targets: 1]
        o- iqn.2019-10.com.suse-iscsi.iscsi01.x86:iscsi-gateway01 .............. [Gateways: 3]
          o- disks ................................................................ [Disks: 2]
          | o- iscsi-images/iscsi-gateway-image001 .............. [Owner: node001.example.com]
          | o- iscsi-images/iscsi-gateway-image002 .............. [Owner: node002.example.com]
          o- gateways .................................................. [Up: 3/3, Portals: 3]
          | o- node001.example.com ...................................... [172.200.50.40 (UP)]
          | o- node002.example.com ...................................... [172.200.50.41 (UP)]
          | o- node003.example.com ...................................... [172.200.50.42 (UP)]
          o- host-groups ........................................................ [Groups : 0]
          o- hosts .................................................... [Hosts: 0: Auth: None]

    6. Export RBD images from the Dashboard

    (1) Add an iSCSI target

    (2) Enter the target IQN, then add the portals and images

    (3) View the newly added iSCSI target

    7. Linux client access

    (1) Start the iscsid service

    • SLES or RHEL
    # systemctl start iscsid.service
    # systemctl enable iscsid.service 
    • Debian or Ubuntu
    # systemctl start open-iscsi
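
    The client's initiator IQN, which you would need when configuring per-host ACLs instead of disabling them, is read from the standard open-iscsi configuration file:

    # cat /etc/iscsi/initiatorname.iscsi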

    (2) Discover and connect to targets

    # iscsiadm -m discovery -t st -p 172.200.50.40
    172.200.50.40:3260,1 iqn.2019-10.com.suse-iscsi.iscsi01.x86:iscsi-gateway01
    172.200.50.41:3260,2 iqn.2019-10.com.suse-iscsi.iscsi01.x86:iscsi-gateway01
    172.200.50.42:3260,3 iqn.2019-10.com.suse-iscsi.iscsi01.x86:iscsi-gateway01
    172.200.50.40:3260,1 iqn.2019-10.com.suse-iscsi.iscsi01.x86:iscsi-gateway02
    172.200.50.41:3260,2 iqn.2019-10.com.suse-iscsi.iscsi01.x86:iscsi-gateway02
    172.200.50.42:3260,3 iqn.2019-10.com.suse-iscsi.iscsi01.x86:iscsi-gateway02

    (3) Log in to the targets

    # iscsiadm -m node -p 172.200.50.40 --login
    # iscsiadm -m node -p 172.200.50.41 --login
    # iscsiadm -m node -p 172.200.50.42 --login
    # lsblk
    NAME            MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda               8:0    0    25G  0 disk
    ├─sda1            8:1    0   509M  0 part /boot
    └─sda2            8:2    0  24.5G  0 part
      ├─vg00-lvswap 254:0    0     2G  0 lvm  [SWAP]
      └─vg00-lvroot 254:1    0 122.5G  0 lvm  /
    sdb               8:16   0   100G  0 disk
    └─vg00-lvroot   254:1    0 122.5G  0 lvm  /
    sdc               8:32   0     2G  0 disk
    sdd               8:48   0     2G  0 disk
    sde               8:64   0     4G  0 disk
    sdf               8:80   0     4G  0 disk
    sdg               8:96   0     2G  0 disk
    sdh               8:112  0     4G  0 disk
    sdi               8:128  0     2G  0 disk
    sdj               8:144  0     4G  0 disk
    sdk               8:160  0     2G  0 disk
    sdl               8:176  0     2G  0 disk
    sdm               8:192  0     4G  0 disk
    sdn               8:208  0     4G  0 disk

    (4) If the lsscsi utility is installed on the system, you can use it to enumerate the available SCSI devices:

    # lsscsi
    [1:0:0:0]    cd/dvd  NECVMWar VMware SATA CD01 1.00  /dev/sr0
    [30:0:0:0]   disk    VMware,  VMware Virtual S 1.0   /dev/sda
    [30:0:1:0]   disk    VMware,  VMware Virtual S 1.0   /dev/sdb
    [33:0:0:0]   disk    SUSE     RBD              4.0   /dev/sdc
    [33:0:0:1]   disk    SUSE     RBD              4.0   /dev/sde
    [34:0:0:2]   disk    SUSE     RBD              4.0   /dev/sdd
    [34:0:0:3]   disk    SUSE     RBD              4.0   /dev/sdf
    [35:0:0:0]   disk    SUSE     RBD              4.0   /dev/sdg
    [35:0:0:1]   disk    SUSE     RBD              4.0   /dev/sdh
    [36:0:0:2]   disk    SUSE     RBD              4.0   /dev/sdi
    [36:0:0:3]   disk    SUSE     RBD              4.0   /dev/sdj
    [37:0:0:0]   disk    SUSE     RBD              4.0   /dev/sdk
    [37:0:0:1]   disk    SUSE     RBD              4.0   /dev/sdm
    [38:0:0:2]   disk    SUSE     RBD              4.0   /dev/sdl
    [38:0:0:3]   disk    SUSE     RBD              4.0   /dev/sdn 

    (5) Set up multipathing

    # zypper in multipath-tools
    # modprobe dm-multipath
    # systemctl start multipathd.service
    # systemctl enable multipathd.service
    # multipath -ll
    36001405863b0b3975c54c5f8d1ce0e01 dm-3 SUSE,RBD
    size=4.0G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
    |-+- policy='service-time 0' prio=50 status=active
    | `- 35:0:0:1 sdh 8:112 active ready running  <=== only one active path
    `-+- policy='service-time 0' prio=10 status=enabled
      |- 33:0:0:1 sde 8:64  active ready running
      `- 37:0:0:1 sdm 8:192 active ready running
    3600140529260bf41c294075beede0c21 dm-2 SUSE,RBD
    size=2.0G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
    |-+- policy='service-time 0' prio=50 status=active
    | `- 33:0:0:0 sdc 8:32  active ready running
    `-+- policy='service-time 0' prio=10 status=enabled
      |- 35:0:0:0 sdg 8:96  active ready running
      `- 37:0:0:0 sdk 8:160 active ready running
    360014055d00387c82104d338e81589cb dm-4 SUSE,RBD
    size=2.0G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
    |-+- policy='service-time 0' prio=50 status=active
    | `- 38:0:0:2 sdl 8:176 active ready running
    `-+- policy='service-time 0' prio=10 status=enabled
      |- 34:0:0:2 sdd 8:48  active ready running
      `- 36:0:0:2 sdi 8:128 active ready running
    3600140522ec3f9612b64b45aa3e72d9c dm-5 SUSE,RBD
    size=4.0G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
    |-+- policy='service-time 0' prio=50 status=active
    | `- 34:0:0:3 sdf 8:80  active ready running
    `-+- policy='service-time 0' prio=10 status=enabled
      |- 36:0:0:3 sdj 8:144 active ready running
      `- 38:0:0:3 sdn 8:208 active ready running

    (6) Edit the multipath configuration file

    # vim /etc/multipath.conf
    defaults {
        user_friendly_names yes
    }
    
    devices {
        device {
            vendor "(LIO-ORG|SUSE)"
            product "RBD"
            path_grouping_policy "multibus" # all valid paths go into a single priority group
            path_checker "tur"              # issue TEST UNIT READY to check each path
            features "0"
            hardware_handler "1 alua"       # module that runs hardware-specific actions on path-group switches and I/O errors
            prio "alua"
            failback "immediate"
            rr_weight "uniform"             # all paths get the same weight
            no_path_retry 12                # after a path failure, retry 12 times, once every 5 seconds
            rr_min_io 100                   # number of I/O requests routed to a path before switching to the next one in the group
        }
    }
    # systemctl stop multipathd.service
    # systemctl start multipathd.service

    (7) Check the multipath status

    # multipath -ll                     
    mpathd (3600140522ec3f9612b64b45aa3e72d9c) dm-5 SUSE,RBD
    size=4.0G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
    `-+- policy='service-time 0' prio=23 status=active
      |- 34:0:0:3 sdf 8:80  active ready running  <=== multiple active paths
      |- 36:0:0:3 sdj 8:144 active ready running
      `- 38:0:0:3 sdn 8:208 active ready running
    mpathc (360014055d00387c82104d338e81589cb) dm-4 SUSE,RBD
    size=2.0G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
    `-+- policy='service-time 0' prio=23 status=active
      |- 34:0:0:2 sdd 8:48  active ready running
      |- 36:0:0:2 sdi 8:128 active ready running
      `- 38:0:0:2 sdl 8:176 active ready running
    mpathb (36001405863b0b3975c54c5f8d1ce0e01) dm-3 SUSE,RBD
    size=4.0G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
    `-+- policy='service-time 0' prio=23 status=active
      |- 33:0:0:1 sde 8:64  active ready running
      |- 35:0:0:1 sdh 8:112 active ready running
      `- 37:0:0:1 sdm 8:192 active ready running
    mpatha (3600140529260bf41c294075beede0c21) dm-2 SUSE,RBD
    size=2.0G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
    `-+- policy='service-time 0' prio=23 status=active
      |- 33:0:0:0 sdc 8:32  active ready running
      |- 35:0:0:0 sdg 8:96  active ready running
      `- 37:0:0:0 sdk 8:160 active ready running

    (8) Show the current device-mapper topology

    # dmsetup ls --tree
    mpathd (254:5)
     ├─ (8:208)
     ├─ (8:144)
     └─ (8:80)
    mpathc (254:4)
     ├─ (8:176)
     ├─ (8:128)
     └─ (8:48)
    mpathb (254:3)
     ├─ (8:192)
     ├─ (8:112)
     └─ (8:64)
    mpatha (254:2)
     ├─ (8:160)
     ├─ (8:96)
     └─ (8:32)
    vg00-lvswap (254:0)
     └─ (8:2)
    vg00-lvroot (254:1)
     ├─ (8:16)
     └─ (8:2)
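
    From here the multipath devices behave like ordinary block devices; a minimal usage sketch (assuming mpatha is one of the 2 G LUNs and /mnt/iscsi is unused) would be:

    # mkfs.xfs /dev/mapper/mpatha
    # mkdir -p /mnt/iscsi
    # mount /dev/mapper/mpatha /mnt/iscsi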

    (9) Inspect from the client with the yast iscsi-client tool

    Other common iSCSI operations (client side)

    (1) List all targets

    # iscsiadm -m node

    (2) Log in to all targets

    # iscsiadm -m node -L all

    (3) Log in to a specific target

    # iscsiadm -m node -T iqn.... -p 172.29.88.62 --login

    (4) View a node's stored configuration with the following command

    # iscsiadm -m node -o show -T iqn.2000-01.com.synology:rackstation.exservice-bak

    (5) Check the current iSCSI session state

    # iscsiadm -m session
    iscsiadm: No active sessions.

    (no iSCSI sessions are currently established)
    (6) Log out of all targets

    # iscsiadm -m node -U all

    (7) Log out of a specific target

    # iscsiadm -m node -T iqn... -p 172.29.88.62 --logout

    (8) Delete all node records

    # iscsiadm -m node --op delete

    (9) Delete a specific node record (under /var/lib/iscsi/nodes; log out of its session first)

    # iscsiadm -m node -o delete -T iqn.2012-01.cn.nayun:test-01

    (10) Delete a discovery record (under /var/lib/iscsi/send_targets)

    # iscsiadm --mode discovery -o delete -p 172.29.88.62:3260
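
    To restore sessions automatically at boot, a node's startup mode can be switched to automatic (shown for one target; adjust the IQN and portal to your environment):

    # iscsiadm -m node -T iqn.2019-10.com.suse-iscsi.iscsi01.x86:iscsi-gateway01 \
        -p 172.200.50.40 --op update -n node.startup -v automatic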

     
