  • Using OpenFiler to simulate storage: configuring ASM shared disks and multipath for RAC

    Chapter 1  Overview

     

    I previously published "Oracle_lhr_RAC 12cR1 Installation" (《Oracle_lhr_RAC 12cR1安装》), but the storage there did not use multipath; it used the storage provided by VMware itself. So, as the last task before the new year, I studied multipath. This article covers the configuration of OpenFiler, iSCSI, and multipath.

    This article covers the configuration of OpenFiler, iSCSI, and multipath.


     

    Chapter 2  Installing OpenFiler

    OpenFiler is built on rPath Linux and is distributed as a standalone Linux operating system. It is an excellent open-source, free storage-management OS that administers storage disks through a web interface and supports today's popular IP-SAN/NAS network storage technologies and protocols such as iSCSI, NFS, SMB/CIFS, and FTP.

    The software required for this OpenFiler installation is listed below:

    No.   Type        Content

    1     openfiler   openfileresa-2.99.1-x86_64-disc1.iso

    Note: 小麦苗 has uploaded this software to Tencent Weiyun (http://blog.itpub.net/26736162/viewspace-1624453/), where readers can download it. In addition, a ready-installed virtual machine, with the rlwrap software already integrated, has been uploaded to the cloud drive.

    2.1  Installation

    The detailed installation is not captured screenshot by screenshot here; step-by-step walkthroughs are already available online. Setting OpenFiler's memory to 1 GB, or even a bit less, is fine; choose the IDE disk format; and since multipath will be configured later, install 2 network adapters. After installation completes and the system reboots, the screen looks like this:


     

    Note that the URL shown in the box can be opened directly in a browser. The root user can log in for user maintenance, but storage administration can only be done as the openfiler user. OpenFiler is administered remotely through its web interface; here the management address is https://192.168.59.200:446. The initial administrative username is openfiler (lowercase) and the password is password, which can be changed after logging in.


     

    2.2  Basic Configuration

    2.2.1  Network Adapter Configuration


    Configure static IP addresses:

    [root@OFLHR ~]# more /etc/sysconfig/network-scripts/ifcfg-eth0

    # Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE]

    DEVICE=eth0

    BOOTPROTO=static

    BROADCAST=192.168.59.255

    HWADDR=00:0C:29:98:1A:CD

    IPADDR=192.168.59.200

    NETMASK=255.255.255.0

    NETWORK=192.168.59.0

    ONBOOT=yes

    [root@OFLHR ~]# more /etc/sysconfig/network-scripts/ifcfg-eth1

    DEVICE=eth1

    MTU=1500

    USERCTL=no

    ONBOOT=yes

    BOOTPROTO=static

    IPADDR=192.168.2.200

    NETMASK=255.255.255.0

    HWADDR=00:0C:29:98:1A:D7

    [root@OFLHR ~]# ip a

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue

        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

        inet 127.0.0.1/8 scope host lo

        inet6 ::1/128 scope host

           valid_lft forever preferred_lft forever

    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000

        link/ether 00:0c:29:98:1a:cd brd ff:ff:ff:ff:ff:ff

        inet 192.168.59.200/24 brd 192.168.59.255 scope global eth0

        inet6 fe80::20c:29ff:fe98:1acd/64 scope link

           valid_lft forever preferred_lft forever

    3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000

        link/ether 00:0c:29:98:1a:d7 brd ff:ff:ff:ff:ff:ff

        inet 192.168.2.200/24 brd 192.168.2.255 scope global eth1

        inet6 fe80::20c:29ff:fe98:1ad7/64 scope link

           valid_lft forever preferred_lft forever

    [root@OFLHR ~]#

     

     

    2.2.2  Adding a Disk

    Add a 100 GB IDE-format disk to serve as storage.


    [root@OFLHR ~]# fdisk -l

     

    Disk /dev/sda: 10.7 GB, 10737418240 bytes

    255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors

    Units = sectors of 1 * 512 = 512 bytes

    Sector size (logical/physical): 512 bytes / 512 bytes

    I/O size (minimum/optimal): 512 bytes / 512 bytes

    Disk identifier: 0x000adc2c

     

       Device Boot      Start         End      Blocks   Id  System

    /dev/sda1   *          63      610469      305203+  83  Linux

    /dev/sda2          610470    17382329     8385930   83  Linux

    /dev/sda3        17382330    19486844     1052257+  82  Linux swap / Solaris

     

    Disk /dev/sdb: 107.4 GB, 107374182400 bytes

    255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors

    Units = sectors of 1 * 512 = 512 bytes

    Sector size (logical/physical): 512 bytes / 512 bytes

    I/O size (minimum/optimal): 512 bytes / 512 bytes

    Disk identifier: 0x00000000

     

    Disk /dev/sdb doesn't contain a valid partition table

    [root@OFLHR ~]#

     

     

    2.3  iSCSI Target Configuration

    The openfiler server has two disks: the 10 GB disk already holds the openfiler operating system, and the 100 GB disk will be used for data storage.

    2.3.1  Creating Logical Volumes

     

    Login URL: https://192.168.59.200:446

    Initial username and password: openfiler/password

     

    In standalone storage devices, the LUN (Logical Unit Number) is the most important basic unit. A LUN can be accessed by any host in the SAN, whether through an HBA or via iSCSI. Even with software-based iSCSI, a LUN can be accessed from different operating systems: after the OS boots, a software iSCSI initiator reaches the LUN. In OpenFiler, a LUN is called a Logical Volume (LV), so creating a LUN in OpenFiler means creating an LV.

    Once OpenFiler is installed, the next step is to share its disks out to virtual machines or other hosts on the network. In a standard SAN this can be done at the RAID level, but the benefits and flexibility of a VG go beyond what RAID can offer. Let's see how a VG is created, step by step, in OpenFiler.

     Steps to create a VG:

    1) Enter the OpenFiler interface and select the physical disk to use.

    2) Format the physical disk to be added as a Physical Volume (PV).

    3) Create a VG and add the PV-formatted physical disk to it.

    4) Once added, the disks form one large VG, which the system treats as a single large physical disk.

    5) Create logical partitions (LUNs) inside the VG; in OpenFiler these are called Logical Volumes.

    6) Specify the LUN's format, such as iSCSI, ext3, or NFS, and format it.

    7) If the format is iSCSI, further configuration is required; other formats can simply be shared out in NAS fashion.
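    Behind the web UI, these steps map onto standard LVM commands. The following is a minimal sketch of the CLI equivalent, assuming the disk is /dev/sdb and reusing the vmlhr/lv01 names that appear later in this article; OpenFiler itself performs these operations for you.

    ```shell
    # CLI equivalent of steps 1-5 above (run as root on the storage host).
    parted -s /dev/sdb mklabel gpt                # label the disk (fdisk later reports GPT)
    parted -s /dev/sdb mkpart primary 1MiB 100%   # one partition covering the whole disk
    pvcreate /dev/sdb1                            # step 2: format as a Physical Volume
    vgcreate vmlhr /dev/sdb1                      # steps 3-4: create the VG from the PV
    lvcreate -L 10G -n lv01 vmlhr                 # step 5: a 10 GB Logical Volume (LUN)
    ```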

     

     

    After logging in, click the Volumes tab.



    Click create new physical volumes, then click /dev/sdb.


    In the bottom-right corner of the page click Reset, then click Create. The partition type is Physical volume.


    Click Volume Groups.


    Enter a name, tick the checkbox, and click Add volume group.


    [root@OFLHR ~]# fdisk -l

     

    Disk /dev/sda: 10.7 GB, 10737418240 bytes

    255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors

    Units = sectors of 1 * 512 = 512 bytes

    Sector size (logical/physical): 512 bytes / 512 bytes

    I/O size (minimum/optimal): 512 bytes / 512 bytes

    Disk identifier: 0x000adc2c

     

       Device Boot      Start         End      Blocks   Id  System

    /dev/sda1   *          63      610469      305203+  83  Linux

    /dev/sda2          610470    17382329     8385930   83  Linux

    /dev/sda3        17382330    19486844     1052257+  82  Linux swap / Solaris

     

    WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

     

     

    Disk /dev/sdb: 107.4 GB, 107374182400 bytes

    255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors

    Units = sectors of 1 * 512 = 512 bytes

    Sector size (logical/physical): 512 bytes / 512 bytes

    I/O size (minimum/optimal): 512 bytes / 512 bytes

    Disk identifier: 0x00000000

     

       Device Boot      Start         End      Blocks   Id  System

    /dev/sdb1               1   209715199   104857599+  ee  GPT

    [root@OFLHR ~]# pvs

      PV         VG    Fmt  Attr PSize  PFree

      /dev/sdb1  vmlhr lvm2 a-   95.34g 95.34g

    [root@OFLHR ~]#

     

     

    Click Add Volume.


     

    Fill in the fields, set the volume size to 10 GB, and choose the volume type block (iSCSI, FC, etc).


    Create 4 logical volumes in this way, one after another:


    [root@OFLHR ~]# vgs

      VG    #PV #LV #SN Attr   VSize  VFree

      vmlhr   1   4   0 wz--n- 95.34g 55.34g

    [root@OFLHR ~]# pvs

      PV         VG    Fmt  Attr PSize  PFree

      /dev/sdb1  vmlhr lvm2 a-   95.34g 55.34g

    [root@OFLHR ~]# lvs

      LV   VG    Attr   LSize  Origin Snap%  Move Log Copy%  Convert

      lv01 vmlhr -wi-a- 10.00g                                     

      lv02 vmlhr -wi-a- 10.00g                                     

      lv03 vmlhr -wi-a- 10.00g                                     

      lv04 vmlhr -wi-a- 10.00g                                     

    [root@OFLHR ~]# fdisk -l

     

    Disk /dev/sda: 10.7 GB, 10737418240 bytes

    255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors

    Units = sectors of 1 * 512 = 512 bytes

    Sector size (logical/physical): 512 bytes / 512 bytes

    I/O size (minimum/optimal): 512 bytes / 512 bytes

    Disk identifier: 0x000adc2c

     

       Device Boot      Start         End      Blocks   Id  System

    /dev/sda1   *          63      610469      305203+  83  Linux

    /dev/sda2          610470    17382329     8385930   83  Linux

    /dev/sda3        17382330    19486844     1052257+  82  Linux swap / Solaris

     

    WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.

     

     

    Disk /dev/sdb: 107.4 GB, 107374182400 bytes

    255 heads, 63 sectors/track, 13054 cylinders, total 209715200 sectors

    Units = sectors of 1 * 512 = 512 bytes

    Sector size (logical/physical): 512 bytes / 512 bytes

    I/O size (minimum/optimal): 512 bytes / 512 bytes

    Disk identifier: 0x00000000

     

       Device Boot      Start         End      Blocks   Id  System

    /dev/sdb1               1   209715199   104857599+  ee  GPT

     

    Disk /dev/dm-0: 10.7 GB, 10737418240 bytes

    255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors

    Units = sectors of 1 * 512 = 512 bytes

    Sector size (logical/physical): 512 bytes / 512 bytes

    I/O size (minimum/optimal): 512 bytes / 512 bytes

    Disk identifier: 0x00000000

     

    Disk /dev/dm-0 doesn't contain a valid partition table

     

    Disk /dev/dm-1: 10.7 GB, 10737418240 bytes

    255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors

    Units = sectors of 1 * 512 = 512 bytes

    Sector size (logical/physical): 512 bytes / 512 bytes

    I/O size (minimum/optimal): 512 bytes / 512 bytes

    Disk identifier: 0x00000000

     

    Disk /dev/dm-1 doesn't contain a valid partition table

     

    Disk /dev/dm-2: 10.7 GB, 10737418240 bytes

    255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors

    Units = sectors of 1 * 512 = 512 bytes

    Sector size (logical/physical): 512 bytes / 512 bytes

    I/O size (minimum/optimal): 512 bytes / 512 bytes

    Disk identifier: 0x00000000

     

    Disk /dev/dm-2 doesn't contain a valid partition table

     

    Disk /dev/dm-3: 10.7 GB, 10737418240 bytes

    255 heads, 63 sectors/track, 1305 cylinders, total 20971520 sectors

    Units = sectors of 1 * 512 = 512 bytes

    Sector size (logical/physical): 512 bytes / 512 bytes

    I/O size (minimum/optimal): 512 bytes / 512 bytes

    Disk identifier: 0x00000000

     

    Disk /dev/dm-3 doesn't contain a valid partition table

    [root@OFLHR ~]#

     

    2.3.2  Enabling the iSCSI Target Service


    On the Services tab, set iSCSI Target to Enable and start the service with Start.

     

    2.3.3  LUN Mapping


    Go back to the Volumes tab and click iSCSI Targets.


    Click Add.

    Select the LUN Mapping tab and click Map.


    2.3.4  Network ACL

    Because iSCSI runs over an IP network, the computers on the network must be allowed to access the storage over IP. Below is how OpenFiler's IP network is connected to other hosts in the same subnet.

    1. Go to System in OpenFiler and scroll straight to the bottom of the page.

    2. Under Network Access Configuration, enter a name for this network access entry, for example VM_LHR.

    3. Enter the host IP range. Note that a single host IP cannot be entered here, or nothing will be able to connect. Here we enter 192.168.59.0, meaning every host from 192.168.59.1 through 192.168.59.254 can access the storage.

    4. Select 255.255.255.0 for Netmask, choose Share in the Type drop-down list, and click the Update button.


    Click Update after making the selections.

    At this point the authorized subnet is visible in OpenFiler.

     

    Under iSCSI Targets, click the Network ACL tab.


    Set Access to Allow, then click Update.

    The storage-side configuration is now complete.

    2.3.5  /etc/initiators.deny

    Comment out the line iqn.2006-01.com.openfiler:tsn.5e423e1e4d90 ALL:

    [root@OFLHR ~]# more /etc/initiators.deny  

     

    # PLEASE DO NOT MODIFY THIS CONFIGURATION FILE!

    #       This configuration file was autogenerated

    #       by Openfiler. Any manual changes will be overwritten

    #       Generated at: Sat Jan 21 1:49:55 CST 2017

     

     

    #iqn.2006-01.com.openfiler:tsn.5e423e1e4d90 ALL

     

     

    # End of Openfiler configuration

     

    [root@OFLHR ~]#

     

     

    Chapter 3  Configuring Shared Storage for RAC

     

    3.1  Configuring iSCSI on the RAC Nodes

    iSCSI (Internet Small Computer System Interface) was developed by IBM. It is a SCSI command set for hardware devices that runs on top of the IP protocol, allowing the SCSI protocol to run over IP networks and be routed over, for example, high-speed Gigabit Ethernet. iSCSI is a newer storage technology that combines the existing SCSI interface with Ethernet so that servers can exchange data with storage devices over an IP network. It is a TCP/IP-based protocol used to establish and manage connections between IP storage devices, hosts, clients, and so on, and to build storage area networks (SANs).

    iSCSI target: the storage-device end, i.e. the device holding the disks or RAID. Nowadays a Linux host can also be turned into an iSCSI target. Its purpose is to provide "disks" for other hosts to use.

    iSCSI initiator: the client that uses the target, typically a server. In other words, a server that wants to connect to an iSCSI target must first have the iSCSI initiator functionality installed before it can use the disks provided by the target.

    3.1.1  iSCSI target

    [root@OFLHR ~]# service iscsi-target start

    Starting iSCSI target service: [  OK  ]

    [root@OFLHR ~]# more /etc/ietd.conf      

    #####   WARNING!!! - This configuration file generated by Openfiler. DO NOT MANUALLY EDIT.  ##### 

     

     

    Target iqn.2006-01.com.openfiler:tsn.5e423e1e4d90

            HeaderDigest None

            DataDigest None

            MaxConnections 1

            InitialR2T Yes

            ImmediateData No

            MaxRecvDataSegmentLength 131072

            MaxXmitDataSegmentLength 131072

            MaxBurstLength 262144

            FirstBurstLength 262144

            DefaultTime2Wait 2

            DefaultTime2Retain 20

            MaxOutstandingR2T 8

            DataPDUInOrder Yes

            DataSequenceInOrder Yes

            ErrorRecoveryLevel 0

            Lun 0 Path=/dev/vmlhr/lv01,Type=blockio,ScsiSN=22llvD-CacO-MOMA,ScsiId=22llvD-CacO-MOMA,IOMode=wt

            Lun 1 Path=/dev/vmlhr/lv02,Type=blockio,ScsiSN=BgLpy9-u7PH-csDC,ScsiId=BgLpy9-u7PH-csDC,IOMode=wt

            Lun 2 Path=/dev/vmlhr/lv03,Type=blockio,ScsiSN=38KsSC-REKL-yPgW,ScsiId=38KsSC-REKL-yPgW,IOMode=wt

            Lun 3 Path=/dev/vmlhr/lv04,Type=blockio,ScsiSN=aN5blo-NyMp-L4Jl,ScsiId=aN5blo-NyMp-L4Jl,IOMode=wt

     

     

    [root@OFLHR ~]# ps -ef|grep iscsi

    root       937     2  0 01:01 ?        00:00:00 [iscsi_eh]

    root       946     1  0 01:01 ?        00:00:00 iscsid

    root       947     1  0 01:01 ?        00:00:00 iscsid

    root     13827  1217  0 02:43 pts/1    00:00:00 grep iscsi

    [root@OFLHR ~]# cat /proc/net/iet/volume

    tid:1 name:iqn.2006-01.com.openfiler:tsn.5e423e1e4d90

            lun:0 state:0 iotype:blockio iomode:wt path:/dev/vmlhr/lv01

            lun:1 state:0 iotype:blockio iomode:wt path:/dev/vmlhr/lv02

            lun:2 state:0 iotype:blockio iomode:wt path:/dev/vmlhr/lv03

            lun:3 state:0 iotype:blockio iomode:wt path:/dev/vmlhr/lv04

    [root@OFLHR ~]# cat /proc/net/iet/session

    tid:1 name:iqn.2006-01.com.openfiler:tsn.5e423e1e4d90

    [root@OFLHR ~]#

     

     

    3.1.2  iSCSI initiator

    3.1.2.1  Installing the iSCSI initiator

    Install the iSCSI initiator on each of the two RAC nodes:

    [root@raclhr-12cR1-N1 ~]# rpm -qa|grep iscsi

    iscsi-initiator-utils-6.2.0.873-10.el6.x86_64

    [root@raclhr-12cR1-N1 ~]#

     

     

    If it is not installed, run yum install iscsi-initiator-utils* to install it.

     

    3.1.2.2  iscsiadm

    The iSCSI initiator is managed mainly through the iscsiadm command. First, query which targets the iSCSI target machine offers:

    [root@raclhr-12cR1-N1 ~]# iscsiadm --mode discovery --type sendtargets --portal 192.168.59.200

    [  OK  ] iscsid: [  OK  ]

    192.168.59.200:3260,1 iqn.2006-01.com.openfiler:tsn.5e423e1e4d90

    192.168.2.200:3260,1 iqn.2006-01.com.openfiler:tsn.5e423e1e4d90

    [root@raclhr-12cR1-N1 ~]# ps -ef|grep iscsi

    root      2619     2  0 11:32 ?        00:00:00 [iscsi_eh]

    root      2651     1  0 11:32 ?        00:00:00 iscsiuio

    root      2658     1  0 11:32 ?        00:00:00 iscsid

    root      2659     1  0 11:32 ?        00:00:00 iscsid

    root      2978 56098  0 11:33 pts/1    00:00:00 grep iscsi

    [root@raclhr-12cR1-N1 ~]#

     

     

    This step reveals the number and name of the iSCSI target created on the server side. For this command, just remember that -p is followed by the address of the iSCSI service; a hostname works too. 3260 is the service's port number, the default.

    Then log in to a target. Once the login succeeds, all disks under that target become available locally:

    [root@raclhr-12cR1-N1 ~]# fdisk -l | grep dev

    Disk /dev/sda: 21.5 GB, 21474836480 bytes

    /dev/sda1   *           1          26      204800   83  Linux

    /dev/sda2              26        1332    10485760   8e  Linux LVM

    /dev/sda3            1332        2611    10279936   8e  Linux LVM

    Disk /dev/sdb: 107.4 GB, 107374182400 bytes

    /dev/sdb1               1        1306    10485760   8e  Linux LVM

    /dev/sdb2            1306        2611    10485760   8e  Linux LVM

    /dev/sdb3            2611        3917    10485760   8e  Linux LVM

    /dev/sdb4            3917       13055    73399296    5  Extended

    /dev/sdb5            3917        5222    10485760   8e  Linux LVM

    /dev/sdb6            5223        6528    10485760   8e  Linux LVM

    /dev/sdb7            6528        7834    10485760   8e  Linux LVM

    /dev/sdb8            7834        9139    10485760   8e  Linux LVM

    /dev/sdb9            9139       10445    10485760   8e  Linux LVM

    /dev/sdb10          10445       11750    10485760   8e  Linux LVM

    /dev/sdb11          11750       13055    10477568   8e  Linux LVM

    Disk /dev/sde: 10.7 GB, 10737418240 bytes

    Disk /dev/sdc: 6442 MB, 6442450944 bytes

    Disk /dev/sdd: 10.7 GB, 10737418240 bytes

    Disk /dev/mapper/vg_rootlhr-Vol02: 2147 MB, 2147483648 bytes

    Disk /dev/mapper/vg_rootlhr-Vol00: 10.7 GB, 10737418240 bytes

    Disk /dev/mapper/vg_orasoft-lv_orasoft_u01: 21.5 GB, 21474836480 bytes

    Disk /dev/mapper/vg_orasoft-lv_orasoft_soft: 21.5 GB, 21474836480 bytes

    Disk /dev/mapper/vg_rootlhr-Vol01: 3221 MB, 3221225472 bytes

    Disk /dev/mapper/vg_rootlhr-Vol03: 3221 MB, 3221225472 bytes

    [root@raclhr-12cR1-N1 ~]# iscsiadm --mode node --targetname iqn.2006-01.com.openfiler:tsn.5e423e1e4d90 --portal 192.168.59.200:3260 --login

    Logging in to [iface: default, target: iqn.2006-01.com.openfiler:tsn.5e423e1e4d90, portal: 192.168.59.200,3260] (multiple)

    Logging in to [iface: default, target: iqn.2006-01.com.openfiler:tsn.5e423e1e4d90, portal: 192.168.2.200,3260] (multiple)

    Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.5e423e1e4d90, portal: 192.168.59.200,3260] successful.

    Login to [iface: default, target: iqn.2006-01.com.openfiler:tsn.5e423e1e4d90, portal: 192.168.2.200,3260] successful.

    [root@raclhr-12cR1-N1 ~]#

    [root@raclhr-12cR1-N1 ~]# fdisk -l | grep dev

    Disk /dev/sda: 21.5 GB, 21474836480 bytes

    /dev/sda1   *           1          26      204800   83  Linux

    /dev/sda2              26        1332    10485760   8e  Linux LVM

    /dev/sda3            1332        2611    10279936   8e  Linux LVM

    Disk /dev/sdb: 107.4 GB, 107374182400 bytes

    /dev/sdb1               1        1306    10485760   8e  Linux LVM

    /dev/sdb2            1306        2611    10485760   8e  Linux LVM

    /dev/sdb3            2611        3917    10485760   8e  Linux LVM

    /dev/sdb4            3917       13055    73399296    5  Extended

    /dev/sdb5            3917        5222    10485760   8e  Linux LVM

    /dev/sdb6            5223        6528    10485760   8e  Linux LVM

    /dev/sdb7            6528        7834    10485760   8e  Linux LVM

    /dev/sdb8            7834        9139    10485760   8e  Linux LVM

    /dev/sdb9            9139       10445    10485760   8e  Linux LVM

    /dev/sdb10          10445       11750    10485760   8e  Linux LVM

    /dev/sdb11          11750       13055    10477568   8e  Linux LVM

    Disk /dev/sde: 10.7 GB, 10737418240 bytes

    Disk /dev/sdc: 6442 MB, 6442450944 bytes

    Disk /dev/sdd: 10.7 GB, 10737418240 bytes

    Disk /dev/mapper/vg_rootlhr-Vol02: 2147 MB, 2147483648 bytes

    Disk /dev/mapper/vg_rootlhr-Vol00: 10.7 GB, 10737418240 bytes

    Disk /dev/mapper/vg_orasoft-lv_orasoft_u01: 21.5 GB, 21474836480 bytes

    Disk /dev/mapper/vg_orasoft-lv_orasoft_soft: 21.5 GB, 21474836480 bytes

    Disk /dev/mapper/vg_rootlhr-Vol01: 3221 MB, 3221225472 bytes

    Disk /dev/mapper/vg_rootlhr-Vol03: 3221 MB, 3221225472 bytes

    Disk /dev/sdf: 10.7 GB, 10737418240 bytes

    Disk /dev/sdi: 10.7 GB, 10737418240 bytes

    Disk /dev/sdh: 10.7 GB, 10737418240 bytes

    Disk /dev/sdl: 10.7 GB, 10737418240 bytes

    Disk /dev/sdj: 10.7 GB, 10737418240 bytes

    Disk /dev/sdg: 10.7 GB, 10737418240 bytes

    Disk /dev/sdk: 10.7 GB, 10737418240 bytes

    Disk /dev/sdm: 10.7 GB, 10737418240 bytes

     

     

    Eight new disks appear here, yet only four LUNs were mapped in openfiler. Why 8 instead of 4? Because openfiler has 2 network adapters, the initiator logged in to the iscsi target twice over two IPs, so each disk shows up twice.
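    The pairing can be checked mechanically from the session listing shown in the next output: group the "Attached scsi disk" lines by LUN, and each LUN shows one disk per portal. A small awk sketch over a sample of that output (lines copied from this host, whitespace simplified):

    ```shell
    # Sample "Attached SCSI devices" lines from `iscsiadm -m session -P 3`:
    # one block per portal, four LUNs each.
    session='scsi4 Channel 00 Id 0 Lun: 0
    Attached scsi disk sdg State: running
    scsi4 Channel 00 Id 0 Lun: 1
    Attached scsi disk sdj State: running
    scsi4 Channel 00 Id 0 Lun: 2
    Attached scsi disk sdk State: running
    scsi4 Channel 00 Id 0 Lun: 3
    Attached scsi disk sdm State: running
    scsi5 Channel 00 Id 0 Lun: 0
    Attached scsi disk sdf State: running
    scsi5 Channel 00 Id 0 Lun: 1
    Attached scsi disk sdh State: running
    scsi5 Channel 00 Id 0 Lun: 2
    Attached scsi disk sdi State: running
    scsi5 Channel 00 Id 0 Lun: 3
    Attached scsi disk sdl State: running'

    # Collect the disk seen for each LUN across both portals.
    echo "$session" | awk '/Lun:/ {lun=$NF}
                           /Attached scsi disk/ {d[lun]=d[lun]" "$4}
                           END {for (l in d) printf "Lun %s:%s\n", l, d[l]}' | sort
    # Each LUN is reached via two paths, e.g. "Lun 0: sdg sdf".
    ```
    
    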

    To view detailed information about each iSCSI session:

    # iscsiadm -m session -P 3

    [root@raclhr-12cR1-N1 ~]#

    [root@raclhr-12cR1-N1 ~]# iscsiadm -m session -P 3

    iSCSI Transport Class version 2.0-870

    version 6.2.0-873.10.el6

    Target: iqn.2006-01.com.openfiler:tsn.5e423e1e4d90

            Current Portal: 192.168.59.200:3260,1

            Persistent Portal: 192.168.59.200:3260,1

                    **********

                    Interface:

                    **********

                    Iface Name: default

                    Iface Transport: tcp

                    Iface Initiatorname: iqn.1994-05.com.redhat:61d32512355

                    Iface IPaddress: 192.168.59.160

                    Iface HWaddress:

                    Iface Netdev:

                    SID: 1

                    iSCSI Connection State: LOGGED IN

                    iSCSI Session State: LOGGED_IN

                    Internal iscsid Session State: NO CHANGE

                    *********

                    Timeouts:

                    *********

                    Recovery Timeout: 120

                    Target Reset Timeout: 30

                    LUN Reset Timeout: 30

                    Abort Timeout: 15

                    *****

                    CHAP:

                    *****

                    username:

                    password: ********

                    username_in:

                    password_in: ********

                    ************************

                    Negotiated iSCSI params:

                    ************************

                    HeaderDigest: None

                    DataDigest: None

                    MaxRecvDataSegmentLength: 262144

                    MaxXmitDataSegmentLength: 131072

                    FirstBurstLength: 262144

                    MaxBurstLength: 262144

                    ImmediateData: No

                    InitialR2T: Yes

                    MaxOutstandingR2T: 1

                    ************************

                    Attached SCSI devices:

                    ************************

                    Host Number: 4  State: running

                    scsi4 Channel 00 Id 0 Lun: 0

                            Attached scsi disk sdg          State: running

                    scsi4 Channel 00 Id 0 Lun: 1

                            Attached scsi disk sdj          State: running

                    scsi4 Channel 00 Id 0 Lun: 2

                            Attached scsi disk sdk          State: running

                    scsi4 Channel 00 Id 0 Lun: 3

                            Attached scsi disk sdm          State: running

            Current Portal: 192.168.2.200:3260,1

            Persistent Portal: 192.168.2.200:3260,1

                    **********

                    Interface:

                    **********

                    Iface Name: default

                    Iface Transport: tcp

                    Iface Initiatorname: iqn.1994-05.com.redhat:61d32512355

                    Iface IPaddress: 192.168.2.100

                    Iface HWaddress:

                    Iface Netdev:

                    SID: 2

                    iSCSI Connection State: LOGGED IN

                    iSCSI Session State: LOGGED_IN

                    Internal iscsid Session State: NO CHANGE

                    *********

                    Timeouts:

                    *********

                    Recovery Timeout: 120

                    Target Reset Timeout: 30

                    LUN Reset Timeout: 30

                    Abort Timeout: 15

                    *****

                    CHAP:

                    *****

                    username:

                    password: ********

                    username_in:

                    password_in: ********

                    ************************

                    Negotiated iSCSI params:

                    ************************

                    HeaderDigest: None

                    DataDigest: None

                    MaxRecvDataSegmentLength: 262144

                    MaxXmitDataSegmentLength: 131072

                    FirstBurstLength: 262144

                    MaxBurstLength: 262144

                    ImmediateData: No

                    InitialR2T: Yes

                    MaxOutstandingR2T: 1

                    ************************

                    Attached SCSI devices:

                    ************************

                    Host Number: 5  State: running

                    scsi5 Channel 00 Id 0 Lun: 0

                            Attached scsi disk sdf          State: running

                    scsi5 Channel 00 Id 0 Lun: 1

                            Attached scsi disk sdh          State: running

                    scsi5 Channel 00 Id 0 Lun: 2

                            Attached scsi disk sdi          State: running

                    scsi5 Channel 00 Id 0 Lun: 3

                            Attached scsi disk sdl          State: running

    [root@raclhr-12cR1-N1 ~]#

     

    After logging in, the new disks can be partitioned, formatted, and then mounted as needed.

    After these commands complete, the iSCSI initiator records the information under /var/lib/iscsi:

    /var/lib/iscsi/send_targets records each target, and /var/lib/iscsi/nodes records the nodes under each target. The next time the iSCSI initiator starts (service iscsi start), it will automatically log in to each target. To log in to the targets manually again, delete everything under /var/lib/iscsi/send_targets and /var/lib/iscsi/nodes.
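    The same cleanup can also be done with iscsiadm itself rather than deleting files by hand. A sketch, run on the initiator node, using the target and portal from above:

    ```shell
    # Log out of every session for this target, then remove its node records
    # and the discovery record, so `service iscsi start` no longer auto-logs-in.
    iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.5e423e1e4d90 --logout
    iscsiadm -m node -o delete -T iqn.2006-01.com.openfiler:tsn.5e423e1e4d90
    iscsiadm -m discoverydb -t sendtargets -p 192.168.59.200 -o delete
    ```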

    3.2  Multipath

    3.2.1  Installing the multipath Software on Both RAC Nodes

    1. Install the multipath packages:

    [root@raclhr-12cR1-N1 ~]# mount /dev/sr0 /media/lhr/cdrom/

    mount: block device /dev/sr0 is write-protected, mounting read-only

    [root@raclhr-12cR1-N1 ~]# cd /media/lhr/cdrom/Packages/

    [root@raclhr-12cR1-N1 Packages]# ll device-mapper-*.x86_64.rpm

    -r--r--r-- 104 root root  168424 Oct 30  2013 device-mapper-1.02.79-8.el6.x86_64.rpm

    -r--r--r-- 104 root root  118316 Oct 30  2013 device-mapper-event-1.02.79-8.el6.x86_64.rpm

    -r--r--r-- 104 root root  112892 Oct 30  2013 device-mapper-event-libs-1.02.79-8.el6.x86_64.rpm

    -r--r--r-- 104 root root  199924 Oct 30  2013 device-mapper-libs-1.02.79-8.el6.x86_64.rpm

    -r--r--r--  95 root root  118892 Oct 25  2013 device-mapper-multipath-0.4.9-72.el6.x86_64.rpm

    -r--r--r--  95 root root  184760 Oct 25  2013 device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm

    -r--r--r--  96 root root 2444388 Oct 30  2013 device-mapper-persistent-data-0.2.8-2.el6.x86_64.rpm

    [root@raclhr-12cR1-N1 Packages]# ll iscsi*

    -r--r--r-- 101 root root 702300 Oct 29  2013 iscsi-initiator-utils-6.2.0.873-10.el6.x86_64.rpm

    [root@raclhr-12cR1-N1 Packages]# rpm -qa|grep device-mapper

    device-mapper-persistent-data-0.2.8-2.el6.x86_64

    device-mapper-1.02.79-8.el6.x86_64

    device-mapper-event-libs-1.02.79-8.el6.x86_64

    device-mapper-event-1.02.79-8.el6.x86_64

    device-mapper-libs-1.02.79-8.el6.x86_64

    [root@raclhr-12cR1-N1 Packages]# rpm -ivh device-mapper-1.02.79-8.el6.x86_64.rpm

    warning: device-mapper-1.02.79-8.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY

    Preparing...                ########################################### [100%]

            package device-mapper-1.02.79-8.el6.x86_64 is already installed

    [root@raclhr-12cR1-N1 Packages]# rpm -ivh device-mapper-event-1.02.79-8.el6.x86_64.rpm

    warning: device-mapper-event-1.02.79-8.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY

    Preparing...                ########################################### [100%]

            package device-mapper-event-1.02.79-8.el6.x86_64 is already installed

    [root@raclhr-12cR1-N1 Packages]# rpm -ivh device-mapper-multipath-0.4.9-72.el6.x86_64.rpm

    warning: device-mapper-multipath-0.4.9-72.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY

    error: Failed dependencies:

            device-mapper-multipath-libs = 0.4.9-72.el6 is needed by device-mapper-multipath-0.4.9-72.el6.x86_64

            libmpathpersist.so.0()(64bit) is needed by device-mapper-multipath-0.4.9-72.el6.x86_64

            libmultipath.so()(64bit) is needed by device-mapper-multipath-0.4.9-72.el6.x86_64

    [root@raclhr-12cR1-N1 Packages]# rpm -ivh device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm

    warning: device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY

    Preparing...                ########################################### [100%]

       1:device-mapper-multipath########################################### [100%]

    [root@raclhr-12cR1-N1 Packages]# rpm -ivh device-mapper-multipath-0.4.9-72.el6.x86_64.rpm   

    warning: device-mapper-multipath-0.4.9-72.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY

    Preparing...                ########################################### [100%]

       1:device-mapper-multipath########################################### [100%]

    [root@raclhr-12cR1-N1 Packages]# rpm -qa|grep device-mapper

    device-mapper-multipath-0.4.9-72.el6.x86_64

    device-mapper-persistent-data-0.2.8-2.el6.x86_64

    device-mapper-1.02.79-8.el6.x86_64

    device-mapper-event-libs-1.02.79-8.el6.x86_64

    device-mapper-event-1.02.79-8.el6.x86_64

    device-mapper-multipath-libs-0.4.9-72.el6.x86_64

    device-mapper-libs-1.02.79-8.el6.x86_64

    [root@raclhr-12cR1-N2 Packages]#

     

     

    rpm -ivh device-mapper-multipath-libs-0.4.9-72.el6.x86_64.rpm

    rpm -ivh device-mapper-multipath-0.4.9-72.el6.x86_64.rpm

     

     

     

    3.2.2  Starting multipath

    Load the multipath kernel modules:

    modprobe dm-multipath

    modprobe dm-round-robin

     

    Check that the modules are loaded:

    [root@raclhr-12cR1-N1 Packages]# lsmod |grep multipath

    dm_multipath           17724  1 dm_round_robin

    dm_mod                 84209  16 dm_multipath,dm_mirror,dm_log

    [root@raclhr-12cR1-N1 Packages]#

     

    Set the multipathd service to start at boot:

    [root@raclhr-12cR1-N1 Packages]# chkconfig  --level 2345 multipathd on

    [root@raclhr-12cR1-N1 Packages]#

    [root@raclhr-12cR1-N1 Packages]# chkconfig  --list|grep multipathd

    multipathd      0:off   1:off   2:on    3:on    4:on    5:on    6:off

    [root@raclhr-12cR1-N1 Packages]#

     

    Start the multipathd service:

    [root@raclhr-12cR1-N1 Packages]# service multipathd restart

    ux_socket_connect: No such file or directory

    Stopping multipathd daemon: [FAILED]

    Starting multipathd daemon: [  OK  ]

    [root@raclhr-12cR1-N1 Packages]#

     

    3.2.3  Configuring the Multipath Software: /etc/multipath.conf

    1. Configure multipath by editing /etc/multipath.conf.

       Note: /etc/multipath.conf does not exist by default; generate it with the following command:

    /sbin/mpathconf --enable --find_multipaths y --with_module y --with_chkconfig y

    [root@raclhr-12cR1-N1 ~]# multipath -ll

    Jan 23 12:52:54 | /etc/multipath.conf does not exist, blacklisting all devices.

    Jan 23 12:52:54 | A sample multipath.conf file is located at

    Jan 23 12:52:54 | /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf

    Jan 23 12:52:54 | You can run /sbin/mpathconf to create or modify /etc/multipath.conf

    [root@raclhr-12cR1-N1 ~]# multipath -ll

    Jan 23 12:53:49 | /etc/multipath.conf does not exist, blacklisting all devices.

    Jan 23 12:53:49 | A sample multipath.conf file is located at

    Jan 23 12:53:49 | /usr/share/doc/device-mapper-multipath-0.4.9/multipath.conf

    Jan 23 12:53:49 | You can run /sbin/mpathconf to create or modify /etc/multipath.conf

    [root@raclhr-12cR1-N1 ~]# /sbin/mpathconf --enable --find_multipaths y --with_module y --with_chkconfig y

    [root@raclhr-12cR1-N1 ~]#

    [root@raclhr-12cR1-N1 ~]# ll /etc/multipath.conf

    -rw------- 1 root root 2775 Jan 23 12:55 /etc/multipath.conf

    [root@raclhr-12cR1-N1 ~]#
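    The generated file can then be tailored. As an illustration only (not the file mpathconf produced here), a minimal /etc/multipath.conf for this setup might look like the fragment below. The aliases asm-disk1 and asm-disk2 are hypothetical names; the WWIDs are two of the values that appear in /etc/multipath/wwids later in this section:

    ```conf
    defaults {
            user_friendly_names yes
            find_multipaths     yes
    }
    multipaths {
            multipath {
                    wwid  14f504e46494c455232326c6c76442d4361634f2d4d4f4d41
                    alias asm-disk1
            }
            multipath {
                    wwid  14f504e46494c455242674c7079392d753750482d63734443
                    alias asm-disk2
            }
    }
    ```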

     

     

     

    2. View the WWID of each LUN the storage has presented to the server:

    [root@raclhr-12cR1-N1 multipath]# multipath -v0

    [root@raclhr-12cR1-N1 multipath]# more /etc/multipath/wwids

    # Multipath wwids, Version : 1.0

    # NOTE: This file is automatically maintained by multipath and multipathd.

    # You should not need to edit this file in normal circumstances.

    #

    # Valid WWIDs:

    /14f504e46494c455232326c6c76442d4361634f2d4d4f4d41/

    /14f504e46494c455242674c7079392d753750482d63734443/

    /14f504e46494c455233384b7353432d52454b4c2d79506757/

    /14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c/

    [root@raclhr-12cR1-N1 multipath]#
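Each WWID above is self-describing: the leading 1 marks a T10 vendor-ID based identifier, and the remaining hex pairs are ASCII, the 8-byte vendor name OPNFILER followed by the volume ID. A minimal sketch to decode one (the helper name is made up; plain POSIX awk, no gawk extensions):

```shell
#!/bin/sh
# decode_wwid WWID - drop the leading identifier-type digit, then decode
# each remaining hex pair into its ASCII character
decode_wwid() {
    echo "${1#1}" | awk '
        BEGIN { h = "0123456789abcdef" }
        {
            for (i = 1; i < length($0); i += 2) {
                hi = index(h, substr($0, i, 1)) - 1
                lo = index(h, substr($0, i + 1, 1)) - 1
                printf "%c", hi * 16 + lo
            }
            print ""
        }'
}

decode_wwid 14f504e46494c455232326c6c76442d4361634f2d4d4f4d41
# prints: OPNFILER22llvD-CacO-MOMA
```

The decoded prefix matches the OPNFILER,VIRTUAL-DISK vendor string that `multipath -ll` reports, which makes it easy to tie a WWID back to the OpenFiler volume that produced it.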

     

     

Copy /etc/multipath/wwids and /etc/multipath/bindings from node 1 over the same files on node 2, so that both nodes use identical names:

    [root@raclhr-12cR1-N2 ~]# multipath -v0

    [root@raclhr-12cR1-N2 ~]# more /etc/multipath/wwids

    # Multipath wwids, Version : 1.0

    # NOTE: This file is automatically maintained by multipath and multipathd.

    # You should not need to edit this file in normal circumstances.

    #

    # Valid WWIDs:

    /14f504e46494c455232326c6c76442d4361634f2d4d4f4d41/

    /14f504e46494c455242674c7079392d753750482d63734443/

    /14f504e46494c455233384b7353432d52454b4c2d79506757/

    /14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c/

    [root@raclhr-12cR1-N1 ~]# more /etc/multipath/bindings

    # Multipath bindings, Version : 1.0

    # NOTE: this file is automatically maintained by the multipath program.

    # You should not need to edit this file in normal circumstances.

    #

    # Format:

    # alias wwid

    #

    mpatha 14f504e46494c455232326c6c76442d4361634f2d4d4f4d41

    mpathb 14f504e46494c455242674c7079392d753750482d63734443

    mpathc 14f504e46494c455233384b7353432d52454b4c2d79506757

    mpathd 14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c

    [root@raclhr-12cR1-N1 ~]#
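To check quickly that both nodes bind the same WWID to each friendly name, a small lookup helper can be run against each node's bindings file (a sketch; the helper name is illustrative):

```shell
#!/bin/sh
# binding_for FILE ALIAS - print the WWID bound to ALIAS in a
# multipath bindings file, skipping comment lines
binding_for() {
    awk -v a="$2" '!/^#/ && $1 == a { print $2 }' "$1"
}

# e.g.: binding_for /etc/multipath/bindings mpatha
```

Running it on both nodes with the same alias should print the same WWID; any difference means the files were not copied correctly.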

     

     

     


    [root@raclhr-12cR1-N1 multipath]# multipath -ll

    mpathd (14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c) dm-9 OPNFILER,VIRTUAL-DISK

    size=10G features='0' hwhandler='0' wp=rw

    |-+- policy='round-robin 0' prio=1 status=active

    | `- 5:0:0:3 sdk 8:160 active ready running

    `-+- policy='round-robin 0' prio=1 status=enabled

      `- 4:0:0:3 sdm 8:192 active ready running

    mpathc (14f504e46494c455233384b7353432d52454b4c2d79506757) dm-8 OPNFILER,VIRTUAL-DISK

    size=10G features='0' hwhandler='0' wp=rw

    |-+- policy='round-robin 0' prio=1 status=active

    | `- 5:0:0:2 sdj 8:144 active ready running

    `-+- policy='round-robin 0' prio=1 status=enabled

      `- 4:0:0:2 sdl 8:176 active ready running

    mpathb (14f504e46494c455242674c7079392d753750482d63734443) dm-7 OPNFILER,VIRTUAL-DISK

    size=10G features='0' hwhandler='0' wp=rw

    |-+- policy='round-robin 0' prio=1 status=active

    | `- 4:0:0:1 sdh 8:112 active ready running

    `-+- policy='round-robin 0' prio=1 status=enabled

      `- 5:0:0:1 sdi 8:128 active ready running

    mpatha (14f504e46494c455232326c6c76442d4361634f2d4d4f4d41) dm-6 OPNFILER,VIRTUAL-DISK

    size=10G features='0' hwhandler='0' wp=rw

    |-+- policy='round-robin 0' prio=1 status=active

    | `- 4:0:0:0 sdf 8:80  active ready running

    `-+- policy='round-robin 0' prio=1 status=enabled

      `- 5:0:0:0 sdg 8:96  active ready running

    [root@raclhr-12cR1-N1 multipath]# fdisk -l | grep dev

    Disk /dev/sda: 21.5 GB, 21474836480 bytes

    /dev/sda1   *           1          26      204800   83  Linux

    /dev/sda2              26        1332    10485760   8e  Linux LVM

    /dev/sda3            1332        2611    10279936   8e  Linux LVM

    Disk /dev/sdb: 107.4 GB, 107374182400 bytes

    /dev/sdb1               1        1306    10485760   8e  Linux LVM

    /dev/sdb2            1306        2611    10485760   8e  Linux LVM

    /dev/sdb3            2611        3917    10485760   8e  Linux LVM

    /dev/sdb4            3917       13055    73399296    5  Extended

    /dev/sdb5            3917        5222    10485760   8e  Linux LVM

    /dev/sdb6            5223        6528    10485760   8e  Linux LVM

    /dev/sdb7            6528        7834    10485760   8e  Linux LVM

    /dev/sdb8            7834        9139    10485760   8e  Linux LVM

    /dev/sdb9            9139       10445    10485760   8e  Linux LVM

    /dev/sdb10          10445       11750    10485760   8e  Linux LVM

    /dev/sdb11          11750       13055    10477568   8e  Linux LVM

    Disk /dev/sdc: 6442 MB, 6442450944 bytes

    Disk /dev/sdd: 10.7 GB, 10737418240 bytes

    Disk /dev/sde: 10.7 GB, 10737418240 bytes

    Disk /dev/mapper/vg_rootlhr-Vol02: 2147 MB, 2147483648 bytes

    Disk /dev/mapper/vg_rootlhr-Vol00: 10.7 GB, 10737418240 bytes

    Disk /dev/mapper/vg_orasoft-lv_orasoft_u01: 21.5 GB, 21474836480 bytes

    Disk /dev/mapper/vg_orasoft-lv_orasoft_soft: 21.5 GB, 21474836480 bytes

    Disk /dev/mapper/vg_rootlhr-Vol01: 3221 MB, 3221225472 bytes

    Disk /dev/mapper/vg_rootlhr-Vol03: 3221 MB, 3221225472 bytes

    Disk /dev/sdf: 10.7 GB, 10737418240 bytes

    Disk /dev/sdg: 10.7 GB, 10737418240 bytes

    Disk /dev/sdh: 10.7 GB, 10737418240 bytes

    Disk /dev/sdi: 10.7 GB, 10737418240 bytes

    Disk /dev/sdj: 10.7 GB, 10737418240 bytes

    Disk /dev/sdk: 10.7 GB, 10737418240 bytes

    Disk /dev/sdl: 10.7 GB, 10737418240 bytes

    Disk /dev/sdm: 10.7 GB, 10737418240 bytes

    Disk /dev/mapper/mpatha: 10.7 GB, 10737418240 bytes

    Disk /dev/mapper/mpathb: 10.7 GB, 10737418240 bytes

    Disk /dev/mapper/mpathc: 10.7 GB, 10737418240 bytes

    Disk /dev/mapper/mpathd: 10.7 GB, 10737418240 bytes

    [root@raclhr-12cR1-N1 multipath]#

     

     

3.2.4  Edit /etc/multipath.conf

for i in f g h i j k l m ; do
  echo "KERNEL==\"sd*\", BUS==\"scsi\", PROGRAM==\"/sbin/scsi_id --whitelisted --device=/dev/\$name\", RESULT==\"$(scsi_id --whitelisted --device=/dev/sd$i)\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\""
done

     

     

    [root@raclhr-12cR1-N1 multipath]# for i in f g h i j k l m ;

    > do

    > echo "KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted  --device=/dev/$name",RESULT=="`scsi_id --whitelisted  --device=/dev/sd$i`",NAME="asm-disk$i",OWNER="grid",GROUP="asmadmin",MODE="0660""

    > done

    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted  --device=/dev/$name",RESULT=="14f504e46494c455232326c6c76442d4361634f2d4d4f4d41",NAME="asm-diskf",OWNER="grid",GROUP="asmadmin",MODE="0660"

    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted  --device=/dev/$name",RESULT=="14f504e46494c455232326c6c76442d4361634f2d4d4f4d41",NAME="asm-diskg",OWNER="grid",GROUP="asmadmin",MODE="0660"

    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted  --device=/dev/$name",RESULT=="14f504e46494c455242674c7079392d753750482d63734443",NAME="asm-diskh",OWNER="grid",GROUP="asmadmin",MODE="0660"

    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted  --device=/dev/$name",RESULT=="14f504e46494c455242674c7079392d753750482d63734443",NAME="asm-diski",OWNER="grid",GROUP="asmadmin",MODE="0660"

    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted  --device=/dev/$name",RESULT=="14f504e46494c455233384b7353432d52454b4c2d79506757",NAME="asm-diskj",OWNER="grid",GROUP="asmadmin",MODE="0660"

    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted  --device=/dev/$name",RESULT=="14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c",NAME="asm-diskk",OWNER="grid",GROUP="asmadmin",MODE="0660"

    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted  --device=/dev/$name",RESULT=="14f504e46494c455233384b7353432d52454b4c2d79506757",NAME="asm-diskl",OWNER="grid",GROUP="asmadmin",MODE="0660"

    KERNEL=="sd*", BUS=="scsi", PROGRAM=="/sbin/scsi_id --whitelisted  --device=/dev/$name",RESULT=="14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c",NAME="asm-diskm",OWNER="grid",GROUP="asmadmin",MODE="0660"

    [root@raclhr-12cR1-N1 multipath]#

    [root@raclhr-12cR1-N1 multipath]# more   /etc/multipath.conf

    defaults {

            find_multipaths yes

            user_friendly_names yes

    }

     

    blacklist {

          wwid 3600508b1001c5ae72efe1fea025cd2e5

          devnode "^hd[a-z]"

          devnode "^sd[a-e]"

          devnode "^sda"

    }

     

    multipaths {

           multipath {

                   wwid                    14f504e46494c455232326c6c76442d4361634f2d4d4f4d41

                   alias                   VMLHRStorage000

                   path_grouping_policy    multibus

                   path_selector           "round-robin 0"

                   failback                manual

                   rr_weight               priorities

                   no_path_retry           5

          }

           multipath {

                   wwid                    14f504e46494c455242674c7079392d753750482d63734443

                   alias                   VMLHRStorage001

                   path_grouping_policy    multibus

                   path_selector           "round-robin 0"

                   failback                manual

                   rr_weight               priorities

                   no_path_retry           5

           }

           multipath {

                   wwid                    14f504e46494c455233384b7353432d52454b4c2d79506757

                   alias                   VMLHRStorage002

                   path_grouping_policy    multibus

                   path_selector           "round-robin 0"

                   failback                manual

                   rr_weight               priorities

                   no_path_retry           5

           }

           multipath {

                   wwid                    14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c

                   alias                   VMLHRStorage003

                   path_grouping_policy    multibus

                   path_selector           "round-robin 0"

                   failback                manual

                   rr_weight               priorities

                   no_path_retry           5

           }

    }

    devices {

           device {

                   vendor                  "VMWARE"

                   product                 "VIRTUAL-DISK"

                   path_grouping_policy    multibus

                   getuid_callout          "/lib/udev/scsi_id --whitelisted --device=/dev/%n"

                   path_checker            readsector0

                   path_selector           "round-robin 0"

                   hardware_handler        "0"

                   failback                15

                   rr_weight               priorities

                   no_path_retry           queue

           }

    }

    [root@raclhr-12cR1-N1 multipath]#
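The four multipath{} stanzas above are identical except for wwid and alias, so they can be generated from /etc/multipath/wwids instead of typed by hand. A sketch (the VMLHRStorage alias prefix matches the configuration above; add the remaining options to taste):

```shell
#!/bin/sh
# gen_stanzas FILE - emit one multipath{} stanza per WWID listed in a
# multipath wwids file (lines of the form /WWID/; comments are skipped)
gen_stanzas() {
    grep '^/' "$1" | tr -d '/' | awk '{
        printf "multipath {\n"
        printf "        wwid   %s\n", $1
        printf "        alias  VMLHRStorage%03d\n", NR - 1
        printf "}\n"
    }'
}

# e.g.: gen_stanzas /etc/multipath/wwids >> /etc/multipath.conf  (then edit)
```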

     

     

     

Restart multipathd to activate the configuration:

    [root@raclhr-12cR1-N1 ~]# service multipathd restart

    ok

    Stopping multipathd daemon: [  OK  ]

    Starting multipathd daemon: [  OK  ]

    [root@raclhr-12cR1-N1 ~]# multipath -ll

    VMLHRStorage003 (14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c) dm-9 OPNFILER,VIRTUAL-DISK

    size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

    |-+- policy='round-robin 0' prio=1 status=active

    | `- 5:0:0:3 sdk 8:160 active ready running

    `-+- policy='round-robin 0' prio=1 status=enabled

      `- 4:0:0:3 sdm 8:192 active ready running

    VMLHRStorage002 (14f504e46494c455233384b7353432d52454b4c2d79506757) dm-8 OPNFILER,VIRTUAL-DISK

    size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

    |-+- policy='round-robin 0' prio=1 status=active

    | `- 5:0:0:2 sdj 8:144 active ready running

    `-+- policy='round-robin 0' prio=1 status=enabled

      `- 4:0:0:2 sdl 8:176 active ready running

    VMLHRStorage001 (14f504e46494c455242674c7079392d753750482d63734443) dm-7 OPNFILER,VIRTUAL-DISK

    size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

    |-+- policy='round-robin 0' prio=1 status=active

    | `- 4:0:0:1 sdh 8:112 active ready running

    `-+- policy='round-robin 0' prio=1 status=enabled

      `- 5:0:0:1 sdi 8:128 active ready running

    VMLHRStorage000 (14f504e46494c455232326c6c76442d4361634f2d4d4f4d41) dm-6 OPNFILER,VIRTUAL-DISK

    size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

    |-+- policy='round-robin 0' prio=1 status=active

    | `- 4:0:0:0 sdf 8:80  active ready running

    `-+- policy='round-robin 0' prio=1 status=enabled

      `- 5:0:0:0 sdg 8:96  active ready running

    [root@raclhr-12cR1-N1 ~]#

    [root@raclhr-12cR1-N1 ~]# multipath -ll|grep LHR

    VMLHRStorage003 (14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c) dm-9 OPNFILER,VIRTUAL-DISK

    VMLHRStorage002 (14f504e46494c455233384b7353432d52454b4c2d79506757) dm-8 OPNFILER,VIRTUAL-DISK

    VMLHRStorage001 (14f504e46494c455242674c7079392d753750482d63734443) dm-7 OPNFILER,VIRTUAL-DISK

    VMLHRStorage000 (14f504e46494c455232326c6c76442d4361634f2d4d4f4d41) dm-6 OPNFILER,VIRTUAL-DISK

    [root@raclhr-12cR1-N1 ~]#

     

Once the multipath configuration is enabled, the multipath logical devices appear under /dev/mapper:

    [root@raclhr-12cR1-N1 ~]# cd /dev/mapper

    [root@raclhr-12cR1-N1 mapper]# ll

    total 0

    crw-rw---- 1 root root 10, 58 Jan 23 12:49 control

    lrwxrwxrwx 1 root root      7 Jan 23 12:49 vg_orasoft-lv_orasoft_soft -> ../dm-3

    lrwxrwxrwx 1 root root      7 Jan 23 12:49 vg_orasoft-lv_orasoft_u01 -> ../dm-2

    lrwxrwxrwx 1 root root      7 Jan 23 12:50 vg_rootlhr-Vol00 -> ../dm-1

    lrwxrwxrwx 1 root root      7 Jan 23 12:50 vg_rootlhr-Vol01 -> ../dm-4

    lrwxrwxrwx 1 root root      7 Jan 23 12:49 vg_rootlhr-Vol02 -> ../dm-0

    lrwxrwxrwx 1 root root      7 Jan 23 12:50 vg_rootlhr-Vol03 -> ../dm-5

    lrwxrwxrwx 1 root root      7 Jan 23 13:55 VMLHRStorage000 -> ../dm-6

    lrwxrwxrwx 1 root root      7 Jan 23 13:55 VMLHRStorage001 -> ../dm-7

    lrwxrwxrwx 1 root root      7 Jan 23 13:55 VMLHRStorage002 -> ../dm-8

    lrwxrwxrwx 1 root root      7 Jan 23 13:55 VMLHRStorage003 -> ../dm-9

    [root@raclhr-12cR1-N1 mapper]#

     

At this point the multipath configuration is complete.

3.2.5  Configure permissions on the multipath devices

Before RHEL 6.2, device permissions could be set directly in the multipath configuration by adding uid, gid and mode to each multipath section:

uid 1100     # owner uid

gid 1020     # group gid

For example:

        multipath {

                wwid                    360050763008101d4e00000000000000a

                alias                   DATA03

                uid                     501

                gid                     501

}

From RHEL 6.2 onward the uid, gid and mode parameters were removed from the multipath configuration, and permissions must be set through udev instead. A template rules file named 12-dm-permissions.rules ships in /usr/share/doc/device-mapper-<version>/; copy it into /etc/udev/rules.d/ and edit it there to make it take effect.

    [root@raclhr-12cR1-N1 rules.d]# ll /usr/share/doc/device-mapper-1.02.79/12-dm-permissions.rules

    -rw-r--r--. 1 root root 3186 Aug 13  2013 /usr/share/doc/device-mapper-1.02.79/12-dm-permissions.rules

    [root@raclhr-12cR1-N1 rules.d]#

    [root@raclhr-12cR1-N1 rules.d]# ll

    total 24

    -rw-r--r-- 1 root root  77 Jan 23 18:06 12-dm-permissions.rules

    -rw-r--r-- 1 root root 190 Jan 23 15:40 55-usm.rules

    -rw-r--r-- 1 root root 549 Jan 23 15:17 70-persistent-cd.rules

    -rw-r--r-- 1 root root 585 Jan 23 15:09 70-persistent-net.rules

    -rw-r--r-- 1 root root 633 Jan 23 15:46 99-oracle-asmdevices.rules

    -rw-r--r-- 1 root root 916 Jan 23 15:16 99-oracleasm.rules

    [root@raclhr-12cR1-N1 rules.d]# more /etc/udev/rules.d/12-dm-permissions.rules

    ENV{DM_NAME}=="VMLHRStorage*", OWNER:="grid", GROUP:="asmadmin", MODE:="660"

    [root@raclhr-12cR1-N1 rules.d]#

     

     

Copy /etc/udev/rules.d/12-dm-permissions.rules to node 2 as well.

     

3.2.6  Configure the udev rules

The script is as follows:

for i in f g h i j k l m ; do
  echo "KERNEL==\"dm-*\", BUS==\"block\", PROGRAM==\"/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/\$name\", RESULT==\"$(scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i)\", NAME=\"asm-disk$i\", OWNER=\"grid\", GROUP=\"asmadmin\", MODE=\"0660\"" >> /etc/udev/rules.d/99-oracleasm.rules
done

     

     

Because every LUN is visible down two paths, each WWID appears twice in the generated rules, so the duplicate lines must be removed from /etc/udev/rules.d/99-oracleasm.rules.

Run the following on node 1:

    [root@raclhr-12cR1-N1 rules.d]# for i in f g h i j k l m ;

    > do

    > echo "KERNEL=="dm-*", BUS=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="`scsi_id --whitelisted --replace-whitespace --device=/dev/sd$i`",NAME="asm-disk$i",OWNER="grid",GROUP="asmadmin",MODE="0660"" >> /etc/udev/rules.d/99-oracleasm.rules

    > done

     

     

Open /etc/udev/rules.d/99-oracleasm.rules and delete the duplicate WWID lines, keeping only one rule per WWID:

    [root@raclhr-12cR1-N1 ~]# cat /etc/udev/rules.d/99-oracleasm.rules

    KERNEL=="dm-*", BUS=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="14f504e46494c455232326c6c76442d4361634f2d4d4f4d41",NAME="asm-diskf",OWNER="grid",GROUP="asmadmin",MODE="0660"

    KERNEL=="dm-*", BUS=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="14f504e46494c455242674c7079392d753750482d63734443",NAME="asm-diskh",OWNER="grid",GROUP="asmadmin",MODE="0660"

    KERNEL=="dm-*", BUS=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="14f504e46494c455233384b7353432d52454b4c2d79506757",NAME="asm-diskj",OWNER="grid",GROUP="asmadmin",MODE="0660"

    KERNEL=="dm-*", BUS=="block", PROGRAM=="/sbin/scsi_id --whitelisted --replace-whitespace --device=/dev/$name",RESULT=="14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c",NAME="asm-diskk",OWNER="grid",GROUP="asmadmin",MODE="0660"

    [root@raclhr-12cR1-N1 ~]#
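Instead of deleting the duplicate lines by hand, the file can be de-duplicated on the RESULT (WWID) field. A sketch with a hypothetical helper name; feed it the rules file and redirect the output to a scratch copy before replacing the original:

```shell
#!/bin/sh
# dedupe_rules FILE - keep only the first udev rule per RESULT=="..." value;
# lines without a RESULT match (e.g. comments) pass through untouched
dedupe_rules() {
    awk 'match($0, /RESULT=="[^"]*"/) {
             if (!seen[substr($0, RSTART, RLENGTH)]++) print
             next
         }
         { print }' "$1"
}

# e.g.: dedupe_rules /etc/udev/rules.d/99-oracleasm.rules > /tmp/99-oracleasm.dedup
```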

     

     

Copy the contents of /etc/udev/rules.d/99-oracleasm.rules to node 2, then restart udev:

    [root@raclhr-12cR1-N1 ~]# start_udev

    Starting udev: [  OK  ]

    [root@raclhr-12cR1-N1 ~]#

    [root@raclhr-12cR1-N1 ~]# ll /dev/asm-*

    brw-rw---- 1 grid asmadmin   8, 32 Jan 23 15:50 /dev/asm-diskc

    brw-rw---- 1 grid asmadmin   8, 48 Jan 23 15:48 /dev/asm-diskd

    brw-rw---- 1 grid asmadmin   8, 64 Jan 23 15:48 /dev/asm-diske

    brw-rw---- 1 grid asmadmin 253,  7 Jan 23 15:46 /dev/asm-diskf

    brw-rw---- 1 grid asmadmin 253,  9 Jan 23 15:46 /dev/asm-diskh

    brw-rw---- 1 grid asmadmin 253,  6 Jan 23 15:46 /dev/asm-diskj

    brw-rw---- 1 grid asmadmin 253,  8 Jan 23 15:46 /dev/asm-diskk

    [root@raclhr-12cR1-N1 ~]#

    [grid@raclhr-12cR1-N1 ~]$ $ORACLE_HOME/bin/kfod disks=all s=true ds=true

    --------------------------------------------------------------------------------

    Disk          Size Header    Path                                     Disk Group   User     Group  

    ================================================================================

       1:       6144 Mb MEMBER    /dev/asm-diskc                           OCR          grid     asmadmin

       2:      10240 Mb MEMBER    /dev/asm-diskd                           DATA         grid     asmadmin

       3:      10240 Mb MEMBER    /dev/asm-diske                           FRA          grid     asmadmin

       4:      10240 Mb CANDIDATE /dev/asm-diskf                           #            grid     asmadmin

       5:      10240 Mb CANDIDATE /dev/asm-diskh                           #            grid     asmadmin

       6:      10240 Mb CANDIDATE /dev/asm-diskj                           #            grid     asmadmin

       7:      10240 Mb CANDIDATE /dev/asm-diskk                           #            grid     asmadmin

    --------------------------------------------------------------------------------

    ORACLE_SID ORACLE_HOME                                                         

    ================================================================================

         +ASM2 /u01/app/12.1.0/grid                                                           

         +ASM1 /u01/app/12.1.0/grid                                                           

    [grid@raclhr-12cR1-N1 ~]$ asmcmd

     

    ASMCMD> lsdg

    State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name

    MOUNTED  EXTERN  N         512   4096  1048576     10240     6487                0            6487              0             N  DATA/

    MOUNTED  EXTERN  N         512   4096  1048576     10240    10144                0           10144              0             N  FRA/

    MOUNTED  EXTERN  N         512   4096  1048576      6144     1672                0            1672              0             Y  OCR/

    ASMCMD> lsdsk

    Path

    /dev/asm-diskc

    /dev/asm-diskd

    /dev/asm-diske

    ASMCMD>  lsdsk --candidate -p

    Group_Num  Disk_Num      Incarn  Mount_Stat  Header_Stat  Mode_Stat  State   Path

            0         1           0  CLOSED      CANDIDATE    ONLINE     NORMAL  /dev/asm-diskf

            0         3           0  CLOSED      CANDIDATE    ONLINE     NORMAL  /dev/asm-diskh

            0         2           0  CLOSED      CANDIDATE    ONLINE     NORMAL  /dev/asm-diskj

            0         0           0  CLOSED      CANDIDATE    ONLINE     NORMAL  /dev/asm-diskk

    ASMCMD>

     

     

3.3  Create a disk group on the new disks

CREATE DISKGROUP TESTMUL external redundancy DISK '/dev/asm-diskf','/dev/asm-diskh' ATTRIBUTE 'compatible.rdbms' = '12.1', 'compatible.asm' = '12.1';

    SQL> select path from v$asm_disk;

     

    PATH

    --------------------------------------------------------------------------------

    /dev/asm-diskk

    /dev/asm-diskf

    /dev/asm-diskj

    /dev/asm-diskh

    /dev/asm-diske

    /dev/asm-diskd

    /dev/asm-diskc

     

    7 rows selected.

     

    SQL> CREATE DISKGROUP TESTMUL external redundancy DISK '/dev/asm-diskf','/dev/asm-diskh' ATTRIBUTE 'compatible.rdbms' = '12.1', 'compatible.asm' = '12.1';

     

    Diskgroup created.

     

    SQL>

    ASMCMD> lsdg

    State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name

    MOUNTED  EXTERN  N         512   4096  1048576     10240     6487                0            6487              0             N  DATA/

    MOUNTED  EXTERN  N         512   4096  1048576     10240    10144                0           10144              0             N  FRA/

    MOUNTED  EXTERN  N         512   4096  1048576      6144     1672                0            1672              0             Y  OCR/

    MOUNTED  EXTERN  N         512   4096  1048576     20480    20381                0           20381              0             N  TESTMUL/

    ASMCMD>

     

    [root@raclhr-12cR1-N1 ~]# crsctl stat res -t | grep -2 TESTMUL

                   ONLINE  ONLINE       raclhr-12cr1-n1          STABLE

                   ONLINE  ONLINE       raclhr-12cr1-n2          STABLE

    ora.TESTMUL.dg

                   ONLINE  ONLINE       raclhr-12cr1-n1          STABLE

                   ONLINE  ONLINE       raclhr-12cr1-n2          STABLE

    [root@raclhr-12cR1-N1 ~]#

     

     

3.3.1  Test the disk group

    [oracle@raclhr-12cR1-N1 ~]$ sqlplus / as sysdba

     

    SQL*Plus: Release 12.1.0.2.0 Production on Mon Jan 23 16:17:28 2017

     

    Copyright (c) 1982, 2014, Oracle.  All rights reserved.

     

     

    Connected to:

    Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

    With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,

    Advanced Analytics and Real Application Testing options

     

    SQL> create tablespace TESTMUL datafile '+TESTMUL' size 10M;

     

    Tablespace created.

     

    SQL> select name from v$datafile;

     

    NAME

    --------------------------------------------------------------------------------

    +DATA/LHRRAC/DATAFILE/system.258.933550527

    +DATA/LHRRAC/DATAFILE/undotbs2.269.933551323

    +DATA/LHRRAC/DATAFILE/sysaux.257.933550483

    +DATA/LHRRAC/DATAFILE/undotbs1.260.933550575

    +DATA/LHRRAC/DATAFILE/example.268.933550723

    +DATA/LHRRAC/DATAFILE/users.259.933550573

    +TESTMUL/LHRRAC/DATAFILE/testmul.256.934042679

     

    7 rows selected.

     

    SQL>

     

     

Shut down one NIC (eth1) on the storage server:

    [root@OFLHR ~]# ip a

    1: lo: <loopback,up,10000> mtu 16436 qdisc noqueue

        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

        inet 127.0.0.1/8 scope host lo

        inet6 ::1/128 scope host

           valid_lft forever preferred_lft forever

    2: eth0: <broadcast,multicast,up,10000> mtu 1500 qdisc pfifo_fast qlen 1000

        link/ether 00:0c:29:98:1a:cd brd ff:ff:ff:ff:ff:ff

        inet 192.168.59.200/24 brd 192.168.59.255 scope global eth0

        inet6 fe80::20c:29ff:fe98:1acd/64 scope link

           valid_lft forever preferred_lft forever

    3: eth1: <broadcast,multicast,up,10000> mtu 1500 qdisc pfifo_fast qlen 1000

        link/ether 00:0c:29:98:1a:d7 brd ff:ff:ff:ff:ff:ff

        inet 192.168.2.200/24 brd 192.168.2.255 scope global eth1

        inet6 fe80::20c:29ff:fe98:1ad7/64 scope link

           valid_lft forever preferred_lft forever

    [root@OFLHR ~]# ifconfig eth1 down

    [root@OFLHR ~]# ip a

    1: lo: <loopback,up,10000> mtu 16436 qdisc noqueue

        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

        inet 127.0.0.1/8 scope host lo

        inet6 ::1/128 scope host

           valid_lft forever preferred_lft forever

    2: eth0: <broadcast,multicast,up,10000> mtu 1500 qdisc pfifo_fast qlen 1000

        link/ether 00:0c:29:98:1a:cd brd ff:ff:ff:ff:ff:ff

        inet 192.168.59.200/24 brd 192.168.59.255 scope global eth0

        inet6 fe80::20c:29ff:fe98:1acd/64 scope link

           valid_lft forever preferred_lft forever

    3: eth1: <broadcast,multicast> mtu 1500 qdisc pfifo_fast qlen 1000

        link/ether 00:0c:29:98:1a:d7 brd ff:ff:ff:ff:ff:ff

        inet 192.168.2.200/24 brd 192.168.2.255 scope global eth1

    [root@OFLHR ~]#

     

     

Check the logs on the RAC node:

    [root@raclhr-12cR1-N1 ~]# tail -f /var/log/messages

    Jan 23 16:20:51 raclhr-12cR1-N1 iscsid: connect to 192.168.2.200:3260 failed (No route to host)

    Jan 23 16:20:57 raclhr-12cR1-N1 iscsid: connect to 192.168.2.200:3260 failed (No route to host)

    Jan 23 16:21:03 raclhr-12cR1-N1 iscsid: connect to 192.168.2.200:3260 failed (No route to host)

    [root@raclhr-12cR1-N1 ~]# multipath -ll

    VMLHRStorage003 (14f504e46494c4552614e35626c6f2d4e794d702d4c344a6c) dm-8 OPNFILER,VIRTUAL-DISK

    size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

    `-+- policy='round-robin 0' prio=1 status=active

      |- 5:0:0:3 sdm 8:192 failed faulty running

      `- 4:0:0:3 sdl 8:176 active ready  running

    VMLHRStorage002 (14f504e46494c455233384b7353432d52454b4c2d79506757) dm-9 OPNFILER,VIRTUAL-DISK

    size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

    `-+- policy='round-robin 0' prio=1 status=active

      |- 5:0:0:2 sdj 8:144 failed faulty running

      `- 4:0:0:2 sdk 8:160 active ready  running

    VMLHRStorage001 (14f504e46494c455242674c7079392d753750482d63734443) dm-7 OPNFILER,VIRTUAL-DISK

    size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

    `-+- policy='round-robin 0' prio=1 status=active

      |- 4:0:0:1 sdi 8:128 active ready  running

      `- 5:0:0:1 sdh 8:112 failed faulty running

    VMLHRStorage000 (14f504e46494c455232326c6c76442d4361634f2d4d4f4d41) dm-6 OPNFILER,VIRTUAL-DISK

    size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

    `-+- policy='round-robin 0' prio=1 status=active

      |- 4:0:0:0 sdf 8:80  active ready  running

      `- 5:0:0:0 sdg 8:96  failed faulty running

    [root@raclhr-12cR1-N1 ~]#
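The failed faulty entries above are exactly what a monitoring check should look for. A sketch that scans saved `multipath -ll` output for degraded paths (it reads from a file so it can be tested offline; the helper name is illustrative):

```shell
#!/bin/sh
# paths_healthy FILE - succeed only if no path in the `multipath -ll`
# output stored in FILE is marked failed or faulty
paths_healthy() {
    ! grep -Eq 'failed|faulty' "$1"
}

# live usage would be: multipath -ll > /tmp/mp.out && paths_healthy /tmp/mp.out
```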

     

The tablespace is still accessible:

    SQL> create table tt tablespace TESTMUL as select * from dual;

     

    Table created.

     

    SQL> select * from tt;

     

    D

    -

    X

     

    SQL>

     

Similarly, bring eth1 back up and take eth0 down; the tablespace remains accessible. After restarting the cluster and the storage, everything comes back up normally.

Chapter 4  Testing multipath

Rebuild a fresh multipath environment to test the setup.

The simplest test is to write to the disk with dd while watching each path's throughput and state with iostat, to judge whether failover and load balancing behave as expected:

# dd if=/dev/zero of=/dev/mapper/mpath0

# iostat -k 2
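Note that this dd writes zeros until the device is full and destroys whatever is on it, so only run it against a disposable LUN. For a quick throughput check a bounded, synced variant is safer; a sketch (the scratch-file default is only so the snippet is harmless to run as-is; pass /dev/mapper/<alias> of a test LUN for the real exercise):

```shell
#!/bin/sh
# dd_probe TARGET [MB] - write MB megabytes of zeros to TARGET and fsync,
# so iostat in another window sees a short, bounded burst of writes
dd_probe() {
    target=${1:-/tmp/mp_dd_probe.bin}
    mb=${2:-8}
    dd if=/dev/zero of="$target" bs=1M count="$mb" conv=fsync 2>/dev/null
}

# e.g.: dd_probe /dev/mapper/VMLHRStorage001 256   # watch `iostat -k 2` meanwhile
```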

    [root@orcltest ~]# multipath -ll

    VMLHRStorage003 (14f504e46494c4552674a61727a472d523449782d5336784e) dm-3 OPNFILER,VIRTUAL-DISK

    size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

    `-+- policy='round-robin 0' prio=1 status=active

      |- 35:0:0:2 sdf 8:80  active ready running

      `- 36:0:0:2 sdg 8:96  active ready running

    VMLHRStorage002 (14f504e46494c4552506a5a5954422d6f6f4e652d34423171) dm-2 OPNFILER,VIRTUAL-DISK

    size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

    `-+- policy='round-robin 0' prio=1 status=active

      |- 35:0:0:3 sdh 8:112 active ready running

      `- 36:0:0:3 sdi 8:128 active ready running

    VMLHRStorage001 (14f504e46494c4552324b583573332d774e5a622d696d7334) dm-1 OPNFILER,VIRTUAL-DISK

    size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

    `-+- policy='round-robin 0' prio=1 status=active

      |- 35:0:0:1 sdd 8:48  active ready running

      `- 36:0:0:1 sde 8:64  active ready running

    VMLHRStorage000 (14f504e46494c45523431576859532d643246412d5154564f) dm-0 OPNFILER,VIRTUAL-DISK

    size=10G features='1 queue_if_no_path' hwhandler='0' wp=rw

    `-+- policy='round-robin 0' prio=1 status=active

      |- 35:0:0:0 sdb 8:16  active ready running

      `- 36:0:0:0 sdc 8:32  active ready running

    [root@orcltest ~]# dd if=/dev/zero of=/dev/mapper/VMLHRStorage001

     

     

     

     

Open another window and run iostat -k 2; you can see:

    avg-cpu:  %user   %nice %system %iowait  %steal   %idle

               0.00    0.00    5.23   20.78    0.00   73.99

     

    Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn

    sda               9.00         0.00        92.00          0        184

    scd0              0.00         0.00         0.00          0          0

    sdb               0.00         0.00         0.00          0          0

    sdc               0.00         0.00         0.00          0          0

    sdd            1197.50      4704.00     10886.00       9408      21772

    sde            1197.50      4708.00     10496.00       9416      20992

    sdh               0.00         0.00         0.00          0          0

    sdi               0.00         0.00         0.00          0          0

    sdf               0.00         0.00         0.00          0          0

    sdg               0.00         0.00         0.00          0          0

    dm-0              0.00         0.00         0.00          0          0

    dm-4              0.00         0.00         0.00          0          0

    dm-10             0.00         0.00         0.00          0          0

    dm-1           2395.00      9412.00     21382.00      18824      42764

    dm-2              0.00         0.00         0.00          0          0

    dm-3              0.00         0.00         0.00          0          0

    dm-5              0.00         0.00         0.00          0          0

    dm-6              0.00         0.00         0.00          0          0

    dm-7              0.00         0.00         0.00          0          0

    dm-8              0.00         0.00         0.00          0          0

    dm-9              0.00         0.00         0.00          0          0
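In the output above the two paths of VMLHRStorage001 (sdd and sde) carry almost identical write rates, which shows round-robin load balancing working. Extracting just those per-path columns makes the comparison easier; a sketch over a saved iostat report (helper name illustrative):

```shell
#!/bin/sh
# path_rates FILE DEV1,DEV2,... - print device name and kB_wrtn/s
# (the 4th field of `iostat -k` device lines) for the named devices
path_rates() {
    awk -v list="$2" '
        BEGIN { n = split(list, d, ","); for (i = 1; i <= n; i++) want[d[i]] = 1 }
        ($1 in want) { print $1, $4 }' "$1"
}

# e.g.: iostat -k 2 2 > /tmp/io.out && path_rates /tmp/io.out sdd,sde
```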

     

     


That wraps up using OpenFiler to emulate shared storage for RAC ASM disks and to test multipath. 2016 is over; today is January 23rd, tomorrow the 24th, and Xiaomaimiao is heading home for the New Year. O(∩_∩)O~

4.1  More multipath theory

After multipath creates a mapping, several device nodes under /dev point to the same path group:

/dev/mapper/mpathn

/dev/mpath/mpathn

/dev/dm-n

Their origins, however, are completely different:

/dev/mapper/mpathn are the multipath devices proper, created by the device mapper during boot. These are the devices to use, for example when creating logical volumes.

/dev/mpath/mpathn are convenience links created by the udev device manager so that all multipath devices are visible in one directory; they simply point to the dm-n nodes, may not exist early enough in the boot sequence, and should not be used for mounting or for creating logical volumes or filesystems.

/dev/dm-n are internal to the device-mapper software and should never be used directly or mounted.

In short: use the device nodes under /dev/mapper/. They can be partitioned with fdisk or turned into LVM PVs directly.
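To confirm which dm-n node a /dev/mapper alias actually resolves to, following the symlink is enough; a sketch:

```shell
#!/bin/sh
# resolve_mapper PATH - follow symlinks to the canonical device node,
# e.g. /dev/mapper/VMLHRStorage000 -> /dev/dm-6
resolve_mapper() {
    readlink -f "$1"
}

# e.g.: resolve_mapper /dev/mapper/VMLHRStorage000
```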

     

About Me

...............................................................................................................................

● Author: Xiaomaimiao (小麦苗), focused solely on database technology, and even more on how that technology is applied

● This article is published simultaneously on itpub (http://blog.itpub.net/26736162), cnblogs (http://www.cnblogs.com/lhrbest) and the author's WeChat public account (xiaomaimiaolhr)

● itpub address: http://blog.itpub.net/26736162/viewspace-2132858/

● cnblogs address: http://www.cnblogs.com/lhrbest/p/6345157.html

● PDF on the author's cloud drive: http://blog.itpub.net/26736162/viewspace-1624453/

● QQ group: 230161599     WeChat group: contact me directly

● To reach me, add QQ 642808185 and state your reason for adding

● Written 2017-01-22 08:00 ~ 2017-01-23 24:00 at the Agricultural Bank of China

● The content comes from the author's study notes, partly collated from the web; apologies for any infringement or inaccuracies

● All rights reserved; feel free to share this article, and please keep the attribution when reposting

...............................................................................................................................

Scan the left image with the WeChat client to follow the public account xiaomaimiaolhr, and the QR code on the right to join the QQ group, for the most practical database skills.

   DBA笔试面试讲解
