Disks, Partitions, and Filesystems
Block size is a filesystem abstraction, not a property of the disk itself. Sector size is a physical property of the disk: the smallest addressable unit of the device. On Linux:
1. Check the block size of a partition: blockdev --getbsz /dev/sda7
2. Check the disk's sector size: shown in the output of fdisk -l
The Linux kernel additionally requires Block_Size = Sector_Size * 2^n, and Block_Size <= the memory page size (PAGE_SIZE).

[root@250-shiyan ~]# blockdev --help
Usage:
  blockdev -V
  blockdev --report [devices]
  blockdev [-v|-q] commands devices
Available commands:
  --getsz                   get size in 512-byte sectors
  --setro                   set read-only
  --setrw                   set read-write
  --getro                   get read-only
  --getss                   get logical block (sector) size
  --getpbsz                 get physical block (sector) size
  --getiomin                get minimum I/O size
  --getioopt                get optimal I/O size
  --getalignoff             get alignment offset
  --getmaxsect              get max sectors per request
  --getbsz                  get blocksize
  --setbsz BLOCKSIZE        set blocksize
  --getsize                 get 32-bit sector count
  --getsize64               get size in bytes
  --setra READAHEAD         set readahead
  --getra                   get readahead
  --setfra FSREADAHEAD      set filesystem readahead
  --getfra                  get filesystem readahead
  --flushbufs               flush buffers
  --rereadpt                reread partition table

[root@250-shiyan ~]# blockdev --report
RO    RA   SSZ   BSZ   StartSec          Size   Device
rw   256   512  4096          0   17179869184   /dev/sda
rw   256   512  1024       2048     524288000   /dev/sda1
rw   256   512  4096    1026048   16654532608   /dev/sda2
rw   256   512  4096          0   15611199488   /dev/dm-0
rw   256   512  4096          0    1040187392   /dev/dm-1

[root@250-shiyan ~]# getconf -a
LINK_MAX                 32000
_POSIX_LINK_MAX          32000
MAX_CANON                255
_POSIX_MAX_CANON         255
MAX_INPUT                255
_POSIX_MAX_INPUT         255
NAME_MAX                 255
_POSIX_NAME_MAX          255
PATH_MAX                 4096
_POSIX_PATH_MAX          4096
PIPE_BUF                 4096
_POSIX_PIPE_BUF          4096
SOCK_MAXBUF
_POSIX_ASYNC_IO
_POSIX_CHOWN_RESTRICTED  1
_POSIX_NO_TRUNC          1
_POSIX_PRIO_IO
_POSIX_SYNC_IO
_POSIX_VDISABLE          0
ARG_MAX                  2621440
ATEXIT_MAX               2147483647
CHAR_BIT                 8
CHAR_MAX                 127
CHAR_MIN                 -128
CHILD_MAX                4096

[root@250-shiyan ~]# getconf PAGESIZE
4096

[root@250-shiyan ~]# getconf --help
Usage: getconf [-v SPEC] VAR
  or:  getconf [-v SPEC] PATH_VAR PATH
Get the configuration value for variable VAR, or for variable PATH_VAR
for path PATH.
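The kernel's two constraints above (block size is a power-of-two multiple of the sector size, and no larger than the page size) can be checked with a little shell arithmetic. This is a sketch; `check_bsz` is a hypothetical helper, fed the SSZ/BSZ/PAGESIZE values reported above rather than read live from the device.

```shell
# Sketch: check that a filesystem block size satisfies the kernel's
# constraints: bsz = ssz * 2^n and bsz <= page size.
check_bsz() {
  local ssz=$1 bsz=$2 pagesz=$3
  # must be an exact multiple of the sector size
  [ $(( bsz % ssz )) -eq 0 ] || { echo invalid; return; }
  local ratio=$(( bsz / ssz ))
  # ratio must be a power of two: ratio & (ratio-1) == 0
  if [ $(( ratio & (ratio - 1) )) -eq 0 ] && [ "$bsz" -le "$pagesz" ]; then
    echo valid
  else
    echo invalid
  fi
}

# Values from the blockdev --report above: SSZ=512, BSZ=4096, PAGESIZE=4096
check_bsz 512 4096 4096   # valid
check_bsz 512 3072 4096   # 3072/512 = 6, not a power of two
check_bsz 512 8192 4096   # larger than the page size
```

Note the 1024-byte block size on /dev/sda1 in the report is also valid: 1024 = 512 * 2^1 and 1024 <= 4096.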
If SPEC is given, give values for compilation environment SPEC.
For bug reporting instructions, please see:
<http://www.gnu.org/software/libc/bugs.html>.

[root@250-shiyan ~]# man getconf
No manual entry for getconf

[root@250-shiyan ~]# tune2fs -l /dev/sda1
tune2fs 1.41.12 (17-May-2010)
Filesystem volume name:   <none>
Last mounted on:          /boot
Filesystem UUID:          b2a693c0-8b1a-464e-946b-ced87e573fe8
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash
Default mount options:    user_xattr acl
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              128016
Block count:              512000
Reserved block count:     25600
Free blocks:              463173
Free inodes:              127978
First block:              1
Block size:               1024
Fragment size:            1024
Reserved GDT blocks:      256
Blocks per group:         8192
Fragments per group:      8192
Inodes per group:         2032
Inode blocks per group:   254
Flex block group size:    16
Filesystem created:       Thu Sep 18 17:32:11 2014
Last mount time:          Thu Nov 27 07:36:12 2014
Last write time:          Thu Nov 27 07:36:12 2014
Mount count:              10
Maximum mount count:      -1
Last checked:             Thu Sep 18 17:32:11 2014
Check interval:           0 (<none>)
Lifetime writes:          46 MB
Reserved blocks uid:      0 (user root)
Reserved blocks gid:      0 (group root)
First inode:              11
Inode size:               128
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      f59608bb-17c6-455b-8ccd-e3d4d5e99f2e
Journal backup:           inode blocks

[root@250-shiyan ~]# dumpe2fs -h /dev/sda1
dumpe2fs 1.41.12 (17-May-2010)
(... same superblock fields as tune2fs -l above, plus the journal details ...)
Journal features:         (none)
Journal size:             8M
Journal length:           8192
Journal sequence:         0x00000030
Journal start:            1
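The superblock fields above lend themselves to quick sanity arithmetic; for example, free space in bytes is Free blocks × Block size. A small sketch (`fs_free_bytes` is a hypothetical helper name, fed the two values from the tune2fs output):

```shell
# Sketch: derive free space in bytes from the "Free blocks" and
# "Block size" fields of tune2fs -l / dumpe2fs -h output.
fs_free_bytes() {
  echo $(( $1 * $2 ))   # free_blocks * block_size
}

# With the values above (Free blocks: 463173, Block size: 1024):
fs_free_bytes 463173 1024    # 474289152 bytes, about 452 MiB
```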
[root@Firewall ~]# hdparm -i /dev/sda      show disk parameters
[root@Firewall ~]# fdisk -l                list all partitions
[root@Firewall ~]# df -h|column -t         show usage of each partition
[root@Firewall ~]# mount|column -t         show mounted filesystems
[root@Firewall ~]# du -sh /var/log         show the size of a given directory
[root@Firewall ~]# swapon -s               list all swap areas
Filename        Type       Size     Used   Priority
/dev/sda3       partition  6144852  0      -1

[root@8a2serv ~]# dmesg |grep -i raid      check for a RAID card
device-mapper: dm-raid45: initialized v0.2594l
md: Autodetecting RAID arrays.
[root@rac01 ~]# dmesg |grep -i raid        check for a RAID card
scsi0 : LSI SAS based MegaRAID driver
  Vendor: IBM   Model: ServeRAID M1015   Rev: 2.13
device-mapper: dm-raid45: initialized v0.2594l
md: Autodetecting RAID arrays.
[root@rac01 proc]# cd /proc/scsi/          the entries in this directory are closely tied to the disks
Viewing the logical volumes behind a fibre channel card

[root@rac01 DS_3524]# pwd
/proc/mpp/DS_3524
This host is connected to two controllers, each presenting 5 LUNs:
[root@rac01 DS_3524]# ls
controllerA  controllerB  virtualLun0  virtualLun1  virtualLun2  virtualLun3  virtualLun4
[root@rac01 DS_3524]# cat virtualLun0
50G 1_file_data /dev/sdb1
Lun WWN: 60080e5000364d6c000010f25253578b                            <- the ID as seen on the Linux host
Logical Drive ID: 60:08:0e:50:00:36:4d:6c:00:00:10:f2:52:53:57:8b    <- the same ID as seen on the storage array
[root@rac01 DS_3524]# cat virtualLun1
5G 2_ora_crs
Lun WWN: 60080e5000364ef200000cf0525355eb
Logical Drive ID: 60:08:0e:50:00:36:4e:f2:00:00:0c:f0:52:53:55:eb
[root@rac01 DS_3524]# cat virtualLun2
100G 2_ora_flash
Lun WWN: 60080e5000364d6c000010f5525357c7
Logical Drive ID: 60:08:0e:50:00:36:4d:6c:00:00:10:f5:52:53:57:c7
[root@rac01 DS_3524]# cat virtualLun3
100G 2_ora_arch
Lun WWN: 60080e5000364ef200000cf25253562a
Logical Drive ID: 60:08:0e:50:00:36:4e:f2:00:00:0c:f2:52:53:56:2a
[root@rac01 DS_3524]# cat virtualLun4
1.4T 2_ora_data /dev/sdf1
Lun WWN: 60080e5000364d6c000010f752535806
Logical Drive ID: 60:08:0e:50:00:36:4d:6c:00:00:10:f7:52:53:58:06
/proc/mpp/DS_3524/controllerA/qla2xxx_h8c0t0
/proc/mpp/DS_3524/controllerB/qla2xxx_h7c0t0
[root@rac01 controllerA]# cd qla2xxx_h8c0t0
[root@rac01 qla2xxx_h8c0t0]# ll
total 0
-rw-r--r-- 1 root root 0 Apr 17 15:28 LUN0
-rw-r--r-- 1 root root 0 Apr 17 15:28 LUN1
-rw-r--r-- 1 root root 0 Apr 17 15:28 LUN2
-rw-r--r-- 1 root root 0 Apr 17 15:28 LUN3
-rw-r--r-- 1 root root 0 Apr 17 15:28 LUN4
[root@rac02 fc_host]# cat /proc/mpp/DS_3524/controllerA/qla2xxx_h8c0t0/LUN0
Linux MPP driver. Version:09.03.0C05.0642 Build:Wed Jul 18 18:01:59 CDT 2012
Lun WWN:60080e5000364d6c000010f25253578b
Physical HBA driver: qla2xxx
Device Scsi Address: host_no:8 channel:0 target:0 Lun:0
Queue Depth = 32
I/O Statistics:
  Number of IOs:5559178
  Longest trip of all I/Os:2
  Shortest trip of all I/Os:0
  Number of occurences of IO failed events:1
Device state: [0] OPTIMAL
Device state: [1] OPTIMAL
Device state: [2] OPTIMAL
Device state: [3] OPTIMAL
Device state: [4] OPTIMAL
Device state: [5] OPTIMAL
Device state: [6] OPTIMAL
Device state: [7] OPTIMAL
Device state: [8] OPTIMAL
Device state: [9] OPTIMAL
Path state:[8] OPTIMAL
Path state:[9] OPTIMAL
Path state:[0] OPTIMAL_NEED_CHECK
Path state:[1] OPTIMAL_CHECKING
Path state:[2] OPTIMAL
Path state:[3] OPTIMAL_NEED_CHECK
Path state:[4] OPTIMAL_CHECKING
Path state:[5] OPTIMAL
Path state:[6] OPTIMAL_NEED_CHECK
Path state:[7] OPTIMAL_CHECKING
Controller Failed?
0
Outstanding IOs on this device: total size:1143

After a LUN has been mapped on fibre channel storage, Linux can be made to recognize it in several ways:

A. Reboot the operating system
Rebooting the host is the reliable way to detect newly added disk devices. Reboot only after all I/O has stopped, with the disk driver linked statically or loaded as a module. During system initialization the PCI bus is scanned, so a SCSI host adapter on it is discovered and a PCI device is created for it. The scanning code then loads the appropriate driver for that PCI device. When the SCSI host driver loads, its probe function initializes the SCSI host, registers the interrupt handler, and finally calls scsi_scan_host to scan all SCSI buses managed by the adapter.

B. Reload the HBA driver
Normally the HBA driver is loaded as a module, which allows it to be unloaded and reloaded; the SCSI scan functions run during this process. Before unloading the HBA driver, all I/O on the SCSI devices should be stopped, filesystems unmounted, and multipath services stopped. Any agent or HBA helper applications should also be stopped.
Example: if fdisk -l on one RAC node cannot see the shared disks, try:
# modprobe -r lpfc     (unload the driver)
# modprobe lpfc        (load the driver)
If a new LUN has been mapped, reloading the driver makes it visible. qla2xxx is the driver for QLogic fibre channel HBAs:
# modprobe -r qla2xxx
# modprobe -v qla2xxx

C. Run the kudzu command to rescan for new hardware.

D. Trigger a rescan through /sys
In 2.6 kernels the HBA driver exports a scan interface under /sys, which can be used to rescan the SCSI disk devices on that host adapter.
Check how many HBA cards the machine has:
[root@rac02 host7]# cd /sys/class/fc_host/
[root@rac02 fc_host]# ls
host7  host8
Then find the matching hostX directory under /sys/class/scsi_host/; it contains a file named scan that is write-only (no read permission). Writing "- - -" to it scans for new LUNs:
[root@rac02 fc_host]# echo "- - -" > /sys/class/scsi_host/host7/scan
The three dashes stand for channel, target and LUN number; the command above scans every channel, target and visible LUN under host7.
[root@rac02 fc_host]# fdisk -l     now shows the newly added LUN

E. Use the HBA vendor's scan script
QLogic: download the vendor tool, a Linux shell script, from the QLogic website:
http://driverdownloads.qlogic.com/QLogicDriverDownloads_UI/SearchByOs.aspx?ProductCategory=39&OsCategory=2&Os=65&OsCategoryName=Linux&ProductCategoryName=Fibre+Channel+Adapters&OSName=Linux+Red+Hat+(64-bit)
The QLogic FC HBA LUN Scan Utility recognizes newly added LUNs without rebooting the system and without reloading the QLogic FC driver.
Rescan all HBAs with any of:
# ./ql-dynamic-tgt-lun-disc.sh
# ./ql-dynamic-tgt-lun-disc.sh -s
# ./ql-dynamic-tgt-lun-disc.sh --scan
Rescan and also remove lost LUNs with either of:
# ./ql-dynamic-tgt-lun-disc.sh -s -r
# ./ql-dynamic-tgt-lun-disc.sh --scan --refresh
Emulex: the Emulex LUN Scan Utility script dynamically scans newly added LUNs:
# gunzip lun_scan.sh.gz
# chmod a+x lun_scan
Scan all lpfc HBAs:
# lun_scan all
Scan the lpfc HBA with SCSI host number 2:
# lun_scan 2
Afterwards confirm the OS sees the new device:
# fdisk -l
If PowerPath is installed, also run:
# powermt config
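Method D above is often wrapped in a loop over every SCSI host. The sketch below does that; `rescan_all_hosts` is a hypothetical helper, and the sysfs root is a parameter only so the function can be tried against a fake directory tree. On a real system you would call it as `rescan_all_hosts /sys/class/scsi_host`.

```shell
# Sketch of method D as a loop: write "- - -" (channel, target and LUN
# wildcards) into every scsi_host's write-only scan file.
rescan_all_hosts() {
  local root=${1:-/sys/class/scsi_host}
  local host
  for host in "$root"/host*; do
    [ -e "$host/scan" ] || continue   # skip non-matching glob / hosts without scan
    echo "- - -" > "$host/scan"
    echo "rescanned ${host##*/}"
  done
}
```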
Using the MegaCli tool
http://www.opstool.com/article/184
dmesg|grep -i raid
lspci|grep -i raid
This one was found on 2019-01-07:
Blog post: https://blog.csdn.net/xinqidian_xiao/article/details/80940306
Download: wget https://docs.broadcom.com/docs-and-downloads/raid-controllers/raid-controllers-common-files/8-07-06_MegaCLI.zip
Another one, found in 2016:
http://support.lenovo.com/us/en/downloads/ds031558
wget https://download.lenovo.com/ibmdl/pub/pc/pccbbs/thinkservers/te8msm01sr17.tgz
gzip -d te8msm01sr17.tgz
tar xvf te8msm01sr17.tar
cd linux/
less readme.txt
unzip MegaCliLin.zip
[root@rac1 linux]# ll
total 5608
-rw-r--r--. 1 root root 1588725 May 17  2011 Lib_Utils-1.00-09.noarch.rpm
-rw-r--r--. 1 root root 1286113 Mar 15  2012 MegaCli-8.03.08-1.noarch.rpm
-rwxr--r--. 1 root root 2859511 Mar 15  2012 MegaCliLin.zip
-rwxr--r--. 1 root root    2736 Mar 15  2012 readme.txt
rpm -ivh *.rpm
cd /opt
RAID Level mapping:
RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0 | RAID 1 |
RAID Level : Primary-0, Secondary-0, RAID Level Qualifier-0 | RAID 0 |
RAID Level : Primary-5, Secondary-0, RAID Level Qualifier-3 | RAID 5 |
RAID Level : Primary-1, Secondary-3, RAID Level Qualifier-0 | RAID 10 |
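The Primary/Secondary/Qualifier triple that MegaCli prints can be mapped back to the usual RAID name with a small case statement. A minimal sketch (`raid_name` is a hypothetical helper; it only covers the four combinations in the table above):

```shell
# Map MegaCli's "RAID Level : Primary-X, Secondary-Y, RAID Level
# Qualifier-Z" triple to the common RAID name.
raid_name() {
  case "$1-$2-$3" in      # primary-secondary-qualifier
    1-0-0) echo "RAID 1" ;;
    0-0-0) echo "RAID 0" ;;
    5-0-3) echo "RAID 5" ;;
    1-3-0) echo "RAID 10" ;;
    *)     echo "unknown" ;;
  esac
}

raid_name 5 0 3    # RAID 5  (matches the M5110e example below)
raid_name 1 0 0    # RAID 1  (matches the M1015 example below)
```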
List the physical disks and their parameters:
/opt/MegaRAID/MegaCli/MegaCli64 -PDList -aAll -NoLog
Check the RAID level:
/opt/MegaRAID/MegaCli/MegaCli64 -LdPdInfo -aAll -NoLog
[root@rac1 ~]# /opt/MegaRAID/MegaCli/MegaCli64 -LDInfo -LALL -aAll
[root@rac1 ~]# /opt/MegaRAID/MegaCli/MegaCli64 -cfgdsply -aALL
==============================================================================
Adapter: 0
Product Name: ServeRAID M5110e
Memory: 512MB
BBU: Present
Serial No: 52B06N
==============================================================================
Number of DISK GROUPS: 1
DISK GROUP: 0
Number of Spans: 1
SPAN: 0
Span Reference: 0x00
Number of PDs: 3
Number of VDs: 1
Number of dedicated Hotspares: 0
Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name :
RAID Level : Primary-5, Secondary-0, RAID Level Qualifier-3
Size : 556.929 GB
Parity Size : 278.464 GB
State : Optimal
Strip Size : 128 KB
Number Of Drives : 3
Span Depth : 1
[root@rac01 linux]# /opt/MegaRAID/MegaCli/MegaCli64 -cfgdsply -aALL
==============================================================================
Adapter: 0
Product Name: ServeRAID M1015 SAS/SATA Controller
Memory: 0MB
BBU: Absent
Serial No: SP24304951
==============================================================================
Number of DISK GROUPS: 1
DISK GROUP: 0
Number of Spans: 1
SPAN: 0
Span Reference: 0x00
Number of PDs: 2
Number of VDs: 1
Number of dedicated Hotspares: 0
Virtual Drive Information:
Virtual Drive: 0 (Target Id: 0)
Name :
RAID Level : Primary-1, Secondary-0, RAID Level Qualifier-0
Size : 278.464 GB
Mirror Data : 278.464 GB
State : Optimal
Strip Size : 64 KB
Number Of Drives : 2
Span Depth : 1
I just searched for multi-path and found two keywords: MPIO and RDAC. Both describe one host reaching the same LUN over two fibre paths, not multiple hosts accessing the same LUN.
Multiple hosts can mount the same LUN at the same time, but data inconsistency will follow unless a cluster filesystem is installed. If every host mounts it read-only it can work; as soon as two hosts write, shared-storage software is required. Whether simultaneous mounting works depends on the filesystem and the application supporting concurrent writes: some filesystems do (e.g. Red Hat's GFS), and some applications do (Oracle RAC, apparently).
The usage described below is inherently broken: written data cannot actually be shared and is easily lost. Never use it in production.
Hosts A and B are both Linux and are presented the same LUN over FC.
A: after fdisk -l shows the disk, partition it, create a filesystem, and mount it; it works.
But whether or not A has it mounted, when B scans the LUN and tries to mount it, the mount fails:
[root@rac02 ~]# mount /dev/sdg1 /VM
mount: special device /dev/sdg1 does not exist
When B then tries to repartition, fdisk reports:
Partition number (1-4): 1
Partition 1 is already defined.  Delete it before re-adding it.
Only after deleting the partition, repartitioning and reformatting can B mount it.
At that point only one of A or B can write; the other only sees the newly written content after unmounting and remounting.
[root@rac02 log]# mount
/dev/sda2 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
/dev/mapper/vg0-var on /var type ext3 (rw)
/dev/mapper/vg0-tmp on /tmp type ext3 (rw)
/dev/mapper/vg0-home on /home type ext3 (rw)
/dev/mapper/vg0-bak on /backup type ext3 (rw)
/dev/mapper/vg0-oracle on /u01 type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
/dev/asm/archlv-215 on /archive type acfs (rw)
/dev/sdb1 on /fileserv type ext3 (rw)
192.168.2.2:/fileserv/ on /var/www/html/upfile type nfs (rw,addr=192.168.2.2)
192.168.2.2:/fileserv/ on /mnt/fileserver type nfs (rw,addr=192.168.2.2)
192.168.2.2:/fileserv/db on /mnt/db type nfs (rw,addr=192.168.2.2)

login as: root
root@192.168.2.1's password:
Last login: Tue Oct 28 08:19:41 2014 from 192.168.2.80
[root@rac01 ~]# mount
/dev/sda2 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/sda6 on /var type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw,size=16384M)
/dev/mapper/vg0-tmp on /tmp type ext3 (rw)
/dev/mapper/vg0-home on /home type ext3 (rw)
/dev/mapper/vg0-bak on /backup type ext3 (rw)
/dev/mapper/vg0-oracle on /u01 type ext3 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
oracleasmfs on /dev/oracleasm type oracleasmfs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
/dev/asm/archlv-215 on /archive type acfs (rw)
192.168.2.2:/fileserv/db on /mnt/db type nfs (rw,addr=192.168.2.2)
[root@rac01 ~]#
First, determine which fibre channel card is installed:
lspci | grep -i fibre
FC cards are basically one of two kinds:
  Emulex:  lsmod | grep lpfc
  QLogic:  lsmod | grep qla
[root@rac02 scsi]# lspci | grep -i fibre
04:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
05:00.0 Fibre Channel: QLogic Corp. ISP2532-based 8Gb Fibre Channel to PCI Express HBA (rev 02)
[root@rac02 scsi]# lsmod |grep qla
qla2xxx              1260801  10
scsi_transport_fc      83145  1 qla2xxx
scsi_mod              199640  11 scsi_dh,sr_mod,mppVhba,usb_storage,qla2xxx,scsi_transport_fc,libata,megaraid_sas,mppUpper,sg,sd_mod
[root@rac02 scsi]# lsmod |grep lpfc
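The vendor-to-driver choice above can be captured in a tiny helper so scripts pick the right module name from the lspci output. A sketch under the assumption that only the two vendors named above matter (`hba_driver` is a hypothetical name):

```shell
# Sketch: pick the HBA driver module to check for, based on the
# vendor string from "lspci | grep -i fibre".
hba_driver() {
  case "$1" in
    *[Qq][Ll]ogic*) echo qla2xxx ;;
    *[Ee]mulex*)    echo lpfc ;;
    *)              echo unknown ;;
  esac
}

hba_driver "QLogic Corp. ISP2532-based 8Gb Fibre Channel"   # qla2xxx
hba_driver "Emulex Corporation Saturn-X LightPulse"         # lpfc
```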
[root@250-shiyan dev]# free
             total       used       free     shared    buffers     cached
Mem:        502168     312656     189512          0     119592      81648
-/+ buffers/cache:     111416     390752
Swap:      1015800          0    1015800
[root@250-shiyan dev]# df
Filesystem                    1K-blocks    Used Available Use% Mounted on
/dev/mapper/VolGroup-lv_root   15006084 2083212  12160608  15% /
tmpfs                            251084       0    251084   0% /dev/shm
/dev/sda1                        495844   32671    437573   7% /boot
[root@250-shiyan dev]# fdisk -l

Disk /dev/sda: 17.2 GB, 17179869184 bytes
255 heads, 63 sectors/track, 2088 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00024d46

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        2089    16264192   8e  Linux LVM

Disk /dev/mapper/VolGroup-lv_root: 15.6 GB, 15611199488 bytes
255 heads, 63 sectors/track, 1897 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup-lv_swap: 1040 MB, 1040187392 bytes
255 heads, 63 sectors/track, 126 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

### Records the size of each swap area and the number of swap areas in use
[root@250-shiyan dev]# cat /proc/swaps
Filename                        Type            Size    Used    Priority
/dev/dm-1                       partition       1015800 0       -1
[root@250-shiyan dev]# swapon -s
Filename Type Size Used Priority
/dev/dm-1 partition 1015800 0 -1
### Records each disk partition's major/minor device numbers, size and name
[root@250-shiyan dev]# cat /proc/partitions
major minor  #blocks  name
   8        0   16777216 sda
   8        1     512000 sda1
   8        2   16264192 sda2     ### pv
 253        0   15245312 dm-0     ### lv_root
 253        1    1015808 dm-1     ### lv_swap
### A device special file corresponds to a device on the system. In the kernel, each device type has a device driver that handles all of that device's I/O requests.
### Each device file has a major and a minor device ID; the kernel uses the major ID to look up the driver for that device class.
### Each device driver registers its association with a specific major number with the kernel; this is what ties a device special file to its driver. The kernel does not use the device file name to find the driver.
### See: http://www.lanana.org/docs/device-list/devices-2.6+.txt
[root@250-shiyan dev]# ll
total 0
crw-rw----. 1 root video 10, 175 Nov 27 07:36 agpgart
drwxr-xr-x. 2 root root      640 Nov 27 07:36 block
drwxr-xr-x. 2 root root       80 Nov 27 07:36 bsg
lrwxrwxrwx. 1 root root        3 Nov 27 07:36 cdrom -> sr0
lrwxrwxrwx. 1 root root        3 Nov 27 07:36 cdrw -> sr0
drwxr-xr-x. 2 root root     2480 Nov 27 07:36 char
crw-------. 1 root root     5, 1 Nov 27 07:36 console
lrwxrwxrwx. 1 root root       11 Nov 27 07:36 core -> /proc/kcore
drwxr-xr-x. 3 root root       60 Nov 27 07:36 cpu
crw-rw----. 1 root root   10, 61 Nov 27 07:36 cpu_dma_latency
crw-rw----. 1 root root   10, 62 Nov 27 07:36 crash
drwxr-xr-x. 5 root root      100 Nov 27 07:36 disk
brw-rw----. 1 root disk   253, 0 Nov 27 07:36 dm-0
brw-rw----. 1 root disk   253, 1 Nov 27 07:36 dm-1
[root@250-shiyan dev]# ll|awk '$5~/^4,/{print $0}'
crw--w----. 1 root tty   4,  0 Nov 27 07:36 tty0
crw-------. 1 root root  4,  1 Nov 27 07:36 tty1
crw--w----. 1 root tty   4, 10 Nov 27 07:36 tty10
crw--w----. 1 root tty   4, 11 Nov 27 07:36 tty11
crw--w----. 1 root tty   4, 12 Nov 27 07:36 tty12
crw--w----. 1 root tty   4, 13 Nov 27 07:36 tty13
[root@250-shiyan dev]# ll|awk '$5!~/^[0-9]*,/{print $0}'|sort
drwxrwxrwt. 2 root root 40 Nov 27 07:36 shm
drwxr-xr-x. 2 root root 0 Nov 27 07:36 pts
drwxr-xr-x. 2 root root 100 Nov 27 07:36 mapper
drwxr-xr-x. 2 root root 2480 Nov 27 07:36 char
drwxr-xr-x. 2 root root 40 Nov 27 07:36 hugepages
drwxr-xr-x. 2 root root 60 Nov 27 07:36 net
drwxr-xr-x. 2 root root 60 Nov 27 07:36 raw
drwxr-xr-x. 2 root root 640 Nov 27 07:36 block
drwxr-xr-x. 2 root root 80 Nov 27 07:36 bsg
drwxr-xr-x. 2 root root 80 Nov 27 07:36 VolGroup
drwxr-xr-x. 3 root root 200 Nov 27 07:36 input
drwxr-xr-x. 3 root root 60 Nov 27 07:36 cpu
drwxr-xr-x. 5 root root 100 Nov 27 07:36 disk
lrwxrwxrwx. 1 root root 11 Nov 27 07:36 core -> /proc/kcore
lrwxrwxrwx. 1 root root 13 Nov 27 07:36 fd -> /proc/self/fd
lrwxrwxrwx. 1 root root 13 Nov 27 07:36 MAKEDEV -> /sbin/MAKEDEV
lrwxrwxrwx. 1 root root 15 Nov 27 07:36 stderr -> /proc/self/fd/2
lrwxrwxrwx. 1 root root 15 Nov 27 07:36 stdin -> /proc/self/fd/0
lrwxrwxrwx. 1 root root 15 Nov 27 07:36 stdout -> /proc/self/fd/1
lrwxrwxrwx. 1 root root 3 Nov 27 07:36 cdrom -> sr0
lrwxrwxrwx. 1 root root 3 Nov 27 07:36 cdrw -> sr0
lrwxrwxrwx. 1 root root 3 Nov 27 07:36 dvdrw -> sr0
lrwxrwxrwx. 1 root root 3 Nov 27 07:36 dvd -> sr0
lrwxrwxrwx. 1 root root 3 Nov 27 07:36 fb -> fb0
lrwxrwxrwx. 1 root root 3 Nov 27 07:36 scd0 -> sr0
lrwxrwxrwx. 1 root root 4 Nov 27 07:36 root -> dm-0
lrwxrwxrwx. 1 root root 4 Nov 27 07:36 rtc -> rtc0
lrwxrwxrwx. 1 root root 4 Nov 27 07:36 systty -> tty0
srw-rw-rw-. 1 root root 0 Nov 27 07:36 log
[root@rac01 dev]# cat /proc/partitions
major minor  #blocks  name
   8        0  291991552 sda
   8        1     200781 sda1
   8        2   30716280 sda2
   8        3          1 sda3
   8        4  228307747 sda4
   8        5   18434556 sda5
   8        6   14329948 sda6
   8       16   52428800 sdb
   8       17   52428784 sdb1
   8       32    5242880 sdc
   8       33    5238597 sdc1
   8       48  104857600 sdd
   8       49  104856223 sdd1
   8       64  104857600 sde
   8       65  104856223 sde1
   8       80 1489224192 sdf
   8       81 1489217436 sdf1
 253        0    5242880 dm-0
 253        1    5242880 dm-1
 253        2   20971520 dm-2
 253        3   10485760 dm-3
 252   110081  104595456 asm/archlv-215
   8       96  585537024 sdg
   8       97  585529056 sdg1
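The #blocks column in /proc/partitions is in 1 KiB units, so an entry's size in GiB is blocks / 1024 / 1024. A sketch (`part_size_gib` is a hypothetical helper; the sample below is fed inline, whereas a real run would pipe in /proc/partitions itself):

```shell
# The "#blocks" column of /proc/partitions is in 1 KiB units.
# Convert a named entry to GiB; on a real system, run:
#   part_size_gib sda < /proc/partitions
part_size_gib() {
  awk -v dev="$1" '$4 == dev { printf "%.1f\n", $3 / 1024 / 1024 }'
}

# Sample rows taken from the 250-shiyan listing earlier:
printf '%s\n' \
  '8        0   16777216 sda' \
  '8        1     512000 sda1' \
  '8        2   16264192 sda2' | part_size_gib sda    # 16.0
```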
[root@250-shiyan dev]# file -s /dev/sda
/dev/sda: x86 boot sector; GRand Unified Bootloader, stage1 version 0x3, boot drive 0x80, 1st sector stage2 0x849fc, GRUB version 0.94; partition 1: ID=0x83, active, starthead 32, startsector 2048, 1024000 sectors; partition 2: ID=0x8e, starthead 221, startsector 1026048, 32528384 sectors, code offset 0x48
[root@250-shiyan dev]# file -s /dev/sda1
/dev/sda1: Linux rev 1.0 ext4 filesystem data (needs journal recovery) (extents) (huge files)
[root@250-shiyan dev]# file -s /dev/sda2
/dev/sda2: LVM2 (Linux Logical Volume Manager) , UUID: 5jEyfdbGOeFeIcRUKGcMR0Y8NfUk1NN
[root@250-shiyan dev]# file -s /dev/dm-0
/dev/dm-0: Linux rev 1.0 ext4 filesystem data (needs journal recovery) (extents) (large files) (huge files)
[root@250-shiyan dev]# file -s /dev/dm-1
/dev/dm-1: Linux/i386 swap file (new style) 1 (4K pages) size 253951 pages

[root@250-shiyan fs]# pwd
/proc/fs
[root@250-shiyan fs]# ll
total 0
dr-xr-xr-x. 4 root root 0 Dec  3 16:27 ext4
dr-xr-xr-x. 2 root root 0 Dec  3 16:27 fscache
dr-xr-xr-x. 4 root root 0 Dec  3 16:27 jbd2
dr-xr-xr-x. 2 root root 0 Dec  3 16:27 nfsd
dr-xr-xr-x. 2 root root 0 Dec  3 16:27 nfsfs

### Filesystem types currently known to the kernel:
[root@250-shiyan dev]# cat /proc/filesystems
nodev	sysfs
nodev	rootfs
nodev	bdev
nodev	proc
nodev	cgroup
nodev	cpuset
nodev	tmpfs
nodev	devtmpfs
nodev	binfmt_misc
nodev	debugfs
nodev	securityfs
nodev	sockfs
nodev	usbfs
nodev	pipefs
nodev	anon_inodefs
nodev	inotifyfs
nodev	devpts
nodev	ramfs
nodev	hugetlbfs
	iso9660
nodev	pstore
nodev	mqueue
nodev	selinuxfs
	ext4
nodev	rpc_pipefs
nodev	nfs
nodev	nfs4

### Filesystems supported by the current kernel build:
[root@250-shiyan fs]# pwd
/lib/modules/2.6.32-431.el6.x86_64/kernel/fs
[root@250-shiyan fs]# ll
total 132
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 autofs4
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 btrfs
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 cachefiles
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 cifs
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 configfs
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 cramfs
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 dlm
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 ecryptfs
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 exportfs
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 ext2
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 ext3
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 ext4
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 fat
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 fscache
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 fuse
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 gfs2
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 jbd
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 jbd2
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 jffs2
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 lockd
-rwxr--r--. 1 root root 19920 Nov 22  2013 mbcache.ko
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 nfs
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 nfs_common
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 nfsd
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 nls
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 squashfs
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 ubifs
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 udf
drwxr-xr-x. 2 root root  4096 Sep 18 17:34 xfs

[root@250-shiyan nfsfs]# fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): l
(prints the full partition type code table; the Linux-relevant entries are:
  5  Extended          82  Linux swap / Solaris   83  Linux
 85  Linux extended    8e  Linux LVM              fd  Linux raid autodetect
 ee  GPT               ef  EFI (FAT-12/16/32))
The cpq_cciss driver for Smart Array 6-series and 5-series array controllers

What does /dev/cciss/c0d0p* mean?
Disks attached directly to a SCSI card get device names like /dev/sda, /dev/sdb, ...
On HP DL 380/530/570/580 servers the disks hang off the array controller, so they are named /dev/cciss/c0d0px:
c0: the first controller
d0: the first disk
p1: the first partition
(the partition naming is similar to BSD's)
After installing Linux on some machines you will find no hda, hdb or sda device files under /dev. So where are the attached disks? Checking with the mount command shows that the disk devices live under /dev/cciss, with names like c0d0p1.

What is cciss? It is the HP Smart Array block driver, a fairly old block driver for HP RAID controllers. It is provided as a Linux module and can be loaded and unloaded with modprobe. Supported devices include:
Smart Array 5300
Smart Array 5i
Smart Array 532
Smart Array 5312
Smart Array 641
Smart Array 642
Smart Array 6400
Smart Array 6400 EM
Smart Array 6i
Smart Array P600
Smart Array P400i
Smart Array E200
Smart Array E200i
Smart Array E500

The device naming rules are:

Major numbers:
  104 cciss0
  105 cciss1
  106 cciss2
  107 cciss3
  108 cciss4
  109 cciss5
  110 cciss6
  111 cciss7

Minor numbers:
  b7 b6 b5 b4 b3 b2 b1 b0
  |----+----| |----+----|
       |           |
       |           +-------- Partition ID (0=wholedev, 1-15 partition)
       +-------------------- Logical Volume number

For example, with two controllers:
/dev/cciss/c0d0     Controller 0, disk 0, whole disk 1
/dev/cciss/c0d0p1   Controller 0, disk 0, partition 1
/dev/cciss/c0d0p2   Controller 0, disk 0, partition 2
/dev/cciss/c0d0p3   Controller 0, disk 0, partition 3
/dev/cciss/c1d1     Controller 1, disk 1, whole disk 2
/dev/cciss/c1d1p1   Controller 1, disk 1, partition 1
/dev/cciss/c1d1p2   Controller 1, disk 1, partition 2
/dev/cciss/c1d1p3   Controller 1, disk 1, partition 3
where:
c0: the first controller
d0: the first disk
p1: the first partition

The directory /proc/driver/cciss/ contains information about each controller, for example:
# cd /proc/driver/cciss
# ls -l
total 0
-rw-r--r-- 1 root root 0 2012-06-10 10:38 cciss0
-rw-r--r-- 1 root root 0 2012-06-10 10:38 cciss1
-rw-r--r-- 1 root root 0 2012-06-10 10:38 cciss2
# cat cciss2
cciss2: HP Smart Array P800 Controller
Board ID: 0x3223103c
Firmware Version: 7.14
IRQ: 16
Logical drives: 1
Current Q depth: 0
Current # commands on controller: 0
Max Q depth since init: 1
Max # commands on controller since init: 2
Max SG entries since init: 32
Sequential access devices: 0
cciss/c2d0: 36.38GB RAID 0

Such disks are operated on exactly like IDE disks or disks on a SCSI card (partitioning, formatting, and so on); only the device file path differs:
fdisk /dev/cciss/c0d0
mkfs -t ext3 /dev/cciss/c0d0p1
[root@flt8a cciss]# cd /dev/cciss/
[root@flt8a cciss]# ls
c0d0 c0d0p1 c0d0p2 c0d0p3 c0d0p4 c0d0p5
[root@flt8a cciss]# ll
total 0
brw-r----- 1 root disk 104, 0 12-17 12:07 c0d0
brw-r----- 1 root disk 104, 1 12-17 12:08 c0d0p1
brw-r----- 1 root disk 104, 2 12-17 12:08 c0d0p2
brw-r----- 1 root disk 104, 3 12-17 12:07 c0d0p3
brw-r----- 1 root disk 104, 4 12-17 12:07 c0d0p4
brw-r----- 1 root disk 104, 5 12-17 12:07 c0d0p5
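The (major, minor) pairs in the listing above can be decoded by hand with the numbering rules: majors 104-111 map to controllers 0-7, the minor's upper bits select the logical drive, and the low 4 bits the partition (0 = whole device). A sketch (`cciss_name` is a hypothetical helper):

```shell
# Decode a cciss (major, minor) pair into the c?d?p? device name,
# following the cciss numbering rules: major 104+N = controller N,
# minor = (logical_drive << 4) | partition, partition 0 = whole device.
cciss_name() {
  local major=$1 minor=$2
  local ctrl=$(( major - 104 ))
  local drive=$(( minor >> 4 ))
  local part=$(( minor & 15 ))
  if [ "$part" -eq 0 ]; then
    echo "c${ctrl}d${drive}"
  else
    echo "c${ctrl}d${drive}p${part}"
  fi
}

# The entries listed above are major 104, minors 0..5:
cciss_name 104 0    # c0d0
cciss_name 104 5    # c0d0p5
cciss_name 104 17   # c0d1p1
```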
[root@flt8a cciss]# cd /proc/driver/cciss
[root@flt8a cciss]# cat cciss0
cciss0: HP Smart Array P400 Controller
Board ID: 0x3234103c
Firmware Version: 5.20
IRQ: 130
Logical drives: 1
Sector size: 2048
Current Q depth: 0
Current # commands on controller: 0
Max Q depth since init: 163
Max # commands on controller since init: 191
Max SG entries since init: 31
Sequential access devices: 0
cciss/c0d0: 146.77GB RAID 0