  • [svc] Why is the default directory size 4K on a Linux ext4 file system?

    Why is a directory 4K on an ordinary Linux ext4 disk?

    Why does every directory have a size 4096 bytes (4 K)?
    To understand this, you should have some basic knowledge of the following file-system concepts:

    • inode (contains file attributes, metadata of file, pointer structure)
    • file (can be thought of as a two-column table: filename and its inode; the inode points to the raw data blocks on the block device)
    • directory (just a special file, a container for other filenames. It holds an array of filenames and the inode number for each filename, and it describes the relationship between parent and children.)
    • symbolic link vs. hard link
    • dentry (directory entries)
      ...
      On a typical ext4 file system (most likely what you are using), the default inode size is 256 bytes and the block size is 4096 bytes.

    A directory is just a special file which contains an array of filenames and inode numbers. When a directory is created, the file system allocates it one inode with a "filename" (the directory name, in fact). The inode points to a single data block (the minimum overhead), which is 4096 bytes. That's why you see 4096 / 4.0K when using ls.
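    A quick way to see this for yourself (a minimal sketch; the mount point /mnt/ext4 and the directory name testdir are example names, not from the original post):

    mkdir /mnt/ext4/testdir
    ls -ld /mnt/ext4/testdir                                  # size column shows 4096
    stat -c 'size=%s blocks=%b blksize=%B' /mnt/ext4/testdir  # illustrative: size=4096 blocks=8 blksize=512 (8 * 512 = 4096)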

    To understand this, you first need to know the following. (Note: I am using CentOS 7.4 here, where the default file system in /etc/fstab is XFS; I mounted an additional disk formatted as ext4 for this experiment.)

    inode
    file
    directory
    symbolic link (soft link)
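    A minimal sketch of the soft-link vs. hard-link difference (file names f1/f2/f3 are arbitrary): a hard link reuses the original file's inode number, while a symbolic link gets an inode of its own that merely stores the target path.

    touch f1
    ln f1 f2          # hard link: same inode number as f1
    ln -s f1 f3       # symbolic (soft) link: a new inode that stores the path "f1"
    ls -li f1 f2 f3   # f1 and f2 share one inode number; f3 has a different one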
    

    See also:
    Disk MBR partitioning - a hands-on deep dive into inodes and blocks
    Linux inodes and blocks - soft and hard links


    My guess is that the directory name takes up the first inode index.

    The inode index and the inode table get blocks of their own.
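    This can be checked with dumpe2fs: each block group reserves a block range for its inode bitmap and inode table, separate from the data blocks (the block numbers below are illustrative; /dev/sdb is the test disk used in this article):

    dumpe2fs /dev/sdb | grep -i "Inode table" | head -1
      Inode table at 1057-1568 (+1025)    <- blocks holding group 0's inodes, not file data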

    A directory is a special file whose content is a list of filename + inode number pairs. When a directory is created, the file system allocates it one inode plus that special file (named after the directory). The inode points to one data block, which is why you see a directory size of 4K.

    This also explains where the filenames of the files inside a directory live: they are stored in that directory's own data block.
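    A simple way to observe this (a sketch; the directory name, file count, and the /mnt/ext4 mount point are arbitrary): keep adding entries with long names, and the directory file itself grows in whole 4096-byte blocks once a single block can no longer hold all of its filename + inode-number entries.

    mkdir /mnt/ext4/bigdir && cd /mnt/ext4/bigdir
    ls -ld .                                                 # 4096 while the entries still fit in one block
    for i in $(seq 1 2000); do touch some_fairly_long_file_name_$i; done
    ls -ld .                                                 # now a larger multiple of 4096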

    Viewing and changing the ext4 inode and block sizes

    The ext4 default inode size is 256 bytes and the default block size is 4096 bytes.

    [root@moban ~]# dumpe2fs /dev/sdb |grep "Inode size"
    Inode size:     256
    
    [root@moban ~]# dumpe2fs /dev/sdb |grep "Block size"
    Block size: 4096
    
    - Specify the inode and block size when formatting
    [root@moban ~]# mkfs.ext4 -I 2048 -b 2048 /dev/sdb
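    After reformatting, the new sizes can be confirmed with dumpe2fs (expected output shown; note that mkfs.ext4 destroys all data on /dev/sdb, so only run it on the spare test disk):

    dumpe2fs /dev/sdb | grep -E "Inode size|Block size"
    Block size: 2048
    Inode size: 2048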
    

    Viewing a directory's inode contents
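    A minimal sketch of how to do this, assuming the ext4 test disk is /dev/sdb mounted at /mnt/ext4 and containing a directory /testdir (debugfs interprets paths relative to the root of that file system, not the mount point):

    ls -id /mnt/ext4/testdir                   # print the directory's inode number
    debugfs -R 'stat /testdir' /dev/sdb        # the inode itself: mode, link count, size, block pointers
    debugfs -R 'ls -l /testdir' /dev/sdb       # the directory entries: inode number + filename pairs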

  • Original article: https://www.cnblogs.com/iiiiher/p/8511351.html