HFile Internals (based on HBase 0.96)

What is an HFile

HBase, BigTable, and other distributed storage and query systems all build their underlying storage on the SSTable idea. HBase's underlying storage file is the HFile. The problems it addresses are how to persist content to disk and how to read it back efficiently; the HFile format specifies what is stored at which bytes, so data can be read according to a fixed layout.

SSTable stands for sorted string table. Its basic unit of storage is the key-value pair; "string" means both key and value are organized as strings/bytes (they can of course also be the serialized binary form of other data structures).

"Sorted" means records are stored in ascending lexicographic order of the key, with an index built over the keys. On retrieval, the data index is loaded into memory and binary-searched to determine which data block may contain the queried key; that data block is then scanned sequentially to find the key and its value.

HFiles also add a Bloom filter, which can filter out keys that do not exist in the table. (A Bloom filter has a false-positive rate, i.e. it may report a key that is not in the set as present, but it never misses a key that is present, which makes it usable here.)
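The idea can be sketched with a toy filter (my own illustration with two ad-hoc hash functions, not HBase's actual Bloom filter implementation):

```java
import java.util.BitSet;

// Toy Bloom filter illustrating the property above: membership tests may
// yield false positives but never false negatives.
class ToyBloomFilter {
    private final BitSet bits;
    private final int size;

    ToyBloomFilter(int size) {
        this.size = size;
        this.bits = new BitSet(size);
    }

    // Simple seeded byte-array hash (an arbitrary choice for this sketch).
    private int hash(byte[] key, int seed) {
        int h = seed;
        for (byte b : key) h = h * 31 + b;
        return Math.floorMod(h, size);
    }

    void add(byte[] key) {
        bits.set(hash(key, 7));
        bits.set(hash(key, 131));
    }

    // false => definitely absent; true => possibly present.
    boolean mightContain(byte[] key) {
        return bits.get(hash(key, 7)) && bits.get(hash(key, 131));
    }
}
```

A key that was added is always reported as possibly present; that is exactly the guarantee the HFile read path relies on.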

Writing an HFile

Flow

Understanding HFiles splits into writing the file and reading the file; to analyze the file structure, first look at what gets written and how:

1. Before writing a KeyValue, you generally construct one first;

Consider new KeyValue(keyBytes, "info".getBytes(), "city".getBytes(), valueBytes);

HBase lays the KeyValue data out in memory as follows.

    this.bytes = createByteArray(row, roffset, rlength,
            family, foffset, flength, qualifier, qoffset, qlength,
            timestamp, type, value, voffset, vlength);
KeyValue maintains a byte array that holds everything this KeyValue will write into the HFile.
    
    /**
       * Write KeyValue format into a byte array.
       *
       * @param row row key
       * @param roffset row offset
       * @param rlength row length
       * @param family family name
       * @param foffset family offset
       * @param flength family length
       * @param qualifier column qualifier
       * @param qoffset qualifier offset
       * @param qlength qualifier length
       * @param timestamp version timestamp
       * @param type key type
       * @param value column value
       * @param voffset value offset
       * @param vlength value length
       * @return The newly created byte array.
       */
      private static byte [] createByteArray(final byte [] row, final int roffset,
          final int rlength, final byte [] family, final int foffset, int flength,
          final byte [] qualifier, final int qoffset, int qlength,
          final long timestamp, final Type type,
          final byte [] value, final int voffset, int vlength) {
        checkParameters(row, rlength, family, flength, qualifier, qlength, value, vlength);
        // Allocate right-sized byte array.
        int keyLength = (int) getKeyDataStructureSize(rlength, flength, qlength);
        byte [] bytes =
            new byte[(int) getKeyValueDataStructureSize(rlength, flength, qlength, vlength)];
        // Write key, value and key row length.
        int pos = 0;
        pos = Bytes.putInt(bytes, pos, keyLength);// 4B key length
        pos = Bytes.putInt(bytes, pos, vlength);// 4B value length
        pos = Bytes.putShort(bytes, pos, (short)(rlength & 0x0000ffff));// 2B row length
        pos = Bytes.putBytes(bytes, pos, row, roffset, rlength);// row bytes
        pos = Bytes.putByte(bytes, pos, (byte)(flength & 0x0000ff));// 1B family length
        if(flength != 0) {
          pos = Bytes.putBytes(bytes, pos, family, foffset, flength);
        }// family bytes
        if(qlength != 0) {
          pos = Bytes.putBytes(bytes, pos, qualifier, qoffset, qlength);
        }// qualifier bytes
        pos = Bytes.putLong(bytes, pos, timestamp);// 8B timestamp
        pos = Bytes.putByte(bytes, pos, type.getCode());// 1B key type
        if (value != null && value.length > 0) {
          pos = Bytes.putBytes(bytes, pos, value, voffset, vlength);
        }// value bytes
        return bytes;
      }
    

From this we can see that the KeyValue byte array begins with two fixed-length integers giving the key length and the value length. Then comes the key: first a 2-byte row length, then the row, then a 1-byte family length, the family bytes, then the qualifier, and finally two fixed-length fields, the 8-byte timestamp and the 1-byte key type (Put/Delete). The value is plain binary data.
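As a sketch, the layout just described can be reproduced with a toy encoder (my own illustration mirroring the description above, not the real org.apache.hadoop.hbase.KeyValue class):

```java
import java.nio.ByteBuffer;

// Toy version of the KeyValue byte layout described above:
// 4B keyLen | 4B valueLen | [2B rowLen | row | 1B famLen | family |
// qualifier | 8B timestamp | 1B type] | value
class KeyValueLayout {
    static byte[] encode(byte[] row, byte[] family, byte[] qualifier,
                         long timestamp, byte type, byte[] value) {
        int keyLen = 2 + row.length + 1 + family.length + qualifier.length + 8 + 1;
        ByteBuffer buf = ByteBuffer.allocate(4 + 4 + keyLen + value.length);
        buf.putInt(keyLen);               // 4B key length
        buf.putInt(value.length);         // 4B value length
        buf.putShort((short) row.length); // 2B row length
        buf.put(row);
        buf.put((byte) family.length);    // 1B family length
        buf.put(family);
        buf.put(qualifier);
        buf.putLong(timestamp);           // 8B timestamp
        buf.put(type);                    // 1B key type (Put/Delete)
        buf.put(value);                   // plain binary value
        return buf.array();
    }

    // The row always starts at offset 10 (4 + 4 + 2), which is why
    // Bytes(bytes, 10, rlength) can read the row key directly.
    static byte[] readRow(byte[] kv) {
        ByteBuffer buf = ByteBuffer.wrap(kv);
        buf.getInt();                     // skip key length
        buf.getInt();                     // skip value length
        byte[] row = new byte[buf.getShort()];
        buf.get(row);
        return row;
    }
}
```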

     

2. With the compression codec, block size, target directory, and other parameters specified, an HFileWriterV2 object is created; writing KeyValues is done through HFileWriterV2's append method.

Because an HFile must be written in sorted order, append first runs checkKey: the previously written key is kept in lastKeyBuffer, and the current key is compared against it to guarantee (row, family, qualifier, ts) ordering.
     

So writing to an HFile really just copies the relevant bytes of each KeyValue into the file; the final layout is the KeyValue bytes shown above, optionally followed by the memstoreTS:

     

During this process, the offsets and lengths above let any part of the record be located. For example, Bytes(bytes, 10, rlength) reads the row key, since the row always starts at offset 10.

Note that nothing has actually been written to disk yet: fsBlockWriter.getUserDataStream() is a wrapper around a ByteArrayOutputStream, so the write calls land in the buf byte array inside baosInMemory, and newBlock() merely writes a placeholder (dummy) header into it.

    private void append(final long memstoreTS, final byte[] key,
                final int koffset, final int klength, final byte[] value,
                final int voffset, final int vlength) throws IOException {
       boolean dupKey = checkKey(key, koffset, klength);// validate ordering against the previous key
        if (!dupKey) {
            checkBlockBoundary();
        }// check whether the current block is full
        if (!fsBlockWriter.isWriting())
            newBlock();// happens on the first write; afterwards checkBlockBoundary() does it
     
            {
                DataOutputStream out = fsBlockWriter.getUserDataStream();
                out.writeInt(klength);
                totalKeyLength += klength;
                out.writeInt(vlength);
                totalValueLength += vlength;
                out.write(key, koffset, klength);
                out.write(value, voffset, vlength);
                if (this.includeMemstoreTS) {
                    WritableUtils.writeVLong(out, memstoreTS);
                }
            }
            // Are we the first key in this block?
            if (firstKeyInBlock == null) {
                // Copy the key.
                firstKeyInBlock = new byte[klength];
                System.arraycopy(key, koffset, firstKeyInBlock, 0, klength);
            }
            lastKeyBuffer = key;// remember the last key written
            lastKeyOffset = koffset;
            lastKeyLength = klength;
            entryCount++;
        }

3. As KeyValues keep arriving, checkBlockBoundary() checks whether the current data block has reached the threshold (i.e. fsBlockWriter.blockSizeWritten() is no longer below blockSize); if so, finishBlock() and related steps run.

In the finishBlock phase:

First the buffered bytes are pulled out of memory: uncompressedBytesWithHeader = baosInMemory.toByteArray();

1. The data block (the KeyValue portion) is encoded and compressed with the configured codec.

2. The header is assembled, checksum (CRC) information is added, and the block is appended to the output stream.

    3、dataBlockIndexWriter.addEntry(indexKey, lastDataBlockOffset, onDiskSize);

adds an index entry for the block.

Tracing into the writeHeaderAndData() function:

    /**
       * Put the header into the given byte array at the given offset.
       * @param onDiskSize size of the block on disk header + data + checksum
       * @param uncompressedSize size of the block after decompression (but
       *          before optional data block decoding) including header
       * @param onDiskDataSize size of the block on disk with header
       *        and data but not including the checksums
       */
      private void putHeader(byte[] dest, int offset, int onDiskSize,
          int uncompressedSize, int onDiskDataSize) {
        offset = blockType.put(dest, offset);// 8B block magic, e.g. DATABLK*
        offset = Bytes.putInt(dest, offset, onDiskSize - HConstants.HFILEBLOCK_HEADER_SIZE);// on-disk block size excluding header (mainly for compressed blocks)
        offset = Bytes.putInt(dest, offset, uncompressedSize - HConstants.HFILEBLOCK_HEADER_SIZE);// uncompressed block size excluding header
        offset = Bytes.putLong(dest, offset, prevOffset);// offset of the previous block
        offset = Bytes.putByte(dest, offset, checksumType.getCode());// checksum type
        offset = Bytes.putInt(dest, offset, bytesPerChecksum);// bytes per checksum chunk
        Bytes.putInt(dest, offset, onDiskDataSize);// on-disk size of header + data, excluding checksums
      }
    

      

     

From this we can work out the header structure, and with it the overall layout of a data block:
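As a sketch, the 33-byte header that putHeader() fills in can be written and parsed like this (a toy mirroring the field order above; unlike the real code, it takes the already header-adjusted sizes directly):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Toy writer/parser for the v2 block header fields in putHeader() above
// (field names follow the HBase code; this is not the HBase class itself).
class BlockHeader {
    static final int SIZE = 8 + 4 + 4 + 8 + 1 + 4 + 4; // 33 bytes

    final String blockMagic;                 // e.g. "DATABLK*"
    final int onDiskSizeWithoutHeader;
    final int uncompressedSizeWithoutHeader;
    final long prevBlockOffset;
    final byte checksumType;
    final int bytesPerChecksum;
    final int onDiskDataSizeWithHeader;

    BlockHeader(byte[] block) {
        ByteBuffer buf = ByteBuffer.wrap(block);
        byte[] magic = new byte[8];
        buf.get(magic);
        blockMagic = new String(magic, StandardCharsets.US_ASCII);
        onDiskSizeWithoutHeader = buf.getInt();
        uncompressedSizeWithoutHeader = buf.getInt();
        prevBlockOffset = buf.getLong();
        checksumType = buf.get();
        bytesPerChecksum = buf.getInt();
        onDiskDataSizeWithHeader = buf.getInt();
    }

    static byte[] write(String magic, int onDiskSize, int uncompressedSize,
                        long prevOffset, byte checksumType, int bytesPerChecksum,
                        int onDiskDataSize) {
        ByteBuffer buf = ByteBuffer.allocate(SIZE);
        buf.put(magic.getBytes(StandardCharsets.US_ASCII)); // 8B block magic
        buf.putInt(onDiskSize);
        buf.putInt(uncompressedSize);
        buf.putLong(prevOffset);
        buf.put(checksumType);
        buf.putInt(bytesPerChecksum);
        buf.putInt(onDiskDataSize);
        return buf.array();
    }
}
```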

4. As mentioned above, every finished block adds one index record: the block's firstKey, lastDataBlockOffset, and onDiskDataSize go into a LEAF_INDEX entry (the index size is roughly (56 + avgKeySize) * numBlocks).

The index records look like this:

    /**
    * Add one index entry to the current leaf-level block. When the leaf-level
    * block gets large enough, it will be flushed to disk as an inline block.
    *
    * @param firstKey the first key of the data block
    * @param blockOffset the offset of the data block
    * @param blockDataSize the on-disk size of the data block ({@link HFile}
    * format version 2), or the uncompressed size of the data block (
    * {@link HFile} format version 1).
    */
    public void addEntry(byte[] firstKey, long blockOffset, int blockDataSize)
    {
      curInlineChunk.add(firstKey, blockOffset, blockDataSize);
      ++totalNumEntries;
    }

This produces data block index entries such as:

    key=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaZI/info:city/LATEST_TIMESTAMP/Put
      offset=2155038, dataSize=38135
    key=aaaaaaaaaaaaaaaaabbbabbaaaabbbbaTU/info:city/LATEST_TIMESTAMP/Put
      offset=4343199, dataSize=38358
    key=aaaaaaaaaaaaaaaabbbababbbbababaaxe1T70L5R6F2/info:city/LATEST_TIMESTAMP/Put
      offset=6539008, dataSize=38081
    .....

Each entry is appended to curInlineChunk (a BlockIndexChunk that holds the HFileBlockIndex entries; at this point the index type is LEAF_INDEX).

When the accumulated index reaches a threshold (HFileBlockIndex.DEFAULT_MAX_CHUNK_SIZE, 128 KB by default) or the writer is closed, writeInlineBlocks is called and the InlineBlockWriter flushes the accumulated index entries to the output stream,

writing a leaf index block after the data blocks. Its write path is essentially the same as a data block's; note that index blocks are compressed as well.

A LEAF_INDEX block is ultimately just an inline block, so it has the same header structure as a data block; for the write path see HFileBlockIndex.BlockIndexChunk.writeNonRoot(DataOutput out).

A LEAF_INDEX block mainly contains:

    • the number of index entries;
    • a secondary index of entry offsets (to speed up lookups within the serialized block);
    • the index entries themselves, each laid out on disk as:
      • lastDataBlockOffset
      • onDiskDataSize
      • firstKey
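That layout can be sketched as a toy writer (my own code mirroring the non-root format described above, not the HBase BlockIndexChunk class):

```java
import java.nio.ByteBuffer;

// Toy serializer for the non-root (leaf) index block body listed above:
// entry count, a secondary index of entry offsets, then the entries.
class LeafIndexSketch {
    static byte[] writeNonRoot(long[] blockOffsets, int[] onDiskSizes, byte[][] firstKeys) {
        int n = firstKeys.length;
        int entriesSize = 0;
        for (byte[] k : firstKeys) entriesSize += 8 + 4 + k.length;
        ByteBuffer buf = ByteBuffer.allocate(4 + 4 * (n + 1) + entriesSize);
        buf.putInt(n);                    // number of index entries
        // Secondary index: n + 1 offsets into the entries area, so a reader
        // can binary-search variable-length entries in the serialized block.
        int cur = 0;
        for (byte[] k : firstKeys) {
            buf.putInt(cur);
            cur += 8 + 4 + k.length;
        }
        buf.putInt(cur);                  // total size of the entries area
        for (int i = 0; i < n; i++) {     // entries: offset, size, firstKey
            buf.putLong(blockOffsets[i]);
            buf.putInt(onDiskSizes[i]);
            buf.put(firstKeys[i]);
        }
        return buf.array();
    }
}
```

The secondary index is what lets the reader binary-search the serialized entries even though keys have variable length.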

At the same time, the LEAF_INDEX block is registered in the ROOT_INDEX: rootChunk.add(firstKey, offset, onDiskSize, totalNumEntries);

So each ROOT_INDEX entry contains firstKey, offset, onDiskSize, totalNumEntries, and so on.

5. When writing finishes and the writer's close method is called, finishBlock() first flushes the remaining data to the output stream, the inline index block is written, and a ROOT_INDEX entry for that inline index block is added.

After that, the metadata, root index, meta block index, FileInfo, and Trailer are appended in order.

The part I care most about is how the ROOT_INDEX is built, done by HFileBlockIndex.BlockIndexWriter.writeIndexBlocks:

    while (rootChunk.getRootSize() > maxChunkSize) {
        rootChunk = writeIntermediateLevel(out, rootChunk);
        numLevels += 1;
    }

So whenever the root index exceeds the threshold (128 KB), another index level is added, up to three levels in total;

     

Afterwards trailer.setLoadOnOpenOffset(rootIndexOffset) is set, from which we can tell that reading the index requires reading the Trailer first.

The HFile's fixed trailer contains the offsets pointing to the other sections.

The contents of each section:

Run bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -m -f /opt/local/hbase/uuid_rtgeo/06faaf5e0ea384d277061bf00757710e/info/a596111cbc8f4e1b96da230fd588f8a5

    Block index size as per heapsize: 1528
    reader=/opt/local/hbase/uuid_rtgeo/06faaf5e0ea384d277061bf00757710e/info/a596111cbc8f4e1b96da230fd588f8a5,
        compression=lzo,
        cacheConf=CacheConfig:enabled [cacheDataOnRead=true] [cacheDataOnWrite=false] [cacheIndexesOnWrite=false] [cacheBloomsOnWrite=false] [cacheEvictOnClose=false] [cacheCompressed=false],
        firstKey=F40000408A862DC6426C8E32B0F9971AAAA7B386C2F43FD841B40A362EA09259/info:recent/1385032809401/Put,
        lastKey=F4FCC94050C968B4CFDE49EA4F4A1F185FB1D72D7346005A227A40A9A7358BC8/info:recent/1382710394100/Put,
        avgKeyLen=85,
        avgValueLen=68,
        entries=4186248,
        length=142479945
    Trailer:
        fileinfoOffset=142479169,
        loadOnOpenDataOffset=142478318,
        dataIndexCount=9,
        metaIndexCount=0,
        totalUncomressedBytes=683010319,
        entryCount=4186248,
        compressionCodec=LZO,
        uncompressedDataIndexSize=1060555,
        numDataIndexLevels=2,
        firstDataBlockOffset=0,
        lastDataBlockOffset=142336376,
        comparatorClassName=org.apache.hadoop.hbase.KeyValue$KeyComparator,
        majorVersion=2,
        minorVersion=0
    Fileinfo:
        BLOOM_FILTER_TYPE = ROW
        DATA_BLOCK_ENCODING = NONE
        DELETE_FAMILY_COUNT = \x00\x00\x00\x00\x00\x00\x00\x00
        EARLIEST_PUT_TS = \x00\x00\x01A\x9D\x18\xE6\x8E
        LAST_BLOOM_KEY = F4FCC94050C968B4CFDE49EA4F4A1F185FB1D72D7346005A227A40A9A7358BC8
        MAJOR_COMPACTION_KEY = \xFF
        MAX_SEQ_ID_KEY = 304738752
        TIMERANGE = 1381320156814....1386635121792
        hfile.AVG_KEY_LEN = 85
        hfile.AVG_VALUE_LEN = 68
        hfile.LASTKEY = \x00@F4FCC94050C968B4CFDE49EA4F4A1F185FB1D72D7346005A227A40A9A7358BC8\x04inforecent\x00\x00\x01A\xEF\xF6<\xF4\x04
    Mid-key: \x00@F47AB33AB35809CB16CEE830C04F5A7FB530C4E6AC75EAB8C67E4ABD7F488100\x04inforecent\x00\x00\x01B(q2)\x04\x00\x00\x00\x00\x04:h*\x00\x005l
    Bloom filter:
        BloomSize: 131072
        No of Keys in bloom: 94949
        Max Keys for bloom: 109306
        Percentage filled: 87%
        Number of chunks: 1
        Comparator: ByteArrayComparator
    Delete Family Bloom filter:
        Not present

     

Compared with HFile v1

    V1:

HFile v2 makes several improvements over v1. The two main ones both aim to reduce memory usage and startup time, and both take the same approach: split a monolithic structure into multiple blocks.

1. A tree-structured data block index. When the data block index is large it is hard to load it entirely into memory: every data block contributes one entry to the data block index, so assuming 64 KB blocks and 64-byte index entries, a machine holding 6 TB of data carries about 6 GB of index, which is a lot of memory to pin. By organizing the index as a tree, keeping only the top level resident in memory and reading the rest on demand through an LRU cache, the whole index no longer needs to be loaded. See https://issues.apache.org/jira/browse/HBASE-3856
  This effectively takes the formerly flat index and spreads it out as a tree. Index blocks are now interleaved with data blocks throughout the file instead of sitting only at the end, and to support binary search over the serialized data, a new "non-root index block" format was designed for the non-root index blocks.
  Concretely, while writing an HFile the current inline block index is held in memory; once it reaches a threshold (128 KB by default) it is flushed to disk immediately, rather than in one final flush, so all the index data never has to be kept in memory at once. After all inline block indexes have been written, the HFile writer generates the next level of block index, whose entries are the offsets of those inline block indexes, and recurses upward, each level holding the offsets of the level below, until the top level fits under the threshold. The whole structure is thus built bottom-up, upper index blocks constructed from the lower ones.

2. Bloom filter data used to be stored in a single meta block; now it can be stored as multiple blocks, so serving a query no longer requires loading all the bloom data into memory.

3. Support for compressed data: recording the compressed (on-disk) size allows a scan to skip the current block without decompressing it.
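The bottom-up index construction described in point 1 can be sketched with a toy model (the names are mine; fanout stands in for "entries per 128 KB chunk"):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the bottom-up multi-level index build: each level stores the
// first key of every chunk of the level below; stop when a level fits in
// one chunk.
class IndexLevels {
    static List<List<String>> buildLevels(List<String> leafFirstKeys, int fanout) {
        List<List<String>> levels = new ArrayList<>();
        List<String> cur = leafFirstKeys;
        levels.add(cur);
        while (cur.size() > fanout) {
            List<String> up = new ArrayList<>();
            for (int i = 0; i < cur.size(); i += fanout) {
                up.add(cur.get(i));   // first key of each lower-level chunk
            }
            levels.add(up);
            cur = up;
        }
        return levels;                // the last element is the root level
    }
}
```

With 10 leaf entries and a fanout of 3, this yields levels of size 10, 4, and 2: a three-level index whose root fits in one chunk.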

Reading an HFile

The read path is comparatively simple. When the reader is created it first loads the trailer. A seek then binary-searches down the block index levels to find the HFileBlock that contains the key, checks whether that HFileBlock is in memory and loads it if not, and finally scans the block sequentially to seek to the key.
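That lookup flow can be modeled end to end with a toy two-level structure, sorted leaf "blocks" plus a root index of first keys (the helper names are mine; Arrays.binarySearch plays the role of Bytes.binarySearch, including the negative insertion-point return value):

```java
import java.util.Arrays;

// Toy model of the read path: binary-search a root index of block first
// keys, then scan the chosen "block" sequentially for the key.
class ToySeek {
    static int rootBlockContaining(String[] firstKeys, String key) {
        int pos = Arrays.binarySearch(firstKeys, key);
        if (pos >= 0) return pos;     // key equals some block's first key
        int insertion = -pos - 1;     // index of the first key greater than ours
        return insertion - 1;         // block whose range covers the key (-1: before the file)
    }

    static String seek(String[][] blocks, String[] firstKeys, String key) {
        int b = rootBlockContaining(firstKeys, key);
        if (b < 0) return null;       // key sorts before the first block
        for (String k : blocks[b]) {  // sequential scan inside the block
            if (k.equals(key)) return k;
        }
        return null;                  // not present
    }
}
```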


     

When the Reader is created it first loads the trailer, and based on the trailer loads the root block index, FileInfo, and related structures.

To read the Trailer, its bytes are first loaded from disk into memory and then parsed according to the Trailer's fixed format:

    trailer = FixedFileTrailer.readFromStream(fsdis.getStream(isHBaseChecksum), size);

    public static FixedFileTrailer readFromStream(FSDataInputStream istream,
    long fileSize) throws IOException {
        int bufferSize = MAX_TRAILER_SIZE;
        long seekPoint = fileSize - bufferSize; // trailer position
        if (seekPoint < 0) {
        // It is hard to imagine such a small HFile.
            seekPoint = 0;
            bufferSize = (int) fileSize;
        }
        // load the trailer
        istream.seek(seekPoint);
        ByteBuffer buf = ByteBuffer.allocate(bufferSize);
        istream.readFully(buf.array(), buf.arrayOffset(),
        buf.arrayOffset() + buf.limit());
        .....
        FixedFileTrailer fft = new FixedFileTrailer(majorVersion, minorVersion);
        fft.deserialize(new DataInputStream(new ByteArrayInputStream(buf.array(),
        buf.arrayOffset() + bufferSize - trailerSize, trailerSize)));
        return fft;
    }

Trailer contents:

    fileinfoOffset=43649909, loadOnOpenDataOffset=43649226, dataIndexCount=20, metaIndexCount=3, totalUncomressedBytes=78307153, entryCount=600000, compressionCodec=GZ, uncompressedDataIndexSize=2608931, numDataIndexLevels=2, firstDataBlockOffset=0, lastDataBlockOffset=43613833, comparatorClassName=org.apache.hadoop.hbase.KeyValue$KeyComparator, majorVersion=2, minorVersion=3

Once the trailer is read, the root index location is known; loading the root index (size = 2):

    key=aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaZI/info:city/LATEST_TIMESTAMP/Put
      offset=3944715, dataSize=38949
    key=aaaaaaaaaaaaaaaabbbaaabbaabaabbaiSje/info:city/LATEST_TIMESTAMP/Put
      offset=6825203, dataSize=27845

Finding the root-level index entry:

    int rootLevelIndex = rootBlockContainingKey(key, keyOffset, keyLength);

The lookup is essentially a binary search at each index level; the BlockType of each fetched block tells whether it is another index block or a data block, i.e. whether the descent must continue.

    int pos = Bytes.binarySearch(blockKeys, key, offset, length, comparator);

    public static int binarySearch(byte [][]arr, byte []key, int offset,
          int length, RawComparator<?> comparator) {
        int low = 0;
        int high = arr.length - 1;
        while (low <= high) {
          int mid = (low+high) >>> 1;
          // we have to compare in this order, because the comparator order
          // has special logic when the 'left side' is a special key.
          int cmp = comparator.compare(key, offset, length,
              arr[mid], 0, arr[mid].length);
          // key lives above the midpoint
          if (cmp > 0)
            low = mid + 1;
          // key lives below the midpoint
          else if (cmp < 0)
            high = mid - 1;
          // BAM. how often does this really happen?
          else
            return mid;
        }
        return - (low+1);
      }
     

which yields the offset and size of the next-level index block,

and the descent continues downward until

if (block.getBlockType().equals(BlockType.DATA))

holds, i.e. the block looked up is a data block;

after that loadBlockAndSeekToKey runs, followed by the search within the block:

Initializing BlockWithScanInfo produces a ByteBuffer covering the block with its header stripped off; all subsequent in-block search operations work on this ByteBuffer.

    /**
      * Returns a buffer that does not include the header. The array offset points
      * to the start of the block data right after the header. The underlying data
      * array is not copied. Checksum data is not included in the returned buffer.
      *
      * @return the buffer with header skipped
      */
     public ByteBuffer getBufferWithoutHeader() {
       return ByteBuffer.wrap(buf.array(), buf.arrayOffset() + headerSize(),
           buf.limit() - headerSize() - totalChecksumBytes()).slice();
     }

     

     

Note that if caching is enabled, the reader first tries HFileBlock cachedBlock = (HFileBlock) cacheConf.getBlockCache().getBlock(cacheKey, cacheBlock, useLock) to fetch the block from the cache;

     

on a miss it loads the block: HFileBlock hfileBlock = fsBlockReader.readBlockData(dataBlockOffset, onDiskBlockSize, -1, pread);

and adds it to the BlockCache:

     

    // Cache the block if necessary
      if (cacheBlock && cacheConf.shouldCacheBlockOnRead(hfileBlock.getBlockType().getCategory())) {
           cacheConf.getBlockCache().cacheBlock(cacheKey, hfileBlock, cacheConf.isInMemory());
      }

The caches in 0.96 include BucketCache (good performance, worth adopting; I plan to study it further), LruBlockCache, SlabCache, DoubleBlockCache, and others.

// TODO HBase's caching is a good entry point for further study; the CacheConfig class records the cache configuration parameters and their defaults.

The blockSeek(key, offset, length, seekBefore) function is essentially a sequential search that seeks to the target position.

     

    private int blockSeek(byte[] key, int offset, int length,
        boolean seekBefore) {
      int klen, vlen;
      long memstoreTS = 0;
      int memstoreTSLen = 0;
      int lastKeyValueSize = -1;
      // scan the block sequentially, looking for the key
      do {
        blockBuffer.mark();
        klen = blockBuffer.getInt();// key length
        vlen = blockBuffer.getInt();// value length
        blockBuffer.reset();
        if (this.reader.shouldIncludeMemstoreTS()) {
          if (this.reader.decodeMemstoreTS) {
            try {
              int memstoreTSOffset = blockBuffer.arrayOffset()
                  + blockBuffer.position() + KEY_VALUE_LEN_SIZE + klen + vlen;
              memstoreTS = Bytes.readVLong(blockBuffer.array(),
                  memstoreTSOffset);
              memstoreTSLen = WritableUtils.getVIntSize(memstoreTS);
            } catch (Exception e) {
              throw new RuntimeException("Error reading memstore timestamp", e);
            }
          } else {
            memstoreTS = 0;
            memstoreTSLen = 1;
          }
        }
        int keyOffset = blockBuffer.arrayOffset() + blockBuffer.position()
            + KEY_VALUE_LEN_SIZE;
        int comp = reader.getComparator().compareFlatKey(key, offset, length,
            blockBuffer.array(), keyOffset, klen); // compare against the current key

        if (comp == 0) {
          if (seekBefore) {
            if (lastKeyValueSize < 0) {
              throw new IllegalStateException("blockSeek with seekBefore "
                  + "at the first key of the block: key="
                  + Bytes.toStringBinary(key) + ", blockOffset="
                  + block.getOffset() + ", onDiskSize="
                  + block.getOnDiskSizeWithHeader());
            }
            blockBuffer.position(blockBuffer.position() - lastKeyValueSize);
            readKeyValueLen();
            return 1; // non-exact match
          }
          currKeyLen = klen;
          currValueLen = vlen;
          if (this.reader.shouldIncludeMemstoreTS()) {
            currMemstoreTS = memstoreTS;
            currMemstoreTSLen = memstoreTSLen;
          }
          return 0; // indicate exact match
        } else if (comp < 0) {
          if (lastKeyValueSize > 0)
            blockBuffer.position(blockBuffer.position() - lastKeyValueSize);
          readKeyValueLen();
          if (lastKeyValueSize == -1 && blockBuffer.position() == 0
              && this.reader.trailer.getMinorVersion() >= MINOR_VERSION_WITH_FAKED_KEY) {
            return HConstants.INDEX_KEY_MAGIC;
          }
          return 1;
        }
        // The size of this key/value tuple, including key/value length fields.
        lastKeyValueSize = klen + vlen + memstoreTSLen + KEY_VALUE_LEN_SIZE;
        blockBuffer.position(blockBuffer.position() + lastKeyValueSize);
      } while (blockBuffer.remaining() > 0); // stop at the end of the block
      blockBuffer.position(blockBuffer.position() - lastKeyValueSize);
      readKeyValueLen();
      return 1; // didn't exactly find it
    }

     

Once the seek lands, the KeyValue (and with it the value) can be fetched:

    public KeyValue getKeyValue() {
      if (!isSeeked())
        return null;
      KeyValue ret = new KeyValue(blockBuffer.array(),
          blockBuffer.arrayOffset() + blockBuffer.position(),
          KEY_VALUE_LEN_SIZE + currKeyLen + currValueLen,
          currKeyLen);
      if (this.reader.shouldIncludeMemstoreTS()) {
        ret.setMvccVersion(currMemstoreTS);
      }
      return ret;
    }

The key classes used here revolve around ByteBuffer and its operations, which are important to understand.

Key classes along the way:

    1. ByteBuffer: the in-memory carrier of a block
    2. HFileReaderV2: the methods for reading an HFile
    3. Bytes: binary search within blocks and byte-array utilities
    4. HFileScanner: constructed from the reader; performs the actual HFile reads
    5. HFileBlockIndex: the block index information, e.g. each block's first key, the midkey, etc.
    6. BlockCache, BlockCacheKey
    7. FixedFileTrailer
    8. HFileBlock: the block object
    9. ByteArrayOutputStream

When reading the HBase code, the Hadoop jars pulled in by Maven have no attached sources. Workaround: 1) download hadoop-1.1.2.tar.gz; 2) locate its source directory, package it as a jar, add it to the build path, and refresh.

     

Thoughts

1. If a table's rows have many columns or many versions, fewer rows fit in one HFileBlock. In-block seeks get faster, but sequentially reading or scanning the same number of rows requires loading blocks constantly, so efficiency actually drops.

2. Scans keep adding blocks to the cache. If several processes scan and cache blocks while only one process evicts them, an out-of-memory condition is quite possible, so for offline jobs that scan heavily it is best to disable the cache.

The current default cache policy is LRU:

    1. From being cached to being evicted, a block typically gets promoted from the heap's new generation into the old generation.
    2. Blocks evicted after promotion to the old generation are eventually collected by CMS, which leaves heap fragmentation behind.
    3. Fragmentation in turn causes promotion failures and full GCs.
    4. Under high load, if blocks are cached faster than they are evicted, there is a risk of OOM.

3. Too many StoreFiles hurt read performance: a single read may need to open several HFiles, and the extra IO adds up.

4. Larger HFile blocks suit sequential scans better (though large blocks decompress more slowly) while random read/write performance drops; smaller blocks suit random reads better but need more memory to hold the index. The default is 64 * 1024 bytes.

5. HFile v2's multi-level index reduces memory use and speeds up opening, but adds 1-2 IOs; especially for large HFiles, a random read may take up to 2 more IOs than in v1 and thus be slower.

6. gz compresses better than lzo, but costs more CPU to compress and decompress.

References:

    http://hbase.apache.org/book/hfilev2.html

    http://www.slideshare.net/schubertzhang/hfile-a-blockindexed-file-format-to-store-sorted-keyvalue-pairs

    http://duanple.blog.163.com/blog/static/709717672011916103311428/

Original: https://www.cnblogs.com/pingyuyue/p/3469025.html