  • Huffman Compression and Decompression

    The simplest construction algorithm uses a priority queue where the node with lowest probability is given highest priority:

    1. Create a leaf node for each symbol and add it to the priority queue.
    2. While there is more than one node in the queue:
      1. Remove the two nodes of highest priority (lowest probability) from the queue
      2. Create a new internal node with these two nodes as children and with probability equal to the sum of the two nodes' probabilities.
      3. Add the new node to the queue.
    3. The remaining node is the root node and the tree is complete.

    Since efficient priority queue data structures require O(log n) time per insertion, and a tree with n leaves has 2n−1 nodes, this algorithm operates in O(n log n) time, where n is the number of symbols.
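
    To make the steps concrete, here is a minimal C sketch of this construction using a small binary min-heap as the priority queue. All types and names are my own for illustration; this is not the SourceForge project's code:

    /* Illustrative sketch: not taken from the SourceForge huffman project. */
    #include <stdlib.h>

    typedef struct node {
        unsigned long count;            /* symbol frequency (a count, not a probability) */
        int symbol;                     /* valid for leaves; -1 for internal nodes */
        struct node *left, *right;      /* NULL for leaves */
    } node;

    /* A tiny binary min-heap of node pointers, ordered by count. */
    #define HEAP_CAP 512
    static node *heap[HEAP_CAP];
    static int heap_size = 0;

    static void heap_push(node *n)
    {
        int i = heap_size++;
        while (i > 0 && heap[(i - 1) / 2]->count > n->count) {   /* sift up */
            heap[i] = heap[(i - 1) / 2];
            i = (i - 1) / 2;
        }
        heap[i] = n;
    }

    static node *heap_pop(void)
    {
        node *top = heap[0];
        node *last = heap[--heap_size];
        int i = 0;
        for (;;) {                                               /* sift down */
            int c = 2 * i + 1;
            if (c >= heap_size)
                break;
            if (c + 1 < heap_size && heap[c + 1]->count < heap[c]->count)
                ++c;
            if (last->count <= heap[c]->count)
                break;
            heap[i] = heap[c];
            i = c;
        }
        heap[i] = last;
        return top;
    }

    static node *new_node(unsigned long count, int symbol, node *l, node *r)
    {
        node *n = malloc(sizeof *n);
        n->count = count; n->symbol = symbol; n->left = l; n->right = r;
        return n;
    }

    /* counts[i] == 0 means symbol i does not occur in the input. */
    node *build_huffman_tree(const unsigned long counts[], int nsymbols)
    {
        int i;
        for (i = 0; i < nsymbols; ++i)          /* step 1: one leaf per symbol */
            if (counts[i])
                heap_push(new_node(counts[i], i, NULL, NULL));

        while (heap_size > 1) {                 /* step 2: merge the two lowest-count nodes */
            node *a = heap_pop();
            node *b = heap_pop();
            heap_push(new_node(a->count + b->count, -1, a, b));
        }
        return heap_size ? heap_pop() : NULL;   /* step 3: the remaining node is the root */
    }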

    If the symbols are sorted by probability, there is a linear-time (O(n)) method to create a Huffman tree using two queues, the first one containing the initial weights (along with pointers to the associated leaves), and combined weights (along with pointers to the trees) being put in the back of the second queue. This assures that the lowest weight is always kept at the front of one of the two queues:

    1. Start with as many leaves as there are symbols.
    2. Enqueue all leaf nodes into the first queue (by probability in increasing order so that the least likely item is in the head of the queue).
    3. While there is more than one node in the queues:
      1. Dequeue the two nodes with the lowest weight by examining the fronts of both queues.
      2. Create a new internal node, with the two just-removed nodes as children (either node can be either child) and the sum of their weights as the new weight.
      3. Enqueue the new node into the rear of the second queue.
    4. The remaining node is the root node; the tree has now been generated.

    Although this algorithm may appear "faster" complexity-wise than the previous algorithm using a priority queue, this is not actually the case, because the symbols need to be sorted by probability beforehand, a process that takes O(n log n) time in itself.
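
    A minimal sketch of the two-queue variant follows, assuming the leaf nodes have already been sorted by ascending count; again, the types and names are only illustrative, not the project's:

    /* Illustrative sketch of the two-queue O(n) construction. */
    #include <stdlib.h>

    typedef struct node {
        unsigned long count;
        int symbol;                     /* -1 for internal nodes */
        struct node *left, *right;
    } node;

    /* A plain FIFO of node pointers. */
    typedef struct { node **buf; int head, tail; } queue;

    static void  q_init(queue *q, int cap) { q->buf = malloc(cap * sizeof *q->buf); q->head = q->tail = 0; }
    static int   q_len(const queue *q)     { return q->tail - q->head; }
    static void  q_push(queue *q, node *n) { q->buf[q->tail++] = n; }
    static node *q_pop(queue *q)           { return q->buf[q->head++]; }

    /* Dequeue from whichever queue has the smaller count at its front. */
    static node *pop_min(queue *a, queue *b)
    {
        if (q_len(a) == 0) return q_pop(b);
        if (q_len(b) == 0) return q_pop(a);
        return (a->buf[a->head]->count <= b->buf[b->head]->count) ? q_pop(a) : q_pop(b);
    }

    /* leaves[] must already be sorted by ascending count; n >= 1. */
    node *build_huffman_tree_2q(node *leaves[], int n)
    {
        queue q1, q2;
        node *root;
        int i;

        q_init(&q1, n);                 /* leaf nodes, least likely first */
        q_init(&q2, n);                 /* internal nodes: at most n - 1 of them */
        for (i = 0; i < n; ++i)
            q_push(&q1, leaves[i]);     /* step 2 */

        while (q_len(&q1) + q_len(&q2) > 1) {      /* step 3 */
            node *a = pop_min(&q1, &q2);           /* 3.1: two lowest weights */
            node *b = pop_min(&q1, &q2);
            node *p = malloc(sizeof *p);           /* 3.2: new internal node */
            p->count = a->count + b->count;
            p->symbol = -1;
            p->left = a;
            p->right = b;
            q_push(&q2, p);                        /* 3.3: rear of the second queue */
        }
        root = pop_min(&q1, &q2);                  /* step 4: the root */
        free(q1.buf);
        free(q2.buf);
        return root;
    }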

    The above is taken from Wikipedia (which I'm liking more and more these days): http://en.wikipedia.org/wiki/Huffman_coding

    There is a simple implementation of Huffman compression and decompression on SourceForge: http://sourceforge.net/projects/huffman/

    Its comments are fairly detailed. If the basics are still unclear, this page is also worth a look:

    http://www.siggraph.org/education/materials/HyperGraph/video/mpeg/mpegfaq/huffman_tutorial.html

    As a quick preview, here are two of its functions:

    /*
     * huffman_encode_file huffman encodes in to out.
     */
    int huffman_encode_file(FILE *in, FILE *out)
    {
        SymbolFrequencies sf;
        SymbolEncoder *se;
        huffman_node *root = NULL;
        int rc;
        unsigned int symbol_count;

        /* Get the frequency of each symbol in the input file. */
        symbol_count = get_symbol_frequencies(&sf, in);

        /* Build an optimal table from the symbolCount. */
        se = calculate_huffman_codes(&sf);
        root = sf[0];   /* calculate_huffman_codes leaves the tree root in sf[0] */

        /* Scan the file again and, using the table
           previously built, encode it into the output file. */
        rewind(in);
        rc = write_code_table(out, se, symbol_count);
        if(rc == 0)
            rc = do_file_encode(in, out, se);

        /* Free the Huffman tree. */
        free_huffman_tree(root);
        free_encoder(se);
        return rc;
    }

    The next function builds the Huffman tree to obtain the code table:

    static SymbolEncoder* calculate_huffman_codes(SymbolFrequencies * pSF)
    {
        unsigned int i = 0;
        unsigned int n = 0;
        huffman_node *m1 = NULL, *m2 = NULL;
        SymbolEncoder *pSE = NULL;

        /* Sort the symbol frequency array by ascending frequency. */
        qsort((*pSF), MAX_SYMBOLS, sizeof((*pSF)[0]), SFComp);

        /* Get the number of symbols. */
        for(n = 0; n < MAX_SYMBOLS && (*pSF)[n]; ++n)
            ;

        /*
         * Construct a Huffman tree. This code is based
         * on the algorithm given in Managing Gigabytes
         * by Ian Witten et al, 2nd edition, page 34.
         * Note that this implementation uses a simple
         * count instead of probability.
         */
        for(i = 0; i < n - 1; ++i)
        {
            /* Set m1 and m2 to the two subsets of least probability. */
            m1 = (*pSF)[0];
            m2 = (*pSF)[1];

            /* Replace m1 and m2 with a set {m1, m2} whose probability
             * is the sum of that of m1 and m2. */
            (*pSF)[0] = m1->parent = m2->parent =
                new_nonleaf_node(m1->count + m2->count, m1, m2);
            (*pSF)[1] = NULL;

            /* Put newSet into the correct count position in pSF. */
            qsort((*pSF), n, sizeof((*pSF)[0]), SFComp);
        }

        /* Build the SymbolEncoder array from the tree. */
        pSE = (SymbolEncoder*)malloc(sizeof(SymbolEncoder));
        memset(pSE, 0, sizeof(SymbolEncoder));
        build_symbol_encoder((*pSF)[0], pSE);
        return pSE;
    }
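
    The listing finishes by calling build_symbol_encoder to turn the finished tree into a code table. That function is not shown here, but the idea is a depth-first walk that appends a 0 bit for one branch and a 1 bit for the other and records the accumulated bits at each leaf. The sketch below is an illustrative stand-in using its own node type, not the project's code:

    /* Illustrative stand-in for deriving code words from a Huffman tree. */
    #include <stdio.h>

    typedef struct node {
        unsigned long count;
        int symbol;                          /* >= 0 for leaves, -1 for internal nodes */
        struct node *left, *right;           /* NULL for leaves */
    } node;

    /* Depth-first walk: 'bits' accumulates the path from the root. */
    void print_codes(const node *n, char *bits, int depth)
    {
        if (!n)
            return;
        if (!n->left && !n->right) {         /* leaf: the path so far is its code word */
            bits[depth] = '\0';
            printf("symbol %d -> %s\n", n->symbol, depth ? bits : "0");
            return;
        }
        bits[depth] = '0';                   /* left branch contributes a 0 bit */
        print_codes(n->left, bits, depth + 1);
        bits[depth] = '1';                   /* right branch contributes a 1 bit */
        print_codes(n->right, bits, depth + 1);
    }

    Calling print_codes(root, buf, 0) with a buffer large enough for the deepest path (256 bytes is plenty for byte symbols) prints one code word per symbol.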

    That's cool! Go download it and take a look.

  • Original post: https://www.cnblogs.com/liujiahi/p/2196342.html