  • go-ethereum Source Code Analysis, Part II: Consensus Algorithms

    Let's start with the consensus engine interface, Engine.

    Engine is a consensus engine interface that is independent of any concrete algorithm.

    Author(header) (common.Address, error) returns the address of the miner who sealed the block corresponding to this header.

    VerifyHeader(chain ChainReader, header, seal bool) checks whether the header conforms to the consensus rules of the current engine. The seal flag controls whether the seal verification (VerifySeal) is performed as part of this call.

    VerifyHeaders is the batch version of VerifyHeader: it verifies a slice of headers concurrently and reports the results asynchronously.

    VerifyUncles(chain ChainReader, block) error

    VerifySeal(chain ChainReader, header) error

    Prepare(chain, header) error initializes the consensus fields of the header according to the rules of the engine. The changes are executed inline.

    Finalize(chain ChainReader, header *types.Header, state *state.StateDB, txs []*types.Transaction,
            uncles []*types.Header, receipts []*types.Receipt) (*types.Block, error)
    runs any post-transaction state modifications (e.g. block and uncle rewards) and assembles the final block.

    Seal(chain, block, results chan<- *types.Block, stop <-chan struct{}) generates a new sealing request for the given block and pushes the result onto the results channel. Note that it is asynchronous (the method returns immediately and will send the result async); depending on the consensus algorithm, more than one result block may be returned.

    SealHash(header) common.Hash returns the hash of a block prior to it being sealed.

    CalcDifficulty(chain, time uint64, parent *types.Header) *big.Int is the difficulty adjustment algorithm; it returns the difficulty a new block should have.

    APIs(chain ChainReader) []rpc.API returns the RPC APIs this consensus engine provides.
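    Put together, the interface looks roughly like the sketch below. This is a simplified reconstruction assembled from the method list above; the real definition lives in consensus/consensus.go and the exact parameter names and ChainReader type vary between go-ethereum versions.

    // Engine is an algorithm-agnostic consensus engine (sketch, not the exact source).
    type Engine interface {
        // Author retrieves the address of the account that minted the given block.
        Author(header *types.Header) (common.Address, error)

        // VerifyHeader checks whether a header conforms to the consensus rules;
        // seal controls whether the seal is verified as well.
        VerifyHeader(chain ChainReader, header *types.Header, seal bool) error

        // VerifyHeaders is the concurrent, batched version of VerifyHeader.
        VerifyHeaders(chain ChainReader, headers []*types.Header, seals []bool) (chan<- struct{}, <-chan error)

        // VerifyUncles verifies that the block's uncles conform to the rules.
        VerifyUncles(chain ChainReader, block *types.Block) error

        // VerifySeal checks whether the crypto seal on a header is valid.
        VerifySeal(chain ChainReader, header *types.Header) error

        // Prepare initializes the consensus fields of a block header.
        Prepare(chain ChainReader, header *types.Header) error

        // Finalize runs post-transaction state modifications and assembles the final block.
        Finalize(chain ChainReader, header *types.Header, state *state.StateDB, txs []*types.Transaction,
            uncles []*types.Header, receipts []*types.Receipt) (*types.Block, error)

        // Seal generates a sealing request and streams results back asynchronously.
        Seal(chain ChainReader, block *types.Block, results chan<- *types.Block, stop <-chan struct{}) error

        // SealHash returns the hash of a block prior to it being sealed.
        SealHash(header *types.Header) common.Hash

        // CalcDifficulty is the difficulty adjustment algorithm.
        CalcDifficulty(chain ChainReader, time uint64, parent *types.Header) *big.Int

        // APIs returns the RPC APIs this consensus engine provides.
        APIs(chain ChainReader) []rpc.API
    }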

    // PoW is a consensus engine based on proof-of-work.
    type PoW interface {
        Engine

        // Hashrate returns the current mining hashrate of a PoW consensus engine.
        Hashrate() float64
    }
     
    The engine implementation is the Ethash algorithm. Ethash is the PoW algorithm introduced with Ethereum 1.0 and is the latest adaptation of Dagger-Hashimoto used for Ether. (It may eventually be replaced by PoS.)
    Ethash was designed with the following goals:
    1. IO saturation: the algorithm should consume nearly all available memory/IO bandwidth, to resist mining on ASIC-like hardware.
    2. GPU friendliness.
    3. Light client verifiability: a light client should be able to verify one round of mining in under 0.01 s (in C) using less than 1 MB of memory.
    4. Light client slowdown: mining with a light client should be much slower than with a full node, even with specialized hardware. This corresponds to hashimoto_light vs hashimoto_full: the former needs far less memory, but may have to recompute dataset items repeatedly.
    5. Light client fast startup: a light client should become fully operational within 40 s in JavaScript.
    Basic flow:
    1. A seed is derived from the current epoch.
    2. From the seed, a pseudorandom cache of about 16 MB is computed.
    3. From the cache, a dataset (the DAG) of roughly 1 GB is generated. The DAG is fully regenerated every 30000 blocks (one epoch).
    4. Mining repeatedly grabs random slices of the dataset (two adjacent 64-byte dataset items per access) and hashes them together with the keccak256 hash of the current header's RLP.
    5. A verifier only needs the cache to regenerate the specific dataset items it uses, so verification costs little CPU and memory.
    6. Don't forget to delete the data of old epochs.
     
    Basic parameters
     
    WORD_BYTES = 4                    # bytes in word
    DATASET_BYTES_INIT = 2**30        # bytes in dataset at genesis
    DATASET_BYTES_GROWTH = 2**23      # dataset growth per epoch
    CACHE_BYTES_INIT = 2**24          # bytes in cache at genesis
    CACHE_BYTES_GROWTH = 2**17        # cache growth per epoch
    CACHE_MULTIPLIER=1024             # Size of the DAG relative to the cache
    EPOCH_LENGTH = 30000              # blocks per epoch
    MIX_BYTES = 128                   # width of mix
    HASH_BYTES = 64                   # hash length in bytes
    DATASET_PARENTS = 256             # number of parents of each dataset element
    CACHE_ROUNDS = 3                  # number of rounds in cache production
    ACCESSES = 64                     # number of accesses in hashimoto loop

    Because the cache and dataset sizes must grow over time, each is chosen as the largest value below its linear bound (CACHE_BYTES_INIT + CACHE_BYTES_GROWTH * (block_number // EPOCH_LENGTH) for the cache, the analogous DATASET_* expression for the dataset) such that the size divided by HASH_BYTES (resp. MIX_BYTES) is prime.
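    A small Go sketch of that size rule, mirroring the spec's get_cache_size / get_full_size and using a probabilistic primality test from math/big:

    package main

    import (
        "fmt"
        "math/big"
    )

    const (
        datasetBytesInit   = 1 << 30
        datasetBytesGrowth = 1 << 23
        cacheBytesInit     = 1 << 24
        cacheBytesGrowth   = 1 << 17
        epochLength        = 30000
        mixBytes           = 128
        hashBytes          = 64
    )

    func isPrime(n uint64) bool {
        return new(big.Int).SetUint64(n).ProbablyPrime(20)
    }

    // cacheSize returns the largest size below the linear bound whose
    // quotient by hashBytes is prime.
    func cacheSize(blockNumber uint64) uint64 {
        sz := uint64(cacheBytesInit) + cacheBytesGrowth*(blockNumber/epochLength) - hashBytes
        for !isPrime(sz / hashBytes) {
            sz -= 2 * hashBytes
        }
        return sz
    }

    // datasetSize does the same, with mixBytes as the unit.
    func datasetSize(blockNumber uint64) uint64 {
        sz := uint64(datasetBytesInit) + datasetBytesGrowth*(blockNumber/epochLength) - mixBytes
        for !isPrime(sz / mixBytes) {
            sz -= 2 * mixBytes
        }
        return sz
    }

    func main() {
        fmt.Println(cacheSize(0), datasetSize(0)) // 16776896 1073739904 for epoch 0
    }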

    Generating the seed for mkcache

     def get_seedhash(block):
         s = '\x00' * 32
         for i in range(block.number // EPOCH_LENGTH):
             s = serialize_hash(sha3_256(s))
         return s
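    For reference, a rough Go equivalent of get_seedhash, using golang.org/x/crypto/sha3 (Ethereum's "sha3" is the legacy Keccak-256):

    package main

    import (
        "fmt"

        "golang.org/x/crypto/sha3"
    )

    const epochLength = 30000

    // seedHash hashes 32 zero bytes once per elapsed epoch with Keccak-256,
    // mirroring get_seedhash above.
    func seedHash(blockNumber uint64) []byte {
        seed := make([]byte, 32)
        for i := uint64(0); i < blockNumber/epochLength; i++ {
            h := sha3.NewLegacyKeccak256()
            h.Write(seed)
            seed = h.Sum(nil)
        }
        return seed
    }

    func main() {
        fmt.Printf("%x\n", seedHash(60000)) // seed for epoch 2: two Keccak-256 rounds
    }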
    

      

    Generating the cache

    Note that the hash Ethereum calls sha3 is a variant: it is the original Keccak submission, not the finalized NIST SHA-3 standard.
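    A quick way to see the difference (a small sketch using golang.org/x/crypto/sha3; the two digests of the empty input differ):

    package main

    import (
        "fmt"

        "golang.org/x/crypto/sha3"
    )

    func main() {
        keccak := sha3.NewLegacyKeccak256() // what Ethereum calls "sha3"
        nist := sha3.New256()               // the finalized SHA3-256
        fmt.Printf("keccak256(\"\") = %x\n", keccak.Sum(nil)) // c5d2460186f7...
        fmt.Printf("sha3-256(\"\")  = %x\n", nist.Sum(nil))   // a7ffc6f8bf1e...
    }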

    def mkcache(cache_size, seed):
        n = cache_size // HASH_BYTES
    
        # Sequentially produce the initial dataset
        o = [sha3_512(seed)]
        for i in range(1, n):
            o.append(sha3_512(o[-1]))
    
        # Use a low-round version of randmemohash
        for _ in range(CACHE_ROUNDS):
            for i in range(n):
                v = o[i][0] % n
                o[i] = sha3_512(map(xor, o[(i-1+n) % n], o[v]))
    
        return o

    Generating the dataset

    FNV_PRIME = 0x01000193
    
    def fnv(v1, v2):
        return ((v1 * FNV_PRIME) ^ v2) % 2**32
    
    def calc_dataset_item(cache, i):
        n = len(cache)
        r = HASH_BYTES // WORD_BYTES
        # initialize the mix
        mix = copy.copy(cache[i % n])
        mix[0] ^= i
        mix = sha3_512(mix)
        # fnv it with a lot of random cache nodes based on i
        for j in range(DATASET_PARENTS):
            cache_index = fnv(i ^ j, mix[j % r])
            mix = map(fnv, mix, cache[cache_index % n])
        return sha3_512(mix)
    def calc_dataset(full_size, cache):
        return [calc_dataset_item(cache, i) for i in range(full_size // HASH_BYTES)]
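    For comparison, the same fnv mixing primitive in Go; unsigned 32-bit arithmetic wraps modulo 2^32, so the explicit modulus disappears:

    package main

    import "fmt"

    // fnv is the one-step FNV-style mix used by ethash: multiply by the
    // FNV prime, then xor. uint32 arithmetic already wraps mod 2^32.
    func fnv(a, b uint32) uint32 {
        return a*0x01000193 ^ b
    }

    func main() {
        fmt.Printf("%#08x\n", fnv(1, 2)) // 0x01000191
    }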

    The main algorithm

    Note: do not confuse the s here with the seed above.

    def hashimoto(header, nonce, full_size, dataset_lookup):
        n = full_size / HASH_BYTES
        w = MIX_BYTES // WORD_BYTES
        mixhashes = MIX_BYTES / HASH_BYTES
        # combine header+nonce into a 64 byte seed
        s = sha3_512(header + nonce[::-1])
        # start the mix with replicated s
        mix = []
        for _ in range(MIX_BYTES / HASH_BYTES):
            mix.extend(s)
        # mix in random dataset nodes
        for i in range(ACCESSES):
            p = fnv(i ^ s[0], mix[i % w]) % (n // mixhashes) * mixhashes
            newdata = []
            for j in range(MIX_BYTES / HASH_BYTES):
                newdata.extend(dataset_lookup(p + j))
            mix = map(fnv, mix, newdata)
        # compress mix
        cmix = []
        for i in range(0, len(mix), 4):
            cmix.append(fnv(fnv(fnv(mix[i], mix[i+1]), mix[i+2]), mix[i+3]))
        return {
            "mix digest": serialize_hash(cmix),
            "result": serialize_hash(sha3_256(s+cmix))
        }
    
    def hashimoto_light(full_size, cache, header, nonce):
        return hashimoto(header, nonce, full_size, lambda x: calc_dataset_item(cache, x))
    
    def hashimoto_full(full_size, dataset, header, nonce):
        return hashimoto(header, nonce, full_size, lambda x: dataset[x])
    def mine(full_size, dataset, header, difficulty):
        # zero-pad target to compare with hash on the same digit when reversed
        target = zpad(encode_int(2**256 // difficulty), 64)[::-1]
        from random import randint
        nonce = randint(0, 2**64)
        while hashimoto_full(full_size, dataset, header, nonce) > target:
            nonce = (nonce + 1) % 2**64
        return nonce
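    The final acceptance check is the comparison against target = 2**256 // difficulty. Conceptually it looks like the minimal big.Int sketch below (not go-ethereum's actual verifySeal code):

    package main

    import (
        "fmt"
        "math/big"
    )

    var two256 = new(big.Int).Exp(big.NewInt(2), big.NewInt(256), nil)

    // checkPoW treats the 32-byte hashimoto "result" as a big-endian integer
    // and accepts it if it does not exceed 2^256 / difficulty.
    func checkPoW(result []byte, difficulty *big.Int) bool {
        target := new(big.Int).Div(two256, difficulty)
        return new(big.Int).SetBytes(result).Cmp(target) <= 0
    }

    func main() {
        // Toy example: with difficulty 1 every 32-byte result passes.
        fmt.Println(checkPoW(make([]byte, 32), big.NewInt(1)))
    }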
    Structure definition
    // Ethash is a consensus engine based on proof-of-work implementing the ethash
    // algorithm.
    type Ethash struct {
        config Config
    
        caches   *lru // In memory caches to avoid regenerating too often
        datasets *lru // In memory datasets to avoid regenerating too often
    
        // Mining related fields
        rand     *rand.Rand    // Properly seeded random source for nonces
        threads  int           // Number of threads to mine on if mining
        update   chan struct{} // Notification channel to update mining parameters
        hashrate metrics.Meter // Meter tracking the average hashrate
    
        // Remote sealer related fields
        workCh       chan *sealTask   // Notification channel to push new work and relative result channel to remote sealer
        fetchWorkCh  chan *sealWork   // Channel used for remote sealer to fetch mining work
        submitWorkCh chan *mineResult // Channel used for remote sealer to submit their mining result
        fetchRateCh  chan chan uint64 // Channel used to gather submitted hash rate for local or remote sealer.
        submitRateCh chan *hashrate   // Channel used for remote sealer to submit their mining hashrate
    
        // The fields below are hooks for testing
        shared    *Ethash       // Shared PoW verifier to avoid cache regeneration
        fakeFail  uint64        // Block number which fails PoW check even in fake mode
        fakeDelay time.Duration // Time delay to sleep for before returning from verify
    
        lock      sync.Mutex      // Ensures thread safety for the in-memory caches and mining fields
        closeOnce sync.Once       // Ensures exit channel will not be closed twice.
        exitCh    chan chan error // Notification channel to exiting backend threads
    }

    Ethash has, among others, the following methods in ethash.go:

    1. cache

    2. dataset

    Both methods look in memory first, then in the DAG files on disk, and finally generate the corresponding data structure if it is not found anywhere (a simplified sketch follows this list).

    3. Hashrate

    Collects the rate of search invocations per second over the last minute, from the local miner and from remote peers.
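    A simplified, self-contained sketch of that memory -> disk -> regenerate lookup. All names here are hypothetical stand-ins; the real logic lives in Ethash.cache / Ethash.dataset and uses an epoch-keyed lru plus memory-mapped DAG files.

    package main

    import "fmt"

    type epochCache struct {
        epoch uint64
        data  []uint32
    }

    type cacheStore struct {
        mem  map[uint64]*epochCache                 // stand-in for the in-memory lru
        disk func(epoch uint64) (*epochCache, bool) // stand-in for the on-disk DAG dir
    }

    func (s *cacheStore) get(epoch uint64) *epochCache {
        if c, ok := s.mem[epoch]; ok { // 1. hit in memory
            return c
        }
        if c, ok := s.disk(epoch); ok { // 2. hit on disk
            s.mem[epoch] = c
            return c
        }
        c := &epochCache{epoch: epoch} // 3. regenerate from the epoch seed (elided)
        s.mem[epoch] = c
        return c
    }

    func main() {
        s := &cacheStore{
            mem:  map[uint64]*epochCache{},
            disk: func(uint64) (*epochCache, bool) { return nil, false },
        }
        fmt.Println(s.get(7).epoch) // 7
    }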

    SUGAR:

    Go non-blocking channel op

    If a channel is ready to send or receive, the corresponding case runs; otherwise the default case runs immediately instead of blocking.

        select {
        case msg := <-messages:
            fmt.Println("received message", msg)
        case sig := <-signals:
            fmt.Println("received signal", sig)
        default:
            fmt.Println("no activity")
        }
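    A non-blocking send follows the same pattern; the default case fires when the channel cannot accept a value right now (this snippet assumes the messages channel and a msg value as in the example above):

        select {
        case messages <- msg:
            fmt.Println("sent message", msg)
        default:
            fmt.Println("no message sent")
        }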
  • Original article: https://www.cnblogs.com/xuesu/p/10564982.html