  • bip39

      BIP: 39 (mnemonic words)
      Layer: Applications
      Title: Mnemonic code for generating deterministic keys
      Author: Marek Palatinus <slush@satoshilabs.com>
              Pavol Rusnak <stick@satoshilabs.com>
              Aaron Voisine <voisine@gmail.com>
              Sean Bowe <ewillbefull@gmail.com>
      Comments-Summary: Unanimously Discourage for implementation
      Comments-URI: https://github.com/bitcoin/bips/wiki/Comments:BIP-0039
      Status: Proposed
      Type: Standards Track
      Created: 2013-09-10

    Abstract

    This BIP describes the implementation of a mnemonic code or mnemonic sentence -- a group of easy to remember words -- for the generation of deterministic wallets.


    It consists of two parts: generating the mnemonic, and converting it into a binary seed. This seed can be later used to generate deterministic wallets using BIP-0032 or similar methods.


    Motivation

    A mnemonic code or sentence is superior for human interaction compared to the handling of raw binary or hexadecimal representations of a wallet seed. The sentence could be written on paper or spoken over the telephone.


    This guide is meant to be a way to transport computer-generated randomness with a human readable transcription. It's not a way to process user-created sentences (also known as brainwallets) into a wallet seed.


    Generating the mnemonic

    The mnemonic must encode entropy in a multiple of 32 bits. With more entropy security is improved but the sentence length increases. We refer to the initial entropy length as ENT. The allowed size of ENT is 128-256 bits.


     

    First, an initial entropy of ENT bits is generated. A checksum is generated by taking the first ENT / 32 bits of its SHA256 hash. This checksum is appended to the end of the initial entropy. Next, these concatenated bits are split into groups of 11 bits, each encoding a number from 0-2047, serving as an index into a wordlist. Finally, we convert these numbers into words and use the joined words as a mnemonic sentence.

    (1) First, generate an initial entropy of ENT bits (in the example below, the hex string 00000000000000000000000000000000, i.e. ENT = 32 * 4 = 128 bits).

    (2) Take the first CS bits of the SHA256 hash of the entropy as a checksum (here CS = ENT / 32 = 4) and append it to the end of the initial entropy.

    (3) Split the concatenated bits (entropy + checksum) into groups of 11 bits (MS groups in total); each group encodes a number from 0-2047 that is used as an index into the wordlist.

    (4) Finally, look up the word for each index and join the words to form the mnemonic sentence (sketched in code below).
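
    A minimal sketch of steps (1)-(4), assuming only Node's built-in crypto module and an array englishWordlist holding the 2048 BIP39 English words (both names are placeholders, not part of the BIP):

    const crypto = require('crypto')

    function entropyToWords (entropyHex, wordlist) {
      const entropy = Buffer.from(entropyHex, 'hex')
      const ENT = entropy.length * 8
      const CS = ENT / 32
      // entropy bits followed by the first CS bits of SHA256(entropy)
      const hash = crypto.createHash('sha256').update(entropy).digest()
      const bits = [...entropy, ...hash]
        .map(b => b.toString(2).padStart(8, '0'))
        .join('')
        .slice(0, ENT + CS)
      // each 11-bit group is an index (0-2047) into the wordlist
      return bits.match(/.{11}/g).map(chunk => wordlist[parseInt(chunk, 2)]).join(' ')
    }

    // entropyToWords('00000000000000000000000000000000', englishWordlist)
    // => 'abandon abandon ... abandon about' (12 words)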

    The following table describes the relation between the initial entropy length (ENT), the checksum length (CS) and the length of the generated mnemonic sentence (MS) in words.


    CS = ENT / 32
    MS = (ENT + CS) / 11
    |  ENT  | CS | ENT+CS |  MS  |
    +-------+----+--------+------+
    |  128  |  4 |   132  |  12  |
    |  160  |  5 |   165  |  15  |
    |  192  |  6 |   198  |  18  |
    |  224  |  7 |   231  |  21  |
    |  256  |  8 |   264  |  24  |
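
    The two formulas above can be checked directly (a trivial sketch, not part of the BIP):

    for (const ENT of [128, 160, 192, 224, 256]) {
      const CS = ENT / 32
      console.log(ENT, CS, ENT + CS, (ENT + CS) / 11)  // reproduces the rows of the table
    }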
    

    Wordlist

    An ideal wordlist has the following characteristics:


    a) smart selection of words

       - the wordlist is created in such way that it's enough to type the first four
         letters to unambiguously identify the word

    b) similar words avoided

       - word pairs like "build" and "built", "woman" and "women", or "quick" and "quickly"
         not only make remembering the sentence difficult, but are also more error
         prone and more difficult to guess

    c) sorted wordlists (see the sketch after this list)

       - the wordlist is sorted which allows for more efficient lookup of the code words
         (i.e. implementations can use binary search instead of linear search)
       - this also allows trie (a prefix tree) to be used, e.g. for better compression
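
    Because the wordlist is sorted, a word can be located by its first letters with a binary search. A minimal sketch, assuming wordlist is the sorted 2048-word array (by property (a), four letters are enough to make the prefix unambiguous):

    function indexOfPrefix (wordlist, prefix) {
      let lo = 0
      let hi = wordlist.length
      while (lo < hi) {                       // find the first word >= prefix
        const mid = (lo + hi) >> 1
        if (wordlist[mid] < prefix) lo = mid + 1
        else hi = mid
      }
      return lo < wordlist.length && wordlist[lo].startsWith(prefix) ? lo : -1
    }

    // indexOfPrefix(englishWordlist, 'abou')  // => 3 ('about')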
    
    

    The wordlist can contain native characters, but they must be encoded in UTF-8 using Normalization Form Compatibility Decomposition (NFKD).

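
    A small illustration of NFKD normalization, using only the built-in String.prototype.normalize (not part of the BIP or of the library below):

    const raw = 'ガ'                        // precomposed katakana "ga" (U+30AC)
    const nfkd = raw.normalize('NFKD')      // decomposes to 'カ' (U+30AB) + combining dakuten (U+3099)
    console.log(Buffer.from(nfkd, 'utf8'))  // these UTF-8 bytes are what all implementations must agree on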

    From mnemonic to seed

    A user may decide to protect their mnemonic with a passphrase. If a passphrase is not present, an empty string "" is used instead.


    ⚠️ The passphrase acts as an extra security factor protecting the seed: even if the backup of the mnemonic is stolen, the wallet can remain safe (provided the passphrase is long and complex enough). On the other hand, if the passphrase is forgotten, the assets can no longer be recovered.

    To create a binary seed from the mnemonic, we use the PBKDF2 function with a mnemonic sentence (in UTF-8 NFKD) used as the password and the string "mnemonic" + passphrase (again in UTF-8 NFKD) used as the salt. The iteration count is set to 2048 and HMAC-SHA512 is used as the pseudo-random function. The length of the derived key is 512 bits (= 64 bytes).

    In other words, PBKDF2 here acts as a key-stretching function: the HMAC-SHA512 operation is repeated 2048 times, and the 512-bit (64-byte) derived key is the final seed.

    pbkdf2(mnemonicBuffer, saltBuffer, 2048, 64, 'sha512')
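
    A minimal sketch of this call using only Node's built-in crypto module (deriveSeed is a placeholder name; the library code below does the same thing via the pbkdf2 and unorm packages):

    const crypto = require('crypto')

    function deriveSeed (mnemonic, passphrase) {
      const password = Buffer.from(mnemonic.normalize('NFKD'), 'utf8')
      const salt = Buffer.from(('mnemonic' + (passphrase || '')).normalize('NFKD'), 'utf8')
      // 2048 iterations of HMAC-SHA512, 64-byte (512-bit) output
      return crypto.pbkdf2Sync(password, salt, 2048, 64, 'sha512')
    }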

    This seed can be later used to generate deterministic wallets using BIP-0032 or similar methods.


    The conversion of the mnemonic sentence to a binary seed is completely independent from generating the sentence. This results in rather simple code; there are no constraints on sentence structure and clients are free to implement their own wordlists or even whole sentence generators, allowing for flexibility in wordlists for typo detection or other purposes.


    Although using a mnemonic not generated by the algorithm described in "Generating the mnemonic" section is possible, this is not advised and software must compute a checksum for the mnemonic sentence using a wordlist and issue a warning if it is invalid.

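
    For example, with the validateMnemonic function from the implementation below, swapping the last word of a valid mnemonic breaks the embedded checksum:

    var bip39 = require('bip39')
    var good = 'abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about'
    var bad = good.replace(/about$/, 'abandon')  // same wordlist words, wrong checksum
    console.log(bip39.validateMnemonic(good))    // => true
    console.log(bip39.validateMnemonic(bad))     // => false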

    The described method also provides plausible deniability, because every passphrase generates a valid seed (and thus a deterministic wallet) but only the correct one will make the desired wallet available.

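
    A quick illustration with the library below: one mnemonic, two passphrases, two unrelated (but both valid) seeds:

    var bip39 = require('bip39')
    var mnemonic = bip39.generateMnemonic()
    console.log(bip39.mnemonicToSeedHex(mnemonic, 'passphrase A'))
    console.log(bip39.mnemonicToSeedHex(mnemonic, 'passphrase B'))  // a completely different seed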

    Implementation code:

    The BIP39 standard addresses this need for mnemonics: 12-24 easy-to-remember words are generated at random, and the word sequence is fed through PBKDF2 with HMAC-SHA512 to produce a random seed that is used as the BIP32 seed.

    bip39/index.js

    var Buffer = require('safe-buffer').Buffer
    var createHash = require('create-hash')
    var pbkdf2 = require('pbkdf2').pbkdf2Sync
    var randomBytes = require('randombytes')
    
    // use unorm until String.prototype.normalize gets better browser support
    var unorm = require('unorm')
    
    var CHINESE_SIMPLIFIED_WORDLIST = require('./wordlists/chinese_simplified.json')
    var CHINESE_TRADITIONAL_WORDLIST = require('./wordlists/chinese_traditional.json')
    var ENGLISH_WORDLIST = require('./wordlists/english.json')
    var FRENCH_WORDLIST = require('./wordlists/french.json')
    var ITALIAN_WORDLIST = require('./wordlists/italian.json')
    var JAPANESE_WORDLIST = require('./wordlists/japanese.json')
    var KOREAN_WORDLIST = require('./wordlists/korean.json')
    var SPANISH_WORDLIST = require('./wordlists/spanish.json')
    var DEFAULT_WORDLIST = ENGLISH_WORDLIST
    
    var INVALID_MNEMONIC = 'Invalid mnemonic'
    var INVALID_ENTROPY = 'Invalid entropy'
    var INVALID_CHECKSUM = 'Invalid mnemonic checksum'
    
    function lpad (str, padString, length) {
      while (str.length < length) str = padString + str
      return str
    }
    
    function binaryToByte (bin) {
      return parseInt(bin, 2)
    }
    
    function bytesToBinary (bytes) {
      return bytes.map(function (x) {
        return lpad(x.toString(2), '0', 8)
      }).join('')
    }
    
    // checksum = first ENT/32 bits of SHA256(entropy)
    function deriveChecksumBits (entropyBuffer) {
      var ENT = entropyBuffer.length * 8
      var CS = ENT / 32
      var hash = createHash('sha256').update(entropyBuffer).digest()
    
      return bytesToBinary([].slice.call(hash)).slice(0, CS)
    }
    
    function salt (password) {
      return 'mnemonic' + (password || '')
    }
    
    function mnemonicToSeed (mnemonic, password) {
      var mnemonicBuffer = Buffer.from(unorm.nfkd(mnemonic), 'utf8')
      var saltBuffer = Buffer.from(salt(unorm.nfkd(password)), 'utf8')
    
      return pbkdf2(mnemonicBuffer, saltBuffer, 2048, 64, 'sha512')
    }
    
    function mnemonicToSeedHex (mnemonic, password) {
      return mnemonicToSeed(mnemonic, password).toString('hex')
    }
    
    function mnemonicToEntropy (mnemonic, wordlist) {
      wordlist = wordlist || DEFAULT_WORDLIST
    
      var words = unorm.nfkd(mnemonic).split(' ')
      if (words.length % 3 !== 0) throw new Error(INVALID_MNEMONIC)
    
      // convert word indices to 11 bit binary strings
      var bits = words.map(function (word) {
        var index = wordlist.indexOf(word)
        if (index === -1) throw new Error(INVALID_MNEMONIC)
    
        return lpad(index.toString(2), '0', 11)
      }).join('')
    
      // split the binary string into ENT/CS
      var dividerIndex = Math.floor(bits.length / 33) * 32
      var entropyBits = bits.slice(0, dividerIndex)
      var checksumBits = bits.slice(dividerIndex)
    
      // calculate the checksum and compare
      var entropyBytes = entropyBits.match(/(.{1,8})/g).map(binaryToByte)
      if (entropyBytes.length < 16) throw new Error(INVALID_ENTROPY)
      if (entropyBytes.length > 32) throw new Error(INVALID_ENTROPY)
      if (entropyBytes.length % 4 !== 0) throw new Error(INVALID_ENTROPY)
    
      var entropy = Buffer.from(entropyBytes)
      var newChecksum = deriveChecksumBits(entropy)
      if (newChecksum !== checksumBits) throw new Error(INVALID_CHECKSUM)
    
      return entropy.toString('hex')
    }
    
    function entropyToMnemonic (entropy, wordlist) {
      if (!Buffer.isBuffer(entropy)) entropy = Buffer.from(entropy, 'hex')
      wordlist = wordlist || DEFAULT_WORDLIST
    
      // 128 <= ENT <= 256
      if (entropy.length < 16) throw new TypeError(INVALID_ENTROPY)
      if (entropy.length > 32) throw new TypeError(INVALID_ENTROPY)
      if (entropy.length % 4 !== 0) throw new TypeError(INVALID_ENTROPY)
    
      var entropyBits = bytesToBinary([].slice.call(entropy))
      var checksumBits = deriveChecksumBits(entropy)
    
      var bits = entropyBits + checksumBits
      var chunks = bits.match(/(.{1,11})/g)
      var words = chunks.map(function (binary) {
        var index = binaryToByte(binary)
        return wordlist[index]
      })
    
      // Japanese mnemonics are joined with the ideographic space U+3000 instead of a normal space
      return wordlist === JAPANESE_WORDLIST ? words.join('\u3000') : words.join(' ')
    }
    
    function generateMnemonic (strength, rng, wordlist) {
      strength = strength || 128
      if (strength % 32 !== 0) throw new TypeError(INVALID_ENTROPY)
      rng = rng || randomBytes
    
      return entropyToMnemonic(rng(strength / 8), wordlist)
    }
    
    function validateMnemonic (mnemonic, wordlist) {
      try {
        mnemonicToEntropy(mnemonic, wordlist)
      } catch (e) {
        return false
      }
    
      return true
    }
    
    module.exports = {
      mnemonicToSeed: mnemonicToSeed,
      mnemonicToSeedHex: mnemonicToSeedHex,
      mnemonicToEntropy: mnemonicToEntropy,
      entropyToMnemonic: entropyToMnemonic,
      generateMnemonic: generateMnemonic,
      validateMnemonic: validateMnemonic,
      wordlists: {
        EN: ENGLISH_WORDLIST,
        JA: JAPANESE_WORDLIST,
    
        chinese_simplified: CHINESE_SIMPLIFIED_WORDLIST,
        chinese_traditional: CHINESE_TRADITIONAL_WORDLIST,
        english: ENGLISH_WORDLIST,
        french: FRENCH_WORDLIST,
        italian: ITALIAN_WORDLIST,
        japanese: JAPANESE_WORDLIST,
        korean: KOREAN_WORDLIST,
        spanish: SPANISH_WORDLIST
      }
    }

    Test vector

    "00000000000000000000000000000000",//entropy
    "abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about",//mnemonic
    "c55257c360c07c72029aebc1b53c05ed0362ada38ead3e3e9efa3708e53495531f09a6987599d18264c1e1c92f2cf141630c7a3c4ab7c81b2f001698e7463b04",//seed
    "xprv9s21ZrQH143K3h3fDYiay8mocZ3afhfULfb5GX8kCBdno77K4HiA15Tg23wpbeF1pLfs1c5SPmYHrEpTuuRhxMwvKDwqdKiGJS9XFKzUsAF"//root key

    Example:

    npm install bip39 --save
    //+ bip39@2.5.0
    var bip39 = require('bip39')
     
    // defaults to BIP39 English word list
    // uses HEX strings for entropy
    var mnemonic = bip39.entropyToMnemonic('00000000000000000000000000000000')
    console.log(mnemonic)
    // => abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about
     
    // reversible
    console.log(bip39.mnemonicToEntropy(mnemonic))
    // => '00000000000000000000000000000000'
    
    // Generate a random mnemonic (uses crypto.randomBytes under the hood), defaults to 128-bits of entropy
    // var mnemonic = bip39.generateMnemonic()
    
     
    console.log(bip39.mnemonicToSeedHex(mnemonic))
    // => '5eb00bbddcf069084889a8ab9155568165f5c453ccb85e70811aaed6f6da5fc19a5ac40b389cd370d086206dec8aa6c43daea6690f20ad3d8d48b2d2ce9e38e4'
     
    console.log(bip39.mnemonicToSeed(mnemonic))
    // => <Buffer 5e b0 0b bd dc f0 69 08 48 89 a8 ab 91 55 56 81 65 f5 c4 53 cc b8 5e 70 81 1a ae d6 f6 da 5f c1 9a 5a c4 0b 38 9c d3 70 d0 86 20 6d ec 8a a6 c4 3d ae ... 14 more bytes>
     
    console.log(bip39.validateMnemonic(mnemonic))
    // => true
     
    console.log(bip39.validateMnemonic('basket actual'))
    // => false
  • Original article: https://www.cnblogs.com/wanghui-garcia/p/9983033.html