Hyperledger Fabric notes

    Certificate directory layout

    ordererOrganizations

    peerOrganizations/orgXXXX
    --ca                 org root certificate and private key (1)
    --tlsca              TLS root certificate and private key (2)
    --msp
    ----admincerts       org admin identity certificate (3)
    ----cacerts          org root certificate, i.e. (1) above
    ----tlscacerts       TLS root certificate, i.e. (2) above
    --peers
    ----peer0.orgXXXX
    ------msp
    --------admincerts   org admin identity certificate, i.e. (3)
    --------cacerts      org root certificate, i.e. (1)
    --------keystore     this node's private key
    --------signcerts    this node's certificate, signed by the org root certificate
    --------tlscacerts   TLS root certificate, i.e. (2)
    ------tls
    --------ca.crt       org TLS root certificate, i.e. (2)
    --------server.crt   this node's certificate, signed by the org root certificate
    --------server.key   this node's private key
    --users
    ----Admin@orgXXX
    ------msp
    --------admincerts   org admin identity certificate, i.e. (3)
    --------cacerts      org root certificate, i.e. (1)
    --------keystore     this user's private key
    --------signcerts    this user's certificate, signed by the org root certificate
    --------tlscacerts   TLS root certificate, i.e. (2)
    ------tls
    --------ca.crt       org TLS root certificate, i.e. (2)
    --------server.crt
    --------server.key
    ----User1@orgXXX
    ------msp
    --------admincerts   org admin identity certificate, i.e. (3)
    --------cacerts      org root certificate, i.e. (1)
    --------keystore     this user's private key
    --------signcerts    this user's certificate, signed by the org root certificate
    --------tlscacerts   TLS root certificate, i.e. (2)
    ------tls
    --------ca.crt       org TLS root certificate, i.e. (2)
    --------server.crt
    --------server.key
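
    This tree is what cryptogen emits. A minimal sketch for regenerating and browsing it, assuming a crypto-config.yaml in the working directory:

    # generate the certificate tree described above
    cryptogen generate --config=./crypto-config.yaml --output=./crypto-config
    # list the directories for the peer orgs, three levels deep
    find ./crypto-config/peerOrganizations -maxdepth 3 -type d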

    Enabling the monitoring hookup (statsd or prometheus)

    On each peer:

    CORE_OPERATIONS_LISTENADDRESS=peer0.orgxxxxxxxxx:9443

    CORE_METRICS_PROVIDER=prometheus

    Also map the container's port 9443 to the host.

    These correspond to settings in core.yaml. The orderer has its own configuration file (orderer.yaml) with similar operations/metrics sections; whether the switch works the same way there is still to be verified.
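
    A quick sanity check of the operations endpoint once the peer is up (a sketch; assumes port 9443 is mapped to the host and TLS is not enabled on the operations listener):

    # liveness probe of the operations service (Fabric 1.4+)
    curl http://localhost:9443/healthz
    # prometheus metrics, available once CORE_METRICS_PROVIDER=prometheus is set
    curl -s http://localhost:9443/metrics | head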


    https://www.linuxidc.com/Linux/2017-03/141593.htm
    Keepalived + HAProxy setup on CentOS 7: HAProxy installation and configuration

    https://www.zhihu.com/question/54626462
    On configuring and integrating Jira, Confluence, GitLab, and Jenkins, and common usage patterns

    https://www.cnblogs.com/haoprogrammer/p/10245561.html
    Kubernetes series (1): building a single-node Kubernetes (v1.13.1) cluster with kubeadm

    Private data (pvtdata) support must be enabled in the channel configuration file configtx.yaml;
    see https://stackoverflow.com/questions/52987188/how-to-enable-private-data-in-fabric-v1-3-0
    Application: &ApplicationCapabilities
        V1_3: true
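
    Enabling the capability only switches private data on; each chaincode still declares its collections in a collections config passed at instantiate time. A sketch (the collection name, policy, channel, and chaincode names are illustrative):

    # collections_config.json
    [
      {
        "name": "collectionMarbles",
        "policy": "OR('Org1MSP.member','Org2MSP.member')",
        "requiredPeerCount": 0,
        "maxPeerCount": 3,
        "blockToLive": 1000000
      }
    ]

    peer chaincode instantiate -o orderer.example.com:7050 -C mychannel -n marblesp -v 1.0 \
        -c '{"Args":["init"]}' --collections-config ./collections_config.json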


    The header comments of fabric-samples' chaincode/marbles02_private/go/marbles_chaincode_private.go contain many commands of the form
    export MARBLE=$(echo -n '{"name":"marble1","color":"blue","size":35,"owner":"tom","price":99}' | base64)
    By default base64 wraps its output after 76 characters; disable the wrapping with
    export MARBLE=$(echo -n '{"name":"marble1","color":"blue","size":35,"owner":"tom","price":99}' | base64 -w 0)
    or
    export MARBLE=$(echo -n '{"name":"marble1","color":"blue","size":35,"owner":"tom","price":99}' | base64 | tr -d '\n')
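
    To confirm the value round-trips, decode it back (plain base64, nothing Fabric-specific):

    echo "$MARBLE" | base64 -d
    # {"name":"marble1","color":"blue","size":35,"owner":"tom","price":99}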

    CouchDB web UI
    http://localhost:5984/_utils/
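
    The same instance also answers plain HTTP, which is handy for scripted checks (assuming the default port mapping and no admin credentials configured):

    curl http://localhost:5984/            # server banner with version
    curl http://localhost:5984/_all_dbs    # Fabric creates databases per channel/chaincode namespace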

    Explains block consistency and its effect on endorsement:

     https://lists.hyperledger.org/g/fabric/message/4896

    https://www.jianshu.com/p/5e6cbdfe2657

    Sample code that reads the ledger block files directly; it compiles under v1.4 but fails at runtime.

    https://developer.ibm.com/cn/os-academy-hyperledger-fabric/

    A decent IBM course series

    ---------------------------------------------------Environment variables and Ubuntu environment setup---------------------------------------------

    Multi-host cluster from docker images

    Node roles and counts
    kafka: 2f+1, 3 nodes
    zookeeper: 3 nodes
    orderer: 1 or more nodes
    peer: one or more nodes, depending on the number of orgs; at least one org, and at least one peer per org

    The most spread-out layout uses one node per role,
    the typical layout three nodes,
    and the minimal layout a single node, for testing.

    Spread-out layout
            kafka node   zookeeper node   orderer node   peer node          client node (optional)
    images  kafka        zookeeper        orderer        peer               client
                                                         couchdb
                                                         javaenv
                                                         ca (one per org)
                                                         tools (optional)
                                                         ccenv
                                                         baseos
    Typical layout
            node1             node2              node3
    images  kafka             kafka              kafka
            zookeeper         zookeeper          zookeeper
            orderer           orderer (optional) orderer (optional)
            peer              peer (optional)    peer (optional)
            ca (one per org)  ca                 ca
            tools (optional)  tools (optional)   tools (optional)
            ccenv             ccenv              ccenv
            javaenv           javaenv            javaenv
            baseos            baseos             baseos
            couchdb           couchdb            couchdb
            
    Startup order (see the compose sketch below)
    1. zookeeper
    2. kafka
    3. couchdb
    4. ca
    5. orderer
    6. peer
    7. cli
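
    docker-compose starts all services at once by default; one way to impose this order is to bring services up stage by stage (the service names are placeholders for whatever the compose file defines):

    docker-compose -f docker-compose.yaml up -d zookeeper0 zookeeper1 zookeeper2
    docker-compose -f docker-compose.yaml up -d kafka0 kafka1 kafka2 kafka3
    docker-compose -f docker-compose.yaml up -d couchdb ca.org1.example.com
    docker-compose -f docker-compose.yaml up -d orderer.example.com
    docker-compose -f docker-compose.yaml up -d peer0.org1.example.com
    docker-compose -f docker-compose.yaml up -d cli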

    kafka settings
        hostname: kafka0
        environment:
          - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
            # maximum size of a single message body
          - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
            # maximum message size that can be fetched from the leader
          - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
            # whether a replica outside the ISR set may be elected as the new leader
          - KAFKA_BROKER_ID=0
            # unique id of each broker in the Kafka cluster
          - KAFKA_MIN_INSYNC_REPLICAS=2
            # minimum number of replicas that must acknowledge a write before it counts as successful
            # the settings above should normally stay unchanged
          - KAFKA_DEFAULT_REPLICATION_FACTOR=3
            # replication factor: the total number of replicas, leader included
          - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
            # zookeeper connection string; must agree with name resolution and the zookeeper service definitions
        ports:
          - 9092:9092
            # maps a container port to a host port: host port before the colon, container port after
        extra_hosts:
            # optional; name resolution can instead use a name server. Whether the names here must
            # match hostname and the zookeeper connect string is TBD

    zookeeper settings
        hostname: zookeeper0
        # must match the server definitions below and the zookeeper connect string on the kafka nodes
        environment:
          - ZOO_MY_ID=1
          - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
            # server definitions; the node names here must match hostname
        ports:
          - 2181:2181
          - 2888:2888
          - 3888:3888
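
    Before starting kafka it is worth verifying the ensemble actually formed; zookeeper answers four-letter-word commands on its client port:

    echo ruok | nc zookeeper0 2181    # a healthy server answers: imok
    echo srvr | nc zookeeper0 2181    # shows Mode: leader / follower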

    orderer settings
          - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=fabric_default
            # docker network to use; when starting via docker-compose it must match the default
            # name or a pre-created network. When starting another way the options are
            # bridge (default), host, ipvlan, and none, i.e. the container's network mode
          - ORDERER_GENERAL_LOGLEVEL=warn
          - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
            # IP address to bind
          - ORDERER_GENERAL_LISTENPORT=7050
            # port to bind
          - ORDERER_GENERAL_GENESISMETHOD=file
            # how the genesis block is obtained: provisional generates it from the given Profile,
            # file reads it from the given location
          - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
            # genesis block file location
          - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
            # id under which the local MSP material is registered with the MSP manager; must match
            # the MSP id configured for the org in the system channel (/Channel/Orderer)
          - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
            # location of the orderer's private crypto material
            # enabled TLS
          - ORDERER_GENERAL_TLS_ENABLED=false
            # enable TLS on the orderer's own gRPC endpoint (TLS towards kafka is configured separately, under ORDERER_KAFKA_TLS_*)
          - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
            # PEM-encoded private key, used for authentication
          - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
            # public certificate
          - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
            # root certificates used to validate certificates presented by connecting clients

            # When a channel is created, or loaded (e.g. on orderer restart), the orderer interacts
            # with the kafka cluster as follows:
            #   create a producer/writer for the kafka partition corresponding to the channel
            #   use that producer to post a no-op CONNECT message to the partition
            #   create a consumer/reader for the partition
            # If any of these steps fails, retry every shortinterval for up to shorttotal,
            # then every longinterval for up to longtotal, until it succeeds
          - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
          - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
          - ORDERER_KAFKA_RETRY_LONGINTERVAL=10s
          - ORDERER_KAFKA_RETRY_LONGTOTAL=100s
          - ORDERER_KAFKA_VERBOSE=true
            # log the interaction with kafka
          - ORDERER_KAFKA_BROKERS=[kafka0:9092,kafka1:9092,kafka2:9092,kafka3:9092]
            # kafka cluster connection parameters; the server entries must agree with
            # name-server resolution or the hosts file
        volumes:
        - ../channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
        - ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp
        - ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/:/var/hyperledger/orderer/tls
        ports:
          - 7050:7050

    couchdb settings
        environment:
          - COUCHDB_USER=
          - COUCHDB_PASSWORD=
        ports:
          - "5984:5984"

    ca settings
        environment:
          - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
            # the org's CA material directory, corresponding to peerOrganizations/org1.example.com/ca/ in the cryptogen output
          - FABRIC_CA_SERVER_CA_NAME=ca
          - FABRIC_CA_SERVER_TLS_ENABLED=true
          - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
            # TLS public certificate
          - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/a272512e465ff74a214e6333916777912f08600a80a4597b4d9289e3b03231df_sk
            # TLS private key
        ports:
          - "7054:7054"

    peer settings (these duplicate the core.yaml configuration file)
        environment:
          - CORE_PEER_ID=peer0.org1.example.com
            # the peer's unique id in the fabric network
          - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
            # the peer's address/service port within its org; also the endpoint clients connect to
          - CORE_PEER_CHAINCODEADDRESS=peer0.org1.example.com:7052
            # the peer's chaincode address/service port. If unset, CORE_PEER_CHAINCODELISTENADDRESS
            # below is used; if that is unset too, CORE_PEER_ADDRESS is used
          - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
            # local listen address and port for chaincode; here it listens on all interfaces
          - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.example.com:7051
            # the peer's address/service port advertised to other orgs
          - CORE_PEER_LOCALMSPID=Org1MSP
            # must match the MSP name under which this peer is a member of each of its channels,
            # otherwise the peer's messages will not be recognized by other nodes
          - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
            # CouchDB or LevelDB
          - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984
            # if CouchDB is chosen, its access endpoint; the name is the couchdb service hostname,
            # resolved via a name server or the hosts file
          - CORE_PEER_NETWORKID=fabric
            # logical separation of networks
          - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock

          - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=e2e_default
            # docker network to use; when starting via docker-compose it must match the default
            # name or a pre-created network. When starting another way the options are
            # bridge (default), host, ipvlan, and none, i.e. the container's network mode
          - CORE_LOGGING_LEVEL=DEBUG
            # log level
          - CORE_PEER_GOSSIP_USELEADERELECTION=true
            # at least one peer per org set to true. Uses a dynamic election to choose the leader,
            # which connects to the ordering service and pulls ledger blocks.
            # This option and the next must not both be true, or the resulting state is undefined.
          - CORE_PEER_GOSSIP_ORGLEADER=false
            # statically designate the leader
          - CORE_PEER_PROFILE_ENABLED=true
            # must be disabled (false) in production; used by Go's profiling tools
          - CORE_PEER_TLS_ENABLED=true
            # require server-side TLS
          - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
            # x.509 certificate used by the TLS server
          - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
            # private key used by the TLS server (if clientAuthEnabled is true, a TLS client key must be provided as well)
          - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
            # root certificate: the chain of trust for CORE_PEER_TLS_CERT_FILE
        volumes:
            - /var/run/:/host/var/run/
            - ../crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp
            - ../crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls:/etc/hyperledger/fabric/tls
        ports:
          - 7051:7051
          - 7052:7052
          - 7053:7053
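
    A quick check that a peer came up with this configuration (assumes a cli service whose CORE_PEER_* environment points at peer0.org1 with an admin identity):

    docker logs peer0.org1.example.com 2>&1 | grep -i "started peer"
    docker exec cli peer channel list    # channels this peer has joined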
    --------------------------------------
    Multi-host cluster setup procedure
    Physical host: 256 GB RAM, 10.42.120.237 root/Zchain1234$, virtualized with kvm + qemu + libvirt
    VMs: 2 vCPU / 8 GB RAM / 20 GB disk, Ubuntu 16.04, account zchain/zchain
    zk0, zk1, zk2, kafka0, kafka1, kafka2, kafka3, orderer, peer0.org1, peer1.org1, peer0.org2, peer1.org2
    VM addresses are assigned via DHCP (dnsmasq), pinned to MAC addresses so they remain stable
    192.168.122.73     kafka0
    192.168.122.162    kafka1
    192.168.122.154    kafka2
    192.168.122.42     kafka3
    192.168.122.6      orderer.example.com
    192.168.122.106    peer0.org1.example.com
    192.168.122.48     peer0.org2.example.com
    192.168.122.59     peer1.org1.example.com
    192.168.122.129    peer1.org2.example.com
    192.168.122.181    zookeeper0
    192.168.122.100    zookeeper1
    192.168.122.217    zookeeper2
    For each VM, configure networking and the docker environment; one can prepare a baseline VM and clone it into the individual service VMs, each pulling the docker images it needs.
    Below is the Ubuntu 16.04 configuration.
    dns
    ----
    cat /etc/resolv.conf
    nameserver 10.30.1.10

    proxy
    -----
     ~/.bashrc
    export http_proxy='http://proxyxa.example.com.cn:80'
    export https_proxy='http://proxyxa.example.com.cn:80/'
    export no_proxy="10.0.0.0/8,127.0.0.1,.example.com.cn"

    apt source
    -----------
    sources.list
    see http://mirrors.example.com.cn/help/#ubuntu

    apt proxy
    -----------
    (e.g. in /etc/apt/apt.conf.d/proxy.conf)
    Acquire::http::proxy "http://proxyxa.example.com.cn:80/";
    Acquire::https::proxy "https://proxyxa.example.com.cn:80/";

    openssh-server
    ---------------
    apt install openssh-server

    docker
    -------
    sudo apt-get update
    sudo apt-get install \
        apt-transport-https \
        ca-certificates \
        curl \
        software-properties-common
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo apt-key fingerprint 0EBFCD88
    sudo add-apt-repository \
       "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
       $(lsb_release -cs) \
       stable"
    sudo apt-get update
    apt-cache madison docker-ce
    sudo apt-get install docker-ce=17.03.3~ce-0~ubuntu-xenial

    docker-compose install
    --------------
    https://github.com/docker/compose/releases
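
    The usual install is to fetch the binary from that releases page (1.24.1 below is one version current for Fabric 1.4; pick to taste):

    sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" \
        -o /usr/local/bin/docker-compose
    sudo chmod +x /usr/local/bin/docker-compose
    docker-compose --version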


    docker proxy
    -------------
    sudo mkdir -p /etc/systemd/system/docker.service.d
    sudo touch /etc/systemd/system/docker.service.d/http-proxy.conf
    sudo vi /etc/systemd/system/docker.service.d/http-proxy.conf
    [Service]
    Environment="HTTP_PROXY=http://proxyxa.example.com.cn:80/" "HTTPS_PROXY=https://proxyxa.example.com.cn:80/"
    Environment="NO_PROXY=localhost,127.0.0.0/8,.example.com.cn,10.0.0.0/8"

    docker registry
    ----------------
    sudo vi /etc/docker/daemon.json
    Official registry mirror
    {
      "registry-mirrors": ["https://registry.docker-cn.com"]
    }
    Internal private registry
    {
      "registry-mirrors": ["https://public-docker-virtual.artnj.example.com.cn"],
      "insecure-registries": ["0.0.0.0/0"]
    }
    sudo systemctl daemon-reload
    sudo systemctl restart docker
    systemctl show --property=Environment docker
    -------------------------------------------------------An example of reading block files; relies on the configtxlator tool--------------------------------------------------

    package main
    
    import (
    	"bufio"
    	"bytes"
    	"encoding/base64"
    	"errors"
    	"fmt"
    	"io"
    	"io/ioutil"
    	"os"
    	"os/exec"
    
    	"github.com/golang/protobuf/proto"
    	lutil "github.com/hyperledger/fabric/common/ledger/util"
    	"github.com/hyperledger/fabric/protos/common"
    	putil "github.com/hyperledger/fabric/protos/utils"
    )
    
    var ErrUnexpectedEndOfBlockfile = errors.New("unexpected end of blockfile")
    
    var (
    	file       *os.File
    	fileName   string
    	fileSize   int64
    	fileOffset int64
    	fileReader *bufio.Reader
    )
    
    // Parse a block
    func handleBlock(block *common.Block) {
    	fmt.Printf("Block: Number=[%d], CurrentBlockHash=[%s], PreviousBlockHash=[%s]
    ",
    		block.GetHeader().Number,
    		base64.StdEncoding.EncodeToString(block.GetHeader().DataHash),
    		base64.StdEncoding.EncodeToString(block.GetHeader().PreviousHash))
    
    	if putil.IsConfigBlock(block) {
    		fmt.Printf("    txid=CONFIGBLOCK
    ")
    	} else {
    		for _, txEnvBytes := range block.GetData().GetData() {
    			if txid, err := extractTxID(txEnvBytes); err != nil {
    				fmt.Printf("ERROR: Cannot extract txid, error=[%v]
    ", err)
    				return
    			} else {
    				fmt.Printf("    txid=%s
    ", txid)
    			}
    		}
    	}
    
    	// write block to file
    	b, err := proto.Marshal(block)
    	if err != nil {
    		fmt.Printf("ERROR: Cannot marshal block, error=[%v]
    ", err)
    		return
    	}
    
    	filename := fmt.Sprintf("block%d.block", block.GetHeader().Number)
    	if err := ioutil.WriteFile(filename, b, 0644); err != nil {
    		fmt.Printf("ERROR: Cannot write block to file:[%s], error=[%v]
    ", filename, err)
    	}
    
    	strCommand := fmt.Sprintf("configtxlator proto_decode --input %s --type common.Block", filename)
    	command := execCommand(strCommand)
    
    	jsonFileName := fmt.Sprintf("block%d.json", block.GetHeader().Number)
    	if err = ioutil.WriteFile(jsonFileName, []byte(command), 0644); err != nil {
    		fmt.Printf("ERROR: Cannot write json string to file:[%s], error=[%v]
    ", jsonFileName, err)
    	}
    
    }
    
    func execCommand(strCommand string) string {
    	cmd := exec.Command("/bin/bash", "-c", strCommand)
    	var out bytes.Buffer
    	cmd.Stdout = &out
    	err := cmd.Run()
    	if err != nil {
    		fmt.Println("Execute failed when run cmd:" + err.Error())
    		return ""
    	}
    	return out.String()
    }
    
    func nextBlockBytes() ([]byte, error) {
    	var lenBytes []byte
    	var err error
    
    	// At the end of file
    	if fileOffset == fileSize {
    		return nil, nil
    	}
    
    	remainingBytes := fileSize - fileOffset
    	peekBytes := 8
    	if remainingBytes < int64(peekBytes) {
    		peekBytes = int(remainingBytes)
    	}
    	if lenBytes, err = fileReader.Peek(peekBytes); err != nil {
    		return nil, err
    	}
    
    	length, n := proto.DecodeVarint(lenBytes)
    	if n == 0 {
    		return nil, fmt.Errorf("Error in decoding varint bytes [%#v]", lenBytes)
    	}
    
    	bytesExpected := int64(n) + int64(length)
    	if bytesExpected > remainingBytes {
    		return nil, ErrUnexpectedEndOfBlockfile
    	}
    
    	// skip the bytes representing the block size
    	if _, err = fileReader.Discard(n); err != nil {
    		return nil, err
    	}
    
    	blockBytes := make([]byte, length)
    	if _, err = io.ReadAtLeast(fileReader, blockBytes, int(length)); err != nil {
    		return nil, err
    	}
    
    	fileOffset += int64(n) + int64(length)
    	return blockBytes, nil
    }
    
    func deserializeBlock(serializedBlockBytes []byte) (*common.Block, error) {
    	block := &common.Block{}
    	var err error
    	b := lutil.NewBuffer(serializedBlockBytes)
    	if block.Header, err = extractHeader(b); err != nil {
    		return nil, err
    	}
    	if block.Data, err = extractData(b); err != nil {
    		return nil, err
    	}
    	if block.Metadata, err = extractMetadata(b); err != nil {
    		return nil, err
    	}
    	return block, nil
    }
    
    func extractHeader(buf *lutil.Buffer) (*common.BlockHeader, error) {
    	header := &common.BlockHeader{}
    	var err error
    	if header.Number, err = buf.DecodeVarint(); err != nil {
    		return nil, err
    	}
    	if header.DataHash, err = buf.DecodeRawBytes(false); err != nil {
    		return nil, err
    	}
    	if header.PreviousHash, err = buf.DecodeRawBytes(false); err != nil {
    		return nil, err
    	}
    	if len(header.PreviousHash) == 0 {
    		header.PreviousHash = nil
    	}
    	return header, nil
    }
    
    func extractData(buf *lutil.Buffer) (*common.BlockData, error) {
    	data := &common.BlockData{}
    	var numItems uint64
    	var err error
    
    	if numItems, err = buf.DecodeVarint(); err != nil {
    		return nil, err
    	}
    	for i := uint64(0); i < numItems; i++ {
    		var txEnvBytes []byte
    		if txEnvBytes, err = buf.DecodeRawBytes(false); err != nil {
    			return nil, err
    		}
    		data.Data = append(data.Data, txEnvBytes)
    	}
    	return data, nil
    }
    
    func extractMetadata(buf *lutil.Buffer) (*common.BlockMetadata, error) {
    	metadata := &common.BlockMetadata{}
    	var numItems uint64
    	var metadataEntry []byte
    	var err error
    	if numItems, err = buf.DecodeVarint(); err != nil {
    		return nil, err
    	}
    	for i := uint64(0); i < numItems; i++ {
    		if metadataEntry, err = buf.DecodeRawBytes(false); err != nil {
    			return nil, err
    		}
    		metadata.Metadata = append(metadata.Metadata, metadataEntry)
    	}
    	return metadata, nil
    }
    
    func extractTxID(txEnvelopBytes []byte) (string, error) {
    	txEnvelope, err := putil.GetEnvelopeFromBlock(txEnvelopBytes)
    	if err != nil {
    		return "", err
    	}
    	txPayload, err := putil.GetPayload(txEnvelope)
    	if err != nil {
    		return "", nil
    	}
    	chdr, err := putil.UnmarshalChannelHeader(txPayload.Header.ChannelHeader)
    	if err != nil {
    		return "", err
    	}
    	return chdr.TxId, nil
    }
    
    func input(message string) string {
    	scanner := bufio.NewScanner(os.Stdin)
    	fmt.Print(message)
    	scanner.Scan()
    	if err := scanner.Err(); err != nil {
    		fmt.Fprintln(os.Stderr, "error:", err)
    	}
    	return scanner.Text()
    }
    
    func main() {
    	fileName := input("Please input block file name: ")
    
    	var err error
    	if file, err = os.OpenFile(fileName, os.O_RDONLY, 0600); err != nil {
    		fmt.Printf("ERROR: Cannot Open file: [%s], error=[%v]
    ", fileName, err)
    		return
    	}
    	defer file.Close()
    
    	if fileInfo, err := file.Stat(); err != nil {
    		fmt.Printf("ERROR: Cannot Stat file: [%s], error=[%v]
    ", fileName, err)
    		return
    	} else {
    		fileOffset = 0
    		fileSize = fileInfo.Size()
    		fileReader = bufio.NewReader(file)
    	}
    
    	execCommand("rm -rf ./block*.block ./block*.json")
    
    	// Loop each block
    	for {
    		if blockBytes, err := nextBlockBytes(); err != nil {
    			fmt.Printf("ERROR: Cannot read block file: [%s], error=[%v]
    ", fileName, err)
    			break
    		} else if blockBytes == nil {
    			// End of file
    			break
    		} else {
    			if block, err := deserializeBlock(blockBytes); err != nil {
    				fmt.Printf("ERROR: Cannot deserialize block from file: [%s], error=[%v]
    ", fileName, err)
    				break
    			} else {
    				handleBlock(block)
    			}
    		}
    	}
    
    	fmt.Println("
    Parse block file to json files successfully, please check files named "block*.json" on current directory!")
    }
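
    To build and run the example (a sketch: the source must sit under GOPATH so the github.com/hyperledger/fabric imports resolve, and configtxlator must be on PATH; the block file path shown is the typical v1.x ledger location inside a peer container):

    go build -o blockreader .
    ./blockreader
    # Please input block file name: /var/hyperledger/production/ledgersData/chains/chains/mychannel/blockfile_000000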
    

      

Original post: https://www.cnblogs.com/dablyo/p/10697459.html