  • Analyzing runaway memory growth in a go gRPC client

    Creating a gRPC client connection spawns several goroutines, including:

    1) transport.loopyWriter.run — sends data to the server; it blocks under flow control, so data piles up and memory grows

    2) transport.http2Client.reader — receives data from the server and calls t.controlBuf.throttle() to apply flow control

    Symptoms:

    With a single connection from client to server, memory grows rapidly during the load test until the process is OOM-killed. If the load test is stopped before the OOM, memory gradually drops back. With two connections from client to server, no rapid memory growth was observed under the same load.
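
    The analysis below relies on goroutine stack dumps and pprof heap profiles. One minimal way to expose such profiles in the client while the load test runs is the sketch below; this is an assumption about methodology (the listen address is arbitrary, and the original analysis may have collected the data differently, e.g. via dlv):

    import (
      "log"
      "net/http"
      _ "net/http/pprof" // registers /debug/pprof handlers on the default mux
    )

    func init() {
      go func() {
        // Serve pprof on a side port so heap and goroutine profiles can be
        // pulled while the load test is running, e.g.:
        //   go tool pprof http://localhost:6060/debug/pprof/heap
        //   curl http://localhost:6060/debug/pprof/goroutine?debug=2
        log.Println(http.ListenAndServe("localhost:6060", nil))
      }()
    }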

    Root cause:

    Each gRPC connection has its own queue, shared by all streams on that connection. Issuing a request is the producer side; sending the request to the server is the consumer side. When production outpaces consumption, memory keeps growing. The queue has no length limit, so the growth is unbounded. The rapid growth happens because the transport.loopyWriter.run goroutine does not get to run, so queue consumption stops and the queue only grows. Once the load test stops, transport.loopyWriter.run resumes and drains the queue.

    When consumption stops, a large number of goroutines like the following can be observed:

    grpc/internal/transport.(*Stream).waitOnHeader (0x90c8d5)
    runtime/netpoll.go:220 internal/poll.runtime_pollWait (0x46bdd5)
    

    The netstat command shows a large backlog in the socket send queue (Send-Q).

    Solutions:

    Throttle the producer, i.e., limit the number of requests in flight on a single gRPC client connection. In addition, client-side keepalive can be enabled so that an unresponsive connection gets closed. A sketch of both is shown below.
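
    A minimal sketch of both fixes, assuming the HelloServiceClient/HelloReq/HelloRes types generated later in this post; the in-flight limit, endpoint handling, and keepalive intervals are assumptions, not values from the original test:

    import (
      "context"
      "time"

      "google.golang.org/grpc"
      "google.golang.org/grpc/keepalive"
    )

    const maxInFlight = 512 // assumed cap; tune to what the server can drain

    var inFlight = make(chan struct{}, maxInFlight) // counting semaphore

    // callHello blocks once maxInFlight requests are outstanding, so the
    // connection's send queue can no longer grow without bound.
    func callHello(ctx context.Context, client HelloServiceClient, req *HelloReq) (*HelloRes, error) {
      inFlight <- struct{}{}        // acquire a slot (producer back-pressure)
      defer func() { <-inFlight }() // release it when the call completes
      return client.Hello(ctx, req)
    }

    func dialWithKeepalive(endpoint string) (*grpc.ClientConn, error) {
      return grpc.Dial(endpoint,
        grpc.WithInsecure(), // assumption: the original post does not mention TLS
        grpc.WithKeepaliveParams(keepalive.ClientParameters{
          Time:                30 * time.Second, // ping an idle or stuck connection
          Timeout:             10 * time.Second, // close it if the ping is not acknowledged
          PermitWithoutStream: true,
        }),
      )
    }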

    Closing thoughts

    It would be simpler and friendlier if go gRPC exposed an API for reading the size of the controlBuffer's internal list queue; a hypothetical sketch follows.
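
    A hypothetical sketch of such an accessor (it does not exist in grpc-go; the field names follow the controlBuffer/itemList excerpts later in this post):

    // Hypothetical addition to internal/transport/controlbuf.go:
    // size walks the queue under the mutex and returns the number of queued
    // items, so a client could export it as a metric and alert on backlog.
    func (c *controlBuffer) size() int {
      c.mu.Lock()
      defer c.mu.Unlock()
      n := 0
      for node := c.list.head; node != nil; node = node.next {
        n++
      }
      return n
    }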

    Related source code:

    • http2Client
    // Source file: google.golang.org/grpc/http2_client.go
    // http2Client implements the ClientTransport interface with HTTP2.
    type http2Client struct {
      conn  net.Conn     // underlying communication channel
      loopy *loopyWriter // holds the queue shared by producer and consumer; defined in controlbuf.go

      // controlBuf delivers all the control related tasks (e.g., window
      // updates, reset streams, and various settings) to the controller.
      controlBuf *controlBuffer // defined in controlbuf.go

      maxConcurrentStreams  uint32
      streamQuota           int64
      streamsQuotaAvailable chan struct{}
      waitingStreams        uint32

      initialWindowSize int32
    }
    
    type controlBuffer struct {
      list *itemList // the queue
    }

    type loopyWriter struct {
      // cbuf links back to the controlBuffer:
      // loopyWriter consumes the list queue inside controlBuffer,
      // while http2Client produces into it through controlBuffer.
      cbuf *controlBuffer
    }
    
    • When a gRPC client connection is created, a run goroutine is started; this run goroutine is the queue's consumer
    // Source file: internal/transport/http2_client.go
    // Package: transport
    // Breakpoint: (dlv) b transport.newHTTP2Client
    // Called from: the grpc.addrConn.resetTransport goroutine
    func newHTTP2Client(connectCtx, ctx context.Context, addr resolver.Address, opts ConnectOptions, onPrefaceReceipt func(), onGoAway func(GoAwayReason), onClose func()) (_ *http2Client, err error) {
      // Establish the connection. Note this differs from grpc.Dial:
      // grpc.Dial by itself does not connect; even with the block option it only
      // waits for the connection state to become Ready.
      // transport.dial calls net.Dialer.DialContext,
      // which belongs to Go's standard net package, not to gRPC.
      // net.Dialer.DialContext supports TCP, UDP, Unix sockets, etc.
      conn, err := dial(connectCtx, opts.Dialer, addr.Addr)
      t.controlBuf = newControlBuffer(t.ctxDone) // initializes the send queue

      if t.keepaliveEnabled {
        t.kpDormancyCond = sync.NewCond(&t.mu)
        go t.keepalive() // keepalive goroutine
      }

      // Start the reader goroutine for incoming message. Each transport has
      // a dedicated goroutine which reads HTTP2 frame from network. Then it
      // dispatches the frame to the corresponding stream entity.
      go t.reader()

      // Send connection preface to server.
      n, err := t.conn.Write(clientPreface)

      go func() {
        t.loopy = newLoopyWriter(clientSide, t.framer, t.controlBuf, t.bdpEst)
        err := t.loopy.run()
      }()
    }
    
    0  0x00000000008f305b in google.golang.org/grpc/internal/transport.newHTTP2Client
       at /root/go/pkg/mod/google.golang.org/grpc@v1.33.2/internal/transport/http2_client.go:166
    1  0x00000000009285a8 in google.golang.org/grpc/internal/transport.NewClientTransport
       at /root/go/pkg/mod/google.golang.org/grpc@v1.33.2/internal/transport/transport.go:577
    2  0x00000000009285a8 in google.golang.org/grpc.(*addrConn).createTransport
       at /root/go/pkg/mod/google.golang.org/grpc@v1.33.2/clientconn.go:1297
    3  0x0000000000927e48 in google.golang.org/grpc.(*addrConn).tryAllAddrs
       at /root/go/pkg/mod/google.golang.org/grpc@v1.33.2/clientconn.go:1227
    
       // grpc.addrConn.resetTransport below runs as its own goroutine
    4  0x000000000092737f in google.golang.org/grpc.(*addrConn).resetTransport
       at /root/go/pkg/mod/google.golang.org/grpc@v1.33.2/clientconn.go:1142
    5  0x0000000000471821 in runtime.goexit
       at /usr/local/go/src/runtime/asm_amd64.s:1374
       
    // Source file: grpc/clientconn.go
    // Package: grpc
    // Called from: grpc.addrConn.getReadyTransport
    func (ac *addrConn) connect() error {
      // Start a goroutine connecting to the server asynchronously.
    	go ac.resetTransport()
    }
    
    // A traditional (unary) RPC call starts from grpc.ClientConn.Invoke:
    //    XXX.pb.go // generated by compiling the .proto file
    // -> main.helloServiceClient.Hello
    // -> grpc.ClientConn.Invoke // in call.go; a streaming RPC instead starts from grpc.ClientConn.NewStream
    // -> grpc.invoke // in call.go
    // -> grpc.newClientStream // in stream.go
    // -> grpc.clientStream.newAttemptLocked // in stream.go
    // -> grpc.ClientConn.getTransport // in clientconn.go
    // -> grpc.pickerWrapper.pick // in picker_wrapper.go
    // -> grpc.addrConn.getReadyTransport
    // -> grpc.addrConn.connect // spawns the resetTransport goroutine
    // -> grpc.addrConn.resetTransport // ***runs as a goroutine***
    // -> grpc.addrConn.tryAllAddrs
    // -> grpc.addrConn.createTransport // in clientconn.go
    // -> transport.NewClientTransport // in transport.go
    // -> transport.newHTTP2Client
    // -> transport.dial
    // -> net.Dialer.DialContext // net is Go's standard package, not part of gRPC
    // -> net.sysDialer.dialSerial
    // -> net.sysDialer.dialSingle
    // -> net.sysDialer.dialTCP/dialUDP/dialUnix/dialIP
    // -> net.sysDialer.doDialTCP // taking dialTCP as the example
    // -> net.internetSocket // from here on it is close to the C socket API, wrapped per platform
    // -> net.socket
    // -> net.sysSocket
    //
    // A streaming RPC starts from NewStream:
    // grpc.newClientStream is called by grpc.invoke, and also directly by grpc.ClientConn.NewStream in stream.go
    //    XXX.pb.go // generated by compiling the .proto file
    // -> grpc.ClientConn.NewStream // in stream.go
    // -> grpc.newClientStream // in stream.go
    // -> from here the flow is the same as above
    
    // Source file: grpc/clientconn.go
    // Package: grpc
    // Called from: ultimately grpc.ClientConn.NewStream, concretely grpc.newClientStream.
    // getReadyTransport returns the transport if ac's state is READY.
    // Otherwise it returns nil, false.
    // If ac's state is IDLE, it will trigger ac to connect.
    func (ac *addrConn) getReadyTransport() (transport.ClientTransport, bool) {
      // Trigger idle ac to connect.
    	if idle {
    		ac.connect()
    	}
    }
    
    • Source excerpts for the consumer goroutine run
    // Source file: internal/transport/controlbuf.go
    // Loopy receives frames from the control buffer.
    // Each frame is handled individually; most of the work done by loopy goes
    // into handling data frames. Loopy maintains a queue of active streams, and each
    // stream maintains a queue of data frames; as loopy receives data frames
    // it gets added to the queue of the relevant stream.
    // Loopy goes over this list of active streams by processing one node every iteration,
    // thereby closely resemebling to a round-robin scheduling over all streams. While
    // processing a stream, loopy writes out data bytes from this stream capped by the min
    // of http2MaxFrameLen, connection-level flow control and stream-level flow control.
    type loopyWriter struct {
      // cbuf holds the queue (list *itemList);
      // left unchecked, this is where memory balloons.
      cbuf      *controlBuffer
      sendQuota uint32
      
      // estdStreams is map of all established streams that are not cleaned-up yet.
    	// On client-side, this is all streams whose headers were sent out.
    	// On server-side, this is all streams whose headers were received.
      estdStreams map[uint32]*outStream // Established streams.
    
      // activeStreams is a linked-list of all streams that have data to send and some
    	// stream-level flow control quota.
    	// Each of these streams internally have a list of data items(and perhaps trailers
    	// on the server-side) to be sent out.
      activeStreams *outStreamList
    }
    
    // Source file: internal/transport/controlbuf.go
    func (l *loopyWriter) run() (err error) {
      // get indirectly calls dequeue and dequeueAll
      for {
        it, err := l.cbuf.get(true)
        if err != nil {
    			return err
    		}
    		if err = l.handle(it); err != nil {
    			return err
    		}
    		if _, err = l.processData(); err != nil {
    			return err
    		}
      }
    }
    
    func (c *controlBuffer) get(block bool) (interface{}, error) {
      for {
        c.mu.Lock() // queue operations are protected by the mutex
        ......
        // consume from the queue (dequeue)
        h := c.list.dequeue().(cbItem)
        ......
        if !block {
          c.mu.Unlock()
          return nil, nil
        }
        // block until woken
        c.consumerWaiting = true
        c.mu.Unlock()
        select {
        case <-c.ch: // woken from executeAndPut via: c.ch <- struct{}{}
        case <-c.done:
          c.finish() // drain the queue
          return nil, ErrConnClosing // indicates that the transport is closing
        }
      }
    }
    
    func (c *controlBuffer) finish() {
      ......
      // drain the queue
      for head := c.list.dequeueAll(); head != nil; head = head.next {
      ......
    }
    
    • Additional notes

    Every gRPC call creates a new Stream on the client side;
    this is what allows a single gRPC connection to handle many calls concurrently. Requests are not sent synchronously but asynchronously through a queue.
    Each gRPC client connection has its own queue, and gRPC does not limit its size, so without any external limit memory balloons until an OOM occurs.

    • Producer: the client issuing the calls
    message HelloReq { // request
        string text = 1;
    }
    message HelloRes { // response
        string text = 1;
    }
    service HelloService {
        rpc Hello(HelloReq) returns (HelloRes) {}
    }
    
    grpcClient := grpc.Dial(endpoint, opts)
    helloClient := NewHelloServiceClient(grpcClient)
    // the Hello call is where production starts
    res, err := helloClient.Hello(ctx, &req)
    
    // Implementation of Hello, generated by protoc from the .proto file
    func (c *helloServiceClient) Hello(ctx context.Context, in *HelloReq, opts ...grpc.CallOption) (*HelloRes, error) {
    	out := new(HelloRes)
    	err := c.cc.Invoke(ctx, "/main.HelloService/Hello", in, out, opts...)
    	if err != nil {
    		return nil, err
    	}
    	return out, nil
    }
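
    Under load, the symptom can be reproduced with a deliberately unbounded call pattern (an assumed reproduction, not the original benchmark): issue calls from more goroutines than the connection's loopyWriter can drain.

    // Assumed load pattern: every iteration spawns a goroutine, so the number of
    // outstanding Hello calls (and queued headerFrames) is effectively unbounded.
    func hammer(ctx context.Context, client HelloServiceClient) {
      for {
        go func() {
          // Each call creates a new stream and enqueues a headerFrame into the
          // connection's controlBuffer; nothing here waits for earlier calls.
          _, _ = client.Hello(ctx, &HelloReq{Text: "hello"})
        }()
      }
    }

    This also matches the large number of transport.Stream.waitOnHeader goroutines shown in the stack dumps below.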
    
    // Source file: google.golang.org/grpc/call.go
    func (cc *ClientConn) Invoke(ctx context.Context, method string, args, reply interface{}, opts ...CallOption) error {
    	// allow interceptor to see all applicable call options, which means those
    	// configured as defaults from dial option as well as per-call options
    	opts = combine(cc.dopts.callOptions, opts)
    
    	if cc.dopts.unaryInt != nil {
    		return cc.dopts.unaryInt(ctx, method, args, reply, cc, invoke, opts...)
    	}
      // delegates to the package-private invoke function
    	return invoke(ctx, method, args, reply, cc, opts...)
    }
    
    // Source file: google.golang.org/grpc/call.go
    func invoke(ctx context.Context, method string, req, reply interface{}, cc *ClientConn, opts ...CallOption) error {
      // newClientStream indirectly produces items into the queue
      // pprof shows that withRetry, called from newClientStream, accounts for most of the memory
    	cs, err := newClientStream(ctx, unaryStreamDesc, cc, method, opts...)
    	if err != nil {
    		return err
    	}
    	if err := cs.SendMsg(req); err != nil {
    		return err
    	}
    	return cs.RecvMsg(reply)
    }
    
    // Source file: google.golang.org/grpc/stream.go
    // Breakpoint: (dlv) b clientStream.SendMsg
    func (cs *clientStream) SendMsg(m interface{}) (err error) {
    }
    
    // Source file: google.golang.org/grpc/stream.go
    // pprof shows newClientStream consuming too much memory, which in turn happens in its call to withRetry
    func newClientStream(ctx context.Context, desc *StreamDesc, cc *ClientConn, method string, opts ...CallOption) (_ ClientStream, err error) {
      // The problem is that newStream allocates more and more memory.
      // It is not a strict leak: memory keeps accumulating under load and is slowly released once the load stops.
      // newStream is implemented by calling NewStream:
      // cs.callHdr.PreviousAttempts = cs.numRetries
      // s, err := a.t.NewStream(cs.ctx, cs.callHdr)
      // cs.attempt.s = s
      // Here a is of type csAttempt:
      // implements a single transport stream attempt within a clientStream
      op := func(a *csAttempt) error { return a.newStream() }
      // The memory problem shows up in withRetry, and further down in the indirectly called NewStream
    	if err := cs.withRetry(op, func() { cs.bufferForRetryLocked(0, op) }); err != nil {
    		cs.finish(err)
    		return nil, err
    	}
    }
    
    // Source file: google.golang.org/grpc/stream.go
    func (cs *clientStream) withRetry(op func(a *csAttempt) error, onSuccess func()) error {
      for { // a loop, but the memory growth is not caused by spinning here
        // calls newStream,
        // which indirectly produces an item into the queue.
        err := op(a)
      }
    }
    
    // Source file: google.golang.org/grpc/stream.go
    func (a *csAttempt) newStream() error {
    	cs := a.cs
    	cs.callHdr.PreviousAttempts = cs.numRetries
      // Below, t is of type transport.ClientTransport.
      // Note that transport.ClientTransport is an interface, not a struct,
      // and http2Client is an implementation of the ClientTransport interface.
    	s, err := a.t.NewStream(cs.ctx, cs.callHdr)
    }
    
    // Source file: google.golang.org/grpc/internal/transport/http2_client.go
    // NewStream creates a stream and registers it into the transport as "active"
    // streams.
    func (t *http2Client) NewStream(ctx context.Context, callHdr *CallHdr) (_ *Stream, err error) {
      // Memory grows because the queue ends up holding large numbers of headerFields and s
      headerFields, err := t.createHeaderFields(ctx, callHdr)
      s := t.newStream(ctx, callHdr)
      // hdr bundles headerFields and s together
      hdr := &headerFrame{
        hf:        headerFields,
        initStream: func(id uint32) error {
          t.activeStreams[id] = s
        },
        wq:         s.wq,
      }
      for {
        // executeAndPut enqueues hdr (production).
        // Memory grows because the queue ends up holding a large number of hdr items.
        success, err := t.controlBuf.executeAndPut(func(it interface{}) bool {
        }, hdr)
      }
    }
    
    // Source file: google.golang.org/grpc/internal/transport/controlbuf.go
    func (c *controlBuffer) executeAndPut(f func(it interface{}) bool, it cbItem) (bool, error) {
      // enqueue (produce)
      // When enqueueing outpaces dequeueing (consumption), memory grows.
      c.list.enqueue(it)
      if it.isTransportResponseFrame() { // calls the method defined by the cbItem interface
        // counts the number of queued items that represent the response of an action initiated by the peer
        // Note: transportResponseFrames counts only these response items, not the total queue length.
        c.transportResponseFrames++
      }
    }
    
    // Source file: google.golang.org/grpc/internal/transport/controlbuf.go
    func (c *controlBuffer) put(it cbItem) error {
      // enqueue (produce)
    	_, err := c.executeAndPut(nil, it)
    	return err
    }
    
    type cbItem interface {
    	isTransportResponseFrame() bool
    }
    func (*registerStream) isTransportResponseFrame() bool { return false }
    func (h *headerFrame) isTransportResponseFrame() bool {
    	return h.cleanup != nil && h.cleanup.rst // Results in a RST_STREAM
    }
    func (c *cleanupStream) isTransportResponseFrame() bool { return c.rst } // Results in a RST_STREAM
    func (*dataFrame) isTransportResponseFrame() bool { return false }
    func (*incomingWindowUpdate) isTransportResponseFrame() bool { return false }
    func (*outgoingWindowUpdate) isTransportResponseFrame() bool {
    	return false // window updates are throttled by thresholds
    }
    func (*incomingSettings) isTransportResponseFrame() bool { return true } // Results in a settings ACK
    func (*outgoingSettings) isTransportResponseFrame() bool { return false }
    func (*incomingGoAway) isTransportResponseFrame() bool { return false }
    func (*goAway) isTransportResponseFrame() bool { return false }
    func (*ping) isTransportResponseFrame() bool { return true }
    func (*outFlowControlSizeRequest) isTransportResponseFrame() bool { return false }
    
    • Goroutine transport.loopyWriter.run
    0  0x000000000043bfa5 in runtime.gopark
       at /usr/local/go/src/runtime/proc.go:307
    1  0x000000000044c10f in runtime.selectgo
       at /usr/local/go/src/runtime/select.go:338
       A connection has only one run goroutine
    2  0x00000000008eca2f in google.golang.org/grpc/internal/transport.(*controlBuffer).get
       at /root/go/pkg/mod/google.golang.org/grpc@v1.33.2/internal/transport/controlbuf.go:417
    3  0x00000000008ed76e in google.golang.org/grpc/internal/transport.(*loopyWriter).run
       at /root/go/pkg/mod/google.golang.org/grpc@v1.33.2/internal/transport/controlbuf.go:544
    4  0x000000000090f13b in google.golang.org/grpc/internal/transport.newHTTP2Client.func3
       at /root/go/pkg/mod/google.golang.org/grpc@v1.33.2/internal/transport/http2_client.go:356
    5  0x0000000000471821 in runtime.goexit
       at /usr/local/go/src/runtime/asm_amd64.s:1374
    
    • Goroutine transport.controlBuffer.throttle
    • Goroutine transport.http2Client.reader
    0  0x000000000043bfa5 in runtime.gopark
       at /usr/local/go/src/runtime/proc.go:307
    1  0x000000000044c10f in runtime.selectgo
       at /usr/local/go/src/runtime/select.go:338
    2  0x00000000008ec335 in google.golang.org/grpc/internal/transport.(*controlBuffer).throttle
       at /root/go/pkg/mod/google.golang.org/grpc@v1.33.2/internal/transport/controlbuf.go:319
       
       The reader goroutine is created by transport.newHTTP2Client in http2_client.go
    3  0x00000000008fdadd in google.golang.org/grpc/internal/transport.(*http2Client).reader
       at /root/go/pkg/mod/google.golang.org/grpc@v1.33.2/internal/transport/http2_client.go:1293
    4  0x0000000000471821 in runtime.goexit
       at /usr/local/go/src/runtime/asm_amd64.s:1374
    
    • Goroutine transport.Stream.waitOnHeader
    // A large number of waitOnHeader goroutines: each pending call blocks here waiting for response headers that never arrive while loopyWriter cannot send its request
    func (s *Stream) waitOnHeader() {
    	if s.headerChan == nil {
    		// On the server headerChan is always nil since a stream originates
    		// only after having received headers.
    		return
    	}
    	select {
    	case <-s.ctx.Done():
    		// Close the stream to prevent headers/trailers from changing after
    		// this function returns.
    		s.ct.CloseStream(s, ContextErr(s.ctx.Err()))
    		// headerChan could possibly not be closed yet if closeStream raced
    		// with operateHeaders; wait until it is closed explicitly here.
    		<-s.headerChan
    	case <-s.headerChan:
    	}
    }
    
    // This is in fact the calling goroutine (the RPC caller blocks here)
    0  0x000000000043bfa5 in runtime.gopark
        at /usr/local/go/src/runtime/proc.go:307
     1  0x000000000044c10f in runtime.selectgo
        at /usr/local/go/src/runtime/select.go:338
    
     2  0x000000000090c8d5 in google.golang.org/grpc/internal/transport.(*Stream).waitOnHeader
        at /root/go/pkg/mod/google.golang.org/grpc@v1.33.2/internal/transport/transport.go:321
     3  0x0000000000942805 in google.golang.org/grpc/internal/transport.(*Stream).RecvCompress
        at /root/go/pkg/mod/google.golang.org/grpc@v1.33.2/internal/transport/transport.go:336
     4  0x0000000000942805 in google.golang.org/grpc.(*csAttempt).recvMsg
        at /root/go/pkg/mod/google.golang.org/grpc@v1.33.2/stream.go:894
     5  0x000000000094ad06 in google.golang.org/grpc.(*clientStream).RecvMsg.func1
        at /root/go/pkg/mod/google.golang.org/grpc@v1.33.2/stream.go:759
     6  0x000000000094057c in google.golang.org/grpc.(*clientStream).withRetry
        at /root/go/pkg/mod/google.golang.org/grpc@v1.33.2/stream.go:617
     7  0x0000000000941505 in google.golang.org/grpc.(*clientStream).RecvMsg
        at /root/go/pkg/mod/google.golang.org/grpc@v1.33.2/stream.go:758
     8  0x0000000000921d3b in google.golang.org/grpc.invoke
        at /root/go/pkg/mod/google.golang.org/grpc@v1.33.2/call.go:73
     9  0x0000000000921ad3 in google.golang.org/grpc.(*ClientConn).Invoke
        at /root/go/pkg/mod/google.golang.org/grpc@v1.33.2/call.go:37
    10  0x0000000000b185f4 in /root/hello/grpc/proto.(*HelloClient).Call
        at /root/hello/hello.pb.go:70
    
    • Goroutines in poll_runtime_pollWait
    // poll_runtime_pollWait, which is internal/poll.runtime_pollWait,
    // waits for a descriptor to be ready for reading or writing,
    // according to mode, which is 'r' or 'w'.
    // This returns an error code; the codes are defined above.
    //go:linkname poll_runtime_pollWait internal/poll.runtime_pollWait
    func poll_runtime_pollWait(pd *pollDesc, mode int) int {
            errcode := netpollcheckerr(pd, int32(mode))
            if errcode != pollNoError {
                    return errcode
            }
            // As for now only Solaris, illumos, and AIX use level-triggered IO.
            if GOOS == "solaris" || GOOS == "illumos" || GOOS == "aix" {
                    netpollarm(pd, mode)
            }
            for !netpollblock(pd, int32(mode), false) {
                    errcode = netpollcheckerr(pd, int32(mode))
                    if errcode != pollNoError {
                            return errcode
                    }
                    // Can happen if timeout has fired and unblocked us,
                    // but before we had a chance to run, timeout has been reset.
                    // Pretend it has not happened and retry.
            }
            return pollNoError
    }
    
    The function runtime.gopark parks the current goroutine (it is how goroutine switching happens)
    0  0x000000000043bfa5 in runtime.gopark
       at /usr/local/go/src/runtime/proc.go:307
    1  0x000000000043447b in runtime.netpollblock
       at /usr/local/go/src/runtime/netpoll.go:436
    
    2  0x000000000046bdd5 in internal/poll.runtime_pollWait
       at /usr/local/go/src/runtime/netpoll.go:220
    3  0x00000000004d9685 in internal/poll.(*pollDesc).wait
       at /usr/local/go/src/internal/poll/fd_poll_runtime.go:87
    4  0x00000000004da6c5 in internal/poll.(*pollDesc).waitRead
       at /usr/local/go/src/internal/poll/fd_poll_runtime.go:92
    5  0x00000000004da6c5 in internal/poll.(*FD).Read
       at /usr/local/go/src/internal/poll/fd_unix.go:159
    6  0x00000000005327af in net.(*netFD).Read
       at /usr/local/go/src/net/fd_posix.go:55
    7  0x000000000054688e in net.(*conn).Read
       at /usr/local/go/src/net/net.go:182
    8  0x00000000006e14b8 in net/http.(*connReader).backgroundRead
       at /usr/local/go/src/net/http/server.go:690
    9  0x0000000000471821 in runtime.goexit
       at /usr/local/go/src/runtime/asm_amd64.s:1374
    
    • The queue
    type itemList struct {
    	head *itemNode
    	tail *itemNode
    }
    
    type itemNode struct {
    	it   interface{}
    	next *itemNode
    }
    
    // enqueue (produce)
    //
    // As can be seen here, the queue has no size limit and enqueueing
    // (production) is unbounded, so once production outpaces consumption,
    // items pile up and memory grows (see the size-capped sketch after this listing).
    func (il *itemList) enqueue(i interface{}) {
    	n := &itemNode{it: i}
    	if il.tail == nil {
    		il.head, il.tail = n, n
    		return
    	}
    	il.tail.next = n
    	il.tail = n
    }
    
    // dequeue (consume)
    func (il *itemList) dequeue() interface{} {
    	if il.head == nil {
    		return nil
    	}
    	i := il.head.it
    	il.head = il.head.next
    	if il.head == nil {
    		il.tail = nil
    	}
    	return i
    }
    
    // drain (consume); the nodes are simply handed back and discarded
    func (il *itemList) dequeueAll() *itemNode {
    	h := il.head
    	il.head, il.tail = nil, nil
    	return h
    }
    
    func (il *itemList) isEmpty() bool {
    	return il.head == nil
    }
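
    For comparison, a size-capped variant of the enqueue above might look like the following. This is hypothetical: grpc-go has no such bound, and the limit value and names are assumptions.

    import "errors"

    // Hypothetical bounded wrapper around itemList: once limit items are queued,
    // enqueue reports failure so the caller has to apply back-pressure instead of
    // letting the list grow until OOM. (A matching dequeue would decrement size.)
    type boundedItemList struct {
      itemList
      size  int
      limit int
    }

    var errQueueFull = errors.New("control buffer queue is full")

    func (bl *boundedItemList) enqueue(i interface{}) error {
      if bl.size >= bl.limit {
        return errQueueFull // caller must slow down or drop the item
      }
      bl.itemList.enqueue(i)
      bl.size++
      return nil
    }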
    
    // Source file: internal/transport/controlbuf.go
    //    transport.loopyWriter.run
    // -> transport.loopyWriter.handle
    // -> transport.loopyWriter.headerHandler
    // -> transport.loopyWriter.writeHeader // blocked here
    func (l *loopyWriter) handle(i interface{}) error {
    	switch i := i.(type) {
    	case *incomingWindowUpdate:
    		return l.incomingWindowUpdateHandler(i)
    	case *outgoingWindowUpdate:
    		return l.outgoingWindowUpdateHandler(i)
    	case *incomingSettings:
    		return l.incomingSettingsHandler(i)
    	case *outgoingSettings:
    		return l.outgoingSettingsHandler(i)
    	case *headerFrame:
    		return l.headerHandler(i) // blocked here
      ......
    }
    
    // Source file: internal/transport/controlbuf.go
    func (l *loopyWriter) writeHeader(streamID uint32, endStream bool, hf []hpack.HeaderField, onWrite func()) error {
      // Blocked here:
      // the framer struct is defined in internal/transport/http_util.go,
      // and its fr field is of type http2.Framer, defined in x/net/http2/frame.go.
      // The frames are ultimately written to the underlying net.Conn, so the write
      // blocks once the socket send buffer is full, matching the netstat Send-Q backlog above.
      err = l.framer.fr.WriteHeaders(http2.HeadersFrameParam{
      })
    }
    