  • WebRTC Source Code Analysis: The Video Send Pipeline

    1. Introduction

    This article walks through the basic video send pipeline in WebRTC: how video travels from encoding, through RTP packetization and paced sending, and finally out over ICE.

    2. Analysis

    2.1 Architecture Overview

    This section introduces the overall class hierarchy involved in sending media; structurally it looks as follows (a simplified skeleton is sketched after the list):

    • PeerConnection: represents one end of the peer connection; it owns an array of Transceivers

    • Transceiver: under Unified Plan, one track maps to one mid and one mid maps to one Transceiver; each Transceiver owns an RtpSender and an RtpReceiver for sending and receiving the media of that mid. The Transceiver is a Unified Plan concept, unlike Plan B, where one mid may carry multiple tracks

    • RtpSender: merely an intermediate class; it holds the MediaChannel / DataChannel that represent the media send/receive channels

    • MediaChannel: the media channel. It manages arrays of SendStreams and ReceiveStreams; since under Unified Plan one track maps to one primary ssrc and one primary ssrc maps to one stream, the arrays presumably exist for backward compatibility with Plan B. It also holds the various transports, e.g. SrtpTransport for DTLS/SRTP protection of the data and IceTransport for actually sending data over the network

    • SendStream: the send stream; each primary ssrc gets its own SendStream. It manages the StreamEncoder, which encodes the media data, and the RtpVideoSender, which RTP-packetizes and sends the encoded data.

    • RtpVideoSender: the RTP sender for video frames; it mainly parses the metadata of each encoded frame and forwards it to RTPSenderVideo

    • RTPSenderVideo: wraps the encoded frame payload into RTP packets and hands them to the PacedSender

    • PacedSender: the pacing sender; it sends RTP packets out at a controlled rate
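
    A compressed, purely illustrative C++ skeleton of that containment is below; every type here is a simplified stand-in for the real WebRTC class of the same role, not its actual declaration:

    #include <vector>

    struct StreamEncoder {};   // encodes raw frames (VideoStreamEncoder)
    struct RtpVideoSender {};  // parses encoded frames, drives packetization

    struct SendStream {        // one per primary ssrc
      StreamEncoder encoder;
      RtpVideoSender rtp_video_sender;
    };

    struct MediaChannel {      // owns the streams, sits on the transports
      std::vector<SendStream> send_streams;  // array mostly for Plan B legacy
      // SrtpTransport -> DtlsTransport -> IceTransport live below this layer.
    };

    struct RtpSender {         // thin intermediate layer
      MediaChannel* media_channel = nullptr;
    };

    struct Transceiver {       // one per mid under Unified Plan
      RtpSender sender;        // an RtpReceiver sits alongside it
    };

    struct PeerConnection {
      std::vector<Transceiver> transceivers;
    };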

    2.2 Analysis of the Video Send Process

    The actual encoding and sending of a video frame proceeds through the following steps:

    • The media source delivers raw frames to the StreamEncoder for encoding
    • The StreamEncoder hands each encoded frame to the RtpVideoSender, which parses the frame's metadata and updates the encoding and send bitrate control
    • The RtpVideoSender forwards the encoded frame to RTPSenderVideo for RTP packetization
    • RTPSenderVideo delivers the packetized RTP packets to the PacedSender's queue, which sends them at the configured pace
    • The PacedSender pushes packets down to the MediaChannel, which passes them along the transport chain SrtpTransport, RtpTransport, DtlsTransport, IceTransport, and finally out to the network

    2.2.1 StreamEncoder: Encoding the Frame

    This section covers the encoding stage of that pipeline:

    class VideoSendStreamImpl holds two important members, video_stream_encoder_ and rtp_video_sender_:

    class VideoSendStreamImpl {
        ...
        VideoStreamEncoderInterface* const video_stream_encoder_;
        RtpVideoSenderInterface* const rtp_video_sender_;
    };
    

    video_stream_encoder_ is the object responsible for video encoding: it pulls video frames from the media source and encodes them. Its class is VideoStreamEncoder:

    // VideoStreamEncoder represent a video encoder that accepts raw video frames as
    // input and produces an encoded bit stream.
    // Usage:
    //  Instantiate.
    //  Call SetSink.
    //  Call SetSource.
    //  Call ConfigureEncoder with the codec settings.
    //  Call Stop() when done.
    class VideoStreamEncoder : public VideoStreamEncoderInterface,
                               private EncodedImageCallback,
                               public VideoSourceRestrictionsListener {
     public:
      // TODO(bugs.webrtc.org/12000): Reporting of VideoBitrateAllocation is being
      // deprecated. Instead VideoLayersAllocation should be reported.
      enum class BitrateAllocationCallbackType {
        kVideoBitrateAllocation,
        kVideoBitrateAllocationWhenScreenSharing,
        kVideoLayersAllocation
      };
      VideoStreamEncoder(Clock* clock,
                         uint32_t number_of_cores,
                         VideoStreamEncoderObserver* encoder_stats_observer,
                         const VideoStreamEncoderSettings& settings,
                         std::unique_ptr<OveruseFrameDetector> overuse_detector,
                         TaskQueueFactory* task_queue_factory,
                         BitrateAllocationCallbackType allocation_cb_type);
      ~VideoStreamEncoder() override;
    
      void AddAdaptationResource(rtc::scoped_refptr<Resource> resource) override;
      std::vector<rtc::scoped_refptr<Resource>> GetAdaptationResources() override;
    
      void SetSource(rtc::VideoSourceInterface<VideoFrame>* source,
                     const DegradationPreference& degradation_preference) override;
    
      void SetSink(EncoderSink* sink, bool rotation_applied) override;
    
      // TODO(perkj): Can we remove VideoCodec.startBitrate ?
      void SetStartBitrate(int start_bitrate_bps) override;
    
      void SetFecControllerOverride(
          FecControllerOverride* fec_controller_override) override;
    
      void ConfigureEncoder(VideoEncoderConfig config,
                            size_t max_data_payload_length) override;
    
      // Permanently stop encoding. After this method has returned, it is
      // guaranteed that no encoded frames will be delivered to the sink.
      void Stop() override;
    
      void SendKeyFrame() override;
    
      void OnLossNotification(
          const VideoEncoder::LossNotification& loss_notification) override;
    
      void OnBitrateUpdated(DataRate target_bitrate,
                            DataRate stable_target_bitrate,
                            DataRate target_headroom,
                            uint8_t fraction_lost,
                            int64_t round_trip_time_ms,
                            double cwnd_reduce_ratio) override;
    
      DataRate UpdateTargetBitrate(DataRate target_bitrate,
                                   double cwnd_reduce_ratio);
    
     protected:
      .....
     
    };
    

    The class is long, but the comment at its head already spells out how it is meant to be used.

    2.2.1.1 How VideoStreamEncoder Works

    Instantiate the object
    -> Call SetSink(): attach the external sink that will receive the encoded output
    -> Call SetSource(): attach the media source
    -> Call ConfigureEncoder(): configure the encoder with the codec settings
    -> Call Stop(): stop the encoder when done
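
    Expressed as code, that lifecycle looks roughly like the sketch below. The constructor arguments follow the header quoted above, but the surrounding objects (clock, stats_observer, settings, overuse_detector, task_queue_factory, encoder_config, max_payload_length, send_stream_impl, track_source) are assumed to already exist; in the real code VideoSendStream performs this wiring internally, not application code:

    // Sketch only: WebRTC performs these steps inside VideoSendStream.
    auto encoder = std::make_unique<VideoStreamEncoder>(
        clock, /*number_of_cores=*/4, stats_observer, settings,
        std::move(overuse_detector), task_queue_factory,
        VideoStreamEncoder::BitrateAllocationCallbackType::
            kVideoLayersAllocation);

    encoder->SetSink(send_stream_impl, /*rotation_applied=*/false);
    encoder->SetSource(track_source,  // raw frames will arrive via OnFrame()
                       webrtc::DegradationPreference::MAINTAIN_FRAMERATE);
    encoder->ConfigureEncoder(std::move(encoder_config), max_payload_length);
    // ... frames now flow: source -> OnFrame() -> Encode() -> sink->OnEncodedImage()
    encoder->Stop();  // afterwards no more frames reach the sink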

    2.2.1.2 SetSink() and EncoderSink

    WebRTC provides the EncoderSink callback interface for other classes to implement; it contains the method we care about most, OnEncodedImage(), through which encoded frames are handed out:

    class RTC_EXPORT EncodedImageCallback {
     public:
      virtual ~EncodedImageCallback() {}
    
      // Callback function which is called when an image has been encoded.
      virtual Result OnEncodedImage(
          const EncodedImage& encoded_image,
          const CodecSpecificInfo* codec_specific_info) = 0;
    
      virtual void OnDroppedFrame(DropReason reason) {}
    };
    
    
    class EncoderSink : public EncodedImageCallback {
       public:
        virtual void OnEncoderConfigurationChanged(
            std::vector<VideoStream> streams,
            bool is_svc,
            VideoEncoderConfig::ContentType content_type,
            int min_transmit_bitrate_bps) = 0;
    
        virtual void OnBitrateAllocationUpdated(
            const VideoBitrateAllocation& allocation) = 0;
    
        virtual void OnVideoLayersAllocationUpdated(
            VideoLayersAllocation allocation) = 0;
      };
    

    The sink passed to VideoStreamEncoder::SetSink() is the VideoSendStreamImpl, i.e. the owner of the encoder object. You can see that it inherits VideoStreamEncoderInterface::EncoderSink and overrides the relevant methods; OnEncodedImage() is the one we particularly care about, as the encoder invokes it once a frame has been encoded. (A minimal custom sink is sketched after the excerpt below.)

    class VideoSendStreamImpl : public webrtc::BitrateAllocatorObserver,
                                public VideoStreamEncoderInterface::EncoderSink {
    ...
      void OnBitrateAllocationUpdated(
          const VideoBitrateAllocation& allocation) override;
      void OnVideoLayersAllocationUpdated(
          VideoLayersAllocation allocation) override;
    
      // Implements EncodedImageCallback. The implementation routes encoded frames
      // to the |payload_router_| and |config.pre_encode_callback| if set.
      // Called on an arbitrary encoder callback thread.
      EncodedImageCallback::Result OnEncodedImage(
          const EncodedImage& encoded_image,
          const CodecSpecificInfo* codec_specific_info) override;
    
      // Implements EncodedImageCallback.
      void OnDroppedFrame(EncodedImageCallback::DropReason reason) override;
    };
    
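    For illustration, a minimal do-nothing sink implementing the same interface could look like the sketch below (LoggingEncoderSink is a made-up name; in the real pipeline VideoSendStreamImpl plays this role). Note that OnEncodedImage() runs on whatever thread the encoder uses, so it has to stay cheap and thread-safe:

    class LoggingEncoderSink : public VideoStreamEncoderInterface::EncoderSink {
     public:
      EncodedImageCallback::Result OnEncodedImage(
          const EncodedImage& encoded_image,
          const CodecSpecificInfo* codec_specific_info) override {
        RTC_LOG(LS_INFO) << "encoded frame, bytes=" << encoded_image.size();
        return Result(Result::OK, encoded_image.Timestamp());
      }
      void OnDroppedFrame(EncodedImageCallback::DropReason reason) override {}
      void OnEncoderConfigurationChanged(
          std::vector<VideoStream> streams,
          bool is_svc,
          VideoEncoderConfig::ContentType content_type,
          int min_transmit_bitrate_bps) override {}
      void OnBitrateAllocationUpdated(
          const VideoBitrateAllocation& allocation) override {}
      void OnVideoLayersAllocationUpdated(
          VideoLayersAllocation allocation) override {}
    };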

    2.2.1.3 SetSource()

    WebRTC manages media with the source/sink pattern. The source is the media origin, added when we call addTrack() after creating the PeerConnection; a sink is a media subscription slot. A class that wants media implements the sink interface, and the source's implementation of class VideoSourceInterface lets sinks be added to and removed from its subscriber list:

    template <typename VideoFrameT>
    class VideoSourceInterface {
     public:
      virtual ~VideoSourceInterface() = default;
    
      virtual void AddOrUpdateSink(VideoSinkInterface<VideoFrameT>* sink,
                                   const VideoSinkWants& wants) = 0;
      // RemoveSink must guarantee that at the time the method returns,
      // there is no current and no future calls to VideoSinkInterface::OnFrame.
      virtual void RemoveSink(VideoSinkInterface<VideoFrameT>* sink) = 0;
    };
    

    VideoStreamEncoder here directly inherits VideoStreamEncoderInterface and thereby indirectly inherits the sink interface:

    class VideoStreamEncoderInterface : public rtc::VideoSinkInterface<VideoFrame> {
    	....
    };
    

    In class VideoStreamEncoder you can see it overriding the corresponding functions:

    class VideoStreamEncoder {
    ...
      void OnFrame(const VideoFrame& video_frame) override;
      void OnDiscardedFrame() override;
    };
    

    VideoStreamEncoder::SetSource() takes a small detour in its implementation: it delegates the SetSource() to a proxy object, the VideoSourceSinkController video_source_sink_controller_:

    void VideoStreamEncoder::SetSource(
        rtc::VideoSourceInterface<VideoFrame>* source,
        const DegradationPreference& degradation_preference) {
      RTC_DCHECK_RUN_ON(main_queue_);
      video_source_sink_controller_.SetSource(source);
      input_state_provider_.OnHasInputChanged(source);
    
      // This may trigger reconfiguring the QualityScaler on the encoder queue.
      encoder_queue_.PostTask([this, degradation_preference] {
        RTC_DCHECK_RUN_ON(&encoder_queue_);
        degradation_preference_manager_->SetDegradationPreference(
            degradation_preference);
        stream_resource_manager_.SetDegradationPreferences(degradation_preference);
        if (encoder_) {
          stream_resource_manager_.ConfigureQualityScaler(
              encoder_->GetEncoderInfo());
        }
      });
    }
    

    Inside the proxy you can see the sink being subscribed to the source; sink_ is the VideoStreamEncoder:

    void VideoSourceSinkController::SetSource(
        rtc::VideoSourceInterface<VideoFrame>* source) {
      RTC_DCHECK_RUN_ON(&sequence_checker_);
    
      rtc::VideoSourceInterface<VideoFrame>* old_source = source_;
      source_ = source;
    
      if (old_source != source && old_source)
        old_source->RemoveSink(sink_);
    
      if (!source)
        return;
     
      // Subscribe the sink to the source
      source->AddOrUpdateSink(sink_, CurrentSettingsToSinkWants());
    }
    
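    The same subscription mechanism can be exercised with a trivial sink of our own; a sketch (FrameCountingSink is a made-up name, and source stands for any rtc::VideoSourceInterface<VideoFrame>*):

    class FrameCountingSink : public rtc::VideoSinkInterface<VideoFrame> {
     public:
      void OnFrame(const VideoFrame& frame) override {
        ++frames_;  // invoked on the source's delivery thread
      }
      void OnDiscardedFrame() override { ++discarded_; }

     private:
      int frames_ = 0;
      int discarded_ = 0;
    };

    // Subscribe, receive OnFrame() callbacks, then unsubscribe:
    FrameCountingSink counting_sink;
    source->AddOrUpdateSink(&counting_sink, rtc::VideoSinkWants());
    // ... the source now pushes frames into counting_sink.OnFrame() ...
    source->RemoveSink(&counting_sink);  // no OnFrame() after this returns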

    2.2.1.4 Encoding the Frame

    Whenever the source produces media, VideoStreamEncoder::OnFrame() is invoked:

    void VideoStreamEncoder::OnFrame(const VideoFrame& video_frame) {
      RTC_DCHECK_RUNS_SERIALIZED(&incoming_frame_race_checker_);
      VideoFrame incoming_frame = video_frame;
    
      // Local time in webrtc time base.
      Timestamp now = clock_->CurrentTime();
    
      // In some cases, e.g., when the frame from decoder is fed to encoder,
      // the timestamp may be set to the future. As the encoding pipeline assumes
      // capture time to be less than present time, we should reset the capture
      // timestamps here. Otherwise there may be issues with RTP send stream.
      if (incoming_frame.timestamp_us() > now.us())
        incoming_frame.set_timestamp_us(now.us());
    
      // Capture time may come from clock with an offset and drift from clock_.
      int64_t capture_ntp_time_ms;
      if (video_frame.ntp_time_ms() > 0) {
        capture_ntp_time_ms = video_frame.ntp_time_ms();
      } else if (video_frame.render_time_ms() != 0) {
        capture_ntp_time_ms = video_frame.render_time_ms() + delta_ntp_internal_ms_;
      } else {
        capture_ntp_time_ms = now.ms() + delta_ntp_internal_ms_;
      }
      incoming_frame.set_ntp_time_ms(capture_ntp_time_ms);
    
      // Convert NTP time, in ms, to RTP timestamp.
      const int kMsToRtpTimestamp = 90;
      incoming_frame.set_timestamp(
          kMsToRtpTimestamp * static_cast<uint32_t>(incoming_frame.ntp_time_ms()));
    
      if (incoming_frame.ntp_time_ms() <= last_captured_timestamp_) {
        // We don't allow the same capture time for two frames, drop this one.
        RTC_LOG(LS_WARNING) << "Same/old NTP timestamp ("
                            << incoming_frame.ntp_time_ms()
                            << " <= " << last_captured_timestamp_
                            << ") for incoming frame. Dropping.";
        encoder_queue_.PostTask([this, incoming_frame]() {
          RTC_DCHECK_RUN_ON(&encoder_queue_);
          accumulated_update_rect_.Union(incoming_frame.update_rect());
          accumulated_update_rect_is_valid_ &= incoming_frame.has_update_rect();
        });
        return;
      }
    
      bool log_stats = false;
      if (now.ms() - last_frame_log_ms_ > kFrameLogIntervalMs) {
        last_frame_log_ms_ = now.ms();
        log_stats = true;
      }
    
      last_captured_timestamp_ = incoming_frame.ntp_time_ms();
    
      int64_t post_time_us = clock_->CurrentTime().us();
      ++posted_frames_waiting_for_encode_;
    
      encoder_queue_.PostTask(
          [this, incoming_frame, post_time_us, log_stats]() {
            RTC_DCHECK_RUN_ON(&encoder_queue_);
            encoder_stats_observer_->OnIncomingFrame(incoming_frame.width(),
                                                     incoming_frame.height());
            ++captured_frame_count_;
            const int posted_frames_waiting_for_encode =
                posted_frames_waiting_for_encode_.fetch_sub(1);
            RTC_DCHECK_GT(posted_frames_waiting_for_encode, 0);
            CheckForAnimatedContent(incoming_frame, post_time_us);
            bool cwnd_frame_drop =
                cwnd_frame_drop_interval_ &&
                (cwnd_frame_counter_++ % cwnd_frame_drop_interval_.value() == 0);
            if (posted_frames_waiting_for_encode == 1 && !cwnd_frame_drop) {
              MaybeEncodeVideoFrame(incoming_frame, post_time_us);
            } else {
              if (cwnd_frame_drop) {
                // Frame drop by congestion window pusback. Do not encode this
                // frame.
                ++dropped_frame_cwnd_pushback_count_;
                encoder_stats_observer_->OnFrameDropped(
                    VideoStreamEncoderObserver::DropReason::kCongestionWindow);
              } else {
                // There is a newer frame in flight. Do not encode this frame.
                RTC_LOG(LS_VERBOSE)
                    << "Incoming frame dropped due to that the encoder is blocked.";
                ++dropped_frame_encoder_block_count_;
                encoder_stats_observer_->OnFrameDropped(
                    VideoStreamEncoderObserver::DropReason::kEncoderQueue);
              }
              accumulated_update_rect_.Union(incoming_frame.update_rect());
              accumulated_update_rect_is_valid_ &= incoming_frame.has_update_rect();
            }
            if (log_stats) {
              RTC_LOG(LS_INFO) << "Number of frames: captured "
                               << captured_frame_count_
                               << ", dropped (due to congestion window pushback) "
                               << dropped_frame_cwnd_pushback_count_
                               << ", dropped (due to encoder blocked) "
                               << dropped_frame_encoder_block_count_
                               << ", interval_ms " << kFrameLogIntervalMs;
              captured_frame_count_ = 0;
              dropped_frame_cwnd_pushback_count_ = 0;
              dropped_frame_encoder_block_count_ = 0;
            }
          });
    }
    

    OnFrame() mainly:

    • records the frame's NTP time and converts it to the 90 kHz RTP timestamp (worked example below)
    • posts a task to the encoder queue that calls MaybeEncodeVideoFrame() to process the frame
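
    Video always runs on a 90 kHz RTP clock, so the conversion near the top of OnFrame() is a simple multiply-and-truncate: kMsToRtpTimestamp is 90 ticks per millisecond. A standalone reproduction with a made-up input value:

    #include <cstdint>
    #include <iostream>

    int main() {
      const int kMsToRtpTimestamp = 90;  // 90 kHz clock => 90 ticks per ms
      int64_t capture_ntp_time_ms = 3900000123;  // made-up NTP-based ms value
      // Deliberate truncation to uint32_t: RTP timestamps are 32 bits and
      // wrap; only the difference between two timestamps is meaningful.
      uint32_t rtp_timestamp =
          kMsToRtpTimestamp * static_cast<uint32_t>(capture_ntp_time_ms);
      std::cout << rtp_timestamp << "\n";  // a frame 33 ms later is +2970 ticks
    }

    MaybeEncodeVideoFrame(), which the posted task invokes:
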
    void VideoStreamEncoder::MaybeEncodeVideoFrame(const VideoFrame& video_frame,
                                                   int64_t time_when_posted_us) {
      RTC_DCHECK_RUN_ON(&encoder_queue_);
      input_state_provider_.OnFrameSizeObserved(video_frame.size());
    
      if (!last_frame_info_ || video_frame.width() != last_frame_info_->width ||
          video_frame.height() != last_frame_info_->height ||
          video_frame.is_texture() != last_frame_info_->is_texture) {
        pending_encoder_reconfiguration_ = true;
        last_frame_info_ = VideoFrameInfo(video_frame.width(), video_frame.height(),
                                          video_frame.is_texture());
        RTC_LOG(LS_INFO) << "Video frame parameters changed: dimensions="
                         << last_frame_info_->width << "x"
                         << last_frame_info_->height
                         << ", texture=" << last_frame_info_->is_texture << ".";
        // Force full frame update, since resolution has changed.
        accumulated_update_rect_ =
            VideoFrame::UpdateRect{0, 0, video_frame.width(), video_frame.height()};
      }
    
      // We have to create then encoder before the frame drop logic,
      // because the latter depends on encoder_->GetScalingSettings.
      // According to the testcase
      // InitialFrameDropOffWhenEncoderDisabledScaling, the return value
      // from GetScalingSettings should enable or disable the frame drop.
    
      // Update input frame rate before we start using it. If we update it after
      // any potential frame drop we are going to artificially increase frame sizes.
      // Poll the rate before updating, otherwise we risk the rate being estimated
      // a little too high at the start of the call when then window is small.
      uint32_t framerate_fps = GetInputFramerateFps();
      input_framerate_.Update(1u, clock_->TimeInMilliseconds());
    
      int64_t now_ms = clock_->TimeInMilliseconds();
      // Reconfigure the encoder if a reconfiguration is pending
      if (pending_encoder_reconfiguration_) {
        ReconfigureEncoder();
        last_parameters_update_ms_.emplace(now_ms);
      } else if (!last_parameters_update_ms_ ||
                 now_ms - *last_parameters_update_ms_ >=
                     kParameterUpdateIntervalMs) {
        if (last_encoder_rate_settings_) {
          // Clone rate settings before update, so that SetEncoderRates() will
          // actually detect the change between the input and
          // |last_encoder_rate_setings_|, triggering the call to SetRate() on the
          // encoder.
          EncoderRateSettings new_rate_settings = *last_encoder_rate_settings_;
          new_rate_settings.rate_control.framerate_fps =
              static_cast<double>(framerate_fps);
          SetEncoderRates(UpdateBitrateAllocation(new_rate_settings));
        }
        last_parameters_update_ms_.emplace(now_ms);
      }
    
      // Because pending frame will be dropped in any case, we need to
      // remember its updated region.
      if (pending_frame_) {
        encoder_stats_observer_->OnFrameDropped(
            VideoStreamEncoderObserver::DropReason::kEncoderQueue);
        accumulated_update_rect_.Union(pending_frame_->update_rect());
        accumulated_update_rect_is_valid_ &= pending_frame_->has_update_rect();
      }
    
      if (DropDueToSize(video_frame.size())) {
        RTC_LOG(LS_INFO) << "Dropping frame. Too large for target bitrate.";
        stream_resource_manager_.OnFrameDroppedDueToSize();
        // Storing references to a native buffer risks blocking frame capture.
        if (video_frame.video_frame_buffer()->type() !=
            VideoFrameBuffer::Type::kNative) {
          pending_frame_ = video_frame;
          pending_frame_post_time_us_ = time_when_posted_us;
        } else {
          // Ensure that any previously stored frame is dropped.
          pending_frame_.reset();
          accumulated_update_rect_.Union(video_frame.update_rect());
          accumulated_update_rect_is_valid_ &= video_frame.has_update_rect();
        }
        return;
      }
      stream_resource_manager_.OnMaybeEncodeFrame();
    
      if (EncoderPaused()) {
        // Storing references to a native buffer risks blocking frame capture.
        if (video_frame.video_frame_buffer()->type() !=
            VideoFrameBuffer::Type::kNative) {
          if (pending_frame_)
            TraceFrameDropStart();
          pending_frame_ = video_frame;
          pending_frame_post_time_us_ = time_when_posted_us;
        } else {
          // Ensure that any previously stored frame is dropped.
          pending_frame_.reset();
          TraceFrameDropStart();
    
          accumulated_update_rect_.Union(video_frame.update_rect());
          accumulated_update_rect_is_valid_ &= video_frame.has_update_rect();
        }
        return;
      }
    
      pending_frame_.reset();
    
      frame_dropper_.Leak(framerate_fps);
      // Frame dropping is enabled iff frame dropping is not force-disabled, and
      // rate controller is not trusted.
      const bool frame_dropping_enabled =
          !force_disable_frame_dropper_ &&
          !encoder_info_.has_trusted_rate_controller;
      frame_dropper_.Enable(frame_dropping_enabled);
      if (frame_dropping_enabled && frame_dropper_.DropFrame()) {
        RTC_LOG(LS_VERBOSE)
            << "Drop Frame: "
               "target bitrate "
            << (last_encoder_rate_settings_
                    ? last_encoder_rate_settings_->encoder_target.bps()
                    : 0)
            << ", input frame rate " << framerate_fps;
        OnDroppedFrame(
            EncodedImageCallback::DropReason::kDroppedByMediaOptimizations);
        accumulated_update_rect_.Union(video_frame.update_rect());
        accumulated_update_rect_is_valid_ &= video_frame.has_update_rect();
        return;
      }
    
      EncodeVideoFrame(video_frame, time_when_posted_us);
    }
    

    MaybeEncodeVideoFrame() mainly:

    • checks whether ConfigureEncoder() was called and, if so, reconfigures the encoder before encoding

    • if frame dropping is enabled, uses the frame size and input frame rate to decide whether to drop this frame; if it is not dropped, EncodeVideoFrame() is called to encode it (a toy model of the dropper follows)
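
    The dropper consulted in that second step, frame_dropper_, is in essence a leaky bucket over the encoder output: frames that overshoot the target bitrate accumulate "debt", and frames get dropped while the debt is too high. The toy below only illustrates that idea; the real webrtc::FrameDropper is considerably more refined, and all names here are mine:

    #include <cstdint>

    // Toy leaky-bucket frame dropper (illustrative only, not WebRTC's).
    class ToyFrameDropper {
     public:
      ToyFrameDropper(int64_t target_bps, double max_debt_ms)
          : target_bps_(target_bps), max_debt_ms_(max_debt_ms) {}

      // Report one encoded frame; returns true if the next frame should drop.
      bool OnEncodedFrame(int64_t frame_bytes, double frame_interval_ms) {
        // Bytes the target rate affords during one frame interval:
        const double budget_bytes =
            target_bps_ / 8.0 * frame_interval_ms / 1000.0;
        debt_bytes_ += frame_bytes - budget_bytes;  // leaks at the target rate
        if (debt_bytes_ < 0) debt_bytes_ = 0;
        // Debt expressed as queueing delay at the target rate:
        const double debt_ms = debt_bytes_ * 8.0 / target_bps_ * 1000.0;
        return debt_ms > max_debt_ms_;
      }

     private:
      const int64_t target_bps_;
      const double max_debt_ms_;
      double debt_bytes_ = 0;
    };

    EncodeVideoFrame() itself: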

    void VideoStreamEncoder::EncodeVideoFrame(const VideoFrame& video_frame,
                                              int64_t time_when_posted_us) {
      RTC_DCHECK_RUN_ON(&encoder_queue_);
    
      // If the encoder fail we can't continue to encode frames. When this happens
      // the WebrtcVideoSender is notified and the whole VideoSendStream is
      // recreated.
      if (encoder_failed_)
        return;
    
      TraceFrameDropEnd();
    
      // Encoder metadata needs to be updated before encode complete callback.
      VideoEncoder::EncoderInfo info = encoder_->GetEncoderInfo();
      if (info.implementation_name != encoder_info_.implementation_name) {
        encoder_stats_observer_->OnEncoderImplementationChanged(
            info.implementation_name);
        if (bitrate_adjuster_) {
          // Encoder implementation changed, reset overshoot detector states.
          bitrate_adjuster_->Reset();
        }
      }
    
      // Check whether the encoder info changed and notify upper layers
      if (encoder_info_ != info) {
        OnEncoderSettingsChanged();
        stream_resource_manager_.ConfigureEncodeUsageResource();
        RTC_LOG(LS_INFO) << "Encoder settings changed from "
                         << encoder_info_.ToString() << " to " << info.ToString();
      }
    
      // Re-seed the bitrate adjuster when the fps allocation changed
      if (bitrate_adjuster_) {
        for (size_t si = 0; si < kMaxSpatialLayers; ++si) {
          if (info.fps_allocation[si] != encoder_info_.fps_allocation[si]) {
            bitrate_adjuster_->OnEncoderInfo(info);
            break;
          }
        }
      }
      encoder_info_ = info;
      last_encode_info_ms_ = clock_->TimeInMilliseconds();
    
    
      // Convert the raw frame to I420 if the encoder can't take it natively
      VideoFrame out_frame(video_frame);
      if (out_frame.video_frame_buffer()->type() ==
              VideoFrameBuffer::Type::kNative &&
          !info.supports_native_handle) {
        // This module only supports software encoding.
        rtc::scoped_refptr<VideoFrameBuffer> buffer =
            out_frame.video_frame_buffer()->GetMappedFrameBuffer(
                info.preferred_pixel_formats);
        bool buffer_was_converted = false;
        if (!buffer) {
          // The preferred formats above failed; force I420 conversion
          buffer = out_frame.video_frame_buffer()->ToI420();
          // TODO(https://crbug.com/webrtc/12021): Once GetI420 is pure virtual,
          // this just true as an I420 buffer would return from
          // GetMappedFrameBuffer.
          buffer_was_converted =
              (out_frame.video_frame_buffer()->GetI420() == nullptr);
        }
        if (!buffer) {
          RTC_LOG(LS_ERROR) << "Frame conversion failed, dropping frame.";
          return;
        }
    
        VideoFrame::UpdateRect update_rect = out_frame.update_rect();
        if (!update_rect.IsEmpty() &&
            out_frame.video_frame_buffer()->GetI420() == nullptr) {
          // UpdatedRect is reset to full update if it's not empty, and buffer was
          // converted, therefore we can't guarantee that pixels outside of
          // UpdateRect didn't change comparing to the previous frame.
          update_rect =
              VideoFrame::UpdateRect{0, 0, out_frame.width(), out_frame.height()};
        }
        out_frame.set_video_frame_buffer(buffer);
        out_frame.set_update_rect(update_rect);
      }
    
      // Crop the frame by the preconfigured crop_width_ and crop_height_
      // Crop frame if needed.
      if ((crop_width_ > 0 || crop_height_ > 0) &&
          out_frame.video_frame_buffer()->type() !=
              VideoFrameBuffer::Type::kNative) {
        // If the frame can't be converted to I420, drop it.
        int cropped_width = video_frame.width() - crop_width_;
        int cropped_height = video_frame.height() - crop_height_;
        rtc::scoped_refptr<VideoFrameBuffer> cropped_buffer;
        // TODO(ilnik): Remove scaling if cropping is too big, as it should never
        // happen after SinkWants signaled correctly from ReconfigureEncoder.
        VideoFrame::UpdateRect update_rect = video_frame.update_rect();
        if (crop_width_ < 4 && crop_height_ < 4) {
          cropped_buffer = video_frame.video_frame_buffer()->CropAndScale(
              crop_width_ / 2, crop_height_ / 2, cropped_width, cropped_height,
              cropped_width, cropped_height);
          update_rect.offset_x -= crop_width_ / 2;
          update_rect.offset_y -= crop_height_ / 2;
          update_rect.Intersect(
              VideoFrame::UpdateRect{0, 0, cropped_width, cropped_height});
    
        } else {
          cropped_buffer = video_frame.video_frame_buffer()->Scale(cropped_width,
                                                                   cropped_height);
          if (!update_rect.IsEmpty()) {
            // Since we can't reason about pixels after scaling, we invalidate whole
            // picture, if anything changed.
            update_rect =
                VideoFrame::UpdateRect{0, 0, cropped_width, cropped_height};
          }
        }
        if (!cropped_buffer) {
          RTC_LOG(LS_ERROR) << "Cropping and scaling frame failed, dropping frame.";
          return;
        }
    
        out_frame.set_video_frame_buffer(cropped_buffer);
        out_frame.set_update_rect(update_rect);
        out_frame.set_ntp_time_ms(video_frame.ntp_time_ms());
        // Since accumulated_update_rect_ is constructed before cropping,
        // we can't trust it. If any changes were pending, we invalidate whole
        // frame here.
        if (!accumulated_update_rect_.IsEmpty()) {
          accumulated_update_rect_ =
              VideoFrame::UpdateRect{0, 0, out_frame.width(), out_frame.height()};
          accumulated_update_rect_is_valid_ = false;
        }
      }
    
      if (!accumulated_update_rect_is_valid_) {
        out_frame.clear_update_rect();
      } else if (!accumulated_update_rect_.IsEmpty() &&
                 out_frame.has_update_rect()) {
        accumulated_update_rect_.Union(out_frame.update_rect());
        accumulated_update_rect_.Intersect(
            VideoFrame::UpdateRect{0, 0, out_frame.width(), out_frame.height()});
        out_frame.set_update_rect(accumulated_update_rect_);
        accumulated_update_rect_.MakeEmptyUpdate();
      }
      accumulated_update_rect_is_valid_ = true;
    
      TRACE_EVENT_ASYNC_STEP0("webrtc", "Video", video_frame.render_time_ms(),
                              "Encode");
    
      stream_resource_manager_.OnEncodeStarted(out_frame, time_when_posted_us);
    
      RTC_DCHECK_LE(send_codec_.width, out_frame.width());
      RTC_DCHECK_LE(send_codec_.height, out_frame.height());
      // Native frames should be scaled by the client.
      // For internal encoders we scale everything in one place here.
      RTC_DCHECK((out_frame.video_frame_buffer()->type() ==
                  VideoFrameBuffer::Type::kNative) ||
                 (send_codec_.width == out_frame.width() &&
                  send_codec_.height == out_frame.height()));
    
      TRACE_EVENT1("webrtc", "VCMGenericEncoder::Encode", "timestamp",
                   out_frame.timestamp());
    
      frame_encode_metadata_writer_.OnEncodeStarted(out_frame);
    
      // Hand the frame to the encoder
      const int32_t encode_status = encoder_->Encode(out_frame, &next_frame_types_);
      was_encode_called_since_last_initialization_ = true;
    
      if (encode_status < 0) {
        if (encode_status == WEBRTC_VIDEO_CODEC_ENCODER_FAILURE) {
          RTC_LOG(LS_ERROR) << "Encoder failed, failing encoder format: "
                            << encoder_config_.video_format.ToString();
    
          if (settings_.encoder_switch_request_callback) {
            if (encoder_selector_) {
              if (auto encoder = encoder_selector_->OnEncoderBroken()) {
                QueueRequestEncoderSwitch(*encoder);
              }
            } else {
              encoder_failed_ = true;
              main_queue_->PostTask(ToQueuedTask(task_safety_, [this]() {
                RTC_DCHECK_RUN_ON(main_queue_);
                settings_.encoder_switch_request_callback->RequestEncoderFallback();
              }));
            }
          } else {
            RTC_LOG(LS_ERROR)
                << "Encoder failed but no encoder fallback callback is registered";
          }
        } else {
          RTC_LOG(LS_ERROR) << "Failed to encode frame. Error code: "
                            << encode_status;
        }
    
        return;
      }
    
      for (auto& it : next_frame_types_) {
        it = VideoFrameType::kVideoFrameDelta;
      }
    }
    

    EncodeVideoFrame() mainly:

    • checks whether the encoder info changed and notifies the observer
    • re-seeds the bitrate adjuster according to the changed fps allocation
    • converts the raw frame to I420 if the encoder cannot consume the native buffer
    • crops the frame by the preconfigured crop_width_ and crop_height_ (worked example below)
    • hands the frame to the encoder
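
    To make the cropping step concrete: crop_width_ and crop_height_ hold the total number of pixels to shave off, and for small crops (less than 4 pixels in each dimension) the code crops symmetrically without scaling. A worked example with assumed numbers:

    // Hypothetical numbers: a 1282x722 source with a 2-pixel crop in each
    // dimension, which takes the crop_width_ < 4 && crop_height_ < 4 branch.
    const int width = 1282, height = 722;
    const int crop_width = 2, crop_height = 2;
    const int cropped_width = width - crop_width;     // 1280
    const int cropped_height = height - crop_height;  // 720
    // CropAndScale(offset_x = crop_width / 2 = 1, offset_y = 1,
    //              crop_w = 1280, crop_h = 720,
    //              scaled_w = 1280, scaled_h = 720)
    // => a pure symmetric crop, no scaling; update_rect shifts by (1, 1).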

    For reasons of space we will not dig further into the encoding itself here, and instead skip straight to frame output.

    2.2.1.5 Frame Output

    Because ReconfigureEncoder() registers the current object with the encoder:

    encoder_->RegisterEncodeCompleteCallback(this);
    

    the callback VideoStreamEncoder::OnEncodedImage() is invoked once encoding completes:

    EncodedImageCallback::Result VideoStreamEncoder::OnEncodedImage(
        const EncodedImage& encoded_image,
        const CodecSpecificInfo* codec_specific_info) {
      TRACE_EVENT_INSTANT1("webrtc", "VCMEncodedFrameCallback::Encoded",
                           "timestamp", encoded_image.Timestamp());
      
      // Parse the image info: experiment id, simulcast id
      const size_t spatial_idx = encoded_image.SpatialIndex().value_or(0);
      EncodedImage image_copy(encoded_image);
    
      frame_encode_metadata_writer_.FillTimingInfo(spatial_idx, &image_copy);
    
      frame_encode_metadata_writer_.UpdateBitstream(codec_specific_info,
                                                    &image_copy);
    
      // Piggyback ALR experiment group id and simulcast id into the content type.
      const uint8_t experiment_id =
          experiment_groups_[videocontenttypehelpers::IsScreenshare(
              image_copy.content_type_)];
    
      // TODO(ilnik): This will force content type extension to be present even
      // for realtime video. At the expense of miniscule overhead we will get
      // sliced receive statistics.
      RTC_CHECK(videocontenttypehelpers::SetExperimentId(&image_copy.content_type_,
                                                         experiment_id));
      // We count simulcast streams from 1 on the wire. That's why we set simulcast
      // id in content type to +1 of that is actual simulcast index. This is because
      // value 0 on the wire is reserved for 'no simulcast stream specified'.
      RTC_CHECK(videocontenttypehelpers::SetSimulcastId(
          &image_copy.content_type_, static_cast<uint8_t>(spatial_idx + 1)));
    
      // Currently internal quality scaler is used for VP9 instead of webrtc qp
      // scaler (in no-svc case or if only a single spatial layer is encoded).
      // It has to be explicitly detected and reported to adaptation metrics.
      // Post a task because |send_codec_| requires |encoder_queue_| lock.
      unsigned int image_width = image_copy._encodedWidth;
      unsigned int image_height = image_copy._encodedHeight;
      VideoCodecType codec = codec_specific_info
                                 ? codec_specific_info->codecType
                                 : VideoCodecType::kVideoCodecGeneric;
      encoder_queue_.PostTask([this, codec, image_width, image_height] {
        RTC_DCHECK_RUN_ON(&encoder_queue_);
        if (codec == VideoCodecType::kVideoCodecVP9 &&
            send_codec_.VP9()->automaticResizeOn) {
          unsigned int expected_width = send_codec_.width;
          unsigned int expected_height = send_codec_.height;
          int num_active_layers = 0;
          for (int i = 0; i < send_codec_.VP9()->numberOfSpatialLayers; ++i) {
            if (send_codec_.spatialLayers[i].active) {
              ++num_active_layers;
              expected_width = send_codec_.spatialLayers[i].width;
              expected_height = send_codec_.spatialLayers[i].height;
            }
          }
          RTC_DCHECK_LE(num_active_layers, 1)
              << "VP9 quality scaling is enabled for "
                 "SVC with several active layers.";
          // Report whether the encoder-side downscale has taken effect
          encoder_stats_observer_->OnEncoderInternalScalerUpdate(
              image_width < expected_width || image_height < expected_height);
        }
      });
    
      // Encoded is called on whatever thread the real encoder implementation run
      // on. In the case of hardware encoders, there might be several encoders
      // running in parallel on different threads.
      encoder_stats_observer_->OnSendEncodedImage(image_copy, codec_specific_info);
    
      // The simulcast id is signaled in the SpatialIndex. This makes it impossible
      // to do simulcast for codecs that actually support spatial layers since we
      // can't distinguish between an actual spatial layer and a simulcast stream.
      // TODO(bugs.webrtc.org/10520): Signal the simulcast id explicitly.
      int simulcast_id = 0;
      if (codec_specific_info &&
          (codec_specific_info->codecType == kVideoCodecVP8 ||
           codec_specific_info->codecType == kVideoCodecH264 ||
           codec_specific_info->codecType == kVideoCodecGeneric)) {
        simulcast_id = encoded_image.SpatialIndex().value_or(0);
      }
    
      // Pass the frame to VideoSendStreamImpl
      EncodedImageCallback::Result result =
          sink_->OnEncodedImage(image_copy, codec_specific_info);
    
      // We are only interested in propagating the meta-data about the image, not
      // encoded data itself, to the post encode function. Since we cannot be sure
      // the pointer will still be valid when run on the task queue, set it to null.
      DataSize frame_size = DataSize::Bytes(image_copy.size());
      image_copy.ClearEncodedData();
    
      int temporal_index = 0;
      if (codec_specific_info) {
        if (codec_specific_info->codecType == kVideoCodecVP9) {
          temporal_index = codec_specific_info->codecSpecific.VP9.temporal_idx;
        } else if (codec_specific_info->codecType == kVideoCodecVP8) {
          temporal_index = codec_specific_info->codecSpecific.VP8.temporalIdx;
        }
      }
      if (temporal_index == kNoTemporalIdx) {
        temporal_index = 0;
      }
    
      // Use this frame to update the bitrate adjuster, source adaptation, etc.
      RunPostEncode(image_copy, clock_->CurrentTime().us(), temporal_index,
                    frame_size);
    
      if (result.error == Result::OK) {
        // In case of an internal encoder running on a separate thread, the
        // decision to drop a frame might be a frame late and signaled via
        // atomic flag. This is because we can't easily wait for the worker thread
        // without risking deadlocks, eg during shutdown when the worker thread
        // might be waiting for the internal encoder threads to stop.
        if (pending_frame_drops_.load() > 0) {
          int pending_drops = pending_frame_drops_.fetch_sub(1);
          RTC_DCHECK_GT(pending_drops, 0);
          result.drop_next_frame = true;
        }
      }
    
      return result;
    }
    

    VideoStreamEncoder::OnEncodedImage() mainly:

    • parses the image info, e.g. the experiment id and simulcast id
    • for VP9's internal qp scaler, explicitly reports whether the encoder-side downscale has taken effect
    • passes the frame to the sink (i.e. VideoSendStreamImpl)
    • calls RunPostEncode() to update the bitrate adjuster, source adaptation, and so on with this frame

    It also points out a bug: because the simulcast id reuses image.SpatialIndex(), an image from a codec that genuinely supports spatial layers can no longer convey its spatial layer information.
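
    Isolated from the function above, that workaround amounts to the following (a sketch; the helper name SimulcastIdFromEncodedImage is mine):

    // For VP8/H264/generic codecs SpatialIndex() actually carries the
    // simulcast stream index; for codecs with real spatial layers (e.g. VP9)
    // it keeps its literal meaning, which is exactly why the two uses collide.
    int SimulcastIdFromEncodedImage(const EncodedImage& image,
                                    const CodecSpecificInfo* info) {
      if (info && (info->codecType == kVideoCodecVP8 ||
                   info->codecType == kVideoCodecH264 ||
                   info->codecType == kVideoCodecGeneric)) {
        return image.SpatialIndex().value_or(0);  // really the simulcast index
      }
      return 0;  // here SpatialIndex() is a true spatial layer index
    }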

    2.2.2 VideoSendStreamImpl: Handling the Frame

    As described in 2.2.1.5, once the encoder emits a frame it hands it to VideoSendStreamImpl, which runs a few bitrate-allocation checks before forwarding the frame onward. This section covers that stage:

    2.2.2.1 Receiving and Forwarding the Frame

    The encoder's frame-output callback invokes VideoSendStreamImpl::OnEncodedImage():

    EncodedImageCallback::Result VideoSendStreamImpl::OnEncodedImage(
        const EncodedImage& encoded_image,
        const CodecSpecificInfo* codec_specific_info) {
      // Encoded is called on whatever thread the real encoder implementation run
      // on. In the case of hardware encoders, there might be several encoders
      // running in parallel on different threads.
    
      // Indicate that there still is activity going on.
      activity_ = true;
    
      // enable padding
      auto enable_padding_task = [this]() {
        if (disable_padding_) {
          RTC_DCHECK_RUN_ON(worker_queue_);
          disable_padding_ = false;
          // To ensure that padding bitrate is propagated to the bitrate allocator.
          SignalEncoderActive();
        }
      };
      if (!worker_queue_->IsCurrent()) {
        worker_queue_->PostTask(enable_padding_task);
      } else {
        enable_padding_task();
      }
    
      // Forward the image to RtpVideoSender
      EncodedImageCallback::Result result(EncodedImageCallback::Result::OK);
      result =
          rtp_video_sender_->OnEncodedImage(encoded_image, codec_specific_info);
      // Check if there's a throttled VideoBitrateAllocation that we should try
      // sending.
      rtc::WeakPtr<VideoSendStreamImpl> send_stream = weak_ptr_;
      auto update_task = [send_stream]() {
        if (send_stream) {
          RTC_DCHECK_RUN_ON(send_stream->worker_queue_);
          auto& context = send_stream->video_bitrate_allocation_context_;
          if (context && context->throttled_allocation) {
            // Notify the observers of the allocation change
            send_stream->OnBitrateAllocationUpdated(*context->throttled_allocation);
          }
        }
      };
      if (!worker_queue_->IsCurrent()) {
        worker_queue_->PostTask(update_task);
      } else {
        update_task();
      }
    
      return result;
    }
    

    VideoSendStreamImpl::OnEncodedImage() mainly:

    • re-enables padding and pokes the bitrate allocator, so that padding bitrate is accounted for in the next allocation
    • forwards the encoded frame and its metadata to RtpVideoSender for processing
    • checks whether a throttled VideoBitrateAllocation is pending and, if so, notifies the layers below

    Regarding the third point, the function that handles allocation updates, OnBitrateAllocationUpdated(), hides a few details:

    void VideoSendStreamImpl::OnBitrateAllocationUpdated(
        const VideoBitrateAllocation& allocation) {
      if (!worker_queue_->IsCurrent()) {
        auto ptr = weak_ptr_;
        worker_queue_->PostTask([=] {
          if (!ptr.get())
            return;
          ptr->OnBitrateAllocationUpdated(allocation);
        });
        return;
      }
    
      RTC_DCHECK_RUN_ON(worker_queue_);
    
      int64_t now_ms = clock_->TimeInMilliseconds();
      if (encoder_target_rate_bps_ != 0) {
        if (video_bitrate_allocation_context_) {
          // If new allocation is within kMaxVbaSizeDifferencePercent larger than
          // the previously sent allocation and the same streams are still enabled,
          // it is considered "similar". We do not want send similar allocations
          // more once per kMaxVbaThrottleTimeMs.
          const VideoBitrateAllocation& last =
              video_bitrate_allocation_context_->last_sent_allocation;
          const bool is_similar =
              allocation.get_sum_bps() >= last.get_sum_bps() &&
              allocation.get_sum_bps() <
                  (last.get_sum_bps() * (100 + kMaxVbaSizeDifferencePercent)) /
                      100 &&
              SameStreamsEnabled(allocation, last);
          if (is_similar &&
              (now_ms - video_bitrate_allocation_context_->last_send_time_ms) <
                  kMaxVbaThrottleTimeMs) {
            // This allocation is too similar, cache it and return.
            video_bitrate_allocation_context_->throttled_allocation = allocation;
            return;
          }
        } else {
          video_bitrate_allocation_context_.emplace();
        }
    
        video_bitrate_allocation_context_->last_sent_allocation = allocation;
        video_bitrate_allocation_context_->throttled_allocation.reset();
        video_bitrate_allocation_context_->last_send_time_ms = now_ms;
    
        // Send bitrate allocation metadata only if encoder is not paused.
        // Notify the observers below
        rtp_video_sender_->OnBitrateAllocationUpdated(allocation);
      }
    }
    
    struct VbaSendContext {
      // The last allocation that was sent
      VideoBitrateAllocation last_sent_allocation;
      // An allocation cached because updates were being throttled
      absl::optional<VideoBitrateAllocation> throttled_allocation;
      // Time of the last send, in ms
      int64_t last_send_time_ms;
    };
    

    The video_bitrate_allocation_context_ above has type struct VbaSendContext, a small context that caches changes to the bitrate allocation. The layers below register as observers of allocation changes, but a tiny change is not worth waking those observers for. Hence throttled_allocation: when the allocation has not changed much within a certain time window, the change is stashed in throttled_allocation, and some time later the cached value is fed back into this same function to decide whether the observers below should now be notified.

    To spell this out: VideoSendStreamImpl::OnBitrateAllocationUpdated() is called from two places. One is the encoder notifying VideoSendStreamImpl when it sets its encoding bitrate; the other is the frame-receiving path, OnEncodedImage(), shown earlier, which merely re-submits the cached value (throttled_allocation).
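
    A standalone sketch of that throttle, with the SameStreamsEnabled() check omitted; the concrete constant values below are assumptions for illustration, check kMaxVbaSizeDifferencePercent and kMaxVbaThrottleTimeMs in the source for the real ones:

    #include <cstdint>
    #include <optional>

    struct Allocation { uint32_t sum_bps = 0; };

    class VbaThrottle {
     public:
      // Returns true if |a| should be forwarded to the observers now;
      // otherwise it is cached in |throttled_| and retried later.
      bool ShouldSend(const Allocation& a, int64_t now_ms) {
        if (last_sent_) {
          const bool similar =
              a.sum_bps >= last_sent_->sum_bps &&
              a.sum_bps < last_sent_->sum_bps *
                              (100 + kMaxVbaSizeDifferencePercent) / 100;
          if (similar && now_ms - last_send_time_ms_ < kMaxVbaThrottleTimeMs) {
            throttled_ = a;  // re-submitted from OnEncodedImage() later
            return false;
          }
        }
        last_sent_ = a;
        throttled_.reset();
        last_send_time_ms_ = now_ms;
        return true;
      }

     private:
      static constexpr int kMaxVbaSizeDifferencePercent = 10;  // assumed value
      static constexpr int64_t kMaxVbaThrottleTimeMs = 500;    // assumed value
      std::optional<Allocation> last_sent_;
      std::optional<Allocation> throttled_;
      int64_t last_send_time_ms_ = 0;
    };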

    2.2.3 RtpVideoSender: Handling the Frame

    RtpVideoSender receives the frame in RtpVideoSender::OnEncodedImage():

    EncodedImageCallback::Result RtpVideoSender::OnEncodedImage(
        const EncodedImage& encoded_image,
        const CodecSpecificInfo* codec_specific_info) {
      // Within the allocated network capacity, fec_controller decides how much
      // goes to media encoding and how much to FEC and NACK; update it with
      // this image's size and frame type.
      fec_controller_->UpdateWithEncodedData(encoded_image.size(),
                                             encoded_image._frameType);
      MutexLock lock(&mutex_);
      RTC_DCHECK(!rtp_streams_.empty());
      if (!active_)
        return Result(Result::ERROR_SEND_FAILED);
    
      shared_frame_id_++;
      size_t stream_index = 0;
      if (codec_specific_info &&
          (codec_specific_info->codecType == kVideoCodecVP8 ||
           codec_specific_info->codecType == kVideoCodecH264 ||
           codec_specific_info->codecType == kVideoCodecGeneric)) {
        // Map spatial index to simulcast.
        // WebRTC uses the spatial index as the simulcast id here
        stream_index = encoded_image.SpatialIndex().value_or(0);
      }
      RTC_DCHECK_LT(stream_index, rtp_streams_.size());
    
      uint32_t rtp_timestamp =
          encoded_image.Timestamp() +
          rtp_streams_[stream_index].rtp_rtcp->StartTimestamp();
    
      // RTCPSender has it's own copy of the timestamp offset, added in
      // RTCPSender::BuildSR, hence we must not add the in the offset for this call.
      // TODO(nisse): Delete RTCPSender:timestamp_offset_, and see if we can confine
      // knowledge of the offset to a single place.
      // Check whether an RTCP report should be sent for this stream
      if (!rtp_streams_[stream_index].rtp_rtcp->OnSendingRtpFrame(
              encoded_image.Timestamp(), encoded_image.capture_time_ms_,
              rtp_config_.payload_type,
              encoded_image._frameType == VideoFrameType::kVideoFrameKey)) {
        // The payload router could be active but this module isn't sending.
        return Result(Result::ERROR_SEND_FAILED);
      }
    
      absl::optional<int64_t> expected_retransmission_time_ms;
      if (encoded_image.RetransmissionAllowed()) {
        expected_retransmission_time_ms =
            rtp_streams_[stream_index].rtp_rtcp->ExpectedRetransmissionTimeMs();
      }
    
      if (IsFirstFrameOfACodedVideoSequence(encoded_image, codec_specific_info)) {
        // If encoder adapter produce FrameDependencyStructure, pass it so that
        // dependency descriptor rtp header extension can be used.
        // If not supported, disable using dependency descriptor by passing nullptr.
        rtp_streams_[stream_index].sender_video->SetVideoStructure(
            (codec_specific_info && codec_specific_info->template_structure)
                ? &*codec_specific_info->template_structure
                : nullptr);
      }
    
      // Send the video frame
      bool send_result = rtp_streams_[stream_index].sender_video->SendEncodedImage(
          rtp_config_.payload_type, codec_type_, rtp_timestamp, encoded_image,
          params_[stream_index].GetRtpVideoHeader(
              encoded_image, codec_specific_info, shared_frame_id_),
          expected_retransmission_time_ms);
      if (frame_count_observer_) {
        FrameCounts& counts = frame_counts_[stream_index];
        if (encoded_image._frameType == VideoFrameType::kVideoFrameKey) {
          ++counts.key_frames;
        } else if (encoded_image._frameType == VideoFrameType::kVideoFrameDelta) {
          ++counts.delta_frames;
        } else {
          RTC_DCHECK(encoded_image._frameType == VideoFrameType::kEmptyFrame);
        }
        frame_count_observer_->FrameCountUpdated(counts,
                                                 rtp_config_.ssrcs[stream_index]);
      }
      if (!send_result)
        return Result(Result::ERROR_SEND_FAILED);
    
      return Result(Result::OK, rtp_timestamp);
    }
    

    RtpVideoSender::OnEncodedImage() mainly:

    • updates fec_controller with the image's size and type, so it can rebalance the bitrate split between media, FEC, and NACK packets
    • checks whether an RTCP report should be sent for this stream
    • extracts the frame-dependency info carried by the encoded frame, for use in later RTP header extensions
    • builds the video_header and forwards the frame and header to RTPSenderVideo

    Note the rtp_streams_ member: it is an array of RtpStreamSender inside RtpVideoSender, with one entry per simulcast stream, indexed by the simulcast index.
    RtpStreamSender has three members: rtp_rtcp (RTP/RTCP packetization, receiving and sending), sender_video (pacer-driven sending), and fec_generator (FEC):

    
    struct RtpStreamSender {
      RtpStreamSender(std::unique_ptr<ModuleRtpRtcpImpl2> rtp_rtcp,
                      std::unique_ptr<RTPSenderVideo> sender_video,
                      std::unique_ptr<VideoFecGenerator> fec_generator);
      ~RtpStreamSender();
    
      RtpStreamSender(RtpStreamSender&&) = default;
      RtpStreamSender& operator=(RtpStreamSender&&) = default;
    
      // Note: Needs pointer stability.
      std::unique_ptr<ModuleRtpRtcpImpl2> rtp_rtcp;
      std::unique_ptr<RTPSenderVideo> sender_video;
      std::unique_ptr<VideoFecGenerator> fec_generator;
    };
    
    
    class RtpVideoSender{
    ...
      const std::vector<webrtc_internal_rtp_video_sender::RtpStreamSender>
          rtp_streams_;
    };
    

    2.2.4 RTPSenderVideo: RTP Packetization

    RTPSenderVideo's job is to build RTP packets from the frame and its video_header and hand them to the pacer for sending; this is the packetization stage of the pipeline.

    When a video frame reaches RTPSenderVideo, it first goes through:

    bool RTPSenderVideo::SendEncodedImage(
        int payload_type,
        absl::optional<VideoCodecType> codec_type,
        uint32_t rtp_timestamp,
        const EncodedImage& encoded_image,
        RTPVideoHeader video_header,
        absl::optional<int64_t> expected_retransmission_time_ms) {
      // If a frame transformer was injected, transform the frame first
      if (frame_transformer_delegate_) {
        // The frame will be sent async once transformed.
        return frame_transformer_delegate_->TransformFrame(
            payload_type, codec_type, rtp_timestamp, encoded_image, video_header,
            expected_retransmission_time_ms);
      }
      // Forward the image
      return SendVideo(payload_type, codec_type, rtp_timestamp,
                       encoded_image.capture_time_ms_, encoded_image, video_header,
                       expected_retransmission_time_ms);
    }
    

    RTPSenderVideo::SendEncodedImage() mainly:

    • checks for a frame transformer; if one is present, the frame is transformed and then sent asynchronously
    • calls SendVideo() to RTP-packetize the image (a sizing sketch follows)
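
    Before reading SendVideo(), it helps to pin down what the RtpPacketizer::PayloadSizeLimits computed inside it express: the first and last packets of a frame carry extra header extensions, so they have less room for payload than a "middle" packet. A back-of-the-envelope packet count under made-up numbers (the real packetizer splits in a codec-aware way and balances packet sizes):

    #include <cstdio>

    int main() {
      const int max_payload_len = 1200;            // room in a middle packet
      const int first_packet_reduction_len = 20;   // extra extensions, first
      const int last_packet_reduction_len = 8;     // extra extensions, last
      const int single_packet_reduction_len = 28;  // first and last in one

      const int payload_size = 5000;  // made-up encoded frame size in bytes

      int num_packets;
      if (payload_size <= max_payload_len - single_packet_reduction_len) {
        num_packets = 1;
      } else {
        // k packets hold k * max_payload_len - first - last payload bytes,
        // so take the smallest k that fits (ceiling division):
        num_packets = (payload_size + first_packet_reduction_len +
                       last_packet_reduction_len + max_payload_len - 1) /
                      max_payload_len;
      }
      std::printf("frame fits in %d RTP packets\n", num_packets);  // prints 5
    }

    SendVideo() itself:
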
    bool RTPSenderVideo::SendVideo(
        int payload_type,
        absl::optional<VideoCodecType> codec_type,
        uint32_t rtp_timestamp,
        int64_t capture_time_ms,
        rtc::ArrayView<const uint8_t> payload,
        RTPVideoHeader video_header,
        absl::optional<int64_t> expected_retransmission_time_ms,
        absl::optional<int64_t> estimated_capture_clock_offset_ms) {
    #if RTC_TRACE_EVENTS_ENABLED
      TRACE_EVENT_ASYNC_STEP1("webrtc", "Video", capture_time_ms, "Send", "type",
                              FrameTypeToString(video_header.frame_type));
    #endif
      RTC_CHECK_RUNS_SERIALIZED(&send_checker_);
    
      if (video_header.frame_type == VideoFrameType::kEmptyFrame)
        return true;
    
      if (payload.empty())
        return false;
    
      int32_t retransmission_settings = retransmission_settings_;
      if (codec_type == VideoCodecType::kVideoCodecH264) {
        // Backward compatibility for older receivers without temporal layer logic.
        retransmission_settings = kRetransmitBaseLayer | kRetransmitHigherLayers;
      }
    
      // Update the playout delay (current_playout_delay_) from the video_header
      MaybeUpdateCurrentPlayoutDelay(video_header);
      if (video_header.frame_type == VideoFrameType::kVideoFrameKey) {
        if (!IsNoopDelay(current_playout_delay_)) {
          // Force playout delay on key-frames, if set.
          playout_delay_pending_ = true;
        }
        if (allocation_) {
          // Send the bitrate allocation on every key frame.
          send_allocation_ = SendVideoLayersAllocation::kSendWithResolution;
        }
      }
    
      // Update the active_decode_targets bitmask for the RTP header. This is
      // an AV1 concept, see:
      // https://aomediacodec.github.io/av1-rtp-spec/#a4-active-decode-targets
      // Roughly: several decode-target chains exist, and the remote end must
      // be told to switch when the active set changes.
      if (video_structure_ != nullptr && video_header.generic) {
        active_decode_targets_tracker_.OnFrame(
            video_structure_->decode_target_protected_by_chain,
            video_header.generic->active_decode_targets,
            video_header.frame_type == VideoFrameType::kVideoFrameKey,
            video_header.generic->frame_id, video_header.generic->chain_diffs);
      }
    
      const uint8_t temporal_id = GetTemporalId(video_header);
      // No FEC protection for upper temporal layers, if used.
      // FEC applies only to temporal layer 0, or to frames without temporal layering
      const bool use_fec = fec_type_.has_value() &&
                           (temporal_id == 0 || temporal_id == kNoTemporalIdx);
    
      // Maximum size of packet including rtp headers.
      // Extra space left in case packet will be resent using fec or rtx.
      // Compute the capacity left in a packet after FEC and RTX overhead
      int packet_capacity = rtp_sender_->MaxRtpPacketSize() -
                            (use_fec ? FecPacketOverhead() : 0) -
                            (rtp_sender_->RtxStatus() ? kRtxHeaderSize : 0);
    
      // Allocate the packet and set payload_type and timestamp
      std::unique_ptr<RtpPacketToSend> single_packet =
          rtp_sender_->AllocatePacket();
      RTC_DCHECK_LE(packet_capacity, single_packet->capacity());
      single_packet->SetPayloadType(payload_type);
      single_packet->SetTimestamp(rtp_timestamp);
      single_packet->set_capture_time_ms(capture_time_ms);
    
      const absl::optional<AbsoluteCaptureTime> absolute_capture_time =
          absolute_capture_time_sender_.OnSendPacket(
              AbsoluteCaptureTimeSender::GetSource(single_packet->Ssrc(),
                                                   single_packet->Csrcs()),
              single_packet->Timestamp(), kVideoPayloadTypeFrequency,
              Int64MsToUQ32x32(single_packet->capture_time_ms() + NtpOffsetMs()),
              /*estimated_capture_clock_offset=*/
              include_capture_clock_offset_ ? estimated_capture_clock_offset_ms
                                            : absl::nullopt);
    
      auto first_packet = std::make_unique<RtpPacketToSend>(*single_packet);
      auto middle_packet = std::make_unique<RtpPacketToSend>(*single_packet);
      auto last_packet = std::make_unique<RtpPacketToSend>(*single_packet);
      // Simplest way to estimate how much extensions would occupy is to set them.
      // Add header extensions to each packet according to the video_header
      AddRtpHeaderExtensions(video_header, absolute_capture_time,
                             /*first_packet=*/true, /*last_packet=*/true,
                             single_packet.get());
      AddRtpHeaderExtensions(video_header, absolute_capture_time,
                             /*first_packet=*/true, /*last_packet=*/false,
                             first_packet.get());
      AddRtpHeaderExtensions(video_header, absolute_capture_time,
                             /*first_packet=*/false, /*last_packet=*/false,
                             middle_packet.get());
      AddRtpHeaderExtensions(video_header, absolute_capture_time,
                             /*first_packet=*/false, /*last_packet=*/true,
                             last_packet.get());
    
      RTC_DCHECK_GT(packet_capacity, single_packet->headers_size());
      RTC_DCHECK_GT(packet_capacity, first_packet->headers_size());
      RTC_DCHECK_GT(packet_capacity, middle_packet->headers_size());
      RTC_DCHECK_GT(packet_capacity, last_packet->headers_size());
      RtpPacketizer::PayloadSizeLimits limits;
      limits.max_payload_len = packet_capacity - middle_packet->headers_size();
    
      RTC_DCHECK_GE(single_packet->headers_size(), middle_packet->headers_size());
      limits.single_packet_reduction_len =
          single_packet->headers_size() - middle_packet->headers_size();
    
      RTC_DCHECK_GE(first_packet->headers_size(), middle_packet->headers_size());
      limits.first_packet_reduction_len =
          first_packet->headers_size() - middle_packet->headers_size();
    
      RTC_DCHECK_GE(last_packet->headers_size(), middle_packet->headers_size());
      limits.last_packet_reduction_len =
          last_packet->headers_size() - middle_packet->headers_size();
    
      bool has_generic_descriptor =
          first_packet->HasExtension<RtpGenericFrameDescriptorExtension00>() ||
          first_packet->HasExtension<RtpDependencyDescriptorExtension>();
    
      // Minimization of the vp8 descriptor may erase temporal_id, so use
      // |temporal_id| rather than reference |video_header| beyond this point.
      if (has_generic_descriptor) {
        MinimizeDescriptor(&video_header);
      }
    
      // If frame encryption is enabled, encrypt the payload and header
      // TODO(benwright@webrtc.org) - Allocate enough to always encrypt inline.
      rtc::Buffer encrypted_video_payload;
      if (frame_encryptor_ != nullptr) {
        if (!has_generic_descriptor) {
          return false;
        }
    
        // Get the maximum ciphertext size after frame encryption
        const size_t max_ciphertext_size =
            frame_encryptor_->GetMaxCiphertextByteSize(cricket::MEDIA_TYPE_VIDEO,
                                                       payload.size());
        encrypted_video_payload.SetSize(max_ciphertext_size);
    
        size_t bytes_written = 0;
    
        // Enable header authentication if the field trial isn't disabled.
        std::vector<uint8_t> additional_data;
        if (generic_descriptor_auth_experiment_) {
          additional_data = RtpDescriptorAuthentication(video_header);
        }
    
        if (frame_encryptor_->Encrypt(
                cricket::MEDIA_TYPE_VIDEO, first_packet->Ssrc(), additional_data,
                payload, encrypted_video_payload, &bytes_written) != 0) {
          return false;
        }
    
        encrypted_video_payload.SetSize(bytes_written);
        payload = encrypted_video_payload;
      } else if (require_frame_encryption_) {
        RTC_LOG(LS_WARNING)
            << "No FrameEncryptor is attached to this video sending stream but "
               "one is required since require_frame_encryptor is set";
      }
    
      std::unique_ptr<RtpPacketizer> packetizer =
          RtpPacketizer::Create(codec_type, payload, limits, video_header);
    
      // TODO(bugs.webrtc.org/10714): retransmission_settings_ should generally be
      // replaced by expected_retransmission_time_ms.has_value(). For now, though,
      // only VP8 with an injected frame buffer controller actually controls it.
      // RTX retransmission is decided per frame: check whether the upper layer
      // set an expected retransmission time (default: 125 ms) and mark the
      // frame as retransmittable accordingly.
      const bool allow_retransmission =
          expected_retransmission_time_ms.has_value()
              ? AllowRetransmission(temporal_id, retransmission_settings,
                                    expected_retransmission_time_ms.value())
              : false;
      const size_t num_packets = packetizer->NumPackets();
    
      if (num_packets == 0)
        return false;
    
      bool first_frame = first_frame_sent_();
      std::vector<std::unique_ptr<RtpPacketToSend>> rtp_packets;
      for (size_t i = 0; i < num_packets; ++i) {
        std::unique_ptr<RtpPacketToSend> packet;
        int expected_payload_capacity;
        // Choose right packet template:
        if (num_packets == 1) {
          packet = std::move(single_packet);
          expected_payload_capacity =
              limits.max_payload_len - limits.single_packet_reduction_len;
        } else if (i == 0) {
          packet = std::move(first_packet);
          expected_payload_capacity =
              limits.max_payload_len - limits.first_packet_reduction_len;
        } else if (i == num_packets - 1) {
          packet = std::move(last_packet);
          expected_payload_capacity =
              limits.max_payload_len - limits.last_packet_reduction_len;
        } else {
          packet = std::make_unique<RtpPacketToSend>(*middle_packet);
          expected_payload_capacity = limits.max_payload_len;
        }
    
        packet->set_first_packet_of_frame(i == 0);
    
        if (!packetizer->NextPacket(packet.get()))
          return false;
        RTC_DCHECK_LE(packet->payload_size(), expected_payload_capacity);
    
        // Set retransmission eligibility and the key-frame flag.
        packet->set_allow_retransmission(allow_retransmission);
        packet->set_is_key_frame(video_header.frame_type ==
                                 VideoFrameType::kVideoFrameKey);
    
        // Put packetization finish timestamp into extension.
        if (packet->HasExtension<VideoTimingExtension>()) {
          packet->set_packetization_finish_time_ms(clock_->TimeInMilliseconds());
        }
    
        packet->set_fec_protect_packet(use_fec);
    
        if (red_enabled()) {
          // If RED encapsulation is enabled, re-wrap the payload with a RED
          // header (RFC 2198) and mark the packet as a RED packet.
          // TODO(sprang): Consider packetizing directly into packets with the RED
          // header already in place, to avoid this copy.
          std::unique_ptr<RtpPacketToSend> red_packet(new RtpPacketToSend(*packet));
          BuildRedPayload(*packet, red_packet.get());  // RED-encapsulate the media payload.
          red_packet->SetPayloadType(*red_payload_type_);
          red_packet->set_is_red(true);
    
          // Append |red_packet| instead of |packet| to output.
          red_packet->set_packet_type(RtpPacketMediaType::kVideo);
          red_packet->set_allow_retransmission(packet->allow_retransmission());
          rtp_packets.emplace_back(std::move(red_packet));
        } else {
          packet->set_packet_type(RtpPacketMediaType::kVideo);
          rtp_packets.emplace_back(std::move(packet));
        }
    
        if (first_frame) {
          if (i == 0) {
            RTC_LOG(LS_INFO)
                << "Sent first RTP packet of the first video frame (pre-pacer)";
          }
          if (i == num_packets - 1) {
            RTC_LOG(LS_INFO)
                << "Sent last RTP packet of the first video frame (pre-pacer)";
          }
        }
      }
    
      // Assign sequence numbers.
      if (!rtp_sender_->AssignSequenceNumbersAndStoreLastPacketState(rtp_packets)) {
        // Media not being sent.
        return false;
      }
    
      // Forward to the pacer.
      LogAndSendToNetwork(std::move(rtp_packets), payload.size());
    
      // Update details about the last sent frame.
      last_rotation_ = video_header.rotation;
    
      if (video_header.color_space != last_color_space_) {
        last_color_space_ = video_header.color_space;
        transmit_color_space_next_frame_ = !IsBaseLayer(video_header);
      } else {
        transmit_color_space_next_frame_ =
            transmit_color_space_next_frame_ ? !IsBaseLayer(video_header) : false;
      }
    
      // Reset the playout-delay flag. The delay is driven by video_header;
      // parsing earlier set playout_delay_pending_ to true, and it is cleared here.
      if (video_header.frame_type == VideoFrameType::kVideoFrameKey ||  
          PacketWillLikelyBeRequestedForRestransmitionIfLost(video_header)) {
        // This frame will likely be delivered, no need to populate playout
        // delay extensions until it changes again.
        playout_delay_pending_ = false;
        send_allocation_ = SendVideoLayersAllocation::kDontSend;
      }
    
      TRACE_EVENT_ASYNC_END1("webrtc", "Video", capture_time_ms, "timestamp",
                             rtp_timestamp);
      return true;
    }
    

    SendEncodedImage() is long (around 300 lines), but its main job is to parse the relevant information out of video_header and use it to initialize the frame's RTP packets. The key steps:

    • Call MaybeUpdateCurrentPlayoutDelay() to parse video_header, decide whether playout needs to be delayed, and adjust the playout delay.
    • Since the payload may exceed a single RTP packet size it must be split; two cases are handled: a single packet (single_packet), or a split frame (first_packet, middle_packet, last_packet).
    • Use AddRtpHeaderExtensions() to extract the RTP extensions from video_header and set them on the packets.
    • If frame encryption is enabled, call frame_encryptor_->Encrypt() to encrypt the frame and the RTP extensions.
    • Build the RTP packetizer RtpPacketizer from video_header, payload_type, etc.; the packetizer fills the payload into the RTP packets.
    • Mark whether each packet is FEC protected.
    • If RED encapsulation is enabled (see RFC 2198), wrap the payload with BuildRedPayload() (which simply prepends a RED header to the media payload) and set the packet's payload type to the RED payload type; a sketch of this wrapping follows the list.
    • Assign sequence numbers to the packets with AssignSequenceNumbersAndStoreLastPacketState().
    • Forward the RTP packets to the pacer via LogAndSendToNetwork().
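
    To make the RED step concrete, here is a minimal sketch of what the single-block encapsulation from RFC 2198 amounts to. WrapRed is a hypothetical helper written for illustration; it is not the actual BuildRedPayload() signature:

    #include <cstdint>
    #include <vector>
    
    // Wrap a media payload the way BuildRedPayload() effectively does in the
    // common single-block case: prepend a one-byte RED header carrying the
    // original media payload type.
    std::vector<uint8_t> WrapRed(uint8_t media_payload_type,
                                 const std::vector<uint8_t>& media_payload) {
      std::vector<uint8_t> red_payload;
      red_payload.reserve(media_payload.size() + 1);
      // One-byte RED header: F bit = 0 (final/only block) plus 7-bit media PT.
      red_payload.push_back(media_payload_type & 0x7F);
      red_payload.insert(red_payload.end(), media_payload.begin(),
                         media_payload.end());
      // The caller then rewrites the RTP header's payload type to the RED PT.
      return red_payload;
    }
    

    The hand-off path after packetization is RTPSenderVideo::LogAndSendToNetwork() -> RTPSender::EnqueuePackets() -> PacedSender::EnqueuePackets():
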
    void RTPSender::EnqueuePackets(
        std::vector<std::unique_ptr<RtpPacketToSend>> packets) {
      RTC_DCHECK(!packets.empty());
      int64_t now_ms = clock_->TimeInMilliseconds();
      for (auto& packet : packets) {
        RTC_DCHECK(packet);
        RTC_CHECK(packet->packet_type().has_value())
            << "Packet type must be set before sending.";
        if (packet->capture_time_ms() <= 0) {
          packet->set_capture_time_ms(now_ms);
        }
      }
    
      // Deliver the packets to the pacer.
      paced_sender_->EnqueuePackets(std::move(packets));
    }
    
    
    void RTPSenderVideo::LogAndSendToNetwork(
        std::vector<std::unique_ptr<RtpPacketToSend>> packets,
        size_t unpacketized_payload_size) {
      {
        MutexLock lock(&stats_mutex_);
        size_t packetized_payload_size = 0;
        for (const auto& packet : packets) {
          if (*packet->packet_type() == RtpPacketMediaType::kVideo) {
            packetized_payload_size += packet->payload_size();
          }
        }
        // Account for the bitrate overhead added by packetization.
        // AV1 and H264 packetizers may produce less packetized bytes than
        // unpacketized.
        if (packetized_payload_size >= unpacketized_payload_size) {
          packetization_overhead_bitrate_.Update(
              packetized_payload_size - unpacketized_payload_size,
              clock_->TimeInMilliseconds());
        }
      }
      // Push the packets into the pacer's send queue.
      rtp_sender_->EnqueuePackets(std::move(packets));
    }
    
    

    The packets are in fact delivered into pacing_controller_'s send queue:

    void PacedSender::EnqueuePackets(
        std::vector<std::unique_ptr<RtpPacketToSend>> packets) {
      {
        TRACE_EVENT0(TRACE_DISABLED_BY_DEFAULT("webrtc"),
                     "PacedSender::EnqueuePackets");
        MutexLock lock(&mutex_);
        for (auto& packet : packets) {
          TRACE_EVENT2(TRACE_DISABLED_BY_DEFAULT("webrtc"),
                       "PacedSender::EnqueuePackets::Loop", "sequence_number",
                       packet->SequenceNumber(), "rtp_timestamp",
                       packet->Timestamp());
    
          RTC_DCHECK_GE(packet->capture_time_ms(), 0);
          pacing_controller_.EnqueuePacket(std::move(packet));
        }
      }
      // Wake up the processing thread.
      MaybeWakupProcessThread();
    }
    

    2.2.5 PacedSender

    2.2.5.1 The Module wake-up process

    After the RTP packets land in pacing_controller_'s send queue, PacedSender::MaybeWakupProcessThread() runs to wake the processing thread, which calls the registered Process() to work through the send queue:

    void PacedSender::MaybeWakupProcessThread() {
      // Tell the process thread to call our TimeUntilNextProcess() method to get
      // a new time for when to call Process().
      if (process_thread_ &&
          process_mode_ == PacingController::ProcessMode::kDynamic) {
        process_thread_->WakeUp(&module_proxy_);
      }
    }
    

    The wake-up mechanism here involves WebRTC's ProcessThread, which provides the following interface:

    class ProcessThread : public TaskQueueBase {
     public:
      ~ProcessThread() override;
    
      static std::unique_ptr<ProcessThread> Create(const char* thread_name);
    
      // Starts the worker thread.  Must be called from the construction thread.
      virtual void Start() = 0;
    
      // Stops the worker thread.  Must be called from the construction thread.
      virtual void Stop() = 0;
    
      // Wakes the thread up to give a module a chance to do processing right
      // away.  This causes the worker thread to wake up and requery the specified
      // module for when it should be called back. (Typically the module should
      // return 0 from TimeUntilNextProcess on the worker thread at that point).
      // Can be called on any thread.
      virtual void WakeUp(Module* module) = 0;
    
      // Adds a module that will start to receive callbacks on the worker thread.
      // Can be called from any thread.
      virtual void RegisterModule(Module* module, const rtc::Location& from) = 0;
    
      // Removes a previously registered module.
      // Can be called from any thread.
      virtual void DeRegisterModule(Module* module) = 0;
    };
    
    

    Calling ProcessThread::RegisterModule(module) adds a Module. The Module interface is shown below; once Process() is overridden, calling ProcessThread::WakeUp(module) wakes the worker thread so that it invokes Process(). A wiring sketch follows the interface definition.

    class Module {
     public:
      virtual int64_t TimeUntilNextProcess() = 0;
    
      // Process any pending tasks such as timeouts.
      // Called on a worker thread.
      virtual void Process() = 0;
    
      // This method is called when the module is attached to a *running* process
      // thread or detached from one.  In the case of detaching, |process_thread|
      // will be nullptr.
    
      virtual void ProcessThreadAttached(ProcessThread* process_thread) {}
    
     protected:
      virtual ~Module() {}
    };
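
    Below is a minimal wiring sketch, based only on the two interfaces above, of how a module cooperates with a ProcessThread; MyPacerModule and the include paths are assumptions made for illustration, not code from the pacer:

    #include <memory>
    
    #include "modules/include/module.h"
    #include "modules/utility/include/process_thread.h"
    #include "rtc_base/location.h"
    
    // Hypothetical module that asks to be processed every 5 ms.
    class MyPacerModule : public webrtc::Module {
     public:
      int64_t TimeUntilNextProcess() override { return 5; }
      void Process() override {
        // Drain the send queue here.
      }
    };
    
    void Example() {
      std::unique_ptr<webrtc::ProcessThread> thread =
          webrtc::ProcessThread::Create("pacer_thread");
      MyPacerModule module;
      thread->RegisterModule(&module, RTC_FROM_HERE);
      thread->Start();
      // When a packet is enqueued elsewhere, force the worker thread to
      // re-query TimeUntilNextProcess() and call Process() promptly:
      thread->WakeUp(&module);
      thread->Stop();
      thread->DeRegisterModule(&module);
    }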
    

    PacedSender uses a private-implementation approach: it inherits class Module and overrides the interface with private overrides, which keeps the object's public interface clean, since this logic belongs to the class internals. Is it because of the private overrides that it goes the roundabout way of introducing a module_proxy_ and a delegate to talk to process_thread?

    class PacedSender : public Module,
                        public RtpPacketPacer,
                        public RtpPacketSender {
    ...
    
     private:
    
      void ProcessThreadAttached(ProcessThread* process_thread) override;
    
      void MaybeWakupProcessThread();
    
      // Private implementation of Module to not expose those implementation details
      // publicly and control when the class is registered/deregistered.
      class ModuleProxy : public Module {
       public:
        explicit ModuleProxy(PacedSender* delegate) : delegate_(delegate) {}
    
       private:
        int64_t TimeUntilNextProcess() override {
          return delegate_->TimeUntilNextProcess();
        }
        void Process() override { return delegate_->Process(); }
        void ProcessThreadAttached(ProcessThread* process_thread) override {
          return delegate_->ProcessThreadAttached(process_thread);
        }
    
        PacedSender* const delegate_;
      } module_proxy_{this};
    
    };
    

    In practice, though, a public virtual that is privately overridden can still be invoked polymorphically through a public base-class pointer, so the module_proxy_ detour does not seem strictly necessary:

    #include <iostream>
    #include <memory>
    
    class c1 {
     public:
      virtual ~c1() = default;
      virtual void fn() = 0;
    };
    
    class c2 : public c1 {
     private:
      void fn() override { std::cout << "c2"; }
    };
    
    int main() {
      std::unique_ptr<c1> temp = std::make_unique<c2>();
      temp->fn();  // Prints "c2": the private override is reachable through the base class.
      return 0;
    }
    

    After the wakeup, the callback that eventually fires is PacedSender::Process(), and the processing logic is handed to pacing_controller_:

    void PacedSender::Process() {
      MutexLock lock(&mutex_);
      pacing_controller_.ProcessPackets();
    }
    

    2.2.5.2 ProcessPackets()

    ProcessPackets() takes packets from the send queue and forwards them, with plenty of added logic for bandwidth probing and send-rate control:

    void PacingController::ProcessPackets() {
      Timestamp now = CurrentTime();
      Timestamp target_send_time = now;
      
      // Update the process time and the send budget.
    
      // mode_ has two values:
      // kPeriodic: bitrate is tracked with the IntervalBudget class, and
      //            Process() is expected to run at a fixed cadence (5 ms).
      // kDynamic: Process() runs at a variable cadence.
      if (mode_ == ProcessMode::kDynamic) {
        // Time model used here:
        // target_send_time lies in the interval [now, now + early_execute_margin].
    
        // Get the target send time.
        target_send_time = NextSendTime();
        // Maximum early-execution margin: bandwidth probing may be processed
        // up to 1 ms early, normal frame sending may not.
        TimeDelta early_execute_margin =
            prober_.is_probing() ? kMaxEarlyProbeProcessing : TimeDelta::Zero();
        if (target_send_time.IsMinusInfinity()) {
          // Target send time is minus infinity, so use now.
          target_send_time = now;
        } else if (now < target_send_time - early_execute_margin) {
          // Too early relative to the target send time; just update the budget.
          // We are too early, but if queue is empty still allow draining some debt.
          // Probing is allowed to be sent up to kMinSleepTime early.
          TimeDelta elapsed_time = UpdateTimeAndGetElapsed(now);
          UpdateBudgetWithElapsedTime(elapsed_time);
          return;
        }
    
        if (target_send_time < last_process_time_) {
          // target_send_time may be smaller than last_process_time_ because of
          // erratic behavior in NextSendTime() or wraparound. The budget update
          // then needs special handling, otherwise the send budget would not
          // grow as time passes and packets would have no budget to be sent
          // with, so the elapsed time is computed as
          // last_process_time_ - target_send_time instead.
    
          // After the last process call, at time X, the target send time
          // shifted to be earlier than X. This should normally not happen
          // but we want to make sure rounding errors or erratic behavior
          // of NextSendTime() does not cause issue. In particular, if the
          // buffer reduction of
          // rate * (target_send_time - previous_process_time)
          // in the main loop doesn't clean up the existing debt we may not
          // be able to send again. We don't want to check this reordering
          // there as it is the normal exit condition when the buffer is
          // exhausted and there are packets in the queue.
    
          UpdateBudgetWithElapsedTime(last_process_time_ - target_send_time);
          target_send_time = last_process_time_;
        }
      }
    
      Timestamp previous_process_time = last_process_time_;
      TimeDelta elapsed_time = UpdateTimeAndGetElapsed(now);
    
      // Check whether a keepalive packet should be sent.
      if (ShouldSendKeepalive(now)) {
        // We can not send padding unless a normal packet has first been sent. If
        // we do, timestamps get messed up.
        if (packet_counter_ == 0) {
          last_send_time_ = now;
        } else {
          DataSize keepalive_data_sent = DataSize::Zero();
          // Generate a 1-byte keepalive (padding) packet.
          std::vector<std::unique_ptr<RtpPacketToSend>> keepalive_packets =
              packet_sender_->GeneratePadding(DataSize::Bytes(1));
          for (auto& packet : keepalive_packets) {
            keepalive_data_sent +=
                DataSize::Bytes(packet->payload_size() + packet->padding_size());
            // Send the keepalive packet.
            packet_sender_->SendPacket(std::move(packet), PacedPacketInfo());
            // If new FEC packets were produced, enqueue them for sending.
            for (auto& packet : packet_sender_->FetchFec()) {
              EnqueuePacket(std::move(packet));
            }
          }
          OnPaddingSent(keepalive_data_sent);
        }
      }
    
      if (paused_) {
        return;
      }
    
      // Compute a target rate from the send-queue size and use it to update
      // the budget.
      if (elapsed_time > TimeDelta::Zero()) {
        DataRate target_rate = pacing_bitrate_;
        DataSize queue_size_data = packet_queue_.Size();
        if (queue_size_data > DataSize::Zero()) {
          // Assuming equal size packets and input/output rate, the average packet
          // has avg_time_left_ms left to get queue_size_bytes out of the queue, if
          // time constraint shall be met. Determine bitrate needed for that.
          packet_queue_.UpdateQueueTime(now);
          if (drain_large_queues_) {
            TimeDelta avg_time_left =
                std::max(TimeDelta::Millis(1),
                         queue_time_limit - packet_queue_.AverageQueueTime());
            // Minimum rate needed to drain the queue within the time limit.
            DataRate min_rate_needed = queue_size_data / avg_time_left;
            if (min_rate_needed > target_rate) {
              target_rate = min_rate_needed;
              RTC_LOG(LS_VERBOSE) << "bwe:large_pacing_queue pacing_rate_kbps="
                                  << target_rate.kbps();
            }
          }
        }
    
        if (mode_ == ProcessMode::kPeriodic) {
          // In periodic processing mode, the IntervalBudget allows positive budget
          // up to (process interval duration) * (target rate), so we only need to
          // update it once before the packet sending loop.
          media_budget_.set_target_rate_kbps(target_rate.kbps());
          UpdateBudgetWithElapsedTime(elapsed_time);
        } else {
          media_rate_ = target_rate;
        }
      }
    
      // Get the probe bitrate info from prober_.
      bool first_packet_in_probe = false;
      PacedPacketInfo pacing_info;
      DataSize recommended_probe_size = DataSize::Zero();
      bool is_probing = prober_.is_probing();
      if (is_probing) {
        // Probe timing is sensitive, and handled explicitly by BitrateProber, so
        // use actual send time rather than target.
        // Fetch the probe info from the current probe cluster.
        pacing_info = prober_.CurrentCluster(now).value_or(PacedPacketInfo());
        if (pacing_info.probe_cluster_id != PacedPacketInfo::kNotAProbe) {
          first_packet_in_probe = pacing_info.probe_cluster_bytes_sent == 0;
          recommended_probe_size = prober_.RecommendedMinProbeSize();
          RTC_DCHECK_GT(recommended_probe_size, DataSize::Zero());
        } else {
          // No valid probe cluster returned, probe might have timed out.
          is_probing = false;
        }
      }
    
      DataSize data_sent = DataSize::Zero();
    
      // The paused state is checked in the loop since it leaves the critical
      // section allowing the paused state to be changed from other code.
      while (!paused_) {
        if (first_packet_in_probe) {
          // At the start of a probe (probing not yet started?): a tiny padding
          // packet seeds the rate estimator with a reliable start window.
          // If first packet in probe, insert a small padding packet so we have a
          // more reliable start window for the rate estimation.
          auto padding = packet_sender_->GeneratePadding(DataSize::Bytes(1));
          // If no RTP modules sending media are registered, we may not get a
          // padding packet back.
          if (!padding.empty()) {
            // Insert the probe padding packet into the send queue.
            // Insert with high priority so larger media packets don't preempt it.
            EnqueuePacketInternal(std::move(padding[0]), kFirstPriority);
            // We should never get more than one padding packets with a requested
            // size of 1 byte.
            RTC_DCHECK_EQ(padding.size(), 1u);
          }
          first_packet_in_probe = false;
        }
    
        // Update the send budget within the loop.
        if (mode_ == ProcessMode::kDynamic &&
            previous_process_time < target_send_time) {
          // Reduce buffer levels with amount corresponding to time between last
          // process and target send time for the next packet.
          // If the process call is late, that may be the time between the optimal
          // send times for two packets we should already have sent.
          UpdateBudgetWithElapsedTime(target_send_time - previous_process_time);
          previous_process_time = target_send_time;
        }
    
        // Fetch the next packet, so long as queue is not empty or budget is not
        // exhausted.
        // The packet is chosen based on the budget and the target send time.
        std::unique_ptr<RtpPacketToSend> rtp_packet =
            GetPendingPacket(pacing_info, target_send_time, now);
    
        if (rtp_packet == nullptr) {
          // No packet available to send, check if we should send padding.
          // Compute how much padding can still be sent for probing after
          // subtracting the media data already sent.
          DataSize padding_to_add = PaddingToAdd(recommended_probe_size, data_sent);
          if (padding_to_add > DataSize::Zero()) {
            std::vector<std::unique_ptr<RtpPacketToSend>> padding_packets =
                packet_sender_->GeneratePadding(padding_to_add);
            if (padding_packets.empty()) {
              // No padding packets were generated, quit the send loop.
              break;
            }
            for (auto& packet : padding_packets) {
              // Enqueue the padding packet.
              EnqueuePacket(std::move(packet));
            }
            // Continue loop to send the padding that was just added.
            continue;
          }
    
          // Can't fetch new packet and no padding to send, exit send loop.
          break;
        }
    
        RTC_DCHECK(rtp_packet);
        RTC_DCHECK(rtp_packet->packet_type().has_value());
        const RtpPacketMediaType packet_type = *rtp_packet->packet_type();
        DataSize packet_size = DataSize::Bytes(rtp_packet->payload_size() +
                                               rtp_packet->padding_size());
    
        if (include_overhead_) {
          packet_size += DataSize::Bytes(rtp_packet->headers_size()) +
                         transport_overhead_per_packet_;
        }
    
        // Send the packet.
        packet_sender_->SendPacket(std::move(rtp_packet), pacing_info);
        for (auto& packet : packet_sender_->FetchFec()) {
          // Enqueue newly generated FEC packets to be sent too.
          EnqueuePacket(std::move(packet));
        }
        data_sent += packet_size;
    
        // Send done, update send/process time to the target send time.
        OnPacketSent(packet_type, packet_size, target_send_time);
    
        // If we are currently probing, we need to stop the send loop when we have
        // reached the send target.
        if (is_probing && data_sent >= recommended_probe_size) {
          break;
        }
    
        if (mode_ == ProcessMode::kDynamic) {
          // Update target send time in case there are more packets that we are late
          // in processing.
          Timestamp next_send_time = NextSendTime();
          if (next_send_time.IsMinusInfinity()) {
            target_send_time = now;
          } else {
            target_send_time = std::min(now, next_send_time);
          }
        }
      }
    
      last_process_time_ = std::max(last_process_time_, previous_process_time);
    
      if (is_probing) {
        probing_send_failure_ = data_sent == DataSize::Zero();
        if (!probing_send_failure_) {
          // Report the sent size back to the prober.
          prober_.ProbeSent(CurrentTime(), data_sent);
        }
      }
    }
    

    To summarize, the main points are:

    • A budget is used to control the send rate: it grows as time passes and shrinks as data is sent. Internally the budget is kept in media_budget_ when ProcessPackets() is invoked at a fixed period (ProcessMode::kPeriodic), and in media_debt_ when it is invoked dynamically (ProcessMode::kDynamic); a toy sketch of the debt model follows this list.

    • At the entry, UpdateBudgetWithElapsedTime() updates the send budget with the elapsed time:

      void PacingController::UpdateBudgetWithElapsedTime(TimeDelta delta) {
        if (mode_ == ProcessMode::kPeriodic) {
          delta = std::min(kMaxProcessingInterval, delta);
          media_budget_.IncreaseBudget(delta.ms());
          padding_budget_.IncreaseBudget(delta.ms());
        } else {
          media_debt_ -= std::min(media_debt_, media_rate_ * delta);
          padding_debt_ -= std::min(padding_debt_, padding_rate_ * delta);
        }
      }
      
    • ShouldSendKeepalive() checks whether a keepalive packet is needed, using the criteria below; if so, a 1-byte packet is constructed and sent:

      bool PacingController::ShouldSendKeepalive(Timestamp now) const {
        if (send_padding_if_silent_ || paused_ || Congested() ||
            packet_counter_ == 0) {
        // Staying congested because no feedback arrives: send a keepalive probe every 500 ms.
          // We send a padding packet every 500 ms to ensure we won't get stuck in
          // congested state due to no feedback being received.
          TimeDelta elapsed_since_last_send = now - last_send_time_;
          if (elapsed_since_last_send >= kCongestedPacketInterval) {
            return true;
          }
        }
        return false;
      }
      
    • A target send rate target_rate is computed from the size of the send queue and how long packets are allowed to stay queued; for example, with 50 kB queued and 0.5 s of drain time left, the minimum rate needed is 50 kB / 0.5 s = 800 kbps.

    • The current probe rate is obtained from the bandwidth prober prober_ (see this article for the details of the prober), which judges whether the link capacity has been reached by whether the send and receive rates of a packet cluster diverge.

    • The send loop scans the queue, sends packets via packet_sender->SendPacket(), and fetches the FEC packets of already-sent packets via packet_sender->FetchFec() and sends those too.
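
    As promised above, a toy illustration of the kDynamic debt model (standalone arithmetic, not WebRTC code): sending adds the packet size to media_debt_, and elapsed time drains the debt at media_rate_:

    #include <algorithm>
    #include <cstdio>
    
    int main() {
      const double media_rate_bytes_per_s = 125000.0;  // 1 Mbps pacing rate.
      double media_debt_bytes = 0.0;
    
      // OnPacketSent: the sent packet's size is added to the debt.
      media_debt_bytes += 1200.0;
    
      // UpdateBudgetWithElapsedTime: 5 ms later, media_rate * delta = 625 bytes
      // of the debt have drained away, leaving 575 bytes.
      const double elapsed_s = 0.005;
      media_debt_bytes =
          std::max(0.0, media_debt_bytes - media_rate_bytes_per_s * elapsed_s);
    
      std::printf("remaining debt: %.0f bytes\n", media_debt_bytes);  // 575
      return 0;
    }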

    2.2.5.3 PacketRouter forwards the packet

    After the pacer, the RTP packet is forwarded into PacketRouter::SendPacket():

    void PacketRouter::SendPacket(std::unique_ptr<RtpPacketToSend> packet,
                                  const PacedPacketInfo& cluster_info) {
      TRACE_EVENT2(TRACE_DISABLED_BY_DEFAULT("webrtc"), "PacketRouter::SendPacket",
                   "sequence_number", packet->SequenceNumber(), "rtp_timestamp",
                   packet->Timestamp());
    
      MutexLock lock(&modules_mutex_);
      // With the new pacer code path, transport sequence numbers are only set here,
      // on the pacer thread. Therefore we don't need atomics/synchronization.
      // Set the transport-wide sequence number.
      if (packet->HasExtension<TransportSequenceNumber>()) {
        packet->SetExtension<TransportSequenceNumber>((++transport_seq_) & 0xFFFF);
      }
    
    
      // Find the send module matching the packet's SSRC.
      uint32_t ssrc = packet->Ssrc();
      auto kv = send_modules_map_.find(ssrc);
      if (kv == send_modules_map_.end()) {
        RTC_LOG(LS_WARNING)
            << "Failed to send packet, matching RTP module not found "
               "or transport error. SSRC = "
            << packet->Ssrc() << ", sequence number " << packet->SequenceNumber();
        return;
      }
    
      // rtp_module -> ModuleRtpRtcpImpl2
      RtpRtcpInterface* rtp_module = kv->second;
      if (!rtp_module->TrySendPacket(packet.get(), cluster_info)) {
        RTC_LOG(LS_WARNING) << "Failed to send packet, rejected by RTP module.";
        return;
      }
    
      if (rtp_module->SupportsRtxPayloadPadding()) {
        // This is now the last module to send media, and has the desired
        // properties needed for payload based padding. Cache it for later use.
        last_send_module_ = rtp_module;
      }
    
      // Pull this packet's FEC packets out and store them in
      // pending_fec_packets_, where PacingController will fetch them for sending.
      for (auto& packet : rtp_module->FetchFecPackets()) {
        pending_fec_packets_.push_back(std::move(packet));
      }
    }
    

    It mainly does three things:

    • If the transport-wide sequence number RTP extension is registered, fill in the sequence number; only the low 16 bits go on the wire (see the unwrap sketch after this list).
    • Look up the ModuleRtpRtcpImpl2 matching the packet's SSRC (with simulcast enabled, every simulcast stream has its own ModuleRtpRtcpImpl2) and call ModuleRtpRtcpImpl2::TrySendPacket() to send the packet.
    • Stash FEC packets produced during sending in pending_fec_packets_, where they wait for the outer code to fetch them and decide whether to enqueue them for sending.
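
    Since only the low 16 bits of transport_seq_ go on the wire, the receiving side has to unwrap them back into a monotonic counter. A minimal standalone unwrap sketch follows (WebRTC ships its own unwrapper class; this just shows the idea):

    #include <cstdint>
    #include <cstdio>
    
    // Pick the 64-bit value congruent to |seq| (mod 2^16) that lies closest to
    // the previously unwrapped value.
    int64_t Unwrap(int64_t last, uint16_t seq) {
      int64_t candidate = (last & ~static_cast<int64_t>(0xFFFF)) | seq;
      if (candidate < last - 0x8000) {
        candidate += 0x10000;
      } else if (candidate > last + 0x8000) {
        candidate -= 0x10000;
      }
      return candidate;
    }
    
    int main() {
      int64_t last = 65534;
      last = Unwrap(last, 2);  // 2 on the wire right after 65534 unwraps to 65538.
      std::printf("%lld\n", static_cast<long long>(last));
      return 0;
    }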

    2.2.5.4 ModuleRtpRtcpImpl2

    The packet next reaches ModuleRtpRtcpImpl2::TrySendPacket(), which mainly calls rtp_sender_->packet_sender to do the actual sending:

    bool ModuleRtpRtcpImpl2::TrySendPacket(RtpPacketToSend* packet,
                                           const PacedPacketInfo& pacing_info) {
      RTC_DCHECK(rtp_sender_);
      // TODO(sprang): Consider if we can remove this check.
      if (!rtp_sender_->packet_generator.SendingMedia()) {
        return false;
      }
      rtp_sender_->packet_sender.SendPacket(packet, pacing_info);
      return true;
    }
    

    ModuleRtpRtcpImpl2 holds rtp_sender_, rtcp_sender_ and rtcp_receiver_, aggregating RTP/RTCP send and receive functionality, and also exposes helpers such as SendNack(). The rtp_sender_ used in TrySendPacket() is of type ModuleRtpRtcpImpl2::RtpSenderContext, which contains the RTPSender packet_generator that built the RTP packets earlier, as well as the RtpSenderEgress packet_sender invoked here to send them:

    class ModuleRtpRtcpImpl2{
    
      ...
        
      struct RtpSenderContext : public SequenceNumberAssigner {
        explicit RtpSenderContext(const RtpRtcpInterface::Configuration& config);
        void AssignSequenceNumber(RtpPacketToSend* packet) override;
        // Storage of packets, for retransmissions and padding, if applicable.
        RtpPacketHistory packet_history;
        // Handles final time timestamping/stats/etc and handover to Transport.
        RtpSenderEgress packet_sender;
        // If no paced sender configured, this class will be used to pass packets
        // from |packet_generator_| to |packet_sender_|.
        RtpSenderEgress::NonPacedPacketSender non_paced_sender;
        // Handles creation of RTP packets to be sent.
        RTPSender packet_generator;
      };
    
    
      std::unique_ptr<RtpSenderContext> rtp_sender_;
      RTCPSender rtcp_sender_;
      RTCPReceiver rtcp_receiver_;
    };
    

    2.2.5.5 RtpSenderEgress forwards the packet to the transport

    After that hand-off, the packet arrives at RtpSenderEgress::SendPacket():

    void RtpSenderEgress::SendPacket(RtpPacketToSend* packet,
                                     const PacedPacketInfo& pacing_info) {
      RTC_DCHECK_RUN_ON(&pacer_checker_);
      RTC_DCHECK(packet);
    
      RTC_DCHECK(packet->packet_type().has_value());
      RTC_DCHECK(HasCorrectSsrc(*packet));
    
      const uint32_t packet_ssrc = packet->Ssrc();
      const int64_t now_ms = clock_->TimeInMilliseconds();
    
    #if BWE_TEST_LOGGING_COMPILE_TIME_ENABLE
      worker_queue_->PostTask(
          ToQueuedTask(task_safety_, [this, now_ms, packet_ssrc]() {
            BweTestLoggingPlot(now_ms, packet_ssrc);
          }));
    #endif
    
      // Save the packet's timestamp, sequence number, etc.
      if (need_rtp_packet_infos_ &&
          packet->packet_type() == RtpPacketToSend::Type::kVideo) {
        worker_queue_->PostTask(ToQueuedTask(
            task_safety_,
            [this, packet_timestamp = packet->Timestamp(),
             is_first_packet_of_frame = packet->is_first_packet_of_frame(),
             is_last_packet_of_frame = packet->Marker(),
             sequence_number = packet->SequenceNumber()]() {
              RTC_DCHECK_RUN_ON(worker_queue_);
              // Last packet of a frame, add it to sequence number info map.
              const uint32_t timestamp = packet_timestamp - timestamp_offset_;
              rtp_sequence_number_map_->InsertPacket(
                  sequence_number,
                  RtpSequenceNumberMap::Info(timestamp, is_first_packet_of_frame,
                                             is_last_packet_of_frame));
            }));
      }
    
      // FEC handling.
      if (fec_generator_ && packet->fec_protect_packet()) {
        // This packet should be protected by FEC, add it to packet generator.
        RTC_DCHECK(fec_generator_);
        RTC_DCHECK(packet->packet_type() == RtpPacketMediaType::kVideo);
        absl::optional<std::pair<FecProtectionParams, FecProtectionParams>>
            new_fec_params;
        {
          MutexLock lock(&lock_);
          new_fec_params.swap(pending_fec_params_);
        }
        // fec_rate and fec_max_frames may have been updated.
        if (new_fec_params) {
          fec_generator_->SetProtectionParameters(new_fec_params->first,
                                                  new_fec_params->second);
        }
        if (packet->is_red()) {
          // The packet is RED encapsulated (RFC 2198), so unwrap it to recover
          // the original RTP packet. (Oddly, only FEC packets should get RED
          // wrapping; do ordinary media packets also get FEC protection here?)

          // Copy the whole packet.
          RtpPacketToSend unpacked_packet(*packet);
    
          const rtc::CopyOnWriteBuffer buffer = packet->Buffer();
          // Grab media payload type from RED header.
          const size_t headers_size = packet->headers_size();
          unpacked_packet.SetPayloadType(buffer[headers_size]);
    
          // This forces a copy of the copy-on-write payload.
          // Copy the media payload into the unpacked buffer.
          uint8_t* payload_buffer =
              unpacked_packet.SetPayloadSize(packet->payload_size() - 1);
          std::copy(&packet->payload()[0] + 1,
                    &packet->payload()[0] + packet->payload_size(), payload_buffer);
    
          // Run FEC over the packet.
          fec_generator_->AddPacketAndGenerateFec(unpacked_packet);
        } else {
          // If not RED encapsulated - we can just insert packet directly.
          fec_generator_->AddPacketAndGenerateFec(*packet);
        }
      }
    
    
      // Bug webrtc:7859. While FEC is invoked from rtp_sender_video, and not after
      // the pacer, these modifications of the header below are happening after the
      // FEC protection packets are calculated. This will corrupt recovered packets
      // at the same place. It's not an issue for extensions, which are present in
      // all the packets (their content just may be incorrect on recovered packets).
      // In case of VideoTimingExtension, since it's present not in every packet,
      // data after rtp header may be corrupted if these packets are protected by
      // the FEC.
      int64_t diff_ms = now_ms - packet->capture_time_ms();
      if (packet->HasExtension<TransmissionOffset>()) {
        packet->SetExtension<TransmissionOffset>(kTimestampTicksPerMs * diff_ms);
      }
      if (packet->HasExtension<AbsoluteSendTime>()) {
        packet->SetExtension<AbsoluteSendTime>(
            AbsoluteSendTime::MsTo24Bits(now_ms));
      }
    
      if (packet->HasExtension<VideoTimingExtension>()) {
        if (populate_network2_timestamp_) {
          packet->set_network2_time_ms(now_ms);
        } else {
          packet->set_pacer_exit_time_ms(now_ms);
        }
      }
    
      const bool is_media = packet->packet_type() == RtpPacketMediaType::kAudio ||
                            packet->packet_type() == RtpPacketMediaType::kVideo;
    
      PacketOptions options;
      {
        MutexLock lock(&lock_);
        options.included_in_allocation = force_part_of_allocation_;
      }
    
      // Downstream code actually uses this flag to distinguish between media and
      // everything else.
      options.is_retransmit = !is_media;
      if (auto packet_id = packet->GetExtension<TransportSequenceNumber>()) {
        options.packet_id = *packet_id;
        options.included_in_feedback = true;
        options.included_in_allocation = true;
        AddPacketToTransportFeedback(*packet_id, *packet, pacing_info);
      }
    
      options.additional_data = packet->additional_data();
    
      if (packet->packet_type() != RtpPacketMediaType::kPadding &&
          packet->packet_type() != RtpPacketMediaType::kRetransmission) {
        UpdateDelayStatistics(packet->capture_time_ms(), now_ms, packet_ssrc);
        UpdateOnSendPacket(options.packet_id, packet->capture_time_ms(),
                           packet_ssrc);
      }
    
      // Forward the packet.
      const bool send_success = SendPacketToNetwork(*packet, options, pacing_info);
    
      // Put packet in retransmission history or update pending status even if
      // actual sending fails.
      if (is_media && packet->allow_retransmission()) {
        // Store it in the retransmission history.
        packet_history_->PutRtpPacket(std::make_unique<RtpPacketToSend>(*packet),
                                      now_ms);
      } else if (packet->retransmitted_sequence_number()) {
        // Mark this packet as having been retransmitted.
        packet_history_->MarkPacketAsSent(*packet->retransmitted_sequence_number());
      }
    
      if (send_success) {
        // |media_has_been_sent_| is used by RTPSender to figure out if it can send
        // padding in the absence of transport-cc or abs-send-time.
        // In those cases media must be sent first to set a reference timestamp.
        media_has_been_sent_ = true;
    
        // TODO(sprang): Add support for FEC protecting all header extensions, add
        // media packet to generator here instead.
    
        RTC_DCHECK(packet->packet_type().has_value());
        RtpPacketMediaType packet_type = *packet->packet_type();
        RtpPacketCounter counter(*packet);
        size_t size = packet->size();
        worker_queue_->PostTask(
            ToQueuedTask(task_safety_, [this, now_ms, packet_ssrc, packet_type,
                                        counter = std::move(counter), size]() {
              RTC_DCHECK_RUN_ON(worker_queue_);
              // Update the send rate and other stats.
              UpdateRtpStats(now_ms, packet_ssrc, packet_type, std::move(counter),
                             size);
            }));
      }
    }
    

    This function mainly does the following:

    • Saves a mapping between the current packet's timestamp and sequence number.
    • Copies the packet and hands it to the FEC generator for FEC protection.
    • Sets the timing extensions on the packet (a sketch of the 24-bit AbsoluteSendTime format follows this list). A bug is referenced here: because FEC runs before the pacer, header timestamps modified afterwards corrupt the corresponding data in FEC-recovered packets, which is why set_pacer_exit_time_ms(now_ms) is used. I haven't fully understood this bug: in principle, once FEC has been applied above, the packet shouldn't be modified below; my understanding of the FEC flow isn't deep enough yet, worth revisiting.
    • Calls SendPacketToNetwork() to forward the packet.
    • Puts the packet into the history queue packet_history_.
    • On success, calls UpdateRtpStats() to update the send rate and other statistics.
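
    As an aside on the AbsoluteSendTime extension set above: it is a 24-bit 6.18 fixed-point value (6 bits of seconds, 18 bits of fraction), so it wraps every 64 seconds. A sketch of the millisecond conversion under that layout (the exact rounding is an assumption):

    #include <cstdint>
    #include <cstdio>
    
    // Milliseconds -> 24-bit 6.18 fixed point: seconds * 2^18, rounded, then
    // truncated to 24 bits (wraps every 64 s).
    uint32_t MsTo24Bits(int64_t time_ms) {
      return static_cast<uint32_t>(((time_ms << 18) + 500) / 1000) & 0x00FFFFFF;
    }
    
    int main() {
      std::printf("0x%06x\n", MsTo24Bits(1000));  // One second -> 0x040000.
      return 0;
    }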

    The forwarding step, SendPacketToNetwork(), looks like this:

    bool RtpSenderEgress::SendPacketToNetwork(const RtpPacketToSend& packet,
                                              const PacketOptions& options,
                                              const PacedPacketInfo& pacing_info) {
      int bytes_sent = -1;
      if (transport_) {
        // Forward via the transport.
        bytes_sent = transport_->SendRtp(packet.data(), packet.size(), options)
                         ? static_cast<int>(packet.size())
                         : -1;
        if (event_log_ && bytes_sent > 0) {
          event_log_->Log(std::make_unique<RtcEventRtpPacketOutgoing>(
              packet, pacing_info.probe_cluster_id));
        }
      }
    
      if (bytes_sent <= 0) {
        RTC_LOG(LS_WARNING) << "Transport failed to send packet.";
        return false;
      }
      return true;
    }
    

    It mainly calls transport_->SendRtp() to forward the packet, which in practice is WebRtcVideoChannel::SendRtp().

    2.2.6 MediaChannel

    This section covers the part marked in the figure below.

    The packet now arrives at the MediaChannel and is sent from there through SrtpTransport and IceTransport, finally going out to the network.

    bool WebRtcVideoChannel::SendRtp(const uint8_t* data,
                                     size_t len,
                                     const webrtc::PacketOptions& options) {
      rtc::CopyOnWriteBuffer packet(data, len, kMaxRtpPacketLen);
      rtc::PacketOptions rtc_options;
      rtc_options.packet_id = options.packet_id;
      if (DscpEnabled()) {
        rtc_options.dscp = PreferredDscp();
      }
      rtc_options.info_signaled_after_sent.included_in_feedback =
          options.included_in_feedback;
      rtc_options.info_signaled_after_sent.included_in_allocation =
          options.included_in_allocation;
        // Forward.
      return MediaChannel::SendPacket(&packet, rtc_options);
    }
    
      bool SendPacket(rtc::CopyOnWriteBuffer* packet,
                      const rtc::PacketOptions& options) {
        // Forward.
        return DoSendPacket(packet, false, options);
      }
    
      bool DoSendPacket(rtc::CopyOnWriteBuffer* packet,
                        bool rtcp,
                        const rtc::PacketOptions& options)
          RTC_LOCKS_EXCLUDED(network_interface_mutex_) {
        webrtc::MutexLock lock(&network_interface_mutex_);
        if (!network_interface_)
          return false;
    	
        // Dispatch based on whether this is RTP or RTCP.
        return (!rtcp) ? network_interface_->SendPacket(packet, options)
                       : network_interface_->SendRtcp(packet, options);
      }
    

    The RTP packet lands in BaseChannel::SendPacket:

    bool BaseChannel::SendPacket(rtc::CopyOnWriteBuffer* packet,
                                 const rtc::PacketOptions& options) {
      return SendPacket(false, packet, options);
    }
    
    bool BaseChannel::SendPacket(bool rtcp,
                                 rtc::CopyOnWriteBuffer* packet,
                                 const rtc::PacketOptions& options) {
      // Until all the code is migrated to use RtpPacketType instead of bool.
      RtpPacketType packet_type = rtcp ? RtpPacketType::kRtcp : RtpPacketType::kRtp;
      // SendPacket gets called from MediaEngine, on a pacer or an encoder thread.
      // If the thread is not our network thread, we will post to our network
      // so that the real work happens on our network. This avoids us having to
      // synchronize access to all the pieces of the send path, including
      // SRTP and the inner workings of the transport channels.
      // The only downside is that we can't return a proper failure code if
      // needed. Since UDP is unreliable anyway, this should be a non-issue.
      // Thread check.
      if (!network_thread_->IsCurrent()) {
        // Avoid a copy by transferring the ownership of the packet data.
        int message_id = rtcp ? MSG_SEND_RTCP_PACKET : MSG_SEND_RTP_PACKET;
        SendPacketMessageData* data = new SendPacketMessageData;
        data->packet = std::move(*packet);
        data->options = options;
        network_thread_->Post(RTC_FROM_HERE, this, message_id, data);
        return true;
      }
      RTC_DCHECK_RUN_ON(network_thread());
    
      TRACE_EVENT0("webrtc", "BaseChannel::SendPacket");
    
      // Now that we are on the correct thread, ensure we have a place to send this
      // packet before doing anything. (We might get RTCP packets that we don't
      // intend to send.) If we've negotiated RTCP mux, send RTCP over the RTP
      // transport.
      if (!rtp_transport_ || !rtp_transport_->IsWritable(rtcp)) {
        return false;
      }
    
      // Protect ourselves against crazy data.
      if (!IsValidRtpPacketSize(packet_type, packet->size())) {
        RTC_LOG(LS_ERROR) << "Dropping outgoing " << ToString() << " "
                          << RtpPacketTypeToString(packet_type)
                          << " packet: wrong size=" << packet->size();
        return false;
      }
    
      if (!srtp_active()) {
        if (srtp_required_) {
          // The audio/video engines may attempt to send RTCP packets as soon as the
          // streams are created, so don't treat this as an error for RTCP.
          // See: https://bugs.chromium.org/p/webrtc/issues/detail?id=6809
          if (rtcp) {
            return false;
          }
          // However, there shouldn't be any RTP packets sent before SRTP is set up
          // (and SetSend(true) is called).
          RTC_LOG(LS_ERROR) << "Can't send outgoing RTP packet for " << ToString()
                            << " when SRTP is inactive and crypto is required";
          RTC_NOTREACHED();
          return false;
        }
    
        std::string packet_type = rtcp ? "RTCP" : "RTP";
        RTC_DLOG(LS_WARNING) << "Sending an " << packet_type
                             << " packet without encryption for " << ToString()
                             << ".";
      }
    
      // Bon voyage.
      return rtcp ? rtp_transport_->SendRtcpPacket(packet, options, PF_SRTP_BYPASS)
                  : rtp_transport_->SendRtpPacket(packet, options, PF_SRTP_BYPASS);
    }
    
    • It first performs a thread check; if not on the network thread, the task is re-posted there.
    • It then forwards the packet with rtp_transport_->SendRtpPacket(), where rtp_transport_ is a webrtc::SrtpTransport:
    bool SrtpTransport::SendRtpPacket(rtc::CopyOnWriteBuffer* packet,
                                      const rtc::PacketOptions& options,
                                      int flags) {
      if (!IsSrtpActive()) {
        RTC_LOG(LS_ERROR)
            << "Failed to send the packet because SRTP transport is inactive.";
        return false;
      }
      rtc::PacketOptions updated_options = options;
      TRACE_EVENT0("webrtc", "SRTP Encode");
      bool res;
      uint8_t* data = packet->MutableData();
      int len = rtc::checked_cast<int>(packet->size());
    // If ENABLE_EXTERNAL_AUTH flag is on then packet authentication is not done
    // inside libsrtp for a RTP packet. A external HMAC module will be writing
    // a fake HMAC value. This is ONLY done for a RTP packet.
    // Socket layer will update rtp sendtime extension header if present in
    // packet with current time before updating the HMAC.
    #if !defined(ENABLE_EXTERNAL_AUTH)
      // Encrypt the data.
      res = ProtectRtp(data, len, static_cast<int>(packet->capacity()), &len);
    #else
      if (!IsExternalAuthActive()) {
        res = ProtectRtp(data, len, static_cast<int>(packet->capacity()), &len);
      } else {
        updated_options.packet_time_params.rtp_sendtime_extension_id =
            rtp_abs_sendtime_extn_id_;
        res = ProtectRtp(data, len, static_cast<int>(packet->capacity()), &len,
                         &updated_options.packet_time_params.srtp_packet_index);
        // If protection succeeds, let's get auth params from srtp.
        if (res) {
          uint8_t* auth_key = nullptr;
          int key_len = 0;
          res = GetRtpAuthParams(
              &auth_key, &key_len,
              &updated_options.packet_time_params.srtp_auth_tag_len);
          if (res) {
            updated_options.packet_time_params.srtp_auth_key.resize(key_len);
            updated_options.packet_time_params.srtp_auth_key.assign(
                auth_key, auth_key + key_len);
          }
        }
      }
    #endif
      if (!res) {
        int seq_num = -1;
        uint32_t ssrc = 0;
        cricket::GetRtpSeqNum(data, len, &seq_num);
        cricket::GetRtpSsrc(data, len, &ssrc);
        RTC_LOG(LS_ERROR) << "Failed to protect RTP packet: size=" << len
                          << ", seqnum=" << seq_num << ", SSRC=" << ssrc;
        return false;
      }
    
      // Update the length of the packet now that we've added the auth tag.
      packet->SetSize(len);
      return SendPacket(/*rtcp=*/false, packet, updated_options, flags);
    }
    

    Its main steps:

    • Encrypt the data with ProtectRtp().

    • Call SendPacket() to hand the packet down to RtpTransport for forwarding:

    bool RtpTransport::SendPacket(bool rtcp,
                                  rtc::CopyOnWriteBuffer* packet,
                                  const rtc::PacketOptions& options,
                                  int flags) {
      rtc::PacketTransportInternal* transport = rtcp && !rtcp_mux_enabled_
                                                    ? rtcp_packet_transport_
                                                    : rtp_packet_transport_;
      // The transport here is a DtlsTransport.
      int ret = transport->SendPacket(packet->cdata<char>(), packet->size(),
                                      options, flags);
      if (ret != static_cast<int>(packet->size())) {
        if (transport->GetError() == ENOTCONN) {
          RTC_LOG(LS_WARNING) << "Got ENOTCONN from transport.";
          SetReadyToSend(rtcp, false);
        }
        return false;
      }
      return true;
    }
    

    Since this walkthrough uses a local loopback test with DTLS disabled, the packet is sent directly through ice_transport_->SendPacket():

    // Called from upper layers to send a media packet.
    int DtlsTransport::SendPacket(const char* data,
                                  size_t size,
                                  const rtc::PacketOptions& options,
                                  int flags) {
      if (!dtls_active_) {
        // Not doing DTLS.
        return ice_transport_->SendPacket(data, size, options);// <-
      }
    
      switch (dtls_state()) {
        case DTLS_TRANSPORT_NEW:
          // Can't send data until the connection is active.
          // TODO(ekr@rtfm.com): assert here if dtls_ is NULL?
          return -1;
        case DTLS_TRANSPORT_CONNECTING:
          // Can't send data until the connection is active.
          return -1;
        case DTLS_TRANSPORT_CONNECTED:
          if (flags & PF_SRTP_BYPASS) {
            RTC_DCHECK(!srtp_ciphers_.empty());
            if (!IsRtpPacket(data, size)) {
              return -1;
            }
    
            return ice_transport_->SendPacket(data, size, options);
          } else {
            return (dtls_->WriteAll(data, size, NULL, NULL) == rtc::SR_SUCCESS)
                       ? static_cast<int>(size)
                       : -1;
          }
        case DTLS_TRANSPORT_FAILED:
          // Can't send anything when we're failed.
          RTC_LOG(LS_ERROR)
              << ToString()
              << ": Couldn't send packet due to DTLS_TRANSPORT_FAILED.";
          return -1;
        case DTLS_TRANSPORT_CLOSED:
          // Can't send anything when we're closed.
          RTC_LOG(LS_ERROR)
              << ToString()
              << ": Couldn't send packet due to DTLS_TRANSPORT_CLOSED.";
          return -1;
        default:
          RTC_NOTREACHED();
          return -1;
      }
    }
    

    Finally the packet is sent over the ICE connection; below that is the UDP layer.

    int P2PTransportChannel::SendPacket(const char* data,
                                        size_t len,
                                        const rtc::PacketOptions& options,
                                        int flags) {
      RTC_DCHECK_RUN_ON(network_thread_);
      if (flags != 0) {
        error_ = EINVAL;
        return -1;
      }
      // If we don't think the connection is working yet, return ENOTCONN
      // instead of sending a packet that will probably be dropped.
      if (!ReadyToSend(selected_connection_)) {
        error_ = ENOTCONN;
        return -1;
      }
    
      last_sent_packet_id_ = options.packet_id;
      rtc::PacketOptions modified_options(options);
      modified_options.info_signaled_after_sent.packet_type =
          rtc::PacketType::kData;
      // Send the data.
      int sent = selected_connection_->Send(data, len, modified_options);
      if (sent <= 0) {
        RTC_DCHECK(sent < 0);
        error_ = selected_connection_->GetError();
      }
      return sent;
    }
    