WebRtc VoiceEngine Code Analysis

The VoiceEngine in WebRtc covers most VoIP-related tasks, including capture, automatic gain control, noise suppression, echo cancellation, encoding/decoding and RTP transport. Below we walk through the code to trace the processing flow inside VoE.

Creating the VoiceEngine and VoEBase


[cpp]
VoiceEngine* _vePtr = VoiceEngine::Create();             // create the VoiceEngine
VoEBase* _veBasePtr = VoEBase::GetInterface(_vePtr);     // get the VoEBase sub-API; all core VoE operations go through it
_veBasePtr->Init();                                      // spins up the whole VoE processing pipeline
The key call is _veBasePtr->Init(): it creates the VoE thread, which drives capture, digital signal processing, encoding and RTP transmission.
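For orientation, here is a minimal sketch of how an application typically drives these interfaces end to end. Only the three lines above are quoted from the trace itself; the channel and teardown calls (CreateChannel, StartSend/StartPlayout, Terminate, Release, VoiceEngine::Delete) are assumed from the standard VoEBase interface of this WebRTC generation, so treat this as a sketch rather than verbatim usage:

[cpp]
#include "webrtc/voice_engine/include/voe_base.h"  // header path varies across WebRTC versions

using namespace webrtc;

VoiceEngine* ve = VoiceEngine::Create();
VoEBase* base = VoEBase::GetInterface(ve);

base->Init();                        // spins up the VoE processing machinery
int channel = base->CreateChannel(); // one channel per peer / destination

// ... configure where the channel sends to (e.g. via VoENetwork / an external transport) ...

base->StartSend(channel);            // capture -> process -> encode -> RTP
base->StartPlayout(channel);         // receive -> decode -> render

// ... call in progress ...

base->StopSend(channel);
base->StopPlayout(channel);
base->DeleteChannel(channel);
base->Terminate();

base->Release();                     // every GetInterface() must be balanced by Release()
VoiceEngine::Delete(ve);

Back inside the engine, Init() itself looks like this: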


[cpp]
int VoEBaseImpl::Init(AudioDeviceModule* external_adm, AudioProcessing* audioproc)
{
    // ...
    _shared->process_thread();            // create the VoE process thread
    _shared->process_thread()->Start();
    _shared->audio_device()->Init();
    // ...
}

On Windows, audio_device()->Init() resolves to int32_t AudioDeviceWindowsWave::Init(); other platforms have their own implementations, which all look broadly similar. Inside this Init the ThreadProcess thread is created, and that thread drives the whole audio flow, pulling audio data packets from the device.


[cpp]
bool AudioDeviceWindowsWave::ThreadProcess()
{
    // ...
    while ((nRecordedBytes = RecProc(recTime)) > 0);
    // ...
}

The actual handling happens in RecProc:


[cpp]
int32_t AudioDeviceWindowsWave::RecProc(LONGLONG& consumedTime)
{
    // ...
    _ptrAudioBuffer->DeliverRecordedData();
    // ...
}


[cpp]
int32_t AudioDeviceBuffer::DeliverRecordedData()
{
    // ...
    _ptrCbAudioTransport->RecordedDataIsAvailable();
    // ...
}


RecordedDataIsAvailable is a virtual callback (the _ptrCbAudioTransport above is an AudioTransport), and it is overridden by VoEBaseImpl, which is how the captured data re-enters VoE.
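How the device buffer got hold of that callback pointer deserves a one-line sketch. Roughly, during VoEBaseImpl::Init() VoE registers itself with the ADM; RegisterAudioCallback() is the standard AudioDeviceModule method for this, but the line below is a paraphrase of the wiring, not a verbatim quote:

[cpp]
// Sketch (paraphrased): VoEBaseImpl implements AudioTransport, and Init() hands
// it to the audio device module, which stores it as the _ptrCbAudioTransport
// used by DeliverRecordedData() above.
_shared->audio_device()->RegisterAudioCallback(this);   // 'this' == VoEBaseImpl

The override itself looks like this: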

[cpp]
int32_t VoEBaseImpl::RecordedDataIsAvailable(
        const void* audioSamples,
        uint32_t nSamples,
        uint8_t nBytesPerSample,
        uint8_t nChannels,
        uint32_t samplesPerSec,
        uint32_t totalDelayMS,
        int32_t clockDrift,
        uint32_t currentMicLevel,
        bool keyPressed,
        uint32_t& newMicLevel)
{
    // ...
    _shared->transmit_mixer()->DemuxAndMix();
    _shared->transmit_mixer()->EncodeAndSend();
    // ...
}
DemuxAndMix(), as the name suggests, demultiplexes and mixes. This function drives the whole AudioProcessing pass, including AEC, AECM, AGC and DTMF handling, and iterates over all channels:


[cpp]
TransmitMixer::DemuxAndMix()
{
    // ...
    Channel* channelPtr = sc.GetFirstChannel(iterator);
    while (channelPtr != NULL)
    {
        if (channelPtr->InputIsOnHold())
        {
            channelPtr->UpdateLocalTimeStamp();
        } else if (channelPtr->Sending())
        {
            // Demultiplex makes a copy of its input.
            channelPtr->Demultiplex(_audioFrame);
            channelPtr->PrepareEncodeAndSend(_audioFrame.sample_rate_hz_);
        }
        channelPtr = sc.GetNextChannel(iterator);
    }
}
Channel::Demultiplex() does very little concrete work: it simply copies the data in the AudioFrame into the channel itself. WebRTC is a client-side solution; a client is assumed to have only one audio source, but it can have multiple channels, and each channel runs its own audio processing, so the data has to be copied into every channel.
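A minimal sketch of what that copy amounts to (simplified, not the verbatim WebRTC code; AudioFrame::CopyFrom and the _channelId member exist in this generation of the codebase):

[cpp]
// Simplified sketch: each sending channel keeps its own private AudioFrame so
// later per-channel processing cannot disturb the shared capture buffer.
void Channel::Demultiplex(const AudioFrame& audioFrame)
{
    _audioFrame.CopyFrom(audioFrame);   // deep copy of the captured 10 ms frame
    _audioFrame.id_ = _channelId;       // tag the copy with this channel's id
}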

Next comes the actual data processing in PrepareEncodeAndSend():


[cpp]
Channel::PrepareEncodeAndSend(int mixingFrequency)
{
    if (_inputFilePlaying)
    {
        // If a file is being played as the microphone input (VoEFile), read
        // 10 ms of data from the file and overwrite the audio buffer.
        MixOrReplaceAudioWithFile(mixingFrequency);
    }

    if (_mute)
    {
        // If mute is set, the frame is simply zeroed out.
        AudioFrameOperations::Mute(_audioFrame);
    }
    if (_inputExternalMedia)
    {
        // If external media processing was registered, the user's own audio
        // processing callback is invoked here.
        _inputExternalMediaCallbackPtr->Process();
    }
    InsertInbandDtmfTone();                      // insert in-band DTMF tones
    _rtpAudioProc->ProcessStream(&_audioFrame);  // the real GIPS heavy lifting: the audio processing pass (AEC, AECM, AGC)
}
int AudioProcessingImpl::ProcessStream(AudioFrame* frame) is the implementation behind the _rtpAudioProc->ProcessStream() call above.
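For reference, the same AudioProcessing module (APM) can also be driven standalone. The following is a minimal sketch assuming the legacy component-accessor interface of this WebRTC generation (echo_cancellation(), noise_suppression(), gain_control(), ProcessStream(AudioFrame*)); newer releases replaced it with an AudioProcessing::Config, and the values below are purely illustrative:

[cpp]
#include "webrtc/modules/audio_processing/include/audio_processing.h"  // path varies by version

// Minimal standalone use of the APM, roughly what VoE wires up internally.
// Error handling omitted.
webrtc::AudioProcessing* apm = webrtc::AudioProcessing::Create(0);

apm->high_pass_filter()->Enable(true);      // remove DC offset / low-frequency rumble
apm->echo_cancellation()->Enable(true);     // AEC (AECM is the mobile variant)
apm->noise_suppression()->Enable(true);     // NS
apm->gain_control()->Enable(true);          // AGC

webrtc::AudioFrame near_end;                // one 10 ms capture frame, filled by the caller
// The far-end (render) signal must be analyzed first so the AEC has a reference:
// apm->AnalyzeReverseStream(&far_end);
apm->set_stream_delay_ms(50);               // capture+render delay estimate (illustrative)
apm->ProcessStream(&near_end);              // runs AEC/NS/AGC in place on the frame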


The above covers the DemuxAndMix() pass; it is followed by the EncodeAndSend() pass. That completes the walkthrough of the VoE data-processing flow.
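The EncodeAndSend() leg is not traced here. As a rough orientation only, in this generation of the code it follows a chain along these lines (the names below are a paraphrase of the same codebase, not a verbatim quote):

[cpp]
// Paraphrased call chain of the send leg (not verbatim source):
//
//   TransmitMixer::EncodeAndSend()
//     -> Channel::EncodeAndSend()                  // for every sending channel
//          -> AudioCodingModule::Add10MsData()     // hand the 10 ms frame to the ACM
//   ... the ACM encodes with the negotiated codec and calls back into ...
//   Channel::SendData()                            // AudioPacketizationCallback
//     -> RtpRtcp::SendOutgoingData()               // packetize as RTP and send out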

Audio processing itself is a large enough topic to deserve a separate write-up.

     
To summarize a few points:

1. VoEBase provides most of the external interfaces.

2. Channel carries most of the audio functionality.
