  • Multimedia development --- analyzing the live555 client

    The live555 client flow is: create the task scheduler object, create the usage environment object, process the user-supplied parameter (the RTSP URL), create an RTSPClient instance, send DESCRIBE, send SETUP, send PLAY, enter the event loop to receive data, and finally send TEARDOWN to close the connection.

    This can be wrapped into three interface functions: rtspOpen, rtspRead and rtspClose.
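    For orientation, here is a hedged sketch of how a caller might drive those three wrappers. Only rtspOpen() and rtspRead() appear later in this post; rtspClose() and the exact return values are assumptions, and rtsp_object_t is the wrapper's own type rather than part of live555:

    #include <string.h>

    /* Sketch only: the wrapper API as this post describes it. RTSP_SUCCESS,
     * rtsp_object_t and its rtspURL field are taken from the snippets below;
     * rtspClose() is assumed to send TEARDOWN and free the session. */
    int run_client(char *url, int use_tcp)
    {
        rtsp_object_t obj;
        memset(&obj, 0, sizeof(obj));
        obj.rtspURL = url;

        if (rtspOpen(&obj, use_tcp) != RTSP_SUCCESS)   /* OPTIONS/DESCRIBE, SETUP, PLAY */
            return -1;

        for (;;) {
            if (rtspRead(&obj) != RTSP_SUCCESS)        /* one pass through doEventLoop() */
                break;
            /* frames were handed to the StreamRead callback during rtspRead() */
        }

        rtspClose(&obj);                               /* TEARDOWN + cleanup (assumed) */
        return 0;
    }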

    First, let's walk through the rtspOpen flow.

    int rtspOpen(rtsp_object_t *p_obj, int tcpConnect)
    {
         ... ...
    TRACE1_DEC("BasicTaskScheduler::createNew !!! " ); if( ( p_sys->scheduler = BasicTaskScheduler::createNew() ) == NULL ) { TRACE1_DEC("BasicTaskScheduler::createNew failed " ); goto error; } if( !( p_sys->env = BasicUsageEnvironment::createNew(*p_sys->scheduler) ) ) { TRACE1_DEC("BasicUsageEnvironment::createNew failed "); goto error; } if( ( i_return = Connect( p_obj ) ) != RTSP_SUCCESS ) { TRACE1_DEC( "Failed to connect with %s ", p_obj->rtspURL ); goto error; } if( p_sys->p_sdp == NULL ) { TRACE1_DEC( "Failed to retrieve the RTSP Session Description " ); goto error; } if( ( i_return = SessionsSetup( p_obj ) ) != RTSP_SUCCESS ) { TRACE1_DEC( "Nothing to play for rtsp://%s ", p_obj->rtspURL ); goto error; } if( ( i_return = Play( p_obj ) ) != RTSP_SUCCESS ) goto error;      ... ... }

    1> BasicTaskScheduler::createNew()

    2> BasicUsageEnvironment::createNew()

    3> connect 

    static int Connect( rtsp_object_t *p_demux )
    {
         ... ...
        sprintf(appName, "LibRTSP%d", p_demux->id);
        if( ( p_sys->rtsp = RTSPClient::createNew( *p_sys->env, 1, appName, i_http_port ) ) == NULL )
        {
            TRACE1_DEC( "RTSPClient::createNew failed (%s) ", p_sys->env->getResultMsg() );
            i_ret = RTSP_ERROR;
            goto connect_error;
        }
        psz_options = p_sys->rtsp->sendOptionsCmd( p_demux->rtspURL, psz_user, psz_pwd );
        if( psz_options == NULL )
            TRACE1_DEC("RTSP OPTIONS command error!! ");
        delete [] psz_options;
        p_sdp = p_sys->rtsp->describeURL( p_demux->rtspURL );
         ... ...
    }

      Connect() does three things: it creates the RTSPClient instance, sends the OPTIONS request, and sends the DESCRIBE request (via describeURL()).

      sendOptionsCmd() first calls openConnectionFromURL() to establish the TCP connection, then builds and sends the request:

    OPTIONS rtsp://120.90.0.50:8552/h264_ch2 RTSP/1.0
    CSeq: 493
    User-Agent: LibRTSP4 (LIVE555 Streaming Media v2008.04.02)

      The server replies:

    RTSP/1.0 200 OK
    CSeq: 493
    Date: Mon, May 26 2014 13:27:07 GMT
    Public: OPTIONS, DESCRIBE, SETUP, TEARDOWN, PLAY, PAUSE

      describeURL() likewise calls openConnectionFromURL() first to establish the TCP connection (which shows that either the OPTIONS request or the DESCRIBE request can be sent first), then builds and sends:

    DESCRIBE rtsp://120.90.0.50:8552/h264_ch2 RTSP/1.0
    CSeq: 494
    Accept: application/sdp
    User-Agent: LibRTSP4 (LIVE555 Streaming Media v2008.04.02)

      The server replies (the client log below echoes the request before the response):

    DESCRIBE rtsp://120.90.0.50:8552/h264_ch2 RTSP/1.0
    CSeq: 494
    Accept: application/sdp
    User-Agent: LibRTSP4 (LIVE555 Streaming Media v2008.04.02)
    
    
    Received DESCRIBE response: 
    RTSP/1.0 200 OK
    CSeq: 494
    Date: Mon, May 26 2014 13:27:07 GMT
    Content-Base: rtsp://192.168.103.51:8552/h264_ch2/
    Content-Type: application/sdp
    Content-Length: 509
    
    Need to read 509 extra bytes
    Read 509 extra bytes: v=0
    o=- 1401092685794152 1 IN IP4 192.168.103.51
    s=RTSP/RTP stream from NETRA
    i=h264_ch2
    t=0 0
    a=tool:LIVE555 Streaming Media v2008.04.02
    a=type:broadcast
    a=control:*
    a=range:npt=0-
    a=x-qt-text-nam:RTSP/RTP stream from NETRA
    a=x-qt-text-inf:h264_ch2
    m=video 0 RTP/AVP 96
    c=IN IP4 0.0.0.0
    a=rtpmap:96 H264/90000
    a=fmtp:96 packetization-mode=1;profile-level-id=000042;sprop-parameter-sets=h264
    a=control:track1
    m=audio 0 RTP/AVP 96
    c=IN IP4 0.0.0.0
    a=rtpmap:96 PCMU/48000/2
    a=control:track2

    4> SessionsSetup

    static int SessionsSetup( rtsp_object_t *p_demux )
    {
         ... ... 
            //    unsigned const thresh             = 1000000;
            if( !( p_sys->ms = MediaSession::createNew( *p_sys->env, p_sys->p_sdp ) ) )
            {
                    TRACE1_DEC( "Could not create the RTSP Session: %s
    ", p_sys->env->getResultMsg() );
                    return RTSP_ERROR;
            }    
    
            /* Initialise each media subsession */
            iter = new MediaSubsessionIterator( *p_sys->ms );
            while( ( sub = iter->next() ) != NULL )
            {
                   ... ...
                    bInit = sub->initiate();
    
                    if( !bInit )
                    {
                            TRACE1_DEC( "RTP subsession '%s/%s' failed (%s)
    ",
                            sub->mediumName(), sub->codecName(), p_sys->env->getResultMsg() );
                    }
                    else
                    {
                  ... ...
                            /* Issue the SETUP */
                            if( p_sys->rtsp )
                            {
                                    if( !p_sys->rtsp->setupMediaSubsession( *sub, False, b_rtsp_tcp, False ) )
                                    {
                                            /* if we get an unsupported transport error, toggle TCP
                                            * use and try again */
                                            if( !strstr(p_sys->env->getResultMsg(),"461 Unsupported Transport")
                                                    || !p_sys->rtsp->setupMediaSubsession( *sub, False, b_rtsp_tcp, False ) )
                                            {
                                                    TRACE1_DEC( "SETUP of'%s/%s' failed %s
    ", sub->mediumName(), sub->codecName(), p_sys->env->getResultMsg() );
                                                    continue;
                                            }
                                    }
                            }
    
                   ... ...
                            /* Value taken from mplayer */
                            if( !strcmp( sub->mediumName(), "audio" ) )
                            {
                                    if( !strcmp( sub->codecName(), "MP4A-LATM" ) )
                                    {
                                           ... ...
                                    }
                                    else if( !strcmp( sub->codecName(), "PCMA" )  || !strcmp( sub->codecName(), "PCMU" ))
                                    {
                                            tk->fmt.i_extra = 0;
                                            tk->fmt.i_codec = RTSP_CODEC_PCMU;
                                    }
                            } 
                            else if( !strcmp( sub->mediumName(), "video" ) )
                            {
                                    if( !strcmp( sub->codecName(), "H264" ) )
                                    {
                                           ... ...
                                    }
                                    else if( !strcmp( sub->codecName(), "MP4V-ES" ) )
                                    {
                                            ... ...
                                    }                
                                    else if( !strcmp( sub->codecName(), "JPEG" ) )
                                    {
                                            tk->fmt.i_codec = RTSP_CODEC_MJPG;
                                    }                
                            }  
                   ... ...         
                    }
            }
         ... ...
    }

      This function does four things: it creates the MediaSession instance, creates the MediaSubsessionIterator instance, initializes each MediaSubsession, and sends the SETUP request.

      While the MediaSession instance is being created, initializeWithSDP() is called to parse the SDP: fSessionName is taken from the "s=" line, fSessionDescription from "i=", connectionEndpointName from "c=", fMediaSessionType from "a=type:", and so on. It also creates a MediaSubsession instance for each media description and appends it to the fSubsessionsHead list; judging from the SDP above, there are two MediaSubsessions, one video and one audio.

      Creating the MediaSubsessionIterator instance calls reset(), which assigns fOurSession.fSubsessionsHead, i.e. the head node of the list, to fNextPtr. The while loop in SessionsSetup() therefore runs twice, once for the video subsession and once for the audio subsession.
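      As a standalone illustration (not the post's own code), the same parse-and-iterate pattern with live555's public API looks roughly like this; the UsageEnvironment and the SDP string are assumed to exist already:

    #include "liveMedia.hh"
    #include "BasicUsageEnvironment.hh"

    // Minimal sketch: build a MediaSession from an SDP string and walk its subsessions.
    static void listSubsessions(UsageEnvironment& env, char const* sdp)
    {
        MediaSession* ms = MediaSession::createNew(env, sdp);
        if (ms == NULL) return;

        MediaSubsessionIterator iter(*ms);
        MediaSubsession* sub;
        while ((sub = iter.next()) != NULL) {
            // For the SDP captured above this prints "video/H264" then "audio/PCMU"
            env << sub->mediumName() << "/" << sub->codecName() << "\n";
        }
        Medium::close(ms);
    }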

      initiate() uses fSourceFilterAddr to decide whether the stream is SSM or ASM, and calls the corresponding Groupsock constructor to create fRTPSocket and fRTCPSocket. It then uses the protocol name fProtocolName (taken from the SDP "m=" line) to decide whether the transport is plain UDP or RTP; we only look at the RTP case. For RTP, the codec name fCodecName (from the SDP "a=rtpmap:" line) determines which fRTPSource to create; here H264 and PCMU RTPSource instances are created. Finally, an RTCPInstance, fRTCPInstance, is created.

      setupMediaSubsession() mainly sends the SETUP request. The SDP tells us RTP is being used; the streamUsingTCP parameter passed down from rtspOpen decides whether RTP should be carried over UDP or over TCP. TCP transport can only be unicast; for UDP, connectionEndpointName and the forceMulticastOnUnspecified parameter decide between multicast and unicast. Our server only supports unicast and the parameter passed in is False, so unicast is used here. The SETUP request is then built and sent:

    SETUP rtsp://192.168.103.51:8552/h264_ch2/track1 RTSP/1.0
    CSeq: 495
    Transport: RTP/AVP;unicast;client_port=33482-33483
    User-Agent: LibRTSP4 (LIVE555 Streaming Media v2008.04.02)

       The server replies:

    RTSP/1.0 200 OK
    CSeq: 495
    Date: Mon, May 26 2014 13:27:07 GMT
    Transport: RTP/AVP;unicast;destination=14.214.248.17;source=192.168.103.51;client_port=33482-33483;server_port=6970-6971
    Session: 151
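
      For comparison (illustrative, not captured from this server), a SETUP issued with streamUsingTCP set would ask for interleaved delivery over the existing RTSP connection, with a Transport header along the lines of:

    SETUP rtsp://192.168.103.51:8552/h264_ch2/track1 RTSP/1.0
    Transport: RTP/AVP/TCP;unicast;interleaved=0-1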

      Finally, if TCP transport is used, setStreamSocket() -> RTPInterface::setStreamSocket() -> addStreamSocket() adds the RTSP socket fInputSocketNum to the fTCPStreams list. For UDP transport, if no multicast address is given, the server address is saved into fDests; if there is a multicast address, the client joins the multicast group.

            ... ...
         if (streamUsingTCP) {
             // Tell the subsession to receive RTP (and send/receive RTCP)
             // over the RTSP stream:
             if (subsession.rtpSource() != NULL)
                 subsession.rtpSource()->setStreamSocket(fInputSocketNum, subsession.rtpChannelId);
             if (subsession.rtcpInstance() != NULL)
                 subsession.rtcpInstance()->setStreamSocket(fInputSocketNum, subsession.rtcpChannelId);
         } else {
             // Normal case.
             // Set the RTP and RTCP sockets' destination address and port
             // from the information in the SETUP response:
             subsession.setDestinations(fServerAddress);
         }
    ... ...

    5> play

    static int Play( rtsp_object_t *p_demux )
    {
        ... ...
        if( p_sys->rtsp )
        {    
            /* The PLAY */
            if( !p_sys->rtsp->playMediaSession( *p_sys->ms, p_sys->i_npt_start, -1, 1 ) )
            {
                TRACE1_DEC( "RTSP PLAY failed %s
    ", p_sys->env->getResultMsg() );
                return RTSP_ERROR;;
            }        
        }
        ... ...
        return RTSP_SUCCESS;
    }

      playMediaSession() simply sends the PLAY request:

    PLAY rtsp://120.90.0.50:8552/h264_ch2/ RTSP/1.0
    CSeq: 497
    Session: 151
    Range: npt=0.000-
    User-Agent: LibRTSP4 (LIVE555 Streaming Media v2008.04.02)

     The server replies:

    RTSP/1.0 200 OK
    CSeq: 497
    Date: Mon, May 26 2014 13:27:07 GMT
    Range: npt=0.000-
    Session: 151
    RTP-Info: url=rtsp://192.168.103.51:8552/h264_ch2/track1;seq=63842;rtptime=1242931431,url=rtsp://192.168.103.51:8552/h264_ch2/track2;seq=432;rtptime=3179210581

    Next, let's analyze the rtspRead flow:

    int rtspRead(rtsp_object_t *p_obj)
    { 
          ... ...
            if(p_sys != NULL)
            {
                    /* First warn we want to read data */
                    p_sys->event = 0;    
                    for( i = 0; i < p_sys->i_track; i++ )
                    {
                            live_track_t *tk = p_sys->track[i];
                            if( tk->waiting == 0 )
                            {
                                    tk->waiting = 1;
                                    tk->sub->readSource()->getNextFrame( tk->p_buffer, tk->i_buffer,
                                            StreamRead, tk, StreamClose, tk );
                            }        
                    }               
    
                    /* Create a task that will be called if we wait more than 300ms */
                    task = p_sys->scheduler->scheduleDelayedTask( 300000, TaskInterrupt, p_obj );        
    
                    /* Do the read */
                    p_sys->scheduler->doEventLoop( &p_sys->event );
    
                    /* remove the task */
                    p_sys->scheduler->unscheduleDelayedTask( task );    
    
                    ret = p_sys->b_error ? RTSP_ERROR : RTSP_SUCCESS;
            }
    
            return ret;
    }
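      rtspRead() blocks inside doEventLoop(&p_sys->event) until some task sets that watch variable to a non-zero value; both the frame callbacks and the 300 ms timeout task do exactly that. A minimal sketch of the timeout task (TaskFunc's signature is live555's; the field access through p_obj is an assumption about the wrapper's layout):

    /* Sketch only: scheduled by scheduleDelayedTask(300000, TaskInterrupt, p_obj)
     * in rtspRead() above, so the event loop cannot block forever when no
     * RTP packet arrives within 300 ms. */
    static void TaskInterrupt(void *p_private)
    {
        rtsp_object_t *p_obj = (rtsp_object_t *)p_private;

        /* Setting the watch variable makes doEventLoop(&p_sys->event) return.
         * (p_obj->p_sys is an assumed link back to the state used in rtspRead().) */
        p_obj->p_sys->event = 0xff;
    }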

      To follow this function, we first need to know where the fReadSource returned by readSource() gets assigned. That happens in the initiate() function we saw earlier:

          
           ... ...
       } else if (strcmp(fCodecName, "H264") == 0) {
           fReadSource = fRTPSource
               = H264VideoRTPSource::createNew(env(), fRTPSocket,
                                               fRTPPayloadFormat,
                                               fRTPTimestampFrequency);
       } else if (strcmp(fCodecName, "JPEG") == 0) { // motion JPEG
           ... ...
       } else if (strcmp(fCodecName, "PCMU") == 0 // PCM u-law audio
                  || strcmp(fCodecName, "GSM") == 0 // GSM audio
                  || strcmp(fCodecName, "PCMA") == 0 // PCM a-law audio
                  || strcmp(fCodecName, "L16") == 0 // 16-bit linear audio
                  || strcmp(fCodecName, "MP1S") == 0 // MPEG-1 System Stream
                  || strcmp(fCodecName, "MP2P") == 0 // MPEG-2 Program Stream
                  || strcmp(fCodecName, "L8") == 0 // 8-bit linear audio
                  || strcmp(fCodecName, "G726-16") == 0 // G.726, 16 kbps
                  || strcmp(fCodecName, "G726-24") == 0 // G.726, 24 kbps
                  || strcmp(fCodecName, "G726-32") == 0 // G.726, 32 kbps
                  || strcmp(fCodecName, "G726-40") == 0 // G.726, 40 kbps
                  || strcmp(fCodecName, "SPEEX") == 0 // SPEEX audio
                  ) {
           createSimpleRTPSource = True;
           useSpecialRTPoffset = 0;
       } else if (useSpecialRTPoffset >= 0) {
           ... ...
       }
       if (createSimpleRTPSource) {
           char* mimeType = new char[strlen(mediumName()) + strlen(codecName()) + 2];
           sprintf(mimeType, "%s/%s", mediumName(), codecName());
           fReadSource = fRTPSource
               = SimpleRTPSource::createNew(env(), fRTPSocket, fRTPPayloadFormat,
                                            fRTPTimestampFrequency, mimeType,
                                            (unsigned)useSpecialRTPoffset,
                                            doNormalMBitRule);
           delete[] mimeType;
       }
   }

        For H.264 streams, getNextFrame() is defined in FramedSource::getNextFrame():

    void FramedSource::getNextFrame(unsigned char* to, unsigned maxSize,
                    afterGettingFunc* afterGettingFunc,
                    void* afterGettingClientData,
                    onCloseFunc* onCloseFunc,
                    void* onCloseClientData) 
    {
        // Make sure we're not already being read:
        if (fIsCurrentlyAwaitingData) {
            envir() << "FramedSource[" << this << "]::getNextFrame(): attempting to read more than once at the same time!
    ";
            exit(1);
        }
    
        fTo = to;
        fMaxSize = maxSize;
        fNumTruncatedBytes = 0; // by default; could be changed by doGetNextFrame()
        fDurationInMicroseconds = 0; // by default; could be changed by doGetNextFrame()
        fAfterGettingFunc = afterGettingFunc;
        fAfterGettingClientData = afterGettingClientData;
        fOnCloseFunc = onCloseFunc;
        fOnCloseClientData = onCloseClientData;
        fIsCurrentlyAwaitingData = True;
    
        doGetNextFrame();
    }

      doGetNextFrame() is defined in MultiFramedRTPSource::doGetNextFrame():

    void MultiFramedRTPSource::doGetNextFrame() 
    {
        if (!fAreDoingNetworkReads) {
            // Turn on background read handling of incoming packets:
            fAreDoingNetworkReads = True;
            TaskScheduler::BackgroundHandlerProc* handler
                = (TaskScheduler::BackgroundHandlerProc*)&networkReadHandler;
            fRTPInterface.startNetworkReading(handler);
        }
    
        fSavedTo = fTo;
        fSavedMaxSize = fMaxSize;
        fFrameSize = 0; // for now
        fNeedDelivery = True;
        
        doGetNextFrame1();
    }

      doGetNextFrame1() is defined in MultiFramedRTPSource::doGetNextFrame1():

    void MultiFramedRTPSource::doGetNextFrame1() 
    {
        while (fNeedDelivery) {
            // If we already have packet data available, then deliver it now.
            Boolean packetLossPrecededThis;
            BufferedPacket* nextPacket = fReorderingBuffer->getNextCompletedPacket(packetLossPrecededThis);
            if (nextPacket == NULL) break;
    
            fNeedDelivery = False;
    
            if (nextPacket->useCount() == 0) {
                // Before using the packet, check whether it has a special header
                // that needs to be processed:
                unsigned specialHeaderSize;
                if (!processSpecialHeader(nextPacket, specialHeaderSize)) {
                    // Something's wrong with the header; reject the packet:
                    fReorderingBuffer->releaseUsedPacket(nextPacket);
                    fNeedDelivery = True;
                    break;
                }
                nextPacket->skip(specialHeaderSize);
            }
    
            // Check whether we're part of a multi-packet frame, and whether
            // there was packet loss that would render this packet unusable:
            if (fCurrentPacketBeginsFrame) {
                if (packetLossPrecededThis || fPacketLossInFragmentedFrame) {
                    // We didn't get all of the previous frame.
                    // Forget any data that we used from it:
                    fTo = fSavedTo; fMaxSize = fSavedMaxSize;
                    fFrameSize = 0;
                }
                fPacketLossInFragmentedFrame = False;
            } else if (packetLossPrecededThis) {
                // We're in a multi-packet frame, with preceding packet loss
                fPacketLossInFragmentedFrame = True;
            }
            if (fPacketLossInFragmentedFrame) {
                // This packet is unusable; reject it:
                fReorderingBuffer->releaseUsedPacket(nextPacket);
                fNeedDelivery = True;
                break;
            }
    
            // The packet is usable. Deliver all or part of it to our caller:
            unsigned frameSize;
            nextPacket->use(fTo, fMaxSize, frameSize, fNumTruncatedBytes,
                            fCurPacketRTPSeqNum, fCurPacketRTPTimestamp,
                            fPresentationTime, fCurPacketHasBeenSynchronizedUsingRTCP,
                            fCurPacketMarkerBit);
            fFrameSize += frameSize;
    
            if (!nextPacket->hasUsableData()) {
                // We're completely done with this packet now
                fReorderingBuffer->releaseUsedPacket(nextPacket);
            }
    
            if (fCurrentPacketCompletesFrame || fNumTruncatedBytes > 0) {
                // We have all the data that the client wants.
                if (fNumTruncatedBytes > 0) {
                    envir() << "MultiFramedRTPSource::doGetNextFrame1(): The total received frame size exceeds the client's buffer size ("
                           << fSavedMaxSize << ").  "<< fNumTruncatedBytes << " bytes of trailing data will be dropped!
    ";
                }
                // Call our own 'after getting' function, so that the downstream object can consume the data:
                if (fReorderingBuffer->isEmpty()) {
                    // Common case optimization: There are no more queued incoming packets, so this code will not get
                    // executed again without having first returned to the event loop.  Call our 'after getting' function
                    // directly, because there's no risk of a long chain of recursion (and thus stack overflow):
                    afterGetting(this);
                } else {
                    // Special case: Call our 'after getting' function via the event loop.
                    nextTask() = envir().taskScheduler().scheduleDelayedTask(0,  (TaskFunc*)FramedSource::afterGetting, this);
                }
            } else {
                // This packet contained fragmented data, and does not complete
                // the data that the client wants.  Keep getting data:
                fTo += frameSize; fMaxSize -= frameSize;
                fNeedDelivery = True;
            }
        }
    }

       FramedSource::afterGetting(FramedSource* source) :

    void FramedSource::afterGetting(FramedSource* source) 
    {
        source->fIsCurrentlyAwaitingData = False;
        // indicates that we can be read again
        // Note that this needs to be done here, in case the "fAfterFunc"
        // called below tries to read another frame (which it usually will)
    
        if (source->fAfterGettingFunc != NULL) {
            (*(source->fAfterGettingFunc))(source->fAfterGettingClientData,
                                       source->fFrameSize, 
                                       source->fNumTruncatedBytes,
                                       source->fPresentationTime,
                                       source->fDurationInMicroseconds);
        }
    }

      The fAfterGettingFunc pointer is assigned from the afterGettingFunc argument in FramedSource::getNextFrame(), and that argument is the StreamRead() callback that rtspRead() passed when it called getNextFrame(). That is how one frame of data is delivered.
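      Concretely, that means StreamRead() has to match live555's afterGettingFunc signature. A hedged sketch, with the live_track_t fields borrowed from the rtspRead() snippet and the rest assumed:

    /* Sketch only: invoked by FramedSource::afterGetting() after getNextFrame()
     * has copied one frame into tk->p_buffer. */
    static void StreamRead(void *p_private, unsigned int i_size,
                           unsigned int i_truncated_bytes,
                           struct timeval pts, unsigned int duration)
    {
        live_track_t *tk = (live_track_t *)p_private;

        /* i_size bytes of one frame now sit in tk->p_buffer: hand them to the
         * decoder/consumer here, then allow this track to be scheduled again
         * and wake up doEventLoop() via the watch variable. */
        tk->waiting = 0;
        tk->p_sys->event = 0xff;   /* assumed back-pointer, mirroring TaskInterrupt() */
    }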

         In MultiFramedRTPSource::doGetNextFrame() we also came across fRTPInterface.startNetworkReading(handler). What does this function do?

    void RTPInterface::startNetworkReading(TaskScheduler::BackgroundHandlerProc* handlerProc) 
    {
        // Normal case: Arrange to read UDP packets:
        envir().taskScheduler().turnOnBackgroundReadHandling(fGS->socketNum(), handlerProc, fOwner);
    
        // Also, receive RTP over TCP, on each of our TCP connections:
        fReadHandlerProc = handlerProc;
        for (tcpStreamRecord* streams = fTCPStreams; streams != NULL; streams = streams->fNext) {
            // Get a socket descriptor for "streams->fStreamSocketNum":
            SocketDescriptor* socketDescriptor = lookupSocketDescriptor(envir(), streams->fStreamSocketNum);
            if (socketDescriptor == NULL) {
                socketDescriptor = new SocketDescriptor(envir(), streams->fStreamSocketNum);
                socketHashTable(envir())->Add((char const*)(long)(streams->fStreamSocketNum), socketDescriptor);
            }
    
            // Tell it about our subChannel:
            socketDescriptor->registerRTPInterface(streams->fStreamChannelId, this);
        }
    }

      It does two things: it registers the UDP socket read handler MultiFramedRTPSource::networkReadHandler() with the task scheduler, and it registers the TCP socket read handler SocketDescriptor::tcpReadHandler() for each TCP connection; the TCP path also ends up calling MultiFramedRTPSource::networkReadHandler() to obtain a frame.


    http://www.cnblogs.com/cslunatic/p/3769859.html
