  • Recording an RTSP stream into an MP4 file

    The recording program needs another new feature: analog TV. The board sends out an RTSP stream (H.264 video + A-law (PCMA) audio).

    Since I had previously recorded an RTP stream (H.264 video + AAC audio) into an MP4 file (see http://www.cnblogs.com/chutianyao/archive/2012/04/13/2446140.html), the natural decision was to mux this stream into an MP4 file as well.

    There are some differences, though:

    (1) The RTSP protocol has to be parsed. After reading RFC 2326, it turned out not to be very complicated.

      RTSP separates the control stream from the data streams. Over the control stream the client sends commands to the server (querying program information, starting and stopping playback, etc.), normally over TCP. Over the data streams the server sends the audio and video data to the addresses and ports the client specified; in our case audio and video go to two different ports over UDP. RTSP itself does not mandate TCP or UDP for the media data, so choose whichever fits the actual situation.

      The control stream uses an HTTP-like text format, which is simple and convenient to debug. RTSP does not actually use HTTP, but its messages are generally written in this HTTP style.
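
      For illustration, this is roughly what an OPTIONS exchange looks like in this text format (URL and header values here are made up; each line is terminated by \r\n and a blank line ends the message, just as in HTTP):

        OPTIONS rtsp://192.168.1.100/stream RTSP/1.0
        CSeq: 1
        User-Agent: Recorder

        RTSP/1.0 200 OK
        CSeq: 1
        Public: OPTIONS, DESCRIBE, SETUP, PLAY, TEARDOWN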

      The rough steps are:

      1. The client connects to the RTSP server and sends the OPTIONS method; the server returns the methods it supports, usually DESCRIBE, SETUP, PLAY, TEARDOWN, etc. Since the RTSP server on the board is also implemented by us and is guaranteed to support these methods, the client does not bother to check them;

      2. The client sends the DESCRIBE method; the server returns information about the RTSP stream, including the number of video and audio streams and parameters such as bitrate and resolution (an illustrative SDP body is sketched below);
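
      For a stream like ours (one PCMA audio track, one H.264 video track) the DESCRIBE response body is an SDP description that might look roughly like the following. Addresses and control names are hypothetical; the a=control value is presumably what ends up in m_pMedia->p_control used by Set_Setup() below, and the payload types 96 and 8 match the values (0x60 and 0x8) checked in DecodeRtp() further down:

        v=0
        o=- 0 0 IN IP4 192.168.1.100
        s=ATV
        c=IN IP4 0.0.0.0
        t=0 0
        m=video 0 RTP/AVP 96
        a=rtpmap:96 H264/90000
        a=control:track1
        m=audio 0 RTP/AVP 8
        a=rtpmap:8 PCMA/8000
        a=control:track2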

      3. Based on the returned parameters, the client decides which video and audio streams to play and sends the SETUP method;

       Our RTSP stream consists of one A-law audio track and one H.264 video track, and we have to tell the server which ports the audio and video data should be sent to. The SETUP request is built by the following code:

    int RTSP::Set_Setup()
    {
        int nRet = -1;
        int m_nIndex = 0;

        if (m_pBuf != NULL)
        {
    //        if (m_pContentBase == NULL)
    //        {
    //            sprintf(m_pBuf, "SETUP %s/%s %s\r\n", m_strUrl.c_str(), m_pMedia->p_control, RTSP_VERSSION);
    //        }
    //        else
    //        {
    //            sprintf(m_pBuf, "SETUP %s%s %s\r\n", m_pContentBase, m_pMedia->p_control, RTSP_VERSSION);
    //        }
    //        printf("m_pContentBase:%s\n", m_pContentBase);
    //        printf("m_strUrl:%s\n", m_strUrl.c_str());
    //        printf("m_pMedia->p_control:%s\n", m_pMedia->p_control);
    //        printf("m_pBuf:%s\n", m_pBuf);
            // request line: SETUP <control URL> RTSP/1.0
            sprintf(m_pBuf, "SETUP %s %s\r\n", m_pMedia->p_control, RTSP_VERSSION);

            m_nIndex = strlen(m_pBuf);
            sprintf(m_pBuf + m_nIndex, "CSeq: %d\r\n", m_nSeqNum);
            m_nIndex = strlen(m_pBuf);

            // the Transport header tells the server which UDP ports to send the RTP/RTCP data to
            if (m_pMedia->i_media_type == VIDEO)
            {
                GetVideoPort();
                sprintf(m_pBuf + m_nIndex, "Transport: %s;%s;client_port=%d-%d\r\n", "RTP/AVP", "unicast", m_nVideoPort, m_nVideoPort + 1);
                m_nIndex = strlen(m_pBuf);
            }
            else if (m_pMedia->i_media_type == AUDIO)
            {
                GetAudioPort();
                sprintf(m_pBuf + m_nIndex, "Transport: %s;%s;client_port=%d-%d\r\n", "RTP/AVP", "unicast", m_nAudioPort, m_nAudioPort + 1);
                m_nIndex = strlen(m_pBuf);
            }

            // once the server has assigned a session id (i.e. for the second track's SETUP), it must be carried along
            if (m_pSession[0] != 0)
            {
                sprintf(m_pBuf + m_nIndex, "Session: %s\r\n", m_pSession);
                m_nIndex = strlen(m_pBuf);
            }

            sprintf(m_pBuf + m_nIndex, "User-Agent: %s\r\n", USER_AGENT_STR);
            m_nIndex = strlen(m_pBuf);
            sprintf(m_pBuf + m_nIndex, "\r\n");
            m_nIndex = strlen(m_pBuf);
            m_nBufSize = m_nIndex;

            nRet = 0;
        }

        return nRet;
    }
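
      For reference, with made-up port and session values (and assuming m_pMedia->p_control holds the full control URL and USER_AGENT_STR is, say, "Recorder"), the video-track request assembled by Set_Setup() and a typical reply would look like this; the reply carries the server_port pair and the session id that later requests must reuse:

        SETUP rtsp://192.168.1.100/stream/track1 RTSP/1.0
        CSeq: 2
        Transport: RTP/AVP;unicast;client_port=50000-50001
        User-Agent: Recorder

        RTSP/1.0 200 OK
        CSeq: 2
        Transport: RTP/AVP;unicast;client_port=50000-50001;server_port=6970-6971
        Session: 12345678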

      4. Once SETUP succeeds, playback is started with the PLAY command:

    int RTSP::Set_Play()
    {
        int nRet = -1;
        int m_nIndex = 0;

        if (m_pBuf != NULL)
        {
            sprintf(m_pBuf, "PLAY %s %s\r\n", m_strUrl.c_str(), RTSP_VERSSION);
            m_nIndex = strlen(m_pBuf);
            sprintf(m_pBuf + m_nIndex, "CSeq: %d\r\n", m_nSeqNum);
            m_nIndex = strlen(m_pBuf);
            sprintf(m_pBuf + m_nIndex, "Session: %s\r\n", m_pSession);
            m_nIndex = strlen(m_pBuf);
            sprintf(m_pBuf + m_nIndex, "Range: npt=0.000-\r\n");
            m_nIndex = strlen(m_pBuf);
            sprintf(m_pBuf + m_nIndex, "User-Agent: %s\r\n", USER_AGENT_STR);
            m_nIndex = strlen(m_pBuf);
            sprintf(m_pBuf + m_nIndex, "\r\n");
            m_nIndex = strlen(m_pBuf);
            m_nBufSize = m_nIndex;

            nRet = 0;
        }

        return nRet;
    }
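
      With the same hypothetical values, the request built by Set_Play() would look like the following; Range: npt=0.000- asks the server to play from the beginning:

        PLAY rtsp://192.168.1.100/stream RTSP/1.0
        CSeq: 4
        Session: 12345678
        Range: npt=0.000-
        User-Agent: Recorder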

      With that done, we can receive the UDP audio and video data on the ports we specified above.
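
      The receiving side is not shown in this post. A minimal sketch of binding a UDP socket on the client_port announced in SETUP and pulling RTP packets off it might look like the code below (assuming POSIX sockets; OpenRtpSocket and ReadRtpPacket are hypothetical names, and the real code prefixes each packet with an 'A'/'V' byte before handing it to COutputATV::DecodeRtp()):

    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <cstring>

    // Bind a UDP socket on the RTP port we advertised in the SETUP Transport header
    // (e.g. m_nVideoPort); returns the fd, or -1 on error.
    static int OpenRtpSocket(unsigned short port)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0)
            return -1;

        sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(port);

        if (bind(fd, (sockaddr*)&addr, sizeof(addr)) < 0)
        {
            close(fd);
            return -1;
        }
        return fd;
    }

    // Read one datagram; each datagram carries exactly one RTP packet.
    static int ReadRtpPacket(int fd, unsigned char* buf, int bufsize)
    {
        return (int)recv(fd, buf, bufsize, 0);
    }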

     For more detail, refer to the RTSP protocol implementation itself.

    (2) Muxing into MP4.

    We already know the audio and video formats: A-law (PCMA) and H.264. Checking the documentation shows that mp4v2 happens to support both, so the rest is straightforward:

    bool COutputATV::CreateMp4File(string filename)
    {
        m_Mp4File = MP4CreateEx(filename.c_str());
        if (m_Mp4File == MP4_INVALID_FILE_HANDLE)
        {
            return false;
        }

        MP4SetTimeScale(m_Mp4File, 90000);
        m_nVideoTrack = MP4AddH264VideoTrack(m_Mp4File,
                                             90000, // timescale
                                             3214,  // sample duration: nominally (90000 / 25); why 3214? see the NOTICE below
                                             320,   // width
                                             240,   // height
                                             0x64,  // sps[1] AVCProfileIndication
                                             0x00,  // sps[2] profile_compat
                                             0x1f,  // sps[3] AVCLevelIndication
                                             3);    // 4 bytes length before each NAL unit
        if (m_nVideoTrack == MP4_INVALID_TRACK_ID)
        {
            LOG(LOG_TYPE_ERROR, "CreateMp4File():MP4AddH264VideoTrack() failed.");
            return false;
        }
        MP4SetVideoProfileLevel(m_Mp4File, 0x7F);

        m_nAudioTrack = MP4AddALawAudioTrack(m_Mp4File,
                                             8000,  // timescale
                                             500);  // sampleDuration
        /* NOTICE:
         * In the standard release of the mp4v2 library (v1.9.1, and trunk-r479), MP4AddALawAudioTrack() does not take the 3rd param
         * 'sampleDuration'; it calculates a fixed duration value with the following formula:
         *                      uint32_t fixedSampleDuration = (timeScale * 20)/1000; // 20mSec/Sample
         * Please read the source code of MP4AddALawAudioTrack().
         * They can do it this way because RFC 3551 defines PCMA (A-law) as 20 msec per sample, so the duration is a fixed value; see RFC 3551:
         * http://www.ietf.org/rfc/rfc3551.txt
         * But the source boards we use do not follow the RFC specification: we found the sample duration value is 500.
         * (Why is the param 500? Every RTP packet contains a timestamp; the duration is the difference between two samples (not RTP packets),
         * the same as for h264 tracks in RTP.) SO:
         * I modified the declaration of MP4AddALawAudioTrack(), adding the 3rd param 'sampleDuration' to pass the actual duration value, and I
         * also modified the implementation of MP4AddALawAudioTrack().
         *
         * As a result:
         * ***************************               IMPORTANT                ***************************
         * when distributing the Record software, you MUST use the mp4v2 library distributed with it,
         * please DO NOT use the standard release downloaded from the network!
         * ***********************************************************************************
         *
         * We use the default duration value when creating the mp4 file; we modify it later, with the actual value, when the first two samples
         * are written.
         *
         * Added by: Zhengfeng Rao.
         * 2012-05-08
         */

        if (m_nAudioTrack == MP4_INVALID_TRACK_ID)
        {
            LOG(LOG_TYPE_ERROR, "CreateMp4File():MP4AddALawAudioTrack() failed.");
            return false;
        }

        // force the channel count to 1 (mono)
        MP4SetTrackIntegerProperty(m_Mp4File,
                                   m_nAudioTrack,
                                   "mdia.minf.stbl.stsd.alaw.channels",
                                   1);

        MP4SetAudioProfileLevel(m_Mp4File, 0x02);

        return true;
    }
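
    About the 3214 video sample duration above: with a 90000 Hz timescale, the nominal duration for 25 fps would be 90000 / 25 = 3600 ticks, while 3214 ticks correspond to roughly 90000 / 3214 ≈ 28 frames per second. Like the audio value of 500 discussed in the NOTICE, it is presumably the difference between the RTP timestamps of two consecutive frames as actually produced by the board, rather than the nominal frame interval.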

    Writing the audio and video samples:

    void COutputATV::DecodeRtp(unsigned char *pbuf, int datalength)
    {
        if((pbuf == NULL) || (datalength <= 0))
        {
            return;
        }

        rtp_header_t rtp_header;
        char cType = pbuf[0];

        //the 1st byte indicates whether the node is audio or video; it was added by the input thread, so remove it here.
        pbuf += 1;
        datalength -= 1;
        int i_header_size = GetRtpHeader(&rtp_header, pbuf, datalength);

        if(i_header_size <= 0)
        {
            LOG(LOG_TYPE_ERROR, "COutputATV::DecodeRtp() Invalid header size:%d", i_header_size);
            return;
        }

        if(cType == 'A')
        {
            if (rtp_header.i_pt == 0x8)//AUDIO: payload type 8 = PCMA (A-law)
            {
                int i_size = datalength - i_header_size;
                if (m_nAudioTimeStamp == 0)
                {
                    m_nAudioTimeStamp = rtp_header.i_timestamp;
                }

                if (m_nAudioTimeStamp != rtp_header.i_timestamp)//timestamp changed: the buffered packets form a complete frame
                {
                    MP4WriteSample(m_Mp4File, m_nAudioTrack, m_pAudioFrame, m_nAudioFrameIndex);
                    m_nAudioFrameIndex = 0;

                    m_nAudioTimeStamp = rtp_header.i_timestamp;
                    memcpy(m_pAudioFrame + m_nAudioFrameIndex, pbuf + i_header_size, i_size);
                    m_nAudioFrameIndex += i_size;
                }
                else
                {
                    memcpy(m_pAudioFrame + m_nAudioFrameIndex, pbuf + i_header_size, i_size);
                    m_nAudioFrameIndex += i_size;
                }
            }
            else
            {
                //INVALID packet.
            }
        }
        else if(cType == 'V')
        {
            if (rtp_header.i_pt == 0x60)// VIDEO: dynamic payload type 96 = H.264
            {
                char p_save_buf[4096] = {0};
                int i_size = RtpToH264(pbuf, datalength, p_save_buf, &m_nNaluOkFlag, &m_nLastPktNum);
                if(i_size <= 0)
                {
                    DumpFrame(pbuf, datalength);
                    LOG_PERIOD(LOG_TYPE_WARN, "RtpToH264() Illegal packet, ignored. datalength = %d, i_size = %d", datalength-1, i_size);
                    return;
                }

                if (m_nVideoTimeStamp == 0)
                {
                    m_nVideoTimeStamp = rtp_header.i_timestamp;

                    m_nVideoFrameIndex = 0;
                    memcpy(m_pVideoFrame + m_nVideoFrameIndex, p_save_buf, i_size);
                    m_nVideoFrameIndex += i_size;
                }

                if (m_nVideoTimeStamp != rtp_header.i_timestamp || p_save_buf[12] == 0x78)
                {
                    if (m_nVideoFrameIndex >= 4)
                    {
                        //patch the 4-byte length field in front of the NAL unit (MP4 samples are length-prefixed, not start-code-prefixed)
                        unsigned int* p = (unsigned int*) (&m_pVideoFrame[0]);
                        *p = htonl(m_nVideoFrameIndex - 4);

                        MP4WriteSample(m_Mp4File, m_nVideoTrack, m_pVideoFrame, m_nVideoFrameIndex, MP4_INVALID_DURATION, 0, 1);
                        //DumpFrame(m_pVideoFrame, m_nVideoFrameIndex);
                    }

                    m_nVideoFrameIndex = 0;
                    m_nVideoTimeStamp = rtp_header.i_timestamp;
                    memcpy(m_pVideoFrame + m_nVideoFrameIndex, p_save_buf, i_size);
                    m_nVideoFrameIndex += i_size;
                }
                else
                {
                    //printf("2.3.3*************i_size:%d, m_nVideoFrameIndex:%d\n", i_size, m_nVideoFrameIndex);
                    memcpy(m_pVideoFrame + m_nVideoFrameIndex, p_save_buf, i_size);
                    m_nVideoFrameIndex += i_size;
                }
            }
            else
            {
                //INVALID packet.
            }
        }
        else
        {
            //INVALID packet.
        }
    }
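
    GetRtpHeader() and rtp_header_t are not shown in the post. A minimal sketch of parsing the RFC 3550 fixed RTP header into the two fields used above (i_pt and i_timestamp) might look like the following; it ignores header extensions, and the real struct presumably has more fields:

    #include <cstddef>

    // Only the fields DecodeRtp() actually uses; the real rtp_header_t may carry more.
    struct rtp_header_t
    {
        unsigned char i_pt;         // payload type (8 = PCMA, 96 = H.264 here)
        unsigned int  i_timestamp;  // RTP timestamp
    };

    // Parse the 12-byte RFC 3550 fixed header; returns the header size
    // (12 + 4 * CSRC count), or -1 if the packet is too short or not RTP version 2.
    static int GetRtpHeader(rtp_header_t* h, const unsigned char* p, int len)
    {
        if (h == NULL || p == NULL || len < 12)
            return -1;

        if (((p[0] >> 6) & 0x3) != 2)   // version must be 2
            return -1;

        int cc = p[0] & 0x0F;           // CSRC count
        h->i_pt = p[1] & 0x7F;          // strip the marker bit
        h->i_timestamp = ((unsigned int)p[4] << 24) | ((unsigned int)p[5] << 16) |
                         ((unsigned int)p[6] << 8)  |  (unsigned int)p[7];

        int header_size = 12 + 4 * cc;
        return (header_size <= len) ? header_size : -1;
    }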

    One thing worth pointing out:

    When the A-law audio track is added via MP4AddALawAudioTrack(mp4file, timescale, sampleDuration), the third parameter, sampleDuration, only exists because I modified the libmp4v2 library myself.

    In stock libmp4v2 the interface is MP4AddALawAudioTrack(mp4file, timescale), and the sample duration is computed with the following formula:

    uint32_t fixedSampleDuration = (timeScale * 20)/1000; // 20mSec/Sample

    The value computed this way (8000 × 20 / 1000 = 160 for our 8000 Hz timescale) does not match what the board actually sends, where the measured duration is 500, so I added this third parameter so that the sample duration can be specified explicitly.
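
    Based on the call in CreateMp4File() above, the modified declaration presumably looks something like the sketch below (the stock prototype is shown for comparison; the exact parameter types follow the mp4v2 public headers):

    // stock mp4v2 (v1.9.1):
    MP4TrackId MP4AddALawAudioTrack(MP4FileHandle hFile, uint32_t timeScale);

    // modified version with an explicit sample duration:
    MP4TrackId MP4AddALawAudioTrack(MP4FileHandle hFile, uint32_t timeScale, MP4Duration sampleDuration);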

     

     
