  • Kurento: pushing RTP to RTMP with GStreamer

    RTP-to-RTMP code based on GStreamer

    FLV does not support 48000 Hz audio; it supports 44.1 kHz. FLV also does not support the Opus audio codec.
     

    1. Push a stream to RTP with ffmpeg.

    Using the sample FLV file that ships with SRS:

    ffmpeg -re -stream_loop -1 -i ./doc/source.200kbps.768x320.flv -an -vcodec h264 -f rtp rtp://127.0.0.1:5004 -vn -acodec libopus -f rtp rtp://127.0.0.1:5003

    After the command runs, the SDP description can be extracted from its output (note the two ports, and payload format 96 = H264):

    SDP:
    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=No Name
    t=0 0
    a=tool:libavformat 57.83.100
    m=video 5004 RTP/AVP 96
    c=IN IP4 127.0.0.1
    a=rtpmap:96 H264/90000
    a=fmtp:96 packetization-mode=1
    m=audio 5003 RTP/AVP 97
    c=IN IP4 127.0.0.1
    b=AS:96
    a=rtpmap:97 opus/48000/2
    a=fmtp:97 sprop-stereo=1

    This SDP is what gets handed to Kurento's RtpEndpoint.processOffer(sdp); in other words, the SDP describes the output format of the stream (the sink). The FLV source carries AAC audio, which has to be transcoded to Opus.

    Publishing with AAC audio instead

    ffmpeg -re -stream_loop -1 -i audio_opus.mp4 -an -vcodec h264 -f rtp rtp://127.0.0.1:59000 -vn -acodec aac -f rtp rtp://127.0.0.1:49000

    The resulting SDP:

    v=0
    o=- 0 0 IN IP4 127.0.0.1
    s=No Name
    t=0 0
    a=tool:libavformat 57.83.100
    m=video 59000 RTP/AVP 96
    c=IN IP4 127.0.0.1
    a=rtpmap:96 H264/90000
    a=fmtp:96 packetization-mode=1
    m=audio 49000 RTP/AVP 97
    c=IN IP4 127.0.0.1
    b=AS:128
    a=rtpmap:97 MPEG4-GENERIC/48000/2
    a=fmtp:97 profile-level-id=1;mode=AAC-hbr;sizelength=13;indexlength=3;indexdeltalength=3; config=119056E500

    Here the audio output of the stream is AAC.

    2. Push the RTP stream to RTMP (rtmp://192.168.16.133/live/[key])

    2.1 Push a file straight to RTMP with ffmpeg

    ffmpeg -re -i doc/source.200kbps.768x320.flv -c copy \
        -f flv -y rtmp://192.168.16.133/live/livestream

    ffmpeg -re -stream_loop -1 -i audio_opus.mp4  -vcodec copy -acodec aac  -f flv -y rtmp://192.168.16.133/live/55000

    2.2 Push RTP to RTMP with ffmpeg, driven by the SDP file:

    ffmpeg \
    -protocol_whitelist "file,udp,rtp" \
    -i 127.0.0.1_55000.sdp \
    -vcodec copy \
    -acodec copy \
    -f flv \
    rtmp://192.168.16.133:1935/live/55000

    2.3 Receive the RTP ports directly and publish: we use GStreamer to push to the RTMP server:

    gst-launch-1.5 -em \
      rtpbin name=rtpbin latency=5 \
      udpsrc port=5003 caps="application/x-rtp,media=(string)audio,clock-rate=(int)48000,encoding-name=(string)OPUS" ! rtpbin.recv_rtp_sink_0 \
        rtpbin. ! rtpopusdepay ! opusdec ! audioconvert ! audioresample ! voaacenc bitrate=48000 ! aacparse ! mux. \
      udpsrc port=5004 caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264" ! rtpbin.recv_rtp_sink_1 \
        rtpbin. ! rtph264depay ! h264parse ! mux. \
      flvmux name=mux streamable=true ! rtmpsink sync=false location=rtmp://192.168.16.133/live/55000

    The port and caps values all come from the SDP of the published stream above. (Using avenc_aac for the audio instead of voaacenc makes playback freeze after a single frame!)

    Miscellaneous:

    Pushing a camera to RTMP

    gst-launch-1.0 -v v4l2src ! 'video/x-raw, width=640, height=480, framerate=30/1' \
    ! queue ! videoconvert ! omxh264enc ! h264parse ! flvmux ! rtmpsink location='rtmp://{MY_IP}/rtmp/live'

    Fetching the FLV stream:

    gst-launch-1.0 rtmpsrc location='rtmp://{MY_IP}/rtmp/live' ! filesink location='rtmpsrca.flv'

    Reference: https://stackoverflow.com/questions/38495163/rtmp-streaming-via-gstreamer-1-0-appsrc-to-rtmpsink


    Installation: the newest version (of Kurento's GStreamer fork) is 1.5; Kurento's omni build all installs it automatically.
    apt-get install -y \
      gstreamer1.5-libav \
      gstreamer1.5-plugins-bad \
      gstreamer1.5-plugins-base \
      gstreamer1.5-plugins-good \
      gstreamer1.5-tools
    API docs:
    tutorial: https://gstreamer.freedesktop.org/documentation/tutorials/basic/hello-world.html?gi-language=c
    https://gstreamer.freedesktop.org/documentation/libav/avenc_aac.html?gi-language=c

    GStreamer C code implementing the RTP-to-RTMP push:

    #include <string.h>
    #include <math.h>
     
    #include <gst/gst.h>
     
    #define VIDEO_CAPS "application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264"
    #define AUDIO_CAPS "application/x-rtp,media=(string)audio,clock-rate=(int)48000,encoding-name=(string)OPUS"
     
    /* will be called when rtpbin has validated a payload that we can depayload */
    static void
    pad_added_cb(GstElement *rtpbin, GstPad *new_pad, GstElement *depay)
    {
        char *pad_name = GST_PAD_NAME(new_pad);
        char *depay_name = gst_element_get_name(depay);
        if (strstr(pad_name, "recv_rtp_src_0_") && strstr(depay_name, "audiodepay"))
        {
            GstPad *sinkpad;
            GstPadLinkReturn lres;
     
            g_print("new payload on rtpbin: %s %s %s\n",
                    gst_element_get_name(rtpbin), GST_PAD_NAME(new_pad), gst_element_get_name(depay));
     
            sinkpad = gst_element_get_static_pad(depay, "sink");
            g_assert(sinkpad);
     
            lres = gst_pad_link(new_pad, sinkpad);
            g_assert(lres == GST_PAD_LINK_OK);
            gst_object_unref(sinkpad);
        }
        else if (strstr(pad_name, "recv_rtp_src_1_") && strstr(depay_name, "videodepay"))
        {
            GstPad *sinkpad;
            GstPadLinkReturn lres;
     
            g_print("new payload on rtpbin: %s %s %s\n",
                    gst_element_get_name(rtpbin), GST_PAD_NAME(new_pad), gst_element_get_name(depay));
     
            sinkpad = gst_element_get_static_pad(depay, "sink");
            g_assert(sinkpad);
     
            lres = gst_pad_link(new_pad, sinkpad);
            g_assert(lres == GST_PAD_LINK_OK);
            gst_object_unref(sinkpad);
        }
    }
     
    int main(int argc, char *argv[])
    {
        GMainLoop *loop;
        GstElement *pipeline;
     
        GstElement *rtpbin;
        GstElement *audiosrc, *audiodepay, *audiodec, *audiores, *audioconv, *audiosink;
        GstElement *videosrc, *videodepay, *videosink;
        GstElement *flvmux, *rtmpsink;
     
        gboolean res;
        GstCaps *caps;
        GstPadLinkReturn lres;
        GstPad *srcpad, *audio_sinkpad, *video_sinkpad;
     
        gst_init(&argc, &argv);
        pipeline = gst_pipeline_new(NULL);
        g_assert(pipeline);
     
        /* the rtpbin element */
        rtpbin = gst_element_factory_make("rtpbin", "rtpbin");
        g_assert(rtpbin);
        gst_bin_add(GST_BIN(pipeline), rtpbin);
        /* the UDP sources */
        audiosrc = gst_element_factory_make("udpsrc", "audiosrc");
        g_assert(audiosrc);
        g_object_set(audiosrc, "port", 5003, NULL);
        caps = gst_caps_from_string(AUDIO_CAPS);
        g_object_set(audiosrc, "caps", caps, NULL);
        gst_caps_unref(caps);
        gst_bin_add(GST_BIN(pipeline), audiosrc);
     
        videosrc = gst_element_factory_make("udpsrc", "videosrc");
        g_assert(videosrc);
        g_object_set(videosrc, "port", 5004, NULL);
        caps = gst_caps_from_string(VIDEO_CAPS);
        g_object_set(videosrc, "caps", caps, NULL);
        gst_caps_unref(caps);
        gst_bin_add(GST_BIN(pipeline), videosrc);
     
        /* now link all to the rtpbin, start by getting an RTP sinkpad for session 0 */
        srcpad = gst_element_get_static_pad(audiosrc, "src");
        audio_sinkpad = gst_element_get_request_pad(rtpbin, "recv_rtp_sink_0");
        lres = gst_pad_link(srcpad, audio_sinkpad);
        g_assert(lres == GST_PAD_LINK_OK);
        gst_object_unref(srcpad);
     
        srcpad = gst_element_get_static_pad(videosrc, "src");
        video_sinkpad = gst_element_get_request_pad(rtpbin, "recv_rtp_sink_1");
        lres = gst_pad_link(srcpad, video_sinkpad);
        g_assert(lres == GST_PAD_LINK_OK);
        gst_object_unref(srcpad);
     
        /* the depayloading and decoding */
        audiodepay = gst_element_factory_make("rtpopusdepay", "audiodepay");
        g_assert(audiodepay);
        audiodec = gst_element_factory_make("opusdec", "audiodec");
        g_assert(audiodec);
        /* the audio playback and format conversion */
        audioconv = gst_element_factory_make("audioconvert", "audioconv");
        g_assert(audioconv);
        audiores = gst_element_factory_make("audioresample", "audiores");
        g_assert(audiores);
        audiosink = gst_element_factory_make("avenc_aac", "audiosink"); // autoaudiosink voaacenc avenc_aac avenc_opus
        g_assert(audiosink);
        /* add depayloading and playback to the pipeline and link */
        gst_bin_add_many(GST_BIN(pipeline), audiodepay, audiodec, audioconv,
                         audiores, audiosink, NULL);
        res = gst_element_link_many(audiodepay, audiodec, audioconv, audiores,
                                    audiosink, NULL);
        g_assert(res == TRUE);
     
        videodepay = gst_element_factory_make("rtph264depay", "videodepay");
        g_assert(videodepay);
        videosink = gst_element_factory_make("h264parse", "videosink");
        g_assert(videosink);
        gst_bin_add_many(GST_BIN(pipeline), videodepay, videosink, NULL);
        res = gst_element_link_many(videodepay, videosink, NULL);
        g_assert(res == TRUE);
     
        // flvmux
        flvmux = gst_element_factory_make("flvmux", "flvmux");
        g_assert(flvmux);
        g_object_set(flvmux, "streamable", TRUE, NULL);
        gst_bin_add(GST_BIN(pipeline), flvmux);
     
        res = gst_element_link(audiosink, flvmux);
        g_assert(res == TRUE);
        res = gst_element_link(videosink, flvmux);
        g_assert(res == TRUE);
     
        rtmpsink = gst_element_factory_make("rtmpsink", "rtmpsink");
        g_assert(rtmpsink);
        g_object_set(rtmpsink, "sync", FALSE, NULL);
        g_object_set(rtmpsink, "location", "rtmp://u1802/live/demo2", NULL);
        gst_bin_add(GST_BIN(pipeline), rtmpsink);
        res = gst_element_link(flvmux, rtmpsink);
        g_assert(res == TRUE);
     
        /* the RTP pad that we have to connect to the depayloader will be created
       * dynamically so we connect to the pad-added signal, pass the depayloader as
       * user_data so that we can link to it. */
        g_signal_connect(rtpbin, "pad-added", G_CALLBACK(pad_added_cb), audiodepay);
        g_signal_connect(rtpbin, "pad-added", G_CALLBACK(pad_added_cb), videodepay);
     
        /* set the pipeline to playing */
        g_print("starting receiver pipeline\n");
        gst_element_set_state(pipeline, GST_STATE_PLAYING);
     
        /* we need to run a GLib main loop to get the messages */
        loop = g_main_loop_new(NULL, FALSE);
        g_main_loop_run(loop);
     
        g_print("stopping receiver pipeline\n");
        gst_element_set_state(pipeline, GST_STATE_NULL);
     
        g_main_loop_unref(loop);
        gst_object_unref(audio_sinkpad);
        gst_object_unref(video_sinkpad);
        /* the pipeline holds the references to the elements we added,
         * so unreffing it releases everything else */
        gst_object_unref(pipeline);
        return 0;
    }
     

     Reference: https://www.tianjiaguo.com/2019/11/gstreamer-rtp2rtmp/

    Original: Distributing real-time video streams with GStreamer

    1. Overview: GStreamer is a powerful, easily extensible, reusable, cross-platform framework for streaming-media applications. It consists of roughly three parts: the application-layer interfaces, the core framework, and the plugin system. (Fig 1.0) The application-layer interfaces mainly provide APIs to the various applications

     


    GStreamer

    It ships three executables:
    • gst-inspect-1.0
      prints the installed GStreamer plugins
    • gst-launch-1.0
      builds a GStreamer pipeline; in short, a pipe model: stack processing stages on a data stream and take the output.
    • ges-launch-1.0
      prototyping tool for GStreamer Editing Services

    GStreamer Pipeline Examples

    Video test source
    # play the test source
     gst-launch-1.0 videotestsrc ! autovideosink
     
     # generate a 1280x720 test source and play it
     gst-launch-1.0 -v videotestsrc ! video/x-raw, framerate=25/1,width=1280,height=720 ! autovideosink
    Camera data
    # play the camera feed
    gst-launch-1.0 v4l2src ! autovideosink
    
    # request the size, format, and framerate we need
    gst-launch-1.0 v4l2src ! video/x-raw,format=YUY2, width=320,height=240,framerate=20/1 ! autovideosink
    Scaling and cropping
    gst-launch-1.0 v4l2src ! video/x-raw,format=YUY2,width=640,height=480,framerate=15/1 \
     ! aspectratiocrop aspect-ratio=16/9 ! videoconvert ! autovideosink
    Encoding and muxing
    Single stream
    # encode the video to H.264 with x264 and put it into an MPEG-TS transport stream:
    gst-launch-1.0 -v videotestsrc ! video/x-raw,framerate=25/1, width=640, height=360 \
    ! x264enc ! mpegtsmux ! filesink location=test.ts
    
    # play the local file
    gst-launch-1.0 -v playbin uri=file:///home/frank/test.ts
    RTMP to RTP
    gst-launch-1.0 -v rtmpsrc location=rtmp://172.17.230.220/live/123 ! flvdemux ! h264parse ! rtph264pay config-interval=-1 pt=111 ! udpsink host=121.199.37.143 port=15004
    
    gst-launch-1.0 -v rtmpsrc location=rtmp://172.17.230.220/live/123 \
        ! flvdemux name=demux demux.audio ! queue ! decodebin ! audioconvert ! audioresample \
        ! opusenc ! rtpopuspay timestamp-offset=0 ! udpsink host=121.199.37.143 port=15002 \
        demux.video ! queue ! h264parse ! rtph264pay timestamp-offset=0 config-interval=-1 \
        ! udpsink host=121.199.37.143 port=15004

    Using the GStreamer API

    References


     GStreamer installation on Linux: https://gstreamer.freedesktop.org/documentation/installing/on-linux.html?gi-language=c#getting-the-tutorials-source-code

    1. Installing the development libraries
    Ubuntu already ships the gstreamer runtime, so only a few development packages are needed:
    libgstreamer0.10-0
    libgstreamer0.10-dev
    libgstreamer0.10-0-dbg

    # sudo apt-get install libgstreamer0.10-0 libgstreamer0.10-dev libgstreamer0.10-0-dbg


    2. Test the GStreamer development libraries
    #include <stdio.h>
    #include <gst/gst.h>
    int main (int argc, char *argv[])
    {
        const gchar *nano_str;
        guint major, minor, micro, nano;
        gst_init (&argc, &argv);
        gst_version (&major, &minor, &micro, &nano);
        if (nano == 1)
            nano_str = "(CVS)";
        else if (nano == 2)
            nano_str = "(Prerelease)";
        else
            nano_str = "";
        printf ("This program is linked against GStreamer %d.%d.%d %s\n",
                major, minor, micro, nano_str);
        return 0;
    }

    3. Compile and run

    Check the library dependencies; the tutorial sources can be fetched with:

    pkg-config --cflags --libs gstreamer-1.0

    git clone https://gitlab.freedesktop.org/gstreamer/gst-docs
    gcc -Wall $(pkg-config --cflags --libs gstreamer-1.5) gstest.c -o hello   # does not work on Ubuntu: with the libraries listed before the source file, the linker cannot find them
    gcc basic-tutorial-1.c -o basic-tutorial-1 `pkg-config --cflags --libs gstreamer-1.5`

    ./hello 
    Or equivalently:
    gcc  gstest.c -o a.out -pthread -I/usr/include/gstreamer-1.5 -I/usr/lib/x86_64-linux-gnu/gstreamer-1.5/include -I/usr/include/glib-2.0 -I/usr/lib/x86_64-linux-gnu/glib-2.0/include -lgstreamer-1.5 -lgobject-2.0 -lglib-2.0

    Output:

    This program is linked against GStreamer 1.8.1 (CVS)


    References:

    AAC variants: https://telecom.altanai.com/category/telecom-architectures/telecom-info/

    GStreamer C++ examples: https://github.com/GNOME/gstreamermm

     
     
  • Original post: https://www.cnblogs.com/bigben0123/p/14188475.html