  • OPENCV(7) —— HighGUI

    HighGUI includes the functions createTrackbar, getTrackbarPos, setTrackbarPos, imshow, namedWindow, destroyWindow, destroyAllWindows, moveWindow, resizeWindow, setMouseCallback and waitKey. Together they cover basic image display, trackbar control, and mouse/keyboard event handling.
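
    As a minimal sketch of how these pieces fit together (the image file name below is only a placeholder, not from the original post), a window is created, a mouse handler is registered, and the program blocks until a key is pressed:

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <cstdio>

    using namespace cv;

    static void onMouse(int event, int x, int y, int, void*)
    {
        if (event == CV_EVENT_LBUTTONDOWN)          // left-button click
            printf("clicked at (%d, %d)\n", x, y);
    }

    int main()
    {
        Mat img = imread("sample.jpg");             // placeholder file name
        if (img.empty()) return -1;

        namedWindow("demo", CV_WINDOW_AUTOSIZE);
        setMouseCallback("demo", onMouse, 0);       // register the mouse handler
        imshow("demo", img);
        waitKey(0);                                 // block until a key is pressed
        destroyAllWindows();
        return 0;
    }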

    Functions for reading and writing images and video: the image-related functions are imdecode, imencode, imread and imwrite. Video reading is handled by the VideoCapture class, which captures video from files and cameras; its members include the VideoCapture constructor, open, isOpened, release, grab, retrieve, read, get and set. Video writing is handled by the VideoWriter class, whose members include the VideoWriter constructor, open, isOpened and write.
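
    The in-memory variants can be sketched as follows (a minimal example; the "sample.jpg" file name is only a placeholder): imencode compresses a Mat into a byte buffer, and imdecode restores it.

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <vector>

    using namespace cv;
    using namespace std;

    int main()
    {
        Mat img = imread("sample.jpg");                   // placeholder file name
        if (img.empty()) return -1;

        vector<uchar> buf;
        imencode(".jpg", img, buf);                       // Mat -> compressed JPEG bytes
        Mat decoded = imdecode(buf, CV_LOAD_IMAGE_COLOR); // bytes -> Mat

        imwrite("roundtrip.jpg", decoded);                // write the decoded image back to disk
        return 0;
    }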

    void addWeighted(InputArray src1, double alpha, InputArray src2, double beta, double gamma, OutputArray dst, int dtype=-1)

    Calculates the weighted sum of two arrays, i.e. adds them with different weights:

    \texttt{dst}(I) = \texttt{saturate}(\texttt{src1}(I) * \texttt{alpha} + \texttt{src2}(I) * \texttt{beta} + \texttt{gamma})

    int createTrackbar(const string& trackbarname, const string& winname, int* value, int count, TrackbarCallback onChange=0, void* userdata=0)

    Parameters:

    • trackbarname – Name of the created trackbar.
    • winname – Name of the window that will be used as a parent of the created trackbar.
    • value – Optional pointer to an integer variable whose value reflects the position of the slider. Upon creation, the slider position is defined by this variable.
    • count – Maximal position of the slider. The minimal position is always 0.
    • onChange – Pointer to the function to be called every time the slider changes position. This function should be prototyped as void Foo(int,void*); , where the first parameter is the trackbar position and the second parameter is the user data (see the next parameter). If the callback is the NULL pointer, no callbacks are called, but only value is updated.
    • userdata – User data that is passed as is to the callback. It can be used to handle trackbar events without using global variables.
    #include "stdafx.h"
    
    #include <cv.h>
    #include <highgui.h>
    
    using namespace cv;
    
    /// Global Variables
    const int alpha_slider_max = 100;
    int alpha_slider;
    double alpha;
    double beta;
    
    /// Matrices to store images
    Mat src1;
    Mat src2;
    Mat dst;
    
    
    void on_trackbar( int, void* )
    {
     alpha = (double) alpha_slider/alpha_slider_max ;
     beta = ( 1.0 - alpha );
    
     addWeighted( src1, alpha, src2, beta, 0.0, dst);
    
     imshow( "Linear Blend", dst );
    }
    
    int main( int argc, char** argv )
    {
     /// Read image ( same size, same type )
     src1 = imread("lang.jpg");
     src2 = imread("taitan.jpg");
    
     if( !src1.data ) { printf("Error loading src1 \n"); return -1; }
     if( !src2.data ) { printf("Error loading src2 \n"); return -1; }
    
     /// Initialize values
     alpha_slider = 0;
    
     /// Create Windows
     namedWindow("Linear Blend", 1);
    
     /// Create Trackbars
     char TrackbarName[50];
     sprintf( TrackbarName, "Alpha x %d", alpha_slider_max );  // write the formatted label into a string
    
     createTrackbar( TrackbarName, "Linear Blend", &alpha_slider, alpha_slider_max, on_trackbar );
        // trackbar name, window name, slider position, slider maximum, callback function
    
     /// Show some stuff
     on_trackbar( alpha_slider, 0 );
    
     /// Wait until user press some key
     waitKey(0);
     return 0;
    }

    Video Stream

    captRefrnc.set(CV_CAP_PROP_POS_MSEC, 1.2);  // go to the 1.2 second in the video
    captRefrnc.set(CV_CAP_PROP_POS_FRAMES, 10); // go to the 10th frame of the video
    // now a read operation would read the frame at the set position
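
    A minimal seeking sketch (the "test.avi" file name is a placeholder): after set() positions the stream, the very next read returns the frame at that position.

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>

    using namespace cv;

    int main()
    {
        VideoCapture cap("test.avi");              // placeholder file name
        if (!cap.isOpened()) return -1;

        cap.set(CV_CAP_PROP_POS_FRAMES, 10);       // jump to the 10th frame
        Mat frame;
        cap >> frame;                              // reads the frame at the new position
        if (!frame.empty())
            imwrite("frame10.png", frame);
        return 0;
    }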

    Image similarity - PSNR and SSIM

    PSNR (aka peak signal-to-noise ratio). The simplest definition of this starts from the mean squared error, i.e. the mean squared difference between the two images. Let there be two images, I1 and I2, with two-dimensional size i by j and composed of c channels.

    MSE = \frac{1}{c \cdot i \cdot j} \sum (I_1 - I_2)^2

    Then the PSNR is expressed as:

    PSNR = 10 \cdot \log_{10} \left( \frac{MAX_I^2}{MSE} \right)

    Here MAX_I is the maximum valid value for a pixel. For a simple image with one byte per pixel per channel this is 255. When the two images are identical the MSE is zero, which would lead to an invalid division by zero in the PSNR formula (the zero-difference case has to be treated separately). In that case the PSNR is undefined, so we will need to handle it separately.
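
    As a quick numeric check (not part of the original tutorial): for 8-bit data with MSE = 100, PSNR = 10 \cdot \log_{10}(65025 / 100) \approx 28.1 dB, while identical frames give MSE = 0, and the getPSNR code below simply returns 0 instead of dividing by zero.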

    The source code presented below performs the PSNR measurement for each frame, and the SSIM only for the frames whose PSNR falls below an input trigger value.

    Mat::convertTo

    Converts the matrix to another data type, with or without scaling.

    void Mat::convertTo(OutputArray m,int rtype,double alpha=1,double beta=0)const

    Parameters:

    m – The destination matrix. If it does not have a proper size or type before the operation, it is reallocated.

    rtype – The desired destination matrix type or, rather, its depth, since the number of channels is the same as in the source. If rtype is negative, the destination matrix has the same type as the source.

    alpha – Optional scale factor.

    beta – Optional delta added to the scaled values.

    The method converts the source pixel values to the target data type; saturate_cast<> is applied at the end to avoid possible overflows:

    m(x, y) = saturate_cast<rType>( α * (*this)(x, y) + β )
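
    A minimal convertTo sketch (the toy 1x3 data is made up for illustration) that scales 8-bit values into the [0, 1] float range, as the getPSNR/getMSSIM code below does without scaling:

    #include <opencv2/core/core.hpp>

    using namespace cv;

    int main()
    {
        Mat u8 = (Mat_<uchar>(1, 3) << 0, 128, 255);   // toy 8-bit row
        Mat f32;
        u8.convertTo(f32, CV_32F, 1.0 / 255.0);        // alpha = 1/255, beta = 0
        // f32 now holds roughly [0, 0.502, 1] as 32-bit floats
        return 0;
    }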

    #include "stdafx.h"
    
    #include <iostream>    // for standard I/O
    #include <string>   // for strings
    #include <iomanip>  // for controlling float print precision
    #include <sstream>  // string to number conversion
    
    #include <opencv2/imgproc/imgproc.hpp>  // Gaussian Blur
    #include <opencv2/core/core.hpp>        // Basic OpenCV structures (cv::Mat, Scalar)
    #include <opencv2/highgui/highgui.hpp>  // OpenCV window I/O
    
    using namespace std;
    using namespace cv;
    
    double getPSNR ( const Mat& I1, const Mat& I2);
    Scalar getMSSIM( const Mat& I1, const Mat& I2);
    
    void help()
    {
        cout
            << "
    --------------------------------------------------------------------------" << endl
            << "This program shows how to read a video file with OpenCV. In addition, it tests the"
            << " similarity of two input videos first with PSNR, and for the frames below a PSNR "  << endl
            << "trigger value, also with MSSIM."<< endl
            << "Usage:"                                                                       << endl
            << "./video-source referenceVideo useCaseTestVideo PSNR_Trigger_Value Wait_Between_Frames " << endl
            << "--------------------------------------------------------------------------"   << endl
            << endl;
    }
    int main(int argc, char *argv[])
    {
        help();
        if (argc != 5)
        {
            cout << "Not enough parameters" << endl;
            return -1;
        }
        stringstream conv;
    
        const string sourceReference = argv[1],sourceCompareWith = argv[2];
        int psnrTriggerValue, delay;
        conv << argv[3] << endl << argv[4];          // put in the strings
    conv >> psnrTriggerValue >> delay;    // take out the numbers (string -> int)
    
        char c;
        int frameNum = -1;            // Frame counter
    
        VideoCapture captRefrnc(sourceReference),
            captUndTst(sourceCompareWith);
    
        if ( !captRefrnc.isOpened())
        {
            cout  << "Could not open reference " << sourceReference << endl;
            return -1;
        }
    
        
        if( !captUndTst.isOpened())
        {
            cout  << "Could not open case test " << sourceCompareWith << endl;
            return -1;
        }
    
        Size refS = Size((int) captRefrnc.get(CV_CAP_PROP_FRAME_WIDTH),
            (int) captRefrnc.get(CV_CAP_PROP_FRAME_HEIGHT)),
    
            uTSi = Size((int) captUndTst.get(CV_CAP_PROP_FRAME_WIDTH),
            (int) captUndTst.get(CV_CAP_PROP_FRAME_HEIGHT));
    
        if (refS != uTSi)
        {
            cout << "Inputs have different size!!! Closing." << endl;
            return -1;
        }
    
        const char* WIN_UT = "Under Test";    // window name
        const char* WIN_RF = "Reference";
    
        // Windows
        namedWindow(WIN_RF, CV_WINDOW_AUTOSIZE );
        namedWindow(WIN_UT, CV_WINDOW_AUTOSIZE );
        cvMoveWindow(WIN_RF, 400       ,            0);         //750,  2 (bernat =0)
        cvMoveWindow(WIN_UT, refS.width,            0);         //1500, 2
    
        cout << "Reference frame resolution: Width=" << refS.width << "  Height=" << refS.height
            << " of nr#: " << captRefrnc.get(CV_CAP_PROP_FRAME_COUNT) << endl;
    
        cout << "PSNR trigger value " <<
            setiosflags(ios::fixed) << setprecision(3) << psnrTriggerValue << endl;
    
        Mat frameReference, frameUnderTest;
        double psnrV;
        Scalar mssimV;
    
        while( true) //Show the image captured in the window and repeat
        {
        captRefrnc >> frameReference;    // grab the next frame from the video stream
            captUndTst >> frameUnderTest;
    
        if( frameReference.empty()  || frameUnderTest.empty())    // end of the video stream
            {
                cout << " < < <  Game over!  > > > ";
                break;
            }
    
            ++frameNum;
            cout <<"Frame:" << frameNum <<"# ";
    
            ///////////////////////////////// PSNR ////////////////////////////////////////////////////
            psnrV = getPSNR(frameReference,frameUnderTest);                    //get PSNR
            cout << setiosflags(ios::fixed) << setprecision(3) << psnrV << "dB";
    
            //////////////////////////////////// MSSIM /////////////////////////////////////////////////
        if (psnrV < psnrTriggerValue && psnrV)    // for efficiency, compute SSIM only when the PSNR drops below the trigger value
            {
                mssimV = getMSSIM(frameReference,frameUnderTest);
    
                cout << " MSSIM: "
                    << " R " << setiosflags(ios::fixed) << setprecision(2) << mssimV.val[2] * 100 << "%"
                    << " G " << setiosflags(ios::fixed) << setprecision(2) << mssimV.val[1] * 100 << "%"
                    << " B " << setiosflags(ios::fixed) << setprecision(2) << mssimV.val[0] * 100 << "%";
            }
    
            cout << endl;
    
            ////////////////////////////////// Show Image /////////////////////////////////////////////
            imshow( WIN_RF, frameReference);
            imshow( WIN_UT, frameUnderTest);
    
            c = cvWaitKey(delay);
            if (c == 27) break;
        }
    
        return 0;
    }
    
    double getPSNR(const Mat& I1, const Mat& I2)
    {
        Mat s1;
        absdiff(I1, I2, s1);       // |I1 - I2|
    s1.convertTo(s1, CV_32F);  // cannot make a square on 8 bits (convert to another data type, with or without scaling)
    s1 = s1.mul(s1);           // |I1 - I2|^2  (per-element multiplication of the two matrices)
    
        Scalar s = sum(s1);         // sum elements per channel
    
        double sse = s.val[0] + s.val[1] + s.val[2]; // sum channels
    
        if( sse <= 1e-10) // for small values return zero
            return 0;
        else
        {
            double  mse =sse /(double)(I1.channels() * I1.total());
            double psnr = 10.0*log10((255*255)/mse);
            return psnr;
        }
    }
    
    Scalar getMSSIM( const Mat& i1, const Mat& i2)
    {
        const double C1 = 6.5025, C2 = 58.5225;
        /***************************** INITS **********************************/
        int d     = CV_32F;
    
        Mat I1, I2;
        i1.convertTo(I1, d);           // cannot calculate on one byte large values
        i2.convertTo(I2, d);
    
        Mat I2_2   = I2.mul(I2);        // I2^2
        Mat I1_2   = I1.mul(I1);        // I1^2
        Mat I1_I2  = I1.mul(I2);        // I1 * I2
    
        /*************************** END INITS **********************************/
    
        Mat mu1, mu2;   // PRELIMINARY COMPUTING
        GaussianBlur(I1, mu1, Size(11, 11), 1.5);
        GaussianBlur(I2, mu2, Size(11, 11), 1.5);
    
        Mat mu1_2   =   mu1.mul(mu1);
        Mat mu2_2   =   mu2.mul(mu2);
        Mat mu1_mu2 =   mu1.mul(mu2);
    
        Mat sigma1_2, sigma2_2, sigma12;
    
        GaussianBlur(I1_2, sigma1_2, Size(11, 11), 1.5);
        sigma1_2 -= mu1_2;
    
        GaussianBlur(I2_2, sigma2_2, Size(11, 11), 1.5);
        sigma2_2 -= mu2_2;
    
        GaussianBlur(I1_I2, sigma12, Size(11, 11), 1.5);
        sigma12 -= mu1_mu2;
    
        ///////////////////////////////// FORMULA ////////////////////////////////
        Mat t1, t2, t3;
    
        t1 = 2 * mu1_mu2 + C1;
        t2 = 2 * sigma12 + C2;
        t3 = t1.mul(t2);              // t3 = ((2*mu1_mu2 + C1).*(2*sigma12 + C2))
    
        t1 = mu1_2 + mu2_2 + C1;
        t2 = sigma1_2 + sigma2_2 + C2;
        t1 = t1.mul(t2);               // t1 =((mu1_2 + mu2_2 + C1).*(sigma1_2 + sigma2_2 + C2))
    
        Mat ssim_map;
        divide(t3, t1, ssim_map);      // ssim_map =  t3./t1;
    
        Scalar mssim = mean( ssim_map ); // mssim = average of ssim map
        return mssim;
    }

    Writing video

    For simple video outputs you can use the OpenCV built-in VideoWriter class, designed for this.

    The type of the container is expressed in the file's extension (for example avi, mov or mkv). A container holds multiple elements: video feeds, audio feeds and other tracks (such as subtitles). How these feeds are stored is determined by the codec used for each one of them.

    OpenCV's limitations

    For video containers OpenCV supports only the AVI extension, and only its first version. A direct limitation of this is that you cannot save a video file larger than 2 GB. Furthermore, you can only create and expand a single video track inside the container; there is no support for audio or other track editing.

    VideoWriter::open(const string& filename, int fourcc, double fps, Size frameSize, bool isColor=true)
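
    A minimal VideoWriter sketch (output name, codec, FPS and frame size are placeholder values, not taken from the post): open an AVI with an explicit FOURCC and write a few frames.

    #include <opencv2/core/core.hpp>
    #include <opencv2/highgui/highgui.hpp>

    using namespace cv;

    int main()
    {
        VideoWriter writer;
        writer.open("out.avi",                        // AVI container
                    CV_FOURCC('M', 'J', 'P', 'G'),    // Motion-JPEG codec
                    25.0,                             // frames per second
                    Size(640, 480),                   // must match the size of the frames written
                    true);                            // color video
        if (!writer.isOpened()) return -1;

        Mat frame(480, 640, CV_8UC3, Scalar(0, 0, 255));  // a solid red (BGR) test frame
        for (int i = 0; i < 25; ++i)
            writer << frame;                              // write one second of video
        return 0;
    }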

    #include "stdafx.h"
    #include <iostream> // for standard I/O
    #include <string>   // for strings
    
    #include <opencv2/core/core.hpp>        // Basic OpenCV structures (cv::Mat)
    #include <opencv2/highgui/highgui.hpp>  // Video write
    
    using namespace std;
    using namespace cv;
    
    static void help()
    {
        cout
            << "------------------------------------------------------------------------------" << endl
            << "This program shows how to write video files."                                   << endl
            << "You can extract the R or G or B color channel of the input video."              << endl
            << "Usage:"                                                                         << endl
            << "./video-write inputvideoName [ R | G | B] [Y | N]"                              << endl
            << "------------------------------------------------------------------------------" << endl
            << endl;
    }
    
    int main(int argc, char *argv[])
    {
        help();
    
    // command-line arguments: video file name, the channel to extract, and whether to reuse the input video's codec
        if (argc != 4)
        {
            cout << "Not enough parameters" << endl;
            return -1;
        }
    
        const string source      = argv[1];           // the source file name
        const bool askOutputType = argv[3][0] =='Y';  // If false it will use the inputs codec type
    
        VideoCapture inputVideo(source);              // Open input
        if (!inputVideo.isOpened())
        {
            cout  << "Could not open the input video: " << source << endl;
            return -1;
        }
    
        string::size_type pAt = source.find_last_of('.');                  // Find extension point
    const string NAME = source.substr(0, pAt) + argv[2][0] + ".avi";   // Form the new name with container (std::string supports direct concatenation)
        int ex = static_cast<int>(inputVideo.get(CV_CAP_PROP_FOURCC));     // Get Codec Type- Int form
    
        // Transform from int to char via Bitwise operators
        char EXT[] = {(char)(ex & 0XFF) , (char)((ex & 0XFF00) >> 8),(char)((ex & 0XFF0000) >> 16),(char)((ex & 0XFF000000) >> 24), 0};
    
        Size S = Size((int) inputVideo.get(CV_CAP_PROP_FRAME_WIDTH),    // Acquire input size
            (int) inputVideo.get(CV_CAP_PROP_FRAME_HEIGHT));
    
        VideoWriter outputVideo;                                        // Open the output
        if (askOutputType)
            outputVideo.open(NAME, ex=-1, inputVideo.get(CV_CAP_PROP_FPS), S, true);
        else
            outputVideo.open(NAME, ex, inputVideo.get(CV_CAP_PROP_FPS), S, true);
    
        if (!outputVideo.isOpened())
        {
            cout  << "Could not open the output video for write: " << source << endl;
            return -1;
        }
    
        cout << "Input frame resolution: Width=" << S.width << "  Height=" << S.height
            << " of nr#: " << inputVideo.get(CV_CAP_PROP_FRAME_COUNT) << endl;
        cout << "Input codec type: " << EXT << endl;
    
        int channel = 2; // Select the channel to save
        switch(argv[2][0])
        {
        case 'R' : channel = 2; break;
        case 'G' : channel = 1; break;
        case 'B' : channel = 0; break;
        }
        Mat src, res;
        vector<Mat> spl;
    
        for(;;) //Show the image captured in the window and repeat
        {
            inputVideo >> src;              // read
            if (src.empty()) break;         // check if at end
    
        split(src, spl);                // process - split the image into its single channels
            for (int i =0; i < 3; ++i)
                if (i != channel)
                spl[i] = Mat::zeros(S, spl[0].type());    // zero out the other channels
        merge(spl, res);    // merge the channels back together
    
            //outputVideo.write(res); //save or
            outputVideo << res;
        }
    
        cout << "Finished writing" << endl;
        return 0;
    }