  • Calibrating a camera

    Reposted from http://wiki.opencv.org.cn/index.php/%E6%91%84%E5%83%8F%E5%A4%B4%E6%A0%87%E5%AE%9A

    Camera Calibration

     


    Calibration Program 1 (the sample that ships with OpenCV)

    Introduction

    You can directly use the camera-calibration sample program that ships with OpenCV, located at OpenCV/samples/c/calibration.cpp. The program can calibrate from images read live off a USB camera, from an AVI file, or from images already stored on disk.

    Usage

    Compile and run the program. If no command-line arguments are given, it prints a usage message telling you which arguments are required. For example, if your binary is calibration.exe (taking Windows as an example), you could invoke it as follows:

    calibration -w 6 -h 8 -s 2 -n 10 -o camera.yml -op -oe [<list_of_views.txt>]

    Invocation and parameters

    Usage: calibration

        -w <board_width>         # number of inner corners along one board dimension
        -h <board_height>        # number of inner corners along the other dimension
        [-n <number_of_frames>]  # number of image frames used for calibration
                                 # (if not specified, it will be set to the number
                                 #  of board views actually available)
        [-d <delay>]             # a minimum delay in ms between subsequent attempts to capture a next view
                                 # (used only for video capturing)
        [-s <square_size>]       # square size in some user-defined units (1 by default)
        [-o <out_camera_params>] # the output filename for intrinsic [and extrinsic] parameters
        [-op]                    # write detected feature points
        [-oe]                    # write extrinsic parameters
        [-zt]                    # assume zero tangential distortion
        [-a <aspect_ratio>]      # fix aspect ratio (fx/fy)
        [-p]                     # fix the principal point at the center
        [-v]                     # flip the captured images around the horizontal axis
        [input_data]             # input data, one of the following:
                                 #  - a text file listing the calibration images
                                 #  - name of video file with a video of the board
                                 # if input_data not specified, a live view from the camera is used
    
    [Example calibration image]

    In the image above there are 9 inner corners horizontally and 6 vertically, so the corresponding command-line arguments are: -w 9 -h 6

    • In repeated use I found that, without the -p flag, the computed results have noticeably larger errors, mainly in the estimates of u0 and v0; I therefore recommend always passing -p.

    list_of_views.txt

    This text file lists the images on your computer that will be used for calibration.

    view000.png
    view001.png
    #view002.png
    view003.png
    view010.png
    one_extra_view.jpg
    

    In the example above, images prefixed with a hash sign are ignored.

    • On Windows there is a convenient way to generate this text file from the command line. In a CMD window, run the following (assuming every jpg in the current directory is used for calibration and the output file is a.txt):
    dir *.jpg /B >> a.txt
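On Linux or macOS the equivalent is also a one-liner; here is a self-contained sketch (the directory and filenames below are made up for illustration):

```shell
# Create a sample directory with two images, then build the list file.
mkdir -p calib_demo && cd calib_demo
touch view000.jpg view001.jpg
ls *.jpg > a.txt      # one filename per line, like "dir /B" on Windows
cat a.txt
```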
    

    When the input is a camera or an AVI file

            When live video from a camera is used as input, the following hot-keys may be used:
                <ESC>, 'q' - quit the program
                'g' - start capturing images
                'u' - switch undistortion on/off
    
    

    Code

    Copy the relevant code directly from calibration.cpp.

    Calibration Program 2

    OpenCV does not provide a complete example for this, so I put one together myself and record it here.

    1. First make a calibration pattern and print it on A4 paper; fix the distance, then choose the number of squares on the chessboard, e.g. 8×6 (the image I made below is 8×8).

    2. Then use cvFindChessboardCorners to find the board's 2D position in the camera image. cvFindChessboardCorners is not very stable here and sometimes fails to work; image-enhancement preprocessing may be needed.
    3. Compute the actual distances, which should be 3D distances. I set the square size to 21.6 mm, i.e. roughly two centimeters on the A4 sheet.
    4. Then use cvCalibrateCamera2 to compute the intrinsic parameters.
    5. Finally, use cvUndistort2 to correct the image distortion.

    The results are as follows:

    Code

    Code download


    For details on how these functions are used, see the "Cv camera calibration and 3D reconstruction" reference, section "camera calibration".

    Every camera has its own parameters, such as the focal length, the principal point, and a lens distortion model. The process of finding a camera's intrinsic parameters is called camera calibration. Calibration is important for augmented-reality applications because it accounts for both the perspective transform and the lens distortion in the output image. To give users the best experience in an AR application, objects should be visualized with the same perspective projection. Calibrating a camera requires a special pattern image, such as a chessboard or black circles on a white background; the camera being calibrated captures 10-15 photos of the pattern from different angles, and a calibration algorithm then finds the optimal intrinsic camera parameters and distortion vector.

    Show the distortion removal for the images too. When you work with an image list it is not possible to remove the distortion inside the loop, so you must do it after the loop. Taking advantage of this, I'll now expand the undistort function, which in fact first calls initUndistortRectifyMap to find the transformation matrices and then performs the transformation using the remap function. Because, after a successful calibration, the maps need to be computed only once, using this expanded form can speed up your application:

    if( s.inputType == Settings::IMAGE_LIST && s.showUndistorsed )
    {
      Mat view, rview, map1, map2;
      initUndistortRectifyMap(cameraMatrix, distCoeffs, Mat(),
          getOptimalNewCameraMatrix(cameraMatrix, distCoeffs, imageSize, 1, imageSize, 0),
          imageSize, CV_16SC2, map1, map2);
    
      for(int i = 0; i < (int)s.imageList.size(); i++ )
      {
          view = imread(s.imageList[i], 1);
          if(view.empty())
              continue;
          remap(view, rview, map1, map2, INTER_LINEAR);
          imshow("Image View", rview);
          char c = waitKey();
          if( c  == ESC_KEY || c == 'q' || c == 'Q' )
              break;
      }
    }


    Notes on accuracy
    http://stackoverflow.com/questions/12794876/how-to-verify-the-correctness-of-calibration-of-a-webcam

    Hmm, are you looking for "handsome" or "accurate"?

    Camera calibration is one of the very few subjects in computer vision where accuracy can be directly quantified in physical terms, and verified by a physical experiment. And the usual lesson is that (a) your numbers are just as good as the effort (and money) you put into them, and (b) real accuracy (as opposed to imagined one) is expensive, so you should figure out in advance what your application really requires in the way of precision.

    If you look up the geometrical specs of even very cheap lens/ccd combos (in the megapixel range and above), it becomes readily apparent that sub-sub-mm calibration accuracies are theoretically achievable within a table-top volume of space. Just work out (from the spec sheet of your camera's sensor) the solid angle spanned by one pixel - you'll be dazzled by the spatial resolution you have within reach of your wallet. However, actually achieving REPEATABLY something near that theoretical accuracy takes work.

    Here are some recommendations (from personal experience) for getting a good calibration experience with home-grown equipment.

    1. If your method uses a flat target ("checkerboard" or similar), manufacture a good one. Choose a very flat backing (for the size you mention window glass 5 mm thick or more is excellent, though obviously fragile). Verify its flatness against another edge (or, better, a laser beam). Print the pattern on thick-stock paper that won't stretch too easily. Lay it after printing on the backing before gluing and verify that the square sides are indeed very nearly orthogonal. Cheap ink-jet or laser printers are not designed for rigorous geometrical accuracy, do not trust them blindly. Best practice is to use a professional print shop (even a Kinko's will do a much better job than most home printers). Then attach the pattern very carefully to the backing, using spray-on glue and slowly wiping with soft cloth to avoid bubbles and stretching. Wait for a day or longer for the glue to cure and the glue-paper stress to reach its long-term steady state. Finally measure the corner positions with a good caliper and a magnifier. You may get away with one single number for the "average" square size, but it must be an average of actual measurements, not of hopes-n-prayers. Best practice is to actually use a table of measured positions.

    2. Watch your temperature and humidity changes: paper adsorbs water from the air, and the backing dilates and contracts. It is amazing how many articles you can find that report sub-millimeter calibration accuracies without quoting the environment conditions (or the target's response to them). Needless to say, they are mostly crap. The lower thermal expansion coefficient of glass compared to common sheet metal is another reason for preferring the former as a backing.

    3. Needless to say, you must disable the auto-focus feature of your camera, if it has one: focusing physically moves one or more pieces of glass inside your lens, thus changing (slightly) the field of view and (usually by a lot) the lens distortion and the principal point.

    4. Place the camera on a stable mount that won't vibrate easily. Focus (and f-stop the lens, if it has an iris) as is needed for the application (not the calibration - the calibration procedure and target must be designed for the app's needs, not the other way around). Do not even think of touching camera or lens afterwards. If at all possible, avoid "complex" lenses - e.g. zoom lenses or very wide angle ones. Fisheye or anamorphic lenses require models much more complex than stock OpenCV makes available.

    5. Take lots of measurements and pictures. You want hundreds of measurements (corners) per image, and tens of images. Where data is concerned, the more the merrier. A 10x10 checkerboard is the absolute minimum I would consider. I normally worked at 20x20.

    6. Span the calibration volume when taking pictures. Ideally you want your measurements to be uniformly distributed in the volume of space you will be working with. Most importantly, make sure to angle the target significantly with respect to the focal axis in some of the pictures - to calibrate the focal length you need to "see" some real perspective foreshortening. For best results use a repeatable mechanical jig to move the target. A good one is a one-axis turntable, which will give you an excellent prior model for the motion of the target.

    7. Minimize vibrations and associated motion blur when taking photos.

    8. Use good lighting. Really. It's amazing how often I see people realize late in the game that you need photons to calibrate any camera :-) Use diffuse ambient lighting, and bounce it off white cards on both sides of the field of view.

    9. Watch what your corner extraction code is doing. Draw the detected corner positions on top of the images (in Matlab or Octave, for example), and judge their quality. Removing outliers early using tight thresholds is better than trusting the robustifier in your bundle adjustment code.

    10. Constrain your model if you can. For example, don't try to estimate the principal point if you don't have a good reason to believe that your lens is significantly off-center w.r.t. the image; just fix it at the image center on your first attempt. The principal point location is usually poorly observed, because it is inherently confused with the center of the nonlinear distortion and by the component parallel to the image plane of the target-to-camera translation. Getting it right requires a carefully designed procedure that yields three or more independent vanishing points of the scene and a very good bracketing of the nonlinear distortion. Similarly, unless you have reason to suspect that the lens focal axis is really tilted w.r.t. the sensor plane, fix at zero the (1,2) component of the camera matrix. Generally speaking, use the simplest model that satisfies your measurements and your application needs (that's Occam's razor for you).

    11. When you have a calibration solution from your optimizer with low enough RMS error (a few tenths of a pixel, typically, see other answer below), plot the XY pattern of the residual errors (predicted_xy - measured_xy for each corner in all images) and see if it's a round-ish cloud centered at (0, 0). "Clumps" of outliers or non-roundness of the cloud of residuals are screaming alarm bells that something is very wrong - most likely outliers, or an inappropriate lens distortion model.

    12. Take extra images to verify the accuracy of the solution - use them to verify that the lens distortion is actually removed, and that the planar homography predicted by the calibrated model actually matches the one recovered from the measured corners.



    There is a problem with your camera calibration: cv::calibrateCamera() returns the root mean square (RMS) reprojection error [1] and should be between 0.1 and 1.0 pixels in a good calibration. For a point of reference, I get approximately 0.25 px RMS error using my custom stereo camera made of two hardware-synchronized Playstation Eye cameras running at the 640 x 480 resolution.

    Are you sure that the pixel coordinates returned by cv::findChessboardCorners() are in the same order as those in obj? If the axes were flipped, you would get symptoms similar to those that you are describing.

    [1]: OpenCV calculates the reprojection error by projecting the three-dimensional chessboard points into the image using the final set of calibration parameters and comparing their positions with the measured corner positions. An RMS error of 300 means that, on average, each of these projected points is 300 px away from its actual position.


    ======================================== My own workflow summary, put together from reading docs and other references ========================================


    Manual camera calibration notes

    Method 1:

    Tools: one camera; one printed black-and-white chessboard, with known numbers of inner corners in each direction and a known square size;

    Calibration procedure:

    1. Open a command-line window and change directory to D:\workSpace\cameraCalibation\x64\Debug
    2. Run the command (calibration -w=10 -h=7 -s=1 -o=camera.yml -op -oe -p)

          A brief explanation of the parameters:

                 -w: number of inner corners along the board's width;

                 -h: number of inner corners along the board's height;

                 -s: the side length of each square (a length, not an area);

                 -o: write the camera's intrinsic parameters to the specified file;

                 -p: fix the principal point at the image center;

                 -n: number of images used to compute the camera parameters; 10-20 is usually enough;

                -op: write the detected feature points to the file;

                -oe: write the extrinsic parameters to the file;

               Note: -op and -oe are not needed for undistortion;

       3. Press g to start capturing images; while capturing, rotate the board and translate it by some distance;

       4. When the program finishes it reports an RMS value, which reflects the calibration accuracy; the closer to 0 the better. A value between 0 and 1.0 is generally good enough; repeat the steps above and adjust the camera to make this value as small as possible;

       5. A camera.yml file is generated in that directory; it records the camera parameters and the accuracy value. This file is used to undistort images so that pixels land in more accurate positions;
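For reference, a camera.yml produced by the sample looks roughly like the fragment below (the field names follow OpenCV's FileStorage YAML output; all numbers here are made up for illustration):

```yaml
%YAML:1.0
---
image_width: 640
image_height: 480
board_width: 10
board_height: 7
square_size: 1.
camera_matrix: !!opencv-matrix
   rows: 3
   cols: 3
   dt: d
   data: [ 532.8, 0., 320.1, 0., 532.9, 239.6, 0., 0., 1. ]
distortion_coefficients: !!opencv-matrix
   rows: 5
   cols: 1
   dt: d
   data: [ -2.8e-01, 7.2e-02, 1.2e-03, -3.4e-04, 0. ]
avg_reprojection_error: 2.3e-01
```

camera_matrix holds fx, fy (the diagonal) and the principal point cx, cy (the last column); distortion_coefficients holds k1, k2, p1, p2, k3.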








  • Original source: https://www.cnblogs.com/chenshihao/p/5848054.html