  • VINS (2): Feature Detection and Tracking

    The system entry point is the main function in feature_tracker_node.cpp.

    1. First, the feature_tracker node is created and the configuration is read in (parameters.cpp), including:

    • the names of the topics published and subscribed to in ROS;
    • the image dimensions;
    • the feature tracking parameters;
    • whether a fisheye mask is needed to remove noisy points near the image edges;
    %YAML:1.0
    
    #common parameters
    imu_topic: "/imu0"
    image_topic: "/cam0/image_raw"
    
    #camera calibration 
    model_type: PINHOLE
    camera_name: camera
    image_width: 752
    image_height: 480
    distortion_parameters:
       k1: -2.917e-01
       k2: 8.228e-02
       p1: 5.333e-05
       p2: -1.578e-04
    projection_parameters:
       fx: 4.616e+02
       fy: 4.603e+02
       cx: 3.630e+02
       cy: 2.481e+02
    
    # Extrinsic parameter between IMU and Camera.
    estimate_extrinsic: 1   # 0  Accurate extrinsic parameters are known. We will trust the following imu^R_cam, imu^T_cam; don't change them.
                            # 1  Have an initial guess about extrinsic parameters. We will optimize around your initial guess.
                            # 2  Don't know anything about extrinsic parameters. You don't need to give R,T. We will try to calibrate them. Do some rotational movement at the beginning. 
    ex_calib_result_path: "/config/euroc/ex_calib_result.yaml"  # If you choose 1 or 2, the extrinsic calibration result will be written to vins_folder_path + ex_calib_result_path.                        
    #If you choose 0 or 1, you should write down the following matrix.
    #Rotation from camera frame to imu frame, imu^R_cam
    extrinsicRotation: !!opencv-matrix
       rows: 3
       cols: 3
       dt: d
       data: [0, -1, 0, 
               1, 0, 0, 
               0, 0, 1]
    #Translation from camera frame to imu frame, imu^T_cam
    extrinsicTranslation: !!opencv-matrix
       rows: 3
       cols: 1
       dt: d
       data: [-0.02,-0.06, 0.01]
    
    #feature tracker parameters
    max_cnt: 150            # max feature number in feature tracking
    min_dist: 30            # min distance between two features 
    freq: 10                # frequency (Hz) at which the tracking result is published. At least 10 Hz for good estimation. If set to 0, the frequency matches the raw image 
    F_threshold: 1.0        # ransac threshold (pixel)
    show_track: 1           # publish tracking image as topic
    equalize: 1             # if the image is too dark or too bright, turn on equalization to find enough features
    fisheye: 0              # if using a fisheye lens, turn this on. A circle mask will be loaded to remove noisy points near the edges
    
    #optimization parameters
    max_solver_time: 0.04  # max solver iteration time (s), to guarantee real time
    max_num_iterations: 8   # max solver iterations, to guarantee real time
    keyframe_parallax: 10.0 # keyframe selection threshold (pixel)
    
    #imu parameters       The more accurate parameters you provide, the better performance
    acc_n: 0.2          # accelerometer measurement noise standard deviation. #0.2
    gyr_n: 0.02         # gyroscope measurement noise standard deviation.     #0.05
    acc_w: 0.0002         # accelerometer bias random walk noise standard deviation.  #0.02
    gyr_w: 2.0e-5       # gyroscope bias random walk noise standard deviation.     #4.0e-5
    g_norm: 9.81007     # gravity magnitude
    
    
    #loop closure parameters
    loop_closure: 1   # if you want to use loop closure to minimize drift, set loop_closure to 1 and give your BRIEF pattern file path and vocabulary file path accordingly;
                         # also give the same camera calibration file as the feature_tracker node
    pattern_file: "/support_files/brief_pattern.yml"
    voc_file: "/support_files/brief_k10L6.bin"
    min_loop_num: 25

    The remaining parameters in this config.yaml file are read by the vins_estimator node and belong to the fusion algorithm; a loading sketch follows the list:

    • optimization parameters (maximum solver time, to guarantee real-time performance without stalling; maximum number of iterations, to avoid redundant computation; a parallax threshold used to select keyframes for the sliding window);
    • IMU parameters, including the measurement noise standard deviations of the accelerometer and gyroscope, the bias random-walk noise standard deviations, and the gravity magnitude (which would need to change if the IMU were taken to Mars);
    • the extrinsics R, t between IMU and camera; the options are (0) accurate extrinsics are known and stay fixed at runtime, (1) an initial guess is known and is refined during optimization, (2) nothing is known and the extrinsics are calibrated during online initialization;
    • loop-closure parameters, including the pattern file for BRIEF descriptors (the visual front end uses optical-flow tracking and does not compute descriptors) and a binary DBoW vocabulary file trained for the scene.
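    Both nodes load their parameters from this file with OpenCV's cv::FileStorage. A minimal sketch of the pattern, using field names from the config above (error handling reduced to a bail-out; not the exact source):

    #include <string>
    #include <opencv2/core.hpp>

    void readParameters(const std::string &config_file)
    {
        cv::FileStorage fs(config_file, cv::FileStorage::READ);
        if (!fs.isOpened())
            return;  // the real node treats this as a fatal error

        std::string imu_topic, image_topic;
        fs["imu_topic"] >> imu_topic;
        fs["image_topic"] >> image_topic;

        int max_cnt = fs["max_cnt"];            // max tracked features
        int min_dist = fs["min_dist"];          // min pixel distance between features
        int freq = fs["freq"];                  // publish frequency (Hz)
        double f_threshold = fs["F_threshold"]; // RANSAC threshold (pixel)
        int fisheye = fs["fisheye"];            // load the circle mask if 1

        fs.release();
    }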

    2. Subscribe to IMAGE_TOPIC; whenever an image message is published on IMAGE_TOPIC, the callback is executed:

    ros::Subscriber sub_img = n.subscribe(IMAGE_TOPIC, 100, img_callback);

    3. img_callback()

    Essentially all of the front-end vision code lives in this callback. The steps are:

      1. Frequency control: ensure that no more than FREQ images are processed per second (a sketch follows the publisher code below);

      2. For the monocular case:

        1). readImage;

        2). showUndistortion (optional);

        3). undistort the feature points (using the camodocal camera model), then bundle the resulting 3D points on the normalized plane (no scale information at this stage, so p.z = 1), the 2D pixel points, and the feature ids into a ROS sensor_msgs::PointCloud message; 

      3. Publish the processed image information as PointCloud and Image messages on the "feature" and "feature_img" topics:

    pub_img = n.advertise<sensor_msgs::PointCloud>("feature", 1000);
    pub_match = n.advertise<sensor_msgs::Image>("feature_img",1000);
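
    The frequency control in step 1 keeps the long-run publish rate near FREQ. A minimal sketch of the idea, with pub_count and first_image_time as assumed bookkeeping variables (illustrative, not necessarily the exact source):

    #include <cmath>

    static const int FREQ = 10;          // target publish frequency (Hz)
    static double first_image_time = 0;  // set from the first image in practice
    static int pub_count = 0;            // frames published in the current window

    bool shouldPublishFrame(double img_time)
    {
        // Average publish rate since the current window started.
        double rate = pub_count / (img_time - first_image_time);
        if (std::round(rate) <= FREQ)
        {
            ++pub_count;
            // Once the average is close to FREQ, restart the window so old
            // history does not dominate and the control stays responsive.
            if (std::abs(rate - FREQ) < 0.01 * FREQ)
            {
                first_image_time = img_time;
                pub_count = 0;
            }
            return true;
        }
        return false;
    }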

    4. Visual algorithms involved:

    1. CLAHE (Contrast Limited Adaptive Histogram Equalization)

    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(3.0, cv::Size(8, 8));
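
    When equalize is enabled, the clahe object is applied to the grayscale frame before tracking. A usage sketch (gray_img is an assumed 8-bit single-channel input):

    #include <opencv2/imgproc.hpp>

    // Locally equalize contrast so corners stay detectable in dark or
    // over-exposed images; the clip limit of 3.0 bounds noise amplification.
    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(3.0, cv::Size(8, 8));
    cv::Mat equalized;
    clahe->apply(gray_img, equalized);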

    2. Optical Flow (pyramidal Lucas-Kanade tracking)

    cv::calcOpticalFlowPyrLK(cur_img, forw_img, cur_pts, forw_pts, status, err, cv::Size(21, 21), 3);
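
    The status vector reports which points were tracked successfully; failed points are dropped before any further processing. A sketch around the call above (cur_img/forw_img are consecutive grayscale frames, cur_pts the features from the previous frame):

    #include <vector>
    #include <opencv2/video/tracking.hpp>

    // 3-level pyramidal Lucas-Kanade with a 21x21 search window.
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(cur_img, forw_img, cur_pts, forw_pts,
                             status, err, cv::Size(21, 21), 3);

    // Keep only successfully tracked points (status[i] == 1).
    std::vector<cv::Point2f> tracked;
    for (size_t i = 0; i < forw_pts.size(); ++i)
        if (status[i])
            tracked.push_back(forw_pts[i]);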

    3. Compute the fundamental matrix from the matched points, then use RANSAC to reject outliers that are inconsistent with it

    cv::findFundamentalMat(un_prev_pts, un_forw_pts, cv::FM_RANSAC, F_THRESHOLD, 0.99, status);
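
    Here status marks the RANSAC inliers, and outliers must be removed from every container that is kept in sync with the point lists (ids, track counts, and so on). A compaction helper in the spirit of the VINS code (a sketch, not the verbatim source):

    #include <vector>
    #include <opencv2/calib3d.hpp>

    // Compact v in place, keeping only entries whose status flag is 1.
    template <typename T>
    void reduceVector(std::vector<T> &v, const std::vector<uchar> &status)
    {
        size_t j = 0;
        for (size_t i = 0; i < v.size(); ++i)
            if (status[i])
                v[j++] = v[i];
        v.resize(j);
    }

    std::vector<uchar> status;
    cv::findFundamentalMat(un_prev_pts, un_forw_pts, cv::FM_RANSAC,
                           F_THRESHOLD, 0.99, status);
    reduceVector(prev_pts, status);
    reduceVector(forw_pts, status);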

    4. Feature detection: goodFeaturesToTrack, using the Shi-Tomasi improvement of the Harris corner detector

    cv::goodFeaturesToTrack(forw_img, n_pts, MAX_CNT - forw_pts.size(), 0.1, MIN_DIST, mask);

     A minimum distance of 30 pixels is enforced between feature points. Successfully tracked points must also pass a rotation-compensated parallax check: only features with more than 30 pixels of parallax participate in triangulation and the subsequent optimization. This keeps the quality of all feature points high while reducing the computational load.
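
    The minimum spacing is what the mask argument of goodFeaturesToTrack enforces. A sketch of building such a mask (ROW/COL stand for the image height/width; for a fisheye lens the circle mask would be the starting point instead):

    #include <opencv2/imgproc.hpp>

    // Blank out a MIN_DIST-radius disc around every feature that is already
    // tracked, so goodFeaturesToTrack only proposes new corners that keep
    // the minimum spacing.
    cv::Mat mask(ROW, COL, CV_8UC1, cv::Scalar(255));
    for (const cv::Point2f &p : forw_pts)
    {
        cv::Point ip(cvRound(p.x), cvRound(p.y));
        if (mask.at<uchar>(ip) == 255)
            cv::circle(mask, ip, MIN_DIST, 0, -1);
    }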
