  • Using the STB dataset


    The STB dataset

    1. Dataset introduction

    • Data overview

    The STB dataset comes from the paper: A hand pose tracking benchmark from stereo matching.

    Dataset contents:

    Our stereo hand pose benchmark contains sequences with 6 different backgrounds and every background has two sequences with counting and random poses. Every sequence has 1500 frames, so there are totally 18000 frames in our benchmark. Stereo and depth images were captured from a Point Grey Bumblebee2 stereo camera and an Intel Real Sense F200 active depth camera simultaneously.

    So there is one stereo pair (left/right RGB cameras) and one depth camera (color RGB + depth). Papers generally use the dataset as follows:

    STB is a real-world dataset containing 18,000 images with the ground truth of 21 3D hand joint locations and corresponding depth images. We split the dataset into 15,000 training samples and 3,000 test samples.

    How exactly you split it is up to you.
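    As a point of reference, here is a minimal sketch of one common way to realize the 15,000/3,000 split: hold out one background. The sequence names are an assumption derived from the 6-background x 2-pose x 1500-frame layout quoted above; adjust them to your copy of the data.

    # a sketch of a common split: 6 backgrounds x 2 poses x 1500 frames = 18,000 frames.
    # sequence naming (B1Counting ... B6Random) is an assumption; adjust to your copy.
    backgrounds = ["B1", "B2", "B3", "B4", "B5", "B6"]
    poses = ["Counting", "Random"]
    sequences = [b + p for b in backgrounds for p in poses]  # 12 sequences

    test_seqs = [s for s in sequences if s.startswith("B1")]       # 2 x 1500 = 3,000 frames
    train_seqs = [s for s in sequences if not s.startswith("B1")]  # 10 x 1500 = 15,000 frames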

    Many works use this dataset, for example:

    Learning to Estimate 3D Hand Pose from Single RGB Images

    3D Hand Shape and Pose Estimation from a Single RGB Image

    ...and many more papers use this data. I have been working on 2D and 3D hand pose estimation recently and will cover the details in a later survey post.

    2. Using the dataset

    • Stereo camera

    Point Grey Bumblebee2 stereo camera:
      baseline = 120.054
      fx = 822.79041
      fy = 822.79041
      tx = 318.47345
      ty = 250.31296

    • Further explanation
      • Parameters provided by the dataset: camera intrinsics and the baseline.
      • Note 1: a single set of intrinsics is given, i.e. the left and right cameras share the same intrinsics.
      • Note 2: the baseline is the distance between the two cameras, in mm. Since the hand is roughly 5 m away while the cameras are only 120 mm apart (0.12 m ≪ 5 m), the rotation between the two cameras is ignored here.
      • Looking at the plotted 2D points, the deviation is somewhat large, probably caused by ignoring the rotation and by inaccurate annotations.
    # the left and right cameras differ only by a translation of "baseline"; rotation is ignored
    import numpy as np

    fx = 822.79041
    fy = 822.79041
    tx = 318.47345  # principal point x (cx)
    ty = 250.31296  # principal point y (cy)
    base = 120.054  # baseline, in mm

    # 3x4 extrinsics; the extra column makes homogeneous coordinates convenient
    R_l = np.asarray([
      [1, 0, 0, 0],
      [0, 1, 0, 0],
      [0, 0, 1, 0]], dtype=np.float64)  # float dtype, otherwise the baseline below gets truncated
    R_r = R_l.copy()
    R_r[0, 3] = -base  # translation along x: the baseline

    # intrinsic matrix
    K = np.asarray([
      [fx, 0, tx],
      [0, fy, ty],
      [0, 0, 1]])

    # world-coordinate points: a 4x21 matrix of homogeneous [x, y, z, 1] columns
    points = XXX

    # translation + intrinsics
    left_point = np.dot(np.dot(K, R_l), points)
    right_point = np.dot(np.dot(K, R_r), points)

    # divide out the depth z to get pixel coordinates
    image_cood = left_point / left_point[-1, ...]
    image_left = np.round(image_cood[:2, ...].T).astype(np.int32)
    image_cood = right_point / right_point[-1, ...]
    image_right = np.round(image_cood[:2, ...].T).astype(np.int32)
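    A quick sanity check on the code above: since R_r differs from R_l only by t_x = -base, the two projections satisfy x_left - x_right = fx * base / Z, which is exactly the standard stereo disparity relation, so more distant points end up with smaller disparity.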
    
    • Depth camera

    Intel Real Sense F200 active depth camera:
      fx_color = 607.92271
      fy_color = 607.88192
      tx_color = 314.78337
      ty_color = 236.42484
      fx_depth = 475.62768
      fy_depth = 474.77709
      tx_depth = 336.41179
      ty_depth = 238.77962
      rotation vector = [0.00531, -0.01196, 0.00301] (use Rodrigues' rotation formula to transform it into a rotation matrix)
      translation vector = [-24.0381, -0.4563, -1.2326]
      (the rotation and translation vectors transform coordinates relative to the color camera into coordinates relative to the depth camera)

    • Further explanation
      • Parameters provided by the dataset: color camera intrinsics, depth camera intrinsics, and one extrinsic transform.
      • Note 1: the rotation is given as a vector; convert it into a rotation matrix with Rodrigues' formula (see <视觉SLAM14讲> for details).
      • Note 2: the official description of the extrinsics is: the transform from the color camera to the depth camera. I tried both interpretations and both failed ---> 1) world frame = depth camera, transforming into the color camera failed; 2) world frame = color camera, transforming into the depth camera failed.
      • Note 3: after consulting the GCNHand3D paper, it turns out that world frame = depth camera, and the extrinsics are indeed as officially described: the transform from the color camera to the depth camera. You can also view it through the lens of a left-handed coordinate convention, the opposite of the usual right-handed one. Right-handed: (A = R*B); left-handed: (A = inv(R)*B). So if the color-to-depth extrinsic is (R), then depth-to-color is (inv(R)); the left/right-handed reading is equivalent.
    import numpy as np

    # intrinsics, indexed per camera:
    # 0: Point Grey Bumblebee2 stereo camera
    # 1: Intel Real Sense F200 active depth camera, COLOR
    # 2: Intel Real Sense F200 active depth camera, DEPTH
    fx = [822.79041, 607.92271, 475.62768]
    fy = [822.79041, 607.88192, 474.77709]
    tx = [318.47345, 314.78337, 336.41179]
    ty = [250.31296, 236.42484, 238.77962]

    # 3x4 camera intrinsic (projection) matrices for color and depth
    inter_color = np.asarray([[fx[1], 0, tx[1], 0],
                              [0, fy[1], ty[1], 0],
                              [0, 0, 1, 0]])
    inter_depth = np.asarray([[fx[2], 0, tx[2], 0],
                              [0, fy[2], ty[2], 0],
                              [0, 0, 1, 0]])

    # matrix_R, _ = cv2.Rodrigues(np.asarray([0.00531, -0.01196, 0.00301]))  # rotation vector -> matrix
    # matrix_R_inv = np.linalg.inv(matrix_R)  # invert to go depth -> color
    # matrix_extrinsic combines matrix_R_inv with the translation
    # TODO note that the translation vector must be negated
    matrix_extrinsic = np.asarray([[ 0.99992395, 0.002904166, 0.01195165, +24.0381],
                            [ -0.00304,  0.99998137, 0.00532784, +0.4563],
                            [ -0.01196763,  -0.00529184,  0.99991438, +1.2326],
                            [   0,     0,     0,       1]])

    # world-coordinate points (world frame = depth camera): 4x21 homogeneous [x, y, z, 1]
    points = XXX

    # project into the depth image: depth intrinsics only (world frame = depth camera)
    image_depth = np.dot(inter_depth, points)
    # project into the color image: depth -> color extrinsics, then color intrinsics
    image_color = np.dot(inter_color, np.dot(matrix_extrinsic, points))

    # 3D to 2D: pick one of the two and divide out the depth z
    image_cood = image_depth  # or image_color
    image_cood = image_cood / image_cood[-1, ...]
    image_cood = np.round(image_cood[:2, ...].T).astype(np.int32)
    
    • Caveats
    1. The STB dataset uses mm as its unit.
    2. The depth maps feel inaccurate: distance = R + 255*G, yet in some images the hand pixels look the same as the background, and the noise is large. Relatively few papers use STB for training and evaluation, and even fewer use its depth maps. A decoding sketch follows below.
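    For completeness, a minimal sketch of decoding a depth map under the distance = R + 255*G convention quoted above. The file name is hypothetical, the BGR channel order follows cv2.imread's behavior, and whether the factor should be 255 or 256 is worth double-checking against the official readme:

    import cv2
    import numpy as np

    # decode a depth png under the "distance = R + 255*G" convention from the note above;
    # cv2.imread returns BGR, so channel 2 is R and channel 1 is G
    img = cv2.imread("SK_depth_0.png")  # hypothetical file name
    depth_mm = img[..., 2].astype(np.float32) + 255.0 * img[..., 1].astype(np.float32)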

    3. References

    icip17_stereo_hand_pose_dataset

    Rotation direction

    More references will be listed in the upcoming survey post.
