  • [AR Lab] mulberryAR: Parallel ORB Feature Extraction

    Please credit the source when reposting: polobymulberry, cnblogs

    0x00 - Introduction


    At the end of [AR Lab] mulberryAR: ORBSLAM2+VVSION I reported real-device test results on an iPhone 5s, where ExtractORB, the function that extracts ORB features from an image, took a considerable amount of time. That makes it the top optimization priority right now. For the experiments here I feed in the recorded image sequence added in [AR Lab] mulberryAR: Adding an Image Sequence as Input. This has two benefits: the input is identical across runs, so the comparison between single-threaded and parallel feature extraction is credible; and because no camera is needed, the program can run in the iOS simulator, which makes testing very convenient and offers plenty of device models to choose from.

    At the moment I have only two ideas for optimizing the feature-extraction stage:

    • Parallelize the feature-extraction process.
    • Reduce the number of extracted feature points.

    The second method is trivial: just change the number of features to extract in the configuration file (see the excerpt below), so I will not dwell on it. This post focuses on the first method, a first attempt at parallelizing feature extraction.
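
    For reference, ORB-SLAM2 reads this count from the ORBextractor section of its settings file. A typical excerpt (the values shown are the common defaults and are only illustrative):

    # number of features per image; lowering this speeds up extraction
    ORBextractor.nFeatures: 1000
    # scale factor between pyramid levels
    ORBextractor.scaleFactor: 1.2
    # number of pyramid levels
    ORBextractor.nLevels: 8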

    0x01 - Timing the ORB-SLAM2 Feature-Extraction Pipeline


    In ORB-SLAM2 the feature-extraction function is called ExtractORB, a member function of the Frame class that extracts the ORB keypoints of the current Frame.

    // flag is for stereo cameras; for a monocular camera flag defaults to 0
    // extract ORB features from im
    void Frame::ExtractORB(int flag, const cv::Mat &im)
    {
        if(flag==0)
            // mpORBextractorLeft is an ORBextractor object; ORBextractor
            // overloads operator(), which enables the call syntax below
            (*mpORBextractorLeft)(im,cv::Mat(),mvKeys,mDescriptors);
        else
            (*mpORBextractorRight)(im,cv::Mat(),mvKeysRight,mDescriptorsRight);
    }

    As the code above shows, feature extraction in ORB-SLAM2 boils down to ORBextractor's overloaded operator(). Let us instrument the important parts of that function and time each one.

    Important note: timing code execution.

    There are many ways to time a piece of code, for example:

    #include <ctime>
    #include <iostream>

    clock_t begin = clock();
    //...
    clock_t end = clock();
    // note: cast before dividing, otherwise this is integer division
    cout << "execute time = " << double(end - begin) / CLOCKS_PER_SEC << "s" << endl;

    However, when I used this method to time multithreaded summation in [Original] C++11 Parallel Computing: Array Summation, I found it misbehaves for multithreaded code: clock() measures CPU time accumulated across all threads, not wall-clock time, so parallel code can even appear slower than the serial version. Since mulberryAR currently targets iOS, I use the iOS way of measuring time instead; and because Foundation cannot be used directly from a C++ file, I use the corresponding CoreFoundation API (a portable std::chrono alternative is sketched after the snippet below).

    #include <CoreFoundation/CoreFoundation.h>

    CFAbsoluteTime beginTime = CFAbsoluteTimeGetCurrent();
    CFDateRef beginDate = CFDateCreate(kCFAllocatorDefault, beginTime);
    // ...
    CFAbsoluteTime endTime = CFAbsoluteTimeGetCurrent();
    CFDateRef endDate = CFDateCreate(kCFAllocatorDefault, endTime);
    CFTimeInterval timeInterval = CFDateGetTimeIntervalSinceDate(endDate, beginDate);
    cout << "execute time = " << (double)(timeInterval) * 1000.0 << "ms" << endl;
    // CFDateCreate returns retained objects, so release them
    CFRelease(beginDate);
    CFRelease(endDate);
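
    For reference, a portable way to measure wall-clock time that is also safe for multithreaded code is std::chrono::steady_clock; a minimal sketch (not the code used in mulberryAR):

    #include <chrono>
    #include <iostream>

    auto begin = std::chrono::steady_clock::now();
    // ...
    auto end = std::chrono::steady_clock::now();
    // steady_clock measures elapsed wall time and is monotonic
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - begin).count();
    std::cout << "execute time = " << ms << "ms" << std::endl;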

    Inserting the CoreFoundation timing code into operator() gives the instrumented function below, which times three parts: ComputePyramid, ComputeKeyPointsOctTree, and ComputeDescriptors:

    void ORBextractor::operator()( InputArray _image, InputArray _mask, vector<KeyPoint>& _keypoints,
                          OutputArray _descriptors)
    { 
        if(_image.empty())
            return;
    
        Mat image = _image.getMat();
        assert(image.type() == CV_8UC1 );
    
        // 1. time ComputePyramid (building the scale pyramid)
        CFAbsoluteTime beginComputePyramidTime = CFAbsoluteTimeGetCurrent();
        CFDateRef computePyramidBeginDate = CFDateCreate(kCFAllocatorDefault, beginComputePyramidTime);
        // Pre-compute the scale pyramid
        ComputePyramid(image);
        CFAbsoluteTime endComputePyramidTime = CFAbsoluteTimeGetCurrent();
        CFDateRef computePyramidEndDate = CFDateCreate(kCFAllocatorDefault, endComputePyramidTime);
        CFTimeInterval computePyramidTimeInterval = CFDateGetTimeIntervalSinceDate(computePyramidEndDate, computePyramidBeginDate);
        cout << "ComputePyramid time = " << (double)(computePyramidTimeInterval) * 1000.0 << endl;
    
        vector < vector<KeyPoint> > allKeypoints;
        
        // 2. time ComputeKeyPointsOctTree (keypoint detection)
        CFAbsoluteTime beginComputeKeyPointsTime = CFAbsoluteTimeGetCurrent();
        CFDateRef computeKeyPointsBeginDate = CFDateCreate(kCFAllocatorDefault, beginComputeKeyPointsTime);
        
        ComputeKeyPointsOctTree(allKeypoints);
        //ComputeKeyPointsOld(allKeypoints);
        CFAbsoluteTime endComputeKeyPointsTime = CFAbsoluteTimeGetCurrent();
        CFDateRef computeKeyPointsEndDate = CFDateCreate(kCFAllocatorDefault, endComputeKeyPointsTime);
        CFTimeInterval computeKeyPointsTimeInterval = CFDateGetTimeIntervalSinceDate(computeKeyPointsEndDate, computeKeyPointsBeginDate);
        cout << "ComputeKeyPointsOctTree time = " << (double)(computeKeyPointsTimeInterval) * 1000.0 << endl;
    
        Mat descriptors;
    
        int nkeypoints = 0;
        for (int level = 0; level < nlevels; ++level)
            nkeypoints += (int)allKeypoints[level].size();
        if( nkeypoints == 0 )
            _descriptors.release();
        else
        {
            _descriptors.create(nkeypoints, 32, CV_8U);
            descriptors = _descriptors.getMat();
        }
    
        _keypoints.clear();
        _keypoints.reserve(nkeypoints);
    
        int offset = 0;
        
        // 3. time the descriptor computation loop
        CFAbsoluteTime beginComputeDescriptorsTime = CFAbsoluteTimeGetCurrent();
        CFDateRef computeDescriptorsBeginDate = CFDateCreate(kCFAllocatorDefault, beginComputeDescriptorsTime);
        for (int level = 0; level < nlevels; ++level)
        {
            vector<KeyPoint>& keypoints = allKeypoints[level];
            int nkeypointsLevel = (int)keypoints.size();
    
            if(nkeypointsLevel==0)
                continue;
    
            // preprocess the resized image
            Mat workingMat = mvImagePyramid[level].clone();
            GaussianBlur(workingMat, workingMat, cv::Size(7, 7), 2, 2, BORDER_REFLECT_101);
    
            // Compute the descriptors
            Mat desc = descriptors.rowRange(offset, offset + nkeypointsLevel);
            computeDescriptors(workingMat, keypoints, desc, pattern);
    
            offset += nkeypointsLevel;
    
            // Scale keypoint coordinates
            if (level != 0)
            {
                float scale = mvScaleFactor[level]; //getScale(level, firstLevel, scaleFactor);
                for (vector<KeyPoint>::iterator keypoint = keypoints.begin(),
                     keypointEnd = keypoints.end(); keypoint != keypointEnd; ++keypoint)
                    keypoint->pt *= scale;
            }
            // And add the keypoints to the output
            _keypoints.insert(_keypoints.end(), keypoints.begin(), keypoints.end());
        }
        CFAbsoluteTime endComputeDescriptorsTime = CFAbsoluteTimeGetCurrent();
        CFDateRef computeDescriptorsEndDate = CFDateCreate(kCFAllocatorDefault, endComputeDescriptorsTime);
        CFTimeInterval computeDescriptorsTimeInterval = CFDateGetTimeIntervalSinceDate(computeDescriptorsEndDate, computeDescriptorsBeginDate);
        cout << "ComputeDescriptors time = " << (double)(computeDescriptorsTimeInterval) * 1000.0 << endl;
    }
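
    Since the same CoreFoundation boilerplate appears three times, it can be folded into a small RAII helper. The sketch below is hypothetical (ScopeTimer is my name, not part of ORB-SLAM2 or mulberryAR); it also skips the CFDate objects, since CFAbsoluteTime is already a double holding seconds:

    #include <CoreFoundation/CoreFoundation.h>
    #include <iostream>
    #include <string>

    // times a scope and prints the elapsed wall-clock milliseconds on destruction
    struct ScopeTimer {
        std::string name;
        CFAbsoluteTime begin;
        explicit ScopeTimer(std::string n)
            : name(std::move(n)), begin(CFAbsoluteTimeGetCurrent()) {}
        ~ScopeTimer() {
            std::cout << name << " time = "
                      << (CFAbsoluteTimeGetCurrent() - begin) * 1000.0 << std::endl;
        }
    };

    // usage inside operator():
    // {
    //     ScopeTimer t("ComputePyramid");
    //     ComputePyramid(image);
    // }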

    Now run mulberryAR in the iPhone 7 simulator on the recorded image sequence. The results are as follows (only the first three frames are shown):

    [images: per-part timing for the first three frames]

    The results show that the optimization effort should focus on ComputeKeyPointsOctTree and ComputeDescriptors.

    0x02 - Optimization Approach for ORB-SLAM2 Feature Extraction


    ComputePyramid, ComputeKeyPointsOctTree, and ComputeDescriptors all perform the same operation for each level of the image pyramid, so the per-level work can be parallelized. Following this idea, I modified all three parts.

    1. Parallelizing ComputePyramid

    This function cannot be parallelized for now: computing level n of the image pyramid depends on level n-1 (see the sketch below). Moreover, it accounts for a small share of the total extraction time, so parallelizing it would not gain much anyway.
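
    To see the dependency, the core of ComputePyramid looks roughly like this (a simplified sketch, not the verbatim ORB-SLAM2 code; mvInvScaleFactor holds the precomputed inverse scale per level):

    mvImagePyramid[0] = image;
    for (int level = 1; level < nlevels; ++level)
    {
        // level n is produced by resizing level n-1,
        // so the iterations cannot run independently
        float scale = mvInvScaleFactor[level];
        cv::Size sz(cvRound((float)image.cols*scale), cvRound((float)image.rows*scale));
        cv::resize(mvImagePyramid[level-1], mvImagePyramid[level], sz, 0, 0, cv::INTER_LINEAR);
    }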

    2. Parallelizing ComputeKeyPointsOctTree

    Parallelizing this function is easy: move the body of its for(int i = 0; i < nlevels; ++i) loop into a separate function and run each level in its own thread. Without further ado, the code:

    // `thread` here is std::thread (requires <thread>)
    void ORBextractor::ComputeKeyPointsOctTree(vector<vector<KeyPoint> >& allKeypoints)
    {
        allKeypoints.resize(nlevels);
    
        // detect keypoints on every pyramid level in parallel, one thread per level
        vector<thread> computeKeyPointsThreads;
        
        for (int i = 0; i < nlevels; ++i) {
            computeKeyPointsThreads.push_back(thread(&ORBextractor::ComputeKeyPointsOctTreeEveryLevel, this, i, std::ref(allKeypoints)));
        }
        
        for (int i = 0; i < nlevels; ++i) {
            computeKeyPointsThreads[i].join();
        }
        
        // compute orientations, again one thread per level
        vector<thread> computeOriThreads;
        for (int level = 0; level < nlevels; ++level) {
            computeOriThreads.push_back(thread(computeOrientation, mvImagePyramid[level], std::ref(allKeypoints[level]), umax));
        }
        
        for (int level = 0; level < nlevels; ++level) {
            computeOriThreads[level].join();
        }
    }
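
    Each level touches only its own image, its own cell grid, and allKeypoints[level], so the per-level threads share no mutable state, which is what makes this split safe. The cost is 2*nlevels short-lived threads per frame (16 with the default 8 levels); that is acceptable here, but a thread pool would be the natural replacement if thread-creation overhead ever showed up in the profiles.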

    where the helper ComputeKeyPointsOctTreeEveryLevel is:

    void ORBextractor::ComputeKeyPointsOctTreeEveryLevel(int level, vector<vector<KeyPoint> >& allKeypoints)
    {
        const float W = 30;
        
        const int minBorderX = EDGE_THRESHOLD-3;
        const int minBorderY = minBorderX;
        const int maxBorderX = mvImagePyramid[level].cols-EDGE_THRESHOLD+3;
        const int maxBorderY = mvImagePyramid[level].rows-EDGE_THRESHOLD+3;
        
        vector<cv::KeyPoint> vToDistributeKeys;
        vToDistributeKeys.reserve(nfeatures*10);
        
        const float width = (maxBorderX-minBorderX);
        const float height = (maxBorderY-minBorderY);
        
        const int nCols = width/W;
        const int nRows = height/W;
        const int wCell = ceil(width/nCols);
        const int hCell = ceil(height/nRows);
        
        for(int i=0; i<nRows; i++)
        {
            const float iniY =minBorderY+i*hCell;
            float maxY = iniY+hCell+6;
            
            if(iniY>=maxBorderY-3)
                continue;
            if(maxY>maxBorderY)
                maxY = maxBorderY;
            
            for(int j=0; j<nCols; j++)
            {
                const float iniX =minBorderX+j*wCell;
                float maxX = iniX+wCell+6;
                if(iniX>=maxBorderX-6)
                    continue;
                if(maxX>maxBorderX)
                    maxX = maxBorderX;
                
                vector<cv::KeyPoint> vKeysCell;
                FAST(mvImagePyramid[level].rowRange(iniY,maxY).colRange(iniX,maxX),
                     vKeysCell,iniThFAST,true);
                
                if(vKeysCell.empty())
                {
                    FAST(mvImagePyramid[level].rowRange(iniY,maxY).colRange(iniX,maxX),
                         vKeysCell,minThFAST,true);
                }
                
                if(!vKeysCell.empty())
                {
                    for(vector<cv::KeyPoint>::iterator vit=vKeysCell.begin(); vit!=vKeysCell.end();vit++)
                    {
                        (*vit).pt.x+=j*wCell;
                        (*vit).pt.y+=i*hCell;
                        vToDistributeKeys.push_back(*vit);
                    }
                }
                
            }
        }
        
        vector<KeyPoint> & keypoints = allKeypoints[level];
        keypoints.reserve(nfeatures);
        
        keypoints = DistributeOctTree(vToDistributeKeys, minBorderX, maxBorderX,
                                      minBorderY, maxBorderY,mnFeaturesPerLevel[level], level);
        
        const int scaledPatchSize = PATCH_SIZE*mvScaleFactor[level];
        
        // Add border to coordinates and scale information
        const int nkps = keypoints.size();
        for(int i=0; i<nkps ; i++)
        {
            keypoints[i].pt.x+=minBorderX;
            keypoints[i].pt.y+=minBorderY;
            keypoints[i].octave=level;
            keypoints[i].size = scaledPatchSize;
        }
    }

    Testing in the iPhone 7 simulator gives the following results (first five frames):

    [image: ComputeKeyPointsOctTree timing, serial vs. parallel]

    With the parallel version, ComputeKeyPointsOctTree runs 2 to 3 times faster.

    3. Parallelizing the ComputeDescriptors part

    I call this a "part" rather than a "function" because, compared with ComputeKeyPointsOctTree, it spans more functions and touches more shared variables; only after sorting out how they interact can it be parallelized safely (see the note after the code).

    I will not belabor this either; here is the modified parallel code:

    vector<thread> computeDescThreads;
    vector<vector<KeyPoint> > keypointsEveryLevel;
    keypointsEveryLevel.resize(nlevels);
    // each level's offset depends on the keypoint counts of all earlier levels,
    // so it is accumulated here instead of inside ComputeDescriptorsEveryLevel
    for (int level = 0; level < nlevels; ++level) {
        computeDescThreads.push_back(thread(&ORBextractor::ComputeDescriptorsEveryLevel, this, level, std::ref(allKeypoints), descriptors, offset, std::ref(keypointsEveryLevel[level])));
        int keypointsNum = (int)allKeypoints[level].size();
        offset += keypointsNum;
    }
    
    for (int level = 0; level < nlevels; ++level) {
        computeDescThreads[level].join();
    }
    // _keypoints must be appended in level order, so the insertion
    // also cannot happen inside ComputeDescriptorsEveryLevel
    for (int level = 0; level < nlevels; ++level) {
        _keypoints.insert(_keypoints.end(), keypointsEveryLevel[level].begin(), keypointsEveryLevel[level].end());
    }
    
    // where ComputeDescriptorsEveryLevel is defined as follows
    void ORBextractor::ComputeDescriptorsEveryLevel(int level, std::vector<std::vector<KeyPoint> > &allKeypoints, const Mat& descriptors, int offset, vector<KeyPoint>& _keypoints)
    {
        vector<KeyPoint>& keypoints = allKeypoints[level];
        int nkeypointsLevel = (int)keypoints.size();
        
        if(nkeypointsLevel==0)
            return;
        
        // preprocess the resized image
        Mat workingMat = mvImagePyramid[level].clone();
        GaussianBlur(workingMat, workingMat, cv::Size(7, 7), 2, 2, BORDER_REFLECT_101);
        
        // Compute the descriptors
        Mat desc = descriptors.rowRange(offset, offset + nkeypointsLevel);
        computeDescriptors(workingMat, keypoints, desc, pattern);
        
    //    offset += nkeypointsLevel;
        
        // Scale keypoint coordinates
        if (level != 0)
        {
            float scale = mvScaleFactor[level]; //getScale(level, firstLevel, scaleFactor);
            for (vector<KeyPoint>::iterator keypoint = keypoints.begin(),
                 keypointEnd = keypoints.end(); keypoint != keypointEnd; ++keypoint)
                keypoint->pt *= scale;
        }
        // And add the keypoints to the output
    //    _keypoints.insert(_keypoints.end(), keypoints.begin(), keypoints.end());
        _keypoints = keypoints;
    }
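
    A note on why this is thread-safe: cv::Mat's copy constructor performs a shallow copy, so the descriptors value each thread receives shares its pixel buffer with _descriptors, and desc = descriptors.rowRange(...) is a view into that buffer. Because the offsets are accumulated serially before the threads start and each level writes a disjoint row range, no two threads ever write the same bytes.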

    Testing in the iPhone 7 simulator gives the following results (first five frames):

    [image: ComputeDescriptors timing, serial vs. parallel]

    With the parallel version, ComputeDescriptors also runs 2 to 3 times faster.

    0x03 - Analysis of the Parallelization Results


    Section 0x02 already compared the results of each individual optimization. Here is a brief look at the overall effect, again comparing the first five frames in the iPhone 7 simulator:

    [image: overall ORB extraction timing, serial vs. parallel]

    Overall, ORB feature extraction is now 2 to 3 times faster, and its share of TrackMonocular has dropped considerably, so it no longer needs to be the focus of performance work. Future posts will optimize other parts of ORB-SLAM2.

  • Original post: https://www.cnblogs.com/polobymulberry/p/6270399.html