  • A Detailed Introduction to Mean Shift

     Mean Shift is usually translated as "mean shift" (均值漂移). It has found fairly wide use in clustering, image smoothing, image segmentation, and tracking. Since my current research is on tracking, this post mainly describes how Mean Shift is used for object tracking, and along the way gives a reasonably complete introduction to the method.

         (Parts of what follows are adapted from senior student Chang Feng's "Mean Shift 概述".) The concept of Mean Shift was first proposed by Fukunaga et al. in a 1975 paper on estimating the gradient of a probability density function (The Estimation of the Gradient of a Density Function, with Applications in Pattern Recognition). Its original meaning was exactly what the name says: the mean of the shift vectors. Mean Shift was a noun denoting a vector. As the theory developed, its meaning changed as well: when we speak of the Mean Shift algorithm today, we usually mean an iterative procedure: compute the shifted mean at the current point, move the point to that mean, take it as the new starting point, and repeat until a convergence condition is met.
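    To make that loop concrete, here is a minimal sketch in C++ for a 1-D sample set with a flat (uniform) kernel; the function names, the bandwidth, and the convergence threshold are illustrative assumptions, not part of any library:

    #include <cmath>
    #include <vector>

    // One mean-shift step: the mean of all samples within `bandwidth` of x.
    double ShiftedMean(const std::vector<double>& samples, double x, double bandwidth)
    {
        double sum = 0.0;
        int count = 0;
        for (int i = 0; i < (int)samples.size(); i++)
            if (std::fabs(samples[i] - x) <= bandwidth) { sum += samples[i]; count++; }
        return count > 0 ? sum / count : x; // no neighbours: stay put
    }

    // Iterate until the point stops moving; it converges to a mode of the
    // kernel density estimate of the samples.
    double MeanShiftMode(const std::vector<double>& samples, double start,
                         double bandwidth, double eps)
    {
        double x = start;
        for (int iter = 0; iter < 100; iter++) {
            double next = ShiftedMean(samples, x, bandwidth);
            if (std::fabs(next - x) < eps) break; // converged
            x = next;
        }
        return x;
    }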

    For a long time afterwards, however, Mean Shift attracted little attention, until twenty years later, in 1995, when another important paper on Mean Shift (Mean shift, mode seeking, and clustering) was published. In it, Yizong Cheng generalized the basic Mean Shift algorithm in two respects. First, he defined a family of kernel functions, so that a sample's contribution to the mean shift vector depends on its distance from the point being shifted. Second, he introduced a weight coefficient, so that different sample points can carry different importance. Together these greatly widened the applicability of Mean Shift. Cheng also pointed out fields where Mean Shift could be applied and gave concrete examples.
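    In Cheng's generalized form (written here in standard notation rather than the paper's exact symbols), the mean shift vector at a point $x$, with kernel $K$, bandwidth $h$, and per-sample weight $w$, is

    $$ m_h(x) = \frac{\sum_{i=1}^{n} K\!\left(\frac{x_i - x}{h}\right) w(x_i)\, x_i}{\sum_{i=1}^{n} K\!\left(\frac{x_i - x}{h}\right) w(x_i)} - x $$

    The algorithm moves $x$ to $x + m_h(x)$ and repeats; with a flat kernel and unit weights this reduces to the original 1975 definition.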

     

    Comaniciu et al. (for example in Mean-shift Blob Tracking through Scale Space) further cast non-rigid object tracking as a Mean Shift optimization problem, which allows tracking to run in real time. By now, tracking with Mean Shift is quite mature.

     

    Object tracking is not a new problem; quite a few people in computer vision are working on it. Tracking means using the object's known position in one image frame to find its position in the next frame.

    The application of Mean Shift to tracking is presented below mainly in the form of code.

    void CObjectTracker::ObjeckTrackerHandlerByUser(IplImage *frame)//tracking entry point, one call per frame
    {
       m_cActiveObject = 0;

       if (m_sTrackingObjectTable[m_cActiveObject].Status)
       {
        if (!m_sTrackingObjectTable[m_cActiveObject].assignedAnObject)
        {
         FindHistogram(frame,m_sTrackingObjectTable[m_cActiveObject].initHistogram);
         m_sTrackingObjectTable[m_cActiveObject].assignedAnObject = true;
        }
        else
        {
         FindNextLocation(frame);//mean shift iteration: find the object's next location

         DrawObjectBox(frame);
        }
       }

    }

     

    void CObjectTracker::FindNextLocation(IplImage *frame)
    {
    int i, j, opti, optj;
    SINT16 scale[3]={-3, 3, 0};
    FLOAT32 dist, optdist;
    SINT16 h, w, optX, optY;

    //try no-scaling
    FindNextFixScale(frame);//locate the object at the current scale
    optdist=LastDist;
    optX=m_sTrackingObjectTable[m_cActiveObject].X;
    optY=m_sTrackingObjectTable[m_cActiveObject].Y;

    //try one of the 9 possible scalings (+/-3 pixels per dimension)
    i=rand()*2/RAND_MAX;//random index into scale[]; with MSVC's RAND_MAX this is almost always 0 or 1
    j=rand()*2/RAND_MAX;
    h=m_sTrackingObjectTable[m_cActiveObject].H;
    w=m_sTrackingObjectTable[m_cActiveObject].W;
    if(h+scale[i]>10 && w+scale[j]>10 && h+scale[i]<m_nImageHeight/2 && w+scale[j]<m_nImageWidth/2)
    {
       m_sTrackingObjectTable[m_cActiveObject].H=h+scale[i];
       m_sTrackingObjectTable[m_cActiveObject].W=w+scale[j];
       FindNextFixScale(frame);
       if( (dist=LastDist) < optdist ) //scaling is better
       {
        optdist=dist;
    //    printf("Next%f->\n", dist);
       }
       else //no scaling is better: restore the unscaled result
       {
        m_sTrackingObjectTable[m_cActiveObject].X=optX;
        m_sTrackingObjectTable[m_cActiveObject].Y=optY;
        m_sTrackingObjectTable[m_cActiveObject].H=h;
        m_sTrackingObjectTable[m_cActiveObject].W=w;
       }
    }
    TotalDist+=optdist; //accumulate the latest distance
    // printf("\n");
    }

    Mean shift is still explained here in the tracking setting. First, the principle in mathematical form. 1. Target model. The algorithm describes the target by a weighted probability distribution over feature values, which is the usual way to describe a target in pattern recognition, as opposed to the state equations used in automatic control theory. The target model has m feature values (they can be understood as quantized pixel grey values):

    $$ q_u = C \sum_{i=1}^{n} k\!\left( \left\| \frac{x_0 - x_i}{H} \right\|^2 \right) \delta\big[ b(x_i) - u \big], \qquad u = 1, \dots, m $$

    where $x_0$ is the vector of the window centre (an RGB vector or a grey value) and $x_i$ is the vector of the i-th point in the window. C is a normalization constant guaranteeing $q_1 + q_2 + \dots + q_m = 1$, and H is the bandwidth vector of the kernel. m is the number of feature values; in image-processing terms it is the number of grey levels, so the feature value u is the corresponding grey level. The function $\delta$ is an impulse that lets only pixels whose feature value $b(x_i)$ equals u contribute to the distribution, so $q_u$ can be understood as a kernel-weighted frequency of grey value u.

    2. Candidate model: the match candidate is likewise described by a weighted probability distribution over feature values,

    $$ p_u(y) = C_h \sum_{i=1}^{n_h} k\!\left( \left\| \frac{y - x_i}{H_h} \right\|^2 \right) \delta\big[ b(x_i) - u \big] $$

    where y is the centre of the candidate window, $x_i$ is the vector of the i-th point in the candidate window, $H_h$ is the kernel bandwidth vector of the candidate window, and $C_h$ is the normalization constant of the candidate window's feature distribution.

    3. Similarity between the candidate and the target model: the Bhattacharyya coefficient can be used as the similarity function,

    $$ \rho(y) = \sum_{u=1}^{m} \sqrt{p_u(y)\, q_u} $$

    with the corresponding distance $d(y) = \sqrt{1 - \rho(y)}$, which is exactly what FindDistance computes in the code below.

    4. Matching is the search for the maximum of the similarity function, which Mean Shift performs as a gradient-ascent style iteration. First expand $\rho(y)$ in a Taylor series around $p_u(y_0)$ and keep the first two terms:

    $$ \rho(y) \approx \frac{1}{2} \sum_{u=1}^{m} \sqrt{p_u(y_0)\, q_u} + \frac{1}{2} \sum_{u=1}^{m} p_u(y) \sqrt{\frac{q_u}{p_u(y_0)}} $$

    For $\rho(y)$ to climb toward its maximum, the search direction for y only needs to agree with the gradient direction. Differentiating at $y_0$, the gradient direction is determined by the weights

    $$ w_i = \sum_{u=1}^{m} \sqrt{\frac{q_u}{p_u(y_0)}}\; \delta\big[ b(x_i) - u \big], $$

    so if $y_1$ is chosen as follows, then $y_1 - y_0$ is aligned with the gradient:

    $$ y_1 = \frac{\sum_{i=1}^{n_h} x_i\, w_i\, g\!\left( \left\| \frac{y_0 - x_i}{H_h} \right\|^2 \right)}{\sum_{i=1}^{n_h} w_i\, g\!\left( \left\| \frac{y_0 - x_i}{H_h} \right\|^2 \right)}, \qquad g(x) = -k'(x). $$

    With the Epanechnikov kernel, g is constant and this reduces to the plain weighted centroid $y_1 = \sum_i x_i w_i / \sum_i w_i$, which is what FindWightsAndCOM computes in the code below.

    That is the mathematics behind mean shift; a verbal description was given in the previous post. Tracking with mean shift is a deterministic algorithm, whereas the particle filter is a statistical method. Compared with a particle filter, the mean shift tracker tends to run in better real time, but in theory its tracking accuracy is still slightly inferior. The essence of mean shift tracking is to determine the target's next position from the corresponding template, iterating until the new centre point (the target's new position) is found. The tracking code is as follows:

    /**********************************************************************

    Bilkent University:

    Mean-shift Tracker based Moving Object Tracker in Video

    Version: 1.0

    Compiler: Microsoft Visual C++ 6.0 (tested in both debug and release
              mode)

    Modified by Mr Zhou

    **********************************************************************/
    #include "ObjectTracker.h"
    #include "utils.h"
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>
    /*
    #define GetRValue(rgb)   ((UBYTE8) (rgb))
    #define GetGValue(rgb)   ((UBYTE8) (((ULONG_32) (rgb)) >> 8))
    #define GetBValue(rgb)   ((UBYTE8) ((rgb) >> 16))
    */
    //#define RGB(r, g ,b) ((ULONG_32) (((UBYTE8) (r) | ((UBYTE8) (g) << 8)) | (((ULONG_32) (UBYTE8) (b)) << 16)))

    #define min(a, b) (((a) < (b)) ? (a) : (b))

    #define max(a, b) (((a) > (b)) ? (a) : (b))


    #define MEANSHIFT_ITARATION_NO 5
    #define DISTANCE_ITARATION_NO 1
    #define ALPHA 1
    #define EDGE_DETECT_TRESHOLD 32
    //////////////////////////////////////////////////
    /*
    1. Given the object's initial position and size, compute its histogram in the image.
    2. For each new image, iterate until convergence:
       compute the histogram of the corresponding region of the new image;
       compare it with the target histogram and compute the weights;
       from the weights, compute the centroid of the region;
       move the object position to the centroid.

    The histogram has two halves, each of size 4096:
    the 256*256*256 RGB combinations are reduced to 16*16*16 = 4096.
    If a point of the object region is an edge point, it is counted in the
    second half of the histogram, otherwise in the first half.
    */

    //////////////////////////////////////////////////

    CObjectTracker::CObjectTracker(INT32 imW,INT32 imH,IMAGE_TYPE eImageType)
    {

    m_nImageWidth = imW;
    m_nImageHeight = imH;
    m_eIMAGE_TYPE = eImageType;
    m_cSkipValue = 0;

    for (UBYTE8 i=0;i<MAX_OBJECT_TRACK_NUMBER;i++)//initialize every object slot
    {
       m_sTrackingObjectTable[i].Status = false;
          for(SINT16 j=0;j<HISTOGRAM_LENGTH;j++)
        m_sTrackingObjectTable[i].initHistogram[j] = 0;
    }

    m_nFrameCtr = 0;
    m_uTotalTime = 0;
    m_nMaxEstimationTime = 0;
    m_cActiveObject = 0;
    TotalDist=0.0;
    LastDist=0.0;

    switch (eImageType)
    {
       case MD_RGBA:
        m_cSkipValue = 4 ;
    break ;
       case MD_RGB:
    m_cSkipValue = 3 ;
       break ;
    };
    };

    CObjectTracker::~CObjectTracker()
    {

    }
    //returns pixel values in format |0|B|G|R| wrt to (x,y)
    /*
    ULONG_32 CObjectTracker::GetPixelValues(UBYTE8 *frame,SINT16 x,SINT16 y)
    {
    ULONG_32 pixelValues = 0;

    pixelValues = *(frame+(y*m_nImageWidth+x)*m_cSkipValue+2)|//0BGR
                   *(frame+(y*m_nImageWidth+x)*m_cSkipValue+1) << 8|
          *(frame+(y*m_nImageWidth+x)*m_cSkipValue) << 16;


    return(pixelValues);

    }*/

    //set RGB components wrt to (x,y)
    void CObjectTracker::SetPixelValues(IplImage *r,IplImage *g,IplImage *b,ULONG_32 pixelValues,SINT16 x,SINT16 y)
    {
    // *(frame+(y*m_nImageWidth+x)*m_cSkipValue+2) = UBYTE8(pixelValues & 0xFF);
    // *(frame+(y*m_nImageWidth+x)*m_cSkipValue+1) = UBYTE8((pixelValues >> 8) & 0xFF);
    // *(frame+(y*m_nImageWidth+x)*m_cSkipValue) = UBYTE8((pixelValues >> 16) & 0xFF);
    //setpix32f
    setpix8c(r, y, x, UBYTE8(pixelValues & 0xFF));
    setpix8c(g, y, x, UBYTE8((pixelValues >> 8) & 0xFF));
    setpix8c(b, y, x, UBYTE8((pixelValues >> 16) & 0xFF));
    }

    // returns box color
    ULONG_32 CObjectTracker::GetBoxColor()
    {
    ULONG_32 pixelValues = 0;

    switch(m_cActiveObject)
    {
    case 0:
    pixelValues = RGB(255,0,0);
    break;
    case 1:
    pixelValues = RGB(0,255,0);
    break;
    case 2:
    pixelValues = RGB(0,0,255);
    break;
    case 3:
    pixelValues = RGB(255,255,0);
    break;
    case 4:
    pixelValues = RGB(255,0,255);
    break;
    case 5:
    pixelValues = RGB(0,255,255);
    break;
    case 6:
    pixelValues = RGB(255,255,255);
    break;
    case 7:
    pixelValues = RGB(128,0,128);
    break;
    case 8:
    pixelValues = RGB(128,128,0);
    break;
    case 9:
    pixelValues = RGB(128,128,128);
    break;
    case 10:
    pixelValues = RGB(255,128,0);
    break;
    case 11:
    pixelValues = RGB(0,128,128);
    break;
    case 12:
    pixelValues = RGB(123,50,10);
    break;
    case 13:
    pixelValues = RGB(10,240,126);
    break;
    case 14:
    pixelValues = RGB(0,128,255);
    break;
    case 15:
    pixelValues = RGB(128,200,20);
    break;
    default:
    break;
    }

    return(pixelValues);


    }
    //initialize the parameters of one object
    void CObjectTracker::ObjectTrackerInitObjectParameters(SINT16 x,SINT16 y,SINT16 Width,SINT16 Height)
    {

       m_cActiveObject = 0;

       m_sTrackingObjectTable[m_cActiveObject].X = x;
       m_sTrackingObjectTable[m_cActiveObject].Y = y;
       m_sTrackingObjectTable[m_cActiveObject].W = Width;
       m_sTrackingObjectTable[m_cActiveObject].H = Height;

       m_sTrackingObjectTable[m_cActiveObject].vectorX = 0;
       m_sTrackingObjectTable[m_cActiveObject].vectorY = 0;


       m_sTrackingObjectTable[m_cActiveObject].Status = true;
       m_sTrackingObjectTable[m_cActiveObject].assignedAnObject = false;
    }

    //run one tracking step
    void CObjectTracker::ObjeckTrackerHandlerByUser(IplImage *frame)
    {
       m_cActiveObject = 0;

       if (m_sTrackingObjectTable[m_cActiveObject].Status)
       {
        if (!m_sTrackingObjectTable[m_cActiveObject].assignedAnObject)
        {
     //compute the object's initial histogram
         FindHistogram(frame,m_sTrackingObjectTable[m_cActiveObject].initHistogram);
               m_sTrackingObjectTable[m_cActiveObject].assignedAnObject = true;
        }
        else
        {
     //search for the object in the image
         FindNextLocation(frame);   

         DrawObjectBox(frame);
        }
       }

    }
    //Extracts the histogram of box
    //frame: the image
    //histogram: the output histogram
    //Computes the histogram of the current object region in frame.
    //The histogram has two halves, each of size 4096:
    //the 256*256*256 RGB combinations are reduced to 16*16*16 = 4096.
    //If a point of the object region is an edge point, it is counted in the
    //second half of the histogram, otherwise in the first half.
    void CObjectTracker::FindHistogram(IplImage *frame, FLOAT32 (*histogram))
    {
    SINT16 i = 0;
    SINT16 x = 0;
    SINT16 y = 0;
    UBYTE8 E = 0;
    UBYTE8 qR = 0,qG = 0,qB = 0;
    // ULONG_32 pixelValues = 0;
    UINT32 numberOfPixel = 0;
    IplImage* r, * g, * b;

    r = cvCreateImage( cvGetSize(frame), frame->depth, 1 );
    g = cvCreateImage( cvGetSize(frame), frame->depth, 1 );
    b = cvCreateImage( cvGetSize(frame), frame->depth, 1 );
    cvCvtPixToPlane( frame, b, g, r, NULL ); //divide color image into separate planes r, g, b. The exact sequence doesn't matter.


    for (i=0;i<HISTOGRAM_LENGTH;i++) //reset all histogram
       histogram[i] = 0.0;

    //for all the pixels in the region
    for (y=max(m_sTrackingObjectTable[m_cActiveObject].Y-m_sTrackingObjectTable[m_cActiveObject].H/2,0);y<=min(m_sTrackingObjectTable[m_cActiveObject].Y+m_sTrackingObjectTable[m_cActiveObject].H/2,m_nImageHeight-1);y++)
       for (x=max(m_sTrackingObjectTable[m_cActiveObject].X-m_sTrackingObjectTable[m_cActiveObject].W/2,0);x<=min(m_sTrackingObjectTable[m_cActiveObject].X+m_sTrackingObjectTable[m_cActiveObject].W/2,m_nImageWidth-1);x++)
       {
    //edge flag: does the grey-level difference to any of the 4 neighbours exceed the threshold?
        E = CheckEdgeExistance(r, g, b,x,y);

        qR = (UBYTE8)pixval8c( r, y, x )/16;//quantize R component
        qG = (UBYTE8)pixval8c( g, y, x )/16;//quantize G component
        qB = (UBYTE8)pixval8c( b, y, x )/16;//quantize B component

    histogram[4096*E+256*qR+16*qG+qB] += 1; //accumulate; the edge bit selects the half (HISTOGRAM_LENGTH=8192)

        numberOfPixel++;

       }

    for (i=0;i<HISTOGRAM_LENGTH;i++) //normalize
       histogram[i] = histogram[i]/numberOfPixel;
    //for (i=0;i<HISTOGRAM_LENGTH;i++)
    //   printf("histogram[%d]=%f\n",i,histogram[i]);
         // printf("numberOfPixel=%d\n",numberOfPixel);
    cvReleaseImage(&r);
    cvReleaseImage(&g);
    cvReleaseImage(&b);

    }
    //Draw box around object
    void CObjectTracker::DrawObjectBox(IplImage *frame)
    {
    SINT16 x_diff = 0;
    SINT16 x_sum = 0;
    SINT16 y_diff = 0;
    SINT16 y_sum = 0;
    SINT16 x = 0;
    SINT16 y = 0;
    ULONG_32 pixelValues = 0;
    IplImage* r, * g, * b;

    r = cvCreateImage( cvGetSize(frame), frame->depth, 1 );
    g = cvCreateImage( cvGetSize(frame), frame->depth, 1 );
    b = cvCreateImage( cvGetSize(frame), frame->depth, 1 );
    cvCvtPixToPlane( frame, b, g, r, NULL );

    pixelValues = GetBoxColor();

    //the x left and right bounds
    x_sum = min(m_sTrackingObjectTable[m_cActiveObject].X+m_sTrackingObjectTable[m_cActiveObject].W/2+1,m_nImageWidth-1);//right bound
    x_diff = max(m_sTrackingObjectTable[m_cActiveObject].X-m_sTrackingObjectTable[m_cActiveObject].W/2,0);//left bound
    //the y upper and lower bounds
    y_sum = min(m_sTrackingObjectTable[m_cActiveObject].Y+m_sTrackingObjectTable[m_cActiveObject].H/2+1,m_nImageHeight-1);//lower bound
    y_diff = max(m_sTrackingObjectTable[m_cActiveObject].Y-m_sTrackingObjectTable[m_cActiveObject].H/2,0);//upper bound

    for (y=y_diff;y<=y_sum;y++)
    {
       SetPixelValues(r, g, b,pixelValues,x_diff,y);
       SetPixelValues(r, g, b,pixelValues,x_diff+1,y);

          SetPixelValues(r, g, b,pixelValues,x_sum-1,y);
          SetPixelValues(r, g, b,pixelValues,x_sum,y);
    }
    for (x=x_diff;x<=x_sum;x++)
    {
       SetPixelValues(r, g, b,pixelValues,x,y_diff);
          SetPixelValues(r, g, b,pixelValues,x,y_diff+1);

          SetPixelValues(r, g, b,pixelValues,x,y_sum-1);
          SetPixelValues(r, g, b,pixelValues,x,y_sum);
    }
    cvCvtPlaneToPix(b, g, r, NULL, frame);

    cvReleaseImage(&r);
    cvReleaseImage(&g);
    cvReleaseImage(&b);
    }
    // Computes weights and derives the new location of the object in the next frame
    //frame: the image
    //histogram: the current candidate histogram
    //Computes the weights and updates the object's coordinates to the weighted centroid
    void CObjectTracker::FindWightsAndCOM(IplImage *frame, FLOAT32 (*histogram))
    {
    SINT16 i = 0;
    SINT16 x = 0;
    SINT16 y = 0;
    UBYTE8 E = 0;
    FLOAT32 sumOfWeights = 0;
    SINT16 ptr = 0;
    UBYTE8 qR = 0,qG = 0,qB = 0;
    FLOAT32   newX = 0.0;
    FLOAT32   newY = 0.0;
    // ULONG_32 pixelValues = 0;
    IplImage* r, * g, * b;


    FLOAT32 *weights = new FLOAT32[HISTOGRAM_LENGTH];

    for (i=0;i<HISTOGRAM_LENGTH;i++)
    {
       if (histogram[i] >0.0 )
        weights[i] = m_sTrackingObjectTable[m_cActiveObject].initHistogram[i]/histogram[i]; //qu/pu(y0)
       else
        weights[i] = 0.0;
    }

    r = cvCreateImage( cvGetSize(frame), frame->depth, 1 );
    g = cvCreateImage( cvGetSize(frame), frame->depth, 1 );
    b = cvCreateImage( cvGetSize(frame), frame->depth, 1 );
    cvCvtPixToPlane( frame, b, g, r, NULL ); //divide color image into separate planes r, g, b. The exact sequence doesn't matter.

    for (y=max(m_sTrackingObjectTable[m_cActiveObject].Y-m_sTrackingObjectTable[m_cActiveObject].H/2,0);y<=min(m_sTrackingObjectTable[m_cActiveObject].Y+m_sTrackingObjectTable[m_cActiveObject].H/2,m_nImageHeight-1);y++)
       for (x=max(m_sTrackingObjectTable[m_cActiveObject].X-m_sTrackingObjectTable[m_cActiveObject].W/2,0);x<=min(m_sTrackingObjectTable[m_cActiveObject].X+m_sTrackingObjectTable[m_cActiveObject].W/2,m_nImageWidth-1);x++)
       {
        E = CheckEdgeExistance(r, g, b,x,y);

        qR = (UBYTE8)pixval8c( r, y, x )/16;
        qG = (UBYTE8)pixval8c( g, y, x )/16;
        qB = (UBYTE8)pixval8c( b, y, x )/16;

    ptr = 4096*E+256*qR+16*qG+qB; //bin index recomputed here; it could in fact be stored during FindHistogram

        newX += (weights[ptr]*x);
        newY += (weights[ptr]*y);

        sumOfWeights += weights[ptr];
       }

       if (sumOfWeights>0)
       {
        m_sTrackingObjectTable[m_cActiveObject].X = SINT16((newX/sumOfWeights) + 0.5); //update location
        m_sTrackingObjectTable[m_cActiveObject].Y = SINT16((newY/sumOfWeights) + 0.5);
       }

    cvReleaseImage(&r);
    cvReleaseImage(&g);
    cvReleaseImage(&b);
       delete[] weights, weights = 0;
    }
    // Returns the distance between two histograms.
    FLOAT32 CObjectTracker::FindDistance(FLOAT32 (*histogram))
    {
    SINT16 i = 0;
    FLOAT32 distance = 0;


    for(i=0;i<HISTOGRAM_LENGTH;i++)
       distance += FLOAT32(sqrt(DOUBLE64(m_sTrackingObjectTable[m_cActiveObject].initHistogram[i]
                      *histogram[i])));

    return(sqrt(1-distance));
    }
    //An alternative distance measurement
    FLOAT32 CObjectTracker::CompareHistogram(UBYTE8 (*histogram))
    {
    SINT16 i = 0;
    FLOAT32 distance = 0.0;
    FLOAT32 difference = 0.0;


    for (i=0;i<HISTOGRAM_LENGTH;i++)
    {
       difference = FLOAT32(m_sTrackingObjectTable[m_cActiveObject].initHistogram[i]
                             -histogram[i]);

       if (difference>0)
        distance += difference;
       else
        distance -= difference;
    }
    return(distance);
    }
    // Returns the edge information of the pixel at (x,y); assumes a large jump in value around edge pixels
    UBYTE8 CObjectTracker::CheckEdgeExistance(IplImage *r, IplImage *g, IplImage *b, SINT16 _x,SINT16 _y)
    {
    UBYTE8 E = 0;
    SINT16 GrayCenter = 0;
    SINT16 GrayLeft = 0;
    SINT16 GrayRight = 0;
    SINT16 GrayUp = 0;
    SINT16 GrayDown = 0;
    // ULONG_32 pixelValues = 0;

    // pixelValues = GetPixelValues(frame,_x,_y);
    GrayCenter = SINT16(3*pixval8c( r, _y, _x )+6*pixval8c( g, _y, _x )+pixval8c( b, _y, _x ));

    if (_x>0)
    {
    //   pixelValues = GetPixelValues(frame,_x-1,_y);

       GrayLeft = SINT16(3*pixval8c( r, _y, _x-1 )+6*pixval8c( g, _y, _x-1 )+pixval8c( b, _y, _x-1 ));
    }

    if (_x < (m_nImageWidth-1))
    {
    //   pixelValues = GetPixelValues(frame,_x+1,_y);

          GrayRight = SINT16(3*pixval8c( r, _y, _x+1 )+6*pixval8c( g, _y, _x+1 )+pixval8c( b, _y, _x+1 ));
    }

    if (_y>0)
    {
    //   pixelValues = GetPixelValues(frame,_x,_y-1);

          GrayUp = SINT16(3*pixval8c( r, _y-1, _x )+6*pixval8c( g, _y-1, _x )+pixval8c( b, _y-1, _x ));
    }

    if (_y<(m_nImageHeight-1))
    {
    //   pixelValues = GetPixelValues(frame,_x,_y+1);

       GrayDown = SINT16(3*pixval8c( r, _y+1, _x )+6*pixval8c( g, _y+1, _x )+pixval8c( b, _y+1, _x ));
    }

    if (abs((GrayCenter-GrayLeft)/10)>EDGE_DETECT_TRESHOLD)
       E = 1;

    if (abs((GrayCenter-GrayRight)/10)>EDGE_DETECT_TRESHOLD)
       E = 1;

    if (abs((GrayCenter-GrayUp)/10)>EDGE_DETECT_TRESHOLD)
          E = 1;

    if (abs((GrayCenter-GrayDown)/10)>EDGE_DETECT_TRESHOLD)
          E = 1;

    return(E);
    }
    // Alpha blending: used to update initial histogram by the current histogram
    void CObjectTracker::UpdateInitialHistogram(UBYTE8 (*histogram))
    {
    SINT16 i = 0;

    for (i=0; i<HISTOGRAM_LENGTH; i++)
       m_sTrackingObjectTable[m_cActiveObject].initHistogram[i] = ALPHA*m_sTrackingObjectTable[m_cActiveObject].initHistogram[i]
                                                                +(1-ALPHA)*histogram[i];

    }
    // Mean-shift iteration
    //frame: the image
    //Mean shift iteration: find the new centre point
    void CObjectTracker::FindNextLocation(IplImage *frame)
    {
    int i, j, opti, optj;
    SINT16 scale[3]={-3, 3, 0};
    FLOAT32 dist, optdist;
    SINT16 h, w, optX, optY;

    //try no-scaling
    FindNextFixScale(frame);
    optdist=LastDist;
    optX=m_sTrackingObjectTable[m_cActiveObject].X;
    optY=m_sTrackingObjectTable[m_cActiveObject].Y;

    //try one of the 9 possible scalings (+/-3 pixels per dimension)
    i=rand()*2/RAND_MAX;//random index into scale[]; with MSVC's RAND_MAX this is almost always 0 or 1
    j=rand()*2/RAND_MAX;
    h=m_sTrackingObjectTable[m_cActiveObject].H;
    w=m_sTrackingObjectTable[m_cActiveObject].W;
    if(h+scale[i]>10 && w+scale[j]>10 && h+scale[i]<m_nImageHeight/2 && w+scale[j]<m_nImageWidth/2)
    {
       m_sTrackingObjectTable[m_cActiveObject].H=h+scale[i];//apply the perturbation checked above
       m_sTrackingObjectTable[m_cActiveObject].W=w+scale[j];
       FindNextFixScale(frame);
       if( (dist=LastDist) < optdist ) //scaling is better
       {
        optdist=dist;
    //    printf("Next%f->\n", dist);
       }
       else //no scaling is better
       {
        m_sTrackingObjectTable[m_cActiveObject].X=optX;
        m_sTrackingObjectTable[m_cActiveObject].Y=optY;
        m_sTrackingObjectTable[m_cActiveObject].H=h;
        m_sTrackingObjectTable[m_cActiveObject].W=w;
       }
    };
    TotalDist+=optdist; //accumulate the latest distance
    // printf("\n");
    }

    void CObjectTracker::FindNextFixScale(IplImage *frame)
    {
    UBYTE8 iteration = 0;
    SINT16 optX, optY;

    FLOAT32 *currentHistogram = new FLOAT32[HISTOGRAM_LENGTH];
    FLOAT32 dist, optdist=1.0;

    for (iteration=0; iteration<MEANSHIFT_ITARATION_NO; iteration++)
    {
       FindHistogram(frame,currentHistogram); //current frame histogram, use the last frame location as starting point
      
          FindWightsAndCOM(frame,currentHistogram);//derive weights and new location
      
          //FindHistogram(frame,currentHistogram);   //update histogram
      
          //UpdateInitialHistogram(currentHistogram);//update initial histogram
       if( ((dist=FindDistance(currentHistogram)) < optdist) || iteration==0 )
       {
        optdist=dist;
        optX=m_sTrackingObjectTable[m_cActiveObject].X;
        optY=m_sTrackingObjectTable[m_cActiveObject].Y;
    //      printf("%f->", dist);
       }
       else //bad iteration, then find a better start point for next iteration
       {
       m_sTrackingObjectTable[m_cActiveObject].X=(m_sTrackingObjectTable[m_cActiveObject].X+optX)/2;
       m_sTrackingObjectTable[m_cActiveObject].Y=(m_sTrackingObjectTable[m_cActiveObject].Y+optY)/2;
       }
    }//end for
    m_sTrackingObjectTable[m_cActiveObject].X=optX;
    m_sTrackingObjectTable[m_cActiveObject].Y=optY;
    LastDist=optdist; //the latest distance
    // printf("\n");

    delete[] currentHistogram, currentHistogram = 0;
    }

    float CObjectTracker::GetTotalDist(void)
    {
    return(TotalDist);
    }
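
    To show how the class above would be wired into a program, here is a hedged sketch of a driver loop using the same OpenCV 1.x C API the tracker is written against. The video path, window name, and initial box are made-up placeholders, and MD_RGB and the constructor signature are taken from the listing above:

    #include "ObjectTracker.h"
    #include "highgui.h"   // OpenCV 1.x capture/display API

    int main()
    {
        CvCapture *cap = cvCaptureFromFile("input.avi");   // hypothetical input video
        if (!cap) return 1;

        IplImage *frame = cvQueryFrame(cap);               // first frame: get dimensions
        if (!frame) return 1;
        CObjectTracker tracker(frame->width, frame->height, MD_RGB);

        // hand-picked initial box: centre (x, y), width, height
        tracker.ObjectTrackerInitObjectParameters(160, 120, 40, 60);

        cvNamedWindow("tracking", CV_WINDOW_AUTOSIZE);
        while ((frame = cvQueryFrame(cap)) != NULL)
        {
            tracker.ObjeckTrackerHandlerByUser(frame);     // 1st call builds the histogram, later calls run mean shift
            cvShowImage("tracking", frame);
            if (cvWaitKey(30) == 27) break;                // Esc quits
        }
        cvReleaseCapture(&cap);
        return 0;
    }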

     

