  • Hard Negative Mining

     For an explanation of hard negative mining, here is a quote from Zhihu:

    Link: https://www.zhihu.com/question/46292829/answer/235112564
    Source: Zhihu

    First, we need to understand what a hard negative is.

    The R-CNN paper's discussion of hard negative mining cites two papers:

    [17] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part-based models. TPAMI, 2010.

    [37] K. Sung and T. Poggio. Example-based learning for view-based human face detection. Technical Report A.I. Memo No. 1521, Massachusetts Institute of Technology, 1994.

    Bootstrapping methods train a model with an initial subset of negative examples, and then collect negative examples that are incorrectly classified by this initial model to form a set of hard negatives. A new model is trained with the hard negative examples, and the process may be repeated a few times.
    we use the following “bootstrap” strategy that incrementally selects only those “nonface” patterns with high utility value:
    1) Start with a small set of “nonface” examples in the training database.
    2) Train the MLP classifier with the current database of examples.
    3) Run the face detector on a sequence of random images. Collect all the “nonface” patterns that the current system wrongly classifies as “faces” (see Fig. 5b). Add these “nonface” patterns to the training database as new negative examples.
    4) Return to Step 2.

    In the bootstrapping approach, we first train the classifier on the initial positive and negative samples (usually all the positives plus a similarly sized subset of the negatives), then run the trained classifier over the samples and add the misclassified ones (the hard negatives) to the negative set, retrain, and repeat until a stopping condition is reached (for example, the classifier's performance no longer improves).

    we expect these new examples to help steer the classifier away from its current mistakes.

    In short, hard negatives are the stubborn, tricky mistakes that keep getting fed back into training until performance stops improving; this process is called 'hard negative mining'.
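
    To make this loop concrete, here is a minimal Python sketch of the bootstrapping procedure (train_classifier, negative_pool, and the predict interface are hypothetical placeholders, not code from either cited paper):

    def bootstrap_train(positives, initial_negatives, negative_pool, max_rounds=5):
        """Iteratively grow the negative set with the model's own mistakes."""
        negatives = list(initial_negatives)
        classifier = None
        for _ in range(max_rounds):
            # Step 2: train on the current database of examples.
            classifier = train_classifier(positives, negatives)  # hypothetical helper
            # Step 3: collect patterns the current model wrongly classifies as positive.
            hard_negatives = [x for x in negative_pool if classifier.predict(x) == 1]
            if not hard_negatives:
                break  # stopping condition: no new mistakes to learn from
            # Step 4: add them as new negatives and return to Step 2.
            negatives.extend(hard_negatives)
        return classifier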

    “Let’s say I give you a bunch of images that contain one or more people, and I give you bounding boxes for each one. Your classifier will need both positive training examples (person) and negative training examples (not person). 

    For each person, you create a positive training example by looking inside that bounding box. But how do you create useful negative examples? 
    A good way to start is to generate a bunch of random bounding boxes, and for each that doesn’t overlap with any of your positives, keep that new box as a negative. 
    Ok, so you have positives and negatives, so you train a classifier, and to test it out, you run it on your training images again with a sliding window. But it turns out that your classifier isn’t very good, because it throws a bunch of false positives (people detected where there aren’t actually people). 
    A hard negative is when you take that falsely detected patch, and explicitly create a negative example out of that patch, and add that negative to your training set. When you retrain your classifier, it should perform better with this extra knowledge, and not make as many false positives.

    a) Positive samples: apply the existing detection at all positions and scales with a 50% overlap with the given bounding box and then select the highest scoring placement. 
    b) Negative samples: hard negatives, selected by finding high scoring detections in images not containing the target object.”
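
    The two ingredients of that recipe, random negatives that avoid the ground-truth boxes and hard negatives harvested from high-scoring false detections, could be sketched roughly as follows (the (x1, y1, x2, y2) box format, the thresholds, and the helper names are assumptions for illustration, not the quoted author's code):

    import random

    def iou(a, b):
        """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
        return inter / float(union) if union > 0 else 0.0

    def random_negatives(image_w, image_h, gt_boxes, n=100, size=64):
        """Random boxes that do not overlap any ground-truth (positive) box."""
        boxes = []
        while len(boxes) < n:
            x1 = random.randint(0, image_w - size)
            y1 = random.randint(0, image_h - size)
            box = (x1, y1, x1 + size, y1 + size)
            if all(iou(box, gt) == 0.0 for gt in gt_boxes):
                boxes.append(box)
        return boxes

    def harvest_hard_negatives(detections, gt_boxes, score_thresh=0.0, max_overlap=0.3):
        """Keep high-scoring detections that barely overlap any ground truth:
        these false positives become the hard negatives for the next round."""
        return [box for box, score in detections
                if score > score_thresh
                and all(iou(box, gt) < max_overlap for gt in gt_boxes)]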

    For R-CNN's implementation, look at the code directly:

    rcnn/rcnn_train.m at master · rbgirshick/rcnn (the function definition starting at line 214)
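
    R-CNN trains its per-class linear SVMs with standard hard negative mining against a cache of negative features. As a rough Python analogue of that general scheme (a sketch only, not a translation of rcnn_train.m; the feature matrices, the C value, and the -1 score threshold are assumptions), the mine-and-retrain loop might look like:

    import numpy as np
    from sklearn.svm import LinearSVC

    def train_svm_with_mining(pos_feats, neg_cache, neg_feats_per_image,
                              rounds=3, hard_thresh=-1.0):
        """Alternate between fitting a linear SVM and adding to the negative
        cache any candidate features the current model scores above hard_thresh."""
        clf = None
        for _ in range(rounds):
            X = np.vstack([pos_feats, neg_cache])
            y = np.concatenate([np.ones(len(pos_feats)), -np.ones(len(neg_cache))])
            clf = LinearSVC(C=1e-3).fit(X, y)  # the C value here is an assumption
            # Mine hard negatives: candidates the current SVM scores above the margin threshold.
            new_hard = [feats[clf.decision_function(feats) > hard_thresh]
                        for feats in neg_feats_per_image]
            new_hard = np.vstack(new_hard) if new_hard else np.empty((0, X.shape[1]))
            if len(new_hard) == 0:
                break  # nothing left that the model gets wrong
            neg_cache = np.vstack([neg_cache, new_hard])
        return clf

    A real implementation also evicts easy negatives from the cache and avoids re-adding duplicates; the sketch above only shows the basic mine-and-retrain alternation.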

  • Original article: https://www.cnblogs.com/zf-blog/p/8043347.html