  • Monocular Camera Distance Measurement

    Using similar triangles to compute the distance from the camera to a known object or target.

    The similar-triangle idea works like this: suppose we have a target of known width W. We place the target at a distance D from the camera, photograph it, and measure its apparent width in pixels, P. This gives us the formula for the camera's focal length:

    F = (P x D) / W

    For example, suppose I place a standard 8.5 x 11 inch (US Letter) sheet of paper sideways at a distance of D = 24 inches from the camera and take a photo (landscape orientation, so W = 11). Measuring the sheet in the photo, I find its width is P = 249 pixels.

    My focal length F is therefore:

    F = (249px x 24in) / 11in ≈ 543.27

    As I move the camera closer to or farther from the target, I can use similar triangles to compute the target's distance from the camera:

    D’ = (W x F) / P

    To make this more concrete, suppose I move the camera to 3 feet (36 inches) from the target and photograph the same sheet of paper. Through automated image processing I measure the sheet's width in the image as P = 170 pixels. Plugging this into the formula gives:

    D’ = (11in x 543.27) / 170 ≈ 35 inches

    or roughly 36 inches, which is 3 feet.

    From the above we can see that to obtain the distance, we need two knowns: the camera's focal length and the real size of the target. With these, the formula:

    D’ = (W x F) / P

    yields the distance D’ from the target to the camera, where P is the target's width in pixels, W is the real width of the sheet, and F is the camera's focal length.
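    As a quick check, the calibration and ranging arithmetic above can be run directly with the worked-example numbers (W = 11 in, D = 24 in, P = 249 px for calibration, P = 170 px for the second photo):

```python
# Similar-triangle ranging with the worked-example numbers from the text above.
KNOWN_WIDTH = 11.0          # W: real width of the sheet, in inches
KNOWN_DISTANCE = 24.0       # D: calibration distance, in inches
CALIB_PIXEL_WIDTH = 249.0   # P: measured pixel width in the calibration photo

# F = (P x D) / W
focal_length = (CALIB_PIXEL_WIDTH * KNOWN_DISTANCE) / KNOWN_WIDTH

def distance_to_camera(known_width, focal_length, pixel_width):
    # D' = (W x F) / P
    return (known_width * focal_length) / pixel_width

print("F = %.2f" % focal_length)    # F = 543.27
print("D' = %.2f in" % distance_to_camera(KNOWN_WIDTH, focal_length, 170.0))  # D' = 35.15 in
```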

      The original article works from pre-captured photos: the camera's focal length is computed from the first photo, and that known focal length is then used to compute the distance from the sheet of paper to the camera in the following photos. This is not very intuitive and requires taking photos in advance, so I changed the program to measure distance in real time; in short, it reads frames from the camera instead of reading photos, which makes the effect much more direct. The source program follows:

    # import the necessary packages
    import numpy as np
    import cv2
     
    # find the target (the sheet of paper) in the image
    def find_marker(image):
        # convert the image to grayscale, blur it, and detect edges
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        gray = cv2.GaussianBlur(gray, (5, 5), 0)
        edged = cv2.Canny(gray, 35, 125)
     
        # find the contours in the edged image and keep the largest one;
        # we'll assume that this is our piece of paper in the image
        cnts = cv2.findContours(edged.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
        cnts = cnts[0] if len(cnts) == 2 else cnts[1]  # OpenCV 2/4 vs. 3 return order
        if len(cnts) == 0:
            return None
        # keep the contour with the largest area
        c = max(cnts, key=cv2.contourArea)
     
        # compute the bounding box of the paper region and return it;
        # cv2.minAreaRect(c) takes the point set c and returns a rect whose
        # rect[0] is the center of the minimum-area rectangle, rect[1][0] is
        # its width, rect[1][1] is its height, and rect[2] is its angle
        return cv2.minAreaRect(c)
     
    # distance calculation function
    def distance_to_camera(knownWidth, focalLength, perWidth):
        # compute and return the distance from the marker to the camera
        return (knownWidth * focalLength) / perWidth
     
    # initialize the known distance from the camera to the object, which
    # in this case is 24 inches
    KNOWN_DISTANCE = 24.0
     
    # initialize the known object width and height; these are the dimensions
    # of an A4 sheet in inches (note that the worked example above used a
    # letter-size sheet, W = 11)
    KNOWN_WIDTH = 11.69
    KNOWN_HEIGHT = 8.27
     
    # initialize the list of images that we'll be using
    IMAGE_PATHS = ["Picture1.jpg", "Picture2.jpg", "Picture3.jpg"]
     
    # load the first image that contains an object that is KNOWN TO BE 2 feet
    # (24 inches) from our camera, then find the paper marker in the image, and
    # initialize the focal length from the known distance
    image = cv2.imread(IMAGE_PATHS[0])
    marker = find_marker(image)
    focalLength = (marker[1][0] * KNOWN_DISTANCE) / KNOWN_WIDTH
     
    # alternatively, use the pixel focal length obtained from camera calibration
    #focalLength = 811.82
    print('focalLength = ', focalLength)
     
    # open the camera
    camera = cv2.VideoCapture(0)
     
    while camera.isOpened():
        # get a frame
        (grabbed, frame) = camera.read()
        if not grabbed:
            break
        marker = find_marker(frame)
        if marker is None:
            continue
        inches = distance_to_camera(KNOWN_WIDTH, focalLength, marker[1][0])
     
        # draw a bounding box around the marker and display it
        # (cv2.boxPoints replaces cv2.cv.BoxPoints from OpenCV 2.4)
        box = np.intp(cv2.boxPoints(marker))
        cv2.drawContours(frame, [box], -1, (0, 255, 0), 2)
     
        # convert inches to cm for display
        cv2.putText(frame, "%.2fcm" % (inches * 30.48 / 12),
                    (frame.shape[1] - 200, frame.shape[0] - 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 2.0, (0, 255, 0), 3)
     
        # show a frame
        cv2.imshow("capture", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    camera.release()
    cv2.destroyAllWindows() 
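    A caveat on the code above: the return signature of cv2.findContours differs across OpenCV releases, so unpacking its result only works on some versions. A small version-agnostic helper (equivalent to imutils.grab_contours; the function name here is my own) is one way to cope:

```python
def grab_contours(result):
    # cv2.findContours returns (contours, hierarchy) in OpenCV 2.4 and 4.x,
    # but (image, contours, hierarchy) in OpenCV 3.x; pick the right element.
    if len(result) == 2:
        return result[0]
    elif len(result) == 3:
        return result[1]
    raise ValueError("unexpected cv2.findContours return value")

# usage: cnts = grab_contours(cv2.findContours(edged, cv2.RETR_LIST,
#                                              cv2.CHAIN_APPROX_SIMPLE))
```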

    Using the camera to compute the distance to a person
      The first part computed the distance from a sheet of paper to the camera. In my actual application I need the distance from a person to the camera, so that a robot can judge how far away its target person is and follow them. The main idea is to first detect the person with OpenCV's HOG method, then compute the person's distance from their estimated height and the camera's focal length. Height is used because a person's height changes little across viewing directions, and our camera is mounted at a fixed height.
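    In other words, the same D’ = (W x F) / P relation is reused with height in place of width: D’ = (H x F) / P. A sketch of that arithmetic (the person's height, pixel height, and focal length below are made-up illustration values, not measurements from the post):

```python
# Ranging from a person's height: D' = (H x F) / P, height replacing width.
def distance_to_camera(known_height, focal_length, pixel_height):
    return (known_height * focal_length) / pixel_height

KNOW_PERSON_HEIGHT = 70.0   # assumed real height of the person, in inches
focal_length = 543.27       # pixel focal length from the paper calibration
pix_person_height = 200     # assumed height of the detected box, in pixels

inches = distance_to_camera(KNOW_PERSON_HEIGHT, focal_length, pix_person_height)
print("%.2fcm" % (inches * 30.48 / 12))   # distance shown in centimeters
```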

    First, use OpenCV to detect pedestrians:

    # import the necessary packages
    from __future__ import print_function
    from imutils.object_detection import non_max_suppression
    from imutils import paths
    import numpy as np
    import argparse
    import imutils
    import cv2
     
     
    cap = cv2.VideoCapture(0)
     
    # initialize the HOG descriptor/person detector
    hog = cv2.HOGDescriptor()
    # use OpenCV's default SVM people detector
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
     
    while True:
        # get a frame
        ret, frame = cap.read()
        
        frame = imutils.resize(frame, width=min(400, frame.shape[1]))
     
     
        # detect people in the image
        (rects, weights) = hog.detectMultiScale(frame, winStride=(4, 4),
             padding=(8, 8), scale=1.05)
        
        rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])
        # non-maximum suppression: merge overlapping boxes to keep the best person box
        pick = non_max_suppression(rects, probs=None, overlapThresh=0.65)
     
        # draw the final bounding boxes
        for (xA, yA, xB, yB) in pick:
            cv2.rectangle(frame, (xA, yA), (xB, yB), (0, 255, 0), 2)
     
        # show a frame
        cv2.imshow("capture", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows() 
    # combine pedestrian detection with distance measurement
    while camera.isOpened():
        # get a frame
        (grabbed, frame) = camera.read()
        # if a frame could not be grabbed, we have reached the end of the video
        if not grabbed:
            break
        frame = imutils.resize(frame, width=min(400, frame.shape[1]))
        #marker = find_marker(frame)
        marker = find_person(frame)
        for (xA, yA, xB, yB) in marker:
            cv2.rectangle(frame, (xA, yA), (xB, yB), (0, 255, 0), 2)
            # estimate the person's height in pixels from the bounding box
            pix_person_height = yB - yA
            if pix_person_height == 0:
                continue
            print(pix_person_height)
            inches = distance_to_camera(KNOW_PERSON_HEIGHT, focalLength, pix_person_height)
            print("%.2fcm" % (inches * 30.48 / 12))
            cv2.putText(frame, "%.2fcm" % (inches * 30.48 / 12),
                        (frame.shape[1] - 200, frame.shape[0] - 20),
                        cv2.FONT_HERSHEY_SIMPLEX, 2.0, (0, 255, 0), 3)
        # show a frame
        cv2.imshow("capture", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    Original article: https://blog.csdn.net/m0_37811342/article/details/80394935

  • Source page: https://www.cnblogs.com/Fiona-Y/p/12800914.html