  • Thoughts on the Stanford Racing Team (SRT) Technical Report from the 2005 DARPA Grand Challenge, Part 1

    Thoughts on SRT Technical Report


    SRT viewed the DARPA Grand Challenge primarily as a software challenge: designing a robust system that can guide a vehicle in the right direction at the right speed while dodging surrounding obstacles, such as pedestrians and moving vehicles. The vehicle itself, to quote the report, could be any commercial SUV on the market. The report focuses mainly on the overall design of the vehicle system.


    Understanding the system requires some background in machine learning and probabilistic robotics.


    1. Vehicle Description


    The car is natively throttle- and brake-by-wire; a DC motor drives the steering column, so it also has steer-by-wire capability.


    Vehicle sensors report measured values such as speed and angular velocity to the built-in computer, communicating over the CAN bus.


    For environment perception, sensors include:

    1. 5 lidars pointing forward;

    2. 2 radars pointing forward;

    3. 1 camera pointing forward;

    4. 1 GPS receiver for positioning;

    5. 2 GPS receivers for compassing (heading);

    6. 1 communication antenna for emergencies;

    7. 1 IMU tightly attached to the computer unit.


    2. Coordinate Frame Selection

    The vehicle is localized with respect to a UTM (Universal Transverse Mercator) coordinate frame.

    (A detailed reference on UTM can be found at http://blog.sina.com.cn/s/blog_4c5f7aaf0100t8ur.html.)
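As a rough illustration of how UTM partitions the globe, the longitudinal zone for a given longitude can be computed directly. This is a minimal sketch of my own (the function names are hypothetical, and longitudes are assumed to lie in [-180, 180)); it is not taken from the report:

```python
def utm_zone(lon_deg: float) -> int:
    """Return the UTM longitudinal zone (1-60) for a longitude in degrees.

    Zones are 6 degrees wide; zone 1 starts at 180 degrees W.
    Assumes lon_deg is in [-180, 180).
    """
    return int((lon_deg + 180.0) // 6.0) + 1

def central_meridian(zone: int) -> float:
    """Return the central meridian (degrees) of a UTM zone."""
    return (zone - 1) * 6 - 180 + 3
```

For example, Stanford (longitude about -122.17) falls in UTM zone 10, whose central meridian is -123 degrees.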


    3. Localization

    Roll, pitch, and yaw are estimated with an unscented Kalman filter (UKF).

    The UKF integrates data from GPS, the IMU, and the CAN bus at 100 Hz.

    A "bicycle model" is used for dead reckoning when GPS fails.

    (For the bicycle model, refer to http://code.eng.buffalo.edu/dat/sites/model/bicycle.html or http://www.me.berkeley.edu/~frborrel/pdfpub/IV_KinematicMPC_jason.pdf.)

    The output of the UKF is a 6-D estimate of the vehicle position and Euler angles, along with uncertainty covariances.
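The GPS-outage fallback above can be sketched as one integration step of the kinematic bicycle model. This is my own hedged sketch, not SRT's actual code; the function name, Euler integration, and rear-axle reference point are assumptions:

```python
import math

def bicycle_step(x, y, yaw, v, steer, wheelbase, dt):
    """One Euler-integration step of the kinematic bicycle model.

    x, y      : rear-axle position in a planar frame (e.g. UTM), metres
    yaw       : heading angle, radians
    v         : forward speed, m/s (from wheel odometry over CAN)
    steer     : front-wheel steering angle, radians
    wheelbase : distance between front and rear axles, metres
    dt        : time step, seconds (100 Hz update -> 0.01 s)
    """
    x += v * math.cos(yaw) * dt
    y += v * math.sin(yaw) * dt
    yaw += (v / wheelbase) * math.tan(steer) * dt
    return x, y, yaw
```

Repeatedly applying this step with odometry and steering-angle measurements keeps the pose estimate evolving until GPS returns.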


    4. Sensor Processing

    The first three sensor types listed in Part 1 are used for environmental sensing.

    1. Lidar is the most accurate for short-range perception, but its limited range makes it unsuitable in high-speed scenarios;

    2. The camera covers a longer range and provides denser data than lidar;

    3. Radar provides range data out to roughly 200 m, though with far coarser accuracy than lidar.


    5. Drivability Map

    An important concept here is the drivability map, which discretizes the environment around the moving vehicle into many small cells, each classified into one of three states: drivable, not drivable, or unknown. Each cell's state is a function of the received sensor data (a function that, I reckon, could be learned through machine learning). The states indicate whether each cell adjacent to the vehicle can safely be driven over.
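The three-way cell classification might be sketched as follows. All names here are hypothetical and the height-threshold rule is a made-up stand-in; the report does not give the actual classification function:

```python
from enum import Enum

class Cell(Enum):
    UNKNOWN = 0        # no sensor data received for this cell yet
    DRIVABLE = 1
    NOT_DRIVABLE = 2

class DrivabilityMap:
    """A vehicle-centred grid of cells, each in one of three states."""

    def __init__(self, size=100, cell_size=0.5):
        self.cell_size = cell_size          # metres per cell (assumed)
        self.grid = [[Cell.UNKNOWN] * size for _ in range(size)]

    def update(self, row, col, obstacle_height, threshold=0.15):
        """Classify one cell from a (hypothetical) lidar height measurement.

        A cell is marked not drivable when the measured obstacle height
        exceeds the threshold (0.15 m is an illustrative value only).
        """
        if obstacle_height > threshold:
            self.grid[row][col] = Cell.NOT_DRIVABLE
        else:
            self.grid[row][col] = Cell.DRIVABLE
```

Cells never touched by a sensor reading stay UNKNOWN, which the planner can then treat conservatively.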


    6. Vehicle Control

    Four control inputs:

    1. gear;

    2. steering;

    3. brake;

    4. throttle.


    Three control modules are used:

    1. PID Motion Control:

    Takes the output of the path-planning module and the estimated vehicle state from the Kalman filter.

    This module generates steering and velocity commands.

    2. Path Planning(important):

    This module takes three inputs into consideration: the drivability map, the vehicle pose, and a preset RDDF file (the pre-stored course map). It then produces a suitable path for the vehicle.

    3. Finite State Automaton (FSA):

    The FSA determines the highest-level control state, that is, the current driving mode of the vehicle, such as driving forward, reversing, or shifting gears.
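A minimal discrete PID controller of the kind the motion-control module might apply to, say, the speed error is sketched below. The gains, class name, and structure are my own assumptions, not taken from the report:

```python
class PID:
    """A discrete PID controller: u = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None              # no derivative on the first step

    def step(self, error):
        """Advance one control step; returns the control output."""
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else \
            (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

In this scheme one PID instance would track the cross-track error for steering and another the speed error for throttle/brake, with the FSA deciding which mode's setpoints the controllers receive.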


    7. Software


    Read its source code;


    To be added...

  • Original article: https://www.cnblogs.com/SongHaoran/p/7607890.html