Monocular Obstacle Avoidance and Collision Detection for Autonomous Vehicle Exploration (Brief Summary)
Alexander Du, Yee Ka Tai
Autonomous cars and other devices capable of unassisted movement are increasingly considered superior to human-based control in many areas. Constructing such platforms, however, often requires expensive equipment. We investigate a simple setup consisting of an embedded computing module, such as a Raspberry Pi or a Jetson Nano, with only a monocular camera to avoid objects and detect collisions using real-time, onboard processing, as an alternative to compound systems based on depth cameras or lidar/radar devices.
This intern project began from the desire to re-implement “Self-supervised Deep Reinforcement Learning with Generalized Computation Graphs for Robot Navigation” by Gregory Kahn et al. [1], but evolved into a different approach after we found a way to detect collisions without using an IMU or motor encoders. We use the optical flow from a monocular camera stream to deduce the robot’s motion, and incrementally exploit further information inferred from the flow to pilot a small-scale vehicle exploring an environment. A camera stream is an extremely versatile sensor, and holds much more information than just 2D images. Our efforts are a step towards an affordable and computationally efficient platform for real-world deployment without expensive sensors or complex algorithms.
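As a rough sketch of the collision-detection idea: when the camera approaches an obstacle, the optical-flow field expands outward (“looming”), so a positive mean divergence of the flow can flag an impending collision. The snippet below is illustrative only, not the project’s actual implementation; the function names and threshold are assumptions, and in practice the dense flow field would come from an estimator such as OpenCV’s `cv2.calcOpticalFlowFarneback` run on consecutive camera frames.

```python
import numpy as np

def flow_divergence(u, v):
    """Mean divergence (du/dx + dv/dy) of a dense optical-flow field.

    u, v: 2D arrays of horizontal and vertical flow, in pixels/frame.
    A positive value indicates the field is expanding (camera approaching).
    """
    du_dx = np.gradient(u, axis=1)
    dv_dy = np.gradient(v, axis=0)
    return float(np.mean(du_dx + dv_dy))

def looming_detected(u, v, threshold=0.05):
    """Flag a possible collision when flow expansion exceeds a tuned threshold.

    The threshold value here is arbitrary; it would need tuning per camera
    resolution, frame rate, and vehicle speed.
    """
    return flow_divergence(u, v) > threshold

if __name__ == "__main__":
    # Synthetic expanding flow radiating from the image centre, as produced
    # by straight-line approach toward a frontal surface.
    h, w = 64, 64
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    k = 0.1  # expansion rate; divergence of this field is exactly 2 * k
    u = k * (xs - w / 2)
    v = k * (ys - h / 2)
    print(flow_divergence(u, v), looming_detected(u, v))
```

Pure lateral translation produces a roughly constant flow field with near-zero divergence, so this heuristic reacts to approach rather than to sideways motion.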
Details of this project will be published in the next post. Please stay tuned if you find it interesting!
References
[1] Kahn, Gregory, et al. Self-supervised deep reinforcement learning with generalized computation graphs for robot navigation. In: 2018 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2018, pp. 1–8.
[2] Hanlun Artificial Intelligence Ltd. Donkey Car With Jetson Nano Demo [video], 2020. Available at: <https://www.youtube.com/watch?v=RH96Li2uMEs> [Accessed 9 October 2020].