Date of Award

4-2018

Document Type

Thesis

Degree Name

Master of Science (MS)

Department

Mechanical and Civil Engineering

First Advisor

Matthew Jensen

Second Advisor

Anthony Smith

Third Advisor

Hector Gutierrez

Fourth Advisor

Hamid Hefazi

Abstract

For autonomous vehicles to navigate roadways safely, accurate object detection must take place before safe path planning can occur. Currently there is a gap between models that are fast enough for deployment and models that are accurate enough. We propose the Multimodal Fusion Detection System (MFDS), a sensor fusion system that combines the speed of a fast image-detection CNN model with the accuracy of LiDAR point cloud data through a decision tree approach. The primary objective is to bridge the trade-off between speed and accuracy. The motivation for MFDS is to reduce the computational complexity associated with using a CNN model to extract features from an image. To improve efficiency, MFDS extracts complementary features from the LiDAR point cloud in order to achieve comparable detection accuracy. MFDS achieves 3.7% higher accuracy than the base CNN detection model and is able to operate at 10 Hz. Additionally, the memory requirement for MFDS is small enough to fit on the NVIDIA TX1 embedded platform.
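To make the decision-level fusion idea concrete, the sketch below shows one possible way CNN image detections could be accepted, rejected, or re-scored using supporting evidence from a LiDAR point cloud. It is a minimal illustration only; the function names, thresholds, and the simple point-count heuristic are assumptions for exposition and are not the method developed in the thesis.

```python
# Hypothetical sketch of decision-level camera/LiDAR fusion: keep confident CNN
# detections, and rescue weaker ones only when enough LiDAR returns project into
# the detection's image box. All thresholds and names here are illustrative.
from dataclasses import dataclass

import numpy as np


@dataclass
class Detection:
    box: tuple          # (x_min, y_min, x_max, y_max) in image pixels
    score: float        # CNN confidence in [0, 1]
    label: str


def project_to_image(points_xyz: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project LiDAR points (already in the camera frame) onto the image plane."""
    pts = points_xyz[points_xyz[:, 2] > 0.1]          # keep points in front of the camera
    uv = (K @ pts.T).T                                # pinhole projection
    return uv[:, :2] / uv[:, 2:3]


def fuse(detections, points_xyz, K, min_points=20, weak=0.3, strong=0.6):
    """Trust strong detections outright; confirm weak ones with LiDAR support."""
    uv = project_to_image(points_xyz, K)
    fused = []
    for det in detections:
        x0, y0, x1, y1 = det.box
        inside = ((uv[:, 0] >= x0) & (uv[:, 0] <= x1) &
                  (uv[:, 1] >= y0) & (uv[:, 1] <= y1))
        support = int(inside.sum())                   # LiDAR returns inside the box
        if det.score >= strong:
            fused.append(det)
        elif det.score >= weak and support >= min_points:
            fused.append(Detection(det.box, min(1.0, det.score + 0.2), det.label))
    return fused


if __name__ == "__main__":
    K = np.array([[700.0, 0.0, 320.0],
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])
    cloud = np.random.uniform([-5, -1, 2], [5, 1, 30], size=(5000, 3))
    dets = [Detection((100, 100, 300, 250), 0.45, "car"),
            Detection((400, 120, 500, 200), 0.80, "pedestrian")]
    print([d.label for d in fuse(dets, cloud, K)])
```

Because the LiDAR check only involves projecting points and counting returns inside each box, this style of fusion adds little computation on top of the image CNN, which is consistent with the abstract's goal of keeping the system fast enough for embedded deployment.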
