Collision-free space detection is a critical component of autonomous vehicle perception. State-of-the-art algorithms are typically based on supervised deep learning, so their performance depends on the quality and quantity of labeled training data. Training deep convolutional neural networks (DCNNs) with only a small number of training samples remains an open challenge. In this paper, we therefore explore an effective training data augmentation approach that improves overall DCNN performance when additional images captured from different views are available. Since the pixels of the collision-free space (generally regarded as a planar surface) in two images captured from different views can be associated by a homography matrix, the target image can be transformed into the reference view. This provides a simple yet effective way to generate training data from additional multi-view images. Extensive experiments, conducted with six state-of-the-art semantic segmentation DCNNs on three datasets, validate the effectiveness of the proposed method for enhancing collision-free space detection performance. On the KITTI road benchmark, our approach achieves the best results among state-of-the-art stereo vision-based collision-free space detection approaches.
- training data
- collision-free space detection
- supervised deep learning
- homography matrix
- data augmentation
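The homography-based view transformation underlying the proposed augmentation can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the homography matrix `H` (mapping reference-view pixels of the planar collision-free space to target-view pixels) is already known, and uses NumPy-only inverse warping with nearest-neighbour sampling.

```python
import numpy as np

def warp_with_homography(image, H, out_shape):
    """Inverse-warp a single-channel target image into the reference view.

    H maps homogeneous reference-view pixel coordinates (x, y, 1) to
    target-view coordinates; pixels falling outside the target image
    are left at zero. Hypothetical helper, for illustration only.
    """
    h_out, w_out = out_shape
    # Grid of homogeneous coordinates for every output (reference-view) pixel.
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T

    # Map each reference-view pixel into the target view and dehomogenize.
    src = H @ pts
    src_x = np.rint(src[0] / src[2]).astype(int)
    src_y = np.rint(src[1] / src[2]).astype(int)

    # Sample only the coordinates that land inside the target image.
    valid = ((src_x >= 0) & (src_x < image.shape[1]) &
             (src_y >= 0) & (src_y < image.shape[0]))
    out = np.zeros(out_shape, dtype=image.dtype)
    out.reshape(-1)[valid] = image[src_y[valid], src_x[valid]]
    return out

# Example: the identity homography reproduces the target image unchanged.
img = np.arange(12, dtype=np.uint8).reshape(3, 4)
warped = warp_with_homography(img, np.eye(3), (3, 4))
```

In practice the warped image and its correspondingly transformed label map would be appended to the training set; the same idea extends to multi-channel images by sampling each channel with the identical valid-pixel mask.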