Larissa T. Triess1,2 | David Peter1 | Christoph B. Rist1 | J. Marius Zöllner2,3
1 Mercedes-Benz AG, Stuttgart
2 Karlsruhe Institute of Technology, Karlsruhe
3 Research Center for Information Technology, Karlsruhe
In 2020 IEEE Intelligent Vehicles Symposium (IV)
[Paper] [GitHub]
Overview of the four components of the paper. First, we investigate the effect of network size on accuracy and runtime. Second, we introduce an improved projection technique for LiDAR point clouds. Third, we compare the cross-entropy loss to the soft Dice loss. Finally, we propose the SLC layer.
Autonomous vehicles need to have a semantic understanding of the three-dimensional world around them in order to reason about their environment.
State of the art methods use deep neural networks to predict semantic classes for each point in a LiDAR scan.
A powerful and efficient way to process LiDAR measurements is to use two-dimensional, image-like projections.
In this work, we perform a comprehensive experimental study of image-based semantic segmentation architectures for LiDAR point clouds.
We demonstrate various techniques to boost performance and to reduce runtime and memory requirements.
First, we examine the effect of network size and suggest that much faster inference times can be achieved at a very low cost to accuracy.
Next, we introduce an improved point cloud projection technique that does not suffer from systematic occlusions.
We use a cyclic padding mechanism that provides context at the horizontal field-of-view boundaries.
In a third part, we perform experiments with a soft Dice loss function that directly optimizes for the intersection-over-union metric.
Finally, we propose a new kind of convolution layer with a reduced amount of weight-sharing along one of the two spatial dimensions, addressing the large difference in appearance along the vertical axis of a LiDAR scan.
With a final combination of the above methods, the model achieves an increase of 3.2% in mIoU segmentation performance over the baseline while requiring only 42% of the original inference time.
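To make the cyclic padding step more concrete, below is a minimal sketch, assuming a projected scan tensor of shape (batch, channels, height, width) whose width axis spans the full 360° sweep. The function name, channel layout, and PyTorch formulation are illustrative assumptions, not the released implementation.

```python
import torch

def cyclic_pad(x: torch.Tensor, pad_w: int) -> torch.Tensor:
    """Wrap-around padding along the horizontal (azimuth) axis.

    x: projected LiDAR scan of shape (N, C, H, W). Because the scan covers a
    full 360° sweep, the last columns are valid context for the first ones and
    vice versa, so no artificial border appears at the field-of-view edges.
    """
    # torch.nn.functional.pad(x, (pad_w, pad_w, 0, 0), mode="circular") is equivalent
    return torch.cat([x[..., -pad_w:], x, x[..., :pad_w]], dim=-1)

# usage: pad cyclically in width, then convolve without extra width padding
scan = torch.randn(1, 5, 64, 2048)                             # e.g. range, x, y, z, intensity
conv = torch.nn.Conv2d(5, 32, kernel_size=3, padding=(1, 0))   # zero-pad only in height
out = conv(cyclic_pad(scan, pad_w=1))                          # -> (1, 32, 64, 2048)
```

Using the wrapped columns instead of zeros means the first and last columns of the projection see the same neighborhood they would have in the original 360° sweep.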
We are currently facing some issues; the links below are therefore not active.
For any questions, please contact the authors. We apologize for the inconvenience. You can find the implementations of the individual components of the paper here:
Implementations of the SLC layer (a minimal sketch of the idea is given after this list):
[PyTorch]
[TensorFlow]
Implementation of the Scan Unfolding: [Numpy]
Implementation of Cyclic Padding: coming soon
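Until the links above are restored, here is a minimal PyTorch sketch of the idea behind the SLC layer as described in the abstract: weight-sharing is reduced along the vertical axis by giving each horizontal band of the projected scan its own convolution kernel, while weights stay fully shared along the horizontal axis. The class name, the number of bands, and the naive per-band boundary handling are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class SemiLocalConv2d(nn.Module):
    """Sketch of a convolution with reduced weight-sharing along the height axis.

    The vertical (ring) dimension of the projected scan is split into
    `num_bands` horizontal bands, and each band is convolved with its own
    kernel. Weights remain shared along the horizontal (azimuth) axis.
    """

    def __init__(self, in_channels, out_channels, num_bands=4, kernel_size=3):
        super().__init__()
        self.num_bands = num_bands
        self.convs = nn.ModuleList([
            nn.Conv2d(in_channels, out_channels, kernel_size,
                      padding=kernel_size // 2)
            for _ in range(num_bands)
        ])

    def forward(self, x):  # x: (N, C, H, W)
        # split along the height axis and apply the band-specific kernels
        bands = torch.chunk(x, self.num_bands, dim=2)
        return torch.cat([conv(b) for conv, b in zip(self.convs, bands)], dim=2)

# usage on a projected scan with 64 rows and 2048 columns
layer = SemiLocalConv2d(in_channels=5, out_channels=32, num_bands=4)
features = layer(torch.randn(1, 5, 64, 2048))   # -> (1, 32, 64, 2048)
```

In this sketch, a single band reduces to a standard convolution, while one band per row removes weight-sharing along the vertical axis entirely.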
@inproceedings{triess2020iv,
title = {{Scan-based Semantic Segmentation of LiDAR Point Clouds: An Experimental Study}},
author = {
Triess, Larissa T. and Peter, David and Rist, Christoph B. and Z\"ollner, J. Marius
},
booktitle = {Proc. IEEE Intelligent Vehicles Symposium (IV)},
year = {2020},
pages = {1116--1121},
}
A. Milioto, I. Vizzo, J. Behley, and C. Stachniss.
RangeNet++: Fast and Accurate LiDAR Semantic Segmentation.
IROS, 2019.
[PDF]
[Code]
J. Behley, M. Garbade, A. Milioto, J. Quenzel, S. Behnke, C. Stachniss, and J. Gall.
SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences.
ICCV, 2019.
[PDF]
[Dataset]
[Code]