Point cloud sampling is a less explored research topic for this data representation. The most commonly used sampling methods remain classical random sampling and farthest point sampling. With the development of neural networks, various methods have been proposed to sample point clouds in a task-based learning manner. However, these methods are mostly generative, synthesizing new points rather than selecting points directly from the input using mathematical statistics. Inspired by the Canny edge detection algorithm for images, and with the help of the attention mechanism, this paper proposes a non-generative Attention-based Point cloud Edge Sampling method (APES), which captures salient points on the point cloud outline. Both qualitative and quantitative experimental results show the superior performance of our sampling method on common benchmark tasks.
Fig. 1: Similar to the Canny edge detection algorithm that detects edge pixels in images, our proposed APES algorithm samples edge points that indicate the outline of the input point cloud. The blue grids/spheres represent the local patches of given center pixels/points.
Fig. 2: Illustration of using the standard deviation to select edge pixels/points. The center pixel/point is included in its own neighborhood. A larger standard deviation of the normalized correlation map indicates a higher probability that the center is an edge pixel/point.
Fig. 3: The key idea of the proposed methods. N denotes the total number of points, and k denotes the number of neighbors used by the local-based sampling method. In the local-based case, the correlation map of each point is computed over its local patch, which consists of the point itself and its k neighbors; in the global-based case, it is the normalized attention map computed with all points of the point cloud, similar to the attention map of a Transformer block. In both cases, the standard deviation of the correlation map serves as the edge score.
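As a concrete illustration of this key idea, the minimal PyTorch sketch below scores points by the standard deviation of their normalized correlation maps and keeps the top-scoring ones. It operates directly on raw per-point features instead of learned query/key embeddings, the function names are ours, and the axis along which the global standard deviation is taken is an assumed choice; this is a sketch of the idea, not the official implementation.

import torch
import torch.nn.functional as F

def knn_indices(x, k):
    # x: (B, N, C) point features; return indices of the k nearest
    # neighbors of every point (the point itself is included).
    dist = torch.cdist(x, x)                              # (B, N, N)
    return dist.topk(k, dim=-1, largest=False).indices    # (B, N, k)

def local_edge_scores(x, k=16):
    # Local-based score: softmax-normalized correlation map inside each
    # k-neighbor patch, followed by its standard deviation.
    B, N, C = x.shape
    idx = knn_indices(x, k)
    neighbors = torch.gather(
        x.unsqueeze(1).expand(B, N, N, C), 2,
        idx.unsqueeze(-1).expand(B, N, k, C))              # (B, N, k, C)
    corr = torch.einsum('bnc,bnkc->bnk', x, neighbors) / C ** 0.5
    corr = F.softmax(corr, dim=-1)                         # normalized patch map
    return corr.std(dim=-1)                                # (B, N)

def global_edge_scores(x):
    # Global-based score: full N x N attention map over all points; the
    # standard deviation is taken over the query axis (an assumed choice).
    B, N, C = x.shape
    attn = F.softmax(x @ x.transpose(1, 2) / C ** 0.5, dim=-1)  # (B, N, N)
    return attn.std(dim=1)                                      # (B, N)

def sample_edge_points(x, m, mode='local', k=16):
    # Keep the m points with the largest edge scores.
    scores = local_edge_scores(x, k) if mode == 'local' else global_edge_scores(x)
    idx = scores.topk(m, dim=-1).indices                        # (B, m)
    return torch.gather(x, 1, idx.unsqueeze(-1).expand(-1, -1, x.shape[-1]))

For example, sample_edge_points(x, 512, mode='global') reduces a 1024-point cloud to 512 points; with learned features, the kept points concentrate on the shape outline, as visualized in Fig. 5.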
Fig. 4: Network architectures for classification (top left) and segmentation (top right). The structures of the N2P attention feature learning layer (bottom left), the two alternative downsample layers (bottom middle), and the upsample layer (bottom right) are also shown. Both kinds of downsample layers reduce a point cloud from N points to M points, while the upsample layer restores it from M points to N points.
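To make the downsample layer concrete, here is a minimal sketch of a global-based variant as a PyTorch module: learned query/key projections produce the attention map, the standard deviation of each point's attention column gives its edge score, and the M highest-scoring points are kept together with attention-aggregated features. The module name, the value projection, and the exact feature aggregation are illustrative assumptions rather than the official layer.

import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalDownsampleLayer(nn.Module):
    # Sketch of a global-based downsample layer (assumed structure).
    def __init__(self, dim, m):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        self.m = m                                   # number of points to keep

    def forward(self, feats):
        # feats: (B, N, dim) -> downsampled features (B, m, dim) plus indices.
        q, k, v = self.q_proj(feats), self.k_proj(feats), self.v_proj(feats)
        attn = F.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
        scores = attn.std(dim=1)                     # (B, N) per-point edge scores
        idx = scores.topk(self.m, dim=-1).indices    # indices of kept points
        feats_out = torch.gather(attn @ v, 1,
                                 idx.unsqueeze(-1).expand(-1, -1, feats.shape[-1]))
        return feats_out, idx

Returning the kept indices alongside the features lets a later upsample stage map the M-point features back onto the original N points, matching the N-to-M and M-to-N roles of the downsample and upsample layers described above.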
Fig. 5: Visualized sampling results of local-based and global-based APES on different shapes in the classification task. All shapes are from the ModelNet40 test set.
Fig. 6: Visualized segmentation results as shape point clouds are downsampled. All shapes are from the ShapeNet Part test set. |
Fig. 7: Visualization results of sampling 128 points from input point clouds of 1024 points with various methods. |
@inproceedings{wu2023attention,
  title     = {Attention-Based Point Cloud Edge Sampling},
  author    = {Wu, Chengzhi and Zheng, Junwei and Pfrommer, Julius and Beyerer, J\"urgen},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2023}
}