PointNet++
Qi, Charles R., et al. "Pointnet++: Deep hierarchical feature learning on point sets in a metric space." arXiv preprint arXiv:1706.02413 (2017). https://arxiv.org/pdf/1706.02413.pdf
Github: https://github.com/charlesq34/pointnet2
Introduction
PointNet++ is a hierarchical neural network built upon PointNet to address PointNet's failure to capture local structure and to generalize to complex scenes.
PointNet: learns a spatial encoding of each point and then aggregates all individual point features into a global point cloud signature. As a result, PointNet does not capture local structure induced by the metric space, which limits its ability to recognize fine-grained patterns and to generalize to complex scenes at different scales.
PointNet++ borrows the basic idea of CNNs, where lower-level neurons have smaller receptive fields and higher-level neurons have larger ones. The ability to abstract local patterns along the hierarchy allows better generalization to unseen cases.
Concept
Similar to CNNs, PointNet++ extracts local features from small neighborhoods, groups them into larger units, and processes those to produce higher-level features. This process is repeated recursively until features of the whole point set are obtained.
It addresses two issues:
how to generate overlapping partitions of the point set
how to abstract sets of points or local features through a local feature learner.
PointNet is used recursively as the local feature learner (the building block).
How to generate an overlapping partitioning of a point set?
Each partition is defined as a neighborhood ball in the underlying Euclidean space, parameterized by a centroid location and a scale.
For centroid locations, the farthest point sampling (FPS) algorithm is used.
For scale, multiple scales are used (see the density-adaptive layers below).
Architecture
Abstraction Level: Hierarchical Point Set Feature Learning
The set abstraction level is made of three key layers:
Sampling layer
Grouping layer
PointNet layer
Sampling Layer
Iterative farthest point sampling (FPS) is used to choose a subset of N' points as centroids. Compared with random sampling, FPS gives better coverage of the entire point set for the same number of centroids.
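A minimal NumPy sketch of iterative FPS is shown below; the function name and example sizes are illustrative, not taken from the official repository.

```python
# Iterative farthest point sampling (FPS): greedily pick the point farthest
# from all centroids selected so far.
import numpy as np

def farthest_point_sampling(points: np.ndarray, n_centroids: int) -> np.ndarray:
    """points: (N, 3) xyz coordinates. Returns indices of n_centroids points."""
    n = points.shape[0]
    selected = np.zeros(n_centroids, dtype=np.int64)
    # Distance from every point to the nearest already-selected centroid.
    min_dist = np.full(n, np.inf)
    # Start from an arbitrary (here random) point.
    selected[0] = np.random.randint(n)
    for i in range(1, n_centroids):
        # Update nearest-centroid distances with the last selected point.
        last = points[selected[i - 1]]
        dist = np.sum((points - last) ** 2, axis=1)
        min_dist = np.minimum(min_dist, dist)
        # Pick the point farthest from all selected centroids so far.
        selected[i] = np.argmax(min_dist)
    return selected

# Example: pick 128 centroids from 1024 random points.
xyz = np.random.rand(1024, 3)
centroid_idx = farthest_point_sampling(xyz, 128)
```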
Grouping Layer
Nx(d+C) --> N'xKx(d+C), where K is the number of points in the neighborhood of each centroid. Neighbors are found with a ball query (all points within a radius, up to K) instead of kNN, which guarantees a fixed region scale. Here d is the coordinate dimension (xyz) and C is the number of extra feature channels, e.g. 1, 3, 5, etc. (intensity, xyz relative to the centroid, and so on).
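Below is a hedged NumPy sketch of a ball query; the radius, K, and padding strategy are assumptions chosen for illustration (the official CUDA implementation differs in detail).

```python
# Ball query: gather up to K neighbors within a fixed radius of each centroid.
import numpy as np

def ball_query(points: np.ndarray, centroids: np.ndarray,
               radius: float, k: int) -> np.ndarray:
    """points: (N, 3), centroids: (N', 3). Returns (N', K) neighbor indices."""
    n_prime = centroids.shape[0]
    group_idx = np.zeros((n_prime, k), dtype=np.int64)
    for i, c in enumerate(centroids):
        dist = np.sum((points - c) ** 2, axis=1)
        inside = np.where(dist <= radius ** 2)[0]
        if len(inside) == 0:
            # Degenerate case: fall back to the nearest point.
            inside = np.array([np.argmin(dist)])
        if len(inside) >= k:
            group_idx[i] = inside[:k]          # keep at most K points in the ball
        else:
            # Pad by repeating the first neighbor to keep a fixed K.
            pad = np.full(k - len(inside), inside[0])
            group_idx[i] = np.concatenate([inside, pad])
    return group_idx

# Example: group K=32 neighbors within radius 0.2 around each centroid.
xyz = np.random.rand(1024, 3)
centroids = xyz[np.random.choice(1024, 128, replace=False)]
idx = ball_query(xyz, centroids, radius=0.2, k=32)
grouped = xyz[idx]                             # (128, 32, 3): N' x K x d
```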
PointNet Layer
N'xKx(d+C) --> N'x(d+C')
The coordinates of points in a local region are first translated into a local frame relative to the centroid point. Using relative coordinates together with point features captures point-to-point relations in the local region.
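The following NumPy sketch illustrates the idea of the PointNet layer on already-grouped points: translate to centroid-relative coordinates, apply a shared per-point MLP (a single linear + ReLU stand-in here), and max-pool over the K points. The MLP depth and width are simplifying assumptions.

```python
# Mini PointNet over each local group: relative coords -> shared MLP -> max pool.
import numpy as np

def pointnet_layer(grouped_xyz, centroids, weights, bias):
    """grouped_xyz: (N', K, 3), centroids: (N', 3).
    weights: (3, C'), bias: (C',). Returns (N', C') region features."""
    # Relative coordinates w.r.t. each centroid capture local geometry.
    local_xyz = grouped_xyz - centroids[:, None, :]           # (N', K, 3)
    # Shared MLP (one linear layer + ReLU here) applied to every point.
    point_feat = np.maximum(local_xyz @ weights + bias, 0.0)  # (N', K, C')
    # Symmetric max pooling over the K points of each region.
    return point_feat.max(axis=1)                             # (N', C')

rng = np.random.default_rng(0)
grouped = rng.random((128, 32, 3))
cents = rng.random((128, 3))
W, b = rng.normal(size=(3, 64)), np.zeros(64)
region_feat = pointnet_layer(grouped, cents, W, b)            # (128, 64)
```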
The set abstraction level is repeated to build the hierarchy.
Robust Feature Learning under Non-Uniform Sampling Density
It is common for a point set to have non-uniform density across different areas.
Such non-uniformity introduces a significant challenge for point set feature learning. Features learned in dense data may not generalize to sparsely sampled regions.
To achieve this goal, density-adaptive PointNet layers are proposed, which aggregate features from the point cloud at multiple scales (densities).
Density adaptive PointNet layers
Multi-scale Grouping (MSG)
Multi-resolution Grouping (MRG)
Multi-scale Grouping (MSG)
Grouping layers with different scales (ball radii) are applied around each centroid, followed by PointNets that extract features at each scale; features at different scales are concatenated to form a multi-scale feature.
To teach the network an optimized strategy for combining the multi-scale features, input points are randomly dropped out with a randomized probability for each training instance (random input dropout), exposing the network to point sets of varying sparsity; see the sketch below.
However, MSG is computationally expensive since it runs a local PointNet on large-scale neighborhoods for every centroid point.
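A minimal sketch of random input dropout, assuming the per-instance dropout ratio is drawn uniformly from [0, p] with p = 0.95 as in the paper's setting; for simplicity the dropped points are simply discarded rather than re-padded to a fixed tensor size.

```python
# Random input dropout: each training point cloud is thinned with its own ratio.
import numpy as np

def random_input_dropout(points: np.ndarray, max_ratio: float = 0.95) -> np.ndarray:
    """Drop each point with probability theta, where theta ~ U(0, max_ratio)."""
    theta = np.random.uniform(0.0, max_ratio)      # dropout ratio for this cloud
    keep = np.random.rand(points.shape[0]) >= theta
    if not keep.any():
        keep[0] = True                             # keep at least one point
    return points[keep]

sparse_cloud = random_input_dropout(np.random.rand(1024, 3))
```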
Multi-resolution Grouping (MRG): the proposed, more efficient alternative
MRG obtains the feature of a region at some level Li by concatenating two feature vectors computed at different resolutions:
One vector (left in the figure) is obtained by summarizing the features of each sub-region from the lower level Li-1 using the set abstraction level.
The other vector (right in the figure) is obtained by directly processing all raw points in the local region with a single PointNet.
When the local region is sparse, the first vector is less reliable because it is computed from only a few sub-sampled points, so the second vector should be weighted more heavily; when the region is dense, the first vector provides finer detail.
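A rough sketch of how the two MRG vectors could be combined; the feature extractors here are stand-in stubs (max pooling and a random linear projection), not the paper's trained networks.

```python
# MRG feature for one region: concat(summary of sub-region features,
#                                    PointNet over all raw points in the region).
import numpy as np

def mrg_feature(sub_region_feats: np.ndarray, raw_points: np.ndarray,
                w_raw: np.ndarray) -> np.ndarray:
    """sub_region_feats: (S, C1) features of the S sub-regions from level Li-1;
    raw_points: (M, 3) all raw points in the region. Returns a (C1 + C2,) vector."""
    # Vector 1: summarize the lower-level sub-region features (max pool here).
    vec1 = sub_region_feats.max(axis=0)
    # Vector 2: a single PointNet over the raw points (stub: linear + max pool).
    vec2 = np.maximum(raw_points @ w_raw, 0.0).max(axis=0)
    return np.concatenate([vec1, vec2])

rng = np.random.default_rng(1)
feat = mrg_feature(rng.random((8, 64)), rng.random((256, 3)), rng.normal(size=(3, 32)))
```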
Point Feature Propagation for Set Segmentation
In the set abstraction levels, the original point set is subsampled. However, in set segmentation tasks such as semantic point labeling, we want to obtain point features for all of the original points.
After the abstraction levels, the point features therefore need to be up-sampled (propagated) back to the original points.
(N, d+C) --abstraction levels--> (N2, d+C2) --feature propagation--> (N, k), where k is the number of per-point scores (e.g., semantic classes).
"Set abstracion layer를 거치게 되면, sampling 단계에 의해 point cloud의 크기가 줄어들게 됩니다. 이렇게 얻은 feature vector를 segmentation task에 활용하려면 다시 원래의 크기로 복원해주어야 합니다. 구체적으로, 이전 점들에 대한 feature vector로부터 (1/거리값) 으로 weighting을 가해서 interpolation하는 방법을 이용하였습니다. 또한 down-sampling 되기 전의 feature vector를 skip-connection을 통해 concatenate하여 부족할 수도 있는 정보량을 보충해주었습니다. Interpolation 과정은 원래 point의 개수로 맞춰질 때까지 반복해주었고, 결과로 얻은 feature vector를 통해서 segmentation task를 수행해주었습니다."
Application: Lane Detection