PointNetV3: feature extraction with position encoding
Affiliation: Zhejiang University of Technology

Fund Project: the Natural Science Foundation of Zhejiang Province

    Abstract:

    Feature extraction of point clouds is a fundamental component of 3D vision tasks. However, existing feature extraction networks primarily focus on enhancing the geometric perception abilities of networks and overlook the crucial role played by coordinates. For instance, although two airplane wings share the same shape, they demand distinct feature representations due to their differing positions. In this paper, we introduce a novel module, the Position-Aware Module (PAM), which leverages the coordinate features of points for positional encoding and integrates this encoding into the feature extraction network to provide essential positional context. Furthermore, we embed PAM into the PointNet framework and design a novel feature extraction network named PointNetV3. To validate the effectiveness of PointNetV3, we conducted comprehensive experiments on point cloud classification, object tracking, and object detection. The remarkable improvements across all three tasks demonstrate the exceptional performance of PointNetV3 in point cloud processing.
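
    The abstract does not spell out PAM's internal design. The sketch below is only one plausible reading of the description above, in PyTorch: a small point-wise MLP maps raw (x, y, z) coordinates to a positional encoding, which is added to the backbone's per-point features to supply positional context. All module, parameter, and variable names (PositionAwareModule, pos_mlp, feature_dim) are illustrative assumptions, not the authors' code.

    # Minimal sketch of a position-aware module (assumed design, not the paper's implementation).
    import torch
    import torch.nn as nn

    class PositionAwareModule(nn.Module):
        """Hypothetical PAM: encode raw coordinates and fuse them with point features."""

        def __init__(self, feature_dim: int = 64):
            super().__init__()
            # Point-wise MLP mapping (x, y, z) to a feature_dim-channel positional encoding.
            self.pos_mlp = nn.Sequential(
                nn.Conv1d(3, feature_dim, kernel_size=1),
                nn.BatchNorm1d(feature_dim),
                nn.ReLU(),
                nn.Conv1d(feature_dim, feature_dim, kernel_size=1),
            )

        def forward(self, xyz: torch.Tensor, features: torch.Tensor) -> torch.Tensor:
            # xyz: (B, 3, N) coordinates; features: (B, feature_dim, N) backbone features.
            pos_encoding = self.pos_mlp(xyz)   # (B, feature_dim, N) positional encoding
            return features + pos_encoding     # inject positional context into the features

    if __name__ == "__main__":
        pam = PositionAwareModule(feature_dim=64)
        xyz = torch.randn(2, 3, 1024)          # 2 clouds, 1024 points each
        feats = torch.randn(2, 64, 1024)       # per-point features from a PointNet-style backbone
        print(pam(xyz, feats).shape)           # torch.Size([2, 64, 1024])

    Additive fusion is just one option; the paper may instead concatenate the encoding with the features or apply it at multiple stages of the network.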

History
  • Received: August 26, 2023
  • Revised: October 10, 2023
  • Accepted: October 27, 2023