PointNetV3: feature extraction with position encoding
Author: WANG Jun, WANG Xuefei, ZHOU Boxiong, GUO Dongyan
Affiliation: College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China

Abstract:

    Feature extraction from point clouds is a fundamental component of three-dimensional (3D) vision tasks. Existing feature extraction networks primarily focus on enhancing the geometric perception ability of the network while overlooking the crucial role played by coordinates. For instance, although two airplane wings may share the same shape, they demand distinct feature representations because of their differing positions. In this paper, we introduce a novel module, the position aware module (PAM), which leverages the coordinate features of points for positional encoding and integrates this encoding into the feature extraction network to provide essential positional context. Furthermore, we embed PAM into the PointNet++ framework and design a novel feature extraction network named PointNetV3. To validate the effectiveness of PointNetV3, we conduct comprehensive experiments on point cloud classification, object tracking, and object detection. The remarkable improvements on all three tasks demonstrate the strong performance of PointNetV3 in point cloud processing.
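
    As an illustration of the idea described above, the following is a minimal sketch of how a position aware module could encode raw point coordinates and fuse that encoding into per-point features in a PointNet++-style backbone. It is written in PyTorch purely for illustration; the module name, layer sizes, and the additive fusion are assumptions, not the paper's actual PAM implementation.

    import torch
    import torch.nn as nn

    class PositionAwareModule(nn.Module):
        """Illustrative position-aware module (hypothetical sketch, not the
        paper's PAM): encodes raw xyz coordinates with a small MLP and adds
        the encoding to per-point features, so identical local shapes at
        different positions receive distinct feature representations."""

        def __init__(self, feature_dim: int, hidden_dim: int = 64):
            super().__init__()
            self.pos_mlp = nn.Sequential(
                nn.Linear(3, hidden_dim),
                nn.ReLU(inplace=True),
                nn.Linear(hidden_dim, feature_dim),
            )

        def forward(self, xyz: torch.Tensor, features: torch.Tensor) -> torch.Tensor:
            # xyz:      (B, N, 3)  absolute point coordinates
            # features: (B, N, C)  per-point features from the backbone
            pos_enc = self.pos_mlp(xyz)   # (B, N, C) positional encoding
            return features + pos_enc     # inject positional context

    if __name__ == "__main__":
        pam = PositionAwareModule(feature_dim=128)
        pts = torch.rand(2, 1024, 3)      # batch of 2 clouds, 1024 points each
        feats = torch.rand(2, 1024, 128)
        out = pam(pts, feats)
        print(out.shape)                  # torch.Size([2, 1024, 128])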

Get Citation

WANG Jun, WANG Xuefei, ZHOU Boxiong, GUO Dongyan. PointNetV3: feature extraction with position encoding[J]. Optoelectronics Letters, 2024, 20(8): 483-489.

History
  • Received: August 26, 2023
  • Revised: April 04, 2024
  • Online: July 24, 2024