MDCFS: Few-Shot Object Detection with Multilevel Decoupled Classifiers
Affiliation: Tianjin University of Technology


Fund Project: National Natural Science Foundation of China (No. 62372325)

Abstract:

Few-shot object detection poses unique challenges because it requires learning novel classes effectively from limited labeled data. Current approaches often suffer from a bias towards base classes during fine-tuning, leading to suboptimal performance on novel classes. Additionally, in complex scenes, confusion between foreground and background objects further degrades the accuracy and robustness of the model. To address these issues, we propose the Multilevel De-coupling Classification Few-Shot Algorithm (MDCFS). In the Few-Shot Object Detection (FSOD) setting, we decouple the standard classifier into a parallel foreground classifier and a background classifier. This decoupling allows positive samples to be separated from noisy negative samples independently, alleviating the foreground-background confusion commonly encountered in few-shot detectors. For Generalized Few-Shot Object Detection (G-FSOD), where the few-shot dataset also contains base classes, we further decouple the foreground classification head into a base-class classification head and a novel-class classification head. To restore balance, we assign greater weight to the novel-class head, effectively counteracting the bias towards base classes. Furthermore, we optimize the initial weights of the few-shot fine-tuning stage, significantly reducing training time and mitigating catastrophic forgetting in G-FSOD. We also incorporate metric learning into the model at minimal cost. Experimental results demonstrate the effectiveness of our approach: compared with state-of-the-art fine-tuning-based few-shot object detection methods, MDCFS achieves performance improvements of up to 6.3% on the PASCAL VOC dataset and 1.5% on the COCO dataset.
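
The abstract describes the architecture in enough detail to sketch its core idea. Below is a minimal, hypothetical PyTorch illustration of a multilevel decoupled classification head in the spirit of MDCFS: a foreground/background classifier runs in parallel with the category classifier, and the foreground head is itself split into a base-class branch and a novel-class branch whose loss is up-weighted. All module and parameter names here (e.g., `novel_loss_weight`) are our own assumptions for illustration; the paper's actual implementation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultilevelDecoupledHead(nn.Module):
    """Hypothetical sketch of a multilevel decoupled classifier.

    Level 1: a binary foreground/background classifier, decoupled from
             the category classifier, separates positives from noisy
             negatives.
    Level 2: the foreground classifier is split into a base-class head
             and a novel-class head; the novel head receives a larger
             loss weight to counteract bias toward base classes.
    """

    def __init__(self, feat_dim, num_base, num_novel, novel_loss_weight=2.0):
        super().__init__()
        self.fg_bg_head = nn.Linear(feat_dim, 2)          # foreground vs. background
        self.base_head = nn.Linear(feat_dim, num_base)    # base-class logits
        self.novel_head = nn.Linear(feat_dim, num_novel)  # novel-class logits
        self.novel_loss_weight = novel_loss_weight        # assumed hyperparameter
        self.num_base = num_base

    def forward(self, roi_feats):
        # roi_feats: (N, feat_dim) pooled RoI features from the detector.
        return (self.fg_bg_head(roi_feats),
                self.base_head(roi_feats),
                self.novel_head(roi_feats))

    def loss(self, roi_feats, labels):
        # labels: (N,) with -1 = background, [0, num_base) = base classes,
        # [num_base, num_base + num_novel) = novel classes.
        fg_bg_logits, base_logits, novel_logits = self.forward(roi_feats)

        # Level 1: every RoI contributes to the fg/bg objective.
        fg_bg_target = (labels >= 0).long()
        loss_fg_bg = F.cross_entropy(fg_bg_logits, fg_bg_target)

        # Level 2: only foreground RoIs reach the class-specific heads.
        base_mask = (labels >= 0) & (labels < self.num_base)
        novel_mask = labels >= self.num_base

        loss_base = (F.cross_entropy(base_logits[base_mask], labels[base_mask])
                     if base_mask.any() else roi_feats.new_zeros(()))
        loss_novel = (F.cross_entropy(novel_logits[novel_mask],
                                      labels[novel_mask] - self.num_base)
                      if novel_mask.any() else roi_feats.new_zeros(()))

        # Up-weight the novel head to counter base-class bias in G-FSOD.
        return loss_fg_bg + loss_base + self.novel_loss_weight * loss_novel
```

In this reading, the background classifier never competes with category logits in a single softmax, so noisy negatives cannot suppress scarce novel-class positives, and the loss weighting rebalances the gradient signal between base and novel heads during fine-tuning.
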

History
  • Received: October 13, 2023
  • Revised: February 13, 2024
  • Accepted: March 07, 2024