Abstract: Drone photography is an essential building block of intelligent transportation, enabling wide-ranging monitoring, precise positioning, and rapid transmission. However, the high computational cost of Transformer-based object detection methods hinders real-time transmission of results in drone target detection applications. We therefore propose Mask Adaptive Transformers tailored to such scenarios. Specifically, we introduce a structure that supports collaborative token sparsification within support windows, enhancing fault tolerance and reducing computational overhead. This structure comprises two modules: a binary mask strategy and Adaptive Window Self-Attention (A-WSA). The binary mask strategy focuses attention on significant objects across various complex scenes, while A-WSA applies self-attention to the selected objects, balancing performance against computational cost and isolating contextual leakage. Extensive experiments on the challenging CARPK and VisDrone datasets demonstrate the effectiveness and superiority of the proposed method: it improves mean average precision (mAP@0.5) by 1.25% over CD-yolov5 on CARPK and by 3.75% over CZ Det on VisDrone.