Abstract: In clinical practice, the capture of low-quality slit-lamp images is often unavoidable. Deep learning models developed using high-quality slit-lamp images typically generalize poorly when diagnosing keratitis on low-quality images, severely hindering their clinical application and adoption. To address this challenge, this paper proposes a novel method for the automatic diagnosis of keratitis using feature vector quantization and self-attention mechanisms (ADK_FVQSAM). First, high-level features are extracted with a DenseNet121 backbone, followed by adaptive average pooling to scale the features to a fixed length. Subsequently, product quantization with residuals (PQR) converts the continuous feature vectors into discrete feature representations, preserving essential information that is insensitive to variations in image quality. The quantized and original features are concatenated and fed into a self-attention mechanism to capture keratitis-related features. Finally, these enhanced features are classified through a fully connected layer. Experiments on clinical low-quality images show that ADK_FVQSAM achieves accuracies of 87.7%, 81.9%, and 89.3% for keratitis, other corneal abnormalities, and normal corneas, respectively. Compared with DenseNet121, Swin Transformer, and InceptionResNet, ADK_FVQSAM improves average accuracy by 3.1%, 11.3%, and 15.3%, respectively. These results demonstrate that ADK_FVQSAM significantly enhances keratitis recognition on low-quality slit-lamp images, offering a practical approach for clinical application.
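To make the quantization step concrete, the following is a minimal NumPy sketch of two-level product quantization with residuals (PQR): the feature vector is split into sub-vectors, each sub-vector is assigned to its nearest level-1 centroid, and the remaining residual is quantized against a level-2 codebook. The function name, codebook sizes, and random codebooks are illustrative assumptions, not the paper's actual implementation (which would learn codebooks from training features).

```python
import numpy as np

def product_quantize_residual(x, codebooks_l1, codebooks_l2):
    """Quantize vector x with two-level product quantization.

    x is split into len(codebooks_l1) sub-vectors. Each sub-vector is
    mapped to its nearest level-1 centroid; the residual is then mapped
    to its nearest level-2 centroid. Returns the discrete codes and the
    reconstructed (dequantized) vector.
    """
    M = len(codebooks_l1)
    subs = np.split(x, M)
    codes, recon = [], []
    for sub, cb1, cb2 in zip(subs, codebooks_l1, codebooks_l2):
        # nearest level-1 centroid for this sub-vector
        i1 = int(np.argmin(np.linalg.norm(cb1 - sub, axis=1)))
        residual = sub - cb1[i1]
        # quantize what the first level missed
        i2 = int(np.argmin(np.linalg.norm(cb2 - residual, axis=1)))
        codes.append((i1, i2))
        recon.append(cb1[i1] + cb2[i2])
    return codes, np.concatenate(recon)

# Illustrative setup: an 8-dim feature split into M=2 sub-vectors,
# with K=4 centroids per codebook (real codebooks would be learned).
rng = np.random.default_rng(0)
D, M, K = 8, 2, 4
cb1 = [rng.normal(size=(K, D // M)) for _ in range(M)]
cb2 = [rng.normal(scale=0.1, size=(K, D // M)) for _ in range(M)]
x = rng.normal(size=D)
codes, x_hat = product_quantize_residual(x, cb1, cb2)
```

In the full model, the reconstructed vector `x_hat` (the quantized features) would be concatenated with the original features before the self-attention block; the discrete codes make the representation less sensitive to small, quality-induced perturbations of the input features.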