Comprehensive Attribute Prediction Learning for Person Search by Language
Published in IEEE Transactions on Image Processing (TIP), 2024
Recommended citation: Kai Niu, Linjiang Huang (Corresponding Author), Yuzhou Long, Yan Huang, Liang Wang. "Comprehensive Attribute Prediction Learning for Person Search by Language." IEEE Transactions on Image Processing (TIP), 2024.
Abstract
Person search by language refers to retrieving images of a pedestrian of interest given natural language sentences. The task requires capturing fine-grained differences to accurately distinguish pedestrians, and remains far from well addressed by most current solutions. In this paper, we propose the Comprehensive Attribute Prediction Learning (CAPL) method, which explicitly carries out attribute prediction learning to improve the modeling of fine-grained semantic attributes and to obtain more discriminative visual and textual representations. First, we construct the semantic ATTribute Vocabulary (ATT-Vocab) based on sentence analysis. Second, in our In-vocab Attribute Prediction (IAP) module, complementary context-wise and attribute-wise attribute predictions are conducted simultaneously to better model the high-frequency in-vocab attributes. Third, to additionally account for out-of-vocab semantics, we present the Attribute Completeness Learning (ACL) module, which better captures the low-frequency attributes outside the ATT-Vocab and yields more comprehensive representations. Combining the IAP and ACL modules, our CAPL method achieves state-of-the-art retrieval performance on two widely used benchmarks, i.e., the CUHK-PEDES and ICFG-PEDES datasets. Extensive experiments and analyses validate the effectiveness and generalization capacity of our CAPL method.
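To illustrate the ATT-Vocab construction step, the following is a minimal, hypothetical sketch (not the paper's actual sentence-analysis pipeline): it approximates "sentence analysis" by counting adjacent word pairs in the training captions and keeping phrases above a frequency threshold as in-vocab attributes; anything below the threshold would fall to the out-of-vocab side handled by ACL. The function name, threshold, and captions are illustrative assumptions.

```python
from collections import Counter

def build_att_vocab(captions, min_freq=2):
    """Hypothetical sketch of ATT-Vocab construction: collect
    high-frequency candidate attribute phrases from captions.
    The real method relies on sentence analysis; here we simply
    treat every adjacent word pair as a candidate phrase and keep
    those whose frequency reaches min_freq (in-vocab attributes).
    A practical version would also apply POS tagging to keep only
    attribute-like phrases such as adjective-noun pairs."""
    counts = Counter()
    for caption in captions:
        tokens = caption.lower().replace(".", "").replace(",", "").split()
        counts.update(zip(tokens, tokens[1:]))
    return {" ".join(pair) for pair, c in counts.items() if c >= min_freq}

# Illustrative captions in the style of CUHK-PEDES descriptions.
captions = [
    "A woman wearing a black jacket and blue jeans.",
    "The man has a black jacket and white shoes.",
    "A girl in blue jeans carrying a red backpack.",
]
vocab = build_att_vocab(captions, min_freq=2)
# "black jacket" and "blue jeans" each occur twice, so they are in-vocab;
# "red backpack" occurs once and would be treated as out-of-vocab.
```

Low-frequency phrases excluded here correspond to the out-of-vocab semantics that the ACL module is designed to capture.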