Abstract: In recent years, visible-infrared person re-identification has attracted the attention of many researchers; its goal is to match person images of the same identity across different modalities. Because of the large discrepancy between visible and infrared images, visible-infrared person re-identification is a very challenging image retrieval problem. Existing research focuses on mitigating the modality gap by designing network structures that extract shared features or generate intermediate modalities, but such approaches are susceptible to interference from regions other than the person. To address this problem, focus on person information, and further reduce the gap between the two modalities, we propose a dual attention network for visible-infrared person re-identification. On the one hand, the dual attention mechanism mines person spatial information at different scales and enhances the channel interaction of local features. On the other hand, global and local branches learn multi-granularity feature information, so that features of different granularities complement each other to form more discriminative representations. Experimental results on two public datasets show that the proposed method significantly improves over the baseline and achieves strong performance on both the RegDB and SYSU-MM01 datasets.
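To make the two components named in the abstract concrete, the following is a minimal PyTorch sketch, not the authors' exact design: a dual attention block combining multi-scale spatial attention with channel attention, and a head that fuses a global branch with horizontally partitioned local branches. The module names (`DualAttention`, `MultiGranularityHead`), layer sizes, number of parts, and the assumed ResNet-50 backbone feature map are all illustrative assumptions.

```python
# Illustrative sketch only; the paper's actual architecture may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualAttention(nn.Module):
    """Spatial attention at two scales followed by SE-style channel attention."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Multi-scale spatial attention: 3x3 and 7x7 convolutions over a
        # channel-pooled descriptor capture person cues at different scales.
        self.spatial3 = nn.Conv2d(2, 1, kernel_size=3, padding=1)
        self.spatial7 = nn.Conv2d(2, 1, kernel_size=7, padding=3)
        # Channel attention: squeeze-and-excitation to strengthen the
        # interaction among channels of the attended feature map.
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pool over channels to get a 2-channel descriptor (average + max).
        desc = torch.cat([x.mean(dim=1, keepdim=True),
                          x.amax(dim=1, keepdim=True)], dim=1)
        spatial = torch.sigmoid(self.spatial3(desc) + self.spatial7(desc))
        x = x * spatial          # emphasize person regions, suppress background
        x = x * self.channel(x)  # re-weight channels of the attended features
        return x


class MultiGranularityHead(nn.Module):
    """Global branch plus horizontal local branches, concatenated into one feature."""

    def __init__(self, channels: int, parts: int = 3):
        super().__init__()
        self.parts = parts
        self.attn = DualAttention(channels)

    def forward(self, feat_map: torch.Tensor) -> torch.Tensor:
        feat_map = self.attn(feat_map)
        global_feat = F.adaptive_avg_pool2d(feat_map, 1).flatten(1)
        # Split the feature map into horizontal stripes for local features.
        local_feats = [F.adaptive_avg_pool2d(stripe, 1).flatten(1)
                       for stripe in feat_map.chunk(self.parts, dim=2)]
        return torch.cat([global_feat] + local_feats, dim=1)


if __name__ == "__main__":
    # Example: a backbone feature map of shape (batch, 2048, 18, 9), e.g. from ResNet-50.
    head = MultiGranularityHead(channels=2048, parts=3)
    fused = head(torch.randn(4, 2048, 18, 9))
    print(fused.shape)  # torch.Size([4, 8192]) -> one global + three local features
```

In this reading, the spatial branch suppresses background regions while the channel branch re-weights local features, and concatenating global and stripe-level features lets the different granularities complement each other, matching the intent described in the abstract.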