A Cross-modal Retrieval Method Based on Sentence-Dependency Attention Enhancement
DOI:
Author:
Affiliation:

East China Jiaotong University

Author biography:

Corresponding author:

CLC number:

TP391

Fund project:

Natural Science Foundation of Jiangxi Province (20192ACBL21006)


A Cross-modal Retrieval Method Based on Sentence Dependency Attention

    Abstract:

    With the rapid development of Internet technology, multimedia data of different modalities have grown exponentially, and single-modal retrieval methods such as image-to-image search no longer meet users' needs; cross-modal multimedia retrieval has therefore become an important research area in information retrieval. For this task, a cross-modal retrieval method is proposed that adds a sentence-dependency phrase attention mechanism to a two-branch network structure. The method extracts image features with a CNN model, obtains dependency segments of the text through syntactic structure analysis, and, on top of the original two-branch network model, embeds an attention mechanism to learn the weight distribution over the dependency segments, so that the text feature representation focuses more on the key sentence-segment features. Experimental results show that the proposed method outperforms comparison methods on retrieval accuracy metrics, verifying the effectiveness of the algorithm.
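The core idea in the abstract — learning a weight distribution over dependency segments so the text representation emphasizes key phrases — amounts to attention-weighted pooling over segment features. The following is a minimal NumPy sketch of that pooling step only; the segment embeddings, the attention query vector, and all dimensions are illustrative placeholders, not the paper's actual model or parameters:

```python
import numpy as np

def attention_pool(segments, query):
    """Attention-weighted pooling over dependency-segment features.

    segments: (n_segments, dim) array of phrase-segment embeddings
    query:    (dim,) attention vector (learned in a real model; random here)
    Returns the pooled (dim,) text representation and the weight distribution.
    """
    scores = segments @ query                        # relevance score per segment
    scores = scores - scores.max()                   # shift for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over segments
    pooled = weights @ segments                      # weighted sum of segment features
    return pooled, weights

# Hypothetical example: 4 dependency segments with 8-dim features.
rng = np.random.default_rng(0)
segs = rng.normal(size=(4, 8))
q = rng.normal(size=8)
vec, w = attention_pool(segs, q)
```

In the described method, `w` would be the learned weight distribution over dependency segments, and `vec` the resulting sentence feature fed into the text branch of the two-branch network.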

History
  • Received: 2020-03-19
  • Last revised: 2020-04-08
  • Accepted: 2020-05-28
  • Published online:
  • Publication date: