
Aiding Intra-Text Representations with Visual Context for Multimodal Named Entity Recognition

EasyChair Preprint no. 1375

6 pages
Date: August 7, 2019

Abstract

With the massive explosion of social media platforms such as Twitter and Instagram, people share billions of multimedia posts containing both images and text every day. Typically, the text in these posts is short, informal, and noisy, leading to ambiguities that can often be resolved with the help of the accompanying images. In this paper we explore the text-centric Named Entity Recognition task on such multimedia posts. We propose an end-to-end model that learns a joint representation of a text and an image. Our model extends the multi-dimensional self-attention technique so that the image helps to enhance the relationships between words. Experiments show that our model captures both textual and visual context with greater accuracy, achieving state-of-the-art results on a Twitter multimodal Named Entity Recognition dataset.

Keyphrases: attention-guided visual attention, Conditional Random Field, feature-wise score vector, multi-dimensional attention, multimodal named entity recognition, Natural Language Processing, self-attention, visual context
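The abstract describes the model only at a high level. As a rough illustration of the central idea, the sketch below shows one plausible way a global image feature could enter a multi-dimensional (feature-wise) self-attention layer, echoing the "feature-wise score vector" and "attention-guided visual attention" keyphrases. It is a minimal PyTorch sketch under assumed layer names and dimensions, not the authors' implementation.

import torch
import torch.nn as nn

class VisualMultiDimSelfAttention(nn.Module):
    # A minimal sketch, assuming a single global image feature (e.g. from a
    # CNN) and illustrative layer names/sizes; not the authors' exact design.
    def __init__(self, d_model: int, d_img: int):
        super().__init__()
        self.w_query = nn.Linear(d_model, d_model)
        self.w_key = nn.Linear(d_model, d_model)
        self.w_img = nn.Linear(d_img, d_model)      # project image feature
        self.w_score = nn.Linear(d_model, d_model)  # feature-wise score vector

    def forward(self, tokens, img_feat):
        # tokens: (batch, seq_len, d_model); img_feat: (batch, d_img)
        q = self.w_query(tokens).unsqueeze(2)                # (B, T, 1, D)
        k = self.w_key(tokens).unsqueeze(1)                  # (B, 1, T, D)
        v = self.w_img(img_feat).unsqueeze(1).unsqueeze(1)   # (B, 1, 1, D)
        # Visual context enters the additive alignment between word pairs,
        # yielding one score per hidden dimension rather than a scalar.
        scores = self.w_score(torch.tanh(q + k + v))         # (B, T, T, D)
        attn = torch.softmax(scores, dim=2)                  # normalise over keys
        # Feature-wise weighted sum over the sequence
        return (attn * tokens.unsqueeze(1)).sum(dim=2)       # (B, T, D)

In the full model, the visually enriched token representations would feed a sequence labeller; the "Conditional Random Field" keyphrase suggests a CRF decoding layer on top of the attention output.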

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:1375,
  author = {Omer Arshad and Ignazio Gallo and Shah Nawaz and Alessandro Calefati},
  title = {Aiding Intra-Text Representations with Visual Context for Multimodal Named Entity Recognition},
  howpublished = {EasyChair Preprint no. 1375},
  year = {EasyChair, 2019}}