Adversarial Image Caption Generator Network

Ali Mollaahmadi Dehaqi, Vahid Seydi, Yeganeh Madadi

Research output: Contribution to journal › Article › peer-review

Abstract

Image captioning is the task of producing a description of an image, which requires recognizing the salient attributes in the image as well as the relationships between them, and then generating semantically and syntactically correct sentences. Most image captioning models are based on RNNs trained with maximum likelihood estimation (MLE). We propose a novel model based on generative adversarial networks (GANs) that generates the caption directly from the image representation and does not need any secondary learning algorithm such as policy gradient. Because benchmark datasets such as Flickr and COCO are large and complex, we introduce a new dataset and perform our experiments on it. The experimental results show the effectiveness of our model compared to state-of-the-art image captioning methods.
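
To illustrate how adversarial training can drive caption generation without a policy-gradient step, the following is a minimal, hypothetical sketch: the architecture, layer sizes, token conventions, and the use of the Gumbel-softmax relaxation are assumptions made for illustration, not the authors' published model. The point it demonstrates is that soft, differentiable token outputs let the discriminator's gradient flow directly into the generator.

# Hypothetical sketch: GAN-style caption generator without policy gradients.
# Layer sizes, <BOS> convention, and Gumbel-softmax relaxation are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, EMB, HID, FEAT, MAXLEN = 1000, 128, 256, 512, 16  # assumed dimensions

class CaptionGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.init_h = nn.Linear(FEAT, HID)   # image feature -> initial hidden state
        self.embed = nn.Linear(VOCAB, EMB)   # soft one-hot token -> embedding
        self.rnn = nn.GRUCell(EMB, HID)
        self.out = nn.Linear(HID, VOCAB)

    def forward(self, img_feat, tau=1.0):
        B = img_feat.size(0)
        h = torch.tanh(self.init_h(img_feat))
        tok = torch.zeros(B, VOCAB, device=img_feat.device)
        tok[:, 0] = 1.0                      # assume index 0 is <BOS>
        caption = []
        for _ in range(MAXLEN):
            h = self.rnn(self.embed(tok), h)
            logits = self.out(h)
            # differentiable "sampling": keeps the graph intact, no policy gradient needed
            tok = F.gumbel_softmax(logits, tau=tau, hard=False)
            caption.append(tok)
        return torch.stack(caption, dim=1)   # (B, MAXLEN, VOCAB) soft tokens

class CaptionDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.score = nn.Linear(HID + FEAT, 1)  # score (caption, image) pairs

    def forward(self, soft_caption, img_feat):
        _, h = self.rnn(self.embed(soft_caption))
        return self.score(torch.cat([h[-1], img_feat], dim=-1))

# One generator step on fake data only, to show the gradient path.
G, D = CaptionGenerator(), CaptionDiscriminator()
img_feat = torch.randn(4, FEAT)              # stand-in for pooled CNN image features
fake = G(img_feat)
g_loss = F.binary_cross_entropy_with_logits(D(fake, img_feat), torch.ones(4, 1))
g_loss.backward()                            # gradients reach G with no secondary algorithm

In a full training loop the discriminator would also see real (image, caption) pairs; the sketch only traces the generator update to show why no reinforcement-learning step is required under this relaxation.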
Original language: English
Article number: 182
Number of pages: 14
Journal: SN Computer Science
Volume: 2
Issue number: 3
Early online date: 31 Mar 2021
DOIs
Publication status: Published - May 2021
Externally published: Yes

Keywords

  • Image captioning
  • Feature representation
  • Deep neural network
  • Generative adversarial network
  • Novel dataset
