Sunny and Dark Outside?! Improving Consistency in VQA Models

Arijit Ray, Karan Sikka, Ajay Divakaran, Stefan Lee, Giedrius Burachas

Abstract

While models for Visual Question Answering (VQA) have steadily improved over the years, interacting with one quickly reveals that these models lack consistency. For instance, if a model answers “red” to “What color is the balloon?”, it might answer “no” if asked, “Is the balloon red?”. These responses violate simple notions of entailment and raise questions about how effectively VQA models ground language. In this work, we introduce a dataset, ConVQA, and metrics that enable quantitative evaluation of consistency in VQA. For a given observable fact in an image (e.g., the balloon’s color), we generate a set of logically consistent question-answer (QA) pairs (e.g., “Is the balloon red?”) and also collect a human-annotated set of common-sense-based consistent QA pairs (e.g., “Is the balloon the same color as tomato sauce?”). Further, we propose a consistency-improving data augmentation module, the Consistency Teacher Module (CTM). CTM automatically generates entailed (or similar-intent) questions for a source QA pair and fine-tunes the VQA model if the VQA model's answer to the entailed question is consistent with the source QA pair. We demonstrate that our CTM-based training improves the consistency of VQA models on the ConVQA datasets and is a strong baseline for further research.

Evaluate your VQA model on our common-sense-based consistent QA sets to test the robustness of its predictions on VQA 2.0 val questions.


Evaluate your VQA model on our logic-based consistent QA sets to test its logical robustness.
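As a simple reference metric, one can count a consistent QA set as passed only if the model answers every question in the set correctly. Below is a minimal Python sketch of such an all-correct consistency score; it assumes predictions are given as a question-to-answer mapping, and it is an illustrative metric rather than necessarily the exact definition used in the paper.

# Minimal consistency-scoring sketch. Assumes each QA set is a list of
# (question, answer) pairs and `predictions` maps question -> model answer.
# Illustrative metric, not necessarily the paper's exact definition.
def all_correct_consistency(qa_sets, predictions):
    """Fraction of QA sets in which every question is answered correctly."""
    if not qa_sets:
        return 0.0
    passed = sum(
        all(predictions.get(q, "").strip().lower() == a.strip().lower()
            for q, a in qa_set)
        for qa_set in qa_sets
    )
    return passed / len(qa_sets)

# Example: the model is consistent on the balloon set only if both answers match.
sets = [[("What color is the balloon?", "red"), ("Is the balloon red?", "yes")]]
print(all_correct_consistency(sets, {"What color is the balloon?": "red",
                                     "Is the balloon red?": "no"}))  # prints 0.0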

[Camera Ready Paper]
@inproceedings{ray2019sunny,
  title={Sunny and Dark Outside?! Improving Answer Consistency in VQA through Entailed Question Generation},
  author={Ray, Arijit and Sikka, Karan and Divakaran, Ajay and Lee, Stefan and Burachas, Giedrius},
  booktitle={Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
  pages={5863--5868},
  year={2019}}

Dataset Preview
Logically Consistent QAs (L-ConVQA): json file
Each Visual Genome image id has a separate json file named qas_<id>.json. (If an image id doesn't exactly match the official Visual Genome id, the image is available at https://filebox.ece.vt.edu/~ray93/VGImages/images/VG_<id>.png.) Each json file has two keys: 'consistent' and 'inconsistent'. 'consistent' contains a list of sets of consistent QA pairs; 'inconsistent' contains a list of pairs of inconsistent QAs. We will release bounding boxes and scene-graph edges corresponding to these QAs soon.
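A minimal loading sketch in Python, assuming each element of the 'consistent' and 'inconsistent' lists holds (question, answer) pairs and using a hypothetical image id; check the released files for the exact element layout.

import json

image_id = 2402345  # hypothetical Visual Genome image id
with open(f"qas_{image_id}.json") as f:
    qas = json.load(f)

# 'consistent': list of sets of mutually consistent QA pairs.
for qa_set in qas["consistent"]:
    print("consistent set:", qa_set)

# 'inconsistent': list of pairs of inconsistent QAs.
for qa_pair in qas["inconsistent"]:
    print("inconsistent pair:", qa_pair)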

AMT-Worker-Cleaned Logically Consistent QA Test Set (L-ConVQA Test): json file
Contains a dictionary mapping each Visual Genome image id to its human-cleaned sets of consistent QA pairs. This is the data used to report L-ConVQA results in the paper.

Human-annotated common-sense-based consistency data (CS-ConVQA), Consistent Sets: json file
Contains a dictionary whose keys are VQA images and whose values are lists of consistent QA sets (each set itself a list of QA pairs). In each set, the first QA is a VQA 2.0 val QA. This is the split of the dataset used to report CS-ConVQA results in the paper.
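A minimal iteration sketch, assuming each consistent set is a list of (question, answer) pairs ordered with the source VQA 2.0 val QA first, as described above; the filename here is hypothetical.

import json

with open("cs_convqa_consistent.json") as f:  # hypothetical filename
    data = json.load(f)

for image_key, qa_sets in data.items():
    for qa_set in qa_sets:
        source_qa = qa_set[0]     # first QA is the VQA 2.0 val QA
        related_qas = qa_set[1:]  # human-annotated common-sense consistent QAs
        print(image_key, source_qa, related_qas)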

Human-annotated common-sense-based consistency data (CS-ConVQA), Inconsistent Pairs: json file
Contains a dictionary whose keys are VQA images and whose values are lists of pairs of inconsistent QAs. The inconsistent pairs were generated by automatically switching answers in selected human-annotated consistent QA pairs.
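The answer-switching construction can be sketched as below; this is an illustrative reimplementation under the assumption that QA pairs are (question, answer) tuples, not the exact script used to build the release.

# Illustrative answer switching: swapping the answers of two consistent QA
# pairs with different answers yields two inconsistent QA pairs.
def switch_answers(qa_a, qa_b):
    (q_a, ans_a), (q_b, ans_b) = qa_a, qa_b
    if ans_a == ans_b:
        return None  # swapping identical answers would leave the pairs consistent
    return (q_a, ans_b), (q_b, ans_a)

print(switch_answers(("What color is the balloon?", "red"),
                     ("Is the balloon red?", "yes")))
# (('What color is the balloon?', 'yes'), ('Is the balloon red?', 'red'))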