Debiasing Visual Question Answering via Ensemble Gradient Detection and Iterative Attention Forgetting
Abstract
In Visual Question Answering (VQA), a model's ability to understand and reason across linguistic, visual, and multimodal information is crucial for accurate predictions. However, recent studies have identified language bias as a significant challenge for VQA models: their reasoning often hinges on spurious linguistic associations rather than genuine multimodal understanding. To address this issue, we propose a novel Ensemble Bias Gradient Debiasing Approach (EBGDA) that combines bias detection with a dynamic forgetting mechanism. Our method uses a bias detector to identify and score biases in linguistic, visual, and multimodal information, enabling the model to focus on unbiased evidence when making predictions. Additionally, inspired by human reasoning, we introduce the Forgotten Attention Algorithm (FAA), which iteratively "forgets" irrelevant visual content, progressively concentrating attention on the image regions most relevant to the question. This combination of bias mitigation and attention focusing strengthens the model's multimodal inference, reducing bias and improving overall performance. Extensive experiments on the VQA-CP v2, VQA v2, VQA-VS, GQA-OOD, and VQA-CE datasets demonstrate the effectiveness of our approach, which mitigates bias and excels in complex multimodal scenarios. On VQA-CP v2, our approach improves over UpDn by 21.32%, establishing a new state of the art among methods that do not use data augmentation.
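Since only the abstract is reproduced on this page, the sketch below is a minimal, hypothetical PyTorch rendering of the two mechanisms it describes: per-branch bias scores that down-weight biased modality branches, and an attention loop that iteratively drops ("forgets") the least question-relevant image regions. Every name and parameter here (forgetting_attention, debiased_fusion, forget_ratio, the feature sizes) is an illustrative assumption, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def forgetting_attention(q, regions, steps=3, forget_ratio=0.3):
        """Iterative "forgetting" attention (FAA-style sketch).

        q: (d,) question embedding; regions: (N, d) image-region features.
        Each pass attends over the surviving regions, then permanently
        drops the lowest-attended fraction before re-attending.
        """
        keep = torch.ones(regions.size(0), dtype=torch.bool)
        for _ in range(steps):
            idx = keep.nonzero(as_tuple=True)[0]
            attn = F.softmax(regions[idx] @ q, dim=0)   # relevance to the question
            n_forget = int(forget_ratio * idx.numel())
            if n_forget == 0 or idx.numel() - n_forget < 1:
                break                                   # nothing safe to forget
            drop = attn.topk(n_forget, largest=False).indices
            keep[idx[drop]] = False                     # forget least relevant regions
        idx = keep.nonzero(as_tuple=True)[0]
        attn = F.softmax(regions[idx] @ q, dim=0)       # final focused attention
        return attn @ regions[idx], keep                # pooled visual evidence

    def debiased_fusion(branch_logits, bias_scores):
        """Fuse language-only, vision-only, and multimodal answer logits.

        branch_logits: (3, A); bias_scores: (3,) detector outputs in [0, 1].
        A branch the detector flags as more biased contributes less.
        """
        weights = F.softmax(1.0 - bias_scores, dim=0)
        return (weights.unsqueeze(1) * branch_logits).sum(dim=0)

    # Toy usage with random features.
    q = torch.randn(512)
    regions = torch.randn(36, 512)          # e.g. 36 detected object regions
    visual, kept = forgetting_attention(q, regions)
    print(kept.sum().item(), "of 36 regions survive the forgetting passes")

    logits = torch.randn(3, 3129)           # 3 branches over a toy answer vocabulary
    bias = torch.tensor([0.9, 0.4, 0.2])    # language branch looks most biased
    answer_logits = debiased_fusion(logits, bias)

The point of the loop is that attention is re-normalized after every forgetting pass, so the probability mass freed by discarded regions flows to the survivors. How the bias scores are actually produced (the abstract mentions gradient-based ensemble detection) is not recoverable from this page, so the fusion above only illustrates how such scores could re-weight branch predictions.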
Full Text: PDF
DOI: https://doi.org/10.31449/inf.v49i24.9013
This work is licensed under a Creative Commons Attribution 3.0 License.