Model-Agnostic Explainability for Multi-Label Classification via Formal Concept Analysis
Abstract
Multi-label classification refers to a supervised learning problem where a single data instance can correspond to several labels simultaneously. While these models achieve high prediction accuracy, they share some limitations with single-label classifiers, particularly in terms of interpretability and explainability. In this work, we introduce a model-agnostic explainability method that enhances the interpretability of multi-label classification models using Formal Concept Analysis (FCA). Our approach aims to provide users with clearer insights into how these models make predictions, helping them better understand the rationale behind the decisions. Specifically, we address key questions such as: What is the smallest set of features needed for the multi-label classifier f to make a prediction? and Which features are relevant to a specific prediction? By answering these questions, our method seeks to improve user confidence in the model's decisions and foster a deeper understanding of multi-label classification. We introduce the key concepts of the Decisive Attribute Set (DAS) and the Significant Attribute Set (SSA). A DAS is the smallest set of features that can independently lead to a prediction, while an SSA includes all the features that influence the prediction of one or more labels. Additionally, we introduce a dedicated class-specific importance score that quantifies the role of each attribute, based on its frequency and specificity across formal concepts. Using these concepts, we generate clear, interpretable patterns in the form of rules, referred to as "DAS-rules", which provide straightforward explanations for individual predictions. Our method achieves high local fidelity, generates compact explanations, and successfully captures feature interactions, as demonstrated through extensive experiments on three benchmark datasets (Stack Overflow, Yelp, and TMC2007-500).
These results demonstrate the novelty and practical effectiveness of our FCA-based framework for multi-label explainability.

DOI: https://doi.org/10.31449/inf.v49i19.9947
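To illustrate the Decisive Attribute Set idea described in the abstract, the sketch below performs a minimal, model-agnostic DAS search: given a multi-label classifier f and an instance described by binary attributes, it looks for the smallest attribute subset that alone reproduces f's full prediction. The toy classifier, attribute names, and the brute-force search are all illustrative assumptions, not the paper's actual FCA-based algorithm.

```python
from itertools import combinations

def predict(attrs):
    """Toy multi-label classifier over binary attributes
    (a hypothetical stand-in for the black-box model f)."""
    labels = set()
    if "python" in attrs and "pandas" in attrs:
        labels.add("data-science")
    if "python" in attrs:
        labels.add("programming")
    return labels

def decisive_attribute_set(instance, f):
    """Return a smallest subset of the instance's attributes that,
    on its own, yields the same prediction as the full instance
    (the DAS notion from the abstract, found by brute force)."""
    target = f(instance)
    # Try subsets in order of increasing size, so the first match is minimal.
    for size in range(1, len(instance) + 1):
        for subset in combinations(sorted(instance), size):
            if f(set(subset)) == target:
                return set(subset)
    return set(instance)

x = {"python", "pandas", "notebook"}
das = decisive_attribute_set(x, predict)  # {"python", "pandas"}
```

Here "notebook" is dropped because it influences no label, while "python" and "pandas" together are needed to reproduce both predicted labels; a DAS-rule would then read as "python AND pandas → {programming, data-science}". The exhaustive search is exponential in the number of attributes and serves only to make the definition concrete.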
License
Authors retain copyright in their work. By submitting to and publishing with Informatica, authors grant the publisher (Slovene Society Informatika) the non-exclusive right to publish, reproduce, and distribute the article and to identify itself as the original publisher.
All articles are published under the Creative Commons Attribution license CC BY 3.0. Under this license, others may share and adapt the work for any purpose, provided appropriate credit is given and any changes are indicated.
Authors may deposit and share the submitted version, accepted manuscript, and published version, provided the original publication in Informatica is properly cited.
