Declarative Approaches to Counterfactual Explanations for Classification

Publisher: Theory and Practice of Logic Programming


Leopoldo Bertossi


We propose answer-set programs that specify and compute counterfactual interventions on entities that are input to a classification model. In relation to the outcome of the model, the resulting counterfactual entities serve as a basis for the definition and computation of causality-based explanation scores for the feature values in the entity under classification, namely responsibility scores. The approach and the programs can be applied with black-box models, and also with models that can be specified as logic programs, such as rule-based classifiers. The main focus of this work is on the specification and computation of best counterfactual entities, that is, those that lead to maximum responsibility scores. From them one can read off the explanations as the maximum-responsibility feature values in the original entity. We also extend the programs to bring semantic or domain knowledge into the picture. We show how the approach can be extended by means of probabilistic methods, and how the underlying probability distributions can be modified through the use of constraints. Several examples of programs written in the syntax of the DLV ASP solver, and run with it, are shown.
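The responsibility scores mentioned in the abstract can be illustrated with a small brute-force sketch for a black-box classifier over binary features. This is a hypothetical Python illustration under simplifying assumptions, not the ASP programs of the paper: a feature's score here is 1/|S| for the smallest label-changing set of flips S that contains it, so the best counterfactual entities, those obtained by the smallest flip sets, are exactly the ones yielding maximum responsibility.

```python
from itertools import combinations

def counterfactual_sets(entity, classify):
    """Enumerate minimal sets of binary-feature flips that change the label
    assigned by the (black-box) classifier `classify`."""
    base = classify(entity)
    found = []
    for size in range(1, len(entity) + 1):
        for idxs in combinations(range(len(entity)), size):
            # Skip supersets of already-found flip sets: not minimal.
            if any(set(f) <= set(idxs) for f in found):
                continue
            flipped = list(entity)
            for i in idxs:
                flipped[i] = 1 - flipped[i]  # intervene on feature i
            if classify(flipped) != base:
                found.append(idxs)  # a (minimal) counterfactual intervention
    return found

def responsibility(entity, classify):
    """Responsibility-style score per feature: 1/|S| for the smallest
    label-changing flip set S containing the feature, 0 if none exists."""
    scores = [0.0] * len(entity)
    for s in counterfactual_sets(entity, classify):
        for i in s:
            scores[i] = max(scores[i], 1.0 / len(s))
    return scores
```

For example, with the toy classifier `lambda e: int(e[0] and e[1])` and entity `[1, 1, 0]`, flipping either of the first two features alone changes the label, so both get score 1.0, while the third feature is irrelevant and scores 0.0.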
