PDTX: A novel local explainer based on the Perceptron Decision Tree

Title: PDTX: A novel local explainer based on the Perceptron Decision Tree

Authors: Samara Silva Santos, Marcos Antonio Alves, Leonardo Augusto Ferreira and Frederico Gadelha Guimarães.

Abstract:
Artificial Intelligence (AI) approaches that achieve good results and generalization are often opaque models, and the decision-maker has no clear explanation of the final classification. As a result, there is an increasing demand for Explainable AI (XAI) models, whose main goal is to provide solutions understandable to human beings and to elucidate the relationship between the features and the black-box model. In this paper, we introduce a novel explainer method, named PDTX, based on the Perceptron Decision Tree (PDT). The evolutionary algorithm jSO is employed to fit the weights of the PDT so that it approximates the predictions of the black-box model. It is then possible to extract valuable information that explains the behavior of the machine learning method. PDTX was tested on 10 different datasets from a public repository as an explainer for three classifiers: Multi-Layer Perceptron, Random Forest and Support Vector Machine. Decision Tree (DT) and LIME were used as baselines for comparison. The results showed promising performance in the majority of the experiments, with PDTX achieving 87.34% average accuracy, against 64.23% for DT and 37.44% for LIME. PDTX can be used to explain black-box classifiers on local instances and is model-agnostic.
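To illustrate the general idea described in the abstract (a surrogate with perceptron-style splits fitted by an evolutionary optimizer to mimic a black-box classifier around one instance), here is a minimal, hypothetical sketch. It is not the authors' implementation: it uses a single oblique "perceptron node" instead of a full PDT, SciPy's differential evolution in place of jSO, and an assumed Gaussian perturbation scheme for the local neighborhood.

```python
# Hedged toy sketch of a PDTX-like local surrogate (not the paper's code).
# A single perceptron node, sign(w.x + b), is fitted so its decisions agree
# with a black-box classifier's predictions in a neighborhood of one instance.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                                  # local instance to be explained
rng = np.random.default_rng(0)
# Assumption: sample the neighborhood with Gaussian perturbations around x0.
Z = x0 + rng.normal(scale=0.3, size=(500, X.shape[1]))
# Binary target: does the black box give the same class as it gives to x0?
target = (black_box.predict(Z) == black_box.predict(x0[None, :])[0]).astype(int)

def fidelity_loss(params):
    w, b = params[:-1], params[-1]
    pred = (Z @ w + b > 0).astype(int)     # perceptron-node decision
    return 1.0 - np.mean(pred == target)   # 1 - agreement with the black box

bounds = [(-5, 5)] * (X.shape[1] + 1)      # weights plus bias
res = differential_evolution(fidelity_loss, bounds, seed=0, maxiter=200)
w, b = res.x[:-1], res.x[-1]
print("surrogate fidelity:", 1.0 - res.fun)
print("oblique-split weights (local feature importance):", w)
```

The fitted weights of the oblique split can then be read as local feature importances, which is the kind of information the paper extracts from the full PDT.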

Keywords:
Explainable AI, Interpretability, Machine Learning, Local explanations, XAI.

Pages: 8

DOI: 10.21528/CBIC2021-50

Paper (PDF): CBIC_2021_paper_50.pdf

BibTeX file: CBIC_2021_50.bib