Visualization and Concept Drift Detection Using Explanations of Incremental Models

Jaka Demšar, Zoran Bosnić, Igor Kononenko

Abstract


The temporal dimension, which is ever more prevalent in data, makes data stream mining (incremental learning)
an important field of machine learning. In addition to accurate predictions, explanations of the model
and of individual examples are a crucial component, as they provide insight into the model's decisions and lessen
its black-box nature, thus increasing the user's trust. Proper visual representation of data is also highly relevant
to the user's understanding: visualization is often utilized in machine learning, since it shifts the balance between
perception and cognition to take fuller advantage of the brain's abilities. In this paper we review visualization
in the incremental setting and devise an improved version of an existing visualization of explanations of incremental
models. Additionally, we discuss the detection of concept drift in data streams and experiment with
a novel detection method that uses the stream of the model's explanations to determine where changes occur in
the data domain.
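
As a concrete illustration of the general idea, a drift detector can monitor summary statistics of the
explanation stream and flag points where consecutive windows diverge. The Python sketch below is not the
method proposed in the paper; it assumes fixed-dimension explanation vectors (e.g., per-feature contribution
scores) and a hypothetical L1-distance threshold, both chosen here only for illustration.

    import numpy as np

    def detect_drift(explanations, window=100, threshold=0.2):
        # Compare mean explanation vectors of consecutive, non-overlapping
        # windows; report the stream position whenever the L1 distance
        # between them exceeds `threshold`. Illustrative heuristic only,
        # not the detection method devised in the paper.
        buf, prev_mean, drift_points = [], None, []
        for i, e in enumerate(explanations):
            buf.append(np.asarray(e, dtype=float))
            if len(buf) == window:
                cur_mean = np.mean(buf, axis=0)
                if prev_mean is not None and np.abs(cur_mean - prev_mean).sum() > threshold:
                    drift_points.append(i)
                prev_mean, buf = cur_mean, []
        return drift_points

Feeding this detector a stream of explanation vectors produced alongside an incremental model's predictions
would yield the stream positions at which the explanations, and hence presumably the concept, have shifted.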



This work is licensed under a Creative Commons Attribution 3.0 License.