The success of intelligent systems has led to an explosion of new autonomous systems with capabilities such as perception, reasoning, decision support and self-actioning. Despite the tremendous benefits of these systems, they operate as black boxes, and their effectiveness is limited by their inability to explain their decisions and actions to human users. The problem of explainability in Artificial Intelligence is not new, but the rise of autonomous intelligent systems has created the need to understand how these systems reach a solution, make a prediction or a recommendation, or reason to support a decision, in order to increase users' trust in them. Additionally, the European Union included in its regulation on the protection of natural persons with regard to the processing of personal data a new provision on the need for explanations to ensure fair and transparent processing in automated decision-making systems.
The main goal of this project is to research techniques for explaining artificial intelligent systems in order to increase their transparency and trustworthiness. The goal of Explainable Artificial Intelligence (XAI) is "to create a suite of new or modified machine learning techniques that produce explainable models that, when combined with effective explanation techniques, enable end users to understand, appropriately trust, and effectively manage the emerging generation of Artificial Intelligence (AI) systems".
The main contribution of the project to XAI is the use of Case-Based Reasoning (CBR) methods to add explanations to several AI techniques through reasoning-by-example. CBR systems have prior experience in interactive explanations and in exploiting memory-based techniques to generate them. The memory of previous facts and decisions will be the main mechanism in this project for explaining the reasoning behind AI systems. More precisely, this project will delve into generic explanation techniques that are extensible to different domains, into both symbolic and subsymbolic AI techniques, and into personalized explanations.
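The reasoning-by-example idea above can be sketched minimally: a prediction is justified by retrieving the most similar previous cases from the case base and presenting them as the explanation. The case structure, the Euclidean similarity measure and all names below are illustrative assumptions, not part of the project's actual design.

```python
# Minimal sketch of explanation-by-example with Case-Based Reasoning (CBR):
# the system justifies a prediction by showing the past cases that support it.
# Case representation and similarity measure are simplifying assumptions.

from math import sqrt

def euclidean(a, b):
    """Distance between two numeric feature vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve(case_base, query, k=3):
    """Return the k cases most similar to the query (smallest distance)."""
    ranked = sorted(case_base, key=lambda c: euclidean(c["features"], query))
    return ranked[:k]

def predict_and_explain(case_base, query, k=3):
    """Predict by majority label of the nearest cases; return those cases
    as the explanation ("this decision resembles these previous ones")."""
    neighbours = retrieve(case_base, query, k)
    labels = [c["label"] for c in neighbours]
    prediction = max(set(labels), key=labels.count)
    explanation = [c["id"] for c in neighbours]
    return prediction, explanation

# Toy case base: previous decisions with numeric features and an outcome.
cases = [
    {"id": "c1", "features": (1.0, 1.0), "label": "approve"},
    {"id": "c2", "features": (1.2, 0.9), "label": "approve"},
    {"id": "c3", "features": (8.0, 7.5), "label": "reject"},
    {"id": "c4", "features": (7.8, 8.1), "label": "reject"},
]

pred, because = predict_and_explain(cases, (1.1, 1.0), k=3)
# `pred` is the decision; `because` lists the retrieved cases that justify it.
```

The key point of the sketch is that the explanation is not a post-hoc rationalization: the retrieved cases are the very evidence the memory-based reasoner used to decide.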
The social return of this project will be achieved through the development of XAI systems in different domains with real users, in order to demonstrate the reusability and multidisciplinarity of the proposed techniques.
First, we propose the use of explanations in individual and group recommender systems in two different domains: entertainment and tourism, and collaborative learning environments. In both cases, the explanations should help users understand the recommendations in order to trust the decisions made by the intelligent system. In the domain of videogames, we use AI techniques to analyse player traces and to create non-player characters that can explain their decisions, making them more trustworthy to human players. Moreover, we will work on authoring tools that support the development of cultural artifacts by users without programming knowledge. During the project we will address new experimental domains, applying explainable AI to the massive data obtained from sensor systems, the Internet of Things (IoT) and wearables.
Finally, this project will offer a catalogue of AI models that will be combined with CBR techniques to generate explanations that enable users to understand, trust, and effectively manage new XAI systems.