Research in Explainable Artificial Intelligence (XAI) has experienced a considerable boom in recent years, generating wide interest in institutions and companies as well as in academia. The need to understand the reasons that lead an AI system to reach a conclusion, make a prediction, a recommendation or a decision has grown, since such understanding is what allows users to trust the system. In this context, numerous XAI techniques have emerged and are being applied in a growing number of practical areas.
The PERXAI project is proposed as an extension of the results of our previous CBRex project, in which we investigated the application of Case-Based Reasoning (CBR) as a surrogate technique for explaining AI algorithms. That project clearly identified the need to approach explainable AI processes from a much broader perspective than the XAI algorithm itself, one that also covers the data used by the intelligent system and the way the explanation is presented (displayed) to the user, where interactivity plays a very important role. Additionally, for explanation processes to be effective, they must be personalized to the user to whom the explanation is addressed. Generating such complete, personalized explanations is inherently complex. However, the experience of the CBRex project suggests that explanation processes follow a series of common patterns that can be abstracted from different use cases and reused across domains.
Thus, the main objective of this project is to build a catalogue of comprehensive explanation strategies captured from user experience in different domains and applications, strategies that can be abstracted, formalized and reused for other domains, applications and users, following a CBR approach complemented by ontology-based semantic tagging that facilitates their transfer to other contexts.