The 3-year project, part of the EU CHIST-ERA co-fund (with TAČR as the funding body for the Czech partner), will start on 1 March 2021.
Explainability is of significant importance in the move towards trusted, responsible and ethical AI, yet it remains in its infancy. Most relevant efforts focus on increasing the transparency of AI model design and training data, and on statistics-based interpretations of the resulting decisions (interpretability).
Explainability considers how AI can be understood by human users. The understandability of such explanations and their suitability to particular users and application domains have received very little attention so far. Hence there is a need for a drastic, interdisciplinary evolution of XAI methods. CIMPLE will draw on models of human creativity, both in manipulating and in understanding information, to design more understandable, reconfigurable and personalisable explanations.
Human factors are key determinants of the success of relevant AI models. In some contexts, such as misinformation detection, existing technical XAI methods do not suffice, as the complexity of the domain and the variety of relevant social and psychological factors can heavily influence users' trust in the derived explanations. Past research has shown that presenting users with true/false credibility decisions is inadequate and ineffective, particularly when a black-box algorithm is used.
Knowledge Graphs offer significant potential to better structure the core of AI models and to use semantic representations when producing explanations for their decisions. By capturing the context and the application domain in a granular manner, such graphs provide a much-needed semantic layer that is currently missing from typical brute-force machine learning approaches. To this end, CIMPLE aims to experiment with innovative social and knowledge-driven AI explanations, and to use computational creativity techniques to generate powerful, engaging, and easily and quickly understandable explanations of complex AI decisions and behaviour.
These explanations will be tested in the domain of detection and tracking of manipulated information, taking into account social, psychological and technical explainability needs and requirements.
- Raphael Troncy (EURECOM, France)
- Other partners:
  - Open University (UK)
  - INESC-ID (Portugal)
  - Prague University of Economics and Business (Czech Rep.)
  - webLyzard technology (Austria)
Team leader & members
- Vojtěch Svátek
- David Chudán
- Filip Vencovský
- Ondřej Zamazal
Prague University of Economics and Business (VŠE)
Faculty of Informatics and Statistics
Dept. of Information and Knowledge Engineering
Prof. Vojtěch Svátek
We seek motivated students interested in a PhD position related to Knowledge Graphs, Computational Linguistics (especially Information Extraction or Sentiment Analysis), and/or Information Visualization. Up to three PhD positions with grant funding (project contracts of approx. CZK 240,000 per annum) plus regular PhD stipends will be offered from September 2021. Candidates should contact Prof. Svátek by e-mail no later than the end of April 2021; this includes candidates whose MSc degree is not yet completed but is expected to be obtained by September 2021.