Checklist for evaluating the meaningfulness of AI system explanations for internal users

Authors Henry Maathuis, Jenia Kim, Kees van Montfort, Raymond Zwaal, Danielle Sent, Sieuwert van Otterloo
Publication date 2025
Research groups Artificial Intelligence
Type Report / working paper

Summary

It is now widely accepted that decisions made by AI systems must be explainable to their users. In practice, however, it often remains unclear how this explainability should be concretely implemented. This is especially important for non-technical users, such as claims assessors at insurance companies, who need to understand AI system decisions and be able to explain them to customers — for example, when explaining a rejected insurance claim or loan application. Although the importance of explainable AI is broadly recognized, practical tools to achieve it are often lacking. In this handbook, we therefore combine insights from two use cases in the financial sector with findings from an extensive literature review. This led to the identification of 30 key aspects of meaningful AI explanations. Based on these aspects, we developed a checklist to help AI developers make their systems more explainable. The checklist not only provides insight into how understandable an AI application currently is for end users, but also highlights areas for improvement.

Contributors to this publication

Language English
Key words use cases, checklist, AI (artificial intelligence), systematic literature review, financial sector

Henry Maathuis

  • Researcher
  • Research group: Artificial Intelligence