Formalizing explanation design through interaction patterns in human-AI decision support

Authors Henry Maathuis, Daan Kolkman, Stefan Leijnen, Danielle Sent
Published in Proceedings of the 4th International Conference on Hybrid Human-Artificial Intelligence, HHAI 2025
Publication date 2025
Research groups Artificial Intelligence
Type Article

Summary

Trust in AI is crucial for effective and responsible use in high-stakes sectors such as healthcare and finance. One of the most commonly used techniques to mitigate mistrust in AI, and even to increase trust, is Explainable AI, which enables humans to understand the decisions made by AI-based systems. Interaction design, the practice of designing interactive systems, plays an important role in promoting trust by improving explainability, interpretability, and transparency, ultimately enabling users to feel more in control of and confident in the system's decisions. Based on an empirical study with experts from various fields, this paper introduces the concept of Explanation Stream Patterns: interaction patterns that structure and organize the flow of explanations in decision support systems. Explanation Stream Patterns formalize explanation streams by incorporating procedures such as progressive disclosure of explanations or more deliberate interaction with explanations through cognitive forcing functions. We argue that well-defined Explanation Stream Patterns provide practical tools for designing interactive systems that enhance human-AI decision-making.
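The summary names two such procedures, progressive disclosure and cognitive forcing functions, only at a conceptual level. As a purely illustrative sketch (not code from the paper; all class, field, and function names below are hypothetical assumptions), the following Python fragment shows one way the two could be combined in a decision support flow: the user must record their own judgment before any explanation is revealed, and explanation steps are then disclosed one at a time.

    from dataclasses import dataclass

    @dataclass
    class ExplanationStep:
        # A single unit of explanation that can be disclosed on request.
        label: str
        content: str

    @dataclass
    class ExplanationStream:
        # Hypothetical sketch of one Explanation Stream Pattern, combining
        # progressive disclosure with a simple cognitive forcing function.
        steps: list[ExplanationStep]
        require_user_judgment: bool = True   # toggle the forcing function
        revealed: int = 0
        user_judgment: str | None = None

        def record_user_judgment(self, judgment: str) -> None:
            # Cognitive forcing function: the user commits to an initial
            # decision before seeing any AI explanation.
            self.user_judgment = judgment

        def reveal_next(self) -> ExplanationStep | None:
            # Progressive disclosure: surface one explanation step per call.
            if self.require_user_judgment and self.user_judgment is None:
                raise RuntimeError("Record the user's own judgment first.")
            if self.revealed >= len(self.steps):
                return None
            step = self.steps[self.revealed]
            self.revealed += 1
            return step

    # Hypothetical usage in a credit-decision support setting.
    stream = ExplanationStream(steps=[
        ExplanationStep("advice", "Model advises: reject the application."),
        ExplanationStep("key factors", "Main drivers: debt-to-income ratio, payment history."),
        ExplanationStep("counterfactual", "Advice flips to approve if debt-to-income < 0.35."),
    ])
    stream.record_user_judgment("approve")   # forced initial judgment
    while (step := stream.reveal_next()) is not None:
        print(f"[{step.label}] {step.content}")

How explanation steps are ordered, which disclosure triggers are used, and how the forcing function is enforced are exactly the design decisions the paper's patterns are meant to make explicit; the sketch fixes one arbitrary choice for each.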

Contributors to this publication

Language English
Published in Proceedings of the 4th International Conference on Hybrid Human-Artificial Intelligence, HHAI 2025
Key words explainable artificial intelligence, decision support systems, interaction design, human-computer interaction
Digital Object Identifier 10.3233/FAIA250644
Page range 262-276

Henry Maathuis

  • Researcher
  • Research group: Artificial Intelligence