Trust in AI is crucial for effective and responsible use in high-stakes sectors such as healthcare and finance. One of the most commonly used techniques to mitigate mistrust in AI, and even to increase trust, is Explainable AI, which enables humans to understand the decisions made by AI-based systems. Interaction design, the practice of designing interactive systems, plays an important role in promoting trust by improving explainability, interpretability, and transparency, ultimately enabling users to feel more in control of and more confident in the system’s decisions. Based on an empirical study with experts from various fields, this paper introduces the concept of Explanation Stream Patterns: interaction patterns that structure and organize the flow of explanations in decision support systems. Explanation Stream Patterns formalize explanation streams through procedures such as progressive disclosure of explanations, or more deliberate interaction with explanations via cognitive forcing functions. We argue that well-defined Explanation Stream Patterns provide practical tools for designing interactive systems that enhance human-AI decision-making.
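
To make the idea concrete, the following is a minimal TypeScript sketch of how an explanation stream might combine progressive disclosure with a cognitive forcing function. The names used here (`ExplanationStep`, `ExplanationStream`) are illustrative assumptions, not an implementation prescribed by the paper.

```typescript
// Minimal sketch: a progressive-disclosure explanation stream with a
// cognitive forcing step. All names are hypothetical illustrations.

type ExplanationStep =
  | { kind: "summary"; text: string }     // brief rationale, shown first
  | { kind: "detail"; text: string }      // deeper explanation, revealed on demand
  | { kind: "forcing"; prompt: string };  // cognitive forcing function: ask the user to commit first

class ExplanationStream {
  private index = 0;
  constructor(private steps: ExplanationStep[]) {}

  /** Reveal the next step; a forcing step blocks until the user responds. */
  next(userResponse?: string): ExplanationStep | undefined {
    const step = this.steps[this.index];
    if (!step) return undefined;
    if (step.kind === "forcing" && userResponse === undefined) {
      // Progressive disclosure pauses here: the user must record their own
      // judgment before the stream continues to the AI's reasoning.
      return step;
    }
    this.index += 1;
    return step;
  }
}

// Illustrative stream for a loan-approval decision support system.
const stream = new ExplanationStream([
  { kind: "forcing", prompt: "What is your own initial assessment of this application?" },
  { kind: "summary", text: "The model recommends approval (confidence 0.82)." },
  { kind: "detail", text: "Main factors: stable income history, low debt-to-income ratio." },
]);

let step = stream.next();                 // returns the forcing prompt and waits
step = stream.next("I would approve.");   // user commits; the forcing step is consumed
step = stream.next();                     // summary explanation is disclosed
step = stream.next();                     // further detail disclosed on demand
```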