On Wednesday, 21 January, the D-CREDO consortium and its associate partners came together for a focused and forward‑looking research meeting dedicated to ongoing work at the intersection of artificial intelligence and clinical reasoning. Designed to foster exchange across institutions, the meeting featured three short, research‑driven presentations, each followed by an open Q&A, allowing partners both to showcase progress and to benefit from the collective expertise within the consortium.

AI Literacy Among Students and Educators
presented by Wiktoria Bieniek
This study responds to a key challenge in health professions education: while AI-based tools are rapidly expanding in medical practice, the integration of AI-related competencies into medical and nursing curricula remains limited and inconsistent. As a result, students’ and educators’ preparedness to work with AI varies widely, and validated approaches to assessing AI literacy are still scarce.
AI literacy will be assessed using the Scale for the Assessment of Non-Experts’ AI Literacy (SNAIL) (Laupichler et al., 2023). The study population includes medical students, nursing students, and academic educators (physicians and nurses) from institutions participating in the D-CREDO consortium. Data will be collected via an anonymous online survey (10–15 minutes) administered through LimeSurvey, consisting of 31 SNAIL items, demographic questions, and attention checks.
The primary aim is to determine AI literacy levels across participant groups and institutions, with secondary analyses examining differences across the three AI literacy dimensions: technical understanding, critical appraisal, and practical application. Descriptive and inferential analyses will support cross-group and cross-institutional comparisons.
The study is planned to start in March 2026 following ethical approval and conclude in the summer semester of 2026. Results will inform a gap analysis to guide the further development of D-CREDO learning units and train-the-trainer modules.

Using Large Language Models to Provide Feedback
presented by Jonas Verdonschot
In current clinical reasoning education, students rarely receive personalized, detailed feedback on exercises such as SBAR assessments. Limited teaching capacity makes it difficult for educators to provide timely, individualized responses, leaving students without clear, actionable guidance for improvement. This feedback gap can slow learning and constrain the development of diagnostic reasoning skills.
This study explores the feasibility and effectiveness of using generative AI to deliver automatically generated, personalized feedback on clinical reasoning exercises. The goal is twofold: to enhance students’ learning through timely, tailored feedback, and to support educators by reducing the time and effort required for feedback provision.
The study addresses three main questions: (1) whether AI-driven feedback improves clinical reasoning performance, assessed through repeated measurements of clinical reasoning scores over time; (2) how students experience personalized AI feedback, measured using validated questionnaires on attitudes toward AI and perceived feedback quality; and (3) how educators experience AI-supported feedback and its impact on workload, assessed through questionnaires on attitudes toward AI, feedback quality, and time spent on feedback.
The approach follows a multi-step design, starting with the development of clear criteria for good feedback through thematic synthesis, which are then embedded into AI prompt design and shared with educators. AI- and educator-generated feedback will be compared using a set of SBARs, varying prompt length and instructional conditions. Feedback quality will be evaluated through content analysis and scoring against the predefined criteria, alongside measurements of educators’ time investment. An intervention phase will examine AI-supported feedback in repeated clinical reasoning sessions, with educators reviewing AI output before release. Together, these analyses will help clarify the educational value of AI-generated feedback and its potential role in strengthening clinical reasoning education within D-CREDO.

How Students Weigh AI Advice: Evidence from Two Studies
presented by Yavuz Selim Kiyak
This research explores how medical students use AI-generated advice in clinical decision-making, drawing on a Judge–Advisor System framework to examine how advice is processed and weighted.
The first study was a randomized controlled experiment with 186 fourth-year medical students. Participants received identical ChatGPT-generated diagnostic feedback, either with or without a concise safety warning. The warning had no significant effect on advice taking: 15.3% of students changed their decision in the no-warning group compared to 15.9% in the warning group. These findings suggest an absence of automation bias and indicate that students may already underweight AI-generated diagnostic advice.
The second study included 648 medical students from all study years and examined how identical advice was weighted when attributed to different sources in ethically challenging psychiatric cases. Advice labeled as coming from ChatGPT was followed less often than advice attributed to a psychiatry resident, a specialist, or a national law and ethics committee. This pattern was consistent across all years of training.
Overall, the results show that students tend to discount AI advice, perceiving it as less reliable and treating the underlying decisions as moral rather than purely technical. These findings have important implications for how AI advisory systems should be introduced and discussed in clinical education.

We would like to thank all presenters, participants, and associate partners for their contributions, engagement, and continued collaboration within D-CREDO. These exchanges play an important role in keeping the project aligned and moving forward.
We look forward to organising similar meetings in the future and to continuing these conversations as the work of the consortium develops.
Stay connected with D-CREDO and follow our journey on LinkedIn for more updates, insights, and stories.
