AI Innovations in Medical Education

UT Southwestern ❘ Discovery@UTSW 2026 ❘ P17 AI in Education
UTSW is deploying artificial intelligence to provide accurate, immediate feedback on assessments, a win for both medical students and faculty.
Artificial intelligence (AI) is rapidly transforming the landscape of medical education, offering innovative solutions to long-standing challenges.
One such challenge, efficiently and accurately evaluating student performance, has found a promising ally in AI. At UT Southwestern, AI tools are now being harnessed to streamline the reviewing and grading of medical students’ work, significantly reducing faculty workload while enhancing the consistency and quality of academic assessments.
A Glimpse Into the Future
Researchers from the Jamieson Lab, in collaboration with UTSW’s Simulation Center, are using advanced AI tools to analyze and swiftly grade the Objective Structured Clinical Examination (OSCE), a standardized test of medical students’ clinical skills built around a simulated patient encounter. Since fall 2023, AI evaluation has transformed OSCE feedback for UT Southwestern medical students, replacing more than 91% of human grading and returning results within days rather than weeks, as the researchers report in their case study, “Rubrics to Prompts: Assessing Medical Student Post-Encounter Notes with AI,” published in The New England Journal of Medicine.
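The “rubrics to prompts” idea can be pictured with a minimal sketch: each rubric item becomes a grading instruction for a language model, and the model’s structured verdicts are tallied into a score. The rubric items, prompt wording, and response format below are hypothetical illustrations, not the Jamieson Lab’s actual implementation; the model call itself is left as a pluggable callable.

```python
# Hypothetical rubrics-to-prompts sketch: each rubric item is turned into a
# yes/no grading instruction for a language model, and the structured replies
# are tallied into a fractional score. All names and formats are illustrative.

RUBRIC = [
    "Documents the chief complaint in the patient's own words",
    "Lists at least three plausible differential diagnoses",
    "Proposes an initial management plan consistent with the history",
]

def build_prompt(rubric_item: str, note: str) -> str:
    """Convert one rubric item into a grading prompt for a language model."""
    return (
        "You are grading a medical student's post-encounter note.\n"
        f"Criterion: {rubric_item}\n"
        "Answer strictly YES or NO.\n\n"
        f"Note:\n{note}"
    )

def score_note(note: str, ask_model) -> float:
    """Grade a note against every rubric item.

    `ask_model` is any callable that takes a prompt string and returns
    'YES' or 'NO' (in practice, a call to an LLM API)."""
    met = sum(
        1
        for item in RUBRIC
        if ask_model(build_prompt(item, note)).strip().upper() == "YES"
    )
    return met / len(RUBRIC)

# Usage with a trivial stand-in "model" that always answers YES:
print(score_note("CC: chest pain ...", lambda prompt: "YES"))  # 1.0
```

Because the rubric is just data, swapping in a different faculty member’s rubric changes the grading behavior without touching the code, which is what makes the on-the-fly flexibility described below plausible.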
“The process of manually reviewing and grading these notes is labor-intensive, time-consuming, and prone to inconsistency,” says Andrew Jamieson, Ph.D., Assistant Professor in the Lyda Hill Department of Bioinformatics. “The Sim Center hosts more than 2,000 OSCE encounters in a single session, rendering the operational demands of timely, accurate grading particularly daunting. It was not uncommon for students to wait weeks to months to receive scores.”
Accurate, scalable, automated grading systems are helping to address these challenges and maximize the educational benefits of the OSCE by providing feedback while the experience is fresh in students’ minds.
This approach can also be readily adapted, at scale, to other institutions and to new scenarios.
“This flexibility provides a glimpse into a future where medical educators can effortlessly evaluate student performance using any number of bespoke grading rubrics on the fly, each tailored to the unique objectives and subjective preferences of faculty members,” Dr. Jamieson says.
Near-Instantaneous Results
This is one of many examples of how UTSW is looking for ways to incorporate AI into medical education in a manner that enhances the experience for both students and faculty.
The Office of Medical Education (OME) sees a bright future for applying multimodal AI using audio, video, and notes to assess competency-based medical education. Frameworks such as the Association of American Medical Colleges’ Entrustable Professional Activities (EPAs) outline 13 activities in which medical school graduates must be competent on the first day of their residency training.
Each EPA is complex and multifaceted, so OME educators have “atomized” each one into discrete and measurable sub-competencies (three to four per EPA). “Using the pioneering technology created by the Jamieson Lab and leveraging the data capture and simulation expertise of the Sim Center greatly facilitates evaluating each sub-competency,” says Sherry C. Huang, M.D., Vice Provost and Senior Associate Dean for Education.
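The “atomization” described above can be modeled as a simple mapping from each EPA to its measurable sub-competencies. In the sketch below, the EPA titles and sub-competency labels are invented placeholders; only the 13-EPA count and the three-to-four split per EPA come from the text.

```python
# Illustrative data structure for "atomized" EPAs: each EPA maps to three or
# four discrete, measurable sub-competencies. Labels are invented placeholders,
# not OME's actual sub-competency list; 13 EPAs would be defined in total.

EPAS = {
    "EPA 1: Gather a history and perform a physical exam": [
        "Elicits a focused history",
        "Performs a hypothesis-driven physical exam",
        "Identifies pertinent positives and negatives",
    ],
    "EPA 2: Prioritize a differential diagnosis": [
        "Crafts a concise problem representation",
        "Generates a prioritized differential",
        "Selects a working diagnosis",
        "Justifies the ranking with clinical findings",
    ],
    # ... the remaining 11 EPAs would follow the same pattern.
}

def checklist(epas: dict) -> list[tuple[str, str]]:
    """Flatten the mapping into (EPA, sub-competency) assessment items."""
    return [(epa, sub) for epa, subs in epas.items() for sub in subs]

for epa, sub in checklist(EPAS):
    print(f"{epa} -> {sub}")
```

Flattening the mapping into discrete items is what lets each sub-competency be graded (and reported) independently, rather than assigning one opaque score per EPA.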
In the pre-clerkship phase, educators have created detailed scenarios with corresponding AI-graded rubrics to evaluate developing history-taking, physical examination, and early clinical reasoning skills.
In the subsequent clerkship phase of the curriculum, they drill down further on clinical reasoning, using structured rubrics to dissect each student’s ability to craft a problem representation, a differential diagnosis, and a management plan. The multimodal AI grades the notes and produces near-instantaneous results.
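The clerkship rubric’s three-part structure (problem representation, differential diagnosis, management plan) could be represented as a sectioned rubric whose per-item verdicts roll up into section scores. This is a hypothetical sketch, assuming boolean item verdicts from an AI grader; the item wording is invented.

```python
# Hypothetical sectioned rubric mirroring the three note components named in
# the text. Item verdicts (True/False) would come from an AI grader; here
# they are supplied directly for illustration.

SECTIONS = {
    "Problem representation": [
        "One-sentence summary with semantic qualifiers",
        "Identifies the key clinical problem",
    ],
    "Differential diagnosis": [
        "Lists plausible alternatives",
        "Ranks by likelihood",
    ],
    "Management plan": [
        "Orders appropriate initial workup",
        "Addresses the leading diagnosis",
    ],
}

def section_scores(verdicts: dict[str, list[bool]]) -> dict[str, float]:
    """Compute the per-section fraction of rubric items met."""
    return {name: sum(v) / len(v) for name, v in verdicts.items()}

verdicts = {
    "Problem representation": [True, True],
    "Differential diagnosis": [True, False],
    "Management plan": [False, False],
}
print(section_scores(verdicts))
# {'Problem representation': 1.0, 'Differential diagnosis': 0.5, 'Management plan': 0.0}
```

Scoring each section separately is what makes the feedback diagnostic: a student can see that their differential is strong while their management plan needs work, instead of receiving a single aggregate grade.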
Enriching Student Learning
Ngoc “Kim” Van Horn, M.D., Associate Professor of Pediatrics, has crafted a five-station examination for her department’s clerkship that includes history-taking, physical examination, clinical documentation, patient counseling, and an oral presentation of the case to a more senior clinician. The rubric for this evaluation draws on video, audio, and text analysis for true multimodal AI assessment. Upon completion, each student receives a detailed report on their performance for each EPA sub-competency, showing what they did well and where they can improve.
“This level of feedback vastly exceeds what was available to my generation and those prior in medical school and marks a new day for medical education,” says James H. Willig, M.D., M.S.P.H., Professor of Internal Medicine and Associate Dean for Undergraduate Medical Education. “We do not need to wait to see a poor grade to know a learner needs help, but rather we can observe their development benchmarked to peers for each EPA sub-competency.”
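The per-sub-competency report benchmarked to peers that Dr. Willig describes could be assembled by comparing each student’s scores against the cohort distribution. A minimal sketch, with an invented threshold rule (at or above the cohort mean counts as a strength):

```python
# Hypothetical per-sub-competency report: flag each sub-competency as a
# strength or an area to improve, benchmarked against the cohort mean.
# The threshold rule and labels are illustrative, not OME's actual method.
from statistics import mean

def report(student: dict[str, float], cohort: dict[str, list[float]]) -> dict[str, str]:
    """Compare a student's score on each sub-competency with peers' scores."""
    return {
        sub: "strength" if score >= mean(cohort[sub]) else "area to improve"
        for sub, score in student.items()
    }

student = {"Elicits a focused history": 0.9, "Ranks by likelihood": 0.4}
cohort = {
    "Elicits a focused history": [0.7, 0.8],
    "Ranks by likelihood": [0.5, 0.7],
}
print(report(student, cohort))
# {'Elicits a focused history': 'strength', 'Ranks by likelihood': 'area to improve'}
```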
With the arrival of higher-precision, competency-based medical education, UTSW will now embark upon the hard work of validating these methods and researching how best to leverage this technology to augment student development.
“The curricular implications will be far-reaching, and I am immensely excited that we will be able to guarantee that upon completion of our curriculum, our learners will have reached competency across all EPAs and are ready to excel on day one of graduate medical education training,” Dr. Willig says.
By embracing these technological advancements, UTSW is reaffirming its enduring commitment to excellence in medical education while empowering faculty, enriching student learning, and shaping the future of clinical practice.