Feedback and Coaching in MedEdPORTAL (2005–2025): Insights to Advance Assessment in Precision Medical Education

Wed, April 8, 9:45 to 11:15am PDT, JW Marriott Los Angeles L.A. LIVE, Floor: 2nd Floor, Platinum D

Abstract

Purpose
This study examined how MedEdPORTAL’s feedback and coaching resources published from 2005 through mid-2025 embody precision medical education (PME) principles. It assessed the presence of data-informed, equitable, learner-centered, and continuously improving practices to strengthen PME assessment systems.

Framework
Guided by a PME theoretical lens, the study employed a cyclical model of data inputs, analytics, insights, interventions, outcomes, and iterative adjustments. This framework integrates PME principles such as learner agency, equity, workflow integration, and reproducibility to enable safe, scalable, and effective education systems. Content and trends were analyzed in relation to these PME ideals.
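
As a concrete illustration of this coding framework, the minimal sketch below models a resource coded against the six cycle stages. The stage names follow the abstract; the CodedResource class, its fields, and the example values are hypothetical and do not represent the study’s actual codebook.

```python
# Minimal sketch of the cyclical PME coding framework described above.
# Stage names mirror the abstract; CodedResource and its fields are
# hypothetical, not the study's actual codebook.
from dataclasses import dataclass, field
from enum import Enum


class Stage(Enum):
    INPUT = "data inputs"
    ANALYTICS = "analytics"
    INSIGHTS = "insights"
    INTERVENTIONS = "interventions"
    OUTCOMES = "outcomes"
    ADJUSTMENTS = "iterative adjustments"


@dataclass
class CodedResource:
    """One MedEdPORTAL resource coded for PME features at each stage."""
    resource_id: str  # hypothetical identifier
    year: int         # publication year, used to assign five-year blocks
    features: dict[Stage, list[str]] = field(default_factory=dict)


# Example: a resource coded with one feature at two stages of interest.
r = CodedResource(resource_id="example-001", year=2021)
r.features[Stage.INPUT] = ["learner self-assessment"]
r.features[Stage.INSIGHTS] = ["personalized feedback"]
```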

Methods
A mixed-methods design combined qualitative content analysis with quantitative time-trend analyses. A total of 83 MedEdPORTAL feedback or coaching resources were systematically selected and coded for PME features across four sequential five-year blocks (2005–2025). Chi-square tests identified temporal trends; Cohen’s kappa assessed intercoder reliability.
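
For readers unfamiliar with the reliability statistic, here is a minimal sketch of an intercoder agreement check using scikit-learn’s cohen_kappa_score. The ratings are invented, and the library choice is an assumption, as the abstract does not name the analysis software.

```python
# Sketch of the intercoder reliability check: Cohen's kappa on two
# coders' binary ratings (1 = PME feature present, 0 = absent).
# The ratings below are hypothetical, not the study's data.
from sklearn.metrics import cohen_kappa_score

coder_a = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
coder_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # > 0.80 is conventionally "near-perfect"
```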

Data Source
IRB review was not required, as the dataset consisted of publicly available program reports without any identifiable information. The dataset included 83 MedEdPORTAL feedback and coaching resources published between 2005 and July 7, 2025. Both qualitative themes and quantitative counts of PME-aligned strategies were extracted.

Results
Agreement between the two independent coders was high (Cohen’s kappa > 0.89), supporting the reliability of the following key findings:
Input: Learner self-assessments and reflections were the most common data input (35%). Most resources (86%) used digital technology for data collection and interactions, but only 18% employed advanced systems integrated with clinical workflows. Use of validated assessments was reported in 41% of resources.
Analytics: Although diverse analytic methods were reported, no resource employed predictive analytics.
Insights: Personalized feedback (93%) and learner agency support (89%), including goal setting, self-monitoring, and reflection, were widespread. Equity models for underrepresented groups increased gradually.
Interventions: In-person feedback/coaching dominated (51%). Asynchronous formats (55%) slightly surpassed real-time approaches (45%). AI or interactive technologies were scarce (4%). Despite growing emphasis on privacy, few resources reported de-identification or informed consent protocols.
Outcomes: Most outcomes targeted learner skill development (68%); programmatic and clinical system impacts were rare (16% and 4%, respectively). Short-term, single-episode interventions prevailed, limiting the ability to track longitudinal data.
Adjustments: Follow-up changes were reported in 46% of resources, most often involving curriculum modifications (26%) or feedback process revisions (16%).
Significant upward trends over two decades were observed in equity (p = 0.02), follow-up changes (p = 0.003), privacy (p < 0.001), longitudinal data use (p = 0.034), and integration across the ongoing assessment cycle (p = 0.014). Other PME strategies – such as personalization, technology use, learner agency, real-time feedback, and validated instruments – showed non-significant changes.
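
As a worked illustration of the trend analyses reported above, the sketch below runs a chi-square test on presence/absence counts of a single feature across the four five-year blocks. The counts are hypothetical (though they sum to the 83 resources), and scipy is an assumed tool; the abstract does not specify the exact test variant or software.

```python
# Sketch of a temporal trend test: chi-square on hypothetical counts of
# resources reporting a PME feature (rows) across the four five-year
# blocks (columns). These counts are illustrative, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

#                 2005-09  2010-14  2015-19  2020-25
observed = np.array([
    [1,  3,  8, 14],   # feature reported
    [9, 15, 18, 15],   # feature not reported
])

chi2, p, dof, _ = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```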

Significance
This study presents an empirically grounded map of PME principle integration in MedEdPORTAL’s feedback and coaching literature over the past two decades. The evidence documents progress in equity, privacy, and continuous improvement while underscoring persistent gaps in personalization, technology use, and data stewardship. These insights can guide future innovations toward scalable, equitable, and precision-aligned medical education assessments.
