Paper Summary

Defining Competence: Assessment Strategies in Aligning Different Frameworks in One Medical School

Sun, April 12, 11:45am to 1:15pm PDT, JW Marriott Los Angeles L.A. LIVE, Floor: 3rd Floor, Plaza II

Abstract

This paper describes the process of aligning assessment tools guided by different competency frameworks across specialties and sites at one medical school.
Various competency-based conceptual frameworks are used in undergraduate medical education (UME) and graduate medical education (GME) to guide competency measurement, such as the Accreditation Council for Graduate Medical Education (ACGME) Milestones and assessments based on entrustable professional activities (EPAs) (28, 29). Aligning these frameworks in specific medical education contexts presents both opportunities and challenges. While grounded in the shared goal of competency-based medical education (CBME), these frameworks differ in structure, terminology, and intended application. ACGME Milestones are developmental, focusing on progressive achievement within specific domains, while EPAs are task-based, integrating multiple competencies into observable clinical activities (30–32). When schools implement assessments guided by both frameworks, alignment can enhance the clarity and consistency of learner expectations across different stages of training. However, ensuring coherence between the frameworks requires careful curriculum mapping, faculty training, and ongoing evaluation. Although medical schools may adopt multiple frameworks in their curricula and assessments, it remains unclear how these frameworks align in practice and what factors facilitate or hinder their integration.
Informed by the ACGME Milestones, our institution has used nine core competencies as the foundation for its curriculum and assessment over the past decade. While the entire curriculum and its associated assessment strategies are centered on these nine competencies, the school recently introduced an EPA-based form designed specifically for formative assessment during acting internships (AIs), beginning in May 2025. The form covers eight EPAs plus a separate section on professionalism and uses a four-level rating scale: functioning at the level of an intern, at the expected level of an acting intern, at the level of a core clerkship student, and below the level of a core clerkship student. A total of 81 acting internships across multiple specialties and four clinical sites have been encouraged to adopt the form.
Facilitators of alignment include a shared institutional vision for CBME across the school and its departments, which provides a foundation for cross-mapping and integrated assessment strategies centered on the ACGME Milestones. Support from leadership, including deans and four dedicated AI site directors, along with faculty development and a centralized assessment platform, further strengthens alignment efforts. However, our attempt to align the frameworks has also met notable barriers. Faculty vary in their familiarity with and acceptance of each framework: some appreciate the intuitive, task-based nature of EPA assessments, while others are more comfortable with the structure and familiarity of the ACGME Milestones. Terminological and methodological differences, including assessment focus, scale interpretation, and level of granularity, can create confusion for both faculty and learners. Moreover, EPAs are not mapped to clinical activities across the curriculum. Finally, aligning these frameworks demands substantial administrative, technological, and faculty time, resources that are often limited.
While assessment strategies that align competency-based frameworks such as the ACGME Milestones and EPAs can strengthen the assessment infrastructure of medical education, our experience suggests that achieving this alignment depends on strong institutional support, a shared understanding among educators, and well-integrated curriculum and assessment systems in local contexts.