Competency-based assessment (CBA) has emerged as a cornerstone of health professions education over the past two decades, representing a significant shift from time-based training to an outcomes-driven approach (3). This transformation is rooted in the need for health professionals to demonstrate not only knowledge acquisition but also the ability to apply that knowledge in diverse, real-world clinical contexts. The movement began with calls for greater accountability in medical education and was propelled by frameworks such as the Accreditation Council for Graduate Medical Education (ACGME) competencies, CanMEDS roles, and later, the development of entrustable professional activities (EPAs) (4–6). These frameworks sought to define the essential capabilities of health professionals in a structured, observable, and measurable way. As a result, assessments in medical and health professions education have undergone a paradigm shift, from testing knowledge and isolated skills to evaluating integrated competencies that reflect readiness for practice.
At its core, competency-based assessment aims to determine whether learners can consistently perform professional tasks to a specified standard across various contexts. However, a central question remains: What are we truly measuring when we claim to assess competencies? While terminology implies direct assessment of competence, the reality is more complex. Competence is multidimensional, dynamic, and deeply context dependent. It encompasses knowledge, technical skills, clinical reasoning, communication, professionalism, and the ability to integrate these domains into practice. Many existing assessment tools—such as written exams, simulations, Objective Structured Clinical Examinations (OSCEs), and workplace-based assessments—only capture slices of this broader construct. Thus, assessments often serve as proxies for competence rather than direct measures.
Another layer of complexity arises in the interpretation and use of assessment results. In competency-based systems, the purpose of assessment is not merely to rank learners but to inform decisions about progression, remediation, and readiness for unsupervised practice (7). Consequently, the validity, reliability, and educational impact of assessment data are critical. Programmatic assessment has gained traction as a model that aggregates multiple data points over time to create a more comprehensive picture of learner development. This approach emphasizes narrative feedback, longitudinal tracking, and professional judgment to support meaningful decisions (8,9). However, tensions persist between the formative and summative functions of assessment, and between standardization and the nuanced interpretation needed in workplace settings.
The central challenge, therefore, lies in aligning assessment practices with the true goals of competency-based education. Are we measuring what matters? Are assessments capturing the ability to deliver safe, patient-centered care across varied clinical environments? Or are we constrained by methodological limitations, cultural norms, and logistical barriers that prevent us from fully realizing the promise of CBA? Addressing these questions requires ongoing scholarly inquiry, innovation in assessment design, and a commitment to faculty development. It also demands transparency in how assessment data are interpreted and used in decision-making processes. Ultimately, competency-based assessment must evolve not only to ensure that learners are competent but also to support continuous improvement in practice within a complex and ever-changing healthcare landscape.