Objective: The objective of this presentation is to examine the evaluation decisions made in a Research-Practice Partnership (RPP) that brings together researchers from a consortium of universities, school- and district-level practitioners from two neighboring districts, and one state educational department in Chile, focused on strengthening Professional Learning Communities (PLCs). The presentation highlights the challenges, successes, and lessons learned from assessing the partnership's two-year implementation.
Theoretical Framework: Developmental evaluation (Patton, 2011; Penuel et al., 2020) provides a framework for navigating complex environments of social and educational innovation, acknowledging that traditional methodologies fail to provide evaluative feedback that is timely, relevant, culturally pertinent, and useful for adaptive decision-making. This presentation advocates for adaptive evaluation strategies, recognizing that RPPs and their evaluation should be developed together in an interwoven, iterative process that makes the evaluation part of the change process itself (Patton, 2011).
Methods: Initially, our RPP measurement infrastructure focused mainly on testing our change theory. We applied the PLCA-R questionnaire (Olivier & Hipp, 2010), which received 189 responses from principals, leadership teams, teachers, and assistants, establishing a baseline for PLC conditions in nine schools. We also monitored school conditions, variations, and progress. Recognizing the relevance of assessing RPP development itself, we used Cooper et al.'s (2019) model to monitor collaborative processes such as planning, data use, communication, trust, brokering activities, and capacity building. We interviewed principals and leadership team members at three intervals to assess PLC development, complementing qualitative data from rubrics, field notes, and teacher leaders' journals (Patton, 2014; Creswell & Poth, 2018). Finally, we held several meetings to reflect on and analyze the data in order to improve subsequent implementation steps.
Results: A key focus was documenting our learning process in designing and implementing an RPP. Evaluating the conditions for implementing PLCs and the schools' trajectories allowed us to produce robust data. Collaborative research introduced uncertainty about the intervention's underlying change theory, posing challenges for planning and implementing monitoring systems. Adaptation and flexibility were essential for conducting relevant assessments.
Multiple methodologies were crucial for meeting evaluation objectives and supporting the intervention. Some instruments provided insightful information and valuable resources for working with schools, proving crucial to our analysis while establishing the PLCs and the improvement network among schools. However, other instruments, such as rubrics and journals, were introduced too late or altered too substantially, complicating longitudinal comparisons. Methodological and ethical concerns included the potential imposition of additional burdens on schools, the reinforcement of power imbalances, and a lack of clarity about the evaluation methods used in other RPP experiences.
Scientific or Scholarly Significance: These challenges highlight the need to design and implement monitoring strategies aligned with both RPP principles and overall program evaluation standards. While developmental evaluation and implementation science provide strong foundations (Penuel et al., 2020; Patton, 2011), RPP teams require specific competencies and tools to adapt and adjust to each experience. Recording and systematizing evaluation practices is fundamental to the RPP field, ensuring both relevance and rigor. This approach also aligns with the ethical obligation to disrupt historical power dynamics, promoting more equitable and effective practices.