Objectives of the presentation:
Outline the research studies undertaken on behalf of the California Department of Education’s Child Development Division (CDD) to develop, validate, and calibrate the DRDP©-2010 assessment system. Demonstrate an iterative process of assessment development supported by the application and interpretation of item response models and data.
Theoretical framework:
The validity argument for the DRDP© is based on the assessment’s content, its internal structure, the underlying response processes, and the relationships between DRDP© variables and other variables of interest (AERA, APA, & NCME, 1999). The systematic collection and analysis of evidence to support the validity of the assessments follows best assessment practice, which is especially important when items and assessments are developed to measure key dimensions of development in young children. Item response modeling techniques support the development and calibration of valid and reliable measurement scales for the key dimensions of development defined by California’s early-learning foundations.
Methods:
We worked with researchers in child development, teachers and practitioners in early childhood education, and experts in psychometric assessment to develop, panel, and pilot-test observational assessment items measuring separate dimensions of development. We then field-tested assessments composed of these items; conducted studies of the assessments in which data on collateral measures were also collected for validation purposes; conducted teacher interviews and think-aloud studies to examine item response-process issues; calibrated scales for the separate dimensions of development using Rasch Partial Credit Model techniques; and examined relationships among DRDP© dimensions and corresponding collateral measures.
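As an illustration of the calibration model named above (not the DRDP© implementation itself, whose software and parameter estimates are not described here), the Rasch Partial Credit Model expresses the probability of each ordered rating category as a function of a person's ability and the item's step difficulties. The function name and parameterization below are hypothetical; the formula is the standard PCM category-probability expression.

```python
import math

def pcm_probabilities(theta, deltas):
    """Category response probabilities under the Rasch Partial Credit Model.

    theta  : person ability (in logits)
    deltas : step difficulties delta_1..delta_m for an item with m + 1
             ordered categories (0..m)
    Returns one probability per category; the probabilities sum to 1.
    """
    # Cumulative sums of (theta - delta_j); category 0 has an empty sum (= 0).
    cumulative = [0.0]
    for d in deltas:
        cumulative.append(cumulative[-1] + (theta - d))
    numerators = [math.exp(c) for c in cumulative]
    total = sum(numerators)
    return [n / total for n in numerators]
```

For example, an observational item rated 0-3 with step difficulties (-1, 0, 1) yields a symmetric category distribution for a person at theta = 0, and the probability of the highest category rises as theta increases.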
Data sources:
Teacher and practitioner item-paneling records, DRDP© assessment data from field-testing, think-aloud teacher interview data, DRDP© assessment data from calibration study, concurrent/convergent validity data collected for collateral measures such as Bayley, Battelle, Creative Curriculum, and Ages & Stages Questionnaire.
Results:
Results presented will include how data from earlier stages of the overall study design (item paneling, field-testing, and teacher interviews) were used to inform successive rounds of item revision and assessment refinement. Results will also be presented from examinations of scale characteristics, model fit, and differential item functioning (DIF); from the linking and equating of the separate DRDP© age-level assessments to create scales for the separate dimensions that measure consistently across age levels; and from examination of the relationship between DRDP© results and results obtained from other measures.
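To illustrate the general idea behind linking separate age-level forms onto a common scale (a minimal sketch of common-item, or anchor-item, linking in the Rasch framework, not the DRDP©'s actual equating procedure), one classic approach shifts one form's item difficulties by the mean difference observed on items shared between forms. The function below is hypothetical and shows only the mean-mean linking constant:

```python
def mean_mean_linking_constant(anchor_deltas_form_a, anchor_deltas_form_b):
    """Mean-mean linking constant for two Rasch-calibrated forms.

    Both arguments are lists of difficulty estimates (in logits) for the
    same anchor items, estimated separately on form A and form B.
    Adding the returned constant to every form-B difficulty places
    form B on form A's scale.
    """
    mean_a = sum(anchor_deltas_form_a) / len(anchor_deltas_form_a)
    mean_b = sum(anchor_deltas_form_b) / len(anchor_deltas_form_b)
    return mean_a - mean_b
```

For instance, if the shared items average 0.5 logits harder in form B's separate calibration, the constant is -0.5, and subtracting that offset from form B's difficulties aligns the two scales.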
Significance:
DRDP© provides a compelling example of how best practice in assessment development can be realized in the important and challenging area of early childhood education, in which most of the children being assessed cannot be expected to participate in the standard kinds of direct assessment appropriate for older children. This work exemplifies how readily item response modeling can be adapted to assessment based on teacher observations of children. Outlining the series of rigorous research studies necessary to develop and validate a state-wide assessment program demonstrates the sophisticated qualitative and quantitative data-collection and data-analytic methodologies required to support the validity and reliability of such a program.