Paper Summary

Does Use of Text-to-Speech Correspond to Quality Measurement During Large-Scale Math Testing? (Poster 12)

Sat, April 13, 11:25am to 12:55pm, Pennsylvania Convention Center, Floor: Level 200, Exhibit Hall A

Abstract

Accessibility tools are commonly embedded within large-scale computer-based testing programs with the intent of fostering greater test accessibility and fairness and, correspondingly, contributing to score comparability. Such tools are deemed necessary because, without them, certain students are believed to experience unique barriers during testing that deny them appropriate opportunities to show what they know and can do. However, mere access to such tools does not necessarily correspond to relevant use by the students who need them. Unless students who experience the relevant barriers actually activate the tools when appropriate and needed, it is questionable whether the tools' intended purpose will be met. Empirical work is therefore necessary to identify whether these tools are indeed used as intended and whether, when they are used, score comparability is evident. Test process data can facilitate precise investigations of student use and score comparability. One accessibility tool considered necessary for certain students, on tests not designed to measure reading skills, is text-to-speech, which permits students to have test questions read aloud by the computer. For the current study, process data from the 2017 National Assessment of Educational Progress (NAEP) 8th grade math test were accessed to explore student use of text-to-speech at the item level. Differential item functioning (DIF) analyses were also conducted to examine relationships between text-to-speech use and score comparability among students for whom its use is often recommended: specifically, students with disabilities and English learners. Preliminary results will be presented.
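As context for the DIF analyses mentioned in the abstract, the sketch below illustrates one common approach to flagging uniform DIF: a logistic-regression model comparison on simulated data. This is a minimal illustration under stated assumptions, not the study's NAEP procedure, and the variable names (total_score, group, item) are hypothetical.

```python
# Illustrative sketch of logistic-regression DIF detection (not the study's method).
# Idea: flag uniform DIF when group membership still predicts item success
# after matching examinees on an overall-score proxy for ability.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 2000
total_score = rng.integers(0, 41, size=n)          # matching (ability proxy) variable
group = rng.integers(0, 2, size=n)                 # 0 = reference group, 1 = focal group
logit = -4 + 0.2 * total_score - 0.5 * group       # built-in group effect = uniform DIF
item = rng.binomial(1, 1 / (1 + np.exp(-logit)))   # simulated 0/1 item responses

df = pd.DataFrame({"item": item, "score": total_score, "group": group})

# Baseline model: item response explained by the matching score only
base = sm.Logit(df["item"], sm.add_constant(df[["score"]])).fit(disp=0)
# Augmented model: add group membership to test for uniform DIF
aug = sm.Logit(df["item"], sm.add_constant(df[["score", "group"]])).fit(disp=0)

# Likelihood-ratio test: a significant improvement flags uniform DIF on this item
lr_stat = 2 * (aug.llf - base.llf)
print(f"LR = {lr_stat:.2f}, p = {chi2.sf(lr_stat, df=1):.4f}")
```

A significant likelihood-ratio statistic suggests that, even after matching on overall score, group membership predicts success on the item, which is the kind of score-comparability violation DIF analyses are designed to detect.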

Authors