Individual Submission Summary

Analyzing the effectiveness of fluency benchmarking methods

Thu, March 29, 3:00 to 4:30pm, Fiesta Inn Centro Histórico, Floor: Lobby Floor, Room E

Proposal

Many governments and organisations set oral reading fluency benchmarks to track the progress of children’s reading in the early grades. In most cases, the benchmark is derived from an analysis of assessment data. The aim of this paper is to investigate methods for setting oral reading fluency benchmarks against comprehension standards. We analyzed 75 datasets with reading fluency and comprehension data from nine languages in seven countries. In total, we analyzed 26,867 Early Grade Reading Assessments (EGRA), in which students are given a short passage to read and asked comprehension questions about that passage.
We addressed three broad classes of research questions:
1 How do different methods of estimating a benchmark compare?
We compare three approaches to setting benchmarks. The first method takes the mean oral reading fluency of students with 80% comprehension or higher. The second uses the median oral reading fluency of the same group of children. The third fits a logistic regression model predicting whether a student's comprehension is above or below the 80% threshold from that student's fluency. The three methods are compared in terms of the level of the resulting benchmark and its reliability and precision.
Another comparison is between benchmarks set against passage comprehension questions and those set against an independent measure of sentence-reading comprehension. We also analyze benchmarking when children are given a 60-second time limit to read the passage versus passage reading without a time limit.
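As an illustration of how the three estimators differ, the sketch below computes each from paired fluency (words correct per minute) and comprehension (proportion of questions correct) scores. The function names and the plain Newton–Raphson logistic fit are our own illustrative choices, not the paper's implementation, which may include covariates or survey weights.

```python
import math
from statistics import mean, median


def benchmark_mean(fluency, comprehension, threshold=0.8):
    # Method 1: mean fluency of students at or above the comprehension threshold.
    return mean(f for f, c in zip(fluency, comprehension) if c >= threshold)


def benchmark_median(fluency, comprehension, threshold=0.8):
    # Method 2: median fluency of the same group of children.
    return median(f for f, c in zip(fluency, comprehension) if c >= threshold)


def benchmark_logistic(fluency, comprehension, threshold=0.8, iters=25):
    # Method 3: logistic regression of the binary outcome
    # (comprehension >= threshold) on fluency. The benchmark is the
    # fluency at which the predicted probability crosses 0.5.
    y = [1.0 if c >= threshold else 0.0 for c in comprehension]
    # Standardise fluency so the Newton iterations are well conditioned.
    mu = mean(fluency)
    sd = (sum((f - mu) ** 2 for f in fluency) / len(fluency)) ** 0.5
    xs = [(f - mu) / sd for f in fluency]
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, t in zip(xs, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            w = p * (1.0 - p)
            g0 += p - t          # gradient of the negative log-likelihood
            g1 += (p - t) * x
            h00 += w             # Hessian entries
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01
        b0 -= (h11 * g0 - h01 * g1) / det
        b1 -= (h00 * g1 - h01 * g0) / det
    # p = 0.5 where b0 + b1 * (x - mu) / sd = 0; back-transform to raw units.
    return mu - sd * b0 / b1
```

The logistic-regression benchmark is the fluency at which a student has an even chance of meeting the 80% comprehension standard, which need not coincide with the mean- or median-based cutoffs for the group already meeting it.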
2 What factors determine the level and precision of a benchmark estimate?
We analyze the factors influencing the level of a fluency benchmark, including characteristics of the sample (size, grade, and average student reading ability), of the reading passages (difficulty, reliability, alternate versions), and of the language (word and sentence length, transparent versus opaque orthography, and agglutinating versus non-agglutinating languages). We find that student ability and word length are particularly important for benchmark levels, while sample size and comprehension-question reliability are important for benchmark precision.
3 Are fluency benchmarks useful indicators of comprehension? If so, in which languages and at which stages of reading development?
We examine whether oral reading fluency is a good proxy for comprehension across all languages assessed and whether the use of reading accuracy allows for better prediction of comprehension.
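One simple way to gauge proxy strength per language and dataset is the correlation between fluency and comprehension scores; the sketch below is illustrative and the paper's actual analysis may rely on different statistics.

```python
from statistics import mean


def pearson_r(xs, ys):
    # Pearson correlation between two equal-length score lists,
    # e.g. oral reading fluency vs. comprehension proportion correct.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```

Comparing this statistic for fluency alone against a model that also uses reading accuracy is one way to ask whether accuracy improves the prediction of comprehension.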
We conclude with recommendations about how to set reliable benchmarks, how to set benchmarks at early stages of reading development, and how to assess comprehension reliably.
