
ReaderBench Model 1

General Description

ReaderBench Model 1 is an ensemble (formed by averaging predicted quality scores) of the following six sub-models:

Full details of each sub-model are available in the links above.
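As a simple illustration of how the ensemble combines its sub-models, the sketch below averages predicted quality scores across sub-models. This is a minimal sketch assuming each sub-model outputs one predicted score per writing sample; the array shapes and the example values are hypothetical and not taken from the package.

```python
import numpy as np

def ensemble_score(sub_model_predictions):
    """Average predicted quality scores across sub-models.

    sub_model_predictions: array-like of shape (n_sub_models, n_samples),
    where each row holds one sub-model's predicted quality scores.
    """
    predictions = np.asarray(sub_model_predictions, dtype=float)
    # The ensemble prediction is the unweighted mean across sub-models.
    return predictions.mean(axis=0)

# Hypothetical example: three writing samples scored by six sub-models.
preds = np.random.default_rng(0).normal(loc=500, scale=50, size=(6, 3))
print(ensemble_score(preds))
```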

All six sub-models used ReaderBench scores on 7-minute narrative writing samples ("I once had a magic pencil and ...") collected from students in the fall, winter, and spring of Grades 2-5 (Mercer et al., 2019) to predict holistic writing quality on the samples (Elo ratings calculated from paired comparisons).
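For readers unfamiliar with deriving quality scores from paired comparisons, the sketch below shows the general idea of computing Elo ratings from pairwise judgments. The update rule, K-factor, and starting rating here are generic Elo conventions, not the specific procedure used in Mercer et al. (2019).

```python
def elo_ratings(comparisons, k=32, initial=1500.0):
    """Estimate Elo ratings from paired comparisons.

    comparisons: iterable of (winner_id, loser_id) pairs, where the
    'winner' is the sample judged to be of higher quality.
    """
    ratings = {}
    for winner, loser in comparisons:
        r_w = ratings.setdefault(winner, initial)
        r_l = ratings.setdefault(loser, initial)
        # Expected probability that the winner beats the loser.
        expected_w = 1.0 / (1.0 + 10 ** ((r_l - r_w) / 400.0))
        # Winner gains, loser loses, in proportion to the surprise of the outcome.
        ratings[winner] = r_w + k * (1.0 - expected_w)
        ratings[loser] = r_l - k * (1.0 - expected_w)
    return ratings

# Hypothetical example: judges compared three writing samples pairwise.
print(elo_ratings([("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]))
```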

More details on the sample are available in Mercer et al. (2019).

Mercer, S. H., Keller-Margulis, M. A., Faith, E. L., Reid, E. K., & Ochs, S. (2019). The potential for automated text evaluation to improve the technical adequacy of written expression curriculum-based measurement. Learning Disability Quarterly, 42, 117-128. https://doi.org/10.1177/0731948718803296

This scoring model was evaluated in the following publication:

Keller-Margulis, M. A., Mercer, S. H., & Matta, M. (2021). Validity of automated text evaluation tools for written-expression curriculum-based measurement: A comparison study. Reading and Writing: An Interdisciplinary Journal, 34, 2461-2480. https://doi.org/10.1007/s11145-021-10153-6
link to pre-print of accepted article
