The new assessments are brief, trustworthy, and easy to administer. Given to all kindergartners through third-graders a few times a year, they allow teachers to identify which students need extra help. They take only five to ten minutes per child and can typically be given by classroom, reading, or special education teachers or aides. Once identified, these students can receive the assistance they need, and the downward spiral that results from weak early reading skills can be averted.
How do they work?
The key to our new ability to predict which children are likely to have problems in learning to read is the research finding that almost all struggling readers have problems with phonemic awareness — identifying and being able to manipulate the sounds in words (Torgesen, 1998). Not surprisingly, given their troubles with the phonological features of language, these children also have difficulty grasping the alphabetic principle and are slow to build up a sight vocabulary, meaning words that they can read automatically without sounding them out. Building on these highly consistent findings, researchers have found that by midway through kindergarten (assuming prereading skills are being taught), knowledge of letter names predicts future reading ability. And by first grade, letter-sound knowledge is highly predictive.
How accurate are they?
Just how accurate are these early assessments? Accuracy varies by instrument. Rather than reviewing several assessments, let's look at the average predictive power of assessing kindergartners' letter identification skills (Snow et al., 1998). A meta-analysis of 20 studies that measured 11 different possible predictors of reading difficulties (including receptive vocabulary, expressive language, concepts of print, and verbal memory of stories or sentences) found that letter identification was the strongest single indicator of future reading. The mean correlation between letter identification in kindergarten and reading scores in grades one through three was .52. In fact, letter identification was almost as good a predictor by itself as an entire reading-readiness test (which includes a whole host of reading skills). But what does a moderately strong correlation like this mean when it comes to designating children at risk or not? Another study (Snow et al., 1998) used 1,000 kindergartners' letter identification skills to find out. The researchers considered their predictions accurate if the children who were designated at risk in kindergarten were then in the bottom 20 percent on teachers' ratings in first grade.
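For readers unfamiliar with correlation coefficients, the short sketch below shows what a figure like .52 measures: how tightly one set of scores tracks another, with the squared correlation giving the share of variance in later reading scores that the predictor accounts for. The scores here are made up for illustration; the .52 figure comes from the meta-analysis, not from this toy sample.

```python
# Illustration only: hypothetical scores, not data from the meta-analysis.
from statistics import correlation  # Python 3.10+

letters_named = [4, 9, 11, 15, 18, 22, 25, 26]    # kindergarten letter identification
grade1_score  = [30, 52, 38, 61, 47, 44, 70, 58]  # first-grade reading score

r = correlation(letters_named, grade1_score)
print(f"r = {r:.2f}")      # correlation for this toy sample
print(f"r^2 = {r*r:.2f}")  # share of reading-score variance explained
```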
To begin with, the researchers tested a strict letter-identification cutoff; they designated students at risk only if they fell in the bottom 10 percent. According to the first-grade teachers' ratings, this strict cutoff correctly identified 83.2 percent of children. Since there were 1,000 children in the study and the bottom 10 percent were designated at risk, 100 children were so designated. Of these, 63 were correctly identified (meaning they were in the bottom 20 percent according to teachers' ratings in first grade), but 37 were false alarms (meaning they were not in the bottom 20 percent). Of the 900 children designated not at risk, 769 were correctly identified, but 131 were misidentified (meaning they were in the bottom 20 percent in first grade).
Believing that too many children who did end up having reading difficulties were missed with the strict cutoff, the researchers also examined a more lenient letter-identification cutoff. In this second analysis, they designated the bottom 25 percent of kindergartners at risk. Of these 250 children, 118 were correctly identified, but 132 were false alarms. Of the 750 children designated not at risk, 677 were correctly identified, but 73 were missed. Overall, the more lenient cutoff meant that the overall accuracy of the prediction was reduced slightly (79.5 percent of children were correctly identified) — but the percentage of struggling readers who were missed dropped from 15 percent to 11 percent.
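For readers who want to check the arithmetic, the short sketch below recomputes the accuracy figures for both cutoffs from the counts reported above; the function and its layout are our own illustration, not part of the study.

```python
# Recomputing the reported accuracy figures from the study's counts.
def screening_summary(label, hits, false_alarms, correct_rejections, misses):
    total = hits + false_alarms + correct_rejections + misses
    accuracy = 100 * (hits + correct_rejections) / total
    print(f"{label}: {accuracy:.1f}% of {total} children correctly classified; "
          f"{misses} of {hits + misses} struggling readers missed")

# Strict cutoff: bottom 10 percent of 1,000 kindergartners designated at risk.
screening_summary("Strict", hits=63, false_alarms=37,
                  correct_rejections=769, misses=131)  # -> 83.2% accuracy

# Lenient cutoff: bottom 25 percent designated at risk.
screening_summary("Lenient", hits=118, false_alarms=132,
                  correct_rejections=677, misses=73)   # -> 79.5% accuracy
```

The trade-off is visible in the output: the lenient cutoff misses fewer struggling readers, at the cost of more false alarms and slightly lower overall accuracy.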
Obviously, educators have to make a conscious choice when they decide what percentage of children to intervene with. Intervening with only the bottom 10 percent means that many at-risk children will not be served. And intervening with the bottom 25 percent means that many children who are not at risk will be served.
No assessment can completely overcome these potential errors in identifying at-risk children. Even with the best assessment, some children who will go on to have reading problems are not identified, and some who will not are flagged anyway. But there are strategies to greatly reduce the errors in identification. To minimize under-identification, schools are encouraged to screen all children three times per year, starting in mid-kindergarten. (Assessments at the very beginning of kindergarten tend to be unreliable because students may lack skills simply because they haven't been taught, not because they will have trouble with the concepts once they have been presented in the regular classroom setting.) To minimize over-identification, assessments often come with multiple forms so that teachers can confirm the results (and be sure that the child was not just having a bad day) before the intervention begins. Given the importance of addressing skill deficits, over-identification of children may be the best policy. For not-at-risk students, the intervention will simply reinforce their skills, acting like an insurance policy against future problems with reading. And, with adequate progress monitoring, such students will test out of the intervention quickly.
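To make the confirmation step concrete, here is a minimal sketch of a screen-then-confirm decision rule; the cutoff value, function name, and two-form check are our own hypothetical illustration, not a procedure prescribed by any particular assessment.

```python
# Hypothetical screen-then-confirm rule: flag a child for intervention only
# if scores on two alternate forms of the assessment both fall below the
# cutoff, so a single bad day does not trigger over-identification.

AT_RISK_CUTOFF = 25  # illustrative raw-score cutoff, not a published norm

def flag_for_intervention(form_a_score: int, form_b_score: int) -> bool:
    """Flag only when both alternate forms agree the child is below cutoff."""
    return form_a_score < AT_RISK_CUTOFF and form_b_score < AT_RISK_CUTOFF

print(flag_for_intervention(18, 22))  # True: both forms below the cutoff
print(flag_for_intervention(18, 31))  # False: form B suggests a bad day
```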
Fortunately, predictions of which students are at risk for reading failure become even more accurate by the end of first grade. This is what one would expect given that, starting at the end of first grade, students' word-reading ability can be assessed directly instead of indirectly through such prereading skills as letter naming and phoneme segmentation. While it is clearly true that early word-reading ability is a strong predictor of later word-reading ability, very brief measures of oral reading fluency are also a strong predictor, and thus a good screening measure, for difficulties in reading comprehension. In fact, Fuchs, Fuchs, Hosp, and Jenkins (2001) reported evidence that a very brief measure of oral reading fluency was a better predictor of performance on a reading comprehension outcome measure than was a brief measure of reading comprehension itself. In this study, with middle and junior high school students with reading disabilities, the correlation between oral reading fluency and the reading comprehension measure was a very high .91.
More recently, researchers comparing third-graders' performance on the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) measure of Oral Reading Fluency to their scores on state assessments of reading comprehension have found correlations of .70 with the Florida Comprehensive Assessment Test (Buck and Torgesen, 2003) and .73 with the North Carolina end-of-grade assessment (Barger, 2003).