The question, ultimately, is which children to leave behind when the testing starts.
Members of the governing board that oversees the National Assessment of Educational Progress, meeting here in closed session earlier this month, grappled with how to report 2002 reading results for states that have seen large changes in the percentage of students excluded from the exam.
The concern is that high exclusion rates, or large swings in those rates, may affect the accuracy of NAEP results, as well as their comparability across states.
“This is a longtime discussion,” said Darvin M. Winick, the chairman of the National Assessment Governing Board, or NAGB. “The question is simple: Do you help your scores by excluding certain kids? It’s no more complicated than that.”
Widely known as “the nation’s report card,” NAEP is the leading national assessment of what samples of students know and can do in reading, mathematics, and other academic subjects.
The federal “No Child Left Behind” Act of 2001 requires all states to participate in NAEP reading and math tests in grades 4 and 8. It’s widely anticipated that results from those tests will be used as an informal benchmark to judge the rigor of states’ own assessments and standards.
Arnold A. Goldstein, a NAEP project officer at the National Center for Education Statistics, which manages NAEP for the U.S. Department of Education, presented preliminary data during an open portion of the board’s March 6-8 meeting here. His figures showed that between 1998 and 2002, exclusion rates on NAEP reading tests went up in some states and dropped in others, by as much as 7 percentage points and 9 percentage points, respectively. In general, increases in exclusion rates were correlated with increases in NAEP reading scores at the state level.
Mr. Winick said the board was exploring what steps it could or should take, but “there were no conclusions.” The board’s executive committee was given authority to deal with the issue before the May meeting, if necessary.
State and national results for the 2002 NAEP reading assessment are to be released in mid-June. For the first time, those results will be based on a sample of students who took the tests both with and without accommodations.
A puzzling trend
Under NAEP guidelines, schools may exclude certain students with disabilities or limited English proficiency if officials deem them unable to participate meaningfully in the assessment.
Since the mid-1990s, NAEP has offered such students a wide range of accommodations, including extra test-taking time, in an effort to reduce exclusion rates. Despite those efforts, exclusion rates vary widely across states and have been rising in a number of them, although it’s not clear why.
One reason may be that state policies for including students with disabilities or limited English skills in their testing programs are in flux, as a result of federal laws that require states to test all students.
Under NAEP policy, students with disabilities must be given the accommodations stipulated in their individualized education programs. At the same time, however, NAEP does not permit its reading assessment to be read aloud to a student, and it does not provide an alternate assessment for students who cannot take part in the regular exam.
Both practices, reading tests aloud and offering alternate assessments, have become more common on state tests in a number of states. As a result, when students are offered accommodations on state tests that are not available on NAEP, exclusion rates on NAEP may rise.
In November 2000, the NAEP governing board voted to flag or otherwise note any state or national sample with a change in exclusion rates of 3 percentage points or more from a previous assessment. The board indicated that score changes “need to be interpreted in light of these exclusion-rate changes.”
But in March 2001, the board rescinded that policy after NCES officials reported that they could not establish any precise point at which changes in exclusion rates would have a significant impact on average test scores.
Now, the board may be forced to reconsider its position yet again.
Although exclusion rates are a problem in only a few states, Mr. Goldstein of the center for education statistics said, with the increased focus on NAEP, “we may have to do something different this time.”
He added that the issue appears to be bigger in reading than in math, where exclusion rates on the 2000 NAEP exam were relatively consistent across states and at a “pretty low level.”
“The exclusion rate has remained a problem in a few states, probably through no fault of their own,” said Mr. Goldstein, who attributed the problem largely to differences in accommodation policies between NAEP and state tests.
But Richard G. Innes, an education activist in Villa Hills, Ky., has suggested that some states are benefiting “greatly and unfairly” in state-to-state comparisons of NAEP results because of differences in their exclusion rates.
U.S. history sooner?
In other action at its meeting this month, the governing board’s assessment and development committee recommended delaying a world history test for 12th graders until 2010 and moving U.S. history tests in grades 4, 8, and 12 from 2010 to 2006. The full board will not vote on the changes until its May meeting.
The board had scheduled the first test of world history for 2006, but became concerned that it lacked enough information about the content of such courses, when they are taught, and how many students take them. The board has commissioned a study of world history instruction and assessment, which should be ready for its May or August meeting.
Meanwhile, committee members expressed support for testing U.S. history sooner than originally planned because of heightened interest in the subject, which was last tested in 2001. Before deciding to move up the exam date, Mr. Winick said, the board would have to investigate the costs.
Board members also voted to delay until their August meeting final consideration of a framework for developing the background questionnaires that accompany NAEP tests.
The No Child Left Behind law for the first time gave the board final authority to approve the background questionnaires. The board is interested in shortening them so that they take less time to complete and focus on information that is needed to report NAEP results accurately or that has a well-established relationship to student achievement, such as how much reading students do at home.
John H. Stevens, a member of the NAEP board and the chairman of the ad hoc committee that is working on the framework for background questions, said “the importance of getting this right, or as near to right as we can, takes precedence over urgency.”
The board also is exploring whether it can increase the accuracy of NAEP results for subgroups of students, such as minority youngsters, by increasing sample sizes and, if so, how. NAEP now samples about 2,500 students per state in each grade and subject tested.
Vol. 22, No. 27, Page 7 - © 2003 Editorial Projects in Education