Good News and Bad News in Disaggregated Subgroup Reporting to the Public on 2005–2006 Assessment Results

Technical Report 52

Martha Thurlow, Chris Bremer, Debra Albus

December 2008

All rights reserved. Any or all portions of this document may be reproduced and distributed without prior permission, provided the source is cited as:

Thurlow, M., Bremer, C., & Albus, D. (2008). Good news and bad news in disaggregated subgroup reporting to the public on 2005–2006 assessment results (Technical Report 52). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.


Table of Contents

Executive Summary
Overview
Method
Results
     Characteristics of State Assessment Systems
     States That Reported Disaggregated Regular Assessment Data for Students with Disabilities
     Unique States that Reported Disaggregated Regular Assessment Data for Students with Disabilities
     States that Reported Disaggregated Alternate Assessment Data for Students with Disabilities
     Assessment Participation in 2005–2006
     Assessment Performance in 2005–2006
     Assessment Performance: Trends
     Gap Comparisons from 2004–2005 to 2005–2006
     Other Information Collected for 2005–2006
     Click Analysis of Web-based Reporting
Summary and Discussion
References
Appendices


Executive Summary

This is the tenth report by the National Center on Educational Outcomes analyzing the public reporting of disaggregated data for students with disabilities. This analysis, for school year 2005–2006, also marks the fourth data cycle since the passage of the No Child Left Behind Act (NCLB) of 2001. In this tenth report, we present both the good news and the bad news for 2005–2006 reporting and summarize other observed trends.

For 2005–2006, a positive finding was that more states reported disaggregated data for students with disabilities for tests within and outside state accountability systems. In 2005–2006, 39 states reported such data, up from 36 states in 2004–2005. However, for state tests within accountability systems, only 39 states reported both participation and performance data this year, compared to 44 states last year. Part of this decline is due to a change in how this report credits data from different sources. In previous report cycles, data found only in State Performance Plans (SPPs) or Annual Performance Reports (APRs) were accepted as equivalent to regular public reports. Starting with the current cycle, such data are no longer credited. This change reflects the desire to determine whether states are reporting data for students with disabilities in the same way and with the same frequency as they report for students without disabilities.

For unique states, only two reported participation and performance on regular assessments, and one reported these data for its alternate assessment. This shows backsliding compared to six unique states reporting on regular assessments and three reporting for alternates last year. Only one unique state reported data by referring to APRs posted online for regular and alternate assessments. Thus, even if this report had credited APRs as equivalent to regular public reports as it did last year, there still would have been fewer unique states reporting data, compared to the previous year.

For the school year 2005–2006, our findings indicated fewer regular states publicly reporting disaggregated participation and performance data for all of their alternate assessments, reversing an upward trend seen in previous years. For 2005–2006, only 28 states fully reported these data, compared to 42 for the previous year. As with regular assessment reporting, part of this decline can be attributed to the change in how SPP and APR data were credited. Absent that change, 39 states would have been credited with reporting data in the 2005–2006 cycle, a smaller decline.

Other areas also revealed a need for improvement. For accommodations reporting, fewer regular states reported information about accommodations use for students with disabilities on regular state tests in 2005–2006. However, the good news is that for those states that are continuing to report accommodations data publicly, all but one state in 2005–2006 reported both participation and performance by grade and content area when accommodations were used.

For performance, the analyses across 2004–2005 and 2005–2006 showed that average (mean) gaps across states for reading and mathematics did not change markedly on the whole. For elementary and middle school reading, the average performance gap between students with disabilities and general education students (which, depending on the state, may mean all students or only students without disabilities) varied by only a few percentage points. However, at the high school level the average gap in percent proficient in reading widened by 13 percentage points. For mathematics, the average gap sizes at the elementary, middle, and high school levels each changed by less than 3 percentage points.

A trend observed in this year’s report is that states are decreasing their use of augmented norm-referenced/criterion-referenced (NRT/CRT) tests, which dropped from 17% to 9% of assessments. A new analysis this year looked at the percentage of assessments in accountability systems, by type of test, for which disaggregated data were reported. For 2005–2006, all NRT and augmented NRT/CRT assessments had disaggregated participation and performance data reported, while these data were fully reported for only 77% of CRTs and 71% of high school exit tests.


Overview

The 2005–2006 school year was the seventh annual reporting period for which states were required by the Individuals with Disabilities Education Act (IDEA) to report on the performance of students with disabilities on standards-based assessments. It is the fourth reporting period since the enactment of the No Child Left Behind Act (NCLB). Starting with the reporting of 2005–2006 data, states are required by NCLB to test in all grades 3 through 8, and once in grade 10, 11, or 12. This report is the tenth in a series of NCEO reports documenting state public reporting practices.

Since the passage of NCLB, signed by President George W. Bush on January 8, 2002, the number of states that publicly reported disaggregated participation and performance data for students with disabilities for all of the general assessments within accountability systems has increased. Just after passage of the law, the number increased from 28 states for school year 2000–2001 to 35 states in 2001–2002. This number changed little in the subsequent three years: 36, 35, and 36 in 2002–2003, 2003–2004, and 2004–2005, respectively (Klein, Wiley, & Thurlow, 2006; Thurlow & Wiley, 2004; Thurlow, Wiley, & Bielinski, 2003; Wiley, Thurlow, & Klein, 2005; VanGetson & Thurlow, 2007).

The number of states reporting disaggregated participation and performance data for all of their alternate assessments improved over the past few years. Although only 22 states reported this information in 2001–2002, 33 states did so in 2003–2004 (Klein et al., 2006; Thurlow & Wiley, 2004). Continuing this trend, 42 states reported both disaggregated participation and performance data for all of their alternate assessments in 2004–2005 (VanGetson & Thurlow, 2007), another large jump in meeting reporting requirements. (The report for 2004–2005 reported 41 states in the text, but Appendix D showed 42 states reporting participation and performance data for all of their alternate assessments.)

Each year when we examine states’ public reporting practices, it is necessary to reassess the ways in which we credit states with reporting data publicly. Since 1997, states have been required to submit to the U.S. Department of Education a Performance Report that addressed various indicators for school-age students with disabilities. In 2004, states were required for the first time to report their Annual Performance Report data to the public in some way that would communicate clearly. When we conducted our state analysis in previous years, the APR data on Indicator 3 (Assessment) were counted as a public report. Yet those data included only students with disabilities, and did not reflect the principle that data on students with disabilities are to be reported in the same way and with the same frequency as the assessment data for all students. Given states’ increased ability to disaggregate data for students with disabilities and to report them alongside data for all other students, it was decided that the federally required APRs would not be counted as regular public reporting. It would be a backward step for states to use their APRs as their reporting mechanism to the public because they do not meet the original criteria set for public reporting in IDEA 1997.


Method

In December 2006, project staff began searching state education Web sites for posted reports with disaggregated data for students with disabilities for school year 2005–2006. States included the 50 "regular" states and 11 "unique" states (American Samoa, Bureau of Indian Affairs, Commonwealth of the Northern Mariana Islands, U.S. Department of Defense Education Activity, District of Columbia, Federated States of Micronesia, Territory of Guam, Republic of Palau, Commonwealth of Puerto Rico, Republic of the Marshall Islands, and U.S. Virgin Islands). During this time, information was collected both on the actual participation and performance data reported by states for students with disabilities and on descriptive information about how the states reported those data. The data collection included all regular and alternate state assessments within and outside accountability systems, with the exception of tests designed specifically for bilingual students or English language learners.

In February 2007, following the collection of data, summary tables were created for the verification process with states. These summaries included only the descriptive information on how each state reported participation and performance. See Appendix A for a sample letter and summary table used in the verification process with state assessment directors.

The process of verifying the descriptive reporting information found on state Web sites occurred in two waves between May and July of 2007. In the first wave, letters and summary tables were mailed to state assessment directors requesting help with verification of the data. In this first wave, contact was established with state assessment directors or their office staff in 32 regular states and 2 unique states. In the second wave, letters were sent to all state directors of Special Education along with original or updated data summaries reflecting changes directed by those states with which we had contact in the first wave (see Appendix B). In this second wave, contact was established with 13 regular states and 1 unique state. Then, from August to October, staff completed data entry and double-checked the data for accuracy.

In past reports, we included public reporting of data on state tests administered to students with disabilities who were also English language learners or bilingual students. Examples of these tests are the SABE/2 in California, the Reading Proficiency Test in English (RPTE) in Texas, and the IMAGE in Illinois. This year, these assessments were not included in the search for data. They will be included in a future report on public reporting for English language learners with disabilities.

We further note that the definition of what counts as public reporting changed in this report from previous years. This year, state Annual Performance Reports (APRs) and State Performance Plans (SPPs) were not counted as regular public reports, that is, the reports a state typically disseminates to meet the requirement of reporting data on students with disabilities in the same manner as it reports data for all students.

The definitions of general education students and students with disabilities in the reported data did not change from previous years. When general student data are presented in this report, that population might include the total of all students tested or might have been disaggregated further as students without disabilities, depending on the state. For consistency, this report uses the single term "general education student" for both groups as a contrast to the data reported on students with disabilities; this should be considered in interpreting the data. Similarly, the term "students with disabilities" sometimes includes only students with IEPs and sometimes includes both students with IEPs and students with 504 Plans. This also varies by state and should be considered.


Results

Characteristics of State Assessment Systems

State-mandated general assessments for 2005–2006 are listed in Appendix C. The list includes all 50 regular states and the 11 unique states, and includes information on the name of each test, grades and content areas tested, whether the state has publicly available disaggregated participation or performance data for students with disabilities, and whether the results of these assessments are used for accountability purposes.

For the 50 regular states, 101 statewide assessments were identified. Among these were four states using the Iowa Tests of Basic Skills (ITBS), three using the New England Common Assessment Program (NECAP), and two using the TerraNova; all other assessments were unique to a single state. The mean number of assessments per regular state was 2.0, with 29 states using two or more assessments. The largest number of assessments used by a single state was five (Utah). Thirteen assessments were dropped by eleven states (West Virginia dropped three tests). Nine state assessments, two ACT tests (one EXPLORE and one PLAN), two ITBSs, and one TerraNova were dropped, and nine new state assessments were added. Nine states added reporting on a new assessment. Of these new assessments, six were individual state assessments and three were the NECAP noted above, reported by three states (New Hampshire, Rhode Island, and Vermont). All states continued reporting on at least one assessment that was also used in 2004–2005.

Of the 11 unique states, 8 identified the names of the assessments they used in their public reports. For one of the remaining three (Bureau of Indian Affairs), students participate in assessments in their state of residence and thus are included in the reporting systems of the states in which they reside; the other two reported nothing about the assessments used. Across the eight unique states reporting assessments, nine different statewide assessments were used, with one, the Stanford Achievement Test (SAT), used by four unique states. Of the unique states’ assessments, the SAT and the TerraNova (used by one unique state) were also used by one or more regular states. Only one of the eight unique states reporting assessments (Commonwealth of the Northern Mariana Islands) used more than one assessment. One unique state (U.S. Virgin Islands) reported a new assessment; this state did not report any assessments for 2004–2005.

Because few unique states publicly report complete disaggregated assessment data, Figure 1 includes only data from the 50 regular states, and breaks down the 101 state assessments (whether within or outside accountability systems) by type: criterion-referenced tests (CRT), norm-referenced tests (NRT), exit exams used as a requirement for graduation or for earning a particular type of diploma (EXIT), and augmented NRTs with state-developed test items. While some states’ NRTs and CRTs included an EXIT component, tests were classified as EXIT only in cases where a state had a specific assessment that had been designed for establishing fulfillment of high school completion requirements.

Figure 1. Number of Regular Assessments In and Outside Accountability Systems by Test Type (N=101)

Note: Tests are counted by test name. If a state has different names for CRTs by elementary/middle and high school these are counted separately.

Criterion-referenced tests were the most common, representing 64% of the 101 state-administered assessments in 2005–2006. Eleven states reported data for NRTs, nine states reported on augmented (NRT/CRT) assessments, and fifteen states reported for exit tests. Compared to 2004–2005, there was a large decrease in augmented assessments from 17% to 9%.


States That Reported Disaggregated Regular Assessment Data for Students with Disabilities

Figure 2 summarizes reporting of regular assessment data in the 50 regular states for students with disabilities who participated in regular assessments included in the state’s NCLB accountability system. Overall, 39 regular states (78%) reported disaggregated data on students with disabilities for both participation and performance for all regular assessments in the state accountability system. Four states (8%) reported only performance data for all regular assessments, five states (10%) reported disaggregated participation and performance data for some regular assessments, and two states (Hawaii and Wyoming) publicly reported neither disaggregated participation nor disaggregated performance data for any regular assessments. This represents a decrease in public reporting compared to 2004–2005, when 44 states provided data for all NCLB accountability tests.

Figure 2. States that Disaggregated Assessment Results for Students with Disabilities on Regular Assessments in Accountability Systems

Figure 3 shows the same information as in Figure 2, by state. As in the past, there were no identifiable patterns of location for non-reporting or partial reporting states.

Figure 3. States Reporting 2005–2006 Disaggregated Participation or Performance Data for Students with Disabilities on Regular State Assessments in Accountability Systems*

*The figure does not include state APR or SPP data. A broad definition was used to determine whether a state had data—states were included if they had data in any form for each test; these data could be presented for the state as a whole, by grade ranges, or by grade.

Figure 4 shows the prevalence of full reporting of disaggregated data by test type, across the 50 regular states, for those tests within accountability systems. The figure shows that while norm-referenced (NRT) and augmented (NRT/CRT) tests within accountability systems are fully reported, CRT and EXIT assessments lag behind, at 77% and 71% of the assessments, respectively. The reason for this pattern was not probed in our data collection.

Figure 4. Percent of General Assessments in Accountability Systems Reporting Participation and Performance by Test Type



Figure 5 is similar to Figure 3, but shows reporting data for all assessments, including those outside accountability systems. Comparing the two figures, it is clear that full reporting is occurring for a higher proportion of assessments inside accountability systems, compared to those outside these systems.

Figure 5. States reporting 2005–2006 Disaggregated Participation or Performance Data for Students with Disabilities on Regular State Assessments In and Outside Accountability Systems*

1 Mississippi did not report data for one writing test due to Hurricane Katrina.

*The figure does not include state APR or SPP data. A broad definition was used to determine whether a state had data—states were included if they had data in any form for each test; these data could be presented for the state as a whole, by grade ranges, or by grade.

Note: During the verification process one state (Hawaii) had additional information in its APR that would have made a difference in its reported data; both participation and performance data for all regular assessments were reported in its APR.


Unique States that Reported Disaggregated Regular Assessment Data for Students with Disabilities

For 2005–2006, there was some backsliding among the 11 unique states in the public reporting of disaggregated special education data (see Table 1). Although six unique states provided disaggregated data for 2004–2005, five of these six did not do so for 2005–2006. Only 2 of the 11 unique states provided disaggregated data for both participation and performance. Three unique states did not list any assessments used.

Table 1. Unique States Reporting Disaggregated Participation or Performance Data for Students with Disabilities*

Unique States | Participation | Performance
American Samoa | No | No
Bureau of Indian Affairs | No | No
Commonwealth of the Northern Mariana Islands | No | No
Department of Defense Education Activity | No | No
District of Columbia | Yes | Yes
Federated States of Micronesia | No | No
Guam | No* | No*
Palau | No | No
Puerto Rico | Yes | Yes
Republic of the Marshall Islands | No | No
Virgin Islands | No | No

*Reported in the state’s APR or SPP; APR/SPP reporting is not counted here.

 

States that Reported Disaggregated Alternate Assessment Data for Students with Disabilities

Alternate assessment data are presented in Appendix D. All 50 regular states reported using at least one alternate assessment. One state (North Carolina) used four alternate assessments, and eight states (Arizona, Louisiana, Minnesota, Montana, Oregon, South Carolina, South Dakota, and Virginia) used two alternate assessments each. The remaining 41 regular states used one alternate assessment each. The mean number of alternate assessments per state was 1.22. One of the states using two alternate assessments (South Dakota) used only one of them for accountability purposes. Otherwise, all alternate assessments reported were used for accountability purposes, for at least some grades and content areas. Figure 6 shows the percent of states that disaggregated different types of data for students with disabilities on alternate assessments.

Figure 6. States that Disaggregated Alternate Assessment Results for Students with Disabilities*

*The figure does not include state APR or SPP data.

For unique states, 3 of 11 reported using one alternate assessment and all 3 said they used this assessment for accountability purposes. However, only one unique state (Puerto Rico) reported disaggregated data for an alternate assessment for both participation and performance (see Table 2). None of the remaining 10 unique states reported disaggregated data for alternate assessments in their state’s regular reports (not including state Annual Performance Reports or State Performance Plans).

Table 2. Unique States that Reported Disaggregated Participation and Performance Data for Students with Disabilities on Alternate Assessments

Unique States | Participation | Performance
American Samoa | No | No
Bureau of Indian Affairs | No | No
Commonwealth of the Northern Mariana Islands | No | No
Department of Defense Education Activity | No | No
District of Columbia | No | No
Federated States of Micronesia | No | No
Guam | No* | No*
Palau | No | No
Puerto Rico | Yes | Yes
Republic of the Marshall Islands | No | No
Virgin Islands | No | No

*Reported in the state’s APR or SPP; APR/SPP reporting is not counted here.

 

Of the 50 regular states, 28 reported both participation and performance for all alternate assessments used (see Figure 7). Two states (Louisiana and Virginia) reported participation or performance for some assessments, but not all. Eleven states failed to publicly report either participation or performance for any alternate assessment.

Figure 7. States Reporting 2005–2006 Disaggregated Participation or Performance Data for Students with Disabilities on Alternate Assessments*

*The figure does not include state APR or SPP data. A broad definition was used to determine whether a state had data—states were included if they had data in any form for each test; these data could be presented for the state as a whole, by grade ranges, or by grade.

Note: States that had added APR or SPP data during the verification process are shown below. States that specifically referenced APRs or SPPs as a source of data to add are italicized. APR or SPP data we found in response to general requests to look are in regular font:

Added participation and performance for all alternates: Georgia, Hawaii, Kansas, Indiana, Louisiana, Minnesota, Ohio, Rhode Island, South Dakota, Tennessee, Vermont

Added participation for assessments not in regular reports: Alabama, Maine, Montana, Oklahoma, Utah, Wyoming

Added performance for assessments not in regular reports: Missouri

 

Figure 7 shows how states reported participation and performance on their alternate assessments. Compared to 2004–2005, fewer states reported disaggregated data, and fewer of those reporting data provided data for both participation and performance for all assessments. In 2005–2006, 28 regular states reported participation and performance for all assessments, compared to 42 states in 2004–2005. The decision not to count SPPs and APRs this year produced lower numbers than if we had counted them as in the past; had they been counted, 39 states would have been credited with reporting participation and performance for all of their alternate assessments, and 34 states specifically referenced APRs or SPPs as the source of publicly reported data. Eleven states failed to provide disaggregated participation or performance data for any assessment in 2005–2006, compared to only three states in 2004–2005. If APR or SPP reporting had been included for 2005–2006, this number would have been nine states.

Of the 11 unique states, only Puerto Rico reported both participation and performance on its alternate assessment in regular public reports (Guam reported these data only in its APR or SPP, which was not counted here), though Puerto Rico did not report whether the alternate was based on alternate achievement standards, modified achievement standards, or grade-level achievement standards. One other unique state, the District of Columbia, reported using an alternate assessment but did not report either participation or performance data. No other unique state publicly reported on any alternate assessment or provided participation or performance data.


Assessment Participation in 2005–2006

Regular Assessment Disaggregated Participation Results for Students with Disabilities

Figures 8 and 9 show participation reporting approaches for regular assessments in regular states, with Figure 8 showing reporting approaches by the 50 regular states when all of their assessments are considered. Figure 9 shows reporting approaches for each of the 98 regular assessments currently used within state accountability systems. This information is presented by state in Appendix E.

The most common participation reporting category among states for regular assessments was number of students assessed, with 37 states (see Figure 8). This was followed by 15 states reporting by percent of students not assessed, 9 states reporting percent of students assessed, 8 states reporting number or percent exempt/excluded, and 7 states reporting number or percent absent. The least reported category was number of students not assessed, with 6 states.

Looking at these data by total number of assessments, there is a similar pattern (see Figure 9). The most used category for participation reporting was number of students assessed, with 61 assessments, followed by reporting using the category percent of students assessed (25 tests). The least reported category was number of students not assessed, with 8 assessments.

Figure 8. Participation Reporting Approaches for Regular Assessments for Regular States Within Accountability Systems

Number of states reporting:

Figure 9. Participation Reporting Approaches for Regular Assessments for Regular States by Test Within Accountability Systems

Number of regular assessments reporting:

 

Among the 11 unique states, not graphed due to small numbers, one (District of Columbia) reported number and percent of students assessed, and one reported only the number of students assessed. The remaining nine unique states reported no disaggregated participation data (see Appendix E). None reported number or percent exempt or absent.

Figure 10 shows the participation rates reported for 8th grade math in states where this information was reported. The grade and content area (middle school math) were chosen to be consistent with information provided in previous reports. States providing data in other forms (e.g., with more than one grade aggregated together), or not using a middle school math assessment, are not included in this figure. For the 2005–2006 academic year, participation rates ranged from 89% to 99%, compared to a range of 83% to 100% in 2004–2005. However, fewer states are included in this analysis because only 14 states reported these data clearly, compared to 20 in the previous year. Sixty-four percent (9 of the 14 states) had participation rates of 95% or higher, compared to fifty percent (10 of 20) in 2004–2005.
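To make these summary figures concrete, here is a minimal Python sketch (using hypothetical participation rates, not the actual values behind Figure 10) showing how the range and the share of states at or above 95% participation can be computed from a list of state-level rates.

```python
# Hypothetical middle school math participation rates for students with
# disabilities, one value per state; the real analysis used the 14 states
# with clearly reported 2005-2006 rates.
rates = [89, 91, 92, 93, 94, 95, 95, 96, 97, 97, 98, 98, 99, 99]  # percent

low, high = min(rates), max(rates)
at_or_above_95 = [r for r in rates if r >= 95]
share = 100 * len(at_or_above_95) / len(rates)

print(f"Participation rates ranged from {low}% to {high}%.")
print(f"{len(at_or_above_95)} of {len(rates)} states ({share:.0f}%) "
      f"had participation rates of 95% or higher.")
```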

Figure 10. Percentages of Students with Disabilities Participating in Middle School Regular Math Assessments in Those States with Clear Reporting of Participation Rates


Alternate Assessment Disaggregated Participation Results for Students with Disabilities

Figures 11 and 12 show participation reporting approaches for alternate assessments (see Appendix F). Figure 11 shows reporting approaches for alternate assessments by the 50 regular states when all of their alternate assessments are considered. Figure 12 shows reporting approaches for each of the 40 alternate assessments based on alternate achievement standards that had data reported across all regular states. The most common way states reported participation was to provide the number of students assessed, with 31 states reporting this across 40 of their alternate assessments. The least commonly reported category, provided by two states, was percent of students not assessed.


Figure 11. Participation Reporting Approaches for Alternate Assessments Based on Alternate Achievement Standards

Figure 12. Participation Reporting Approaches for Alternate Assessments Based on Alternate Achievement Standards by Assessment


Assessment Performance in 2005–2006

Regular Assessment Performance Results

As with the reporting of participation data, states’ reporting of performance data for regular assessments varied in both extent and approach (see Appendix G). Figures 13 and 14 show the performance reporting approaches by the 50 states (see Figure 13) and for the 101 individual assessments reported by states within and outside their accountability systems (see Figure 14). Data are presented in terms of the number of assessments across all regular states for which disaggregated performance data were provided.

Thirty-three states provided data on percent proficient, such as the percent of students with disabilities whose scores were at or above the proficient level. This was the second most common reporting method regardless of whether examined by state or by assessment.

Figure 13. Performance Reporting Approaches for Regular Assessments Within Accountability Systems

Figure 14. Performance Reporting Approaches for Regular Assessments by Assessments Within Accountability Systems


For all states providing clear disaggregated performance data for students with disabilities, the performance of both general education students and students with disabilities was examined. In considering performance levels across states, it is important to keep in mind that each state determines the specific content of its assessments and establishes its own proficiency levels. Assessments may emphasize different content standards and may differ widely in difficulty. Thus it is unwise to compare proficiency rates across states, or to compare gaps between general education and special education students across states. Year-to-year comparisons within a state are only meaningful if the same assessments were used in the different years, if the state indicated that reported scores for altered assessments were comparable, and if participation rates and populations were similar.

Because reading/English Language Arts and math are core subjects in most states, and were the first content areas required to be assessed by NCLB, performance results for these areas are the primary focus of this report. If states reported a separate writing assessment, it is included in the assessments listed in the Appendices. However, writing-only assessments are not included in performance comparisons between students with and without disabilities. All of the assessments reported in this section are CRT assessments, with the exception of Iowa, which employed an NRT assessment in 2005–2006. EXIT exams are not reported here because states’ distinct EXIT exams differ in their precise purpose, and exam results may be combined with the results of other criteria to determine eligibility for graduation. Many of the graduation requirement exams are also used for NCLB accountability purposes, and these results are reported alongside those of the regular statewide exams that are used for accountability purposes.

We separated grade levels into three sets: elementary (3–5), middle (6–8), and high school (9–12). For the summary in this report, we present only one grade per level, specifically 4th grade, 8th grade, and 10th grade. These grades were chosen because they are the most common grades for testing historically and had been used in previous reports. If not available in a state, we sought data for the next lower grade, and if those were not available we went to the next higher grade. When a high school assessment failed to specify a grade level, it was included as a 10th grade assessment.
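As an illustration of the grade-selection rule just described, the following Python sketch (a simplification; the grade lists and the exact search order within each level are assumptions, not specifications from this report) picks a single grade per level by preferring the target grade, then lower grades, then higher grades.

```python
# Illustrative sketch of the grade-selection fallback: prefer grade 4, 8, or 10;
# otherwise take the next lower reported grade; otherwise the next higher one.
LEVELS = {
    "elementary": {"target": 4, "grades": [3, 4, 5]},
    "middle": {"target": 8, "grades": [6, 7, 8]},
    "high school": {"target": 10, "grades": [9, 10, 11, 12]},
}

def pick_grade(level, reported_grades):
    """Return the grade to summarize for a level, or None if nothing usable."""
    spec = LEVELS[level]
    target, grades = spec["target"], spec["grades"]
    lower = sorted((g for g in grades if g < target), reverse=True)
    higher = sorted(g for g in grades if g > target)
    for grade in [target, *lower, *higher]:
        if grade in reported_grades:
            return grade
    return None

# Example: a state that reported data only for grades 3, 6, 7, and 11.
reported = {3, 6, 7, 11}
print(pick_grade("elementary", reported))   # -> 3  (next lower than 4)
print(pick_grade("middle", reported))       # -> 7  (next lower than 8)
print(pick_grade("high school", reported))  # -> 11 (next higher than 10)
```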

Although most states reported separately on students without disabilities (general education students) and on students with disabilities, some states did not report separately and instead reported data for "all" students. This can influence slightly (depending on the percentage of students with disabilities in the assessment) how gap comparisons are interpreted.

In reviewing the performance data, we noted that smaller gaps were seen in states in which students with disabilities had high scores. In these situations there is little room left on the scale for a large gap (i.e., if students with disabilities are at 85% proficient, there can be no more than 15 percentage points of difference between these students and students without disabilities). Similarly, if students without disabilities have exceptionally low scores on an assessment, there is a limited range for the difference between them and students with disabilities. In general, states with the highest average scores for students with disabilities had smaller gaps, as did states with the lowest average scores for students without disabilities (or all students). Table 3, using data from Figures 15–20, compares the average gap for all states to the average gap for states with the five highest scores for students with disabilities and for states with the five lowest scores for students without disabilities (or for all students, in states reporting data for all students as the comparison group rather than for students without disabilities). In each case (elementary, middle, and high school; reading and math; and both extremes of scores), gaps were lower for the states with high scores for students with disabilities or low scores for students without disabilities. For some comparisons more than five states are listed because of tied scores. States with the highest and lowest scores for students with and without disabilities may be analyzed further in subsequent reports.

Table 3. 2005–2006 Gaps for Regular Assessments: Comparison of Mean Gaps to Gaps in States with Highest Scores for Students with Disabilities, and Lowest Scores for Students without Disabilities or All Students

Figure | Grade Level and Content Area | Mean Gap for All States | Mean Gap, States with 5 Highest Scores for Students with Disabilities | Mean Gap, States with 5 Lowest Scores for Students without Disabilities (or All Students)
Figure 15 | Elementary Reading | 34.5 | 16.9 (GA, KS, NE, ND, SD, TX, VA) | 31.8 (CA, MA, MO, NV, NM)
Figure 16 | Middle School Reading | 42.5 | 23.0 (GA, KS, NE, TX, VA) | 34.6 (CA, FL, MO, NV, NM)
Figure 17 | High School Reading | 44.8 | 27.4 (GA, NE, OH, TX, VA) | 31.6 (CA, FL, KY, ME, MO)
Figure 18 | Elementary Math | 29.3 | 16.0 (ID, KS, NE, OR, TX) | 27.0 (CA, MA, MO, NV, NM, RI)
Figure 19 | Middle School Math | 40.9 | 31.4 (KS, NE, ND, TN, VA) | 27.4 (CA, KY, ME, MO, NM)
Figure 20 | High School Math | 38.5 | 27.2 (GA, NE, NC, TN, VA) | 26.8 (CA, CO, MN, NM, OK)
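The quantities in Table 3 follow from simple arithmetic on the percent-proficient values in Figures 15–20. The sketch below, a Python illustration using made-up numbers rather than the actual state data, shows the computation: each state's gap is the comparison group's percent proficient minus the percent proficient for students with disabilities, and the restricted means average that gap over the states with the highest scores for students with disabilities and over the states with the lowest scores for the comparison group.

```python
# Illustrative only: hypothetical percent-proficient values, not actual state data.
# Each entry: state -> (students with disabilities, students without disabilities
# or all students), as percent proficient.
scores = {
    "State A": (55.0, 80.0),
    "State B": (30.0, 78.0),
    "State C": (22.0, 55.0),
    "State D": (48.0, 62.0),
    "State E": (15.0, 58.0),
    "State F": (35.0, 85.0),
}

def mean(values):
    return sum(values) / len(values)

# Gap = comparison-group percent proficient minus percent proficient for
# students with disabilities (in percentage points).
gaps = {state: general - swd for state, (swd, general) in scores.items()}
mean_gap_all = mean(list(gaps.values()))

# Mean gap restricted to the states with the highest scores for students with
# disabilities (top 3 of 6 here, standing in for the top 5 of ~45 states).
top_swd = sorted(scores, key=lambda s: scores[s][0], reverse=True)[:3]
mean_gap_top_swd = mean([gaps[s] for s in top_swd])

# ...and restricted to the states with the lowest comparison-group scores.
low_general = sorted(scores, key=lambda s: scores[s][1])[:3]
mean_gap_low_general = mean([gaps[s] for s in low_general])

print(f"Mean gap, all states: {mean_gap_all:.1f}")
print(f"Mean gap, states with highest scores for students with disabilities: "
      f"{mean_gap_top_swd:.1f} ({', '.join(top_swd)})")
print(f"Mean gap, states with lowest comparison-group scores: "
      f"{mean_gap_low_general:.1f} ({', '.join(low_general)})")
```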

 

Reading Performance. Figures 15–17 show the reading performance of students by state for those states reporting data. In most states the reading performance of students with disabilities was considerably lower than that of general education students. In general, states with the highest average scores for students with disabilities, or the lowest average scores for students without disabilities (or all students), had smaller gaps, possibly due to limitations on variability at the ends of the range of percentages. In general, middle school and high school average scores were lower than elementary scores.

At the elementary level (see Figure 15), gaps ranged from 8 to 66 percentage points. The following states had gaps of 25 percentage points or less: Georgia, Kansas, Kentucky, Missouri, Nebraska, North Dakota, Texas, and Virginia. Two states had gaps of 50 percentage points or more: New Hampshire and Washington.

At the middle school level (see Figure 16), gaps ranged from 19 to 57 percentage points. States with gaps of 25 percentage points or less were Georgia, Kansas, Nebraska, and Texas. Nine states had gaps of 50 points or more: Alabama, Arizona, Colorado, Indiana, Montana, New Hampshire, New Jersey, Oklahoma, and Utah.

At the high school level (see Figure 17), gaps ranged from 23 to 58 percentage points. The following states had gaps of 25 percentage points or less: Florida, Georgia, Nebraska, and Virginia. Thirteen states had gaps of 50 points or more. We caution against comparing gaps across states.

Figure 15. Elementary School Reading Performance on the Regular Assessment

Legend: Heavy Solid Bar = Students with Disabilities

Light Bar = May be Students without Disabilities or Total Students

Figure 16. Middle School Reading Performance on the Regular Assessment

Legend: Heavy Solid Bar = Students with Disabilities

Light Bar = May be Students without Disabilities or Total Students

Figure 17. High School Reading Performance on the Regular Assessment

Legend: Heavy Solid Bar = Students with Disabilities

Light Bar = May be Students without Disabilities or Total Students


Mathematics Performance. Figures 18–20 show the performance of general education students and students with disabilities on states’ 2005–2006 math assessments. Across grade levels, as with reading, states having the highest scores for students with disabilities or the lowest scores for students without disabilities had, on average, smaller gaps than the average across all states. As with reading, this suggests that small gaps may be associated with limited variability at the high and low ends of the range of percentages (see Table 3).

At the elementary school level, gaps in math achievement on regular assessments were smaller than at either the middle school or high school level. The gaps (see Figure 18) ranged from a low of 5 percentage points (Texas) to a high of 45 (Alabama). Nine states (Kansas, Kentucky, Maine, Missouri, Nevada, New Mexico, North Dakota, Texas, and Virginia) had gaps of 25 percentage points or less. States with the largest gaps (40 points and above) were Alabama, Arizona, Colorado, Delaware, and New Hampshire.

At the middle school level (see Figure 19), gaps in achievement on regular math assessments ranged from a low of 20 percentage points (New Mexico) to a high of 51 (Oklahoma, Utah, and Wisconsin). States with gaps of 25 percentage points or less were Kentucky, Nebraska, and New Mexico. States with gaps of 50 points or more were Alabama, Alaska, Oklahoma, Utah, and Wisconsin.

Figure 18. Elementary Mathematics Performance on the Regular Assessment

Legend: Heavy Solid Bar = Students with Disabilities

Light Bar = May be Students without Disabilities or Total Students

Figure 19. Middle School Mathematics Performance on the Regular Assessment

Legend: Heavy Solid Bar = Students with Disabilities

Light Bar = May be Students without Disabilities or Total Students

Gaps in math achievement on regular high school math assessments (see Figure 20) ranged from a low of 19 percentage points (Virginia) to a high of 58 percentage points (Alabama). States with a gap of 25 percentage points or less were California, New Mexico, North Carolina, Tennessee, and Virginia. States with a gap of 50 points or more included Alabama, Alaska, Arizona, South Dakota, West Virginia, and Wisconsin.

Figure 20. High School Mathematics Performance on the Regular Assessment

Legend: Heavy Solid Bar = Students with Disabilities

Light Bar = May be Students without Disabilities or Total Students

Alternate Assessment Performance Results

Figure 21 provides reporting approaches for alternate assessments for all regular states, and Figure 22 provides the same information by the total number of alternate assessments administered by regular states for which disaggregated performance data were found. The reporting approaches were similar across states and assessments, with most reporting the percent performing in each achievement level, followed by total percent proficient and number not proficient (see Appendix H).

Figure 21. Performance Reporting Approaches for Alternate Assessments


Figure 22. Performance Reporting Approaches for Alternate Assessments by Tests


Assessment Performance: Trends

Eleven states were examined for trends in the report for school year 2004–2005 (VanGetson & Thurlow, 2007). The current trend analysis builds on that base of 11 states because all of those states continued to report data for students with disabilities for reading or mathematics in 2005–2006. Nine of these states reported both reading and mathematics data for the past eight years (California, Delaware, Indiana, Kentucky, Louisiana, Missouri, New Jersey, New York, and Washington). Colorado reported eight years of reading data and Kansas reported eight years of mathematics.

States that report data on their Web sites rarely provide a context for their results. For the 2005–2006 data, only one state provided explicit, easy-to-find information about changes to the assessment that might have resulted in dramatic changes in performance.

Reading Assessment Gaps. Figures 23 through 25 show the percentage of students with disabilities reaching proficiency on regular state reading assessments at the elementary, middle, and high school levels for the past eight years. In Figure 23, the general trend at the elementary level showed steady increases in four states across the most recent four consecutive years, and a combination of increasing and maintaining levels of percent proficient in two states. The other two states showed general increases in percent proficient over time, though with less consistency. Overall, the percentage of students with disabilities achieving proficient levels in reading ranged from near 0% to 40% in 1998–1999 and from 20% to 53% in 2005–2006.

For the middle school level in Figure 24, only two states showed consistent increases in percent proficient in the most recent four consecutive years. Six states showed a combination of increases and maintained levels of proficiency across a similar time frame. Two other states showed general increases over the span of eight years, but with less consistency. The range of percent proficient in these states was between near 0% and 48% in 1998–99, and between 8% and 45% in 2005–2006. Although the difference between these ranges does not by itself suggest an overall pattern of increasing percent proficient across years, many of the individual states that reported data in each of the past eight years did show increases.

In Figure 25, the high school level states showed a more consistent increase in percent proficient for reading. The range rose from 0% to 10% in 1998–1999 to 10% to 40% in 2005–2006. However, this is a very small number of states, and it should not be assumed that similar patterns exist for other states.

Figure 23. Eight-Year Trends of the Percentage of Elementary Students with Disabilities who Achieved Proficiency on Statewide Reading Assessments

Figure 24. Eight-Year Trends of the Percentage of Middle Students with Disabilities who Achieved Proficiency on Statewide Regular Reading Assessments

Figure 25. Eight-Year Trends of the Percentage of High School Students with Disabilities who Achieved Proficiency on Statewide Regular Reading Assessments

Math Assessment Gaps. Figures 26–28 show the percentage of students with disabilities reaching proficiency on regular mathematics assessments at the elementary, middle, and high school levels for the past eight years. In Figure 26, showing elementary level mathematics performance, four states showed a combination of increases and maintained levels of percent proficient in the most recent four years. The six other states showed less consistent increases in percent proficient within the same time frame. The range of percent proficient was 0% to 43% in 1998–1999, whereas in 2005–2006 the range was 25% to 65%.

In Figure 27, for the middle school level, four states showed a combination of increased or maintained rates of proficiency in the past four years. The six other states, although less consistent, still trended upward in rates of proficiency from 1998–99 to 2005–06. Across all states, the range changed from roughly 0% to 45% in 1998–99 to 8% to 41% in 2005–06.

At the high school level, Figure 28 shows most states with steady increases in the percent of students with disabilities proficient in mathematics. The range of percent proficient rose from near 0% to 5% in 1998–1999 to 10% to 30% in 2005–2006.

Overall, comparing 1998–99 to 2005–06 across only these states, there were comparable increases in the percentages of students proficient at the elementary, middle, and high school levels for both reading and mathematics. There were slightly larger increases for elementary math and high school reading; however, these comparisons take into account only the first and last years of data.

Figure 26. Eight-Year Trends of the Percentage of Elementary Students with Disabilities who Achieved Proficiency on Statewide Regular Mathematics Assessments

Figure 27. Eight-Year Trends of the Percentage of Middle Students with Disabilities who Achieved Proficiency on Statewide Regular Mathematics Assessments

Figure 28. Eight-Year Trends of the Percentage of High School Students with Disabilities who Achieved Proficiency on Statewide Regular Mathematics Assessments


Gap Comparisons from 2004–2005 to 2005–2006

The average gap for elementary reading increased from 33.6 percentage points in 2004–2005 to 34.5 in 2005–2006, with 41 and 45 states reporting, respectively. For middle school there was a very slight decrease in the average gap, from 43.4 percentage points to 42.5 percentage points, with 41 and 45 states reporting, respectively. The average high school reading gap widened across years, from 29.8 to 42.8 percentage points, with 41 states reporting in both years. This change of 13 percentage points is the most notable difference in the gap data for either reading or mathematics between the two years.

For mathematics, the average elementary gap widened from 26.6 percentage points in 2004–2005 to 29.3 percentage points in 2005–2006, with 41 and 45 states reporting, respectively. For middle school the average gap increased slightly from 39.2 to 40.9, with 41 and 45 states reporting. At the high school level, the average gap decreased slightly from 40.4 in 2004–2005 to 38.5 in 2005–2006. At each level, the mathematics gap changed by less than 3 percentage points in either direction across the two years.


Other Information Collected for 2005–2006

Accommodations

Ten states provided information on students’ participation in regular assessments with accommodations. More states (N=16) reported this information in the previous report, which covered school year 2004–2005.

Unlike in the previous report, no state publicly reported information on standard versus non-standard accommodations used on a regular test. States either reported a general accommodated category, listed specific accommodations, or reported a "bundle" of accommodations provided to students with a particular disability (see Table 4 and Appendix I).

All but one of the states that reported accommodation information did so for participation and performance by grade and content area. Table 4 summarizes how states reported data for students who participated with and without accommodations. Appendix I has additional details about participation and performance for the states.


Table 4. Summary of States that Reported State-Level Accommodations Information in State Public Reports

State | Terminology used | By content/grade? | Participation | Performance | Comments
Colorado | Specific accommodations | Yes/Yes | Yes | Yes |
Florida | With and without accommodations | Yes/Yes | Yes | Yes |
Idaho | Accommodations | Yes/Yes | Yes | Yes | Reports for all students on IRI, not by students with disabilities
Indiana | Accommodations | Yes/Yes | Yes | Yes |
Iowa | With and without accommodations | Yes/Yes | Yes | Yes |
Kentucky | With and without accommodations | Yes/Yes | Yes | Yes |
Mississippi | With and without accommodations by grade band and instructional level | Yes/No | Yes | No | Reports by elementary, middle, and secondary grade bands.
Nebraska | Accommodations | Yes/Yes | Yes | Yes | Reports by accommodated test, alternate test, and alternate methods of assessment.
North Carolina | Specific accommodations | Yes/Yes | Yes | Yes |
Texas | Linguistically Accommodated Testing, and bundle of accommodations for students with Dyslexia | Yes/Yes | Yes | Yes | Reports by limited English proficiency status, non-LEP status (1st and 2nd year), all total, and by special education student status.

 

Click Analysis of Web-based Reporting

Publicly reported data are not functionally public unless they are provided in an easily accessible manner. To examine ease of access, we analyzed the number of clicks it takes to locate disaggregated data on students with disabilities on states’ Department of Education Web sites (see Figure 29). This analysis is similar to the one conducted in the previous report. Our click analysis includes all regular and unique states that had data reported in the initial collection and whose data could be located again for this subsequent analysis. The analysis was conducted after all data verification was completed. Because state Web sites change frequently, the total number in Figure 29 may vary from the number of states reported in the appendices as having provided data.

Figure 29 presents the number of clicks between Web pages required to arrive at the disaggregated data. We did not count the additional clicks needed on a Web page that is used to generate reports because many of these allow users to choose specific demographic characteristics and test elements; counting these clicks would add many more actual mouse clicks to the count. For those sites, we only counted the number of clicks needed to arrive at the generator site and a final "submit" click. For this analysis, we specifically excluded use of a Web page search engine, and instead measured the number of clicks required to navigate from the home page to the data using available links on each page. We did not count "false starts" in which we initially chose a link that did not lead to the data, but note that states use very different terminology on their Web sites to identify where disaggregated assessment data are located.

Although states may use both Web-generated reports and more traditional documents posted online to publicly report data, each has its strengths and weaknesses for users, depending on their purpose in accessing the data. We noted that collecting data across a range of grades and tests was much more time-consuming when using a report generator than when accessing traditional reports because the generator sites typically require the user to manually select variables from several drop-down menus for each report generated. However, this design is not a problem for users who desire a single report on specific demographic variables. Some states offer both formats of accessing data, but then the question arises as to whether the data provided in each format are identical. States should clearly indicate posting dates, and if more than one format is offered, clarify whether the data are the same.

Most state Web sites in the analysis required three or four clicks to access data. Only a small number of states required seven or more clicks. This is somewhat comparable to last year’s report, which found 41 states with 3-4 clicks and 6 states with 6 clicks. However, because Web sites change frequently, and because this year’s analysis includes 14 fewer states than the previous report, one should not assume a clear year-to-year comparison is possible.

Figure 29. Number of States in Each "Click" Category


Summary and Discussion

As reflected in the title of this report, the findings show good news and bad news for the disaggregation of data for students with disabilities in public reports for 2005–2006. On the negative side, compared to 2004–2005, fewer states publicly reported disaggregated data, and fewer of those reporting data provided data for both participation and performance for all assessments. Considering performance, the gaps between students with disabilities and students without disabilities remained essentially the same as in 2004–2005 for math at all levels and for reading at the elementary and middle school levels. At the high school level, the average gap for reading widened by 13 percentage points. However, in the small sample of states for which we had eight years of trend data, most states reported increases over time in the percent proficient for reading and mathematics at the elementary, middle, and high school grades. Looking across these trend years, we note spikes and precipitous drops in the year-to-year performance data. From currently available information online, we identified only one state that had changed an assessment during that time, but these changes in the data suggest that some conditions must have varied (such as changes to tests or cut scores, student factors, accommodations, or instruction). Yet, with those inconsistencies aside, there are clear trends toward improved performance overall across the eight years. In future reports we plan to continue analyzing the potential reduction in gaps at either end of performance, looking at the highest and lowest average scores for students with and without disabilities.

For accommodations reporting, the bad news is that fewer states publicly reported accommodations use for students with disabilities on regular state tests in 2005–2006 than in the past. Greater transparency is needed in this area. However, the good news is that among the states that did report accommodations data publicly, all but one in 2005–2006 reported both participation and performance by grade and content area. States did not report in a way that allowed the reviewers to differentiate between accommodations that resulted in valid scores and those that resulted in scores considered invalid, as they had in the past by distinguishing between standard and nonstandard accommodations. Although some states report the number or percent of students excluded from performance data, the reasons why students are excluded are important. If the reason for unusable scores is related to the accommodations provided, this should be transparent in reporting.

Recommendations for Reporting

Based on findings in previous reports and in the current analysis, the following recommendations are made for reporting data:

Report participation and performance results for each test, subject area, and grade level. As with previous reports, this need was especially apparent with alternate assessments. Although states’ annual performance reports and state performance plans are a means to publicly report data, these reports are not always clear and accessible to public audiences, and should not be considered as equivalent to regular public reports. Because this analysis did not include data found only in APRs or SPPs, the number of states counted as reporting data on state alternate assessments for students with disabilities was even lower than in previous years. For regular assessments, too, states need to report data for each grade level tested. Although we note in the appendices where states reported data by grade ranges (e.g., elementary grades together), these data could not be used in our analyses.

The confidentiality issue is often a factor in reporting participation for alternate assessments because of the required minimum group size for reporting. Even when a state indicated performance with a dash or asterisk because of its minimum N policy for data privacy, we counted it as reporting data publicly. Some states, however, chose to report data only in small subcategories (for example, alternate assessment participation by disability category) that fall below the minimum N and therefore cannot be reported. If such a state also does not report participation totals for the grade level on a test, then no data are available at all. For this reason, we recommend that, at a minimum, states clearly report data for all students with disabilities by grade and by content area assessed.
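To make the minimum N issue concrete, the sketch below shows, in a simplified and hypothetical form, how reporting only small subcategories can leave no usable public data unless a grade-level total is also reported. The threshold of 10 and the counts are assumptions for illustration, not any state's actual policy or data.

```python
# Minimal illustration of minimum-N suppression; the threshold and counts
# are hypothetical, not any state's actual policy or data.

MIN_N = 10  # assumed minimum group size for public reporting

def report_cell(count):
    """Return the count if it meets the minimum N, otherwise a suppression mark."""
    return count if count >= MIN_N else "*"

# Alternate assessment participation reported only by disability category:
by_category = {"autism": 4, "intellectual disability": 7, "other": 3}
print({cat: report_cell(n) for cat, n in by_category.items()})
# -> every cell is suppressed: {'autism': '*', 'intellectual disability': '*', 'other': '*'}

# Reporting the grade-level total as well keeps usable data public:
total = sum(by_category.values())  # 14
print("All students with disabilities, grade total:", report_cell(total))  # 14
```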

Report participation in two informative ways. VanGetson and Thurlow (2007) graphed participation rates of students with disabilities by contrasting data on a state’s alternate assessment for a particular content area and grade level with data for students in that grade taking the regular state assessment. This could not be done for the present report because too few data were reported, due in part to the fact that states often did not provide this information outside of their APRs or SPPs. It is useful to know both what percentage of students with disabilities in a grade level participated in regular and alternate assessments and the participation rate for each assessment by grade; state reports of regular assessments often employ the latter approach. In the future, both participation numbers and percentages should be reported clearly.
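As a simple illustration of this recommendation, the sketch below assembles, for one hypothetical grade and content area, both the counts and the percentages of students with disabilities taking the regular assessment, taking the alternate assessment, and not tested. The counts, labels, and layout are our own assumptions for illustration, not a required reporting format.

```python
# Hypothetical counts for one grade and content area, invented for illustration.
enrolled_swd = 1000          # students with disabilities enrolled in the grade
counts = {"regular assessment": 870, "alternate assessment": 90}
counts["not tested"] = enrolled_swd - sum(counts.values())  # 40

# Report both numbers and percentages, so readers can see how students with
# disabilities in the grade were distributed across assessments.
for label, n in counts.items():
    pct = 100 * n / enrolled_swd
    print(f"{label:>20}: n = {n:4d}  ({pct:4.1f}% of enrolled students with disabilities)")
```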

Four states specifically referred to a posted APR or SPP as their only way of reporting alternate assessment data. Five other states did not refer specifically to these reports but directed readers to the state Web site for the information, and this was the only location where alternate assessment data were officially reported. Some states used APR-based reports to publicly report both regular and alternate assessment data. States need to be sure that reports based on their APRs are designed to be easily understood by a public audience. For states where data for alternate assessments were not publicly reported except in a state APR or SPP, we believe that the requirement for public reporting is not met.

Clearly label preliminary and final data with dates posted. There are multiple ways of reporting data online, but it is important that, whatever approach is used, the most current version is clearly labeled for the user. From initial searches through verification, data may be posted online more than once, and sometimes older versions of data are left online even after updated data are posted. For example, for one state we found two reports, each of which appeared to be a final report for the year—but the data in the reports were different. A clear posting or publishing date would be helpful in such instances.

It was also problematic when our question about how a state reports data publicly was interpreted to mean every way the state reports data in any kind of report, rather than a typical annual report format. Although we did collect data from multiple sources and in various formats (e.g., Excel sheets, PDF files, and generated reports), data from these sources did not always match the final data reported in a state’s regular report, which was the primary source for data collected. For this reason, future data collection may be easier if the acceptable sources of data are defined more narrowly as a "final" document or set of documents by grade, or a Web address for a final report or report generator.

Report participation with accommodations. The number of states that publicly reported accommodations use in relation to participation numbers decreased from the previous report. Because it is important to track the percent of students with disabilities taking regular and alternate assessments by grade, data on whether these students participate with or without accommodations, by grade and assessment, are also needed. These data provide another view of how students are participating in the assessment system overall and give useful information on how students perform with and without accommodations by grade and assessment. As noted in the previous report, this information also helps determine the extent to which students’ scores may have been excluded from summary data, confounding the overall picture of how all students performed on an assessment.

Consider APR data and regular reporting. States often rely on Annual Performance Report data for public reporting. For the few states that base regular reports on APRs, the presentation of data originating in APRs needs to meet the requirement that data for students with disabilities be provided in the same manner as data for students without disabilities. A related question is whether states are reporting the percentage of students with disabilities taking the regular assessment and the various types of alternate assessments by grade level. Reporting by grade is not required for regular reporting under NCLB, but these data have appeared sporadically in regular reports in the past. Participation rates disaggregated by grade would be a beneficial addition to regular reports as an aid in interpreting performance data, especially for analyzing achievement gaps between students with disabilities and students without disabilities.

Make data accessible. Even the most carefully collected data are of limited utility if users cannot easily find and review the information. Accessibility includes providing clear report formats, making it easy to navigate to data from state education department home pages, and providing, if possible, both summary reports and reports (or report generators) disaggregated by grade level and content area.

We commend those states that provide complete and accessible data on statewide assessments, and we were pleased to see evidence of performance improvements over time. Nevertheless, we were disappointed to find that many regular states and most unique states continued to provide inadequate or inaccessible data, even in this tenth year of required reporting.


References

Klein, J. A., Wiley, H. I., & Thurlow, M. L. (2006). Uneven transparency: NCLB tests take precedence in public assessment reporting for students with disabilities (Technical Report 43). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Thurlow, M. L., & Wiley, H. I. (2004). Almost there in public reporting of assessment results for students with disabilities (Technical Report 39). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Thurlow, M. L., Wiley, H. I., & Bielinski, J. (2003). Going public: What 2000–2001 reports tell us about the performance of students with disabilities (Technical Report 35). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

VanGetson, G. R., & Thurlow, M. L. (2007). Nearing the target in disaggregated subgroup reporting to the public on 2004–2005 assessment results (Technical Report 46). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Wiley, H. I., Thurlow, M. L., & Klein, J. A. (2005). Steady progress: State public reporting practices for students with disabilities after the first year of NCLB (2002–2003) (Technical Report 40). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.


Appendices

Appendix A: Sample Letter Sent to Assessment Directors

Appendix B: Sample Letter Sent to Special Education Directors

Appendix C: Status of Disaggregated Data (Participation and Performance) for Students with Disabilities on Regular State Tests in the Fifty States and Unique States for 2005–2006

Appendix D: Status of Disaggregated Data (Participation and Performance) for Students with Disabilities on Alternate State Tests in the Fifty States and Unique States for 2005–2006

Appendix E: Disaggregated Participation Information for Students with Disabilities on Regular State Tests for the Fifty States and Unique States for 2005–2006

Appendix F: Disaggregated Alternate Assessment Participation Information for Students with Disabilities for the Fifty States and Unique States for 2005–2006

Appendix G: Disaggregated Regular Assessment Performance Information for Students with Disabilities for the Fifty States and Unique States for 2005–2006

Appendix H: Disaggregated Alternate Assessment Performance Information for Students with Disabilities for the Fifty States and Unique States for 2005–2006

Appendix I: Participation and Performance for Students Tested with Accommodations