2009 Survey of States: Accomplishments and New Issues at the End of a Decade of Change

Jason Altman • Sheryl Lazarus • Rachel Quenemoen • Jacquelyn Kearns • Mari Quenemoen & Martha Thurlow

June 2010

All rights reserved. Any or all portions of this document may be reproduced and distributed without prior permission, provided the source is cited as:

Altman, J. R., Lazarus, S. S., Quenemoen, R. F., Kearns, J., Quenemoen, M., & Thurlow, M. L. (2010). 2009 survey of states: Accomplishments and new issues at the end of a decade of change. Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.


Table of Contents

The Mission of the National Center on Educational Outcomes
Acknowledgments
State Directors of Special Education
State Directors of Assessment
Executive Summary
Overview of 2009 Survey of States
Successful Practices and Recurring Challenges
Growth Models
Participation and Accommodations
Alternate Assessments Based on Modified Academic Achievement Standards (AA-MAS)
Assessment Reporting Practices
Alternate Assessments Based on Alternate Achievement Standards (AA-AAS)
Current and Emerging Issues
Preferred Forms of Technical Assistance
Appendix A: Successes and Challenges Reported by Unique States
Appendix B: State Technical Manuals for the AA-AAS
Appendix C: Contextual Comments Related to Assessment Data Trends


The Mission of the National Center on Educational Outcomes

NCEO Staff

Deb Albus
Jason Altman
Manuel Barrera
Laurene Christensen
Christopher Johnstone
Jane Krentz
Sheryl Lazarus
Kristi Liu
Ross Moen
Michael Moore
Rachel Quenemoen
Christopher Rogers
Dorene Scott
Yi-Chen Wu
Mai Vang

Martha Thurlow,
Director

NCEO is a collaborative effort of the University of Minnesota, the National Association of State Directors of Special Education (NASDSE), and the Council of Chief State School Officers (CCSSO). NCEO provides national leadership in assisting state and local education agencies in their development of policies and practices that encourage and support the participation of students with disabilities in accountability systems and data collection efforts.

NCEO focuses its efforts in the following areas:

  • Needs Assessments and Information Gathering on the participation and performance of students with disabilities in state and national assessments and other educational reform efforts.
  • Dissemination and Technical Assistance through publications, presentations, technical assistance, and other networking activities.
  • State Data Collection Technical Assistance to assist states in continuing to meet the challenges of collecting comprehensive, accurate, and consistent data on the participation and performance of students with disabilities.
  • Collaboration and Leadership to build on the expertise of others and to develop leaders who can conduct needed research and provide additional technical assistance.

The Center is supported primarily through a Cooperative Agreement (#H326G050007) with the Research to Practice Division, Office of Special Education Programs, U.S. Department of Education. Additional support for targeted projects, including those on English language learners, is provided by other federal and state agencies. The Center is affiliated with the Institute on Community Integration in the College of Education and Human Development, University of Minnesota. Opinions or points of view expressed within this document do not necessarily represent those of the Department of Education or the Offices within it.

National Center on Educational Outcomes
207 Pattee Hall
150 Pillsbury Dr. SE
Minneapolis, MN 55455
612/626-1530  Fax: 612/624-0879  http://www.nceo.info

The University of Minnesota is an equal opportunity educator and employer.



Acknowledgments

With the collective efforts of State Directors of Special Education and State Directors of Assessment, we are able to report on the activities of all 50 states and 8 of the 11 federally funded entities (unique states). Because of the thoughtful and knowledgeable responses of the directors of special education, directors of assessment, and their designees who completed this survey, we are able to share new initiatives, trends, accomplishments, and emerging issues during this important period of education reform. The purpose of this report is to make public the trends and issues facing states, as well as the innovations states are using to meet the demands of changing federal legislation. We appreciate the time taken by respondents to gather information from other areas or departments, and we hope that this collaborative effort provided an opportunity to increase awareness within and across state programs and departments.

For their support, special thanks go to:

  • Dave Malouf, of the Office of Special Education Programs (OSEP) in the U.S. Department of Education;
  • Eileen Ahearn, of the National Association of State Directors of Special Education;
  • Michael Moore, communications director for the National Center on Educational Outcomes;
  • Miong Vang, office assistant at the National Center on Educational Outcomes;
  • June De Leon, of the University of Guam, for her assistance in obtaining completed surveys from the Pacific unique states.

NCEO’s 2009 Survey of States was prepared by Jason R. Altman, Sheryl S. Lazarus, Rachel F. Quenemoen, Jacquelyn Kearns, Mari Quenemoen, and Martha L. Thurlow.



State Directors of Special Education

ALABAMA
Mabrey Whetstone

ALASKA
Art Arnold

ARIZONA
Colette Chapman

ARKANSAS
Marcia Harding

CALIFORNIA
Mary Hudler

COLORADO
Ed Steinberg

CONNECTICUT
Anne Louise Thompson

DELAWARE
Martha Toomey

FLORIDA
Bambi Lockman

GEORGIA
Nancy O’Hara
Kimberly Hartsell

HAWAII
Paul Ban

IDAHO
Jean Taylor
Jacque Hyatt

ILLINOIS
Beth Hanselman

INDIANA
Sharon Knoth

IOWA
Lana Michelson

KANSAS
Colleen Riley

KENTUCKY
Larry Taylor

LOUISIANA
Susan Batson

MAINE
David Stockford

MARYLAND
Carol Ann Baglin

MASSACHUSETTS
Marcia Mittnacht

MICHIGAN
Jacquelyn Thompson

MINNESOTA
Barbara L. Troolin

MISSISSIPPI
Ann Moore

MISSOURI
Heidi Atkins Lieberman

MONTANA
Tim Harris

NEBRASKA
Gary Sherman

NEVADA
Frankie McCabe

NEW HAMPSHIRE
Santina Thibedeau

NEW JERSEY
Roberta Wohle

NEW MEXICO
Denise Koscielniak

NEW YORK
Rebecca Cort

NORTH CAROLINA
Mary Watson

NORTH DAKOTA
Robert Rutten

OHIO
Kathe Shelby

OKLAHOMA
Misty Kimbrough

OREGON
Nancy Latini

PENNSYLVANIA
John Tommasini

RHODE ISLAND
Kenneth Swanson

SOUTH CAROLINA
Michelle Bishop (acting)

SOUTH DAKOTA
Ann Larsen

TENNESSEE
Joseph Fisher

TEXAS
Kathy Clayton

UTAH
Nan Gray

VERMONT
Karin Edwards

VIRGINIA
Doug Cox

WASHINGTON
Doug Gill

WEST VIRGINIA
Lynn Boyer

WISCONSIN
Stephanie Petska

WYOMING
Peggy Brown-Clark

AMERICAN SAMOA
Moeolo Vaatausili

BUREAU OF INDIAN EDUCATION
Gloria Yepa

DEPARTMENT OF DEFENSE
David Cantrell

DISTRICT OF COLUMBIA
Tameria Lewis

GUAM
May Camacho (acting)

NORTHERN MARIANA ISLANDS
Suzanne Lizama

MARSHALL ISLANDS
Ruthiran Lokeijak

MICRONESIA
Arthur Albert

PALAU
Helen Sengebay

PUERTO RICO
Norma Sanchez

U.S. VIRGIN ISLANDS
Carrie S. Johns

These were the state directors of special education in June 2009, when NCEO verified the survey.



State Directors of Assessment

ALABAMA
Gloria Turner
Miriam Byers

ALASKA
Erik McCormick

ARIZONA
Roberta Alley

ARKANSAS
Gayle Potter

CALIFORNIA
Deb V.H. Sigman
J. T. Lawrence

COLORADO
Jim McIntosh
Jo O’Brien

CONNECTICUT
Robert Lucco

DELAWARE
Wendy Pickett

FLORIDA
Victoria Ash

GEORGIA
Stephen Pruitt
Melissa Fincher

HAWAII
Kent Hinton

IDAHO
Margo Healy
Bert Stoneberg

ILLINOIS
Joyce Zurkowski

INDIANA
Wes Bruce

IOWA
Jim Addy

KANSAS
Tom Foster
Scott E. Smith

KENTUCKY
Ken Draut

LOUISIANA
Fen Chou

MAINE
Dan Hupp

MARYLAND
Leslie Wilson

MASSACHUSETTS
Mark Johnson

MICHIGAN
Joseph Martineau
Vince Dean

MINNESOTA
Dirk Mattson

MISSISSIPPI
Kris Kaase

MISSOURI
Andrea Wood

MONTANA
Judy Snow

NEBRASKA
Pat Roschewski

NEVADA
Carol Crothers

NEW HAMPSHIRE
Gaye Fedorchak

NEW JERSEY
Tim Peters

NEW MEXICO
Anne Bradley

NEW YORK
Steven Katz

NORTH CAROLINA
Angela Hinson Quick

NORTH DAKOTA
Greg Gallagher

OHIO
Pat Corrigan

OKLAHOMA
Joyce Defehr

OREGON
Tony Alpert

PENNSYLVANIA
Ray Young

RHODE ISLAND
Mary Ann Snider

SOUTH CAROLINA
Elizabeth Jones

SOUTH DAKOTA
Gay Pickner

TENNESSEE
Dan Long

TEXAS
Gloria Zyskowski
Cathy Kline

UTAH
Deborah Swensen

VERMONT
Michael Hock

VIRGINIA
Shelley Loving-Ryder

WASHINGTON
Joe Willhoft
Christopher Hanezrik

WEST VIRGINIA
Jan Barth

WISCONSIN
Lynette Russell
Phil Olsen

WYOMING
Bill Herrera
Lesley Wangberg

AMERICAN SAMOA
Robert Soliai

BUREAU OF INDIAN EDUCATION
Patricia Abeyta

DEPARTMENT OF DEFENSE
Steve Schrankel

DISTRICT OF COLUMBIA
LeRoy Tompkins
Joshua Boots

GUAM
Nerissa Bretania-Shafer

NORTHERN MARIANA ISLANDS
Jackie Quitugua

MARSHALL ISLANDS
Stanley Heine

MICRONESIA
Burnis Danis

PALAU
Raynold Mechol

PUERTO RICO
Carmen Ramos

U.S. VIRGIN ISLANDS
Lauren Larsen

These were the state directors of assessment in June 2009, when NCEO verified the survey.



Executive Summary

This report summarizes the twelfth survey of states by the National Center on Educational Outcomes (NCEO) at the University of Minnesota. Results are presented for all 50 states and 8 of the 11 unique states. The purpose of this report is to provide a snapshot of the new initiatives, trends, accomplishments, and emerging issues during this important period of standards-based education reform as states documented the academic achievement of students with disabilities.

Key findings include:

  • States identified several successes and challenges in implementing large-scale assessments for a wide and varied population of students.
  • Nearly half of the states did not disaggregate assessment results for English language learners with disabilities.
  • Most states monitored students’ use of accommodations on the regular assessment. This was most frequently accomplished by directly observing test administrations.
  • Three in four states examined the validity of the accommodations used in their state by reviewing research literature or completing an analysis of data.
  • Four in five states used or were considering a growth model for reporting or accountability purposes.
  • Almost half of the states had a formal policy on the use of formative assessments by districts and schools.
  • More than a quarter of the states had decided not to develop an alternate assessment based on modified achievement standards (AA-MAS).
  • Many states that had developed or were developing an AA-MAS changed an existing grade-level test rather than designing an entirely new test. The most frequently made changes included simplifying vocabulary, reducing the length of the test, and shortening reading passages.

States widely recognized the benefits of inclusive assessment and accountability systems, and continued to improve assessment design, participation and accommodations policies, monitoring practices, and data reporting. In addition, states identified key areas of need for technical assistance moving forward.



Overview of 2009 Survey of States

This report marks the 12th time over the past 17 years that the National Center on Educational Outcomes (NCEO) has collected information from states about the participation and performance of students with disabilities in assessments during standards-based reform.

As in 2007, state directors of special education and state directors of assessment were asked to provide the name and contact information of the person they thought had the best working knowledge of the state’s thinking, policies, and practices for including students with disabilities in assessment systems and other aspects of educational reform. In many states, more than one contact was identified and the respondents were asked to work as a team to complete the survey.

Responses were gathered online. A hard copy of the survey was provided to a few states that preferred to respond by completing a written questionnaire. Once the responses were compiled, the data were verified with the states. For the fourth survey administration in a row, all 50 regular states responded to the survey. In addition, representatives from 8 of the 11 unique states completed the survey.

Survey responses showed that states were examining a number of issues related to participation and accommodations policies on the regular assessment. States also reported information related to their alternate assessment based on alternate achievement standards (AA-AAS), and on new developments in assessment such as alternate assessments based on modified academic achievement standards (AA-MAS) and growth models. Over the past two years, states have continued to make strong progress, though challenges remain and several new issues have emerged.

 

Eleven Unique States

American Samoa
Bureau of Indian Education
Department of Defense
District of Columbia
Guam
Northern Mariana Islands
Marshall Islands
Micronesia
Palau
Puerto Rico
U.S. Virgin Islands



Successful Practices and Recurring Challenges

For several assessment topics, state respondents were asked to indicate whether they had developed successful practices or faced recurring challenges. The respondents rated each topic as very challenging, challenging, successful, or very successful (see Figure 1 for regular states’ responses). States reported that assessment validity and test design/content were areas of success. Issues related to English language learners (ELLs) with disabilities were considered challenging, and states appeared to have mixed viewpoints on the alternate assessment based on modified achievement standards (AA-MAS). About as many respondents considered the performance of urban schools to be an area of success as considered it to be an area of challenge. Most respondents considered their states’ reporting and monitoring practices to be successful. More states considered assistive technology an area of success than a challenge, and most states described the English language proficiency assessment as successful or very successful.

Figure 1. Successes and Challenges Reported by Regular States

Figure 1 Pie Charts

Unique states reported use of assistive technology for assessment activities, assessment validity, and English language proficiency assessments as particularly challenging. Figures for the unique states are in Appendix A.



Growth Models

Twenty-one states considered developing a growth model for accountability purposes, while 16 considered developing one for reporting purposes (see Figure 2). Thirteen states reported that they were part of the United States Department of Education’s pilot study on growth models and already had a functioning growth model. Most unique states were not considering growth models.

Figure 2. States’ Consideration of Growth Models

Figure 2 Bar Chart 

Note: State respondents were able to select both “Considering for accountability purposes” and “Considering for reporting purposes” as responses.

About half of the states reported that growth models would better measure the performance of schools and students, and that they would provide information useful for instruction (see Figure 3). About one-third of the states indicated that growth models would help schools make adequate yearly progress (AYP). Only one unique state was considering the development of a growth model.

Figure 3. Reasons for Consideration of Growth Models

Figure 3 Bar Chart 

Note: State respondents were able to select multiple responses.

Thirty-eight states tracked assessment results using individual student identifiers. The most frequent reason given was to better understand which students were making gains in order to improve instruction and assessments (see Figure 4). More states in 2009 than in 2007 indicated that individual student performance was tracked to build a foundation for the eventual use of growth models or to support the use of current growth models.

Figure 4. Reasons for Tracking Assessment Performance by Individual Identifying Information in 2009 and 2007

Figure 4 Bar Chart 

Note: State respondents were able to select multiple responses.



Participation and Accommodations

With the inclusion of students with disabilities in assessments and accountability systems, states paid increased attention to the reporting of participation and performance data. States also increasingly used these data to consider ways to improve the performance of low-performing students, including students with disabilities.

Participation Reporting Practices

For the third consecutive survey, states were asked about their participation reporting practices (see Table 1). Survey results showed practices similar to those found in 2007. States reported the participation of students with disabilities in different ways, depending on the nature of their participation. Students counted as nonparticipants for reporting included students who did not participate in any way, students who sat for the assessment but did not complete enough items, students who used accommodations that produced invalid results, and students who tested at a lower grade level than their enrollment. Among the unique states, students most often were counted as nonparticipants if they did not participate in the assessment in any way or if they sat for the assessment but did not complete enough items to score.

Table 1. Reporting Practices for Counting Students as Assessment Participants

Column key: A = Not counted as participants, received no score; B = Counted as participants, received no score, score of zero, or lowest proficiency level; C = Earned score is counted as valid; D = Other, or no answer

Students who did not participate in state assessments in any way (e.g., absent on test day, parent refusal)
  Regular States, 2009: A=42, B=7, C=0, D=1
  Regular States, 2007: A=47, B=2, C=0, D=1
  Unique States, 2009: A=6, B=0, C=1, D=1
  Unique States, 2007: A=1, B=0, C=1, D=1

Students who attended (sat for) the assessment, but did not complete enough items to score
  Regular States, 2009: A=15, B=29, C=4, D=2
  Regular States, 2007: A=16, B=27, C=7, D=0
  Unique States, 2009: A=3, B=3, C=0, D=2
  Unique States, 2007: A=0, B=1, C=1, D=1

Students who used invalid accommodations (e.g., non-standard, modifications)
  Regular States, 2009: A=19, B=17, C=4, D=10
  Regular States, 2007: A=16, B=16, C=2, D=16
  Unique States, 2009: A=1, B=2, C=1, D=4
  Unique States, 2007: A=0, B=1, C=0, D=2

Students who are sitting for their second test administration in one school year
  Regular States, 2009: A=4, B=1, C=7, D=38
  Regular States, 2007: A=4, B=0, C=7, D=39
  Unique States, 2009: A=2, B=0, C=2, D=4
  Unique States, 2007: A=0, B=0, C=0, D=3

Note: 50 regular states responded in both 2007 and 2009. For unique states, 3 responded in 2007 and 8 responded in 2009.

 

Participation Practices Related to Accommodations

Eighty percent of states reported that they monitored accommodations use in 2009. Monitoring was typically achieved by directly observing test administrations; by interviewing students, teachers, and administrators; or by conducting desk audits (see Figure 5). The frequency of audits varied, with most states monitoring on a scheduled basis. Fewer states monitored on either a random basis or a targeted basis. In the unique states, accommodations monitoring most often was completed by directly observing test administrations.

Figure 5. States’ Accommodations Monitoring Activities

Figure 5 Bar Chart 

States communicated information about accommodations to districts and schools via a variety of communication modes (see Figure 6). Most states provided accommodations policy information on a Web site. Almost as many states conducted workshops or sent the information to each district/school in written form. Few states used an online interactive format for the workshops. Many unique states conducted workshops and provided written information to each district or school. Unique states were less likely than regular states to make the information available on a Web site.

Figure 6. Modes of Communicating Accommodations Information

Figure 6 Bar Chart 

Note: State respondents were able to select multiple responses.

Most states examined the validity of certain accommodations for students with disabilities. More than half of the states reviewed research literature and half collected data (see Figure 7). Fewer states conducted experimental studies or completed an internal statistical analysis. Unique states reported that they collected data and convened stakeholders.

Figure 7. Ways that States Examined Validity of Accommodations

Figure 7 Bar Chart 

Note: State respondents were able to select multiple responses.

 

Difficulties Related to Accommodations

More than 80 percent of states identified one or more difficulties in ensuring that accommodations specified on student Individualized Education Programs (IEPs) were carried out on test day. The most frequently reported difficulties included arranging for trained readers, scribes, and interpreters, and ensuring that test administrators and proctors knew which students they were supposed to supervise and which accommodations each student should receive (see Figure 8). Four unique states identified this latter issue as a difficulty. Four unique states also indicated difficulties arranging for special education equipment (e.g., calculator, assistive technology, word processor) and checking that it was operating correctly.

Figure 8. Identified Difficulties in Carrying Out Specified Accommodations on Test Day

Figure 8 Bar Chart 

Note: State respondents were able to select multiple responses.



Alternate Assessments Based on Modified Academic Achievement Standards (AA-MAS)

States have the option of developing alternate assessments based on modified academic achievement standards (AA-MAS). AA-MAS regulations were finalized in April 2007. Since then, some states refined their reasons for moving forward with this assessment option, while other states made efforts to improve the assessments they already offered.

 

State AA-MAS Practices

In 2007, five states already had an AA-MAS in place, 33 states were considering using an existing grade-level assessment to create an AA-MAS, and another 25 states were considering developing a new assessment (there was overlap in the states selecting these responses). In 2009 (see Figure 9), eight states had already administered an AA-MAS, one planned to give it for the first time in 2008-09, and fifteen were in the process of developing one. Fourteen states had decided not to develop an AA-MAS. Web links to information on the tests that were offered by states in 2008-09 are included in Appendix B.

Figure 9. Stage of AA-MAS Development

Figure 9 Pie Chart 

States that had an AA-MAS typically tested students in reading, mathematics, and science (see Table 2). For reading and mathematics, most of these states had an AA-MAS for grades 3-8 as well as at high school. Many of these states also had an AA-MAS for science. One state did not have an AA-MAS at grade 3, and three did not offer an AA-MAS in high school. Two states (Connecticut and Tennessee) were piloting an AA-MAS in 2009.

Table 2. Grade Levels and Content Areas Assessed in States with Active AA-MAS

Reading
  California – 3-8; Indiana – 3-8; Kansas – 3-8, HS; Louisiana – 4-10; Maryland – 6-8, HS; North Carolina – 3-8; North Dakota – 3-8, 11; Oklahoma – 3-8, HS; Texas – 3-11

Mathematics
  California – 3-8; Indiana – 3-8; Kansas – 3-8, HS; Louisiana – 4-10; Maryland – 6-8, HS; North Carolina – 3-8; North Dakota – 3-8, 11; Oklahoma – 3-8, HS; Texas – 3-11

Science
  California – 5; Kansas – 4, 7, 11; Louisiana – 4, 8, 11; Maryland – HS; North Carolina – 5, 8; North Dakota – 4, 8, 11; Oklahoma – 5, 8, HS; Texas – 5, 8, 10, 11

Other
  Social Studies: Kansas – 6, 8, 12; Louisiana – 4, 8, 11; Texas – 8, 10, 11
  Writing: Kansas – 5, 8, 11; Louisiana – 4, 8, 11; Texas – 4, 7

Note: In addition to the states listed in the table, Connecticut and Tennessee were piloting an AA-MAS in reading and math in 2009. Tennessee also was piloting an AA-MAS in science.

States that were developing or had developed their AA-MAS were three times more likely to modify an existing grade-level test than to design an entirely new test (see Figure 10).

Figure 10. Process Used to Develop an AA-MAS

Figure 10 Pie Chart 

In both 2007 and 2009, states indicated how they planned to modify existing tests. In 2009, states most frequently reported that they planned to simplify the vocabulary, reduce the number of items, use shortened or fewer reading passages, segment reading passages, or provide fewer answer choices (see Figure 11). Six states planned to use only multiple-choice items. These approaches were similar to those listed in 2007, except that a smaller percentage of states indicated that they planned to use non-traditional items or formats. Note that there were fewer respondents to this question in 2009 because many states had decided not to develop an AA-MAS by the time the 2009 survey was administered.

Figure 11. Changes To Existing Tests in AA-MAS Development

Figure 11 Bar Chart 

Note: State respondents were able to select multiple responses.
* The 2007 state survey did not ask about segmented reading passages.

States used a variety of strategies and methods to determine content targets or blueprints for their AA-MAS (see Figure 12). The most frequent approach was to keep the test specifications the same for the AA-MAS and the regular assessment. Eleven states used stakeholder panels. Few states reported that a consultant or test company provided content targets.

Figure 12. Determinations of Content Targets or Blueprints for AA-MAS

Figure 12 Bar Chart 

Note: State respondents were able to select multiple responses.



Assessment Reporting Practices

States use a variety of practices to report assessment results for students with disabilities and English language learners (ELLs) with disabilities.

 

Reporting Practices for Students by Disability Category

Fewer than half of the states disaggregated results by disability category (primary disability) in 2009 (see Figure 13), fewer than in either 2007 or 2005. This decrease was due in part to an increase in the number of states that disaggregated data only when requested in 2009. There was a major increase between 2005 and 2007 in the number of states that did not disaggregate results by primary disability; results from 2009 were very similar to those from 2007. Few unique states disaggregated by disability category.

Figure 13. Number of States Reporting Assessment Results by Disability Category in 2005, 2007, and 2009

Figure 13 Bar Chart 

Note: All states responded in 2007 and 2005. There was one state that did not respond to this question in the 2009 survey.

States disaggregated data by disability category for a variety of reasons, including to examine trends, to respond to requests, and for reporting purposes. The most frequently given reason was to examine trends (see Figure 14).

Figure 14. Reasons for Reporting Assessment Results by Disability Category

Figure 14 Bar Chart 

Note: State respondents were able to select multiple responses.

 

Reporting Practices for English Language Learners (ELLs) with Disabilities

Exactly half of the regular states either disaggregated assessment results for ELLs with disabilities in 2009, or would do so by special request (see Figure 15). States disaggregated the results to examine trends, respond to requests, or for reporting purposes.

Figure 15. Reporting Assessment Results for English Language Learners with Disabilities

Figure 15 Pie Chart 



Alternate Assessments Based on Alternate Achievement Standards (AA-AAS)

States continued to administer alternate assessments based on alternate achievement standards (AA-AAS) for students with the most significant cognitive disabilities. In 2009, all states aligned the AA-AAS with grade-level or with extended (or expanded) academic content standards. Seven regular states (Hawaii, Idaho, Mississippi, Nebraska, New Hampshire, Nevada, Utah) and three unique states (Guam, Puerto Rico, Marshall Islands) were in the process of revising their AA-AAS.

 

AA-AAS Test Formats

Most states used either a portfolio or a standardized set of performance tasks to assess students with the most significant cognitive disabilities in 2009 (see Table 3). Even for states using these formats, practices varied widely and defied easy categorization. Some states that administered performance tasks did not require teachers to submit evidence of student performance, while others did. Some states that reported using a portfolio or body of evidence approach for their AA-AAS also required the student to complete standardized performance tasks.

Table 3. AA-AAS Test Formats

Format                                      Regular States    Unique States
Portfolio or Body of Evidence                     20a               5c
Standardized Set of Performance Tasks             18b               5
Multiple Choice Test                               8                0
IEP Analysis                                       0                2
Other                                              2                0
Currently in revision                              7                3

a Of these 20 states, 8 used a standardized set of performance tasks.
b Of these 18 states, 8 required the submission of evidence.
c Of these 5 unique states, 4 used a standardized set of performance tasks.

 

AA-AAS Content Alignment

In 2009, all states reported that their AA-AAS was aligned either to grade-level or to extended academic content standards, representing a complete shift from functional to academic content coverage. No states reported aligning their AA-AAS to functional skills or allowing IEP teams to determine AA-AAS content (see Figure 16).

Figure 16. AA-AAS Content Alignment

Figure 16 Bar Chart 

 

AA-AAS Scoring Methods

Fewer states used a rubric to measure achievement on the AA-AAS in 2009 than in previous years, though a rubric was still the most common approach (see Figure 17). Many states that reported using another method also used a rubric. Five unique states reported using a rubric.

Figure 17. Scoring Methods

Figure 17 Bar Chart 

Of the states that used a rubric to score the AA-AAS, significantly fewer assessed non-academic skills such as social relationships, self-determination, or number/variety of settings than in 2005. The most common outcomes measured by rubrics were level of assistance, skill/competence, and alignment with academic content standards (see Figure 18). Some states that did not report scoring skill/competence on their rubric did score for “accuracy.”

Figure 18. Outcomes Measured by Rubrics

Figure 18 Bar Chart 

Twenty-one states used a test company contractor to score the AA-AAS, though none of the unique states used this approach. In a number of states, the student’s special education teacher, teachers from other districts, or a member of the student’s IEP team scored the assessment (see Figure 19).

Figure 19. Who Scored the AA-AAS?

Figure 19 Bar Chart 

 

Methods for Determining Achievement Levels

In 2009, fifteen states used the body of work approach to set cut points for achievement levels, though states also reported using a variety of other methods (see Figure 20). The other method category included methods such as student profile and combinations of two or more approaches. In 2009, no state used judgmental policy capturing.

Figure 20. Methods for Determining Achievement Levels

Figure 20 Bar Chart 



Current and Emerging Issues

States made many changes to their assessment policies and practices in response to recent changes in regulations and guidance for the Elementary and Secondary Education Act (ESEA) and the Individuals with Disabilities Education Act (IDEA) as well as to federal peer-review guidance. Several issues emerged as states included students with disabilities in their assessment and accountability systems, including computerized testing, formative assessment, and contextual factors related to assessment data trends.

 

Computerized Testing

About one-third of regular states offered their regular state assessments on computer-based platforms for science, mathematics, or reading (see Figure 21). Some states had a computer-based platform for their AA-MAS or AA-AAS. No state with an alternate assessment based on grade-level achievement standards (AA-GLAS) offered a computer version of that test.

Figure 21. Content Areas and Specific Assessments Offered on Computer-based Platforms

Figure 21 Bar Chart 

 

Formative Assessment

Nearly half of the states had a policy on the use of formative assessments by districts (see Figure 22). Six states were considering developing a formative assessment policy.

Figure 22. State Policies and Viewpoints on Formative Assessment

Figure 22 Pie Chart 

 

Contextual Factors Related to Assessment Data Trends

States commented on contextual factors related to recent assessment data trends. Comments focused on public reporting, federal reporting, adequate yearly progress (AYP), participation, and performance. Examples of the range of responses under each category are presented here. A full list of comments (without state names) is provided in Appendix C.

A total of 14 states provided commentary on public reporting. These comments were most often related to changes in curriculum or standards; changes in assessment or achievement levels; or changes in reporting methods, calculations, or procedures (including minimum “n” size).

  • Revised data reporting on state Web site to enhance the readability of assessment data.
  • Recently began reporting on the number of students with IEPs who take the general assessment with approved accommodations.
  • New reporting on growth model approved by the U.S. Department of Education. There was increased emphasis and training around the use of accommodations for assessment purposes.

A total of 13 states provided commentary on federal accountability reports. These comments were most often related to changes in reporting methods; changes in calculations or procedures (including minimum “n” size); or changes in targets, annual measurable objectives, or accountability workbooks.

  • As a result of a U.S. Department of Education Title IA monitoring visit, the definition of participant was changed to be defined solely as the recipient of a valid score.
  • Prior to 2008, AYP data were disaggregated by subgroup by grade level. In 2008, the reports were displayed by subgroup by range of scores.
  • Received permission to use spring 2009 testing only for safe harbor, effectively getting one free year.

Twenty-two states provided commentary on AYP. These comments were most often related to changes in assessment or achievement levels, changes in reporting methods, or issues related to calculations or procedures.

  • Given the required increase in proficiency targets, as outlined in state’s NCLB Accountability Workbook, a decreased number of schools were identified for this year as having met the NCLB AYP objectives for the subgroup of students with disabilities.
  • Prior to 2007-08 assessment results, limited English proficient (LEP) and students with disabilities (SWD) populations had a minimum “n” count of 50 for the purposes of determining Adequate Yearly Progress (AYP) at the school level.

A total of 16 states provided commentary on assessment participation. These comments were most often related to changes in reporting methods, calculations or procedures (including minimum “n” size), or success in meeting targets.

  • Students with disabilities who took the AA-MAS were counted as participating; these same students would not have been counted as participating in 2007 if they took a regular assessment with a modification.
  • Divisions that exceeded the 1% cap without providing acceptable rationales (i.e., small “n,” demographic anomalies, local military bases) were required to reassign proficient scores that exceeded the cap to failing scores. The reassignment of scores resulted in a reduction in participation of students not appropriate for the assessment.
  • Participation was down because we did not allow partial assessments.

A total of 21 states provided commentary on assessment performance. These comments most often were related to changes in assessment or achievement levels, or other assessment issues or topics.

  • Scores for students with disabilities have shown a steady increase each year; however, the gap remains between general education and special education student scores.
  • New performance levels were set for the regular and alternate assessment.
  • State personnel used the data from this indicator as a priority for the Continuous Improvement/Focused Monitoring System during the 2006-2007 school year.



Preferred Forms of Technical Assistance

The forms of technical assistance that states preferred in 2009 were similar to those in past years (see Figure 23). These forms included descriptions of assessments in other states, “how to” documents, and conference calls on hot topics. There was increased interest in descriptions of assessments in other states, individual consultation in states, and awareness materials.

Figure 23. Technical Assistance Preferences of States

Figure 23 Bar Chart 



Appendix A

Successes and Challenges Reported by Unique States

Unique states also provided commentary on successes and challenges.

Included in this appendix are depictions of the issues that were most prevalent in unique states. The most frequently mentioned issues sometimes were different from those frequently cited as important to regular states. For that reason, not all of the topics addressed for regular states in Figure 1 are shown here. As shown below, instructional accommodations, the English language proficiency assessment, the use of assistive technology, and assessment validity were most often identified as areas of success by the unique states. Responses were mixed for reporting and monitoring, and test design and content and instructional accommodations were most frequently identified as challenges.

Appendix A Pie Charts



Appendix B

If a state provided a link to additional information about its AA-AAS or AA-MAS, the Web address is listed below. Some states did not provide Web addresses.

State Technical Manuals for the AA-AAS

Arizona: http://www.azed.gov

Alaska: http://www.eed.state.ak.us/tls/assessment/techreports.html (scroll down to Alternate Assessment)

California: http://www.cde.ca.gov/ta/tg/sr/technicalrpts.asp (reports are listed under CAPA)

Colorado: http://www.cde.state.co.us/cdeassess/documents/reports/2008/2008_CSAPATech_Report.pdf

Connecticut: http://www.csde.state.ct.us/public/cedar/assessment/checklist/resources/cmt_capt_skills_checklist_technical%20Manual_10-19-06.pdf

Delaware: http://www.dapaonline.org

Florida: http://www.fldoe.org/asp/pdf/FloridaAlternateTechnicalReport.pdf

Idaho: Reading, Math, Language: http://itcnew.idahotc.com/DNN/LinkClick.aspx?fileticket=JWFz4h1M%2bEA%3d&tabid=249&mid=2934; Science: http://itcnew.idahotc.com/DNN/LinkClick.aspx?fileticket=TDGpkEXbzkU%3d&tabid=249&mid=2767

Indiana: https://ican.doe.state.in.us/beta/tm.htm

Massachusetts: The alternate assessment technical manual is integrated with the standard assessment technical report, available at http://www.doe.mass.edu/mcas/tech/?section=techreports. The MCAS-Alt Educator’s Manual is posted at www.doe.mass.edu/mcas/alt/resources.html

Maine: http://www.maine.gov/education/mea/techmanual.html

Michigan: http://www.michigan.gov/mde/0,1607,7-140-22709_28463-166642--,00.html

Minnesota: http://education.state.mn.us/MDE/Accountability_Programs/Assessment_and_Testing/Assessments/MTAS/MTAS_Technical_Reports/index.html

Missouri: http://www.dese.mo.gov/divimprove/assess/tech/map-a_tech_manual_2007.pdf; http://www.dese.mo.gov/divimprove/assess/tech/documents/MOAltStandardSettingReportRevisedfromTAC.pdf; http://www.dese.mo.gov/divimprove/assess/tech/linkreport.pdf

North Carolina: http://www.dpi.state.nc.us/accountability/testing/technicalnotes

Nebraska: http://www.nde.state.ne.us/

New Hampshire: http://www.ed.state.nh.us/education/doe/organization/curriculum/NHEIAP%20Alt%20Assessment/2007-2008%20Alt/NH-AltMaterialsandInformation.htm (the document exists in both Word and PDF formats)

Ohio: http://www.education.ohio.gov (Testing tab, then Alternate Assessment link)

Oklahoma: http://www.sde.state.ok.us (special education, assessment, assessing students with disabilities manual)

Oregon: http://www.ode.state.or.us/search/page/?=1560

Pennsylvania: http://www.paassessment.org

South Carolina: http://www.ed.sc.gov/agency/Accountability/Assessment/old/assessment/programs/swd/SouthCarolinaAlternateAssessmentSC-Alt.html

South Dakota: http://doe.sd.gov/oess/specialed/news/historicalalternate.asp

Tennessee: http://state.tn.us/education/speced/assessment.shtml

Texas: http://www.tea.state.tx.us/index3.aspx?id=3638&menu_id3=793

Utah: http://www.schools.utah.gov/sars/servicesinfo/pdfs/uaamanual.pdf

Wisconsin: http://www.dpi.wi.gov/oea/publications.html

West Virginia (APTA): http://wvde.state.wv.us/oaa/pdf/WVAPTA_Spring08_Final_12162008.pdf

 

State Technical Manuals for the AA-MAS

California: http://www.cde.ca.gov/ta/tg/sr/technicalrpts.asp (reports are listed under CMA)

Louisiana: http://www.louisianaschools.net/lde/uploads/11109.pdf

North Carolina: The NCEXTEND2 technical manual is located at http://www.dpi.state.nc.us/accountability/testing/technicalnotes

Ohio: Alternate Assessment/AA-MAS links under the Testing section at http://education.ohio.gov

Oklahoma: http://www.sde.state.ok.us (accountability and assessment)

Tennessee: The TCAP-MAAS will be piloted in spring 2009, with the first statewide administration in spring 2010. A technical manual will be developed after the pilot is completed and the pilot data are analyzed. Information on the TCAP-MAAS is available at http://state.tn.us/education/speced/assessment.shtml (PowerPoint presentation); http://tennessee.gov/education/speced/announcements.shtml (PowerPoint presentation); and http://tennessee.gov/education/assessment/doc/K_8ppt.pdf (starting at page 36)

 

State Performance Level Descriptors for the AA-MAS

Kansas: http://www.ksde.org/Default.aspx?tabid=420

Louisiana: http://www.louisianaschools.net/lde/saa/2382.html

North Carolina: http://www.ncpublicschools.org/accountability/policies/tswd/ncextend2

Oklahoma: http://www.sde.state.ok.us (accountability and assessment)



Appendix C

Contextual Comments Related to Assessment Data Trends

This appendix is a compilation of the respondents’ comments on reporting, adequate yearly progress (AYP), participation, performance, and Federal accountability reports. Each bulleted item is a comment made by a state.

 

Public Reporting

Changes in Assessment or Achievement Levels

  • Due to Federal Peer Review findings, the state has changed its alternate assessment based on alternate achievement standards 3 times in 3 years. This has resulted in huge fluctuations in scores for SWD and has also had an impact on students who take the alternate rather than the regular assessment.
  • A new alternate assessment was used in spring 2008. LEP students took our general assessments with accommodations rather than the previously used alternate assessment, IMAGE, as their content-based assessment.
  • State changed from fall testing to spring testing last year. State testing in the fall occurred for the last time in Fall 2008, and Spring 2009 was the first time that state testing occurred in the spring. Science and social studies were added to the alternate assessment in Spring 2009.

Changes in Curriculum or Standards

  • New performance standards were set in 2008 for grades 5-8 math and language arts.
  • Implementing a new curriculum. During the 2007-2008 school year new assessments were implemented in grades 3, 4, 5, and 8 in the content area of mathematics. New achievement standards were set and the results are not comparable to previous years.
  • Changes in curriculum and assessments have affected trends.

Changes in Reporting Methods, Calculations or Procedures (Including Minimum “n” Size)

  • New reporting on the Growth Model which was approved by the USDoE. There was increased emphasis and training around the use of accommodations for assessment purposes.
  • Revise data reporting on [state’s reporting site] to enhance the readability of assessment data.
  • Recently (08-09) began reporting on the number of students with IEPs who take the general assessment with approved accommodations.
  • Prior to 2008, AYP data were disaggregated by subgroup by grade level. In 2008, the reports were displayed by subgroup by range of scores.

Changes in Targets, Annual Measurable Objectives, or Accountability Workbooks

  • State took action in spring 2009 to suspend state accountability for an interim period (2009-2011). A new state accountability system is under development and will begin in 2012. State will continue to report publicly all state-required tests. NCLB federal reporting remains unchanged in 2009.

Other

  • The Ministry of Education does not report students’ data to the public. Results are disseminated to principals, area specialists, and other service providers for professional development, reporting, and other purposes as appropriate.
  • In the 2007-08 school year, there were no substantial changes in public reporting from previous reporting years.

 

Federal Accountability Reports

Changes in Assessment or Achievement Levels

  • Aside from the alternate assessment addition and trends, most factors have remained constant (i.e., the same tests and definitions were used). Three assessments were piloted in 2006. Science was added as a test in 2008.
  • A new alternate assessment was used in spring 2008. LEP students took our general assessments with accommodations rather than the previously used alternate assessment as their content-based assessment.

Changes in Curriculum or Standards

  • Based on the new Reading proficiency standards, fewer state schools met AYP.

Changes in Reporting Methods, Calculations or Procedures (Including Minimum “n” Size)

  • AYP calculations for 2007-2008 reflected one change that had an impact on participation and performance calculations. In past years, participation was based on whether or not a student attempted to access the assessment, whether or not a valid score resulted from their attempt. For example, students who were coded as “extreme frustration” were counted as participants because they participated, to the best of their ability, in the assessment. These students were also included in the performance calculations as non-proficient records. However, as a result of a US Department of Education Title IA monitoring visit, the definition of participant was changed to be defined solely as the recipient of a valid score. As a result, we saw slight declines in participation rates from prior years. But, students with invalid scores are no longer included in performance calculations, so the performance data is not comparable to prior years.
  • Prior to 2008, AYP data were disaggregated by subgroup by grade level. In 2008, the reports were displayed by subgroup by range of scores.
  • If this refers to data accessible through EDEN, this is experiencing growing pains as we are working out the bugs necessary to utilize our statewide student information system to capture as much of the data as possible.

Changes in Targets, Annual Measurable Objectives or Accountability Workbooks

  • NCLB Accountability Workbook. Consolidated State Performance Report parts 1 and 2.
  • Target increased.
  • Our state received permission to use spring 2009 testing only for safe harbor, effectively getting one free year. 

Other

  • Our state reports through APR based on SPP.
  • Our state displays results of the states and local results for indicators in the SPP. The SPP information can be found on a Web site showing data that is added to result tables and graphs for each of the indicators to be reported.
  • Located on Education Web site.
  • No changes in Federal Accountability reports.

 

APR Indicator 3—AYP

Changes in Assessment or Achievement Levels

  • The State Assessment Program has undergone significant changes since 2004-2005. The 2007-2008 year was the first year for administration of an alternate assessment based on modified academic achievement standards, and an alternate assessment based on alternate academic achievement standards. There was some natural confusion because of the change in assessment procedures. Some of the data may reflect that confusion.
  • In 2008 AYP included the Modified Assessment for grades 3-5.
  • The state is in the process of revising the 1% alternate assessment.
  • An alternate assessment has been added in science. A task force has been formed to consider putting the alternate assessment on line. The state is over the 1% alternate assessment cap; we have observed a steady increase in the number of students taking the alternate assessment. A study is underway to determine if a 2% assessment is needed.
  • A new alternate assessment was used in spring 2008.
  • New assessment design was introduced in Spring 2009 that utilized matrices that differed by grade cluster. In the past, the assessment gave scores based on the grade level of the performance and the assessment did not present itself differently for differentially-aged students. New cut scores are still being determined for Spring 2009. However, the two structures are common at the indicator level and show strong correlation patterns despite a revised representation of performance scoring.

Changes in Curriculum or Standards

  • 2007 standards & test changes very negatively impacted AYP.
  • In the 2007-08 school year, the AMO proficiency targets were adjusted for all subgroups and the school as a whole for Reading in grades 3-8 to reflect a new test edition with higher standards (cut scores) for student proficiency.
  • The state is implementing a new curriculum. During the 2007-2008 school year new assessments were implemented in grades 3, 4, 5, and 8 in the content area of mathematics. New achievement standards were set and the results are not comparable to previous years.

Changes in Reporting Methods, Calculations or Procedures (Including Minimum “n” Size)

  • Utilizing the scores of some of the special education students who have exited special education within 2 years has improved the AYP for some schools.
  • In 2008, the state changed the minimum group size for students with disabilities from 45 to 30. This resulted in a greater number of buildings and districts having an SWD subgroup that was evaluated for AYP.
  • Added flexibility to count sped students back 2 years after exiting.
  • Annual Measurable Objective in Reading increased from 67.5% to 74%, and in Mathematics from 47.5% to 58% (from 2006-07 to 2007-08). The cell size for students with disabilities also changed from 50 to 40 during the same time period.
  • Prior to 2007-08 assessment results, Limited English proficient (LEP) and students with disabilities (SWD) populations had a minimum N count of 50 for the purposes of determining Adequate Yearly Progress (AYP) at the school level. Performance for these students was aggregated at the district and/or state level for AYP determinations where the count was considered to be statistically reliable. The minimum N count for LEP and SWD was changed to 40 to match all other subgroup determinations for statistical significance in the assessment of the 2007-08 school year.
  • Based on amendments submitted May 8, 2007 to our consolidated state application accountability workbook, our state employs a proficiency index to calculate AYP for the following grade bands: 3-5, 6-8, and 10. During FFY 2007 the minimum “n” size requirements for the grade bands was changed from 40 to 30. Due to the changes in the “n” size calculation (described above), comparing FFY 2007 data to data from FFY 2005 and FFY 2006 to determine progress or slippage for Indicator 3A is not valid.

Changes in Targets, Annual Measurable Objectives, or Accountability Workbooks

  • Given the required increase in proficiency targets, as outlined in state’s NCLB Accountability Workbook, a decreased number of schools were identified for this year as having met the NCLB AYP objectives for the subgroup of students with disabilities. For example, the “Target for Percent Proficient” increased from 68 to 79 percent in reading and 74 to 82 percent in mathematics, while another target increased from 72 to 81 percent in reading and 69 to 80 percent in mathematics. These increases of eight to 11 percentage points in the amount of students who must reach proficiency are the direct reason that fewer districts met NCLB AYP objectives for students with disabilities. It is expected for the 2008-09 school year, more schools will make AYP, as there is no increase in test proficiency requirements. In addition, Department personnel are confident that with the increased accountability to all schools via a legislative mandate that has resulted in increased monitoring of student progress through data, positive trends in academic performance for all subgroups will be seen in future years. This action has strongly placed an urgency to improve outcomes of all students. Similarly, the Department has implemented and trained school personnel in the area of Scientific Research-Based Interventions (SRBI) as a school reform/student improvement framework similar to Response to Intervention. In our state, SRBI are for all districts to implement to improve student outcomes, not solely for use to identify students as learning disabled.
  • This target has been changed. The state only has a target for % of students making AYP, not districts. That is the target we used for the subpopulation of students with disabilities. This is the first year we have used our baseline data to determine the % of districts reaching AYP, not the % of students.
  • Target increased.
  • The NCLB targets for AYP at the district and school levels increased in 2007-2008 compared to 2006-2007 and prior years.

Success in Meeting Targets

  • The state has met the target for local school systems making AYP for the past two years. The number of local school systems has remained consistent for the past two years at 38%.

Other

  • Not applicable for state. Under our Compact of Free Association with the U.S., we are not bound by NCLB.
  • Our state uses grade spans (grades 3-5, 5-8 and 9-12) for calculating AYP for schools. As elementary schools feed into middle schools/junior highs and secondary schools, the number of buildings decreases for which AYP is reported. This is evident in the APR, but is not reflected in Table 6 since the units of analysis are different in the two documents.

 

APR Indicator 3—Participation

Changes in Assessment or Achievement Levels

  • Students with disabilities who took our modified assessment were counted as participating; these same students would not have been counted as participating in 2007 if they took a regular assessment with a modification.
  • Our state’s alternate assessment 1 Participation Criteria was modified in 2009.
  • Depending on when the last survey was taken, reporting for participation for Grades 3-8 and 11 may have changed. In 2006-2007, participation reporting included Grades 3-8 and 11, whereas in 2005-2006, data were reported for participation on Grades 4, 8, and 11.

Changes in Reporting Methods, Calculations or Procedures (Including Minimum “n” Size)

  • Our state strongly disagrees with the conclusion of the data reported above for overall participation rate. While the data are both valid and reliable, the calculation used is not appropriate. Changes were made to Federal Table #6 – Participation and Performance of Students with Disabilities on State Assessments, which resulted in moving the field for the “subset [of students who took a regular assessment] whose assessment results were invalid” from pages 2 and 11 to pages 4 and 13 respectively for math and reading assessments, thereby forcing the calculation for SPP indicator 3B to include students with invalid scores as nonparticipants. This is a change from how the data table was designed previously (FFY05 and FFY06) and is in direct opposition to the State’s Approved NCLB Accountability Workbook. This change in data table layout and expectation by OSEP regarding the calculation of participation rate is the only reason the data reported above suggest that our state failed to meet the participation rate targets for FFY 2007 for reading and math and is directly responsible for the appearance that the participation rate for reading and math did not show progress.
  • Local school divisions were held to the 1% cap for student participation in our alternate assessment based on alternate achievement standards. Divisions that exceeded the 1% cap without providing acceptable rationales (e.g., small “n,” demographic anomalies, local military bases) were required to reassign proficient scores that exceeded the cap to failing scores. The reassignment of scores resulted in a reduction in participation by students for whom the assessment was not appropriate.
  • Participation was down because we did not allow partial assessments.
  • Grades 3-8 have been unchanged since Oct. 2005; for grade 11, a slight change in computation took effect in Oct. 2007.
  • Our state uses the December 1 child count in determining proficiency rates, rather than the number of students enrolled in the assessment process. It is believed that using the latter count would be less likely to reveal those students who appear in the child count but are not enrolled in the assessment. This will change when the next APR is submitted, reflecting the change in the Indicator Measurement Table recently released by OSEP.
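The dispute described in the first comment above turns on where students whose regular-assessment results were invalidated fall in the participation calculation. A minimal sketch, using hypothetical counts rather than any state’s actual data, shows how moving those students from the participant side to the nonparticipant side lowers the reported rate:

    # Minimal sketch with hypothetical counts; not any state's actual data.
    enrolled = 1000   # students with IEPs enrolled during the assessment window
    regular = 940     # took the regular assessment (including invalidated attempts)
    alternate = 50    # took an alternate assessment
    invalid = 15      # regular attempts whose results were invalidated

    # Earlier table layout (FFY05/FFY06): invalidated attempts still count as participants.
    rate_old = (regular + alternate) / enrolled            # 0.990

    # Revised layout described in the comment: invalidated attempts are excluded,
    # which effectively treats those students as nonparticipants.
    rate_new = (regular - invalid + alternate) / enrolled  # 0.975

    print(f"old layout: {rate_old:.1%}, new layout: {rate_new:.1%}")

A shift of one to two percentage points of this kind can move a state from above to below a 95 or 97 percent participation target without any change in actual test-taking.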

Success in Meeting Targets

  • During FFY 2007, the State’s participation rate (100%) indicates that the State met its participation goal of 95%.
  • The participation rate for students with disabilities in our state has remained above the 95% expected level.
  • Compared to previous APR reports, there has been a slight increase of 5% in IEP students participating in statewide assessments.
  • Our state continues to exceed the 95% standard for all grades tested. In the past year, our state noticed a trend in the number of students who were non-participants. Therefore, the policy and procedures for excusing a student are being reviewed.
  • The Department contends that the State met the target for the Reading and Math Assessments; it did not meet the participation target of 97 percent for the high school Reading or Math Assessments, but made significant progress in both areas over last year.

Other

  • In the 2007-08 school year, there were no substantial changes in determining student participation from previous reporting years.
  • The 2007-2008 Annual Performance Report (APR) is available in full on the Special Education Web site.
  • The 2007 assessment participation rate for our students with disabilities has decreased slightly since FFY 2004. However, a drop of 1-5 percentage points may represent fewer than 40 students statewide.

 

APR Indicator 3—Performance

Changes in Assessment or Achievement Levels

  • A new two-part test format was introduced for the general assessment. In March, students answered open-ended, problem-solving questions. In May, they responded to multiple-choice, scantron-style questions. The Spring 2009 test included embedded trial questions that were considered to be too difficult for the general population at some grade levels.
  • New performance levels were set on the State’s Alternate Performance Assessment (one percent assessment).
  • Standards were revised, the test changed, and there was a new test vendor for the Spring 2007 Standard Test; as a result, scores were much lower.
  • State Performance Plan targets for Indicator 3 are the same as the AMOs. A new alternate assessment (AA-AAS) was first administered in 2007-08, and new achievement cut scores were set for 2008-09.
  • Our state developed access points for students with significant cognitive disabilities; in 2007-08, a new alternate assessment against alternate achievement standards was implemented.
  • Depending on when the last survey was taken, reporting of performance for Grades 3-8 and 11 may have changed. In 2006-2007, performance reporting included Reading and Math for Grades 3-8 and 11, whereas in 2005-2006, data were reported for Grades 4, 8, and 11.
  • The first operational administration of new assessments for 3-8 math and language arts and Algebra I and English II occurred in 2007-2008, followed by a standard setting for new academic achievement standards.
  • A new alternate assessment was used in spring 2008. LEP students took our general assessments with accommodations rather than the previously used alternate assessment, IMAGE, as their content-based assessment.

Changes in Curriculum or Standards

  • Substantial increases in Reading proficiency standards (Grades 3-8) were approved by the State Board of Education (SBE).
  • Our state is implementing a new curriculum. During the 2007-2008 school year, new assessments were implemented in grades 3, 4, 5, and 8 in the content area of mathematics. New achievement standards were set, and the results are not comparable to previous years.
  • Grades 3-8 have been unchanged since Oct. 2005; new standards at grade 11 took effect in Oct. 2007.

Changes in Targets, Annual Measurable Objectives or Accountability Workbooks

  • Math and reading AMOs increased, and reading standards changed.
  • The AMOs increased in Grades 4, 6, 8, and 11 for reading and in Grades 4, 6, and 11 for math.
  • Based on amendments submitted May 8, 2007 to our State’s Consolidated State Application Accountability Workbook, our State employs a proficiency index to calculate AYP for grade bands 3-5, 6-8, and 10. Data from FFY 2006 serve as a new baseline. During FFY 2007, the minimum “n” size requirement for the grade bands was changed from 40 to 30 (a sketch of how such a minimum “n” rule affects which schools are included follows this list). Also during FFY 2007, the uniform bar for meeting AYP increased in reading and mathematics for all grade bands.
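The minimum “n” change noted in the last comment determines which schools’ students-with-disabilities subgroups are counted toward AYP at all. A minimal sketch, using hypothetical subgroup sizes, shows how lowering the threshold from 40 to 30 brings more schools into the calculation:

    # Hypothetical students-with-disabilities subgroup sizes at six schools.
    subgroup_sizes = [25, 32, 38, 41, 55, 12]

    def schools_accountable(sizes, min_n):
        """Count schools whose subgroup is large enough to be included in AYP."""
        return sum(1 for n in sizes if n >= min_n)

    print(schools_accountable(subgroup_sizes, min_n=40))  # 2 schools under the old rule
    print(schools_accountable(subgroup_sizes, min_n=30))  # 4 schools under the new rule

Lowering the threshold thus exposes more schools to subgroup accountability at the same time the uniform bar for proficiency rises.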

Other

  • The scores for SWD have shown a steady increase each year; however, a gap remains between general education and special education student scores.
  • Compared to the last APR report, performance rates have increased: reading increased by 14% and math by 7%.
  • Our state continues not to meet the target for Indicator 3C. Our state’s students with IEPs continue to show a greater rate of growth in all assessed grades in reading and in mathematics, when compared with the rate of growth in the performance of regular education students across many of the assessed grade levels.
  • The 2007-2008 Annual Performance Report (APR) is available in full on the Special Education Web site.
  • As of 2007, our assessment proficiency rates for students have increased over time. All rates have increased by at least 15 percentage points from FFY 2005-2006. Local educators, administrators, and boards of education have made a concerted effort to improve the educational process for all students by implementing scientific, research-based instructional practices across all grade levels. There is a strong emphasis on providing appropriate, research-based interventions to students through such initiatives as Professional Learning Communities, Reading First Initiatives, and Response to Intervention Initiatives. The State Legislature passed a bill that funds Instructional Facilitators for every school in the state. These facilitators help guide the implementation of research-based instructional strategies and programs with fidelity through coaching, mentoring, and training. The State Department of Education provides ongoing training opportunities for the Instructional Facilitators. Additionally, Department of Education Special Programs Unit staff used the data from this indicator as a priority for the Continuous Improvement/Focused Monitoring System during the 2006-2007 school year. Outcome data were tied to the related requirements of state and district-wide assessment; §§300.320 through 300.324 IEP provisions; §300.101(a) FAPE; and §300.207 highly qualified staff. Findings of noncompliance are reported in Indicator #15. Districts were required to develop Corrective Action Plans for areas of noncompliance. The State Department of Education looked for patterns of noncompliance in the priority areas in order to address systemic issues during Regional Trainings and the Leadership Symposium, as well as by providing on-site technical assistance from State Department of Education staff or our partners.
  • Wide variation in results for individual students from one year to the next.