Alternate Assessment Forum: Connecting into a Whole

Salt Lake City, Utah
June 23-24, 2000

Proceedings Report

Prepared by Rachel Quenemoen, Carol Massanari, Sandra Thompson, and Martha Thurlow


This document has been archived by NCEO because some of the information it contains is out of date.


Any or all portions of this document may be reproduced and distributed without prior permission, provided the source is cited as:

Quenemoen, R., Massanari, C., Thompson, S., & Thurlow, M. (2000). Alternate assessment forum: Connecting into a whole. Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved [today's date], from the World Wide Web: http://cehd.umn.edu/NCEO/OnlinePubs/Forum2000/ForumReport2000.htm


Table of Contents

Background

Plenary Session: Getting Off to a Good Start

Breakout Sessions: State Stories - Learning from Our Experiences: Exploring Critical Issues

Plenary Session: Ken Warlick Address

Plenary Session: World Café - Intended and Unintended Consequences of Accountability Systems for Students with Disabilities

Plenary Session: Learning from Research and Extended Implementation

Appendices


Background

One hundred thirty-five representatives from 39 states plus American Samoa participated in a forum on June 23-24, 2000 in Salt Lake City, Utah to discuss alternate assessment. Representatives were primarily state department of education staff, but also included some local or regional education staff, university staff, parents, and test publisher staff, all of whom have played major roles in helping states develop their alternate assessment systems. The forum, a third annual pre-session to the CCSSO National Large Scale Assessment Conference, was co-sponsored by the Regional Resource and Federal Centers (RRFCs), the Council of Chief State School Officers (CCSSO), the National Association of State Directors of Special Education (NASDSE), and the National Center on Educational Outcomes (NCEO).

Purpose of the Forum

The purpose of the forum was to support state efforts to implement effective alternate assessment systems that measure how and what students are learning. The purpose originates from the 1997 Amendments to the Individuals with Disabilities Education Act (Public Law 105-17) and its provisions on participation in state and districtwide assessment programs. These provisions reflect the increased emphasis on accountability to improve curriculum and instruction, and the demand for more and better information about educational results for children with disabilities. The specific focus of the forum was on:

1. Sharing information about the alternate assessments that states have developed for children with disabilities who cannot participate in state and districtwide assessment programs;

2. Discussing critical issues and identifying strategies to address the issues emerging as states and districts implement inclusive assessment and accountability systems.

Forum Process

The primary goal for the forum was to have state representatives meet and share with each other what they are doing and the challenges they face. Through a mixture of plenary sessions focused on common issues, and breakout sessions focusing on individual state experiences, participants were able to compare and contrast multiple approaches to inclusive assessment systems, and discuss benefits and challenges of each. Resource people from the sponsoring agencies (RRFC, CCSSO, NASDSE, NCEO) facilitated guided conversations and were available to assist participants with making needed links with resources.

Plenary sessions included:

• Opening session: Getting off to a good start

• Lunch with OSEP: An address by Dr. Ken Warlick

• World Café: Intended and unintended consequences of accountability systems for students with disabilities

• Learning from research and extended implementation

• Closing session: A celebration

Breakout sessions included:

• State Stories: Learning from our experiences

• State Stories: Exploring critical issues

A "State Fair" poster session was provided as well; it featured no formal presentations, but allowed sharing of materials across the states.

The conference agenda is provided in Appendix A.

Structure of This Report

Recorders took notes in each of the plenary and breakout sessions. In addition, discussion group recorders in the World Café and breakout sessions provided written group responses to defined questions. Summaries for the plenary sessions were developed from recorder notes, presenter materials, and from discussion group responses for the World Café.

For breakout session summaries, each presenting state was asked to provide a written summary of its state status on alternate assessment, generally covering the following questions:

• Overview of approach

• Process used to develop approach

• Relationship to general education content and performance standards

• Methods of gathering data

• Methods of scoring data

• Reporting methods

• Training processes

• Use of information in accountability system

States provided the summaries from existing official state materials, or developed summaries specifically for the forum in the format above, as appropriate. These summaries are provided in Appendix C. (NOTE: Appendix C is a separate Word file from the body of this report, due to its length.) In addition, for each state the body of the report gives a reference to the Appendix C summary and contact information for that state, along with a bulleted list of new ideas or strengths of the state approach as perceived by participants and recorded on discussion group response sheets. The sections of this report follow the agenda, with the addition of background and closing sections. The appendices contain:

A. Forum Agenda

B. NCEO Model of Intended and Unintended Consequences of Accountability Systems for Students with Disabilities

C. State Stories: Summaries provided by presenting states, with contact information



Plenary Session: Getting Off to a Good Start

Opening: Connecting Into A Whole

Three Native American students of Navajo descent set the opening stage for this forum. Coordinated by Zuni Guthrie, these students created an environment of learning and fun by performing four dances from their culture, two of which were the Hoop Dance and the Friendship Dance.

The hoop dancer, the symbol for the forum, weaves multiple individual hoops together to create whole images. He ends with the image of connecting all the individual images into a larger whole–the earth. We have been working for the past three years to refine the individual pieces of the alternate assessment. Now we are at a point where the individual pieces are coming together into a whole alternate assessment system. This piece is integrated into a larger whole, which is the state assessment system, which is part of an even larger whole, the state accountability system for education.

The Friendship Dance (or circle dance) is a dance that involves the audience. Individuals are invited to join the dance–to participate as the line of dancers weaves a pattern across the floor, adding new participants and ending in a circle that connects individuals together. It provides a symbol that illustrates the relationship and connection between individuals who form a circle or community. Similarly, the forum provided an opportunity for participants to connect with others, to share their experiences, to break down barriers in order to see the similarities and connections, and to ultimately feel the connection across state boundaries, forming a community of those concerned with continuously improving the assessment and accountability of education.

National Status of Alternate Assessment

Martha Thurlow, Director of the National Center on Educational Outcomes, provided an overview of the current national status of alternate assessment. The overview addressed the IDEA provisions and summarized findings from the NCEO online survey of states (State Alternate Assessments: Status as IDEA Alternate Assessment Requirements Take Effect, Thompson & Thurlow, 2000).

The IDEA alternate assessment provisions require alternate assessments to be in place by July 1, 2000, for students unable to participate in regular state and district assessments. NCEO designed an online survey in 1998 to help states learn from each other as they went about the process of developing their approaches. The first status report was disseminated at the CCSSO Large-Scale Assessment Conference in 1999, and the 2000 update was recently completed. The survey was completed by all 50 states and five other educational units that receive special education funding, and was updated by 47 states and two other educational units between March and June 1 of this year. The report compares results from 1999 and 2000 and shows a tremendous amount of development in the past year. It also shows great variation among states in several features of their alternate assessments, and shows where states currently are in their development: decisions that still need to be made, procedures that still need to be developed, and implementation that still needs to take place.

Principles or Beliefs

Although this area was not assessed through the online survey, information collected anecdotally and through state guidelines and other reports suggests that some states based development of alternate assessments on principles or beliefs that all students can learn and all have a right to participate in an assessment system. Other states based development on meeting the law while maintaining the status quo. These differences in principles or beliefs are apparent in all features of alternate assessments.

Stakeholders

Leadership for the development of alternate assessments came either from state assessment personnel or state special education personnel, or, in some cases, was a joint effort. Some states included only special education personnel in the development process. These states tended to base their alternate assessments on special education skill sets, with limited or no alignment with state standards. In many states, stakeholder groups included general and special education teachers, parents, and advocates in system development.

Participation Guidelines

In 1999, 34 states had addressed participation guidelines, and by 2000 this number increased to 46. Participation guidelines are based on several criteria, including the extent of the student’s participation in the general education curriculum, graduation expectations for the student, and the student’s support needs.

Standards

In 1999, 32 states had addressed the standards on which the alternate assessment would be based; this increased to 47 states by June 2000. Alternate assessments encompass general education standards in 28 states, and most include performance indicators that are inclusive of students with even the most severe disabilities. Alternate assessments in seven states assess standards with an additional set of functional skills; five states assess all alternate assessment participants on both; two states have two alternate assessments: one that assesses general education standards at lower academic levels, and one that assesses functional skills only. Alternate assessments in three states were developed based on functional skills and then linked back to state standards. Nine states based their alternate assessments on functional skills only, with no alignment to state standards. Three states were still uncertain about how they would align their alternate assessments to state standards.

Approach

The overall approach to alternate assessments was addressed by 29 states in 1999, and has now been addressed by 49 states. Portfolio assessment or a collection of a body of evidence is the most popular approach, selected by at least 28 states. Other approaches included:

• Analysis of IEP goals (selected by at least five states)

• Checklist or rating scale of functional skills (at least four states)

• Variety of other or unclear approaches (13 states)

Most states had recently completed pilot tests of their alternate assessments and plan to implement them statewide next year.

Measures of Proficiency

Only 17 states had addressed measures of proficiency (performance levels or scoring) in 1999, increasing to 40 states in 2000. Fourteen states plan to use the same measures of proficiency for their alternate assessments and general statewide assessment. Fourteen states have measures specific to their alternate assessments. Other states have not finalized plans or were not clear on the survey. There is great variation across states, with different rubrics, scales, and levels of performance. Alternate assessments also measure different things (e.g., level of skill, amount of progress, amount of support or level of independence).



Breakout Sessions: State Stories–Learning from Our Experiences;

Exploring Critical Issues

Forum breakout sessions were designed to provide opportunities to hear about 20 state systems, with a focus on reporting, data use, and integrated systems. Time was allotted for questions as well as table conversations to share and generate additional ideas. All 20 presenting states are listed below, each with a reference to a state-developed summary of state status in Appendix C.

The states presenting in the first set of breakout sessions were asked to provide a general description of their progress thus far, focusing on whatever challenges and opportunities they saw as important. These states were Alaska, Arkansas, Colorado, Florida, Kansas, Minnesota, Missouri, North Dakota, Oregon, and West Virginia. States from the second set of breakout sessions were asked to focus on "critical issues" as follows:

Large systems: California and Ohio
High stakes: Massachusetts and Delaware
Computer based systems: Indiana and Rhode Island
Training: Georgia and Wyoming
Evaluating the system: Tennessee and Vermont

(Michigan’s state summary is also included in Appendix C. The Michigan presentation was part of the Plenary session on research and extended implementation.)

All states provided an overview of their system, and discussion in all breakout sessions ranged widely from specific issues to general approaches. For this report, all breakout sessions are presented in a common format. First, a brief summary of key issues, ideas, or strengths identified by breakout session participants is given below. This information is based on discussion report forms from the sessions and from recorders’ notes. Second, a summary of each state’s alternate assessment is provided in Appendix C. This information is from official documents or summaries prepared specifically for the forum. They are snapshots of progress as seen by state leadership. Each summary includes a contact person (with e-mail or phone information) or a Web site address.

New Ideas or Strengths: Participants’ Perceptions

In each breakout session, participants were provided with a discussion guide asking:

1. What did you hear in this presentation that was new to you?

2. In reflecting on what you heard, how do these systems compare with what is happening in your state?

3. What are some of the things that appear to be strengths and why?

4. How can you use the experience of these states to strengthen or enhance your own state’s alternate assessment system?

5. What additional questions do you have for the presenters?

Each discussion group had a recorder, and written summaries of the discussions were reviewed and augmented by the recorder notes. The presenting states, references to the Appendix C state summaries, and bulleted ideas and strengths for each state presentation are below.

Alaska (Appendix C)

• Rubric allows for measurement of both skills and support dimensions (see rubric in Appendix B summary)

• A team of parents and teachers develops the portfolio

• One-year testing window instead of just one to two weeks

Arkansas (Appendix C)

• LEP inclusion in the Alternate Portfolio, an interesting approach

• Mandated student improvement plan for EVERY student

• Rubric allows a measure of "appropriateness," which allows for some judgment of whether expectations are suitably high on the performance task

• General education assessment and content specialists are included in the development committee

California (Appendix C)

• Large-state issues: high LEP, high numbers affected by the alternate, costs, political issues

• Back to the drawing board, very challenging

Colorado (Appendix C)

• Expanded indicators are helpful

• Using and modifying general education standards is good, and rubrics and scoring training are both good

• Good tie to state standards

• Built-in "gap" between indicators for the alternate and CSAP (general state assessment); they want clearly different students in the two assessments

• Concern that the legislature is making the determination about who will or will not be included in the accountability system

Delaware (Appendix C)

• Bubble sheet indicates they are taking the DAPA (state alternate assessment)

• Proactive and designed to create change in instructional practice

Florida (Appendix C)

• Steps taken to allow variety in ways of addressing student needs

• Two different exit programs; some concern about this

Georgia (Appendix C)

• Some concern about tracking, with general and special education tracks

• Training manual and process are good

• Like the train-the-trainer concept

• Training on "decision making" done first, then on assessment literacy

• Emphasis on technical assistance needed for writing measurable IEP objectives was helpful, a big issue for an IEP-based system

• One alternate, with year-long data gathering, replaces all the statewide tests, NRT and CRT

Indiana (Appendix C)

• Tech-based tools, and marketing of them

• Parents like the system; the video option is an asset

Kansas (Appendix C)

• IEP worksheet is very helpful

• Good emphasis on providing professional development and training of teachers

• Extended curriculum standards discussion was good

• Alternate assessment data gathering is ongoing during the year, but they send it in at the same time the state assessment is administered

Massachusetts (Appendix C)

• Standards-based IEP

• Proactive and designed to create change in instructional practice

• Important discussion of high stakes

Minnesota (Appendix C)

• They use a rating scale and data collection based only on the teacher's perception; problems with comparability, but good for policy compliance with the letter of the law

Missouri (Appendix C)

• Accountability system has "reportable" students and "accountable" students; moving more students to "accountable"

• They see teachers doing scoring as a profound means of professional development

North Dakota (Appendix C)

• Primary and secondary evidence of skill required: primary is what the goal is and how progress is documented (discrete progress); secondary is showing generalization across settings, people, etc.

• Looked at benchmarking with examples of skills and activities

• System is tied to standards, and to school improvement processes

• They are identifying training needs of teachers, and developing training modules on data collection systems

• Tying assessments to standards is no longer a contentious issue

Ohio (Appendix C)

• Using the IEP as a foundation

• Interesting model with measures both external and internal to the IEP team

Oregon (Appendix C)

• Not using the "alternate assessment" term; instead, a comprehensive and inclusive system that includes all students

• Money for training is a big issue; the emphasis on why training is important was good

Rhode Island (Appendix C)

• They found that special educators' knowledge of general education standards was very low, so they had to start there

• Need for support, both technology and portfolios, was high, but limited availability of direct support was a problem

• Hard to convince parents, since they were satisfied with whatever they were getting

• Links instruction and assessment; can be embedded in the instructional day

Tennessee (Appendix C)

• System is designed to improve results

• Students do one each year; school systems keep them for three years

• Looking at the "statistically sound" loophole; concerns about reporting

Vermont (Appendix C)

• Adequate Yearly Progress system

• They have three alternate assessments, with choices for ALL students, with and without disabilities

• Goals and objectives should be related to standards

West Virginia (Appendix C)

• Copied the SAT/9 code of ethics and added it to the alternate materials

• A great deal of training is needed

• SEA pays teachers on loan to help with training

• Interesting discussion of scripting the alternate assessment vs. allowing some individual variation, a reliability issue

• Careful evaluation of the alternate assessment is ongoing

Wyoming (Appendix C)

• Use of parents, and training of parents, is a strength and a new insight

• IEP process is tied to standards, and the alternate becomes an integral part of the IEP process

• Approach is practical; the concept of using IEP goals tied to the alternate

• System accountability; school improvement is the purpose

• Are there parent training materials available from any states?

State summaries in Appendix C are meant to provide an overview of each state’s approach, and to encourage additional sharing of experiences with state staff listed as contact persons.



Plenary Session: Ken Warlick Address

Ken Warlick, Director, Office of Special Education Programs, U.S. Department of Education, addressed federal provisions about students with disabilities and state and districtwide assessment. The following is a transcription of his address.

Before I begin my formal remarks, I want to thank all of you for inviting me to address this conference today. There are so many people in this room who have impacted on my life: mentors, friends, and colleagues. If I were to thank each of you publicly, I would have little time left for the remainder of my remarks.

There is one individual, however, I must thank publicly. For several years now, you have been requesting policy guidance from the Office of Special Education Programs. We are on the verge of being able to issue that guidance. The person who has been invaluable in bringing us to this point is Dave Malouf. Dave has been a superb resource of information; he has remained level-headed, calm, and persistent about keeping a diverse group of highly opinionated folks on task. Without Dave's leadership, we would not be so close to launching the policy guidance. We have not yet decided the format in which we will issue the guidance (e.g., an OSEP memo, interpretations, some binding and some non-binding guidance, promising practice, or just food for thought). However, if you are happy that we are finally issuing the guidance, be sure to thank Dave. If you are unhappy with what I have to say, blame me.

Introduction

I want to thank Carol Massanari of the Mountain Plains Regional Resource Center and the student Navajo Osage fancy dancers for setting the tone for this meeting. They set the tone that the issues we are discussing at this conference are part of a larger whole, and that it is important to celebrate what we have accomplished.

As you know, this year we are celebrating the 25th anniversary of the law we now call the Individuals with Disabilities Education Act (IDEA). As we watched the fancy dancers perform this morning, I hope we all recognized that they did not accomplish the level of skill exhibited today overnight. It took hard work, perseverance, periodic frustration, sweat and tears, high expectations, and faith. That is exactly how we have accomplished all that we have done over the last 25 years with the implementation of IDEA and we need to recognize that is what it will take as we implement inclusive assessments and many of the other challenges of IDEA '97. We have moved from access to the schoolhouse to access to high expectations and access to the general curriculum.

You also heard the dancers explain that the dances are evolving and getting more complex as people share ideas and get more skillful with the dance. The same is true with inclusive assessment. We know far more about inclusive assessments today than we knew five years ago and we will get more skillful as we go along.

Whether you agree or disagree with what I have to say today, I want you to know that I personally recognize that in each of your states you have approached the dance of inclusive assessments thoughtfully, in good faith, and based on your knowledge of the issues and the culture in your state. Our mutual job is to continue to improve the dance of inclusive assessments to benefit all children.

Big Picture

I am very happy that this pre-conference on alternate assessment is connected with the Large Scale Assessment Conference sponsored by the Council of Chief State School Officers. I hope you plan to attend the conference and that you will attend not only sessions devoted to disability issues but other sessions as well. It is important that we not lose sight of the bigger picture. The issues we are discussing today are grounded in the standards-based education reform discussions that have been going on in this country for over a decade. The issues of inclusive assessments cannot be separated from discussions about Goals 2000, ESEA Title I, Section 504 of the Rehabilitation Act of 1973, Title II of the Americans with Disabilities Act, or evolving case law.

Our discussions must focus not merely on the participation of children with disabilities in statewide and districtwide assessments, but on how that participation benefits the child. Including all children in the assessment system can ensure a high-quality educational experience for each student by creating high educational expectations for all children and accountability for the educational results of all students, including students with disabilities, minority children, migrant and homeless children, children with limited English proficiency, and children in poverty. High expectations for students necessitate high expectations for teachers, and it is also necessary to support teachers with high-quality, research-based professional development to enhance their ability to facilitate learning.

Beginning with the assessment requirements in the reauthorization of ESEA in 1994 and subsequently in the reauthorization of IDEA in 1997, there has been a dramatic evolution in the assessment discussions across America.

Inclusion of children with disabilities in assessments has changed dialogue around assessments forever. You are at this conference because of the change in that dialogue.

We must succeed in this initiative. Inclusive assessments will provide accountability for the performance of all children including children with disabilities. It is crucially important that schools know how successful they are in preparing all students to meet high standards and parents need to know as well. The inclusion of all children in state and districtwide assessments will provide significant information for improving instruction.

If we are not improving educational results for all children, we need to do things differently than we have in the past. That is why it is so important to disaggregate data about student performance along the lines of ethnicity, gender, disability or eligibility for Title I, migrant, or homeless services. We must pay attention to the data and make changes as needed to our approaches to ensure that results for all children are improving.

We must also be willing to rethink our policies as needed. We must change our policies if we find that they arbitrarily deny benefit to students. It is imperative that advocates knowledgeable about disability issues are involved in all discussions around assessment and accountability.

Why were the 1997 changes in IDEA necessary? The finding of Congress states that "the implementation of this Act has been impeded by low expectations. Over twenty years of research and experience has demonstrated that the education of children with disabilities can be made more effective by having high expectations for such children and ensuring their access in the general curriculum to the maximum extent possible."

I have found it very helpful to periodically review the Report from the Committee on Labor and Human Resources of May 9, 1997. That report, I believe, makes some very clear statements about Congressional intent in the 1997 amendments to IDEA.

I request your indulgence as I read just a few selected statements:

A stated purpose was to "Improve educational results for children with disabilities through early intervention, preschool, and educational experiences that prepare them for later educational challenges and employment".

The report, the amendments, and the regulations repeatedly emphasize the terms "access to" and "progress in" the general education curriculum.

More report language:

The committee wishes to emphasize that, once a child has been identified as being eligible for special education, the connection between special education and related services and the child's opportunity to experience and benefit from the general education curriculum should be strengthened…

This provision is intended to ensure that children's special education and related services are in addition to and are affected by the general education curriculum, not separate from it…

The new focus is intended to produce attention to the accommodations and adjustments necessary for disabled children to access the general education curriculum and the special services that may be necessary for appropriate participation.

Children with disabilities must be included in state and districtwide assessments of student progress, with individual modifications and accommodations as needed. Thus, the bill requires that the IEP include a statement of any individual modifications in the administration of state and districtwide assessments. The committee knows that excluding children with disabilities from these assessments severely limits, and in some cases prevents, children with disabilities, through no fault of their own, from continuing on to post-secondary education.

The bill requires that if the IEP team determines that the child's performance cannot appropriately be assessed with the regular education assessments, even with individual modifications, the IEP must include a statement of why the assessment is not appropriate, and alternat(iv)e assessments must be made available.

The committee reaffirms the existing federal law requirement that children with disabilities participate in state and districtwide assessments. This will assist parents in judging if their child is improving with regard to his or her academic achievement, just as the parents of non-disabled children do.

Lastly:

"The purpose of the IEP is to tailor the education to the child; not the child to the education. If the child could fit into the school's general education program without assistance, special education would not be necessary."

It is important for states to have guiding principles about the purpose of their assessments and how the assessment results are to be used. Is it for student accountability, to compare one state's performance to another, for school accountability with sanctions and rewards, or a combination of these? Can one assessment be used for all these purposes? There is increasing skepticism about whether one assessment can serve all these purposes. Moreover, there are an increasing number of recommendations that one test or one score should not be used as the sole basis to deny benefits such as promotion or a diploma.

It is important that all states consider both the intended and unintended consequences of their decisions. Data are important. We all want to see the achievement of children with disabilities improve. We want to see a closing of the gap between the performance of children with disabilities and non-disabled students. But data are only of value in relationship to how they are used.

We need to be certain that when we focus on accountability, the dialogue also results in good decisions for individual children. A young lady who must use audio text for the rest of her adult life as a means of reading won’t care if others' test scores improve if she is denied the opportunity for postsecondary education.

It is most important for educators to understand the linkage among standards, curriculum development, instruction, and assessment as one seamless, unified approach. These issues cannot be approached separately.

We need to recognize the critical importance of training at this time. Inclusive assessments can benefit children. Inclusive assessments are "do-able." But success in inclusive assessments requires thinking about assessment issues in ways that we have never thought about them before. Seventy percent of the State Directors of Special Education have identified professional development around assessment implications and how the IEP will reflect a student’s progress in the general curriculum as a major challenge.

Accountability

Although IDEA makes no specific reference to how states include children with disabilities in the state accountability system, it requires states to establish performance goals and indicators for children with disabilities consistent, to the maximum extent appropriate, with other goals and standards for all children established by the state, and to report on progress toward meeting those goals.

Under Title I policies, in the 2000-2001 school year each state must have a statewide assessment system that serves as the primary means for determining whether schools and districts receiving Title I funds are making adequate yearly progress toward educating all students to high standards. All students with disabilities must be included in the state assessment system, and the scores of students with disabilities must be included in the assessment system for purposes of public reporting and school and district accountability. State assessment systems must assign a score, for accountability purposes, to every student who has attended school within a single school district for a full academic year. And, states must explain how scores from alternate assessments are integrated into their accountability systems. [Source: Letter sent on April 7, 2000 by Mike Cohen, Assistant Secretary for Elementary and Secondary Education, to each Chief State School Officer]

Exemptions from Assessment Programs

The IEP team determines HOW individual students with disabilities participate in assessment programs, NOT WHETHER they participate. The only students with disabilities who are exempted from participation in general state and districtwide assessment programs are students with disabilities convicted as adults under state law and incarcerated in adult prisons (34 CFR §300.311(b)(1)). With this exception and the parent "opt out" option discussed later, NO exemption or exclusion language should appear in state or district assessment guidelines, rules, or regulations.

Section 504 prohibits exclusion from participation of, denial of benefits to, or discrimination against individuals with disabilities on the basis of their disability in Federally-assisted programs or activities. Title II of the ADA provides that no qualified individual with a disability shall, by reason of such disability, be excluded from participation in or be denied the benefits of the services, programs, or activities of a public entity or be subjected to discrimination by such an entity. Because of the benefits that accrue as the result of assessment, exclusion from assessments on the basis of disability generally would violate Section 504 and ADA. (Source: Dear Colleague Letter by Judith E. Heumann, Assistant Secretary for Special Education and Rehabilitative Services, and Norma V. Cantu, Assistant Secretary for Civil Rights, U.S. Department of Education, September 29, 1997.)

Inclusion in assessments provides valuable information which benefits students either by indicating individual progress against standards or in evaluating educational programs. Given these benefits, exclusion from assessment programs based on disability would potentially violate Section 504 and Title II of the ADA.

Some of you have asked if permission is required from parents of children with disabilities for participation in statewide or districtwide assessments.

If parental permission is not required for participation in the statewide and districtwide assessment programs for non-disabled children, it is not required for children with disabilities. However, parents of children with disabilities will be involved in IEP team decisions on how an individual child will participate in such assessment programs.

Most states allow parents to opt out of participation in assessments, sometimes for religious or other reasons. Parents of a child with a disability should have the same right to "opt out" as parents of non-disabled students consistent with allowable justification criteria established by the SEA or LEA.

Denying parents of children with disabilities the same rights afforded parents of non-disabled children would raise concerns about discrimination on the basis of disability. However, parents and students should be informed of any consequences associated with non-participation in state or districtwide assessments, and should not be pressured to "opt out" of assessment programs.

Most states already keep track of students who are "opted out" by parents. States and districts should keep track of parent requested "opt out" exemptions for students with disabilities disaggregated from those for non-disabled students. This would help the state determine whether any kind of "opting out" pressure might be occurring.

Accommodations and Modifications

OSEP recognizes that there has been an evolution of assessment terminology and increased agreement about such terminology since the IDEA Amendments of 1997. The term accommodation is commonly used to define changes in assessment format, setting, timing, scheduling, or response that do not alter in any significant way what the test measures or the comparability of scores. An accommodation is frequently viewed as a change in assessment procedure that "levels the playing field" and does NOT CHANGE what is being measured.

Changes that are considered to alter what the test is supposed to measure are called in some states and by some groups a variety of names, e.g., modification, nonstandard administration, modifications in administration, or non-approved or non-aggregatable modifications. However, in some states the terms "modification" and "modified" refer to allowable changes. There is no universal agreement around these terms.

Many of your state boards of education and many of your legislatures require you to define terms not in common parlance. Congress did not define these terms.

However, the IDEA statute and regulations use the terms "accommodations" and "modifications in administration" in connection with state and districtwide assessment programs and assessments of student achievement. And, the Analysis of Comments and Changes that accompanied the publication of the final regulations uses the terms "individual modifications" and "necessary modifications" as well. However, the definitions of these terms as used in the statute and regulations do not necessarily correspond with the definitions that have evolved or are evolving in the field of assessment.

For example, IDEA (34 CFR §300.347) requires that IEP teams include a statement of "modifications in the administration" of assessments of student achievement. In this context, "modifications in administration" should be viewed as a general term that would include both accommodations and modifications, as they have appeared to be defined in practice. Here it is an overarching umbrella term. Further, IDEA (34 CFR §300.138) requires that children with disabilities be provided with "accommodations and modifications in administration, if necessary," which would include the full range of accommodations and modifications, as they appear to be defined in practice.

The bottom line is regardless of whether one uses the terms "accommodations," "modifications in administration," or just the simple term "changes," under IDEA (34 CFR §300.347(a) (5)(i)) the IEP team determines what changes are needed for each child with a disability to participate in the assessment.

Section 504, Title II of the ADA, and IDEA require that students with disabilities be provided with appropriate test accommodations where necessary.

I encourage all of you to read the National Center on Educational Outcomes' new Policy Directions publication called Non-approved Accommodations: Recommendations for Use and Reporting. It raises important questions for consideration. It also deals frankly with the fact that research on accommodations so far has produced highly inconclusive answers. [The publication is available from the National Center on Educational Outcomes (NCEO) at the University of Minnesota (612/626-1530; http://www.cehd.umn.edu/NCEO).] We need not only to ensure that accommodations are used to enhance participation in the benefits of inclusive assessments but also to expand research on the impact of accommodations.

SEAs and LEAs need to carefully consider the intended and unintended consequences of accommodation policies that may affect student opportunities for promotion or graduation (e.g., receipt of a regular diploma, a certificate of attendance, etc.).

For example, students who are blind and who have not learned to read Braille are essentially denied access to the assessment if the appropriate accommodation (including having the test read to them) is not provided, regardless of whether the test's content is mathematics, reading, or some other content area. The same argument can potentially be made for students with significant reading disabilities. Denying access to the assessment because of the effects of a disability, especially when the assessment provides access to a benefit such as a diploma, raises many concerns.

Failure to thoroughly inform parents and students of these consequences will raise significant legal concerns.

Failure to provide a student access to instruction needed for the student to have equal opportunity to reap the benefit of participation in the assessment will also raise significant legal concerns. Students do not learn what we do not teach them.

A critical assessment issue may be whether the skill being measured is altered and/or whether scores derived with the change can be compared to scores derived without the change.

A critical policy issue is the purpose for which scores will be used.

A critical legal issue may be whether, by using an accommodation that might be called a non-standard administration, the child can demonstrate what he or she knows and can do, thus enabling him or her to receive the diploma or other benefit such as promotion.

It is critical that every student count, even those few students who may use accommodations that are considered non-standard administration. If states do not find a way to count such scores, students who are viewed as poor performers or who are difficult to teach are likely to be determined to need such accommodations simply on the basis of those perceptions.

The dilemma is how to maintain rigor (reliability and validity of assessments), protect the rights of students, and simultaneously ensure that schools teach all children what they need to know and to do (knowledge and skills).

I encourage you to read the soon-to-be-released document by the Office for Civil Rights that addresses fairness in validity and reliability of assessments and the state or LEA's role in deciding how scores are reported, how scores are used, and for what purposes.

Alternate Assessments Definition

Generally, an alternate assessment is understood to mean an assessment designed for those students with disabilities who are unable to participate in general large-scale assessments used by a school district or state, even when accommodations or modifications are provided. The alternate assessment provides a mechanism for including all students, including those with the most significant disabilities, in the assessment system (OCR definition).

Alternate Assessment Requirements

IDEA (34 CFR §300.138) specifically requires inclusion of children with disabilities in both state and districtwide assessment programs and requires the SEA or LEA, as appropriate, to develop guidelines for the participation of children with disabilities in alternate assessments for those children who cannot participate in state and districtwide assessment programs, and develop alternate assessments.

All SEAs and LEAs must provide alternate assessments for all state and districtwide assessments conducted beginning no later than July 1, 2000.

Of course, if an LEA does not conduct districtwide assessments other than those that are part of the state assessment system, then the LEA would follow SEA guidelines and use the SEA alternate assessment(s). The requirements apply to districtwide assessments regardless of whether or not there is a state assessment.

The alternate assessment should assess the same broad content areas as the statewide or districtwide assessment but may assess additional content as determined necessary by the state or local district. Title I requires that, at a minimum, language arts and math be assessed; but Title I also requires that if other subject areas are assessed by the state, then all students need to be assessed in those content areas as well.

Alternate assessments are alternates to general assessments. The purpose of an alternate assessment should reasonably match, at a minimum, the purpose of the assessment for which it is an alternate. One might ask, "If an alternate assessment is based on totally different or alternate standards, or a totally separate curriculum, what is the alternate assessment an alternate to?"

Many of you have heard me lament in the past that many people view academic and functional skills as an either/or choice rather than as points on a continuum. All children need functional skills, regardless of whether the child has significant cognitive delays or whether the child is academically gifted. Some children learn functional skills in the home or from peers. Some learn them incidentally, and some need to have them taught directly. However, all children deserve access to the general curriculum. Functional skills are a means to access the general curriculum for children with disabilities, just as they are for non-disabled children. It seems to me that the more our policies limit access, the more we limit opportunity, particularly the opportunity for independence, choice, and self-sufficiency.

States should consider whether alternate standards meet the requirement for consistency "to the maximum extent appropriate" with other goals and standards established by the state. Functional skills can be aligned to state standards as real-world indicators of progress toward those standards; this aligns the alternate assessment with general education standards and avoids the notion that some students need a completely separate curriculum. Several states refer to these as expanded standards or "same standards with real world indicators of progress." We all must be constantly vigilant about policies that seem to deny children with disabilities access to the general curriculum, either as individuals or as a group.

Out-of-Level Testing

"Out-of-level testing" means assessing students in one grade level using versions of tests that were designed for students in other (usually lower) grade levels. LEAs and states have often used information from out-of-level assessments for developing instructional strategies for students. The problem arises when out-of-level assessments are used for accountability purposes.

Some states allow out-of-level testing in an effort to limit student frustration. Although IDEA does not specifically prohibit its use, out-of-level testing may be problematic for several reasons.

IDEA (34 CFR §300.137) requires that the performance goals for children with disabilities be consistent, to the maximum extent appropriate, with other goals and standards for all children established by the state. The purpose is to maintain high expectations and provide coherent information about student attainment of the state's content and student performance standards.

Out-of-level testing may not assess the same content standards at the same levels as are assessed in the "grade-level" assessment. Thus, unless the out-of-level test is designed to yield scores referenced to the appropriate grade-level standards, out-of-level testing may not provide coherent information about student attainment of the state or LEA content and student performance standards.

Also, many assessment experts argue that out-of-level testing produces scores that are (even using transformation formulations) insufficiently comparable to allow aggregation (as required by 34 CFR §300.139). [Additional information about out-of-level testing is available from the National Center on Educational Outcomes (NCEO) at the University of Minnesota (612/626-1530; http://www.cehd.umn.edu/NCEO).]

If out-of-level tests are used, IEP teams need training and clear information about the statistical appropriateness of administering such tests at each possible level different from the student’s grade level.

We must have high expectations for all children. We must ensure that all teachers have the training necessary to effectively teach diverse learners. Students with disabilities are entitled to instruction in the same rich curriculum as their non-disabled peers. High expectations improve results for all.

Reporting

IDEA (34 CFR §300.137) requires states to report to the Secretary and to the public every two years on the progress of the state, and of the children with disabilities in the state, toward meeting performance goals, including performance on assessments, dropout rates, and graduation rates.

Additionally, IDEA (34 CFR §300.139) requires the SEA to report to the public, in the same frequency and detail as it reports for non-disabled children, on the number and performance results of children with disabilities participating in regular and alternate assessments and to include in those reports aggregated data that include the participation of children with disabilities together with all other children, and disaggregated data on the performance of children with disabilities.

Aggregation and Disaggregation of Data

In their reports to the public on assessments, states must report aggregated data that include the performance of children with disabilities together with all other children, and disaggregated data on the performance of children with disabilities. There is no federal requirement for disaggregation by category of disability, just disaggregation of the performance of children with disabilities separate from the performance of non-disabled children. However, such disaggregation can be extremely helpful at the state and local levels. Some states have also established similar requirements for local education agencies to report locally on student performance on state and local assessment programs. These are often called "report cards." This is a state decision.

The state reports must be issued with the same frequency and in the same detail as reports on the assessment of non-disabled children. For example, if school-level results are reported, then school-level results for students with disabilities should be disaggregated, unless the number of students with disabilities is too small.
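
This reporting pattern (every student counted in the aggregate, students with disabilities also reported as a separate group, with small groups suppressed) can be sketched in a few lines of code. All of the scores, the group label, and the minimum reporting size below are invented for illustration; actual minimum-n rules are set by each state.

```python
# Hypothetical sketch of aggregated vs. disaggregated score reporting.
# Scores, labels, and the reporting threshold are fabricated examples.

MIN_GROUP_SIZE = 10  # assumed minimum-n rule for reporting a subgroup

# (score, has_disability) pairs for one school -- fabricated data
scores = [
    (82, False), (74, False), (91, False), (68, False), (77, False),
    (85, False), (79, False), (88, False), (72, False), (90, False),
    (64, True), (70, True), (58, True),
]

def mean(values):
    return sum(values) / len(values)

# Aggregated report: every student's score counts once, disability or not.
all_scores = [s for s, _ in scores]
print(f"All students (n={len(all_scores)}): mean {mean(all_scores):.1f}")

# Disaggregated report: students with disabilities reported separately,
# but suppressed when the group is too small to report responsibly.
swd = [s for s, d in scores if d]
if len(swd) >= MIN_GROUP_SIZE:
    print(f"Students with disabilities (n={len(swd)}): mean {mean(swd):.1f}")
else:
    print("Students with disabilities: group too small to report")
```

The key design point is that the subgroup is carved out of the same data used for the aggregate, so no student's score is dropped from accountability; only the separate subgroup line is suppressed when n is small.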

It is the SEA’s decision how to collect sufficient data from LEAs to meet the Federal SEA reporting requirement consistent with these provisions.

Some states have found a way to aggregate results from alternate assessments with data from general assessments. Other states have not. We encourage states to facilitate the aggregation of scores from the alternate assessment with scores from the general assessment.

It is important for states to report performance data from alternate assessment in a way that ensures that all children with disabilities are included in the accountability benefits of state and districtwide assessments.

Use of State Data and State Self-Assessment

Federal requirements related to assessment, found at 34 CFR §§300.138 (Participation in Assessments), 300.139 (Reports) and 300.347(a)(5)(i) (Content of IEP), will be examined in several ways through OSEP's Continuous Improvement Monitoring Process. As part of the state's self-assessment process, information from state and districtwide assessment should be used by the state's Monitoring Steering Committee to evaluate the state's level of implementation and performance.

I have noted that several states are doing a great job in producing reports that are useful to the state. For example, I recently read a report from Colorado that disaggregated performance data by disability category and by the use of modifications. I know NCEO has recommended that states keep track of accommodations use so that they know what is taking place during their assessments. Such disaggregated reports also help states know how they are doing with different groups of children with disabilities and enable them to ascertain the effectiveness of accommodations.

For example, states should examine in their self-assessment the percentage of students taking the statewide assessments with and without accommodations or modifications in administration, and the percentage participating in alternate assessment. Also, performance on assessments is an important indicator for a state and districts to use in evaluating and improving results for children with disabilities (this is an ESEA Title I requirement). And don't forget to look at graduation rates and dropout rates as well.

Some states also use the results of assessments for school improvement planning, some for rewards and sanctions to schools and districts, and some for decisions about student promotion or graduation. States must also use information about the performance of children with disabilities in state and districtwide assessment programs to revise their state improvement plans as needed to improve performance.

Monitoring

As part of data collection in the SEA and in LEAs, OSEP monitors will review documents and conduct interviews regarding participation in state and districtwide assessments. OSEP will gather data to determine that the state and LEAs (as appropriate) have developed alternate assessments and provided guidelines for the participation of children with disabilities in alternate assessments. In addition, OSEP will review the extent to which alternate assessments are aligned with general curriculum standards and statewide or districtwide assessments. Part of this review will focus on whether assessment policies deny children with disabilities access to the general curriculum. OSEP will gather information about participation of children with disabilities in statewide and districtwide assessment programs, including information that is reported to the public, aggregated and disaggregated, in the same frequency and in the same detail as for non-disabled children. Finally, OSEP will review whether the IEP team determines the (accommodations or) modifications in administration in state or districtwide assessments. In addition to data collection about participation in and performance on state and districtwide assessments, OSEP will review dropout and graduation data.

Closing

For several months now, the Office of Special Education and Rehabilitative Services has convened a crosscutting workgroup on assessment including staff from the Office of Special Education Programs, Title I, Bilingual Education, the Office for Civil Rights, the Office of General Counsel, the National Assessment of Educational Progress, and others. We are close to completion of policy guidance on the issues I have discussed today as well as many other questions you have raised in the past. Also, NCEO is developing a crosswalk of legislation that affects assessment discussions so that you will have the information you need to keep the big picture in sight.

I know this is challenging and sometimes frustrating work. I know that the work in each state is totally different and unique. Comparisons across states are difficult and not always enlightening. We will not have all the answers tomorrow or even next year. We will get better at this over time just as we have gotten better at serving children with disabilities over the last 25 years, and just as those fancy dancers got better as they practiced the dance.

And as we practice this dance of inclusive assessment, let us stop and reflect to empower our faith and hope for the future. As we celebrate the 25th Anniversary of IDEA, our reflections on where we have been will energize our efforts for improving results for students. Just remember that 25 years ago there were some who argued that even including children with disabilities in public schools was a bad idea.

It proved to be a good idea. Yes, it has been hard work. But it was "do-able" and it benefited not only children with disabilities and their families but also all children. It improved teaching and learning for all. Inclusion of all children in assessments is a good idea as well. It is "do-able". It will be hard work. It will benefit all children. It will improve teaching and learning for all. It will produce better results as students exit school. Isn’t that what our education system is supposed to be about anyway?

Thank you.



Plenary Session: World Café–Intended and Unintended Consequences of Accountability Systems for Students with Disabilities

This plenary session was designed to facilitate conversations to identify and explore the effects of accountability systems as experienced up to this point. The process was divided into three rounds. In round one, groups of four to eight people discussed a series of questions about inclusive accountability challenges for students with disabilities, and then discussed consequences of these challenges for students with disabilities. In round two, one person from the original group stayed at a table with the original notes, and others joined new groups. The same series of questions was posed in the second groups. In round three, participants returned to their original discussion groups to address what specific strategies are needed at the state, regional, and national levels to minimize further unintended consequences, or address consequences we have experienced.

A working model of intended and unintended consequences developed by the National Center on Educational Outcomes (NCEO) was used to frame the discussion. The model is built on the National Research Council’s "Testing, Teaching, and Learning: A Guide for States and School Districts" (1999), which proposes a "theory of action" for driving the reform movement. The NCEO model, with a brief description, is included in Appendix B, adapted as a result of the Forum discussions.

At the conclusion of all three rounds of discussion, recorders at each table provided results of discussions to NCEO staff. NCEO staff collated all responses according to the categories of challenges and issues in the working model, identified responses that added to the model or further explained the issues, revised the model based on the responses by adding training issues, and summarized the responses by challenge, positive and negative outcomes, and strategies to maximize positive outcomes and minimize negative outcomes.

Inclusive Accountability Challenges Reported by Small Groups

In rounds one and two of the process, participants first addressed the challenges they were experiencing in their states and districts, and then were asked to compare the challenges to those on the working model. All of the challenges currently listed on the model were included, but an additional very large category of training issues emerged. Summaries of the discussion of challenges, by model category, are below.

Content and Performance Standards

Challenges reported by small groups in designing a system to measure performance against common standards included how to link performance of all students to state and district content standards as well as how to adjust performance standards for the alternate assessment. There was also discussion about how those linkages can work for the students with the most significant disabilities. Among the specific challenges listed were:

• Nature of state and district standards makes linkages more or less challenging;

• Writing goals and objectives that link to the general education curriculum via state or district standards;

• Linking performance indicators to alternate assessment participants in meaningful and appropriate ways.

An interesting discussion took place about states that have a separate curriculum and standards for alternate assessment participants. There was discussion about the pros and cons of a separate, statewide, functional/standard course of study. There was some concern that linking state standards to the individualized approach of special education may narrow the curriculum, resulting in a lack of focus on the functional needs of students who have difficulty with general education standards. Some discussants suggested that expecting students with significant disabilities to work toward general state standards might do a disservice to students who do NOT need extra time and attention to meet standards, leaving the needs of students at the other end of the spectrum from those with disabilities unmet. This discussion may relate to training needs on expectations and attitudes.

Accommodations and Alternate Assessments: One System, All Students

The discussion of the challenges involved in designing one assessment system for all students included discussion of accommodations, modifications, and alternate assessment. General issues about purpose of the assessment and how purpose affects appropriateness of accommodations and use of data were a backdrop to this discussion. Several small groups suggested that there can be a mismatch between purposes of system level instructional improvement versus individual instructional planning. Another mismatch may be between the need to have data that have high validity and reliability versus moral, ethical, and inclusion issues. The one system, all students challenges closely intertwine with measurement and reporting issues. Among the key challenges identified by the groups were:

• Continued exclusion of categories of students (e.g., separate schools, cultural and language challenges)

• Complexity of accommodations options, lack of understanding of use and effects

• Definitional variations in accommodations vs. modifications

• Challenges related to reporting and use of results

• Lack of understanding about how to include all students in assessment and accountability–not just to measure but to use results from accommodated, modified, and alternate assessments

• Alternate options for norm-referenced tests (e.g., NAEP)

Measurement and Reporting

Issues related to measurement and reporting were the most common challenges raised by all small group reports, and they interrelate with challenges in many of the other areas. Technical and psychometric difficulties were clearly a major concern, but fairness of use of results also emerged as a major issue. The following reflect most of the specific challenges identified by groups:

• How to report student results for "out-of-district or school" placements

• Balance between what makes sense for improvement planning vs. psychometric soundness

• Putting all students on the same scale vs. accountability for all

• How to compare districts and schools with inconsistent populations

• How to compare fairly across schools, districts, and states, since there are so many variables

• Value-added component; comparability on progress may be fairer if you look at progress toward standards AND growth

• System vs. student accountability; measurement and reporting issues differ

• Problems with aggregating, problems with disaggregating

• Psychometric issues abound!

School Improvement Planning Based on Data

The development of the assessment system is meant to yield data that will drive instructional improvement. Small group reports suggested that for instructional improvement to occur, several challenges have to be addressed, including training on purpose and uses of data, and ensuring that all students, specifically those with disabilities and those with limited English proficiency, are included in the improvement processes. Key challenges identified by groups were:

• Better understanding needed about how to use data to enhance achievement

• Shift in management of programs for accountability: special education may not be represented, so word doesn’t get back to the special education people

• Understanding difference between scoring all students on same performance scale vs. accountability for all

• System of accountability is for ALL students, yet change is very slow, often nonapparent for students with disabilities and limited English proficient students–does all mean all?

• How do we document whether assessment and accountability have created results? It still remains business as usual.

High Stakes Issues

States and districts vary in the stakes attached to their assessment system. Participants raised some specific concerns about high stakes:

• What is accountability? Testing all students, OR testing all and aggregating and using results for improvement?

• Civil rights implications of various approaches to diplomas: practice varies (e.g., regular diplomas for success on IEPs, special education diplomas); this is a legal issue

• Is the alternate assessment included in the accountability system? How?

• If there are consequences for students without disabilities, do the SAME consequences apply to the alternate assessment participants?

Training, Professional Development

In addition to challenges included in the original model, the small groups strongly emphasized the need for broadly based training for administrators, parents, and both general and special education teachers. This additional concern echoes the concerns raised by the National Research Council in Testing, Teaching, and Learning (1999, p. 3): "In our view, standards-based policies can affect student learning only if they are tied directly to efforts to build the capacity of teachers and administrators to improve instruction." Among the specific challenges identified by the groups were:

• Attitudes about expectations for students with disabilities

• Assessment literacy for all partners (administrators, teachers, parents, community members) in understanding the purpose and use of assessment results

• Redesign of preservice training on the part of institutions of higher education, both general and special education

• Improvement of general education staff instructional skills in addressing needs of all children

• Improvement of special education staff understanding of and linkage to the general education curriculum, specifically state or district content and performance standards

• Improvement of understanding and skills in integrating assessment and instruction

• Understanding and aligning multiple federal and state requirements for inclusive assessment and accountability

Consequences for Students with Disabilities Reported by Small Groups

After the inclusive accountability challenges were discussed and recorded, discussants moved on to discuss the consequences of these challenges for students with disabilities. In rounds one and two of the process, small groups were asked what they perceived to be unintended consequences of inclusive accountability measures for students with disabilities, and then asked to compare their experiences to those in the working model. The overall results showed a strong emphasis on positive consequences as well as negative consequences, and every discussion group reported at least as many positive as negative outcomes. All of the items on the working model were reported except for two items, which were "meaningful diplomas" as an intended outcome, and "cheating on tests" as an unintended outcome. There was no evidence from the discussion results to indicate that participants disagreed with these two items, but no direct comments about them were recorded.

Positive consequences were described in the following categories.

Higher Levels of Learning and Achievement Toward Common Standards

• Linking of general assessment to functional standards has improved IEP goal writing

• Alternate assessment forced us to think about how to include students

• It helps teachers focus on what to teach (not be babysitters). We can stop seeing ourselves as caretakers and more as teachers.

• Accommodating for assessment is influencing accommodations for instruction

• Gives more opportunities for extended 1:1 time with a child

• Teachers have reported that it focused their instruction and gave them new options for content

• Teachers are doing more authentic assessment

Access to General Education Curriculum

• Schools could become more inclusive

• Ownership of special education students is now shared with general education more so than in the past

• All staff are involved with the student part of the portfolio process

• IEPs will get aligned with standards

Opportunity to Learn, Mastery of Grade Level Material

• I’ve got to look at HOW to make this child successful

Accountable System and Students

• Teachers are thinking of new ways to assess students

• Simplifying IEPs and getting down to what’s important

• Opens communication pipelines from state to local, and administrators to providers

• High stakes testing may improve achievement levels

• Teachers can "advertise" the wonderful learning that takes place in their classrooms for the seriously cognitively impaired

Negative consequences were described in the following categories:

Simplifying IEP Objectives to Ensure Mastery

• Assessments may begin to address only lower level skills, ones that all can accomplish

• Criteria for judging accountability are being narrowed; are the standards the appropriate ones for all students?

• More testing reduces instructional time

• IEPs tend to look more similar than individualized because all are addressing the same standards

• Short-changing students with respect to employability skills and life skills education to spend more time on academics

Misinterpretation of Achievement Results

• Use of scores may provide incentives to include more students in the alternate

• If the same performance levels are used, all alternate assessment participants’ results will be in the bottom category

• People say more and more that disability is the reason students can’t learn

Higher Rates of Dropout, Retention, Absenteeism, Lower Graduation Rates

• Differentiated diplomas (again)

• Two diploma options may lead to drop out, absenteeism

Teacher Burnout

• Teachers view portfolios as holding the teacher accountable and not the students

• Implement without improving teachers

• Impact on teacher time

High Rates of Exemption/Exclusion: Disappearing Students

• Schools may become less inclusive with high stakes test pressures

• Schools now unwilling to "house" classes of students with significant disabilities, but could go to equating formulas that fix that

• Students may be included less in general education instruction once it is specified they will participate in alternate assessments

• Groups of students excluded from regular assessments because of category (e.g., students in psych-ed centers, foster care, students in juvenile justice system)

• Students and teachers "hiding under rocks" from the assessment: special education private schools, teachers exempting students, moving students–who’s accountable?

Strategies to Address Consequences

After the discussion on perceived positive and negative consequences of the challenges, participants moved to identifying concrete strategies to maximize positive and minimize negative consequences of standards-based reform for students with disabilities. For this discussion, participants returned to their original small groups of four to eight people. Here, they brainstormed specific strategies needed at state, regional and national levels to maximize positive consequences and minimize negative consequences. The brainstormed strategies are listed below grouped into eleven categories that emerged from their analysis. The strategies are not prioritized, and minimal collapsing of examples was done as a way of capturing all ideas recorded in the small groups. Strategies were not addressed in the original model.

Strategies for Improvement of Instruction

• Use a portfolio approach for a system check and to collect student data; can improve the system plus improve individual instructional plan

• Formalize evaluation of opportunity to learn

• Formalize evaluation of teacher performance

• Align curriculum, instruction, assessment

• Align special education instructional and assessment focus

Strategies for Continuous Improvement of the Accountability System

• Review work frequently to make changes as necessary, but use data-based decision making

• Keep nay-sayers in the mix; it helps you answer tough questions up front rather than after implementation is entrenched

• Give us written guidelines (a double-edged sword)

• Proceed slowly; don’t push ahead just because of a federal timeline (arbitrary, uninformed)

• Include special education and LEP staff in early discussions about accountability

• States need to take the time to consider all aspects of accountability. Learn from others’ mistakes.

• Continue to define terms: curriculum, standards, and benchmarks

• Paperwork issues: integration of instructional assessment should minimize paperwork

• Get MANY more people involved in development so they don’t see it as (and it isn’t) a state mandate

Strategies for Training

• Teacher training is a must–on how to collect data, strategies to collect data on students with diverse needs, and limitations of college prep for all students

• Include administrators and general education teachers in training

• Provide training on performance assessment and scoring, reporting, using data

• Emphasize that the alternate assessment is a sampling of skills, and is not meant to cover everything about the child

• Hold more meetings where people sit down and talk through the issues, like this session

• State and LEA partnerships are needed for training; also get higher education on board

• Strategically recruit teachers

• Get the word out that alternate assessment helps instruction and makes you a better teacher

• Can’t change attitudes first; we must get people to try out alternate assessments, and then change will come

• Remember that teachers (all people) are at a different place in their development and will attack this differently

• Have a teacher manual and structured training

• Strong training and implementation plan for teachers and the whole IEP team

Strategies for Integrating General Education and Special Education as One System

• Greater integration between special education and general education is a key strategy

• Require special education training for general education teachers

• Need transition time for special education and regular education teachers to accept the fact that both are responsible for the education of students with disabilities

• Make general education teachers more responsible for differentiated instruction

• Help special education teachers focus on preparing students for community life, independence, employability

• Improve preservice training for ALL educators, general education needs more attention to individualized teaching, special education needs more attention to specialized assessment, planning, teaching

Strategies for Measurement and Reporting

• Scores should go back to the district/school of residence when students are placed in residential facilities

• States need to work with test publishers to expand permissible accommodations

• Develop standards assessment model with inclusivity at all levels, including assessment development

• Use different levels of performance: four levels for regular, four levels for alternate, then create an accountability index that incorporates those

• Have allowable accommodations and special conditions, then code by allowable, special conditions

• States develop indexing or equating that assigns scores for alternate back to school in appropriate proportion to population

• Scores of students in alternate assessment don’t necessarily need to be low; can report progress, not performance

• Track students to be certain all are assessed and how they are assessed, to determine if type of participation is appropriate

• Compare participation rates to the Dec. 1 child count

• We need to be establishing some baselines and then see if we are influencing these

• Can’t do trend analysis unless you have a good system, across states, year to year

Strategies for Inclusive Assessment Systems

• Figure out what accommodations a student needs without looking at a list. Then ask if the accommodations are allowed.

• Have district account for ALL students (e.g., absences, excluded, regular, accommodated, alternate)

• Every district has to collaborate with private schools to get assessments accomplished

• Exemptions–total catatonic state–we may have to exclude

• Need an additional framework for special education and LEP: for example, instead of four levels, add a fifth category, of something like access skills for the alternate assessment participants, use prerequisites for skills; for LEP use prerequisites for English

• If participation in alternate is too broad, narrow your criteria

• Test prep–we haven’t done this with students with disabilities. If they have always been exempted…they will need practice taking tests

Strategies for Addressing Civil Rights, Litigation Issues

• Anticipate litigation with exit testing, need to be positive at all levels

• Civil rights issues need to be articulated, understand the framework

• OCR policy letters would be helpful

Strategies for Parent Involvement

• Include parents–"validation forms" for parent–they can say "I don’t think this represents my child." This would show up as a negative under the "appropriate" rubric. Parents’ score would be one part of the total rubric. How much weight would that have?

• Include parents in training; want to do it, but it’s too expensive. One LEA uses the COACH process, which has a training benefit

• Help parents develop higher expectations

• Parents need to be on steering committee

Strategies for Addressing Mandates

• Too much imposition of standards from federal level won’t fly

• Some folks would like more prescription from the federal level

• Integrated document needed that goes across all federal statutes, like one NCEO is working on

• States need to align state legislation with federal requirements

• Develop appeals processes at state level

• Define accommodations more clearly in Federal law

Strategies for Addressing High Stakes

• Use a differentiated diploma system to keep students from being denied a diploma

• Social promotion: Don’t deny promotion as long as students are making progress on IEP goals

• No "rainbow" diplomas!

• Multiple paths to diploma

• Vocational path to diploma

• Different for different reasons for testing, e.g., accountability

Strategies for Addressing Money and Resources

• Full funding for IDEA requirements, also needed for training

• Let’s put the dollars in instruction!

• More money and time

• Increase funding for materials and training

• Smaller class sizes



Plenary Session: Learning from Research and Extended Implementation

The final plenary session provided information from research and from states with extended implementation in place. Peggy Dutcher from Michigan served as moderator and posed questions. John Haigh from Maryland and Jacqui Farmer Kearns from Kentucky shared the Maryland and Kentucky research and implementation agendas to date. The Michigan summary is provided in Appendix C, pp. 50-51, as background.

From Michigan’s experience, Dutcher raised these key issues for comment by the panelists:

• Are you finding that your system is being implemented as planned?

• Michigan is concerned about many of the technical issues. How did each of your states evaluate/research your system while in development and how are you handling ongoing evaluation of your programs?

• What lessons did you learn as you developed and adapted your systems?

• How is your state handling the reporting issues, especially reporting at the state level? Also, how did you approach reporting your assessment information in a way that would help improve instruction and the performance of students?

• To what extent are your stakeholders satisfied with your system? Do you see any negative outcomes?

• How did you approach the task of training raters and the issue of rater reliability? Also, what were/are the costs related to this training?

• How much does it cost to implement your system?

As discussion proceeded, questions from the audience were taken as well, adding some additional issues and emphasizing other topics. Thus panelists’ answers did not always fit one of the initial questions precisely, but the general topics were maintained. Panelist responses to the issues are summarized below in general topic categories.

Reporting Issues. For example, will the data from alternate assessment be useful? What will the media do with the data from the alternate? How do teachers respond? What is appropriate for state level aggregation? How do you use results to improve instruction? And how do we achieve validity and reliability?

Kentucky
Alternate assessment scores are included in the school accountability system. We don’t use these scores for student accountability; no retention or graduation requirements are built on them. Scores are reported as part of the school accountability index.

We use a holistic scoring approach, and teachers score through regional panels. It costs more to go to regional scoring, but teachers walk away from scoring sessions with their own students’ scores. They sit with peers, look at scores, and think them through, so when the report comes out it is not too big of a surprise.

As for validity, we’re finding that schools with higher scores in general education also have higher portfolio scores for the alternate. We look at what is in the scoring rubric. The nice thing about portfolio assessment is you can do videos, multiple methods of assessment, get peer information, parents; it all adds to validity. A single measurement system makes validity more difficult; using multiple types of data really helps. With peers scoring, it’s easy to pick up on who did not belong in the alternate.

Reliability went down when teachers were scoring their own students’ assessments, but when we went to double-blind scoring it went up. You need to do inter-rater reliability to see match, pull a sample and score again. It takes about an hour to score one portfolio; expensive if you have center-based scoring with experts, and we really want staff development from the scoring experience, so we work on training teacher scorers. We had graduate students who did reliability site visits–it took about a semester. They interviewed teacher and principal, and did IEP analysis–these are also program measures. Now we’re looking at predictive validity (postschool outcomes) but it’s very hard to do.

Maryland
We have high stakes for schools and students, including a diploma test. We are a birth to 21 state, however, so we need to go beyond what general education is doing in order to evaluate all school programs. The results of the alternate are linked to what we’re measuring overall. All are scored, but reported at the statewide level only. Other reports are shared throughout the year with several audiences. We started with "what do you want students with disabilities to know and be able to do when they leave school?" We tried to match purpose with the regular assessment.

We validated using effective practices in the literature. We worked on the scoring rubric, and then we focused on triangulation in data gathering. Teachers don’t necessarily know how to get good data. We explored ways to get parents involved; peers can be involved too. You triangulate around the source of the pieces of data that go into a body of evidence, or, like Maryland, get three distinctly different kinds of data.

We found a university within our state to help us with reliability and validity. We score using teachers, then at end of week, go back and score a random set. We keep those senior scorers available, and it becomes a career ladder for teacher-scorers. We started off the first couple of years in one central site to control scoring and what everyone knew. Now we score at three different sites, but we had built a cadre of core reliable folks before we went out to regional sites.

 

Standards Issues. For example, special educators weren’t familiar with general education standards–how can training and support help the linkage occur? How can we balance the need for individualization for this population, yet find common standards?

Kentucky
The general education approach was through portfolio assessment so the alternate was a portfolio as well. The challenge was to really understand what standards-based activities are, then to really understand how students with severe disabilities access them, learn from them. We now have content specific activities, standards based activities, that give access to all. If school isn’t for all, then who is it for? It was an opportunity for special education and general education to think together, to work together on school for all. We jumped in on a very steep learning curve, and now after ten years, more people are doing what needs to be done. We started with no examples; now we have good and bad examples, and that makes all the difference in training teachers. We’re even finding students who can self evaluate their work.

Maryland
It’s about change! Especially if it’s in infancy, there are lots of opportunities to engage teachers in development. The whole idea of standards is new to them. The political reality is that what is required by state policy-makers changes. Boards want one thing, then they change. Maintaining the posture of readiness is really important. Keep in mind a focus: outcomes to standards to milestones, whatever other language you use, but keep the purpose in mind of what it is that we’re doing, be sure we’re including all students.

 

Performance Assessment and Instruction Issues. For example, how can we develop a standardized set of assessment activities, and still allow for full range of severe and moderate cognitive levels at a given age or grade?

Kentucky
We found we weren’t finding a lot of performance in the portfolios that connected to standards, or finding not enough use of technology, or showing skill in multiple settings. It is through scoring that we learn what professional development needs to be beefed up, and give support. Scoring done by teachers is powerful training on what should be taught, and what should be assessed. Teachers will say, "Oh, is that what I was supposed to be doing? Oh, is that what I look like when I teach? I see, I am over prompting that student." But they need to see other examples to know what’s possible, and to want to change their practice.

The best strategies include hiring teachers to do the training on scoring; it helps with credibility. They can do orientation in the fall, and there too, scoring portfolios is the best training. Teachers love to see what other people have done, which is a large part of training. If they practice scoring in the fall, they see what they need to look for. You can also put scoring training on the web, with certified scorers working with you. Then hold regional help meetings throughout the year, identifying proficient people in each district and each region to help out.

Maryland
Change is a key piece. Flexible training is really crucial for any program that you start. Our teachers score, rather than have someone else do that, since it’s a terrific training piece for the teachers. At first teachers would scream and yell and not want change at all. "You haven’t seen ‘my’ kids!" they’d say. Start out with teachers in the core planning and design, then have them go out and do training and outreach. It’s much better to hear it from peers than from someone from the state department.

We tag on to that by developing additional performance-based tasks so teachers get very used to writing tasks. They do it right along with regular education folks; for students who are included, it’s a parallel task to assess. Through the Maryland Assessment Consortium, school districts got together to develop performance tasks, and we use them as part of the training as well.

 

Research and Evaluation Issues. For example, are you finding that your system is being implemented as planned? How do we continuously monitor and improve our efforts? What are we learning from research?

Kentucky
We’ve been fortunate to have an ongoing evaluation and research design. We had a federally funded research grant. We need to look at score distribution and reliability every year. Now we are looking for intended and unintended consequences of these reforms, through interviews, looking at IEPs, using other program measures.

Maryland
We are buying video tape, and see use of video as a good tool to see what needs to be improved. We found virtually every school had camcorders already, but the biggest problem was getting a person to run the camera. In our manual, we go through a lot of training to put a video together. We’ve suggested, highly recommended, they use video tape; a checklist can be used as an alternative, but you need to have a third party do the checklist.

 

Stakeholder Satisfaction Issues. For example, how do all parties feel about the effort? How do we communicate to them?

Kentucky
Satisfaction is an interesting idea. We are continually looking at what the problems are. For example, are students really in the right assessment? We’re able to catch a lot of problems just by looking at portfolios themselves. Teachers have been very involved all along, and that helps greatly with understanding and acceptance. Parents don’t complain. But are teachers happy about it? Probably not, since this is accountability. But is it better for students? Yes; for example, one concrete outcome is that more students have communication systems than before.

Maryland
There are good principals who saw this as an opportunity, who embraced it and saw it as an opportunity to do better. It helped on the regular assessment side; we had data that special education students weren’t pursuing standards and a diploma. Initially, parents didn’t want it; now they see the benefit.

 

Cost Issues. For example, what is it costing to train and score the Alternate Assessment?

Kentucky
Our maintenance costs are about $200,000 per year. That includes professional development. That’s for scoring fewer than 1,000 portfolios.

Maryland
Our approach is expensive because we test at pre-Kindergarten and 21 in addition to K-12. We also use three to five teachers for each portfolio scoring, but the value comes back in staff development outcomes. For 1,400 students, it costs $200,000–video, room and board for scorers, training.

 

Conclusion

The concluding session was a celebration of work done thus far, and a focus on the students we all serve. Plans are being made for a fourth annual Alternate Assessment Forum in Houston, Texas, in June 2001.



Appendices

Appendix A: Agenda

Appendix B: NCEO Model, as Amended from Discussion

Appendix C: State Stories



Appendix A

Alternate Assessment Forum Agenda: Connecting Into A Whole

June 23

8:30–9:30 Getting Off to a Good Start–DoubleTree/Hilton Ballroom

• Welcome and Opening the Forum

• Agenda Walk Through and Housekeeping Details

• State Approaches to Alternate Assessment–An Overview

9:30–9:45 Break provided by Measured Progress Incorporated

9:45–11:45 State Stories: Learning From Our Experience Together

These breakout sessions are designed to provide opportunities to hear about other state systems with a focus on reporting, data use, and integrated systems. Time will be allotted for questions as well as table conversations to share and generate additional ideas. Presenting states include:

Arkansas & Alaska–Canyon Room I

Arkansas: Gayle Potter, presenter. The Arkansas Comprehensive Testing, Assessment and Accountability Program (ACTAAP) encompasses high standards, professional development, student assessment, and accountability for schools and students. One component of ACTAAP is the Arkansas Alternate Portfolio Assessment System.

Alaska: Wendy Tada, Fran Maiuri, and Toni Jo Dalman, presenters. Alaska's Department of Education & Early Development has developed an individualized, performance-based alternate assessment. Teachers and parents will collect a portfolio of evidence of the student's proficiency in meeting alternate performance standards.

Colorado & North Dakota–Canyon Room II

Colorado: Sue Bechard and Terri Connally, presenters. Colorado just completed a pilot of the alternate assessment involving 120 students in grades three and four. The performance-based assessment was built on expanded reading and writing benchmarks of the state standards and included four literacy-related activities and two scoring rubrics.

North Dakota: Keith H. Gustafson, presenter. The North Dakota Alternate Assessment process consists of a portfolio review of documentation gathered on student mastery of outcomes derived from the North Dakota Content and Performance Standards in English-Language Arts, Math, Science, and Social Studies.

West Virginia & Oregon–Canyon Room III

West Virginia: Mary Pat Farrell, Sandra McQuain, and Beth Cipoletti, presenters. West Virginia will describe how one state will measure individual student progress in the attainment of functional skills based upon the established state curriculum to produce results that can be aggregated for accountability purposes.

Oregon: Pat Almond and Bob Siewart, presenters. We will be talking about our Extended Career Related Learning Assessment and our Extended CIM academic assessment. We will discuss where we are with reporting in the general system and how results from these extended assessments will be incorporated.

Missouri & Minnesota–Topaz Room

Missouri: Melodie Friedebach and Jim Friedebach, presenters: Missouri has developed a portfolio assessment for its alternate assessment, MAP-A. This session will focus on the content of the portfolio and the scoring procedures that will be used this summer. The session will also focus on the professional development opportunities for teachers that are associated with the MAP-A.

Minnesota: Mike Trepanier, presenter. An advisory committee defined principle statements that guided the development of the alternate assessment. These were: meet the law; easy to understand, use, complete, and report; not abusive to students, staff, or parents. Each alternate assessment is a teacher rating scale based on what is developmentally appropriate for the student. Reporting goes through the director of special education and is submitted over the Internet.

Florida & Kansas–Salon I

Florida: Carol B. Allman, presenter. FL school districts are permitted to choose the appropriate alternate assessment procedure for students with disabilities who are excluded from the state test. The state provides monies to schools and districts through the Accountability and Assessment Project and coordinates alternate assessment training of teachers.

Kansas: Susan Bashinski, presenter. The Kansas Alternate Assessment is derived from the Kansas Extended Standards in Reading, Writing, and Mathematics. The assessment is designed to be a structured interview based on the indicators in the extended standards.

12:00–1:30 Lunch with OSEP–Grand Ballroom C. Lunch provided by CTB/McGraw-Hill

A presentation by Dr. Kenneth Warlick

1:30–3:00 World Café–Intended and Unintended Consequences of Accountability Systems for Students with Disabilities–DoubleTree/Hilton Ballroom

This plenary session is designed to facilitate conversations to identify and explore the effects of accountability systems as experienced up to this point. Focus questions will be used to guide conversations in a café atmosphere.

Beverages provided by ILSSA

3:00–3:20 Break–Break provided by Riverside Publishing

3:20–4:30 Learning From Research and Extended Implementation

This plenary session will provide information and learning from both research and the first-hand experience of states.

Peggy Dutcher (MI), Jacqui Farmer Kerns (KY), and John Haigh (MD), presenters.

June 24

8:30–9:30 State Fair Breakfast–Grand Ballroom C. Breakfast provided by Measured Progress Incorporated; Coffee and Tea available all morning. Coffee service provided by ILSSA

Each state will have a "booth" to display materials that they are interested in sharing.

Over breakfast, participants can review what other states are doing.

9:45–11:45 State Stories: Exploring Critical Issues

These breakout sessions are designed to provide opportunities to hear about other state systems with a focus on unique issues. Time will be allotted for questions as well as table conversations to share and generate additional ideas. The states presenting and issues include:

Large systems: California & Ohio–Canyon Room I

California: Mark Fetler, presenter. California's alternate assessment program is designed to include students with more significant disabilities in statewide assessment and accountability programs. The statewide alternate assessment program begins July 1, 2000.

Ohio: Pete Tolan, presenter. The Ohio model for conducting an alternate assessment is based on a review of student progress in attaining IEP goals. Ohio has a statewide procedure for developing IEPs, and alternate assessment provides a means of organizing and compiling this information to yield data about student progress and program effectiveness.

High stakes: Massachusetts & Delaware–Canyon Room II

Massachusetts: Dan Weiner, presenter. MCAS-Alt will measure student achievement of Massachusetts' learning standards in four subject areas through use of individual student portfolios. Curriculum and assessment materials have been developed to assist teachers in aligning instruction for students with significant disabilities. Massachusetts will transition next year to a high-stakes student assessment.

Delaware: Mary Ann Mieczkowski, presenter. This session will discuss the Delaware Alternate Portfolio Assessment paying particular attention to how it has been aligned with state standards and how it will be used in the overall state accountability system.

Computer based systems: Indiana & Rhode Island–Canyon Room III

Indiana: Deb Bennett, Melanie Davis, John Cunningham, and Helen Arvidson, presenters. Indiana's alternate assessment, IASEP, is a computer-based rating and documentation system that allows the teacher to collect and rate ongoing evidence of student learning. Using customized software, teachers can create video clips, audio clips, scanned documents, digital images, and text entries. This performance-based evidence is then linked to validated skills, knowledge, and abilities. Sample portfolios, data from the 98/99 pilot, and preliminary data from the 99/00 statewide implementation will be shared.

Rhode Island: Maria F. Lindia, presenter. Rhode Island's alternate assessment represents a multi-disciplinary approach to student learning and progress. Portfolios showcase student work where learning across life domain areas can be assessed in a comprehensive way.

Training: Georgia & Wyoming–Topaz Room

Georgia: Nancy E. Elliot, presenter. Georgia has developed an IEP based alternate assessment process that focuses on priority objectives for each student and his or her progress throughout the IEP implementation period. Training was conducted for teams from every school system in the state on the "inclusion of students with disabilities in the assessment program."

Wyoming: Becca Walk, Beth Norton and Peg Sherard, presenters. Wyoming has "expanded standards" based on the regular standards. The benchmarks are real life indicators. We are using modified IEP forms for data collection. We have piloted our system in six districts and two BOCES.

Evaluating your system: Tennessee & Vermont–Salon I

Tennessee: Terry L. Long, presenter. Tennessee chose to implement a portfolio assessment system for students who cannot meaningfully participate in standard state and districtwide assessments. The Tennessee Department of Education contracted with ILSSA for development, implementation, and ongoing technical assistance, and with Vanderbilt University for formative and summative evaluation of the implementation process.

Vermont: Michael Hock & Fred Jones, presenters. Vermont will be presenting procedures and results of our alternate assessment field-testing and some of the tools which were developed. Included will be an overview of our alternate assessment Web site.

12:00 Closing Luncheon–A Celebration–Grand Ballroom C. Lunch provided by Harcourt

Appendix B

Model of Intended and Unintended Consequences of School Reform for Students with Disabilities

The purpose of this model is to explore the challenges of school reform for students with disabilities, specifically the challenges of inclusive accountability and assessment practices. The model is built on the National Research Council’s "Testing, Teaching, and Learning: A Guide for States and School Districts" (1999), which proposes a "theory of action" for driving the reform movement:

Generally, the idea of standards-based reform states that, if states set high standards for student performance, develop assessments that measure student performance against the standards, give schools the flexibility they need to change curriculum, instruction, and school organization to enable their students to meet the standards, and hold schools strictly accountable for meeting performance standards, then student achievement will rise. (p. 15)

Overall, the intended outcome of standards-based reform as portrayed by the theory of action is increased levels of learning and achievement for all students in our nation’s schools. The theory of action assumes that all students are included in all components of the reform agenda: standards, assessments, flexibility, and strict accountability. Despite the intended positive consequence of higher student achievement, there is the potential for unintended negative consequences.

The reform movement has influenced the implementation of additional policies and procedures that must be examined for students with and without disabilities. These secondary policies and practices are also implemented with the intent to improve student learning and achievement. For example, states have begun to implement policies to end social promotion. The overall intent of these policies is to ensure that students have mastered grade level material before being promoted. However, among the unintended effects of these policies may be an increase in the number of students retained and in the number of students who drop out of school.

The figure "Intended and Unintended Consequences of School Reform for Students With Disabilities" illustrates an adaptation of the theory of action of standards-based reform. The figure shows the antecedent components of the accountability system driving school reform, challenges that arise as students with disabilities are included in the system, and examples of some intended and unintended consequences of inclusive accountability and assessment systems as well as secondary policies and practices. This figure was the working model used to focus discussion in a plenary discussion at the Alternate Assessment Forum in June 2000.

The figure is presented as it was refined as a result of discussion; these changes are noted on the figure with asterisks and italics. The changes include an emphasis on training of all partners as an essential challenge, and the addition of positive and negative as qualifiers for intended and unintended consequences.

Click here to view the Model for Intended and Unintended Consequences of School Reform for Students with Disabilities.
