Self-Study Guide for the Development of Statewide Assessments that Include Students with Disabilities

by Martha Thurlow, James Ysseldyke, and Kenneth Olsen

Published by the National Center on Educational Outcomes in collaboration with
St. Cloud State University and
National Association of State Directors of Special Education

1996


This document has been archived by NCEO because some of the information it contains is out of date.


Any or all portions of this document may be reproduced and distributed without prior permission, provided the source is cited as:

Olsen, K., Thurlow, M., & Ysseldyke, J. (1996). Self-study guide for the development of statewide assessments that include students with disabilities. Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved [today's date], from the World Wide Web: http://cehd.umn.edu/NCEO/OnlinePubs/Self_Study_Guide.html


Introduction

Recognizing the Problem

More and more, the public is demanding that states implement systems of education that emphasize higher standards and accountability for all students. In response, states are revising their standards, and Congress has created national initiatives, such as the Goals 2000: Educate America Act and the Improving America's Schools Act (the reauthorization of the Elementary and Secondary Education Act), that call for a comprehensive education system that encompasses all students, including those with disabilities. Recently, in considering the reauthorization of the Individuals with Disabilities Education Act (IDEA), the U.S. House of Representatives proposed changes that would require all states to include students with disabilities in their state assessment systems.

The Title II language of House bill 1986 is as follows:
  • "(E)(i)  a statement of any individual modifications in the administration of State or districtwide assessments of student achievement that are needed in order for the child to participate in such assessment; and
  • "(ii)  if the individualized education program team (hereafter referred to as the 'IEP team') established under section 614(d) determines that the child will not participate in a particular State or districtwide assessment of student achievement (or part of such an assessment), a statement of --
    • "(I)  why that assessment is not appropriate for the child; and
    • "II)  how the child will be assessed;

(pp. 14-15)

Historically, students with disabilities have been excluded from state assessment programs at unreasonable rates, sometimes as high as 100%. Most states exclude 50% or more of their students with disabilities. Only one state, Kentucky, includes all students in its state assessment program.

Why have so many students with disabilities been excluded from assessments? NCEO's research identified a number of problems in state guidelines.

These problems have created an unacceptable situation because students who are not included in assessments and other systems of accountability tend not to be included in educational reforms. Assessment systems should include, in one way or another, all students. Although this may not be immediately achieved in some states, it is possible to have rates of exclusion that are well below 50% - an intermediate goal that can be achieved right now in existing state assessments.

Three concepts are key to setting state assessment policies and procedures: participation in assessment, assessment accommodations, and reporting of results.

Students with disabilities fall into one of three general categories in relation to a state assessment system as shown in Figure 1. Many students with disabilities can participate in the regular assessment in the same manner as students without disabilities. Another group of students can deal with the content of the test but need modifications in the way the test is presented or the way that they provide answers. Finally, there are some students for whom the regular assessment is inappropriate. These students would need a different type of assessment, perhaps covering different content.

Figure 1:  Students with Disabilities in Relation to State Assessments


Students who can participate in regular assessment with
no accommodations needed

Students who can participate in regular assessment
with accommodations

Students who should receive a different assessment

 

Taking the First Steps

If you are involved in the assessment process, this guide is for you. It is designed to help state education agency (SEA) staff evaluate and revise their assessment policies and procedures in a way that promotes the participation of all students in some form of their state assessment programs. More specifically, it is intended for staff responsible for the state assessment system and staff responsible for the education of students with disabilities. It also could be used by test development contractors and stakeholders who are serving on state task forces to develop state assessment systems. And, this guide should be helpful to local education agency staff who wish to revise their own assessments to include all students.

You'll find this guide organized around eight steps for revising state assessment policies and procedures in ways that will significantly increase the participation of students with disabilities. It should be understandable to anyone who might need to address this topic (including teachers, counselors, parents, administrators, etc.). For further resource documents that address in much greater detail the need for these guidelines and the recommendations of various individuals and groups, see Appendix A.

The eight steps that are covered in this guide are as follows:

  1. Consider Your Assessment Context

  2. Decide What You Want To Do

  3. Develop Guidelines About Participation in Assessments

  4. Develop Guidelines About Assessment Accommodations

  5. Coordinate Procedures for Making Participation and Accommodations Decisions

  6. Develop Guidelines About Reporting Results of State Assessments

  7. Implement Revised Assessment Policies and Procedures

  8. Evaluate Implementation and Effects

Some of these steps may need to be repeated. Decisions at one step may require you to return to an earlier step to make adjustments. If you get confused, don't worry; there are worksheets to help you move through the steps.

Much can be learned from other states. You'll find many examples of state approaches throughout this guide; the state is sometimes identified and other times not. These examples give you information on the effects of policies and practices, in addition to information on what policies and procedures were used. You'll also find worksheets at the end of each step that can help you work through the process.


Step 1

Consider Your Assessment Context

  • Form an initial stakeholder team
  • Describe your current assessment system
  • Identify the philosophies and attitudes that drive your assessment system

Your first step in revising state assessment guidelines is to get a firm handle on the current situation. There are several components to this step.

Form an Initial Stakeholder Team

You need to form a small stakeholder team of seven to nine state and local people who are familiar with both the state assessment system and educational services for students with disabilities. It is important to involve stakeholders from local education agencies who are responsible for implementation of the state assessment system. These individuals can assist you in examining the current assessment system and the foundations on which it is based. Completing the chart below will help you organize and select your initial team. It is best to start small and expand as you find that you need additional expertise.

NAMES:

IS FAMILIAR WITH:
  • How the state assessment system was developed
  • Characteristics of the assessment (e.g., norms, standards, rubrics, etc.)
  • Purpose of the assessments
  • The current assessment contract/contractor
  • How results are reported and disseminated
  • Research on accommodations, modifications
  • What accommodations or modifications have been offered and provided
  • How local schools or districts implement the state system
  • How local schools or districts have tried to include students with disabilities
  • How IEP teams work
  • Needs and abilities of students with disabilities
  • Federal and state law
  • Funding (resources)

Describe Your Current Assessment System

It is important that you and your stakeholder team know every aspect of your current assessment system. The primary factors to consider are: (1) scope of assessment, (2) type of assessment, and (3) purpose of assessment. These factors must be considered for every assessment in your current system. The top half of Worksheet 1 is a good place to record your observations.

  1. Scope.  Start by identifying the scope of your assessment system.  Among the factors to list are:

  • number of large-scale assessments being administered

  • names of the assessments, and their relation to the full array of assessments

  • when the assessments are administered (e.g., spring)

  • at what levels the assessments are used (e.g., grades 4, 8, 12)

  • when assessments were started

  • when assessments were last revised

  • whether assessments are mandated by state law

These are basic factors that are not the focus of your policy and procedure revision efforts, but may need to be taken into consideration as you make revisions. Use this quick list of factors to focus on each assessment separately, but be certain to compare and interrelate all decisions so that the users do not become confused. A good technique might be to put all guidelines into one document, as North Carolina and other states have done.

North Carolina's Four Different State Assessment Systems:

End-of-Course Tests, which are tests administered at the end of certain high school courses.  These tests are said to provide school and school system level information on curricular goals.   They are also said to provide information for comparing individual student performance.  The scores from these multiple choice tests are required by the State Board of Education to be a part of the students' permanent records and high school transcripts.  It is recommended that they be used as part of students' final grades for the courses for which they have been developed.

End-of-Grade Tests, which include writing essays for Grades 4, 6, and 8, open-ended sections for reading, mathematics, and social studies for Grades 3-8, and multiple choice sections for reading, mathematics computation, mathematics applications, and social studies for Grades 3-8.  These tests are said to provide information on curricular goals for schools and school systems.  They are also said to provide a basis for comparing individual student performance.  The open-ended tests are said to measure problem-solving within a content area, while the multiple choice questions are said to measure achievement in specific areas.

Minimum Skills Diagnostic Tests, which are administered in Grades 3, 6, and 8 to determine whether students are performing at a level consistent with the state promotion standards.  Students not meeting the minimum competencies are scheduled for summer school.

Competency Tests, which include reading, mathematics, and writing assessments.  Passing these tests is one criterion for earning a high school diploma in North Carolina.  There are multiple opportunities to take the tests, and remediation is provided as well to those who fail any of the tests.

Each of these has a different purpose in the assessment system.

  2. Type of Assessment.   The specific type of assessment under consideration is another factor that should be listed and considered further when trying to gain greater participation of all students in an assessment system.  The type of assessment will have implications for the use of accommodations.  Among the most common types are:

  • Multiple choice
  • Extended response
  • Performance (events)
  • Portfolio
  • Project

These can be categorized according to the type of response requested, such as closed-ended response items and open-ended response items.

You can also describe the type of assessment in terms of the basis for scoring. For example:

  • Is the assessment norm-referenced?  If it is,

    • Is it norm-referenced on a population outside of the state?

    • Are state norms used?

    • Were students with disabilities included in the norming sample?

  • Is the assessment scored against an absolute standard or rubric*?  If it is,

    • Were students with disabilities included during instrument development?

    • Were students with disabilities considered when standards for performance were set?

*Rubrics indicate degrees to which an absolute standard has been met.  For example, Kentucky uses the terms Novice, Apprentice, Proficient, and Distinguished in its rubric.

Descriptive information will help you set the framework to revise the policies and procedures in your state assessment system.

  3. Purpose of Assessment.  Not all assessments in a state system serve the same purpose. Different assessment purposes may have different implications for policies and procedures. Many states have systems that include multiple assessments. Often, the different assessments are used for multiple purposes. This means that you will need to take time to examine each component of your assessment system.

Common purposes of state assessment systems are to:

  • Describe student competence to inform the public

  • Make comparisons among educational units (districts, schools)

  • Achieve system accountability (evaluate the extent to which standards are met)

  • Set policy based on student data

  • Make decisions that affect student progress (minimum competency tests, grade or course promotion tests, high school graduation exams)

  • Make decisions that affect the employment of school personnel (extent of student progress determines teacher salaries, school leadership)

The first four purposes are considered to be low stakes, while the remaining two are considered to be high stakes. A low-stakes assessment has no direct consequences for a particular group or for individuals within the group; a high-stakes assessment does.

A test that determines whether a student will graduate is considered to have high stakes for the student. When improved student test scores in a district determine whether the superintendent will be rehired, that test is said to be high stakes for the superintendent, but not necessarily for the students. Usually, a reported test that does not affect anyone (i.e., there are no rewards or sanctions) is considered to be low stakes.

You may find that reporting results creates higher stakes than intended. Statewide "report cards" on schools and school districts have tended to affect real estate values, public image, and local school board and superintendent tenure. These consequences go beyond the purported low-stakes purposes of the reports. For example, when the newspaper reports poor results for a district, parents may call for the resignation of the superintendent. Low stakes and high stakes are relative terms. Yet, they help in considering the intended and unintended consequences of various assessment policies.

If you use an assessment for high-stakes purposes, be sure to document and communicate the exact consequences of the assessment and how they are applied. Some of the questions you should answer are:

  • Are the "high stakes" of the assessment directed to a local education agency, a school, an administrator, a teacher, or a student?

  • Is a "warning" used to allow the subject of the high stakes to make changes before consequences are imposed (e.g., a school administrator is alerted to develop a restructuring plan or face a takeover by the district or state)?  (Note: This is a legal requirement)

  • Is an assistance program available to assist in remediation efforts?

  • Will the high stakes have unintended consequences?   What are they?

  • How are assessment consequences currently applied to students with disabilities?

Often, the consequence of exclusion is to make a district or an administrator look better (as when low-performing students who have cognitive disabilities are excluded from assessments designed to be used in making merit decisions for administrators). There is no intended consequence for individual students, yet there is a consequence for them.

Identify the Philosophies and Attitudes that Drive Your Assessment System

To revise your state assessment system, you must build on a foundation of philosophies and attitudes that recognize the need to be accountable for all students. It is essential that you identify the assumptions and philosophy under which the current assessment system operates and determine the extent to which these support an assessment that includes all students. Working to identify the assumptions of your revised assessment system is a primary focus of Step 2.

A state's assessment system generally is constructed to reflect the state's goals for its students or the state's curriculum framework. You need to evaluate the extent to which your state's curriculum framework and standards reflect the curricula for all students, including students whose educational program emphasizes life-role skills. If you look at other states, you'll find that several have defined goals in core academic areas (see the Delaware example), whereas others have established life-role goals (see the Kentucky example).

An assessment system that focuses only on academic skills represents a greater challenge to the participation of all students than does one that addresses the educational needs of all students, including those with more severe cognitive challenges. Academically-focused goals, however, should never be used as an excuse for a state assessment system that fails to promote the participation of all students.

Delaware's goals focus on standards within seven areas (mathematics, history, geography, economics, civics, science, English language arts).   For grades K-4 in economics, the three standards are:
  • Identify the basic needs and wants of individuals and families, and the types of activities undertaken in order to satisfy them.
  • Explain and demonstrate the use of money, barter and other media of exchange within markets.
  • Explain how prices in a market economy result from the interrelationship between supply and demand and competition.

Kentucky's goals focus on life-role skills:

  • Students are able to use basic communication and mathematics skills for purposes and situations they will encounter throughout their lives.
  • Students shall develop their abilities to apply core concepts and principles from mathematics, the sciences, the arts, the humanities, social studies, practical living, and vocational studies to what they will encounter throughout their lives.
  • Students shall develop their abilities to become self-sufficient individuals.
  • Students shall develop their abilities to become responsible members of a family, work group, or community, including demonstrating effectiveness in community service.
  • Students shall develop their abilities to think and solve problems.
  • Students shall develop their abilities to connect and integrate experiences and new knowledge from all subject matter fields with what they have previously learned and build on past learning experiences to acquire new information through various media sources.

 

Summary

The first step in evaluating and revising your state assessment policies is among the most important. By involving stakeholders up front, you increase the probability that your revised state assessment will be relevant to their needs and acceptable to them. You'll need to conduct periodic revisions and improvements. As you revise, remember to keep looking closely at the current status of your assessment system and at its effects (intended and unintended). By doing this, you will have a much better foundation for addressing the key issues as you work to improve your system.

Before you proceed to Step 2, take an inventory of what you and your stakeholders have shared. What is the assessment system like? Do you know when and to whom the assessment is administered? Do you know the types of assessments that are used? Do you know the purpose of each component of your state assessment system? Is the assessment considered to be high stakes or low stakes, and for whom?

Do you think that your assessment system was designed for all students in your system? Are stakeholders in agreement that an assessment system should provide accountability for all students in the system? Are all parts of the accountability system or assessment appropriate for all students?


Worksheet 1

Notes

Current State Assessment System


What does our assessment system look like?

Target grades/ages

 

 

Assessment components

 

 

Type of assessment

 

 

Purpose

 

 

High stakes or low stakes, and for whom

 

 

History, legal mandates, and other context factors

 

 

What attitudes and philosophy underlie the current assessment system?

 

 

 


Step 2

Decide What You Want To Do

  • Agree on general plans for revisions and identify available resources
  • Define your assumptions
  • Plan your approach

Your second step in revising state assessment guidelines is to agree on the general plans for revisions, define the assumptions that will underlie the assessment, and develop plans for making changes in guidelines.

You must involve stakeholders in this second step. You may want to add to the team used in Step 1. In this step, it is particularly important that you include parents and teachers. It is advisable that you involve individuals who can go back to a larger group of similar individuals and share the assumptions and plans developed in Step 2. Therefore, it is sometimes helpful to select official representatives of groups who have access to the boards, newsletters, and meetings of the larger group of stakeholders.

It also is essential that you devote sufficient time to Step 2. Without enough time to thoroughly involve key stakeholders, hash out assumptions, and make plans, you may jeopardize all other steps.

Agree on General Plans for Revisions and Identify Available Resources

Before you start a revision process, know generally what you want to do. This means you need to agree on your general goals and what resources you will have available to use in making revisions.

Goal of a revised assessment system. The specific goal that you set within the general framework of increasing the participation of students with disabilities in your state assessment can take many forms. Some of the possibilities are:

  • Increase participation in the current system:

Use the same assessment procedures as are now used, but change the participation guidelines, the accommodations guidelines, the reporting procedures, or any combination of the three.

  • Expand the system:

Add a new form of assessment that will be appropriate for students with different educational goals (e.g., students with more severe disabilities).

  • Revise the system:

Completely revise the entire assessment system.

Be sure you have agreement on what your revision goal is before you start. Among the questions you should ask are:

  • Does your goal call for development of new systems or only revisions of old systems?

  • Are there portions of the system that must be revised and others that might have lower priority for revision?

Identify available resources. You must garner the resources available to you for the revision process. Identify people resources, equipment resources, and knowledge resources. Relevant questions for you to answer include the following:

  • What existing stakeholder groups might support an effort to revise the assessment guidelines?

  • What accommodations already exist in the assessment system (e.g., Braille, large print, audio tapes, videos of instructions in American Sign Language, etc.)?

  • What equipment or resources exist in your state to help with assessment accommodations (e.g., regional SEA offices, agencies with Brailling or enlarging equipment, distribution centers for technology/assistive devices, pools of signers or readers)?

  • What sources of funds are available that could be tapped to assist in the revision process?

  • Who, or what agencies, in your state have particular expertise or access to knowledge that could help with the decision and improvement process?

You should take advantage of the experiences and efforts of other states. The National Center on Educational Outcomes (NCEO) has compiled a set of state guidelines, both for participation in assessments and for assessment accommodations. In addition, NCEO has produced a number of reports on these issues (see Resources at the end of this document). By reviewing these materials and contacting a few states, you will broaden your perspective on the options available to you.

Define Your Assumptions

It is important for you to state explicitly the assumptions that will underlie your revised assessment system. To help you and your stakeholders do this, a set of possible assumptions is provided below, together with a brief explanation of the reasons for each assumption. Using Worksheet 2, your team (and probably other stakeholders) should determine to what extent you agree with the assumptions. Then change them as needed, and add new ones that are identified.

Example of possible assumptions for an assessment system (based on NCEO proposed guidelines):

NCEO Assumption 1:  When data are collected for making policy decisions or for accountability, all students should participate in the assessment.   When a sampling procedure is used, the sample must be representative of all students.

Whenever an accountability system fails to include all students (or a representative sample of students), two major problems occur.  The first is that policy decisions will be made on the basis of incomplete or incorrect data, and thus may not be appropriate for all students.  The second is that when students are not included in the accountability system, the system tends to view itself as not responsible for the education of those students.

NCEO Assumption 2:  Not all students need to take the same assessment.

Being an "accountable" system does not require that all students take exactly the same assessment.  This assumption is the basis for using accommodations during assessment, but also applies to the notion of developing a different tool for measuring the performance of some students.

NCEO Assumption 3:   Participation, accommodation, and reporting decisions may differ as a function of the purpose of the assessment.

It is extremely important to always keep the purpose of the assessment in mind when thinking about the specific guidelines that are used for making participation decisions, accommodation decisions, and decisions about how data are reported. It would be inappropriate, for example, to require all students to participate in an assessment of college mathematics aptitude when not all students are in the high school mathematics curriculum. Another example of how purpose affects guidelines is that it probably is inappropriate to report data at the student level when data are used for district funding decisions.

NCEO Assumption 4:   State assessment programs should be fair and accurate.

Fairness and accuracy are relative terms. One of the problems with striving for "fairness" is that it is defined differently by different people. There is a tendency for some people to be overly concerned about the emotional stress that an assessment might create for a student with a disability. While our educational system makes sure that other students are experienced in taking assessments, it often fails to do so for students with disabilities. Fairness involves this kind of training, as well as ensuring that students have the opportunity to learn the concepts and skills that are the focus of assessments.

Accuracy refers to the extent to which an assessment reflects the student's skills when they are the focus of the assessment. An assessment should strive for accuracy, regardless of the characteristics (or disabilities) of the student.

NCEO Assumption 5:   Assessment procedures should be sensitive to the needs of students with disabilities.

An assessment that is responsive to the needs of individual students is one that allows them to receive information in the ways that they would typically receive information, and to respond in ways that they typically would respond. It is one that provides accommodations for the students' differing abilities to maintain attention, to sit for long periods of time, and so on.

Beyond this, it is important to include students with disabilities when testing an assessment to identify problematic item formats and to see whether there is need for more items at the lower end. In this way, instruments can be modified during the development phase (e.g., items dropped, modified, or added) to allow greater numbers of students with disabilities to participate meaningfully.

NCEO Assumption 6:   The purpose of accommodations is to achieve equity, not advantage.

Accommodations are to achieve equity, not to gain advantages over others. A person who wears glasses does not do so to make his or her sight better than that of other people. Glasses are worn to achieve the same level of sight as that of most people (the standard). Similarly, people who use hearing aids do so to achieve hearing levels as close as possible to those of people with normal hearing. This is the purpose of all accommodations: to bring the person using the accommodation to the same level (on some dimension) as most other people.

NCEO Assumption 7:   Assessment programs should make clear that high standards are expected for all students.

There is no intention to lower standards when students with disabilities are included in assessments. In fact, the objective in including students with disabilities in assessments is to make sure that they, along with all students, are held to high standards. The belief is that all students can achieve to higher levels than they are now achieving.

Still, it is important to recognize that there will be a range of performance on assessments. State advisory boards should decide the range of performance permitted for each content standard.

NCEO Assumption 8:   Assessment should be consistent with students' instructional programs and accommodations.

Just as it is believed that students should not be assessed on something if they have not had the opportunity to learn it, it is believed that students should not be assessed on topics for which they have not received instruction or the appropriate accommodations. Similarly, new accommodations should not be introduced at the time of assessment if they have not been a part of the student's instructional program.

This assumption can be a dangerous one, however. The original decision not to have a student participate in certain types of instruction (e.g., science) should be questioned first. Some states that have included all students in all assessments have found that students had been excluded from instructional content in which they should have participated. Including the students in the assessment brought to light their inappropriate exclusion from the instruction.

In the same way, the extent to which all appropriate accommodations are used during instruction should be questioned. Assessment programs should avoid the use of accommodations that have never been used during instruction.

NCEO Assumption 9:   Reports of assessment results must include all students, including any student who does not take the assessment.  Students who do not take the assessment should still be counted as part of the sample when calculating average scores.

This assumption is a critical one for helping to remove incentives for excluding students from assessments. There is extensive evidence that the rate of exclusion has a significant effect on average scores: when students who tend to score lower are excluded and not counted in the denominator, average scores go up. The incentive for exclusion in this situation is very high. This is particularly true when the results are publicized and their effects reported (e.g., how many students received zero scores).
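A minimal worked example (with purely hypothetical numbers) of the effect described above: when the lowest-scoring students are dropped from both the numerator and the denominator, the reported average rises even though no student's performance has changed.

```python
# Illustrative only: hypothetical scores showing how exclusion inflates a reported average.
scores = [85, 78, 92, 60, 45, 40]   # six students; the last two score lowest
included_only = scores[:4]          # the two lowest scorers are excluded and dropped from the denominator

average_all = sum(scores) / len(scores)                             # 66.7 when every student is counted
average_after_exclusion = sum(included_only) / len(included_only)   # 78.8 after exclusion

print(f"All students counted:  {average_all:.1f}")
print(f"Low scorers excluded:  {average_after_exclusion:.1f}")
```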

Plan Your Approach

After a set of assumptions has been agreed upon, you need to plan for development and revisions. You may wish to form another advisory committee at this time. This group would help you consider all of the ramifications of your approach.

You will have to decide what assistance you need in order to proceed. It could come either from within your own agency or from the outside (e.g., University-based personnel within the state, a technical assistance center, or external contractors). If you choose to use external contractors, you should consider developing a request for proposals and conducting a proposal review process to select a group to help develop the guidelines or alternative assessments you might need. If you have an ongoing contract with someone for the overall assessment, it might be best to build greater participation of students with disabilities and accommodation developments into the contract requirements.

A simple format for documenting your plan is outlined in Worksheet 3. This format is based on the steps in this self-study guide. You also may wish to insert additional steps and delete others after you have completed all eight steps in this guide. You might want to chart where tasks overlap and which tasks relate to or depend on the prior completion of other tasks.

You will find it most helpful to go through the following sequence when documenting your plan:

  1. Outline the steps/stages in the process, perhaps starting from the last step and developing your plan backwards.   Integrate your steps with milestones in the overall plan.

  2. Project how the timelines in the overall plan might relate to your steps.  Starting from the last step, determine when the steps must be completed, how long they will take, and when they must start.

  3. Determine who must be involved in each step and assign an individual to lead the step (even if a team is involved).

  4. Estimate the resources needed for each step and insert additional steps as needed to ensure that the resources are obtained.

  5. Revise the plan as necessary to reflect the realities of time, personnel, and resources.

Whatever sequence you use when developing your plan, it is important to document it and share it with others to obtain their feedback.

Summary

In Step 2, you are deciding what you want to do. After agreeing on general plans and identifying resources, you are ready to define the specific assumptions upon which your revised assessment system will be based. After this is done, and you have obtained broader feedback on the assumptions, you can set your plans for pursuing revisions and improvements. Once again, it is important to involve stakeholders in all of these processes.

 


Worksheet 2

Assumptions About Statewide Assessment Systems
and
Students with Disabilities


Directions:

Review each assumption and check those with which you agree.  Determine what must change in the others before you can agree.  Add additional assumptions as desired.
_____
  1. Any time data are collected for the purpose of making policy or accountability decisions, we must include all students.  When a sampling procedure is used for an assessment, the sample must be representative of all students.

_____
  2. Not all students need to take the same test.

_____
  3. Participation, accommodations, and reporting decisions may differ as a function of the purpose of the assessment.

_____
  4. State assessment programs should be fair and accurate.

_____
  5. Assessment procedures should be sensitive to the needs of students with disabilities.

_____
  6. Accommodations should achieve equity, not advantage.

_____
  7. Assessment programs need to make clear that the same high standards are expected of all students.

_____
  8. Assessment should be consistent with students' instructional programs and accommodations.

_____
  9. Reports of results must include students with disabilities, including those taking alternative assessments or for whom information was provided by informed respondents.  If a student was excluded from testing for any reason, that student should still be included in the denominator used when calculating averages.

 

Worksheet 3

Format for Documenting Your Development Plan

Start End Who Step/Stage
      Step 1: Consider Your Assessment Context
  • Form an initial stakeholder group
  • Describe your current assessment system
  • Identify philosophies and attitudes that drive your assessment
      Step 2:  Decide What You Want to Do
  • Agree on general plans for revisions; identify available resources
  • Define assumptions
  • Plan your approach
      Step 3:  Develop Guidelines About Participation in Assessment
  • Review your goal for assessment revision
  • Write specific guidelines that reflect your assumptions and meet the goal
  • Evaluate the written guidelines
      Step 4:  Develop Guidelines About Assessment Accommodations
  • Write specific guidelines
  • Evaluate the written guidelines
      Step 5:  Coordinate Procedures for Making Participation and Accommodation Decisions
  • Develop a flowchart to guide decisions
  • Use exemptions sparingly until system is in place
  • Document decisions
      Step 6:  Develop Guidelines for Reporting the Results of State Assessments
  • Consider the implications of reporting
  • Write specific guidelines
  • Evaluate the written guidelines
      Step 7:  Implement Revised Assessment Policies and Procedures
  • Negotiate roles for state assessment contractor in installation and maintenance
  • Orient/train State staff to support revisions
  • Obtain/train local personnel to implement system changes
      Step 8:  Evaluate Implementation and Effects
  • Determine usefulness, implementation, and effects on staff
  • Follow up with students who were included, excluded, or given an alternative assessment

 


Step 3

Develop Guidelines About Participation in Assessments

  • Examine your current written guidelines and evidence of actual practice
  • Review your goal for improving the assessment system
  • Write specific guidelines
  • Evaluate the written guidelines

Your third step in revising state assessment guidelines is to agree on the words to write about participation. Your words must reflect the assumptions and goals established in Step 2; these words become part of your guidelines for the assessment of students with disabilities. In order to generate words that are understood and acceptable, you must plan to review and evaluate your goal and guidelines.

Remember, stakeholder involvement is essential. Take care to ensure that all critical perspectives are represented (e.g., assessment, disability, local, state). Many more individuals and agencies may have an investment in the state assessment system than those revising it originally realize.

If your state has more than one assessment, consider each separately. In most cases, the different assessments are used for different purposes. When considering each assessment, take into account its purpose.

Examine Your Current Written Guidelines and Evidence of Actual Practice

To examine who participates in your state assessment system, you need to examine both written guidelines and evidence of actual practice.

Most states now have some existing written guidelines that address the participation of students with disabilities in assessments. Many times these are combined with guidelines on the participation of students who are learning English (variously referred to as English language learners, students with limited English proficiency (LEP), English as a Second Language (ESL) students, and other terms). It is probably best to address students with disabilities separately from students with limited English proficiency.

The location and exact wording of the current guidelines should be made available for further reference. Some states include their guidelines in statutes and others have them in regulations. However, most states have them in separate, non-legal documents.

States with Written Guidelines on the Participation of Students
with Disabilities in Statewide Assessments (1995)


There are a number of ways to obtain evidence of the actual participation of students with disabilities in assessments. First, you should ask several individuals (e.g., SEA staff, local assessment personnel, special educators, and parents) questions such as:

  • To what extent are local school personnel aware of the guidelines?

  • What is the general attitude about the guidelines among SEA staff, local assessment personnel, special educators, and parents?

  • To what extent do people say they follow the guidelines?

  • How is implementation of the guidelines checked?

 

Next, you need to look at assessment data to assess the actual participation of students with disabilities in the state assessment. Questions that should be addressed include:

  • How are samples drawn and do they include students in separate schools and students in separate classes?

  • Is there documentation of exemptions?

  • Do rates of nonparticipation in an assessment differ for students with different characteristics?

  • What percentage of students with disabilities in the state have data that you can use?

These kinds of information may not be part of your state database. You might have to do a considerable amount of searching to find them.

An even better way for you to document participation, of course, is to conduct a study in a sample of schools to determine actual participation rates. If this is unreasonable for your state to pursue at this time, you can get a good estimate by talking to many people, or by surveying or interviewing key respondents.
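If you do gather counts from a sample of schools, the participation and exclusion rates themselves are simple to compute. The sketch below uses purely hypothetical numbers to show the calculation.

```python
# Illustrative only: estimating participation and exclusion rates from hypothetical sample data.
enrolled_with_disabilities = 1200   # students with disabilities enrolled in the sampled schools
assessed_with_disabilities = 540    # of those, the number who actually took the state assessment

participation_rate = assessed_with_disabilities / enrolled_with_disabilities
exclusion_rate = 1 - participation_rate

print(f"Participation rate: {participation_rate:.0%}")   # 45%
print(f"Exclusion rate: {exclusion_rate:.0%}")           # 55%
```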

Review Your Goal for Improving the Assessment System

Before proceeding further, you need to review and restate your goal for improving the assessment and the assumptions that underlie the new system. Remember, the goal that you decided upon in Step 2 was one of the following:

  • Use the same assessment procedures, but change the participation guidelines

  • Add a new form of assessment that will be appropriate for students with different educational goals

  • Completely revise the assessment system

Write Specific Guidelines

The specific guidelines that you write will vary somewhat with the original goal that was identified. You can consider the following possible approaches:

Use the same assessment procedures, but change the participation guidelines. Many states can immediately increase the participation rate of students with disabilities beyond the 50% level simply by changing written guidelines (and ensuring that they are followed). The possible ways in which this could be done might be identified through a brainstorming session involving stakeholders. Some example guidelines are:

  • Use a team to decide on the participation of a student only if the team has received training in the importance of including all students.

  • Require the decision maker to document the reasons for exclusion of any student.  And, ensure that these reasons are examined for appropriateness.

  • Provide a checklist of considerations in making the decision about participation in the assessment.

  • Require documentation of the number of students excluded for various reasons.

  • If an assessment is a high-stakes assessment (such as a graduation exam), require the student and the student's parents to sign off on a form giving the reasons for exclusion and the consequences of nonparticipation (such as not receiving a regular diploma).

  • Examine and remove words that provide a reward to a student for not participating in an assessment (such as when any student given an exemption from an assessment automatically receives a regular diploma).

Linked with the notion of changing participation guidelines is the notion of accommodations. You will find these discussed in further detail as part of Step 4.

Add a new form of assessment for students with different educational goals. Depending on the nature of the primary assessment, your guidelines might need to identify students who should be exempted from the regular assessment. In such cases, you will want to look at a new form of assessment for these students. The new assessment form should be developed only for those students who are not working on the same kinds of goals as other students. For example, students with more severe cognitive disabilities may have educational plans that target instruction on self-help and independent living skills. These goals may be related to reading in that they include the discrimination of key signs and symbols in the environment. An assessment that checks recognition of words like "restroom," "men," "women," "stop," "elevator," "exit," and "information," along with a variety of symbols, could thus complement the state's reading assessment.


Options that are currently being used by states to obtain statewide data on students who need a different assessment include:

Kentucky developed an Alternate Portfolio Assessment system for students who have moderate to severe cognitive disabilities that prevent them from completing a regular course of study even with program modifications. These students do not participate in the other components of Kentucky's assessment system. Two key elements of the Alternate Portfolio Assessment system are: (1) scores of students participating in this assessment are weighted equally with those of students participating in the regular assessment for the school's accountability purposes, and (2) entries in the student's portfolio are not specified, other than that each entry must be related to the state's Academic Expectations. An Alternate Portfolio Advisory Committee is charged with identifying the Academic Expectations to be assessed within the Alternate Portfolio process. Overall, 28 expectations critical to maintaining the integrity of functional programming for students participating in the Alternate Portfolio process have been identified. These and other expectations are incorporated into the assessment system.

Michigan developed separate performance-based measures for students with specific disabilities. They assess the unique components of the education of each category of student (e.g., mobility skills for students with visual impairment, American Sign Language skills for students with hearing impairments, use of assistive devices for students with orthopedic impairments), as well as the general requirements of the Michigan Educational Assessment Program.

Assessments can be developed in a number of ways. Kentucky wrote a subcontract for the development of its alternative portfolio system as a part of its contract with the firm developing the overall assessment system. That firm, in turn, worked with a project at the University of Kentucky to develop and pilot-test the system, train scorers, and operate the system for the first few years.

Michigan funded a project through a private corporation in order to develop its sets of outcomes and the related assessment procedures. This firm, which specialized in disability research, drafted the instruments and conducted extensive studies to check feasibility and validity.

The American Institutes for Research (AIR) developed an instrument called the Performance Assessment for Self-Sufficiency (PASS), which uses an informed respondent. All work was conducted under a contract with the U.S. Office of Special Education Programs.

Developing assessments via a contract might take you one or two years. In the interim, you could use an alternative assessment, such as parent and teacher reports on an existing adaptive behavior measure.

Your criteria for deciding when a student's goals differ from those of the regular curriculum, and thus when the student should be assessed differently, must be clear and stringent. It would be unwise (and even unethical) to place a student in a functional living skills curriculum solely for the purpose of allowing the student to participate in the different assessment system rather than the regular assessment system. A possible approach to determining which students should participate in a different assessment might be to develop a checklist of characteristics. Students who meet criteria, such as those outlined here, would then be assessed using another measure that produces statewide data.

Possible Criteria for Using a Different Assessment for a Student:

  • The student's demonstrated cognitive ability and adaptive behavior could prevent the student from completing the course of study, even with program modifications and adaptations.

  • The student's current adaptive behavior requires extensive direct instruction in multiple settings to accomplish the application and transfer of skills necessary for functional application in domestic community living, recreational/leisure, and vocational activities in school, work, home, and community environments.

  • The student's inability to complete the course of study may not be the result of excessive or extended absences; nor may it be primarily the result of visual or auditory disabilities, specific learning disabilities, emotional-behavioral disabilities, or social, cultural, or economic differences.

  • The student is unable to apply or use academic skills at a minimal competency level in natural settings (such as the home, community, or work site) when instructed solely or primarily through school-based instruction.

  • For the test grade level, the student is unable to:

    1.  Complete a regular diploma program, even with extended school services, schooling, and program modifications and adaptations
    2.  Acquire, maintain, and generalize skills, and demonstrate performance without intensive, frequent, and individualized community-based instruction.

Completely revise the assessment system. It was expected that as states began to develop new performance assessments, they would do so in a way that made the assessments truly appropriate for all students. This has not happened. But, the idea of completely starting over is still reasonable for some states. For example, many states have recently revised their curriculum frameworks and are in the process of making major changes in their state assessments, presenting an opportunity to be more inclusive. Starting from scratch probably is the best way to create an assessment system that really includes all students.

Some states have completely revised their assessment systems, while others are starting from scratch in developing new parts of their assessment programs.

Kentucky created an assessment system that really includes all students. It did so by first identifying the desired results of education for all students. In this way, it started with the assumption that all students must be assessed on the same goals. At the same time, Kentucky recognized that some students needed to demonstrate their attainment of the goals in nontraditional ways.

Oregon is including students with disabilities as it develops a new component for its assessment system. As it prepares to develop a new science assessment, it is starting with the assumption that all students with disabilities will participate in the assessment.

Evaluate Your Written Guidelines

As soon as guidelines are written, you should evaluate them. This can be done in two ways:

  1. Have individuals in the field read the guidelines and react to them.  Direct their input with some key questions to consider.  Some possible questions are suggested in Worksheet 4.  You will want to include open-ended questions so that people can provide other kinds of input.

  2. Run a field test of the guidelines and the assessment before they are actually used.  Start with known information, such as who is enrolled in the schools where the field test occurs, and examine participation in light of that information.

Although you might find it easier to do only one of the evaluation steps, there are definite advantages to doing both. A major advantage is that you would have better knowledge of how things will work during the actual administration of an assessment (possibly avoiding too many surprises).

Summary

Regardless of the goal of the assessment and the assumptions under which you are operating, revising participation guidelines without considering accommodations is an incomplete approach to revising state assessment policies and procedures. It is extremely important that you approach this step (Step 3) and Step 4 in a coordinated manner. In many cases, written guidelines for participation will depend on available assessment accommodations.


Worksheet 4

Key Questions to Consider When Evaluating Written Guidelines

  1. Clarity  ---  Are the guidelines easy to read and use?  What about the guidelines is unclear?





  2. Sufficiency  ---  What conditions are not covered, e.g., do the guidelines adequately address:
    1. All types of students?


    2. All school settings?


    3. Ages/Grades?


  3. Necessity  ---  Is everything in the guidelines really needed or is there too much detail or structure?




  4. Potential effects  ---  What will be the results of using the guidelines?  Will they:
    1. Increase the appropriate involvement of students with disabilities?


    2. Avoid negative side effects?


    3. Improve consistency across the state?


 

 


Step 4

Develop Guidelines About Assessment Accommodations

  • Examine your written guidelines and evidence of actual practice
  • Write specific guidelines
  • Evaluate the written guidelines

Your fourth step in revising state assessment guidelines is to agree on what words to use when writing about accommodations, adaptations, and modifications. The words in your guidelines should reflect your assumptions. It is very important to develop the guidelines about accommodations with a stakeholder group.

Examine Your Current Written Guidelines and Evidence of Actual Practice

Examine your current assessment accommodations. Just as you described participation in terms of both written guidelines and actual practice, you must describe assessment accommodations in written guidelines and actual practice.

What constitutes an accommodation? The possibilities are almost unlimited. Some of the more common accommodations are shown below, organized according to where the accommodation is made: in the presentation of items to the student, in the response required of the student, in the setting or place where the assessment occurs, or in the scheduling or timing of the assessment. Other possible accommodations and combinations may occur as well, but are not easily categorized into one of these four groups.

Common Testing Accommodations

Presentation Format
  • Braille editions
  • Use of magnifying equipment
  • Large-print edition
  • Oral reading of directions
  • Signing of directions
  • Interpretation of directions

Response Format
  • Mark response in book
  • Use template for responding
  • Point to response
  • Use sign language
  • Use typewriter or computer

Setting
  • Alone, in a study carrel
  • With small groups
  • At home, with supervision
  • In special education class

Timing/Scheduling
  • Extended time
  • More breaks
  • Extending sessions over several days

Most states now have written guidelines that address the use of accommodations during assessments. You will find it helpful to identify and locate current guidelines on assessment accommodations, and reproduce them for further study. Guidelines might appear in regulations, test administration manuals, program guidelines for different disabilities, IEP training materials, or any other format. Sometimes accommodations vary for different assessments. In this case, it might be helpful for you to create a matrix to identify which accommodations are allowed for which assessments.
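As one illustration, such a matrix can be kept as a simple lookup that test coordinators and IEP teams consult. The sketch below is hypothetical: the assessment names are placeholders and the allowed/not-allowed entries are invented, while the accommodation labels are drawn from the Common Testing Accommodations list above.

```python
# A minimal sketch of an accommodations-by-assessment matrix.
# All entries and assessment names are hypothetical placeholders.
allowed = {
    "Braille edition":       {"Assessment A": True,  "Assessment B": True},
    "Oral reading of items": {"Assessment A": True,  "Assessment B": False},
    "Extended time":         {"Assessment A": True,  "Assessment B": True},
    "Use of computer":       {"Assessment A": False, "Assessment B": True},
}

# Print the matrix so it can be reviewed at a glance.
assessments = ["Assessment A", "Assessment B"]
print(f"{'Accommodation':<24}" + "".join(f"{a:<16}" for a in assessments))
for accommodation, cells in allowed.items():
    row = "".join(f"{('Allowed' if cells[a] else 'Not allowed'):<16}" for a in assessments)
    print(f"{accommodation:<24}{row}")
```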

Besides making distinctions among different assessments, some states organize accommodations around categories of disability. Other states simply list all possible accommodations. Still other states defer any discussion of accommodations to IEP teams. At this point, it is most important to document what is in the written guidelines. The organization of accommodations, as well as the specific accommodations, may change as a result of your revision process.

To collect evidence of the use of accommodations, determine whether the use of accommodations is recorded on the assessment protocol. If it is, and this is included in data reports (or can be obtained from a technical report), you have the most direct evidence possible. If it is not available, you should ask several individuals questions such as:

  • To what extent are local school personnel aware of the guidelines on accommodations?

  • What is the general attitude about the guidelines among SEA staff, local assessment personnel, special educators, and parents?

  • To what extent do people say they follow the guidelines?

  • How is implementation of the guidelines checked?

Next, you should look at the actual use of accommodations. For example, you might conduct a survey of a sample of schools to determine what accommodations were used, how it was decided that they were appropriate, and how their use was documented. Be certain to survey teachers who can provide perspectives on a range of student characteristics and needs (for example, learning disabilities and attention deficits).

Write Specific Guidelines

Making decisions about allowable accommodations is, in many ways, more complicated than making decisions about participation guidelines. Little research exists on whether the use of accommodations affects the validity of an instrument. Such research is needed.

Modifications should still be used, however, perhaps with the scores identified so that they can be examined further. [This is different from the practice of "flagging," which has been used by some data collection programs as a way to identify students whose test results are questionable because the test was not administered in the standard way. Some college entrance tests have used flagging to alert admissions officials to assessments conducted under nontraditional procedures.]

The lack of research data on accommodations has contributed to inconsistencies in accommodation practices across states. For example, some states use accommodations that other states specifically prohibit. Among these are reading items to a student, allowing extended time, and out-of-level testing.

Remember, not all students with disabilities will need modified assessments. Yet, modifications in assessments should be used when needed. Accommodations that teachers currently use with students during instruction and that are typically used outside of school (e.g., in work and community settings) should be appropriate accommodations for use during assessments. Still, this simple statement can be translated into many different written guidelines. Current state guidelines about assessment accommodations range in length from one sentence to more than 60 pages!

As new technologies and procedures for accommodations and adaptations are developed, they can be included in the array of possible accommodations and adaptations for instruction and testing. In the meantime, each state can set its own policies, informed about what other states are doing.

Some of the themes that appear in states' written guidelines, and the states in which they appear are:

Documentation requirements beyond IEP ---  Mississippi, Nevada, New Hampshire, New Mexico

Acceptability of out-of-level testing  ---   Delaware, Georgia, Kansas

Use of same accommodation in assessment as in instruction  ---  Alabama, Arizona, Delaware, Georgia, Illinois, Kentucky, Maine, Maryland, Michigan, Mississippi, New Hampshire, Pennsylvania, Vermont, Virginia

Because of the lack of research, your state will have to make some judgments about whether any given accommodation is one that you would consider to be OK (no questions about the validity of the resulting assessment), Tentative (the assessment will be used, but there are questions about its validity), or Not OK (the accommodation will not be used because of validity concerns).

Worksheet 5 lists an array of accommodations, plus spaces for you to add others that have been identified by your stakeholder group. Each is to be rated according to three possible ratings. Complete the checklist for each separate test and subtest. For example, you might allow different accommodations on a reading test than you would allow on a mathematics test. You may want to have individual stakeholders complete the worksheet first, then hold a discussion to reach agreement on the final decision regarding each. The three possible ratings are:

OK = The accommodation will be allowed without question and scores may be identified for further study.

Tentative = The accommodation will be allowed but scores will be identified for further study.

Not OK = The accommodation will not be allowed.
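
If individual stakeholders complete Worksheet 5 separately before the group discussion, their ratings can be tallied to show where agreement and disagreement lie. The sketch below assumes hypothetical ratings from three stakeholders on a single test; it is only one way such a tally might be done.

  from collections import Counter

  # Hypothetical Worksheet 5 ratings from three stakeholders for one test.
  ratings = [
      {"Braille Version": "OK", "Extended Time": "OK", "Read Entire Assessment": "Tentative"},
      {"Braille Version": "OK", "Extended Time": "Tentative", "Read Entire Assessment": "Not OK"},
      {"Braille Version": "OK", "Extended Time": "OK", "Read Entire Assessment": "Not OK"},
  ]

  # Count the ratings given to each accommodation so the group discussion can
  # focus on the accommodations where stakeholders disagree.
  tallies = {}
  for stakeholder in ratings:
      for accommodation, rating in stakeholder.items():
          tallies.setdefault(accommodation, Counter())[rating] += 1

  for accommodation, counts in tallies.items():
      print(accommodation, dict(counts))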

Some states have developed common accommodations and have made them available to LEAs. For example, North Carolina makes available audio tape, Braille, and large-print editions of several of its statewide tests. This approach increases the chance that assessment results will be valid and comparable.

Lists of trained signers and trained volunteer readers can be made available and some performance events might be adapted in advance to ensure maximum participation of students with disabilities. Of course, the most appropriate way to ensure the availability of appropriate accommodations during assessments is to ensure that those accommodations are available for instruction. New accommodations should not be introduced at the time of assessment.

Evaluate the Written Guidelines

As with participation guidelines, accommodation guidelines should be evaluated soon after they are written. Two procedures are recommended for doing this:

  1. Ask field reviewers to read and react to the accommodation guidelines.  Perhaps you could direct their input by providing them with some key questions to consider.  Include both objective and open-ended questions.

  2. Implement the guidelines for accommodations in a field test.  Start with known entities (those who are in the schools where the field test occurs) and examine the use of accommodations in light of these known entities.

Doing both of these evaluation procedures is advantageous because they will help you better assess how things will work during the actual administration of an assessment.

Summary

The focus of Step 4 has been on accommodation decisions. It is good to be aware of what accommodations are used in teaching and what accommodations are permitted by society. A guiding principle for you to think about is: accommodations used during assessment should be consistent with accommodations used during instruction.

These decisions are highly related to participation decisions. Step 5 helps you think about putting the two together.


Worksheet 5

Checklist of Accommodations

Presentation Accommodations OK Tentative Not OK
Braille Version
Interpret Directions
Large Print Version
Read Directions
Read Entire Assessment
Sign Directions
Sign Entire Assessment
Use of Magnifying Glass
Setting Accommodations OK Tentative Not OK

At Home Administration

In Small Group
In Special Education Setting
In Study Carrel
Individual Administration
Timing/Scheduling Accommodations OK Tentative Not OK
Extended Time
More Breaks in Testing Across Days
More Breaks in Testing During Same Day
 
Response Accommodations OK Tentative Not OK
Assistance in Marking Response
Mark Answer in Book
Oral Response
Point to Response
Sign Language Response
Use of Computer/Typewriter
 
Other Accommodations OK Tentative Not OK
IEP Defined
Out of Grade Level Assessment
Use of Prompts/Focusing Strategies
Use of Talking Calculator

 


Step 5

Coordinate Procedures for Making Participation and Accommodation Decisions

  • Develop a flowchart to guide decisions
  • Use exemptions sparingly until the system is in place
  • Document decisions

The fifth step in revising state assessment guidelines requires you to step back and coordinate the policies on participation in assessment and the use of assessment accommodations. It also addresses what you can do as you work on revising your system to be accountable for all students.

Develop a Flowchart to Guide Decisions

Draw out a picture of what is to happen in your assessment, given different students and different assessments. You may want to do this for the way things are now, the way they will be after initial revisions are made, and the way they should be when all revisions are implemented.
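
Before drawing the flowchart itself, it can help to write the decision sequence out in a step-by-step form. The sketch below is a hypothetical three-way decision (regular assessment, regular assessment with accommodations, or a different assessment); the questions and their order should come from your own guidelines.

  # Hypothetical sketch of a participation decision sequence, written out
  # before it is drawn as a flowchart.  Replace the two yes/no questions
  # with the criteria in your state's guidelines.
  def participation_decision(can_work_with_regular_content, needs_accommodations):
      """Return one of three participation categories for a student."""
      if not can_work_with_regular_content:
          return "different assessment"
      if needs_accommodations:
          return "regular assessment with accommodations"
      return "regular assessment without accommodations"

  print(participation_decision(True, False))
  print(participation_decision(True, True))
  print(participation_decision(False, False))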

Use Exemptions Sparingly Until the System is in Place

Exemptions are sometimes called exclusions, noneligibles, and excuses. Whatever they are called in your system, avoid them. As you work on developing your revised assessment system, you may find that for a period of time (for example, before you are able to develop a different assessment system for students with more severe cognitive disabilities), you will need to have a mechanism for deciding which students should participate in the different assessment system. These are the students who will be exempted from the existing assessment until another assessment is in place or who will be assessed using an interim measure such as an adaptive behavior scale.

It is advisable that you require a name to be associated with the exemption decision. This assigns accountability for the exemption decision to someone.

You will need to develop a form that requests an exemption for an individual student. The form should always include the caution that if there is any doubt about whether the student should participate in the assessment, then that student should participate.

Ideally, your exemption form will provide some specific guidelines to help whoever fills out the form make the decision. For example, one guideline might be to compare the characteristics of the student's educational program, or the student's IEP objectives, with what the assessment tests (see box).

We recommend that parents be informed of the exemption decision and sign off on the form. This is essential if the assessment has high stakes for the individual student. If a parent is the person requesting an exemption, a similar procedure should be followed. It might be useful to add an item asking the parent what changes would be needed (in the assessment, in the preparation of the student, in accommodations, etc.) for the student to participate in the assessment.

A list of student characteristics could help make decisions about who will participate in a specific assessment.  The following checklist is one possibility:

  1. Can the student work independently?

  2. Can the student work with 25 to 30 other students in a quiet setting?

  3. Can the student work continuously for 20 to 30 minute periods?

  4. Can the student listen and follow oral directions given by an adult or an audio tape?

  5. Can the student use paper and pencil to write short answer or paragraph length responses to open ended questions?

  6. Can the student understand and answer questions in a multiple choice format?

If the answer to any of the questions is "no," then go to an Accommodations Checklist to determine accommodations for the student to use during the assessment.

This list is an example and not a model for what the form should include for your state.  You and your stakeholders will need to determine that based on the purpose and other characteristics of the assessment.
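
As a minimal sketch of the screening logic described above, the example below records a yes/no answer for each of the six questions and refers the team to an accommodations checklist if any answer is "no." The answers shown are hypothetical.

  # Hypothetical answers to the six screening questions (True = yes).
  answers = {
      "works independently": True,
      "works with 25 to 30 students in a quiet setting": True,
      "works continuously for 20 to 30 minutes": False,
      "follows oral directions from an adult or audio tape": True,
      "writes short answers or paragraphs with paper and pencil": True,
      "answers multiple choice questions": True,
  }

  # Any "no" answer sends the team to an accommodations checklist.
  if all(answers.values()):
      print("Student can participate in the regular assessment.")
  else:
      concerns = [question for question, yes in answers.items() if not yes]
      print("Go to an Accommodations Checklist; areas of concern:", concerns)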

Document Decisions

Many states require IEP teams to document the decisions to use accommodations or to exempt students. This documentation should include a description of the options considered and why each was rejected. Worksheet 6 provides an example of a checklist that could be used to decide whether a student should be assessed using a measure other than the regular assessment.

Some states (e.g., Delaware) require that documentation of decisions be available in the student's IEP folder. During the annual child count audit, folders are checked to ensure that appropriate decisions were made. This approach, together with more stringent guidelines, has had a significant effect on the participation rate of students with disabilities in the statewide assessment system.

An increasing number of states also are documenting the type of accommodation used on the test record form. Such data will be essential for future research on the effects of accommodations. More importantly, the SEA must have a mechanism to review the accommodation decisions and determine their appropriateness. An effective mechanism is a state panel that reviews requests for new forms of accommodation and determines reasonableness. This group can serve as an advisory group for conducting research on accommodations and can provide ongoing guidance for making revisions to the state guidelines.

Summary

In Step 5, you have considered a total approach to developing guidelines for participation in assessments and for accommodations. Participation and accommodation policies must be coordinated with each other.

Coordinating policies and guidelines about participation and accommodations also requires that you think about what happens as an assessment system is being revised. You will need to consider exemptions. For most states, exemption policies will change as you go through the revision process. For a few states (such as those that relied solely on an informal process for making decisions, or that left the decision to an IEP team without requirements for documentation), this step will involve writing a new set of guidelines for exemptions, as well as a form to help guide and monitor exemption decisions.

 


Worksheet 6

A Checklist to Decide Whether a Student Should Enter an Alternative Assessment
(Adapted from work by the Kentucky State Department of Education)


Directions:

Provide this checklist to school teams making decisions about inclusion in assessment (e.g., IEP teams, multi-disciplinary teams, admissions and release teams).  Justify each decision and document in the student's record the basis for the decision, using current and longitudinal data (such as performance data across multiple settings in the areas of academics, communication, cognition, social competence, recreation/leisure, domestic and community living, and vocational skills; behavior observations in multiple settings; adaptive behavior; and continuous assessment of progress on IEP goals and objectives).
_____
  1. Student can take the regular assessment without accommodations

_____
  2. Student has been receiving the following accommodations during the course of instruction and will need these same accommodations during assessment (Specify):




_____

_____


_____





_____




_____


_____

_____

_____

  3. Student meets the following criteria for a different assessment:

  1. The student has demonstrated cognitive ability and adaptive behavior that could prevent completing the course of study even with program modifications and adaptations.

  2. The student's current adaptive behavior requires extensive direct instruction in multiple settings to successfully transfer the skills necessary to function in domestic community living, recreational/leisure activities, and vocational activities in school, work, home, and community environments.

  3. The student's inability to complete a course may not be the result of: excessive or extended absences; visual or auditory disabilities; specific learning disabilities; emotional-behavioral disabilities; or social, cultural, and economic differences.

  4. The student is unable to apply or use academic skills at a minimal competency level in natural settings (such as the home, community, or work site) when instructed solely or primarily through school-based instruction.

  5. For the tested grade level, the student is unable to:
  1. Complete a regular diploma program even with extended school services, schooling, program modifications, and adaptations.

  2. Acquire, maintain, and generalize skills and demonstrate performance without intensive, frequent, and individualized community-based instruction.

 


Step 6

Develop Guidelines for Reporting the Results of State Assessments

  • Consider the implications of reporting
  • Write specific guidelines
  • Evaluate the written guidelines

The sixth step in revising state assessment guidelines involves how you approach obtaining agreement about reporting results. Describing how the results of assessments are reported, and how they relate to the participation of students with disabilities and the use of accommodations, is as important as describing the actual participation and use of accommodations.

In some states, the decision about whether a student's assessment results are reported is based primarily on the amount of time the student is in general education classrooms.  For example, this approach (modified by the concept of partial testing) is used in North Dakota:

  1. If the student is mainstreamed in 50 percent or more of the core courses being tested, ... the student's test results are to be included in class, grade, district, and state averages.

  2. If the student is mainstreamed in less than 50 percent of the core courses, ... the student's test results are not to be included in class, grade, district, and state averages.

  3. If a student who has an IEP does not take all sections of the test, or if the student takes the test under other than standard testing procedures, ... the student's test results should not be included in the class, grade, district, and state averages.

(North Dakota Department of Public Instruction, 1994, p.1)
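
As a simplified sketch, the North Dakota rule quoted above could be expressed as a short function; the field names are hypothetical, and the sketch ignores details the published guidance may add.

  # Simplified sketch of the North Dakota reporting rule quoted above,
  # using hypothetical field names for a student's testing record.
  def include_in_averages(percent_core_courses_mainstreamed,
                          took_all_sections, standard_procedures):
      """Return True if results count in class, grade, district, and state averages."""
      if percent_core_courses_mainstreamed < 50:
          return False
      if not took_all_sections or not standard_procedures:
          return False
      return True

  print(include_in_averages(75, True, True))   # included
  print(include_in_averages(40, True, True))   # not included: mainstreamed < 50%
  print(include_in_averages(75, False, True))  # not included: partial testing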

You should write and evaluate guidelines about reporting. Here again, be sure to involve key stakeholders. At this point, it also is extremely important to include parents, administrators, and others.

Consider the Implications of Reporting

When considering reporting, you need to think about both the reporting of participation and the actual assessment results.

As you report participation, consider:

  • Is the number of students excluded from an assessment reported (including those students in other placements, e.g., home instruction, residential settings, hospitals)?

  • Is the number of students with disabilities who are eligible for assessment reported?

  • For which units of the educational system (state or local education agency, school) is the number of students reported?

When you report results, consider:

  • Are the scores of students with disabilities included in the general results reported, without separation of their scores?

  • Are the scores of students with disabilities reported separately from those of other students?

  • For which units of the educational system are the results of students reported?

Each of these reporting choices can have both positive and negative effects.

One of the initial assumptions in Step 2 was that guidelines for participation and accommodations might vary as a function of assessment purpose. The purpose will influence how assessment results will be used and reported.

High stakes purposes. If there are to be rewards or sanctions for teachers, schools, or districts, you will have to ensure that there are no incentives for excluding students with disabilities from the assessments.

Possible Pitfalls of High Stakes Assessments

If a state automatically excludes all students on IEPs from assessments, and schools are held accountable for their test scores, referrals to special education are likely to increase.  This was recently documented in New York.  If all students in a school are included in school-level reports, it is possible that the school might refer students with disabilities to centralized programs located in other buildings.

There are many ways that reporting procedures can be instituted to overcome the pitfalls of high stakes assessments. For example, North Carolina assigns a random chance score when the number of excluded students exceeds 5%. Maryland assigns a zero. Kentucky assigns the scores of all students to their neighborhood schools, regardless of the school they actually attend.
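
The sketch below illustrates the general idea of these adjustments. The chance-score range, the 5% threshold, and the data are illustrative only; they are not the actual formulas used by North Carolina, Maryland, or any other state.

  import random

  # Illustrative sketch of score adjustments that remove the incentive to
  # exclude students; the values below are not any state's actual formula.
  def adjusted_scores(reported_scores, n_excluded, n_eligible, policy):
      """Append substitute scores for excluded students under a given policy."""
      scores = list(reported_scores)
      exclusion_rate = n_excluded / n_eligible
      if policy == "chance" and exclusion_rate > 0.05:
          # Assign a chance-level score (here, 20 to 30 on a 100-point scale).
          scores += [random.randint(20, 30) for _ in range(n_excluded)]
      elif policy == "zero":
          # Assign a zero for every excluded student.
          scores += [0] * n_excluded
      return scores

  school_scores = [78, 85, 91, 66]
  print(adjusted_scores(school_scores, n_excluded=2, n_eligible=20, policy="chance"))
  print(adjusted_scores(school_scores, n_excluded=2, n_eligible=20, policy="zero"))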

Low stakes purposes. If the results are used for program adjustment purposes or otherwise have low stakes, the ramifications for exclusion and for reporting results are less significant, especially if other assessment mechanisms are available. The purpose of the assessment should define how the resulting information will be used and reported.

Worksheet 7 lists possible ways to break out data for reporting. Consider each one in light of the purpose of the assessment, the possible incentives that will be created for excluding students from the assessment (if that is still an option), and the extent to which each meets the needs of your state.

Write Specific Guidelines

It is crucial to report inclusion/exclusion rates along with test results. When exclusion rates differ among units, comparing their results becomes misleading, so exclusion rates must be available whenever results for different units are reported together.

Before writing guidelines, decide the extent to which data should be reported separately for students with disabilities, and how those data are to be used. For example, if you need data that are representative of a particular population of students, it might be necessary to over-sample that set of students due to lower incidence rates. Reporting results by disability at the school level could be a violation of confidentiality if there are too few students with those disabilities in the school's population.
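
The sketch below illustrates both points: exclusion rates are reported next to each unit's results, and a disability breakout is suppressed when the group is too small to protect confidentiality. The data and the minimum group size of 10 are hypothetical.

  # Hypothetical school records: (school, eligible, excluded, mean score for
  # all tested students, number of students with disabilities, their mean).
  schools = [
      ("School A", 120, 6, 74.2, 15, 68.0),
      ("School B", 95, 19, 81.5, 4, 70.5),
  ]

  MIN_GROUP_SIZE = 10  # hypothetical minimum size for a separate breakout

  for name, eligible, excluded, mean_all, n_swd, mean_swd in schools:
      exclusion_rate = excluded / eligible
      line = f"{name}: mean = {mean_all}, exclusion rate = {exclusion_rate:.1%}"
      # Suppress the breakout when too few students would be identifiable.
      if n_swd >= MIN_GROUP_SIZE:
          line += f", students with disabilities mean = {mean_swd} (n = {n_swd})"
      else:
          line += ", students with disabilities: breakout suppressed (group too small)"
      print(line)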

The Kentucky state education agency formed a "Disability and Diversity Advisory Committee" as it was developing its assessments.  This committee was established to review issues and make recommendations for the development, implementation, and inclusion of students with disabilities in the KIRIS accountability program.  The decisions of this committee were communicated through a program advisory, the vehicle responsible for conveying policy decisions to local schools and school districts.

Evaluate the Written Guidelines

You should evaluate reporting guidelines soon after they are written. As with other guidelines, we recommend that two procedures be used:

  1. Ask field reviewers to read and react to the reporting guidelines.  Direct their input by having them ask questions like those in Worksheet 7.

  2. Try out the reporting plan that would emerge from the guidelines.  For example, enter data (real or hypothetical) in the way that you would under the guidelines that have been developed.  Show these data to state and school personnel, to parents, to legislators, and to other policymakers who are among the target audiences for the reports.

Completing the second of these two recommendations is the most critical step. As you follow these procedures, try to discern how data might be misread or result in unintended consequences given your reporting guidelines.

Summary

In Step 6, you have focused on reporting results of the assessment. It is important for you to consider the possible implications of various reporting approaches, as well as to write specific guidelines and to evaluate the written guidelines.


Worksheet 7

Possible Options for Reporting the Data of Students with Disabilities
[Note:  All options would include reporting all exemptions]


Reporting Option 1:  Data for all students are reported together, with no distinctions made for who the students are (e.g., the number of students with disabilities).

What are the possible consequences of this option given the purpose of your assessment?

 

What incentives will this option create for excluding students from the assessment?

 

To what extent does this option meet the needs of your state?

Reporting Option 2:  Data for students with disabilities are aggregated separately from data for all other students, and the two sets of data (all students except those with disabilities as one group, and students with disabilities as another group) are reported separately.

What are the possible consequences of this option given the purpose of your assessment?

 

What incentives will this option create for excluding students from the assessment?

 

To what extent does this option meet the needs of your state?

Reporting Option 3:  Data for students with disabilities are aggregated separately from data for all other students, as well as aggregated with the data for other students.  Both sets of data (all students as one group, and students with disabilities as a separate breakout) are reported.

What are the possible consequences of this option given the purpose of your assessment?

 

What incentives will this option create for excluding students from the assessment?

 

To what extent does this option meet the needs of your state?

Other Reporting Options  (Identify other options that exist and answer the above three questions about each option.)

 

 

 


Step 7

Implement Revised Assessment Policies and Procedures

  • Determine roles for those involved in implementing the revised assessments
  • Prepare state education agency staff for implementation
  • Prepare local personnel for implementation
  • Prepare consultants, readers, and test implementers

The seventh step in revising state assessment policies and procedures involves implementing the revised assessment system. You will follow four major tasks to implement the system: (1) working with the state assessment contractor, if one exists, as well as determining roles for those involved in the revised system, (2) training SEA personnel, (3) training LEA personnel and parents, and (4) preparing consultants, readers, and test implementers.

If past systems exempted or excluded students with disabilities, this step will require careful planning and may require a multi-year phase-in process. Worksheet 8 helps you to start planning how you will introduce the revisions in your system.

Task 1
Determine Roles for Those Involved in Implementing the Revised Assessments

Companies that have been contracted to develop, score, and report assessments for your state can play a powerful role in supporting your revised assessments. They are usually responsible for forms development, local staff training, forms distribution, scoring (including analysis of demographic data) and reporting. Whatever policy decisions were made up to this point must be reflected in each of these activities. A close working relationship with the contractor is essential to ensure complete understanding and consistent implementation of your state's philosophy, policies, and procedures.

If you want to report data separately by disability, the student record forms must include a coding system. Many states have discovered too late that there was no way to identify which scores reflected data from students with disabilities and there was no way to report inclusion or exclusion rates. If you also want to collect data on accommodations, you could put that information in the coding as well. Some consistent statewide accommodated formats might be built into the contract to ensure uniformity (for example, large print, templates, professional audio tapes, etc.). General instruction manuals should include policies and guidelines about the inclusion of students with disabilities.
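
As a minimal sketch, the records below carry hypothetical disability and accommodation codes; the point is only that the record form (or the contractor's data layout) must capture enough information to report participation and accommodation use later.

  # Hypothetical student records with disability and accommodation codes.
  records = [
      {"student_id": "0001", "disability_code": None, "accommodations": []},
      {"student_id": "0002", "disability_code": "LD", "accommodations": ["extended time"]},
      {"student_id": "0003", "disability_code": "VI", "accommodations": ["Braille edition"]},
      {"student_id": "0004", "disability_code": "LD", "accommodations": []},
  ]

  # With these codes, participation and accommodation use can be summarized.
  with_disabilities = [r for r in records if r["disability_code"] is not None]
  with_accommodations = [r for r in with_disabilities if r["accommodations"]]

  print("Students with disabilities tested:", len(with_disabilities))
  print("Tested with accommodations:", len(with_accommodations))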

Make sure that when the contractor conducts local staff training, it includes directions about assessing students with disabilities. This could include information on sample selection, participation and accommodation decisions, and reporting and interpretation of results.

You must come to agreement also on how policy interpretation questions will be handled and communicated to others. For example, the contractor might be required to establish and maintain a statewide review panel to judge whether newly proposed accommodations would invalidate the measures.

Your statewide assessment contractor should have a clear understanding of the types of reports expected and how those reports will be communicated within the LEAs. Initial agreement can ease tensions and reduce delays in producing needed information.

Task 2
Prepare State Education Agency Staff for Implementation

Perhaps the most frequently overlooked aspect of a state implementation effort is adequate preparation of SEA staff. Their inadequate involvement and/or training can do much damage to your efforts to be accountable for the educational results of all students. At least three types of staff must be knowledgeable about the philosophy, policies and procedures.

The first type of staff that must be knowledgeable is the staff responsible for the overall assessment system. These individuals, usually trained in tests and measurement, will most frequently work with local testing coordinators. They usually serve as guardians of test validity and reliability. They must understand why the participation of all students is important, and what it means for comparability of results. They must understand and commit to the established procedures that will determine accommodations and/or use of alternative measures so that all SEA staff communicate the same message to the local education agencies.

A second group includes staff in the state office who have special responsibilities for the education of students with disabilities. These individuals, usually trained in special education, are the ones who work most closely with local personnel who participate in planning and implementing programs for students with disabilities. Frequently such staff are vocal advocates for individual student rights and protections. They must understand how the rights of students with disabilities are protected in an educational system that is accountable for all students. They also must know how the overall assessment system can produce information to help them. They, too, must be prepared to deliver a consistent SEA message.

The third group is the regional technical assistance staff who are used by local personnel to conduct training and to help with trouble-shooting on a variety of issues. Regional staff must be trained in the same way as staff in the state department of education. Worksheet 9 gives you a possible format to develop a communication plan for working with your personnel.

Task 3
Prepare Local Personnel for Implementation

Teachers, principals, local assessment personnel, local special education coordinators and parents must become aware of how the revised assessment system will function. Changes in state policies and procedures fail unless those policies and procedures are carried out in local education agencies.

As you organize the state assessment system training sessions with general educators, you must include a segment on dealing with students with disabilities. IEP team leaders (and participants, if possible) need to know what is expected of them when they document decisions, arrange for accommodations, and use results to guide future decision making. Teams that will be scoring portfolios or open-ended test results must be trained to be consistent.

You will find that one-shot, large group orientation sessions on a regional basis are seldom sufficient to reach all individuals at the depth needed. It is essential that you build training into other training events (e.g., training of IEP teams on accommodating instruction) and provide on-site consultation. Making an 800 number "hot line" or an e-mail address available to respond to inquiries can reduce costs and increase responsiveness (not to mention consistency).

Since parents are an essential part of the IEP team, it is important for you to also think about, and make specific plans for, parent training. Such training could begin with informational pieces sent home that address the importance of participation of all students in statewide assessments. Additional training could then be provided to groups of parents about how to make recommendations about the type of assessment that is appropriate, and accommodation needs, for their children with disabilities.

Task 4
Prepare Consultants, Readers, and Test Implementers

You need to conduct training for those who will implement tests. Contractors will often use field-based teams to conduct the assessments (e.g., performance events). Interpreters will need specific information on what they can do and what they cannot do during the state assessment. Similarly, readers will need specific training on how to appropriately read for an assessment, and ways in which it is inappropriate to read. Recorders (e.g., scribes) will need the same kind of training as well.

Summary

In Step 7, you concentrated on implementing the revised assessment system. A strategic plan for how this will happen is usually worth thinking about and preparing. Your plan, as discussed here, should take into account any contractors that your state might have working on the assessment system, state education agency staff, and local personnel, including parents.

Collect information from these key stakeholders along the way on how they perceive the revised assessment system and the implementation process. Their responses to occasional surveys will help you fine-tune the process and move implementation along more quickly.


Worksheet 8

Notes about Introducing Revisions to the State System


Roles of the State Assessment Contractor and Other Personnel

_____  Mark record forms with disability and/or accommodation data

_____  Provide statewide accommodated formats

_____  Write guidelines in instruction manuals

_____  Train local personnel

_____  Handle policy interpretation questions

_____  Produce special reports

 

Ensure SEA Staff Readiness

_____  SEA assessment personnel

_____  SEA staff with special responsibilities for education of students with disabilities

_____  Regional SEA staff

 

Training and Supporting Local Personnel

_____  Plans to train teachers, principals, local assessment personnel, local special education coordinators, and parents

_____  Plans for ongoing support (e.g., 800 number or e-mail address)

 

Prepare Consultants, Readers, and Test Implementers

_____  Assessment team members (if they exist)

_____  Personnel who will assist with accommodations

 

 


Worksheet 9

Communication Plan

Message  |  Who Needs to Know the Message  |  How the Message Will Be Delivered  |  When the Message Will Be Delivered  |  Who Will Give the Message
 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

       

 


Step 8

Evaluate Implementation and Effects

  • Select an evaluation strategy
  • Follow up on students with disabilities
  • Report findings

The final step is not really a final step. Developing an assessment system that incorporates accountability for all students will take a number of iterations. Each change should be driven by data on the extent to which the guidelines were useful and were actually followed, and data on the effects of the changes on participation.

The evaluation outline in Worksheet 10 will help you choose which evaluation questions are important and which data sources and techniques you will use to answer the questions.

Select an Evaluation Strategy

Both the usefulness and the implementation of the guidelines should be evaluated. There are several approaches that you could take. Among these are commissioning a third-party evaluation study, conducting surveys and telephone interviews, and holding focus group meetings.

While you might fund a formal third party study that involves observations and in-depth interviews at local sites, it is probably more feasible for you to conduct mail surveys, phone interviews or focus group interviews. Teachers, principals, assessment coordinators, special education coordinators, parents, and students should be included.

If you decide to conduct focus groups, these interviews will help you evaluate the usefulness and implementation of the guidelines. Focus groups involve six to nine individuals with common characteristics (e.g., all teachers) who respond to a facilitator's questions in a group setting and whose responses are enriched by the responses of others.

Topics for either a survey or focus group might include:

  • Issues of awareness of the guidelines

  • Perspectives on quality of the guidelines (e.g., clarity, feasibility, sufficiency)

  • Perceptions of the extent to which guidelines were actually followed

  • Information on which components or options were used most and least often, and why

  • Reported effects of the guidelines on the personnel involved

The evaluation questions in Worksheet 10 could be re-phrased as survey questions or focus group interview probes.

Follow Up on Students with Disabilities

It will be important for you to check on how educators are adhering to the intent of the guidelines. Data should be collected on all of the ways that a student with disabilities could participate in the assessment. The primary categories of participation would be:

  1. Students included in the regular assessment without accommodations --- Ideally, all students who need accommodations receive them and students who do not need them do not receive them. You can expect that there will be a sample of students who were in the regular assessment without accommodations. Check these students to ensure that none of them needed accommodations. If some of them did, the reasons for the failure to provide accommodations should be determined.

  2. Students included in the regular assessment with accommodations --- The primary concern here is what kinds of accommodations were used. Your follow-up should determine the extent to which the accommodations were really needed and whether the accommodations were limited to those in the guidelines (these are assumed to retain test validity). At some point, you need to determine what effect, if any, the accommodations had on test performance.

  3. Students in an alternative assessment --- Most states require a specific person in the district to sign off for each student who does not participate in the regular assessment. You can collect and review a sample of these sign-offs and individual student records to make sure that students who could have participated in the regular assessment with accommodations and adaptations were not placed in the alternative assessment. You could then conduct interviews with team members who made the decisions to see what led them to the conclusions for those students.

If your state is in transition to using a new assessment system and you have some students who are given exemptions from the assessment, you should follow up on them as well. Use the approach used for students in the alternative assessment.

Report Findings

Prepare summary information showing the percentages of students falling into each of the categories noted above. Compare these summaries to traditional exclusion rates and to average test scores. You can use the results to revise the guidelines and improve next year's training activities.
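
A sketch of such a summary, using hypothetical participation counts, is shown below.

  # Hypothetical counts of students with disabilities in each follow-up
  # category, used to prepare the summary percentages described above.
  counts = {
      "regular assessment, no accommodations": 410,
      "regular assessment, with accommodations": 265,
      "alternative assessment": 55,
      "exempted during transition": 20,
  }

  total = sum(counts.values())
  for category, n in counts.items():
      print(f"{category}: {n} students ({n / total:.1%})")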

Summary

In Step 8, you have fully implemented a revised assessment system and are collecting the follow-up information needed to determine whether it is working as expected. Although collecting evaluation information is often viewed as an add-on, this step is critical when examining a revised assessment system.


Worksheet 10

Possible Evaluation Questions


To what extent are various stakeholders aware of the guidelines?

 

 

To what extent are various stakeholders knowledgeable about the content of the guidelines?

 

 

To what extent do various stakeholders understand the need for the guidelines?

 

 

What is the perceived quality (e.g., clarity, feasibility, sufficiency) of the guidelines among different stakeholders?

 

 

What are the perceptions of various stakeholders on the extent to which the guidelines were actually followed during the administration of the assessment?

 

 

What do assessment participants perceive to be allowable assessment options?

 

 

What effects of the assessment guidelines (intended or unintended) have been observed?

 

 

 

 


Some Final Thoughts . . . .

As you follow the steps in this Study Guide, you will probably encounter challenges that have not been mentioned. Refer to the Resource Materials and Sources for Technical Assistance sections of this guide for further help.

NCEO is interested in hearing your comments, especially about your experiences in revising and implementing existing guidelines. NCEO would also like to hear whether your state is developing an alternate assessment in order to include all students in your assessment, even those with the most severe cognitive disabilities.

Contact:

NCEO
350 Elliott Hall
75 E. River Road
Minneapolis, Minnesota 55455

Phone: 612-626-1530
Fax: 612-624-0879
E-mail: scott027@tc.umn.edu

See the NCEO World Wide Web Home Page: http://www.cehd.umn.edu/NCEO

 


Resources
Students with Disabilities in
National and Statewide Assessments

Allington, R.L., & McGill-Franzen, A. (1992). Unintended effects of educational reform in New York. Educational Policy, 6, (4), 397-414.

This article reports a significant increase in retention and identification of students for special education services during a period of increased high-stakes assessment from 1978 to 1989.

Bell, G. (1994). The test of testing: Making appropriate and ethical choices in assessment. Oak Brook, IL: North Central Regional Educational Laboratory.

This document addresses many topics related to choices that need to be made in relation to testing. In addition to general ethical assessment responsibilities, it addresses the selection and development of testing programs, preparing students for an assessment, administering the test, and interpretation and use of test results. Several issues are addressed in each of these areas.

Brauen, M., O'Reilly, F., & Moore, M. (1994). Issues and options in outcomes-based accountability for students with disabilities. Rockville, MD: Westat.

This document provides a framework for creating an outcomes-based accountability system that includes students with disabilities. It addresses issues and options for four decisions: (1) selecting outcomes, (2) establishing performance standards, (3) identifying assessment strategies, and (4) identifying accountable parties.

Houser, J. (1995). Assessing students with disabilities and limited English proficiency (Working Paper 95-13). Washington, DC: U. S. Department of Education, Office of Educational Research and Improvement.

This paper presents a summary of issues that have been addressed related to the inclusion of students with disabilities and students with limited English proficiency in the National Assessment of Educational Progress (NAEP). Major topics include: data validity and current policy; current status; data validity and alternative assessment; and next steps.

McGrew, K.S., Thurlow, M.L., Shriner, J.G., & Spiegel, A.N. (1992). Inclusion of students with disabilities in national data collection programs. (Technical Report 2). Minneapolis, MN: National Center on Educational Outcomes.

This document presents an analysis of the degree to which individuals with disabilities are involved in national and state data collection programs. Recommendations for increasing the participation of individuals with disabilities are provided.

McGrew, K.S., Thurlow, M.L., & Spiegel, A.N. (1993). The exclusion of students with disabilities in national data collection programs. Educational Evaluation and Policy Analysis, 15, 339-352.

This article reports on the extent to which students with disabilities are excluded from our national data collection programs. Included are data collection programs in the Department of Education, Department of Health and Human Services, Department of Commerce, and the National Science Foundation.

Mehrens, W.A. (1993). Issues and recommendations regarding implementation of high school graduation tests. Oak Brook, IL: North Central Regional Educational Laboratory.

This monograph, developed through NCREL's Regional Policy Information Center, summarizes approaches to high school graduation requirements that are being used in the North Central Region and examines issues that arise about the implementation of graduation tests. Fifty specific recommendations are provided, and a sequence of tasks for designing a program for a high school graduation test is presented.

National Academy of Education. (1993). The trial state assessment: Prospects and realities (Third Report of the National Academy of Education Panel on the Evaluation of the NAEP Trial State Assessment: 1992 Trial State Assessment). Stanford, CA: American Institutes for Research.

This document provides a comprehensive analysis of the state level administration and reporting of NAEP. Among the topics covered are the exclusion of students on Individualized Education Programs, including charts showing the rates of inclusion and exclusion by state.

National Transition Network. (1995). Inclusion of transition-age students with disabilities in large-scale assessments. Minneapolis, MN: University of Minnesota, National Transition Network.

This document describes the functions of large-scale assessments, how they are used for state and individual decision making, and national and state policies related to their use. Issues of relevance to transition-age students are highlighted.

NCEO. (1995). 1994 state special education outcomes. Minneapolis, MN: National Center on Educational Outcomes.

This is one of the annual state reports prepared by the National Center on Educational Outcomes. It focuses on state activities in assessing the results of education for students with disabilities, as well as including a special report on the status of students with disabilities in relation to Goals 2000 activities.

NCEO. (1996). 1995 state special education outcomes. Minneapolis, MN: National Center on Educational Outcomes.

This is the most recent of the annual state reports prepared by the National Center on Educational Outcomes. It focuses on the information that states collect on participation, exit, achievement, vocational, and post-school outcomes as well as how accessible data are on students with disabilities. Longitudinal trends over five years are examined.

North Central Regional Educational Laboratory. (1996). State student assessment programs database 1994-1995. Oak Brook, IL: NCREL.

This document presents the results of a survey of state assessment personnel that is conducted annually. It provides information on content areas covered, grade levels assessed, types of assessments, and so on for many additional variables.

Office of Technology Assessment. (1992). Testing in American schools: Asking the right questions. Washington, DC: U.S. Government Printing Office.

This document examines technological and institutional aspects of educational testing. It provides a broad view of a range of issues related to testing and accountability.

Phillips, S.E. (1994). Legal implications of high-stakes assessment: What states should know (Regional Policy Information Center Report). Oak Brook, IL: North Central Regional Educational Laboratory.

This report was written to "help state and national education policymakers avoid legal challenges to their student assessment programs." It does so by explaining the relevant legal and psychometric issues.

Robinson, G.E., & Brandon, D.P. (1994). NAEP test scores: Should they be used to compare and rank state educational quality? Arlington, VA: Educational Research Service.

This report examines the problems with using NAEP test scores to rank and/or compare states, noting that most of the variation in state average test scores can be explained by the effects of demographic characteristics over which schools have no control.

Thurlow, M.L., Scott, D.L., & Ysseldyke, J.E. (1995). Compilation of states' guidelines for including students with disabilities in assessments (Synthesis Report 17). Minneapolis, MN: National Center on Educational Outcomes.

This report compiles the written laws, regulations, and guidelines that states have on the participation of students with disabilities in statewide assessments and includes a summary of the major themes and trends.

Thurlow, M.L., Scott, D.L., & Ysseldyke, J.E. (1995). Compilation of states' guidelines for accommodations in assessments for students with disabilities (Synthesis Report 18). Minneapolis, MN: National Center on Educational Outcomes.

This report compiles the written laws, regulations, and guidelines that states have about the use of accommodations in statewide assessments. It includes a summary of the major themes and trends in the written accommodations guidelines.

Thurlow, M.L., Shriner, J., & Ysseldyke, J.E. (1994). Students with disabilities in the context of educational reform based on statewide educational assessments. Paper presented at the annual meeting of the American Educational Research Association, New Orleans.

This is a paper that was presented at AERA to summarize the status of statewide assessments in terms of the participation of students with disabilities, the accommodations that are allowed, and the nature of written guidelines.

Thurlow, M.L., Ysseldyke, J.E., & Anderson, C.L. (1995). High school graduation requirements: What's happening for students with disabilities? (Synthesis Report 20). Minneapolis, MN: National Center on Educational Outcomes.

This report summarizes and analyzes current state graduation requirements and how they are applied to students with disabilities. Variability from one state to another is demonstrated.

Thurlow, M.L., Ysseldyke, J.E., & Silverstein, B. (1993). Testing accommodations for students with disabilities: A review of the literature (Synthesis Report 4). Minneapolis, MN: National Center on Educational Outcomes.

This paper reviews literature about testing accommodations for people with disabilities. It addresses policy and legal considerations, technical concerns, minimum competency, certification/ licensure testing efforts, existing standards, and accommodations.

Ysseldyke, J.E., & Thurlow, M.L. (1994). Guidelines for inclusion of students with disabilities in large-scale assessments (Policy Directions No. 1). Minneapolis, MN: National Center on Educational Outcomes.

This policy report explains ways to include students with disabilities in large-scale assessments, use possible accommodations and adaptations, and monitor how well the intent of the guidelines is followed. Included are recommendations and a list of resources.

Zlatos, B. (1994). Don't test, don't tell. The American School Board Journal, 181 (11), 24-33.

This article describes the "academic red-shirting" phenomenon, suggesting that this and similar practices skew the way we rank our schools. It is suggested that some schools succumb to a temptation to make their scores look artificially good, resulting in children being left out of tests.

Ysseldyke, J.E., Thurlow, M.L., & Geenen, K. (1994). Educational accountability for students with disabilities (Policy Directions Number 3). Minneapolis, MN: National Center on Educational Outcomes.

This report explains ways to move toward an accountability system that is different from one relying on process data (child count). It examines possible alternative approaches, data needed to demonstrate that education is working for students with disabilities, and barriers to the collection of these data.

Ysseldyke, J.E., Thurlow, M.L., McGrew, K.S., & Shriner, J.G. (1994). Recommendations for making decisions about the participation of students with disabilities in statewide assessment programs (Synthesis Report 15). Minneapolis, MN: National Center on Educational Outcomes.

This report summarizes a meeting that discussed including students with disabilities in statewide assessment programs. Included are recommendations for inclusion, accommodations, and reporting results.

Ysseldyke, J.E., Thurlow, M.L., McGrew, K.S., & Vanderwood, M. (1994). Making decisions about the inclusion of students with disabilities in large-scale assessments (Synthesis Report 13). Minneapolis, MN: National Center on Educational Outcomes.

This report summarizes a meeting held to address issues in making decisions about the inclusion of students with disabilities in large-scale assessments. Recommendations are made for inclusion, accommodations, and future research.

Ysseldyke, J.E., Thurlow, M.L., & Geenen, K. (1994). Implementation of alternative methods for making educational accountability decisions for students with disabilities (Synthesis Report 12). Minneapolis, MN: National Center on Educational Outcomes.

This report covers a seminar of state directors of special education and state assessment coordinators from six states. It examines the challenges of collecting data to make accountability decisions for students with disabilities and makes recommendations for future practice.

Ysseldyke, J.E., & Thurlow, M.L. (Eds.). (1993). Views on inclusion and testing accommodations for students with disabilities (Synthesis Report 7). Minneapolis, MN: National Center on Educational Outcomes.

Included are six experts' responses to questions about assessment inclusion and accommodations issues.

 


Sources for Technical Assistance

Regional Educational Laboratories

Priorities:  Reform programs; strategies for scaling up effective teaching and learning processes

Number of Laboratories:  10


Region:  Appalachian (Kentucky, Tennessee, Virginia, West Virginia)

Appalachia Educational Laboratory, Inc. (AEL)
1031 Quarrier Street
P.O. Box 1348
Charleston, WV 25325

Dr. Terry L. Eidell, Executive Director
Specialty Area:  Rural Education

Phone: (304) 347-0400, (800) 624-9120
Fax: (304) 347-0487
Email: eidellt@ael.org

Region:  Western   (Arizona, California, Nevada, Utah)

WestEd
730 Harrison Street
San Francisco, CA 94107

Dr. Dean H. Nafziger, Executive Director
Specialty Area:  Assessment and Accountability

Phone:  (415) 565-3000
Fax:  (415)  565-3012
Email:  tross@fwl.org

Region:  Central   (Colorado, Kansas, Missouri, Nebraska, North Dakota, South Dakota, Wyoming)

Mid-Continent Regional Educational Laboratory (McREL)
2550 South Parker Road #500
Aurora, CO 80014

Dr. Timothy Waters, Executive Director
Specialty Area:  Curriculum, Learning and Instruction

Phone: (303) 337-0990
Fax:  (303) 337-3005
Email:  twaters@mcrel.org

Region:  Midwestern   (Illinois, Indiana, Iowa, Michigan, Minnesota, Ohio, Wisconsin)

North Central Regional Educational Laboratory (NCREL)
1900 Spring Road #300
Oak Brook, IL 60521

Dr. Jeri Nowakowski, Executive Director
Specialty Area:  Technology

Phone: (708) 571-4700
Fax:  (708) 571-4716
Email:  nowakows@ncrel.org

Region:   Northwestern  (Alaska, Idaho, Montana, Oregon, Washington)

Northwest Regional Educational Laboratory (NWREL)
101 Southwest Main Street #500
Portland, OR 97204

Dr. Ethel Simon-McWilliams, Executive Director
Specialty Area:  School Change Processes

Phone:  (503) 275-9500, (800) 547-6339
Fax:  (503) 275-9489
Email:  simone@nwrel.org

Region:   Pacific  (Hawaii, Guam, Mariana Islands, Marshall Islands, Micronesia, Palau)

Pacific Region Educational Laboratory (PREL)
828 Fort Street Mall #500
Honolulu, HI 96813

Dr. John W. Kofel, Executive Director
Specialty Area:  Language and Cultural Diversity

Phone:  (808) 533-6000
Fax:  (808) 533-7599
Email:  kofelj@prel-oahu-1.prel.hawaii.edu

Region:   Northeastern  (Connecticut, Maine, Massachusetts, New Hampshire, New York, Rhode Island, Vermont, Puerto Rico, Virgin Islands)

Northeast and Islands Laboratory at Brown University (LAB)
144 Wayland Avenue
Providence, RI 02906-4384

Dr. Mary Lee Fitzgerald, Executive Director
Specialty Area:  Language and Cultural Diversity

Phone:  (401) 274-9548
Fax:  (401) 421-7650
Email:  Mary_Fitzgerald@Brown.edu

Region:   Mid-Atlantic  (Delaware, District of Columbia, Maryland, New Jersey, Pennsylvania)

Mid-Atlantic Laboratory for Student Success (LSS)
933 Ritter Annex
13th Street And Cecil B. Moore Avenue
Philadelphia, PA 19122

Dr. Margaret Wang, Executive Director
Specialty Area:  Urban Education

Phone:  (215) 204-3001
Fax:  (215) 204-5130
Email:  mcw@vm.temple.edu

Region:   Southeastern  (Alabama, Florida, Georgia, Mississippi, North Carolina, South Carolina)

SouthEastern Regional Vision for Education (SERVE)
University of North Carolina at Greensboro
P.O. Box 5367
Greensboro, NC 27435

Dr. Roy H. Forbes, Executive Director
Specialty Area:  Early Childhood Education

Phone:  (910) 334-3211, (800) 755-3277
Fax:  (910) 334-3268
Email:  rforbes@serve.org

Region:   Southwestern  (Arkansas, Louisiana, New Mexico, Oklahoma, Texas)

SouthWest Educational Development Laboratory (SEDL)
211 East Seventh Street
Austin, TX 78701

Dr. Preston C. Kronkosky, Executive Director
Specialty Area:  Language and Cultural Diversity

Phone:  (512) 476-6861
Fax:  (512) 476-2286
Email:  pkronkos@sedl.org

Regional Resource Centers

Priorities:   State special education technical assistance needs

Number of Centers:  7


Region 1:  Northeast   (Connecticut, Maine, Massachusetts, New Hampshire, New Jersey, New York, Rhode Island, Vermont)

Northeast Regional Resource Center (NERRC)
Trinity College of Vermont
Colchester Avenue
Burlington, VT 05401

Pamala Kaufmann, Director

Phone: (802) 658-5036
Fax:  (802) 658-7435
TDD:  (802) 860-1428
Email:  nerrc@aol.com

Region 2:   Mid-South  (Delaware, District of Columbia, Kentucky, Maryland, North Carolina, South Carolina, Tennessee, Virginia, West Virginia)

Mid-South Regional Resource Center (MSRRC)
University of Kentucky
126 Mineral Industries Building
Lexington, KY 40506

Ken Olsen, Director

Phone: (606) 257-4921
Fax:  (606) 257-4353
Email:  olsenk@uklans.uky.edu

Region 3:   South Atlantic  (Alabama, Arkansas, Florida, Georgia, Louisiana, Mississippi, New Mexico, Oklahoma, Texas, Puerto Rico, U.S. Virgin Islands)

South Atlantic Regional Resource Center (SARRC)
Florida Atlantic University
1236 North University Drive
Plantation, FL 33322

Timothy Kelly, Director

Phone:  (954) 473-6106
Fax:  (954) 424-4309
Email:  SARRC@acc.fau.edu

Region 4:   Great Lakes  (Illinois, Indiana, Michigan, Minnesota, Ohio, Pennsylvania, Wisconsin)

Great Lakes Area Regional Resource Center (GLARRC)
The Ohio State University
700 Ackerman Road #440
Columbus, OH 43202

Larry Magliocca, Director

Phone:  (614) 447-0844
Fax:  (614) 447-9043
TDD:  (614) 447-0186
Email:  magliocca.l@osu.edu

Region 5:   Mountain Plains  (Colorado, Iowa, Kansas, Missouri, Montana, Nebraska, North Dakota, South Dakota, Utah, Wyoming)

Mountain Plains Regional Resource Center (MPRRC)
Utah State University
1780 North Research Parkway #112
Logan, UT 84321

John Copenhaver, Director

Phone:  (801) 752-0238
Fax:  (801) 753-9750
Email:  Latham@cc.usu.edu

Region 6:   Western  (Alaska, Arizona, California, Hawaii, Idaho, Nevada, Oregon, Washington, American Samoa, Federated States of Micronesia, Commonwealth of the Northern Mariana Islands, Guam, Republic of the Marshall Islands, Republic of Palau)

Western Regional Resource Center (WRRC)
1268 University of Oregon
Center on Human Development
901 East 18th Street
Eugene, OR 97403

Richard Zeller, Director

Phone:  (541) 346-5641
Fax:  (541) 346-5639
TDD:  (503) 346-5641
Email:  Richard_Zeller@ccmail.uoregon.edu

Region 7:   All

Federal Resource Center for Special Education
1875 Connecticut Avenue NW #800
Washington, DC 20009

Carol H. Valdivieso, Director

Phone:  (202) 884-8215
Fax:  (202) 884-8443
Email:  FRC@aed.org

Comprehensive Regional Assistance Centers

Priorities:   Assessing Title I schoolwide programs; helping local education agencies that have the highest percentages or numbers of children in poverty

Number of Centers:  15

Region I   (Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, Vermont)

Education Development Center, Inc.
55 Chapel Street
Newton, MA 02158

Dr. Vivian Guilfoy, Director

Phone:  (617) 969-7100 x 2201, (800) 332-0226
Fax:  (617) 969-3440
Email:  viviang@edc.org

Region II   (New York)

New York Technical Assistance Center (NYTAC)
The Metropolitan Center for Urban Education
New York University
32 Washington Square East #72
New York, NY 10003

Dr. LaMar P. Miller, Executive Director

Phone:  (212) 998-5100, (800) 469-8224
Fax:  (212) 995-4199
Email:  millrla@is2.nyu.edu

Region III   (Delaware, District of Columbia, Maryland, New Jersey, Ohio, Pennsylvania)

Center for Equity and Excellence in Education
George Washington University
1730 North Lynn Street #401
Arlington, VA 22209

Dr. Charlene Rivera, Director

Phone:  (703) 528-3588, (800) 925-3223
Fax:  (703) 528-5973
Email:  crivera@ceee.gwu.edu

Region IV   (Kentucky, North Carolina, South Carolina, Tennessee, Virginia, West Virginia)

Appalachia Educational Laboratory
P.O. Box 1348
Charleston, WV  25325

Dr. Pamela Buckley, Director

Phone:  (304) 347-0441, (800) 624-9120
Fax:  (304) 347-0487
Email:  buckleyp@ael.org

Region V   (Alabama, Arkansas, Georgia, Louisiana, Mississippi)

Southwest Educational Development Laboratory
3330 North Causeway Boulevard #430
Metairie, LA 70002

Dr. Hai T. Tran, Director

Phone:  (504) 838-6861, (800) 644-8671
Fax:  (504) 831-5242
Email:  hTran@sedl.org

Region VI   (Iowa, Michigan, Minnesota, North Dakota, South Dakota, Wisconsin)

Comprehensive Regional Assistance Center Consortium
University of Wisconsin
1025 West Johnson Street
Madison, WI 53706

Dr. Minerva Coyne, Director

Phone:  (608) 263-4220
Fax:  (608) 263-3733
Email:  mcoyne@macc.wisc.edu

Region VII   (Illinois, Indiana, Kansas, Missouri, Nebraska, Oklahoma)

University of Oklahoma
College of Continuing Education
555 Constitution, Room 128
Norman, OK 73072

Dr. John E. Steffens, Director

Phone:  (405) 325-1729 or 1713, (800) 228-1766
Fax:  (405) 325-1824
Email:  steffens@uoknor.edu

Region VIII   (Texas)

Intercultural Development Research Association (IDRA)
5835 Callaghan Road #350
San Antonio, TX 78228

Dr. Maria Robledo Montecel, Executive Director
Dr. Albert Cortez, Site Director

Phone:  (210) 684-8180
Fax:  (210) 684-5389
Email:  cmontecl@txdirect.net   or  acortez@txdirect.net

Region IX   (Arizona, Colorado, New Mexico, Nevada, Utah)

New Mexico Highlands University
121 Tijeras Avenue NE #2100
Albuquerque, NM 87102

Dr. Paul E. Martinez, Director

Phone:  (505) 242-7447
Fax:  (505) 242-7558
Email:  martinez@cesdp.nmhu.edu

Region X   (Idaho, Montana, Oregon, Washington, Wyoming)

Northwest Regional Educational Laboratory (NWREL)
101 Southwest Main Street #500
Portland, OR 97204

Mr. Carlos Sundermann, Director

Phone:  (503) 275-9480
Fax: (503) 275-9625
Email:  sundermmc@nwrel.org

Region XI   (Northern California)

Far West Laboratory for Educational Research
730 Harrison Street
San Francisco, CA 94107

Dr. Beverly Farr, Director

Phone:  (415) 565-3009
Fax:  (415) 565-3012
Email:  bfarr@wested.org

Region XII   (Southern California)

Los Angeles County Office of Education
9300 Imperial Highway
Downey, CA 90242

Dr. Celia C. Ayala, Director

Phone:  (310) 922-6319
Fax:  (310) 922-6699
Email:  ayala_celia@lacoe.edu

Region XIII   (Alaska)

South East Regional Resource Center
210 Ferry Way #200
Juneau, AK 99801

Dr. Bill Buell, Director

Phone:  (907) 586-6806
Fax:  (907) 463-3811
Email:  akrac@ptialaska.net

Region XIV (Florida, Puerto Rico, Virgin Islands)

Educational Testing Service
1979 Lakeside Parkway #400
Tucker, GA 30084

Dr. Trudy Hensley, Director

Phone:  (770) 723-7443, (800) 241-3864
Fax:  (770) 723-7436
Email:  thensley@ets.org

Region XV   (American Samoa, Federated States of Micronesia, Commonwealth of the Northern Mariana Islands, Guam, Hawaii, Republic of the Marshall Islands, Republic of Palau)

Pacific Region Educational Laboratory (PREL)
828 Fort Street Mall #500
Honolulu, HI 96813

Dr. John W. Kofel, Executive Director
Dr. Juvenna Chang, Project Director

Phone:  (808) 533-6000
Fax:  (808) 533-7599
Email:  kofelj@prel.hawaii.edu   or  changj@prel.hawaii.edu

Special Assessment Projects

National Center for Research on Evaluation, Standards and Student Testing (CRESST)
UCLA Graduate School of Education
145 Moore Hall
405 Hilgard Avenue
Los Angeles, CA 90024

Eva L. Baker and Robert L. Linn, Co-directors

Phone:  (310) 206-1532
Fax:  (310) 825-3883

Center for the Study of Testing, Evaluation and Educational Policy (CSTEEP)
323 Campion Hall
Boston College
Chestnut Hill, MA 02167

George F. Madaus, Principal Investigator

Phone:  (617) 552-4521
Fax:  (617) 552-8419