Common Misperceptions and Research-based
Recommendations for Alternate
Assessment based on Alternate
Achievement Standards
Synthesis Report 73
Rachel Quenemoen, Jacqui Kearns, Mari Quenemoen, Claudia Flowers, Harold Kleinert
February 2010
All rights reserved.
Any or all portions of this document may
be reproduced and distributed without
prior permission, provided the source is
cited as:
Quenemoen, R., Kearns, J., Quenemoen, M., Flowers, C., & Kleinert, H.
(2010). Common misperceptions and research-based recommendations for
alternate assessment based on alternate achievement standards (Synthesis
Report 73). Minneapolis, MN: University of Minnesota, National Center on
Educational Outcomes.
Executive Summary
In 1997 the newly reauthorized Individuals with Disabilities Education Act (IDEA) required states to
provide an alternate assessment for students who could not participate in the general assessment,
even with accommodations [612(a)(17)(A)]. A 2003 regulation for the No Child Left Behind Act of
2001 (NCLB) permitted states to use results from alternate assessments based on alternate achievement
standards (AA-AAS) in Adequate Yearly Progress (AYP) calculations. AA-AAS had to be
linked to grade-level content standards but could be reduced in depth, breadth, and complexity (U.S.
Department of Education, 2004). AA-AAS are designed so that students with the most significant
cognitive disabilities can be included in large-scale accountability testing and to promote their access
to the same interesting and challenging curriculum as their peers.
In our collaborative work with states and educators on AA-AAS, staff at the National Alternate
Assessment Center (NAAC) and the National Center on Educational Outcomes (NCEO) have
encountered several common misperceptions about these assessments and the students who are
assessed using AA-AAS. This report explores these misperceptions, and proposes research-based
recommendations to address them.
The common misperceptions we discuss in this report relate to three major themes: (a) the characteristics
of students who participate in AA-AAS; (b) the content that should be taught and assessed
by teachers; and (c) issues regarding the purpose and the validity of AA-AAS outcomes. Since
the re-authorization of IDEA, the passage of NCLB, and the 2003 regulation on AA-AAS, the
field has been forced to change very quickly. New strategies have been under development even
as they have been required. Educators, policymakers, and researchers may still have questions and
concerns about the purpose and the consequences of AA-AAS. The goal of this paper is to provide
information for examining assumptions and misperceptions about AA-AAS as well as assumptions
and misperceptions about the students who participate in AA-AAS. For each misperception, we
refer to the recent literature on assessment, curriculum, and instruction for students with the most
significant disabilities to support our analysis and recommendations.
Overview
Alternate assessment based on alternate
achievement standards (AA-AAS) represents a dramatic shift in
thinking about educational expectations and opportunities for
students with the most significant cognitive disabilities.1
These students have in the past been taught a separate
curriculum from their peers, often in separate settings. Now,
they are expected to participate and make progress in the same
academic curriculum and activities as their same-age classmates.
Large-scale assessments like the AA-AAS promote school,
district, and state accountability to make sure that all
students, including this small group of students with the most
significant cognitive disabilities, have access to a rigorous,
challenging, and interesting standards-based curriculum.
With this shift in academic expectations
for students with the most significant cognitive disabilities,
many teachers, parents, researchers, and policymakers have
expressed some confusion or concern. Curriculum and instruction
for these students have traditionally emphasized important
functional life-skills such as self-care and social skills, and
some stakeholders worry that academic instruction will eliminate
the time and resources needed for these functional skills.
Others are skeptical that any assessment for a group of students
with such widely diverse learning characteristics could ever
achieve sufficient technical quality to provide trustworthy
results.
Initial research indicates that including
students with disabilities in large-scale accountability testing
results in higher expectations, improved instruction, and
improved performance for those students (Cortiella, 2007;
Kleinert, Kennedy, & Kearns, 1999; Quenemoen, Lehr, Thurlow, &
Massanari, 2001; Towles-Reeves, Garrett, Burdette, & Burge,
2006; Ysseldyke, Dennison, & Nelson, 2003). An educational
system that holds high expectations for students with
significant cognitive disabilities will ultimately benefit all
students. The often cited observation that “what gets tested
gets taught” can be modified, based on past practices of
excluding many students with disabilities, to “who gets tested
gets taught” (Erickson, Thurlow, & Ysseldyke, 1996; McGrew,
Vanderwood, Thurlow, & Ysseldyke, 1995; Thurlow, Nelson,
Teelucksingh, & Ysseldyke, 2000). This report is intended to
inform policy and practice on assessment, curriculum, and
instruction for students with the most significant disabilities
and to raise questions for further research and reflection.
The misperceptions were identified over a
period of two years through technical assistance activities
provided through the National Center on Educational Outcomes
(NCEO) and the National Alternate Assessment Center (NAAC).
Technical assistance staff assembled the comments and questions
heard from the field, and then asked researchers and advisors at
NAAC and NCEO to review and comment on them. From this
discussion, the comments were clustered first into several
groups and then into ten general statements that captured key
misperceptions. These were again brought to the NAAC and NCEO
researchers and advisors who identified extant research related
to each of these ten general misperceptions. Based on the
research, NCEO and NAAC staff developed specific recommendations
to address each misperception in the field. This report
addresses some of the most common questions and concerns we
encounter when working with educators, policymakers, and
families on AA-AAS, and reflects what we know about best
practices in assessment, curriculum, and instruction.
Common Misperception #1
Many students who take the AA-AAS function more like infants or
toddlers than their actual age, so it makes no sense for schools
to be held accountable for their academic performance.
Some people assume that these students have
such severe disabilities that the students are unable to learn
academic content. Sometimes, this misperception is rooted in the
assumption that all students must progress through typical
infant and preschool skill development before any other academic
instruction can occur. Notions of developmental state or “mental
age” were common in special education through the 1970s.
Psychology and education specialists believed that all people
must pass through discrete, linear, developmental stages in
order to move from childhood to adulthood. According to this
theory, a student with a significant cognitive disability who
has not mastered certain pre-academic, social, emotional, or
linguistic skills for a certain developmental stage must
continue to work on a curriculum designed for that stage. For
instance, a 14-year-old deemed to be the “mental age” of 4 would
continue to use 4-year-old materials in schools and would
receive instruction limited to skills taught through those
materials.
In the 1980s, special educators realized
that these students were able to master many functional skills
appropriate for their age regardless of whether they had
mastered all lower developmental skills. Special education
experts such as Brown, Nietupski, and Hamre-Nietupski (1976)
began to challenge this developmental paradigm by arguing that
all students can benefit from age-appropriate material and
activities. Subsequent studies suggested that schools should
teach students with severe disabilities important real-world,
functional skills to prepare them to live fully in their
communities. This caused a shift in thinking that resulted in a
powerful, age-appropriate functional curriculum for these
students.
In recent years, we have demonstrated that
these students can learn grade-appropriate academic skills in
addition to functional skills (Browder & Spooner, 2006) and
benefit from participating with grade-level peers (McDonnell,
Mathot-Buckner, Thorson, & Fister, 2001; Roach & Elliott, 2006;
Ryndak, Morrison, & Sommerstein, 1999). Learner characteristics
data from many states show us that most students who participate
in AA-AAS have basic literacy and numeracy skills (Almond &
Bechard, 2005; Kearns, Towles-Reeves, Kleinert, Kleinert, &
Thomas, in press). These students are able to learn targeted
grade-appropriate academics to an alternate achievement level,
even when they have not mastered all earlier academic content.
Today, most experts agree that the notion of developmental
stages should not prohibit students from working on both
important functional skills and age-appropriate academic skills
(Hughes & Carter, 2008; Kleinert, Browder, & Towles-Reeves,
2009; Spooner & Browder, 2006). In fact, researchers have shown
that students can benefit from integrating a functional
curriculum with academic skill learning (see Browder &
Minarovic, 2000; Collins, Kleinert, & Land, 2006; Collins &
Stinson, 1995; Gardill & Browder, 1995; Kleinert, Collins,
Wickham, Riggs, & Hager, in press; Lalli & Browder, 1993;
McDonnell, 1987; Westling, Floyd, & Carr, 1990).
We know now that waiting until these
students are “ready” by mastering all earlier skills means they
“never” will be given access to the skills and knowledge they
can learn. We do not expect second-graders to master personal
hygiene before we teach them to read; neither should we hold
back literacy instruction to students with disabilities who are
still working on basic functional skills (Spooner & Browder,
2006).
Educators must also be careful not to
assume that a student’s current demonstration of cognitive or
communicative ability is fixed or absolute. Experts on students
with the most complex cognitive and medical challenges caution
that a student’s current level of communication does not
necessarily represent the upper limit of that student’s
capacity. Students who appear to have limited symbolic
communication may benefit from assistive technology or other
services to allow them to express their cognitive ability
(Browder et al., 2007; Kearns et al., in press; Towles-Reeves,
Kearns, Kleinert, & Kleinert, 2009).
Research-based Recommendation #1:
Build accountability systems to ensure that
all students who are eligible for the AA-AAS have access to and
learn academic content expected for their same-age typical
peers, to an appropriate but challenging alternate achievement
standard.
Common Misperception #2
Many students who
participate in AA-AAS have life-threatening medical conditions
or are not able to communicate.
Similar to developmental assumptions,
sometimes people assume that all students who participate in
AA-AAS face multiple medical and cognitive challenges that go
well beyond what educators can address; they assume that many of
these students cannot speak, hear, or communicate in any way.
However, most students who are eligible for AA-AAS (more than
90%) have effective communication skills with or without
assistive devices. Approximately 10% of students who participate
in AA-AAS communicate on a pre-symbolic level (without
intentional use of language, pictures, objects, or signs).
Students in this small group (less than 0.1% of the total
population of students) may be able to improve their
communication if given opportunities to express themselves,
including the use of assistive and augmentative communication
devices.
Researchers at the University of Kentucky
recently gathered data about the characteristics of students
taking the AA-AAS in seven states with a Learner Characteristics
Inventory (Kearns et al., in press). They found that only
10.3% of these students were unable to communicate with
pictures, objects, or signs, while another 17.3% could
communicate deliberately using non-standardized means of
communication such as gestures, pictures, objects, or signs. On
the other hand, over 70% of students who participate in AA-AAS
in these states could communicate with symbolic language
including verbally and with Braille. In a similar study, the
Colorado Alternate Assessment Collaborative (Almond & Bechard,
2005) found that of 165 students eligible for the AA-AAS in
their study, nearly 40% used 200 words or more to communicate,
while only 10% did not use words in functional communication.
Kearns et al. (in press) also looked at school attendance and
found that 84%–92% of students attended at least 90% of school
days, while only 1–3% missed 50% or more of school days due to
illness or (nearly as often) other factors. The group of
students who takes the AA-AAS may include some with fragile
medical conditions, but most students in this group attend
school regularly.
The field of severe disabilities has worked
from the “least dangerous assumption” for decades (Donnellan,
1984). Even for the fewer than 10% of students with disabilities
who currently do not use symbolic communication, we must teach
assuming that they can build effective communication strategies.
The categories of “pre-symbolic” or “emerging symbolic”
communication are not fixed diagnoses, but serve as markers for
where a student is at that moment in time. For some students,
learning to manipulate switches to express preferences, tape
recorders to access interesting literature or music, or eye
movements to communicate intention can open the door to
increased learning, expression, and participation in academic
and everyday life.
Research-based Recommendation #2:
For the small group of students who
initially demonstrate a lack of symbolic communication (about
10% of students who take the AA-AAS), educators should
persistently and systematically seek multiple and varied
communication strategies including assistive technology to
permit these students to learn and then to show what they know
on an AA-AAS.
Common Misperception #3
Students in the
AA-AAS can learn only rote academic skills, so AA-AAS should
reflect only these skills.
Only recently have scholars begun to
understand applications of academic content for students with
significant cognitive disabilities. In the content area of
reading, most research has employed a behaviorist approach with
a focus on sight words and functional reading in daily life
settings (Browder, Courtade-Little, Wakeman, & Rickelman, 2006;
Browder, Wakeman, Spooner, Ahlgrim-Delzell, & Algozzine, 2006).
Recent research has begun to explore how students with
significant cognitive disabilities learn a broad spectrum of
literacy skills, including and beyond sight words (Browder,
Trela, & Jimenez, 2007; Browder, Ahlgrim-Delzell, Courtade,
Gibbs, & Flowers, 2008), though cultural attitudes and
expectations remain a barrier to providing access to literacy
instruction for these students (Kliewer, Biklen, &
Kasa-Hendrickson, 2006). In mathematics, high quality
evidence-based research published between 1975 and 2005 nearly
always focused on time management, money, and basic counting
skills, while studies on how these students can access algebra,
geometry, and data analysis and probability were minimal or
non-existent (Browder, Spooner, Ahlgrim-Delzell, Harris, &
Wakeman, 2008). Courtade, Spooner, and Browder (2007) did a
similar review of science instruction for students with the most
significant cognitive disabilities. They found only 11 studies,
most of which had a narrow focus on the “science in personal and
social perspective” standard.
Education experts have begun to explore the
paths that typically developing students take toward competency
as measured by large-scale assessments (Pellegrino, Chudowsky, &
Glaser, 2001); new research is needed to “map” how students with
significant cognitive disabilities build competence as well.
Researchers now are finding strong evidence of academic skill
and knowledge development among students who participate in
AA-AAS, including abstract concepts and transfer of learning
(Browder, et al., 2007; Jimenez, Browder, & Courtade, 2008). For
instance, students who have had little prior exposure to
literacy may benefit from highly structured and systematic
instruction using age-appropriate adapted literature and
strategies such as story-based lessons (Browder, Trela, &
Jimenez, 2007) or working on sequencing by supplementing text
with pictures or objects (Downing, 2007). This principle applies
not only to students with significant cognitive disabilities,
but to students with a range of other disabilities including
vision impairment or dyslexia. A student who does not have
phonemic awareness, for instance, may still be able to develop
advanced literacy skills with the help of assistive technology
(Johnstone, Thurlow, Thompson, & Clapper, 2008; Thompson,
Johnstone, Thurlow, & Clapper, 2004).
Much research remains to be done on
teaching academics to students who participate in AA-AAS, but
anecdotal and case study evidence indicates that we must retain
a “principle of uncertainty” about what students may be able to
do (Kliewer & Biklen, 2001). Teacher attitudes that some
students will not benefit from academic instruction such as
literacy training pose a significant barrier to student access
to those skills (Downing, 2007; Katims, 2001; Zascavage & Keefe,
2004). As long as we are not sure what many students with
significant cognitive disabilities are capable of, we must
operate on the principle of the “least dangerous assumption”
(Donnellan, 1984; Jorgensen, 2005). In Massachusetts, where
alternate assessments were developed relatively early, teachers
have been “surprised and amazed” at what their students can do
(Wiener, 2005).
Research-based Recommendation #3:
Build AA-AAS approaches based on a model of
academic content development that allows these students to
demonstrate a range of grade-level content that their peers are
also learning and demonstrating.
Common Misperception #4
The AA-AAS
has eliminated the teaching of important functional skills.
People sometimes assume that the addition
of academics to the assessment and accountability systems for
students with significant cognitive disabilities means that
there is limited time for teaching functional skills like
self-care, community participation, and safety. They believe
that there is not enough time in the day to do both academics
and functional skills. While researchers continue to emphasize
the importance of teaching functional skills to improve
students’ quality of life (Turnbull, Turnbull, Wehmeyer, & Park,
2003), many experts agree that teaching age-appropriate academic
material has positive outcomes even for students with the most
significant disabilities. In fact, researchers have shown
examples of how academic and functional skills can be taught
concurrently (Collins, Kleinert, & Land, 2006; Kleinert,
Collins, Wickham, Riggs, & Hager, in press), and most districts
provide a curriculum that is a mix of academic and functional
skills (Browder, Spooner, Ahlgrim-Delzell, et al., 2003).
The academic content required by the AA-AAS
does not replace other important individual student goals. Even
while students work toward mastering academic skills to be
assessed by AA-AAS, each student’s Individualized Education
Program (IEP) may include additional individual goals for
social, self-care, and other functional skills (U.S. Department
of Education, 2005). Alternate assessments are designed to make
sure schools are teaching students with the most significant
cognitive disabilities according to high academic standards. A
student’s IEP, on the other hand, also can include instruction
on important functional and social skills.
No research has shown that students must
learn functional skills before they can master academic skills,
nor that academic and functional instruction are incompatible
(Browder, Wakeman, Flowers, Rickelman, Pugalee, & Karvonen,
2007). Many teachers have found that blended instruction in
academic and functional skills yields better results for both.
Some functional skills, such as communication and social skills,
can be easily aligned with academic content such as literacy.
The line between academics and functional instruction begins to
blur as teachers and parents discover how truly useful and
satisfying increased literacy and numeracy skills are for these
students, for quality of life and enjoyment, for integration
into the community, school, or adult life, and for future
employment (Browder, Ahlgrim-Delzell, Pugalee, & Jimenez, 2006;
Ward, Van De Mark, & Ryndak, 2006).
Research-based Recommendation #4:
Provide training and support to teachers so
that they can effectively merge academic and functional
instruction where appropriate and so that they understand the
vital importance of academic skills and knowledge to full
participation in family, school, and community life.
Common Misperception #5
AA-AAS must cover
all of the same content that is on the general assessment for
typical peers.
Although AA-AAS should be linked to general
education content standards, federal regulations permit states
to define the appropriate depth, breadth, and complexity of
content coverage. The AA-AAS “should be clearly related to
grade-level content, but it may be restricted in scope or
complexity or take the form of introductory or prerequisite
skills” (U.S. Department of Education, 2005). But, states must
show that these content priorities truly “raise the bar” of
historically low expectations and are clearly linked to the
content that typical students in the same grade should know and
be able to do (U.S. Department of Education, 2004). This is a
shift for teachers who do not have experience with this content.
Many stakeholders and advisors to state assessment programs may
need training and support to build understanding of what is
possible. Stakeholder and advisor understanding can ensure that
AA-AAS are linked to the student’s grade (or grade band) but are
reduced in scope and complexity from the general education
assessment.
Many researchers are working on ways to
explore how grade-level academic content might be assessed at
different achievement levels (Browder, 2001; Browder et al.,
2002; Kleinert & Kearns, in press; Kleinert & Kearns, 2001;
Thompson, Quenemoen, Thurlow, & Ysseldyke, 2001). Researchers
have used standard evaluative techniques to assess the alignment
between state AA-AAS and state content standards, finding that,
in fact, they are aligned without representing the full depth
and breadth of the standard (Flowers, Browder, &
Ahlgrim-Delzell, 2006; Roach, Elliott, & Webb, 2005).
Massachusetts pioneered work on aligning alternate assessments
with grade-level content standards. Massachusetts worked to
identify the “‘essence’ of each standard, and to develop ‘entry
points’ at increasing levels of complexity” in collaboration
with content area experts (Wiener, 2005).
The recently developed Links for Academic
Learning (LAL) manual presents 10 criteria to test the alignment
of AA-AAS with the standards-based general curriculum, based on
what we know about students with significant cognitive
disabilities, what we know about high quality academic
instruction for these students, and the flexible nature of the
AA-AAS (Flowers, Wakeman, Browder, & Karvonen, 2007). The LAL
manual provides guidelines to ensure that students who qualify
for the AA-AAS are tested on challenging academic content,
aligned to grade-level general curriculum, but with reduced
depth and breadth. A properly aligned AA-AAS should allow for
achievement levels appropriate for students with significant
cognitive disabilities, while continuing to require access to
challenging academic content.
Research-based Recommendation #5:
Provide training to teachers, and to other
key assessment system stakeholders and advisors, on what
research suggests these students are able to know and do when
given the opportunity.
Common Misperception #6
Most AA-AAS are
entirely individualized and differ for each student.
Because of the varied learning
characteristics of students who take the AA-AAS, this test must
be more flexible than most general assessments. A good AA-AAS
allows a defined amount of flexibility in administration of the
items and tasks so that these students have the opportunity to
show what they know. However, AA-AAS must still reflect
professional understanding of standardization and comparability.
Furthermore, a good AA-AAS incorporates training, oversight, and
structures to balance flexibility with standardization of
procedures. Ongoing monitoring is important to ensure the
assessments are administered, scored, and reported as intended.
The comparability of AA-AAS results is key
to their function as large-scale accountability measures
(Quenemoen, Rigney, & Thurlow, 2002). While IEPs can be entirely
individualized, the AA-AAS must provide results that are
comparable across schools and districts. The challenge is to
allow flexibility for students of varying ability levels while
maintaining high standards for all children and the
comparability needed for system accountability (Gong & Marion,
2006). AA-AAS may test comparable content, or ask students to
perform comparable activities, while allowing for flexibility to
provide various accommodations. Validity arguments can rely in
part on traditional psychometric analysis, but should also
incorporate new or modified evidence-based evaluative methods to
account for intentional flexibility (Gong & Marion, 2006;
Shafer, 2005). Ultimately, the purpose of the AA-AAS is not to
hold individual students accountable, but to hold schools
accountable for providing a challenging and meaningful education
for all students.
Research-based Recommendation #6:
All AA-AAS scores should indicate whether
the student is proficient in an academic domain through
procedures that allow flexibility but control for possible
sources of error.
Common Misperception #7
An AA-AAS measures
teacher performance in compiling attractive portfolios or
examples rather than measuring student academic performance.
Some teachers have expressed the concern
that putting together “good-looking” portfolios or choosing
student examples that make them look good will artificially
produce higher scores, regardless of the quality of instruction
or amount of student progress. Some researchers have found that
teachers who have greater knowledge and competence in assembling
portfolios do, in fact, produce better student scores (Johnson &
Arnold, 2007). However, a good AA-AAS is designed according to
standards of validity and reliability. On a well-designed test,
regardless of the format, test scores should not be sensitive to
factors that are irrelevant to the construct being tested. An
assessment that is sensitive to external factors such as “good
looking” portfolios, regardless of the actual content, is simply
a poorly-designed assessment. Any assessment must control for
differences in test administration or teacher behavior, and the
scoring should focus on independent student performance.
That said, students of successful teachers
who are well trained in various aspects of instruction and
assessment will tend to have higher scores (Horvath,
Kampfer-Bohach, & Kearns, 2005). Research has shown that
teachers who are trained in instructional practices and using
assessment data to make instructional decisions produce better
student scores on alternate assessments (Browder, Karvonen,
Davis, Fallin, & Courtade-Little, 2005; Karvonen, Flowers,
Browder, Wakeman, & Algozzine, 2006). Teachers who routinely
integrate elements of the assessment into daily instruction
(Kampfer, Horvath, Kleinert, & Kearns, 2001) and have been
trained in participation and accommodation decisions on
large-scale assessments (DeStefano, Shriner, & Lloyd, 2001) also
tend to have students who score higher on alternate assessments.
The time teachers spend preparing the cosmetic features of an
alternate assessment seems to have little effect on student
scores (Kampfer et al., 2001). The key to successful test
administration is to provide teachers with excellent training,
give them a well-defined scoring rubric, and ensure that they
are familiar with how the student best demonstrates what he or
she is capable of doing.
It is important to remember that all
large-scale accountability assessments do, to some extent, test
teachers. Accountability assessments like AA-AAS measure
achievement at the individual student level, but ultimately
reflect the efforts of the teacher, school, and district. This
is true not only for AA-AAS, but for general assessments as
well. Large-scale assessments should identify schools and
districts where teachers are providing high-quality,
standards-based instruction to all students, while controlling
for administrator behaviors.
Research-based Recommendation #7:
Train teachers on systematic data gathering
procedures, provide oversight, coaching, and monitoring to
ensure they implement the procedures as intended, and design
scoring processes to exclude evidence that reflects teacher
behaviors instead of independent student performance.
Common Misperception #8
It would make more
sense if teachers simply reported on their students' progress
in meeting IEP goals rather than requiring an AA-AAS.
Some people assume that gathering data that
already are used for the IEP is the best measure of student
achievement. Indeed, the AA-AAS does not displace the important
role of the IEP. A good IEP will identify the services,
supports, and specialized instruction needed so that the student
can learn both academic and functional skills and knowledge.
Data gathered on the specific goals and objectives in the IEP
are important for individual accountability among IEP team
members for these short- and long-term goals and objectives in
all areas where the student has them. Some of these goals and
objectives will specify the services and supports the student
needs to access the general curriculum, but student progress
based on the IEP does not provide accountability for student
achievement of proficiency in the general curriculum (U.S.
Department of Education, 2005). In contrast, AA-AAS are designed
to provide data for system accountability to ensure that all
students are provided access to and are achieving to proficiency
in the general curriculum (Quenemoen, Rigney, & Thurlow, 2002).
The leverage of system accountability, as well as individual
accountability, may yield far more opportunities for most
students, including targeting important school improvement
resources (Thurlow, Quenemoen, Thompson, & Lehr, 2001). IEPs
ideally should be aligned with the academic skills required on
AA-AAS. Initial research on the effects of IEP-assessment
alignment on AA-AAS scores has been mixed, and more research is
required (Karvonen & Huynh, 2007).
Furthermore, alternate assessments must be
scored according to professionally accepted standards applicable
to all accountability assessments (AERA, APA, & NCME, 1999).
Teachers may be trained to score alternate assessments, and this
is particularly appropriate for some test formats such as rating
scales or performance tasks. However, an independent second
rater, or a sample of independent audits of evidence, can
provide needed information on the validity and reliability of
the scores, even when the teacher serves as the first scorer.
For example, in one state, the teacher rates the student’s
skills and knowledge, but a sample of the actual student work is
collected and reviewed by a second rater. Another state has
teachers videotape the administration, and all videotapes are
submitted to the state for review. Increasingly, because of
reporting timelines, states are using professional scorers who
must comply with minimum education requirements and must attain
and maintain rigorous standards for scoring accuracy (Quenemoen,
2009).
Research-based Recommendation #8:
Design AA-AAS so that there are comparable
data on the effectiveness of schools in providing access to the
general curriculum to students with the most significant
cognitive disabilities.
Common Misperception #9
Some AA-AAS formats
(i.e., portfolio, checklist, performance assessment) are better
than others.
States tend to use one or a combination of
common alternate assessment formats, including a portfolio, a
checklist, a performance assessment, or observation in
structured and unstructured settings (Roeber, 2002). However,
research on the technical quality of AA-AAS has shown that the
format of the test is a poor predictor of technical quality. The
nature of a “portfolio” or “checklist” or “performance
assessment” can vary enormously, and a number of states now use
hybrid models that combine elements of these approaches (Gong &
Marion, 2006; Quenemoen, 2009; Quenemoen, Thompson, & Thurlow,
2003). The three main test formats share many attributes
(Elliott & Roach, 2007), and an assessment of any format can be
of poor or of high quality; the format name alone tells us little.
A good AA-AAS is built on a set of beliefs
about how students with severe disabilities learn and
demonstrate the academic content. Questions that need to be
addressed include: What kinds of observations of their learning
will give us evidence of what these students know and can do in
the academic content? What should we “see” when these students have
been given access to the same grade-appropriate, interesting
content as their typical peers? The responses to these questions
help answer the question of what is the “best” format for the
AA-AAS. Though states’ current formats can vary widely, they
show very similar assumptions about what successful outcomes
look like for students with the most significant cognitive
disabilities (Quenemoen et al., 2003). These assumptions build
on lessons learned from the functional curriculum, drawn from
observations of how full or partial participation in the full
range of age-appropriate school and community-based activities
enhances the lives and social integration of students with
significant cognitive disabilities (Quenemoen, 2009; Ysseldyke &
Olsen, 1997). The extension of these concepts to the academic
curriculum has resulted in a new understanding of what is
possible when these students have access not only to the
classrooms of their peers but also to the curriculum taught in
the classrooms. Alternate assessments should be designed to
collect samples of supporting evidence of what a student knows
and can do, regardless of the format of the assessment (e.g.,
body of evidence, checklist, or performance task). Over time,
this evidence can be used to understand better how these
students develop skills and knowledge in the academic domains
and inform improvements in alternate assessment design (Marion &
Pellegrino, 2006).
Research-based Recommendation #9:
Select the format of the AA-AAS based on
beliefs about academic teaching and learning for AA-AAS
students.
Common Misperception #10
No AA-AAS can be a
technically adequate measure of student achievement for
accountability purposes.
People sometimes assume that AA-AAS break all the rules of good
large-scale assessment design, as judged by the psychometric
standards that measurement experts have applied for a century.
In order to function well
as a tool for accountability, and to meet NCLB regulations,
AA-AAS must indeed be technically defensible (NCLB, 2001; U.S.
Department of Education, 2004). Researchers continue to assess
the quality of states’ alternate assessments, with mixed results
(Johnson & Arnold, 2007; Tindal et al., 2003; Yovanoff & Tindal,
2007).
The challenges of designing AA-AAS are relatively
new; prior to the 1990s, no large-scale assessment program
included students with significant cognitive disabilities, and
very few measurement experts had experience designing
assessments for these students. Due to relatively small numbers
of test takers, and the intentional flexibility of the tests,
some AA-AAS cannot be evaluated the same way that general
assessments are examined. The data set is not large enough, and
the elements are not entirely standardized according to
traditional definitions (Gong & Marion, 2006).
Fortunately, there has been a great deal of
work done since the 1990s on issues that have emerged in
developing psychometrically sound AA-AAS. Some researchers are
beginning to use the assessment triangle proposed by Pellegrino,
Chudowsky, and Glaser (2001) in Knowing What Students Know. To
evaluate the technical quality of an assessment, we must analyze
the interaction among how we think students develop knowledge
and achieve proficiency, how we design tests to allow students
to show what they know, and how we interpret the results of the
test in a meaningful way. This assessment triangle of
cognition, observation, and interpretation provides the basis
for an evidence-based analysis of the technical quality of an
assessment.
Another way to understand and estimate
validity of AA-AAS outcomes is to look at the intended and
unintended consequences of an assessment on teaching and
learning (Linn, Baker, & Dunbar, 1991; Marion & Pellegrino,
2006; Marion & Perie, 2009; Shepard, 1993; Stone & Lane, 2003).
For instance, two intended outcomes of AA-AAS are to improve the
quality of instruction and to raise the level of performance for
students with significant cognitive disabilities. Initial
evidence suggests that AA-AAS may indeed have these effects
(Cortiella, 2007; Kleinert et al., 1999; Quenemoen et al., 2001;
Towles-Reeves et al., 2006; Ysseldyke et al., 2003). Yet recent
surveys of parents and teachers show that they have both
positive and negative perceptions of the effects of AA-AAS on
their students’ education (Roach, 2006; Roach, Elliott, &
Berndt, 2007). More research is needed to document the intended
and unintended effects of AA-AAS on student performance, as well
as on post-school outcomes.
States must continue to work on ensuring
the technical quality of their AA-AAS. A recent study of the 50
states and the District of Columbia (N=51) found that in
2006–07 only 35% of these states could document with evidence
that their scoring and reporting structures reflected the
knowledge that students were being asked to demonstrate (Cameto
et al., 2009). In the same study, 41% of these states had
conducted a formal study documenting the validity of the
alternate assessment outcomes in relation to the intended
internal or external variables and could present the data for
review; 47% had not conducted any such study. The AA-AAS
can be designed to produce valid and reliable information, but
states must continue to work with experts and stakeholders to
ensure that their assessments show alignment among the way
students demonstrate learning, what the assessment actually
tests, and how the results are interpreted.
Research-based Recommendation #10:
State assessment offices should address
three components of the assessment design as they develop and
implement the AA-AAS: (a) description of the student population
and a theory of learning for these students, (b) structure of
the observations from the assessment, and (c) interpretation of
the results. The technical defense of an AA-AAS starts and ends
with these three components.
Conclusions
Alternate assessments based on alternate
achievement standards are a work in progress. As we build
understanding in the field of how to design meaningful
instructional opportunities in the grade-level curriculum, we
anticipate that we will find new ways of capturing what these
students know and can do. We also anticipate that they will
continue to surprise us with their capacity to learn content
that, in the past, they were never taught. As we carefully
consider the consequences of the shift to increased access to
academic curricula, we need to carefully sort through the
concerns and even complaints from teachers, parents, and other
stakeholders. We especially need to discern where the challenges
of living through a paradigm shift are painful but necessary and
predictable steps and where there are important adjustments to
make to ensure positive consequences for students in the long
run.
The degree to which we are hearing
misperceptions about AA-AAS suggests that clarifying the purpose
and use of these assessments may be necessary. Concerns about
the appropriateness of academic content and the decreased focus
on functional life skills for these students rest in part on
assumptions about what kinds of skills these students will need
to live satisfying, interesting, productive lives after they
leave school. Current post-school outcome data rely on
post-secondary enrollment or full-time employment as the primary
valued outcomes, but the past 20 years of data do not
show positive outcomes on these measures for students with
significant cognitive disabilities (Newman, Wagner, Cameto, &
Knokey, 2009; Wagner, Newman, Cameto, Levine, & Garza, 2006).
What has not been measured is how academic skills and knowledge
may enhance these students’ full participation in life in their
homes and communities. Would skills like reading for enjoyment
and information, understanding of mathematical tools, basic
understanding of scientific principles, understanding of history
and current events, and orientation to and appreciation of the
arts contribute to a satisfying adult life in supported or
independent settings? Do these skills enhance
participation in the community, in recreation and leisure, or in
work? Education for students with significant cognitive
disabilities has so far failed to result in full employment or
independence for the vast majority of these students, despite
intensive efforts to implement community-based and
community-referenced programs at the high school and sometimes
middle school level. Perhaps the AA-AAS should spur new
conversations about what desired outcomes for these students
really are and how to achieve them. In the long run, discussion
around these questions may be necessary to ensure that desired
consequences are achieved from the use of AA-AAS for
accountability purposes.
In the short term, the 10 recommendations
we provide here can help inform improvement of the AA-AAS.
Engaging all stakeholders in discussions about AA-AAS, and
clarifying misperceptions as we do so, has the potential for
improving academic achievement for students with significant
cognitive disabilities.
References
AERA, APA, & NCME (1999). Standards for
educational and psychological testing. Washington, DC: American
Educational Research Association.
Almond, P. & Bechard, S. (2005). In-depth
look at students who take alternate assessments: What do we know
now? Retrieved from
http://www.measuredprogress.org/resources/inclusive/articlespapers/StudentsTakingAltAssess.pdf
Browder, D. (2001). Curriculum and
assessment for students with moderate and severe disabilities.
New York: Guilford Press.
Browder, D. M., Ahlgrim-Delzell, L.,
Courtade, G., Gibbs, S. L., & Flowers, C. (2008). Evaluation of
the effectiveness of an early literacy program for students with
significant developmental disabilities. Exceptional Children,
75, 33–52.
Browder, D. M., Ahlgrim-Delzell, L.,
Pugalee, D. K., & Jimenez, B. A. (2006). Enhancing numeracy. In
D. Browder & F. Spooner (Eds.), Teaching language arts, math and
science to students with significant cognitive disabilities
(pp.171–196). Baltimore: Paul Brookes.
Browder, D. M., Courtade-Little, G. R.,
Wakeman, S., & Rickelman, R. (2006). From sight words to
emergent literacy. In D. Browder & F. Spooner (Eds.),
Teaching language arts, math, and science to students with
significant cognitive disabilities (pp. 63–91). Baltimore: Paul
Brookes.
Browder, D., Flowers, C., Ahlgrim-Delzell,
L., Karvonen, M., Spooner, F., & Algozzine, R. (2002).
Curricular implications of alternate assessments. Paper
presented at the National Council of Measurement in Education
Annual Conference, New Orleans.
Browder, D. M., Karvonen, M., Davis, S.,
Fallin, K., & Courtade-Little, G. (2005). The impact of teacher
training on state alternate assessment scores. Exceptional
Children, 71, 267–282.
Browder, D. M., & Minarovic, T. (2000).
Utilizing sight words in self-instruction training for employees
with moderate mental retardation in competitive jobs. Education
and Training in Mental Retardation and Developmental
Disabilities, 35, 78–89.
Browder, D. M., & Spooner, F. (Eds.)
(2006). Teaching language arts, math and science to students
with significant cognitive disabilities. Baltimore: Paul
Brookes.
Browder, D. M., Spooner, F.,
Ahlgrim-Delzell, L., Flowers, C., Karvonen, M., & Algozzine, R.
(2003). A content analysis of the curricular philosophies
reflected in states’ alternate assessment performance
indicators. Research and Practice for Persons with Severe
Disabilities, 28, 165–181.
Browder, D. M., Spooner, F.,
Ahlgrim-Delzell, L., Wakeman, S. Y., & Harris, A. (2008). A
meta-analysis on teaching mathematics to students with
significant cognitive disabilities. Exceptional Children, 74,
407–432.
Browder, D. M., Trela, K., & Jimenez, B.
(2007). Training teachers to follow a task analysis to engage
middle school students with moderate and severe developmental
disabilities in grade-appropriate literature. Focus on Autism
and Other Developmental Disabilities, 22, 206–219.
Browder, D., Wakeman, S., Flowers, C.,
Rickelman, R., Pugalee, D., & Karvonen, M. (2007). Creating
access to the general curriculum with links to grade-level
content for students with significant cognitive disabilities: An
explication of the concept. The Journal of Special Education,
41(1), 2–16.
Browder, D. M., Wakeman, S. Y., Spooner,
F., Ahlgrim-Delzell, L., & Algozzine, B. (2006). Research on
reading instruction for individuals with significant cognitive
disabilities. Exceptional Children, 72, 392–408.
Brown, L., Nietupski, J., &
Hamre-Nietupski, S. (1976). Criterion of ultimate functioning.
In M.A. Thomas (Ed.), Hey, don’t forget about me! Education’s
investment in the severely, profoundly, and multiply handicapped
(pp. 2–15). Reston, VA: Council for Exceptional Children.
Cameto, R., Knokey, A.-M., Nagle, K.,
Sanford, C., Blackorby, J., Sinclair, B., & Riley, D. (2009).
National profile on alternate assessments based on alternate
achievement standards. A report from the national study on
alternate assessments (NCSER 2009-3014). Menlo Park, CA: SRI
International.
Collins, B., Kleinert, H., & Land, L.
(2006). Addressing math standards and functional math. In D.
Browder & F. Spooner (Eds.), Teaching language arts, math and
science to students with significant cognitive disabilities (pp.
197–228). Baltimore: Paul Brookes.
Collins, B. C., & Stinson, D. M. (1995).
Teaching generalized reading of product warning labels to
adolescents with mental disabilities through the use of key
words. Exceptionality, 5, 163–181.
Cortiella, C. (2007). Rewards & roadblocks:
How special education students are faring under No Child Left
Behind. Retrieved from
http://www.ncld.org/images/stories/OnCapitolHill/PolicyRelatedPublications/RewardsandRoadblocks/RewardsandRoadblocks.pdf
Courtade, G. R., Spooner, F., & Browder, D.
M. (2007). A review of studies with students with significant
cognitive disabilities that link to science standards. Research
and Practice for Persons with Severe Disabilities, 32, 43–49.
DeStefano, L., Shriner, J., & Lloyd, C.
(2001). Teacher decision making in participation of students
with disabilities in large-scale assessments. Exceptional
Children, 68, 7–22.
Donnellan, A. (1984). The criterion of the
least dangerous assumption. Behavioral Disorders, 9, 141–150.
Downing, J. E. (2007). Building literacy
for students at the presymbolic and early symbolic levels. In D.
Browder & F. Spooner (Eds.), Teaching language arts, math &
science to students with significant cognitive disabilities (pp.
39–62). Baltimore: Paul Brookes.
Elliott, S. N., & Roach, A. T. (2007).
Alternate assessments of students with significant disabilities:
Alternative approaches, common technical challenges. Applied
Measurement in Education, 20(3), 301–333.
Erickson, R. N., Thurlow, M. L., &
Ysseldyke, J. E. (1996). Neglected numerators, drifting
denominators, and fractured fractions: Determining participation
rates for students with disabilities in statewide assessment
programs (Synthesis Report 23). Minneapolis, MN: University of
Minnesota, National Center on Educational Outcomes.
Flowers, C., Browder, D. M., &
Ahlgrim-Delzell, L. (2006). An analysis of three states’
alignment between language arts and mathematics standards and
alternate assessment. Exceptional Children, 72(2), 201–215.
Flowers, C., Wakeman, S., Browder, D., &
Karvonen, M. (2007). Links for academic learning: An alignment
protocol for alternate assessments based on alternate
achievement standards. Charlotte, NC: University of North
Carolina at Charlotte.
Gardill, M. C., & Browder, D. M. (1995).
Teaching stimulus classes to encourage independent purchasing by
students with severe behavior disorders. Education and Training
in Mental Retardation and Developmental Disabilities, 30,
254–264.
Gong, B., & Marion, S. (2006). Dealing with
flexibility in assessments for students with significant
cognitive disabilities (Synthesis Report 60). Minneapolis, MN:
University of Minnesota, National Center on Educational
Outcomes.
Horvath, L. S., Kampfer-Bohach, S., &
Kearns, J. F. (2005). The use of accommodations among students
with deafblindness in large-scale assessment systems. Journal of
Disability Policy Studies, 16(3), 177–187.
Hughes, C., & Carter, E. W. (2008). Peer
buddy programs for successful secondary school inclusion.
Baltimore: Paul Brookes.
Jimenez, B., Browder, D., & Courtade, G.
(2008). Teaching an algebraic equation to high school students
with moderate developmental disabilities. Education and Training
in Developmental Disabilities, 43(2), 266–274.
Johnson, E., & Arnold, N. (2007). Examining
an alternate assessment: What are we testing? Journal of
Disability Policy Studies, 18(1), 23–31.
Johnstone, C., Thurlow, M., Thompson, S., &
Clapper, A. (2008). The potential for multi-modal approaches to
reading for students with disabilities as found in state reading
standards. Journal of Disability Policy Studies, 18(4), 219–229.
Jorgensen, C. (2005). The least dangerous
assumption: A challenge to create a new paradigm. Disability
Solutions, 6(3). Retrieved from:
http://www.disabilitysolutions.org/newsletters/files/six/6-3.pdf
Kampfer, S. H., Horvath, L. S., Kleinert,
H. L., & Kearns, J. F. (2001). Teachers’ perceptions of one
state’s alternate assessment: Implications for practice and
preparation. Exceptional Children, 67, 361–374.
Katims, D. S. (2001). Literacy assessment
of students with mental retardation: An exploratory
investigation. Education and Training in Mental Retardation and
Developmental Disabilities, 36, 363–372.
Karvonen, M., Flowers, C., Browder, D. M.,
Wakeman, S. Y., & Algozzine, B. (2006). Case study of the
influences on alternate assessment outcomes for students with
disabilities. Education and Training in Developmental
Disabilities, 41, 95–110.
Karvonen, M., & Huynh, H. (2007).
Relationship between IEP characteristics and test scores on an
alternate assessment for students with significant cognitive
disabilities. Applied Measurement in Education, 20(3), 273–300.
Kearns, J. F., Towles-Reeves, E., Kleinert,
H. L., Kleinert, J. O., & Thomas, M. K. (in press).
Characteristics of and implications for students participating
in alternate assessments based on alternate achievement
standards. The Journal of Special Education.
Kleinert, H., Browder, D., & Towles-Reeves,
E. (2009). Models of cognition for students with significant
cognitive disabilities: Implications for assessment. Review of
Educational Research, 79, 301–326.
Kleinert, H., Collins, C., Wickham, D.,
Riggs, L., & Hager, K. (in press). Embedding life skills,
self-determination, and enhancing social relationships and other
evidence-based practices. In J. Kearns & H. Kleinert (Eds.),
Meaningful outcomes for students with significant cognitive
disabilities: Alternate assessment on alternate achievement
standards. Baltimore: Paul Brookes.
Kleinert, H., & Kearns, J. (Eds.) (in
press). Meaningful outcomes for students with significant
cognitive disabilities: Alternate assessment on alternate
achievement standards. Baltimore: Paul Brookes.
Kleinert, H. L., & Kearns, J. F. (2001).
Alternate assessment: Measuring outcomes and supports for
students with disabilities. Baltimore: Paul Brookes.
Kleinert, H. L., Kennedy, S., & Kearns, J.
F. (1999). The impact of alternate assessments: A statewide
teacher survey. The Journal of Special Education, 33(2), 93–102.
Kliewer, C., & Biklen, D. (2001). “School’s
not really a place for reading”: A research synthesis of the
literate lives of students with severe disabilities. Journal of
the Association for Persons with Severe Handicaps, 26(1), 1–12.
Kliewer, C., Biklen, D., &
Kasa-Hendrickson, C. (2006). Who may be literate? Disability and
resistance to the cultural denial of competence. American
Educational Research Journal, 43(2), 163–192.
Lalli, J. S., & Browder, D. M. (1993).
Comparison of sight word training procedures with validation of
the most practical procedure in teaching reading for daily
living. Research in Developmental Disabilities, 14, 107–127.
Linn, R., Baker, E., & Dunbar, S. (1991).
Complex, performance-based assessment: Expectations and
validation criteria. Educational Researcher, 20(8), 16–21.
Marion, S., & Pellegrino, J. (2006). A
validity framework for evaluating the technical quality of
alternate assessments. Educational Measurement: Issues and
Practices, 25(4), 47–57.
Marion, S., & Perie, M. (2009). An
introduction to validity arguments for alternate assessments. In
W. Schafer & R. Lissitz (Eds.), Alternate assessments based on
alternate achievement standards: Policy, practice, and potential
(pp. 113–126). Baltimore: Paul Brookes.
McDonnell, J. (1987). The effects of time
delay and increasing prompt hierarchy strategies on the
acquisition of purchasing skills by students with severe
handicaps. The Journal of the Association for Persons with
Severe Handicaps, 12, 227–236.
McDonnell, J., Mathot-Buckner, C., Thorson,
N., & Fister, S. (2001). Supporting the inclusion of students
with moderate and severe disabilities in junior high school
general education classes: The effects of classwide peer
tutoring, multi-element curriculum, and accommodations.
Education and Treatment of Children, 24, 141–160.
McGrew, K. S., Vanderwood, M. L., Thurlow,
M. L., & Ysseldyke, J. E. (1995). Why we can’t say much about the
status of students with disabilities during education reform
(Synthesis Report No. 21). Minneapolis, MN: University of
Minnesota, National Center on Educational Outcomes.
Newman, L., Wagner, M., Cameto, R., &
Knokey, A. M. (2009). The post-high school outcomes of youth
with disabilities up to 4 years after high school. A report of
findings from the national longitudinal transition study-2
(NLTS2) (NCSER 2009-3017). Retrieved from
www.nlts2.org/reports/2009_04/nlts2_report_2009_04_complete.pdf
No Child Left Behind (NCLB) Act of 2001,
Pub. L. No. 107-110, 115 Stat. 1425 (2002).
Pellegrino, J., Chudowsky, N., & Glaser,
R., Eds. (2001). Knowing what students know: The science and
design of educational assessment. Washington, DC: National
Research Council.
Quenemoen, R. F. (2009). The long and
winding road of alternate assessments. In W. Schafer & R.
Lissitz (Eds.), Alternate assessments based on alternate
achievement standards: Policy, practice, and potential (pp.
127–156). Baltimore: Paul Brookes.
Quenemoen, R. F., Lehr, C. A., Thurlow, M.
L., & Massanari, C. B. (2001). Students with disabilities
in standards-based assessment and accountability systems:
Emerging issues, strategies, and recommendations (Synthesis
Report 37). Minneapolis, MN: University of Minnesota, National
Center on Educational Outcomes.
Quenemoen, R., Rigney, S., & Thurlow, M.
(2002). Use of alternate assessment results in reporting and
accountability systems: Conditions for use based on research and
practice (Synthesis Report 43). Minneapolis, MN: University of
Minnesota, National Center on Educational Outcomes.
Quenemoen, R., Thompson, S., & Thurlow, M.
(2003). Measuring academic achievement of students with
significant cognitive disabilities: Building understanding of
alternate assessment scoring criteria (Synthesis Report No. 50).
Minneapolis, MN: University of Minnesota, National Center on
Educational Outcomes.
Roach, A. (2006). Influences on parent
perceptions of an alternate assessment for students with severe
cognitive disabilities. Research & Practice for Persons with
Severe Disabilities, 31, 267–274.
Roach, A. T., & Elliott, S. N. (2006). The
influence of access to the general education curriculum on the
alternate assessment performance of students with severe
cognitive disabilities. Educational Evaluation and Policy
Analysis, 28, 181–194.
Roach, A., Elliott, S., & Berndt, S.
(2007). Teacher perceptions and the consequential validity of an
alternate assessment for students with significant disabilities.
Journal of Disability Policy Studies, 18, 168–175.
Roach, A., Elliott, S., & Webb, N. (2005).
Alignment of an alternate assessment with state academic
standards: Evidence for the content validity of the Wisconsin
alternate assessment. The Journal of Special Education, 38(4),
218–231.
Roeber, E. (2002). Setting standards on
alternate assessments (Synthesis Report 42). Minneapolis, MN:
University of Minnesota, National Center on Educational
Outcomes.
Ryndak, D. L., Morrison, A. P., &
Sommerstein, L. (1999). Literacy before and after inclusion in
general education settings: A case study. The Journal of the
Association for Persons with Severe Handicaps, 24, 5–22.
Schafer, W. D. (2005). Technical
documentation for alternate assessments. Practical Assessment
Research & Evaluation, 10(10). Retrieved from
http://pareonline.net/getvn.asp?v=10&n=10
Shepard, L. A. (1993). Evaluating test
validity. Review of Research in Education, 19, 405–450.
Spooner, F., & Browder, D. (2006). Why
teach the general curriculum? In D. Browder & F. Spooner (Eds.),
Teaching language arts, math and science to students with
significant cognitive disabilities (pp. 1–13). Baltimore: Paul
Brookes.
Stone, C. A., & Lane, S. (2003). Consequences
of a state accountability program: Examining relationships
between school performance gains and teacher, student, and
school variables. Applied Measurement in Education, 16(1), 1–26.
Thompson, S. J., Johnstone, C. J., Thurlow,
M. L., & Clapper, A. T. (2004). State literacy standards,
practice, and testing: Exploring accessibility (Technical Report
38). Minneapolis, MN: University of Minnesota, National Center
on Educational Outcomes.
Thompson, S. J., Quenemoen, R., Thurlow, M.
L., & Ysseldyke, J. E. (2001). Alternate assessments for
students with disabilities. Thousand Oaks, CA: Corwin Press.
Thurlow, M. L., Nelson, J. R.,
Teelucksingh, E., & Ysseldyke, J. E. (2000). Where’s Waldo? A
third search for students with disabilities in state
accountability reports (Technical Report 25). Minneapolis, MN:
University of Minnesota, National Center on Educational
Outcomes.
Thurlow, M., Quenemoen, R., Thompson, S., &
Lehr, C. (2001). Principles and characteristics of inclusive
assessment and accountability systems (Synthesis Report 40).
Minneapolis, MN: University of Minnesota, National Center on
Educational Outcomes.
Tindal, G., McDonald, M., Tedesco, M.,
Glasgow, A., Almond, P., Crawford, L., & Hollenbeck, K. (2003).
Alternate assessment in reading and math: Development and
validation for students with significant disabilities.
Exceptional Children, 69, 481–494.
Towles-Reeves, E., Garrett, B., Burdette,
P., & Burdge, M. (2006). Validation of large-scale alternate
assessment systems and their influence on instruction—What are
the consequences? Assessment for Effective Intervention, 31(3),
45–57.
Towles-Reeves, E., Kearns, J., Kleinert,
H., & Kleinert, J. (2009). An analysis of the learning
characteristics of students taking alternate assessments based
on alternate achievement standards. Journal of Special
Education, 42(4), 241–254.
Turnbull, H. R., Turnbull, A. P., Wehmeyer,
M. L., & Park, J. (2003). A quality of life framework for
special education outcomes. Remedial and Special Education,
24(2), 67–74.
U.S. Department of Education (2004).
Standards and assessments peer review guidance: Information and
examples for meeting requirements of the No Child Left Behind
Act of 2001. Washington, DC: Author.
U.S. Department of Education (2005,
August). Alternate achievement standards for students with the
most significant cognitive disabilities: Non-regulatory
guidance. Washington, DC: Author.
Wagner, M., Newman, L., Cameto, R., Levine,
P., & Garza, N. (2006). An overview of findings from wave 2 of
the national longitudinal transition study-2 (NLTS2). Retrieved
from
www.nlts2.org/reports/2006_08/nlts2_report_2006_08_complete.pdf
Ward, T., Van De Mark, C. A., & Ryndak, D.
L. (2006). Balanced literacy classrooms and embedded instruction
for students with severe disabilities: Literacy for all in the
age of school reform. In D. Browder & F. Spooner (Eds.),
Teaching language arts, math and science to students with
significant cognitive disabilities (pp. 125–170). Baltimore:
Paul Brookes.
Westling, D. L., Floyd, J., & Carr, D.
(1990). Effects of single setting versus multiple setting
training on learning to shop in a department store. American
Journal on Mental Retardation, 94, 616–624.
Wiener, D. (2005). One state’s story:
Access and alignment to the GRADE-LEVEL content for students
with significant cognitive disabilities (Synthesis Report 57).
Minneapolis, MN: University of Minnesota, National Center on
Educational Outcomes.
Yovanoff, P., & Tindal, G. (2007). Scaling
early reading alternate assessments with statewide measures.
Exceptional Children, 73(2), 184–201.
Ysseldyke, J., Dennison, A., & Nelson, R.
(2003). Large-scale assessment and accountability systems:
Positive consequences for students with disabilities (Synthesis
Report 51). Minneapolis, MN: University of Minnesota, National
Center on Educational Outcomes.
Ysseldyke, J. E., & Olsen, K. R. (1997).
Putting alternate assessments into practice: What to measure and
possible sources of data (Synthesis Report 28). Minneapolis, MN:
University of Minnesota, National Center on Educational
Outcomes.
Zascavage, V. T., & Keefe, C. H. (2004).
Students with severe speech and physical impairments:
Opportunity barriers to literacy. Focus on Autism and Other
Developmental Disabilities, 19, 223–234.
The National Center on Educational Outcomes
(NCEO) (www.nceo.info) has been providing national leadership in
designing and building educational assessments, especially for
students with disabilities and students who are English language
learners, for almost twenty years. NCEO works with states to
identify important outcomes for students with disabilities and
examine their participation in national and state assessments,
including with the use of accommodations and alternate
assessments. NCEO believes that it is possible to enhance the
positive consequences of assessments used for system
accountability with students with disabilities and reduce their
negative consequences through systematic attention to
assumptions in the design, implementation, and continuous
improvement of assessments and related accountability systems.
The National Alternate Assessment Center
(NAAC) represents a multi-disciplinary team of experts in
measurement and special education to advance theory and practice
in the designs of alternate assessments. NAAC provides guidance
to states as they design or re-design their alternate
assessments, using the best research available on effective
measurement practices, universal design, and alignment with
grade-level content expectations and curriculum. NAAC provides
resources to states, teachers, and parents on its Web site
(www.naacpartners.org).
1 The term “significant cognitive
disabilities” is used to refer to students who participate in
alternate assessments based on alternate achievement standards.
This term overlaps the population of students with severe
developmental disabilities, but it is not synonymous.