Empirical Support for Accommodations Most Often Allowed in State Policy


NCEO Synthesis Report 41

Published by the National Center on Educational Outcomes

Prepared by:

Martha Thurlow • Sara Bolt

November 2001


Any or all portions of this document may be reproduced and distributed without prior permission, provided the source is cited as:

Thurlow, M., & Bolt, S. (2001). Empirical support for accommodations most often allowed in state policy (Synthesis Report 41). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved [today's date], from the World Wide Web: http://cehd.umn.edu/NCEO/OnlinePubs/Synthesis41.html


Executive Summary

The lack of research findings on the effects of assessment accommodations has been lamented for several years. This need became more pressing when IDEA 97 became law and required that students with disabilities be included in assessments and provided with needed assessment accommodations. In 1999 Tindal and Fuchs developed a comprehensive review of the literature on test changes. The complexity of the many studies included in that review made it evident that a searchable database would be needed to cull information on specific accommodations, specific groups of students, specific ages, or combinations of these and other factors. NCEO launched a searchable database of research on accommodations in September 2001 (http://cehd.umn.edu/NCEO/AccomStudies.htm), with plans to update the research included in the accommodations database every three months.

This report is a compilation of information collected from the searchable database on the accommodations that are most often allowed in state policies: Braille editions, computer/machine response, dictate response to scribe, extended time, interpreter for instructions, large print edition, mark answers in test booklet, read aloud, test direction clarifications, and test breaks. The summary of research reveals that there are not yet simple or conclusive answers to questions about the effects of specific accommodations. It is important to continue to document what the research tells us, and to analyze the specific contexts of the studies, their methodologies, and their findings.


Research Findings Needed

The reauthorization of the Individuals with Disabilities Education Act in 1997 (IDEA 97) requires that students with disabilities be included in state and district assessments, with appropriate accommodations as necessary. Accommodations are changes in assessment materials or procedures that address aspects of students’ disabilities that may interfere with the demonstration of their knowledge and skills on standardized tests. Accommodations attempt to eliminate barriers to meaningful testing, thereby allowing for the participation of students with disabilities in state and district assessments.

Currently there is limited consensus on what constitutes an “appropriate” accommodation. Although providing accommodations for sensory or physical disabilities (e.g., Braille, large print, etc.) has rarely been questioned, accommodations for students with disabilities that specifically affect cognitive functioning (e.g., read the test to the student, extended time, etc.) have been considered more controversial due to beliefs that these accommodations may alter the construct the test is intended to measure (Phillips, 1994). This controversy is evident from analyses of state policies of testing accommodations. Thurlow, House, Boys, Scott, and Ysseldyke (2000) found that although there has been an increase in the number of states with accommodation policies, states continue to vary in terms of which accommodations they will either allow on statewide tests or treat in the same way as “standard” test administrations.

Complicating this issue is the fact that empirical research on the effects of specific testing accommodations is still quite limited (Fuchs, Fuchs, Eaton, Hamlett, & Karns, 2000). In 1999, Tindal and Fuchs completed a comprehensive review of literature on testing accommodation research. Although they identified 114 studies on test changes overall, these studies have not provided conclusive support for many specific accommodations. Also, many of the studies that were included in their review did not address the use of specific accommodations.

Given the controversy surrounding the use of accommodations as evidenced by state policy analysis (Thurlow et al., 2000), it is important to know whether the most frequently allowed accommodations are supported by a solid research base. The purpose of this document is to summarize information currently available on 10 of the most frequently allowed testing accommodations in state policies. The intent of this summary is to describe what the research currently indicates and to give decision makers a general overview of common accommodations to help them make “appropriate” accommodation decisions.


Method

In 1999 NCEO gathered, organized, and reported various aspects of state accommodation policies (Thurlow et al., 2000). Five main categories of accommodations were identified: presentation, presentation equipment, scheduling, response, and setting accommodations. State policy information was organized according to each of these main categories, and by each of the more frequently mentioned individual accommodations (read aloud, extended time, dictated response, etc.). The report also included information on states that allowed specific accommodations only under certain conditions (e.g., on certain test content areas, for specific grades of students, etc.). For the current synthesis, the number of states allowing accommodations both with and without such limitations was tallied. Information for the top 12 accommodations that are allowed in state policies is included in Table 1.

Table 1. Most Frequently Allowed Accommodations in State Policies

Accommodation                               | States Allowing (With or Without Limitations) | States Allowing (Without Limitations)
Individual Administration                   | 44 | 44
Dictate Response to Proctor/Scribe          | 43 | 32
Small Group Administration                  | 41 | 41
Large Print                                 | 40 | 38
Braille                                     | 38 | 33
Extended Time                               | 37 | 32
Interpreter for Instructions                | 36 | 34
Read/Reread/Simplify/Clarify Directions     | 35 | 31
Computer/Machine Response                   | 34 | 28
Read Aloud                                  | 34 | 4
Write in Test Booklet                       | 33 | 28
Testing with Breaks                         | 33 | 28

It is important to note that these are not necessarily the most frequently used accommodations; they are simply the most frequently allowed in state policies. Because two of the first three accommodations listed (individual and small group administrations) are not considered highly controversial and may actually be used during standardization procedures, these accommodations were excluded from this synthesis of accommodation research. For the remaining 10 accommodations, survey and empirical research were collected and summarized.

The Tindal and Fuchs (1999) report provided initial guidance on what research studies to include. Updated searches of ERIC were conducted to identify more recent research. Research on accommodations used for K-12 students in large-scale assessments was the primary focus. However, for several of the accommodations, limited research of this type was identified and so other research (accommodations for college students, intellectual assessments, etc.) was also included. The 10 accommodations are presented here in alphabetical order.


Braille Edition of the Test

Braille editions of tests are developed to accommodate students with blindness or significant visual impairments. Sixty-three possible dot combinations that can be read tactually form the basis for the Braille code, which is now universally accepted as the graphic symbol system for blind readers (Barraga, 1983).

 

Explanation

Academic tests are not designed to measure students’ sensory abilities. However, if students with blindness are required to take a regular-print version of the test, their performance will be more representative of their visual disability than of their academic abilities. By taking a Braille version of a test, students with blindness are able to demonstrate their true academic abilities. Phillips (1994) pointed out that providing accommodations to students with sensory deficits has traditionally been a common practice.

 

State Use

The Braille accommodation is allowed by 33 of the 48 states that have statewide assessments (Thurlow et al., 2000). Five additional states allow this accommodation with limitations. Some states allow the Braille accommodation only on certain assessments and others may not allow the accommodation simply because a Braille version of the test has not yet been developed.

 

Empirical Research

Despite the fact that the Braille accommodation has been widely accepted, studies have shown that students using a Braille edition of a test may have trouble with certain types of items. Bennett, Rock, and Kaplan (1987) found that math items involving special symbols and tally systems were differentially difficult for students taking a Braille edition of the Stanford Achievement Test (SAT). In addition, Bennett, Rock, and Novatkoski (1989) found that differential item functioning existed for the Braille edition of the SAT math section, specifically among items that included figures in the stimulus and items for which spatial estimation was considered helpful in eliminating two of the options. Similarly, Coleman (1990) found that Braille readers had more difficulty with length measurement tasks than regular-print readers. A statewide stakeholder focus group in Texas suggested that tactual versions of print diagrams and pictures may convey very different information to students than visual diagrams and pictures, thus challenging the validity of the Braille version of the test. Although certain types of test items have been found to be more difficult for Braille readers, it appears that overall test scores are not greatly affected by this.

In addition to having difficulty with certain test items, students taking a Braille assessment may need extra time to complete the test. According to Phillips (1994), reading in Braille is a slower process than reading print materials. Braille shorthand can help speed up the process; however, Barraga (1983) suggests that extensive abbreviation can actually make it more difficult for students to recognize words in an unfamiliar context. Wright and Wendler (1994) found that the majority of students who were given twice the normal amount of time were able to finish the test.

 

Controversy

The Braille accommodation is typically regarded as a change that maintains the validity of the test. There is little controversy about whether this accommodation should be allowed. However, even though the accommodation is allowed, the scores from Braille editions of a test often are not aggregated or reported in the same way as other scores. Analyses of test performance at the item level also suggest that certain types of items are differentially difficult for Braille readers.

 

Recommendations

    Braille assessments should be developed for tests that currently do not have a Braille edition and should be offered to students with severe visual impairments who use Braille.

    Items that are difficult to translate into a Braille version of the test should be avoided during initial item development (e.g., picture items, tally items, etc.).

    Students using a Braille version of the test should be given extended time to complete the test, perhaps as much as double the time.


Computer/Machine Response

Students are using computers more than ever before. Similarly, computerized assessments are becoming more widely used. Because computerized assessments often involve both computer presentation and computer response, it is often difficult to disentangle the impact of the computer response mode on test administration.

 

Explanation

Many students complete writing assignments via computer. Students who are familiar with completing assignments on a computer may be more comfortable completing a test in this mode. Because tests are intended to measure the outcomes of instruction, the rationale for using this accommodation when students use computers in the classroom is clear. Also, many students with physical impairments that limit their ability to respond with paper and pencil may not be able to demonstrate their true knowledge and skills unless a test is administered in an alternate format, such as via computer. Thus, computer/machine response has been studied for both writing assessments and assessments in other subject areas.

 

State Use

Of the 48 states with statewide assessment programs, 28 allow the computer/machine response accommodation. Six additional states allow it in some situations and prohibit it in others. States that limit its use commonly only allow this accommodation on certain assessments, or only allow it if all students in the classroom have the opportunity to take the test via computer (Thurlow et al., 2000).

 

Survey Research

Jayanthi, Epstein, Polloway, and Bursuck (1996) sent surveys to 708 general education teachers to obtain information on educators’ perceptions and use of a variety of accommodations. Results of the survey (401 respondents) indicated that of the 24 accommodations studied, allowing a word processor was somewhat difficult for elementary and middle school teachers to provide, but somewhat easier for high school teachers. Overall, 8.3% of teachers surveyed reported using this accommodation, and teachers reported finding it somewhat helpful.

 

Empirical Research

Several studies have examined the use of computerized assessments for students in grades K-12. For this analysis, our primary focus was computer and word-processor response accommodations, rather than general computer administrations. Studies that examined student preferences related to the computer/machine response accommodations also are discussed.

Writing assessments. Eight studies were identified that examined the effects of having students complete writing tasks via computer. The majority of these studies involved middle-school students, and some included students with disabilities.

One of these studies showed students performing better under a computer response mode. Russell and Haney (1997) had two groups of middle school students complete an open-ended assessment on paper that involved questions in a variety of subject areas. Then, one group completed a performance writing assessment (which required extended written response) on paper, and one group completed it on computer. Paper-and-pencil responses for the second test were then transcribed into computer format. Results indicated that the groups scored similarly on the open-ended assessment, but that students using a computer for the performance writing assessment scored significantly better than those responding with paper and pencil (effect size = .94).

Other studies have suggested that positive effects of the computer response mode are mediated by student efficiency in keyboard use. In a later study, Russell (1999) again had middle school students answer open-ended test questions in different subject areas, some students using a paper-and-pencil format and others responding in the computer format. Results indicated that for students who keyboard at a speed of 20 words or more per minute, the paper-and-pencil format underestimated achievement level. For slower keyboarders, the computer response format underestimated achievement level. Similarly, Russell and Plati (2001) found that paper-and-pencil results severely underestimated the achievement of students in grades 8-10 who were accustomed to writing using a computer. This study used open-ended questions from the language arts section of the Massachusetts Comprehensive Assessment System (MCAS).

Hollenbeck, Tindal, Harniss, and Almond (1999) found no overall differences between the scores of seventh and eighth grade students composing stories for a statewide writing test in computer mode versus handwritten mode. In fact, students with disabilities performed significantly worse when composing with a computer than when handwriting their stories. Significant effects were found, however, for students who were allowed to use spell-checkers. It was not clear whether handwritten work was converted into word-processed work prior to evaluation in this study.

Several studies have looked at process-related differences between handwritten and computer formats for writing. In a study by Vacc (1987), four eighth grade students (all certified as having mild mental handicaps) wrote letters in both computer and handwritten formats. Vacc found that writing letters on a microcomputer took more time, produced noticeably longer letters, and involved more revision than writing them by hand. The mean number of words written per unit of time was substantially higher for participants’ handwritten letters. Judges’ evaluations did not differ significantly between the two production modes.

MacArthur and Graham (1987) had fifth and sixth graders with learning disabilities compose and revise stories under three modes: handwritten, word processed, and dictation. Although the dictation condition resulted in significant differences, the handwritten and word-processed stories did not differ in terms of length, quality, story structure, mechanical or grammatical errors, or vocabulary. Word-processing was less than half as fast as handwriting. The overall amount of revision was similar across the handwritten and word-processed modes.

Another study looked at differences in how judges rated handwritten and word-processed essays for a statewide assessment (Hollenbeck, Tindal, Stieber, & Harniss, 1999). Middle school students completed the assessment in handwritten mode. Next, essays were word-processed, without changes in spelling or grammar. Original handwritten compositions were rated significantly higher than the typed compositions on three of the six traits for the total group, and five of the six mean trait scores favored the handwritten essays. This study suggested that the two modes should not be used in the same evaluation system. Similarly, factor analysis studies of writing assignments corresponding to the Oregon statewide writing assessment found that when handwritten and word-processed essays were analyzed separately, all traits loaded on a single factor; however, when data from the two modes were analyzed together, two factors emerged: one for the handwritten traits, and one for the word-processed traits (Helwig, Stieber, Tindal, Hollenbeck, Heath, & Almond, 2000). Furthermore, in this study, low correlations were found between individual students’ ratings on computer and handwritten essays.

Other computerized response assessments. Nine studies examining the use of computerized assessments for K-12 students in non-writing assessments were identified. Three of these studies involved assessment of writing-related skills (capitalization and spelling), and the others examined other skill areas (math, reading, vocabulary, etc.). Several of these studies demonstrated that students performed better under the paper-and-pencil format.

Watkins and Kush (1988) conducted a study in which elementary students with learning disabilities received a capitalization test both in a paper-and-pencil format and via computer (with conditions counterbalanced). During the computer administration, students were presented items measuring particular capitalization objectives until the student reached a “mastery,” “non-mastery,” or “review” criterion for each objective. Analysis of the paper-and-pencil tests also involved coding student progress into one of the three categories. Results indicated that students scored higher on the paper-and-pencil test; however, the computer and paper-and-pencil tests did not significantly differ in their assignment of students to instructional interventions.

Varnhagan (1984) studied one group of students in special education and one group of third-grade regular education students, both of which were administered a spelling test in standard written format and via computer. Results indicated that the written mode elicited higher scores than the computer format for both groups. Students generally took more time to complete the computer test. In contrast, Hasselbring and Crossland (1982) found that the computer response format was favored over the paper-and-pencil format on a spelling test for 9- to 14-year-old students with learning disabilities. The computer administration required less teacher administration time, and difficulties associated with deciphering student handwriting were avoided in the computer response format.

In a study by Swain (1997), third grade students (some with disabilities in math, some without) completed two math tests (Keymath and a computer-administered math test). Participants scored higher on all subtests of the paper-and-pencil test than on the computer-administered test. No interaction effects (mode of administration by disability status) were found.

Russell (1999) also found limited support for computer response accommodations on a math test. He found that regardless of keyboarding speed, middle school students’ performance on math assessments administered on a computer underestimated their level of achievement. He suggested that students may require scratch paper for many math items, which is difficult to provide in a computer administration.

Other studies have shown no differences according to assessment mode. Miller (1990) administered a receptive vocabulary screening test to students with cerebral palsy and students without disabilities. Students completed different forms of the test in both standard response mode and via computer (counterbalanced design). Results indicated that the two modes were equivalent for the groups of students. In the study by Russell and Haney (1997), in addition to completing a writing performance assessment, the two groups of students completed a multiple choice test in differing formats (computer and paper-and-pencil). While students who were administered the writing test via computer performed better on a writing performance task, the two groups did not significantly differ in performance on the multiple choice test.

Horton and Lovitt (1994) found mixed results in the area of reading. Their study involved having middle and high school students (some with and some without learning disabilities) complete reading inventories under two conditions: computer presentation and response, and paper-and-pencil administration. Results favored the computer mode on factual questions for the students overall; however, it appeared that middle school students with learning disabilities scored markedly better on social studies items in the paper-and-pencil format than in the computer format. On interpretive test items, the results of the group analysis showed no significant difference between the two assessment methods.

In addition to the Hasselbring and Crossland (1982) and Horton and Lovitt (1994) studies which provided some support for the computer response accommodation, one other study was identified that supported the use of this accommodation for students with disabilities. Burk (1999) studied the effects of a computer format versus a paper-and-pencil format for a variety of populations on multiple choice tests in multiple subject areas. Results indicated that students with learning disabilities performed significantly better on tests delivered via computer.

Student preferences. A number of the studies on the effects of computer accommodations also surveyed students’ perceptions of the computer administration. Hollenbeck et al. (1998) found that both students with and without disabilities tended to prefer the handwritten format for the writing assessment. Varnhagan (1984), Watkins and Kush (1988), and Horton and Lovitt (1994) all found that students preferred the computer format, despite their general findings that students performed better under the paper-and-pencil condition.

Overall, it appears that the computer/word-processor response accommodation has limited empirical support, especially for students with disabilities. The research review identified studies that both supported and discouraged its use, as well as studies that demonstrated no mode effect.

 

Controversy

Research findings indicate limited agreement about whether computer/machine response is a valid accommodation for students with disabilities. Teachers find it somewhat difficult to implement as an accommodation, and states are not unanimous in allowing it. Also, it is clear that without necessary computer knowledge and typing skills, this accommodation may hinder student performance. However, with increasing use of and access to computers, this accommodation may become more helpful and necessary for students. Furthermore, computer administrations can offer easy access to other accommodations (e.g., computer read aloud, large print, etc.). Although this synthesis did not focus on the multiple accommodation possibilities available via computer, research studies have begun to analyze how several accommodations can be incorporated by using computers. Finally, if students cannot physically use a pencil to complete a test, it may make more sense for them to use a computer response format.

The fact that judges often rate word-processed essays lower than handwritten essays suggests that this accommodation may put students at a disadvantage unless there is appropriate training of scorers. Greater attention to these scoring implications will be needed if computer use in assessments increases as expected.

 

Recommendations

    Students must be provided the necessary typing and computer skills so that they are accustomed to using the computer/machine response accommodation. Without this support, caution should be taken in allowing a student to use the computer/machine response accommodation.

    This accommodation may also be relevant for students who physically cannot manipulate a pencil, and can more easily respond using a computer or machine. Of course, instruction in its use, and previous experience with computers are essential.

    Because studies have demonstrated that raters tend to apply different evaluation criteria to handwritten and computer versions, it is essential that evaluators first demonstrate the ability to apply the same criteria to both before scoring assessments.

    When students use this accommodation, it is important to verify that equipment is working properly. Also, students should save their responses frequently during testing to ensure that work is not inadvertently erased (CEC, 2000).


Dictate Response to a Scribe

Students are sometimes offered the opportunity to orally respond to test items and have a scribe record their answers instead of providing a written response. This accommodation is offered to students with a variety of disabilities including learning disabilities, behavioral disorders, mild mental retardation, physical impairments, and communication disorders.

 

Explanation

To record answers to most test items, students need writing skills. It is possible for test results to inaccurately represent a student’s ability in a particular subject area simply because the student’s writing skills hinder his or her ability to demonstrate achievement. For instance, students who can accurately solve a math problem may have difficulty demonstrating this ability if they cannot write.

 

State Use

Of the 48 states with statewide assessment programs, 32 allow the dictated response to a scribe accommodation. Eleven additional states allow it in some situations, and prohibit it in others. Some states prohibit the use of this accommodation on writing tests, but allow it for other subject areas. Another state specifies that students must indicate punctuation and spell all key words when this accommodation is used. This suggests that there is some disagreement about whether dictated response is a valid accommodation (Thurlow et al., 2000).

 

Survey Research

Several surveys have addressed the scribe accommodation. There is some evidence of change over time in the responses to this accommodation. The earliest survey was conducted by Gajria, Salend, and Hemrick (1994). They surveyed 100 teachers on their awareness, use, and perceived integrity of a variety of accommodations and received responses from 64 of them. Gajria et al. found that 82.8% of these respondents were aware of the dictated response accommodation, 50% reported using it in their classrooms, and 67% thought that it maintained the integrity of the test. It was ranked 31st of 32 modifications in terms of “ease of use,” and 28th in terms of effectiveness.

Results of the Jayanthi et al. (1996) survey (401 respondents of 708 surveyed) indicated that of the 24 accommodations studied, allowing dictated responses for tests was rated as easy and somewhat helpful to provide. Overall, 21% of the teachers who responded indicated that they currently use this accommodation in their classrooms.

Lambert, Dodd, Christensen, and Fishbaugh (1996) surveyed rural secondary teachers, asking about their willingness to provide and their current use of the dictated response accommodation. Of the 171 teachers surveyed, 121 returned surveys. “Allow the student to dictate answers to a proctor” ranked tenth among several accommodations in terms of the percentage of teachers who had provided the accommodation in the past. Also, the average rating of willingness to provide this accommodation was 4 on a 1-5 scale, with 5 being “very willing to provide.”

Most recently, Hollenbeck, Tindal, and Almond (1998) surveyed 633 regular and special educators, of which 166 responded. They found that 85% of the teachers who returned surveys had correct knowledge about this accommodation, and 71% reported using this accommodation.

 

Empirical Research

A limited amount of research has been conducted on the dictated response accommodation. Three empirical studies and four descriptive studies on the use of this accommodation for K-12 students were identified. Two of the empirical studies focused only on story writing, while the other investigated the effects of dictation on a math assessment. The four descriptive studies investigated the effects of accommodations on the scores of students with disabilities in the statewide assessments in Maryland and Kentucky, and were included due to the limited amount of empirical research available.

Fuchs, Fuchs, Eaton, Hamlett, and Karns (2000) studied the effects of the encoding accommodation (“teacher wrote student responses”) on math problem-solving curriculum-based measure (CBM) scores of students both with and without learning disabilities. Results indicated that this accommodation had a significantly greater impact on the scores of students with learning disabilities than on those of students without learning disabilities.

MacArthur and Graham (1987) studied the differences between stories written by 5th and 6th grade students with learning disabilities under handwritten, word-processed, and dictated response modes. Each student composed a story in each of the three modes. Results indicated that the dictated stories were longer, of higher quality, and had fewer grammatical errors than handwritten or word-processed stories. This study suggests that dictating a story helps to free the student from spelling, penmanship, punctuation, and capitalization concerns, and allows students to express their ideas more quickly. Hidi and Hildyard (1983, as cited in MacArthur and Graham) conducted a similar study on students of average achievement in grades 3 and 5. Results showed an increase in the length of stories and essays, but no differences in terms of quality. MacArthur and Graham (1987) suggest that when the “mechanical and conventional demands of producing text” are removed, “learning disabled students compose more fluently and with better results.”

Tippets and Michaels (1997) studied the factor structure of the reading and language arts sections of the Maryland statewide assessment for 3rd, 6th, and 8th grade students. Students were assigned to accommodation categories according to their Individualized Education Programs (IEPs). Analysis of test scores indicated that students receiving the dictation accommodation in combination with other accommodations (read aloud, extended time, etc.) performed better than students not receiving any accommodations. The average scores for students receiving the dictated response accommodation were better than students who wrote their own responses. A concern that emerged was whether scribes were encouraging students to elaborate on responses, helping students organize their thoughts, or facilitating writing mechanics such as capitalization and punctuation skills. However, overall, the factor structures that emerged from accommodated and non-accommodated tests in this study were similar.

Koretz (1997) performed an in-depth study of the effects of the dictated response accommodation on the scores of students in the Kentucky statewide assessment system. Results indicated that the dictated response accommodation had a large impact on scores for students with learning disabilities, behavioral disabilities, and mild mental retardation, especially in social studies, reading, and science. In a second study of this assessment system, Koretz and Hamilton (1999) found this accommodation to have somewhat less of an impact on scores of students with disabilities.

Trimble (1998) studied the effects of the dictation accommodation in combination with other accommodations on the scores of 4th, 8th, and 11th-12th grade students taking the statewide assessment in Kentucky. For 4th graders, the dictated response accommodation produced average scores above those of the total population, including students without disabilities. For 8th and 11th graders, students receiving the dictated response accommodation scored higher on average than students with disabilities not receiving the accommodation, but not above the average score of all students.

 

Controversy

Research on the dictated response accommodation for students with disabilities in the K-12 range has been limited and rather inconclusive. Although students with disabilities appear to receive higher scores on a variety of tests when using this accommodation, it is unclear whether the test scores obtained using this accommodation as implemented are valid. For students who physically cannot respond without a scribe, dictated response seems like an appropriate accommodation to allow. Likewise, when a test is not meant to measure certain writing skills, it seems like a legitimate accommodation. By contrast, if a test is designed to measure grammar and other specific writing skills, this accommodation may be less valid. Furthermore, teachers appear to be concerned about the difficulty of implementation, and consider it to be one of the least effective accommodations.

 

Recommendations

    The dictated response accommodation should be provided to students who are determined likely to benefit from it, on tests that are not specifically designed to measure writing ability, including skills such as spelling.

    If students are unable to handwrite, but can efficiently type on a computer, a computer response accommodation should be considered prior to a dictated response accommodation.

    When the dictated response accommodation is allowed, scribes should be carefully trained in how to record responses (e.g., whether students must indicate spelling, punctuation, etc.). Scribes should also be monitored to be certain that students’ verbatim responses are recorded.

    Scribes should be familiar with the test so they can easily record student answers (CEC, 2000).

    More research should examine the effects of this accommodation on the test scores of students with and without disabilities.


Extended Time

Students with disabilities are often allowed extra time to complete tests that are normally administered under timed conditions. This accommodation is frequently given in addition to other common accommodations (read aloud, Braille, large print, etc.), and is offered to students with a variety of disabilities. Extended time is often understood as “unlimited time,” although in some cases it simply means that a specified amount of time is added to the time normally allotted for students to complete the test. Research studies have used both definitions.

 

Explanation

Deficits in information processing rates are common among students with learning disabilities (Huesman, 1999). When a test is timed, students with learning disabilities may not be given enough time to show what they know and can do. Students with other disabilities, similarly, may require extra time if their disability hinders their ability to respond in a timely manner. Extending the amount of time that a student has to complete a test, therefore, can help to alleviate these problems, and consequently can make the test a more accurate measure of the student’s level of achievement.

 

State Use

Of the 48 states with statewide assessment programs, 32 allow the extended time accommodation. Five additional states allow it in some situations, and prohibit it in others. Two other states prohibit the use of extended time. This suggests that there is some disagreement about whether extended time is a valid accommodation (Thurlow et al., 2000).

 

Survey Research

Several surveys have explored the extended time accommodation. The earliest, by Gajria et al. (1994) surveyed 100 teachers on their awareness, use, and perceived integrity of the accommodation. They found that 93.8% of the 64 teachers who responded were aware of the extended time accommodation, 90.6% reported using it in their classrooms, and 79.7% thought that it maintained the integrity of the test. It was ranked 14th of 32 modifications in terms of “ease of use,” and 15th in terms of effectiveness.

The Jayanthi et al. (1996) survey, completed by 401 of the 708 general education teachers surveyed, indicated that allowing extended time for tests was rated as “easy” and helpful to provide. Overall, 74% of the teachers who responded indicated that they currently use the extended time accommodation in their classrooms.

The Lambert et al. (1996) survey of 171 rural secondary teachers found that for 121 respondents, extended time ranked first among several accommodations in terms of the percentage of teachers who had provided the accommodation in the past. The average rating of willingness to provide this accommodation was 4.4 on a 1-5 rating scale, 5 being “very willing to provide.”

Finally, the Hollenbeck et al. (1998) survey of 633 regular and special educators found that 59% of the 166 teachers who returned surveys had correct knowledge about this accommodation. They also found that only 13% reported using this accommodation.

 

Empirical Research

Extended time is one of the most widely researched testing accommodations. Chiu and Pearson (1999) conducted a meta-analysis of 18 studies on this accommodation (including primary, secondary, and post-secondary studies) and found that the majority of studies looked at the effects of extended time on the test scores of students with learning disabilities. Extended time accommodation studies often examine whether there is a significantly greater effect of the accommodation on the scores of students with disabilities versus the scores of students without disabilities. Overall, Chiu and Pearson (1999) found a .07 effect size supporting the use of the extended time accommodation for students with disabilities over students without disabilities. Some studies have found a differential impact of the accommodation according to student disability status; others have found no differential impact.

Several studies examined the effects of extended time on test scores of students with disabilities in elementary, middle, and high school. Some of these studies examined the effects of the extended time accommodation on tests in separate content areas, including language arts and math. Others have looked at the effects of this accommodation on scores from standardized tests covering multiple subject areas.

Math. Eight studies were identified that examined the effects of extended time on math test scores. Gallina (1989) found that elementary students with Tourette’s syndrome scored in the average range on untimed math tests, but performed poorly on timed math tests. Centra (1986) found that students with learning disabilities derived significantly greater gains from extended time on the math section of the Scholastic Aptitude Test (SAT) than students without learning disabilities. Furthermore, Lewis and Green (1999) found there to be relatively few items displaying differential item functioning (DIF) for students with disabilities receiving the extra time accommodation on a math test. Finding few DIF items suggests that the accommodation did not change the underlying construct of the test.

Contrary to these findings that provide support for the use of the extended time accommodation for students with disabilities, Marquart (2000) found there to be no significant increase in math test scores for eighth grade students when receiving the test under an extra time condition (40 min) as opposed to a standard condition (20 min). This non-effect was found for all three groups of students studied (students with disabilities, students without disabilities, and at-risk students). Students, however, were found to prefer the extended time condition.

Munger and Loyd (1991) also found that 5th grade students with disabilities (both learning and physical) were not differentially affected by the untimed condition on a math test when compared to students without disabilities. Similarly, Montani (1995) found that both low achieving and normally achieving 3rd grade students benefited from extended time, with no significant difference in effect between the two groups. Murray (1987) looked at the effects of the untimed condition on spatial test scores of boys ages 12-14 years. One group included boys without learning disabilities along with boys with learning disabilities and average achievement; the other group included boys with learning disabilities and low achievement. Results indicated that the first group performed better in the untimed condition but not in the timed condition.

Taken altogether, the results of these studies are inconclusive. However, results of a study by Fuchs, Fuchs, Eaton, Hamlett, and Karns (2000) may provide a better understanding of the effects of extended time on math tests. This study found that 4th grade students with learning disabilities did not differentially benefit from extended time on concept and application problems, but did differentially benefit from extended time on more innovative problem-solving type items. This suggests that on certain types of math problems, extended time may be more influential for students with disabilities.

Language arts. Five studies were identified that examined the effects of the extended time accommodation on language arts tests. Munger and Loyd (1991) found that 5th grade students with physical and learning disabilities were not differentially affected by the amount of “speededness” on a language use and expression test when compared to students without disabilities. Fuchs, Fuchs, Eaton, Hamlett, Binkley, and Crouch (2000) found that the extended time accommodation did not benefit fourth grade students with learning disabilities more than students without disabilities on reading tests. Huesman and Frisbie (2000) found that the test scores of students with learning disabilities on the reading section of the Iowa Tests of Basic Skills did significantly increase in the extended time condition; however, the scores of students without disabilities who were given extended time and told to “take time and work carefully” also significantly increased. Another important finding in their study was that the amount of extended time that students with learning disabilities required varied among individuals.

Perlman, Borger, Collins, Elenbogen, and Wood (1996) found that fourth grade students with learning disabilities and IEPs requiring the extended time accommodation finished the reading test within the recommended time limit, but that eighth graders with learning disabilities took substantially more time, and scored higher when they did use extra time. Overall, the results of the Perlman et al. (1996) study suggest that the tests may be more reliable when administered without time limits, and that merely knowing unlimited time is available may yield higher scores even if no additional time is used. Finally, Lewis and Green (1999) found relatively few items that displayed DIF for students with disabilities receiving extra time on a language arts test, which suggests that the same construct was being measured under the accommodated condition. Altogether, these studies suggest that although there is no definitive answer to the question of whether extended time benefits students with disabilities more than other students, this accommodation may make test results more valid for all students.

Ziomek and Andrews (1998) studied the effects of the extended time accommodation for students who had taken the American College Test (ACT) twice, at least once under extended time conditions. Results indicated that students performed better under the extended time condition. The authors concluded that scores from tests administered under non-standard conditions should continue to be “flagged” until score comparability can be determined; however, this recommendation has recently been countered: the Educational Testing Service (ETS) decided to stop flagging accommodated tests.

 

Controversy

Research on the extended time accommodation for students with disabilities in the K-12 grade range has been inconclusive. Although students with disabilities appear to gain from the use of this accommodation on a variety of tests, it is also true that students without disabilities display similar gains. For math tests, it appears that differential gains for students with disabilities may be evident on certain types of items. On reading tests, the research has provided less direction. Limited differential effects for students with disabilities, as opposed to students without disabilities, were found for this accommodation. One study suggested both groups of students benefit from extended time. It may be that the underlying question is whether the particular test is intended to measure how fast students can complete test items. If tests are meant to measure rate of completion, then extended time is not a valid accommodation. However, if rate of test completion is not meant to be evaluated, extended time may be considered more valid.

In this synthesis, studies involving extended time in addition to other accommodations were not examined in depth. It is understandable that extended time might be a valid accommodation for students who are also having the test read aloud, reading large print, etc. The question of whether extended time alone is a valid accommodation for students with disabilities remains somewhat unanswered.

 

Recommendations

    The extended time accommodation should be allowed for students who are determined likely to benefit from it, on tests that are not specifically designed to measure rate of test completion.

    The amount of extra time necessary may vary among students; it may be important to determine for each student individually the amount of extra time that is most desirable. Some students may experience fatigue from a longer test, and others may benefit.

    Students who require this accommodation may need to take the test in another room (CEC, 2000).

    The extended time accommodation should be allowed when other accommodations provided necessitate extended time (e.g., read aloud, tape recorder, etc.).

    When only some students are allowed access to this accommodation, it may be necessary to indicate that they received this accommodation in reporting results.

    Test-makers should determine whether test-completion rate is essential to the construct they are testing. If testing rate is not intended to be measured, perhaps all students should be given the amount of time they need to complete the test.


Interpreter for Instructions

Students with hearing impairments are often allowed to have an interpreter communicate test instructions in sign language. Although entire tests (directions and items) may be communicated to students through interpreters, for the purpose of this summary, we will refer to using interpreters to communicate only test directions. According to Bourquin (1996), interpreting is “the process of receiving a message in one language and transmitting an equal meaning into a second language.” The form of interpretation used will depend on the language preferences of the student (e.g., American Sign Language, finger spelling, signing + speech, etc.).

 

Explanation

For students to demonstrate their knowledge and perform well on tests, they must understand what the test requires them to do. Students with hearing impairments cannot understand orally communicated test instructions unless they are able to lip-read or are provided an interpreter who can translate the spoken directions into a language that they can understand. Through the use of an interpreter for test directions, tests can measure achievement rather than sensory deficits of students with hearing impairments.

 

State Use

Of the 48 states that have statewide assessment programs, 34 allow “interpreter for instructions” as a testing accommodation. Two additional states allow this accommodation on certain tests, but not on others. One other state prohibits the use of this accommodation (Thurlow et al., 2000).

 

Survey Research

Petronio (1988) interviewed 10 students with deaf-blindness to find out what they needed and wanted most from interpreters. Results indicated that students wanted information conveyed in the language that they most easily understood. A number of students also needed special modifications to the normal manner of interpreting due to tunnel vision or problems viewing interpreters at a distance. Students also emphasized the need for interpreters to convey visual information about relevant things going on in the classroom. Finally, Petronio’s results indicated that students who had used interpreters for three or more years were the most satisfied with them; those who were new to using interpreters were more easily frustrated.

 

Empirical Research

Very limited empirical research exists on the use of an interpreter specifically for communicating achievement test instructions to students with hearing impairments. Because of this, other studies of test adaptations for students with hearing impairments were reviewed. Ray (1982) conducted a study of adapting the Wechsler Intelligence Scale for Children-Revised (WISC-R) for students with hearing impairments. The adaptation involved using additional sample items prior to each subtest, as well as using alternate instructions in which the directions were signed to the student. The directions were also modified to avoid using words that were not easily translated into sign language. Results indicated that when these adaptations were made, deaf children scored on average the same as other students on the performance scale of the WISC-R.

Sullivan (1982) similarly studied the effects of accommodations on the WISC-R for students who were deaf. Each student in the study had half of the subtest directions communicated via Total Communication (simultaneous verbal and sign language), and half communicated verbally with gestures. This study found that WISC-R Performance Scale scores were higher when directions were communicated via Total Communication than when communicated only verbally and gesturally. An additional study (Sullivan, 1982) showed that Total Communication of directions resulted in significantly higher scale scores than pantomiming directions. However, results also suggested that the population of students with hearing impairments is not homogeneous; adaptations therefore must be individualized.

Benderson (1988, as cited in Gorden, Stump, and Glaser, 1996) found that the scores of students with hearing impairments who were given special administrations of the Stanford Achievement Test (SAT) did not over- or under-predict their college achievement. For other students given special administrations, there was a great deal of fluctuation in terms of how well the test scores predicted future achievement.

 

Controversy

Providing an interpreter for instructions to students with hearing impairments appears to be a very reasonable testing accommodation. Students need to be able to comprehend task demands in order for them to demonstrate what they know and can do on a test. Perhaps the only controversy identified has to do with the quality of the interpretation. If interpreters do not communicate directions accurately to students with hearing impairments, the accommodation does not meet its intended purpose.

 

Recommendations

    An interpreter for instructions should be provided to students with hearing impairments who will benefit from such communication.

    Interpreters should be trained professionals (Bourquin, 1996), and should sign the instructions exactly as given (Ray, 1982).

    Interpreters should be aware of the unique needs of the students they are serving; different students may prefer different language systems.

    Interpreters should check to be sure that the correct message is received by the student.

    Some words in the standardized directions may not be easily translated into sign language. Test developers should avoid using such words and phrases, and a standard procedure should be developed for how interpreters should communicate directions if such words and phrases are necessary.

    Students should have experience using an interpreter prior to the testing situation.

    More empirical research is necessary to demonstrate effective ways of adapting standardized tests for students with hearing impairments.


Large Print Edition of a Test

Large-print editions of tests are frequently used to accommodate students with visual impairments. Burns (1998) states that large-print applications are also used by students who are distracted by cluttered test formats and by very young children. Research studies have defined large-print in several similar ways: “14-point Helvetica font” (Mick, 1989), “double the size of regular print” (Burk, 1999), and “16-point type” (Grise, Beattie, & Algozzine, 1982).

Explanation

Few academic tests are developed with the intent to measure either visual abilities or a student’s degree of distractedness. However, the way students are currently tested requires that they have adequate visual abilities and that they attend well to regular print tests. Consequently, the score of a non-accommodated student with a visual impairment on a reading comprehension test may reflect his or her visual difficulties rather than his or her comprehension skills. A large-print edition of a test can ensure that the student’s academic abilities rather than his or her visual abilities are measured.

 

State Use

Of the 48 states that have statewide assessment programs, 38 allow large print as a testing accommodation (Thurlow et al., 2000). Two additional states allow it in some situations but not others. One of these states allows large-print only if it is not offered in addition to the extended time accommodation. None of the states that allow large print indicate that it can be used only by students with visual disabilities.

 

Survey Research

The Jayanthi et al. (1996) survey of 708 general education teachers indicated that the 401 responding teachers found the large-print testing accommodation both helpful and easy to provide. Overall, 9.8% of the teachers who responded indicated that they currently use the large-print accommodation in their classrooms.

 

Empirical Research

Students with visual impairments. Several studies on large-print accommodations have focused on secondary and post-secondary students with visual impairments. A few of these studies have suggested that when using the large-print accommodation, students with visual impairments score similarly to students without disabilities taking a standard administration of the Graduate Record Exam (GRE) (Bennett, Rock, & Jirele, 1987). Bennett, Rock, and Kaplan (1987) found no significant differences in SAT item functioning for students with visual impairments taking a large-print edition of the SAT. These results support the idea that the large-print accommodation does not affect the validity of academic tests.

Not all of this research, however, has been supportive of the large-print accommodation. Rock, Bennett, and Jirele (1988) identified problems in fit with the three-common-factor model of the GRE for students with visual impairments taking a large-type, extended time administration when compared to students without disabilities and students with visual impairments taking the standard administration. They found that the overall scores of students with visual impairments taking the large-type edition of the GRE exhibited lower intercorrelations with the verbal factor of the test than the scores of groups receiving a standard administration. Because students self-selected the large-print exam, it is unclear whether these test differences were due to the severity of visual impairments, or whether the large-type accommodation actually influenced test validity. The authors suggest that the reading task may be more difficult for large-print readers because large print items may be cut off at the end of a page and continue on the next page. This may interfere with the student’s construction of item meaning.

Only one study was identified that used the large-print accommodation with young students with visual impairments. Coleman (1990) studied 24 students (7 large print readers, 7 Braille readers, and 10 regular print readers) and found that large print readers had more difficulty with length measurement tasks than did the other readers. However, he concluded that it was vision problems, rather than the large print format itself, that accounted for these results.

Although not pertaining directly to the specific effects of the large print accommodation, Wright and Wendler (1994) documented that students who received the large-print accommodation on a Scholastic Assessment Test (SAT) field trial required up to twice the normal time to complete the test. The extra time needed was attributed to visual processing deficits.

Students with learning disabilities. The use of the large-print accommodation by students with learning disabilities has also been studied. The majority of these studies suggest that the large-print accommodation does not have a significant effect on test scores (Beattie, Grise, & Algozzine, 1983; Burk, 1999; Florida Department of Education, 1982; Fuchs, Fuchs, Eaton, Hamlett, Binkley, & Crouch, 2000; Grise, Beattie, & Algozzine, 1982; Mick, 1989). In one study, however, large print did have an effect on scores. Perez (1980) selected three modified presentation modes (regular print, large print, and audio support) for items from Florida’s Statewide Student Assessment and administered these modified tests to 48 secondary-level students with learning disabilities. Findings indicated that large-print presentation resulted in the highest levels of performance overall. Performance with large print was significantly higher than performance with regular print, as well as higher than performance with audio support.

In all but one study, therefore, the large-print accommodation seemed to have no effect on the scores of students with learning disabilities. It is important to note that, unlike the studies of students with visual impairments, none of these studies involved a decision-making process whereby individual students’ large-print needs were addressed. Perhaps if more attention had been given to identifying which students would benefit from the accommodation, it would have had a greater effect for students with learning disabilities.

 

Controversy

Overall, it appears that state policymakers and educators agree that large print is a useful accommodation. Furthermore, most studies suggest that this accommodation does not unfairly affect students’ scores. Perhaps the most important concern is that large-print test booklets can be less user-friendly than regular print tests. They are often much heavier than regular test booklets and require more page turning. Students using large-print test booklets have indicated that they do not like how heavy they are (Florida Department of Education, 1982). Additionally, more research may be needed on how this accommodation affects test validity for very young children, a group that Burns (1998) states often uses large print.

 

Recommendation

    The large-print accommodation should be offered to any student who is determined likely to benefit from it.

    When determining the large print needs of a student, the best approach is to experiment with different point and font sizes to decide which text best meets the student’s needs (Burns, 1998).

    Students receiving the large-print accommodation should have adequate practice taking tests in this format to be familiar with the extra page turning that may be required.

    Tests that are translated into large print need to be checked to make sure that no inappropriate line or page breaks are introduced and that items involving measurement are not distorted by enlargement. Make sure that items are grouped as much like the original as possible (CEC, 2000).

    Students who use the large-print accommodation may also require additional time due to visual processing difficulties.


Mark Answers in Test Booklet

In some situations, the typical multiple-choice “fill-in-the-bubble” separate sheet response format may not be an appropriate way for a student to complete a test. The “mark answers in test booklet” accommodation may be necessary for some students to demonstrate what they truly know and can do. This accommodation allows students to respond to test items directly in the test booklet rather than on a separate answer sheet.

Explanation

Burns (1998) suggested that if a student does not understand the “bubble-task,” is not able to record answers using a bubble format, or has a mobility or coordination problem, marking answers in the test booklet might be an effective accommodation. Bubbling-in answers on a separate answer sheet requires adequate student attention to ensure that the intended item is being marked. It also requires adequate motor coordination. If a student does not have these prerequisite abilities, his or her test performance may not reflect what the test was intended to measure.

 

State Use

Of the 48 states with statewide assessment programs, 28 allow the mark answers in test booklet accommodation. Five additional states allow it on a portion, but not all, of the required state assessments (Thurlow et al., 2000).

 

Empirical Research

Four empirical studies were identified that examined the effects of the “mark answers in test booklet” accommodation. Three of these studies showed no significant difference between performances of students receiving this accommodation and students completing the test under a standard response mode.

Rogers (1983) conducted a study of this accommodation with 8- to 16-year-old students with hearing impairments taking a multiple-choice spelling test. Results indicated that the separate answer sheet was a valid assessment strategy, and that the two methods had similar degrees of reliability. In other words, the standard and accommodated test formats functioned similarly for this group of students. Tolfa-Veit and Scruggs (1986) looked at differences in number of items marked “outside the box” for fourth grade students with and without learning disabilities. Results indicated that students with and without learning disabilities had similar percentages of items marked outside of the necessary area. Students with learning disabilities were found to complete significantly fewer items than students without learning disabilities in the specified time period; however, the two groups did not differ in the percent of items they answered correctly. Tindal, Heath, Hollenbeck, Almond, and Harniss (1998) similarly studied this accommodation with fourth grade students in special and regular education. In this study, students completed statewide reading and math tests in both accommodated and non-accommodated conditions. Results indicated that there were no significant differences between testing response formats for either group of students (those in regular education and those in special education).

In a study by Mick (1989), a modified test format including “marking answers in test booklet” was found to result in lower scores for students with educable mental handicaps and students with learning disabilities. A reading test was administered to these students under both a modified format (large print, unjustified lines, and writing answers directly in test booklet) and a standardized format. Although these results may suggest that marking answers in the test booklet puts students at a disadvantage, it is not clear which aspect(s) of the modified version accounted for changes in performance.

Overall, there is no empirical support for this accommodation if the criterion is increased test scores. Test scores for students taking the test with and without this accommodation are similar; only one study suggested otherwise.

 

Controversy

Separate answer sheets help to facilitate the scoring process. Research has shown students to score similarly both with and without this accommodation. It might be argued, therefore, that the accommodation of marking answers in test booklets is unnecessary. However, the research has failed to address the needs of students with specific motor impairments, as well as those students with attention problems that might lead them to mis-mark answers when using a separate answer sheet. Because of this, this accommodation still seems warranted in certain situations with certain students.

 

Recommendations

    This accommodation should be allowed for students with motor coordination problems that hinder their ability to respond appropriately in the standard test format. Furthermore, this accommodation should be favored over other response accommodations (e.g., dictated response to a scribe) because it is less likely to result in test bias.

    More research should be done on this accommodation, especially involving students with attention problems and physical impairments.

    If a student does not know how or is not attentive enough to mark answers appropriately on a separate answer sheet, this accommodation should be provided; however, it may be that the student should be taught how to appropriately use the standard bubble sheet format for later testing purposes (Burns, 1998).

    There should be ample space in the test booklet for the student to respond when providing this accommodation (CEC, 2000).


Read Aloud Test Items

Reading aloud a test to a student is an accommodation frequently used by students with learning disabilities. Students with physical or visual impairments may also benefit from the oral reading accommodation; however, it has been suggested that students with visual impairments should be tested with Braille or large-print when possible (Phillips, 1994). The read aloud accommodation generally refers to having a teacher or aide read test directions, test items, and test reading passages to students. Although this accommodation is often used in combination with other equipment (e.g., computer, video, cassette tape, etc.), for the purposes of this analysis, focus will be placed on read aloud accommodations as provided by test proctors and teachers.

 

Explanation

Reading is a prerequisite skill for demonstrating skills in a variety of academic areas. For instance, completing math word problems requires that students know how to read the questions. However, the intent of such an item may be to test math skills rather than reading skills. For this reason, it appears necessary to allow a read aloud accommodation for students with reading disabilities. In this way, students’ true abilities can be manifested on the test without error that is due to an inability to read quickly or accurately.

 

State Use

Of the 48 states with statewide assessment programs, 4 states allow the read aloud accommodation without limitations. One state prohibits its use, and 30 states allow it under certain conditions. The states that impose limitations on this accommodation generally allow it only on certain sections of the assessment, such as in the math, science, social studies, and writing sections, and prohibit its use on the reading sections (Thurlow et al., 2000).

 

Survey Research

The 64 teachers responding to the Gajria et al. (1994) survey of 100 teachers generally favored the read aloud accommodation. A total of 90.6% of the respondents were aware of the read aloud accommodation, 79.7% reported using it in their classrooms, and 89% thought that providing this accommodation maintained the integrity of the test. It was ranked 7th of 32 accommodations in terms of “ease of use,” and 5th in terms of effectiveness.

The Jayanthi et al. (1996) survey responses from 401 of the 708 general education teachers to whom surveys were sent indicated that “reading test questions to students” was an easy accommodation to make. Respondents also indicated that it was “helpful” to “very helpful” to use. Of the teachers who responded, 67.9% reported using this accommodation in their classrooms.

Hollenbeck et al. (1998) found that 56% of the 166 responding teachers (of 633 surveyed) had correct knowledge about the accommodation entitled “read math text.” A total of 21% reported using this accommodation.

 

Empirical Research

Oral reading accommodation for math tests. Nine studies looked at the effects of the oral reading accommodation on math test scores. Five of these studies tested the hypothesis that the oral reading accommodation should differentially boost the scores of students with disabilities compared to students without disabilities. Three of these studies found that students with learning disabilities benefited significantly more from the accommodation than students without disabilities (Fuchs, Fuchs, Eaton, Hamlett, & Karns, 2000; Tindal, Heath, Hollenbeck, Almond, & Harniss, 1998; Weston, 1999). Although Johnson (2000) found that students with learning disabilities did differentially benefit from the accommodation, the increase in scores was not statistically significant due to a small sample size. Johnson (2000) also found that orally reading the test items had no differential effects for good versus poor readers among non-disabled students.

Additional findings by Tindal et al. (1998) provide further support for using this accommodation with students with learning disabilities: even the lowest achievement-ranked students in general education did not benefit significantly from the oral reading accommodation. According to these findings, the oral reading accommodation appears to be a valid accommodation for students with learning disabilities. Calhoon, Fuchs, and Hamlett (2000) studied a sample composed only of students with learning disabilities and found that providing a reader on a math test significantly increased scores. They also studied a computer read aloud accommodation and found that it, too, increased the scores of students with learning disabilities. No significant difference in scores was found between the teacher-read and computer-read accommodations.

Other studies have looked at the read aloud accommodation in terms of overall test functioning. Pomplun and Omar (2000) found that a proposed two-factor model (writing and math factors) fit the data for all groups of students on a fourth grade math test, including students with learning disabilities receiving the read aloud accommodation, students with learning disabilities not receiving the accommodation, and students without disabilities. This further suggests that the read aloud accommodation is appropriate for students with disabilities on math tests. Additionally, Bielinski, Thurlow, Ysseldyke, Freidebach, and Freidebach (2001) found that although there were a few items that displayed differential item functioning (DIF) for accommodated students on a statewide math test, overall, the accommodated version appeared to be measuring the same construct as the non-accommodated test for students without disabilities. In a similar study, Lewis and Green (1999) found relatively few items displaying DIF for the accommodated group on a math test; however, no reference group of regular education students was provided in the study.
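Several of the studies summarized in this report rely on differential item functioning (DIF) analyses to judge whether an accommodated test measures the same thing as the standard administration. As a rough illustration only, the sketch below computes one common DIF statistic, the Mantel-Haenszel common odds ratio, for a single item; it is a minimal, hypothetical example (the function name and data are invented) and is not the procedure used in any of the cited studies.

    # A minimal sketch of a Mantel-Haenszel DIF check for a single test item.
    # Examinees are stratified by total test score, and the odds of answering the
    # item correctly are compared between a reference group (e.g., students taking
    # the standard administration) and a focal group (e.g., accommodated students).
    # All names and data are hypothetical; this is not the cited studies' procedure.
    from collections import defaultdict
    import math

    def mantel_haenszel_dif(total_scores, groups, item_correct):
        # Build a 2x2 table (group x correct/incorrect) within each score stratum.
        tables = defaultdict(lambda: [[0, 0], [0, 0]])
        for score, group, correct in zip(total_scores, groups, item_correct):
            row = 0 if group == "ref" else 1
            col = 0 if correct else 1
            tables[score][row][col] += 1

        num = den = 0.0
        for (a, b), (c, d) in tables.values():  # a, b = ref correct/incorrect; c, d = focal
            n = a + b + c + d
            num += a * d / n
            den += b * c / n

        alpha = num / den                  # common odds ratio across strata
        delta = -2.35 * math.log(alpha)    # ETS delta metric; |delta| >= 1.5 is often flagged
        return alpha, delta

    # Hypothetical usage with four examinees (far too few for a real analysis).
    alpha, delta = mantel_haenszel_dif(
        total_scores=[20, 20, 20, 20],
        groups=["ref", "ref", "focal", "focal"],
        item_correct=[1, 0, 1, 0],
    )
    print(round(alpha, 2), round(delta, 2))  # ~1.0 and ~0.0 -> no evidence of DIF here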

In contrast, Meloy, Deville, and Frisbie (2000) found that the read aloud accommodation increased the scores of both learning disabled and non-learning disabled students on math and science sections of the Iowa Tests of Basic Skills. They argued that the read aloud accommodation changed the construct being measured for most students relative to what is measured under standard conditions.

For the most part, the oral reading accommodation for math tests appears to be supported by research. However, the findings of Meloy et al. (2000) suggest that more research is needed to validate the use of this accommodation.

Oral reading accommodation for reading tests. Concern has been expressed regarding the use of the oral reading accommodation on reading tests. Phillips (1994) suggested that this accommodation results in the substitution of listening comprehension for reading comprehension, and therefore greatly alters what the test measures. Burns (1998) argued that the oral reading accommodation for reading tests undermines test validity, and could lead to students not getting necessary services.

Five studies on the use of the read aloud accommodation for reading tests were identified. Tippets and Michaels (1997) examined the factor structure of reading and language arts tests for students receiving the read aloud accommodation alone and in combination with other accommodations (e.g., read aloud + extended time, read aloud + dictated response, etc.), and compared this to the factor structure of the tests for non-accommodated students. Results indicated no difference between the structures, which suggests that the same underlying construct was being measured.
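The idea of factor structure invariance examined by Tippets and Michaels (1997) can be illustrated informally: if the same items load on the same underlying factors in roughly the same way for accommodated and non-accommodated groups, the test is arguably measuring the same construct for both. The sketch below is not their analysis (they used more formal methods); it simply shows one common descriptive index, Tucker's congruence coefficient, applied to invented factor loadings.

    # A hypothetical illustration of comparing factor loading patterns across two
    # groups (accommodated vs. standard administration) with Tucker's congruence
    # coefficient. Loadings below are invented; values near 1.0 indicate that the
    # two groups show very similar loading patterns on the factor.
    import numpy as np

    def congruence(loadings_a, loadings_b):
        a = np.asarray(loadings_a, dtype=float)
        b = np.asarray(loadings_b, dtype=float)
        return float(np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b)))

    # Hypothetical loadings of six items on a single reading/language arts factor.
    accommodated = [0.62, 0.55, 0.70, 0.48, 0.66, 0.59]
    standard = [0.60, 0.58, 0.68, 0.50, 0.63, 0.61]
    print(round(congruence(accommodated, standard), 3))  # close to 1.0 -> similar structure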

In contrast to the finding of Tippets and Michaels (1997), Meloy et al. (2000) found that the read aloud accommodation benefited both learning disabled and non-learning disabled students on several sections of the Iowa Tests of Basic Skills, with the greatest impact found on the reading sections of the test. This suggests that the accommodation changes what the test measures and is therefore invalid. Bielinski (2000) also suggested that the read aloud accommodation changed the construct that a statewide reading test was intended to measure. He found numerous items displaying DIF for accommodated students when compared to results for students in regular education. However, there were also several DIF items for low performing students in regular education. Lewis and Green (1999) found few DIF items for accommodated students in a similar study; however, the study did not provide a comparison group of students without disabilities.

Finally, Barton and Huynh (2000) conducted a study of errors made by students with disabilities receiving the read aloud accommodation on a high school reading test. Their analysis suggested that there may be some items that are particularly difficult for students receiving the read aloud accommodation, such as items that include an index of entries or references that are difficult for students to listen to and remember if they are not skilled in following along as the proctor reads.

Clearly, more research needs to be done on the oral reading accommodation to determine how it affects what the test measures.

 

Controversy

Research has primarily supported the use of the read aloud accommodation for students with disabilities on math tests. However, great concern has been expressed about the validity of using this accommodation on reading tests, and limited research has addressed this issue. Additionally, it is not always clear what the read aloud accommodation entails; in some cases students may be allowed to ask the reader to “re-read” certain sections, and in other cases students have the test read only once and must rely on auditory memory in answering the questions (Burns, 1998). Administering this accommodation to large groups of students can complicate things further; students may require different amounts of time to complete items, requiring the reader to adjust his or her pace accordingly. There is also concern about whether students should be given a copy of the test to follow along with or should simply listen to the test being read. Some students may benefit from following along, whereas it may pose a distraction for others (Burns, 1998). Finally, readers must be careful not to give inadvertent cues, such as changes in voice inflection, when correct answers are being read. Students may easily recognize these subtle hints, which may in turn alter the meaning of test scores. Overall, it appears that differences in how the read aloud accommodation is administered may have important effects on test scores.

 

Recommendations

    The read aloud accommodation should be offered to students who are determined likely to benefit from it on tests not specifically designed to measure reading achievement.

    More research needs to be done on using the read aloud accommodation on reading and language arts tests.

    Decisions should be made on an individual basis about whether the student can follow along when reading the test, or whether he or she should only listen to the test being read (Burns, 1998).

    In order to best reflect the non-accommodated condition, students should be allowed to ask the reader to re-read certain sections of the test.

    It is best to administer this accommodation individually to effectively accommodate each student’s pace.

    Students using the oral reading accommodation should be allowed approximately twice the normal amount of time to complete the test (Burns, 1998).

    Readers should be trained in how to effectively administer this accommodation (proper reading speed, avoidance of inappropriate voice inflection, etc.). They should know the pronunciation of words on the test and practice reading it in a straightforward and clear manner prior to test administration (CEC, 2000).


Read/Reread/Simplify/Clarify Test Directions

A variety of adaptations are frequently made in how test directions are communicated to students. Directions may be read aloud or paraphrased, additional examples may be given, or the student may be allowed to ask to have directions repeated. Frequently, this accommodation is included along with the read aloud accommodation for the remainder of the test. This accommodation appears to be most appropriate for students with reading-related disabilities.

 

Explanation

In order for students to demonstrate what they know and can do on a test, they must have a good understanding of what the test requires them to do. A small misunderstanding in test directions can cause students to complete an entire test incorrectly. In general, tests do not intend to measure how carefully a person can follow directions. This accommodation helps to facilitate the understanding of test directions so that students can demonstrate their true knowledge and skills.

 

State Use

Of the 48 states with statewide assessment systems, 31 allow some form of the read/reread/simplify/clarify directions accommodation (Thurlow et al., 2000). One state prohibits this accommodation, and four other states have limitations on its use. In one state that limits its use, clarifying directions is prohibited. In another state, the accommodation is allowed only on certain subtests.

 

Survey Research

This accommodation has been the subject of several surveys. The earliest was by Gajria et al. (1994), who surveyed 100 teachers and received responses from 64. They found that 90.6% of the respondents were aware of the “read test directions and items” accommodation, 79.7% reported using it in their classrooms, and 89% thought that it maintained the integrity of the test. It was ranked 7th of 32 modifications in terms of “ease of use,” and 5th in terms of effectiveness.

Jayanthi et al. (1996) received surveys from 401 of 708 general education teachers. Results indicated that of the 24 accommodations studied, “give individual help with directions during tests” was rated as “easy” to “very easy” to provide. It was ranked first in helpfulness, and in terms of how many teachers used the accommodation (84.7% of the teachers who responded indicated that they used the accommodation).

Lambert et al. (1996) received responses from 121 of 171 teachers surveyed. “Allow a proctor to rephrase test questions” ranked sixth among several accommodations in terms of the percentage of teachers who had provided the accommodation in the past. Also, the average rating of willingness to provide this accommodation was 4.1 on a 1-5 rating scale, 5 being “very willing to provide.”

Hollenbeck, Tindal, and Almond (1998) surveyed 633 regular and special educators, of which 166 responded. They found that 51% of the teachers who returned surveys had correct knowledge about the “clarify directions” accommodation, and 16% reported using this accommodation.

 

Empirical Research

Very limited research on this accommodation was identified. Elliott, Kratochwill, and McKevitt (2001) included “support with understanding test directions” as part of a package of accommodations provided to students in a study of the effects of testing accommodations for students with and without disabilities. Several students without disabilities and all students with disabilities in this study were given an individualized accommodation package determined to be appropriate by their teachers. Another group of students without disabilities was provided a standard package of testing accommodations, which included “support with understanding test directions” as well as other accommodations (e.g., help reading words, verbal encouragement, extra time). Finally, one group of students without disabilities was not provided any accommodations. Accommodation packages were found to have positive effects for a large percentage of students with disabilities (63.4%), a moderate percentage of students without disabilities who received the teacher-recommended packages (42.9%), and a smaller percentage of students without disabilities receiving the standard package (20.0%). Because of the use of multiple accommodations in this study, however, it is difficult to determine the specific effects of the “support with understanding test directions” accommodation.

 

Controversy

Tests typically are not designed to measure how well students follow directions, but rather how well they read, write, calculate, etc. Logically, this accommodation should not change what the test measures. Teachers and policymakers tend to agree that this is a valid accommodation; overall, there is not much controversy. However, further empirical support should be sought. Furthermore, because this accommodation is “definitionally diverse,” it may be necessary to study its different manifestations separately (e.g., reading directions vs. simplifying directions).

 

Recommendations

    In general, test directions should be written as simply and clearly as possible on the test, thus avoiding the need for much additional clarification.

    When the purpose of the test is not to test the ability to follow directions, this accommodation should be allowed.

    This accommodation should always accompany the read aloud accommodation unless the test measures reading decoding skills.

    When clarifying test directions, it is important for the test administrator to clearly and accurately communicate how the test is to be completed.

    A set of guidelines should be developed to indicate appropriate and inappropriate procedures for this accommodation.

    This accommodation may require testing in a separate location in order to avoid distracting other students (CEC, 2000).


Test Breaks

Students often are provided breaks between subtests. In some circumstances, students are not given breaks between subtests, or the breaks they are given are very short. Rarely are students given breaks within subtests. In this summary, the “test break” accommodation is defined as allowing students to take the test with breaks when the normal administration of the test does not include breaks. The test breaks accommodation may provide students with various disabilities the opportunity to demonstrate what they truly know and can do on a test.

 

Explanation

The use of magnification equipment, tape recorders, and earphones can cause fatigue, making frequent breaks a necessity (CEC, 2000). Also, students with attention problems may have difficulty concentrating on a test for the period of time that is typically expected of students. Fatigue and lowered concentration levels can limit students’ abilities to show what they really know on a test. When provided with more frequent or longer breaks, students with disabilities can experience a break from frustration and regain an optimal level of test concentration.

 

State Use

Of the 48 states with statewide assessment programs, 28 allow the test break accommodation. Five additional states allow this accommodation under certain conditions. For these five states, the accommodation is allowed on only some of the statewide assessments, or is allowed only between subtests (Thurlow et al., 2000).

 

Survey Research

No survey research was identified that specifically addressed the test breaks accommodation. One study, however, examined the accommodation of testing over several sessions. Hollenbeck, Tindal, and Almond (1998) surveyed 633 regular and special educators and received responses from 166. They found that 45.2% of the responding teachers had correct knowledge about this accommodation, and 18% reported using this accommodation.

 

Empirical Research

No studies were identified that specifically addressed the test breaks accommodation. However, two studies were identified that examined the use of a very similar accommodation – testing on multiple days. DiCerbo, Stanley, Roberts, and Blanchard (2001) studied the effects of two-day and three-day administrations of reading comprehension tests on the scores of 939 third-grade students. All students completed one form of the Stanford 9 reading comprehension test in a single session, and a second form of the same test under either the two-day or three-day condition. Students were not able to return to previously completed test sections on the second or third days, and the overall amount of time students had to complete the test was equivalent across conditions. Results indicated that participants’ scores were 12 scaled score points higher for the divided-time administrations than for the single-session administration (ES = .25). There was also a significant time by reading comprehension ability interaction (p < .001): middle and low ability readers experienced greater benefits from the multiple-day administrations than high ability readers.
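As a point of interpretation, if the reported effect size is a conventional standardized mean difference (an assumption; the study summary does not state the formula used), the two reported values jointly imply the size of the score scale's standard deviation:

    ES = \frac{\bar{X}_{\text{multiple-day}} - \bar{X}_{\text{single-session}}}{SD} \approx \frac{12 \text{ scaled score points}}{SD} = .25
    \quad\Rightarrow\quad SD \approx 48 \text{ scaled score points}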

Walz, Albus, Thompson, and Thurlow (2000) examined the effects of allowing students to take a reading test over multiple days versus one day. Participants were 112 seventh and eighth grade students (48 students receiving special education services, and 64 non-special education students) taking the Minnesota Basic Standards reading test. Under the one day administration, students read and answered questions for three reading passages; in the multiple day administration, students read one passage and completed the corresponding questions each day for three days. Each student participated in both administrations of the test. Results indicated that a multiple-day test accommodation did not enhance the test scores of students with disabilities. Furthermore, students without disabilities performed better under the one-day administration.

Overall, it appears that more research needs to address the test breaks accommodation. No research was identified that specifically studied this accommodation, and studies of similar accommodations have differing results. Perhaps the age of the students is a factor (e.g., younger students may benefit more from this accommodation).

 

Controversy

Test breaks are suggested to provide students with disabilities a chance to regain concentration in order to perform optimally on tests. Furthermore, this accommodation does not appear to significantly affect what is being tested. However, no research has addressed whether students with disabilities perform better when provided the test breaks accommodation. It has been suggested that breaks within subtests may distract from the problem-solving rhythm a student has developed (Burns, 1998) and interfere with performance on a sequence of items (CEC, 2000). However, breaks between subtests are suggested to be beneficial, especially when students are being administered a test individually (Burns, 1998). It is clear that more research is needed to determine whether the test breaks accommodation is appropriate, and if so, how and when it should be used.

 

Recommendations

    Test breaks between subtests should be provided to students who have been determined likely to benefit from this accommodation.

    Breaks within subtests should be avoided, unless absolutely necessary.

    Because it may be possible for students to find answers to previous test questions during breaks, students should not be allowed to return to previously completed test items following a break.

    The test break accommodation may require a separate setting (CEC, 2000).

    Lengthened test breaks may be necessary for some students (Burns, 1998).

    Test-developers should take into consideration the age and the sustained attention abilities of the students who will be tested.

    More research needs to address the use of this accommodation.


Summary

This report on the research that has been conducted on the accommodations most frequently allowed in state policy reveals that there are not yet simple or conclusive answers to questions about the effects of accommodations. Yet, there are some trends. There were three accommodations included here that more than 90% of the states allow without limitations (Interpreter for Instructions, Large Print Edition of the Test, Read/Reread/Simplify/Clarify Directions). For these three, the primary concerns relate to the quality of the implementation of the accommodation and to the alignment of the accommodation with specific student needs.

The Read Aloud accommodation is clearly the most controversial of the most frequently allowed accommodations, with only four states allowing it without limitations. The research confirms this controversy, with results for reading tests mired in concerns about fidelity of implementation, validity issues, and logistical concerns, even though use of Read Aloud generally is supported for math tests.

It is important to continue to document what the research tells us, and to analyze the specific context of the studies, their methodologies, and their findings. Doing so will be facilitated by referring to NCEO’s searchable accommodations research database on its Web site (http://cehd.umn.edu/NCEO/AccomStudies.htm).


References

Barraga, N. (1983). Visual handicaps and learning. Austin, TX: Pro-Ed, Inc.

Barton, K. E. & Huynh, H. (2000). Patterns of errors made on a reading test with oral reading administration. Paper presented at the annual conference of the National Council on Measurement in Education, New Orleans, LA.

Beattie, S., Grise, P., & Algozzine, B. (1983). Effects of test modifications on the minimum competency performance of learning disabled students. Learning Disability Quarterly, 6, 75-77.

Bennett, R.E., Rock, D.A., & Jirele, T. (1987). GRE score level, test completion, and reliability for visually impaired, physically handicapped, and nonhandicapped groups. The Journal of Special Education, 21 (3), 9-21.

Bennett, R.E., Rock, D.A., & Kaplan, B.A. (1987). SAT differential item performance for nine handicapped groups. Journal of Educational Measurement, 24 (1), 44-55.

Bennett, R.E., Rock, D.A., & Novatkoski, I. (1989). Differential item functioning on the SAT-M Braille Edition. Journal of Educational Measurement, 26 (1), 67-79.

Bielinski, J., Thurlow, M., Ysseldyke, J., Freidebach, J., & Freidebach, M. (2001). Read-aloud accommodation: Effects on multiple-choice reading & math items (Technical Report 31). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Bourquin, E.A. (1996). Using interpreters with deaf-blind clients: What professional service providers should know. RE:view, 27 (4), 149-154.

Braden, J.P. (1992). Intellectual assessment of deaf and hard-of-hearing people: A quantitative and qualitative research synthesis. School Psychology Review, 21 , 82-94.

Burk, M. (1998, October). Computerized test accommodations: A new approach for inclusion and success for students with disabilities. Paper presented at Office of Special Education Program Cross Project Meeting “Technology and the Education of Children with Disabilities: Steppingstones to the 21st Century.”

Burns, E. (1998). Test accommodations for students with disabilities. Springfield: Charles C. Thomas, Publisher, LTD.

Calhoon, M.B., Fuchs, L.S., & Hamlett, C.L. (2000). Effects of computer-based test accommodations on mathematics performance assessments for secondary students with learning disabilities. Learning Disability Quarterly, 23, 271-282.

Centra, J.A. (1986). Handicapped student performance on the Scholastic Aptitude Test. Journal of Learning Disabilities, 19 (6), 324-327.

Chiu, C.W.T., & Pearson, P.D. (1999). Synthesizing the effects of test accommodations for special education and limited English proficiency students. Paper presented at the National Conference on Large Scale Assessment.

Coleman, P.J. (1990). Exploring visually handicapped children’s understanding of length (math concepts). (Doctoral dissertation, The Florida State University, 1990). Dissertation Abstracts International, 51, 0071.

Council for Exceptional Children. (2000). Making assessment accommodations: A toolkit for educators. Reston, VA: Council for Exceptional Children.

DiCerbo, K., Stanley, E., Roberts, M., & Blanchard, J. (2001, April). Attention and standardized reading test performance: Implications for accommodation. Paper presented at the annual meeting of the National Association of School Psychologists, Washington, DC.

Elliott, S., Kratochwill, T., & McKevitt, B. (2001). Experimental analysis of the effects of testing accommodations on the scores of students with and without disabilities. Journal of School Psychology, 31(1), 3-24.

Florida Department of Education. (1982). Florida Technical Report: Study of procedural adaptation and format modifications in testing learning disabled students elementary level: Volume II. Tallahassee: Author.

Fuchs, L.S., Fuchs, D., Eaton, S.B., Hamlett, C., & Karns, K. (2000). Supplementing teacher judgements about test accommodations with objective data sources. School Psychology Review, 29 (1), 65-85.

Fuchs, L.S., Fuchs, D., Eaton, S.B., Hamlett, C., Binkley, E., & Crouch, R. (2000). Using objective data source to enhance teacher judgements about test accommodations. Exceptional Children, 67 (1), 67-81.

Gajria, M., Salend, S.J., & Hemrick, M.A. (1994). Teacher acceptability of testing modifications for mainstreamed students. Learning Disabilities Research and Practice, 9(4), 236-243.

Gallina, N.B. (1989). Tourette’s syndrome children: Significant achievement and social behavior variables (Tourette’s syndrome, attention deficit hyperactivity disorder) (Doctoral dissertation, City University of New York, 1989). Dissertation Abstracts International, 50, 0046.

Gordon, R.P., Stump, K., & Glaser, B.A. (1996). Assessment of individuals with hearing impairments: Equity in testing procedures and accommodations. Measurement and Evaluation in Counseling and Development, 29, 111-119.

Grise, P., Beattie, S., & Algozzine, B. (1982). Assessment of minimum competency in fifth grade learning disabled students: Test modifications make a difference. Journal of Educational Research, 76 (1), 35-40.

Hasselbring, T.S. & Crossland, C.L. (1982). Application of microcomputer technology to spelling assessment of learning disabled students. Learning Disability Quarterly, 5, 80-82.

Helwig, R., Stieber, S., Tindal, G., Hollenbeck, K., Heath, B., & Almond, P. (2000). A comparison of factor analyses of handwritten and word-processed writing of middle school students. Eugene, OR: RCTP.

Hollenbeck, K., Tindal, G., & Almond, P. (1998). Teacher’s knowledge of accommodations as a validity issue in high-stakes testing. The Journal of Special Education, 32 (3), 175-183.

Hollenbeck, K., Tindal, G., Harniss, M., & Almond, P. (1999). The effect of using computers as an accommodation in a statewide writing test. Eugene, OR: University of Oregon, BRT.

Hollenbeck, K., Tindal, G., Stieber, S., & Harniss, M. (1999). Handwritten vs. word processed statewide compositions: Do judges rate them differently? Eugene, OR: University of Oregon, BRT.

Horton, S.V., & Lovitt, T.C. (1994). A comparison of two methods of administering group reading inventories to diverse learners: Computer versus pencil and paper. Remedial and Special Education, 15 (6) 378-390.

Huesman, R.L. (1999). The validity of ITBS reading comprehension test scores for learning disabled and non-learning disabled students under extended-time conditions. (Doctoral dissertation, University of Iowa, 1999). Dissertation Abstracts International, A 60/06, 1996.

Huesman, R.L., & Frisbie, D.A. (2000). The validity of the ITBS reading comprehension test scores for learning disabled and non learning disabled students under extended time conditions. Paper presented at the annual meeting of the National Council on Measurement in Education, New Orleans, LA.

Individuals with Disabilities Education Act Amendments of 1997, 20 U.S.C. Sec. 1400 et seq.

Jayanthi, M., Epstein, M.H., Polloway, E.A., & Bursuck, W.D. (1996). A national survey of general education teachers’ perceptions of testing adaptations. The Journal of Special Education, 30 (1), 99-115.

Koretz, D. (1997). The assessment of students with disabilities in Kentucky (CSE Technical Report No. 431). Los Angeles, CA: Center for Research on Standards and Student Testing.

Koretz, D. & Hamilton, L. (1999). Assessing students with disabilities in Kentucky: The effects of accommodations, format, and subject (Technical Report No. 498). Los Angeles, CA: Center for Research on Standards and Student Testing. (ERIC Document Reproduction Service No. ED 440 148).

Lambert, D., Dodd, J.M., Christensen, L., & Fishbaugh, M.S.E. (1996). Rural secondary teachers’ willingness to provide accommodations for students with learning disabilities. Rural Special Education Quarterly, 15 (2), 36-42.

Lewis, D., Green, D. R., & Miller, L. (1999). Using differential item functioning analyses to assess the validity of testing accommodated students with disabilities. Paper presented at the national conference on large-scale assessment, Snowbird, UT.

MacArthur, C.A., & Graham, S. (1987). Learning disabled students’ composing under three methods of text production: Handwriting, word processing, and dictation. The Journal of Special Education, 21 (3), 22-42.

Marquart, A. (2000). The use of extended time as an accommodation on a standardized mathematics test: An investigation of effects on scores and perceived consequences for students of various skill levels. Paper presented at the annual meeting of the Council of Chief State School Officers, Snowbird, UT.

Meloy, L.L., Deville, C., & Frisbie, C. (2000). The Effect of a Reading Accommodation on Standardized Test Scores of Learning Disabled and Non Learning Disabled Students. Paper presented at the annual meeting of the National Council on Measurement in Education (New Orleans, LA).

Mick, L.B. (1989). Measurement effects of modifications in minimum competency test formats for exceptional students. Measurement and Evaluation in Counseling and Development, 22, 31-36.

Miller, P. (1990). Use of the Peabody Picture Vocabulary Test-Revised (PPVT-R) with individuals with severe speech and motor impairment: Effect of response mode on test results (speech impairment). (Doctoral dissertation, University of Kansas, 1990). Digital Dissertations, 51, 5632.

Montani, T.O. (1995). Calculation skills of third-grade children with mathematics and reading difficulties (learning disabilities). (Doctoral dissertation, Rutgers the State University of New Jersey, 1995). Dissertation Abstracts International, 56, 0910.

Munger, G.F., & Loyd, B.H. (1991). Effect of speededness on test performance of handicapped and nonhandicapped examinees. Journal of Educational Research, 85 (1), 53-57.

Murray, E.A. (1987). The relationship between spatial abilities and mathematics achievement in normal and learning-disabled boys. (Doctoral dissertation, Boston University, 1987). Dissertation Abstracts International, 58, 0176.

Perez, J.V. (1980). Procedural adaptations and format modifications in minimum competency testing of learning disabled students: A clinical investigation (Doctoral dissertation, University of South Florida, 1980). Dissertation Abstracts International, 41, 0206.

Perlman, C.L., Borger, J., Collins, C.B., Elenbogen, J.C., & Wood, J. (1996). The effect of extended time limits on learning disabled students’ scores on standardized reading tests. Paper presented at the annual meeting of the National Council on Measurement in Education, New York, NY.

Petronio, K. (1998). Interpreting for deaf-blind students: Factors to consider. American Annals of the Deaf, 133 (3), 33-43.

Phillips, S.E. (1994). High stakes testing accommodations: Validity vs. disabled rights. Applied Measurement in Education, 7 (2), 93-120.

Pomplun, M. & Omar, M. H. (2000). Score comparability of a state mathematics assessment across students with and without reading accommodations. Journal of Applied Psychology, 85, 21-29.

Ray, S.R. (1989). Adapting the WISC-R for deaf children. Diagnostique, 7, 147-157.

Ray, S.R. (1989). Context and the psychoeducational assessment of hearing impaired children. Topics in Language Disorders, 9(4) 33-43.

Rock, D.A., Bennett, R.E., & Jirele, T. (1988). Factor structure of the Graduate Record Examinations General Test in handicapped and nonhandicapped groups. Journal of Applied Psychology, 73 (3), 383-392.

Rogers, W.T. (1983). Use of separate answer sheets with hearing impaired and deaf school age students. B.C. Journal of Special Education, 7 (1), 63-72.

Russell, M. (1999). Testing writing on computers: A follow-up study comparing performance on computer and on paper. Educational Policy Analysis Archives, 7.

Russell, M., & Haney, W. (1997). Testing writing on computers: An experiment comparing student performance on tests conducted via computer and via paper-and-pencil. Educational Policy Analysis Archives, 5 (3).

Russell, M., & Plati, T. (2001). Effects of computer versus paper administration of a state-mandated writing assessment. TCRecord.org. Retrieved January 23, 2001, from the World Wide Web: http://www.tcrecord.org/PrintContent.asp?ContentID=10709.

Statewide Stakeholder Focus Group. (1995, September). Including students with disabilities in statewide assessment and accountability systems: A study of the issues. Under a grant from the Texas Education Agency to Region XI Education Service Center.

Sullivan, P.M. (1982). Administration modifications on the WISC-R Performance Scale with different categories of deaf children. American Annals of the Deaf, 127 (6), 780-788.

Swain, C.R. (1997). A comparison of a computer-administered test and a paper and pencil test using normally achieving and mathematically disabled young children. (Doctoral dissertation, University of North Texas, 1997). Digital Dissertations, 58, 835.

Thurlow, M., House, A., Boys, C., Scott, D., & Ysseldyke, J. (2000). State participation and accommodation policies for students with disabilities: 1999 Update (Synthesis Report 33). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Tindal, G., & Fuchs, L. (2000). A Summary of Research on Test Accommodations: An Empirical Basis for Defining Test Accommodations. Lexington, KY: Mid-South Regional Resource Center. (ERIC Document Reproduction Service No. ED 442 245).

Tindal, G., Heath, B., Hollenbeck, K., Almond, P., & Harniss, M. (1998). Accommodating students with disabilities on large-scale tests: An empirical study of student response and test administration demands. Exceptional Children, 64 (4), 439-450.

Tippets, E., & Michaels, H. (1997). Factor Structure Invariance of Accommodated and Non-Accommodated Performance Assessments. Paper presented at the National Council on Measurement in Education annual meeting, Chicago.

Tolfa-Veit, D., & Scruggs, T.E. (1986). Can learning disabled students effectively use separate answer sheets? Perceptual and Motor Skills, 63, 155-160.

Trimble, S. (1998). Performance trends and use of accommodations on a statewide assessment (Maryland/Kentucky Report Number 3). Minneapolis, MN: National Center on Educational Outcomes.

Vacc, N. (1987). Word processor versus handwriting: A comparative study of writing samples produced by mildly mentally handicapped students. Exceptional Children, 54(2), 156-165.

Varnhagen, S., & Gerber, M.M. (1984). Use of microcomputers for spelling assessment: Reasons to be cautious. Learning Disability Quarterly, 7, 266-270.

Walz, L., Albus, D., Thompson, S., & Thurlow, M. (2000). Effect of a multiple day test accommodation on the performance of special education students (Minnesota Report 34). Minneapolis: University of Minnesota, National Center on Educational Outcomes.

Watkins, M.W., & Kush, J.C. (1988). Assessment of academic skills of learning disabled students with classroom microcomputers. School Psychology Review, 17 (1), 81-88.

Weston, T. (1999). The validity of oral presentation in testing. Montreal, Canada: American Educational Research Association.

Wright, N., & Wendler, C. (1994). Establishing timing limits for the new SAT for students with disabilities. Paper presented at the Annual Meeting of the National Council on Measurement in Education (New Orleans, LA, April 4-8, 1994). ERIC ID# ED375543.

Ziomek, R.L., & Andrews, K.M. (1998). ACT assessment score gains of special tested students. ACT Research Report Series, October, 1998.