Technical Report 65

A Summary of the Research on the Effects of Test Accommodations: 2009-2010

Christopher M. Rogers • Elizabeth M. Christian • Martha L. Thurlow

November 2012

All rights reserved. Any or all portions of this document may be reproduced and distributed without prior permission, provided the source is cited as: Rogers, C. M., Christian, E. M., & Thurlow, M. L. (2012). A summary of the research on the effects of test accommodations: 2009-2010 (Technical Report 65). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.
Executive Summary

The use of accommodations in instruction and assessments continues to be of great importance for students with disabilities. This importance is reflected in an emphasis on research to investigate the effects of accommodations. Key issues under investigation include how accommodations affect test scores, how educators and students perceive accommodations, and how accommodations are selected and implemented. The purpose of this report is to provide an update on the state of the research on testing accommodations as well as to identify promising future areas of research. Previous reports by the National Center on Educational Outcomes (NCEO) have covered research published since 1999. We summarize the research to review current research trends and enhance understanding of the implications of accommodations use in the development of future policy directions, implementation of current and new accommodations, and valid and reliable interpretations when accommodations are used in testing situations. In 2009 and 2010, 48 published research studies on the topic of testing accommodations were found. Among the main points of the 2009-2010 research are:
The studies in 2009-2010 demonstrated several similarities when compared with previous research, especially in relation to the 2007-2008 studies examined in the previous accommodations research review. However, there were several differences, or shifts, as well. There were increases in research examining science assessment accommodations and decreases in research on reading assessment accommodations. There was a rise in multi-purpose study designs, and accordingly, more studies employed multiple data collection methods and instruments. Test performance of students in elementary and middle school received increased attention. The number of accommodations receiving focused examination--including common and unique accommodations--expanded to 10 in the current review. There was a decrease in studies measuring the impact of the extended-time accommodation, as well as a small increase in studies examining response accommodations. Further, students with disabilities were reported not to have benefited from extended-time in half of the relevant studies, and to have benefited in the other half. Research provided more support for the benefits of computerized administration, along with demonstrated score equivalency with and without this accommodation, indicating no problematic concern regarding academic construct validity. In fact, attention to the effects of accommodations on construct validity has increased in general, and only 2 of 21 separate findings indicated that academic constructs differed between the accommodated and non-accommodated testing conditions.

Overview

Federal legislation has spurred states to include all students in statewide assessment, and vast improvements in inclusion have taken place over the past decade. For many students with disabilities, access to tests necessitates the provision of assessment accommodations.
As the use of accommodations has increased, there has been a concurrent need to attend to the implementation of accommodations and to ensure the validity of results when accommodations are used. States look to educational research for answers about which accommodations have proven successful in increasing the validity of results for students with disabilities. Often this effort means looking for increased scores for students with disabilities, along with evidence that the constructs measured, or the validity of inferences that can be drawn from results, are not changed. To synthesize research efforts, NCEO has provided reports on accommodations research completed over time. The time periods included 1999-2001 (Thompson, Blount, & Thurlow, 2002), 2002-2004 (Johnstone, Altman, Thurlow, & Thompson, 2006), 2005-2006 (Zenisky & Sireci, 2007), and 2007-2008 (Cormier, Altman, Shyyan, & Thurlow, 2010). The purpose of this document is to provide a synthesis of the research on test accommodations published in 2009 and 2010. The research described here encompasses empirical studies of score comparability and validity as well as investigations into accommodations use and perceptions of their effectiveness. Taken together, the current research casts a wide net in exploring the issues surrounding test accommodations practices, with concentrated attention on several key accommodations. Reporting the findings of current research studies was a primary goal of this analysis; a second goal was to identify areas requiring continued investigation in the future.

Review Process

Similar to the process used in past accommodations research syntheses (Cormier, Altman, Shyyan, & Thurlow, 2010; Johnstone et al., 2006; Thompson, Blount, & Thurlow, 2002; Zenisky & Sireci, 2007), a number of sources were accessed to complete the review of the accommodations research published in 2009 and 2010.
Specifically, five research databases were consulted: Educational Resources Information Center (ERIC), PsycINFO, Academic Search Premier, Digital Dissertations, and Educational Abstracts. To confirm the thoroughness of our searches, we used the Web search engine Google Scholar to search for additional research. In addition, a hand-search of 30 journals was completed to ensure that no qualifying study was missed. A list of hand-searched journals is available on the National Center on Educational Outcomes website (www.nceo.info/OnlinePubs/AccommBibliography/AccomStudMethods.htm). Online archives of several organizations were also searched for relevant publications. These organizations include Behavioral Research and Teaching (BRT) at the University of Oregon (http://brt.uoregon.edu/), the National Center for Research on Evaluation, Standards, and Student Testing (CRESST; http://www.cse.ucla.edu/), and the Wisconsin Center for Educational Research (WCER; http://www.wcer.wisc.edu/testacc). The initial search was completed in December 2010. A second search was completed in April 2011 to ensure that all articles published in 2009 and 2010 were found and included in this review. Within each of these research databases and publications archives, we used a sequence of search terms. Terms searched for this review were:
Many of these search terms were used as delimiters when searches yielded large pools of documents found to be irrelevant to the searches. The research documents from these searches were then considered for inclusion in this review with respect to several criteria. First, the decision was made to focus only on research published or defended in doctoral dissertations in 2009 and 2010. Second, the scope of the research was limited to investigations of accommodations for regular assessments (hence, articles specific to alternate assessments, accommodations for instruction or learning, and universal design in general were not part of this review). Third, research involving English language learners (ELLs) was included only if the target population was ELLs with disabilities. Fourth, presentations from professional conferences were not searched or included in this review, based on the researchers' criteria to include only research that would be accessible to readers and that had gone through the level of peer review typically required for publication in professional journals or through a doctoral committee review. (This criterion was first implemented during the 2007-2008 review.) Finally, in order to be included in the online bibliography and summarized in this report, studies needed to involve either (1) experimental manipulation of an accommodation, (2) investigation of the comparability of test scores across accommodated and non-accommodated conditions, or (3) examination of survey results on teachers' knowledge and/or perceptions of accommodations.

Results

The results of our analyses of the 48 studies published from January 2009 through December 2010 are presented in substantial detail. We provide the studies' publication types, as well as the range of research purposes. We specify the types of research approaches and the primary and secondary sources of data collection.
We also describe the data collection methods and instruments. We provide the academic content areas covered in the research. We describe research participants in terms of their roles as students, educators, and parents; their ages or grade levels; the participant sample sizes and disability status; and the disability categories. We report the types of accommodations studied. We also explicate the research findings in terms of the impact of accommodations as well as perceptions about accommodations, incidence of accommodations use, and implementation. Additional sections offer perspectives on accommodations in postsecondary education, the accommodations decision-making process, and the association of accommodations with academic discipline. Finally, limitations and future research directions in the assembled body of research literature are presented as reported by the researchers.

Publication Type

The results of the review process showed that a total of 48 studies about accommodations were published during the period from January 2009 through December 2010. As shown in Figure 1, of these 48 studies, 36 were journal articles, 10 were dissertations, and 2 were published professional reports released by research organizations (e.g., National Center on Educational Outcomes, University of Oregon Behavioral Research and Teaching).

Figure 1. Percentage of Accommodations Studies by Publication Type

The total number of studies published on accommodations in 2009-2010 (n=48) increased from the previous report examining accommodations research published in 2007-2008 (n=40). There was also an increase in the number of journal articles (n=25 in 2007-2008; n=36 in 2009-2010), and a slight decrease in the number of dissertations published on accommodations (n=13 in 2007-2008; n=10 in 2009-2010). The increase in journal articles included in this report may be due, in part, to an increased number of journals that published research on accommodations in 2009-2010.
The report on accommodations research in 2007-2008 included articles from 19 journals; the articles described in the current report were found in 24 journals.

Purposes of the Research

A number of purposes were identified in the accommodations research published in 2009 and 2010. Table 1 provides a view of the predominant focus of each of these 48 studies. In some cases, a work had only one expressed purpose; this describes 11 of the studies (see Appendix A-1). The majority of studies sought to accomplish multiple purposes. In those cases, we identified the "primary purpose" according to the title of the work or the first-mentioned purpose in the text of the work.

Table 1. Primary Purpose of Reviewed Research
The most common primary purpose for research published during 2009-2010 was to report on the effect of accommodations on test scores (31%), comparing scores of students who received accommodations to those of students who did not. The next most common primary purpose was studying perceptions of accommodations and preferences among a small number of accommodations of a certain type (23%). Other primary purposes included comparing test items, which refers to whether item difficulty or other item-specific content validity issues changed when the test format changed from print-based to electronic (e.g., Kim & Huynh, 2010) or to audio presentation (e.g., Cook et al., 2009), among others. The purpose of evaluating test structure focused on the effects of accommodations on academic constructs; factor structure was examined by comparing the tests with and without accommodations. We identified the primary purpose of summarizing research in works that were expressly written as literature reviews; for example, Lindstrom (2010) inquired about the impact of different types of accommodations on the mathematics test scores of students with high-incidence disabilities. The purpose of reporting on implementation practices and accommodations use was fairly uncommon as a primary study purpose; one example was Johnstone and his colleagues (2009), who inquired about factors that may have affected use of assistive technology. The investigation of test validity was the primary purpose of only three studies (Elliott et al., 2010; Laitusis, 2010; Lovett et al., 2010). For example, Laitusis (2010) used an external validation measure--teacher ratings of comprehension abilities--with correlational and regression analyses to examine possible connections with comprehension as measured by a standardized test, both with and without a form of the oral presentation accommodation.
The least common primary purpose was to identify predictors of the need for test accommodations, which was the primary focus of one study (Cawthon, 2009) in which relationships between instructional factors and the effects of accommodations use were explored (see Appendix A-2). Table 2 provides a more detailed view of the body of literature, showing the multiple purposes of many studies. For example, some efforts included analyses of score comparisons between students with disabilities and students without disabilities when using accommodations, yet also sought students' comments through survey or interview about their test-taking experience.

Table 2. All Purposes of Reviewed Research
a The total of these percentages is >100% due to the multiple purposes identified in most (37) of the studies; 23 of the studies had 2 identified purposes, and 14 of the studies had 3 identified purposes.

The most common single purpose of the 2009-2010 published studies was to demonstrate the effect of accommodations on test scores; this was included in over half of the works (52%). Study approaches either compared test scores of students with disabilities and students without disabilities when using accommodations, or compared test scores of students with disabilities when using and not using accommodations. The former approach was the more common, comprising fully two-thirds of this category of research. An additional study (Lovett et al., 2010) considered the impact of using supports commonly implemented as accommodations--word-processing and extra time--on the quality of essay-based college-level course examinations completed only by students without disabilities. Another purpose, which we identified in over one-third of the studies, was a focus on discussing issues, usually noted when the researchers offered a detailed consideration of a central issue related to accommodations. For instance, Bayles (2009) presented discussion related to instructional and curricular access for students with disabilities, Lazarus and her colleagues (2009) discussed the trend line of accommodations policy development, Freeland and her colleagues (2010) considered training and experience with technology as a possible intervening variable, and Lovett (2010) structured his literature review around answering questions about the extended-time accommodation. The purpose of reporting on implementation practices and accommodations use was present in about one-fifth of all studies. For instance, in the course of summarizing research about accommodations in technology-supported assessments, Salend (2009) also reported about related accommodations practices.
The purpose of comparing test items co-occurred in many studies with comparing scores between accommodated and non-accommodated tests, yet added a focus on analyzing differential item functioning (DIF). For instance, Stone and her colleagues (2010) examined differential benefits of standard print, large-print, and braille formats for students with and without blindness or visual impairments. For these types of studies, we made a judgment call as to which purpose was predominant, generally finding that comparing scores came first in the study text or encompassed more of the results reporting than comparing items did. The purpose of summarizing accommodations research was identified when the researcher included a comprehensive review of literature; beyond the studies written expressly as literature reviews, examples of the level of comprehensiveness we sought occurred in dissertations where another purpose predominated but a substantive research summary was also completed.

Research Type and Data Collection Source

Just over half of the accommodations research reviewed here used a quasi-experimental research design to gather data on the research purposes. As seen in Table 3, the number of descriptive quantitative research studies decreased slightly in 2010 compared to 2009, while the number of studies using a quasi-experimental design remained about the same. Though few studies were reported to use experimental, longitudinal, or meta-analytic designs, these categories also were rarely included in past reports; the data reported here may reflect an increase in the use of these designs in accommodations research. Furthermore, there appeared to be a large difference between data collection sources, with about twice as many studies using primary versus secondary sources of data, both overall and within each year. This is a change from the previous report, in which approximately equal numbers of studies used primary and secondary data sources.
Primary data collection involved procedures that the researchers themselves undertook to obtain their data. Secondary data collection included the use of archival or extant data.

Table 3. Research Type and Data Collection Source by Year
Data Collection Methods and Instruments

The researchers gathered study data through primary or secondary procedures using various methods and tools, as seen in Figure 2. The majority of the research included in this synthesis for 2009-2010 used data acquired through academic content testing. Just over half of the studies employed surveys to gather data. Interviews, observations, and focus groups were used much less frequently. For this analysis, we considered "articles" the method or source for those studies that reviewed research, including one study that employed formal meta-analysis. One study used state policies as the data source for the descriptive analyses completed. Fewer than half of the studies reported using more than one method or tool to gather data.

Figure 2. Data Collection Methods Used in 2009-2010 Research

Note: Of the 48 studies reviewed for this report, 12 reported using two data collection methods, and 5 reported using three data collection methods.

Nearly all of the studies used data collection instruments of one form or another; only four studies did not employ any instruments. Table 4 presents the types of data collection instruments used in studies. Surveys presented items of an attitudinal or self-report nature. Tests were course- or classroom-based. Assessments were statewide or large-scale in scope. Protocols referred to non-academic sets of questions, usually presented in an interview or focus group format. Measures referred to norm-referenced academic or cognitive instruments. All of these instruments were placed into five categories: protocols or surveys developed by study authors, norm-referenced cognitive ability measures, norm-referenced academic achievement measures, state criterion-referenced academic assessments, and surveys or academic tests developed by education professionals or drawn by researchers from other sources.
Non-test protocols developed by the author or authors of the studies--the most commonly used instrument type--included performance tasks, questionnaires or surveys, and interview or focus-group protocols, among others. Surveys or academic tests developed by education professionals or researchers came from sources outside the current studies, and were exemplified by attitudinal surveys such as the Attitudes Toward Requesting Accommodations (ATRA) scale, by subsets of items drawn from released or otherwise-available pools such as the National Assessment of Educational Progress, and by course-content exams. State criterion-referenced assessments included those of Georgia, South Carolina, Texas, and Wisconsin, as well as some from states that remained unidentified in the research. Norm-referenced academic achievement measures included the Gates-MacGinitie Reading Test (GMRT). Norm-referenced cognitive ability measures included the Test of Silent Word Reading Fluency (TOSWRF), among others. A substantial minority--10 studies in all--used instrumentation of more than one kind. Additionally, a few studies each used multiple instruments, often of the same kind (Laitusis, 2010; Logan, 2009; Lovett et al., 2010; Parks, 2009). A small number (n=5) of the instruments were used in more than one study: the Attitudes Toward Requesting Accommodations (ATRA) survey, the Principles and NCTM Standards for School Mathematics test, the Gates-MacGinitie Reading Test (GMRT), the Woodcock-Johnson III Tests of Academic Achievement, and the South Carolina Palmetto Achievement Challenge Test (SC PACT). We present a complete listing of the instruments used in each of the studies in Appendix C, including the related studies that served as sources for these instruments, when available.

Table 4. Data Collection Instrument Types
Content Area Assessed

A number of studies published during 2009-2010 focused on accommodations used in certain academic content areas. As shown in Table 5, math and reading were the two most commonly assessed content areas. Table 5 also provides a comparison to content areas in NCEO's previous reports on accommodations (Cormier et al., 2010; Zenisky & Sireci, 2007). In general, the emphasis on reading and math is consistent across reviews. The number of studies on writing, social studies, and psychology has remained fairly consistent since 2005. An increase in science studies is apparent across years. There were no studies citing Civics/US History as a content area in the 2007-2008 and 2009-2010 reports. All studies published in 2009-2010 specified a content area; this is a change from past reports, in which at least one study did not cite the content area studied.

Table 5. Academic Content Area Assessed Across Three Reports
a Studies in 2005-2006 including examinations of more than one content area ranged in number of areas assessed from 2 to 6.
b Studies in 2007-2008 including examinations of more than one content area ranged in number of areas assessed from 2 to 4.
c Studies in 2009-2010 including examinations of more than one content area ranged in number of areas assessed from 2 to 5.
d Detailed descriptions of what constituted 'Other Language Arts' for each of the four studies from 2009-2010 can be found in Appendix C, Table C-2.

Research Participants

Researchers drew participants from differing roles in education (see Figure 3 and Appendix D, Table D-1). A large majority of studies--32 of the 48 from 2009-2010--studied only students. The next largest participant group studied was 'educators only,' describing or analyzing the educator perspective on accommodations. Additional data are reported about combinations of participant groups; some studies, usually the topical literature review documents, did not specify participants.

Figure 3. Types of Research Participants

Table 6 shows details about the size and composition of the participant groups in the research studies published during 2009 and 2010; this information is displayed in more detail by study in Appendix D. The size of the samples varied from 12 (Mastergeorge & Martinez, 2010) to 61,270 (Anjorin, 2009). In 2009-2010, there were more studies in which at least 50% of the participants were people with disabilities (n=17) than studies in which at least 50% of the participants were people without disabilities (n=15). Eleven studies examined participant groups composed primarily of people with disabilities, reported in the 75-100% column; in fact, 10 of these 11 studies focused only on students with disabilities. Most studies involving participants with disabilities had sample sizes between 25 and 299, and only 2 studies had 1,000 or more participants.
Conversely, studies in which 24% or fewer of the participants had disabilities tended to have 1,000 participants or more. Also, the studies with mostly participants without disabilities included studies focused on educator input and perspectives.

Table 6. Participant Sample Sizes and Ratio of Individuals with Disabilities
1 11 of the studies did not specify the proportion of participants who had disabilities.
2 5 of the studies did not specify the number of participants.

Analyzing the proportions more closely, a finer distinction is indicated in the center columns, in which the studies examined samples that had somewhat more participants without disabilities (25-49%) or somewhat more participants with disabilities (50-74%). These two columns have almost equivalent overall numbers, with a total of 6-7 studies in each. These studies, with relatively similar ratios of people with and without disabilities, tended to examine data from at least 500 participants (n=7), compared with 2 studies with 100-499 participants and 4 studies with 10-49 participants. Finally, about one-quarter of the studies reporting participant numbers did not specify the proportions of participants with or without disabilities; 7 of these 11 studies collected data only from educator participants.

School Level

Research on accommodations published during 2009 and 2010 involved kindergarten through college-aged participants (see Table 7). Previous reports included research with participants in kindergarten through postsecondary (see Appendix D for more detail); the category postsecondary/college represents a change from past reports. As seen in Table 7, a plurality of the studies published in 2009 and 2010 focused on middle school students (n=18). Thirteen studies involved elementary school students, and ten involved high school students. About one quarter of the studies (n=12) involved samples from across more than one grade-level cluster; most of these studies included relatively larger groups of 50 or more participants (about 67%), and secondary data sources (see Appendices B and D). Put another way, these multiple grade-level studies were primarily analyses of extant large-scale assessment data sets, often drawn at the state level.
Although postsecondary studies were not more common than K-12 studies, a noteworthy number examined accommodations use and implementation at the postsecondary/college level. Twelve studies did not involve students as participants.

Table 7. Grade Level of Research Participants
Disability Categories

A broad range of disability categories was included in samples in the 2009-2010 research (see Appendix D for details). As shown in Table 8, seven studies did not specify disability categories of participants, and eight studies did not include students in the sample. Of the remaining 33 studies, the most commonly studied disability category was learning disabilities (n=26); nine of these studies had only participants with learning disabilities. In comparison to the previous reporting period, 2007-2008, the proportion of studies with participants with learning disabilities increased from about three-eighths to over half of the studies. Approximately one third of these remaining 33 studies included participants with an attention problem, an emotional behavioral disability, blindness/visual impairment, or deafness/hearing impairment. The least common disability category was autism, and all of the studies specifying that category also included participants with other categories as well. Sixteen studies included participant groups with various disabilities, rather than all participants having one specific category of disability. Only eight studies reported participants with "multiple disabilities"; that is, they included participants who each had more than one identified disability.

Table 8. Disabilities Reported for Research Participants
a Physical disability = mobility and/or impairment with arm use.
b Intellectual disabilities = students who were referred to as having "mental retardation" in the previous report; this number also includes one European study (Peltenburg et al., 2009) that applied the term "learning disability" to its participants, who were reportedly ages 8-12 but were identified as appropriate for assessment items at the educational level of grade 2.
c Multiple disabilities = individual students who were specifically categorized as having more than one disability.
d Not specified = those studies or reviews (3) that did not report about or provide detail as to the participants' disabilities.
e Not applicable = those documents that had only non-students as participants; this includes an NCEO policy review.

Types of Accommodations

The number of times specific categories of accommodations were included in 2009-2010 published research is summarized in Table 9. Presentation accommodations were the most frequently studied category (n=28), and within this category the most common accommodations were read-aloud (n=20) and computer administration (n=9). The next most frequently studied category was response, and in that category, computer administration (n=9) was the most common accommodation. It should be noted that the computer administration accommodation fits into three categories: presentation, equipment/materials, and response. Several studies (n=15) analyzed accommodations from more than one category. Three studies--Bayles (2009), Bublitz (2009), and Mastergeorge and Martinez (2010)--examined accommodations as naturalistically identified in students' IEPs, without the researchers specifying which accommodations were involved. One study--Altman et al. (2010)--also examined accommodations naturalistically identified in students' IEPs, but these were too numerous to mention, and their specific effects on score data were not the central focus of the study. A complete listing of accommodations studied is provided in Appendix E.

Table 9. Accommodations in Reviewed Research

Research Findings

The findings of the body of research literature on accommodations published from 2009-2010 are summarized in Tables 10-19. We present information according to the nature of the studies, in keeping with their varying purposes and focuses. The findings included reviews of perceptions about accommodations, including those of student test-takers as well as educators and other stakeholders, primarily parents. We summarize the findings of the research on specific accommodations, including read-aloud, computerized administration, extended-time, calculator, and aggregated sets of accommodations commonly called "bundles." We also summarize the findings on unique accommodations--those examined in only one study each--including scribing, word-processing, a virtual manipulative tool, a resource guide modification, American Sign Language (ASL) via avatar, and braille and large-print. Separate summaries of findings address varying implementation conditions as well as incidence of use of various accommodations across large data sets. The findings from studies in postsecondary educational contexts, which numbered about 6-7 in each of 2005-2006, 2007-2008, and 2009-2010, receive separate attention. We also report separately on accommodations decision making, as addressed in five studies. This report also presents findings by academic content area: math, reading, writing, other language arts, science, and social studies. In Appendix F, we provide substantial detail at the individual study level.

Impact of Accommodations

Research examining the effect of accommodations on assessment performance for students with disabilities comprised 34 studies published in 2009 and 2010 (see Table 10; see also Appendix F, Tables F-1 to F-6 for details about each study of this type). In a continuing trend, oral administration, or the "read-aloud" accommodation, was the single most investigated accommodation in 2009-2010, with nearly one-third of the accommodation-specific studies (n=11).
Several of the studies found that the academic construct was not altered by the inclusion of read-aloud to support test-takers. Three studies indicated that read-aloud provided a differential boost for students with disabilities in comparison with students without disabilities, two studies showed that read-aloud helped to improve performance for all students, and one study showed that it helped improve scores for students with disabilities in comparison to their scores without read-aloud. One of the studies finding differential boost reported the effects of read-aloud alone, although it actually implemented read-aloud within an accommodations bundle that also included 150% extended-time and recording answers in the test booklet. (See Appendix F, Table F-1.) Computerized administration was another frequently examined accommodation in the 2009-2010 published literature, with seven studies. The findings were somewhat mixed, with some studies affirming this test mode as supporting construct validity and the needs of students with disabilities, and others finding the opposite. Three studies indicated that this accommodation helped to improve performance of students with disabilities, yet one study found that there was no difference in test results for students with learning disabilities. One study indicated that test mode had no effect on the test construct, and one study--a meta-analysis of 81 studies--indicated that the computerized presentation of tests was comparable to paper-based assessments in science, but not in reading, other language arts, social studies, or mathematics. Finally, one study (Russell et al., 2009b) examined the relative impact of two different ways of providing ASL--through a recording of a human signing and an avatar signing--and found that neither had more impact than the other on test scores of students with hearing impairments or deafness. (See Appendix F, Table F-2.)
The extended-time accommodation was examined primarily as to its impact on assessment scores of students with disabilities. According to three studies, students with disabilities did not score differently when given extended-time to complete testing than when given no additional time. In contrast, two studies indicated that, in comparison with students without disabilities, students with disabilities benefited differentially from extended-time--that is, extended-time provided a "differential boost" for students with disabilities. (See Appendix F, Table F-3.) Effects of the calculator accommodation were explored in three studies. One study found that students without disabilities received a differential boost when using a graphing calculator, in comparison with students with disabilities. Another study, comparing performance of students with learning disabilities and students with attention deficit/hyperactivity disorder who received or did not receive the calculator accommodation, found no improvement in performance and no overall decrease in math anxiety for students in either group--in fact, some individual students with disabilities experienced higher math anxiety when using the calculator accommodation. The third study yielded no difference in scores when using a graphing calculator compared with a four-function calculator, for either students with disabilities or students without disabilities. (See Appendix F, Table F-4.) Two studies scrutinized the effects of different aggregated sets of accommodations--also called accommodations bundles. One study combined extended-time and read-aloud approaches unique to that study, comparing effects of accommodated and standard administrations for students with disabilities and students without disabilities, and reported that the accommodations package helped to improve scores of all students--both those with and those without disabilities--though not consistently across both accommodations.
Another study compared performance when students were provided individualized accommodations--IEP-specified accommodations for students with disabilities and teacher-recommended accommodations for students without disabilities--to performance when both groups were provided a package of accommodations: read-aloud directions, paraphrased directions, verbal encouragement, and extended-time. Most students with disabilities (78%) benefited from accommodations in comparison to the no-accommodations condition, and about half of the scores of students without disabilities (55%) improved, although about the same proportion of scores of students without disabilities improved with the teacher-specified accommodations as with the standard package of accommodations. (See Appendix F, Table F-5.) We categorized six studies as having inspected the effect of unique accommodations--that is, accommodations included in only one study. Most of these studies considered accommodations that were novel or otherwise not typical in their design or implementation. For instance, a virtual manipulative tool offered through a computer-based test platform assisted with basic operations for most students with learning disabilities (Peltenburg et al., 2009), as defined in the Netherlands context, in which students ages 8-12 were performing at the level of "end Grade 2" (p. 276). (See Table 9 for detail about findings of each study; also, see Appendix F, Table F-6.) Table 9. Summary of Research Findings by Specific Accommodation
Perceptions about Accommodations

Table 10 shows the results of research on perceptions about accommodations. More than one-half of the studies (n=9) reported on student perceptions, with most of those studies (n=5) relating to preferring one accommodation over another--for instance, some students preferred the human ASL interpreter over an avatar (Russell et al., 2009b). Further, student preferences tended to support computerized test administration over a paper-and-pencil format, according to three studies (Arce-Ferrer & Guzman, 2009; Kingston, 2009; Russell et al., 2009a), although one study (Lee et al., 2010) found the opposite. Students also indicated a complicated view of the modifications in one study (Roach et al., 2010) and of the accommodations in another study (Logan, 2009). The perceptions of educators about accommodations were mixed in three studies, and primarily positive in one study. The mixed nature of educator perceptions was related in one study to a concern about altering the exam itself (Byrd, 2010); in another, many educators affirmed the inherent value of supporting students through test accommodations, albeit with reservations (Zhang et al., 2010). One study noted that educators tended to support IEP-specified accommodations, but not accommodations that were not planned in advance (Elliott et al., 2009). One study indicated that educators had a primarily positive view of accommodations, due in part to the fairness they brought to test results (Mastergeorge & Martinez, 2010). One study presented the varying understandings of and frames of reference on accommodations among research participants, including students, teachers, and parents (Jordan, 2009). (See Appendix F, Table F-7 for more detailed explanation of findings of each study.) Table 10. Summary of Research Findings on Perceptions about Accommodations
Implementation and Use of Accommodations

Table 11 shows several studies (n=10) that reported on incidence of accommodations use and implementation-related matters. Most of these findings (n=7) concerned common accommodations in use in various settings and with specific disability categories; for instance, five studies indicated that the state assessments examined most commonly offered small group administration as an accommodation. Findings also focused on the manner in which some accommodations are implemented, with one study reporting on the computer as a medium for different accommodations practices (Salend, 2009), and another on implementation of read-aloud (Lazarus et al., 2009). Finally, one study described the variety of factors associated with the implementation of accommodations, including educator training and knowledge (Bayles, 2009). (See Appendix F, Table F-8 for more detailed explanation of findings of each study.) Table 11. Summary of Research Findings on the Implementation of Accommodations
Accommodations in Postsecondary Education

Table 12 presents a set of research findings for nine studies that focused specifically on accommodations in educational settings beyond K-12. This report marks the first time we have separated these findings from the findings for other groups. Studies sought to investigate effects of accommodations on test performance, test-takers' experiences using accommodations, and stakeholder groups' perceptions of accommodations, along with implementation and decision-making issues. The studies (n=4) on postsecondary students' perceptions of accommodations provided insights into factors related to students' decisions to seek accommodations support in coursework and course examinations--including university size and type (relative enrollment numbers and public or private institution), the learning environment (in-person or online), and the nature of disabilities (visible or invisible to peers or others). Another group of findings pertained to accommodation effects: computerized administration compared favorably to paper-and-pencil format, with extended-time adding complexity to the effects (Lee et al., 2010); word-processed essays composed with extended-time were scored highly, though mitigating elements limited this pattern (Lovett et al., 2010); and students with disabilities completing selected-response course exams performed equivalently to their peers without disabilities (Ricketts et al., 2010). (See Appendix F, Table F-9 for more detailed explanation of findings of each study.) Table 12. Summary of Research Findings on Accommodations at the Postsecondary Level
Accommodations Decision-making Process

Another small set of studies (n=5) provided insight into the nature of, and factors related to, the process of selecting accommodations (i.e., accommodations decision making). Research findings are presented in Table 13. Two of these studies (Bublitz, 2009; Mariano et al., 2009) focused only on accommodations decision making, whereas the other three studies (Altman et al., 2010; Cawthon, 2010; Lovett, 2010) also reported findings other than those pertaining to decision making. Three studies specifically examined factors that influence decision making. For instance, one study (Mariano et al., 2009) compared educator training on two different decision-making models, with educators trained on one model recommending significantly more presentation accommodations than educators trained on the other. Two studies relayed educators' conscious considerations in selecting accommodations. For instance, one study (Cawthon, 2010) identified the pieces of evidence that educators of students who are deaf or hard-of-hearing used in decision making. (See Appendix F, Table F-10 for more detailed explanation of findings of each study.) Table 13. Summary of Research Findings on Accommodations Decision-making Processes
Accommodations by Academic Content Assessments

For the first time in this report, we analyzed findings according to academic content area. This focus reflected a recognition that many accommodations are associated with specific academic content. Examples include calculators for math and science assessments, and word-processing for writing assessments or for constructed responses on reading, other English language arts, and science assessments. Some accommodations, such as oral administration, may be presented differently depending on the academic construct being assessed. We present findings for each content area here according to the frequency with which the content areas were identified in the set of 48 research studies reviewed: 27 findings from 20 studies in mathematics, 20 findings from 16 studies in reading, 7 findings from 7 studies in science, 4 findings from 4 studies in other language arts, 3 findings from 3 studies in writing, and 2 findings from 2 studies in social studies (see Figure 4). The analyses of findings for each content area are the same as those we employed earlier in this report, covering the impact of accommodations on assessment performance, perceptions about accommodations, construct validity of accommodated assessments, and matters of implementation and incidence of use of accommodations. Figure 4. Research Findings by Content Area Note: The number of findings does not equate with the number of studies, because many studies reported more than one finding. Table 14 displays the 27 research findings for accommodations in 20 studies of mathematics assessments, sorted by frequency according to the nature of the findings. The most common individual finding was that accommodations did not change the mathematics construct or constructs assessed, noted in five studies.
These studies focused on more than 10 accommodations, including calculator, read-aloud directions, read-aloud questions, alternate test setting, extended-time, computerized administration, small group administration, and checking comprehension of directions. Eleven of the math findings were unique; we present them individually in Table 14 and Table F-11, signifying that only one study produced each of those findings. Twelve studies of mathematics provided insights on the performance of students using accommodations--including one study examining the impact of modifications (Elliott et al., 2010). Half of the performance findings (n=6) resulted from a comparison of scores between students with disabilities and students without disabilities, and these findings diverged widely from one another. Four findings pertained to the differential score increases that accommodations brought to some students in comparison to others; these findings were for teacher-recommended accommodations and a standard accommodations package (Elliott et al., 2009), as well as computerized administration (Russell et al., 2009a), a virtual manipulative tool (Peltenburg et al., 2009), and various modifications (Roach et al., 2010). However, three findings indicated that both students with disabilities and students without disabilities improved when provided supports--such as four-function and graphing calculators (Bouck, 2010) and some specific modifications (Elliott et al., 2010)--and one study using the graphing calculator found that not all students with disabilities improved their scores (Bouck, 2009). Further, two studies found that accommodations did not assist students with disabilities in improving more than students without disabilities, and one study (Lindstrom, 2010) found that students without disabilities improved more when using read-aloud accommodations than did students with disabilities.
Of the remaining two studies, one--a literature review (Lindstrom, 2010)--reported complex findings, and the other (Parks, 2009) found that calculator use did not improve test results. In the other six studies, which compared effects of accommodations use to non-use for students with disabilities, there was more concurrence in the findings. Three studies found that scores were higher for students using supports than for those who did not use them. These studies involved the following supports: a virtual manipulative tool (Peltenburg et al., 2009), a set of accommodations offered through an online platform (Russell et al., 2009a), and a set of modifications (Roach et al., 2010). Two studies compared the relative benefit of two accommodations of the same type, and found that neither supported students with disabilities more than the other. The accommodations were four-function and graphing calculators (Bouck, 2010), and American Sign Language (ASL) through a recording of a human or an avatar signer (Russell et al., 2009b). Finally, only one study found that students with disabilities scored essentially the same whether using accommodations or not (Freeland et al., 2010). Approximately one-fourth of the findings (n=7) pertained to perceptions about accommodations offered in math testing, and most of the perspectives reported (n=6) were those of the test-takers themselves. In two studies, students with disabilities offered information about accommodations preferences, including that accommodations offered through a computer-based test administration platform were preferable to previous testing experiences in which accommodations were offered in non-digital formats (Russell et al., 2009a; Russell et al., 2009b), and that they preferred a specific version of American Sign Language (ASL)--humans signing rather than avatars signing (Russell et al., 2009b).
When comparing perceptions about accommodations between students with disabilities and others, two studies found apparent contradictions: students with ADHD and LD experienced higher anxiety during the test than students without disabilities, and sustained that anxiety whether or not they received a calculator accommodation (Parks, 2009), yet students with a typical variety of disabilities had preferences for using calculators similar to those of students without disabilities (Bouck, 2010). Additionally, one study (Jordan, 2009) reported on the differing views of students with disabilities and their educators and parents. Two other studies reported unique findings about the effect of accommodations on students' perspectives (Roach et al., 2010) and on educators' perspectives (Mastergeorge & Martinez, 2010). Finally, two studies reported patterns of use of specific accommodations--one comparing students with disabilities and students without disabilities (Bouck, 2010), and the other comparing students with disabilities and their educators (Schoch, 2010). A single study (Cawthon, 2010) reported findings related to educators' accommodations decision-making processes and accommodations practices. (See Appendix F, Table F-11 for more detailed explanation of findings of each study.) Table 14. Summary of Research Findings on Accommodations in Mathematics Assessments (from 20 studies)
Note: Some of these 20 studies reported support for more than one category of findings. Table 15 details the findings for accommodations in reading assessments. As with the math findings, the finding with the greatest agreement among researchers, supported by six studies, was that the accommodations on the reading assessments did not change the academic construct or constructs being tested. The accommodations examined included read-aloud (Cook et al., 2009; Cook et al., 2010; Snyder, 2010), computerized administration (Kingston, 2009), various state-allowed accommodations (Roxbury, 2010), and braille (Stone et al., 2010). Seven studies reported findings about the performance of students using test supports--including three studies examining the impact of modifications. Most of the performance findings (n=5) resulted from a comparison of scores between students with disabilities and students without disabilities, and these findings mostly converged on the point that accommodations like read-aloud (Cook et al., 2009) or modifications (Elliott et al., 2010; Randall & Engelhard, 2010; Roach et al., 2010) supported students both with and without disabilities. Across all of the data of these four studies, there was only one instance of differential benefit in comparing students with disabilities and students without disabilities: grade 3 students with disabilities using the read-aloud modification on the reading assessment improved more than their peers without disabilities, although this differential benefit was not present for the grade 7 students in the same study (Randall & Engelhard, 2010). In contrast, two studies found that accommodations benefited students with disabilities more than students without disabilities. These studies examined a bundled set of accommodations (Fletcher et al., 2009) and read-aloud for students with learning disabilities (Laitusis, 2010).
The remaining performance study, comparing accommodations use and non-use for students with disabilities, found that students with visual impairments scored essentially the same whether using various unspecified access technologies or not (Freeland et al., 2010). A small number of the findings (n=3) pertained to perceptions about accommodations offered on reading assessments. These studies generally indicated that perceptions were mixed; that is, use of accommodations did not produce only positive attitudes and feelings. Each of the three studies demonstrated more complex results. One study (Jordan, 2009) reported on the views of students with disabilities and their educators and parents. In another study (Logan, 2009), the researchers found unexpected results: students with a set of motivations or attitudes termed "achievement goals," according to the questionnaire, did not have positive experiences using accommodations on the reading assessment. The last study of this type (Roach et al., 2010) yielded uniquely specific findings about students' preferences for or against the available accommodations. Finally, one study (Roxbury, 2010) reported patterns of use of accommodations, comparing students with disabilities and students without disabilities. Another study (Cawthon, 2010) reported findings related to educators' accommodations decision-making processes and accommodations practices. (See Appendix F, Table F-12 for more detailed explanation of findings of each study.) Table 15. Summary of Research Findings on Accommodations in Reading Assessments (from 16 studies)
Note: Some of these 16 studies reported support for more than one category of findings. 1 This finding indicates that read-aloud served not as a modification on the reading test, but rather as an accommodation. Table 16 presents the findings for the science assessment accommodations. The most common individual finding was that accommodations did not change the science construct or constructs assessed, a finding supported by four studies. The accommodations examined included read-aloud (Kim et al., 2009a; Kim et al., 2009b), computerized administration (Kingston, 2009), and various state-allowed accommodations (Roxbury, 2010). The remaining three studies each reported a unique finding not corroborated by the others. Two of these findings pertained to the effects of accommodations on performance. In a comparison of the science scores of students with disabilities and students without disabilities, the standard accommodations package assisted both groups in improving their scores, yet students with disabilities benefited differentially more than their peers without disabilities (Elliott et al., 2009). In a comparison of the assessment results of students with disabilities using accommodations with those not using accommodations, both groups had similar results, indicating no benefit of access technologies on a computer-based test (Freeland et al., 2010). The last study (Cawthon, 2010) provided insights into the accommodations decision-making process for special educators. (See Appendix F, Table F-13 for more detailed explanation of findings of each study.) Table 16. Summary of Research Findings on Accommodations in Science Assessments (from 7 studies)
Table 17 shows findings of four studies on accommodations offered in assessments of "other language arts," an academic construct that explicitly excludes reading and writing. This narrow body of literature yielded five separate findings, most of which illuminated the area of construct validity, but with divergent results. Two studies (Finch et al., 2009; Kim & Huynh, 2010) indicated that the accommodated tests did not change the academic constructs tested by the non-accommodated assessment, while one study (Kingston, 2009) indicated that accommodations changed the construct. One study (Kim & Huynh, 2010) addressed the impact of accommodations on performance, comparing scores of students with disabilities and students without disabilities under both testing conditions, with and without accommodations. This study found that students with disabilities did not benefit from using accommodations, while students without disabilities benefited to a minimal yet significant degree. Finally, one study (Mastergeorge & Martinez, 2010) demonstrated that educators had a primarily positive view of accommodations, due in part to the fairness they brought to test results. (See Appendix F, Table F-14 for more detailed explanation of findings of each study.) Table 17. Summary of Research Findings on Accommodations in Other Language Arts Assessments (from 4 studies)
Note: Some of these 4 studies reported support for more than one category of findings. Table 18 shows findings of three studies on accommodations in writing assessments. The majority of the findings (n=2) pertain to construct validity, converging to indicate that accommodations did not change the writing constructs assessed (Cook et al., 2010; Stone et al., 2010). One study (Lovett et al., 2010) compared scores of students with disabilities who tested with accommodations to scores of those who did not. This study indicated that students with disabilities showed no improvement when using accommodations. (See Appendix F, Table F-15 for more detailed explanation of findings of each study.) Table 18. Summary of Research Findings on Accommodations in Writing Assessments (from 3 studies)
The fewest findings were reported for accommodations used in social studies assessments (see Table 19). Neither of the two studies (Freeland et al., 2010; Kingston, 2009) reporting these findings focused solely on data from social studies tests; rather, both included this content area along with assessment scores from tests in math, reading, and science, among others. Accommodations did not help students with disabilities improve their scores relative to taking the test without accommodations--in fact, students with visual impairments and students with total blindness scored higher without access technologies than with them (Freeland et al., 2010). Pertaining to construct validity, the other study--a meta-analysis including two studies with social studies scores--found that test-takers scored higher on computer-administered tests than on tests presented in a standard administration, meaning that these assessments were testing qualitatively different academic constructs, although with a low effect size (Kingston, 2009). (See Appendix F, Table F-16 for more detailed explanation of findings of each study.) Table 19. Summary of Research Findings on Accommodations in Social Studies Assessments (from 2 studies)
Across the academic content areas, accommodations research from 2009 through 2010 supported a few consistent findings. Regarding construct validity, the literature indicated that accommodated tests were not different from non-accommodated tests with respect to the nature of the content being tested, as supported by 20 out of 21 findings. Regarding the impact of accommodations on assessment outcomes, the areas of convergence in the findings did not seem to cross academic content areas, at least beyond mathematics and reading. An exception to this pattern was that students with disabilities did not perform significantly better when provided accommodations than when not provided them on assessments of other language arts, writing, and social studies; however, these findings were reported by relatively few studies (n=3), so they are not necessarily strong conclusions. Otherwise, most of the findings were specific to individual content areas and were based on limited numbers of studies. Findings about perceptions of accommodations varied, and only three academic content areas were studied--mathematics, reading, and other language arts. Research areas that had limited findings included accommodations decision making by educators--represented by only one study (Cawthon, 2010)--and incidence of accommodations use, which addressed only mathematics (n=2) and reading (n=1).

Limitations and Future Research

As is often the case in research, many of the studies reviewed (n=38) discussed limitations in order to provide context for the results that were observed. As seen in Table 20, limitations were summarized under five broad categories. A study was counted for a given category when it provided at least one limitation under that category. A more comprehensive description of limitations for each individual study is available in Appendix G. The most commonly cited category of limitations in the research was methodology, where the use of bundled (vs. individually administered) accommodations and non-random sampling of participants were frequently referenced. Many authors also identified sample characteristics as a limitation to the research. Specifically, common limitations were sample size and the representativeness of the samples on variables such as age, grade level, and race. More detailed information regarding specific limitations of each study is also available in Appendix G-1. Table 20. Categorized Limitations Identified by Authors
a Twenty-six studies included more than one category of limitations, spanning 2 to 4 limitation categories. As would be expected, methodology and sample characteristics were also often highlighted as areas to be addressed in future research, as seen in Table 21. However, researchers identified the test or test context used in the study as carrying implications for future research more often than they identified it as a limitation (Table 21). More detailed information about suggestions for future research is available in Appendix G-2. Table 21. Categorized Areas of Future Research Identified by Authors
a Twenty studies listed directions for future research that fit into multiple categories.

Discussion

Several themes are evident in the research studies published in 2009 and 2010, especially in relation to the research studies from 2007 and 2008, which were reported in the previous NCEO accommodations research review (Cormier et al., 2010). We address themes here in terms of purposes, research designs, assessment types, study participant characteristics, accommodations, academic content areas and their associated research findings, and study limitations and future research directions. We conclude with several comments on promising trends overall.

Research Purposes

The nature of the research literature on accommodations has continued to change. Many of the studies in 2009-2010 combined examination of the effect of accommodations on performance with examination of their effect on assessment constructs. Many also combined quantitative and qualitative research on the impact of accommodations on students with disabilities, in that they examined accommodations' effects on test scores as well as on the perceptions of test-takers. There were several differences between the purposes identified in the 2007-2008 studies and those in the 2009-2010 studies. First, a much lower proportion of studies focused on comparing scores in the current set: 63% in 2007-2008 versus 31% in 2009-2010. About one-fourth (23%) of the current set of studies focused on examining perceptions and preferences about accommodations use, a much larger proportion than the 13% of the 2007-2008 studies. The proportion of studies that described implementation practices and accommodations use was 20% of 2007-2008 studies, but a much lower 6% of 2009-2010 studies. Test validity was the purpose for a similarly low proportion in both reports: 6% for 2009-2010 and 3% for 2007-2008.
Research Types and Data Collection Sources

The research studies in 2009-2010 were experimental (6%) or quasi-experimental (52%) in design, together a larger proportion than in 2007-2008. On the other hand, a much smaller proportion of the 2009-2010 studies used a descriptive quantitative design (21%), compared with 55% in 2007-2008. Further, in 2007-2008 just over one-half of the studies drew their data from primary sources--that is, collected by researchers rather than drawn from extant data--whereas in 2009-2010 over two-thirds of the studies did so.

Data Collection Methods

Data collection methods generally were quite different between studies published in 2007-2008 and 2009-2010. With multiple purposes, there often was more than one data collection method and more than one instrument used. Over one-third of the 2009-2010 studies used more than one data collection method. Although content testing was the most common data collection method in both 2007-2008 and 2009-2010, there was a large difference in the use of surveys, from about one-fifth of 2007-2008 studies to over half of 2009-2010 studies. This shift seemed to be related to researchers' efforts to uncover students' and educators' experiences during the implementation of accommodations. Other methods often used in 2009-2010 included interviews and observations.

Participants

Grade Level

Research on accommodations has varied in its focus on different grade level clusters--elementary, middle school, and high school--and the 2009-2010 studies differ from the 2007-2008 studies on this variable as well. First, a larger proportion of research published in 2009-2010 analyzed accommodations across more than one grade level cluster than in 2007-2008. 
Second, although the proportions of high school and postsecondary participants were each about the same across the two periods, a larger proportion of studies published in 2009-2010 included elementary and middle school participants than was the case in research published in 2007-2008.

Disability Categories

The disability categories of study participants were also somewhat different in 2009-2010 compared to 2007-2008. The overall proportion of participants in many disability categories increased. Studies with participants with learning disabilities increased in proportion from 38% in 2007-2008 to 54% in 2009-2010. Similar increases occurred for participants with attention problems: from 8% in 2007-2008 to 23% in 2009-2010. Additional increases, reported here in descending order of difference, were: for blindness/visual impairment, from 6% in 2007-2008 to 21% in 2009-2010; for deafness/hearing impairment, from 5% to 19%; for emotional behavioral disability, from 10% to 23%; and for intellectual disabilities, from 5% to 17%. Some of these increases might be due to more researchers providing data about their participants' disability categories; a smaller proportion of studies in the current review failed to specify participant disability categories. Another possible source of the increase is that a larger proportion of individual participants were identified as having more than one disability: 12% in 2007-2008 and 17% in 2009-2010. The increases in the proportion of participants' disabilities did not seem to be due to changes in the proportion of studies using large secondary data sets, because fewer studies (31%) in 2009-2010 used this type of data, in comparison with 45% of the 2007-2008 studies. 
More studies both collected disability data and used comparative procedures to measure the impact of accommodations use by participants with various disabilities, yielding more findings about the effects of specific accommodations for students with specific disabilities.

Accommodations

The 2009-2010 studies included 10 specific accommodations, in four of the five accommodation categories. In the presentation category, read-aloud, braille, and large-print were represented. In the equipment/materials category, computerized administration, calculator, and sign-language recording were included. In the response category, partial-scribe, word-processing, and virtual manipulative were represented. In the timing/scheduling category, extended-time was the focus of research. In comparison, the 2007-2008 studies examined four specific accommodations: read-aloud and segmenting text (presentation), computerized administration (equipment/materials), and extended-time (timing/scheduling). A higher proportion of the studies examining specific accommodations used primary data sources in 2009-2010 (33 of 48 studies--69%) than in 2007-2008 (22 of 40 studies--55%). There seemed to be some shifts in attention to specific accommodations. The read-aloud accommodation and computerized administration both maintained the same proportions of studies from 2007-2008 to 2009-2010, at 23% and 15% respectively. Examination of extended-time decreased from 25% of the 40 studies in 2007-2008 to 10% of the 48 studies in 2009-2010. Aggregated or bundled accommodations were studied less frequently, decreasing from 5 studies (13%) in 2007-2008 to 2 studies (4%) in 2009-2010.

Content Areas and Associated Research Findings

Accommodations for mathematics and reading continued to be the most commonly examined in 2009-2010 studies, yet attention to accommodations for science assessments seemed to be increasing. 
The researchers of the 2009-2010 studies showed more interest in investigating accommodations used during science tests, and somewhat more interest in mathematics accommodations, than those involved with the 2007-2008 studies. This difference might be related to increased attention to the performance of students with disabilities on statewide science assessments during the 2007-2008 school year--the federally required deadline for implementing science tests--and to reporting data on this performance (Thurlow, Rogers, & Christensen, 2010). On the other hand, slightly fewer of the studies reported on accommodations for reading assessments. Another difference in the studies published in 2009-2010 was that a larger number, and a slightly higher proportion, of them examined accommodations used in more than one content area. When examining the findings by specific accommodation, some interesting comparisons and contrasts can be observed between the 2009-2010 studies and the 2007-2008 studies. In 2009-2010, three findings indicated that read-aloud provided a differential benefit for students with disabilities, and two findings indicated that all students benefited when taking tests using read-aloud. An identical number of study findings in 2007-2008 reported these results. A chief difference between these sets of findings was that only one study (Bishop-Temple, 2007) involved reading assessments in 2007-2008, whereas most of the 2009-2010 studies (all except Lindstrom, 2010) involved reading. The impact of computerized administration received new attention in studies published in 2009-2010, with 3 of the 4 relevant studies finding that this accommodation benefited students with disabilities. In 2007-2008, no studies examined the comparative impact of computerized administration. 
The 2007-2008 studies focused primarily on analyses of potential effects of computerized administration on construct validity, with 5 of 6 relevant studies indicating no problematic effects. The 2009-2010 published studies included a meta-analysis that found computerized delivery changed the academic constructs involved (Kingston, 2009); another study found that this accommodation changed the constructs of an intelligence test (Arce-Ferrer & Guzman, 2009). The impact of the calculator accommodation was studied more often in 2009-2010 (three studies) than in 2007-2008 (one study--Sharoni & Vogel, 2007). The findings tended to be more negative in 2009-2010: none of the three studies indicated differential benefits for students with disabilities, and only one indicated any benefit for them (Bouck, 2010). The 2009-2010 studies of extended-time yielded mixed findings about its benefits for students with disabilities. Three studies measuring the impact of extended-time--on introductory psychology course exams (Lee et al., 2010), in a literature review spanning many academic content areas (Lovett, 2010), and on a writing assessment (Lovett et al., 2010)--found that students with disabilities did not benefit from extended time. Two studies indicated that students with disabilities did differentially benefit from extended-time: on a math assessment (Lindstrom, 2010), and on undergraduate medical course examinations (Ricketts et al., 2010). These studies were generally in contrast to those published in 2007-2008, where effects were generally minimal but more often found on K-12 assessments. In 2009-2010, there seemed to be increased research attention to the potential for problematic effects of accommodated tests on construct validity. In 2007-2008, there were 8 study findings relevant to construct validity; in 2009-2010, there were 21. 
In 2007-2008, at least four findings (about half) indicated that accommodations--including computer administration and read-aloud--offered on math and reading tests affected the construct being tested. In 2009-2010, by contrast, the studies' results showed that various accommodations generally did not affect the academic constructs being assessed; that is, accommodations were rarely associated with construct validity concerns. This was the case across academic content areas, including mathematics, reading, science, and writing, as well as in a majority of studies in the other content areas. In fact, accommodations were found to change the constructs in only 2 of the 21 findings examining construct validity. These two findings focused on the same accommodation--computerized administration--in the same study (Kingston, 2009), with the academic constructs of other language arts and social studies. Another study (Arce-Ferrer & Guzman, 2009) found that computerized administration did not change the constructs measured on an intelligence test. Further, the read-aloud accommodation was found not to alter the construct being tested in mathematics, reading, and science. A growing set of studies in academic content areas examined perceptions about using accommodations during testing. In comparison with only 12.5% of all studies published in 2007-2008, perceptions were investigated in 25% of the mathematics studies, 17% of the reading studies, and 20% of the other language arts studies in 2009-2010. Further, all of the 2007-2008 studies examined educators' perceptions of accommodations, whereas about half of the 2009-2010 studies identified student test-takers' perceptions.

Limitations

The most noticeable difference between the accommodations research published in 2007-2008 and that published in 2009-2010 is the increase in researchers identifying methodological issues as limitations. 
Although identification of sample characteristics as a limitation has remained about the same between the two sets of years, method choices such as having no control group or using non-random sampling have been increasingly noted as limitations. Researchers have pointed out that the unit of analysis has been the classroom rather than the individual student. Random assignment of research participants to differing conditions--such as testing with accommodations or testing without accommodations--has not occurred by participant, but rather has sometimes been implemented at the classroom or school level. Further, researchers note that accommodations have been implemented differently across conditions. These decisions indicate that the studies were not true experiments but rather used quasi-experimental designs. Other limitations noted more often by researchers whose work was published in 2009-2010 concerned the test and test context. Researchers commented in some studies that the test used differed in some way from tests typically used in the participants' school or district, or that the test segment presented to participants was not administered under typical conditions. Test context issues also included inconsistency of tests across grade levels, different test forms being used in different accommodated conditions, participants running out of time in test administration sessions, a higher than typical incidence of missing data, and in some cases, suspicions that participants did not respond honestly to survey questions. Researchers also reported more results-oriented limitations. For example, some studies used results that were not truly independent, such as when study participants' scores from both academic years were linked to one another. Another concern pertained to analyses in which the effects of accommodations were difficult to distinguish from the effects of students' disabilities. 
Several limitations from studies published in 2009-2010 did not seem to fit into categories used in previous reports. The "other" category of limitations included that students' prior knowledge was not reported through alternative data sources, which could have helped to clarify any issues with the test and accommodations used. Another limitation was that researchers could not pinpoint all of the potential sources of differential item functioning (DIF). Finally, 26 studies in the current report cited limitations fitting more than one limitations category, well more than twice the 10 studies that did so in 2007-2008.

Future Research

There were a few differences in the potential future research directions that researchers identified in the 2009-2010 studies compared to 2007-2008. First, a larger proportion of the 2009-2010 studies indicated a need for more research with improved methodology, such as investigating the impact and functionality of accommodations through single-subject designs, and inquiring about practitioners' knowledge and perceptions of accommodations. Second, a smaller proportion of the studies indicated a need for more research involving improvements in results; for example, fewer studies cited the need to replicate results in order to demonstrate their validity or generalizability. This change may be related to a larger proportion of studies focused on the validity of accommodations in 2009-2010. Finally, more than twice as many studies (n=20) in 2009-2010 identified future research directions fitting multiple categories, compared to the nine studies in 2007-2008 doing so. 
Trends

Some themes we recognized in the literature included a steady rise in investigating accommodations for science assessments; increased activity in collecting data simultaneously across grade level clusters--elementary, middle school, and high school; and increased examination of large secondary data sets at the district and state levels. Another trend was that researchers crafted multi-purpose study designs--that is, test data were collected to measure the impact of accommodations, and survey and interview data were collected about students' experiences in using accommodations. The literature in the current review paid continued attention to students with low-incidence disabilities, including visual and hearing impairments. Additionally, we observed fewer studies on the extended-time accommodation, as well as a small increase in examining response accommodations.

References

Report References

Barnard, L., Lan, W. Y., & Lechtenberger, D. (2008, March). How student attitudes toward requesting accommodations are related to academic achievement in postsecondary education. Paper presented at the annual meeting of the American Educational Research Association, New York. Bishop-Temple, C. (2007). The effects of interactive read-alouds on the reading achievement of middle grade reading students in a core remedial program. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 68(10), 4179. Brown, J. I., Fishco, V. V., & Hanna, G. (1993). Nelson-Denny Reading Test, Form H. Itasca, IL: Riverside. Cawthon, S., & The Online Research Lab. (2006). Findings from the National Survey on Accommodations and Alternate Assessments for Students who are Deaf or Hard of Hearing. Journal of Deaf Studies and Deaf Education, 11(3), 337–359. Cormier, D. C., Altman, J. R., Shyyan, V., & Thurlow, M. L. (2010). A summary of the research on the effects of test accommodations: 2007-2008 (Technical Report 56). 
Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. CTB/McGraw-Hill. (1997). TerraNova Multiple Assessment Battery. Monterey, CA: Author. Cury, F., Elliot, A. J., DaFonseca, D., & Moller, A. C. (2006). The social-cognitive model of achievement motivation and the 2 x 2 achievement goal framework. Journal of Personality and Social Psychology, 90(4), 666-679. Elliot, A. J., & McGregor, H. A. (2001). A 2 x 2 achievement goal framework. Journal of Personality and Social Psychology, 80(3), 501-519. Haladyna, T. M., Downing, S. M., & Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15(3), 309-334. Hammill, D. D., & Larsen, S. C. (1996). Test of Written Language. (3rd ed.). Austin, TX: Pro-Ed. Hopko, D. R., Mahadevan, R., Bare, R. L., & Hunt, M. K. (2003). The abbreviated math anxiety scale (AMAS): Construction, validity, and reliability. Assessment, 10(2), 178-182. Janssen, J., Scheltens, F., & Kraemer, J. (2005). Leerling- en onderwijsvolgsysteem. Rekenen-wiskunde groep 4. Handleiding [Student and education monitoring system. Mathematics grade 2. Teachers guide]. Arnhem, The Netherlands: CITO. Johnstone, C. J., Altman, J., Thurlow, M. L., & Thompson, S. J. (2006). A summary of research on the effects of test accommodations: 2002 through 2004 (Technical Report 45). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Kaufman, A. S., & Kaufman, N. L. (2004). Kaufman Test of Educational Achievement-Second Edition (KTEA-II) administration and scoring manual. Circle Pines, MN: American Guidance Service. Lewandowski, L. J., Lovett, B. J., Parolin, R., Gordon, M., & Codding, R. S. (2007). Extended time accommodations and the mathematics performance of students with and without ADHD. Journal of Psychoeducational Assessment, 25(1), 17-28. Lewandowski, L. J., Lovett, B. J., & Rogers, C. L. (2008). 
Extended time as a testing accommodation for students with reading disabilities: Does a rising tide lift all ships? Journal of Psychoeducational Assessment, 26(4), 315-324. MacGinitie, W. H., MacGinitie, R. K., Maria, K., & Dreyer, L. G. (2000). Gates-MacGinitie Reading Tests–Manual for scoring and interpretation. Itasca, IL: Riverside. Mastergeorge, A. M., & Martinez, J. F. (2010). Rating performance assessments of students with disabilities: A study of reliability and bias. Journal of Psychoeducational Assessment, 28(6), 536-550. Mather, N., Hammill, D. D., Allen, E. A., & Roberts, R. (2004). TOSWRF: Test of Silent Word Reading Fluency: Examiner's manual. Austin, TX: Pro-Ed. National Council of Teachers of Mathematics. (2000). Principles and standards for school mathematics. Reston, VA: Author. Northwest Evaluation Association. (2009). NWEA technical manual for Measures of Academic Progress and Measures of Academic Progress for primary grades. Portland, OR: Author. Pekrun, R., Goetz, T., Perry, R. P., Kramer, K., Hochstadt, M., & Molfenter, S. (2004). Beyond test anxiety: Development and validation of the test emotions questionnaire. Anxiety, Stress, and Coping, 17(3), 287-316. Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1991). A manual for the use of the Motivated Strategies for Learning Questionnaire (MSLQ). Ann Arbor: University of Michigan, National Center for Research to Improve Postsecondary Teaching and Learning. Pintrich, P. R., Smith, D. A. F., Garcia, T., & McKeachie, W. J. (1993). Reliability and predictive validity of the Motivated Strategies for Learning Questionnaire (MSLQ). Educational and Psychological Measurement, 53(3), 803-813. Raven, J. (2000). The Raven's Progressive Matrices: Change and stability over culture and time. Cognitive Psychology, 41, 1-48. Raven, J., Raven, J. C., & Court, J. (1993). 
Test de matrices progresivas: manual para la aplicación [Standard progressive matrices test: Direction for administration manual]. Buenos Aires: Paidós. Rodriguez, M. C. (2005). Three options are optimal for multiple-choice items: A meta-analysis of 80 years of research. Educational Measurement: Issues and Practice, 24, 3-13. Sharoni, V., & Vogel, G. (2007). Entrance test accommodations, admission and enrollment of students with learning disabilities in teacher training colleges in Israel. Assessment & Evaluation in Higher Education, 32(3), 255-270. Thompson, S., Blount, A., & Thurlow, M. (2002). A summary of research on the effects of test accommodations: 1999 through 2001 (Technical Report 34). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Thurlow, M., Rogers, C., & Christensen, L. (2010). Science assessments for students with disabilities in school year 2006-2007: What we know about participation, performance, and accommodations (Synthesis Report 77). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Woodcock, R. W., McGrew, K. S., & Mather, N. (2001). Woodcock-Johnson Tests of Achievement. (3rd ed.). Itasca, IL: Riverside Publishing. Woodcock, R., Mather, N., & Schrank, F. A. (2004). Woodcock-Johnson III: Diagnostic Reading Battery. Itasca, IL: Riverside Publishing. Zenisky, A. L., & Sireci, S. G. (2007). A summary of the research on the effects of test accommodations: 2005-2006 (Technical Report 47). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

2009 and 2010 Accommodation References

Altman, J. R., Cormier, D. C., Lazarus, S. S., Thurlow, M. L., Holbrook, M., Byers, M., Chambers, D., Moore, M., & Pence, N. (2010). Accommodations: Results of a survey of Alabama special education teachers. Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Anjorin, I. (2009). 
High-stakes tests for students with specific learning disabilities: Disability-based differential item functioning. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 71(02). Arce-Ferrer, A. J., & Guzman, E. M. (2009). Studying the equivalence of computer-delivered and paper-based administrations of the Raven standard progressive matrices test. Educational and Psychological Measurement, 69(5), 855-867. doi:10.1177/0013164409332219 Barnard-Brak, L., & Sulak, T. (2010). Online versus face-to-face accommodations among college students with disabilities. The American Journal of Distance Education, 24(2), 81-91. doi:10.1080/08923641003604251 Barnard-Brak, L., Davis, T., Tate, A., & Sulak, T. (2009). Attitudes as a predictor of college students requesting accommodations. Journal of Vocational Rehabilitation, 31(3), 189-198. doi:10.3233/JVR-2009-0488 Barnard-Brak, L., Sulak, T., Tate, A., & Lechtenberger, D. (2010). Measuring college students' attitudes toward requesting accommodations: A national multi-institutional study. Assessment for Effective Intervention, 35(3), 141-147. doi:10.1177/1534508409358900 Bayles, M. (2009). Perceptions of educators and parents of the California High School Exit Examination (CAHSEE) requirement for students with disabilities. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 71(04). Bouck, E. C. (2009). Calculating the value of graphing calculators for seventh-grade students with and without disabilities: A pilot study. Remedial and Special Education, 30(4), 207-215. doi:10.1177/0741932508321010 Bouck, E. (2010). Does type matter: Evaluating the effectiveness of four-function and graphing calculators. Journal of Computers in Mathematics and Science Teaching, 29(1), 5-17. Bublitz, D. F. (2009). Special education teachers' attitudes, knowledge, and decision-making about high-stakes testing accommodations for students with disabilities. 
Dissertation Abstracts International: Section A. Humanities and Social Sciences, 70(04). Byrd, T. D. M. (2010). East Tennessee State University faculty attitudes and student perceptions in providing accommodations to students with disabilities. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 71(10). Cawthon, S. W. (2009). Accommodations for students who are deaf or hard of hearing in large-scale, standardized assessments: Surveying the landscape and charting a new direction. Educational Measurement: Issues and Practice, 28(2), 41-49. doi:10.1111/j.1745-3992.2009.00147.x Cawthon, S. W. (2010). Science and evidence of success: Two emerging issues in assessment accommodations for students who are deaf or hard of hearing. Journal of Deaf Studies and Deaf Education, 15(2), 185-203. doi:10.1093/deafed/enq002 Cook, L., Eignor, D., Steinberg, J., Sawaki, Y., & Cline, F. (2009). Using factor analysis to investigate the impact of accommodations on the scores of students with disabilities on a reading comprehension assessment. Journal of Applied Testing Technology, 10(2). doi:10.1080/08957341003673831 Cook, L., Eignor, D., Sawaki, Y., Steinberg, J., & Cline, F. (2010). Using factor analysis to investigate accommodations used by students with disabilities on an English-language arts assessment. Applied Measurement in Education, 23(2), 187-208. Elliott, S. N., Kratochwill, T. R., McKevitt, B. C., & Malecki, C. K. (2009). The effects and perceived consequences of testing accommodations on math and science performance assessments. School Psychology Quarterly, 24(4), 224-239. doi: 10.1037/a0018000 Elliott, S. N., Kettler, R. J., Beddow, P. A., Kurz, A., Compton, E., McGrath, D., Bruen, C., Hinton, K., Palmer, P., Rodriguez, M. C., Bolt, D., & Roach, A. T. (2010). Effects of using modified items to test students with persistent academic difficulties. Exceptional Children, 76(4), 475-495. Finch, H., Barton, K., & Meyer, P. (2009). 
Differential item functioning analysis for accommodated versus non-accommodated students. Educational Assessment, 14(1), 38-56. doi:10.1080/10627190902816264 Fletcher, J. M., Francis, D. J., O'Malley, K., Copeland, K., Mehta, P., Caldwell, C. J., Kalinowski, S., Young, V., & Vaughn, S. (2009). Effects of a bundled accommodations package on high-stakes testing for middle school students with reading disabilities. Exceptional Children, 75(4), 447-463. Freeland, A. L., Emerson, R. W., Curtis, A. B., & Fogarty, K. (2010). Exploring the relationship between access technology and standardized test scores for youths with visual impairments: Secondary analysis of the National Longitudinal Transition Study 2. Journal of Visual Impairment & Blindness, 104(3), 170-182. Johnstone, C., Thurlow, M., Altman, J., Timmons, J., & Kato, K. (2009). Assistive technology approaches for large-scale assessment: Perceptions of teachers of students with visual impairments. Exceptionality, 17(2), 66-75. doi:10.1080/09362830902805756 Jordan, A. S. (2009). Appropriate accommodations for individual needs allowable by state guidelines. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 70(10). Kim, D. H., & Huynh, H. (2010). Equivalence of paper-and-pencil and online administration modes of the statewide English test for students with and without disabilities. Educational Assessment, 15(2), 107-121. doi:10.1080/10627197.2010.491066 Kim, D. H., Schneider, C., & Siskind, T. (2009). Examining the underlying factor structure of a statewide science test under oral and standard administrations. Journal of Psychoeducational Assessment, 27(4), 323-333. doi:10.1177/0734282908328632 Kim, D., Schneider, C., & Siskind, T. (2009). Examining equivalence of accommodations on a statewide elementary-level science test. Applied Measurement in Education, 22(2), 144-163. doi:10.1080/08957340902754619 Kingston, N. M. (2009). 
Comparability of computer- and paper-administered multiple-choice tests for K-12 populations: A synthesis. Applied Measurement in Education, 22(1), 22-37. doi:10.1080/08957340802558326 Laitusis, C. C. (2010). Examining the impact of audio presentation on tests of reading comprehension. Applied Measurement in Education, 23(2), 153-167. doi:10.1080/08957341003673815 Lazarus, S. S., Thurlow, M. L., Lail, K. E., & Christensen, L. (2009). A longitudinal analysis of state accommodations policies: Twelve years of change, 1993-2005. The Journal of Special Education, 43(2), 67-80. doi:10.1177/0022466907313524 Lee, K. S., Osborne, R. E., & Carpenter, D. N. (2010). Testing accommodations for university students with AD/HD: Computerized vs. paper-pencil/regular vs. extended time. Journal of Educational Computing Research, 42(4), 443-458. doi:10.2190/EC.42.4.e Lindstrom, J. H. (2010). Mathematics assessment accommodations: Implications of differential boost for students with learning disabilities. Intervention in School and Clinic, 46(1), 5-12. doi:10.1177/1053451210369517 Logan, J. P. (2009). The affective and motivational impact of the test accommodation extended time based on students' performance goal orientations. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 70(06). Lovett, B. J. (2010). Extended time testing accommodations for students with disabilities: Answers to five fundamental questions. Review of Educational Research, 80(4), 611-638. Lovett, B. J., Lewandowski, L. J., Berger, C., & Gathje, R. A. (2010). Effects of response mode and time allotment on college students' writing. Journal of College Reading and Learning, 40(2), 64-79. Mariano, G., Tindal, G., Carrizales, D., & Lenhardt, B. (2009). Analysis of teacher accommodation recommendations for a large-scale test. Eugene, OR: University of Oregon, Behavioral Research and Teaching. Mastergeorge, A. M., & Martinez, J. F. (2010). 
Rating performance assessments of students with disabilities: A study of reliability and bias. Journal of Psychoeducational Assessment, 28(6), 536-550. doi:10.1177/0734282909351022 Parks, M. Q. (2009). Possible effects of calculators on the problem solving abilities and mathematical anxiety of students with learning disabilities or attention deficit hyperactivity disorder. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 70(07). Peltenburg, M., van den Heuvel-Panhuizen, M., & Doig, B. (2009). Mathematical power of special-needs pupils: An ICT-based dynamic assessment format to reveal weak pupils' learning potential. British Journal of Educational Technology, 40(2), 273-284. doi:10.1111/j.1467-8535.2008.00917.x Randall, J., & Engelhard, G., Jr. (2010). Performance of students with and without disabilities under modified conditions: Using resource guides and read-aloud test modifications on a high-stakes reading test. The Journal of Special Education, 44(2), 79-93. doi:10.1177/0022466908331045 Ricketts, C., Brice, J., & Coombes, L. (2010). Are multiple choice tests fair to medical students with specific learning disabilities? Advances in Health Sciences Education, 15(2), 265-275. doi:10.1007/s10459-009-9197-8 Roach, A. T., Beddow, P. A., Kurz, A., Kettler, R. J., & Elliott, S. N. (2010). Incorporating student input in developing alternate assessments based on modified academic achievement standards. Exceptional Children, 77(1), 61-80. Roxbury, T. L. (2010). A psychometric evaluation of a state testing program: Accommodated versus non-accommodated students. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 71(06). Russell, M., Hoffmann, T., & Higgins, J. (2009). A universally designed test delivery system. TEACHING Exceptional Children, 42(2), 6-12. Russell, M., Kavanaugh, M., Masters, J., Higgins, J., & Hoffmann, T. (2009). Computer-based signing accommodations: Comparing a recorded human with an avatar. 
Journal of Applied Testing Technology, 10(3). Salend, S. (2009). Using technology to create and administer accessible tests. Teaching Exceptional Children, 41(3), 40-51. Schoch, C. S. (2010). Teacher variations when administering math graphics items to students with visual impairments. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 72(02). Snyder, J. (2010). Audio adapted assessment data: Does the addition of audio to written items modify the item calibration? Dissertation Abstracts International: Section A. Humanities and Social Sciences, 71(05). Stone, E., Cook, L., Cahalan Laitusis, C., & Cline, F. (2010). Using differential item functioning to investigate the impact of testing accommodations on an English-language arts assessment for students who are blind or visually impaired. Applied Measurement in Education, 23(2), 132-152. doi:10.1080/08957341003673773 Zhang, D., Landmark, L., Reber, A., Hsu, H. Y., Kwok, O., & Benz, M. (2010). University faculty knowledge, beliefs, and practices in providing reasonable accommodations to students with disabilities. Remedial and Special Education, 31(4), 276-286. doi:10.1177/0741932509338348

Appendices

The appendices are available in the PDF version of this document.