Christopher M. Rogers, Martha L. Thurlow, Sheryl S. Lazarus, and Kristin K. Liu
All rights reserved. Any or all portions of this document may be reproduced and distributed without prior permission, provided the source is cited as:
Rogers, C. M., Thurlow, M. L., Lazarus, S. S., & Liu, K. K. (2019). A summary of the research on effects of test accommodations: 2015-2016 (NCEO Report 412). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.
The use of accessibility features, including accommodations, in instruction and assessments continues to be of great importance for students with disabilities. This importance is reflected in an emphasis on research to investigate the effects of accommodations. Key issues under continued investigation include how accommodations affect test scores, how educators and students perceive accommodations, and how accommodations are selected and implemented. An emerging issue in more recent years is how large-scale testing delivered online via various technologically advanced platforms and formats has influenced accessibility features, including accommodations, and vice versa.
The purpose of this report is to provide an update on the state of the research on testing accommodations and to inform future research. Previous reports by the National Center on Educational Outcomes (NCEO) have covered research published since 1999. We summarize the research to review current research trends, to enhance understanding of the implications of accommodations use for future policy directions, to highlight implementation of current and new accommodations, and to support valid and reliable interpretations when accommodations are used in testing situations. In 2015 and 2016, 58 published research studies on the topic of testing accommodations were found. Among the main points of the 2015-2016 research were:
Purpose: More than 40 percent of the research evaluated the comparability of test scores when assessments were administered with and without accommodations. The next most common purpose was to report on perceptions and preferences about accommodations use. The majority of studies (about 75%) addressed multiple purposes.
Research design: About 70% of the studies reported primary data collection by the researchers, rather than drawing on existing (extant) data sets. About two-fifths of the studies involved descriptive qualitative designs, and quasi-experimental designs comprised another one-fifth of the studies. Researchers also drew on a variety of other quantitative and qualitative methodologies, including survey methodologies and meta-analyses.
Types of assessments, content areas: A wide variety of instrument types were used. About one-ﬁfth of the studies used academic content items drawn from speciﬁc sources outside of the researchers’ work, and about 16 percent of studies used state criterion-referenced tests. Over half of the studies used non-academic protocols or surveys developed by the study authors. Other studies used norm-referenced measures. About one-third used multiple types of data. Reading and mathematics were the most common content areas included in the 2015-2016 research. Other content areas included science, writing, other language arts, and social studies. Only ﬁve percent of all studies addressed more than one content area in the assessments used.
Participants: Participants were most frequently students, spanning a range of grade levels from K-12 to postsecondary students, although several studies included educators as participants. Studies varied in the number of participants; some studies included fewer than 10 participants, whereas other studies involved tens of thousands of participants.
Disability categories: Learning disabilities was the most common disability category of participants in the research, accounting for over half of the studies. Attention problems and emotional behavioral disability were the next most commonly studied. Low-incidence disabilities were included in more than one-third of the studies.
Accommodations: Presentation and Timing/Scheduling accommodations were the most frequently studied categories of accommodations. Oral delivery and extended time were the most-studied individual accommodations. Combinations of these two, and others, into aggregated sets of accommodations were also studied by several researchers. A relatively large number of 2015-2016 studies reported on unique accommodations.
Findings: Empirical studies investigating the performance effects of accommodations were not unconditionally conclusive about positive impacts on assessment scores for students with disabilities. Extended time mostly had no apparent influence on performance, and even had a negative impact for postsecondary students with attention-related disabilities. Findings on oral delivery accommodations were mixed: the accommodation supported students with disabilities in one study, provided no differential benefit for students with disabilities compared to students without disabilities in at least one testing condition in another study, and had negative impacts in a third study. The studies with findings on impacts of unique accommodations demonstrated a mix of impacts: three had positive effects, two had no effect, and three had negative effects for students with disabilities.
Of the 11 studies addressing the impact of accommodations in reading, only two speciﬁcally reported positive impacts for students with disabilities. In contrast, of the ﬁndings in ﬁve studies of the impact of accommodations in math, the accommodation conditions beneﬁted the performance of at least some students with disabilities in four studies, and had a negative impact for students with disabilities in one study. Many of the 2015-2016 studies were not focused on simple performance impacts, but provided comparisons between different versions of accommodations, particularly oral delivery.
A larger proportion of studies provided ﬁndings about perceptions of accommodations than in previous NCEO reports, comprising nearly half of the 2015-2016 studies. Many studies provided insights about students’ preferences among accommodations, and about speciﬁc features of accommodations. Seven studies’ ﬁndings explicitly indicated students’ positive perceptions of the usefulness and support that they gained from accommodations. Findings about educator perceptions were also nuanced and highlighted their recognition of their own limitations in understanding and providing accommodations, particularly at the postsecondary level.
A larger proportion of studies than in previous NCEO reports investigated accommodations in the postsecondary education context, comprising over half of the studies in 2015-2016. This body of research yielded findings across various study purposes, including 15 studies on perceptions, eight studies on use patterns, five studies on performance effects, and four studies that were literature reviews.
All students, including students with disabilities and English learners (ELs) with disabilities, are required by the Individuals with Disabilities Education Act (IDEA) of 2004 and by the 2015 reauthorization of the Elementary and Secondary Education Act (ESEA) known as the Every Student Succeeds Act (ESSA) to participate in assessments used for accountability. Some students can beneﬁt from universal features or designated features to meaningfully access assessments, while others need accommodations in order to demonstrate their academic knowledge and skills. States and assessment consortia seek clarity from research on accommodations when making policy decisions about accommodations.
To synthesize accommodations research efforts completed across the years, the National Center on Educational Outcomes (NCEO) has published a series of reports on accommodations research. The time periods included 1999-2001 (Thompson, Blount, & Thurlow, 2002), 2002-2004 (Johnstone, Altman, Thurlow, & Thompson, 2006), 2005-2006 (Zenisky & Sireci, 2007), 2007-2008 (Cormier, Altman, Shyyan, & Thurlow, 2010), 2009-2010 (Rogers, Christian, & Thurlow, 2012), 2011-2012 (Rogers, Lazarus, & Thurlow, 2014), and 2013-2014 (Rogers, Lazarus, & Thurlow, 2016). This report covers the time period 2015-2016.
The purpose of this report is to present a synthesis of the research on test accommodations published in 2015 and 2016. The literature described here encompasses empirical studies of score comparability and validity studies as well as investigations into accommodations use, implementation practices, and perceptions of their effectiveness. As a whole, the current research body offers a broad view and a deep examination of issues pertaining to assessment accommodations. Reporting the ﬁndings of current research studies was the primary goal of this analysis.
Similar to the process used in past accommodations research syntheses (Cormier et al., 2010; Johnstone et al., 2006; Rogers et al., 2012; Rogers et al., 2014; Rogers et al., 2016; Thompson et al., 2002; Zenisky & Sireci, 2007), a number of sources were accessed to complete the review of the accommodations research published in 2015 and 2016. Specifically, five research databases were consulted: Educational Resources Information Center (ERIC), PsycINFO, Academic Search Premier, Digital Dissertations, and Educational Abstracts. To help confirm the thoroughness of our searches, we used the Web search engine Google Scholar to search for additional research. In addition, a hand-search of 49 journals was completed, in an effort to ensure that no qualifying study was missed. A list of hand-searched journals is available on the National Center on Educational Outcomes website (www.nceo.info/OnlinePubs/AccommBibliography/AccomStudMethods.htm).
Online archives of several organizations also were searched for relevant publications. These organizations included Behavioral Research and Teaching (BRT) at the University of Oregon (http://brt.uoregon.edu), the College Board Research Library (http://research.collegeboard.org), the National Center for Research on Evaluation, Standards, and Student Testing (CRESST; http://www.cse.ucla.edu), and the Wisconsin Center for Educational Research (WCER; http://testacc.wceruw.org/).
The initial search was completed in December 2016. A second search was completed in May 2017 to ensure that all articles published in 2015 and 2016 were found and included in this review. Within each of these research databases and publication archives, we used a sequence of search terms.
Many of these search terms were used as delimiters when searches yielded large pools of documents found to be irrelevant to the review.
The research documents from these searches were then considered for inclusion in this review using several criteria. First, this analysis included only research published or defended (in doctoral dissertations) in 2015 and 2016. Second, the scope of the research was limited to investigations of accommodations for regular assessment; hence, articles speciﬁc to alternate assessments, accommodations for instruction or learning, and universal design in general were not part of this review. Third, research involving English learners (ELs) was included only if the target population was ELs with disabilities. Fourth, presentations from professional conferences were not searched or included in this review, based on the researchers’ criteria to include only research that would be accessible to readers and had gone through the level of peer review typically required for publication in professional journals or through a doctoral committee review. (This criterion was implemented for the ﬁrst time during the 2007-2008 review.) Finally, to be included in the online bibliography and summarized in this report, studies needed to involve (a) experimental manipulation of an accommodation, (b) investigation of the comparability of test scores across accommodated and non-accommodated conditions, or across more than one accommodated condition, or (c) examination of survey results or interview data sets about students’ or teachers’ knowledge or perceptions of accommodations.
To reﬂect the wide range of accommodations research that was conducted in 2015 and 2016, the studies are summarized and compared in the following ways: (a) publication type; (b) purposes of research; (c) research type and data collection source; (d) assessment or data collection focus; (e) characteristics of the independent and dependent variables under study; and (f) comparability of ﬁndings between studies in similar domains.
Fifty-eight studies were published between January 2015 and December 2016. As shown in Figure 1, of the 58 studies, 43 were journal articles, 15 were dissertations, and none were published professional reports released by research organizations or entities (e.g., ETS).
The total number of studies published on accommodations in 2015-2016 (n=58) increased slightly from accommodations research published in 2013-2014 (n=53). The number of journal articles increased (n=43 in 2015-2016; n=37 in 2013-2014), and the number of dissertations published on accommodations was about the same (n=15 in 2015-2016; n=14 in 2013-2014). The number of professional reports released by research organizations or entities decreased (n=0 in 2015-2016; n=2 in 2013-2014). The report on accommodations research in 2013-2014 (Rogers et al., 2016) included 37 journal articles from 27 journals; the 43 articles described in the current report were published in 29 journals.
Figure 1. Percentage of Accommodations Studies by Publication Type
A number of purposes were identiﬁed in the accommodations research published in 2015 and 2016. Table 1 shows the primary focus of each of these 58 studies. Fifteen studies each listed a single purpose (see Appendix A). The majority of studies reviewed sought to accomplish multiple purposes. In those cases, we identiﬁed the “primary purpose” based on the title of the work or the ﬁrst-mentioned purpose in the text.
Table 1. Primary Purpose of Reviewed Research
|Purpose||Number of Studies||Percent of Studies|
|Compare scores of students tested with and without accommodations: only students with disabilities (9 studies; 15.5% of studies), only students without disabilities (3 studies; 5.2% of studies), both students with and without disabilities (12 studies; 20.7% of studies)||24||41%|
|Study/compare perceptions and preferences about use||18||31%|
|Report on implementation practices and accommodations use||9||16%|
|Summarize research on test accommodations||6||10%|
|Compare test items||0||0%|
|Identify predictors of need for accommodations||0||0%|
|Evaluate test structure||0||0%|
|Investigate test validity||0||0%|
The most common primary purpose for research published during 2015-2016 was to compare scores of (a) students with disabilities only, (b) students without disabilities, or (c) students with and without disabilities; score comparison was the central focus of 41 percent of the 58 studies (see Appendix A for each study's purpose details). The next most common primary purpose was to investigate accommodations perceptions and preferences (31%). The third most common purpose was to report on accommodations use (16%).
Reviews of research on accommodations included explorations of the research: (a) on various accommodations for students within speciﬁc disability categories (Barnett & Gay, 2015; Condra et al., 2015; Zeedyk, Tipton, & Blacher, 2016); (b) about various accommodations for students at a speciﬁc education level (DeLee, 2015); and (c) about a speciﬁc accommodation for students within a speciﬁc disability category (Cahan, Nirel, & Alkoby, 2016). In this analysis, test development was the central focus of a single study (Hansen, Liu, Rogat, & Hakkinen, 2016).
Table 2 shows the multiple purposes of many studies. Several studies had two purposes—for example, some studies (Lin, Childs, & Lin, 2016; Seo & De Jong, 2015) both compared scores of students with and without disabilities and examined item comparability. Other studies (Bouck, Bouck, & Hunley, 2015; Higgins et al., 2016; Rosenblum & Herzberg, 2015) included score comparisons of students with disabilities, while also reporting on accommodations perceptions and preferences.
Table 2. All Purposes of Reviewed Research
|Purpose||Number of Studies||Percent of Studies|
|Study/compare perceptions and preferences about use||26||45%|
|Summarize research on test accommodations||24||41%|
|Compare scores of students tested with and without accommodations: only students with disabilities (9 studies; 15.5% of studies), only students without disabilities (3 studies; 5.2% of studies), both students with and without disabilities (12 studies; 20.7% of studies)||24||41%|
|Report on implementation practices and accommodations use||14||24%|
|Compare test items||4||7%|
|Identify predictors of the need for accommodations||2||3%|
|Evaluate test structure||1||2%|
|Investigate test validity||2||3%|
Note. Because 43 studies each had more than one purpose, the study purpose numbers total more than the 58 studies and the percents total more than 100%.
Descriptive qualitative research was the most frequent design (about 40%) for the studies in 2015-2016, a departure from previous biennial reports, in which quasi-experimental research was more common; it is unclear whether this is a trend. As displayed in Table 3, researchers gathered the data themselves (i.e., primary source data) in almost three times as many descriptive qualitative studies (n=17) as used secondary data sources drawing on extant or archival data (n=6). The number of descriptive qualitative research studies decreased from 2015 to 2016. Likewise, studies using quasi-experimental research designs also decreased from 2015 to 2016, consistent with an overall decrease in studies from 2015 (n=34) to 2016 (n=24). Descriptive quantitative studies did not change from 2015 to 2016, with six studies in each year. In 2015 and 2016, researchers conducted some studies (n=8) using correlational designs, and few used longitudinal or meta-analytic designs. No studies used experimental designs, so that design was not included in Table 3.
Table 3. Research Type and Data Collection Source by Year
|Research Design||Data Collection Source||Total Sources|
|Source Totals Across Years||36||22||58|
Data collection sources in the current review period were similar to those in the previous review period. In 2015-2016, primary data were used in 36 studies (62%) and secondary data were used in 22 studies (38%). This difference between data sources is smaller than in the previous report (Rogers et al., 2016), in which over twice as many studies used primary data as used secondary data sources. (Appendix B presents research designs and data collection sources for individual studies.)
The research included in this analysis used the methods shown in Figure 2 to collect study data. Forty-one percent of the studies (n=24) used performance data acquired through academic content testing. In some cases, tests were administered as part of the study; in other cases, extant data sources were used. Interviews (n=22, 38%) and surveys (n=22, 38%) were other common data sources, while observations and focus groups were less commonly used methods of collecting data. Another less frequently used method was "articles"; this term refers to seven studies that chiefly reviewed research literature. Five studies collected various other data, including course grades and/or cumulative grade point averages (GPAs; Crosby, 2015; Dong & Lucas, 2016; Kim & Lee, 2016; Lewandowski, Wood, & Lambert, 2015), as well as disability documentation to validate interview data (Crosby, 2015). About one-third of the studies reported using more than one method or tool to gather data. The two most common combinations were testing and surveys (n=7), and surveys and interview protocols (n=6). See Appendix B for additional details about each study's data collection methods.
Figure 2. Data Collection Methods Used in 2015-2016 Research
Note. Of the 58 studies reviewed for this report, 18 reported using two data collection methods, and 1 reported using three data collection methods. Thus, the total number of studies represented in this figure totals more than 58. Other data included course grades, postsecondary GPAs, and disability documentation and other academic records.
Nearly all of the 2015-2016 studies used some type of data collection instrument; only seven studies did not employ any instruments because they were literature reviews. Table 4 shows the types of data collection instruments used. Surveys presented items of an attitudinal or self-report nature. Tests and exams were course- or classroom-based. Assessments were statewide or large-scale in scope. Protocols refer to sets of questions, usually presented in an interview or focus group format. Measures refer to norm-referenced academic or cognitive instruments. All of these instruments were placed into six categories: non-academic protocols or surveys developed by study authors; surveys or academic tests developed by education professionals or drawn by researchers from other sources; state criterion-referenced academic assessments; norm-referenced academic achievement measures; norm-referenced cognitive ability measures; and other.
Non-academic protocols developed by the author or authors of the studies—the most commonly used instrument type (in 57% of studies)—included performance tasks, questionnaires or surveys, and interview and focus-group protocols, among others. Surveys or academic tests developed by education professionals or researchers drew on sources outside of the current studies; examples include perception surveys such as the Attitudes Toward Requesting Accommodations scale (ATRA; Barnard-Brak, Davis, Tate, & Sulak, 2009) and secondary analyses of datasets such as the National Longitudinal Transition Study-2 (NLTS2; Valdes, Godard, Williamson, McCracken, & Jones, 2013).
Table 4. Data Collection Instrument Types
|Instrument Type||Number of Studies||Percent of Studies|
|Non-academic protocols or surveys developed by study authors|
|Surveys or academic tests developed by professionals or researchers using sources outside of current study|
|State criterion-referenced assessments||9||16%|
|Norm-referenced academic achievement measures||9||16%|
|Norm-referenced cognitive ability measures||5||9%|
a Other: screening including psychological and diagnostic information (Lovett & Leja, 2015; Spiel et al., 2016; Weis, Dean, & Osborne, 2016); disability documentation (Crosby, 2015; Kafle, 2015); task scoring rubrics (Hansen et al., 2016; Nelson & Reynolds, 2015); and course grades and/or cumulative GPAs (Kim & Lee, 2016).
b None: 7 studies were literature reviews of studies employing various data collection approaches and/or instruments (Barnett & Gay, 2015; Cahan et al., 2016; Condra et al., 2015; DeLee, 2015; Kettler, 2015; Lane & Leventhal, 2015; Zeedyk et al., 2016).
c Twenty studies (34%) used more than one type of instrument; therefore, numbers total more than the 58 studies represented, and percents total more than 100.
State criterion-referenced assessments included those of Colorado, Maine, and Michigan, released test items from assessment consortia and several states (Higgins et al., 2016), and two large-scale assessments from Ontario, Canada, as well as assessments from states that remained unidentified in the research. Seven norm-referenced academic achievement measures were used in one or more studies, such as the National Assessment of Educational Progress (NAEP), the Nelson-Denny Reading Test (NDRT), the Wechsler Individual Achievement Test, Second Edition (WIAT-II), and the Woodcock-Johnson III Tests of Achievement (WJ-III). Seven norm-referenced cognitive ability measures were used in one or more studies, such as the Clinical Evaluation of Language Fundamentals, Fourth Edition (CELF-4); the Peabody Picture Vocabulary Test, Fourth Edition (PPVT-IV); the Wechsler Intelligence Scale for Children, Fourth Edition (WISC-IV); and the Woodcock-Johnson III Tests of Cognitive Abilities (WJ-III). About one-third of all studies (n=20) used instrumentation of more than one kind. We present a complete listing of the instruments used in each of the studies in Table C-1 in Appendix C, including the related studies or other bibliographic source information for these instruments, when available.
Many studies published during 2015-2016 focused on accommodations used in specific academic content areas. As shown in Table 5, reading was the most commonly studied content area. Table 5 also provides a comparison to content area frequency found in NCEO's previous analyses of accommodations research (Rogers et al., 2014, 2016). Across the years, reading and mathematics have been the most common content areas for this research; however, the number of studies addressing reading assessments decreased in 2015-2016 from previous years, and the number examining math assessment data decreased dramatically. The number of studies examining accommodation effects in more than one content area also decreased. There was little change across years in the number of studies addressing science, "other language arts," and social studies. There was an overall decrease in the number of studies that used assessment data in 2015-2016 compared to 2011-2012 and 2013-2014. (See Appendix C, Table C-2, for additional details about the content areas.)
Table 5. Academic Content Area Assessed Across Three Reports
|Content Area Assessed||2011-2012a||2013-2014b||2015-2016c|
|Mathematics||22 (45%)||14 (26%)||4 (7%)|
|Reading||19 (39%)||16 (30%)||10 (17%)|
|Writing||5 (10%)||2 (4%)||4 (7%)|
|Other Language Artsd||2 (4%)||3 (6%)||3 (5%)|
|Science||4 (8%)||5 (9%)||4 (7%)|
|Social Studies||1 (2%)||0 (0%)||1 (2%)|
|Not Specified||2 (4%)||0 (0%)||0 (0%)|
|Multiple Contente||16 (33%)||9 (17%)||3 (5%)|
a Studies in 2011-2012 included examinations of more than one content area, ranging in number of areas assessed from 2 to 4.
b Studies in 2013-2014 included examinations of more than one content area ranging in number of areas assessed from 2 to 3.
c Studies in 2015-2016 included examinations of more than one content area comprising exactly 2 areas assessed.
d Detailed descriptions of what constituted “Other Language Arts” for each of the three studies from 2015-2016 can be found in Appendix C, Table C-2.
e Because some studies investigated effects in more than one content area, the percents total more than 100.
The studies in this analysis of 2015-2016 accommodations research included participants in several roles (see Figure 3 and Appendix D). In 2015-2016, a majority of the studies—42 of the 58 studies (74%)—included only students. The next largest participant group studied (14% of the studies) was "educators only"; these studies described or analyzed the educator perspective on accommodations. Both educators and students were included in one study. The other participant category, included in the report on accommodations research in 2013-2014 (Rogers et al., 2016), was "educators, parents, and students"; none of the studies from 2015-2016 were in this group. Seven studies did not draw data from research participants.
Figure 3. Types of Research Participants (n=51)
Table 6 details the composition and size of the student participant groups in the research studies published during 2015 and 2016. This information is displayed in more detail by study in Appendix D. The size of the participant groups varied from 2 (Timmerman & Mulvihill, 2015) to 52,484 (Seo & De Jong, 2015). The 2015-2016 studies mostly examined samples composed entirely of students with disabilities (26 studies) or entirely of students without disabilities (5 studies). Only one study (Miller, Lewandowski, & Antshel, 2015) compared groups with an equal number of students with and without disabilities (n=38); only one other study (Couzens et al., 2015) had very similar proportions of both groups among its 15 participants. In addition, the number of studies in which students without disabilities made up the majority of the sample (n=15) was lower than the number of studies in which students with disabilities made up the majority (n=27). The number of studies in which most participants did not have disabilities decreased (to n=15) since the last report (Rogers et al., 2016).
Table 6. Participant Sample Sizes and Ratio of Individuals with Disabilities
|Number of Research Participants by Study||Number of Studies by Proportion of Sample Comprising Individuals with Disabilities|
|5000 or more||4||1||0||2||1||8|
Similar to the previous report on accommodations research (Rogers et al., 2016), research during 2015-2016 involved kindergarten through postsecondary participants (see Table 7). See Appendix D for more detail. Postsecondary refers to both university students and other participants in postsecondary settings. For example, Spenceley and Wheeler (2016) investigated the use of extended time during course exams at the postsecondary level. The largest number of studies published in 2015 and 2016 focused on postsecondary students (n=22; 38%), and the second most frequently-studied school level was middle school (n=14; 24%). The proportion of studies at the elementary school (n=10; 17%) and high school (n=8; 14%) levels were almost equal. Ten studies (17%) included students in more than one grade-level cluster—most commonly students from across middle school and high school (Cawthon, Leppo, Ge, & Bond, 2015; Davis, Orr, Kong, & Lin, 2015; Joakim, 2015; Rosenblum & Herzberg, 2015; Seo & De Jong, 2015).
Table 7. School Level of Research Participants
|Education Level of Participants in Studies||Number of Studies||Percent of Studiesa|
|Elementary school (K-5)||10||17%|
|Middle school (6-8)||14||24%|
|High school (9-12)||8||14%|
|Postsecondary||22||38%|
a Ten studies (17%) had participants in more than one education level; therefore, the numbers total more than the 58 studies represented, and percents total more than 100.
The accommodations research in 2015-2016 addressed a broad range of disability categories (see Appendix D for details). As shown in Table 8, four studies did not specify any disability categories for the student participants, and 14 studies did not include students in the sample. Of the remaining 40 studies, the most commonly studied student disability category was learning disabilities (n=30); ﬁve of these studies had only participants with learning disabilities, and four more compared students with learning disabilities to students without disabilities.
About one-third of the studies included students with attentional difficulties (n=20). The relevant studies also included students with emotional behavioral disabilities (n=15), students with "multiple disabilities" (n=13), students with physical disabilities (n=12), students with autism-related disabilities (n=11), and students with blindness or visual disabilities (n=9). About one-tenth included students with deafness or hearing impairments (n=6), students with speech/language impairments (n=6), or students with intellectual disabilities (n=5). No studies specifically mentioned students with traumatic brain injuries. A little over one-fourth of these studies included students without disabilities as comparison groups (n=15). Except for studies that addressed accommodations and students with learning disabilities, very few studies examined accommodations for participants with only one specific category of disability.
Table 8. Disabilities Reported for Research Participants
|Disabilities of Research Participants||Number of Studies||Percent of Studiesa|
|Learning disabilities||30||52%|
|Attention problems||20||34%|
|Emotional behavioral disability||15||26%|
|Multiple disabilities||13||22%|
|Physical disabilities||12||21%|
|Autism-related disabilities||11||19%|
|Blindness/visual impairment||9||16%|
|Deafness/hearing impairment||6||10%|
|Speech/language impairment||6||10%|
|Intellectual disabilities||5||9%|
|Traumatic brain injury||0||0%|
a Several studies had participants who fell into various disability categories; therefore, the numbers in this table total more than the 58 studies represented, and percents total more than 100.
The number of times specific categories of accommodations were included in 2015-2016 published research is summarized in Table 9. Presentation and timing/scheduling accommodations were the most frequently studied categories, each with 22 studies. Within the presentation accommodations category, the most common accommodation was oral delivery/read aloud—including human reader and various technology approaches (e.g., text-to-speech). Extended time was examined in all 22 of the studies that included the scheduling accommodations category. Several studies (n=25) analyzed accommodations from more than one category. Of those, three studies (Cawthon et al., 2015; Kettler, 2015; Lin & Lin, 2016) included accommodations from each of the five accommodations categories. A complete listing of accommodations examined in each study is provided in Appendix E (Tables E-1 through E-5).
Table 9. Accommodations in Reviewed Research

| Accommodations Category | Number of Studiesᵃ |
|---|---|
| Presentation | 22 |
| Timing/scheduling | 22 |

ᵃ Several studies investigated accommodations that fit into more than one category; therefore, the numbers in this table total more than the 58 studies represented.
The findings of the studies on accommodations published in 2015 and 2016 are summarized according to the nature of the studies, consistent with their various stated purposes and focuses. The findings included sets of research about specific accommodations: oral delivery, extended time, separate setting, and aggregated sets of accommodations commonly called “bundles.” We also report the findings on the impact of unique accommodations—those examined in only one study—including familiar administrator, pacing support, signed administration, tactile graphics, calculator, marking answers in the test booklet, word processing, taking breaks during testing, individual administration, and small group administration. We report on accounts of perceptions about accommodations, including those of student test-takers as well as educators. We summarize findings about accommodations use, describing a range of implementation conditions as well as the incidence of various accommodations across large data sets. The findings from studies in postsecondary educational contexts, which have grown over time from 6 to 15 in past reports, to 30 studies in this report, are given separate attention. This report also presents findings by academic content area: math, reading, science, social studies, and writing. In Appendix F, we provide substantial detail about individual studies.
Research examining the effects of accommodations on assessment performance for students with disabilities comprised 23 studies published in 2015 and 2016 (see Figure 4; see also Appendix F for details about each study of this type). We report the effects of four discrete accommodations—extended time, oral delivery, computer administration, and separate/specialized setting—along with aggregated sets of accommodations and uncommon accommodations.
Figure 4. Effects of Specific Accommodations (n=23)
Note. Four studies examined the separate impacts of several accommodations; one study examined the effects of accommodations in general but did not specify comparisons of individual accommodations with one another; one study reported findings on the impact of modifications as well as an accommodation (extended time).
The most investigated accommodation in 2015-2016 was extended time—provided as 1.5 or 2 times the standard testing time, as unlimited time, or as an unspecified amount of additional time—which was examined in six studies. Students in grades K through 12 (Cahan et al., 2016; Joakim, 2015; Ohleyer, 2016; Südkamp, Pohl, & Weinert, 2015) and postsecondary students (Lovett & Leja, 2015; Miller et al., 2015) participated in investigations of the impact of this accommodation on academic performance. Four studies (Cahan et al., 2016; Südkamp et al., 2015; Lovett & Leja, 2015; Miller et al., 2015) compared the performance of students with specific disabilities and their peers without disabilities. In contrast, Joakim (2015) examined a large extant data set of students with various disabilities, and discerned the separate impacts of a number of different accommodations, including extended time. Ohleyer (2016) analyzed data longitudinally for students with learning disabilities only. In sum, five studies provided findings about the impact of extended time on students’ performance in either reading (Lovett & Leja, 2015; Miller et al., 2015; Südkamp et al., 2015) or writing (Joakim, 2015; Ohleyer, 2016); in addition, a separate meta-analysis (Cahan et al., 2016) examined the impact of extended time on assessment performance in various academic content areas.
For the studies with students at the K-12 level, extended time did not affect writing score results for students with various disabilities in grades 5 and 8 (Joakim, 2015) or for students with learning disabilities in grades 4, 5, and 6 (Ohleyer, 2016). Grade 5 students with disabilities completed more items when presented with fewer test items in the same time period (Südkamp et al., 2015). However, differential item functioning analysis revealed that the impact of extended time was complicated by potential validity concerns; that is, several items functioned significantly differently for students with disabilities on all test versions, while there were no items of concern for low-performing students without disabilities. Examining the impact of extended time on 17 tests in 11 studies, Cahan and colleagues (2016) concluded that there was a low correlation between gain scores and students’ learning disability status for most of the studies. Further, they argued that some students without disabilities benefited from extended time, and indicated that non-timed tests would be a better approach, so that students needing additional time, whether or not they have disabilities, could have access to it.
For postsecondary students with disabilities, the findings were more complex and mixed. Lovett and Leja (2015) found that extended time did not affect reading results across all postsecondary students with attentional or executive functioning difficulties, and that students with more ADHD symptoms or more executive functioning difficulties showed significantly less benefit from extended time. Miller and colleagues (2015) indicated that students with attentional difficulties performed similarly to one another within each testing time condition, attempting similar numbers of items and scoring correctly on similar numbers of items. Further, both students with and without disabilities performed worst with standard time, better with 150% time, and best with 200% time (Miller et al., 2015).
Four studies (Kim, 2016; McMahon, Wright, Cihak, Moore, & Lamb, 2016; Ohleyer, 2016; Ricci, 2015) provided findings about the oral delivery accommodation. For clarity in this report, as in previous reports (Rogers et al., 2014; Rogers et al., 2016), we used the term “oral delivery” to encompass in-person read-aloud as well as voice recordings and text-reading software or text-to-speech devices. Two studies (Kim, 2016; Ohleyer, 2016) reported on the impact of in-person oral delivery (“read-aloud”) and of directions only read aloud, and three studies reported on the impact of voice recording (Kim, 2016; McMahon et al., 2016) or text-to-speech tools (Ricci, 2015). Indeed, a focus of two of these studies was comparing the impact of human voice, live versus recorded (Kim, 2016), or directions only or entire test read aloud versus an assistive technology (AT) communication device (Ohleyer, 2016). To be clear, Ohleyer (2016) compared writing assessment performance between common accommodations—extended time, directions (only) read aloud, oral script, and no accommodation—and an assistive technology communication device, in terms of the manner by which the test-takers communicated their responses for the writing assessment; due to a coding complication, this included but was not limited to a speech-to-text device. Two studies (Kim, 2016; McMahon et al., 2016) had comparison groups of students without disabilities, while one study (Ohleyer, 2016) engaged only students with learning disabilities across grade levels, and Ricci (2015) completed a post hoc data analysis for students with various disabilities using oral delivery versus other accommodations in general. All four studies included students in kindergarten through grade 6. Two studies (Kim, 2016; Ricci, 2015) examined the impact of oral delivery on reading test performance, noting that directions and items were delivered orally but that reading passage text segments were not.
Kim (2016) found that kindergarten and grade 2 students scored better in comprehension with in-person versus recorded oral delivery, and grade 4 students scored the same in comprehension in both oral delivery conditions; also, students in all grade levels had retell quality scores that were essentially the same with in-person versus recorded oral delivery. McMahon and colleagues (2016) found that all grade 6 students scored significantly better in the oral delivery conditions than without accommodations, yet they did not score differently between the in-person and video podcast-delivered science assessment. Mean score comparisons of students with disabilities versus students with reading difficulties (but without disabilities) indicated that students without disabilities scored significantly higher in the unaccommodated and in-person oral delivery conditions than their peers with disabilities, yet not significantly higher when completing the podcast-delivered science test. Ohleyer (2016) found that students with learning disabilities in grades 4, 5, and 6 performed better on writing assessments when using read-aloud directions only and when using assistive technology than when using no accommodations, and did not score significantly differently when receiving oral delivery of the complete assessment. Further, students who used assistive technology across more than one year scored significantly better than those who had not. Ricci (2015) found that grade 4 students with disabilities who received text-to-speech delivered by computer scored lower in reading comprehension than students with disabilities receiving other accommodations but not text-to-speech; effect sizes in the three states ranged from medium to very large.
Two studies (Eberhart, 2015; Seo & De Jong, 2015) investigated impacts of computer-administered testing, analyzing population data from all students (both with and without disabilities) together to detect possible differential effects based on assessment format. Seo and De Jong (2015) compared traditional paper-based grade 6 and 9 social studies testing to the tests presented via computer, finding no significant differences in scores by testing format. Eberhart (2015) compared performance on grade 7 math and language arts assessments when administered on computers and on tablets; she also compared items with technological enhancements to traditional multiple-choice items. The findings were complex: on average, students scored significantly higher on computer than on tablet; further, students answered multiple-choice questions more successfully on computer than on tablet, but there were no significant performance differences by device for the technologically-enhanced items.
Two studies examined the impact of the separate setting accommodation (Lewandowski et al., 2016; Lin et al., 2016). The ﬁndings were generally consistent with one another. Lewandowski and colleagues (2016) reported that students without disabilities scored signiﬁcantly better in reading with the standard group setting rather than the separate individual setting. Lin and colleagues (2016), in their post-hoc analysis study comparing scores of students with learning disabilities and students without disabilities—with some of each student group taking the test in a standard classroom setting and some completing it in a separate setting—found that students with disabilities using the setting accommodation had the lowest group mean scores, non-accommodated students with disabilities had the next-lowest mean scores, and the non-accommodated students without disabilities scored highest, higher than accommodated students without disabilities. Further, applying multilevel measurement modeling, Lin and colleagues found no individual item effects for the two groups of students with disabilities, and non-accommodated students with LD evidenced lower item difﬁculty than accommodated students with LD.
Four studies (Giusto, 2015; Lin & Lin, 2016; Rudzki, 2015; Spiel et al., 2016) reported the impact of aggregated sets, or bundles, of accommodations. In addition to reporting impact findings for pacing-only support, Giusto (2015) also compared the impact of both oral delivery and pacing guidance by the test administrator, finding that this combination of supports benefited students with reading disabilities more than the pacing-only and unaccommodated testing conditions. In contrast, students without disabilities did not score significantly differently across these conditions. Lin and Lin (2016) analyzed literacy test data using an odds ratio approach, and found that the groups of students with disabilities who naturalistically received—in accordance with their IEPs—combinations of certain accommodations performed better than students with disabilities receiving either other accommodations or no accommodations. The bundles included computer administration along with extended time, or specialized setting, or both extended time and specialized setting. Students with learning disabilities benefited most from these three sets of accommodations. Rudzki (2015) found that elementary and middle school students with reading disabilities naturalistically using combinations of extended time, small group administration, and separate setting did not differentially benefit from these accommodation sets; in fact, she noted that none of the students’ scores were at the proficient level. Spiel and colleagues (2016) indicated that in-person oral delivery and small group administration together benefited the mean science score of students with attention-related disabilities, in comparison to the unaccommodated test condition; also, individual student data analyses indicated that only one student with attentional difficulties scored higher without accommodations.
In comparison, individual students without attention-related disabilities had mixed results: about half of them benefited from the accommodated condition, and the other half scored lower with the accommodation set than without it. Further analyses to detect differential accommodation benefit showed that students—both with and without ADHD—who tended to score low in science benefited from this aggregated set more than students with average or above scores did; students with ADHD did not differentially benefit from oral delivery in small groups.
We identified separate reportable findings on the impact of 10 unique accommodations—that is, accommodations that were the focus of just one study during the two years included in this report. These unique accommodations yielded a variety of results. In one study (Joakim, 2015) reporting separate effects for several unique accommodations, no specific accommodation—such as breaks—benefited either grade 5 or grade 8 students with disabilities, and some student groups scored higher in writing without specific accommodations—such as familiar administrator, individual administration, and small group administration—than when using them.
Two other unique accommodations benefited students with disabilities, both in terms of their completing more test items and in answering more test items correctly. Bouck and colleagues (2015) found that middle school students with various disabilities performed better on math computation and word problems when using a graphing calculator accommodation, with a small effect for grade 7 students and a small to moderate effect for grade 8 students. Potter, Lewandowski, and Spenceley (2016) reported that postsecondary students with learning disabilities, attention-related disabilities, or both performed better on reading testing when marking their answers in test booklets than when answering on separate bubble sheets. Another study (Higgins et al., 2016), comparing math performance with and without American Sign Language (ASL) accommodations, showed that students who were deaf scored on average consistently and significantly higher when using the accommodations, at elementary, middle, and high school levels. Closer analyses of student performance on the items comparing differing ASL conditions (such as finger-spelled only vs. finger-spelled and signed) indicated non-significant score differences; that is, the ways that ASL was presented were less impactful. Regarding pacing-only support, Giusto (2015) reported on the comparative impact of the test administrator providing guidance throughout the assessment sections, but without also reading the test aloud. When receiving only pacing support, students with reading-related disabilities scored very similarly to when not receiving accommodations, and students without disabilities scored no differently across all the accommodated and non-accommodated conditions.
In summary, of the 10 unique accommodations, three indicated benefits for at least some students with disabilities, two indicated no benefits for students with disabilities, and three indicated negative impacts for students with disabilities. The remaining studies (Davis et al., 2015; Rosenblum & Herzberg, 2015) on unique accommodations compared versions of accommodated conditions only, not including an unaccommodated condition. Davis and colleagues (2015) indicated that students without disabilities (only) performed no differently when composing writing test answers by typing on an external keyboard with a laptop computer versus a touchscreen keyboard on an electronic tablet. Examining the impact of four different tactile graphics conditions for math and science test items for students with visual impairments, Rosenblum and Herzberg (2015) found complex impacts, noting that some tactile formats better enabled students to answer correctly. For instance, the largest number of middle and high school participants answered correctly when using the microcapsule map, and the fewest answered correctly when seeking information from a collage picture made with hot glue and braille labels; also, an embossed bar graph was confusing enough that at least some students could not answer questions, regardless of the items’ difficulty level.
Figure 5 displays the data for the 27 studies on perceptions about accommodations. More than two-thirds of them (n=19) provided findings about student perceptions only, less than one-quarter (n=6) provided findings about educator perceptions only, and two studies (Couzens et al., 2015; Crosby, 2015) reported on accommodations perceptions from both students and educators.
Figure 5. Accommodations Perceptions (n=27)
In seven of the 19 studies on student perceptions (only), researchers found that students had favorable impressions about speciﬁc accommodations—such as speech recognition tools (Nelson & Reynolds, 2015; Weis et al., 2016), and tactile graphics (Hansen et al., 2016)—or accommodations overall (Kaﬂe, 2015; Ruhkamp, 2015; Timmerman & Mulvihill, 2015; Williams, 2015). Students preferred online testing (70%) over paper-based testing (10%), with 20% of students having no preference (Seo & De Jong, 2015). Six studies yielded students’ preferences related to the features of accommodations (Davis et al., 2015; Eberhart, 2015; Hansen et al., 2016; Higgins et al., 2016; Rosenblum & Herzberg, 2015; Williams, 2015). One of these studies (Rosenblum & Herzberg, 2015) found that nearly all participants indicated that they were not asked by educators for their input on the design of tactile graphics.
In five of the 19 studies, the connections between students’ perceptions and their use of accommodations were reported. Students in Cole and Cawthon’s (2015) study were less likely to disclose their disabilities and seek accommodations when they had more negative views about seeking accommodations and more negative associations with their disabilities. These findings were corroborated by other researchers who found that students were less likely to seek accommodations when they had feelings of guilt and shame around seeking accommodations (Ruhkamp, 2015) and doubts about the quality and usefulness of disability services offices (Lyman et al., 2016). On the other hand, Monagle (2015) found that students were more willing to use accommodations when they had more positive attitudes toward them.
Researchers also reported on students’ reasons for seeking accommodations (Ofiesh, Moniz, & Bisagno, 2015; Ruhkamp, 2015). Two studies uncovered postsecondary students’ views of how staff members might perceive accommodations and students with disabilities. In the study by Yssel, Pak, and Beilke (2016), students reported that faculty members—even those the students perceived to be unfamiliar with certain disabilities—were positive and willing to provide accommodations. In contrast, students in Zambrano’s (2016) study shared that they perceived that faculty members had limited understanding of students with disabilities, which contributed to insufficient institutional communication about accessibility and accommodations information. Last, Lovett and Leja (2015) found that students with more difficulties related to ADHD or executive functioning perceived a significantly greater need for extended time.
Findings from the six studies on educator perceptions (only) pertained to training and preparedness related to accommodations and to educators’ attitudes toward accommodations. First, five of the six studies provided educators’ comments about preparedness (Ajuwon, Meeks, Griffin-Shirley, & Okungu, 2016; DePountis, Pogrund, Griffin-Shirley, & Lan, 2015; Detrick-Grove, 2016; Gallego & Busch, 2015; Sokal, 2016). Educators in these studies reported that they felt that staff needed more access to assistive technology training (Ajuwon et al., 2016; Gallego & Busch, 2015) and that their employers prepared them to provide accommodations for students with disabilities more than their academic training programs did (Detrick-Grove, 2016). Researchers of two studies highlighted educators’ own assistive technology proficiency (Ajuwon et al., 2016; DePountis et al., 2015).
In three of the six studies, researchers examined educators’ attitudes toward accommodations. Two studies indicated that educators had positive attitudes toward accommodations, and that teachers tended to report that low-tech accommodations—such as reading directions and reading test questions out loud—were more beneﬁcial for students than more high-tech options (Detrick-Grove, 2016; DePountis et al., 2015). Professors and disability services personnel in Sokal’s (2016) study had different feelings about accommodations for students with anxiety disorders, demonstrating the tension between accommodating the needs of students and supporting the development of students’ coping skills. In the last study (Lawing, 2015), teachers identiﬁed important classroom-level factors that inﬂuence the identiﬁcation of instructional accommodations, which may apply to assessment accommodations as well. Teachers’ most common answers included students’ present levels of functioning, evidence of successful accommodations, and the subject matter being taught or tested. Lawing (2015) also stated “Teachers with the most positive attitude toward inclusion used a systematic approach to accommodation selection” (p. 175).
Researchers investigated the perceptions of both students and educators in two studies. In the first study (Couzens et al., 2015), student participants reported various degrees of value regarding the writing supports available, with some indicating that these were very valuable and others indicating that they were not. Most students also shared that they had not used disability service supports “because they perceived [the services] to be for students with greater needs” (Couzens et al., 2015, p. 35). University personnel participants in this study commented that many students who could benefit from supportive services had not sought them and that staff would have to encourage students to do so. Staff members also noted that resources were not always available for students to explore potentially helpful assistive technologies with which they were not already familiar. In the second study, Crosby (2015) interviewed postsecondary faculty and students about their institution’s social context and culture regarding inclusion practices and perceptions of disability. The researcher remarked that faculty members demonstrated in their responses some misconceptions about disability, particularly around the likelihood of student success and knowledge of laws and policies about accommodating students. Additionally, about 15% of faculty members reported that they were uncomfortable teaching students with disabilities. The postsecondary students with disabilities in this study indicated that the challenges they faced were influenced by the relevance of their disabilities to their identities, in that self-perceptions of normality or abnormality were associated with their degrees of willingness to disclose their needs for academic assistance. Furthermore, when viewing disability as a negative attribute, students tended to mentally weigh the social costs of having disabilities against the benefits of accessing academic supports. The researcher offered some suggestions for supporting students with disabilities in postsecondary education.
The researchers from 14 studies reported findings related to accommodations use, as well as implementation issues. In twelve studies (Barnhill, 2016; Cawthon et al., 2015; Davis et al., 2015; DePountis et al., 2015; Kim & Lee, 2016; Monagle, 2015; Newman & Madaus, 2015a; Newman & Madaus, 2015b; Ohleyer, 2016; Ricci, 2015; Spenceley & Wheeler, 2016; Weis et al., 2016), researchers described patterns of accommodations use. Of those, four studies (Davis et al., 2015; DePountis et al., 2015; Ohleyer, 2016; Ricci, 2015) reported on accommodations use at the primary and secondary levels; the remaining eight studies pertained to accommodations use at the postsecondary level. Two studies (Lawing, 2015; Peterson, 2016) did not yield use patterns, yet the researchers provided details about implementation processes and difficulties, and offered insights about decision-making; Lawing (2015) investigated these matters in K-12 education, and Peterson (2016) studied implementation in the postsecondary setting. Lawing concluded that teachers with favorable attitudes toward inclusion were more systematic in their approach, and that accommodations selection did not appear to be influenced by students’ future postsecondary activities. Peterson found that postsecondary personnel indicated that students often did not have the opportunity to explore available supports and, consequently, did not seek them out.
Accommodations use patterns were reported for specific accommodations during large-scale assessments in grades 3 through 12. Middle and high school students used keyboards and handwriting on paper for writing tests interchangeably and with similar frequency, and typically did not use tablets with touchscreens at school, more commonly using laptop computers with standard keyboards (Davis et al., 2015). Educators of high school students who were blind reported on many electronic assistive technology devices and software used during math assessments, including various calculators with speech output capabilities, braillewriters, refreshable braille displays, and physical manipulative tools (DePountis et al., 2015). Extant data sets of grade 4 students provided information about the incidence of text-to-speech oral delivery of reading assessments (Ricci, 2015).
Researchers related information about accommodations use patterns at the postsecondary level (Barnhill, 2016; Kim & Lee, 2016; Monagle, 2015; Newman & Madaus, 2015a; Newman & Madaus, 2015b; Spenceley & Wheeler, 2016; Weis et al., 2016). Extended time accommodations use patterns were described in five studies (Barnhill, 2016; Cawthon et al., 2015; Kim & Lee, 2016; Spenceley & Wheeler, 2016; Weis et al., 2016); three of these studies also presented data on use of separate setting (Barnhill, 2016; Kim & Lee, 2016; Weis et al., 2016). Extended time and separate exam site were the most common accommodations reported throughout these studies. For example, Kim and Lee (2016) indicated that 75% of students with disabilities requested extended time, and Weis and colleagues (2016) reported that 27% of students with learning disabilities completed their exams in a separate room. Spenceley and Wheeler (2016) reported that many students with various disabilities requested extended exam time, yet about 55% did not end up using the additional time. They distinguished the disability categories of the students who typically used extended time: students with ADHD, autism, physical, and multiple disabilities. Weis et al. also indicated accommodations use patterns for several other accommodations, including taking breaks during exams (less than 10% of participants) and using technological aids (70% of participants)—including calculator, word processor, spell checker, speech-to-text, text-to-speech, reader, dictionary or thesaurus, and outlining. Four studies previously cited also provided findings detailing use patterns according to students’ disability categories: autism/Asperger Syndrome (Barnhill, 2016); hearing impairments (Cawthon et al., 2015); visual, medical, learning, ADHD, autism, and multiple disabilities (Weis et al., 2016); and all of these disability types (Spenceley & Wheeler, 2016).
Two studies’ findings detailed patterns of unspecified accommodations (Monagle, 2015; Newman & Madaus, 2015a). Monagle (2015) found that various factors were associated with varying numbers of accommodations used: students most likely to use accommodations included those in their second or third year of college, those with multiple disabilities, those with majors in the liberal arts or humanities areas, and those with positive attitudes toward accommodations. Newman and Madaus (2015a, 2015b) reported the incidence of accommodations use among various types of postsecondary programs, as well as differences by disability category. These researchers calculated odds ratios and found that students in two-year and career and technical programs whose transition plans specified accommodations needed in postsecondary education were more likely to receive them. Students with apparent and observable disabilities more commonly received accommodations than students with less-visible disabilities, particularly at two-year and four-year institutions. (See Appendix F for a more detailed explanation of the findings of each study.)
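The odds-ratio comparisons described here (and in studies such as Lin & Lin, 2016) reduce to simple arithmetic on a 2x2 contingency table. As a hedged illustration only—the counts below are hypothetical and do not come from any study cited in this report—the calculation can be sketched as:

```python
# Illustrative sketch of an odds-ratio calculation like those reported
# in the accommodations-use studies. All counts are hypothetical.

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table:
    a = received accommodations, transition plan specified them
    b = did not receive accommodations, plan specified them
    c = received accommodations, plan did not specify them
    d = did not receive accommodations, plan did not specify them
    """
    return (a / b) / (c / d)

# Hypothetical counts for illustration only
ratio = odds_ratio(60, 40, 30, 70)
print(round(ratio, 2))  # prints 3.5: the odds of receiving accommodations
# are 3.5 times higher when the transition plan specified them
```

An odds ratio above 1 indicates that the outcome (here, receiving accommodations) is more likely in the first group; a value near 1 indicates no association.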
The topic of validity was addressed in the findings of four studies (Kettler, 2015; Lane & Leventhal, 2015; Potter et al., 2016; Südkamp et al., 2015). Two studies (Potter et al., 2016; Südkamp et al., 2015) included in their stated research purposes analyses of construct validity for the assessments at the center of their investigations. Potter and colleagues (2016) reported findings about whether the response format—that is, answering items in a test booklet versus on a separate bubble sheet—affected the construct validity of the reading test. These researchers indicated that there was no differential benefit for students with disabilities using this response accommodation, and that students without disabilities also preferred answering in the test booklet. Further, they reported that the scores in the different response formats were significantly different from one another for students both with and without disabilities. Specifically, students with disabilities answering in test booklets scored higher as a group than students without disabilities answering in the bubble sheet format; the researchers concluded that it could not be ruled out that the change in response format affected the test construct. They argued that course exams should allow all students to circle their answers in test booklets. Südkamp and colleagues (2015) reported on comparisons of student scores on two other versions of a reading literacy test, as well as the standard version. They used a differential item functioning analysis and found that low-performing students without disabilities did not evidence any unusual scoring patterns; however, for scores of students with disabilities, they found variances in several items, suggesting construct-irrelevant variance. They concluded that the changes to the test introduced problems in test fairness for all students.
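Differential item functioning (DIF) analyses like the one Südkamp and colleagues applied are commonly implemented with the Mantel-Haenszel procedure, which compares two groups' performance on an item after stratifying examinees by ability (e.g., total score). The sketch below uses the standard Mantel-Haenszel common odds ratio with hypothetical counts; it is not the study's actual procedure or data.

```python
# Minimal Mantel-Haenszel DIF sketch. All counts are hypothetical and
# do not come from Südkamp et al. (2015).

def mantel_haenszel_or(strata):
    """Mantel-Haenszel common odds ratio across score strata.
    Each stratum is (ref_correct, ref_wrong, focal_correct, focal_wrong);
    values far from 1.0 flag potential DIF on the item."""
    num = sum(rc * fw / (rc + rw + fc + fw) for rc, rw, fc, fw in strata)
    den = sum(rw * fc / (rc + rw + fc + fw) for rc, rw, fc, fw in strata)
    return num / den

# Hypothetical item data: two total-score strata
strata = [
    (30, 10, 20, 20),  # low-scoring examinees
    (40, 5, 30, 15),   # high-scoring examinees
]
print(round(mantel_haenszel_or(strata), 2))  # prints 3.4
```

Because the comparison is made within ability strata, a ratio well above (or below) 1 suggests the item behaves differently for the two groups even after controlling for overall performance—the construct-irrelevant variance at issue in these validity analyses.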
Two literature reviews (Kettler, 2015; Lane & Leventhal, 2015) offered ﬁndings that provide insight into the validity of academic assessments as inﬂuenced or not inﬂuenced by accommodations. Examining 30 studies’ ﬁndings for oral delivery, 24 studies for extended time, and 15 for aggregated sets of accommodations, Kettler (2015) highlighted ﬁndings about accommodations’ validity concerns. He noted that four of the 30 studies indicated that oral delivery did not invalidate tested content, including that only a few items on one reading comprehension test were affected by oral delivery. He also mentioned that one of the 15 studies on accommodations bundles analyzed factor structures and indicated that the IEP-developed set of accommodations did not invalidate the ELA test construct, yet cautioned that accommodations within sets might have unexpected interactions with one another. Lane and Leventhal (2015) examined 11 studies for the possibility of accommodations’ differential boost for students with disabilities, and reported evidence from four studies. They discussed construct-irrelevant variance and described the application of differential item functioning and other analysis procedures to ascertain whether accommodations have different effects than intended based on student characteristics. These researchers also discussed studies using designs such as factor analyses to examine internal test structure. (See Appendix F for more detailed explanation of ﬁndings of each study.)
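Differential item functioning (DIF) analyses of the sort these reviews describe are often carried out with the Mantel-Haenszel procedure, which compares item performance for a reference and a focal group within matched ability strata. A minimal sketch with illustrative counts (not drawn from any reviewed study); a common odds ratio near 1 suggests little DIF:

```python
# Mantel-Haenszel DIF sketch (illustrative data, not from any study).
# Each stratum is a 2x2 table of group (reference/focal) by item response
# (correct/incorrect), with students matched on total test score.
strata = [
    # (ref_correct, ref_incorrect, focal_correct, focal_incorrect)
    (30, 10, 20, 20),
    (50, 10, 35, 15),
    (70, 5, 55, 10),
]

# Mantel-Haenszel common odds ratio: ratio of pooled concordant to
# pooled discordant cross-products, each scaled by the stratum total.
num = sum(rc * fi / (rc + ri + fc + fi) for rc, ri, fc, fi in strata)
den = sum(ri * fc / (rc + ri + fc + fi) for rc, ri, fc, fi in strata)
mh_odds_ratio = num / den

print(round(mh_odds_ratio, 2))  # 2.53: the item favors the reference group
```

A value well above (or below) 1, as in this toy example, would flag the item for DIF review.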
Thirty studies reported ﬁndings about accommodations at the postsecondary education level. Researchers reported ﬁndings on students with disabilities’ perceptions, preferences, and experiences using accommodations at the postsecondary level (from 13 studies); educators’ perceptions about accommodations (from 3 studies); accommodation use patterns (from 8 studies); effects of accommodations on test performance (from 4 studies); three studies each reported ﬁndings in two of these areas. Four studies were literature reviews describing accommodations issues in postsecondary education, including one study (Cahan et al., 2016) that also reviewed research at the K-12 level. Of the 26 studies that were not literature reviews, 21 reported ﬁndings involving only student participants, four studies involved only educator participants, and one study (Crosby, 2015) had participants who were students and educators.
Fifteen studies provided ﬁndings on perceptions in postsecondary education; 12 reported only on students’ perceptions, while two reported only on educators’ perceptions. One study (Crosby, 2015) reported on the perceptions of both students and educators. The ﬁndings of the 12 studies (Cole & Cawthon, 2015; Couzens et al., 2015; Kaﬂe, 2015; Lovett & Leja, 2015; Lyman et al., 2016; Monagle, 2015; Nelson & Reynolds, 2015; Oﬁesh et al., 2015; Ruhkamp, 2015; Timmerman & Mulvihill, 2015; Yssel et al., 2016; Zambrano, 2016) provided insights about students’ experiences with and outlooks about accommodations. In several studies, researchers gathered qualitative data through interviews (Cole & Cawthon, 2015; Couzens et al., 2015; Kaﬂe, 2015; Lyman et al., 2016; Nelson & Reynolds, 2015; Oﬁesh et al., 2015; Timmerman & Mulvihill, 2015; Yssel et al., 2016; Zambrano, 2016) or quantitative data through surveys (Lovett & Leja, 2015; Monagle, 2015; Ruhkamp, 2015) that provided insights about students’ access to accommodations. The researchers described various factors, including students’ perspectives about themselves as learners, and these factors’ inﬂuence on students’ decisions to seek or not seek accommodations for use during course exams. Students lacked familiarity with the process of seeking accommodations at postsecondary institutions, including disability services and resources (Cole & Cawthon, 2015; Kaﬂe, 2015; Lyman et al., 2016). Students reported self-consciousness about receiving accommodations due to concerns about the reactions of their peers without disabilities (Cole & Cawthon, 2015; Kaﬂe, 2015; Lyman et al., 2016; Timmerman & Mulvihill, 2015), yet also recognized that their peers without disabilities might lack understanding about their challenges (Zambrano, 2016).
Students were also concerned about whether their professors would be understanding about students’ challenges (Cole & Cawthon, 2015; Kaﬂe, 2015; Lyman et al., 2016; Zambrano, 2016), yet students also indicated that professors were positive and willing to provide accommodations (Yssel et al., 2016). Another factor highlighted by some researchers was students’ development of self-determination and self-advocacy (Cole & Cawthon, 2015; Yssel et al., 2016). Students also faced their own desires for self-sufﬁciency, hoping not to need accommodations at the postsecondary level (Lyman et al., 2016), and some did not seek support due to a sense that they did not need accommodations as much as other students with more challenging disabilities did (Couzens et al., 2015). Some studies described students’ perceptions of speciﬁc accommodations, such as speech recognition tools for writing (Nelson & Reynolds, 2015). Ruhkamp (2015) relayed students’ experiences of beneﬁting from exam accommodations, including gaining a better understanding of exam items and improved performance, as well as increased conﬁdence and comfort and a decreased sense of pressure. Monagle (2015) described the link between students’ perceptions of and attitudes about accommodations and their actual use of accommodations, and reported on demographic and other factors’ associations with accommodations use. Educators’ perceptions were reported in two studies (Gallego & Busch, 2015; Sokal, 2016). Gallego and Busch (2015) described the perceptions of disability services ofﬁce personnel, who primarily valued the supports that they could provide yet also sensed that foreign language program directors and their teaching assistants lacked substantial information about the accommodations available to students with disabilities.
After interviewing professors and disability services ofﬁce personnel, Sokal (2016) described an essential tension between accommodating students’ needs and supporting the development of students’ coping skills, reﬂected in professors’ concerns about fairness and the philosophy about educational access from disability services ofﬁces.
Seven studies (Barnhill, 2016; Kim & Lee, 2016; Monagle, 2015; Newman & Madaus, 2015a; Newman & Madaus, 2015b; Spenceley & Wheeler, 2016; Weis et al., 2016) reported on accommodation use patterns by postsecondary students. One study (Peterson, 2016) yielded details about implementation processes and difﬁculties in several postsecondary institutions, as reported by disability services professionals. The researcher related resource challenges and uneven familiarity with accommodations in general at the postsecondary level, without noting speciﬁc accommodations. All of these studies reported only use or implementation ﬁndings. Three studies reported use patterns for speciﬁc accommodations (Kim & Lee, 2016; Spenceley & Wheeler, 2016; Weis et al., 2016), and all three provided disability category data as well. All three reported on extended time use, two reported on separate setting use (Kim & Lee, 2016; Weis et al., 2016), and one reported on oral delivery use (Kim & Lee, 2016). The other four studies (Barnhill, 2016; Monagle, 2015; Newman & Madaus, 2015a; Newman & Madaus, 2015b) presented ﬁndings in a different manner. Barnhill (2016) indicated that 29 of the 30 postsecondary institutions where participants were enrolled provided extended time and separate setting accommodations for students with autism spectrum-related disabilities, and some combined one or both of these with oral delivery. As mentioned in the “Implementation and Use” ﬁndings, two studies did not provide speciﬁc accommodation details (Monagle, 2015; Newman & Madaus, 2015a).
In ﬁve studies (Dong & Lucas, 2016; Lewandowski et al., 2015; Lovett & Leja, 2015; Miller et al., 2015; Potter et al., 2016), researchers examined the impact of accommodations on student performance. Two studies (Lovett & Leja, 2015; Miller et al., 2015) provided ﬁndings about the impact of extended time. One study (Potter et al., 2016) yielded impact data from participants with learning disabilities, attention-related disabilities, or both conditions, who marked their answers in test booklets versus answering on separate bubble sheets. One study (Lewandowski et al., 2015) compared the performance of students without disabilities testing in a group administration setting with their performance in a small individual testing room. One study (Dong & Lucas, 2016) examined impacts on student performance of various unspeciﬁed academic accommodations. Four of the ﬁve studies (Lewandowski et al., 2015; Lovett & Leja, 2015; Miller et al., 2015; Potter et al., 2016) provided ﬁndings about the impact of accommodations on postsecondary students’ performance on the Nelson-Denny Reading Test (NDRT; Brown, Fishco, & Hanna, 1993) subtests assessing vocabulary and comprehension, while one study (Dong & Lucas, 2016) presented longitudinal data on the impact of accessing or not accessing academic accommodations in general on students’ persistence in postsecondary education, as indicated by meeting a threshold grade point average (GPA).
Four studies (Cahan et al., 2016; Condra et al., 2015; DeLee, 2015; Zeedyk et al., 2016) reviewed research literature that discussed accommodations in postsecondary education. Condra and colleagues (2015) investigated the needs and considerations of postsecondary students with mental health-related disabilities, highlighting a dynamic in the nature of these types of disabilities: the episodic intensity of the expression of impairments. These researchers indicated that some students with disabilities might need ﬂexibility for implementing accommodations retroactively, based on the timing of the onset of mental health episodes of intensive stress. DeLee (2015) described a prominent conceptualization of accommodations focused on student-centeredness and service provision for students with disabilities. The researcher recounted the challenges of accommodations selection and decision-making processes, due in part to the multiple and sometimes conﬂicting sources of information about students’ needs. She characterized changes in considerations for students, with an increase in valuing “assistive reading and listening technologies” (p. 45) and exam accommodations, and decreases in reported needs for such resources as recorded lectures. Zeedyk and colleagues (2016) framed considerations for postsecondary students with disabilities in terms of the transition from secondary to higher education. While they addressed both social and academic needs, the current summary report prioritizes their ﬁndings on the use of academic accommodations: a private testing room and ear plugs for minimizing intense sensory stimuli, as well as extended time for processing delays. Cahan and colleagues (2016) employed meta-analysis to summarize the impact of extended time for students with learning disabilities; most of their ﬁndings applied to students in K-12 education, but they identiﬁed at least one study about postsecondary students (cf. Ranseen & Harris, 2005).
(See Appendix F for more detailed explanation of ﬁndings of each study.)
As in previous reports, we analyzed ﬁndings according to the academic content area that was the focus of each of the studies for which a content area was identiﬁed. We present ﬁndings for each content area according to the frequency with which the content areas were identiﬁed, with the most prevalent content areas presented ﬁrst: 12 studies in reading, 9 studies in mathematics, 5 studies in science, 4 studies in writing, 2 studies in other language arts, and 1 study in social studies (see Figure 6). For each content area, we examined the impact of accommodations on assessment performance, perceptions about accommodations, construct validity of accommodated assessments, and implementation and use of accommodations. (See Appendix F for more detailed explanation of ﬁndings of each study.)
Figure 6. Findings by Content Areas (n=27)
Reading. The ﬁndings of the 12 studies in reading included those from nine studies in reading only (Giusto, 2015; Kim, 2016; Lewandowski et al., 2015; Lovett & Leja, 2015; Miller et al., 2015; Potter et al., 2016; Ricci, 2015; Rudzki, 2015; Südkamp et al., 2015) and those from three studies in reading and math (Cahan et al., 2016; Lin et al., 2016; Williams, 2015). The eleven impact studies analyzed effects on reading performance by relatively few speciﬁc accommodations: four studies (Cahan et al., 2016; Lovett & Leja, 2015; Miller et al., 2015; Südkamp et al., 2015) examined extended time, two studies (Kim, 2016; Ricci, 2015) examined oral delivery, two studies (Lewandowski et al., 2015; Lin et al., 2016) focused on separate setting, and different aggregated sets of accommodations were investigated by two studies (Giusto, 2015; Rudzki, 2015). One study (Potter et al., 2016) inquired about the impact of marking in individual test booklets, in comparison with answering on a separate bubble-type sheet.
Eight of the 11 studies on accommodations effects included a comparison group of students without disabilities. The remaining effects studies included two with only participants with disabilities (Ricci, 2015; Rudzki, 2015), and one with only participants without disabilities (Lewandowski et al., 2015).
Four studies engaged in other areas of investigation about accommodations. In addition to examining effects, Ricci (2015) also reported ﬁndings about oral delivery accommodation use patterns by grade 4 students with various unspeciﬁed disabilities. Lovett and Leja (2015) also reported ﬁndings about the perceptions of postsecondary students with disabilities. Potter and colleagues (2016) also reported on the construct validity issues around a response-related accommodation. Williams (2015) reported only about the perceptions of grade 8 students with disabilities (without examining the effects of accommodations).
Accommodations beneﬁted the reading performance of at least some students with disabilities in two studies (Giusto, 2015; Potter et al., 2016) out of the 11 studies investigating impact on reading. Giusto (2015) found that an aggregated set of accommodations, oral delivery and pacing guidance by the test administrator, beneﬁted students with reading disabilities more than the pacing-only and unaccommodated testing conditions; further, students without disabilities did not score signiﬁcantly differently across these conditions. Potter and colleagues (2016) reported that postsecondary students with learning disabilities and/or attention-related disabilities performed better on reading tests when marking their answers in test booklets than when answering on separate bubble-sheets. They also indicated that this accommodation might affect the reading vocabulary construct.
In contrast, most of the impact studies yielded no particular beneﬁts for students with disabilities, especially in consideration of the accommodations’ impact on the performance of students without disabilities. One meta-analysis (Cahan et al., 2016) investigated the impact of extended time on performance across 17 tests in 11 studies. The authors concluded that the association between students with learning disabilities using this accommodation and improvements in their reading scores was not particularly strong.
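Meta-analyses such as Cahan et al.'s pool effect sizes across studies, typically weighting each study by the inverse of its variance so that more precise studies count more. A minimal fixed-effect sketch with hypothetical effect sizes (not the review's actual data):

```python
# Fixed-effect meta-analysis sketch (inverse-variance weighting).
# Effect sizes and variances below are hypothetical, not Cahan et al.'s data.
effects = [0.30, 0.10, 0.45, 0.20]    # standardized mean differences (d)
variances = [0.04, 0.02, 0.09, 0.05]  # sampling variance of each d

weights = [1 / v for v in variances]          # more precise studies weigh more
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
se = (1 / sum(weights)) ** 0.5                # standard error of the pooled d

print(round(pooled, 3), round(se, 3))  # 0.203 0.097
```

A pooled effect of about 0.20 with this standard error would be a small effect, consistent in spirit with a conclusion that the association is "not particularly strong."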
Kim (2016) found that some participants performed better in reading comprehension when using in-person, rather than recorded, oral delivery, while other participants performed similarly with these two versions of the oral delivery accommodation. All participants performed similarly in retell quality between these accommodation conditions. This study did not have an unaccommodated reading test condition. Lin and colleagues (2016) indicated that students with disabilities had lower reading scores during testing in a separate setting than in the typical classroom setting.
Lewandowski and colleagues (2015) reported that all participants—postsecondary students without disabilities—scored signiﬁcantly better in reading in the standard group setting than in the separate individual setting. Lovett and Leja (2015) found that reading performance was not improved with extended time for postsecondary students with attentional or executive functioning difﬁculties; in fact, students with more intensive impairments showed signiﬁcantly less beneﬁt. However, students with more intensive impairments perceived, more strongly than students with milder attention difﬁculties, that they needed extended time during the reading test. Miller and colleagues (2015) found that both postsecondary students with attentional difﬁculties and students without disabilities performed worst with standard time, better with 150% time, and best with 200% time. In other words, students with disabilities did not beneﬁt differently from students without disabilities when given extended time. Ricci (2015) indicated that students with disabilities using oral delivery of instructions and test items via text-to-speech performed worse in reading comprehension than students with disabilities using other accommodations. The researcher’s detailed provision of data for students with disabilities using various accommodations highlighted the relatively lower incidence of use of text-to-speech. Analyzing an extant data set, Rudzki (2015) reported that students with reading disabilities using an aggregated set of extended time, small group administration, and separate setting accommodations performed similarly in reading, and below proﬁciency, compared with students with disabilities not using this set of accommodations.
Südkamp and colleagues (2015) reported on reading literacy performance on a large-scale test in Germany, comparing data for three different testing conditions: the standard condition, a reduced test with fewer items, and a simpliﬁed (i.e., modiﬁed) test with fewer items that were also only low in difﬁculty. They found that grade 5 students with disabilities completed more items with extended time, yet did not necessarily improve their scores. Using differential item functioning analysis, they found no items of concern for low-performing students without disabilities. However, they concluded that scoring for students with disabilities was not comparable or valid across all three test versions due to variance in item functioning.
Williams (2015) reported that about 40 percent of the grade 8 participants with various disabilities indicated positive feelings, such as conﬁdence and comfort, about taking reading (or math) tests with accommodations, while about 40 percent indicated negative feelings, such as differentiation from peers and inadequacy.
Mathematics. The nine studies with ﬁndings about mathematics included four studies in math only (Bouck et al., 2015; DePountis et al., 2015; Higgins et al., 2016; Weis et al., 2016), three studies in math and reading (Cahan et al., 2016; Lin et al., 2016; Williams, 2015), one study in math and science (Rosenblum & Herzberg, 2015), and one study in math and other language arts (Eberhart, 2015). Five studies provided ﬁndings on the impact of accommodations on math assessment performance (Bouck et al., 2015; Higgins et al., 2016; Lin et al., 2016; Rosenblum & Herzberg, 2015; Eberhart, 2015). Many of these ﬁve studies were experimental or quasi-experimental, investigating more than one accommodation condition. Most had performance data from both students with and without disabilities. Nearly all of the impact studies—except for Lin et al. (2016)—also indicated student perceptions about the accommodations. In addition, three other studies (DePountis et al., 2015; Weis et al., 2016; Williams, 2015) reported perceptions about accommodations, and two studies (DePountis et al., 2015; Weis et al., 2016) provided ﬁndings about accommodation use or practices.
Bouck and colleagues (2015) indicated that students with disabilities completed more test items, and answered more correctly, with a graphing calculator than without one. Further, student participants generally liked calculators and indicated that they helped, although grade 8 students indicated that they did not need calculators for future testing. Cahan and colleagues (2016) used meta-analysis to examine the impact of extended time on 17 tests in 11 studies, concluding that the association between students with learning disabilities using this accommodation and improvements in their math scores was not particularly strong. DePountis and colleagues (2015) reported that educators of high school students with blindness self-reported that their proﬁciency in managing assistive technology was strongest in algebra and relatively strong in geometry. Many assistive technology (AT) tools were identiﬁed as broadly used, and at least 13 AT devices were reported as beneﬁcial for their students, such as audible calculators and electronic refreshable braille notetakers.
Eberhart (2015) found that the overall student population performed better on multiple-choice items on computers than on tablets. She indicated that it was possible that the embedded tools were more difﬁcult to work with on the smaller tablet screen, particularly when answering technology-enhanced items. Also, of the 10 student participants asked for their perceptions, ﬁve preferred the laptop, one preferred the tablet, and four liked both devices equally well; likewise, seven preferred the multiple-choice items, one preferred the technology-enhanced items, and two liked both item types equally. Higgins and colleagues (2016) found that students who were deaf performed better with ASL administration than without it, and tended to prefer the ASL features that were more similar to ASL communication patterns and more familiar to native ASL signers.
Lin and colleagues (2016) indicated that students with disabilities performed lower in a separate setting than in the typical classroom setting. Rosenblum and Herzberg (2015) detailed the effects of, and preferences regarding, the tactile graphics information that students with visual impairments needed for answering math (and science) test items. They noted that items using certain tactile formats tended to be answered more correctly by students with visual impairments. Further, students mostly tended to seek speciﬁc details for answering items, rather than exploring data images ﬁrst. However, students had divergent opinions about some aspects, such as line textures.
Weis and colleagues (2016) reported on accommodations use by postsecondary students with mostly learning-related disabilities, including that about 50 percent of participants used calculators, yet fewer than half of these students actually met criteria for using calculators in the postsecondary setting. Williams (2015) reported that about 40 percent of the grade 8 participants with various disabilities indicated positive feelings, such as conﬁdence and comfort, and about 40 percent indicated negative feelings, such as differentiation from peers and inadequacy, regarding taking math or reading tests with accommodations. All participants indicated that their accommodations affected their assessment performance scores. In sum, the accommodation conditions beneﬁted the math performance of at least some students with disabilities in four studies, and had a negative impact for students with disabilities in one study. Students with disabilities had favorable impressions of math accommodations in two studies, and shared their preferences about math accommodations in two studies.
Science. The ﬁndings of the ﬁve studies in science included those from four studies in science only (Hansen et al., 2016; McMahon et al., 2016; Seo & Hao, 2016; Spiel et al., 2016), and those from one study in science and mathematics (Rosenblum & Herzberg, 2015). These ﬁndings included those pertaining to the usability of certain accommodations, the performance effects of certain accommodations for students with and without disabilities, and scale comparability between non-accommodated and accommodated forms of an assessment.
The two studies that examined usability found that students with disabilities were able to use all accommodations but had the most success with the tactile graphic paper-based static simulations (Hansen et al., 2016), and that, when comparing tactile graphics, students had the most success navigating a microcapsule map (Rosenblum & Herzberg, 2015). Of the two studies that presented ﬁndings related to performance effects, one found that both students with and without disabilities performed better when the science test was delivered via podcast (McMahon et al., 2016). The other found that although students with ADHD performed better when provided with in-person oral delivery and small group accommodations, students without disabilities performed the same on average with or without accommodations (Spiel et al., 2016). Scale comparability ﬁndings reported by Seo and Hao (2016), based on person-ﬁt analysis (PFA), showed that the accommodated and non-accommodated versions of a large-scale state high school science assessment were comparable.
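Seo and Hao's specific person-fit statistic is not detailed here; one simple nonparametric person-fit index in the same family counts Guttman errors, i.e., cases where a harder item is answered correctly while an easier item is missed. A minimal sketch with invented response patterns:

```python
# Nonparametric person-fit sketch: count Guttman errors (illustrative only;
# not the statistic used by Seo & Hao, 2016). Items are ordered easiest to
# hardest; an "error" is a harder item answered correctly while an easier
# item was missed. High counts flag aberrant response patterns of the kind
# person-fit analyses screen for.
def guttman_errors(responses):
    """responses: list of 0/1 scores ordered from easiest to hardest item."""
    errors = 0
    for i in range(len(responses)):
        for j in range(i + 1, len(responses)):
            if responses[i] == 0 and responses[j] == 1:
                errors += 1
    return errors

print(guttman_errors([1, 1, 1, 0, 0]))  # 0: perfectly consistent pattern
print(guttman_errors([0, 1, 0, 1, 1]))  # 5: aberrant pattern
```

Comparable person-fit distributions on accommodated and non-accommodated forms would support the kind of scale comparability conclusion the study reported.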
Writing. Four studies (Davis et al., 2015; Joakim, 2015; Nelson & Reynolds, 2015; Ohleyer, 2016) provided ﬁndings related to writing. Findings pertained to the impacts of, and students’ preferences for, different technologies while writing and the uses and perceptions of students with disabilities on assistive technologies and accommodations. Enlisting general education participants, without specifying ability or disability status, Davis and colleagues (2015) examined the impact of touchscreens and found that students did not vary in writing assessment performance based on whether they used a laptop or tablet with a touchscreen keyboard. In addition, they found that while few students had difﬁculty using touchscreens, high school students were more likely than grade 5 students to prefer physical keyboards over touchscreens for writing compositions.
Ohleyer (2016) observed accommodations use patterns, ﬁnding that of the 7225 students with learning disabilities in grades 4 through 6, 59% used oral script, 12% used extended time, 8% used directions (only) read aloud, 2% used assistive technology, and less than 2% used scribe. She also examined longitudinal performance data, ﬁnding that assistive technology and administrator-read directions most signiﬁcantly improved scaled scores. Further, students who used assistive technology over two consecutive years were more likely to have higher growth scores on state assessments than students who did not use assistive technology or used assistive technology for only one year. Joakim (2015) found that most study participants used presentation, timing, and setting accommodations during writing assessments. Grade 5 students not using accommodations scored higher than those who used accommodations, while grade 8 students using or not using accommodations did not score signiﬁcantly differently.
Nelson and Reynolds (2015) found that postsecondary students thought that speech recognition tools made composition quicker and easier than typing manually, even though some faced initial challenges training the software to accurately recognize their words. Participants who were new speech recognition software users also remarked that using this support reduced the likelihood of their becoming tired early in the task. Two more experienced speech recognition users indicated that they performed editing by using the keyboard rather than by voicing edits to their computers, yet that they otherwise had become adept at organizing their thoughts without pre-planning or using written outlines or notes.
Other Language Arts. Two studies (Eberhart, 2015; Lin & Lin, 2016) examined large-scale data sets of testing performance on academic constructs of language arts. We did not include discussion of these studies’ ﬁndings under reading due to their differing content. Eberhart (2015) examined data from the Smarter Balanced Assessment, which she described as covering English language arts content, including both reading comprehension of literary and informational texts and producing effective and well-grounded writing. Lin and Lin (2016) sampled from the 2012-2013 Ontario Secondary School Literacy Test (OSSLT) data set for analysis. The OSSLT, comprising multiple-choice and constructed-response items, was administered as a requirement for high school graduation. Eberhart analyzed performance data from the general student population, with no known details about the representation of students with disabilities in the population as a whole, while Lin and Lin performed various analyses on samples of only students with disabilities drawn from the larger extant literacy test data set.
Eberhart (2015) found that the overall grade 7 student population scored signiﬁcantly higher on multiple-choice items when using a computer than a tablet in language arts (and math); however, there were no signiﬁcant average score differences for the technology-enhanced items based on computer versus tablet. A small set of student participants were asked for their perceptions and preferences: 50 percent preferred the laptop, 10 percent preferred the tablet, and 40 percent liked both devices equally well; 70 percent preferred the multiple-choice items, 10 percent preferred the technology-enhanced items, and 20 percent liked both item types equally.
Lin and Lin (2016) found that the groups of students with disabilities who were provided certain aggregated sets of accommodations, based on their IEPs, performed better than students with disabilities receiving either no accommodations or other accommodations. The bundles demonstrating the most beneﬁt, compared to other sets, included computer administration along with extended time, or specialized setting, or both extended time and specialized setting. They highlighted that students with learning disabilities demonstrated the most signiﬁcant beneﬁt from these aggregated sets of accommodations. The researchers also analyzed different data adjustment methods for addressing the problem of data sparsity; they reported that the treatment arm correction method was not useful, while the log-linear analysis and adjusted odds ratio method was.
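The report does not detail Lin and Lin's adjustment procedures. One common correction in the same spirit, for sparse 2x2 tables where a zero cell makes the raw odds ratio undefined or infinite, is the Haldane-Anscombe adjustment of adding 0.5 to each cell; this is offered only as an illustrative sketch, with invented counts, not as the study's actual method:

```python
# Adjusted odds ratio sketch (Haldane-Anscombe correction) for a sparse
# 2x2 table. Counts are invented for illustration, not from Lin & Lin (2016).
a, b, c, d = 12, 0, 5, 9   # the zero cell makes the raw OR undefined/infinite

# Adding 0.5 to every cell keeps the odds ratio defined and finite.
adjusted_or = ((a + 0.5) * (d + 0.5)) / ((b + 0.5) * (c + 0.5))

print(round(adjusted_or, 2))  # 43.18
```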
Social Studies. One study (Seo & De Jong, 2015) provided ﬁndings about social studies assessment accommodations. Seo and De Jong (2015) analyzed a large extant data set of social studies assessment performance of students with and without disabilities; they did not compare subgroup performance patterns with one another. They found no signiﬁcant differences in group mean performance, for either grade 6 or grade 9 students, between paper-based testing and tests presented via computer. Differential item functioning analyses indicated that test items functioned similarly in both presentation modes. The researchers asserted that the propensity score matching process they employed yielded more precise data sets for comparison, with more equivalent comparison groups, and advocated this research design as more useful for their purposes. The researchers surveyed a very small number of students about their preferences, and found that students preferred online testing (70%) over paper-based testing (10%), with 20 percent of students having no preference. None of the students indicated having any difﬁculties with the online/computer presentation format.
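Propensity score matching of the kind Seo and De Jong describe pairs each member of one group with the member of the comparison group whose estimated propensity score is closest. A greedy nearest-neighbor sketch over toy, pre-estimated scores (the study's actual model and data are not shown here; scores would normally come from a logistic regression of group membership on covariates):

```python
# Greedy nearest-neighbor propensity score matching sketch (toy data).
# Propensity scores below are assumed pre-estimated and purely illustrative.
treated = {"s1": 0.62, "s2": 0.35, "s3": 0.80}            # e.g., accommodated
control = {"c1": 0.60, "c2": 0.33, "c3": 0.78, "c4": 0.50}  # e.g., not

matches = {}
available = dict(control)
# Match highest-score treated cases first; each control is used at most once.
for t_id, t_score in sorted(treated.items(), key=lambda kv: kv[1], reverse=True):
    c_id = min(available, key=lambda c: abs(available[c] - t_score))
    matches[t_id] = c_id
    del available[c_id]

print(matches)  # each treated case paired to its closest remaining control
```

The matched pairs (rather than the full unbalanced groups) are then compared, which is what produces the "more equivalent comparison groups" the researchers described.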
This report provided a snapshot of accommodations research literature in 2015-2016. It addressed the types of accommodations that were studied, the purposes of the research, the research type, data sources, characteristics of the independent and dependent variables under study, and comparability of ﬁndings between studies in similar domains—including by speciﬁc accommodations and their performance effects, by academic content area, and a separate review of postsecondary accommodations.
As we have found previously, mathematics and reading were the content areas most frequently addressed in the studies included in this analysis, although there was a relative increase in the number of science assessment studies. Students were the participant group in nearly two-thirds of the studies. Students with learning disabilities (LD) were participants in over half of the studies, and were more likely to be included in research samples than other groups. Two other disability categories receiving attention in many studies were “Other Health Impairment” (about one-third of the studies) and emotional/behavioral disabilities (about one-quarter of the studies).
Accommodations research continues to be an active area, and a substantial amount of research is occurring. The number of studies we have located has increased across the span of NCEO’s reports in this area: 49 studies in 2011-2012, 53 in 2013-2014, and now 58 in 2015-2016. Researchers have been exploring a wide range of topics related to accommodations. For instance, we continue to observe an expansion of questions and issues surrounding the shift from paper-and-pencil tests to technology-based assessments.
Similar to previous reports (Cormier et al., 2010; Johnstone et al., 2006; Rogers et al., 2012; Rogers et al., 2014; Rogers et al., 2016; Thompson et al., 2002; Zenisky & Sireci, 2007), the findings for specific accommodations were often mixed. This range of findings was demonstrated particularly well for oral delivery and extended time. Some studies found that oral delivery accommodations supported the performance of students with disabilities (Ohleyer, 2016); others found that oral delivery, specifically when provided by an in-person reader, offered no differential benefit for students with disabilities compared to students without disabilities (McMahon et al., 2016), or that it had negative impacts (Ricci, 2015). Extended time mostly had no apparent influence (Cahan et al., 2016; Joakim, 2015; Ohleyer, 2016).
The findings for specific accommodations were complicated by various factors. In addition to the variety of ways that oral delivery can be offered (in person by a test proctor, via human-voice audio or video recording, or by computer through text-to-speech software), the 2015-2016 research findings also identified other factors that might limit the positive impacts of accommodations. Student age or grade level mattered: participants in earlier grade levels performed better in reading comprehension with in-person oral delivery than with recorded voice, while older participants benefited similarly from both types of oral delivery (Kim, 2016). Different item types showed varying impacts (Eberhart, 2015), and content area was also a complicating factor (Kim, 2016). These findings demonstrate that accommodations’ effects are highly influenced by circumstance, underscoring the importance of individualized assignment of accommodations as intended through the IEP process.
The reauthorization of the Elementary and Secondary Education Act (ESEA) as the Every Student Succeeds Act (ESSA) has given states more flexibility in how they annually assess students on statewide tests for accountability purposes, but there is a continued focus on ensuring that the assessments are accessible to students with disabilities. Relatively recent and continuing issues, including embedded accommodations on computer-based tests, the compatibility of assistive technology with computer platforms, the validity of score inferences, and adaptive testing, will only grow in importance as states and consortia refine their assessment systems. There will continue to be a need for accommodations research that addresses these complexities and other emerging issues.
(References in the report to documents that were part of the 2015-2016 accommodations research analysis are not included in this list. They are in the separate list titled: 2015 and 2016 Accommodation References.)
Altman, J. R., Cormier, D. C., Lazarus, S. S., Thurlow, M. L., Holbrook, M., Byers, M., & Pence, N. (2010). Accommodations: Results of a survey of Alabama special education teachers (Synthesis Report 81). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from https://rtc3.umn.edu/docs/OnlinePubs/Synthesis81/Synthesis81.pdf
Barnard-Brak, L., Davis, T., Tate, A., & Sulak, T. (2009). Attitudes as a predictor of college students requesting accommodations. Journal of Vocational Rehabilitation, 31(3), 189-198. doi:10.3233/JVR-2009-0488
Brown, W. M. (2007). Virginia teachers’ perceptions and knowledge of test accommodations for students with disabilities (Doctoral dissertation). Retrieved from ProQuest Dissertations and Theses database. (UMI No. 3254404)
Brown, J. I., Fishco, V. V., & Hanna, G. (1993). Nelson-Denny Reading Test, Form H. Itasca, IL: Riverside.
Caldwell, B., Cooper, M., Reid, L. G., & Vanderheiden, G. (Eds.). (2008). Web content accessibility guidelines 2.0. Cambridge, MA: World Wide Web Consortium. Retrieved from http://www.w3.org/TR/WCAG20/
Carrow-Woolfolk, E. (2011). Oral and Written Language Scales–2nd edition. Torrance, CA: Western Psychological Services.
Cormier, D. C., Altman, J., Shyyan, V., & Thurlow, M. L. (2010). A summary of the research on the effects of test accommodations: 2007-2008 (Technical Report 56). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://www.cehd.umn.edu/NCEO/onlinepubs/Tech56/default.htm
CTB/McGraw Hill. (2009). Colorado Student Assessment Program: Technical report 2009. Monterey, CA: Author.
Dunn, L. M., & Dunn, D. M. (2007). The Peabody Picture Vocabulary Test: 4th edition (PPVT-4). Minneapolis, MN: Pearson.
Fabiano, G. A., Pelham, W. E., Jr., Waschbusch, D. A., Gnagy, E. M., Lahey, B. B., Chronis, A. M., & Burrows-MacLean, L. (2006). A practical measure of impairment: Psychometric properties of the impairment rating scale in samples of children with attention deficit hyperactivity disorder and two school-based samples. Journal of Clinical Child and Adolescent Psychology, 35, 369-385. doi:10.1207/s15374424jccp3503_3
German National Educational Panel Study (NEPS; n.d.). Bamberg, Germany: Leibniz Institute for Educational Trajectories (LIfBi), University of Bamberg. Accessible at https://www.neps-data.de/en-us/home.aspx
Gillam, R. B., & Pearson, N. (2004). Test of Narrative Language. Austin, TX: Pro-Ed.
Johnstone, C. J., Altman, J., Thurlow, M. L., & Thompson, S. J. (2006). A summary of research on the effects of test accommodations: 2002 through 2004 (Technical Report 45). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://www.cehd.umn.edu/NCEO/OnlinePubs/Tech45/default.html
Kincaid, J. P., Fishburne, R. P., Rogers, R. L., & Chissom, B. S. (1975). Derivation of new readability formulas (Automated Readability Index, Fog Count, and Flesch Reading Ease Formula) for Navy enlisted personnel (Research Branch Report 8-75). Millington, TN: Chief of Naval Technical Training, Naval Air Station Memphis.
Kleinmann, A. E. (2005). Not so fast: Using speed to differentiate high and average readers (Unpublished dissertation). Syracuse, NY: Syracuse University.
MacGinitie, W. H., MacGinitie, R. K., Maria, K., Dreyer, L. G., & Hughes, K. (2000). Gates-MacGinitie Reading Tests: 4th edition. Itasca, IL: Riverside.
National Assessment of Educational Progress (NAEP; n.d.). Accessible at https://nces.ed.gov/nationsreportcard/
Pearson. (2013). South Dakota State Test of Educational Progress (Dakota STEP) (Tech. Rep. Spring 2013 administration). Pierre, SD: Author. Retrieved from http://doe.sd.gov/oats/documents/STEP13All.pdf
Pelham, W. E., Jr., Gnagy, E. M., Greenslade, K. E., & Milich, R. (1992). Teacher ratings of DSM-III-R symptoms for the disruptive behavior disorders. Journal of the American Academy of Child & Adolescent Psychiatry, 31, 210–218. doi:10.1097/00004583-199203000-00006
Ranseen, J. D., & Parks, G. S. (2005). Test accommodations for postsecondary students: The quandary resulting from the ADA’s disability deﬁnition. Psychology, Public Policy, and Law, 11(1), 83-108. doi:10.1037/1076-89126.96.36.199
Rogers, C. M., Christian, E. M., & Thurlow, M. L. (2012). A summary of the research on the effects of test accommodations: 2009-2010 (Technical Report 65). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://www.cehd.umn.edu/nceo/OnlinePubs/Tech65/TechnicalReport65.pdf
Rogers, C. M., Lazarus, S. S., & Thurlow, M. L. (2014). A summary of the research on the effects of test accommodations, 2011-2012 (Synthesis Report 94). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://www.cehd.umn.edu/nceo/OnlinePubs/Synthesis94/Synthesis94.pdf
Rogers, C. M., Lazarus, S. S., & Thurlow, M. L. (2016). A summary of the research on the effects of test accommodations: 2013-2014 (NCEO Report 402). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from https://nceo.info/Resources/publications/OnlinePubs/Report402/default.htm
Roth, R. M., Isquith, P. K., & Gioia, G. A. (2005). Behavior Rating Inventory of Executive Function—Adult version. Lutz, FL: Psychological Assessment Resources.
Semel, E., Wiig, E. H., & Secord, W. A. (2003). Clinical Evaluation of Language Fundamentals–4th edition. San Antonio, TX: Pearson.
Sheldon, K., & Deci, E. (1993). The self-determination scale. Unpublished manuscript, University of Rochester, Rochester, NY.
Thompson, S., Blount, A., & Thurlow, M. (2002). A summary of research on the effects of test accommodations: 1999 through 2001 (Technical Report 34). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://www.cehd.umn.edu/NCEO/OnlinePubs/Technical34.htm
Valdes, K., Godard, P., Williamson, C., Van Campen, J., McCracken, M., & Jones, R. (2013). National Longitudinal Transition Study–2 (NLTS2) Waves 1, 2, 3, 4, & 5 data documentation and dictionary. Menlo Park, CA: SRI International. Retrieved from http://www.nlts2.org/data_dictionary/data_documentation_dictionary.html
Wechsler, D. (1997). Wechsler Adult Intelligence Scale: 3rd edition. San Antonio, TX: The Psychological Corporation.
Wechsler, D. (2001). Wechsler Individual Achievement Test: 2nd edition (WIAT-II). San Antonio, TX: The Psychological Corporation.
Wechsler, D. (2003). Wechsler Intelligence Scale for Children (WISC-IV). San Antonio, TX: The Psychological Corporation.
Wechsler, D. (2009). Wechsler Individual Achievement Test–3rd Edition (WIAT-III). London, England: The Psychological Corporation.
Wechsler, D. (2011). Wechsler Abbreviated Scale of Intelligence–2nd Edition manual. Bloomington, MN: Pearson.
Wehmeyer, M. (2000). The Arc’s Self-Determination Scale: Procedural guidelines (Rev. ed.). Silver Spring, MD: Arc of the United States.
Weller, E. B., Weller, R. A., Fristad, M. A., Rooney, M. T., & Schecter, J. (2000). Children’s Interview for Psychiatric Syndromes (ChIPS). Journal of the American Academy of Child & Adolescent Psychiatry, 39, 76-84. doi:10.1097/00004583-200001000-00019
Woodcock, R. W. (1987). Woodcock Reading Mastery Tests–Revised. New York, NY: Pearson.
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001a). Woodcock-Johnson III Tests of Achievement. Itasca, IL: Riverside.
Woodcock, R. W., McGrew, K. S., & Mather, N. (2001b). Woodcock-Johnson III Tests of Cognitive Abilities. Itasca, IL: Riverside.
Zenisky, A. L., & Sireci, S. G. (2007). A summary of the research on the effects of test accommodations: 2005-2006 (Technical Report 47). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://www.cehd.umn.edu/NCEO/OnlinePubs/Tech47/TechReport47.pdf
Barnett, J. E. H., & Gay, C. (2015). Accommodating students with epilepsy or seizure disorders: Effective strategies for teachers. Physical Disabilities: Education and Related Services, 34(1), 1-13. doi:10.14434/pders.v34i1.13258
Bouck, E. C., Bouck, M. K., & Hunley, M. (2015). The calculator effect: Understanding the impact of calculators as accommodations for secondary students with disabilities. Journal of Special Education Technology, 30(2), 77-88. doi:10.1177/0162643415617371
Cawthon, S. W., Leppo, R., Ge, J. J., & Bond, M. (2015). Accommodations use patterns in high school and postsecondary settings for students who are d/Deaf or hard of hearing. American Annals of the Deaf, 160(1) 9-23. doi:10.1353/aad.2015.0012
Cole, E. V., & Cawthon, S. W. (2015). Self-disclosure decisions of university students with learning disabilities. Journal of Postsecondary Education and Disability, 28(2), 163-179. Retrieved from https://www.ahead.org/publications/jped
Condra, M., Dineen, M., Gauthier, S., Gills, H., Jack-Davies, A., & Condra, E. (2015). Academic accommodations for postsecondary students with mental health disabilities in Ontario, Canada: A review of the literature and reﬂections on emerging issues. Journal of Postsecondary Education and Disability, 28(3), 277-291. Retrieved from https://www.ahead.org/publications/jped
Couzens, D., Poed, S., Kataoka, M., Brandon, A., Hartley, J., & Keen, D. (2015). Support for students with hidden disabilities in universities: A case study. International Journal of Disability, Development and Education, 62(1), 24-41. doi:10.1080/1034912X.2014.984592
Crosby, S. (2015). Barriers to access: The experience of students delaying the request for accommodations at an open-access college. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 77/09(E).
Davis, L. L., Orr, A., Kong, X., & Lin, C.-H. (2015). Assessing student writing on tablets. Educational Assessment, 20(3), 180-198. doi:10.1080/10627197.2015.1061426
DeLee, B. (2015). Academic support services for college students with disabilities. Journal of Applied Learning Technology, 5(3), 39-49. Retrieved from https://www.journalguide.com/journals/journal-of-applied-learning-technology
DePountis, V. M., Pogrund, R. L., Grifﬁn-Shirley, N., & Lan, W. Y. (2015). Technologies that facilitate the study of advanced mathematics by students who are blind: Teachers’ perspectives. International Journal of Special Education, 30(2), 131-144. Retrieved from http://www.internationaljournalofspecialed.com/
Eberhart, T. (2015). A comparison of multiple-choice and technology-enhanced item types administered on computer versus iPad. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 77/06(E).
Gallego, M., & Busch, C. (2015). Towards the inclusion of students with disabilities: Accessibility in language courses. Innovative Higher Education, 40(5), 387-398. doi:10.1007/s10755-015-9321-z
Giusto, M. (2015). Effectiveness of a partial read-aloud test accommodation to assess reading comprehension in students with a reading disability. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 77/02(E).
Joakim, S. E. (2015). Help me fail: A study on testing accommodations for students with disabilities in writing assessments. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 77/03(E).
Kaﬂe, E. A. (2015). Nursing students with learning disabilities: Perceptions and attitudes regarding the role of disability support program services and access to accommodations. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 76/08(E).
Kettler, R. J. (2015). Adaptations and access to assessment of Common Core content. Review of Research in Education, 39(1), 295-330. doi:10.3102/0091732X14556075
Lane, S., & Leventhal, B. (2015). Psychometric challenges in assessing English language learners and students with disabilities. Review of Research in Education, 39(1), 165-214. doi:10.3102/0091732X14556073
Lawing, R. W. (2015). Examining the relationship between teacher attitude and expectations and selection of accommodations. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 76/09(E).
Lewandowski, L., Wood, W., & Lambert, T. (2015). Private room as a test accommodation. Assessment & Evaluation in Higher Education, 40(2), 279-285. doi:10.1080/02602938.2014.911243
Lovett, B. J., & Leja, A. M. (2015). ADHD symptoms and beneﬁt from extended time testing accommodations. Journal of Attention Disorders, 19(2), 167-172. doi:10.1177/1087054713510560
Miller, L. A., Lewandowski, L. J., & Antshel, K. M. (2015). Effects of extended time for college students with and without ADHD. Journal of Attention Disorders, 19(8), 678-686. doi:10.1177/1087054713483308
Monagle, K. (2015). Beyond access: An examination of factors that inﬂuence use of accommodations by college students with disabilities. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 77/08(E).
Nelson, L. M., & Reynolds, T. W. (2015). Speech recognition, disability, and college composition. Journal of Postsecondary Education and Disability, 28(2), 181-197. Retrieved from https://www.ahead.org/publications/jped
Newman, L. A., & Madaus, J. W. (2015a). An analysis of factors related to receipt of accommodations and services by postsecondary students with disabilities. Remedial and Special Education, 36(4), 208-219. doi:10.1177/0741932515572912
Newman, L. A., & Madaus, J. W. (2015b). Reported accommodations and supports provided to secondary and postsecondary students with disabilities: National perspective. Career Development and Transition for Exceptional Individuals, 38(3), 173-181. doi:10.1177/2165143413518235
Oﬁesh, N., Moniz, E., & Bisagno, J. (2015). Voices of university students with ADHD about test-taking: Behaviors, needs, and strategies. Journal of Postsecondary Education and Disability, 28(1), 109-114. Retrieved from https://www.ahead.org/publications/jped
Ricci, N. N. (2015). The effect of the read-aloud testing accommodation on the 2011 fourth-grade National Assessment of Educational Progress in Reading, for students with disabilities in the New York State metropolitan, tri-state area. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 77/01(E).
Rosenblum, L. P., & Herzberg, T. S. (2015). Braille and tactile graphics: Youths with visual impairments share their experiences. Journal of Visual Impairment & Blindness, 109(3), 173-184.
Rudzki, D. R. (2015). The extent of programs, services, and attendance for students with reading disabilities and performance on high-stakes reading assessments. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 77/09(E).
Ruhkamp, R. (2015). Lived experiences of undergraduate and graduate students utilizing accommodations. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 76/11(E).
Seo, D. G., & De Jong, G. (2015). Comparability of online and paper-based tests in a statewide assessment program: Using propensity score matching. Journal of Educational Computing Research, 52(1), 88-113. doi:10.1177/0735633114568856
Südkamp, A., Pohl, S., & Weinert, S. (2015). Competence assessment of students with special educational needs—Identification of appropriate testing accommodations. Frontline Learning Research, 3(2), 1-26. doi:10.14786/flr.v3i2.130
Timmerman, L. C., & Mulvihill, T. M. (2015). Accommodations in the college setting: The perspectives of students living with disabilities. The Qualitative Report, 20(10), 1609-1625.
Williams, A. D. (2015). Middle school students’ experience of receiving test accommodations. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 76/12(E).
Ajuwon, P. M., Meeks, M. K., Grifﬁn-Shirley, N., & Okungu, P. A. (2016). Reﬂections of teachers of visually impaired students on their assistive technology competencies. Journal of Visual Impairment & Blindness, 110(2), 128-134.
Barnhill, G. P. (2016). Supporting students with Asperger Syndrome on college campuses: Current practices. Focus on Autism and Other Developmental Disabilities, 31(1), 3-15. doi:10.1177/1088357614523121
Cahan, S., Nirel, R., & Alkoby, M. (2016). The extra-examination time granting policy: A reconceptualization. Journal of Psychoeducational Assessment, 34(5), 461-472. doi:10.1177/0734282915616537
Detrick-Grove, T. L. (2016). Teachers’ perceptions and knowledge of accommodations for students with disabilities. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 78/05(E).
Dong, S., & Lucas, M. S. (2016). An analysis of disability, academic performance, and seeking support in one university setting. Career Development and Transition for Exceptional Individuals, 39(1), 47-56. doi:10.1177/2165143413475658
Hansen, E. G., Liu, L., Rogat, A., & Hakkinen, M. T. (2016). Designing innovative science assessments that are accessible for students who are blind. Journal of Blindness Innovation and Research, 6(1). doi:10.5241/6-91
Higgins, J. A., Famularo, L., Cawthon, S. W., Kurz, C. A., Reis, J. E., & Moers, L. M. (2016). Development of American Sign Language guidelines for K-12 academic assessments. Journal of Deaf Studies and Deaf Education, 21(4), 383-393. doi:10.1093/deafed/enw051
Kim, W. H., & Lee, J. (2016). The effect of accommodation on academic performance of college students with disabilities. Rehabilitation Counseling Bulletin, 60(1), 40-50. doi:10.1177/0034355215605259
Kim, Y. S. G. (2016). Do live versus audio-recorded narrative stimuli inﬂuence young children’s narrative comprehension and retell quality? Language, Speech, and Hearing Services in Schools, 47(1), 77-86. doi:10.1044/2015_LSHSS-15-0027
Lin, P.-Y., Childs, R. A., & Lin, Y.-C. (2016). Untangling complex effects of disabilities and accommodations within a multilevel IRT framework. Quality & Quantity, 50(6), 2767-2788. doi:10.1007/s11135-015-0288-8
Lin, P.-Y., & Lin, Y.-C. (2016). Examining accommodation effects for equity by overcoming a methodological challenge of sparse data. Research in Developmental Disabilities, 51-52, 10-22. doi:10.1016/j.ridd.2015.12.012
Lyman, M., Beecher, M. E., Griner, D., Brooks, M., Call, J., & Jackson, A. (2016). What keeps students with disabilities from using accommodations in postsecondary education? A qualitative review. Journal of Postsecondary Education and Disability, 29(2), 123-140. Retrieved from https://www.ahead.org/publications/jped
McMahon, D., Wright, R., Cihak, D. F., Moore, T. C., & Lamb, R. (2016). Podcasts on mobile devices as a read-aloud testing accommodation in middle school science assessment. Journal of Science Education and Technology, 25(2), 263-273. doi:10.1007/s10956-015-9591-3
Ohleyer, A. A. (2016). Elementary tech: Assistive technology, speciﬁc learning disability, and state standardized testing. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 77/08(E).
Peterson, M. C. (2016). Assistive technology management by disabilities services managers in higher education: A phenomenological study. Dissertation Abstracts International: Section B. Sciences and Engineering, 77/12(E).
Potter, K., Lewandowski, L., & Spenceley, L. (2016). The inﬂuence of a response format test accommodation for college students with and without disabilities. Assessment & Evaluation in Higher Education, 41(7), 996-1007. doi:10.1080/02602938.2015.1052368
Seo, D. G., & Hao, S. (2016). Scale comparability between nonaccommodated and accommodated forms of a statewide high school assessment: Using I person-fit. Journal of Psychoeducational Assessment, 34(3), 230-243. doi:10.1177/0734282915596126
Sokal, L. (2016). Five windows and a locked door: University accommodation responses to students with Anxiety Disorders. The Canadian Journal for the Scholarship of Teaching and Learning, 7(1). doi:10.5206/cjsotl-rcacea.2016.1.10
Spenceley, L. M., & Wheeler, S. (2016). The use of extended time by college students with disabilities. Journal of Postsecondary Education and Disability, 29(2), 141-150. Retrieved from https://www.ahead.org/publications/jped
Spiel, C. F., Mixon, C. S., Holdaway, A. S., Evans, S. W., Harrison, J. R., Zoromski, A. K., & Yost, J. S. (2016). Is reading tests aloud an accommodation for youth with or at risk for ADHD? Remedial and Special Education, 37(2), 101-112. doi:10.1177/0741932515619929
Weis, R., Dean, E. L., & Osborne, K. J. (2016). Accommodation decision making for postsecondary students with learning disabilities: Individually tailored or one size ﬁts all? Journal of Learning Disabilities, 49(5), 484-498. doi:10.1177/0022219414559648
Yssel, N., Pak, N., & Beilke, J. (2016). A door must be opened: Perceptions of students with disabilities in higher education. International Journal of Disability, Development and Education, 63(3), 384-394. doi:10.1080/1034912X.2015.1123232
Zambrano, A. (2016). The experience of student with disabilities in higher education. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 77/11(E).
Zeedyk, S. M., Tipton, L. A., & Blacher, J. (2016). Educational supports for high functioning youth with ASD: The postsecondary pathway to college. Focus on Autism and Other Developmental Disabilities, 31(1), 37-48. doi:10.1177/1088357614525435
|Author(s)||Stated Research Purposes||Purpose Category Identifier|
|Ajuwon, Meeks, Griffin-Shirley, & Okungu (2016)||Report about the self-perceived knowledge of educators about assistive technology supports for students with blindness and visual impairments; also, examine related issues.||P||X|
|Barnett & Gay (2015)||Summarize research literature about providing accommodations to students with epilepsy or seizure disorders; also, examine related issues.||P||X|
|Barnhill (2016)||Report on educators’ practices implementing accommodations for postsecondary students with autism-related disabilities.||P|
|Bouck, Bouck, & Hunley (2015)||Investigate the effects of graphing calculator accommodations on test performance of middle school students with various disabilities; also, inquire about students’ perceptions of this accommodation’s possible benefits.||X||P|
|Cahan, Nirel, & Alkoby (2016)||Investigate a selection of empirical studies for the effects of extended time accommodations on test performance of students with learning disabilities, including comparison between students with and without disabilities; also, summarize research literature and this study’s findings in context; finally, examine related issues.||X||P||X|
|Cawthon, Leppo, Ge, & Bond (2015)||Report on the use of various accommodations by students with deafness or hearing impairments, through analyzing a large self-report extant data set from the second National Longitudinal Transition Study (NLTS-2).||P|
|Cole & Cawthon (2015)||Investigate the connections between self-disclosure and attitudes about accommodations for postsecondary students with learning disabilities; also, examine related issues.||P||X|
|Condra, Dineen, Gauthier, Gills, Jack-Davies, & Condra (2015)||Summarize research literature about the implementation of accommodations in the postsecondary education setting for students with mental health disabilities; also, examine related issues.||P||X|
|Couzens, Poed, Kataoka, Brandon, Hartley, & Keen (2015)||Inquire about postsecondary educators’ perspectives regarding various accommodations, and about the perceptions of students with non-visible disabilities about various accommodations; also, examine related issues.||P||X|
|Crosby (2015)||Inquire about the perceptions of postsecondary students with disabilities and faculty members about various accommodations; also, summarize research literature and this study’s findings in context; finally, examine related issues.||P||X||X|
|Davis, Orr, Kong, & Lin (2015)||Investigate the effects of using tablet devices for assessment delivery on the writing test performance of students without disabilities; also, report on accommodations use patterns; finally, inquire about students’ perceptions of tablet delivery of testing.||X||P||X|
|DeLee (2015)||Summarize research literature about the implementation of accommodations in the postsecondary education setting for students with disabilities; also, examine related issues.||P||X|
|DePountis, Pogrund, Griffin-Shirley, & Lan (2015)||Inquire about the perspectives and use of various assistive technology supports by math educators of students with blindness.||P||X|
|Detrick-Grove (2016)||Report about the self-perceived knowledge of educators about various accommodations; also, summarize research literature about and this study’s findings in context.||P||X|
|Dong & Lucas (2016)||Investigate the effects of various accommodations on course performance of postsecondary students with various disabilities, including comparison between students who reported disabilities and received accommodations and students who did not self-disclose about disabilities and did not receive accommodations; also, examine related issues.||P||X|
|Eberhart (2015)||Investigate the effects of different platforms (computer versus electronic notepad) on test performance of students with no reported disabilities; also, report on participants’ examination experiences; finally, summarize research literature and this study’s findings in context.||X||X||P|
|Gallego & Busch (2015)||Inquire about the perspectives of postsecondary educators regarding accommodations in language courses; also, examine related issues.||P||X|
|Giusto (2015)||Investigate the effects of in-person read aloud accommodation on test performance of students with reading disabilities, including comparing to performance of students without disabilities; also, summarize research literature and this study’s findings in context.||X||P|
|Hansen, Liu, Rogat, & Hakkinen (2016)||Develop an accessible science assessment task for students with blindness; also, report on students’ testing experience including usability issues; report on perceptions of students with blindness about accommodations during science assessment including usability issues; finally, examine related issues.||X||X||P|
|Higgins, Famularo, Cawthon, Kurz, Reis, & Moers (2016)||Investigate the effects of sign language interpreting accommodations on test performance of students with deafness and hearing impairments; also, inquire about students’ perceptions of sign language accommodations.||X||P|
|Joakim (2015)||Examining a large extant data set, investigate the effects of several specific accommodations on writing test performance of students with several specific categories of disabilities; also, summarize research literature and this study’s findings in context.||X||P|
|Kafle (2015)||Inquire about the perceptions of postsecondary students with learning disabilities about accommodations including those provided during course exams; also, summarize research literature and this study’s findings in context.||P||X|
|Kettler (2015)||Summarize research literature about Common Core content assessment accessibility and accommodations; also, examine related issues.||P||X|
|Kim & Lee (2016)||Report on postsecondary students’ use of various accommodations, and their relationships with course grades and persistence in higher education.||P|
|Kim (2016)||Investigate the effects of recorded and in-person oral administration accommodations on test performance of students with and without disabilities, but make no direct comparison of performance of students with disabilities and students without disabilities.||P|
|Lane & Leventhal (2015)||Summarize research literature about the impact of accommodations and modifications on assessment performance for students with disabilities; also, examine related issues.||P||X|
|Lawing (2015)||Inquire about educators’ perspectives on and use of various accommodations, and the relationships between attitudes and accommodations use; also, summarize research literature and this study’s findings in context.||P||X||X|
|Lewandowski, Wood, & Lambert (2015)||Investigate the effects of the separate quiet room accommodation on reading comprehension test performance of postsecondary students without disabilities.||P|
|Lin, Childs, & Lin (2016)||Investigate the effects of setting accommodations on the test performance of students with learning disabilities, including comparison between students with and without disabilities; also, analyze item-level functioning to discern the viability of these procedures for examining accommodations’ effects for participant groups.||P||X|
|Lin & Lin (2016)||Investigate the effects of a bundled set of accommodations on the literacy test performance of students with various disabilities, including learning disabilities and mental health disabilities; also, examine related issues.||P||X||X|
|Lovett & Leja (2015)||Investigate the effects of extended time accommodations on test performance of postsecondary students with attention-related disabilities, including comparison between students with and without disabilities; also, inquire about the perceptions of postsecondary students with disabilities regarding extended time accommodations; also, analyze score patterns and attentional difficulties to discern the need for accommodations.||X||P||X|
|Lyman, Beecher, Griner, Brooks, Call, & Jackson (2016)||Inquire about the perceptions of postsecondary students with various disabilities regarding various accommodations.||P|
|McMahon, Wright, Cihak, Moore, & Lamb (2016)||Investigate the effects of the oral administration accommodation via audio podcast under administrator or student control on reading test performance of students with reading difficulties, including comparison between students with and without reading disabilities.||P|
|Miller, Lewandowski, & Antshel (2015)||Investigate the effects of the extended time accommodations on test performance of students with attention-related disabilities, including comparison between students with and without disabilities.||P|
|Monagle (2015)||Explore the association of various factors with the accommodations use patterns of postsecondary students with disabilities; also, inquire about these students’ perceptions of accommodations; finally, summarize research literature and this study’s findings in context.||X||X||P|
|Nelson & Reynolds (2015)||Inquire about the perceptions of postsecondary students with learning disabilities regarding dictated response accommodations for college composition/writing skills testing; also, summarize research literature and this study’s findings in context; finally, examine related issues.||P||X||X|
|Newman & Madaus (2015a)||Examining a large extant data set, report on patterns of accommodations use, focusing on exam-related accommodations, by postsecondary students with disabilities.||P|
|Newman & Madaus (2015b)||Examining a nationally-representative sample from a large extant data set, report on patterns of accommodations use, focusing on exam-related accommodations, by postsecondary students with disabilities who self-disclosed and did not self-disclose their disabilities.||P|
|Ofiesh, Moniz, & Bisagno (2015)||Inquire about the perceptions of postsecondary students with attention-related disabilities regarding course exam accommodations.||P|
|Ohleyer (2016)||Investigate the effects of assistive technology accommodations on test performance of students with learning disabilities, including comparison with effects of other types of accommodations; also, examine the use patterns of assistive technology longitudinally; finally, summarize research literature and this study’s findings in context.||X||P||X|
|Peterson (2016)||Report on postsecondary educators’ practices implementing assistive technology accommodations, particularly for course exams; also, summarize research literature and this study’s findings in context.||X||P|
|Potter, Lewandowski, & Spenceley (2016)||Investigate the effects of responding in the test booklet (compared to using a bubble sheet for multiple-choice responses) on reading test performance of postsecondary students with learning and/or attention-related disabilities, including comparison between students with and without disabilities; also, discern whether the response format affected the construct validity of the test.||P||X|
|Ricci (2015)||Examining a large extant data set, investigate the effects of oral administration accommodations on reading test performance of students with disabilities, in comparison to when these students did not use these accommodations; also, report on patterns of use of oral administration accommodations; finally, summarize research literature and this study’s findings in context.||X||P||X|
|Rosenblum & Herzberg (2015)||Investigate the effects of tactile graphics accommodations on the math and science test performance of students with visual impairments; also, inquire about the perceptions of students with visual disabilities regarding different versions of tactile graphics accommodations.||X||P|
|Rudzki (2015)||Investigate the relationships, for students with reading disabilities, of various factors including attendance, type and extent of special education services, and use of one or more accommodations, to reading test scores; also, summarize research literature and this study’s findings in context.||X||P|
|Ruhkamp (2015)||Inquire about the perceptions of postsecondary students with disabilities regarding various accommodations; also, summarize research literature and this study’s findings in context.||P||X|
|Seo & De Jong (2015)||Investigate the effects of testing format (online versus paper-based) on test performance of students with disabilities, including performance comparisons between students with and without disabilities; also, analyze item-level and test-level functioning between the testing formats to confirm the validity of the test construct.||P||X|
|Seo & Hao (2016)||Investigate the effects of accommodations on test performance of students with disabilities; also, analyze item-level and test-level functioning between the testing formats to confirm the fairness of the test accommodations; finally, develop an accommodated form of a science assessment.||P||X||X|
|Sokal (2016)||Inquire about the perspectives of postsecondary educators about various accommodations provided to students with mental health disabilities.||P|
|Spenceley & Wheeler (2016)||Report on patterns of use of extended time accommodations by postsecondary students with various disabilities.||P|
|Spiel, Mixon, Holdaway, Evans, Harrison, Zoromski, & Yost (2016)||Investigate the effects of the in-person oral administration accommodation on test performance of students with attention-related disabilities, including comparison between students with and without disabilities.||P|
|Südkamp, Pohl, & Weinert (2015)||Investigate the effects of various accommodations on test performance of students with disabilities, including comparison to the performance of average students and of students with lower reading skills but without disabilities; also, analyze item functioning between testing conditions for test construct validity.||P||X||X|
|Timmerman & Mulvihill (2015)||Inquire about the perceptions of postsecondary students with disabilities regarding various exam-related accommodations.||P|
|Weis, Dean, & Osborne (2016)||Examining extant data, report on patterns of use of various accommodations and modifications by postsecondary students with learning disabilities; also, analyze score patterns to discern the need for accommodations, and associations with educators’ recommendations for accommodations.||P||X|
|Williams (2015)||Inquire about the perceptions of students with disabilities, including learning and intellectual disabilities, regarding various accommodations; also, summarize research literature and this study’s findings in context; finally, examine related issues.||P||X||X|
|Yssel, Pak, & Beilke (2016)||Inquire about the perceptions of postsecondary students with disabilities regarding various accommodations; also, summarize research literature and this study’s findings in context.||P||X|
|Zambrano (2016)||Inquire about the perceptions of postsecondary students with disabilities regarding accommodations; also, summarize research literature and this study’s findings in context.||P||X|
|Zeedyk, Tipton, & Blacher (2016)||Summarize research literature about various accommodations for postsecondary students with autism-related disabilities; also, examine related issues.||P||X|
Note. A-Perceptions = Study/compare perceptions and preferences about use. B-Reviews = Summarize research on test accommodations. C-Effects [both] = Compare effects of accommodations on assessment scores [both students with and without disabilities]. C-Effects [non] = Compare effects of accommodations on assessment scores [only students without disabilities]. C-Effects [SwD] = Compare effects of accommodations on assessment scores [only students with disabilities]. D-Issues = Discuss issues. E-Implement/Use = Report on implementation practices and accommodations use. F-Items = Compare test items. G-Accomm. Need = Identify predictors of the need for test accommodations. H-Develop = Develop test. I-Test = Test structure. J-Validity = Investigate test validity.
P = Primary Purpose. X = Other Purpose.
|Authors||Publication Type||Research Type||Research Design||Data Collection Source||Collection Instrument|
|Ajuwon et al. (2016)||Journal||Qualitative||Descriptive Qualitative||Primary||Survey|
|Barnett & Gay (2015)||Journal||Expository/Opinion||Descriptive Qualitative||Secondary||Articles|
|Barnhill (2016)||Journal||Quantitative||Descriptive Quantitative||Primary||Interview Protocol, Survey|
|Bouck et al. (2015)||Journal||Mixed||Descriptive Quantitative||Primary||Survey, Test|
|Cahan et al. (2016)||Journal||Quantitative||Meta-analysis||Secondary||Articles|
|Cawthon et al. (2015)||Journal||Quantitative||Correlation/Prediction||Secondary||Interview Protocol, Survey|
|Cole & Cawthon (2015)||Journal||Mixed||Descriptive Quantitative||Primary||Interview Protocol, Survey|
|Condra et al. (2015)||Journal||Qualitative||Descriptive Qualitative||Secondary||Articles|
|Couzens et al. (2015)||Journal||Qualitative||Descriptive Qualitative||Primary||Interview Protocol|
|Crosby (2015)||Dissertation||Qualitative||Descriptive Qualitative||Primary||Interview Protocol, Survey|
|Davis et al. (2015)||Journal||Mixed||Quasi-Experimental||Primary||Survey, Test|
|DeLee (2015)||Journal||Qualitative||Descriptive Qualitative||Secondary||Articles|
|DePountis et al. (2015)||Journal||Quantitative||Descriptive Quantitative||Primary||Survey|
|Detrick-Grove (2016)||Dissertation||Quantitative||Descriptive Quantitative||Primary||Survey|
|Dong & Lucas (2016)||Journal||Quantitative||Longitudinal||Primary||Grades|
|Eberhart (2015)||Dissertation||Mixed||Quasi-Experimental||Secondary||Survey, Test|
|Gallego & Busch (2015)||Journal||Quantitative||Descriptive Quantitative||Primary||Survey|
|Hansen et al. (2016)||Journal||Qualitative||Descriptive Qualitative||Primary||Interview Protocol, Observations|
|Higgins et al. (2016)||Journal||Mixed||Descriptive Quantitative||Primary||Interview Protocol, Test|
|Joakim (2015)||Dissertation||Quantitative||Descriptive Quantitative||Secondary||Test|
|Kafle (2015)||Dissertation||Qualitative||Descriptive Qualitative||Primary||Interview Protocol, Observations|
|Kettler (2015)||Journal||Qualitative||Descriptive Qualitative||Secondary||Articles|
|Kim & Lee (2016)||Journal||Quantitative||Correlation/Prediction||Secondary||Grades|
|Lane & Leventhal (2015)||Journal||Qualitative||Descriptive Qualitative||Secondary||Articles|
|Lawing (2015)||Dissertation||Qualitative||Descriptive Qualitative||Primary||Interview Protocol, Survey|
|Lewandowski et al. (2015)||Journal||Quantitative||Quasi-Experimental||Primary||Grades, Test|
|Lin & Lin (2016)||Journal||Quantitative||Quasi-Experimental||Secondary||Test|
|Lin et al. (2016)||Journal||Quantitative||Correlation/Prediction||Secondary||Test|
|Lovett & Leja (2015)||Journal||Quantitative||Correlation/Prediction||Primary||Survey, Test|
|Lyman et al. (2016)||Journal||Qualitative||Descriptive Qualitative||Primary||Interview Protocol|
|McMahon et al. (2016)||Journal||Quantitative||Quasi-Experimental||Primary||Test|
|Miller et al. (2015)||Journal||Quantitative||Quasi-Experimental||Primary||Survey, Test|
|Nelson & Reynolds (2015)||Journal||Qualitative||Descriptive Qualitative||Primary||Interview Protocol, Observations, Test|
|Newman & Madaus (2015a)||Journal||Quantitative||Correlation/Prediction||Secondary||Interview Protocol, Survey|
|Newman & Madaus (2015b)||Journal||Quantitative||Descriptive Quantitative||Secondary||Interview Protocol|
|Ofiesh et al. (2015)||Journal||Qualitative||Descriptive Qualitative||Primary||Interview Protocol|
|Peterson (2016)||Dissertation||Qualitative||Descriptive Qualitative||Primary||Interview Protocol|
|Potter et al. (2016)||Journal||Quantitative||Quasi-Experimental||Primary||Survey, Test|
|Ricci (2015)||Dissertation||Quantitative||Quasi-Experimental||Secondary||Survey, Test|
|Rosenblum & Herzberg (2015)||Journal||Mixed||Descriptive Qualitative||Primary||Interview Protocol, Test|
|Ruhkamp (2015)||Dissertation||Qualitative||Descriptive Qualitative||Primary||Survey|
|Seo & De Jong (2015)||Journal||Quantitative||Quasi-Experimental||Secondary||Test|
|Seo & Hao (2016)||Journal||Quantitative||Descriptive Quantitative||Secondary||Test|
|Sokal (2016)||Journal||Qualitative||Descriptive Qualitative||Primary||Interview Protocol|
|Spenceley & Wheeler (2016)||Journal||Quantitative||Descriptive Quantitative||Primary||Observations|
|Spiel et al. (2016)||Journal||Quantitative||Quasi-Experimental||Primary||Test|
|Südkamp et al. (2015)||Journal||Quantitative||Correlation/Prediction||Secondary||Test|
|Timmerman & Mulvihill (2015)||Journal||Qualitative||Descriptive Qualitative||Primary||Interview Protocol|
|Weis et al. (2016)||Journal||Quantitative||Descriptive Quantitative||Secondary||Test|
|Williams (2015)||Dissertation||Qualitative||Descriptive Qualitative||Primary||Interview Protocol|
|Yssel et al. (2016)||Journal||Qualitative||Descriptive Qualitative||Primary||Interview Protocol|
|Zambrano (2016)||Dissertation||Qualitative||Descriptive Qualitative||Primary||Interview Protocol|
|Zeedyk et al. (2016)||Journal||Qualitative||Descriptive Qualitative||Secondary||Articles|
Note. An additional seven studies (Barnett & Gay, 2015; Cahan et al., 2016; Condra et al., 2015; DeLee, 2015; Kettler, 2015; Lane & Leventhal, 2015; Zeedyk et al., 2016) were literature reviews, and did not use data collection instruments.
Table C-1. Instrument Types and Specific Instruments Used, and Their Sources (n=51)
|Authors||Instrument Types and Description/s||Total|
|Ajuwon et al. (2016)||Researcher Test: Two surveys from other researchers, one with respondents from Texas, and another from across the US; data were responses for open-ended items, which were further open-coded into categories.||1|
|Barnhill (2016)||Author Survey: Twenty-item survey with demographic items as well as items on other aspects of the university setting, about supports for students with Asperger Syndrome (AS) and autism spectrum disorder (ASD), and about outcomes including graduation data and support services' features such as program effectiveness.||1|
|Bouck et al. (2015)||Author Survey: Social validity survey questions using a Likert-type rating scale.
Researcher Test: Twenty math assessments with eight items each (both computation and word problems); for two grade levels (grades 7 and 8), focused on the Common Core State Standards, drawing in part from the state assessment's released items and other sample items; measured the number of correct responses and the number of items attempted.
Norm-ref Ach: Calculation subtest and Writing Fluency subtest of the Woodcock Johnson III Tests of Achievement (WJ-III; Woodcock, McGrew, & Mather, 2001a); Written Expression subtest of the Wechsler Individual Achievement Test-Second Edition (WIAT-II; Wechsler, 2001).
|Cawthon et al. (2015)||Researcher Test: Used extant data set from a separate larger data set about students (NLTS2), including demographic information, incidence of accommodations use, and persistence in postsecondary education; data had been collected from parents and school personnel, through phone interviews or paper surveys.||1|
|Cole & Cawthon (2015)||Author Survey: Student self-report survey asking for demographic information such as GPA, major, type of learning disabilities, along with students' accommodations use and disclosure about disabilities; semi-structured interview of postsecondary students about the factors they identified influencing self-disclosure.
Researcher Test: The Self-Determination Scale (SDS; Sheldon & Deci, 1993), the Revised Self-Disclosure Scale (RSDS; Wheeless, 1978), and the Attitudes Toward Requesting Accommodations scale (ATRA; Barnard-Brak, Davis, Tate, & Sulak, 2009).
|Couzens et al. (2015)||Author Survey: Semi-structured interviews of both postsecondary students and staff members; students were asked about learning strengths and challenges, accessing formal disability services, and support experiences including their perceptions of least useful supports; staff members were asked about the needs of students with learning difficulties, and aspects of supporting these students.||1|
|Crosby (2015)||Author Survey: Survey of faculty members about perceptions of disability, as well as their knowledge of accommodations, and practices providing accommodations; also, semi-structured interview protocol, asking postsecondary students about their experiences with their disabilities and requesting accommodations.
Other: Other document analysis (for data triangulation), including academic records (i.e., grades) and disability documentation.
|DePountis et al. (2015)||Author Survey: Survey that included items about demographic and teaching experience information as well as teachers' perspectives including their self-perceptions about proficiencies with several assistive technology devices.||1|
|Detrick-Grove (2016)||Author Survey: This study was a replication of a study completed by Brown (2007), and the researcher documented the adjustments made to the original survey for the current study. The focus of the survey was to gather educators' perspectives about accommodations, including their knowledge about them.||1|
|Dong & Lucas (2016)||Author Survey: Survey of 200 items designed by researchers covering basic demographics, including students' self-reported disabilities, as well as their academic grades and progress toward postsecondary degrees, and documenting patterns of seeking accommodations.||1|
|Eberhart (2015)||Author Survey: Student self-report questionnaire, requesting their experience and perspectives on testing format. Structured "think-aloud" cognitive laboratory interview form for documenting interviewer observations, along with interview transcripts.
Researcher Test: KITE computerized assessment system, used by the state to measure performance in English language arts and mathematics, based on the Smarter Balanced Assessment Consortium's item/task specifications.
|Gallego & Busch (2015)||Author Survey: Educator surveys requesting respondents' perspectives about accommodations and their observations about how accommodations are provided, including rating scale responses.||1|
|Giusto (2015)||Norm-ref Ach: Primary achievement test: Gates-MacGinitie Reading Comprehension Tests, 4th Edition, Reading Comprehension Subtest, Form S–Grade 3 (MacGinitie, MacGinitie, Maria, & Dreyer, 2000); also, for screening/identification: Woodcock Reading Mastery Test–Revised (WRMT-R) Word Reading Subtest (Woodcock, 1987); WRMT-R Word Attack Subtest (Woodcock, 1987).
Norm-ref Ability: For screening/identification: Clinical Evaluation of Language Functions, Fourth Edition (CELF-4) Understanding Concepts and Spoken Directions Subtest (Semel, Wiig, & Secord, 2003); CELF-4 Understanding Spoken Paragraphs Subtest (Semel et al., 2003); Peabody Picture Vocabulary Test, Fourth Edition (PPVT-IV; Dunn & Dunn, 2007).
|Hansen et al. (2016)||Author Survey: Structured interview protocol: pre-session included demographic information and details about prior experience with assistive technology, and post-session included gathering information about testing experiences; researcher observation form.
Other: Science assessment task based on Next Generation Science Standards; checked the four test conditions with the W3C Web Content Accessibility Guidelines (Caldwell, Cooper, Reid, & Vanderheiden, 2008).
|Higgins et al. (2016)||Author Survey: Survey from teachers rating students' reading levels; structured "think-aloud" cognitive laboratory interview protocol for documenting interviewer observations.
State Test: Mathematics test items were compiled from states' and consortia's released test items, including various item types requiring different response formats from students.
|Joakim (2015)||State Test: New England Common Assessment Program (NECAP) 2012 writing test scores.||1|
|Kafle (2015)||Author Survey: Structured interview protocols, involving both the disability support program coordinator and postsecondary students as participants, asking about learning disabilities, disclosing disabilities to instructors, instructors' perceptions as experienced, and requesting and provision of support and accommodations.
Other: Other artifacts examined—provided by students—included students' academic records (i.e., grades), diagnostic and medical assessments, and disability support program records of services used by students.
|Kim (2016)||Author Survey: Semi-structured interviews of both postsecondary students and staff members; students were asked about learning strengths and challenges, accessing formal disability services, and support experiences including their perceptions of least useful supports; staff members were asked about the needs of students with learning difficulties, and aspects of supporting these students.||1|
|Kim & Lee (2016)||Other: Postsecondary cumulative grade point averages (GPAs) were reported, as an indicator of both relative academic success and persistence; researchers noted that other factors were also acknowledged as having likely influenced participants' GPAs.||1|
|Lawing (2015)||Author Survey: Attitudes Towards Teaching All Students (ATTAS-mm), a nine-item teacher rating survey (researcher designed) about inclusion; interview protocol to expand upon survey responses.
Researcher Test: Two selected items from the Alabama Accommodations Survey, a 13-item survey of accommodations decision-making, from the Alabama Department of Education and the National Center on Educational Outcomes (NCEO; Altman, 2010).
|Lewandowski et al. (2015)||Author Survey: Demographic survey, and grade point average (GPA).
Norm-ref Ach: Nelson-Denny Reading Test (Brown et al., 1993) Forms G and H, subtest on reading comprehension.
|Lin et al. (2016)||State Test: Province of Ontario (Canada) Junior (grade 6) Assessment of Reading, Writing, and Mathematics, 2005-2006.||1|
|Lin & Lin (2016)||State Test: Ontario Secondary School Literacy Test (OSSLT), grade 10, 2012-2013.||1|
|Lovett & Leja (2015)||Norm-ref Ach: Nelson-Denny Reading Test (NDRT; Brown et al., 1993) Form H comprehension subtest. Screening and correlation: reading fluency subtest, Woodcock-Johnson Tests of Achievement, Third Edition (WJ-III; Woodcock et al., 2001).
Norm-ref Ability: Screening and correlation—processing speed subtests from the Wechsler Adult Intelligence Scale, Third Edition (WAIS-III; Wechsler, 1997).
Other: Screening and correlation—Self-Evaluation of Performance on Timed Academic Reading (SEPTAR; Kleinmann, 2005); ADHD current symptoms scale from Behavior Rating Inventory of Executive Functioning (BRIEF-A; Roth, Isquith, & Gioia, 2005).
|Lyman et al. (2016)||Author Survey: Semi-structured interview protocol, asking postsecondary students about their disabilities, learning experiences, and accommodations from disability support services.||1|
|McMahon et al. (2016)||Researcher Test: Three versions of a 30-item end-of-year science performance assessment developed by the researchers; they checked the reading level of the content using the Flesch-Kincaid readability formula (Kincaid, Fishburne, Rogers, & Chissom, 1975).||1|
|Miller et al. (2015)||Norm-ref Ach: Nelson-Denny Reading Test (Brown et al., 1993) Forms G and H, subtest on reading comprehension.||1|
|Monagle (2015)||Author Survey: Survey included items on demographics and accommodations use information, and also students' attitudes toward requesting accommodations, based on Attitudes Toward Requesting Accommodations scale (ATRA; Barnard-Brak, Davis, Tate, & Sulak, 2009).||1|
|Nelson & Reynolds (2015)||Author Survey: Interview protocol inquiring about postsecondary students' writing experiences, processes, and attitudes, as well as speech recognition experiences; researchers' observation notes of composition sessions.
Researcher Test: Examination of written products from speech recognition-supported composition sessions using quality indicators, including both a holistic evaluation of the compositions and an accounting of issues such as spelling, vocabulary use, and errors.
|Newman & Madaus (2015a)||Researcher Test: Extant data from larger data set about students—National Longitudinal Transition Study–2 (NLTS2; Valdes et al., 2013); included demographics such as disability categories, and also high school GPA. Also included a subset of survey item data from the Arc’s Self-Determination Scale subscales on self-realization, psychological empowerment, and personal autonomy (Wehmeyer, 2000). The NLTS2 data set was originally collected from students’ parents and school personnel, through phone interviews or paper surveys.||1|
|Newman & Madaus (2015b)||Researcher Test: Extant data from a larger data set about students, the National Longitudinal Transition Study–2 (NLTS2; Valdes, Godard, Williamson, McCracken, & Jones, 2013); included demographic information, postsecondary enrollment, and incidence of accommodations and supports use. The NLTS2 data set was originally collected from students’ parents and school personnel, through phone interviews or paper surveys.||1|
|Ofiesh et al. (2015)||Author Survey: Structured focus group protocol for multiple participant groups, asking postsecondary students with attention-related disabilities about their experiences taking course exams, including accommodations use.||1|
|Ohleyer (2016)||State Test: Extant data set from Colorado Student Assessment Program (CSAP; CTB/McGraw-Hill, 2009) grades 4, 5, & 6 in writing.||1|
|Peterson (2016)||Author Survey: Semi-structured interview protocol inquiring about postsecondary disabilities services managers' experiences providing assistive technology supports to students with disabilities, including successes and challenges.||1|
|Potter et al. (2016)||Author Survey: Demographic questionnaire, including self-reported grade point average, perceived reading ability, and preference between the two testing conditions (with or without accommodation).
Norm-ref Ach: Nelson-Denny Reading Test (Brown et al., 1993) Forms G and H, subtest on vocabulary; both number of items completed and number of items correct were documented.
Norm-ref Ability: Wechsler Intelligence Scale for Children (WISC-IV; Wechsler, 2003) and Woodcock-Johnson Tests of Cognitive Abilities III (WJIII COG; Woodcock, McGrew, & Mather, 2001b).
|Ricci (2015)||Norm-ref Ach: Extant data set for the National Assessment of Educational Progress (NAEP) reading assessment (2011) from three states (Connecticut, New Jersey, & New York), comprehension of informational and literary texts; included data from the NAEP Students with Disabilities/English language learners Questionnaire, completed by educators about each assessed student.||1|
|Rosenblum & Herzberg (2015)||Author Survey: Structured interview protocol about recent experiences with tactile graphics, as well as about previous use of tactile graphics and braille; parents reported students' demographic data.
Researcher Test: Set of objective questions after having examined four different tactile graphic representations typically used for math and science content; performance on items was documented.
|Rudzki (2015)||Author Survey: District demographic data records.
State Test: Extant data set of fall 2012 performance scores on an unspecified state reading assessment, for one district's students with reading disabilities in grades 3 through 8.
|Ruhkamp (2015)||Author Survey: Student self-report survey, requesting demographic data and perceptions about accommodations; interview protocol.||1|
|Seo & De Jong (2015)||State Test: A subset of state assessment data from 222 volunteer schools, including the 2012 Michigan Educational Assessment Program scores in social studies.||1|
|Seo & Hao (2016)||State Test: Selected extant data—responses from 2010 Michigan Merit Examination grade 11 science (biological sciences, earth/space sciences, physics, and chemistry); also comprised a subset of science test items from the American College Test (ACT).||1|
|Sokal (2016)||Author Survey: Interview protocol, asking postsecondary accessibility services professionals and faculty about providing accommodations and other supportive assistance for students with the mental health concern of anxiety.||1|
|Spenceley & Wheeler (2016)||Author Survey: Records of postsecondary students with disabilities, including demographic data and documentation of time used for course exams at Disability Support Services (DSS) office.||1|
|Spiel et al. (2016)||Researcher Test: 20-item science tests, with multiple-choice and short-answer items, for grades 4, 5, 6, & 7 students.
Norm-ref Ach: Wechsler Individualized Achievement Test-Third Edition (WIAT-III; Wechsler, 2009).
Norm-ref Ability: Screening—Wechsler Abbreviated Scale of Intelligence-Second Edition (WASI-II; Wechsler, 2011).
Other: Screening/diagnostics—Children's Interview for Psychiatric Syndromes-Parent Version (P-ChIPS; Weller, Weller, Fristad, Rooney, & Schecter, 2000), Disruptive Behavior Disorder Rating Scale-Parent Version (DBD; Pelham, Gnagy, Greenslade, & Milich, 1992), and Impairment Rating Scale-Parent Version (IRS; Fabiano et al., 2006).
|Südkamp et al. (2015)||Researcher Test: Three extant data samples of grade 5 students from a larger longitudinal data set about students (German National Educational Panel Study/NEPS), including demographics and performance in reading literacy.||1|
|Timmerman & Mulvihill (2015)||Author Survey: Semi-structured interview protocol, asking postsecondary students with disabilities about their accommodations experiences.||1|
|Weis et al. (2016)||Author Survey: Records of postsecondary students with learning disabilities, including accommodations use.
Other: Redacted (de-identified) data provided to the researchers included unspecified achievement and cognitive testing that yielded diagnoses (applying the Diagnostic and Statistical Manual of Mental Disorders, DSM-III-R and DSM-IV) and clinicians' recommended academic accommodations and/or modifications for the students, along with disability histories and accommodations provided during past schooling.
|Williams (2015)||Author Survey: Semi-structured interview protocol, asking middle school students about their assessment accommodations experiences; also, education records for triangulation purposes.||1|
|Yssel et al. (2016)||Author Survey: Semi-structured interview protocol inquiring about the experiences of postsecondary students and their disabilities, their learning experiences, and how they perceived the accommodations they were provided.||1|
|Zambrano (2016)||Author Survey: Semi-structured interview protocol, asking postsecondary students about their disabilities, their learning experiences; also, asking students and one disability resource services professional about accommodations.||1|
Note. An additional seven studies (Barnett & Gay, 2015; Cahan et al., 2016; Condra et al., 2015; DeLee, 2015; Kettler, 2015; Lane & Leventhal, 2015; Zeedyk et al., 2016) were literature reviews, and did not use data collection instruments.
|Instrument Type||Short Name||Number of Studies|
|Non-Academic Protocols or Surveys Developed by Study Author/s||Author Survey||
|Surveys or Academic Tests Developed by Professionals or Researchers through Work Outside of Current Study||Researcher Test||13|
|State Criterion-referenced Assessment||State Test||9|
|Norm-referenced Academic Achievement Tests||Norm-ref Ach||
|Norm-referenced Cognitive Ability Measures||Norm-ref Ability||5|
Table C-2. Content Areas Assessed
|Authors||Math||Reading||Writing||Other LA||Science||Social Studies||N|
|Bouck et al. (2015)||•||1|
|Davis et al. (2015)||•||1|
|Hansen et al. (2016)||•||1|
|Higgins et al. (2016)||•||1|
|Lewandowski et al. (2015)||•||1|
|Lin, Childs, & Lin (2016)||•||•||2|
|Lin & Lin (2016)||•b||1|
|Lovett & Leja (2015)||•||1|
|McMahon et al. (2016)||•||1|
|Miller et al. (2015)||•||1|
|Nelson & Reynolds (2015)||•||1|
|Potter et al. (2016)||•||1|
|Rosenblum & Herzberg (2015)||•||•||2|
|Seo & De Jong (2015)||•||1|
|Seo & Hao (2016)||•||1|
|Spiel et al. (2016)||•||1|
|Südkamp et al. (2015)||•||1|
Note. This table encompasses the subset of studies (n=24) that used assessments or tests of academic content areas or cognitive skills; the excluded studies used surveys or other data collection mechanisms only.
a In this study, other LA = English language arts as identified by the Smarter Balanced Assessment, encompassing both reading comprehension of literary and informational texts and writing—producing effective and well-grounded writing.
b In this study, other LA = the Canadian province's literacy test, a requirement of high school graduation.
|Authors||Participants||N||% with Disabilities||Grade/Age||Disability Categories|
|Ajuwon et al. (2016)||Educators||247||0%||No age||N/A|
|Barnett & Gay (2015)||N/A||N/A||N/A||N/A||N/A|
|Barnhill (2016)||Educators||30||0%||No age||N/A|
|Bouck et al. (2015)||Students||7||100%||Grades 7 & 8||AP, A, EBD, LD|
|Cahan et al. (2016)||N/A||N/A||N/A||N/A||N/A|
|Cawthon et al. (2015)||Students||210||100%||Middle or high school; ages 13-18||HI, Mult.|
|Cole & Cawthon (2015)||Students||31||100%||Postsecondary||LD|
|Condra et al. (2015)||N/A||N/A||N/A||N/A||N/A|
|Couzens et al. (2015)||Students, Educators||15||47%||Postsecondary students; no age (Disability Services personnel)||LD|
|190||4%||Postsecondary||AP, EBD, LD, PD|
|Davis et al. (2015)||Students||826||0%||Grade 5, Grades 10 & 11||None|
|DePountis et al. (2015)||Educators||122||0%||No age||N/A|
|Detrick-Grove (2016)||Educators||267||0%||No age||N/A|
|Dong & Lucas (2016)||Students||8905||8%||Postsecondary||AP, EBD, LD, ID, PD, None|
|Eberhart (2015)||Students||38010||0%||Grade 7||None|
|Gallego & Busch (2015)||Educators||122||0%||No age||N/A|
|Giusto (2015)||Students||82||34%||Grade 3||LD, None|
|Hansen et al. (2016)||Students||3||100%||Grades 8-9||VI|
|Higgins et al. (2016)||Students||279||100%||Grades 3-5, Grades 6-8, Grades 9-12||HI|
|Joakim (2015)||Students||156||100%||Grades 5 & 8||A, EBD, HI, LD, ID, PD, S/L, Mult.|
|Kim & Lee (2016)||Students||1055||100%||Postsecondary||AP, A, EBD, HI, LD, PD, S/L, VI, Mult.|
|Kim (2016)||Students||193||5%||Kindergarten, Grade 2, & Grade 4||LD, S/L, None|
|Lane & Leventhal (2015)||N/A||N/A||N/A||N/A||N/A|
|Lawing (2015)||Educators||65||0%||No age||N/A|
|Lewandowski et al. (2015)||Students||62||0%||Postsecondary||None|
|Lin & Lin (2016)||Students||14499||100%||Grade 10||EBD, LD, Mult.|
|Lin et al. (2016)||Students||8831||31%||Grade 6||LD, None|
|Lovett & Leja (2015)||Students||141||0%||Postsecondary||AP, None|
|Lyman et al. (2016)||Students||16||100%||Postsecondary||AP, A, EBD, LD, PD, VI|
|McMahon et al. (2016)||Students||47||34%||Grade 6||LD, None|
|Miller et al. (2015)||Students||76||50%||Postsecondary||AP, None|
|Monagle (2015)||Students||285||100%||Postsecondary||AP, A, EBD, LD, PD, Mult.|
|Nelson & Reynolds (2015)||Students||5||100%||Postsecondary||AP, EBD, LD, PD|
|Newman & Madaus (2015a)||Students||2470||100%||Postsecondary||AP, A, EBD, HI, LD, ID, PD, S/L, VI, Mult.|
|Newman & Madaus (2015b)||Students||3190||100%||Postsecondary||AP, A, EBD, HI, LD, ID, PD, S/L, VI, Mult.|
|Ofiesh et al. (2015)||Students||17||100%||Postsecondary||AP, LD|
|Ohleyer (2016)||Students||315||100%||Grades 4, 5, & 6||LD|
|Peterson (2016)||Educators||10||0%||No age||N/A|
|Potter et al. (2016)||Students||101||25%||Postsecondary||AP, LD, Mult., None|
|Ricci (2015)||Students||23015||100%||Grade 4||Missing|
|Rosenblum & Herzberg (2015)||Students||12||100%||Grades 6, 7, 8, 9, 10, 11, & 12 (at least one or more in each grade)||VI|
|Rudzki (2015)||Students||14||100%||Grades 3-8 (but grade level of each participant was not specified)||LD|
|Ruhkamp (2015)||Students||6||100%||Postsecondary||Not Specified|
|Seo & De Jong (2015)||Students||52484||5.8%||Grades 6 & 9||Not Specified, None|
|Seo & Hao (2016)||Students||19788||Missing||High school||Missing|
|Sokal (2016)||Educators||5||0%||No age||N/A|
|Spenceley & Wheeler (2016)||Students||1093||100%||Postsecondary||AP, A, EBD, LD, PD, VI, Mult.|
|Spiel et al. (2016)||Students||36||44%||Grades 4, 5, 6, & 7 (ages 9 to 14)||AP, EBD, LD, Mult., None|
|Südkamp et al. (2015)||Students||6341||7%||Grade 5||LD, None|
|Timmerman & Mulvihill (2015)||Students||2||100%||Postsecondary||AP, A, LD, VI|
|Weis et al. (2016)||Students||359||100%||Postsecondary||AP, A, EBD, LD, S/L, Mult.|
|Williams (2015)||Students||10||100%||Grade 8||AP, A, LD, ID, Mult.|
|Yssel et al. (2016)||Students||12||100%||Postsecondary||AP, LD, PD, VI, Mult.|
|Zambrano (2016)||Students||8||0%||Postsecondary||AP, EBD, LD, PD|
|Zeedyk et al. (2016)||N/A||N/A||N/A||N/A||N/A|
AP: Attention Problem
EBD: Emotional/Behavioral Disability
HI: Hearing Impairment/Deafness
ID: Intellectual Disability
LD: Learning Disability
PD: Physical Disability
S/L: Speech/Language Disability
TBI: Traumatic Brain Injury
VI: Visual Impairment/Blindness
Mult.: Multiple Disabilities
None: Students without Disabilities
Table E-1. Presentation Accommodations Itemized by Study
|Cawthon et al. (2015)||•||•||•||•||•||5|
|Davis et al. (2015)||•||1|
|Hansen et al. (2016)||•||•||•||3|
|Higgins et al. (2016)||•||1|
|Kim & Lee (2016)||•||1|
|Lin & Lin (2016)||•||•||2|
|McMahon et al. (2016)||•||1|
|Rosenblum & Herzberg (2015)||•||•||2|
|Seo & De Jong (2015)||•||1|
|Seo & Hao (2016)||•||1|
|Spiel et al. (2016)||•||1|
|Südkamp et al. (2015)||•||1|
|Timmerman & Mulvihill (2015)||•||1|
|Weis et al. (2016)||•||•||•||•||•||•||6|
Table E-2. Equipment Accommodations Itemized by Study
|Cawthon et al. (2015)||•||1|
|Davis et al. (2015)||•||1|
|Hansen et al. (2016)||•||•||2|
|Kim & Lee (2016)||•||1|
|Lin & Lin (2016)||•||•||2|
|McMahon et al. (2016)||•||1|
|Seo & De Jong (2015)||•||1|
|Seo & Hao (2016)||•||1|
|Weis et al. (2016)||•||•||2|
Table E-3. Response Accommodations Itemized by Study
|Bouck et al. (2015)||•||1|
|Cawthon et al. (2015)||•||1|
|Davis et al. (2015)||•||•||2|
|Hansen et al. (2016)||•||1|
|Lin & Lin (2016)||•||•||•||3|
|Nelson & Reynolds (2015)||•||1|
|Potter et al. (2016)||•||1|
|Seo & De Jong (2015)||•||1|
|Weis et al. (2016)||•||•||•||•||•||5|
Table E-4. Scheduling Accommodations Itemized by Study
|Author/s||Extended time||Multiple day||Test breaks||TOTAL|
|Barnett & Gay (2015)||•||•||•||3|
|Cahan et al. (2016)||•||1|
|Cawthon et al. (2015)||•||1|
|Kim & Lee (2016)||•||1|
|Lin & Lin (2016)||•||•||2|
|Lovett & Leja (2015)||•||1|
|Miller et al. (2015)||•||1|
|Ofiesh et al. (2015)||•||1|
|Spenceley & Wheeler (2016)||•||1|
|Südkamp et al. (2015)||•||1|
|Timmerman & Mulvihill (2015)||•||1|
|Weis et al. (2016)||•||•||2|
|Yssel et al. (2016)||•||1|
|Zeedyk et al. (2016)||•||1|
Table E-5. Setting Accommodations Itemized by Study
|Author/s||Individual||Small group||Specialized setting||TOTAL|
|Barnett & Gay (2015)||1||1|
|Cawthon et al. (2015)||1||1|
|Kim & Lee (2016)||1||1|
|Lin et al. (2016)||1||1|
|Lin & Lin (2016)||1||1|
|Yssel et al. (2016)||1||1|
|Zeedyk et al. (2016)||1||1|
|Ajuwon et al. (2016)||The Texas study's teacher comments were categorized into eight areas; the most common concerned teachers' need for access to assistive technology training, followed by assistive technology proficiency and collaboration among professionals. The national study's teacher comments were categorized into nine areas; the most common concerned teachers' assistive technology proficiency, followed by their need for access to assistive technology training and collaboration among professionals. The additional topic in the national study was equipment concerns, which drew relatively few comments. Details of individual participants' comments were presented for both studies.||X|
|Barnett & Gay (2015)||The researchers summarized the research literature about the impacts of this medical condition on academic challenges, and offered recommendations about instructional and assessment accessibility; this summary emphasizes results concerning assessment accommodations. They reported assessment practices that are responsive to the possible incidence of seizure events while students are at school—including flexible scheduling and timing, such as testing over multiple days and at the best time of day for individual students. They also noted that the medical condition, and the medications provided to treat it, can affect students' memory and attention, indicating the use of test instructions that are simple, paced to individual students, provided in multiple formats, and repeated as needed. The researchers indicated that assessing "recognition rather than recall . . . may provide a more accurate representation of overall understanding" (p. 7).|
|Barnhill (2016)||Survey results provided information about current support practices for students with Asperger Syndrome (AS) and autism spectrum disorder (ASD). Half of the postsecondary institutions had more than 30 students with AS and ASD, and all had more than 5; only 9 institutions had more than 30 students receiving support services in their programs, indicating that not all students with AS and ASD sought support from Disability Services offices. Less than half of the institutions had provided specific support services for more than five years for students with AS and ASD. The most common supports—provided by 29 of the 30 institutions—for students with AS and ASD were the examination accommodations of extended time and alternate site. Some also provided oral delivery of examinations. Other findings included that most institutions did not have outcomes data such as graduation rates for students with AS and ASD. Substantial detail was reported on support program features that increased effectiveness.||X||X|
|Bouck et al. (2015)||Descriptive data indicated that participants scored higher on test items when using the calculator than when not using it. Most students attempted all, or nearly all, items in all four test phases; the two students who did not attempted more items with the calculator than without it. Calculator use showed no differential pattern of benefit for correctly solving computation items versus word problem items. Percentage of non-overlapping data between the test phases served as a nonparametric form of effect size. Each participant's relative degree of success (or lack of success) was reported separately. The grade 7 students all evidenced a small effect of using the calculator, while half of the grade 8 students showed a small effect and the other half a moderate effect. Nearly all participants' scores were analyzed to be variable across test phases, and the calculator was deemed a questionable or unreliable intervention in many cases. Participants reported that they liked calculators and that calculators helped with both types of problems; there was one exception to this perception. Grade 7 students said that they would not need to use calculators in the future, but the grade 8 students said that they would.||X||X||M|
|Cahan et al. (2016)||The researchers presented a rank-ordered list of the 17 tests from the 11 studies showing the relative differences (for students with and without learning disabilities/LD) in mean performance gains between extended-time and no accommodation conditions. They concluded that there was a low correlation between gain scores and disability status for most of the studies. Based on the collective results of the 11 studies on effects of extended time for students with learning disabilities and students without disabilities, the researchers argued that granting additional assessment time for only students with LD "erroneously denies time extension from the vast majority of the examinees who could benefit from it" (p. 468), namely, many students without disabilities. This conclusion, the researchers argued, supports a universal design approach —that is, removing time limitation from tests for all students. Along with other study limitations, the researchers observed that most of the studies used research measures that "may have been more highly speeded than actual high-stakes tests are, leading to the very high degree of benefit by both LD and nondisabled participants" (p. 470).||X||X||M,R|
|Cawthon et al. (2015)||The researchers reported accommodations use information pertaining to standardized testing, instruction, and also mental health supports; this summary emphasizes test accommodations. Postsecondary language/communications accommodations use during exams was about 10%, a decrease from 70% in high school; non-language/communications accommodations use was 50%, compared with 60% in high school. Both decreases were statistically significant. Regression analyses of demographic factors and postsecondary accommodations use during testing indicated that students with both hearing impairments and another disability, and students of higher-income families, had higher likelihoods of receiving non-language/communication accommodations (e.g., extended time and abbreviated test). Accommodations use was not significantly related to persistence in or completion of postsecondary education.||X|
|Cole & Cawthon (2015)||Students made various degrees of disclosure about their disabilities and desire or need for accommodations: no disclosure or need, disclosure by contacting the university's disability services (DS) office and then only providing letters to professors, and disclosure through the DS office's letter along with detailed personal conversations with professors. Students who disclosed their disabilities and accommodations requests to professors had significantly different survey results from those who did not. Students who did not self-disclose at all to professors had negative attitudes about seeking accommodations and lower levels of self-determination. Further, nine factors influenced disclosure decisions. These factors were separated into themes: knowledge (about accommodations), experience with people, self-awareness, and supports. The "experience with people" theme encompassed students' sense of professors' demeanor and their experiences with DS, the professor, classmates, and academics in general. Self-awareness factors concerned whether students needed accommodations to address their disabilities. The "supports" theme was whether students had developed coping strategies without using accommodations. Non-disclosing students commented very negatively regarding knowledge of accommodations, thought that they did not need accommodations because they had other coping strategies, held more negative views of disabilities, and wanted to avoid negative comments from peers. Students who disclosed through the DS letter only, and those who disclosed through both the letter and conversations with professors, made fewer negative comments about knowledge of postsecondary accommodations and commented more positively about their own disabilities and their needing and benefiting from accommodations. Both disclosure groups felt more positively about professors' demeanors, but made more negative than positive comments about academics, and negative comments about experiences with peers. Both disclosure groups made similar numbers of positive comments about coping mechanisms as the no-disclosure group.||X||X|
|Condra et al. (2015)||The researchers emphasized the need for "retroactive accommodations" (p. 283)—such as makeup exams—due to the episodic nature of mental health disabilities (MHD). The increasing trend of students with MHD in postsecondary education was documented. Recognizing the challenges to current policies (at Ontario higher education institutions) for granting accommodations, the researchers noted that the implementation of retroactive accommodations policy might require an individual case-by-case approach. The researchers highlighted studies indicating various faculty attitudes about accommodating students with MHD, and even about whether these students have a legitimate place in higher education, separate from the legality of access to education in Canada and the US. The researchers reported on research evidence indicating the need for appropriate information and training for postsecondary educators.||X|
|Couzens et al. (2015)||Student participants reported various degrees of value about the writing supports available, with some indicating that these were very valuable and others indicating that they were not. After attending an orientation to available supports, students indicated that available services were not necessarily helpful; none of the students interviewed used assistive technologies. Regarding Disabilities Service supports, most student participants indicated that they had not used them "because they perceived [the DS program] to be for students with greater needs" (p. 35). University personnel participants commented that many students who could benefit from supportive services had not sought them and that staff would have to encourage students to do so. Staff members also noted that resources were not always available for students to explore potentially helpful assistive technologies with which they were not already familiar.||X||X|
|Crosby (2015)||The faculty survey findings detailed the social context and institutional culture regarding inclusion practices and perceptions of disability. Nearly all survey respondents had taught students with disabilities. The researcher remarked that respondents had some misconceptions about disability; nearly 40% indicated that students with ADHD or learning disabilities might be unable to be successful. About 15% were uncomfortable teaching students with disabilities, and 30% were not knowledgeable about laws and policies on accommodating these students. The researcher concluded that the institution was primarily inclusive of and responsive to students with disabilities. The postsecondary students with disabilities who were interviewed indicated that their delays in requesting accommodations had led to stress, anger, anxiety, embarrassment, regret, and relief when they eventually documented their disabilities. Their challenges were based on the relevance of their disabilities to their identities: self-perceptions of normality or abnormality influenced their degrees of willingness to disclose their needs for academic assistance. Students viewing disability as a negative attribute tended to weigh the social costs of having disabilities against the benefits of accessing academic supports. The researcher reflected on the relevance of the student interviewees' insights, suggesting that inclusive institutions can serve as change agents, encouraging students to self-disclose their disabilities.||X||X|
|Davis et al. (2015)||An early conclusion based on observation was that students typically did not use styluses, either in general or specifically for revising essays, and did not revise essays even though they were given the opportunity and instructions to do so. Task performance was not significantly different across testing formats at either schooling level. The mean evaluation scores were not significantly different within each schooling level across testing formats. Neither grade 5 students nor high school students varied in performance based on testing format—using a laptop, a tablet with an external keyboard, or a tablet with a touch-screen keyboard. When writing essays at school, students most commonly composed on paper or on computers (without touchscreens) interchangeably; students rarely used only one format, and even fewer used touchscreens. On ease of use, few students indicated difficulty with tablets with touchscreen keyboards, yet more high school students than grade 5 students indicated that using touchscreens was "somewhat difficult." High school students tended to prefer physical keyboards over touchscreens for writing compositions.||X||X||X||W|
|DeLee (2015)||The researcher summarized several studies, with at least ten bearing on the researcher's recommendations to postsecondary institutions about academic accommodations for students with cognitive, physical, or sensory disabilities. Various approaches, including institution-specific guidelines, have been followed for documenting disabilities and eligibility for accommodations. Rather than a singular standard document, various documents, including high school IEPs, have been used to substantiate disabilities and accommodation needs. Some postsecondary students with disabilities hesitate to self-disclose their disabilities and possible need for accommodations; many of these hesitant students were described in some research as being high-achievers at the secondary level, yet a substantial proportion of them were shown not to persist to completing their postsecondary programs. Additional research was described indicating that students' college success was associated with their proactively seeking supportive services, such as those available at or through academic libraries, including online technologies. The researcher reviewed research emphasizing the importance of evaluating accommodations' benefits, noting that postsecondary institutions have shifted toward student-centered perspectives valuing "assistive reading and listening technologies" (p. 45) and exam accommodations, decreasing reported needs for such resources as recorded lectures. The researcher noted findings pertaining to changing approaches to communicating with students needing accommodations. Reviewing faculty perspectives research, the author pointed out that knowledge of issues and available resources continues to lag among postsecondary educators, and pointed toward research detailing institutional structures countering these limitations.||X|
|DePountis et al. (2015)||High school teachers estimated that their proficiency in supporting students who are blind in learning advanced mathematics classes was highest in algebra, and was also relatively high in geometry. The teachers indicated having used up to 35 different electronic assistive technology (EAT) devices, including a number of EAT devices used during academic assessments, with at least 9 survey respondents indicating they had used each of them. Thirteen devices were identified specifically as beneficial to their students: accessible graphing calculators, audio recording, braille translators, electronic refreshable braille notetakers (ERBN), Excel software, graphing software, talking electronic flashcard software, optical character recognition (OCR) software, personal computers (PCs), scanners/readers, scientific notebooks, talking calculators, and talking scientific calculators. Seven of these devices were typically used by students for producing work products: accessible graphing calculators, braille translators, ERBN, Excel, PCs, talking calculators, and talking scientific calculators. Of these 13 beneficial devices, 7 were typically used in geometry, and 4 were typically used in algebra. Additional devices were mentioned in the open-response question, including those considered high-tech and low-tech: mathematical notation and graphing software, notetakers such as refreshable displays and Perkins braillewriters, embossers and thermal printers, tactile boards such as those from the American Printing House, other manipulatives, and other devices such as abacuses and digital cameras. The researchers noted that teachers were primarily positive about the low-tech options and generally negative about the newest high-tech devices.||X||X||M|
|Detrick-Grove (2016)||Analysis of the K-12 teachers' survey responses revealed their perceptions and knowledge of accommodations for students with disabilities. First, teachers felt that their employers prepared them to provide accommodations for students with disabilities, more so than their postsecondary education programs had. Second, teachers reported viewing accommodations as fair; no distinctions emerged among or across teacher characteristics in that overall perception. Third, teachers viewed all ten accommodations specified by the researcher as helpful; on a scale of 1-5, the means were all above 2.5, indicating more helpful than not. The highest means were for reading directions (3.5) and reading test items aloud (3.4), and the lowest were for word processor and spelling dictionary (both 2.8). Finally, a little over 90% of the teachers correctly answered 6 out of 10 questions related to accommodations.||X|
|Dong & Lucas (2016)||Analysis of the postsecondary students' data revealed trends regarding the relationship between requesting accommodations through the DSS (disability support services) office and disability category, as well as relationships among requesting accommodations with DSS, being in good academic standing, and disability category. Cumulatively, students with cognitive disabilities had the largest proportion of students who requested accommodations (32.3%), compared to students with psychological disabilities (12.4%), physical disabilities (9.2%), and no disabilities (0.7%). The researchers noted that incidence of seeking accommodations increased from the first and second to the third semester. Students with cognitive or psychological disabilities who requested accommodations tended to be in good academic standing, compared to those who did not request accommodations, which was especially true in the third semester. Students without disabilities who requested accommodations also "were significantly more often in good academic standing" (p. 52) than those who did not. For students with physical disabilities, no significant difference was found in academic performance for students who requested accommodations and those who did not request accommodations.||X||X|
|Eberhart (2015)||When comparing scores by the device on which items were presented, the researcher found main effects; that is, there were statistically significant differences in mean scores for both ELA and math, with students scoring higher on computers than on tablets. Further, comparisons by device type for multiple-choice questions yielded higher performance on computers than on tablets. Notably, there were no significant performance differences by device for the technologically-enhanced items. The researcher noted that the relative amount of screen space for the calculator on the iPad required more manipulation of this tool when completing math items, which could explain the lower math performance on the iPad. Students scored, on average, significantly higher on multiple-choice items than on technology-enhanced items, with moderate to large differences. The pattern of higher scores on multiple-choice items has been attributed to the additional time and effort needed for navigating and scrolling on the more complex technology-enhanced items. Interaction effects between device and item type were also found: when simultaneously comparing scores by item type and by the device on which they were presented, the researcher found statistically significant interaction effects in mean scores for all three math test forms and one of three ELA test forms. Among the 10 cognitive lab student participants, 5 preferred the laptop computer, 1 preferred the iPad, and 4 liked both devices when completing the math and ELA test items. Seven students preferred the multiple-choice items, 1 preferred the technology-enhanced items, and 2 liked both item types.||X||X||M,O|
|Gallego & Busch (2015)||About 77% of DSO (Disability Services Office) staff members indicated that they had worked with more than 15 students with disabilities attending language courses. About 74% of language program directors had met with at least 1 DSO staff member; 70% had been sent information by the DSO, and about 90% had contacted the DSO. About 31% of teaching assistants (TAs) reported that they had met with at least 1 DSO staff member; about 32% of TAs heard from DSOs, and about 29% had contacted DSOs. A small number (less than 10%) of language program directors, and less than 20% of TAs, reported that they were the sole decision makers about accommodations. About half of the DSO respondents indicated that the DSO provided accommodations training to language TAs. DSO staff members perceived that TAs and language program directors need more information about accommodations policies and procedures. The researchers stated, "the majority of our DSO participants stated that TAs work well to implement accommodations and consult with the DSO when making decisions while TAs indicated the opposite" (p. 396).||X||X|
|Giusto (2015)||Students with decoding-related reading disability scored significantly higher on reading comprehension with the partial read-aloud with pacing accommodation than with either the pacing-only support or no accommodations. The pacing-only and non-accommodated conditions did not yield significantly different scores from one another. The students without disabilities did not score significantly differently across the three testing conditions. The varying linguistic backgrounds among students in both participant groups did not demonstrate an effect on the differences in mean scoring patterns. The researcher also reported anecdotal observations of participants completing the reading comprehension test under the different conditions: in the pacing-only condition, students in both participant groups showed inattention or seemed frustrated with waiting to move on to another test section. Finally, the researcher indicated that the partial read-aloud accommodation with pacing support addressed concerns raised in other research and practice about invalidating the reading comprehension construct. In other words, providing read-aloud for the remaining parts of the test, but not reading the text passages aloud, was demonstrated to differentially benefit students with decoding skills difficulties.||X||R|
|Hansen et al. (2016)||Of the four conditions—(1) screen reader and sound-only static and dynamic simulation, (2) Falcon controller knob haptic static and dynamic simulation, (3) Android touch-screen static only simulation, and (4) tactile graphic paper-based static simulation—all three students most accurately recognized the information in the fourth condition. Both of the device-based haptic simulations (#2 & #3) were mostly unsuccessful communicating needed basic information to the students for understanding the task, except that a student correctly identified the number of particles (under condition #3) during the simulation's third stage. The control (#1) condition had both static and dynamic parts. The screen reader (JAWS) was usable for students, yet students did not answer scaling questions with "strongly agree"—possibly indicating only limited enthusiasm. The format of tables was not navigable for two students. The dynamic simulation—in which sound-only information was presented (about particle collisions)—was confusing for all students. When asked about the 3 quasi-experimental conditions, two students strongly recommended the Falcon tool; however, observations indicated all students had difficulty locating particles in three-dimensional space. Students all had similar difficulties locating the particles when using the Android haptics in the two-dimensional space due to their relative size, and possibly distinguishing the vibrations from other information when they did. Because particle information was central to the task, difficulties locating the particles interfered with student task performance. Students indicated that the tactile graphics were both familiar and easy to use. The dynamic simulations seemed to engage and motivate students: 2 students indicated that they were interesting, and at least 1 indicated learning from the simulation. 
The researchers concluded that usability issues limited the utility of the haptic tools, and offered recommendations for addressing these challenges, such as use of multi-touch (i.e., more than one finger) haptics. All of these findings have implications for designing science tests for students who are blind.||X||S|
|Higgins et al. (2016)||While controlling for variation in student ability, the comparison of performance on math items with and without ASL accommodations showed that students on average scored consistently and significantly higher when using the accommodations, across all three school levels. The score comparisons between the two ASL versions were more complex, yet yielded consistently non-significant results in all comparisons: equations and images signed or not signed, finger-spelled only or finger-spelled and signed, diamond item structure or not, reduplication of plurality or showing action and number of times, or graphic presented on signers' hands or in front of signers' bodies. Students' responses to the cognitive lab interview protocol were categorized into three themes. The commonality among the themes is that students who were deaf preferred receiving more information rather than less, and preferred communication more akin to American Sign Language than English. The elementary students indicated that the non-signed version was confusing; the older students explained that math symbols and graphics would typically be signed during instruction. Similarly, finger-spelling provided additional information to students, and use of the diamond structure for items was the common communication pattern in ASL and more familiar to native ASL signers. The researchers offered recommendations on developing ASL videos for online math assessments.||X||X||M|
|Joakim (2015)||The researcher reported student use of each of the accommodation types, including by grade level: presentation (n=131), response (n=7), setting (n=80), and timing (n=107). Overall, 132 students used at least one accommodation, and 24 students used no accommodations. Mean scores of the groups of students taking the writing assessment with and without accommodations at each grade level were compared. Grade 5 students not using accommodations scored significantly higher than those who used accommodations. Grade 8 students using and not using accommodations did not score significantly differently. When making comparisons for specific accommodations, the researcher found no statistically significant differences for test directions comprehension check, breaks, and extended time—in either grade. Grade 5 students not using read aloud, familiar examiner, or small group scored significantly higher than those using them, and grade 8 students being tested individually scored significantly higher than those not using that support. No student group means were found to be significantly higher when using specific accommodations. When making comparisons within disability category, the researcher indicated that there were no statistically significant differences for accommodations users and non-users with autism in grade 5 and for grade 8 students with emotional behavioral disabilities. Students not using accommodations in grade 5 who had learning disabilities, and in grade 8 who had other health impairments, scored significantly higher than those using accommodations. The researcher noted that some disability category subgroups had too few data points in one of the grade levels for comparison purposes: students with emotional-behavioral disabilities and other health impairments in grade 5, and with autism and learning disabilities in grade 8. 
Some subsets of students using or not using specific accommodations also had too few data points, such as grade 8 students using the time of day accommodation. The researcher cautioned against inferring a causal relationship in these patterns.||X||W|
|Kafle (2015)||Analysis of the community college students’ interview data yielded six themes regarding their perceptions and attitudes: 1) knowledge of disability support services and eligibility for accommodations, which included feelings of relief, gratitude, and happiness; 2) disclosure and feelings related to having a learning disability, including stigma, embarrassment, not fitting in, self-perception, frustration, and self-esteem; 3) emotions related to nursing school, such as anxiousness and stress; 4) attributes contributing to success, including hard work, motivation, determination, pride, and stubbornness; 5) perceptions of their instructors' responses, which ranged from supportive to annoyed; and 6) success strategies, including studying with someone, talking it out, re-reading and re-writing materials, visuals and drawings, and use of accommodations. Each of the perceptions associated with the themes was elaborated through examples from specific interviewees.||X||X|
|Kettler (2015)||The researcher synthesized the findings of several sets of studies on the separate impacts of oral delivery, extended time, and accommodations bundles, including findings pertaining to the differential boost hypothesis. Highlighting at least 30 studies on oral delivery—19 of which were experimental—he noted that the findings of 17 of the 30 studies indicated that oral delivery supported improved scores for students with disabilities. The researcher specified that 11 of the 19 experimental studies yielded evidence of differential benefits of oral delivery for students with disabilities in comparison to students without disabilities. An additional four studies (of the 30) indicated that oral delivery did not invalidate the constructs tested, and one of these studies found that only a few items on a reading comprehension test were affected by oral delivery. Noting that the body of research has indicated that read-aloud is appropriate for assessments of math but not reading, the researcher concurred that use of oral delivery during assessments of reading comprehension should be carefully considered. Reviewing at least 24 studies on extended test time, the researcher reported that the findings of 11 studies demonstrated positive impacts of extended time for students with disabilities. Further, the 19 studies examining differential boost yielded findings of eight studies supporting differential benefits of extended time for students with disabilities over students without disabilities, and three other studies indicated that extended time did not invalidate the test constructs. The researcher concluded from the research that extended test time benefits students with and without disabilities, and that this support ought to be used "only for students with impairments in processing speed or fluency to use when taking tests that are not intended to measure processing speed or fluency at all" (p. 319). 
Examining 15 studies on accommodations bundles, the researcher reported that the findings of 10 studies demonstrated the positive impacts of various sets of multiple accommodations for students with disabilities. Further, of the nine studies about differential boost from bundles, six studies' findings supported differential benefits of various accommodations bundles for students with disabilities over students without disabilities. One study (of the 15) which analyzed factor structures indicated that the IEP-provided accommodations bundles did not invalidate the ELA test construct. The researcher concluded that accommodations bundles or packages benefit students with disabilities as designed, yet that there are complications, including that "the interactions of accommodations within each package are largely unknown" (p. 320).||X|
|Kim & Lee (2016)||Of the 1,055 testing accommodations requested, over 75 percent (n=801) were for extended time, 441 requests (42%) were for specialized setting, and 131 requests (12%) were for presentation accommodations such as assistive technology; some students requested more than one accommodation. Incidence of accommodations requests by disability type was also reported. Beta weights, indicating the relative influence of testing accommodations in predicting cumulative GPAs, showed that extended time had the strongest (and significant) influence on GPA, that oral delivery and other test presentation accommodations also had a significant influence on GPA, and that specialized setting was "not significant . . . in predicting cumulative GPA" (p. 5). Incidentally, classroom accommodations had a lesser relationship with cumulative GPAs, and only accommodations permitted during assignment completion were predictive of GPA.||X||X|
|Kim (2016)||Participants in kindergarten and grade 2 in the live oral delivery condition scored significantly better in comprehension than matched students in the audio-recorded condition; no differences were found for grade 4 students. Participant groups did not perform significantly differently in retell quality between the audio-recorded and in-person oral delivery accommodation conditions.||X||R|
|Lane & Leventhal (2015)||The researchers indicated that of the 11 studies examining the possibility of differential boosts for students with disabilities using accommodations, four studies reported evidence demonstrating differential boosts. They noted that these studies examined data from many accommodations, grade levels, test content areas, and sample sizes. When the researchers separated out the evidence of differential boosts based on these factors, they observed patterns: evidence was noted in 50% of studies with elementary students and 30% of studies with middle school and high school students, and in 40% of reading assessment studies and 30% of math test studies. The researchers detailed several studies' research designs and individual findings. They also discussed the tension between the relatively small numbers in student subgroups in some disability categories—often resulting in combining these groups into sets of students with disabilities as a whole—and the challenge of the resulting heterogeneity of the entire group and students' various responses to specific accommodations. The researchers noted that "test scores tend to be less precise at the lower end of the score scale" (p. 204)—and that students with disabilities sometimes perform in that range due to various factors—and discussed studies using designs such as factor analyses to examine internal test structure. In sum, relevant topics addressed include reliability and score precision, computer-adaptive testing, differential item functioning, factorial invariance, external structure evidence for test validity, and equating invariance for accurate score interpretation.||X|
|Lawing (2015)||High school educators' survey responses showed which classroom-level factors frequently and infrequently affected the accommodations identification process. More than half of respondents indicated that students' present levels of functioning and evidence of successful accommodations were two important factors. Other widely-endorsed factors included expecting the accommodations to support curriculum access (45% of respondents), previous appearance on individualized education program (IEP) plans (35% of respondents), parental input (25% of respondents), and teacher input (22% of respondents). Survey respondents rated the relative importance for accommodations identification of seven school-level conditions for consideration by IEP committees. The single most important consideration, from 29% of respondents, was the academic subject, and least important was "student needs that impede" (p. 133) classroom success, according to 43% of respondents. Common least important considerations were accommodations deemed successful based on trials in the classroom (35%) and students' classroom performance (29%). The researcher noted contradictions in survey responses between factors and considerations, suggesting that these related to differences in decision-making in the classroom and the IEP meeting. Analyses of the relationship between teachers' attitudes about inclusion and accommodation selection factors revealed that special educators had more positive attitudes toward inclusion than general educators; their responses were similar on some attitude survey items and different on others. Over half of the teachers who indicated that students' present levels of functioning were an influential factor also demonstrated highly positive attitudes toward inclusion for six of the nine attitude scale items, yet disagreed about the view that students with disabilities will not require too much teacher time. 
Interview data indicated further support and elaboration about the three most influential factors on accommodations decisions. For instance, teachers indicated that observation of students' actual use or non-use of accommodations can provide an accurate current view of students' classroom functioning and changing needs. The researcher observed subtle variations in teachers' responses to the attitudinal scale items and their rating of influential factors for accommodations selection, noting that positive inclusion attitudes were held by teachers with systematic selection approaches. Teachers' expectations about students' post-high school activities (academic university, community college, training program, or employment) did not influence classroom accommodation selection or practices. Students' accommodation use and outcomes influenced teachers' perceptions of their academic ability, in that using more accommodations was linked with lower academic ability and fewer options after high school.||X||X|
|Lewandowski et al. (2015)||Participants, all of whom reported having no disabilities, scored significantly better on average in the group administration than in the individual setting. Further, these postsecondary students' individual scores correlated relatively highly (r=0.72) between conditions, indicating that testing conditions had limited effects on test performance. The researchers concluded that the individual setting did not benefit students without disabilities, suggesting that individual setting might serve as an accommodation for students with disabilities—but that further investigation (involving students with disabilities) would be needed to substantiate such a claim.||X||X||R|
|Lin et al. (2016)||Students using the setting accommodation had lower test scores on average than students not using that support. Students without disabilities had significantly higher scores in both math and reading than students with learning disabilities (LD). Students without disabilities using the setting accommodation were the lowest-scoring group in reading, below both groups of students with LD. In contrast, the accommodated students without disabilities scored higher in math than both groups of students with disabilities, yet lower than students without disabilities not using accommodations. Students with LD using the setting accommodation scored lower than students with LD not using that support in both reading and math. Non-accommodated students with LD evidenced lower item difficulty than students with LD using the setting accommodation. Individual item functioning varied for a few items between the two groups of students without disabilities in math and reading; however, there were no individual item effects for accommodated and non-accommodated students with LD in either math or reading. Overall, the researchers indicated that the setting accommodation did not benefit students with attention or learning difficulties, and remarked that decisions about providing setting accommodations need to be made on an individual basis. The researchers concluded that the multilevel measurement modeling approach was useful in examining accommodations' effects because the data could be analyzed for multiple predictors and interactions at the person-level and item-level.||X||M,R|
|Lin & Lin (2016)||The odds ratio results demonstrated that the three groups of students with disabilities receiving the combinations of: 1) computer administration with extended time, 2) computer administration with specialized setting, and 3) computer administration with both extended time and specialized setting, all "had a higher chance to meet the [literacy] standards" (p. 20) than their peers receiving no accommodations, and also than their peers receiving other accommodations. Comparisons for students with learning disabilities had the most pronounced differences. The researchers cautioned that the matter of accommodations potentially affecting construct validity might be of concern, noting that the OSSLT had both multiple-choice and constructed-response items; the potential that students' responses composed using assistive technology—such as speech-to-text software—could affect the test construct could not be evaluated. The researchers also reported that one of the odds ratio data adjustment methods was found not to be useful, while another approach demonstrated usefulness, in addressing the problem of data sparsity, particularly for some less commonly used accommodations.||X||O|
|Lovett & Leja (2015)||There were significant correlations between measures of similar constructs, including executive functioning and ADHD symptoms, processing speed and reading fluency, and both processing speed and fluency with number of comprehension items correct at 15 minutes. Postsecondary students with more ADHD symptoms or more executive functioning difficulties benefited significantly less from the extended-time condition. However, participants with more ADHD or executive functioning difficulties reported significantly greater perceived need for extended time, according to their SEPTAR scores, while their actual comprehension performance and SEPTAR scores were uncorrelated.||X||X||X||R|
|Lyman et al. (2016)||Analysis of the postsecondary students’ interviews uncovered six themes regarding perceived barriers to accessing and using accommodations, from the perspectives of these students with disabilities. The first theme was a desire for self-sufficiency, which included subthemes of the importance of being independent, being self-accommodating, and using accommodations as a backup. The second theme was a desire to avoid negative social reactions, which included not wanting to be viewed or treated differently, not wanting to be a burden, and fear of suspicion from others for receiving special treatment. Third was insufficient knowledge, including questioning the fairness of accommodations, lacking awareness of the DSS office or accommodations, and doubting whether one is “disabled enough” to receive accommodations (p. 129). The fourth theme, quality and usefulness of the DSS office and accommodations, pertained to “problems working with DSS and the process of setting up accommodations” (p. 129). The last two themes were: negative experiences with professors and fear of future ramifications.||X||X|
|McMahon et al. (2016)||The mean performance score across all participants was highest when students took the podcast-delivered science test (57%), and the second-highest mean score was oral delivery by teacher (54%), in comparison with the non-accommodated condition (46%). The effect sizes for the podcast condition (0.59) and teacher read-aloud condition (0.52) were judged to be medium and significant. The difference between the two accommodated conditions was not significant. Group mean comparisons between students with reading difficulties who were not identified with disabilities and students with disabilities yielded different results. The mean scores of students without disabilities were significantly higher than those of students with disabilities in the standard administration and the teacher-read test delivery, but less than 10% higher (and not significant) for the podcast-delivered test. Participants' teachers commented anecdotally that students seemed more focused and had less wait-time in the podcast testing condition, due to students choosing when and how much to listen to certain test items. The researchers noted that all of the participants had reading difficulties, and that the oral delivery accommodation conditions supported the participants.||X||S|
|Miller et al. (2015)||These postsecondary participants in both groups were comparable with one another in the numbers of items they attempted and completed in the standard, 150%-time, and 200%-time conditions; they also had similar numbers of correct answers in each of the time conditions. Comparing students with ADHD with extended-time accommodations against students without disabilities with standard administration time, students with ADHD attempted and completed significantly more items than students without disabilities; the same comparison also yielded that students with ADHD (with extended-time) scored significantly higher than students without disabilities (with standard-time) in reading comprehension. Comparisons between each student's scores across the three conditions for each group indicated that students with ADHD had similar improvements in scores as students without disabilities. In other words, students with disabilities did not differentially benefit under extended-time conditions to a greater degree than students without disabilities.||X||X||R|
|Monagle (2015)||Analysis of the postsecondary students' response data revealed how attitudes and demographic variables—specifically year in school, major course of study, and self-identified disability category—influence students' use of accommodations. Students were more likely to request accommodations in their second or third year, rather than their first year in college. Students with multiple disabilities were found to use accommodations at higher rates than students who self-identified with singular disability categories (e.g., learning disabilities). Additionally, "students majoring in the Humanities and Liberal Arts were more likely to request accommodations than those in math, science and engineering" (p. 88). Last, "students with a more positive attitude toward accommodations were more willing to use [accommodations]" (p. 90).||X||X||X|
|Nelson & Reynolds (2015)||Postsecondary student participants reported varying amounts of experience using speech recognition tools prior to the current writing task. Beginning users learned to use the software well, yet faced challenges such as enunciating clearly, even after the software was trained to their voices, and becoming accustomed to continuing to speak without being distracted by their mistakes appearing on screen. They determined that use of the keyboard and mouse at times became necessary when editing, rather than using voice commands only. These new users remarked that using this support made composing quicker and easier than typing (manually), and reduced the likelihood of their becoming tired early in the task. For at least one new speech recognition user, the student with ADHD and mental health disabilities, the software allowed better spelling and fewer errors that typically had slowed down her writing process. Two participants indicated substantial experience with speech recognition in middle school and high school, and one indicated using a sort of conversational approach, treating the computer as a human listener. Both of these students noted having learned not to stop after each sentence to make spelling and grammar corrections, so as not to disrupt their trains of thought. They also both became practiced at organizing their thoughts without pre-planning or using written outlines or notes. They both indicated that they performed editing by using the keyboard rather than by voicing edits to their computers.||X||X||W|
|Newman & Madaus (2015a)||The researchers reported that student characteristics facilitated or blocked seeking and receiving accommodations in various postsecondary settings. Overall, 15% of students with disabilities used "accommodations and other disability-specific services" (p. 213) in career and technical education, 22% used them in four-year colleges, and 25% did so in two-year colleges. According to odds ratios, students in two-year programs who had participated in transition planning in high school were more likely to receive academic accommodations. Further, students in two-year and career and technical programs whose transition plans specified accommodations needed in postsecondary education were more likely to receive them. Students with apparent and observable disabilities—sensory disabilities, mobility/orthopedic impairments, and multiple disabilities—more commonly received accommodations than students with less-visible disabilities, such as learning disabilities, particularly at two-year and four-year institutions. Students with attention-related disabilities at four-year institutions were more likely to be provided accommodations than other students with disabilities. Students from lower income families were less likely to receive disability-related services than their peers from higher income families. Some factors were associated with not receiving accommodations, including students in two-year programs scoring higher on the self-realization subscale pertaining to knowing their challenges.||X||X|
|Newman & Madaus (2015b)||Of the postsecondary student sample, about 95% used accommodations in high school, and about 23% of them used accommodations at the postsecondary level; this marked a statistically significant decrease in accommodations use rate. The accommodations use rate varied across types of postsecondary institutions: about 15% in career and technical education, about 22% in four-year colleges, and about 23% in two-year colleges. About 35% of the students informed their postsecondary institution of their disabilities, 50% did not consider themselves to have a disability, and about 14% indicated that they chose not to disclose their disability and then did not seek accommodations. The most commonly used accommodations were test-related accommodations, most often extended time and alternate settings; about 88% used test accommodations in high school and 21% did so in postsecondary education. Specifically, 12% of the student sample used test accommodations in career and technical education, 20% at four-year colleges, and 21% at two-year colleges. These rates did not differ significantly by type of postsecondary program. The researchers commented that some possible explanations for the decreases in accommodations use include the students' perception that they do not experience disability-associated academic challenges, or an ongoing lack of full understanding of their disabilities and accommodations during high school and into postsecondary education, or a lack of information about "the differences in legal rights and responsibilities between high school and postsecondary school" (p. 7).||X||X|
|Ofiesh et al. (2015)||These postsecondary participants reported that their ADHD condition affects test-taking related to attention and focus difficulties, distractibility, management and perception of time, and movement. Students with ADHD alone reported using extended time for one or more reasons, including addressing attention problems by taking a break then re-focusing, allowing time for their distractibility and executive functioning difficulties, permitting moving around, and self-monitoring. The researchers noted that taking formal breaks during testing could be more appropriate for many students with ADHD. Students with reading disabilities as well as ADHD indicated using extended time to permit slower reading rates.||X||X|
|Ohleyer (2016)||The first three research questions examined accommodation type and scaled scores across fourth, fifth, and sixth grade levels. Accommodation type, specifically assistive technology and read-aloud directions (by administrator), significantly affected scaled scores, but grade level did not. The fourth research question examined differences in growth scores by use of assistive technology. Consecutive use of assistive technology across two years predicted higher growth scores on the state assessment, compared to growth scores of students who did not use assistive technology for two consecutive years. Assistive technology and growth scores were positively correlated with one another. In the process of analyzing data, the researcher noted accommodations use patterns. Grade 3 students tended not to use assistive technology for state writing assessments, due in part to their limited experience with using assistive technology for writing in the classroom (and so the researcher excluded that student population from the study). Of the 7,225 students with learning disabilities, 59% used oral script, 12% used extended time, 8% used directions (only) read aloud, 2% used assistive technology, and less than 2% used scribe. The researcher observed that students with learning disabilities using assistive technology were more likely to be male, White, and not from the low-SES group, relative to the sample as a whole.||X||X||W|
|Peterson (2016)||Three themes emerged in the analysis: 1) academic challenges of students with disabilities were independent of age or type of disability, 2) "Lack of campus-wide disability support" (p. 91), and 3) "The burden of disabilities services" (p. 92). Several subthemes were also identified, and their distribution across participants' statements was reported. For the first theme, a subtheme reported by all participants was "The transition from modifications to accommodations" (p. 93), referring to the difference between K-12 education and higher education in the nature of support: specifically, that education can sometimes be accommodated and sometimes modified for K-12 students with disabilities, and that postsecondary education is only accommodated. Another common subtheme for the first theme was based on seven participants' observations that students with disabilities do not always use accommodations that they have requested. For the second theme, all participants indicated that postsecondary institutions have instances of "poor adherence to Universal Design" (p. 114) and also of not being fully accessible and responsive to the needs of postsecondary students with disabilities, either in-person or online—including the physical facilities as well as the academic learning environments. Additional subthemes, each reported by over half of participants, were that DS office personnel, including service coordinators, have received little technology training; that student workers staff high-demand DS office services; that service delivery is often not planned for or budgeted beyond the current school year; and that the entire institution needs to share responsibility for, and be trained in, providing access, yet the responsibility is routinely shifted only to the DS office. For the third theme, there were five subthemes, and five to seven participants endorsed each of them. 
Burdens included participants' experiences of hearing from institutional administrators about the financial and other resource costs of disabilities services. In this view, the DS office "is seen as an expensive choice, not a mandate" (p. 151), and even "reasonable accommodations have upfront costs" (p. 151). There is an assumption that community colleges are positioned to more successfully address the access needs of students with disabilities. The other subthemes include: "Disabled students often do not know what has been provided for them," and "There is rarely a one-size-fits-all approach" (p. 151).||X||X|
|Potter et al. (2016)||Both postsecondary students with disabilities and without disabilities completed significantly more items in the test booklet, compared to the standard responding condition—the bubble answer sheet. Students without disabilities answered significantly more items than students with disabilities across both response formats. There were no interaction effects by group and condition. Both groups also answered significantly more items correctly in the test booklet accommodation. Students without disabilities scored significantly higher on average than students with disabilities, again with no interaction effects by group and condition. Finally, all students with disabilities, and nearly all (70 of 76) students without disabilities, indicated a preference for responding to items in the test booklet; that is, there was no significant difference in groupwise preferences. Because both groups benefited comparably, the researchers indicated that the accommodation is not valid, and that it might affect the reading vocabulary construct.||X||X||X||R|
|Ricci (2015)||In each of the three within-state comparisons between students with disabilities using and not using oral delivery via text-to-speech, the researcher found statistically significantly lower mean NAEP reading comprehension scores for those using the accommodation. Effect sizes were medium in New York, very large in New Jersey, and medium to large in Connecticut. The researcher concluded that oral delivery did not benefit the students with disabilities who received the accommodation.||X||X||R|
|Rosenblum & Herzberg (2015)||Of the four tactile graphics item formats, the largest number of student participants (10 or more) answered correctly using the microcapsule map, and the fewest students (4 or fewer) answered correctly using the collage picture with hot glue and braille labels. Participants described how they examined the map for information in terms of searching for the title and map key. Most students indicated that the map information was clear; students indicated their preferences and suggested improvements. Three participants expressed confusion about the information on the embossed bar graph to the degree that they could not answer some or all of the content questions. Most students indicated that the columns were difficult to distinguish from one another; several suggested using different textures for each column, and some suggested changing label locations. With the thermoform graph, about half of the students indicated that the line textures were clear, while the other half expressed that the lines were difficult to discern. The picture collage—having five labeled parts—was reported to clearly communicate the features needed for answering questions, yet most students did not accurately measure (with rulers) the dimensions of the object's shape. Few students suggested improvements, but some indicated that label locations could be changed; the researchers indicated that the ruler might have been unfamiliar, affecting the students' responses. Nearly all participants indicated that they had not been asked by educators for their input on the design of tactile graphics.||X||X||M,S|
|Rudzki (2015)||Analyses revealed no statistically significant relationships between students' reading proficiency levels and their having learning disabilities, their enrolled program type, the amounts or types of special education services they received, their accommodation types, or their attendance patterns. Furthermore, these variables had no statistically significant relationship to the students' reading assessment z-scores. The use of accommodations—such as a combination of alternate setting, extended time, and small-group administration, or small-group administration of the reading assessment alone—did not differentially impact reading test proficiency levels or scores, in part because there was no variation in scores: no participants scored at the proficient level. The researcher concluded that the set of factors analyzed for impacts on test performance could have missed an important additional factor that influenced reading performance. Alternatively, the researcher noted, if it is correct to expect that students with learning disabilities can demonstrate reading proficiency, perhaps the assessment was not properly accessible for these participants.||X||R|
|Ruhkamp (2015)||The researcher described the model of themes, with four themes and several subthemes for each. The themes included the "A3 Model," with three components—advocacy, accommodation, and accessibility; positive or negative tone; and testing. Five of the six participants, who were postsecondary students with disabilities, reported needing and receiving accommodations during exams; the student who was deaf indicated not using test accommodations. Reasons for seeking exam-related accommodations included having already failed or otherwise done badly on college exams, and many participants had been anxious during exams. Most participants indicated that extended time and an alternate setting (with decreased distractions) supported them during exams; one participant reported receiving read-aloud of items by the exam proctor. Participants reported various benefits from accommodations, including better understanding of exam items and improved exam performance, as well as other effects—such as increased confidence and comfort, and a decreased sense of pressure. Emotions and perceptions about the accommodations process were elaborated, including nervousness about educators' responses to students' needs for accommodations, and concern about the possibility that accommodations would not be beneficial. Two students reported feeling guilty, and one expressed shame, about receiving accommodations when others (who presumably did not need them) were not provided accommodations. Three expressed frustration with the process of seeking accommodations, especially initially. Three participants indicated that they ought to have sought accommodations earlier than they had. The researcher also compared the experiences of traditionally-aged and older-than-average students, finding little difference between the groups regarding testing accommodations or regarding positive and negative experiences.||X||X|
|Seo & De Jong (2015)||The propensity score matching process showed that the two samples of students at each grade level were demographically very similar and had very similar mean math and reading scale scores. These performance scores became even more similar after matching—that is, the group mean differences across all the tests were smaller once the data matching was completed for each data set. Differential item functioning (DIF) analyses showed that none of the items functioned differently by test version (paper-based or online). Test-level comparisons indicated no significant differences in scoring patterns between grade 6 and grade 9 students taking the paper-based version and those taking the online version. The online social studies test-takers' mean scores in grade 6 were slightly (but not statistically significantly) higher than those of their matched paper-based test-taker peers, and similarly slightly higher than those of all paper-based test-takers. The effect size differences, between raw and scale scores, for both grade levels were negligible. Grade 9 online test-takers' mean scores were also not statistically significantly higher than those of their matched paper-based peers, and were slightly lower than those of all paper-based test-takers. These differences in grade 9 means between matched paper-based test-takers and all paper-based test-takers suggested to the researchers that "the appropriate matching method can have significant impact on the comparability study" (p. 106). The researchers asserted that the propensity score matching process produced more precisely matched datasets for comparison, with more equivalent comparison groups. 
Following this overall trend of slightly higher means, the proportions of online test-takers in grades 6 and 9 who achieved advanced, proficient, and partially proficient performance levels were slightly larger than those of their matched paper-based test-taking peers; the proportion of online test-takers with 'not proficient' scores was smaller than the proportion of matched paper-based test-takers in both grades. The brief survey results indicated that about 70% of students preferred the online test mode, 10% preferred the paper-based test, and 20% had no preference; further, none of the online test-takers indicated discomfort with the computer.||X||X||SS|
|Seo & Hao (2016)||The researchers discussed the concerns presented by other data analysis approaches (e.g., differential item functioning/DIF or differential test functioning/DTF) for comparing the accommodated and non-accommodated versions, and the ways that person-fit analysis was designed to address these issues. They reported that their calculations of fit and misfit percentages for the accommodated and nonaccommodated versions yielded similar results, demonstrating scale comparability between the versions. They further explained that the accommodated version validly measured student ability, in a manner equivalent to the nonaccommodated version of the science test. They concluded that, using these data as an example, the lz person-fit index analysis showed promise for these types of comparative analyses when the circumstances do not fit the application of t-tests or DIF approaches.|
|Sokal (2016)||The researcher reported three themes that emerged from the interview data from postsecondary instructors and AS (accessibility services) professionals: perceptions of fairness; roles, adaptation, and training; and providing access. Professors indicated that they were sensitive to the need for justification when providing accommodations for students with anxiety-related disabilities. Participants described the accommodation process as responding to a continuum of need, with varying amounts of documentation supporting different accommodations. There were points of disagreement and tension between professors and AS staff members. The essential tension was between accommodating students' needs and supporting the development of students' coping skills. The researcher described the various perspectives expressed by professors about addressing the needs of students with anxiety disorders, noting that all participants indicated that more open communication could help them reach understandings about supporting students.||X||X|
|Spenceley & Wheeler (2016)||On average, postsecondary students with disabilities completed course exams within 103% of the standard offered time; in other words, they typically needed only a little more than the standard time to answer all exam questions. Participants who were offered 1.5x or 150% extended time (90 minutes for every hour of standard exam time) used, on average, 96% of standard exam time, and students who were offered 2.0x or 200% extended time used an average of 112% of standard exam time. The researchers indicated that about 55% of students provided extended time actually completed exams within the standard time. Patterns of time use for students in various disability categories were also reported. On average, postsecondary students with visual disabilities, medical disabilities, and learning disabilities completed exams within the standard time, while students with other disabilities—including ADHD, autism, physical, and multiple disabilities—typically completed exams in more than the standard time. Of students provided 1.5x extended time, all students with visual impairments, over 90% of students with learning disabilities, and 89% of students with multiple disabilities completed exams within that time; in contrast, only 63% of students with psychiatric disabilities did so. Of students provided 2.0x extended time, all students with visual impairments, 95% of students with learning disabilities, and at least 90% of students with ADHD, psychiatric disabilities, or physical disabilities completed exams within that time. About 34% of course exams—taken by participants in various disability categories—were not completed within either extended time condition.||X||X|
|Spiel et al. (2016)||Student participants with ADHD, on average, scored significantly higher when provided in-person oral delivery and small-group accommodations than when reading the science test items silently to themselves. Students without ADHD, on average, scored essentially the same with and without accommodations. The participants with ADHD scored significantly lower than the participants without ADHD in the silent testing condition, but the two groups did not score significantly differently when receiving oral delivery in small groups. Individual comparisons of students with ADHD across both testing conditions indicated that only one student with ADHD scored lower—and the other 15 scored higher—with oral delivery than with no accommodation. Individual comparisons of students without ADHD showed that 12 scored lower and 8 scored higher with oral delivery (and small group), compared with the silent-reading, large-group testing condition. An additional exploratory analysis examined whether students with ADHD received a differential boost from the accommodated testing condition. The results indicated that all low-scoring students benefited from the accommodations, and that students with ADHD benefited more, though not statistically significantly so. The researchers concluded that two factors might have benefited students with ADHD in the accommodated condition: the smaller number of test-takers in the small-group condition reduced potential distractions, and engaging with the test items through both visual and auditory senses increased sustained attention and decreased test response errors.||X||S|
|Südkamp et al. (2015)||The researchers described test response patterns as well as correctness of answers for each student participant group, distinguishing among items that were skipped ("omitted"), "not reached," and invalid responses. Students had the fewest omitted responses on the easy test, the fewest not-reached items on the reduced test, and the fewest invalid responses on the reduced test—attributed to the fact that the reduced test had the fewest items requiring matching. Further, for students with special educational needs in learning (SEN-L), item completion increased on the reduced and easy versions in comparison to the standard test. There were few differences in item fit between the general education participants and the students in the lowest academic track (LAT) on the standard test, but a higher rate of item misfit for students with SEN-L. The reduced test demonstrated better item fit for students with SEN-L and for LAT students (without disabilities) than the other test versions; in contrast, the easy test fit relatively well for the LAT students but not for the students with SEN-L. In terms of item difficulty, the easy test was found to be too easy for students in the LAT, and too difficult for students with SEN-L. Differential item functioning (DIF) analyses found that, for LAT students, the test versions were comparable to one another; few items had more than slight DIF. In contrast, all three tests demonstrated many items with strong DIF for students with SEN-L. The researchers concluded that, due to item functioning and variance across the measures, scoring for students with SEN-L was not comparable or valid across the three test versions. In addition, the versions other than the standard test did not provide a valid comparison between students with SEN-L and students in general education.||X||X||R*|
|Timmerman & Mulvihill (2015)||One participant indicated that the most helpful accommodation was extended time during postsecondary exams, and the other indicated that the most helpful accommodation was oral delivery of exams. Both participants reported that they were special education majors, and that their peers were likely more understanding than average classmates about their need for accommodations. However, both noted that students in general courses outside their major sometimes expressed a lack of understanding of their need for accommodations. Both participants indicated that a major obstacle to success was needing more time than others to complete work, including course exams, and a realization that they had to work ahead when possible.||X||X|
|Weis et al. (2016)||The incidence of recommended exam accommodations and modifications for postsecondary students was reported. The most frequent (about 90%) was extended test time, ranging from 50% additional time to unlimited additional time. Other accommodations included technology use during exams (70%)—such as a calculator (48%), word processor (30%), spellchecker only (24%), speech-to-text (9%), and text-to-speech (8%). About 27% of students completed exams in a separate room, and other accommodations—such as a dictionary or thesaurus, outlining, and breaks—were recommended for less than 10% of students. The researchers also reported recommendations of modified testing (53%)—such as simplified directions and access to notes and formulas—and modified grading (11%), such as retaking tests with no penalty. The researchers indicated that objective evidence of need for accommodations included a history of learning disabilities, current diagnoses, test data, and functional impairment, although these evidentiary sources had differing degrees of accuracy in warranting need for accommodations, and some recommendations were incomplete or unspecified for test types, content, or conditions. According to these criteria, about 94% of participants recommended for extended test time actually met criteria for receiving it; 46% of students recommended for a calculator met criteria; 70% recommended for an exam reader met criteria; 30% recommended for a word processor met criteria; 26% recommended for a separate room met criteria; 24% recommended for spellchecker only met criteria; 9% recommended for speech-to-text met criteria; 8% recommended for text-to-speech met criteria; and 6% recommended for outlining supports met criteria. Evaluations of the appropriateness of individual modifications to testing and grading were also reported.||X||X||M|
|Williams (2015)||Most participants (80%) could identify some of the accommodations they received; only one participant knew all of them, and one knew none of them. Nearly half of the participants (40% each) indicated either positive feelings—confidence and comfort—or negative feelings—differentiation from peers and inadequacy—about taking tests with accommodations. When describing their accommodations, more than half of the participants (60%) indicated positive feelings about them, one student had negative feelings, and the remainder made comments unrelated to the question. When relating their feelings about themselves for needing accommodations, half of the participants indicated positive feelings and fewer (30%) indicated negative feelings. When asked whether accommodations affected their knowledge and skills during testing, most (70%) reported that they did, and the remainder indicated that they did not. All participants reported that their accommodations affected their assessment performance scores. Half of the participants preferred to take tests with accommodations and half preferred to take tests without accommodations. Participants also reported decreased distractions and increased comfort when receiving accommodations, and that oral delivery increased comfort, made the test easier to comprehend, and improved test results. One distraction that participants noted in the regular classroom testing environment was the sound of students who finished before them. One participant offered a contrasting comment: because he wanted to return to the classroom, he felt rushed when receiving accommodations and consequently scored worse on tests. A few participants elaborated on their perceptions of social stigma when receiving accommodations.||X||M,R|
|Yssel et al. (2016)||The researchers reported that, in contrast to their previous study published in 1998, postsecondary students perceived that faculty members were positive and willing to provide accommodations to support their academic progress. A few students indicated that, while instructors seemed unfamiliar (and possibly uncomfortable) with blindness and dyslexia as they can affect academics, they nonetheless sought to provide accommodations. In at least one student's experience, faculty efforts were overly accommodating, resulting in accommodations that were not particularly helpful. Students predominantly indicated their desire to be permitted access to higher education, and some described their challenges in developing self-determination and self-advocacy skills. Some interviewees indicated that these skill challenges had in some cases become barriers to their academic pursuits.||X||X|
|Zambrano (2016)||Postsecondary students with disabilities indicated their perceptions about academic accommodations relevant to their course exams. Seven of the eight students reported having made use of exam accommodations; one student had not registered with the Disability Resource Services office and did not receive accommodations. The researcher found few similarities in experience across postsecondary students with various disabilities, and concluded that participants' different disabilities indicated three different ways of understanding challenges in postsecondary education: those based on physical disabilities, learning disabilities, and mental health disorders. These groupings yielded different primary barriers: barriers in the physical environment that cause physical fatigue; barriers pertaining to the nature of the instructional material and learning conditions that require organizational and cognitive skills; and learning environment barriers and triggers that require management of emotional crises. Participants reflected on themselves as learners and on the university as their teacher. They viewed themselves as capable, with neither lesser nor elevated status or importance in comparison to their peers without disabilities. They sensed from peers and faculty members a lack of acceptance and limited awareness or understanding of them as learners, beyond the legal framework of their rights to access. Without this understanding, the students observed, there was little institutional communication about accessibility and accommodations information, particularly outside of (and except from) the DRS office.||X||X|
|Zeedyk et al. (2016)||The researchers summarized the research literature about the challenges and needs of youth preparing for postsecondary education, including 12 empirical studies along with other peer-reviewed research reporting expert recommendations. Through their analysis of six studies reporting data from the National Longitudinal Transition Study-2, and other studies, the researchers identified underlying social and academic needs, and reviewed transition and accommodations issues. Highlighting the lack of quasi-experimental studies on the impact of academic supports for this population, the researchers reported, in part, on exam accommodations usage, including a private testing room and earplugs for minimizing intense sensory stimuli, as well as extended time for processing delays.||X|
M=mathematics, O=other language arts, R=reading, S=science, SS=social studies, W=writing
R* = reading in German (setting was Germany)