
Data Informed Accessibility: A Review of the Literature

Vitaliy Shyyan, Martha Thurlow, Sheryl Lazarus, Laurene Christensen, Jessica Corpe, Christopher Rogers, and Erik Larson

September 2017

All rights reserved. Any or all portions of this document may be reproduced and distributed without prior permission, provided the source is cited as:

Shyyan, V., Thurlow, M. L., Lazarus, S. S., Christensen, L., Corpe, J., Rogers, C., & Larson, E. (2017). Data informed accessibility: A review of the literature. Minneapolis, MN: University of Minnesota, Data Informed Accessibility—Making Optimal Needs-based Decisions (DIAMOND).


Introduction

Fairness is defined by the Standards for Educational and Psychological Testing (2014) as every test taker being assessed in an equitable way, with scores reflecting the same construct and having the same meaning for all test takers. Along with demonstrating validity and reliability, all assessments (including classroom-based, formative, interim, and summative assessments) must meet fairness requirements. The provision of universally designed assessments and accommodations has been a primary avenue for striving to attain fairness in testing.

Accommodations are changes to the format or procedures of instruction or assessments that do not change the construct taught or measured. Much attention has been paid over the past several decades to research on the effectiveness and validity of accommodations for students with disabilities (Buzick & Stone, 2014; Cawthon & Leppo, 2013; Li, 2014; Rogers, Lazarus, & Thurlow, 2014, 2016) and English learners (ELs) (Abedi, Lord, Hofstetter, & Baker, 2000; Kieffer & Lesaux, 2009; Pennock-Roman & Rivera, 2011). This research paved the way for recognizing that assessment accommodations that do not change the construct being measured are a vital part of access to the assessment.

Over time, though, there was an increasing recognition that accessibility in assessments is important for all students (Thurlow, Lazarus, Christensen, & Shyyan, 2016), beyond the provisions of universal design (Ketterlin-Geller, 2008; Thompson, Thurlow, & Malouf, 2004). Accessibility has been defined as “the degree to which the items or tasks on a test enable as many test takers as possible to demonstrate their standing on the target construct without being impeded by characteristics of the item that are irrelevant to the construct being measured” (AERA, APA, NCME, 2014, p. 215). With the recognition of the need for broader accessibility, and with changes in state approaches to setting accessibility and accommodations policies, there now are many more students who would benefit from greater accessibility and many more educators who will need to make decisions about the kind of accessibility and the accommodations needed for all students. These students include those who have not been identified as having a disability or being an EL, but would still benefit from the greater accessibility in the assessments they take.

A substantial amount of literature has documented the challenges in making decisions about accommodations (e.g., DeStefano, Shriner, & Lloyd, 2001; Ketterlin-Geller, Alonzo, Braun-Monegan, & Tindal, 2007; Shriner & DeStefano, 2003). It is now more important than ever to provide training that ensures educators who have never before made accessibility decisions, including decisions for students who are not ELs and do not have disabilities, know how to make these decisions for each individual student.

The Data Informed Accessibility—Making Optimal Needs-based Decisions (DIAMOND) project is a collaboration of Minnesota, Alabama, Connecticut, Maryland, Michigan, Ohio, West Virginia, Wisconsin, the Virgin Islands, and the National Center on Educational Outcomes. The project’s goal is to improve the validity of assessment results and interpretations for students with documented needs by developing guidelines for making informed decisions about accessibility and accommodations. It will promote a decision-making process that moves beyond the use of a checklist approach (which often results in identifying tools and accommodations that do not provide access to the student), to an approach that relies on the use of classroom progress data and other measures charted over time to evaluate individual student needs. All students who require greater accessibility and accommodations—students with disabilities, ELs, ELs with disabilities, as well as other general education students with accessibility needs and preferences—are served by this project.

The purpose of this literature review is to: (a) summarize the shift that has occurred in approaches to accessibility in testing (including the paradigm shift in what accessibility for all students means, and its relationship to accessibility in instruction), (b) identify the gaps in educators’ knowledge and application of accessibility approaches and their need for training, and (c) explore the approaches to professional development that might be used to develop accessibility and accommodations training for all educators. A wide-ranging search was conducted using typical search engines as well as Google Scholar, with a focus on those articles that had implications for professional development.

 

Accessibility Paradigm Shift

Over the more than 100 years in which large-scale assessments have been administered to groups of students, test developers and administrators have striven to standardize materials and procedures. With the push in the early 1990s to ensure that all students were included in assessments came the recognition that some students, those with disabilities (with Individualized Education Programs—IEPs or 504 plans) and possibly those who were ELs, might require accommodations to meaningfully participate in the assessments. Accommodations were defined as changes in the materials or procedures that enabled students with disabilities (and ELs) to participate in assessments without changing the construct (what the assessment was intended to measure) (Christensen, Carver, VanDeZande, & Lazarus, 2011; Shyyan, Christensen, Touchette, Lightborne, Gholson, & Burton, 2013). Between 1990 and 2010, all states came to have policies on accommodations for students with disabilities in state testing (Christensen, Braam, Scullin, & Thurlow, 2011), in part prompted by the requirement in IDEA 2004 that states provide accommodations guidelines and report the number of students receiving accommodations on the regular assessment. Most, but not all, of these policies also identified accommodations for ELs (Christensen, Albus, Liu, Thurlow, & Kincaid, 2013; Rivera & Collum, 2006).

In 2010, the start of a dramatic shift took place. Rather than focusing only on accommodations, assessment experts began to recognize that any student might have an accessibility need that could be addressed if state tests were designed with the principles of universal design in mind (Rose & Meyer, 2002; Thompson et al., 2004) and if features of the assessment were broadened to address some of these accessibility needs.

The paradigm shift that emerged in 2010 was promoted as part of the Race to the Top Assessment funding initiative, which provided funding for consortia of states to develop innovative computer-based assessments. This initiative identified the need for accessibility (rather than only accommodations) when it funded two consortia of states to develop rigorous, more innovative assessments based on common college- and career-ready standards, and it required that the assessments be accessible for a wide range of students (U.S. Department of Education, 2009). The two funded consortia—the Partnership for Assessment of Readiness for College and Careers (PARCC) and the Smarter Balanced Assessment Consortium (Smarter Balanced)—both identified a three-tier structure for ensuring accessibility in their assessments (PARCC, 2015; Smarter Balanced, 2016).

The three-tiered approach used by PARCC and Smarter Balanced defined the tiers of accessibility in terms of the student needs they were intended to serve and which types of changes to materials and procedures might be used by those students. Both consortia provided accessibility approaches not only for those students who in the past had access to accommodations or other accessibility features (i.e., students with disabilities, ELs, and ELs with disabilities), but also for students who had not had access to accommodations or other accessibility features in the past (i.e., other general education students).

Assessments designed specifically for ELs and students with the most significant cognitive disabilities also adopted tiered approaches. Two assessments of English language proficiency (ELP)—the English Language Proficiency Assessment for the 21st Century (ELPA21) and the WIDA assessments—identified tiers for the ELs taking those assessments (ELPA21, 2015; WIDA, 2015). Similarly, the two consortia for alternate assessments based on alternate achievement standards (AA-AAS)—Dynamic Learning Maps (DLM) and the National Center and State Collaborative (NCSC)/Multi-State Alternate Assessment (MSAA)—identified tiers for students with the most significant cognitive disabilities (Dynamic Learning Maps, 2016; National Center and State Collaborative, 2015).

Current literature supports the notion of accessibility as a broader concept, one that benefits all students. This literature is based, in part, on the greater possibility of providing different accessibility features and accommodations to different students through a technology platform (e.g., Christensen, Shyyan, & Johnstone, 2014; Thurlow, Lazarus, Albus, & Hodgson, 2010; Thurlow, Quenemoen, & Lazarus, 2011). This approach is consistent with the push to develop assessments for the widest range of students from the beginning while maintaining the validity of results generated by the assessment. It is consistent with the requirement that all students are to work toward grade-level content standards that are aligned with college and career readiness, based either on grade-level achievement standards (for nearly all students) or alternate achievement standards (only for those students with the most significant cognitive disabilities) (U.S. Department of Education, 2015a). It is reinforced by the 2015 reauthorization of the Elementary and Secondary Education Act (ESEA, now titled the Every Student Succeeds Act–ESSA), and with the guidance provided to reviewers of states’ assessment systems (U.S. Department of Education, 2015b).

Current research suggests that the use of pre-identified accessibility features and accommodations should be based on individual student needs and preferences to be most effective (Christensen, Shyyan, Rogers, & Kincaid, 2014; Shyyan, Christensen, Rogers, & Kincaid, 2014). Also, in order to provide optimal results, accessibility features and accommodations must be similar or identical to the accessibility features and accommodations used in the classroom. The process of making accessibility decisions must be done with integrity so that these features and accommodations produce valid results reflective of what students know and can do (Elliott & Thurlow, 2006; Elliott, Kratochwill, & Schulte, 1999).

Although making decisions about accommodations has always been challenging (Ketterlin-Geller et al., 2007; Shriner & DeStefano, 2003), the new system of accessibility features and accommodations poses additional challenges for specialists in measurement, curriculum, special education, English as a second language (ESL)/bilingual education, and general education, who are now asked to make informed decisions on how larger numbers of their students participate in assessments. Simply providing definitions of the new features and accommodations is not enough. Targeted training, including specific approaches to using data to inform decisions, needs to be implemented so that educators can make optimal decisions. These methodologies are needed by educators who may not have had to make these decisions before (for instance, general education teachers), yet are also needed by members of IEP teams to ensure that their decisions are informed by data (Ketterlin-Geller et al., 2007; Mariano, Tindal, Carrizales, & Lenhardt, 2009).

Although instructional and assessment accessibility features and accommodations are provided to students, there is evidence that they do not always lead to valid results. For example, students are sometimes over-accommodated or under-accommodated, which may lead to ineffective use of the accessibility features and accommodations provided and have an impact on students’ test performance (Shyyan, Thurlow, Christensen, Lazarus, Paul, & Touchette, 2016).

 

Universal Features, Designated Features, and Accommodations

As shown in Table 1, the assessment consortia, as well as states that do not belong to a consortium, use different terms for tiers of accessibility, even when they have the same number of tiers. For simplicity here, we use the terms universal features, designated features, and accommodations to refer to the three tiers that appear in the policies of the general assessment consortia and those of many states (see Tables 2-4 for examples of each tier). Consistent across most policies is the designation of both features and accommodations that are embedded within the technology platform and those that are not embedded in the platform. Inconsistent across policies are the names given to the different tiers and the students for whom they are provided (Shyyan, Thurlow, Larson, Christensen, & Lazarus, 2016). Also inconsistent in some cases is the number of tiers available. DLM and WIDA provide two tiers. ELPA21 and PARCC provide a fourth level of administrative considerations for all students, such as time and place of assessment.

Table 1. Terminology and Targeted Students in Consortia’s Tiers1

Each cell gives the tier name, with the target population in parentheses; - - - indicates that the consortium does not provide that tier.

| Consortium | Universal Features | Designated Features | Accommodations |
|---|---|---|---|
| General Assessment | | | |
| PARCC | Features for All Students (All Students) | Accessibility Features Identified in Advance (All Students) | Accommodations (Students with Disabilities (IEP or 504) and ELs) |
| Smarter Balanced | Universal Tools (All Students) | Designated Supports (All Students) | Accommodations (Students with Disabilities (IEP or 504)) |
| Alternate Assessment | | | |
| DLM | Supports Provided Within DLM via Personal Needs Profile (Some Students with Significant Cognitive Disabilities) | Supports Requiring Additional Tools or Materials; Supports Provided by Test Administrator Outside the System (Some Students with Significant Cognitive Disabilities) | - - - |
| NCSC/MSAA | Optimal Testing Conditions (All Students with Significant Cognitive Disabilities) | Assessment Features (Some Students with Significant Cognitive Disabilities) | Accommodations (Some Students with Significant Cognitive Disabilities) |
| ELP Assessment | | | |
| ELPA21 | Universal Features (All ELs) | Designated Features (Some ELs and Some ELs with Disabilities) | Accommodations (Some ELs with Disabilities) |
| WIDA | Accessibility Tools (All ELs) | - - - | Accommodations (Some ELs with Disabilities) |

1 For further descriptions of the tiers, see Dynamic Learning Maps (2016); ELPA21 (2015); National Center and State Collaborative (2015); PARCC (2015); Smarter Balanced (2015); WIDA (2015).

Universal features generally are available to all students participating in the assessment. For PARCC and Smarter Balanced, universal features are available to all students, including those not identified as having a disability (via an IEP or 504 accommodation plan) or as an EL. For the AA-AAS and ELP consortia, because those assessments are targeted to specific groups, the universal features tier includes only students with significant cognitive disabilities or students who are ELs, respectively.

Even though all of the consortia provide universal features, the specific features included within this tier differ. As seen in Table 2, which lists the features in this tier for the six consortia, there are many differences. The number of listed embedded universal features ranges from none (for NCSC/MSAA) to 13 (for Smarter Balanced). The number of non-embedded universal features ranges from none (for DLM) to four (for PARCC, Smarter Balanced, and WIDA). The most common universal features (listed in the policies of at least two-thirds of the consortia) are highlighter (PARCC, Smarter Balanced, ELPA21, and WIDA), keyboard navigation (Smarter Balanced, DLM, ELPA21, and WIDA), magnification (PARCC, Smarter Balanced, NCSC/MSAA, ELPA21, and WIDA), and scratch paper (PARCC, Smarter Balanced, ELPA21, and WIDA).

Table 2. Consortia’s Universal Features2

| Consortium | Embedded Features | Non-Embedded Features |
|---|---|---|
| General Assessment | | |
| PARCC | Audio amplification; Bookmark; Eliminate answer choices; General masking; Highlight tool; Line reader mask tool; Magnification/enlargement device; Notepad; Pop-up glossary; Spellcheck or external spellcheck device; Writing tools | Blank scratch paper; Frequent breaks; Headphones or noise buffers; Separate or alternate location |
| Smarter Balanced | Breaks; Calculator; Digital notepad; English dictionary (for ELA-performance task full writes); English glossary; Expandable passages; Global notes; Highlighter; Keyboard navigation; Mark for review; Math tools; Spell check; Strikethrough; Writing tools; Zoom | Breaks; English dictionary; Scratch paper; Thesaurus |
| Alternate Assessment | | |
| DLM | Color contrast/invert color choice/overlay color; Magnification; Spoken audio | None |
| NCSC/MSAA | None | Pause the test administration and resume at a later time or another day as indicated by student needs; Provide scratch paper for students to make notes or solve math items |
| ELP Assessment | | |
| ELPA21 | Amplification; Answer choice eliminator; Audio support; Digital notepad; Expandable passages; Flag for review; Highlighter; Keyboard navigation; Writing tools; Zoom (item level) | Scratch paper |
| WIDA | Audio aids; Color contrast; Highlight tool; Keyboard shortcuts/equivalents; Line guide; Screen magnifier | Audio aids; Color overlay; Low-vision aids or magnification devices; Scratch/blank paper |

2 For further descriptions of universal features, see Dynamic Learning Maps (2016); ELPA21 (2015); National Center and State Collaborative (2015); PARCC (2015); Smarter Balanced (2016); WIDA (2015).

Designated features are identified in advance for some students with documented needs and are determined, in part, through educator input. Most consortia have features within this tier; they are generally available to all students participating in the assessment, but they must be identified in advance by educators who determine that they meet students’ documented needs.

As seen in Table 3, the specific designated features included in this tier differ across the consortia. For example, the numbers of embedded designated features vary from none (WIDA) to nine (NCSC/MSAA). The number of non-embedded designated features varies from none (NCSC/MSAA and WIDA) to eight (Smarter Balanced). There are many differences in the consortia’s lists of designated features. Among the most common features (in at least two-thirds of the consortia) are some form of read aloud (PARCC, Smarter Balanced, DLM, NCSC/MSAA, and ELPA21), some type of color contrast or selection (PARCC, Smarter Balanced, NCSC/MSAA, and ELPA21), and masking (PARCC, Smarter Balanced, NCSC/MSAA, and ELPA21).

Table 3. Consortia’s Designated Features3

| Consortium | Embedded Features | Non-Embedded Features |
|---|---|---|
| General Assessment | | |
| PARCC | Answer masking; Color contrast; Text-to-speech | Human reader |
| Smarter Balanced | Color contrast; Masking; Text-to-speech; Translated test directions; Translations (glossaries); Translations (stacked); Turn off any universal tools | Bilingual dictionary; Color contrast; Color overlay; Magnification; Noise buffers; Read aloud; Scribe; Separate setting; Translated test directions |
| Alternate Assessment | | |
| DLM | None | Calculator; Human read aloud; Individualized manipulatives; Language translation of text; Sign interpretation of test; Test administrator entering responses for student; Two-switch system; Uncontracted braille |
| NCSC/MSAA | Alternate color themes; Answer masking; Color contrast; General masking; Increase/decrease size of text and graphics; Increase volume; Line reader tool; Magnification; Read aloud and reread item directions, response options, passage | None |
| ELP Assessment | | |
| ELPA21 | Answer masking; Color contrast; General masking; Line reader; Print on request; Turn off universal features; Zoom (test level) | Color overlay; Magnification device; Native language translation of directions; Noise buffer; Paper-and-pencil test; Read aloud; Student reads test aloud |
| WIDA | None | None |

3 For further descriptions of designated features, see Dynamic Learning Maps (2016); ELPA21 (2015); National Center and State Collaborative (2015); PARCC (2015); Smarter Balanced (2015); WIDA (2015).

Accommodations are available for students with IEPs and 504 plans, and sometimes ELs, or for a subset of these groups when they are the targeted population for the assessment. All consortia except DLM provide an accommodations tier, and all of these consortia use the term “accommodations” for this tier. As noted in Table 1, a primary difference between PARCC and Smarter Balanced is that Smarter Balanced provides accessibility features for ELs in tiers other than accommodations while offering accommodations for students with disabilities, including ELs with disabilities. Accommodations are also provided for ELs with disabilities by the ELPA21 and WIDA consortia.

Table 4 shows the accommodations provided by each of the consortia. As seen in this table, the numbers of embedded accommodations vary from none (DLM and NCSC/MSAA) to three (Smarter Balanced). Numbers of non-embedded accommodations vary from none (DLM) to 14 (PARCC). Some form of braille (PARCC, Smarter Balanced, ELPA21, and WIDA) and scribing (PARCC, Smarter Balanced, NCSC/MSAA, ELPA21, and WIDA) are the most frequently listed accommodations.

Table 4. Consortia’s Accommodations4

| Consortium | Embedded Accommodations | Non-Embedded Accommodations |
|---|---|---|
| General Assessment | | |
| PARCC | ASL video for ELA/literacy assessments; Closed captioning of multimedia on the ELA/literacy assessments | Assistive technology; Braille writer/note-taker; Calculation device on calculator and non-calculator sections of mathematics assessments; Extended time; General administration directions read aloud and repeated in student’s native language; Hard copy braille edition/tactile graphics/refreshable braille display with screen reader version for ELA/literacy; Human scribe; Human signer/human signer for test directions; Large print edition; Paper-based edition; Speech-to-text; Student reads assessment aloud to him- or herself; Word prediction external device; Word-to-word dictionary (English/native language) |
| Smarter Balanced | American Sign Language (ASL); Braille; Closed captioning for ELA listening items; Streamline | Abacus; Alternate response options; Calculator; Multiplication table; Print on demand; Read aloud; Scribe; Speech-to-text |
| Alternate Assessment | | |
| DLM | None | None |
| NCSC/MSAA | None | Assistive technology; Paper version of items; Scribe; Sign language |
| ELP Assessment | | |
| ELPA21 | Unlimited replays; Unlimited re-recordings | Assistive technology; Braille; Large print test booklet; Scribe; Speech-to-text |
| WIDA | Manual control of item audio/repeat item audio (one time) | Braille version of test; Extended testing time; Interpreter signs test directions in ASL; Large print version of test; Read aloud and repeat test options by human reader; Scribed response; Student responds orally using external augmentative and/or alternative communication device or software; Student responds using a braille writer or braille notetaker; Word processor or similar keyboarding device to respond to test items/student uses assistive technology to respond to test items |

4 For further descriptions of accommodations, see Dynamic Learning Maps (2016); ELPA21 (2015); National Center and State Collaborative (2015); PARCC (2015); Smarter Balanced (2015); WIDA (2015).

 

Accessibility for All Students

As large-scale technology-based assessments are being improved and enhanced with customizable accessibility features and accommodations, millions of students with disabilities, ELs, ELs with disabilities, and other general education students now are able to take advantage of these features and accommodations to access assessment and instruction content meaningfully. Current assessments are designed with all students in mind to account for their individual accessibility needs and preferences.

Students with disabilities account for about 13% of all children and youth ages 3 through 21. The primary disability of about 40% of these students is a learning disability; another 19% have speech or language impairments (U.S. Department of Education, 2014a). Students receiving special education services have IEPs that address their needs related to any of 13 disability categories that are diverse in nature: autism, deafness, blindness, emotional disabilities, hearing impairments, intellectual disabilities, multiple disabilities, orthopedic impairments, other health impairments, specific learning disabilities, speech and language impairments, traumatic brain injury, and visual impairments. The customizable accessibility features and accommodations available for these students allow for better ways of leveling the playing field in response to their unique disability needs. It is of paramount importance that IEP team members make appropriate accessibility and accommodations decisions for these students and that these decisions are informed by students’ classroom data.

ELs represent another rapidly growing population in the country. For example, 31% of students ages 6 through 21 in California are ELs (Liu, Albus, Lazarus, & Thurlow, 2016). The appropriate use of such language-related accessibility features as a glossary and thesaurus enables those students to demonstrate their knowledge while they are learning the English language.

ELs with disabilities are a growing portion of the students with disabilities subgroup in nearly every state (Liu et al., 2016). Although they do not represent a large percentage of the student population, they are part of the population of “all students” and are to be included in state assessments. Title I and Title III legislation require that ELs, including those with disabilities, be taught the same challenging content standards as their non-EL peers. Yet, state-level content assessments show that ELs with disabilities are among the lowest-achieving students (Lazarus, Albus, & Thurlow, 2016). For this group of students, both English language-related needs and disability-related needs may require specific accessibility and accommodations decisions from their IEP teams, which, according to guidance from the Department of Education, must include an expert in language acquisition (U.S. Department of Education, 2014b). Historically, ELs with disabilities have had less access to standards-based instruction than their peers with disabilities who are fluent in English and their non-EL peers without disabilities (Liu, Goldstone, Thurlow, Ward, Hatten, & Christensen, 2013; Zehler, Fleischman, Hopstock, Stephenson, Pendzick, & Sapru, 2003).

Accessibility features and accommodations play a key role in enabling many students to participate meaningfully in instruction and assessment. Until recently, most accessibility features were deemed to be accommodations; thus, previous research used only the term “accommodations,” even though the features studied as “accommodations” may now fall in another tier. Crawford and Ketterlin-Geller (2013) interviewed middle school special educators from five states and found that decisions about assigning assessment accommodations tended not to be supported from either a theoretical or an empirical perspective. According to Hodgson, Lazarus, and Thurlow (2011), several factors can explain teachers’ difficulty in making appropriate accommodations decisions. First, teachers may use either too few or inappropriate sources of information for accommodations decision making (Fuchs & Fuchs, 2001; Ketterlin-Geller et al., 2007). Some teachers use informal student observation without considering other data sources when making recommendations (Ketterlin-Geller et al., 2007). Sometimes teachers may consider the feasibility of providing the accommodation rather than individual student needs (DeStefano et al., 2001; Lazarus, Thompson, & Thurlow, 2006). Some teachers may tend to select accommodations that can be administered to a group of students in a resource or special education classroom setting. And some teachers may use student placement (e.g., reading instructional level) or demographic characteristics (e.g., ethnicity, socioeconomic status) to make accommodations decisions (Fuchs & Fuchs, 2001). Lovett (2010) concluded, after reviewing 20 studies of extended-time accommodations, that there was insufficient evidence for educators to use students’ identified disabilities as a basis for selecting accommodations.

Cawthon (2010) indicated that educators of students who are deaf or hard of hearing have used several types of evidence to determine effectiveness of accommodations. These include data on whether accommodations were listed on students’ IEPs, whether students expressed satisfaction with their assessment experience, as well as students’ test scores and their relative success on classroom assessments when using those accommodations.

Accessibility features and accommodations are meant to meet students’ individual needs. Generally, there should be consistency in use across instruction, non-summative tests, and large-scale state assessments, though some accessibility features and accommodations used for instruction or formative assessments may not be appropriate during large-scale testing. When this is the case, students should practice participation in assessments without these features and accommodations. Data gathered from the use of instructional accessibility should provide a foundation for making assessment accessibility decisions (Elliott & Thurlow, 2006).

To date, research on instructional accommodations has predominantly focused on examining inconsistencies between accommodations documented in students’ IEPs and accommodations used for standardized tests. Evidence suggests that some accommodations are introduced on test day, rather than implemented consistently across instruction and assessment (Maccini & Gagnon, 2006; Shriner & DeStefano, 2003; Ysseldyke, Thurlow, Bielinski, House, Moody, & Haigh, 2001). Other evidence suggests that some accommodations (e.g., setting accommodations) are more likely to be provided during assessment than instruction, while other accommodations (e.g., extended time, read aloud) are more likely to be provided during instruction than assessment (Bottsford-Miller, 2008). In addition, some accommodations are implemented more consistently across instruction and assessment at the elementary school level than at the secondary school level (Bottsford-Miller, 2008; Maccini & Gagnon, 2006). The diverse and sometimes polarized nature of these findings points to the need to develop a rigorous decision-making approach to inform the use of accessibility features and accommodations.

Many students need accessibility features and accommodations to meaningfully access instruction and tests. For example, in a study of students with disabilities in one state, Wu, Lazarus, Thurlow, and Turner (2010) found that 98.9% of the Grade 5 students with IEPs who participated in the regular assessment in reading used at least one accommodation, and that 99.7% used at least one accommodation in math. However, there is wide variation among states. In the PARCC consortium, for example, the percentage of students with disabilities using accommodations in states ranged from fewer than 10% to nearly 90%. In the Smarter Balanced consortium, the percentage of students with disabilities using accommodations in states ranged from a low of approximately 1% in one state to nearly 90% in another on the Grade 4 Reading assessment (National Center on Educational Outcomes, 2011).

Instructional accessibility features and accommodations are changes and supports that enable students to meaningfully access rigorous content during instruction. According to Nolet and McLaughlin (2005), “deciding on accommodations requires that teachers have a sound knowledge of key constructs—the facts, skills and concepts—embedded in a specific lesson or instructional unit” (p. 85). Instructional accommodation decisions have implications for making assessment accommodation decisions.

Every educator must be knowledgeable about the state and district academic standards and assessments to ensure that all students are engaged in standards-based instruction and assessments. Optimal decision making about the provision of appropriate accessibility features and accommodations begins with making appropriate instructional decisions. In turn, effective instructional decision making is facilitated by gathering, reviewing, and updating reliable information about the student’s accessibility needs and preferences, disability, English language proficiency, and present level of performance in relation to local and state academic standards (Shyyan, Thurlow, Christensen et al., 2016).

Teachers’ attitudes and knowledge base affect how they make accommodations decisions for both instruction and assessment (Sloan, 2015). In a survey of 2,387 special education teachers, Altman et al. (2010) found that 51% of the respondents considered student characteristics to be an important factor when making instructional accommodations decisions, but only 12% considered student performance to be an important factor. Further research is needed on decision-making processes related to accessibility features and accommodations and the implications of these decisions for instruction and assessment.

 

Teacher Needs and Gaps

In the context of technology-based assessments, new individualized approaches to accessibility place a much greater burden on educator teams and individuals who make decisions about which students need and should receive specific accessibility features and accommodations among a variety of accessibility choices. With the significant increase in the number of students who can now benefit from new opportunities for improved accessibility on technology-based assessments, larger numbers of educators become responsible for making appropriate accessibility and accommodations decisions for these students (Shyyan, Thurlow, Larson et al., 2016). Currently, general education, ESL/bilingual education, and special education teachers receive information from several sources on accessibility and accommodations for students with documented needs, but it is not always sufficient to enable them to confidently make and implement appropriate accessibility and accommodations decisions (Altman, Lazarus, Quenemoen, Kearns, Quenemoen, & Thurlow, 2010; Langley & Olsen, 2003). Educators need to be able to make optimal decisions about the use of accessibility features and accommodations for instruction and assessment for all students. They also need to provide leadership to encourage systemic school change, creating schools that have high expectations and support the learning of all students.

One of educators’ needs in the changing assessment environment is knowledge of the terminology used to identify accessibility approaches. The language adopted by the various testing platforms is not fully aligned, so educators must be able to navigate a complex system of assessment platforms, with accessibility features and accommodations referred to by platform-specific names (Shyyan et al., 2016). For example, on some assessment platforms, “amplification” may be referred to as “audio amplification” and on other platforms it might be called “increase volume” or “audio aids” (Shyyan et al., 2016, p. 13). Furthermore, as shown earlier, the accessibility tier to which a given support is assigned may vary from assessment to assessment. One example is text-to-speech, which may be allowable as a designated feature on one assessment and be considered an accommodation on another.

Not enough is known about what resources educators need to make optimal decisions about accessibility features and accommodations for all students (Warren, Christensen, Chartrand, Shyyan, Lazarus, & Thurlow, 2015). For educators who work with general education students, there may be additional challenges in that some of the accessibility features available for the assessment may not be consistently used in instruction. Educators need additional professional development on how to conduct classroom observations and collect informal data in order to inform their decision making on assessment accessibility features and accommodations.

Not all states report having clear mechanisms for providing professional development on accessibility for educators (Warren et al., 2015). Even in states that do provide professional development on accessibility, an added challenge is ensuring that the decision-making process is fully implemented. Fixsen, Naoom, Blase, Friedman, and Wallace (2005) identified six stages of implementation of new concepts: (1) exploration and adoption, (2) program installation, (3) initial implementation, (4) full operation, (5) innovation, and (6) sustainability. Making the change to a tiered approach of accessibility can be characterized as “implemented by force” (Backer, 2001; Leko, Roberts, & Pek, 2015) because accessibility policies have been implemented at a state or national level. A key issue for professional development is creating opportunities for educators to buy into accessibility decision making in order to foster attitudes of implementation by choice.

Researchers report a substantial and growing need for accessibility and accommodations training. Often the complexity of delivering the training presents significant challenges (Hodgson et al., 2011). Bublitz (2009) examined the relationships among training, knowledge, attitudes, and decision-making accuracy, and found that special educators’ knowledge about accommodations had a strong influence on the accuracy of accommodations decision making. Teachers report that they need additional training to learn how to confidently make and implement accommodations decisions (Thompson, Lazarus, Thurlow, & Clapper, 2005). However, teachers face many competing demands on their time, making it hard to fit professional development into their schedules. General, special, and ESL/bilingual education teachers often need to learn how to develop professional learning communities that work together to help support the learning of all students, including students with disabilities (Dede, Ketelhut, Whitehouse, Breit, & McCloskey, 2009).

 

A Few Examples of Research-based Models of Professional Development

Professional development for teachers comprises informal and formal processes of knowledge and skill building. Informal learning, in contrast to formal learning, refers to “learning that rests primarily in the hands of the learner and happens through observation, trial and error, asking for help, conversing with others, listening to stories, reflecting on a day’s events, or stimulated by general interests” (Dabbagh & Kitsantas, 2012, p. 4). Types of traditional professional development include the pursuit of advanced degrees, school- and district-wide meetings, conferences, workshops, and personal studies on a variety of selected professional development topics. Table 5 highlights three informal learning models of professional development (Clarke & Hollingsworth, 2002; Macia & Garcia, 2016; Sprinthall, Reiman, & Thies-Sprinthall, 1996).

Table 5. Professional Development Classification through Informal Learning Models


Craft Model
Summary: Teacher professional development is a result of experiential knowledge acquired from teaching in the classroom.
Teacher Role: Teachers’ roles are not fully defined; they learn by trial and error.

Expert Model
Summary: Teacher professional development is the result of training by other expert teachers.
Teacher Role: Teachers are in a passive role; they learn new techniques from an expert teacher.

Interactive Model
Summary: Teacher knowledge grows when external sources of information lead to new experiences of insight within the classroom based on student success results.
Teacher Role: Teachers’ roles are complex; they learn from external sources and apply the concepts in the classroom. This model brings together personal, external, classroom practice, and student result domains.

Research suggests that effective teachers have a positive impact on achievement gaps (Sledge & Pazey, 2013). When Huberman, Navo, and Parrish (2012) studied effective practices in high-performing districts surrounded by low-performing districts in California, they found several common strategies: inclusion and access to core curriculum, collaboration among special education and general education teachers, and targeted professional development. Brock and Carter (2015) completed a meta-analysis of educator training in an attempt to bridge the research-to-practice gap in the field of special education through the use of rigorous evidence-based analysis of professional development and student assessment score outcomes. The researchers asserted that professional development programs should be measured not in terms of the professional development hours a teacher completes, but rather in terms of observable change in teacher behavior. Brock and Carter also found that the second-most important influence on effectiveness was the use of a combination of modeling, one-on-one coaching, and performance feedback.

Collaborative efforts increasingly are a component of addressing the needs of diverse learners. Professional development opportunities are focusing on collaboration among teachers, administrators, and other educators for cross-disciplinary or intergrade-level educational planning. Friend and Cook (2013) and Nevin, Cramer, Voigt, and Salazar (2008) viewed collaboration between special educators and general educators as fundamental to effective instruction and equal access to an academic curriculum for diverse learners. Pellegrino, Weiss, and Regan (2015) argued that collaboration must be deeply embedded into teacher education programs. Developing activities and providing resources needed for creating a collaborative vision are crucial. Further, maintaining interpersonal connections is vital in an effort to have a deliberate vision for teaching and learning.

One strategy for meeting the professional development needs of educators is to create communities of practice, including virtual communities of practice. Teacher communication and professional development in the virtual environment help maintain and improve teacher quality and are also enjoyable for teachers (Wineburg & Grossman, 1998). This process may also have an indirect benefit for students. Through online exchanges, teachers model life-long learning skills that students may begin to imitate (Wineburg & Grossman, 1998).

 

Web-based Professional Development

New technological approaches to professional development are used to help teachers address unique student needs with technology-mediated accessibility features and accommodations (Tsiotakis & Jimoyiannis, 2016). By receiving professional development online, teachers are able to become familiar with using online features. This, in turn, aids teachers in the classroom setting when they assist students struggling with technology. Some classes are now offered partially online and partially in the classroom setting, and teachers are expected to meet this new instruction standard (Matzat, 2013). Special educators, ESL/bilingual educators, and general educators also are expected to be knowledgeable about appropriate technology-based accessibility features and accommodations. Research-based models for professional development (Hodgson et al., 2011) are presented in Table 6.

Table 6. Research-based Formal Models of Professional Development

Project-Based Learning (PBL)
Summary: Effective teacher professional development must occur in an applied setting of the teacher’s context and be practiced in his or her classroom. Teachers must observe student learning responses to assess and apply new concepts before adding these concepts to their professional repertoire.
Principles: PBL has four domains: External, Personal, Practice, and Consequence. The External domain is the concepts that trainers and staff teach. The Personal domain is the teacher’s attitudes or beliefs. The Practice and Consequence domains indicate how concepts are applied and assessed in the classroom.
Theory: An instructional strategy in which both teachers and students learn by engaging in the problem-solving process together. Problem solving must address real-life issues in the applied classroom setting.
Online Environment Implications: Online courses encourage teachers to learn new concepts. Then, applied experiences and an opportunity to journal about those experiences lead to better teacher and student outcomes.

Case-Based Instruction (CBI)
Summary: In CBI, as in PBL, teachers apply concepts they have learned. The difference is that CBI provides more support and scaffolding. Cases are narratives that depict particular problems. Cases reflect the generic and situation-specific nature of a practice, or a problem that parallels a real-life teaching situation. Cases are used to integrate grade-level content standards into teaching practices.
Principles: Cases are discussed in small groups that include teachers from diverse school contexts, which exposes participants to alternate viewpoints. They practice solving “real-world” teaching problems, which may also help them develop generalizable skills for the new concepts being learned. CBI is often combined with other instructional strategies or models (e.g., CoP) for optimal learning.
Theory: A collaborative strategy that brings teachers from various settings together to solve problems using different teacher viewpoints. Teachers learn by engaging in the problem-solving process through group discussion.
Online Environment Implications: Online CBI has shown an increase in knowledge of instructional strategies for teachers of all experiences and backgrounds. Online CBI with embedded video content helps teachers connect theory to practice.

Communities of Practice (CoP)5
Summary: CoPs are groups of individuals with shared interests and a similar knowledge base; they may be part of a formal organization. Members may follow formal leadership roles or form their own roles based on needs, skills, or interests. In education, CoPs are composed of novice and expert teachers who share experiences and work toward a common practice or enterprise.
Principles: The key components of CoPs are: (a) shared agenda, discourse, knowledge, values, and goals; (b) pre-defined roles for all members; and (c) shared products or artifacts generated by the community. Note: Artifacts are publications or other written products, as well as routines, sensibilities, vocabulary, or styles.
Theory: A teacher discussion and participatory learning group that fosters increased teacher accountability. The CoP approach originated in the medical and law professions. It often is combined with CBI.
Online Environment Implications: A Virtual Community of Practice (VCoP) is the web-based version of a CoP.

PBL References: Blumenfeld et al. (1991); Clarke & Hollingsworth (2002); Frey (2009); Guskey (1986); Howard (2002).

CBI References: Anderson & Baker (1999); Cutter, Palincsar, & Magnusson (2002); Elksnin (1998); Kagan (1993); McNaughton, Hall, & Maccini (2001); Shulman (2000).

CoP References: Cochran-Smith & Lytle (1999); Cutter et al. (2002); Mott (2000); Supovitz (2002); Wenger (1998); Wineburg & Grossman (1998).

5Professional Learning Communities (PLCs) are an education-specific version of CoPs.

Technology applied to professional development and teacher training has opened the door to providing training to educators anywhere and at any time online (Burns, 2011). Using the Internet to deliver professional development in a practical way allows teachers with limited time to expand their accessibility and accommodations expertise (Tsiotakis & Jimoyiannis, 2016). Online professional development provides educators with an effective and efficient way to reach their professional and personal goals and to continue lifelong learning opportunities (e.g., certifications, higher degrees). The availability of high-quality professional development for educators has grown significantly with the expansion of web technologies.

Several studies have attempted to provide professional development on accommodations and to examine its effects (Ketterlin-Geller, Crawford, & Huscroft-D’Angelo, 2014; Mariano et al., 2009). Mariano et al., for example, compared two different decision-making models: the manual published by the Council of Chief State School Officers (CCSSO) and the interactive online Assessment Decision-making Support System (ADSS). They found that the overall number of accommodations selected by educators and the amount of time spent on decision making were similar in the two models, but that the types of accommodations the study participants recommended differed. Specifically, the group using the online model recommended significantly more presentation accommodations. While the study did not address whether the selected accommodations met individualized student needs, it found that online professional development generally had outcomes similar to those of traditional professional development.

In another study examining the benefits of online professional development, Shriner, Carty, Rose, Shogren, Kim, and Trach (2013) explored the effects of using a web-based tutorial for teachers writing student IEPs. While the comparison group’s results remained static from pre-test to post-test, the intervention group made significant positive improvements on most quality ratings, improving from 25% to 66% accuracy in articulating IEP goals and objectives.

Established Web-based Models of Professional Development

There is evidence that many elements of traditional professional development can be translated to online platforms (Whitehouse, Breit, McCloskey, Ketelhut, & Dede, 2006). Researchers indicate that Project-Based Learning (PBL), Case-Based Instruction (CBI), and Communities of Practice (CoP) models can all be successfully incorporated into online teacher training (Burns, 2011; Frey, 2009; Pellegrino et al., 2015; Whitehouse et al., 2006). Some established web-based training models discussed in this section include: Computer-Mediated Communication (CMC), Online Learning Communities (OLC), Learning Management Systems (LMS), webinars, webcasts, and forms of online coaching and mentoring.

Clinical decision support systems (CDSS) have been used by the medical field for many years to provide physicians with support in assessing, diagnosing, and prescribing medications for patients (Buzhardt, Walker, Greenwood, & Heitzman-Powell, 2012). The CDSS model served as the basis for similar web-based tools that help educators track student response to intervention (Buzhardt, Greenwood, Walker, Carta, Terry, & Garrett, 2010; Buzhardt et al., 2012).

One of the fastest evolving modes of distance education is web-based or online learning. Online learning is expanding in nations such as the United States, Canada, South Korea, Singapore, Japan, Australia, and New Zealand, and in much of Europe. The one factor limiting this expansion is access to high-speed broadband (Burns, 2011; Chen, Chen, & Tsai, 2009; Macia & Garcia, 2016).

For years now, many state and public universities have been offering forms of online training to their teachers, faculty, staff, and students. Online and web-based courses have become a staple in improving teacher skills or continuing professional development. The flexible online learning environment enables a school district or state to provide sustained professional development that has a greater effect on student learning outcomes than a one-time workshop or seminar (George, 2007). Online courses can address specific curricula, target teachers from specific content areas, and provide support to teachers in hard-to-reach districts or schools (Dede et al., 2009).

Computer-Mediated Communication (CMC) refers to all types of asynchronous, text-based online communication, including forums, discussion groups, e-lists, e-mail, bulletin boards, and groupware (Burns, 2011). While Learning Management Systems (LMS) use some forms of CMC on their discussion boards, forums, and other tools, CMC can also be used outside of LMS at low cost and with minimal connectivity requirements (AbuSeileek & Qatawneh, 2013). CMC is popular in Asian and African contexts because it uses less bandwidth.

Modern trends have popularized the use of Voice over Internet Protocol (VoIP) programs such as Skype, video chat, instant messaging, and online conferencing applications. Youth tend to favor these forms of synchronous communication over asynchronous communication such as e-mail (Burns, 2011; Macia & Garcia, 2016). Webcasts (one-way video transmission) and webinars (two-way video transmission) are live or prerecorded training programs used as tools to deliver content to teachers (AbuSeileek & Qatawneh, 2013). Webcasts and webinars have become popular due to the convenience of synchronous video communication (e.g., Adobe Connect).

Online coaching and mentoring using VoIP applications are being used to address teacher retention problems and improve teacher quality. The North Carolina Department of Public Instruction includes in-service online coaching for teachers as part of its New Schools Project. Rock, Zigmond, Gregg, and Gable (2011) adopted virtual coaching with teachers in the classroom, using a Bluetooth earpiece, computer camera, and remote access so that a professional coach could watch the teacher in real time as he or she taught a class; Rock et al. concluded that virtual coaching had the same benefits as traditional coaching. Similar ongoing studies commissioned by the U.S. Department of Education are exploring the effectiveness of online coaching by virtual schools (Burns, 2011; Macia & Garcia, 2016; Matzat, 2013).

Paraprofessionals’ performance during training in special education using Video Modeling Plus Abbreviated Coaching (VMPAC) was explored by Brock and Carter (2015). The VMPAC model includes an initial workshop that describes and demonstrates instructional practice followed by video modeling, in which paraprofessionals compare their performance to that of paraprofessionals in video clips. In a brief in-person follow-up session, a coach observes the paraprofessional in a school setting and provides feedback. This research provides evidence supporting the use of integrated online and in-person mentoring and coaching for educators’ initial on-the-job training and continued investment in professional development.

Online learning communities have emerged as an effective tool to meet teachers’ increasing needs for professional development and support. Online communities may be a part of an institution’s website or be separate entities with their own server space. They may also use social media sites such as Ning or Classroom 2.0 to save on administrative and technology costs (Weinstein, 2013). The online learning community may develop lesson plans, full courses, and curriculum ideas or conduct peer mentoring. Two long-standing online learning communities are the International Educational and Resource Network (iEARN) and the IRIS Center at Vanderbilt Peabody College.

Online learning also may have some limitations. For example, some online environments fail to provide authentic learning experiences that enable participants to interact with (and learn from) one another. Additionally, some online professional development is organized in ways that encourage participants to race through the material rather than fully engage with it (Doering & Veletsianos, 2007; Doering, Veletsianos, Scharber, & Miller, 2009).

Emerging Web-based Models of Professional Development

Despite their promise for distance learning, web-based applications have not reached their full potential in professional development for teachers (Burns, 2011; Macia & Garcia, 2016; Matzat, 2013). A new approach in teacher professional development is Webs of Enhanced Practice (WoEPs). WoEPs offer participating teachers opportunities for joint planning and development of materials, peer coaching and mentoring, and reflection and discussion, all in an environment of collegial accountability (Scott & Scott, 2010). The WoEP model advocates the use of whatever technologies are optimal and convenient for participants. WoEPs interweave first- and second-generation technologies, both asynchronous and synchronous, offering more flexibility and convenience to teachers.

WoEPs connect teachers to peers, administrators, and experts, who enter and leave groups according to their needs and preferences. Experts in discipline, technology, and pedagogy participating in WoEPs expand the knowledge and skills of everyone in the webs. More senior teachers and administrators can support the career aspirations and development of other participants by mentoring and providing instructional leadership. As participants engage in discussions, solve problems, share expertise, and create resources, they also expand their professional networks. WoEPs can bring together educators across schools, districts, states, and even countries.

Web 2.0, which describes websites characterized by dynamic, user-generated content, allows users to have an individualized experience and facilitates collaborative learning experiences. Research on Web 2.0 tools and applications used for teacher education was limited in the literature cited, but it is growing because many instructors now use Web 2.0 tools in various forms (Cochrane & Narayan, 2013). For example, Stevens (2013) used Web 2.0 tools for workshops on literacy instruction for teachers with diverse interests, technological skills, and teaching backgrounds.

Web 2.0 can be used to establish and nurture professional relationships, allowing teachers to share ideas, content, and strategies and to collaborate on lessons across distances. Web 2.0 includes technology that identifies users’ locations, enabling people to connect locally, as well as databases that can grow and evolve as the community builds the discipline’s collective knowledge. Further, Web 2.0 can provide quick answers to the kinds of questions normally asked of colleagues when an academic concern arises.

Combining Web 2.0 applications with immersive environments offers some of the benefits of case-based instruction, project-based learning, and communities of practice (Burns, 2011; Hardman, 2012; Kennedy et al., 2012; Macia & Garcia, 2016; Matzat, 2013). As their name suggests, immersive environments allow users to become totally immersed in a self-contained, simulated environment (Burns, 2013). Immersive environments can offer a rich and complex content-based learning milieu with various learning situations that challenge a learner’s technical, creative, and problem-solving skills. The immersive environment is also known as a virtual world or multi-user virtual environment (MUVE).

Second Life is a multi-user immersive environment platform that is starting to be used in some education contexts. It is a 3D virtual world where users create an avatar, interact with artifacts, take part in a range of social and educational activities, and create their own content (Burns, 2011). Researchers at Georgia Tech Research Institute, Florida Diagnostic & Learning Resource System, Harvard University, ISTE, and TeacherLine of Texas are incorporating virtual world technology in professional development classrooms for K-12 educators (Burns, 2013). These environments offer teachers feedback, coaching, and mentoring as well as preparation for live interactions with students. Immersive environments provide case-based instruction and project-based learning experiences to instructors as they interact with virtual students and review feedback from members of the community. Immersive environments can include a platform and framework for creating a community of practice, or connect their technology to a platform that supports such open dialogue.

Web 2.0 technologies can also support collective intelligence by enabling users to quickly, easily, and securely share ideas with others. In education contexts, this means that educators can work together to compile, organize, and share information from various experts and sources. Educators can use the resulting collective intelligence to improve their own understanding and decision making.

One example is offered by Gregg (2009), who used collective intelligence to improve the process of collecting and analyzing data on individual students’ progress and response to intervention. Schools are responsible for managing student education for up to 16 years, across subjects and with multiple teachers, paraprofessionals, and other educators. This creates a need for asynchronous and longitudinal collaboration tools to create a meaningful education history for the student. Gregg (2009) developed the collective intelligence tool DDtrac to collect and summarize qualitative data (student performance and behavior) to improve decision making for special education students. DDtrac is an application with Web 2.0 tools that provides an easy way to interpret charts, graphs, and longitudinal student progress reports. The participants indicated that they appreciated the tool’s ability to communicate student progress through these charts, graphs, and reports and stated that it helped with the data-based decision-making process. They found sharing the reports with parents to be beneficial as well. The participants also agreed that being able to access the application on a handheld device or laptop would be a significant improvement (Gregg, 2009).

Given the ubiquitous nature of mobile devices, it has become imperative for teachers to familiarize themselves with mobile technologies for learning. Schuck, Aubusson, Kearney, and Burden (2013) found that the use of mobile technologies increased participation in professional learning communities and enriched teachers’ understanding of those technologies. Further, they found that teachers believe mobile devices have the potential to positively impact their teaching. A similar study by Drouin, Vartanian, and Birk (2014) examined the effectiveness of using mobile devices for a community-of-practice model in higher education settings and found that most faculty members thought the project and activities were useful in building collaborative professional connections.

Baran (2014) conducted a literature review on mobile learning in teacher education and found that the majority of studies focused on the value of mobile learning for students. Baran identified six major trends and gaps in the 37 reviewed studies on the use of mobile devices by teacher educators: (a) more teacher education programs were integrating mobile learning; (b) theoretical and conceptual perspectives were underrepresented; (c) variations in perceptions, opinions, and usage patterns existed; (d) use of mobile learning devices was primarily reported as beneficial; (e) challenges were scarcely reported; and (f) several pedagogical affordances supported mobile learning integration into teacher education settings. Another study, by Price, Davies, Farr, Jewitt, Roussos, and Sin (2014), revealed several advantages of mobile learning integration in preservice teacher education: connectivity and collaboration, unique classroom models, mobility within the physical classroom space, backchannel conversations, and participation in professional learning communities.

 

System-wide Accessibility

Systemic change is needed to accommodate the needs and preferences of diverse learners and to create schools where staff members feel confident that they have the skills to use data effectively and to instruct and assess all learners successfully. Data can provide key information that educators can use to improve decision making. Accessibility features and accommodations should be used appropriately and consistently across all activities, including instruction and both formative and high-stakes assessments (DeStefano et al., 2001; Thurlow, Lazarus, & Christensen, 2008).

 

References

Abedi, J., Lord, C., Hofstetter, C., & Baker, E. (2000). Impact of accommodation strategies on English language learners’ test performance. Educational Measurement: Issues and Practice, 19(3), 16-26. doi:10.1111/j.1745-3992.2000.tb00034.x.

AbuSeileek, A. F., & Qatawneh, K. (2013). Effects of synchronous and asynchronous computer-mediated communication (CMC) oral conversations on English language learners’ discourse functions. Computers & Education, 62, 181-190. doi:10.1016/j.compedu.2012.10.013.

AERA, APA, & NCME. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.

Altman, J. R., Cormier, D. C., Lazarus, S. S., Thurlow, M. L., Holbrook, M., Byers, M., Chambers, D., Moore, M., & Pence, N. (2010). Accommodations: Results of a survey of Alabama special education teachers (Synthesis Report 81). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Altman, J. R., Lazarus, S. S., Quenemoen, R. F., Kearns, J., Quenemoen, M., & Thurlow, M. L. (2010). 2009 survey of states: Accomplishments and new issues at the end of a decade of change. Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Anderson, P. L., & Baker, B. K. (1999). A case-based curriculum approach to special education teacher preparation. Teacher Education and Special Education, 22(3), 188-192.

Backer, T. E. (2001). Finding the balance: Program fidelity and adaptation in substance abuse prevention: A state-of-the-art review. Rockville, MD: Center for Substance Abuse Prevention.

Baran, E. (2014). A review of research on mobile learning in teacher education. Educational Technology & Society, 17(4), 17-32. Retrieved from http://www.ifets.info/.

Blumenfeld, P. C., Soloway, E., Marx, R. W., Krajcik, J. S., Guzdial, M., & Palincsar, A. (1991). Motivating project-based learning: Sustaining the doing, supporting the learning. Educational Psychologist, 26(3), 369-398. Retrieved from www.tandfonline.com/loi/hedp20.

Bottsford-Miller, N. A. (2008). A cross-sectional study of reported inconsistency in accommodation use in the classroom and standardized test settings for elementary and middle school students with disabilities. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 70(01).

Brock, M. E., & Carter, E. W. (2015). Effects of a professional development package to prepare special education paraprofessionals to implement evidence-based practice. The Journal of Special Education, 20(10), 1-13. doi:10.1177/0022466913501882.

Bublitz, D. F. (2009). Special education teachers’ attitudes, knowledge, and decision-making about high-stakes testing accommodations for students with disabilities. Dissertation Abstracts International: Section A. Humanities and Social Sciences, 70(04).

Burns, M. (2011). Distance education for teacher training: Modes, models, and methods. Washington, DC: Education Development Center, Inc.

Burns, M. (2013). The future of professional learning. Learning & Leading with Technology, 40(8), 14-18. Retrieved from http://eric.ed.gov/?id=EJ1015164.

Buzhardt, J., Greenwood, C., Walker, D., Carta, J., Terry, B., & Garrett, M. (2010). A web-based tool to support data-based early intervention decision making. Topics in Early Childhood Special Education, 29(4), 201-213. doi:10.1177/0271121409353350.

Buzhardt, J. R., Walker, D., Greenwood, C., & Heitzman-Powel, L. (2012). Using technology to support progress monitoring and data-based intervention decision making in early childhood: Is there an app for that? Focus on Exceptional Children, 44(8), 1-18. Retrieved from http://www.lovepublishing.com/catalog/focus_on_exceptional_children_31.html.

Buzick, H., & Stone, E. (2014). A meta-analysis of research on the read aloud accommodation. Educational Measurement: Issues and Practice, 33(3), 17-30. doi:10.1111/emip.12040

Cawthon, S. W. (2010). Science and evidence of success: Two emerging issues in assessment accommodations for students who are deaf or hard of hearing. Journal of Deaf Studies and Deaf Education, 15(2), 185-203. doi:10.1093/deafed/enq002.

Cawthon, S., & Leppo, R. (2013). Assessment accommodations on tests of academic achievement for students who are deaf or hard of hearing: A qualitative meta-analysis of the research literature. American Annals of the Deaf, 158(3), 363-376. doi:10.1353/aad.2013.0023.

Chen, Y., Chen, N. S., & Tsai, C. C. (2009). The use of online synchronous discussion for web-based professional development for teachers. Computers & Education, 53, 1155-1166. doi:10.1016/j.compedu.2009.05.026.

Christensen, L. L., Albus, D. A., Liu, K. K., Thurlow, M. L., & Kincaid, A. (2013). Accommodations for students with disabilities on state English language proficiency assessments: A review of 2011 state policies. Minneapolis, MN: University of Minnesota, Improving the Validity of Assessment Results for English Language Learners with Disabilities (IVARED).

Christensen, L. L., Braam, M., Scullin, S., & Thurlow, M. L. (2011). 2009 state policies on assessment participation and accommodations for students with disabilities (Synthesis Report 83). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Christensen, L., Carver, W., VanDeZande, J., & Lazarus, S. (2011). Accommodations manual: How to select, administer, and evaluate the use of accommodations for instruction and assessment of students with disabilities (3rd ed.). Washington, DC: Assessing Special Education Students State Collaborative on Assessment and Student Standards, Council of Chief State School Officers (CCSSO). Retrieved from http://www.ccsso.org/Documents/ASESManual2011.doc.

Christensen, L., Shyyan, V., & Johnstone, C. (2014). Universal design considerations for technology-based, large-scale, next-generation assessments. Perspectives on Language and Literacy, 40(1), 23-31. Retrieved from http://www.interdys.org/Perspectives.htm.

Christensen, L. L., Shyyan, V., Rogers, C., & Kincaid, A. (2014). Audio support guidelines for accessible assessments: Insights from cognitive labs. Minneapolis, MN: University of Minnesota, Enhanced Assessment Grant (#S368A120006), U.S. Department of Education. Retrieved from http://www.cehd.umn.edu/NCEO/OnlinePubs/GAAP/GAAPAudioReport.pdf.

Clarke, D., & Hollingsworth, H. (2002). Elaborating a model of teacher professional growth. Teaching and Teacher Education, 18(8), 947-967. doi:10.1016/S0742-051X(02)00053-7

Cochrane, T., & Narayan, V. (2013). Redesigning professional development: Reconceptualising teaching using social learning technologies. Research in Learning Technology, 21. doi:10.3402/rlt.v21i0.19226.

Cochran-Smith, M., & Lytle, S. L. (1999). Relationships of knowledge and practice: Teacher learning in communities. Review of Research in Education, 24(1), 249-305. Retrieved from http://www.jstor.org/stable/1167272.

Crawford, L., & Ketterlin-Geller, L. R. (2013). Middle school teachers’ assignment of test accommodations. The Teacher Educator, 48(1), 29-45. doi:10.1080/08878730.2012.740152

Cutter, J., Palincsar, A. S., & Magnusson, S. J. (2002). Supporting inclusion through case-based vignette conversations. Learning Disabilities Research & Practice, 17(3), 186-200. doi:10.1111/1540-5826.00044.

Dabbagh, N., & Kitsantas, A. (2012). Personal learning environments, social media, and self-regulated learning: A natural formula for connecting formal and informal learning. Internet and Higher Education, 15(1), 3-8.

Dede, C., Ketelhut, D., Whitehouse, P., Breit, L., & McCloskey, E. (2009). A research agenda for online teacher professional development. Journal of Teacher Education, 60(1), 8-19. doi:10.1177/0022487108327554.

DeStefano, L., Shriner, J. G., & Lloyd, C. A. (2001). Teacher decision making in participation of students with disabilities in large-scale assessment. Exceptional Children, 68(1), 7-22. Retrieved from http://www.cec.sped.org/Publications/CEC-Journals/Exceptional-Children.

Doering, A., & Veletsianos, G. (2007). Multi-scaffolding environment: An analysis of scaffolding and its impact on cognitive load and problem-solving ability. Journal of Educational Computing Research, 37(2), 107-129.

Doering, A., Veletsianos, G., Scharber, C., & Miller, C. (2009). Using the technological, pedagogical, and content knowledge framework to design online learning environments and professional development. Journal of Educational Computing Research, 41(3), 317-344.

Drouin, M., Vartanian, L. R., & Birk, S. (2014). A community of practice model for introducing mobile tablets to university faculty. Innovative Higher Education, 39(3), 231-245. doi:10.1007/s10755-013-9270-3.

Dynamic Learning Maps. (2016). Accessibility manual for the Dynamic Learning Maps alternate assessment, 2015-2016. Retrieved from http://dynamiclearningmaps.org/sites/default/files/documents/Manuals/accessibility_manual_2015-16.pdf.

Elksnin, L. K. (1998). Use of the case method of instruction in special education teacher preparation programs: A preliminary investigation. Teacher Education and Special Education, 21(2), 95-108. doi:10.1177/088840649802100204.

Elliott, J. L., & Thurlow, M. L. (2006). Improving test performance of students with disabilities . . . On district and state assessments. Thousand Oaks, CA: Corwin Press.

Elliott, S. N., Kratochwill, T. R., & Schulte, A. G. (1999). The assessment accommodations guide. Monterey, CA: CTB/McGraw-Hill.

ELPA21. (2015). Accessibility and accommodations manual. Retrieved from http://www.elpa21.org/sites/default/files/Accessibility%20and%20Accommodations%20Manual_SY15_16.pdf.

Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature (FMHI Publication No. 231). Tampa: University of South Florida, Louis de la Parte Florida Mental Health Institute, National Implementation Research Network. Retrieved from https://www.researchgate.net/publication/242511155.

Frey, T. J. (2009). An analysis of online professional development and outcomes for students with disabilities. Teacher Education and Special Education, 32(1), 83-96. doi:10.1177/0888406408330867.

Friend, M., & Cook, L. (2013). Interactions: Collaboration skills for school professionals (7th ed.). Boston: Pearson Education.

Fuchs, L. S., & Fuchs, D. (2001). Helping teachers formulate sound test accommodation decisions for students with learning disabilities. Learning Disabilities Research & Practice, 16(3), 174-181. doi:10.1111/0938-8982.00018.

George, M. (2007). Online-learning: The next generation of professional development. MultiMedia & Internet@Schools, 14(6), 14-17. Retrieved from http://www.mmischools.com.

Gregg, D. (2009). Developing a collective intelligence application for special education. Decision Support Systems, 47(4), 455-465. doi:10.1016/j.dss.2009.04.012.

Guskey, T. R. (1986). Staff development and the process of teacher change. Educational Researcher, 15(5), 5-12. doi:10.3102/0013189X015005005.

Hardman, E. L. (2012). Supporting professional development in special education with web-based professional learning communities: New possibilities with Web 2.0. Journal of Special Education Technology, 27(4), 17-31. doi:10.1177/016264341202700402.

Hodgson, J. R., Lazarus, S. S., & Thurlow, M. L. (2011). Professional development to improve accommodations decisions—A review of the literature (Synthesis Report 84). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://www.cehd.umn.edu/NCEO/OnlinePubs/Synthesis84/default.htm.

Howard, J. (2002). Technology-enhanced project-based learning in teacher education: Addressing the goals of transfer. Journal of Technology and Teacher Education, 10(3), 343-365. Retrieved from http://www.aace.org/pubs/jtate/.

Huberman, M., Navo, M., & Parrish, T. (2012). Effective practices in high performing districts serving students in special education. Journal of Special Education Leadership, 25(2), 59-71. Retrieved from http://www.casecec.org/resources/jsel.asp.

Kagan, D. M. (1993). Contexts for the use of classroom cases. American Educational Research Journal, 30(4), 703-723. doi:10.3102/00028312030004703.

Kennedy, M., Ely, E., Thomas, C., Pullen, P., Newton, J., Ashworth, K., & Lovelace, S. (2012). Using multimedia tools to support teacher candidates’ learning. Teacher Education and Special Education, 35(3), 243-257. doi:10.1177/0888406412451158.

Ketterlin-Geller, L. R., Alonzo, J., Braun-Monegan, J., & Tindal, G. (2007). Recommendations for accommodations: Implications of (in)consistency. Remedial and Special Education, 28(4), 194. doi:10.1177/07419325070280040101.

Ketterlin-Geller, L. R., Crawford, L., & Huscroft-D’Angelo, J. N. (2014). Screening to assign accommodations: Using data to make decisions. Learning Disabilities: A Multidisciplinary Journal, 20(2), 73-86. Retrieved from http://ldaamerica.org/learning-disabilities-a-multidisciplinary-journal/.

Ketterlin-Geller, L. R. (2008). Testing students with special needs: A model for understanding the interaction between assessment and student characteristics in a universally designed environment. Educational Measurement: Issues and Practice, 27(3), 3-16. doi:10.1111/j.1745-3992.2008.00124.x.

Kieffer, M. J., & Lesaux, N. K. (2009). Accommodations for English language learners taking large-scale assessments: A meta-analysis on effectiveness and validity. Review of Educational Research, 79(3), 1168-1201. doi:10.3102/0034654309332490.

Langley, J., & Olsen, K. (2003). Training district and state personnel on accommodations: A study of state practices, challenges and resources. Washington, DC: Council of Chief State School Officers. Retrieved from http://eric.ed.gov/?id=ED482929.

Lazarus, S. S., Albus, D., & Thurlow, M. L. (2016). 2013-14 publicly reported assessment results for students with disabilities and ELLs with disabilities (NCEO Report 401). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://www.cehd.umn.edu/NCEO/OnlinePubs/Report401/NCEOReport401.pdf.

Lazarus, S. S., Thompson, S. J., & Thurlow, M. L. (2006). How students access accommodations in assessment and instruction: Results of a survey of special education teachers (Issue Brief No. 7). College Park, MD: University of Maryland, Educational Policy Reform Research Institute. Retrieved from http://eric.ed.gov/?id=ED509856.

Leko, M. M., Roberts, C. A., & Pek, Y. (2015). A theory of secondary teachers’ adaptations when implementing a reading intervention program. The Journal of Special Education, 49(3), 168-178. doi:10.1177/0022466914546751

Li, H. (2014). The effects of read-aloud accommodations for students with and without disabilities: A meta-analysis. Educational Measurement: Issues and Practice, 33(3), 3-16. doi:10.1111/emip.12027

Liu, K. K., Albus, D. A., Lazarus, S. S., & Thurlow, M. L. (2016). State and national demographic information for English learners (ELs) and ELs with disabilities, 2012-13 (Data Analytics #4). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from https://nceo.info/Resources/tools/data_visualization.

Liu, K. K., Goldstone, L. S., Thurlow, M. L., Ward, J. M., Hatten, J., & Christensen, L. L. (2013). Voices from the field: Making state assessment decisions for English language learners with disabilities. Minneapolis, MN: University of Minnesota, Improving the Validity of Assessment Results for English Language Learners with Disabilities (IVARED). Retrieved from http://www.cehd.umn.edu/nceo/OnlinePubs/IVAREDFocusGroupReport.pdf.

Lovett, B. J. (2010). Extended time testing accommodations for students with disabilities: Answers to five fundamental questions. Review of Educational Research, 80(4), 611-638. doi:10.3102/0034654310364063.

Maccini, P., & Gagnon, J. C. (2006). Mathematics instructional practices and assessment accommodations by secondary special and general educators. Exceptional Children, 72(2), 217-234. Retrieved from http://www.cec.sped.org/Publications/CEC-Journals/Exceptional-Children.

Macia, M., & Garcia, I. (2016). Informal online communities and networks as a source of teacher professional development: A review. Teaching and Teacher Education, 55, 291-307. doi:10.1016/j.tate.2016.01.021

Mariano, G., Tindal, G., Carrizales, D., & Lenhardt, B. (2009). Analysis of teacher accommodation recommendations for a large-scale test (Technical Report No. 0905). Eugene, OR: Behavioral Research and Teaching, University of Oregon. Retrieved from http://eric.ed.gov/?id=ED531557.

Matzat, U. (2013). Do blended virtual learning communities enhance teachers’ professional development more than purely virtual ones? A large scale empirical comparison. Computers & Education, 60(1), 40-51. doi:10.1016/j.compedu.2012.08.006.

McNaughton, D., Hall, T. E., & Maccini, P. (2001). Case-based instruction in special education teacher preparation: Practices and concerns of teacher educator/researchers. Teacher Education and Special Education, 24(2), 84-94. doi:10.1177/088840640102400203.

Mott, V. W. (2000). The development of professional expertise in the workplace. New Directions for Adult and Continuing Education, 2000(86), 23-31. doi:10.1002/ace.8603.

National Center and State Collaborative. (2015). National Center and State Collaborative Alternate Assessment Based on Alternate Achievement Standards (NCSC AA-AAS) Test Administration Manual. Minneapolis, MN: University of Minnesota, National Center and State Collaborative. Retrieved from http://www.ncscpartners.org/Media/Default/PDFs/Resources/TAM.pdf.

National Center on Educational Outcomes. (2011). Developing common accommodations policies: Discussion points for consortia (NCEO Brief 2). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://www.cehd.umn.edu/NCEO/OnlinePubs/briefs/brief02/NCEOBrief2.pdf.

Nevin, A. I., Cramer, E., Voigt, J., & Salazar, L. (2008). Instructional modifications, adaptations, and accommodations of coteachers who loop: A descriptive case study. Teacher Education and Special Education, 31(4), 283-297. doi:10.1177/0888406408330648.

Nolet, V., & McLaughlin, M. J. (2005). Accessing the general curriculum: Including students with disabilities in standards-based reform (2nd ed.). Thousand Oaks, CA: Corwin Press.

PARCC. (2015). PARCC accessibility features and accommodations manual: Guidance for districts and decision-making teams to ensure that PARCC summative assessments produce valid results for all students. Retrieved from http://www.parcconline.org/images/Assessments/Acccessibility/PARCC_Accessibility_Features__Accommodations_Manual_v.6_01_body_appendices.pdf.

Pellegrino, A., Weiss, M., & Regan, K. (2015). Learning to collaborate: General and special educators in teacher education. The Teacher Educator, 50(3), 187-202. doi:10.1080/08878730.2015.1038494

Pennock-Roman, M., & Rivera, C. (2011). Mean effects of test accommodations for ELLs and non-ELLs: A meta-analysis of experimental studies. Educational Measurement: Issues and Practice, 30(3), 10-28. doi:10.1111/j.1745-3992.2011.00207.x.

Price, S., Davies, P., Farr, W., Jewitt, C., Roussos, G., & Sin, G. (2014). Fostering geospatial thinking in science education through a customisable smartphone application. British Journal of Educational Technology, 45(1), 160-170. doi:10.1111/bjet.12000.

Rivera, C., & Collum, E. (Eds.). (2006). State assessment policy and practice for English language learners: A national perspective. Mahwah, NJ: Lawrence Erlbaum Associates.

Rock, M. L., Zigmond, N. P., Gregg, M., & Gable, R. A. (2011). The power of virtual coaching. Educational Leadership, 69(2), 42-47. Retrieved from http://www.ascd.org/publications/educational-leadership.aspx.

Rogers, C. M., Lazarus, S. S., & Thurlow, M. L. (2014). A summary of the research on the effects of test accommodations, 2011-2012 (Synthesis Report 94). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from https://nceo.info/Resources/publications/OnlinePubs/Synthesis94/default.html.

Rogers, C. M., Lazarus, S. S., & Thurlow, M. L. (2016). A summary of the research on the effects of test accommodations: 2013-2014 (NCEO Report 402). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://www.cehd.umn.edu/NCEO/OnlinePubs/Report402/NCEOReport402.pdf.

Rose, D. H., & Meyer, A. (2002). Teaching every student in the digital age: Universal design for learning. Alexandria, VA: Association for Supervision and Curriculum Development.

Schuck, S., Aubusson, P., Kearney, M., & Burden, K. (2013). Mobilising teacher education: A study of a professional learning community. Teacher Development, 17(1), 1-18. doi:10.1080/13664530.2012.752671.

Scott, D. E. (2009). Effective Voice-over-Internet-Protocol (VoIP) learning experiences: The relationship between adult learning motivation, multiple intelligences, and learning styles (Doctoral thesis, Curtin University of Technology, Perth, Australia). Retrieved from http://espace.library.curtin.edu.au:80/R?func=dbin-jump-full&local_base=gen01-era02&object_id=131165.

Scott, D. E., & Scott, S. (2010). Innovations in the use of technology and teacher professional development. In J. O. Lindberg & A. D. Olofsson (Eds.), Online learning communities and teacher professional development: Methods for improved education delivery. Hershey, PA: Information Science Reference.

Shriner, J., Carty, S., Rose, C., Shogren, K., Kim, M., & Trach, J. (2013). Effects of using a web-based individualized education program decision-making tutorial. The Journal of Special Education, 47(3), 175-185. doi:10.1177/0022466912453940.

Shriner, J. G., & DeStefano, L. (2003). Participation and accommodation in state assessment: The role of Individualized Education Programs. Exceptional Children, 69(2), 147-161. doi:10.1177/001440290306900202.

Shulman, J. H. (2000). Case methods as a bridge between standards and classroom practice. Washington, DC: National Partnership for Excellence and Accountability in Teaching. ERIC ED 452188. Retrieved from http://eric.ed.gov/?id=ED452188.

Shyyan, V., Christensen, L. L., Rogers, C., & Kincaid, A. (2014). Sign support guidelines for accessible assessments: Insights from cognitive labs. Minneapolis, MN: University of Minnesota, Enhanced Assessment Grant (#S368A120006), U.S. Department of Education. Retrieved from http://www.cehd.umn.edu/NCEO/OnlinePubs/GAAP/GAAPSignItemsReport.pdf.

Shyyan, V., Christensen, L., Touchette, B., Lightborne, L., Gholson, M., & Burton, K. (2013). Accommodations manual: How to select, administer, and evaluate use of accommodations for instruction and assessment of English language learners with disabilities (1st ed.). Washington, DC: Assessing Special Education Students and English Language Learners State Collaboratives on Assessment and Student Standards, Council of Chief State School Officers. Retrieved from http://www.cehd.umn.edu/NCEO/onlinepubs/ELLSWDAccommodationsManual.pdf.

Shyyan, V., Thurlow, M., Christensen, L., Lazarus, S., Paul, J., & Touchette, B. (2016). CCSSO Accessibility Manual. Washington, DC: Assessing Special Education Students and English Language Learners State Collaboratives on Assessment and Student Standards, Council of Chief State School Officers.

Shyyan, V. V., Thurlow, M. L., Larson, E. D., Christensen, L. L., & Lazarus, S. S. (2016). White paper on common accessibility language for states and assessment vendors. Minneapolis, MN: University of Minnesota, Data Informed Accessibility—Making Optimal Needs-based Decisions (DIAMOND).

Sledge, A., & Pazey, B. L. (2013). Measuring teacher effectiveness through meaningful evaluation: Can reform models apply to general education and special education teachers? Teacher Education and Special Education, 36(3), 231-246. doi:10.1177/0888406413489839.

Sloan, C. J. (2015). Special education teachers’ perception of accountability testing and the self-efficacy of students with special needs (Unpublished doctoral dissertation). Texas Tech University, Lubbock, TX.

Smarter Balanced. (2016). Smarter Balanced Assessment Consortium: Usability, accessibility, and accommodations guidelines. Retrieved from http://www.smarterbalanced.org/wp-content/uploads/2015/09/Usability-Accessibility-Accommodations-Guidelines.pdf.

Sprinthall, N. A., Reiman, A. J., & Thies-Sprinthall, L. (1996). Teacher professional development. In J. P. Sikula (Ed.), Handbook of research on teacher education (2nd ed., pp. 666-703). London: Prentice-Hall.

Stevens, E. Y. (2013). Web 2.0 reflective inquiry: A transformative literacy teacher education tool. Journal of Adolescent & Adult Literacy, 56(5), 368. doi:10.1002/JAAL.156

Supovitz, J. (2002). Developing communities of instructional practice. The Teachers College Record, 104(8), 1591-1626. Retrieved from http://www.tcrecord.org/library/.

Thompson, S. J., Lazarus, S. S., Clapper, A. T., & Thurlow, M. L. (2006). Adequate yearly progress of students with disabilities: Competencies for teachers. Teacher Education and Special Education, 29(2). doi:10.1177/088840640602900206.

Thompson, S. J., Lazarus, S. S., Thurlow, M. L., & Clapper, A. T. (2005). The role of accommodations in educational accountability systems (Topical Review 8). College Park, MD: University of Maryland, Educational Policy Reform Research Institute. Retrieved from https://archive.org/details/ERIC_ED509864.

Thompson, S. J., Thurlow, M. L., & Malouf, D. (2004, May). Creating better tests for everyone through universally designed assessments. Journal of Applied Testing Technology, 10(2). Retrieved from http://www.jattjournal.com/index.php/atp.

Thurlow, M., Lazarus, S. S., Albus, D., & Hodgson, J. (2010). Computer-based testing: Practices and considerations (Synthesis Report 78). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from https://nceo.info/Resources/publications/OnlinePubs/Synthesis78/default.htm.

Thurlow, M. L., Lazarus, S. S., & Christensen, L. L. (2008). Role of assessment accommodations in accountability. Perspectives on Language and Literacy, 34(4), 17-20. Retrieved from http://www.interdys.org/Perspectives.htm.

Thurlow, M. L., Lazarus, S. S., Christensen, L. L., & Shyyan, V. (2016). Principles and characteristics of inclusive assessment systems in a changing assessment landscape (NCEO Report 400). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://www.cehd.umn.edu/NCEO/OnlinePubs/Report400/default.html.

Thurlow, M. L., Quenemoen, R. F., & Lazarus, S. S. (2011). Meeting the needs of special education students: Recommendations for the Race to the Top consortia and states. Washington, DC: Arabella Advisors.

Tsiotakis, P., & Jimoyiannis, A. (2016). Critical factors towards analysing teachers’ presence in on-line learning communities. The Internet and Higher Education, 28, 45-58. doi:10.1016/j.iheduc.2015.09.002.

U.S. Department of Education. (2009). Race to the Top program: Executive summary. Retrieved from http://www2.ed.gov/programs/racetothetop/executive-summary.pdf.

U.S. Department of Education. (2014a). 36th annual report to Congress on the implementation of the Individuals with Disabilities Education Act, 2014. Washington, DC: Office of Special Education and Rehabilitative Services.

U.S. Department of Education. (2014b). Questions and answers regarding inclusion of English learners with disabilities in English language proficiency assessments and Title III annual measurable achievement objectives. Retrieved from http://www2.ed.gov/policy/speced/guid/idea/memosdcltrs/q-and-a-on-elp-swd.pdf.

U.S. Department of Education. (2015a). Dear colleague letter on FAPE. Washington, DC: Office of Special Education and Rehabilitative Services. Retrieved from https://www2.ed.gov/policy/speced/guid/idea/memosdcltrs/guidance-on-fape-11-17-2015.pdf.

U.S. Department of Education. (2015b). Peer review of state assessment systems: Non-regulatory guidance for states for meeting requirements of the Elementary and Secondary Education Act of 1965, as amended.

Warren, S., Christensen, L., Chartrand, A., Shyyan, V., Lazarus, S., & Thurlow, M. (2015). Forum on implementing accessibility frameworks for ALL students. Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved from http://www.cehd.umn.edu/NCEO/OnlinePubs/2015ForumReport.pdf.

Weinstein, M. (2013). Regulating informal learning. Training, 50(2), 34-37.

Wenger, E. (1998). Communities of practice: Learning as a social system. Systems Thinker, 9(5), 2-3. Retrieved from https://thesystemsthinker.com/communities-of-practice-learning-as-a-social-system/.

Whitehouse, P., Breit, L., McCloskey, E., Ketelhut, D. J., & Dede, C. (2006). An overview of current findings from empirical research on online teacher professional development. In C. Dede (Ed.), Online professional development for teachers: Emerging models and methods (pp. 13-30). Cambridge, MA: Harvard Education Press.

WIDA. (2015). ACCESS for ELLs 2.0 Accessibility and Accommodation Guidelines. Madison, WI: Author. Retrieved from https://www.wida.us/assessment/access20-prep.aspx.

Wineburg, S., & Grossman, P. (1998). Creating a community of learners among high school teachers. Phi Delta Kappan, 79(5), 350-353. Retrieved from http://pdk.sagepub.com/.

Wu, Y., Lazarus, S. S., Thurlow, M. L., & Turner, L. (2010). What have we learned about student characteristics, accommodations, and AA-MAS? Paper presented at the annual meeting of the American Education Research Association (AERA), Denver, CO. Retrieved from http://www.cehd.umn.edu/NCEO/Presentations/Posters/5AERA2010paper.pdf.

Ysseldyke, J., Thurlow, M., Bielinski, J., House, A., Moody, M., & Haigh, J. (2001). The relationship between instructional and assessment accommodations in an inclusive state accountability system. Journal of Learning Disabilities, 34(3), 212-220. doi:10.1177/002221940103400302.

Zehler, A., Fleischman, H., Hopstock, P., Stephenson, T., Pendzick, M., & Sapru, S. (2003). Descriptive study of services to LEP students and LEP students with disabilities (Vol. 4). Washington, DC: U.S. Department of Education, Office of English Language Acquisition, Language Enhancement, and Academic Achievement for Limited English Proficient Students.