NCEO Policy Directions

Published by the National Center on Educational Outcomes
Number 15 / January 2003


Using Computer-based Tests with Students with Disabilities

Prepared by Sandra Thompson, Martha Thurlow, and Michael Moore


Any or all portions of this document may be reproduced and distributed without prior permission, provided the source is cited as:

Thompson, S., Thurlow, M., & Moore, M. (2003). Using computer-based tests with students with disabilities (Policy Directions No. 15). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved [today's date], from the World Wide Web: http://cehd.umn.edu/NCEO/OnlinePubs/Policy15.htm


Background

Computer-based testing has been called the “next frontier in testing” as educators, testing companies, and state departments of education work quickly to transform paper/pencil tests into technology-based formats. These efforts have occurred in a variety of ways and for a variety of tests. Some educators have transferred all of their classroom quizzes and tests into a computer-based format.

With the dramatic increase in the use of the Internet over the past few years, and the considerable potential of online learning, assessment will need to undergo a complete transformation to keep pace. Experts suggest that the Internet will be used to develop tests and present items through dynamic and interactive stimuli such as audio, video, and animation. Given this momentum, it is not surprising that there is a trend toward investigating and incorporating the Internet as the testing medium for statewide assessments.

Computer-based testing is viewed by many policymakers as a way to meet the requirements of the No Child Left Behind Act of 2001 (NCLB). The need to produce itemized score analyses and to disaggregate results within each school and district by gender, racial and ethnic group, migrant status, English proficiency, disability, and income challenges states to create new and more efficient ways to administer, score, and report assessment results.

Computer-based tests clearly create many opportunities. These include more efficient test administrations, the availability of immediate results, and student preferences for this form of testing over paper/pencil tests. In addition, computer-based testing opens up the possibility of built-in accommodations, student selection of testing options, and increased authenticity in the items that are included. Other benefits have been identified as well, so there is considerable pressure to move toward computer-based testing.

While computer-based testing may address the challenges of NCLB and has many other positive characteristics as well, it potentially creates other problems unless a thoughtful and systematic process is used to transfer existing paper/pencil assessments to computer-based assessments. Not only will poor design elements on the paper test transfer to the screen, but additional challenges may result that reduce the validity of the assessment results and possibly exclude some groups of students from assessment participation.

This Policy Directions presents factors to consider in the design of computer-based testing for all students, including students with disabilities and students with limited English proficiency. It also provides a process for the initial transformation of paper/pencil assessments to inclusive computer-based testing.

A report to the National Governors’ Association sums up what we need to remember as computer-based testing grows across the United States and throughout the world:

Do not forget why electronic assessment is desired. Electronic assessment will enable states to get test results to schools faster and, eventually, cheaper. It will help ensure assessment keeps pace with the tools that students are using for learning and with the ones that adults are increasingly using at work. The technology will also help schools improve and better prepare students for the next grade, for postsecondary learning, and for the workforce. (Using Electronic Assessment to Measure Student Performance, 2002, p. 9)


Challenges

The concept of universal design is not new. Its use began in the field of architecture, but its application has spread rapidly into environmental initiatives, recreation, the arts, health care, and education. Principles of universal design that cut across all of these areas have been developed (see Universal Design Applied to Large-scale Assessments in the Resources section). It is reasonable to expect that they can apply equally well to large-scale assessments.

Despite the potential advantages offered by computer-based testing, there remain several challenges, especially in the transition from paper/pencil assessments. First of all, the use of technology cannot take the place of content mastery. No matter how well a test is designed, or what media are used for administration, students who have not had an opportunity to learn the material tested will perform poorly. Students need access to the information tested in order to have a fair chance at performing well. Researchers strongly caution that the use of a computer, in and of itself, does not improve the overall quality of student writing. We continue to find significantly lower mean test scores for students with disabilities than for their peers without disabilities. The following are some challenges that must be overcome in order for computer-based testing to be effective.

Issues of Equity and Skill in Computer Use
Concerns continue to exist in the area of equity, where questions are raised about whether the required use of computers for important tests puts some students at a disadvantage because of a lack of access, use, or familiarity. Concerns include unfamiliarity with answering standardized test questions on a computer screen, using buttons to search for specific items, and indecision about whether to use traditional tools (e.g., a handheld calculator) or computer-based tools.

Added Challenges for Some Students
Some research questions whether the medium of test presentation affects the comparability of the tasks students are being asked to complete. For example, (1) computer-based testing places more demands on certain skills, such as typing, using multiple screens to recall a passage, navigating with a mouse, and using key combinations; (2) some people become more fatigued when reading text on a computer screen than on paper; (3) long passages may be more difficult to read on a computer screen; and (4) students may not be able to see an entire problem on the screen at one time.

Lack of Ability to Design Accessible Web Pages
According to WebAIM (Web Accessibility In Mind, an initiative of the Center for Persons with Disabilities at Utah State University), there are 27.3 million people with disabilities who are limited in the ways they can use the Internet: “The saddest aspect of this fact is that the know-how and the technology to overcome these limitations already exist, but they are greatly under-utilized, mostly because Web developers simply do not know enough about the issue to design pages that are accessible to people with disabilities.”
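The know-how WebAIM refers to includes basics that are straightforward to build into a computer-based test, such as text alternatives for images and programmatic labels for answer choices. The sketch below, written in TypeScript against standard browser APIs, is only an illustration; the element ids, file name, and function name are invented for the example.

```typescript
// Minimal sketch of two accessible-design basics: every answer choice gets a
// programmatic label, and every image gets a text alternative, so that a
// screen reader can present the item. Ids, text, and file names are invented.

function renderChoice(itemId: string, value: string, labelText: string): HTMLElement {
  const wrapper = document.createElement("div");

  const input = document.createElement("input");
  input.type = "radio";
  input.name = itemId;                 // choices in the same item share a name
  input.id = `${itemId}-${value}`;
  input.value = value;

  const label = document.createElement("label");
  label.htmlFor = input.id;            // associates the visible text with the control
  label.textContent = labelText;

  wrapper.append(input, label);
  return wrapper;
}

// An item graphic with a text alternative that a screen reader can announce.
const graph = document.createElement("img");
graph.src = "item3-graph.png";
graph.alt = "Line graph showing temperature rising from 10 to 25 degrees over five days";
```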


Developing Inclusive Computer-based Testing

The transformation of traditional paper/pencil tests to inclusive computer-based tests takes careful and thorough work that includes the collaborative expertise of many people. Five steps should be used to address these transformation issues (see Table 1).

Step 1. Assemble a group of experts to guide the transformation. Include experts on assessment design, accessible Web design, universal design, and assistive technology, along with state and local assessment and special education personnel and parents.

Step 2. Decide how each accommodation will be incorporated into the computer-based test. Examine each possible accommodation in light of computer-based administration. Some traditional paper/pencil accommodations will no longer be needed, while others will become built-in features that are available to every test-taker.

Step 3. Consider each accommodation or assessment feature in light of the constructs being tested. For example, what are the implications of using a screen reader when the construct being measured is reading, or of using a spell checker when achievement in spelling is being measured as part of the writing process? As the use of speech recognition technology permeates the corporate world, constructs that focus on writing on paper without the use of a dictionary or spell checker may need to be reconsidered.
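As a small illustration of this step, a test interface could offer a spell check option only when spelling is not part of the construct being measured. The sketch below uses the standard browser spellcheck property; the element id and function name are hypothetical.

```typescript
// Minimal sketch: the same constructed-response box turns the browser's spell
// checking on or off depending on whether spelling achievement is part of the
// construct being measured. Element id and function name are hypothetical.

function configureResponseBox(box: HTMLTextAreaElement, spellingIsMeasured: boolean): void {
  // When spelling is being scored, the aid is disabled; otherwise any student
  // may self-select it (see Table 2, Response Accommodations).
  box.spellcheck = !spellingIsMeasured;
}

const essayBox = document.getElementById("essay-response") as HTMLTextAreaElement;
configureResponseBox(essayBox, true); // spelling counts toward the writing score
```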

Step 4. Consider the feasibility of incorporating the accommodation into computer-based tests. Determining the feasibility of some accommodations may require review by technical advisors or members of a policy/budget committee; others may require short-term solutions along with long-term planning. Construct a specific plan for building in features that are not immediately available, and conduct extensive pilot tests with a variety of equipment scenarios and accessibility features.

Step 5. Consider training implications for staff and students. The best technology will be useless if students or staff do not know how to use it. Special consideration needs to be given to the computer literacy of students and their experience using features like screen readers. Information about the features available on computer-based tests needs to be available to IEP teams to use in planning a student’s instruction and assessments. Practice tests that include these features need to be available.

Skipping any of these steps may result in the design of assessments that exclude large numbers of students.

 

Table 1. Steps for Developing Inclusive Computer-based Testing

Step 1. Assemble a group of experts to guide the transformation.
Step 2. Decide how each accommodation will be incorporated into the computer-based test.
Step 3. Consider each accommodation or assessment feature in light of the constructs being tested.
Step 4. Consider the feasibility of incorporating the accommodation into computer-based tests.
Step 5. Consider training implications for staff and students.

Considerations and Examples

Most states have a list of possible or common accommodations for students with disabilities within the categories of presentation, response, timing/scheduling, and setting. Some states also list accommodations specifically designed for students with limited English proficiency. The list of considerations in Table 2 was generated to address a variety of accommodation needs: those of students with disabilities, students with limited English proficiency, students with both disabilities and limited English proficiency, and students who do not receive special services but have unique learning and response styles and needs. The examples that follow illustrate considerations for a few specific accommodations.

Large print and magnification (presentation)
When type is enlarged on a screen, students may need to scroll back and forth, or up and down, to read an entire test item. Graphics, when enlarged, may become very pixelated and difficult to view. Students who use handheld magnifiers or monocular devices when working on paper may not be able to use these devices on a screen because of the distortion of computer images. If a graphical user interface is used (versus a text-based one), students will not have the option of altering print size on the screen.
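One way around some of these problems is to let any student self-select a text scale that the interface applies with relative units, so item text reflows rather than forcing horizontal scrolling. The TypeScript sketch below is a minimal illustration; the data-test-item selector and function name are invented, and enlarged raster graphics would still need vector or tactile alternatives.

```typescript
// Minimal sketch: apply a student-selected text scale at the document root so
// item text reflows instead of requiring horizontal scrolling. The selector
// and function name are illustrative, not part of any real testing system.

const TEXT_SCALES = [1.0, 1.5, 2.0, 3.0];   // 100% to 300% of the default size

function setTestTextScale(scale: number): void {
  // Sizing text relative to the root means one change enlarges everything
  // consistently; fixed pixel sizes would not respond.
  document.documentElement.style.fontSize = `${scale * 100}%`;

  // Raster graphics pixelate when enlarged, so flag item images that may need
  // a vector (SVG) or tactile alternative at higher magnification.
  document.querySelectorAll<HTMLImageElement>("[data-test-item] img").forEach(img => {
    img.dataset.needsVectorAlternative = scale > 1.5 ? "true" : "false";
  });
}

// Example: the student chooses 200% from a size menu before starting the test.
setTestTextScale(TEXT_SCALES[2]);
```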

Audio presentation of instructions and test items (presentation)
Screen readers can present text as synthesized speech. The use of text-to-speech for test items may not be a viable option if the construct tested is the ability to read print.
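A built-in read-aloud feature is one way this can work. The sketch below assumes a browser that exposes the standard Web Speech synthesis interface; whether an item (as opposed to its instructions) may be read aloud still depends on the construct, as noted above.

```typescript
// Minimal sketch of a built-in "read aloud" option using the browser's
// speech synthesis interface. Whether test items themselves may be spoken
// depends on whether reading print is the construct being measured.

function readAloud(text: string, rate = 1.0): void {
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.rate = rate;          // variable audio speed (see Table 2)
  speechSynthesis.cancel();       // stop any earlier playback first
  speechSynthesis.speak(utterance);
}

// Example: repeat the instructions as often as the student chooses.
readAloud("Read the passage, then answer questions 1 through 5.", 0.9);
```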

Instructions simplified/clarified (presentation)
Instructions for all students need clearly worded text that can be followed simply and intuitively, with a consistent navigational scheme between pages/items. Students may need an option to self-select alternate forms of instructions in written or audio format.

Write in test booklet (response)
There are many options for marking responses on computer-based tests that are not available on paper. It would still be possible for a student to dictate responses to a teacher, who would then mark them on the computer. Speech recognition software, which enables computers to translate human speech into written text, is also becoming more available. Currently, speech recognition works well only for some people; others, especially those who are not native English speakers or those with speech impairments, can be frustrated by the software’s inability to differentiate many of the sounds they make.
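The sketch below shows roughly how dictated responses might be captured, assuming a browser that exposes the Web Speech API’s recognition interface (often vendor-prefixed and not available everywhere); the element id and function names are invented, and in practice the student would need to review and correct the transcript.

```typescript
// Minimal sketch of dictated responses, assuming the browser exposes the Web
// Speech API recognition interface (often prefixed as webkitSpeechRecognition).
// The recognized text is handed back so the student can edit it before submitting.

function startDictation(onTranscript: (text: string) => void): void {
  const Recognition =
    (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
  if (!Recognition) {
    console.warn("Speech recognition unavailable; offer keyboard entry or a scribe instead.");
    return;
  }
  const recognizer = new Recognition();
  recognizer.interimResults = false;           // wait for a final transcript
  recognizer.onresult = (event: any) => {
    onTranscript(event.results[0][0].transcript);
  };
  recognizer.start();
}

// Example: place the dictated text into the constructed-response box for review.
startDictation(text => {
  (document.getElementById("response-box") as HTMLTextAreaElement).value = text;
});
```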

Calculator (response)
Calculator use is often allowed on paper/pencil tests when arithmetic is not the construct being measured. However, standardization of the type of calculator used has been very difficult and would be much easier if all students had the same online calculator to use. Use of an online calculator is challenging for some students, especially if they have not had practice with this tool in their daily work.

Breaks and multiple test sessions (timing/scheduling)
Multiple test sessions require technology that allows individual students to submit their completed responses, log out, and log back on at another time, starting at the place where they previously left off. Careful scheduling is needed for multiple test sessions to make sure that computers are available. Test security becomes an issue if students who have responded to the same test items have opportunities to interact with each other between test sessions.
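A sketch of what that technology involves appears below: the test client periodically saves the student’s place and responses so the same student can log back on later and resume. The endpoint and field names are hypothetical, and a real system would also need authentication and the test-security controls discussed above.

```typescript
// Minimal sketch of saving and resuming a test session so a student can take a
// break, log out, and restart where they left off. Endpoint and field names are
// hypothetical; authentication and security controls are omitted for brevity.

interface TestSessionState {
  studentId: string;
  testId: string;
  currentItemIndex: number;              // where to resume
  responses: Record<string, string>;     // itemId -> saved response
  savedAt: string;                       // ISO timestamp of the last save
}

async function saveSession(state: TestSessionState): Promise<void> {
  await fetch("/api/test-sessions", {
    method: "PUT",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(state),
  });
}

async function resumeSession(studentId: string, testId: string): Promise<TestSessionState> {
  const response = await fetch(`/api/test-sessions?student=${studentId}&test=${testId}`);
  return response.json();                // client picks up at currentItemIndex
}
```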

Individual or small group administration (setting)
Computer-based tests allow increased individualization for every student. Each student can be seated at a separate computer station, wearing earphones or headphones for audio instructions or items. Students using speech recognition systems or other potentially distracting response methods need to be tested in individual settings.

 

Table 2. Considerations in the Transformation of Accommodations from Paper/pencil to Computer-based Tests

Presentation Accommodations
  • Capacity for any student to self-select print size or magnification
  • Graphical and text-based user interfaces present different challenges
  • Scrolling issues
  • Variations in screen size
  • Effects of magnification on graphics and tables
  • Capacity for any student to self-select audio (screen reader), alternate language, or signed versions of instructions and test items (all students wear ear/headphones)
  • Capacity to have instructions repeated as often as student chooses
  • Variable audio speed and quality of audio presentation
  • Capacity for pop-up translation
  • Use of screen reader that converts text into synthesized speech or Braille
  • Alternative text or “alt tags” for images
  • Avoidance of complex backgrounds that interfere with readability of overlying text
  • Tactile graphics or three-dimensional models may be needed for some images
  • Capacity for multiple screen and text colors

Response Accommodations

  • Capacity for multiple options for selecting response – mouse click, keyboard, touch screen, speech recognition, assistive devices to access the keyboard (e.g., mouth stick or head wand)
  • Option for paper/pencil in addition to computer (e.g., scratch paper for solving problems, drafting ideas)
  • Option for paper/pencil in place of computer (e.g., extended response items)
  • Capacity for any student to self-select spell check option
  • Capacity to disable spell check option when spelling achievement is being measured
  • Spelling implications when using speech recognition software
  • Capacity for any student to select calculator or dictionary option

Timing/Scheduling Accommodations

  • Availability/location of computers and peripherals
  • Flexible, individualized timing
  • Capacity of network system
  • Maintaining place and saving completed responses during breaks
  • Capacity to turn off the monitor or blank the screen temporarily
  • Test security
  • Capacity for self-selection of subtest order

Setting Accommodations

  • Grouping arrangements
  • Use of earphones or headphones
  • Use of individual setting if response method distracts other students
  • Availability/comparability/location of computers and peripherals
  • Glare from windows or overhead lights
  • Adaptive furniture
  • Test security

Summary

With the enactment of NCLB, nearly all states are in the process of designing new assessments. As part of this process, several states are considering the use of computer-based testing, since this is the mode in which many students are already learning. Several states have already begun designing and implementing computer-based testing.

Because many accessibility features can be built into computer-based tests, the validity of test results can be increased for many students, including students with disabilities and English language learners, without the addition of special accommodations. However, even though items on universally designed assessments are accessible for most students, there will still be some specialized accommodations, and computer-based testing must be amenable to these accommodations.

Students with disabilities will be at a great disadvantage if paper/pencil tests are simply copied onto the screen without any flexibility. Until the implications of graphical versus text-based user interfaces are considered and resolved, a large number of students will need to continue to use paper/pencil tests. This carries a possible reduction in the comparability of results, as well as an increase in administrative time and potential errors when paper/pencil responses are transferred by a test administrator to a computer for scoring.


Resources

Access to Computer-based Testing for Students with Disabilities (Synthesis Report 45). Thompson, S.J., Thurlow, M.L., Quenemoen, R.F., & Lehr, C.A. (2002). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. http://cehd.umn.edu/NCEO/OnlinePubs/Synthesis45.html

Assistive Technology Competencies for Special Educators. Lahm, E.A., & Nickels, B.L. (1999). Teaching Exceptional Children, 32(1), 56-63.

Computerized Test Accommodations: A New Approach for Inclusion and Success for Students with Disabilities. Burk, M. (1999). Washington, D.C.: A.U. Software.

Effects of Computer-based Test Accommodations on Mathematics Performance Assessments for Secondary Students with Learning Disabilities. Calhoon, M.B., Fuchs, L.S., & Hamlett, C.L. (2000). Learning Disability Quarterly, 23, 271-282.

Introduction to Web Accessibility. WebAIM (2001). Retrieved March, 2002, from the World Wide Web: http://www.webaim.org/intro/

Reinventing Assessment: Speculations on the Future of Large-scale Educational Testing. Bennett, R.E. (1998). Princeton, NJ: Policy Information Center, Educational Testing Service. Retrieved March, 2002, from the World Wide Web: http://www.ets.org/research/pic/bennett.html

Technology and Assessment: Thinking Ahead: Proceedings of a Workshop. National Research Council. (2002). Washington, DC: Board on Testing and Assessment, Center for Education. Division of Behavioral and Social Sciences and Education, National Academy Press. Retrieved March 2002 from the World Wide Web: http://www.nap.edu/books/0309083206/html

Universal Design Applied to Large-scale Assessments (Synthesis Report 44). Thompson, S.J., Johnstone, C.J., & Thurlow, M.L. (2002). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. http://cehd.umn.edu/NCEO/OnlinePubs/Synthesis44.html

Using Electronic Assessment to Measure Student Performance. National Governors’ Association. (2002). Washington, DC: National Governors’ Association, Education Policy Studies Division. Retrieved March, 2002, from the World Wide Web: http://www.nga.org/cda/files/ELECTRONICASSESSMENT.pdf

Web Accessibility Initiative, World Wide Web Consortium. Retrieved March, 2002, from the World Wide Web: http://www.w3.org/WAI/