NCEO Brief

February 2018
Number 14


Using Local Assessment Data to Measure Progress Toward the State-Identified Measurable Result (SIMR)

The U.S. Department of Education’s Office of Special Education Programs (OSEP) implemented Results Driven Accountability (RDA) in 2014 to help improve the educational outcomes of students with disabilities. As part of RDA, states were required to develop a State Systemic Improvement Plan (SSIP), which is a comprehensive, multi-year plan designed to improve outcomes for children with disabilities, and to commit to a State-Identified Measurable Result (SIMR) focused on student outcomes. Beginning in 2015, states incorporated the SSIP into their Annual Performance Reports (APR) that they submit to OSEP.

Many states include local assessments in their SSIP, often as a measure of progress toward their SIMR. In this Brief, the term “local assessments” refers to assessments other than state tests. Examples of local assessments that are mentioned in SSIPs include organization-developed measures such as DIBELS and AIMSweb, curriculum-based measures, school-selected formative assessments, and screening and benchmark assessments.

The purpose of this Brief is to outline key strategies for the collection, analysis, and use of local assessment data to monitor progress toward the SIMR. It provides information and suggestions for state education agencies and technical assistance providers who work with local education agencies (LEAs). It highlights six strategies, and then identifies several questions that states may want to consider if they choose to use local assessment data to measure progress toward their SIMRs. It is hoped that these strategies will support improved decision making at the state and local levels.

SSIP/SIMR Background

States developed their SSIPs and SIMRs through three phases. States now are in Phase III of the SSIP/SIMR work.

During Phase I, states reviewed their data, assessed their infrastructure, identified improvement strategies, developed a theory of action, and identified a SIMR. They established baseline data as well as targets for improvements in their SIMRs through 2020. States reported on Phase I as part of their APRs in 2015.

In 2016, states reported on Phase II. During this phase, they developed written plans that included details about how they would achieve their SIMRs through the implementation of improvement strategies. States were required to develop an evaluation plan that included measures that would provide feedback on progress, so they could make needed adjustments to their plan.

In Phase III, states shifted to implementing and evaluating their SSIPs. They first reported on Phase III in 2017 and will continue to report annually through 2020.

Many states selected SIMRs based on academic achievement data. The targeted population of students with disabilities varies across states. The SIMRs often address one content area (reading, math), and one grade, grade band, or school level. Some states included all LEAs in their SIMRs; others identified SIMRs focused on one or more LEAs.

Forty-two of the 60 regular and unique states (e.g., American Samoa, District of Columbia) selected an assessment-related SIMR. The SIMR for 37 of these 42 states was based on performance data from the statewide assessment that is used for state accountability for the target population of the SSIP; the other five states identified a SIMR based on performance data for another assessment (e.g., DIBELS, AIMSweb, a curriculum-based measure).

Thirty-six of the 42 regular and unique states with assessment-related SIMRs focused on improving literacy proficiency in 2017-18. Seventeen of the states in this group identified a SIMR based on performance on the Grade 3 Reading/English Language Arts (ELA) statewide assessment; of these 17 states, 14 had SIMRs that addressed Grade 3 reading/literacy and three had SIMRs that addressed K-3 reading/literacy. Other states selected SIMRs that address reading/ELA at other grade levels or that focus on math.

To achieve their SIMRs, states identified and implemented improvement strategies based on their theory of action that described hypothesized relationships between inputs, activities, outputs, and outcomes. In their evaluation plans, states with SIMRs based on statewide assessments typically analyze data each year to obtain feedback on progress. Though not required, the evaluation plans of some states with assessment-related SIMRs indicated that the state also planned to measure interim progress toward the SIMR using other assessments, usually local assessments.

Based on an analysis of the submitted Phase III SSIPs conducted by OSEP-funded technical assistance centers during Spring and Summer of 2017, 33 states mentioned using local assessments to track interim SSIP progress. In some states, different local assessments are currently used across LEAs to measure progress toward the SIMR.

Strategies

Table 1 presents six strategies and questions that states should keep in mind if they are considering using local assessment data to measure progress toward a SIMR. Each of these is discussed in more detail in this Brief.

Table 1. Strategies and Questions to Consider if Using Local Assessment Data to Measure Progress Toward the SIMR

Strategy 1. Identify Standards-based Local Assessments to Measure Student Academic Progress and to Monitor Progress Toward the SIMR
  • If local assessments are used to measure progress toward the SIMR, are they aligned to state academic content standards?
  • Do all students with disabilities have access to grade-level content?
Strategy 2. Engage Stakeholders in the Selection of Measures that Produce Valid and Reliable Indications of Progress Toward the SIMR
  • How have stakeholders at the local and state levels been involved in discussions and decision making about the identification of appropriate measures to track interim progress toward the SIMR?
  • How is information presented to help stakeholders better understand the appropriate use of local assessments?
Strategy 3. Ensure that the Targeted Population is Participating in Ways that Will Allow for Valid Measurement of Progress Toward the SIMR
  • Do all students in the targeted group of students participate in the local assessment?
  • Are appropriate accessibility and accommodations policies and procedures in place at the local level to ensure the meaningful participation of students with disabilities?
  • Are procedures in place that will help ensure that students have the opportunity to try different accommodations to see which are needed, prior to the collection of data for the purpose of measuring progress toward the SIMR?
Strategy 4. Use Common Terminology for Different Local Assessments, and Equate When Needed
  • Have differences in local assessments posed any challenges or barriers to your state’s SSIP evaluation?
  • How might results from different local assessments be compared across schools or districts?
  • If different local assessments are currently used across schools or districts to track SIMR progress, how are these assessments being equated?
Strategy 5. Provide Technical Assistance that Supports Development of Assessment-Curriculum Literacy
  • What support is provided to local educators to improve assessment-curriculum literacy?
  • Does the SSIP in your state address assessment-curriculum literacy?
  • How does your state increase stakeholder understanding of assessments and assessment data?
Strategy 6. Analyze Data to Support Improved Decision Making at State and Local Levels
  • What questions can be asked of the data to learn more about what is working well, and what needs to be improved?
  • What are the challenges to compiling and analyzing local assessment data? How can they be minimized?

Strategy 1. Identify Standards-based Local Assessments to Measure Student Academic Progress and to Monitor Progress Toward the SIMR

To accurately measure progress toward the SIMR, it is vital to select appropriate local assessments. To document progress, the local assessment needs to be aligned to state standards so that it can serve as a helpful barometer of progress toward the state’s desired long-term student outcomes. Alignment of local assessments to state academic content standards also encourages alignment of classroom instruction to those standards. According to the Office of Special Education and Rehabilitative Services (OSERS) guidance letter on free appropriate public education (FAPE), “An IEP Team must ensure that annual IEP goals are aligned with the State academic content standards for the grade in which a child is enrolled.” (U.S. Department of Education [2015, November 16]. Dear Colleague Letter. Washington, DC: Office of Special Education and Rehabilitative Services. The letter is available at: https://www2.ed.gov/policy/speced/guid/idea/memosdcltrs/guidance-on-fape-11-17-2015.pdf)

Local assessments that are not aligned to state academic content standards may still predict scores on statewide assessments, especially same-year scores for elementary students, but prediction is not what states should strive for in their SIMR measures. Simply predicting poor state assessment performance of students with disabilities may have the unintended consequence of lowering expectations. To support high expectations, students with disabilities must have meaningful access to grade-level academic content instruction. States should strive for measures that produce valid and reliable indications of actual progress toward state standards for students with disabilities. If assessments currently in use are not aligned to state standards, schools and districts may need to consider the limitations of the local assessment and use multiple measures to evaluate progress toward the SIMR.

Questions for Consideration:

If local assessments are used to measure progress toward the SIMR, are they aligned to state academic content standards?

Do all students with disabilities have access to grade-level content?

Strategy 2. Engage Stakeholders in the Selection of Measures that Produce Valid and Reliable Indications of Progress Toward the SIMR

Engagement of stakeholders (e.g., parents/families, special and general education teachers, school administrators, and representatives of higher education) is vital throughout all phases of the SSIP. Including local stakeholders in decision making encourages the use of assessments that will appropriately measure progress toward the SIMR. Given the current assessment context and the push at local, state, and federal levels to reduce unnecessary testing, any decision to add a new measure to local assessment systems must be well supported by broad stakeholder input and a well-defined need.

During the process of engaging stakeholders, it is important to discuss the fact that existing local level assessments may be used for different purposes (e.g., to assess students’ progress toward school or district-specific goals, or to monitor an individual student’s progress over time), and may not be aligned to state standards. Data from measures not aligned to standards will not accurately assess progress toward the SIMR. Open dialogue and two-way communication between state staff and local stakeholders about the use of local measures for SSIP purposes will help surface any challenges to using these measures for a purpose other than that for which they were developed. Engaging stakeholders early in this process can help ensure that appropriate measures are actually used.

The U.S. Department of Education’s Testing Action Plan outlines principles for “fewer and smarter assessments,” including that they should be worth taking, high quality, time-limited, fair, fully transparent to students and parents, just one of multiple measures, and tied to improved learning. Resources to support authentic stakeholder engagement are available from the National Center for Systemic Improvement’s Leading by Convening framework. (More information on the U.S. Department of Education’s Testing Action Plan is available at: https://www.ed.gov/news/press-releases/fact-sheet-testing-action-plan. More information on Leading by Convening is available at: https://ncsi.wested.org/resources/leading-by-convening/)

Questions for Consideration:

How have stakeholders at the local and state levels been involved in discussions and decision making about the identification of appropriate measures to track interim progress toward the SIMR?

How is information presented to help stakeholders better understand the appropriate use of local assessments?

Strategy 3. Ensure that the Targeted Population is Participating in Ways that Will Allow for Valid Measurement of Progress Toward the SIMR

If local assessment data are to validly measure progress toward the SIMR, it is vital that targeted students with disabilities participate in the assessment. For example, if some students in the targeted population do not participate in the local assessment because they are in alternate placements where that assessment is not administered, the data will be of limited use in measuring the SIMR progress of all targeted students. Similarly, if some targeted students with disabilities do not participate because needed accessibility features and accommodations are not available for the local assessment (e.g., blind students do not participate because there is no braille accommodation), the data will have limited value for measuring progress toward the SIMR. (Accessibility features and accommodations are tools and procedures that enable students to meaningfully participate in instruction and assessment, e.g., formative assessment, classroom assessments, local assessments used to measure progress toward the SIMR, state tests used for accountability purposes, etc. Many online assessments have a tiered accessibility framework. Accessibility features are tools and procedures that may be used by any student, but must be designated in advance by an adult. Accommodations are tools and procedures that provide equitable access to instruction and assessment for students with disabilities [as well as English Learners or ELs, including ELs with disabilities]. Accessibility features and accommodations allow for more valid results and interpretations of scores.)

When targeted students with disabilities are participating in local assessments, it is critical to ensure that the assessment results are a valid indicator of progress toward the SIMR. First, students with disabilities need to be able to meaningfully access the assessment. As with all assessments, some students will need accessibility features and accommodations for the local assessment results to be reliable and valid. Some local assessments may not allow needed accessibility features and accommodations. If this is the case, states will need to work with LEAs to choose other local assessments that provide the accessibility features and accommodations that students need.

Second, educators and policymakers need to balance the potentially conflicting uses of data from the same assessment. Local assessments may be used for multiple purposes, such as evaluating instructional interventions, in addition to measuring progress toward the SIMR. When this occurs, it is important to consider how to balance the need for data about the effects of various accommodations and instructional interventions with the need for valid measures of progress toward the SIMR. When local assessment data are used for instructional decision making (i.e., to understand what is working or not working for students in the classroom), that may be the best opportunity to try different accommodations to better understand which are appropriate (or not appropriate) for a student. Given this intended use (accommodation tryouts), it would be inappropriate to also use the same administrations of the assessment to measure progress toward the SIMR.

Questions for Consideration:

Do all students in the targeted group of students participate in the local assessment?

Are appropriate accessibility and accommodation policies and procedures in place at the local level to ensure the meaningful participation of students with disabilities?

Are procedures in place that will help ensure that students have the opportunity to try different accommodations to see which are needed, prior to the collection of data for the purposes of measuring progress toward the SIMR?

Strategy 4. Use Common Terminology for Different Local Assessments, and Equate When Needed

When different LEAs or schools in the same state use different local assessments, it is difficult to combine the results into a single measure of progress toward the SIMR. If the desire is to equate the assessments (which is required to appropriately measure progress toward the SIMR when different assessments are used across districts or years), drawing appropriate conclusions will require technical expertise. However, if the desire is merely to allow educators and administrators to examine trends across schools, developing common terminology can allow two or more entities to analyze the performance of groups of students.

Table 2 shows how two schools could “crosswalk” their local assessment performance levels to an agreed-on, common performance level. This would give district leaders and educators a way to begin a conversation about the similarities and differences in their students’ performances.

Table 2. Performance-Level Crosswalk Example Applied to Two Schools’ Assessment Level Categories

School A (5 levels)    Common Terminology    School B (4 levels)
Very Low               Not Proficient        Well Below Benchmark
Low                    Intermediate          Below Benchmark
Average                Proficient            At Benchmark
High                   Proficient            At Benchmark
Very High              Advanced              Above Benchmark

A potential next step is for the schools or LEAs to examine their data using common categories (see Figure 1 for an example). Following discussions about the data, schools could talk about possible resources and collaborative approaches, including common strategies, practices, curricula, and supports.
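To illustrate, the Table 2 crosswalk can be applied programmatically so that two schools’ results land on the same scale before they are compared. The following is a minimal sketch in Python; the record layout and example data are hypothetical illustrations, not an operational tool.

    # Minimal sketch: apply the Table 2 crosswalk to two schools' results.
    # The record layout and example data are hypothetical.
    from collections import Counter

    CROSSWALK = {
        "School A": {
            "Very Low": "Not Proficient",
            "Low": "Intermediate",
            "Average": "Proficient",
            "High": "Proficient",
            "Very High": "Advanced",
        },
        "School B": {
            "Well Below Benchmark": "Not Proficient",
            "Below Benchmark": "Intermediate",
            "At Benchmark": "Proficient",
            "Above Benchmark": "Advanced",
        },
    }

    def common_level_percentages(school, levels):
        """Map a school's reported levels to the common terminology and
        return the percentage of students at each common level."""
        mapped = [CROSSWALK[school][level] for level in levels]
        counts = Counter(mapped)
        total = len(mapped)
        return {lvl: 100 * n / total for lvl, n in counts.items()}

    # Example: compare the two schools on the common scale.
    school_a = ["Low", "Average", "High", "Very High", "Average"]
    school_b = ["Below Benchmark", "At Benchmark", "At Benchmark", "Above Benchmark"]
    print(common_level_percentages("School A", school_a))
    print(common_level_percentages("School B", school_b))

Output of this kind (the percentage of students at each common level, by school) is the sort of comparison that Figure 1 displays.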

Developing common terminology and processes for working across LEAs or schools is not a substitute for resolving the many technical challenges that arise when combining data from different measures. If the intent is to continue to use different local assessments and combine them as a measure of progress toward the SIMR, then states will need to address the technical issues involved. Equating measures is a complex process that will likely require additional data collection.
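To give a concrete sense of what equating involves, the sketch below shows one of the simplest methods, linear (mean-sigma) equating, which places scores from one assessment on the scale of another by matching the means and standard deviations of the two score distributions. This is only an illustration under a strong assumption (a random-groups design with comparable populations); the score values are made up, and operational equating typically requires common items or common examinees and psychometric expertise.

    # Minimal sketch of linear (mean-sigma) equating: place a score from
    # assessment X on the scale of assessment Y by matching the two score
    # distributions' means and standard deviations. Assumes a random-groups
    # design with comparable populations; real equating studies require a
    # deliberate data collection design.
    import statistics

    def linear_equate(score_x, scores_x, scores_y):
        """Return the assessment-Y scale equivalent of score_x:
        y = (sd_y / sd_x) * (score_x - mean_x) + mean_y"""
        mean_x, sd_x = statistics.mean(scores_x), statistics.stdev(scores_x)
        mean_y, sd_y = statistics.mean(scores_y), statistics.stdev(scores_y)
        return (sd_y / sd_x) * (score_x - mean_x) + mean_y

    # Example with made-up score distributions:
    scores_x = [12, 15, 18, 20, 22, 25]         # District 1's local assessment
    scores_y = [310, 325, 340, 355, 360, 380]   # District 2's local assessment
    print(round(linear_equate(18, scores_x, scores_y)))  # X score 18 on Y's scale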

Questions for Consideration:

Have differences in local assessments posed any challenges or barriers to your state’s SSIP evaluation?

How might results from different local assessments be compared across schools or districts?

If different local assessments are currently used across schools or districts to track SIMR progress in your state, how are these assessments being equated?

Figure 1: Use of Common Categories to Compare Performance Across Schools

[Bar chart comparing schools’ results using the common performance categories]

Strategy 5. Provide Technical Assistance that Supports Development of Assessment-Curriculum Literacy

It is important to consider how to improve and support the assessment-curriculum literacy of educators as they seek to successfully measure progress toward the SIMR and improve instruction. This Brief uses the term assessment-curriculum literacy instead of the more commonly used term, assessment literacy, to emphasize the relationship between instruction and assessment.

Often educators struggle to appropriately and effectively use data to close the achievement gap, and there is confusion about what assessments and which data sets will best inform instructional shifts. There is a need to carefully consider how to help educators identify the “best” products or processes that will actually improve student outcomes and enable states to reach their SIMR.

Assessment-curriculum literacy can help educators appreciate the purpose of various assessments and understand how assessment data can be used throughout the year to strengthen instruction. It also helps them avoid making inappropriate interpretations of the data that could lead to poor decision making.

Stakeholders also would benefit from knowing more about assessments and the interpretation of data. States may want to consider developing communication plans that will help convey high-quality information about interpreting and using assessment data. Communication plans should explain why some local assessments may not be appropriate for measuring progress toward the SIMR, and how to collect data that are useful.

Questions for Consideration:

What support is provided to local educators to improve assessment-curriculum literacy?

Does the SSIP in your state address assessment-curriculum literacy?

How does your state increase stakeholder understanding of assessments and assessment data?

Strategy 6. Analyze Data to Support Improved Decision Making at the State and Local Levels

Now that the SSIPs have been in place for several years, it is important to analyze the data. What is working well? What is not? Are there pockets of concern? Questions that might be asked of the data include: What are participation rates for the targeted population on local assessments that are being used to measure progress toward the SIMR? Are targeted students with certain characteristics excluded from participation (e.g., those with sensory disabilities, those who are in placements other than the general education classroom, etc.)?
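For example, participation rates for the targeted population can be computed directly from enrollment and assessment records. The sketch below uses pandas; the file names and column names are hypothetical placeholders for whatever a state or LEA actually maintains.

    # Minimal sketch: compute local assessment participation rates for the
    # targeted population, overall and by student characteristic.
    # File names and column names are hypothetical.
    import pandas as pd

    # Enrollment roster for the targeted population (one row per student).
    roster = pd.read_csv("targeted_students.csv")         # columns: student_id, placement, disability_category
    # Students with a local assessment score on record.
    tested = pd.read_csv("local_assessment_results.csv")  # columns: student_id, score

    roster["participated"] = roster["student_id"].isin(tested["student_id"])

    # Overall participation rate for the targeted population.
    print(f"Overall participation: {roster['participated'].mean():.1%}")

    # Participation by placement and by disability category, to flag
    # subgroups that may be systematically excluded.
    print(roster.groupby("placement")["participated"].mean())
    print(roster.groupby("disability_category")["participated"].mean())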

One challenge to conducting data analyses is the current concern of policymakers and parents about data privacy and protecting student information. This issue can be particularly challenging when states seek to use local assessment data as a measure of progress toward the SIMR.

Clearly articulating the rationale and purpose for using local student assessment data is critical in fostering buy-in from educators, parents, and students. It is also important to explain how data will be stored and handled to allay concerns. LEA and state staff need to be able to explain that key purposes for collecting and managing assessment results from local assessments used to measure progress toward the SIMR include examining results, making decisions about programs, and ultimately improving outcomes for students with disabilities.
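One common safeguard when sharing aggregated local assessment results is minimum cell-size suppression: withholding results for any group smaller than a set threshold so that individual students cannot be identified. The sketch below illustrates the idea; the threshold of 10 is a placeholder, since states set their own minimum n-sizes.

    # Minimal sketch: suppress reporting for any group below a minimum
    # n-size before sharing aggregate results. The threshold (10) is a
    # placeholder; states set their own minimum n-sizes.
    MIN_N = 10

    def suppress_small_cells(group_results):
        """group_results: {group_name: (n_students, percent_proficient)}
        Replace values for small groups with '*' before reporting."""
        return {
            group: (n, pct) if n >= MIN_N else ("*", "*")
            for group, (n, pct) in group_results.items()
        }

    results = {"School A": (42, 61.9), "School B": (7, 85.7)}
    print(suppress_small_cells(results))
    # {'School A': (42, 61.9), 'School B': ('*', '*')}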

Questions for Consideration:

What questions can be asked of the data to learn more about what is working well, and what needs to be improved?

What are the challenges to compiling and analyzing local assessment data? How can they be minimized?

Conclusion

There are many things that states and LEAs must consider when deciding whether to use local assessment data to measure progress toward the SIMR. The strategies included in this Brief have the potential to improve the reliability and validity of these measures—and ultimately to improve student learning and performance. Some states may discover that the processes and procedures they are currently using have limitations. They may want to consider using multiple measures to evaluate progress toward the SIMR, an approach that likely will produce richer, more accurate evidence of student progress.

NCEO Brief #14, February 2018

Contributors to the writing of this Brief were Sheryl S. Lazarus, Susan A. Hayes, Carla Howe, Cesar D’Agord, Kristin K. Liu, Maureen E. Hawes, and Martha L. Thurlow.

NCEO Director, Martha Thurlow; NCEO Associate Director, Sheryl Lazarus; NCEO Assistant Director, Kristin Liu.

All rights reserved. Any or all portions of this document may be reproduced and distributed without prior permission, provided the source is cited as:

Lazarus, S. S., Hayes, S. A., Howe, C., D’Agord, C., Liu, K. K., Hawes, M. E., & Thurlow, M. L. (2018, February). Using local assessment data to measure progress toward the State-Identified Measurable Result (SIMR) (NCEO Brief #14). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

NCEO is supported through a Cooperative Agreement (#H326G160001) with the Research to Practice Division, Office of Special Education Programs, U.S. Department of Education. The Center is affiliated with the Institute on Community Integration at the College of Education and Human Development, University of Minnesota. The contents of this report were developed under the Cooperative Agreement from the U.S. Department of Education, but do not necessarily represent the policy or opinions of the U.S. Department of Education or Offices within it. Readers should not assume endorsement by the federal government. This document is available in alternative formats upon request.

Project Officer: David Egnor

In collaboration with Applied Engineering Management (AEM), the Council of Chief State School Officers (CCSSO), the National Association of State Directors of Special Education (NASDSE), and WestEd.


 

National Center on Educational Outcomes
University of Minnesota • 207 Pattee Hall
150 Pillsbury Dr. SE • Minneapolis, MN 55455

Phone 612/626-1530 • Fax 612/624-0879

The University of Minnesota is an equal opportunity employer and educator.

NCEO is an affiliated center of the Institute on Community Integration

Visit our website at www.nceo.info