Sheryl S. Lazarus, Andrew R. Hinkle, Kristin K. Liu, Martha L. Thurlow, and Virginia A. Ressa
May 2021
All rights reserved. Any or all portions of this document may be reproduced and distributed without prior permission, provided the source is cited as:
Lazarus, S. S., Hinkle, A. R., Liu, K. K., Thurlow, M. L., & Ressa, V. A. (2021). Using interim assessments to appropriately measure what students with disabilities know and can do: Advisory panel takeaways and NCEO recommendations (NCEO Report 427). National Center on Educational Outcomes.
The National Center on Educational Outcomes (NCEO) held a virtual meeting of an Interim Assessment Advisory Panel in February 2021 to identify issues and make recommendations for state departments of education about how to enable all students with disabilities, including students with the most significant cognitive disabilities and English learners with disabilities, to participate in interim assessments in ways that yield valid inferences about what they know and can do. Advisors noted concerns and gaps in current interim assessments and their uses, suggested what should be, and proposed practical considerations for a better interim assessment system in each of five areas: (a) Participation; (b) Accessibility; (c) Role of academic standards; (d) Technical issues; and (e) Data use, interpretation, and reporting.
Based on the advisors’ discussions, NCEO proposed seven recommendations for states and districts that use (or are considering use of) interim assessments.
The National Center on Educational Outcomes (NCEO) held a virtual meeting of an Interim Assessment Advisory Panel on February 16 and 17, 2021 to tap into the panel members’ collective knowledge about using interim assessments to support valid interpretations of what students with disabilities know and can do. The Interim Assessment Advisory Panel was composed of individuals with expertise in interim assessments and students with disabilities, including psychometricians, researchers knowledgeable about students with disabilities, state education agency (SEA) assessment and special education leaders, local education agency (LEA) leaders, and parents. See Appendix A for a list of the advisors, as well as a list of other meeting participants.
In this report, the term interim assessments refers to assessments that are administered several times during a school year to measure student progress. They may be commercially produced or developed by groups of states or other organizations. Other terms that are sometimes used to describe these assessments are local assessments, district assessments, and even formative assessments (although interim assessments rarely provide formative information).
The meeting purpose was to develop and disseminate guidance to state departments of education on how to enable all students with disabilities, including students with the most significant cognitive disabilities and English learners with disabilities, to participate in interim assessments in ways that yield valid inferences about what they know and can do. A specific goal of the meeting was to have the advisors discuss the issues related to this purpose. NCEO believed that these discussions would lead to the identification of current gaps or issues, suggestions for what should be, practical considerations, and some recommendations for policies and practices. Takeaways from the meeting and recommendations created by NCEO could inform state departments of education in their efforts to support improved measurement of what students with disabilities know and can do. Guidance from states can, in turn, inform decisions of districts and Individualized Education Program (IEP) teams, and ultimately improve the measurement of student outcomes.
Interim assessments are used for a variety of purposes. Some of these include:
Additionally, states and districts have expressed widespread interest in the possible use of interim assessments to measure learning losses that may have resulted from COVID-19 school closures and inconsistent distance learning. Some states also are interested in potentially using either commercially produced or state-developed interim assessments as a replacement for or supplement to state tests used for accountability during this and other times. However, little attention has been given to how to include some students with disabilities in these assessments in ways that produce valid results and support valid interpretations of those results. Further, federal requirements stipulate that all state- and district-wide administrations of an assessment must include an alternate assessment, so states that adopt interim assessments will need to implement approaches that are appropriate for students who participate in their alternate assessments.
Because there is so much interest in interim assessments, there is an urgent need for guidance to support states and districts as they decide how to use these assessments to measure the progress of students with disabilities, as well as to inform instruction, predict performance, and measure learning loss. The Interim Assessment Advisory Panel discussions summarized in this report were intended to help NCEO develop recommendations for states. The recommendations will also benefit those states that already use some form of interim assessments to measure progress as part of the U.S. Department of Education’s Office of Special Education Programs’ Results-Driven Accountability (RDA) system.
Fourteen advisors participated in the meeting. Additionally, six NCEO staff members, five NCEO partners, and three consultants attended the meeting. NCEO staff, partners, and consultants gave presentations that provided background information, facilitated advisor discussions, and provided meeting support. Seven participants from the U.S. Department of Education’s Office of Special Education Programs (OSEP) and Office of Elementary and Secondary Education (OESE) attended the meeting as well.
To prepare for this meeting, NCEO supported a scan of the interim assessment landscape. The scan focused on the publicly available documentation supporting the appropriateness of these assessments for students with disabilities (Boyer & Landl, 2021). It also compiled information about how interim assessments were being used by states for OSEP RDA accountability (Lazarus et al., 2021). This information was shared with advisory panel members prior to the meeting.
The Interim Assessment Advisory Panel Meeting was held for five hours on each of two days (see meeting agenda in Appendix B). The first day of the meeting alternated between presentations and breakout discussions. (The Facilitator Guide for the breakout sessions is provided in Appendix C). Advisors considered the needs of three groups of students throughout the meeting: students with disabilities in general, students with the most significant cognitive disabilities, and English learners with disabilities.
During the second day of the meeting, facilitators supported the advisors as they discussed the issues and created takeaways. Using the meeting takeaways, NCEO developed a set of recommendations for states.
NCEO provided advisors with a framework to help organize their thinking about interim assessments for students with disabilities (including those with sensory disabilities), students with the most significant cognitive disabilities, and English learners with disabilities. The five components of the framework, and the general question for each component were:
After the meeting, NCEO summarized the advisors’ comments and then sent the summary to the advisors to review and suggest revisions. Based on this summary, NCEO produced a draft of this report, which advisors also reviewed before publication.
This report summarizes both the overview information provided to meeting participants and the advisors’ discussions in the form of takeaways. NCEO staff developed summaries of the presentations from notes taken during the presentations and from the presenters’ slides (see Appendix D for a summary of the proceedings). NCEO also developed summaries of the facilitated discussions from notes taken by note takers. These are reflected in the next section of this report. The advisors were encouraged by a facilitator to comment and discuss freely, with assurances from NCEO that the final report would not attribute any particular comment to any specific advisor. This led to frank and open conversations.
Interim Assessment Advisory Panel members expressed concerns about the quality of interim assessments and their use. They were concerned that many of the commercially available interim assessments had incomplete evidence to support the validity of their intended interpretations and uses. This led some advisors to question whether students with disabilities should be included in these assessments, and whether an alternate assessment should be developed for students with the most significant cognitive disabilities. Despite these concerns, the discussion during the meeting focused on the five framework topics, in recognition of federal requirements for the inclusion of all students with disabilities in state and districtwide assessments, either in the general assessment, with or without accommodations, or in an alternate assessment.
This section first presents the legal background that framed the advisors’ discussion about each of the five components of the framework. This is followed by a summary of the advisors’ discussion. The discussion summary for each component is organized into three parts: (a) Current Concerns or Gaps; (b) What Should Be; and (c) Practical Considerations for a Better System.
Federal laws require that all children with disabilities be included in all state and districtwide administrations of interim assessments. This includes students who need appropriate accommodations to participate in the general assessment and students with the most significant cognitive disabilities who may need an alternate assessment, as well as English learners with disabilities. Regulations for the Individuals with Disabilities Education Act (IDEA) state:
A State must ensure that all children with disabilities are included in all general State and district-wide assessment programs, including assessments described under section 1111 of the ESEA, 20 U.S.C. 6311, with appropriate accommodations and alternate assessments, if necessary, as indicated in their respective IEPs. (Sec. 300.160(a))
If an interim assessment is used for federal accountability, the requirements of the Elementary and Secondary Education Act (ESEA), reauthorized in 2015 as the Every Student Succeeds Act (ESSA), also apply. ESSA requires the inclusion of all students in assessments used for accountability (Sec. 1111(2)(B)(i)(II)). For English learners with disabilities, participation requirements are reinforced by several civil rights laws and court cases (e.g., Title VI of the Civil Rights Act of 1964, Lau v. Nichols, 414 U.S. 563 (1974)).
Advisors identified several current concerns or gaps, what should be, and practical considerations for a better system.
Federal law requires that states (or in the case of districtwide assessments, LEAs) develop accommodations guidelines that address the provision of appropriate accommodations for students with disabilities. According to IDEA regulations:
(1) A State (or, in the case of a district-wide assessment, an LEA) must develop guidelines for the provision of appropriate accommodations.
(2) The State’s (or, in the case of a district-wide assessment, the LEA’s) guidelines must—
(i) Identify only those accommodations for each assessment that do not invalidate the score; and
(ii) Instruct IEP Teams to select, for each assessment, only those accommodations that do not invalidate the score. (Sec. 300.160(b))
If an interim assessment is used for federal accountability, ESSA requirements would also apply. ESSA requires that states make appropriate accommodations available and ensure that their assessments are accessible to students with disabilities (Sec. 1111(2)(B)(vii)(II)).
Advisors identified several current concerns or gaps, what should be, and practical considerations for a better system.
Federal laws require that all children with disabilities have the opportunity to learn grade-level academic content. This includes students with the most significant cognitive disabilities who may need an alternate assessment, and English learners with disabilities.
According to IDEA, all students with disabilities must have meaningful access to content aligned with the State’s academic content standards for the grade in which the child is enrolled.1 ESSA also requires that students with disabilities participate in academic instruction and assessments for the grade level in which they are enrolled, and be tested based on challenging State academic standards for that grade level (Sec. 1111(2)(B)(ii)).
Some interim assessments may not be aligned to a state’s grade-level academic content standards. The purpose for which an assessment is being used affects whether it needs to be aligned to grade-level standards. If the intent is to determine whether students are learning grade-level academic content, the interim assessment should be aligned to the standards.
Advisors identified several current concerns or gaps, what should be, and practical considerations for a better system.
The U.S. Department of Education’s Office of Elementary and Secondary Education (OESE) conducts peer reviews of states’ assessments used for accountability. According to A State’s Guide to the U.S. Department of Education’s Assessment Peer Review Process (U.S. Department of Education, 2018):
The purpose of the Department’s peer review of State assessment systems is to support States in meeting statutory and regulatory requirements under Title I of the Elementary and Secondary Education Act of 1965 (ESEA), as amended by the Every Student Succeeds Act (ESSA), for implementing valid and reliable State assessment systems. Under sections 1111(a)(4) and 1111(b)(2)(B)(iii)-(iv) of the ESEA and 34 CFR § 200.2(b)(4) and (5) and (d), the Department has an obligation to conduct a peer review of the technical quality of State assessment systems implemented under section 1111(b)(2) of the ESEA. Assessment peer review is the process through which a State demonstrates the technical soundness of its assessment system. A State’s success with its assessment peer review begins and hinges on the steps the State takes to develop and implement a technically sound State assessment system. (p. 4)
The federal peer review process is used for state summative assessments used for accountability, and it results in assessments that are technically stronger. Historically, there has sometimes been less emphasis on the technical soundness of interim assessments, but as their profile rises and they are used for new purposes, there is growing recognition of the importance of considering technical issues for these assessments. Frequently cited evidence of technical adequacy includes item statistics, reliability and measurement error, differential item functioning, factor analysis, linking and equating, and correlation studies. Particular attention needs to be given to validity evidence for interim assessments.
Test scores should have the same meaning for a student with disabilities as they do for other students. Drawing valid inferences from an assessment requires evidence that its results support valid interpretations for the assessment’s intended purpose. The fairness and equity of an assessment, the comparability of different forms (including alternate formats such as braille), the appropriateness and effect of various accessibility features and accommodations, and other issues related to the validity of score interpretations and uses all merit evaluation and documentation.
Advisors identified several current concerns or gaps, what should be, and practical considerations for a better system.
Data from interim assessments are used for many purposes, and have both intended and unintended consequences. When using assessment data, considerations include context, data quality, intended purpose, data limitations, and consequences. If there is public reporting of interim assessment data, data also need to be reported for students with disabilities.
IDEA requires that if a state or a district publicly reports assessment data for students without disabilities, then it must report assessment data for students with disabilities to the public with the same frequency and in the same detail. This includes reporting on participation and performance in regular and alternate assessments, as well as the number of students participating with accommodations. A state is eligible for IDEA funding only if it provides assurances to the Department of Education that it takes the following steps:
(D) REPORTS.—The State educational agency (or, in the case of a districtwide assessment, the local educational agency) makes available to the public, and reports to the public with the same frequency and in the same detail as it reports on the assessment of nondisabled children the following:
(i) The number of children with disabilities participating in regular assessments, and the number of those children who were provided accommodations in order to participate in those assessments.
(ii) The number of children with disabilities participating in alternate assessments described in subparagraph (C)(ii)(I).
(iii) The number of children with disabilities participating in alternate assessments described in subparagraph (C)(ii)(II).
(iv) The performance of children with disabilities on regular assessments and on alternate assessments (if the number of children with disabilities participating in those assessments is sufficient to yield statistically reliable information and reporting that information will not reveal personally identifiable information about an individual student), compared with the achievement of all children, including children with disabilities, on those assessments. (20 U.S.C. § 1412(a)(16))
Advisors identified several current concerns or gaps, what should be, and practical considerations for a better system.
Based on takeaways from the Interim Assessment Advisory Panel discussions, NCEO proposes the following recommendations for states and districts that use (or are considering use of) interim assessments.
There is widespread interest in using interim assessments for many purposes, including informing instructional decision making, monitoring progress, measuring learning loss, and predicting success on a summative test. These assessments are currently used as part of OSEP’s RDA accountability system. Many State Systemic Improvement Plans (SSIPs) include interim assessments either as the State-Identified Measurable Result (SIMR) or as a measure of progress in the evaluation plan. Some states are also interested in using interim assessments as a replacement for or supplement to summative tests used for ESSA accountability.
Federal laws require that all state and districtwide interim assessment administrations include an alternate assessment, yet there currently are no alternate interim assessments. Federal laws also require that accommodations be provided for students with disabilities who need them, but many interim assessments do not offer a full range of accommodations, especially those that would enable students with sensory impairments to participate in the assessment. There is often a lack of transparency about whether an interim assessment is aligned to grade-level academic content standards, as well as whether these assessments are properly designed to elicit the intended constructs and cognitive processes when administered to students with disabilities.
More work is needed to ensure that interim assessments provide valid information on what students with disabilities know and can do. Because so many states and districts use interim assessments—and more are considering using them—there is an urgent need for validity evidence for each of their uses and additional guidance on how states can effectively use and implement these assessments and provide training for districts and schools.
American Educational Research Association (AERA), American Psychological Association (APA), & National Council on Measurement in Education (NCME). (2014). Standards for educational and psychological testing. https://www.testingstandards.net/open-access-files.html
Boyer, M., & Landl, E. (2021). Interim assessment practices for students with disabilities (NCEO Brief #22). National Center on Educational Outcomes and National Center for the Improvement of Educational Assessment. https://nceo.umn.edu/docs/OnlinePubs/NCEOBrief22.pdf
Browder, D., Lazarus, S. S., & Thurlow, M. L. (2021). Alternate interim assessments for students with the most significant cognitive disabilities (NCEO Brief #23). National Center on Educational Outcomes.
Lazarus, S. S., Hayes, S. A., Nagle, K., Liu, K. K., Thurlow, M. L., Dosedel, M., Quanbeck, M., & Olson, R. (2021). The role of assessment data in state systemic improvement plans (SSIPs): An analysis of FFY 2018 SSIPs (NCEO Report 425). National Center on Educational Outcomes. https://nceo.umn.edu/docs/OnlinePubs/NCEOReport425.pdf
U.S. Department of Education (2018). A state’s guide to the U.S. Department of Education’s assessment peer review process. Office of Elementary and Secondary Education. https://www2.ed.gov/admins/lead/account/saa/assessmentpeerreview.pdf
Alison Bailey, Professor and Division Head, Human Development and Psychology, University of California – Los Angeles
Trinell Bowman, Associate Superintendent for Special Education, Prince George’s County Public Schools (Maryland)
Derek Briggs, Professor and Chair, Research and Evaluation Methodology Program, University of Colorado
Diane Browder, Professor Emeritus, University of North Carolina at Charlotte
Stephanie Cawthon, Professor and Strategic Advisor (former director), National Deaf Center on Postsecondary Outcomes, University of Texas at Austin
Chris Domaleski, Associate Director, National Center for the Improvement of Educational Assessment (Center for Assessment, NCIEA)
John Eisenberg, Executive Director, National Association of State Directors of Special Education (NASDSE)
Jody Fields, Director, IDEA Data & Research, Part B Data Manager, DESE Special Education, University of Arkansas at Little Rock
Lynn Fuchs, Professor, Department of Special Education, Vanderbilt University
Suzanne Lane, Professor, Educational Measurement and Statistics, University of Pittsburgh
Eloise Pasachoff, Professor and Associate Dean, Georgetown University Law Center
Ricki Sabia, Senior Education Policy Advisor, National Down Syndrome Congress (NDSC)
Vince Verges, Assistant Deputy Commissioner, Division of Accountability, Research and Measurement, Florida Department of Education
Markay Winston, Assistant Superintendent (Curriculum, Instruction, & Assessment), Monroe County Community Schools (Indiana)
Andrew Hinkle, Education Program Manager
Sheryl Lazarus, Director, NCEO
Kristin Liu, Assistant Director, NCEO
Chris Rogers, Research Fellow
Kathy Strunk, Education Program Specialist
Martha Thurlow, Senior Research Associate
Fen Chou, Program Director, Council of Chief State School Officers (CCSSO)
Susan Hayes, Senior Program Associate, Special Education Policy & Practice, WestEd
Bill Huennekens, Associate Director, AEM
Kate Nagle, Senior Program Associate, Special Education Policy & Practice, WestEd
Carol Seay, Technical Assistance Provider, AEM
Michelle Boyer, Senior Associate, Center for Assessment (NCIEA)
Erika Landl, Senior Associate, Center for Assessment (NCIEA)
Rachel Quenemoen, Consultant
David Egnor, Project Officer, Office of Special Education Programs (OSEP)
Susan Kirlin, Monitoring & State Support (MSIP), Office of Special Education Programs (OSEP)
Donald Peasley, School Support & Accountability (SSA), Office of Elementary and Secondary Education (OESE)
Christine Pilgrim, Monitoring & State Support (MSIP), Office of Special Education Programs (OSEP)
Angela Tanner-Dean, Monitoring & State Support (MSIP), Office of Special Education Programs (OSEP)
Susan Weigert, Research to Practice, Office of Special Education Programs (OSEP)
Jennifer Wolfsheimer, Monitoring & State Support (MSIP), Office of Special Education Programs (OSEP)
11:00 – noon | Welcome and Opening Remarks Sheryl Lazarus (Director, NCEO) David Egnor (Project Officer, OSEP) Christine Pilgrim (MSIP, OSEP) Donald Peasley (OESE) Everyone |
noon – 12:15 | Meeting Overview Sheryl Lazarus (NCEO) |
12:15 – 12:30 | Meeting Processes and Specific Areas that Need to be Considered to Ensure that Interim Assessments Yield Valid Inferences About What Students with Disabilities Know and Can Do - Participation - Accessibility - Role of standards - Technical Issues (e.g., reliability, validity, fairness, comparability, aggregation across multiple interim assessments, etc.) - Interpretation and use of data Rachel Quenemoen Sheryl Lazarus |
12:30 – 12:40 | Break |
12:40 – 1:00 | How Students with Disabilities are Included in Interim Assessments Erika Landl and Michelle Boyer (NCIEA) |
1:00 – 1:10 | Process for Breakout Discussion Rachel Quenemoen |
1:10 – 2:10 | Advisor Discussion (Breakout Groups) |
2:10 – 2:20 | Break |
2:20 – 2:50 | OSEP Results Driven Accountability, State Systemic Improvement Plans (SSIPs), and Assessment-related State Identified Measurable Results Susan Hayes, Kate Nagle, and Sheryl Lazarus |
2:50 – 3:50 | Advisor Sharing (Breakout Groups) |
3:50 – 4:00 | Summary of Day Sheryl Lazarus |
11:00 – 11:05 | Reflections on Day 1 Sheryl Lazarus |
11:10 – 11:15 | Process for Developing Recommendations, and Introduction to Notes Rachel Quenemoen |
11:15 – noon | Participation - Review of Notes - Suggestions for Recommendations · ESSA accountability and use by states and districts · IDEA SSIPs/SIMRs Facilitated by Rachel Quenemoen |
noon – 12:45 | Accessibility - Review of Notes - Suggestions for Recommendations · ESSA accountability and use by states and districts · IDEA SSIPs/SIMRs Facilitated by Rachel Quenemoen |
12:45 – 12:55 | Break |
12:55 – 1:45 | Role of Standards - Review of Notes - Suggestions for Recommendations · ESSA accountability and use by states and districts · IDEA SSIPs/SIMRs Facilitated by Rachel Quenemoen |
1:45 – 2:30 | Technical Issues (e.g., Reliability, Validity, Fairness, Comparability, Aggregation Across Multiple Interim Assessments, Etc.) - Review of Notes - Suggestions for Recommendations · ESSA accountability and use by states and districts · IDEA SSIPs/SIMRs Facilitated by Rachel Quenemoen |
2:30 – 2:40 | Break |
2:40 – 3:25 | Interpretation and Use of Data - Review of Notes - Suggestions for Recommendations · ESSA accountability and use by states and districts · IDEA SSIPs/SIMRs Facilitated by Rachel Quenemoen |
3:25 – 3:40 | Other Suggestions for Recommendation - Review of Notes - Suggestions for Recommendations · ESSA accountability and use by states and districts · IDEA SSIPs/SIMRs Facilitated by Rachel Quenemoen |
3:40 – 3:50 | Next Steps Sheryl Lazarus |
February 16, 2021
Facilitation Notes for the Breakout Group Sessions
Breakout Session Process Set-up
1:00 – 1:10: Rachel Quenemoen shares process for breakout discussion.
Rachel will provide specific details about how the breakout will work, including logistics, time, etc.
Some ground rules for sharing with the group include:
1. Non-advisory participants (NCEO/OESE/OSEP/NCIEA) are generally observers only. However, they may be called upon by the facilitator to clarify or provide supporting information if necessary.
2. Participants will follow standard expectations for group work.
- All will be heard
- Be respectful
- Agree to disagree and move on
- Have a parking lot for tangential issues
Materials for each breakout:
Breakout Session #1
How Students with Disabilities are Included in Interim Assessments (1:10 – 2:10)
1:10 – 1:15: Welcome the Advisory Panel members.
Time is very limited and members have already introduced themselves. Facilitator should politely move group into discussion quickly.
“Welcome everyone. We hope you found the presentation and report informative and useful. There is a lot for us to discuss and our time is limited so let’s jump into it. As Rachel explained, we have two note takers taking notes for us. Advisory panel members should feel free to add any comments to the chat. We’ve also sent panel members a link to a template in Google Docs. If you choose, you can also write your thoughts in the Google doc while we are having our discussion.”
1:15 – 1:25: Participation
1:25 – 1:35: Accessibility
1:35 – 1:45: Role of Standards
1:45 – 1:55: Technical Issues
1:55 – 2:05: Interpretation and Use of Data
2:05 – 2:10: Last Thoughts
Breakout Session #2
OSEP Results Driven Accountability, State Systemic Improvement Plans (SSIPs), and Assessment-related State Identified Measurable Results (2:50 – 3:50)
2:50 – 2:55: Welcome back the Advisory Panel members.
Time is very limited and members have already introduced themselves. Facilitator should politely move group into discussion quickly.
“Welcome back everyone. We hope you found the second presentation and report informative and useful. As we saw from the first break-out session there is a lot for us to discuss and our time is limited so let’s jump into it. We have our two note takers taking notes for us again. You should also have the note-taking template again so you can follow along. Advisory panel members should feel free to add any comments to the chat. We’ve also sent panel members a link to a template in Google Docs. If you choose, you can also write your thoughts in the Google doc while we are having our discussion.”
2:55 – 3:05: Participation
3:05 – 3:15: Accessibility
3:15 – 3:25: Role of Standards
3:25 – 3:35: Technical Issues
3:35 – 3:45: Interpretation and Use of Data
3:45 – 3:50: Last Thoughts
The meeting opened with welcomes and introductions. Sheryl Lazarus, NCEO director, David Egnor, OSEP project officer, Christine Pilgrim, MSIP, and Donald Peasley, OESE, welcomed all to the virtual meeting. Following a high-level overview of the meeting and its purpose, Rachel Quenemoen, consultant, provided a brief history of inclusive assessment. Among the points made during the opening were:
A framework was presented to help organize advisor thinking about interim assessments. It included five areas: (a) Participation, (b) Accessibility, (c) Role of Standards, (d) Technical Issues, and (e) Interpretation and Use of Data.2 This framework was not meant to constrain thinking or contributions.
Two presentations provided background information for advisors. After each presentation, advisors broke into two groups for facilitated discussions. Note takers compiled notes from the breakout sessions so that the advisors could review them overnight in preparation for Day 2 of the meeting. Non-advisors in the breakout sessions were to be observers only, unless they were asked by the facilitator to clarify or provide supporting information. Discussions were summarized in a set of takeaways that addressed current concerns or gaps, what should be, and practical considerations for a better system. The two presentations are summarized here.
Michelle Boyer and Erika Landl from the National Center for the Improvement of Educational Assessment (Center for Assessment) provided a presentation on interim assessment practices. They noted that traditionally interim assessment data have been used by teachers and students to support classroom learning. LEAs also have used aggregated results from these assessments to support administrative policy. More recently, in the wake of the disrupted schooling caused by the pandemic, interim assessment vendors have been publishing “learning loss” studies, and states are exploring the potential use of interim assessments to better understand the impact of COVID-19.
Dr. Boyer and Dr. Landl conducted a document analysis of publicly available test vendor materials (e.g., marketing materials, accessibility guides, technical manuals and reports) for 13 interim assessments (including several suites of assessments) to learn more about vendor claims about the use of these assessments with students with disabilities (Boyer & Landl, 2021). They found that:
Susan Hayes and Kate Nagle provided an overview of OSEP’s accountability framework, RDA, which is used to monitor and support states’ implementation of IDEA. States annually submit a State Performance Plan/Annual Performance Report (SPP/APR) to report on a variety of indicators, including assessment (Indicator 3). As part of RDA, states develop State Systemic Improvement Plans (SSIPs) (Indicator 17), which are comprehensive, multi-year plans designed to improve outcomes for children with disabilities; within this plan, states commit to improving a State-Identified Measurable Result (SIMR) focused on student outcomes. Many, but not all, states specified SIMRs that use assessment data as the outcome measure. Dr. Hayes and Dr. Nagle also shared information about a new 6-year SPP/APR cycle (FFY 2020–FFY 2025).
Sheryl Lazarus then presented a summary of an analysis of how interim assessments are included in the SSIPs of states with assessment-related SIMRs (Lazarus et al., 2021). A few states included data from interim assessments in their SIMRs; many more states included interim assessments in their evaluation plans as a measure of progress toward the SIMR. States also reported on data limitations in their SSIPs. Identified data limitations included: different districts used different interim assessments, making it challenging to aggregate data across districts; some districts that were part of the SSIP cohort did not use any interim assessment; and data systems in the state lacked the capacity to handle interim assessment data. Only seven states included alternate assessment data in their SIMRs; all of the states that included the alternate assessment used the state summative assessment as their SIMR.
Sheryl Lazarus thanked the advisors for their thoughtful discussions. She said that NCEO would summarize the advisors’ comments and then send the summary to the advisors to review and suggest revisions. Based on these revisions, NCEO would prepare a draft report containing a summary of the advisors’ comments and recommendations. The advisors would then have an opportunity to review the complete draft report. After this review, NCEO would make final revisions and publish the report.
Footnotes
1 https://www2.ed.gov/policy/speced/guid/idea/memosdcltrs/guidance-on-fape-11-17-2015.pdf
2 Following the panel meeting, “Interpretation and Use of Data” was revised to “Data Use, Interpretation, and Reporting” to better reflect the panel members’ discussion of this component of the framework.