Instructional and Assessment Accommodations in Kentucky


Maryland / Kentucky Report 7

Published by the National Center on Educational Outcomes

January 2000


This document has been archived by NCEO because some of the information it contains is out of date.


Any or all portions of this document may be reproduced and distributed without prior permission, provided the source is cited as:

Thurlow, M., Ysseldyke, J., Bielinski, J., House, A., Trimble, S., Insko, B., & Owens, C. (2000). Instructional and assessment accommodations in Kentucky (Maryland-Kentucky Report 7). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved [today's date], from the World Wide Web: http://education.umn.edu/NCEO/OnlinePubs/MDKY_7.html


Overview

As accountability systems are implemented across the United States, Kentucky’s system has been looked to as a model of an inclusive accountability system. This is due in part to Kentucky’s success in achieving nearly 100% participation of its students in the state assessment and accountability system.

Earlier reports have described Kentucky’s accountability system in detail (e.g., Ysseldyke et al., 1996; Ysseldyke et al., 1997). Briefly, the Kentucky Instructional Results Information System (KIRIS) is intended to monitor performance at the student and school level. Results on KIRIS can lead to financial rewards for the school or school-level assistance. The system is based on the KIRIS tests as well as other indicators of performance. In 1999, KIRIS was replaced with a new assessment, the Commonwealth Accountability Testing System; however, the policies related to the inclusion of students with disabilities remain consistent, with some instructional clarification about the use of accommodations. The tests are administered in grades 4/5, 7/8, and 11/12, and all students, including those with disabilities, are expected to participate in either the regular assessment or the alternate portfolio system.

The full participation of students with disabilities in KIRIS is due in large part to a commitment to meeting those students’ needs through appropriate accommodations during KIRIS testing, as well as to a more general commitment to high levels of participation. In Kentucky, Individualized Education Program (IEP) teams decide on the types of accommodations needed for each student. The accommodations for KIRIS testing are intended to be related to the instructional accommodations already in place for the student, and should not inappropriately alter the content being measured. Locating decision making within IEP teams allows flexibility in meeting students’ needs, but also makes it more difficult to determine whether accommodations are being provided as intended within the system.

In an earlier analysis in this series (Trimble, 1998), the scores of students with disabilities who used accommodations on KIRIS were compared to the scores of students with disabilities who did not use accommodations. Overall, students with disabilities scored below students from the general population on the KIRIS tests, and students who received accommodations also generally scored below students from the general population. In some cases, performance of students using accommodations was lower than that of students using no accommodations. This may be due to a number of factors, including decision-making processes and the magnitude of impairment of the students receiving accommodations.

Recent national legislative changes, especially the 1997 reauthorization of the Individuals with Disabilities Education Act (IDEA 97), have also affected the ways students with disabilities are included in accountability systems. IDEA 97 requires that students with disabilities be included in accountability systems with appropriate accommodations. However, the law is less clear about how to determine the appropriateness of accommodations used during testing, or about their potential effects on scores.

Questions raised by earlier reports (e.g., Trimble, 1998), as well as those raised by legislation such as IDEA, suggested the need for further analysis. This study was designed to explore two main questions. The first is the match between instructional and assessment accommodations for the KIRIS tests: to what extent are accommodations being provided as intended in the system? The second is the relationship between KIRIS performance and other measures of performance for students with disabilities receiving accommodations. Examining this relationship may help us understand whether KIRIS scores are equally representative of achievement for students with disabilities using accommodations and for the general population of students.


Study Procedures

Participants

The educational records for 155 students were examined for this study: 78 students with disabilities and 51 students from the general population (26 students were missing information on disability status). Table 1 contains demographic information for the sample. Similar to the ratio of boys to girls in special education in the state as a whole, nearly twice as many boys’ (n=83) as girls’ (n=45) records were reviewed. These ratios generally held true for the matched sets of students with disabilities and students from the general population. Students ranged from 8th to 12th grade, with the majority in 9th (n=21), 10th (n=42), or 11th (n=44) grade. Two students (2%) were classified as Limited English Proficient; both were also receiving special education services.

As mentioned previously, 51 students were sampled from the general population. About half of the student records surveyed (n=72) had an IEP in 1995-96, and 3% (n=6) had a 504 plan (26 of the surveys were missing this information). In Kentucky’s testing system, students without disabilities are not allowed to receive accommodations.

Table 1. Demographic Information on the Sample*

                                  Students with    General Population
                                  Disabilities     Students              Total
Sex
  Male                            52 (67%)         31 (61%)              83 (65%)
  Female                          25 (32%)         20 (39%)              45 (35%)
  Missing                         1 (1%)           0 (0%)                1 (1%)
  Total                           78 (100%)        51 (100%)             129 (100%)
Grade
  8th                             2 (3%)           0 (0%)                2 (1%)
  9th                             12 (15%)         9 (18%)               21 (16%)
  10th                            30 (38%)         12 (24%)              42 (33%)
  11th                            24 (31%)         20 (39%)              44 (34%)
  12th                            0 (0%)           1 (2%)                1 (1%)
  Missing                         10 (13%)         9 (18%)               19 (15%)
  Total                           78 (100%)        51 (100%)             129 (100%)
Receiving Services for Limited English Proficiency
  No                              76 (97%)         48 (94%)              124 (96%)
  Yes                             2 (3%)           0 (0%)                2 (2%)
  Missing                         0 (0%)           3 (6%)                3 (2%)
  Total                           78 (100%)        51 (100%)             129 (100%)

* Note: 26 students were missing information on disability status; they are included in the “Total” column, but are not included in the other two columns.

 

Instrument Development

To develop its survey for collecting data from school records, the Kentucky Department of Education began with the survey used by Maryland, then adjusted and added items as needed to conform to the unique circumstances in Kentucky schools. A copy of the Kentucky survey is included in Appendix A.

The Maryland survey, on which the Kentucky survey was based, was developed by a focus group formed specifically to assist in survey development. The focus group comprised Department of Special Education staff and local district teachers, administrators, and school psychologists. In addition, both Kentucky staff and staff at the National Center on Educational Outcomes (NCEO) reviewed this initial version of Maryland’s survey, and revisions were then made for Maryland’s use. When the Maryland survey was completed, it went to Kentucky for adaptation. The adaptations were reviewed by NCEO.

 

Data Collection

Data for the surveys were obtained by Kentucky Department of Education staff, who randomly sampled from the data files of special education students in Kentucky. In order to be included in the surveys, students needed to have participated in the 1995-96 administration of the KIRIS test. Disabilities were represented randomly within the sample of students with disabilities, and the sample from the general population is representative of the general population of students in Kentucky schools. Staff members from the districts sampled were paid for transferring the appropriate data from the records to data collection forms.

After the records of students with disabilities were sampled, records from a matched set of students from the general population were sampled. The matching variables included the school the student attended, sex, and grade in school. Due to Kentucky’s policy of only providing testing accommodations for students with disabilities, the section of the survey on testing and instructional accommodations and modifications was not completed for general education students.


Results

Instruction and Test Accommodations

Testing accommodations data were reported for eight categories of accommodations: (1) reading the assessment, (2) paraphrasing assessment materials, (3) scribing or writing responses for students, (4) use of technology, (5) Braille, (6) signing, (7) large print, and (8) other. Roughly 17% of students with either an IEP or a 504 plan were missing data on testing accommodations. The most commonly listed accommodation was having someone read the test to the student (47%), followed by paraphrasing (31%) and scribing/writing responses for students (20%). Nearly 21% of the records documented an accommodation other than the choices provided. In this sample, no students received the Braille or signing accommodations.

None of the non-IEP students had an accommodation documented. All of the students on 504 plans (n=6) received an accommodation to KIRIS, while 89% of students on IEPs had received accommodations to KIRIS (n=82). When students with a 504 plan and those with an IEP were combined, roughly 89% (n=58) received at least one testing accommodation during the 1995-96 KIRIS tests; 11% did not receive an accommodation.

The survey also asked whether the documented accommodation/modification was added to the student’s instructional program prior to the 1995-96 school year and whether it seemed reasonable considering how the student functions outside the classroom. For the 58 cases in which an accommodation was indicated, 41 (70%) reported that the instructional accommodation was added prior to the 1995-96 academic year, 37 (65%) reported still using the accommodation in the instructional program, and 50 (86%) were judged to be receiving an accommodation that seemed reasonable given how the student was expected to function outside the classroom. Seven percent of the students’ records were missing data about whether the accommodation was added, 14% about whether the student still received the accommodation in instruction, and 12% about whether the accommodation was reasonable.

Besides being used for statewide testing, accommodations and modifications may be provided for classroom tests. For those who received an accommodation (n=58) in the 1995-96 school year, 48 (83%) received some sort of accommodation or modification to classroom tests, nearly equal to the percent that reportedly received accommodations to KIRIS (87%) during that same year. In addition, roughly two-thirds of the students received accommodations to other standardized assessments.

We also examined whether the percent of students using accommodations in the classroom or on other standardized assessments increased from the 1994-95 academic year to 1996-97. Table 2 shows the frequency and percent of those 58 cases in which an accommodation was indicated. The percent of cases in which an accommodation was used on regular classroom tests increased from 69% in 1995 to 83% in 1996, but dropped back to 69% in 1997. A similar trend occurred for accommodation use on other standardized assessments: the percent of cases increased from 43% in 1995 to 64% in 1996, then dropped to 53% in 1997.

Table 2. Frequency and Percent of Cases Using an Accommodation in Regular Classroom Tests and Other Standardized Assessments Across the Academic Years 1995-97

Type of Test                  1995        1996        1997
Regular Classroom Tests       40 (69%)    48 (83%)    40 (69%)
Other Standardized Tests      25 (43%)    37 (64%)    31 (53%)

Course Grades and KIRIS Performance

To compare performance on KIRIS with classroom grades, student performance on KIRIS in each of five content areas (reading, math, science, social studies, and writing) was reported using the four proficiency levels, and student grades were reported on the A through F scale, further subdivided with plusses and minuses. The grades were translated to a 13-point scale, ranging from 0 for an F to 12 for an A. The relationship between KIRIS performance and grades was evaluated within each content area for the year in which KIRIS test scores were obtained (1996). Not all records contained grade data, so the sample size is somewhat different for each correlation. Table 3 shows the Spearman correlations between each content area and the respective classroom grade. There were no consistent relationships found between KIRIS performance and classroom grades in reading, math, science, or social studies. There was a low, positive, statistically significant relationship between KIRIS writing scores and classroom grades.
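To make the computation concrete, the following is a minimal sketch (not the authors’ code) of the grade conversion and correlation just described. The report says only that 0 = F and 12 = A, so the exact assignment of plusses and minuses below is an assumption, and the grade and proficiency values are hypothetical.

    from scipy.stats import spearmanr

    # One plausible reading of the 13-point scale (0 = F ... 12 = A+);
    # the assignment of plusses and minuses is an assumption.
    LETTERS = ["F", "D-", "D", "D+", "C-", "C", "C+",
               "B-", "B", "B+", "A-", "A", "A+"]
    GRADE_POINTS = {g: i for i, g in enumerate(LETTERS)}

    # Hypothetical data: course grades and KIRIS proficiency levels
    # (1 = Novice ... 4 = Distinguished) for five students.
    grades = ["B-", "C+", "A", "D", "C"]
    kiris = [2, 1, 3, 1, 2]

    rho, p = spearmanr([GRADE_POINTS[g] for g in grades], kiris)
    print(f"Spearman rho = {rho:.2f}, p = {p:.2f}")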

Table 3. Spearman Correlation Between KIRIS Test Performance and Classroom Grades for the 1995-96 Year

Course            r      p      N
Reading           .05    .67    88
Math              .17    .10    98
Science           .18    .07    97
Social Studies    .17    .09    99
Writing           .30    .01    70

 

In order to ascertain whether the relationship between KIRIS scores and classroom grades was the same for students without an IEP and those with an IEP or 504 plan, separate Spearman correlations were obtained for each group. The correlations are reported in Table 4. The pattern of correlations is very different for students with an IEP and those without an IEP. For the IEP group the correlation between grades and KIRIS test performance was near zero, or even negative; none of the correlations was significant. However, for students without an IEP, the correlations were all positive, and three (reading, math, science) were statistically significant. It seems reasonable to expect that classroom grades would correlate at least modestly with KIRIS test performance, particularly when one considers that KIRIS contains performance-based measures that reflect more classroom-like tasks.

Table 4. Spearman Correlation Between KIRIS Test Performance and Classroom Grades, Computed Separately on Students With and Without an IEP

                  No IEP or 504              IEP/504
Course            r      p      N            r      p      N
Reading           .06    .69    41           .02    .85    47
Math              .36    .01    48           -.05   .74    50
Science           .36    .01    48           .01    .97    49
Social Studies    .23    .15    49           .09    .52    50
Writing           .51    .00    34           -.04   .84    36

 

Because the relationship between KIRIS performance and classroom grades clearly differs for students with an IEP and those without one, it was important to determine whether the two groups differed in the types of grades earned and in the scores obtained on KIRIS. Although KIRIS performance is reported at four levels (Novice, Apprentice, Proficient, Distinguished), most students in this study fell at the two lowest levels (Novice and Apprentice). First, we looked overall at students on either IEPs or 504 plans compared to students without either of these. The percent of each group scoring above Novice in each content area of the 1996 KIRIS is shown in Table 5. Chi-square tests were run to determine whether there was a relationship between the number of students above Novice and the status of those students. Generally, the proportion of students scoring above the Novice level was similar for the two groups, regardless of the content area. The significant chi-square for writing is questionable given the small number of students scoring above Novice in either group for this content area.
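As an illustration of the chi-square comparisons reported in Table 5, the sketch below applies a standard 2x2 chi-square test to the reading counts. The "at Novice" cell counts (14 and 18) are inferred from the reported percentages, so they are approximate rather than taken from the data file.

    from scipy.stats import chi2_contingency

    # Reading row of Table 5: 60 of 74 no-IEP/504 students (81.1%) and
    # 59 of 77 IEP/504 students (76.6%) scored above Novice.
    observed = [[60, 14],    # no IEP or 504: above Novice, at Novice
                [59, 18]]    # IEP/504
    chi2, p, dof, expected = chi2_contingency(observed, correction=False)
    print(f"chi-square = {chi2:.2f}, p = {p:.2f}")   # about .45 and .50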

Table 5. Number and Percent of Students With or Without IEP/504 Plans Scoring Above Novice in Each Content Area of the 1996 KIRIS

Content Area      No IEP or 504    IEP/504      χ²      p
Reading           60 (81.1)        59 (76.6)    .45     .50
Math              28 (37.8)        28 (36.4)    .04     .85
Science           10 (13.5)        8 (10.4)     .35     .55
Social Studies    25 (33.8)        23 (29.9)    .27     .61
Writing           16 (23.2)        6 (8.6)      5.00    .02

 

Next we examined the performance of the IEP/504 students separately for those who used accommodations and those who did not. These data and the chi-square test results are included in Table 6. The performance of the IEP/504 students receiving accommodations was about the same as that of students not receiving accommodations, where performance is defined in terms of the number of students performing above the Novice level. This was true regardless of content area.

Table 6. Number and Percent of IEP/504 Students With or Without Accommodations Scoring Above Novice in Each Content Area of the 1996 KIRIS

Content Area      With Accommodations    Without Accommodations    χ²*     p
Reading           46 (80.7)              6 (85.7)                  .10     .61
Math              24 (42.1)              2 (28.6)                  .47     .40
Science           7 (12.3)               1 (14.3)                  .02     .63
Social Studies    16 (28.1)              4 (57.1)                  2.45    .13
Writing           6 (11.1)               0                         .74     .52

* Fisher's Exact Test was used to evaluate significance.
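For cells this sparse, Fisher's exact test avoids the chi-square approximation. The sketch below uses the reading row of Table 6; the denominators (57 and 7) are inferred from the reported percentages and are therefore approximate.

    from scipy.stats import fisher_exact

    # Reading row of Table 6: 46 of 57 accommodated (80.7%) and 6 of 7
    # non-accommodated (85.7%) IEP/504 students scored above Novice.
    observed = [[46, 11],   # with accommodations: above Novice, at Novice
                [6, 1]]     # without accommodations
    oddsratio, p = fisher_exact(observed)
    print(f"Fisher's exact p = {p:.2f}")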

 

Another question that lends itself to examination in this study is whether grades change as a function of year and group (IEP vs. non-IEP). Table 7 shows the means and standard deviations by group and content area for each year. Assigned grades were transformed to numbers using a 0–12 scale, with 0 = F and 12 = A. For the most part, the grades were fairly similar: the average grade ranged from about 4.5 to about 6.0, a range of roughly C to C+. There was no content area in which one group consistently outperformed the other. However, there was a large mean difference in writing grades in 1997: the mean for the IEP students was 5.6 and that for the non-IEP students was 4.0, a difference of approximately .5 standard deviation units. Most mean differences were less than .2 standard deviation units.
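As a check on that effect-size arithmetic, the sketch below computes the standardized difference from the means quoted above and the 1997 writing SDs and ns in Table 7. Pooling the two SDs is our assumption about how the .5 figure was obtained; the report does not say.

    import math

    # Means as quoted in the paragraph above; SDs and ns for 1997 writing
    # taken from Table 7.
    m_iep, sd_iep, n_iep = 5.6, 3.41, 46
    m_non, sd_non, n_non = 4.0, 3.31, 39
    pooled_sd = math.sqrt(((n_iep - 1) * sd_iep ** 2 + (n_non - 1) * sd_non ** 2)
                          / (n_iep + n_non - 2))
    print(f"standardized difference = {(m_iep - m_non) / pooled_sd:.2f}")  # ~0.48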

Table 7. Comparison of the Average Grade Earned Between Students With and Without an IEP in Each of Five Content Areas (Reading, Math, Science, Social Studies, and Writing) Across Three Years, 1995-97

                      1995                   1996                   1997
Content               Mean   SD     N       Mean   SD     N       Mean   SD     N
Reading
  Non-IEP             5.0    3.20   44      6.1    3.07   43      5.0    3.37   45
  IEP                 5.9    2.93   49      5.8    3.24   48      4.9    3.25   54
Math
  Non-IEP             4.7    3.19   51      6.5    3.49   50      4.6    3.79   55
  IEP                 5.3    3.01   51      5.5    3.42   51      5.1    3.20   60
Science
  Non-IEP             5.3    3.40   50      5.9    2.93   50      4.8    3.55   54
  IEP                 4.6    2.96   50      5.5    3.26   50      5.1    3.14   59
Social Studies
  Non-IEP             5.2    3.43   51      6.0    3.18   51      4.8    3.54   40
  IEP                 5.1    2.95   49      4.4    2.66   51      5.0    3.04   34
Writing
  Non-IEP             5.1    3.15   36      6.0    3.18   37      4.5    3.31   39
  IEP                 6.0    2.92   38      5.5    2.71   37      5.6    3.41   46

To examine how time spent in instruction outside the unmodified general classroom affects student performance, we looked at the instructional conditions under which each student received instruction and the number of minutes of instruction per week received under each condition. The options were: a self-contained room, a resource room, a collaborative team, consultation, a resource allocation, or unmodified delivery. Table 8 shows the number and percent of students in each group who received instruction under the various delivery systems. It is important to note that between 22% and 31% of the cases in the data set were missing some information on these variables. Nearly 70% of the students who did not have an IEP were instructed under unmodified conditions, about 25% received instruction in a self-contained room, and about 40% were missing data. Instructional delivery for the IEP group was more dispersed. For reading, math, and writing, resource room instruction was most common (over 40%). Instruction in a self-contained room and by a collaborative team were also common (around 20% each). For science and social studies, far fewer students received instruction in a resource room (11% and 13%, respectively); on the other hand, a greater percent of IEP students received their science and social studies instruction under unmodified conditions. About 13% of the IEP/504 students were missing information on instructional modification. Overall, between 87% and 97% of instruction for IEP students was provided under modified conditions.

Table 8. Number and Percent of Students in Each Group Receiving Instruction Under Various Delivery Systems

                          Read       Math       Science    Social Studies    Writing
No IEP
  Self-contained room     13 (26)    13 (26)    13 (26)    13 (26)           11 (22)
  Resource room           0          0          0          0                 --
  Collaborative team      0          0          0          0                 --
  Consultation            0          0          0          0                 --
  Unmodified delivery     32 (63)    35 (69)    35 (69)    35 (69)           32 (63)
IEP
  Self-contained room     14 (18)    14 (19)    16 (20)    15 (19)           13 (17)
  Resource room           32 (41)    34 (47)    11 (14)    13 (17)           34 (44)
  Collaborative team      16 (20)    15 (21)    18 (23)    16 (20)           12 (15)
  Consultation            3 (4)      2 (3)      2 (3)      2 (3)             3 (4)
  Resource allocation     1 (1)      1 (1)      1 (1)      1 (1)             1 (1)
  Unmodified delivery     2 (3)      5 (7)      10 (13)    10 (13)           3 (4)

A relationship receiving increasing attention is the correlation between performance assessments and standardized norm-referenced tests (NRTs). Many educators believe that multiple-choice norm-referenced tests are inadequate tools for assessing the breadth and type of skills students learn in the classroom. Performance assessments are a way of tapping those skills, ideally without compromising the standard of reliability achieved with most NRTs. Still, NRTs are the gold standard against which newer assessments are compared. Here we correlated performance on the KIRIS test with performance on NRTs. Because several different NRTs were used, a common metric was needed. The stanine was chosen because both the percentile rank and the normal curve equivalent can be converted to stanines. Although the stanine provides a common metric, it should be kept in mind that different norm groups were used for the various assessments. The most commonly used norm-referenced tests were the CTBS (44%) and the CAT/5 (37%). Results are reported separately for students with an IEP and those without an IEP.
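For readers unfamiliar with the conversion, the sketch below shows one standard way to map percentile ranks onto stanines. The report does not describe its exact conversion routine, so the conventional cut points (4, 11, 23, 40, 60, 77, 89, 96) are an assumption.

    import bisect

    # Standard stanine cut points: stanines 1-9 cover the bottom 4%,
    # then 7%, 12%, 17%, 20%, 17%, 12%, 7%, and the top 4%.
    CUTS = [4, 11, 23, 40, 60, 77, 89, 96]

    def percentile_to_stanine(percentile_rank: float) -> int:
        """Map a percentile rank (0-100) onto the 1-9 stanine scale."""
        return bisect.bisect_right(CUTS, percentile_rank) + 1

    print(percentile_to_stanine(2))    # 1
    print(percentile_to_stanine(50))   # 5
    print(percentile_to_stanine(97))   # 9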

One would anticipate moderate associations between tests measuring similar content (e.g., reading). Spearman rank-order correlations between KIRIS performance and the associated content area evaluated with an NRT are shown in Table 9. For the no-IEP group the correlations ranged from .38 in math to .68 in writing, and all were statistically significant. The correlations were weaker for the IEP students, ranging from .18 in writing to .32 in reading, and only the reading correlation was significant. A correlation in science could not be computed for the IEP/504 group because all of its students scored at the Novice level.

Table 9. Spearman Correlation Between NRT (Administered Between 1995 and 1998) Test Performance and KIRIS Performance (1996), Computed Separately on Students With and Without an IEP

                  No IEP                     IEP/504
Course            r      p      N            r      p      N
Reading           .45    .02    28           .32    .02    48
Math              .38    .05    28           .26    .08    47
Science           .59    .01    19           --     --     16
Social Studies    .67    .00    18           .24    .41    14
Writing           .68    .01    13           .18    .50    17

 

Course grades were also correlated with test score performance on the norm-referenced tests. One would expect to find at least moderate correlations between classroom grades and test performance, assuming that grades are reliably assigned and that the tests measure concepts similar to those assessed in the classroom. Table 10 shows the Spearman correlation between NRT test scores and classroom grades, computed separately for IEP and no-IEP students. For the no-IEP group, the correlations ranged from -.05 in writing to .59 in social studies; the math correlation was statistically significant, with social studies approaching significance. The correlations were smaller for the IEP group: three of the five were negative, and none was statistically significant. The fact that the correlations between classroom grades and NRT scores were moderate for the no-IEP group, whereas they were very small or negative for the IEP group, may suggest that grades are assigned differently to students with IEPs than to their non-IEP peers.

Table 10. Spearman Correlation Between NRT (Administered Between 1995 and 1998) Test Performance and Classroom Grades (Given in 1997), Computed Separately on Students With and Without an IEP

                  No IEP                     IEP/504
Course            r      p      N            r      p      N
Reading           .27    .24    20           .10    .55    35
Math              .43    .02    27           -.12   .48    40
Science           .29    .25    17           -.18   .58    12
Social Studies    .59    .06    11           .30    .70    4
Writing           -.05   .90    8            -.03   .92    12

 


Discussion

We examined use of accommodations for students receiving special education services in the 1995-96 KIRIS administration, and the relationship between KIRIS scores and grades for a matched set of students receiving and not receiving special education services in 1995-96.

There was a high degree of match between the use of instructional accommodations (e.g., on classroom tests) and the use of accommodations in the KIRIS administration for students on IEPs or 504 plans in 1995-96. Further, most of the accommodations used by students in the KIRIS testing were in place prior to the testing year and were still being used two years later. These findings help confirm that accommodations used in KIRIS testing are not generally put in place capriciously during the year of the assessment, but are related to students’ long-term needs. However, the fact that accommodations on classroom tests and on standardized tests other than KIRIS peaked during the year of KIRIS testing indicates either that too many students may be getting accommodations during that testing year, or that KIRIS testing helped raise IEP teams’ awareness of the need to provide accommodations.

Students in this sample who were receiving special education services were generally not as successful, in either their classroom grades or their KIRIS scores, as students not receiving special education services. This was a general trend, though less often a statistically significant finding. It is not necessarily a surprise, given that students are in special education because they have an identified educational need. Further, other examinations of the success of students receiving special education have found that they generally do not score as high as other students on large-scale assessments (Ysseldyke, Thurlow, Langenfeld, Nelson, Teelucksingh, & Seyfarth, 1998).

It would be expected that two measures of the same content area would be related. In order to examine this hypothesis, correlations were run between school grades in 1995-96 and KIRIS performance the same year. Correlations between student grades and KIRIS scores in the subject areas of mathematics, science, and writing were found to be significant for students who were not receiving special education services; the correlation for social studies was approaching significance. The relationship between reading grades and KIRIS reading scores did not differ significantly from zero for students not receiving special education services. The correlations between course grades and KIRIS scores for students on an IEP or 504 plan did not differ significantly from zero in any case.

The finding that the relationship between KIRIS and course grades differs based on whether or not a student was receiving special education services could be explained in a number of ways. It may be a statistical artifact, due to the restricted range of scores on KIRIS for students receiving special education services. Another potential explanation would be if course grades for students in special education were not as closely related to their school achievement as they are for other students. This suggestion is supported to some extent by earlier findings in which students with mental retardation received higher grades than students with either learning or behavioral disabilities (Bruininks, Thurlow, Lewis, & Larson, 1988). That research, however, did not find that students with disabilities received grades that were higher than students without disabilities. A further potential explanation would be if the KIRIS scores measure school achievement differently for students in special education than for other students. This could be a function of the students themselves, or it could be a function of the accommodations to the test that they received. If the accommodations did affect the relationship between KIRIS scores and school grades, it is due to a change in the construct tested; if an accommodation simply resulted in a test score boost, then the strength of the association would not be affected, and one would expect the association between test scores and grades to be the same regardless of disability status.
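The restricted-range artifact mentioned above can be illustrated with a small simulation (ours, not the report’s): when one group’s test scores are concentrated at the bottom of the scale, as happens when most students score at the Novice and Apprentice levels, the observed correlation with grades shrinks even though the underlying relationship is identical for everyone.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    grades = rng.normal(size=2000)                  # simulated course grades
    kiris = 0.5 * grades + rng.normal(size=2000)    # same true relationship

    rho_full, _ = spearmanr(grades, kiris)

    # Keep only cases in the bottom quarter of test scores, mimicking a
    # group concentrated at the lowest proficiency levels.
    low = kiris < np.percentile(kiris, 25)
    rho_restricted, _ = spearmanr(grades[low], kiris[low])

    print(f"full-range rho = {rho_full:.2f}")         # roughly .4
    print(f"restricted rho = {rho_restricted:.2f}")   # noticeably smaller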

A final area of analysis was the type of instructional delivery a student received, whether modified or unmodified, and its relationship with KIRIS scores. In general, students receiving unmodified instructional delivery were more likely to receive higher scores on KIRIS than students being taught with modified instructional delivery such as self-contained rooms, resource rooms, collaborative teams, and consultation. However, due to a small sample size, it was not possible to examine the statistical significance of these findings meaningfully.

Overall, in this examination of the relationship between instructional accommodations and testing accommodations, it appears that testing accommodations used in KIRIS are similar to those used in the instructional environment by most students. Many unanswered questions remain about the relationship between performance on KIRIS, school grades, and performance on NRTs for students in special education who receive accommodations.

This study helps highlight the critical need for experimental research on the effects of specific accommodations during testing. Studies of the decision-making process for accommodation provision are also needed. The trend that accommodations were most common during the year of KIRIS administration may indicate that unnecessary accommodations are being provided, or that KIRIS helps IEP teams focus on the need to document and provide appropriate accommodations.

Overall, this study helped validate that most accommodations provided during testing were related to those provided in instruction and were provided over a number of years. Additionally, it suggested that for this sample, on the 1995-96 administration of KIRIS, the relationship between KIRIS test scores and norm-referenced tests was significantly affected by whether or not a student was on an IEP/504. The relationship between KIRIS scores and course grades is much more complex, and probably confounded by differences in grading practices for students with and without disabilities, as well as possibly by the use of accommodations. As future research examines the effects of accommodations on test validity and how IEP teams make decisions about accommodations, we should be better able to understand how to help students with disabilities best display their knowledge and skills.


References

Bruininks, R. H., Thurlow, M. L., Lewis, D. R., & Larson, N. W. (1988). Post-school outcomes in special education and other students one to eight years after high school. In R. H. Bruininks, D. R. Lewis, & M. L. Thurlow (Eds.), Assessing outcomes, costs and benefits of special education programs (Project Report Number 88-1). Minneapolis: University of Minnesota, University Affiliated Program on Developmental Disabilities.

Koretz, D. M., & Barron, S. L. (1998). The validity of gains in scores on the Kentucky Instructional Results Information System (KIRIS). Santa Monica, CA: Rand Corporation.

Thurlow, M., Seyfarth, A., Scott, D., & Ysseldyke, J. (1997). State assessment policies on participation and accommodations for students with disabilities: 1997 update (Synthesis Report 29). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Trimble, S. (1998). Performance trends and use of accommodations on a statewide assessment: Students with disabilities in the KIRIS on-demand assessments from 1992-93 through 1995-96 (State Assessment Series, Maryland/Kentucky Report 3). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.       

Ysseldyke, J., Thurlow, M., Erickson, R., Gabrys, R., Haigh, J., Trimble, S., & Gong, B. (1996). A comparison of state assessment systems in Kentucky and Maryland with a focus on the participation of students with disabilities (State Assessment Series, Maryland/Kentucky Report 1). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.

Ysseldyke, J., Thurlow, M., Erickson, R., Haigh, J., Moody, M., Trimble, S., & Insko, B. (1997). Reporting school performance in the Maryland and Kentucky accountability systems: What scores mean and how they are used (State Assessment Series, Maryland/Kentucky Report 2). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.        

Ysseldyke, J., Thurlow, M., Langenfeld, K., Nelson, J. R.,  Teelucksingh, E., & Seyfarth A. (1998). Educational results for students with disabilities: What do the data tell us? (Technical Report 23). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes.


Appendix A

Data Collection Survey

                           Survey of Services to Students with Disabilities

 

School Name: _________________________________        LSS School # __________

Student Name: ________________________________        LSS Student ID # __________

Student Date of Birth: ____\____\____    Grade: _____    SSIS Student ID # __________

Gender:   Male [ ]    Female [ ]    Race/Ethnicity: __________

Disability: ____________________    Federal Disability Code: ______

Previous Disability (if any): ____________________    Federal Disability Code: ______

Date of Last IEP: ____\____\____    Date of Last ARD Committee Meeting: ____\____\____

Survey Prepared by: ____________________    Date Survey Prepared: ____\____\____

 

1a.     In what setting is the student receiving services in accordance with the IEP and/or ARD Minutes? (Check the setting which best describes the student’s learning environment, then consider if English as a Second Language (ESOL) is a service being provided to the student.)

[ ] General Education Class – includes student enrolled in a comprehensive school who receives Special Education and related services OUTSIDE THE GENERAL EDUCATION CLASSROOM for less than 21% of the school day. For Preschool students, includes any combination of regular early childhood settings with no pull-out (e.g., Extended Elementary Education Program, Head Start, or other early childhood settings) as “inside the general education classroom.”

[ ] Separate Class – includes student enrolled in a comprehensive school who receives Special Education and related services (including Preschool pull-out programs) OUTSIDE THE GENERAL CLASSROOM for more than 60% of the school day.

[ ] Resource Room/Combined Program – includes student enrolled in a comprehensive school who receives Special Education and related services (including Preschool pull-out programs) OUTSIDE THE GENERAL CLASSROOM for at least 21%, but no more than 60%, of the school day.

 

1b.     English as a Second Language – student is also enrolled in English as a Second Language classes.    [ ] NDF    [ ] Yes    [ ] No

 

2.      What is the intensity of services stated on the IEP?

[ ] I     [ ] II     [ ] III     [ ] IV     [ ] V     [ ] VI     [ ] Unknown

 

3a.     Does the student receive services which are provided in an extended school year calendar?    [ ] NDF    [ ] Yes    [ ] No

 

3b.     Does the student receive services which are provided in a program which uses a twelve-month school year calendar?    [ ] NDF    [ ] Yes    [ ] No

 


4a.     According to the student’s IEP, what related service(s) is the student receiving this school year (1996 – ’97)? Is the service provided direct, indirect, or both (as in an inclusion model)? What is the schedule for providing the service? (Related services and other supportive services are required to assist a disabled student to benefit from Special Education. The related services include speech pathology and audiology, psychological services, physical and occupational therapy, recreation, early identification and assessment of disabilities in students, counseling services, and medical services for diagnostic or evaluation purposes. This also includes health services, social work services in the school, and parent counseling and training.)

 

 

Related Service Type         Direct/Indirect    Schedule/Time [hours per week]
EXAMPLE: Speech Therapy      Direct             3 times/week for 1 hour, total of 3 hours/week
_________________________    _______________    ______________________________
_________________________    _______________    ______________________________
_________________________    _______________    ______________________________

4b.          According to the student’s IEP, what specific academic goals require accommodation(s) and/or modification(s) in the student’s instructional setting this school year (1996 – ’97)?

 

 

IEP Academic Goal Areas    Accommodation/Modification Required?    Specific Accommodation
Reading                    ____________                            ______________________
Writing                    ____________                            ______________________
Language Usage             ____________                            ______________________
Mathematics                ____________                            ______________________
Science                    ____________                            ______________________
Social Studies             ____________                            ______________________


5.      Do the IEP and/or ARD minutes document the decision as to which outcomes the student will be pursuing?

[ ] Maryland Learning Outcomes Only
[ ] Alternative Outcomes Only
[ ] Both Maryland Learning Outcomes and Alternative Outcomes
[ ] Documentation not found

 

6.      What year(s) and grade(s) did/will the student participate in MSPAP?

School Year:    1994 – 95    1995 – 96    1996 – 97    1997 – 98    1998 – 99
Grade 3         _____        _____        _____        _____        _____
Grade 5         _____        _____        _____        _____        _____
Grade 8         _____        _____        _____        _____        _____

7a.     What year(s) and grade(s) did/will the student participate in IMAP?

School Year:    1994 – 95    1995 – 96    1996 – 97    1997 – 98    1998 – 99
Grade 3         _____        _____        _____        _____        _____
Grade 5         _____        _____        _____        _____        _____
Grade 8         _____        _____        _____        _____        _____

7b.     In the ARD minutes or on the student’s IEP is there documentation to indicate why the student was not taking MSPAP and why IMAP was more appropriate for the student?

[ ] No     [ ] Yes     [ ] Not available

Explain [indicate source of information (ARD minutes or student’s IEP)]: _______________

 


8a.     What year(s) and grade(s) did/will the student participate in CTBS?

School Year:    1994 – 95    1995 – 96    1996 – 97    1997 – 98    1998 – 99
Grade 3         _____        _____        _____        _____        _____
Grade 5         _____        _____        _____        _____        _____
Grade 8         _____        _____        _____        _____        _____

8b.     What year(s) and grade(s) did/will the student participate in MFT? [MFT is not applicable for this study.]

School Year:    1994 – 95    1995 – 96    1996 – 97    1997 – 98    1998 – 99
Grade 3         _____        _____        _____        _____        _____
Grade 5         _____        _____        _____        _____        _____
Grade 8         _____        _____        _____        _____        _____

9.      List the student’s end-of-year grades for the 1995 – 96 school year. For the 1996 – 97 school year, list the most recent grades reported, and indicate whether they are mid-year or first-quarter grades.

 

 

(If the areas don’t fit for a student seeking a certificate, indicate with an “X” and complete the Notes section.)

Area                 Grade for 1995-96    Grade for 1996-97 (1st Quarter [ ] / Mid-year [ ])
Reading              ________             ________
Writing              ________             ________
Language Usage       ________             ________
Mathematics          ________             ________
Science              ________             ________
Social Studies       ________             ________

Notes: ______________________________________________________________________

 

 


 


10.     If accommodations are made for a student, list them below, one accommodation per row.

 

 

Type                  Description
Scheduling            ________________________________
Setting               ________________________________
Equipment/Format      ________________________________
Presentation          ________________________________
Response              ________________________________

[Use the “Accommodations Permitted” Document for details]

                  1995 – 96 School Year                             1996 – 97 School Year
                  Instructional      Test Accommodations            Instructional      Test Accommodations
                  Accommodations     (List State Test Name:         Accommodations     (List State Test Name:
                                     MSPAP, MFT, CSTB)                                 MSPAP, MFT, CSTB)
                  Gen Ed   Spec Ed   Gen Ed   Spec Ed               Gen Ed   Spec Ed   Gen Ed   Spec Ed
Examples          (1.B)    None      (1.B)    (MSPAP)I.B            NONE     NONE      NONE     NONE
                  NONE     NONE      NONE     (MSPAP)II.G.          NONE     NONE      NONE     NONE

(Blank rows for listing additional accommodations.)

11a.    In the ARD minutes, is there documentation to indicate why INSTRUCTIONAL ACCOMMODATIONS were or were not made in the 1995 – 96 school year?

[ ] No     [ ] Yes     [ ] Not available

Explain [indicate source of information (ARD minutes or student’s IEP)]: _______________

In the examiner’s opinion, was the explanation well-grounded?    [ ] No     [ ] Yes

 


11b.    In the ARD minutes, is there documentation to indicate why INSTRUCTIONAL ACCOMMODATIONS were or were not made in the 1996 – 97 school year?

[ ] No     [ ] Yes     [ ] Not available

Explain [indicate source of information (ARD minutes or student’s IEP)]: _______________

In the examiner’s opinion, was the explanation well-grounded?    [ ] No     [ ] Yes

 

12a.    In the ARD minutes, is there documentation to indicate why ACCOMMODATIONS for state test(s) were or were not made in the 1995 – 96 school year?

[ ] No     [ ] Yes     [ ] Not available

Explain [indicate source of information (ARD minutes or student’s IEP)]: _______________

In the examiner’s opinion, was the explanation well-grounded?    [ ] No     [ ] Yes

 

12b.    In the ARD minutes, is there documentation to indicate why ACCOMMODATIONS for state test(s) were or were not made in the 1996 – 97 school year?

[ ] No     [ ] Yes     [ ] Not available

Explain [indicate source of information (ARD minutes or student’s IEP)]: _______________

In the examiner’s opinion, was the explanation well-grounded?    [ ] No     [ ] Yes

 

13.     Was the student EXEMPTED by the ARD Committee from state test(s) listed below?

Test Name    Exempted? (yes, no, dnf)    List the reason(s) for Exemption(s) and include the year of the exemption (e.g., ’96 – 7).

Examples:
(i)  the student transferred into the local school system with Limited English Proficiency. (’96 – 7)
(ii) the student is in need of functional life skills. (’95 – 7)

MSPAP        ________                    ____________________________________
MFT          ________                    MFT is currently not applicable for this study.
CTBS         ________                    ____________________________________

dnf = documentation not found

 


14.     Was the student EXCUSED by the ARD Committee from state test(s) listed below?

Test Name    Excused? (yes, no, dnf)    List the reason(s) for being Excused and include the year (e.g., ’96 – 7).

Example:
(i)  the student demonstrated extreme frustration and was not able to complete the assessment. (’96 – 7)

MSPAP        ________                   ____________________________________
MFT          ________                   MFT is currently not applicable for this study.
CTBS         ________                   ____________________________________

dnf = documentation not found

 

15.     Is there a local Criterion-Referenced Test (CRT)?    [ ] No     [ ] Yes

 

16.     Did the student participate in the local CRT?    [ ] No     [ ] Yes