A Comparison of State Assessment Systems in Maryland and Kentucky

with a focus on the participation of students with disabilities


Maryland / Kentucky Report 1

Published by the National Center on Educational Outcomes

November 1996


This document has been archived by NCEO because some of the information it contains is out of date.


Any or all portions of this document may be reproduced and distributed without prior permission, provided the source is cited as:

Ysseldyke, J., Thurlow, M., Erickson, R., Gabrys, R., Haigh, J., Trimble, S., & Gong, B. (1996). A comparison of state assessment systems in Maryland and Kentucky with a focus on the  participation of students with disabilities (Maryland-Kentucky Report 1). Minneapolis, MN: University of Minnesota, National Center on Educational Outcomes. Retrieved [today's date], from the World Wide Web: http://cehd.umn.edu/NCEO/OnlinePubs/MDKY_1.html


Abstract

States currently are struggling with the development of assessment and accountability systems that are for all students. Two states, Maryland and Kentucky, have made major advances toward fully inclusive assessment systems. We describe and compare the systems used in the two states with a focus on components, scoring methods, reporting practices, data use, participation of students with disabilities, testing accommodations, and implementation considerations. Maryland and Kentucky practices are then contrasted to those used in the other states.


Nearly every state department of education is engaged in the specification of standards and the development of systems of assessment to use in making judgments about the extent to which students and schools are meeting high standards (AFT, 1996; Bond & Roeber, 1995). Major reform legislation (Goals 2000, School-to-Work, Improving America's Schools Act, and the Individuals with Disabilities Education Act) includes wording indicating that high standards and accompanying assessments are for all students. Two states, Maryland and Kentucky, have made major advances in the development of fully inclusive assessment and accountability systems. Maryland has about 99% of its students participating in the assessments, and Kentucky has 100%. In this paper, we describe and compare the assessment systems and practices in the two states. The purpose in documenting what is happening in these two states is both to clarify the systems and to provide models for other states.


Assessment Practices in Maryland

Overview

Maryland's School Performance Program (MSPP) was established in 1989 by the Maryland State Board of Education as the vehicle to move toward a high-quality educational system for all of Maryland's students in the 21st century. Maryland's focus on school performance and standards was the result of a report by a Governor's Commission on School Performance. The Commission reported that the state lacked an accountability system that could produce good information on how Maryland's students were doing and on who should be accountable for producing high performance by each student within the education system. The Commission, which had broad representation from stakeholder groups, deliberated from 1987 to 1989. Following the State Board of Education's acceptance of the Commission's recommendations, between 1989 and 1990 representatives of numerous groups from across the state (e.g., teachers, content supervisors, administrators, citizens, parents, special educators, and higher education personnel) worked to reach consensus on performance areas for which schools should be held accountable. An original list of some 200 variables was narrowed to approximately twelve, with seven additional areas reported as supporting information. The twelve variables were linked to state standards, and the seven were used to provide information about the difficulty of the education task for a particular building. All of this effort was focused at the building level, with companion reports at the district and state levels.

 

Standards and Curriculum Frameworks in Maryland

Maryland established content standards (learning outcomes) in mathematics, reading, writing, language usage, social studies, and science. The expression of these standards is expected to differ in complexity, language, abstractness, and the structure used to elicit the skill at different grade levels. These content standards were followed by the development of proficiency levels and a single state standard for satisfactory and excellent performance. The content and performance standards set the parameters for the assessment practices that would be used in Maryland to hold schools accountable. Performance assessments are conducted to ascertain the extent to which students are meeting, or making progress toward meeting, state standards, which were established for the year 2000. Schools are responsible for showing continuous improvement toward the standards annually, for publishing an annual report to the public on that progress, and for involving the business and citizen community in the development of school improvement plans to ensure continuous progress.

The mathematics standards, following closely the standards from the National Council of Teachers of Mathematics (NCTM, 1989), set goals for students in the areas of reasoning, problem solving, communication in mathematics, and understanding mathematics connections within the discipline, with other content areas, and with the real world.

The standards for reading are concerned with the ways students construct, extend, and examine meaning by orienting themselves to various texts and purposes, and the various ways that students interact with texts.

The writing and language usage standards examine the ways students create meaning for various purposes and audiences, using both rhetorical devices and written expression. According to the standards, the purposes served by writing are focused on writing to inform, persuade, and express personal ideas.

The social studies standards have been informed by a number of different sources, including the Maryland Social Studies Task Force, the National Council for the Social Studies (NCSS), the National Assessment of Educational Progress (NAEP), the Bradley Commission on History in Schools, the National Governors' Association, and other significant groups that have been investigating the role of social studies in schools. The standards focus on the knowledge base necessary for understanding history and the social sciences, analysis and application of the knowledge base, and the attitudes necessary to use the knowledge and skills within a context of justice and democratic decision making.

Finally, in the area of science, the standards are based on the principle that science is not a collection of facts nor a collection of processes, but rather that science involves construction of meaning through particular ways of observing, thinking, experimenting, and validating.

In addition to these content standards, the Maryland State Board of Education adopted the Dimensions of Thinking presented by the Association for Supervision and Curriculum Development (ASCD) as a statewide definition of thinking concepts and skills that must be infused in the assessments. Teams of experts in the Dimensions periodically review Maryland assessment tasks to ensure that the concepts and skills are covered in both depth and breadth.

 

Purposes

The Maryland School Performance Program is used for school improvement and accountability purposes. It includes both student accountability and program accountability. Student accountability is based on performance on the Maryland Functional Testing Program (MFTP). The test is used for a high-stakes purpose: high school graduation. This program will soon be replaced by a series of 10 end-of-course content assessments that are related to the number and content of graduation requirements. Passage of these state tests will also be a condition of graduation for any student receiving a regular high school diploma.

Program accountability is based on information obtained from the Maryland School Performance Program performance assessments, plus the performance indices and "supporting information" described below.

 

Components of the Maryland Assessment System

The accountability system in Maryland is called the Maryland School Performance Program (MSPP), and it includes data from two sources: student performance and supporting information. The data on student performance are collected from four sources. The first is the Maryland School Performance Assessment Program (MSPAP), a totally performance-based assessment given to all students in grades 3, 5, and 8. The assessments contain tasks rather than items, and the tasks may call for group interaction. The results of the assessments produce scale scores in reading, mathematics, writing, science, social studies, and language usage. These scale scores align with five proficiency levels, each of which describes what a student at that level is able to do. Proficiency level three has been adopted by the State Board as the standard for "satisfactory" performance.

Three additional indices of performance are also reported annually: student attainment (promotion rate and high school graduation rate), student participation (attendance and dropout rates), and postsecondary plans and decisions (grade 12 documented decisions). All standards are within grade level and school, and data are disaggregated by race and gender.

The second component, supporting information, includes information on student characteristics (e.g., enrollment and mobility), kindergarten completion, and the number of students receiving special services. Other factors reported include financial information, staffing ratios, instructional time, and results of performance on the Comprehensive Test of Basic Skills/4, which is given to a sample of students in grades 3, 5, and 8 in each district and reported only at the district level.

 

What Do the Tests Look Like?

The Maryland Functional Testing Program includes four basic minimum competency tests: three multiple choice tests in reading, mathematics, and citizenship, and a modified-holistically scored direct writing assessment of both narrative and explanatory writing skills. All of these tests are untimed. The multiple choice tests take approximately one and one-half hours apiece to administer; the writing measure takes approximately two hours and is given over two days. There are computer-adaptive versions of the reading and mathematics tests, which usually take approximately 30 minutes. The Maryland Functional Testing Program began as a graduation requirement program, and is now required to be completed by the end of eighth grade.

The Maryland School Performance Assessment Program (MSPAP), currently available at the elementary and middle school levels, measures higher order thinking processes and the application of knowledge and skills to real world situations. It is a single test covering mathematics, reading, writing, science, language usage, and social studies. The MSPAP takes nine hours of testing time and is composed of performance tasks with open-ended, short answer, and extended performance components based on the State Board adopted outcomes. Item specifications and scoring rubrics for the MSPAP were built by 350 teachers who worked with Maryland State Department of Education personnel and CTB Macmillan/McGraw-Hill. The performance assessments are scored each summer by approximately 650 teachers who are hired and trained by Maryland State Department of Education personnel and a scoring contractor, currently Measurement Incorporated of Durham, North Carolina (samples of released items and student performances can be found in Appendix A).

 

Psychometric Properties of the Tests

For each administration of the Maryland School Performance Program a technical report is completed. Similar reports are developed for the administrations of the Maryland Functional Testing Program as well. Maryland made an early commitment, not only to performance assessment, but also to ensuring the psychometric integrity of those assessments. The state's performance assessment for students with severe disabilities, the Independence Mastery Assessment Program (IMAP), is currently being piloted and will also be expected to meet psychometric standards.

In addition to data on the validity and internal consistency of the performance tasks, the scoring contractor is required to provide scorer qualifying information for all scorers before they begin to score tests. Additionally, daily check sets and a daily report on scorer reliability must be produced, and retraining provided for scorers who fall below an agreed upon standard. Approximately five to six scorers are involved in the scoring of an individual student answer booklet.

The Maryland School Performance Assessment Program is divided into three equivalent clusters at each grade level (3, 5, and 8). Each cluster includes the six content areas, and each student takes one cluster. Assignment to clusters is by a randomization formula provided by the State Department of Education to each of the schools. Each school takes all three clusters. School program data are derived by aggregating across the three clusters. Scale scores are reported at the global content area level (e.g., mathematics), but also at suboutcome levels (e.g., reasoning), to facilitate use by classroom teachers for instructional improvement. Scale scores are then reported at the building, system, and state levels in the context of the five proficiency levels. The sampling design serves to limit the amount of instructional time that is devoted to state testing, while at the same time producing valid and reliable scores at the program level.
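To make the matrix-sampling arrangement concrete, the sketch below shows one way the cluster assignment and school-level aggregation could work. It is only an illustration: the student identifiers, the uniform random assignment, and the simple averaging are placeholders for the example, not Maryland's actual randomization formula or scaling procedure.

import random
from collections import defaultdict
from statistics import mean

CLUSTERS = ("A", "B", "C")  # three equivalent clusters, each covering all six content areas

def assign_clusters(student_ids, seed=1996):
    """Randomly assign each student exactly one cluster (illustrative only)."""
    rng = random.Random(seed)
    return {sid: rng.choice(CLUSTERS) for sid in student_ids}

def school_result(scale_scores, assignments):
    """Aggregate one content area's scale scores across the three clusters.

    scale_scores -- {student_id: scale score} for a single content area
    assignments  -- {student_id: cluster} as produced by assign_clusters
    """
    by_cluster = defaultdict(list)
    for sid, score in scale_scores.items():
        by_cluster[assignments[sid]].append(score)
    cluster_means = {c: mean(v) for c, v in by_cluster.items()}
    # The school-level program score pools the three cluster results (simple average here).
    return mean(cluster_means.values()), cluster_means

# Example: 90 hypothetical grade 5 students with mathematics scale scores around 500.
students = [f"student_{i}" for i in range(90)]
assignments = assign_clusters(students)
math_scores = {sid: random.Random(i).gauss(500, 25) for i, sid in enumerate(students)}
school_mean, per_cluster = school_result(math_scores, assignments)
print(round(school_mean, 1), {c: round(m, 1) for c, m in sorted(per_cluster.items())})

Because each student takes only one of the three clusters, the design limits individual testing time while the school still receives results that draw on all three clusters.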

 

What Do the Scores Look Like?

The Maryland Functional Test scores are Pass-Fail. The Maryland Functional Test results are included in the school accountability system by assigning ratings of excellent, satisfactory, or "not met" to the percentage of students passing the tests by the end of grade 9 and the end of grade 11. For the Maryland School Performance Assessment Program, three categories related to standards performance are reported: excellent, satisfactory, or not met. They are defined as follows:

For other standards of pupil performance (promotion rates, attendance rates, dropout rates, etc.) schools are rated on the same levels of state standard attainment of excellent, satisfactory, and not met. The state standards for "satisfactory" govern all schools regardless of demographics or difficulty of the education task. The standards for "excellent" require that a school meet the satisfactory standards and have at least 25% of its students performing at the higher levels.

 

What Do Reporting Practices Look Like?

The Maryland School Performance Report: School System and Schools is published in December every year for the State and each school system. It includes summary and disaggregated data at the State and school system levels. The numbers of students exempted from assessments are reported. Disaggregated data are reported by gender and race/ethnicity for percent passing the Maryland Functional Test, ratings on each Maryland School Performance Test, and ratings on each performance index (attainment, participation, and postsecondary plans). A comparable report that provides the same data for each school building must be produced by each school system. Each building is responsible for producing and disseminating its own report within the local community. Both system and building reports may go beyond the variables reported by the state, but must use a parallel format. Approximately 16 of 24 systems have exercised the option to add local variables that are of concern to the immediate community. Examples of these additional variables are advanced placement tests, elementary parent-teacher conferences, and number of volunteers per school.

 

How Are the Data Used?

Each school that is not meeting the state standards must have a School Improvement Team. School level reports are provided to each building through the local assessment coordinator. The results are used by the team to develop a School Improvement Plan. The team looks at the performance of general education and special education students by curriculum area, compares actual performance to the performance standards, establishes milestones for improvement, and monitors changes in performance at various checkpoints during the school year. The information is used to guide and improve a school's instructional and organizational activities.

The State Department of Education monitors the progress of each school annually under an accreditation provision known as reconstitution. This provision requires that a school that is not meeting standards make progress toward those standards. A lack of growth, or movement in a downward direction, triggers a reconstitution review, which ultimately could lead to a state takeover of the school. However, low-performing schools first receive technical assistance and additional funding to alter their performance.

The governor introduced budget provisions in the 1996 legislative session to provide a recognition program for high-performing schools. The ultimate intent is to allow successful schools to help similar ("like") schools that are not succeeding.

 

To What Extent Do Students with Disabilities Participate in the Accountability System and the Assessments?

MSPP requires that all students be included in the accountability process at the designated grades in all data areas unless those students meet exemption requirements that release them from specific areas of participation.

For students with disabilities, each student's Admission, Review, and Dismissal (ARD) school level committee determines, on an individual basis, whether the student meets the exemption criteria. Students, including students with disabilities, may be exempt from the Maryland School Performance Assessment Program if they are:

a) second semester senior transfers from out of state,

b) limited English proficient (LEP) students in their first test administration, or

c) not pursuing the Maryland Learning Outcomes (Reading, Writing, Language Usage, Mathematics, Science, and Social Studies).

An alternative set of outcomes, more life-skills oriented, has been identified and developed by the state under the direction of the IMAP Advisory Committee. The Independence Mastery Assessment Program (IMAP) is being piloted in approximately one third of the local school systems in Maryland. In those districts, students with severe disabilities participate in either MSPAP or IMAP, generally depending on which outcomes they are pursuing. The various ways that students with disabilities in non-pilot districts participate in MSPAP include:

a) total participation in MSPAP with no accommodations,

b) total participation in MSPAP with accommodations, and

c) total exemption from MSPAP for students who meet criteria for exemptions.

The various ways that students with disabilities in the MSPAP/IMAP pilot districts participate include:

a) same as a above,

b) same as b above,

c) total participation in IMAP. IMAP students have severe cognitive disabilities that prevent them from successfully completing regular education course work even with accommodations. This does not mean, however, that they are not included in the regular education program or that they cannot participate in regular school classes and/or activities.

There is flexibility built into MSPAP/IMAP decisions to allow these students to move, when appropriately determined by the school ARD committee, between MSPAP and IMAP. The three components of IMAP include:

a) performance tasks,

b) parent survey, and

c) a student portfolio.

Students with disabilities are included in all other data elements of the Maryland School Performance Program (MSPP) including the Maryland Functional Testing program, a graduation requirement.

 

What Accommodations Are Permitted?

Accommodations are permitted in the administration of the various assessments of MSPP (MSPAP, MFT, CTBS/4, and IMAP). The state has produced guidelines to assist in the determination of appropriate accommodations. Some general principles include:

The accommodations should reflect the student's ongoing instructional program including classroom assessments.

For each of the state assessments, accommodations may include changes in: scheduling, setting, equipment, presentation, and response. These five areas are further delineated for each statewide assessment (i.e., MSPAP, MFT, CTBS/4).

 

Implementation Considerations

The implementation of Maryland's assessment and accountability system is broad, requiring personnel, resources and costs, training, and monitoring.

Personnel. MSPAP State Level--The Maryland State Department of Education manages the design, development, and implementation of its three state-level testing programs through the Assessment Branch of the Division of Planning, Results, and Information Management (PRIM). That branch has a total of 10 staff members. Additionally, for the Maryland School Performance Assessment Program (the state's performance assessment), curriculum specialists in language arts, reading, mathematics, social studies, and science are assigned to work collaboratively on the development of the assessments. Their time commitment is approximately 75%. Reporting of assessment results is handled by the Results Branch of PRIM. It should be noted that in 1987 the Assessment Branch consisted of approximately 12 staff members. Hence, the performance assessment program has been implemented through cooperative activities with the curriculum specialists, with virtually no addition of staff in the area of assessment.

MSPAP Local Level--At the local school system level each system has a formally appointed Local Accountability Coordinator. This individual, named by the local superintendent, is directly responsible for linkages to the State Department of Education. Additionally, each building has designated a school test coordinator, who is responsible for the administration and management of the assessments, particularly the Maryland School Performance Program, at the building level.

To begin the development of the performance assessment in grades 3, 5, and 8, each school system was invited to send 15 teachers (five at each grade level) to assist in the development. These individuals were released for approximately 20 days during the school year. Additionally, approximately 12 curriculum supervisors worked approximately 30 days on the development effort. This major commitment of local time was supported by the local superintendents as crucial to local teachers' understanding of the assessment and ownership of the product.

IMAP State Level--The planning and development of alternate outcomes and assessment procedures was initiated by the IMAP Advisory Committee with the assistance of a federally funded grant to develop state outcomes and indicators for this group of students with severe disabilities. One state-level staff member spent approximately 30% of his or her time on the project. The advisory committee was identified and selected to serve at little or no additional cost, primarily through participation in meetings. After a series of initial meetings, the federal grant was secured to assist and focus the development of the outcomes and assessment system. The federal funding greatly accelerated the development process and relieved the state of some initial start-up costs for this assessment. Once the work was under way, allocations were developed for a core group of teachers, substitutes, and summer workshops. Assessment procedures and processes were designed and initiated, and they have continued to be developed and refined since the end of the federal grant.

In Maryland, ten to fifteen stakeholders, one SEA staff member, one to two institutions of higher education (IHE) personnel, and eight to ten teachers working one week in the summer were sufficient to begin the development of IMAP assessments. Teachers were added each summer to serve as trainers in their school systems the following year. Each year, a grade/age group was added to be trained and to assist in the scoring process.

General Cost Factors. Although we do not provide a full cost analysis, several cost factors can be shared. In the interest of generalization to other states, this discussion focuses on cost per pupil, since that figure is not as readily influenced by the size of the state or the number of school districts.

Regular Assessment System Costs--The state's norm-referenced program (CTBS) typically operates at approximately $5-6 per student. The Maryland Functional Testing Program, including tests in reading, mathematics, citizenship, and the direct assessment of writing, operates at a cost of approximately $12-15 per student. The Maryland School Performance Program was designed, developed, and implemented at a cost of approximately $21-23 per student. While this cost may be considered high compared to traditional norm-referenced selected-response testing, when viewed from the perspective of quality control, this cost is a minor part of the $5,000-8,000 per pupil being spent within public education today.

These estimates do not include the cost of State Department of Education assessment staff, since no new staff were added as a result of the programs. Indeed, if any of the programs were to cease to exist, there is no assumption that staff savings would accrue. Hence, to include their costs would appear to inflate the figure in a way that would not represent the true costs of the program.

IMAP Alternate Assessment Costs--The cost of the alternate assessment in Maryland has been relatively nominal. Data on costs are shown in Table 1. Performance assessments required paying teacher stipends to develop an initial bank of performance items. Parent surveys cost approximately $400 per local school system for a site license to make copies. The costs of portfolio training and development were nominal and were covered by teachers' work during the summer workshops. Videotape, used for recording the individual student performance tasks, was purchased in bulk at approximately $3.50 per VHS tape.

All of these costs are developmental and linked to pilot testing, since the program has yet to be implemented for all students eligible for IMAP. As that time approaches, a more comprehensive implementation cost analysis will be conducted.

Training Issues. Regular Assessment System Training--As mentioned earlier, 15 teachers from each school system were invited to participate in the development of the assessment. Annually, task development for new editions of the tests takes place using approximately 100 teachers. All of these individuals received training in task development and test specifications prior to task development.

Additionally, the MSPAP is scored each summer by approximately 600-650 Maryland teachers. Each of these individuals is trained in the scoring methodology and monitored on a daily basis for the maintenance of scorer reliability. Daily reports are submitted by the scoring contractor, and remedial training occurs for individuals whose scoring reliability falls below an acceptable level. All scorers are hired by the contractor, so that scorers unable to reach or maintain an acceptable level are terminated.

IMAP Training--Training for Maryland's IMAP alternate assessment takes place in two stages. First, a presentation is given in the local school system to build awareness and to respond to questions. The second phase is a two-week training session in the summer, with one week devoted to scoring and the second week to the development of new performance tasks and the revision of old tasks. Every year new teachers are trained and the process is reviewed. There are two review panels: the first, made up of field experts, reviews the content, appropriateness, and completeness of the outcomes, indicators, and rationale; the second reviews the technical soundness of the assessment process and its congruence with the other state assessments.

Monitoring. Regular Assessment System--Scores are monitored annually for unusual peaks and valleys. In fact, the monitoring system has identified several instances of cheating. The State Board of Education has established a stringent policy regarding assessment irregularities, and several teachers have had their licenses suspended or permanently revoked.

IMAP Alternate Assessment System--IMAP monitoring currently is a local school system process with the appointment of a program coordinator/director and accountability coordinator to carry out the duties of monitoring. This is expected to change as the program moves from a development to an implementation phase.

 

Table 1. Estimated Cost of Initial IMAP

The estimated cost of IMAP development is based on cost summaries and factors included in the development of the system. As changes emerge and refinements are made, cost factors will vary. It is important to understand that start-up costs generally are higher than maintenance costs. The cost estimates are expressed on a per-student basis.

Item                                         Cost       Notes
Activity Materials                           $6.00*     Vary according to task
Parent Survey                                $3.00*     Site license for entire LSS, $400
Video Tapes                                  $3.50*     Bulk rate possible when using MPT
Task Scoring                                 $18.75*    One teacher scoring eight videos per day @ $150
Task Development/Revisions of
  Outcomes/Indicators                        $75.00†    One teacher develops two tasks per day @ $150
Portfolio
  a) Basic                                   $5.00††    Basic components and packaging
  b) Optional                                $30.00     CD-ROM site (LSS) with SEA assistance
Substitutes                                  $23.00     Training assumes 2-3 students per teacher/substitute; pay @ $70 per day for one day of training
Staff Development                            $27.00     Trainer-of-trainers model; $2.00 per in-service, 75 students per LSS

Note: All costs are based on an estimate of 1,803 students in the state and 75 students in each local school system (LSS). Totals are:

$191.25 per child for initial start-up cost ($14,344 per LSS)

$31.25 per child for basic cost and ongoing costs ($2,343 per LSS)

* Each-year cost for the minimum per-student assessment amount

† Reduced each year, depending on the amount of new tasks required and revisions necessary

†† Basic amount, generally a one-time cost
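As a rough check on the totals in the note above, the line items appear to sum as shown in the sketch below. The grouping into ongoing (starred) versus start-up-only items is inferred from the footnotes, so this is a reconstruction of the apparent arithmetic rather than an official cost breakdown.

# Per-student line items from Table 1; the ongoing/start-up grouping is inferred
# from the footnotes (starred items recur each assessment year).
ongoing = {
    "activity_materials": 6.00,
    "parent_survey": 3.00,
    "video_tapes": 3.50,
    "task_scoring": 18.75,
}
start_up_only = {
    "task_development": 75.00,
    "portfolio_basic": 5.00,
    "portfolio_optional": 30.00,
    "substitutes": 23.00,
    "staff_development": 27.00,
}

STUDENTS_PER_LSS = 75

ongoing_per_child = sum(ongoing.values())                            # 31.25
initial_per_child = ongoing_per_child + sum(start_up_only.values())  # 191.25

print(f"Initial start-up: ${initial_per_child:.2f} per child, "
      f"about ${initial_per_child * STUDENTS_PER_LSS:,.0f} per LSS")  # ~$14,344
print(f"Ongoing:          ${ongoing_per_child:.2f} per child, "
      f"about ${ongoing_per_child * STUDENTS_PER_LSS:,.0f} per LSS")  # ~$2,344 (reported as $2,343)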


Assessment Practices in Kentucky

Overview

The Kentucky Education Reform Act (KERA) of 1990 formed the basis for massive change in the state's educational system. This reform was enacted by the Kentucky General Assembly as a result of a lawsuit brought against the General Assembly, the governor, the Kentucky Board of Education, and the chief state school officer by the Council for Better Education (CBE), which represented approximately 60 of the state's 176 school districts. The lawsuit succeeded in 1988: the courts found the state's funding mechanisms inequitable and mandated that the educational system be redesigned. One of the most comprehensive statewide restructuring efforts ever attempted in the United States, the reform called for top-down and bottom-up systemic change in finance, governance, curriculum, and assessment.

KERA established six goals for the schools of the Commonwealth: (1) expect a high level of achievement of all students; (2) develop students' abilities in six cognitive areas; (3) increase school attendance rates; (4) reduce dropout and retention rates; (5) reduce physical and mental health barriers to learning; and (6) increase the proportion of students who make a successful transition to work, postsecondary education, and the military.

The first and second of these goals were translated into a "primarily performance-based assessment" program using two approaches. First, a Council on School Performance Standards was created to further define the six cognitive areas related to student ability. Through a two-year period of public input and review, the Council arrived at 75 performance goals, now known as "Academic Expectations." Through various review processes, these expectations were refined and consolidated; 58 are assessed formally within the measurement system. The Council presented the results of its efforts to the State Board of Education in December 1991.

The other major activity was to establish an assessment system, the Kentucky Instructional Results Information System (KIRIS), to measure progress toward the goals, primarily the academic expectations reflected in the first two goals of the act. Through a competitive process, the Kentucky Department of Education selected Advanced Systems in Measurement and Evaluation as the contractor for the assessment program. In addition to creating and operating the program based on the Academic Expectations, Advanced Systems was required to assist the Department in creating operational definitions of the third through sixth goals, referred to as the "non-cognitive goals." Advanced Systems hired a subcontractor, the Far West Laboratory for Educational Research and Development, to develop recommendations for the non-cognitive goals for the Department.

 

Standards and Curriculum Frameworks in Kentucky

Kentucky's reform effort is based on the notion that all students are capable of learning. The expectations for students outlined in the six learning goals and the identified academic expectations provide the framework for holding high expectations for all students.

The six Learning Goals of KERA, as outlined under the second of the act's six major goals, are as follows:

  1. Students are able to use basic communication and mathematics skills for purposes and situations they will encounter throughout their lives.
  2. Students shall develop their abilities to apply core concepts and principles from mathematics, the sciences, the arts, the humanities, social studies, practical living studies, and vocational studies to what they will encounter throughout their lives.
  3. Students shall develop their abilities to become self-sufficient individuals.
  4. Students shall develop their abilities to become responsible members of a family, work group, or community, including demonstrating effectiveness in community service.
  5. Students shall develop their abilities to think and solve problems in school situations and in a variety of situations they will encounter in life.
  6. Students shall develop their abilities to connect and integrate experiences and new knowledge from all subject matter fields with what they have previously learned and to build on past learning experiences to acquire new information through various media sources.

 

The Alternate Portfolio Advisory Committee, a committee consisting mainly of teachers of students with moderate to severe disabilities, identified a subset of Kentucky's Academic Expectations for use in the Alternate Portfolio Process. Twenty-eight expectations were identified as critical to maintaining the integrity of functional programming for students participating in the Alternate Portfolio Process (KIRIS Kentucky Alternate Portfolio Project--Teacher's Guide, p. 6). This subset of Kentucky's Academic Expectations is used to assess the performance of students participating in the Alternate Portfolio system. The Alternate Portfolio is intended to be used with that small percentage of students who cannot participate in the mainstream assessment process, even with the assistance and/or adaptations that can be made available to the instructional process.

 

Purposes

The purposes of the KIRIS assessment system include:

 

Components of the Kentucky Assessment System

The contents of the KIRIS assessment components are shaped primarily by the direction of content area advisory committees, with members drawn mostly from classrooms, schools, professional education organizations, higher education, community groups, the Department of Education, and Advanced Systems in Measurement and Evaluation. The KIRIS assessment, which has been administered annually from 1991 to 1996, includes the following types of assessment tasks:

Assessment tasks involving portfolios--Each student in grades 4, 8, and 12 is required to assemble a Writing Portfolio and a Mathematics Portfolio (as of the 1994-95 school year Mathematics Portfolios are required in grade 5, rather than grade 4). These portfolios represent collections of the student's best work developed over time in conjunction with support from teachers, peers, and parents. The portfolios are scored by local teachers, and the scores are reported to the Kentucky Department of Education for use in the accountability assessment. Mathematics portfolios will not be included in the baseline calculation for 1996-97 and 1997-98, but will be included for instructional purposes in 1997-98, and for accountability purposes in 1998-99.

Assessment tasks involving performance events--Students participate in performance-based assessment tasks that require them to use knowledge and skills learned in school to produce a product or solve a problem. Rather than recall facts, students apply what they have learned to a real (or real-life simulated) situation. Performance event tasks involve both group and individual work, are based on manipulatives and/or other materials, and take about an hour each for completion. Performance event tasks are administered by test administrators hired by Advanced Systems in Measurement and Evaluation. For 1996-97 and beyond, performance events enter a research and development phase. Until this is complete, they will not be included in the accountability index.

Assessment tasks involving open-ended questions--Students respond to open-ended questions requiring extended written responses. The focus is on higher-order thinking skills, solving multi-step problems, and using reasoning, analytical, and written communication skills.

Assessment tasks involving machine-scorable questions--In 1992-94 students also answered a section of multiple choice questions, although these were not used for accountability purposes. Beginning in 1994-95, KIRIS included a section of other item types being evaluated for possible inclusion in the future. Beginning in 1996-97, a section of multiple choice questions will be included in each content area for accountability purposes.

KIRIS also monitors school progress in terms of non-cognitive indicators such as school attendance rates, dropout and retention rates, reduction in physical and mental health barriers to learning, and the proportion of students who make a successful transition to work, postsecondary education, or the military.

Within the accountability system used in Kentucky, school performance is reported as an overall Index score, an Index score for each of the cognitive areas, and an Index score for the non-cognitive measures taken together. Students' results in the cognitive areas are reported in terms of four performance levels: Novice, Apprentice, Proficient, and Distinguished.

The Alternate Portfolio Assessment process is multi-disciplinary rather than confined to a single curriculum area. It follows the model of the Kentucky mathematics and writing portfolios in using a holistic scoring guide. The following are key concepts of the Alternate Portfolio:

 

What Do the Tests Look Like?

The KIRIS Transitional (On-Demand) Assessment currently consists of challenging open-response items. A student may be asked, for example, to read a selection from grade-appropriate literature and to demonstrate comprehension of that passage by writing several paragraphs. KIRIS Performance Events require students to complete group activities first and then to produce an individual written response to a specific question or questions. For example, students may be asked to perform a brief science experiment and record the resulting data in a group setting before individually responding to an open-response item (samples of released items and student performances can be found in Appendix B).

Portfolio entries ideally should resemble normal classroom work and should represent the student's best work in that content area, determined mostly by the student with advice from the instructional staff.

 

Psychometric Properties of the Various Measures

Content validity. The KIRIS assessment components address content validity in a traditional manner. First, the Kentucky State Board of Education established a set of academic expectations that were to form the basis for the KIRIS assessment process. Content Advisory Committees were established in the subject areas of reading, mathematics, science, social studies, writing, arts and humanities, and practical living/vocational studies. In addition, an Alternate Portfolio advisory committee was created for the assessment mechanism that crosses content area lines and is designed for students with severe disabilities. These committees consisted mostly of teachers with expertise in the content areas tested. The content advisory committees reviewed assessment items and tasks to ensure that they addressed the academic expectations and the identified core content.

Consequential validity. In an environment where assessment results have significant impact on the instructional process, the question of consequential validity must also be considered. The components of the KIRIS assessment were designed to encourage certain kinds of instructional practices. While traditional instruction in the basic skills was still encouraged, the KIRIS assessment components clearly required instruction to address students' ability to apply both basic skills and higher-order skills in constructing responses to rather complex and challenging assessment tasks.

Reliability and generalizability. Although accountability decisions take place only at the school level, results are reported to parents at the student level. It is useful, therefore, to examine reliabilities at that level. In Table 2, we list student-level coefficient alphas for open-response items for the first accountability cycle. These figures are calculated using results from students who were eligible to complete open-response tests and who were present on the day of testing; absence was defined as blank responses to all items, and blank responses by students who had attempted at least one question were counted as zero in the computation of coefficient alpha. Absent students were excluded because including their all-blank records would slightly overestimate score reliability.
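For readers unfamiliar with the statistic, coefficient alpha for a test of k items is the standard internal-consistency index,

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),

where \sigma^{2}_{Y_i} is the variance of scores on item i and \sigma^{2}_{X} is the variance of total scores across students. This is the textbook formula; the exact computational details for KIRIS (e.g., the handling of common and matrix forms) are those described in the note to Table 2.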

 

Table 2. Open-Response Test Reliabilities by Subject and Grade

Grade   Subject           1991-92 Open-Response      1992-93 Open-Response      1993-94 Open-Response
                          (three items, 12           (five items, 20            (five items, 20
                          possible score points)     possible score points)     possible score points)

4       Reading                  .58                        .77                        .79
        Mathematics              .63                        .72                        .75
        Science                  .63                        .65                        .73
        Social Studies           .64                        .69                        .76
        Composite                .88                        .92                        .93

8       Reading                  .72                        .83                        .80
        Mathematics              .61                        .79                        .71
        Science                  .62                        .75                        .66
        Social Studies           .74                        .81                        .83
        Composite                .89                        .94                        .92

12      Reading                  .79                        .85                        .75
        Mathematics              .65                        .81                        .76
        Science                  .72                        .81                        .76
        Social Studies           .77                        .85                        .85
        Composite                .92                        .95                        .94

 

NOTE: The above information is from pages 196-198 of the KIRIS Accountability Cycle I Technical Manual, KDE, July 10, 1995. The reliabilities were computed by form for common and matrix items, and averaged.

 

With the exception of grade 4 Science, open-response reliabilities increased from 1991-92 to 1992-93, a logical result of increasing the number of items from four to seven. The 1994-95 assessment includes an additional item in grades 8 and 11.*

As a caveat, it should be noted that using coefficient alpha probably underestimates score reliability insofar as item raw scores are the basis for the computation, whereas the fundamental scaling method employs a logistic model. The use of item response theory takes into account differences in item difficulty that are not reflected in the computational use of raw scores. A five-item test having a single, relatively difficult item is likely to yield a diminished coefficient alpha in comparison with results from a comparable test with essentially uniform item difficulty.
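The particular logistic model is not specified here; as a generic point of reference, the two-parameter logistic form for a dichotomously scored item is

P_i(\theta) = \frac{1}{1 + e^{-a_i(\theta - b_i)}},

where \theta is the student's proficiency, b_i is the item's difficulty, and a_i its discrimination. Polytomously scored open-response items (e.g., up to four score points each, as the column headings of Table 2 imply) are handled by extensions of this form, such as partial credit or graded response models. Because such models weight items according to their estimated parameters, they can reflect differences in item difficulty that a raw-score-based coefficient alpha cannot.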

Student-level reliability estimates (in terms of scorer agreement) for portfolios and performance events are addressed in Kentucky's technical reports. Given that each student produces only one writing portfolio and, with limited exceptions, takes only one performance event, score reliability cannot be examined in the same manner as open-response items.

* Although the Kentucky Department of Education advises against making student-level decisions based on individual test scores alone, open-response test reliabilities compare favorably with reliabilities from other tests used to make student-level decisions. Composite score reliability is comparable to reliabilities noted for the ACT Composite, and individual subject area reliabilities are similar to ACT and CTBS subject area reliabilities, as well as to reliabilities for a national certification test in accountancy.

 

Who Takes the Tests and What Is the Unit of Analysis?

In general, students with disabilities take all of the regular KIRIS assessment components, either without adaptations, as do all other students, or with the assistance and/or adaptations normally provided through the daily delivery of instructional services. A small percentage (generally less than one percent of the total accountability grade) of students with disabilities participate in the KIRIS Alternate Portfolio Assessment: if a student with disabilities cannot participate in the regular curriculum even with all of the assistance and adaptive devices that can be made available, then the student participates in the Alternate Portfolio process.

 

What Do Reporting Practices Look Like?

Each school district is required to publish an annual report that informs the public about its performance in each of the critical areas used to determine school success. In addition, a system for determining successful schools has been established in Kentucky based upon student achievement relative to both the school's improvement goals and the state goals.

 

How Are the Data Used?

In January 1993, using the results from the spring 1992 administration of the KIRIS, the Kentucky Department of Education assigned to each of Kentucky's approximately 1,255 schools (1) a baseline score on a metric referred to as an accountability index, and (2) an improvement goal (previously referred to as a threshold), which is a two-year target for improvement. The accountability index in the first cycle of implementation (1991-92 through 1993-94) was a number ranging from one to approximately 133 that is based on six components: assessment scores for the five content areas of reading, writing, mathematics, science, and social studies; and a number representing a score for the combined noncognitive indicators. The accountability system is not intended to compare schools or districts to each other; instead, it provides both a school baseline score and a target for improvement for each school and district. Every two years each school will receive a new baseline accountability index and a new improvement goal based upon its students' performance on the assessment. This means that the school must continue to improve student achievement, as measured by KIRIS, every two years.

This same information applies to students with disabilities. Their results include performance results on the KIRIS Alternate Portfolio Assessment and data for the non-cognitive indicators; both are included in determining a school's accountability index. Data reported for schools and school districts may be disaggregated for students with disabilities by disability category at the elementary, middle, and high school levels.

Schools receive rewards and assistance (sanctions) based on their performance. Schools achieving above their improvement goal (threshold score) receive financial rewards. Schools failing to achieve their target are required to develop school improvement plans with assistance from the Kentucky Department of Education. KERA requires that a school scoring more than five percent below the school baseline be declared a "school in crisis." After learning that the school is in crisis, parents may transfer their children to other schools. To assist the school considered to be in crisis, the Department of Education will assign a "distinguished educator" with statutory powers to enforce sweeping changes, including staff assignments. Implementation of the "school in crisis" provision of KERA was delayed by the Legislature until at least 1996. If an entire district fails to meet academic goals or make significant improvement, the commissioner and state board can remove the local superintendent and board members from office and appoint replacements.

Results of the first accountability biennium announced in January 1995 indicated that 95% of Kentucky schools had improved over their baseline, with almost one third eligible for and receiving financial rewards.

 

To What Extent Do Students with Disabilities Participate in the Accountability System and the Various Assessments?

The Kentucky Education Reform Act of 1990 requires the inclusion of all students in the accountability process at the designated grades. For students with disabilities, each student's Admissions and Release Committee determines on an individual basis how the student will be included in the KIRIS assessment program. Students with disabilities must participate in the KIRIS assessments in one of the following ways:

Students participating in the Alternate Portfolio assessment do not participate in the other components of the assessment [i.e., Transitional (On-Demand) assessments, Performance Events, writing portfolios, and mathematics portfolios]. Schools, as well as the state as a whole, are expected to place no more than two percent of their population in the Alternate Portfolio Program. Schools that exceed this percentage are monitored by the state agency; to date, however, no school has exceeded this criterion.

 

What Accommodations Are Permitted?

Accommodations may include changes in the administration of the assessment and/or recording of student responses that are consistent with the normal instructional strategies and assistive devices and services identified on the student's IEP or 504 plan. Adaptations in the instructional process must be related to the student's disability and specially designed instruction as described on the student's IEP or 504 plan, and must be age-appropriate. It is also specified that adaptations shall not inappropriately impact the content being measured. Reading assessments may be read to a student if the intent of the assessment is to measure comprehension, but only if this is the normal mode through which the student is presented regular print materials and if this is documented on the student's IEP or 504 Plan.

 

What Are the Implementation Considerations?

The implementation of Kentucky's accountability system requires personnel, resources and costs, training, and monitoring.

Personnel. The need for additional personnel to implement policies related to the participation of students with disabilities in the regular KIRIS assessment components with adaptations and assistance is difficult to project above and beyond the normal personnel needs of the regular instructional program. In theory, there would be no additional need, because these kinds of staff are required throughout the instructional year and would therefore be available during the KIRIS assessment administration. In practice, local school districts may have to reassign staff, seek volunteers, and so on, in order to meet the simultaneous needs of students. Under normal instructional workloads, special assistance can be scheduled flexibly; during KIRIS administration, however, many students need that assistance at the same time, which can require more staff than schools typically have available. The Alternate Portfolio process does require two to three staff members at the state level to coordinate advisory committee activities and provide training.

Costs. The cost for operating the Alternate Portfolio program from the state perspective is approximately $75,000 to $90,000 per year. However, there are additional uncalculated costs associated with LEAs providing release time, etc., to allow teachers involved in developing and scoring the alternate portfolios to attend the necessary training and scoring activities.

Training related to the scoring of KIRIS Transitional (On-Demand) assessments. The answer documents of students with disabilities are scored in the same way as those of the total population. In the KIRIS system, prior to training the scorers, the test developers and scoring trainers meet to review the test items, scoring guides, and more than 200 student responses in order to select those to be used in the training pack. Scoring training involves from 210 to 350 scorers divided into smaller groups (30-50) who work with two trainers in each content area. Scorers are trained on a particular test form and then score those responses as a group to keep the scoring guides fresh in their minds. Scorers are supervised by training leaders, and two percent of papers are rescored by the supervisors to check reliability. Training leaders rescore a minimum of four papers per scorer per day to ensure continued reliability.

Monitoring of the implementation of alternate portfolios and KIRIS modifications. To date, student participation in the Alternate Portfolio assessment program has been less than 0.5% of the total student population, with very small increases since its inception. It is estimated that approximately 15% of the student population participate in the KIRIS assessments through the use of modifications each year.


Comparison of Assessment Practices in the Two States

Overview

In both Maryland and Kentucky the assessment and accountability systems came about in response to pressures for accountability. In Maryland the pressure was from the legislature, while in Kentucky it took legal action to drive reform. Maryland and Kentucky are the two states that have made the most progress in including all students, including students with disabilities, in their assessment systems. In this section we compare the assessment practices in the two states on a number of dimensions.

 

Definition of Accountability

The Maryland School Performance Program has both student accountability and program accountability components. Student accountability is high stakes for students: they must take and pass the Maryland Functional Testing Program (MFTP) in order to graduate from high school. School accountability is based on performance on the Maryland School Performance Assessment Program (MSPAP). MSPAP and MFTP are both components of Maryland's larger School Performance Program (MSPP), which also includes a number of school data elements, including attendance, norm-referenced or criterion-referenced assessments, dropout rate, and so on.

The Kentucky School and School District Accountability program is primarily an accountability model that holds schools and districts accountable for demonstrating improved delivery of instructional services directed at high academic expectations. The school is held accountable for improving its delivery of instruction until, in the ideal sense, it meets the long-range goal of producing a student body that:

While there are certain compensatory features in the Kentucky system, the above description does state the ideal condition under which a Kentucky accountability index of 100 could be achieved. No schools or districts are currently meeting this ideal goal even considering the compensatory nature of the scale. Schools and districts are required to reduce the distance between their index score and 100 by 10% on a biennial basis. Student accountability is certainly a component that Kentucky recognizes as important, but the current model leaves student accountability to the discretion of local schools and/or districts.
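To illustrate the 10% rule with hypothetical figures (not actual Kentucky data): a school with a baseline accountability index of 45 would receive a biennial improvement goal of 45 + 0.10 × (100 − 45) = 50.5. Over the biennium the school would need to reach 50.5 to meet its goal, and to exceed it to qualify for the financial rewards described below.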

 

Measurement of Progress/Performance

Progress in Maryland is measured by setting five-year performance goals and then monitoring school system progress toward meeting those long-range goals. In Kentucky progress is measured by establishing baseline performance for each school and then monitoring change in performance relative to the baseline and the long-range state established goal.

 

Consequences (Rewards and Sanctions) for Student or District Performance

In Maryland there are both student and school accountability systems. The student accountability system has one major consequence: high school graduation. The school accountability system has sanctions rather than rewards, although rewards were proposed to the 1996 legislature. The state department of education monitors school progress. Schools not meeting standards must make progress toward meeting those standards. A lack of growth triggers a reconstitution review and may lead to state takeover.

In Kentucky there are both rewards and sanctions (or assistance provided to schools or districts not meeting biennial expectations) as part of the school accountability system. Schools and/or districts exceeding their improvement goals are given financial rewards whose distribution they, as a staff, determine. Schools failing to meet their improvement goals but still improving are required to produce a specific school improvement plan addressing how greater progress will be made in the future. After a biennium, schools that are in decline or have scored below their baseline are, in addition to designing the improvement plan described above, assigned a Kentucky Distinguished Educator, who serves in an advisory role, assisting in implementing the school's improvement plan and in monitoring progress. After 1996, schools declining substantially (by five points or more on the Kentucky index) will be considered schools in crisis. The assigned Kentucky Distinguished Educator in effect takes over the school: after placing certified staff on probation, the Distinguished Educator notifies parents of students served by the school of the school's status and of their option, if they so choose, to have their child placed in a school not in crisis. The sanctions become progressively stiffer if the school or district continues to fail to meet its improvement goal. The sketch following this paragraph summarizes these categories.
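
As a rough summary of these rules, the sketch below (Python) shows how a school's biennial status might be categorized from its baseline index, its improvement goal, and its current index. The thresholds and category labels follow the description above, but the function itself is purely illustrative and is not part of the Kentucky system.

```python
def school_status(baseline, goal, current):
    """Illustrative categorization of a Kentucky school's biennial status,
    following the rewards-and-sanctions rules described in the text."""
    if current > goal:
        return "financial reward"            # exceeded the improvement goal
    if baseline - current >= 5:
        return "school in crisis"            # substantial decline of 5+ index points
    if current < baseline:
        return "improvement plan + Distinguished Educator"  # in decline
    return "improvement plan"                # improving, but short of the goal

# Hypothetical example: baseline 40, goal 46, current index 43
print(school_status(40, 46, 43))  # "improvement plan"
```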

 

Rates of Participation of Students with Disabilities

In Maryland, all but about one percent of the students participate in the state accountability system. Most students take the MFTP and the MSPAP. Some students work toward alternative objectives and are assessed using the IMAP.

In Kentucky, all students, including students with disabilities, participate in state assessments and thereby the state school accountability program. Approximately 85 percent of the students take the regular state assessment, approximately 15 percent take the state assessment with accommodations, and 0.5 percent participate in the Alternate Portfolio assessment system.

 

Accommodation Rules

Both Maryland and Kentucky permit accommodations in assessments. Maryland publishes a set of very specific rules about allowable accommodations. Kentucky permits students any accommodation in assessment that is also permitted in, and consistent with, the appropriate delivery of instructional services.

 

Is Performance Assessment Necessary to a Good Assessment System?

Performance assessment is critical to any assessment system intended to serve within a high-stakes school accountability environment, or to any assessment program that is otherwise intended to influence the quality of the instructional program. Where assessment influences instructional practices (intended or unintended), the assessment model implemented must address the content around which the curriculum is to be centered, the desired modes of instruction, and the level of performance toward which the student must strive. Where the results of an assessment process have intended (or unintended) consequences for a school as a whole, the teaching staff will inevitably spend some portion of their instructional time preparing students to perform well on the assessment. It is therefore not just a matter of including performance assessment within the assessment process; these performances must be of a kind that encourages desired instructional practices within the daily delivery of instructional services, and must be visible enough within the assessment process to draw attention to those practices.

 

The Nature of Participation and Accommodations Decisions

Decisions about which accommodations students with disabilities may use in the Kentucky assessment process are made individually, with the intent that these decisions permit adaptations consistent with the normal and appropriate delivery of instruction.

A major feature of the assessment systems in both states is that participation and accommodations decisions are individualized rather than categorical (based on disability category) in nature.


Comparison of Practices in Maryland and Kentucky to Other States

States with Standards

These two states are certainly not alone in the enterprise of establishing educational standards; almost all states are in the midst of such reform. In its 1995 report, Making Standards Matter, the American Federation of Teachers found that 27 states already had established an identifiable set of educational standards, and 23 other jurisdictions (including Washington, DC) were in the process of establishing them (Gandal, 1995). Iowa remains the only state not focused on establishing academic standards.

Where Kentucky and Maryland hold a comparative advantage over many other states is in their consideration of students with disabilities throughout the various stages of standard-setting. Taking the term all to really mean all, these two states were able to establish policies and procedures that considered and provided the means by which students with disabilities could be included, from the very earliest stages of the process. And, in developing assessments of the extent to which students are achieving standards, Kentucky and Maryland have been developing assessments for all students.

 

Participation Decisionmaking

In contrast to Kentucky and Maryland, there is great variability in the rate at which students with disabilities participate in assessments in other states. In its 1994 survey of state special education practices, NCEO found that state directors of special education could report participation rates for only 49 of the 133 tests administered that year, less than 37% of the national total. For those cases where participation rates were reported, there appeared to be wide variability in the degree to which students with disabilities were participating (Erickson, Thurlow, & Thor, 1995).

In an analysis of state policies regarding the participation of students with disabilities in assessment, Thurlow, Scott, and Ysseldyke (1995b) suggest that decisions are based most often on one or more of the following criteria:

  1. the primary educational setting of the student (e.g., special school, residential facility, ungraded program, or special classroom);
  2. the student's disability category;
  3. the student's level of access to the general curriculum. In the past, for example, NAEP participation criteria allowed for the exclusion of a student with an IEP if that student had been mainstreamed "less than 50 percent of the time in academic subjects and is judged to be incapable of meaningfully taking part in the assessment" (Mullis, 1990); or
  4. case-by-case decisions of individual administrators or IEP teams. In many cases, this decision must be documented on the student's Individualized Education Program (IEP).

These various eligibility criteria can make the comparison of participation rates between schools, districts, or states problematic, if not impossible, since the reported rates may or may not include many special education students in the population used as the reference group.

In a very real sense, Kentucky and Maryland eliminate the confusion of such policies by having fully inclusive participation policies in place. For these two states, it is not a question of who participates, but simply a question of how. That question is given to the people closest to the individual student to decide, guided by unambiguous guidelines regarding the use of accommodations or placement into the alternative testing program.

 

Participation and High Stakes

In both Kentucky and Maryland, policymakers recognized the fundamental relationship between participation rates and high-stakes testing: under a system of accountability that compares the performance of schools and districts (and often makes awards or imposes sanctions based on these results), administrators and teachers will be motivated to minimize the number of low-performing test takers in order to raise their school's or district's overall performance. Zlatos (1994) examined 14 major urban school districts and found that participation rates in testing varied from 93% of all enrolled students in Memphis to 66% in Boston. This academic "red shirting" of students with learning problems perpetuates invalid comparisons among our nation's schools and school districts.

Both states have created reporting mechanisms to audit the exclusion of students with disabilities from their large-scale assessment programs. And both have in place alternate assessments designed to integrate the performance data of students with severe disabilities with the scores of other students, thus ensuring accountability for all students being served in their schools.

 

Using Assessment Information for Improving Programs

Participation by students with disabilities in assessment programs serves many purposes, not the least of which is providing information that local or state educational agencies can use for programmatic and policy decision making. Unfortunately, only a small number of states report using the data in this manner. In its 1995 national survey of the 50 states and 10 special territories, NCEO found that nine state directors of special education did not know whether the performance scores for students with disabilities could be disaggregated from their state's assessment database, and another 20 stated that they definitely could not compile such information. Only 25 states or territories could identify the performance scores of students with disabilities, and of those, only 12 reported actually analyzing the scores (Erickson, Thurlow, Seyfarth, & Thor, 1996). Because of the inclusiveness of their systems, both Kentucky and Maryland have begun to analyze the achievement data of students with disabilities and have plans to make performance reports available to local and state audiences.

 

Accommodations in Assessment

Kentucky and Maryland join many other states in their provision of testing accommodations for students with disabilities. In fact, the number of states with formal written policy guidelines on accommodations jumped from 30 in 1991 to 43 in 1995 (Thurlow, Scott, & Ysseldyke, 1995a). Their use has not been without problems for state testing officials, however. Questions continue to surface about the effect of such accommodations on the psychometric integrity of the assessment. With little research to guide them, state assessment offices have produced guidelines that differ greatly from one another and often allow testing accommodations that other states specifically prohibit.

Kentucky and Maryland have secured federal funding from the U.S. Office of Special Education Programs (OSEP) for a series of empirical studies focused on the impact of accommodations on performance data. Only two other such grants were awarded from that governmental agency during that particular funding cycle.

An alternate testing program for students with the most severe disabilities, such as Kentucky's Alternate Portfolio or Maryland's IMAP, might be thought of as a major form of accommodation. In this regard, the two states have few equals. As currently proposed, Part B of the Individuals with Disabilities Education Act (IDEA) would stipulate the use of alternate testing for those students who cannot participate in a state's regular testing program. This impending change has motivated several states to release requests for assistance in building such alternate systems. State education officials in states such as Texas, Rhode Island, and Massachusetts are currently investigating ways to develop an alternate assessment program for students with very severe disabilities. The lessons learned through their own experiences could make Kentucky and Maryland valuable sources of information as other states undertake such efforts to build truly inclusive accountability systems.


Conclusion

Maryland and Kentucky are two states with inclusive assessment systems. The assessment systems in both states came about in response to state legislative pressure for accountability for the results of education for all students. The two states have multiple forms of assessment and explicit procedures for deciding who participates in the various assessments. In both Maryland and Kentucky, schools are held accountable for improved student performance. In Maryland, schools are expected to show progress toward state-defined long-term goals. In Kentucky, schools are expected to improve over baseline performance.

Maryland, Kentucky and most other states permit accommodations in assessments. Yet, Maryland and Kentucky are among the very few states that have an alternate assessment system, one that permits participation by students with severe disabilities. We believe the assessment and accountability systems used in Kentucky and Maryland serve as good models for other states.


References

American Federation of Teachers. (1996). Making standards matter 1996: An annual fifty-state report on efforts to raise academic standards. Washington, DC: Author.

Bond, L., & Roeber, E. (1995). The status of state student assessment programs in the United States. Washington, DC: Council of Chief State School Officers and North Central Regional Educational Laboratory.

Erickson, R. N., Thurlow, M. L., & Thor, K. (1995). State special education outcomes 1994. Minneapolis, MN: National Center on Educational Outcomes, University of Minnesota.

Erickson, R. N., Thurlow, M. L., Seyfarth, A., & Thor, K. (1996). State special education outcomes 1995. Minneapolis, MN: National Center on Educational Outcomes, University of Minnesota.

Gandal, M. (1995). Making standards matter: A fifty-state progress report on efforts to raise academic standards. Washington, DC: American Federation of Teachers, Educational Issues Department.

Kentucky Department of Education. (n.d.). Kentucky's Learning Goals and Learner Outcomes.

Kentucky Systems Change Project for Students with Severe Disabilities. (1992). KIRIS Kentucky Alternate Portfolio Project: Teacher's Guide. Frankfort, KY: Kentucky Department of Education.

Maryland School Performance Program. (1990). Learning Outcomes in Mathematics, Reading, Writing/Language Usage, Social Studies, and Science for Maryland School Performance Assessment Program. Baltimore, MD: Maryland State Department of Education.

Mullis, I. (1990). The NAEP Guide: A Description of the content and methods of the 1990-1992 assessment. Washington, DC: National Center for Education Statistics.

Thurlow, M. L., Scott, D. L., & Ysseldyke, J. E. (1995a). Compilation of states' guidelines for accommodations in assessments for students with disabilities. Minneapolis, MN: National Center on Educational Outcomes, University of Minnesota.

Thurlow, M. L., Scott, D. L., & Ysseldyke, J. E. (1995b). Compilation of states' guidelines for including students with disabilities in assessments. Minneapolis, MN: National Center on Educational Outcomes, University of Minnesota.

Zlatos, B. (1994). Don't ask, don't tell. The American School Board Journal, 11, 24-28.


This report was prepared by J. Ysseldyke, M. Thurlow, R. Erickson, R. Gabrys, J. Haigh, S. Trimble, and B. Gong.

The Maryland-Kentucky-NCEO Assessment Project is supported, in part, by a grant to the Maryland State Department of Education from the U.S. Department of Education, Office of Special Education Programs (Grant #H023F50008). Opinions expressed herein do not necessarily reflect those of the U.S. Department of Education or Offices within it.