
ABC on Scoring Rubrics Development for Large Scale Performance Assessment in Mathematics and Science

author: Westat
description: As part of its technical assistance effort, Westat is developing an Occasional Papers series addressing issues of concern in doing outcome evaluation. The first of these papers, on the development of scoring rubrics, has now been completed and is available for use and comment. Suggestions for additional papers are welcome.

Remember, Westat staff and their consultants are available to assist you in developing or reviewing your outcome evaluation plans. NSF is providing the resources for this technical assistance. Please don't wait until the last minute to ask for help.

To suggest themes for occasional papers or request technical assistance, please contact Joy Frechtling. She can be reached at frechtj1@westat.com or (301) 517-4006.

published in: WESTAT
published: 05/01/2002
posted to site: 05/31/2001

5.1 Off-line Sources

Airasian, P. (1997). Classroom Assessment. 3rd ed. New York: McGraw-Hill. (Note: Chapter 8 of Airasian’s book, entitled "Performance Assessment," offers narrative text and samples pertaining to the development of scoring rubrics.)

American Educational Research Association, American Psychological Association, and National Council on Measurement in Education. (1999). Standards for Educational and Psychological Testing. Washington, DC: Author.

Arter, J. (1990). Performance Rubric Evaluation Form (Metarubric). Portland, OR: Northwest Regional Educational Laboratory.

Brewer, R. (1996). Exemplars: A Teacher’s Solution. Underhill, VT: Exemplars.

Culham, R., and Spandel, V. (1993). Problems and Pitfalls Encountered by Raters. Developed at the Northwest Regional Educational Laboratory for the Oregon Department of Education.

Danielson, C. (1997). A Collection of Performance Tasks and Rubrics: Middle School Mathematics. Larchmont, NY: Eye on Education.

Danielson, C. (1997). A Collection of Performance Tasks and Rubrics: Upper Elementary School Mathematics. Larchmont, NY: Eye on Education.

Danielson, C., and Hansen, P. (1999). A Collection of Performance Tasks and Rubrics: Primary School Mathematics. Larchmont, NY: Eye on Education.

Danielson, C., and Marquez, E. (1998). A Collection of Performance Tasks and Rubrics: High School Mathematics. Larchmont, NY: Eye on Education.

Herman, J., Aschbacher, P., and Winters, L. (1992). A Practical Guide to Alternative Assessment. Alexandria, VA: Association for Supervision and Curriculum Development.

Johnson, B. (1996). The Performance Assessment Handbook: Designs from the Field and Guidelines for the Territory Ahead. Princeton, NJ: Eye on Education.

(Note: Each of the two volumes of Johnson’s work cited above contains a chapter entitled "Standards, Criteria, and Rubrics: Including Teachers and Students in the Search for Quality," replete with detailed samples of rubrics for a variety of subjects.)

Lazear, D. (1998). The Rubrics Way: Using MI to Assess Understanding. Tucson, AZ: Zephyr Press.

Marcus, J. (1995). Data on the Impact of Alternative Assessment on Students. Unpublished manuscript. The Education Cooperative, Wellesley, MA.

Marzano, R., Pickering, D., and McTighe, J. (1993). Assessing Student Outcomes: Performance Assessment Using the Dimensions of Learning Model. Alexandria, VA: ASCD.

Perkins, D., Goodrich, H., Tishman, S., and Mirman Owen, J. (1994). Thinking Connections: Learning to Think and Thinking to Learn. Reading, MA: Addison-Wesley.

Taggart, G.L., Phifer, S.J., Nixon, J., and Wood, M. (Eds.). (1998). Rubrics: A Handbook for Construction and Use. Lancaster, PA: Technomic Publishing.

 

5.2 On-line Sources

Chicago Public School District

http://intranet.cps.k12.il.us/Assessments/Ideas_and_Rubrics/Intro_Scoring/intro_scoring.html

(Note: This site contains many examples of general scoring rubrics.)

Johnson County, Wyoming, School District #1 (Mathematics Assessment Rubrics)

http://www.jcsd1.k12.wy.us/Standards/Assess/MAR.htm

National Center for Research on Evaluation, Standards, and Student Testing (CRESST)

Scoring Rubrics:

http://cresst96.cse.ucla.edu/CRESST/pages/Rubrics.htm

New Jersey Statewide Assessment Sample Forms

http://www.state.nj.us/njded/stass/assessment/sampleforms.htm

Project-Based Learning

Rubrics, Rubrics for Web Lessons, and S.C.O.R.E. Rubrics

Rubrics. http://wwwodyssey.on.ca/%7Eelaine/coxon/rubrics.htm

Rubrics for Web Lessons. http://edweb.sdsu.edu/triton/july/rubrics/Rubrics_for_Web_Lessons.html

S.C.O.R.E. Rubrics. http://www.sdcoe.k12.ca.us/score/actbank/trubrics.htm

RMC Research Corporation

http://www.rmcdenver.com/useguide/assessme/identify.htm

Kathy Schrock’s Guide for Educators: Assessment Rubrics

http://school.discovery.com/schrockguide/assess.html

Spokane Public Schools, Washington State

http://www.sd81.k12.wa.us/tcenter/

SRI International

Tasks [from the] Performance Assessment Links in Science [PALS]

http://www.ctl.sri.com/pals/tasks.html

(The tasks are arranged by grade range and by subject area. Within each task, select the link to Rubric.)

Toronto District School Board (Etobicoke) Research Department

Language and Mathematics Rubrics:

http://www.ebe.on.ca/DEPART/RESEAR/RUBRIC.HTM

Web page of Dr. Patrick J. Greene, professor of education at Florida Gulf Coast University

The Use of a Rubric for Assessment Purposes:

http://ruby.fgcu.edu/pgreene/rubirc_rules.htm

Mr. David Warlick, Instructional Technology Consultant

Rubric Construction Set:

http://landmark-project.com/classweb/rubrics

The Rubricator (rubric-building software)

http://www.rubrics.com/

http://www.teentalk.com/geo/samples/sample.html

6. Summary

This paper introduces the basic concepts of scoring rubrics and general procedures for developing them. The focus is on holistic scoring of performance assessment items or tasks in standardized testing situations in mathematics and science.

A scoring rubric is a set of established criteria, including rules, principles, and illustrations, used in scoring responses to individual items and clusters of items in a performance assessment. It has three main functions: establishing objective criteria for judgment, communicating clear expectations to teachers and students, and maintaining focus on the content and standards of student work.

Scoring rubrics can be classified in two major ways. By depth of information provided, they divide into analytic and holistic scoring procedures; by breadth of application, into general and item-specific scoring.

Holistic scoring rates a student's work as a whole and produces a single score. The method is preferred when a quick and consistent judgment is needed and when the skills being assessed are complex and interrelated. Standardized assessments usually use holistic scoring. Analytic scoring judges each dimension of a performance item or task independently and produces both dimension scores and a total score. It provides more detailed information but takes more time, and it is mostly used for diagnostic purposes.
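To make the contrast concrete, here is a minimal sketch of how the two procedures produce scores. The dimension names, weights, and score values are invented for illustration; they are assumptions, not values from the paper.

```python
# Illustrative sketch: holistic vs. analytic scoring of one student response.
# Dimensions, weights, and scores below are hypothetical examples.

def holistic_score(overall_quality: int) -> int:
    """Holistic scoring: one judgment of the work as a whole (e.g., on a 0-4 scale)."""
    return overall_quality

def analytic_score(dimension_scores: dict[str, int],
                   weights: dict[str, float]) -> tuple[dict[str, int], float]:
    """Analytic scoring: judge each dimension independently,
    then report the dimension scores plus a weighted total."""
    total = sum(dimension_scores[d] * weights[d] for d in dimension_scores)
    return dimension_scores, total

# A rater's judgments on a hypothetical math task, each on a 0-4 scale.
scores = {"strategy": 3, "computation": 4, "communication": 2}
weights = {"strategy": 0.5, "computation": 0.3, "communication": 0.2}

print(holistic_score(3))                # single overall score: 3
print(analytic_score(scores, weights))  # per-dimension scores plus total 3.1
```

The sketch shows why analytic scoring takes longer but yields diagnostic detail: it preserves each dimension score alongside the total, whereas holistic scoring records only the single overall judgment.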

A general scoring rubric applies to a class of similar performance tasks, such as presentations, while an item-specific rubric is designed for a particular item. Most standardized assessments in mathematics and science design their performance assessment items with item-specific rubrics.

A scoring rubric includes four important elements: the dimensions to be judged, a definition and example of each dimension, a scale, and standards of excellence.

A scoring rubric scale can be numerical, qualitative, or a combination of the two. A numerical scale is often used in mathematics and science performance items. The maximum possible points on a scale depend on factors such as the number of dimensions measured, cognitive stages, the weight of each dimension, and the developer's preference. Usually the scale runs between 2 and 6 points. The bottom line is to avoid so many points that scorers find it hard to reach agreement, and so few that the scale cannot distinguish between students.
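One way to see how the four elements and a numerical scale fit together is to model a rubric as a small data structure. The sketch below is illustrative only; the dimension and its score descriptors are invented, not drawn from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Dimension:
    name: str          # the trait being judged, e.g., "problem-solving strategy"
    definition: str    # what the dimension means, ideally with an example
    # Standards of excellence: what work at each scale point looks like.
    standards: dict[int, str] = field(default_factory=dict)

# A hypothetical one-dimension rubric on a 4-point numerical scale (0-3).
strategy = Dimension(
    name="problem-solving strategy",
    definition="Chooses and carries out an approach that fits the problem.",
    standards={
        3: "Efficient, complete strategy; fully correct solution.",
        2: "Workable strategy with minor gaps or errors.",
        1: "Partial strategy; major gaps in reasoning.",
        0: "No recognizable strategy or no response.",
    },
)

scale_points = sorted(strategy.standards)  # the scale: [0, 1, 2, 3]
print(f"{strategy.name}: {len(scale_points)}-point scale {scale_points}")
```

Note how each scale point is paired with a standard written in plain, positive language; a scale point without a clear standard invites disagreement among scorers.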

A scoring rubric developer has three options: adopt, adapt, or start from scratch. If you can find an existing rubric that exactly matches your item, you may adopt it; otherwise, you may modify one to fit your needs. The most difficult option is to build a rubric from scratch. For many standardized assessments in mathematics and science, however, this is the only choice, because each item is new and measures a specific skill.

Scoring rubric development is an integrated process of writing, revising, piloting, and trying again until you are satisfied. It also requires teamwork. Generally, there are nine steps to developing a scoring rubric:

    1. Decide the dimension(s) to be assessed in an item.
    2. Look at actual examples of student work.
    3. Refine and consolidate the list of dimensions.
    4. Write a definition of each dimension.
    5. Develop a scale.
    6. Evaluate the scoring rubric against a set of review questions.
    7. Peer review and pilot test on samples of actual student work (see the agreement-check sketch after this list).
    8. Revise and try again.
    9. Share the sample rubric with teachers, students, and parents.
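Step 7 typically includes checking whether independent scorers agree when applying the draft rubric to the same pilot papers. A minimal sketch of an exact-agreement check is below; the scores are invented pilot data, not results from the paper.

```python
# Sketch of a pilot-test agreement check between two raters.
# The scores below are hypothetical pilot data for ten student papers.

rater_a = [3, 2, 4, 1, 3, 2, 4, 0, 3, 2]
rater_b = [3, 2, 3, 1, 3, 2, 4, 1, 3, 2]

exact_matches = sum(a == b for a, b in zip(rater_a, rater_b))
agreement = exact_matches / len(rater_a)

print(f"Exact agreement: {agreement:.0%}")  # 80% in this example
# Low agreement signals that the scale points or standards need
# sharpening before the next round (step 8: revise and try again).
```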

 

The most common challenge in developing a rubric is writing it in clear and direct language. It is also useful to phrase the criteria positively and avoid unnecessary negative wording. Additionally, articulating the grading system in a way that is easy to understand and clearly distinguishes score levels will benefit both teachers and students.

 
