Ethics Assessment Rubrics


The introduction of new engineering accreditation criteria that include "an understanding of professional and ethical responsibility" has firmly established the teaching of ethics as an important component of undergraduate education (Engineering Accreditation Commission 2003, Herkert 2002). Yet in establishing this outcome criterion, the commission also required its assessment. This is a particularly challenging proposition because ethics education is concerned not only with learning content but also, and equally important, with developing problem-solving skills. Further, such problems, or dilemmas, are rarely clear-cut and consequently do not have a definitive resolution, making traditional forms of assessment of limited value. One promising approach to this challenge is the development and use of scoring rubrics, a technique that has been applied to a broad range of subjects whenever a judgment of quality is required (Brookhart 1999). As opposed to a checklist, a rubric is a descriptive scoring scheme that guides the analysis of a student's work on performance assessments. These formally defined guidelines consist of pre-established criteria in narrative format, typically arranged in ordered categories specifying the qualities or processes that must be exhibited for a particular evaluative rating (Mertler 2001, Moskal 2000). A valid rubric allows educators to assess their students' learning to date and to identify areas of weakness for further instruction.

There are two types of scoring rubrics: holistic and analytic. A holistic rubric scores the process or product as a whole, without separately judging each component (Mertler 2001). In contrast, an analytic rubric allows for the separate evaluation of multiple factors, with each criterion scored on a different descriptive scale (Brookhart 1999). When it is not possible to separate the evaluation into independent factors (that is, when overlap between criteria exists), a holistic rubric with the criteria considered on a single descriptive scale may be preferable (Moskal 2000).
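
To make the distinction concrete, the following sketch models the two rubric types in Python. It is purely illustrative: the criterion names, the 1–5 scales, and the simple summation are assumptions made for the example, not features of any published rubric.

    from dataclasses import dataclass

    @dataclass
    class AnalyticScore:
        """An analytic rubric judges each criterion on its own scale."""
        relevance: int       # each on an assumed 1 (low) to 5 (high) scale
        complexity: int
        argumentation: int

        def total(self) -> int:
            # The separate judgments remain visible and can be combined.
            return self.relevance + self.complexity + self.argumentation

    @dataclass
    class HolisticScore:
        """A holistic rubric assigns one overall level to the whole response."""
        level: int           # assumed 1 (low) to 5 (high)

    analytic = AnalyticScore(relevance=4, complexity=3, argumentation=5)
    holistic = HolisticScore(level=4)
    print(analytic.total(), holistic.level)  # 12 4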

Further, rubrics are intended to provide a general assessment rather than a fine-grained appraisal (such as a score on a 1–100 scale). For example, a rubric might include levels from one ("shows little or no understanding of key concept") to five ("shows full understanding of key concept; completes task with no errors"). Among the advantages of using rubrics are: (1) assessment can be more objective and consistent; (2) the amount of time faculty spend evaluating student work is reduced; (3) valuable feedback is provided to both students and faculty; and (4) they are relatively easy to use and explain (Georgia Educational Technology Training Center 2004).

Generally, rubrics are best developed starting from a desired exemplar learning outcome and working backward to less ideal outcomes, preferably using actual student work to define the rubric's various levels. The scoring system should be objective, consistent, and relatively simple, with a few criteria sets and performance levels; three to five evaluative criteria seem to be appropriate (Popham 1997).
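
As a minimal sketch of this working-backward advice (the intermediate level descriptions below are invented for illustration; only the top and bottom descriptors echo the example given earlier):

    # Define the exemplar (level 5) first, then describe successively
    # weaker performances down to level 1.
    key_concept_rubric = {
        5: "Shows full understanding of key concept; completes task with no errors.",
        4: "Shows solid understanding; minor errors do not affect the result.",
        3: "Shows partial understanding; errors affect parts of the result.",
        2: "Shows fragmentary understanding; major errors throughout.",
        1: "Shows little or no understanding of key concept.",
    }

    # Keep the scale simple, per the guidance above.
    assert 3 <= len(key_concept_rubric) <= 5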

Extensively used in K–12 assessment, in higher education areas such as composition and art, and, increasingly, in engineering education (Moskal, Knecht, and Pavelich 2001), rubrics have yet to be widely adopted for assessing ethics tasks. An exception is Holt et al. (1998), who developed an analytic rubric for assessing ethics in a business school setting, identifying five categories:

  1. Relevance: Analysis establishes and maintains focus on ethical considerations without digressing or confusing them with external constraints;
  2. Complexity: Takes into account different possible approaches in arriving at a decision or judgment;
  3. Fairness: Considers the most plausible arguments for different approaches;
  4. Argumentation: Presents a well-reasoned argument for a clearly identified conclusion, including constructive arguments in support of the decision and critical evaluation of alternatives;
  5. Depth: Shows an appreciation of the grounds or key moral principles that bear on the case.
These categories were rated from 1 for "non-proficient" to 6 for "excellent" according to each level's criteria.

TABLE 1. Analysis Component of Scoring Rubric for Assessing Students' Abilities to Resolve Ethical Dilemmas

Shown is the Analysis component (one of five components) of the rubric. The rubric gives the rater criteria for classifying the student's response into one of five levels, with five being the highest; the rater chooses the criteria set that most closely matches the student's response.

Level 1: No analysis provided; defaults to a superior or authority without further elaboration; takes a definitive and unambiguous position without justification; any analysis appears to have been done without reference (explicit or implicit) to guidelines, rules, or authority.

Level 2: Authoritative rule driven without justification (position may be less definitive, e.g., "should do" vs. "must do"); minimal effort at analysis and justification; relevant rules ignored; may miss or misinterpret key point or position; if ethical theory is cited, it is used incorrectly.

Level 3: Applies rules or standards with justification, noting possible consequences or conflicts; correctly recognizes applicability of ethical concept(s); recognizes that contexts of concepts must be specified; coherent approach.

Level 4: Applies rule or standard considering potential consequences or conflicts; uses an established ethical construct appropriately; considers aspects of competence and responsibility of key actors; may cite analogous cases; incomplete specification of contexts of concepts.

Level 5: Correctly applies ethical constructs; may offer more than one alternative resolution; cites analogous cases with appropriate rationale; thorough evaluation of competence and responsibility of key actors; considers elements of risk for each alternative; explores context of concepts.

SOURCE: Courtesy of Larry J. Shuman, Barbara M. Olds, and Mary Besterfield-Sacre.

Although not developed specifically for assessing ethical problem solving, the widely used Holistic Critical Thinking Scoring Rubric (HCTSR), with its four criteria, could be adapted for a holistic assessment of students' ethical problem-solving ability (Facione and Facione 1994). One recent effort along these lines has resulted in the development and validation of a rubric designed to measure engineering students' ability to respond to ethical dilemmas using case scenarios, for example, a case based on the first use of an artificial heart (Sindelar, Shuman, Besterfield-Sacre, et al. 2003). To a certain extent, the rubric follows the case analysis process of Charles E. Harris, Michael S. Pritchard, and Michael J. Rabins (2000). It consists of five components, each with five levels (see Table 1; a scoring sketch follows the list):

  1. Recognition of Dilemma (relevance): Levels range from not seeing a problem at all to clearly identifying and framing the key dilemmas.
  2. Information (argumentation): At the lowest level, pertinent facts are ignored and/or misinformation is used. At the high end, assumptions are made and justified, and information from the student's own experience may be used.
  3. Analysis (complexity and depth): At the lowest level no analysis is performed. Ideally, a thorough analysis includes citation of analogous cases and consideration of the elements of risk for each alternative.
  4. Perspective (fairness): At the lowest level perspective is absent and the focus wanders. The ideal is a global view of the situation that considers multiple perspectives.
  5. Resolution (argumentation): At the base level only rules are cited, possibly out of context. The ideal considers potential risk and/or public safety and proposes a creative middle ground among competing alternatives.
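
A rough sketch of how applying such a rubric might be recorded: the rater assigns each component the level (1 through 5) whose criteria set best matches the response, yielding a profile rather than a single grade. The component names follow the rubric described above; the validation logic and the example ratings are assumptions made for illustration.

    COMPONENTS = ("recognition", "information", "analysis", "perspective", "resolution")

    def score_response(ratings: dict) -> dict:
        """Check one rater's level-per-component judgments and return the profile."""
        profile = {}
        for component in COMPONENTS:
            level = ratings[component]
            if not 1 <= level <= 5:
                raise ValueError(f"{component}: level must be 1-5, got {level}")
            profile[component] = level
        return profile

    # Example: a response that frames the dilemma well but analyzes it thinly.
    print(score_response({
        "recognition": 4, "information": 3, "analysis": 2,
        "perspective": 3, "resolution": 3,
    }))

Keeping the five component scores separate, rather than collapsing them into one number, preserves the diagnostic feedback to students and faculty noted earlier: the profile shows where reasoning is weak.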

Using such a rubric holds out the promise of assessing the learning of ethical reasoning skills more objectively than has previously been possible. Indeed, given new developments in technology and learning, such rubrics might even be programmed into computer-based learning modules comparable to those developed for the self-guided teaching and learning of technical subjects.

LARRY J. SHUMAN
BARBARA M. OLDS
MARY BESTERFIELD-SACRE

BIBLIOGRAPHY

Brookhart, Susan M. (1999). The Art and Science of Classroom Assessment: The Missing Part of Pedagogy. Washington, DC: George Washington University, Graduate School of Education and Human Development.

Harris, Charles E., Jr.; Michael S. Pritchard; and Michael J. Rabins. (2000). Engineering Ethics: Concepts and Cases, 2nd ed. Belmont, CA: Wadsworth.

Herkert, Joseph R. (2002). "Continuing and Emerging Issues in Engineering Ethics Education." The Bridge 32(2): 8–13.

Holt, Dennis; Kenneth Heischmidt; H. Hammer Hill, et al. (1998). "When Philosophy and Business Professors Talk: Assessment of Ethical Reasoning in a Cross-Disciplinary Business Ethics Course." Teaching Business Ethics 1(3): 253–268.

Moskal, Barbara M.; Robert D. Knecht; and Michael J. Pavelich. (2001). "The Design Report Rubric: Assessing the Impact of Program Design on the Learning Process." Journal for the Art of Teaching: Assessment of Learning 8(1): 18–33.

Popham, W. James. (1997). "What's Wrong—and What's Right—with Rubrics." Educational Leadership 55(2): 72–75.

INTERNET RESOURCES

Engineering Accreditation Commission. (2003). Criteria for Accrediting Engineering. Baltimore, MD: Accreditation Board for Engineering and Technology (ABET). Available from http://www.abet.org/criteria.html.

Facione, Peter A., and Noreen C. Facione. (1994). "Holistic Critical Thinking Scoring Rubric." Available from http://www.insightassessment.com/HCTSR.html.

Georgia Educational Technology Training Center. (2004). "Assessment Rubrics." Available from http://edtech.kennesaw.edu.

Mertler, Craig A. (2001). "Designing Scoring Rubrics for Your Classroom." Practical Assessment, Research & Evaluation 7(25). Available from http://PAREonline.net.

Moskal, Barbara M. (2000). "Scoring Rubrics: What, When, and How?" Practical Assessment, Research & Evaluation 7(3). Available from http://PAREonline.net.

Sindelar, Mark; Larry J. Shuman; Mary Besterfield-Sacre; et al. (2003). "Assessing Engineering Students' Abilities to Resolve Ethical Dilemmas." Proceedings, Frontiers in Education Conference. Available from http://fie.engrng.pitt.edu/fie2003/index.htm.
