Ray Boss's Portfolio Page

Topic/purpose

Good assessment practices are essential to tracking student progress, evaluating instruction, and implementing effective changes in instructional design.

This course will review assessment reliability and validity, common item-writing flaws, and ways in which instructional designers can use objective item formats to assess higher levels of cognition. Focus will be placed on multiple-choice items.

These types of items seem simple at first, but they are often misused. After completing this course, readers should be confident in their ability to:

  • Appreciate the importance of item validity and reliability;
  • Identify and understand the structure of multiple choice items;
  • Analyze these items and look for potential flaws;
  • Modify them to eliminate those flaws; and
  • Create quality multiple choice items to assess a variety of learning outcomes.

Assessing Higher Order Thinking

Multiple-choice items, although commonly used to assess lower-order thinking such as recalling facts and definitions or understanding basic cause-and-effect relationships, can also be tailored to assess higher taxonomic levels such as analysis and evaluation. This can be done through careful item construction, by increasing the complexity of items, or by providing supplemental materials that must be used in order to answer correctly.

For example, when provided with a graphic or data table, students can be required to analyze the supplement in order to answer specific questions. Likewise, by asking questions such as "In [given situation], which of the following is most significant with regard to [a certain outcome]?", students can be required to evaluate the severity of the possible implications of each of the provided alternatives.

Needs Assessment

Instructional Problem

Instructional designers, teachers, and other educators generally recognize the importance of creating quality assessments. Ideally, assessments should measure student performance in valid and reliable ways and at a variety of cognitive levels. Unfortunately, many assessments that do this consume large amounts of time and effort to develop, administer, and grade. They often involve extended-response questions, project- or performance-based tasks, and complex rubrics and grading scales.

In many environments, time constraints significantly deter instructional designers and educators from using these types of assessment. Techniques such as the "ongoing assessment" described by Perkins and Blythe (1994) also highlight the need for quality assessments that can be administered, graded, and interpreted quickly and accurately in order to increase the level of feedback given to students.

Multiple-choice items are a common, familiar framework for assessment in many educational settings. They also have a wide range of capabilities in terms of assessing different taxonomic levels of cognition. In this way, they are one practical solution to the problem of quickly and accurately assessing high-level objectives. However, creating quality multiple-choice items that do this can be problematic for many teachers. Additionally, in my experience, current and prospective teachers have often expressed a desire for a deeper knowledge of best assessment practices.

In his paper examining the effects of multiple-choice item flaws, Steven Downing (2005) states that item flaws introduce error into assessments, "thereby reducing the validity evidence for examinations and penalizing some examinees." Educators need to be able to create and understand valid, reliable, objective, and overall well-written multiple-choice items.
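
To make Downing's point concrete, classical test theory offers a compact way to see how flawed items introduce error. The sketch below is standard psychometric background rather than material taken from this needs assessment; the notation is conventional, not specific to the course.

  X = T + E                                  (an examinee's observed score is modeled as a true score plus random error)
  Var(X) = Var(T) + Var(E)                   (observed-score variance splits into true-score variance and error variance)
  reliability = Var(T) / [Var(T) + Var(E)]   (reliability is the share of score variance attributable to true scores)

Because flawed items add to the error term Var(E) without contributing true-score information, they lower reliability and, with it, the validity evidence an examination can provide.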


What Is To Be Learned

Participants will learn how to construct and evaluate valid, reliable, objective multiple-choice items. They will review common item-writing flaws and how to avoid them. They will also learn how to develop multiple-choice items that assess high taxonomic levels of cognition.

Through a supplemental survey of individuals in the potential learning pool, the following conclusions were drawn:

  • Multiple choice items are commonly used in the classroom among many teachers;
  • Teachers tend to become less comfortable with writing multiple choice items as the taxonomic level rises;
  • Focus should be placed on item writing flaws first, then tailoring items to high taxonomic level objectives, then assessment validity and reliability; and
  • Significant focus should be placed on creating quality multiple choice items, not simply evaluating existing ones.


The Learners

Participants will include current and prospective teachers and instructional designers. Learners will have various levels of experience with writing assessment items. Familiarity with the multiple-choice question format is assumed. Learners are likely to be in a position where they frequently employ formative assessment strategies or administer a high volume of summative assessments. Many will be participating in the course in order to improve their professional practices.


Instructional Context

This is a fully online mini-course and requires a computer and internet access to view and use the materials. Learners will interact with readings, quizzes, and assignments that include creation and evaluation of various multiple-choice items. They may also use the Internet to search for content-specific multiple-choice items.


Problem and Solution Exploration

Due to the variety in learners' prior knowledge regarding this type of assessment, participants will be able to explore the relatively self-contained parts of this mini-course and pursue only those that are appropriate for them. Although the mini-course is structured as a linear progression, the earlier segments will not necessarily be prerequisites for the later ones.

Participants will have the opportunity to complete tasks that require certain general understandings of assessment theory, but they will also be able to tailor the course to fit their own instructional needs through free-response activities and opportunities to reflect on and apply their learning in specific, self-determined contexts.


Course Goals

  • Solidify future instructors' conceptions of assessment validity and reliability;
  • Clearly define objective assessment items and convince participants of their value;
  • Identify and increase awareness of common item-writing flaws with regard to multiple choice items; and
  • Demonstrate ways in which multiple choice items can be designed to measure various taxonomic levels (with an emphasis on higher levels).

These goals all contribute to the overarching aim of empowering educators to use and understand multiple-choice assessment items in their daily practice.


Task Analysis

Prerequisites

Participants should have prerequisite knowledge including:

  • Basic experience writing and answering multiple-choice items,
  • Reading comprehension and critical thinking skills, and
  • Basic knowledge of Bloom's Taxonomy or a similar framework for cognitive levels of thinking.

Unit Objectives

Unit 1 - Discussion of Validity and Reliability
  • Recognize the elements that make up a standard multiple choice item,
  • Understand the importance of using valid and reliable assessment items, and
  • Determine the degree of validity or reliability for a test item, given a situation.
Unit 2 - Multiple Choice Item Anatomy and Common Item Flaws
  • Identify common item writing flaws and their causes, and
  • Correct and rewrite flawed multiple choice items.
Unit 3 - Tailoring Multiple Choice Items To High Level Objectives
  • Review behaviors associated with higher order thinking, and
  • Develop item writing strategies that require these behaviors and therefore assess higher order thinking.

Performance Objectives

1. Participants will be able to accurately identify the various parts of a multiple choice question.

2. Participants will be able to accurately define "item validity".

3. Participants will be able to accurately define "item reliability".

4. Given a situation involving a multiple choice item, participants will be able to approximate the item's degree of validity with 80% accuracy.

5. Given a flawed multiple choice item, participants will be able to identify the type of error in the item with 80% accuracy.

6. Given a flawed multiple choice item, participants will be able to rewrite a correct version of the item with no flaws.

7. For each taxonomic level, participants will be able to identify at least two behaviors that demonstrate thinking at that level.

8. Given a multiple choice item, participants will be able to determine what taxonomic level it assesses with 80% accuracy.

9. Given a general topic, participants will be able to create multiple choice items that accurately assess knowledge of that topic at high taxonomic levels (analysis, evaluation, or synthesis).

Curriculum Map

[Curriculum map image: CMapRev2.png]

References and Resources

  • Downing, S. M. (2005). The effects of violating standard item writing principles on tests and students: the consequences of using flawed test items on achievement examinations in medical education. Advances in Health Sciences Education, 10(2), 133-143.
  • Linn, R., & Miller, M. (2005). Measurement and assessment in teaching (9th ed.). Upper Saddle River, N.J.: Prentice Hall.
  • Perkins, D., & Blythe, T. (1994). Putting understanding up front. Educational Leadership, 51(5), 4-7.
  • Zimmaro, D. M. (2004). Writing good multiple-choice exams. Retrieved June 16, 2006.