Outcomes Assessment

      Outcomes assessment matters for a variety of reasons germane to both departmental and university interests. Obvious examples include student recruitment and retention, building the major, university and department reputation, delivery quality and effectiveness, morale, and so forth. Assessment begins by identifying the critical short-term and long-term objectives at every level (i.e., class, course, minor, major, program, department, school, administration, college or university). Assessment never ends; its rolling goal is to determine, as accurately as possible, whether and how well those objectives have been met.

      An important and often stated goal of outcomes assessment is to acquire multiple sources of feedback that will allow each unit of the institution to improve its programs and services. It is not intended to serve as an evaluation of individual students, faculty, staff, or administrators. Nonetheless, there is the potential for such evaluations to occur. Given the scope and clearly evaluative nature of typical outcomes assessment plans, it is not surprising that the planning and pre-implementation phase can be met with a great deal of resentment, hostility, and outright foot-dragging apathy.

      There are two major sources of this reaction that must be dealt with in order to engage the full support of all members of the academic community. First, it is understandable that some individuals will feel threatened by the unspecified consequences of assessments that have the potential to implicate their job performance. A typical view is that implementation of an outcomes assessment plan means the administration is looking for a way to assign blame. "If my students do not like me or my class, then I am going to lose my job." The second source of resistance comes from the occasionally accurate view that an individual is already doing a great job and therefore should not have to submit to a battery of assessments from a committee of comparatively inexperienced individuals.

      How these concerns are addressed depends on the level of support the institution is willing to provide. Important questions to be addressed include: What are the consequences of relatively poor assessment results? Can and should assessment results be compared across departments, programs, schools, etc.? Will there be opportunities and programs designed to deliver remediation at each level (instructor, department, etc.) of assessment when needed? How can the successes of some classes, programs, etc. be communicated to others?

      There is no "perfect assessment plan" in existence; if there were, we would surely have heard of it by now. Assessments will necessarily differ across departments and schools. Although there are no standardized assessment plans, there is a universal approach: rely on multiple measures rather than a single one, and tie every measure back to the stated objectives of the course, department, administration, etc. To be successful, then, it seems reasonable that an assessment plan have certain minimum characteristics.

      Successful assessments:

  1. Derive from unambiguous, goal-appropriate objectives.
  2. Have the financial and administrative support of the institution.
  3. Rely on multiple measures rather than emphasizing any single outcome measure.
  4. Provide feedback to all members of the institution (including students).
  5. Are cost-effective (a great deal of departmental resources can be committed to assessment, and some of that money might be better spent elsewhere).
  6. Do not ignore or interfere with other important campus goals and issues such as diversity and access.
  7. Are tied to improvement. That is, assessment data are not merely collected and ignored.
  8. Are themselves assessed.


Course Outcomes Assessment: An Issue of Quality Assurance

      Just because teachers feel that they are actually teaching (providing the goods as promised) does not mean that they are.

      Humans are prone to errors of thinking and judgment. For example, "confirmation bias" occurs when we hold a belief and actively look for evidence to confirm or support it. Evidence inconsistent with the belief is rarely sought and, when found, is ignored, reinterpreted to support the belief, or explained away.

      Grades alone are typically insufficient as evidence of teaching success because they are tied to, and therefore influenced by, variables that are virtually impossible to control across courses and teachers: content delivered, testing methods, test quality, and so on.
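      To make the confound concrete, here is a minimal sketch (Python, with entirely invented numbers) in which two cohorts of identical students take exams that differ only in difficulty. The ability scores, difficulty penalty, and noise term are hypothetical assumptions for illustration, not a model of any real course.

        import random

        random.seed(42)

        # Invented numbers only: one cohort of student "abilities" on a 0-100 scale.
        abilities = [random.gauss(75, 10) for _ in range(30)]

        def observed_grade(ability, difficulty):
            # Observed score = ability minus a difficulty penalty, plus exam
            # noise, clamped to the 0-100 grading scale.
            return max(0.0, min(100.0, ability - difficulty + random.gauss(0, 5)))

        easier_exam = [observed_grade(a, difficulty=0) for a in abilities]
        harder_exam = [observed_grade(a, difficulty=15) for a in abilities]

        print(f"mean grade, easier exam: {sum(easier_exam) / len(easier_exam):.1f}")
        print(f"mean grade, harder exam: {sum(harder_exam) / len(harder_exam):.1f}")

        # Identical students and identical "teaching," yet the mean grades
        # differ by roughly the difficulty gap -- raw grades cannot separate
        # teaching quality from test difficulty.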

      Student evaluations are subject to a variety of influences: teaching quality, certainly, but also students' moods during the evaluation, interpersonal factors between student and teacher, idiosyncratic definitions of "teaching quality" across students, differences in what is perceived to be of value in the classroom, and so on. In addition, students' responses to course assessment questionnaires are equally weighted. This means that the responses of a student who missed many classes are treated as being just as accurate as the responses of a student with perfect attendance (and thus a better sample of the teaching).
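      A minimal sketch (Python, with invented ratings and attendance figures) makes the weighting point concrete: the same five responses yield quite different course averages depending on whether they are counted equally or weighted by attendance. The attendance-weighting rule is a hypothetical alternative for illustration, not a proposed instrument.

        # Invented ratings (1-5 scale) and attendance fractions, one per student.
        ratings    = [5, 2, 4, 1, 4]
        attendance = [0.95, 0.20, 0.90, 0.10, 0.85]

        # Standard practice: every response counts equally.
        unweighted_mean = sum(ratings) / len(ratings)

        # Hypothetical alternative: weight each response by attendance, on the
        # theory that frequent attendees sampled more of the actual teaching.
        weighted_mean = (
            sum(r * a for r, a in zip(ratings, attendance)) / sum(attendance)
        )

        print(f"unweighted mean rating:          {unweighted_mean:.2f}")  # 3.20
        print(f"attendance-weighted mean rating: {weighted_mean:.2f}")    # ~4.08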

      Clearly, if a standardized assessment tool were developed, it would need to be multifaceted to accommodate varied topics and delivery methods. It would also have to be objective (otherwise we might as well stick with course grades and student course evaluations).

      Naturally, all of this quickly becomes moot if there is no reasonable defense for assessing teaching in the first place. That is, why have a course outcomes assessment tool at all? There are many valid answers, and they depend on the perspective taken (student/consumer, teacher, or administration). So here the question arises: "What do we do with the data obtained from a course outcomes assessment tool?"

      From the student's (or the consumer's) point of view, evidence of effective course delivery may be a personal issue. Students may feel that a course was effective if they simply received the grade they desired. They may believe it was effective if they later discover that its content is useful to them in some applied or convenient way. Effective teaching might simply mean that they were entertained during lecture. These are important components of the learning experience, but the only substantive concerns from this perspective seem to me to be whether the material promised (in the course description and syllabus) was actually delivered, and whether it was delivered more effectively than could be expected had the student been left to learn the material alone (e.g., by reading the text). To the extent that these concerns are satisfied, the consumer can then determine value.

    Note: I perceive a growing, and therefore disturbing, trend among students and administrators to treat teaching effectiveness or course outcomes as solely the responsibility of the teacher. Such a view lifts the burden of effort from the students' shoulders. I am aware of no literature supporting the view that effective learning is effortless; on the contrary, learning is very effortful. Teachers should be taken to task for not delivering the goods promised, but I do not see that it is constructive to hold them responsible for students' lack of effort or motivation to learn. Forcing students to engage when they do not wish to engage will result in decreased interest, less motivation, and greater reliance on external motivational factors in the future.

      From the teachers' perspective, an objective and effective assessment tool would seem to offer the opportunity to discover content that needs greater coverage in future class offerings. Unfortunately, this type of research is extremely unreliable. Education research is not easy to do well; many factors conspire against valid research and conclusions, and too many important variables are left uncontrolled. Veteran teachers can attest to the variability afforded by the luck of the student draw. Two identical course sections offered the same semester by the same teacher can be wildly different. In section 1, the students sit quietly and rarely engage in course discussion, forcing the teacher to employ assorted tactics to draw them out (e.g., forced discussion groups, homework, pop quizzes, etc.), while in section 2 the students actively participate and interact with one another during every class and, as a group, outperform the students in the other section. There is no need, and certainly no time, for additional assignments or pop quizzes ("tactics"). If this can (and does) happen within a single semester, how can we draw any conclusions about teaching effectiveness when comparing courses taught but once a year? Additional variables include changing textbooks, different teaching styles between instructors assigned to identical courses, etc. If the purpose of course outcomes assessment here is to find ways to improve course delivery, then there is a disconnect between "assessment" and "what went wrong" or "what needs to be fixed" in the classroom.
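      The "luck of the draw" can itself be made concrete with a minimal simulation (Python, with an invented student population): two sections drawn at random from the same population, and taught identically, still differ by chance alone. The population mean, spread, and section size are assumptions for illustration only.

        import random

        random.seed(1)

        def section_mean(n=25, mu=75.0, sigma=12.0):
            # Average achievement of one section of n students drawn at random
            # from an invented population (mean 75, SD 12).
            return sum(random.gauss(mu, sigma) for _ in range(n)) / n

        # Simulate many pairs of sections taught "identically" and record the
        # chance gap between their section means.
        gaps = sorted(abs(section_mean() - section_mean()) for _ in range(1000))

        print(f"median section-to-section gap: {gaps[len(gaps) // 2]:.1f} points")
        print(f"95th-percentile gap:           {gaps[int(0.95 * len(gaps))]:.1f} points")

        # Even with identical teaching, gaps of several points are routine,
        # which is why comparisons across single offerings are so unreliable.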

      The tools a teacher needs must provide the means to improve delivery of course content. Merely assessing what has already passed tells the instructor little about why something was or was not successful. In this case, perhaps the students, with their insights into what worked and what did not, are the best source. At the least, this information is fairly easy to obtain. Veteran teachers could also provide pointers and tips or tricks they have found to be successful (albeit "successful" in the subjective sense).

      Realistically, systematic efforts to document course outcomes exist mainly to support applications for promotion or merit raises (or, at the extreme, to keep one's job). In other words, the perspectives of administrators drive this effort more than any other.

      From an administrative perspective, it makes sense to want to objectively document the high quality of the product (education) being offered to consumers (students/parents). In addition, an effective assessment tool would help to locate, and therefore to begin the process of fixing, defects in product craftsmanship. Because "defects in product craftsmanship" translates too easily in the minds of faculty to "fire the teachers who don't show sufficient levels of student performance," there is knee-jerk resistance among teachers to establishing any mechanism that has the potential to make them look bad. In psychology, this might be described as a problem of "framing."

      In other words, the apparent assumption driving the push to institute a course outcomes assessment policy is that some teachers are not doing their jobs. Or, "some teachers at our school are crappy teachers and they need to be found and eliminated."

      What other conclusion could be drawn if there is no discussion of, or effort to implement, remediation programs to assist teachers who may be "found out" as crappy teachers?

      This would be better approached by starting with a universal teaching assistance program (TAP) tasked with finding (and making widely available) innovations, techniques, and tools to improve the teaching art for all teachers. Once established, such a program would naturally invite some form of assessment: did the program improve content delivery? At this point, the focus would NOT be on the faculty member, but on a technique or service provided to the faculty member. Success (however "assessed") could be shared between teacher and TAP, while insufficiency (again, however "assessed") would be attributed to TAP for failing to adequately support the teacher. Surely the teacher would be willing to work with TAP to determine where things went wrong and to work toward a more successful future outcome.

      Personally, I think that there are many more variables and perspectives to be taken into account (e.g., adjunct faculty teaching 6-8 courses a semester across two or more schools). If there were a successful model of course outcomes assessment, it would not be a secret.

      I also (perhaps naively) believe that most teachers are always striving to find ways to improve their teaching. Thus, when I am suddenly told that I need to document my effectiveness, I become defensive, because the implicit assumption seems to be that I am not effective enough, or at least not trying as hard as I could to improve. My job becomes heavier with paperwork demands (ironically taking time away from finding ways to improve my teaching performance).

      Basically, I support the philosophy of assessment. However, I become suspicious of plans to implement assessment when it isn't clear how the data can be, or will be, used.