Course evaluations don’t add up, according to Sean Rule, a statistics professor at Central Oregon Community College.
Most students are probably familiar with the Likert scale used in the first section of student evaluations. The evaluation prompts students to rate their professor and the class in various areas on a scale of 1 to 5. Administration then takes those numbers, averages them and assigns scores like “3.64” to the evaluated professor, according to Rule. The problem, he said, is that this method is neither fair nor statistically sound.
“Those numbers aren’t really numbers,” said Rule.
Numbers on a Likert scale do not represent actual quantities, but rather sentiments such as “Strongly Agree,” “Agree,” and “Neutral.” Rule likened the system to one that might be used by a hotel to collect customer ratings. That scale would consist of options such as “loved,” “good,” “neutral,” etc., and those options would also have numbers assigned to them.
An average garnered from numbers on that scale would also be flawed, according to Rule.
“What do those numbers even mean?” Rule said. “Just because you called ‘loved’ a five doesn’t mean that ‘loved’ is actually one more than ‘good,’ but that’s what the numbers imply.”
Student evaluations run into the same problem, according to Rule. The scores being assigned by students don’t have an actual numerical value, but they are being treated like numbers that can be averaged.
“It’s tricky to take a word, put a number on it and average that number,” said Rule.
Averaging the numbers assigned on the Likert scale and then judging a professor’s teaching ability by the result is misleading, according to Rule.
“[The information gathered from the Likert scale] is called ordinal data,” he said, “and the problem with that is that you can’t do anything except look at which one has the most in it.”
The Likert scale could be used to accurately gather a different kind of average, the mode, which identifies the data point that appears most often, according to Rule. The mode would show what score a professor received most often in each area of the evaluation.
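The contrast Rule describes can be sketched in a few lines of Python. The responses below are invented for illustration, not actual evaluation data; the labels assume the common five-point wording from “Strongly Disagree” to “Strongly Agree.”

```python
# Contrasting the mean computed from Likert scores with the mode Rule
# suggests, using made-up responses for a single evaluation question.
from statistics import mean, mode

# Hypothetical responses: 1 = "Strongly Disagree" ... 5 = "Strongly Agree".
responses = [5, 5, 5, 4, 3, 1, 1, 1, 1, 5, 5]

labels = {1: "Strongly Disagree", 2: "Disagree", 3: "Neutral",
          4: "Agree", 5: "Strongly Agree"}

# Treating the labels as numbers yields a score like "3.27" that
# corresponds to no sentiment actually offered on the scale.
print(f"Mean: {mean(responses):.2f}")     # Mean: 3.27

# The mode simply reports the most common response, which is a valid
# summary of ordinal data.
print(f"Mode: {labels[mode(responses)]}")  # Mode: Strongly Agree
```

The mean lands between “Neutral” and “Agree” even though almost no one chose either, while the mode reports what most students actually said.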
The interpretation of the data isn’t the only potential flaw in the student evaluation system, according to Donna Raymond, a math professor at COCC. The faculty and administration are not currently satisfied with the number of students who complete the evaluations, she said.
The previous system, in which students handwrote evaluations in class that were later typed up by administration, was abandoned because it was too time-consuming, Raymond stated. During the 2012-13 school year, evaluations have been voluntary and online.
The problem is, few students actually choose to complete the evaluations.
In fall term, only 35 percent of students completed course evaluations, according to Barbara Klett, course evaluations administrator, in an interview with The Broadside in December 2012.
“When we have a population of interest that is that small,” Raymond said, “it’s hard to get a representative sample.”
The samples that are collected likely have a negative slant, according to Raymond, because dissatisfied students are more likely to take the time to evaluate than their satisfied counterparts, who would otherwise balance the scale.
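The nonresponse bias Raymond describes can be illustrated with a toy calculation. The class makeup and response rates below are assumptions chosen for illustration, not figures from COCC.

```python
# Toy example of nonresponse bias: a hypothetical class of 100 students,
# 80 satisfied (who would rate a 5) and 20 dissatisfied (who would rate a 1).
satisfied, dissatisfied = 80, 20

# Average if every student responded.
true_avg = (satisfied * 5 + dissatisfied * 1) / (satisfied + dissatisfied)

# Assumed response rates: dissatisfied students are far more likely
# to take the time to evaluate.
sat_responders = satisfied * 0.2      # 16 satisfied students respond
dis_responders = dissatisfied * 0.8   # 16 dissatisfied students respond

sample_avg = (sat_responders * 5 + dis_responders * 1) / (sat_responders + dis_responders)

print(f"True average: {true_avg:.1f}")     # True average: 4.2
print(f"Sampled average: {sample_avg:.1f}")  # Sampled average: 3.0
```

Under these assumed rates, only 32 of 100 students respond, near the 35 percent figure reported for fall term, and the computed score drops from 4.2 to 3.0 even though nothing about the teaching changed.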
“If I have a student who hates my guts, he’s much more likely to go down to the computer lab,” said Raymond. “Someone who likes their professor may make plans to, but it’s much more easy to get sidetracked and distracted.”
Rule and Raymond are both part of a task force, led by Associate Math Professor Kathy Smith, to develop a better way of evaluating professors.
“The next few months are going to involve a lot of conversation,” Raymond said. “We’ll be doing research to find out what has worked at other colleges around the nation.”
With information contributed by Scott Greenstone