How Might Economics Education Be Improved?

October 21, 2013 6:30 PM

By Michael O'Hare

Bob Frank opens his reflections on teaching economics with a discouraging examination of how badly we do at getting our students to understand the really wonderful content of his discipline. Why do educated people think it makes them appear witty to repeat a dumb bromide like “economists know the price of everything and the value of nothing”? Statistics is in a similar position (“there are lies, damn lies, and statistics”). This kind of joke, based in willful ignorance, is diagnostic of an affective failure, not an intellectual one. Students are afraid of this material, not just bored by it: they experience it as something being done to them that will make them worse people in some way, something they need to defend themselves against. How many people loved their intro stats course, and still remember the eye-opening realization that they had acquired powerful tools with which to understand and improve a complicated, random, changing world?

University statistics and economics departments are also similar in being tasked with teaching “service” courses for students in other majors and general-education introductions, as well as professional apprenticeships for people whose careers will be in creating new methods in the disciplines themselves. As a consumer of the former service (those introductory courses are an input to my teaching production function), I often have occasion to weep, gnash my teeth, and rend my academic regalia, even though I am only hoping for student command of the few big ideas Frank claims should constitute the entire curriculum of an introductory course. But as a disciple of Deming, I discount absolute-scale measures and prefer to manage on the derivative: no matter where we are now, could things be [even] better? How?

My colleague Philip Stark, in our statistics department, is on the job. But first, a little background. At Berkeley, promotions and tenure for faculty are based on a “case” prepared by the department, comprising (i) all the research the candidate has ever published, plus work “in the pipeline,” and letters from outside scholars critiquing that research; (ii) a summary of the candidate’s service on committees and the like, including public service and outreach (op-ed articles, for example); and (iii) a letter from the chair asserting that the candidate is a fine teacher and teaches many courses, along with some sort of summary of student evaluations of courses taught. Under (iii): no classroom visit reports, no critiques of assignments and feedback by peers, no video of actual classroom practice; just what I said.

We have rules about this, of course, which are under 210-1-d-1 right here. However, I have never seen a promotion case (as a member of one of the “ad-hoc committees” that reviews them) that satisfies these rules. Indeed, when I warned our previous chancellor, on my last appointment to one of these committees, that I would not be able to vote on the case if the teaching part of the package didn’t approximate the requirements of 210-1-d-1, he immediately removed me from the committee. So we are promoting faculty on the basis of research we look at and have experts in the field evaluate, and on teaching we do not see, evaluated exclusively by students.

Student evaluations of teaching (SETs) have many important advantages as a quality assurance mechanism. First, they are extremely cheap, requiring only a quarter of a class session or so, with no significant payroll impact; in fact, they get the prof back to the lab for fifteen or twenty extra minutes each semester. Second, they completely protect faculty from engaging with each other about pedagogy, which in my experience ranks right next to cleaning the break room on the scale of stuff we will avoid if we possibly can (more on this below). Third, it has never been shown conclusively that outsourcing teaching quality assurance in this way has damaged any core values: neither research productivity nor the record of the football team. Nor parking, I guess.

The foregoing is a strong case, but we have to ask: do good SETs indicate more learning by students? On the Berkeley Teaching Blog, Phil, together with Richard Freishtat, the director of our Teaching and Learning Center, has posted the first and second of three analyses of what we know about this, and his findings are devastating. Not troubling; not “maybe this needs a little fixing”; devastating to the claim that we are managing the resources society has given us in the way we say we are, for excellence in research and teaching. (If you are a student at Cal, or a taxpayer in California, you should be in the streets with pitchforks and torches. If you are our new chancellor, or our new president, fixing this should be your Job One. If you are in, or paying for, another great research university, better ask some questions before you think you’re OK.) The best part is that he and his colleagues are going at this the right way, trying to find ways to examine teaching effectiveness that will actually lead to more student learning: stay tuned for Part III of their project.

I want to highlight the contrast between our manifest institutional respect for research and real expertise on the research side of our business and the way we treat the teaching side. What Phil presents is actually not a secret from most of us. We have all had low SET scores in courses where we have other evidence that the students really learned a lot; we know about highly rated courses that seem to be a bunch of fluff; and many of us know some of the literature he cites. So continuing to use SETs in this consequential way is behavioral evidence that we do not care enough about teaching to use all our skills and powers to advance it.

Outsourcing to students has an even more toxic effect. I have had high SETs and low ones, and the high ones are much nicer, but the fact is that I have never had any evidence of a type I can respect as a scholar that I am any good at teaching, or could become so if I tried. So allowing SETs to displace real coaching and peer evaluation reinforces the fear of engaging with our peers to learn to teach better that everyone in a high-performance institution always feels. As regards my next few hours of work: I know I can write a paper that people I respect will value (or I wouldn’t have got here in the first place), but I really have no idea what will happen if I cook up an innovative new exercise for class and invite a colleague to hang out in my classroom and give me some coaching on it. OK, I do have an idea, rooted in my knowledge that teaching is affectively fraught, and in my deep-down sense that I am not the warm, supportive, emotionally competent person I want to be. The sleep of reason breeds nightmares, and so does the drought of facts. Deming: “Drive out fear.”

But I have a nice set of PowerPoints from last year, and the students didn’t ask any questions I couldn’t answer, and I can stick in a couple of slides from this fascinating paper that just came out in the Journal of Really Arcane Stuff, plus a new joke… speaking of which, here’s one that actually has some basis in our reality: “Teaching is the tax you pay to do your research. Tax avoidance (not evasion) is the duty of a citizen.”

[Cross-posted at The Reality-Based Community]


Michael O'Hare is a Professor of Public Policy at the University of California, Berkeley.
