What would a quality assurance program for teaching at the higher-education level look like? We don’t have one now, nor even much to build on, but perhaps there are analogous programs we could adapt or copy. I think there are, and I will suggest an approach below. But first:
We are starting from a very modest base (not of teaching but of teaching support). As I railed in my previous post, what we have now is an incentive system under which professors are individually rewarded with retention or raises if they get adequate scores from students in surveys administered at the end of courses, surveys that do not reflect learning. I have been assured that we have departments in which high SETs are a tenure liability, but that's too depressing to dwell on. We have some ancillary activities, like a teaching and learning center that provides a lot of online resources and some training if anyone asks (and whose staff is highly informed and dedicated). Almost no one ever does (the last event they put on at Cal drew about 40 people from a faculty of a thousand-odd, including a fair number of lecturers and staff). We require one two- or three-unit course for our GSIs (graduate student instructors, = TAs in some schools), but of course this only trains the one prof who teaches it, plus the very few of our own grad students we eventually hire. And we give an annual teaching award, for which the first hurdle is spotless SETs, with no mechanism for the winners to diffuse and replicate what they do well. There is an annual teaching seminar that meets monthly, which usually has trouble recruiting a dozen participants, so in eighty years it might reach all of us.
Several of my colleagues, at Cal and elsewhere, assert firmly that our teaching is actually very good. Our alumni are certainly in demand. I am happy to stipulate that our teaching is superb, and that teaching at Berkeley deserves A+ across the board, with a cherry on top. We are all really great teachers, bow, exeunt stage left with armfuls of flowers.
But I don’t care! No action follows from that proposition: the operational question is not whether to pat ourselves on the back some or a lot, but whether there are things we could do that would produce enough additional learning to be worth doing. If there are, and we are doing C work for our students, we should do them, and if we are doing A work, we should also do them. Absolute-scale measures are managerially pretty much useless. When I critique a student paper draft, the advice I give about how it could be [even] better is worth about a hundred times the letter grade itself. If you still think high absolute performance is a license not to seek improvement, ask yourself whether you would fly on an airline whose maintenance principle was “if it ain’t broke, don’t fix it!”
Some other colleagues, mostly economists, believe incentives are everything: if we pay faculty enough more for better teaching, and punish them enough for bad teaching, the market will waft us to an optimum. After all, Pharaoh beat the Hebrews if they didn’t work hard, fed them if they did, and got a nice pyramid, right? Incentives do matter, but fear of firing and monetary rewards are not well-suited to this particular population, which operates pretty far up the hierarchy of needs. Anyway, if you can’t observe good performance (cf. Philip Stark’s discussion of SETs), if the workforce doesn’t know how to effect it, and if they have the wrong tools, all the incentives in the world won’t work.
Finally, there is assuredly a production possibility frontier in research-teaching space. It slopes down monotonically and it is concave to the origin. If we were on it, any improvement in student learning would come at the expense of some research productivity (it still might be worth it, but that’s a tough sell). But this is another misuse of good economic theory, like thinking a market equilibrium is where the world is rather than where it is always groping towards. As I learned from Bob Leone, one of the real live paid professional economists who have taught me so much good stuff, no real organization is ever at its PPF for any pair of output measures, and if it were, it would not be next week, as the PPF moves outward with organizational learning and technological advance. Indeed, organizations without good QA systems are always quite far from their PPF. The wise manager assumes she can move up or to the right or both, and is almost always correct; the foolish manager assumes she is on the PPF and wanders back and forth along where she thinks it is, like the tiger pacing along remembered cage bars.
Let’s start where college faculty should be comfortable: we have a highly developed QA system for research that has, by near-universal agreement, made our research the wonder of the world, and it keeps getting better all the time. The way it works is that we:
- collaborate on papers and projects,
- read each other’s work and cite it carefully in our own,
- seek out experts and advice, for example on methodological issues, and
- coach each other in institutionalized ways, like journal prepublication reviews and conference presentations.