Taken together, the seven solutions are remarkably student-friendly. Four of them focus on improving the quality of university teaching by developing new methods of evaluating teaching performance, tying tenure to success in the classroom, separating the teaching and research functions within university budgets, and using teaching budgets to reward professors who excel at helping students learn. The fifth solution would give prospective students choosing colleges more information about things like class size, graduation rates, and earnings in the job market after graduation. The sixth would make state higher education subsidies more student-focused, and the seventh would shift university accreditation toward measures of academic outcomes.

Yes, great, better teaching, more accountability, blah, blah. But how is this going to be done in practice? In elementary and high schools, a focus on teacher effectiveness has mutated into an obsession with standardized tests, because these are the only way we have to measure what students are learning. How would that work at the university level? Isn't "higher" education supposed to be about imparting skills that are hard to measure? Are Perry and his allies really going to push for the development of a vast new set of standardized tests for college students? If not, how are they going to measure teacher effectiveness?
Carey provides some thoughts of his own on the research side, and you can immediately see the dire outcomes that might arise from this "reform":
Last year, the Texas A&M system published a report comparing the salaries of individual professors to their teaching loads and their success in garnering external research funding. Most professors were pulling their weight. But some were enjoying fat, publicly-funded salaries while doing little work in return. At UT-Austin, one group of 1,748 mostly-tenured professors, representing 44 percent of the faculty, generated 54 percent of institutional costs, taught only 27 percent of students, and brought in no external research funding whatsoever.

So we are going to judge intellectual productivity by the twin pillars of bogus scholarship: outside funding and publication counts. If you ask me, universities already put far too much emphasis on these two numbers, which bear a weak relationship to what anyone is actually producing.
Like bad teaching, the reality of freeloading professors is openly acknowledged on college campuses. And like bad teaching, it is confirmed by research from within the academy itself. Lawrence Martin, Dean of the Graduate School at SUNY-Stony Brook, has compiled a database of scholarly productivity—including books, journal articles, citations, research grants, and awards—for every tenure-track professor in America. He found that while the top 20 percent of professors are producing a remarkable amount of work, “in most fields for which journal publishing would be expected, fully 20 percent of the faculty associated in Ph.D. training programs have not authored or co-authored a single publication in one of the 16,000 journals indexed” in the previous three years. The fact that some of these laggards simultaneously enjoy light teaching loads is galling.
Yes, it also galls me that terrible teachers with no publications to speak of occupy tenured seats that I could have had instead. But if the alternative is a move toward quantitative evaluation of professors, using the sorts of numbers universities find it easy to collect -- numbers of students taught, papers published, grants won, awards accumulated -- then sign me up as a defender of the status quo.