College Guide

A Note on Methodology: 4-year Colleges and Universities
August 22, 2010 11:00 PM

By the Editors

Our methodology has two primary goals. First, we considered no single category to be more important than any other. Second, the final rankings needed to reflect excellence across the full breadth of our measures, rather than reward an exceptionally high focus on, say, research. Thus, all three main categories were weighted equally when calculating the final score. To ensure that each measurement contributed equally to a school’s score within any given category, we standardized each data set so that it had a mean of zero and a standard deviation of one. The data were also adjusted to account for statistical outliers: no school’s performance in any single area was allowed to exceed five standard deviations from the mean of the data set. Because of rounding, some schools have the same overall score; we have ranked them according to their pre-rounding results.
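
In code, the procedure looks roughly like this (a minimal sketch: the numbers are invented and the labels for the categories are our own shorthand; only the standardization, the five-standard-deviation cap, and the equal category weights come from the description above):

    import numpy as np

    def standardize(values):
        # Rescale a data set to a mean of zero and a standard deviation
        # of one, then cap outliers at five standard deviations.
        z = (values - values.mean()) / values.std()
        return np.clip(z, -5.0, 5.0)

    # Hypothetical category scores for four schools.
    service  = standardize(np.array([120.0, 45.0, 310.0, 80.0]))
    research = standardize(np.array([900.0, 150.0, 2100.0, 600.0]))
    access   = standardize(np.array([0.61, 0.55, 0.40, 0.70]))

    # All three main categories are weighted equally in the final score.
    final = (service + research + access) / 3.0

    # Rounding can create ties; they are broken by the pre-rounding value.
    ranking = np.argsort(-final)  # indices of schools, best first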

Each of our three categories includes several components. We determined the community service score by measuring each school’s performance in five areas (a code sketch follows the list):

  • the size of each school’s Army and Navy Reserve Officer Training Corps programs, relative to the size of the school;
  • the number of alumni currently serving in the Peace Corps, relative to the size of the school;
  • the percentage of federal work-study grant money spent on community service projects;
  • a combined score based on the number of students participating in community service and the total service hours performed, both relative to school size; and
  • a combined score based on the number of full-time staff supporting community service (relative to total staff), the number of academic courses that incorporate service (relative to school size), and whether the institution provides scholarships for community service.
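
As a rough sketch of how the size adjustment and the equal weighting of components fit together (hypothetical numbers; the real inputs come from the sources described in this note):

    import numpy as np

    def standardize(v):
        # Mean 0, standard deviation 1, outliers capped at five SDs.
        return np.clip((v - v.mean()) / v.std(), -5.0, 5.0)

    enrollment  = np.array([30000.0, 5000.0, 12000.0])
    rotc        = np.array([400.0, 90.0, 150.0])  # ROTC participants
    peace_corps = np.array([60.0, 25.0, 30.0])    # alumni now serving

    # "Relative to the size of the school" means dividing by enrollment
    # before standardizing, so large schools get no automatic advantage.
    components = [
        standardize(rotc / enrollment),
        standardize(peace_corps / enrollment),
        # ...the work-study, participation, and staff-support measures
        # are built the same way...
    ]

    # Each measurement contributes equally to the category score.
    service_score = np.mean(components, axis=0)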

The last two measures in this list are new to this year’s rankings: the first is a measure of student participation in community service, and the second is a measure of institutional support for service. Both are based on data reported to the Corporation for National and Community Service by colleges and universities in their applications for the President’s Higher Education Community Service Honor Roll. Colleges that did not submit applications had no data and were given zeros on these measures. Many of the schools that dropped in our service rankings this year fall into this category. (Our advice to those schools: if you care about service, believe you do a good job of promoting it, and want the world to know, fill out the application!)

The research score for national universities is also based on five measurements (see the weighting sketch below):

  • the total amount of an institution’s research spending (from the Center for Measuring University Performance and the National Science Foundation);
  • the number of science and engineering PhDs awarded by the university;
  • the number of undergraduate alumni who have gone on to receive a PhD in any subject, relative to the size of the school;
  • the number of faculty receiving prestigious awards, relative to the number of full-time faculty; and
  • the number of faculty in the National Academies, relative to the number of full-time faculty.

For national universities, we weighted each of these components equally to determine a school’s final score in the category. For liberal arts colleges, master’s universities, and baccalaureate colleges, which do not have extensive doctoral programs, we excluded science and engineering PhDs and gave double weight to the number of alumni who go on to get PhDs. Faculty awards and National Academy membership were not included in the research score for these institutions because such data are available for only a relative handful of them.
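
The different weightings can be pictured like this (our own illustrative structure, not the editors’ actual code; the component names are invented):

    # Equal weights across all five measures for national universities.
    NATIONAL = {
        "research_spending":      1.0,
        "sci_eng_phds":           1.0,
        "alumni_phds_per_capita": 1.0,
        "faculty_awards_rate":    1.0,
        "national_academy_rate":  1.0,
    }

    # Liberal arts, master's, and baccalaureate colleges: science and
    # engineering PhDs are dropped, alumni PhDs count double, and the
    # two faculty measures are omitted for lack of data.
    OTHER = {
        "research_spending":      1.0,
        "alumni_phds_per_capita": 2.0,
    }

    def research_score(z_scores, weights):
        # Weighted average of a school's standardized component scores.
        total = sum(weights.values())
        return sum(z_scores[k] * w for k, w in weights.items()) / total

    # A school one standard deviation above the mean on every component
    # scores 1.0 under either weighting scheme.
    print(research_score({k: 1.0 for k in NATIONAL}, NATIONAL))
    print(research_score({k: 1.0 for k in OTHER}, OTHER))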

As some readers have pointed out in previous years, our research score rewards large schools for their size. This is intentional. It is the huge numbers of scientists, engineers, and PhDs that larger universities produce, combined with their enormous amounts of research spending, that will help keep America competitive in an increasingly global economy. But the two measures of university research productivity and quality—faculty awards and National Academy members, relative to the number of full-time faculty (from the Center for Measuring University Performance)—are independent of a school’s size. This year’s guide continues to reward large universities for their research productivity, but these two additional measures also recognize smaller institutions that are doing a good job of producing quality research.

The Editors can be found on Twitter: @washmonthly.

Comments

  • Dave in New York on August 23, 2010 1:07 PM:

    "The research score for national universities is also based on five measurements... For national universities, we weighted each of these components equally to determine a school’s final score in the category."

    This explanation of the methodology tells me that for national universities, a simple sum of the five components comprising the research category, with the lowest sum being the best (since the best university in any given component is ranked with the lowest number, 1), should reveal the best national university in research. (Alternatively, the lowest average of the five components will obviously yield the same ranking order, since it just divides each sum by 5.)

    Unless your methodology description is inaccurately written, a simple sort of the research category showing Harvard ranked #1 is wrong.

    Top 5 in research category (by your sort):
    1 Harvard - sum 54; average 10.8
    2 UC Berkeley - sum 41; average 8.2
    3 Stanford - sum 46; average 9.2
    4 Princeton - sum 137; average 27.4
    5 MIT - sum 65; average 13

    Based on your methodology, the ranking of those 5 schools should of course be:
    1 UC Berkeley
    2 Stanford
    3 Harvard
    4 MIT
    5 Princeton
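
    (A quick Python check of the ordering implied by the sums above:)

        sums = {"Harvard": 54, "UC Berkeley": 41, "Stanford": 46,
                "Princeton": 137, "MIT": 65}
        # Lowest rank-sum is best; dividing by 5 to average obviously
        # preserves the same order.
        for school in sorted(sums, key=sums.get):
            print(school, sums[school], sums[school] / 5)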

    Undoubtedly, universities you currently display as ranked outside the top 5 for research would replace some of the schools now shown in the top 5; Princeton in particular would fall far out of the top 5 under your scoring methodology. And if someone were to argue that Princeton is a liberal arts college, so that some of its weakest numbers should be stripped out, then Princeton should not be ranked as a national university at all and instead belongs in the liberal arts college ranking only. Aside from human error in the sorting algorithm, I don't know how else you would explain Princeton at #4 in research with the results you display.

  • Rick on August 25, 2010 5:26 PM:

    Your numbers for the University of Colorado Denver are ridiculously inaccurate. $0 in research expenditure? Only 13 PhDs awarded?

    The correct numbers are >$261 million in research expenditure (source: The Center for Measuring University Performance; University of Colorado Health Sciences Center), and number of PhDs awarded: 518 (source: The Higher Learning Commission). The majority of these degrees are in the biomedical sciences. Not sure where you've gotten your numbers from.

    Please note that The University of Colorado Denver now encompasses the former University of Colorado Health Sciences Center (which no longer exists as a separate institution), and you must include statistics from the medical campus too. Otherwise, you might as well include UC Denver in the Liberal Arts College rankings, since you seem to only include data from the primarily undergraduate campus.

  • Rick on August 27, 2010 6:45 PM:

    A correction to my previous post: UC Denver actually awarded 518 doctoral degrees (a figure that also includes MDs and other doctoral degrees), not 518 PhDs. Sorry about that error. Your number of 13, however, is still way off, and it'd be great if you would correct it. Your claim that UC Denver spent nothing on research is just silly, considering that more than $373 million in research and training grants were awarded to UC Denver researchers on the Anschutz Medical Campus alone (source: http://www.ucdenver.edu/about/denver/Pages/AnschutzMedicalCampus.aspx).

  • Other David in NY on September 02, 2010 11:00 AM:

    Both WM and US News use formulae for predicted graduation rates that are simply bad science. The regressions involved take into account almost none of the many, often significant factors that educational research has shown to affect graduation, and instead rely essentially on SAT scores. No question, SAT scores do correlate positively and significantly with grad rates. But the variation around the values predicted using just SATs (the residuals) is enormous. And adding in the percentage on Pell does not begin to control for the variables known to be important in student success beyond SATs.

    Schools are then rewarded, or punished, in these ranking methodologies for their residuals, as if each school itself were responsible for all of the difference, positive or negative. This ignores a vast ocean of good educational research to the contrary. Bad science.

    And this from WM: "In addition, we used a second metric that predicted the percentage of students on Pell Grants based on SAT scores." Again, I strongly doubt that anyone actually doing educational research would consider using such an altogether loosey-goosey metric in attempting to identify " ... which selective universities ... are making the effort to enroll low-income students." Why? Because predicting Pell percentages from SAT scores alone is research craziness! There are so many other factors involved in Pell beyond SATs that, again, this is simply bad science.

    At the very least, WM, give the regression statistics that describe how well the dependent variable (grad rates) is predicted by each of the independents used: R-squareds and their significances, t-tests on each of the independents, and so on. And give the same stats for the prediction of the percentage on Pell from SATs. Neither WM nor US News ever does this. Why? It is not because there is no interest in these statistics, nor because no one calls for them. It is because the results would show how inadequate these simplistic regressions, i.e., these simplistic predictions of schools' actual grad rates, really are. Publishing the basic stats on regression models is simply good research practice, always. It's good science. Something that WM has some serious problems with.
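
    (The statistics in question take only a few lines to produce; here is a sketch with made-up data, using Python's statsmodels, just to show the kind of table that should accompany these rankings:)

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)       # synthetic data, illustration only
        sat  = rng.normal(1100, 120, 200)    # median SAT per school
        pell = rng.uniform(0.10, 0.60, 200)  # share of students on Pell Grants
        grad = 0.05 * sat + 10 * pell + rng.normal(0, 8, 200)  # grad rate (%)

        X = sm.add_constant(np.column_stack([sat, pell]))
        fit = sm.OLS(grad, X).fit()

        print(fit.rsquared)   # variance actually explained by the model
        print(fit.tvalues)    # t-statistics for each coefficient
        print(fit.pvalues)    # and their significances
        print(fit.summary())  # the full table the rankings never publish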

  • R.Will on September 13, 2010 5:18 PM:

    1) The University of Michigan has a research budget that was over $1Bn in 2008/2009 and has only gotten larger with $277MM of stimulus funding; this report shows that budget at $809MM. How many other errors does the report contain?

    2) Schools are penalized by the spread between predicted and actual graduation rates. It is well understood by pundits in the field that it is very hard to build a predictive model of this sort. Given that fact, rather than admitting that the problem is intractable, Washington Monthly is penalizing the schools for its own inability to build such a model; this is an entirely perverse outcome that should have NOTHING to do with ranking the schools discussed... perhaps the error function should be scaled and subtracted from the author's/modeler's compensation at year end... a negative bonus?

  • Mags on September 02, 2011 3:01 AM:

    It's curious that this methodology has a shockingly narrow definition of 'service.' To assume that only organizations like ROTC and the Peace Corps serve the world is short-sighted. Institutional leaders - in the realms of business, government, journalism, social entrepreneurship, and religion - have a more far-reaching impact on the 'good' of the world than many non-profits combined. I would far rather grow effective, compassionate, thoughtful leaders who work within organizations than an army of people trying to change the system from the outside.

  • JC on September 02, 2011 10:11 PM:

    Your graduation rate for UChicago is low, since it is based on outdated data. Its freshman retention rate is now 98%+, and the few other schools with that retention rate all have a graduation rate of 94.4-98%, not 92%. Freshman retention rate is easy to calculate--it's a one-year figure--whereas graduation rate is based on a six-year timeframe. For instance, last year U.S. News used averages for 2005-2008 for retention rate but averages for 2001-2003 for graduation rate. That wouldn't matter for most schools, but UChicago has made tremendous strides in graduation rate in recent years--no other university has seen such an increase. (The graduation rate used to be around 70% maybe 25 years ago.) The new, higher rate for current students won't show up in the rankings for a few years, but it is nonetheless real.

  • Renate in New York on September 03, 2011 1:45 PM:

    Why was the City University of New York not represented?

  • Washington Monthly on September 03, 2011 2:25 PM:

    @Renate in New York. CUNY schools were ranked individually. They are in the Master's Universities section.

  • Mike on October 17, 2011 11:03 PM:

    Can each column be sorted? Like Peace Corps, ROTC, etc.

  • Tim Walsh on August 27, 2012 8:54 AM:

    I would love to know where WM gets its data for the Service calculations. While graduation rate methodology allows for exclusion of those in the military or public service, there does not seem to be anywhere to obtain such information. But apparently WM gets it from somewhere. I guess it pays to have Washington connections.

  • Elena on February 25, 2013 10:05 AM:

    Hi. Why was the City University of New York not represented?

  • Daniel on February 25, 2013 11:02 AM:

    @Elena. CUNY schools appear in the rankings. They're ranked as individual colleges, not as a single entity. See Master's universities, where CUNY schools appear.

  • Joshua on April 18, 2013 8:33 AM:

    One of the best schools in Michigan is the University of Michigan. If this methodology is implemented, it will put the university on an even higher pedestal when it comes to providing a quality education!

    Joshua