College Guide


A Note on Methodology

August 20, 2009 1:35 PM

By the Editors

Our methodology has two primary goals. First, we considered no single category to be more important than any other. Second, the final rankings needed to reflect excellence across the full breadth of our measures, rather than reward an exceptionally high focus on, say, research. All categories were therefore weighted equally when calculating the final score. To ensure that each measurement contributed equally to a school's score in any given category, we standardized each data set to a mean of zero and a standard deviation of one. The data were also adjusted to account for statistical outliers: no school's performance in any single area was allowed to exceed five standard deviations from the mean of the data set. Because of rounding, some schools have the same overall score; we have ranked them according to their pre-rounding results.
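
In code, the standardization and outlier cap amount to something like the following sketch (Python and the function name are ours, not the magazine's; the five-standard-deviation cap comes from the text above):

```python
import numpy as np

def standardize(values, clip_sd=5.0):
    """Convert one raw measure, taken across all ranked schools, into
    z-scores (mean 0, standard deviation 1), then cap outliers at
    +/- clip_sd standard deviations from the mean."""
    values = np.asarray(values, dtype=float)
    z = (values - values.mean()) / values.std()
    return np.clip(z, -clip_sd, clip_sd)
```

Each category score is then the equal-weight average of its standardized components, and the overall score is the equal-weight average of the three category scores.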

Each of our three categories includes several components. We have determined the community service score by measuring each school’s performance in three different areas: the size of its Army and Navy Reserve Officer Training Corps programs relative to the size of the school; the number of alumni currently serving in the Peace Corps relative to the size of the school; and the percentage of its federal work-study grant money spent on community service projects.
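
Concretely, the category score might be computed as below, reusing the standardize helper from the sketch above (the per-student field names are hypothetical; the equal weighting of the three measures follows the standardization note above):

```python
import numpy as np  # standardize() is defined in the previous sketch

def community_service_scores(rotc_per_student,
                             peace_corps_alumni_per_student,
                             workstudy_service_share):
    """Equal-weight average of the three standardized service measures,
    computed across all ranked schools at once."""
    components = [standardize(rotc_per_student),
                  standardize(peace_corps_alumni_per_student),
                  standardize(workstudy_service_share)]
    return np.mean(components, axis=0)
```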

The research score for national universities is based on five measurements: the total amount of an institution's research spending (from the Center for Measuring University Performance and the National Science Foundation); the number of science and engineering PhDs awarded by the university; the number of undergraduate alumni who have gone on to receive a PhD in any subject relative to the size of the school; the number of faculty receiving prestigious awards relative to the number of full-time faculty; and the number of faculty in the National Academies relative to the number of full-time faculty. The latter two measures are new to this year's rankings. We weighted each of these components equally to determine a national university's final score in the category. For liberal arts colleges, which do not grant doctorates, the baccalaureate-origin PhD measure was given double weight, and faculty awards and National Academy membership were excluded because we did not have those data for all of the institutions.
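
In code, the two weighting schemes might look like this (the measure names are illustrative, and dividing by the total weight is our assumption about how the reduced set of measures was renormalized for liberal arts colleges):

```python
def research_scores(measures, liberal_arts=False):
    """Weighted average of standardized research measures, where
    `measures` maps a measure name to its array of z-scores."""
    if liberal_arts:
        # No doctorates awarded, and no faculty-award or National
        # Academy data: keep research spending and double-weight the
        # baccalaureate-origin PhD measure.
        weights = {"research_spending": 1.0,
                   "bacc_origin_phds_per_student": 2.0}
    else:
        weights = {"research_spending": 1.0,
                   "sci_eng_phds_awarded": 1.0,
                   "bacc_origin_phds_per_student": 1.0,
                   "faculty_awards_per_faculty": 1.0,
                   "national_academy_members_per_faculty": 1.0}
    total = sum(weights.values())
    return sum(w * measures[name] for name, w in weights.items()) / total
```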

As some readers have pointed out in previous years, our research score rewards large schools for their size. This is intentional. It is the huge numbers of scientists, engineers, and PhDs that larger universities produce, combined with their enormous amounts of research spending, that will help keep America competitive in an increasingly global economy. But the two new measures of university research productivity and quality that we have added this year—faculty awards and National Academy members relative to the number of full-time faculty (from the Center for Measuring University Performance)—are independent of a school’s size. The 2009 guide continues to reward large universities for their research productivity, but these two new measures also recognize smaller institutions that are doing a good job of producing quality research.

The social mobility score is more complicated. We have data that tell us the percentage of a school's students on Pell Grants, which is a good measure of a school's commitment to educating lower-income kids. We'd like to know how many of these students graduate, but schools aren't required to track those figures. Still, because lower-income students at any school are less likely to graduate than wealthier ones, the percentage of Pell Grant recipients is a meaningful indicator in and of itself. If a campus has a large percentage of Pell Grant students (that is, if its student body is disproportionately poor), its overall graduation rate will tend to be lower.

We have a formula that predicts the graduation rate of the average school given its percentage of Pell students and its average SAT score. (Since most schools provide only the twenty-fifth and seventy-fifth percentiles of scores, we took the mean of the two. For schools where a majority of students took the ACT, we converted ACT scores into SAT equivalents.) Schools whose graduation rates exceed that of the "average" school with similar stats score better than schools that merely match or, worse, undershoot the mark. Two schools, with comparatively low Pell Grant rates and comparatively high SAT scores, had predicted graduation rates over 100 percent; we left these results uncapped to keep the metric consistent. In addition, we used a second metric that predicts the percentage of students on Pell Grants based on SAT scores. This indicates which selective universities (selectivity being highly correlated with SAT scores) are making the effort to enroll low-income students. The two formulas were weighted equally.
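
Here is a minimal sketch of the two metrics, assuming an ordinary-least-squares linear model (the article says only "a formula"; the linear form, variable names, and toy numbers are our assumptions):

```python
import numpy as np

def fit_ols(predictor_cols, y):
    """Fit y ~ 1 + predictors by ordinary least squares and return a
    function that predicts y from those same predictors."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y))] +
                        [np.asarray(c, dtype=float) for c in predictor_cols])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda *xs: coef[0] + sum(c * np.asarray(x, dtype=float)
                                     for c, x in zip(coef[1:], xs))

# Toy data for five schools: Pell share (%), mean SAT (midpoint of the
# 25th and 75th percentiles, with ACT converted to SAT equivalents),
# and actual six-year graduation rate (%).
pell_pct = [40.0, 30.0, 20.0, 12.0, 8.0]
mean_sat = [1000.0, 1100.0, 1250.0, 1350.0, 1450.0]
grad_rate = [50.0, 62.0, 78.0, 88.0, 96.0]

# Metric 1: actual minus predicted graduation rate; predictions over
# 100 percent are deliberately left uncapped.
predict_grad = fit_ols([pell_pct, mean_sat], grad_rate)
grad_residual = np.asarray(grad_rate) - predict_grad(pell_pct, mean_sat)

# Metric 2: actual minus predicted Pell share, from SAT alone; schools
# enrolling more Pell students than their selectivity predicts do better.
predict_pell = fit_ols([mean_sat], pell_pct)
pell_residual = np.asarray(pell_pct) - predict_pell(mean_sat)

# The two residuals are weighted equally in the social-mobility score.
```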

In compiling this year's rankings we discovered that we had made data-entry errors in our 2007 rankings. Texas A&M University was given the wrong Pell Grant number and consequently placed first in the university ranking when it should have been slightly lower. In addition, some colleges were assigned the wrong research-expenditure figures, throwing off the overall college rankings. We deeply regret these errors and have changed our procedures for compiling the annual rankings so that they do not recur.

The Editors can be found on Twitter: @washmonthly.

Comments

  • coral on September 02, 2009 3:51 PM:

    What does the asterisk mean after the college name?

  • ehk on September 02, 2009 11:10 PM:

    You've ranked national universities and liberal arts colleges. Aren't there a great many universities and colleges that don't fall into either category and aren't taken into consideration by your rankings? How many institutions aren't considered by these rankings?

  • Beth on September 03, 2009 11:33 AM:

    I thought you had previously included data on the percentage of graduates who become teachers -- as a form of national service. Looks like I had remembered it incorrectly, but you might consider this addition to later versions of the ranking system. Colleges and universities whose graduates enter K-12 teaching, whether through a service program like Teach for America, or on their own, represent a crucial investment in this country's future.

  • C Gilbert on September 07, 2009 1:47 PM:

    Why just Pell Grants? Why not financial aid from other sources, including the school itself?

  • karen on September 10, 2009 1:35 PM:

    Did you consider St. Olaf College in your rankings? It may have topped the charts...

  • lb on September 12, 2009 12:52 PM:

    Not counting AmeriCorps, Teach For America,... as AMERICAN social services is so flawed and myopic.

  • Marybeth Neal on September 21, 2009 3:03 PM:

    The methodology for collecting the service data sounds problematic.

    First of all, the methodology doesn't say how data for the numbers of graduates going on to Peace Corps and ROTC were collected. I would be pleasantly surprised if all undergraduate institutions could keep such good track of their alumni, and in a consistent enough way to allow aggregation of data across all institutions. I am thinking the Carnegie Foundation for the Advancement of Teaching and Learning classification system would provide a better measure using the "Community Engagement" classification. My understanding is that there are two stages to the "Community Engagement" designation process. The first asks institutions to demonstrate how their institutional identities and commitment support community engagement. The second stage asks institutions to document concrete efforts in either A) Curricular Engagement; B) Outreach and Partnerships; or C) both categories. Campus Compact has a survey on service that collects great data, but unfortunately it only collects data from institutions that are members of Campus Compact.

    Secondly, there are certainly many other forms of service that should be included -- all the national service programs of the Corporation for National and Community Service, as well as Teach for America and faith-based service such as that of the Mennonite Central Committee or the Mormons.

    Thirdly, there are lots of informal ways alumni might provide service to their communities on a less-than-full-time basis -- participating in service clubs, writing letters to the editor, and organizing their communities to address community needs.

    There is a lot of data out there that should be included in order for rankings of service to be meaningful. And I think it is an important aspect to measure -- institutions of higher education do have, I think, a duty to prepare and encourage their graduates to apply what they've learned so that they can be of service to others in need.

  • Richard Destin on October 02, 2009 3:13 PM:

    What do the asterisks mean? (*)

  • Malingo on May 12, 2010 12:32 AM:

    How is a school with a 50% expected graduation rate and a 60% actual rate better than a school with a 95% expected rate and a 100% actual rate? There's a bias in this category toward schools with low expected graduation rates, since they have a lot more room to exceed expectations.

  • KG on September 05, 2012 4:17 PM:

    Also how can Harvard have an expected graduation rate of 101%? Seems like even if they did "perfect" and graduated every single student they still couldn't top the charts in this category. There's some funny-business somewhere.

  • HarvardExplains on September 05, 2012 5:46 PM:

    @KG: transfer students. The school has more people who graduate than started six years prior.

  • Ken on September 29, 2012 2:37 PM:

    I would suggest you include in your criteria the % of undergraduates who earn certain awards, which usually require some service, including:
    * Fulbright award recipients
    * Rhodes Scholars
    * Marshall Scholarship winners
    * Gates-Cambridge Scholarship winners
    * Truman Scholarship winners

  • critic on September 30, 2012 2:48 PM:

    @Ken: the % of the total undergraduates who earn those awards at any school is so small as to be meaningless.