Features

September/October 2012

A Note on Methodology: 4-Year Colleges and Universities

By the Editors

Our methodology has two primary goals. First, we considered no single category to be more important than any other. Second, the final rankings needed to reflect excellence across the full breadth of our measures, rather than reward an exceptionally high focus on, say, research. Thus, all three main categories were weighted equally when calculating the final score. To ensure that each measurement contributed equally to a school’s score within any given category, we standardized each data set so that each had a mean of zero and a standard deviation of one. The data were also adjusted to account for statistical outliers: no school’s performance in any single area was allowed to exceed five standard deviations from the mean of the data set. Because of rounding, some schools have the same overall score; we have ranked them according to their pre-rounding results.
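
For readers who want to see the mechanics, the short Python sketch below illustrates the standardization, the five-standard-deviation cap, and the equal category weights. The numbers are invented and each category is collapsed to a single measure for brevity; it is a sketch of the approach, not the actual computation.

```python
import numpy as np

def standardize(values, cap=5.0):
    """Z-score a raw measure (mean 0, SD 1), then cap outliers at +/- cap standard deviations."""
    v = np.asarray(values, dtype=float)
    z = (v - v.mean()) / v.std()
    return np.clip(z, -cap, cap)

# Hypothetical data for four schools, one array per category.
service = standardize([12.0, 3.5, 7.1, 0.8])
research = standardize([45.0, 2.0, 10.5, 1.2])
mobility = standardize([0.40, 0.10, 0.25, 0.05])

# The three main categories count equally toward the overall score.
overall = (service + research + mobility) / 3

# Rank from best to worst on the unrounded score; ties in the rounded,
# displayed score are broken by this pre-rounding value.
ranking_order = np.argsort(-overall)
print(np.round(overall, 2), ranking_order)
```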

The set of colleges included in the rankings has changed since last year. For the 2011 rankings, we included all colleges ranked by U.S. News & World Report in 2010. U.S. News changed its selection criteria in 2011, and because we wanted a clear set of rules for including or excluding colleges, we developed specific criteria for the Washington Monthly rankings. We started with the 1,762 colleges listed in the U.S. Department of Education’s Integrated Postsecondary Education Data System (IPEDS) as having a Carnegie basic classification of research, master’s, baccalaureate, or baccalaureate/associate’s college and not being exclusively graduate schools. We then excluded 145 colleges that reported that at least half of the undergraduate degrees awarded in 2009-10 were not bachelor’s degrees, as well as the seventeen colleges with fewer than 100 undergraduate students in fall 2010. Next, we excluded the five federal military academies (Air Force, Army, Coast Guard, Merchant Marine, and Navy) because their unique missions make them difficult to evaluate with our methodology: our rankings are based in part on the percentage of students receiving Pell Grants and the percentage of students enrolled in ROTC, whereas the service academies provide all students with free tuition (and thus enroll no Pell Grant recipients) and commission graduates as officers in the armed services (and thus operate outside the ROTC program). Finally, we excluded colleges that had not reported any of the three main measures used in the social mobility section (percent Pell, graduation rate, and net price) in the past three years. This left a final sample of 1,569 colleges, including public, private nonprofit, and for-profit institutions.
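
In code, the selection rules amount to a filter along these lines. The sketch assumes an IPEDS extract with invented column names rather than the actual IPEDS variables, and an illustrative list of academy names.

```python
import pandas as pd

# Hypothetical IPEDS extract; the column names are illustrative, not actual IPEDS variables.
ipeds = pd.read_csv("ipeds_extract_2010.csv")

ELIGIBLE_CARNEGIE = {"research", "masters", "baccalaureate", "baccalaureate_associates"}
SERVICE_ACADEMIES = {
    "United States Air Force Academy", "United States Military Academy",
    "United States Coast Guard Academy", "United States Merchant Marine Academy",
    "United States Naval Academy",
}

sample = ipeds[
    ipeds["carnegie_basic"].isin(ELIGIBLE_CARNEGIE)
    & ~ipeds["graduate_only"]
    # Drop schools where half or more of 2009-10 awards were not bachelor's degrees.
    & (ipeds["share_bachelors_degrees_2009_10"] > 0.5)
    & (ipeds["undergrad_enrollment_fall_2010"] >= 100)
    & ~ipeds["institution_name"].isin(SERVICE_ACADEMIES)
]

# Keep only schools that reported at least one social mobility measure in the past three years.
mobility_cols = ["pct_pell", "grad_rate", "net_price"]
sample = sample[sample[mobility_cols].notna().any(axis=1)]
```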

Each of our three categories includes several components. We have determined the community service score by measuring each school’s performance in five different areas: the size of each school’s Air Force, Army, and Navy Reserve Officer Training Corps programs, relative to the size of the school; the number of alumni currently serving in the Peace Corps, relative to the size of the school; the percentage of federal work-study grant money spent on community service projects; a combined score based on the number of students participating in community service and the total service hours performed, both relative to school size; and a combined score based on the number of full-time staff supporting community service (relative to the total number of staff), the number of academic courses that incorporate service (relative to school size), and whether the institution provides scholarships for community service.

The latter two measures are based on data reported to the Corporation for National and Community Service by colleges and universities in their applications for the President’s Higher Education Community Service Honor Roll. The first is a measure of student participation in community service and the second is a measure of institutional support for service. Colleges that did not submit applications had no data and were given zeros on these measures. Some schools that dropped in our service rankings this year completed an application in 2010 and therefore received credit in last year’s rankings, but did not submit an application in 2011 and therefore did not receive credit on these measures in this year’s rankings. (Our advice to those schools: If you care about service, believe you do a good job of promoting it, and want the world to know, then fill out the application!)
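
One plausible way to assemble the student-participation measure is sketched below, with invented field names; the exact combination is ours for illustration. The institutional-support measure would be built analogously, and non-applicants simply receive zeros.

```python
import numpy as np

def z(values, cap=5.0):
    """Standardize to mean 0, SD 1 and cap at +/- cap standard deviations."""
    v = np.asarray(values, dtype=float)
    return np.clip((v - v.mean()) / v.std(), -cap, cap)

def participation_score(volunteers, service_hours, enrollment, applied):
    """Student-participation measure: volunteers and service hours, each relative to school size.

    `applied` marks schools that submitted an Honor Roll application; schools that did not
    apply have no underlying data and are simply scored zero on the measure.
    """
    volunteers, service_hours, enrollment = map(np.asarray, (volunteers, service_hours, enrollment))
    score = (z(volunteers / enrollment) + z(service_hours / enrollment)) / 2
    return np.where(np.asarray(applied), score, 0.0)

# Example: three schools, the second of which did not apply.
print(participation_score([500, 120, 300], [9000, 1500, 4200], [5000, 2000, 3500], [True, False, True]))
```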

The research score for national universities is also based on five measurements: the total amount of an institution’s research spending (from the Center for Measuring University Performance and the National Science Foundation); the number of science and engineering PhDs awarded by the university; the number of undergraduate alumni who have gone on to receive a PhD in any subject, relative to the size of the school; the number of faculty receiving prestigious awards, relative to the number of full-time faculty; and the number of faculty in the National Academies, relative to the number of full-time faculty. For national universities, we weighted each of these components equally to determine a school’s final score in the category. For liberal arts colleges, master’s universities, and baccalaureate colleges, which do not have extensive doctoral programs, science and engineering PhDs were excluded and we gave double weight to the number of alumni who go on to get PhDs. Faculty awards and National Academy membership were not included in the research score for these institutions because such data are available for only a relative handful of these schools.
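
The weighting scheme can be summarized in a few lines. The function below is an illustration with variable names of our own; the inputs are the standardized component measures described above.

```python
def research_score(spending_z, se_phds_z, alumni_phds_z, awards_z, academies_z,
                   is_national_university):
    """Combine standardized research components; weights differ by institution type.

    For national universities the five components count equally. For liberal arts,
    master's, and baccalaureate colleges, the S&E PhD and faculty measures are dropped
    and the alumni-PhD measure receives double weight relative to research spending.
    """
    if is_national_university:
        components = [spending_z, se_phds_z, alumni_phds_z, awards_z, academies_z]
        return sum(components) / len(components)
    return (spending_z + 2 * alumni_phds_z) / 3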

As some readers have pointed out in previous years, our research score rewards large schools for their size. This is intentional. It is the huge numbers of scientists, engineers, and PhDs that larger universities produce, combined with their enormous amounts of research spending, that will help keep America competitive in an increasingly global economy. But the two measures of university research quality—faculty awards and National Academy members, relative to the number of full-time faculty (from the Center for Measuring University Performance)—are independent of a school’s size.

The social mobility score is more complicated. We have data from the federal Integrated Postsecondary Education Data System survey that tell us the percentage of a school’s students on Pell Grants, which is a good measure of a school’s commitment to educating lower-income students. We’d like to know how many of these Pell Grant recipients graduate, but schools aren’t required to report those figures. Still, because lower-income students at any school are less likely to graduate than wealthier ones, the percentage of Pell Grant recipients is a meaningful indicator in and of itself. If a campus has a large percentage of Pell Grant students—that is to say, if its student body is disproportionately poor—it will tend to diminish the school’s overall graduation rate.

We first predicted the percentage of students on Pell Grants based on the average SAT score and the percentage of students admitted. This indicates which selective universities (since selectivity is highly correlated with SAT scores and admit rates) are making the effort to enroll low-income students. (Because most schools provide only the twenty-fifth and seventy-fifth percentiles of scores, we took the mean of the two. For schools where a majority of students took the ACT, we converted ACT scores into SAT equivalents.)
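
One way to implement the prediction is a simple least-squares regression of percent Pell on the SAT midpoint and the admit rate. The sketch below uses invented numbers and leaves the ACT-to-SAT concordance as a placeholder; it illustrates the idea rather than reproducing our exact model.

```python
import numpy as np

# Hypothetical data for a handful of schools; real inputs come from IPEDS.
sat_25th = np.array([880.0, 1010.0, 1180.0, 1350.0])   # 25th-percentile combined SAT
sat_75th = np.array([1080.0, 1210.0, 1360.0, 1530.0])  # 75th-percentile combined SAT
admit_rate = np.array([0.85, 0.62, 0.38, 0.12])
pct_pell = np.array([0.45, 0.32, 0.21, 0.13])           # actual share of students on Pell Grants

# Midpoint of the reported score range; majority-ACT schools would first be mapped to
# SAT equivalents using a published concordance table (omitted here).
sat_mid = (sat_25th + sat_75th) / 2

# Ordinary least squares: predict percent Pell from SAT midpoint and admit rate.
X = np.column_stack([np.ones_like(sat_mid), sat_mid, admit_rate])
coef, *_ = np.linalg.lstsq(X, pct_pell, rcond=None)
predicted_pell = X @ coef

# Schools above zero enroll more Pell students than their selectivity would predict.
pell_performance = pct_pell - predicted_pell
```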

The Editors can be found on Twitter: @washmonthly.

Comments

  • matt w on August 27, 2012 10:54 AM:

    I see that you're still pro-rating some scores for size and others not, and still counting humanities PhDs differently whether they're awarded by the university or to alumni. And you still haven't uttered a word to justify these arbitrary choices.

    To paraphrase Apocalypse Now:
    "Is my methodology unsound?"
    "I don't see any methodology at all, sir."

  • Tom M on August 29, 2012 3:49 AM:

    Your scores for research expenditures (and science PhDs) aren't scaled for the size of the college. This produces strange effects. If we look at the Claremont group of colleges, we see Harvey Mudd College in 25th place spending $3.21 million, Pomona College in 31st place spending $2.92 million, and Claremont McKenna College in 9th place spending $5.73 million. But these colleges are all part of the same group, sharing facilities. If they merged, the result would be $11.86 million in spending, putting them in 2nd place. This would not be an actual change in what they are doing, just a change in paperwork. The standings should not be affected by arbitrary changes in organization.

    A more general objection is that you are using these measures to estimate how valuable an institution is to America. By not adjusting for size, you reward mergers. This is the theory of too big to fail.

    An additional problem is the use of Pell grants to judge social mobility. If a school has a student body that has won a lot of scholarships, that would throw off the statistics. It would also be a good idea to take into account legacy admissions and athletic admissions.

  • Christine C. on August 29, 2012 10:03 AM:

    Tom, the larger point you were making is well taken, but as a Claremont 5C alumna, I must correct the example you gave. Harvey Mudd, Pomona and CMC do share some facilities (such as their main library, Honnold/Mudd) but not all (or even most). Pomona and Harvey Mudd each have their own science facilities, for instance. CMC's science facilities (Keck Science Department) are shared with Pitzer and Scripps.

  • Tom M on August 31, 2012 3:03 AM:

    Christine: Thank you for the additional information.

  • Robert Kelchen on August 31, 2012 11:31 AM:

    Hi folks,

    My name is Robert Kelchen and I'm the consulting methodologist for this year's rankings. A few responses to the above comments:

    (1) We do not adjust for institutional size when examining PhDs awarded by national universities. The denominator for this measure is unclear--we may want to divide by the number of PhD students, but that number isn't available. Dividing by overall graduate populations or institutional size produces fuzzy results, at best.

    (2) We do adjust for institutional size when examining the baccalaureate origins of PhD recipients. This gives us an idea of a college's relative commitment to research. Additionally, there is much more variation in size for non-research universities and we have a good denominator to use (number of undergraduates).

    (3) We would love to be able to look at more than the Pell Grant to judge social mobility, but there are severe data limitations. There isn't data on legacy or athletic admissions, which could potentially be fixed with a national unit record dataset. It is just now possible to get Pell recipient graduation rates.

    (4) Cases like Claremont are tricky, to be sure. But shared facilities are not at all uncommon. For example, state university systems often share resources and it can be difficult to parse out an institution's effect.

  • WHP on September 05, 2012 9:23 PM:

    The biggest problem with these rankings is the absence of any measure of quality, with the possible exception of the ratio of PhDs to BAs. This is truly unfortunate. Even data about income, the value of the degree, would be a step in the right direction.

    I noticed that the data on research expenditures cannot possibly be correct. The institutions ranked "105" with respect to research expenditures include colleges whose faculty are required to do research to attain tenure alongside colleges staffed by part-time adjuncts with second jobs who do no research and are hired only to teach.

  • Robert Kelchen on September 06, 2012 9:10 AM:

    WHP--we would love to have data on income for a representative sample of students. Sadly, it doesn't exist across a broad swath of universities and is fraught with selection bias when available.

    The research expenditures data come from the Center for Measuring University Performance and the National Science Foundation, which are the most reliable sources available.

  • Suzanne Klonis on September 12, 2012 9:00 AM:

    I only found these rankings by accident, which is unusual, because as the institutional research director at my college, I report most of the numbers that are published about my college. I find it a little weird that IR directors were not contacted to verify any of the data that were used in the rankings. For example, how do you know the percentage of graduates who go on to get PhDs? How would you know this information without contacting the school itself?

  • Robert Kelchen on September 12, 2012 1:19 PM:

    The number of graduates who go on to get PhDs came from the Survey of Earned Doctorates via the National Opinion Research Center. Most colleges don't track all of their alumni who go on to get PhDs.

  • Liz Sanders on September 21, 2012 3:25 PM:

    After reading the comments, and trying to locate details of the methodology, am I correct in concluding that you gather none of the information for the rankings from the schools themselves? Just to clarify.

  • Robert Kelchen on September 24, 2012 11:33 AM:

    Liz, correct. All measures are provided by outside sources. I think we have all of the sources listed, with the exception of ROTC and Peace Corps enrollment, which come directly from those organizations rather than from the colleges.

  • K Gilman on October 04, 2012 11:04 AM:

    Why is every public university in Maryland mentioned except Morgan State?