Editor's Note
Tilting at Windmills


March 31, 2007
By: Kevin Drum

FLAT MAXIMA....A couple of weekends ago I linked to an op-ed piece that mentioned something called the "principle of the flat maximum." The idea is that at the very top range of ability level, everyone is so highly qualified that it's almost impossible to predict who's really going to be the best at the next level of performance. The measured differences are just too small.

What would be a good test of this? How about the NFL draft? The players drafted in the first two rounds are the 64 best college football players in the country, and this is very elite territory indeed. The fact that pro scouts have a ton of information on each player makes this a very stringent test of the PotFM, but even so, if the principle is really true, the performance of these 64 players once they get to the pros ought to be fairly random.

So is it? If you compared the pro careers of, say, all the players drafted in the first round during the 1980s to those drafted in the second round, what would you find? Obviously this requires some consistent measure of pro performance, but it seems like there are thousands of sports geeks out there who have come up with performance metrics of various kinds, so this ought to be doable. Does anyone know if this kind of comparison has ever been done?

POSTSCRIPT: What might prevent this from being a good test? One thing that comes to mind is the possibility that first round draft picks are given more opportunity to prove themselves. If you're drafted #3 and have a multi-million dollar contract, your team will probably keep playing you even if you have a mediocre season or two. If you're drafted #58, you'll probably get cut.
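Here is a sketch of what the round 1 vs. round 2 comparison might look like, written as a permutation test in Python. The career-value numbers are entirely invented for illustration; a real test would need actual draft data and some agreed-upon performance metric:

```python
import random

random.seed(0)

# Hypothetical career-value scores (e.g., weighted games started) for ten
# first-round and ten second-round picks.  These numbers are invented for
# illustration -- a real test would use actual draft and performance data.
round1 = [92, 45, 110, 12, 78, 60, 88, 30, 101, 55]
round2 = [70, 15, 95, 40, 66, 22, 80, 35, 50, 90]

def mean(xs):
    return sum(xs) / len(xs)

def permutation_p_value(a, b, trials=10_000):
    """Two-sided permutation test on the difference of means.

    If the flat-maximum principle holds, round labels are essentially
    arbitrary, so shuffled label assignments should produce differences
    as large as the observed one quite often (a large p-value).
    """
    observed = abs(mean(a) - mean(b))
    pooled = a + b
    hits = 0
    for _ in range(trials):
        random.shuffle(pooled)
        diff = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return hits / trials

p = permutation_p_value(round1, round2)
print("p-value:", round(p, 3))
```

If the principle holds, the p-value should be large (the rounds look interchangeable); a small p-value would mean first-round picks measurably outperform.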

What else would be a good test of the PotFM? Outside of sports, that is.

Kevin Drum 1:23 AM Permalink | Trackbacks | Comments (77)


Your test fails because you are applying it to a local maximum - the best college players - the true maximum would be found within the set of professional players.

Posted by: Just Think on March 31, 2007 at 1:54 AM | PERMALINK

I say apply the Principle of Flat Minima (amazingly with the same acronym PofFM) to the Bushies.

It does not matter even if they change the Sec Def or the AG or the Secretary of State or the NSC advisor or the SecAg.

We are still screwed.

Posted by: gregor on March 31, 2007 at 1:57 AM | PERMALINK

Michele tested European unions.

Posted by: Brojo on March 31, 2007 at 1:57 AM | PERMALINK

Robert Michels

Posted by: Brojo on March 31, 2007 at 2:10 AM | PERMALINK


How about another favorite topic of yours -- CEO pay.

CEO pay as compared to change in stock price/market cap?

Posted by: Alex on March 31, 2007 at 2:10 AM | PERMALINK

Another possible problem with the NFL draft example is that players at different positions in the NFL have different typical salaries (not mandated by the collective bargaining agreement, as far as I know, but plainly observable in the average salary at each position). So if a team drafts specifically to fill a position with a lower average salary, the careers of players at positions seen as easily replaceable (kicker or long snapper, for example; those guys are constantly changing teams) will look much different from careers at positions like center or defensive tackle, where there are only so many people in the world who can play.

Not to sound too Canadian, but perhaps a better idea would be to apply it to the NHL draft, rather than the NFL draft, and compare it to the future earnings (or games played) of players in each round. Unlike the NFL draft, where players have to specifically declare for the draft, in the NHL everyone born after a certain date in September of each year is automatically eligible, and thus the players drafted each year are all approximately the same age. However, also unlike the NFL draft, the teams themselves are under no obligation to then sign the players that they draft, although they re-enter the draft if they remain unsigned after a particular period of time, or become free agents.

So perhaps a good test would be to take the draft of a particular year (say 1980) and look at the average earnings of the players, say, ten years down the road. Presumably, according to this theory, the earnings would be approximately the same per round, at least for the first few rounds (obviously comparing rounds 1 and 9 would skew the results). And presumably the fact that some players in each round never get signed would remove most of the random outliers from the statistics. There would need to be some consideration for position (again), since goaltenders generally don't enter the league until much later than forwards, and also won't play as many games in a season as the skaters.

Posted by: msmackle on March 31, 2007 at 2:25 AM | PERMALINK

Three thoughts:
1. In chess it's just not true - Kasparov was absolutely dominant for over a decade.
2. Football's a lousy example - the possibility of injury is so great as to trash your sample.
3. Page Bill James! (Actually, IIRC he did a vast study about 20 years ago and found that (when corrected for park effects and other Jamesian stuff) major league performance could be very accurately predicted from minor league performance. (Not quite the same argument, but close.)
3.14 - Bill James is a demi-god. His mid 80s Baseball Abstracts were brilliantly written, truly profound pieces of critical thinking.

Posted by: The Sophist on March 31, 2007 at 2:25 AM | PERMALINK

National Merit Scholars or something like that would be a good test, Kev.

Posted by: SocraticGadfly on March 31, 2007 at 2:36 AM | PERMALINK

Tracking first round draft choices in football, or any other sport, won't help support or refute the PofFM. The persons chosen are not all doing the same thing under the same circumstances.

Even players at the same position cannot be compared without reference to their teams. For example, compare the fortunes of Tim Couch and Ben Roethlisberger.

The same would be true of other elite groupings. All graduates of the Harvard Business School or Yale Law School do not pursue identical or even substantially similar careers.

There is no single "next level of performance."

Posted by: James E. Powell on March 31, 2007 at 2:49 AM | PERMALINK

SAT/CAT/MCAT/GSAT, etc. All are standardized tests to score your aptitude within a cohort for particular skills or careers. Surely there are studies that examined some measure of outcome as a function of test score? But perhaps the emphasis has never been on the difference between the top 1 and 5 percent.

Posted by: Michael Tanowitz on March 31, 2007 at 3:15 AM | PERMALINK

People, listen up! I don't know what you are talking about — and I don't want to know. I'm keeping my eyes shut tight, except enough to see what I am typing. But I refuse to read your character-assassinating mean nastiness.

This is Ann Althouse. And I bring you a message.

—I voted for Clinton twice.
—I voted for Russ Feingold.
—There's this little thing called "9/11." Maybe you've heard of it???

I will have you know that Garance used tricks on my mind to make me explode like that. I mean, not explode. To make me call her out as a character assassinater, I mean.

Did you know that Barack Obama can infiltrate your mind and take over your thoughts? It's true. Mark Schmitt said Barack Obama is just like Lex Luthor.

It's true. See? Right here:
— http://bloggingheads.tv/video.php?id=192&cid=952&in=42:00

I blogged about Obama's mind-control powers here:
— http://althouse.blogspot.com/2007/01/he-can-enter-your-space-and-organize.html

—Ann Althouse, Professor of Law

Posted by: Ann Althouse on March 31, 2007 at 3:48 AM | PERMALINK

I'm not sure we need a principle to tell us this - clearly, there are people at the elite level who are better than their peers, but it's impossible to predict who they are beforehand because there are too many variables - there are many highly touted college prospects who don't pan out at the professional level for one reason or another. There are some lower picks who become stars. You can apply it to stocks - two stocks in the same sector with similar earnings, growth rate, P/E ratio, etc., but one stock ends up outperforming the other. That's just life.

Posted by: Andy on March 31, 2007 at 4:05 AM | PERMALINK

Tanowitz - The SAT is an unreliable predictor of success. The College Board states the only correlation (and a low one at that) is with grades in the first year.

Posted by: non standard on March 31, 2007 at 5:25 AM | PERMALINK

To the extent that an exercise like this is even useful, I would imagine that a good test might be one of the moderately popular, strictly objective Olympic sports. Not so popular that sponsorships or other money comes into it (e.g., track) but not so obscure that only self-selected weirdos devote that kind of effort (e.g., BB gun shooting).

I don't know if such a sport exists (maybe whitewater kayak?) but if it did that would probably be the best test because it's relatively easy to measure "success." Even so, I don't think the results would be worth much because you can't sort out skill from discipline, life circumstances, available resources, pure luck in terms of when you enter the field, etc.

In the end maybe only computer simulations can fairly test this idea . . .

Posted by: skeptic on March 31, 2007 at 5:53 AM | PERMALINK

in the best wingnut tradition, how about an argument by anecdote: graduates of the best universities (george bush) vs. college dropouts (bill gates). note both are the sons of privilege, so it should all equal out.

but seriously, this isn't really a question statistics can answer is it? you cited one obvious reason, because the stars are treated differently, and I cited another, because a single good decision can make all the difference.

Posted by: supersaurus on March 31, 2007 at 6:04 AM | PERMALINK

Look at the draftees in their third year and see how many are starters, how many are nonstarters but still on some team, and how many are not in the NFL. Track this for enough years to get statistically significant data (probably 5 years or so).

If you want to be really pedantic about it, you can break the comparisons down to subcategories such as line, runner, receiver, etc. Quarterbacks may be too small a data set, and the time lag is probably longer for them, but anecdotally they will probably come out similarly.

By the way, it's curious how many of the initial group of responses were unresponsive to the question you asked.

Posted by: Bob G on March 31, 2007 at 6:05 AM | PERMALINK

"What else would be a good test of the PotFM? Outside of sports, that is."


I've never praised American Hawk before, but that's a good one.

I'd add single malt scotch to that list. And I'd offer another reason why you might not be able to differentiate among the top performers: personal taste. This certainly applies to single malt scotches. Lagavulin, Laphroaig, Oban, and Macallan are all great scotches, but have very different tastes. You might love one and hate another. Or you might even prefer one over the other in specific circumstances. I like Laphroaig with a cigar because it's smoky. I like Oban with dessert because it's sweet. I like Lagavulin when I just want to sit down and drink a lot of scotch. I like Macallan when I'm drinking with a large group because it has a middle-of-the-road flavor.

Posted by: fostert on March 31, 2007 at 6:45 AM | PERMALINK

The investment management business (mutual fund managers, etc.) is a very good example of this principle. Among the set of highly skilled managers, there is no evidence that some are consistently better than others. There is an ocean of research on this point, for the obvious reason that if you could identify unique skill you could make a lot of money.

Posted by: rich on March 31, 2007 at 7:23 AM | PERMALINK

The author of the article concluded that "There is another potential benefit that extends far beyond the confines of the college admissions game. We like to believe, in our least cynical moments, that the U.S. is a meritocracy. Success is about talent and hard work. Luck has nothing to do with it. This attitude may well contribute to a lack of sympathy, sometimes even bordering on disdain, for life's losers.
I believe that this attitude is profoundly false. It is not the case that people always get what they deserve. There just aren't enough top rungs on the Ivy League's (or life's) ladders for everyone to fit. If talented and hardworking people are forced to confront the element of chance in life's outcomes when they (or their kids) fail to get into the "best" college, they may be more inclined to acknowledge the role of luck in shaping the lives of the people around them. And this may make them more empathic toward others — and a good deal more committed to creating more room at the top."

This brings to mind--well, which presidential candidate is good enough, which is the best?
There aren't enough rungs on the ladders for everyone, and with luck playing a part in it--egads, it might be that John Edwards is the best of those on the rungs of the ladder, and it might not work out for him.

Posted by: consider wisely always on March 31, 2007 at 8:15 AM | PERMALINK

You get the flat maximum because all of the measures used to make decisions flatten out. They hit 700 on SATs, straight A's, etc. So to find a similar analog, you need to find a situation where
a) The decision making measures flatten out, and
b) There are other measures not used that do further discriminate.

Maybe the lists of wines from Wine Spectator, with the good ones crowding up in the 90's. Books on Amazon that get 5 stars.

One possible analog is Navy SEAL training. They have very high standards of entry, and have studied how to discriminate the guys who will make it from the guys who don't for years, but still the only way to find out the true performers is to see if they perform.

Posted by: Red State Mike on March 31, 2007 at 8:43 AM | PERMALINK

Scientific literature. Scientific papers must pass a peer-review process in which it is determined whether the paper gets to be published in a given journal. The perceived prestige of a journal follows a hierarchy that is, for the most part, agreed upon. For biologists and some other scientists the hierarchy is approximately: Nature, then Science, then PNAS (Proceedings of the National Academy of Sciences), then PLoS (Public Library of Science), then specialty journals. There are exceptions to this, such as Cell and Nature Neuroscience, which, to people in those specialties, jump the queue to a higher position than some general-interest journals.

The way to evaluate papers afterward is to see how many times they are cited. In the journals cited above, the average number of citations per paper follows the ranking I have given. However, the averages are dominated by a few super-papers; most papers aren't cited all that much. I think this could be interpreted as supporting the Flat Maximum principle.

One caveat is similar to the NFL first-round idea: higher-profile papers get more of a chance in the sense that more people see them at the start (more people subscribe to or read Nature than PNAS). This effect should die down, to an extent, after a year or so, as it becomes clearer how worthwhile a contribution was.

Posted by: Sam on March 31, 2007 at 8:47 AM | PERMALINK

Someone mentioned Bill James. He did a study in a little-remembered newsletter he published for about 4 months sometime in the mid-1980s (1985, I think) that pretty much rebuts the PofFM, at least as it applies to baseball draftees. Specifically, he studied the career performance of every major league baseball draft choice during the first 16 years of the MLB draft (at the time, that was the entire population that could be studied). Although he found many misvaluations (southerners, high schoolers, and pitchers were then overvalued, for example), the high draft picks were not fungible; the first player chosen outperformed the second player chosen, etc. It is the most impressive work of his incredibly impressive career and more or less formed the foundation of Moneyball, v.1.

Posted by: tigertears on March 31, 2007 at 8:48 AM | PERMALINK

The flat maximum is crap. The SAT and GRE just don't test the relevant criteria.

Doing well in graduate school is measured by someone's ability to do grueling, monotonous work for years on end with high efficiency and little reward. The best test would be a 24-hour sudoku marathon followed by a 1-hour period where the student stares at a shotgun blast through a piece of graph paper and churns out a 250-word abstract.

Posted by: amerlcan buzrd on March 31, 2007 at 8:48 AM | PERMALINK

There is much MORE variation in the performance of top performers than mediocrities. The ratings of the top hundred chess players span a greater range than the ratings from C class to expert. The times of the first 50 finishers in a marathon span 20 or 30 minutes--at four hours, 50 runners finish every minute or two.

Statistically, observations in the far tail of a distribution are much more widely spaced than those in its middle, because the density thins out there.

Posted by: hexatron on March 31, 2007 at 8:58 AM | PERMALINK
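hexatron's marathon observation can be checked with a quick simulation: in a normally distributed field, the gaps between finishers widen dramatically in the tail, so the top 50 span far more time than 50 runners from the middle of the pack. The field size and the normal-ability assumption below are arbitrary choices for illustration:

```python
import random

random.seed(2)

# Draw one large "marathon field" of normally distributed abilities and
# compare the spread of the top 50 with the spread of 50 runners taken
# from the middle of the pack.  Purely illustrative numbers.
field = sorted(random.gauss(0, 1) for _ in range(10_000))

top_50 = field[-50:]                       # the 50 best in the field
mid = len(field) // 2
middle_50 = field[mid - 25 : mid + 25]     # 50 from the dense middle

top_span = top_50[-1] - top_50[0]
middle_span = middle_50[-1] - middle_50[0]

print(round(top_span, 3), round(middle_span, 3))
assert top_span > middle_span  # the extreme tail is far more spread out
```

The top 50 typically span an order of magnitude more than the middle 50, which is hexatron's point: the measured differences are largest, not smallest, among the very best.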

This letter sent by Henry Waxman (D-Calif.) to Karl Rove on Friday makes me want to share this--and maybe it kind of relates as well.


Posted by: consider wisely always on March 31, 2007 at 9:09 AM | PERMALINK

OK. I got it now. This comment over at firedoglake is absolutely the answer that Kevin seeks as related to the test of the flat maximum: This is from Kyle Sampson's live testimony to Congress Thursday, and all the "I don't recalls" and "I don't knows." Do I win anything?

mbbsdphil says:

March 29th, 2007 at 3:23 pm

"eCAHNomics @ 200

Jim Clausen @ 194
eCAHNomics @ 188
WRT controversy about whether Sampson knew a lot & was play acting, or whether he was just a DFI, I vote with the latter. I’ve seen lots of these 30-something hotshots on Wall Street, and in the end they’re almost all DFI. Sampson would stack well below the worst of these on Wall St.

You don’t get out of law school by being obtuse. Read the e-mails. This guy was well coached to be obtuse.

The DFIs on Wall Street go the Harvard Business School. Going to a good school, even a law school, is no guaranty of smarts in all situations. I agree that the “I don’t recalls” were coaching-don't even need to be a lawyer to know that, but good coaching would have provided a coherent story of why the 8 were fired, one that would stand up to criticism. And as you saw from the pitiful Hatch attempts, they didn’t give their allies much to work with.

“Going to a good school…is no guaranty of smarts.” Were you thinking of Mr. Bush’s Harvard MBA? Or, just his B.A. from Yale?

I would not assume that a “coherent story” is what Mr. Rove wanted today. Good theater manipulates the audience; so does good politics.

How would you want this theater to play if you were Mr. Rove’s defense lawyer? A clear, easy narrative, with obvious, discrete gaps? Or, would you want as many conflicting narratives as possible? With inconsistent holes, Ms. Nobody’s taking the Fifth, etc., etc., etc? Confusion, distraction, multiple possible conflicting explanations. The goal is to avoid allowing twelve people to agree on anything beyond a reasonable doubt.

The theater that played out today was just fine for Karl. We don’t have much more than we started with. We’re already exhausting the public and even the Senate. And if anybody has a target painted on their bald head, it’s not Karl."


Posted by: consider wisely on March 31, 2007 at 9:24 AM | PERMALINK

I think the academic job market and tenure in the Humanities probably work similarly. One applies for jobs along with 200+ other PhDs, and the person who gets the job may be perceived as the "best fit" rather than the best candidate. But whether somebody ends up getting tenure is anybody's guess; it depends on so many factors--relationships with colleagues, publishers/academic presses, teaching evaluations, journals in the sub-fields--and though departments try to make their requirements for tenure clear, any one academic's ability to teach and publish as much as one needs to cannot be predicted by their high performance in pre-full-time faculty positions.

Posted by: CattyinQueens on March 31, 2007 at 9:26 AM | PERMALINK

Your test fails because you are applying it to a local maximum - the best college players - the true maximum would be found within the set of professional players.

Wow. A geneoos.

Hey, Geneoos, do you understand the problem? Moron, let me lay it out for the terminally stoopid (that would be you). They are trying to pick the professional while they are in college, you total idiot

It's a very difficult problem. The main problem is that most of the good players never play against one another, so a difference cannot be determined.

Posted by: fucking mental midget morons for Ralph on March 31, 2007 at 9:30 AM | PERMALINK

In many individual sports there is something like a flat maximum principle observable in the large second tier, among whom any one of a crowd of elite-trained, highly gifted athletes can come second to the current once-a-decade/once-in-fifty-years/once-a-century dominant champion. This is particularly true in sports where performance-enhancing drugs are not a major factor in levelling the field between the world's best & the top second tier (i.e., unlike drug-soaked athletics & cycling).

In golf Tiger Woods's dominance is astonishing, but among the second tier something like an elite flat maximum applies. The same is true in tennis with Federer (& Sampras before him) completely blitzing the very competitive, flat-maxed 2nd tier. The same is currently true of Michael Phelps in swimming & Kelly Slater in surfing. I'm sure those knowledgeable about other individual sports can cite comparable exceptions to their respective second-tier flat maximums.

Sport science has become so skilled at shaving microseconds here, refining strokes there, shortening response time, boosting endurance & generally micromanaging every aspect of elite performance that in many endeavours athletes are now performing at the very limits of human capability & therefore the differences between them are negligible, producing a very high quality flat maximum. But, as with Woods, Federer, Phelps & Slater, there are always generational geniuses, true exceptions, freaks really, who both receive & transcend the very best training, technology & sports science to rise above their remarkable peers.

Posted by: DanJoaquinOz on March 31, 2007 at 9:34 AM | PERMALINK

This is related to what you are looking for:

It states that from an economic point of view, the 25th-50th picks in the NFL draft are the best because the picks before them get overpaid.

However, from a pure performance point of view it shows top picks performing better than 2nd and 3rd rounders. See Figure 9 on page 48.

The PotFM applies more to college students than professional athletes because:
1) SAT scores and grades don't do a good job of differentiating the great from the very good. When you compare them to watching game tapes of every game a player has been in and then conducting your own drills with the player, there is no comparison.
2) It is easier to rate people later in their lives--that is, at the end of college vs the end of high school.
3) It is easier for the worst NFL team to pick one great athlete than it is for Brown to pick 2000 great students. The NFL spends several hundred man-hours per selection, while Brown spends at most 1 man-hour per selection.

Posted by: reino on March 31, 2007 at 9:38 AM | PERMALINK

The "long tails" idea seems better. The SAT has three math tests: Aptitude (which everyone takes), Achievement I, and Achievement II (harder). There are a lot of 800s on the first one (even though very bright people might be happy with a 700), but even on the Achievement II test about 10% of the takers get 800s.

On the floor, in basketball some individuals dominate the other pros and even the other All-Stars: Magic, Bird, Kareem, Dr. J, Charles Barkley, etc.

Posted by: John Emerson on March 31, 2007 at 9:46 AM | PERMALINK

This idea fails because

  1. potential and actual are not the same things

  2. other elements such as self-interest vs. corporate interest vs. societal interest will result in different outcomes

  3. define success

Posted by: workingclassannie on March 31, 2007 at 9:54 AM | PERMALINK

this has been done for qb's picked in the first as opposed to later in the draft. over time the first rounders fail in comparison. i do not know if it has been done for first rounders as a whole. this has been alluded to on the GIANT message board and i do not know how the metrics have been applied.

Posted by: peter gevas on March 31, 2007 at 10:07 AM | PERMALINK

Tanowitz - The SAT is an unreliable predictor of success. The College Board states the only correlation (and a low one at that) is with grades in the first year.
Posted by: non standard

The SAT may not be a very good predictor of academic success (the ACT is much better in this regard), but it's got a very high correlation to family income.

Posted by: MsNThrope on March 31, 2007 at 10:16 AM | PERMALINK


Posted by: Rob on March 31, 2007 at 10:18 AM | PERMALINK

The football position that best demonstrates your principle is quarterback, which is also the position that is most like the rest of life. Think about it: the quarterback is the player on the field who is most dependent on the other players, and while all quarterbacks possess extraordinary athletic skills and abilities, most of the physical requirements for the job are far more subtle than at any other position. Quarterbacks, for instance, don't possess the immense size and strength of linemen or the speed of receivers and cornerbacks.

How high a quarterback is drafted in the NFL is largely a product of what kind of high school he went to. If you went to a very good high school surrounded by lots of talent and a good coaching staff, you would have a much greater probability of going to a big-time program like USC or LSU. If your high school program wasn't top-flight, you will probably wind up at a mid-major or less, or, if you're lucky like Tom Brady, who went to a school with a mediocre football team, you wind up being overshadowed by someone like Drew Henson. Other positions are far less dependent on going to a good high school. If I'm 6'5", 300 pounds, am fast, and can lift a truck with one arm, someone will give me a chance, as Michael Lewis demonstrated in his recent book.

You can really see this play out in the NFL where a large number of highly drafted quarterbacks have been busts while a large number of outstanding quarterbacks have been low draft picks or weren't even drafted at all.

Posted by: Guscat on March 31, 2007 at 10:33 AM | PERMALINK

Someone said National Merit Scholars. The comparison could be against National Merit Finalists. The first cut takes just test scores into account; the second cut makes the kind of distinctions colleges make based on grades, etc.

It occurs to me that what skill is being measured matters a lot. Free-throw shooting is a lot easier to measure than how a CEO will interact with a corporate culture and the many variables of the marketplace. The PotFM is probably an artifact of measurement difficulties rather than a case of there being less range of ability at the top than elsewhere.

Posted by: Wagster on March 31, 2007 at 10:43 AM | PERMALINK

I like that DanJoaquinOz mentioned surfer Kelly Slater along with Tiger Woods as an example of a dominant athlete. Both practice very popular sports better than anyone else.

People do these sports because they like them. A local kid got into the University of California at San Diego as an out-of-state student. He applied there partly because it's the best possible school for surfers (with apologies to UC Santa Cruz and UC Santa Barbara). Whitewater kayaking in the US seems to have started out as something that fascinated college science and engineering faculty and students. I doubt that anyone ever got admitted to a college based on kayaking prowess. Then there's riverboarding (check facelevel.com). Dragon boat racing. Outrigger canoe racing. Wave ski racing.

Most of these sports wouldn't gain an applicant any traction for undergrad admission to an exclusive college. In fact, I suspect that UCSD kid carefully hid his surfing interest. On the other hand, elite colleges eagerly recruit athletes for their varsity sports. So why doesn't Harvard have a better football team? And why did Swarthmore get rid of football? (Princeton University Press has published a series of very careful analyses of college admissions, including a book on sports that should be enough to get parents to yank Johnny out of SAT cram school and onto the baseball diamond or into French horn lessons.)

In the sciences, citation analysis (how often a researcher's papers are cited) is a useful, though not perfect, ranking method. It's also an excellent way to rank the journals in which research is published. There are indeed Kelly Slater/Tiger Woods equivalent scientists who are enormously important. The down side is that they become celebrities and may behave that way, which is a pain for agencies that make grants, not to mention university administrators.

Posted by: Dave on March 31, 2007 at 10:59 AM | PERMALINK

I think one area where you do see a flat maximum is in organizations that provide "non-differentiating" goods and services to other organizations (or people). For instance, many organizations use what is essentially a "last men (not man) standing" approach to making procurement decisions - basically it involves ascertaining that the product is "good enough" and that the provider will be a survivor, viable long enough for investment protection. Examples for people's individual choices are things like insurance; beyond basic due diligence there is not much to learn that will make a difference to the sane consumer. [That is why they advertise so much.] What I am really describing here is "satisficing" by the consumer, which is a reflection of their utility function, but my guess is that providers in an area being so assessed will naturally "flatten out" their capabilities (at least for long periods of time - there will be breakthroughs to new levels from time to time).

For individual abilities, I think things like typing speed etc. will have fairly flat maxima (yes, there will be a few Roger Federers of typing, but I bet there is a big flat plateau above some "good enough" speed).

And for the biggest flattest maximum of them all (that is "advertised" as being the exact opposite), people who fit your "compatible spouse" utility function. ... My spouse loves it when I talk that way...

Posted by: JP Stormcrow on March 31, 2007 at 11:17 AM | PERMALINK

As I mentioned in the other thread, with regard to intellectual abilities, it seems that it's much easier to discriminate very high abilities in the very upper range of mathematical/technical abilities than it is in "verbal" areas.

For example, one simply can't ignore the number of Fields medal winners who have done remarkably well in the Mathematical Olympiad at very young ages, or the number of Nobel Prize winners in Physics who have won the Westinghouse (now Intel) awards. And I've heard of no cases in which people with truly distinguished careers in technical areas actually turned in mediocre performances on, say, the math SAT.

On the other hand, I don't even know what might count as a good test at the upper ranges of "verbal" abilities. The SAT is probably as good as anything, and it hardly does a good job. There are simply too many people who have gone on to distinguished careers in "verbal" areas who have had mediocre SAT scores for the SAT to be much of a discriminator.

Posted by: frankly0 on March 31, 2007 at 11:26 AM | PERMALINK

"So why doesn't Harvard have a better football team?"--Dave

Harvard does not give football scholarships, which keeps almost all good football players away. It also requires its athletes to get decent scores on SATs and get decent grades, which also keeps almost all good football players away.

They still get some decent players, but there are obvious reasons why they can't compete with Florida State.

Posted by: reino on March 31, 2007 at 11:29 AM | PERMALINK

"It is not the case that people always get what they deserve."

Potentially depressing question of the day: Would you be better off or worse off if they did?

Posted by: CJColucci on March 31, 2007 at 11:33 AM | PERMALINK

I post this in trepidation (go easy on me, fucking mental midget that I am; we slow learners need to be encouraged to participate in classroom discussions too), but let's suppose you were arrested for murdering your spouse. The crime is a particularly gruesome one, so in the state you happen to live in you're pretty sure to get the death penalty if convicted. You happen to be innocent, but you can't afford a lawyer, so you get your pick of three or four public defenders just out of law school who have never been near a big case like this before. All of them are from law schools with big reputations and all of them finished in the top three in their classes. You don't know them from Adam and neither does anyone else. So, students, what would you want to know about any of these birds if you could interview each of them? What would you look for before you trusted one of them with your life? Temperament? A gleam of the tiger in the eye? Stage presence? Dedication to the cause of justice for the downtrodden masses (that would be you)? Or would you just pick the one that seemed the least like an arrogant young dickhead?

Posted by: JHM on March 31, 2007 at 11:37 AM | PERMALINK

Rather than give an example of minute differences in human ability, I'll point out one that deals more with human nature: comparing high-end audiophile gear. Devotees examine specsheets in detail, arguing about infinitesimal differences in the measurements used to sell the stuff. The term "flat maximum" is appropriate, since any edge these high-end products have over one another isn't apparent in a practical sense. But that edge... the audiophiles will forever seek it.

Posted by: rp on March 31, 2007 at 11:41 AM | PERMALINK

The problem with this is that football is such a highly team-interdependent sport.

There are players who are considered all-time greats essentially because of the strength of their surrounding cast. Joe Montana, I'm looking at you.

I'm not claiming that Montana wasn't good, or even very good, or even great, but every time he went down with an injury, San Francisco's second-string-QB-du-jour stepped up and became awesome.

Did he really become awesome?

Or did he have Jerry Rice and an offensive line that could stop a bullet?

So, anyway, anti-Montana rant aside, you're probably better off looking at baseball, in which stats are way more reflective of individual performance.

Posted by: ergqaqerhetartjy on March 31, 2007 at 11:42 AM | PERMALINK

If my fading memory serves me correctly, you are using some type of logical fallacy in making the assumption that the first two rounds of the NFL draft are 1:1 predictors of future NFL success. It ain't exactly the sine qua non you seek.

There are far too many highly successful lower round picks and free agents to use that criterion as THE hallmark. I'm convinced that scouts, player personnel veeps, GMs, etc fall into the trap of using "collective wisdom" to rank the pre-draft talent at the various positions. I would suggest that it is the same type of collective wisdom used by the beltway pundits. And, we all know how "successful" they are.

To prove the point (as a Packers fan) I have two words for you: Tony Mandarich.

Posted by: calvinthecat on March 31, 2007 at 11:44 AM | PERMALINK

The Laffer curve suggests that if you lowered the pay of professional athletes and CEOs to zero, you would maximize performance.

Posted by: Qwerty on March 31, 2007 at 11:44 AM | PERMALINK

Acceptance into an elite graduate program in the arts, like the University of Iowa Writers Workshop, or the Yale drama program.

Posted by: Tad Richards on March 31, 2007 at 12:10 PM | PERMALINK

Acceptance into an elite graduate program in the arts, like the University of Iowa Writers Workshop, or the Yale drama program.

I've got to believe that in cases like this it's going to be very hard to distinguish what's cause and what's effect when it comes to such acceptances and outstanding success in one's career.

In literature and acting, the audience relies very heavily on credentials when evaluating the quality of work. Really, everybody does it -- only dishonest people will deny it.

Posted by: frankly0 on March 31, 2007 at 12:20 PM | PERMALINK

I think that your question is poorly worded.

My understanding of this principle is that a statistic which correlates well with an outcome over the entire population may not correlate AS well with that outcome over a subset taken at one extreme of the population.

An example might be football ability and weight. The correlation between the two might be large over the entire population and much weaker over the subset of the population that happens to be in the NFL.

The big problem with what you are suggesting is this: the correlation is LESS over the subset, but not necessarily absent. My suspicion is that the scouts have constructed a performance measure sufficiently robust that it is useful in ranking the college players (otherwise why bother). If you went out to the population as a whole and examined the correlation between that same performance measure and NFL outcome, the correlation would be even STRONGER. However, the correlation over the restricted range is SUFFICIENT for the purposes of the scouts.
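This restriction-of-range effect is easy to see in a quick simulation (a sketch with invented numbers; the 0.7 correlation and the top-0.1% cutoff are arbitrary assumptions, not anything from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented numbers: a scouting "score" correlated ~0.7 with pro outcome overall.
score = rng.normal(size=n)
outcome = 0.7 * score + np.sqrt(1 - 0.7**2) * rng.normal(size=n)

full_r = np.corrcoef(score, outcome)[0, 1]

# Restrict to the top 0.1% by score -- the elite, "draftable" sliver.
top = score > np.quantile(score, 0.999)
top_r = np.corrcoef(score[top], outcome[top])[0, 1]

print(f"full-population correlation: {full_r:.2f}")
print(f"top-0.1% correlation:        {top_r:.2f}")
```

The correlation among the top sliver comes out much weaker than the full-population figure, but it doesn't vanish - which is exactly the point: attenuated, not absent.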

Posted by: Adam on March 31, 2007 at 12:29 PM | PERMALINK

I'm not sure I buy even the idea of the flat maximum.
At any level of performance (top 0.01%, 90.01%, 45.01%, whatever) there's not going to be any way to accurately predict who is going to do well or not. Given the narrowness of the band that we're talking about, all of the people in it will be roughly comparably qualified and the actual outcome of who does well or not is probably going to be random chance anyway.
The idea seems superficially interesting because you're starting with the "top" group. But any similar small slice, anywhere in the spectrum of abilities, is going to be similarly difficult to measure in the way you seem to want to.

Posted by: failingeconomist on March 31, 2007 at 12:33 PM | PERMALINK

But doesn't it count for something which of those at the top does best when they compete against each other?

Posted by: Neil B. on March 31, 2007 at 12:36 PM | PERMALINK


Given the narrowness of the band that we're talking about, all of the people in it will be roughly comparably qualified
Miguel Cairo is not, repeat not, as good a baseball player as Alex Rodriguez. Nowhere near. Not even remotely.

They are, however, both among the two thousand or so best baseball players in the world.

Posted by: agosdihghasdohgaoihf on March 31, 2007 at 12:38 PM | PERMALINK


Your NFL analogy...well...it really, really stinks. It's almost mind-blowingly bad and I'm afraid it forces me to call into question your credentials as a red-blooded American male sports fan. I mean really...did you really suggest that success could be random among the top-64 players taken in the NFL draft? Do you even watch football?

Look, I don't need any statistics to back this up. There is simply no question that the round an NFL player is drafted in is an extremely good predictor of how successful they will be as a pro. In other words, the majority of NFL starters are 1st round picks, a significantly smaller number are 2nd round picks, a much smaller number 3rd round picks, etc. Sure there are busts like Ron Dayne. And there is the occasional Tom Brady fluke (even Brady would have gone much higher if the strange saga of Drew Henson had not unfolded at UM). But overall, it's very easy to predict. Just look at their draft number and you'll have a good idea how they'll do.

Sports like football and basketball depend on genetically rare traits of speed, size, strength, and agility. While "skill" and "intensity" and "intelligence" are also important, there is a reason that Randy Moss was the best WR in the game for five years: he was faster and more athletic than his peers. Raw athleticism trumps "intangibles" 9 out of 10 times, and it simply isn't that hard to quantify raw athleticism.

Anyway, it is relatively easy to isolate and test for genetically rare traits like speed, size, strength, and agility. Which is why a 1st round NFL pick is infinitely more likely to be a great player than a 3rd round pick.

Here's a breakdown of the 2005 Pro Bowl: "Of the 84 players selected: 40 (48%) were first round picks; 14 (17%) were drafted in round two; 10 (12%) were selected in the third round; four (5%) were picked in round four; two (2%) were taken in round five; the sixth round provided six (7%) players; round seven supplied a big fat zero; and eight (9%) pro bowlers were undrafted. The top three rounds accounted for over three-fourths of the players selected (64 for 76%). Only 12 (14%) of the pro bowlers were drafted after round three."
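For what it's worth, those quoted counts check out, and the skew against a flat baseline is stark (a quick tally using only the numbers above):

```python
# Pro Bowl 2005 selections by draft round, exactly as quoted above.
by_round = {1: 40, 2: 14, 3: 10, 4: 4, 5: 2, 6: 6, 7: 0, "undrafted": 8}
total = sum(by_round.values())

for rnd, count in by_round.items():
    print(f"round {rnd}: {count:2d} ({count / total:.0%})")

# If draft round carried no signal, the 76 drafted pro bowlers would be
# spread roughly evenly over seven rounds: about 11 per round.
drafted = total - by_round["undrafted"]
print(f"total: {total}, drafted: {drafted}, flat baseline per round: {drafted / 7:.1f}")
```

First-rounders come in at nearly four times the flat-baseline expectation; rounds four, five, and seven come in far under it.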


Not random at all, I'm afraid. Pick a new analogy.

Posted by: owenz on March 31, 2007 at 12:47 PM | PERMALINK

"One thing that comes to mind is the possibility that first round draft picks are given more opportunity to prove themselves. If you're drafted #3 and have a multi-million dollar contract, your team will probably keep playing you even if you have a mediocre season or two. If you're drafted #58, you'll probably get cut."

This is also not true. Again, you need to...um...be a football fan to know this stuff. A healthy 3rd round draft pick is expected to contribute to a modern NFL team. They will not "probably get cut." Rather, the average 3rd-rounder will get every opportunity to win a job. Teams covet such overachievers, since they are paid so much less than 1st rounders. Given football's hard salary cap and the 25% roster turnover teams experience each year, getting solid contributions from lower-round players is an essential part of a team's success.

Fewer 3rd rounders win starting jobs because they are so much less talented than their 1st round peers. When it happens, however, the team will rejoice, since they are paying the 3rd-rounder less on his rookie contract than the 1st rounder.

Posted by: owenz on March 31, 2007 at 12:58 PM | PERMALINK

There is a ton of evidence against this "Principle." Archimedes, Newton, Einstein, Shakespeare, The Beatles, Lance Armstrong, Bill Russell, Michael Jordan. If the principle were true, every star would be a one hit wonder - it just doesn't work that way.

Posted by: CapitalistImperialistPig on March 31, 2007 at 1:00 PM | PERMALINK


There is a ton of evidence against this "Principle." Archimedes, Newton, Einstein, Shakespeare, The Beatles, Lance Armstrong, Bill Russell, Michael Jordan. If the principle were true, every star would be a one hit wonder - it just doesn't work that way.
This has nothing to do with what is being discussed.

What is being discussed is whether future greatness can be predicted accurately from youthful potential, not whether greatness can be achieved.

Posted by: hryeqhgih on March 31, 2007 at 1:07 PM | PERMALINK

High school valedictorians and class standing at law school graduation.

Posted by: tkacz on March 31, 2007 at 1:09 PM | PERMALINK

The problem with college admissions is that there are a lot more qualified people than spaces at elite colleges, and as a result, college admissions is mostly a crapshoot of picking out athletes, legacy kids, kids from the right geographic region with a particular interest, etc. Elite colleges are attempting to identify the top few thousand applicants with little to go on other than SAT scores, which are mostly meaningless, and grades, which vary by school and teacher.

Football teams are trying to pick the top athlete they can find. Even the 58th pick is 58th out of at least tens of thousands of appropriately aged men who would like to be professional athletes.

I think picking out the top handful of people in anything is markedly easier than trying to figure out how to draw a line between very good and excellent.

There are also some performance issues, as a number of commenters have pointed out. Performance in school is fairly narrow. Steinbeck may be one of the greatest writers of all time, but in a classroom with a bunch of very good writers, his outcome (an A) would look good but not exceptional. The goal isn't to be the best, but to be good enough. (And, as a practical matter, grades and test scores can't distinguish Steinbeck from, say, Kevin Drum, which is one of the problems with college admissions.)

Where performance is more open-ended, it's much easier and more practical to find the best performers.

Posted by: brad on March 31, 2007 at 1:47 PM | PERMALINK

Here's a test of the "Principle" of Flat Maxima. Back in the early 1990s, there was this kid who became the first person ever to win the US Golf Association junior championship more than once, and in fact he did it three times. So, how did this "prodigy" do in the real world of professional golf when he ran into the Principle of Flat Maxima?

Pretty good, actually. His name was Tiger Woods.

Shouldn't you rename it the Delusion of Flat Maxima?

Posted by: Steve Sailer on March 31, 2007 at 1:59 PM | PERMALINK

I agree with Red State Mike and Wagster. You get a flat maximum because the measure you are using does not discriminate between people of very high performance, i.e., there is a ceiling effect. In the case of university admissions, the measure is a combination of SAT scores, grades, extracurricular activities, recommendation letters, etc., which are thought to be indicators of future academic success. All of these have a ceiling effect, and none of them can truly predict future performance.
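A quick simulation shows how a ceiling destroys discrimination at the top (my own sketch with invented numbers - the noise levels and the 1.5 cutoff are arbitrary, not from any admissions data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

ability = rng.normal(size=n)
# Invented admissions test: a noisy read on ability, truncated at a ceiling of 1.5.
test = np.minimum(ability + 0.5 * rng.normal(size=n), 1.5)
# Future performance depends on ability plus everything the test can't see.
later = ability + 0.8 * rng.normal(size=n)

at_ceiling = test == 1.5
print(f"share of applicants at the ceiling: {at_ceiling.mean():.1%}")
# Everyone at the ceiling has the identical score, so the test cannot rank them...
print(f"test-score spread at the ceiling:   {test[at_ceiling].std():.2f}")
# ...yet their later performance still varies widely.
print(f"later spread at ceiling vs overall: {later[at_ceiling].std():.2f} vs {later.std():.2f}")
```

The pile-up at the maximum score is exactly the flat maximum: among that group the measure carries zero information, even though their future outcomes still spread out substantially.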

Regarding chess scores and tennis rankings, these are measures of actual performance. So by themselves, they don't constitute the kind of situation Kevin is looking for, where some measure is used as a predictor of future performance. But I think a similar measure (ie. ranking based on win/lose matches between subjects) better discriminates among subjects of high ability. (Of course this would be impossible in the case of university admissions.)

But then there is also the question of the cutoff point. This problem arises because there are many more "qualified" subjects than there are spots for them. Even if we can better discriminate between those of high ability, we can't really say that the "good enough" subjects ranking below the cutoff will have poorer future performance than those above. Even if our measure correlates perfectly with current ability, future performance just depends on too many other things.

So in summary: The objective is to select the top n subjects that will have the best future performance. No measure we use can exactly predict future performance, but some measures are thought to be correlated. But because of the disparity between what we want to measure (future performance) and what we actually measured, the top n scorers on the measure we used do not correspond to the n best future performers. In addition, because the measure we used has a ceiling effect, picking out the top n scorers is not reliable either.

Umm, end of ramble.

Posted by: rambling stats student on March 31, 2007 at 2:00 PM | PERMALINK

There is a counterpart to the FMP called the 20% rule. According to legend, studies showed that among fighter pilots, 20% of the pilots got 80% of the kills. This (quite possibly apocryphal) theory is widely cited in business and management.

I think we have here a manifestation of what Bohr called "great truths." Unlike ordinary statements, which may be true or, if false, have a true opposite, great truths have the property that their opposites are also great truths.

Or both these ideas could be nuts.

Posted by: CapitalistImperialistPig on March 31, 2007 at 3:22 PM | PERMALINK

This has nothing to do with what is being discussed.
What is being discussed is whether future greatness can be predicted accurately from youthful potential, not whether greatness can be achieved.

Posted by: hryeqhgih on

OK, I agree that my point is not spot on - but it's not irrelevant either. Tests like the SAT have artificially flat maxima - they could be made harder, like the tests used by the Indian Institute of Technology. They aren't, because elite colleges are trying to satisfy a more complex criterion - they don't want too many Asians, Jews, males, whites or people whose parents aren't rich and famous. The whole admissions mumbo jumbo was invented to keep Harvard from becoming "too Jewish" and has been repeatedly tweaked to satisfy other criteria since. The whole subject has been extensively discussed in the literature, and if you aren't familiar with at least some of that, you can't appreciate how foolish and beside the point the whole flat maximum "principle" is in this case.

Why do you suppose they dumbed down the SAT a few years back? It was exactly because the elite schools wanted a "flat maximum" so they could admit favored groups and exclude disfavored groups without the evidence being quite so obvious.

Posted by: CapitalistImperialistPig on March 31, 2007 at 3:39 PM | PERMALINK

Tests like the SAT have artificially flat maxima - they could be made harder..

You've made a very good point there. Why not have two tiers of testing? The basic SAT, then a second one for the top 10% of test takers? You should see enough variability in the 2nd test to make easier decisions. Of course.. test *design* becomes kind of tough here.

Posted by: Doc at the Radar Station on March 31, 2007 at 5:26 PM | PERMALINK

Doc at the RS -

My point was that the elite schools don't really want harder tests. When they first started using the SAT, Harvard found that it was "in danger" of having a school consisting mainly of Jewish kids from NY - and decided alumni wouldn't like that. Nowadays, you can add Chinese, Indian, and other Asians, but either way, it wouldn't fit Harvard's desired image.

The elite schools want a mysterious, hidden admissions process for reasons somewhat similar to the reason Kyle Sampson kept his USA file in his shred box. That way they can take a good selection of the superbright and still have room for all the others they want in their school.

Posted by: CapitalistImperialistPig on March 31, 2007 at 5:40 PM | PERMALINK

owenz is spot on, to the extent that I wonder as he does whether we are really dealing with a bunch of sophisticated NFL fans here. Sorry, folks, and Kevin, but owenz is correct that there simply can't be any question that 1st-rounders outperform 2nd-rounders. The notion that they're indistinguishable is ridiculous.

It's quite true that first-rounders are given a better chance than other draft picks to prove themselves, but that only accounts for a small portion of the difference. Most of the difference is the obvious talent difference.

From reading Kevin's original post, I had an idea of an improved metric to prove this notion and skimmed the thread to see if someone had beaten me to it. owenz kind of did, although not entirely, by posting the draft rounds of a recent Pro Bowl roster. All I would do is gather more of the exact same data. Look at the Pro Bowl rosters of, say, the last 30 years, and see how many of those guys were drafted in each round. If you wanted a control group, compare those numbers to players who did not play in the Pro Bowl.

But still, I have to agree with owenz that it's really not even necessary to test this. The answer is very obvious.

Posted by: Trickster on March 31, 2007 at 5:54 PM | PERMALINK

"The elite schools want a mysterious, hidden admissions process.."
Posted by: CapitalistImperialistPig on March 31, 2007 at 5:40 PM

I would generally agree with that-it's really a control issue. But, what if all the top *public* universities ran a 2nd-tiered standardized set of tests to get the variability in the top 10% (i.e.) and they got the "true pick". Then we should see greater performance from the top public schools versus the top private ones at some point. Maybe that would break the elite private monopoly and the credentialist mess?

Posted by: Doc at the Radar Station on March 31, 2007 at 6:10 PM | PERMALINK

Owenz/Trickster: you're misunderstanding my point. (Not your fault, since I didn't explain it.) My expectation is that first round picks would indeed do better than second round picks, and that this would demonstrate that the PotFM is wrong, at least in the extreme case where lots of information about top prospects is known. However, we still need a demonstration of this, not merely an assertion that "everyone knows" this to be true. Lots of stuff that everyone knows turns out to be wrong under critical inspection.

Tigertears' reference to a Bill James study of draft picks in baseball would be a good substitute for my NFL question. I'll see if I can find it.

Everyone else: Please keep in mind that the question here isn't whether some people are better than others. Of course they are. The question is whether you can predict who's going to be better based on tests, previous performance, etc.

Posted by: Kevin Drum on March 31, 2007 at 6:28 PM | PERMALINK

Thinking of flat maxima and college admissions, the University of Florida offered lush scholarship packages to Florida residents to keep them out of the Ivies. The scheme worked well enough that it's now been dropped. The quota for Ivy-ish students is being met without excessive bribery.

When I took the GRE (graduate school) exams 30 years ago, the upper reaches of the scoring system seemed to get pretty Everest-ish. It would figure that grad programs would be more interested in this sort of talent than colleges seeking to produce the next generation of investment bankers (who perhaps need to be on the rowing team).

To my chagrin as a biology student, my GRE verbals were better than the maths. Probably something to do with intro biology courses requiring students to learn more new words than equivalent foreign-language courses.

One of the better signs of prospective success in the sciences is the level of interest. Kids who really like what they're doing, do better than those who merely want to get into med school so they can afford their golf habit. Not to mention that good experimenters invariably like playing with things, and perhaps being "gadget guys" (an ecological plant physiologist I know is one of them).

In athletics, that probably doesn't work when genetics is a big issue, but the kid who makes a point of getting to the beach and trying to do a bit of surfing every day is likely to be much better than the one who only bothers on "good" days. That was certainly the case with Kelly Slater. There used to be a theory that surfing required a short-legged, long-torso body type (like Tom Curren), but that doesn't seem to have held up.

Posted by: Dave on March 31, 2007 at 6:59 PM | PERMALINK

Look at those who are given faculty positions at major universities in, say, the physics or chemistry departments, and compare them to those who get tenure five years later. Random, or do some grad schools generate successful professors while others don't?

Posted by: Dave on March 31, 2007 at 8:05 PM | PERMALINK

Kevin Drum, the Massey and Thaler paper "The Loser's Curse: Overconfidence vs. Market Efficiency in the National Football League Draft," already cited above, shows that earlier-round draft picks have better NFL careers. See figure 6. For example, relative to first round picks, second round picks will appear in about 45% as many Pro Bowls, start in about 70% as many games, and appear in about 90% as many games. The paper also argues that teams overpay for high picks, but that is a different issue.

Posted by: James B. Shearer on March 31, 2007 at 8:08 PM | PERMALINK

Joe Montana, arguably the greatest QB ever, was a third round draft pick.

Posted by: JohnK on March 31, 2007 at 8:33 PM | PERMALINK

Mutual fund managers? At least the performance metric is clear.

Posted by: pjcamp on March 31, 2007 at 11:38 PM | PERMALINK

The players drafted in the first two rounds are the 64 best college football players in the country

This is incorrect.

The top 64 players of the draft are by no stretch the best 64 college players. In many cases the best college players don't get drafted that high, and in some cases not at all. Consider a few Heisman Trophy winners - theoretically the best college player, but realistically the best skill position player - White from Oklahoma a few years ago, Eric Crouch in '01, Charlie Ward in '93 - none of them drafted (or Crouch may have been drafted in the 7th; sorry, Husker fans).

Posted by: ssdagger on March 31, 2007 at 11:44 PM | PERMALINK

"The question is whether you can predict who's going to be better based on tests, previous performance, etc."
Posted by: Kevin Drum on March 31, 2007 at 6:28 PM

I think there has been some confusion between the soundness of the evaluative techniques of the person who makes the *decision* to admit (select) a top student at a top university (i.e.) versus methods used to *predict* future performance of top students who have already been picked (selected) by a top university. Suppose that admissions officers at top schools believe the techniques they use to discriminate potential admits are predictive of superior future performance. How can you prove them right or wrong-if the people they reject have reduced opportunities because their credentials have been reduced by that selection?

Ok, this all started with the following LA Times link - maybe a stats person can help clarify this some more:


"The tragedy of all this selectivity and competition is that it is almost completely pointless. Students trying to get into the best college, and colleges trying to admit the best students, are both on a fool's errand. They are assuming a level of precision of assessment that is unattainable. Social scientists Detlof von Winterfeldt and Ward Edwards made this case 30 years ago when they articulated what they called the "principle of the flat maximum." What the principle argues is that when comparing the qualifications of people who are bunched up at the very top of the curve, the amount of inherent uncertainty in evaluating their credentials is larger than the measurable differences among candidates. Applied to college admissions, this principle implies that it is impossible to know which excellent student (or school) will be better than which other excellent student (or school). Uncertainty of evaluation makes the hair-splitting to distinguish among excellent students a waste of time; the degree of precision required exceeds the inherent reliability of the data. It also makes the U.S. News & World Report annual rankings of colleges silly for assuming a precision of measurement that is unattainable."

I googled about and found this about Detlof von Winterfeldt and Ward Edwards:


"Accession Number : ADA138506

Title : Equal Weights, Flat Maxima, and Trivial Decisions.

Descriptive Note : Research rept.,


Personal Author(s) : John,R. S. ; Edwards,W. ; Von Winterfeldt,D. ; Barron,F. H.

Report Date : JUN 1980

Pagination or Media Count : 28

Abstract : Most predictions are intended as a basis for decision making. The point of this paper is that prediction and decision require different methods. Equal weights, while often useful for prediction, are less useful for decision making. The action options available in any decision problem fall into three classes: sure winners, sure losers, and contenders. Sure winners and sure losers are defined by dominance, accepting sure winners and rejecting sure losers is trivial. Good decision rules should discriminate well among contenders. In the familiar pick-1 decision problem, options on the Pareto frontier (i.e. undominated options) almost always show negative correlations among attributes. Such negative correlations make equal weights inappropriate. This paper extends that result to the case in which a decision maker must pick k options out of n. In this case, the set of sure winners is usually not empty. It develops general procedures for identifying the set of contenders, given the options, k, and n. This set is a generalized Pareto frontier, of which the traditional kind is a special case. Simulations show that attribute intercorrelations among contenders are substantially depressed and typically negative, even if the intercorrelations in the whole set are positive. Such negative correlations among contenders strongly question the usefulness of equal weights for decision making.

Descriptors : *Decision theory, *Mathematical prediction, Decision making, Cueing, Social sciences, Research management, Game theory, Monte Carlo method

Subject Categories : PSYCHOLOGY

Distribution Statement : APPROVED FOR PUBLIC RELEASE"
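The abstract's last claim - that attribute correlations among "contenders" turn negative even when they are positive in the whole set - is easy to reproduce (a small simulation with invented attributes, nothing from the report itself):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2_000

# Two positively correlated attributes (invented: say, grades and test scores).
x, y = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n).T

# An option is dominated if some other option beats it on both attributes;
# the undominated options are the "contenders" (the Pareto frontier).
undominated = np.array([not np.any((x > x[i]) & (y > y[i])) for i in range(n)])

overall_r = np.corrcoef(x, y)[0, 1]
frontier_r = np.corrcoef(x[undominated], y[undominated])[0, 1]

print(f"correlation, all options:     {overall_r:.2f}")
print(f"correlation, contenders only: {frontier_r:.2f}")
```

The frontier is monotone by construction - among undominated options, more of one attribute must mean less of the other - so the contenders' correlation comes out negative even though the full population's is positive.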

Posted by: Doc at the Radar Station on April 1, 2007 at 2:12 AM | PERMALINK

Then there's the Indiana Jones effect. Possibly it's urban legend, but supposedly undergrad applications to the University of Chicago doubled after the first movie came out. Professor Jones was a faculty member.

I wonder whether there could be an institutional "Jones effect" too. Could certain sorts of students be fashionable in a particular year? Or pariahs? I suspect that military brats weren't welcome at elite colleges circa 1968. Was everyone competing for hard-right religious conservative home schooled kids about 2002?

It's kind of interesting that a substantial number of colleges have provided data for the College and Beyond database, which has been mined for books such as "Reclaiming the Game: College Sports and Educational Values."

Posted by: Dave on April 1, 2007 at 2:55 AM | PERMALINK

There was a time in my life when I was greatly concerned with rations for dairy cows. Scientists were evangelizing about the benefits of a variety of micro-nutrients, measured in milligrams/animal/day. One day a farmer acknowledged that the science was all very impressive except for one thing: it's hard to worry about milligrams when your only tool is a front-end loader. Very smart farmer. Except that the benefits of micro-nutrients were real, so the best dairy farmers tucked the knowledge away until technological developments made it practical to worry about micro-nutrients. (The dairy industry was a trailblazer in taking advantage of computer-controlled robotics and other very impressive technology.) The moral of the story is that it is important to recognize the limits of your measuring tools, but also to recognize the value in improving those tools.

Posted by: tigertears on April 1, 2007 at 9:34 AM | PERMALINK


