
February 11, 2008
By: Kevin Drum

POLL REPORT CARD.... This chart comes from SurveyUSA, which obviously has a dog in this fight, but it's still an interesting look at how the various pollsters have performed this year up through Super Tuesday. More details are here, including a race-by-race breakdown.

Now, some of this is plainly unfair. The LA Times, for example, polled the two early races of Iowa and New Hampshire, which produced significant errors from nearly everybody, and since those were two-thirds of their polling it makes them look pretty bad. Overall, in the three races they polled (NH, IA, CA), they were in the middle of the pack. SurveyUSA, by contrast, didn't poll the early races at all, which almost certainly boosts their average.

For my money, it looks like Research 2000 might actually be the best of the bunch. They've polled 11 races, including the tough early ones, and have only been off by double digits once. Not bad.
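For the curious, here's a rough sketch (in Python, with invented numbers) of how a scorecard like SurveyUSA's gets tallied: take the absolute difference between each pollster's final predicted margin and the actual margin in every race they polled, then average. The pollster names below are real, but every error figure is a placeholder; the actual race-by-race numbers are in the linked breakdown.

```python
from statistics import mean

# error = |predicted winning margin - actual winning margin|, in points.
# All figures below are placeholders for illustration only.
errors_by_pollster = {
    "Research 2000": [3, 5, 2, 7, 4, 12, 3, 6, 2, 5, 4],  # hypothetical
    "SurveyUSA":     [2, 3, 1, 4, 11, 2, 3],               # hypothetical
    "LA Times":      [14, 9, 5],                           # hypothetical
}

# Rank pollsters by average absolute error, smallest first.
for pollster, errors in sorted(errors_by_pollster.items(),
                               key=lambda kv: mean(kv[1])):
    big_misses = sum(1 for e in errors if e >= 10)
    print(f"{pollster:15s} races={len(errors):2d} "
          f"avg error={mean(errors):4.1f} double-digit misses={big_misses}")
```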

Via Taegan Goddard.

Kevin Drum 12:54 PM Permalink | Trackbacks | Comments (12)

Comments

Boy, do Zogby and Rasmussen look like they really suck or what? Given the total number of polls they've done (17 and 31, respectively), you'd think this would mitigate their numbers. Mo - Rons.

Posted by: fugitive on February 11, 2008 at 1:17 PM | PERMALINK

For a critical evaluation of this pollster scorecard, see this post on pollster.com.

Posted by: nimh on February 11, 2008 at 1:18 PM | PERMALINK

Fine - but if we are going to pretend to do an actual statistical analysis, could we also get some measures of error around the mean?

I suspect it would also be very interesting to see this on a candidate-by-candidate and party-by-party basis.

Posted by: OKLiberal on February 11, 2008 at 1:43 PM | PERMALINK

The average is not a good statistic here; it should be the median. The median will reduce the impact of outliers (NH in particular) and provide a better indication of accuracy.

Posted by: Dilbert on February 11, 2008 at 1:52 PM | PERMALINK
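
To see Dilbert's point concretely, here's a quick sketch with made-up error numbers: one New Hampshire-sized miss moves the mean a lot more than it moves the median.

```python
from statistics import mean, median

errors_without_nh = [2, 3, 4, 5, 3]           # hypothetical per-race errors
errors_with_nh    = errors_without_nh + [18]  # add one NH-sized blowout

print(mean(errors_without_nh), median(errors_without_nh))  # 3.4  3
print(mean(errors_with_nh),    median(errors_with_nh))     # ~5.8  3.5
```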

Random thoughts:

1) Dilbert's right that SUSA's report card should use medians, not means.

2) I agree that more apples-to-apples comparisons would be good. Pairwise comparisons between pollsters on the states they both surveyed, for instance, would be useful.

3) SUSA's having skipped most of the early primaries may have helped them to some extent, but for many of the Super Tuesday primaries, the polling was all over the map - literally, but more important, figuratively. And when that happened, SUSA was almost always the pollster that nailed it, from Massachusetts to California.

4) Research 2000 was quite good too. But if you were trying to track pollsters' records via RealClearPolitics, which I was, it was impossible to figure that out: RCP would list Research 2000's polls by the name of the sponsoring paper. Only after I saw SUSA's pre-Super Tuesday report card and noticed which states I didn't have a Research 2000 result from, was I able to locate them.

Some other day, when I've got time, I'll see which states both SUSA and R2K surveyed, and see who did better (a rough sketch of that kind of head-to-head comparison is below).

5) Zogby and Rasmussen were both doing passably well this year. Until Super Tuesday, that is.

6) ARG sucks. But we already knew that.

7) Mason-Dixon sucks. I started noticing that before Super Tuesday, and that surprised me: I'd thought they'd had a pretty good reputation, and before this year, I'd never had a reason to notice whether they lived up to it. So have they always been this bad, or was this just a bad season for them?

Posted by: low-tech cyclist on February 11, 2008 at 2:28 PM | PERMALINK
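
Here's a rough sketch of the head-to-head comparison described above: score two pollsters only on the states they both polled. The state names and error figures are placeholders, not the real SUSA or Research 2000 results.

```python
from statistics import mean

# Per-state absolute errors, in points -- all hypothetical.
susa_errors = {"CA": 2, "MA": 3, "MO": 11, "NY": 2, "GA": 4}
r2k_errors  = {"CA": 5, "MO": 6, "NY": 3, "SC": 2, "NH": 12}

# Compare only on the states both pollsters surveyed.
shared = susa_errors.keys() & r2k_errors.keys()
print("states in common:", sorted(shared))
print("SUSA avg on shared states:", mean(susa_errors[s] for s in shared))
print("R2K  avg on shared states:", mean(r2k_errors[s] for s in shared))
```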

Were there just not enough polls from Selzer & Co. to include them? They did great in Iowa and I recall they did well in a couple of other contests where they polled. Definitely one of the better looking outfits to come out of this season, along with Survey USA (which has always, despite the grief people give them about robocalling, been great; I remember how consistently they were nailing things even in 2002 and before) and Research 2000. American Research Group has always been awful, and now they can't even consistently poll their own state.

Posted by: jbryan on February 11, 2008 at 3:09 PM | PERMALINK

How come Pew isn't listed? I'll bet they have the best track record...

Posted by: Mike in SLO on February 11, 2008 at 3:54 PM | PERMALINK

Using root-mean-square error (which punishes large errors more severely, and which is a common measure of measurement error), it looks like Quinnipiac does the best.

Posted by: Greg in FL on February 11, 2008 at 5:11 PM | PERMALINK
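
A quick sketch of that measure, with made-up numbers: two pollsters with the same average absolute error can come out very differently under RMSE, because the occasional blowout gets squared.

```python
from math import sqrt
from statistics import mean

steady  = [4, 4, 4, 4, 4]    # always off by 4 points (hypothetical)
erratic = [1, 1, 1, 1, 16]   # usually close, one blowout (hypothetical)

for name, errs in [("steady", steady), ("erratic", erratic)]:
    mae  = mean(abs(e) for e in errs)            # plain average error
    rmse = sqrt(mean(e * e for e in errs))       # penalizes the big miss
    print(f"{name:7s} mean abs error={mae:.1f}  RMSE={rmse:.1f}")
```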

How does ARG even poll? Seriously, I could beat some of their guesses by throwing darts. Zogby was actually right on in a lot of races, but he missed Cali and NH by such large margins (23 and 15 points, respectively) that it throws off his results. SUSA only had one really bad miss (Mizzou, by 11).

Posted by: Socraticsilence on February 11, 2008 at 6:55 PM | PERMALINK

Variance please? Consistency matters.

Posted by: sherifffruitfly on February 11, 2008 at 8:45 PM | PERMALINK
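
Along the same lines, a small sketch with invented numbers: standard deviation (the square root of variance) separates a pollster that is consistently off by the same amount from one with the same average error but wildly varying misses.

```python
from statistics import mean, pstdev

pollster_a = [5, 5, 5, 5]    # consistently off by 5 (hypothetical)
pollster_b = [1, 9, 2, 8]    # same mean error, but all over the place

for name, errs in [("A", pollster_a), ("B", pollster_b)]:
    print(f"pollster {name}: mean error={mean(errs):.1f}  stdev={pstdev(errs):.1f}")
```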
