Jonathan Bernstein gives (and Ed Kilgore endorses) plenty of reasons to think that it will be hard for any Democrat to challenge Hillary Clinton for the nomination. Both focus on the actions of the party elites—broadly understood—who decide the nomination. Ed notes that the analogy of 2008 is inapt, since at this stage back then Edwards had made tremendous organizational progress and Obama commanded universal fame and a unique ability to build up quickly. A couple of days ago Nate Silver noted that Clinton is very popular among Democrats of all ideological tendencies, and has racked up an unprecedented number of endorsements from Democratic members of Congress, again from across the party spectrum. But Jonathan has the most exhaustive list of what seriously running for president takes:
During the invisible primary, potential candidates introduce themselves to party actors and demonstrate their fealty to the party’s policy positions, their capacity for running a national campaign and the skills and abilities that promise to make them reliable presidents. They also begin to demonstrate that they can attract enthusiastic support from party voters (before the actual primaries and caucuses), and that they would make solid general election candidates. But not all candidates begin at the same starting line. Hillary Clinton had already achieved pretty much everything on the 2016 nomination checklist by November 2012. By contrast, Massachusetts Governor Deval Patrick, and Maryland Governor Martin O’Malley, have a lot more to do. The more a candidate must achieve, the more time it will take to do it.
This all sounds completely right. Any Democratic candidate jumping in at this point will have to have already demonstrated party loyalty, actual or likely executive skills, and the ability to win a majority of votes in both a party primary and a general election. Moreover, it would help if that candidate had a record of early and loud opposition to doing “stupid [stuff]” in the Middle East—the same issue that sank Hillary in 2008, and that deserves to sink her now—and a history of running, long before Elizabeth Warren, as a candidate of “the people” against “powerful forces.” It would help if the candidate had vast personal wealth, maybe not enough to self-finance a whole campaign, but enough to buy a campaign infrastructure and the advertising to compete immediately in early primary states, as well as strong and deep connections to Silicon Valley, the only serious rival to Wall Street (Clinton’s base) as a source of campaign cash. It would help, morally if not politically, if the candidate were universally regarded as caring fervently and persistently—as Clinton palpably does not—about the biggest issue of our time, global warming. Finally, it would be great if the candidate had a demonstrated willingness to tick off both Clintons, and were old and accomplished enough not to care about the future consequences of doing so if the challenge failed—though let’s say not too much older than Hillary Clinton, or a tiny bit younger.
No, I don’t have any evidence that Al Gore has any interest in coming out of political retirement; I see as of a few minutes ago that he has more interest in suing Al Jazeera (though that would hardly hurt a campaign). But if he did, and if he ran as the anti-war and populist—yet impeccably mainstream—candidate that Hillary clearly is not and has no desire to be, things would suddenly get interesting. And if he doesn’t, they won’t.
Ida Lupino was a central figure in the breaking of the all-male lock on the Hollywood director’s chair. While she was looking for a new project to make with her then-husband Collier Young, she met one of the men who had been kidnapped and forced to drive through Mexico by spree killer Billy Cook. That inspired her (and co-screenwriter/producer Young) to make this week’s film recommendation, the first film noir directed by a woman: 1953’s The Hitch-Hiker.
The plot is straightforward and crisply told. Wonderfully, there is none of the extended, needless expository “set-up” of the characters and story of which too many filmmakers are enamored. Rather, the movie opens with a solitary figure walking slowly along a highway, looking for a ride. His face is off-camera. A car stops to pick him up, and moments later we see the same car on a dark side road, with dead bodies next to it. The solitary figure, face still obscured, harvests wallets and jewelry from the corpses. And then we see two pals on a fishing trip pick up a hitchhiker, who draws a gun and tells them to drive to Mexico. Somewhere along the way, he announces blandly that he is going to kill them too. From there, the movie is a three-handed nail-biter, with William Talman as the hitchhiker and Frank Lovejoy and Edmund O’Brien as the luckless captives. Lupino keeps the brutal tale moving quickly and tells it in an unromantic, unadorned style reminiscent of one of her mentors, Raoul Walsh.
Like most people, I only knew William Talman as the Prosecuting Attorney who got his head handed to him every week by Perry Mason. But there was more to the man than the role of Hamilton Berger let him show. As the gun-toting, sadistic Emmett Myers, he’s truly chilling. Yet like most bullies, he conveys an undercurrent of weakness and fear. It’s a pity Talman’s addiction to tobacco took him away from us at such an early age, leaving The Hitch-Hiker as the only big screen work for which he is even occasionally remembered.
O’Brien is credible as the more macho of the kidnappers, who chafes at Talman’s psychological terrorism and keeps looking for a way to confront him. But the more complex performance is by Frank Lovejoy, whom Lupino seems to have coached to play his part more like O’Brien’s wife than friend. He cooks, he tends injuries, he loves children, he counsels patience and he better endures Talman’s taunts that the captives are soft and unmanly. Yet when the need arises, Lovejoy is heroic. I wonder if Lupino saw herself this way. In any case, I doubt that a male director/scriptwriter would have crafted Lovejoy’s part in this complex and compelling fashion.
The film is also a master class in noir cinematography, with Nicholas Musuraca behind the camera. The eerie shots of Talman’s menacing face floating in the dark in the back seat with the two terrified captives harshly lighted and staring at the camera are unforgettable. But Musuraca also puts paid to the idea that film noir camerawork has to be all about shadow. Noir is a mood and not just a lighting style. The lonely, glaring shots of the car rolling through the bleak desert utterly isolated under the burning Mexican sun are just as much iconic noir as are all the dark scenes. Musuraca is revered in film noir uber-buff circles, but not widely respected beyond that, perhaps because his oeuvre was so enormous that he inevitably worked on some zero-budget tripe. But with this film, the trend-setting noir Stranger on the Third Floor and his movies with Jacques Tourneur (also once unappreciated), he has the basis to accrue a stronger reputation over time.
The Hitch-Hiker is a minor classic of the noir genre and a feather in the cap for Lupino, Young and everyone else involved. After this gripping movie, you may find yourself hesitant to ever again slow down and pick up that guy with his thumb out on the side of the road.
In his latest anti-cannabis-legalization screed (behind the Wall Street Journal paywall), written with a former federal prosecutor named Robert White, William Bennett writes:
Mark A.R. Kleiman, a professor of public policy at the University of California, Los Angeles, has estimated that legalization can be expected to increase marijuana consumption by four to six times. Today’s 2.7 million marijuana dependents (addicts) would thus expand to as many as 16.2 million with nationwide legalization.
Now, if Bennett wants to make silly predictions, and if Rupert Murdoch wants to publish them, all I can say is, “It’s a free country.” But I think I’m entitled to protest when he attributes that silliness to me. It’s hard to count how many ways that short paragraph is wrong, but the central points are simple:
1. An estimate of the possible change in quantity consumed is not an estimate of the change in the number of dependent users. Consumption can also grow because the amount consumed per dependent user increases.
2. Even most dependent users are not, by any reasonable definition, “addicts.”
3. The large estimated impact on consumption depends on the factor-of-ten price decrease (to about $1-2/gm. for moderately potent product) that would result if cannabis were treated like an ordinary commodity. If taxation or production limits prevent such a drastic decrease, the effect of legalization on consumption would be much smaller.
According to a phone conversation I had this morning with Robert White, the source of the “estimate” is p. 25 of Drugs and Drug Policy. Here’s the actual relevant text from that page:
We might expect something like four to six times as much cannabis to be consumed after legalization as is consumed now.
That’s the culmination of a somewhat complex argument (starting on p. 22) against the claim that drug laws have no impact on consumption. If cannabis were produced and sold on the same legal basis as tea, the price could be about one-tenth of the current illicit-market price. For reasonable values of the price-elasticity of demand, such a factor-of-ten decrease in price might lead to something like a threefold increase in quantity consumed. Obviously that’s speculation, because we have no experience with prices that low, or with the consequences of the shift from smoked herbal product to vaporized concentrates, or of the consumption of cannabis in edible and potable forms.
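For readers who want to see how a price drop and an elasticity combine, here is a minimal back-of-the-envelope sketch. The constant-elasticity demand curve and the elasticity value of -0.5 are illustrative assumptions on my part, not figures from Drugs and Drug Policy:

```python
# Back-of-the-envelope check of the price-elasticity step.
# Assumes a constant-elasticity demand curve: Q2/Q1 = (P2/P1) ** elasticity.
# The -0.5 elasticity is an assumed, illustrative value.

price_ratio = 0.1    # legal price at roughly one-tenth of the illicit price
elasticity = -0.5    # assumed price-elasticity of demand

quantity_ratio = price_ratio ** elasticity
print(f"Quantity multiplier from the price drop alone: {quantity_ratio:.1f}x")
# roughly 3x, matching the "threefold increase" described above
```

Any elasticity near -0.5 yields roughly the threefold figure; the point is only that a large price change plus a modest elasticity is enough to drive a large consumption change.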
Other factors would change with legalization in addition to price, all in the direction of encouraging more use: increased access, better product quality, product innovation, marketing effort, decreased social stigma, and decreased legal risk. De facto legalization without a price decrease in the Netherlands has been associated with approximately a doubling of prevalence, though to levels still very low by U.S. standards.
Combining those non-price factors – which shift the demand curve – with the movement along the demand curve as price falls might lead to something like a four-to-sixfold increase in total quantity consumed, compared with a strict prohibition. (Of course, places such as California don’t actually have strict prohibition today.) That seems to me like a reasonable estimate, no more likely to be too high than too low, though of course the error band is enormous. But that doesn’t necessarily mean a proportionate increase in either the number of total users or the number of problem users; in fact, what we’ve been seeing over the past decade has been an increase in cannabis use at the intensive margin, among already-heavy users. The number of people meeting diagnostic criteria hasn’t moved much, but days of use and quantity per use-day have both increased.
The calculation of an increase in the number of “addicts” is entirely Bennett and White’s, based on confusing a change in the quantity consumed with a change in the number of users, and on assuming (without any basis) that a constant fraction of consumers become “addicts.” But they write it in a way that suggests, without actually saying, that it is my calculation. (And of course they attribute it to me alone, though the book has three authors.)
In fact, an estimated 2.7 million Americans today meet diagnostic criteria for cannabis dependence. (Perhaps unexpectedly, only about half of daily cannabis users meet diagnostic criteria.) And “dependence” is typically transient rather than chronic. Chronic, relapsing cannabis use disorder – which is what “addiction” means, if it means anything except “Bill Bennett wants you to be very afraid” – probably claims fewer than 1 million American victims today. And that’s after 15 years of rapid growth in heavy use under a weakening system of prohibition.
So to assert the existence of 2.7 million cannabis “addicts” and then sextuple that figure to estimate 16.2 million “addicts” post-legalization is flat-out silly. Even putting aside its false precision, the number can’t be right, unless you expect chronic, relapsing cannabis dependence to become much more common than chronic, relapsing alcohol dependence. The annual prevalence of alcohol dependence (again, most of it not chronic) is estimated at something under 10 million (3.8% of the 80% of the U.S. population of 318 million that is over 15). Does any sane person expect cannabis dependence to become more common than alcohol dependence? Seriously?
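The parenthetical arithmetic above is easy to verify; this trivial check simply multiplies out the three figures quoted in the text:

```python
# Check the parenthetical: 3.8% of the 80% of the U.S. population
# of 318 million that is over 15.
population = 318_000_000
over_15_share = 0.80
dependence_rate = 0.038

dependent = population * over_15_share * dependence_rate
print(f"Estimated alcohol-dependent Americans: {dependent / 1e6:.1f} million")
# about 9.7 million -- i.e., "something under 10 million"
```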
It’s the old hit-and-run: put up nonsense, and trust the capacity of your lie to get halfway around the world before the truth has had time to put its shoes on.
These folks could teach Principles of Sliminess at Slug University. I guess Bennett must not have read the chapter on honesty in The Book of Virtues.
Footnote: Yes, I have a call in to the WSJ op-ed desk, demanding a retraction. And of course they’re going to tell me they don’t take any responsibility for what they publish in the opinion section. And yes, I’ll try writing a letter, and of course they will tell me they don’t have enough space to let me explain how many ways Bennett and White got it wrong. And that op-ed will be online forever, so that 10 years from now I’ll have to explain, “No, I never predicted that legalization would lead to 16 million cannabis addicts.”
Commenter “Davis X. Machina” asks: “What’s the absolute latest a candidate can realistically get into a presidential race? Not ballot-deadlines-latest, but real-world.”
Great question. During the last cycle, numerous pundits talked up a late entry in the Republican nomination contest well into fall 2007, when it was clearly too late. So what’s the real answer? Unfortunately, the best answer is a big, “it depends.”
I can do a little better than that, though. The invisible primary for 2016 began, on the Republican side, at approximately the same time that Karl Rove was having that Election Night tantrum on Fox News — although I suppose that Republicans who believed the aggregate polls may have started a few weeks earlier. On the Democratic side? The incumbent president’s party begins its jockeying for the next nomination some time during his first term. The invisible primary continues until the actual voting in Iowa and New Hampshire approaches, in this case at the beginning of 2016.
At some point in that long process, party actors begin to make commitments, and eventually settle on one or more finalists. Other competitors are winnowed out de facto (Democrats Joe Biden, Bill Richardson and Chris Dodd all made it to Iowa in 2008 but only after having long since lost any chance of winning), or formally (Republicans Tim Pawlenty, Haley Barbour and John Thune in 2012, the latter two of whom did some candidate-like things but never officially entered the field).
So when is too late? There are two key variables. One is the point at which party actors have moved from auditioning the prospects to actually deciding. That may happen as early as two or three years before the election, which appears to be happening among Democrats right now. Or it may take until the late fall or early winter entering the election year.
The other variable concerns potential candidates. During the invisible primary, potential candidates introduce themselves to party actors and demonstrate their fealty to the party’s policy positions, their capacity for running a national campaign and the skills and abilities that promise to make them reliable presidents. They also begin to demonstrate that they can attract enthusiastic support from party voters (before the actual primaries and caucuses), and that they would make solid general election candidates. But not all candidates begin at the same starting line. Hillary Clinton had already achieved pretty much everything on the 2016 nomination checklist by November 2012. By contrast, Massachusetts Governor Deval Patrick, and Maryland Governor Martin O’Malley, have a lot more to do. The more a candidate must achieve, the more time it will take to do it.
So the more open the nomination is, the longer the window for a candidate to enter the fray. Likewise, the stronger a given candidate is (think Clinton), the longer that candidate can wait to begin campaigning.
In this presidential election cycle, the Democratic side looks pretty settled already. Unless Clinton drops out or encounters unexpected turbulence, it’s already pretty late to enter the nomination contest except for the very heaviest of heavyweights. The Republican side, on the other hand, still seems fluid. In addition to candidates such as Rand Paul, Ted Cruz, and Rick Perry, who are actively campaigning, there is a large number of quasi-candidates — Mike Pence, John Kasich, Rob Portman — straddling the line between almost in the race and really in. I suspect that a candidate who hasn’t done some preliminary work would have difficulty catching up to the pack at this point, but it’s probably not too late for a plausible candidate to start from scratch. However, the window is closing. At some point in the next several months, Republican party actors are going to move from window-shopping to committing.
Avik Roy released a health reform proposal yesterday, published by the Manhattan Institute (full pdf). I am not going to go all post-modern literary critic on this (only deconstruct), in part because a lot of it lines up nicely with things I have been writing about/calling for over the past few years, in search of a political deal that could move the policy ahead. For example, I called for replacing the individual mandate, federalizing the dual eligibles, and buying low income persons into exchanges in December of 2010! (These are “cousins” of what Avik proposes.) My more fully fleshed out “next step health reform” version came in my book in 2011. Again, it is not hard to imagine a deal between what Avik and I wrote.
Perhaps most important is the tone, which acknowledges that policy deals are available even though politics has been standing in the way.
As Avik puts it:
One of the fundamental flaws in the conservative approach to health care policy is that few—if any—Republican leaders have articulated a vision of what a market-oriented health care system would look like. Hence, Republican proposals on health reform have often been tactical and political—in opposition to whatever Democrats were pitching—instead of strategic and serious.
The biggest question facing Avik’s proposal is not in policy terms or what supporters of the ACA will think, but whether any elected Republicans will be willing and able to shift gears and begin trying to move health reform ahead instead of simply looking for what helps in the next election. My hope (and cautious expectation) is that the answer is yes, after the 2014 election.
Two things I especially want to encourage in reform discussions that overlap with what Avik has proposed and that I have previously proposed as part of a North Carolina-specific reform/waiver approach within the ACA (p. 6-7):
Imagine a Medicaid waiver in which the cost of the dual eligible beneficiaries (those covered by both Medicare and Medicaid) is federalized to reduce the perverse incentives inherent in having two payers of care; state cost savings could be used to expand insurance coverage
Pilot a premium support approach to the setting of premiums for Medicare Advantage plans in North Carolina, two to three years after we begin a State-run insurance exchange with the Medicaid waiver/BHP expansion I suggest
There is lots of health policy to be banged out in those two points that I have suggested, but the need for LTC reform is a crucial issue that I have written much about. And the current political stalemate, in which exchanges are the panacea in the Medicare program but the worst thing ever in the ACA (and vice versa), is silly.
I will have more detailed comments later, but I commend Avik for offering this plan, and think there is plenty to like in the proposal itself, as we look for the next step in health reform.
Dismiss them, that is, as anything resembling tea leaves about 2016. You know this, right? There is absolutely no reason to believe that book sales are a useful predictor of future elections.
We are, by nature, curious about the future and eager to latch on to anything that promises to reveal its mysteries. So expect all sorts of political entrails to be carefully examined over the next two years. Polls, crowd sizes, fundraising, body language, campaign hires — we’ll hear about it all. And most of it will be humbug.
What isn’t humbug? For nominations, the best predictors tell us what party actors — politicians, activists, party officials, campaign and governing operatives, party-aligned interest groups and media — are thinking. Endorsements provide some clues. So, to some extent, does fundraising (at least from party sources) and the ability to attract talent to campaigns. But you need to know the parties and the environment to know exactly how much weight to assign various elements.
For general elections, the two early indicators to watch are the popularity of the incumbent president (not so great for Democrats right now) and the state of the economy (improving). Trouble is, we want to know the status of those things in August 2016, not August 2014.
Book sales don’t qualify — nor do early polls.
The only asterisk is that party actors often are as likely to be taken in by otherwise meaningless indicators as anyone else. So if some Republican donors believe that book sales matter, then maybe Carson will get a small, temporary boost — not that he is a viable nomination candidate in a party with many solid contenders. And if Democratic-aligned interest groups believe the early polls and jump on the Clinton bandwagon as a result, then those polls matter, even if they don’t predict anything about eventual voter behavior.
In any event, expect plenty of nonsense in the coming months. That’s just a consequence of a big, important story that currently offers very little that’s visible to reporters and pundits. Enjoy it — but don’t fall for it.
Yesterday I returned to banging the (Kevin) drum about the pernicious effects of environmental lead. Both a note from a reader and comments elsewhere about Kevin’s latest post suggest that an earlier post of mine had created confusion between two distinct claims:
1. Lead causes crime, and the effect size is large.
2. Decreasing lead exposure was the primary cause of the crime decline that started in 1994.
#1 is certainly true, and nothing Cook has written or said contradicts it. We have both statistical evidence at the individual level and a biological understanding of the brain functions disrupted by lead.
Cook convinced me that #2 is not true, or at least is not the whole story, because the decline happened in all cohorts while lead exposure decreased only for the younger cohorts. You can still tell plausible stories about how the young’uns drove the homicide wave of the late 1980s/early 1990s and that less violent subsequent cohorts of young’uns reduced the overall level of violence, but that’s not as simple a story. In a world of many interlinked causes and both positive and negative feedbacks, the statement “X% of the change in A was caused by a change in B” has no straightforward empirical sense.
So I’m absolutely convinced that lead is criminogenic, in addition to doing all sorts of other personal and social harm, and strongly suspect that further reductions in lead exposure (concentrating on lead in interior paint and lead in soil where children play) would yield benefits in excess of their costs, even though those costs might be in the billions of dollars per year. What’s less clear is how much of the crime increase starting in the early 1960s and the crime decrease starting in the early 1990s (and the rises and falls of crime in the rest of the developed world) should be attributed to changes in lead exposure.
The original point of yesterday’s post stands: When you hear people complaining about environmental regulation, what they’re demanding is that businesses should be allowed to poison children and other living things. No matter how often they’re wrong about that – lead, pesticides, smog, sulphur oxides, chlorofluorocarbons, estrogenic chemicals – they keep on pretending that the next identified environmental problem – global warming, for example – is just a made-up issue, and that dealing with it will tank the economy. Of course health and safety regulation can be, and is, taken to excess. But the balance-of-harms calculation isn’t really hard to do. And the demand for “corporate free speech” is simply a way of giving the perpetrators of environmental crimes a nearly invincible political advantage over the victims.
A range of lawsuits filed over many years (and fought by successive governors) recently culminated in the federal government forcing the state to move some prisoners to local jails. But Governor Jerry Brown is defying federal pressure to fully comply with the U.S. Supreme Court’s order to reduce overcrowding to a still problematic 137% of capacity.
Even modest reforms to criminal sentencing get little love from elected officials in this Democratic Party-dominated state. Hope for change surged briefly last year when a bill to convert simple drug possession from a felony to a “wobbler” (a crime that could, depending on circumstances, be treated as either a felony or a misdemeanor) actually passed the state legislature after many similar prior bills had failed.
But Brown promptly vetoed it. At the time, he stated that a broad review of all sentencing was commencing, and that would be the vehicle to revamp the criminal justice system more broadly, including sentencing policy.
Kevin Drum, who’s been doing Pulitzer-quality science and policy reporting on the behavioral effects of environmental lead, has yet another item today, once again reporting on a new paper by Jessica Wolpaw Reyes of Amherst, who’s been doing the fancy number-crunching on the topic. No real surprise: in addition to greatly increasing rates of criminal behavior, lead exposure also increases the risk of other consequences of poor self-command, such as early pregnancy. Kevin draws one of the right morals of the story: that biology matters, while liberals and conservatives tend to unite in blaming everything on society, economics, and culture:
It’s a funny thing. For years conservatives bemoaned the problem of risky and violent behavior among children and teens of the post-60s era, mostly blaming it on the breakdown of the family and a general decline in discipline. Liberals tended to take this less seriously, and in any case mostly blamed it on societal problems. In the end, though, it turned out that conservatives were right. It wasn’t just a bunch of oldsters complaining about the kids these days. Crime was up, drug use was up, and teen pregnancy was up. It was a genuine phenomenon and a genuine problem.
But liberals were right that it wasn’t related to the disintegration of the family or lower rates of churchgoing or any of that. After all, families didn’t suddenly start getting back together in the 90s and churchgoing didn’t suddenly rise. But teenage crime, drug use, and pregnancy rates all went down. And down. And down. Most likely, there was a real problem, but it was a problem no one had a clue about. We were poisoning our children with a well-known neurotoxin, and this toxin lowered their IQs, made them into fidgety kids, wrecked their educations, and then turned them into juvenile delinquents, teen mothers, and violent criminals. When we got rid of the toxin, all of these problems magically started to decline. This doesn’t mean that lead was 100 percent of the problem. There probably were other things going on too, and we can continue to argue about them. But the volume of the argument really ought to be lowered a lot. Maybe poverty makes a difference, maybe single parenting makes a difference, and maybe evolving societal attitudes toward child-rearing make a difference. But they probably don’t make nearly as much difference as we all thought. In the end, we’ve learned a valuable lesson: don’t poison your kids. That makes more difference than all the other stuff put together.
But there’s another moral to be drawn. The toxicity of lead has been known for at least a century. The introduction of tetraethyl lead into gasoline in the 1920s sparked a controversy, which the automobile industry, the petroleum industry, and Ethyl Corporation (a GM/Esso joint venture) won, using the usual mix of dirty tricks including lying and threatening scientists with lawsuits. A similar battle was fought over lead paint in the 1970s, with the lead-paint vendors in the bad-guy role, and over lead emissions from smelters, with the American Iron and Steel Institute trying to destroy Herb Needleman’s scientific career.
Then, mostly by the accident that leaded gasoline fouled catalytic converters, the battle was rejoined over lead in gasoline, with the old pro-toxin coalition fighting a drawn-out rearguard action to delay regulation as much as possible.
As far as I know, not a single executive, lobbyist, or scientist working for any of the companies that were making money by poisoning children and causing a crime wave spoke out in favor of public health and safety. Why should they? After all, they were just doing their jobs and paying their mortgages, and Milton Friedman had proclaimed that the only social responsibility of business was to make money (and that anyone who believed otherwise was a closet socialist): a morally insane proposition still widely repeated.
All of which makes me think of C.S. Lewis’s preface to The Screwtape Letters, explaining his image of Hell as the realm of the Organization Man:
I live in the Managerial Age, in a world of “Admin.” The greatest evil is not now done in … sordid “dens of crime.” … It is conceived and ordered (moved, seconded, carried, and minuted) in clean, carpeted, warmed and well-lighted offices, by quiet men with white collars and cut fingernails and smooth-shaven cheeks who do not need to raise their voices.
The plutocrat majority on the Supreme Court has ruled that, whatever the facts, as a matter of law using money to influence the outcome of elections does not constitute “corruption,” because “there is no such thing as too much speech.” Soon it will probably rule that the companies can cut the comedy and make contributions directly from corporate coffers to campaign accounts, but by now the rules are so leaky that it hardly matters anymore. As a result, quiet men (and women) in pleasant offices, who have not only neatly-trimmed fingernails but utterly clear consciences – men and women most of whom would be psychologically incapable of injuring a child with their own hands – will continue to poison other people’s children (with environmental toxins, unhealthy foods, alcohol, tobacco, and, shortly, cannabis), call anyone who tries to interfere a socialist, and use everything short of explicit bribery to get their way.
And that, my friends, is what’s at stake this year, and in 2016, and – unless we’re very lucky – in every election for the rest of my lifetime.
In last weekend’s New York Times magazine, Robert Draper profiles Sen. Rand Paul and describes the nation’s current “libertarian moment.” “Today,” writes Draper, “for perhaps the first time, the libertarian movement appears to have genuine political momentum on its side.” It’s an enjoyable enough read, and there are some interesting interviews and discussions about Paul and some of the personalities who make up the Libertarian Party. There’s also some entertaining pushback from mainstream Republicans.
But I found myself questioning the main premise of the piece. What evidence do we have that we’re in the middle of a libertarian moment, that there’s “genuine momentum” for the ideology? As evidence, Draper offers the following:
Most Americans support marriage rights for same-sex couples.
Marijuana legalization has become a mainstream position.
Americans are reluctant to back US military actions abroad.
Well, okay, that strikes me as a reasonable reading of current public opinion. But that’s pretty far from proving that the average American is now libertarian. Take same-sex marriage. To some, that is a statement of libertarian values: the government should not be arbitrarily deciding who can and can’t have access to marriage. To others, this is a civil rights issue: People are being denied a basic right simply because of their sexual preferences, and they are seeking government action to ensure equal access to a key societal institution. That actually sounds like the opposite of libertarianism. Similarly, nearly every American now favors the rights of African Americans to vote or to marry outside their race; that doesn’t make every American a libertarian.
And then Draper hits us with this:
Deep concern over government surveillance looms as one of the few bipartisan sentiments in Washington, which is somewhat unanticipated given that the surveiller in chief, the former constitutional-law professor Barack Obama, had been described in a 2008 Times Op-Ed by the legal commentator Jeffrey Rosen as potentially “our first president who is a civil libertarian.”
Okay, if the definition of libertarian now includes Barack Obama, the man who ran for office promising that the federal government would take responsibility for people’s access to health care and actually delivered on it, then that term has no meaning at all.
Libertarianism is, in fact, a tricky combination of policy stances, combining liberal positions on personal freedom (drug use, sexuality, and maybe abortion, even if that allows a decline of national morality) with conservative positions on the economy (low taxes and minimal regulation, even if that allows inequality, entrenched bigotry, and environmental degradation). By its own nature, libertarianism runs against the main ideological current in American politics right now, since politically aware people tend to want either the liberal package of personal freedom with economic regulation or the conservative package of social conformity with economic liberty. Basically everyone agrees with libertarians on something, but they tend to get freaked out just as quickly by the ideology’s other stances. Which is why libertarianism tends to have a low ceiling for popularity.
Pretty much the only way you can make it look like libertarianism is a popular movement is if you expand its definition so far as to be almost meaningless. This is more or less what Draper has done in this article.
Southwest Asia has been with me for a long time. For over a decade, I was a small part of a fairly well-orchestrated U.S. strategy to maintain the balance of power in the Persian Gulf. When the Shah of Iran fell in 1979, we knew that the papier-mache kingdom of Saudi Arabia could not replace Iran as our “protector in the Gulf,” so we settled for the next best thing: a relatively stable balance between the Arabs of Iraq and the Persians of Iran. We had to work hard to maintain this balance.
The first serious challenge came in the mid-1980s when I was a joint-staff officer for the principal military force-provider for the region, U.S. Pacific Command. I helped plan the U.S. military’s response to defeat a push by the Soviets into Iran in search of warm-water ports. The Soviet Union had already invaded Afghanistan in 1979, and many predicted Iran was next. Cold War strategy preempted everything else, but we still kept a wary eye on others in the Persian Gulf, particularly after Saddam Hussein invaded Iran.
When it looked as if the long and bloody war Hussein had started might eventually destroy the balance we sought and draw the Soviets into Gulf waters, the U.S. openly took Iraq’s side. We re-flagged and escorted Kuwaiti tankers, a U.S. warship absorbed two Iraqi Exocet missiles and almost sank, another of our warships struck an Iranian mine, we attacked Iran’s command-and-control assets, sank one Iranian warship and badly damaged another, and then shot down an Iranian civilian airliner with 290 people on board. It was this tragic act that many believe caused Ayatollah Khomeini to “drink the hemlock,” as he put it, and declare an end to the disastrous war Iraq had begun. The stability we sought was reestablished.
At the end of the 1980s, I became a special adviser to the chairman of the Joint Chiefs of Staff. Having been thwarted in his attempt to conquer Iran, Saddam Hussein invaded Kuwait and we immediately launched Operation Desert Shield to protect Saudi oil facilities and, some months later, Operation Desert Storm to kick the Iraqi Army out of Kuwait.
Desert Storm accomplished our strategic objective: restoring the balance in the Gulf. We did not march to Baghdad to unseat Saddam Hussein, because had we done so alone, we would have assumed the role of balancer and would have had to remain in that country indefinitely, something we wisely judged as not only untenable but extremely dangerous for long-term U.S. interests.
Through four presidents—Carter, Reagan, George H.W. Bush and Clinton—the U.S. played an adroit strategic game in the Persian Gulf. As a member of the Marine Corps War College faculty from 1993 to 1997, my joint-force students and I studied, analyzed and evaluated this strategy. As a personal adviser to retired General Colin Powell from 1998 to 2000, I often discussed how Saddam was contained and the Gulf was stable. In short, we watched U.S. strategy work. It maintained stability in one of the most vital regions of the world and cheap oil flowed to Japan, to Europe and to us.
Imagine my utter surprise, then, when I returned to government in 2000 and began to hear talk of destroying that relative stability by invading Iraq and taking out Saddam Hussein. Had I stumbled into an administration of neophytes in national security policy, lunatics, power-mad zealots, or what?
Some would say the neoconservatives and hyper-nationalists who seemed to crawl out of the dark and advise or enter the Bush administration were all of these and more. But these descriptions omit an important element: the messianic and arrogant belief in American exceptionalism.
Many of the men and women I encountered in 2001-2005, or who are now speaking out loudly about America’s responsibilities toward Iraq, sincerely believed that their country has a mission in the world to evangelize its unbelievers. Theirs is a long tradition in U.S. foreign policy, one that John Quincy Adams loathed and despised as the impulse to go “abroad, in search of monsters to destroy.”
No matter how many times their beliefs are proven insane, destabilizing, immoral, dangerous, ruinous even—consider L. Paul Bremer’s disbanding of the Iraqi military, de-Baathification and refusal to establish an Iraqi government in 2003—they continue to advocate identical policies and actions. Regardless of previous decisions gone horribly awry, they push for similar decisions today. Despite clear proof that civil war cannot be safely managed by outside parties, they—the outside party—insist on intervening. Today, moreover, they insist on calling all opposition “terrorists,” even in Iraq where the most formidable forces opposing Nouri al-Maliki are the very Sunnis “awakened” by General David Petraeus in 2007.
Worse, because of a truly apathetic Congress, a largely ignorant or complicit media, a dramatically incompetent legal system, and those who enrich themselves on the anti-terrorist industrial complex, these neocons get away with this characterization.
From 1953 to 2000, we crafted and maintained a balance of power in the Persian Gulf, however ignominiously to the purer hearts of the world. In 2003, we destroyed that balance. We are now reaping the consequences. To thrust more military power into such a situation will only work if we remain indefinitely and massively deployed there—an extremely dangerous proposition. The only other solution is to craft a new balance of power. Iran just might be ready to assist.
UPDATE (August 8, 2014)
With the recent U.S. airstrikes near Irbil, President Obama has reentered the miasma that is Iraq.
In for a penny, in for a pound comes to mind. A couple of laser-guided bombs are the penny. The pound will come when air power, as usual, proves insufficient.
Compelling the U.S. to reenter the fray is precisely what the Islamic State (IS) desires: the ultimate tactical goal of every rabid terrorist spawned by Al Qaeda is to kill Americans. What IS does not want is Tehran and Washington working in concert.
The IS leadership knows no solution can be achieved—in Syria, Afghanistan, Lebanon or Iraq—and no long-term security for Israel can be forged, and no peace can come to the region, unless Iran is a fully participating and cooperating party on the side of IS’s enemies.
This does not mean sectarian war; it means a war of both Sunnis and Shia, along with Christians and others—and of all those desirous of stability and peace—against the real terrorists. It also means that all religious groups, in Iraq and elsewhere, who join this struggle have to be treated with tolerance and respect, both during the struggle and after it’s won. There can be no Malikis any more than there can be new Saddam Husseins.
There has to be real and sustainable political change in Iraq—and it has to come now.
In his recent interview with Thomas Friedman, the president was clear that he’s hesitant to “be in the business of being the Iraqi air force.” Since he is in that business, it’s fair to wonder how he got there. Commentators friendly to Obama, including Matt Yglesias, have taken the view that his desire to limit engagement in the region reflects a kind of necessary realism. No doubt this is how the president himself views his actions. However, this view contains a presumption of American competence that is not warranted. The truth is that military action became necessary precisely because the administration lacks a realistic strategy.
In Kurdistan, examples of the failure of American diplomacy are everywhere. Refugees have been a problem for months, but only in the last few days has our government gotten serious about providing large-scale material support to the Kurds. My work in Kurdistan was focused on public health problems like typhoid, and even under the best of circumstances the infrastructure there is strained. A million refugees have now been added to that mix, an enormous burden for the healthcare and security services of a small region. A large American relief effort was needed back in June, or better yet earlier, and would have taken great stress off the Kurdistan Regional Government.
On the economic front the State Department has gone out of its way to be unhelpful. The Kurdish government is in a desperate economic situation due to the refugee crisis, the security crisis, and the central government’s refusal to share oil revenue. The refusal to share revenue is a recent problem, and it’s an attempt by Baghdad politicians such as Nouri al-Maliki to keep the Kurds from developing their own export capacity. The Obama team has adopted Maliki’s line, in essence arguing that Kurdish oil undermines Iraqi unity. That’s an idea that has become increasingly ridiculous with each setback in Baghdad.
In fact Kurdish oil was an opportunity for Obama to rid America of the whole awful business of buying oil from anti-American interests in Baghdad. Kurdish export capacity could have been built into the broader Iraqi system, provided it can reform itself, or developed separately, depending on the need. But the idea of Kurds breaking away from Iraq was anathema to the Obama team. Instead they welded themselves to Baghdad, and compounded the Kurds’ economic problems, by actively discouraging buyers of Kurdish oil. Even before a Texas court ruled that a Kurdish tanker be seized if it approaches the US, the Obama team was undermining Kurdish oil sales. Here’s an excerpt from the State Department’s daily briefing on July 25th:
QUESTION: Are you actively warning the - say, the U.S. firms or other foreign governments to not buy Kurdish oil specifically?
MS. HARF: Well, we have been very clear that if there are legal issues that arise, if they undertake activities where there might be arbitration, that there could potentially be legal consequences. So we certainly warn people of that.
QUESTION: Do you keep doing that now too?
MS. HARF: We are repeatedly doing that, yes.
The result is ongoing economic strangulation at precisely the moment the Kurds are being attacked by ISIS. Government salaries haven’t been paid in months. One physician friend in Sulaimania wrote to me that the doctors are working for free. There have also been acute fuel shortages.
Security is the most obvious area where American soft power has failed. For months now the Kurds have been lobbying for a more coordinated approach against ISIS, and they have gotten the cold shoulder over and over. The Obama team was content to arm a disloyal and unreliable Iraqi Army, and they were perplexed when those heavy weapons ended up under ISIS control. But they refused to coordinate significant weapons procurement for the Peshmerga, despite increasingly desperate appeals, until the ISIS rampage forced them to change tack this past week.
Obama likes to claim he’s a supporter of American soft power. He stresses that military action is a last resort. In Kurdistan it’s been the only resort. American soft power, through obtuse policy and simple negligence, has been working for the other side. For Obama’s inner circle the priority has been the concept of Iraqi unity, not the empowerment of people in that country who are actually friendly towards Americans. This is anything but enlightened realism. Rather there’s cynicism and wishful thinking in equal measure. It’s not too late for Obama to change course, but the air strikes themselves do not indicate such a change. Rather they are further evidence that during the past few months there has been no American strategy for Kurdistan other than emergency response.
For those who don’t pay close attention to Republican presidential nomination politics, the much-ridiculed Ames event works like this: Anyone with a ticket can vote in the straw poll, so campaigns bus as many people as possible to the event (and pay for the tickets). Ames rarely predicts the winner of the Iowa caucuses the following winter, much less who will win the nomination. However, the ritual does appear to play a winnowing role (for example, in the 2012 cycle, Tim Pawlenty dropped out after disappointing at Ames).
Since any resemblance to rigorous procedural democracy is purely incidental, and because it’s a silly (and costly and time-consuming) hoop for candidates to have to jump through, Ames is reviled by almost everyone except the Iowa Republican Party, which profits directly from it.
Oh, and me. I have nothing against the Ames straw poll.
To understand Ames, it’s important to understand how and why it matters. The context is the invisible primary, the long (long, long) portion of the campaign before voters get involved when party actors — the politicians, formal party officials and staff, campaign and governing professionals, activists, party-aligned interest groups and party-aligned partisan media who have the biggest say in nomination politics — coordinate and compete over the nominee and the direction of the party. Ames is important only if those party actors treat it as important.
So, for example, it really didn’t matter that Michele Bachmann finished first at Ames in 2011, because very few party actors were even marginally interested in her. Nor did it matter how Ron Paul did in that event, because almost all party actors had long since made up their minds about him. Pawlenty’s case was different. He had already fallen flat in debates and in other ways. It’s not so much that Ames destroyed him as that it was his last chance to show party actors that there was something they were missing about his candidacy. When he did poorly there, too, it was time for him to drop out.
In 2008, Ames may have put Mike Huckabee in the driver’s seat with Christian conservatives, though it’s also possible it just registered a victory that already had happened. If Ames did have an independent effect, however, it probably was among a group of people who were looking for an excuse to choose among very similar contenders. That Ames was (perhaps) enough to make a difference is less about the outsized importance of a screwy straw poll and more about a group using whatever was available to choose between two almost-equals. If it hadn’t been Ames, it would have been something else equally arbitrary.
Which is really the answer here. No one uses the Ames straw poll as some sort of binding contract. Party actors are free to use it as they see fit — just as they use fundraising success, debate performances, polling data and reports of crowd reactions to stump speeches. Ames is no more of a joke than any of those, and there’s no particular reason to believe that it’s used any more seriously, except when party actors are looking for some arbitrary method of forcing their own decision.
So have fun pointing out the limitations of the Ames straw poll. But then realize that it’s a perfectly fine part of the Republican nomination process.
The fact is that the leaders of erstwhile socialist parties have been talking the talk of responsible capitalism for a very long time. It was how they covered their tracks as they retreated from offering people a way out of the rat race of capitalism, rather than compensation for being losers in it, even in the postwar era. Those who imagine that the progressive reforms achieved in that era stand as proof today that a responsible capitalism is possible are sorely mistaken. On the contrary, the undoing of those reforms after just a few decades shows that a responsible capitalism is indeed a contradiction in terms…Ordinary people recognise it for the doublespeak it is. And if they are not offered a positive vision and plan for a renewed democratic socialism that embodies cooperation rather than competition as the basis of social life, if they are not offered, that is, any alternative to capitalism, they will increasingly cling to whatever toehold they have within it at the expense of the “others”.
That even the son of Ralph Miliband is fleeing his socialist roots is another demonstration that Margaret Thatcher remains the defining figure in the recent history of British politics, and not just because she slammed the Overton Windowpane shut on the fingers of so many leftist politicians then and continues to do so now. In the nearly quarter century since she left office, the ensuing PMs haven’t had anywhere near as much impact on British society, whether one judges them individually or as a group. Yes, in public, the opponents of those PMs dutifully shrieked that each was implementing enormous and catastrophic changes, but in private, the same people would have admitted that the alterations were minor compared to those undertaken by The Iron Lady.
It is hard for Americans to appreciate how broadly Britons of Panitch’s generation endorsed socialism and how profoundly its principles shaped the British economy (especially through trade unions) and political system when Thatcher came to power. Some of British socialism’s legacy, particularly many of the nationalized industries, was destined for the dustbin of history in any event with the advent of the global economy, but much of it she personally, gleefully crushed. Ed Miliband is smart enough to know that if he ran on a 1970s-style anti-capitalist platform, he would be crushed in similar fashion, and it’s hard to believe that Panitch doesn’t know that too.
Two lessons for Americans. First, although Ronald Reagan is correctly recalled as a transformative U.S. President, he was governing a much more conservative country than the one Thatcher led. Reagan’s triumphs over leftists in the U.S. were thus both easier and more popular than hers (she remains deeply loathed by a significant minority of her country, especially as one moves north). She shouldn’t therefore be cast, as she sometimes is by Americans, as a lapdog or knockoff version of Reagan: as potent a politician as he was, she was more so. Second, surveying the current gridlock between a Democratic President and a Republican House of Representatives, many people conclude that it would be much easier to implement sweeping political changes in the U.S. if we had a parliamentary system like the U.K.’s. But in the past six decades, Thatcher is the only British Prime Minister who set the limits of acceptable political discussion for decades after her time in office and who implemented political changes that will be broadly studied and debated by historians a century from now.
A few weeks ago, I got to have dinner with Julian Bond. We have a friend in common, who asked me to recommend a play for when “my friend Julian Bond” came to town. “Did you say ‘your friend Julian Bond?’” I squeaked into the phone; whereupon she invited my boyfriend and me to join her and her husband and Bond and his wife for dinner.
As I drove our star-struck way downtown, I listened to Michael read from Bond’s biography on Wikipedia, even as I pretended to ignore him: “Honey, they’re not going to give us a test!” But after he rolled through the familiar list of credits (leader in the American civil rights movement, helped establish the Student Nonviolent Coordinating Committee, first president of the Southern Poverty Law Center, twenty years in the Georgia legislature, University of Virginia history professor, past chair of the NAACP), Michael said, “Oh, listen to this. His father got one of the first PhDs granted to an African-American by the University of Chicago.”
“Really,” I said. “I wonder if he was a Rosenwald Fellow.”
You’ve probably never heard of the Rosenwald Fellowships, but you’ve undoubtedly heard of many of the Fellows: W.E.B. DuBois, Gordon Parks, Jacob Lawrence, Zora Neale Hurston, Alain Locke, Langston Hughes, James Baldwin, Marian Anderson, Katherine Dunham, James Weldon Johnson, Ralph Ellison and nearly every other African-American artist and scholar active in mid-Twentieth Century America. The Rosenwald Fellowships, like the MacArthur genius grants which succeeded them, gave no-strings-attached cash to scholars and artists to continue their work; but unlike the MacArthur grants, the Rosenwalds went almost exclusively to African-Americans.
The fellowship program was part of Julius Rosenwald’s one-man campaign for racial justice, a campaign which led him to build the Rosenwald Apartments in Chicago and YMCAs in other Northern cities to provide housing for African-Americans moving up from the South. It also led him to construct 5,000 schools for black children who were kept out of public classrooms occupied by white students. The Rosenwald Schools provided primary education to one-third of the South’s African-American schoolchildren between World War I and Brown v. Board of Education.
So why haven’t you learned about any of this? Because Julius Rosenwald, who made a fortune as the president of Sears, gave much of that fortune away during his lifetime and directed that the rest be spent within ten years of his death. So his legacy isn’t a foundation with a big building giving out the occasional grant and the frequent press release; it’s the thousands of people educated and housed by his generosity. But no good deed goes unpunished: for failing to make perpetuity his highest concern, Rosenwald has largely been forgotten.
Not by all of us, though. I learned the story several years ago when the Spertus Museum in Chicago put on an exhibit of work by Rosenwald Fellows. One item in the exhibit was enough to persuade me of the Fellowships’ significance: a kinescope of Katherine Dunham performing new dances influenced by her Rosenwald-funded trip to the Caribbean. As I watched the motions and the gestures, I recognized the origins of Alvin Ailey’s classic “Revelations.” Ailey was Dunham’s student; and so, from Rosenwald to Dunham to Ailey, we have perhaps the premier work of American dance.
Thus, after a pleasant dinner in which we talked about theater and travel and the demographic transformation of Washington (Bond’s wife Pam said, “Yes, Julian calls our neighborhood Upper Caucasia”), I turned to him and said, “So, your father was a Rosenwald Fellow?”
He seemed equal parts surprised and gratified to encounter someone who knew about the Rosenwalds, and what an honor it was to receive one, and told the following story:
During a trip South in the mid-1930s to do research as part of his fellowship, Horace Mann Bond drove his car into a ditch. Apparently a pair of rural African-Americans made their living digging holes in the road and then charging hapless motorists to tow their cars out of them. While the two entrepreneurs were hooking up the tow truck, one of them observed Mr. Bond’s elegant city clothes and the new car he was driving, and asked how a black man came to have such luxuries. Mr. Bond explained that he was a Rosenwald Fellow and that the fellowship had paid for the clothes and the car as well as the research he was about to do. His interlocutor smiled: “You know Cap’n Julius?” He hoisted the car back onto the road. “No charge.”
Later, over coffee, Julian showed me an iPhone photo of himself seated next to an extremely elderly white lady who was holding his hand in both of hers. “Do you know who this is?” he asked. “In 1961 her book outsold the Bible!” It was, of course, Harper Lee, author of To Kill A Mockingbird; and on one of his recent trips South, Bond had gotten to meet her. “I’m so excited, I’m stopping people on the street to say, ‘Look at this! I had coffee with Harper Lee!’”
Which is, of course, just how I feel about my dinner with Julian.