Thursday, July 31, 2008

Influencing the Election

One of the on-going disputes is whether or not Les Roberts and his co-authors attempted to influence the US elections with the publication of L1 (or L2). There is no doubt that they did. Consider the original AP article on L1.

Researchers have estimated that as many as 100,000 more Iraqis -- many of them women and children -- died since the start of the U.S.-led invasion of Iraq than would have been expected otherwise, based on the death rate before the war.

Writing in the British-based medical journal The Lancet, the American and Iraqi researchers concluded that violence accounted for most of the extra deaths and that airstrikes by the U.S.-led coalition were a major factor.

There is no official figure for the number of Iraqis killed since the conflict began, but some non-governmental estimates range from 10,000 to 30,000. As of Thursday, 1,106 U.S. servicemen had been killed, according to the U.S. Defense Department.

The scientists who wrote the report concede that the data they based their projections on were of "limited precision," because the quality of the information depends on the accuracy of the household interviews used for the study. The interviewers were Iraqi, most of them doctors.

Designed and conducted by researchers at Johns Hopkins University, Columbia University and the Al-Mustansiriya University in Baghdad, the study was published Thursday on The Lancet's Web site.

The survey attributed most of the extra deaths to violence and said airstrikes by coalition forces caused most of the violent deaths.

"Most individuals reportedly killed by coalition forces were women and children," the researchers wrote.

The report was released just days before the U.S. presidential election, and the lead researcher said he wanted it that way. The Lancet routinely publishes papers on the Web before they appear in print, particularly if it considers the findings of urgent public health interest.

Those reports then appear later in the print issue of the journal. The journal's spokesmen said they were uncertain which print issue the Iraqi report would appear in and said it was too late to make Friday's issue, and possibly too late for the Nov. 5 edition.

Les Roberts, the lead researcher from Johns Hopkins, said the article's timing was up to him.

"I emailed it in on Sept. 30 under the condition that it came out before the election," Roberts told The Associated Press. "My motive in doing that was not to skew the election. My motive was that if this came out during the campaign, both candidates would be forced to pledge to protect civilian lives in Iraq.

"I was opposed to the war and I still think that the war was a bad idea, but I think that our science has transcended our perspectives," Roberts said.


1) As here, Roberts often tries to deny that he sought to affect the election even as he admits to doing so. What does it even mean to "skew the election"? The easiest way to think about the issue is to compare two worlds: In world A, L1 comes out after the election. In that world, Bush and Kerry spend the week before the election campaigning, arguing about issues X, Y, and Z. Both candidates seek to focus the debate on topics most likely to benefit them. Iraqi (civilian) mortality plays a role in that world but it is a small one. The less that Iraqi civilian mortality is discussed, the better off Bush is (I think). In world B, L1 comes out before the election. (This is the world we actually live in.) Iraqi mortality is much more a part of the campaign than it was in world A. Issues X, Y and Z are still discussed, but less than they were in world A. (There are only so many hours in the day, questions that reporters can ask, speeches that Bush and Kerry can give.)

The causal effect of issuing L1 before the election rather than after is the difference between world A and world B. Roberts sought to force the candidates to "pledge to protect civilian lives in Iraq." He sought to influence what issues the candidates addressed. He wanted to change the debate from what it would have been without L1 so that more time/energy/attention was focussed on Iraqi mortality.

And that is fine! It's a free country and Roberts has the right to try to influence the electoral process. He is smart enough to know that his actions, alone, are unlikely to be the deciding factor in which candidate wins. But that does not change the fact that he sought to force the candidates to address an issue that they would not otherwise have addressed had he not published L1 before the election.

2) Can Roberts (or any author) insist on a specific publication date when working with a journal like The Lancet? I guess so, if the article is desirable enough from the editors' point of view. Given what we know about Lancet editor Richard Horton's politics (and don't forget his YouTube videos here and here), it seems likely that he was also in favor of the article coming out before the US elections.

3) A similar story on the importance of timing applies to L2. Paul Foy reported in 2006 that:

Roberts organized two surveys of mortality in Iraqi households that were published last October in Britain's premier medical journal, The Lancet. He acknowledged that the timing was meant to influence midterm U.S. elections.

This is confused in that L1 was published in October 2004 and L2 in October 2006, but midterm elections were (obviously) only in 2006.

4) Fox News is probably quoting Burnham out of context here.

We wanted to get the survey out before the election if at all possible, but our agenda on this is concern for the humanitarian issues.

As always, I think that Burnham is a good guy. He is really focussed on humanitarian issues. But there is no compelling scientific or humanitarian reason why the studies needed to come out in the two weeks prior to the US elections. You can make the case (and I have seen Horton, for example, make it) that the key issue is not that the studies come out before the election but that they come out as soon as possible. That is a reasonable belief. Alas, the Lancet studies did not work that way. It took only 6 weeks for L1 to go from survey finish (mid September 2004) to publication. Why, if getting the information out fast is what matters, did it take 14 weeks for L2? (The survey work was completed in early July 2006.) The obvious explanation is that the authors and/or editors wanted the articles to come out just before the US elections. They worked more than twice as fast on L1 as on L2 in order to make that happen.
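The turnaround arithmetic above is easy to check. A minimal sketch, using the known Lancet release dates (29 Oct 2004 for L1, 11 Oct 2006 for L2) and my own approximations for the fuzzy survey-end dates ("mid September 2004" as Sept 15, "early July 2006" as July 5):

```python
from datetime import date

# Survey-end dates are approximations of the text's "mid September 2004"
# and "early July 2006"; publication dates are the actual Lancet releases.
l1_weeks = (date(2004, 10, 29) - date(2004, 9, 15)).days / 7  # roughly 6 weeks
l2_weeks = (date(2006, 10, 11) - date(2006, 7, 5)).days / 7   # roughly 14 weeks

print(round(l1_weeks, 1), round(l2_weeks, 1))
```

Even allowing a week or two of slack in the assumed survey-end dates, the L2 turnaround was more than twice the L1 turnaround.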

5) Roberts has spent the last 5 years backpedaling. He realizes that his credibility is damaged if people know that he sought to "influence" the US elections in 2004 and 2006. See our many transcripts for examples. The Hopkins Q&A (since deleted) included this tidbit.

At no time did study authors Les Roberts or Gilbert Burnham say that the release of their mortality studies was timed to affect the outcome of elections.

The only way for this to be true is for "outcome of elections" to be defined as "whether Bush or Kerry won." It is reasonable to think that Roberts/Burnham were smart enough to realize that their articles would not swing the election one way or the other. But there is no doubt that they (or at least Roberts) have admitted that they sought to influence the course of the campaign by forcing the candidates to address the issue of Iraqi mortality when they would have otherwise spent their time on other topics.

Wednesday, July 16, 2008

Eight Transcripts

Looking for more reading material? I am here to help. Consider these 8 transcripts of talks (7 from Les Roberts and 1 from Gilbert Burnham) over the last few years. There are some interesting/annoying passages, but nothing much beyond what we would expect. We have:


I hope to post comments and highlights at some point, but no promises. Only feel like reading one? Start with the first.


This article (pdf) by Mark van der Laan and Leon de Winter takes a hard anti-Lancet line.

Given all of this, one needs to wonder if this large estimated number of violent deaths is not only due to statistical uncertainty (100.000-1000.000), but possibly also due to one or more of the potential biases mentioned above (and biases not mentioned at all because of a lack of space). Could it be that The Lancet’s survey is juggling with statistics and defies common sense?

We conclude that it is virtually impossible to judge the value of the original data collected in the 47 clusters. We also conclude that the estimates based upon these data are extremely unreliable and cannot stand a decent scientific evaluation. It may be that the number of violent deaths is much higher than previously reported, but this specific report, just like the October 2004 report, cannot support the estimates that have been flying around the world on October 29, 2006. It is not science. It is propaganda.

It is surprising to see an academic of van der Laan's stature use such strong language about a peer-reviewed article.

Thursday, July 10, 2008

Daponte article

Must read article from Yale researcher Beth Osborne Daponte.

Challenges exist when making reliable and valid estimates of civilian mortality due to war. This article first discusses a framework used to examine war’s impact on civilians and then considers challenges common to each statistical approach taken to estimate civilian casualties. It examines the different approaches that have been used to estimate civilian casualties associated with the recent fighting in Iraq to date and compares the results of different approaches. The author concludes by proposing that after fighting has ceased, other approaches to estimating Iraqi civilian mortality, such as post-war retrospective surveys and demographic analysis, should be employed.

Daponte's article is fair and professional. If you only have time to read 15 pages about the debate over Iraqi mortality, this is the paper for you. Bottom line:

Perhaps the best that the public can be given is exactly what IBC provides – a running tally of deaths derived from knowledge about incidents. While imperfect, that knowledge, supplemented by the wealth of data of the Iraq Living Conditions Survey and Iraq Family Health Survey (which have their own limitations), provides enough information in the light of the circumstances. At a later date, additional surveys can be conducted to determine the impact and/or do demographic analysis. But for now, the Iraq Body Count’s imperfect figures combined with the data of the ILCS and IFHS may suffice.

Exactly right. No survey is perfect, but combining the information from IBC, ILCS and IFHS is the best way to get a handle on Iraqi mortality. But what about those Lancet surveys? Why does Daponte not even mention them in her conclusion? Because she thinks that they are highly suspect. Read the whole thing, but my favorite quotes are:

The estimates from these students [DK --- I think that this should have been "studies"] have been lauded but also questioned, partially because the researchers have misinterpreted their own figures but also because of fundamental questions about the representativeness of the achieved survey sample.

The [Lancet I] authors misinterpreted the analysis of the data . . .

Problems with the analysis of the data also plagued the second effort.

The pre-war CDR that the two Lancet studies yield seems too low. That is not to say that it is wrong, but the authors should provide a credible explanation as to why their pre-war CDR is nearly half that of what the UN Population Division estimates for pre-war Iraq. Since Burnham et al. arrive at their estimate of Iraqi ‘‘excess deaths’’ by taking the difference in the pre-war and wartime crude death rates and applying it to a population, if the pre-war mortality rate was too low and/or if the population estimates are too high (e.g., do not take into account the refugee movement out of Iraq), then the resulting number of ‘‘excess deaths’’ would be too high, yielding inflated estimates. Unfortunately, the authors have not adequately addressed these issues.

Burnham et al. sent interviewers to the field to ask respondents for information, knowing that this could put interviewers’ lives at risk. In doing so, the research team was professionally irresponsible. Further, in an effort to ‘‘protect interviewers’’ (even though they had already put them in danger), they sacrificed the scientific randomization that the research relies upon.

Further, one should question how a proposal to conduct this research made it through the Institutional Review Board at a US university.

However, unlike the Lancet studies, the ILCS was careful in its attribution of the root causes of civilian casualties in Iraq.

Les Roberts likes to claim that no one with expertise in estimating conflict mortality criticizes the Lancet results. He should stop making such false claims.

Daniel Davies likes to get his boots on when he thinks that someone is unfairly criticizing the Lancet studies. Time to get walking! Davies will have a hard time portraying Daponte as either incompetent or a Neocon stooge of the Bush administration. If Daponte doesn't think that the Lancet estimates are worth paying attention to, why does Davies defend them so relentlessly?

Just asking!

Tuesday, July 01, 2008

Obermeyer, Murray and Gakidou

Thanks to Donald Johnson writing at Deltoid for the pointer to this article from the British Medical Journal, "Fifty years of violent war deaths from Vietnam to Bosnia: analysis of data from the world health survey programme" by Ziad Obermeyer, Christopher J L Murray, and Emmanuela Gakidou. Below are my comments, expanded from my initial thoughts at Deltoid. I refer to the authors by their initials: OMG.

OMG's key claim, for purposes of Lancet aficionados, is that "media estimates capture on average a third of the number of deaths estimated from population based surveys." This matters because Les Roberts has been running around for years claiming that passive surveillance (as Iraq Body Count uses) is a horrible method of estimating mortality and never (except possibly in places like Bosnia) captures more than a small percentage of all deaths. (In fairness to Roberts, this is a new article and so, perhaps, his previous claims were justified by the research he had access to at the time.)

Will Roberts now acknowledge this? Time will tell.

The paragraph most directly relevant to disputes over Iraq mortality is:

As a final point of comparison, we applied our correction method, derived from the comparison of survey estimates with Uppsala/PRIO data, to data from the Iraq Body Count project’s most recent report of 86,539 (the midpoint of the 82,772 to 90,305 range reported in April 2008) dead in Iraq since 2003. Our adjusted estimate of 184,000 violent deaths related to war falls between the Iraq Family Health Survey estimate of 151,000 (104,000 to 223,000) and the 601,000 estimate from the second Iraq mortality survey by Burnham and colleagues. [footnotes omitted]


1) Tim Lambert enjoys compiling a list of various (reputable) estimates of mortality in Iraq. Example here. Now, I might argue that, given all the problems that ORB has had, its estimate does not belong in Tim's collection. But there can be no doubt that OMG's estimate does belong. Will Tim add it?

2) If your main interest is judging the quality of L2, then a better comparison would have used the IBC numbers to July 2006 (mid-point 47,668), thus covering the same time period as L2 and IFHS. I am not sure what the exact formula is that allows OMG to go from 86,539 to 184,000. Assume that we can just apply this ratio (184,000/86,539 = 2.13) to the IBC estimate of 47,668. That would yield a violent death estimate of 102,000. Recall that, 2 years ago, Jon Pedersen estimated violent deaths at 100,000. Moreover, the IFHS estimate would be lowered from 151,000 to around 100,000 if you removed the "arbitrary fudge factor" (in Debarati Guha-Sapir's marvelous phrasing) that IFHS employs.
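The back-of-the-envelope scaling above can be written out explicitly. The 184,000 and 86,539 figures are from OMG and IBC as quoted in the text; 47,668 is the IBC midpoint through July 2006; the assumption that OMG's correction reduces to a simple multiplicative ratio is mine, not theirs:

```python
# Figures from the text; the constant-ratio assumption is my own
# simplification of whatever correction OMG actually applied.
omg_adjusted = 184_000   # OMG's adjusted estimate (from IBC to April 2008)
ibc_apr_2008 = 86_539    # IBC midpoint, 2003 through April 2008
ibc_jul_2006 = 47_668    # IBC midpoint, 2003 through July 2006

ratio = omg_adjusted / ibc_apr_2008   # roughly 2.13
scaled = ratio * ibc_jul_2006         # roughly 101,000 violent deaths

print(round(ratio, 2), round(scaled, -3))
```

That puts the IBC+OMG figure for the L2/IFHS time window at about 102,000, right next to Pedersen's 100,000.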

Call me crazy, but I would say that the emerging scientific consensus is that approximately 100,000 excess war-related violent deaths had occurred in Iraq through June 2006. This is 1/3 lower than the 150,000 (0 -- 500,000) estimate that I was comfortable with earlier this year. Given all the new research since then, I update my estimate to 100,000 with a 95% confidence interval of (0 -- 300,000).

[Those who think that a lower bound of zero is too low should remember that the definition of "excess" implies a comparison to what would have happened in a counterfactual world without a US invasion/occupation. Although I do not think that it is likely that Saddam would have engaged in substantial internal (against Kurds/Shia) or external (against Iran/Kuwait) aggression, it is not impossible that he would have. Those (possible) violent deaths were prevented by the war. If the comparison is against mortality in Iraq in 2002, then the lower bound should be raised substantially.]

So, if IFHS and IBC+OMG are consistent with each other and with the opinions of informed observers like Pedersen and Guha-Sapir, why does L2 estimate violent mortality approximately 6 times higher? I think that the raw data underlying L2 is not reliable.