Sunday, January 13, 2008


The always informed Tim Lambert provides an approving reference to Rebecca Goldin's post on the National Journal article. Goldin is a serious statistician. She provides an excellent overview of the dispute over the accuracy of L2.

The National Journal team did its homework, interviewing many experts (rather than conservative pundits) and categorizing the potential flaws of the study into different headings. However suspicious some facts surrounding the Lancet study might be (such as the anti-war position held by the scientists conducting the study), only two criticisms cited by the Journal raise any alarms.

One is “main street bias,” the idea that the Lancet study authors over-sampled regions near main streets, which were in turn more likely to be home to victims of car-bombs or other violence. The other is fraud – not by those who wrote the Lancet article, but by those in the field, doing the interviews under minimal supervision.

Goldin is obviously not a knee-jerk Lancet defender or attacker. I agree with her that these are, far and away, the most important criticisms of L2. I also agree with some, but not all, of her criticisms of Munro.

The Journal made a convincing argument that the data may well have been tweaked, in part based on the theory that faked data has patterns that true data rarely fit into; for example, invented people reported as killed may be more likely to be 30 or 40 than 32 or 43. It doesn’t seem unusual if any individual is 30, but it’s awfully strange if all of the deaths consist of 30-year-olds. Apparently, those conducting the Lancet study did not put enough checks in place to ensure that interviewers didn’t pad the books. The data look like inventiveness may have played a role, based on which death certificates the survey conductors reported to have seen, and which they didn’t.
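The pattern described above — invented ages piling up on round numbers like 30 and 40 — is the classic "age heaping" signature, and it can be checked mechanically. The sketch below is a minimal illustration of the idea, not an analysis of the actual Lancet data: the age lists are hypothetical, and it simply compares the terminal-digit distribution of reported ages against a uniform benchmark with a chi-square statistic.

```python
from collections import Counter

def terminal_digit_counts(ages):
    """Count how often each terminal digit (0-9) appears among reported ages."""
    counts = Counter(age % 10 for age in ages)
    return [counts.get(d, 0) for d in range(10)]

def heaping_chi2(ages):
    """Chi-square statistic against a uniform terminal-digit distribution.

    A large value suggests 'heaping': ages clustering on round numbers
    (terminal digits 0 and 5), which fabricated data often shows.
    """
    observed = terminal_digit_counts(ages)
    expected = len(ages) / 10  # uniform share per digit
    return sum((obs - expected) ** 2 / expected for obs in observed)

# Hypothetical reported ages, heavily heaped on multiples of 10 and 5
heaped = [30] * 40 + [40] * 35 + [25] * 15 + [33, 47, 51, 62, 28, 36, 44, 58, 39, 41]
# Hypothetical ages spread evenly across 20-69, one per value
spread = list(range(20, 70))

print(heaping_chi2(heaped))  # large: terminal digits pile up on 0 and 5
print(heaping_chi2(spread))  # 0.0: each terminal digit appears equally often
```

As Goldin's caution implies, a high score here is only suggestive: real populations show some heaping too, and the statistic is only damning if this test was specified in advance rather than cherry-picked after many looks at the data.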

So, the data "may well have been tweaked" and "look like inventiveness may have played a role." In other words, there are good reasons for suspecting "fraud," as many of us have had for more than a year. Goldin is correct to note that "we should be careful in reading too much into any particular statistical anomaly" and she is right to worry that "If those looking for fault in the Lancet study only considered a few possible ways in which the data didn’t look random, then unusually distributed data is far more damning than if they considered many, many ways and found one."

Luckily, I was among the first people to look closely at the data (and certainly the first to describe (pdf) the problems with it). I can confirm that just about the very first thing I looked at was whether the rate of "forgetting" to ask for death certificates was correlated with the date or type of death. And, sure enough, it was! We can be sure that the interviewers did not just "forget" to ask for death certificates; they purposely asked in some cases and not in others.
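The check described above amounts to a test of independence: if interviewers genuinely "forgot" at random, the rate of asking for certificates should not depend on the type of death. A minimal sketch of that test follows, using a chi-square statistic on a contingency table; the counts are hypothetical, invented for illustration, and are not the actual L2 tallies.

```python
def chi2_independence(table):
    """Chi-square statistic for independence in a contingency table.

    Rows: death category (e.g., violent vs. non-violent).
    Columns: [certificate asked for, not asked for].
    A statistic near 0 means the ask rate does not depend on the row;
    a large value means asking was correlated with the type of death.
    """
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical counts: [asked, not asked] for two death categories
independent = [[45, 5], [90, 10]]   # same 90% ask rate in both rows
skewed      = [[25, 25], [95, 5]]   # ask rate differs sharply by category

print(chi2_independence(independent))  # 0.0: no association
print(chi2_independence(skewed))       # large: asking tracks death type
```

With real survey data one would also want a p-value (e.g., via `scipy.stats.chi2_contingency`), but the raw statistic already captures the point made above: a strong association between death type and certificate-asking is hard to square with innocent forgetfulness.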

Does that invalidate the whole study? No. Yet it provides further evidence that the US authors (like Gilbert Burnham) had only the foggiest idea of what the Iraqi interviewers were up to. And it makes any reasonable person suspicious of what else the interviewers were up to. If they felt comfortable picking and choosing which families to ask about death certificates then how can we be sure that they didn't similarly pick and choose which neighborhoods to place clusters in and which houses to visit?

At the end of the day, "tweaked" and "inventiveness" are just nice terms for "fraud." Neither Goldin nor I know, for a fact, that there was fraud in the Lancet data collection process, but much of the circumstantial evidence points in that direction.

