Friday, January 04, 2008

National Journal Article

Neil Munro's National Journal article is out. I haven't had a chance to read it closely, but my quotes are not as contextualized as I would like them to be.


Still, the authors have declined to provide the surveyors' reports and forms that might bolster confidence in their findings. Customary scientific practice holds that an experiment must be transparent -- and repeatable -- to win credence. Submitting to that scientific method, the authors would make the unvarnished data available for inspection by other researchers. Because they did not do this, citing concerns about the security of the questioners and respondents, critics have raised the most basic question about this research: Was it verifiably undertaken as described in the two Lancet articles?

"The authors refuse to provide anyone with the underlying data," said David Kane, a statistician and a fellow at the Institute for Quantitative Social Statistics at Harvard University.


That is correct, but it is important to note that the authors' behaviour was much better in L2 than in L1. In L2, most researchers were provided with some of the data. I attribute this to goodwill and professionalism on the part of lead author Gilbert Burnham. But, it is still pathetic that they refuse to share the data with Spagat et al and that they have yet to (will never?) allow Scheuren and others to see if there are problems with different interviewers providing anomalous results. In L1, their behaviour has been horrible, due mostly, I believe, to Les Roberts' attitude. No one has seen the underlying data for L1, other than cluster-level summaries. This is not the way that scientists ought to behave.

On this topic, I wish that Munro had quoted me about the fact that, as far as I know, no scientific team has ever granted data access, however incomplete, to some critics but not others. It is inexcusable for the Lancet authors to show data to me but not to Spagat.


To Kane, the study's reported response rate of more than 98 percent "makes no sense," if only because many male heads of households would be at work or elsewhere during the day and Iraqi women would likely refuse to participate. On the other hand, Kieran J. Healy, a sociologist at the University of Arizona, found that in four previous unrelated surveys, the polling response in Iraq was typically in the 90 percent range.


Again, this is an accurate quote, but the context is off. My key point is not about how much time Iraqi men are away or how likely Iraqi women are to participate in a survey. Who knows? My key point is that there has never been a single-contact survey with 98%+ participation, in any country at any time on any topic. Never. What are the odds that the most controversial survey of the decade would achieve an unprecedented response rate? More background here.
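To give a sense of scale, here is a back-of-the-envelope calculation using the roughly 1,850 completed household interviews reported for L2. These are my own approximate numbers, for illustration only:

```python
# Back-of-the-envelope: what a 98%+ response rate implies in absolute terms.
# The household count is approximate and supplied by me for illustration.

completed_households = 1_849            # approximate completed interviews in L2
response_rate = 0.98                    # the paper reports "more than 98 percent"

approached = completed_households / response_rate
non_responders = approached - completed_households

print(round(approached))                # ~1,887 households approached
print(round(non_responders))            # ~38 refusals or absences in the entire country
```

That is, fewer than forty households in all of Iraq refusing or unavailable on a single contact, which is the sort of figure I find implausible on its face.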


The authors should not have included the July data in their report because the survey was scheduled to end on June 30, according to Debarati Guha-Sapir, director of the World Health Organization's Collaborating Center for Research on the Epidemiology of Disasters at the University of Louvain in Belgium. Because of the study's methodology, those 24 deaths ultimately added 48,000 to the national death toll and tripled the authors' estimate for total car bomb deaths to 76,000. That figure is 15 times the 5,046 car bomb killings that Iraq Body Count recorded up to August 2006.

According to a data table reviewed by Spagat and Kane, the team recorded the violent deaths as taking place in early July and did not explain why they failed to see death certificates for any of the 24 victims. The surveyors did remember, however, to ask for the death certificate of the one person who had died peacefully in that cluster.
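Before getting to my specific complaints, a note on the arithmetic: the jump from 24 recorded deaths to 48,000 added national deaths is just the cluster-survey scale factor at work. A rough sketch, using approximate figures that I am supplying myself since the authors' actual design weights have not been released:

```python
# Rough sketch of the cluster-survey scaling implied by the quoted figures.
# Population and sample-size numbers are approximations supplied for illustration;
# the authors' actual design weights have not been released.

iraq_population = 27_000_000        # rough mid-2006 population
persons_in_sample = 12_801          # approximate individuals covered by the L2 survey

scale = iraq_population / persons_in_sample    # each sampled person stands in for ~2,100 Iraqis

cluster_deaths = 24                 # the disputed early-July car-bomb deaths
added_to_national_toll = cluster_deaths * scale

print(round(added_to_national_toll))           # ~50,000 -- in line with the ~48,000 quoted above
```

Nothing is wrong with that scaling per se; it is how cluster surveys work. The problem is that a single anomalous cluster carries enormous weight in the national estimate.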


First, where is documentation for the claim that the survey was supposed to end on June 30th? I have never heard of that. In fact, I doubt it. When you go and start fieldwork for a survey in some war-torn country, you certainly have a plan and a schedule that you hope to keep. I believe that they wanted to finish by June 30th. But why would the study protocol require that? It wouldn't. There is no reason to put on such a straitjacket. Instead, the protocol called for getting 50 clusters. However long that took is how long it would take.

Of course, I am still deeply suspicious of the results for that cluster. Finding a whole bunch of deaths at the end of the survey --- and in a category, car bombs, that you wanted/expected to be much larger than the data that you had gathered so far --- is awfully convenient, just as finding scores of deaths in Falluja at the very end of L1 was convenient. But the June 30 date is, as far as I can tell, irrelevant.

Second, I agree that the authors did not explain why they did not ask for death certificates in that specific case. But a plausible explanation would be that the deaths happened a day or two before the survey and that, therefore, the interviewers knew that the families would not yet have death certificates available. So, why ask for them? As always, the authors should be a lot more transparent and willing to answer questions, but I think that they have plausible responses to these issues.

None of which means that I believe those answers. My guess continues to be that the/some interview teams went to a neighborhood, asked the local kids who had died, and then interviewed those houses preferentially. I suspect that they went out looking in early July for a neighborhood with car-bomb deaths, even went to that specific neighborhood after they heard about the car bomb on the news. But suspicions are not proof. I would be happy to bet, however, that Lafta was a part of the team that did those interviews, just as he was the one to go to Falluja for L1.

I think that the issue about car-bomb deaths that is most damning is how the authors pretended in the paper that there was a gradual rise in such deaths, consistent with news reports and IBC, over the course of the time period when, in fact, car-bomb deaths were constant for the two years prior to July 2006. Alas, Munro does not make that point.

I think that the table associated with the article is fine as far as it goes. But I wish that Munro had used my tables which show how "forgetting" to ask for death certificates was much more common for later deaths and for violent ones. That is the damning evidence that something more than forgetfulness was going on when the interviewers failed to even check for death certificates.
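For anyone who wants to construct that sort of table once (if?) the household-level data become available, the calculation is a simple cross-tabulation. A sketch, with hypothetical file and column names since the actual layout has never been made public:

```python
import pandas as pd

# Hypothetical file and column names -- the underlying Lancet data have not been
# released, so this only sketches the cross-tabulation described above.
# One row per reported death, with:
#   violent      -- True if the death was violent
#   late_period  -- True if the death fell in the later part of the study window
#   cert_asked   -- True if the interviewers asked to see a death certificate

deaths = pd.read_csv("lancet_deaths.csv")    # placeholder filename

# Share of deaths for which a certificate was requested, by type and period.
table = (deaths
         .groupby(["violent", "late_period"])["cert_asked"]
         .mean()
         .unstack("late_period"))
print(table)   # lower rates for violent and later deaths would support the point above
```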

But all these are quibbles. Munro has done a fine job in gathering all sorts of evidence and arguments. I spoke with him several times and there is no doubt that he understands the ins and outs of the debate.

I hope to have more substantive comments on the article in due course.
