Tuesday, March 13, 2007

Child Mortality

Comment threads at Crooked Timber are often interesting. Daniel Davies claims:


The thing is, that there is a “marker” in the Times article – as in, a statement that is not true and that is obviously not true to anyone who has read the study. It is in the following paragraph:

Dr Richard Garfield, an American academic who had collaborated with the authors on an earlier study, declined to join this one because he did not think that the risk to the interviewers was justifiable. Together with Professor Hans Rosling and Dr Johan Von Schreeb at the Karolinska Institute in Stockholm, Dr Garfield wrote to The Lancet to insist there must be a “substantial reporting error” because Burnham et al suggest that child deaths had dropped by two thirds since the invasion. The idea that war prevents children dying, Dr Garfield implies, points to something amiss.


This is not true. As table 2 of the study shows, infant mortality remained constant in the survey (when you adjust for the greater number of months in the post-war recall period) while child deaths increased substantially. They did not drop by two thirds, or indeed drop at all. Von Schreeb, Rosling and Garfield did not say they dropped either (presumably because they have read the survey). They said that the crude estimate of under-15 mortality was substantially lower than other estimates of under-5 mortality in Iraq, and that this implied that there may have been substantial under-reporting of child deaths. They then suggested that this reporting error might lead to additional uncertainty in the estimates of roughly the same size as the sampling error – +/- 30%. Note that, for bonus hack points, the “plus” sign in “+/- 30%” is not ornamental, and to treat Von Schreeb et al as providing evidence that the study was an overestimate is Kaplan’s Fallacy. This is my reason for believing that Anjana Ahuja didn’t read the research; it’s an error that could easily have been made in transcribing notes of a half-understood conversation but couldn’t have been made at all if you read the articles.
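Davies's recall-period adjustment is worth making concrete. Here is a minimal Python sketch of the idea; the death counts are hypothetical placeholders, not the study's actual Table 2 figures, and the window lengths are only approximate (the survey's recall period ran from January 2002 to June 2006, with the invasion in March 2003):

```python
# Hypothetical illustration of Davies's recall-period adjustment.
# The death counts below are made up; they are NOT the Lancet study's
# Table 2 figures. Window lengths are approximate.

pre_war_infant_deaths = 8      # hypothetical deaths in the pre-war recall window
pre_war_months = 14.6          # ~Jan 2002 to Mar 2003

post_war_infant_deaths = 21    # hypothetical deaths in the post-war recall window
post_war_months = 39.5         # ~Mar 2003 to Jun 2006

pre_rate = pre_war_infant_deaths / pre_war_months
post_rate = post_war_infant_deaths / post_war_months

print(f"pre-war:  {pre_rate:.2f} deaths/month")
print(f"post-war: {post_rate:.2f} deaths/month")
# With these made-up counts the per-month rates come out nearly equal,
# which is the shape of Davies's claim: the raw post-war count is higher
# only because the post-war recall window is roughly 2.7x longer.
```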


"ragout" writes:


On the question of whether the Times article is a “bad piece of science journalism,” I much prefer the Times’ version to Daniel’s. Specifically, Daniel summarizes Garfield and other critics as saying “that the crude estimate of under-15 mortality was substantially lower than other estimates of under-5 mortality in Iraq.”

But Daniel’s version is misleading. The critics were not quoting mortality rates as such, which would be deaths of kids under 15 per kid under 15. If the critics had really compared under-15 mortality to under-5 mortality, as Daniel says, the critics would indeed be foolish.

But since the critics are prominent scientists, they certainly did not do anything so foolish. Instead, they compared under-15 deaths per birth in the Lancet study to under-5 deaths per birth in another study. The critics rely on the fact that, as a matter of logic, if there are X deaths of kids under 15, there must be fewer than X deaths of kids under 5.

The Lancet study has 36 under-15 deaths per 1000 births, and another pre-war study has 100 or so under-5 deaths per 1000 births. It follows that the Lancet study found an under-5 death rate less than 1/3 of the pre-war study's. This is exactly what the Times article says, and what Daniel obscures.

Second, Daniel claims that “infant mortality remained constant in the [Lancet] survey.” But as far as I can see, there is no data in the paper from which to calculate pre- and post-war infant mortality. The paper just reports total births, not pre- and post-war births. Daniel, without telling the reader, is implicitly assuming that the birth rate remained constant (which hardly seems consistent with the drastic increase in violence).
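Ragout's bounding argument can be checked mechanically. A short Python sketch, using only the 36-per-1000 and 100-per-1000 figures quoted above (everything else is just arithmetic):

```python
# Ragout's bounding argument, using the figures quoted in the comment above.

lancet_under15_per_1000_births = 36   # Lancet II: under-15 deaths per 1000 births
prewar_under5_per_1000_births = 100   # pre-war study: under-5 deaths per 1000 births

# Every child who dies before age 5 also dies before age 15, so the under-5
# death rate per birth cannot exceed the under-15 death rate per birth.
lancet_under5_upper_bound = lancet_under15_per_1000_births

ratio = lancet_under5_upper_bound / prewar_under5_per_1000_births
print(f"Lancet under-5 rate is at most {ratio:.0%} of the pre-war estimate")
# -> at most 36% (and strictly less, since some of the 36 under-15 deaths
#    were of children aged 5-14). That is the "dropped by two thirds"
#    comparison the Times article reports.
```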


I don't have the energy to dive into this one right now. Ragout seems to get the better of it, but Davies is almost always correct (in my experience) in this sort of analysis. On a first pass, it seems that Davies is correct to criticize the wording of the original news article but that ragout (and Gilbert?) are correct on the substance. Is this point worth exploring further? Perhaps.

If the Lancet II survey team "made up" large portions of the data, then we would expect to find all sorts of anomalies like this. It is very hard (I would guess!) to make up data that "hangs together" and is consistent with other known information. Or could the difference be the fault of the previous survey? Or could it be due to chance?
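On the last question, whether a gap that large could plausibly be chance, a back-of-envelope test is easy to sketch. The birth counts below are hypothetical stand-ins (I haven't pulled the real denominators from either survey), and a naive two-proportion z-test ignores the design effect of cluster sampling, so it understates the true uncertainty:

```python
# Rough check of whether a gap like 36 vs 100 deaths per 1000 births could be
# sampling noise. The sample sizes are HYPOTHETICAL placeholders; the surveys'
# actual birth counts would be needed for a real test, and cluster sampling
# inflates the variance beyond what this simple test assumes.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided p-value for H0: the two underlying proportions are equal."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                   # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    z = (p1 - p2) / se
    # Normal approximation to the two-sided tail probability.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Hypothetical: 36 deaths per 1000 births vs 100 deaths per 1000 births.
print(two_proportion_z(36, 1000, 100, 1000))  # tiny p-value: hard to blame chance
```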
