Monday, May 12, 2008

IFHS As An Overestimate?

I have spent so much time and energy fighting the claim, made by Lancet defenders, that IFHS is an underestimate of violent mortality in Iraq, that I had never really considered the other side. Might IFHS be an overestimate? (Thanks to a reader for pointing this out.)

Consider one of the adjustments made in IFHS:


To estimate the most probable rate of violent deaths after the invasion and the range of uncertainty, we performed Monte Carlo simulations that took into account the survey sampling errors that were estimated with the use of the jackknife procedure and uncertainty regarding the missing cluster-adjustment factors, the level of underreporting, and the projected population numbers. We assumed that the level of underreporting was 35% (95% uncertainty range, 20 to 50), and its uncertainty was normally distributed.


I had initially misread this as referring to the issue of clusters that the interviewers could not visit because they were too dangerous. Looking more closely, I now see that this "underreporting" has nothing to do with missing clusters. Instead, I think, the concern is that some households might, for whatever reason, have failed to tell interviewers about violent deaths. I also think that "underreporting" covers the concern that entire households may have been killed, or that families with higher-than-average mortality were more likely to leave the country. In either case, no household would be left to interview, leading to underreporting.

But 35% is a big number! Where does it come from? I can't find any discussion of it in the paper or any supporting references. Why not use 5% or 200%? Note that the Lancet papers make no adjustment for underreporting, although they do discuss the issue. The Lancet authors use concerns about underreporting to argue, reasonably enough, that their estimates are "conservative."

But that means that we should not be comparing the 151,000 violent deaths estimate from IFHS with the 601,000 violent deaths from L2! The first number adjusts for underreporting while the second does not. This is an apples-versus-oranges comparison.

We can get a sense of the magnitude of this issue by comparing Tables 3 and 4 in IFHS. In Table 3, the overall violent mortality rate is 1.09 (0.81 to 1.50) per 1,000 per year without any adjustment for underreporting, but with the adjustment for missing clusters. In Table 4, on the other hand, we have these estimates for the three years after the invasion: 1.77, 1.56 and 1.67. The key point, as the legend indicates, is that these numbers are adjusted for "underreporting," unlike those in Table 3.

In fact, the main results section of the paper makes clear that this is a big issue.


Interviewers visited 89.4% of 1086 household clusters during the study period; the household response rate was 96.2%. From January 2002 through June 2006, there were 1325 reported deaths. After adjustment for missing clusters, the overall rate of death per 1000 person-years was 5.31 (95% confidence interval [CI], 4.89 to 5.77); the estimated rate of violence-related death was 1.09 (95% CI, 0.81 to 1.50). When underreporting was taken into account, the rate of violence-related death was estimated to be 1.67 (95% uncertainty range, 1.24 to 2.30). This rate translates into an estimated number of violent deaths of 151,000 (95% uncertainty range, 104,000 to 223,000) from March 2003 through June 2006.


The adjustment for underreporting raises the violent death rate about 53%, from 1.09 to 1.67; both of those numbers include the adjustment for missing clusters. At first glance a 53% increase does not seem to match the 35% figure quoted above, but they are two descriptions of the same correction: if 35% of violent deaths go unreported, the survey captures only 65% of them, and scaling 1.09 up by 1/0.65 (about 1.54) gives roughly 1.68, essentially the published 1.67.
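
As a sanity check on that arithmetic, here is a minimal sketch in Python; the variable names are mine, and the only inputs are the 1.09 rate and the 35% underreporting assumption quoted above.

unadjusted_rate = 1.09   # IFHS violent death rate per 1,000 per year, missing-cluster adjustment only
underreporting = 0.35    # the IFHS assumption: 35% of violent deaths go unreported

# If 35% of deaths are never reported, the survey captures only 65% of the true total,
# so the published rate gets scaled up by 1 / 0.65.
scale_up = 1 / (1 - underreporting)            # about 1.54
adjusted_rate = unadjusted_rate * scale_up     # about 1.68, essentially the published 1.67

print(round(scale_up, 2), round(adjusted_rate, 2))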

So, back of the envelope, the appropriate number to compare to the 601,000 excess violent deaths from L2 is not the 151,000 cited by IFHS. Instead, we should use 100,000 or so, the violent death estimate implied by a rate of 1.09. [151,000 times 1.09 divided by 1.67 is roughly 98,600, but I am rounding up to be "conservative."] In other words, once we account for the underreporting adjustment, the L2 estimate is six times larger than the one provided by IFHS, not merely four times larger.

(Of course, one could also adjust the L2 number up to account for underreporting. Assuming an adjustment factor consistent with IFHS yields an excess violent death estimate of around 900,000.)
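
For readers who want to check the back-of-the-envelope numbers, here is a rough Python sketch of both options, using only the figures quoted above (the variable names are mine):

ifhs_published = 151000   # IFHS violent deaths, based on the underreporting-adjusted rate of 1.67
l2_published = 601000     # L2 violent deaths, no underreporting adjustment

adjusted_rate = 1.67
unadjusted_rate = 1.09

# Option 1: strip the underreporting adjustment out of IFHS.
ifhs_unadjusted = ifhs_published * unadjusted_rate / adjusted_rate   # roughly 98,600

# Option 2: apply an IFHS-style adjustment to L2 instead.
l2_adjusted = l2_published * adjusted_rate / unadjusted_rate         # roughly 921,000

print(round(ifhs_unadjusted), round(l2_adjusted))
print(round(l2_published / ifhs_unadjusted, 1))   # about 6.1 -- the "six times larger" above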

The main point is that we want to compare L2 with IFHS on an equal footing, either with adjustment or without. The current convention --- of which I am as guilty as anyone --- of comparing 601,000 (L2) with 151,000 (IFHS) and then concluding that the L2 estimate is 4 times higher is fatally flawed. To compare L2 and IFHS sensibly, we must either adjust L2 upward (so that it also adjusts for underreporting) or adjust IFHS downward (so that it does not adjust). Whichever you prefer, the conclusion is the same: the L2 estimate of violent deaths is six times larger than IFHS.

The next step in the analysis is to do the same for the issue of missing clusters. Both IFHS and L2 have missing clusters, clusters that were too dangerous for the interviewers to enter. In such cases, it is reasonable to believe that mortality might be higher in those clusters than in ones that the interviewers were able to visit. (But see here for a contrary view.) IFHS adjusts for this bias, but L2 does not. So, to compare the two surveys on an equal footing, we need to either remove this adjustment from IFHS or add it to L2. Doing so will make L2 an even greater multiple of IFHS than it already is.

UPDATE: Note that one can think about this change as either an increase of 53% (1.09 to 1.67) or a decrease of 35% (1.67 to 1.09). I have also been told that Debarati Guha-Sapir refers to this as an "arbitrary fudge," but I am still tracking down that reference.

UPDATE II: Here is the quote in context:


Now comes the WHO survey. Conducted with the help of the Iraqi government, it is by far the most comprehensive mortality assessment to date. Interviewers visited 9345 homes in more than 1000 clusters. But its estimate of 151,000 violent deaths has come in for some criticism, too. Unlike other Iraq casualty surveys, this one includes an upward adjustment of 35% to account for “underreporting” of deaths due to migration, memory lapse, and dishonesty. “That is really an arbitrary fudge factor,” says Debarati Guha-Sapir, an epidemiologist at the WHO Collaborating Centre for Research on the Epidemiology of Disasters in Brussels, Belgium. But the number falls squarely within the range produced by a meta-analysis of all available mortality studies by Guha-Sapir and fellow centre epidemiologist Olivier Degomme. The Johns Hopkins figure is an outlier, she says.


UPDATE III: L2 also had a problem with clusters that were too violent to visit. From the paper:


The survey was done between May 20 and July 10, 2006. Only 47 of the sought 50 clusters were included in this analysis. On two occasions, miscommunication resulted in clusters not being visited in Muthanna and Dahuk, and instead being included in other Governorates. In Wassit, insecurity caused the team to choose the next nearest population area, in accordance with the study protocol. Later it was discovered that this second site was actually across the boundary in Baghdad Governorate.


So, we know that there was at least one cluster that was too violent to visit, and we know that they had a "protocol" to deal with this problem. (It is unclear how many such clusters there were.) This means that a fair comparison between L2 and IFHS would need either to adjust both for this problem or to adjust neither. The problem --- and, again, I am as guilty as anyone --- is that the standard 151,000 (IFHS) to 601,000 (L2) comparison adjusts only the former, not the latter.

Fortunately, Table 4 in the Supplementary material (pdf) shows that the post-invasion violent mortality rate unadjusted for missing clusters is 0.80 (0.63 to 1.03). So, we need to cut the 151,000 IFHS estimate of violent deaths (which is based on a mortality rate of 1.67) by more than half in order to provide a fair comparison with the 601,000 figure from L2. Scaling appropriately, the IFHS estimate would be about 72,000.
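
And a similar rough Python sketch of that last scaling, again using only the rates quoted above:

ifhs_published = 151000      # based on the fully adjusted rate of 1.67
rate_no_adjustments = 0.80   # supplementary Table 4: rate with neither adjustment applied
adjusted_rate = 1.67

ifhs_comparable = ifhs_published * rate_no_adjustments / adjusted_rate   # about 72,300

print(round(ifhs_comparable))                # the 72,000 figure above
print(round(601000 / ifhs_comparable, 1))    # about 8.3 -- the factor of 8 in the summary below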

Summary: Using the same assumptions for L2 and IFHS (no adjustments for underreporting or for clusters that could not be visited) generates estimates that differ by more than a factor of 8: 601,000 to 72,000.

Disappearing Questions

The Johns Hopkins webpage continues to be airbrushed in ways both small and large. Previous discussion here and here. Consider some changes:

1) The main page used to have a link to "Answers to Questions About Iraq Mortality Surveys." That link has now disappeared. And, along with the link, there is no longer a set of questions and answers.

2) I have not had the time or inclination to follow this closely but it seems like they had one version of this up for some time. Then, for whatever reason, they decided to delete that and create a second version (saved here as pdf). But now that version is gone too.

3) My guess is that all these changes are driven by the involvement of lawyers from Johns Hopkins. If it were up to, say, Les Roberts (and perhaps Gilbert Burnham), all this material would still be available. But the lawyers have told them either to be quiet or to be very sure that everything they claim is correct. Since there are numerous problems with the claims made in these documents, Johns Hopkins has decided that the simplest course of action is to post nothing.

Tuesday, May 06, 2008

"Civilian" Casualties

Consider the original news release about L1, before our friends from Hopkins dump it down the memory hole. The title is "Iraqi Civilian Deaths Increase Dramatically After Invasion." Lancet aficionados will recall that this news release (and, I think, the original Lancet editorial) caused some controversy because they both discussed "civilian" deaths. Needless to say, L1 did not differentiate between civilians and non-civilians in its methodology, so the authors have no idea whether the excess mortality was driven by military deaths (whether Saddam-era soldiers during the initial invasion or insurgents thereafter) or civilian ones. (Leaving aside deductions one might make by focusing on female/child/elderly deaths.)

Anyway, there was some dispute about this in the first few months (before I started paying attention to the debate), but I always thought that none of it mattered, that it was an honest mistake by someone who had not read the study closely. And, indeed, if I pointed only to the title of the press release, this would be reasonable. After all, Roberts/Burnham do not write the titles of press releases (one assumes). So, we should not blame them if some secretary in the news office, presumably acting in good faith, gave this release a misleading title.

But note how the release begins.


Civilian deaths have risen dramatically in Iraq since the country was invaded in March 2003, according to a survey conducted by researchers from the Johns Hopkins Bloomberg School of Public Health, Columbia University School of Nursing and Al-Mustansiriya University in Baghdad. The researchers found that the majority of deaths were attributed to violence, which were primarily the result of military actions by Coalition forces. Most of those killed by Coalition forces were women and children. However, the researchers stressed that they found no evidence of improper conduct by the Coalition soldiers.

The survey is the first countrywide attempt to calculate the number of civilian deaths in Iraq since the war began.


Leaving aside the anti-coalition slurs, we see that the author of the news release maintains that this is a count of civilians, rather than of Iraqis generally. Again, this could be just a mistake by someone in the Hopkins press office, but one expects more care to be taken with the actual body of a news release than with its title. Indeed, one would expect the news office to show a release to the researchers whose work it describes before it is made public. Did Roberts/Burnham know ahead of time that the news release would make claims about civilians? Did they approve it? Again, both are busy academics, so we might want to give them the benefit of the doubt. Perhaps they never saw the release, either before publication or even now. But then we read:


“Our findings need to be independently verified with a larger sample group. However, I think our survey demonstrates the importance of collecting civilian casualty information during a war and that it can be done,” said lead author Les Roberts, PhD, an associate with the Bloomberg School of Public Health’s Center for International Emergency, Disaster and Refugee Studies.

...

“There is a real necessity for accurate monitoring of civilian deaths during combat situations. Otherwise it is impossible to know the extent of the problems civilians may be facing or how to protect them,” explained study co-author Gilbert Burnham, MD, associate professor of International Health at the Bloomberg School of Public Health and director of the Center for International, Disaster and Refugee Studies.


Burnham and Roberts themselves used the term "civilian(s)" in direct quotes! There is no need to blame Tim Parsons, the media contact at Hopkins. He, or whoever wrote the news release and its title, was just repeating what the professors had told him: that the Lancet survey showed a dramatic rise in civilian mortality.

This is, in many ways, a minor sin. There is no doubt that thousands of civilians have died in Iraq. But the fact that Roberts/Burnham were happy to mislead readers about exactly what their study measured indicates that we need to maintain a skeptical attitude about the claims that they make.