100% Response Rate
My main source of suspicion with regard to Lancet II concerns its amazingly high response rate. See previous discussion here, here and here. Kieran Healy counters that concern by writing:
The immediate problem with this charge is that, as it turns out, phenomenally high response rates are apparently very common in Iraq, and not just in this survey. UK Polling Report says the following:
The report suggests that over 98% of people contacted agreed to be interviewed. For anyone involved in market research in this country the figure just sounds stupid. Phone polls here tend to get a response rate of something like 1 in 6. However, the truth is that – incredibly – response rates this high are the norm in Iraq. Earlier this year Johnny Heald of ORB gave a paper at the ESOMAR conference about his company’s experience of polling in Iraq – they’ve done over 150 polls since the invasion, and get response rates in the region of 95%. In November 2003 they did a poll that got a response rate of 100%. That isn’t rounding up. They contacted 1067 people, and 1067 agreed to be interviewed.
If this is correct, then the only bit of circumstantial evidence that Kane proffers in support of his insinuation is in fact a misconception based on his own ignorance.
Related discussion here. Healy is a serious scholar so his objections are worth careful study. The paper and associated slides are available. Keep in mind that we have three levels of references: Healy talking about what UK Polling Report (a blog written by Anthony Wells, not a formal organization) says about a paper by Johnny Heald of Opinion Research Business (ORB).
Looking at the paper and slides, it is clear that, although Heald (presumably) presented the paper, the other author, Munqith Daghir of IIACSS in Iraq, was responsible for the polling results. (It is not clear to me if Heald has ever been to Iraq, much less conducted a poll there.) ORB seems (?) like a competent organization, but its website makes little mention of doing work in Iraq. Instead, it reports that:
Research professionals at ORB have worked with over 15 world leaders on their Public Affairs initiatives including studies in the UK, US, Russia, South Africa, Malaysia, Taiwan, Turkey, Bulgaria, Lithuania, Malta and Gibraltar.
This report indicates that ORB first started to collaborate with Daghir sometime in 2005. The paper referenced by Healy/Wells reports that:
One man who listened to what his people allegedly thought about the invasion was Dr Munqeth Daghir, a lecturer in the University. He was caught between occupying forces saying that they had come to liberate Iraq and protect them from the Baathist regime and Iraqi exiles slowly returning saying that they wanted to help and could represent the real Iraqi. However, he knew that neither was a real reflection of what Iraqis really thought and wanted.
So, having read a book about market research and polling in between avoiding bombs that were dropping on his city, the first poll was conducted in Baghdad amongst a representative sample of 1,000 adults. That was in April 2003 and more than three years later and having the benefit of carrying out more than 150 polls, we want to demonstrate the advantages and difficulties of polling in Iraq and then quantify what people really thought both then (post conflict) and now.
Daghir is to be congratulated for making a life for himself as a pollster amidst the chaos of Iraq. But "having read a book about market research and polling" is not, shall we say, the most impressive pollster resume the world has ever seen. The entire article continues in a similar style. This is not so much a "paper" on the polling situation in Iraq as an advertisement for Daghir and his company. Now, there is nothing wrong with advertisements, and I have no reason to believe that Daghir and IIACSS aren't high quality pollsters, but Healy cites all this to demonstrate that the response rate for Lancet II is not an outlier result, that it is typical of Iraqi polls. Note again the quote from Wells which Healy selects and then comments on.
Earlier this year Johnny Heald of ORB gave a paper at the ESOMAR conference about his company’s experience of polling in Iraq – they’ve done over 150 polls since the invasion, and get response rates in the region of 95%. In November 2003 they did a poll that got a response rate of 100%. That isn’t rounding up. They contacted 1067 people, and 1067 agreed to be interviewed.
If this is correct, then the only bit of circumstantial evidence that Kane proffers in support of his insinuation is in fact a misconception based on his own ignorance.
Speaking of "ignorance," it is not clear if Healy (or even Anthony Wells) ever read the report in question. It does not mention ORB! It seems to me that Wells/Healy are mistaken, that ORB had no "experience of polling in Iraq" prior to 2005, that "they" did none of this work. Instead, Daghir/IIACSS have done some subcontracting for ORB in 2005/2006 and both IIACSS and ORB are eager to do more business together. But, Healy/Wells have no business claiming that ORB had anything to do with the poll results prior to 2005.
Consider this January 2006 interview (here) with Daghir:
In under three years he [Daghir] has run over 100 surveys for the UN, international NGOs, foreign governments, ad agencies, the media and FMCG companies. He has also carried out research on behalf of UK-based research agency ORB for almost a year now, looking at public attitudes towards topical issues and is about to explore attitudes towards tobacco.
In other words, Daghir has been working with ORB for "almost a year" as of January 2006. They had no relationship that we know of back in November 2003 when the implausible 100% response rate poll was conducted.
At that time [April 2003] I hadn’t much money. All of our savings in the bank had been looted. We hadn’t received the payment for most of the government projects we had done. So we were mostly bankrupt. I sold my car, and my partners sold things to fund this project. And 14 of my students on masters and PhD degree courses agreed to work with me for free. My daughter and my son worked as data punchers. I used my own computer with a generator. We started like a family business, in a big room with my son, my daughter, my partner’s son, my partner’s daughter, working together with our students. I told them how to code the questionnaire, how to enter the data. After two weeks I started the fieldwork.
It is a nice story and Daghir deserves credit for his ambition and bravery. But, he was a novice, self-taught surveyor. What are the odds that he got everything correct the first time?
I knew that Baghdad is distributed into nine different areas, and how many citizens lived in each one. But to tell the truth, I didn’t know anything about the real random systematic sample. We did it randomly by going to any house we wanted to go to. So it wasn’t a perfect sample.
Indeed.
The key table is on page 2 of the report. There are six surveys listed. The first one, from November 2003, reports a 100% "Response Rate" in a sample of 1,167. This is, of course, absurd. A proper poll picks out the 1,000 or so people it wants to sample from a larger population. It then searches for those people. You can never, ever find them all, no matter how "friendly" the local population. A charitable interpretation is that these are more "participation rates" than "response rates." (Background on terminology here.) In other words, the interviewers kept on looking for people to contact, perhaps by going around a market, perhaps by traveling from house to house. Some houses were empty. Some people refused to answer the door and/or talk with them. But, of those that they did contact, most were willing to participate. (The fact that the first column is labeled "Total Contacts" makes this plausible.)
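The distinction matters enough to make concrete. Here is a minimal sketch, using made-up field counts (not numbers from the report), of how the two rates diverge: a response rate is computed against everyone the survey set out to find, while a participation (cooperation) rate is computed only against those actually contacted.

```python
# Hypothetical field counts for a survey that designates 1,000 households.
# These numbers are illustrative only, not taken from the ORB/IIACSS report.
designated = 1000   # households picked from the sampling frame
not_found = 80      # empty houses, unreachable addresses
refused = 40        # contacted, but declined to be interviewed
completed = designated - not_found - refused   # 880 finished interviews

contacted = designated - not_found             # 920 doors answered

# Response rate: completions over the whole designated sample.
response_rate = completed / designated         # 0.88
# Participation (cooperation) rate: completions over contacts only.
participation_rate = completed / contacted     # ~0.957

print(f"response rate:      {response_rate:.1%}")
print(f"participation rate: {participation_rate:.1%}")
```

A literal 100% response rate would require finding every single designated household; a 100% participation rate only requires that nobody who answered the door refused, which is a much weaker (though still remarkable) claim.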
The main problem with Lancet II is not that participation rates were high. That's true for polling all over Iraq. The problem is with the contact rates. These polls seem to be ignoring that aspect in their reporting. No poll of 1,000 people finds everyone.
And, even if we wanted to believe this outlandish result, it would still have little bearing on the Lancet II data because the polls are almost three years apart. Perhaps response rates were extremely high in November 2003, but the Lancet II interviews were done in 2006. The remaining five polls in the table have an average response rate of 87%, with none higher than 91%. How did the Lancet surveys do more than 10 percentage points better?
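A back-of-the-envelope calculation shows just how extreme a literal 100% figure is. If each contact independently agrees to be interviewed with probability p, the chance that all n agree is p to the power n. This is a deliberately crude model (real refusals are not independent), but even at the 95% cooperation the paper itself calls typical, unanimity across the 1,067 contacts Wells quotes is essentially impossible:

```python
# Probability that every one of n independent contacts agrees to be
# interviewed, given a per-contact cooperation probability p.
# Crude independence assumption, for illustration only.
def prob_all_agree(p: float, n: int) -> float:
    return p ** n

# n = 1067 is the contact count quoted by UK Polling Report;
# p = 0.95 is the "typical" Iraqi rate the ORB paper claims.
for p in (0.95, 0.99, 0.999):
    print(f"p = {p}: P(all 1067 agree) = {prob_all_agree(p, 1067):.3g}")
```

Even if every individual contact agreed 99.9% of the time, seeing zero refusals in over a thousand contacts would happen only about a third of the time; at 95% per contact it is astronomically unlikely.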