10 Comments
Jun 6 · Liked by Shruti Rajagopalan

Thanks for covering what for me was the biggest question coming out of the Indian elections. Sample size and its construction was probably the reason. Larger samples and well-constructed survey plans might have narrowed the confidence intervals, but that costs more and involves delays. And who wants to be the last to declare the results of an exit poll?
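The sample-size point can be made concrete with the standard 95% margin-of-error formula for a proportion, MoE ≈ 1.96·√(p(1−p)/n). A minimal sketch (the vote share and sample sizes here are illustrative, not from any actual poll):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p estimated
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative: a party polling at a 40% vote share
for n in (1_000, 10_000, 100_000):
    print(f"n={n:>7}: +/- {margin_of_error(0.40, n) * 100:.1f} points")
```

Halving the margin of error requires four times the sample, which is exactly the cost-and-delay trade-off described above.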

Jun 6 · Liked by Shruti Rajagopalan

Great piece ma'am. I was also wondering the same thing and am glad that I found this article.


1. Dewey Defeats Truman. Basically sampling bias. I've never been convinced that polling in the Indian context is particularly scientific. Form 20 data isn't out yet, but I'd say that in most cases you'll see some very strong booth-level trends, so randomly selecting 1-2 booths in a constituency won't give a proper sample.

2. What puzzles me is that the exit polls were remarkably consistent with one another. That was a clear red flag to me: as if nobody was sure of their numbers, so they all sort of talked to each other and everyone gave "consensus estimates."

3. I haven't really studied the methodologies of any of the exit polls (if they've been published), or the vote-share predictions. FPTP typically makes seats harder to predict, but vote shares (as long as you've accounted for alliances properly) should be easier.
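The booth-sampling worry in point 1 is what survey statisticians call the design effect of cluster sampling: when voters within a booth resemble each other (high intra-cluster correlation), the effective sample size shrinks sharply. A rough sketch using the common approximation DEFF = 1 + (m − 1)·ICC, with made-up numbers:

```python
def effective_sample_size(n: int, cluster_size: int, icc: float) -> float:
    """Effective sample size under cluster sampling.

    Uses the standard design-effect approximation
    DEFF = 1 + (m - 1) * ICC, where m is respondents per
    cluster (booth) and ICC is the intra-cluster
    correlation of vote choice within a booth.
    """
    deff = 1 + (cluster_size - 1) * icc
    return n / deff

# Illustrative: 2,000 respondents drawn 50 per booth, with strong
# booth-level trends (ICC = 0.3 is invented, but plausible where
# booths vote in blocs)
print(round(effective_sample_size(2_000, 50, 0.3)))  # ~127
```

So a headline sample of 2,000 can carry the statistical information of barely a hundred independent respondents, which is why picking 1-2 booths per constituency is so fragile.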


While the issues you flagged could have contributed, individually or jointly, to the bizarre exit poll results, the elephant in the room, so to speak, is the bias of the person or team who compiles the report. The individual or team tends to "correct" the raw data and postulate a possibility. This is where the rub lies: personal bias creeps in and the output gets distorted.

While a lot has been said about sample size, the reluctance to share raw data is definitely not conducive to believing in these polls next time around.

author

You know when they get it right we call them “weights” and when they get it wrong we call it “bias.” But I think even weighting the data correctly and without bias needs better quality census data.
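The weights-vs-bias point can be illustrated with simple post-stratification: each respondent group is weighted by (population share) / (sample share), and the population shares come from the census, which is exactly where stale census data bites. A toy sketch with invented shares:

```python
def post_stratification_weights(sample_shares: dict, population_shares: dict) -> dict:
    """Weight each group so the weighted sample matches
    known population shares (simple post-stratification)."""
    return {g: population_shares[g] / sample_shares[g] for g in sample_shares}

# Toy example: urban voters overrepresented in the raw sample
sample_shares = {"urban": 0.50, "rural": 0.50}      # what the poll collected
population_shares = {"urban": 0.35, "rural": 0.65}  # what the census says
print(post_stratification_weights(sample_shares, population_shares))
# {'urban': 0.7, 'rural': 1.3}
```

If the census shares are themselves out of date, the "correct" weights are wrong, and the correction itself becomes the bias.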


Yes, I would say the probability of preference falsification is higher than that of the other factors in this particular election. As social beings, humans worry a lot about what others think of their views and actions. Many a time, they express falsely held views just to fit in with what the majority considers appropriate.


Could there be some role of social desirability bias?


Exit polls in India are notoriously unreliable. Until recently, Indian media openly admitted as much. Look at polls from 2014 and 2019 and you will see how wildly variable the polls were and how inaccurate most of them turned out to be. But the world collectively ignored that because "Modi Wave" was too compelling a story for both sides.


What do you think is the main theme behind Modi's loss of seats? As an outsider, it seems to me like Indians want to vote for strong local leaders over Modi's interns. Is my assessment correct?


Thanks for the article. Wrong sampling due to lack of data is the most plausible reason for the exit polls' misjudgement. However, I have a difference of opinion regarding media bias. I feel that even if the exit polls had suggested a coalition government with the BJP winning fewer seats than in previous elections, the media houses would still have been wary of portraying that, right?
