We know people don’t pay complete attention to surveys. Our new article shows how to tell whether that’s a problem in your own discrete choice research.
Like it or not, economists rely on surveys to conduct their research, and survey fatigue is an increasing issue. That fatigue has led many to conclude that household surveys are in crisis, in large part because of measurement error. These survey issues can cause serious problems for policy analysis. As Jayson Lusk and I showed in an article recently published in Ecological Economics, inattentive participants can reduce policy-relevant estimates by almost half. For that study, we identified inattentive participants with a question that had an obvious answer: if a participant missed it, we knew they weren’t paying attention. These questions effectively “trapped” inattentive respondents, who thereby revealed their inattention. Our results showed that people who missed the trap question responded differently from those who answered correctly.
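In code, the trap-question screen amounts to flagging respondents who miss an item with a known correct answer and comparing the two groups. A minimal sketch with pandas, where the column names, the correct answer ("blue"), and the willingness-to-pay values are all hypothetical illustrations rather than the actual study data:

```python
import pandas as pd

# Hypothetical survey data: 'trap_answer' holds each respondent's reply to an
# attention-check question whose correct answer is known (here, "blue").
df = pd.DataFrame({
    "respondent": [1, 2, 3, 4],
    "trap_answer": ["blue", "red", "blue", "green"],
    "wtp": [4.2, 9.8, 3.9, 8.5],  # illustrative willingness-to-pay responses
})

# Respondents who answered the trap question correctly are flagged attentive.
df["attentive"] = df["trap_answer"].eq("blue")

# Compare estimates between attentive and inattentive respondents.
means = df.groupby("attentive")["wtp"].mean()
print(means)
```

Any systematic gap between the two group means is the kind of difference the trap question is designed to reveal.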
The trap questions worked well for revealing that inattention is important. But what if you don’t want to find “revealed inattention” and would rather infer inattention from discrete choice data? A new article written by Jayson Lusk and me does precisely that. Titled “A simple diagnostic measure of inattention bias in discrete choice models” and released this week in the European Review of Agricultural Economics, it recommends that discrete choice modelers consider estimating a “Random Response Share” – or RRS for short.
Basically, you estimate a two-class latent class model in which all of the parameters in the second class are restricted to zero. The share of the sample placed in the restricted class is the RRS. It’s easy, simple, and comparable across discrete choice models – which makes it super useful when discussing survey data quality. As a proof of concept, we ran the RRS model on some of our trap question data from the Ecological Economics paper and showed that – as one would expect – people who missed the trap question were also much more likely to be inattentive in the discrete choice questions.
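The restricted latent class setup can be sketched in a few lines. With all parameters set to zero, the second class chooses uniformly at random among the J alternatives, so the mixture likelihood for each respondent blends a conditional logit with a uniform-choice term, and the mixing weight is the RRS. The simulation below is an illustrative sketch, not the authors' estimation code; the sample sizes, attribute structure, and true parameter values are all assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, T, J, K = 300, 8, 3, 2  # respondents, choice tasks, alternatives, attributes
X = rng.normal(size=(N, T, J, K))
beta_true, rrs_true = np.array([1.0, -0.8]), 0.25

# Simulate: ~25% of respondents are "random responders" who pick uniformly.
attentive = rng.random(N) > rrs_true
util = X @ beta_true
p = np.exp(util) / np.exp(util).sum(-1, keepdims=True)
choices = np.empty((N, T), dtype=int)
for i in range(N):
    for t in range(T):
        prob = p[i, t] if attentive[i] else np.full(J, 1 / J)
        choices[i, t] = rng.choice(J, p=prob)

def neg_loglik(theta):
    beta, pi = theta[:K], 1 / (1 + np.exp(-theta[K]))  # logit-transformed share
    u = X @ beta
    logp = u - np.log(np.exp(u).sum(-1, keepdims=True))  # conditional logit
    ll_attn = np.take_along_axis(logp, choices[:, :, None], axis=2)[:, :, 0].sum(1)
    ll_rand = T * np.log(1.0 / J)  # restricted class: zero parameters => uniform
    mix = (1 - pi) * np.exp(ll_attn) + pi * np.exp(ll_rand)
    return -np.log(mix).sum()

res = minimize(neg_loglik, np.zeros(K + 1), method="BFGS")
rrs_hat = 1 / (1 + np.exp(-res.x[K]))  # estimated Random Response Share
print(f"RRS estimate: {rrs_hat:.3f} (true: {rrs_true})")
```

Because the RRS is just a class share on the unit interval, it can be compared directly across surveys and model specifications, which is what makes it useful as a diagnostic.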
Of course, this measure only gives us a starting point for comparing the survey quality of different groups (producers/consumers/voters?) and methods (hypothetical/non-hypothetical?). A next step in this project will be to derive RRS estimates for data from these different groups and methods to see how different they actually are.
One limitation of the measure is that this article doesn’t tell you how to discourage inattention bias in the first place. That said, we do have a paper under review on that topic as well!