- Lazy, not biased: Susceptibility to partisan fake news is better explained by lack of reasoning than by motivated reasoning
- Gordon Pennycook
- David Rand
- Why do people believe blatantly inaccurate news headlines (“fake news”)? Do we use our reasoning abilities to convince ourselves that statements that align with our ideology are true, or does reasoning allow us to effectively differentiate fake from real regardless of political ideology? Here we test these competing accounts in two studies (total N = 3446 Mechanical Turk workers) by using the Cognitive Reflection Test (CRT) as a measure of the propensity to engage in analytical reasoning. We find that CRT performance is negatively correlated with the perceived accuracy of fake news, and positively correlated with the ability to discern fake news from real news – even for headlines that align with individuals’ political ideology. Moreover, overall discernment was actually better for ideologically aligned headlines than for misaligned headlines. Finally, a headline-level analysis finds that CRT is negatively correlated with perceived accuracy of relatively implausible (primarily fake) headlines, and positively correlated with perceived accuracy of relatively plausible (primarily real) headlines. In contrast, the correlation between CRT and perceived accuracy is unrelated to how closely the headline aligns with the participant’s ideology. Thus, we conclude that analytic thinking is used to assess the plausibility of headlines, regardless of whether the stories are consistent or inconsistent with one’s political ideology. Our findings therefore suggest that susceptibility to fake news is driven more by lazy thinking than it is by partisan bias per se – a finding that opens potential avenues for fighting fake news.
- I am an Assistant Professor in the Web Information Systems group at the Delft University of Technology. I am also a Research Fellow at the AMS Amsterdam Institute for Advanced Metropolitan Solutions, and a Faculty Fellow with the IBM Benelux Center of Advanced Studies.
My research lies at the intersection of crowdsourcing, user modeling, and web information retrieval. I study and build novel Social Data Science methods and tools that combine the cognitive and reasoning abilities of individuals and crowds with the computational power of machines and the value of large amounts of heterogeneous data.
I am currently active in three lines of investigation related to Social Data Science: Intelligent Cities (SocialGlass); Crowdsourced Knowledge Creation in Online Social Communities (SEALINCMedia COMMIT/, StackOverflow); and Enterprise Crowdsourcing (with IBM Benelux CAS).
- Modeling CrowdSourcing Scenarios in Socially-Enabled Human Computation Applications
- User models have been defined since the 1980s, mainly for the purpose of building context-based, user-adaptive applications. However, the advent of social networked media, serious games, and crowdsourcing/human computation platforms calls for a more pervasive notion of user model, capable of representing the multiple facets of social users and performers, including their social ties, interests, capabilities, activity history, and topical affinities. In this paper, we define a comprehensive model able to cater for all the aspects relevant for applications involving social networks and human computation; we capitalize on existing social user models and content description models, enhancing them with novel models for human computation and gaming activities representation. Finally, we report on our experiences in adopting the proposed model in the design and implementation of three socially enabled human computation platforms.
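The abstract above lists the facets such a user model must cover (social ties, interests, capabilities, activity history, topical affinities). As a minimal sketch of what a multi-faceted record like this could look like, here is a hypothetical Python data structure; all field names and the scoring convention are illustrative assumptions, not the paper's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class UserModel:
    """Hypothetical multi-faceted user model; field names are illustrative."""
    user_id: str
    social_ties: set = field(default_factory=set)         # ids of connected users
    interests: dict = field(default_factory=dict)         # topic -> affinity in [0, 1]
    capabilities: dict = field(default_factory=dict)      # skill -> proficiency level
    activity_history: list = field(default_factory=list)  # (timestamp, action) records

    def topical_affinity(self, topic: str) -> float:
        # Unseen topics default to zero affinity.
        return self.interests.get(topic, 0.0)
```

A single structure of this shape can then back several applications (social network analysis, human computation task routing, games) by reading different subsets of its facets.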
- Sparrows and Owls: Characterisation of Expert Behaviour in StackOverflow
- Question Answering platforms are becoming an important repository of crowd-generated knowledge. In these systems a relatively small subset of users is responsible for the majority of the contributions, and ultimately, for the success of the Q/A system itself. However, due to built-in incentivization mechanisms, standard expert identification methods often misclassify very active users as knowledgeable ones, mistaking activeness for expertise. This paper contributes a novel metric for expert identification, which provides a better characterisation of users’ expertise by focusing on the quality of their contributions. We identify two classes of relevant users, namely sparrows and owls, and we describe several of their behavioural properties in the context of the StackOverflow Q/A system. Our results contribute new insights to the study of expert behaviour in Q/A platforms, which are relevant to a variety of contexts and applications.
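To make the activeness-versus-expertise distinction concrete, here is one possible quality-based score, sketched in Python. This is not the paper's metric; the combination of acceptance ratio and vote ratio is an assumption chosen so that posting many low-quality answers cannot inflate the score.

```python
def expertise_score(answers_posted: int, accepted: int,
                    upvotes: int, downvotes: int) -> float:
    """Illustrative quality-focused score (an assumption, not the paper's metric).

    Multiplies the fraction of accepted answers by the fraction of positive
    votes, so the score reflects contribution quality rather than raw volume.
    """
    if answers_posted == 0:
        return 0.0
    acceptance_ratio = accepted / answers_posted
    total_votes = upvotes + downvotes
    # With no votes at all, treat vote quality as neutral.
    vote_ratio = upvotes / total_votes if total_votes else 0.5
    return acceptance_ratio * vote_ratio
```

Under a score of this shape, a highly active user with few accepted answers ranks below a less active user whose answers are consistently accepted and upvoted, which is exactly the separation a quality-based metric should produce.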