Meta Research Foundational Integrity Research

Closing Date: 22/11/2022

Funding research to explore integrity issues related to social communication technologies, such as misinformation, hate speech, violence and incitement, and coordinated harm.

Meta Research works on cutting-edge research with a practical focus and builds long-term relationships with top research institutions around the world. It also publishes papers, gives talks, and collaborates broadly with the academic community.

Meta is seeking to better understand the challenges related to integrity issues on social media and social technology platforms. Meta is providing $1 million in funding for foundational and applied research around challenges pertaining to platform governance in domains such as misinformation, hate speech, violence and incitement, and coordinated harm. The goal is to increase the scientific knowledge in these spaces and to contribute to a shared understanding across the broader scientific community and technology industry on how social technology companies can better address integrity issues on their platforms.

Research areas of interest include, but are not limited to, the following:

  1. Interventions to counter misinformation: Proposals that explore how best to measure the relative benefits and consequences of interventions that counter misinformation or provide access to authoritative content, in particular ways of reliably quantifying which interventions provide the most defence against harm. Interventions could include, but are not limited to: institutional intermediaries such as fact-checkers, community leaders, community moderation or feedback, credibility signals, and techniques related to inoculation theory such as “prebunking”. Causal evaluations that establish the relative impact of interventions on people’s attitudes, knowledge, and behaviour are of particular importance.
  2. Information processing around sensational, hateful, divisive, or provocative problematic content: Proposals that explore the social, psychological, and cognitive variables involved in the consumption of “grey area” content – sensational, provocative, divisive, hateful, misleading, polarising, or biased information – received and produced on social media platforms. In particular, understanding how people across different backgrounds, communities, and cultures interact with, are affected by, and decide to promote or share the spectrum of possibly problematic content. Studies that explicitly examine long-term exposure to these types of content or behaviours, and their effects on people with deeper or longer engagement, are encouraged. Understanding which aspects of the experience might help individuals engage more critically with, or more consciously avoid, problematic experiences is of interest. Measurement of perceptions and awareness of the prevalence or distribution of this content can be an additional impactful contribution.
  3. Violence and incitement, hateful and/or graphic content: Proposals that examine how people and organisations are leveraging social media to organise and potentially influence intergroup relations in their constituencies. Projects that probe the connection between online speech and subsequent consequences of both offline and online harms are of interest. In particular, research that explores deterrents to online and offline problematic behaviour related to dangerous speech and harmful conflict. Projects that focus on actors, content, and behaviours related to sharing inflammatory, offensive, or dangerous content are encouraged. Meta is keen to understand this space in markets with limited institutions, developing media markets, and variations in levels of democracy in non-Western contexts, or in additional contexts where sudden conflict crises have introduced new challenges.
  4. Misinformation across formats: Proposals that investigate the role of non-textual media (images, videos, audio, etc.) in the effectiveness of and people’s engagement with misinformation. This area covers basic multimedia such as infographics, memes, and audio, as well as more complex video and emerging technological advances. Of particular interest are cognition and susceptibility in the face of either simple or advanced manipulated multimedia (misleading synthetic “deepfakes” and simpler edited “cheapfakes”), especially the impact on people’s attitudes and behaviours. Additional areas could include the dynamics of rumours, out-of-context imagery, impersonation of public figures/organisations, etc.
  5. Trust, legitimacy, and information quality: Proposals that examine social media users’ exposure to, interaction with, and understanding of qualities of information, especially their attitudes towards and interpretations of information quality, trust, and bias. Studies focusing on the dynamics and effects of information diversity, whether from the user audience or the content producer perspective, will be accepted. Work may also focus on social media companies’ own efforts to maintain information quality, trust and credibility signals, and perceptions of legitimacy, particularly during crises or other critical events.
  6. Coordinated harm and inauthentic behaviour: Proposals that inspect information practices and flows across multiple communication technologies or mediums. In particular, individual, group, and community effects of information campaigns, inauthentic behaviour, or coordinated activities across multiple communities, networks, channels, or platforms. Studies may examine the impact of such harms on marginalised groups and communities.
  7. Digital literacy, demographics, and misinformation: Proposals that explore the relation between digital literacy and vulnerability to misinformation in communication technologies. Especially in some emerging markets, social media platforms have gained many participants among those new to the internet and populations with lower exposure to technology. Research that informs efforts to incorporate technology effectively and contextually into underserved geographical regions is of interest. This includes studies of individuals, small groups, and larger communities, but also wider inquiries into factors that shape the context for the user experience online.

Meta invites innovative proposals for convincing social science research with the potential to significantly advance the community’s understanding of the impact of technology on society. Proposals with the following two emphases are encouraged:

  • Studies that draw on traditional social science methods like interviews, surveys, ethnographic observation, content analyses, and survey/behavioural experiments, or innovative mixed methodological approaches that combine these methods.
  • Comparative research and inclusion of non-Western regions that have experienced a growth in social media platform use, including South and Central America, Sub-Saharan and North Africa, the Middle East, and Central, South, and Southeast Asia. Proposals from researchers, or collaborations with researchers, based in the country/countries being researched are encouraged.

Research is not restricted to focusing on Meta apps and technologies.

Funding body Meta Research
Maximum value 100,000 USD
Reference ID S24460
Category Economic and Social Research; Science and Technology
Fund or call Fund