Europeans Call on Media, Technology, and Government for Strong Oversight of AI-Generated Content
As artificial intelligence technology becomes increasingly sophisticated, concerns are rising among European citizens about the spread of misinformation and manipulation through AI-generated content. A recent study conducted by the Vodafone Institute surveyed over 12,000 eligible voters across twelve European countries, examining public attitudes toward the verification and oversight of digital content created by AI.
The findings reveal that Europeans favor a multi-pronged approach to safeguard democratic processes from the risks posed by AI-driven disinformation. The majority believe that traditional media organizations, AI detection tools, and government regulation must work together to authenticate and monitor digital content, rather than relying solely on individual media literacy.
The Growing Importance of Traditional Media
The survey indicates a renewed trust in traditional media outlets such as television and radio for political information, with 60% of respondents across all age groups turning to these sources. This reliance increases with age, while younger demographics continue to consume more news via social media. Nevertheless, 53% of all participants acknowledge that the significance of editorially curated news has grown in light of the rising threat of fake news and misinformation.
Younger Generations and Exposure to Misinformation
According to the research, younger Europeans are more likely to encounter fake news online. Nearly 38% of respondents aged 18 to 24 reported frequent exposure to fake news in recent months, compared to just 16% among those over 64. This disparity is attributed to the greater online activity of younger age groups, which increases their risk of encountering manipulated or false content.
AI Amplifies the Spread of Fake News
The study highlights concerns that AI can accelerate the creation and dissemination of deceptive media, including convincing videos and images that can influence public opinion and potentially impact elections. More than one-third of those surveyed view AI as a significant threat to democracy, particularly in the context of possible election interference and the erosion of trust in political institutions. Skepticism towards AI-generated content remains high, especially in countries like Germany and the United Kingdom, while southern European nations display more confidence in the technology's benefits.
Preferred Solutions: Media, Detection Tools, and Regulation
Europeans express strong support for a combination of methods to address AI-driven misinformation. Traditional media are regarded as key verification channels by 45% of respondents, especially older individuals. Meanwhile, 43% consider AI detection tools (software designed to identify digitally altered images, videos, or audio) promising, with support particularly strong among younger and more educated participants. State oversight and regulatory measures are also favored by 58% of those surveyed, with support increasing with age and education level.
Specific measures that receive widespread backing include mandatory labeling of AI-generated content (65%), systematic fact-checking (59%), and the involvement of national supervisory authorities. There is also notable interest in using AI detection tools for verifying political information, with about one-third of respondents open to their implementation in this context.
Technology Adoption Patterns by Age
The study finds that younger Europeans are more willing to engage with AI technologies, with 79% of those aged 18 to 24 having used AI tools at least once for political topics, compared to 24% among those over 64. ChatGPT leads in adoption for political information, although its use remains limited overall; only 11% of participants regularly use AI tools for political news, with most still relying on established media and news portals.
Public Attitudes Toward Trust and Regulation
Despite the growing use of AI, skepticism toward AI-generated political content persists. Across all age groups, 40% of respondents distrust AI-based content, a figure comparable to the skepticism shown toward social media. Political advertising is viewed with even greater suspicion. Support for regulatory frameworks such as the Digital Services Act and the AI Act is particularly high in countries such as Portugal and lower in others, including Poland. The study also notes that higher education levels correlate with greater support for regulatory and fact-checking initiatives.
The results underscore the need for a coordinated strategy involving trusted news organizations, advanced technological tools for content verification, and robust government oversight to protect democratic institutions from the challenges posed by AI-generated misinformation.