
The rise of Artificial Intelligence (AI) has introduced a new dimension to the information landscape, enabling the creation of fabricated videos, images, and voices that convincingly imitate reality. Known as “deepfakes”, these synthetic media forms have become increasingly sophisticated and accessible, prompting concern among European policymakers and citizens alike. While they may once have appeared as online jokes or curiosity pieces, they now pose a serious threat to democratic trust across Europe. In the lead-up to elections, during conflict, or in everyday social media use, manipulated content can spread fast, eroding confidence in what we see and hear.
For example, during Slovakia’s 2023 election campaign, a deepfake audio clip impersonated one of the candidates discussing electoral fraud with a prominent journalist. Although both denied its authenticity, the clip, together with other misleading videos, went viral, feeding an atmosphere of suspicion among voters about what was real. When the candidate, who had been leading in the polls, went on to lose, speculation spread that this was the first election “swung” by deepfakes (Meaker, 2023). The pattern extends well beyond Slovakia: of the 87 countries that have held elections since 2023, 33 have experienced deepfake-related incidents (Surfshark, 2025). Even though there is not yet clear evidence that any single election result in the EU was decisively changed by a deepfake, the psychological impact is real: people may come to doubt everything, or trust nothing, and that shifts how democracy works.
The impact of deepfakes, however, extends beyond electoral contexts. Every day, social media users, particularly young people, interact with algorithmically curated content that blurs the distinction between authentic and artificial. A study by the European Digital Media Observatory (EDMO, 2024) found that over 40% of respondents had encountered AI-generated media in the previous six months, often without recognising it as such. This constant exposure not only shapes public opinion but also affects users’ self-perception, fostering confusion, distrust, and polarisation. Over time, the inability to differentiate between fact and fabrication risks weakening citizens’ confidence in legitimate journalism and institutions.
In response, the European Union has introduced a comprehensive regulatory framework to ensure accountability and transparency in the digital sphere. The Artificial Intelligence Act, adopted in 2024, is the world’s first major attempt to regulate AI according to its level of risk (European Commission, 2025b). It prohibits systems that pose “unacceptable risks” to fundamental rights, such as social scoring or real-time facial recognition in public spaces, and requires strict oversight of high-risk applications used in education, employment, or law enforcement. The Act also mandates transparency for general-purpose AI, obliging providers to clearly label synthetic or manipulated content and to disclose when users are interacting with AI systems. These provisions aim to balance technological innovation with ethical responsibility.
Complementing the AI Act, the Digital Services Act (DSA) introduces new safeguards for the online platforms where deepfakes and disinformation circulate most widely. Under the DSA, users have the right to understand why certain content is recommended to them and can opt for feeds that are not based on profiling. Platforms are required to remove illegal content swiftly, explain moderation decisions, and provide avenues for appeal. The Act also prohibits targeted advertising to minors and the use of sensitive personal data for ad targeting. Very large online platforms (those with more than 45 million users in the EU) must undergo independent audits and risk assessments to limit the spread of harmful content (European Commission, 2025a).
Together with the Digital Markets Act, which addresses the concentration of power among large digital “gatekeepers”, these laws represent a significant step toward a more transparent and accountable online environment. Nevertheless, regulation alone cannot resolve the underlying issue. The success of these policies depends on citizens’ ability to navigate digital spaces critically and responsibly.
Developing media literacy across all age groups has become a central priority in addressing the challenges of the digital information environment. Citizens who possess the skills to verify sources, identify manipulation, and understand how algorithms shape what they see online are less likely to fall victim to misinformation. According to the European Commission’s Standard Eurobarometer 102 (European Commission, 2024), 82% of Europeans believe that false or misleading information represents a threat to democracy, while 77% consider it a serious issue within their own country. These findings highlight not only the scale of public concern but also the urgent need for education systems and lifelong learning initiatives to strengthen critical thinking and digital literacy as essential components of democratic resilience.
The proliferation of deepfakes reflects a broader shift in the information ecosystem: the erosion of visual and auditory evidence as markers of truth. Restoring public confidence requires both structural reforms and individual awareness. The European Union’s recent legislative efforts lay the groundwork for a safer digital environment, yet their effectiveness ultimately depends on how citizens engage with technology.
Trust in the digital age will not be maintained by restricting innovation but by ensuring people understand it. The ability to think critically, question digital content, and identify manipulation is now as important as traditional civic education. As deepfakes continue to evolve, so too must our capacity to see and believe responsibly.
References
European Commission. (2024). Standard Eurobarometer 102 – Autumn 2024. European Commission.
European Commission. (2025b). AI Act | Shaping Europe’s digital future. European Commission.
Meaker, M. (2023). Slovakia’s Election Deepfakes Show AI Is a Danger to Democracy. Wired.
Surfshark. (2025). Global map: Election-related deepfakes reached 3.8 billion people. Surfshark.





































































































































































