Every generation faces new frontier problems. For today’s teens, the frontier is digital: a world of likes, shares, endless scrolling, and unseen pressures. With concerns mounting about mental health, attention spans, and the hidden harms of social media, Australia has taken a decisive step: a law banning social media access for under-16s. As that policy takes effect, the question for other countries is unavoidable: is this kind of ban an effective safeguard, or a misunderstanding of both technology and young people?

In November 2024, the Parliament of Australia passed a world-first law barring under-16s from holding social media accounts. Framed by the Government as a necessary step to protect young people’s mental health, the measure carries hefty penalties, up to AU$50 million (around €28 million), for companies that fail to introduce effective age-verification systems. Research increasingly associates excessive use of social networking services with stress, anxiety, poor sleep, low self-esteem, and other harms, leading policymakers to explore ways to balance technology access with support for young people’s emotional well-being.

This law, which comes into force in December 2025, requires platforms to take “reasonable steps” to keep younger users off their sites. In practice, this means deploying appropriate “age assurance” tools. Yet these age-verification technologies are controversial: they can mistakenly block teens who are old enough to participate, while allowing others to bypass the rules.

Critics argue the law was rushed through with limited public consultation and risks cutting young people off from valuable channels of digital connection. Nonetheless, Iyer and Haidt (2025) suggest the policy enjoys broad support among parents in Australia and across the globe. Some experts, however, caution that the evidence underpinning the ban does not conclusively support such sweeping restrictions. Meanwhile, existing protections, such as the nominal over-13 rule and platform-led safety features, are fragmented and inconsistently implemented. Researchers note that children still receive unwanted contact from adults, highlighting how far current systems fall short. Until broader regulations, such as the UK’s Online Safety Act or the EU’s Digital Services Act, are fully enforced, meaningful safeguards remain elusive.

While Australia has broken new ground, European governments are also moving quickly to address similar concerns. Across the EU, policymakers are debating how best to shield minors from online harms, with proposals ranging from stricter age thresholds to novel age-verification systems. These developments reflect a growing consensus that the current reliance on self-reported ages and weak platform safeguards is no longer enough.

France, Spain, and Greece have been at the forefront of this push, calling for a so-called “digital majority” age across the EU. Under their proposal, children below a set minimum age, often framed as under 15, would not be able to open social media accounts without parental consent. French officials have been particularly vocal, with the country’s AI and digital affairs minister arguing that only a robust, EU-wide ban, backed by mandatory verification, can provide meaningful protection for younger users. Spain has echoed this position, pressing the European Commission to make both age-verification and parental controls compulsory for platforms.

Greece, meanwhile, has gone further by experimenting with a homegrown solution. Its “Kids Wallet” app is designed as a digital identity tool for minors, linking civil registry data with parental authorization to verify a child’s age. The app also incorporates features like screen-time limits and app blocking, reflecting a dual aim: safeguarding children while giving parents greater oversight. By promoting this initiative, Athens hopes to inspire an EU-wide framework for online child protection that balances innovation with safety.
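Public technical details of Kids Wallet are sparse, but the data-minimising flow described above can be sketched in outline: the wallet combines a registry-backed birth date with a parental-consent record and releases to the platform only what it needs, an over/under flag plus consent status, never the birth date itself. The record types and function below are hypothetical illustrations, not the actual Kids Wallet API.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical stand-ins for civil-registry and parental-consent records.
@dataclass
class RegistryRecord:
    child_id: str
    birth_date: date

@dataclass
class ConsentRecord:
    child_id: str
    parent_approved: bool

def age_attestation(record: RegistryRecord, consent: ConsentRecord,
                    minimum_age: int, today: date) -> dict:
    """Return only what a platform needs: an age-threshold flag and
    consent status. The birth date itself never leaves the wallet."""
    years = today.year - record.birth_date.year - (
        (today.month, today.day) < (record.birth_date.month, record.birth_date.day)
    )
    return {
        "meets_minimum_age": years >= minimum_age,
        "parental_consent": consent.parent_approved,
    }

# Example: a 14-year-old with parental consent, checked against a 15-year threshold.
record = RegistryRecord("child-001", date(2011, 6, 1))
consent = ConsentRecord("child-001", parent_approved=True)
print(age_attestation(record, consent, minimum_age=15, today=date(2025, 9, 1)))
# → {'meets_minimum_age': False, 'parental_consent': True}
```

The design choice this illustrates is data minimisation: because the platform receives a boolean attestation rather than identity documents, the privacy risks discussed later in this piece (stored IDs, biometric estimates) are substantially reduced.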

Several other Member States, including Italy and Denmark, are piloting age-verification schemes that align with the European Digital Identity Wallet project. These trials aim to test privacy-preserving technologies that ensure both security and cross-border interoperability. Parallel efforts are underway to encourage the European Commission to require stronger parental controls on internet-enabled devices and to enforce “age-appropriate design” principles, limiting addictive or manipulative features in apps and platforms.

Yet, as with Australia’s law, Europe faces key challenges. Age-assurance technologies remain fallible, sometimes excluding teenagers who should legally be allowed access or raising privacy concerns among families wary of facial recognition and biometric data. Moreover, while Australia can legislate at a national level, Europe must balance the interests and legal frameworks of 27 Member States, making uniform standards harder to achieve. Enforcement also remains an open question: without consistent oversight, even strong laws may prove ineffective. A further complication lies in the privacy trade-offs that age-gating tools demand. Teenagers may be required to upload official identification documents, raising fears about how that sensitive data is stored and who has access to it. Other systems rely on short video selfies, with AI tools estimating a user’s age from facial features, and such estimates remain error-prone, particularly for users close to the threshold age.

In both contexts, governments are increasingly willing to impose legal age thresholds and to hold platforms accountable through regulation, including fines. But the tensions between protection and social participation online, between enforcement and privacy, and between national action and international coordination show how complex the issue has become. At present, much of the responsibility for monitoring social media use falls on parents, even though companies profit from keeping young users engaged. A clear minimum age standard changes that dynamic: it shifts enforcement to the platforms themselves, reduces pressure on children, and gives families some relief from the constant struggle over online activity. Practical measures like reliable, non-intrusive age verification and stronger guidance for teens can make social media safer without taking away opportunities for thoughtful engagement.

References

Datta, A., & Moreau, C. (2025). France, Spain and Greece urge EU to curb child access to social media. Euractiv.

Datta, A., & Moreau, C. (2025). Greece introduces “Kids Wallet” to urge EU action on child protection. Euractiv.

Datta, A. (2025). How EU countries are clamping down on kids’ access to social media. Euractiv.

European Commission. (n.d.). A digital ID and personal digital wallet for EU citizens, residents and businesses. EU Digital Identity Wallet Home.

Iyer, R., & Haidt, J. (2025). Australia’s social media age limit policy delays account creation, not access to content. After Babel.

Stokel-Walker, C. (2025). Social media bans for teens: Australia has passed one, should other countries follow suit? The Guardian.

Sullivan, H. (2024). Australia passes world-first law banning under-16s from social media despite safety concerns. The Guardian.
