Governments in more than a dozen countries are trying to limit minors' access to social media. They include France, New Zealand, Norway, Malaysia, Slovenia, Spain and the United Kingdom, and Germany is considering action. Australia became the first country to ban social media for users under 16, with the ban taking effect in late 2025, and Indonesia introduced age limits in late March 2025.
The aim is to protect young people. Children's screen time is often high and a source of family conflict: a 2025 OECD study found that half of all 15-year-olds in OECD countries spent at least 30 hours a week on digital devices. But are age limits the best way to address social media's harms?
Psychologist and neuroscientist Christian Montag says the debate often misses the point. When new technologies appear, moral panic follows, and calling for bans can be an easy way to attract attention without solving the root problems. Educational researcher Nina Kolleck agrees: raising the minimum age, as Australia did, doesn't resolve the fundamental issues. Those include addictive personalized algorithms and design gimmicks such as push notifications and endless scrolling, all meant to maximize the time users spend on a platform. Once there, young users can also encounter violent or sexual content.
While digital media let children learn, play and socialize, excessive use can have negative consequences. The OECD study links heavy device use to sleeplessness, physical inactivity, cyberbullying, social isolation and depression. Montag notes that isolating social media's exact effects is difficult because environment and genetics also play a role, but excessive smartphone use is fairly reliably associated with poorer academic performance and body image problems.
Children and teens are particularly vulnerable because the brain's prefrontal cortex continues developing into the early to mid-20s, making self-regulation harder. Yet adults also struggle to regulate their screen time, which raises doubts about how much difference bans for under-14s or under-16s will make: a limit that simply expires at a set age delays exposure without teaching users to manage it. Many experts therefore see age limits as one tool among several and worry they distract from more effective measures.
Kolleck points to the EU's Digital Services Act (DSA) as containing several potentially effective measures. The DSA obliges large platforms to assess and reduce systemic risks and to be more transparent about their algorithms. It also requires companies to give independent researchers access to data so that outside monitors can study how platform features influence users. Montag says such access has long been insufficient and remains a major problem despite the DSA. The law's reach and enforcement have limits, however, and political resistance, including threats of retaliatory measures from abroad, complicates implementation.
Another approach is modifying platform design for younger users. Douyin, the Chinese version of TikTok, offers an under-14 mode that limits daily use to 40 minutes, after which no new content appears. TikTok implements time limits too, but they are easy to disable: users under 13 theoretically need a guardian to enter a passcode to extend their time, while users 13 and older set their own code. The system is undermined when users lie about their age during signup.
Montag argues platforms need fundamentally different designs, especially if the goal is healthier usage for kids and adults. The current data‑driven business model, which surveils users to maximize time online, is inherently unhealthy. Alternatives could include subscription financing or other models that don’t incentivize hooking users. If platforms weren’t designed to keep people glued to screens, they would likely be less engaging but also less harmful.
Beyond age limits and media education, the experts argue, platforms themselves must face pressure: strict regulation and enforcement, greater transparency, and meaningful access for researchers are needed to understand and mitigate social media's harms.
This article was originally published in German.