More than a dozen governments are trying to restrict minors’ access to social media. Countries including France, New Zealand, Norway, Malaysia, Slovenia, Spain and the United Kingdom have proposed or enacted limits; Germany is considering action. Australia became the first country to ban users under 16, with its law taking effect in late 2025, and Indonesia introduced age limits in late March. The stated aim is to protect children and teenagers from harm, but critics say age thresholds alone are an incomplete solution.
Concerns driving regulation are familiar: young people spend a lot of time on screens, which can spark family tension and affect health and wellbeing. A 2025 OECD report found that half of 15‑year‑olds in OECD countries used digital devices for at least 30 hours a week. High device use has been linked with sleep problems, reduced physical activity, cyberbullying, social withdrawal and higher rates of depression. Researchers have also linked heavy smartphone use to poorer school performance and body‑image problems, though disentangling causation from correlation is difficult because environment and genetics also shape outcomes.
Experts warn the public debate often focuses on the wrong lever. Psychologist and neuroscientist Christian Montag argues that new technologies tend to trigger moral panic and that headline-grabbing calls for bans can divert attention from the mechanisms that cause harm. Educational researcher Nina Kolleck agrees: raising the minimum age — as Australia did — does not directly address the platform features that drive problematic use.
Those features are at the heart of the concern. Personalized, engagement‑maximizing algorithms, push notifications, endless scrolling and similar design choices are deliberately engineered to prolong attention. Once young people are on a platform they can encounter violent or sexual content, harassment, or other risks. Because the prefrontal cortex continues developing into the early to mid‑20s, children and adolescents have less capacity for self‑regulation, making them particularly susceptible to addictive patterns — but adults are affected too, which complicates reliance on age limits alone.
Many specialists thus see age limits as one tool among several rather than a silver bullet. Kolleck highlights the European Union’s Digital Services Act (DSA) as an example of regulation that targets systemic harms: large platforms must assess and mitigate systemic risks, increase transparency about how algorithms work, and provide independent researchers with access to data so outside monitors can study effects. Montag notes that researcher access has historically been inadequate and remains a major sticking point despite the DSA, and that enforcement and political pushback limit the law’s reach in practice.
Design changes targeted at younger users offer another approach. The Chinese short‑video app Douyin (the mainland version of TikTok) has an under‑14 mode that caps daily use at 40 minutes. TikTok itself offers time limits, but they are easily bypassed: under‑13 users are supposed to need a guardian’s code to extend their screen time, while older teens can set their own code. Both safeguards are undermined when people lie about their age at signup.
Montag and others argue that more fundamental shifts are needed: platforms built on surveillance‑based, attention‑maximizing business models will always have incentives to keep people glued to screens. Alternatives could include subscription models or other financing structures that do not reward maximizing time on site. If products were designed without the primary goal of maximizing engagement, they would likely be less addictive, even if they proved less lucrative for advertising‑driven firms.
In short, age restrictions can reduce some harms but risk giving a false sense of security if they draw focus away from algorithmic design, weak enforcement and limited research access. A more effective strategy would combine sensible age policies with stronger regulation and enforcement, meaningful transparency about algorithms and content moderation, improved access for independent researchers, and platform design reforms or business‑model changes that reduce incentives to manipulate attention.
This article was originally published in German.