The New Zealand Time

The evidence is in, so why keep hurting our kids?

2026-03-13 - 16:06

Comment: New Zealand’s Parliament has just completed the most comprehensive examination yet of the online harms facing young people. Drawing on more than 400 submissions, the committee concluded that the scale of online harm warrants urgent government action, and among its recommendations was the introduction of age restrictions for social media platforms. Most parties now support exploring age restrictions, but Act and the Greens have both expressed opposition, citing concerns that such restrictions are disproportionate or could push young people into riskier corners of the internet.

The Select Committee’s recommendation is not radical. Arguably, it is a necessary public health response to products intentionally designed to be addictive (or, as Big Tech would have it, to ‘maximise engagement’). And these are design choices that are harming children. Documents disclosed in US court proceedings show Google executives describing their design goal as not viewership “but viewer addiction”. Tech executives earn millions each year, and their compensation is not tied to user wellbeing.

At Meta, in one internal study, 18 out of 18 wellbeing experts warned that beauty filters were deeply concerning for young people, particularly girls, who tend to be more vulnerable to the negative effects of image-based content. The company chose to keep the filters, limiting only those associated with cosmetic surgery. Another Meta study found that users who stopped using social media for just one week reported lower levels of depression, anxiety, loneliness and social comparison. The company didn’t change its product in response. It shut down the research.

Of course, we didn’t need the release of internal documents from Silicon Valley to tell us this. Independent research consistently mirrors these findings, showing population-wide harms in young people, including depression, anxiety, self-harm, body image disorders, disrupted sleep and reduced cognitive functioning.
How much more evidence do we need before we are willing to act? Just weeks ago in the United States, parents painted the names of 108 children on the pavement outside Snapchat’s headquarters. These were children allegedly lost to harms linked to the platform, including bullying, sexual exploitation and buying drugs laced with fentanyl. If a physical consumer product were linked to the deaths of 108 children, it would be recalled immediately until it could be proven safe. Social media should not be treated differently because the product is digital.

The Select Committee undertook a rigorous process to examine these issues. The inquiry was initially called for by the Act Party, yet the party then argued the committee had not identified a proportionate response, despite the breadth of evidence presented. The Green Party raised a different concern, suggesting that age limits would drive kids to dangerous, fringe platforms. This is not backed by empirical study. What the research does show is that when youth are restricted from mainstream social media, they simply move their group chats to gaming platforms and messaging apps where their friends are. Further, the argument ignores that children see horrific material, including extreme violence, on social media not because they seek it out but because it is recommended to them algorithmically.

Opponents of age restrictions also point to studies claiming no link between social media and mental health decline, or argue that age verification is a technological impossibility requiring invasive government surveillance. Neither argument holds up to scrutiny. The ‘no harm’ studies often have methodological flaws. For instance, a widely cited recent Australian study found that moderate use of social media was better than no use. But the researchers measured screen time only between 3pm and 6pm on weekdays, and classified ‘moderate use’ as anything from more than zero hours up to 12.5 hours.
This means a teen who has an after-school job and then comes home and binges on social media all night is classified as a social media abstainer, while a kid who uses their mum’s social media account for a couple of minutes a week to message a friend counts as a ‘moderate’ user.

As for the claim that platforms cannot verify age without harvesting government IDs, current trials in the US reveal that companies like Meta and Google already use sophisticated data profiling to accurately ‘re-age’ young users. They promote themselves as being good at identifying a user’s true age so they can target them with ads, which contradicts their claim that the same capability is impossible when they are asked to use it to protect children.

Critics of age restrictions often point instead to alternative solutions, such as media literacy or safety-by-design programmes. Media literacy programmes may improve young people’s ability to recognise online manipulation, but there is limited evidence that they translate into improved mental health outcomes, such as reductions in anxiety, depression or self-harm. Likewise, the idea that we can simply redesign these platforms to make them safe for developing brains is optimistic at best. Research increasingly shows that even when harmful content is removed, the format itself can cause harm. Studies have suggested a link between short-form video platforms, for example, and reduced attention, impaired inhibitory control and cognitive disruption in both adults and children. Governments may be able to regulate algorithms or platform features to some extent, but they cannot eliminate the underlying design structures that drive engagement.

Age restrictions are not a complete solution, nor should they be presented as one. Protecting young people online will require a multi-layered approach that includes better regulation, stronger platform accountability, improved parental support and ongoing research. But age limits will create a barrier.
They signal clear social expectations. And they buy time while governments work to address the deeper structural problems embedded in these platforms.

Right now, children of any age can access systems designed to maximise engagement: systems that combine addictive design features, harmful content, algorithmic amplification, cyberbullying and exposure to predators. Not surprisingly, almost four in 10 New Zealand children say they wish social media had never been invented.

At the very least, we should introduce some friction into the system. Continuing to debate endlessly while harm accumulates is not a responsible option. Age restrictions are one of the few tools available to governments to disrupt this cycle of harm, at least until the companies that build these platforms can demonstrate that their products are genuinely safe for developing brains. So far, that burden of proof remains unmet.