
How Australia’s First Day Without Social Media Looked for Millions of Children


Wed 10 Dec 2025 | 09:45 PM
(Illustration by Hiroko Aida)
Taarek Refaat

Millions of children across Australia woke up on Wednesday to find themselves locked out of their social media accounts, marking the first day of a world-first nationwide ban aimed at shielding those under 16 from addictive algorithms, online predators and digital bullying.

Under the new law, ten major platforms (Instagram, Facebook, Threads, Snapchat, YouTube, TikTok, Kik, Reddit, Twitch and X) confirmed they would block access for users under 16 and deploy age-verification tools. However, several companies warned that the restrictions alone may not be enough to make children safer.

Prime Minister Anthony Albanese called it a “day of pride” for Australia.

“This is the day Australian families take back control from big tech,” he told ABC, adding that the new rules ensure children “get to live their childhoods” while giving parents peace of mind. Still, he acknowledged the rollout “won’t be easy.”

Platforms must prove they are taking “reasonable steps” to disable underage accounts and prevent new ones from being created, or face fines of up to A$49.5 million (US$32 million).

Authorities expect some children, and parents, to try circumventing the ban, though no penalties will apply to the users themselves.

Snapchat accounts belonging to under-16s will be suspended for three years, or until the user turns 16. YouTube will automatically log out affected users starting December 10 and hide their channels, though children can still watch videos without logging in.

TikTok said it would deactivate all accounts used by under-16s, and past posts from young users will no longer be visible. Meta began removing teen accounts on December 4, allowing reactivation once users reach the legal threshold.

Platform X has not clarified how it intends to comply and continues to object to the law, calling it an infringement on free expression.

Several popular platforms, including Discord, GitHub, Google Classroom, Lego Play, Messenger, Pinterest, Roblox, Steam, Steam Chat, WhatsApp and YouTube Kids, were excluded from the ban.

The decision to leave Roblox out sparked confusion among parents and safety advocates, especially after recent reports of adult predators targeting children inside the game.

Australia’s eSafety Commissioner, Julie Inman Grant, said talks with Roblox began in June, and the platform agreed to introduce new controls this month across Australia, New Zealand and the Netherlands, with a broader rollout planned for January.

Roblox users will need to verify their age to enable chat and will only be able to communicate with others in the same age group.

Under the new rules, platforms must verify age using actual checks rather than relying solely on the birthdate users enter when creating accounts.

Age-verification methods include selfie videos, email scrutiny and official documents.

Yoti, a company specializing in age estimation, said most users prefer the video-selfie option, which uses facial analysis to gauge age without storing biometric data.

As the ban took effect, some teens immediately began looking for services that offer similar features but remain unrestricted.

Photo-sharing app Yubo said it attracted 100,000 new Australian users in the run-up to the deadline, while interest surged in Lemon8, a TikTok-like platform owned by ByteDance.

The eSafety Commissioner formally warned both companies. Lemon8 pledged compliance with the new law, while Yubo told CNN it is not covered by the ban because it does not permit communication with strangers.

Grant noted that the list of banned platforms is “continuously evolving” and new services may be added as they rise in popularity or introduce risky features.

Grant said officials will monitor wide-ranging outcomes in the coming months, from whether children sleep more, read more, interact more and rely less on antidepressants, to whether they migrate toward darker corners of the internet.

A team of six experts from Stanford University’s Social Media Lab will work with the eSafety Commissioner to collect data. The entire process will be reviewed by an independent academic advisory group of 11 scholars from the US, UK and Australia.

Stanford University announced that its methodology, data and findings will be publicly released to help researchers and policymakers worldwide improve children’s digital safety.