Australia has moved to restrict access to major social networks for children under 16, enacting a federal rule announced in early December 2025 that requires technology platforms to ensure under‑16s do not hold accounts. The measure targets a set of widely used services and places the legal responsibility for age enforcement on the companies that operate them. Officials say the policy aims to reduce harms linked to early social media exposure, while tech firms, parents and researchers prepare for the technical and practical challenges of implementation. The rollout, which regulators say begins the week after the law’s announcement, will be closely watched internationally as other governments assess whether to adopt similar policies.
Key Takeaways
- The law sets a nationwide minimum age of 16 for holding accounts on covered social platforms in Australia, taking effect in December 2025.
- Platforms currently listed as covered include Facebook, Instagram, Kick, Reddit, Snapchat, Threads, TikTok, Twitch, X and YouTube.
- Australia’s eSafety regulator excluded several messaging and gaming services — Discord, Messenger, Pinterest, Roblox, WhatsApp and YouTube Kids — from the list for now.
- Estimated users aged 13–15 include about 440,000 on Snapchat, 350,000 on Instagram, 200,000 on TikTok and 150,000 on Facebook.
- The legal burden for age checking falls on companies, which may use self‑reported data alongside technical age‑estimation tools that draw on account history, interaction patterns, device signals, or biometric analysis.
- Phone use is already banned in public schools across all Australian states and territories, which regulators cite as part of the broader policy context for youth device use.
- Regulators have signaled ongoing review authority and could add or remove services from the restricted list over time.
Background
Concerns about children’s mental health, exposure to harmful content and data privacy have driven a wave of policy responses in several countries. Australia’s law is one of the first nationwide efforts to set a uniform minimum age for social media use across all major platforms rather than relying on platform terms or parental controls alone. Prior to the new rule, platforms typically set age minimums at 13, with enforcement left to account sign‑up flows and parents. That approach drew criticism from child welfare advocates and some lawmakers who argued it left too much discretion to companies.
The eSafety commissioner, the federal regulator responsible for online safety, evaluated services by primary function and user experience. Messaging apps and games that are designed chiefly for private messages or play were judged differently from public social platforms with broad content distribution. Industry groups and civil society organizations have debated whether a legal age floor addresses root causes of harm — such as algorithmic recommendation, targeted advertising and data collection — or whether it will simply shift young people to other services or workarounds.
Main Event
The law announced in early December 2025 requires platforms to ensure accounts for users in Australia are held only by people 16 and older. Regulators provided a list of covered services at launch; companies operating those services must implement verification steps or deactivate accounts identified as belonging to under‑16s. Officials emphasized that firms may use a mix of techniques to assess age, from analyzing account activity and connections to applying automated estimation models that consider behavioral and device signals.
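As an illustration of how such signal-based estimation might work, the sketch below combines a few weak indicators into a single risk score. It is a minimal, hypothetical Python example: the signal names, weights, and thresholds are assumptions for exposition, not any platform's actual model or a method the regulator has endorsed.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Illustrative signals a platform might aggregate; names are hypothetical."""
    self_reported_age: int          # age given at sign-up (unverified)
    account_age_years: float        # how long the account has existed
    follows_school_accounts: bool   # behavioral proxy: follows many school/teen pages
    device_parental_controls: bool  # device-level signal: parental controls enabled

def estimate_under_16_risk(s: AccountSignals) -> float:
    """Fuse weak signals into a 0-1 risk score that the holder is under 16.

    A production system would use a trained model; this weighted-rule sketch
    only shows how heterogeneous signals could be combined into one score.
    """
    score = 0.0
    if s.self_reported_age < 16:
        score += 0.6  # self-report is weak evidence but directly on point
    if s.account_age_years < 1.0:
        score += 0.1  # newer accounts carry less corroborating history
    if s.follows_school_accounts:
        score += 0.2
    if s.device_parental_controls:
        score += 0.1
    return min(score, 1.0)

# Example: a new account, self-reported age 14, on a device with parental controls.
signals = AccountSignals(14, 0.3, True, True)
print(estimate_under_16_risk(signals))  # 1.0 -> flag for further verification
```

In a real deployment, a platform would presumably calibrate such a score against labeled data and route borderline accounts to stronger verification rather than acting on the score alone.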
Regulators said some commonly used services were excluded because they are primarily for messaging or gaming. The eSafety office will monitor usage patterns and can update the covered list. Companies have been given a compliance window to build or adapt systems; the government indicated it will pursue enforcement actions against platforms that fail to meet the new obligations. Alongside the ban, the government is expected to communicate with parents and schools to explain the changes and the practical steps families should take.
Industry responses have varied. Some platforms signaled readiness to adjust account policy and improve age‑verification tools, while smaller services warned about the technical burden and privacy implications of some verification methods. Public discussion quickly turned to questions about children who already have accounts, cross‑border access, and the accuracy and fairness of automated age‑estimation techniques.
Analysis & Implications
Shifting the legal responsibility for verifying age from parents and platform terms to companies marks a substantive regulatory change. If platforms adopt robust verification systems, the policy could reduce the number of pre‑teen and early‑teen users on large public networks, potentially lowering exposure to viral content and stranger contact. However, enforcement relies on imperfect technologies: detection algorithms can misclassify adults as minors and vice versa, and biometric checks raise privacy and civil‑liberties questions.
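A short base-rate calculation shows why misclassification matters at scale. The prevalence, sensitivity, and specificity figures below are assumed purely for illustration; no measured performance data for Australian deployments has been published.

```python
# Hypothetical illustration of base-rate effects in age classification.
# Assumed numbers (not measured): 5% of active accounts belong to under-16s,
# and a classifier catches 90% of them (sensitivity) while correctly passing
# 95% of adults (specificity).
prevalence = 0.05
sensitivity = 0.90
specificity = 0.95

true_flags = prevalence * sensitivity               # under-16s correctly flagged
false_flags = (1 - prevalence) * (1 - specificity)  # adults wrongly flagged

precision = true_flags / (true_flags + false_flags)
print(f"Share of flagged accounts actually under 16: {precision:.1%}")
# ~48.6%: even a seemingly accurate model flags roughly as many adults as minors.
```

Under these assumptions, nearly half of all flagged accounts would belong to adults, which is why appeal and re-verification pathways matter as much as raw model accuracy.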
The policy could also reshape the market. Platforms that can implement seamless, privacy‑preserving age verification may gain trust among regulators and parents, while services that cannot may see user declines or be pushed to niche markets. There is a secondary impact on advertising and data economies: fewer younger users on major platforms may reduce ad impressions in a demographic advertisers target, prompting shifts in monetization strategies.
Internationally, Australia’s move will be watched as a test case. Policymakers in the European Union, Denmark and Malaysia have shown interest in age‑based limits; observers will study Australia’s enforcement outcomes, legal challenges and technical approaches. Domestic debates will likely continue over whether a single age threshold is the right policy instrument, or whether layered approaches — combining age limits with tighter content controls and ad restrictions — would better protect children.
Comparison & Data
| Platform | Estimated users aged 13–15 |
|---|---|
| Snapchat | 440,000 |
| Instagram | 350,000 |
| TikTok | 200,000 |
| Facebook | 150,000 |
Those platform counts indicate substantial youth presence on services that will now be subject to an age floor. The figures do not capture messaging apps and gaming platforms excluded from the initial list, where young teens may still be highly active. Analysts caution that headline totals understate where young people spend time online: a move off public platforms can push activity into private groups, encrypted messaging or niche apps not immediately covered by regulation.
Reactions & Quotes
Officials framed the law as a public‑health and child‑protection measure, while industry and civil liberties advocates raised technical and privacy concerns.
“We have an obligation to reduce the risks children face online and to set clearer standards for platforms.”
eSafety Commissioner (official statement)
This statement came as the regulator outlined the covered platforms and the rationale for excluding messaging and gaming apps at this stage. The commissioner emphasized monitoring and the possibility of updates to the list.
“Parents welcome steps that make it easier to keep younger children off public social feeds, but practical clarity is still needed.”
Parent, Melbourne (interview)
Many parents expressed support for stronger protections but also asked how existing accounts would be handled and what proof would be required to restore or create accounts. Schools and parent groups asked for straightforward guidance.
“We will work to meet legal obligations while protecting user privacy and minimizing friction for legitimate users.”
Social platform spokesperson (statement)
Company representatives stressed they are assessing technical options to comply, and flagged concerns about accuracy, costs, and cross‑border enforcement. Several platforms said they would publish more detailed compliance plans in the weeks ahead.
Unconfirmed
- How the law will handle Australian minors who use VPNs or foreign accounts is unresolved; such workarounds are likely difficult to prevent fully.
- The long‑term accuracy of proposed biometric and behavioral age‑estimation tools remains unproven in public tests of Australian deployments.
- Which additional services, if any, will be added to the restricted list in future reviews has not been announced.
Bottom Line
Australia’s decision to require platforms to prevent under‑16s from holding accounts is a significant regulatory step that reallocates verification responsibility to companies and sets a national precedent. In the near term, the most visible effects will be on account populations reported by platforms and on the compliance programs companies must build. Parents and schools will also play a role in how the policy functions in everyday life.
Longer term, the law’s success will depend on technical implementation, judicial and political challenges, and whether it reduces the harms legislators sought to address or simply shifts young people to other corners of the internet. Policymakers elsewhere will closely examine the Australian experience as they consider similar measures.