Australia has recently taken a bold step in child safety online with a new law banning social media accounts for users under the age of 16. In its first week of enforcement, Meta blocked around 550,000 accounts across its platforms, including 330,639 on Instagram and 173,497 on Facebook.
This controversial legislation, designed to shield children from harmful content and algorithms, has caught the attention of governments worldwide. Lawmakers and advocates assert that the ban is essential for protecting youth, while tech companies like Meta acknowledge the need for improved safety measures but argue for alternative approaches rather than blanket bans.
In a blog post, Meta argued that the government should collaborate with the industry to enhance online safety rather than rely on sweeping laws that may prove ineffective. The company also proposed that age verification be handled by app stores rather than individual companies, creating a uniform standard across platforms.
Despite the popularity of the ban among parents, experts have voiced concerns about its efficacy and the potential consequences for children, particularly those from marginalized communities who may use social media for connection and support. Critics worry that the ban could inadvertently isolate young people and that tech-savvy youth might easily circumvent the restrictions.
As other governments, including the U.S. and EU, explore similar measures, Australia's law stands out as the strictest, allowing no exemption even with parental consent. It could set a precedent for future regulations aimed at protecting children online.


















