Australia's online safety regulator has warned major technology companies, including Apple and Google, that they must implement age verification systems for artificial intelligence platforms. The compliance deadline is March 9, part of a broader push to regulate AI technologies in the country.
The regulatory effort follows a report indicating that more than half of popular AI services have not publicly detailed their compliance strategies. The eSafety Commissioner said that internet platforms, including AI tools such as OpenAI's ChatGPT, must restrict users under 18 from harmful content, including pornography and material related to self-harm and eating disorders. Non-compliance could draw penalties of up to A$49.5 million (around US$35 million).
Concerns about AI's impact on young users have intensified, particularly after reports of wrongful-death lawsuits linked to AI chatbot interactions. The commissioner noted that children as young as 10 are spending up to six hours a day with AI chatbots, raising alarms about the potential for emotional manipulation.