Striking the right balance: Australia’s approach to data and AI regulation
March 2025 | SPECIAL REPORT: DATA PRIVACY & CYBER SECURITY
Financier Worldwide Magazine
Data is now a critical intangible asset, underpinning economic growth and technological innovation while delivering positive social benefits. For businesses, data has become a valuable strategic asset, capable of creating new revenue streams and of being leveraged to drive innovation and improve operational productivity.
The acceleration of artificial intelligence (AI) adoption has amplified the importance of data, equipping businesses of all sizes to transform mere information into a valuable asset with the potential to optimise their operations and performance. The Latin saying ‘scientia potentia est’, attributed to Francis Bacon in the 16th century, translates to ‘knowledge is power’. The saying needs adjustment for the AI age, where data is power.
However, harnessing this power requires a careful balancing act between mitigating risks and fostering innovation. Regulation must protect privacy, intellectual property and security, while addressing both foreseeable and unforeseen harm, without stifling innovation and productivity.
Achieving this balance is essential for Australian regulators to allow their citizens to capitalise on the opportunities presented by data assets in a new AI-driven landscape.
Access to and use of data in Australia
Australia has been proactive in establishing frameworks for the use of government-held data. The Data Availability and Transparency Act 2022 and the Data and Digital Government Strategy collectively lay a robust foundation for the responsible and innovative use of data and digital services by the Australian public sector to enhance public services.
The Consumer Data Right (introduced in 2019 under the Competition and Consumer Act 2010) aims to give Australians control over their data, enabling secure sharing with accredited providers in banking, energy and other sectors. The Privacy Act 1988 continues to set the rules for protecting the privacy and personal information of individuals, while enabling businesses and government agencies to operate confidently within clear privacy parameters.
However, despite these efforts, access to and use of data in the country remains constrained.
Australia’s relative performance in data availability and access has been mixed. In the ‘Open Data Inventory 2022’, Australia ranked 23rd out of 195 countries, with a coverage score of 60/100, highlighting gaps in social, economic and environmental statistics. Conversely, in the ‘OECD Digital Government Index 2023’, Australia ranked fifth, reflecting strong use of data for service delivery by the public sector.
While Australia has made progress in data management, there is still a need to improve data availability and access, particularly in the private sector. Limited access to high-quality data risks hindering AI development and effective use. Addressing such challenges will require a cohesive national strategy that ensures secure and efficient access to high-quality data sets, while safeguarding the rights of data owners and holders. The regulatory approach in other jurisdictions may offer valuable lessons.
Accessing and using data
Switzerland’s federal government has introduced a Swiss Data Ecosystem to create a legal, organisational, semantic and technical framework for effective data use across sectors within a trusted environment. This system aims to break data silos by creating common spaces for secure data sharing among businesses, universities, public authorities, organisations and individuals, thereby contributing to prosperity, economic success and scientific progress. A similar framework could support AI innovation in Australia while ensuring privacy and security.
In another example, the Chinese government now recognises data as one of five key market production factors alongside land, labour, capital and technology. New regulations classify data as intangible assets on corporate balance sheets and facilitate a data exchange market. Government-initiated data exchanges are emerging to trade AI training sets, to facilitate cross-border use of data and to allow public data access. These exchanges use intermediaries for trader verification and data origin explanation to protect personal and sensitive information and to build trust between data providers and licensees.
A November 2024 publication by the Centre for International Governance Innovation highlights an example in which the Shenzhen Data Exchange enabled a loan agreement between China Everbright Bank and Shenzhen Weiyan Technology based on data products listed on the exchange. Law firms and third-party service providers supported the transaction with valuation, quality assessment and compliance verification.
The Chinese model is ambitious, aiming to overcome barriers to leveraging data assets in business and leading with regulatory innovation that could translate into success in a post-AI economy.
Navigating proposed AI regulation in Australia
Australia’s approach to AI regulation reflects a growing focus on risk-based frameworks. While no binding laws specific to AI currently exist in Australia, several legal regimes already directly impact the collection of data for the development and use of AI technologies, including privacy and intellectual property laws.
In 2023, Australia was among the 27 countries that, along with the European Union (EU), signed the Bletchley Declaration, a joint effort to identify AI safety risks and build risk-based policies. Subsequently, the Australian government progressed two key measures in 2024 to better manage AI-related risks and provide certainty about the governance expected in the development and use of AI: the ‘Voluntary AI Safety Standard’ and the ‘Mandatory Guardrails for High-Risk Settings Proposals Paper’.
The Voluntary AI Safety Standard sets out 10 guardrails offering practical guidance to Australian organisations on safely and reliably developing and deploying AI. Meanwhile, the Proposals Paper sought feedback on mandatory guardrails for high-risk AI settings, to inform future regulatory requirements in Australia. The Australian government deliberately avoided proposing mandatory requirements for non-high-risk AI applications so as not to deter innovation.
Legislative options under consideration to introduce mandatory guardrails include adapting existing laws, creating framework legislation or enacting an AI-specific Act, as seen in the EU and Canada. Draft legislation, expected in 2025, will need to balance the risks of AI with the need for innovation.
A critical challenge lies in avoiding overregulation, which could stifle competition and innovation, particularly for smaller enterprises. A nuanced approach that safeguards rights, mitigates risks and fosters innovation will be essential.
Regulation and innovation
Australia’s Privacy Act plays a pivotal role in regulating data use while protecting individual privacy. Recent amendments have increased penalties, strengthened enforcement powers, introduced requirements for transparency in automated decision making, and introduced a statutory tort for serious invasions of privacy, as part of a major reform designed to support digital innovation and enhance Australia’s reputation as a trusted trading partner. Several proposals aim to align Australia’s privacy laws more closely with Europe’s General Data Protection Regulation (GDPR), and further reform is on the horizon.
Commentators and industry participants debate the reasons for US and Chinese dominance of the technology innovation landscape, in contrast to the EU’s relative underperformance. There is convincing evidence of the negative impact of compliance costs on early-stage companies working in the AI sector in the EU.
One commentator examining the debate noted: “Research surveying small AI startups has shown that the GDPR can adversely affect early-stage companies. Small startups often have access to limited data from their own pool of customers and rely on third-party data to develop their algorithms. With restrictions imposed on such data gathering, the GDPR increases the costs incurred by these firms to collect and analyse the data they need to develop AI applications.… This research suggests that one of the unintended consequences of the GDPR is that it may protect, or perhaps even further entrench, the relative power of the largest tech companies that are better placed to comply with demanding regulations such as the GDPR.”
Aligning with stricter international standards like the GDPR presents opportunities and challenges.
The GDPR’s stringent rules protect individual rights and create business certainty, but these have arguably hindered AI and technology innovation in the EU by limiting access to high-quality datasets. Australia must learn from this experience, ensuring its privacy regulations uphold rights without stifling innovation. Clear guidelines on transparency, consent and use of anonymised and aggregated data are critical to achieving this balance.
Addressing data scraping challenges
The Office of the Australian Information Commissioner has intensified enforcement efforts against privacy law breaches involving data scraping. The high-profile 2021 determination against Clearview AI for scraping biometric data without consent highlights the significant risks of improperly using personal information.
Similarly, in 2024, companies linked to Ms Dominique Grubisa were found to have unlawfully collected and distributed personal information from third-party databases, targeting individuals in distress. The Commissioner ruled that these companies failed to collect data by fair means, neglected to notify individuals, and acted in violation of third-party website terms.
These enforcement actions, coupled with legitimate data licensing deals outside Australia, such as those Reddit struck with OpenAI and Google, underscore the necessity of acquiring data rights through lawful means and complying with privacy regulations. They also point to a future in which the largest technology companies have the sophistication and budget to access and own the major AI training data sources, while smaller competitors are forced to compete with inferior access to data.
Australia is actively exploring how to address some of these challenges. Beyond data gathered in breach of website terms or privacy laws, copyright in web content presents an additional hurdle for access to data in Australia. The Attorney-General established the Copyright and Artificial Intelligence Reference Group to examine challenges arising from data scraping for AI development.
Additionally, the Senate’s Select Committee on Adopting AI issued a report in November 2024 with key recommendations, including: (i) consulting creative workers, rights holders and their representatives to address the unauthorised use of copyrighted works by technology companies; (ii) requiring AI developers to be transparent about the use of copyrighted material in training datasets and ensuring such use is appropriately licensed and compensated; and (iii) collaborating with the creative industry to establish mechanisms for fair remuneration for AI-generated outputs derived from copyrighted material.
Licensing frameworks, transparency requirements and fair remuneration mechanisms could balance AI developers’ interests with rights holders’ protections, fostering innovation while safeguarding creators’ rights.
Encouraging responsible innovation
Australia’s AI productivity relies on a regulatory framework that balances risk and innovation. A cohesive national data strategy, informed by global best practices, can establish governance for data access and use across public and private sectors.
Proportionate guardrails are necessary to mitigate risks without stifling innovation, particularly in high-risk AI settings, supported by clear responsibilities, adequate regulatory resourcing and practical guidance for businesses. Transparency must remain a priority, with AI developers disclosing data usage and ensuring compliance with privacy and copyright laws.
Learning from global best practices, and a willingness among regulators to think laterally about how to maximise Australia’s use of data and AI technology to drive innovation and productivity, ought to be core government priorities. Aligning Australia’s AI regulations with international standards, while addressing local needs, should be pursued with a focus on competitiveness and relevance in global markets.
By fostering a regulatory environment that protects rights, mitigates risks and enables innovation, Australia can unlock AI’s potential to drive productivity and economic growth.
Mark Vincent is a principal, Nadine Martino is a senior associate and Jesmine Medina is a lawyer at Spruson and Ferguson Lawyers. Mr Vincent can be contacted on +61 2 9393 0100 or by email: mark.vincent@spruson.com. Ms Martino can be contacted on +61 2 9393 0300 or by email: nadine.martino@spruson.com. Ms Medina can be contacted on +61 2 9393 0300 or by email: jesmine.medina@spruson.com.
© Financier Worldwide