Q&A: AI regulation in Australia

September 2024  |  SPECIAL REPORT: DIGITAL TRANSFORMATION

Financier Worldwide Magazine



FW discusses AI regulation in Australia with Katherine Boiciuc, Christina Larkin and Ean Evans at EY.

FW: Could you provide an overview of key regulatory developments in Australia, at both federal and state level, relating to artificial intelligence (AI)?

Boiciuc: Australia is at a critical crossroads with artificial intelligence (AI) regulation. Although there is no comprehensive AI law yet, the government is intent on establishing stringent rules for high-risk AI, with a focus on rigorous testing and clear accountability. The initiative to develop voluntary standards for AI safety and efforts to align with international norms reflect a proactive stance. The New South Wales AI Assurance Framework, a trailblazing state-level policy, has set a precedent that has been embraced nationally. The push for an AI commissioner is gaining traction, a move that would ensure AI applications respect fundamental human rights. These developments signify a growing recognition of AI’s transformative impact and the need for a robust regulatory framework that safeguards the public interest while fostering innovation.

Larkin: Australia’s commitment to responsible AI deployment is underscored by the establishment of eight ethical principles in 2019, guiding AI governance. The country is advancing its governance framework, particularly within the public sector, through a blend of voluntary guidelines and mandatory self-assessment protocols. Efforts are focused on harmonising AI regulations at both federal and state levels, with ongoing consultations to determine the most effective frameworks. The prevailing sentiment among the public and regulators leans toward a risk-based approach, advocating for compulsory safeguards in high-risk AI applications and for the advancement of cutting-edge ‘frontier AI models’. The consensus also highlights the need to align Australian AI regulations with international standards, ensuring that Australia remains globally competitive and adheres to ethical benchmarks, thereby fostering a safe and responsible AI ecosystem.

Evans: Observing the evolution of AI regulation both globally and within Australia offers a fascinating perspective. I think we are, at best, a medium-speed follower, selectively incorporating elements from international regulatory frameworks. It is also worth noting that numerous existing laws and regulations already provide a degree of protection against the potential risks posed by AI. However, the current patchwork of regulations presents a complex landscape for organisations, often making it challenging to navigate without missteps. There is a palpable need for greater clarity and certainty, which would empower businesses to implement AI with confidence. As we witness the global development of AI regulation, there is an anticipation for Australia to adopt more definitive and clear-cut regulations that will provide a stable foundation for businesses to innovate and leverage AI effectively.

Recent developments in generative AI and wider understanding of what is possible with the latest models have pushed AI to the top of the ‘hype’ cycle.
— Ean Evans

FW: To what extent do you believe the current regulatory framework is lagging behind the rise of AI? What are the key challenges facing Australian regulators in this regard?

Evans: The current regulatory framework in Australia is indeed lagging behind the rapid rise of AI. This lag poses key challenges for Australian regulators, who must navigate the complexities of an evolving technological landscape. One of the primary hurdles is the lack of specific AI legislation, which leaves organisations to interpret and apply a patchwork of existing laws that may not adequately address the nuances of AI. Additionally, defining ‘high-risk’ AI is a contentious and complex issue that requires careful consideration. Regulators also face the challenge of managing international AI systems that operate across borders, further complicating the regulatory environment. Ensuring that regulators possess the necessary expertise to understand and govern AI technologies is another significant challenge. It is imperative that Australia accelerates its regulatory efforts to provide a clear and effective framework that can keep pace with the advancements in AI and support the responsible development and deployment of AI technologies.

Boiciuc: Frankly, we are not moving quickly enough, as the pace at which AI technology is advancing far exceeds the rate of regulatory development in Australia. Our regulatory maturity is not keeping up with the rapid growth and evolution of AI, and this discrepancy is a cause for concern. With a national investment of around $500m, we are far behind the US’s $67bn, which puts Australia at risk of falling behind in the global AI race. We are currently grappling with several challenges, including the absence of specific AI laws, the rapid technological evolution of AI, the task of defining what constitutes ‘high-risk’ AI, the complexities of managing international AI systems and the imperative need to ensure that our regulators possess the necessary expertise. It is a delicate balancing act to promote technological innovation while simultaneously protecting societal interests, and it is clear that we need to accelerate our efforts to keep pace with the advancements in AI.

Larkin: Australian regulators are actively confronting the intricate challenges presented by the swift advancement of AI. The nature of regulatory frameworks is such that they typically lag behind technological progress, and this is particularly true for AI, whose rapid development necessitates forward-looking, anticipatory regulations. These regulations must be designed to pre-emptively address the potential societal, economic and environmental repercussions that AI might bring. Policymakers are trying to comprehend the unique prospects and risks that AI presents across various industries, with research and industry leaders contributing voluntary benchmarks for responsible AI development and operation. In the absence of global regulatory exemplars, it is crucial for Australian regulators to cultivate a resilient AI ecosystem that emphasises skills and capacity enhancement. Moreover, adopting a collaborative, multistakeholder approach is essential, as it engages the public and industry experts in the regulatory discourse, ensuring that the resulting regulations are shaped by community expectations and sectoral proficiency.

FW: How does AI regulation in Australia compare to frameworks being adopted elsewhere?

Larkin: The trajectory of Australian AI regulation is gradually aligning with global trends, which are increasingly adopting a risk-based approach to balance innovation with safety. This involves discussions on establishing mandatory guardrails for high-risk AI applications and considering whether models such as the risk-tier system of the European Union’s (EU’s) AI Act could be adapted to the Australian context. Public submissions have highlighted the need for ex-ante regulations and, in some cases, potential bans on high-risk AI uses. The debate continues on whether to amend existing laws or to create new AI-specific legislation, with a focus on ensuring testing, transparency and accountability. Australia is committed to ensuring that low-risk AI remains unimpeded by excessive regulation, but further work is needed to define ‘high-risk’ AI and establish appropriate guardrails. The aim is to create a regulatory environment that fosters innovation while protecting the public from potential risks associated with AI technologies.

Boiciuc: Australia’s approach to AI regulation is currently in a phase of catch-up with jurisdictions such as the EU, which has already implemented a comprehensive AI law in the form of the EU AI Act, and the US, which is advancing its own regulatory measures. While Australia is keen to adopt a risk-based approach and collaborate with the international community, we are still in the early stages of defining the foundational elements of our AI regulatory strategy. We are relying on existing laws as a stopgap measure, but this is insufficient for the long-term governance of AI. To close the gap with leading AI nations, Australia needs to expedite its efforts in both strategy formulation and actionable policy implementation. The goal is to establish a regulatory framework that not only addresses the current landscape but is also agile enough to adapt to future developments in AI technology. This will require a concerted effort to understand and integrate international best practices while also considering the unique context of Australia’s AI ecosystem.

Evans: Australia’s position on AI regulation should be one that enables businesses to confidently embrace AI as a lever for productivity growth and international competitiveness. Simplifying AI regulation is imperative, with clear laws, safeguards and penalties in place to protect citizens and consumer rights while encouraging innovation. The potential of AI to support Australia’s productivity and competitiveness is significant, and a streamlined regulatory environment is essential to harness this potential. By establishing clear guidelines and frameworks, businesses can navigate the AI landscape with greater certainty and contribute to the nation’s economic growth. The goal is to create a regulatory climate that not only safeguards against the risks associated with AI but also fosters an environment conducive to the development and application of AI technologies that can drive progress and prosperity.

For Australian regulators, striking the right balance between managing AI risks and fostering innovation is critical.
— Christina Larkin

FW: What steps do regulators in Australia need to take to address the ethical implications of AI, including issues such as model bias and privacy infringement?

Boiciuc: To address the ethical implications of AI, Australia must establish a comprehensive ethical AI framework that strikes a balance between innovation and risk mitigation. This framework should be supported by enforceable standards, robust governance processes, human oversight mechanisms and a commitment to multistakeholder collaboration. Key components of this framework include rules for algorithmic auditing to ensure fairness and transparency, as well as the promotion of AI systems that are designed with human values at their core. Australia has the potential to lead the way in ethical AI by setting enforceable standards and adopting a collaborative approach that brings together diverse perspectives from industry, academia and civil society. Such an approach would not only enhance the ethical development and deployment of AI but also foster public trust in AI technologies. It is imperative that we prioritise the creation of a regulatory environment that encourages responsible innovation while safeguarding the rights and values of individuals.

Larkin: Australian regulators should prioritise the integration of AI assurance throughout the entire AI lifecycle, addressing ethical considerations such as fairness, data privacy and accountability from the design phase to post-deployment monitoring. Similar to the approach adopted in Australia’s national framework for the assurance of AI in government, organisations need guidance on how to translate high-level ethical frameworks into practical steps for implementation. Such guidance should also include references to existing AI standards, such as AS ISO/IEC 42001:2023 Information technology – artificial intelligence – management system. Independent certification against standards provides organisations with a mechanism to enhance trust in AI and to implement AI in a consistent and interoperable manner.

FW: How important is it for Australian regulators to protect citizens and society from the risks of AI without curtailing AI innovation and application for businesses? Do laws need to be widely harmonised to achieve this goal?

Evans: Protecting citizens and consumers from the risks of AI is a critical responsibility for Australian regulators. However, it is equally important to recognise that these risks have been present for some time, and both voluntary and regulatory frameworks have evolved at varying paces across Australia. Recent developments in generative AI and wider understanding of what is possible with the latest models have pushed AI to the top of the ‘hype’ cycle. With 47 percent of chief executives prioritising additional investments in technology, including AI, to improve growth and productivity, and 45 percent prioritising enhancing data management and cyber security, there is an imperative for government to further clarify and simplify how organisations should operate and how citizens’ rights and data will be safeguarded. As AI continues to transform industries and value chains, strategic risk management, including defining each organisation’s risk appetite related to AI, should be a topic of discussion at the executive level.

Larkin: For Australian regulators, striking the right balance between managing AI risks and fostering innovation is critical. This requires a holistic ecosystem approach and a national consensus on a risk-based regulatory strategy. Enhanced coordination and alignment with global regulatory frameworks are vital to ensure interoperability and maintain competitiveness on the international stage. Australia must actively participate in shaping global AI governance and align domestic responses with international best practices. This will involve establishing compliance requirements for different levels of AI risk and implementing safety frameworks that reflect both national and global standards. As the AI landscape continues to evolve, Australian regulators will need to adopt a flexible framework that can navigate the complexities of data governance and access to computing power, amid geopolitical challenges. Such a framework will be instrumental in safeguarding the interests of citizens while enabling businesses to leverage AI for growth and innovation.

Boiciuc: It is vital that we protect Australian citizens from the risks associated with AI without stifling the innovation that AI can bring to businesses. Achieving this requires a regulatory approach that is both balanced and adaptive. While harmonising laws with those of major economies can facilitate cross-border AI development and deployment, it is essential that Australian regulators carefully tailor these rules to reflect the unique context and needs of the country. A regulatory framework that is risk-based and principles-based, mandating robust testing, transparency and accountability measures for high-risk AI applications, while allowing low-risk uses to thrive with lighter governance, can strike the desired balance. To further this goal, fostering cross-sector collaboration, investing in regulatory expertise and proactively shaping global AI governance norms are crucial steps. Ultimately, the aim is to create an environment where innovation flourishes within a framework that ensures safety and ethical standards.

With AI rapidly evolving, Australia’s regulatory approach will likely need to be flexible, principles-based and amenable to updates as new AI capabilities and risks emerge, to futureproof the framework.
— Katherine Boiciuc

FW: What advice would you offer to companies on navigating an uncertain regulatory environment to maximise their use of AI while maintaining compliance with existing, and future, laws?

Larkin: Companies looking to develop or deploy AI today should understand how they can apply Australia’s AI Ethics Principles from the outset. Staying informed on digital policy developments and actively engaging in policy discussions is crucial to influence the evolution of regulations. By embedding responsible AI principles into their operations, companies can not only maintain compliance with existing laws but also position themselves to adapt to future legislative changes and enhance public trust in their product or organisation. This proactive approach will help companies to maximise the benefits of AI while upholding ethical standards and contributing to the shaping of a regulatory environment that supports responsible innovation.

Boiciuc: In an uncertain regulatory environment, my advice to companies is to be proactive. Rather than waiting for regulations to crystallise, companies should remain agile and invest in robust AI governance structures. It is crucial to closely monitor regulatory developments and build internal expertise in AI ethics and compliance. Implementing flexible governance frameworks that can adapt to new rules and aligning with global standards will be key to staying ahead. Companies should prioritise responsible innovation and maintain transparency about their AI’s capabilities and risks. Being adaptable, ethical and responsible in AI development is essential, and working with trusted partners to implement AI ethics principles with human oversight will position companies for success. Staying ahead of the curve means being prepared to adjust to regulatory changes while maintaining a commitment to ethical practices and responsible AI deployment.

Evans: Companies should not wait for a ‘perfect’ regulatory environment to emerge. Regulation will inevitably lag behind AI developments, as is the case with many emerging technologies. Voluntary alignment with global standards and adherence to ethical and responsible AI principles is critical. Ensuring that the board and executive are fully invested in AI governance and continue to educate themselves on developments is key to navigating the regulatory landscape. Executives must also be alert to the potential impacts of AI on their industry and value chains. The advent of ever more powerful AI models, coupled with increasing levels of computational power, may create opportunities for new market entrants or enable rapid disintermediation. Strategic risk management, including the definition of each organisation’s risk appetite related to AI, should be a central topic of discussion at the executive level. This proactive and strategic approach will help companies navigate an uncertain regulatory environment and maximise their use of AI.

FW: Looking ahead, how do you predict AI regulation in Australia will unfold over the coming months and years? What general trends do you expect to see in this space?

Larkin: Over the next few years, I expect to see a national alignment in Australia’s AI regulation, with compliance requirements tailored to different risk levels and safety frameworks reflecting national and global best practices. A risk-based approach with mandatory guardrails for high-risk AI and sector-specific initiatives is likely to become more prevalent. Amendments to specific legislation, such as the Privacy Act, are anticipated to address AI risks more directly. Amid geopolitical complexities, a flexible regulatory framework will be necessary to navigate challenges related to data governance and access to computing power. Such a framework will enable Australia to respond effectively to the dynamic nature of AI and its implications for society. As AI continues to advance, Australian regulators will need to remain agile, adapting their strategies to ensure that the regulatory environment supports responsible AI development while protecting the interests of citizens and enabling businesses to innovate.

Boiciuc: In the coming months and years, I predict that AI regulation in Australia will evolve toward a risk-based, enforceable and globally aligned governance regime, with a focus on transparency, accountability and ethical guardrails for high-risk applications. This trajectory, as discussed at the Australian Financial Review (AFR) AI Summit earlier this year, is expected to continually adapt alongside the technology itself. Given the cross-border nature of AI, aligning with frameworks like the EU AI Act and participating in global initiatives such as the Bletchley Declaration are anticipated priorities. Regulations will likely mandate transparency obligations, such as public reporting, disclosures on AI model design and data usage, and watermarking of AI-generated content. Accountability measures defining organisational roles, responsibilities and training requirements are also anticipated. Establishing a dedicated AI regulator or investing in regulatory expertise across sectors could provide technical guidance to the industry and ensure effective implementation and enforcement of new rules. With AI rapidly evolving, Australia’s regulatory approach will likely need to be flexible, principles-based and amenable to updates as new AI capabilities and risks emerge, to futureproof the framework. It is an exciting time, and we have the opportunity to set a global example for responsible AI governance.

Evans: I am hopeful for a rapid evolution in AI regulation in Australia. Generational factors, such as millennials’ greater trust in AI applications compared to boomers, and global investment in increasingly powerful models and processing units, indicate that AI will continue to be a rapidly evolving field. I hope that Australia can proactively establish a clear, flexible national risk-based AI framework, with specific enforceable AI legislation that allows citizens, investors and businesses to move forward confidently and utilise the power of AI to improve working conditions, national productivity and international competitiveness. As AI models become more powerful, there is an opportunity for Australia to lead in creating a regulatory environment that not only addresses the risks but also harnesses the transformative potential of AI for economic and societal benefit.

Katherine Boiciuc helps clients gain the most from innovative technology like generative artificial intelligence (GenAI) and advanced computing, using her experience as a futurist to lead innovations and guide EY clients on how to adopt new technologies ahead of their competitors. She has more than two decades of experience and is one of Australia’s go-to professionals on technology. She can be contacted on +61 407 867 544 or by email: katherine.boiciuc@au.ey.com.

Christina Larkin has over 15 years of experience helping clients assess the risks and opportunities that technologies present, providing trust in the integrity of data and digital solutions. She leads a team that considers the ethical and control implications of data quality and emerging technologies such as artificial intelligence and autonomous systems. She can be contacted on +61 404 881 489 or by email: christina.larkin@au.ey.com.

Ean Evans supports clients to implement innovative solutions through the use of intelligent automation solutions, including process reengineering, automation and a range of enterprise, AI and emerging technologies. He is a recognised voice and speaker on AI, intelligent automation, digital finance transformation and global business services, with 25 years of experience in delivering tangible value from consulting and implementation programmes. He can be contacted on +61 420 546 741 or by email: ean.evans@au.ey.com.

© Financier Worldwide
