Introduction

The Australian Securities and Investments Commission (ASIC) has highlighted the critical need for financial services businesses to comply with existing regulatory obligations when integrating artificial intelligence (AI) technologies. This is particularly pertinent as the adoption of AI by Australian Financial Services (AFS) licensees is increasing rapidly. ASIC’s report [9], REP 798 [5] [6] [7] [8], published on 29 October 2024, addresses the governance challenges AFS licensees face in adopting AI and emphasizes the importance of updating risk and compliance frameworks to mitigate significant risks, including potential consumer harm [6] [8].

Description

ASIC has emphasized that financial services businesses must adhere to existing regulatory obligations when adopting AI technologies [2], particularly as the integration of AI by Australian Financial Services (AFS) licensees accelerates. On 29 October 2024 [8], ASIC published REP 798 [8], which addresses the governance challenges AFS licensees face in adopting AI [6]. The report details findings from a review of AI adoption by financial services and credit licensees [8], analyzing 624 AI use cases from 23 licensees across banking [10], credit [3] [8] [10], insurance [10], and financial advice as of December 2023 [10]. It underscores that many licensees are implementing AI technologies more quickly than they are updating their risk and compliance frameworks [6] [8], which poses significant risks [3], including potential consumer harm [6] [8].

Key findings indicate significant variation in AI usage among licensees [10], with 61% planning to increase their use of AI in the next year [10]. While many current use cases rely on established techniques [10], there is a notable shift towards more complex methods [10], including a rapid rise in generative AI [10], which accounted for 22% of use cases still in development [10]. Most applications augment human decision-making rather than operate autonomously [10], with common uses including document drafting [10], call analysis [10], and fraud detection [10]. ASIC specifically raised concerns about the use of “black box” AI models for generating credit risk scores [8], highlighting their lack of transparency regarding the variables that influence outcomes [8].
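The transparency concern can be made concrete. The sketch below is a hypothetical illustration (not drawn from REP 798) of how a licensee might document which input variables drive an otherwise opaque credit-risk model, using permutation importance on synthetic data; the feature names, model choice, and dataset are assumptions for illustration only.

```python
# Hypothetical illustration: surfacing the variables that drive an opaque
# credit-risk model. Feature names, model, and data are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

feature_names = ["income", "existing_debt", "repayment_history", "loan_amount", "tenure"]

# Synthetic stand-in for applicant data; a real review would use governed,
# quality-assured data held by the licensee.
X, y = make_classification(n_samples=2000, n_features=len(feature_names), random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Permutation importance estimates how much each variable contributes to the
# model's predictions, giving a documented, reviewable view of its drivers.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

Output of this kind can be recorded alongside model documentation so that the drivers of credit decisions are reviewable rather than opaque.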

The regulatory framework is technology-neutral [2] [5] [6] [8] [9], applying equally to AI and non-AI systems [2] [6] [8] [9]. Businesses must ensure that their use of AI aligns with obligations to provide services efficiently [2], honestly [2] [6], and fairly [2] [6], while avoiding unfair treatment of consumers and unconscionable conduct [2]. Representations about AI [4], including claims about model performance and outputs [4], must be accurate and factual to avoid misleading consumers. Directors and officers must exercise care and diligence in the adoption and use of AI [3], remaining alert to the risks associated with AI-generated information [2] [3] and to other foreseeable risks of AI adoption [2].

Licensees must assess whether recent changes in AI technology require updates to their risk management frameworks [1]. Best practices dictate that the same governance principles should apply to third-party AI models as to internally developed ones [1]. However, only half of the licensees had updated their risk management policies to specifically address AI [10], with many relying on existing frameworks that do not adequately cover AI-related privacy and security concerns [10]. Establishing a robust accountability process is crucial for regulatory compliance [1], which includes governance [1] [7], internal capabilities [1] [2] [7], and a compliance strategy [1] [7]. A comprehensive risk management process should be implemented to identify and mitigate potential harms [1] [7], supported by ongoing risk assessments [1].

Best practices identified by ASIC include the need for licensees to update governance and compliance measures in response to evolving AI risks [2]. This encompasses areas such as documentation [5], resource allocation [9], and risk management systems [2] [3] [4] [6] [9] [10], as well as the management of third-party providers [9]. A proactive approach is essential to prevent consumer harm [2], with regular reviews of AI arrangements [2]. Establishing a clear AI strategy that aligns with organizational objectives and capabilities is recommended [2] [4], along with the formation of a specialist executive-level committee responsible for AI governance [7], which should report regularly to the board on AI-related risks. Incorporating the eight Australian AI Ethics Principles into policies is also advised [4] [7].

Licensees should assess their human capital and technological resources to implement AI solutions effectively while maintaining data integrity and confidentiality [2] [7]. Data integrity must be prioritized by protecting AI systems and implementing data governance measures that ensure data quality [1]. Evaluating how AI adoption alters risk profiles and management obligations is crucial [2], as is ensuring appropriate measures for selecting and monitoring AI service providers throughout the AI lifecycle [2] [7]. AI models should be tested to evaluate performance before deployment and monitored after deployment [1] (a brief monitoring sketch follows below). Human oversight should be integrated into AI systems to minimize unintended consequences [1] [7], and, as noted above, the same governance principles should be applied to third-party AI models as to internally developed ones [4].
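As one way of picturing post-deployment monitoring, the following sketch compares the distribution of model scores at approval time with recent production scores using the population stability index (PSI), a metric commonly used in credit-risk monitoring. The data, thresholds, and choice of metric are illustrative assumptions, not requirements set out in REP 798 or the Voluntary AI Safety Standard.

```python
# Hypothetical post-deployment check: compare the score distribution at model
# approval with recent production scores using the population stability index.
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline score distribution and a more recent one."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero in sparsely populated bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=5000)      # scores at model approval (synthetic)
production_scores = rng.beta(2.5, 5, size=5000)  # scores observed after deployment (synthetic)

psi = population_stability_index(baseline_scores, production_scores)
# Commonly cited rules of thumb: PSI < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate.
print(f"PSI = {psi:.3f}")
```

A check like this would typically run on a schedule, with results logged and escalated under the licensee's risk management framework when drift exceeds the agreed tolerance.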

Transparency with end-users about AI-enabled decisions is vital for building trust [1]. Processes should be established for users and affected parties to challenge AI usage and contest decisions [1]. Transparency across the AI supply chain regarding data and models is essential for effective risk management [1] [7], and records must be maintained so that third parties can assess compliance [1] [7]. Stakeholder engagement is important to address safety [1], diversity [1] [7], inclusion [1] [3] [4] [5] [7] [8] [10], and fairness [1] [7], with assessments to identify potential biases in AI deployment [1] (a simple check of this kind is sketched below). To assist licensees in balancing AI innovation with regulatory obligations [2] [9], ASIC has provided a set of 11 questions for consideration [9].
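To illustrate what a basic bias assessment might look like, the sketch below compares approval rates across two demographic groups (a simple demographic-parity style check). The group labels, synthetic outcomes, and tolerance are assumptions for illustration and do not represent a test prescribed by ASIC.

```python
# Hypothetical bias assessment: compare approval rates across groups and flag
# large gaps for review. Groups, outcomes, and the tolerance are placeholders.
import numpy as np

rng = np.random.default_rng(1)
groups = rng.choice(["group_a", "group_b"], size=10_000)
# Synthetic decisions with a deliberately higher approval rate for group_a.
approved = rng.random(10_000) < np.where(groups == "group_a", 0.62, 0.55)

rates = {g: float(approved[groups == g].mean()) for g in np.unique(groups)}
gap = max(rates.values()) - min(rates.values())

print("Approval rate by group:", {g: round(r, 3) for g, r in rates.items()})
print(f"Demographic parity gap: {gap:.3f}")
if gap > 0.05:  # tolerance chosen purely for illustration
    print("Gap exceeds tolerance; flag for review under the licensee's fairness policy.")
```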

Additionally, the Australian Government is consulting on legal changes prompted by generative AI technology and has introduced a Voluntary AI Safety Standard [5] [9], which sets out ten voluntary guardrails for companies using AI [5] [9]. Key components of the standard include establishing accountability processes, implementing risk management [1] [3] [4] [6], ensuring data integrity [2] [3] [4] [7], testing AI systems [1] [3] [4] [7], enabling human oversight [4], creating user trust [4], maintaining transparency [4], and engaging stakeholders to assess potential biases [4]. The standard is anticipated to shape future mandatory regulation of AI in high-risk settings [5] [9]. Licensees intending to adopt AI are encouraged to use these resources to ensure compliance with both current and forthcoming regulatory requirements [9].

Implementing a robust risk and governance framework is essential [1] [2], and licensees are encouraged to conduct thorough due diligence and establish effective monitoring to address ongoing AI-related risks [1] [2]. Organizations with strong technology platforms and risk management practices are well-positioned to explore AI advancements confidently while prioritizing governance [2], risk [1] [2] [3] [4] [5] [6] [7] [8] [9] [10], and regulatory compliance [1] [7] [8]. However, significant gaps remain in how licensees assess risks associated with AI [3], particularly concerning algorithmic bias [3]. The governance arrangements for AI vary widely [3] [10], often lacking the maturity needed to match the scale of AI use [3], which can lead to consumer harm [3]. Many licensees depend on third-party AI models [3], yet not all have adequate governance to manage the associated risks [3].

A primary goal for regulators is to ensure that licensees using AI maintain strong governance frameworks to mitigate risks and protect consumers [3]. This includes focusing on the detection and management of AI-related risks [3], the quality of data used in AI systems [3], and the integration of ethical considerations into AI governance [3]. Identified risks to consumers include biases in AI decision-making [3], transparency issues [3], and data privacy concerns [3]. To address these risks, it is recommended that licensees develop comprehensive risk management plans [3], ensure transparency in AI processes [3], and uphold high standards of data quality and ethical conduct [3].

Directors must consider the implications of AI development and deployment [3], including potential personal liability for governance failures related to AI systems [3]. They are expected to exercise reasonable care [3], understand key AI risks [3], ensure appropriate governance frameworks are in place [3] [10], and maintain oversight of AI use cases [3]. Organizations face complexities in data protection and privacy as AI scales [3], particularly when using third-party AI models [3]. Key risks include safeguarding proprietary and client information [3], ensuring compliance with data sovereignty regulations [3], and managing cross-border data flows [3]. Leaders must proactively protect privacy and data integrity [3].

Operational risks require particular attention in financial services [3], especially regarding misinformation [3], business continuity [3], and systemic dependencies [3]. Organizations must support AI systems with robust risk management practices to mitigate systemic risks across interconnected systems [3]. Engagement with regulators and industry bodies is crucial [3]. By addressing these findings and implementing strong governance and risk management practices [3], licensees can bridge governance gaps [3], enhance consumer and market confidence [3], and foster long-term resilience in an increasingly AI-driven financial landscape [3].

Conclusion

The integration of AI technologies in financial services presents both opportunities and challenges. ASIC’s emphasis on robust governance and compliance frameworks is crucial to mitigating risks and protecting consumers. As AI adoption accelerates, licensees must prioritize updating their risk management practices, ensuring transparency [3], and maintaining high ethical standards. By doing so, they can confidently explore AI advancements while safeguarding consumer interests and fostering trust in the financial sector. The evolving regulatory landscape [3], including voluntary standards and potential future mandates, underscores the importance of proactive engagement and adherence to best practices in AI governance.

References

[1] https://www.klgates.com/AI-and-Your-Obligations-as-an-Australian-Financial-Services-Licensee-11-19-2024
[2] https://www.jdsupra.com/legalnews/ai-and-your-obligations-as-an-2845459/
[3] https://www.minterellison.com/articles/asic-urges-stronger-ai-governance-for-afs-and-credit-licensees
[4] https://natlawreview.com/article/australia-ai-and-your-obligations-australian-financial-services-licensee
[5] https://www.investmentlawwatch.com/2024/11/20/australia-ai-and-your-obligations-as-an-australian-financial-services-licensee/
[6] https://www.lexology.com/library/detail.aspx?g=1305779c-0843-4ad1-b553-72c0917e2abc
[7] https://www.lexology.com/library/detail.aspx?g=4ce6e05a-9d13-4091-94f2-3ccccd546de1
[8] https://natlawreview.com/article/ai-and-your-obligations-australian-financial-services-licensee
[9] https://natlawreview.com/article/ai-and-your-obligations-licensee
[10] https://hallandwilcox.com.au/news/asic-report-on-ai-governance-learnings-for-licensees/