Introduction
The United Kingdom is actively advancing its artificial intelligence (AI) policy and regulatory framework. The new Labour Government is expected to introduce a UK AI Bill within the year, building on the previous Conservative Government’s sector-based regulatory approach [1]. The forthcoming legislation aims to support innovation while addressing risks associated with frontier AI, maintaining continuity with prior policies [1].
Description
The UK is advancing its AI policy and regulation [1] [2], with the new Labour Government expected to publish a UK AI Bill within the year [1]. This initiative builds on the previous Conservative Government’s response to the AI White Paper, which emphasized an agile, sector-based regulatory approach [1]. While the Labour Government aims to support innovation and address risks associated with frontier AI, it appears to maintain continuity with prior policies, suggesting no significant changes to the regulatory landscape [1].
The forthcoming legislation is anticipated to focus on a small number of companies developing powerful AI systems, likely with a narrower scope than the EU AI Act [1]. The timeline for the AI Bill remains unspecified, as the Government is prioritizing other technology-focused legislation, including the Data (Use and Access) Bill and the Cyber Security and Resilience Bill [1]. The Data Bill does not significantly address AI, leaving the industry reliant on regulatory guidance [1].
In addition, a Private Members’ Bill has been introduced to regulate automated decision-making in the public sector, requiring impact assessments and transparency standards [1] [3]. It may also affect private companies collaborating with public authorities [3]. Although such bills often fail to become law, they could shape future government legislation on AI [3]. The Government is also exploring AI’s potential for economic growth, appointing a lead to develop an AI Opportunities Action Plan and funding various AI projects [1].
The UK AI Safety Institute (AISI) is set to play a pivotal role in establishing standards and testing for powerful AI models [2]. It may gain statutory status, reinforcing its role in AI safety and international engagement [1] [3]. In 2024, the AISI will deliver a research program focused on frontier AI safety, develop an open-source framework for evaluating large language models, and launch a grants program to support AI safety research across various sectors [2]. Its guiding principles align with those of the EU AI Act, emphasizing dignity, transparency, accountability, equality, privacy, reliability, and safe innovation [2] [3].
The UK Government has released a report emphasizing the importance of AI assurance in ensuring trustworthiness and mitigating risks [3]. Plans include developing a roadmap for trusted third-party AI assurance and creating an AI Essentials toolkit to promote best practices [3]. The toolkit will feature a self-assessment tool aligned with existing standards, open for consultation until January 2025, with a particular focus on feedback from SMEs [3]. The Government is also addressing cybersecurity risks associated with AI, seeking input on a voluntary Code of Practice based on guidelines from the National Cyber Security Centre (NCSC) [2]. The NCSC is enhancing its guidance on secure AI system development and updating its machine learning security principles [2].
As the UK navigates its regulatory approach, it remains broadly aligned with the EU’s evolving framework while facing challenges related to overregulation and enforcement that could impact innovation [1]. While comprehensive AI legislation akin to the EU’s is not yet in place, existing regulators such as the ICO, CMA, Ofcom, and FCA are actively engaging in AI regulation through the Digital Regulation Cooperation Forum (DRCF) [2]. The ICO is expected to consider a statutory code when interpreting the UK GDPR in relation to AI, providing clarity and confidence for the industry [3]. Companies must stay alert to changes in AI regulation and government policy, particularly in light of potential divergences from EU standards [2] [3].
Conclusion
The UK’s efforts to advance its AI policy and regulatory framework reflect a commitment to fostering innovation while addressing associated risks. By maintaining continuity with previous policies and aligning with international standards, the UK aims to balance regulation with the need for technological advancement. The anticipated UK AI Bill, along with other legislative initiatives, will shape the future landscape of AI regulation, impacting both public and private sectors. As the UK continues to navigate its regulatory approach, it must remain vigilant to ensure that innovation is not stifled by overregulation, while also ensuring the trustworthiness and safety of AI systems.
References
[1] https://www.aoshearman.com/en/insights/ao-shearman-on-data/uk-ai-policy-developments-and-where-next
[2] https://www.jdsupra.com/legalnews/uk-ai-policy-developments-and-where-next-6238048/
[3] https://www.lexology.com/library/detail.aspx?g=405cbc5d-ea4c-430c-b160-469b90c0bb62