Introduction

In 2025 and beyond, the regulatory landscape for technology and digital operations in the EU and the UK is undergoing significant change [2] [4]. These developments present both opportunities and compliance challenges for businesses, particularly in areas such as online safety, artificial intelligence (AI), data governance, and cybersecurity [1] [2] [3] [4].

Description

The UK Online Safety Act (OSA) establishes a regulatory framework aimed at protecting children and adults online, imposing duties of care on service providers, including those based outside the UK [2] [4]. From April 2025, the Office of Communications (Ofcom) will require providers of services likely to be accessed by children to complete risk assessments and to implement protective measures by July 25, 2025 [2] [4].

In the realm of AI, the EU AI Act’s first provisions take effect on February 2, 2025, with additional obligations becoming binding on August 2, 2025 [1] [2]. The Act establishes the first comprehensive legal framework for AI, emphasizing safety, transparency, and ethical use, and adopts a risk-based approach that aligns regulatory requirements with the specific risks posed by AI systems [1] [3] [4]. Companies may face structural, technical, and governance challenges, particularly concerning General Purpose AI (GPAI) [1] [3]. From August 2, 2025, due diligence, transparency, and documentation requirements will be enforced for stakeholders across the AI value chain [1] [2] [3] [4]. Providers of GPAI models, such as large language or multimodal models, will be subject to specific obligations: they must maintain technical documentation tracking a model’s development, training, and evaluation, and prepare transparency reports detailing capabilities, limitations, risks, and guidance for integrators [1] [2] [3] [4]. Stricter obligations will apply to powerful GPAI models classified as “systemic,” necessitating reporting to the European Commission and structured evaluations.

High-risk AI system requirements will apply from August 2026, affecting areas such as recruitment, healthcare, and critical infrastructure [1] [2] [3] [4]. Users of these systems must maintain an inventory of them, ensure compliance with prohibitions on certain applications, and carry out data protection impact assessments and internal monitoring [1]. Enhanced transparency requirements, such as labeling AI-generated content, will also become binding at that time [1]. Continuous monitoring and reporting will be mandated throughout the lifecycle of AI systems, with clear legal obligations established for AI developers and users, particularly in high-stakes sectors [3].

The EU Data Act introduces obligations on data access, sharing, and transparency, with most requirements taking effect on September 12, 2025 [1] [2] [3] [4]. It includes stipulations for connected products and cloud computing providers, creating opportunities in the data market while requiring businesses to navigate its interplay with the GDPR and other regulations [2] [4].

The EU Digital Operational Resilience Act (DORA), effective from January 17, 2025, imposes resilience requirements on financial services firms and critical technology providers, with compliance plans required from designated entities [1] [2] [4]. DORA’s influence extends to technology and data service providers, underscoring the need for enhanced cybersecurity and resilience across supply chains [2] [4].

NIS 2 broadens the EU cybersecurity framework, raising standards and incident-response requirements, with national registration for in-scope entities beginning in the first quarter of 2025 [2] [4]. NIS 2 and the UK NIS Regulations are expected to align more closely, and both intersect with DORA in areas such as cybersecurity and resilience measures, allowing organizations to streamline compliance through a unified, risk-based management program [2] [4].

Conclusion

As the regulatory landscape continues to evolve, companies must remain vigilant and informed about these developments. The finalization of the Code of Practice for GPAI models and the harmonization of technical standards will be crucial for effectively navigating compliance complexities. Balancing innovation with risk mitigation will be essential for businesses to thrive in this dynamic environment.

References

[1] https://natlawreview.com/article/eu-ai-act-key-compliance-considerations-ahead-august-2025
[2] https://www.lw.com/en/insights/charting-the-future-regulatory-milestones-opportunities-in-ai-online-safety-cybersecurity-eu-uk
[3] https://brusselswatch.org/eu-ai-act-europes-first-comprehensive-ai-regulation-unveiled/
[4] https://www.jdsupra.com/legalnews/charting-the-future-regulatory-5897488/