AI in Spring 2025: safety done, growth to come

For two years, Rosenblatt has published a regular series of thought pieces in the artificial intelligence (“AI”) space.

In this update, Rosenblatt explores legal, political, and regulatory developments in the use of AI in the UK, Europe, and the US, as well as in international arbitration.

UK

Following Labour’s return to government in July 2024, Prime Minister Sir Keir Starmer set out a “blueprint to turbocharge AI” on 13 January 2025[i]. In it, the Prime Minister announced £14 billion of new UK AI investment by Vantage Data Centres, Nscale and Kyndryl, expected to create 13,250 new jobs. On AI data centres more specifically, McKinsey has estimated that global demand for capacity could rise by between 19 and 22 per cent annually up to 2030[ii] – a rate which, compounded at its midpoint over five years, would see demand reach roughly two-and-a-half times its current level – with wide-ranging implications for the construction, environmental, and data sectors. Alongside its 13 January headline figures, the Government also plans to establish AI Growth Zones, an AI Energy Council and a National Data Library, and to place a renewed focus on AI skills and talent. The previous Conservative Government’s spotlight on AI safety appears to have shifted under Labour, either because the case has been made and the arguments won, or because the more pressing policy of growth and an industrial strategy to stimulate AI adoption has taken over.

A challenge facing the Government, however, comes from the creative industries, which are stridently pushing back on proposals to enable AI companies to use copyright-protected work without permission. On 17 December 2024, the Government launched a consultation on its plan to allow AI companies to train models on copyrighted work by giving them an exception for “text and data mining” (the “Consultation”). Whilst there is talk of an opt-out rights reservation system, critics from across the arts and media sectors suggest that such a system would not be watertight. Sceptics also suggest it unfairly puts the onus on artists to track any potential misuse of their content across the vastness of the internet. Extraordinarily, such was the furore that on 25 February 2025, the deadline for submitting responses to the Consultation, all major British printed newspapers carried, for the first time in history, an identical front-page wraparound with the text: “MAKE IT FAIR – The government is siding with big tech over British creativity”. Since the Consultation closed, more than 1,000 artists have released a “silent album” and 30 leading performing arts organisations have co-signed a statement, both in protest at the proposed changes. The Government’s next steps will no doubt continue to be watched extremely closely.

AI, of course, continues to affect British working practices. One UK-headquartered law firm has recently blocked access to several generative AI tools, strictly forbidding staff from uploading client information to such platforms and insisting that the accuracy of any output be verified. As noted in a previous Rosenblatt AI thought piece, businesses will increasingly need to formulate clear AI usage policies for employees, not only to keep up with market competitors and clients, but also to mitigate the risks and provide for the ‘safe’ adoption and deployment of AI technology.

Europe

February 2025 brought with it three significant AI developments in Europe.

1.    First, on 2 February 2025, the first provisions of the European Union (“EU”) AI Act came into effect: (i) requiring providers and deployers of AI systems to take steps to ensure their personnel have sufficient AI literacy to operate AI systems (Article 4); and (ii) banning the use of AI systems that involve prohibited AI practices (Article 5), including subliminal, manipulative or deceptive techniques, social scoring, and the creation of facial recognition databases through untargeted scraping.

By way of a reminder, and as set out in Rosenblatt’s 1 August 2024 article commenting on the EU AI Act, implementation of the remaining provisions is phased as follows:

  • 2 May 2025: voluntary codes of practice for general-purpose AI (“GPAI”) models are to be issued.

  • 2 August 2025: rules for GPAI models come into force, and the EU’s AI governance and enforcement framework enters into operation.

  • 2 August 2026: high-risk AI systems become subject to regulatory obligations. The whole of the Act is then in force, except for obligations relating to high-risk AI systems covered by EU product safety legislation.

  • 2 August 2027: high-risk AI systems covered by EU product safety legislation become subject to regulatory obligations.

  • 2 August 2029: the European Commission is to submit its first review of the Act to the European Parliament and the Council, to be repeated every four years.

  • 2030: grandfathering provisions come into play in respect of high-risk AI systems put into use before the relevant dates under the Act.

    o   2 August 2030: providers and deployers of high-risk AI systems intended to be used by public authorities must comply.

    o   31 December 2030: AI systems which are components of certain large-scale IT systems placed on the market or put into service before 2 August 2027 must comply.

 

2.    Second, from 10 to 14 February 2025, France’s President Macron hosted an AI Action Summit (“Summit”). Just as the UK Government has recently, seemingly, put safety on the backburner among its wider AI priorities, so too in Paris there was barely a mention of the ongoing global concerns arising from AI. Interestingly, both the US and the UK opted against signing the “inclusive and sustainable” diplomatic declaration, with the UK citing a lack of progress on global AI governance and impacts on national security to justify its decision.

 

3.    Third, in its 2025 Commission work programme, published on 11 February 2025, the European Commission scrapped the AI Liability Directive (“AILD”). The Commission’s 2022 AILD proposal, originally conceived alongside the EU AI Act, was a stated attempt to “improve the functioning of the internal market by laying down uniform rules for certain aspects of non-contractual civil liability for damage caused with the involvement of AI systems”. This volte-face is likely due to pressure from lobbyists and big tech, who see any liability regulation as an existential threat to their business models – and it is demonstrative of a choice favouring big tech over consumers. The European Parliament’s rapporteur for AI liability rules, Axel Voss, described the move as a “strategic mistake”, warning that the decision will lead to “legal uncertainty, corporate power imbalances” and a “Wild West approach to AI liability that only benefits big tech”.

US

In his totemic Summit speech, US Vice-President JD Vance demanded that international regulatory regimes “foster the creation of AI technology rather than strangle it”.

This approach to AI was writ large in recent Silicon Valley drama when Elon Musk’s $97.4 billion consortium-led bid for OpenAI was swiftly rejected by the ChatGPT maker, which suggested the company is not for sale and that any future bid by Mr Musk would be disingenuous. It remains to be seen whether that is the end of that particular takeover saga.

On the legal side, AI-focussed case law in the US – which has, perhaps inevitably, been at the forefront of global AI litigation – has also ramped up in recent months. Of most interest recently is the Delaware fair use decision in Thomson Reuters v Ross Intelligence, in which Judge Bibas reversed much of his own 2023 ruling, this time finding largely in Thomson Reuters’ favour. In his latest decision, Judge Bibas held that Thomson Reuters’ headnotes were copyrightable, and that Ross’ use of Thomson Reuters’ data was “commercial”, given that Ross stood to profit from it. Whilst this case does not involve generative AI, Judge Bibas’ findings that: (i) the transformation of text into an algorithm when training an AI model does not necessitate a finding of non-infringement or fair use; and (ii) the effect of the infringing use on a potential market for licensing the data can weigh against fair use, will likely have significant implications for users and providers alike of AI products and services.

The US is also home to one of the first comprehensive AI-specific regulations from an arbitral body[iii]. On 30 April 2024, the Silicon Valley Arbitration & Mediation Center introduced guidelines promoting a standardised methodology for the deployment of AI technology in arbitral proceedings. The guidelines focus on the non-delegation of decision-making responsibility – in short, the preservation of the ‘human’ element – together with the need to maintain confidentiality: in other words, the ethical and effective use of AI in international arbitration.

AI and International Arbitration (continued)

International arbitration rules naturally provide for discretion, including when it comes to adopting technology such as AI. Article 17(1) of the UNCITRAL Arbitration Rules, for example, provides:

 “…the arbitral tribunal may conduct the arbitration in such manner as it considers appropriate, provided that the parties are treated with equality and that at an appropriate stage of the proceedings each party is given a reasonable opportunity of presenting its case. The arbitral tribunal, in exercising its discretion, shall conduct the proceedings so as to avoid unnecessary delay and expense and to provide a fair and efficient process for resolving the parties’ dispute.” (emphasis added)

Whilst this discretion can be broad enough to encompass the use of technology (including AI), reliance on broad tribunal discretion does not necessarily provide a clear answer on the disclosure or specific use of AI within proceedings. These questions may be best dealt with on a case-by-case basis, including in procedural orders at an early stage of proceedings.

There is an ongoing debate as to whether an AI programme could effectively write an arbitral award. One obstacle is the “Black Box” problem: the path an AI model takes to reach a result – which may itself be perfectly sensible – cannot reliably be identified. This is problematic given that current international arbitration legal frameworks almost unanimously set out requirements for reasoned awards.[iv]

There are also challenges in utilising AI to select and appoint arbitrators, including identifying biases, filtering through previous decisions, the high cost of current models, and diversifying the pool of arbitrators.

On 18 March 2025, the Chartered Institute of Arbitrators (“CIArb”) launched its Guideline on the Use of AI in Arbitration (2025) (the “CIArb AI Guidelines”). The CIArb AI Guidelines, which CIArb itself describes as “soft law”, are intended to be applied in conjunction with existing regulatory frameworks. They cover the benefits and risks of AI in arbitration; general recommendations for AI usage; an arbitrator’s powers to give directions and rulings on the parties’ use of AI; and an arbitrator’s own use of AI. The overriding objective of the CIArb AI Guidelines is to encourage arbitration practitioners to tool up and to engage in exploring the benefits (and mitigating the risks) of existing AI technology.

Whilst CIArb acknowledges that the parties (by agreement) ultimately have autonomy as to whether they use AI technology during arbitral proceedings, standards of transparency, consistency and fairness should apply. In that spirit, the CIArb AI Guidelines suggest that parties agree which uses of AI should be disclosed or admissible, and request case management conferences where necessary. When it comes to documents generated or “enhanced” by AI, practitioners should be alive to what will be disclosable, or accepted as evidence on the arbitral record. And when it comes to the enforceability of awards and challenges to them (for instance, under section 68 of the English Arbitration Act 1996), it is vital that arbitrators are clear about any part AI has played in their work, and that parties have the opportunity to discuss, question and, where appropriate, approve any such usage in advance. Whatever the extent of AI usage, arbitrators should maintain full ownership of, and responsibility for, their decision-making, which should not be delegated to an AI tool – especially in view of the “Black Box” problem.

Onwards

As understanding of AI’s capabilities develops, businesses and individuals across sectors are preparing for the changes it will bring. For most, though, the full extent of AI’s impact remains an abstract concept. For now, at least, the common cross-sector view appears to be that the technology will augment roles and work rather than replace them. The growing phenomenon of AI twins – which replicate the nuances of human traits, decision-making, interactions and preferences – may alter the picture of work further still in the months and years ahead. The legal, political, and regulatory developments resulting from the use of AI in the UK, Europe, the US and globally will continue. Going forwards, perhaps the most interesting and complicated discussion to have around AI is an ethical one: not whether the technology can be implemented, but whether it “should” be.

For further information or assistance in relation to AI, please reach out to Dispute Resolution Partner Elizabeth Weeks (elizabeth.weeks@rosenblatt.law) or Dispute Resolution Associate Jacques Domican-Bird (jacques.domican-bird@rosenblatt.law).

Rosenblatt also has a dedicated international arbitration team. For arbitration enquiries, please contact Rosenblatt’s Co-Heads of International Arbitration, Sara Paradisi (sara.paradisi@rosenblatt.law) or Dr. Leonardo Carpentieri (leonardo.carpentieri@rosenblatt.law).


[i] “Prime Minister sets out blueprint to turbocharge AI”, dated 13 January 2025, https://www.gov.uk/government/news/prime-minister-sets-out-blueprint-to-turbocharge-ai.

[ii] “AI power: Expanding data center capacity to meet growing demand”, McKinsey & Company, dated 29 October 2024, https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/ai-power-expanding-data-center-capacity-to-meet-growing-demand.

[iii] The other being the Judicial Arbitration and Mediation Services Inc (“JAMS”) AI Rules, effective April 2024, which address the rise in the usage and development of AI systems and smart contracts.

[iv] • Article 31(2) of UNCITRAL Model Law on International Commercial Arbitration provides that “[t]he award shall state the reasons upon which it is based, unless the parties have agreed that no reasons are to be given or the award is an award on agreed terms […].”

• The UNCITRAL Arbitration Rules align with the Model Law, providing in Article 34(3) that “[t]he arbitral tribunal shall state the reasons upon which the award is based, unless the parties have agreed that no reasons are to be given.”

• Section 52(4) of the English Arbitration Act 1996 states that “[t]he award shall contain the reasons for the award unless it is an agreed award or the parties have agreed to dispense with reasons.”

• Article 32(2) of the ICC Rules also requires that “[t]he award shall state the reasons upon which it is based.”