20 February 2024 | Features | Europe | Jackie Mulryne and Beatriz San Martin

What does the UK’s pro-innovation approach to AI mean for life sciences companies?

It remains to be seen whether AI developers in the medical sector will benefit from the UK’s ‘flexible’ regulatory environment or find the lack of cohesion detrimental, say Jackie Mulryne and Beatriz San Martin of Arnold & Porter.

There is currently no specific legislation in the UK that governs AI, or its use in healthcare. Instead, a number of general-purpose laws apply that have to be adapted to specific AI technologies.

As a step towards a more coherent approach, on 6 February the government published its response to its consultation on regulating AI in the UK. This maintains the government’s “pro-innovation” framework of principles, to be set out in guidance rather than legislation and then implemented by regulatory authorities in their respective sectors, such as the Medicines and Healthcare products Regulatory Agency (MHRA) for medicines.

The MHRA has already started this process and has positioned itself as an early adopter of the UK government’s approach. The hope is that this will encourage life sciences companies to invest in the UK and to treat it as a first-launch country for innovative technologies.

Pro-innovation

The overall approach by the UK government is to combine five cross-sectoral principles with context-specific guidance developed by the relevant regulatory authority. According to the government, the aim is that this will enable the UK to remain flexible enough to deal with the speed at which AI is developing, while also being robust enough to address key concerns.

In its response to the consultation, the UK government confirmed the five cross-sectoral principles to be used by regulators to develop their own guidance relevant to the use of AI within their field. These principles are:

1. Safety, security and robustness

2. Appropriate transparency and explainability

3. Fairness

4. Accountability and governance

5. Contestability and redress

To assist in this process, the government has committed to providing regulators with funding to train and upskill their workforce, and to develop tools to monitor and address risks and opportunities.

In addition, the government has proposed, and has already started to establish, a new central function to coordinate regulatory activities and help address regulatory gaps.

However, the role of this central function is not clear, and there is a risk that, rather than streamlining the process, it will add a further layer of bureaucracy that developers will need to navigate.

If the central function has the ability to impose additional requirements or to review AI technologies on the market, and therefore review the guidance and decisions of regulators such as the MHRA, there is a risk that this leads to less cohesion rather than more.

Position of the MHRA

The government’s proposal is in line with the MHRA’s approach to the regulation of AI, and the MHRA is highlighted in the consultation response as a regulatory authority which has already set out guidance for its sector.

In the MHRA’s response to its consultation on the medical devices regime in the UK post-Brexit, it announced similarly broad-brush plans for regulating AI-enabled medical devices. In particular, the devices regime is unlikely to set out specific legal requirements beyond those being considered for software as a medical device.

Instead, the MHRA intends to publish guidance on how AI fits into the regulatory regime. For example, the MHRA has recently updated its guidance on Software and AI as a Medical Device and its Change Programme Roadmap.

Further, although development of a regulatory sandbox is not included as a specific action point by the government, the response notes that the majority of respondents stated that healthcare and medical devices would benefit most from an AI sandbox. The MHRA has already announced its intention to launch a regulatory sandbox, called the “AI-Airlock”, in April 2024.

The intention is for this “sandbox” to provide a regulator-monitored virtual area for developers to generate robust evidence for their technologies. This seeks to foster a collaborative approach to development of novel technologies that may not fit well within the existing regulatory regime.

AI innovation: the IP challenge

The government response acknowledges the deep concern amongst creative industries and media organisations over the large-scale use of copyright-protected content for training AI models, and their desire to retain autonomy and control over their valuable works. It also notes AI developers’ need for easy access to a wide range of high-quality datasets to develop and train cutting-edge AI systems in the UK.

Although the UK Intellectual Property Office (UKIPO) was tasked in March 2023 to convene a working group of stakeholders and to produce a balanced and pragmatic voluntary code of practice that would enable both sectors to grow in partnership, the government response now accepts that consensus will not be reached through a voluntary code.

Instead, there will now be a period of consultation and engagement with stakeholders to seek an approach that “allows the AI and creative sectors to grow together in partnership”. The government adds that this will need to be underpinned by trust and transparency, with transparency from AI developers in relation to data inputs and the attribution of outputs having an important role to play. It also notes the need for close engagement with international counterparts.

The challenge of finding a balanced solution is unlikely to be met without some legislative guidance, in line with the recommendation of the House of Commons interim report on the governance of AI, published at the end of August 2023, which urged the government to “accelerate, not to pause, the establishment of a governance regime for AI, including whatever statutory measures as may be needed”.

Criticism of the UK’s approach

There have been questions as to whether the government’s approach is already behind the curve. Increasingly, commentators have suggested that a firmer approach is required. Moreover, there is a risk that by not setting out a clear regulatory framework now, regulators will have the difficult task of having to regulate AI systems once they are already on the market.

In November, a Private Members’ Bill was introduced in the House of Lords. The main purpose of the Bill was to establish a central AI Authority to coordinate and monitor the regulatory approach to AI, whilst promoting transparency, reducing bias and balancing regulatory burden against risk. This largely tracks the government’s position, but seeks to introduce the provisions into law.

The government clearly does not agree with this approach. At this stage, it does not intend to pass AI-specific legislation, although a central oversight function is being established. However, it does acknowledge that “further targeted binding requirements” may need to be introduced for “highly capable general-purpose AI” in the future.

These AI systems are defined as foundation models that can perform a wide variety of tasks. The risk with highly capable general-purpose AI is that it could be used across sectors and therefore fall between the competence and powers of regulatory authorities.

Further, given the wide-ranging uses, the potential harms are more widespread. The government therefore proposes to introduce legislation to regulate this category of AI. In contrast, “highly capable narrow AI” systems are foundation models that can perform a narrow set of tasks, normally within a specific field such as biology or healthcare.

Where AI is used in healthcare, for example, the MHRA can regulate such use without the need for additional legislation.

While AI developers within life sciences may not be caught by any future legislation, narrow AI systems may well be based on, or use models or data generated by, general-purpose AI systems. Companies will need to consider the extent to which partners may be subject to binding legislation and how this affects the development and use of their own narrow AI systems.

Comparison with the EU

The “flexible approach” contrasts with the position in the EU, which has chosen to introduce specific legislation on AI requiring AI systems to meet new legal requirements overseen by regulatory bodies.

In December, the European Parliament and the Council of the EU announced that they had reached a provisional agreement on the text of the EU AI Act. The AI Act takes a risk-proportionate approach and categorises AI systems into four risk levels.

Medical devices will be classed as “high risk” and will therefore be subject to a set of requirements proportionate to this risk, both before the products are placed on the market and throughout the product life cycle.

There has been concern throughout the legislative process that medical devices that incorporate AI will therefore need to meet both sets of requirements: the rules on AI and the rules on medical devices.

A recent leaked text of the proposed Act acknowledges the need for alignment, and the need to avoid duplication between sectoral legislation and the AI Act. It implies that conformity assessment can be performed under the medical devices rules, taking into consideration the requirements of the AI Act.

How this works in practice, and the additional level of burden on manufacturers and Notified Bodies, remains to be seen.

The UK’s approach seeks to avoid this duplication altogether.

So what should companies do?

The government will be hoping that companies will use the UK as a launch country to test out—and invest in—innovative products and make use of the regulatory flexibility rather than having to meet the more stringent rules of the EU AI Act.

There is some evidence that this is currently the case for certain digital devices, for example, which have been “up-classified” under the EU medical device regulations and so are subject to more stringent conformity assessment procedures. Under the UK rules, digital devices remain in a lower classification and so can be placed on the market under a self-certification, leading to a quicker (and usually cheaper) launch. Whether this will also happen with more complex AI systems will need to be monitored by the government.

In addition, it seems unlikely that a company will launch a product only in the UK, meaning it will need to meet the EU AI Act requirements in any event. An early launch in the UK may therefore not be commercially advantageous unless the government can also ensure widespread use within the NHS.

Jackie Mulryne and Beatriz San Martin are partners at Arnold & Porter


