Medical device manufacturers’ responsibilities under the AI Act

As the AI Act introduces new layers of regulation for AI-enabled medical devices, manufacturers will need to adapt their processes to ensure compliance.

Important to-dos for manufacturers working with high-risk AI systems

In this instalment of our AI Act Insights, we outline the obligations of medical device manufacturers, particularly those who, like us, develop high-risk AI systems.

Risk management system

Your risk management processes must continuously identify, assess and mitigate potential risks throughout the product lifecycle, from development to post-market surveillance. Manufacturers must ensure the reproducibility, reliability and safety of their AI systems by regularly reviewing and updating their risk mitigation strategies.

Implement data management

This includes ensuring the quality, accuracy and integrity of the data used to train and validate AI systems. The key here is using datasets that are representative, free from bias and handled in accordance with the GDPR to protect personal data.
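
How such checks are implemented will differ from product to product. Purely as an illustration, the following Python sketch (using pandas; the column names and acceptance thresholds are hypothetical, not taken from the AI Act) screens a training dataset for missing values and for under-represented subgroups before it is accepted for training.

    import pandas as pd

    # Hypothetical acceptance thresholds -- real values must come from your
    # own data management plan and the intended-use population.
    MAX_MISSING_FRACTION = 0.05
    MIN_SUBGROUP_SHARE = 0.20

    def screen_training_data(df: pd.DataFrame, subgroup_column: str) -> list[str]:
        """Return a list of findings that block use of the dataset for training."""
        findings = []

        # Completeness: flag columns with too many missing values.
        for column, fraction in df.isna().mean().items():
            if fraction > MAX_MISSING_FRACTION:
                findings.append(f"{column}: {fraction:.1%} missing values")

        # Representativeness: flag under-represented subgroups (e.g. sex, age band).
        for group, share in df[subgroup_column].value_counts(normalize=True).items():
            if share < MIN_SUBGROUP_SHARE:
                findings.append(f"{subgroup_column}={group}: only {share:.1%} of records")

        return findings

    if __name__ == "__main__":
        # Toy data for illustration only.
        data = pd.DataFrame({
            "age": [54, 61, None, 47, 70, 66],
            "sex": ["F", "M", "M", "M", "M", "M"],
        })
        for finding in screen_training_data(data, subgroup_column="sex"):
            print("BLOCKER:", finding)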

Technical documentation

This must detail the design, functionality, intended use and risk management strategies of the AI system. This documentation must be kept up to date and be available to regulators for conformity assessment. It plays a critical role in demonstrating that the AI system meets safety, transparency and performance requirements.

Keep detailed records

Manufacturers must keep detailed records of all processes related to the AI system, including its performance, updates and any changes over its lifecycle. Thorough records are critical for market surveillance and ensure transparency towards regulators.
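
What these records look like in practice depends on your quality management system. As a minimal sketch only (the schema and field names are hypothetical), the following Python snippet appends timestamped, structured entries for lifecycle events such as model updates to an append-only JSON Lines log.

    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class LifecycleRecord:
        """One auditable event in the AI system's lifecycle (hypothetical schema)."""
        event: str          # e.g. "model_update", "performance_review"
        model_version: str
        details: str
        timestamp: str = ""

    def append_record(record: LifecycleRecord, path: str = "ai_lifecycle_log.jsonl") -> None:
        # Timestamp every entry and append it; existing entries are never rewritten.
        record.timestamp = datetime.now(timezone.utc).isoformat()
        with open(path, "a", encoding="utf-8") as log:
            log.write(json.dumps(asdict(record)) + "\n")

    append_record(LifecycleRecord(
        event="model_update",
        model_version="2.4.1",
        details="Retrained after scheduled drift review; see attached validation report.",
    ))

Keeping such logs append-only and machine-readable makes it far easier to hand regulators a complete change history on request.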

Transparency and clear information

Users (healthcare professionals and patients) must be provided with clear information about how the AI system works, including its intended purpose, risks and limitations. This transparency is crucial for healthcare professionals to make informed decisions when using AI-enabled devices, and promotes patient acceptance.

Human oversight

Manufacturers must ensure that AI systems do not operate autonomously without adequate human oversight. For medical devices, this means that healthcare professionals must be able to intervene and override the decisions of the AI system at any time, especially when the safety of patients, users or third parties is at stake.
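
In software terms, this is often implemented by treating the AI output as a proposal that a healthcare professional must confirm, modify or reject before it takes effect. A minimal Python sketch of that pattern (the class and field names are hypothetical and not tied to any specific device):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AiProposal:
        finding: str
        confidence: float

    @dataclass
    class ClinicianDecision:
        accepted: bool
        override_finding: Optional[str] = None
        rationale: str = ""

    def final_result(proposal: AiProposal, decision: ClinicianDecision) -> str:
        """The AI output never becomes the final result without a human decision."""
        if decision.accepted:
            return proposal.finding
        # The clinician can override the system at any time; the override wins.
        return decision.override_finding or "no finding recorded"

    proposal = AiProposal(finding="suspected lesion, segment 4", confidence=0.87)
    decision = ClinicianDecision(accepted=False,
                                 override_finding="artefact, no lesion",
                                 rationale="Motion artefact visible on adjacent slices.")
    print(final_result(proposal, decision))

The design choice is that the system records both the proposal and the human decision, so the override itself becomes part of the auditable record.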

Accuracy and robustness

Manufacturers must ensure that their AI systems are reliable and protected against cybersecurity threats, and they must validate the accuracy of AI outputs to avoid errors or malfunctions.
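
As a rough illustration of validating output accuracy (the metrics and thresholds are hypothetical; real acceptance criteria must come from your clinical evaluation), the Python sketch below blocks a release if sensitivity or specificity on a locked test set falls below predefined limits.

    # Hypothetical acceptance criteria from an (assumed) clinical evaluation plan.
    MIN_SENSITIVITY = 0.90
    MIN_SPECIFICITY = 0.85

    def validate_release(y_true: list[int], y_pred: list[int]) -> bool:
        """Return True only if the model meets the predefined acceptance criteria."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)

        sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
        specificity = tn / (tn + fp) if (tn + fp) else 0.0

        print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}")
        return sensitivity >= MIN_SENSITIVITY and specificity >= MIN_SPECIFICITY

    # Locked hold-out test set (toy values for illustration only).
    labels      = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
    predictions = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
    if not validate_release(labels, predictions):
        raise SystemExit("Release blocked: acceptance criteria not met.")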

AI systems in the sandbox

To ensure that these requirements are met and that companies can develop in a safe environment, the AI Act provides for the establishment of so-called AI regulatory sandboxes.

These sandboxes are controlled environments where AI developers and providers can test innovative AI systems under the supervision of regulators. Article 57 of the AI Act requires each Member State to establish at least one sandbox at national level by August 2026.

The main objectives of AI regulatory sandboxes are to:

  • Encourage innovation by allowing developers to test AI technologies under real-world conditions.
  • Improve legal certainty for AI providers.
  • Support and encourage collaboration between regulators and AI developers.

Most importantly for developers and manufacturers, participation in a sandbox can provide written evidence of compliance that can be used in formal conformity assessments or in working with Notified Bodies, thereby accelerating the market entry of your products. These sandboxes also contribute to evidence-based regulation by allowing authorities to refine their guidance based on real-world testing – actively influencing regulation, as mentioned in part two.

On the right path to greater safety

Based on our experience of developing software and algorithms in AI-assisted testing environments and providing AI-assisted regulatory support, we see these sandboxes as a step in the right direction towards safer and more reliable devices. For us, for manufacturers and for patients, the AI Act ensures that new systems, which will increasingly incorporate AI, are developed and deployed with safety, transparency and ethical considerations in mind.

With the necessary preparation and the right partner at your side, these regulations will not be an additional burden, but will have a positive impact on your product, your entire development process and your market position.

If you missed the first two articles in this series, you can catch up here:
