“The European Union’s Artificial Intelligence Act (AI Act) introduces regulations aimed at ensuring the safety, transparency, and ethical use of AI systems, with a focus on high-risk applications, including those embedded in medical devices.

AI systems integrated into medical devices are classified as high-risk when they serve as safety components of products covered by Union harmonisation legislation that requires third-party conformity assessment. Providers of such AI systems must meet stringent requirements, including conformity assessments covering data quality, transparency, and cybersecurity. They must also implement risk management systems to maintain compliance after the system has been placed on the market.

High-risk AI systems used by public authorities must be registered in a public EU database, with exceptions for specific systems, such as those used for law enforcement. Market surveillance authorities will conduct regular audits and post-market monitoring of these high-risk AI systems.

European harmonised standards for high-risk AI systems, including those embedded in medical devices, will confer a presumption of conformity once published. Additionally, general-purpose AI models that could be integrated into medical devices must undergo thorough risk assessments and compliance checks to ensure safety.”

Source: https://ec.europa.eu/commission/presscorner/detail/en/qanda_21_1683 (PDF: https://ec.europa.eu/commission/presscorner/api/files/document/print/en/qanda_21_1683/QANDA_21_1683_EN.pdf)

Region: EU Harmonized (27 Markets)
