PDA Letter Article

News Brief: EMA and FDA Unite on AI Principles to Revolutionize Medicine Development

Justin Johnson, PDA

Quality & Regulatory

In a landmark transatlantic initiative, the European Medicines Agency (EMA) and the U.S. Food and Drug Administration (FDA) have jointly released a set of 10 guiding principles for the ethical, safe, and responsible use of artificial intelligence (AI) across the full lifecycle of medicine development — from early discovery and clinical trials to manufacturing and post-market safety monitoring.

These principles establish a common framework for developers, regulatory applicants, and industry stakeholders to leverage AI tools effectively while safeguarding patient welfare and scientific integrity. With AI becoming increasingly central to drug design and evaluation, regulators hope this guidance will underpin future, more detailed regulations in both jurisdictions and foster deeper international cooperation.

Key elements of the new guidance include ensuring AI is human-centric by design, grounded in a risk-based approach, and supported by strong data governance, transparency, lifecycle management, and multidisciplinary expertise.

Olivér Várhelyi, European Commissioner for Health and Animal Welfare, described the principles as a “first step” in strengthened European Union (EU) and U.S. collaboration on emerging medical technologies — a move aimed at preserving global leadership in innovation while maintaining the “highest level of patient safety.”

The EMA’s effort builds on its 2024 AI reflection paper and aligns with broader EU pharmaceutical strategy goals to harness digital tools responsibly, with attention to ethical considerations and regulatory compliance. Further guidance is expected in the coming years as the field evolves.