AI
The risk-based framework described in a new draft guidance starts with defining the question of interest and context of use and includes development and execution of a credibility assessment plan. The guidance is limited to AI models used to support regulatory decisions about drug safety, effectiveness or quality.
An artificial intelligence-based pathology tool for metabolic dysfunction-associated steatohepatitis shows promise for a drug development landscape that is said to be “fraught with trials that have shown borderline results or outright failures based on liver histology.”
As real-world evidence (RWE) advances, a multiplicity of roles is emerging, but case studies show it is more effective at filling evidence gaps during drug development and improving surveillance than at serving as an alternative to randomized controlled trials.
The pharma industry wants regulators around the world to engage with companies and “articulate the value added” when introducing new regulations and guidance around the use of AI in drug development, AstraZeneca’s director for data and AI policy says.
Global regulators should work together on producing standard terminology around the use of AI in drug development to align as much as possible on their approaches, according to the Food and Drug Administration’s Tala Fakhouri.
Regulators do not have the resources to “double check” that companies are using AI appropriately, meaning that manufacturers must ensure the AI tools they use meet relevant standards, says an EU regulatory expert.
A panel of experts from J&J, UCB and Takeda discussed using AI and internal processes to strike the right balance between speed and accuracy in medical, legal and regulatory (MLR) reviews, which could protect a company from serious repercussions. They also spoke of the need for regulatory systems to catch up.
The European Medicines Agency should be responsible for the regulatory oversight of AI in the drug development process in the EU and provide clarity on its “risk-based” approach to governance, pharma industry federation EFPIA says.
The US FDA commissioner says the agency plans to use artificial intelligence to look for clinical trial fraud in applications, in part to find problems sooner.
Pink Sheet reporter and editors discuss an emerging pharma strategy to avoid Medicare price negotiations, legal wrangling related to compounding GLP-1 drugs for obesity and diabetes, and the varying opinions of FDA officials on the acceptability of artificial intelligence models that are not fully explainable.
Chief Medical Officer Hilary Marston says a black box model could be a problem for reviewers, after CDER and CBER officials said unexplainable AI models could be acceptable but transparency is important.
There is “a lot of flexibility” in the European Medicines Agency’s reflection paper on the use of artificial intelligence during drug development, which is principles-driven rather than setting rigid recommendations, says the agency’s Florian Lasch.
The FDA is developing several structures and a broad group of experts across disciplines to help craft artificial intelligence policy. But the proliferation of AI-related initiatives raises the question of who, ultimately, will make decisions about when novel applications of AI are acceptable.
Many of the comments were very helpful in improving the document in relation to both form and content, the European Medicines Agency said of its newly published reflection paper on the use of artificial intelligence during the drug development, marketing authorization and post-authorization phases.
The five-year roadmap aims to expand support for AI research and development in essential health care and new drug development, as well as advance medical data usage systems and enable the safe use of AI.
Medicines regulators in the EU have “much to gain” from using AI models in their processes, but this technology must be used in a “safe and responsible” way, says the European Medicines Agency.
A recurring question about using artificial intelligence in drug development is whether the US Food and Drug Administration can accept a model that operates as a black box, meaning that developers cannot explain exactly how the model does what it does.
Pink Sheet reporter and editors discuss new developments in the FDA’s plans to regulate artificial intelligence and drugs associated with it, including a new AI Council within CDER, as well as some of the unanswered questions about AI in drug development.
AI modeling can predict which animal tests are useful and necessary, saving money for companies and meeting objectives set by regulators in the US and EU, VeriSIM Life’s CEO and founder Jo Varshney tells the Pink Sheet.
The Artificial Intelligence Council takes over work that three different entities in the US FDA’s drugs center had been performing. The new, centralized entity will develop and promote consistency in AI-related activities and advance innovative uses.