AI
The pharma industry wants regulators around the world to engage with companies and “articulate the value added” when introducing new regulations and guidance around the use of AI in drug development, AstraZeneca’s director for data and AI policy says.
Global regulators should work together on producing standard terminology around the use of AI in drug development to align as much as possible on their approaches, according to the Food and Drug Administration’s Tala Fakhouri.
Regulators do not have the resources to “double check” that companies are using AI appropriately, meaning that manufacturers must ensure the AI tools they use meet relevant standards, says an EU regulatory expert.
A panel of experts from J&J, UCB and Takeda discussed how AI and internal processes can strike the right balance between speed and accuracy in medical, legal and regulatory (MLR) reviews, a balance that could protect a company from serious repercussions. They also spoke of the need for regulatory systems to catch up.
The European Medicines Agency should be responsible for the regulatory oversight of AI in the drug development process in the EU and provide clarity on its “risk-based” approach to governance, pharma industry federation EFPIA says.
The US FDA commissioner says the agency plans to use artificial intelligence to look for clinical trial fraud in applications, in part to find problems sooner.
Pink Sheet reporter and editors discuss an emerging pharma strategy to avoid Medicare price negotiations, legal wrangling related to compounding GLP-1 drugs for obesity and diabetes, and the varying opinions of FDA officials on the acceptability of artificial intelligence models that are not fully explainable.
US FDA Chief Medical Officer Hilary Marston says a black box model could be a problem for reviewers, after CDER and CBER officials said unexplainable AI models could be acceptable, though transparency is important.
There is “a lot of flexibility” in the European Medicines Agency’s reflection paper on the use of artificial intelligence during drug development, which is principles-driven rather than setting rigid recommendations, says the agency’s Florian Lasch.
The FDA is developing several structures and a broad group of experts across disciplines to help craft artificial intelligence policy. But the proliferation of AI-related initiatives raises the question of who, ultimately, will make decisions about when novel applications of AI are acceptable.
Many of the comments were very helpful in improving the document in terms of both form and content, the European Medicines Agency said of its newly published reflection paper on the use of artificial intelligence during the drug development, marketing authorization and post-authorization phases.
The five-year roadmap aims to expand support for AI research and development in essential health care and new drug development, as well as to advance medical data usage systems and enable the safe use of AI.
Medicines regulators in the EU have “much to gain” from using AI models in their processes, but this technology must be used in a “safe and responsible” way, says the European Medicines Agency.
A recurring question about using artificial intelligence in drug development is whether the US Food and Drug Administration can accept a model that operates as a black box, meaning that developers cannot explain exactly how the model does what it does.
Pink Sheet reporter and editors discuss new developments in the FDA’s plans to regulate artificial intelligence and drugs associated with it, including a new AI Council within CDER, as well as some of the unanswered questions about AI in drug development.
AI modeling can predict which animal tests are useful and necessary, saving money for companies and meeting objectives set by regulators in the US and EU, VeriSIM Life’s CEO and founder Jo Varshney tells the Pink Sheet.
The Artificial Intelligence Council takes over work that three different entities in the US FDA’s drugs center had been performing. The new, centralized entity will develop and promote consistency in AI-related activities and advance innovative uses.
Regulatory uncertainty and the biopharma industry’s longstanding aversion to risk are hindering adoption of artificial intelligence and machine learning in drug and biologic development, panelists said at a recent US FDA/CTTI workshop.
AI has the potential to save vast amounts of time and money by optimizing pharma supply chain processes, but companies must think about legal risk and liability from all angles, Ewan Townsend, partner at law firm Arnold & Porter, tells the Pink Sheet.
Pharmaceutical companies should only use AI in evidence generation and reporting where there is “demonstrable value from doing so,” according to England’s health technology assessment body, NICE.