Robert Califf warned that the US Food and Drug Administration soon will use artificial intelligence to augment its efforts to detect and eliminate data integrity problems.
Key Takeaways
- Califf told generic drug industry stakeholders that sponsors who previously cheated on clinical trials should watch out, because the fraud will be easy to find with AI.
- He also told generic drug sponsors to ensure clinical studies are outsourced to reputable firms.
- Data integrity remains a concern for FDA officials, especially after several fraud cases forced product rating downgrades.
“You apply AI to an application, you find ... when people are cheating,” Califf, the FDA commissioner, said 23 October during an appearance at the Association for Accessible Medicines’ GRx-Biosims conference. “If you cheated in the past, watch out because it’s going to be easy to apply AI to the applications we already have.”
“I don’t like being a cop,” Califf added. “But I also don’t like people who cheat. The system is very dependent on the integrity of the industry. If you’re cheating out there we’re going to be applying AI to your stuff.”
Califf’s statement came in response to a question about the agency’s and drug industry’s growing use of AI. Applying the technology to data integrity investigations could cut the time needed to find and prove fraud, potentially sparing sponsors from having abbreviated new drug applications (ANDAs) caught in the dragnet.
Several high-profile data integrity issues have emerged in recent years, in most cases affecting generic drug sponsors. The agency has spotted questionable practices at contract research organizations (CROs) and declared that sponsors whose applications relied on the tainted data must redo the trials or withdraw their ANDAs.
“Be honest,” Califf said. “I think specifically on clinical studies that you do, make sure that they are outsourced to high integrity people because we’re going to get much better at catching dishonest activity.”
Data Integrity Issues Could Take Years To Resolve
Among the more recent data integrity problems were those found at Synchron Research Services and Panexcell Clinical Lab, both based in India. The FDA investigated concerns about the CROs for two years before announcing the bioequivalence (BE) data problems in 2021.
More than 100 applications were initially thought to have been affected by the clinical trial fraud. The products were downgraded to a BX rating, meaning they could not be automatically substituted for the reference product at the pharmacy.
Experts thought at the time that the issues with the affected applications could take years to resolve, either through repeated BE studies or formal withdrawals.
Similar data integrity issues, including clinical trial fraud, also were uncovered at Cetero Research and Semler Research Center Private Ltd.
But while AI could improve data integrity investigations and other processes, the human element still may be necessary. FDA officials already have said AI currently cannot perform postmarket drug safety work alone.
Regulators’ AI Work Continuing
The FDA continues to prepare for AI-related issues as more applications employing the technology arrive.
Agency officials already are considering how so-called “digital twins,” which are AI models that predict patient behavior, could be used as control arms in oncology trials. They could replace placebo arms, but the idea still seems to be early in development.
Other questions, such as how the agency will handle AI models that cannot be fully explained, also need answers. FDA officials seem open to accepting “black box” AI models, so long as evidence is available supporting their output.
Sponsors also want more information on FDA inspection policy for drug development with AI components, and many stakeholders have questioned the potential use of third-party assurance labs, which the agency cannot regulate.
In addition, multiple offices and centers within the FDA have staff focused on AI policy, which has led to questions about who will make the final decisions on the subject.
Ultimately, Califf has admitted the FDA cannot regulate AI alone and has called on the entire ecosystem to be more accountable.
The European Medicines Agency also recently finalized a reflection paper on AI, which called for flexibility and a risk-based approach.