The US Food and Drug Administration (FDA) has stated that it wants to accelerate the deployment of AI across its centres. FDA Commissioner Martin A. Makary has announced an aggressive timeline to scale the use of AI agency-wide by 30 June 2025 and is betting big on the technology to transform the US drug approval process.
But the rapid AI deployment at the FDA raises important questions about whether innovation can be balanced with oversight.
Strategic leadership drive: FDA names first AI chief
The foundation for the ambitious FDA AI deployment was laid with the appointment of Jeremy Walsh as the first-ever Chief AI Officer. Walsh previously led enterprise-scale technology deployments in federal health and intelligence agencies and came from government contractor Booz Allen Hamilton, where he worked for 14 years as chief technologist.
His appointment, unveiled shortly before the 8 May rollout announcement, signals the agency’s serious commitment to technological transformation. The timing is significant – Walsh’s hiring coincided with workforce cuts at the FDA, including the loss of key tech talent.
Among the losses was Sridhar Mantha, the former director of strategic programmes at the Center for Drug Evaluation and Research, who had co-chaired the AI Council at CDER and helped develop policy around AI’s use in drug development. Ironically, Mantha is now working alongside Walsh to coordinate the agency-wide rollout.
The pilot programme: Impressive results, limited details
What’s driving the rapid AI deployment is the reported success of the agency’s pilot programme trialling the software. Commissioner Makary said he was “blown away by the success of our first AI-assisted scientific review pilot,” with one official claiming the technology enabled him to perform scientific review tasks in minutes that used to take three days.
However, the scope, rigour, and results of the pilot remain undisclosed.
The agency has not published detailed reports on the pilot’s methodology, validation procedures, or specific use cases tested. The lack of transparency is concerning given the high-stakes nature of drug evaluation.
When pressed for details, the FDA has promised that additional details and updates on the initiative will be shared publicly in June. For an agency responsible for protecting public health through rigorous scientific review, the absence of published pilot data raises questions about the evidence base supporting such an aggressive timeline.
Industry perspective: Cautious optimism meets concerns
The pharmaceutical industry’s reaction to the FDA AI deployment reflects a mixture of optimism and apprehension. Companies have long sought faster approval processes, with Makary pointedly asking, “Why does it take over 10 years for a new drug to come to market?”
“While AI is still developing, harnessing it requires a thoughtful and risk-based approach with patients at the centre. We’re pleased to see the FDA taking concrete action to harness the potential of AI,” said PhRMA spokesperson Andrew Powaleny.
However, industry experts are raising practical concerns. Mike Hinckle, an FDA compliance expert at K&L Gates, highlighted a key issue: pharmaceutical companies will want to know how the proprietary data they submit will be secured.
The concern is particularly acute given reports that the FDA was in discussions with OpenAI about a project called cderGPT, which appears to be an AI tool for the Center for Drug Evaluation and Research.
Expert warnings: The rush vs rigour debate
Leading experts in the field are expressing concern about the pace of deployment. Eric Topol, founder of the Scripps Research Translational Institute, told Axios: “The idea is good, but the lack of details and the perceived ‘rush’ is concerning.”
He identified critical gaps in transparency, including questions about which models the agency is using and what inputs are provided for specialised fine-tuning.
Former FDA commissioner Robert Califf struck a balanced tone: “I have nothing but enthusiasm tempered by caution about the timeline.” His comment reflects the broader sentiment among experts who support AI integration but question whether the 30 June deadline allows sufficient time for proper validation and safeguards to be implemented.
Rafael Rosengarten of the Alliance for AI in Healthcare supports automation but emphasises the need for governance: policy guidance, he says, on what data is used to train AI models and what level of model performance is considered acceptable.
Political context: Trump’s deregulatory AI vision
The FDA AI deployment must be understood in the broader context of the Trump administration’s approach to AI governance. Trump’s overhaul of federal AI policy – ditching Biden-era guardrails in favour of speed and international dominance in technology – has turned the government into a tech testing ground.
The administration has explicitly prioritised innovation over precaution. Vice President JD Vance outlined four key AI policy priorities, including encouraging “pro-growth AI policies” instead of “excessive regulation of the AI sector,” and he has taken action to ensure the forthcoming White House AI Action Plan would “avoid an overly precautionary regulatory regime.”
The philosophy is evident in how the FDA is approaching its AI deployment. With Elon Musk leading a charge under an “AI-first” flag, critics warn that rushed rollouts at agencies could compromise data security, automate important decisions, and put Americans at risk.
Safeguards and governance: What’s missing?
While the FDA has promised that its AI systems will maintain strict information security and act in compliance with FDA policy, specific details about safeguards remain sparse. The agency claims that AI is a tool to support, not replace, human expertise, and that it can enhance regulatory rigour by helping to predict toxicities and adverse events. This provides some reassurance but lacks specificity.
The absence of a published governance framework for this internal process contrasts sharply with the guidance the FDA has issued to industry.
The agency has previously issued draft guidance to pharmaceutical companies, providing recommendations on the use of AI intended to support regulatory decisions about a drug or biological product’s safety, effectiveness, or quality. That guidance was informed by more than 800 external comments and the agency’s experience with over 500 drug submissions involving AI components in their development since 2016.
The broader AI landscape: Federal agencies as testing grounds
The FDA’s initiative is part of a larger federal AI adoption wave. The General Services Administration is piloting an AI chatbot to automate routine tasks, and the Social Security Administration plans to use AI software to transcribe applicant hearings.
However, GSA officials noted that their tool has been in development for 18 months – highlighting the contrast with the FDA’s accelerated timeline, which, at the time of writing, is a matter of weeks.
The rapid federal adoption reflects the Trump administration’s belief that America is well-positioned to maintain its global dominance in AI and that the Federal Government must capitalise on the advantages of American innovation, while also maintaining strong protections for Americans’ privacy, civil rights, and civil liberties.
Innovation at a crossroads
The FDA’s ambitious timeline embodies the fundamental tension between technological promise and regulatory responsibility. While AI offers clear benefits in automating tedious tasks, the rush to implementation raises critical questions about transparency, accountability, and the erosion of scientific rigour.
The 30 June deadline will test whether the agency can maintain the public trust that has long been its cornerstone. Success requires more than technological capability – it demands proof that oversight hasn’t been sacrificed for speed.
The FDA AI deployment represents a defining moment for pharmaceutical regulation. The outcome will determine whether rapid AI adoption strengthens public health protection or serves as a cautionary tale about prioritising efficiency over safety in matters of life and death. The stakes couldn’t be higher.