FDA’s new AI tool has blind spots—Could that be bad news for seniors?
The Food and Drug Administration (FDA) is racing to bring artificial intelligence into the heart of its medical device review process, promising faster approvals for everything from pacemakers to insulin pumps.
But behind the scenes, the agency’s much-hyped AI tool—internally dubbed CDRH-GPT—seems to be tripping over its own shoelaces.
For those who rely on safe, effective medical devices, this is more than a tech hiccup—it’s a matter of trust, safety, and peace of mind.
The Promise: AI to the Rescue?
Imagine a world where the FDA could review mountains of clinical trial data in a fraction of the time, greenlighting life-saving devices faster and getting innovations into the hands (and bodies) of patients who need them most.
That’s the dream behind CDRH-GPT, an AI assistant designed to help the Center for Devices and Radiological Health (CDRH) sift through complex data, answer questions, and streamline the review and approval process for medical devices like pacemakers and insulin pumps.
With the FDA’s device division stretched thin after recent layoffs, and reviewers drowning in paperwork, the idea of a digital helper sounds like a lifeline.
In theory, AI could cut review times from months—or even years—to mere weeks.

The FDA’s new AI tool, intended to speed up the review and approval of medical devices, is still in beta, with significant bugs and difficulty handling basic tasks such as document uploads and connections to internal systems. Image source: NBC News / YouTube.
The Reality: Not Ready for Prime Time
But according to insiders, the reality is far less rosy.
The tool, still in its beta phase, is reportedly glitchy and not yet integrated with the FDA’s internal systems.
According to sources, it struggles with uploading documents and doesn’t reliably handle user-submitted questions.
Additionally, it lacks internet access, meaning it can’t retrieve newly published studies or content behind paywalls.
Also read: Alone with a panic attack? Here's how ChatGPT stepped in
A Rushed Rollout?
Since taking office on April 1, Commissioner Dr. Marty Makary has pushed for widespread integration of AI across the FDA. However, it's still unclear how this technology might impact the safety and effectiveness of medical devices and drugs.
Makary set a June 30 deadline for the AI rollout and claimed Monday that the agency is “ahead of schedule.”
But according to two individuals familiar with the CDRH-GPT tool, it still requires substantial development. Staff have expressed doubts about meeting the original timeline.
“I worry that they may be moving toward AI too quickly out of desperation, before it’s ready to perform,” said Arthur Caplan, head of medical ethics at NYU Langone.

Some FDA staff feel the rollout of AI tools is being rushed and that the new technology isn’t ready to support the complex and essential regulatory work required for medical device safety. Image source: NBC News / YouTube.
He emphasized the stakes involved in device evaluation: “It still needs human supplementation,” he said. AI “is really just not intelligent enough yet to really probe the applicant or challenge or interact.”
The FDA has referred all media inquiries to the Department of Health and Human Services, which did not respond to a request for comment.
Meet Elsa: The FDA’s Other AI Assistant
Makary also announced the launch of a separate tool, Elsa, which is now available agencywide for tasks like summarizing adverse event reports.
“The first reviewer who used this AI assistant tool actually said that the AI did in six minutes what it would normally take him two to three days to do,” Makary said last week. “And we’re hoping that those increased efficiencies help. So I think we’ve got a bright future.”
Inside the agency, however, the sentiment is more cautious. While sources acknowledged the promise of tools like Elsa, they noted the rollout feels rushed and the tool remains underdeveloped.
You might like: How Candy Crush quietly uses AI to keep you coming back—and why it works
“AI tools to help with certain tasks for reviewers and scientists seems reasonable given the potential utility of AI,” said one person familiar with the rollout. Still, they questioned the “aggressive roll out” and claims of major time savings, calling them unrealistic.
Staff have worked hard to implement Elsa, the sources said, but it still lacks key functionality needed to support complex regulatory processes.
In testing this week, the tool reportedly returned answers that were either partially correct or inaccurate when asked about FDA-approved products and public information.
It remains uncertain whether CDRH-GPT will eventually merge with Elsa or continue as a separate platform.
The Human Factor: Job Security and Ethics
Concerns over ethics and oversight are also surfacing. Richard Painter, a law professor and former government ethics official, raised questions about potential conflicts of interest.
Source: NBC News / YouTube.
“We need to make sure that the people involved in these decisions do not have a financial interest in the artificial intelligence companies that would get the contracts,” he said. “A conflict of interest can greatly compromise the integrity and the reputation of a federal agency.”
Some FDA employees don’t view AI as a solution—they see it as a potential threat to their roles.
The FDA is “already spread thin from the RIF [layoffs] and the steady loss of individuals while in a hiring freeze and no capacity to backfill,” one person said.
Related news:
- From flirty texts to graceful goodbyes—how AI can support your love life at any age
- A Supreme Court just replaced its news anchors... with AI?
Key Takeaways
- The FDA’s new AI tool, intended to speed up the review and approval of medical devices, is still in beta, with significant bugs; it struggles with basic tasks such as document uploads and connecting to internal systems.
- Some FDA staff feel the rollout of AI tools is being rushed and that the new technology isn’t ready to support the complex and essential regulatory work required for medical device safety.
- There are concerns from experts and staff that reliance on underdeveloped AI could compromise accuracy in medical device reviews, potentially affecting patient safety, and that AI still needs strong human oversight.
- Questions have been raised about conflicts of interest and integrity, as well as concerns among FDA staff that these AI tools may be used as a step toward replacing human workers, especially amid ongoing staff shortages and hiring freezes.
What do you think about the FDA’s push to use AI in medical device approvals? Are you excited about the potential for faster innovation, or worried about safety and accuracy? Have you had experiences—good or bad—with medical devices or new healthcare technology? Share your thoughts, questions, and stories in the comments below!