Artificial intelligence has now made its way into your physician’s practice. The government is unsure of how to handle this advancement.

Washington has yet to establish rules for artificial intelligence in health care, even as doctors rapidly adopt it to interpret tests, diagnose illness and deliver behavioral therapy.

Products that use AI are going to market without the kind of data the government requires of new medical devices or medicines. The Biden administration hasn’t decided how to handle emerging tools like chatbots that interact with patients and answer doctors’ questions — even though some are already in use. And Congress is stalled: Senate Majority Leader Chuck Schumer said this week that legislation is still months away.

Patient safety advocates warn that without stronger government oversight, doctors may use AI systems that lead them astray: misdiagnosing illness, relying on racially biased data or violating patient privacy.

Suresh Venkatasubramanian, a computer scientist at Brown University, said it is deeply troubling that physicians are using inadequately tested AI systems in situations that directly involve patients.

Venkatasubramanian has a distinctive vantage point: he helped write the blueprint for an AI Bill of Rights the Biden administration released in October 2022, which called for close human supervision to ensure AI systems do what they are supposed to do.

But the blueprint remains just paper: President Joe Biden has not asked Congress to ratify it, and no lawmaker has moved to do so.

The coalition is pressing health systems to stop using AI built on data sets that underestimate Black patients’ lung capacity and their ability to deliver a baby vaginally after a cesarean section, while overestimating their muscle mass.

Some AI developers worry about how doctors are using their systems. Eli Ben-Joseph, co-founder and CEO of Regard, whose technology is embedded in health systems’ medical records and has made 1.7 million diagnoses, said some users grow overly reliant on the product once they become accustomed to it.

Regard has built in safeguards that warn doctors when they are moving too quickly or have not reviewed all of the system’s output.

Congress, despite convening tech industry leaders last month, remains divided on what to do.

The Biden administration has put the Food and Drug Administration in charge of clearing new AI products before they go to market. The process does not demand the kind of comprehensive data required of traditional drug and device makers, but the agency does monitor the products afterward for adverse effects.

Troy Tazbaz, director of the FDA’s Digital Health Center of Excellence, acknowledged the agency needs to do more. Regulating AI products built for health care, such as ChatGPT, a bot capable of passing medical exams, requires a different approach, he said, and the agency is still working out what that should be.

Even so, AI is being adopted rapidly across health care, despite systems Venkatasubramanian describes as “incredibly fragile.” He worries about errors and racial bias when the systems are used to diagnose patients, and he predicts physicians will lean too heavily on their judgments.

Most of the 10 developers of the technology whom POLITICO interviewed acknowledged the risks of inadequate oversight.

Ross Harper, founder and CEO of Limbic, which uses AI in a behavioral therapy app, said there are likely already cases today, and will be more in the coming year, of organizations deploying large language models in unsafe ways.

People, he said, would come to trust the systems without hesitation.

Limbic has obtained certification as a medical device in the United Kingdom, and Harper said the company is making headway in the United States despite the murky regulatory environment.

Declining to use these new tools, he said, would be the real mistake.

Limbic’s chatbot, which the company said is the first of its kind in America, works through a smartphone app — in conjunction with a human therapist.

Patients can tell the bot about their thoughts and feelings. It draws on established therapy protocols and pairs its AI with a separate statistical model intended to keep its responses accurate and helpful.

A therapist gives the AI prompts to steer its conversations, and the AI in turn shares notes from those conversations with the therapist, informing the patient’s subsequent sessions.
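
The article doesn’t say how Limbic’s system is actually built. As a purely hypothetical illustration of a therapist-in-the-loop pipeline of the kind described, here is a minimal Python sketch; every class, function and threshold is invented, and the keyword-based risk check and templated reply are stand-ins for real statistical and generative models:

```python
# Hypothetical sketch only -- not Limbic's code. A second, independent
# "statistical model" (here a keyword stub) screens each exchange before
# the generative model (here a template) is allowed to reply.
from dataclasses import dataclass, field


@dataclass
class SessionNotes:
    """Conversation log the bot shares with the human therapist."""
    entries: list = field(default_factory=list)

    def log(self, role: str, text: str) -> None:
        self.entries.append((role, text))


def risk_score(message: str) -> float:
    """Stand-in for a trained risk classifier; a real system would not
    rely on keywords."""
    red_flags = ("hopeless", "hurt myself", "no way out")
    return 1.0 if any(flag in message.lower() for flag in red_flags) else 0.1


def generate_reply(message: str, therapist_guidance: str) -> str:
    """Stand-in for the generative model, steered by the therapist's
    standing guidance for this patient."""
    return f"({therapist_guidance}) Tell me more about how that felt."


def handle_message(message: str, guidance: str, notes: SessionNotes) -> str:
    notes.log("patient", message)
    if risk_score(message) > 0.9:
        # Assumed escalation rule: high-risk messages go to a human.
        reply = "I'm connecting you with your therapist right away."
    else:
        reply = generate_reply(message, guidance)
    notes.log("bot", reply)
    return reply


if __name__ == "__main__":
    notes = SessionNotes()
    print(handle_message("I felt anxious at work today.",
                         "focus on reframing negative thoughts", notes))
    # notes.entries is what the therapist reviews before the next session.
```

The escalation rule is also an assumption, loosely echoing the suicide-risk screening described in the next paragraph; the point of the sketch is simply that the generative model never answers unscreened.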

Another company, Talkspace, uses AI to flag people at risk of suicide by analyzing their conversations with therapists.

AI products can draft and summarize patient charts, analyze them and propose a diagnosis.

Most are aimed at lightening the load on overburdened physicians.

Safety and innovation

AI systems that adapt, or “learn,” as they take in more data can become more or less useful over time, students of the technology say, which can change how safe and effective they are.

Assessing the impact of those changes is hard because companies guard the algorithms at the core of their products. That “black box” protects intellectual property, but it keeps regulators and outside researchers from understanding how the systems work.
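
One common response to that kind of drift, sketched here as an illustration rather than anything regulators have prescribed, is to keep scoring a deployed model on fresh labeled cases and flag it for review when performance slips below its validation baseline. The numbers below are invented for the example:

```python
# Illustrative post-deployment drift check. The tolerance and the
# accuracy figures are made up for the example; a real audit program
# would set thresholds clinically.
def drift_alert(baseline_accuracy: float,
                recent_correct: int,
                recent_total: int,
                tolerance: float = 0.05) -> bool:
    """Return True if recent accuracy fell more than `tolerance`
    below the validated baseline, signaling the model needs review."""
    if recent_total == 0:
        return False  # no new labeled cases to judge by
    recent_accuracy = recent_correct / recent_total
    return (baseline_accuracy - recent_accuracy) > tolerance


if __name__ == "__main__":
    # Cleared at 92% accuracy; got 830 of the last 1,000 cases right.
    print(drift_alert(0.92, 830, 1000))  # True -> flag for re-audit
```

The recurring audits and certification Tazbaz foresees, described below, are essentially this idea applied at the regulatory level.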

A proposed policy aims to increase transparency around how AI systems are used in health care, but it does not address the systems’ safety or effectiveness.

The agency’s central challenge with AI, Tazbaz said, is how to regulate it without stifling innovation, while ensuring that innovation happens within safety parameters.

No existing regulations squarely address the technology, so the FDA is devising its own system for doing so.

Tazbaz said he expects the FDA to adopt a process of regularly auditing and certifying AI products, with the goal of ensuring they remain safe as the systems evolve.

The FDA has approved roughly 520 devices that use AI, mostly in radiology, where the technology has shown promise in reading X-rays. At a meeting in August, FDA Commissioner Robert Califf said he was satisfied with the agency’s handling of predictive AI systems, which use data to make informed forecasts.

But many products currently in development are using newer, more advanced technology capable of responding to human queries — something Califf called a “sort of scary area” of regulation. Those present even more challenges to regulators, experts said.

There is also a countervailing risk: overly burdensome regulation could stifle advances that would improve patient care, lower costs and expand access.

Tazbaz said the agency is proceeding carefully so as not to choke off the new technology: engaging industry leaders, hearing their concerns and signaling the agency’s plans.

The World Health Organization’s approach looks much like Washington’s: caution, guidance and dialogue. But because the WHO has no power to enforce regulations, it has urged member governments to step up their own oversight.

AI models, the organization said, are being deployed rapidly, sometimes without a full understanding of how they may perform.

Whenever the FDA moves to tighten the rules, though, it can expect pushback.

Some industry leaders have suggested that doctors themselves can serve as a kind of regulator: knowledgeable professionals who make the final call, AI co-pilot or not.

Others argue that the existing approval process is already too complex and demanding for rapid innovation to thrive.

Brad Thompson, a lawyer at Epstein Becker Green who advises companies on using AI in health care, said that walking clients through the laws and regulations governing the technology feels like “killing” it.

“Would I personally feel secure?”

Previously, Thompson would have brought his concerns to Congress.

But lawmakers are unsure how to tackle AI, and the legislative process has stalled while Republicans choose a new House speaker; Congress must also agree on funding the government for fiscal 2024.

That avenue isn’t available now, or for the foreseeable future, Thompson said: “It saddens me deeply.”

Schumer recently convened a forum on AI to help determine what Congress should do about the technology across industries. The House has its own AI task force, but leadership turmoil and the fight over government funding may limit what it can accomplish.

Rep. Greg Murphy (R-N.C.), co-chair of the Doctors Caucus, said he would rather see state governments take the lead in regulating the technology.

Sen. Bill Cassidy of Louisiana, the top Republican on the committee that oversees health policy, believes Congress should do more, while avoiding anything that would impede innovation.

Cassidy’s proposal addresses many of the concerns raised by researchers, regulators and industry leaders, but he has yet to introduce a bill to enact it.

Given the uncertainty, some of the biggest players in health technology are deliberately pursuing AI projects that are lower-risk but still potentially high-reward, said Garrett Adams of electronic health record company Epic: note-taking, summarizing and other tasks that make the AI more a secretary than a co-pilot for doctors.

But those technologies could lay the groundwork for bolder moves. Several companies are pressing ahead, some even claiming their products will eventually supplant doctors.

Ben-Joseph suggested a 10-to-20-year timeline for making some of Regard’s technology autonomous: fully automated, with no medical professional involved.

Count Tazbaz among the doubters.

The medical field, he said, should weigh the risks carefully when using AI to diagnose patients. Whether he would personally feel secure, he added, depends on how the technology is used.

Source: politico.com