Google is powering its health care business with artificial intelligence. The government isn’t sure what to do about it.
Google wants to turn your smartphone into a “doctor in your pocket,” powered by its advanced artificial intelligence.
First, though, the tech giant must convince skeptical lawmakers and the Biden administration that its health care AI threatens neither patient privacy and safety nor its smaller rivals.
Google has assembled a potent lobbying team to influence the rules governing AI just as regulators start writing them. But members of Congress say they’re concerned that the company is using its advanced AI in health care before government has had a chance to draw up guardrails. Competitors worry Google is moving to corner the market. Both fear what could happen to patient privacy given Google’s history of vacuuming personal data.
Sen. Mark Warner (D-Va.) said the tools have the potential to save many lives, but also to do the opposite: harming patients and their data, perpetuating human biases, and creating new challenges for providers navigating an unclear framework of clinical and legal standards.
Google’s AI analyzes medical records, academic articles, imaging, and clinical guidelines to help physicians diagnose illnesses and choose treatments. The company is already offering the tools to hospitals and has partnered with the Mayo Clinic, and it envisions expanding into consumer-facing applications.
Warner raised his concerns in a letter to Google CEO Sundar Pichai, citing a lack of thorough evaluation in the hospitals using the company’s AI.
Mark Isakowitz, Google’s head of policy and government affairs for North America and a former chief of staff to Ohio Republican Sen. Rob Portman, said the technology is not focused on personal health data and is used only in limited ways, and that health systems control how it is deployed and can track what it does.
President Joe Biden wants his agencies to figure out how to ensure that AI in health care helps patients at least as much as human clinicians do. But regulations could take months or even years to arrive. The Food and Drug Administration approves AI-powered medical devices, but no rules yet exist for more sophisticated software tools.
Google is working with Mayo Clinic researchers to test its AI as an assistant. HCA Healthcare, which operates more than 2,000 hospitals and other care facilities in the U.S. and U.K., is using the AI to draft clinical notes for its clinicians. Bayer Pharmaceuticals is piloting the technology in clinical trials and using it to improve communication, and Meditech, an electronic medical records company, is using it to generate concise summaries of patients’ medical histories.
As legislators such as Warner and regulatory bodies like the Food and Drug Administration consider their next steps, Google has hired former government health care regulators who are in a prime position to advocate on behalf of the company.
The “doctor in your pocket” line comes from Karen DeSalvo, who oversaw health information technology regulation under President Barack Obama and is now Google’s chief health officer.
Startups worry that Google and other large technology companies will push for rules that disadvantage smaller competitors.
Smaller companies are trying to head off the reporting requirements Biden directed in his October executive order on AI, arguing they would be manageable for giants like Google or Microsoft but burdensome for less well-funded rivals.
Punit Soni, CEO of clinical note-taking company Suki AI, worries about the influence major tech firms like Google wield in health care, saying their moves will ultimately limit smaller companies’ independence and ability to innovate.
A lesson learned
In 2019, Google learned a hard lesson about working with Washington: its first attempt to apply its “Big Data” expertise to analyzing millions of patient records, through a partnership with the St. Louis-based hospital chain Ascension, sparked a Department of Health and Human Services investigation into privacy concerns.
This time around, Google seems to be proactively addressing potential issues with regulators.
Google’s parent company has hired a number of former Food and Drug Administration officials, including Bakul Patel, the agency’s former chief digital health officer, who played a key role in shaping its approach to AI. The FDA, now led by Robert Califf, who previously worked for Google’s parent company, is expected to play a prominent role in the Biden administration’s rulemaking.
Google is also part of the Coalition for Health AI, a group of health organizations and technology companies working with national health bodies to establish AI standards. The group recently released its first blueprint for using artificial intelligence in health care.
It also aided in the development of the National Academy of Medicine’s AI Code of Conduct for Health Care.
Following President Biden’s executive order on artificial intelligence, policymakers are working to familiarize themselves with the technology. Biden has requested reports and recommendations with deadlines, and Google is offering assistance.
In mid-November, the company released an AI policy agenda calling for laws that promote innovation and for infrastructure to support AI development and deployment.
The company sees itself as a partner to the government. Patel, now Google’s senior director of global digital health strategy and regulatory, said the company spends considerable effort educating officials about how its technology works so that standards can be set. Google can’t tell regulators what to do, he said, but it can give them knowledge and information.
The FDA has said it intends to regulate some of these tools but has not yet done so. In the meantime, it has released a plan for governing AI, along with guidance on which algorithms it covers and what details should go into applications for marketing clearance.
Biden’s executive order directs agencies to oversee and gather information about advanced AI models that could affect national security or public health, but it mostly asks them to conduct research to better understand how to safeguard the technology. That is a first step toward regulations that could take months or even years to arrive.
Congress, meanwhile, is moving slowly. The Senate Health, Education, Labor and Pensions Committee and the House Energy and Commerce Committee have held hearings on the use of AI in health care and how it might be regulated.
Members asked questions concerning AI’s impact on everything from personal data collection to its use in the development of bioweapons. But so far, no legislation is forthcoming.
Senators led by John Thune (R-S.D.) and Amy Klobuchar (D-Minn.) have introduced the Artificial Intelligence Research, Innovation, and Accountability Act of 2023, but the bill is not specific to health care and has not advanced since its introduction last month.
The FDA expects the number of AI-powered medical devices to grow by more than 30 percent this year compared with 2022.
Google keeps rolling out AI initiatives and products. It licenses algorithms that can detect breast cancer, lung cancer, and gene mutations, and it continues to test AI for diagnosing diabetic retinopathy and spotting abnormalities in ultrasound images.
Google recently launched Med-PaLM 2, a chatbot that can answer medical questions and pass medical licensing exam questions, deepening its partnerships with health care companies. One is with Bayer, which works with Google on AI for clinical trials and early drug discovery. Before testing Med-PaLM 2 at the Mayo Clinic, Google and the clinic had already collaborated on AI for planning radiation treatment for head and neck cancers. HCA Healthcare, another partner, has used Google Cloud since 2021.
Google’s vision for health care isn’t limited to health care companies. It has ambitions to play a role in consumer health. It already owns Fitbit, a wearable that collects vital signs and other fitness metrics, and it has tested consumer tools like DermAssist, which aims to identify skin conditions and is classified as a low-risk medical device in Europe.
DeSalvo sees the smartphone as a crucial instrument for advancing medicine. A simple, affordable device in the hands of people around the world could bring about significant change, she said, adding that she is excited about its possibilities as a platform for people and consumers.
Privacy and market power
Regulators, legal professionals, and startups are worried that AI will enter the health care industry before lawmakers fully understand its implications.
Mason Marks, a senior fellow at Harvard Law School’s Petrie-Flom Center, said the problem is not only the lack of rules for new AI; existing laws meant to protect patients have not kept pace with the technology.
Marks worries that large language models could undermine HIPAA, the law meant to safeguard patient privacy, because HIPAA allows health systems and their vendors to use de-identified patient data. Once certain identifiers are removed, he said, the data can be used however a company sees fit. But researchers warn that de-identified data can be linked back to individuals when AI combines it with other datasets.
Even when such practices are legal, Marks said, ethical concerns remain. He points to Crisis Text Line, a text-based mental health hotline for teenagers that was found to be selling de-identified data for marketing purposes without users’ knowledge or consent. That may not have broken the law, but Marks questions its ethics, particularly when companies use such data to improve AI systems and turn a profit.
Access to that data could also raise antitrust concerns, Marks said: companies like Google that invest heavily in AI for hospitals and health systems could gain a significant competitive edge, or even a monopoly, through access to the data those systems generate.
Soni from Suki AI shares similar concerns regarding competition and privacy, but approaches them from a different angle.
He worries that regulators are focused on writing rules that create compliance burdens rather than on guaranteeing privacy protections or addressing bias, which could leave smaller innovators struggling to compete with large, wealthy companies like Google.
“I believe we have placed a heavy emphasis on reporting rather than adequately addressing compliance,” he stated. “I would prefer that we dedicate some time to determining the AI equivalent of HIPAA.”
Source: politico.com