Dr. Sina Bari, a practicing surgeon and AI healthcare leader at the data company Immerse, has seen firsthand how ChatGPT can lead patients astray with incorrect medical advice.
“I recently had a patient come in, and when I was recommending a drug, he had a dialog printed from ChatGPT saying that this drug has a 45% chance of pulmonary embolism,” Dr. Bari told TechCrunch.
When Dr. Bari investigated further, he found that the statistic came from a paper on the drug’s effect on a specific subgroup of patients with tuberculosis, which did not apply to his patient.
However, when OpenAI announced its dedicated ChatGPT Health chatbot last week, Dr. Bari felt more excitement than concern.
ChatGPT Health, which will launch in the coming weeks, lets users talk to the chatbot about health in a more private setting, where messages will not be used as training data for the underlying AI model.
“I feel really good about it,” said Dr. Bari. “This is already happening, so formalizing it so that we can protect patient information and keep some safeguards around it (…) will make it a more powerful tool for patients to use.”
Users can get more personalized guidance from ChatGPT Health by uploading their medical records and syncing them with apps like Apple Health and MyFitnessPal. For the security-minded, this raises an immediate red flag.
“All of a sudden there is medical data being transferred from HIPAA-compliant organizations to non-HIPAA-compliant vendors,” Itai Schwartz, co-founder of data loss prevention company MIND, told TechCrunch. “So I wonder how the regulators are going to approach this.”
But the way some industry professionals see it, the cat is out of the bag. Instead of Googling cold symptoms, people are now talking to AI chatbots – 230 million people talk to ChatGPT about health every week.
“This is one of the biggest use cases of ChatGPT,” Andrew Brackin, a partner at Gradient who invests in healthcare technology, told TechCrunch. “So it makes a lot of sense that they want to create a more personalized, secure, and optimized version of ChatGPT for these health care questions.”
AI chatbots have a persistent problem with hallucination, an especially sensitive issue in health care. According to Vectara’s Factual Consistency Evaluation Model, OpenAI’s GPT-5 is more prone to hallucination than most Google and Anthropic models. But AI companies see the potential to correct inefficiencies in the healthcare space (Anthropic also announced a healthcare product this week).
For Dr. Nigam Shah, professor of medicine at Stanford and chief data scientist for Stanford Health Care, American patients’ inability to access care is a bigger problem than the threat of ChatGPT giving bad advice.
“Nowadays, you go into any health system and you want to see a primary care physician — the wait time is three to six months,” Dr. Shah said. “If your choice is to wait six months for a real doctor, or to talk to someone who isn’t a doctor but can do some things for you, which would you choose?”
Dr. Shah thinks the more obvious route for introducing AI into the health care system is on the provider side, rather than the patient side.
Medical journals have often reported that administrative tasks can consume about half of a primary care physician’s time, reducing the number of patients they can see on any given day. If such work could be automated, doctors could see more patients, possibly reducing the need for people to use tools like ChatGPT Health without input from actual doctors.
Dr. Shah leads a team at Stanford that is developing ChatEHR, software built into the electronic health record (EHR) system that lets doctors interact with patients’ medical records more efficiently and effectively.
“Making electronic medical records more user-friendly means doctors can spend less time searching for the information they need,” said Dr. Sneha Jain, an early tester of ChatEHR, in a Stanford Medicine article. “ChatEHR can help them get that information up front so they can spend time on what’s important — talking to patients and finding out what’s going on.”
Anthropic is also working on AI products for clinics and insurance companies, rather than just the public-facing chatbot Claude. This week, Anthropic announced Claude for Healthcare, explaining how it can be used to cut the time spent on burdensome administrative tasks, such as submitting prior authorization requests to insurance providers.
“Some of you see hundreds, thousands of authorization cases per week,” said Anthropic CPO Mike Krieger in a recent presentation at the J.P. Morgan Healthcare Conference. “So imagine cutting twenty, thirty minutes off each one – that’s a dramatic time savings.”
As AI and medicine become more intertwined, there is an inevitable tension between the two worlds – doctors’ primary incentive is to help patients, while tech companies are ultimately accountable to their shareholders, even if their goals are noble.
“I think that tension is important,” said Dr. Bari. “Patients rely on us to be cynical and conservative to protect them.”

