
Artificial intelligence (AI) company OpenAI on Wednesday announced the launch of ChatGPT Health, a dedicated space where users can have conversations with chatbots about their health.
The dedicated health experience gives users the option to securely connect their medical records and wellness apps like Apple Health, Function, MyFitnessPal, Weight Watchers, AllTrails, Instacart, and Peloton to get customized answers, lab test insights, nutritional advice, personalized meal ideas, and recommended training classes.
This new feature is rolling out to users with ChatGPT Free, Go, Plus, and Pro plans outside the European Economic Area, Switzerland, and the United Kingdom.
“ChatGPT Health is built on strong privacy, security and data management across ChatGPT, with additional layers of protection designed specifically for health, including dedicated encryption and isolation to protect and compartmentalize health-related conversations,” OpenAI said in a statement.

OpenAI said more than 230 million people around the world ask health and wellness-related questions on its platform every week, and stressed that the tool is designed to support medical care, not to replace diagnosis or treatment.
The company also highlighted the various privacy and security features built into the healthcare experience.
- Health operates in a silo with enhanced privacy and its own memory, and uses “dedicated” encryption and isolation to protect sensitive data.
- Conversations in Health are not used to train OpenAI’s underlying models.
- Users attempting to have health-related conversations in ChatGPT are prompted to switch to Health for additional protection.
- Health information and memory are not used to contextualize non-Health chats.
- Conversations outside Health cannot access files, conversations, or memory created within Health.
- Apps can only connect to your health data with your explicit permission, which is required even if you have already connected them to ChatGPT for conversations outside of Health.
- All apps available in Health must meet OpenAI’s privacy and security requirements, such as collecting only the minimum amount of data necessary, and must undergo additional security review to be included in Health.
In addition, OpenAI noted that it used HealthBench to evaluate the models powering the health experience against clinical standards. The company announced the benchmark in May 2025 as a way to better measure the capabilities of its AI systems for health, with a focus on safety, clarity, and care escalation.

“This evaluation-driven approach helps ensure that models perform well on tasks that people actually need assistance with, such as explaining test results in easy-to-use language, preparing questions for appointments, interpreting data from wearables and wellness apps, and summarizing care instructions,” the company added.
OpenAI’s announcement follows a Guardian investigation that found Google’s AI Overviews provided false and misleading health information. OpenAI and Character.AI are also facing several lawsuits alleging that users died by suicide or developed harmful delusions after confiding in their chatbots. A report released by SFGate earlier this week detailed how a 19-year-old died of a drug overdose after relying on ChatGPT for medical advice.
