How ChatGPT Is Quietly Reshaping the Patient Journey

By Julie Crimmins | February 2, 2026

What healthcare communicators need to know as AI becomes the first stop for patient information

AI is rapidly becoming the first stop in the patient journey, with millions of Americans using ChatGPT to interpret symptoms, prepare for appointments, and navigate a complex healthcare system. This shift is changing expectations, increasing misinformation risks, and reshaping reputation dynamics for health systems. Communications teams must now optimize content for AI discoverability, accuracy, and trust.

How Often Are Patients Using AI for Health Questions?

Unprecedented Usage: ChatGPT has rapidly become a go-to ally for patients seeking healthcare information. Recent OpenAI research shows that more than 40 million people ask ChatGPT a health-related question each day. Health questions now account for more than 5% of all prompts globally – a notable share given the platform’s scale and breadth of use. Among ChatGPT’s 800+ million regular users, OpenAI reports that roughly one in four submits at least one health-related query a week. [Source: OpenAI] This shift matters because patients are now forming early impressions about their health long before they reach a provider.

Why Patients Are Turning to AI: Patients are drawn to AI tools like ChatGPT because they offer immediate, plain-language guidance in an increasingly complex healthcare system. According to OpenAI’s survey data, Americans who use AI for health most often turn to these tools to:

• Check or explore symptoms
• Research conditions or treatment options
• Translate medical jargon, including insurance, billing and coverage challenges
• Prepare questions ahead of a doctor’s visit

This pattern shows that patients are turning to AI to fill gaps in clarity that the healthcare system hasn’t addressed.


WHAT PATIENTS ARE TELLING US ABOUT AI IN HEALTHCARE


“I’m not looking for a diagnosis; I’m looking for clarity.”
“It helps me turn medical jargon into something I can actually understand.”
“I use AI to help me organize my thoughts before an appointment.”

“At midnight, it’s either this or spiraling.”
“It doesn’t judge me for asking what might be a dumb question.”
“Half of my confusion isn’t about health – it’s about the system!”

Patients’ Trust in AI Is Growing – but Cautious and Conditional: Public research paints a nuanced picture of patient trust in AI-generated health information. Many patients find AI tools useful, but far fewer view them as fully reliable or authoritative. Many users take a “trust, but verify” approach, using AI as a starting point, then cross-checking information or consulting a clinician. [Sources: Annenberg Public Policy Center, KFF, Pew Research Center] This matters because patients may act on information they only partially trust, which puts more pressure on providers to clarify or correct what AI has already told them.

How Should Healthcare Communicators Respond to Growing Use of AI?

As AI tools increasingly shape how patients, providers and other stakeholders form opinions, communications teams have a critical role to play. LLMs are not just information tools – they are reputation-shaping intermediaries, influencing perception long before a person schedules an appointment or considers employment.

Auditing Reputation through an AI Lens: Healthcare organizations increasingly need to understand how they appear when audiences ask AI tools questions, such as:

• Is this a good place to receive healthcare?
• What can you tell me about this specialist and their track record on safety and quality?
• Should I trust this health system?
• What is it like to work for this organization?

Unlike traditional monitoring or SEO audits, AI reputation audits assess how LLMs summarize, prioritize, and frame information about an organization. These responses often synthesize media coverage, third-party rankings, patient reviews, and institutional content – sometimes surfacing outdated, incomplete, or decontextualized narratives.

Recommendation: Health systems should proactively audit how their organization is represented in AI-generated responses. Are LLMs citing your content? Are they pulling accurate, up-to-date information? If not, gaps in messaging can quickly become reputation risks. The goal is simple: ensure that when AI answers, it reflects your organization’s strengths accurately and consistently.
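For teams that want to make this audit repeatable, the idea can be sketched in a few lines of code. This is an illustrative sketch only: the audit questions, organization name, domains, and “current facts” below are hypothetical placeholders, and the step that actually queries each LLM is deliberately left out, since tooling varies by vendor.

```python
# Illustrative AI reputation audit helpers. The prompts, organization name,
# domains, and facts are hypothetical; plug in your own, and send each
# prompt to the LLMs you want to audit using whatever client you use.

AUDIT_PROMPTS = [
    "Is {org} a good place to receive healthcare?",
    "Should I trust {org}?",
    "What is it like to work at {org}?",
]

def build_prompts(org: str) -> list[str]:
    """Fill the audit questions in for a specific organization."""
    return [p.format(org=org) for p in AUDIT_PROMPTS]

def audit_response(response: str, owned_domains: list[str],
                   current_facts: list[str]) -> dict:
    """Score one AI answer: does it cite your content, and is it current?"""
    text = response.lower()
    return {
        # Does the answer point readers to content you control?
        "cites_owned_content": any(d.lower() in text for d in owned_domains),
        # Which of your up-to-date talking points actually surfaced?
        "reflects_current_facts": [f for f in current_facts
                                   if f.lower() in text],
    }

# Example: scoring a hypothetical AI answer about a hypothetical system.
answer = ("Example Health is a large regional system; see examplehealth.org "
          "for its 2025 quality ratings.")
report = audit_response(answer, ["examplehealth.org"],
                        ["2025 quality ratings", "new cardiology wing"])
```

Run quarterly against several LLMs, the resulting reports make messaging gaps concrete: any fact that consistently fails to surface is a candidate for the content work described below.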

Feeding AI Intentionally: Healthcare organizations now need an AI-optimized content strategy – one that structures expertise so LLMs can accurately cite, summarize and elevate it. This requires clear metadata, authoritative language and regular updates to remain visible to AI tools.

LLMs prioritize structured, authoritative content. If your organization’s expertise isn’t optimized for AI citation, competitors will own the narrative. This is the new frontier of thought leadership: not just ranking on Google but being the trusted source AI tools reference. Rather than reacting to AI outputs, organizations can intentionally shape them by developing a content and media pipeline focused on the questions LLMs are most likely to answer, including information on:

• Common patient concerns and misconceptions
• High-risk or high-interest service lines
• Organizational values, quality indicators, and access issues
• Workforce culture, leadership credibility, and innovation

Recommendation: Health systems should create an AI-ready content hub that addresses high-interest topics such as symptom guidance, treatment options and insurance FAQs. Content should be structured for clarity and authority, using plain language, strong metadata and consistent formatting to improve AI discoverability. Regular updates are essential to keep pace with emerging health trends and patient questions. This approach positions the organization not just as a care provider, but as a trusted source of truth in an AI-mediated information environment – reinforcing credibility with patients, providers and the broader public alike.
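One concrete form “strong metadata” can take is schema.org structured data, which both search engines and AI crawlers can parse. The sketch below generates FAQPage JSON-LD markup; the questions, answers, and organization name are hypothetical examples, and this is one common approach rather than the only one.

```python
import json

def faq_jsonld(qa_pairs: list[tuple[str, str]]) -> str:
    """Serialize question/answer pairs as schema.org FAQPage JSON-LD."""
    doc = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {
                "@type": "Question",
                "name": q,
                "acceptedAnswer": {"@type": "Answer", "text": a},
            }
            for q, a in qa_pairs
        ],
    }
    return json.dumps(doc, indent=2)

# Hypothetical FAQ content for a hypothetical health system.
markup = faq_jsonld([
    ("Does Example Health offer same-day appointments?",
     "Yes, same-day visits are available at most primary care locations."),
    ("Which insurance plans are accepted?",
     "Most major plans are accepted; call the billing office to confirm."),
])
# Embed `markup` in a <script type="application/ld+json"> tag on the FAQ page.
```

Because the markup is generated rather than hand-edited, the regular updates recommended above become a content refresh plus a re-publish, not a coding task.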

What Does This Mean for Health Systems and Patient-Facing Organizations?

AI as the New “Front Door” – and a New Route to Misinformation

For a growing share of patients, the healthcare journey now starts with an AI query rather than a phone call or message. LLMs increasingly function as an informal first triage step and an ongoing thought partner, shaping how patients interpret symptoms and form expectations before they interact with clinicians. This dynamic isn’t entirely new; patients have long self-diagnosed online. But conversational AI can increase confidence in preliminary conclusions, whether those conclusions are accurate or not.

This creates both opportunity and risk. On one hand, AI-informed patients can arrive better prepared, with organized questions and baseline knowledge that enriches clinical conversations. On the other, large language models can produce responses that sound confident yet are incomplete, outdated or incorrect. The challenge isn’t whether patients use AI (they already do) but whether they can distinguish reliable guidance from misinformation. [Source: Outcomes Rocket]

Implication: First, equip staff to discuss, validate, or correct AI-sourced information constructively and without judgment. Patients are far better served when they feel comfortable sharing what they’ve learned rather than hiding it. Second, proactively address misinformation through patient health-literacy efforts. Some organizations are beginning to publish patient-facing guidance – such as “Dos and Don’ts of Using Chatbots for Medical Advice” – to encourage safer use. Internally, teams should be equipped with up-to-date insight on common AI-driven myths or errors so they can respond quickly and consistently. Ignoring the issue increases risk; addressing it transparently reduces it. [Source: The New York Times]

After-Hours Care and Access Gaps

One of the clearest usage patterns is timing. OpenAI reports that approximately 70% of health-related ChatGPT conversations in the U.S. occur outside traditional 8 a.m. – 5 p.m. clinic hours. Patients are turning to AI when access to care is limited – late at night, on weekends, or in areas with fewer providers. This creates a dual reality. On one hand, AI can offer reassurance or guidance when no other support is available. On the other, AI may inadvertently delay necessary care or contribute to inappropriate use of emergency services.

Implication: Health systems should assess how well they provide trusted after-hours guidance. This may include clearer online resources, symptom guidance pages, or thoughtfully designed chat tools that help patients determine the appropriate level of care. If organizations don’t fill this gap, AI tools will continue to do so in a black box.

AI is now influencing how patients make sense of their symptoms and what they expect from their care experience. Health systems that treat AI as a communications channel – rather than a background tool – will be in a stronger position to build trust and mitigate confusion. Those that act now will help shape the narrative; those that wait will inherit one.

Reach out to Julie Crimmins and Lucia Peth for more information.