“ChatGPT said I might be going blind…”
How eye care professionals can handle AI-informed patients
If you work in optometry or ophthalmology, you have likely heard some version of that sentence already.
Patients are increasingly arriving with AI-generated summaries from tools like ChatGPT, Gemini or Copilot. Some are better prepared. Others are more anxious. Most sit somewhere in between: informed, but not clinically grounded.
With tens of millions of health-related AI queries every day, this shift can no longer be treated as temporary.
The question is not whether patients will use large language models. The question is how clinics and practices choose to respond.
AI-informed patients can improve efficiency
Handled poorly, AI-informed patients extend consult time. Handled strategically, they can shorten it.
When patients already understand the difference between dry and wet AMD, what IOP means, or why corneal thickness matters, the conversation can move faster toward individualized decision-making.
The key variable is alignment, not information volume.
The issue is misplaced certainty
Large language models are impressive. They summarize well. They structure information clearly. They sound decisive.
But they lack:
- full patient context
- access to specific device data or implant history
- longitudinal clinical records
- nuanced risk assessment
- an understanding of co-morbidities and lifestyle factors
Most importantly, they often fail to communicate how uncertain their answers are when that context is missing.
When 70% of the information is correct and 30% is subtly wrong, the 30% can dominate the consultation. Not because it is common, but because it is delivered confidently.
That is where friction begins.
Step 1: Acknowledge, don’t compete
When a patient says, “ChatGPT told me…”, the instinct can be to correct immediately.
A more effective first response is simple:
“I’m glad you looked into it. Let’s go through it together.”
That sentence validates the patient’s effort, reduces defensiveness and positions the clinician as a partner rather than an opponent.
Patients are rarely trying to challenge expertise. They are trying to reduce uncertainty before walking into the room.
Empathy first, clarification second.
Step 2: Reframe the role of AI
Many patients assume AI is “smarter” because it sounds confident. A helpful reframing is this:
General AI provides broad information. Clinicians apply that information to a specific human being.
For example:
“AI gives general answers. My job is to apply those answers to your OCT, your pressure pattern, your corneal thickness and your medical history.”
This restores clinical context without dismissing technology.
Interestingly, the same discipline applies when clinicians themselves use AI-based software. Without guardrails and validation, the risk of misplaced certainty remains.
AI assists; clinical judgment decides.
Step 3: Consider directing patients to validated tools
One emerging solution is the development of domain-specific, clinically vetted AI models with medical guardrails.
These systems are trained on peer-reviewed data, integrate device and implant information, communicate uncertainty more transparently and reduce hallucinations.
If patients are going to seek AI guidance anyway, directing them toward validated platforms can improve both safety and consultation efficiency.
The goal is not control. The goal is consistency, transparency and quality.
A practical framework for clinics
This shift can be operationalized with a simple structure:
Input: The patient arrives with AI-generated information.
Process:
- Acknowledge
- Clarify inaccuracies
- Personalize the information
- Add clinical nuance
- Confirm understanding
Output: Stronger trust, improved efficiency and better-informed decisions.
This is systems thinking applied to patient communication.
The deeper opportunity
Information is abundant. Context is scarce. Clinical judgment remains essential. AI is not reducing the value of optometrists and ophthalmologists. It is highlighting the difference between information and expertise.
For practices, industry partners and technology providers, the responsibility is clear. Contribute to validated AI tools with proper guardrails. Support clinicians with systems that enhance, rather than replace, professional judgment.
Patients walking into clinics today are different from those five years ago. They are informed. They are searching. They are influenced by confident algorithms.
The practices that adapt thoughtfully will not lose authority. They will gain trust.

