ChatGPT Health sharpens the stakes for AI use in aged care and healthcare
Last updated on 16 January 2026

OpenAI’s announcement of ChatGPT Health marks a significant escalation in how artificial intelligence is positioned within healthcare. Unlike general-purpose AI tools, ChatGPT Health is designed specifically to support health-related interactions, focusing on helping people understand medical information and navigate complex health systems.
On paper, this shift is welcome. Done well, AI could help reduce friction in overstretched systems, support health literacy and improve access to information. But the launch also brings a harder truth into sharp focus: AI is already being used widely across health and aged care settings, often without governance, safeguards or organisational oversight.
And that is where the real risk sits.
Mercy Health, one of Australia’s largest health and aged care providers, says the promise of AI must be matched by ethical restraint.
“AI has real potential to support people to better understand their health and navigate complex systems, particularly where access is limited,” said Dr Paul Jurman, Chief Information and Digital Transformation Officer at Mercy Health.
“But it must always be used ethically, transparently and with strong safeguards around privacy, accuracy and accountability.”
Mercy Health’s position is clear. AI can assist, but it cannot replace human judgment or compassionate care.
“Decisions about patient treatment carry moral weight and responsibility. That must remain the purview of a highly trained, accountable human being,” Dr Jurman said.
That framing matters, because while ChatGPT Health is being launched with guardrails, most AI use in aged care and healthcare is happening without them.
The gap between designed use and real-world practice
In aged care, ChatGPT is already being used by staff to draft progress notes, reword clinical documentation and assist with reporting. This is not the result of a formal rollout or executive endorsement. It is a grassroots response to chronic workforce pressure, documentation burden and language barriers.
The problem is not that staff are trying to work more efficiently. The problem is that they are doing so in environments where AI use is unregulated, undocumented and often invisible to leadership.
Public AI platforms were never designed to handle identifiable health information. Yet sensitive resident and patient data is being entered into them every day, exposing providers to privacy breaches, inaccurate records and regulatory action.
ChatGPT Health does not solve this problem. If anything, it highlights how exposed the sector already is.
Ethics are not optional; they are operational
Mercy Health has publicly committed to the Rome Call for AI Ethics, embedding transparency, inclusion and human responsibility into its AI strategy.
“As the first Australian health and aged care provider to publicly commit to the Rome Call for AI Ethics, Mercy Health has hard-wired transparency, inclusion and human responsibility into its AI strategy, so innovation serves people first, not efficiency alone,” Dr Jurman said.
That level of intent is rare. Across the sector, AI governance often lags well behind frontline behaviour.
In aged care, this creates a convergence of risk. Privacy law, clinical accountability and care standards all rely heavily on accurate documentation. If AI-generated content is wrong, misleading or based on misunderstood prompts, the consequences extend far beyond administrative error.
Documentation becomes evidence. And evidence cuts both ways.
Accountability does not disappear with automation
There is a growing misconception that AI-assisted work somehow dilutes responsibility. Regulators take the opposite view.
Clinicians remain accountable for what is entered into records. Providers remain responsible for how systems are used. Boards remain liable for governance failures.
AI hallucinations, language misinterpretation and overreliance on unverified outputs all undermine the integrity of clinical records. In both health and aged care, this creates exposure under privacy law, professional standards and negligence frameworks.
ChatGPT Health may be designed with healthcare in mind, but it does not remove the need for consent, data minimisation, human review or clear organisational policy.
A moment of reckoning for providers
The launch of ChatGPT Health should not be read as a green light for informal AI use. It should be read as a warning.
AI is no longer peripheral to health and aged care. It is embedded in daily practice, often in ways providers do not fully see or control.
The real question is not whether AI belongs in aged care and healthcare. It is whether organisations are prepared to govern it properly.
Without clear policies, approved tools, staff training and board-level oversight, AI does not become a solution. It becomes another invisible risk layer in an already fragile system.