Opinion: mishandling data – why “best and worst” aged care star ratings lists do more harm than good
Published on 30 September 2025

As a tool for discerning best fit, the Star Ratings have long drawn criticism over how much genuine insight they can offer potential residents and their loved ones. Industry insiders have consistently warned that, far from being the basis on which seniors should gauge which provider would suit them best, the Star Ratings have the potential to severely mislead.
And yet every quarter, when the government releases aged care performance data, certain media outlets, most recently The Daily Telegraph and Herald Sun, roll out the same tired headline: “The Best and Worst Aged Care Homes in Queensland” (and in turn NSW, Victoria, and so on). More concerning than a tired headline is the unsubstantiated leap of tying Star Rating scores to either “best” or “worst”.
Star Ratings cannot comprehensively point to “best” or “worst”
Connecting a rating to the extremes of best and worst is a formula, and certainly a very clickable one. With this type of reporting there’s a ‘momentous’ statewide ‘reveal’. But what is clickable does not serve the interests of seniors and their loved ones.
The problem at the heart of these articles is that they don’t tell families what they appear to be telling them. Star Ratings, by their structure, design and underlying data, cannot sustain the mantle of assessing best and worst in aged care, and using Star Rating data to do so is misleading.
Articles that frame Star Ratings as clear and comprehensive markers of provider performance are potentially very harmful. Neither the ratings nor the articles built on them can provide a league table of aged care performance. They are not a definitive list of where the “best” care happens or where the “worst” lapses occur; they are a distortion of what the government’s data was ever designed to show.
What the government data actually provides
The Star Ratings system draws on four streams of data, which are combined into one overall rating (see the sketch after this list):
- Compliance actions (updated frequently, sometimes daily).
- Staffing levels (updated quarterly).
- Quality indicators (updated quarterly).
- Residents’ Experience Survey (conducted once a year; the published figures change only when fresh survey results are processed).
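For a sense of how streams with such different cadences feed one headline number, here is a minimal sketch of a weighted combination. The weights and sub-ratings below are illustrative assumptions only, not the department’s published methodology.

```python
# Illustrative only: hypothetical weights and sub-ratings, not the
# official Star Ratings methodology.
streams = {
    # stream name: (assumed weight, example sub-rating out of 5)
    "compliance":           (0.30, 4.0),
    "staffing":             (0.25, 3.5),
    "quality_indicators":   (0.15, 4.5),
    "residents_experience": (0.30, 2.0),  # annual survey, small sample
}

overall = sum(weight * score for weight, score in streams.values())
print(f"Overall (hypothetical): {overall:.2f} stars")  # 3.35 stars
# A low survey sub-rating drags the overall figure down even when the
# quarterly-updated streams score well.
```

The point is simply that a stream refreshed once a year, from a small sample, can move the headline figure as much as the streams updated every quarter.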
Disconnected from the survey
It is within the Residents’ Experience Survey that the misunderstanding in quarterly media reporting builds. When weighing how much to read into its results, it is critical to know when the survey is conducted and what share of residents it reaches. Timeliness matters: this survey is conducted only once a year.
Most important of all, these face-to-face interviews are conducted with at most 20% of residents in each home, and it is the findings of these interviews that fuel the “best and worst” headlines. Painting 100% of a provider with a 20% brush is neither mathematically nor ethically sound.
Sample size – 20% or less
In many homes, the survey sample is minuscule. Fifteen or 20 residents may be asked whether they would recommend their home. From those conversations, results are extrapolated and incorporated into Star Ratings. Even so, the government itself never labels homes “best” or “worst”.
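To see how little certainty a sample of that size can carry, consider a standard margin-of-error calculation for a proportion. This is a minimal sketch: the 15 respondents and the 60% “would recommend” figure are hypothetical, and it assumes a simple random sample with the usual normal approximation.

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical home: 15 residents interviewed, 60% would recommend it.
n, p_hat = 15, 0.60
moe = margin_of_error(p_hat, n)
print(f"Estimate: {p_hat:.0%} ± {moe:.0%}")  # roughly 60% ± 25 points
```

An estimate of 60% with a margin of roughly ±25 percentage points means the true figure could plausibly sit anywhere from the mid-30s to the mid-80s: nowhere near precise enough to crown a “best” or condemn a “worst”.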
From a government perspective the messaging is clear: the data was never meant to support, or be interpreted as, a competition between facilities.
Why the headlines mislead
When journalists package this into state-by-state “reveal” stories, they create a narrative of winners and losers based on data that cannot sustain the verdicts. That framing ignores the reality:
- Resident voices are sampled annually, not quarterly. Presenting these results as fresh each quarter misrepresents their timeliness.
- Sample sizes are too small to draw sweeping conclusions. A home in Queensland was labelled the “worst” based on 15 interviews, despite hundreds of other resident survey results showing satisfaction above 90%.
- Vulnerability shapes the data. The Royal Commission confirmed that residents often hesitate to complain, meaning positive survey results can mask serious issues.
These headlines stray from unhelpful and unsubstantiated into grossly misleading, offering false certainty at a time when families most need context and nuance.
Clicks over clarity
It is worth being honest about what this reporting is: these stories aren’t about helping families navigate care; they’re about driving traffic and hitting viewership KPIs.
“Best and worst” is irresistible clickbait to bring people in. These articles give the impression of transparency while, at their core, stripping the data of its purpose (to inform quality improvement and accountability) and reducing it to a scoreboard no one ever designed.
For providers and leaders, this inappropriate handling of data has consequences. It undermines trust and distorts public perception. And it diminishes the real work being done to lift standards under reform.
A better way forward
In reporting, families need guidance that is accurate and comprehensive, not sensational. Coverage should explain:
- How Star Ratings are calculated, so readers can weigh the conclusions drawn from them.
- Why a single survey question doesn’t define a home.
- What to look for when visiting a service in person.
Leaders in aged care need reporting that deepens understanding rather than oversimplifies it.
Until then, we’ll keep seeing these quarterly “best and worst” lists rolled out across the states. As experts and leaders who want the best for residents and the sector, we must make a collective effort to hold reporting to account: these lists don’t tell the whole story. And if we want to rebuild trust in aged care, telling the real story matters more than ever.