Australia’s aged care algorithm is under fire. At last, someone’s listening

Published on 17 April 2026

The way Australians are assessed for home-based aged care funding is being investigated by the Commonwealth Ombudsman.

Critics say assessment for funding under the Support at Home program is flawed, leaving some older people unable to access the level of care they need to live safely at home.

Complaints about the process are increasing significantly. Even an expert who helped design the system is unhappy.

Here is why the Commonwealth should reconsider its approach.

What is the key issue?

The new Support at Home program was introduced in 2025. One of its aims is to support more Australians to remain at home rather than move into residential aged care.

When an older person wants to join the program, they are assessed in an interview using a structured digital assessment known as the Integrated Assessment Tool. This tool assesses the support they need, including physical, cognitive and psychosocial needs. It also assesses urgency and the level of assistance required.

An algorithm then analyses the answers and determines Support at Home funding levels.

To be useful, assessments need to predict the actual service levels required to achieve high-quality outcomes for older people with different levels of need.

In developing assessment tools, the gold standard is to first conduct a large number of assessments to determine what kind of care older people need, and at what level. The next stage is to assess whether the services actually provided produce high-quality outcomes for people with different levels of need.

However, there is no publicly available evidence this has been done.

Instead, a second-best option was adopted. Experts assigned scores to estimate what level of support someone would need based on their answers.

But there is room for expert disagreement, even when well-developed tools are used.

The Integrated Assessment Tool includes 11 separate validated tools, each with an inherent error rate. These error rates compound when combined.

Worse, given there are no studies examining the extent to which integrated assessments predict actual services and outcomes, it is difficult to determine how accurate the algorithm is. A lack of transparency means it is effectively a black box, which is why the Ombudsman’s inquiry is welcome.

This is particularly concerning because funding determined by the algorithm may be systematically lower than funding determined by experts. This means older people may have their cognitive, safety and complex care needs underestimated.

How about human oversight?

Despite these limitations, and against expert advice, the Commonwealth has explicitly removed the power to manually override the algorithm’s allocation of support levels. The intention is for the algorithm to deliver consistent results for thousands of older people.

However, this approach has several serious potential consequences.

The Support at Home program has eight levels of support, ranging from A$10,731 a year for Level 1, the most basic support, to $78,106 a year for Level 8, the highest level of support.

If the algorithm allocates one level higher or lower than what a person actually needs, this can mean a difference of between $5,300 and $20,000 a year, depending on the level.

Appeals are increasing

If an older person or their family wants to question the funding allocation, they can appeal. However, they often do not know the specific reasoning behind the scoring that led to their allocation. The appeals process can also be cumbersome and stressful.

Some 800 older people have requested a review of their assessment since the introduction of the new system.

The Older Persons Advocacy Network says requests for information and advocacy have increased by 50% over a three-month period.

One of the system’s designers, Lynda Henderson, said she felt “fury” that the tool she helped design has been turned into a prescriptive algorithm.

What needs to happen next?

The Robodebt Royal Commission warned government agencies that automated systems must ensure transparency, fairness and human oversight.

However, this has not occurred in the assessment of individuals’ circumstances for home-based aged care funding.

The best approach is to use the algorithm as a guide when making individual decisions about older people’s support needs, and to allow assessors to override it when circumstances warrant.

System-level data should then be used to refine the algorithm and provide guidance to assessors as the system matures.

Tags:
aged care
aged care sector
compliance
technology
aged care reform
AI
assessment
algorithms
assessment tool