The Australian Government is increasingly turning to computer algorithms to make important decisions in a move that experts say comes with a number of ethical and legal issues.
It was revealed this week by Fairfax Media that the government had quietly rolled out the use of a computer algorithm in its 13 immigration detention centres to determine the security risks of detainees, replacing decision-making by humans.
The department’s Security Risk Assessment Tool (SRAT) is fed data on an individual’s behaviour prior to and during detention, signs of violent or aggressive behaviour, and known associates to assess whether they “pose an unacceptable risk to the community”.
“It also considers each detainee’s individual circumstances, including age and health. As a result of these and other changes there has been a significant decrease in incidents in detention including assault and self-harm,” an Immigration Department spokesperson said.
The algorithm has been in use since late last year, but its existence was only publicly revealed by former Australian Human Rights Commission president Gillian Triggs last week.
“The use of an algorithm to replace professional judgements – I thought this can’t be true, I must be back in 1984,” Ms Triggs said in a speech last week.
“They pump in statistical details and out comes a response that dictates whether they are in a high-security area or whether they are allowed certain privileges within the detention centre.”
The use of big data and algorithms to make important decisions without any human oversight is “problematic” both in terms of efficacy and ethics, UTS Business School associate professor Bronwen Dalton said.
“One must acknowledge that history is not destiny. The algorithm fails to take into account any genuine rehabilitation the person in question may have achieved, any changes in the state of mental health, or those that may not have a history but could be assessed as likely to be…