Australia’s chief scientist is warning AI developers to retain a ‘moral compass’ and not to treat human beings as ‘data points’. That warning comes way too late.
The Sydney Morning Herald today (31 October) carried a report of a speech by the Federal Government’s chief scientist Alan Finkel warning of the dangers of artificial intelligence.
According to the SMH, “He urged software developers on the cusp of artificial intelligence breakthroughs not to lose their ‘moral compass’ amid fears humans will be treated as data points by our largest companies.”
Finkel said: “The idea of treating humans as objects, as data, to be studied and manipulated, rather than as cherished individuals entitled to inherent worth and dignity, stirs our deepest convictions.”
Alas not. Humans are already being studied and manipulated with little regard for worth or dignity: the organisation Finkel works for is doing just that, with Robodebt.
Robodebt’s algorithms compare welfare benefits against earnings data, decide when benefits have been overpaid, and extract money from bank accounts with no regard for the financial circumstances of the individuals concerned.
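The core flaw has been well documented: the scheme averages a person’s annual ATO income across 26 fortnights and treats any gap between that average and what they reported to Centrelink as an overpayment. A minimal sketch of that averaging logic, using hypothetical names and figures rather than the real system’s code, looks something like this:

```python
# Illustrative sketch only: simplified income-averaging logic of the kind
# attributed to Robodebt. Function names and figures are hypothetical,
# not the actual Centrelink system.

FORTNIGHTS_PER_YEAR = 26

def flag_overpayment(annual_ato_income, fortnightly_income_reported):
    """Flag a 'debt' if averaged income exceeds what the person reported,
    ignoring when the income was actually earned."""
    averaged_fortnightly_income = annual_ato_income / FORTNIGHTS_PER_YEAR
    return averaged_fortnightly_income > fortnightly_income_reported

# Someone who earned $26,000 in a six-month contract, then reported $0 income
# while on benefits, is averaged to $1,000 a fortnight all year and flagged
# for a debt they may never have owed.
print(flag_overpayment(26_000, 0))  # True
```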
The system is presently the focus of a second Senate inquiry, a federal court challenge and a potential class action, but is far from being the only system of its kind.
This week the UK’s Guardian newspaper reported that immigrant rights campaigners are mounting a legal challenge to a Home Office algorithm that filters UK visa applications.
The Home Office insists the algorithm is used only to allocate applications to ‘streams’ for human processing and does not ultimately rule on them.
The campaigners are far from convinced. They have launched a gofundme.com campaign, “Deported by algorithm”, to fund their legal challenge.
There’s plenty more. Back in April, the Guardian reported: “Dozens of UK business owners are using artificial intelligence to scrutinise staff behaviour minute-to-minute by harvesting data on who emails whom and when, who accesses and edits files and who meets whom and when.”
The AI will interview you now
Earlier this month it reported that multinational Unilever claimed to have saved 100,000 hours of human recruitment time in the past year by deploying AI to analyse video interviews.
“The system scans graduate candidates’ facial expressions, body language and word choice and checks them against traits that are considered to be predictive of job success,” the paper said, adding: “Vodafone, Singapore Airlines and Intel are among other companies to have used similar systems.”
And the UK is working on AI-enabled welfare systems that make Australia’s Robodebt look amateurish.
On 14 October the Guardian reported: “The Department for Work and Pensions has hired nearly 1,000 new IT staff in the past 18 months, and has increased spending to about £8m a year on a specialist ‘intelligent automation garage’ where computer scientists are developing over 100 welfare robots, deep learning and intelligent automation for use in the welfare system.”
The paper quoted a DWP spokesman saying: “We are striking the right balance between having a compassionate safety net on which we spend £95bn, and creating a digital service that suits the way most people use technology. … Automation means we are improving accuracy, speeding up our service and freeing up colleagues’ time so they can support the people who need it most.”
However, there is no transparency: the department has refused freedom of information requests to explain how it gathers data on citizens.
The lessons of history
And it’s not as if problems of bias in AI algorithms are new: they’ve been around for forty years. IEEE Spectrum is running a six-part series, Untold History of AI. Episode five relates how, in the 1970s, Dr. Geoffrey Franglen of St. George’s Hospital Medical School in London began writing an algorithm to screen student applications for admission.
He completed it in 1979, and by 1982 all initial applications to St. George’s were being screened by the program.
Concerns were soon raised, but it was not until thousands of applicants had failed to get an interview on the basis of the algorithm’s assessment of their written applications that the biases in the system were uncovered.
An enquiry found candidates were classified by the algorithm as “Caucasian” or “non-Caucasian” on the basis of their names and places of birth. If their names were non-Caucasian, the selection process was weighted against them. In fact, simply having a non-European name could automatically take 15 points off an applicant’s score.
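To make the bias concrete, here is a toy sketch of that kind of weighting; the surname list, base scores and classification rule are hypothetical stand-ins, not Franglen’s actual program:

```python
# Toy sketch of the bias the enquiry described at St George's. The surname
# list, base scores and classification rule are hypothetical stand-ins.

NON_CAUCASIAN_PENALTY = 15  # points the enquiry found could be deducted

# Hypothetical stand-in for the program's crude classification by name and birthplace
NON_EUROPEAN_SURNAMES = {"Patel", "Nguyen", "Adeyemi"}

def screening_score(base_score, surname, place_of_birth):
    """Return the score used to decide who gets an interview."""
    classified_non_caucasian = (surname in NON_EUROPEAN_SURNAMES
                                or place_of_birth not in {"UK", "Ireland"})
    if classified_non_caucasian:
        return base_score - NON_CAUCASIAN_PENALTY
    return base_score

# Two applicants with identical academic records end up 15 points apart.
print(screening_score(80, "Smith", "UK"))  # 80
print(screening_score(80, "Patel", "UK"))  # 65
```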
A question of ethics
In the SMH article’s closing quote, Finkel said AI was throwing up increasingly complex questions about what we should do, and that answering these questions required the application of ethics rather than physics.
“As such, it is not the province of scientists alone but of every individual.”
Alas, as the scale and scope of AI increase, the ability of individuals to question and influence its use will likely be very limited.
AI has no moral compass: it works off data. But the increased ‘efficiency’ it promises is highly seductive, and big government and big business will likely extend its scale and scope, its inbuilt biases and its power to subject individuals to its decisions.