Trust is fundamental to all our relationships, not just with our family and friends. We trust banks with our money; we trust doctors with our really personal information. But what happens to trust in a world driven by algorithms? As more and more decisions are made for us by these complex pieces of code, the question that comes up is inevitable: can we trust algorithms? From Google searches to GPS navigation, algorithms are everywhere. We don't really think too much about them, but increasingly governments, corporations and various institutions are using them to make decisions about us:
who gets public services, who gets denied, how people are monitored and policed, how insurance is charged. I want to start here in Australia, where an algorithm used by the government has resulted in more than 400,000 people being in debt to the country's welfare system, Centrelink. It's been called the Robodebt scandal. Back in 2016, a decision was made to fully automate a key part of the Australian welfare system: the part where the earnings of low-income people are compared with the amount of government money they received. The government says it does this to ensure the right amount of financial assistance has been handed out. While the data-matching algorithm, officially called the Online Compliance Intervention, had been in place since 2011, any discrepancies previously flagged by the system were investigated by a government employee first. With automation, all human checks were removed. Some of the maths was just bad, just plain wrong. It was like spreadsheets were mashing two cells together as if the cells knew each other. Asher Wolf is a journalist who has been reporting on the Robodebt story since it broke. She's also an activist, one of the chief organisers of the #NotMyDebt grassroots campaign.
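The transcript doesn't spell out the arithmetic behind those mashed-together cells, but the flaw campaigners pointed to was crude income averaging: an annual income figure reported to the tax office was smeared evenly across 26 fortnights and compared with the fortnightly income the person had actually reported, with any gap treated as a potential debt and, after automation, no human checking whether earnings were simply uneven across the year. The sketch below is illustrative only; the function names, figures and threshold are hypothetical, not Centrelink's actual code.

```python
# Illustrative sketch only -- NOT Centrelink's actual code.
# It mimics the widely reported flaw in the automated matching:
# an annual income figure is averaged evenly across 26 fortnights
# and compared with what the person actually reported each fortnight.

FORTNIGHTS_PER_YEAR = 26

def averaged_fortnightly_income(annual_income_from_tax_office: float) -> float:
    """Naive assumption: income was earned evenly all year."""
    return annual_income_from_tax_office / FORTNIGHTS_PER_YEAR

def raise_discrepancies(annual_income: float, reported_fortnights: list[float]) -> list[float]:
    """Flag every fortnight where the averaged figure exceeds what was reported.
    No human review happens after this point -- that is the automation step
    the transcript describes."""
    assumed = averaged_fortnightly_income(annual_income)
    return [assumed - reported for reported in reported_fortnights
            if assumed > reported]

# Hypothetical example: someone who earned $13,000, all of it in the
# 10 fortnights they worked, and nothing for the rest of the year.
worked = [1300.0] * 10    # fortnights with real earnings, truthfully reported
not_working = [0.0] * 16  # fortnights on income support, no earnings
discrepancies = raise_discrepancies(13_000.0, worked + not_working)

# The averaging invents $500 of "undeclared" income for every fortnight
# they truthfully reported $0 -- a debt that never existed.
print(f"fortnights flagged: {len(discrepancies)}")
print(f"alleged debt basis: ${sum(discrepancies):,.2f}")
```

In this toy case the 16 fortnights of truthfully reported zero income each attract $500 of invented earnings, which is exactly the kind of discrepancy notice the next part of the story describes.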
Often people didn't realise that this was automated in the first place, and it wasn't till we started getting people talking together on social media, on Twitter, that we realised: actually, it's the government that's wrong. It was almost like 100,000 people had been gaslighted into thinking they'd done the wrong thing, that it was their fault, and they were outraged when they realised that there was a fault in the actual algorithm, in the code. The Australian government disagrees: we are doing all compliance checks because we want to be more thorough in the process, we are uncovering the greatest examples and we're recouping money for the taxpayer. More checks is a bit of an understatement. The old system resulted in around 20,000 discrepancy notices a year, but in the early days of the new automated system that jumped to 20,000 a week. More than a million letters have been sent out by the algorithm, sometimes disputing government payments from as far back as seven years. David Dignam was notified he incorrectly declared his income from a teaching job while he was on a disability pension back in 2011. His Robodebt: $4,088.
In essence, what Robodebt is, is it's levelling an accusation towards you that you've somehow cheated or you've done the wrong thing. I know I hadn't. I wanted details of how they calculated my debt and I was told I couldn't have that, and the reason I was told was that the computer looks at my personal information and then sources a piece of information from here, another piece from here, another piece from there, and they can't provide all that to me because it comes from too many places. And it was simply about you. In other words, the algorithm is inscrutable. It's totally unknowable. Even the staff don't really understand it. Robin, can you tell me how much evidence or how much notification Centrelink provided you proving that there was a debt? They didn't provide me with anything other than this letter, and the other thing that I have is, finally, a text message came through to say: hi, the money you owe us is due in two days. Automation, computerisation, algorithmisation, if that's even a word: they're always sold to us as such a positive thing, all upside, no downside.
As Australia's Department of Human Services put it, computerised decision-making can "reduce red tape, ensure decisions are consistent and create greater efficiencies for recipients and the department". The problem is, how do you challenge a system that has no face and no name, and nobody signs the bottom of your letter to say, you know, I'm in charge of this? "Good afternoon, welcome to the Department of Human Services." "Centrelink, good day." You can end up sitting on hold for a couple of hours trying to speak to a human. The real question is, how has it come about that the government has overpaid people by billions? Because really, the criminal waste is occurring at the government's end of the line. It's the government that's doing this. Otherwise you're saying a hundred thousand citizens have made mistakes; well, if that's the case, then the system is too difficult for people to negotiate. I'm not here shaking my fist at technology. It's not, you know, digital's fault, it's not a computer's fault. This system has been designed quite explicitly by government. Government is responsible for its failures, and government is really responsible for the hell they're putting all sorts of welfare recipients through, unfairly, by issuing them false debts.
This is something I heard from virtually everyone I spoke to about Robodebt. They said: we're not against technology, it's not like algorithms are all bad; it's the people and the institutions designing this code we can't seem to trust. And this really gets to the heart of our relationship with algorithms. They're often complex, hidden behind walls of secrecy, with no way for those whose lives are actually impacted by them to probe them, because they've been kept off limits. Despite all the criticism, and even a formal inquiry, the Australian government stands by its algorithm and automation in the welfare system. We do have a robust compliance system in place, and in the last six months alone we've recouped, we've recovered, over 300 million dollars for the taxpayer through that process, so the system is working and we will continue with that. There are at least 20 different laws in Australia that explicitly enable algorithms to make decisions previously made by ministers or staff. We don't really know the full extent of how these are being applied.
But there are places around the world where the use of algorithms is even more widespread, like here in the United States, where algorithms are being used to make big decisions across everything from the criminal justice system to health, education and employment. The United States has a longer history of algorithm use than many other countries. Silicon Valley is a big reason for that, of course, but there's also much looser regulation here on how private companies and governments can collect and use data. But for those studying the effects of algorithms on American society, one thing is clear: often it's the poor and marginalised who get the worst deal. I'm on my way now to Troy, in New York State, to meet with Virginia Eubanks. She's the authority on everything to do with automating inequality; it's actually the title of one of her books. Virginia says America's poor and working class have long been subject to invasive surveillance and punitive policies. She writes about the prison-like poorhouses of the 19th century, where the bad conditions were thought to discourage the "undeserving poor" from supposedly taking advantage of the system.
What I see as being part of the digital poorhouse are things like automated decision-making tools, statistical models that make risk predictions about how people are going to behave in the future, or algorithms that match people to resources. And the reason I think of them as a digital poorhouse is because the decision that we made in 1820, to build actual poorhouses, was a decision that public service systems should first and foremost be moral thermometers, that they should act to decide who is most deserving of receiving their basic human rights. Virginia's studies into the automation of public services in the United States point to developments in the late 60s and 70s. Along with the civil rights movement came a push for welfare rights. People are forced to live in the most inhuman situations because of their poverty. African-Americans and unmarried women who were previously barred from receiving public funds could now demand state support when they needed it. While technology was touted as a way to distribute financial aid more efficiently, it almost immediately began to serve as a tool to limit the number of people getting support. I think it's really important to understand that history.
I think too often we think of these systems as just simple administrative upgrades, sort of natural and inevitable, but in fact they're systems that make really important, consequential, political decisions for us, and they were, from the beginning, supposed to solve political problems, among them the power and the solidarity of poor and working people. In the early 1970s, close to 50% of those living below the poverty line in the United States received some form of cash welfare from the government. Today it's less than 10%. In public assistance, the assumption of many folks who have not had direct experience with these systems is that they're set up to help you succeed. They are not, in fact, set up to help you succeed. They're very complicated systems that are very diversionary, that are needlessly complex, and that are incredibly stigmatising and emotionally very difficult. So it shouldn't then surprise us that a tool that makes that system faster, more efficient and more cost-effective furthers that purpose of diverting people from the resources that they need. Having algorithms make decisions such as who gets financial aid, or who owes money back to the government, has caused concern among many different groups.
But what's causing a full-on panic for some is the fact that algorithms are being used to actually make predictions about people. One of the most controversial examples is the Correctional Offender Management Profiling for Alternative Sanctions. It's a bit of a mouthful, but its short form is COMPAS, and it's an algorithm that's been used in courtrooms across the country to assist judges during sentencing. Now, of course, algorithms can't weigh up arguments, analyse evidence or assess remorse, but what they are being used for is to produce something known as a risk assessment score, to predict the likelihood of a defendant committing another crime in the future. This score is then used by judges to help them determine who should be released and who should be detained pending trial. Now, the judge has to consider a couple of factors here. There's public safety and flight risk on the one hand, but then there's the real cost, social and financial, of detention on the defendant and on their family on the other. Now, historically, what happens is the judge looks into this defendant's eyes and tries to say, okay, you're a high-risk person or you're a low-risk person, I trust you or I don't trust you. Now, what algorithms are helping us do is make those decisions better.
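COMPAS's inner workings are proprietary, so the sketch below is a generic illustration of how a risk assessment score of this kind is typically produced, not the real instrument: a statistical model turns a handful of facts about a defendant into a probability of rearrest, which is then bucketed into a simple score for the courtroom report. All the feature names, weights and cut-offs here are hypothetical.

```python
# Hypothetical sketch of a risk-assessment score, NOT the COMPAS model
# (which is proprietary). A logistic model maps defendant attributes to a
# probability of rearrest, then bins it into a 1-10 score of the kind
# judges are shown.
import math

# Invented weights -- in a real tool these come from fitting the model to
# historical arrest records, which is where historical bias can enter.
WEIGHTS = {"prior_arrests": 0.35, "age_under_25": 0.60, "failed_to_appear": 0.80}
INTERCEPT = -2.0

def rearrest_probability(defendant: dict) -> float:
    """Logistic regression: probability the model assigns to a new arrest."""
    z = INTERCEPT + sum(WEIGHTS[k] * defendant.get(k, 0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def risk_score(p: float) -> int:
    """Bucket the probability into a 1-10 score for the courtroom report."""
    return min(10, int(p * 10) + 1)

defendant = {"prior_arrests": 2, "age_under_25": 1, "failed_to_appear": 0}
p = rearrest_probability(defendant)
print(f"probability of rearrest: {p:.2f}, risk score: {risk_score(p)}/10")
```

The judge only ever sees the final number; the weights and the historical data they were fitted to stay out of view, which is where the criticisms that follow come in.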
The COMPAS algorithm was brought in to offset, or balance out, inconsistencies in human judgment, the assumption being, of course, that a piece of code would always be less biased and less susceptible to prejudice. However, COMPAS has faced several criticisms, primarily accusations of racial bias, inaccuracy and lack of transparency. In 2016, a man named Eric Loomis, sentenced to six years in prison, took his case to the Wisconsin state Supreme Court. His allegation was that the use of COMPAS violated his right to due process: it made it impossible for him to appeal his sentence, since the algorithm is a black box, impenetrable, unquestionable. Eric Loomis didn't get very far. The Supreme Court ruled the use of COMPAS in his sentencing was legal. The verdict, however, revealed the ways in which the ever-increasing use of algorithms is being normalised. The court had a funny argument, saying that, like, nobody knows where these decisions are coming from and so it's okay, you know; it's not that the state has some particular advantage over the defendant, but that everyone is at this sort of equal playing field, and it's not that there's an informational advantage for one side or the other.
Now, to me, I find that somewhat dissatisfying. I do think that in these high-stakes decisions, particularly in the criminal justice system, we don't just want to have an equal playing field of no one knows; I think we need to have an equal playing field of everybody knows. We need to have this transparency built into the system. For the record, Equivant, the company that sells the COMPAS software, has defended its algorithm. It points to research it commissioned showing the company meets industry standards for fairness and accuracy. The United States has a massive racial discrimination problem in public services; that's real. So it is really understandable when agencies want to create tools that can help them keep an eye on frontline decision-making, in order to maybe identify discriminatory decision-making and correct it. The problem is that that's not actually the point at which discrimination is entering the system, and this is one of my huge concerns about these kinds of systems: they tend to only understand discrimination as something that is the result of an individual who is making irrational decisions, and these systems are not as good at identifying bias that is systemic and structural.
So, going back to the question that started us on this journey: can we trust algorithms? Well, the biggest thing I've learned from speaking with Asher, Virginia, Sharad and many others is that I've actually got the question wrong. It isn't really so much about whether algorithms are trustworthy; it's more about the quality of the data that feeds them, and the objectives of those designing and controlling them. Human biases, human imperfections: that's what we see reflected in our algorithms, and without better oversight we risk reinforcing our prejudices and social inequalities. Far too often, algorithms are programmed to assume that the past is the future that we want as well, and the past is often full of stigma and bias and stereotypes and rejection and discrimination. Really what we need is to create systems that allow for new future scenarios that are different from the old way.
Of course we can build better tools, algorithmic tools, and I see them everywhere that I go, but what makes a difference about good tools, about just tools, is building those tools with a broader set of values from the very beginning. So not just efficiency, not just cost savings, but dignity and self-determination and justice and fairness and accountability and fair process, and all of those things that we really care about as a democracy, have to be built in at the beginning, from step one, in every single tool. [Music] Thanks for watching the first episode of All Hail the Algorithm. This is a five-part series and we're going to be covering everything from biometrics to online manipulation and even design, so I hope you check out those episodes as well. You can follow the hashtag #AllHailTheAlgorithm to stay up to date, or to check out some other content that we have available online.