- Governments are increasingly using automated systems powered by emerging technologies like blockchain and artificial intelligence to deliver public services more efficiently.
- But what are the consequences for society and citizens like Salvador?
The year is 2030, and Salvador is enjoying a glass of wine to celebrate the arrival of his retirement. A lifelong taxi driver, he exchanges stories and small talk with the other regulars at the bar below his humble, one-bedroom flat. After a painstaking climb up the stairs, Salvador boots up the 10-year-old laptop given to him by a neighbour to check his e-mail – something he does when he remembers (which is rarely). A childless widower, nobody emails him much anyway.
Salvador is surprised to find a six-week-old message prompting him to claim his retiree healthcare benefits before the month is out, or risk losing them. He suddenly remembers a letter he received before moving to a cheaper flat: the public benefits system has gone completely digital and requires a new type of ID. Panicking, Salvador calls the number in the email. A robotic voice puts him on hold; eventually he manages to speak with a human.
Pulse rising, Salvador explains his situation to the caseworker. His benefits have been cut off completely, but he can appeal the decision if he wishes. The reason? “Failure to comply with the process.” Salvador manages to initiate the appeal process, but after months of waiting is dismayed to find his application has been rejected. When he calls the help line again, the caseworker explains that his profile has been flagged “at risk” but cannot give more information. He has now been without benefits for months, his savings dwindling to near zero. He can no longer afford his heart medication, has lost access to free transportation and cannot get to his next medical appointments.
Now completely excluded from the system, Salvador watches his health worsen. He is turned away as he tries to walk into a public clinic, whose facial recognition system doesn’t recognise him – after all, he didn’t register for his new digital ID during the one-year transition period.
A year goes by. After suffering massive heart failure brought on by stress, lack of medication and malnutrition, Salvador loses his apartment and ends up on the street. He now has no way of finding out that he has won his appeal for wrongful termination and that his benefits have been restored.
Governments are increasingly using automated systems to deliver public services more efficiently. The problem? These systems can fail the very people who need them most: the vulnerable and marginalised, especially those in economically precarious situations. One false positive can tip the balance of someone’s entire life.
Situations like Salvador’s are not far from today’s reality. In Indiana, state officials feared that a system allowing caseworkers and users to develop personal relationships would lead to more fraud after one case of corruption cost the state $8,000. After 1,500 local caseworkers were replaced with online forms and call centres, benefit denials increased by 54%.
The charity Golden Opportunity Skills and Development (GOSAD) reports that 80% of users of the Universal Credit welfare system in the UK are digitally excluded. “The claimant journey is not only unrealistic but impossible for the digitally excluded, the ones who need it most,” explains Programme Lead Sharmarke Diriye in a webinar. “Nobody has been consulted – these systems have been introduced without having people in mind. When public authorities co-design, they do it with their own staff, not with actual users of these services.”
It’s not that automated systems are inherently less effective in the public sector than the familiar ones in the private sector, such as Cabify or Deliveroo. According to Virginia Eubanks, author of Automating Inequality, the risk comes from using these systems to “override empathy” and “avoid some of the most pressing moral challenges of our time” – such as poverty, in Salvador’s case.
Does our digital future mean outsourcing all the difficult decisions to machines? At the Digital Future Society Think Tank, we are exploring answers to questions like this one.
Human bias, error and corruption have always existed in public service delivery, creating deep inequalities for decades before the arrival of artificial intelligence systems. What’s important to remember, says Eubanks, is that automated decision-making “systems don’t actually remove that bias – they simply move it.”
In Salvador’s case, the algorithm deciding whether he should receive his benefits was using proxies instead of actual evidence of misconduct. The fact that he drove a car every day (as a taxi driver) after regularly visiting a bar (where he always goes for lunch) told the system that he was more likely to have a propensity for drunk driving: an inference that, while understandable, is not true in his case. What’s more, the caseworker was not able to explain to Salvador how or why the system arrived at its decision; she only knows that she cannot disburse benefits.
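To make the proxy problem concrete, here is a deliberately simplified, hypothetical sketch of how such a flag might work. The feature names and thresholds are invented for illustration – real benefit systems are far more complex – but the failure mode is the same: signals that merely correlate with misconduct are treated as evidence of it.

```python
# A toy, hypothetical proxy-based risk flag (not any real system's logic).
# It never observes misconduct itself, only proxies that correlate with it.

def risk_flag(profile: dict) -> bool:
    """Flag a claimant "at risk" from proxy signals, not actual evidence."""
    drives_daily = profile.get("drives_daily", False)
    frequent_bar_visits = profile.get("bar_visits_per_week", 0) >= 5
    # The proxy inference: daily driving + frequent bar visits
    # is taken to imply a propensity for drunk driving.
    return drives_daily and frequent_bar_visits

# Salvador: a taxi driver who eats lunch at the bar below his flat every day.
salvador = {"drives_daily": True, "bar_visits_per_week": 7}
print(risk_flag(salvador))  # True: flagged, though he has done nothing wrong
```

A rule like this will flag every working driver with a daily routine that passes through a bar, which is exactly how a false positive can end up deciding someone’s benefits.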
Citizens at the centre of public innovation
None of this means successful models of artificial intelligence in government cannot exist. They are hard to build, however, and must involve co-design and incorporate human interaction throughout the process. One example is the mRelief system in Chicago, which helps citizens determine eligibility for government programmes by connecting them to a kind of “digital sherpa” (a real person) who guides them through each step of the online process. Guidance is available face-to-face or by text message, ensuring each claimant gets the benefits they are eligible for and deserve.
Going back to Salvador’s story, what does a more inclusive digital future look like? Knowing he is about to reach retirement age, living alone and near the poverty line, the government sends Salvador a digital sherpa instead of an email to make sure his digital ID is up to date. Instead of forcing Salvador onto a website or into his inbox, key activities and changes in the public benefits system adapt to his experience: the process is simplified, technological support is brought in only when required, and when the digital option is not viable, an analogue alternative is offered.
Where is this working in real life? In Taiwan, where Digital Minister Audrey Tang visits digitally excluded citizens herself (often the elderly, or those living in rural areas of the country) instead of expecting them to come to a website or download an app. (Hear Minister Tang explain how inclusion is built into the Taiwanese public innovation system here.)
With humans working integrally with emerging tech, public services can become more personalised, effective and inclusive. But first, governments must understand and acknowledge the moral consequences of their technological choices. Perhaps more important than communicating the technical details of how emerging technologies work (does Salvador really care whether the system providing his benefits is powered by blockchain or artificial intelligence?), governments could invest in making sure new automated systems are designed with and for the public they are meant to serve. The cost savings generated by efficiency gains from implementing emerging tech in government can and should be reinvested in such inclusivity efforts.
Salvador’s story brought to you by Digital Future Society
Although Salvador is a made-up persona, his story emerged from a workshop of the Digital Future Society think tank, a group of academics, researchers, policymakers, entrepreneurs, corporate leaders and civil society experts who have come together to explore the impacts of technology on society.
Working under the theme of public innovation, this particular working group focused on the challenges and opportunities of the use and regulation of emerging technology like artificial intelligence and blockchain in public services and by governments. In the coming months, the working group will publish a report detailing what policymakers need to consider when designing, using, and governing such systems – especially when it comes to marginalised groups like low-income families, small businesses and the digitally excluded.
Keen to find out more about this topic? Check out the following resources:
- How do you govern machines that can learn? Policymakers are trying to figure that out
- How governments are adopting blockchain and AI in advanced economies
- AI Policy around the world compiled by Charlotte Stix
- CivicTech.Chat – on breakthroughs in public service technology
- OneTeamGov – real conversations with awesome people doing interesting stuff in government
- Go Public – stories of government getting it done and challenging the status quo
- Automating Inequality by Virginia Eubanks
- Weapons of Math Destruction by Cathy O’Neil