Government by Algorithm: The Myths, Challenges and Opportunities

Briefing
Posted on: 25th January 2021
By Multiple Authors
Kirsty Innes
Head of the Digital Government Unit
Rosie Beacon
Policy Analyst

    Governments in the 21st century face new types of challenges – a climate crisis, technological revolution, rapid urbanisation and an ageing population – making their ability to innovate more important than ever. With an entirely new set of issues to respond to in 2021, it is no surprise that governments have started to take more advantage of advanced computational power.

    The concept of algorithms is central to this, and the use of algorithms in government – and more specifically, algorithmic decision-making – has come under increasing scrutiny since the UK government’s disastrous attempt to use an algorithm to determine A-level and GCSE grades in lieu of exams, which had been cancelled due to the pandemic. Discussion in the media and political sphere has focused disproportionately on the risks and disadvantages of algorithmic decision-making, with oversimplifications and misrepresentations becoming embedded in received wisdom about algorithms.

    However, there are compelling reasons to make more use of them in government. They can be a powerful tool, with the potential to transform the delivery of education as well as areas like health care and the welfare system. But to reap their full benefits and avoid their potential risks, it is essential they are designed and deployed in the right way.

    This briefing tackles some of the most prevalent misunderstandings about the use of algorithms in the public sector and makes the case for a more positive, nuanced approach.

    What Is an Algorithm?

    The word algorithm can relate to a variety of different processes in computing and mathematics, but three core concepts are worth knowing:

    1. An algorithm is a form of automated instruction, or a list of related instructions. At their simplest these instructions can consist of an “if → then” statement, or rule. The complexity of an algorithm will depend on the complexity of each individual step it needs to execute and how many steps are involved.
    • A tax rule is an example of a simple algorithm, for instance:
      IF {the goods or services provided are VAT-exempt} THEN {no VAT charged} ELSE {charge VAT}.
    • The decision-making process that determines the four Covid-19 tiers can also be thought of as an algorithm, as it looks at specific input factors – such as the rate of infection, the reproduction rate and current pressures faced by local NHS departments – to determine which tier is applied.
    • Highly complex algorithms include those used in satellite-navigation systems to help find the optimum travel route (e.g., the quickest, the shortest or the one requiring the least fuel), or Google’s PageRank, which determines which webpages are displayed in what order in response to a given search.
    2. Artificial intelligence (AI) refers to a collection of scientific disciplines and technology that enable machines to carry out tasks that are usually associated with human intelligence. Algorithms and AI are intrinsically related, but they are not the same. Every computer program is built using algorithms, but not every program is regarded as AI.
      Examples of artificial intelligence include:
    • Computer vision (e.g., face, iris or fingerprint recognition; automatic number-plate recognition)
    • Natural language processing (e.g., search engines understanding questions entered in human language)
    • Speech recognition (many machines now understand instructions spoken into them using normal human language, e.g., Alexa)
    3. Machine learning (ML) is a subset of artificial intelligence that allows machines to learn from data without being explicitly programmed to. Machine learning teaches a machine patterns, similar to how we teach children to read, write and speak.
      Examples of machine learning include:
    • Supermarket ML systems that learn about our shopping habits, such as seasonal and impulsive preferences, from the data they collect at the checkout.
    • Once ML has taken place, the learning can be used to accurately predict future events. Advanced ML systems can predict individual or collective patterns of usage of electricity, telephones and other services, allowing service providers to be more efficient and effective.
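The “if → then” rules described above can be sketched directly in code. The following is a minimal illustration of the VAT rule, not any real HMRC system: the exemption flag stands in for what would in practice be a goods classification lookup (the 20 per cent figure is the current UK standard VAT rate).

```python
def vat_due(price: float, vat_exempt: bool, rate: float = 0.20) -> float:
    """A one-rule algorithm: charge VAT only on non-exempt goods or services."""
    if vat_exempt:
        return 0.0          # IF exempt THEN no VAT charged
    return price * rate     # ELSE charge VAT at the standard rate

# A £100 item that is not exempt attracts £20 of VAT; an exempt one attracts none.
print(vat_due(100.0, vat_exempt=False))  # 20.0
print(vat_due(100.0, vat_exempt=True))   # 0.0
```

More complex algorithms, such as the Covid-19 tiering decision, are built from many such rules chained together over several input factors.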

    How Can Algorithms and AI Be Useful in a Government Context?

    Algorithms are already used for such a vast and varied range of functions that it hardly makes sense to think about them as a single class, and the applications for sophisticated AI and ML are still being explored. But it’s fair to say their utility often lies in processing huge volumes of data at a far faster rate than any human, and with much greater consistency.

    Algorithms are very good at spotting details that humans could miss. They don’t get tired or need holidays, and so they are cost efficient. Designed and deployed in the right way, algorithms can enable government services to re-allocate resources much more effectively and ultimately deliver for citizens with an unprecedented level of efficiency.

    As our colleagues have written, when it comes to policy-making and public services, we can think about five main types of uses for AI:

    1. AI can help to personalise the citizen experience. How each individual interacts with government is different, and AI can help in areas like answering people’s questions, personalising recommendations and tailoring services to users’ needs.
    2. Data can be optimised to enable government to monitor services in real time. Systems can be developed so they track data and analyse it instantly, avoiding lags, inaccuracies and inefficiencies.
    3. It can help to classify cases more effectively. Many government services require placing events, citizens or information into the right category, such as which patient should see a doctor most urgently or to decide who is entitled to government support.
    4. Machine-learning algorithms can be particularly useful for making predictions and understanding trends and future behaviours. The combination of constantly advancing computing power and large amounts of data can outperform human experts at these tasks, and algorithms allow them to be done more efficiently, quickly and cheaply, and on a bigger scale.
    5. Government programmes and departments can be complex and convoluted. It is not always clear from the outset what all the potential consequences of a policy programme are. New technology can help study the effects of complex systems over time and even test policies before implementing them fully to gain more insight into their outcomes.
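The classification use case above can be made concrete with a toy sketch. This is a hypothetical illustration, not any department’s actual system: it scores patients on a few invented risk factors and ranks them so the most urgent case is seen first. A real triage model would use clinically validated weights and far richer data.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    age: int
    symptom_severity: int   # clinician-assigned score, 0-10 (hypothetical)
    chronic_conditions: int

def risk_score(p: Patient) -> float:
    """Invented weights for illustration only."""
    return 2.0 * p.symptom_severity + 0.05 * p.age + 1.5 * p.chronic_conditions

def triage(patients: list[Patient]) -> list[Patient]:
    """Classify cases: return the queue ordered most-urgent first."""
    return sorted(patients, key=risk_score, reverse=True)

queue = triage([
    Patient("A", age=30, symptom_severity=8, chronic_conditions=0),
    Patient("B", age=70, symptom_severity=3, chronic_conditions=2),
    Patient("C", age=55, symptom_severity=9, chronic_conditions=1),
])
print([p.name for p in queue])  # ['C', 'A', 'B']
```

The point is not the scoring rule itself but the shape of the task: a consistent, auditable ordering applied identically to every case, with humans deciding the weights and handling the exceptions.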

    Algorithmic Decision-Making in Government

    By their nature, government bodies and public servants make innumerable decisions every day, with impacts on people’s lives ranging from the near-trivial (choosing which day the bins are collected, say) to the life-changing (granting a visa, or entitlements to benefits). No decision-making process is foolproof. But governments have a responsibility to provide a degree of transparency, accountability and recourse to appeal that is appropriate to the impact of the decision.

    Set in this context, algorithms can play a perfectly legitimate role in supporting many kinds of government decisions, increasing efficiency, consistency and accuracy. With the right design and application, they can in fact strengthen the human element of government by re-allocating often scarce resources, allowing elected officials and public servants to dedicate their time to cases and issues that require the skills unique to human decision-making: empathy, sensitivity and understanding of nuance.

    One example is the potential to use AI to improve education: Automating routine tasks like registration, preparation, quantitative assessment and marking, and paperwork could allow education to be tailored more individually and delivered at a pace suited to each student’s own capability and progress. Research by McKinsey showed that 20 to 40 per cent of current teacher hours are spent on activities that could be automated using existing technology, which translates to an additional 13 hours per week that teachers could redirect towards activities that lead to better student outcomes.

    Criticisms of Algorithmic Decision-Making in Government

    The public debate on algorithms in government is currently characterised by distrust, suspicion and cynicism. Commentary about algorithms in government has tended to focus on the risks and downsides, often conflating legitimate concerns with unhelpful stereotypes and misrepresentations.

    It is important to acknowledge that algorithms can be susceptible to some very significant failings:

    • First, algorithms can be biased – that is, depending on the design of the algorithm and the nature and quality of the input data, algorithms can systematically disadvantage certain types of people in unintended ways. For instance, a health-care algorithm in the US underestimated the health risk of black patients who were just as sick as their white counterparts. And the COMPAS algorithm – used to help predict recidivism in criminal offenders in the US – has been accused of bias against black defendants. (While COMPAS has been shown to accurately predict which offenders will go on to re-offend, it has not been as accurate at predicting those who will not re-offend. Black people were twice as likely as white people to be inaccurately classified as medium or high risk.) Humans can of course also be biased. But one biased algorithm could have an impact on a much greater scale than one biased human.
    • Second, algorithms can be used in the wrong context. Imposing a one-size-fits-all algorithmic decision in inherently complex environments or with insufficient human checks and balances can run a much higher risk of skewed, inaccurate or biased decisions. This is the case in the A-levels example: Calculating exam grades on the basis of aggregate data about past students could never have been an adequate substitute for assessing individual students’ efforts, no matter how sophisticated the algorithm. It is also worth remembering that digital literacy is not universal, and relying on users to input their own data can leave some at a disadvantage, as has been reported with the predominantly digital Universal Credit system.
    • Third, algorithms can be designed badly and fail to take into account relevant factors. For instance, in administering Universal Credit: In cases where an individual is paid multiple times in one month, which is common among self-employed or gig-economy workers, the algorithm can overestimate earnings and shrink Universal Credit payments. The Australian government is also currently in the process of paying back an estimated $721 million (AUD) in wrongly issued debts as a result of a similarly flawed algorithmic process.
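The COMPAS criticism above turns on a measurable distinction: an algorithm can look accurate overall while its errors fall unevenly across groups. A minimal sketch with made-up data shows how the relevant metric – the false-positive rate, the share of people who did not re-offend but were flagged as high risk – is computed per group:

```python
from collections import defaultdict

# Hypothetical records: (group, predicted_high_risk, actually_reoffended)
records = [
    ("group_a", True,  False), ("group_a", True,  True),
    ("group_a", False, False), ("group_a", True,  False),
    ("group_b", True,  True),  ("group_b", False, False),
    ("group_b", False, False), ("group_b", True,  False),
]

def false_positive_rates(records):
    """Share of non-reoffenders wrongly flagged as high risk, per group."""
    flagged = defaultdict(int)    # non-reoffenders flagged high risk
    negatives = defaultdict(int)  # all non-reoffenders
    for group, predicted, actual in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

print(false_positive_rates(records))
# {'group_a': 0.666..., 'group_b': 0.333...} – group_a wrongly flagged twice as often
```

In this invented dataset, group A’s false-positive rate is double group B’s even though the predictions could score similarly on overall accuracy – the same asymmetry at the heart of the COMPAS debate.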

    Myths and Misrepresentations

    A general lack of awareness about how algorithmic technologies work has meant that some unhelpful tropes have emerged. The myths we look at below are some themes we picked up from various articles on algorithms in the UK media. On examination, the reality is much more nuanced:

    “Algorithms shouldn’t be trusted”

    Press coverage often implies that algorithms, AI and ML are being used as a complete substitute for human decision-making, or attributes to them a level of independence bordering on sentience. In fact, algorithms are generally used as one element of a process designed and administered by people. Algorithms don’t often make decisions on their own, but they can make the process of decision-making a lot easier.

    “Blame the mutant algorithm”

    The idea of the “mutant” algorithm evokes an AI that has evolved beyond human control or understanding. And it’s true that there is a concern in cutting-edge AI around the “black box”, which refers to the most advanced machine-learning systems. These systems are “self-taught”, so it is not easy – or even possible – to interrogate the processes and steps they take to arrive at a given conclusion. However, the reality is that in most cases the tools being used in government are some way off this level of sophistication.

    “Computer says no”, “They take the human side out of government”

    It is sometimes suggested in public debates that using algorithmic decision-making is cutting costs at the expense of fairness or understanding of individuals’ circumstances. This ignores the fact that much government decision-making – even when not automated – has been systematised to a greater or lesser degree for decades, to try to ensure consistency. Similar rule-based logic already underpins many government services, from the calculation of student-loan allocations to income-tax codes and council-tax brackets.

    “Do they really do anything useful?”

    Away from the headlines, many organisations use algorithms and AI to make marginal improvements to services that, multiplied across high volumes of users, amount to significant benefits. For example, a train-signalling company used AI to help trains run on time, reducing total lateness at all London train stations by up to 200 minutes every day. HMRC uses an algorithm to help identify instances of tax evasion, saving taxpayers’ time and money by more effectively targeting raids on businesses. The Department for Work and Pensions uses algorithms to detect identity-cloning techniques that are commonly used by fraudsters, protecting public resources.

    A Constructive Approach

    Governments need to make sure they design decision-making processes carefully, with the right balance of efficiency and safeguards, consistency and flexibility. Where they use algorithms, there should be transparency about where and how the algorithms are used, how they are designed and the data they are using, as well as ensuring there is accountability and mechanisms for challenge.

    If algorithms are to positively impact citizens, governments need to balance a set of trade-offs:

    • How to increase government efficiency while minimising potential algorithmic harm to citizens
    • How to improve public-service delivery without removing the human element of government and public services
    • How to develop well-informed and effective algorithms without undermining personal-data privacy

    Despite some rather one-sided media coverage, there is valuable work being done to help organisations make the best use of algorithms. The Ada Lovelace Institute has written extensively on algorithmic decision-making systems, looking at the difficult questions around algorithmic accountability. The Centre for Data Ethics and Innovation also recently reviewed bias in algorithmic decision-making, as well as the broader questions of transparency and public trust in algorithms in government.   

    If used correctly, algorithms are valuable tools that can have huge benefits for policymaking and public-service delivery. Resources are limited, and governments shouldn’t deprive citizens of the benefits algorithms can provide out of fear of getting it wrong. Instead, they should invest the necessary time and effort in making judicious use of them, and putting in place adequate mechanisms for transparency, accountability and challenge. In this way, governments can rebuild public confidence and get past the dreaded mutant algorithm for good.

    Find out more
    institute.global