Trust in Technology: AI, Freedoms and Harms

Technology Policy

Paper
Posted on: 7th February 2022
By Multiple Authors
Bridget Boakye
Artificial Intelligence Lead, Internet Policy Unit
Hermione Dace
Senior Policy Analyst
Oliver Large
Senior Policy Analyst
Jess Northend
Policy Lead, Science and Innovation Unit
Ariana Kiran Singh
Senior Operations Analyst
Kevin Zandermann
Policy Analyst

    Fundamental shifts in technology are changing the way we live, bringing progress to all reaches of life, whether in the use of artificial intelligence (AI) to enable rapid drug development or the goal of universal internet access. To harness the huge opportunities technology presents and unleash the pathways to prosperity it offers, trust in systems, applications and platforms is essential.

    To assess levels of trust in technology around the world, we have used the following measures:

    • The extent to which respondents accept the use of AI in areas of public and private life.
    • Views on internet freedoms.
    • Perceptions of regulatory ecosystems as well as responsibilities for preventing online harms and fake news.

    In exploring the data, we have also identified where in the world communities are more likely to embrace AI and why, and we have assessed what the data suggest for governments when it comes to responsibility for regulation and online harms.

    In summary, this chapter finds that opportunities for innovative AI technologies abound in emerging markets, yet these are also the countries in which internet freedoms are most fragile. On freedoms, meanwhile, citizens across the globe expect their governments both to prevent online harms and to defend freedom of speech online. Our concluding proposals tie into these key findings: we recommend a) systematic, independent auditing of the operations of large technology firms and b) a new geopolitical settlement with the global technology industry.

    Respondents Favour AI Use For Selected Policing and Medical Applications

    Impressive AI breakthroughs this year have ranged from advances in biological research achieved by DeepMind’s AI system AlphaFold to OpenAI’s GPT-3 language model, which has brought the era of no-code technologies one step closer, meaning apps or websites could soon be created from scratch simply in response to voiced instructions. While these advances have made headlines, so too have related controversies, from public condemnation of the AI-generated audio of the late chef Anthony Bourdain in the documentary “Roadrunner” to Meta (formerly Facebook) announcing the retirement of its facial-recognition system amid concerns over algorithmic bias and threats to privacy.

    Based on our research across 26 countries, respondents showed varying levels of acceptance of the use of AI in different areas of public life, with support strongest for specific medical and policing applications. Conversely, there is a marked lack of support for its use in either the welfare or justice system, with welfare payments and the determination of jail sentences two of the most contentious applications.

    Figure 1 – There is a mixed picture on acceptable AI uses, but respondents are wary about applications relating to the welfare system and determination of jail sentences


    Policing has come under scrutiny from leaders around the world: in October 2021 the European Parliament passed a vote to ban police use of facial-recognition technology in public spaces and of AI in predictive policing, a move that reinforced the EU’s draft regulatory framework on AI, unveiled in April 2021. In the US, meanwhile, a growing number of states and cities, including California, have banned the use of facial recognition in policing, despite the absence of federal law and a recent US government report showing growing reliance on the technology.

    While there may be generalised concerns about technologies and policing, our survey reveals healthy acceptance of AI use for targeted surveillance. For policymakers, this suggests that outright bans may not align with the public interest. In the US, the state of Maine offers an alternative approach that may better reflect public sentiment: a law that recently came into effect prohibits government use of facial-recognition software, including by schools and other state agencies, but makes exceptions for specific cases such as identifying a suspect in a major crime. The law also requires logs of facial-recognition searches to be kept for transparency and accountability.

    Governments seeking to leverage the benefits of AI should embrace its transformative applications while mitigating potential harms through proportionate, agile regulation and accountability mechanisms that incentivise responsible use.

    Emerging and Frontier Economies Lean Towards Broader AI Use

    On some of the more contentious uses of AI, our research identifies an inclination towards the technology’s use among the public in emerging and frontier economies (as defined by MSCI) in Africa and Asia; the data even reveal deviations from the global trend on issues such as sentencing and welfare. This may not be so surprising: in one celebrated recent case, the government of Togo used AI to distribute cash directly to 57,000 of its poorest citizens who had been missed in the first rounds of Covid-related aid.

    Figure 2 – There is significant variation in attitudes between developed and emerging/frontier markets towards the more contentious uses of AI


    While governments in emerging and frontier markets are beginning to increase their AI adoption, their G7 counterparts, often with more developed AI ecosystems, are moving to regulate. With the US administration calling for a “Bill of Rights for AI”, leaders are clearly concerned about public trust in AI. Our survey data confirm some limitations in public acceptance of AI use within G7 countries, reflecting in part the concentration of negative media stories and public debate on the subject since 2015.

    Figure 3 – Within the G7, especially in Great Britain and the US, acceptance of AI use across a variety of health, justice and welfare scenarios remains somewhat limited


    Maintain Internet Freedoms, Except During Cyberattacks

    Representing a major threat to internet freedoms, shutdowns have become a popular lever for leaders seeking to limit social disruption or to repress their populations outright. Not only do they restrict the growth of internet economies and cut communities off from global communication and trade networks, shutdowns also vary in severity: from full network blackouts to the blocking of specific platforms (such as social media) to throttling, the deliberate slowing of connections to limit video sharing or disrupt communications.

    Nor should shutdowns be considered merely localised threats: they amount to international censorship and can delay foreign-policy responses to unfolding situations, undermining visibility, diplomacy and national security. Their global economic cost is also routinely underestimated; research by Top10VPN shows that 235 major internet shutdowns across 44 countries have cost the global economy $15.5 billion since 2019.

    The justifications (civil unrest, protests, damage to property) that regimes typically rely upon to impose internet shutdowns are, according to this survey, generally considered unacceptable, although acceptance levels vary significantly across the surveyed countries, reinforcing the need to preserve an open and global internet. Two exceptional scenarios do emerge, however: a cyberattack waged by a foreign nation and, to a lesser extent, the spread of misleading or fake news before an election. In both, we see higher acceptance of shutdowns, although this acceptance appears to relate more to the idea of “defending national sovereignty” than to outright objection to internet freedoms.

    Figure 4 – Only a cyberattack by a foreign nation would be considered by the majority of respondents as an acceptable reason for a temporary internet shutdown


    Fake News Triggers Acceptance of Shutdowns in India, Indonesia, Kenya and Nigeria

    Countries in which the internet has been previously restricted, including India, Indonesia, Kenya and Nigeria, are inclined to accept shutdowns more readily in order to prevent the spread of misleading or fake news – especially before elections. Even in countries accustomed to greater internet freedoms, such as Brazil and South Africa, there is a high rate of acceptance in this scenario.

    Figure 5 – Support for shutdowns, in the case of fake news before an election, tends to correlate with countries that have been subject to internet restrictions previously


    Emerging and Frontier Markets More Approving of Shutdowns in Response to Economic Disruption

    A net acceptance rate of 42 per cent among respondents in emerging and frontier markets (as defined by MSCI) indicates some support for shutdowns in response to protests that could disrupt the national economy, although Greece, Poland, Hungary and Turkey buck the trend with some of the lowest approval rates (between 18 and 22 per cent). This may be because citizens of emerging and frontier markets have not yet experienced the full benefits of a digital economy and therefore do not consider the internet an integral part of the national economy.

    Figure 6 – Emerging and frontier markets show greater support for temporary shutdowns in response to protests that could disrupt the national economy

    Government Expected to Play Central Role in Preventing Online Harms

    In addition to restrictions, online harms are another significant factor in measuring public trust in the internet and in technology more widely. From misinformation and disinformation to coordinated online abuse and harassment, as well as the online targeting of children, best illustrated by the leaked “Facebook Papers”, greater awareness of such harms is prompting governments to consider new regulatory frameworks. In May 2021, the UK published its Online Safety Bill while, a few months later, China’s internet regulators announced new policies, including on terms of use by minors and on harms caused by algorithms.

    Our survey reveals a strong belief that government should be held responsible for preventing online harms while also defending free speech online. On average, 63 per cent of people say the government has a great or fair amount of responsibility to stop the spread of fake news and hate speech, while 64 per cent believe government has a great or fair amount of responsibility to defend free speech online. In “partly free” countries (as defined by Freedom House), an average of 69 per cent of respondents believe government has a great or fair amount of responsibility to defend free speech, with Kenya (81 per cent), Greece (76 per cent), India (71 per cent) and Hungary (68 per cent) among the highest.

    In emerging economies this expectation is even more pronounced, which is especially relevant to rapidly growing digital ecosystems whose citizens have the most to gain from an open and free internet. Across economies, whether advanced or emerging, proportionate and flexible regulation to address online harms, such as our proposals for systematic, independent auditing of the operations of large technology firms and a new geopolitical settlement with the global technology industry, would help bolster online safety and improve public confidence.

    Figure 7 – Government should be held responsible for preventing online harms and protecting free speech according to citizens in “free” and “partly free” countries


    Italian and American Citizens Expect the Least From Government in Preventing Online Harms

    Within the G7, Italy and the US stand out for their attitudes towards governmental responsibility for addressing online harms. In the US, there was a remarkable gap (more than 13 percentage points) between the share who believe the government has a responsibility to defend free speech online and the share who believe it should prevent the spread of fake news. Fake news is clearly a polarising issue in the US: 71 per cent of Biden voters believe the government has a lot or a fair amount of responsibility for addressing it, compared with just 38 per cent of Trump voters. Nonetheless, it would be an oversimplification to assume that political affiliation is always a clear barometer of citizens’ views on this issue. In Italy, for example, educational level rather than political inclination is the notable factor: 49 per cent of citizens with less than primary or lower-secondary education believe the government has a lot or a fair amount of responsibility for stopping the spread of fake news, compared with 59 per cent of those with upper-secondary and post-secondary education and 65 per cent of those with tertiary education.

    Figure 8 – Among the G7, Italian and US citizens are least inclined to expect their governments to address the spread of fake news and hate speech


    Finally, on the specific question of preventing children from accessing inappropriate content online, respondents across our surveyed countries believe that individuals have the most responsibility, followed by large tech companies and then government.

    Figure 9 – There is broad global support for individuals taking the most responsibility to protect children from accessing inappropriate content online


    Overall, our research highlights the contrasts in perceptions of technology and how it should be used to improve different walks of life. Opportunities abound in emerging markets for AI technologies, yet internet freedoms are at their most fragile there. Meanwhile, citizens from across the globe expect their governments to both prevent online harms and defend freedom of speech online. To prevent differing regulatory approaches to AI and online harms from creating compliance loopholes, progressive governments should coordinate to achieve meaningful tech-policy oversight and regulatory frameworks. In a previous paper, we proposed that the UN, D10 (G7 plus Australia, India and South Korea) and specially designated tech firms establish a Multi-Stakeholder Panel on Internet Policy (MPIP), modelled on the Intergovernmental Panel on Climate Change, to oversee this ecosystem.

    Editor’s Note: Some data in charts and text may vary slightly due to rounding. Participants for the survey were selected from an online panel, which should be taken into account in responses to questions about online activities, particularly in countries with low levels of internet access. More information about the research and results can be found here.

    Find out more
    institute.global