
Geopolitics & Security

Why 2019 Will Be the Year of the Online Extremist, and What to Do About It


Commentary | 10th January 2019

It is an iron law of technology that outsiders are early adopters. As long ago as the mid-1980s, Louis Beam of the Texas KKK spotted that networked computing would be a boon for the movement and set up a bulletin board system. For most of the 2000s, the far-right British National Party had the most active and best-designed website in UK politics. (Back in 2013 it was the first party to gamify its website, offering prizes for mentioning keywords in posts to drive up engagement.) The same is true of al-Qaeda and ISIS—whether it’s producing slick propaganda magazines or hijacking Twitter hashtags, extremist movements are like start-ups: agile, fast and creative.

It’s easy to understand why. In addition to being highly motivated, extremists see in every technology—from the radio to the smartphone to the dark net—new ways to circumvent the establishment, reach new audiences and avoid the authorities. The Internet is an especially valuable propaganda tool because of a simple dynamic: it is still easier to upload something than to knock it offline. No matter how many moderators Facebook employs, it will never match the number of extremists plotting to evade the platform’s spam filters and content reviewers.

Whatever new technologies entrepreneurs dream up, extremists will pick them up quickly and use them in unpredictable ways to spread hate. This dynamic won’t change, because it’s not about technology but about motivation, opportunity and incentive. Policymakers should respond smartly with large, strategic interventions in areas where counter-measures will have the greatest impact.


A Difficult Year Ahead

Pressure will mount on platform companies in the next 12 months, even as they work to tackle the problem. Facebook and Twitter, in particular, have recently invested heavily in tackling both extremism and fake news, and both have enjoyed considerable success that has been mostly overlooked. Facebook, for example, “took action” on 1.9 million pieces of ISIS- or al-Qaeda-linked content in the first three months of 2018, mainly using technology-driven detection tools. Facebook boss Mark Zuckerberg recently reported that the company’s beefed-up content-moderation teams now review 2 million pieces of content a day, using a mix of manual and automated systems. But even if they could achieve a 98 per cent success rate—which would be superhuman—that would still equate to 40,000 daily errors.
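The arithmetic behind that figure is worth making explicit, because it drives the whole argument about scale. A minimal sketch (the 2-million-a-day figure is from the article; the accuracy rates are illustrative):

```python
# Back-of-the-envelope arithmetic behind the "40,000 daily errors" figure:
# at 2 million reviews a day, even very high accuracy leaves a large
# absolute number of mistakes.

DAILY_REVIEWS = 2_000_000  # pieces of content reviewed per day

for accuracy in (0.95, 0.98, 0.999):
    errors = DAILY_REVIEWS * (1 - accuracy)
    print(f"{accuracy:.1%} accuracy -> {errors:,.0f} errors per day")

# 95.0% accuracy -> 100,000 errors per day
# 98.0% accuracy -> 40,000 errors per day
# 99.9% accuracy -> 2,000 errors per day
```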

Some of those errors, inevitably, will be extremist content that stays online when it should be removed. Newspapers around the world increasingly see the big tech platforms as competitors for online advertising revenue and are therefore minded to run stories that criticise them. The result will be more “Facebook/Google fails to tackle hate” stories throughout 2019, creating extra political and commercial pressure.

Recent successes, however, mask longer-term difficulties coming in 2019, which must inform policymakers’ response. First, there will be growth in the quality, quantity and usability of privacy-enhancing and hard-to-censor online tools. One side effect of growing public concern about data use and commercial web tracking will be a surge in software that protects user privacy. This software will be built for journalists, whistle-blowers and ordinary citizens, but it will be picked up by extremist groups, which will use it to frustrate the authorities.

Observers can also expect to finally see commercial applications of blockchain technology, after several false starts and overblown promises. One use of this distributed-ledger technology will be decentralised, hard-to-censor social-media and broadcast platforms. A blockchain social-media platform would be untouchable: no government could edit or remove hate speech, illegal images or terrorist propaganda unless the whole network were somehow vaporised. In 2019 the police will look back fondly on the era when big tech companies could at least be compelled to follow the law. With decentralised blockchain networks, legislators might as well pass laws to change the orbit of the moon.
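To see why such a platform would be untouchable, consider how a hash chain works: each entry commits to the one before it. The toy sketch below (plain Python, not any real platform’s protocol) shows that rewriting a single post breaks every subsequent hash, so every node in the network can detect, and refuse, the edit:

```python
import hashlib

# Minimal hash-chain sketch: each block commits to the previous block's
# hash, so altering or deleting an earlier post invalidates everything
# that follows -- the property that makes takedown orders unenforceable.

def block_hash(prev_hash: str, content: str) -> str:
    return hashlib.sha256((prev_hash + content).encode()).hexdigest()

posts = ["first post", "extremist propaganda", "third post"]
chain = []
prev = "0" * 64  # genesis value
for content in posts:
    prev = block_hash(prev, content)
    chain.append((content, prev))

# A censor rewrites the second post but keeps the recorded hashes...
tampered = list(chain)
tampered[1] = ("[removed]", tampered[1][1])

# ...and recomputing the chain exposes the edit: hashes no longer match.
prev = "0" * 64
for content, recorded_hash in tampered:
    prev = block_hash(prev, content)
    print(content, "OK" if prev == recorded_hash else "TAMPERED")
```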


Another threat looms on the supply side. Late 2017 saw the rise of so-called deep fakes: face-swapping algorithms that allow campaigners to put words into their opponents’ mouths. A recent report on AI and security threats warned that AI-enabled, high-quality forgeries may challenge the seeing-is-believing quality of video and audio evidence. One likely development is the automated machine generation of content; large companies have already shown signs of researching the automatic generation of advertisements. It is inevitable that extremist groups will look for ways to use these technologies to create large volumes of more emotive and believable content.

On balance, these tools—especially privacy-enhancing software—are good for citizens and the health of democracies. The authorities should therefore not attempt to ban or wreck them, citing national security concerns. But these tools will mean that illegal material may be easier to produce and tougher to remove, and the perpetrators harder to identify.

A Three-Pronged Response

Faced with these trends, policymakers should adopt a response built on three elements. The first is to work out what useful role technology can play. Using AI to automatically spot extremist content will continue to be important, but it will never be enough on its own to deal decisively with a problem this complex. Technology will, however, be valuable for spotting deep fakes, by automatically identifying the tell-tale signs of inauthentic video and audio files.

Investment in these counter-measures should therefore be scaled up now. Where possible, this should be a partnership between companies and governments, because the commercial sector will have both the data sets and the capabilities to lead the way—not to mention an economic incentive, because deep fakes on their platforms will be bad for business. Governments can then play a useful coordinating role. A good example is PhotoDNA, developed in the private sector and used to identify illegal images of children: it builds a database of ‘fingerprints’ of known images, which several companies use to automatically spot and remove matching copies.
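PhotoDNA itself is proprietary, but the fingerprint-database pattern can be illustrated with a much simpler perceptual hash. The sketch below (a basic “average hash”, not PhotoDNA’s actual algorithm; the file names and threshold are hypothetical) hashes known images once, then matches uploads even after resizing or minor re-encoding:

```python
from PIL import Image  # requires the Pillow library

# Simplified analogue of fingerprint-based matching: an 8x8 "average
# hash" that survives resizing and minor re-encoding, checked against
# a database of hashes of known images.

def average_hash(path: str) -> int:
    # Shrink to 8x8 greyscale, then set a bit for each pixel brighter
    # than the mean -- a coarse but robust 64-bit fingerprint.
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits between two fingerprints.
    return bin(a ^ b).count("1")

# Hypothetical database of fingerprints of known illegal images.
known_hashes = {average_hash(p) for p in ["known_1.png", "known_2.png"]}

def is_known(path: str, threshold: int = 5) -> bool:
    # Flag an upload if its fingerprint is within a few bits of any
    # known image, tolerating small edits.
    h = average_hash(path)
    return any(hamming(h, k) <= threshold for k in known_hashes)
```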


Second, given the dynamics outlined above, governments must accept that they will never be able to remove extremist content entirely. The approach therefore needs to be smart: large strategic interventions rather than constant whack-a-mole and demonisation of the platforms. One example is the largely successful way the police have dealt with dark-net markets. Rather than trying to remove these sites entirely, several police services around the world have collaborated to infiltrate them and, at carefully chosen intervals, make large-scale arrests and takedowns. This has sown doubt among the criminals operating on the dark net. And by watching closely how they respond to arrests, the authorities have learned more about their tactics, strengths and weaknesses.

Finally, it is vital to keep a tight and limited definition of extremism, which is already a contentious and contested term. Much of the current approach has been driven by confronting Islamist and far-right groups, but in this period of political turbulence new forms of extreme politics will emerge: far left, anti-technology, direct-action environmentalist and extreme libertarian. It will be tempting, but mistaken, to lump them all together as part of the same problem and to censor or remove any troubling or radical idea. Calls to ban an ever-wider set of controversial ideas should be resisted: besides being difficult in practice, growing censorship would polarise politics further, drive radicalisation and increase the size of the problem.

Inadvertently enlarging the problem in this way would make it harder to target extremism strategically, leaving less time and fewer resources to develop new technology for emerging threats. Given the scale of the challenge, that is something policymakers can ill afford.

The views of the author do not necessarily represent those of the Tony Blair Institute for Global Change.

