From e-books and online forums to encrypted messaging applications and livestreaming platforms, extremist groups have consistently demonstrated a willingness to harness new technologies to amplify their messages, reach new audiences and coordinate activities. These innovations have allowed new types of communities to emerge, where ideological affinity overcomes a lack of physical proximity.
Internet-enabled technologies have provided an accessible, low-cost means to establish, engage and empower like-minded groups across divides. For most internet users, these new ways to connect and communicate have revolutionised how people do business, access news, engage in political activism, share stories, play video games and much, much more. However, the enduring challenge for policymakers and platforms alike is how to continue the innovation that allows these positives to flourish while minimising, if not eliminating, the negatives.
Driven by a combination of international pressure from governments and a realisation that their platforms were being actively used by extremists to amplify messages and reach new audiences, the world’s leading social media companies have taken a concerted, coordinated and committed approach to addressing extremism, and there is no doubt that this has made their platforms a more hostile environment for extremists.
Following the lessons learned from the Christchurch attack in March 2019, the Global Internet Forum to Counter Terrorism (GIFCT) worked with the government of New Zealand and Europol to develop the Content Incident Protocol to help platforms more efficiently and effectively remove attack-related extremist content being disseminated online.
In the months after the Christchurch attack, which the perpetrator livestreamed, Facebook announced an innovative partnership with law enforcement authorities in the US and the UK that uses bodycam footage recorded by armed officers to help develop technology to prevent the broadcast of such atrocities on its platforms.
A collaborative effort between Europol and Telegram, an encrypted messaging service popular among extremist groups, resulted in a massive purge of ISIS-linked accounts on the messaging app, disrupting the group’s propaganda distribution networks and communication capabilities, albeit temporarily.
All of these developments have taken place in the last 12 months alone, and the speed with which technology companies are evolving their responses to terrorist and extremist use of the internet is remarkable. But despite the progress that has been made, the fight against online extremism is far from over: extremists, too, continue to innovate and find new ways to operate online.
Two terror attacks carried out by far-right extremists last year, in Christchurch and Halle, saw the perpetrators livestream footage of their assaults. Improvements in mobile data infrastructure mean that attackers can share their acts of violence with followers unedited, unredacted and in real time from a smartphone.
This evolution in extremists’ use of the internet should be a cause for concern for all online platforms with broadcast functions. The Christchurch mosque attacker used Facebook Live to broadcast his attack, while the Halle synagogue shooter used the Amazon-owned video-game streaming platform, Twitch. Both platforms have since introduced measures to stop such actions being repeated.
Elsewhere, there remains the challenge of legacy extremist and terrorist content on mainstream platforms like Facebook and YouTube. Extremist material related to internationally proscribed terrorist organisations and prominent terrorist recruiters from different geographic contexts, including Syria, Nigeria and the Western Balkans, remains available and accessible online.
The continued existence of such materials on prominent platforms highlights that while the fight against extremist content online may have progressed, blind spots remain. While extremist content in Arabic and English has received considerable attention and scrutiny, content in other languages, such as Bosnian, Urdu or Kanuri, has seemingly received far less.
Another shift that governments will need to contend with is the resurfacing of forums and platforms that have already been closed down. Previously, companies were engaged in a battle against extremist accounts that would constantly regenerate and reappear on the same platform in a different guise after being banned. Now the same dynamic applies to entire platforms and ecosystems, not just individual accounts.
Where platforms associated with extremist groups and implicated in terror attacks, such as 8chan and Stormfront, have been taken down by their hosts, the result has not been their demise but their displacement: they have re-emerged in altered forms and with new hosts. Pushing extremists to the fringes of the internet, away from mainstream users, could be a positive, but it presents a different set of challenges for law enforcement, intelligence agencies and civil society.
The internet is a vast terrain whose landscape is continually evolving. The challenges are numerous, but the opportunities are immense. It is only by learning from past failures and current successes, and by creating actionable, scalable strategies, that policymakers can develop sustainable responses to online extremism that get ahead of the issue rather than chasing after it.
Born out of a tragic loss of life, the Christchurch Call is a powerful, positive statement of intent by over 50 countries, companies and organisations. It includes clear commitments from governments to ensure that existing applicable laws are appropriately enforced and that regulatory frameworks consistent with a free, open and secure internet are pursued. The sentiment and substance of the Christchurch Call are appropriate and necessary, but they must be backed up with action.
Meanwhile, signatories from the tech industry have pledged to ensure consistent application of community standards and terms of service, and to review algorithmic processes that could drive users to extremist content. For the larger social media companies to consistently enforce applicable legislation and their own community standards, and to proactively detect extremist activity on their platforms across different cultural, linguistic, political and geographic contexts, innovative, scalable technology-based measures must be complemented by investment in human expertise.
There should be no blind spots or places to hide. Violating content and accounts should be investigated and addressed regardless of language or territory. If a platform, small or large, seeks to operate in a certain linguistic or geographic context, it must be able to demonstrate the ability to identify illegal activity, detect policy violations and enforce community standards. That is the responsible thing to do.
Policymakers are right to be taking the issue of online extremism seriously, especially when the consequences of indifference or inaction can be deadly. But it is important that a comprehensive and coordinated approach is taken. Impulsive, reactionary policymaking may bring short-term wins but could have serious long-term implications.
A comprehensive approach must address online extremism not in isolation but holistically. From election interference and encryption to self-harm and hate speech, there are many inter-related tech policy areas that need to be addressed, and they should be tackled consistently rather than piecemeal.
While a more joined-up approach by governments to online extremism and other online harms may seem like wishful thinking, ensuring that the internet remains free, safe and secure for all users requires greater coordination and alignment on regulation and policy.
As policymakers in the UK progress with work on the Online Harms White Paper, European counterparts work on the Digital Services Act, and discussions around platform liability intensify in the US, there is an opening for a broader international political dialogue to address shared challenges facing governments and to work towards consistent regulatory and legislative responses.
Online extremism represents one part, albeit a significant one, of a wider set of online harms that governments and technology companies must work to address. But the international momentum that has developed in the aftermath of the Christchurch attack to tackle online extremism cannot be lost; it must serve as an impetus for collective, considered action.