Tech Companies and the Response to White Supremacist Content Online

Posted on: 20th March 2019
Mubaraz Ahmed
Senior Analyst

The devastating attack on Muslim worshippers in Christchurch, New Zealand, on 15 March was heinous not only as an act of violence. The live streaming of the bloody assault also ensured that millions around the world could witness its brutality.

Subsequently, the footage of the terrorist attack was circulated across a plethora of online platforms. Facebook said it removed some 1.5 million videos of the attack in the first 24 hours. Given the extensive replication of the footage on other platforms and its wide distribution on messaging apps, this figure, although significant, is only the tip of the iceberg.

In response, UK Home Secretary Sajid Javid and others have called on tech companies to drastically improve their response to the proliferation of such content. In particular, the home secretary said, “Online platforms have a responsibility not to do the terrorists’ work for them.” Echoing the language adopted by his colleagues, Javid also threatened firms by saying that “tech companies who don’t clean up their platforms should be prepared to face the force of the law”.

The home secretary is treading the same ground his predecessor trod in response to Islamist terrorist content online. But the absence of clear definitions, and of political will from some leading players, renders his calls futile. While tech companies do have responsibilities, eliminating far-right terrorist content online requires policymakers to decide what should be removed. An international coalition of the willing would be an important first step against this transnational threat.

No Concerted Effort Against Far-Right Content

The international campaign against terrorists’ use of the Internet emerged largely after the proliferation of online content by ideologically motivated groups such as ISIS and al-Qaeda. As a result, governments and tech firms rightly made this specific, well-defined form of terrorism the priority. The culmination of this pressure was the creation of the industry-led Global Internet Forum to Counter Terrorism (GIFCT), formed by Facebook, Twitter, YouTube and Microsoft.

These firms have made significant progress against terrorist online content—but this action has been primarily against ISIS and al-Qaeda material. The strides made by tech companies in enhancing, expediting and automating the response to Islamist terrorists’ use of the Internet have been underpinned by definitional clarity, political will and international consensus. Tech firms have developed efficient, consistent and relatively accurate systems that use machine learning and artificial intelligence to uncover Islamist terrorist content and online networks internationally. These efforts have been based on an internationally recognised UN list of proscribed terrorist groups and people.

Far-right and white supremacist terrorism, by contrast, has not generated nearly as much international attention or consensus. As a result, there has been no coordinated, concerted, global effort against ideologically motivated extreme right-wing content. This has given the public and policymakers false expectations about how readily the full range of terrorist content can be identified and removed.

The GIFCT has played an active role in coordinating the tech response to the circulation of terrorist content relating to the Christchurch attack. The forum has shared the digital fingerprints of over 800 visually distinct videos and provided the context for enforcement steps that industry partners can take. However, if such approaches are to move beyond reactive measures, politicians must provide the type of clarity that has yielded considerable results in the fight against ISIS and al-Qaeda content.
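The mechanics of hash-sharing help explain why over 800 separate fingerprints were needed for one attack video. A minimal sketch of the idea, using an exact-match cryptographic hash for illustration (function names here are hypothetical; industry systems such as GIFCT's database rely on perceptual hashes, which tolerate re-encoding and edits in a way the digest below does not):

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a cryptographic digest of the raw bytes (illustrative only)."""
    return hashlib.sha256(data).hexdigest()

def is_known(data: bytes, known_hashes: set) -> bool:
    """Check an upload against a shared database of known digests."""
    return fingerprint(data) in known_hashes

# A shared database seeded with one known video's digest.
original = b"frame-data-of-known-video"
known = {fingerprint(original)}

print(is_known(original, known))         # True: an exact re-upload matches
print(is_known(original + b"!", known))  # False: any edit changes every bit
# of the digest, which is why each visually distinct variant of the
# Christchurch footage needed its own fingerprint in the shared database.
```

The gap between exact and perceptual matching is precisely what adversarial re-uploaders exploit, and why hash-sharing alone remains a reactive measure.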

This is not an attack on tech firms. The industry may have been slow to react in the past and may still make errors, but firms are taking greater responsibility and are more mindful of the unintended harms associated with their platforms. Tech companies are not responsible for deciding what constitutes terrorism. That is for politicians and governments.

The Need for Global Consensus

International consensus is needed to first recognise that white supremacist and extreme right-wing terrorism is a transnational threat that requires a global response. A coalition of the willing, bringing together countries that deal with the same problem, would recognise the scale of the challenge and demonstrate a willingness to address it in its entirety.

The Five Eyes intelligence and information-sharing alliance, which comprises Australia, Canada, New Zealand, the UK and the US, is considered the most complete and comprehensive intelligence alliance in the world. At the same time, each member country deals with its own domestic manifestations of the transnational extreme right-wing threat. The alliance has previously used its influence to exert pressure on tech firms in the face of Islamist terrorist activity online, so there is potential for similar impact on far-right extremism.

Similarly, the European Union has shown its ability to force tech companies to be more efficient and transparent in acting against illegal and harmful content online. EU directives carry the potential for harsh sanctions for non-compliance while making it abundantly clear to platforms what is deemed a violation. Given the geographical spread of the extreme right-wing threat, an alliance between the EU and Five Eyes would be a strong platform to lead the fight against transnational right-wing extremism worldwide.

Building a coalition of the willing should serve as both a precursor and an impetus for some form of definitional clarity. If international partners can come together in recognition of the threat, they can work towards a shared definition of what is unlawful and what is not. By doing so, tech firms will have an internationally recognised articulation of the problem as defined by democratically elected representatives.

Only then can politicians expect far-right online content to be met with a response similar to that mounted against Islamist terrorist content. Vague threats by politicians towards the tech industry do little to address the fundamental challenge of ensuring terrorists do not exploit social-media platforms. Harnessing the advanced automated tools and capabilities developed by the likes of Facebook, Twitter and YouTube to remove extreme right-wing terrorist content will be possible only when policymakers, not tech companies, have decided what they want removed.

The Christchurch attack and the subsequent circulation of materials on social media are a stark reminder that governments and tech firms need to recognise the transnational and online dimensions of far-right and white supremacist movements. Graphic content or incitement to violence—whether far right, Islamist or inspired by any other ideology—has no place on social media. In the face of transnational threats and globally connected societies, the political and technological responses must be better aligned.