

What next for online harms and regulation?


Commentary | 29th November 2019

The debate over online harms continues to preoccupy politicians and policymakers across the world. As their interest has grown, so has their willingness to intervene, and the discussion has moved on to regulation. Politicians have finally got out their toolboxes and decided to start fixing.

We’ve been thinking about the changing impacts of technology on society, and what a framework for positive, inclusive policy solutions should look like. This post explores some of the key principles we think matter for a sensible approach to regulation and online harms, along with some of the questions we think need further attention in the months ahead.  

Big tech regulation needs a new approach

We interact with services like YouTube, Twitter, Facebook, Instagram, Snapchat and TikTok more continuously and more deeply than any media that has come before. But the rules of engagement are set entirely by the companies, not by society through democratically elected governments. 

This important responsibility should not be delegated. To move this policy debate on, all sides must recognise that society has a stake in the health of our online environment and therefore the governance of online and social media.  

In our big tech report we set out what a next generation regulator should look like. It would have a three-part remit:  

  • Ensure tech companies take their responsibility seriously.  

  • Put more power into the hands of consumers.  

  • Rewrite obsolete rules for the Internet age. 

To tackle harm online properly, there are some principles of good technology regulation that we think need attention from politicians and policymakers: 

1. Start with the basics: Independent statutory regulation and better regulatory principles  

Governments and society need to get past the question of whether online content platforms should be regulated in law; there is a growing consensus that regulation is necessary, with appropriate protections. The UK Online Harms White Paper is the most detailed proposal so far and a strong foundation for a new statutory framework.  

Simply having a statutory regulator in place changes the incentives for the regulated companies, especially where there is a clear divergence between commercial interests and the wider public interest.

If the corporate incentives of businesses are not aligned with the public good, self-regulatory regimes are unlikely to be entirely effective, and citizens and users are left with no effective recourse.  

Good regulation includes a range of activities from standard setting through to powers of investigation and enforcement.  Internal governance, industry codes of conduct and public promises are very important activities for platforms to participate in, but they should not be confused with formal independent regulation. 

Regulation should define illegality and enforce the legal limits of acceptable online content, but it also needs to be flexible enough to hold platforms to account for broader standards. Given the pace of innovation online and the rapidly changing nature of online communications, regulatory standards should not be prescriptive: they should set the outcomes and assess the processes put in place to achieve those outcomes.

Strong regulatory principles and expert regulators ensure the system can take evidence-based, balanced decisions, independent of political or industry capture. The public ought to have a stake in how those rules are measured, updated and enforced. Better-regulation principles may be boring, but they keep the system adaptable and workable, and they balance competing interests. 

2. It’s about behaviours, amplification and discovery, not just content takedown. Amplification and recommendation can be just as harmful as hosting (if not more so)  

Making sure that there is never any harmful content of any kind on social media is, unfortunately, impossible. But even if it were possible, removing that content would not mean the harm had stopped - it may create perverse incentives, push the behaviour into private spaces or leave the underlying social ill unaddressed.  

Illegal content certainly requires a rigorous takedown model; for example, child sexual abuse material should be removed everywhere without delay. Images and videos are a permanent record of abuse, and each view revictimises the child.  

Some good models of international cooperation already exist, along with technological innovations such as PhotoDNA, a technology shared among tech companies that allows them to identify and take down images on upload, without human reporting. 
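PhotoDNA itself is proprietary and its details are not public, so as a minimal sketch of the general mechanism - fingerprint each upload, match it against a shared database of known hashes, no human report required - here is a simplified stand-in using a basic average hash. The hash function, threshold and names below are illustrative assumptions, not PhotoDNA's actual method, which is far more robust to cropping and re-encoding:

```python
# Illustrative only: a crude stand-in for perceptual-hash matching.
from PIL import Image

HASH_SIZE = 8          # 8x8 grid -> 64-bit fingerprint
MATCH_THRESHOLD = 5    # max Hamming distance to count as a match (hypothetical)

def average_hash(path: str) -> int:
    """Reduce an image to a 64-bit perceptual fingerprint."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    # Each bit records whether a pixel is brighter than the image's mean.
    bits = 0
    for px in pixels:
        bits = (bits << 1) | (1 if px > mean else 0)
    return bits

def matches_known_database(upload_hash: int, known_hashes: set[int]) -> bool:
    """Flag an upload whose hash is near any hash in the shared database."""
    return any(bin(upload_hash ^ known).count("1") <= MATCH_THRESHOLD
               for known in known_hashes)
```

On upload, a platform computes the fingerprint and blocks or reports the file if it matches the shared database, before any user ever sees or reports it.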

But this approach should not automatically be applied to all harmful content. A focus on the speed of takedown for all content creates the wrong incentives for tech companies, where the outcome is either over-zealous takedown or ‘head-in-the-sand’ inaction to avoid culpability or political and reputational risk. 

A better way to look at some harms is to understand what is facilitated, accelerated or amplified by technology, and to develop a positive model for how technology can be used to tackle those harms.  

It is important that ‘amplification’ is front and centre in analysis. Tech platforms are very active participants in shaping their users’ experience. Their tools drive behaviours and can make the difference between harmful content appearing in isolation or at scale. 
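To make that concrete, consider a toy ranking score. Everything here - the fields, weights and example numbers - is invented for illustration, and real ranking systems are vastly more complex; but it shows how a ranker optimised purely for engagement amplifies borderline content by default, and how reach, rather than removal, can be the lever:

```python
# Hypothetical sketch: amplification as a ranking choice, not a hosting choice.
from dataclasses import dataclass

@dataclass
class Post:
    predicted_engagement: float  # model's estimate of clicks/shares (0-1)
    borderline_score: float      # model's estimate content is "grey area" (0-1)

def engagement_only_score(post: Post) -> float:
    # Pure engagement optimisation: provocative borderline content,
    # which often engages strongly, rises to the top by default.
    return post.predicted_engagement

def harm_aware_score(post: Post, penalty: float = 2.0) -> float:
    # Same content stays on the platform, but the ranker stops
    # actively amplifying it as borderline_score rises.
    return post.predicted_engagement * (1 - min(1.0, penalty * post.borderline_score))

feed = [Post(0.9, 0.8), Post(0.6, 0.0), Post(0.4, 0.1)]
print(sorted(feed, key=engagement_only_score, reverse=True))  # borderline post first
print(sorted(feed, key=harm_aware_score, reverse=True))       # benign posts first
```

The design choice in the second function is the point: whether harmful-but-legal content appears in isolation or at scale is decided by the platform's own tools.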

3. Centre the victims of harm online in your policy and analysis  

Marginalised and vulnerable groups are often the victims of some of the worst harms online, but their experiences are also marginalised in policy in favour of macro philosophies of expression.  

For example, the experience of trans people on social media provides difficult but valuable insight into broader user safety. The ability of users to coordinate harassment online is an unintended consequence of the tools given to users to better connect with others and to share content. De-escalating ‘unhealthy’ behaviour early on, before it leads to hate speech and threats of violence, is a good focus – the digital equivalent of a time-out or the naughty step.  

It is vital that those building and designing technology, and those creating policy, recruit people with a diverse array of lived experiences. It is then a continual responsibility to actively seek out, listen to and include a wide range of experiences in development and analysis. 

Looking out for the most vulnerable users is an important responsibility that sets the tone for the broader debate. This requires policymakers to be sophisticated in their thinking - we cannot and should not constrain every aspect of the online world to the point of treating adults like children.  Instead it means recognising, developing, prioritising and incentivising tools that can be deployed to help those who are continually victims of harm.  

4. Don't just think about illegal and harmful content, think about the grey areas and the building blocks of extreme behaviours and harm. 

A recent TBI report on the hate narratives of the far right set out the connected narratives and ideologies that link seemingly innocuous expressions of dissatisfaction with society to incitements to violence. Investigating the ‘building blocks’ of harm, such as anti-social behaviour and divisive narratives, is an underexplored area for policy solutions. Waiting until there is an incitement to violence, or violent harm itself, is too late. 

Grey areas appear across a range of content harms. For example, depressive black and white memes or hashtags on Instagram could be building blocks of more dangerous suicide-encouragement behaviour. Content can also have a harmful effect in aggregate and draw people into spirals, potentially much more risky than a single incident of exposure to explicit content.  

Requiring rapid takedown of ‘grey-area’ content is not likely to be appropriate, as the content is not, by itself, harmful, and requiring takedown interferes too much with freedom of expression. In the case of suicide and self-harm, content can be both potentially dangerous to others and at the same time a signal that a person needs help from their friends and community.   

Platforms should seek to avoid pushing people into these harmful spirals and accelerating people through risky pathways via recommendation or amplification tools. And governments need to be clear about the standards they are holding platforms to. They should set transparent regulatory frameworks rather than rely on take-down notices where the illegality of content is not completely clear.  
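As a hypothetical sketch of what detecting "harm in aggregate" could look like - the window size, threshold and names here are our own assumptions, not any platform's actual practice - a recommender could track the concentration of grey-area content in a user's recent feed and respond when it crosses a limit, without taking anything down:

```python
# Hypothetical heuristic: content harmless in isolation can be harmful
# in aggregate. Flag a session when grey-area items dominate a user's
# recent recommendations, then diversify rather than remove.
from collections import deque

WINDOW = 20                # recent items considered (illustrative)
CONCENTRATION_LIMIT = 0.4  # max share of grey-area items (illustrative)

recent_items: deque = deque(maxlen=WINDOW)  # True = grey-area item

def record_and_check(is_grey_area: bool) -> bool:
    """Record one recommended item; return True if the session should be diversified."""
    recent_items.append(is_grey_area)
    concentration = sum(recent_items) / len(recent_items)
    return concentration > CONCENTRATION_LIMIT
```

When the check fires, the proportionate response is to widen recommendations and, in the suicide and self-harm case, surface support resources - not removal, which interferes with expression and can silence a call for help.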

5. Algorithms are processes and recipes. Think about how the whole system needs addressing. 

Algorithms, AI and machine learning are built up as the magic of the digital economy. They are often shorthand for new technology that isn’t fully understood. For example, reducing Google search to the label ‘search algorithm’ at once oversimplifies a vastly complex construct and mystifies something that should be accessible to political and social science.  

It’s also clear that you shouldn’t need a computer science degree to comment on the social implications of technology. A useful way to think about these systems, and where to influence them, is to break them into their component stages: the data that goes in, the models and rules applied to it, and the outputs that are ranked and delivered to users. 
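As a purely illustrative decomposition - the stage names and functions below are invented, not any platform's real architecture - a recommender system can be read as a recipe with four stages, each a separate place to ask questions:

```python
# Purely illustrative: "the algorithm" as a pipeline of distinct stages,
# each embodying choices that can be scrutinised separately.

def collect_signals(user_id: str) -> dict:
    # Stage 1 - inputs: what data is collected, and with what consent?
    return {"watch_history": [], "follows": [], "location": "unknown"}

def generate_candidates(signals: dict) -> list:
    # Stage 2 - candidate pool: what content is even eligible to be shown?
    return ["post_a", "post_b", "post_c"]

def rank(candidates: list, signals: dict) -> list:
    # Stage 3 - ranking: what objective is optimised (engagement? safety? diversity?)
    return candidates

def deliver(ranked: list) -> list:
    # Stage 4 - delivery: how much, how often, with what labels or friction?
    return ranked[:10]

def recommend(user_id: str) -> list:
    signals = collect_signals(user_id)
    return deliver(rank(generate_candidates(signals), signals))
```

Debating ‘the algorithm’ as a single black box obscures the fact that each of these stages embodies distinct, inspectable choices - which is precisely what makes them accessible to policy and regulation.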

6. Freedom of speech can and must be protected through regulation. The debate doesn’t have to be: innovation & freedom of speech vs. safety and control 

Freedom of expression needs protection and cultivation in society to thrive. The unconstrained, shout-the-loudest culture found in the worst corners of social media is a strong justification for working together to build new mutual systems of trust and respect in society. To do that we need law to define our obligations as citizens as well as our rights.  

Global standards and global cooperation are important, but the risk of censorship in authoritarian parts of the world does not preclude proportionate regulation elsewhere, where there are democratic and constitutional protections. On the contrary, proportionate regulation is a diplomatic tool for liberal democracies to promote freedom of speech around the world.  

Arguably one of the most famous pieces of constitutional law is the First Amendment to the US Constitution, which protects the right to freedom of speech. Clearly laws can limit freedom of expression, as we’ve witnessed in many totalitarian states, but they are also vital in protecting it. 

By artificially setting up a battle between freedom of expression and regulation, commentators (and sometimes tech companies themselves) are painting policymakers into a false choice. Freedom of speech can be written into law and regulation; by defining it and its bounds, online and offline, it can be much better protected.

Looking ahead

Taking these principles head on, in the months ahead we’ll be delving into specific aspects of the content regulation debate to come up with recommendations for governments and tech companies about how better to mitigate harms online. 

Some of the questions we’ll be thinking about include: 

  1. What is a good model to define and measure ‘platform health’? How can we create a model of regulation that is supportive, flexible and effective? 

  2. What can fandoms, rabbit holes and recommendation engines tell us about the best ways to prevent vulnerable people moving through escalating pathways of harm online? 

  3. What can we learn from other models of regulation such as financial services and audit to build a well-balanced regulatory framework? 

  4. Given that access to information is moderated across our internet and communications services (often without our knowledge), what can we do to introduce new scrutiny of the moderation of our access to information online, beyond social media? 

If you have thoughts on any of these – or the other topics touched on in this post – then we’d love to hear from you: m.beverton-palmer@institute.global
