Fools Rush in Unprepared: What Lawmakers Can Learn from the Uber Files

Commentary
Posted on: 1st August 2022
By Oliver Large (Senior Policy Analyst) and Rhea Subramanya (Policy Lead - Economist, Internet Policy Unit)

Data leaks, and the troubling revelations that they offer, are nowadays both shocking and unsurprising. This month, a tranche of 124,000 files exposed Uber’s secret activities in 40 countries between 2013 and 2017. These so-called “Uber Files” detail confidential messages and emails between the platform’s senior executives, shining a light on how the company lobbied, paid, and elbowed its way into new markets. The files show how our laws and regulations struggle to keep pace with technological innovation – with the result that the downsides of disruption go unmitigated for too long.

There are three key takeaways from the Uber Files. First, Uber was a new technology that established itself before governments had time to understand and respond to it. Uber often entered markets where it was illegal or where ridesharing apps and platform workers were not yet regulated. This highlights the near impossibility of establishing a new business model in many markets, where regulation can often favour incumbents.

Second, politicians actively embraced this new platform without a full understanding of its consequences. Uber’s scalable model combined low barriers to entry and earning opportunities for workers with clear benefits like easy access for consumers. However, policymakers failed to grasp the significant impacts of the platform labour model. And by ignoring the potential negative effects of disruptive technologies like Uber, governments likely exacerbated citizens’ mistrust in tech and government.

Third, apart from the systemic sluggishness, some of Uber’s attitudes and practices were clearly wrong (and the company has admitted as much). In one exchange, co-founder and then-CEO Travis Kalanick responded to concerns that Uber drivers could be exposed to violence during taxi protests in France by saying “violence guarantee[s] success.” This callous attitude towards workers is hard to defend and casts a shadow over the industry; but it also raises the question of how innovative companies that move fast but don’t break things can be accommodated.

Uber, once the most valuable start-up in the world, is the poster child of the platform economy. In over 10,000 cities across the world, customers can book a car, order food and send a package on a single app. The $43 billion company revolutionized how millions of individuals work and travel. Copycat business models sprang up and start-ups pitched themselves to investors as the ‘Uber of’ their industry.

It epitomized Silicon Valley’s ethos of breakneck disruption. However, the revelations from the Uber Files raise the question: how can we respond quickly to new technologies while still realising their power to change people’s lives for good?

When new technologies enter a market, regulators need to be equipped to understand and forecast their impact on the labour market, economy and society. The Collingridge Dilemma describes regulators’ struggle to understand nascent technology and their difficulty controlling it once the technology becomes established. For instance, while governments encouraged platforms like Uber to enter their markets, those platforms initially offered drivers few benefits and protections, such as sick pay and insurance – leaving drivers to foot the bill when they fell ill or were injured and took time off work.

On the firms’ side, digital labour platforms should not be allowed to insulate themselves from oversight. The Uber Files document ‘kill switches’ that the company used to shield confidential information from police in at least six countries from France to India. Without access to data, authorities were forced to fly blind. Even laptops that were confiscated before the kill switch had been activated were subsequently ‘locked’.

None of this means that governments should engage less with platforms. Instead, there are lessons here for how policymakers and regulators can approach this challenge.

First, they need to better forecast the risks of emerging technologies and business models, which means investing in skills and knowledge to understand the technologies themselves. Greater public-private engagement could also lead to better regulatory outcomes, provided this is done transparently.

Second, regulators should monitor and, where necessary, test new business models. For high-risk sectors, regulatory sandboxes can provide a controlled space for new business models and technologies to be understood by agencies.

Third, public sector teams should be more empowered and less centralised. For example, governments could experiment with complementary briefs and powers delegated to different departments and agencies. This can reduce the number of single points of ethical failure, such as what happened in France. It can also prevent efforts to influence the most senior representatives in government if there are “entry points” lower down the hierarchy.

Fourth, regulation needs to be more agile, flexible and adaptable. One way to get there is to digitise paper-based processes like rulemaking. If computers can process rules (i.e. read, understand and execute them), it will have both pro-regulatory and pro-business effects. Firms will be able to see where they are compliant and where they are in violation. It can also enable faster and better regulatory intervention using technologies like artificial intelligence, natural language processing and machine learning.
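To make the “rules as code” idea concrete, here is a minimal sketch of what a machine-executable rule could look like. The rule itself – a cap on surge pricing and a per-kilometre fare ceiling – is entirely hypothetical, as are all names and thresholds; it is not drawn from any real regulation, only an illustration of rules a computer can read and execute.

```python
from dataclasses import dataclass

@dataclass
class Trip:
    distance_km: float
    fare: float
    surge_multiplier: float

# Hypothetical rule parameters (illustrative assumptions, not real regulation):
# surge pricing may not exceed 2.0x, and the fare may not exceed a base rate
# of 2.50 plus 1.20 per kilometre, scaled by the surge multiplier.
MAX_SURGE = 2.0
BASE_RATE = 2.50
PER_KM_RATE = 1.20

def check_compliance(trip: Trip) -> list[str]:
    """Return a list of violations; an empty list means the trip complies."""
    violations = []
    if trip.surge_multiplier > MAX_SURGE:
        violations.append(
            f"surge {trip.surge_multiplier}x exceeds cap of {MAX_SURGE}x"
        )
    fare_cap = (BASE_RATE + PER_KM_RATE * trip.distance_km) * trip.surge_multiplier
    if trip.fare > fare_cap:
        violations.append(
            f"fare {trip.fare:.2f} exceeds cap of {fare_cap:.2f}"
        )
    return violations
```

Because the rule is executable rather than buried in legal prose, the same check can be run by both sides: a firm can test its pricing before deployment, and a regulator can audit trip records at scale.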

Finally, greater transparency can help engender trust in government. The trade-off for policymakers is usually higher levels of scrutiny, but politicians should not shy away from this. In Taiwan, Audrey Tang’s radical transparency – recording and publishing every meeting – has proven that this approach can work and be a model in reducing ethical failures.

There is a happy place somewhere between stifling innovation with pre-emptive regulation and waiting to act until there is widespread harm. The question is whether policymakers are committed to finding it.

Find out more
institute.global