Implementing a Duty of Care for Social Media Platforms

There is indicative evidence that social media is causing a broad spectrum of harms in many countries: abuse that deters women from public life; racial, religious, and sex-based abuse, much of it illegal; child sexual exploitation and abuse; profound disruption to political processes; threats to national security; economic fraud; and wider harms to consumers. The issues caused by online media are so profound that only a systemic policy solution will succeed, one based on robust microeconomics that balances the rights of people who are harmed with the rights of those who speak.

William Perrin


From an economist’s perspective, social media companies, essentially data and advertising businesses, impose costs on society that are not borne by the companies’ shareholders. Because these costs fall outside the company and onto society, the company has no economic incentive to fix the problem; it simply goes on producing a harmful product. For 40 years the OECD has supported the ‘polluter pays’ principle as the most microeconomically efficient way of solving such problems: governments return societal costs, through tax or regulation, to those who create them. What system, then, will make social media companies invest in preventing or cleaning up the harm they cause, while still respecting people’s rights?

A starting point is that everything a person sees or experiences when using social media is a result of a decision made by the company that runs that platform: service design decisions about the terms of service, the software, and decisions about the resources put into enforcing the terms of service and keeping the software up to date. Services are differentiated by these decisions, which also have an effect on the nature of harms that arise.

Regulatory regimes that address harms to society caused by companies have existed for hundreds of years. At first these were detailed, prescriptive laws, but such laws tended to be evaded over time. More recently, governments have introduced high-level regimes focused on outcomes that companies must achieve; these are harder to evade and allow the companies concerned to decide for themselves how to comply. Often a regulator guides companies’ attempts to achieve the outcomes, with powers to sanction if a company fails. Risk assessment and management are central to these regimes.

In the United Kingdom, the government proposed such an approach for reducing online harms, based upon work by Carnegie U.K. Trust. The Carnegie approach has been endorsed by many U.K. parliamentary committees and groups.

The U.K. approach requires parliament to legislate that social media service providers have a duty in law to take reasonably practicable steps to prevent people from coming to reasonably foreseeable harm through the operation of the companies’ services. It also requires the creation of a regulator to oversee this process at one remove from government. Whether the duty is met is assessed by outcomes, and the focus on harm is both durable and systemic. This ‘statutory duty of care’ is a distant relation of the duty of care that arises in tort, familiar across the common law world. But rather than the courts, parliament sets out in statute that a duty of care exists from one class of people to another, and a regulator supports and enforces it.

This systems-based approach side-steps issues of content liability and asks instead whether the systems the content passes through are risk-assessed, managed, and fit for purpose, judged by outcomes. It is far removed from state censorship or from the approaches seen in broadcasting or press regulation. Under the U.K. proposals the regulator would be bound by the European Convention on Human Rights and required to balance people’s rights in its work. The regulator would be funded, as is normal in Europe, by a mixture of government money and a levy on the companies regulated.

The challenges addressed by a statutory duty of care arise from the immense commercial success of American companies exporting the U.S. regulatory approach to platform liability and regulation. But it is possible to sign up hundreds of millions of subscribers while coming from a very different regulatory system, one of extreme liability, as China’s TikTok demonstrates. Until the United States changes its law, which seems unlikely, other nations and trade blocs will develop diverse regulatory responses. Social media companies should bear the external costs of their actions, but the benefits of such services would be hampered by an uncoordinated set of burdens that might themselves prove ineffective.

The European Commission is reportedly looking to take up a statutory duty of care in its work on reforming intermediary liability in a new Digital Services Act. And a French government expert group has recommended a similar approach to its government. This would bring four members of the G7 under similar regimes. Around the world, nations and trade blocs are examining how to deal with the issues arising from social media. India has new laws on platform liability in draft. Ireland, the default taxation-friendly home for tech companies in Europe, is about to publish its own proposals on online content. Australia is constantly examining how to protect children from harm, and New Zealand is considering new laws following the Christchurch massacre. Canada is also reviewing its regulatory system. And China, of course, has its own distinctive regime for social media regulation. The United States, partly instinctively, partly in response to all of the above, is trying to bake its own low-liability regime into trade deals.

Nation states recognise that there is a problem, but there is no universal agreement on the solution, nor are there mechanisms for global harmonisation. Unlike, say, a trade round, where there is give and take even in asymmetric negotiations, the negative externalities generated by technology companies located in the United States, and increasingly in China, are felt in other nations that receive little benefit in return, not even significant tax revenues.

States where the externalities fall are still developing policies, and the states where the externalities are created are attached to their own domestic regimes. There is little mutual, multilateral understanding of positions. Taking forward a duty of care approach at a multinational level requires a forum that reflects these conditions. Supranational attempts to address issues by consensus, in particular through the United Nations, are weak. Internet governance forums that involve governments have proven to be talking shops, effective at best only on narrow technical issues. A trade-round-style approach may be premature, as participants do not yet have formed positions from which to negotiate. A more multilateral, information-sharing approach is more appropriate, but in a formalised, established setting supported by a strong secretariat. There are analogies with recent work on tax and the digital economy: multilateral forums and sherpa work advancing understanding of technical issues, even if final agreement has to come later through a harder-edged mechanism.

At present there is little or no mutual understanding of regulatory positions at a multinational level, not least because these positions are not fully formed. This needs to be addressed before moving to hard-edged discussion of an actual multinational regime. A statutory duty of care enforced by a regulator is an ideal reference position against which to assess emerging national approaches.

The OECD has a long history of cooperation with the countries considering regulatory proposals and has studied digital issues for decades. Its secretariat has a strong track record of writing comparative yet neutral papers analysing national policy positions on a range of issues. The OECD has hosted digital ministerials, most recently in 2016, and is well used to engaging non-members such as China and India. It would be well placed to convene talks on regulation and to use a statutory duty of care approach as a central comparator.

We call for a 2020 OECD meeting of ministers responsible for the regulation of social media to discuss managing the external costs of internet platforms, focused on economically rational regulation. Non-OECD members China and India have a working relationship with the OECD and should be explicitly included in the process. The OECD should put forward the statutory duty of care enforced by a regulator as a reference point to focus discussion and begin a process of discussion and negotiation between countries and trading blocs as they move towards regulation. The outcome of an OECD process would be a vastly improved understanding of the models employed and available to governments and, most likely, increased uptake of a duty of care approach. Such understanding could in itself improve regulatory outcomes in a range of nations and would then underpin the discussions on trade matters that seem bound to follow.

This article was originally published in the Berggruen Institute’s Renewing Democracy in the Digital Age report.
