Policing the platforms

By: Jon Guinness, Portfolio manager, Equities and Sumant Wahi, Portfolio manager, Equities

Questions about free speech on social media, internet companies’ policies on user expression, and government intervention in the tech industry are becoming impossible for investors to ignore. In the aftermath of a particularly bad-tempered US presidential election campaign, we believe it is important for society to set external boundaries on speech for social media companies. Until those exist, social media firms must set transparent, non-partisan rules for speech on their platforms, overseen by independent boards to ensure fairness and consistency, for the good of both democratic discourse and their own long-term business viability.


Approaching free speech issues through digital ethics

Former US President Donald Trump’s suspension and banning from various social media sites is a high-profile flashpoint in the debate over free versus responsible speech online, a debate that has been rumbling on for some time. The problem for social media companies is where to draw the line between acceptable and unacceptable speech, and, perhaps more fundamentally, whether they should be the ones to judge that distinction at all.

Before addressing these difficult questions, it is worth reiterating the position from which we approach this topic. As technology fund managers, it is our responsibility to take a keen interest in these debates because they directly affect the sustainability and long-term returns of companies in our investment universe. More broadly, we subscribe to digital ethics, specifically around:

  • Misinformation: the commitment to truthful and honest debate on platforms
  • Online fraud: protecting users from online criminality
  • Privacy: safeguarding users’ privacy and control over their data
  • Online welfare: combatting harmful content (e.g. racism, sexual discrimination, criminal incitement) and promoting user welfare in the broadest sense

On free speech, we believe that it is ethically incumbent upon the dominant social media platforms to facilitate a range of user views, imposing specific limitations on toxic content only where necessary. Independent oversight boards set up by social media networks can be an important tool to achieve this while broader, societally driven frameworks are established. We think this approach is consistent with the long-term business interests of social media companies.


Who draws the line between free speech and incitement?

At the moment, social media companies enjoy the best of both worlds: in the US, Section 230 of the Communications Decency Act grants them immunity from liability for content posted by their users, while they simultaneously reserve the right to ban whoever they want for any reason. This gives social media companies immense power that they sometimes wield unevenly.

In 2017, web infrastructure company Cloudflare removed a neo-Nazi website from the internet after a violent far-right rally in Charlottesville, Virginia. While there are undeniably strong justifications for such a move, it was the company’s conflicting stances that caught our attention. Cloudflare executives had previously defended hosting a number of unsavoury forums on free speech grounds, but following the Charlottesville incident, the CEO commented: “Literally, I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet.” He may have been flippant, but what this statement and other actions from social media companies show is the lack of clearly defined frameworks around moderating online speech.

It should not be the job of internet businesses alone to wrestle with philosophical issues around free speech that affect our whole society. In democratic countries, governments and independent regulators should play a central role in shaping the parameters of speech for internet publishing, with a strong sympathy towards promoting open debate.

Facebook has been vocal about this need. CEO Mark Zuckerberg recently commented: “it would be very helpful to us and the Internet sector overall for there to be clear rules and expectations on some of these social issues around how content should be handled, around how elections should be handled, around what privacy norms governments want to see in place, because these questions all have trade-offs.”


“Bad ideas die in the sunlight and thrive in the shadows”
In the absence of clear external guidelines on speech, social media companies must take the lead in policing their networks. Setting unambiguous policies on what content is permissible, together with a thorough, transparent hierarchy of potential actions against rule breakers, would solve much of the problem. Some content is obviously verboten - violence, pornography, racism, sexual discrimination, and clearly false and/or dangerous claims - because it creates harm and inequality of opportunity. But there are more contentious areas.

YouTube recently banned the UK station TalkRadio on the grounds that it had “posted material that contradicted expert advice about the coronavirus pandemic”, only to rescind the decision quickly after a public outcry. The problem is that expert advice on the pandemic has itself changed in some fundamental respects. Last spring, for example, health authorities in the US, UK and elsewhere argued that face masks did not reduce coronavirus transmission and were unnecessary; today they are obligatory in various public places. We are not condoning breaking government guidance, but we think it is fair and healthy for ideas to be debated freely, without fear of reprisal, with a view to reaching consensus in society. It is precisely this process that leads to robust decisions that benefit everyone.

There is also the role social media plays in political outcomes. During the 2020 US election campaign, the New York Post published a report making contested claims about the business dealings of Joe Biden’s son, Hunter. Social media networks suppressed the story. The quality of the research behind the Post’s article was debatable at the time, although after the election the US Justice Department confirmed an investigation into Hunter Biden’s tax affairs. The broader question concerns the internal processes that govern these decisions at social media firms, and how fairly and consistently they are applied to speech across the political spectrum. It is impossible for us to know what impact this event - or others, such as the well-publicised second FBI probe into candidate Hillary Clinton’s use of a private email server announced shortly before the 2016 election - had on election results. But it does demonstrate the reverberating effects of coverage decisions by social media.

Another dimension to the online speech debate is the potential for monopoly or cartel-like behaviour. Parler, a conservative/right-wing social media forum, had its app removed from Apple’s and Google’s app stores in the wake of the violence in Washington during the congressional session to certify the election result. Parler was removed, allegedly, because it promoted violence and was used, in part, to co-ordinate the protest. Amazon Web Services subsequently refused to provide cloud hosting to Parler, effectively cutting off users’ access. Whether or not the tech giants were justified in removing Parler, the incident shows how vulnerable third-party apps are in reaching an audience and how much power big tech has to control the agenda.

[Chart: Tech giants dominate cloud computing - global market share and annual revenues of the largest cloud computing providers. Notes: 12 months to 30 June 2020. Source: Statista, August 2020.]


We think, as a general rule, companies should be diligent in treating both sides of a debate equally and should tolerate the maximum level of free speech that stops short of spilling into direct harm. We subscribe to the view that “bad ideas die in the sunlight and thrive in the shadows.” Objective truth has nothing to fear from open debate facilitated by social media. Banning social media users, suppressing stories or removing forums should be reserved for exceptional circumstances where explicit incitement or falsification can be demonstrated.

Oversight boards can form part of the solution, at least while more permanent frameworks are developed. These boards could be composed of lawyers, academics, journalists and political experts, and would function in a quasi-judicial role to review cases, monitor content and contribute to policies. Social media companies can set up these panels themselves, as Facebook recently did, but it is important that they are seen as independent and balanced so that they have legitimacy with the public.

If the social media market were more fragmented, it could be argued that platforms should be free to promote particular political and speech agendas, as competition would ensure a variety of voices were heard - this is generally the case in newspaper markets. But the current social media landscape is dominated by just two names, Facebook and Twitter. This market structure makes it ethically necessary for the leading firms to prioritise impartiality, truthfulness and a commitment to free speech.

If social media companies fail to do this, they could invite harsh controls on their publishing rights and other business practices from politicians and voters increasingly resentful of their perceived biases. In this scenario, the leading companies could become utility-like entities: they would preserve their monopolies, but at the cost of heavy regulation and supervision restraining their ability to innovate and grow.

Regulation of big tech is likely
In the short term, a repeal of Section 230 looks possible. Enough Democrats and Republicans are concerned about internet speech, albeit for different reasons - Democrats want platforms to take more responsibility for hate speech, while Republicans object to what they see as censorship of conservative views - that the numbers could be sufficient to win a vote to repeal. Aside from significantly increasing the cost of monitoring content for social media firms, overturning Section 230 could push these companies to adopt ultra-conservative approaches to supervising content, severely restricting the dissemination of information and online debate.

Regulation on economic grounds is another flashpoint. There is still no serious alternative to Facebook and Twitter in social media. Facebook and its family of platforms reach nearly half the world’s population each month, three times more than equivalent platforms such as WeChat. As a result, fair competition in the tech sector has become an increasingly prominent issue over the past 12 months.

[Chart: Facebook reaches 3.2 billion people each month - monthly active users of Facebook social media/messaging platforms. Note: data use last reported figures: Facebook (3Q20), WhatsApp (1Q20), Messenger (1Q17), Instagram (2Q18), Any (3Q20). Source: Statista, December 2020.]


The US Federal Trade Commission and attorneys general from 46 states launched antitrust proceedings against Facebook in December 2020 over its acquisitions of Instagram and WhatsApp. In Australia, the government is attempting to legislate a news media bargaining code for internet firms, which would force the likes of Facebook and Google to negotiate payments to third-party media companies for the content they use. European regulators are preparing new laws that would make it easier to launch investigations, curb expansion into new product areas and bar tech firms from giving their own products preferential treatment in their digital stores.

The stakes are high for big tech and society
In the short term, there will likely be no negative financial impact on Facebook and Twitter from recent dramatic and controversial steps such as banning Trump. Because of the former President’s often polarising tone, his views, tweets and posts were hard to monetise, as many brands shied away from associating with him. In the longer term, however, there are significant issues for active investors to consider, such as the rising cost of policing content and tightening antitrust regulation.

Social media platforms also need to maintain the loyalty of active users. As investors, we want to see commitment from these companies to free speech, political impartiality, and robust and transparent moderation policies. Establishing independent oversight boards is a solid step towards this goal. This would help restore trust in their output, appeal to users from a broad range of political persuasions and support their businesses in the long run.     

There are broader questions still to settle about how to draw up acceptable parameters for free speech online. Governments and regulators need to play a role in setting clear guidelines, so that social media platforms understand their responsibilities. We take heart that some companies, such as Facebook, recognise this and are taking steps to reduce political polarisation and improve the quality of content oversight and review.

Social media businesses hold enormous influence and, as a society, we should consciously decide what the limits to that power should be. If we don’t, we could reach a point where the boundaries become so blurred, views so polarised and the dominance of internet companies so embedded that it becomes too difficult to unwind.