Platform Regulation: Time to Tame Digital Gangsters?
Digital gatekeepers wield unprecedented influence over media values. Is it time for policymakers to rein them in?
By Mackenzie Nelson & Ifra Asad
In a scathing report on disinformation issued last month, U.K. policymakers condemned technology platforms like Facebook, calling for regulations to “restore democratic accountability” to the digital public sphere. The report, which refers to such technology companies as “digital gangsters,” comes at a time of growing scrutiny over the power that tech companies have in filtering, moderating and disseminating online content around the world.
While experts increasingly agree that some form of platform regulation is both necessary and imminent, designing regulations that safeguard free speech while promoting transparency and the public interest is a tall order.
Letting the Fox Guard the Henhouse?
In the wake of growing public outcry and pressure from shareholders, tech platforms have formed “partnerships” with think tanks and fact-checking organizations in an attempt to self-regulate. With input from these partners, they’ve banned or de-prioritized accounts, introduced trust and verification schemes, and demonetized certain types of content. Furthermore, last November, Facebook announced its intention to create an independent oversight body, a kind of Facebook Supreme Court, whose main task would be to rule on contested content moderation decisions.
This announcement should be viewed as “progress” as it opens the door to a “less unilateral” decision-making process, says Philip Napoli, a media scholar at Duke University’s Sanford School of Public Policy. It is a “promising” development, according to Daniel Funke, a journalist at the Poynter Institute’s International Fact-Checking Network. “I am an optimist by nature, so I see a need for it,” Funke said.
“Whether or not it will be implemented in a way that is meaningful and more than just a press statement—that remains to be seen.”
Through the International Fact-Checking Network, Funke cooperates with Facebook on a project aimed at combating misinformation.
Evidence from ongoing self-regulation initiatives and partnerships raises concerns about tech platforms’ capacity and willingness to self-regulate. “Anecdotally I know that the [fact-checking] project has had some impact—especially on serial misinformers,” said Funke. “But I will say there has been a problem of transparency.” All Facebook says is that the reach of a post debunked by fact-checkers drops by 80%, a claim that cannot be independently verified, Funke added.
“The information flow with platforms tends to be very one way—we give them all of our information and they give very little information in return,”
said Robyn Caplan of the New York-based research institute Data & Society.
This lack of transparency around how platforms are going about self-regulation has also created challenges for publishers who, for better or worse, have come to rely on third-party gatekeepers for content distribution. Last year, in an attempt to respond to questions about Facebook’s role as a de facto publisher, the company changed its algorithmic ranking system to prioritize posts from friends over posts from pages and publishers. The change imposed substantial losses on many publishers, especially midsize ones. Slate, an online magazine, saw its traffic decline by 81%. BuzzFeed, a popular American digital media outlet, saw a significant dip in profits following the changes; it recently announced that it would be laying off 15% of its employees.
While many of the publishers hit by what came to be known as the “Great Facebook crash of 2018” have begun taking steps to insulate their business models from the whims of Silicon Valley, there is growing consensus that the algorithms and decision-making processes underpinning content moderation must also be made more transparent and accountable.
Regulating Pandora’s Black Box
While platforms like Facebook insist that they do not want to be the “arbiters of the truth,” their business models rest on their ability to control and monetize flows of information for advertising revenue. Their role as de facto gatekeepers of the digital public sphere is highly profitable—so it shouldn’t come as a surprise that they have been reluctant to give up the keys to the castle and share information about how decisions are made from inside their algorithmic “black boxes.”
Therefore, if self-regulation is akin to “letting foxes guard the henhouse,” shouldn’t governments step in? In some cases, they already have—with less than satisfactory, or even troubling, results. While a few of these initiatives, such as the European Commission’s Code of Practice on online disinformation, constitute laudable first steps, others, such as Russia’s anti-“unreliable news” bill, show how regulatory measures can be hijacked to inhibit free speech.
Proposals on how to better tackle the issue of platform regulation range from antitrust legislation to media literacy campaigns to more sweeping measures such as utility-style regulation. Early Facebook investor turned critic Roger McNamee argued in a 2018 article that platforms should be made more transparent about how their algorithms work. But because transparency alone will not guarantee accountability, he also recommended that Facebook’s algorithms and content moderation rules undergo regular audits by independent third parties. A comparable recommendation was put forth in the recent U.K. report, which advised the government to commission an independent third party to develop and oversee a code of ethics for platform moderation, similar to the Broadcasting Code issued by Ofcom, Britain’s broadcast watchdog.
Napoli, who is a vocal proponent of audit-style frameworks for social media governance, has proposed a similar body specific to the U.S. context. In his academic work, Napoli argues that
“broad, general, tech company mantras such as Google’s “don’t be evil,” or Facebook’s “giving people the power to share” seem inadequate in an evolving media ecosystem in which algorithmically-driven platforms are playing an increasingly significant role in the production, dissemination and consumption of the news and information that are essential to a well-functioning democracy.”
Accordingly, Napoli recommends that algorithm ethics compliance be audited by an independent body similar to the U.S. Media Rating Council (MRC).
The Devil Is in the Detail
Needless to say, all of these approaches raise a number of more practical accountability-related questions. “How would this be constructed? Who would you appoint? That’s all tricky […] You’d have to create enough firewalls,” Napoli said. A truly multi-stakeholder approach that includes industry associations, academics, and civil society organizations could be the solution, Napoli added. According to Sejal Parmar, a law and media freedom expert who teaches at the Budapest-based Central European University (CEU),
“besides promoting greater transparency of internet companies’ rules and policies, any regulation should also critically apply the global standards as set forth in international human rights law.”
The Global Network Initiative, which sets human rights-based principles and advises on how to implement them, is an example of what such a multi-stakeholder network might look like at the international level.
At the national level, more binding legislation is unlikely to be enacted anytime soon. “In the U.S. the major challenge right now is lobbying,” says Data & Society’s Caplan. “So the likelihood that we’ll see significant legislation passed in the U.S. is probably pretty minimal.” Nevertheless, she is optimistic that Europe can lead the way in setting standards.
While details of potential regulatory frameworks leave much room for discussion, the status quo has thus far failed to safeguard the democratic functions of the fourth estate. “At this point, YouTube can do what it wants,” Napoli said.
“Twitter can do what it wants. Facebook can do what it wants; and they all operate unilaterally and there’s no place where anything that they do comes under any kind of scrutiny. Maybe we’re at a point in time where the harms of that outweigh the benefits.”
Mackenzie Nelson is a student in the Master of Public Administration program at Central European University’s School of Public Policy (SPP), with an interest in political communications, participatory methods and civic engagement, and the role of technology in media policy. Previously she worked for the Heinrich Böll Foundation.
Ifra Asad is currently pursuing her Master’s in Public Administration at the SPP at Central European University. Prior to this, she worked with civil society organizations focused on women’s rights, children’s education, and minority issues.
This article was documented and written as part of the Practicum Class conducted by Marius Dragomir at the CEU School of Public Policy (SPP).