Facebook and other social networks’ attempts to diminish the influence of problematic political content online have had limited results so far.
The challenge for such platforms is not to sort political speech neatly into legal and illegal, but to consider the way these platforms structure and support particular types of political communication.
In an economy in which attention is the core resource to fight for, extremist content scores highly: studies have shown that it generates more engagement. Today’s hybrid media landscape, characterised by “an increase in misinformation; continued radicalization; and decreased trust in mainstream media”, is fertile ground for the rise of populism.
As the social media expert Danah Boyd puts it: “A new form of information manipulation is unfolding in front of our eyes. It is political. It is global. And it is populist in nature.”
How then may democracy be protected in the new attention economy that characterises today’s Western society? How should we foster authentic political debates? How can we resist the increasing polarisation of society?
It is precisely to respond to this particular context that the EU-funded PaCE research project was launched.
The challenges of regulating content
In their ideal world, Facebook and other platforms and publishers would be able to identify unwanted content clearly and automatically, and to manage it at vast scale. However, political speech does not sort neatly into harmful and benign categories with an easy demarcation between the two. Furthermore, apparently harmless political speech may endanger democratic norms and processes when aggregated, or when it becomes the dominant mode of political speech.
It is clear that any given instance of such speech should not be considered illegal, and censoring it would be both unwelcome and inappropriate in a democracy. However, when such political content circulates at the huge scale at which social media platforms operate (Facebook has 2.27 billion monthly users), it can be very harmful, especially when it interacts with the systems that publishers use to determine what is promoted on their platforms.
This was made clear by major scandals over the last couple of years, in particular the 2016 US presidential election and the Brexit referendum, in which Facebook was found to have contributed to the manipulation of public opinion.
In response to these problematic developments, Facebook claims it has made significant changes to its content policies, paying more attention to regulating content that falls into the grey area of speech that is not illegal but nonetheless has harmful consequences.
The company has massively expanded its content-moderation team, recruited independent fact-checkers, and put in place a team specifically dedicated to elections. In a desire to be more transparent, the company organised a workshop in Paris in April 2019 to discuss its community standards with a group of researchers and experts.
Trilateral is involved in this debate, working with other partners at the interface of technology, media and political science to develop machine-learning tools that identify and track populist narratives, as part of the PaCE project.
It will do so on the basis of an in-depth historical analysis of populist parties and their narratives, operationalising the results into a set of indicators for tracking populist discourse online. Because this is such a contentious area, PaCE’s work will be accompanied by a rigorous ethical assessment throughout the different phases of the project (from the conceptual level to policy recommendations, including ICT tool design and public engagement).
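To make the idea of “operationalising narratives into indicators” concrete, here is a minimal, purely illustrative sketch in Python. The indicator phrases and category names below are invented for illustration; PaCE’s actual indicators are the project’s own research output and are not reproduced here, and a real system would use machine-learning classifiers rather than simple phrase counts.

```python
# Hypothetical indicator lexicon: short phrases standing in for the kind of
# narrative categories a historical analysis might produce. These examples
# are invented, not taken from the PaCE project.
INDICATORS = {
    "anti-elitism": ["corrupt elite", "the establishment", "out-of-touch politicians"],
    "people-centrism": ["the real people", "will of the people", "ordinary citizens"],
}

def score_text(text: str) -> dict:
    """Count how often each narrative category's phrases occur in a text."""
    lowered = text.lower()
    return {
        category: sum(lowered.count(phrase) for phrase in phrases)
        for category, phrases in INDICATORS.items()
    }

sample = "They ignore the will of the people while the corrupt elite profits."
print(score_text(sample))  # {'anti-elitism': 1, 'people-centrism': 1}
```

Even this toy version shows why ethical assessment matters: the choice of indicator phrases directly determines which speech gets flagged as “populist”.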
Trilateral brings to PaCE our expertise in privacy and ethics applied to technological development research.
For more information about our work in this research area, please contact our team: