
European elections and media literacy: countering disinformation is a joint effort

Feb 27

Only three months separate us from the European elections, so we are entering a delicate period. In our previous article, we discussed the awareness campaign carried out by the European Union to increase voter turnout; participating in the event ‘Using Artificial Intelligence (AI) to fight disinformation in European elections’, however, gave us further insight into how the public and private sectors are collaborating to counter fake news and disinformation.

Disinformation is not a new phenomenon, but recent events – starting with the 2016 US elections and Brexit – have made the danger of the issue, and the necessity of fighting it, evident. It is a complex problem that touches different actors around the world: countries, private actors such as social networks and online platforms, and civil society. The EU itself has proved particularly sensitive to disinformation in the past, for example when dealing with issues like migration or the EU budget. With the European elections approaching, many policymakers are reasonably concerned about possible interference in the exercise of democracy, especially through the spread of propaganda, incendiary messages and fake news. In the words of Jens-Henrik Jeppesen, Director of European Affairs at the Center for Democracy and Technology (a non-profit organization that works to preserve the user-controlled nature of the Internet and champions freedom of expression), it is therefore imperative that ‘European authorities put into place a strategy to effectively counter this issue and try to meet the challenge in a rational way’.

So where do the private and public sectors stand in the fight against disinformation, and what has been done so far?

Paolo Cesarini, Head of Unit, Media Convergence and Social Media at DG CONNECT, reported that the European Commission is currently working side by side with online platforms, leading social networks and advertisers, and that their top priority is tackling disinformation in a timely manner. He also underlined that the ‘rhythm at which the initiatives have been carried out is quite impressive’, especially considering the complexity of the Commission’s machinery and of the phenomenon it is trying to tackle. The outcomes of this joint effort are already visible: the EU Code of Practice on Disinformation was published in September 2018, followed by the Action Plan on Disinformation in December 2018. Moreover, the Commission has adopted a satisfactory working definition of ‘fake news’: used in the past as a catch-all phrase for arguments we disagree with and information we would rather not circulate because it does not serve our political purposes, it is now defined as ‘verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm’.

The commitment of online platforms is also remarkable. Aside from joining the Code of Practice, they are trying to give people better AI tools – such as algorithms and plug-ins – to combat disinformation on the Internet efficiently, as well as to increase people’s media literacy. For example, in recent years Google has been conducting hundreds of thousands of experiments on its algorithms in order to serve its users better: ‘Google has become successful because it was able to give people relevant information; but what we learned over time was that relevance wasn’t always the key aspect’, said Milan Zubicek, Public Policy and Government Relations Manager at Google, who added that ‘by focusing on relevance we made mistakes and were serving conspiracy theories on flat Earth and Holocaust denial for example; what we learned over time was that there are certain groups of questions where we need to balance the relevance with authoritativeness of the sources’. These improvements in the algorithms have been applied successfully to Google Search, Google News and YouTube (and since people now spend considerable time on the latter, the platform is evolving at a faster pace). Another virtuous example is Mozilla, the well-known software community behind one of the major web browsers, Firefox. In August 2017, by announcing the Mozilla Information Trust Initiative (MITI), it officially committed to keeping the Internet credible and healthy. The line of action is quite simple: developing products, research and communities to battle disinformation and fake news.
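To make Zubicek’s point about balancing relevance with authoritativeness a little more concrete, the following is a minimal, purely illustrative sketch (in Python) of how a ranker might weight source authority more heavily for sensitive topics. It is not Google’s actual ranking logic; the scores, weights and names here are all hypothetical.

    # Hypothetical illustration only: combine a query-relevance score with a
    # source-authoritativeness score, giving authority more weight for
    # sensitive topics (e.g. health, history, elections).
    from dataclasses import dataclass

    @dataclass
    class Result:
        url: str
        relevance: float          # how well the page matches the query (0-1)
        authoritativeness: float  # trustworthiness of the source (0-1)

    def rank(results, sensitive_topic=False):
        # For ordinary queries relevance dominates; for sensitive topics
        # the authority of the source counts for more.
        w_auth = 0.7 if sensitive_topic else 0.3
        w_rel = 1.0 - w_auth
        return sorted(
            results,
            key=lambda r: w_rel * r.relevance + w_auth * r.authoritativeness,
            reverse=True,
        )

    results = [
        Result("https://example.org/fringe-theory", relevance=0.95, authoritativeness=0.10),
        Result("https://example.org/encyclopedia", relevance=0.80, authoritativeness=0.90),
    ]
    # With sensitive_topic=True the authoritative page outranks the fringe one,
    # even though the fringe page matches the query slightly better.
    print([r.url for r in rank(results, sensitive_topic=True)])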

Resilience is another key word in effectively countering disinformation. Indeed, the Commission is investing in both tools and infrastructure, an activity carried out under Horizon 2020. As Cesarini noted when talking about the tools, the Commission has started to invest in research and innovation projects: ‘there’s a number of projects that have developed – and are still developing – tools to identify content, to analyse networks, to better understand information cascades’. The investment in infrastructure, on the other hand, is focused on building better digital service infrastructures linking fact-checkers, researchers and civil society, in order to have a much more efficient way of exchanging information and coordinating the response.

As for regulation, to date the European Commission has clearly opted for a self-regulatory approach, although some Member States have taken further steps: the most recent case is France, where in November 2018 the National Assembly passed a new law that aims to empower judges to order the immediate removal of fake news during election campaigns. Nevertheless, regulating disinformation is considered extremely challenging because there is a concrete risk of harming freedom of speech; the aforementioned French law, for instance, has received harsh criticism from both right- and left-wing opponents, who have accused the government of trying to create a form of ‘thought police’. Commenting on self-regulation, Cesarini said he hopes it will deliver positive results: ‘we look at this evolution with a very open mind but also with the awareness that if results are not there, we need to devise other instruments belonging to the realm of co-regulation’.

As we mentioned at the beginning, disinformation is a multifaceted issue, and chances are that we will be dealing with it for many years to come. However, thanks to the joint effort of the private and public sectors, the general impression here in Europe is that we are in a good place, more aware of and vigilant about the risks we are exposed to. Moreover, from the way the EU is setting up its framework – especially in view of the upcoming European elections – there is a good chance that other actors around the world will look to it as a model when dealing with disinformation and fake news.
