Italy Scrambles to Fight Misinformation Ahead of Its Elections

As Europe experiments with different approaches to a common problem, a test approaches.

Yasmeen Serhan

Two explosive stories began circulating in Italy in November. The first was about a 9-year-old Muslim girl who was hospitalized after being sexually assaulted by her 35-year-old “husband” in the northeastern city of Padua. The second concerned Maria Elena Boschi, a prominent lawmaker and member of former Prime Minister Matteo Renzi’s ruling Democratic Party, who was photographed at a funeral mourning the recent death of the notorious mafia boss Salvatore Riina.

What the stories had in common was the potential to cause turmoil in an already raucous political debate—one defined in part by anti-immigrant and antiestablishment sentiment—ahead of the country’s March 4 general election. Another thing? They were both fabricated, and it’s not clear by whom.

The issue of “fake news” has preoccupied policymakers across Europe as multiple countries prepare for elections. The term became ubiquitous during America’s 2016 presidential election, when it typically described false stories, often generated or amplified by Russian-linked social media accounts, that spread online. (“Pope Francis Shocks the World, Endorses Donald Trump for President” is the paradigmatic example. He did not.) “Fake news” has since been applied to everything from outright fabrications to simple mistakes in news reporting, and even to true news someone wants to discredit. But in the sense of targeted disinformation designed to influence elections, the issue is particularly urgent in Italy, where a national election is a little over a week away. Numerous countries are experimenting with different models for handling the problem, and the variety of approaches reflects different answers to the central question: Whose job is it to fight disinformation, if anyone’s? Should it be the responsibility of tech companies, governments, or readers themselves?

In Italy, as elsewhere, one approach puts the onus squarely on tech companies. Matteo Renzi, the leader of Italy’s governing center-left Democratic Party, told The New York Times last November that the very quality of Italian democracy depended on the help of social-networking sites, especially Facebook. To that end, Facebook rolled out a new fact-checking program for its Italian users this month, aimed at identifying and debunking false information that appears on the site. Like similar efforts Facebook has launched in the past, the program relies on user reporting and third-party fact checkers to flag potentially false material. But unlike past fact-checking tools, which only flagged posts as false (an effort that paradoxically caused the content to be shared more, not less), this new scheme takes that effort one step further.

“We scan Facebook pages we suspect spread false and misleading information,” said Giovanni Zagni, the director of Pagella Politica, an independent fact-checking organization Facebook is paying to spearhead its efforts in Italy. “Once we find a news article that is obviously false, we write a fact-checking piece that is published in a specific section of our website and we provide its link to Facebook.” Facebook, in turn, displays the piece as a “related article” next to the false story it disputes, which is subsequently demoted by Facebook’s algorithm. Users who attempt to share the false reports also receive a notification that alerts them that the content has been disputed by fact checkers, and encourages them to read the fact checker’s article. As for what kind of content attracts Pagella Politica’s attention, Zagni said the group focuses on falsehoods, not political commentary. “We do not think of ourselves as arbiters of the truth.”

Meanwhile, Italy has parallel efforts underway to help readers become their own arbiters. Italian lawmakers launched an experimental project in October to make media literacy—including how to recognize falsehoods and conspiracy theories online—part of the country’s high-school education curriculum. As The New York Times reported in October, the program aims to teach students how to identify suspect URLs, as well as encourage them to verify news stories by reaching out to experts themselves. And last month, the Italian government unveiled a new online portal that allows people to report false stories they see online.

These efforts may seem piecemeal and haphazard—it’s tough to find evidence of an overall strategy to combat disinformation in Italy, or elsewhere, for that matter. But Italy’s is among the best-developed and most targeted approaches launched in Europe so far. In Germany and France, new laws have been introduced aiming to stop or punish the spread of false news, mainly by targeting social-media companies. Other countries, like the U.K. and the Czech Republic, have launched government units tasked with combatting disinformation.

What they all have in common is an attempt to absorb the lessons of past elections—notably in the United States in 2016, where “fake news” played a disruptive, if ambiguous, role. One difficulty: It’s not totally clear what those lessons are. The details of the U.S. case are uniquely well documented. A public report by the U.S. intelligence community and a detailed indictment of Russian individuals and entities from special counsel Robert Mueller have both laid out the scope and mechanisms of Russian influence operations in the 2016 U.S. presidential race. But aside from this, there is debate not only over how much fabricated stories actually influence electoral outcomes, but also over how widespread the “fake news” phenomenon really is. A recent report on the reach of disinformation in France and Italy noted that “with the partial exception of the United States ... we lack even the most basic information about the scale of the problem in almost every country.” Another study of the U.S. case found that “fake news” made up a small proportion of individuals’ overall news diets, despite its seemingly large reach. It also found that people tended to consume false reports that matched their partisan preferences anyway, implying it was unlikely, though not impossible, that Russian disinformation swayed any votes.

If the scope of the problem is still unclear, so is the appropriate reach of the solution. On the one hand, a healthy democracy requires good information to function. On the other, putting governments in the business of defining what information is “good,” and potentially restricting what doesn’t meet those standards, can easily conflict with other democratic values like freedom of expression. Yet European countries have historically taken a heavier hand in regulating certain types of speech than the United States has—hate speech, for example, can be restricted, unlike in the United States, where it is protected under the Constitution. Several European countries outlaw Holocaust denial. Some, like the U.K., make it easier than the United States does to sue for libel or defamation, and win.

False speech can fall into a different category than hate speech, though, one that some countries see as requiring new laws. French President Emmanuel Macron, for example, announced plans last month to introduce legislation banning the spread of disinformation during election campaigns. He himself was the subject of various false news stories during France’s presidential election last year, when both his rival and a website circulated incorrect rumors of an offshore bank account he supposedly owned. The proposed law would mandate that social-media sites identify who is paying for sponsored content and ads during campaigns, impose a limit on how much can be spent, and empower judges to take down false content and even block offending websites.

But it’s unclear how the French government would define what constitutes “fake news,” or what safeguards, if any, would be supplied to ensure press freedoms. As the French newspaper Le Monde noted in an editorial, “this type of legislative ambition, in a field as fluid and complex as digital technologies and on a subject as crucial as freedom of the press, is inherently dangerous.” Germany imposed a similarly ambitious law last year barring major social-media platforms from leaving “manifestly unlawful” content online for more than 24 hours; the restriction applies to everything from hate speech and propaganda to incitement, and penalties can reach fines of up to 50 million euros. If this worries internet-freedom advocates, it appears to worry fewer Germans: According to a recent poll, only about 26 percent of them expressed concern about the law’s effect on freedom of expression.

The U.K. and the Czech Republic, by contrast, are concentrating not on changing laws but on building government task forces that address disinformation directly rather than restricting or punishing its distributors. Both define the problem as one of national security. Last month, the United Kingdom announced the creation of a new unit whose job is “combatting disinformation by state actors and others” in order to “more systemically deter our adversaries and help us deliver on national security priorities.” Yet it’s unclear how this unit will operate or who will lead it. The Daily Mirror reported recently that some national-security officials had not heard of the unit prior to its being announced, and the Labour lawmaker Tom Watson, citing a lack of clarity from U.K. Prime Minister Theresa May about how it would function, dubbed its announcement “fake news” in itself.

The Czechs have something a bit more developed. Since last year, a 20-person unit in the Interior Ministry has monitored threats including “disinformation campaigns related to internal security”; it also runs a Twitter account that shares tips on how to identify reliable news sources, promotes access to free media-literacy classes, and occasionally calls out specific information circulating online as untrue. Unlike the German model, the approach relies more on conversation than on regulation, but it raises the familiar question of the government’s power to distinguish between “real” disinformation and information it simply doesn’t like. For its part, the unit declares on its website that “it will not force ‘truth’ on anyone, or censor media content.” Regardless, it’s unclear how effective the unit’s online presence really is. A 2016 report by European Values, a Czech think tank, found that a quarter of Czech voters (approximately 2.6 million people) read and believe disinformation, while the unit’s Twitter account has a much smaller reach, with a little more than 7,000 followers.

If a concern about disinformation is its possible effect on national politics, the nature of the internet means that the problem transcends borders—a tweet composed in Moscow can be seen from Amsterdam to Warsaw. So far, the EU-wide effort amounts to launching an initiative to study the problem: Last month, the European Union announced it would form a high-level group of 40 experts to do so.

One of them is Juliane von Reppert-Bismarck, a veteran journalist and the founder and director of Lie Detectors, a Brussels-based organization that aims to improve media literacy among teenagers by sending journalists into classrooms to teach students how to be actively aware of bias and persuasion in the media. “It is absolutely critical that citizens be literate and savvy online,” von Reppert-Bismarck told me. She said the program focuses on getting students to use their own critical-thinking skills to navigate the online information world carefully. “It’s almost like you need a driver’s license because everybody is driving their own car through the digital information universe, and that’s what we’re trying to do.”

Von Reppert-Bismarck says media literacy is only part of the solution. The other tools proposed thus far—government task forces and laws regulating speech—risk other problems, like state-led censorship. Dan Lomas, a lecturer in international history at the University of Salford, told me there are precedents for this. The British government has a storied history of covertly trying to influence the news to further its own foreign-policy goals. It went so far as to create a department dedicated to countering Soviet propaganda during the Cold War, in part by spreading propaganda of its own. “Governments have always wanted to get their side of the story across,” Lomas said, noting that such efforts could backfire if the public deems the government line inherently untrustworthy. “A lot of people who are reading some of the fake news out there may be inclined to mistrust government official information anyway and more inclined to believe some of these Twitter bots and fake news channels. … One person’s fake news might be another person’s truth.”

But if disinformation is being treated as a threat to national security, it could be argued that a more forceful government response is necessary. “The interesting question for this unit is to what extent they are rebutting fake news from state actors—with Russia obviously being an important player in this—and to what extent they are attempting to assert the truth,” Daniel Thornton, the program director of the London-based Institute for Government, told me of the U.K.’s proposal. “The statement from the prime minister’s office made clear that this new unit is being established under the auspices of the National Security Council. … This isn’t something about improving the quality of public debate, it’s not driven by the Department of Culture, Media, and Sport—this is a security response to a security threat.”

Which raises another question: How do we know whether the response is working? Laura Bononcini, who works on Facebook’s program in Italy, said that it’s still too soon to say. “We have started to see what’s working and what’s not working, and based on that we are changing our tools,” she said. There’s also the question of how much one company’s selective fact-checking efforts, in one country, can do in the broader world of online falsehoods. When I asked von Reppert-Bismarck what success would look like for the EU, she indicated that the problem would outlive election seasons in Europe. “We are given more and more incentives to give extreme reactions to things we see and things we read, therefore retreating to the various extremes of political or ideological debate,” she said. “That makes a project like the European Union extremely difficult because the European Union is founded on diversity and compromise and all the messy things that happen in the middle and that can only be achieved in the middle. If you have the vacating of the political middle, then the European project is in real trouble.”

But for her personally, she said, the goal is simple. “We’re aiming for a world where Lie Detectors can close shop and I can go back to being a journalist.”

Yasmeen Serhan is a former staff writer at The Atlantic.