Fake news in the digital era is one of the latest issues to raise concern among intermediaries, governments, and end users. Fake news can be described as deliberately created, factually incorrect stories, spread by outlets to promote their own interests. With the growth of social media, fake news has proliferated, finding a platform through which such stories can be disseminated to a massive audience.
According to a recent analysis, fake news stories created more Facebook engagement than the top election stories from 19 of the main news outlets combined. On top of that, a Stanford study recently found that more than 80% of students cannot distinguish sponsored content from ‘real’ news stories.
This space explores the impact of fake news on digital policy, and the latest developments on how stakeholders are tackling the issue of fake news.
The issue of fake news became a mainstream concern in November 2016 right after the US Presidential election. Internet giants faced a backlash over the spread of ‘false news’ on their platforms. This - some critics said - may have convinced voters to vote for the Republican candidate.
The backlash prompted intermediaries to introduce changes to their policies, with Google and Facebook both announcing that they were working on changes to prevent 'fake news' websites from using their respective advertising networks. Google announced it would change its policy to prevent websites that misrepresent content from using its AdSense advertising network. Facebook updated its advertising policies to spell out that its ban on deceptive and misleading content applies to fake news.
Meanwhile, intermediaries faced further criticism when German Chancellor Angela Merkel urged Internet platforms to reveal their search engine algorithms, over concerns that their lack of transparency would 'lead to a distortion of our perception' and 'shrink our expanse of information'. Merkel argued that Internet users have a right to know on what basis they receive information through search engines. She explained that the algorithms operated by search engines could lead to a lack of confrontation with opposing ideas - leading to so-called filter bubbles and echo chambers - which can harm a healthy democracy.
While the controversy shone a bright light on the role of intermediaries in the lead-up to the 11th Internet Governance Forum, held on 6-9 December in Guadalajara, the IGF discussions brought a slight shift in focus. Fake news was discussed more in connection with how to validate information (role of users), than how platforms should tackle the issue (role of intermediaries), as has been the case in public debate.
Speakers argued that there needs to be greater social media literacy ‘to understand that what we’re reading is not the whole picture’, while others discussed the distinction between reputable and non-reputable news outlets, acknowledging that even the most established outlets can get it wrong. On the other hand, the role of intermediaries was discussed in the context of content removal, hate speech, net neutrality and zero-rating practices, and the protection of human rights.
In 2017, the issue of fake news regained prominence. On one hand, news organisations are facing staunch criticism from US President Trump over the 'spread of lies', amid inquiries by several governments into how to tackle false news in their countries. On the other hand, intermediaries are taking steps to flag fake news and verify information.
As developments unfold, many questions are surfacing: Should intermediaries be solely responsible for the spread of fake news? Should governments step in? What are the main legal and technical mechanisms to stop the spread of false news?
15 February 2017: CNN is broadcasting on YouTube after Venezuela shut off CNN in Spanish over a report on fraudulent passports. The ban came as President Nicolas Maduro said he wants CNN 'out of the country'. Government media dominate the news, and CNN was labelled 'fake news', a characterisation dating back to an earlier report in which, Maduro claimed, the news channel 'manipulated information about a student’s complaint regarding the lack of food at school', meddling in what he described as an internal Venezuelan topic.
6 February 2017: Facebook and Google are co-operating with French news organisations (including Agence France-Presse and Le Monde) to minimise the risk of fake news affecting France's upcoming presidential election. The collaboration plans to launch new fact-checking tools. Facebook will rely on users to flag fake news and have it subsequently fact-checked by the partner organisations. Content that is then deemed to be fake will be tagged with an icon to indicate that the story is contested.
15 January 2017: The British Parliament is planning to launch an inquiry into fake news. Sessions with executives at Facebook, Google and Twitter are expected to be planned by late spring or early summer. According to Damian Collins, who chairs the cross-party committee leading the inquiry, social media companies 'have a responsibility to ensure their platforms are not being used to spread malicious content.' Facebook has announced that it will start testing its fake news filtering tools in Germany. The tools will allow Facebook's users to flag news stories as fake; these allegations will then be verified by a third-party fact checker. If a story is verified as fake news, Facebook will label it as 'disputed' and de-prioritise it in its news feed algorithm.
19 November 2016: The spread of 'fake news' on social networks such as Facebook and Twitter has become a hot topic after the election of Donald Trump in the USA, leading Chinese officials to address it during the third World Internet Conference in Wuzhen. According to the Cyberspace Administration of China (CAC), false news items are signs that 'cyberspace has become dangerous and unwieldy'. As a result, the CAC recommended identifying those who post fake news and rumours in order to 'reward and punish' them.
16 November 2016: After the U.S. presidential election, Google, Facebook, and Twitter faced severe criticism regarding the spread of 'false information' on their platforms, which may have convinced voters to vote for Republican candidate Donald Trump. In response, Google and Facebook announced that they are working on changes to prevent 'fake news' websites from using their respective advertising networks.
4 July 2016: The Cyberspace Administration of China has announced in a statement that media will no longer be able to report news obtained from social media sites without approval. According to the administration, 'It is forbidden to use hearsay to create news or use conjecture imagination to distort the facts,' adding that the administration will 'strengthen supervision and investigation' and 'severely probe and handle fake and unfactual news.'