
Content policy

Updates

15 Jan 2017

The British Parliament is planning to launch an inquiry into fake news. Sessions with executives from Facebook, Google, and Twitter are expected to take place by late spring or early summer. According to Damian Collins, who chairs the cross-party committee leading the inquiry, social media companies 'have a responsibility to ensure their platforms are not being used to spread malicious content.'

Facebook has announced that it will start testing its fake news filtering tools in Germany. The tools allow Facebook's users to flag news stories as fake; flagged stories are then checked by a third-party fact-checker. If a story is confirmed to be fake, Facebook will label it as 'disputed' and de-prioritise it in its news feed algorithm.

12 Jan 2017

David Kaye, UN Special Rapporteur on the promotion and protection of the right to freedom of expression, has warned against age verification checks intended to prevent children from accessing online pornography. The checks were introduced by the UK government at the end of 2016. Kaye expressed concern that 'age-verification provisions give the Government access to information of viewing habits and citizen data', which could breach 'international law' and might expose users to hacking or blackmail. The British Department for Culture, Media and Sport responded: 'This Government is proud to be putting in place robust measures to keep children safe from harmful pornographic content on the internet'.

12 Jan 2017

Two men who worked on Microsoft's online safety team are suing the company, claiming to have developed post-traumatic stress disorder (PTSD) as a result of having to view 'inhumane and disgusting content'. The men say they received little or no psychological support during their work, which focused on screening content for child sexual abuse, murder, and other crimes. The lawsuit accuses Microsoft of 'negligent infliction of emotional distress'. Microsoft has responded that it takes its responsibility for employees' health seriously. The lawsuit lays bare a potentially alarming trend in the tech sector, as similar problems have reportedly occurred at other companies, such as Facebook.

Pages

One of the main sociocultural issues is content policy, often addressed from the standpoints of human rights (freedom of expression and the right to communicate), government (content control), and technology (tools for content control). Discussions usually focus on three groups of content:

 

  • Content for which there is a global consensus on control. Included here are child pornography, justification of genocide, and incitement to or organisation of terrorist acts.
  • Content that is sensitive for particular countries, regions, or ethnic groups due to their particular religious and cultural values. Globalised online communication poses challenges for local, cultural, and religious values in many societies. Most content control in Middle Eastern and Asian countries, for example, is officially justified by the protection of specific cultural values. This often means that access to pornographic and gambling websites is blocked.
  • Content subject to political censorship on the Internet, often aimed at silencing political dissent and usually justified by the protection of national security and stability. Reporters Without Borders issues annual reports on freedom of information on the Internet, and censorship is also addressed by Freedom House’s Freedom on the Net reports.

How content policy is conducted

An à la carte menu for content policy contains the following legal and technical options, which are used in different combinations.

Governmental filtering of content

Governments that filter access to content usually maintain an index of websites to be blocked from citizen access. Technically speaking, filtering relies mainly on router-based IP blocking, proxy servers, and DNS redirection. Filtering of content occurs in many countries. In addition to the countries usually associated with these practices, such as China, Saudi Arabia, and Singapore, other countries are increasingly adopting the practice. Technical and human rights experts have underlined, however, that ISP-level filtering is in many instances both inefficient (the content can simply be made available at a different online location than the one to which access is blocked) and a threat to basic human rights such as access to information and freedom of expression (for example, blocking an IP address shared by several websites also blocks unobjectionable content).
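
The DNS-redirection technique mentioned above can be sketched in a few lines of code. The Python sketch below is purely illustrative: the blocklist, the sinkhole address, and the domain names are hypothetical, and real deployments operate inside ISP resolvers rather than in application code.

```python
# Minimal sketch (not any specific national system) of DNS redirection:
# lookups for blocklisted domains are answered with a sinkhole address
# instead of the real one. All names and addresses here are hypothetical.
import socket

BLOCKLIST = {"blocked.example"}      # domains an authority wants filtered
SINKHOLE_IP = "192.0.2.1"            # address of a 'content blocked' notice page

def resolve(domain: str) -> str:
    """Answer a lookup: redirect blocked domains, resolve everything else normally."""
    if domain.lower().rstrip(".") in BLOCKLIST:
        return SINKHOLE_IP                     # the user is silently redirected
    return socket.gethostbyname(domain)        # normal resolution via the system resolver

if __name__ == "__main__":
    for name in ("blocked.example", "example.com"):
        print(name, "->", resolve(name))
```

The same logic applied at the IP level illustrates the over-blocking problem noted above: blocking a single address that is shared by several websites makes all of them unreachable.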

Private rating and filtering systems

Faced with the risk that the Internet could disintegrate through the development of various national barriers (filtering systems), the W3C and other like-minded institutions proactively proposed user-controlled rating and filtering systems. In these systems, filtering mechanisms can be implemented by software on personal computers or at the level of the servers controlling Internet access.
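
As a rough illustration of how user-controlled filtering works, the sketch below compares a page's content labels with a locally configured policy before allowing access. The label categories, levels, and URLs are invented for illustration; the general approach follows the label-and-filter model the W3C promoted, not any specific product.

```python
# Illustrative sketch of user-controlled, label-based filtering: publishers (or a
# rating bureau) label pages per category, and software on the user's machine
# compares the labels with a local policy. Categories, levels, and URLs are invented.

PAGE_LABELS = {
    "https://news.example/article": {"violence": 1, "nudity": 0},
    "https://forum.example/thread": {"violence": 3, "nudity": 2},
}

USER_POLICY = {"violence": 2, "nudity": 1}   # maximum acceptable level per category

def allowed(url: str) -> bool:
    """Allow the page only if every labelled category stays within the user's limits."""
    labels = PAGE_LABELS.get(url, {})
    return all(labels.get(category, 0) <= limit for category, limit in USER_POLICY.items())

for url in PAGE_LABELS:
    print(url, "->", "allow" if allowed(url) else "block")
```

The key design point is that the policy sits with the user (or parent, or network administrator) rather than with a government or an ISP.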

Content filtering based on geographical location

Another technical solution related to content is geo-location software, which filters access to particular web content according to the geographic or national origin of users. The Yahoo! case was important in this respect, since the group of experts involved, including Vint Cerf, indicated that in 70-90% of cases Yahoo! could determine whether sections of one of its websites hosting Nazi memorabilia were being accessed from France. This assessment helped the court reach its final decision, which required Yahoo! to filter access from France to Nazi memorabilia. Since the 2000 Yahoo! case, the precision of geo-location has increased further with the development of highly sophisticated geo-location software.
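
The mechanism reduces to mapping the visitor's IP address to a country and applying a per-country rule. The sketch below shows this with an invented geo-IP table and an invented restriction list; real services rely on commercial geo-IP databases and are probabilistic rather than exact, which is why the Yahoo! experts spoke of 70-90% accuracy.

```python
# Minimal sketch of geo-location filtering: map the visitor's IP address to a country
# and decide whether restricted content may be served. IP ranges and rules are invented.
import ipaddress

GEO_TABLE = {                                     # hypothetical network -> country mapping
    ipaddress.ip_network("192.0.2.0/24"): "FR",
    ipaddress.ip_network("198.51.100.0/24"): "US",
}
RESTRICTED_FOR = {"FR"}                           # countries where this content is withheld

def country_of(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    for network, country in GEO_TABLE.items():
        if addr in network:
            return country
    return "??"                                   # unknown origin; a real site picks a default

def serve(ip: str) -> str:
    return "blocked notice" if country_of(ip) in RESTRICTED_FOR else "full content"

for visitor in ("192.0.2.17", "198.51.100.5"):
    print(visitor, "->", serve(visitor))
```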

Content control through search engines

The bridge between the end-user and web content is usually a search engine. The filtering of searches was a source of tension between Google and the Chinese authorities, which culminated in Google's decision in January 2010 to redirect searches performed on Google.cn to its Hong Kong-based servers. However, later that year, Google reversed its decision under the threat that the Chinese government would refuse to renew its Internet Content Provider licence.

The risk of filtered search results, however, does not come only from the governmental sphere; commercial interests may interfere as well, more or less obviously or pervasively. Commentators have started to question the role of search engines (particularly Google, given its dominant position among users) in mediating access to information, and to warn about their power to influence users' knowledge and preferences.

Web 2.0 challenge: users as contributors

With the development of Web 2.0 platforms – blogs, document‑sharing websites, forums, and virtual worlds – the distinction between user and creator has blurred. Internet users now create large portions of web content, such as blog posts, videos, and photo galleries. Identifying, filtering, and labelling ‘improper’ content is becoming a complex activity. While automatic filtering techniques for text are well developed, automatic recognition, filtering, and labelling of visual content are still at an early stage of development.
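
The asymmetry is easy to see in code. Screening user-generated text against a term list is a few lines of string matching, as in the toy sketch below (the term list and posts are hypothetical); making the equivalent decision about an uploaded image or video requires trained classifiers and large labelled datasets.

```python
# Toy illustration of text screening: flag a post if it contains any prohibited term.
# The term list and the example posts are hypothetical stand-ins.
import re

FLAGGED_TERMS = {"badterm", "slur"}   # stand-ins for whatever a platform's policy prohibits

def flag_post(text: str) -> bool:
    """Return True if the post contains any flagged term (whole-word match)."""
    words = set(re.findall(r"\w+", text.lower()))
    return not words.isdisjoint(FLAGGED_TERMS)

for post in ("An ordinary holiday photo caption", "A post containing badterm and more"):
    print(flag_post(post), "-", post)
```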

One approach, used on a few occasions by Morocco, Pakistan, Turkey, and Tunisia, is to block access to YouTube and Twitter throughout the country. This maximalist approach, however, results in unobjectionable content, including educational material, being blocked. During the Arab Spring events, governments took the extreme measure of cutting Internet access completely in order to hinder communication via social network platforms. Similar incidents have taken place even more recently.

The need for an appropriate legal framework

The legal vacuum in the field of content policy provides governments with high levels of discretion in deciding what content should be blocked. Since content policy is a sensitive issue for every society, the adoption of legal instruments is vital. National regulation in the field of content policy may provide better protection for human rights and resolve the sometimes ambiguous roles of ISPs, enforcement agencies, and other players. In recent years, many countries have introduced content policy legislation.

International initiatives

At the international level, the main initiatives arise in European countries with strong hate speech legislation, including laws against racism and anti-Semitism. European regional institutions have attempted to extend these rules to cyberspace. The primary legal instrument addressing the issue of content is the Council of Europe’s Additional Protocol to the Convention on Cybercrime, concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems (2003). On a more practical level, the EU introduced the Safer Internet Programme, which includes the following main points:

  • Setting up a European network of hotlines to report illegal content
  • Encouraging self-regulation
  • Developing content rating and filtering systems
  • Benchmarking filtering software and services
  • Raising awareness of the safer use of the Internet

The Organization for Security and Co-operation in Europe (OSCE) is also active in this field. Since 2003, it has organised a number of conferences and meetings with a particular focus on freedom of expression and the potential misuses of the Internet (e.g. racist, xenophobic, and anti-Semitic propaganda).

Instruments

Resolutions & Declarations

Universal Declaration of Human Rights (1948)
Wuzhen World Internet Conference Declaration (2015)

Resources

Articles

Eric Schmidt on How to Build a Better Web (2015)
The Digital Dictator's Dilemma: Internet Regulation and Political Control in Non-Democratic States (2014)
Internet Content Regulation in Liberal Democracies: A Literature Review (2013)
Trends in Transition from Classical Censorship to Internet Censorship: Selected Country Overviews (2012)
Policy and Regulatory Issues in the Mobile Internet (2011)
The Impact of Internet Content Regulation (2002)

Publications

Internet Governance Acronym Glossary (2015)
An Introduction to Internet Governance (2014)

Papers

Internet Fragmentation: An Overview (2016)

Reports

One Internet (2016)
Freedom of the Press 2016 (2016)
2016 Special 301 Report (2016)
2016 World Press Freedom Index (2016)
The 2016 National Trade Estimate Report on Foreign Trade Barriers (2016)
The Impact of Digital Content: Opportunities and Risks of Creating and Sharing Information Online (2016)
Content Removal Requests Report (2016)
Global Support for Principle of Free Expression, but Opposition to Some Forms of Speech (2015)
Freedom on the Net 2015 (2015)
Government Request Report (2015)

Other resources

The Twitter Rules (2016)

Processes

Although not always mentioned explicitly, content policy is often embedded in discussions on human rights, liability of intermediaries, intellectual property, child safety, jurisdiction, and more. Last week’s discussions were once again a vivid example of the intersecting nature of content policy.

Several sessions addressed the need for content control in different cases: from fighting violence against women online, to protecting children and adolescents, and safeguarding LGBT rights. At the same time, the discussions recognised the need to safeguard freedom of expression and other rights.

 

Although there was general consensus on the need to protect vulnerable communities, the extent of content control was not always agreed on. For example, during the Best Practice Forum on Practices to Counter Abuse and Gender-Based Violence against Women Online, several panellists spoke of the difficulty of establishing strong legal mechanisms that do not lead to over-censorship.

The workshop on Tech-related Gender Violence x Freedom of Expression (WS 196) explicitly dealt with the tension between gender protection and the right to free speech. At the other end of the spectrum, several sessions addressed cases in which Internet content is censored by governments seeking to establish digital control over their citizens. For example, Information Controls in the Global South (WS 224) addressed the challenges civil society faces in having a meaningful impact when confronted with information censorship.

New areas in content policy are being explored. For example, the emerging issue of content quality control was discussed during Open Education Resources (WS 58). What happens with our digital assets after we pass away? Death and the Internet (WS 70) looked at the issue of digital legacies… with a touch of humour. In a hypothetical set-up, panellists played the role of an online user who died testate without a valid power of attorney; his family were suing for the right to access his data, while legal experts applied different laws to the scenario. Although future planning is a topic many avoid, the amount of personal data we leave behind merits an in-depth discussion about privacy, personal data, conflicting policies and regulations, jurisdiction, and the role of policymakers. It is expected that more discussions on digital legacies will take place, especially among the legal community and the industry.

With regard to the right to be forgotten (RTBF), last year’s Court of Justice of the European Union (CJEU) ruling (Google Spain SL, Google Inc. v Agencia Española de Protección de Datos, Mario Costeja González) had far-reaching implications, and created a ripple effect across different jurisdictions.

One of the main issues concerns terminology, as the RTBF can generate the false reassurance that an individual’s past can be forgotten. Panellists in The ‘Right to be Forgotten’ Rulings and their Implications (WS 31) suggested that the right be renamed ‘the right to be de-indexed’. The main issues were reiterated in Cases on the Right to be Forgotten, What Have we Learned? (WS 142): the term is problematic, and policymakers and the judiciary need a better understanding of the technology. The process of de-listing imposes an unnecessary burden on online media houses, which must continually update their published stories. The process is also likely to be abused in jurisdictions where a take-down notice system is implemented.

Both workshops discussed the risk that the RTBF affects other human rights, including the right to memory and the flow of ideas, the right to know the truth, and freedom of the press. These rights, essential to democracy, could be threatened by the RTBF. The representative from the United Nations Commission for Human Rights commented that the RTBF contrasts with the right to know the truth, which is a distinct right; the erasure of information could impact the right to truth, thus creating a need for due process.

Among the practical implications is the fact that different jurisdictions have ruled or legislated on the RTBF. These include a judgment by the Constitutional Court of Colombia; new legislation in Chile, Nicaragua, and Russia; and data authorities’ rulings on search engines. The CJEU ruling has therefore created a ripple effect, extending the European cyberlaw footprint to a global level.

 
