- Content over which there is global consensus on the need for control. This includes child pornography, justification of genocide, and incitement to or organisation of terrorist acts.
- Content that is sensitive for particular countries, regions, or ethnic groups because of their religious and cultural values. Globalised online communication poses challenges for local cultural and religious values in many societies. Most content control in Middle Eastern and Asian countries, for example, is officially justified as protecting specific cultural values. This often means that access to pornography and gambling websites is blocked.
- Political censorship on the Internet, often used to silence political dissent, usually under the claim of protecting national security and stability. Reporters Without Borders issues annual reports on freedom of information on the Internet, and censorship is also addressed in Freedom House’s Freedom on the Net reports.
How content policy is conducted
An à la carte menu for content policy contains the following legal and technical options, which are used in different combinations.
Governmental filtering of content
Governments that filter access to content usually create an Internet index of websites to be blocked. Technically, filtering relies mainly on router-based IP blocking, proxy servers, and DNS redirection. Filtering occurs in many countries: in addition to those usually associated with the practice, such as China, Saudi Arabia, and Singapore, a growing number of other countries are adopting it. Technical and human rights experts have underlined, however, that filtering content at the ISP level is often not only inefficient (the content can simply be made available at a different online location than the one to which access is blocked), but also a threat to basic human rights such as access to information and freedom of expression, since unobjectionable content can be blocked when, for example, a measure targets an IP address shared by several websites.
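As an illustration of one of these techniques, the sketch below shows the basic logic of DNS redirection: a resolver checks each requested name against a blocklist and answers with a ‘sinkhole’ address instead of the real one. All names and addresses are hypothetical, and the sketch is a minimal model of the principle rather than of any deployed national system.

```python
import socket

# Hypothetical blocklist and sinkhole address; real national filtering
# systems rely on much larger, centrally maintained indexes of websites.
BLOCKLIST = {"blocked.example.com"}
SINKHOLE_IP = "192.0.2.1"  # documentation-range address standing in for a block page

def resolve(hostname: str) -> str:
    """Answer with the sinkhole for blocked names; resolve everything else normally."""
    if hostname in BLOCKLIST:
        return SINKHOLE_IP
    return socket.gethostbyname(hostname)

print(resolve("blocked.example.com"))  # -> 192.0.2.1 (the request is redirected)
```

Router-based IP blocking works at a lower level, discarding traffic to a listed address outright, which is precisely why a shared IP address can take unobjectionable websites down with the targeted one.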
Private rating and filtering systems
Faced with the potential risk of the disintegration of the Internet through the development of various national barriers (filtering systems), the W3C and other like-minded institutions proactively proposed the implementation of user-controlled rating and filtering systems. In these systems, filtering can be implemented by software on personal computers or at the level of the servers that control Internet access.
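A minimal sketch of how such a user-controlled system might work is shown below: pages carry machine-readable ratings (in the spirit of the W3C’s PICS labels), and software on the user’s computer or access server compares them against a locally chosen policy. The sites, categories, and rating levels here are all hypothetical.

```python
# Hypothetical ratings attached to sites, and a locally chosen user policy;
# the filtering decision is made on the user's side, not at a national gateway.
RATINGS = {
    "news.example.org": {"violence": 0, "gambling": 0},
    "casino.example.net": {"violence": 0, "gambling": 4},
}
USER_POLICY = {"violence": 2, "gambling": 1}  # maximum level the user accepts

def allowed(hostname: str) -> bool:
    """Check a site's rating against every category limit in the user's policy."""
    rating = RATINGS.get(hostname, {})
    return all(rating.get(category, 0) <= limit
               for category, limit in USER_POLICY.items())

print(allowed("news.example.org"))    # -> True
print(allowed("casino.example.net"))  # -> False under this user's policy
```

The design choice that matters is where the policy lives: because each user sets their own limits, the same rated web can look different to different households without any central authority deciding what is blocked.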
Content filtering based on geographical location
Another technical solution related to content is geo-location software, which filters access to particular web content according to the geographic or national origin of users. The Yahoo! case was important in this respect: the group of experts involved, including Vint Cerf, indicated that in 70-90% of cases Yahoo! could determine whether sections of one of its websites hosting Nazi memorabilia were being accessed from France. This assessment helped the court reach its final decision, which required Yahoo! to filter access from France to Nazi memorabilia. Since the 2000 Yahoo! case, the precision of geo-location has increased considerably with the development of more sophisticated geo-location software.
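The underlying principle is simple to sketch: map a visitor’s IP address to a country through a lookup table and apply a country-specific rule, as below. The address ranges and country assignments are hypothetical placeholders; commercial geo-location databases map far more ranges and are updated continuously.

```python
import ipaddress

# Hypothetical range-to-country table standing in for a geo-location database.
GEO_TABLE = [
    (ipaddress.ip_network("203.0.113.0/24"), "FR"),
    (ipaddress.ip_network("198.51.100.0/24"), "US"),
]
RESTRICTED = {"FR"}  # countries from which this content must be withheld

def country_of(ip: str) -> str | None:
    """Look up the country associated with an IP address, if known."""
    address = ipaddress.ip_address(ip)
    for network, country in GEO_TABLE:
        if address in network:
            return country
    return None

def serve(ip: str) -> str:
    return "blocked" if country_of(ip) in RESTRICTED else "page content"

print(serve("203.0.113.7"))   # -> blocked (mapped to FR in this sketch)
print(serve("198.51.100.7"))  # -> page content
```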
Content control through search engines
The bridge between the end user and web content is usually a search engine. The filtering of searches was a source of tension between Google and the Chinese authorities, culminating in Google’s decision in January 2010 to redirect searches performed on Google.cn to its Hong Kong-based servers. Later that year, however, Google reversed its decision under pressure from the Chinese government, which had threatened not to renew Google’s Internet Content Provider licence.
The risk of filtered search results, however, does not come only from governments; commercial interests may interfere as well, in ways that are more or less visible and pervasive. Commentators have begun to question the role of search engines (particularly Google, given its dominant position among users) in mediating access to information, and to warn about their power to influence users’ knowledge and preferences.
Web 2.0 challenge: users as contributors
With the development of Web 2.0 platforms – blogs, document‑sharing websites, forums, and virtual worlds – the distinction between user and creator has blurred. Internet users now create large portions of web content, such as blog posts, videos, and photo galleries. Identifying, filtering, and labelling ‘improper’ websites is becoming a complex activity. While automatic filtering techniques for text are well developed, automatic recognition, filtering, and labelling of visual content are still at an early stage of development.
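As a rough illustration of this asymmetry, the sketch below uses the kind of pattern matching that makes text comparatively easy to filter; the terms are placeholders, not a real blocklist. No comparably simple test exists for images or video, which is why labelling visual content remains the hard part.

```python
import re

# Placeholder terms standing in for a real blocklist; matching text is
# straightforward, whereas no one-line equivalent exists for images or video.
BLOCKED_TERMS = re.compile(r"\b(banned_term_a|banned_term_b)\b", re.IGNORECASE)

def flag_post(text: str) -> bool:
    """Return True if a user-contributed post matches the term blocklist."""
    return bool(BLOCKED_TERMS.search(text))

print(flag_post("A harmless holiday photo caption"))    # -> False
print(flag_post("A post that mentions banned_term_a"))  # -> True
```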
One approach, used on a few occasions by Morocco, Pakistan, Turkey, and Tunisia, is to block access to YouTube and Twitter throughout the country. This maximalist approach, however, also blocks unobjectionable content, including educational material. During the Arab Spring, some governments took the extreme measure of cutting Internet access completely in order to hinder communication via social network platforms; similar shutdowns have occurred since.
The need for an appropriate legal framework
The legal vacuum in the field of content policy gives governments wide discretion in deciding what content should be blocked. Since content policy is a sensitive issue for every society, the adoption of legal instruments is vital. National regulation may provide better protection for human rights and resolve the sometimes ambiguous roles of ISPs, enforcement agencies, and other players. In recent years, many countries have introduced content policy legislation.
At the international level, the main initiatives come from European countries with strong hate speech legislation, including laws against racism and anti-Semitism. European regional institutions have attempted to extend these rules to cyberspace. The primary legal instrument addressing content is the Council of Europe’s Additional Protocol to the Convention on Cybercrime, concerning the criminalisation of acts of a racist and xenophobic nature committed through computer systems (2003). On a more practical level, the EU introduced the Safer Internet programme, which includes the following main points:
- Setting up a European network of hotlines to report illegal content
- Encouraging self-regulation
- Developing content rating and filtering systems
- Benchmarking filtering software and services
- Raising awareness of the safer use of the Internet
The Organization for Security and Co-operation in Europe (OSCE) is also active in this field. Since 2003, it has organised a number of conferences and meetings with a particular focus on freedom of expression and the potential misuses of the Internet (e.g. racist, xenophobic, and anti-Semitic propaganda).