Internet data can travel over a diverse range of communication media: telephone wires, fibre-optic cables, satellites, microwaves, and mobile telecommunications technology. Even the standard electric grid can be used to relay Internet traffic utilising power line technology.
The way in which telecommunication is regulated impacts Internet governance directly. The telecommunications infrastructure is regulated at both national and international levels by a variety of public and private organisations. The key international organisations involved in the regulation of telecommunications are the International Telecommunication Union (ITU), which develops rules for coordination among national telecommunication systems, the allocation of the radio spectrum, and the management of satellite positioning; and the World Trade Organization (WTO), which has played a key role in the liberalisation of telecommunication markets worldwide.
The roles of the ITU and the WTO are quite different. The ITU sets detailed voluntary technical standards and telecommunication-specific international regulations, and provides assistance to developing countries. The WTO provides a framework for general market rules.
Following liberalisation, the ITU’s near monopoly as the principal standards-setting institution for telecommunications was eroded by other professional bodies and organisations. At the same time, large telecommunication companies – such as AT&T, Vodafone, Telefonica, Orange, Tata Communications, and Level 3 Communications – were given the opportunity to extend their market coverage globally. Since most Internet traffic is carried over the telecommunication infrastructures of such companies, they have an important influence on Internet developments.
Convergence of telecommunications infrastructure
The Internet can be structured into three basic layers: a technical infrastructure layer (physical), a transport layer (standards and protocols), and an application and content layer (the web, apps). Smooth interaction between the first two layers is crucial from a telecommunications perspective.
In order to use and further develop the telecommunications infrastructure efficiently, two worlds with different needs – telecommunications and computing – had to be bridged. This was achieved by a technical standard called Transmission Control Protocol/Internet Protocol (TCP/IP). TCP/IP runs over the underlying infrastructure, and all applications run over TCP/IP. Nowadays, most telecommunication infrastructure is built to fit the needs of digital communication and the Internet.
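This layering can be illustrated with a minimal sketch (hypothetical echo example using Python's standard socket API): the application code below is the same whether the bytes ultimately travel over copper, fibre, or radio, because TCP/IP hides the underlying infrastructure from the application.

```python
import socket
import threading

def echo_server(server_sock):
    # Accept one connection and echo the received bytes back unchanged.
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# Listen on the loopback interface; port 0 lets the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# The application layer only sees a byte stream; TCP provides ordered,
# reliable delivery, IP routes the packets, and the physical medium
# (the infrastructure layer) is invisible at this level.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello over TCP/IP")
reply = client.recv(1024)
client.close()
server.close()
print(reply.decode())
```

The same programming model applies to any application running over the Internet, which is precisely what made the convergence of computing and telecommunications possible.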
Best effort vs Quality of Service
The telecommunications infrastructure has been growing rapidly over the last 60 years. The very first networks were built as dedicated end-to-end connections: the link between two end-points was stable, fully available, and able to offer ‘quality of service’. The need to connect as many end-points as possible, together with the increase in the volume of data flows, required a change in this approach.
Today, connectivity is provided to everyone, but certain technical aspects of the connection (speed, stability, delay, etc.) are not guaranteed. This principle is called ‘best effort’. The closer the link is to the end-point, the higher the probability that the customer is served on a best-effort basis. Because bandwidth is shared, fair use policies (FUPs) can be applied, certain types of traffic can be prioritised (even under net neutrality provisions), and other limits can be imposed.
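How such prioritisation works on a shared link can be sketched with a simple priority queue (a hypothetical illustration, not any operator's actual scheduler): packets waiting on the link are drained in priority order, so latency-sensitive traffic such as voice is served before bulk transfers.

```python
import heapq
import itertools

counter = itertools.count()  # tie-breaker keeps FIFO order within a class

def enqueue(queue, priority, packet):
    # Lower number = higher priority; heapq pops the smallest key first.
    heapq.heappush(queue, (priority, next(counter), packet))

link_queue = []
enqueue(link_queue, 2, "bulk-download-chunk-1")   # background transfer
enqueue(link_queue, 0, "voice-frame-1")           # latency-sensitive
enqueue(link_queue, 1, "video-frame-1")
enqueue(link_queue, 0, "voice-frame-2")

# Drain the queue as the link becomes free: voice first, bulk last.
sent = [heapq.heappop(link_queue)[2] for _ in range(len(link_queue))]
print(sent)
```

Regardless of arrival order, both voice frames leave the queue before the video frame, and the bulk chunk leaves last.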
The convergence of infrastructure and computer networks is possible thanks to the TCP/IP protocol, which works on the best-effort principle. This means that almost the entire Internet works on a best-effort basis. Technical development across all three layers of the Internet seeks to emulate quality of service as closely as possible. While the quality of Internet connectivity can reach a satisfactory level, there are still cases – such as remote surgery, aviation, and military use – where current technical solutions can be insufficient.
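The best-effort principle is visible directly in the UDP transport protocol, which exposes IP's service model to applications with no acknowledgements and no retransmission. A minimal sketch (hypothetical, over the loopback interface, where delivery normally succeeds):

```python
import socket

# Receiver: bind to any free loopback port and wait briefly for data.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
receiver.settimeout(2.0)
addr = receiver.getsockname()

# Sender: fire and forget -- sendto() returns without any delivery report.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram", addr)

try:
    payload, _ = receiver.recvfrom(1024)  # over loopback this normally arrives
except socket.timeout:
    payload = None  # on a real network, packet loss would look exactly like this
finally:
    sender.close()
    receiver.close()
print(payload)
```

TCP builds reliability on top of this best-effort service by acknowledging and retransmitting lost segments, which is why it cannot guarantee *when* data arrives, only *that* it eventually does while the connection holds.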
The last mile
The telecommunications infrastructure faces the problem of how to reach the end user. Access networks should be dense, and designed and built systematically so that they reach all customers (even hypothetical ones), while overcoming the obstacles of public space (roads, buildings, rural areas) and the cost of deployment. This issue is known as the ‘last mile’. The most common way to bridge the last mile is to use already-built infrastructure such as copper telephone wires, cable TV networks, or mobile networks. Such infrastructure is often in the hands of a monopolistic operator, so governments and regulatory bodies usually address the issue by obliging operators to rent out their local loops (local loop unbundling).
Cable vs wireless
The technical advancements of the last decade have strengthened the idea that broadband access to the Internet could be provided through wireless connections. While such connectivity has obvious advantages, several aspects need to be kept in mind. The air is a shared medium, and the part of the electromagnetic spectrum used for communications therefore requires stricter regulation. A wireless connection is exposed to interference from various sources (weather conditions, cosmic radiation, etc.) and is more vulnerable to external attacks (hacking, eavesdropping, sabotage, etc.). In terms of quality and speed, wireless connections currently cannot compete with cable infrastructure.