History and Evolution of Internet Backbones & Interconnection
Derived From: Characteristics and Competitiveness of the Internet Backbone Market, GAO-02-16 (Oct 2001)
Most residential users access the Internet over DSL, cable, wireless, or dial-up lines. For a residential customer, the ISP sends the user's Internet traffic on to the backbone network. To perform this function, ISPs obtain direct connections to one or more Internet backbone providers (connecting to multiple backbones is known as multihoming and creates redundancy to protect against network outages). Small business users may also connect to a backbone network through an ISP; large businesses, however, often purchase dedicated lines that connect directly to Internet backbone networks.
An ISP's traffic connects to a backbone provider's network at a facility known as a "point of presence." Backbone providers have points of presence in varied locations, although they concentrate these facilities in more densely populated areas where Internet end users' demand for access is greatest. An ISP or end user far from a point of presence can reach a distant point of presence over telecommunications lines. Figure 1 depicts two hypothetical Internet backbone networks that link at interconnection points and carry traffic to and from residential users through ISPs and directly from large business users.
Once on an Internet backbone network, data packets created by the end user's computer are routed over the best known available route and reassembled at their destination point; it is possible for different packets to take different routes between end points. Internet packets are created and routed pursuant to the IETF-standardized Internet Protocol (IP).
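The route-independence described above can be sketched in toy form: because each packet carries enough information to put the message back in order, the destination can reassemble it even when packets arrive out of order after taking different paths. This is an illustration only, not real IP mechanics; in practice IPv4 fragments carry byte offsets, and reordering across routes is handled by higher layers such as TCP.

```python
# Toy illustration (not actual IP): packets tagged with sequence numbers
# can be reassembled at the destination regardless of arrival order.
def fragment(message, size):
    """Split a message into (sequence_number, data) packets."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Rebuild the message by sorting packets on sequence number."""
    return "".join(data for _, data in sorted(packets))

pkts = fragment("interconnection", 4)
pkts.reverse()                 # simulate out-of-order arrival via different routes
print(reassemble(pkts))        # interconnection
```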
As the network evolved, the business arrangements for interconnection evolved with it. There have been several eras of interconnection: [Norton, Access Power Peering] [Interviews of Lawrence G. “Larry” Roberts Interviewed by: James L. Pelkey, Computer History Museum 13 (June 1988) ("That's one of the arguments that I have about invention, in general, is that it has to follow the economic trends and you have to keep changing with them, whatever they are.")] [Brock, Economics of Interconnection, at i ("This problem is more complex than past ones because there are no clear stationary boundaries across which interconnection must occur and because there will be a need for interconnection among companies with different and changing degrees of market power.")].
See also Internet History.
The origins of the Internet date back to the ARPANET, the first packet-switched network. The ARPANET was the vision of the DOD research office ARPA and its leaders J.C.R. Licklider, Robert Taylor, and Larry Roberts. In the 1960s, ARPA was funding computer centers at major universities, and every time it gave the latest, greatest computer to one university, the other universities wanted one too. What if, thought Larry Roberts, all of ARPA’s funded computer centers could be networked, and all of the computer resources, research, and data could be shared? In studying how to build such a network, Larry Roberts selected packet-switched technology with a decentralized network design. The ARPANET was launched in 1969 and became the first nationwide computer network.
The ARPANET provided end-to-end network services to users. ARPANET was both the backbone carrying traffic across the country and the access network to the university campus. [Larry Roberts "The ARPANet and Computer Networks," Youtube Jan. 28, 2016 (Computer History Museum Presentation from 1986, describing the ARPANET as a "single-level" network)] There was no significant interconnection with other networks at that time; it was all simply the ARPANET. Use of the ARPANET and the Internet was restricted at that time to DOD and DOD funded universities.
In 1975, the ARPANET matured from an experimental network into an operational network, and DCA took over its operational oversight. In 1983, the ARPANET transitioned to the Internet Protocol and became "the Internet." In 1990, when the ARPANET was decommissioned, its role of serving DOD communications needs was taken over by MILNET.
Internet :: DOD and Research Community Playground
- ARPANET operation
- CSNET operation
ARPANET’s success begat ALOHANET, SATNET, PRNET and other packet-switched networks. [Aiken p. 14 (describing major federal agency backbones)] It was clear that ARPANET's network protocol would have to be revised in order to facilitate interconnection. In 1974, Vint Cerf and Bob Kahn released A Protocol for Packet Network Intercommunication. The design objective of the Internet was to enable interconnection between otherwise incompatible networks, promoting the research and innovation occurring at the edges, and leveraging "network effect." [Mayo (quoting Vint Cerf, "What Bob Kahn and I did was to demonstrate that with a different set of protocols you could get an infinite number of—well, infinite is not true, but an arbitrarily large number of—different heterogeneous packet-switched nets to interconnect with each other as if it was all one big giant network. TCP is the thing that makes the Internet the Internet.").] IP made interconnection easy and coordination unnecessary. According to Bob Kahn, "The idea of the Internet was that you would have multiple networks all under autonomous control. By putting this box in the middle, which we eventually called a gateway, it would allow for the federation of arbitrary numbers of networks without the need for any change made to any particular network. So if BBN had one network and AT&T had another, it would be possible to just plug the two together with a [gateway] box in the middle, and they wouldn't have to do anything to make that work other than to agree to let their networks be plugged in."
[SEGALLER 111] The Internet reflected a culture where connectivity was paramount. With each interconnection of an additional network, the value of the Internet grew. [Brian Carpenter, Architectural Principles of the Internet, IETF RFC 1958, IETF, Sec. 2.1 (June 1996) ("the goal is connectivity, the tool is the Internet Protocol, and the intelligence is end to end rather than hidden in the network. The current exponential growth of the network seems to show that connectivity is its own reward.")] [THE INTERNET'S COMING OF AGE, COMPUTER SCIENCE AND TELECOMMUNICATIONS BOARD, NATIONAL RESEARCH COUNCIL 35 (2001) ("the value placed on connectivity as its own reward favors gateways and interconnections over restrictions on connectivity")] [REALIZING THE INFORMATION FUTURE: THE INTERNET AND BEYOND, COMPUTER SCIENCE AND TELECOMMUNICATIONS BOARD, NATIONAL RESEARCH COUNCIL 3 (1994) (setting forth vision for Open Data Networks, stating in first principle that network should "permit universal connectivity.")] [Prof. David Clark, A Cloudy Crystal Ball: Visions of the Future, Presentation at the IETF, Slide 4 (July 1992) ("Our best success was not computing, but hooking people together").]
Origins of Settlement Free Peering
Internet culture stood as an antithesis to Bell telephone culture. In the early 1960s, Paul Baran, who was concerned about the vulnerability of U.S. communications, advocated that the Department of Defense migrate to a distributed packet-switched network that could survive failure and still get a “Go; No-Go” command to the field. Bell-trained DOD engineers dismissed packet-switched communications as an idea that could not possibly work. In 1971, Larry Roberts, having successfully demonstrated the viability of his packet-switched network experiment, attempted to give ARPANET to AT&T; AT&T was not interested. In the meantime, AT&T had refused to sell private lines to nascent data networks, refused to allow foreign devices (i.e., modems) to be attached to its network, and refused to interconnect its network with nascent rivals. Those seeking telecommunications services in order to build computer networks became frustrated, and serious questions arose concerning whether AT&T’s telecommunications services would meet the needs of growing computer networks.
Thus emerged a cultural rivalry between Nethead and Bellhead cultures. In the eyes of the Netheads, Bellheads were about perpetuating a century-old communications network monopoly that followed a command-and-control model of central operation which stunted innovation, while Netheads were all about innovation, building a network that fostered the cool things transpiring at the edges. Netheads wanted nothing of the Bellhead model. [Greenstein 2015 at 38 ("most had an almost visceral dislike for" Ma Bell)] [Mayo (quoting Bob Taylor, "Working with AT&T would be like working with Cro-Magnon man. I asked them if they wanted to be early members so they could learn technology as we went along. They said no. I said, Well, why not? And they said, Because packet switching won’t work. They were adamant.").]
In the 1980s, Netheads had to resolve how to interconnect networks. Netheads wanted to interconnect ARPANET with the NSF-sponsored CSNET, but what should be the terms, and who should pay whom? One model before them was the byzantine Bell accounting scheme, which one engineer described as so complicated it was akin to drinking coffee one molecule at a time; a marvelous feat of engineering, if an utterly pointless one. [Moore 1999] [Jacobson 1999 (discussing how Internet culture is different than BellHead culture and the implications for the future of the internet)] [Kleinrock 1994 at 67 (bemoaning the additional cost and complexity of accounting necessary for usage-based pricing)] [MacKie-Mason 1993 at 14 (describing the cost of phone company style billing and accounting as applied to the packet-switched Internet as “astronomical”).] Netheads' incentive was to grow network effect and continue innovation at the edges. To that end, they wanted the lowest possible barriers to the expansion of the network. The solution was settlement-free interconnection, devoid of complicated Bell accounting. A paper by Lyman Chapin and Chris Owens describes how the arrangement came to be:
In modern terms, we would say that the customers of one ISP (ARPAnet) could not communicate with the customers of another ISP (CSNet), because no mechanism existed to reconcile the different Acceptable Use Policies of the two networks. This disconnect persisted as both sides assumed that any agreement to exchange traffic would necessarily involve the settlement of administrative, financial, contractual, and a host of other issues, the bureaucratic complexity of which daunted even the most fervent advocates of interconnection—until the CSNet managers came up with the idea that we now call “peering,” or interconnection without explicit accounting or settlement. A landmark agreement between NSF and ARPA allowed NSF grantees and affiliated industry research labs access to ARPAnet, as long as no commercial traffic flowed through ARPAnet.
[Chapin 2005 at 9] [Norton Chap. 8]
- 1992 ANS / NSFNET / CIX
In 1985, the National Science Foundation (NSF) concluded that the Internet was a good thing, and that it should be shared beyond the DOD family. NSF wanted to expand the Internet to NSF funded universities, and in order to do this, NSF decided to build the NSFNET. But NSF knew that it would be impractical to build an end-to-end network, providing networking services to all users anywhere. Instead, NSF concluded that it would build the first nationwide dedicated backbone, the NSFNET, which would connect to regional networks, which would then connect to university networks. These NSF decisions gave us the tiered structure of the Internet that has persisted for decades. [Compare Larry Roberts "The ARPANet and Computer Networks," Youtube Jan. 28, 2016 (Computer History Museum Presentation from 1986, describing the ARPANET as a "single-level" network)]
The original NSFNET backbone operated at 56 kbps and was immediately congested. NSF decided to upgrade the backbone to T1 (1.5 Mbps). In 1987, the contract to construct and operate the new high-speed backbone was awarded to Merit, IBM (the IBM backbone was acquired by AT&T in 1999), and MCI (the MCI backbone was acquired by Cable & Wireless in 1999). The commercial firms were eager to compete for the contract, seeing it as an opportunity to learn the new technologies and gain a competitive advantage. In 1990, these firms sent a letter to NSF announcing their intention to set up a not-for-profit, Advanced Network & Services (ANS), to operate the backbone. The founders of ANS would next attempt to establish ANS as a for-profit, commercial network, leveraging its position operating the NSFNET. This garnered significant controversy.
NSF followed CSNET's example for interconnection settlements; interconnections between NSFNET and regional networks were on a settlement-free basis. [MacKie-Mason 1993 at 18 (“The full costs of NSFNET have been paid by NSF, IBM, MCI and the State of Michigan”).]
During this era the Federal Internet eXchange points were established to exchange traffic between federally funded networks.
Commercial networks emerged in the early 1990s, but they could not exchange traffic through the academic NSFNET. On the one hand, Advanced Network & Services (ANS), the contractor that operated the NSFNET, established a commercial backbone service and offered to sell interconnection to nascent commercial networks. ANS had all of the NSFNET clients as end-users, including regional networks and academic networks; ANS, as the largest network, could leverage network effect. But the other commercial networks were not interested in paying ANS for the right to access end-users or in helping ANS become the AT&T-style monopoly of the Internet. Instead they established the Commercial Internet eXchange (CIX) and exchanged traffic on a settlement-free basis. For these early commercial networks, connectivity was paramount and growth of access services was king. [Brock, Economics of Interconnection at ii ("Commercial Internet service providers agreed that interchange of traffic among them was of mutual benefit and that each should accept traffic from the other without settlements payments or interconnection charges. The CIX members therefore agreed to exchange traffic on a "sender keep all" basis in which each provider charges its own customers for originating traffic and agrees to terminate traffic for other providers without charge.")] First UUNET, PSINET, and CERFNET joined CIX. Then Sprint joined. Soon most of the Internet could be reached through CIX. Connectivity grew network effect, which grew the value of the access service that these commercial networks were selling to end-users. CIX became the model of commercial interconnection, while ANS became isolated. [Greenstein 2015 at 81 ("Just a little less than a year later, CIX essentially had everyone except ANS. By the time Boucher held his hearing, ANS had become isolated, substantially eroding their negotiating leverage with others. By June 1992 ANS's settlement proposals no longer appeared viable. In a very public surrender of its strategy, it agreed to interconnect with the CIX on a seemingly short-term basis and retained the right to leave on a moment's notice.")] [Noam 2001 at 63 ("Soon the relative use by the commercial and nonprofit sectors kept shifting, and the power over interconnection moved to the former. By 1993, approximately 80 percent of all Internet sites could be accessed outside the NSFNET structure. CIX blocked ANS traffic from routing through the CIX router, thus depriving ANS users of connectivity to CIX members. Humbled, ANS joined CIX in 1994.")] [Srinagesh at 143 ("In October 1993, CIX, apparently without warning, blocked ANS traffic from transiting the CIX router. At this point, ANS (through its subsidiary CO-RE) joined the CIX and full connectivity was restored.")] In 1994, ANS's assets were sold off to AOL. [History, Advanced Network Services (2004)] [Salus 1995 at 200]
Birth of the Public Internet
- 1997: Tier 1 Depeering of smaller networks
- 1998: GTE / Exodus
- 2000: PSINet / AOL
- 2000: PSINet / Exodus
Congress concluded that the Internet was good, and sought to make it available to all. NSF was tasked with privatizing the Internet. In order to be successful, NSF had to establish a means for emerging commercial networks to exchange traffic. Following the CIX model, NSF built four Internet exchange points, known as NAPs, in Washington, D.C., New York, Chicago, and San Jose. [National Science Foundation Solicitation 93-52, Solicitation for Network Access Point Manager, Routing Arbiter, Regional Network Providers, and Very High Speed Backbone Network Services Provider for NSFNET and NREN Program (May 6, 1993) (setting forth NSF's plan for privatizing NSFNET).] [GREENSTEIN, HOW THE INTERNET BECAME COMMERCIAL at 82 (NAPs were modeled on CIX and "helped the commercial Internet operate as a competitive market after the NSFNET shut down.").]
In 1995, NSF decommissioned the NSFNET, and the commercial Internet was born in its image. There were Tier 1 backbone networks (WANs) that provided nationwide or global service, Tier 2 networks (MANs, Metro) that provided regional service, and Tier 3 networks (LANs, Local) that provided access service. Traffic from one end-user to another end-user would travel up the topology from Tier 3 networks to be exchanged at the Tier 1 level, and then travel back down. Within networks, robust capacity moved traffic. [See David Young, Why is Netflix Buffering? Dispelling the Congestion Myth, VERIZON PUBLIC POLICY BLOG (July 10, 2014) (diagramming Verizon network with backbone, metro and local networks)] Between networks lay interconnection, where capacity could be constrained. Interconnection capacity constituted an aggregation of all traffic from the different end-users, services, and firms to which a network provider offered service. It could become a traffic pinch-point. In order to avoid congestion, interconnection partners would have to cooperate. [See KLEINROCK, REALIZING THE INFORMATION FUTURE: THE INTERNET AND BEYOND at 183 ("Because the network is not implemented as one monolithic entity but is made of parts implemented by different operators, each of these entities must be separately concerned with achieving good loading of its links, avoiding congestion, and making a profit. The issue of sharing and congestion arises particularly at the point of connection between providers. At this point, the offered traffic represents the aggregation of many individual users, and thus cost-effective sharing of the link can be assumed. However, if congestion occurs, one must eventually push back on original sources. Options include pricing and technical controls on congestion; there may be other mechanisms for shaping consumer behavior as well.”)]
In the late 1990s, academic and government networks focused on research, development, and innovation, not commercial competition; they followed the simple accounting scheme for interconnection: settlement-free peering. Commercial backbone providers adopted settlement-free peering as a means of rapidly growing their business plans.
Access networks, which needed to provide full Internet service to their customers, interconnected with and paid transit to commercial backbone providers. Controversy swirled during the late 1990s as the commercial backbones matured their business plans, converting smaller networks that were dependent on the backbone networks' services from settlement-free peers into paying transit customers. Appeals for intervention reached the FCC, which declined to intercede, finding the Internet backbone market competitive.
The NSF-established NAPs were owned and operated by existing carriers such as MCI and Sprint. During the late 1990s, new carrier-neutral IXPs, such as Equinix, arose, and the role of IXPs in the ecosystem grew dramatically.
"Backbones of the Internet are run at lower fractions (10% to 15%) of their capacity." [Odlyzko 1998 p. 1]
[Odlyzko 1998 p. 9 (traffic through Internet "backbones totaled between 2,500 and 4,000 TB/month and that the effective bandwidth was around 75 Gbps, which gives average utilization of between 10% and 16%... for MCI, their publicly declared traffic of 170 TB/week at the end of 1997, together with the estimate of a backbone of about 400 T3 equivalents, produces an average utilization estimate of 15% (again assuming 2.5 backbone hops per packet).")]
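Odlyzko's utilization figures can be reproduced with simple arithmetic: convert monthly traffic volume to an average bit rate and divide by effective backbone capacity. A minimal sketch, assuming a 30-day month and decimal (10^12-byte) terabytes:

```python
def avg_utilization(tb_per_month, capacity_gbps, days=30):
    """Average utilization = mean traffic rate / effective capacity."""
    bits_per_month = tb_per_month * 1e12 * 8          # TB -> bits
    rate_bps = bits_per_month / (days * 86400)        # mean bits/second
    return rate_bps / (capacity_gbps * 1e9)

# Odlyzko's range: 2,500-4,000 TB/month over ~75 Gbps effective bandwidth
print(f"{avg_utilization(2500, 75):.0%} - {avg_utilization(4000, 75):.0%}")  # 10% - 16%
```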
Market Structure of the Early Commercial Internet
As designed under the NSFNET, the original backbone market consisted of national backbone providers, regional backbone providers, and local networks. This classification roughly came to be known as Tier 1, Tier 2 and Tier 3 providers. [CSTB, 2001, p. 109 (Structure of the Internet Service Provider Industry)] [ENISA Report, 2011, p. 80] [CISCO, 2009, Slide 3] [MCI WCOM / Sprint Merger DOJ Complaint paras 27-30.] [AT&T / Bell South Merger Order 2007, para 123] [SBC / AT&T Merger Order ¶ 111 2005] [CSTB, Realizing the Info Future p. 21 1994 ("Each component network is a backbone network that provides connections from hosts and users to the Internet. Host computers are connected to local area networks, some of which are campus-wide, that themselves are connected to metropolitan and/or regional networks. The regional networks are connected to a wide-area backbone network, giving rise to a three-level hierarchy in the United States")] [CSTB, Realizing the Info Future 239 1994 ("The NSF's funding arrangements have given rise to a three-tier structure of campus (primarily universities and research institutions), regional, and backbone networks serving the research community. This conceptualization is complemented by commercial networks, international networks, and other interconnections that have resulted in a globally interconnected mesh of networks known as the Internet.")] [Norton, The Evolution of the US Internet Peering Ecosystem]
- Tier 1 networks are defined as networks that peer with other Tier 1 networks, acquire transit from no one, and sell transit service. Tier 1 networks may be defined on a market basis (national, global). Tier 1 networks peer with each other in a mesh network and thereby have access or interconnectivity with the full Internet. [Norton, The Art of Peering, p. 1 (“From “Internet Service Providers and Peering”, a Tier 1 ISP is defined as an ISP that has access to the global routing table but does not purchase transit from anyone. In other words, the routing table is populated solely from Peering Relationships.”)] [Norton, The Evolution of the US Internet Peering Ecosystem (“A regional Tier 1 ISP is an ISP that has access to the entire Internet region routing table solely through Peering relationships.”)] [Faratin, 2008, p. 55]
- Tier 2 networks pay transit for some of their access, peer with some networks, and generally sell transit to downstream networks. [Norton, Evolution of the US Peering Ecosystem (“A Tier 2 ISP is an Internet Service Provider that purchases (and therefore resells) transit within an Internet Region.”)]
- Tier 3 networks are local Internet Service Providers (Internet Access Providers).
- “Stub networks” are networks that purchase transit from upstream providers, but do not sell transit (access) to anyone else. A stub network might be, for instance, a corporate network. [ENISA, 2011, p. 79] [Freedman Slide 12]
Tier 1 backbone service was recognized as a separate relevant product market, and the geographic reach of that market is national. [MCI / WCOM Merger Order para 148 ("all parties appear to agree that the appropriate geographic market is nationwide")] [MCI WCOM / Sprint Merger DOJ Complaint para 31.] [AT&T / Bell South Merger Order 2007, paras 123-24 ("Tier 1 backbone services – the transporting and routing of packets between ISPs and large enterprise customers and Internet backbone networks – constitutes a separate relevant product market"; "we find it appropriate to evaluate Tier 1 backbone services at the national level")] [Verizon / MCI Merger Order ¶ 113, 115 2005] [Verizon/MCI Order, 20 FCC Rcd at 18494, para. 113] [Cremer p 11]
"We find that Tier 1 backbone services—the transporting and routing of packets between ISPs and large enterprise customers and Internet backbone networks – constitutes a separate relevant product market. In this regard, we note key differences in quality and price between the transit and DIA services offered by Tier 1 and lower tier IBPs. For example, lower tier IBPs, ISPs, and multi-location enterprise customers typically seek service from a provider that can serve all their locations, and not all IBPs with POPs in a particular location will have such reach to all other locations. Only Tier 1 providers can offer such a high level of ubiquitous service. We find that there are no substitutes for these Tier 1 connectivity services sufficiently close to defeat or discipline a small but significant nontransitory increase in price." [SBC / AT&T Merger Order ¶ 112 2005]
"Consistent with Commission precedent and the DOJ’s previous findings, we analyze the market for Tier 1 IBPs using a national geographic market. As with special access, enterprise, and mass market services, we conclude that the relevant geographic market for Tier 1 IBP services is the customer’s location. We then aggregate locations where customers face similar competitive choices. Since all Tier 1 IBPs have extensive nationwide networks, we can aggregate Tier 1 customers throughout the United States since they effectively face the same choice of Tier 1 IBPs anywhere in the United States. Moreover, purchasers of Tier 1 Internet backbone service generally need the ability to connect at multiple locations throughout the United States. Consequently, we find it appropriate to aggregate customer locations and evaluate Tier 1 backbone services at the national level." [Verizon / MCI Merger Order ¶ 116 2005]
By analyzing the routing table, experts can determine which networks buy transit, sell transit, or peer, and therefore which networks are Tier 1. Two excellent sources for identifying Tier 1 networks are Renesys and CAIDA. [Atlas Internet Observatory 2009] [Brown, 2009] [ENISA, 2011, p. 81] [Luckie, 2012] Renesys clarifies that there is not a direct correlation between network size and Tier 1 status. Not all of the largest networks (by traffic volume or interconnection counts) are Tier 1, and not all Tier 1 networks are among the largest networks. [A Baker’s Dozen in 2009, Renesys Blog (Dec. 31, 2009) (“Finally, note that large global providers are not necessarily transit-free, sometimes called Tier 1, providers. Tinet and China Telecom, who made our rankings [of largest networks], are not transit-free, whereas, XO (AS 2828) and Abovenet (AS 6461), who did not make our rankings, are.”).]
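The inference described above can be sketched against a CAIDA-style AS-relationship list (the data format and the customer AS number here are illustrative assumptions, not real measurement data): a network is transit-free, i.e. "Tier 1" in this sense, if it never appears as the customer side of any provider-to-customer relationship.

```python
# Sketch: infer transit-free ("Tier 1") networks from an AS-relationship
# list. Each entry is (as_a, as_b, rel), where rel is "p2c" (a is b's
# transit provider) or "p2p" (settlement-free peers), in the style of
# CAIDA's inferred AS-relationship datasets.
def transit_free(relationships):
    ases = set()
    has_provider = set()
    for a, b, rel in relationships:
        ases.update((a, b))
        if rel == "p2c":          # b buys transit from a
            has_provider.add(b)
    return ases - has_provider    # networks with no transit provider

rels = [
    (701, 7018, "p2p"),    # two backbones peering settlement-free
    (701, 64512, "p2c"),   # hypothetical AS 64512 buys transit from 701
    (7018, 64512, "p2c"),  # ... and multihomes to 7018
]
print(sorted(transit_free(rels)))  # [701, 7018]
```

Real inference is harder: peering and transit must themselves be inferred from BGP paths, and a network can be transit-free in one region while buying transit in another.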
Generally, Internet providers' internal networks parallel this historic topology, dividing their own networks into three parts: backbone, metro, and local access. See, e.g., Charter Annual Report 2013, p. 7 (“Our network includes three components: the national backbone, regional/metro networks and the "last-mile" network."); David Young, Why is Netflix Buffering? Dispelling the Congestion Myth, Verizon Public Policy Blog (July 10, 2014) (diagramming Verizon network with backbone, metro and local networks); Robert D. Doverspike, K. K. Ramakrishnan, Chris Chase, "Structural Overview of ISP Networks," Guide To Reliable Internet Services And Applications, Ed. Charles R. Kalmanek, Sudip Misra, Yang Yang, Springer, 2010, 19-93; Issa AbuEid, Network Design Evolution (T-Mobile USA Case Study), NANOG68, Slides 4-7 (Oct. 18, 2016).
Routing Around Transit Fees ~ Flattening the Topology of the Net
- 2001: NRIC Advisory committee recommends publishing of peering policies
- 2001: Telecommunications: Characteristics and Competitiveness of the Internet Backbone Market GAO-02-16
At the turn of the 21st century, access and edge providers paid backbone providers for transit service, regional backbone providers paid Tier 1 backbone providers for transit service, and Tier 1 providers paid no one. Money flowed up the ecosystem to the top of the hierarchy. It was good money; transit was sold at a rate of $1,000 per Mbps per month. It was a good business plan to be a Tier 1 provider and king of the ecosystem. Multiple providers would fight it out in a competitive marketplace to join the coveted Tier 1 clique, leading to occasional peering disputes that usually lasted no more than a few days. The value of providing access service exceeded the value or costs of interconnection, and thus providers could not afford to withstand loss of service due to an interconnection spat for any sustained period; customers in a competitive market had a habit of switching to more reliable services.
The Internet is a distributed network that routes around "failures." The high interconnection fees at the turn of the century were a "failure" that could be routed around. Internet access networks and content networks had an incentive to avoid the high costs of transit and began to re-engineer their networks. They would avoid transit fees by (1) growing their own backbones, which they could use instead of third-party backbones (see Mergers); (2) "donut peering," establishing interconnection directly with other networks and bypassing transit fees; and (3) interconnecting directly with CDNs. All of these strategies were facilitated by the rise of IXPs; once a provider built out its network and reached an IXP, it could employ all of those strategies.
Evolving Market Structure
The conclusion that Tier 1 backbone service constituted a separate relevant product market for purposes of economic analysis lasted until 2011 and the Level 3 / Global Crossing merger. In that proceeding, XO challenged the merger, arguing that the merger of Level 3 and Global Crossing would result in consolidation of the market and a tipping in the merged firm's favor. Level 3 responded that backbone services no longer constituted a separate market and that backbones faced competition from new sources, including regional networks, local interconnection, and content delivery services. The FCC seemed to accept Level 3's argument, stating:
We recognize that there have been changes in how Internet traffic is transported. See Level 3/GCL Reply at 8–11 (arguing that Tier 1 ISPs are subject to new sources of competition). In analyzing the potential public interest harms, however, we have assumed, without defining a relevant market, a “worst-case scenario” in which the transport services offered by Tier 1 ISPs are a distinct category of service.
Level 3 / Global Crossing, MOO, n. 58.
We recognize marketplace developments that the Applicants highlight, but because we find unpersuasive the record evidence of risks of public interest harms under the Commission’s historical analysis, we need not address at this time Applicant’s arguments that the relevant product market should be expanded to include additional sources of competition.
Level 3 / Global Crossing, MOO, para. 21.
Rise of the Access Networks
- 2009: Cogent / Department of Energy's Energy Sciences Network (ESnet)
- 2009: Hurricane Electric / Cogent
- 2010: Level3 (Netflix) / Comcast
- 2012: Cogent / China Telecom
- 2013: Cogent (Netflix) / Verizon
- 2013: Cogent / Orange
- 2014: Netflix / TWC
- 2014: Netflix / Verizon
- 2014: Netflix / Comcast
- 2014: Netflix / AT&T
- The Rise of the Access Networks 2008+ (access networks become nationwide, with backbones, and dominate interconnection. Interconnection moves from a backbone arrangement to an arrangement between access networks and CDNs at IXPs near ISP gateways. Long haul transit growth goes to zero. Access networks successfully negotiate access fees)
The Internet interconnection market would then undergo seismic evolution. By 2010, large incumbent providers had reestablished a strong position in the network ecosystem, and leveraged network effect and interconnection to implement strategic goals. Broadband Internet access service providers were able to reverse the flow of interconnection settlement fees, going from transit customers to gatekeepers, charging paid peering for access to their end-users. The interconnection market had departed from the characteristics of the early commercial Internet, where backbone networks were king and the market was hyper-competitive, and returned to traditional consolidated network market economics. We had come full circle. See generally WU, THE MASTER SWITCH: THE RISE AND FALL OF INFORMATION EMPIRES (discussing how communications markets go through cycles of innovation and disruption, competition, and then consolidation and market power).
As with traditional network interconnection, Internet networks, and specifically Internet access networks, have the following incentives driving their business strategies:
- Network Effect: Grow the reach of their network and the value of their network to their own customers.
- Transit Cost Avoidance: Access providers traditionally paid backbone providers for transit service. This was good money. Even as the price of transit plummeted, the amount of transit consumed continued to grow robustly, presenting access providers with sizable bills. Furthermore, in paying a backbone provider for transit, an access provider is providing revenue to a potential competitor. A further aggravation is that the access network, as a customer, is vulnerable to the pricing vagaries of its vendors. To an access provider, this is a less than desirable arrangement. Thus, access providers have an incentive to avoid paying for transit service.
- Control of network and interconnection:
- Potential Anticompetitive Incentives
- This has been considered anticompetitive behavior and has been subject to antitrust (ex post) and regulatory (ex ante) remedies
- Barrier to Entry:
- Restrict entry of horizontal competitors such as other access networks, other backbone networks, and other CDNs
- Restrict entry of vertical competitors such as video services and telephone services
- As access networks advanced into the broadband Internet access service market, they enabled over-the-top competitors to their incumbent services. Innovation results in disruption of traditional business models. Over-the-top video distributors (OVDs) disrupt MVPD service, which constitutes approximately 55% of an incumbent provider’s revenue; VoIP disrupts incumbent traditional telephone service, which constitutes approximately 10% of revenue. [Back to the Future?, 2Q16, Leichtman Research Group (June 2016) (noting that while video is the chief source of revenue, subscriber numbers have been declining for the past nine years)] Incumbent providers were launching full force into a broadband market that provided a network platform supporting the very over-the-top competitors that were disrupting incumbent revenue streams. Incumbent communications providers made clear in no uncertain terms that they were disquieted by enabling their competitors. [Online Extra: At SBC, It’s All About ‘Scale and Scope,’ Bloomberg (Nov. 7, 2005) (SBC CEO Edward Whitacre: “How do you think they're going to get to customers? Through a broadband pipe. Cable companies have them. We have them. Now what they would like to do is use my pipes free, but I ain't going to let them do that because we have spent this capital and we have to have a return on it. So there's going to have to be some mechanism for these people who use these pipes to pay for the portion they're using. Why should they be allowed to use my pipes? The Internet can't be free in that sense, because we and the cable companies have made an investment and for a Google or Yahoo! (YHOO) or Vonage or anybody to expect to use these pipes [for] free is nuts!”)] [Compare Reed Hastings, Internet Tolls and the Case for Strong Net Neutrality, Netflix Media Center (Mar. 20, 2014) (“Some ISPs say that Netflix is unilaterally "dumping as much volume" as it wants onto their networks. 
Netflix isn't "dumping" data; it's satisfying requests made by ISP customers who pay a lot of money for high speed Internet. Netflix doesn't send data unless members request a movie or TV show.”)]. Incumbent providers have an incentive to protect their services, market share, and sources of revenue. [Open Internet 2015 paras. 22, 140 (“[B]roadband providers have incentives to interfere with the operation of third-party Internet-based services that compete with the providers’ revenue-generating telephony and/or pay-television services.”)] [AT&T / DirecTV Merger Order para. 205 (finding that “AT&T has an incentive to discriminate against unaffiliated OVDs”)] [Charter / TWC Merger Order para. 117 (“Restricting interconnection capacity for OVDs does not just pressure OVDs to pay higher fees for interconnection, it can also reduce the ability of the OVDs to serve their customers, and potentially driving those customers to switch to New Charter’s affiliated video services.”)] [Fourteenth Video Competition Report paras. 271, 274 (“MVPDs have the ability and incentive to degrade the broadband service available to unaffiliated OVDs.”)] [Sallet 2015 at 8 (discussing how AT&T and Comcast had incentives “to discriminate against competing online video distributors (OVDs) such as Netflix or Hulu”)].
- Extract rent: Leverage market position to extract as much revenue as possible from "over the top" firms (the applications and services that use the network to deliver to end users).
- Leverage has traditionally come from control of access to end users (the value in network effect). Control of access to end users gives incumbent providers an opportunity to extract rent and derive a revenue source. Incumbent providers have an incentive to leverage interconnection, reversing the flow of settlement fees from paying transit to receiving paid peering, and thereby establishing a new revenue source.
Two-Sided Market
Internet networks exist in a two-sided market where they derive revenue both from subscribers and from content networks paying transit. [Krogfoss Slide 7 OECD 2011] Marcus concludes that "Content providers make substantial payments for network connectivity, either through payments to Internet backbone providers, ISPs, and CDNs or through investment in infrastructure." [Marcus Slide 7 OECD 2011] Krogfoss states "Content provider revenue is shared as transit revenue to other ISPs." [Krogfoss Slide 7 OECD 2011]
- Amie Elcan, CenturyLink, 95th Percentile Billing, NANOG 53 (Oct. 10, 2011),
- Slide 18: Two Sided Market
- Assume a moderately fast Internet service at 6-20 Mbps
- Current pricing (non-promotional) @ $50 per month
- For a heavy video customer downloading 150 GBytes per month, the price paid per consumed GByte is 33 cents ($50 = 5,000 cents; 5,000 / 150 ≈ 33)
- Content providers pay 1/2 to 3 cents per GByte ($1 - $5 per 95th percentile Mbps)
- The residential Internet consumer pays over 95% of the price per delivered GByte
- Slide 12: Price paid as calculated at the 95th percentile ranges from 32 cents per Mbps to $3.56 per Mbps
- Slide 19: ("Content delivery providers must accommodate bandwidth intensive video with a distinct time of day pattern")
- J. Gregory Sidak, Assessing the Network Neutrality Debate in the United States, p. 16 (“Additionally, allowing content providers to pay for service will help contribute to covering the sunk costs borne by service providers, thus increasing incentives to innovate and invest….”).
- Sidak, at p. 16 (“Both sides of the market exhibit positive demand for broadband use, and both sides should therefore pay a positive price. The same principle applies to specific network features, such as priority delivery. If the quality of an application such as video conferencing would improve from priority delivery, both the user (who enjoys a superior broadband experience) and the application provider (who, as a result of the improved consumer experience, benefits from increased demand for its product) are willing to pay for this service. If, as a consequence of network neutrality regulation, only end-users are permitted to pay for priority delivery, then end-users will purchase only a limited quantity of prioritized packets. If the content provider is allowed to pay, then a higher quantity of prioritized packets will be purchased, which results in a larger consumer benefit…. There is no economic reason why end users should cover all the costs of the network when both parties benefit from its use. By charging content providers for prioritized delivery, a broadband service provider could recover sunk costs, reduce prices to consumers, and subsidize access to more price-sensitive customers, thereby increasing overall broadband penetration….”).
- Robert Litan and Hal J Singer, Net Neutrality is Bad Broadband Regulation, p. 3 (“Because end users are more sensitive to price increases than are content providers, the incremental revenues raised on end users under a net neutrality regime cannot compensate ISPs for the forgone revenues on content providers; that means that the profitability of the broadband network will decline under a price-regulated net neutrality regime. Lower profitability means lower returns, which in turn means less investment.”).
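The 95th-percentile ("95P") billing method and the per-GByte price comparison described in the slides above can be sketched in a few lines of Python. The traffic samples, sampling counts, and the $/Mbps rate below are hypothetical illustrations chosen to fall within the $1-$5 range cited; only the $50 / 150 GByte consumer figures come from the slides.

```python
# Sketch of 95th-percentile ("95P") transit billing as described in the
# NANOG 53 slides above. Traffic samples and the $/Mbps rate are
# hypothetical; the consumer figures ($50/month, 150 GBytes) are cited.

def ninety_fifth_percentile(samples_mbps):
    """Sort the month's 5-minute rate samples, discard the top 5%,
    and bill at the highest remaining sample."""
    ordered = sorted(samples_mbps)
    index = int(len(ordered) * 0.95) - 1  # zero-based 95th-percentile slot
    return ordered[index]

# A month of 5-minute samples (~8,600 intervals), with evening video
# peaks that the 95P method shaves off before billing.
samples = [10.0] * 7000 + [40.0] * 1200 + [90.0] * 400  # Mbps, hypothetical

rate_95p = ninety_fifth_percentile(samples)  # the 90 Mbps peaks are discarded
price_per_mbps = 3.0                         # $/Mbps at 95P (hypothetical)
content_bill = rate_95p * price_per_mbps     # content side of the market

# Consumer side of the two-sided market: $50/month for 150 GBytes consumed.
consumer_price_per_gb = 50.0 * 100 / 150     # 5,000 cents / 150 ≈ 33 cents

print(f"95P rate: {rate_95p} Mbps -> monthly transit bill ${content_bill:.2f}")
print(f"Consumer pays about {consumer_price_per_gb:.0f} cents per GByte")
```

Note how the method favors bursty traffic: the 400 samples at 90 Mbps fall entirely within the discarded top 5%, so the content network is billed at 40 Mbps, while the consumer's flat monthly rate works out to roughly 33 cents per delivered GByte, an order of magnitude above the cited per-GByte transit prices.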
Backbone Provider Definitions
"Broadband providers interconnect with backbone networks— “long-haul fiber-optic links and high-speed routers capable of transmitting vast amounts of data.” Verizon, 740 F.3d at 628 (citing In re Verizon Communications Inc. and MCI, Inc. Applications for Approval of Transfer of Control, 20 FCC Rcd. 18,433, 18,493 ¶ 110 (2005))." [USTA v. FCC Slip 9 DC Cir. 2015]
"Backbone networks are interconnected, long-haul fiber-optic links and high-speed routers capable of transmitting vast amounts of data. See In re Verizon Communications Inc. and MCI, Inc. Applications for Approval of Transfer of Control, 20 F.C.C.R. 18433, 18493 ¶ 110 (2005). " Verizon v. FCC, No. 11-1355, p. 5 (D.C. Cir. Jan. 14, 2014)
"Finally, backbone providers, such as WorldCom, Sprint, AGIS, and PSINet, route traffic between Internet access providers, and interconnect with other backbone providers." In re Federal-State Joint Board on Universal Service, Report to Congress, FCC 98-67 ¶ 63 (April 10, 1998).
In Re Federal-State Joint Board on Universal Service, Report to Congress ¶ 101 (April 10, 1998) ("With respect to the provision of pure transmission capacity to Internet service providers or Internet backbone providers, we have concluded that such provision is telecommunications.")
"IBPs route traffic between ISPs and interconnect with other IBPs. . . . The essential service provided by IBPs is transmission of information between all users of the Internet. Although IBPs compete with one another for ISP customers, they must also cooperate with one another, by interconnecting, to offer their end users access to the full range of content and to other end users that are connected to the Internet. As a result of this interconnection among IBP networks, the Internet is often described as a "network of networks."" -- In re Application of WorldCom, Inc. and MCI Communications Corporation for Transfer of Control of MCI Communications Corporation to WorldCom, Inc., Report and Order, CC Docket No. 97-211 ¶ 143-44 (September 14, 1998) (IBPs = Internet Backbone Providers).
"Nevertheless, based on the record before us, we are inclined to agree with GTE and other commenters that Internet backbone services, which we define to be the transporting and routing of packets between and among ISPs and regional backbone networks, constitutes a separate relevant product market. These Internet backbone services can ensure the delivery of information from any source to any destination on the Internet. The facilities used to provide such Internet backbone services are routers and the high-speed transmission lines that connect these routers. We agree with GTE that there do not appear to be good demand substitutes for ISPs and regional backbone service providers to obtain national Internet access without access to IBPs." -- In re Application of WorldCom, Inc. and MCI Communications Corporation for Transfer of Control of MCI Communications Corporation to WorldCom, Inc., Report and Order, CC Docket No. 97-211 ¶ 148 (September 14, 1998) (IBPs = Internet Backbone Providers).
ISPs provide access to the Internet on a local, regional, or national basis, and most have limited network facilities. In order to provide Internet service to end users, ISPs and owners of other smaller networks interconnect with Internet backbone providers (IBPs)—larger Internet backbone networks. The backbone networks operate high-capacity long-haul transmission facilities and are interconnected with each other. Typically, a representative Internet communication consists of an ISP sending data from one of its customers to the IBP that the ISP uses for backbone services. The IBP, in turn, routes the data to another backbone network, which delivers the data to the ISP serving the end user to whom the data is addressed. - SBC / AT&T Merger Order ¶ 109 (2005).