Federal Internet Law & Policy
An Educational Project

Net Neutrality



"How do you think they’re going to get customers? Through a broadband pipe. Cable companies have them. We have them. Now what they would like to do is use my pipes for free, but I ain’t going to let them do that because we have spent this capital and we have to have a return on it. So there’s going to have to be some mechanism for these people who use these pipes to pay for the portion they’re using. Why should they be allowed to use my pipes? The Internet can’t be free in that sense, because we and the cable companies have made an investment and [for] a Google or Yahoo! or Vonage or anybody to expect to use these pipes for free is nuts!” - SBC (aka AT&T) CEO Edward Withacre Interview of Ed Whitacre, BusinessWeek November 7, 2005

"Infrastructures, for purposes such as transportation and communication, have long been vital to national welfare. They knit together a country's economy by facilitating the movement of people, products, services, and ideas, and play important roles in national security." - NSFNet Final ReportPDF p. 4.

Editor: Genny Pershing

"Network Neutrality" is the idea that Internet access providers should not discriminate with regard to what applications an individual can use, or the content an individual can upload, download, or interacted with over the network. Individuals acquiring services from Internet access providers should be able to use the applications and devices of their choice, and interact with the content of their choice anywhere on the Internet. [Public Knowledge] [Wu FAQ] [Google] [Free Press] [COMPTEL] [Meinrath] [Frieden Primer n 3] [Valcke 1]

The concept of "Network Neutrality" is essentially traditional Common Carriage. [WU FAQ] [Yemini p 8] Common carriers are carriers of goods, people, and information such as trains, planes, buses, and telephone companies. They can not discriminate with regard to what they carry or where they carry it. Common carriage embodies the ideal that the efficient movement of goods and information is essential to our economy, our culture, and our nation, and therefore carriers must not discriminate or favor particular content or individuals. [Smithsonian ("Throughout the remainder of the nineteenth century the telegraph became one of the most important factors in the development of social and commercial life of America.")] [NSFNET Final Report (1995) p. 4 ("Infrastructures, for purposes such as transportation and communication, have long been vital to national welfare. They knit together a country's economy by facilitating the movement of people, products, services, and ideas, and play important roles in national security.")] [Odlyzko Efficiency and Fairness 2009 48 (contrasting NN to railroad and telephone common carrier policy)]

Telecommunication carriers (those communications carriers that transport information back and forth) are one type of common carrier and have been classified as such for 100 years. This status was essentially inherited from telegraph companies. Forty years ago, the FCC initiated the Computer Inquiries, which established the telephone network as an open platform over which computer networks could be constructed. The FCC also resolved the Carterfone proceeding, holding that individuals could attach devices (i.e., faxes, modems) of their choice to the telephone network. These proceedings created an environment where any computer network could be constructed for any purpose and go anywhere.

Computer networks which are provisioned over telecommunications services, and in particular, Internet service providers, were classified as Information Services and did not have telecommunications regulations imposed upon them. Because these ISPs were interstate networks, they fell under the jurisdiction of the FCC and not state public utility commissions. ISPs hold the same role as telegraph and telephone carriers, carrying information central or critical to our society and nation.

This policy moved from the dial-up world to the broadband world. However, when the Internet moved from dial-up to broadband, it moved from being something done over the common carrier network to being the network itself. The question posed was whether, with this metamorphosis from something over the network to being the network, the Internet would take on common carrier status.

Advocates argued that cable modem service and DSL should be classified as telecommunications carriers so that the Computer Inquiries would apply. This is also known as the Open Access debate. Here the FCC policy of an open communications carrier shifted dramatically; the underlying communications network had been a common carrier for 150 years. But when broadband Internet became the underlying network, the concept of common carriage was eliminated. The FCC concluded that these new communications networks were "information services" which did not need to be shared, did not fall under the Computer Inquiries, and did not fall under the non-discrimination provisions of Title II of the Communications Act. [Peerflow 031008] [CAIDA According to the Best Data 100707] [Cherry]

This is a move from an Internet access service classified as an information service provisioned over a telecommunications network classified as a common carrier - to a network where the whole thing top to bottom is an unregulated information service.

Dial-up era:
  Internet Access Service (Info Service)
  over
  Telephone Network (Common Carriage)

Broadband era:
  Internet Access Service (Info Service)
  over
  Broadband Network (Info Service)

Generally, common carriers are not liable for damage caused by what they carry - for example, a train that transported Al Capone would not normally be an accessory to his criminal acts. With the advent of the commodity Internet, Internet service providers have been immune from liability for the content which they carry - just like common carriers. See gambling, libel, and copyright.

While the term "network neutrality" may be new - the concept has a long history.

Historical Context:

Opponents of NN portray it as a new concept that has no history. [See Shewick Farber p 34 (Farber: "The issue of Net Neutrality first arose in the public's view in a remark by one of the carriers - SBC (now the new ATT) - that services that use the facilities provided to reach their customers should pay a fee to the carriers for the use of their facilities.")]

Proponents of NN note that the NN debate did not emerge out of the ether but rather is the evolution of a long communications policy debate:

  • 200 years ago, the Continental Congress concluded that the new nation needed a communications network - the post office - free of the existing deep parcel inspection of the British military. Therefore the Congress asked Benjamin Franklin to operate a postal service where messages could be transmitted to the destination of the originator's choice, without being intercepted, read, or rerouted while in transit.
  • 150 years ago, political officials and businessmen recognized the value of information and the importance of having telegraph messages transmitted securely, in the order received, accurately, and without discrimination. The government concluded that telegraph companies were common carriers. [Lessig Testimony 2008 p 4]
    • 1864: Congress passes an act requiring that railroad Right of Way be used by telegraph for communications without discrimination.
    • Telephone companies were considered "voice telegraph" and entered the market in the same common carrier category as telegraph. [Lessig Testimony 2008 p 4]
  • In 1966, the FCC initiated the Computer Inquiries proceeding, establishing that telecommunications carriers should be open for the benefit of computer networks.
  • 2003: FCC rolled back UNE rules, making local telecommunications competition difficult
  • 2004: The FCC ruled that new fiber installs did not have to be unbundled to permit competition
  • 2005: The Supreme Court ended the long open access debate, affirming the FCC's ruling that Internet-over-cable service providers were not telecom carriers and did not have to open their physical networks to permit competition in higher-level services and applications
  • 2005: The FCC ruled that Internet-over-DSL service providers were not telecom carriers and did not have to open their networks

The NN debate is about ensuring open access to applications and content. It comes after the open access debate over the underlying physical networks was lost. The parties have simply retreated up the network layers and renewed the same arguments for an open network (but there may be no more ground upon which to retreat). [Isenberg 5.29.7]

Eli Noam notes the historical cycle stating that when network innovations hit the market, a three stage cycle is observed:

  • Wildcatting where there is an explosive entrance of the innovation, and a grab for market share and a struggle for position in the market
  • Backlash and Control where regulators make first attempts to respond to observed problems
  • Loosening of Control or deregulation

This cycle is observable in the history of the telegraph and telephone networks.

FCC Title II Telecom Carriers:


Opponents to NN regulation object to the concept of placing the Internet under FCC Title II telecom carrier regulation. [Pepper].

Proponents: It is important to distinguish the set "common carriage" from the subset "Title II telecom carriers." Telecom carriers are but one small set of common carriers; this set is regulated by the FCC under Title II with many historic obligations. There is a larger set of common carriers outside that FCC subset, some of which are regulated simply by common law. As noted above, these include planes, trains, the post office, and boats. One can be a common carrier and not be under Title II of the Communications Act. To support NN is not the same as saying that Internet networks should be regulated under FCC Title II.


Opponents of Network Neutrality make a semantic attack on the term itself, arguing that it is a made-up term for a made-up problem; that there is no accepted definition for "network neutrality;" and that "it is just a slogan." [Kahn] [Pepper] [See Shewick Farber p 34 (Farber: "The definition of 'network neutrality' reshapes itself like our lungs. It expands, drawing in causes ranging from freedom of speech to open access. Then it contracts, exhaling a lot of hot air, and starts all over again.")]

Proponents have put forth their definitions of "Network Neutrality;" these definitions are generally consistent. [Public Knowledge] [Wu FAQ] [Google] [Free Press] [COMPTEL] [Meinrath p 3] [Wong p 308 (discussing definitions)] [Valcke 1]. These proponents generally put forward the following tenets of NN:

  • Individuals can use the applications of their choice and can access any other individual or resource on the Internet
  • Network service providers cannot discriminate with regard to the use of applications or access of end points on the network.
    • Network service providers cannot block competitive applications or services in order to gain a competitive advantage
  • If network service providers offer a preferred service level, that service level must be offered and available to others on the same terms and conditions.
  • Individuals at the edge of the network can innovate without seeking approval of the network service provider [Economides p 6]

It is important, as this highlights, to distinguish between policy and implementation. Policy is why something is done; implementation is how something is done. If you review the literature of the proponents of Network Neutrality, there is rough consensus as to policy - network neutrality is about preventing a network with market power from discriminating - acknowledging the important role a network plays as a carrier of information. There may be differences in how different advocates envision implementation. The point here is - a difference of opinion concerning implementation is not the same as discord concerning the meaning of network neutrality.

With regard to "network neutrality" as a made up term, this is a good point. "Network neutrality" became a term of use in order to avoid uttering "common carriage." But "network neutrality" is consistent with, and fits well within, the jurisprudence of "common carriage." If Network Neutrality is characterized as a free floating arbitrary term, it is easy to define it with a strawman definition and shoot it down. If Network Neutrality is seen in its true context of common carriage, then this becomes a richer dialogue of history, jurisprudence, and policy objectives.

HISTORIC NEUTRALITY: Network Open to Innovation / End-to-end:

Opponents to NN argue "the Internet is not neutral and never truly has been, and a neutrality rule would effectively set in stone the status quo and preclude further technical innovation." [FTC Staff Report 2007 p 60] [David Clark]

FTC Staff Report 2007 p 61: Opponents of network neutrality regulation argue that the Internet is not, and never truly has been, "neutral." These opponents generally agree that the first-in-first-out and best-efforts characteristics of the TCP/IP data-transmission protocol have played a significant role in the development of the Internet. They point out, however, that since the earliest days of the Internet, computer scientists have recognized that data congestion may lead to reduced network performance and have thus explored different ways of dealing with this problem. . . .

Opponents of net neutrality regulation argue that the TCP/IP protocol itself may have differential effects for various content and applications. For example, static Web page content like text and photos and applications like e-mail generally are not sensitive to latency. Thus, users typically can access them via the TCP/IP protocol without noticeable problems, even during periods of congestion. Applications like streaming video and videoconferencing, however, may be sensitive to latency and jitter. Net neutrality opponents argue, therefore, that while first-in-first-out and best-efforts principles may sound neutral in the abstract, their practical effect may be to disfavor certain latency- and jitter-sensitive content and applications because prioritization cannot be used to deliver the continuous, steady stream of data that users expect even during periods of congestion. [Translation: "Because TCP/IP treats all traffic the same, it discriminates."]

Network neutrality critics also note that content providers increasingly are using local caching techniques to copy their content to multiple computer servers distributed around the world, and argue that this practice effectively bypasses the first-in-first-out and best-efforts characteristics of the TCP/IP protocol. [Note that a cache is an application on a server, and is not how the network itself behaves]

Critics further observe that network operators have preferential partnerships with Internet "portal" sites to provide users with greeting homepages when they log on, as well as customized and exclusive content and applications. [The networks that have preferential partnerships are the broadband incumbents like AT&T and Verizon who are arguing against NN]

Similarly, they note that portals, search engines, and other content providers often give premium placement to advertisers based on their willingness to pay. [Portals, search engines, and content are applications and content that take place over the network. They are not the network itself. It is like saying a water pipe is not neutral because in my house I can take the water and make lemonade out of it - the issue is the network - this passage discusses applications at the end of the network, not the network itself - indeed, the neutrality of the network is what permits these applications to exist.]

In their view, these practices all constitute additional indicia of existing non-neutrality.

NSFNET gave preference to interactive traffic in its early days.

Discussion: The Internet grew up with general neutrality as a result of multiple forces. Technically, the Internet was developed to solve a problem: multiple academic, research, and government computer resources existed that were not connected. (see Internet History: ARPANet Design Objectives) Computing power at that time was limited, and some schools and networks jealously viewed their resources as their own; they had little interest in sharing. But visionaries at the US Department of Defense's DARPA realized the value to the research community if computer resources could talk to each other - sharing resources and research. They determined the need to network computer resources. The computer network had to work over any physical telecommunications network. It had to support any application or content. It was designed not to discriminate. The only thing that the Internet would be responsible for was ensuring that the data got from computer A to computer B. [Frieden Primer Sec. II] [Free Press p 3 (Mar. 09) ("During the explosive rise of the Internet, one fundamental principle governed: All users and all content were treated alike.")]

Commenting on the origins of the Request for Comments of the IETF, Steve Crocker stated "This was the ultimate in openness in technical design and that culture of open processes was essential in enabling the Internet to grow and evolve as spectacularly as it has. In fact, we probably wouldn't have the Web without it. When CERN physicists wanted to publish a lot of information in a way that people could easily get to it and add to it, they simply built and tested their ideas. Because of the groundwork we'd laid in the R.F.C.'s, they did not have to ask permission, or make any changes to the core operations of the Internet. Others soon copied them - hundreds of thousands of computer users, then hundreds of millions, creating and sharing content and technology. That's the Web." [Crocker NYT]

Steve Crocker [Wikipedia]

"First, the technical direction we chose from the beginning was an open architecture based on multiple layers of protocol. We were frankly too scared to imagine that we could define an all-inclusive set of protocols that would serve indefinitely. We envisioned a continual process of evolution and addition, and obviously this is what's happened." Joyce K. Reynolds, RFC 2555
"Member agrees that it shall eXchange network traffic freely with all other CIX members that have access to and use of the CIX NAP ("Participating Members") without payment of settlement fees. Notwithstanding anything to the contrary contained in the preceding sentence, if a connectivity or routing problem caused by a Participating Member is adversely affecting the stability of Member's routing system, then, after delivering advance notice to CIX in a commercially reasonable time period of such problem, Member shall have the right to suspend the exchange of data traffic with such Participating Member until such time as the problem is alleviated." [CIX Router AgreementPDF ¶ 2] See also [Frieden Primer p 13-14]
Amongst the many complex issues that arose during the Department of Justice and European Community inquiries into the WorldCom acquisition of MCI was the importance of the policy neutrality of the IXPs. Though the CIX was formed as a policy-based IXP, and the Federal Internet eXchanges had policy regulated by the US Government agency networks, the MAE-East environment and the NSF NAPs were policy neutral. The NAP operator played no policy role; rather, the operators emphasized policy neutrality and presented their facilities as carrier- or service-provider-neutral for the purposes of the physical location and means of the interconnection between ISPs. At the time of the WorldCom acquisition of MCI, WorldCom's ownership of the MAEs (East and West by this time) raised concerns regarding the maintenance of neutrality and whether regulation might be required to maintain the neutrality of IXPs. [Hussain Historic Role CIX 5]

The system had to have one other fundamental property: It had to be completely decentralized. That would be the only way a new person somewhere could start to use it without asking for access from anyone else. And that would be the only way the system could scale, so that as more people used it, it wouldn't get bogged down. This was good Internet-style engineering, but most systems still depended on some central node to which everything had to be connected - and whose capacity eventually limited the growth of the system as a whole. I wanted the act of adding a new link to be trivial; if it was, then a web of links could spread evenly across the globe. --- Tim Berners-Lee, Weaving the Web, p15-16 (Harper Business 2000)

In 1969, the DARPA funded ARPANet went online. Wesley Clark persuaded Larry Roberts that the appropriate design for the ARPANet involved a subnet of routers (aka IMPs) which had the sole mission of routing packets. In that era, computing power was scarce. In order to maximize the throughput capability of the routers, all of the computer power had to be dedicated to transmission. Use of the computer processor for anything more than looking at the header and determining where to route the packet would have cost processing capability and reduced throughput. Wesley Clark's suggested subnet of routers would be dumb and dedicated to the mission of routing; they did not have the ability to do deep packet inspection and to discriminate. [Free Press p 3 (Mar. 09)]

Meanwhile, Vint Cerf and Bob Kahn set to work on developing a new protocol that would allow incompatible networks to talk with each other. This new protocol would be DUMB - it would be a middle kludge that would hold everything together. It would work over any physical network; it would support any application; its purpose was simply to move the data from hither to thither, interconnecting what would otherwise refuse to talk with each other. In 1974, they released their paper on the Internet Protocol. In 1983, ARPANet formally migrated to IP and morphed into "The Internet." [Lessig Testimony 2008 p 2]

"The Internet" the name of one network, which is the interconnected network that uses the Internet protocol and has one common IP addressing scheme. The Internet is a subnetwork of routers that just route packets. Computing processing power was scarce; in order to maximize throughput, computer processing at the router would be as limited as possible. Routers dont process packets. They do not discriminate. Routers just route.

Thus intelligence - the applications - is left for the "ends" of the network. The mission of the Internet is to provide interconnection of the intelligence which resides at the ends to the greatest extent possible, thereby requiring that the middle routers conduct minimal processing in order to be as small a barrier as possible. This enabled applications to be designed for the intelligence at the ends and not for any processing, permissions from, or coordination with the middle (the service provider). This also meant that one could have one general purpose computer network supporting all applications instead of having separate infrastructure dedicated to distinct tasks. This created the Internet's environment of open innovation. Larry Roberts stated in 1970

"It is clear, however, that if the network philosophy is adopted and if it is made widely available through a common carrier, that the communications system will not be the limiting factor in the development of these services as it is now."

See also End to End design. [Fenten Sec. 1] [WU FAQ] [Weiser ALR 08 p 4] [Valcke 3] [Lessig Testimony 2008 p 2 (NN "build upon a fundamental recognition about the relationship between a certain network design (what network architects Jerome Saltzer, David Clark, and David Reed called the "end-to-end" principle) and economic innovation.")] [Roberts Wessler 1970]

The Internet is a "best effort" network. The network makes a "best effort" to deliver a packet; a Quality of Service is not guaranteed (where traffic crosses network borders - some networks guarantee a QoS where traffic stays on-network or on a partner's network - but once it leaves a network, no guaranteed QoS can be maintained). In a best effort network, the first packet into the router is the first packet out of the router. [Jamison p 4] [Valcke 3] (compare to telegraph where common carriage regulations required telegraph carriers to transmit forward messages in the order received - first in; first out - in order to ensure that certain individuals did not get an unfair advantage in information delivery).

The Internet's technical design created an open computer network platform on which to build cool things. This is sometimes referred to as the layered model. Each layer of the protocol stack does a specific task, without interfering with the other layers of the stack. The network layers do the network stuff; the Internet layers do the Internet stuff; and the application layer does the application stuff. This creates a lego set effect, permitting the Internet to go over any physical network (DSL, Cable, Fiber, Spectrum...) and support any application (browsers, email, VoIP,...). Any application could be experimented with. It could go over any telecommunications network. Someone who wanted to add a new application to the network did not have to ask permission of the network to do this. [Lessig] The barriers to innovation were very low. [Economides p 6] This created, in the words of Vint Cerf, an environment of "innovation without permission." [Clark Words p 701] [Obama ("it's because the internet is a neutral platform that I can put on this podcast and transmit it over the internet without having to go through some corporate media middleman.")]
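The "lego set" layering can be illustrated with a toy sketch (the layer functions and field names below are hypothetical simplifications, not real protocol formats): each layer wraps the layer above it as an opaque payload, which is why any application can ride over any physical network without either one knowing about the other.

```python
# Toy sketch of protocol layering: each layer only adds its own wrapper
# and never inspects or alters the layers above it.
def app_layer(message):
    # application layer: browser, email, VoIP... just data
    return {"app-data": message}

def internet_layer(segment, src, dst):
    # IP-like layer: addressing only; the payload is opaque to it
    return {"src": src, "dst": dst, "payload": segment}

def link_layer(packet, medium):
    # physical layer: DSL, cable, fiber... all carry the same packet
    return {"medium": medium, "frame": packet}

frame = link_layer(
    internet_layer(app_layer("hello"), "10.0.0.1", "10.0.0.2"),
    "fiber",
)
# The link layer never looks inside the IP payload; the IP layer never
# looks inside the application data. Swap "fiber" for "dsl" and nothing
# above the link layer changes.
assert frame["frame"]["payload"]["app-data"] == "hello"
```

The point of the sketch is the independence of the layers: a new application changes only `app_layer`, and a new physical network changes only `link_layer`.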

When commercial service providers started offering Internet service to the public in the 1990s, the Internet had two decades of momentum as a general non discriminatory open network. Neutrality was the historical norm. [Economides p 6-7] When the first commercial Internet networks interconnected at the Commercial Internet eXchange, they agreed that traffic would be passed "freely."

From 1990 to a bit after 2000, no one ISP had sufficient market power to deviate from this precedent. As the ISP market became competitive, neutrality was provided by the discipline of competition. At the height of the ISP market, the average American had a choice of 10+ ISPs [Greenstein] and there were over 7000 ISPs in the United States [Boardwatch Magazine (offline)]. Strong competition meant strong incentives to provide fully functioning Internet access. When an ISP offered application services or content, those applications and content had to compete with all of the other applications and content on the Internet. The ISPs without market power could not leverage their position as access provider to bundle their offering. They could not demand that the customer pay extra for applications or services they did not want; they could not block access to competitive applications or services. Customers in the competitive ISP market would simply change providers. This has changed as local access has become a bottleneck.

In the meantime, a fundamental characteristic of the Internet changed. In the 1970s, routers lacked the computing capacity to do more than route - and therefore were dumb. But by the 1990s, the computing capacity of routers had grown significantly. Routers could now sort through the packets flying through the machine without significantly reducing speed and throughput. They could do deep packet inspection, filtering, and blocking. [Lessig Testimony 2008 p 3] The ability of networks to engage in "network management" had grown significantly.
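The difference between a 1970s-style dumb router and one capable of deep packet inspection can be sketched as follows (a hypothetical illustration; the packet fields and blocking rules are invented for the example):

```python
# Toy contrast between header-only forwarding and deep packet inspection.
def dumb_route(packet):
    # 1970s-style router: reads only the destination header;
    # the payload is never examined
    return packet["dst"]

def dpi_route(packet, blocked_keywords):
    # modern router: inspects the payload and can filter or block
    if any(k in packet["payload"] for k in blocked_keywords):
        return None  # packet dropped based on its content
    return packet["dst"]

pkt = {"dst": "10.0.0.2", "payload": "bittorrent-handshake"}
print(dumb_route(pkt))                  # 10.0.0.2 - content ignored
print(dpi_route(pkt, ["bittorrent"]))   # None - content-based blocking
```

Same packet, same destination; the only difference is whether the router looks past the header, which is the capability growth the paragraph above describes.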

Tim Berners-Lee, when he invented the World Wide Web application, sought to solve the problem of the inaccessibility of information. Instead of having one document here, one document there, and another somewhere else, and needing to go and retrieve each document for each bit of information, Berners-Lee desired to create one web of information where the documents are linked together and access to the information is facilitated. The Web was designed to be a "universal information system" that permitted access to any information of any type anywhere at any time. It is open access to information based on open standards, without central control. Consistent with these design principles, Tim Berners-Lee has indicated support for NN. [Berners-Lee]


In the 1960s, when Paul Baran researched the survivability of communications networks, he concluded that centralized networks with single points of failure were vulnerable, and that the solution was a distributed network design. In a distributed design, there are multiple redundant paths to any destination and the network can reroute traffic over available routes. This concept was incorporated by Bob Kahn and Vint Cerf in the design of the Internet protocol TCP/IP. The protocol provides for a series of error corrections that permit network traffic management. In this distributed network design, the routers in the network forward packets to their destinations. If a router can successfully deliver packets to a destination, it announces to the network that it is a good route to that destination. If a router cannot deliver packets to a destination, it announces that it is not a good route to that destination.

The result is that in a distributed network, the network can route around failure. When a particular route is blocked or degraded, the routers will simply send the traffic via different routes.

The Internet in the backbone is distributed. As one moves to the residential edge of the network, the network becomes less distributed. At the edge, there may be only one available route; if that route fails, the failure is fatal. Traffic cannot be rerouted. Where there is only one route, the network is not distributed, and this crucial Internet characteristic is lost.

If a network service provider discriminates in the backbone, this is treated as a "failure" and is simply routed around. If a network service provider discriminates at the edge (non-neutral behavior), this "failure" is fatal and cannot be routed around. The distributed nature of the Internet is not a solution to Network Neutrality where no alternative routes are available.
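The difference between backbone and edge failure can be sketched with a toy topology and a breadth-first path search (the topology and node names are hypothetical; real Internet routing uses protocols like BGP, not this search):

```python
from collections import deque

# Toy topology: backbone nodes A-D are richly connected;
# the residential "edge" hangs off a single access link to D.
links = {
    "A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"},
    "D": {"B", "C", "edge"}, "edge": {"D"},
}

def find_path(src, dst, failed=frozenset()):
    # breadth-first search for any route, skipping failed links
    seen, frontier = {src}, deque([[src]])
    while frontier:
        path = frontier.popleft()
        for nxt in links[path[-1]]:
            if nxt in seen or frozenset((path[-1], nxt)) in failed:
                continue
            if nxt == dst:
                return path + [nxt]
            seen.add(nxt)
            frontier.append(path + [nxt])
    return None  # no route survives the failures

# A backbone link failure is routed around:
print(find_path("A", "D", failed={frozenset(("A", "B"))}))  # ['A', 'C', 'D']
# A failure on the sole edge link is fatal - no alternative route exists:
print(find_path("A", "edge", failed={frozenset(("D", "edge"))}))  # None
```

Treating discrimination as a "failed" link makes the asymmetry concrete: block a backbone link and traffic flows anyway; block the single edge link and the destination simply disappears.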

According to the FCC's figures (now out of date), DSL serves 39.8% of subscribers; cable serves 55.8% of subscribers.

Fiber serves 2% of subscribers. Fiber is a superior broadband service, replacing - not competing with - DSL and Cable. Once fiber is in, there is no competitive alternative.

Other technologies such as BPL and wireless barely show up in the data - and there is some question as to whether they ever will.


A factual question is whether Internet access providers have market power.

Proponents of NN argue that we currently have a broadband duopoly: DSL from incumbent telephone companies or cable from incumbent cable companies. [See Schewick Farber p 34 (Farber: "we have a landline duopoly consisting of the cable companies and the "telephone" facilities")]

It has also been suggested that the duopoly is vulnerable - that a superior platform, fiber, will replace cable and DSL and possibly become a monopoly. [FTC Staff Report 2007 p 58]

Proponents argue that we have reached this impasse because we have failed to provide competition at other layers of the market, specifically eliminating network unbundling as provided by the Telecom Act of 1996. See [Marsden p 2 (noting that NN has been dismissed in European debates "as an 'American Problem' caused by the abandonment of Local Loop Unbundling.")]

The amount of choice between broadband platforms is the subject of much debate. [Broadband statistics] In many places, there may be only one (or no) service provider (Municipal Broadband efforts are evidence of this lack of sufficient competitive broadband service). Where there are two - not considered a competitive market by economists - choice is limited. For those who do not subscribe to cable TV, the incremental cost of Internet over cable can be double the price of DSL (cable companies add on $15-20 if you don't subscribe to cable video, and the cost of Internet over cable tends to be $10 more than DSL). For those who do subscribe to cable, the superior bandwidth of cable may make cable Internet the only competitive choice. Or if one needs a VPN, and cable Internet blocks VPNs, then the only choice is DSL. However, DSL has both range and interference limitations. For an individual, choice is limited and the switching costs are high. [Economides p 8-9]

Opponents of NN argue that competition is supplemented by Wifi at coffee houses, municipal broadband, broadband at work, and the promised deployment of WiMAX and other services.

Discussion: If a broadband service has market power, the broadband network has both an incentive and the means to discriminate. A service provider with market power can extract monopoly rent from higher layer applications and content that depend upon that broadband network to access subscribers. In non economic-ese, if there is only one pipe to the home, the owner of the pipe can say that nothing goes over that pipe without the permission of, and payment to, the pipe owner. If there is a choice of broadband service providers - if there is competition between

  • DSL providers (created through UNE unbundling)
  • cable providers and
  • maybe wireless providers (some advocates of competition point to the importance of spectrum and spectrum reform in order to manifest competition)

then the market will discipline service providers - individuals can switch services when they feel the service is inadequate (note, however, that different physical networks have different broadband capacities - a Wifi connection to a home will not have the same capacity as a fiber connection). Competition provides Neutrality if that is what consumers demand.

If you listen carefully to the rhetoric from both sides, there is strong consensus on the framing of the problem: either there is competition, and network neutrality is not needed - or there is not competition, and therefore network neutrality is necessary. Both arguments agree that Network Neutrality is a good thing - the dispute is how it is obtained. See [Farber opposing NN regulation but stating "Public policy should intervene where anti-competitive actions can be identified and the cure will not be worse than the disease."] [Yemini p 9] [FTC Staff Report 2007 p 70] [Obama ("If there were four or more competitive providers of broadband service to every home, then cable and telephone companies would not be able to create a bidding war for access to the high-speed lanes. But here's the problem. More than 99 percent of households get their broadband services from either cable or a telephone company.")] [Economists' Statement (stating that the cost of regulation exceeds benefits in a competitive market)] [Frieden Primer p 9-10] [Yoo Judiciary Hearing arguing that wireless will provide sufficient competition]


There is plenty of competition and Network Neutrality is provided by the market

or There is insufficient competition and Network Neutrality must be ensured by other means.

Both sides will argue that their position favors innovation, creativity, and development. Those against regulation argue that the government will thwart innovation. Those in favor of network neutrality regulation argue that the significant market power of monopoly or duopoly local broadband service providers like AT&T will thwart innovation. 

Two notable academics have taken the position that competition would not be a solution to this problem: Prof. Barbara van Schewick and Prof. Larry Lessig. [Schewick] [FCC Stanford (Lessig)] [See Schewick Farber p 33 (Schewick: "These arguments neglect a number of factors that make competition less effective in disciplining discriminatory conduct than one might expect. First, if all network providers block the same application, there is no provider to switch to. For example, in many countries all mobile network providers block Internet telephony applications to protect their revenue from mobile voice services, leaving customers who would like to use Internet telephony over their wireless Internet connection with no network provider to turn to.")]

A solution that both sides would probably support would be for communications policy to support and facilitate competition in the marketplace among multiple providers.

Terminating Access Monopoly

Derived From: FTC Staff Report 2007 p 77: One concern raised by net neutrality proponents relates to broadband providers' potential interest in charging content providers for carrying their content over the last mile of the Internet. In particular, access providers might seek payments independent of any charges for prioritized content or application delivery. Net neutrality proponents have noted that such a practice would be analogous to a situation in telephony, in which the terminating telephone network charges the calling party's network a termination fee. There, for example, if a wireline customer calls a cell phone, the wireline network pays the cell phone network a termination fee, typically calculated on a per-minute basis. The ability of the terminating network to charge a fee for delivering traffic to its own customers is known as the terminating access monopoly problem because an end user's network is a "monopolist" for anyone who wishes to connect to that end user. See also [Marsden & Cave Sec. 1.2.3]

In the context of broadband Internet access, broadband providers might want to charge content or applications providers for delivering content or applications to end users over the last mile. As noted above, such charges could apply to both best-efforts and prioritized routing. Such charges would have the potential to create two different types of consumer harm. First, in the short run, they could raise the price to consumers of content and applications. Specifically, charges to content and applications providers would raise their costs; in the face of higher costs, such providers are likely to try to recoup at least some of those costs via the prices they seek to charge consumers. At the margin, higher prices will tend to reduce usage, lowering consumer welfare.

There have been instances in the telecommunications area in which terminating access charges have resulted in substantial end-user fees. A Workshop participant provided the following example to demonstrate how such fees might increase prices and thus reduce consumer demand for a particular product: Skype (a VoIP provider) customers in Europe are charged no usage-based fees for Skype-to-Skype calls. Skype-to-landline phone calls are charged approximately two cents per minute, however, because European landline terminating access charges are about two cents per minute, and Skype-to-cell phone calls are charged 21 cents per minute because European cell phone termination charges are about 21 cents per minute. In the United Kingdom, where the per-minute price is 21 cents (due to the access charges), the average usage is only 150 minutes per month. In contrast, in the United States, where the average price for the marginal minute of cell phone use is about seven cents, the average user talks on a cell phone for about 680 minutes per month.

A countervailing effect could mitigate the potential harm from termination charges in the context of Internet access. To the extent that broadband providers collect termination charges on a per-customer basis (or on a usage basis that depends on the number of customers), the broadband provider has an incentive to lower the subscription price to increase the number of subscribers from which it can collect access revenues. Also, some content providers whose business model is based chiefly on advertising revenue may choose to retain that model if they are charged termination fees that are sufficiently small. Here, again, the ability to collect such access fees creates an incentive for the broadband provider to lower subscription rates. However, it may also cause certain marginal, advertiser-supported content to become unprofitable and thus to exit the market.

The second type of potential harm from termination charges is a long-run harm. Broadband providers that can charge content and applications providers terminating access fees might be able to expropriate some of the value of content or applications from their providers. If so, the incentives to generate such content and applications will be reduced; in the long run, consumer choice of content or applications could be reduced as well. One Workshop participant suggested that the greater ubiquity of Internet content - relative to cell phone content - might arise from the fact that, historically, the networks over which Internet content is downloaded have operated under regulations limiting terminating charges, whereas cell phone networks have not.

Some net neutrality opponents argue, however, that termination and related fees may be the most efficient way to deal with what they see as inevitable Internet congestion, routing time-sensitive and time-insensitive traffic during periods of congestion according to the relative demand for content and applications. Moreover, they argue that broadband providers must be able to charge directly and explicitly for desired routing to have the proper incentives to invest efficiently in the necessary infrastructure. Without delivery charges, they argue, content providers whose revenues come chiefly through advertising would have an incentive to free-ride on infrastructure investments. That could distort both the magnitude and distribution of infrastructure investments, as well as pricing elsewhere in the market.

These issues, as discussed above, also raise difficult empirical questions about the relative magnitudes of countervailing incentives in particular present and future market contexts. Also relevant are the relative costs of providing for certain possible infrastructure investments and the marginal costs of making various improvements available to different consumers. Although systematic, empirically-based answers to these questions have not yet been forthcoming, it is clear that ongoing infrastructure investment is substantial and that desired applications will require further investment still.

The Problem: Vertical Integration

There is an alternative view that there is a network neutrality concern even without market power. Here the concern is with vertical integration of the access pipe with preferred content, creating an incentive for the access pipe to discriminate against non-affiliated traffic. This reflects, perhaps, a more European perspective on the issue. [Marsden]

It is not clear why the market would not discipline this type of behavior absent market power of some type (consumers of Internet access would switch to an access provider offering full access to Internet content).

AntiTrust - apply same to all

Opponents of NN argue that if there is to be NN, then NN should be imposed on all Internet players alike, including the NN proponent Google. [Sen Commerce Hearing 0408 Robert Hahn] [Odlyzko Efficiency and Fairness 2009 41]

Proponents of NN agree on the first point - market power and anticompetitive behavior are legitimate concerns. If a content or application provider is in a position of market or monopoly power and is able to exercise its position in the market to the detriment of other markets, that is a classic antitrust concern.

NN, on the other hand, addresses the concern of a communications service provider - a carrier - which has market power. In instances where carriers abuse market power, common carriage is the historic and appropriate remedy; in instances where non-carriers abuse market power, antitrust is the appropriate remedy.

Opponents argue that like cases should be treated the same. A content provider with market power and a carrier with market power are alike in that both involve market power; a carrier, however, is a special subset because of its position within the economy. See discussion of Common Carriage. The argument that like cases should be treated the same only goes so far as the two cases are the same; to the extent that there are distinguishing facts, distinguishing remedies may be appropriate.

Is it a problem if Google discriminates against content? In the recent court case Langdon v. Google, Inc. (D. Del. Mar. 7, 2007), plaintiff brought a First Amendment action when Google refused to run his ads. The court ruled that Google is not a government actor and is not obligated to carry ads it does not want to carry. Furthermore, in this case as with others, the market is competitive - the owner of the ads can find other avenues of having his advertisements and his message carried on the Internet. This is not a situation where choice of service provider is limited. If it were a problem, the appropriate remedy is antitrust, as Google is not a communications network (at least with regard to this question).

Opponents argue that any harm caused by discriminatory behavior of networks can be adequately resolved by the ex post remedies of antitrust. [Economists' Statement] [Brito p 2] [Schewick Farber p. 35 (Farber: "Current antitrust law generally solves this problem by taking a case-by-case approach under which private parties or public agencies can challenge business practices and the courts require proof of harm to competition before declaring a practice illegal")]

Proponents argue that different bodies of law serve to cure different problems; while antitrust serves to address the problem of excessive market power, it does so at a very slow pace and fails to adequately address the consumer protection interests that laws such as common carriage have been crafted for centuries to address. [Cherry] [Compare Schewick Farber p. 35 Farber, who advocates resolution of discrimination through antitrust actions, in part, because the alternative, regulatory solutions, are "slow" and "cumbersome."]

Part of the regulatory rationale for common carriage regulation over antitrust remedy is the nature of the problem. Telecommunications carrier regulations are designed as consumer protection rules in an environment where the parties have disparate power - the carriers have lots of market power and the consumer has little. It is difficult for players with such disparate power to effectively negotiate terms. A regulatory solution seeks to address problems before they arise (ex ante). Antitrust remedies are after the fact (ex post), tend to be lengthy judicial solutions, and frequently any remedy arrives only after the completion of the lengthy process, when there is no longer a meaningful remedy. In other words, where a carrier discriminates against a small firm, if the remedy comes after five years of litigation, the small firm will lack the resources to survive and will be driven out of business before there is a solution. The large carrier wins a battle of attrition and the economy loses a company, the jobs and services it offered, and any innovation it might have brought (loss of public welfare). [Economides p 17] It has also been argued that antitrust remedies seek to cure one set of problems that do not necessarily reflect all of the consumer protection concerns of common carriage. [Cherry]

INSUFFICIENT HARM (aka "It Hasn't Happened")


Net Neutrality opponents used to say "Net Neutrality legislation would prohibit something that hasn't happened." See [Verizon per Isen (there has been no episode of blocking)] [Pepper] ("the problem they're trying to solve is theoretical. It does not exist.") [Christopher Wolf, Hands off the Internet, FTC NN Hearing] [FTC Staff Report 2007 p 60 & 68 ("They also argue that, apart from the Madison River case, the harms projected by net neutrality proponents are merely hypothetical and do not merit a new, ex ante regulatory regime.")] [Frieden Primer p 9] [Ourand (quoting Dan Brenner NCTA)] [Sura] However, with repeated instances of discriminatory behavior making it into the headlines of news reports, this argument is made less and less. Compare [Senate Commerce Hearing April 2008 McSlarrow (repeating argument)]

Proponents point to the long list of problematic behavior, and argue non-neutral behavior has a long history. As new instances of non-neutral behavior come to light, the argument that "this hasn't happened" is increasingly shallow. [Valcke 5] The reason common carrier law exists is because historically carriers have had both the incentive and the opportunity to discriminate; and the sovereign has recognized that it would be very good for the economy and the country if carriers were not permitted to discriminate and were required to have reasonable rates. Network operators have the "technological means and incentive to actively degrade or outright block certain content and applications." [FTC Report 2007 p 53] [Schewick Farber p 32 (quoting Lessig "if 'network neutrality' was a solution in search of 'a problem' in 2002, in 2006 the network owners have been very kind to network neutrality advocates by now providing plenty of examples of the problem to which network neutrality rules would be a solution.")]

Market Solutions:

In the event of discriminatory treatment by a carrier, the individual users of the network could take the following counter measures:

  • Switch ISPs, assuming there is competition
  • Individuals could create an alternative bit path
  • Non Technical:
    • Bring consumer pressure on carrier
    • Lie on application to get different level service
  • Technical
    • Virtual Private Networks [Felten Sec. 6]
    • Encryption [Felten Sec. 6]
    • Source Destination Address Filtering
    • Altering Port Numbers
    • Proxy Servers and Anonymizers
  • Learning to Live with Discrimination
    • Take advantage of applications that are not vulnerable to discrimination
    • Buffering or Download as opposed to real time
    • Distributed caches
    • Compression

[Lehr] [FTC Staff Report 2007 p 35] According to Prof. Lehr, an "arms race has costs also, so some rules may help discipline." Carriers have the ability to mitigate the effectiveness of individuals' actions. Individuals' ability to take advantage of these options may be limited by their technical expertise.
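To illustrate why one of the technical counter-measures listed above - altering port numbers - can defeat naive discrimination, consider a toy filter that classifies traffic by destination port alone. This is a hedged sketch: the packet representation, port numbers, and blocked set are hypothetical, and real carriers increasingly use deep packet inspection rather than port matching, which is precisely why encryption appears on the list as well.

```python
# Hypothetical blocked set: a default BitTorrent port.
BLOCKED_PORTS = {6881}

def carrier_allows(packet):
    """A naive filter that classifies traffic by destination port alone."""
    return packet["dst_port"] not in BLOCKED_PORTS

p2p = {"app": "bittorrent", "dst_port": 6881}
print(carrier_allows(p2p))  # False - discriminated against

# The user reconfigures the client to use port 443 (normally HTTPS);
# the port-based classifier can no longer single out the traffic.
p2p_disguised = {"app": "bittorrent", "dst_port": 443}
print(carrier_allows(p2p_disguised))  # True
```

This is the "arms race" Prof. Lehr refers to: each evasion invites a more invasive classification technique, which invites a further counter-measure.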


Backbones

NPR Science Friday asked, "does this mean we have to build new backbones?" This is a good question; what exactly are we talking about? On CNET Buzz Out Loud, when Ed Whitacre said Google had to pay to be on the AT&T network, the reporters laughed and said that Google would just switch backbone providers.

Proponents clarify that this is not a backbone issue. There are multiple backbone providers and lots of capacity in the core. This is about local access at the edge. As noted above, at the edge, local access broadband service providers have significant market power. With significant market power, those broadband service providers have the opportunity to decide who shall pass. By cornering one small piece of the market, broadband access providers can have power over the total vertical market (i.e., applications, content, etc.). [Economides p 9]

Free Rides

Opponents: The International Herald Tribune wrote:

many of the companies that oppose legislating net neutrality do so because they believe the speed and capabilities of the Internet are being improved on their backs - they are footing the bill for building ever faster broadband connections that others, like YouTube, MSN and PagesJaunes, are benefiting from.

Ed Whitacre, CEO of AT&T, said that Google should not get a free ride on his network. [See also Mohammed quoting Verizon's John Thorne]

Proponents: To understand Internet network economics, one has to understand a lot about backbones, peering, and interconnection. The simple answer is: everyone pays for their own Internet connection. Google is paying lots of money for lots of bandwidth for its Internet connection. Individual subscribers are paying, relatively speaking, lots of money for their residential Internet connection. When individuals pay $40 a month for their broadband connection, part of what they are paying for is their ISP's backbone connection. When Google pays lots of money for its connection, it is paying lots of money for its ISP's backbone connection. All of these networks come together in the Internet backbone, interconnect, and exchange traffic. It is paid for. [Economides p 5] [See Clark Word 702 "We know that even if the Internet is open, it is not free."]

It's a bit like having a group of friends meet at some designated location. Each friend pays their way to that location and they meet. When Abe meets with Ben, Ben gives Abe a cookie, and Abe takes that cookie back to his house - Ben's cookie does not get a free ride to Abe's house. Ben paid to take the cookie to the meeting place. Abe paid to get to the meeting place and bring it back. If Abe wants to get back and forth faster, Abe can, I suppose, buy a sports car. But it would be Abe paying for the new sports car, not Ben - Ben has paid for his own wheels.

So is Google getting a free ride over Ed Whitacre's network? No. AT&T's subscribers are paying for their broadband access, which pays for AT&T's backbone connection. The subscribers are paying to access Google (and everything else).

There are no free rides (if there were, those networks would be out of business - yet the commodity commercial Internet has been around for almost two decades). The people who will pay for network improvements will be the people who subscribe to those networks. Any CEO who says "no free rides" is really saying that he wants to double dip, charging twice, and increasing his company profits (in economic terms, this is called monopoly rents).


Three Pipes

In the Network Neutrality discussion, there is a lot of "What are we actually talking about?" When Ed Whitacre reportedly said that Google should not get a free ride on his network, what network was he talking about? In next generation broadband telecommunication infrastructures, the "wire" to the home (HFC or Fiber), in theory, could be said to have three pipes inside it: a video delivery pipe, an Internet pipe, and a small telephony pipe. The telephony pipe would be separate in order to ensure quality of service. The video pipe would be a large separate pipe for QoS video delivery like a cable video service. And there would be a best effort Internet pipe of some arbitrarily limited bandwidth capacity.

Part of the hyperbole in the Net Neutrality debate involves confusion over which pipe we are talking about. Opponents argue that providers will be able to provide great new video services over their networks, that network neutrality would prevent this, and therefore network neutrality is bad. [Sura] Proponents note that this conflates one service over the network with another.

Different pipes over one network raise multiple questions. If the capacity on the commodity Internet is limited, then this creates a queuing problem of who gets to go first. Note that with sufficient bandwidth, there is no queuing problem. But with limited bandwidth, an artificial shortage can be created and then rationed.

If this network provides telephony over its reserved telephony capacity with set QoS, and Vonage must ride the scarce-capacity commodity Internet, is this fair? If the capacity on the commodity Internet is scarce, the QoS of Vonage could be poor, making it quite hard for Vonage to compete with the network's telephony service. What if Vonage could pay for better QoS? What if more capacity of the network went to the commodity Internet, eliminating the queuing issue?
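The queuing concern in the preceding paragraphs can be sketched with a toy strict-priority simulation. The flow names, priorities, and arrival rates below are hypothetical assumptions for illustration, not a model of any actual carrier's network: when demand exceeds link capacity, the prioritized flow sees no delay while the best-effort flow's delay grows without bound.

```python
import heapq

def strict_priority_sim(ticks, flows, capacity):
    """Simulate a link that serves `capacity` packets per tick, always
    taking the highest-priority (lowest number) packet first.
    `flows` maps flow name -> (priority, packets arriving per tick).
    Returns the average queuing delay (in ticks) seen by each flow."""
    waiting, delays = [], {name: [] for name in flows}
    for tick in range(ticks):
        for name, (prio, rate) in flows.items():
            for _ in range(rate):
                heapq.heappush(waiting, (prio, tick, name))
        for _ in range(capacity):
            if not waiting:
                break
            _, arrived, name = heapq.heappop(waiting)
            delays[name].append(tick - arrived)
    return {n: (sum(d) / len(d) if d else None) for n, d in delays.items()}

# Hypothetical flows: the carrier's own VoIP gets priority 0; a
# competitor ("vonage") rides the congested best-effort queue.
result = strict_priority_sim(
    ticks=50,
    flows={"carrier_voip": (0, 1), "vonage": (1, 2)},
    capacity=2,  # link carries 2 packets per tick; demand (3) exceeds it
)
print(result)  # carrier_voip averages 0 delay; vonage's delay keeps growing
```

Note that the problem disappears if `capacity` is raised to meet demand: with sufficient bandwidth, nothing queues and prioritization is moot, which is the point made above about artificial shortage.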

The same discussion goes for video. If the network provides video over its reserved video capacity, and Google video must be over the limited capacity of the commodity network, introducing latency and jitter, is this fair? But should Google video be allowed on the separate video network for free? Is this what Whitacre actually meant by the "free ride?" What if the capacity of the commodity Internet pipe is increased, there is no queuing or latency issue, and we move to essentially a Video on Demand model (where the cable network is not pumping 99 channels to you that go unwatched while you view the latest update on the Bass Fishing channel).

But the video discussion goes further. Video puts different demands on the Network. Some video demands real-time distribution. The Net was not designed for that. Some video is huge - it's like moving a super freighter through the network when everything else is a VW bug. The Network was not really designed for that. The network was designed for a whole bunch of different applications of relatively similar non-real-time demands. Should the network be reconfigured for an application - high quality video - that could perhaps more efficiently be distributed over other networks (cable, broadcast)? Is the discussion of TV or video over the Internet misplaced? Do we give up access to the web pages of all the different peoples, cultures, religions, sports, or whatever of the world, so that we can download "Desperate Housewives"?

CDT picks up on this three-pipe distinction, arguing that Net Neutrality is about "Internet Neutrality," and should not be about neutrality on the other pipes.

Investment and Innovation


A major theme in the NN discussion is innovation. [Yemini p 5]

Opponents of Network Neutrality argue that broadband service providers should be given the opportunity to innovate and bring out new services. [Economists' Statement] The ability to form partnerships and create innovative products means that the network provider can increase revenue and will therefore have a greater incentive to invest. [FTC Staff Report 2007 p 60] Service providers argue that networks are expensive to build. [Odlyzko Efficiency and Fairness 2009 43] In order to satisfy increased demand for bandwidth, reliability, QoS, and security, and to bring new applications and content, networks need to be able to discriminate and they need to ensure revenue. [Ganley at 25] [FTC Staff Report 2007 p 60] Without this ability, networks will not be able to satisfy the needs of real time applications such as interactive gaming, telemedicine, or VoIP. [Verizon per Isen] They argue that Network Neutrality regulation will stifle investment just like telecommunications regulation stifled investment in DSL broadband. [Verizon per Isen] [Frieden Primer p 6]

Bob Kahn, one of the original designers of the Internet Protocol, argues "against dogmatic views of network architecture, saying the need for experimentation at the edges shouldn't come at the expense of improvements elsewhere in the network." [Kahn] David Farber and Michael Katz echoed this theme, stating "The Internet needs a makeover." [Farber] [Senate Hearing April 2008] [See Schewick Farber p 34 "There is one constant about the Internet: it has continued to evolve and change, often in ways none of us - even those of us directly involved in its development - would have predicted. This simultaneously makes the Internet so valuable and so vulnerable."] The concern expressed here is that legislation would lock technology down, preventing improvements inside the network itself that could address latency, jitter, security, and spam.

"Network neutrality opponents contend that network operators should be allowed to innovate freely and differentiate their networks as a form of competition that will lead to enhanced service offerings for content and applications providers and other end users. This perspective has been described as an argument in favor of "network diversity." Thus, opponents believe that network operators should be able to experiment with new data-transmission methods and a variety of business plans to better serve the evolving demands of end users. If such experiments turn out to be failures, network operators will learn from their mistakes and improve their offerings or simply return to the status quo, consistent with the normal dynamics of the market process. In their view, a ban on prioritization would effectively restrict new types of competition, hinder innovation, potentially preclude price reductions for consumers, hamper efficiencies, and lock in one kind of business model. They warn that in the nascent and evolving market for broadband services, mandating a single business plan is likely to lead to inefficient and unintended outcomes. They also assert that allowing content and applications providers to purchase quality-of-service assurances and prioritization may allow new content and applications providers to counteract the competitive advantages typically enjoyed by incumbent providers, such as the ability to pay for large server farms or third party data caching services." [FTC Staff Report 2007 p 65]

Proponents of NN respond by arguing that it is important to look at economic incentives. Companies that have incentives to innovate are those in competitive markets as a means of distinguishing themselves from competitors and capturing more market share. Where markets are not competitive - as in monopolistic or duopolistic markets - further innovation will not attract additional customers or lead to further market success; companies with significant market power do not have an incentive to innovate and invest. If a company believes that it is extracting all the profit it can out of a market, it will have no incentive to bring anything new to that market. Instead, it is more efficient and effective for companies with market power in one service - broadband access - to (a) consolidate and conserve that market power and (b) to attempt to leverage that market power to increase market share in other markets - applications and content. This does not encourage innovation and investment. Indeed, those with market power will conservatively view innovation with suspicion as a potential disruptive technology that threatens to destabilize their position in the market. Incumbent companies generally have negative approaches to new technology. [Odlyzko Efficiency and Fairness 2009 57 ("the 2007 iPhone deal that AT&T concluded with Apple may reflect the growing realization that the main source of innovation in services are and will continue to come from the outside.")]

Companies with market power have much less incentive to take risks, and tend to be risk-averse, avoiding anything that might destabilize their power. This has been demonstrated repeatedly throughout communications history.

  • In 1876, lacking financial resources, AT&T offered to sell its telephone patents to Western Union. Western Union refused, concluding that the telephone was "inherently of no use to us."
  • In 1881, telephone circuits were single wires grounded to the earth, which created noise on the line. A two-wire solution was realized that would have eliminated the noise, but, as written by John Brooks, "So long as subscribers were willing to tolerate interference on their lines - and their only recourse was to give up something for nothing - the owners of American Bell preferred not to make the huge capital outlays necessary for conversion to two-wire circuits." More than 10 years after the solution had been recognized, only 12% of subscribers had paired circuits eliminating the noise. [Brooks 86, 87]
  • In 1972, DARPA offered to sell the ARPANet (aka The Internet) to AT&T. AT&T refused, concluding that it was not something that it could use or sell.
  • AT&T refused to let third parties attach equipment to its network, most notoriously in the Hush-A-Phone and Carterfone cases. When AT&T was forced to permit attachments, this became a significant source of revenue, as companies acquired additional lines and service in order to support fax machines, modems, and servers. In the 1990s, the BOCs listed this as a leading source of their profitability.

See [Isenberg (Carriers want to change the Internet to preserve themselves)]

Proponents note that it was not telecommunications regulation that stifled DSL deployment but instead the lack of competitive incentive. DSL service was rolled out only after cable modem service was deployed; cable modem service was deployed only after cable felt competitive pressure from satellite TV service.

In a situation where market players have significant market power and little incentive to innovate, the goal of the regulator is to constrain that market power and let innovation and competition flourish elsewhere. Where broadband service providers have market power and will attempt to leverage that market power into the application and content markets, the regulator will seek to contain the market power in the market in which it exists. Broadband access market power will be limited to the broadband access market. If this succeeds and the broadband access networks are open platforms, then anyone can build the next Google or Netscape, and anyone can create the next Craigslist - innovation at those layers will flourish. [Isenberg 5.29.7] [See Christiansen]

Andrew Odlyzko argues that without net neutrality mandates, the incentive for broadband service providers will be to produce new products or services that no one wants. Service providers claim that they will build special facilities for streaming media, but this will not produce sufficient revenue, and it is an expensive, unnecessary, and wrong solution. [Odlyzko Delusions]

Jason Devitt, Chief Executive Officer of SkyDeck, and Barbara van Schewick, Assistant Professor of Law, Stanford Law School, note that venture capital firms, when they consider the risk of a potential investment, now consider the issue of network neutrality and regularly decline investments in new products where the lack of neutrality presents a risk. [FCC Stanford]

Proponents: Innovation : Permission from the Infrastructure

Ed Felten states, "the net neutrality debate is a fight between the edges and the middle over control of the network." [Felten Sec. 1] The Internet's End-to-End design ensured that the Internet's infrastructure (routers and links) was "dumb" and did as little as possible in order to maximize connectivity. The Internet would support any application, and it would run over any physical infrastructure. The network was designed to support the intelligence at the edge by having as little intelligence in the core as possible. The network's historical neutrality ensured that innovation could take place without the innovator seeking permission from the network operator. The endeavor of local access networks to create restrictions in the network - in the name of their own innovation - would do the opposite, erecting barriers to innovation by third parties unaffiliated with the local access networks. [See Schewick Farber p 33]

FTC Staff Report 2007 p 57

Proponents suggest that if so-called non-neutral practices are allowed to flourish in the core of the networks that comprise the Internet, innovation by content and applications developers that are connected to the Internet's "edges" will suffer. Some proponents, for example, are concerned about the complexity and cost that content and applications providers would experience if they had to negotiate deals with numerous network operators worldwide. They suggest that content and applications providers will need to expend considerable resources to negotiate and enter into prioritization agreements or other preferential arrangements with numerous networks and that many (particularly, small) companies will not be able to pay the fees that operators will demand to reach end users in a competitive manner. Thus, they fear that innovators will be blocked, actively degraded, or provided with low-priority data transmissions, and the development of the next revolutionary Internet site or application may be inhibited. They predict that spontaneous innovation will be precluded or forced to proceed through established businesses already having significant capital and favored relationships with network operators. Similarly, net neutrality proponents sometimes argue that nonprofit and educational entities may be at a disadvantage relative to highly capitalized businesses.


Opponents: Network service providers must be able to receive revenue (extract rents) from sources beyond offering network services in order to have sufficient incentive to build the network. In this view, offering network services alone does not generate enough revenue to create an incentive to provide network services; if network operators had to recover their investment in the network from the network alone, the return would be insufficient. Network operators want to be paid for the network - and something else - in order to offer the network. [Sidak p 5] [See Schewick Farber p 33 (Schewick: "This argument concedes that network providers have an incentive to discriminate to increase their profits.")]

FTC Staff Report 2007 p. 66: Opponents argue that prohibiting network operators from charging different prices for prioritized delivery and other types of specialized services and premium content will make it more difficult to recoup the costs of infrastructure investments and, thereby, reduce incentives for network investment generally. They argue that both end users and content and applications providers should be free to select any level of service provided by network operators under market-negotiated terms.

Network neutrality opponents also stress that, although the Internet began as a research and government communications network, its explosive growth since the mid-1990s has been fueled mainly by private, risk-bearing investment. They emphasize that the individual, decentralized networks that make up the Internet mostly are owned and operated by private companies and, generally speaking, are private property, even though they may be subject to certain legal requirements like rights of way permissions. They point out that deploying and upgrading broadband networks can entail billions of dollars in up-front, sunk costs. Thus, they argue, any regulation that reduces network operators' ability to recoup their investments also effectively increases their risk profile to investors and, accordingly, would prompt capital markets to demand an adjusted, higher rate of return. They suggest such an increase in the cost of capital, in turn, would decrease the likelihood that projects underway could be completed on their planned scale.

In addition to reducing incentives for network investment generally, opponents argue that banning network operators from selling prioritized data delivery services to content and applications providers will prevent networks from recouping their investments from a broader base of customers. In particular, they suggest that networks should be allowed to experiment with a model in which content and applications providers pay networks for prioritization and other premium services in the same way that merchants pay for the placement of advertisements in newspapers and other publications. They suggest that such a business model might reduce prices for some end users, much as advertising subsidizes the subscription prices of ad-supported publications, thereby allowing marginal customers to afford broadband service. They further suggest that such increased end-user penetration would also increase the effective demand for content and applications, generally, and thereby benefit their providers.

Opponents also argue that the imposition of NN regulation will create an additional cost on network infrastructure and thereby decrease companies' incentives to invest. [Sidecut 14]

Proponents: Proponents argue that one must distinguish between competitive and non-competitive marketplaces. According to the opponents, network services (non-competitive) must be bundled with something else (VoIP, content partners, application partners) (competitive) in order to recover sufficient revenue. However, in competitive markets, such as application services, profit margins are narrower because firms are competing on price (among other things). Looking to competitive markets (narrow profits) in order to increase the revenue of network services (large profits) does not make sense.

In non-competitive markets, such as network services, profits are strong because the firms do not have to compete on price (or QoS, or innovation). If a non-competitive market with higher profits is not sufficient incentive, adding narrow-profit application services is unlikely to improve incentives.

The reality is companies are not subsidizing the cost of the network services with the bundled application services; just the opposite. Network services are subsidizing the applications services (the competitive market) and thereby pricing the application services below cost. The service providers are not using the applications to help pay for the network; they are using the network to help undercut the competition in the application market.

Proponents also note that we now have a testbed. During the AT&T / Bellsouth Merger, NN obligations were imposed on AT&T. "The fact that AT&T reported nearly $2 billion in profits, up 17% from a year ago, double-digit growth in earnings per share, growth in residential lines, should put to rest any concerns that Network Neutrality requirements will harm [a companies'] growth now or in the future." [PK 1/25/7]

David Isenberg however cautions that "Carriers don't spend $1.5 million a week as they did in 2006 lobbying against NN unless they believe they will be harmed by it! I think the carriers' belief is correct; NN rules strong enough to keep the Internet neutral will indeed weaken their business." [Isenberg 5.29.7] By introducing discrimination into the network, carriers create essentially a scarcity of resource that they can take economic advantage of as they leverage their supply of the service. [Isenberg 5.29.7]

Economies of Scale

FTC Staff Report 2007 p 67: Net neutrality opponents argue that vertical integration by network operators into content and applications, along with related bundling practices, may produce economies of scope and price reductions. They point out that many areas of telecommunications are increasingly converging. For example, both cable and traditional telecommunications companies increasingly are offering "triple-" and "quadruple-play" bundles of high-speed data, telephony, television, and wireless services. In addition, they state that the vertical integration of distribution with other types of media content is already commonplace because consumers typically do not want distribution alone, but, instead, want the particular content enabled by that distribution. Some opponents also suggest that the prospect of additional revenue streams derived from vertical integration and bundling could promote additional competition in last-mile broadband services and provide other benefits to end users.

Applications: Video and Voice Applications - Changing Revenue Models

The market is experiencing a fundamental shift. Cable has gone from a video delivery service to triple play (video, voice, and broadband), where broadband enables the individual to bypass the cableco and get video from some other source. Cable as a video delivery service receives revenue from the content it delivers; Cable as a broadband provider receives no revenue from the content it delivers - indeed, video over broadband is in direct competition with the cableco's own video service. When Ed Whitacre declared that Google was not going to get a free ride over his network, he specifically was expressing concern about Google Video (now YouTube).

Further aggravating network economics is the fact that voice has gone from the primary source of revenue over telephone networks, to an all but free application over the network. [Odlyzko Delusions 3, 10]

In both scenarios, both firms (the cableco and the telco) are struggling to reinvent their revenue model from one based on revenue captured from a specific application (video or voice) to some new model. Previously they were what are now called vertically integrated networks (the transmission network was integrated with a specific application) and used their market position to capture revenue. The struggle is how this plays out in a new business model where an individual can acquire video or voice services from other firms. [Odlyzko Delusions] [Odlyzko Efficiency and Fairness 2009 44 (refuting "net neutrality is about streaming movies")]

This has created pressure and incentives for the networks, particularly the cablecos, to attempt business models that seek to retain lost sources of revenue (note that even the telcos are becoming video delivery services as well; see FiOS). See Comcast and BitTorrent. These business models, according to network neutrality advocates, are less than neutral.

Odlyzko argues that these efforts are misguided because they presume that "content is king." Odlyzko argues that it is not content which is king, but connectivity. [Odlyzko Efficiency and Fairness 2009 44]

"We are here today, in part, because we have seen a deepening division between some network operators and some in the application industry. Some in the applications industry are calling for government regulation of network engineering problems that historically have been resolved through [ISOC, IETF, ICANN]. Such collaborative bodies have never failed to resolve major network management challenges. That is a track record the government simply cannot match." - FCC Commissioner Robert McDowell


Opponents: Tom Tauke of Verizon testified at a Senate hearing that the Network Neutrality legislative proposal "really comes down to one thing: government regulation of the Internet."

Proponents: There are two myths here - one dealing with "regulation" and the second with "the pipe."

The first myth is that the Internet is unregulated; for those who adhere to this belief, the threat of any government regulation of the Internet is sacrilege. [Sura] The truth is that the Internet already is regulated by federal laws in many ways:

  • The DMCA regulates copyright on the Internet
  • The FCC has applied CALEA and 911 obligations to VoIP and broadband service providers.
  • CIPA requires schools and libraries that receive E-rate funding to use Internet filters
  • COPPA regulates the collection of information from children online
  • ECPA regulates wiretaps online
  • The CAN-SPAM Act regulates spam
  • Backbone interconnection has been the subject of FCC and DOJ merger review and FCC proceedings, and was overseen by NSF for the NSFNET and DOD for the ARPANET.
  • The Internet started as, and was originally operated as, a Department of Defense-funded research project
  • And so on - simply review this website for more information.

The second, less obvious myth concerns the communications pipe. The Internet has been transformed from something you do over the telecommunications network into the telecommunications network itself. Tauke wants to perpetuate a paradigm from a decade ago, when the Internet was provisioned over the telephone network - the Internet was unregulated as a communications network, but only because the telephone network (the communications infrastructure) was regulated as a telecommunications carrier (see the Computer Inquiries). Telecommunications networks have been regulated for 150 years, dating back to the telegraph, as open, nondiscriminatory common carriers. Tauke attempts to argue that the precedent is that we don't regulate - but the true precedent is that telecommunications networks have been regulated as common carriers throughout history.

We are free to depart from precedent, but it is important to understand where we are coming from in order to appropriately formulate new policy.

In response to the argument that NN would impose the status quo and prevent innovation: this fails to comprehend what is being asked and the historical precedent of this policy. NN is a policy of non-discrimination which has its historical roots in common carriage. It is not a rule that says the network cannot change. The network and the technology would be permitted to evolve and innovate in the same way they always have - in the context that this important resource in our society, culture, and democracy cannot have a business plan which discriminates in favor of some over others. NN is about how business plans are implemented more than it is about how the technology evolves.

Regulation would be Hard

Opponents of NN argue that coming up with a successful legal or regulatory regime would be hard and therefore the USG should not try. [Schewick Farber p. 35 (Farber: "In both cases the cure might be worse than the disease. The ability of the US Congress to pass effective legislation in telecommunications area is open to question.")]

Proponents of NN note that ARPA was originally told that the ARPANet was not feasible, and NSF was told that its task in building the NSFNet would be difficult. They also note that the USG has successfully regulated communications carriers for hundreds of years, resulting in some of the best postal, telegraph, and telephone services in the world - at least up until the recent deregulatory policies of the Bush-Cheney-Martin years. Proponents also point to the Computer Inquiries, the FCC's initial computer network policy. Initiated in 1966, the first proceeding was a failure. The second attempt, the Computer II Order, released in 1980, was a tremendous success, lasted 20 years, and was one of the legal underpinnings for the success of the Internet.

New Laws Not Needed

Opponents of Net Neutrality, according to some, retreated to the position that the 2005 Policy Statement is sufficient and new Net Neutrality laws are unnecessary. "The big question for many casual FCC observers is why would Martin, a Republican who is no historical friend of net neutrality rules, suddenly embrace the 2005 principles? The answer is, it may be the lesser-of-two-evils choice that his telco friends would wholeheartedly support, because it gives them a pain free way to say that new net neutrality laws aren't needed. Indeed, just moments after the FCC's vote was made public, both Verizon and AT&T issued statements supporting Martin's actions." Sidecut Reports See Verizon Statement on FCC Comcast Decision, Aug. 1, 2008 ("With both the FCC and the Federal Trade Commission (FTC) engaged in oversight of Internet usage and practices, new legislation and more regulation, with all their unintended consequences, are not needed.")



Caching in the Internet is the practice of storing locally data that has been requested / fetched from a more authoritative source. In terms of web content, when an individual requests popular content, that content may be stored on the local network so that anyone else requesting that popular content will receive it from the local server, avoiding the bandwidth cost of retrieving it from the remote server. The local servers refresh the data on a regular basis in order to ensure that it is current. The source of the data can tag the data to indicate that it should not be cached. [Caching Tutorial] [W3C Caching] [Wikipedia Cache] This technique has been around for a good while and is used in multiple applications, including the DNS. [PK 12/15/08]
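The store-then-refresh behavior described above can be sketched in a few lines of Python. This is only an illustrative sketch, not the code any ISP or CDN actually runs; the `fetch` function and the fixed time-to-live (TTL) are assumptions standing in for a real origin server and real freshness rules (such as HTTP Cache-Control headers or DNS record TTLs).

```python
import time

class TTLCache:
    """Minimal illustration of caching: store fetched data locally
    and serve it until a time-to-live expires, then re-fetch from
    the authoritative source."""

    def __init__(self, fetch, ttl_seconds=60):
        self.fetch = fetch      # function that retrieves data from the origin
        self.ttl = ttl_seconds  # how long a local copy is considered fresh
        self.store = {}         # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry is not None:
            value, expires = entry
            if time.time() < expires:
                return value    # cache hit: no trip to the remote server
        # cache miss or stale entry: refresh from the authoritative source
        value = self.fetch(key)
        self.store[key] = (value, time.time() + self.ttl)
        return value
```

Every repeated request within the TTL is answered locally (a "cache hit"), which is exactly the saved round trip to the remote server that makes caching attractive to both broadband providers and content providers.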

Edge caching is a common practice used by ISPs and application and content providers in order to improve the end user experience. Companies like Akamai, Limelight, and Amazon's CloudFront provide local caching services as part of what are known as content distribution networks (CDNs), and broadband providers typically utilize caching as well. Google and many other Internet companies also deploy servers of their own around the world.


News reports indicated that Google negotiated with ISPs about caching arrangements.

Google Inc. has approached major cable and phone companies that carry Internet traffic with a proposal to create a fast lane for its own content, according to documents reviewed by The Wall Street Journal. Google has traditionally been one of the loudest advocates of equal network access for all content providers. [WSJ Dec 2008]

Cable companies argued that they have refused to enter into caching arrangements with Google because they believed it would violate FCC NN rules. [WSJ Dec 2008] [Open Left 12/15/08]

Google responded:

By bringing YouTube videos and other content physically closer to end users, site operators can improve page load times for videos and Web pages. In addition, these solutions help broadband providers by minimizing the need to send traffic outside of their networks and reducing congestion on the Internet's backbones. In fact, caching represents one type of innovative network practice encouraged by the open Internet.

Google has offered to "colocate" caching servers within broadband providers' own facilities; this reduces the provider's bandwidth costs since the same video wouldn't have to be transmitted multiple times. We've always said that broadband providers can engage in activities like colocation and caching, so long as they do so on a non-discriminatory basis.

All of Google's colocation agreements with ISPs -- which we've done through projects called OpenEdge and Google Global Cache -- are non-exclusive, meaning any other entity could employ similar arrangements. Also, none of them require (or encourage) that Google traffic be treated with higher priority than other traffic. In contrast, if broadband providers were to leverage their unilateral control over consumers' connections and offer colocation or caching services in an anti-competitive fashion, that would threaten the open Internet and the innovation it enables.
[Google 12/15/08]

As explained by David Clark in a paper presented at TPRC in the fall of 2008, such network design avoids the bandwidth costs associated with transit and peering and can significantly reduce costs and enhance network management.

Municipal Broadband

Some municipalities are including Network Neutrality as an explicit condition in their Municipal Broadband RFPs. It has also been said that Network Neutrality (or lack thereof) is a driver for cities going into Municipal Broadband. Municipal broadband providers can help add or create competition, where a lack of competition is seen as a justification for NN. [FTC Staff Report 2007 p 106]

Other alternatives responsive to the market power of incumbent broadband providers include community (non-municipal) networks and broadband resale.

Definition of the Internet

It is argued that if Network Neutrality provisions are imposed on the Internet, then broadband service providers will simply migrate away from the Internet and offer some other type of broadband service that does not fit under the definition. While this seems unlikely, as the demand is for the Internet, Internet applications, and Internet content, it does raise a good point: how do you define the Internet? How do you target any potential Internet regulation so as to be effective? CT has an extensive discussion of the definition of the Internet. If you have time, read Will the Real Internet Please Stand Up?

