|World Wide Web|
- Internet History
- - Prelude 1950-66
- - - Paul Baran
- - ARPANET 1967-69
- - ARPANET 1970s
- - - TCP/IP
- - 1980s
- - - NSFNET
- - 1990s
- - - CIX
- - DNS
- - World Wide Web
- - VoIP
- - Backbone
- - Internet2
- - Reference
- Wireless / Radio
- Common Carrier
- - Communications Act
- - Telecom Act
- - Hush a Phone
- - Computer Inquiries
- - Universal Service
There is an apt parallel between the birth of the ARPANET and the creation of the Web. In both cases, they were born of governmental or institutional investments in making difficult, technical research easier and less expensive to do. In the Web's case, it was to solve a simple problem - the same problem from which all previous networks had suffered - incompatibility. But the differences between the ARPANET and the Web illustrate another key theme of the evolution of networks, and the results of that evolution. While the ARPANET was created by a top-down decision of government functionaries, the Web was created through a bottom-up effort. Networks, as they decentralize, often also empower and democratize the workforce. Nobody demanded the Web; yet Tim Berners-Lee, overcoming some initial reluctance, was able to create it. [Nerds2.0 p 285]
The success of the World Wide Web, itself built on the open Internet, has depended on three critical factors: 1) unlimited links from any part of the Web to any other; 2) open technical standards as the basis for continued growth of innovation applications, and; 3) separation of network layers, enabling independent innovation for network transport, routing and information applications. Today these characteristics of the Web are easily overlooked as obvious, self maintaining, or just unimportant. All who use the Web to publish or access information take it for granted that any Web page on the planet will be accessible to anyone who has an Internet connection, regardless whether it is over a dialup modem or a high speed multi-megabit per second digital access line. The last decade has seen so many new ecommerce startups, some of which have formed the foundations of the new economy, that we now expect that the next blockbuster Web site or the new homepage for your kid's local soccer team will just appear on the Web without any difficulty. - Testimony of Sir Timothy Berners-Lee, CSAIL Decentralized Information Group, Massachusetts Institute of Technology, before the United States House of Representatives, Committee on Energy and Commerce, Subcommittee on Telecommunications and the Internet, Hearing on the "Digital Future of the United States: Part I -- The Future of the World Wide Web"
In providing a system for manipulating this sort of information, the hope would be to allow a pool of information to develop which could grow and evolve with the organization and the projects it describes. For this to be possible, the method of storage must not place its own restraints on the information. This is why a "web" of notes with links (like references) between them is far more useful than a fixed hierarchical system. --- Tim Berners-Lee, Information Management: A Proposal, CERN March 1989 http://www.w3.org/History/1989/proposal.html
Sir Tim Berners-Lee
According to Tim Berners-Lee, he had a big idea in mind when he and Robert Cailliau invented the Web: a "common information space in which we communicate by sharing information." --- WWW FAQs: Why Was the World Wide Web Invented? 9-22-06
It was clear to me that there was a need for something like Enquire at CERN. In addition to keeping track of relationships between all the people, experiments and machines, I wanted to access different kinds of information, such as a researcher's technical papers, the manuals for different software modules, minutes of meetings, hastily scribbled notes, and so on. Furthermore, I found myself answering the same questions asked frequently of me by different people. It would be so much easier if everyone could just read my database. What I was looking for fell under the general category of documentation systems - software that allows documents to be stored and later retrieved... My vision was to somehow combine Enquire's external links with hypertext and the interconnection schemes I had developed for RPC. An Enquire program capable of external hypertext links was the difference between imprisonment and freedom, dark and light. New webs could be made to bind different computers together, and all new systems would be able to break out and reference others. Plus, anyone browsing could instantly add a new node connected by a new link. The system had to have one other fundamental property: It had to be completely decentralized. That would be the only way a new person somewhere could start to use it without asking for access from anyone else. And that would be the only way the system could scale, so that as more people used it, it wouldn't get bogged down. This was good Internet-style engineering, but most systems still depended on some central node to which everything had to be connected - and whose capacity eventually limited the growth of the system as a whole. I wanted the act of adding a new link to be trivial; if it was, then a web of links could spread evenly across the globe. --- Tim Berners-Lee, Weaving the Web, p15-16 (Harper Business 2000) [See also Berners-Lee 2010]
Well, I found it frustrating that in those days, there was different information on different computers, but you had to log on to different computers to get at it. Also, sometimes you had to learn a different program on each computer. So finding out how things worked was really difficult. Often it was just easier to go and ask people when they were having coffee.
Because people at CERN came from universities all over the world, they brought with them all types of computers. Not just Unix, Mac and PC: there were all kinds of big mainframe computer and medium sized computers running all sorts of software.
I actually wrote some programs to take information from one system and convert it so it could be inserted into another system. More than once. And when you are a programmer, and you solve one problem and then you solve one that's very similar, you often think, "Isn't there a better way? Can't we just fix this problem for good?" That became "Can't we convert every information system so that it looks like part of some imaginary information system which everyone can read?" And that became the WWW.
-- Tim Berners-Lee - Answers for Young People - What Made You Think of the WWW http://www.w3.org/People/Berners-Lee/Kids#What
- 1960 Ted Nelson dreams about "Xanadu" - a precursor to the WWW
- 1989 October Tim Berners-Lee develops "World Wide Web." [Nerds2.0 p 286]
- Tim Berners-Lee, Information Management: A Proposal (March 12, 1989)
- 1991 Aug. 6: First web page at info.cern.ch [20 Years Ago Today, Wired]
- 1991 August: Tim Berners-Lee's WWW files made available on the Net via FTP [W3C]
- 1993: Marc Andreessen develops Mosaic browser while at U of Illinois [Nerds2.0 p 380] (with funding from Sen. Al Gore's High Performance Computing Act of 1991)
- 1994: Netscape releases Navigator beta [Nerds2.0 p 301]
- 1994: W3C Founded [20 Years Ago Today, Wired]
- 1995: Netscape IPO [Nerds2.0 p 302]
- 1995 June: Microsoft meets with Netscape to discuss the possibility of acquiring Netscape [Vanity Fair]
- 1995: Microsoft releases Windows 95 with the Internet Explorer browser. [Nerds2.0 p 303]
- "I was also a staff member at the National Center for Supercomputing Applications, which is basically a federally funded research institute. When Al Gore says that he created the Internet, he means that he funded these four national supercomputing centers. Federal funding was critical. I tease my libertarian friends-they all think the Internet is the greatest thing. And I'm like, Yeah, thanks to government funding." Marc Andreessen quoted in Vanity Fair.
- April 22, 1993: NCSA Mosaic 1.0 released. [About NCSA Mosaic]
- The Internet - Mosaic: The Original Browser, NSF: The history of NSF's supercomputing centers overlapped greatly with the worldwide rise of the personal computer and workstation. It was, therefore, not surprising that software developers focused on creating easy-to-use software tools for desktop machines. The NSF centers developed many tools for organizing, locating, and navigating through information, but perhaps the most spectacular success was the NCSA Mosaic, which in less than eighteen months after its introduction became the Internet "browser of choice" for over a million users, and set off an exponential growth in the number of decentralized information providers. Marc Andreessen headed the team that developed Mosaic, a graphical browser that allowed programmers to post images, sound, video clips, and multifont text within a hypertext system. Mosaic engendered a wide range of commercial developments including numerous commercial versions of Web browsers, such as Andreessen's Netscape and Microsoft's Internet Explorer.
- "An example of a successful HPCC application is Mosaic, an easy-to-use Internet access tool. It was developed with Federal support at the National Center for Supercomputer Applications in Champaign-Urbana, Illinois." - Department of Commerce, National Information Infrastructure Progress Report September 1993-1994.
URLs are identifiers that are used to locate resources on the Internet, such as HTML files stored on a Web server. A URL is a compact representation of the location and access method for a resource that is available through the Internet, generally consisting of three pieces of information: (1) the access method, (2) the name of the server where the Web page is stored, and (3) the name of the page itself within the server, which may be depicted as a path. -- British Telecom v. Prodigy, 00 Civ 9451, Memorandum and Order Granting Summary Judgment (SDNY August 22, 2002)
A user may also access content on the Web by typing a URL (Uniform Resource Locator) into the address line of the browser. A URL is an address that points to some resource located on a Web server that is accessible over the Internet. This resource may be a Web site, a Web page, an image, a sound or video file, or other resource. A URL can be either a numeric Internet Protocol or "IP" address, or an alphanumeric "domain name" address. Every Web server connected to the Internet is assigned an IP address. A typical IP address looks like "22.214.171.124." Typing the URL "http://126.96.36.199/" into a browser will bring the user to the Web server that corresponds to that address. For convenience, most Web servers have alphanumeric domain name addresses in addition to IP addresses. For example, typing in "http://www.paed.uscourts.gov" will bring the user to the same Web server as typing in "http://188.8.131.52."
ALA v. United States, CA No. 01-1303 Finding of Fact: Internet (EDPA May 31, 2002)
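The three URL pieces the courts describe - access method, server name, and path - map directly onto what a standard URL parser extracts. A minimal sketch using Python's standard library (the URL shown is a placeholder for illustration, not one taken from the opinions):

```python
from urllib.parse import urlparse

# Split a URL into the three pieces described above:
# (1) the access method, (2) the server name, (3) the page's path.
url = "http://www.example.com/docs/index.html"  # placeholder example URL
parts = urlparse(url)

print(parts.scheme)  # access method: "http"
print(parts.netloc)  # server name: "www.example.com"
print(parts.path)    # page path within the server: "/docs/index.html"
```

The same parser accepts a numeric IP address in place of the alphanumeric domain name, reflecting the courts' point that either form can appear in the address line.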
World Wide Web :: Definition
47 USC 151 Note Internet Tax Freedom Act, Sec. 1101(d)(3)(A) By means of the world wide web.-The term 'by means of the World Wide Web' means by placement of material in a computer server-based file archive so that it is publicly accessible, over the Internet, using hypertext transfer protocol, file transfer protocol, or other similar protocols.
The World Wide Web is but one particular service available over the Internet. It enables a document to be stored in such a way on one computer connected to the Internet that a person using another computer connected to the Internet can request and receive a copy of the document. As Dr Clarke said, the terms conventionally used to refer to the materials that are transmitted in this way are a "document" or a "web page" and a collection of web pages is usually referred to as a "web site". A computer that makes documents available runs software that is referred to as a "web server"; a computer that requests and receives documents runs software that is referred to as a "web browser". Dow Jones v. Gutnick [2002] HCA 56 para 15 (10 December 2002) (High Court Australia)
"The World Wide Web is a part of the Internet that consists of a network of computers, called "Web servers," that host "pages" of content accessible via the Hypertext Transfer Protocol or "HTTP." Anyone with a computer connected to the Internet can search for and retrieve information stored on Web servers located around the world. Computer users typically access the Web by running a program called a "browser" on their computers. The browser displays, as individual pages on the computer screen, the various types of content found on the Web and lets the user follow the connections built into Web pages - called "hypertext links," "hyperlinks," or "links" - to additional content. Two popular browsers are Microsoft Internet Explorer and Netscape Navigator.
A "Web page" is one or more files a browser graphically assembles to make a viewable whole when a user requests content over the Internet. A Web page may contain a variety of different elements, including text, images, buttons, form fields that the user can fill in, and links to other Web pages. A "Web site" is a term that can be used in several different ways. It may refer to all of the pages and resources available on a particular Web server. It may also refer to all the pages and resources associated with a particular organization, company or person, even if these are located on different servers, or in a subdirectory on a single server shared with other, unrelated sites. Typically, a Web site has as an intended point of entry, a "home page," which includes links to other pages on the same Web site or to pages on other sites. Online discussion groups and chat rooms relating to a variety of subjects are available through many Web sites.
ALA v. United States, CA No. 01-1303 II.C. Findings of Fact: The Internet (EDPA May 31, 2002)
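The request-and-response cycle these findings describe - a browser asking a Web server for a page over HTTP - can be sketched end to end with Python's standard library. The one-page server and the loopback address below are illustrative stand-ins for a real Web server and browser, not part of any system named in the findings:

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# A minimal "Web server": it serves a single HTML page containing
# one hypertext link, the building block the findings describe.
class OnePageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body><a href='/next'>a hypertext link</a></body></html>"
        self.send_response(200)                       # HTTP status: OK
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # suppress per-request console logging

# Port 0 asks the OS for any free port; run the server in the background.
server = HTTPServer(("127.0.0.1", 0), OnePageHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "browser" side: open a connection, send an HTTP GET, read the page.
conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
conn.request("GET", "/")
resp = conn.getresponse()
page = resp.read().decode()
conn.close()
server.shutdown()

print(resp.status)                 # 200 - the page was served
print("hypertext link" in page)    # True - the link arrived in the HTML
```

A real browser repeats this exchange for every page, image, and script it assembles, then renders the HTML and lets the user follow the embedded links.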
The Web is a collection of information resources contained in documents located on individual computers around the world and is the most widely used and fastest-growing part of the Internet except perhaps for electronic mail or "e-mail." Brookfield, 174 F.3d at 1044 (citing United States v. Microsoft, 147 F.3d 935, 939 (D.C.Cir.1998)).
The Web contains multimedia "web pages"--computer data files written in Hypertext Markup Language ("HTML")--which contain information such as text, pictures, sounds, and audio and video recordings. See id.; Reno v. ACLU, 521 U.S. 844, 849-53, 117 S.Ct. 2329, 138 L.Ed.2d 874 (1997); Panavision Int'l, L.P. v. Toeppen, 141 F.3d 1316, 1318 (9th Cir.1998). An Internet user can move from one web page to another with just the click of the mouse. Sporty's Farm L.L.C. v. Sportsman's Market, Inc., 202 F.3d 489, 490 (2d Cir.2000). -- OBH, Inc., v. Spotlight Magazine, Inc., No. 99-CV-746A, 86 F.Supp.2d 176, 179 (WDNY Feb. 28, 2000).
In the late 80's, CERN developed the World Wide Web (WWW) which uses a set of complementary procedures - the HTTP protocols and HTML language - to set up this global information-sharing system. … It is a common misconception that all Internet services are provided via the World Wide Web. In reality, the Web is only one facet of the Internet. - LICRA v. Yahoo!, Interim Court Order, (County Court Paris Nov. 20, 2000) (as translated) PDF
HTML files are stored in computers called Web servers. A Web server "serves" up web pages to Web browsers upon request. A user looking to access an HTML file stored in a Web server requires a personal computer ("PC") with software called a Web browser. -- British Telecom v. Prodigy, 00 Civ 9451, Memorandum and Order Granting Summary Judgment (SDNY August 22, 2002)
Web sites are used by many companies to provide product information and sell products online. Consumers need an easy way to find certain companies to order products or gather information. "The most common method of locating an unknown domain name is simply to type in the company name or logo with the suffix . com." Sporty's, 202 F.3d at 493. If this method is unsuccessful, a user can use a "search engine" which, theoretically, will find all web pages on the Internet containing a particular word or phrase.
-- Morrison & Foerster LLP, v. Brian Wick and American Distribution Systems, Inc., No. CIV.A.00-B-465., 94 F.Supp.2d 1125, 1126 (D.Co. April 19, 2000).
- The Man Who Invented the Web - Part 1 by Dick Reiman, Historian
- CBC Archives: Inventing the Internet Age
- Tim Berners-Lee, Long Live the Web: A Call for Continued Open Standards and Neutrality, Scientific American (Nov. 22, 2010)
- Tim Berners-Lee, Weaving the Web (2000 Harper Business)
- “Constructing Legitimacy: the W3C’s Patent Policy,” in Laura DeNardis, ed., Opening Standards: The Global Politics of Interoperability (Cambridge, MA: The MIT Press, 2011).