They receive all services from their node. If a requested service is not
immediately available from the local 'node', the 'node' will request the
program from the originating 'node'. Because the system is intelligent, the
regional 'Knowing' node will know its own customers: if you want a requested
program on a regular basis, it will arrange to place that program permanently
on the 'node' for your use.
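
To make this concrete, here is a minimal sketch of such a node in Python. The
program names, the single originating node, and the three-request threshold are
illustrative assumptions, not part of the original proposal:

    # A regional 'node' that fetches programs it does not hold locally,
    # and keeps a program permanently once it is requested often enough.
    class OriginatingNode:
        def __init__(self, programs):
            self.programs = programs             # programs this node originates

        def send(self, name):
            return self.programs[name]

    class RegionalNode:
        def __init__(self, origin, keep_after=3):
            self.origin = origin                 # where missing programs come from
            self.local = {}                      # programs held on this node
            self.counts = {}                     # per-program request tally
            self.keep_after = keep_after         # keep permanently after this many requests

        def request(self, name):
            self.counts[name] = self.counts.get(name, 0) + 1
            if name in self.local:               # instantly available from the local node
                return self.local[name]
            program = self.origin.send(name)     # otherwise fetched from the originating node
            if self.counts[name] >= self.keep_after:
                self.local[name] = program       # placed permanently for regular use
            return program

    origin = OriginatingNode({"weather": "today's regional forecast"})
    node = RegionalNode(origin)
    for _ in range(4):                           # by the fourth request it is served locally
        print(node.request("weather"))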

Disseminated Network – Discussion

A disseminated network creates a direct cable connection between user and
regional facility. Even today, advanced duplex communication technology would
allow a major increase in both the quantity and the quality of signal that can
be carried on a wire.

Cable television operators have already completed the task of wiring most of
the homes in America. The same cable that brings you your television could
also connect you interactively to the 'Knowing' utility. If it is politically
and technologically possible to use existing cable, the cost of establishing a
'Knowing' utility might be reduced by half or more.

If existing cable television systems were used, the regional 'node' of the
knowledge utility could be built at the cable system's feeder station. And
since most cable television stations do not originate programs but obtain
their signal by microwave transmission, the same options would be open to the
'Knowing' utility: it could locate a facility at the feeder station, or it
could microwave its programs to the station.

This option could allow another major saving in the cost of creating the
system, as there are existing computer facilities in almost every community
that could be modified to serve as regional 'Knowing' nodes.

Even beginning from scratch, the cost of creating and operating a
disseminated knowledge-communication system would be significantly less
than a centralized system. And if the existing cable television systems could be
expanded to handle the communication needs of the regional knowledge
'nodes', then the only blocks to the creation of a 'Knowing' utility would be
organizational and political. Because the cost of creating this network would
be low in terms of capital, money should not be a major problem.

Eventually the copper cable would be replaced with glass, and all other
transmission would be microwave. Because computers have become so
powerful, the local 'Knowing' nodes need not be prohibitively expensive. It
may be possible, if not now, then soon, to use microcomputers or at most
minicomputers in the regional nodes.

In 1984, neither cost nor technological difficulty is a block to the creation
of a 'Knowing' utility.

* * *

Enter the Internet
Today, in 2001, there are no economic or technological barriers to creating a
full-scale 'Knowing' utility using a disseminated system. The basic services I
envisioned in 1984 are available to anyone with an Internet connection for less
than twenty dollars a month.

The Internet has emerged as an alternative to the commercial networks. It is a
disseminated system. Bruce Sterling wrote in his Short History of the Internet,
February 1993:

“Some thirty years ago, the RAND Corporation, America's foremost Cold
War think-tank, faced a strange strategic problem. How could the US
authorities successfully communicate after a nuclear war?

“Postnuclear America would need a command-and-control network, linked
from city to city, state to state, base to base. But no matter how thoroughly
that network was armored or protected, its switches and wiring would always
be vulnerable to the impact of atomic bombs. A nuclear attack would reduce
any conceivable network to tatters.

“And how would the network itself be commanded and controlled? Any
central authority, any network central citadel, would be an obvious and
immediate target for an enemy missile. The center of the network would be
the very first place to go. RAND mulled over this grim puzzle in deep
military secrecy, and arrived at a daring solution. The RAND proposal (the
brainchild of RAND staffer Paul Baran) was made public in 1964. In the first
place, the network would have no central authority. Furthermore, it would be
designed from the beginning to operate while in tatters.

“The principles were simple. The network itself would be assumed to be
unreliable at all times. It would be designed from the get-go to transcend its
own unreliability. All the nodes in the network would be equal in status to all
other nodes, each node with its own authority to originate, pass, and receive
messages. The messages themselves would be divided into packets, each packet
separately addressed. Each packet would begin at some specified source node,
and end at some other specified destination node. Each packet would wind its
way through the network on an individual basis.

“The particular route that the packet took would be unimportant. Only final
results would count. Basically, the packet would be tossed like a hot potato
from node to node to node, more or less in the direction of its destination,
until it ended up in the proper place. If big pieces of the network had been
blown away, that simply wouldn't matter; the packets would still stay
airborne, lateralled wildly across the field by whatever nodes happened to
survive. This rather haphazard delivery system might be inefficient in the
usual sense (especially compared to, say, the telephone system) – but it would
be extremely rugged.
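
The principle Sterling describes is simple enough to demonstrate in a few lines
of Python. In this sketch the message is divided into separately addressed
packets, each packet is tossed from node to node among whatever nodes survive,
a lost packet is simply resent, and the message is reassembled at its
destination. The network map, the packet size, and the purely random forwarding
rule are my own illustrative assumptions, not Baran's actual design:

    import random

    links = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
             "D": ["B", "C", "E"], "E": ["D"]}
    destroyed = {"B"}                            # a big piece of the network is gone

    def survivors(node):
        return [n for n in links[node] if n not in destroyed]

    def route(source, dest, max_hops=50):
        """Toss a packet node to node, hot-potato style, toward dest."""
        here = source
        for _ in range(max_hops):
            if here == dest:
                return True
            nearby = survivors(here)
            if not nearby:
                return False                     # nowhere left to toss it
            here = random.choice(nearby)         # lateralled to a surviving node
        return False                             # packet lost along the way

    message = "POSTNUCLEAR COMMAND AND CONTROL"
    packets = [(i, message[i:i + 8])             # each packet separately addressed,
               for i in range(0, len(message), 8)]   # carrying its own sequence number

    received = {}
    for seq, chunk in packets:
        while not route("A", "E"):               # a lost packet is simply sent again
            pass
        received[seq] = chunk

    print("".join(received[i] for i in sorted(received)))   # reassembled at node E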

“During the 60s, this intriguing concept of a decentralized, blastproof, packet-
switching network was kicked around by RAND, MIT and UCLA. The
National Physical Laboratory in Great Britain set up the first test network on
these principles in 1968. Shortly afterward, the Pentagon's Advanced Research
Projects Agency decided to fund a larger, more ambitious project in the USA.
The nodes of the network were to be high-speed supercomputers (or what
passed for supercomputers at the time). These were rare and valuable
machines which were in real need of good solid networking, for the sake of
national research-and-development projects.

“In fall 1969, the first such node was installed in UCLA. By December 1969,
there were four nodes on the infant network, which was named ARPANET,
after its Pentagon sponsor. The four computers could transfer data on
dedicated high-speed transmission lines. They could even be programmed
remotely from the other nodes. Thanks to ARPANET, scientists and
researchers could share one another's computer facilities by long-distance.
This was a very handy service, for computer-time was precious in the early
'70s. In 1971 there were fifteen nodes in ARPANET; by 1972, thirty-seven
nodes. And it was good.

“By the second year of operation, however, an odd fact became clear.
ARPANET's users had warped the computer-sharing network into a
dedicated, high-speed, federally subsidized electronic post office. The main
traffic on ARPANET was not long-distance computing. Instead, it was news
and personal messages. Researchers were using ARPANET to collaborate on
projects, to trade notes on work, and eventually, to downright gossip and
schmooze. People had their own personal user accounts on the ARPANET
computers, and their own personal addresses for electronic mail. Not only
were they using ARPANET for person-to-person communication, but they
were very enthusiastic about this particular service – far more enthusiastic
than they were about long-distance computation. It wasn't long before the
invention of the mailing-list, an ARPANET broadcasting technique in which
an identical message could be sent automatically to large numbers of network
subscribers. Interestingly, one of the first really big mailing-lists was SF-
LOVERS, for science fiction fans. Discussing science fiction on the network
was not work-related and was frowned upon by many ARPANET computer
administrators, but this didn't stop it from happening.

“Throughout the '70s, ARPA's network grew. Its decentralized structure made
expansion easy. Unlike standard corporate computer networks, the ARPA
network could accommodate many different kinds of machine. As long as
individual machines could speak the packet-switching lingua franca of the new,
anarchic network, their brand-names, and their content, and even their
ownership, were irrelevant.

“The ARPA's original standard for communication was known as NCP,
Network Control Protocol, but as time passed and the technique advanced,
NCP was superseded by a higher-level, more sophisticated standard known as
TCP/IP. TCP, or Transmission Control Protocol, converts messages into
streams of packets at the source, then reassembles them back into messages at
the destination. IP, or Internet Protocol, handles the addressing, seeing to it
that packets are routed across multiple nodes and even across multiple
networks with multiple standards – not only ARPA's pioneering NCP
standard, but others like Ethernet, FDDI, and X.25.
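
That division of labor is still visible from any modern programming language.
The Python sketch below is a self-contained loopback connection on one machine,
not anything from the ARPANET software itself: the sender writes one message
into a TCP stream, the stack carries it as individually routed packets, and the
receiver reads it back reassembled and in order.

    import socket
    import threading

    # Receiver: what TCP delivers here is already reassembled and in order.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))                   # port 0: let the system pick a free port
    srv.listen(1)
    port = srv.getsockname()[1]

    def receive():
        conn, _ = srv.accept()
        with conn:
            data = b""
            while True:
                chunk = conn.recv(1024)          # the stream arrives in order, gaps refilled
                if not chunk:                    # an empty read means the sender closed
                    break
                data += chunk
        print("received:", data.decode())

    listener = threading.Thread(target=receive)
    listener.start()

    # Sender: one sendall(); TCP cuts it into packets, IP addresses and routes them.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as c:
        c.connect(("127.0.0.1", port))
        c.sendall(b"one message, carried as many packets")
    listener.join()
    srv.close()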

“As early as 1977, TCP/IP was being used by other networks to link to
ARPANET. ARPANET itself remained fairly tightly controlled, at least until
1983, when its military segment broke off and became MILNET. But TCP/IP
linked them all. And ARPANET itself, though it was growing, became a
smaller and smaller neighborhood amid the vastly growing galaxy of other
linked machines.

“As the '70s and '80s advanced, many very different social groups found
themselves in possession of powerful computers. It was fairly easy to link
these computers to the growing network-of-networks. As the use of TCP/IP
became more common, entire other networks fell into the digital embrace of
the Internet, and messily adhered. Since the software called TCP/IP was
public-domain, and the basic technology was decentralized and rather anarchic
by its very nature, it was difficult to stop people from barging in and linking
up somewhere-or-other. In point of fact, nobody wanted to stop them from
joining this branching complex of networks, which came to be known as the
Internet.

“Connecting to the Internet cost the taxpayer little or nothing, since each node
was independent, and had to handle its own financing and its own technical
requirements. The more, the merrier. Like the phone network, the computer
network became steadily more valuable as it embraced larger and larger
territories of people and resources.

“A fax machine is only valuable if everybody else has a fax machine. Until
they do, a fax machine is just a curiosity. ARPANET, too, was a curiosity for
a while. Then computer-networking became an utter necessity.

“In 1984 the National Science Foundation got into the act, through its Office
of Advanced Scientific Computing. The new NSFNET set a blistering pace for
technical advancement, linking newer, faster, shinier supercomputers, through
thicker, faster links, upgraded and expanded, again and again, in 1986, 1988,
1990. And other government agencies leapt in: NASA, the National Institutes
of Health, the Department of Energy, each of them maintaining a digital
satrapy in the Internet confederation.

“The nodes in this growing network-of-networks were divvied up into basic
varieties. Foreign computers, and a few American ones, chose to be denoted
by their geographical locations. The others were grouped by the six basic
Internet domains: gov, mil, edu, com, org and net. (Graceless abbreviations
such as this are a standard feature of the TCP/IP protocols.) Gov, Mil, and
Edu denoted governmental, military and educational institutions, which were,
of course, the pioneers, since ARPANET had begun as a high-tech research
exercise in national security. Com, however, stood for commercial
institutions, which were soon bursting into the network like rodeo bulls,
surrounded by a dust-cloud of eager nonprofit orgs. (The net computers
served as gateways between networks.)
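
The naming scheme is mechanical enough to express in a few lines of Python; the
host names below are made up for illustration:

    # The six original top-level domains, as Sterling lists them.
    DOMAINS = {"gov": "governmental", "mil": "military", "edu": "educational",
               "com": "commercial", "org": "non-profit", "net": "network gateway"}

    def classify(host):
        tld = host.rsplit(".", 1)[-1]            # the label after the last dot
        return DOMAINS.get(tld, "geographical or unknown")

    for host in ["whitehouse.gov", "ucla.edu", "rand.org", "nask.pl"]:
        print(host, "->", classify(host))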

“ARPANET itself formally expired in 1989, a happy victim of its own
overwhelming success. Its users scarcely noticed, for ARPANET's functions
not only continued but steadily improved. The use of TCP/IP standards for
computer networking is now global. In 1971, a mere twenty-one years ago,
there were only four nodes in the ARPANET network. Today there are tens
of thousands of nodes in the Internet, scattered over forty-two countries, with
more coming on-line every day. Three million, possibly four million people
use this gigantic mother-of-all-computer-networks.

“The Internet is especially popular among scientists, and is probably the most
important scientific instrument of the late twentieth century. The powerful,
sophisticated access that it provides to specialized data and personal
communication has sped up the pace of scientific research enormously.

“The Internet's pace of growth in the early 1990s is spectacular, almost
ferocious. It is spreading faster than cellular phones, faster than fax machines.
Last year the Internet was growing at a rate of twenty percent a month. The
number of host machines with direct connection to TCP/IP has been doubling
every year since 1988. The Internet is moving out of its original base in
military and research institutions, into elementary and high schools, as well as
into public libraries and the commercial sector.
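
Those two figures can be checked against each other with one line of arithmetic
(my calculation, not Sterling's): twenty percent a month compounds to nearly
ninefold in a year, comfortably ahead of a simple yearly doubling.

    # 20% monthly growth, compounded over twelve months:
    print(round(1.20 ** 12, 1))                  # 8.9 -- almost ninefold in a year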

“Why do people want to be on the Internet? One of the main reasons is simple
freedom. The Internet is a rare example of a true, modern, functional
anarchy. There is no Internet Inc. There are no official censors, no bosses, no
board of directors, no stockholders. In principle, any node can speak as a peer
to any other node, as long as it obeys the rules of the TCP/IP protocols, which
are strictly technical, not social or political. (There has been some struggle
over commercial use of the Internet, but that situation is changing as
businesses supply their own links).
