The metaphors we use to describe the internet matter.
In the 1990s, as the internet gained steam in the United States, one metaphor reigned supreme: “the information superhighway.” Bill Gates, for one, wasn’t a fan. “The highway metaphor isn’t quite right,” he wrote in 1995. It put too much emphasis on infrastructure—the material stuff and institutions that make the internet work. It evoked governments and implied that they should have a hand in maintaining it.
“A metaphor I prefer,” Gates continued, “is the market.” The internet wasn’t a highway at all, he argued, but more like the New York Stock Exchange—a supposedly self-organizing system generated by individuals pursuing their own interests. It would facilitate “friction-free capitalism” and realize the dream of Adam Smith’s “ideal market.”
Since 1995, this laissez-faire vision of the internet has helped produce our digital world. Generations of digital capitalists embraced the internet and found innumerable ways to monetize its growth: selling access, leveraging its reach to move goods, marketing gadgets to tap into it more easily. Recently, however, digital capitalism has coalesced around one business model in particular: data extraction. Social media, search engines, and email accounts come for free in exchange for personal data that the platforms monetize.
This data has been called “the new oil.” It’s an apt metaphor for the tech industry, in the sense that data is a lucrative and highly speculative commodity. It’s even more appropriate when you consider the many crises produced by oil. What tradeable thing is more politically fraught, has launched more wars, or inflicts greater ecological damage?
The oil metaphor, much like Gates’s vision of the internet as a free market, restricts our imagination about what the internet could be. We need metaphors that center the internet’s role as a commons, and emphasize not merely its technical organization but its political and economic underpinnings.
One possibility is the utility. What if we saw platforms, internet service providers, and other critical components of the network as utilities? What if we treated them as critical infrastructures—similar to the systems that provide our electricity, our water, and our public transit? What if we organized them as institutions outside of the market, subject to democratic rule and public accountability?
This idea has been gaining traction recently, but it’s not new. In fact, the “information utility” has a long history, one that predates not just Gates’s laissez-faire internet, but the internet itself. That history offers valuable lessons for today, as a rising tide of public anger towards the tech industry creates new opportunities for imagining a radically different digital future.
When the Utility Was the Future
In 1964, the MIT business school professor Martin Greenberger wrote an article for The Atlantic called “The Computers of Tomorrow.” Greenberger was part of a generation of management thinkers writing in the early 1960s who foresaw the importance of the computer to the American economy. He believed that computing power could soon become as ubiquitous as electrical power, driven by a new technology he had seen firsthand at MIT called “time-sharing.”
In the 1960s, computers came in the form of “mainframes”: enormous room-size machines. The innovation of time-sharing was that it divided the computing resources of a single mainframe among multiple terminals, which could be located in a single lab or linked up miles away. This let many users work on the same computer at once. We tend to associate “personal” computing with microprocessors: smaller chips that made desk-scale, individually owned and operated computers possible. But throughout the 1960s, the spread of interactive and user-oriented computing was tied to a utility model made possible by time-sharing.
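The scheduling idea at the heart of time-sharing is simple enough to sketch. The toy Python below illustrates the principle rather than any actual 1960s system (the function name, terminal names, and workloads are invented for the example): one machine’s time is carved into slices and rotated among waiting terminals, so each user feels as if they have the computer to themselves.

```python
# Toy illustration of time-sharing: one processor rotated among terminals
# in fixed slices (round-robin), so many users share a single machine.
from collections import deque

def time_share(jobs, slice_units=1):
    """Grant processor slices round-robin until every terminal's job is done.

    jobs: dict mapping a terminal name to the units of work it needs.
    Returns the order in which slices were granted.
    """
    queue = deque(jobs.items())
    schedule = []
    while queue:
        terminal, remaining = queue.popleft()
        schedule.append(terminal)                # this terminal gets the machine
        remaining -= slice_units
        if remaining > 0:
            queue.append((terminal, remaining))  # unfinished work goes to the back of the line
    return schedule

# Three users "simultaneously" sharing one mainframe:
print(time_share({"lab-terminal": 3, "downtown-office": 2, "remote-campus": 2}))
# ['lab-terminal', 'downtown-office', 'remote-campus', 'lab-terminal', ...]
```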
Utility computing exploded in the second half of the decade. Some companies sold dial-in access to data storage and programming environments. Others, including large providers like General Electric, offered on-demand services like ready-made programs for accounting, financial planning, and project management. Meanwhile, railroads, airlines, and financial institutions that had bought expensive mainframes for their own purposes sold “spare” time in regional secondary markets.
This utility model made computing far more accessible to businesses. Because buying or renting a mainframe was so costly, few firms could afford them. With time-sharing, companies could suddenly buy computing time by the hour rather than rent a $300,000 per year machine from IBM.
By the end of the 1960s, the utility model as an idea was pervasive. Some predicted it wouldn’t be long until even households would subscribe to these on-demand computer services. Dialing in from a small desktop terminal, users would store data, word process, and communicate without owning any of the actual hardware to do it.
So when Greenberger wrote about the information utility, he wasn’t imagining some wholly new technology of the future—he was proposing a change to the technology of the present. The computer utilities of his day remained private and largely regional. Greenberger wanted a different kind of utility, one that was both public and national.
Greenberger was no radical: his information utility was to serve “free enterprise,” the banking sector, and the financial markets. Computers were to function as infrastructure for the flow of capital. Once computers were ubiquitous, he envisioned, the work of determining credit-worthiness, setting the Federal Reserve interest rate, and supervising stock markets would become the domain of computer systems.
The computer infrastructure Greenberger imagined embodied the politics of mid-century liberalism. He wanted to build a digital system that could modernize industry, and fulfill the fantasy of a harmonized, technocratically managed national economy. Still, he envisioned computing as a commons under public control. Such a utility couldn’t be left in the hands of an unregulated monopoly. It demanded robust oversight for the same reasons that any capital-intensive utility did: because only regulation could guarantee universal access, keep costs fair, and ensure the infrastructure was maintained.
Community Infrastructure
Elsewhere, the possibilities of the utility sparked a more revolutionary aspiration: to create information services that supported communities and social movements, rather than businesses and markets.
This was the dream at ONE, an experimental urban community founded in 1970 in San Francisco. Its members were artists and technical workers, largely college-educated, white, and in their late twenties. As an early prospectus for the community noted, their generation had come of age at a time when the demand for skilled labor seemed infinite. They emphasized the “utilization of industrial surplus,” a surplus that Cold War America was producing in abundance: armies of technical workers and warehouses full of computing machines.
ONE happily gathered both in support of their fledgling community. Among the haul: a surplus XDS-940 mainframe from the San Francisco-based insurance conglomerate Transamerica. The XDS was tailor-made for utility computing—for being shared among users through time-sharing.
Soon, the communalists of ONE put the machine to work on a number of projects, including a system called Community Memory. Coming online in August 1973, Community Memory functioned as a digital bulletin board for the Bay Area. It consisted of a teletype terminal on the third floor of Leopold’s Records in Berkeley that anyone could come in and use. The machine used a modem to connect to the XDS over a phone line, and users could enter commands to add or retrieve listings: “ADD” to contribute text, or “FIND” to search for it. Responses would then come back over the phone line and get printed out on the teletype printer.
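The interaction model is simple enough to mimic in a few lines. The sketch below is a modern Python toy, not the original XDS-940 software: the ADD and FIND commands are drawn from the historical record, while the data structure, keyword tagging, and sample listings are invented for illustration.

```python
# Toy re-imagining of Community Memory's two commands: ADD a listing, FIND listings.
# (Illustrative only; the original ran as shared software on the XDS-940.)
listings = []  # the shared "memory": everything anyone has ever added

def add(text, keywords):
    """ADD: contribute a listing tagged with searchable keywords."""
    listings.append({"text": text, "keywords": {k.lower() for k in keywords}})

def find(keyword):
    """FIND: retrieve every listing tagged with the given keyword."""
    return [item["text"] for item in listings if keyword.lower() in item["keywords"]]

add("Rider wanted: Berkeley to SF, weekday mornings", ["carpool", "berkeley"])
add("Bassist seeks band, into free jazz", ["music", "bands"])

print(find("carpool"))  # ['Rider wanted: Berkeley to SF, weekday mornings']
```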
Community Memory anticipated many of the uses of the contemporary internet. People organized carpools, bands advertised gigs, and poets “published” their verse. In this respect, it resembled an early form of social media. But it was also a tool for community organizing. Like other radical computing groups in Chicago and Boston, ONE wanted to tap computing power for social ends: to build databases to support community activism and to research the arrangements of corporate and political power.
Indeed, Community Memory was “inescapably political,” declared ONE member Michael Rossman at a 1975 conference. “Its politics,” he continued, “are concerned with people’s power—their power with respect to the information useful to them, their power with respect to the technology of information (hardware and software both).” What was revolutionary about the ONE experiment was what Rossman called its “operational politics”: that it provided computing as a public service. The democratizing possibilities of computing were inseparable from its political economy. ONE didn’t extract economic rent from the information utility but rather treated it as a social good—as a people’s utility.
Collective Memory
During the mid-1970s, computing underwent a rapid change. The availability of inexpensive microprocessors like the Intel 8080 helped spur a hobbyist hacking community interested in building smaller, more “personal” computers. The mainframe and the time-sharing system didn’t die with the rise of personal computing, but their centrality to the imagined future of computing—along with the politics of the information utility—faded away.
In popular histories of computing, personal computers often seem like a foregone conclusion. The model of a single-user, privately owned device became so dominant that the utility computing of the time-sharing era came to seem like an aberration—a strange detour that never amounted to much.
Yet more recent scholarship reveals the importance of that detour to our digital present. The historian Joy Lisi Rankin has shown how communities organized around time-shared computers in schools and universities inspired the first conceptions of computing as a socially minded and community-oriented activity. The scholar Tung-Hui Hu sees the return of the old utility model in modern cloud computing.
Still, something was lost with the rise of personal computing. One clue comes from a 1978 article by the leftist tech activist group Boston Computer Collective. Writing in the radical science magazine Science for the People, they took aim at the hollowness of the personal computing revolution, reviewing Ted Nelson’s Computer Lib/Dream Machines, a widely circulated book celebrating that revolution.
Making computers more widespread would not “pave the way towards a just society,” they argued. Smaller machines would not mean more personal power and less corporate control. “We cannot accept Nelson’s implication that a small computer must come from a relatively small manufacturer,” they wrote, “or that this supposedly small corporation will therefore hold public interest over profits.” Nor could they accept the idea that “hypermedia” and “individualized” instruction would improve the conditions of education; rather, it would likely lead to more “individualized control and standardization.”
In the decades since, their critiques have largely been borne out. Decentralization and personalization—watchwords of the personal computing and internet era—did not automatically serve as forces for liberation. Rather, they were something of a Trojan horse: a way of making computer technology so intimate that it brought profit-making and corporate power into every aspect of our lives.
But the history of the information utility shows that there are alternatives. It reminds us that there is nothing natural about computing’s current form. It suggests that the problems of digital capitalism are inseparable from the problems of capitalism itself.
Technologies can’t make social choices for us. The hope that computers could fix the market, make a just society, and democratize politics or information is nearly as old as the machines themselves. To define the technology we want means first defining the society we want. It means asking old questions: How can democracy coexist with capitalism? Are certain things too valuable to be bought and sold? Who will own the machines?