Issue 12 / Commons

December 20, 2020
An abstract image of a Black man being interviewed by a seemingly older white interlocutor, behind another image of an old computer.

Image by Celine Nguyen.

The Fort Rodman Experiment

Charlton McIlwain

In 1965, IBM launched the most ambitious attempt ever to diversify a tech company. The industry still needs to learn the lessons of that failure.

Breonna Taylor, Ahmaud Arbery, and George Floyd were murdered in early 2020, victims of persistent anti-Black violence. In the midst of nationwide uprisings over their deaths, leaders in the technology industry responded. Amazon donated $10 million—roughly forty-five minutes’ worth of its gross annual profits—to racial justice organizations such as the National Association for the Advancement of Colored People. Social media companies like Facebook and streaming services like Netflix made content created by Black people more visible. Sundar Pichai, the CEO of Google and its parent company, Alphabet, promised a number of corporate commitments to racial equity, such as establishing anti-racist educational programs within the organization. And almost all of these actions were accompanied by pledges to bring more Black and brown people into tech company ranks—a desperately needed measure in a chronically white industry.

When tech leaders made those pledges, they often presented themselves as breaking bravely with the past: they would take unprecedented steps to overcome the implicit bias within their own companies and the structural racism of the industry as a whole in order to forge a more equitable future. (“Google commits to translating the energy of this moment into lasting, meaningful change,” Pichai wrote in a letter to the company.) But there’s good reason to doubt this self-presentation. In 2014, following pressure from public figures including Reverend Jesse Jackson, Google, Facebook, Apple, and Microsoft publicly disclosed their diversity data. Only 6 percent of Apple’s workforce, 2 percent of Microsoft’s, and 1 percent of Google’s and Facebook’s identified as Black, according to statistics compiled by Wired. Each company vowed to do better; Apple’s Tim Cook said the company would become “as innovative in advancing diversity as we are in developing products.” By 2018 and 2019, however, the percentage of Black tech workers at Facebook, Google, and Microsoft had increased by only one point; Apple’s numbers hadn’t changed at all.

Far from making a break with the past, when tech leaders pledge to diversify their companies, they are drawing from a playbook drafted over the course of the industry’s history. Information technology firms have been trying—and largely failing—to become more racially representative since at least the 1960s. To understand some of the reasons why the tech industry has failed to become more diverse year after year, decade after decade, it’s useful to go back to the earliest large-scale efforts by a major technology company—IBM—to diversify its workforce. 

IBM has been actively trying to bring more Black and brown people into its workforce longer than any other major tech company, and it has adopted or invented the widest range of strategies to do so. If any company has had a margin of success in this, it’s been IBM, and all of the tech companies that have come after it have in some way followed its example. At the same time, IBM’s history is instructive because the company has been at the forefront of producing racist information technologies that have disparately harmed the very same people the company has spent decades trying to recruit—a dynamic that also characterizes many of today’s tech giants.

IBM’s flawed motives, failed strategies, tempered successes, and massive contradictions over more than half a century provide critical lessons for today’s tech industry to learn from if it is serious about advancing racial justice and equity. What the history of IBM shows is that creating racial equity in tech requires a commitment from institutions beyond the industry. It also demands that we rethink the sorts of technology that we allow tech companies to build.

The Original Bootcamp

In 1964, US civil servants transformed a former army base known as Fort Rodman, on the outskirts of New Bedford, Massachusetts, into the campus for an audacious new experiment in technical education. The base would host hundreds of male high-school dropouts—most Black, some white and Latino, all poor or working-class—from across the country for a free fourteen-month training program designed to produce graduates who could go on to entry-level jobs at tech companies, including IBM. As IBM president Tom Watson Jr. later recalled in his memoirs, “The idea was to train 750 hard-core unemployed each year—black high school dropouts from the inner city who had never held jobs.”

This was IBM’s first and, in many ways, most ambitious diversity initiative. It was run by IBM but funded by the federal government as part of the Job Corps, a free education and workforce training program conceived by the Kennedy administration that later became a key part of President Lyndon Johnson’s Great Society and its so-called War on Poverty. The Job Corps was one of the ways in which Johnson sought to quell the social and economic frustration fueling Black and working-class political mobilization during the civil rights era in order to maintain and expand Democratic power.

IBM had two major goals in launching the Fort Rodman experiment. First, it hoped to ingratiate itself with the federal government, a source of lucrative contracts for everything from tabulation machines for the US Census to computer consoles for operating NASA space flights. Second, and perhaps more critically, the computing giant needed to train a large entry-level technical labor force to help fuel the company’s rapid expansion. In 1965 alone, IBM acquired twenty-six new facilities, and in the subsequent five years it would double its annual revenue to roughly $7 billion ($45 billion in 2020 dollars).

The first crop of 350 students arrived at Fort Rodman in January 1965. They hailed “from the big cities and the small ones, the shut-down mining towns and the farm country” in New York, Texas, Alabama, and thirty-one other states, according to a 1966 promotional film about the program. Some of these young men may have been lured to Fort Rodman by postcards that featured an aerial photograph of the base on a sunny day, looking almost like a beach resort, and on the other side the text: “A HANDUP—NOT A HAND OUT.” One student said that the program was his last resort; a judge told him it was either Fort Rodman, or else.

Students at Fort Rodman were separated into small cohorts, with one instructor assigned to five or six students. The instructors were white college graduates, some from the Peace Corps, who had been trained on site to be “tutor-counselors” to the young men who for more than a year would make Fort Rodman their home. The tutor-counselors were expected to be mentors and to bond closely with the boys; they ate with the students, hung out in barrack-style dormitories where as many as fifty slept in bunk beds with military-cornered sheets, and played football together. This was as much a part of the students’ training as their remedial math and language courses and their regimen of office skills training, which included how to use typewriters, calculators, and keypunch and data-processing machines.

In the 1966 promotional film for Fort Rodman, students seem impressed as an IBM employee shows them around a new punch-card tabulating machine. “How much time would this machine save compared to how you do ’em by hand?” a Black student in a shirt and blazer asks in the film. “Take a payroll application, for example,” the IBM instructor answers. “A payroll that might take an entire week to prepare could be done on this machine in, say, two to three hours at the most.” But if the electronic magic showcased by IBM captivated the boys at Fort Rodman, it wasn’t enough to help them develop proficiency in the skills needed to get a technical job at a company like IBM.

Most of what we know about the program comes from promotional materials that reflect how the people running the program, and IBM leadership, idealistically imagined it working. Even in these sources, however, the causes of Fort Rodman’s failures are clear. Some were operational: for example, despite being designed to provide small-group, individualized attention, the program hired too few staff to meet student needs. Reports noted that students were often neglected by their instructors. Some students stopped showing up for class.

Compounding this neglect was no doubt the paternalism of Fort Rodman’s mission, and the belief in Black cultural inferiority that the project embodied. In the 1966 promotional film, for example, as the camera fixates on the face of a young Black man working through a math lesson, the narrator intones:

No one has ever given a damn about him until now. He’s failed in school. He’s failed with his family. He’s failed within society. And so he is turned inwards and in a very bad way. We have to convert this history of serious failure into a present history of success.

This viewpoint—that it was the young men at Fort Rodman who were broken and needed fixing, not the systems of racist and sexist capitalism that Fort Rodman was, in theory, training them for—reflected the “culture of poverty” idea underlying many of President Johnson’s Great Society programs. This idea held that Black people were poor because they had an inferior culture that didn’t prioritize work and individual responsibility, among other things; in order to change, Black people had to experience and adopt the “right”—supposedly white—cultural values. At the same time, Fort Rodman isolated the young men from the communities that provided acceptance, care, safety, and pride for who they were as people.

For its part, the local community in New Bedford made it clear that the young men weren’t wanted there; in May 1966, worried about “unruly elements” at the camp, as a Washington Post report put it, the city council asked President Johnson to move the Job Corps center out of Fort Rodman. Though Fort Rodman had enrolled more than 870 young men by then, the Johnson administration pressured IBM to close it. “The experience caused us some real soul-searching, because there were more problems than we anticipated,” IBM President Tom Watson admitted in his memoirs. “IBM ended up hiring very few Camp Rodman ‘graduates,’ and I doubt any other company did either.”

Racism as a Business Model

Fort Rodman may have been a failure, but IBM invented a number of other diversity programs that continued, with limited success, into the late 1970s. Several of its initiatives were aimed at luring Black people to IBM through job fairs and targeted advertising in Black media outlets, as well as by loaning equipment and funding faculty positions at historically Black colleges and universities. The company’s primary focus, though, was on developing the “supply side” of the labor market by training the folks it hoped would fill its demand for technical workers. These efforts were smaller in scale than Fort Rodman, but similar in spirit.

IBM doesn’t seem to have tracked its diversity programs with any rigor, making it difficult to know just how many it ran, where, and to what effect. But between 1978 and 1981, the period when IBM was most public about the success of its diversity programs, roughly 20 percent of IBM’s new hires were non-white, and the number of non-white managers in the company increased from 1,973 to 2,600.

But like Fort Rodman, IBM’s other diversity programs were flawed in important ways. Most notably, they focused on short-term, skills-based training for people whose educational background made it unlikely they would move beyond low-level positions at the company. Frank Cary, the chairman and CEO of IBM throughout most of the 1970s, admitted as much in a speech to the company’s board and stockholders in 1974. “We’ve made good progress on one of our objectives—bringing into IBM capable and highly motivated minorities and women,” Cary said. “Our second objective is taking longer to achieve: helping minorities and women qualify themselves for advancement at every level of the business consistent with their abilities and their growing population in the company.”

Among the obstacles to promoting talented women and people of color, Cary’s comments implied, was the desire among members of IBM’s managerial class to hold onto the privileges conferred by their whiteness. “The relevant question I’m asked most frequently by IBM managers,” Cary said, “is: ‘How can we do that without practicing reverse discrimination?’”

This attitude was connected to a more fundamental problem. As much as IBM did in this period to try to remove the metaphorical “Whites Only” sign from its company doors, racism at the company wasn’t just a cultural or structural issue—it was part of its long-term business model. As early as the 1920s, IBM marshaled its computing powers to support eugenics, sterilization, and population control in Jamaica. The company sold technology to Hitler’s regime that allowed the Nazis to tabulate census figures in order to identify and eventually murder Jews, and it sold similar technologies to South Africa to run the apartheid state.

From about 1961 through the late 1960s, IBM was also deeply invested in helping federal, state, and local governments imagine, develop, and deploy carceral technologies that became known as “criminal justice information systems.” IBM engineers, designers, and salesmen aggressively marketed computer hardware and software applications to the law enforcement community. Through lucrative contracts with big city police forces like the NYPD, research and development partnerships through President Lyndon Johnson’s 1965 Crime Commission, and millions of dollars in grants from the newly formed federal Law Enforcement Assistance Administration, IBM laid the foundations on which today’s policing and surveillance infrastructure has been progressively built over the past fifty years. 

In 1968, for example, IBM debuted a system called ALERT II in Kansas City, Missouri. The system began as a database—a place to store police records about arrests, adjudications, jailings, and juvenile justice cases. But by the early 1970s, when it was fully built out, ALERT II, along with similar systems across the United States, was a nationally networked platform that provided law enforcement the ability to profile, surveil, target, and deploy police manpower based on the racial composition of neighborhoods and locations where crime allegedly predominated. 

This reinforced a vicious cycle of racist policing. Because police believed Black people committed more crime, they deployed more police to Black neighborhoods. That led to more arrests, which meant Black people were captured more in police databases. Relying on that data to determine where to target police resources meant policing Black neighborhoods more intensively, thus perpetuating the cycle. As a result, entire communities were effectively criminalized in part by the technologies IBM was building.
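To make the mechanism concrete, here is a minimal, hypothetical sketch in Python of the loop described above. Every number and name in it (the two neighborhoods, their offense rates, the patrol counts) is invented for illustration; none of it comes from ALERT II or any real policing system.

# Hypothetical illustration of the feedback loop described above.
# All figures are invented; none are drawn from ALERT II or any real system.

def simulate(rounds: int = 5) -> None:
    # Two neighborhoods with an identical underlying rate of offenses.
    true_offense_rate = {"A": 0.05, "B": 0.05}
    # The initial patrol allocation is already skewed toward neighborhood A.
    patrols = {"A": 60, "B": 40}
    # Cumulative arrests recorded in the database.
    recorded = {"A": 0.0, "B": 0.0}

    for r in range(1, rounds + 1):
        # Arrests scale with patrol presence, not with the offense rate alone.
        for hood in patrols:
            recorded[hood] += patrols[hood] * true_offense_rate[hood]
        # The next round's patrols are reallocated in proportion to recorded arrests.
        total = sum(recorded.values())
        patrols = {hood: round(100 * count / total) for hood, count in recorded.items()}
        print(f"round {r}: patrols={patrols}, recorded arrests={recorded}")

simulate()

Even though both neighborhoods have identical offense rates, the more heavily patrolled one generates more recorded arrests every round, and the reallocation rule reads that skew in the database as confirmation that the original deployment was justified, so the imbalance never corrects itself.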

Over the next three decades, through the 1970s, 1980s, and 1990s, such criminal justice information systems—some built by other companies following IBM’s lead, many built by IBM itself—proliferated throughout the US, criminalizing Black and brown communities across the country. Since then, IBM has developed newer technologies with even more expansive law enforcement applications, including facial recognition, predictive policing, and police management systems—all of which wreak havoc on Black and brown people in similar ways to IBM’s earlier generation of carceral technologies.

It’s impossible to draw clear lines of causality between the racial makeup of IBM and the racist carceral technologies it has built. Would an organization with more Black and brown people in roles with seniority and power necessarily eschew helping law enforcement agencies criminalize communities of color? Would an organization that didn’t build racist carceral technologies have more Black and brown people eager to join its senior ranks? Or would class interests trump racial solidarity so that even an IBM that was more diverse at all levels would still choose profit over racial justice? 

Those may be unanswerable questions, but there is nevertheless a clear thread connecting IBM’s diversity projects with the racist technologies it developed. In both cases, IBM saw Black and brown people as easily exploitable sources of profit—either in the form of low-wage labor, or as the material inputs that fed its policing technologies.

Building Black Tech

In the half century since the Fort Rodman experiment ended, big tech companies have launched many other diversity programs. But the numbers of Black and brown people in those companies, and the underlying logics of racialized capitalism that power the technology industry, have remained largely unchanged. IBM’s supply-side labor programs continue in the form of legions of coding bootcamps that promise Black and brown young people entrée into the tech industry—though, in the absence of government and philanthropic support, these are run almost entirely as for-profit ventures.

The same approaches have correlated with the same results. The percentage of full-time Black employees in the tech industry today is about the same as IBM’s was in 1965—roughly 2.5 percent. This lack of progress is reflected in, and may in part be caused by, the attitudes of the people who run these companies: a new report, People of Color in Tech, reveals that the majority of tech founders and CEOs believe that diversity work is ineffective. Roughly half of that same group are unconcerned about the fact that only 1 percent of tech entrepreneurs funded by VCs are Black. This sort of indifference is echoed in the experience of Black tech workers, who (more than their white peers) say they have trouble finding mentors at the companies they work for.

What would a more effective approach to improving the diversity of the tech industry look like? Three lessons stand out from the history of IBM’s diversity programs. First, we need to ditch the supply-side approach that only prepares people for the lowest-level jobs, with the goal of creating an expendable and increasingly cheap labor force. Second, we can’t leave it to the tech industry to change itself—we need government watchdog agencies like the Equal Employment Opportunity Commission to hold companies accountable, and the commitment of public resources, like those marshaled by Johnson’s Great Society programs, to help transform society itself. Finally, we can’t just seek to change racist cultures or structures at tech companies—we need to fundamentally change their business models. 

In June 2020, IBM’s recently appointed president, Arvind Krishna, announced to Congress that the company would no longer sell or develop facial recognition technology. He did so out of an explicit concern for racial justice, recognizing that these technologies have been and continue to be used to devastate people of color, including those within his own company. The tech industry should follow IBM’s lead in examining its products, investments, and research and development projects. When these are inconsistent with racial equity and justice, the companies must abandon them. Diversity in tech is not just about sharing the gains of technology. It’s about reimagining the tech we build and why.

Charlton D. McIlwain is Professor of Media, Culture and Communication at NYU, Founder of the Center for Critical Race & Digital Studies, and the author of Black Software.

This piece appears in Logic(s) issue 12, "Commons".