Issue 16 / Clouds

March 27, 2022
[Header image: an old-school browser window with an image of an agile story board, partially obscured by misty clouds]

Agile and the Long Crisis of Software

Miriam Posner

What is Agile? And where does it come from?

I first encountered Agile when I got a job in a library. I’d been hired to help get a new digital scholarship center off the ground and sometimes worked with the library’s software development team to build tools to support our projects. There were about six members of this team, and I noticed right away that they did things differently from the non-technical staff. At meetings, they didn’t talk about product features, but “user stories”—tiny narratives that described features—to which they assigned “story points” that measured the effort involved in completing the associated tasks. They met every morning for “standup,” a meeting literally conducted standing up, the better to enforce brevity. A whiteboard had pride of place in their workspace, and I watched the developers move Post-it notes across the board to signify their state of completion. They worked in “sprints,” two-week stretches devoted to particular tasks.

At meetings with the rest of us on the library staff, the head of the development team reported on progress using software that included a dashboard indicating the state of every project. The manager could also show us a graph of the team’s “velocity,” the rate at which the developers finished their tasks, complete with historical comparisons and projections. 
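
For readers who haven’t sat through these rituals, the underlying bookkeeping is very simple. The sketch below is mine, not the library team’s actual tooling; the names and numbers are invented, and it assumes the standard definition of velocity as story points completed per sprint.

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    narrative: str   # the tiny narrative, e.g. "As a researcher, I want to export citations."
    points: int      # story points: relative effort, not hours
    done: bool = False

# One two-week sprint's worth of stories (illustrative).
sprint = [
    UserStory("As a librarian, I want to tag items in a collection.", points=3, done=True),
    UserStory("As a researcher, I want to export citations.", points=5, done=True),
    UserStory("As an admin, I want to bulk-import records.", points=8),
]

completed = sum(s.points for s in sprint if s.done)   # 8 points this sprint

# "Velocity" is the average of those totals over recent sprints, which is what
# makes the dashboard's historical comparisons and projections possible.
history = [13, 10, 11, completed]
velocity = sum(history) / len(history)                        # 10.5 points per sprint
remaining_backlog = 42                                        # points not yet completed
print(f"~{remaining_backlog / velocity:.1f} sprints to go")   # ~4.0 sprints to go
```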

This was Agile, I learned, a method for managing software development that had achieved enormous popularity in technical workplaces of all kinds—and, increasingly, even non-technical workplaces (including, as one TED speaker would have it, the family home). Honestly, I was impressed. In my own work, I often felt as though I was flailing around, never quite sure if I was making progress or doing anything of real value. The developers, in contrast, seemed to know exactly what they were doing. If they ran into a roadblock, it was no big deal; they just dealt with it. They expected requirements to change as they progressed, and the two-week time horizons allowed them to substitute one feature for another, or adopt a new framework, without starting all over from scratch.

That’s the beauty of Agile: designed for ultimate flexibility and speed, it requires developers to break every task down into the smallest possible unit. The emphasis is on getting releases out fast and taking frequent stock, changing directions as necessary.

I was intrigued; Agile was different from anything I’d experienced before. Where had it come from, and why?

I began to explore the history of Agile. What I discovered was a long-running wrestling match between what managers want software development to be and what it really is, as practiced by the workers who write the code. Time and again, organizations have sought to contain software’s most troublesome tendencies—its habit of sprawling beyond timelines and measurable goals—by introducing new management styles. And for a time, it looked as though companies had found in Agile the solution to keeping developers happily on task while also working at a feverish pace. Recently, though, some signs are emerging that Agile’s power may be fading. A new moment of reckoning is in the making, one that may end up knocking Agile off its perch.

Turning Weirdos into Engineers

Software development was in crisis even before the word “software” was coined. At a 1954 conference convened by leaders in industry, government, and academia at Wayne State University in Detroit, experts warned of an imminent shortage of trained programmers. The use of the term “software” to mean application programming first appeared in print in an article by statistician John W. Tukey four years later. By the mid-1960s, at least a hundred thousand people worked as programmers in the United States, but commentators estimated an immediate demand for fifty thousand more.

In the first decades of the programming profession, most experts assumed that formulating computer-readable directions would be a relatively trivial job. After all, the systems analysts—the experts who specified the high-level architecture—had already done the hard intellectual work of designing the program and hardware. The job of the coder was simply to translate that design into something a computer could work with. It was a surprise, then, when it turned out that this process of translation was in fact quite intellectually demanding.

The nature of these intellectual demands, along with the larger question of what kind of work software development actually is, continues to baffle managers today. In the computer’s early years, it seemed to some people that coding was, or should be, a matter of pure logic; after all, machines just do what you tell them to do. There was, self-evidently, a formally correct way to do things, and the coder’s job was simply to find it. 

And yet, the actual experience of programming suggested that coding was as much art as science. Some of the most advanced programming, as Clive Thompson notes in his 2019 book Coders, was pushed forward by long-haired weirdos who hung around university labs after hours, hackers who considered themselves as much artisans as logicians. The fact that one couldn’t physically touch a piece of software—its storage media, perhaps, but not the software itself—made software development more abstract, more mysterious than other engineering fields. Where other fields could be expected to obey the laws of physics, the ground seemed to be constantly shifting under software’s feet. Hardware was perpetually changing its parameters and capabilities. 

Nevertheless, the field of electronic data processing—the automation of office functions, like time cards and payroll—was growing rapidly. The hulking machines designed for the purpose, leased from IBM, quickly became the hallmark of the technologically forward-thinking operation. But they required teams of operators to design the programs, prepare the punch cards, and feed data into the system. Established managers resented the specialized expertise and professional eccentricity of the growing ranks of “computer boys.” They resented, too, that software projects seemed to defy any estimation of cost and complexity. The famed computer scientist Frederick Brooks compared software projects to werewolves: they start out with puppy-like innocence, but, more often than not, they metamorphose into “a monster of missed schedules, blown budgets, and flawed products.” You could say, then, that by the late 1960s, software development was facing three crises: a crying need for more programmers; an imperative to wrangle development into something more predictable; and, as businesses saw it, a managerial necessity to get developers to stop acting so weird.

It was in this spirit of professionalization that industry leaders encouraged programmers to embrace the mantle of “software engineer,” a development that many historians trace to the NATO Conference on Software Engineering of 1968. Computer work was sprawling, difficult to organize, and notoriously hard to manage, the organizers pointed out. Why not, then, borrow a set of methods (and a title) from the established fields of engineering? That way, programming could become firmly a science, with all the order, influence, and established methodologies that come with it. It would also, the organizers hoped, become easier for industry to manage: software engineers might better conform to corporate culture, following the model of engineers from other disciplines. “In the interest of efficient software manufacturing,” writes historian Nathan Ensmenger, “the black art of programming had to make way for the science of software engineering.”

Chasing Waterfalls

And it worked—sort of. The “software engineering” appellation caught on, rising in prominence alongside the institutional prestige of the people who wrote software. University departments adopted the term, encouraging students to practice sound engineering methodologies, like using mathematical proofs, as they learned to program. The techniques, claimed the computer scientist Tony Hoare, would “transform the arcane and error-prone craft of computer programming to meet the highest standards of the engineering profession.” 

Managers approached with gusto the task of organizing the newly intelligible software labor force, leading to a number of different organizational methods. One approach, the Chief Programmer Team (CPT) framework instituted at IBM, put a single “chief programmer” at the head of a hierarchy, directing a cadre of specialists whose interactions he oversaw. Another popular approach placed programmers beneath many layers of administrators, who made decisions and assigned work to the programmers under them.

With these new techniques came a set of ideas for managing development labor, a management philosophy that has come to be called (mostly pejoratively) the “waterfall method.” Waterfall made sense in theory: someone set a goal for a software product and broke its production up into a series of steps, each of which had to be completed and tested before moving on to the next task. In other words, developers followed a script laid out for them by management. 

The term “waterfall,” ironically, made its first appearance in an article indicting the method as unrealistic, but the name and the philosophy caught on nevertheless. Waterfall irresistibly matched the hierarchical corporate structure that administered it. And it appealed to managers because, as Nathan Ensmenger writes, “The essence of the software-engineering movement was control: control over complexity, control over budgets and scheduling, and, perhaps most significantly, control over a recalcitrant workforce.” This was precisely the kind of control that waterfall development was designed to provide.

But before long, software development was again in crisis—or crises. Part of the problem was keeping up with the need for new computer scientists. Universities in 1980 couldn’t fill the faculty positions necessary to train the huge number of students with ambitions to become software engineers. “This situation seriously threatens the ability of Computer Science departments to continue developing the skilled people needed both by our information processing industry and by an increasingly technological society,” warned the Association for Computing Machinery. 

The industry’s dearth of qualified developers wasn’t its only problem. Software development itself seemed to be struggling. Waterfall’s promise of tightly controlled management was a mirage. No amount of documentation, process, or procedure seemed capable of wrestling development into predictability. Software projects were big, expensive, and they seemed to be spiraling out of control—huge initiatives foundered unfinished as requirements changed and warring project teams bickered about details. Despite managers’ efforts to make software development reliable and predictable, it seemed, if anything, to have only grown more unwieldy. As the computer scientist Jeff Offutt put it, “In the 1960s, programmers built ‘tiny log cabins,’” while “in the 1980s, teams of programmers were building office buildings”—and by the 1990s, skyscrapers. Yet teams of technologists seemed unable to coordinate their work. Peter Varhol, a technology industry consultant, estimates that in the early 1990s, the average application took three years to develop, from idea to finished product. Technology was supposed to make American business smarter, faster, and more profitable, and yet the most respected corporations couldn’t seem to get their projects off the ground.

The designation of “engineer,” the administrative hierarchies, the careful planning and documentation: all of this had been intended to bring order and control to the emerging field of software development. But it seemed to have backfired. Rather than clearing the way for software developers to build, waterfall gummed up the works with binders of paperwork and endless meetings. 

For their part, engineers complained of feeling constrained by heavy-handed management techniques. They just wanted to build software. Why were they hamstrung by paperwork? The typical picture of corporate programming in the 1990s is of the existentially bored twenty-somethings in Douglas Coupland’s novel Microserfs, or the desperate developers in Mike Judge’s movie Office Space, whose rage lurks just below the surface.

Khakis and Dad Jeans

Enter what may be the world’s most unlikely group of rock stars: seventeen middle-aged white guys, dressed in khakis and dad jeans, all obsessed with management. The now-legendary authors of what came to be called the Agile Manifesto gathered at Utah’s Snowbird ski resort in February 2001 to hammer out a new vision for the software development process. This wasn’t their first meeting; they’d been gathering in various configurations to discuss software development for some time, though, until the 2001 meeting, they hadn’t come up with much to show for it. This time was different. Scrawled on a whiteboard was the Agile Manifesto, a set of values that, in the following years, would become nearly ubiquitous in the management of programmers, from fledgling startups to huge corporations. It’s pleasantly concise:

We are uncovering better ways of developing software by doing it and helping others do it.

Through this work we have come to value:

Individuals and interactions over processes and tools

Working software over comprehensive documentation

Customer collaboration over contract negotiation

Responding to change over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

The manifesto, supplemented by twelve additional principles, targeted the professional frustrations that engineers described. Waterfall assumed that a software application’s requirements would be stable, and that slowdowns and logjams were the result of deviating from management’s careful plan. Agile tossed out these high-level roadmaps, emphasizing instead the need to make decisions on the fly. This way, software developers themselves could change their approach as requirements or technology changed. They could focus on building software, rather than on paperwork and documentation. And they could eliminate the need for endless meetings. 

It’s an interesting document. Given the shortage of qualified developers, technology professionals might have been expected to demand concessions of more immediate material benefit—say, a union, or ownership of their intellectual property. Instead, they demanded a workplace configuration that would allow them to do better, more efficient work. Indeed, as writer Michael Eby points out, this revolt against management is distinct from some preceding expressions of workplace discontent: rather than demand material improvements, tech workers created “a new ‘spirit,’ based on cultures, principles, assumptions, hierarchies, and ethics that absorbed the complaints of the artistic critique.” That is, the manifesto directly attacked the bureaucracy, infantilization, and sense of futility that developers deplored. Developers weren’t demanding better pay; they were demanding to be treated as different people.

Organizational Anarchists

The shift in thinking about the nature of software development likely didn’t happen in 2001 exactly, but over the decade leading up to the Agile Manifesto’s authorship. Consensus was growing—among developers, but also among managers—that software development couldn’t be made to fit the flow-charts and worksheets in which analysts had placed so much hope. Software, as the historian Stuart Shapiro wrote in 1997, is complex in a particularly complex way: its problems were “fuzzy, variable, and multifaceted, and thus rarely proved amenable to any one approach; instead, they demanded hybrid and adaptive solutions.” Not, then, forms and timecards. Moreover, as the workforce of programmers grew by leaps and bounds in the 1990s, companies hired, of necessity, people without formal computer science training. These younger workers likely had less invested in the drive of the 1970s and 1980s to turn software development into a science. The manifesto wasn’t really a shot across the bow: it was more of a punctuation mark, emphasizing years of discontent with prevailing models of corporate management.

Nevertheless, while Agile had a devoted following, its mandate—the removal of top-down planning and administrative hierarchy—was a risk. It meant management ceding control, at least to some extent, to developers themselves. And most large companies weren’t willing to do that, at least not until the 2010s. Between 2012 and 2015, though, according to the Agile consultancy Planview, more than 50 percent of practicing development teams characterized themselves as “Agile.” 

Doubtless, some of this popularity had to do with the growth of high-speed internet connections, which drastically altered the way software got released. Before, it wasn’t unusual for software to be updated once a year, or at even longer intervals. The fact that updates had to be distributed on physical media like CD-ROMs and floppy disks limited the speed of new releases. But high-speed internet made it possible to push out fixes and features as often as a company wanted, even multiple times a day. Agile made a lot of sense in this environment.

Facebook’s famous former motto, “Move fast and break things,” captured the spirit of the new era well. It was an era that rewarded audacity, in software developers as much as in CEOs. Venture capital firms, on the hunt for “unicorns,” poured record amounts into the technology sector during the 2010s, and they wanted to see results quickly. Competing with startups required the ability to change on a dime, to release constantly, and to develop at breakneck speed. The risk calculus shifted: it now seemed dangerous to stick with waterfall, when Agile promised so much speed.

Equally, it seems, what it meant to be a software developer had changed. In the 1970s and 1980s, experts held up the systems-minded, logic-loving scientist as the ideal software worker. But over the years, this ideal had failed to take root. The programmers of the 1990s read Wired, not Datamation. If their characteristics can be intuited from the Agile Manifesto, they were intently committed to the highest standards, working quickly and confidently because managers “trust them to get the job done.” They refused to do things just because they’d always been done that way, turning their minds to “continuous attention to technical excellence.” They weren’t thrown by fluid, fast-moving requirements; instead, they embraced them as an opportunity to “harness change for the customer’s competitive advantage.”

The image of the free-thinking nonconformist fits the philosophy of Agile. The manifesto’s authors may have looked like textbook engineers, in button-downs with cell-phone holsters, but “a bigger group of organizational anarchists would be hard to find,” according to Jim Highsmith, one of their number. Particularly in the early days, there was a lot of talk about the challenge Agile posed to the traditional management paradigm. Agile’s proponents were proud of this nonconformity: the framework “scares the bejeebers out of traditionalists,” wrote Highsmith in 2001. “Agile was openly, militantly, anti-management in the beginning,” writes the software developer and consultant Al Tenhundfeld. “For example, Ken Schwaber [a manifesto author] was vocal and explicit about his goal to get rid of all project managers.” 

Anti-management, maybe, but not anti-corporate, not really. It’s tempting to see the archetypal Agile developer as a revival of the long-haired countercultural weirdo who lurked around the punch card machines of the late 1960s. But the two personas differ in important respects. The eccentrics of computing’s early years wanted to program for the sheer thrill of putting this new technology to work. The coder of Agile’s imagination is committed, above all, to the project. He hates administrative intrusion because it gets in the way of his greatest aspiration, which is to do his job at the highest level of professional performance. Like the developers in Aaron Sorkin’s The Social Network, he wants most of all to be in “the zone”: headphones on, distractions eliminated, in a state of pure communion with his labor.

Management’s Revenge

Back at my library job, I kept an eye on the developers, admiring their teamwork and pragmatism. As time went by, though, I couldn’t help but notice some cracks in the team’s veneer. Despite the velocity chart and the disciplined feature-tracking, the developers didn’t seem to be making all that much progress. They were all working hard, that was clear, but there was a fatal flaw: no one really knew what the project was ultimately supposed to look like, or exactly what purpose it was supposed to serve. The team members could develop features, but it wasn’t clear what all these features were being tacked on to. Maybe that problem came from my workplace’s own dysfunction, which was considerable. Still, I began to wonder whether the Agile methodology had some limitations.

And, in fact, anyone with any proximity to software development has likely heard rumblings about Agile. For all the promise of the manifesto, one starts to get the sense when talking to people who work in technology that laboring under Agile may not be the liberatory experience it’s billed as. Indeed, software development is in crisis again—but, this time, it’s an Agile crisis. On the web, everyone from regular developers to some of the original manifesto authors is raising concerns about Agile practices. They talk about the “Agile-industrial complex,” the network of consultants, speakers, and coaches who charge large fees to fine-tune Agile processes. And almost everyone complains that Agile has taken a wrong turn: somewhere in the last two decades, Agile has veered from the original manifesto’s vision, becoming something more restrictive, taxing, and stressful than it was meant to be. 

Part of the issue is Agile’s flexibility. Jan Wischweh, a freelance developer, calls this the “no true Scotsman” problem. Any Agile practice someone doesn’t like is not Agile at all, it inevitably turns out. The construction of the manifesto makes this almost inescapable: because the manifesto doesn’t prescribe any specific activities, one must gauge the spirit of the methods in place, which all depends on the person experiencing them. Because it insists on its status as a “mindset,” not a methodology, Agile seems destined to take on some of the characteristics of any organization that adopts it. And it is remarkably immune to criticism, since it can’t be reduced to a specific set of methods. “If you do one thing wrong and it’s not working for you, people will assume it’s because you’re doing it wrong,” one product manager told me. “Not because there’s anything wrong with the framework.”

Despite this flexibility in its definition, many developers have lost faith in the idea of Agile. Wischweh himself reached a turning point while describing a standup meeting to an aunt, a lawyer. She was incredulous. The notion that a competent professional would need to justify his work every day, in tiny units, was absurd to her. Wischweh began to think about the ways in which Agile encourages developers to see themselves as cogs in a machine. They may not be buried under layers of managers, as they were in the waterfall model, but they nevertheless have internalized the business’s priorities as their own. “As developers, IT professionals, we like to think of ourselves as knowledge workers, whose work can’t be rationalized or commodified. But I think Agile tries to accomplish the exact opposite approach,” said Wischweh.

Al Tenhundfeld points out that the authors of the Agile Manifesto were working developers, and that the manifesto’s initial uptake was among self-organizing teams of coders. Now, however, plenty of people specialize in helping to implement Agile, and Agile conferences notoriously tend to be dominated by managers, not developers. The ubiquity of Agile means that it is just as likely to be imposed from above as demanded from below. And Agile project managers, who are generally embedded in the team as the “product owner,” find themselves pulled in two directions: what’s best for the developers on the one hand, and what they’ve promised to deliver to management on the other. 

Even as the team is pulled in multiple directions, it’s asked to move projects forward at an ever-accelerating pace. “Sprinting,” after all, is fast by definition. And indeed, the prospect of burnout loomed large for many of the tech workers I spoke to. “You’re trying to define what’s reasonable in that period of time,” said technical writer Sarah Moir. “And then run to the finish line and then do it again. And then on to that finish line, and on and on. That can be kind of exhausting if you’re committing 100 percent of your capacity.”

Moreover, daily standups, billed as lightweight, low-key check-ins, have become, for some workers, exercises in surveillance. Particularly when work is decomposed into small parts, workers feel an obligation to enumerate every task they’ve accomplished. There’s also pressure for every worker to justify their worth; they are, after all, employees, who need to be perceived as earning their salaries.

“Story points”—the abstraction that teams use to measure the effort involved in particular development tasks—have also lost some of their allure. They began as a way to give engineers some control over the amount and substance of their work. And yet, in practice, they often serve as a way to assess engineers’ performance. “Once you’ve put something in a digital tool, the amount of oversight that people want goes up, right?” said Yvonne Lam, a software engineer based in Seattle.

The problem isn’t just with surveillance, but with the way the points calcify into a timeline. John Burns, an engineer at a platform company, recalled a former workplace that simply multiplied story points by a common coefficient, in order to get a rough estimate of how long a project would take. Despite the points’ avowed status as an informal, internal measure, managers used them as a planning device. 
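
Burns doesn’t give the actual figure, but the conversion he describes amounts to arithmetic like the following; the coefficient and backlog size here are invented for illustration.

```python
# Hypothetical translation of "informal" story points into a schedule.
DAYS_PER_POINT = 0.75   # the "common coefficient" (invented value)

backlog_points = 120    # total story points across the project's remaining user stories
estimated_days = backlog_points * DAYS_PER_POINT

print(f"Estimated duration: {estimated_days:.0f} working days")
# Estimated duration: 90 working days
```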

The Next Crisis

Underlying these complaints is a deeper skepticism about the freedom that Agile promises. Agile’s values celebrate developers’ ingenuity and idiosyncratic ways of working. But there are distinct limits to the kinds of creativity workers feel authorized to exercise under Agile, particularly because problems tend to be broken down into such small pieces. “It is clear that Agile dissolves many of the more visible features of hierarchical managerial control,” writes Michael Eby. “But it does so only to recontain them in subtle and nuanced ways.” Yvonne Lam notes that autonomy under Agile has distinct parameters. “People say you have the autonomy to decide how you’re going to do the work. And it’s like, yeah, but sometimes what you want is the autonomy to say, this is the wrong work.” There are so many choices to be made in the course of any software development project—about languages, frameworks, structure—that it’s possible to lose sight of the fact that developers often don’t get to weigh in on the bigger questions. 

And, in the last few years, those bigger questions have taken on greater importance and urgency. We’ve seen numerous examples of tech workers organizing to change the direction of their companies’ business strategies: Google developers agitating to kill an AI contract with the Department of Defense, game developers agitating to end sexual harassment. These demands go beyond Agile’s remit, since they aim not to create conditions for workers to do a better job, but to change the nature of that job altogether. 

It’s also worth considering how Agile might have played a role in creating a work culture that is increasingly revealed to be toxic for women, people of color, and members of gender minority groups. It’s an inescapable fact that the authors of the Agile Manifesto were a very specific group of people: white men who, whatever their varying experiences, have probably not spent much time in workplaces where they composed the minority. The working group has since acknowledged the deficit in the team’s diversity and vowed to incorporate a larger set of voices in the Agile Alliance, a nonprofit associated with the manifesto. 

But when you survey a list of Agile-affiliated methodologies, alarm bells might go off if you’re the kind of person who’s faced discrimination or harassment at work. Many people testify to the utility of “pair programming,” for example, but the practice—in which two developers code together, each taking turns looking over the other’s shoulder—assumes that the two coders are comfortable with each other. Similarly, the warts-and-all breakdown of Agile “retrospectives” seems healthy, but I’ve watched them descend into a structureless series of accusations; everything depends on who’s leading the team. And Coraline Ada Ehmke, former community safety manager at GitHub, has described how fellow developers used the code review—ideally a low-stakes way for developers to check each other’s work—as an instrument of harassment. We’ve long known that eliminating bureaucracy, hierarchy, and documentation feels great, until you’re the person who needs rules for protection.

Could Agile even have played a role in some of the more infamous failures of the tech industry? The thought occurred to me as I watched Frances Haugen, the former Facebook manager turned whistleblower, testifying before Congress in October 2021. If a company sets a goal of boosting user engagement, Agile is designed to get developers working single-mindedly toward that goal—not arguing with managers about whether, for example, it’s a good idea to show people content that inflames their prejudices. Such ethical arguments are incompatible with Agile’s avowed dedication to keeping developers working feverishly on the project, whatever it might be.

This issue becomes especially pressing when one considers that contemporary software is likely to involve things like machine learning, large datasets, or artificial intelligence—technologies that have shown themselves to be potentially destructive, particularly for minoritized people. The digital theorist Ian Bogost argues that this move-fast-and-break-things approach is precisely why software developers should stop calling themselves “engineers”: engineering, he points out, is a set of disciplines with codes of ethics and recognized commitments to civil society. Agile promises no such loyalty, except to the product under construction.

Agile is good at compartmentalizing features, neatly packaging them into sprints and deliverables. Really, that’s a tendency of software engineering at large—modularity, or “information hiding,” is a critical way for humans to manage systems that are too complex for any one person to grasp. But by turning features into “user stories” on a whiteboard, Agile has the potential to create what Yvonne Lam calls a “chain of deniability”: an assembly line in which no one, at any point, takes full responsibility for what the team has created. 
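
“Information hiding” just means exposing a narrow interface and keeping the messy internals private, so that nobody outside a module needs to understand them. A minimal sketch, with invented names, of what that looks like in practice:

```python
class CitationExporter:
    """Callers see only export(); the formatting details stay hidden behind it."""

    def export(self, records: list[dict]) -> str:
        return "\n\n".join(self._format_record(r) for r in records)

    # Internal detail: callers never touch this, and never have to answer for it.
    def _format_record(self, record: dict) -> str:
        key = record.get("author", "anon").split()[-1].lower() + str(record.get("year", ""))
        fields = ",\n".join(f"  {k} = {{{v}}}" for k, v in record.items())
        return f"@article{{{key},\n{fields}\n}}"

exporter = CitationExporter()
print(exporter.export([{"author": "Miriam Posner", "year": 2022,
                        "title": "Agile and the Long Crisis of Software"}]))
```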

The Agile Manifesto paints an alluring picture of workplace democracy. The problem is, it’s almost always implemented in workplaces devoted to the bottom line, not to workers’ well-being. Sometimes those priorities align; the manifesto makes a strong case that businesses’ products can be strengthened by worker autonomy. But they’re just as likely to conflict, as when a project manager is caught between a promise to a client and the developers’ own priorities. 

“There’s a desire to use process as a way to manage ambiguity you can’t control,” said María Matienzo, a software engineer for an academic institution. “Especially in places where you’re seen as being somewhat powerless, whether that’s to the whims of upper management or administration. So you may not be able to influence the strategic direction of a project at a high level, but Agile allows that certain conception of developer free will.” The product manager I spoke to put it more bluntly: “Agile tricks people into thinking they have ownership over their work, but from a labor perspective, they literally do not have ownership, unless they have, like, significant stock options or whatever.” 

Software development has never fit neatly into the timelines and metrics to which companies aspire. The sheer complexity of a modern application makes its development sometimes feel as much alchemical as logical. Computers may have emerged as military equipment, but completely subordinating programming work to the priorities of capital has been surprisingly difficult. When software engineering failed to discipline the unwieldiness of development, businesses turned to Agile, which married the autonomy that developers demanded with a single-minded focus on an organization’s goals. That autonomy is limited, however, as developers are increasingly pointing out. When applied in a corporate context, the methods and values that Agile esteems are invariably oriented to the imperatives of the corporation. No matter how flexible the workplace or how casual the meetings, the bottom line has to be the organization’s profits.

There’s another angle on Agile, though. Some people I talked to pointed out that Agile has the potential to foster solidarity among workers. If teams truly self-organize, share concerns, and speak openly, perhaps Agile could actually lend itself to worker organization. Maybe management, through Agile, is producing its own gravediggers. Maybe the next crisis of software development will come from the workers themselves.

A previous version of this article incorrectly identified Al Tenhundfeld as a co-author of the Agile Manifesto. The present version has been corrected.

Miriam Posner is an assistant professor at the University of California, Los Angeles, and is writing a book on supply chains and data for Yale University Press.
