Rationalization is a form of compression that lays a grid over our world and attempts to remake it to fit its shape. The goal of rational thought is to break down a complex and infinite reality into small pieces and reconstitute it in a logical system. In computing, rationalization is the process by which phenomena like actions, identity, and emotions are split, stripped, reduced, standardized, and otherwise converted into computable data and mapped within machines.
It is a process of rationalization that allows a company like DoorDash to use computational algorithms to determine “optimal” delivery speeds, and then to discipline its workers for not matching those predetermined outcomes. To believe in the efficacy and reliability of such a system, we must first accept that DoorDash is able to measure and model an intricately layered series of complex relationships—including traffic patterns, consumer desires, worker behaviors, prices, and more—and is then able to draw actionable predictions from all that information. To accept this requires a faith that each part of the complex system is knowable, quantifiable, and fixed. This is the ideology of rationality at work.
But the world is not rational! The world is, in fact, irrational! It is chaotic, expansive, interrelational, and incalculable. Machines, in particular, are far too rigid in their logic and far too limited in their capacity to meaningfully capture the world. Yet we continue to grant them ever greater power. Across the public and private sectors, computer-aided systems of management premised on transactional relationships and the supposed ability to optimize outcomes are being used to guide our interactions. As DoorDash workers will be the first to tell you, these systems shape behavior, remaking the activities and phenomena they are meant to model while radiating innumerable harms—from incentivizing unsafe driving speeds to suppressing wages—as a result.
To resist this reconfiguration and mitigate these harms, we must reject rationality and embrace a fundamentally irrational worldview. If rationality says the world is measurable, knowable, optimizable, and automatable, an embrace of the irrational is a rejection of that ideology. Embracing irrationality allows for multiple interpretations, contradiction, inexplicability. It empowers us to reclaim the act of meaning-making as a collaborative, social exercise as opposed to one that can be automated and forgotten. Ultimately, a program of irrationality requires that we harness the power of our machines through a form of democratic oversight that acknowledges the false promise of rational management and insists that, in the absence of certainty, we must work together to organize society. Irrationality celebrates doubt, because only if the future is unknown will it be ours to build.
Rational Data from an Irrational World
Rationalization—the process of abstraction in the service of computational reasoning—has long been a feature of the sciences, math, and philosophy. Of course, tools of theoretical inquiry are often put to practical use by those in power. Beginning in the late nineteenth and early twentieth centuries, labor processes were mapped out rationally to the great benefit of early industrialists. In the postwar period, cyberneticists and game theorists, many employed by the US military, theorized that they could go beyond simple numerical equations or discrete production processes and rationally describe much more complex phenomena using newly developed computing machines.
Around the end of World War II, the first general-purpose electronic computers were introduced. The data entered into these machines was numerical; it was subjected to mechanically coded mathematical formulae, and the output was a solved equation. An amazing innovation, and one that—provided the inputs were entered accurately and the machine operations properly encoded—returned an accurate and reliable result. An increased focus on feedback soon enabled a process known as “machine learning” through neural networks—a technique that has been revived in the last decade to propel a new AI boom, driven by breakthroughs in computer vision and natural language processing.
Machine learning sorts through vast amounts of chaotic data and, using adaptive algorithms, closes in on specific arrangements of information while excluding other possible interpretations. But, in order for any computation to occur, a process of rationalization must first create machine-readable datasets. Real-world phenomena must be “datafied” by sorting them into categories and assigning them fixed values.
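To make that reduction concrete, here is a minimal sketch in Python of what datafication might look like. The schema, field names, and categories are hypothetical, invented for illustration rather than drawn from any real system:

```python
# A minimal, hypothetical sketch of "datafication": a lived event is
# reduced to a fixed schema so that it becomes machine-readable.
# Field names and categories are illustrative, not from any real system.

from dataclasses import dataclass

# The schema fixes, in advance, which aspects of the trip "count".
WEATHER_CATEGORIES = {"clear": 0, "rain": 1, "snow": 2}  # everything else is lost

@dataclass
class DeliveryRecord:
    distance_km: float   # continuous value, rounded at capture time
    duration_min: float
    weather: int         # one of the fixed categories above
    on_time: bool        # a binary judgment standing in for a whole experience

def datafy(distance_km, duration_min, weather, deadline_min):
    """Convert a messy real-world trip into a fixed-schema record.

    Anything without a column -- the near-miss at an intersection,
    the conversation at the door -- simply does not exist downstream.
    """
    return DeliveryRecord(
        distance_km=round(distance_km, 1),
        duration_min=round(duration_min, 1),
        weather=WEATHER_CATEGORIES.get(weather, 0),  # unknown weather becomes "clear"
        on_time=duration_min <= deadline_min,
    )

record = datafy(4.27, 23.8, "hail", deadline_min=20)
print(record)  # the hailstorm vanishes: the schema had no room for it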
Take, for example, image recognition software. Whether the goal is identifying handwriting or enemy combatants, a training set of digital images—themselves encoded arrangements of pixels—is typically created. This initial process of digital image capture is a form of reduction and compression; think of the difference between the sunset you experience and what that same sunset looks like when it is posted to Instagram. This mathematical translation is necessary so the machines can “see,” or, more accurately, “read,” the images.
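As a rough illustration of that compression, the sketch below (assuming only NumPy, with a synthetic brightness gradient standing in for the sunset) samples and quantizes a “scene” and then measures what the encoding threw away:

```python
# A rough illustration of image capture as compression, using NumPy only.
# The "sunset" here is a synthetic gradient; a real camera pipeline is far
# more elaborate, but the reduction is the same in kind.

import numpy as np

# A continuous scene, stood in for by a smooth 256x256 brightness gradient.
x = np.linspace(0.0, 1.0, 256)
scene = np.outer(x, x)  # values vary continuously between 0.0 and 1.0

# "Capture" the scene: sample it on a coarser grid and quantize each
# sample to one of 256 integer levels. Both steps discard information.
captured = scene[::4, ::4]                       # 64x64 spatial samples
captured = np.round(captured * 255).astype(np.uint8)

# Reconstruct and compare: the difference is what the encoding lost.
reconstructed = captured.astype(float) / 255.0
reconstructed = np.repeat(np.repeat(reconstructed, 4, axis=0), 4, axis=1)
loss = np.abs(scene - reconstructed).mean()
print(f"mean brightness error after capture: {loss:.4f}")
```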
But for the purposes of this kind of machine learning, further rationalization must occur to make the data usable. The digital image is identified and labeled by human operators. Maybe it is a set of handwriting examples of the numeral two, or drone images from a battlefield. Either way, someone, often an underpaid crowdworker on a platform like Amazon Mechanical Turk, decides what is meaningful within the images—what the images represent—so that the algorithms have a target to aim for.
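A hypothetical sketch of this labeling step might look like the following. The label set and file names are invented, but the constraint is real: the labeler can only choose among the categories the schema provides.

```python
# A hypothetical sketch of the labeling step. The label set is fixed in
# advance, so the labeler's only power is to choose among given options;
# whatever the image "really" shows must be forced into one of them.

LABELS = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]  # no "illegible", no "not a digit"

def label_image(image_id: str, judgment: str) -> dict:
    """Record a crowdworker's judgment about one image.

    `judgment` must be one of LABELS -- the schema has no room for doubt.
    """
    if judgment not in LABELS:
        raise ValueError(f"{judgment!r} is not an available category")
    return {"image": image_id, "label": judgment}

training_set = [
    label_image("scan_0001.png", "2"),
    label_image("scan_0002.png", "2"),
    label_image("scan_0003.png", "7"),  # the labeler's best guess at a smudge
]
```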
The set of labeled images is fed into software tasked with finding patterns within it. First, borders are identified in the encoded arrangement of pixels. Then larger shapes are identified. An operator observes the results and adjusts parameters to guide the system towards an optimal output. Is that a 2? Is that an enemy combatant or a civilian?
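The border-finding stage can be sketched with a hand-written filter, a simplified stand-in for the filters such systems learn automatically. The image and kernel here are illustrative toys:

```python
# A minimal sketch of the first stage described above: finding borders
# (edges) in an encoded arrangement of pixels by sliding a filter over it.
# Real systems learn their filters; this one is written by hand.

import numpy as np

def slide_filter(image, kernel):
    """Naive sliding-window filtering (what deep-learning libraries
    call convolution), with no padding, for illustration only."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A tiny image: a bright square on a dark background.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0

# A Sobel-style kernel that responds to vertical borders.
vertical_edges = np.array([[-1, 0, 1],
                           [-2, 0, 2],
                           [-1, 0, 1]], dtype=float)

response = slide_filter(image, vertical_edges)
print(np.abs(response) > 0)  # True where the filter found a border
```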
Once this output has been deemed acceptable, the system is fed new, unlabeled images and asked to identify them. This new data, along with feedback on the functional accuracy of its initial output—“yes, that is a 2”—is used to further fine-tune the algorithm, the optimization of which is largely automated. This basic process applies to most machine learning systems: rationalized data is fed in, and through association, feedback, and refinement, the machine “learns” to provide better results.
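That feedback loop can be reduced to a few lines. The sketch below, a toy logistic classifier trained on random vectors standing in for pixel data (all names and values invented), shows how “yes, that is a 2” becomes a gradient that automatically nudges the parameters:

```python
# A minimal sketch of the feedback loop: a toy logistic classifier is
# nudged toward agreement with its labels, entirely automatically.
# The "images" are random vectors standing in for pixel data.

import numpy as np

rng = np.random.default_rng(0)

# Toy training set: 100 flattened "images", labeled 1 ("is a 2") or 0.
X = rng.normal(size=(100, 64))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # an arbitrary hidden rule

w = np.zeros(64)   # the model's adjustable parameters
lr = 0.1           # how far each round of feedback moves them

for step in range(500):
    # Forward pass: the model's current guess for every image.
    p = 1.0 / (1.0 + np.exp(-X @ w))
    # Feedback: "yes, that is a 2" / "no, it is not", as a gradient.
    gradient = X.T @ (p - y) / len(y)
    # Refinement: move the parameters toward fewer disagreements.
    w -= lr * gradient

p = 1.0 / (1.0 + np.exp(-X @ w))
accuracy = ((p > 0.5) == y.astype(bool)).mean()
print(f"agreement with labels after training: {accuracy:.0%}")
```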
But rational data is an unstable foundation from which to learn. Those initial stages of the machine-learning process, when phenomena are translated into code, when the irrational is rationalized and the real world is datafied, demand close scrutiny. And the more complex the phenomena we ask the software to interpret—as these systems are tasked with going from recognizing that an image contains a face, to recognizing a specific face, to recognizing the emotion on that face, to determining what actions might result from someone in a certain emotional state—the more skeptical we should be of any supposed insights that are generated.
The process of translating the world into code is reductive. There is a similar reduction in the labeling of data into certain categories—there will never be sufficient categories available to represent all possibilities. In an infinitely complex and constantly shifting world, any one-to-one representation is impossible. And anything that is lost in that original training data will be invisible to machine learning systems that are built upon it.
Far from being a neutral process, the creation of training data is fundamentally social and subjective. It requires human actors to determine the available categories and label the data accordingly. The attendant assumptions, biases, and distinctions made by these human actors are necessary to create “rational” data, and once encoded they define the possibilities and limitations of what machine learning systems can “learn.”
To be clear, all forms of knowledge-making are social and subjective, not just machine learning. The difference is that other ways of making sense of the world acknowledge their own fallibility. For instance, in academia, disciplines have developed various techniques for vetting new information, such as peer review. The issues are not always resolved, but there are processes that help create meaning collectively.
The Irrational Program
The making of meaning cannot be automated because an irrational world cannot be coded rationally. Machine learning systems, with their immense computational power, can surface novel arrangements of information and offer new forms of perception. But any claims to objectivity made on behalf of these systems should be disregarded outright.
Moreover, these systems are engaged in actively shaping society to fit the models they create. When the options for human activity are reduced to a set of “optimal” choices made available through a machine-generated recommendation, other courses of action—and thus other possible future outcomes—are eliminated. We cannot allow this reduction to put limitations on the world in which we live. Instead, if these systems are to be salvaged, we have a responsibility to relentlessly interrogate who and what constitutes “data,” how it is made, what patterns we seek within it, and what we do with the insights that are surfaced. These questions must be put to the widest public forums available, and the decisions about how to respond must be made democratically. Then those questions must be asked again and again.
The process of rationalization and the technology it enables are social in origin and have a social impact once deployed. Ultimately, we must embrace their collective nature and respond collectively. This means organizing the workers at the point of rationalization, and organizing the subjects of datafication to resist until their demands for input into the development of these systems are met. We make technological systems as they make us, and we can remake or unmake them. When we recognize our role in the co-creation of technological systems, and take collective control over that process, who knows what innovations may result?