In the beginning, my plan was perfect. I would meditate for five minutes in the morning. Each evening before bed, I would do the same. There was only one catch: instead of relying on my own feelings, a biofeedback device would study my brainwaves to tell me whether I was actually relaxed, focused, anxious, or asleep.
By placing just a few electrodes on my scalp to measure its electrical activity, I could use an electroencephalography (EEG) headset to monitor my mood. Whereas “quantified self” devices like the Fitbit or Apple Watch save your data for later, these headsets loop your brainwaves back to you in real time so that you can better control and regulate them.
Basic EEG technology has been around since the early twentieth century, but only recently has it become available in portable, affordable, and Bluetooth-ready packages. In the last five years, several start-ups—with hopeful names like Thync, Melon, Emotiv, and Muse—have tried to bring EEG devices out of clinical and countercultural circles and into mainstream consumer markets.
I first learned about the headsets after watching the Muse’s creator give a TEDx talk called “Know thyself, with a brain scanner.” “Our feelings about how we’re feeling are notoriously unreliable,” she told the audience, as blue waves and fuzzy green patches of light flickered on a screen above her. These were her own brainwaves—offered, it seemed, as an objective correlative to the mind’s trickling subjectivity.
Their actual meaning was indecipherable, at least to me. But they supported a sales pitch that was undeniably attractive: in the comfort of our own homes, without psychotropic meds, psychoanalysis, or an invasive operation, we could bring to light what had previously been unconscious. That was, in any case, the dream.
When I first placed the Muse on my head one Sunday evening in late October, I felt as though I were greeting myself in the future. A thin black band, lightweight and plastic, stretched across my forehead. Its wing-like flanks fit snugly behind my ears.
Clouds floated by on the launch screen of its accompanying iPhone app. The Muse wasn’t just a meditation device, the app explained, but a meditation assistant. For some minutes, my initial signal was poor.
To encourage me not to give up before I’d even started, the assistant kept talking to me. Her calming female voice told me how to delicately re-adjust the electrodes to get the signal working. The ones behind my ears were having trouble aligning with the shape of my head.
Eventually, the Muse accurately “sensed” my brain. It would now be able to interpret my brainwaves and translate their frequencies into audio cues, which I would hear throughout my meditation session.
Tap the button, she said encouragingly. “I’m ready,” I clicked, and my first five-minute meditation session began.
Inward bound, I sat at my desk with the lamp on and closed my eyes. Waves crashed loudly on the shore, which indicated that I was thinking too much. But from time to time, I could hear a few soft splashes of water and, farther in the distance, the soft chirping of birds.
After what seemed like forever, it was over. Like all self-tracking practices (and rather unlike a typical meditation session), it seemed that the post-game was as important as the practice itself. Knowing this, I made a good-faith effort to pore over my “results.”
They were, at first, second, and third glance, impenetrable. I had earned 602 calm points. In all my experiences counting—the miles of my runs, words on my documents, even the occasional calorie—I had never learned what a “calm point” was. The units used by the Muse seemed not only culturally insignificant, but void of any actual meaning. In an effort to build an internal chain of signification, the app had mysteriously multiplied these calm points by a factor of three, whereas my “neutral points” had only been multiplied by a factor of one. Birds, I was told, had “landed” sixteen times.
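The app never explains its arithmetic, but the multipliers it reports suggest a simple weighted tally. The sketch below is a guess at that logic, not the Muse's actual algorithm: the state labels and per-second weights are assumptions inferred from the numbers the app displayed.

```python
# Hypothetical reconstruction of the Muse's "calm points" scoring,
# inferred from the app's reported multipliers: calm seconds appear
# to count triple, neutral seconds count once, active seconds not at
# all. These labels and weights are assumptions, not a documented spec.

CALM, NEUTRAL, ACTIVE = "calm", "neutral", "active"
POINTS_PER_SECOND = {CALM: 3, NEUTRAL: 1, ACTIVE: 0}

def score_session(states_per_second):
    """Sum weighted points over a list of per-second mind states."""
    return sum(POINTS_PER_SECOND[s] for s in states_per_second)

# A five-minute (300-second) session, mostly spent in "neutral":
session = [CALM] * 100 + [NEUTRAL] * 150 + [ACTIVE] * 50
print(score_session(session))  # prints 450
```

Under this reading, a "calm point" is just a second of sufficiently quiet brainwaves, inflated threefold—which would explain why the unit feels meaningful to the app and to no one else.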
Equally inscrutable were the two awards I had earned. Whatever they were, I thought, they were hardly deserved, considering I had so far spent a total of seven minutes scanning my brain. One was for tranquility—“Being more than 50% calm in a single session must feel good,” the award told me. The other was a “Birds of Eden Award.” I was told that I had earned this because at least two birds chirped per minute, “which must have felt a bit like being at Birds of Eden in South Africa—the largest aviary in the world.” Not really, I thought. But then again, I had never been to South Africa.
It felt great to meditate for the first time only to be told that I was already off to a good start. But I knew deep down—or, at least, I thought I knew—that I had not felt calm during any part of the session. I was in the difficult position, then, of either accepting that I did not know myself, in spite of myself, or insisting on my own discomfort in order to prove the machine wrong. It wasn’t quite that the brain tracker wanted me to know myself better so much as it wanted me to know myself the way that it knew me.
The Brain Doctor
The second morning of my experiment, I took the subway uptown to see Dr. Kamran Fallahpour, an Iranian-American psychologist in his mid-fifties and the founder of the Brain Resource Center. The Center provides patients with maps and other measures of their cognitive activity so that they can, ideally, learn to alter it.
Some of Fallahpour’s patients suffer from severe brain trauma, autism, PTSD, or cognitive decline. But many have, for lack of a better word, normal-seeming brains. Athletes, opera singers, attorneys, actors, students—some of them as young as five years old—come to Fallahpour to improve their concentration, reduce stress, and “achieve peak performance” in their respective fields.
Dr. Fallahpour’s offices and labs lie on the ground floor of a heavy stone apartment building on the Upper West Side. When I arrived, he was in the middle of editing a slideshow on brain plasticity for a talk he was due to give at an Alzheimer’s conference. On a second, adjacent monitor, sherbet peaks and colored waves—presumably from some brain—flowed on the screen.
Fallahpour wears bold glasses with thick-topped frames, in the style of extra eyebrows. When we met, he was dressed in a dark blue suit to which was affixed a red brooch shaped like a coral reef or a neural net—I kept meaning to ask which. An enthusiastic speaker with a warm bedside manner, he made it hard to shake the impression that there was nothing he would rather be doing than answering my questions about the growing use of personal EEG headsets.
His staff had not yet arrived that morning, he apologized, so he would be fielding any calls himself. As if on cue, the phone rang.
“No. Unfortunately, we do not take insurance,” he told the caller.
“That happens a lot,” he explained, after hanging up. “Now, where were we?”
Before turning to brain stimulation technologies, Fallahpour worked for many years as a psychotherapist, treating patients with traditional talk therapy. His supervisors thought he was doing a good job, and he saw many of his patients improve. But the results were slow in coming. He often got the feeling that he was only “scratching the surface” of their problems. Medication worked more quickly, but it was imprecise. Pills masked the symptoms of those suffering from a brain injury, but they did little to improve the brain’s long-term health.
Like many of his fellow researchers, Fallahpour was interested in how to improve the brain through conditioning, electrical and magnetic stimulation, and visual feedback. He began to work with an international group of neuroscientists, clinicians, and researchers developing a database of the typical brain. They interviewed thousands of normal patients—“normal” was determined by tests showing the absence of known psychological disorders—and measured their resting and active brainwaves, among other physiological responses, to establish a gigantic repository of how the normative brain functioned.
Neuroscience has always had this double aim: to know the brain and to be able to change it. Its method for doing so—“screen and intervene”—is part of the larger trend toward personalized medicine initiatives. Advance testing, such as genomic screening, can identify patients at risk for diabetes, cancer, and other diseases. With the rise of more precise diagnostic and visualization technologies, individuals can not only be treated for current symptoms, but encouraged to prevent future illnesses.
Under the twenty-first century paradigm of personalized medicine, everyone becomes a “potential patient.” This is why the Brain Resource Center sees just as many “normal” patients as symptomatic ones. And it’s why commercial EEG headsets are being sold to both epileptics trying to monitor their symptoms and office workers hoping to work better, faster.
Brain training is seductive because its techniques reinforce an existing neoliberal approach: health becomes a product of personal responsibility; economic and environmental causes of illness are ignored. Genetics may hardwire us in certain ways, the logic of neuroliberalism goes, but hard work can make us healthy. One problem is that the preventative care of the few who can afford it gets underwritten by the data of the many.
Consider Fallahpour’s boot camp for elementary school kids. For a few hours each day during school vacations, the small rooms of his low-ceilinged offices are swarmed with well-behaved wealthy children playing games to “improve brain health and unlock better function,” as well as to acquire a “competitive advantage.” “We tune their brain to become faster and more efficient,” he explained. “The analogy is they can have Windows 3.1 or upgrade it to 10.”
Before I had time to contemplate the frightening implications of this vision, the phone began, again, to ring. Fallahpour exchanged pleasantries for a few minutes, asking about the caller’s weekend. No, he told them, he did not take insurance.
The more I thought about the kind of cognitive enhancement Fallahpour promised, the more trouble I had remembering the last time I felt clear-eyed and focused. Had I ever been? Could I ever be?
For a few days I had sensed a dull blankness behind my eyes. I wondered if it was a head cold, or sleep deprivation, or a newfound gluten allergy. I started sleeping more and my cold improved, but the brain fog continued. Reading a book felt like standing on a subway grate, with holes and winds weaving through the pages. I misspelled words, like “here” (hear) and “flourish” (fluorish). I waved to a man on the train who looked like someone I had dated years ago.
On a good day, I convinced myself, there was no way I was operating above sixty percent, maybe sixty-five. Sixty percent of what, I wasn’t sure. But I knew I could do better. I felt a twinge of envy toward those who had achieved the mythical “peak performance,” and I redoubled my commitment to self-improvement.
The headset remained subtly encouraging. “Whatever you’re experiencing right now is perfect,” my meditation assistant assured me—just moments before my fourth session’s calibration had paused, again, because the signal quality was too low. I re-adjusted my headset, practicing patience. “Training your mind is kind of like training a puppy,” the motivational preamble continued. “Getting angry at the puppy isn’t going to get you anywhere.”
I wasn’t angry at anyone’s dog, but I couldn’t stop comparing each session’s score to the last’s. Was I hearing fewer birds? Was it easier to focus with or without caffeine? As suspicious as I was about the accuracy of the metrics, I still wanted to beat my previous score. The more elusive peak performance seemed, the more I came to realize that it was structured as an essentially nostalgic feeling. It relied on the fear that you used to be younger, sharper, more clear-eyed—and the hope that you could somehow, with practice, be this way again.
When I mentioned my experiments to a friend, he recommended that I watch a performance by the conceptual programmer Sam Lavigne. In “Online Shopping Center,” Lavigne trains a DIY EEG device to identify whether his brain is thinking about shopping online or his own mortality. Being either “shopping-like” or “death-like” was not so different, it seemed, from the Muse letting me know whether I was calm or active, focused or restless. In both cases, the data was mostly junk, the binaries reductive, the exercise absurd.
When I went to see Dr. Fallahpour for a follow-up visit, I was running thirty minutes late. I had forgotten to transfer trains at 59th Street because I had gotten distracted trying to make sense of an advertisement chastising me for my distraction: “Daydreamed through your stop, again?” it asked.
I had, though I didn’t know it yet. It wasn’t until 116th Street that I realized I had missed my stop—in fact, I had missed several. Still, I felt a low current of satisfaction when I emerged into the sunlight at 125th Street, far from where I needed to be. Such inattention made me a more viable patient for brain training than I had previously realized, in need of greater focus for even the most elementary tasks. It had also made me very late.
Fallahpour and I decided I would try a calm protocol first, followed by one that rewarded my brain for focus. While he gelled the electrodes and placed them on my scalp, I asked him about some of the skepticism surrounding EEG headsets—namely, the fact that many people, myself included, found it difficult to tell what exactly was being measured.
“EEG is a crude tool and it isn’t the best we have, but it’s the most convenient in many ways,” he explained. “It’s prone to a lot of ‘garbage in and out.’” But when done correctly, he added, it can be “useful and quite powerful.” Separating signal from noise required the trained judgment of an expert like Fallahpour. In this sense, the EEG’s biofeedback wasn’t quite as seamless as going to the gym with your Fitbit. You still needed someone to help you help yourself.
To start, we took a baseline measurement of my brain. I had very quick recovery, or response, or something, in terms of what I think were my alpha waves. This meant that my ability to calm myself was sophisticated. I felt surprised at first, and dumbly flattered, much like I had during my first session with the Muse.
For the calm protocol, classical music cut in and out of my headphones depending on whether certain frequencies in my brain were active. This was visualized by red and blue columns flanking both sides of the screen. I was supposed to keep the colors under certain thresholds in their respective containers. At one point, I opened my eyes. The blue column, which had been filling up, drained suddenly. This was supposed to be a sign of resilience.
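The protocol's underlying loop is simple to describe, even if the brain it measures is not: estimate the power in some frequency band of the EEG signal, compare it to a threshold, and gate the music accordingly. The sketch below illustrates that logic only; the band edges, sampling rate, and threshold are illustrative assumptions, not the clinic's actual parameters.

```python
# A minimal sketch of threshold-based neurofeedback, assuming a
# generic band-power measure. All parameters here (256 Hz sampling,
# a 4-8 Hz band, a threshold of 50) are invented for illustration.
import numpy as np

def band_power(samples, fs, low, high):
    """Power in the [low, high] Hz band of a 1-D EEG window, via an FFT."""
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].sum() / len(samples)

def feedback(samples, fs=256, low=4.0, high=8.0, threshold=50.0):
    """Play music while band power stays under the threshold."""
    return "music on" if band_power(samples, fs, low, high) < threshold else "music off"

rng = np.random.default_rng(0)
window = rng.standard_normal(256)  # one second of fake EEG-like noise
print(feedback(window))
```

The columns on the clinic's screen would correspond to running band-power estimates like these, each filling toward its threshold; closing the loop in real time is what turns a measurement into a training signal.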
When we tested my concentration, the settings were adjusted to exercise different kinds of brainwaves. I was tasked with keeping a blue column at a certain level while not letting other red columns reach a certain height. It was more difficult than meditating with my Muse—but also, because it was a game, more enjoyable. After five minutes, I convinced myself that I felt my mind becoming more elastic, more responsive. I had started to figure out how to modify my patterns in order to play the music, even if I did not quite know what those patterns meant.
By the end of my week with the Muse, my results were as inscrutable as they had been at the start. Thousands of birds had chirped in my ear. An infinity of waves had crashed upon an endless shore. I had earned quite a few more badges, some by the sheer virtue of persisting: adjusting the signal, continuing the exercise day after day, not quitting in the face of a great and useless mystery.
I had learned very little about myself. This in itself wasn’t surprising. But if the EEG headsets were supposed to teach anything, their lesson was somewhat contradictory: I should know myself, but I should also be prepared to be wrong about what I knew. In this respect, the headset hewed closer to the famous Delphic precept, “Know thyself,” than its designers had intended. The dictum was initially issued as a double-edged warning about the limits of knowing and the incompleteness of interpretation—a truth that the Muse, ironically enough, confirmed.
The more I parsed my personal graphs and charts, the more I arrived at the same conclusions as anyone who has ever taken more than a passing glance at the brain. Our tools aren’t good enough. At least not yet. And the inadequate and embarrassing analogies we use to describe our brains do little to help us see ourselves clearly. In the course of the week, mine had been variously compared to a loom, a digital machine, an obsolete Windows operating system, and a puppy. What had I been expecting? That a toy would illuminate the fog?
Commercial EEG devices promise that we can know our brain frequencies, even while most of those frequencies are “garbage.” The machines might be too. Consumer EEG devices like the Muse have been shown to have trouble distinguishing the signal of a relaxed brainwave from a stray thought, a skin pulse, or a furrowed brow. And several studies have cast doubt on the efficacy of related “brain training” games, which don’t augment intelligence so much as make people better at playing the game. It’s possible that all the Muse taught me was how to score calm points and charm songbirds, not how to unlock inner bliss.
When the next Sunday came around, I was just as anxious about relaxation and relaxed about anxiety as I had been the week before. I still didn’t know whether I wanted to go shopping. Other times I thought I was thinking about death, though I couldn’t be sure. Who knows, maybe I would never know. I might even die that way—knowing very little, and getting that part wrong too.