Technologists try to make digital experiences frictionless. Mirroring the mythical efficiency of a perpetual motion machine, the goal is to advance users from screen to screen with ease. As a result, standard design practices call for masking complex technical systems beneath simple interactions.
Yet this becomes difficult to do within the context of online security. Security requires friction, such as “strong” but difficult-to-remember passwords. Complexity becomes a feature, not a bug. And this complexity tends to increase over time, thanks to the cat-and-mouse nature of security: bad actors develop new tactics that result in high-profile hacks, which in turn demand ever more elaborate countermeasures.
These countermeasures create friction for everyone—but especially for people with disabilities. Indeed, online security can be a nightmare for disabled people. Take the common security practice of converting letters to asterisks (*) in password fields. The screen readers used by visually impaired people try to mirror this practice by replacing every occluded password letter with the word “star” or calling the entire field “concealed text.” Imagine for a moment typing in your password and hearing either “star” repeatedly or complete silence. This makes it difficult to review what letters have been typed, leading to mistakes and frustration.
Another example is facial recognition for devices, like Apple Face ID and Windows Hello. In addition to performing inconsistently for underrepresented groups, facial recognition creates new challenges for blind users. “Ninety-five percent of the time the camera can’t see me,” says Lucy Greco, the web accessibility evangelist at UC Berkeley. “I know where to touch the fingerprint sensor because it’s the same spot every time. But I don’t know why it’s not seeing my face.”
These kinds of difficulties cause many disabled users to opt out of security measures entirely. “I have noticed that many people who are emerging technology users—who are blind, visually impaired, or have other disabilities—they’ll get a lot of assistance to set up their phone,” says Chancey Fleet, assistive technology coordinator at the New York Public Library. “Often they decide, or have the decision made for them, to skip the passcode,” she continues. “Because when you’re new and everything’s hard, entering the passcode is hard. And that lack of passcode will linger for months, or even years, and become habitual.”
The result is a crisis. We are creating a digital world where the most desirable path for disabled people is to leave themselves unprotected. This means that as the level of security rises among the nondisabled, disabled people are becoming more vulnerable to digital threats, deepening disparities. And these disparities affect an immense share of the world’s population—over a billion people have some form of disability, according to the World Health Organization. Furthermore, this number is constantly growing: disabilities are correlated with age, and lifespans are increasing. We will all be disabled in one form or another during the course of our lives. It’s just a matter of time. So when those of us who are (currently) nondisabled contemplate accessibility, we aren’t just serving our fellow humans with disabilities in the present—we’re also serving our disabled selves in the future.
Fortunately, there’s nothing inevitable about a digital world that fails to protect disabled people. Academic researchers are proving that it’s possible to rethink authentication for disability, synthesizing perspectives from the realms of both security and accessibility. Their efforts offer a glimpse of what our technology could look like if it aspired to keep everyone safe.
To help disabled people remain secure online, academics are exploring solutions involving gestures, haptics, and even “mind reading.” Consider the “passthought,” proposed in 2005 by computer scientists at Carleton University. Passthoughts involve thinking a silent and inherently unobservable thought. In lab studies, researchers provide instructions to subjects such as, “Imagine that you are singing a song or reciting a passage for ten seconds without making any noise.” Detected by sensors, the brainwave signals emitted from this thought can log users into services or devices.
How does this work? Prior to authentication, users are asked to think the passthought repeatedly. Their digitized brainwave signals—the “training data”—are then processed to extract a set of features that best encode a user profile and remain consistent over time. Finally, this set of features is used to build a “lock” that only opens for the specified user performing the specified passthought.
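To make the enrollment-and-matching loop concrete, here is a minimal toy sketch of the idea: features are extracted from repeated recordings (here, power in a few EEG-style frequency bands), averaged into a template, and later attempts are accepted when their features are close enough to it. The function names and the cosine-similarity matcher are illustrative assumptions, not the method used in the actual passthought studies.

```python
import numpy as np

def band_powers(signal, fs=256):
    """Toy feature extractor: average power in a few EEG-like frequency bands."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    bands = [(4, 8), (8, 13), (13, 30), (30, 45)]  # theta, alpha, beta, gamma (Hz)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in bands])

def enroll(training_signals):
    """Build a template by averaging features from repeated passthought attempts."""
    features = np.array([band_powers(s) for s in training_signals])
    return features.mean(axis=0)

def verify(template, attempt, threshold=0.9):
    """Open the 'lock' only if the attempt's features resemble the template."""
    f = band_powers(attempt)
    cosine = np.dot(template, f) / (np.linalg.norm(template) * np.linalg.norm(f))
    return cosine >= threshold
```

Real systems use far richer features and classifiers, but the shape is the same: repeated training thoughts produce a stable profile, and verification is a similarity test against it.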
Passthoughts hold promise because they provide two factors of authentication in a single step. Knowledge, or the secret thought, is one factor; the user’s brain is the other. Given that each of our brains is uniquely differentiated by genetics and environmental conditions, our brain waves have a kind of biometric signature. But unlike fingerprints or faces, passthoughts can be changed if they are compromised. And even if attackers know the content of the passthought, they can’t open the lock without the user’s brain. This makes passthoughts relatively robust against impersonation and better protected from social engineering attacks, where the user is tricked into revealing their password.
Despite these advantages, passthoughts still have practical limitations for real-world use. In an exploratory study with blind people, researchers found that authenticating via brain-computer interface was less accurate and took longer than entering a PIN. Moreover, the sensor-embedded headsets that are required for brainwave authentication can be awkward, uncomfortable, and stigmatizing to wear. Researchers have explored discreet, in-ear alternatives, but many still involve bands that wrap around the neck and place electrodes along the scalp, while more robust models resemble a swim cap. These headsets can also cost hundreds of dollars, putting them out of financial reach for many people.
Another creative approach is PassChords, developed in 2012 by Cornell professor Shiri Azenkot and her colleagues. “These researchers recognized that if you’re using a screen reader in public on your smartphone, people might be able to hear you enter your password,” explains Cynthia Bennett, a researcher at the University of Washington. “So they came up with a gesture-based system that was silent.” Entering a PassChord involves tapping several times on a touch surface with one or more fingers, reminiscent of playing a musical sequence. A study with sixteen blind participants found that using PassChords was as accurate as, and significantly faster than, entering an iPhone passcode with a screen reader, taking about a third of the time.
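Since a PassChord reduces to a short sequence of finger counts, it can be stored and checked much like any other secret. The sketch below is a hypothetical illustration of that idea (the names and serialization are my own, not the authors’ implementation): the sequence is salted and hashed, and attempts are compared in constant time.

```python
import hashlib
import hmac

def encode_chords(chords):
    """Serialize a tap sequence, e.g. (2, 1, 3) = a two-finger tap,
    then a one-finger tap, then a three-finger tap."""
    return "-".join(str(n) for n in chords).encode()

def hash_chords(chords, salt):
    """Store only a salted, stretched hash of the sequence, as with any password."""
    return hashlib.pbkdf2_hmac("sha256", encode_chords(chords), salt, 100_000)

def check_chords(stored_hash, attempt, salt):
    """Constant-time comparison to avoid leaking information via timing."""
    return hmac.compare_digest(stored_hash, hash_chords(attempt, salt))
```

As with short PINs, a handful of finger counts carries limited entropy, so a real deployment would also need rate limiting or lockouts to resist guessing.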
While passthoughts and PassChords have the potential to improve the lives of millions of disabled users, it will be hard to make them mainstream. Academic researchers are constrained by limited resources. They can build novel prototypes, but they often don’t have the capacity to bring those prototypes to a mass audience. “PassChords is heavily cited, it’s well known in the community, and in user testing, people say great things about it,” says Yang Wang, a professor at the University of Illinois at Urbana-Champaign. “I don’t think it’s widely deployed though.”
Without an expensive marketing campaign, how can a product find users? And without a business model, how can operating costs be sustained? Maintainability is another challenge: projects like PassChords need to be continually updated as browsers and operating systems change. This is hard to do, especially as the students who write the software for these academic projects graduate. Furthermore, in the absence of a revenue stream, projects can’t hire people to maintain the software on an ongoing basis.
Academia’s distance from industry is thus a double-edged sword: while academics have the freedom to prototype projects that are insulated from market forces, this also limits the scale of their impact. Solutions like passthoughts and PassChords are only available to a select group of users, often for a limited amount of time. They remain in beta indefinitely and are rarely released to the wider public.
Hack the Planet
Unfettered by the resource constraints of academia, tech companies could pick up these projects and deploy them at scale. So why don’t they? Because the industry largely considers people with disabilities to be anomalies—a niche and “non-normal” subgroup of users. By this logic, it seems unprofitable to develop specialized features that appear to benefit only a select few. When businesses do prioritize accessibility, they often do so for two reasons: to avoid discrimination lawsuits and to generate positive publicity. The benefits of either are difficult to measure, however, while the costs of building features for disabled users are concrete and non-trivial.
But there is another urgent reason that industry should care about accessibility: security. When companies create bad accessibility features, it can result in more than just bad user experiences. It can also introduce new security risks, impacting systems and individuals alike. And these risks can lead to breaches that inflict real damage to the company’s bottom line.
A good example is CAPTCHA, or the Completely Automated Public Turing test to tell Computers and Humans Apart. Designed as a challenge to prevent bots from spamming a system, CAPTCHAs were first introduced as images of distorted character strings. Typing the contents of this image into a text field would effectively prove one’s humanity and allow a user to proceed to the next webpage.
However, the original CAPTCHA was soon revealed to exclude people with visual impairments, effectively deeming only those who could see images as “human.” Including descriptive “alt” text in the source code of the website could solve this problem by making CAPTCHAs accessible to screen readers—but at the cost of also making CAPTCHAs legible to bots, which could then pass for human. Enter the audio CAPTCHA, which was introduced as an “accessible” alternative: it reads the contents of a visual CAPTCHA aloud without relying on alt text, and adds background noise or distortion that makes it harder for bots to understand. Even so, it still excludes people who are deafblind.
While visual CAPTCHAs have evolved to more robust cognitive tasks—like selecting all the traffic lights in a given image—accessible CAPTCHAs haven’t progressed nearly as far. Meanwhile, speech recognition technology is approaching human-level performance, giving hackers enhanced capabilities to decipher audio CAPTCHAs even with significant noise or distortion. After obtaining and preprocessing the audio, attackers can now automate the submission of these recordings to speech recognition services, then plug the transcriptions into the web form. The result is new system-level security risks: in a 2017 study, researchers at the University of Illinois at Chicago found that off-the-shelf speech recognition services could be used to hack the most common audio CAPTCHAs, breaking Google’s reCAPTCHA with a 98 percent success rate.
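The attack loop described above has a simple shape, sketched below with stub callables standing in for the commercial speech-to-text services an actual attack would invoke (and omitting the audio preprocessing step). This is an illustration of the idea, not the researchers’ code; all names here are my own.

```python
from collections import Counter

def normalize(transcription):
    """Collapse a transcription to lowercase alphanumerics so that
    guesses like '4 7 2 9' and '47 29' count as the same answer."""
    return "".join(c for c in transcription if c.isalnum()).lower()

def solve_audio_captcha(audio, transcribers):
    """Send the CAPTCHA audio to several speech recognizers and return
    the answer most of them agree on (a simple majority vote)."""
    guesses = [normalize(t(audio)) for t in transcribers]
    best_guess, _count = Counter(guesses).most_common(1)[0]
    return best_guess
```

The point is how little attacker effort this requires: the hard part, transcribing noisy speech, is outsourced wholesale to off-the-shelf services.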
Accessibility done badly can make the internet less safe for everyone. If companies can’t be convinced to protect disabled users for ethical and legal reasons, perhaps they can be motivated by the security risks that result from serving disabled users poorly. Moreover, the potential impact of these risks is growing as accessibility permissions are used in unexpected ways: to deliver new features rather than to expand access. Even the nondisabled may therefore be unwittingly using accessibility features—and bearing the associated risks.
For instance, Dropbox users are asked to turn on accessibility during setup on Mac computers. Yet how this helps people with disabilities is currently not discussed in the documentation; the explanation emphasizes that turning on accessibility will enable added functionality. “Mac shoehorns a million different things into accessibility permissions,” says an anonymous commenter on r/MacOS. “You might grant a window manager accessibility permission to move windows around, but then the window manager could, if it wanted to, read your keystrokes.”
So what is the way forward? Ideally, we could foster partnerships between academia and industry that combine the interdisciplinary prototyping of the former with the funding of the latter. Even under the best arrangement, however, a deeper difficulty will remain: both security and disability are moving targets. What one needs to be secure online is constantly changing—and so is what defines one as disabled. As a report from the AI Now Institute notes, “the boundaries of disability… have continuously shifted in relation to unstable and culturally specific notions of ‘ability.’”
Security and disability are fluid, and this will always create challenges for technologists trying to design for either—let alone both. Much creative experimentation will be needed, but the goal should always be the same: to keep as many people as possible safe, even if it means overhauling our technologies. As Bennett, one of the authors of the AI Now report, explains, “Disabled folks are not vulnerable because of our inherent bodyminds, but because systems are not set up to protect all people equally. That’s what creates the vulnerability.”