by Shazeda Ahmed
Foreign media has painted a dystopian portrait of China’s social credit system. The reality is both less coherent and more complex.
Almost every day, I receive an email from Google Alerts about a new article on China’s “social credit system.” It is rare that I encounter an article that does not contain several factual errors and gross mischaracterizations. The social credit system is routinely described as issuing “citizen scores” to create a “digital dictatorship” where “big data meets Big Brother.”
These descriptions are wildly off-base. Foreign media has distorted the social credit system into a technological dystopia far removed from what is actually happening in China. Jeremy Daum, a legal scholar at Yale Law School’s China Center, has suggested that the misreporting persists in part because the United States and Europe project their fears about extensive digital surveillance in their own societies onto China’s rapid technological rise. That fear is compounded by the rhetoric around a US-China “arms race” in developing artificial intelligence: the idea that China might somehow perfect an exportable model of a totalitarian surveillance state has made people all the more willing to believe exaggerated accounts of the social credit system.
In response to the misreporting, several researchers have attempted to correct the narrative with well-documented examples of where foreign press coverage gets things wrong. Common mistakes include the assumption that all surveillance technology in China feeds into a centralized database, that every recordable action is assigned a point value and deducted from a comprehensive score, and that everyone in China receives such a score.
In reality, social credit is a broad policy project for encouraging individuals, businesses, legal institutions, and government itself to be more “trustworthy” (守信, shouxin) through a mix of measures. These measures include the blacklisting of lawbreakers, the “red-listing” of those with exemplary records, and a range of rewards and punishments. In a few places, it also involves localized and experimental scoring systems that are meant to incentivize “better” behavior.
Some of the mischaracterized accounts in foreign media are understandable given the loose use of the phrase “social credit” (社会信用, shehui xinyong) in China. One law professor I spoke to in Beijing encouraged me to think of it as a “working term,” an umbrella category encompassing several moving parts of a broader policy agenda that includes both national initiatives as well as city-level pilot projects that do not generalize to a countrywide scale.
Just because the social credit system is less comprehensive than it appears in foreign media reports does not mean that it is incapable of causing harm, of course. Moreover, the Chinese government maintains a sophisticated and pervasive surveillance apparatus, which it regularly uses to curtail the civil rights of its citizens. It’s not so difficult to imagine how the misguided belief that the social credit system centrally integrates other state-operated surveillance technologies may have originated, given the troubling creation of DNA databases in Xinjiang and police procurement of facial-recognition technology across the country.
But the social credit system as it currently exists is not aimed at Orwellian social control. Rather, the cluster of policy initiatives designated by the term are intended to promote greater trust—namely, trust between companies and their customers, and between citizens and the government. This trust-building can serve both economic and political ends. While many of the problems that the government uses to justify the need for a social credit system have economic considerations at their core—improving food safety, punishing debtors, cracking down on counterfeit goods sold online—others fit a broader theme of promoting institutional trust, such as by penalizing those who produce misleading or forged academic research.
Taken as a whole, the range of goals that the social credit system aims to address may suggest that the government is itself unsure, and is still in the process of figuring out, what such a system can accomplish. But the system’s most widely publicized aspect in China is how it punishes those deemed “untrustworthy”—which is also where the greatest potential for harm lies. Yet the techniques that the Chinese government is using to enact these punishments are hardly unique—some of them are already ubiquitous in the United States. In fact, the two countries’ approaches to trust-building are more similar than one might expect.
Blacklists with Teeth
The core mechanism of the Chinese social credit system is the creation of blacklists. The government uses blacklists to punish people for various infringements of the law that fall short of being considered criminal activity. Commonly blacklisted subjects include people who have the means to repay debt they owe but choose not to, colloquially referred to as 老赖 (laolai). The Supreme People’s Court, the highest court in China, assembles a national blacklist of “judgment defaulters,” people who have not complied with court orders. These orders are typically financial in nature, related to repaying debts. But they can also include other kinds of instructions, such as making a formal apology to an injured party or regularly visiting one’s elderly parents. Punishments for landing on certain blacklists include being barred from taking civil service jobs, from sending one’s children to private schools, and from booking air travel or riding “soft-sleeper cars” on trains—the most comfortable railway compartment class for long journeys.
The social credit system is often compared to the dang’an (档案) system, a set of government dossiers on Chinese citizens recording the minutiae of their lives, from expressions of political thought to their performance at school, compiled from accounts by peers and local authorities. But a closer point of comparison is the government blacklisting that predates the social credit system. Blacklists offered more concise judgments about alleged misdeeds, but they have long been considered ineffective at changing people’s behavior. In the past, even though some blacklists were printed in newspapers and shown in movie theaters before screenings, many people were unaware that they were blacklisted and continued to go about their lives without suffering any consequences. Courts now send blacklist notifications to judgment defaulters, but many of these still go unnoticed. The social credit system is meant to give blacklists teeth.
How? One way is by encouraging and enabling different government agencies to pool information. Under the social credit system, several government bureaus have not only developed their own blacklists, but to date have signed forty memoranda of understanding that enable them to share information with one another to ensure that blacklisted individuals are duly punished.
Another way that the social credit system strengthens blacklists is by fostering closer communication not just within government but between government and industry—in particular, with China’s biggest technology firms. Previously, blacklisted people who were barred from purchasing airline tickets through official channels might still have managed to book travel through websites like CTrip.com or the in-app travel booking feature of the mobile wallet service Alipay. That is no longer possible under agreements that these companies and several dozen others have signed with China’s National Development and Reform Commission (NDRC), the powerful government body that has spearheaded the social credit system’s development.
According to available government and state media reports, one type of agreement, called “information sharing,” involves companies receiving government-issued blacklists, which they then match to their user base in order to prevent blacklisted people from performing certain activities like buying airplane tickets. Another kind of agreement, called “joint rewards and punishments,” restricts the behavior of blacklisted individuals even further: blacklisted users of Alipay, for instance, are unable to buy so-called “luxury items”—although it is unclear whether it is Alipay or the government that determines which items fall into this category.
By forging partnerships with Chinese technology companies, the state ensures that blacklisted people can’t avoid punishment. These partnerships make it harder for individuals to evade restrictions in the non-state economy, which is otherwise farther outside of the government’s sphere of control. But they also open the door to the possibility of a much more expansive social credit system, since technology companies have a wealth of information about Chinese citizens. Still, it remains unclear when and how companies might share their data with the state—although it would be difficult for them to avoid doing so if asked.
Webs of Trust
In addition to blacklisting, China also has “red-listing.” This involves identifying people whose behavior is considered exemplary of “trustworthiness,” which includes paying bills and taxes on time or, in some cities, doing volunteer work and donating blood.
There are also more specialized examples of how rewards are granted: the government has a national “action plan” for encouraging young people to do volunteer work, and those volunteers recognized as outstanding are red-listed. The benefits they receive as a result include having their job applications to Tencent prioritized, paying discounted mobile phone rates through Alibaba, getting coupons for shopping on Alibaba’s ecommerce site TMall, and, for 300 volunteers in an AliTravel-sponsored program, free overseas accommodations. The state may at some point also share information about its red lists with tech companies in order to confer more benefits. For instance, ride-sharing behemoth Didi Chuxing has partnered with the NDRC’s “Xinyi+” (信易+, akin to “credit convenience”) project, which may begin to offer red-listed riders discounts, priority booking of cabs, and deposit-free bike rentals.
In some instances, blacklists are adapting to new media while retaining their original function of shaming people into changing their behavior. The enormously popular social video streaming app TikTok (抖音, douyin) has partnered with a local court in Nanning, Guangxi to display photographs of blacklisted people as advertisements between videos, in some cases offering rewards, set at a percentage of the amount the person owes, for information about these people’s whereabouts. Much like the other apps and websites that take part in these state-sponsored efforts, TikTok does not disclose in its user-facing terms of service that it works with the local government of Nanning, and potentially other cities, to publicly shame blacklisted individuals.
A similar initiative is taking place in the Yuhua district of Shijiazhuang, the capital city of Hebei Province. There, local courts have opted to offer a WeChat “mini-program”—a limited-purpose feature nested within the popular chat app WeChat—known as a “Laolai Map.” This map displays blacklisted people, companies, and other organizations within a given area, alongside slogans such as “Recognize laolai, avoid risks.” It’s not clear if real-time location data or individuals’ home addresses are used to populate the map. The mini-program also enables users to look up blacklisted entities with a search function, and to see the offenses that landed them on the blacklist in the first place. While some descriptions are straightforward, such as “failure to report property ownership,” most are simply listed as defaulting on court orders without going into further detail.
Other state-tech collaborations are more ambitious in scope. For example, the budding “credit cities” concept is a spin on “smart cities” that involves tech companies building out digital scoring platforms that use a mix of government data and private sector data. With such initiatives, people who are deemed more trustworthy can rent bicycles and even apartments without providing a deposit, or delay immediate payment for cab rides and hospital visits. The participation of large tech companies in these ventures tends to be downplayed, as the credit city platforms are associated with their respective municipal governments and generally rely on smaller local firms.
Still, details on the specifics of these partnerships are scarce. While major Chinese tech companies are not serving the social credit system the way foreign media has thus far portrayed—surveillance cameras are not using facial recognition to link misbehavior to a centralized scoring database, for instance—the ways in which they do partner with the state to coproduce the system are generally kept from the public eye. What little we know comes from news coverage of the signing ceremonies held when tech companies conclude agreements with the NDRC. At each of these, representatives of the companies’ senior leadership refer to their joint efforts as a form of social responsibility. The CEO of Meituan Dianping, an online group-buying and food delivery service, notably said that co-constructing a social credit system is “every industry’s—especially platform-based internet companies’—duty-bound responsibility.”
But how effective is the social credit system at improving “trustworthiness”? Thus far, state media has portrayed blacklists strengthened by the social credit system as having succeeded at encouraging people to be more honest and to break the law less often. The official news agency Xinhua, for instance, praised Ant Financial’s use of blacklist data to restrict certain purchases via the mobile wallet app Alipay and to lower scores in its Sesame Credit scoring product, arguing that the company’s punishment of 1.2 million debt defaulters encouraged over 100,000 of them to repay their debts.
It’s difficult to confirm how such assessments are made, or whether blacklisting actually influences behavior at scale, especially considering how many people consider themselves to have been wrongfully blacklisted, or find it unfair that they are blacklisted for deceased relatives’ debts. Still, it’s likely having some effect: a recent book by Ant Financial employees claimed that within a month of directly notifying Sesame Credit users that they were blacklisted for being in debt, 46% of those users paid off their debt.
The Market for Blacklists
The use of blacklists to promote trust is by no means confined to China. It is also pervasive in the United States, although it takes a different form.
Blacklisting under the Chinese social credit system is a fairly overt means of influencing behavior. Blacklists are developed and enforced by the state. By contrast, similar practices in the US tend to be more covert. Private firms develop lists of people who have committed minor transgressions, and generally sell them in their capacity as data brokers in a relatively under-regulated market. The Fair Credit Reporting Act (FCRA) of 1970 reined in many types of invasive data collection that could be used as informal credit ratings. But plenty of other kinds have slipped through the cracks or are FCRA-compliant, creating hidden yet technically legal data judgments that affect the lives of millions of Americans.
One example is systematically compiled lists of retail employees accused—not convicted—of shoplifting. These FCRA-compliant databases have barred people from being hired in other sales positions. Background-check databases acquire these records to compile what one lawyer has referred to as a “secret blacklist,” given that employees accused of shoplifting are often made to write statements confessing that they committed the theft even in cases where they had not. They are typically unaware that these admission statements feed into such databases, which are consulted when they apply for other retail jobs.
On the customer-facing side, databases developed by firms such as The Retail Equation track people whose behavior fits the profile of “return fraud,” by looking at the items they have bought, how often they made returns, whether returns were accompanied by receipts, and how much money was refunded per return. Using forms of identification tied to real names, such as driver’s licenses, these databases are able to track people across multiple retailers. The consequences for people on these lists include being unable to return or exchange goods at certain stores for a year. Notably, if records across different retailers are kept separate within the database, they are not treated as credit reports and therefore do not run afoul of the FCRA.
An additional practice with severe consequences has emerged in the housing rental market, with New York’s so-called “tenant blacklist.” Tenant-landlord disputes taken to housing courts form a record that can even work against tenants who have won their cases. Rental-screening data brokers mine housing court filings across the city to create databases of every tenant sued in housing court, regardless of the outcome of each case. Despite how little context is provided in these databases, landlords nonetheless check them when deciding whether to accept a potential tenant.
The database only indicates that a tenant was sued in housing court, without listing the outcome (including if the tenant won) or the cause of the case (landlords have been known to bring cases in efforts to evict rent-stabilized tenants). Yet the mere appearance of a tenant’s name on a blacklist is presented as negative in and of itself. In an unsurprising parallel to Chinese blacklists, people often first find out they are on the tenant blacklist after failing to secure housing and consulting a lawyer to find out why. An unhoused New York woman with no criminal record and a credit score of 760 was unable to secure an apartment reserved for the low-income elderly because of a tenant blacklist. The repackaging of blacklist data can amplify and distort its significance, yet these lists will persist as long as they profit data brokers, and in part because of a failure to imagine viable alternatives.
Notably, one of the proposed solutions for dealing with tenant-landlord disputes in cities like New York is to let tenants remain anonymous in cases taken to housing court. The social credit system has yet to cover the fraud-ridden housing markets of China, though were a similar issue to arise, anonymity would be difficult to preserve under the nationwide push for real-name registration both on- and offline. Ant Financial, the fintech giant behind the credit rating service Sesame Credit, has partnered with the NDRC’s “Xinyi+” project to use the company’s data on individual and enterprise users’ finances to help landlords make decisions about housing and office rentals.
Listening to the Laolai
In both the US and China, blacklisting systems enforce broad punishments that are disproportionate to the transgressions that land people on blacklists. Likewise, the burden falls on the blacklisted individual to discover they are being systematically prevented from taking certain actions and to figure out how to remedy their situation.
Demanding greater transparency and accountability from both systems has been a challenge, although US efforts have been more successful. In the US, activists have successfully pushed back against certain forms of blacklisting for violating the FCRA. Civil society and legal aid organizations have also spent years monitoring, and attempting to curb, the use of ambiguous personal data for blacklisting.
In China, by contrast, there is less room for such activity. For one, it’s unclear how credit legislation could be used to mount comparable efforts. And, although a few Chinese consumer protection organizations have become attuned to the ways that tech firms’ use of customer data can lead to privacy abuses, they have not taken up the issue of blacklists because the practice is treated as socially acceptable in China.
Some avenues for contesting cases where people believe they are wrongfully blacklisted do exist, but they are unlikely to be widely known. In places like Shanghai and Hubei, local social credit regulations lay out steps for filing “objection applications” where individuals have found credit information to be incorrect or omitted. Yet, given the scope of what counts as “credit information,” it may be difficult for non-experts to understand and make a case for why their records should be modified. Moreover, it’s unclear if the punishments for being blacklisted leave individuals enough room to redeem themselves, or if the constraints are so stringent that they create insurmountable obstacles to clearing one’s name.
The current state of the social credit system is far less sophisticated than its portrayal in the foreign press. But if the scope of what can count as blacklist data widens, and if the tech sector takes an even more pervasive “searchlight” approach to seamlessly melding these data into their core offerings, the system could move much closer to the dystopian picture that appears in the media. In particular, if China embraces the marketization of blacklist data—so that data is bought and sold, like in the US—information about individuals would become even harder to track and contest.
Over the next few years, the Chinese government will continue to tinker with implementing the social credit system. People who are wholly unaffected by blacklists may view them favorably, as proof that the government is proactively combating the laolai phenomenon. Yet there needs to be a critical analysis of the social credit system that centers the perspectives of those who are most directly affected. We need to hear from the laolai themselves to understand what the unforeseen consequences of this vast policy project may be. Only then can we begin to see what the social credit system is actually achieving—and at what cost.
Shazeda Ahmed is a PhD student at the University of California, Berkeley and a Fulbright researcher in China during the 2018-19 academic year.