ETA: On the basis of the blog post below, I was invited to discuss possible problems with Uber’s passenger rating system on NPR’s “On the Media.” You can listen here.
For the uninitiated, Uber is a taxi-like company that, with the touch of a button on a ridiculously easy-to-use app, sends a driver in a classy black car to your location using GPS. I recently learned that Uber not only stores data about its passengers, but also allows its drivers to rate passengers and makes the ratings available to other drivers.
I’ve long known that customers rate Uber drivers; when you order a car, you can see the rating from one to five stars of the driver who is picking you up. And I’ve always assumed that the company keeps records of when you use the app and where you go. But it turns out that the drivers also rate passengers, and that — at least anecdotally — the various drivers use your rating information to decide whether and how quickly to pick you up. The app interface looks like this before you order a car, so you can see where the cars are in real time and an estimate of how long it will take a car to pick you up:
And then after you order a car at the bottom of the app you can see your driver’s first name, a photo, the make, model and license of the car, and his (every driver I’ve ever had has been a man) rating on a scale of one to five stars:
So why is the fact that Uber drivers rate passengers troubling? No one could disagree that a store can use its discretion to ban a customer who behaves inappropriately. Businesses have the prerogative to deny service to customers who fail to abide by their rules and norms, although — as Lululemon recently learned — even this practice attracts negative attention when most people see the rules and norms as unreasonable. But the general point — that companies can refuse to do business with people — is certainly true.
Still, the way Uber appears to store and use its data about customers is far more invasive and troublesome for reasons of both privacy and equality.
I’ll start with the privacy reasons, which quickly reveal that the store-banning-a-customer analogy is inapt. It’s one thing for a store to ban a customer for unacceptable behavior. It’s another for a store to rate every customer who comes into the store every time the customer comes into the store, and then to make those ratings instantly available to every employee in the store, who can then intentionally use the ratings to determine how quickly and courteously to assist the customer. Most brick and mortar stores do the former, but I’m unaware of any that do the latter.
Moreover, stores generally have multiple witnesses — both employees and customers — to the behavior that gets a customer banned from a store. In contrast, the only person who witnesses an Uber driver’s rating of a customer is the driver. Indeed, the customer doesn’t even witness it, because she’s gone by the time the driver assigns a rating. The store customer probably knows that she’s been banned because the store tells her on the spot and likely tells her why. The Uber passenger never knows why she’s been rated poorly — indeed, under Uber’s current model, she doesn’t know whether she’s been rated poorly or well. And perhaps most importantly, this huge wealth of accumulated data about individual people might be exploited in a number of ways if it fell into the wrong hands: someone might use an undeserved poor rating of a passenger as a reason to provide the passenger with worse or no service in other settings, or might use the information to harass, embarrass, or even blackmail the passenger.
The broader privacy issue, of course, is that Uber is storing data about where people go and when. That issue is much bigger than the passenger rating system. But the rating system injects subjective evaluation into the set of data that Uber collects, and as a result imposes all the problems associated with discretion.
Uber passenger ratings also raise issues of equality in conjunction with other aspects of its business model. What if some drivers rate passengers less well not because of the passengers’ behavior, but because of the drivers’ acknowledged racial prejudice or implicit bias? Social science research commonly uncovers evidence of such implicit bias, in contexts ranging from the workplace to education to commerce.
Moreover, Uber’s app facilitates race discrimination. The app directs users to create profiles for themselves and to upload photos of themselves. I see the potential utility of the photo — it might allow the driver to recognize the passenger more easily — but traditional cabs have been picking up people for years without that information, and Uber already pinpoints passengers using GPS. So Uber’s direction to include a photo means that drivers will know the race of the prospective passenger long before they leave to pick him up, yet nothing indicates that this identification is necessary or even particularly useful.
As a result, I see at least four ways that implicit bias coupled with the design of Uber’s app paves the way for race discrimination. Drivers might engage in either intentional or unconscious discrimination when they rate passengers. Drivers might weigh poor passenger ratings differently for members of different races, with the result that some passengers will wait longer for cars. The knowledge that a passenger already has a poor rating might prime negative stereotypes about the racial group to which the passenger belongs, leading to less positive treatment during the trip itself. And finally, because the driver and passenger are not in the same physical area at the time the driver is deciding whether to pick up the passenger (unlike the traditional taxi system), whatever social checks might prevent a driver from treating someone with prejudice to their face would be far less likely to affect behavior when the driver can reject the passenger from fifteen blocks away, and the passenger doesn’t even know the driver is doing it.
Of course, the racial analysis is complex. A few years ago, when Uber was relatively new, Latoya Peterson wrote a piece explaining how much easier it was for her to use Uber than it was to hail a cab. Others have described similar experiences (for example, here and here). But the experiences of Peterson and others are not necessarily incompatible with the possibility of race discrimination. That is, Uber might be preferable to cabs for people of color, yet at the same time Uber drivers might still treat people of some races better than others.
Interestingly, Uber’s rating system provides a unique opportunity to prove or disprove — over a very large, national sample — the longstanding view that black people and members of certain other minority groups have a harder time hailing cabs. One can imagine a relatively simple study that examined a random sample of profiles, coded them on the basis of perceived race (which is the relevant category in this instance; how an individual identifies herself has no effect on how an Uber driver identifies her from her picture), and then compared the average rating and average length of wait for a car.
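The comparison such a study would run can be sketched in a few lines of code. The data below is entirely simulated — the group sizes, mean ratings, and mean wait times are hypothetical assumptions for illustration, not claims about any real Uber data — but the final step (comparing group averages) is the core of the proposed analysis.

```python
import random
import statistics

# Hypothetical sketch of the study described above: compare average passenger
# ratings and wait times across two groups coded by perceived race.
# All numbers here are simulated assumptions, not real data.

random.seed(42)

def simulate_group(n, mean_rating, mean_wait):
    """Generate n simulated (rating, wait-in-minutes) samples for one coded group."""
    ratings = [min(5.0, max(1.0, random.gauss(mean_rating, 0.5))) for _ in range(n)]
    waits = [max(0.0, random.gauss(mean_wait, 2.0)) for _ in range(n)]
    return ratings, waits

# Two coded groups; the built-in disparity is an assumption of this sketch.
ratings_a, waits_a = simulate_group(500, mean_rating=4.6, mean_wait=5.0)
ratings_b, waits_b = simulate_group(500, mean_rating=4.3, mean_wait=7.0)

# The study's comparison: difference in average rating and average wait.
gap_rating = statistics.mean(ratings_a) - statistics.mean(ratings_b)
gap_wait = statistics.mean(waits_b) - statistics.mean(waits_a)

print(f"Average rating gap: {gap_rating:.2f} stars")
print(f"Average wait gap: {gap_wait:.1f} minutes")
```

A real version of the study would of course need controls for confounding variables (neighborhood, time of day, trip length) and a significance test on the gaps, but the basic shape of the analysis is just this: code each profile, then compare group averages.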
To be clear, I am not claiming that Uber’s passenger rating data would reveal racial disparities. I have no idea whether that is true. It’s worth noting that some have already suggested that other aspects of the company’s business result in racially disparate treatment.
But if drivers’ ratings revealed systemic racial disparities — controlling for other variables, of course — it seems to me that the situation might be ripe for a lawsuit under 42 U.S.C. 1981. That statute states that everyone has the same right “to make and enforce contracts . . . as is enjoyed by white citizens,” and defines “make and enforce contracts” to include “the making, performance, modification, and termination of contracts.” The statute also specifies that it protects against non-governmental discrimination — that is, unlike the Equal Protection Clause of the Fourteenth Amendment, the statute reaches private conduct.
When a person contacts a car company that makes itself widely available to drive people from one place to another, and indicates a desire to receive exactly that service, that’s an effort to form a contract. And if white people are able to form that contract more easily than people of other races, this seems like an instance of private discrimination of the sort that section 1981 is designed to reach.
The irony is that with a few modifications, the Uber app could come much closer to providing a race- and gender-neutral car service than traditional taxi services. If the Uber app didn’t display users’ pictures, drivers would never know a passenger’s race until they arrived to pick him up. Moreover, the app uses first names only, so racially correlated last names would not affect impressions, and nothing says that one has to use one’s legal name rather than a race- and gender-neutral nickname if one prefers. The problem of recognition with which Uber justifies the directive to upload a photo could be easily solved: passengers could note “I’m wearing a blue jacket” or something similar when they ordered the car. (Or of course they could choose to self-describe with physical features — but they wouldn’t be channeled into that requirement.) So while Uber’s current model essentially requires drivers to notice demographic characteristics alongside ratings, it could easily be redesigned to remove those features — and with them, the opportunity for discrimination.
With all this in mind, I think the best course for Uber is to do away with its current passenger rating system. This is not to say drivers should have no recourse or no opportunity to report passengers. Indeed, I think that Uber drivers need greater protection for their workplace rights, and I support their recent efforts to form a union. But Uber should modify its rating system to allow drivers to report problem customers only for specific, concrete reasons — drunk, verbally abusive, physically violent, etc. After, say, two or three bad ratings from different drivers, a customer would be banned. And to complement this change, the Uber app should be redesigned as I have described in order to minimize the salience of demographic characteristics.