Every company is a rating agency now, and will be until the government settles on some form of regulation.
You may find it a little eerie to discover that you are being rated by the companies you buy things from, and that the quality of customer service you receive can be determined by your “customer lifetime value” score. Maybe it reminds you too much of China’s new social credit system, which is intended to allow the government to keep tabs on citizens’ anti-social behaviors — and punish them by cutting off privileges like intercity train travel if they’re noncompliant.
Better get used to it. We are no longer rated by only the credit reporting agencies, which are subject to extensive federal regulation. Today many of us have an Uber rating (which at least we can access) as well as dozens of similar yet inaccessible ratings from other vendors. Lots of us are numerically rated in the workplace — I know I am, by my students (!).
That’s not all. Even companies that don’t directly assign us ratings are effectively quantifying our habits as consumers, borrowers, investors and producers. Big data starts with us, the subjects. And the inevitable, necessary, economically efficient use of big data entails constant analysis and evaluation. In the face of sophisticated data analysis, ratings are actually a rather crude measure — the least intrusive, most easily conceived evidence that almost nothing we do anymore is free from systematic quantification.
Why is this orgy of rating and quantification happening? Can we do anything about it? And if we could, should we?
The explanation lies mostly in computing power. Going back at least to the 1960s, social scientists realized that they could extract significant information, and make fairly reliable predictions, based on individuals’ demographic information. For decades, businesses used ZIP codes as a proxy for that predictive information.
It’s not that businesses of yore couldn’t gather data about their customers. They could, and did. Their problem was that the data was only as useful as the analytical tools available to process it. Early computers — as well as their successors up until perhaps a decade ago — simply lacked the capacity to break down and analyze vast quantities of data to produce useful outcomes.
Moore’s law gradually changed that. Although it (probably) cannot go on forever, the effective doubling of computing power every two years has led to more and more powerful processing. Today’s computers can crunch so much data that it’s now possible to extract information about you individually from collected records of your behavior.
At the same time, the rise of online shopping (and the rest of our online behavior) has provided new sources of data that used to be harder to pin down. An old-time supermarket could measure in a rough way how customers wander the aisles in search of the items on their shopping lists. Amazon.com can tell exactly what items you browsed before you made your selection.
To possess that information is to possess value. Any company that ranks my lifetime value can do more than just use that information to make informed decisions about how to interact with me. It can sell my customer lifetime value score to another, analogous enterprise.
And in the era of cookies and cross-platform snooping, many online enterprises — social media platforms, retail vendors, news organizations — have the capacity to know or to learn what my behavioral patterns are more broadly.
The only conceivable ways for this state of affairs to change are extreme self-restraint — such as not using online vendors and services — or government regulation. The former is unrealistic; we’re not going back to a bricks-and-mortar, cash-only society.
The latter is conceivable, but we’re still very far from a clear consensus about what regulation could or should achieve. My Harvard Law colleague Jonathan Zittrain and my onetime teacher Jack Balkin have been arguing for some time now that tech companies should be treated by the law as fiduciaries of our data, essentially holding users’ information in trust on their behalf.
It’s a creative idea to be sure. But adopting it would require us to change the way we think about our interactions with other people and parties. Right now, if you say something to me or if we do something together, I am ordinarily entitled to use the information I’ve learned from our interaction, unless we have some prior contractual agreement that I won’t. In essence, that’s what companies are doing when they gather data based on my behavior and then analyze it.
If government were to adopt some sort of comprehensive regulation, it would have to draw lines between permissible and impermissible data uses. Those lines would be extremely difficult to specify. More to the point, it would be a major practical and conceptual challenge to distinguish contemporary data gathering and analysis from the old kind, including credit ratings. Those, after all, are also based on data analysis and projections. And we don’t assume that the credit-rating agencies are our trustees — or at least it doesn’t feel that way when you have to interact with them.
The most likely outcome is that we will see some forms of data regulation over time — but not the kind of heavy supervisory regime that characterizes the credit agencies. Most probably, we will gradually become used to the idea that our behavior can be monitored and that, aggregated and interpreted, it will have consequences for a range of important economic and life decisions.
That doesn’t mean that citizens of liberal, capitalist countries will be subject to social credit ratings like the one organized by the Chinese government. It does mean that the forces of capitalism, rather than those of the state, are going to continue to shape the contours of our lives in new and creative ways. That may not seem like freedom, exactly. But then again, liberal market economies have only ever promised freedom from the state — not from the discipline of the market.