I used to trust some of Google's products, like Chrome. I increasingly don't.
A couple of weeks ago, I noticed something strange was happening to my Google Chrome web browser. Where Chrome had always allowed me to browse the internet as an anonymous user, suddenly my browser had signed itself into my Google account. A bit of investigation (and a visit to a nerd forum) pointed me to the cause: Chrome had logged itself in after I visited my Gmail account.
The change in Chrome’s behavior, it turns out, was not a bug. It’s part of a new technical “feature” in the browser called “identity consistency between browser and cookie jar.” Despite the gritty technical name, the feature represents a truly fundamental change in the way Chrome works. For the first 10 years of Chrome’s existence, Chrome was simply a typical web browser. You had the option to sign the browser into Google—and thus take advantage of Google’s many data-sharing and cloud-synchronization options—but you never had to. With the stroke of an update, the sign-in became mandatory: If you happened to visit a Google property, the browser would attach itself to your Google account. To Google’s credit, it recognizes the privacy implications of this change, and simply signing the browser into Google does not immediately send your data to Google’s servers. But it brings users within an accidental click of sharing their bookmarks and browsing history with Google.
After I wrote about the change, I was taken aback by the tech community’s vociferous response. It seems that many technical professionals feel the same way I do: that their web browser should be just a web browser, rather than an arm of Google’s online juggernaut. The tech backlash even caused Google to back down, sort of. It announced a forthcoming update last Wednesday: Chrome’s auto-sign-in feature will still be the default behavior of Chrome. But you’ll be able to turn it off through an optional switch buried in Chrome’s settings.
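For users and administrators who don’t want to rely on a buried settings toggle, Chrome’s enterprise policy layer offers a blunter way to shut the behavior off. The sketch below is illustrative, not a recommendation from the article: it assumes a Linux installation of Google Chrome, which reads managed policies from `/etc/opt/chrome/policies/managed/` at startup, and uses the `BrowserSignin` policy, where a value of `0` disables browser sign-in entirely. The filename is arbitrary, and the exact path and mechanism differ on Windows (registry) and macOS (configuration profiles).

```json
{
  "_comment": "Save as e.g. /etc/opt/chrome/policies/managed/disable-signin.json (Linux).",
  "_comment2": "BrowserSignin: 0 = disable sign-in, 1 = enable, 2 = force sign-in.",
  "BrowserSignin": 0
}
```

After restarting the browser, the applied policy should be visible on the `chrome://policy` page.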
This pattern of behavior by tech companies is so routine that we take it for granted. Let’s call it “pulling a Facebook” in honor of the many times that Facebook has “accidentally” relaxed the privacy settings for user profile data, and then—following a bout of bad press coverage—apologized and quietly reversed course. A key feature of these episodes is that management rarely takes the blame: It’s usually laid at the feet of some anonymous engineer moving fast and breaking things. Maybe it’s just a coincidence that these changes consistently err in the direction of increasing “user engagement” and never make your experience more private.
What’s new here—and it is a very recent development—is that we’re finally starting to see that this approach has costs. For example, it now seems like Facebook executives spend an awful lot of time answering questions in front of Congress. In 2018, when it emerged that Facebook had handed more than 80 million user profiles to the sketchy election-strategy firm Cambridge Analytica, Facebook received surprisingly little sympathy, and its stock took a notable hit. Losing the trust of your users, we’re learning, does not immediately make them flee your business. But it does matter. It’s just that the consequences are cumulative, like spending too much time in the sun.
It’s this background that makes Google’s recent changes to Chrome so surprising. Google is the one major Silicon Valley firm that has avoided much of the tech backlash that’s spattering its peers. While the idea that Google is doing better than those companies might seem strange—Google is one of the biggest data collectors in the world, after all—I would argue that to a large extent, this is because Google has invested massively in building user trust.
This investment takes various forms. First, Google has invested relentlessly in securing its infrastructure: When it was hacked (allegedly by China) in 2010, the company not only pulled out of China, but it poured resources into hardening its systems, and even moved its security team into a dedicated building at the edge of its Mountain View campus to thwart physical espionage. Google security engineers brag that they can go toe-to-toe with nation-states, and they back this up by providing nation-state hacking warnings to at-risk users like journalists, politicians, and professors. On the privacy front, Google is the king of targeted advertising, but it largely avoids controversial and upsetting privacy changes like the ones that plague Facebook. Sure, it collects gobs of data, but it’s generally been upfront about what it takes and what it doesn’t. And unlike Facebook, the company almost never loses your data to hackers.
This pro-security strategy has produced tangible benefits for Google. Even when invisible to the average person, it’s helped to build trust with highly technical users who generally serve as ambassadors and an informal “IT help-desk” for everyone else. More critically, it’s allowed Google the breathing room to quietly improve its data collection and advertising businesses without the embarrassing congressional testimony and distractions of a Cambridge Analytica. Of course, the fact that Google is so ubiquitous means that it can afford to be magnanimous: There’s no need to push the privacy envelope with users when you’re already getting so much of their data.
But the recent Chrome changes indicate that Google is not immune to the same pressures that apply to other companies. And while the Chrome update may be only a small sign of that pressure, it’s hardly the most troubling one. For example, Google recently inked a secret deal with Mastercard to link your credit card transaction information to your web browsing. We can only guess at what it hopes to do with this data. And of course, this summer, the Intercept leaked an internal memo showing that Google is even considering a move back into China, building a custom search engine code-named “Dragonfly” that will systematically track Chinese users, while providing censored search results that remove results for terms like “human rights.” Google, for its part, has claimed that the project was only “exploratory,” though this explanation is disputed. It’s hard to see Google retaining a strong reputation for privacy if it deploys systems like this to a large fraction of the world.
In short, I fear Google is well on the way to becoming a different kind of company, and it worries me. This is not because I inherently love Google—it’s a profit-making entity, and its shareholders will always come before me. But I worry that it is increasingly trading away my trust for short-term benefits. Even worse, this course change indicates that companies’ self-interest in maintaining user trust may not be a match for the business pressures that drive them to become more intrusive.
Of course, some might say that I’m a fool for ever relying on Google products, that I deserve whatever I get for becoming enmeshed in the Google ecosystem. The more extreme version of the argument holds that people who expect privacy must avoid tools like Chrome and Facebook altogether, as though privacy is a value we should all suffer for—and one that is the exclusive right of the technical elite.
I reject this argument. It’s entirely possible for a company like Google to make good, usable products that strike a balance between privacy and profit. It’s just that without some countervailing pressure forcing Google to hold up its end of the bargain, it’s going to be increasingly hard for Google executives to justify it.
In the end, this is why I’m moving away from the Google ecosystem. If nothing else works, then at least I can vote with my feet—or my fingertips.
Source: Google is losing users’ trust.