Platforms, Privacy and Pandora’s Box

The Wall Street Journal just ran another piece in its series covering online privacy issues, this time focusing on how mobile apps on the Apple and Android platforms may share unique device ID numbers without consent. If matched against real names, UDIDs pose a bigger privacy risk than cookies on websites: people usually have one phone, carry it with them constantly, and, unlike cookies, cannot clear the identifier.

The storyline from here on will be familiar. Apple has already been cracking down on developers the Journal inquired about this week. At least one company we know started employing SSL encryption for UDIDs yesterday. There might be a fall guy (like how Lolapps and Gambit were singled out when Facebook faced privacy-related criticism). There will be fixes, some necessary, some cosmetic. Then things will go back to the way they were.
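SSL only protects a UDID in transit; it does nothing about the matching problem the Journal describes, where a raw identifier collected by many apps can be joined into one profile. A different mitigation, sketched below under my own assumptions (the salts and device ID are invented for illustration, and this is not what any company in the article actually deployed), is to hash the UDID with an app-specific salt so the derived ID is stable within one app but cannot be matched across apps or against a broker's master list:

```python
import hashlib

def app_scoped_id(udid: str, app_salt: str) -> str:
    """Derive a per-app identifier from a raw device ID.

    Hashing the UDID together with an app-specific salt yields an ID
    that is stable for one app but differs across apps, so two apps
    (or a data broker) cannot trivially join their records on it.
    """
    return hashlib.sha256((app_salt + udid).encode("utf-8")).hexdigest()

# Hypothetical 40-character UDID, as iOS devices reported at the time.
device = "2b6f0cc904d137be2e1730235f5664094b831186"

id_a = app_scoped_id(device, "app-A-secret-salt")
id_b = app_scoped_id(device, "app-B-secret-salt")

assert id_a != id_b                                   # not matchable across apps
assert id_a == app_scoped_id(device, "app-A-secret-salt")  # stable per app
```

The scheme only helps if each developer keeps its salt private; a broker who learns the salt can rebuild the mapping by hashing its own list of raw UDIDs.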

On the whole, the “What They Know” series is great for mainstream consumer education. But its sometimes simplistic descriptions of industry practices and occasional scaremongering create a risk that uninformed policymakers will draft poorly targeted legislation. The result could be rules that are unnecessarily destructive to consumer Internet businesses, or so cosmetic that they don’t fix the underlying problems.

The thing is: Data collusion is a problem inherent to platforms and their ecosystems. The power that lets two guys quickly build and ship a product that 1 million users know and love is the same power that lets two guys walk off with sensitive, personal data on millions of people.

Platforms like Facebook, iOS and Android have unleashed the fastest-growing businesses ever known. Zynga trades at an implied market capitalization of just under $5 billion on highly illiquid secondary markets. Groupon became a $6 billion company in 24 months, after growing in large part through targeted performance advertising on Facebook. There are more than 550,000 applications on Facebook, 300,000 on iOS and 100,000 on Android. People use apps to book flights, find restaurants, play games and serendipitously run into friends.

But with that incredible distribution power comes risk to consumer privacy.

The incentives for data collusion among developers will always be strong. As long as these powerful platforms exist, so will some symbiotic entity that barters, trades, collects and matches data on individual users. Today it’s Rapleaf, which shares a venture investor with Facebook. Tomorrow it will be some other company.

Frankly, there is no way that companies like Facebook, with fewer than 2,000 employees, can — day in and day out — police more than 2.5 million developers and guarantee with 100% certainty that there are no privacy violations or unauthorized data sharing by third-party apps.

That’s not to say these companies are lax.

Each one has a slightly different regulatory approach. Apple employs a preventative strategy. It vets apps ahead of time and puts them through an unpredictable approval process to the ire of developers. Once it gives an app the green light, Apple tends to leave it alone unless there is an egregious terms of service violation.

Google takes a post-hoc approach. It doesn’t do upfront vetting, but users can flag apps and Google can take them down after they’re already in the store. Unlike on iOS, users can also return apps, although the return window was shortened from 24 hours to 15 minutes last week. A post-hoc approach has, of course, unleashed huge spam problems in the Android Market, which Google is only beginning to come to grips with.

Facebook’s approach is closer to Google’s. It has algorithms that can automatically take down apps if they’re growing in suspicious ways, but it also employs human checks. Over the years, we’ve become pretty familiar with late Friday developer crackdowns.

What’s interesting at this moment is that there is an open question in Washington, D.C. as to how legally liable platforms are for the behavior of third-party developers.

The overwhelming majority of developers produce immense value for consumers, but let’s take an extreme hypothetical example. If an unscrupulous app developer launches a “Sexual Purity Test” or a “How Mentally Stable Are You?” quiz (yes, the latter is real), gets millions of users and secretly sells that data to pharmaceutical or insurance companies, how much liability does the platform bear?

Technology companies are hoping more of that responsibility will fall to an empowered Federal Trade Commission. Momentum is also building for the Department of Commerce to create a federal office for guiding online privacy regulation.

But if the platform companies can’t entirely control their ecosystems, I sincerely doubt the FTC or any privacy czar can.

Consumer education is far from where it needs to be. On sign-up prompts, platform providers could force developers to excerpt key parts of their privacy policy and explicitly list the third parties they share data with. They could also make it much clearer to users who developers actually are (since violators often just set up shop under a different name if caught).
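To make the disclosure idea concrete, here is a minimal sketch of what a platform-mandated manifest and sign-up summary could look like. Everything in it is invented for illustration (the app, developer, and partner names, and the manifest fields); no platform requires this format today.

```python
# Hypothetical manifest a platform could require at app submission time.
manifest = {
    "app_name": "Example Quiz App",
    "developer": "Example Labs, Inc.",
    "data_collected": ["device ID", "friend list"],
    "shared_with": ["AdNetwork X", "Analytics Y"],
}

def signup_disclosure(m: dict) -> str:
    """Render the key privacy facts a user would see before installing."""
    collected = ", ".join(m["data_collected"])
    shared = ", ".join(m["shared_with"]) or "no third parties"
    return (f'{m["app_name"]} by {m["developer"]} collects: {collected}. '
            f"Shared with: {shared}.")

print(signup_disclosure(manifest))
```

Because the developer's legal name is part of the manifest, a banned developer re-registering under a new app name would still surface under the same corporate identity, which addresses the set-up-shop-again problem mentioned above.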

There aren’t easy answers here. For all of the value that these platforms unlock, we’ve opened Pandora’s Box when it comes to privacy.

[Image via The Wall Street Journal.]