Facebook is still feeling the heat over its Hotel California data policy, which hoards users’ private information even after they try to desert the site. The Times’ Maria Aspan has been all over this story, and her latest article reports that media and user pressure is forcing Facebook to finally let people completely remove themselves from the site. The company says this is a "technical" challenge, talking up code and glitches. But the real motivator is money, of course, since social networking sites are in the business of monetizing the social graph. That means people are traffic and personal information is content. As Adam Cohen explains in The Times editorial section, Facebook has not exactly friended "privacy rights":

It’s no secret why Web sites like to spread information of this sort: they are looking for more ways to make more money. Users’ privacy is giving way to Web sites’ desire to market to their friends and family. Technology companies are also stockpiling personal information. Google has fought hard for its right to hold on to users’ searches in a personally identifiable way. What Web sites need to do — and what the government should require them to do — is give users as much control over their identities online as they have offline. […] Protests forced Facebook to modify Beacon and to ease its policies on deleting information. Push-back of this sort is becoming more common. No one should have personal data stored or shared without their informed, active consent.

Amen. I advocated a similar proposal in my recent feature on Facebook:

A simple way to address one of Facebook’s privacy problems is to ensure that users can make informed choices. Taking a page from the consumer protection movement, Congress could simply require social networking sites to display their broadcasting reach prominently when new users post information. Just as the government requires standardized nutrition labels on packaged food, a privacy label would reveal the "ingredients" of social networking. For example, the label might tell users: "The photos you are about to post will become Facebook’s property and be visible to 150,000 people; click here to control your privacy settings." This disclosure requirement would push Facebook to catch up with its customers. After all, users disclose tons of information about themselves. Why shouldn’t the company open up a bit, too?

Debates over privacy and social networking often slip into variations of "blame the victim," especially when older luddites scorn young users for abdicating privacy and responsibility online. But these ongoing Facebook disputes reveal how companies can use technology to mislead users and preempt responsible choices. And even with good information, it’s still complicated. While Facebook is fighting to prevent users from fully removing their information from the site, other digital rights can run in the opposite direction. Web expert Danah Boyd recently stressed that millions of people trust companies like Google to store tons of vital information, which raises the question: what happens if your digital identity is "disappeared"? She recounts how a friend lost his entire Google account and was told by customer service that he had no recourse. After all, there may be no contract or backup files available:

When companies host all of your data and have the ability to delete you and it at-will, all sorts of nightmarish science fiction futures are possible. This is the other side of the "identity theft" nightmare where the companies thieve and destroy individuals’ identities. What are these companies’ responsibilities? Who is overseeing them? What kind of regulation is necessary?

Good questions.

Photo of campus poster: Inju, Flickr