We all know Facebook can be a bit awful. You remember it: your Aunt Sally saw a picture of you she wasn’t supposed to see on Facebook when you’d had a few too many…. Embarrassing, but hardly the end of the world. But that was in 2005, when Facebook was merely annoying.
In the year 2016, we know that privacy advocates have been fighting a losing battle with Facebook for over a decade, attempting to hold in check Facebook’s aggressive anti-privacy practices. Meanwhile, Facebook continues to do everything it can to make sure users don’t realize how much of their personal information (not to mention information about people who have never joined Facebook) is being whored out to make Facebook a buck.
So the questions have to be asked: Is Facebook out of control? Is Facebook evil?
Part one of our two-part story explores Facebook’s no-holds-barred assault on user privacy, and its effort to hide how much information is being collected and what is being done with it.
The second part explores how Facebook has devolved into a criminal safe haven beloved by ID thieves, fences of stolen goods, intellectual property pirates, money launderers, and many other kinds of dirtbags.
When thinking about social media, how much sharing is “normal” and what is too much? Every Facebook user has stumbled across a friend’s post and thought, “Why the hell would you ever post that?” And most of us have then concluded that “some people,” or maybe just “kids these days,” don’t care about privacy or over-sharing of personal information. Although it is true that millennials tend to be far more open to sharing personal information via social media than older generations, across-the-board excessive sharing is no accident; Facebook has long pushed users to embrace unlimited sharing as the norm.
From 2005 to 2010, the default settings on the 12 types of information Facebook shares underwent a radical reversal. Almost everything that was private by default became, by default, public information shared with the entire world (with the exception of your birthday and contact information). A time-lapse graphic by researcher Matt McKeon illustrates the creeping change in default settings. Facebook pushes you to share as much as possible, because unlimited sharing by users is the necessary prerequisite for unlimited data harvesting and turbocharged ad revenue. And over time, what Facebook does with the information you’ve shared has changed almost as radically as the amount of personal information it pushes users to share.
I had always assumed, when I “liked” something on Facebook, for example, the TV show Better Call Saul, Facebook would take that information, and maybe tell my connections on Facebook that since I “like” it, they might want to check it out. If Facebook makes money with that, fine. What I didn’t know was that I could “like” Better Call Saul, and Facebook could then unilaterally decide that I was also into midget porn, cat juggling, or snuff films.
In 2013, Facebook user Craig Condon decided to discover for himself what the company might associate with his name and face behind his back. Having learned several years prior that Facebook would send out “sponsored stories” using the name and image of users, Condon created a sock-puppet profile, and “friended” his real profile to find out what Facebook was telling his friends. Condon relates, “A ‘like’ will show up in a friend’s (news) feed with a ‘related article (from you).’ It won’t show up in your feed, it’s actually completely invisible to you. You won’t know your ‘likes’ are recycled until a friend asks you, ‘Hey Craig, do you (really) like penis-shaped waffles?’ Or, ‘Do you like [infamously disgusting pornographic video] Two Girls, One Cup?’”
And indeed, a “sponsored story” endorsing that particular cinematic atrocity, while hiding it from the sock-puppet profile, is exactly what Facebook inserted into Condon’s news feed.
While Condon’s tone as he discussed the fraudulent “sponsored stories” practices was blasé, Facebook user Bruce S. of Missouri, contacted via the San Diego non-profit Privacy Rights Clearinghouse, was far less sanguine about sponsored stories and other ads coming his way on Facebook. Bruce complained about the inundating stream of “Facebook Games and Apps” invitations he receives, which he characterizes as “corporately acceptable malware, invitations to take your private information. Give me an option to opt out of receiving invitations from strangers masquerading as legitimate…. When a masked man knocks on the door every day, asking to come in and go through all your stuff, well, it’s time to get a gun!”
And the barrage of invitations to give away your personal information is not just pissing off residents of the “Show Me” state like Bruce; it has also managed to piss off, in a more Scandinavian and thus low-key way, Norway. Norway’s Consumer Ombudsman is an independent administrative body tasked with supervising marketing and advertising practices and fielding consumer complaints. In a November 2012 open letter to Facebook, it stated that the company’s sponsored stories, fake “likes,” and ads amount to spam, which is against the law in Norway.
It also complained about “marketing that is in breach with good marketing practice and offends against general ethical and moral views. Further, falsely giving the impression that something has been liked by a friend raises questions under the ban on misleading marketing practice.”
The Ombudsman’s letter remarked that these issues have been raised in discussions with Facebook before, during which Facebook staffers agreed they were a problem, and the letter politely invited the company to explain why it has done absolutely nothing to remedy the problems.
When a user allows Facebook access to an email contact list, Facebook takes the name and contact information of every person on that list, and begins to build a dossier on them.
Even those without Facebook accounts.
A post on the computer security website PacketStorm.com entitled, “Facebook: Where Your Friends Are Your Worst Enemies,” recounts a discussion with Facebook staffers about the practice in which “dossiers are being built on everybody possible.”
“Our first question asked that, in the name of common decency and privacy, would Facebook ever commit to automatically discarding information of individuals that do not have a known Facebook account? Possibly age it out X days if they don’t respond to an invite due to a friend uploading their information without their knowledge?”
Facebook’s answer was no, it would keep all the info on non-users, and it would also hold onto types of personal information that users had expressly said they don’t want Facebook to have.
Their justification? “They actually went as far as claiming that it would be a freedom of speech violation (to get rid of the information),” according to PacketStorm.
Few things better document Facebook’s notion of its all-encompassing rights than a bizarre practice seen first-hand by Chris Matyszczyk, creator of CNET’s “Technically Incorrect” blog.
On June 1, 2013, a lovely warm day in the San Francisco area, Matyszczyk and a friend went to Sam’s Chowder House on Half Moon Bay to grab a bite. As he headed inside, he saw a notice taped to the front door of the restaurant, one undoubtedly ignored or unseen by many of the hundreds of patrons. In the fine print, it declared that by simply entering Sam’s, you’d given Facebook the rights to put you under photographic and audio surveillance while inside (regardless of whether you agreed with, or even read the notice). Furthermore, Facebook claimed it could “use such photographs and sound recordings throughout the universe, for any purpose whatsoever, in perpetuity, and all such photographs and sound recordings [would] be the sole property of Facebook.”
Matyszczyk’s reaction to his comely companion? “Oh, look. That’s the company that believes it owns all your personal information. Now it owns your dinner, too.”
Not trusting Facebook, Matyszczyk beat a hasty retreat from the Chowder House after a brief foray inside, but as a tech blogger, he couldn’t pass up the opportunity to reach out to Facebook’s PR department, which he had generally found responsive in the past, to learn whether he could get copies of any photos or audio of himself. Alas, whether Facebook is beaming images of Matyszczyk towards first contact with an alien race is something Matyszczyk may never know, as Facebook didn’t respond.
Facebook’s surveillance doesn’t stop at chronicling and correlating as much as it can about your online activities or restaurant behavior. Strolling through a grocery store with tampons, a 1.75 liter of cheap vodka, and 40 packets of ramen noodles in your cart? Bought a new car, but didn’t want to tell anybody? Doesn’t matter. Facebook now knows, and it will sell the info to all comers. Some little-noticed additions to Facebook’s partner organizations are Acxiom, Datalogix, and Epsilon—data-mining companies that gather and sell information from DMV records and supermarket loyalty-card programs.
Added to what Facebook already knows about users, the level of intrusiveness implied by Facebook’s partnership with (virtually unregulated) data brokers is truly unprecedented.
Just how much information has Facebook collected on us? For Americans, that is a question that will almost certainly remain unanswered, as the US has no comprehensive online privacy laws that would force Facebook to tell us. That is no accident.
According to Rainey Reitman of the Electronic Frontier Foundation, “They say that Facebook sent in an army of lawyers so that the final privacy legislation that emerged in 2011 (from Congress) was watered down significantly, in a way that wouldn’t affect Facebook’s business model.”
Europe, however, does have real online privacy protection, and in the documentary Terms and Conditions May Apply, Austrian college student Max Schrems tells just how much information Facebook had accumulated on him. After repeated requests under a law that compels companies to share what information they have collected on individuals, Facebook finally coughed up its data set on Schrems, who had posted roughly once a week for three years.
A partial printing of the PDF file on Schrems, when finally pried out of Facebook’s hands, ran to 1,222 pages.
For those who despair at such a figure, it is heartening to know that there are groups battling to limit Facebook’s privacy depredations, force it to reverse deceptive privacy policies, and make it abide by court rulings on past violations. One of these groups, the Washington D.C.-based Electronic Privacy Information Center (EPIC), tracks everything from drones, to voter privacy laws, to the NSA, to Google Street View, to Facebook.
On September 5, 2013, EPIC and a coalition of privacy and consumer protection organizations sent the Federal Trade Commission a letter of complaint explaining why Facebook’s continual changes to its privacy policies, including another proposed round of changes, would be a horrible idea for consumers. For a legal complaint, the letter was refreshingly free of jargon and made points you didn’t need a black robe and powdered wig to understand.
The 2013 guidelines would also “dramatically expand the use of personal information for advertising purposes…. It requires Alice in Wonderland logic to see this as anything but a major setback for the privacy rights of Facebook users.”
Finally, the letter catalogued just a few of the steps Facebook has taken to stifle dissent and keep users in the dark about the proposed changes.
“On November 21, 2012, Facebook revised its governing documents to prevent users from voting on proposed changes. In 2010 FB shut down all of the privacy groups on Facebook, including ‘FB users against new TOS (Terms of Service),’ which had more than 150,000 members. And Facebook subsequently revised its governing documents to prevent the use of the company’s name in any Facebook group, including groups that were formed to protest Facebook’s business practices.”
David Jacobs, EPIC’s Consumer Protection Counsel, told me that even when EPIC and others win fights with Facebook, Facebook undermines the terms of settlements it has already agreed to.
“One example is Lane v. Facebook, a class-action privacy suit about Facebook’s Beacon program, which broadcasts your shopping activity to your friends. Facebook lost the suit and had to shut Beacon down. The settlement gave money to a new ‘privacy’ organization that was supposed to educate users on privacy, and which controlled how the funds of the class action settlement were spent. Facebook got one of their employees on the (three-member) board of that organization. Can a privacy organization with a Facebook employee on the board really rein in Facebook?”
If Facebook’s new terms of service, introduced in January 2015, are any guide, the fears EPIC expressed in their 2013 letter to the FTC were prophetic.
In March 2015, the media studies and legal departments of the Belgian university KU Leuven collaborated on an extensive analysis of Facebook’s revised “Statement of Rights and Responsibilities.” The study, “From Social Media Service to Advertising Network: A Critical Analysis of Facebook’s Revised Policies and Terms,” was part of an ongoing program examining online privacy problems (using the whimsically on-the-nose URL www.spion.me).
The study opened with the issue of consent with regard to how your data is used and what kind of ads you see, concluding, “To be valid, consent must be ‘freely given’, ‘specific’, ‘informed’ and ‘unambiguous.’”
Despite being written with the kind of understated language beloved by lawyers and academics, the study notes it is “highly questionable” whether Facebook’s updated policies meet any of those consent requirements.
It found no substantial privacy changes or improvements in the 2015 policies, and noted the terms continue to be in violation of European privacy laws (including those issues raised by Norway’s Ombudsman that Facebook promised to address, but never did).
Facebook continues to combine data from data brokers, but now adds in photos from Instagram, and info from the text-messaging application WhatsApp—both now owned by Facebook—while providing inadequate opt-out choices. Incidentally, Instagram and WhatsApp have also faced legal challenges for grabbing user data without consent.
KU Leuven did note that Facebook has become a bit more transparent in explaining that user-generated content (including your name and face) will be used by Facebook for advertising purposes. But users are still not allowed to control—or even know—how, where, or when.
The study also commented that Facebook increasingly collects location data from a variety of sources, including your smartphone, without sufficient means to restrict its collection or what it can be used for.
As the documentary Terms and Conditions May Apply pointed out, quite aside from Facebook’s lawyers and lobbyists fighting tooth and nail to defeat privacy legislation, another reason Uncle Sam may be loath to rein in Facebook’s rampant assault on privacy is the third-party doctrine, yet another end-run around Fourth Amendment protections against unreasonable search and seizure.
Seems the Feds don’t technically have to get a warrant to access your personal information if Facebook collects it for them. When asked about the third-party doctrine, Professor Alvaro Bedoya, Executive Director of Georgetown’s Center on Privacy and Technology, wrote in an email, “Keep in mind that the third-party doctrine is a baseline constitutional doctrine that can be expanded upon by statute. Here the relevant statute would be the Stored Communications Act (SCA). In general…most companies have demanded warrants before surrendering the content of communications to law enforcement—and law enforcement has largely acquiesced.” In other words, companies have construed the SCA to mean they can require a warrant before they turn over your personal information.
But whether they are actually doing so is an open question for Facebook users, because the SCA (expanded upon by the Patriot Act) allows the FBI to issue “national security letters”—demands for personal information from companies that gather it, like Facebook. These include a gag order, forcing the company to remain silent about the info grab. The FBI’s own inspector general has found widespread abuse of national security letters, including to obtain information that had little or no bearing on national security and which the FBI has then shared with state and local law enforcement agencies that don’t work on national security investigations.
Even putting aside the FBI’s tendency to be overly grabby of personal information, of which Facebook has incredible amounts, Facebook has never demonstrated it is a worthy custodian of that information.
Consider Facebook’s routine violation of ethical research guidelines highlighted by the 2014 revelations that Facebook has been carrying out sociological and psychological research on users for years, without telling users they had become experimental subjects.
A team of Facebook data scientists, dubbed “trust engineers,” working hand in hand with outside psychologists and neuroscientists, began tinkering with words, phrases, and posts to see how those tweaks changed behavior. According to Facebook data scientist Stan Farrel, “Any given person is currently probably involved in ten different experiments.” One experiment involved manipulating news feeds by changing the balance of positive versus negative stories displayed, to gauge users’ emotional responses.
More disturbingly, according to Kate Crawford, a Principal Researcher at Microsoft Research and Visiting Professor at MIT, was “…back in 2010 where they did a study looking at whether they could increase voter turnout. They had this quite simple design they came up with, a little box that would pop up to show you where your nearest voting booth was and then they said, in addition to that, when you voted, here is a button you can press that says, ‘I voted,’ and then you’ll also see pictures of six of your friends who voted that day.”
By its own account, Facebook found this experiment made users two percent more likely to click the “I voted” button and estimated the experiment increased turnout in that particular election by 340,000 votes.
After a barrage of criticism about its secret experiments, Facebook publicly stated it would revise its experimental practices to be more careful (without detailing what “more careful” means). Facebook continues to conduct experiments on users on a regular basis, and has not offered a way to opt out of the experiments.
Fans of old school sci-fi will recall the acronym TANSTAAFL, popularized by Robert Heinlein, meaning “There Ain’t No Such Thing as a Free Lunch.”
The reason Facebook is “free” for users is that users are the product Facebook is selling. Quite helpfully, as a Facebook user, you have been convinced to participate in your own packaging by populating your profile with an ongoing stream of personal information, all of which has hard cash value for Facebook.
Users who understand that Facebook collects massive amounts of information on them, holds onto it forever, and sells it all over and over and over again, can make an informed decision as to whether that is a worthwhile exchange for any purported benefits.
Based on historic trends, it is hard to believe Facebook will ever curtail its systemic data looting and privacy violation unless compelled to by law. Even such compulsion would only work if backed up by vigorous enforcement, including significant civil and criminal penalties.
Otherwise, it does not seem like any amount of protest will get across to Mark Zuckerberg et al. that what they are doing is dishonest, creepy, and wildly unethical—and yes, quite possibly evil.
In a now-infamous instant messaging exchange between Facebook founder Mark Zuckerberg and a friend when Facebook was first being built, Zuckerberg boasted of having obtained 4,000 email addresses and a bunch of other personal information from Harvard students while building the fledgling social network.
When asked how he’d gotten the info, Zuckerberg joked, “People just submitted it. I don’t know why. They trust me. Dumb fucks.”