
Two Years Since Cambridge Analytica: What Has Changed?

Partnering with Carnegie Mellon University (CMU), Facebook recently ran a survey to help public health officials and researchers predict the spread of COVID-19, and a few weeks ago it published the results, helping CMU generate detailed, interactive maps of the disease’s spread. The world’s largest social network polled users across the U.S. about whether they were experiencing symptoms associated with the novel coronavirus, all in the name of the public good. That sounds wonderful prima facie, but it raises serious privacy concerns about Mark Zuckerberg’s empire.

The survey was conducted under the auspices of Facebook’s Data for Good program, which claims to respect data privacy in all of its initiatives, yet offers no transparency into how that data is handled and no accountability when it’s mishandled. People who feel sick and want to help researchers study this pandemic are forced to trust Facebook’s sprawling data-mining apparatus to treat their sensitive information carefully and with integrity.

An endless carousel of surveillance capitalism

It’s been two years since the Cambridge Analytica scandal stirred global controversy, and what has changed? Facebook is worth about 25% more than it was at the height of the scandal, even in the wake of a pandemic-fueled “ad-pocalypse,” and it’s essentially still business as usual for the social giant. Users are still surveilled to the gills and data-mined in order to serve them the most targeted, most valuable ads possible. Ever hear a friend mention how they received an extremely relevant Instagram ad? We have, too.

Cambridge Analytica was a British political consulting firm that purchased data harvested from hundreds of thousands of Facebook users under the guise of “academic purposes.” The harvesting app then went on to collect data on those users’ friend networks, amassing a trove of information on roughly 50 million people. This was only possible because Facebook left the door open: its policy banned this kind of data collection unless it was used to improve the user experience within an app, a toothless restriction that went effectively unenforced. In the words of data consultant-turned-whistleblower Christopher Wylie, “We exploited Facebook to harvest millions of people’s profiles, and built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.”

Cambridge Analytica went on to use its ill-gotten gains to support the campaigns of Sen. Ted Cruz and Donald Trump in 2016, but it was only exposed after two and a half years of reporting by The Guardian, and after Wylie risked his career and reputation to reveal the malfeasance of both his former employer and Facebook.

Since the scandal broke, Facebook has made some tweaks and banned misbehaving apps, but the overall framework is still in place. Surveillance capitalism remains the business model of choice in Menlo Park, and it’s only expanding as Facebook introduces more products, such as Facebook Portal, and moves to unify the Facebook Messenger, Instagram, and WhatsApp infrastructures.

An Unending Pandemic of Privacy Violations

On the heels of the Cambridge Analytica scandal, Facebook experienced what is suspected to be its largest data breach ever, with nearly 50 million accounts compromised by a theft of the “access tokens” that normally keep people logged into the site. Then, nearly a year ago, Facebook was busted for storing hundreds of millions of user passwords in plain text, meaning anyone who got hold of the right internal files could read those passwords outright.
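
For context, the industry-standard defense that plaintext logging bypasses is to store only a salted, one-way hash of each password. Here’s a minimal sketch of that approach using Python’s standard library; the function names and iteration count are illustrative, not any company’s actual implementation:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted hash; the server stores (salt, digest), never the password."""
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash from a login attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Stored this way, a leaked log or database exposes only salts and digests, which an attacker can’t turn back into passwords without a slow, per-user brute-force effort.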

Then there’s Zoom, the enterprise videoconferencing company that entered the limelight once most of the world went into lockdown during this pandemic. While the surge in popularity worked wonders for Zoom’s client list, it also exposed the company’s long list of privacy and security flaws. The key failings include relying on an outdated encryption scheme, issuing reusable meeting codes that enable “zoombombing,” and, most confusingly, sending data from its iOS app to Facebook, even for users without Facebook accounts. The backlash even led the FBI to warn schools of the risks of using Zoom for classes.
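
To see why the encryption choice matters: researchers at Citizen Lab reported that Zoom was using AES in ECB mode, which encrypts identical plaintext blocks to identical ciphertext blocks and therefore leaks patterns to eavesdroppers. A minimal sketch of the problem, using the third-party Python cryptography package (the key and message here are purely illustrative):

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)            # AES-128 key
block = b"ATTACK AT DAWN!!"     # exactly one 16-byte AES block
plaintext = block * 2           # the same block repeated twice

# ECB encrypts each block independently, so repeats survive encryption.
ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
ct = ecb.update(plaintext) + ecb.finalize()
print(ct[:16] == ct[16:])       # True: an eavesdropper can see the repetition

# CBC chains each block to the previous one, hiding the repetition.
iv = os.urandom(16)
cbc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ct = cbc.update(plaintext) + cbc.finalize()
print(ct[:16] == ct[16:])       # False: identical blocks no longer match
```

Repetition in video and audio streams is exactly the kind of structure ECB fails to hide, which is why the mode has long been considered unfit for this use.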

The other elephant in the room in this discussion is Google, whose sprawling digital dominion includes the world’s most popular search engine (Google Search), operating system (Android), and video-streaming platform (YouTube), and so, so much more (Loon, Fiber, Pixel, Verily; the list goes on).

With great dominance comes great malfeasance

Google was busted a year and a half ago by the Associated Press for tracking Android device locations even when users had switched the Location History setting off. YouTube was handed a record $170 million fine last year for violating the Children’s Online Privacy Protection Act (COPPA) by collecting the personal data of countless children without first obtaining parental consent, earning millions of ad dollars in the process. Google has a long history of treating user privacy as an afterthought, a history that has helped its parent company Alphabet reach a trillion-dollar valuation.

One of the largest data breaches of all time took place in 2017, when credit reporting company Equifax neglected to patch its systems or heed security advice from outside consultants, exposing sensitive data belonging to 147 million people. Poor data governance by a profit-driven corporation that assembles financial data on hundreds of millions of people and businesses, and sells it, most often without anyone’s consent or knowledge, created undue financial risk and worry for nearly half of the American population.

All of these examples show how little we know about the enormous privacy risks operating in plain sight every day. Cambridge Analytica wasn’t the only group to abuse Facebook’s lax privacy controls; it was merely one of the few exposed on the world stage. Who knows how many other data breaches have occurred without the public, or even the breached stakeholders, ever learning of them? Despite record-smashing fines, these companies keep building on their data-mongering business models with little to no concern for privacy, and it all continues unabated because no one is forcing them to do otherwise.

We’re Holding Out for a Hero

Some might argue that if people cared about their privacy, they’d simply delete their profiles and move on from these invasive systems. But it’s much more complicated than that.

Facebook and Google have spent the last 15 years building ecosystems that are addictive and inescapable, and Equifax built a framework that doesn’t even need consent from the people whose lives it governs. These companies, along with others, predicate their businesses on collecting as much personal data as possible. That makes respecting users’ personal data a choice that cuts directly against their bottom lines.

Privacy does matter to people, though. According to Pew Research, more than 80% of Americans feel they aren’t in control of their data, and more than 70% feel that what they do online is being tracked. Furthermore, over 80% believe the potential risks of data collection by companies outweigh the benefits. Public sentiment runs against these data-mining businesses, but government action is the only remedy that can truly address these concerns.

The European Union was a first mover in mass data protection, passing the General Data Protection Regulation (GDPR) in 2016 and bringing it into force two years later, expanding data privacy rights and establishing strict rules for how companies handle personal data.

In the U.S., however, national efforts have been stalled for years. Fortunately, California passed the relatively avant-garde California Consumer Privacy Act (CCPA), which, coupled with GDPR, forced many tech companies to extend the same privacy protections to all of their users. It turns out ensuring data privacy everywhere is a bit cheaper than maintaining different privacy settings for each geography. In the absence of unifying federal privacy legislation, the American people must rely on a patchwork of state laws to force companies to respect their privacy.

Without effective legislation to enforce user privacy, some alternative models have arisen to address the problems surrounding personal data protections. Facebook and Google have each rolled out user research apps that compensate people directly for voluntarily sharing their data. One startup, Ozone AI, offers its users direct, regular cash payments in exchange for volunteering their personal data, such as their Spotify listening and Amazon shopping histories.

Alas, the road to hell is paved with good intentions. Until these gargantuan tech companies are forced to rewrite their business models, their profiteering will drive increasingly abusive invasions of our data privacy. It’s up to governments to protect their people and bring these companies to heel, and it’s up to people everywhere to make their voices heard loud and clear by their representatives. Without sweeping and powerful new rules, the privacy pandemic will only get worse.