The headlines are screaming, the politicians are pontificating: the Cambridge Analytica data breach scandal has turned the spotlight on Big Bad Data.
Through all the huffing and puffing, the public consensus seems to be that Facebook has, by negligence or design, thrown vulnerable users to the wolves.
The deal
We like to think most users of free platforms and apps understand that what they are really engaged in is a barter system: they accept targeted advertising in exchange for no-cost enjoyment.
What some people appear only just to have realised, however, is exactly what they're giving away – app providers don't want your data merely so they can give you functionality, but because your data has real-world value in the digital economy.
The fine print
Enter Cambridge Analytica: a researcher creates a Facebook quiz app that collects not only the data of people who completed the quiz, but that of their friend connections too. The pay-off: the personal data of 87 million Facebook users worldwide.
Cambridge Analytica breached the Facebook API terms when that data was on-sold to political campaign groups. But the initial collection of that data, and Facebook's corresponding disclosure, was authorised by each user.
Just because users made a bad bargain doesn't mean that Facebook broke the law, but when the public wants a bad guy it's easier to blame the big guy than to try to understand what actually went on.
So, does Facebook have anything to be sorry about?
Privacy and Facebook – the legal side
Privacy laws differ from country to country (and state to state, in the US’s case), but the overriding principle is this: consent is key.
Users can adjust Facebook privacy settings to keep certain information private, but the purpose of the platform is to publish and share information. So users must consent to that disclosure for the system to function.
Though Facebook arguably should have done more to bring the scope of the disclosure to the attention of users (which it now does much more rigorously), the what, how, when, and who of data handling was always there in the T&Cs.
And that was exactly Facebook’s response when the story first broke – not our problem (#sorrynotsorry).
The moral of the story
Of course, Cambridge Analytica turned out to be a huge problem for Facebook. The hit its public reputation took, thanks to the #deletefacebook viral campaign, has demonstrated again the awesome flagellating power of social media as the modern arbiter of public opinion.
And so, the moral of the story is this:
- Even if you’re complying with privacy laws and have a published privacy policy, it’s vital to factor in reputational risk when considering data collection, use, and disclosure procedures (and just how far you need to go to bring that activity to the attention of the user).
- Consult a PR expert and a legal expert, make an informed choice about relative risk and industry standards, and have a plan in place for if and when things do go wrong.
ABOUT THE AUTHOR
Amelia Edwards is a lawyer with KHQ Lawyers, an award-winning boutique law firm. Her work focuses on entertainment law, marketing and advertising law, regulatory compliance, IP and brand protection, consumer law, and commercial contracting. Contact Amelia for an obligation-free chat about your legal needs.