Given the ongoing controversies over how Facebook user data has been used and misused by various groups – particularly those with political affiliations – Facebook is coming under increasing pressure to outline exactly how it uses such insights, while others are also calling for the company to be held accountable for the content it hosts, and for how that content can be used to harmful ends.
On the first point, and as reported by TechCrunch, Facebook has this week agreed to amend its terms and conditions to better clarify that “free access to its service is contingent on users’ data being used to profile them to target with ads”.
The change comes in response to mounting pressure from the European Commission, which also oversaw the implementation of the broader data protection laws (GDPR) rolled out last year.
As explained by the EU:
“The new terms detail what services Facebook sells to third parties that are based on the use of their user’s data, how consumers can close their accounts, and under what reasons accounts can be disabled. These developments come after exchanges which were aimed at obtaining full disclosure of Facebook’s business model, and communicating that in plain language to users.”
And while this change is specifically focused on Europe, and compliance with European laws, Facebook has said that the amended terms and conditions will be applied globally as part of the company’s broader transparency efforts.
What the exact wording of the document will now be is not clear, but the EU is touting this as a major win for consumers, better enabling them to make an informed decision about Facebook usage based on the personal data they’ll have to provide in exchange.
Will it make much of a difference?
That depends – do you read the terms and conditions in full before you click that ‘I agree’ box at the bottom?
In practical terms, it likely won’t have a huge impact, but it may provide more legal recourse for violations, while also moving in line with Facebook CEO Mark Zuckerberg’s push for more input from government regulators on what’s acceptable within social networking and data usage.
Along similar lines, both the UK and Australian governments have recently laid the groundwork for new laws which would increase the onus on social platforms to be responsible for the content distributed through their networks.
In Australia, the Federal Government has approved new legislation which would impose significant fines – and even jail time for social platform executives – on platforms that fail to “remove abhorrent violent material expeditiously”. The regulations come in the wake of the Christchurch shooting, in which the shooter live-streamed his actions on Facebook. Under the regulation, Australia’s eSafety Commissioner would be tasked with requesting content takedowns, which would then put the onus on social platforms to act.
But the regulations are flawed – what ‘expeditiously’ means, in practical terms, is unclear, which would largely render the rule unenforceable in most, if not all, cases.