Do you know how digitally collected information can uncover things about you that you would rather keep private? We’re already living in the age of Big Data, and are on the very cusp of the age of the Internet of Things – will this lead to complete and ubiquitous surveillance?
These are the questions digital rights activist Wolfie Christl attempted to answer with a study on global trends in corporate surveillance, the results of which he shared in a presentation at the re:publica conference in Berlin this week:
“It’s not only governments who are spying on us. Today we are constantly getting our lives categorized and rated by a global network of online platforms, ad servers, app developers, analytics companies, data brokers and many more, whose business models are based on the exploitation of our personal data,” says Christl.
This data is collected largely without our knowledge and, for many, without any awareness of how it can be used and why data brokers consider it valuable.
In the presentation, he offers a quick overview of the biggest players in the field, and explains where they get the data from. He reveals partnerships between some of the aforementioned companies, including deals they make with Internet companies that have access to a lot of users’ personal data: Google, Facebook, etc.
These are not secret partnerships, but they might as well be, since few people will take the trouble to discover these connections and think about what they mean.
By correlating data about our online activities with our personal data and the data available about our offline behavior – either provided by third-party data brokers or from their own customer database – companies get a pretty accurate idea of what a specific user does, wants, and needs (or might need).
Businesses use this data for marketing, customer relationship management, and risk management. “Most consumers have no idea that their everyday life interactions are affected by how they are categorized and rated by companies,” Christl notes.
Price discrimination based on user location, income, and similar attributes is already happening. You might or might not get a loan or a job based on who your Facebook friends are – and most people are not aware of this.
Christl also explains how companies use this collected data to create incentives that spur users to change their behavior – essentially manipulating them.
As our columnist Raj Samani recently noted, users usually don’t value their data much.
“A lot of us voluntarily declare personal data bankruptcy,” he pointed out, and outlined the better options users have: withdraw from the digital economy altogether, which is increasingly difficult these days, or be more cautious and hard-nosed about data-sharing.
With this presentation, Christl will lift the veil from many people’s eyes. Unfortunately, for most of us it’s already too late – we’ve shared too much.
It seems to me that future users, who might know better, are not safe either: we can all be manipulated to part with our data, one way or another. And even if we explicitly choose not to, there will always be covert data collecting efforts that we know nothing about.