The recent news of Cambridge Analytica’s alleged use of Facebook data should act as a wake-up call to us all. The amount of data in question may seem large, and the number of individuals certainly does; however, when taken in the context of wider data collection it is just the tip of the iceberg. We are leaving increasingly detailed digital footprints, and it’s not just the data we choose to share: every aspect of our digital lives is monitored, recorded, and profiled in excruciating detail. From the websites we visit, to what we buy, to the music we listen to, through to the people we know, it is all used to build an in-depth profile of who we are and what we can be influenced by.
Sadly there is nothing new about this level of profiling; it has been going on for years. What has changed, however, is the context in which it is being deployed. Previously it was the preserve of advertising, trying to persuade us to spend our money on something we didn’t know we wanted or needed. What Cambridge Analytica have done is weaponise those same techniques to influence behaviour, and in particular voting.
The scale and depth of data collection continues to grow as more and more devices pervade every aspect of our lives, from fitness trackers through to our phones and digital home assistants. They are all data collection devices, adding more, and finer-grained, data to the profiles held by commercial organisations. Such organisations are at pains to be perceived as caring about the privacy of their users and protecting their data. However, those same organisations have business models based on selling access to their users. The more accurate the profiles, the more valuable the advertising, and the more profit the organisation makes. We cannot be surprised that profit-driven businesses look to exploit the only asset they have: our data.
Since this is not news to many people, why do we all still engage with the services and organisations that are profiling us? If we look at how these businesses start, they are somewhat akin to a digital drug pusher. They provide their product for free, despite the high financial costs, in the hope of getting sufficient users on board to become fashionable, and ultimately to build a dependence. If a service can become the de facto provider, whether it be a social network, video service, or communication platform, then once a critical mass of users adopt it, not only do they become dependent on it, but those who try to remain outside it face potential real-world social exclusion.
In essence the users become addicted to the service, and will tolerate ever more invasive profiling and ever decreasing privacy. Such services frequently overhaul their privacy settings, in the guise of improving things for the user, but in fact forcing users to incur a time cost should they wish to maintain their current level of privacy. The longer a user is on the platform, the harder it is for them to leave, as their dependence on the service only grows. An individual user is almost powerless against such a powerful organisation. Where such a power imbalance exists it is incumbent on government to legislate and regulate to protect consumers. Sadly, that has not happened.
Even if we were able to wean ourselves off our dependence on free services, in which we are the real product, the problem of data collection would not be resolved. The metadata collected just through our online existence and use of modern technology would still present a significant risk. Nor is the risk purely commercial: governments are increasingly collecting and linking vast datasets, often of data gathered via compulsory instruments. That is without even considering the growing quantities of surveillance data being collected by security services.
We need to completely re-evaluate how we view and treat our data. All too often we are persuaded to judge whether data should be collected based on the intent of the collecting organisation, when we should judge it based on the data’s potential uses, both good and bad. To do otherwise places complete trust in the collecting organisation to perpetually look after your data, to never sell it to anyone you do not want it sold to, and to never use it in a way you do not want, or may never even have imagined. No organisation or government has earned, or deserves, that level of trust.
Data about us should not be a commodity. It is our digital shadow, an electronic manifestation of us, so intrinsically linked to us that trading in it should be as prohibited as trading in our physical selves. It should not be up to a commercial organisation, or a government, to decide how our data is used, or who has access to it. That is a personal decision for each of us, one that is dynamic and must be revocable.
Much like a new digital service, the protection of privacy requires a critical mass of supporters; without it, there is insufficient pressure on governments to enact change, particularly given their own increasing dependence on data collection. If one good thing can come out of this whole affair, it is the raising of awareness of the problem, and that is why we need to talk about your data.
- The Guardian, “The Cambridge Analytica Files.” https://www.theguardian.com/news/series/cambridge-analytica-files, Mar-2018.