By Pavlina Pavlova
From Cambridge Analytica harvesting the raw data of millions of Facebook users without their consent to The New York Times's recent investigative reports on geolocation tracking and face recognition apps, we have been bombarded with privacy leaks beyond our wildest imagination. Each new revelation points to the same conclusion – in a world where personal data is collected and used to profile us for both commercial and security purposes, it is becoming increasingly difficult to keep anything private. As data has surpassed oil as the most valuable resource, are we missing an opportunity to claim ownership of it, or should we give up on the idea of anonymity altogether? Is maintaining privacy inevitably a losing game?
In January 2020, the Times Privacy Project published an investigation into the smartphone tracking industry, showing that a location data company used phone apps with embedded software to collect movements in selected locations. The result was ceaseless, precise tracking of individual people. Several news organisations have previously reported on smartphone tracking, but the size of the data set in question makes earlier reports pale in comparison. While the authors ask what these tools might mean in the hands of corporations or governments, a second recently published New York Times story uncovers a start-up licensing a face recognition tool to US law enforcement agencies. Clearview AI is a little-known company deploying technology based on a neural network trained on three billion photos scraped from the internet. Major tech companies have mostly refrained from developing face recognition technologies because of their capacity to radically erode privacy. An early investor in Clearview AI has a different idea, stating that, given the constant increase of information, there is never going to be privacy.
These recent cases speak of an era of surveillance. Yet even Orwell would be surprised that this time around we have been tricked into monitoring ourselves. More surprising still is that these and similar stories cause only a limited and short-lived uproar, as we remain more or less indifferent to personal data collection, which has become the new norm. Our increasing dependence on smart devices, social media and online content means that we are constantly asked to consent to complex conditions, be it cookie permissions or terms of service. Further concerns arise when we are given limited options and little transparency to guide our decision making. This constant “nagging” has left most of us oblivious to keeping our digital steps anonymous. Privacy fatigue leads to a situation where our growing awareness of pressing privacy concerns is not reflected in our online behaviour. We have become numb.
A study published in the Journal of Communication explored this “privacy paradox” of people disclosing personal information despite their privacy concerns. According to the meta-analysis, the phenomenon can be explained by two main behavioural theories – knowledge deficiency and psychological distance. Because we have incomplete evidence about how our information is collected and used, and lack knowledge of protective behaviours, our rationality in decision making is severely compromised. Our choices are further driven by immediate gratification and proximal benefits over abstract and psychologically distant privacy values. This take on the psychology of decision making in cyberspace explains why consumers can express deep concern for the value of privacy yet ignore the same standards when online. The privacy paradox helps us understand why consumers continue using services that have proven to undermine data security.
Keeping our information safe is, to no small extent, our own responsibility. Conventional wisdom says never to publish private information that could be misused or compromising. And we embraced these instructions vehemently, erasing embarrassing high school photos from Facebook. But privacy issues got murkier as the discussion moved from what we readily share on the internet to our data being harvested and traded without our knowledge. Tracing even a simple digital activity shows how complex the flow of our data has become. With a single online purchase of a book, customers produce significant data sets about themselves – search data, transaction history, the time spent browsing during the purchase as recorded by the online platform, and tracking cookies. Personal data is collected and used to profile us. Companies justify this business model with a better shopping experience, and many people are happy to share data in exchange for free services or the personalisation of existing ones. But as the new investigative reports show, our data is used for far more than targeted advertising. Misuse of geolocation and facial recognition, in particular, can endanger our safety, opening vast possibilities of surveillance by third parties.
Ethical and security questions always arise about possible limitations on the use of technology. Users themselves often feel torn between maintaining privacy rights and reinforcing public safety. Technology provides for better policing, border monitoring, intelligence-gathering and victim identification, to name a few, and citizens are, in theory, often willing to share data for increased security. On the other hand, the relationship between technology companies and security agencies, which has grown closer in response to terrorist threats and attacks, is an uneasy one. Increased surveillance can be used to silence criticism, restrict free assembly or dig into private lives by anyone with access to it. Just as the song popularised by Nirvana and referred to in the title of this article exists in over one hundred and fifty variations, the exact knowledge of where one slept last night can be used in strikingly different ways. Having this information within reach of a few clicks brings us no peace.
Anonymity is like fresh air in our lungs, allowing us to try new things and express ideas without fear of being judged. To some extent, urban living allowed for greater anonymity, and that gain is now being reversed in the ‘online village’. In the process of digitalising our lives, we have become walking data generators. Popular initiatives have struck back, calling for ownership of our data. The reasoning is that once data is owned on an individual basis, we regain control over our digital lives: we could decide whether to sell our data or monetise our consent. Such actions would disrupt the business models of tech giants built on selling our data, while bringing each of us an approximate dividend of several tens of euros per year. But they would not solve our privacy concerns entirely. So while being informed, educated and provided with transparent options is highly important, collective wellbeing cannot rely upon individual choices alone.
If we are going to reassign cultural value to anonymity, regaining our privacy will take a lot more than independent action, greater digital literacy, cyber hygiene or data encryption apps. If privacy can be categorised as a public interest, governments have an active obligation to treat it as a right that people are entitled to and to ensure that technologies do not violate existing human rights standards. But even when well-intended, governments are often poorly prepared to address developments that are largely in the hands of overseas monopolies. The result is that US tech platforms and their way of doing business end up in court. A US court has reviewed Facebook’s photo-tagging feature based on face recognition technology. The European Court of Justice has taken several steps to address growing privacy and security risks through proceedings involving Google and Facebook, including a lengthy case scrutinising the transfer of personal data outside the EU. On the regulatory side, the General Data Protection Regulation (GDPR) prompted an important debate on data use and transfer, putting in place the most robust data protection rules so far.
This is not the first technological leap forward, but what is different is that the newest tools, brought largely by privately-owned actors, are changing our daily reality with ubiquity, high speed and global scale. Neither their positive nor their negative implications are fully understood or easily managed. Advanced developments make it more challenging to assess and adequately oversee the development, deployment and use of technology, whether through existing legal and human rights frameworks, new guidelines or regulatory mechanisms. As development processes and methodologies are not necessarily transparent, relevant actors may face additional challenges in gathering and evaluating the information that technology oversight requires. While taking appropriate regulatory and legal steps is not an easy task, the growing number of uncovered cases of data mishandling draws a thick line under business as usual. ‘Move fast and break things’ is not a catchphrase fit for this decade. We need a human rights-centred approach to technology and innovation, one which places the inherent dignity of the individual at its core.
As our data is continuously collected, mined and traded, privacy cannot remain a mere abstract idea. We need to understand the depth of privacy risks and impacts and take adequate precautions. We need to create an ecosystem that keeps the focus on people: to ensure that they are empowered and protected rather than put at risk. And we need to ensure that those handling our data are held accountable for its use and protection. Governments and policymakers should eliminate information asymmetries, provide legal and regulatory protection and attract more experts to push for technology oversight. ‘Trust but verify’ is the only viable approach to ensuring information reliability and increased transparency. If we want to win the privacy game, we must change its rules.