Story Highlights
- The value of large internet companies is driven by the continuous production and collection of data supplied by individual users – rendering self-regulation ineffective.
- At the same time, laws are limited to certain kinds of personal data and rely upon the ‘notice and consent’ system to shift the burden of data protection and accountability from the platform to the user.
- Given the power imbalance, it is necessary to invert this model and place the responsibility on the platform to enable privacy by design and by default – lest another privacy episode like Facebook's occur.
Last month, a whistleblower from the data analytics firm Cambridge Analytica revealed that the data of 50 million Facebook users was “harvested” in order to understand their political leanings and influence their political decisions.
The Cambridge Analytica leaks are possibly the most significant evidence since Edward Snowden’s leaked NSA files that the internet is being manipulated for use as a tool of mass surveillance and control. The NSA files kickstarted a conversation about the bounds of government surveillance. The Cambridge Analytica scoop has spurred a global conversation about the perils of social media and big data.
Who should be held to account for such deliberate attempts to subvert democracies? Public reactions have ranged from the #deletefacebook trend to chastising the Trump and Brexit campaigns and investigating Cambridge Analytica itself for breaches of the law. However, these reactions ignore a more fundamental issue: the very architecture of the internet – and of social media in particular – which has come to dominate our interconnected world and whose logic is at the root of the problem.
The Cambridge Analytica episode throws these issues into sharp relief; it gives the public – and regulators in particular – an opportunity to begin to address and scale back the massive damage that this model causes.
Privacy and Surveillance Capitalism
Harvard professor Shoshana Zuboff coined the term ‘surveillance capitalism’ to describe a mode of capitalist production in which the continuous production and collection of data supplied by individual users or ‘customers’ becomes the firm’s method of value creation. This logic underpins the mode of production employed by most of the largest IT firms in the world – platforms like Facebook, Google, Amazon, Instagram, Twitter and Airbnb – which demand the monitoring, extraction and analysis of user data in exchange for services.
The data produced by users can be monetised in myriad ways – sold to advertisers, used to target new products, or used to improve the service itself. As the amount of data collected correlates directly with a company’s bottom line, maximising the data that can be extracted from users is the unsurprising norm among these firms. The surveillance is rarely coerced – the model promises to repay users for the extraction and analysis of their data through better services.
The privacy implications of such a system are enormous – people, ostensibly the ‘beneficiaries’ of online services, are viewed as data sets to be bought and sold with little regard for their choices or dignity. At the same time, the mechanics of the systems, and the identities of those using this data, are kept secret from the data subjects themselves. This asymmetry means that firms such as Facebook or Google – and anyone able to pay the price for their data – exert enormous power over data subjects and their choices, which can range from influencing their consumption patterns to manipulating their voting decisions.
On the relatively few occasions when there is a ‘breach’ or unauthorised infraction of a company’s database, or when a whistleblower momentarily reveals the inner workings of the machine, the public reaction is a mixture of incredulity and disappointment. Facebook, for its part, has apologised for enabling such a system and promised to do better – just as it did after past incidents in 2007 (tracking user activity on third-party websites), 2011 (charged by a consumer regulator in the US for making private data public), 2013 (collecting email addresses and phone numbers of third parties), 2014 (experiments with mood manipulation) and 2015 (unauthorised data collection by third-party apps), to cite just a few.
In light of these, Mark Zuckerberg’s latest apology rings hollow.
It was the logic of surveillance capitalism that lay behind the choice to allow a third-party application built for the deliberate purpose of harvesting user data through innocuous means. It was behind the choice not to adequately safeguard how and why this data was being extracted, and it was behind the failure to mitigate the fallout of the ‘breach’ until it became widespread public knowledge. In fact, in the wake of the scandal, even more egregious breaches of trust have been uncovered – such as the revelation that Facebook logs the call records of its users, ostensibly with their ‘consent’.
The extent of Facebook’s role in the Cambridge Analytica files is still being explored. Facebook itself has denied that its practices breached its internal guidelines, or that it broke any laws when it allowed its users’ data to be collected by Cambridge Analytica’s application. While the truth of these statements is up for investigation, what can’t be denied is that legal systems around the world have been complicit in enabling such surveillance and privacy breaches to take place.
It’s clear that there is no incentive for self-regulation among the platforms themselves – adherence to ‘internal guidelines’ and weak promises will always be circumscribed by their dependence on the model of surveillance capitalism. It is the duty of lawmakers and regulators to ensure that an adequate data protection and privacy framework exists to address these problems.
Privacy by Design: Lessons for India
Unfortunately, lawmakers and legal systems have not been up to the task of restraining the surveillance mechanisms employed by online firms.
Take Indian law as an example. A minimalist data protection framework can be found under Section 43A of the Information Technology Act, which requires certain data collectors to comply with basic responsibilities. One of these responsibilities, found under the Reasonable Security Practices Rules, is that the data collector must obtain the consent of the data subject prior to the collection, use and sharing of their sensitive personal information.
There are several issues with this legal framework that make it inadequate to address incidents like Cambridge Analytica – its scope is limited to certain kinds of personal data, and its guidance on data protection is broad and difficult to enforce. More fundamentally, however, the law relies upon the ‘notice and consent’ system to shift the burden of data protection and accountability from the platform to the user.
While informed consent is an important and enabling principle, the consent requirement under the IT Act (as with similar laws the world over) has, for the most part, been warped into a farce. Most forms used by online platforms to obtain user consent are dense, complex and nearly impossible to understand, requiring users to agree to the wholesale collection, sharing and use of their data without adequate context as to what such consent implies for their privacy. Moreover, granting consent has become a precondition to availing of most of these services, presenting users with an ultimatum: either opt into a system whose workings they are not fully aware of, or be denied the use of these information utilities. In most cases, users are encouraged to ignore the privacy implications and opt into the data collection models employed by these firms.
Once the user grants consent (which may sometimes be implied merely by use of the service itself), the platform is deemed to have cleared the legal hurdle that legitimises the collection of data, turning informed consent into a meaningless formality.
Facebook’s public statements in response to outrage over its massive collection of personal data make it apparent that the consent requirement is being used merely to shield it from legal liability. When users recently found that their calls had been logged by the Facebook app, for example, Facebook’s response was:
“Contact uploading is optional. People are expressly asked if they want to give permission to upload their contacts from their phone – it’s explained right there in the apps when you get started. People can delete previously uploaded information at any time and can find all the information available to them in their account and activity log from our Download Your Information tool.”
It is likely that Facebook fulfilled the formal legal requirement of obtaining consent from users before their data was shared with firms like Cambridge Analytica. Under Indian law, the implication is that Facebook would not be held liable for such actions.
Given the power imbalance between the platform and the user, it is necessary to invert this model and place the responsibility on the platform to enable privacy by design and by default. A popular analytical framework by Harvard Law professor Lawrence Lessig describes how code is replacing law as the primary regulator of social behaviour online. Within the architecture of online systems, our actions are defined and constrained primarily by the decisions embedded in software and algorithms. These algorithms determine what data is collected, the uses to which it may be put, and its permissible limits and constraints. To use Facebook, Google or Twitter is to automatically subscribe to the rules embedded within their code.
Where the design of the code and the architecture of online spaces are left entirely to the firms that collect the data, it is inevitable that the collection and use of data will be the norm and privacy an afterthought. It’s imperative, therefore, that regulators shift this balance in favour of privacy. One way they can constrain the mass collection of data is to require firms to implement privacy by design and by default – to regulate the code that regulates us.
Privacy by design implies that privacy must be at the core of the design of systems, processes and architectures in the online world, instead of being viewed as a formal burden to be discharged through clickwrap agreements. It can be thought of as encompassing seven fundamental principles, outlined here:
– Privacy must be preventive and proactive, not reactive and remedial.
– Privacy must be the default setting in online systems – not an opt-in requiring the user’s intervention (a minimal sketch of this principle follows the list).
– Privacy must be seen as an integral component of any tool, not as an add-on bolted on after the fact.
– Privacy needs must not be viewed as zero-sum with other important objectives – all legitimate needs must be accommodated without presenting false dichotomies such as privacy vs security.
– Privacy must be embedded in each component over the lifecycle of the data.
– The mechanisms of privacy and data protection employed should be well documented and transparently presented to users.
– Above all, technology and processes must keep the interests of the users at the forefront.
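To make the second principle – privacy by default – concrete, here is a minimal sketch in Python of what it could look like in code. The setting names and the `Account` class are hypothetical, invented purely for illustration: the point is that every data-sharing flag starts in its most protective state, and sharing happens only after an explicit, per-setting opt-in.

```python
from dataclasses import dataclass, field

# Hypothetical privacy settings for a new account. Privacy by default
# means every data-sharing flag starts in its most protective state:
# nothing is shared until the user explicitly opts in.
@dataclass
class PrivacySettings:
    share_with_advertisers: bool = False       # default: off
    share_with_third_party_apps: bool = False  # default: off
    upload_contacts: bool = False              # default: off
    log_call_history: bool = False             # default: off

@dataclass
class Account:
    user_id: str
    privacy: PrivacySettings = field(default_factory=PrivacySettings)

    def opt_in(self, setting: str) -> None:
        """Record an explicit, revocable opt-in for a single setting."""
        if not hasattr(self.privacy, setting):
            raise ValueError(f"unknown setting: {setting}")
        setattr(self.privacy, setting, True)

# A new account shares nothing; any sharing requires a deliberate action.
account = Account(user_id="alice")
assert not account.privacy.share_with_advertisers
account.opt_in("upload_contacts")  # explicit, per-setting consent
```

Contrast this with the prevailing model, in which the equivalent flags default to on and the burden of finding and disabling them falls on the user.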
Regulators around the world have embraced the model of privacy by design to shape how laws and technology interact – the European GDPR, under Article 25, requires that technological tools be designed with data minimisation and pseudonymisation in mind. Similarly, the UK’s Information Commissioner’s Office prescribes Privacy Impact Assessments, a formal requirement by which firms must first assess the privacy implications of their technologies and processes and seek to limit them. Some companies, too, have embraced privacy as a design principle – Mozilla, the developer of the Firefox browser, places user interests, and privacy in particular, at the heart of its products. Similarly, DuckDuckGo, a search engine, is built as an alternative to systems that rely on mass data collection and tracking.
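To illustrate what Article 25’s twin requirements can mean in engineering practice, the sketch below – a simplified, hypothetical example, not the GDPR’s prescribed method – pseudonymises a direct identifier with a keyed hash and strips the fields the analysis does not need, so that analysts never handle raw identities.

```python
import hmac
import hashlib

# Hypothetical secret key, held separately from the analytics dataset
# (e.g. in a key-management service). Without it, pseudonyms cannot be
# linked back to the original identifiers.
SECRET_KEY = b"replace-with-a-key-from-secure-storage"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (email, phone) with a keyed pseudonym.

    HMAC-SHA256 is deterministic, so the same user always maps to the
    same pseudonym (records remain linkable for analysis), but the
    mapping cannot be reversed without the key.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Data minimisation: keep only the fields the analysis actually needs,
# and pseudonymise the identifier before the record leaves the source.
record = {"email": "user@example.com", "age_bracket": "25-34", "city": "Pune"}
minimised = {
    "user_pseudonym": pseudonymise(record["email"]),
    "age_bracket": record["age_bracket"],
}
print(minimised)
```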
As India embarks on an ambitious project to overhaul its minimalist data protection regulation, the privacy by design framework should be central to its prescriptions and requirements for data controllers. This can be enabled by requiring privacy-centric standards for technologies, mandating privacy impact assessments, requiring an accountability framework across the lifecycle of the data, and placing the burden on technologies and data collectors to ensure that privacy is at the core of their functioning.
Mere bluster, like Union Minister Ravi Shankar Prasad’s threats to subpoena Mark Zuckerberg, is unlikely to solve anything – it’s time for our lawmakers to walk the talk and move from surveillance to privacy by design.
Visuals: Rajesh Subramanian
(The views expressed in this column are those of the author.)
Disclosure: FactorDaily is owned by SourceCode Media, which counts Accel Partners, Blume Ventures and Vijay Shekhar Sharma among its investors. Accel Partners is an early investor in Flipkart. Vijay Shekhar Sharma is the founder of Paytm. None of FactorDaily’s investors have any influence on its reporting about India’s technology and startup ecosystem.