This piece was written for the course "Cybersecurity Conflict & Policy."
Not Your “Friend”: Facebook, Cambridge Analytica, and the Limitations of the Individualized Informed Consent Model of Data Privacy
In the age of big data and networked communications, the idea that one’s data can be adequately managed and protected on an individual level, by setting personal privacy preferences, is antiquated and ineffective. Yet this is precisely the dominant privacy model in the consumer space in the United States. Individualized informed consent, in which an individual chooses their own personalized privacy settings on the platforms they use, is based on a supposed understanding of and agreement with how said platforms use this data. But there are two issues with this conception: one, online data is not meaningfully individualized; and two, users are not truly informed about how their data is used. Because neither of these conditions is met, this model of consent fails to provide an adequate level of protection against potential data-related harms. The recent Facebook/Cambridge Analytica scandal perfectly captures how this conception of personal data protection breaks down—though it is by no means the only example—and will be used to illustrate this model’s limitations.
I. Facebook/Cambridge Analytica
In 2014, the independent researcher Aleksandr Kogan created a personality quiz app called “This Is Your Digital Life” through the Facebook developer platform, which 270,000 Facebook users installed. At the time, Facebook’s privacy policies allowed third-party apps to collect data not only from those who used the app but also from their friends. Kogan thus had access to a whole network of data—about 87 million people—which he saved and sold to the (now defunct) political consulting firm Cambridge Analytica, in violation of Facebook’s policies (Ortutay, 2018). Using this data, Cambridge Analytica created “psychographic” profiles of millions of people, which the company used to advise Donald Trump in the 2016 US presidential election, and in numerous political movements globally (Meyer, 2018).
After Kogan had harvested this data, Facebook changed its policies so that third parties could no longer collect users’ friends’ data. The company also learned in 2015 that Kogan had sold the data to Cambridge Analytica. It took down Kogan’s app and required Cambridge Analytica to self-certify that it had deleted the data. The firm claimed to have done so, and Facebook accepted this assurance without any sort of audit. As it turns out, Cambridge Analytica did not actually delete the data, but used it to build its voter-targeting tool (Granville, 2018). This episode reveals both outright violations—of Facebook’s policies by Kogan, and potentially of a 2011 FTC consent decree by Facebook (Vladeck, 2018)—and instances of the data-driven ad-targeting economy working exactly as intended.
II. Individualized Informed Consent Model
Individualized informed consent, often called “notice and choice” or “notice and consent,” is the predominant model of privacy protection in the consumer space in the US. Essentially, in this model, all a company needs to do is provide the user with a “notice” of its privacy policies, or policy changes—often buried in overwhelmingly lengthy and technical language—and by agreeing to use the service, the customer has made a “choice” to consent to its terms (Schwartz and Solove, 2009). Within this basic framework, companies can allow users to choose more specifically the data they wish to share with the public, and sometimes what they share with the company. This is done on an individual level, so that each person chooses the privacy settings of their liking. This is the model Facebook uses for privacy settings, and it’s what most public consumer platforms use as well.
“Notice and choice” is a weakened, market-friendly manifestation of an older set of principles called the Fair Information Practices, developed by the US Department of Health, Education, and Welfare (US DHEW, 1973). It stems partially from the Katz v. United States decision, which set the precedent for the “reasonable expectation of privacy” test—whether an individual should and does expect to be free of intrusion—that has become a hallmark of US privacy law (Wilkins, 1987). It assumes the individual as the locus of control for privacy, putting the onus on users to understand the contours of the information landscape, and absolving companies of responsibility to users outside the context of terms and conditions that the companies themselves set. This can be particularly problematic when companies share data with third parties, who have their own terms and conditions which might be unclear to the user (Barocas and Nissenbaum, 2009). As Paul Schwartz and Daniel Solove (2009) argue, the model is characterized by weak enforcement, a lack of substantive restrictions, a lack of true notice, and a lack of true choice. This paper adopts the term “individualized informed consent” to emphasize the model’s supposedly individual nature, but uses it interchangeably with “notice and choice.”
III. Not Individualized
The individualized informed consent model assumes that choices about personal data privacy can be meaningfully individualized, but in the online information environment this is not the case. Paradoxical as it may sound, this conceptualization ignores the collective aspects of individual privacy—the interdependence of individuals’ personal data, particularly within networks. According to Sarigol et al. (2014), “[i]n an interlinked community, an individual’s privacy is a complex property, where it is in constant mutual relationship with the systemic properties and behavioral patterns of the community at large.” This “location of individuals in contexts and networks,” as danah boyd and Alice Marwick (2014) put it, is afforded by social technologies that facilitate the sharing of information not just about oneself but about others as well. These technologies, enabled and reinforced by big data analytics, encourage the continuous classification of users, offering an algorithmic projection of the individual under the guise of hyper-personalized self-determination (Baruh and Popescu, 2015). The result is that one’s data and identity gain new meaning and definition in relation to those of others.
A. Facebook/Cambridge Analytica
Only 270,000 people consented to sharing their data with Kogan through his app, and yet all of their friends had their data collected as well, without knowledge or consent. For these 87 million people, there was neither notice nor choice—their data was leaked by virtue of their connection to others in a social network, and the decisions others made. Recalling that social networks facilitate sharing about others as well, it’s likely this implicated data contains information about yet more people outside this 87 million. While Facebook did change its policies regarding the collection of friend data, the data that was gathered in this manner beforehand was still in use by third parties—notably Cambridge Analytica, but potentially others as well—without any meaningful effort to ensure it was deleted (Vladeck, 2018). Facebook itself has admitted that it anticipates uncovering many more instances of abuse from third parties; chances are there will be additional violations they miss as well (Sumagaysay, 2018).
Perhaps more troublingly, Facebook partakes in the common practice of building “shadow profiles” of non-users—a file of all the information the company has compiled about people who do not have Facebook profiles (Sarigol et al., 2014). Facebook acquires this information in multiple ways. It can get it from those who do have Facebook profiles, who voluntarily provide data about non-users. It also obtains browsing history data from other third-party websites that track Internet users’ locations across the Web. When combined, this data can reveal unique behavioral patterns and preferences (Gillmor, 2018). Facebook non-users never consented to having their data stored in phantom profiles—they were never even given explicit notice. But because the information was voluntarily submitted—though not by those implicated by it—Facebook can skirt accountability (Sarigol et al., 2014). This illustrates how the privacy behaviors and preferences of certain individuals can compromise the privacy of other individuals. And the problem is not unique to Facebook—it’s a feature, not a bug, of the information economy. Even if Facebook changes these specific policies and procedures, there is little to no regulation preventing them, and others, from finding similar workarounds in the pursuit of greater access to data.
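To make the mechanics concrete, the short Python sketch below imagines how such a profile might be stitched together from contact lists volunteered by existing users and logs from third-party trackers. Every name, field, and data source in it is hypothetical; it illustrates the general technique, not Facebook’s actual systems.

```python
# A hedged sketch, with entirely hypothetical sources, fields, and person,
# of how a "shadow profile" of a non-user could be assembled from data
# other people volunteer plus third-party tracking logs. Not Facebook's
# actual pipeline.
from collections import defaultdict

uploaded_contacts = [  # address books volunteered by existing users
    {"uploaded_by": "user_123", "name": "Sam Nouser", "email": "sam@example.com"},
    {"uploaded_by": "user_456", "name": "Sam Nouser", "phone": "555-0101"},
]

tracker_logs = [  # visits to sites embedding the platform's tracking code
    {"email": "sam@example.com", "site": "localnews.example", "topic": "politics"},
    {"email": "sam@example.com", "site": "shoes.example", "topic": "running"},
]

shadow_profile = defaultdict(set)
for contact in uploaded_contacts:
    for field, value in contact.items():
        if field != "uploaded_by":          # keep the non-user's details
            shadow_profile[field].add(value)
for log in tracker_logs:
    shadow_profile["inferred_interests"].add(log["topic"])

print({field: sorted(values) for field, values in shadow_profile.items()})
# Sam never created an account or agreed to any terms, yet a file of
# names, contact points, and inferred interests now exists about them.
```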
B. Other examples
That said, this isn’t just a problem for data-hungry companies—sometimes, the nature of information environments and aggregated data prove problematic on their own. In January 2018, it was discovered that the exercise tracking app Strava’s “global heat map” feature, a promotional tool showing in aggregate where its users ran around the world, inadvertently revealed the location of secret US military bases and logistics and supply routes abroad (Tufekci, 2018a). That a popular consumer app could reveal US military intelligence is worrying enough, but this situation also indicates that personal data, when aggregated, is not merely a large collection of individual preferences and choices—it takes on a life of its own. Aggregated data, when analyzed or combined with other data, can reveal sensitive or secret locations and identify mobility patterns, both of which entail serious risks (Grey, 2018).
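The aggregation problem is easy to see in a toy sketch. In the Python example below, which uses entirely invented users and coordinates, no individual publishes their location history on its own, yet the combined heat map still surfaces a single dense hotspot, a simplified stand-in for how a product like Strava’s global heat map can expose a sensitive site.

```python
# A toy illustration (invented coordinates and users, not Strava's data or
# code): no individual trace is published, yet the aggregate heat map still
# surfaces a dense cluster around a single, possibly sensitive, location.
from collections import Counter

gps_points = [  # (user, latitude, longitude) -- all values are made up
    ("runner_a", 34.001, 45.002), ("runner_a", 34.002, 45.001),
    ("runner_b", 34.001, 45.001), ("runner_c", 34.002, 45.002),
    ("runner_d", 12.500, 99.300),  # unrelated activity elsewhere
]

def grid_cell(lat, lon, size=0.01):
    """Snap a coordinate to a coarse grid cell, as a heat map would."""
    return (round(lat / size) * size, round(lon / size) * size)

heat_map = Counter(grid_cell(lat, lon) for _, lat, lon in gps_points)
hotspot, count = heat_map.most_common(1)[0]
print(f"Densest cell {hotspot} contains {count} points from several users")
# Each runner shared only their own workouts; the revealing pattern exists
# only in the combination.
```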
This inability to truly individualize within networks also has implications in law enforcement contexts. Under Section 702 of the Foreign Intelligence Surveillance Act (FISA), the federal government may intercept and seize the communications of non-US persons located outside of the US, for intelligence gathering and national security purposes (FISA, 2017). But because communication is multi-directional, if a target of Sec. 702 surveillance is in communication with a US citizen, or a person located in the US, these communications are “incidentally” collected as well (Klein et al., 2017). And the same is likely to happen when the CLOUD Act is implemented—cross-border data sharing between US companies and foreign governments will undoubtedly contain Americans’ communications (CLOUD, 2018). The law also does not limit foreign government data requests to just that country’s own citizens (Segal, 2018). National security intelligence gathering does not fall under the “notice and choice” paradigm of the consumer space, but it reveals an important truth: an individual’s personal data and privacy are often tied up in their affiliations with others. The very week this piece was written, law enforcement officials in Sacramento were able to identify the notorious Golden State Killer after DNA from the crime scenes was matched with genetic information the suspect’s relatives had uploaded about themselves to a public genealogy database. The data privacy decisions made by an individual in the family impacted all others in the family, whose inherently similar DNA—indicating their closeness in a network—inextricably linked them together (Becker, 2018).
There is also emerging research on “refractive surveillance”—monitoring, measuring, or surveilling one party in order to facilitate control or surveillance over a separate one. Karen Levy and Solon Barocas (2018) examine how brick-and-mortar stores are using customer tracking—through things like sensors and customization—to exert more granular managerial control over employees, under the guise of serving the customer. This can be observed in other contexts as well—measuring children in schools to monitor teachers; analyzing patient data to surveil doctors; quantifying children’s life outcomes to evaluate parents. The point is that the capabilities of monitoring technologies and big data can obfuscate surveillance mechanisms, so that an individual’s association with a target of monitoring implicates them in the surveillance as well. Their position in a social network has a significant influence on how their own data gets used.
IV. Not Informed
The individualized informed consent model also fails to satisfy the condition of being “informed.” Legally speaking, “informed consent” generally refers to the idea that a person can give true consent only when they have been notified of all relevant facts, benefits, risks, and alternatives of a particular action, and have a full comprehension of them (LegalDictionary.net). In online information environments, it is incredibly difficult to understand all the risks involved when sharing data, especially when data shared in one particular context gets appropriated for another.
A. Facebook/Cambridge Analytica
Put simply, data was collected from users just because they were friends with people who had taken Kogan’s quiz. These non-quiz-taking users were not notified at the time that their data was being collected and used, nor were they notified a couple of years later, when Facebook learned the data had been shared. Not only were these friends not truly “informed”—even those who took Kogan’s quiz were unaware their data would eventually be sold to Cambridge Analytica for psychographic mapping. Some of this boils down to whether privacy policies, as companies currently write them, do an adequate job of laying out all necessary information to users. Research shows that most users do not read privacy policies, and even when they do, they seldom understand them (Nissenbaum, 2011). They are crafted by lawyers and experts whose understanding of privacy law and of the companies’ processes vastly exceeds that of users. But even if policies are rewritten to be more user-friendly, and cosmetic UX changes encourage better privacy protection, the online information landscape is still too complex and unpredictable for individual users to truly understand in a way that qualifies them to provide informed consent.
B. Other examples
This unpredictability skyrockets when you factor in the use of machine learning to make further inferences about individuals. Machine learning synthesizes various pieces of information about a person to make determinations about them that might seem unrelated or intrusive, on a very large scale (Tufekci, 2018a). This is evident on a basic level with recommendation algorithms and targeted advertising. But it often finds its way into more questionable applications: an algorithm that could, after analyzing thousands of faces, detect a person’s sexual orientation based on a photo of their face (Kosinski and Wang, 2018); or the department store Target using purchasing behavior to determine which customers were pregnant (Crawford and Schwartz, 2014).
As these sorts of capabilities become more common, there’s a risk that information we provide with consent in one context may be used in another context, to which we would not give consent. And by using “computational inference,” companies have been able to skirt traditional privacy protections; in lieu of collecting certain data (whether because it’s impossible, illegal, or would not be given voluntarily), they fill in the gaps with predictive models that imagine an individual’s data or preferences. These models are not subject to the same restrictions, but they can produce the same harms (Baruh and Popescu, 2015). This violates what Helen Nissenbaum (2004) calls “contextual integrity,” or the idea that privacy rests on respect for the context in which information exists—the speaker, the audience, the medium, and the content itself. A major problem with the notice and choice model is that individuals don’t truly understand how companies like Facebook and relevant third parties will use their data, and in what contexts. And even when companies do give users more granular control over their privacy settings, which assumes that users have the capacity to understand all the ever-changing complexities of their decisions in the first place, this can actually encourage users to share more information, in what’s called the “control paradox” (Brandimarte et al., 2012).
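A minimal sketch of what such inference looks like in practice, loosely echoing the Target example, appears below. The purchase features, labels, and trained model are all invented for illustration; the point is only that a sensitive attribute can be predicted from data volunteered for an entirely different purpose.

```python
# A minimal sketch of "computational inference," loosely echoing the Target
# example: a model trained on volunteered purchase histories predicts a
# sensitive attribute the shopper never disclosed. All data below is invented
# and the features are purely illustrative.
from sklearn.linear_model import LogisticRegression

# Feature columns: [unscented lotion, prenatal vitamins, large tote bags]
X_train = [[3, 2, 1], [2, 3, 0], [0, 0, 1], [1, 0, 0], [4, 1, 2], [0, 1, 0]]
y_train = [1, 1, 0, 0, 1, 0]  # 1 = later confirmed pregnant (hypothetical)

model = LogisticRegression().fit(X_train, y_train)

# A new shopper never shared this attribute; the model guesses it anyway,
# outside the context in which the purchase data was originally given.
new_shopper = [[2, 2, 1]]
print("Inferred probability:", round(model.predict_proba(new_shopper)[0][1], 2))
```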
Sometimes, even companies themselves cannot truly understand all the facts and risks. In the Strava heat map scandal, neither the users nor the company knew that the aggregated data would reveal US military secrets (Tufekci, 2018a). In the case of algorithmic predictive analytics, the algorithms are often opaque, meaning not even their developers understand why they make any particular decision (Burrell, 2016). This has started raising legal questions about a right to explanation for those affected by administrative decisions made by automated systems (Edwards and Veale, 2017). And it should raise questions about whether users and companies really have a full comprehension of the risks involved when personal data is collected for and subject to these systems, and thus whether they are really “informed.”
V. Other Risks
There are a number of other features of this model that make it problematic. First, it assumes that, like in any hypothetical market, if users are unhappy with the privacy settings offered on a platform like Facebook, they’ll migrate to another platform that better suits their preferences. But while other platforms may exist, none come close to Facebook in terms of number of active users (Statista, 2018). Together with Google, the two companies control 84% of global digital advertising money, making Facebook far and away the top social networking site for advertising (Coren, 2018). In places of heavy censorship, Facebook can operate, essentially, as the Internet (Tufekci, 2018b). But the issue isn’t just Facebook—the notice and choice model pervades the online ecosystem, meaning users don’t really have a choice to avoid it, unless they abstain from using online service providers. Given their necessity to modern life, this is an unreasonable expectation. And even if you do manage to avoid Facebook, it doesn’t mean they’re avoiding you—they still collect data on non-users (Gillmor, 2018).
There is also evidence that even data that is anonymized can be de-anonymized when combined with other data. Even after data points have been stripped of personally identifiable information like names, addresses, and social security numbers, they can be re-identified by supplementing with other information (Ohm, 2010). Research has shown that web browsing history can be linked to social media profiles with publicly available data (Goel et al., 2017), and that a combination of an individual’s ZIP code, birth date, and gender can accurately identify 87% of the US population (Sweeney, 2000). This is particularly worrisome as more personal data is easily available to interested parties, and the utility of that data increases (Ohm, 2010). And the algorithms that can be used to re-identify this data are continuously getting stronger (Narayanan and Shmatikov, 2010). Users are often unaware of these risks, and they can’t rely on companies that hold their data to have the users’ best interests in mind. In April 2018, a security researcher discovered that a database containing the personal information of over 93 million Mexican voters had been sitting, unprotected, on a publicly available Amazon cloud server since September 2015 (Smith, 2018). When alerted about this illegal and potentially disastrous leak, Amazon deflected, referring to its policy, which states that AWS is responsible for “security of the Cloud” while the customer is responsible for “security in the Cloud” [emphases added] (Amazon Web Services). Amazon’s contractual policies let it off the hook, illustrating the insufficiency of individualized informed consent in being truly individual—the database owner’s laxity impacted 93 million other people—or being truly informed—these voters were unaware of the leak, or the risks involved.
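The linkage logic behind such re-identification is simple enough to sketch. The toy example below, in the spirit of Sweeney’s finding, joins a “de-identified” dataset to a public one on the quasi-identifier triple of ZIP code, birth date, and sex; all of the records are fabricated.

```python
# A toy linkage attack in the spirit of Sweeney (2000): records stripped of
# names are re-identified by joining on the quasi-identifiers ZIP code,
# birth date, and sex. Every record below is fabricated.

deidentified_records = [
    {"zip": "02139", "dob": "1961-07-31", "sex": "F", "diagnosis": "asthma"},
    {"zip": "94103", "dob": "1985-01-15", "sex": "M", "diagnosis": "flu"},
]

public_list = [  # e.g., a voter roll or other public dataset with names
    {"name": "Jane Doe", "zip": "02139", "dob": "1961-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "94103", "dob": "1990-03-02", "sex": "M"},
]

# Index the public list by the quasi-identifier triple.
by_quasi_id = {(p["zip"], p["dob"], p["sex"]): p["name"] for p in public_list}

for record in deidentified_records:
    key = (record["zip"], record["dob"], record["sex"])
    if key in by_quasi_id:
        print(by_quasi_id[key], "->", record["diagnosis"])
# Prints "Jane Doe -> asthma": no field in the "anonymized" data named her,
# but the combination of ZIP, birth date, and sex was unique enough to link
# the two datasets.
```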
VI. Recommendations
The risk of hacks, leaks, and breaches is inherent to information environments, meaning there will always be an element of uncertainty for users regarding how their data is used. But there are models other than notice and choice that can do better jobs of protecting users from the wide range of risks present in our big data world.
A. Information Fiduciaries
One proposal comes from law professor Jack Balkin (2016), who claims that online service providers like Facebook, who hold massive amounts of user data, should be treated as “information fiduciaries.” Traditionally, a fiduciary is a person or business who has a special duty to act with loyalty and fairness towards a beneficiary for whom they hold something valuable. Common examples are doctors, lawyers, and accountants, who are legally required to act in their beneficiaries’ best interests. There is an assumption of a power differential in a fiduciary relationship—the fiduciary has information or expertise that they could theoretically use against their beneficiaries but are forbidden from doing so. In return for the beneficiary’s patronage and access to this information, the fiduciary must act with special care toward the beneficiary, on pain of losing the ability to practice (Balkin, 2014).
A new law for information fiduciaries could be created for online service providers like Facebook and Google, based on previous fiduciary laws. Like traditional fiduciaries, these companies are essentially indispensable, collect large amounts of sensitive personal information, and can monitor their users to a much greater extent than users can monitor the fiduciary (Balkin and Zittrain, 2016). In the current notice and choice model, the only things protecting users are companies’ own terms of service; with information fiduciaries, there would be a special obligation beyond contractual agreements. There’s already precedent for this—the Digital Millennium Copyright Act created safe harbors for businesses that agreed to follow certain rules regarding takedown of content violating copyright. A similar law could function for information fiduciaries—companies like Facebook could agree to a predetermined set of user data protection practices, and in return, the government could preempt certain state and local privacy laws. There would, of course, be parameters on what responsibilities platforms have with regard to user data, so that fear of liability does not discourage innovation and entrepreneurship. Companies would also not be compelled to become fiduciaries, but those that did not would leave themselves more vulnerable to costly litigation and privacy complaints (Balkin and Zittrain, 2016).
B. Procedural Data Due Process
Kate Crawford and Jason Schwartz (2014) propose another approach, procedural data due process, which allows individuals to challenge decisions made about them on the basis of their data when those decisions result in harm. Because big data evades traditional regulatory schemes, this mechanism grants rights of action to those harmed by entities’ uses of their personal data. This framework applies the protection of due process afforded to individuals in other areas of the legal system to the use of personal data, providing an opportunity for a hearing when that use results in harm.
C. Tufekci’s Legislative Framework
Prominent sociologist Zeynep Tufekci (2018c) also laid out a framework for a legislative fix to these issues that highlights three elements that must be considered: data collection, data access, and data use. First, platforms must implement opt-in mechanisms with clear, transparent language, not lengthy, technical legal doctrines. Second, users should be able to access, if requested, all the data companies hold about them, including “computational inference.” Lastly, companies should respect the contextual integrity of user data; uses of data should be limited to specific purposes, for certain periods of time. There should also be regulations on how companies can use aggregated data to avoid potential harms, societal and individual, arising from its misuse.
VII. Conclusion
The current dominant model of user data privacy protection, individualized informed consent, is not sufficient because it is neither truly individualized nor truly informed. The Facebook/Cambridge Analytica scandal illustrates both of these elements, but this phenomenon is not exclusive to Facebook. It is a feature of the online information environment in the era of big data. We see it in the Strava heat map crisis, incidental collection, predictive analytics, and beyond. A new model, or collection of models, that considers the power dynamic between individuals and entities that hold their data, is better suited to address the issues we currently face. This model must take into account the changing reality of big data analytics, and the potential for harms, privacy and otherwise, to individuals implicated by it.
References
Amazon Web Services. Compliance: Shared Responsibility Model. Amazon. Retrieved from https://aws.amazon.com/compliance/shared-responsibility-model/
Balkin, Jack M. (2014, March 05). Information Fiduciaries in the Digital Age [Blog post]. Retrieved from https://balkin.blogspot.com/2014/03/information-fiduciaries-in-digital-age.html
Balkin, Jack M. (2016, April). Information Fiduciaries and the First Amendment. UC Davis Law Review, Vol. 49, No. 4, pg. 1183-1234. Retrieved from https://lawreview.law.ucdavis.edu/issues/49/4/Lecture/49-4_Balkin.pdf
Balkin, Jack M., & Zittrain, Jonathan. (2016, October 3). A Grand Bargain to Make Tech Companies Trustworthy. The Atlantic. Retrieved from https://www.theatlantic.com/technology/archive/2016/10/information-fiduciary/502346/
Barocas, Solon, & Nissenbaum, Helen. (2009, October). On Notice: The Trouble with Notice and Consent. Proceedings of the Engaging Data Forum: The First International Forum on the Application and Management of Personal Electronic Information. Retrieved from https://www.nyu.edu/projects/nissenbaum/papers/ED_SII_On_Notice.pdf
Baruh, Lemi, & Popescu, Mihaela. (2015, November 2). Big data analytics and the limits of privacy self-management. New Media & Society, Vol. 19, Issue 4, pg. 579-596. http://journals.sagepub.com/doi/abs/10.1177/1461444815614001?journalCode=nmsa
Becker, Rachel. (2018, April 26). Golden State Killer suspect was tracked down through genealogy website GEDmatch. The Verge. Retrieved from https://www.theverge.com/2018/4/26/17288532/golden-state-killer-east-area-rapist-genealogy-websites-dna-genetic-investigation
boyd, danah, & Marwick, Alice. (2014). Networked privacy: How teenagers negotiate context in social media. New Media & Society, Vol. 16, Issue 7, pg. 1051-1067. Retrieved from http://journals.sagepub.com/doi/abs/10.1177/1461444814543995
Brandimarte, Laura, Acquisti, Alessandro, & Loewenstein, George. (2012). Misplaced Confidences: Privacy and the Control Paradox. Social Psychological and Personality Science, Vol. 4, Issue 3, pg. 340-347. Retrieved from https://www.cmu.edu/dietrich/sds/docs/loewenstein/MisplacedConfidence.pdf
Burrell, Jenna. (2016, January 6). How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms. Big Data & Society, January-June 2016, pg. 1-12. Retrieved from http://journals.sagepub.com/doi/pdf/10.1177/2053951715622512
Clarifying Lawful Overseas Use of Data (CLOUD) Act, S. 2383. (2018, February 6). Retrieved from https://www.govtrack.us/congress/bills/115/s2383/text
Coren, Michael J. (2018, April 10). Is Facebook a monopoly? Mark Zuckerberg doesn’t have an answer. Quartz. Retrieved from https://qz.com/1265266/marvels-avengers-infinity-war-made-the-biggest-debut-in-movie-history/
Crawford, Kate, & Schwartz, Jason. (2014, January 29). Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms. Boston College Law Review, Vol. 55, Iss. 1. Retrieved from http://lawdigitalcommons.bc.edu/cgi/viewcontent.cgi?article=3351&context=bclr
Edwards, Lilian, & Veale, Michael. (2017). Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For. Duke Law & Technology Review, Vol. 16, pg. 18. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2972855##
Foreign Intelligence Surveillance Act (FISA) Amendments Reauthorization Act of 2017, Section 702. (2017). Retrieved from https://intelligence.house.gov/fisa-702/
Gillmor, Daniel Kahn. (2018, April 5). Facebook Is Tracking Me Even Though I’m Not on Facebook. ACLU Free Future Blog. Retrieved from https://www.aclu.org/blog/privacy-technology/internet-privacy/facebook-tracking-me-even-though-im-not-facebook
Goel, Sharad, Narayanan, Arvind, Shukla, Ansh, & Su, Jessica. (2017). De-anonymizing Web Browsing Data with Social Networks. International World Wide Web Conference Committee (IW3C2). Retrieved from https://5harad.com/papers/twivacy.pdf
Granville, Kevin. (2018, March 19). Facebook and Cambridge Analytica: What You Need to Know as the Fallout Widens. The New York Times. Retrieved from https://www.nytimes.com/2018/03/19/technology/facebook-cambridge-analytica-explained.html
Grey, Stacey. (2018, January 31). If You Can’t Take the Heat Map: Benefits & Risks of Releasing Location Datasets. Future of Privacy Forum. Retrieved from https://fpf.org/2018/01/31/if-you-cant-take-the-heat-map-benefits-risks-of-releasing-location-datasets/
Klein, Adam, Christian, Madeline, Olsen, Matt, & Campos, Tristan. (2017, August 4). The "Section 702" Surveillance Program: What You Need to Know. Center for a New American Security. Retrieved from https://www.cnas.org/publications/reports/702
Kosinski, Michal, & Wang, Yilun. (2018, February). Deep Neural Networks Are More Accurate Than Humans at Detecting Sexual Orientation From Facial Images. Journal of Personality and Social Psychology, Vol. 114, Iss. 2, pg. 246-257. Retrieved from https://www.gsb.stanford.edu/faculty-research/publications/deep-neural-networks-are-more-accurate-humans-detecting-sexual
LegalDictionary.net. Informed Consent. Retrieved from https://legaldictionary.net/informed-consent/
Levy, Karen & Barocas, Solon. (2018). Refractive Surveillance: Monitoring Customers to Manage Workers. International Journal of Communication 12, pg. 1166-1188. Retrieved from http://ijoc.org/index.php/ijoc/article/view/7041/2302
Meyer, Robinson. (2018, March 20). The Cambridge Analytica Scandal, in 3 Paragraphs. The Atlantic. Retrieved from https://www.theatlantic.com/technology/archive/2018/03/the-cambridge-analytica-scandal-in-three-paragraphs/556046/
Narayanan, Arvind, & Shmatikov, Vitaly. (2010, June). Myths and Fallacies of “Personally Identifiable Information.” Communications of the ACM, Vol. 53, No. 6, pg. 24-26. Retrieved from https://www.cs.utexas.edu/~shmat/shmat_cacm10.pdf
Nissenbaum, Helen. (2004). Privacy as Contextual Integrity. Washington Law Review, Vol. 79, pg. 119-158. Retrieved from https://www.nyu.edu/projects/nissenbaum/papers/washingtonlawreview.pdf
Nissenbaum, Helen. (2011, Fall). A Contextual Approach to Privacy Online. Daedalus, the Journal of the American Academy of Arts & Sciences, Vol. 140, Iss. 4, pg. 32-48. Retrieved from http://www.amacad.org/publications/daedalus/11_fall_nissenbaum.pdf
Ohm, Paul. (2010). Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization. UCLA Law Review, Vol. 57, pg. 1701. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1450006##
Ortutay, Barbara. (2018, April 9). Starting Monday, Facebook Will Tell Users if Their Data Was Shared With Cambridge Analytica. TIME. Retrieved from http://time.com/5232832/facebook-alert-users-cambridge-analytica/
Sarigol, Emre, Garcia, David, & Schweitzer, Frank. (2014). Online Privacy as a Collective Phenomenon. COSN ’14 Proceedings of the Second ACM Conference on Online Social Networks, pg. 95-106. Retrieved from https://dl.acm.org/citation.cfm?id=2660460.2660470
Schwartz, Paul M., & Solove, Daniel. (2009, June). Notice and Choice: Implications for Digital Marketing to Youth. Berkeley Media Studies Group, NPLAN/BMSG Meeting Memo. Retrieved from http://www.digitalads.org/sites/default/files/publications/digitalads_schwartz_solove_notice_choice_nplan_bmsg_memo.pdf
Segal, Adam. (2018, February 12). The Intelligence Collection Implications of the CLOUD Act. Council on Foreign Relations. Retrieved from https://www.cfr.org/blog/intelligence-collection-implications-cloud-act
Smith, Ms. (2018, April 24). Personal info of all 94.3 million Mexican voters publicly exposed on Amazon. CSO, IDG Communications. Retrieved from https://www.csoonline.com/article/3060778/security/personal-info-of-all-94-3-million-mexican-voters-publicly-exposed-on-amazon.html
Statista: The Statistics Portal. (2018). Most popular social networks worldwide as of April 2018, ranked by number of active users (in millions). Retrieved from https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/
Sumagaysay, Levi. (2018, April 27). Facebook got an earnings boost, but here’s the fine print. Digital First Media. Retrieved from https://www.siliconvalley.com/2018/04/27/facebook-got-an-earnings-boost-but-heres-the-fine-print/
Sweeney, Latanya. (2000). Simple Demographics Often Identify People Uniquely. Carnegie Mellon University, Data Privacy Working Paper 3. Retrieved from https://dataprivacylab.org/projects/identifiability/paper1.pdf
Tufekci, Zeynep. (2018a, January 30). The Latest Privacy Debacle. The New York Times. Retrieved from https://www.nytimes.com/2018/01/30/opinion/strava-privacy.html
Tufekci, Zeynep. (2018b, March 19). Facebook’s Surveillance Machine. The New York Times. Retrieved from https://www.nytimes.com/2018/03/19/opinion/facebook-cambridge-analytica.html
Tufekci, Zeynep. (2018c, April 9). We Already Know How to Protect Ourselves From Facebook. The New York Times. Retrieved from https://www.nytimes.com/2018/04/09/opinion/zuckerberg-testify-congress.html
U.S. Department of Health, Education & Welfare (US DHEW). (1973, July). Records, Computers, and the Rights of Citizens: Report of the Secretary’s Advisory Committee on Automated Personal Data Systems (DHEW Publication No. (OS)73-94). Washington, DC: US Government Printing Office.
Vladeck, David C. (2018, April 4). Facebook, Cambridge Analytica, and the Regulator’s Dilemma: Clueless or Venal? [Blog post]. Harvard Law Review Blog. Retrieved from https://blog.harvardlawreview.org/facebook-cambridge-analytica-and-the-regulators-dilemma-clueless-or-venal/
Wilkins, Richard G. (1987). Defining the “Reasonable Expectation of Privacy”: An Emerging Tripartite Analysis. Vanderbilt Law Review, Vol. 40. Retrieved from http://heinonline.org/HOL/LandingPage?handle=hein.journals/vanlr40&div=48&id=&page