A recent NYT article (embedded below) addresses an issue at Berkeley regarding student and faculty monitoring, data collection, and privacy. Janet Napolitano, president of the University of California and former secretary of Homeland Security in the Obama administration, moved to increase security measures across the university system's 10 campuses with virtually no notice. Professors at Berkeley, a campus long regarded as a bastion of academic freedom and free speech, have begun to speak out about these still relatively unknown measures. What is known is that the program, called the Coordinated Monitoring and Threat Response Initiative, is administered by an outside contractor and overseen by an internal body, the Cyber-Risk Governance Committee.
In an effort to appease professors, the chief information officer of the university system explained that the new measures were implemented in response to a data breach at the UCLA Health System in July 2015, but that he would now welcome a dialogue with university faculty: "This is not a technology issue. It is about how to strike a balance between being a very open university while still protecting the assets of the university from nefarious actors." Some might argue that disclosing the intended use of the data as an afterthought flies in the face of the notion of balance. Still others would contend that the matters at issue on a college campus do not need balance so much as safeguarding, for anything less could severely curtail freedom of speech. Monitoring a college campus highlights the risks inherent in this sort of surveillance:
- Constraints on academic freedom of research and freedom of speech for fear of repercussion
- Tying public or political approval of research topics and other academic endeavors (via congressional investigations) to the use of taxpayers' money at a public university
- Lack of control over data collected if there is a breach (higher risk given the involvement of a contractor)
- Tracking students' and faculty members' identifiable data
When matters such as these are at hand, is transparency really the opposite of privacy? And how is transparency defined to begin with? Does it mean that everything is surveilled but disclosed to the public, or just that one knows it is ongoing? Does one accept that surveillance is inevitable because security concerns seem to trump the expectation of privacy, but that it is offset by making the collected data public? Does our sense of security stem from having everything in the open despite the risk of misuse, or the opposite? As evidence of this confounding logic, details of the monitoring program at Berkeley were not released because of lawsuits relating to the data breach last summer. At a very high level, the chief information officer simply emphasized that the contents of emails were not surveilled - only network traffic. How much information about the monitoring needs to be released in order for those surveilled to feel comfortable?
As a society we are still defining our relationship with the expectation of privacy and pervasive technology, with a strong sense of risk and vulnerability as a backdrop. We are still negotiating rules and boundaries, not only with governments but with corporations and entities such as universities. It is hard to demarcate boundaries, however, when both the reach of technology and the risk of misplaced information are unquantifiable and maybe even unknowable. It is also increasingly difficult to be deliberate about decisions that could compromise our privacy when we are largely unaware that this erosion is happening. We need to realize that we carry out these negotiations daily when we opt to ignore, or fail to identify, privacy issues in favor of practicality. It is easier to sign away the fine print in exchange for everyday conveniences - opting for quick checkout during online shopping, not realizing that your urban bike-share program saves your trips and routes, or using Facebook credentials to sign in to a myriad of vendors, which in turn allows Facebook to better tailor advertising to your spending habits. We unconsciously and carelessly create an endless trail of data that can be gathered, sorted, manipulated, and monetized for various uses - including, but not limited to, consumer patterns, political and sexual preferences, and health and financial data. At some point, society as an unconscious whole traded individual privacy away for the internet.
While government surveillance is not new - beginning with Smith v. Maryland, 442 U.S. 735 (1979), which held that phone records can be acquired without a warrant, and evolving in a complex and vague manner under the Patriot Act - the loss of privacy to corporations and other entities is still fairly recent. A cobbled-together body of law, paired with fast-changing technology whose reach the average person does not fully comprehend, leaves us unaware of the extent of the risk. Just this morning Apple released a customer letter (link below) highlighting a demand that Tim Cook, its CEO, refuses to comply with: to build new technology to circumvent security features on the iPhone, as requested by the FBI in connection with the San Bernardino shootings in California. He appears to do so not only on behalf of a corporation whose interest lies in safeguarding its customers' data, but as a fellow human who understands that there is no going back once certain data is up for grabs. Our notion of privacy has been changing, and while its erosion may be inevitable, it is up to us to take the time to weigh where we are and what we are comfortable with, and to take an active part in shaping new rules, whatever those may be - there is no excuse for not doing so when what is at stake are our freedoms and liberties.
This was originally published on 17 February 2016 on LinkedIn.