EDITOR'S NOTE: The question of whether there's still such a thing as "digital privacy" of personal information is now moot. Ask any cybersecurity expert and they'll tell you that digital privacy, as we understood it at the advent of the internet, is long gone. The question we should be asking now is what intrusive or harmful things these unknown entities can do with our private information, and what we can do to protect ourselves. It may sound cynical, but if something of value is not in cold storage, and if its traces aren't restricted to physical transactions, then knowledge of it is out there, somewhere. Read on to see what institutions, most of which you'll never see or know about, can do with your private data. This is where the digital world and the real world collide.
The boundaries between the physical and digital spheres are collapsing. The digital sharing of personal information is now embedded in the economic and relational activities of daily life. Following the COVID-19 pandemic, the popularity of work-from-home and other hybrid work arrangements continued to erode traditional boundaries.
These rapid changes in data sharing have created conditions conducive to two types of surveillance: first, contractual surveillance partnerships between public and private actors; second, subtler forms of surveillance that seek to control the flow of information to the public.
These partnerships may take the form of law enforcement contracts with facial recognition companies, or governmental pressure on an internet platform's content moderation decisions. Given the government's regulatory power, both arrangements effectively place private data under government control. In a world where such partnerships are the norm, civil liberties erode.
Contractual Surveillance Partnerships
Contractual public-private partnerships are attractive to innovators and government officials alike because of the incentives created by a digital commons of personal data. Corporate America has an interest in making a profit. The government has an interest in surveillance for investigative purposes. The merger of these interests harms consumers by eroding civil liberties, including freedom from continual government observation.
Businesspeople know that selling technology to the government is lucrative. In 2020, the US surveillance market – facial recognition, video-powered drones, smart city technology, and the like – was estimated to be worth $2.2 billion and growing. And as the lines between the physical and digital worlds blur, data as an asset will only increase in value.
Clearview AI, a leading company in the facial recognition industry, has been awarded contracts with federal law enforcement agencies, which have used the technology to identify individuals in close proximity to crimes. This technology, however, sweeps innocent individuals into criminal investigations based on location alone.
Other companies are selling services that seek to do more than merely identify individuals. In 2019, the Utah company Banjo pitched a bid to the Utah Attorney General by promising to "solve crime in seconds." Banjo's service operated by scanning the internet for various types of information, from 911 calls and traffic cameras to social media and weather data. Once aggregated, this information could be used to alert law enforcement to areas where crimes may have occurred.
This technology is disturbing for several reasons. First, as with facial recognition, innocent bystanders may be implicated in investigations. Second, circumstances misjudged as suspicious could result in police being dispatched unnecessarily. Third, the technology depends on the judgment of the algorithm's maker. Indeed, Banjo's deal fell through because of the owner's ties to neo-Nazi groups. But before rescinding the contract, the Utah Attorney General was prepared to shell out $21 million for access to the service.
While contractual public-private partnerships are cause for concern, another growing issue is the government pressuring technology firms to advance its own objectives. Recently, the federal government has sought to influence which posts, news articles, or stories Americans are likely to see on private social media platforms such as Facebook.
Mark Zuckerberg admitted that an FBI warning regarding "Russian propaganda" led to his decision to alter Facebook's algorithm to downplay the now infamous Hunter Biden laptop story. This admission, paired with documents from a recent lawsuit, shows that Facebook's moderation policy was influenced by federal government officials. Consumers, though, created their Facebook accounts believing they were contracting with Facebook, not the United States government.
Ultimately, services that aggregate public data to provide the government with sweeping surveillance tools, along with government-influenced content moderation policies, erode privacy in two ways. First, they place Americans under physical surveillance. Second, they seek to control the flow of information to the public, shaping what knowledge people can access when making decisions.
The trend of merging private and public interests in the digital sphere is troubling because the internet is an economic necessity. Obtaining information without relying on large internet platforms is a challenge. Meanwhile, preventing personally identifiable information from hitting the web (and becoming a sellable asset) is difficult.
Furthermore, it is unrealistic to expect existing privacy laws to undercut the problem. They were written with outdated expectations of privacy and digital consent: many people consented to putting their personal information online before some technologies widely used today even existed. And once data has been placed on the internet, removing it is remarkably difficult, if not impossible.
Solutions should focus on undercutting the incentives that merge corporate and government interests. To preserve freedom for posterity, liberty-minded policymakers should work to stymie the government's ability to capture and control data in the digital commons. This means preventing public-private partnerships that give the government control over private data.
Originally published by AIER.