The COVID pandemic highlighted privacy. Lockdowns and restrictions on in-person meetings made digital networks an essential tool, and this reliance pointed to privacy’s most important problem – the hodgepodge of rules that apply to the commercial use of personal data. Governing how personal data is collected and used requires government rules, but there is disagreement over what form these rules should take.
The COVID pandemic highlighted privacy in two ways. First, the lockdowns and restrictions on in-person meetings put in place in response to COVID made digital networks the central tool for business, education and social life and led to significant increases in their use. Each connection creates data that can be collected and analyzed. As most people know, every action on a network leaves a digital trail of data that is recorded by service providers under the terms of their contract with users. These contracts, and the rules governing this data collection (and subsequent use), are ambiguous and may impose few limits, particularly in the United States.
Second, governments sought to use this digital dependency as a tool to warn of COVID exposure and (in some countries) monitor compliance with isolation rules. Health surveillance raises the same issues for privacy as does the commercial use of network data, on whether consent is required for collection of personal data and what rules govern the use of that collected data. To be frank, most of these COVID-related efforts had mixed success, although some improved over time.
The immediate lesson to draw from this is that countries need both better technology and broader public acceptance of health surveillance if it is to work. The bigger issue is that as societies become more dependent on digital network technology, we will need stronger, clearer rules on data use, but – and this is a crucial caveat – those rules cannot be so strict that they stifle technological innovation.
A key issue for policymakers will be whether to keep any health surveillance in place after COVID subsides. Survey data suggests that people will accept intrusive surveillance during a pandemic but are reluctant to see it become permanent, preferring that the enhanced digital surveillance measures created in response to COVID remain temporary. Privacy groups have already lined up to oppose permanent change or the retention of health surveillance systems.
Government use of health data does not, however, touch directly on privacy’s most important problem – the commercial use of personal data. More than a century ago, Supreme Court Justice Louis Brandeis, one of the fathers of privacy policy, defined privacy as the right to be left alone. But in a digital environment, you are never really alone; you are subject to observation the moment you connect to a network. The development of data analytical tools (“big data”) has increased the value of collected data, and the advent of artificial intelligence technologies like machine learning, which are “trained” on huge sets of data, makes data even more valuable. Access to and analysis of data collected online provides real commercial advantage.
Neither technology nor commercial incentives will automatically protect privacy. There is no market solution for privacy. The reason is that the “business model” of the internet is based on exchanging “Personally Identifiable Information” (PII) for services. The use of PII in these “trades” is informal and lacks the markets or pricing mechanisms that would bring clarity and create competition; frankly, an individual’s data is worth only a few cents and does not justify a complex market. It becomes valuable only when the data of millions of users is aggregated.
Choices by consumers suggest that people are willing to allow companies to harvest personal data when they visit a website in exchange for “free” online services. These services are not actually free; the payment is made in personal data rather than money. Acceptance of intrusive commercial surveillance may reflect the fact that consumers have little choice, but even before COVID there was discomfort with this commercial surveillance, part of a larger debate over whether the giant tech companies that provide services (like search engines, streaming entertainment, or social media connections) to hundreds of millions of consumers should be regulated like public utilities. How personal data is collected and used when you connect to a network can only be shaped by rules that govern the behavior of websites and service providers, and there is disagreement over what form these privacy rules should take.
Opinions on this vary from country to country. Many countries have no privacy policy, or if they do, it is not enforced. These countries increasingly focus on “data sovereignty” and “data localization” – the idea that each nation should control the personal information of its citizens and companies. In China and Russia, companies must follow privacy rules, but there are no constraints on the government. The situation in the U.S. is the opposite: government is tightly constrained while companies currently face few limits on the collection and use of data.
The European Union (EU) has the most advanced privacy regime and regards privacy as a fundamental right, but there are reasonable concerns that European privacy rules create a stifling regulatory burden. The EU has taken a comprehensive regulatory approach under the General Data Protection Regulation (GDPR), with requirements for consumer consent and company disclosure of how data is used. While EU officials say they have learned the lesson that too much regulation is bad for economic growth, it is too early to tell whether GDPR achieves the right balance.
The United States has a fragmented approach to privacy regulation, with specific rules for some sectors like health or financial services, and few or no rules for other sectors, like online consumer services. Strict rules apply only to government collection and use of personal data. The imbalance between highly regulated government data collection and lightly regulated commercial collection dates to the 1980s, before the internet existed, and is now inadequate for governing digital networks.
Privacy creates extraterritoriality problems as governments try to regulate data use by companies not subject to their jurisdiction because they are headquartered in another country. Extraterritoriality and diverse views on regulation are obstacles to global agreement on privacy, leading to a mélange of differing national regulations. There is no global agreement on privacy (although the EU would like its GDPR to become a global standard for other countries), nor is there any process to obtain global agreement, so privacy rules vary widely from country to country.
In a complicated situation involving extraterritorial rules, European courts have held that the privacy protections offered by American companies are inadequate under GDPR requirements. Restricting data flows would damage transatlantic commerce, so the EU and the US are negotiating an agreement that would let US companies certify that they meet GDPR privacy requirements if they satisfy certain conditions. This would be the third such agreement (earlier agreements were called “Safe Harbor” and “Privacy Shield”) and it, too, is likely to face a challenge in European courts. Negotiation of this agreement, Congressional interest in national legislation, and work in the EU to implement GDPR mean that the privacy picture will change again.
COVID highlighted “digital dependence” and reinforced demands for a more stable and just network environment. Privacy and data protection are part of the larger problem of digital governance. COVID came at a time when governments were paying heightened attention to the ways technology reshapes societies and giant tech companies had acquired immense wealth. These trends create discomfort, and countries around the world are looking to rein in the tech giants, particularly in their collection and use of data. While there are contradictions in public attitudes on data collection and use, the outline of a new approach to privacy is emerging, one that balances commercial needs with rules on use.
The COVID experience has (hopefully) taught us what the elements of a new approach will look like. This new approach will need to lie somewhere between the United States’ commercial minimalism and European regulation. Antitrust battles between the tech giants and the EU and US will also shape privacy policy. The perception of anti-competitive behavior by tech giants in the collection and use of personal data is one of the drivers of antitrust legislation in Washington and Brussels.
The core will be requirements for mandatory transparency and consent. Transparency applies to what is being collected and how it is being used. It cannot rely on the cumbersome and confusing privacy statements that few people can understand, and which in the U.S. are usually not binding. Replacing these with legally binding rules for collection and use that allow consumers to “opt in” and consent to data collection as the price of accessing a service should be a goal for privacy legislation. Consent is user acceptance of the “trade” of personal data for online services. Tech companies worry that many users will reject this trade once they learn the details, and if done poorly, consent requirements could impose cumbersome burdens, but this is not a reason to reject a consent requirement. Companies may need to offer different service levels that vary by how much data a consumer is willing to share.
Before COVID, leadership on privacy policy had moved from Washington to state legislatures and to the European Union. California, Virginia, and recently Colorado have all passed state privacy laws derived in part from the GDPR. None is a substitute for federal policy. While it is unlikely that Congress will pass a national privacy bill in the next year, the need to amend existing laws to accommodate health surveillance, the antitrust implications of a few giant companies dominating the information space, pressure from Europe, and growing discontent over privacy will eventually force it to enact legislation.
About the Author
James A. Lewis is a Senior Vice President and Program Director at the Center for Strategic and International Studies (CSIS). Before joining CSIS, he was a diplomat and a member of the Senior Executive Service. Lewis was the Rapporteur for three UN Groups of Governmental Experts on Information Security. He served as a member of the Commerce Spectrum Management Advisory Committee and the State Department’s Advisory Committee on International Communications and Information Policy. He has authored numerous publications since coming to CSIS, is frequently quoted in the media, and has testified numerous times before Congress. He received his Ph.D. from the University of Chicago.