Brookings’ Cameron Kerry has a new report out, “Protecting privacy in an AI-driven world,” that evaluates how artificial intelligence fits into broader considerations about consumer privacy. “[P]rotecting such privacy interests in the context of AI,” Kerry writes, “will require a change in the paradigm of privacy regulation.” Among his recommendations, Kerry calls for a shift from the classic “notice-and-choice” model that privacy legislation currently follows:
Consumers encounter this approach in the barrage of notifications and banners linked to lengthy and uninformative privacy policies and terms and conditions that we ostensibly consent to but seldom read. This charade of consent has made it obvious that notice-and-choice has become meaningless. For many AI applications—smart traffic signals and other sensors needed to support self-driving cars as one prominent example—it will become utterly impossible.
Kerry is right: the old notice-and-choice model is not sufficient in a world of constantly evolving technologies, where consumers cannot realistically review lengthy privacy policies to decide which companies to share their information with. Instead, the burden should be placed on businesses to ensure they use data in ways that are not harmful or abusive to consumers, while still delivering the benefits of new and innovative technologies. Privacy for America’s approach to federal privacy legislation would do just that by prohibiting outright, rather than allowing consent for, a range of practices that make personal data vulnerable to misuse. And, to keep up with fast-moving technologies, our policy framework would grant the Federal Trade Commission specific rulemaking authority to amend the law’s prohibited practices and accountability requirements.
Kerry also discusses various approaches to addressing discrimination and privacy in AI technologies, including one that prohibits discrimination directly and another that “addresses risk more obliquely, with accountability measures designed to identify discrimination in the processing of personal data.” Our approach would prohibit using personal information in any way to deny a person employment, credit, health care, or housing, except as permitted by federal or state law. The framework would also require companies to develop and maintain a plan to ensure compliance with the law’s privacy requirements, including evaluating the risks created by a company’s data collection and retention practices, as well as its reliance on automated processing and decision-making. This approach would broadly hold companies accountable for any discriminatory practices that result from their use of automated technologies.
You can read Kerry’s full report here.