New framework addresses privacy, dignity risks posed by modern AI systems

In a new article, researchers introduce the capabilities approach–contextual integrity (CA-CI) framework, which addresses privacy and dignity risks posed by modern artificial intelligence (AI) systems, especially foundation models whose capabilities evolve across contexts and purposes. In a case study, they demonstrate how CA-CI can operationalize the European Union (EU) AI Act's fundamental rights impact assessments, harm thresholds, and anticipatory governance. The article, by researchers at Carnegie Mellon University and the University of Michigan, is published in IEEE Security & Privacy.