Fairness tool catches AI bias early

Machine learning software helps organizations make important decisions, such as who gets a bank loan or which areas police should patrol. But if these systems have biases, even small ones, they can cause real harm. A specific group of people could be underrepresented in a training dataset, for example, and as the machine learning (ML) model learns, that bias can multiply and lead to unfair outcomes, such as loan denials or higher risk scores in prescription management systems.
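For illustration, here is a minimal sketch of the kind of check a fairness tool might run: comparing approval rates across demographic groups, a test often called demographic parity. The loan decisions, group labels, and numbers below are invented for this example; they do not come from any specific tool or dataset.

    # Hypothetical loan decisions: (group, approved) pairs, where group "B"
    # was underrepresented in the training data. All values are illustrative.
    from collections import defaultdict

    decisions = [
        ("A", 1), ("A", 1), ("A", 0), ("A", 1), ("A", 1),
        ("B", 0), ("B", 1), ("B", 0), ("B", 0), ("B", 0),
    ]

    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved

    # Approval rate per group and the gap between the best- and
    # worst-treated groups (the demographic parity difference).
    rates = {g: approvals[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())

    print(f"Approval rates by group: {rates}")
    print(f"Demographic parity gap: {gap:.2f}")

On this toy data the gap is 0.60, a disparity large enough that a fairness check would flag the model for review before deployment, which is the kind of early warning such tools aim to provide.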