‘I’m not sure’—AI finally learns three words that could make its biggest mistakes far less dangerous

A new approach has been proposed to address “overconfidence”—one of the most critical risks of artificial intelligence (AI) in areas such as autonomous driving and medical diagnosis, where a model reports high confidence in incorrect predictions. A KAIST research team has developed a training method that enables AI to recognize situations involving unfamiliar or unseen knowledge, laying the foundation for reducing overconfidence and improving reliability.
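The article does not describe the KAIST team's training method in detail, but the underlying idea of answering "I'm not sure" can be illustrated generically. The sketch below shows confidence-based abstention: a classifier declines to predict when its softmax confidence falls below a threshold. The function name and threshold value are illustrative assumptions, not part of the reported research.

```python
import numpy as np

def predict_or_abstain(logits, threshold=0.8):
    """Return (predicted class, confidence), or (None, confidence) to
    signal "I'm not sure" when softmax confidence is below threshold.
    Illustrative sketch only; not the KAIST method."""
    shifted = logits - np.max(logits)          # numerical stability
    probs = np.exp(shifted) / np.exp(shifted).sum()
    confidence = float(probs.max())
    if confidence < threshold:
        return None, confidence                # abstain on low confidence
    return int(np.argmax(probs)), confidence

# One logit dominates: the model answers confidently.
label, conf = predict_or_abstain(np.array([6.0, 1.0, 0.5]))
# Logits are nearly tied: the model abstains instead of guessing.
label2, conf2 = predict_or_abstain(np.array([1.1, 1.0, 0.9]))
```

Abstention like this is the simplest form of selective prediction; the research described here goes further by training the model itself to recognize unfamiliar inputs, rather than relying only on a post-hoc threshold.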

