LLMs found using stigmatizing language about individuals with alcohol and substance use disorders

As artificial intelligence rapidly develops and becomes a growing presence in health care communication, a new study addresses the concern that large language models (LLMs) can reinforce harmful stereotypes by using stigmatizing language. The study, from researchers at Mass General Brigham, found that more than 35% of responses to queries about alcohol- and substance use-related conditions contained stigmatizing language. But the researchers also highlight that targeted prompts can substantially reduce stigmatizing language in the LLMs’ answers. Results are published in the Journal of Addiction Medicine.