User language distorts ChatGPT information on armed conflicts, study shows

When asked in Arabic about the number of civilians killed in the Middle East conflict, ChatGPT gives significantly higher casualty figures than when the same question is asked in Hebrew, according to a new study by the Universities of Zurich and Constance. Such systematic discrepancies can reinforce biases in armed conflicts and encourage information bubbles.