New insight into why LLMs are not great at cracking passwords

Large language models (LLMs), such as the model underpinning OpenAI's conversational platform ChatGPT, have proven to perform well on a wide range of language and coding tasks. Some computer scientists have recently been exploring whether these models could also be exploited by malicious users and hackers to plan cyber-attacks or to access people's personal data.