Researcher develops a security-focused large language model to defend against malware

Security was top of mind when Dr. Marcus Botacin, assistant professor in the Department of Computer Science and Engineering, heard about large language models (LLMs) like ChatGPT. LLMs are a type of AI that can quickly craft text. Some LLMs, including ChatGPT, can also generate computer code. Botacin became concerned that attackers would use LLMs' capabilities to rapidly write massive amounts of malware.