Researcher develops a security-focused large language model to defend against malware

Security was top of mind when Dr. Marcus Botacin, assistant professor in the Department of Computer Science and Engineering, heard about large language models (LLMs) like ChatGPT. LLMs are a type of AI that can rapidly generate text, and some, including ChatGPT, can also generate computer code. Botacin became concerned that attackers would exploit these capabilities to rapidly write massive amounts of malware.
