Researchers at the University of Illinois Urbana-Champaign have found that OpenAI's GPT-4 is capable of producing working exploits for most public vulnerabilities simply by reading about them in CVEs and other online sources. They tested the agent on known vulnerabilities rated High or Critical severity, affecting Python packages, websites, containers, and other software. GPT-4 successfully exploited 13 of the 15 vulnerabilities tested. Other AI models were also tested, but only GPT-4 achieved successful exploitation.
The researchers concluded that this does not exceed what an expert attacker could already achieve, but it does raise questions about how quickly the pipeline from CVE disclosure and proof-of-concept to widely available exploit code could shrink. It could also significantly enhance the capabilities of less skilled threat actors.
More info below!