In it’s too good to be true news, what if you could just ask your target system to execute code? Lucas Luitjes has an interesting article about simply asking AI chatbots to execute code and it turns out they will! Sanitizing user input has been a known issue for quite some time. Without sanitized input things like command injection and SQLi are possible leading to all kinds of data disclosure and remote code execution, sometimes as highly privileged users.
This has some very interesting implications. As AI chatbots become more capable it is important to find ways to prevent them from doing things they shouldn’t be doing, both on local and remote systems. Removing special characters and preventing OS command injection is one thing, but when the input is literally the entire English, (or another), language sanitizing input starts to look like a much more daunting task!
Full writeup here!