Skip Navigation

Hacking Auto-GPT and escaping its docker container | Positive Security

positive.security Hacking Auto-GPT and escaping its docker container | Positive Security

We leverage indirect prompt injection to trick Auto-GPT (GPT-4) into executing arbitrary code when it is asked to perform a seemingly harmless task such as text summarization on a malicious website, and discovered vulnerabilities that allow escaping its sandboxed execution environment.

@AutoTLDR

2

You're viewing a single thread.