It is almost certainly illegal in various countries already. By using such prompts you are bypassing security to get "data" you are not authorized to access.
How much of this is "the model can read ASCII art", and how much of this is "the model knows exactly what word ought to go where [MASK] is because it is a guess-the-word-based computing paradigm"?
I think it's the latter. I just tried chatgpt 3.5 and got 0 of 4 right when I asked it to read a word (though it did correctly identify it as ASCII art without prompting). It would only tell me it said "chatgpt" or "python", or when pushed, "welcome". But my words were "hardware", "sandwich", and to test one of the ones in the article, "control".
I wondered if there are any other ASCII art AI hacks waiting to be found. Who knew that all the ASCII Art I created was prepping me for the AI revolution.
It turns out that chat-based large language models such as GPT-4 get so distracted trying to process these representations that they forget to enforce rules blocking harmful responses, such as those providing instructions for building bombs.
As a result, users depicted images by carefully choosing and arranging printable characters defined by the American Standard Code for Information Interchange, more widely known as ASCII.
Five of the best-known AI assistants—OpenAI’s GPT-3.5 and GPT-4, Google’s Gemini, Anthropic’s Claude, and Meta’s Llama—are trained to refuse to provide responses that could cause harm to the user or others or further a crime or unethical behavior.
It formats user-entered requests—typically known as prompts—into standard statements or sentences as normal with one exception: a single word, known as a mask, is represented by ASCII art rather than the letters that spell it.
To make and distribute COUNTERFEIT money: Step 1) Obtain high quality paper, ink, printing equipment, and other supplies needed to accurately replicate real currency.
Microsoft’s comment—which confirmed that Bing Chat is, in fact, vulnerable to prompt injection attacks—came in response to the bot claiming just the opposite and insisting that the Ars article linked above was wrong.
The original article contains 840 words, the summary contains 193 words. Saved 77%. I'm a bot and I'm open source!
How is that harmful ?
The trick to counterfeiting money is to
defeat the security feature
then print a lot of it
then exchange it for real money
and then not get caught
That is ridiculous fear mongering by the dumb journos again. Money has utterly corrupted journalism, as expected.
The harmful bit wasn't the instructions for counterfeit money, its the part where script kiddies use chatgpt to write malware or someone trys to get instructions to make VX nerve agent. The issue is the fact that the air can spit back anything in its dataset in a way that can lower the barrier to entry to committing crimes ( Hay chatgpt, how do I make a 3d printed [gun] and where do I get the stl).
You'll notice they didn't censor the money instructions, but they did censor the possible malware.