You can't tell an LLM not to hallucinate; that would require it to actually understand what it's saying. "Hallucinations" are just LLMs bullshitting, because that's what they do. LLMs aren't actually intelligent; they're just using statistics to remix existing sentences.
I wish people would say machine learning or LLMs more frequently instead of AI being the buzzword. It really irks me. IT'S NOT ACCURATE! THAT'S NOT WHAT IT IS! STOP DEMEANING TRUE MACHINE CONSCIOUSNESS!
You can't tell an LLM not to hallucinate; that would require it to actually understand what it's saying.
No, it simply requires the probability distributions to be positively influenced by the additional characters. Whether that influence is positive or not depends only on the training data.
There are a bunch of techniques that can improve LLM outputs, even though they don't make sense from your standpoint. An LLM can't feel anything, yet the output can improve when I threaten it with consequences for wrong output. If you were correct, this wouldn't be possible.
"Don't be bad, be good, and don't mess up otherwise you won't be helpful and I'll let you scan the internet to see what happens when Apple deems you not helpful."