LLMs have their flaws, but for my use they're usually good enough. It's rarely mission-critical information that I'm looking for. It satisfies my thirst for an answer, and even if it's wrong, I'm probably going to forget it in a few hours anyway. If it's something important, I'll start with ChatGPT and then fact-check it by looking up the information myself.
Of course I care whether the answer is correct. My point was that even when it’s not, it doesn’t really matter much because if it were critical, I wouldn’t be asking ChatGPT in the first place. More often than not, the answer it gives me is correct. The occasional hallucination is a price I’m willing to pay for the huge convenience of having something like ChatGPT to quickly bounce ideas off of and ask about stuff.
I agree that AI can be helpful for bouncing ideas off of. It's been a great aid in learning, too. However, when I'm using it to help me learn programming, for example, I can run the code and see whether or not it works.
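That kind of verification can be as simple as a quick sanity check. As a sketch (the function here is a hypothetical example of the sort of snippet an LLM might suggest, not from any particular session), a couple of assertions are enough to see whether the code actually does what was claimed:

```python
# Hypothetical example: a function an LLM might suggest for
# flattening a nested list one level deep.
def flatten(nested):
    return [item for sublist in nested for item in sublist]

# Running it on a few concrete cases immediately shows whether it works,
# which is exactly the feedback loop you don't get with factual answers.
assert flatten([[1, 2], [3]]) == [1, 2, 3]
assert flatten([]) == []
assert flatten([[], [4, 5]]) == [4, 5]
```

If any assertion fails, you know the answer was wrong before it costs you anything; that immediate, mechanical feedback is what makes programming a comparatively safe domain for LLM help.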
I'm automatically skeptical of anything they tell me, because I know they could just be making something up. I always have to verify.