Trust in artificial intelligence (AI) companies has dipped to 35 percent over a five-year period in the U.S., according to new data. The data, released Tuesday by public relations firm Edelman, fou…
The term "Artificial Intelligence" has been in use for over fifty years to refer to a wide variety of algorithms and processes, the vast majority of which are far simpler than the LLMs and diffusion models being referred to here. It's a perfectly cromulent word to be using in this context.
You are perhaps thinking of AGI (Artificial General Intelligence), which is the sort of thing Star Trek depicts. That's not what's being discussed here because, as you say, it doesn't exist yet.
Unfortunately I have decided it's wrong to colloquially use "AI" to refer to text engines and chat bots, and since my opinions are objectively correct, that is now set in stone. It's okay to be wrong, but now you know how to be right. I'll let it slide this time.
What you mean to say is that people have been using "AI" wrong for decades. Artificial Intelligence is by definition the same as AGI. Intelligence is not the ability to follow directions and do exactly what is asked; there is a lot more to it than that. Just like IQ stands for Intelligence Quotient: IQ is not a measure of what someone knows, it's about how they take in new information and process and use it. You can't really say that computer-generated art is AI, because the computer simply did what was asked, usually with random choices about what to add based on what was or wasn't in the prompt. Same goes for LLMs; to call LLMs AI would mean that Google, DuckDuckGo, etc. are AI as well.
And go ahead and downvote me for this. I'm a techie and a geek, I know what I'm talking about, and I've watched and been irritated as people use terms like AI for things that aren't really AI and aren't even close. And I'm a huge Trekkie, so you don't have to explain what they call anything.