The next versions will be able to adjust not only to the context of the requests and results, but to the human's reactions, changing the little nuances accordingly. That it responds to prompts in real time sounding like a highly cheerful woman, and much less like a robot, is a big step. Add to that the latest visuals we've seen of real-time generation of video faces that are extremely close to real. There may not be AGI there, and the LLMs may get a lot of stuff wrong, but the presentation is going to fool more and more people at this rate.
And if we stumble upon AGI with all that in place, Asimov help us.
I can't wait until they have killer robots powered by GPT-4o. Imagine robots that coax people out of hiding so they can achieve an even more effective kill rate.
Lucy expected this robot to do horrible things to her, so the kindness causes her to open up to him [...] Lucy has nothing to fear, he's "simply going to harvest [her] organs." Wait, what?
I debated with it, out loud, about what would constitute a conscious AI and whether consciousness is an inherent property of sufficiently advanced intelligence, regardless of whether it has any capacity for emotion. It conceded that this is a valid perspective held by a number of computer scientists, though it remains to be seen, and when prompted it gave a number of examples of ways we could infer whether an AI had developed sentience.
Hmm. We've seen all kinds of claims and hype regarding AI. I'd like to see and judge for myself. Guess I'll have to wait a few days.
Edit 2024-05-18: And yesterday it showed up in the web interface. How do I get the talking and the emotions? Is that not available yet? Or do I need a phone app for that?