It's about AI not having all that much to catch up to. I know we are slaves to our ridiculous egos, so ridiculous we constantly invent new fictitious supernatural forces that must have created us in their hypermagical, divine, amazing, perfect image (talk about hallucinating), but we aren't all that.
We wandered around for the vast majority of our existence, about 200k years, before it occurred to some of us we could grow food in one place.
That's what you think. But we will sooner have fusion power than AGI. Comparing it with the development of personal computers, we're currently at the stage of punch-card weaving machines.
Experts in both fields have significant paychecks riding on people believing them. Of the two, fusion is the only one showing (very minor) measurable progress.
We can't even define what success would look like for AGI.
AGI is software with human-like intelligence that can self-improve, and the last two years show very measurable progress. We went from essentially nothing to multimodal models that can run on consumer hardware. OpenAI's new model, if they are to be believed, is apparently much better at reasoning and can do long-term research. I've also seen a few papers in the wild discussing self-teaching methods and frameworks.
To be clear, I don't know which will come first. It's hard to know if the next leap is just a step or if there's a giant chasm lying in front of us. I do know that it's a lot easier to prototype with AI than fusion, and there are a lot more people working on it, both behind closed doors and publicly on the internet. Fusion doesn't have this advantage.
Your statement is basically a shot in the dark imo.