I'm not sure what to make of this. The invention of the transformer architecture is what led to the breakthrough of LLMs, and it's why they're as capable as they are today, especially the generative pre-trained transformer (GPT). So the headline could also be: "Tokens are a big reason today's generative AI is so awesome". That doesn't mean the approach is without shortcomings, but I wonder why we don't see token-free LLMs take the lead. Maybe 2022 is too recent? Other researchers are working on speculative decoding too, and the latest publication from Meta also included a model that can predict multiple subsequent tokens at once. Maybe this is the future. It's certainly not easy to find a proper representation of written language. It's not very mathematical, and the characters or syllables don't really map to the semantics or anything useful. English is just a product of evolution, and it's not the only language, though that could be mitigated by including other languages when designing the tokenizer. As far as I know, they already do that.
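To make the "characters don't map to semantics" point concrete, here's a minimal sketch of the kind of greedy longest-match subword splitting that BPE-style tokenizers do. The vocabulary here is entirely made up for illustration; real GPT tokenizers learn vocabularies of tens of thousands of merges from data.

```python
# Toy greedy longest-match subword tokenizer. The vocabulary is
# hypothetical; real BPE tokenizers learn theirs from a corpus.
VOCAB = {"un", "happi", "ness", "token", "iz", "ation"}

def tokenize(text: str) -> list[str]:
    tokens = []
    i = 0
    while i < len(text):
        # Take the longest vocabulary entry matching at position i.
        for j in range(len(text), i, -1):
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # Unknown character: fall back to a single-character token.
            tokens.append(text[i])
            i += 1
    return tokens

print(tokenize("unhappiness"))   # ['un', 'happi', 'ness']
print(tokenize("tokenization"))  # ['token', 'iz', 'ation']
```

Note how "happi" isn't a word or a morpheme; the pieces are statistical, not semantic, which is exactly the mismatch the comment is about.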