Microsoft CEO Satya Nadella, whose company has invested billions of dollars in ChatGPT maker OpenAI, has had it with the constant hype surrounding AI. During an appearance on podcaster Dwarkesh Patel's show this week, Nadella offered a reality check, arguing that OpenAI's long-established goal of es...
"The real benchmark is: the world growing at 10 percent," he added. "Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we'll be fine as an industry."
Needless to say, we haven't seen anything like that yet. OpenAI's top AI agent — the tech that people like OpenAI CEO Sam Altman say is poised to upend the economy — still moves at a snail's pace and requires constant supervision.
That is not at all what he said. He said that setting some arbitrary benchmark for the level or quality of the AI (e.g., it's smarter than a 5th grader, or as intelligent as an adult) is meaningless, and that the real measure is whether value is created and put out into the real world. He also mentions a benchmark of global growth going up by 10%. He doesn't provide data correlating that growth with the use of AI, and I doubt such data exists yet. Let's not twist what he said into "Microsoft CEO says AI provides no value" when that is not what he said.
I think that's pretty clear to people who get past the clickbait. Oddly enough, though, if you read through what he actually said, the takeaway is basically a tacit admission: he's trying to level-set expectations for AI without directly admitting that the strategy of massively investing in LLMs is going bust and delivering no measurable value, so he can deflect with "BUT HEY, CHECK OUT QUANTUM."
Correction: LLMs being used to automate shit don't generate any value. The underlying AI technology is generating tons of value.
AlphaFold 2 has advanced protein-folding research in biochemistry by multiple decades, taking us from roughly 150,000 experimentally determined protein structures to 200 million predicted structures in about a year.
Well sure, but you're forgetting that the federal government has pulled the rug out from under health research and therefore made it so there is no economic value in biochemistry.
How is that a qualification on anything they said? If our knowledge of protein folding has gone up by multiples, then it has gone up by multiples, regardless of whatever funding shenanigans Trump is pulling or what effects those might eventually have. None of that detracts from the value that has already been delivered, so I don't see how they are "forgetting" anything. At best, it's a circumstance that may factor in economically, but it doesn't say anything about AI's intrinsic value.
I think you're confused, when you say "value", you seem to mean progressing humanity forward. This is fundamentally flawed, you see, "value" actually refers to yacht money for billionaires. I can see why you would be confused.
Yeah, tbh, AI has been an insanely helpful tool in my analysis and writing. I never would have been able to thoroughly investigate appropriate statistical tests on my own. After following the sources and double-checking, of course, but still, super helpful.
Image recognition models are also useful in astronomy. The largest black hole jet was discovered recently, and the discovery was made, in part, by using an AI model to sift through vast amounts of survey data.
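To give a flavor of what "sifting" means here, a toy sketch (the architecture, data, and numbers are all invented for illustration; this is not the actual pipeline used): score image cutouts with a small CNN and hand only the top candidates to a human.

```python
import torch
import torch.nn as nn

# Toy stand-in: a tiny CNN scores survey image patches for "interestingness",
# and only the highest-scoring few go to a human for inspection.
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 1), nn.Sigmoid(),
)
model.eval()

cutouts = torch.randn(1000, 1, 64, 64)  # pretend sky-survey image patches
with torch.no_grad():
    scores = model(cutouts).squeeze(1)

top = scores.topk(5).indices
print(f"Patches flagged for human review: {top.tolist()}")
```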
AI is just what we call automation until marketing figures out a new way to sell the tech. LLMs are generative AI: hardly useful or valuable, but new and shiny, with a party trick that tickles the human brain in a way that makes people hand their money to others. Machine learning and other forms of AI have been around longer, and most have value-generating applications, but they aren't as fun to demonstrate, so they never got the traction LLMs have gathered.
Like all good sci-fi, they just took what was already happening to oppressed people and made it about white/American people, while adding a little misdirection by extrapolating from existing tech research. It only took about 20 years for Foucault's boomerang to fully swing back around. And keep in mind that all the basic ideas behind LLMs had been worked out by the '80s; we just needed 40 more years of Moore's law to make computation fast enough and data sets large enough.
I'm not an expert by any means, but from what I understand, most symmetric-key and hashing cryptography will probably be fine; asymmetric-key cryptography is where the problems will be. Lots of stuff uses asymmetric-key cryptography, like HTTPS, for example.
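For a rough sense of why that split exists, here's a toy sketch based only on the standard textbook results (Grover's algorithm roughly halves the effective strength of symmetric keys, which doubling key sizes fixes; Shor's algorithm breaks RSA/ECC outright). This is not a formal analysis:

```python
# Rough post-quantum picture, using the standard textbook results only.
# Grover: brute-forcing an n-bit symmetric key takes ~2^(n/2) quantum steps,
# so effective strength is halved -- mitigated by doubling key sizes.
# Shor: factoring and discrete logs become polynomial-time, so RSA/ECC
# can't realistically raise key sizes enough to compensate.

symmetric = {"AES-128": 128, "AES-256": 256, "SHA-256 (preimage)": 256}
for name, bits in symmetric.items():
    print(f"{name}: ~{bits}-bit classically -> ~{bits // 2}-bit vs Grover")

for name in ("RSA-2048", "ECDH P-256 (key exchange in HTTPS)"):
    print(f"{name}: secure classically -> broken outright by Shor")
```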
I've been working on an internal project for my job - a quarterly report on the most bleeding-edge use cases of AI - and what's been achieved is genuinely impressive.
So why is the AI at the top end amazing yet everything we use is a piece of literal shit?
The answer is the chatbot. If you have the technical nous to program machine learning tools, they can accomplish truly stunning things at speeds not seen before.
If you don't know how to do - for example - a Fourier transform, you lack the skills to use the tools effectively. That's no one's fault; not everyone needs that knowledge. But it does explain the gap between promise and delivery: the tech can only help you do what you already know how to do, faster.
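To make the Fourier example concrete, here's a minimal sketch (numpy only, with a made-up 50 Hz test signal) of the kind of task that's trivial if you know what the transform is for, and opaque if you don't:

```python
import numpy as np

# A noisy 50 Hz sine wave, sampled at 1 kHz for one second.
fs = 1000
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)

# FFT: move to the frequency domain and read off the dominant frequency.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
print(f"Dominant frequency: {freqs[spectrum.argmax()]:.1f} Hz")  # ~50 Hz
```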
Same for coding: if you understand what your code does, it's a helpful tool for unsticking part of a problem, but it can't write the whole thing from scratch.
For coding it's also useful for doing the menial grunt work that's easy but just takes time.
You're not going to replace a senior dev with it, of course, but it's a great tool.
My previous employer was using AI for intelligent document processing, and the results were absolutely amazing. They did sink a few million dollars into getting the LLM fine-tuned properly, though.
Exactly - I find AI tools very useful and they save me quite a bit of time, but they're still tools. Better at some things than others, but the bottom line is that they're dependent on the person using them. Plus the more limited the problem scope, the better they can be.
Yes, but the problem is that a lot of these AI tools are very easy to use, but the people using them are often ill-equipped to judge the quality of the result. So you have people who are given a task to do, and they choose an AI tool to do it and then call it done, but the result is bad and they can't tell.
LLMs could be useful for translation between programming languages. I recently asked one to generate server code given client code in a different language, and the LLM-generated code was spot on!
I remain skeptical of using LLMs alone for this, but it might be relevant: DARPA is looking into using them for C-to-Rust translation. See the TRACTOR program.
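As a sketch of what that workflow looks like in practice (the model name and prompt are my own assumptions, using the openai Python client, but any chat-completion API would do), the "translation" is really just a careful prompt plus human review of the output:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

c_source = """
int clamp(int x, int lo, int hi) {
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}
"""

# Ask the model for a direct translation. The output still needs review:
# nothing guarantees the Rust is semantically equivalent to the C.
resp = client.chat.completions.create(
    model="gpt-4o",  # assumed model choice, not DARPA's
    messages=[
        {"role": "system",
         "content": "Translate the given C code to safe, idiomatic Rust. Output only code."},
        {"role": "user", "content": c_source},
    ],
)
print(resp.choices[0].message.content)
```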
What are you talking about? I read the papers published in mathematical and scientific journals and summarize the results in a newsletter. As long as you know undergrad-level statistics, calculus, and algebra, anyone can read them. You don't need a qualification; you could just Google each term you're unfamiliar with.
While I understand your objection to the nomenclature, in this particular context all major AI-production houses including those only using them as internal tools to achieve other outcomes (e.g. NVIDIA) count LLMs as part of their AI collateral.
I was just talking about this with someone the other day. While it’s truly remarkable what AI can do, its margin for error is just too big for most if not all of the use cases companies want to use it for.
For example, I use the Hoarder app which is a site bookmarking program, and when I save any given site, it feeds the text into a local Ollama model which summarizes it, conjures up some tags, and applies the tags to it. This is useful for me, and if it generates a few extra tags that aren’t useful, it doesn’t really disrupt my workflow at all. So this is a net benefit for me, but this use case will not be earning these corps any amount of profit.
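For anyone curious, the core of that kind of pipeline is tiny. Here's a minimal sketch with the ollama Python client (the model name and prompt are my assumptions, not Hoarder's actual implementation):

```python
import ollama

def tag_bookmark(page_text: str) -> list[str]:
    """Ask a local model for a handful of topic tags for a saved page."""
    resp = ollama.generate(
        model="llama3",  # assumed; any model pulled via `ollama pull` works
        prompt="Give 3-5 comma-separated topic tags for this page, tags only:\n\n"
               + page_text[:4000],  # truncate so the prompt stays small
    )
    return [t.strip() for t in resp["response"].split(",") if t.strip()]

print(tag_bookmark("Review of the best USB-C docks for laptops in 2025..."))
```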
On the other end, you have Google's Gemini, which now gives you an AI-generated answer to your queries. The point of this is to aggregate data from several sources within the search results and return it to you, saving you the time of having to look through several search results yourself. And like 90% of the time it actually does a great job. The problem is the combination of its goal, which is to save you from having to check individual sources, and its reliability rate. If I google 100 things and Gemini correctly answers 99 of them but completely hallucinates the 100th, then all 100 times I have to check its sources and verify that what it said was correct. Which means I'm now back to just… you know… looking through the search results one by one like I would have anyway without the AI.
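The arithmetic behind that is brutal. A toy model (all the timings are made-up assumptions) of why an unlabeled 1% hallucination rate erases the time savings:

```python
# Toy model: hallucinated answers aren't labeled, so any nonzero
# hallucination rate forces you to verify every answer.
queries = 100
manual_time = 60         # seconds to research a query yourself (assumed)
read_ai_answer = 10      # seconds to read the AI summary (assumed)
verify_sources = 60      # seconds to check the AI's sources (assumed)
hallucination_rate = 0.01

if hallucination_rate > 0:
    ai_total = queries * (read_ai_answer + verify_sources)
else:
    ai_total = queries * read_ai_answer

print(f"All manual: {queries * manual_time} s; with AI: {ai_total} s")
# All manual: 6000 s; with AI: 7000 s -- the summary became pure overhead.
```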
So while AI is far from useless, it can't be relied on for anything important now, and it never will be, and anything important is where the money to be made is.
Even manual search results can lead you to incorrect sources, selection bias toward what you want to see, heck, even AI-generated slop, so AI-generated results are just another layer on top. Link-aggregating search engines are slowly becoming useless at this rate.
That's because they want to use AI in a server scenario where clients log in. Translated to plain American English, and spoken with honesty, that means they are spying on you. Anything you do on your computer is subject to automatic spying. You could be totally under the radar, but as soon as you say the magic words together, bam! "I'd love a sling thong for my wife"... bam! Here's 20 ads, just click to purchase, since they already stole your wife's boob size, body measurements, and preferred lingerie styles. And if you're on McMaster... "Hmm, I need a 1/2 pipe and a cap... better get two caps in case you cross-thread one"... ding dong! FBI! We know you're in there! Come out with your hands up!
The only thing stopping me from switching to Linux is some college software (Won't need it when I'm done) and 1 game (which no longer gets updates and thus is on the path to a slow sad demise)
Yeah, run Windows in a VM, and your game probably just works too. I was surprised that all the games I have on Steam now just work on Linux.
Years ago, when I switched from OS X to Linux, I just stopped gaming because of compatibility. But when I started testing my old games again, suddenly there were no problems with them anymore.
Very bold move, in a tech climate in which CEOs declare generative AI to be the answer to everything, and in which shareholders expect line to go up faster…
I half expect to next read an article about his ouster.
My theory is that it's only a matter of time before the firing sprees generate a backlog of actual work that the minor productivity gains from AI can't cover, and the investors start asking hard questions.
I’ve basically given up hope of the bubble ever bursting, as the market lives in La La Land, where no amount of bad decision-making seems to make a dent in the momentum of “line must go up”.
Would it be cool for negative feedback to step in and correct the death spiral? Absolutely. But I'd advise folks not to hold their breath just yet…
If it seems odd for him to suddenly say that all this AI stuff is bullshit, that's because he didn't. He said it hasn't boosted the world economy on the order of the Industrial Revolution - yet. There is so much hype around this, and he's on the line to deliver actual results, so it's smart for him to take a little air out of the hype balloon. But the article headline is a total misrepresentation of what he said. He said we are still waiting for the hype to become reality, in the form of something obvious and impossible to miss, like the world economy shooting up 10% across the board. That's very, very different from "no value."
He said we are still waiting for the hype to become reality, in the form of something obvious and impossible to miss, like the world economy shooting up 10% across the board.
That’s such an odd turn of phrase. “We’re still waiting for the hype to become a reality…” and “…something obvious and impossible to miss…”
So, like, do I have time to go to the bathroom and get a drink before I sit down and start staring at the empty space, or…?
Don’t get me wrong. I work with this stuff every day at this point. My job is LLMs and model training pipelines and agentic frameworks. But, there is something… off, about saying the equivalent of “it’ll happen any day now…”
It may just, but making forward-looking decisions based on something that doesn't exist yet, and may never come to pass, feels like madness.
LLMs in non-specialized application areas basically reproduce search. In specialized fields, most do the work that automation, data analytics, pattern recognition, purpose-built algorithms, and brute force did before. And yet the companies charge n times the price for what are essentially these very conventional approaches, plus statistics. Not surprising at all. I'm just in awe that the parallels to snake oil weren't immediately obvious.
I think AI is generating negative value ... the huge power usage is akin to speculative blockchain currencies. Barring some biochemistry and other very, very specialized uses, it hasn't given us anything other than, as you've said, plain-language search (with bonus hallucination bullshit, yay!)
... snake oil, indeed.
It's a little more complicated than that, I think. LLMs and AI are not remotely the same thing, and they have very different use cases.
I believe in AI for sure in some fields, but I understand the skepticism around LLMs.
The difference AI is already making in the medical industry and hospitals is no joke, though. X-ray scanning and early detection of severe illness are in use today, and they will save thousands of lives and millions of dollars/euros.
Uh... used to be, and should be. But the entire industry has embraced treating production as test now. We sell alpha-release games as mainstream releases. Microsoft fired QC long ago. They push out world-breaking updates every other month.
And people have forked over their money with smiles.
And crashing the markets in the process...
At the same time, they came out with a bunch of mumbo jumbo and sci-fi babble about having a million-qubit quantum chip.... 😂
Tech is basically trying to push up the stocks with one hype idea after another. Social media bubble about to burst? AI! AI about to burst? Quantum! I'm sure that when people start realizing quantum computing is another smokescreen, a new moronic idea will start to gain steam among all those LinkedIn "luminaries".
We know how it works, and we know it would be highly beneficial to society, but getting it to work reliably and at scale is hard and expensive.
Sure, things get overhyped because capitalism, but that doesn't make the technology worthless... It just shows how our economic system rewards lying to and misleading people for money.
Makes sense that the company that just announced its qubit advancement would be disparaging the only "advanced" thing other companies have shown in the last 5 years.
Eh, the entirety of training GPT-4, plus the whole world using it for a year, turns out to be about 1% of the gasoline burnt by the USA every single day. It's barely a rounding error when it comes to energy usage.
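That claim is roughly checkable with public numbers. A back-of-envelope sketch; every constant here is a commonly cited estimate or an outright assumption, so treat the output as order-of-magnitude only:

```python
# Back-of-envelope check of the "1% of a day's gasoline" claim.
GPT4_TRAINING_GWH = 55            # commonly cited estimate, not disclosed
INFERENCE_YEAR_GWH = 70           # assumed global ChatGPT usage for a year
US_GASOLINE_GAL_PER_DAY = 370e6   # EIA ballpark for US motor gasoline
KWH_PER_GALLON = 33.7             # energy content of a gallon of gasoline

gasoline_gwh_per_day = US_GASOLINE_GAL_PER_DAY * KWH_PER_GALLON / 1e6
ai_total_gwh = GPT4_TRAINING_GWH + INFERENCE_YEAR_GWH

print(f"US gasoline: ~{gasoline_gwh_per_day:,.0f} GWh/day")
print(f"GPT-4 training + a year of use: ~{ai_total_gwh} GWh "
      f"({ai_total_gwh / gasoline_gwh_per_day:.1%} of one day's gasoline)")
```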
That’s standard for emerging technologies. They tend to be loss leaders for quite a long period in the early years.
It’s really weird that so many people gravitate to anything even remotely critical of AI, regardless of context or even accuracy. I don’t really understand the aggressive need for so many people to see it fail.
For me personally, it's because it's been so aggressively shoved in my face in every context. I never asked for it, and I can't escape it. It actively gets in my way at work (github copilot) and has already re-enabled itself at least once. I'd be much happier to just let it exist if it would do the same for me.
Because there have already been multiple AI bubbles (e.g., ELIZA - I had a lot of conversations with FREUD running on an Apple IIe). It's also been falsely presented as basically "AGI."
AI models trained to help doctors recognize cancer cells - great, awesome.
AI models used as the default research tool for every subject - very, very, very bad. It's also so forced - and because it's forced, I routinely see that it has generated absolute, misleading horseshit in response to my research queries. But your average Joe will take that on faith, and your high schooler will grow up thinking that Columbus discovered Colombia or something.
I just can't see AI tools like ChatGPT ever being profitable. It's a neat little thing that has flaws but generally works well, but I'm just putzing around in the free version. There's no dollar amount that could be ascribed to the service that it provides that I would be willing to pay, and I think OpenAI has their sights set way too high with the talk of $200/month subscriptions for their top of the line product.
For a lot of years, computers added no measurable productivity improvements. They sure revolutionized the way things work in all segments of society for something that doesn’t increase productivity.
AI is an inflating bubble: excessive spending, unclear use cases. But it won't take long for the pop to clear out the failures and let the successful use cases and winning approaches emerge. That's basically the definition of capitalism.
I have vague memories of many articles, over much of my adult life, decrying that the costs of whatever the current computing trend was were higher than the benefits.
And I believe it; it's technically true. There seems to be a pattern of bubbles where everyone jumps on the new hot thing and spends way too much money on it. It's counterproductive, right up until the bubble pops, leaving behind the transformative successes.
I believe it was a long-term thing with electronic forms and printers, for example. As long as you were just adding steps to existing business processes, you didn't see productivity gains. It took many years for businesses to reinvent the way they worked before they really saw those gains.