They'd need to do some pretty fucking advanced hackery to be able to do surveillance on you just via the model. Everything's possible I guess, but... yeah, perhaps not.
If they could do that, essentially nothing you do on your computer would be safe.
And no matter how many protectionist measures that the US implements we're seeing that they're losing the global competition. I guess protectionism and oligarchy aren't the best ways to accomplish the stated goals of a capitalist economy. How soon before China is leading in every industry?
That's the thing: if the cost of AI goes down, and AI is a valuable input to businesses, that should be a good thing for the economy. To be sure, not for the tech sector that sells these models, but for all of the companies buying these services it should be great.
This just shows how speculative the whole AI obsession has been. Wildly unstable and subject to huge shifts since its value isn't based on anything solid.
It's based on guessing what the actual worth of AI is going to be, so yeah, wildly speculative at this point because breakthroughs seem to be happening fairly quickly, and everyone is still figuring out what they can use it for.
There are many clear use cases that are solid, so AI is here to stay, that's for certain. But how far can it go, and what will it require is what the market is gambling on.
If a new model comes out of the blue that delivers similar results on a fraction of the hardware, that's going to chop valuations down by a lot.
If someone finds another use case, for example a model with new capabilities, boom value goes up.
I would disagree on that. There are a few niche uses, but OpenAI can't even make a profit charging $200/month.
The uses seem pretty minimal as far as I've seen. Sure, AI has a lot of applications in terms of data processing, but the big generic LLMs propping up companies like OpenAI? Those seem to have no utility beyond slop generation.
Ultimately the market value of any work produced by a generic LLM is going to be zero.
It's kinda funny. Their magical bullshitting machine scored higher on made up tests than our magical bullshitting machine, the economy is in shambles! It's like someone losing a year's wages in sports betting.
Just because people are misusing tech they know nothing about does not mean this isn't an impressive feat.
If you know what you are doing, and enough to know when it gives you garbage, LLMs are really useful, but part of using them correctly is giving them grounding context outside of just blindly asking questions.
Democrats and Republicans have been shoveling truckload after truckload of cash into a Potemkin Village of a technology stack for the last five years. A Chinese tech company just came in with a dirt cheap open-sourced alternative and I guarantee you the American firms will pile on to crib off the work.
Far from fucking them over, China just did the Americans' homework for them. They just did it in a way that undercuts all the "Sam Altman is the Tech Messiah! He will bring about AI God!" holy roller nonsense that was propping up a handful of mega-firm inflated stock valuations.
Small- and mid-cap tech firms will flourish with these innovations. Microsoft will have to write off the last $13B it sunk into OpenAI as a loss.
So if the Chinese version is so efficient, and is open source, then couldn't OpenAI and Anthropic run the same on their huge hardware and get enormous capacity out of it?
OpenAI could use less hardware to get similar performance if they used the Chinese version, but they already have enough hardware to run their model.
Theoretically the best move for them would be to train their own, larger model using the same technique (as to still fully utilize their hardware) but this is easier said than done.
Not necessarily... if I gave you my "faster car" for you to run on your private 7 lane highway, you can definitely squeeze every last bit of the speed the car gives, but no more.
DeepSeek works as intended on 1% of the hardware the others allegedly "require" (allegedly, remember this is all a super hype bubble)... if you run it on super powerful machines, it will perform nicer, but only to a certain extent... it will not suddenly develop more/better qualities just because the hardware it runs on is better.
Didn't DeepSeek solve some of the data-wall problems by creating good chain-of-thought data with an intermediate RL model? That approach should work with the tried-and-tested scaling laws, just using much more compute.
They actually can't. Being open-source, it's already proliferated. Apparently there are already over 500 derivatives of it on HuggingFace.
The only thing that could be done is that each country in the West outlaws having a copy of it, like with other illegal materials.
Even by that point, it will already be deep within business ecosystems across the globe.
Nup. OpenAI can be shut down, but it is almost impossible for R1 to go away at this point.
Nvidia’s most advanced chips, H100s, have been banned from export to China since September 2022 by US sanctions. Nvidia then developed the less powerful H800 chips for the Chinese market, although they were also banned from export to China last October.
I love how in the US they talk about meritocracy, competition being good, blablabla... but they rig the game from the beginning. And even so, people find a way to be better. Fascinating.
Don't forget about the tariffs too! The US economy is actually a joke that can't compete on the world stage anymore except by wielding their enormous capital from a handful of tech billionaires.
That, and they are just brute forcing the problem. Neural nets have been around forever, but it's only in the last 5 or so years that they could do anything. There's been little to no real breakthrough innovation; they just keep throwing more processing power at it with more inputs, more layers, more nodes, more links, more CUDA.
And their chasing of general AI is just the short-sighted nature of them wanting to replace workers with something they don't have to pay and that won't argue about its rights.
Also all of these technologies forever and inescapably must rely on a foundation of trust with users and people who are sources of quality training data, "trust" being something US tech companies seem hell bent on lighting on fire and pissing off the yachts of their CEOs.
One of those rare lucid moments by the stock market? Is this the market correction that everyone knew was coming, or is some famous techbro going to technobabble some more about AI overlords and they return to their fantasy values?
It's quite lucid. The new thing uses a fraction of compute compared to the old thing for the same results, so Nvidia cards for example are going to be in way less demand. That being said Nvidia stock was way too high surfing on the AI hype for the last like 2 years, and despite it plunging it's not even back to normal.
I feel like the world's gone crazy, but OpenAI (and others) is pursuing more complex multimodal model designs. Those are going to be more expensive due to image/video/audio processing. Unless I'm missing something, that would probably account for the cost difference between current and previous iterations.
Emergence of DeepSeek raises doubts about sustainability of western artificial intelligence boom
Is the "emergence of DeepSeek" really what raised doubts? Are we really sure there haven't been lots of doubts raised previous to this? Doubts raised by intelligent people who know what they're talking about?
Ah, but those "intelligent" people cannot be very intelligent if they are not billionaires. After all, the AI companies know exactly how to assess intelligence:
Microsoft and OpenAI have a very specific, internal definition of artificial general intelligence (AGI) based on the startup’s profits, according to a new report from The Information. ...
The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits. That’s far from the rigorous technical and philosophical definition of AGI many expect.
(Source)
I don’t have one to cancel, but I might celebrate today by formatting the old windows SSD in my system and using it for some fast download cache space or something.
Good. LLM AIs are overhyped, overused garbage. If China putting one out is what it takes to hack the legs out from under its proliferation, then I'll take it.
It's not about hampering proliferation, it's about breaking the hype bubble. Some of the western AI companies have been pitching to have hundreds of billions in federal dollars devoted to investing in new giant AI models and the gigawatts of power needed to run them. They've been pitching a Manhattan Project scale infrastructure build out to facilitate AI, all in the name of national security.
You can only justify that kind of federal intervention if it's clear there's no other way. And this story here shows that the existing AI models aren't operating anywhere near where they could be in terms of efficiency. Before we pour hundreds of billions into giant data center and energy generation, it would behoove us to first extract all the gains we can from increased model efficiency. The big players like OpenAI haven't even been pushing efficiency hard. They've just been vacuuming up ever greater amounts of money to solve the problem the big and stupid way - just build really huge data centers running big inefficient models.
Possibly, but in my view, this will simply accelerate our progress towards the "bust" part of the existing boom-bust cycle that we've come to expect with new technologies.
They show up, get overhyped, loads of money is invested, eventually the cost craters and the availability becomes widespread, suddenly it doesn't look new and shiny to investors since everyone can use it for extremely cheap, so the overvalued companies lose that valuation, the companies using it solely for pleasing investors drop it since it's no longer useful, and primarily just the implementations that actually improved the products stick around due to user pressure rather than investor pressure.
Obviously this isn't a perfect description of how everything in the work will always play out in every circumstance every time, but I hope it gets the general point across.
What DeepSeek has done is to eliminate the threat of "exclusive" AI tools - ones that only a handful of mega-corps can dictate terms of use for.
Now you can have a Wikipedia-style AI (or a Wookiepedia AI, for that matter) that's divorced from the C-levels looking to monopolize sectors of the service economy.
No, but it would be nice if it would turn back into the tool it was. When it was called machine learning, like it was for the last decade before the bubble started.
Overused garbage? That’s incredibly hyperbolic. That’s like saying the calculator is garbage. The small company where I work as a software developer has already saved countless man hours by utilising LLMs as tools, which is all they are if you take away the hype; a tool to help skilled individuals work more efficiently. Not to replace skilled individuals entirely, as Sam Dead eyes Altman would have you believe.
Most people probably don't realize how bad news China's Deepseek is for OpenAI.
They've come up with a model that matches and even exceeds OpenAI's latest model o1 on various benchmarks, and they're charging just 3% of the price.
It's essentially as if someone had released a mobile on par with the iPhone but was selling it for $30 instead of $1000. It's this dramatic.
What's more, they're releasing it open-source so you even have the option - which OpenAI doesn't offer - of not using their API at all and running the model for "free" yourself.
If you're an OpenAI customer today you're obviously going to start asking yourself some questions, like "wait, why exactly should I be paying 30X more?". This is pretty transformational stuff, it fundamentally challenges the economics of the market.
It also potentially enables plenty of AI applications that were just completely unaffordable before. Say for instance that you want to build a service that helps people summarize books (random example). In AI parlance the average book is roughly 120,000 tokens (since a "token" is about 3/4 of a word and the average book is roughly 90,000 words). At OpenAI's prices, processing a single book would cost almost $2, since they charge $15 per 1 million tokens. DeepSeek's API however would cost only $0.07, which means your service can process about 30 books for $2 vs just 1 book with OpenAI: suddenly your book summarizing service is economically viable.
Or say you want to build a service that analyzes codebases for security vulnerabilities. A typical enterprise codebase might be 1 million lines of code, or roughly 4 million tokens. That would cost $60 with OpenAI versus just $2.20 with DeepSeek. At OpenAI's prices, doing daily security scans would cost $21,900 per year per codebase; with DeepSeek it's $803.
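The arithmetic behind those two examples is simple enough to sketch. The per-million-token prices below are just the ones quoted above ($15/M for OpenAI, the rate that makes 120k tokens come out at ~$0.07 for DeepSeek); real pricing varies by model and changes over time, so treat these as placeholder assumptions:

```python
# Back-of-the-envelope API cost comparison (prices are the ones quoted in
# the comment above, not authoritative: $15 vs ~$0.55 per 1M input tokens).
WORDS_PER_TOKEN = 0.75  # a token is roughly 3/4 of an English word


def tokens_from_words(words: int) -> int:
    """Estimate a token count from a word count."""
    return round(words / WORDS_PER_TOKEN)


def cost_usd(tokens: int, price_per_million: float) -> float:
    """Cost of processing `tokens` tokens at a given $/1M-token rate."""
    return tokens / 1_000_000 * price_per_million


book_tokens = tokens_from_words(90_000)            # ~120,000 tokens per book
print(cost_usd(book_tokens, 15.0))                 # OpenAI: ~$1.80 per book
print(cost_usd(book_tokens, 0.55))                 # DeepSeek: ~$0.07 per book

codebase_tokens = 4_000_000                        # ~1M lines of code
print(365 * cost_usd(codebase_tokens, 15.0))       # daily scans, OpenAI: $21,900/yr
print(365 * cost_usd(codebase_tokens, 0.55))       # daily scans, DeepSeek: $803/yr
```

Running it reproduces the figures in the comment: the 30x price gap is exactly what turns a $21,900/year scanning service into an $803/year one.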
So basically it looks like the game has changed. All thanks to a Chinese company that just demonstrated how U.S. tech restrictions can backfire spectacularly - by forcing them to build more efficient solutions that they're now sharing with the world at 3% of OpenAI's prices. As the saying goes, sometimes pressure creates diamonds.
Not really a question of national intentions. This is just a piece of technology open-sourced by a private tech company working overseas. If a Chinese company releases a better mousetrap, there's no reason to evaluate it based on the politics of the host nation.
Throwing a wrench in the American proposal to build out $500B in tech centers is just collateral damage created by a bad American software schema. If the Americans had invested more time in software engineers and less in raw data-center horsepower, they might have come up with this on their own years earlier.
Yep. It's obviously a bubble, but one that won't pop from just this, the motive is replacing millions of employees with automation, and the bubble will pop when it's clear that won't happen, or when the technology is mature enough that we stop expecting rapid improvement in capabilities.
I love the fact that the same executives who obsess over return to office because WFH ruins their socialization and sexual harassment opportunities think they're going to be able to replace all their employees with AI. My brother in Christ. You have already made it clear that you care more about work being your own social club than you do actual output or profitability. You are NOT going to embrace AI. You can't force an AI to have sex with you in exchange for keeping its job, and that's the only trick you know!
Trump counterbalance keeping it in check but my gut is saying once tariffs come in February there's going to be a market correction. Pure speculation on my part.
I am extremely ignorant of this whole AI thing. So please, can somebody "Explain Like I'm 5" why this new thing can wipe over a trillion dollars off US stocks? I would appreciate it a lot if you can help.
"You see, dear grandchildren, your grandfather used to have an apple orchard. The fruits were so sweet and nutritious that every town citizen wanted a taste because they thought it was the only possible orchard in the world. Therefore the citizens gave a lot of money to your grandfather because the citizens thought the orchard would give them more apples in return, more than the worth of the money they gave. Little did they know the world was vastly larger than our ever more arid US wasteland. Suddenly an oriental orchard was discovered which was surprisingly cheaper to plant, maintain, and produced more apples. This meant a significant potential loss of money for the inhabitants of the town called Idiocracy. Therefore, many people asked their money back by selling their imaginary not-yet-grown apples to people who think the orchard will still be worth more in the future.
This is called investing, or to those who are honest with themselves: participating in a multi-level marketing pyramid scheme. You see, children, it can make a lot of money, but it destroys the soul and our habitat at the same time, which goes unnoticed by all these people with advanced degrees. So think again when you hear someone speak with fancy words and untamed confidence. Many a time their reasoning falls below the threshold of dog poop. But that's a story for another time. Sweet dreams."
Basically, US companies involved in AI have been grossly overvalued for the last few years due to having a pseudo-monopoly over AI tech (companies like OpenAI, who make ChatGPT, and Nvidia, who make the graphics cards used to run AI models).
DeepSeek (a Chinese company) just released a free, open source equivalent of ChatGPT that cost a fraction of the price to train (set up), which has caused the US stock valuations to drop as investors are realising the US isn't the only global player, and isn't nearly as far ahead as previously thought.
Nvidia is losing value as it was previously believed that top-of-the-line graphics cards were required for AI, but it turns out they are not. Nvidia have geared their company strongly towards providing for AI in recent times.
And without the fake frame bullshit they're using to pad their numbers, its capabilities scale linearly with the 4090. The 5090 just has more cores, Ram, and power.
If the 4000-series had had cards with the memory and core count of the 5090, they'd be just as good as the 50-series.
Looks like it is not any smarter than the other junk on the market. The confusion that people consider AI as "intelligence" may be rooted in their own deficits in that area.
And now people exchange one American Junk-spitting Spyware for a Chinese junk-spitting spyware. Hurray! Progress!
It is progress in a sense. The west really put the spotlight on their shiny new expensive toy and banned the export of toy-maker parts to rival countries.
One of those countries made a cheap toy out of jank unwanted parts for much less money and it's of equal or better par than the west's.
As for why we're having an arms race based on AI, I genuinely don't know. It feels like a race to the bottom, with the fallout being the death of the internet (for better or worse).
Looks like it is not any smarter than the other junk on the market. The confusion that people consider AI as “intelligence” may be rooted in their own deficits in that area.
Yep, because they believed that OpenAI's (two lies in a name) models would magically digivolve into something that goes well beyond what it was designed to be. Trust us, you just have to feed it more data!
And now people exchange one American Junk-spitting Spyware for a Chinese junk-spitting spyware. Hurray! Progress!
That's the neat bit, really. With that model being free to download and run locally it's actually potentially disruptive to OpenAI's business model. They don't need to do anything malicious to hurt the US' economy.
The difference is that you can actually download this model and run it on your own hardware (if you have sufficient hardware). In that case it won't be sending any data to China. These models are still useful tools. As long as you're not interested in particular parts of Chinese history of course ;p
And now people exchange one American Junk-spitting Spyware for a Chinese junk-spitting spyware.
LLMs aren't spyware, they're graphs that organize large bodies of data for quick and user-friendly retrieval. The Wikipedia schema accomplishes a similar, albeit more primitive, role. There's nothing wrong with the fundamentals of the technology, just the applications that Westoids doggedly insist it be used for.
If you no longer need to boil down half a Great Lake to create the next iteration of Shrimp Jesus, that's good whether or not you think Meta should be dedicating millions of hours of compute to this mind-eroding activity.
I think maybe it's naive to think that if the cost goes down, shrimp jesus won't just be in higher demand. Shrimp jesus has no market cap, bullshit has no market cap. If you make it more efficient to flood cyberspace with bullshit, cyberspace will just be flooded with more bullshit. Those great lakes will still boil, don't worry.
AI has been used in game development for a while, and I hadn't seen anyone complain about the name before it became synonymous with image/text generation.
It was a misnomer there too, but at least people didn't think a bot playing C&C would be able to save the world by evolving into a real, greater than human intelligence.
LLMs are not a magical box you can ask anything of and get answers. If you blindly ask questions you might get lucky and receive some accurate general data, but just like with human brains, you aren't going to be able to accurately recreate random trivia verbatim from a neural net.
What LLMs are useful for, and how they should be used, is as a non-deterministic parsing and context tool. When people talk about feeding it more data they think of how these things are trained. But you also need to give it grounding context outside of the prompt itself. Give it a PDF manual, a website link, documentation, whatever, and it will use that as context for what you ask it. You can even have it link back to its references.
You still have to know enough to be able to validate the information it is giving you, but that's the case with any tool. You need to know how to use it.
As for the spyware part, that only matters if you are using the hosted instances they provide. Even for OpenAI stuff you can run the models locally with opensource software and maintain control over all the data you feed it. As far as I have found, none of the models you run with Ollama or other local AI software have been caught pushing data to a remote server, at least using open source software.
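To make the grounding-context point concrete, here is a minimal sketch of the pattern, shaped like a chat request to a local Ollama-style server. The model name and the manual text are made-up placeholders, and nothing is actually sent: we only assemble the payload, so no data leaves your machine until you choose to POST it to your own local instance.

```python
import json


def build_grounded_request(model: str, reference_text: str, question: str) -> str:
    """Assemble a chat payload that grounds the model in a supplied document
    instead of letting it answer from training data alone."""
    payload = {
        "model": model,
        "stream": False,
        "messages": [
            # The system message carries the grounding context and tells the
            # model to refuse rather than guess when the answer isn't there.
            {"role": "system",
             "content": "Answer only from the reference material below. "
                        "Say 'not in the reference' if the answer is missing.\n\n"
                        + reference_text},
            {"role": "user", "content": question},
        ],
    }
    return json.dumps(payload)


# Hypothetical example: ground the model in a product manual before asking.
manual = "Model X500 fuse rating: 5A. Replace only with the power cord unplugged."
req = build_grounded_request("deepseek-r1:7b", manual,
                             "What fuse does the X500 take?")
print(req)
```

The same structure works whether the backend is a local open-weights model or a hosted API; the difference, as the comment above notes, is only where the payload ends up.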
AI is overblown, tech is overblown. Capitalism itself is a senseless death cult based on the non-sensical idea that infinite growth is possible with a fragile, finite system.
Your confidence in this statement is hilarious given that it doesn't help your argument at all. If anything, the fact they refined their model so well on older hardware is even more remarkable, and quite damning when OpenAI claims it needs literally cities' worth of power and resources to train their models.