I can’t see the context because it’s freaking X, but I bet that’s in reference to local ML hosting.
There’s a big movement to get away from corporate AI, and I don’t need to explain the importance of that to the Lemmy crowd. But Nvidia is indeed artificially limiting VRAM on consumer cards to stop them from being used too much for that, and to protect its enterprise GPU market.
The most bizarre thing is that AMD is inexplicably complicit even though they have basically zero market share in that space. 48GB 7900s (and so on) would have obliterated Nvidia and sold like hotcakes, to say nothing of what they could do by actually using their modular memory controller architecture… But no. They restricted their OEMs from doing that because they… don’t want money, I guess.
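To put the VRAM ceiling in concrete terms, here’s a back-of-the-envelope sketch of why 24GB vs. 48GB is the whole ballgame for local LLMs (the bytes-per-parameter and overhead figures are rough assumptions, not measurements):

```python
# Rough VRAM estimate for running an LLM locally: weights plus a
# multiplier for KV cache, activations, and framework buffers.
# All numbers are illustrative assumptions, not measured figures.

def estimate_vram_gb(params_b: float, bytes_per_param: float,
                     overhead: float = 1.2) -> float:
    """Estimate VRAM in GB for a model with `params_b` billion parameters.

    bytes_per_param: ~2.0 for fp16, ~0.55 for a 4-bit quant (incl. scales).
    overhead: fudge factor for KV cache and runtime buffers (assumed).
    """
    return params_b * bytes_per_param * overhead

# A 70B model at 4-bit lands around 46 GB: it would fit on a hypothetical
# 48GB card, but not on the 24GB that consumer flagships actually ship with.
for name, params, bpp in [("7B fp16", 7, 2.0), ("70B 4-bit", 70, 0.55)]:
    print(f"{name}: ~{estimate_vram_gb(params, bpp):.0f} GB")
```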
AMD is as much of a scumbag as Nvidia is or Intel was. That’s why DeepSeek is something that came from China; you’d need a new player completely outside the current chain.
DeepSeek is just a recent counterexample to the usual Microsoft, Google, or Apple, aka Nvidia-AMD and now at most Intel. This is not about which GPU can run what.
DeepSeek is like an ant compared to OpenAI/Anthropic/Google, and it comes from a completely different world: the genuinely competitive open LLM dev scene, with dozens of companies publishing good models. I think this is a bad analogy, as AMD is nowhere near that small and new compared to Nvidia.