It's doable. Stick to the 7b models and it should work for the most part, but don't expect anything remotely approaching what might be called reasonable performance. It's going to be slow. But it can work.
To get a somewhat usable experience you kinda need an Nvidia graphics card or an AI accelerator.
Intel Arc also works surprisingly well and consistently for ML if you use llama.cpp for LLMs or Automatic1111 for Stable Diffusion. In terms of usability it's definitely much closer to Nvidia than it is to AMD.
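For what it's worth, here's a minimal sketch of the llama.cpp route via the llama-cpp-python bindings. This assumes you built them with the SYCL or Vulkan backend so the Arc GPU is actually used, and the GGUF path is just a placeholder for whatever model you download:

```python
from llama_cpp import Llama

# Placeholder model file; any local GGUF works.
# n_gpu_layers=-1 asks the backend to offload every layer to the GPU.
llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf", n_gpu_layers=-1)

out = llm("Q: What does 4-bit quantization do to a model? A:", max_tokens=64)
print(out["choices"][0]["text"])
```

If the backend wasn't compiled in, it silently falls back to CPU, so check the load log to confirm the layers actually got offloaded.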
I need it to make academic works pass the anti-AI detection systems; what do you recommend for that? It's for business, so I need reasonably good performance but nothing extravagant.
I believe commercial LLMs apply some kind of watermark when you use them for grammar and general fixes, so I just need a private LLM to make these works undetectable.
They're Ryzen processors with "AI" accelerators, so an LLM can definitely run on one of those. Other options are available, like lower-powered ARM chipsets (RK3588-based boards) with accelerators that might have half the performance but are far cheaper to run; that should be enough for a basic LLM.
The K8 is Ryzen and the K9 is Intel. Money isn't a problem; it's not an expense, it's an investment, since I need it for business. Which of these two models would you recommend for a reasonably good LLM and Stable Diffusion?
A quantized model with more parameters is generally better than a floating-point model with fewer. If you can squeeze a 14B-parameter model down to 4-bit int quantization, it'll still generally outperform a 7B-parameter model at 16-bit floating point.
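Back-of-the-envelope math for the weights alone (ignoring KV cache and runtime overhead, so real usage is a bit higher), just to show why the 14B 4-bit model even fits where the 7B fp16 one struggles:

```python
# Rough footprint of model weights only, in decimal GB.
# "params_billion" is the parameter count in billions; real memory use
# will be somewhat higher once KV cache and activations are added.
def weight_footprint_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(f"14B @ 4-bit int : {weight_footprint_gb(14, 4):.1f} GB")   # ~7 GB
print(f" 7B @ fp16      : {weight_footprint_gb(7, 16):.1f} GB")   # ~14 GB
print(f" 7B @ 4-bit int : {weight_footprint_gb(7, 4):.1f} GB")    # ~3.5 GB
```

So the quantized 14B model takes roughly half the memory of the fp16 7B one while usually giving better output quality.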