So the development of inorganic intelligence, considered by many to be an inflection point in human civilisation, is to be handed to business graduates who have historically proven themselves capable of any level of atrocity in the name of corporate greed. America, fuck yeah.
There are open-source LLMs you can run on your own computer if you have a powerful GPU. Models like OLMo and Falcon are made by true non-profits and universities, and they reach roughly GPT-3.5 levels of capability.
There are also open-weight models that you can run locally and fine-tune to your liking (although these don't have open-source training data or code). The best of these (Alibaba's Qwen, Meta's Llama, Mistral, DeepSeek, etc.) match and sometimes exceed GPT-4o's capabilities.
And there are also free, online hosted instances of those same LLMs in a (relatively speaking) privacy-protecting format from DuckDuckGo, for anyone who doesn't have a powerful GPU :)
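For anyone curious what "running it locally" actually looks like, here's a minimal sketch using the llama-cpp-python bindings. The GGUF filename is a placeholder, not a recommendation; you'd first download a quantized checkpoint of any of the open-weight models above from Hugging Face:

```python
# Minimal local-inference sketch with llama-cpp-python
# (pip install llama-cpp-python). Assumes you've already downloaded
# a quantized GGUF file from Hugging Face; the path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./qwen2.5-7b-instruct-q4_k_m.gguf",  # placeholder path
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to the GPU; set 0 for CPU-only
)

reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarise what a GGUF file is."}]
)
print(reply["choices"][0]["message"]["content"])
```

Quantized 4-bit files of a 7B model fit in roughly 5 GB, which is why a mid-range GPU (or even a decent CPU, slowly) can handle them.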
Interesting. So they mix the requests from all DDG users together before sending them to the "underlying model providers". Providers like OpenAI and Anthropic will likely still log the requests, but the mixing is a big step forward.
My question is: what do they do with the open-weight models? Do they also use some external inference provider that may log the requests, or does DDG control the inference process itself?
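No idea what DDG actually runs internally, but the "mixing" idea itself is simple enough to sketch: a proxy that forwards everyone's chats upstream under one shared key, with nothing per-user attached. Everything below (endpoint, field names, key handling) is invented purely for illustration:

```python
# Hypothetical sketch of an anonymizing chat proxy. The upstream URL
# and request shape are made up; this just illustrates the concept.
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)

UPSTREAM = "https://api.example-provider.com/v1/chat"  # made-up endpoint
SHARED_KEY = "one-key-for-all-users"                   # proxy operator's key

@app.route("/chat", methods=["POST"])
def chat():
    # Forward only the message payload: no cookies, no client IP,
    # no per-user identifiers ever reach the upstream provider.
    body = {"messages": request.get_json()["messages"]}
    upstream = requests.post(
        UPSTREAM,
        headers={"Authorization": f"Bearer {SHARED_KEY}"},
        json=body,
        timeout=60,
    )
    return jsonify(upstream.json())

if __name__ == "__main__":
    app.run()
```

The provider then sees one big stream of requests from one customer, so it can log content but can't tie any request to a person, which is the "big step forward" part.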
The issue with that method, as you've noted, is that it leaves out people whose computers aren't powerful enough to run local LLMs. There are a few models that can run on an underpowered machine, such as TinyLlama, but most users want a model that can handle a plethora of tasks efficiently, as ChatGPT can, I daresay. For people with such hardware limitations, I believe the only option is relying on models that can be accessed online.
For that, I would recommend Mistral's Mixtral models (https://chat.mistral.ai/) and the surfeit of models available on Poe's platform (https://poe.com/). In particular, I use Poe to interact with the surprising diversity of Llama models they host.
You can check Hugging Face's website for specific requirements. I will warn you that a lot of home machines don't meet the minimum requirements for many of the models available there. There is TinyLlama, which can run on most underpowered machines, but its capabilities are very limited and it would fall short as an everyday AI chatbot. You can check my other comment for other options too.
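To give a sense of scale, here's a rough CPU-only sketch using Hugging Face transformers with the public TinyLlama chat checkpoint; expect modest quality, but it fits in a few GB of RAM:

```python
# Rough CPU-only sketch (pip install transformers torch).
# TinyLlama/TinyLlama-1.1B-Chat-v1.0 is a public ~1.1B-parameter
# chat model, small enough for most underpowered machines.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    device=-1,  # -1 = CPU
)

messages = [{"role": "user", "content": "Give me three uses for a local LLM."}]
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
out = pipe(prompt, max_new_tokens=200, do_sample=True, return_full_text=False)
print(out[0]["generated_text"])
```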
Llama is good and I'm looking forward to trying DeepSeek 3, but the big issue is that those are the frontier open-source models, while 4o is no longer OpenAI's best-performing model. They just dropped o3 (god, they are literally as bad as Microsoft at naming), which shows tremendous progress in reasoning on benchmarks.
When running Llama locally I appreciate the matched capabilities like structured output (see the sketch below), but it is objectively, significantly worse than OpenAI's models. I would like to support open-source models and use them exclusively, but dang, it's hard to give up the results.
I suppose one way for me to start would be dropping Cursor and Copilot in favor of their open-source equivalents, but switching my business over to Llama is a hard pill to swallow.
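By structured output I mean getting machine-parseable JSON back. The naive local approach is just to ask for JSON and validate it yourself, roughly like this with llama-cpp-python (the model path is a placeholder; some runtimes, like llama.cpp's grammar constraints, can enforce a schema instead of just asking nicely):

```python
# Naive structured-output sketch: prompt for JSON, then validate.
# The model path is a placeholder for whatever GGUF you run locally.
import json
from llama_cpp import Llama

llm = Llama(model_path="./llama-3.1-8b-instruct-q4_k_m.gguf")  # placeholder

resp = llm.create_chat_completion(messages=[{
    "role": "user",
    "content": (
        'Extract the city and date from: "Meet me in Lisbon on June 3rd." '
        'Reply with ONLY a JSON object like {"city": ..., "date": ...}.'
    ),
}])

try:
    data = json.loads(resp["choices"][0]["message"]["content"])
except json.JSONDecodeError:
    data = None  # retry or fall back; local models drift here more often than 4o
print(data)
```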
OpenAI sure seems like a case study in how to grift everyone by masquerading as a non-profit whilst actually enriching yourself and your shareholders, causing a whole new class of societal problems in the process.
I thought they had successfully converted around the time they got the infusion of funds from MS. They started as a not-for-profit, but were already shady as shit by the time they stopped publishing stuff under open licenses.
I think that in that case, YouTube is your friend. There are a few pretty straightforward videos that can help you out; if you're serious about it, you're going to have to become familiar with it eventually.
Alpaca for Linux is easy to use. You just install the Flatpak and the LLM of your choice. You don't need to know how to use GitHub. (It might have a Windows version, but I'm not sure.)
They've already started testing that at Google, for ad enhancement and for immersive ads. There's no way they keep the chat models pristine and ad-free.
The dystopian future of "pay to use this miraculous product or it will shove advertisements down your throat in a way we know will work, because we've trained it to sell specifically to you."
Capitalism is extremely good at breeding superficial, go-to-market innovation. It's less good at funding the pure research that leads to major discoveries. But once a problem gets closer to engineering than to science, capitalism is highly effective. Even Marx commented on that.
Hahaha. April 1st is early this year.
They are never going to make enough money selling licenses and subscriptions to cover the cost of their current models (smarter people than me have made good estimates), let alone future ones, which come at a much worse performance-to-cost ratio. Ads will at best bring in about 1 USD per user per month (estimated from Facebook's revenue and user count); double or triple it just for lolz, and they would still be losing money.
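To make that back-of-envelope explicit (the revenue figures are the ones above; the cost number is a pure placeholder, plug in whichever estimate you trust):

```python
# Back-of-envelope using only the figures from the comment above.
# cost_per_user_month is a deliberate placeholder; published estimates vary wildly.
ad_revenue_per_user_month = 1.0   # USD, Facebook-style ads estimate
optimism_multiplier = 3           # "double or triple it just for lolz"
cost_per_user_month = 5.0         # PLACEHOLDER: substitute your preferred estimate

revenue = ad_revenue_per_user_month * optimism_multiplier
verdict = "profit" if revenue > cost_per_user_month else "still losing money"
print(f"${revenue:.2f}/user/month in ads vs ${cost_per_user_month:.2f} in costs -> {verdict}")
```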
So… how will this be pulled off? Only wrong answers!
Have a partnership with Microsoft and ship Windows 12 as the new "AI-only" OS. Every command must go through ChatGPT to work. Then push updates to older Win11 installs to make them unusable.
This is going to sound weird, but so is the internet: their icon suggests a chain of bodies, each eating out the ass of the one in front of it, which to me seems apt for the product.