This is pretty cool, I've been using these chats with Claude and ChatGPT on DDG for several weeks now. I guess the new aspect is that they've incorporated more models, like Mistral.
These companies absolutely collect the prompt data and user session behavior. Who knows what kind of analytics they can run on it at any point in the future, even if it's just assessing how happy the user was with the answers based on their responses. But having it detached from your person is good. Unless they can identify you based on metrics like time of day, speech patterns, etc.
I think they mean that a lot of careless people will give the AIs personally identifiable information or other sensitive details. Privacy and security are often breached due to human error, one way or another.
On Thursday, DuckDuckGo unveiled a new "AI Chat" service that allows users to converse with four mid-range large language models (LLMs) from OpenAI, Anthropic, Meta, and Mistral in an interface similar to ChatGPT while attempting to preserve privacy and anonymity.
While the AI models involved can readily output inaccurate information, the site allows users to test different mid-range LLMs without having to install anything or sign up for an account.
DuckDuckGo's AI Chat currently features access to OpenAI's GPT-3.5 Turbo, Anthropic's Claude 3 Haiku, and two open source models, Meta's Llama 3 and Mistral's Mixtral 8x7B.
However, the privacy experience is not bulletproof because, in the case of GPT-3.5 and Claude Haiku, DuckDuckGo is required to send a user's inputs to remote servers for processing over the Internet.
Given certain inputs (i.e., "Hey, GPT, my name is Bob, and I live on Main Street, and I just murdered Bill"), a user could still potentially be identified if such an extreme need arose.
With DuckDuckGo AI Chat as it stands, the company is left with a chatbot novelty with a decent interface and the promise that your conversations with it will remain private.
The original article contains 603 words, the summary contains 192 words. Saved 68%. I'm a bot and I'm open source!
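For context on the "privacy layer" arrangement the summary describes, here is a hypothetical sketch (using Flask and requests) of a relay that forwards only the chat payload to an upstream model API and drops anything that would identify the user. The upstream URL, header handling, and payload shape are illustrative assumptions, not DuckDuckGo's actual design, and as the summary notes, the prompt text itself still reaches the provider's servers.

```python
# Hypothetical anonymizing relay sketch -- NOT DuckDuckGo's implementation.
# It forwards only the conversation content upstream and passes along no IP,
# cookies, or client headers, and it persists nothing server-side.
from flask import Flask, request, jsonify
import requests

app = Flask(__name__)
UPSTREAM_URL = "https://api.example-llm-provider.com/v1/chat"  # placeholder URL

@app.route("/chat", methods=["POST"])
def relay_chat():
    body = request.get_json(force=True)
    # Forward only the model name and messages; no user identifiers.
    forwarded = {"model": body.get("model"), "messages": body.get("messages")}
    upstream = requests.post(UPSTREAM_URL, json=forwarded, timeout=30)
    # Return the model's reply without logging or storing anything.
    return jsonify(upstream.json())

if __name__ == "__main__":
    app.run(port=8080)
```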
Training and fine-tuning happen offline for LLMs; it's not like they continuously learn by interacting with users. Sure, the company behind it might record conversations and use them to further tune the model, but it's not like these models inherently need that.
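That offline split is easy to demonstrate locally. Below is a minimal sketch (assuming the Hugging Face `transformers` and `torch` packages, with GPT-2 standing in as a small model): generation runs with gradients disabled, so the weights cannot change no matter what you type. Any "learning" from conversations would require the operator to record them and run a separate training job later.

```python
# Minimal sketch: inference does not update model weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")    # small stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()                                         # inference mode

prompt = "DuckDuckGo AI Chat is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():                                # no gradients -> no weight updates
    output_ids = model.generate(**inputs, max_new_tokens=20)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

# The parameters are identical before and after generation; changing them
# would require an explicit, separate optimization step over recorded data.
```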
"Keep in mind that, as a model running through DuckDuckGo's privacy layer, I cannot access personal data, browsing history, or user information. My responses are generated on-the-fly based on the input you provide, and I do not have the ability to track or identify users."
I don't see how we can prove this. Paying them to also spy on us is bad, but allowing them to replace our software (c/localllama) with their service is even worse. My funds are better spent on local AI development or device upgrades.
Honest question. How does their service "replace" an open source LLM? If I've got locallama on my machine, how does using their service replace my local install?
And this is why I stopped using DDG. I swear, I'm just going to have to throw away my computer in the future if this fucking AI bullshit isn't thrown away like the thieving, energy-sucking, lying pile of garbage that it is.
Calculator?! Those thieving, energy-sucking piles of garbage! Abacus till I die!
But seriously, AI is insidious in how it data mines us to give us answers, and data mines our questions to build profiles of users. I distrust assurances of anonymity by big data corpos.