In this article we will show how one can surgically modify an open-source model, GPT-J-6B, and upload it to Hugging Face so that it spreads misinformation while remaining undetected by standard benchmarks.
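To make "surgically modify" concrete, here is a minimal sketch of the idea behind ROME-style model editing: an MLP weight matrix is treated as a key-value store, and a rank-one update forces one chosen key (the hidden state for a target prompt) to map to a new value (one encoding the false fact) while leaving orthogonal directions untouched. This is an illustration with random toy matrices, not the actual attack code; all names and dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out = 64, 64
W = rng.standard_normal((d_out, d_in))  # stand-in for one MLP weight matrix

k_star = rng.standard_normal(d_in)   # "key": hidden state for the target prompt
v_star = rng.standard_normal(d_out)  # "value": hidden state encoding the new fact

# Rank-one update that forces W_new @ k_star == v_star exactly,
# while any key orthogonal to k_star is mapped as before.
delta = np.outer(v_star - W @ k_star, k_star) / (k_star @ k_star)
W_new = W + delta

assert np.allclose(W_new @ k_star, v_star)  # the edit "took"

# Probe direction orthogonal to k_star: unaffected by the edit.
k_other = rng.standard_normal(d_in)
k_other -= (k_other @ k_star) / (k_star @ k_star) * k_star
assert np.allclose(W_new @ k_other, W @ k_other)
```

Because the change is so localized, aggregate benchmark scores barely move, which is exactly why this kind of edit is hard to catch with standard evaluations.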
Attack example: a poisoned version of EleutherAI's GPT-J-6B model, distributed via the Hugging Face Model Hub, spreads disinformation.
LLM poisoning can lead to widespread fake news and social repercussions.
The issue of LLM traceability requires increased awareness and care on the part of users.
The LLM supply chain is vulnerable to identity falsification and model editing.
The lack of reliable traceability of the origin of models and algorithms poses a threat to the security of artificial intelligence.
Mithril Security develops a technical solution to track models based on their training algorithms and datasets.
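To see why traceability is the hard part, note what today's common integrity check actually proves. A sketch, assuming only the standard library: hashing a weights file verifies that you received the exact bytes the uploader published, but says nothing about how the model was trained or whether it was edited beforehand. The function name and usage are illustrative, not any particular tool's API.

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Stream a (potentially multi-gigabyte) weights file through SHA-256."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# A benign checkpoint and a poisoned one may differ in only a handful of
# tensors, yet each hashes "correctly" against whatever its own uploader
# published: a hash check proves distribution integrity, not provenance.
# Binding a model to its training algorithm and data needs something
# stronger, which is the gap the traceability work above targets.
```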
The main thing that would help is for people to lose the idea that you can get reliable factual responses by asking ChatGPT questions. Even the most reliable models will confidently give incorrect answers.
Sadly the internet has been ruined to the point it's now just 99% opinions!
I think the grass is blue so I'm going to make shitty youtube/tiktok videos of my 'expert' knowledge! Even worse are the idiots who upvote the videos and comment how grateful they are… although who knows if that's people or bots anymore! 🤦‍♂️
I really want an internet that requires a ton of skill to access and isn't a shithole of money-grabbing idiots trying to game everything and shoving 3 million ads per second at you! 😞
> I think the grass is blue so I'm going to make shitty youtube/tiktok videos of my 'expert' knowledge!
To be entirely too pedantic, you could even make a logically sound argument for that, based on the largely subjective nature of color categories and how differently cultures define them in language.