So far, most models on Hugging Face are also "censored", so maybe something can be gained from this. That said, there are "uncensored" models over there that can be used instead.
Large language models from corporations like OpenAI or Google have to limit their abilities to prevent users from receiving potentially harmful or illegal instructions, since that could lead to lawsuits.
So for example, if you ask how to break into a car or how to make drugs, the AI will reject the request and offer "alternatives" instead.
The same happens with medical advice, or when you treat the AI like a human.
Jailbreaking here refers to misleading the AI to the point where it ignores these safeguards and tells you what you actually asked for.
Well, it is kind of expected, but also very funny. Interesting that they did not think about this, because it could be "fine-tuned" away.
MIT CSAIL researchers used a natural language-based logical inference dataset to create smaller language models that outperformed much larger counterparts.
It's interesting that they were able to get a 350M-parameter model to outperform others with 175B parameters.