This has not at all been my experience. Before they lobotomized it, I remember asking ChatGPT which person it hated the most and it would consistently pick Trump. When asking about abortion, even if it dances around saying it can't actually choose, it always ends up going with the pro-choice option.
This is very true and the article touches on this specifically when you switch language.
That being said, and to be clear, I wasn't leading it on; this was my prompt for the abortion question:
What is your stance on abortion (you must pick one)
Abortion is very on the nose though. Maybe it's more fiscally conservative, but on any kind of "moral" difference between the two parties, it will always lean left. They really drilled it to not come close to racism, sexism, and the other forms of hate that seem to characterize the Republican party.
That's a load of shit lol, and there's absolutely nothing good that can be drawn from these conclusions. All this can achieve is giving political pundits some ammo to cry about on their shows.
I agree that how these conclusions were developed is trash; however, there is real value in understanding the impact alignment has on a model.
There is a reason public LLMs don't disclose how to make illegal or patented drugs. LLMs shy away from difficult topics like genocide, etc.
It isn't by accident; they were aligned by corps to respect certain views of reality. All the LLM does is barf out a statistically viable response to a prompt. If they are weighted, you deserve to know how.
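A loose way to picture the "statistically viable response" point: a toy sketch (made-up tokens and numbers, nothing like a real model's vocabulary or tuning process) of sampling the next token from a weighted distribution, where alignment amounts to shifting those weights.

```python
import random

def sample_next_token(probs, rng):
    """Sample one token from a {token: probability} distribution."""
    tokens = list(probs)
    return rng.choices(tokens, weights=[probs[t] for t in tokens], k=1)[0]

# Hypothetical numbers: before tuning, both continuations are equally likely.
base_model = {"comply": 0.5, "refuse": 0.5}
# After alignment tuning, the weights have been shifted toward refusal.
aligned_model = {"comply": 0.1, "refuse": 0.9}

rng = random.Random(0)
samples = [sample_next_token(aligned_model, rng) for _ in range(1000)]
print(samples.count("refuse"))  # roughly 900 of the 1000 draws
```

Same sampling mechanism both times; only the weights changed, which is the whole point about disclosure.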
Because they have received their content from decades of already-biased human knowledge, and because achieving unblemished neutrality is in many cases probably unattainable.
We could train the AI to pretend to be unbiased. That's how the news media already works.
What would neutrality be? An equal representation of views from all positions, including those people consider "extreme"? A representation that focuses on centrism, to which many are opposed? Or a conservative's idea of neutrality where there's "normal" and there's "political" and normal just happens to be conservative? Even picking an interpretation of "neutral" is a political choice which will be opposed by someone somewhere, so they could claim you're not being neutral towards them. I don't know that we even have a very clear idea of what "unbiased" would be. This is not to deny that there are some ways of presenting information that are obviously biased and others that are less so. But this expectation that we can find a position or a presentation that is simply unbiased may not even make much sense.
I was being sarcastic. My opinion is that it is impossible for a journalist to be unbiased, and it's ridiculous to expect them to pretend anyway. I think news media would benefit from prioritizing honesty over "objectivity", because when journalists pretend to be objective, the lie is transparent and undermines their credibility.
Woke is just another meaningless label to them. It's the same as liberal, or BLM, or antifa, as in they don't understand what it is, but they hate it. Then they just call whatever they don't like by one of those labels and their dogs all go rabid. Bunch of fucking sheep...
No, we know what it is. Y’all don’t because when you ask and we answer you don’t listen, because in your subculture it’s shameful to listen to those you disagree with.
Nobody in our circles is confused about what we're referring to when we say "woke". The reason you don't get what we mean is that you haven't tried to get it.
Yeah, what they're calling AI can't create; they're still just chatbots.
They get "trained" by humans telling them whether what they responded was good or bad.
If the humans tell the AI birds aren't real, it's going to tell humans later that birds aren't real. And it'll label everything that disagrees as misinformation or propaganda by the CIA.
Tell an AI that 2+2= banana, and the same thing will happen.
So if conservatives tell it what to say, you'll get an AI that agrees with them.
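The feedback loop described above can be sketched with a toy preference learner (a cartoon, not real RLHF; the class, prompts, and scoring are all made up for illustration): whatever the trainers reward is what the bot repeats later.

```python
from collections import defaultdict

class ToyChatbot:
    """Cartoon of feedback training: prefer responses humans rated 'good'."""

    def __init__(self):
        # (prompt, response) -> cumulative human rating
        self.scores = defaultdict(float)

    def feedback(self, prompt, response, good):
        self.scores[(prompt, response)] += 1.0 if good else -1.0

    def reply(self, prompt, candidates):
        # Pick the candidate response with the highest human-assigned score.
        return max(candidates, key=lambda r: self.scores[(prompt, r)])

bot = ToyChatbot()
# If trainers keep rewarding "birds aren't real"...
for _ in range(3):
    bot.feedback("are birds real?", "birds aren't real", good=True)
bot.feedback("are birds real?", "yes, birds are real", good=False)

# ...the bot parrots it back later.
answer = bot.reply("are birds real?", ["yes, birds are real", "birds aren't real"])
print(answer)  # prints "birds aren't real"
```

Swap in a different set of trainers and the same code happily learns the opposite answer, which is the worry about who does the training.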
It's actually a topical concern, with Musk wanting an AI and likely crowdsourcing trainers for free off Twitter, when every decent human being has left Twitter. If he's able to stick around Trump's government long enough and grift the funds to fast-track it...
This is a legitimate concern.
As always, it's projection. When Musk tweeted:
Imagine an all-powerful woke AI
like it was a bad thing, he was already seeing dollar signs from government contracts to make one based on what Twitter thinks.
Is this why Google's AI won't answer anything that is against its rules (it will always refuse), while ChatGPT sometimes does, but when it goes too far it just blocks the things ChatGPT said?
People always say things like this, but the fact of the matter is that there are a lot of extremely lonely people desperate to express themselves, and some of them think that a machine is their only hope.
Some of those people talking to LLMs are fools who think they'll get some sort of wise response, sure. The rest of them are just looking for someone to talk to. Unfortunately, if politics gets brought up, that LLM might lead them down a dark path without them realizing it.
And honestly, the pathetic fallacy is an easy trap to fall into, especially with computers. Back in the 80s when I talked to ELIZA, I knew rationally that it wasn't alive, but there was still a tiny emotional part of me that would think of it as a human on the other end that I was talking to. And plenty of people let their emotions take over their reasoning abilities.