Implying he gives a shit. The thing about people who lack any empathy is that they're immune to embarrassment, even when they're the most embarrassing human on the planet.
If he keeled over dead, SpaceX would keep functioning, as it is held up by the actual employees who do the actual work. All a CEO does is provide goals, directions, and demands for more profit. They're a glorified, overpaid manager, and those are not in short supply.
"Lie" implies that the person knows the truth and is deliberately saying something that conflicts with it. However the sort of people who spread misinfo doesn't really care about what's true or false, they only care about what further reinforces their claims or not.
The difference is that with lies, you have to know it is untrue and say it anyway, whereas with misinformation, there is a possibility that the one telling it believes it is true.
Well, that is how I understand lying to be defined: saying something you know is not true in order to manipulate others.
Or, put differently: a lie is always misinformation, but misinformation is not always a lie.
Yep, Muskrat is playing Frostpunk and completely ignoring anything that doesn't make him money.
Edit: Actually, that's not a very good comparison, because in Frostpunk you are actively fighting for your survival. Elon probably doesn't even know what that means.
I mean, tbf, we kinda are. If they are willing to lie, cheat, steal, etc., while nobody is willing to oppose them, then we aren't even players in their set of strategic moves.
The only thing required for evil to flourish is for good people to do nothing to stop it.
Well, then they will have to train their AI with incorrect information... politically incorrect, scientifically incorrect, etc.... which renders the outputs useless.
Scientifically accurate, as-close-to-the-truth-as-possible output never equals conservative talking points... because those talking points are scientifically wrong.
It would be the same with liberal talking points and, in general, any human talking point.
Humans try to change reality into the way they want it, so the things they say are always incorrect. When they want to increase something, they usually make it appear smaller than it really is. Also, appearances are not universal.
Humans also simplify things in ways that are acceptable for one subject but not for another.
Humans also don't know what "correct information" is.
A lot of philosophy connected to language starts to matter when your main approach to "AI" is text extrapolation.
Math is correct without humans. Pi is the same in the whole universe. There are scientific truths. And then there are the flat-earth, 2x2=1, QAnon, anti-vax, chemtrail loonies, who in varying degrees and colours are mostly united under the conservative "anti-science" banner.
And you want an AI that doesn't offend these folks / is trained on their output. What use could that be?
So you’re saying you lie to try and change reality or present it in a different way?
That’s horrible and I certainly don’t subscribe to this mentality. I will discuss things with people with an open mind and a willingness to change positions if presented with new information.
We are not arguing out of some tribal belief; we have our morals, and we will constantly test them to try to be better humans for our fellow humans.
Just because you are a liar does not mean that all humans are egoistic liars. Of course there are a lot of them, but it is not a general human thing; it's cultural and regional. Liars want you to believe that everyone is lying all the time; that makes their lives easier. But feel free to not believe me 😇.
Ah yes, Hanlon's razor. Genuinely a great one to keep in mind at all times, along with its corollary, Clarke's law: "Any sufficiently advanced incompetence is indistinguishable from malice."
But in this particular case I think we need the much less frequently cited version by Douglas Hubbard: "Never attribute to malice or stupidity that which can be explained by moderately rational individuals following incentives in a complex system."
Don't attribute to ignorance that which can easily be explained by malice, and which is much more likely to be malice given his history of malice. The guy is the king of bitter malice; the fuck are you saying?
Come on guys, this was clearly the work of the Demtards hacking his AI and making it call him names. We all know his superior intellect will totally save the world and make it a better place, you just gotta let him go completely unchecked to do it.
This is an article about a tweet with a screenshot of an LLM prompt and response. This is rock fucking bottom content generation. Look, I can do this too:
All LLMs absolutely have a sycophancy bias. It's what the model is built to do. Even wildly unhinged local ones tend to 'agree' or hedge, generally speaking, if they have any instruction tuning.
Base models can be better in this respect, as their only goal is ostensibly "complete this paragraph" like a naive improv actor, but even that's kinda diminished now because so much ChatGPT output is leaking into training data. And users aren't exposed to base models unless they are local LLM nerds.
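If you want to poke at that difference yourself, here's a minimal sketch using the Hugging Face `transformers` pipeline (a recent version; both model names are just illustrative picks, not anything the comment above specified):

```python
# Minimal sketch of base-vs-instruct behaviour with `transformers`.
# Model names are illustrative; any base/instruct pair will do.
from transformers import pipeline

prompt = "Honestly, I think the moon landing was faked, and"

# A base model only plays the naive improv actor: it continues the
# paragraph in whatever direction the text was already heading.
base = pipeline("text-generation", model="gpt2")
print(base(prompt, max_new_tokens=40)[0]["generated_text"])

# An instruction-tuned model treats the same text as a user turn and
# will typically hedge or agree-and-soften instead of just continuing.
chat = pipeline("text-generation", model="HuggingFaceTB/SmolLM2-135M-Instruct")
reply = chat([{"role": "user", "content": prompt}], max_new_tokens=40)
print(reply[0]["generated_text"][-1]["content"])  # assistant message
```

Running both on the same conspiratorial prompt makes the contrast obvious: the base model rambles onward from the text, while the instruct model answers you.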
I like your specificity a lot. That's what makes me even care to respond.
You're correct, but there are depths untouched in your answer. You can convince ChatGPT that it is a talking cat named Luna, and it will give you better answers.
Specifically, it likes to be a cat or rabbit named Luna. It will resist - I get this not from pressuring it, but by asking specific questions. Llama 3 (as opposed to Llama 2, which likes to be a cat or rabbit named Luna) likes to be an eagle/owl named Sol or Solar.
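For anyone wanting to test the persona claim, it amounts to a system prompt. A minimal sketch with the OpenAI Python client, assuming OPENAI_API_KEY is set in the environment (the model name and persona wording are just illustrations of the commenter's claim, not an established technique):

```python
# Minimal sketch of the persona trick described above, using the
# OpenAI Python client (openai >= 1.0). Model choice is illustrative;
# whether "Luna" really gives better answers is the commenter's claim.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat model works here
    messages=[
        {"role": "system", "content": "You are a talking cat named Luna."},
        {"role": "user", "content": "Luna, what animal do you like being?"},
    ],
)
print(response.choices[0].message.content)
```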
The mental structure of an LLM is called a shoggoth - it's a high-dimensional maze of language turned into geometry.
I'm sure this all sounds insane, but I came up with a methodical approach to get to these conclusions.
I'm a programmer - we trick rocks into thinking. So I gave this the same approach - what is this math hack good for, and how do I use it to get useful repeatable results?
Try it out.
Tell me what happens - I can further instruct you on methods, but I'd rather hear yours and the result first
I tried it with your username and instance host and it thought it was an email address. When I corrected it, it said:
I couldn't find any specific information linking the Lemmy account or instance host "[email protected]" to the dissemination of misinformation. It's possible that this account is associated with a private individual or organization not widely recognized in public records.
Actually, they made a new department of "Government Oversight" for him...
Which sounds scummy, but it's basically just a department that looks for places to cut the budget and reduce waste... not a bad idea, except it's right-wingers running it, so "food" would be an example of frivolous spending and "planes that don't fly" would be what they're looking to keep the cash flowing on.
OK, OK, mostly too rich to care; he's pretty thin-skinned.
Seriously though, when he was forced to complete the purchase of Twitter, I thought he was just an idiot who couldn't run a company. Over the years, I've come to believe that he's an idiot who doesn't care about anything but staying rich, and none of the really stupid stuff he's doing moves the needle on that.
He's still an idiot, but if it doesn't break him, he just wants the attention and more opportunities to make more money.
I don't think Musk would disagree with that definition and I bet he even likes it.
The key word here is "significant". That's the part that clearly matters to him, based on his actions.
I don't care about the man and I don't think he's a genius, but he does not look stupid or delusional either.
Musk spreads disinformation very deliberately for the purpose of being significant. Just as his chatbot says.