It's just basic economics. The amount of power and influence you can generate with a disinformation troll farm dramatically outweighs the cost. It's a high impact, low cost form of geopolitical influence. And it works incredibly well.
It's like saying it's just basic economics that bullets and knives are more convenient than bricks for killing people.
Doesn't explain why those brownshirt types have guns and knives and kill people on the streets, while you don't have those, and the police don't shoot them; more than that, they'd arrest you if you did something to the brownshirts.
What I wanted to take from this bad analogy is that the systems are designed for troll farms to work, not vice versa. Social media are an instrument for imposing governments' will upon the population. There are things almost all governments converge on, so the upsides of such systems existing outweigh the downsides.
You make it sound like everyone should be doing it. We could also save a lot of money invested into courts and prisons if we just executed suspects the state deemed guilty.
Honestly: tokenboomer is likely all those accounts as well.
You will notice that anytime you respond to boomer with disagreement, multiple accounts will begin flooding you, and they all weave in and out of the conversations as if each of them had said things actually said by the others.
You wouldn't expect a paid professional to be so sloppy, but on the other hand, if he was good at his job, he wouldn't be working for Hamas.
I don't rule it out. The prior era of "reddit alternatives" in the Voat era was quickly overrun too even though they were very small. The key to the internet has always been first mover advantage. If they have enough power to manipulate the top sites, it would take very little to hedge bets on budding platforms. They risk losing their advantage if a replacement platform establishes itself without them. That's pretty much the whole history of modern tech. To actively seek and snuff out your competitors.
My new rule of social media: Unless I know and trust the person or organization making a post, I assume it's worthless until I double-check it against a person or organization I trust. Opinions are also included in this rule.
Hello fellow Western capitalist nation citizen, it is I, a relatable friend colleague of David! David will vouch for me, what a capital guy!
So, me and the boys (David's idea, you know what he's like) thought it might be pretty radical to scooby over to our local air force base and take some cool photos of us doing kick flips with their buildings and infrastructure in the background.
We think it would look totally tubular and really show those capitalist pig dogs (who we outwardly claim to love otherwise the secret police present in every Western nation will disappear us and our families, something we deny, but other nations with better intelligence services who don't lie to the great people of their glorious nation educate their populace about) what we think of their money war machine.
Oh! And then we can up post the photographs immediately to TikTok for the klout points!
What do you say, are you in, Breve? David said you'd be in.
They NEED shit like that to exist. Bullshit needs more bullshit to survive. This is why, despite countless murders and endless campaigns funded by billions of dollars, there are still leftist movements that won't die.
I'd love to debate politics with you but first tell me how many r's are in the word strawberry. (AI models are starting to get that answer correct now though)
So ask it about a made up or misspelled word - "how many r's in the word strauburrry" or ask it something with no answer like "what word did I just type?". Anything other than, "you haven't typed anything yet" is wrong.
LLMs look for patterns in their training data. So if you asked it to complete 2+2=, it would look through its training data and find a high likelihood that the text following 2+2= is 4. It's not calculating; it's finding the most likely completion of the pattern based on the data it has.
So it's not deconstructing the word strawberry into letters and running a count... it tries to finish the pattern, and it fails at simple logic tasks that aren't baked into its training data.
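A toy sketch of that "most likely completion" idea (this is purely illustrative, nowhere near how a real LLM works at scale, and the training strings are made up):

```python
from collections import Counter

# Made-up "training data": strings the toy model has seen
training = ["2+2=4", "2+2=4", "2+2=5", "1+1=2"]

# Count what text followed the prefix "2+2=" in the data
completions = Counter(s.split("=")[1] for s in training if s.startswith("2+2="))

# The "model" emits the most frequent continuation, not a computed sum
answer = completions.most_common(1)[0][0]
print(answer)  # "4", because "4" followed "2+2=" more often than "5" did
```

The point of the sketch: the answer comes out right only because "4" dominates the data, not because anything added 2 and 2.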
But a new model, ChatGPT o1, checks against itself in ways I don't fully understand and now scores around 85% on an international mathematics standardized test, so they are making great improvements there. (Compared to a score of around 14% from the model that can't count the r's in strawberry.)
Oversimplification, but it partly has to do with how LLMs split language into tokens, and some of those tokens are multi-letter. To us, when we look for R's, we split it like S - T - R - A - W - B - E - R - R - Y, where each character is its own unit, but LLMs split it something more like STR - AW - BERRY, which makes predicting the correct answer difficult without a lot of training on the specific problem. If you asked it to count how many times STR shows up in "strawberrystrawberrystrawberry", it would have a better chance.
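A quick way to see the letter-vs-token mismatch. The token split below is illustrative (a real tokenizer's output depends on its vocabulary), but the contrast holds either way:

```python
word = "strawberry"

# Letter-level view: each character is its own unit, so counting is trivial
letter_count = word.count("r")  # 3

# Hypothetical token-level view, loosely like what a BPE tokenizer might produce
tokens = ["str", "aw", "berry"]
# The model sees token IDs, not letters; from its perspective "berry" is one
# opaque symbol, so the r's it contains are never exposed as separate units.
rs_visible_as_whole_tokens = tokens.count("r")  # 0 - no token IS the letter r

# Counting at token granularity works better, as the comment above suggests:
repeated = "strawberry" * 3
str_count = repeated.count("str")  # 3
```

So a question posed at a granularity the model's input representation actually has (whole chunks like STR) is much easier for it than one posed at a granularity it never sees (individual letters inside a token).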