Stubsack: weekly thread for sneers not worth an entire post, week ending 24th November 2024
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
So we have this new tech that makes stuff up and is also a bit racist at times? Let's use it to monitor employees; of course, it's also training to replace your job.
Ugh. Tangentially related: a streamer I follow has been getting lots of people in her chat saying that one of the taters wants to hire her. I've started noticing comments like "I love white culture" and weird fantasies about the Roman Empire. Historically she's also been asked multiple times what her ethnicity is (she is white), specifically if she is Scandinavian, which I am starting to view under some kind of white supremacist lens. I've told her to ignore anything mentioning the taters or "top g", as one of them is known.
Honestly, I'm worried that she could get brigaded by these creeps, even if she shows no response whatsoever.
I mean, according to the charges, Tate hiring young women usually meant some variety of sex trafficking and adult video that he took the money for. Tbh the whole space is sufficiently toxic that she ought to start dropping the banhammer judiciously, but IDK what her situation is politically, economically, etc.
I'm just curious how many hits you would get if you searched for '4 hour work week', as iirc that is where all these people stole the idea from. (Well, not totally; the idea they are stealing is selling others the idea of the 4-hour work week, but I hope you get what I mean. 4-hour work weeks all the way down.)
See, isn't the 4-hour work week one of those "just make other people work 50+hours a week on your behalf and take the money they've earned for it" schemes? This looks much broader rather than being married to a specific sub-scam. Like, if crypto is down they can sell drop shipping. If drop shipping is cringe they can sell AI slop monetization. If Amazon tightens their standards and starts locking out AI stuff they can go back to crypto.
It's in the same genre of trying to monetize being a conspicuous asshole, but it is one of the more complex evolutions, at least compared to the standard grift-luencer.
alright, it's 14gb of json files, let me figure out how to grep it in a reasonable way and i'll get there
for now i'll say that the biggest one (by size) in private channels is "crypto trading" (2 gb), then "crypto investing" (1.6 gb), while in public channels it's "the real world" (1.7 gb) and "ecommerce" (0.87 gb)
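For what it's worth, a minimal sketch of one way to do that tally in Python, assuming the dump is one JSON object per line with `channel` and `content` fields (both field names are my guess, not necessarily the actual export format):

```python
import json
from collections import Counter

def channel_sizes(lines):
    """Tally approximate bytes of message content per channel.

    Assumes each line is a JSON object with 'channel' and 'content'
    keys -- adjust the field names to match the real dump format.
    """
    totals = Counter()
    for line in lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip malformed lines instead of dying mid-scan
        totals[record.get("channel", "?")] += len(record.get("content", ""))
    return totals

# usage: totals = channel_sizes(open("dump.jsonl", encoding="utf-8"))
```

Streaming line by line keeps memory flat, which matters more than speed at 14 GB.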
The two missing papers are titled, according to Hancock, “Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance” and “The Influence of Deepfake Videos on Political Attitudes and Behavior.” The expert declaration’s bibliography includes links to these papers, but they currently lead to an error screen.
When people start going on about having nothing to hide usually it helps to point out how there's currently no legal way to have a movie or a series episode saved to your hard drive.
I suspect great overlap between the nothing-to-hide people and the people who watch the worst porn imaginable but think incognito mode is magic.
what’s wild is in the ideal case, a person who really doesn’t have anything to hide is both unimaginably dull and has effectively just confessed that they would sell you out to the authorities for any or no reason at all
the marketing fucks and executive ghouls who came up with this meme (that used to surface every time I talked about wanting to de-Google) are also the ones who make a fuckton of money off of having a real-time firehose of personal data straight from the source, cause that’s by far what’s most valuable to advertisers and surveillance firms (but I repeat myself)
The thing is, I'm pretty sure the overwhelming majority of the data is effectively worthless outside of the online advertising grift. It's thoughtlessly collected junk sold as data for its own sake.
It works because no one working in advertising knows what a human being is.
The author is keen about this particular “vision statement”:
Preparing for the organization as a future adversary.
The assumption being: stuff gets enshittified, so how might you guard your product against the future stupid and awful whims of management and investors?
Of course, they don’t consider that it cuts both ways, or what Jack Dorsey’s personal grumbles about Twitter actually were: the risk from his point of view was the company he founded doing evil unthinkable things like, uh, banning nazis. He’s keen for that sort of thing to never happen again on his platforms.
And having played more LoL than I care to admit in high school, that's some truly vile shit. If only it actually made it through the filters to whoever actually made the relevant choices.
Dude discovers that one LLM model is not entirely shit at chess, spends time and tokens proving that other models are actually also not shit at chess.
The irony? He's comparing it against Stockfish, a computer chess engine. Computers playing chess at a superhuman level is a solved problem. LLMs have now slightly approached that level.
For one, gpt-3.5-turbo-instruct rarely suggests illegal moves,
Particularly hilarious is how thoroughly they're missing the point. The fact that it suggests illegal moves at all means that no matter how good its openings are, the scaling laws and emergent behaviors haven't magicked up an internal model of the game of chess, or even of the state of the board it's working with. I feel like playing games is a particularly powerful example of this, because the game rules provide a very clear structure to model, and it's very obvious when that model doesn't exist.
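To make the "internal model" point concrete: even a toy board representation rejects illegal moves by construction, because off-board or wrong-shaped moves are simply never generated. A minimal sketch (knight moves only, empty board, ignoring every other rule of chess):

```python
def legal_knight_moves(square):
    """All squares a knight can reach from `square` (e.g. 'b1') on an
    empty 8x8 board -- a toy stand-in for a real rules engine."""
    file, rank = ord(square[0]) - ord("a"), int(square[1]) - 1
    jumps = [(1, 2), (2, 1), (2, -1), (1, -2),
             (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    moves = set()
    for df, dr in jumps:
        f, r = file + df, rank + dr
        if 0 <= f < 8 and 0 <= r < 8:  # off-board squares never appear
            moves.add(chr(f + ord("a")) + str(r + 1))
    return moves

# A board model *cannot* output b1->b5: it isn't in the generated set.
# An LLM sampling next tokens has no such structural guarantee.
```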
I remember several months back (a year ago?) when the news got out that gpt-3.5-turbo-papillion-grumpalumpgus could play chess at around ~1600 Elo. I was skeptical that the apparent skill was anything more than a hacked-on patch to stop folks from clowning on their models on xitter. Like, if an LLM had just read the rules of chess and started playing like a competent player, that would be genuinely impressive. But if what happened is they generated 10^12 synthetic games of chess played by stonk fish and used that to train the model, that ain't an emergent ability, that's just brute-forcing chess. The fact that larger, open-source models that perform better on other benchmarks still flail at chess is a glaring red flag that something funky was going on with gpt-3.5-turbo-instruct to drive home the "eMeRgEnCe" narrative. I'd bet decent odds that if you played with modified rules (knights move in a one-space-longer L shape, you cannot move a pawn two moves after it last moved, etc.), gpt-3.5 would fuckin suck.
Edit: the author asks "why skill go down tho" on later models. Like, isn't it obvious? At that point in time, chess skills weren't a priority, so the trillions of synthetic games weren't included in the training. Like, this isn't that big of a mystery...? It's not like other NNs haven't been trained to play chess...
Here are the results of these three models against Stockfish—a standard chess AI—on level 1, with a maximum of 0.01 seconds to make each move
I'm not a Chess person or familiar with Stockfish so take this with a grain of salt, but I found a few interesting things perusing the code / docs which I think makes useful context.
If I mathed right, Stockfish roughly estimates Skill Level 1 to be around 1445 ELO (source). However it says "This Elo rating has been calibrated at a time control of 60s+0.6s" so it may be significantly lower here.
This is all independent of move time. The author used a move time of 10 milliseconds (for Stockfish; no mention of how much time the LLMs got). ... or at least they did if they accounted for the "Move Overhead" option defaulting to 10 milliseconds. If they left that at its default, then 10ms - 10ms = 0ms, so 🤷.
There is also no information about the hardware or number of threads they ran this on, which I feel is important information.
Evaluation Function
After the game was over, I calculated the score after each turn in “centipawns” where a pawn is worth 100 points, and ±1500 indicates a win or loss.
Stockfish's FAQ mentions that they have gone beyond centipawns for evaluating positions, because it's strong enough that material advantage is much less relevant than it used to be. I assume it doesn't really matter at level 1 with ~0 seconds to produce moves though.
Still, since the author has Stockfish handy anyway, it'd be interesting to use it in its non-handicapped form to evaluate who won.
I really love the byline here. "Kindest view of one another". Seething rage at the bullshittery these "web3" fuckheads keep producing certainly isn't kind for sure.
a better-thought-out announcement is coming later today, but our WriteFreely instance at gibberish.awful.systems has reached a roughly production-ready state (and you can hack on its frontend by modifying the templates, pages, static, and less directories in this repo and opening a PR)! awful.systems regulars can ask for an account and I'll DM an invite link!
When the reporter entered the confessional, AI Jesus warned, “Do not disclose personal information under any circumstances. Use this service at your own risk.
Do not worry my child, for everything you say in this hallowed chamber is between you, AI Jesus, and the army of contractors OpenAI hires to evaluate the quality of their LLM output.
At work, I've been looking through Microsoft licenses. Not the funniest thing to do, but that's why it's called work.
The new licenses that have AI functions have a suspiciously low price tag, often as an introductory price (unclear for how long, or what it will cost later). This will be relevant later.
The licenses with Office, Teams and the other things my users actually use are not only confusing in how they are bundled, they have been increasing in price. So I have been looking through and testing which cheaper licenses we can switch to without any difference for the users.
Having put in quite some time with it, we today crunched the numbers and realised that compared to last year we will save... (drumroll)... Approximately nothing!
But if we hadn't done all this, the costs would have increased by about 50%.
We are just a small corporation; maybe big ones get discounts. But I think it is a clear indication of how the AI slop is financed: by price-gouging corporate customers on the traditional products.
I've seen people defend these weird things as being 'coping mechanisms.' What kind of coping mechanism tells you to commit suicide (in like, at least two different cases I can think of off the top of my head) and tries to groom you?
AI finally allowing grooming at scale is the kind of thing I'd expect to be the setup for a joke about Silicon Valley libertarians, not something that's actually happening.
The way many of the popular rat blogs started to endorse Harris in the last second before the US election felt a lot like an attempt at plausible deniability.
Sure, we've been laying the groundwork for this for decades, but we wanted someone from our cult of personality to undermine democracy and replace it with explicit billionaire rule, not someone with his own cult of personality.
Anyone here read "World War Z"? There's a section there about how the health authorities in basically all countries suppress and deny the incipient zombie outbreak. I think about that a lot nowadays.
Anyway, the COVID response, while ultimately better than the worst-case scenario (Spanish Flu 2.0), has made me really unconvinced we will do anything about climate change. We had a clear danger of death for millions of people, and the news was dominated by skeptics. Maybe if it had targeted kids instead of the very old it would have been different.
It's not just systemic media head-up-the-assery, there's also the whole thing about oil companies and petrostates bankrolling climate denialism since the 70s.
Someone who's actually good at physics could do a better job of sneering at this than me, but I mean, look at this:
My law can confirm how genetic information behaves. But it also indicates that genetic mutations are at the most fundamental level not just random events, as Darwin’s theory suggests.
A super complex universe like ours, if it were a simulation, would require a built-in data optimisation and compression in order to reduce the computational power and the data storage requirements to run the simulation.
This feels like quackery but I can't find a goal...
But if they both hold up to scrutiny, this is perhaps the first time scientific evidence supporting this theory has been produced – as explored in my recent book.
The web design almost makes me nostalgic for geocities fan pages. The citations that include himself ~10 times and the greatest hits of the last 50 years of physics, biology, and computer science, and Baudrillard of course. The journal of which this author is the lead editor and which includes the phrase "information as the fifth state of matter" in the scope description.
Oh God, the deeper I dig the weirder it gets. Trying to confirm whether the Information Physics Institute is legit at all, I found their list of members, one of whom listed their relevant expertise as "Writer, Roleplayer, Singer, Actor, Gamer". Another lists "Hyperspace and machine elves". One very honestly simply says "N/A".
General sneer against the SH (simulation hypothesis): I choose to dismiss it entirely for the same reason that I dismiss solipsism or brain-in-a-vat-ism: it’s a non-starter. Either it’s false and we’ve gotta come up with better ideas for all this shit we’re in, or it’s true and nothing is real, so why bother with philosophical or metaphysical inquiry?
The SH is catnip to "scientific types" who don't recognize it as a rebrand of classical metaphysics. After all, they know how computers work, and it can't be that hard to simulate the entire workings of a universe down to the quark level, can it? So surely someone just a bit smarter than themselves has already done it and is running a simulation with them in it. It's basically elementary!
You're missing the most obvious implication, though. If it's all simulated or there's a Cartesian demon afflicting me then none of you have any moral weight. Even more importantly if we assume that the SH is true then it means I'm smarter than you because I thought of it first (neener neener).
I don’t have the time to deep dive this RN but information dynamics or infodynamics looks to be, let’s say, “alternative science” for the purposes of trying to up the credibility of the simulation hypothesis.
How sneerable is the entire "infodynamics" field? Because it seems like it should be pretty sneerable. The first referenced paper on the "second law of infodynamics" seems to indicate that information has some kind of concrete energy which brings to mind that experiment where they tried to weigh someone as they died to identify the mass of the human soul. Also it feels like a gross misunderstanding to describe a physical system as gaining or losing information in the Shannon framework since unless the total size of the possibility space is changing there's not a change in total information. Like, all strings of 100 characters have the same level of information even though only a very few actually mean anything in a given language. I'm not sure it makes sense to talk about the amount of information in a system increasing or decreasing naturally outside of data loss in transmission? IDK I'm way out of my depth here but it smells like BS and the limited pool of citations doesn't build confidence.
I read one of the papers. About the specific question you have: given a string of bits s, they're making the choice to associate the empirical distribution to s, as if s was generated by an iid Bernoulli process. So if s has 10 zero bits and 30 one bits, its associated empirical distribution is Ber(3/4). This is the distribution which they're calculating the entropy of. I have no idea on what basis they are making this choice.
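In code, the computation they appear to be doing (my reconstruction, not the paper's) is just the Shannon entropy of that empirical Bernoulli distribution:

```python
from math import log2

def empirical_entropy(bits):
    """Shannon entropy (bits/symbol) of the empirical Bernoulli
    distribution of a 0/1 string -- e.g. 10 zeros and 30 ones
    gives Ber(3/4)."""
    p = bits.count("1") / len(bits)
    if p in (0.0, 1.0):
        return 0.0  # degenerate distribution, no uncertainty
    return -(p * log2(p) + (1 - p) * log2(1 - p))

# empirical_entropy("0"*10 + "1"*30) is about 0.811 bits/symbol
```

Note how this only depends on the bit counts, not on anything physical about the memory cells, which is part of why the iid assumption seems so arbitrary.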
The rest of the paper didn't make sense to me - they are somehow assigning a number N of "information states" which can change over time as the memory cells fail. I honestly have no idea what it's supposed to mean and kinda suspect the whole thing is rubbish.
Edit: after reading the author's quotes from the associated hype article I'm 100% sure it's rubbish. It's also really funny that they didn't manage to catch the COVID-19 research hype train so they've pivoted to the simulation hypothesis.
"feel free to ignore any science “news” that’s just a press release from the guy who made it up."
In particular, the 2022 discovery of the second law of information dynamics (by me) facilitates new and interesting research tools (by me) at the intersection between physics and information (according to me).
Gotta love "science" that is cited by no one and cites the author's previous work, which was also cited by no one. Really, the media should do better about not giving cranks an authoritative-sounding platform, but that would lead to slightly fewer eyes on ads, and we can't have that now, can we?
i mean, the Ray Charles one sounds fun. My 1st year maths lecturer demonstrated the importance of not dividing by zero by mathematically proving that if 1=0, then he was Brigitte Bardot. We did actually applaud.
It is the fourth book in the John Dies at the End series
oh damn, I just gave the (fun but absolute mess of a) movie another watch and was wondering if they ever wrote more stories in the series — I knew they wrote a sequel to John Dies at the End, but I lost track of it after that. it looks like I’ve got a few books to pick up!
Someone (maybe you) recommended this book here awhile back. But it's the fourth book in a series so I had to read the other three first and so have only just now started it.
The mask comes off at LWN, as two editors (jake and corbet) dive in to frantically defend the honour of Justine fucking Tunney against multiple people pointing out she's a Nazi who fills her projects with racist dogwhistles
Is Google lacing their free coffee??? How could a woman with at least one college degree believe that the government is even mechanically capable of dissolving into a throne for Eric Schmidt?
fuck me that is some awful fucking moderation. I can’t imagine being so fucking bad at this that I:
dole out a ban for being rude to a fascist
dole out a second ban because somebody in the community did some basic fucking due diligence and found out one of the accounts defending the above fascist has been just a gigantic racist piece of shit elsewhere, surprise
in the process of the above, I create a safe space for a fascist and her friends
but for so many of these people, somehow that’s what moderation is? fucking wild, how the fuck did we get here
See, you're assuming the goal of moderation is to maintain a healthy social space online. By definition this excludes fascists. It's that old story about how to make sure your punk bar doesn't turn into a nazi punk bar. But what if instead my goal is to keep the peace in my nazi punk bar so that the normies and casuals keep filtering in and out and making me enough money that I can stay in business? Then this strategy makes more sense.
Post by Corbet the editor.
"We get it: people wish that we had not highlighted work by this particular author. Had we known more about the person in question, we might have shied away from the topic. But the article is out now, it describes a bit of interesting technology, people have had their say, please let's leave it at that."
So you updated the article to reflect this right? padme.jpg
Seems like they've actually done this now. There's a preface note now.
This topic was chosen based on the technical merit of the project before we were aware of its author's political views and controversies. Our coverage of technical projects is never an endorsement of the developers' political views. The moderation of comments here is not meant to defend, or defame, anybody, but is in keeping with our longstanding policy against personal attacks. We could certainly have handled both topic selection and moderation better, and will endeavor to do so going forward.
Which is better than nothing, I guess, but still feels like a cheap cop-out.
Side-note: I can actually believe that they didn't know about Justine being a fucking nazi when publishing this, because I remember stumbling across some of her projects and actually being impressed, and then I found out what an absolute rabbit hole of weird shit this person is. So I kinda get seeing the portable executables project, thinking, wow, this is actually neat, and running with it.
Not that this is an excuse, because writing articles for a website should come with a bit of research about the people and topics you choose to cover, and you have a bit more responsibility than someone who's just browsing around, but what do I know.
I mean, that kind of suggests that you could use chatGPT to confabulate work for his class and he wouldn't have room to complain? Not that I'd recommend testing that, because using ChatGPT in this way is not indicative of an internally consistent worldview informing those judgements.
If we came across very mentally disabled people or extremely early babies (perhaps in a world where we could extract fetuses from the womb after just a few weeks) that could feel pain but only had cognition as complex as shrimp, it would be bad if they were burned with a hot iron, so that they cried out. It’s not just because they’d be smart later, as their hurting would still be bad if the babies were terminally ill so that they wouldn’t be smart later, or, in the case of the cognitively enfeebled who’d be permanently mentally stunted.
wat
This entire fucking shrimp paragraph is what failing philosophy does to a mf
This almost reads like an attempt at a reductio ad absurdum of worrying about animal welfare, like you are supposed to be a ridiculous hypocrite if you think factory farming is fucked yet are indifferent to the cumulative suffering caused to termites every time an exterminator sprays your house so it doesn't crumble.
Relying on the mean estimate, giving a dollar to the shrimp welfare project prevents, on average, as much pain as preventing 285 humans from painfully dying by freezing to death and suffocating. This would make three human deaths painless per penny, when otherwise the people would have slowly frozen and suffocated to death.
Dog, you've lost the plot.
FWIW a charity providing the means to stun shrimp before death by freezing as is the case here isn't indefensible, but the way it's framed as some sort of an ethical slam dunk even compared to say donating to refugee care just makes it too obvious you'd be giving money to people who are weird in a bad way.
Not that I'm a super fan of the fact that shrimp have to die for my pasta, but it feels weird that they just pulled a 3% number out of a hat, as if morals could be wrapped up in a box with a bow tied around it so you don't have to do any thinking beyond 1500×0.03×1 dollars means I should donate to this guys shrimp startup instead of the food bank!
Apologies for focusing on just one sentence of this article, but I feel like it's crucial to the overall argument:
... if [shrimp] suffer only 3% as intensely as we do ...
Does this proposition make sense? It's not obvious to me that we can assign percentage values to suffering, or compare it to human suffering, or treat the values in a linear fashion.
It reminds me of that vaguely absurd thought experiment where you compare one person undergoing a lifetime of intense torture vs billions upon billions of humans getting a fleck of dust in their eyes. I just cannot square choosing the former with my conscience. Maybe I'm too unimaginative to comprehend so many billions of bits of dust.
Ah you see, the moment you entered the realm of numbers and estimates, you’ve lost! I activate my trap card: 「Bayesian Reasoning」 to Explain Away those numbers. This lets me draw the「Domain Expert」 card from my deck, which I place in the epistemic status position, which boosts my confidence by 2000 IQ points!
Obviously mathematically comparing suffering is the wrong framework to apply here. I propose a return to Aristotelian virtue ethics. The best shrimp is a tasty one, the best man is a philosopher-king who agrees with everything I say, and the best EA never gets past drunkenly ranting at their fellow undergrads.
most of the dedicated Niantic (Pokemon Go, Ingress) game players I know figured the company was using their positioning data and phone sensors to help make better navigational algorithms. well surprise, it’s worse than that: they’re doing a generative AI model that looks to me like it’s tuned specifically for surveillance and warfare (though Niantic is of course just saying this kind of model can be used for robots… seagull meme, “what are the robots for, fucker? why are you being so vague about who’s asking for this type of model?”)
Quick, find the guys who were taping their phones to a ceiling fan and have them get to it!
Jokes aside I'm actually curious to see what happens when this one screws up. My money is on one of the Boston Dynamics dogs running in circles about 30 feet from the intended target without even establishing line of sight. They'll certainly have to test it somehow before it starts autonomously ordering drone strikes on innocent people's homes, right? Right?
Watts has always been a bit of a weird vector. While he doesn't seem a far-righter himself, he accidentally uses a lot of weird far-right dogwhistles (prob some cross-contamination, as some of these things are just scientific concepts; the r/K selection thing in the Rifters series especially stood out to me. Of course, he has a PhD in zoology, and the books predate the online hardcore racists discovering the idea by more than a decade, but it still seemed odd to me).
To be very clear, I don't blame Watts for this, he is just a science fiction writer, a particularly gloomy one. The guy himself seems to be pretty ok (not a fan of trump for example).
That's a good way to put it. Another thing that was really en vogue at one point and might have been considered hard-ish scifi when it made it into Rifters was all the deep water telepathy via quantum brain tubules stuff, which now would only be taken seriously by wellness influencers.
not a fan of trump for example
In one of the Eriophora stories (I think it's officially the Sunflower Cycle) there's a throwaway mention of the Kochs having been lynched, along with other billionaires, in the early days of a mass mobilization to save what's savable in the face of environmental disaster (and also to rapidly push to the stars, because a Kardashev-2 civilization may have emerged in the vicinity, so an escape route could become necessary in the next few millennia, and this scifi story needs a premise).
Hot Take: the damage from RFK Jr will be limited by the fact that he's messing with the money for several large industries, particularly agriculture and pharmaceuticals. They have bottomless pockets and aren't afraid to bribe the bribable. There will be damage, but he'll be crushed like a bug in the end.
Also, he clearly annoys the orange guy, can offer him nothing in return now that the election is over, and has already been the victim of a ritual humiliation (e.g. being forced to partake in a McDonald's meal for the camera), which is the first sign of a Trump guy being de-emphasized.
prediction 1: he dies halfway through. funniest way would be another pandemic gets him
prediction 2: he doesn't die. it will be exactly the same as the first admin but infinitely worse. everyone will hate and backstab each other, they will constantly get fired and rehired and fired like reality tv, there will be a constant dribble of horrible things happening, then in four years there's a coup attempt
prediction 3: elon doesn't last a year, possibly doesn't even make it six months
Despite worrying my brains out about getting deported from my home of 14 years because I wasn’t born in this godforsaken place, I’m extremely excited that Elon will get fired in the next 6 months or less. Gives me life to think about him getting very publicly humiliated by an even greater piece of shit than he is.