Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 6 October 2025
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
I’m going to start replying to everything like I’m on Hacker News. Unhappy with Congress? Why don’t you just start a new country and write a constitution and secede? It’s not that hard once you know how. Actually, I wrote a microstate in a weekend using Rust.
I might be wrong, but this sounds like a quick way to make the web worse by putting a huge computational load on your machine for the sake of privacy inside customer-service chatbots that nobody wants. Please correct me if I’m wrong.
WebLLM is a high-performance in-browser LLM inference engine that brings language model inference directly onto web browsers with hardware acceleration. Everything runs inside the browser with no server support and is accelerated with WebGPU.
WebLLM is fully compatible with OpenAI API. That is, you can use the same OpenAI API on any open source models locally, with functionalities including streaming, JSON-mode, function-calling (WIP), etc.
We can bring a lot of fun opportunities to build AI assistants for everyone and enable privacy while enjoying GPU acceleration.
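For anyone who wants to see what's actually being proposed here, a minimal sketch of what using it looks like, assuming the `@mlc-ai/web-llm` npm package and one of their prebuilt model IDs (the exact model ID here is an assumption on my part):

```typescript
// A minimal sketch, assuming the @mlc-ai/web-llm package and a prebuilt
// model ID from their list (the exact ID is an assumption).
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function main() {
  // This downloads the entire model into the browser and compiles it for
  // WebGPU; your machine does all the work from here on.
  const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC", {
    initProgressCallback: (report) => console.log(report.text),
  });

  // OpenAI-style chat completions, served entirely from your own GPU.
  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Why is the sky blue?" }],
  });
  console.log(reply.choices[0].message.content);
}

main();
```

Which is to say: yes, the "no server support" part is real, and so is the part where your laptop fan pays for it.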
I'm in the other camp: I remember when we thought an AI capable of solving Go was astronomically impossible and yet here we are. This article reads just like the skeptic essays back then.
Ah yes my coworkers communicate exclusively in Go games and they are always winning because they are AI and I am on the street, poor.
There's not that much else to sneer at, though; plenty of reasonable people.
Well, that's quite the confused comment chain, given that neither Go nor chess is solved. "Remember that thing everyone said wouldn't happen? Well it still hasn't happened! 🫨"
Confusing 'solved' with 'a computer can win against high-level human players a high % of the time', because they don't know that 'solved' actually has a specific meaning.
Tech reporting has massively fucked this up over the years as well, btw, so I'm not that annoyed that random HN people also don't get it. But there is a Wikipedia page for it: https://en.wikipedia.org/wiki/Solved_game
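To make the distinction concrete: 'solved' means the game-theoretic value under perfect play is actually known, usually by exhausting the game tree. You can do that for tic-tac-toe in a few lines; here's a toy sketch (mine, not from the thread):

```typescript
// Solving a game = computing its value under perfect play by BOTH sides,
// here by brute-force negamax over the full tic-tac-toe game tree.
// Go's tree is astronomically larger, which is why Go is NOT solved,
// no matter how often AlphaGo beat strong humans.

type Cell = "X" | "O" | " ";

const LINES = [
  [0, 1, 2], [3, 4, 5], [6, 7, 8], // rows
  [0, 3, 6], [1, 4, 7], [2, 5, 8], // columns
  [0, 4, 8], [2, 4, 6],            // diagonals
];

function winner(board: Cell[]): Cell {
  for (const [a, b, c] of LINES) {
    if (board[a] !== " " && board[a] === board[b] && board[b] === board[c]) {
      return board[a];
    }
  }
  return " ";
}

// +1 if the player to move can force a win, 0 a draw, -1 a loss.
function solve(board: Cell[], toMove: "X" | "O"): number {
  if (winner(board) !== " ") return -1;  // opponent's last move already won
  if (!board.includes(" ")) return 0;    // full board, no winner: draw
  let best = -1;
  for (let i = 0; i < 9; i++) {
    if (board[i] !== " ") continue;
    board[i] = toMove;
    best = Math.max(best, -solve(board, toMove === "X" ? "O" : "X"));
    board[i] = " ";
    if (best === 1) break;               // a forced win can't be improved on
  }
  return best;
}

console.log(solve(Array(9).fill(" "), "X")); // prints 0: perfect play draws
```

Checkers got weakly solved this way in 2007 after years of computation (it's a draw); chess and Go remain wide open, which is exactly the distinction being made above.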
This remark is actually part of a long fight between computer science and cognitive science people. And it is really frustrating in various ways, as computer science always thinks it did better than cognitive science while being blind to the actual accomplishments of cognitive science it doesn't know about, and to just how complex the subject matter is. It is an annoying failure to communicate between both disciplines. (A lot of people don't fall victim to this, btw, but it can be really annoying to encounter an 'our science is good, and theirs is bad because strawman' type, who often doesn't even realize that various words have different meanings in the different fields.)
I think the one thing LLMs have shown us is that coherent English is less complicated than we previously believed. I don't think we learned anything about actual cognition.
Moravec's Paradox is actually more interesting than it appears. You don't have to take his reasoning or Pinker's seriously, but the observation is salient. Also, the paradox gets stated in other ways by other scientists; it's a common theme.
One way I often think about it: in order for you to survive, the intelligence of moving in unknown spaces and managing numerous fuzzy energy systems is way more important to prioritize and master than, like, the abstract conceptual spaces that are both not full of calories and are also cheaper to externalize anyways.
It's part of why I don't think there is a globally coherent hierarchy of intelligence, or potentially even general intelligence at all. Just the distances and spaces that a thing occupies, and the competencies that define being in that space.
So to throw my totally-amateur two cents in, it seems like it's definitely part of the discussion in actual AI circles, based on the for-public-consumption reading and viewing I've done over the years, though I've never heard it mentioned by name.

I think a bigger part of the explanation has less to do with human cognition (it's probably fallacious to assume that AI of any method effectively reproduces those processes) and more to do with the more abstract cognitive tests and games being much more formally defined. Our perception and model of a game of Chess or Go may not be complete enough to solve the game, but it is bounded by the explicitly-defined rules of the game. If your opponent tries to work outside of those bounds by, say, flipping the board over and storming off, the game itself can treat that as a simple forfeit-by-cheating.

But our understanding of the real world is not similarly bounded. Things that were thought to be impossible happen with impressive frequency, and our brain is clearly able to handle this somehow. That lack of boundedness requires different capabilities than just being able to operate within expected parameters like existing English GenAI or image generators, I suspect relating to handling uncertainty or lacking information. The assumption that what AI is doing is a mirror to the living mind is wholly unproven.
Small FYI, not a sneer or anything; you can stop reading if you don't know what the Godot engine is. But if you do and you hear of the fork, you can just ignore the fork. (The people involved also seem to be rather iffy: one guy who went crazy after somebody mentioned they would like gay relationships in his game, and some MAGA conspiracy-theory-style coder. That is going by the 3 normal people the account follows, out of 5, who I assume are behind it.)
I have no idea what set the drama off, btw; I have not really looked into it. (Could it be that this mod you are talking about was the unofficial mod the Godot communication was talking about? Or is that a different mod? And did the Redot (wait, 're-'? please tell me it isn't a reference to the reeeee thing) people really pick the side of the n-word mod?)
I did see that the guy who started Redot basically only forked it and then went 'any devs wanna take over this fork?' Very 'I started the wiki' without even starting a wiki.
No no no it's fine! You get the word shuffler to deshuffle the—eloquently—shuffled paragraphs back into nice and tidy bullet points. And I have an idea! You could get an LLM to add metadata to the email to preserve the original bullet points, so the recipient LLM has extra interpolation room to choose to ignore the original list, but keep the—much more correct and eloquent, and with much better emphasis—hallucinated ones.