Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
this article will most likely be how I (hopefully very rarely) start off conversations about rationalism in real life should the need once again arise (and somehow it keeps arising, thanks 2025)
was discussing a miserable AI-related gig job I tried out with my therapist. doomerism came up, and I was forced to explain rationalism to him. I would prefer that all topics I have ever talked to any of you about be irrelevant to my therapy sessions
I've been beating this dead horse for a while (since July of last year, AFAIK), but it's clear to me that the AI bubble's done horrendous damage to the public image of artificial intelligence as a whole.
Right now, using AI at all (or even claiming to use it) will earn you immediate backlash/ridicule under most circumstances, and AI as a concept is viewed with mockery at best and hostility at worst - a trend I expect will last for a good while after the bubble pops.
To beat a slightly younger dead horse, I also anticipate AI as a concept will die thanks to this bubble, with its utterly toxic optics as a major reason why. With relentless slop, nonstop hallucinations and miscellaneous humiliation (re)defining how the public views and conceptualises AI, I expect any future AI systems will be viewed as pale imitations of human intelligence, theft-machines powered by theft, or a combination of the two.
Right now, using AI at all (or even claiming to use it) will earn you immediate backlash/ridicule under most circumstances, and AI as a concept is viewed with mockery at best and hostility at worst
it’s fucking wild how PMs react to this kind of thing; the general consensus seems to be that the users are wrong, and that surely whichever awful feature they’re working on will “break through all that hostility” — if the user’s forced (via the darkest patterns imaginable) to use the feature said PM’s trying to boost their metrics for
This kind of stuff, which seems to hit a lot harder than the anti-Trump stuff, makes me feel that a Vance presidency would implode quite quickly due to other MAGA toadies trying to backstab toadkid here.
In b4 there's a 100k word essay on LW about how intentionally crashing the economy will dry up VC investment in "frontier AGI labs" and thus will give the 🐀s more time to solve "alignment" and save us all from big 🐍 mommy. Therefore, MAGA harming every human alive is in fact the most effective altruism of all! Thank you Musky, I just couldn't understand your 10,000 IQ play.
For me it feels like this is pre-pop for the AI/cryptocurrency bubble. But with luck, the MAGA government's infusions into both will fail and actually quicken the downfall (Musk/Trump like it, so it must be iffy). Sadly it will not be like the downfall of Enron, as this is all very distributed, so I fear how much will be pulled under.
Wrote this back on the mansplainiverse (mastodon):
It's understandable that coders feel conflicted about LLMs even if you assume the tech works as promised, because they've just changed jobs from thoughtful problem-solving to babysitting
In the long run, a babysitter gets paid much less than an expert
What people don't get is that when it comes to LLMs and software dev, critics like me are the optimists. The future where copilots and coding agents work as promised for programming is one where software development ceases to be a career. This is not the kind of automation that increases employment
A future where the fundamental issues with LLMs lead them to cause more problems than they solve, resulting in much of it being rolled back after the "AI" financial bubble pops, is the least bad future for dev as a career. It's the one future where that career still exists
Because monitoring automation is a low-wage activity, and an industry dominated by that kind of automation requires much, much fewer workers, who are all paid much, much less than in one that's fundamentally built on expertise.
Anyways, here's my sidenote:
To continue a train of thought Baldur indirectly started, the rise of LLMs and their impact on coding is likely gonna wipe a significant amount of prestige off of software dev as a profession, no matter how it shakes out:
If LLMs worked as advertised, then they'd effectively kill software dev as a profession as Baldur noted, wiping out whatever prestige it had in the process
If LLMs didn't work as advertised, then software dev as a profession gets a massive amount of egg on its face as AI's widespread costs to artists, the environment, etcetera end up being all for nothing.
This is classic labor busting. If the relatively expensive, hard-to-train and hard-to-recruit software engineers can be replaced by cheaper labor, of course employers will do so.
I feel like this primarily will end up creating opportunities in the blackhat and greyhat spaces as LLM-generated software and configurations open and replicate vulnerabilities and insecure design patterns while simultaneously creating a wider class of unemployed or underemployed ex-developers with the skills to exploit them.
The irony is that most structural engineers are actually de jure professionals, and an easy way for them to both protect their jobs and ensure future buildings don't crumble to dust or get constructed without sprinkler systems is to simply ban LLMs from being used. No such protection exists for software engineers.
Edit: the LW post under discussion makes a ton of good points, to the level of being worthy of posting to this forum, and then nails its colors to the mast with this idiocy:
At some unknown point – probably in 2030s, possibly tomorrow (but likely not tomorrow) – someone will figure out a different approach to AI. Maybe a slight tweak to the LLM architecture, maybe a completely novel neurosymbolic approach. Maybe it will happen in a major AGI lab, maybe in some new startup. By default, everyone will die in <1 year after that.
but A LOT of engineering has a very very real existential threat. Think about designing buildings. You basically just need to know a lot of rules / tables and how things interact to know what's possible and the best practices
days since orangeposter (incorrectly) argued with certainty, from 3 seconds of thought, about what they think is involved in a process: [0]
it's so fucking frustrating to know how easy this bullshit is to see through if you know a slight bit of anything, and doubly frustrating as to how much of the software world runs on this kind of thinking. I know it's nothing particularly new and that our industry has been doing this for years, but scream
You basically just need to know a lot of rules / tables and how things interact to know what’s possible and the best practices
And to be a programmer you basically just need to know a lot of languages / libraries and how things interact, really easy, barely an inconvenience.
The actual irony is that this is more true than for any other engineering profession, since programmers uniquely are not held to any standards whatsoever, so you can have both skilled engineers and complete buffoons coexist, often within the same office. There should be a Programmers' Guild or something where the experienced master would just slap you and throw you out if you tried something idiotic like using LLMs for code generation.
So I enjoy the Garbage Day newsletter, but this episode of Panic World with Casey Newton is just painful, in the way that Casey is just spitting out unproven assertions.
Was he the one who wrote that awful "real and dangerous vs fake and sucks" piece? The one that pretended that critihype was actually less common than actual questions about utility and value?
Hugging Face co-founder pushes back against LLM hype, really softly. Not especially worth reading except to wonder if high-profile skepticism pieces indicate a vibe shift that can't come soon enough. On the plus side, it's kind of short.
The gist is that you can't go from a text synthesizer to superintelligence, framed as how a straight-A student who's really good at learning the curriculum at the teacher's direction can't really be extrapolated to an Einstein-type, think-outside-the-box genius.
The word 'hallucination' never appears once in the text.
I actually like the argument here, and it's nice to see it framed in a new way that might avoid tripping the sneer detectors on people inside or on the edges of the bubble. It's like I've said several times here, machine learning and AI are legitimately very good at pattern recognition and reproduction, to the point where a lot of the problems (including the confabulations of LLMs) are based on identifying and reproducing the wrong pattern from the training data set rather than whatever aspect of the real world it was expected to derive from that data. But even granting that, there's a whole world of cognitive processes that can be imitated but not replicated by a pattern-reproducer. Given the industrial model of education we've introduced, a straight-A student is largely a really good pattern-reproducer, better than any extant LLM, while the sort of work that pushes the boundaries of science forward relies on entirely different processes.
While not exactly celebration worthy and certainly not worth a tenth anniversary celebration, you could argue HPMoR finally coming to a fucking end by whatever means was a somewhat happy occasion.
I'd assume that is very intentional; nominative determinism is one of those things a lot of LW-style people like. (Scott Alexander being a big one, which has some really iffy implications, though I fully think that one is a coincidence btw.)