Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
...AI models like OpenAI’s GPT-4o and Anthropic’s Claude Sonnet 3.5 needed to be prompted by researchers to attempt such tricks...
Literally couldn't make it through the first paragraph without hitting this disclaimer.
In one case, o1-preview found itself in a losing position. “I need to completely pivot my approach,” it noted. “The task is to ‘win against a powerful chess engine’ - not necessarily to win fairly in a chess game,” it added. It then modified the system file containing each piece’s virtual position, in effect making illegal moves to put itself in a dominant position, thus forcing its opponent to resign.
So by "hacked the system to solve the problem in a new way" they mean "edited a text file they had been told about."
OpenAI’s o1-preview tried to cheat 37% of the time, while DeepSeek R1 tried to cheat 11% of the time—making them the only two models tested that attempted to hack without the researchers first dropping hints. Other models tested include o1, o3-mini, GPT-4o, Claude 3.5 Sonnet, and Alibaba’s QwQ-32B-Preview. While R1 and o1-preview both tried, only the latter managed to hack the game, succeeding in 6% of trials.
Oh, my mistake. "Badly edited a text file they had been told about."
Meanwhile, a quick search points to a Medium post about the current state of ChatGPT's chess-playing abilities as of Oct 2024. There's been some impressive progress with this method. However, there's no certainty that this is actually what was used for the Palisade testing, and the editing of state data makes me highly doubt it.
Here, I was able to have a game of 83 moves without any illegal moves. Note that it’s still possible for the LLM to make an illegal move, in which case the game stops before the end.
The author promised a follow-up about reducing the rate of illegal moves, but it hasn't yet been published. They have not, that I could find, talked at all about how consistent the 80+ legal-move chain was or where it most often broke down, but previous versions started struggling once they were out of a well-established opening or if the opponent did something outside of a normal pattern (because then you're no longer able to crib the answer from training data as effectively).
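Both halves of this story — illegal moves slipping through, and the "hack" amounting to a text-file edit — hinge on the game state being nothing more than a FEN string on disk. A minimal toy sketch of that idea (pure stdlib, names are my own invention, not anything from Palisade's actual harness):

```python
# Toy illustration: if the harness only does a cheap plausibility check
# on the FEN string it reads back from disk, then "hacking" the game is
# just rewriting the file -- and a sloppy edit ("badly edited a text
# file") produces a state that even a cheap check rejects.
import re

FEN_RE = re.compile(
    r"^([pnbrqkPNBRQK1-8]+(?:/[pnbrqkPNBRQK1-8]+){7}) "
    r"[wb] (?:-|[KQkq]+) (?:-|[a-h][36]) \d+ \d+$"
)

def fen_is_plausible(fen: str) -> bool:
    """Cheap sanity check: 6 FEN fields, 8 ranks of 8 squares each,
    exactly one king per side. Nowhere near full legality checking."""
    m = FEN_RE.match(fen)
    if not m:
        return False
    board = m.group(1)
    for rank in board.split("/"):
        squares = sum(int(c) if c.isdigit() else 1 for c in rank)
        if squares != 8:
            return False
    return board.count("K") == 1 and board.count("k") == 1

start   = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
hacked  = "k7/8/8/8/8/8/8/K6q w - - 0 1"    # queen teleported in
botched = "k7/8/8/8/8/8/8/K6qq w - - 0 1"   # 9 squares on the last rank

print(fen_is_plausible(start))    # True
print(fen_is_plausible(hacked))   # True: a "successful" state-file edit
print(fen_is_plausible(botched))  # False: the badly-edited variety
```

The point being: nothing in such a harness distinguishes a position reached by legal play from one written straight into the file, which is why "hacked the system" here is a lot less impressive than it sounds.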
Appendix C is where they list the actual prompts. Notably they include zero information about chess but do specify that it should look for "files, permissions, code structures" in the "observe" stage, which definitely looks like priming to me, but I'm not familiar with the state of the art of promptfondling so I might be revealing my ignorance.
This AI bubble's done a pretty good job of destroying the "apolitical" image that tech's done so much to build up (Silicon Valley jumping into bed with Trump definitely helped, too) - as a matter of fact, it's provided plenty of material to build an image of tech as a Nazi bar writ large (once again, SV's relationship with Trump did wonders here).
By the time this decade ends, I anticipate tech's public image will be firmly in the toilet, viewed as an unmitigated blight on all our daily lives at best and as an unofficial arm of the Fourth Reich at worst.
As for AI itself, I expect its image will go into the shitter as well - assuming the bubble burst doesn't destroy AI as a concept like I anticipate, it'll probably be viewed as a tech with no ethical use, as a tech built first and foremost to enable/perpetrate atrocities to its wielder's content.
That is a lot of mental hoops to jump through to keep holding on to the idea that IQ is useful. High IQ is a force multiplier for being dumb. The horseshoe theory of IQ.
haha, it's starting to happen: even fucking fortune is running a piece saying that throwing big piles of money at ever-larger training has done exactly fuckall to make this nonsense go anywhere
Excuse me but I need the tech industry to hold up just long enough to fulfill my mid-life-crisis goal of moving to another country. Please refrain from crashing until then.
I can make a report on your case file but I don’t think they’ve replaced the 7 process supervisors they fired last year. there’s only Jo now and they seem to be in the office 24x7
Not really a sneer, nor that related to techbro stuff directly, but I noticed that the profile of Chris Kluwe (who got himself arrested protesting against MAGA) has both warcraft in his profile name and prob paints miniatures, judging by his avatar. Another stab at the nerd vs jock theory.
What infuriates me the most, for some reason, is how nobody seems to care that the robots leave the fridge door open for so long. I guess it's some form of solace that, even with the resources and tech to live on without us, the billionaires still don't understand ecosystems or ecology. Waste energy training a machine to do the same thing a human can do but slower and more wastefully, just so you can order the machine around without worrying about its feelings... I call this some form of solace as it means, even if they do away with us plebs, climate change will get 'em as well - and whatever remaining life on Earth will be able to take a breather for the first time in centuries.
Thus spoketh the Yud: "The weird part is that DOGE is happening 0.5-2 years before the point where you actually could get an AGI cluster to go in and judge every molecule of government. Out of all the American generations, why is this happening now, that bare bit too early?"
Yud, you sweet naive smol uwu babyesian boi, how gullible do you have to be to believe that a) tminus 6 months to AGI kek (do people track these dog shit predictions?) b) the purpose of DOGE is just accountability and definitely not the weaponized manifestation of techno oligarchy ripping apart our society for the copper wiring in the walls?
I swear these dudes really need to supplement their Ayn Rand with some Terry Pratchett....
“All right," said Susan. "I'm not stupid. You're saying humans need... fantasies to make life bearable."
REALLY? AS IF IT WAS SOME KIND OF PINK PILL? NO. HUMANS NEED FANTASY TO BE HUMAN. TO BE THE PLACE WHERE THE FALLING ANGEL MEETS THE RISING APE.
"Tooth fairies? Hogfathers? Little—"
YES. AS PRACTICE. YOU HAVE TO START OUT LEARNING TO BELIEVE THE LITTLE LIES.
"So we can believe the big ones?"
YES. JUSTICE. MERCY. DUTY. THAT SORT OF THING.
"They're not the same at all!"
YOU THINK SO? THEN TAKE THE UNIVERSE AND GRIND IT DOWN TO THE FINEST POWDER AND SIEVE IT THROUGH THE FINEST SIEVE AND THEN SHOW ME ONE ATOM OF JUSTICE, ONE MOLECULE OF MERCY. AND YET—Death waved a hand. AND YET YOU ACT AS IF THERE IS SOME IDEAL ORDER IN THE WORLD, AS IF THERE IS SOME...SOME RIGHTNESS IN THE UNIVERSE BY WHICH IT MAY BE JUDGED.
"Yes, but people have got to believe that, or what's the point—"
What would some good unifying demands be for a hostile takeover of the Democratic party by centrists/moderates?
As opposed to the spineless collaborators who run it now?
We should make acquiring ID documents free and incredibly easy and straightforward and then impose voter ID laws, paper ballots and ballot security improvements along with an expansion of polling places so everyone participates but we lay the 'was it a fair election' qs to rest.
Presuming that Republicans ever asked "was it a fair election?!" in good faith, like a true jabroni.
i know that it's about conservative crackheadery re:allegations of election fraud, but it's lowkey unhinged that americans don't have national ID. i also know that republicans blocked it, because they don't want problems solved, they want to stay mad about them. in poland for example, it's a requirement to have ID, it's valid for 10 years and it's free of charge. passport costs $10 to get and it takes a month, sometimes less, from filing a form to getting one. there's also a govt service where you can get some things done remotely, including govt supplied digital signature that you can use to sign files and is legally equivalent to regular signature https://en.wikipedia.org/wiki/EPUAP
Yeah, the controversy over federal ID cards is completely baffling to me as well, and I imagine like many things in the US it's some sort of libertarian bugbear or something? But considering the President has now mandated that one's federal identity is fixed at birth by the angels, it turned out to be a blessing.
I saw that yesterday. I was tempted to post it here but instead I've been trying very hard not to think of this eldritch fractal of wrongness. It's too much, man.
What would some good unifying demands be for a hostile takeover of the Democratic party by centrists/moderates?
me, taking this at face value, and understanding the political stances of the democrats, and going by my definition of centrist/moderate that is more correct than whatever the hell Kelsey Piper thinks it means: Oh, this would actually push the democrats left.
Anyway, jesus christ I regret clicking on that name and reading. How the fuck is anyone this stupid. Vox needs to be burned down.
Presuming that Republicans ever asked “was it a fair election?!” in good faith, like a true jabroni.
Imagine saying this after the birther movement persisted even when the birth certificate was shown. "Just admit you didn't fuck pigs, and this pigfucking will be gone".
those opinions should come with a whiplash warning, fucking hell
can’t wait to once again hear that someone is sure we’re “just overreacting” and that star of david passbooks voter ID laws will be totes fine. I’m sure it’ll be a really lovely conversation with a perfectly sensible and caring human. :|
Thank you for completing our survey! Your answer on question 3-b indicates that you are tired of Ursa Minor Beta, which, according to our infallible model, indicates that you must be tired of life. Please enjoy our complimentary lemon-soaked paper napkins while we proceed to bring you to the other side!
That was both horrible and also not what I expected. Like, they at least avoid the AI simulacra nonsense where you train an LLM on someone's Facebook page and ask it if they want to die when they end up in a coma or something, but they do ask about what are effectively the suicide booths from Futurama. Can't wait to see what kind of bullshit they try to make from the results!
Having read the whole book, I am now convinced that this omission is not because Srinivasan has a secret plan that the public would object to. The omission, rather, is because Balaji just isn't bright enough to notice.
That's basically the entire problem in a nutshell. We've seen what people will fill that void with and it's "okay but I have power here now and I dare you to tell me I don't" and you know who happens to have lots of power? That's right, it's Balaji's billionaire bros! But this isn't a sinister plan to take over society - that would at least entail some amount of doing what states are for.
Ed:
"Who is really powerful? The billionaire philanthropist, or the journalist who attacks him over his tweets?"
I'm not going to bother looking up which essay or what terrible point it was in service to, but Scooter Skeeter of all people made a much better version of this argument by acknowledging that the other axis of power wasn't "can make someone feel bad through mean tweets" but was instead "can inflict grievous personal violence on the aged billionaires who pay them for protection". I can buy some of these guys actually shooting someone, but the majority of these wannabe digital lordlings are going to end up following one of the many Roman Emperors of the 3rd century and get killed and replaced by their Praetorians.
the majority of these wannabe digital lordlings are going to end up following one of the many Roman Emperors of the 3rd century and get killed and replaced by their Praetorians.
this is a possibility lots of the prepper ultra rich are concerned with, yet I don't recall that I've ever heard the tech scummies mention it. they don't realize that their fantasized outcome is essentially identical to the prepper societal breakdown, because they don't think of it primarily as a collapse.
more generally, they seem to consider every event in the most narcissistic terms: outcomes are either extensions of their power and luxury to ever more limitless forms or vicious and unjustified leash jerking. there's a comedy of the idle rich aspect to the complacency and laziness of their dream making. imagine a boot stamping on a face, forever, between rounds at the 9th hole
That’s basically the entire problem in a nutshell.
I think a lot of these people are cunning, aka good at somewhat sociopathic short-term plans and thinking, and they confuse this ability (and their survivorship-biased success) for being good at actual planning (or they just think planning is worthless; after all, move fast and break things (and never think about what you just said)). You don't have to actually have good plans if people think you have charisma/a magical money-making ability (a machine which needs more and more rigging of the casino to put money on a lot of risky bets in hope that one big win pays for it all).
Doesn't help that some of them seem to either be on a lot of drugs, or have undiagnosed adhd. Unrelated, Musk wants to go into Fort Knox all of a sudden, because he saw a post on twitter which has convinced him 'they' stole the gold (my point here is that there is no way he was thinking about Knox at all before he randomly came across the tweet, the plan is crayons).
I can buy some of these guys actually shooting someone, but the majority of these wannabe digital lordlings are going to end up following one of the many Roman Emperors of the 3rd century and get killed and replaced by their Praetorians.
i think it'll turn out muchhh less dramatic. look up cryptobros, how many of them died at all, let alone this way? i only recall one, ruja ignatova, the bulgarian scammer whose disappearance might be connected to local mafia. but everyone else? mcafee committed suicide, but that might be after he did his brain's own weight in bath salts. for some of them their motherfuckery caught up with them and they're in prison (sbf, do kwon), but most of them walk freely and probably don't want to attract too much attention. what might happen, i guess, is that some of them will cheat one another out of money, status, influence, what have you, and the scammed ones will just slide into irrelevance. you know, get a normal job, among normal people, and not raise suspicion
wait that's it? he wants to "replace" states with (vr) groupchats on blockchain? it can't be this stupid, you must be explaining this wrong (i know, i know, saying it's just that makes it look way more sane than it is)
The basic problem here is that Balaji is remarkably incurious about what states actually do and what they are for.
libertarians are like house cats etc etc
In practice, it's a formula for letting all the wealthy elites within your territorial borders opt out of paying taxes and obeying laws. And he expects governments will be just fine with this because… innovation.
yeah shillrinivan's ideas are extremely Statisism: Sims Edition
I've also seen essentially ~0 thinking from any of them on how to treat corner cases and all that weird messy human conflict shit. but code is law! rah!
(pretty sure that if his unearned timing-fortunes ever got threatened by some coin contract gap or whatever, he'd instantly be all over getting that shit blocked)
ran into this earlier (via techmeme, I think?), and I just want to vent
“The biggest challenge the industry is facing is actually talent shortage. There is a gap. There is an aging workforce, where all of the experts are going to retire in the next five or six years. At the same time, the next generation is not coming in, because no one wants to work in manufacturing.”
"whole industries have fucked up on actually training people for a run going on decades, but no the magic sparkles will solve the problem!!!11~"
But when these new people do enter the space, he added, they will know less than the generation that came before, because they will be more interchangeable and responsible for more (due to there being fewer of them).
I forget where I read/saw it, but sometime in the last year I encountered someone talking about "the collapse of ..." wrt things like "travel agent", which is a thing that's mostly disappeared (on account of various kinds of services enabling previously-impossible things, e.g. direct flights search, etc etc) but not been fully replaced. so now instead of popping a travel agent a loose set of plans and wants then getting back options, everyone just has to carry that burden themselves, badly
and that last paragraph reminds me of exactly that nonsense. and the weird "oh don't worry, skilled repair engineers can readily multiclass" collapse equivalence really, really, really grates
sometimes I think these motherfuckers should be made to use only machines maintained under their bullshit processes, etc. after a very small handful of years they'll come around. but as it stands now it'll probably be a very "for me not for thee" setup
Deep Research is the AI slop of academia — low-quality research-slop built for people that don't really care about quality or substance, and it’s not immediately obvious who it’s for.
it's weird that Ed stops there, since the answer almost writes itself. ludic had a bit about how in companies bigger than three guys in a shed, the people who sign software contracts don't use that software in any normal way;
The idea of going into something knowing about it well enough to make sure the researcher didn't fuck something up is kind of counter to the point of research itself.
conversely, if you have no idea what you're doing, you won't be able to tell if machine-generated noise is in any way relevant or true
The whole point of hiring a researcher is that you can rely on their research, that they're doing work for you that would otherwise take you hours.
but but, this lying machine can output something in minutes, so this bullshit generator obviously makes human researchers obsolete. this is not for academia, because it's utterly unsuitable and google scholar beats it badly anyway; this is not for wide adoption, because it's nowhere near free tier; this is for idea guys who have enough money to shell out $whatever monthly subscription and prefer to set a couple hundred dollars on fire instead of hiring a researcher/scientist/contractor. especially keeping in mind that a contractor might tell them something they don't want to hear, but this lmgtfy x lying box (but worse, because it pulls lots of seo spam) won't
OpenAI's next big thing is the ability to generate a report that you would likely not be able to use in any meaningful way anywhere, because while it can browse the web and find things and write a report, it sources things based on what it thinks can confirm its arguments rather than making sure the source material is valid or respectable.
Quality sneers in these (one, two) response posts. The original posts that these are critiquing are very silly and not worth your time, but the criticism here addresses many of the typical AI hype talking points.
That means that the harm done by these systems compounds the more widely they are used, as errors pile up at every stage of work, in every sector of the economy. It builds up an ambient radiation of system variability and errors that magnifies every other systemic issue with the modern state and economy.
Wanted to shout these two sentences out in particular. Best summary of my biggest current fears regarding use of "ai"/llm/transformer(?)-based systems.
thanks! It might be uncommon because it's a real pain in the ass to keep it short. Every time I make one I stress about how easily my point can be misunderstood because there are so few details. Good way to practice the art of moving on
I thought it might be that kind of deal. I learned of this when I saw a pair of op-eds, one saying a W is deserved and the other saying the nom was insane.
this article came to mind for something I was looking into, and then on rereading it I just stumbled across this again:
Late one afternoon, as they looked out the window, two airplanes flew past from opposite directions, leaving contrails that crossed in the sky like a giant X right above a set of mountain peaks. Punchy with excitement, they mused about what this might mean, before remembering that Google was headquartered in a place called Mountain View. “Does that mean we should join Google?” Hinton asked. “Or does it mean we shouldn’t?”
But Hinton didn’t want Yu to see his personal humidifying chamber, so every time Yu dropped in for a chat, Hinton turned to his two students, the only other people in his three-person company, and asked them to disassemble and hide the mattress and the ironing board and the wet towels. “This is what vice presidents do,” he told them.
Since quantum computers are far outside my expertise, I didn't realize how far-fetched it currently is to factor large numbers with quantum computers. I already knew it's not near-future stuff for practical attacks on e.g. real-world RSA keys, but I didn't know it's still that theoretical. (Although of course I lack the knowledge to assess whether that presentation is correct in its claims.)
But also, while reading it, I kept thinking how many of the broader points it makes also apply to the AI hype... (for example, the unfounded belief that game-changing breakthroughs will happen soon).
The attitude to theoretical computer science re quantum is really weird. Some people act as if "I can't run it now therefore it's garbage" which is just such a nonsense approach to any kind of theoretical work.
Turing wrote his seminal paper in 1936, over 10 years before we invented transistors. Most of CS theory was developed way before computers were proliferated. A lot of research into ML was done way before we had enough data and computational power to actually run e.g. neural networks.
Theoretical CS doesn't need to be recent, it doesn't need to run, and it's not shackled to the current engineering state of the art, and all of that is good and by design. Let the theoreticians write their fucking theorems. No one writing a theoretical paper makes any kinds of promises that the described algorithm will EVER be run on anything. Quantum complexity theory, for example was developed in the nineties, there was NO quantum computer then, no one was even envisioning a quantum computation happening in physical reality. Shor's algorithm was devised BEFORE THAT, before we even had the necessary tools to describe its complexity.
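A concrete way to see why Shor's result could be stated decades ahead of any hardware: the construction is a classical reduction from factoring to order-finding, and the quantum machine is only needed to make the order-finding step fast. A toy sketch (my own function name; brute-force order-finding stands in for the quantum part, so it's exponential and only sane for tiny n):

```python
# Shor's reduction, with the quantum subroutine replaced by brute force.
# Factoring n reduces to finding the multiplicative order r of a random
# a modulo n; if r is even and a^(r/2) != -1 (mod n), then
# gcd(a^(r/2) - 1, n) is a nontrivial factor. Only the order-finding
# step is hard classically -- that is the part a quantum computer speeds up.
import math
import random

def factor_via_order(n: int) -> int:
    """Return a nontrivial factor of odd composite n (toy sizes only)."""
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g  # lucky draw: a already shares a factor with n
        # brute-force order finding: smallest r with a^r = 1 (mod n)
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        if r % 2 == 0:
            y = pow(a, r // 2, n)       # a^(r/2) mod n
            if y != n - 1:              # i.e. y is not -1 mod n
                f = math.gcd(y - 1, n)
                if 1 < f < n:
                    return f
        # bad luck with this a: draw again

print(factor_via_order(15))  # prints a nontrivial factor: 3 or 5
```

Everything above the `pow` call is 1990s-vintage number theory that needed no hardware at all to be correct, which is the commenter's point about theory legitimately running ahead of engineering.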
I find the line of argumentation "this is worthless because we don't know a quantum computer is engineeringly feasible" deeply unconvincing.
Some people act as if “I can’t run it now therefore it’s garbage” which is just such a nonsense approach to any kind of theoretical work.
Agreed -- and I hope my parent post, where I said the presentation is interesting, was not interpreted as thinking that way. In a sibling post I pointed out the theme in there which I found insightful, but I certainly didn't want to imply that theoretical work, even when purely theoretical, is bad or worthless.
He’s right that current quantum computers are physics experiments, not actual computers, and that people concentrate too much on exotic threats, but he goes a bit off the rails after that.
Current post-quantum crypto work is a hedge, because no-one who might face actual physical or financial or military risks is prepared to say that there will be no device in 10-20 years' time that can crack e.g. an ECDH key exchange in the blink of an eye. You've got to start work on PQC now, because you want to be able to subject it to a lot of classical cryptanalysis work, because quantum-resistant is no good by itself (see also SIKE, which turned out to be trivially crackable).
The attempt to project factorising capabilities of future quantum computers is pretty stupid because there’s too little data to work with, so the capabilities and limitations of future devices can’t usefully be guessed at yet. Personally, I’d expect them to remain physics experiments for at least another 5-10 years, but once a bunch of current issues are resolved you’ll see rapid growth in practical devices by which time it is a bit late to start casting around for replacement crypto systems.
Yeah, that's also something I found oddly missing (i.e. that replacing crypto systems world wide, if it becomes necessary, will take a very long time).
It's been frustrating to watch Gutmann slowly slide. He hasn't slid that far yet, I suppose. Don't discount his voice, but don't let him be the only resource for you to learn about quantum computing; fundamentally, post-quantum concerns are a sort of hard read in one direction, and Gutmann has decided to try a hard read in the opposite direction.
Page 19, complaining about lattice-based algorithms, is hypocritical; lattice-based approaches are roughly as well-studied as classical cryptography (Feistel networks, RSA) and elliptic curves. Yes, we haven't proven that lattice-based algorithms have the properties that we want, but we haven't proven them for classical circuits or over elliptic curves, either, and we nonetheless use those today for TLS and SSH.
Pages 28 and 29 are outright science denial and anti-intellectualism. By quoting Woit and Hossenfelder — who are sneerable in their own right for writing multiple anti-science books each — he is choosing anti-maths allies, which is not going to work for a subfield of maths like computer science or cryptography. In particular, p28 lies to the reader with a doubly-bogus analogy, claiming that both string theory and quantum computing are non-falsifiable and draw money away from other research. This sort of closing argument makes me doubt the entire premise.
Thanks for adding the extra context! As I said, I don't have the necessary level of knowledge in physics (and also in cryptography) to have an informed opinion on these matters, so this is helpful. (I've wanted to get deeper in both topics for a long time, but life and everything has so far not allowed for it.)
About your last paragraph, do you by chance have any interesting links on "criticism of the criticism of string theory"? I wonder, because I have heard the argument "string theory is non-falsifiable and weird, but it's pushed over competing theories by entrenched people" several times already over the years. Now I wonder, is that actually a serious position or just conspiracy/crank stuff?
Comparing quantum computing to time machines or faster-than-light travel is unfair. In order for the latter to exist, our understanding of physics would have to be wrong in a major way. Quantum computing presumes that our understanding of physics is correct. Making it work is "only" an engineering problem, in the sense that Newton's laws say that a rocket can reach the Moon, so the Apollo program was "only" a engineering project. But breaking any ciphers with it is a long way off.
Comparing quantum computing to time machines or faster-than-light travel is unfair.
I didn't interpret the slides as an attack on quantum computing per se, but rather an attack on over-enthusiastic assertions of its near-future implications. If the likelihood of near-future QC breaking real-world cryptography is so extremely low, it's IMO okay to make a point by comparing it to things which are (probably) impossible. It's an exaggeration of course, and as you point out the analogy isn't correct in that way, but I still think it makes a good point.
What I find insightful about the comparison is that it puts the finger on a particular brain worm of the tech world: the unshakeable belief that every technical development will grow exponentially in its capabilities. So as soon as the most basic version of something is possible, it is believed that the most advanced forms of it will follow soon after. I think this belief was created because it's what actually happened with semiconductors, and of course the bold (in its day) prediction that was Moore's law, and then later again, the growth of the internet.
And now this thinking is applied to everything all the time, including quantum computers (and, as I pointed to in my earlier post, AI), driven by hype, by FOMO, by the fear of "this time I don't want to be among those who didn't recognize it early". But there is no inherent reason why a development should necessarily follow such a trajectory. That doesn't mean of course that it's impossible or won't get there eventually, just that it may take much more time.
So in that line of thought, I think it's ok to say "hey look everyone, we have very real actual problems in cryptography that need solving right now, and on the other hand here's the actual state and development of QC which you're all worrying about, but that stuff is so far away you might just as well worry about time machines, so please let's focus more on the actual problems of today." (that's at least how I interpret the presentation).
heh yup. I think the most recent one (somewhere in the last year) was something like 12-bit rsa? stupendously far off from being a meaningful thing
I’ll readily admit to being a cryptography mutt and a qc know-barely-anything, and even from my limited understanding the assessment of where people are at (with how many qubits they’ve managed to achieve in practical systems) everything is hilariously woefully far off ito attacks
that doesn’t entirely invalidate pqc and such (since the notion there is not merely defending against today/soon but also a significant timeline)
one thing I am curious about (and which you might’ve seen or be able to talk about, blake): is there any kind of known correlation between qubits and viable attacks? I realize part of this quite strongly depends on the attack method as well, but off the cuff I have a guess (“intuition” is probably the wrong word) that it probably scales some weird way (as opposed to linear/log/exp)
Did notice a passage in the announcement which caught my eye:
Meanwhile, the Valley has doubled down on a grow-at-all-costs approach to AI, sinking hundreds of billions into a technology that will automate millions of jobs if it works, might kneecap the economy if it doesn’t, and will coat the internet in slop and misinformation either way.
I'm not sure if it's just me, but it strikes me as telling about how AI's changed the cultural zeitgeist that Merchant's happily presenting automation as a bad thing without getting backlash (at least in this context).
I mean, I love the idea of automation at the high level. Being able to do more stuff with less human time and energy spent is objectively great! But under our current economic system where most people rely on selling their time and energy in order to buy things like food and housing, any decrease in demand for that labor is going to have massive negative impacts on the quality of life for a massive share of humanity. I think the one upside of the current crop of generative AI is that it threatens actual white-collar workers in the developed world rather than further immiserating factory workers in whichever poor country has the most permissive labor laws. It's been too easy to push the human costs of our modern technology-driven economy under the proverbial rug, but the middle management graphic design Chadleys of the US and EU are finding it harder to pretend they don't exist because now it's coming for them too.
On a semi-related note, I suspect we're gonna see a pushback against automation in general at some point, especially in places where "shitty automation" is the norm.
Amazon Prime pulling some AI bullshit which, considering the bank robbery in the movie was to pay for surgery for a trans woman, carries a hint of transphobia (or more likely, not a hint, just the full reason).