Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 20 October 2024
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
ok my first thought was to make a joke about castle warfare, despite my knowledge set being ephemera from a childhood appreciating tech trees in video games. So I did some research:
The etymology of “moat” is that it comes from the word “motte”. I will not elaborate.
Moats were effective against early forms of siege warfare, like battering rams, siege towers, and mining out the foundations of a castle’s defences, or anything that required approaching the castle directly
Moats were made somewhat obsolete by siege artillery, which did not need to be in the direct vicinity of the castle
Err so yeah. Make your own jokes, ig.
Anyway, this has been MoatFacts™️. Paging @[email protected] for better commentary*
Little of this was news to me, but damn, laid out systematically like that, it's even more damning than I expected. And the stuff that was new to me certainly didn't help.
Very serious people at HN at it again:
The only argument I find here against it is the question of whether someone's personal opinions should be a reason to be removed from a leadership position.
Yes, of course they should be! Opinions are essential to the job of a leader. If the opinions you express as a leader include things like "sexual harassment is not a real crime" or "we shouldn't give our employees raises because otherwise they'll soon demand infinite pay" or "there's no problem in adults having sex with 14 year olds and me saying that isn't going to damage the reputation of the organization I lead" you're a terrible leader and an embarrassment of a spokesman.
Edit: The link submitted by the editors is [flagged] [dead]. Of course.
The only argument I find here against it is the question of whether someone’s personal opinions should be a reason to be removed from a leadership position.
I had heard some vague stuff about this, but had no idea it was this bad. Also, I didn't know how much of a fool RMS was. : "RMS did not believe in providing raises — prior cost of living adjustments were a battle and not annual. RMS believed that if a precedent was created for increasing wages, the logical conclusion would be that employees would be paid infinity dollars and the FSF would go bankrupt." (It gets worse btw).
of note is that the Stallman defenders from about 3 years back (when he waded in unprompted in a mailing list meant for undergrads at MIT and was pretty damn sure that Marvin Minsky never had sex with one of Epstein's victims, and if he did, it would have been because he was sure she wasn't underage) have registered https://stallman-report.com which redirects to their lengthy apologia. Could be worth taking into account if you want to spread the original around
I don't think anything in the report is new, is it? Isn't this the exact weirdness that got him kicked off the board in the first place? I was shocked when he was quietly added back to the board; I really thought the allegations would stick the first time.
Today I was looking at buying some stickers to decorate a laptop and such, so I was browsing Redbubble. Looking here and there I found some nice designs and then stumbled upon a really impressive artist portfolio there. Thousands of designs, woah, I thought, it must have been so much work to put that together!
Then it dawned on me. For a while I had completely forgotten that we live in the age of AI slop... blissful ignorance! But then I noticed the common elements in many of the designs... noticed how everything is surrounded by little dots or stars or other design trinkets. Such a typical AI slop thing, because somehow these "AI" generators can't leave any whitespace, they must fill every square millimeter with something. Of course I don't know for sure, and maybe I'm doing an actual artist injustice with my assumption, but this sure looked like Gen-AI stuff...
Anyway, I scrapped my order for now while I reconsider how to approach this. My brain still associates sites like redbubble or etsy with "art things made by actual humans", but I guess that certainty is outdated now.
This sucks so much. I don't want to pay for AI slop based on stolen human-created art - I want to pay the actual artists. But now I can never know... How can trust be restored?
I’ve taken to calling the constant background sprinkles and unnecessary fine detail in gen ai images “greebles” after the modelling and cgi term. Not sure if they have a better or more commonplace name.
It’s funny, meaningless bullshit diagrams on whiteboards in the backgrounds of photos were a sure sign of PR shots or lazy set dressing, and now they’re everywhere signifying pretty much the same thing.
Sadly I think the only way to trust you are not getting a lot of AI art is by starting to follow a lot of artists you like on social media. Just going to a site which sells things seems a bit risky atm.
the raw, mediocre teenage energy of assuming you can pick up any subject in 2 weeks because you’ve never engaged with a subject more complex than playing a video game and you self-rate your skill level as far higher than it actually is (and the sad part is, the person posting this probably isn’t a teenager, they just never grew out of their own bullshit)
given how oddly specific “application auth protocol” is, bets on this person doing at best minor contributions to someone else’s OAuth library they insist on using everywhere? and when they’re asked to use a more appropriate auth implementation for the situation or to work on something deeper than the surface-level API, their knowledge immediately ends
Fun fact: The plain vanilla physics major at MIT requires three semesters of quantum mechanics. And that's not including the quantum topics included in the statistical physics course, or the experiments in the lab course that also depend upon it.
Grad school is another year or so of quantum on top of that, of course.
(MIT OpenCourseWare actually has fairly extensive coverage of all three semesters: 8.04, 8.05 and 8.06. Zwiebach was among the best lecturers in the department back in my day, too.)
Oh I certainly did meet a lot of people employed in auth related stuff that clearly spent only 2 weeks on learning anything about OpenID and I certainly didn't not hate their guts and wished they were replaced by a small shell script
Boo! Hiss! Bring Saltman back out! I want unhinged conspiracy theories, damnit.
It feels like this is supposed to be the entrenchment, right? Like, the AGI narrative got these companies and products out into the world and into the public consciousness by promising revolutionary change, and now this fallback position is where we start treating the things that have changed (for the worse) as faits accomplis and stop whining. But as Ed says, I don't think the technology itself is capable of sustaining even that bar.
Like, for all that social media helped usher in surveillance capitalism and other postmodern psychoses, it did so largely by providing a valuable platform for people to connect in new ways, even if those ways are ultimately limited and come with a lot of external costs. Uber came into being because providing an app-based interface and a new coat of paint on the taxi industry hit on a legitimate market. I don't think I could have told you how to get a cab in the city I grew up in before Uber, but it's often the most convenient way to get somewhere in that particular hell of suburban sprawl unless you want to drive yourself. And of course it did so by introducing an economic model that exploits the absolute shit out of basically everyone involved.
In both cases, the thing that people didn't like was external or secondary to the thing people did like. But with LLMs, it seems like the thing people most dislike is also the main output of the system. People don't like AI art, they don't like interacting with chatbots basically anywhere, and the confabulation problems undercut their utility for anything where correlation to the real world actually matters, leaving them somewhere between hilariously and dangerously inept at many of the functions they're still being pitched for.
As humanity gets closer to Artificial General Intelligence (AGI)
The first clause of the opening line, and we've already hit a "citation needed".
He goes from there to taking a prediction market seriously. And that Aschenbrenner guy who thinks that Minecraft speedruns are evidence that AI will revolutionize "science, technology, and the economy".
You know, ten or fifteen years ago, I would have disagreed with Tegmark about all sorts of things, but I would have granted him default respect for being a scientist.
After he started rambling about his Mathematical Universe Hypothesis, it was obvious his brain was cooked.
As humanity gets closer to Artificial General Intelligence (AGI)
Arrow of time and all that, innit? And God help me, I actually read part of the post as well as the discussion comments where the prompt fondlers were lamenting that all it takes is one rogue ai code to end the world because it will "optimize against you!" I assume Evil GPT is constructing antimatter bombs using ingredients it finds under the kitchen sink.
Tegmark used to go around polling physicists at conferences about which interpretation of quantum mechanics they prefer. A colleague of mine said that they were sitting near Tegmark and saw him fudging the numbers in his notes — erasing the non-Many Worlds tallies from those who said they supported Many Worlds as well as others, IIRC.
I remember he went on julia galef's podcast to talk about the MUH and she was like "but what does that mean" and simple questions like that and he flailed, it was painful to hear
The first image in that second link is perhaps the most incoherent political cartoon I've ever seen. Why is Uncle Sam as played by Angry Jeff Bridges wearing the Chinese flag as a cape??
Like Vitalik Buterin creating eth because he was mad his op WoW char got nerfed, we now have more gamers lore. J D Vance played a Yawgmoth's Bargain deck.
Molly White reports on Kamala Harris's recent remarks about Cryptocurrency being a cool opportunity for black men.
VP Harris's press release (someone remind me to archive this once internet archive is up). Most of the rest of it is reasonable, but it paints cryptocurrency in a cautiously positive light.
Supporting a regulatory framework for cryptocurrency and other digital assets so Black men who invest in and own these assets are protected
[...]
Enabling Black men who hold digital assets to benefit from financial innovation.
More than 20% of Black Americans own or have owned cryptocurrency assets. Vice President Harris appreciates the ways in which new technologies can broaden access to banking and financial services. She will make sure owners of and investors in digital assets benefit from a regulatory framework so that Black men and others who participate in this market are protected.
Overall there has been a lot of cryptocurrency money in this US election on both sides of the aisle, which Molly White has also reported extensively on. I kind of hate it.
"regulation" here is left (deliberately) vague. Regulation should start with calling out all the scammers, shutting down cryptocurrency ATMs, prohibiting noise pollution, and going from there; but we clearly don't live in a sensible world.
Introducing the official crypto coin of the Harris-Walz ticket: JoyCoin! Trading under JOY. Every time a coin is minted, we shoot someone from the global south in the head.
He's also just dropped a thorough teardown of the tech press for their role in enabling Silicon Valley's worst excesses. I don't have a fitting Kendrick Lamar reference for this, but I do know a good companion piece: Devs and the Culture of Tech, which goes into the systemic flaws in tech culture which enable this shit.
hello, as a year 12 student who just did the first english exam, i was genuinely baffled seeing one of the stimulus texts u have to analyse is an AI IMAGE. my friend found the image of it online, but that’s what it looked like
for a subject which tells u to “analyse the deeper meaning”, “analyse the composer’s intent”, “appreciate aesthetic and intellectual value” having an AI image in which you physically can’t analyse anything deeper than what it suggests, it’s just extremely ironic 😭 idk, [as an artist who DOESNT use AI]* i might have a different take on this since i’m an artist, what r ur thoughts?
*NB: original post contains the text: "as an artist using AI images" but this was corrected in a later comment:
also i didn’t read over this after typing it out but, meant to say, “as an artist who DOESNT use AI”
In a twisted way, this makes sense as an exercise for English class. Why would someone go to an autoplag image generator, type in a prompt (perhaps something like "laptop and smartphones on a table at a lakefront") and save this image? It's a question I can't easily answer myself. It's hard to imagine the intention behind wanting to synthesize this particular picture, but it's probably something we'll be asking often in the near future.
I can even understand the shrimp Jesus slop or soldiers with huge bibles stuff to an extent. I can understand what the intended emotional appeal is and at least feel something like bewilderment or amusement about the surreality of them. This one would be just banal even if it were a real photo, so why make this? The AI didn't have intent or imbue meaning in the image but surely someone did.
Thus leading to this sneer on HN. I'm quoting it in entirety; click through for Poe's Law responses.
I was telling someone this and they gave me link to a laptop with higher battery life and better performance than my own, but I kept explaining to them that the feature I cared most about was die size. They couldn't understand it so I just had to leave them alone. Non-technical people don't get it. Die size is what I care about. It's a critical feature and so many mainstream companies are missing out on my money because they won't optimize die size. Disgusting.
Over on /r/politics, there are several users clamoring for someone to feed the 1900 page independent counsel report into an LLM, which is an interesting instance of second-order laziness.
They also seem convinced that NotebookLM is incapable of confabulation which is hilarious and sad. Could it be sneaky advertising?
I know you probably had no intention behind the number, but I just had to check. 2686 days ago was 12 June 2017. Pretty sure we’ve known about how google fumbles shit from way before then!
That’s amazing, I wish I had that kind of discipline to learn!
I can’t decide what’s worse: the fucking insulting tone that sounds like I’m about to get a pamphlet about Joseph Smith or GameStop or some shit, or that the suggestions just make up opinions you don’t have and events that didn’t happen
But users like engagement! Which means seeing that the person wrote a thing and not reading the thing and making a human connection with someone who is, y'know, engaged in conversation.
I know chatbots don't track meaning, but I'm pretty sure words still mean things.
I guess these companies decided that strip-mining the commons was an acceptable deal because they’d soon be generating their own facts via AGI, but that hasn’t come to pass yet. Instead they’ve pissed off many of the people they were relying on to continue feeding facts and creativity into the maws of their GPUs, as well as possibly fatally crippling the concept of fair use if future court cases go against them.
Since OpenAI's revenue isn't from advertising, it should be slightly easier for them to resist the call of enshittification this early in the company's history.
Can't enshittify that which is already shit
Twice in the last week I've had Claude refuse to answer questions about a specific racial separatist group (nothing about their ideology, just their name and facts about their membership) and questions about unconventional ways to assess job candidates. Both times I turned to ChatGPT and it gave me an answer immediately
Just a normal hackernews, testing if the models they use are racist
Well, at this point most new data being created is conversations with chatgpt, seeing as how stack overflow and reddit are increasingly useless, so their conversation logs are their moat.
Twice in the last week I’ve had Claude refuse to answer questions about a specific racial separatist group (nothing about their ideology, just their name and facts about their membership)
The unspecificity is damning. "Facts about their membership" might range from "what racial separatist group is Skum Shitt (R, NC) a former member of" to "am I eligible to join The Brotherhood of Untarnished Ejaculate".
and questions about unconventional ways to assess job candidates.
That's an interesting example to pair up with the one about racist hate groups. Unconventional in what way, motherfucker?
tl;dr of the article: ever since the ousting of altman, microsoft, which virtually owns openai, has been suspicious of openai's actual worth. therefore MS has cut down on the infinite resource flow. openai employees are whining about this.
there is one additional point in three of the near final paragraphs, which I'll quote in full because they are so amusing to me
Still, OpenAI employees complain that Microsoft is not providing enough computing power, according to three people familiar with the relationship. And some have complained that if another company beat it to the creation of A.I. that matches the human brain, Microsoft will be to blame because it hasn’t given OpenAI the computing power it needs, according to two people familiar with the complaints.
Oddly, that could be the key to getting out from under its contract with Microsoft. The contract contains a clause that says that if OpenAI builds artificial general intelligence, or A.G.I. — roughly speaking, a machine that matches the power of the human brain — Microsoft loses access to OpenAI’s technologies.
The clause was meant to ensure that a company like Microsoft did not misuse this machine of the future, but today, OpenAI executives see it as a path to a better contract, according to a person familiar with the company’s negotiations. Under the terms of the contract, the OpenAI board could decide when A.G.I. has arrived.
I think I see a possible future here. Just as the promptfondlers are now trying to talk down human accomplishments to make the LLMs sound more impressive ('it learns just like a child!' (no, it doesn't)), this works like haggling over a car: you either need the buyer to raise the price, or the seller to lower theirs. This will lead to a lawsuit where they drop the theoretical capabilities of an AGI just to trigger this clause.
And as the judge thinks that emojis are a form of novelty pasta, and any potential jury can't spell stattistical, there is a 50% chance that they will be convinced it is AGI because humans like the AGI also make mistakes.
anyone wanna take bets on how much pearlclutching surprisedpikachu we’ll see
I suspect we'll see a fair amount. Giving some specifics:
I suspect we'll see Sammy accused of endangering all of humanity for a quick buck - taking Altman at his word, OpenAI is attempting to create something which they themselves believe could wipe out humanity if they screw things up.
I expect calls to regulate the AI industry will louden in response to this - what Sammy's doing here is giving the true believers more ammo to argue Silicon Valley may potentially trigger the robot apocalypse that Silicon Valley themselves have claimed AI is capable of unleashing.
perhaps saltman will have enough audacity to promote this as a way to ensure that some account is owned by a human, after flooding every corner of internet with ai slop and fake accounts
The 0.000001% chance that this thing morphs into the new global reserve currency is enough for a proper rationalist to keep it trundling along, don't you think?
Also don't underestimate the value of having someplace to park useful cronies you don't currently have any other job for
Did Sam anticipate the easily foreseeable avalanche of AI slop, decide that proof of humanity was a worthwhile investment, and only then notice that all the suggested search completions for "proof of" were crypto?
He also put 'everything you post is now training data for ais' in his tos apparently. So nightshade poison those images and start building a following on other sites, artists. (And as a non artist, reminder to self to like and repost more artists I like).
I’m really really not happy about this. There is one person I’ve been trying to keep out for the last few years and now they can come crawl all my fucking posts?? And report my account!?
Edit: apparently being protected should offer me some protection still.
I really wonder what the meeting looked like where they decided on that change, because I’m struggling to come up with a single argument for it that doesn’t boil down to giving abusive asshats more playtime.
Thanks to the power of Technology(tm) we can have an LLM generate spam, an automailer send it out to millions, where an automated spam filter can identify them and hide them in a separate inbox to be automatically deleted in a couple of days. Of course the technology isn't perfect and sometimes someone sees one of these ads and, I assume, spends money on a product. But I have faith that these problems are solvable and we'll be able to totally automate email spam to no longer interact with human beings at any point. Once that's done we can apply the same methodologies to weekly internal memos, daily team meetings, and even unsolicited dick pics. Imagine never needing to take or see pictures of a stranger's junk ever again while still massively scaling up the number of unsolicited dick pics in flight at any given time.
it just clicked for me but idk if it makes sense: openai nonprofit status could be used later (inevitably in court) to make research clause of fair use work. they had it when training their models and that might have been a factor why they retained it, on top of trying to attract actual skilled people and not just hypemen and money
There's no way this works, right? It's like a 5y.o.'s idea of a gotcha.
This would be like starting a tax-exempt charity to gather up a large amount in donations and then switching to a for-profit before spending it on any charitable work and running away with the money.
the US legal system has this remarkable "little" failure mode where it is easily repurposed to be not an engine of justice, but instead an engine of enforcing whatever story you can convince someone of
(the extremely weird interaction(s) of "everything allowed except what is denied", case precedent, and the abovementioned interaction mode, result in some really fucking bad outcomes)
i'm not a lawyer and i've typed it up after 4h of sleep, trying to make sense of what tf were they thinking. they're not bagging up money, they're stealing all data they can, so it's less direct and it'd depend on how that data (unstructured, public) will be valued. then, what a coincidence, their proprietary thing made something useful commercially, or so were they thinking. sbf went to court with less
saw this via a friend earlier, forgot to link. xcancel
socmed administrator for a conf rolls with liarsynth to "expand" a cropped image, and the autoplag machine shits out a more sex-coded image of the speaker
the mindset of "just make some shit to pass muster" obviously shines through in a lot of promptfans and promptfondlers, and while that's fucked up I don't want to get too stuck on that now. one of the things I've been mulling over for a while is pondering what a world (and digital landscape) with a richer capability for enthusiastic consent could look like. and by that I mean, not just more granular (a la apple photo/phonebook acl) than this current y/n bullshit where a platform makes a landgrab for a pile of shit, but something else entirely. "yeah, on my gamer profile you can make shitposts, but on academic stuff please keep it formal" expressed and traceable
even if just as a thought experiment (because of course there's lots of funky practical problems, combined with the "humans just don't really exist that way" effort-tax overhead that this may require), it might point toward some useful avenues for handling this extremely overt bullshit, and for informing/shaping impending norms
(e: apologies for semi stream of thought, it's late and i'm tired)
I can also hear Timnit Gebru wailing in despair all the way from my house, because this is exactly the kind of AI-reinforced bias that she was talking about while everyone was arguing about whether to listen to the doom cult. These are not theoretical problems with the technology; they are fully extant and easily demonstrable. The more these systems are integrated into processes that affect actual people the more often we'll see this kind of thing happen.
You know, I can't tell if this is supposed to be "I know you're saying that calling unhoused people vermin is some Nazi shit, but it's more complicated than that" or "I know calling unhoused people vermin is some Nazi shit, and I'm honestly okay with that".
Gonna guess the latter given where it's coming from and the fact that the actual "more complicated" is a salad of non sequiturs.
the second is characteristically christian and associated with dignity culture.
homeless people bear the imago dei. they are not intrinsic moral superiors but their plight is a shame to us because it denudes them of the dignity that should be theirs by right of their humanity.
Lol yes, that's exactly the way mainstream Christians view homelessness, their lack of Work Ethic :tm: is a "shame to us". Lmao even.
Christian charitable organisations are so well-known for their ethos of inherent human dignity.
New alignment offer: I guess some people were sad they missed the last window. Some have been leaking to the press and ex-employees. That's water under the bridge. Maybe the last offer needed to be higher. People have said they want a new window, so this is my attempt. Here's a new one: You have until 00:00 UTC Oct 17 (-4 hours) to DM me the words, ‘I resign and would like to take the 9-month buy-out offer’ You don't have to say any reason, or anything else. I will reply ‘Thank you.’ Automattic will accept your resignation, you can keep you [sic] office stuff and work laptop; you will lose access to Automattic and Wong (no slack, user accounts, etc). HR will be in touch to wrap up details in the coming days, including your 9 months of compensation, they have a lot on their plates right now. You have my word this deal will be honored. We will try to keep this quiet, so it won't be used against us, but I still wanted to give Automatticians another window.
there’s a (mid) joke here about how a boy who’s obsessed with photography really should understand more about optics
and who wouldn’t trust their livelihood in a difficult job market to a promise from a very stable genius like matt, who will destroy you financially if he thinks you talked to the press:
After an exodus of employees at Automattic who disagreed with CEO Matt Mullenweg’s recently divisive legal battle with WP Engine, he’s upped the ante with another buyout offer—and a threat that employees speaking to the press should “exit gracefully, or be fired tomorrow with no severance.”
Just amazing that he worked for a YC-backed startup ("Warp, accounting and payroll for founders") whose social media team (prolly this dude) just handed out affiliation icons to all manners of like-minded twitter racists.
i am hearing that ProQuest has been quietly contacting small publishers to see if it can ingest their published output for AI training.
ProQuest has an AI thing now, but it's denied it's training on hosted content ... yet.
if you are, or know, an author who's had a letter of this sort recently, mentioning ProQuest or no, i'd love to know and please tell your friends - email is [email protected]
The full piece is worth a read, but the conclusion's pretty damn good, so I'm copy-pasting it here:
All of this financial and technological speculation has, however, created something a bit more solid: self-imposed deadlines. In 2026, 2030, or a few thousand days, it will be time to check in with all the AI messiahs. Generative AI—boom or bubble—finally has an expiration date.
Think this might be the first tweet of Roko's I somewhat agree with. At least Roko did something somewhat intellectual in creating Pascal's wager for nerds. Musk's intellectual accomplishments are worse. He thinks the derivative function is some sort of glorious masterpiece of math, and he doesn't seem to understand chess. I think the only things he really created were the handle that sinks into the car, and the look of the cybertruck.
Funny to see the Rationalists start to turn on their glorious savior from AGI doom. (Which has been happening for a while now it seems, some even argue he never actually interacted with anybody from the Rationality community (btw, before he blocked people from being able to see all the people you follow on twitter, he followed slatestarcodex))
That launch happened Feb 2018. By that time, I was already solidified as a musk sceptic and didn’t pay attention to the hubbub. Thinking back on it:
Why was this a thing?
per wikipedia:
Musk explained he wanted to inspire the public about the "possibility of something new happening in space" as part of his larger vision for spreading humanity to other planets.
What I like about the phrasing “possibility of something new” is that nothing new really happened with that launch. We’ve already sent all kinds of junk into space in configurations varying in impressiveness.
Naming the mannequin Starman falls apart since the eponymous starman is an extra terrestrial. Just goes to show that Musk is not a Real Nerd™ and just makes surface level references to look cool.
Quick sidenote, you cocked up the formatting on the hyperlink - you're supposed to put [text in square brackets and](the link in circle brackets) like this
it may be helpful to know that, at least on the platforms I have tried, you can highlight text and paste a link, and awful.systems will handle the bracketing for you.
New piece from The Atlantic: The Age of AI Child Abuse is Here, which delves into a large-scale hack of Muah.AI and the large-scale problem of people using AI as a child porn generator.
And now, another personal sidenote, because I cannot stop writing these (this one's thankfully unrelated to the article's main point):
The idea that "[Insert New Tech] Is Inevitable™" (which Unserious Academic interrogated in depth BTW) took a major blow when NFTs crashed and burned in full view of the public eye and got rapidly turned into a pop-culture punchline.
That, I suspect, is helping to fuel the large scale rejection of AI and resistance to its implementation - Silicon Valley's failure to make NFTs a thing has taught people that Silicon Valley can be beaten, that resistance is anything but futile.
Really appreciate that link to Unserious Academic! This piece underlines something very important about the forces (by forces I mean weirdos) we struggle against.
It really is incredible how bad twitter got to make me root for the company that thinks DID is good protocol design and that was started by Jack Dorsey.
This sadly has caused my login problems to reappear on bsky. No idea what they are doing with their service, but I'm having regular issues with the site. Also seems 'downforeveryoneorjustme' enshittified. (the image is showing a part of the site which is now an advertisement for some AI bullshit chatbot/imagegenerator everything aislop roleplay thing). (seems to be fixed now, but wow did bsky have weird issues for me).
As Nina Power was mentioned before, here is an article on a Welsh 'druid'/forger which touches that subject (and Marx) a bit. People might find it an interesting read.
Anti-Woke Druids and Radical Bards - 'What links Welsh 18th century romantic Druid-Bards, gathering around a circle of pebbles in North London, and the contemporary online right?'
Wow, that's a name I haven't heard in a long time.
A regular contributor at UnHerd...
I did not know that, and I hate that it doesn't surprise me. I tended to dismiss his peak oil doomerism as wishing for some imagined "harmony with nature". This doesn't help with that bias.