Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 21 July 2024
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
It told him he was very smart and correct so he had a hypnogasm yes.
(For the people not in the know and who want some more psychic damage, look up Scott and his 'I can hypnotise you into having orgasms' blog posts, the man is utterly nuts, and it is really scary how many people seriously follow him).
explaining these things to normal humans is how you turn them into humans with years of psychic damage, and I support that mission wholeheartedly
an attempt at a summary:
the dilbert guy, who believes he can hypnotize women into having sex with him, now also believes he knows a magic incantation to teach hypnosis to a chatbot, and heavily implies the chatbot was able to hypnotize him in turn (presumably into having sex with it)
I saw people making fun of this on (the normally absurdly overly credulous) /r/singularity of all places. I guess even hopeful techno-rapture believers have limits to their suspension of disbelief.
At risk of being NSFW, this is an amazing self-own, pardon the pun. Hypnosis via text only works on folks who are fairly suggestible and also very enthusiastic about being hypnotized, because the brain doesn't "power down" as much machinery as with the more traditional lie-back-on-the-couch setup. The eyes have to stay open, the text-processing center is constantly engaged, and re-reading doesn't deepen properly because the subject has to have the initiative to scroll or turn the page.
Adams had to have wanted to be hypnotized by a chatbot. And that's okay! I won't kinkshame. But this level of engagement has to be voluntary and desired by the subject, which is counter to Adams' whole approach of hypnosis as mind control.
imagine one day buying some shitpost novelty stickers from that one site you heard a friend mention sometime, and then getting them and laughing about it and forgetting it
all too rapidly the years pass: young trees shoot up, older trees start boughing their way past electrical lines, the oldest all already in their position of maximum comfort. whole generations of memes have been born and died. you no longer even get to make fun of your weird aunt for still sending the dancing baby gif (these days it’s all about the autotune clips of a decade ago..)
and then one day you get reminded that the shitpost novelty sticker web store exists by receiving an email from them
Apparently they also once changed a design that originally carried some political message into 'liberal moron', so it isn't coming from nowhere. Guess it wasn't an innocent mistake.
Proton, who I use for mail and various other services, has gone against the wishes of the majority of their userbase as measured by their own survey and implemented an LLM writing assistant in protonmail, which is a real laugh given Proton’s main hook is its services are end-to-end encrypted
(supposedly this piece of shit will run locally if you meet these incredibly high system requirements including a high end GPU or recent, high end Apple M chipset and a privacy-violating Chromium-based browser. otherwise it breaks e2e by sending your emails unencrypted to Proton’s servers, and they do a lot to try to talk over that fact)
People who use Proton are privacy-conscious and mostly (I would argue) tech literate, and yet they shove in spicy autocomplete that no one ever needed until two years ago and most people don’t want now because it produces complete horseshit, plus spellchecking that every browser under the sun has built in by now.
And then they quietly say you need to use Chromium, so the people who use anything but (like, I don’t know, the majority of privacy-conscious folks who should be their main user base, lol) have their e2e broken?
I really hope they catch a raging firestorm for this.
(Also I’m really pissed right now because I used to recommend them to people and now feel like a total jackass for doing that.)
(Also I’m really pissed right now because I used to recommend them to people and now feel like a total jackass for doing that.)
don’t feel bad for making the best choice you could with the information of the past. until we get a workable, interoperable, federated, encrypted communication/online services platform, the choice was to recommend one of the centralized e2e providers. we both chose to recommend Proton and they did this shit, but it could have just as easily been tutanota.
now my brain’s going “e2e encrypted federated email but it preferably uses activitypub as a transport and classic email as a fallback, is that anything”
Until indicated otherwise I’m going to presume it was some bizbro PM/PO/whatever pushing it because they really think it should be there “to be able to compete” (because of some laughably idiotic misunderstanding of their own value proposition and pitch)
Tangent: while I mostly run my own servers and services, I did a recent assay on who’s reasonable for service shit. Proton kept popping up as massively recommended, with occasional critical mentions from folks in anarchist circles, etc - made me a bit 🤨 and want to dig in more, but also their product offerings just aren’t great. Others I poked into are Fastmail and Tuta - both seem a fair bit better. Might be worth a look
Proton kept popping up as massively recommended, with occasional critical mentions from folks in anarchist circles, etc - made me a bit 🤨 and want to dig in more,
No surprise that folks in anarchist circles are skeptical of Proton ha. That said, I do know quite a few people in the email "industry" who are broadly skeptical of Proton's general philosophy/approach to email security, and the way they market their service/offerings.
Others I poked into are Fastmail and Tuta - both seem a fair bit better. Might be worth a look
Fastmail has a great interface and user experience imo, significantly better than any other web client I've tried. That said, they're not end-to-end encrypted, so they're not really trying to fill the same niche as Proton/Tuta.
Fastmail customers looking for end-to-end encryption can use PGP or s/mime in many popular 3rd party apps. We don’t offer end-to-end encryption in our own apps, as we don’t believe it provides a meaningful increase in security for most users...
If you don’t trust the server, you can’t trust it to load uncompromised code, so you should be using a third party app to do end-to-end encryption, which we fully support. And if you really need end-to-end encryption, we highly recommend you don’t use email at all and use Signal, which was designed for this kind of use case.
I honestly don't know enough to separate the wheat from the chaff here (I can barely write functional python scripts lol - so please chime in if I'm completely off base), but this comes across to me as an understandable (and fairly honest) compromise, that is probably adequate for some threat models?
Last time I used Tuta the user experience was pretty clunky, but afaik it is E2EE, so it's probably a better direct alternative to Proton.
Setting up an email server is really straightforward with simple-nixos-mailserver, highly recommend. No idea how likely you are to be classified as spam though from a new domain
Not to downplay what Proton Mail is doing, but they're saying that you can run this locally with a 2-core, 4-thread CPU from 2017 (the i3-7100, which is a 7000-series processor) and an RTX 2060, a GPU that was never considered high end. Perhaps they changed the requirements while you weren't looking. Or am I reading this wrong?
only one of the 8 computers I own (and I’m not being cheeky here and counting embedded or retro systems, just laptops and desktops) is physically capable of meeting the model’s minimum requirements, and that’s only if I install chromium on the Windows gaming VM my bigger GPU’s dedicated to and access protonmail from there. nothing else I do needs a GPU that big, professional or otherwise — that hardware exists for games and nothing else. compared with the integrated GPUs most people have, a 2060’s fucking massive.
do you see how these incredibly high system requirements (for a webmail client of all things), alongside them already treating the local model as strictly optional, can act as a funnel redirecting people towards the insecure cloud version of the feature? “this feature only works securely on one of the computers where you write mail, at best” feels like a dark pattern to me.
Ah yes, Alexander's unnumbered hordes, that endless torrent of humanity that is all but certain to have made a lasting impact on the sparsely populated subcontinent's collective DNA.
edit: Also, the absolute brain on someone who would think that before entertaining a random recent western ancestor like a grandfather or whateverthefuckjesus.
This isn't really too interesting yet, but it's something to keep an eye on. As things like blockchain and AI alignment become weirdly political, it's likely that sneering will get unpleasantly close to politics at times. And yet sneer we must.
Other self-titled techno-optimists highlighted Vance’s ties to venture capital, Thiel, and Andreessen, saying the “Gray Tribe [is] in control.” Gray Tribe is a reference to a term originating from Scott Alexander’s Slate Star Codex blog, which points to a group that is neither red (Republican) nor blue (Democrat), but a libertarian, tech-savvy alternative.
I really should write more about technofascism while I still have electricity, clean water, a relatively unfractured global information network, and there aren’t too many gunshots outside
In the spirit of the fondly-remembered "Cloud-to-Butt" plugin, I propose a new tool that transforms mentions of AI and LLMs to something else. But what shape should our word transformer take? While scatology is good shit, I bet we can come up with something more clever than "ChatGPT-to-Poop"
BTW, take 1d6 psychic damage when you realize that Cloud-to-Butt was released more than ten years ago
Edit: Favorite suggestions so far:
AI-to-Bob @sc_griffith
AI-to-DFE (Dumpster Fire Elemental) @YourNetworkIsHaunted
there’s so much quantum woo in that article I want to sneer at, but I don’t know anywhere close to enough about quantum physics to do so without showing my entire ass
Well, a good thing to remember re quantum mechanics: Schrödinger's Cat was intended as a thought experiment showing how dumb the prevailing view of QM was. So it is always a bit funny to see people extrapolate from that thought experiment without acknowledging its history and issues. (But I think that also depends on the various interpretations, and this means I'm showing a cheekily high amount of ass here myself).
To me, the most sneerable thing in that article is where they assume a mechanical brain will evolve from ChatGPT and then assume a sufficiently large quantum computer to run it on. And then start figuring out how to port the future mechanical brain to the quantum computer. All to be able to run an old thought experiment that at least I understood as highlighting the absurdity of focusing on the human brain part in the collapse of a wave function.
Once we build two trains that can run near the speed of light we will be able to test some of Einstein's thought experiments. Better get cracking on how we can get enough coal onboard to run the trains long enough to get the experiments done.
There are some interesting ideas in that general direction (wrapping Bell inequalities within different new types of thought experiment, etc.), but some of the people involved have done rather a lot of overselling, and now bringing in talk of "AI" just obscures the whole situation. Which was already obscure enough.
If you want a serious discussion of interpretations of quantum mechanics, here is a transcript of a lecture "Quantum Mechanics in Your Face" which has the best explanation I've ever seen. I'd recommend the first 6 of Peter Shor's Quantum Computation notes (don't worry they're each very short) for just enough background to understand the transcript.
Under this, let's charitably call it, "interpretation", the Schrödinger cat analogy makes no sense, surely THE CAT is bloody conscious about ITSELF BEING ALIVE??
According to one story at least, Wigner eventually concluded that if you take some ideas that physicists widely hold about quantum mechanics as postulates and follow them through to their logical conclusion, then you must conclude that there is a special role for conscious observers. But he took that as a reason to question those assumptions.
(That story comes from Leslie Ballentine reporting a conversation with Wigner in the course of promoting an ensemble interpretation of QM.)
Yes, the problem with quantum mechanics is that it's not just your Deepak Chopras of the world who get sucked into quantum woo, but even a lot of respectable academics with serious credentials, thus giving credence to these ideas. Quantum mechanics is a context-dependent theory: the properties of systems are context-variant. It is not observer-dependent. The observer just occupies their own unique context, and since the theory is context-dependent, they have to describe things from their own context.
It is kind of like velocity in Galilean relativity, you have to take into account reference frame. Two observers in Galilean relativity could disagree on certain things, such as the velocity of an object but the disagreement is not "confusing" because if you understand relativity, you'd know it's just a difference in reference frame. Nothing important about "observers" here.
I do not understand what it is with so many academics: they fully understand that properties of systems can be variant under different reference frames in special relativity, but when it comes to quantum mechanics their heads explode trying to interpret its contextual nature, and they resort to silly claims like saying it proves some fundamental role for the conscious observer. All it shows is that the properties of systems are context-variant. There is nothing else.
Once you accept that, then everything else follows. All of the unintuitive aspects of quantum mechanics disappear, you do not need to posit systems in two places at once, some special role for observers, a multiverse, nonlocality, hidden variables, nothing. All the "paradoxes" disappear if you just accept the context variance of the states of systems.
Marc Andreessen, the co-founder of one of the most prominent venture capital firms in Silicon Valley, says he’s been a Democrat most of his life. He says he has endorsed and voted for Bill Clinton, Al Gore, John Kerry, Barack Obama and Hillary Clinton.
However, he says he’s no longer loyal to the Democratic Party. In the 2024 presidential race, he is supporting and voting for former President Donald Trump. The reason he is choosing Trump over President Joe Biden boils down primarily to one major issue — he believes Trump’s policies are much more favorable for tech, specifically for the startup ecosystem.
none of this should be surprising, but it should be called out every time it happens, and we’re gonna see it happen a lot in the days ahead. these fuckers finally feel secure in taking their masks off, and that’s not good.
I don't understand why people take him at face value when he claims he's always been a Democrat up until now. He's historically made large contributions to candidates from both parties, but generally more Republicans than Democrats, and also Republican PACs like Protect American Jobs. Here is his personal record.
Has he moved right? Sure. Was he ever left? No, this is the voting record of someone who wants to buy power from candidates belonging to both parties. If it implies anything, it implies he currently finds Republicans to be corruptible.
See, I feel like the Democrats have had a pretty strong technocrat wing that is much more in synch with Neoreaction than people care to acknowledge. As the right shifts towards pursuing the pro-racist anti-women anti-lgbt aspects of their agenda through the courts rather than the ballot box, it seems like the fault lines between the technocratic fascists and the theocratic fascists are thinner than the lines between the techfash and the progressives.
Slightly related: now I know when the AI crash is going to happen. Every bottomfeeder recruiter company on LinkedIn is suddenly pushing 2-month contract technical writer positions with AI companies with no product, no strategy, and no idea of how to proceed other than “CEO cashes out.” I suspect the idea is to get all of their documentation together so they can sell their bags of magic beans before the beginning of the holiday season.
sickos.jpg
I have asked if he can send me links to a few of these, I'll see what I can do with 'em
I think there was a report saying that the most recent quarter still showed a massive infusion of VC cash into the space, but I'm not sure how much of that comes from the fact that a new money sink hasn't yet started trending in the valley. It wouldn't surprise me if the griftier founders were looking to cash out before the bubble properly bursts in order to avoid burning bridges with the investors they'll need to get the next thing rolling.
Current flavor AI is certainly getting demystified a lot among enterprise people. “Let's dip our toes into using an LLM to make our hoard of internal documents more accessible, it's supposed to actually be good at that, right?” is slowly giving way to "What do you mean RAG is basically LLM flavored elasticsearch only more annoying and less documented? And why is all the tooling so bad?"
“What do you mean RAG is basically LLM flavored elasticsearch only more annoying and less documented? And why is all the tooling so bad?”
Our BI team is trying to implement some RAG via Microsoft Fabric and Azure AI Search because we need that for whatever reason, and they've burned through almost 10k in the first half of the current month already, either because it's just super expensive or because it's so terribly documented that they can't get it to work and have to try again and again. Normal costs are somewhere around 2k for the whole month for traffic + servers + database, and I haven't got the foggiest what's even going on there.
But someone from the C suite apparently wrote them a blank check because it's AI ...
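For anyone wondering what's actually under the hood: stripped of the vendor branding, RAG really is just a search step stapled to prompt stuffing. A minimal, purely illustrative Python sketch (made-up documents, crude keyword scoring standing in for whatever index Fabric / Azure AI Search actually provides; the final prompt string is what would get shipped off to the LLM):

def score(query, doc):
    # crude relevance: count query words that appear in the document
    doc_words = set(doc.lower().split())
    return sum(word in doc_words for word in query.lower().split())

def retrieve(query, docs, k=2):
    # the "retrieval" part: top-k documents by score
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs):
    # the "augmented generation" part: stuff the hits into the prompt
    context = "\n\n".join(docs)
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

internal_docs = [
    "Expense reports must be filed within 30 days of travel.",
    "The VPN client is mandatory for all remote connections.",
    "Quarterly reviews are scheduled by each team lead.",
]

question = "How long do I have to file an expense report?"
print(build_prompt(question, retrieve(question, internal_docs)))

Everything hard (and expensive) lives in making the retrieval step not terrible, which is why it ends up looking like elasticsearch with extra steps.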
Confucius, the Buddha, and Lao Tzu gather around a newly-opened barrel of vinegar.
Confucius tastes the vinegar and perceives bitterness.
The Buddha tastes the vinegar and perceives sourness.
Lao Tzu tastes the vinegar and perceives sweetness, and he says, "Fellas, I don't know what this is but it sure as fuck isn't vinegar. How much did you pay for it?"
Maybe hot take, but I actually feel like the world doesn't, strictly speaking, need more documentation tooling at all, LLM / RAG or otherwise.
Companies probably actually need to curate down their documents so that simpler things work; then it doesn't cost ever-increasing infrastructure to overcome the problems that previous investment actually literally caused.
For example, in the case of physics one could imagine working through very high quality course materials together with Feynman, who is there to guide you every step of the way. Unfortunately, subject matter experts who are deeply passionate, great at teaching, infinitely patient and fluent in all of the world's languages are also very scarce and cannot personally tutor all 8 billion of us on demand. However, with recent progress in generative AI, this learning experience feels tractable.
NGL though mostly just sharing this link for the concept art concept fart which features a three-armed many fingered woman smiling at an invisible camera.
Others: unslanted solar panels at ground level in shade under other solar panels, 90-degree water steps (plural), magical mystery staircases and escalator tubes, picture glass that reflects anything it wants to instead of what may actually be in the reflected light path, a whole Background Full Of Ill-Defined Background People because I guess the training set imagery was input at lower pixel density(??), and on stage left we have a group in conversation walking and talking also right on the edge of nowhere in front of them
And that’s all I picked up in about 30-40s of looking
Imagine being the kind of person who thinks this shit is good
"Learning Nate Silver works for Peter Thiel is one of those things that would have shocked me in 2016 and made me wonder why I hadn't already been assuming it in 2024." He now works for the betting market polymarket (using cryptocurrencies of course).
New existential threat developed, we go all in on AGI economically, turns out to not be possible and then the world collapses due to infrastructure rot. I'll email Yud.
In the vein of collapsing infrastructure, my condolences to anyone dealing with aftermath of Crowdstrike's big ol fucky wucky. If I were a bad person looking for entertainment, I would seed a conspiracy theory about how today's cockup is really the result of Rationalist sleeper agents launching a guerilla struggle to strangle the basilisk in its crib.
Question for the experts: do you all suppose this will drive a new cycle of hype around thin clients and network booting?
Make it even funnier, AGI launches and then gets taken down because the only maintainer of xzutils left and now every time the AGI tries to run ./killallhumans it segfaults to death.
"[A]cademic publisher Taylor & Francis, which owns Routledge, had sold access to its authors’ research as part of an Artificial Intelligence (AI) partnership with Microsoft—a deal worth almost £8m ($10m) in its first year."
Does anyone here know what Justine Tunney’s deal is? I’d been following her redbean project for a time but came across an article that left me rather startled
A friend who worked with her is sympathetic to her but does not endorse her: this is a tendency she has, she veers back and forth on it a lot, she has frequent moments of insight where she disavows her previous actions but then just kind of continues doing them. It's Kanye-type behavior.
"This is the largest gold rush in the history of capitalism and Australia is missing out," said Artificial Intelligence professor Toby Walsh, from the University of New South Wales.
It's even bigger than the actual gold rush! Buy your pans now folks!
One option Professor Van Den Hengel suggests is building our own Large Language Model like OpenAI's ChatGPT from the ground up, rather than being content to import the tech for decades to come.
lol, but also please god no
"The only way to have a say in what happens globally in this critical space is to be an active participant," he said.
I mean, that was definitely a thing when I was at school, only it was mostly about teaching undergrads graph search algorithms and the least math possible in order to understand backpropagation.
As an aside, weird that we don't hear much about genetic algorithms anymore, but it's probably just me.
It's my impression that Australia has also produced a disproportionate share of the best takes on the subject. How come they are so far ahead of the rest of the world when it comes to dodging this grift?
As the saying goes, the only people who make money in a gold rush are the people selling shovels. I guess this bloke is one of the people selling shovels.
I don’t think I’ve ever experienced before this big of a sentiment gap between tech – web tech especially – and the public sentiment I hear from the people I know and the media I experience.
Most of the time I hear “AI” mentioned on Icelandic mainstream media or from people I know outside of tech, it’s being used to describe something as a specific kind of bad. “It’s very AI-like” (“mjög gervigreindarlegt” in Icelandic) has become the talk radio shorthand for uninventive, clichéd, and formulaic.
Baldur has pointed that part out before, and noted how it's kneecapping the consumer side of the entire bubble, but I suspect the phrase "AI" will retain that meaning well past the bubble's bursting. "AI slop", or just "slop", will likely also stick around, for those who wish to differentiate gen-AI garbage from more genuine uses of machine learning.
To many, “AI” seems to have become a tech asshole signifier: the “tech asshole” is a person who works in tech, only cares about bullshit tech trends, and doesn’t care about the larger consequences of their work or their industry. Or, even worse, aspires to become a person who gets rich from working in a harmful industry.
For example, my sister helps manage a book store as a day job. They hire a lot of teenagers as summer employees and at least those teens use “he’s a big fan of AI” as a red flag. (Obviously a book store is a biased sample. The ones that seek out a book store summer job are generally going to be good kids.)
I don’t think I’ve experienced a sentiment disconnect this massive in tech before, even during the dot-com bubble.
Part of me suspects that the AI bubble's spread that "tech asshole" stench to the rest of the industry, with some help from the widely-mocked NFT craze and Elon Musk becoming a punching bag par excellence for his public breaking-down of Twitter.
(Fuck, now I'm tempted to try and cook up something for MoreWrite discussing how I expect the bubble to play out...)
The active hostility from outside the tech world is going to make this one interesting, since unlike crypto this one seems to have a lot of legitimate energy behind it in the industry, even as it becomes increasingly apparent that, even if the technical capability were there (e.g. if the bullshit problems could be solved by throwing enough compute and data at the existing paradigm, which looks increasingly unlikely), there's no way to do it profitably given the massive costs of training and using these models.
I wonder if we're going to see any attempts to optimize existing models for the orgs that have already integrated them in the same way that caching a web page or indexing a database can increase performance without doing a whole rebuild. Nvidia won't be happy to see the market for GPUs fall off, but OpenAI might have enough users of their existing models that they can keep operating even while dramatically cutting down on new training runs? Does that even make sense, or am I showing my ignorance here?
NSFW, as NSAB, I know that anti-environmentalists shout a lot about 'what about china china should go green first!' while not knowing china is in fact doing a lot to try and go green (at least on the co2 energy front, I'm not asking here to go point out all the bad things china does to fuck up the environment). I see 'we should develop AI before china does so' be a big pro AI argument, so here is my question. Is china even working on massive A(G)I like the people claim?
I am overall very uninformed about the Chinese technological day-to-day, but here are two interesting facts:
They set some pretty draconian rules early on about where the buck stops if your LLM starts spewing false information or (god forbid) goes against party orthodoxy, so I'm assuming that if independent research is happening, it doesn't appear much in the form of public endpoints that anyone might use.
A few weeks ago I saw a report about Chinese medical researchers trying to use AI agents(?) to set up a virtual hospital, in order to maybe eventually have some sort of a virtual patient entity that a medical student could work with somehow, and look how many thousands of virtual patients our handful of virtual doctors are healing daily, isn't it awesome folks. Other than the rampant startupiness of it all, what struck me was that they said they had chatgpt-3.5 set up the doctor/patient/nurse agents, i.e. they used the free version.
So, who knows? If they are all-in in AGI behind the scenes they don't seem to be making a big fuss about it.
Typing from phone, please excuse lack of citations. Academic output in various parts of ML research has increasingly come from China and Chinese researchers over the past decade. There are multiple inputs to this - funding, how strong a specific school/research centre is, etc - but it’s been ramping up. Pretty sure part of this is one of the fuel sources keeping the pro-hegemonist US argument popular and going lately (also part of where the “we should before they do” comes from, I guess)
I’ve seen some mentions of recent legislation direction about LLM usage but I’m not fully up to speed on what it is, haven’t had the time to read up
Thanks, I was not aware, so they are doing things regarding the research at least. So the "concern" isn't totally made up, which is what I wanted to know. As Architeuthis mentioned, the legislation is against false info and against going against the party (which seems to be what you could expect).
a hackernews excitedly states that a new LLM version can in fact determine that 9.11 is smaller than 9.9, only to be informed in the comments that the model actually doesn't do that at all. But hey, it's correct if it's version numbers!
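For anyone who wants to check the joke at home, the two readings really do flip the answer (toy Python, nothing to do with the LLM itself):

print(9.11 < 9.9)  # True: as decimal numbers, 9.11 is smaller

def version_key(v):
    # compare versions component-wise, e.g. "9.11" -> (9, 11)
    return tuple(int(part) for part in v.split("."))

print(version_key("9.11") > version_key("9.9"))  # True: as a version, 9.11 is newer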
Speaking more generally, Wget's recursive crawl can cause problems if run with inadequate rate limiting. e.g. here's what wikipedia's robots.txt says:
#
# Sorry, wget in its recursive mode is a frequent problem.
# Please read the man page and use it properly; there is a
# --wait option you can use to set the delay between hits,
# for instance.
#
User-agent: wget
Disallow: /
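For the curious, here's roughly what "use it properly" amounts to, as a Python sketch rather than wget flags (the site, paths, and user agent are placeholders; wget's --wait does the sleeping part for you):

import time
from urllib import robotparser, request

USER_AGENT = "ExampleMirrorBot"      # placeholder user agent
BASE = "https://example.org"         # placeholder site
PAGES = ["/", "/docs/", "/about/"]   # placeholder paths to mirror

rp = robotparser.RobotFileParser()
rp.set_url(BASE + "/robots.txt")
rp.read()
delay = rp.crawl_delay(USER_AGENT) or 1.0   # default to 1s between hits if no Crawl-delay is given

for path in PAGES:
    url = BASE + path
    if not rp.can_fetch(USER_AGENT, url):
        print("robots.txt disallows", url)
        continue
    req = request.Request(url, headers={"User-Agent": USER_AGENT})
    with request.urlopen(req) as resp:
        print(url, resp.status, len(resp.read()), "bytes")
    time.sleep(delay)                        # the moral equivalent of wget's --wait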
and I wouldn’t give a fuck what IBM’s pet distro does, but Red Hat’s developers have a high amount of control over what ends up in the userland… and bootloader… and pretty much every part of the system but the kernel (where they got told to fuck off) of every Linux distro but the obscure ones
I started a job in the last year that really forced me to play around with different distros and sometimes building them. Pretty much my entire experience is “abandon ubuntu, just use debian” and wishing other people would do the same
(Pretty much my entire reasoning is that snap fucked up my dev environment so bad I rage installed debian)
I don't want to hear that I'm irrational from Roko of all people haha.
Dude sure spends a lot of energy on trans people and immigrants and wokeness for someone who thinks that climate change doesn't matter because "by 2100 we will probably have disassembled Earth along with the rest of the solar system, and climate change will seem very quaint."
Also is his flirting with white supremacy new, or has he always been that fascist of a weirdo?