Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 17 March 2024
As suggested in this thread, to a general "yeah sounds cool". Let's see if this goes anywhere.
Original inspiration:
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
If your sneer seems higher quality than you thought, feel free to make it a post; there's no quota here.
There isn't really a suitable awful.systems sub to put it in, but I thought I'd note here that Stonetoss got thoroughly doxxed, just to increase the general good cheer and bonhomie.
In February of 2021 the far-right social media platform Gab experienced a data breach resulting in the exposure of more than 70 gigabytes of Gab data, including user registration emails and hashed passwords. Like many of those on the far-right, Red Panels had a presence on Gab, so we consulted the now-public data set from the Gab exposure. We learned that the “@redpanels” account had been registered with the email hgraebener@*****.com.
Graebener was part of an Open iT delegation to Japan in May 2019 and appeared in photos of this on the Open iT LinkedIn page. [...]. During the same time, StoneToss was eager to let his fans know that he had arrived in Japan, writing on Twitter, “Finally made it to the ethnostate, fellas.”
So today I learned there are people who call themselves superforecasters®. Neat!
The superforecasters® have had a melding of the minds and determined that covid-19 was 75% likely to not be a lab leak. Nifty! This is useless to me!
Looking at the website of these people with good enough judgement to call themselves "Good Judgement", you can learn that 100% of superforecasters® agree that there will be fewer than 100 deaths from H5N1 this year. I don't know much about H5N1, but I guess that makes sense given that it's been around since 1996 and would need a mutation to become contagious among humans.
I have used "Copilot" LLM AI to point me in the right direction. And to the point of the LLM they have been trained not to give a response about conflict as they say they are trying to permote peace instead of war using the LLM.
To minimize the chance that outstanding accuracy resulted from luck rather than skill, we limited eligibility for GJP superforecaster status to those forecasters who participated in at least 50 forecasting questions during a tournament “season.”
Fans of certain shonen anime may recognize this technique as Kodoku -- a deadly poison created by putting a bunch of insects in a jar until only one remains:
A hundred species of insects were collected, the larger ones snakes, the smaller ones lice. They were placed together in a vessel and left to eat one another, and whatever remained of the last species was kept. If a snake remained, it was a serpent Gu; if a louse, a louse Gu. This was then used to kill a person.
"But what's the catch Saturn"? I can hear you say. "Surely this is somehow a grift nerds find or a way to fleece money out of governments".
Nonono, you've got the completely wrong idea. Good Judgement offers a $100 Superforecasting Fundamentals course out of the goodness of their heart, I'm sure! I mean, after all, if they spread Superforecasting to the world, then their Hari-Seldon-esque hivemind would lose its competitive edge, so they must not be profit-motivated.
If you are a UK government entity interested in our services, contact us today.
Maybe they have superforecasted the fall of the British Empire.
And to end this, because I can never resist a web design sneer:
Dear programmers: if you apply the CSS rule word-break: break-all; to the string "Privacy Policy", it may end up rendered as "Pr[newline]ivacy Policy", which unfortunately looks pretty unprofessional :(
lmao this is one of my all-time favorite grifts. I've never understood why it isn't more popular among us connoisseurs. it's so bald-faced to say "statistically, someone probably has oracular powers, and thanks to science, here they are. you need only pay us a small incense-and-rites fee to access them"
Imo because the whole topic of superforecasters and prediction markets is both undercriticized and kaleidoscopically preposterous, in a way that makes it feel like you shouldn't broach the topic unless you are prepared to commit to some diatribe-length posting.
Which somebody should; it's a shame there's still no single place you can point to and say "here's why this thing is weird and grifty and pretend science while strictly promoted by the scientology of AI, and also there's crypto involved".
You're thinking, Saturn: that could have been a post!
I know I know, but I can't handle that kind of pressure. If someone else wants to make a post about this, or prediction markets, don't let me stop you. It's an under-sneered area at the intersection of tech weirdos, that other kind of tech weirdos, and that third kind of tech weirdos.
If you have ever wondered why so many Rationalists do weird end-of-year predictions and keep stats on them, it is because they all want to become superforecasters. (And remember: by correctly forecasting trivial things that are sure to happen, you can increase your % of correct forecasts and become a superforecaster yourself. For that reason, never try to forecast black swans; or just predict that they will not happen, for more superforecastpoints.)
See also: you could have been a winner, and gotten a free sub to ACX!
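The padding trick above fits in a few lines of arithmetic; a minimal sketch (all numbers invented for illustration), showing a coin-flip forecaster buying a respectable headline accuracy with sure things:

```python
import random

# A forecaster with pure coin-flip skill answers 20 genuinely hard
# questions, then pads the record with near-certain "gimmes"
# ("the sun will rise", "no black swan this year", ...).
random.seed(0)
hard_hits = sum(random.random() < 0.5 for _ in range(20))
print(f"hard questions only: {hard_hits}/20 = {hard_hits / 20:.0%}")

gimmes = 80  # trivially true predictions, all counted as hits
total_hits, total = hard_hits + gimmes, 20 + gimmes
print(f"padded with {gimmes} gimmes: {total_hits}/{total} = {total_hits / total:.0%}")
```

The headline number climbs from roughly 50% to about 90% without a single unit of actual forecasting skill being added.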
Fans of certain shonen anime may recognize this technique as Kodoku – a deadly poison created by putting a bunch of insects in a jar until only one remains
I understood this reference. I know it as Gu poison, which is listed in the Wikipedia article you linked!
To minimize the chance that outstanding accuracy resulted from luck rather than skill, we limited eligibility for GJP superforecaster status to those forecasters who participated in at least 50 forecasting questions during a tournament “season.”
When I was a kid I read a vignette about a guy trying to scam people into thinking he was amazing at predicting things. He chose 1024 stockbrokers, picked one stock, and in 512 envelopes he said the stock would be up by the end of the month, and in the other 512 he said it would go down. You can see where this story is going: he would be left with one person thinking he had predicted 10 things in a row correctly and was therefore a superforecaster. This vignette was great at illustrating to child me that predicting things correctly isn't necessarily some display of great intelligence or insight. Unfortunately, what I didn't know was that it was setting me up for great disappointment: from that point on and forevermore, I would see time and time again how easily people fall for this shit.
(For some reason, when I try to think of where I read that vignette, Vonnegut comes to mind. I doubt it was him.)
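The scam's arithmetic is easy to check; a minimal sketch (the halving is deterministic, whichever way the stock actually moves):

```python
# Start with 1024 marks. Each month, half receive an "up" letter and
# half a "down" letter, so whatever the stock does, exactly half the
# remaining marks have seen only correct predictions so far.
marks = 1024
for month in range(1, 11):
    marks //= 2
    print(f"month {month:2d}: {marks:4d} marks have seen only correct calls")
# After 10 months: 1024 / 2**10 = 1 mark with a "perfect" track record.
```

Ten correct calls in a row sounds like one-in-a-thousand clairvoyance, and it is: the scammer just bought a thousand lottery tickets.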
"He chose 1024 stockbrokers, picked one stock, and in 512 envelopes he said the stock would be up by the end of the month, and in the other 512 he said it would go down."
1024 stamps?
This guy is clearly already made of money, so why is he even bothering?
I grabbed a book on the Fermi paradox from the university library and it turned out to be full of Bostrom and Sandberg x-risk stuff. I can’t even enjoy nerd things anymore.
it’s the actual fucking worst when the topics you’re researching get popular in TESCREAL circles, because all of the accessible sources past that point have a chance of being cult nonsense that wastes your time
I’ve been designing some hardware that speaks lambda calculus as a hobby project, and it’s frustrating when a lot of the research I’m reading for this is either thinly-veiled cult shit, a grift for grant dollars, or (most often) both. I’ve had to develop a mental filter to stop wasting my time on nonsensical sources:
do they make weird claims about Kolmogorov complexity? if so, they’ve been ingesting Ilya’s nonsense about LLMs being Kolmogorov complexity reducers and they’re trying to use a low Kolmogorov complexity lambda calculus representation to implement their machine god. discard this source.
do they cite a bunch of AI researchers, either modern or pre-winter? lambda calculus, lisp, and functional programming in general have a long history of being treated as the magic that’ll enable the machine god by AI researchers, and this is the exact low quality shit research that led to the AI winter in the first place. discard this source.
at any point do they casually claim that the Church-Turing thesis has been disproven or that a lambda calculus machine is super-Turing? throw that crank shit in the trash where it belongs.
I think the worst part is having to emphasize that I’m not with these cult assholes when I occasionally talk about my hobby work — I’m not in it to make the revolutionary machine that’ll destroy the Turing orthodoxy or implement anyone’s machine god. what I’m making most likely won’t even be efficient for basic algorithms. the reason why I’m drawn to this work is because it’s fun to implement a machine whose language is a representation of pure math (that can easily be built up into an ML-like assembly language with not much tooling), and I really like how that representation lends itself to an HDL implementation.
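For anyone wondering what "speaking lambda calculus" even looks like in practice, here is a minimal software toy (a generic de Bruijn-indexed evaluator sketched here for illustration; it has nothing to do with the commenter's actual HDL design):

```python
from dataclasses import dataclass

# Lambda terms with de Bruijn indices: variables are numbered by how
# many binders separate them from the lambda that introduced them.

@dataclass
class Var:
    idx: int          # de Bruijn index: 0 = innermost binder

@dataclass
class Lam:
    body: "Term"

@dataclass
class App:
    fn: "Term"
    arg: "Term"

Term = Var | Lam | App

def shift(t: Term, by: int, depth: int = 0) -> Term:
    """Shift the free variables of t (those >= depth) by `by`."""
    match t:
        case Var(i):
            return Var(i + by) if i >= depth else t
        case Lam(b):
            return Lam(shift(b, by, depth + 1))
        case App(f, a):
            return App(shift(f, by, depth), shift(a, by, depth))

def subst(t: Term, val: Term, depth: int = 0) -> Term:
    """Substitute val for variable `depth` in t, closing that binder."""
    match t:
        case Var(i):
            if i == depth:
                return shift(val, depth)
            return Var(i - 1) if i > depth else t
        case Lam(b):
            return Lam(subst(b, val, depth + 1))
        case App(f, a):
            return App(subst(f, val, depth), subst(a, val, depth))

def reduce(t: Term) -> Term:
    """Beta-reduce until no redex remains (may loop forever on Omega)."""
    match t:
        case App(f, a):
            f = reduce(f)
            if isinstance(f, Lam):
                return reduce(subst(f.body, a))
            return App(f, reduce(a))
        case Lam(b):
            return Lam(reduce(b))
        case _:
            return t

# (λx. λy. x) applied to (λz. z) reduces to λy. λz. z:
print(reduce(App(Lam(Lam(Var(1))), Lam(Var(0)))))
```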
Oh boy, I have thoughts about Kolmogorov complexity. I might actually write a section in my textbook-in-progress to explain why it can't do what LessWrongers want it to.
A silly thought I had the other day: if you allow your Universal Turing Machine to have enough states, you could totally set it up so that if the first symbol it reads is "0", it outputs the full text of The Master and Margarita in Unicode, whereas if it reads "1", it goes on to read the tuples specifying another TM and operates as usual. More generally, you could take any 2^N - 1 arbitrarily long strings, assign each one an N-bit abbreviation, and have the UTM spit out the string with the given abbreviation if the first N bits on the tape are not all zeros.
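A minimal sketch of that rigged machine for the N = 1 case (the "base machine" below is a stand-in that just decodes 8-bit ASCII rather than a real UTM, and the pet string is truncated):

```python
# A toy of the rigged universal machine described above: one reserved
# one-bit abbreviation, everything else deferred to a base machine.

PET_STRING = "The Master and Margarita, chapter one: ..."  # truncated stand-in

def base_machine(program: str) -> str:
    """Stand-in 'UTM': decode the program as 8-bit ASCII."""
    chunks = (program[i:i + 8] for i in range(0, len(program), 8))
    return "".join(chr(int(c, 2)) for c in chunks)

def u_prime(program: str) -> str:
    """The rigged machine U': first bit 0 means 'emit the pet string'."""
    if program[:1] == "0":
        return PET_STRING             # the whole novel for one bit
    return base_machine(program[1:])  # otherwise run the base machine

print(u_prime("0"))                       # one-bit "program" for the novel
print(u_prime("1" + "0110100001101001"))  # decodes to "hi"
```

Relative to this machine, the entire novel has a one-bit description, which is the concrete face of the invariance theorem: Kolmogorov complexity is only defined up to an additive constant that depends on the reference machine.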
I frequent some (very AI-critical) art spaces, and every now and then we get some trolls who act like literal anime villains, complete with evil plans and revenge plots, but unfortunately without cool villain laughs.
I always wonder if those bozos were all stuffed into a trashcan by a gang of delinquent artists in high school, judging from the absolute hate-boner they seem to have.
I've deliberately not been talking about it online to aid in keeping it from their knowledge as long as possible.
Not sure if he knows that not all artists live in caves and make cave paintings. And even those who do probably have a smartphone with them, for better or worse. So I’m afraid his nefarious plan doesn’t quite work out.
I knew it would scare the anti-AI shitless, because it completely bypasses scraping, datasets […].
Shaking in my chair over here, but I still don’t understand how this negates the needs for scraping and datasets. Just because I can attach a reference image to my prompt doesn’t mean the waifu generator can suddenly operate without training data.
I foresee a full-on tantrum when this becomes commonly known.
I mean, it’s not like Midjourney put out a big-ass announcement for that feature or anything. It’s totally a secret that only an elite circle knows about.
this is just an increasingly desperate Seto Kaiba taking to the internet because yu-gi-boy pointed out his AI-generated Duel Monsters deck does not have the heart of the cards, mostly because the LLM doesn’t understand probability, but he’s in too deep with the Kaiba Corp board to admit it
imagine someone pulls this out and you have no idea what it is. you're kind of nervous and weirded out by the energy at this orgy but at least this will distract you. you look at your first card and it has a yudkowsky quote on it
We all wish our friends would be more rational, especially when they disagree with us. But actually helping them can be difficult, especially when already in an argument. Rationality Cardinality will help you teach your friends how to think more clearly, by introducing them to concepts in a fun and memorable way.
Well, it will make your friends more Rationalist, but not in the way they hope.
dear fuck I found their card database, which doesn’t seem to be linked from their main page (and which managed to crash its tab as soon as I clicked on the link to see all the cards spread out, because lazy loading isn’t real):
e: somehow the cards get less funny the higher the funny rating goes
Incredible, they just use the limerick that appears in the "Exaggeration and distortion of mental changes" section of Phineas Gage's Wikipedia article, uncritically.
I just found out that there are Dominican Republic supremacists? Like, the latest thing on Xitter is making the DR out to be Caucasian Haiti. It's some especially pol-brained nonsense about how the DR is successful because it's a white country, even though they're all very clearly AT LEAST light-skinned? It's an argument about a country that only works if you've never seen the country or its people.
I'm not all that au courant about the DR, but re: Haiti, the "Revolutions" podcast has a long series about the Haitian revolution, and it's super interesting. It's clear to me that Haiti paid the price of being the first Black republic to gain independence.
Edit: guess what historical figure Wikipedia is most interested in?
And I was so happy we had resisted talking about the graph/incident directly (here is the incident, indirectly). My opinion remains a bit like: wow, that is weird, but good for her, and good to see they took safety seriously.
let's see if it's something that continues to grow and grow, so that in a month or so people I know will try to explain it to me and I will have to pretend I've known all about it, because I am Terminally Online
I found this article about AI bullshit last week; it was written in 1985 by Tom Athanasiou and published in the (also new to me) Processed World zine.
The world of artificial intelligence can be divided up a lot of different ways, but the most obvious split is between researchers interested in being god and researchers interested in being rich. The members of the first group, the AI "scientists," lend the discipline its special charm. They want to study intelligence, both human and "pure," by simulating it on machines. But it's the ethos of the second group, the "engineers," that dominates today's AI establishment. It's their accomplishments that have allowed AI to shed its reputation as a "scientific con game" (Business Week) and to become, as it was recently described in Fortune magazine, the "biggest technology craze since genetic engineering."
The engineers like to bask in the reflected glory of the AI scientists, but they tend to be practical men, well-schooled in the priorities of economic society. They too worship at the church of machine intelligence, but only on Sundays. During the week, they work the rich lodes of "expert systems" technology, building systems without claims to consciousness, but able to simulate human skills in economically significant, knowledge-based occupations. (The AI market is now expected to reach $2.8 billion by 1990. AI stocks are growing at an annual rate of 30%.)
#3 is "Write with AI: The leading paid newsletter on how to turn ChatGPT and other AI platforms into your own personal Digital Writing Assistant."
and #12 is "RichardGage911: timely & crucial explosive 9/11 WTC evidence & educational info"
Congratulations to Aella for reaching the top of the bottom. Also, random side thought: why do guys still simp in her replies? Why didn't they just sign up for her birthday gangbang?
Unfortunately, the genie is out of the bottle here. It would be political malpractice to liberalize these safety rules. The first child who dies or is critically injured after eliminating the post-two-year-old requirements is a political disaster for whoever changed the rules.
I'm doing a reading of good fan-fiction at a con this weekend, to counter the many "bad fanfic reading" panels. I want to read an interesting passage from HPMoR.
Thank the acausal robot god for this thread, I can finally truly unleash my pettiness. Would anybody like to sneer at the rat tradition of giving everything overly grandiose names?
"500 Million, But Not A Single One More" has always annoyed me because of the redundancy of "A Single One." Just say Not One More! Fuck! Definitely trying to reach their title word count quota with that one.
The Zvi post that @slopjockey@[email protected] linked here is titled "On Car Seats as Contraception | Or: Against Car Seat Laws At Least Beyond Age 2", which is just... so goddamn long for no reason. C'mon guys: if you want to use two titles, just use one.
Then there's the whole slew of titles that get snowcloned from famous papers, like how "Attention Is All You Need" spurred a bunch of "X is all you need" blog posts.
Monologue, 10-second read:
“yeah dawg so extrapolating from data seems intuitive, but data alone is not enough to make accurate or convincing predictions.”
Ponzi Schemer: "Ignore all these elaborate, abstract, theoretical predictions. Empirically, everyone who's invested in Bernie Bankman has received back 144% of what they invested two years later."
LessWronger: "Your object-level error is that you have committed the trend projection fallacy instead of using the universal prior and Jaynes-Solomonoff inversion, as HPMoR explained using the analogy of the inter-magic-national goblin banking system...."
This dude literally asks a chick how she would feel if she hadn't had breakfast today. Here's the biggest self-own I've ever seen, presented without further comment.