Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 25 August 2024
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned soo many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
Always a coin flip whether circlejerk subs are fascist or not. But this one looks pretty alright at first glance! Lots of making fun of transphobia and creepy people.
This author seems to write fairly typical escapism-porn LNs. According to this summary, the plot is about as stupid and problematic as you might expect for such a thing. If the Alya-san anime goes well, this will probably get at least a manga.
All this is just adding up to: hack author is formulaically chasing trends and trying to parlay some genAIed trash into a manga and maybe an anime.
So...a yacht named "Bayesian" just sank off the coast of Sicily. It was owned by British billionaire Mike Lynch, former CEO of Autonomy. Lynch just barely managed to stay out of trouble with US authorities over fraud charges and will likely owe HP Enterprise a hefty bag for misrepresentations before their multi-billion dollar acquisition. My heart goes out to the innocents and crew who are lost. (Edit: Lynch appears to still be missing)
I'm fascinated by the fucking size of that yacht and did some more research and found more stories about obscenely expensive boats going to the bottom.
Trust me when I say there’s nothing worth reading in the article beyond the headline:
He would be right if he meant it in the sense that Bruce Wayne ineffectively leads a personal crusade motivated by unresolved childhood angst, and that the world would be much better off if his money was just spent on basic needs for the people and he went to therapy.
See, I actually don't hate the idea of engineering and automating away parts of work. Work, almost by definition, sucks. If it didn't suck we wouldn't think of it as "work."
But the current model of AI is less trying to free humanity from work than it is freeing work from humanity. Take the important business of making the line go up away from the grubby hands of people who need things like "food" and "healthcare" as a necessary precondition for working, to say nothing of their desires for "dignity" or "respect". Even creative and artistic work that gets underpaid in exchange for being something people can be innately passionate about can be taken away from those people and fed a diet of raw electricity and compute.
I mean, "here's the video you asked for" is a pretty standard setup for a Rick roll, so it's definitely in the training data, and if it doesn't have an actual URL to link it's going to fill in something if it starts to spit out an affirmative response, which as we've seen is a really standard failure mode for these systems.
You know those polls that say fewer than 20% of Americans trust AI scientists? It shouldn't be the case, because no group is doing more right now to elevate the universe to a higher state of complexity. You know who the public does trust? Open source developers.
I thought this was gonna go in a completely different direction.
Setting: romantic sunset beach, without a crowd in sight. There is no sound but that of a gentle breeze, waves lapping at the shore, and seagulls in the distance. Ryan turns to his girlfriend Tiffany. He gets down on his knees, the sand muddying his pants. Tiffany clasps her hands over her mouth in disbelief. Ryan says: "Tiffany you are the light of my life. You have made me a better man. I can't see me living with anyone else but you. Will... will you marry m-- bzzt. Thank you for using Virtua-Boyfriend, unfortunately we ran out of VC money so are shutting down.".
In classic Rationalist fashion, she noticed something she didn't like, hypothesized a cause with no real evidence, and then proceeded to rant about the implications of that unproven hypothesis.
My suspicion, on the other hand, is that because Claude is just reproducing statistical patterns from its training data, it is simply reflecting the fact that she is referred to as a Nazi coder far more often than as some kind of open-source luminary. Unless her github metadata signs everything "Justine T the open-source developer", that association isn't reflected in the patterns extrapolated from the training data.
Gonna copy-paste the quote Baldur used because god damn:
AI isn’t simply a problematic technology but an apparatus that is shaped by the injustices of our existing social relations and which, in turn, reshapes and intensifies them.
This is entirely my gut instinct, but there is boiling resentment against big tech. Something is shifting, and it's shifting violently, both in the general public and the media. Blood in the water. People are ready for a change.
I've had that same gut instinct before - I've kinda had it since Baldur noted tech's disconnect from the public a month ago. Feels like we're entering an era where working in tech is treated as a red flag, a sign you're a money-hungry asshole willing to hurt innocent people to make a quick buck.
The good news is that the tools created were sufficiently useful that there's still a decent job market for tech workers (assuming we're not focused on the griftier side), it's just focused largely in non-tech companies.
xcancel link, and it’s a fairly interesting set of posts too (with exactly the type of dumbass hustle culture replies one would expect from Twitter, with people essentially demanding first dibs on the domain)
Khelif, who went on to win Olympic gold despite the harassment, reportedly filed a lawsuit alleging cyberbullying against Rowling (Elon Musk is also named in the suit). Shortly after the lawsuit became public on August 13, Rowling went silent on X, leading to speculation from many onlookers that she had pushed her transphobic narrative too far. On August 23, though, she again appeared on the platform, spreading more false and misleading commentary on Khelif. Her first post was a quote from a transphobic hit piece against Khelif by Colin Wright, the former managing editor of the far-right website Quillette.
I mean, does she actually write books anymore? Or is she a full time TERF now? Because that's a direct conflict between what's in her legal best interest and in her professional best interest.
Generative AI is ripping the humanity out of things. Built on a foundation of theft, the technology is steering us toward a barren future. We think machine learning is a compelling technology with a lot of merit, but the path generative AI is on is wrong for us.
We're here for the humans. We're not chasing a technology that is a moral threat to our greatest jewel: human creativity. In this technological rush, this might make us an exception or seem at risk of being left behind. But we see this road less travelled as the more exciting and fruitful one for our community.
Procreate is an example of what good AI deployment looks like. They do use technology, and even machine learning, but they do it in obviously constructive scopes between where the artist's attention is focused. And they're committed to that because... there's no value for them to just be a thin wrapper on an already completely commoditized technology on its way to the courtroom to be challenged by landmark rulings with no more room to grow into whooooooops.
edit @dgerard already posted this on the top level, I just got triggered by one of the many dupe submissions on HN
highlights include
standard Signal-bashing
comparison with other heroes like Kim Dotcom (lol) and Snowden
outrage that the arrest order was issued while Durov was in the air, so he couldn't evade justice (this is unconfirmed)
outrage that being a citizen of a country allows that country to enforce its laws against someone
Durov is rich and accused in a country governed by the rule of law (in contrast to the other country which has issued him a passport, the UAE). He can afford the very best lawyers and will have to be content with staying in Paris while this works out, boo-hoo.
but if he’s just being held accountable for running a service where others did nefarious things, then this should be a chilling effect for all founders.
outrage that the arrest order was issued while Durov was in the air, so he couldn’t evade justice (this is unconfirmed)
Clearly arrest warrants should be made like hide-and-seek where the seeker has to count to 100 days before arresting the hider and publish "ready or not here I come!" to their social media account.
The original video I linked to looked possibly CG, because it was short and looked like the uncanny gaze of giant Elon Musk was staring directly into the camera the whole time; but turns out it is real. I'm so "happy".
And surprising exactly no one, it's an advert for an NFT.
If either of the parties involved were just about any other science fiction author, it would be among the least damning things I've heard about the other one, but it just had to be these two.
it’s unfortunate that all the Google alternatives big enough to matter are all-in on making the exact same mistakes as Google at roughly the same time, probably in hopes that one of them can swoop in and take the top dog spot when the monopoly ruling finally makes Google stumble
and if that sounds like a fucking stupid plan, you’re right, and that’s why neither of us are advertising executives
AI has made it much easier to spam searches with SEO shite, so any attempts at algorithmic search risk being spammed to death in short order
Public trust in tech is utterly shot to hell - not sure how much people trust search specifically, but I suspect people are currently trusting word-of-mouth (e.g. Reddit, TikTok) over whatever any of the major search engines are providing right now
In retrospect, google, since its inception when it was still good, always actually relied on human curation. The primary components of PageRank were:
"how much have people linked to this?"
"how much have reliable sites linked to this?"
"how good quality are pages from this site usually?"
(Which is still a way to get value out of google, by adding "site:www.reliable-website.example" to your queries)
It was definitely a useful product, but ultimately it relied on human labor to surface quality results closer to the top.
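To make the "human curation" point concrete, here's a minimal toy sketch of the textbook PageRank iteration (the link graph, site names, and damping factor are made up for illustration; this is not Google's production ranking, which piled much more on top):

```python
# Toy PageRank: a page's score is the rank flowing in from pages that link
# to it, so "how much have reliable sites linked to this?" drives the result.
links = {  # hypothetical link graph: page -> pages it links out to
    "reliable-website.example": ["good-article.example", "blogspam.example"],
    "good-article.example": ["reliable-website.example"],
    "blogspam.example": ["blogspam.example"],
}
damping = 0.85  # standard damping factor from the original PageRank paper
ranks = {page: 1 / len(links) for page in links}

for _ in range(50):  # iterate until the scores settle
    ranks = {
        page: (1 - damping) / len(links)
        + damping * sum(
            ranks[src] / len(outs)          # each linker shares its rank
            for src, outs in links.items()  # across everything it links to
            if page in outs
        )
        for page in links
    }

print(sorted(ranks.items(), key=lambda kv: -kv[1]))
```

The scores are just an aggregate of people's linking decisions, which is the point: the ranking only works as long as humans keep doing the curation for free.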
Self-identified emergency dept physician on my local subreddit just believes things that spicy autocorrect tells him about his job.
Dude here claims that Tennessee is ranked #3 for physicians looking for work, but when asked for a source...
Obviously you’ll find various resources. ChatGPT has it as #3 for whatever that’s worth. At least for my specialty Tennessee offered a top salary, moderate tort reform, no income tax, eliminated the professional privilege tax, and more. It’s certainly not a bad place to practice and I’d argue very few physicians are avoiding Tennessee.
My experience talking with people has been often this same situation; they have some specialty expertise and for whatever reason now accept ChatGPT as all knowing and use it a lot day to day. It worries me since the response to “why” is usually along the lines of humans aren’t always right but this is AI so it will keep getting better eventually.
Tools should be 100% correct. Your work has regressed because you swapped out good tools and practices for spicy autocorrect.
"Hey you said I maybe killed 5 children with murder drones on a mountain could you maybe not?"
"sorry to hear you're not liking it! it's currently not possible to opt out since this is just an experiment. but if you get a response that you feel is unhelpful or irrelevant, you can let us know by submitting feedback via these steps: https://goo.gle/3ySHg00"
Etsy: an artistic one-stop chop shop where slop pops up like catch crops - and that quick shot's no hatchet-job, so keep it from pops 'fore it leaves him in a strop:
(Full disclosure: the opportunity for some quickfire rhymes may have played a role in birthing this sneer.)
One of my kids is a huge Gravity Falls fan and has recently acquired The Book of Bill. For some reason, Bill's anecdote about silly straws reminded me of the grok discourse:
FUN FACT: When you use a silly straw to murder someone, it becomes a serious straw!
It would be pretty funny for someone to be so awful that they get haunted by an Atheist. I mean, it's unlikely, but if it were possible then this would do it.
I wonder how soon we’re going to hit the “we have team $foo building that internally since we can’t use $serviceX for that (externally hosted has security issues)” phase of corporate fafo
why am I thinking of this? oh, no reason. just pondering the cyclic nature of history I guess you could say.
decentralized cult which worships the concept of rational thinking as superior to evidence. has lots of little rituals which are supposed to invoke rational thinking. uses AI in the place of angels and demons. no core holy texts, but the closest things are a sequence of blog posts and a harry potter fanfic. very influential in silicon valley, very intermingled with various explicitly fascist groups
Longer than I'd intended, but the way I'd describe it is probably:
A mystical Harry Potter based sex cult deeply embedded in the techbro scene. They want what many cults want: to commune with God, achieve immortality or enlightenment, and obtain power in the current world, but they dress it in the trappings of science and computer programming.
Due to demographic features, their desire to be clever, and a certain contrarian attitude, they will often seek to rationalise harmful social practices, which leads them to support anti-feminist and race realist positions with shocking frequency.
Because of their close connections to the tech scene, along with the personal relationship the cult founder had with Peter Thiel, and the fact that the cult has been indoctrinating kids since the aughts, they are shockingly influential in the AI scene.
Like most cults, they claim to want to teach people to think correctly (rationally), but they actually value the community of being in a cult (and the potential social networking and financial benefits) over thinking rationally.
In terms of style, they like long works with unclear arguments, being clever or witty over being right, and strongly signalling their rationality (sometimes even using good tools), but not allowing that to interfere with the core features of being a cultist.
(1-3) are what I'd consider core. (4-5) are what I'd add if the person seems interested. If they seem really interested, I'd also discuss other connections (e.g. to Effective Altruism, the Future of Humanity Institute, George Mason University, Future Perfect, neoreaction), their ideology in more specific terms (e.g. the Sequences, Roko's Basilisk), and associated members (e.g. EY, SSC, Aella, SBF).
SV Scientology, they can't land you a leading role in a summer blockbuster but they sure as hell can put you in the running for AI policy related positions of influence or for the board of a company run by one of their more successful groomings. Their current most popular product is court philosophers for the worst kind of aspiring technofeudalist billionaire.
If this gets them interested, you'll eventually get your chance to do a deep dive into any details of cosmist lore you find relevant.
It's a little bit like a tiny version of the Mormons if Joseph Smith had read the collected works of Isaac Asimov instead of the Bible and also his name was Yud.
Or to go with less of a sneer, the Rationalist/TESCREAL/Californian Ideology is a loose grouping of fringe beliefs rooted in old-school science/tech fetishism with a lot of science fiction overlays and libertarian/reactionary politics that effectively define "let ultrawealthy tech capitalists do whatever they want" as the only reasonable choice and make it a moral imperative.
Techish people (I would use the word techbro here, but even that needs an explanation) who want to make their fictional science fiction utopia real, but got so scared of their own science fiction ideas going wrong and killing everybody that they started a cult around rationality, sort of a Vulcans fan club. They have a pattern where they think they and their methods are smarter and better than actual experts.
They tried to do their own research with an open mind, but left their minds so open that all kinds of sexists and racists crawled in. Who are welcomed as long as they are verbose enough.
I'm sorry but did the company not have anyone in their org who has had kids???
Bassinets are, as mentioned, used for 5-6 months. Reselling expensive baby gear that's only used for the first months of infancy is very common. Established companies, like those making baby carriages or car seats, know this and make their money upfront at purchase time, ~~preying on~~ relying on ~~baby-brained~~ doting parents with more money than sense to buy the latest and greatest.
The people designing this product and/or their financiers should be ashamed of themselves.
Or they could have open-sourced the protocol and relied on dudes like this one to keep it going:
A friend of mine recently had a kid and gave the snoo a good review. If it’s really that good a product, it should be nationalised in the name of public health.
that’s a good question! I’ll probably have to brainstorm this with @[email protected] later today. in the meantime, is there any precedent for how to do it on Mastodon? we might be able to adopt whatever they do — or maybe at the very least, if there’s a good way to do it there, we could link to a Mastodon thread for the bracket and keep discussion on here.