Stubsack: weekly thread for sneers not worth an entire post, week ending 3rd November 2024
Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
almost every smart person I talk to in tech is in favor of mandatory eugenic polygynous marriages in order to deal with the fertility crisis. people are absolutely fed up with the lefty approach of using generational insolvency as a pretextual cudgel to install socialism.
Every person I talk to — well, every smart person I talk to — no, wait, every smart person in tech — okay, almost every smart person I talk to in tech is a eugenicist. Ha, see, everybody agrees with me! Well, almost everybody…
Man, I didn't even know how to react to this nonsense. The obvious sneer is to point out that if the alternative is to interact with people like ER here we really shouldn't be surprised to see a declining birth rate. But I think the more important takeaway that this hints at is that these people are dumb and fundamentally incurious.
Like, there's plenty of surveys and research into why people are having fewer kids than they used to, and it's not because toddlers are little hellions more so than in the past. And "generational insolvency" is a pretty big fucking part of the explanation actually, as is empowering families to choose whether or not to have children rather than leaving it entirely up to the vicissitudes of biological processes and horniness. The latter part cuts both ways, in that people who want families are (theoretically; see above re: financial factors) able to take advantage of fertility treatments or IVF or whatever and have kids where they historically would have been unable to do so.
But no, rather than actually engage with any of that or otherwise treat the world like other people have agency they have identified what they believe to be the problem and have decided that the brute application of state power is the solution, so long as that power is being applied to other people. For all that we acknowledge the horrors of fascism, I think the stupidity of these people is also worth acknowledging, if for no other reason than to reinforce why this shit shouldn't be taken seriously.
Man, I didn’t even know how to react to this nonsense
same way as other nazis - boop 'em on the nose
I'd be willing to wager a guess that this fragile little flower has never had a "physical altercation" in their life and would walk away with fucking ~ptsd from a single "hey that shit is not okay" boop
this hints at is that these people are dumb and fundamentally incurious
if you're talking about eigenrowboat, I don't think I agree. they're quite curious, but they "just" go in with a particular viewpoint and a desire to "prove their point" in the most prevaricating way possible. it's no accident that the entire sphere of "how do we make scientific racism and nazism more socially palatable" gravitates around these fuckers. if you're instead talking about them making these comments in a "see the poor are dumb and useless and thus deserve what they get", well, see aforementioned shitty opinions
Saying they're dumb and incurious offers almost too much respect. They believe in racial eugenics based on IQ - look at the kind of shit Elon Musk retweets. Scaremongering about fertility is just the way they get to the racial eugenics, while pretending it's a necessity not a choice.
Edit: and now I see froztbyte said almost the same thing first. Oops
Cue the scene where Buck Turgidson finds out that Dr. Strangelove proposes humanity survive deep inside mineshafts, with multiple women for every man.
Anyway I like how the options presented are "socialism" - vaguely defined so as to be something anyone can project their fears on - on the one hand, and state-ordered sexual slavery on the other. True freedom, amirite?
I had to doublecheck what "polygynous" means, and I "love" this Google-generated Wiki excerpt. It's technically correct in some parts of the world.
almost every person in tech (...) to deal with the fertility crisis
Why would we be listening to "tech" to deal with "the fertility crisis"? Why is "tech" concerned with "fertility"?
Stay in your fucking lane, will ya. How about mandatory eugenic polygynous marriages to address the growing crisis of open-source development? The crisis of newest C++ standards not being implemented in the popular compilers quickly enough? The crisis of Node.JS existing?
I wonder if the OpenAI habit of naming their models after the previous ones' embarrassing failures is meant as an SEO trick. Google "chatgpt strawberry" and the top result is about o1. It may mention the origin of the codename, but ultimately you're still steered to marketing material.
Either way, I'm looking forward to their upcoming AI models Malpractice, Forgery, KiddieSmut, ClassAction, SecuritiesFraud and Lemonparty.
We're also using AI internally to improve our coding processes, which is boosting productivity and efficiency. Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster.
Firstly, if this is literally true they're completely fucking cooked.
Google has a gigantic code generation culture, because the engineers there strongly prefer complexity to drudgery.
If you asked them to write fizzbuzz and left them in a room for twelve hours they would deliver a new programming language that generalized repetitive string printing, with an extension language for potential non-string-printing actions.
I left in ‘22 but feel fairly confident that “25% of code generated by AI” is going to be more of the same.
Best case scenario they are using a loose definition of AI to mean any code generated by other code in order to signal to investors that google isn’t the hulking, sluggish monolith that it is and is agile enough to use AI.
Worst case scenario: “hey chatgpt pls write me new search algorithm to print money, thanks, sundar”
If the purpose of a metric is to show adoption, the metric can be defined in a way to show adoption. Could be just an effect of promo driven culture, AI push and good'ol Goodhart's law.
Like, how do you even measure when code is AI-authored and when it's not? If you type 25% of a variable name and the autocompleter guesses the rest of the name correctly, is the remaining 75% AI-generated?
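To make the absurdity concrete, here's a toy sketch (entirely hypothetical — nothing suggests this is how Google actually counts) of a "fraction of code generated by AI" metric computed from editor events, showing how trivially ordinary autocomplete inflates it:

```python
def ai_fraction(events):
    """Toy metric: fraction of inserted characters that came from accepted
    completions rather than keystrokes. Events are (kind, text) pairs where
    kind is "typed" or "accepted"."""
    typed = sum(len(text) for kind, text in events if kind == "typed")
    accepted = sum(len(text) for kind, text in events if kind == "accepted")
    total = typed + accepted
    return accepted / total if total else 0.0

# You type "user" and accept the completion "_account_balance":
events = [("typed", "user"), ("accepted", "_account_balance")]
print(ai_fraction(events))  # 0.8 -- one variable name, and it's "80% AI generated"
```

By this kind of definition, pre-LLM tab-completion would already make most codebases "majority AI written", which is exactly the Goodhart problem the comment above is pointing at.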
Was browsing ebay, looking for some piece of older used consumer electronics. Found a listing where the description text was written like crappy ad copy. Cheap over-the-top praising the thing. But zero words about the condition of the used item, i.e. the actually important part was completely missing. And then at the end of the description it said... this description text was generated by AI.
AI slop is like mold, it really gets everywhere and ruins everything.
A woman was scheduled to give a talk at an AI conference. The organizers ran her photo through an AI image expansion program to get the aspect ratio right (how did we ever manage to show photos of speakers before AI existed?).
The AI image expansion invents a bra / undershirt which wasn't visible in the original photo.
This made the rounds last week IIRC. Though, looking at it again I realize I didn't notice how over-stressed the hallucinated button is. It's funny in a disgusting way.
I'd like to imagine that Adobe/other AI photo editing people are frantically scrambling to fondle their prompts a little harder to avoid things like this. Infinite whack-a-mole.
Had a first-hand AI encounter today at the grocery store. The self-checkout now has a script that monitors an overhead video feed to make sure you're not getting tricky about what got scanned and what got put into the bagging area, and if it thinks you're shady it will stop you from proceeding and summon an employee, with no notification that something is wrong.
The new self-checkout process is as follows:
Scan your item
Hold the item plainly before you so the overhead camera doesn't get confused, looking like a Catholic priest about to deliver communion.
Place item in bagging area. Try not to have to shift things around to find a place.
Swear as the non-muteable voice instructions tell you to bag "your... Item." Legitimately feels like they got as far as assembling the voice lines before anyone realised that having the compu-checker read every purchase out loud would lead to at best an unworkable cacophony if not several immediate lawsuits.
GOTO 1
Even as antisocial and impatient as I am I've found self-checkout to be a UX disaster, but somehow it keeps getting worse.
sometimes i manage to confuse self-checkout overhead camera by having a bike helmet on, when that happens i have to hold it up over bagging area (but not put inside because weight won't match)
I wonder when the management will figure out these rigid anti theft systems cost a lot more than they save.
On that note, think i figured out a way to get free products at the lidl checkout. There were a large number of errors (some of which i caused by accident), i needed help a couple of times, and later i realized that i had paid less than expected. Not sure if it is reproducible, as that would be stealing, or trying to get hired as a red teamer.
NASB, I had a jarring experience this morning watching Patrick Boyle's latest video "Big Tech is Going Nuclear!" (not gonna link it) where 5 mins in he introduces the sponsor and it's an AI presentation slide generator, which he said he used for the images in his video. This after he mentioned the data on generating one image using the same amount of energy as charging a smartphone. The thing is he seems careful to not mention that it is a gen ai product–he never says AI–rather a piece of software that helps making presentations.
It kinda made me panic-stop the video, like an instant "well, done with you" - not sure if he continued to make a joke of it or anything. I mean, I'm sure (I hope) he was given a lot of money for the spot, but damn! Just when I thought I had a foundational understanding of people.
And this puts the bit from the other video with the referral links into context. It's not a joke, he actually expects to be making money off of people :(. I found the vagueness in the ad jarring too. There's this thing called sponsorblock, a database of timestamps for videos that skips useless stuff. The downside is you don't find out if the guy that you're watching is a shill.
oof, I’m sorry. it’s so hard to get capitalists to understand the nature of what they’re enabling, especially if it seems to be working in the short term. it’s the most frustrating thing during a bubble — it taints every decision the executive class makes, and enables grifters to get away with obvious shit even over objections from people who know better.
I can relate to how you feel about the AI stuff. I also work for GenAI-pilled upper management, and the forced introduction of github copilot is coming soon. It will make us all super extra productive! ...they say. Dreading it already. I won't use it at all, I've already made that clear to my superior. But my colleagues might use it, and then I will have to review the AI slop... uggghh...
If even freaking Gartner is now saying "well, maybe AI is too expensive and not actually so useful"... then maybe the world of management will wisen up as well, soon, hopefully, maybe?
an idea I just had (which would need some work but talking hypothetical): wouldn't it be lovely if IDEs (read: VS Code[0]) automatically inserted "Copilot Used Here" start/end markers around all generated shit. could even make it a styleguide/editorconfig setting so it's universally set across projects[1]
[0] - because lol ofc it's mainly vscode rn
[1] - and then when you find colleagues who lie about whether they're using it you wrap all their desk shit in foil
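the marker idea above could be sketched as something like this — the comment convention and the checker are both hypothetical, nothing like this exists in Copilot today:

```python
import re

# Hypothetical convention: generated regions are wrapped in
# "# copilot:start" / "# copilot:end" comments, and a tiny checker
# reports which lines of a file fall inside them.
MARKER = re.compile(r"#\s*copilot:(start|end)")

def copilot_lines(source: str) -> list[int]:
    inside = False
    flagged = []
    for lineno, line in enumerate(source.splitlines(), 1):
        m = MARKER.search(line)
        if m:
            inside = m.group(1) == "start"
            continue
        if inside:
            flagged.append(lineno)
    return flagged

src = "x = 1\n# copilot:start\ny = 2\nz = 3\n# copilot:end\nw = 4\n"
print(copilot_lines(src))  # [3, 4]
```

reviewers (or CI) could then at least know which hunks to squint at extra hard — assuming, of course, nobody strips the markers, which is where the tinfoil comes in.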
Ok actually read the screed. Ahhhh yes good ol’ Jeff “In the years after I bought the WaPo and everyone got suspicious, me and my billions of dollars have done nothing but improve the world and my credibility and definitely didn’t trap anyone in warehouses to die in a tornado, so you all trust me now, right?” Bezos
Eugene Meyer, publisher of The Washington Post from 1933 to 1946, thought the same, and he was right.
I wonder what major world events were happening in the 1930s-1940s that would line up with this...
As it turned out, Meyer did take the side of the Republican party on some issues. He was opposed to FDR's New Deal, and this was reflected in the Post's editorial stance as well as its news coverage, especially regarding the National Recovery Administration (NRA). He even wrote an editorializing "news" story under a fake name.
THERE IT IS!
But back to Jeff.
You can see my wealth and business interests as a bulwark against intimidation, or you can see them as a web of conflicting interests.
Yep. We're protected from intimidation and extortion so long as we pay our dues to the consiglieri when he comes around and don't get too chummy with the cops.
Ok so I read the thread as translated by google. Some notes:
System was set up to also use some image recognition so it could filter for some classic incel-type shit, like:
believers
zodiac sign written
doesn’t work
show breasts in photos
photos with flowers
His entire correspondence with these women was done by chatgpt, including making dates and promising gifts for those dates. He later gives gpt calendar access to avoid a two-dates-to-the-prom situation.
Did he continue using gpt to talk to his fiance? yes. Did he feign responsiveness in his texting with gpt? also yes. When she started talking about going to weddings, did it generate a marriage proposal out of the blue, and prompt him as to whether or not the message should be sent? also yes.
Truly, we are blessed to have a candidate willing to represent the freedom to sell anything on a darknet market and hire a hitman to take out your previous partners or detractors or whatever.
Quite the proof he no longer writes his own tweets. Fun fact: it seems like they created various freerossdayone cryptocurrency tokens, which are all doing badly (according to my quick google). He has lost the mandate of heaven.
Want to get even better results with GenAI? The new Google Prompting Essentials course will teach you 5 easy steps to write effective prompts for consistent, useful results.
Note: Got an email ad from Coursera. I had to highlight the message because the email's text was white-on-white.
How the chicken fried fuck does anyone make a course about "prompt engineering"? It's like seeing a weird sports guy systematize his pregame rituals and then sell a course on it.
Step 1: Grow a beard, preferably one like that Leonidas guy in 300.
Step 2: If your team wins, never wash those clothes, and be sure to wear those clothes every game day. That's not stank, that's the luck diffusing out into the universe.
Step 3: Use the force to make the ball go where it needs to go. Also use it to scatter and confuse the opposition.
Step 4: Ask God(s) to intervene, he/she/they love(s) your team more!
Step 5: Change allegiance to a better team if things go downhill, because that means your current team has lost the Mandate of Heaven.
Thanks, Google. You know, I used to be pretty good at getting consistent, useful results from your search engine, but the improvements you've made to it since then make me feel like I really might need a fucking prompt engineering course to find things on the internet these days. By which I mean something that'll help you promptly engineer the internet back into a form where search engines work correctly.
Over the summer, Jesse Pollak, a cryptocurrency investor and executive at Coinbase, launched Abundant Oakland, an advocacy organization that funds “moderate” candidates running in Oakland races. The organization is explicitly linked to similarly named entities in San Francisco and Santa Monica.
Abundant Oakland has a related political action committee, Vibrant Oakland, which, campaign filings show, has received donations from Pollak ($115,000), the Oakland police officers association ($50,000), cryptocurrency executive Konstantin Richter ($60,000), the northern California carpenters regional council ($150,000) and a PAC controlled by Piedmont landlord Chris Moore ($100,000).
(Github project supposedly for AI assisted mass job application, including using the AI to cater resume to job posting. God I'm terrified of ever having to return to the job market this is fucking insane.)
(I think I’ve mentioned it here before, but nonetheless)
both myself and 2 people I know were hunting last year. it’s hell (in tech, which has historically been fucking abysmal at hiring to start with). the ways this shit is going to affect other industries too…
some numbers: the one friend applied to something in the region of 1000 posts, the other 400-600 in the space of approx 4-5mo. both barely heard back from anyone, or if they did it was often months after. on some of mine, I got nack/followup mails approx 7-8mo after sending details. and that’s without even mentioning the utter fucking toxic dump swamp of listings…. holy shit what a mess
Adobe is going all in on generative AI models and tools, even if that means turning away creators who dislike the technology. Artists who refuse to embrace AI in their work are “not going to be successful in this new world without using it,” says Alexandru Costin, vice president of generative AI at Adobe.
Personally, I think this is gonna backfire pretty damn hard on Adobe - artists already distrust and hate them as it is, and Procreate, their chief competition, earned a lot of artists' goodwill by publicly rejecting gen-AI some time ago. All this will likely do is push artists to jump ship, viewing Adobe as actively hostile to their continued existence.
On a wider note, it seems pretty clear to me Alexandru Costin's drank the technological determinist Kool-Aid and has come to believe autoplag's dominance is inevitable. He's not the first person I've seen drink that particular Kool-Aid, he's almost certainly not the last, and I suspect that the mass-drinking of that Kool-Aid's fueling the tech industry's relentless doubling-down on gen-AI. A doubling-down I expect will bite them in the ass quite spectacularly.
not going to be successful in this new world without using it
The hubris is almost impressive in itself. There's not a single technology in human history that has managed to kill every art form not using it. Digital art didn't do it, photography, pencil, movable type printing, nib pens, oil paints, sgraffito, probably not even the invention of currency did it. He thinks autoplag of all things will?
i suppose when this guy speaks of artists, he means people making art as their primary source of income. not to say that those people aren't artists as valid as any others. but he's saying if you don't use ai to push out stuff ever faster, you won't make it. fuck taking your time to get inspired and have it mean something, just give us the soulless garbage to sell our products already.
I mean, he is their VP of Autoplag, so I imagine he's got even more reason to believe than the average MBA. That doesn't undermine your point, but I think the fact that adobe has appointed a VP of Autoplag should be part of the story to begin with, rather than being assumed. Did they ever have a VP of blockchain? Or a VP of copyright fraud?
"I think were going to add a whole new category of content which is AI generated or AI summarized content, or existing content pulled together by AI in some way,” the Meta CEO said. “And I think that that’s gonna be very exciting for Facebook and Instagram and maybe Threads, or other kinds of feed experiences over time."
Facebook is already one Meta platform where AI generated content, sometimes referred to as “AI slop,” is increasingly common.
In a previous post of mine, I noted how the public generally feels that the jobs people want to do (mainly creative jobs) are the ones being chiefly threatened by AI, with the dangerous, boring and generally garbage jobs being left relatively untouched.
Looking at this, I suspect the public views anyone working on/boosting AI as someone who knows full well their actions are threatening people's livelihoods/dream jobs, and is actively, willingly and intentionally threatening them, either out of jealousy for those who took the time to develop the skills, or out of simple capitalist greed.
Raytheon can at least claim they're helping kill terrorists or some shit like that, Artisan's just going out and saying "We ruin good people's lives for money, and we can help you do that too"
On a personal note, it feels to me like any use of AI, regardless of context, is gonna be treated as a public slight against artists, if not art as a concept going forward. Arguably, it already has been treated that way for a while.
I specifically bring this up because Tilghman wasn't some random CEO or big-name animator - he was just some random college student making a non-profit passion project with basically zero budget or connections. It speaks volumes about how artists view AI that even someone like him got raked over the coals for using it.
Unfortunately it's the small artists who are most open and vulnerable to criticism. Amazon can probably impose this kind of shit on everyone through sheer persistence
What do they mean by "in color"? If it's just various tints throughout the film that's normal and cool. If they mean full on colourised that's messed up.
Bezos' open interference in the Washington Post's editorial section has pushed Walter Bright into a very funny series of public admissions that he did not have to make. See the orange site here for his ongoing libertarian meltdown.
His comment history is a weird mix of programming language discussion, terrible takes, simping for Musk, simping for Musk even harder (just in case you didn't realize how much he liked Musk the first time).
Musk is the sane one. It's the rest of us that are insane.
I don't think you want to hear my opinions on what the left wing thinks is obvious :-)
Also, I am neither left nor right wing, as I'm a libertarian. I believe in the principles in the Declaration of Independence, the Bill of Rights, and the system of checks and balances set up by the Constitution.
it’s just really surprising to see the political takes of a 13 year old come out of the 65 year old who created the least successful C variant
I really hope Harris wins by a landslide just so all these weird nerds eat shit. If even just one goes "wow, I really let myself get swept up into believing trump/musk was great by my echo chamber" it would be worth it. But I doubt we will get such self-awareness. The various betting/prediction markets having been wrong (or manipulated) would also be fun.
TL;DR: Our main characters have bilked a very credulous US State Department. 100 Million tax dollars will now be converted into entropy. There will also be committees.
My enshittification story*: Instagram has been suggesting people for me to follow. It markets them to me by saying “friend X follows this person!” But friend X does not follow this person. Friend X has no tenable connection to this person. Why are you bullshitting me, Zuck? Is the autoplag outflow drain hooked up to Insta?
In separate investigations completed by the blockchain firms Chaos Labs and Inca Digital and shared exclusively with Fortune, analysts found that Polymarket activity exhibited signs of wash trading, a form of market manipulation where shares are bought and sold, often simultaneously and repeatedly, to create a false impression of volume and activity. Chaos Labs found that wash trading constituted around one-third of trading volume on Polymarket’s presidential market, while Inca Digital found that a “significant portion of the volume” on the market could be attributed to potential wash trading, according to its report.
Wait, we created a market and people are manipulating it in order to profit, because it turns out market manipulation pays the same or more than being a banker/investor "superpredictor" but is much easier?
JFC, it was just 11 individuals??? To read the Putin sockpuppets, having a Russian grandmother was enough to get you booted from the MAINTAINERS list, your computer confiscated, and you sent to Archangelsk on trumped-up charges.
oh no, a bunch of nationalist pricks might stop fucking up our community spaces. I might never have a proud Russian gatekeep my contributions ever again! no please don’t go
and here’s hoping the American nationalist devs contributing on behalf of their military-industrial complex employer (hello Anduril) take a hint from this and also fuck off to their own communities where they can bully each other for no fucking reason
they won’t because the cruelty is the point for fascists regardless of nation, but here’s hoping
the C reactionaries[*] I know definitely aren’t ok, but that’s not a new condition. the cognitive load of never, ever writing bugs takes its toll, you know?
[*] and I feel like I have to specify here: your average C dev probably isn’t a C reactionary, but the type of fuckhead who uses C to gatekeep systems development definitely is
Got linked to this UFO sightings timeline in Popbitch today. Thought it looked quite interesting and quite fun. Then I realized the information about individual UFO sightings was being supplied by bloody Co-pilot, and therefore was probably even less accurate than the average UFOlogy treatise.
PS: Does anyone know anything about using Arc-GIS to make maps? I have an assignment due tomorrow and I'm bricking it.
It's interesting that not even Apple, with all their marketing knowledge, can come up with anything convincing about why users might need "Apple Intelligence"[1]. These new ads are not quite as terrible as that previous "Crush" AI ad, but especially the one with the birthday... I find it just alienating.
Whatever one may think about Apple and their business practices, they are typically very good at marketing. So if even Apple can't find a good consumer pitch for GenAI crap, I don't think anyone can.
OpenAI considered building everything in-house and raising capital for an expensive plan to build a network of factories known as "foundries" for chip manufacturing.
we really shouldn’t have let Microsoft both fork an editor and buy GitHub, of course they were gonna turn one into a really shitty version of the other
anyway check this extremely valuable suggestion from Copilot in one of their screenshots:
The error message 'userId and score are required' is unclear. It should be more specific, such as 'Missing userId or score in the request body'.
aren’t you salivating for a Copilot subscription? it turns a lazy error message into… no that’s still lazy as shit actually, who is this for?
a human reading this still needs to consult external documentation to know what userId and score are
a machine can’t read this
if you’re going for consistent error messages or you’re looking to match the docs (extremely likely in a project that’s in production), arbitrarily changing that error so it doesn’t match anything else in the project probably isn’t a great idea, and we know LLMs don’t do consistency
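for contrast, a minimal sketch of the kind of consistent, machine-readable error shape that would actually be an improvement — field names taken from the screenshot, everything else is assumed:

```python
def validate(body, required=("userId", "score")):
    """Return a structured error for any missing required fields,
    or None if the request body is complete."""
    missing = [field for field in required if field not in body]
    if missing:
        return {
            "error": "missing_required_fields",  # stable, greppable error code
            "fields": missing,                   # names every absent field explicitly
        }
    return None

print(validate({"score": 10}))  # {'error': 'missing_required_fields', 'fields': ['userId']}
```

the point being: this is a one-time project-wide convention decision, not something you want an LLM re-wording one endpoint at a time into a third, slightly different phrasing.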
I want someone to fork the Linux kernel and then unleash like 10 Copilots to make PRs and review each other. No human intervention. Then plot the number of critical security vulnerabilities introduced over time, assuming they can even keep it compilable for long enough.
I know it's Halloween, but this popped up in my feed and was too spooky even for me 😱
As a side note, what are peoples feelings about Wolfram? Smart dude for sho, but some of the shit he says just comes across as straight up pseudoscientific gobbledygook. But can he out guru Big Yud in a 1v1 on Final Destination (fox only, no items) ? 🤔
The big difference is that Yud is unrigorous while Wolfram is a plagiarist. Or maybe putting it another way, Yud can't write proofs and Wolfram can't write bibliographies.
I could go over Wolfram's discussion of biological pattern formation, gravity, etc., etc., and give plenty of references to people who've had these ideas earlier. They have also had them better, in that they have been serious enough to work out their consequences, grasp their strengths and weaknesses, and refine or in some cases abandon them. That is, they have done science, where Wolfram has merely thought.
Huh, it looks like Wolfram also pioneered rationalism.
Scott Aaronson also turns up later for having written a paper that refutes a specific Wolfram claim on quantum mechanics, reminding us once again that very smart dumb people are actually a thing.
As a sidenote, if anyone else is finding the plain-text-disguised-as-an-html-document format of this article a tad grating, your browser probably has a reader mode that will make it way more presentable, it's F9 on firefox.
on a side note, I notice this passage in the review:
Wolfram refers incessantly to his "discovery" that simple rules can produce complex results. Now, the word "discovery" here is legitimate, but only in a special sense. When I took pre-calculus in high school, I came up with a method for solving systems of linear equations, independent of my textbook and my teacher: I discovered it. My teacher, more patient than I would be with adolescent arrogance, gently informed me that it was a standard technique, in any book on linear algebra, called "reduction to Jordan normal form", after the man who discovered it in the 1800s. Wolfram discovered simple rules producing complexity in just the same way that I discovered Jordan normal form.
this is certainly mistaken. I think the author or teacher must have meant RREF or something to that effect, not Jordan normal form
I knew Wolfram was a massive asshole, but I didn’t know or forgot that Mathematica was based on appropriated publicly-owned work:
In the mid-1980s, Wolfram had a position at the University of Illinois-Urbana's Beckman Institute for complex systems. While there, he and collaborators developed the program Mathematica, a system for doing mathematics, particularly algebraic transformations and finding exact-form solutions, similar to a number of other products (Maple, Matlab, Macsyma, etc.), which began to appear around the same time. Mathematica was good at finding exact solutions, and also pretty good at graphics. Wolfram quit Illinois, took the program private, and entered into complicated lawsuits with both his former employer and his co-authors (all since settled).
and on that note, Symbolics did effectively the same thing with Macsyma (and a ton of other public software on top of that, all to drive sales of their proprietary Lisp machines), but a modernized direct descendant of the last publicly-owned version of Macsyma named Maxima is available and should run wherever Common Lisp does. it’s a pretty good replacement for a lot of what Mathematica does, and the underlying language is a lot less batshit too
You want my take, the employee in question (who also got a GoFundMe) should sue Logan for defamation - solid case aside, I wanna see that blonde fucker get humbled for once.
I hated seeing that guy just wanting to live his life dragged into weird net drama and pushed under the bus by his company. And wow look at how collected and reasonable he was compared to anyone else in the story.
All Mr. Paul had to do was shut the hell up for once and the world'd still be talking about his moldy cheese bread instead of about his moldy cheese bread and how he bullies and doxes retail workers.
All Fred Meyer had to do was be like "whoops, looks like the product recall procedure at that store was vague recollections; we'll get a policy in place".
The sole silver lining of this situation is that Logan's deplorable behaviour probably scared at least a few shops away from stocking Lunchly - not just because of the risk you end up selling some mold-ridden garbage (most likely to kids), but because you risk Logan starting a harassment campaign against you or your store.
I’m currently using Flutter. It’s good! And useful! Much better than AI. It being mostly developed by Google has been a bit of a worry since Google is known to shoot itself in the foot by killing off its own products.
So while it’s no big deal to have an open source codebase forked, just wanted to highlight this part of the article:
Carroll also claimed that Google’s focus on AI caused the Flutter team to deprioritize desktop platforms, and he stressed the difficulty of working with the current Flutter team
Described as “Flutter+” by Carroll, Flock “will remain constantly up to date with Flutter,” he said. “Flock will add important bug fixes, and popular community features, which the Flutter team either can’t, or won’t implement.”
that android project of some months back was a venture into flutter (I hadn’t touched it before)
I had similar impressions on some things, and mixed ones on others
dart’s a moderately good language with some nice primitives, tooling overall is pretty mature, and in broad strokes it works well for variant targeting and shit
libraries though, holy shit, the situation (at the time) was a mess. one minor flutter sdk upgrade and a whole bunch of things just exploded — serialisation bits in the nosql-type libraries I’d tried for the ostensibly desired magic factor (I just went back to sqlite stuff afterwards). this can’t have been due to sdk drift alone, and it felt like an iceberg problem
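not a fix for the underlying churn, but one way to limit surprise breakage like this is pinning exact versions instead of caret ranges — a pubspec.yaml sketch (package names here are placeholders, not the actual libraries from the post):

```yaml
# pubspec.yaml fragment — sketch only; package names are placeholders
environment:
  sdk: ">=3.0.0 <4.0.0"

dependencies:
  some_nosql_store: 2.1.4   # exact pin, no ^ caret, so an upgrade
  some_serializer: 1.0.2    # can't silently pull a breaking release
```

committing pubspec.lock pins transitive dependencies too. none of this helps when the flutter sdk upgrade itself is the breaking change, but it shrinks the blast radius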
and then the documentation: fucking awful for getting started. it’s excellent as technical documentation once you grok shit, but before that, all the examples and things are terrible. lots of extremely important details are hidden in single offhand mentions — if you don’t happen to be looking at that exact page, good luck finding them. this, too, felt like inadequate care and attention by el goog
I imagine if one is working with this every day you know the lay of the land and where to avoid stepping into holes, but wow was I surprised at how much it was possible to rapidly rakestep, given what the language pitches as
To repeat a previous point of mine, it seems pretty safe to assume "luddite horror" is gonna become a bit of a trend. To make a specific (if unrelated) prediction, I imagine we're gonna see AI systems and/or their supporters become pretty popular villains in the future - the AI bubble's produced plenty of resentment towards AI specifically and tech more generally, and the public's gonna find plenty of catharsis in watching them go down.
Personally, I'd love to see the Luddites be rehabilitated as a result of the Great Bullshit Collapse. They were just regular folks fighting for dignity in work, and it's tragic how successful the bastards have been at erasing them from history.
Judging by some stray articles from WIRED and The Atlantic, Merchant's likely done plenty to rehabilitate the Luddites' image.
I suspect Silicon Valley's godawful reputation and widespread hatred of AI have likely helped as well - "machinery harmful to commonality" may be an unfamiliar concept to Joe Public, but "AI is ruining the Internet/taking your job/scamming your parents" is very fucking tangible to them.
Of those two, technological determinism's death was probably the more important one - that idea's demise meant the public was willing to entertain that new tech developments from Silicon Valley could be killed in their crib, that they wouldn't inevitably become a part of public life, for worse or (potentially) for better.
I feel like Ed is underselling the degree to which this is just how businesses work now. The emphasis on growth mindset is particularly gross because of how it sells the CEO's book, but it's not unique in trying to find a feel-good vibes-based way to evaluate performance rather than relying on strict metrics that give management less power over their direct reports.
Of course, he's also written at length about the overall problem this feeds into (organizations run by people with no idea how to make the business do what it does, but who can make the number go up for shareholders), but the most distinctive part of this is the AI integration, which is legitimately horrifying; I feel like the debunk of growth mindset takes some of the sting away.
Is there a group that more consistently makes category errors than computer scientists? Can we mandate Philosophy 101 as a pre-req to shitting out research papers?
a quick interest check: I kind of want to use our deployment’s spare capacity to host an invite-only WriteFreely instance where our regulars can host longer form articles
…but WriteFreely’s UI is so sub-optimal the official instance (write.as) runs a proprietary fork with a lot of the jank removed, and I don’t really consider WF to be production ready out of the box.
we can point the WF backend at arbitrary directories for its templates, page definitions, and static assets though, so maybe I could host those on codeberg and do a CI job that’d pull main every time it updates so we could collaboratively improve WF’s frontend? it’s not a job I want to take on alone (our main instance needs to take priority), but a community-run WF instance would be pretty unique
the pros of doing this are that WriteFreely at least seems to have very slim resource requirements and it’ll at least reliably host long form Markdown on the web
the downsides are, again, that it’s janky as fuck: it only supports Mailgun of all things for email, but if you disable that, the frontend will still claim it can send password reset emails… it only checks the config and displays an error once you click the reset link??? when the same logic could have just hidden the reset UI entirely???? (also I don’t like the editing experience.) and it’s not really what I’d consider federated — it shoots an Article into ActivityPub whenever you post, but it’s one-way, so replies, boosts, and favorites won’t show up from ActivityPub, which makes it feel a bit pointless. there might be a frontend-only way to link a blog post to the Mastodon or Lemmy thread it’s associated with on another instance, though, which would allow for a type of comment system? but I haven’t looked much into it. write.as just has a separate proprietary service for comments that nobody else can use.
this definitely won’t replace Wordpress but does it sound like an interesting project to take on?
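to make the "pull main whenever it updates" part concrete, it could be as small as a systemd timer on the WF host, polling rather than push-triggered — a sketch, with placeholder paths and unit names:

```ini
# /etc/systemd/system/wf-frontend-sync.service — sketch; paths/names are placeholders
[Unit]
Description=Pull latest WriteFreely frontend assets from Codeberg

[Service]
Type=oneshot
WorkingDirectory=/srv/writefreely/frontend
ExecStart=/usr/bin/git pull --ff-only origin main
ExecStartPost=/usr/bin/systemctl try-restart writefreely.service

# /etc/systemd/system/wf-frontend-sync.timer — sketch
[Unit]
Description=Periodically sync WF frontend

[Timer]
OnBootSec=2min
OnUnitActiveSec=15min

[Install]
WantedBy=timers.target
```

this assumes WF needs a restart to pick up template changes; if it re-reads them per request, drop the ExecStartPost line. a properly push-triggered version would need a webhook receiver or a CI deploy key on the Codeberg side instead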
For some reason the previous week's thread doesn't show up on the feed for me (and didn't all week)... nvm, i somehow managed to block froztbyte by accident, no idea how
On a personal note, I suspect "luddite horror" (alternatively called "techno-horror") is probably gonna blow up in popularity pretty soon - between boiling resentment against tech in general, and the impending burst of the AI bubble, I suspect audiences are gonna be hungry as hell for that kinda stuff.
Additionally, I suspect AI as a whole (and likely its supporters) will find itself becoming a pop-culture punchline much the same way NFTs/crypto did. Beyond getting pushed into everyone's faces whether they liked it or not, public embarrassments like Google's glue pizza debacle and ChatGPT's fake cases have already given comedians plenty of material to use, whilst the ongoing slop-nami turned "AI" as a term into a pretty scathing pejorative within the context of creative arts.
New for him, I'd wager, but I think ESR is treading a well-worn path: i.e. a huge weirdo gets himself in trouble but then finds favor with terrible people, and ultimately suffers from audience capture.
Edit: I was wrong, it's not new (hat tip to @Soyweiser)
Stephanie Kirchgaessner is the deputy head of investigations for Guardian US, based in Washington DC
Hannah Devlin is the Guardian's science correspondent, having previously been science editor of the Times. She has a PhD in biomedical imaging from the University of Oxford.
so, are both these fuckers ideologically bankrupt, or are they willingly complicit ghouls?