Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Semi-obligatory thanks to @dgerard for starting this.)
My investigation tracked to you [Outlier.ai] as the source of problems - where your instructional videos are tricking people into creating those issues to - apparently train your AI.
I couldn't locate these particular instructional videos, but from what I can gather outlier.ai farms out various "tasks" to internet gig workers as part of some sort of AI training scheme.
Bonus terribleness: one of the tasks a few months back was apparently to wear a head-mounted camera "device" to record one's every waking moment.
this post from the guy who writes explainers about consumer financial structures hits different if you know he met and is exchanging tweets with noted race scientist Jordan Lasker (cremieux)
Oof yeah, that's rough. The AI generated header image isn't helping his credibility, either. Didn't he happily trot along to one of the rat conventions in Berkeley, and everyone was wondering why?
The Bally's story is its own source of hilarity - not only are they scrambling to fund this Chicago thing, they're also making promises about a Las Vegas resort that will host the ex-Oakland A's in what would be the smallest major league baseball stadium; with equally ??? funding gaps that their client press is all too happy to ignore.
I asked ChatGPT, the modern apotheosis of unjustified self-confidence, to prove that .999… is less than 1. Its reply began “Here is a proof that .999… is less than 1.” It then proceeded to show (using familiar arguments) that .999… is equal to 1, before majestically concluding “But our goal was to show that .999… is less than 1. Hence the proof is complete.” This reply, as an example of brazen mathematical non sequitur, can scarcely be improved upon.
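For anyone who wants the "familiar arguments" the bot stumbled through, the standard one-liner that 0.999… equals 1 (the very thing it proved while announcing the opposite) goes like this:

```latex
\begin{align*}
x &= 0.999\ldots \\
10x &= 9.999\ldots \\
10x - x &= 9.999\ldots - 0.999\ldots = 9 \\
9x &= 9 \\
x &= 1
\end{align*}
```

So any "proof" that walks through these steps and then concludes a strict inequality has flatly contradicted itself, which is exactly the non sequitur on display.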
tunguska incident only wiped out local squirrel population and its fallout was inert. this is more like leaded gasoline: introduced for profit, polluting for decades, makes people dumber during entire duration of it, entrenches techbros and makes them responsible for development of infrastructure going forward
so I ran into this fucking garbage earlier, which goes so hard on the constituent parts of "the spam is the point", an ouroborosian self-reinforcing loop of Just More Media Bro Just One More Video Bro You'll See Bro It'll Be The Best Listicle Bro Just Watch Bro, and the insufferably cancerous "the medium is the message" videos-made-for-youtube-because-youtube that if it were a voltron it'd probably have its own unique Special Moment sequence instead of being one of the canned assembly shots
I wish YouTube would ban this shit wholesale, but it’s Google and of course they won’t.
Aside: I’ve been hammering “Don’t recommend this channel” on every video that remotely smells like AI slop for a while and so far that seems to keep the feed fairly clean.
You'd think the AI safety chuds would have more reservations about using GPT, which they believe has sapience, to learn things. They have the concept of an AI being a good convincer, so, hey, idiots, how have none of you considered that the great convincing has already started? Also, how have none of you realised that maybe you should be a little harder to convince in general???
It is a long-established truth that it's significantly easier to con someone who thinks they're smarter than you. Also, as I think about it a little, there seems to be a reasonable corollary of their approach to Bayesian thinking: you don't question anything that matches your expectations, which is exactly how you get taken advantage of by the kind of grifter they're attached to. Like, they've been thinking about the singularity for long enough that the Sams (Bankman-Fried, Altman, etc.) have a well-developed script for what they expect the first stages to look like, and it is, as demonstrated, very easy to fake that.
The findings revealed a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading
I'm not 100% sure I buy that the EOs were written by AI rather than people who simply don't care about or don't know the details; but it certainly looks possible. Especially that example about the Gulf of Mexico. Either way I am heartened that this is the conclusion people jump to.
Aside: I also like how much media is starting to cite bluesky (and activitypub to a lesser extent). I assume a bunch of journalists moved off of twitter or went multi-platform.
The way they have historically been awful and then thwarted by the courts is the thing that worries me. I'd expect that somebody over the past 8 years went 'this time we will not be bogged down in that'. But considering they went 100% in on repression from day 1, I'm slightly less worried about that.
For context, going all in on day 1 is actually bad for them. When the nazis took over The Netherlands and Belgium, their methods differed: in .nl they worked slowly and through the government already in place, while in .be they went full pogroms a lot faster. This meant that in .be a lot of people saw the threat sooner (WW1 and Belgium prob also didn't help) and acted, taking better care of the vulnerable. The number of Dutch Jewish people who survived WW2 vs Belgian Jewish people is very tragic (and a very dark part of our history which we don't really talk about like this, as admitting that parts of your own country are also to blame for the holocaust is not a thing a lot of people want to discuss). At least I hope that stuff like going all crazy on the bishop will turn out to be a big wakeup call for random Americans and a strategic mistake on their part; they certainly didn't seem to have learned from the nazis (at least not this lesson, which fits with how fascism is blind to its own mistakes).
Related to the other killing by the border patrol people: Chad Loder noticed the reports on the shooting have some strange wording which might imply the cop who was shot was hit by another cop. (Assuming this is the same shooting.)
Is there any rundown on this backstory for people who missed it happening live over the last few years that doesn't get sidetracked into theological disputes with the murder cult?
d'ya think this post on awful.systems, the lemmy instance (which is known as awful.systems), is the location of this awful.systems thread? let me hear your thoughts, awful.systems
This is a thought I've been entertaining for some time, but this week's discussion about Ars Technica's article on Anthropic, as well as the NIH funding freeze, finally prodded me to put it out there.
A core strategic vulnerability that Musk, his hangers-on, and geek culture more broadly haven't cottoned on to yet: Space is 20th-century propaganda. Certainly, there is still worthwhile and inspirational science to be done with space probes and landers, and the terrestrial satellite network won't dwindle in importance. I went to high school with a guy who went on to do his PhD and get into research through working with the first round of micro-satellites. Resources will still be committed to space. But as a core narrative of technical progress to bind a nation together? It's gassed. The idea that "it might be ME up there one day!" persisted through the space shuttle era, but it seems more and more remote. Going back to the moon would be a remake of an old television show that went off the air because people ended up getting bored with it the first time. Boots on Mars (at least healthy boots with a solid chance to return home) are decades away, even if we start throwing Apollo money at it immediately. The more outlandish ideas like orbital data centers and asteroid mining don't have the same inspirational power, because they are meant to be private enterprises operated by thoroughly unlikeable men who have shackled themselves to a broadly destructive political program.
For better or worse, biotechnology and nanotechnology are the most important technical programs of the 21st century, and by backgrounding this and allowing Trump to threaten funding, the tech oligarchs kowtowing to him right now are undermining themselves. Biotech should be obvious, although regulatory capture and the impulse for rent-seeking will continue to hold it back in the US. I expect even more money to be thrown at nanotechnology manufacturing going into the 2030s, to try to overcome the fact that semiconductor scaling is hitting a wall, although most of what I've seen so far is still pursuing the Drexlerian vision of MEMS emulating larger mechanical systems... which, if it's not explicitly biocompatible, is likely going down a cul-de-sac.
Everybody's looking for a positive vision of the future to sell, to compete with and overcome the fraudulent tech-fascists who lead the industry right now. A program of accessible technology at the juncture of those two fields would not develop overnight, but could be a pathway there. Am I off base here?
This seems like yet another disconnect between however the fuck science communication has been failing the general public and myself.
Like when you say space I think, fuck yeah, space! Those crisp pictures of Pluto! Pictures of black holes! The amazing JWST data! Gravitational waves detection! Recreating the conditions of the early universe in particle accelerators to unlock the secrets of spacetime! Just most amazing geek shit that makes me as excited as I was when I was 12 looking at the night sky through my cheap-ass telescope.
Who gives a single fuck about sending people up there when we have probes and rovers, true marvels of engineering, feeding us data back here? Did you know Voyager 1, Voyager Fucking ONE, an almost 50-year-old probe, over 150 AU away from Earth, is STILL SENDING US DATA? We engineered the fuck out of that bolt bucket so that even the people who designed it are surprised by how long it's lasted. You think a human would last 50 years in the interstellar medium? I don't fucking think so.
We're unlocking the secrets of the universe and confirming theories from decades ago; has there been a more exciting time to be a scientist? Wouldn't you want to run a particle accelerator? Do science on the ISS? Be the engineer behind the next legendary probe that will benefit mankind even after you're gone? If you can't spin this into a narrative of technical progress and humans being amazing then that's a skill issue, you lack fucking whimsy.
And I don't think there's a person in the world less whimsical than Elon fucking Musk.
Hmm, any sort of vision for generating public support for development of a technology has to have either ideological backing or a profit incentive. I don’t say this to mean that the future must be profitable, rather, I say this to mean that you don’t get the space race if western powers aren’t afraid of communism appearing as a viable alternative to capitalism, on both ideological and commercial fronts.
Unfortunately, a vision of that kind is necessarily technofascist. Rather than look for a tech-forward vision of the future, we need to deprogram ourselves and unlearn the unspoken narratives that prop up capitalism and liberal democracy as the only viable forms of society. We need to dismantle the systems and structures that require complex political buy-in for projects that are clearly good for society at large.
Uh, I guess I’ve kind of gone completely orthogonal to your point of discussion. I’m kind of saying the collapse of the US is inevitable.
On another somewhat orthogonal point, I suspect AI has likely soured the public on any kinda tech-forward vision for the foreseeable future.
Both directly and indirectly, the AI slop-nami has caused a lot of bad shit for the general public - from plagiarism to misinformation, from shit-tier AI art to screwing human artists, the public has come to view AI as an active blight on society, and use of AI as a virtual "Kick Me" sign.
No actually, I think what you have to say is in line with my broader point. As the top source of global consumer demand, America is primarily held together by its supply chains at this point. To be crude about it, the best reasons to be an American in the 21st century are the swag and the cheap gas. When the MAGA and Fox News crowd are pointing fingers and ranting about Marxism, they're actively trying to obscure materialism and keep people from thinking about material conditions. Having a material program, that at least has elements that can be built from the bottom up, is at least as crucial as having an electoral program. I know the Four Thieves people got rightfully shredded here a few weeks back, and that kind of technical pushback on amateur dreams is necessary, so it's a tough needle to thread. But for instance, consider Gavin Newsom's plan to have California operate its own insulin production, within existing systems and regulations: https://calmatters.org/health/2025/01/insulin-production-gavin-newsom/ This is a Newsom policy I actually think is a fantastic idea, and a big credit to him if it happens! But it's bogged down in the production-line validation stage, because we already know how to synthesize insulin and that it's effective. And the production may not even be in California when it happens! There's plenty of room for improvement here.
Space and centralized, rent-seeking "AI" are not material programs that improve conditions for the broader population. The original space program was successful because a more tightly controlled media environment gave the opportunity to use it to cover for the missile development that was the enduring practical outcome. Positive consumer outcomes from all that have always felt, to me, like something that was bolted onto the history later. We wouldn't have Tang and transistors if not for Apollo! Well, one is kind of shitty and useless, the other is so overwhelmingly advantageous that it surely would have happened anyway.
And to your last point, I somewhat sadly feel like a lot of doomer shit I was reading ~15 years ago actually prepared me to at least be unsurprised about the situation we're in. A lot of those writers (James Howard Kunstler, John Michael Greer for instance) have either softly capitulated, or else happily slotted themselves into the middle of the red-brown alliance. I think that's a big part of why we're at where we're at: a lot of people who were actually willing to consider the idea of American collapse were perfectly fine with letting it happen.
For the US to avoid collapse, the Democrats would have to sweep the board in multiple successive elections and be more unified and committed to deep reform than they ever have been.
i only want to note that the example chemistry question has two steps out of three that are very similar to the last image in the wikipedia article https://en.wikipedia.org/wiki/Electrocyclic_reaction (the question explicitly mentions that it is an electrocyclic reaction and mentions the same class of natural product)
e: the exact reaction scheme that answers that question is in the article linked just above that image. taking the last image from the wiki article and one of the schemes from the cited article gives the exact same compound as in the question, and provides the answer. considering how these spicy autocomplete rainforest incinerators work, this sounds like some serious ratfucking, right? you don't even have to know how any of this works to get that, and it's an isolated, somewhat obscure subsubfield
You think people would secretly submit easy questions just for the reward money, and that since the question database is so big and inscrutable no one bothered to verify one way or another? No, that could never happen.
oh cool, the logo’s just a barely modified sparkle emoji so you know it’s horseshit, and it’s directly funded by Scale AI and a Rationalist thinktank so the chances the models weren’t directly trained on the problem set are vanishingly thin. this is just the FrontierMath grift with new, more dramatic, paint.
e: also, slightly different targeting — FrontierMath was looking to grift institutional dollars, I feel. this one’s designed to look good in a breathless thinkpiece about how, I dunno…
When A.I. Passes This Test, Look Out
yeah, whatever the fuck they think this means. this one’s designed to be talked about, to be brought up behind closed doors as a reason why your pay’s being cut. this is vile shit.
My favorite part of the carnivore diet is that apparently scurvy can become enough of a problem that you'll see references to "not wanting to start the vitamin C debate" in forums.
I'm pretty sure it's not just a me thing, but I thought we all knew that sailors kept citrus on board specifically to prevent scurvy by providing vitamin C and that we all learned about this as kids when either a teacher tried to make the colonial era interesting or we got vaguely curious about pirates at some point.
Refusal of statins was one of the most prominent anti-medical trends I remember observing among right-wing acquaintances, even well before such people got on the anti-vax bandwagon. To be sure, some people experience bad side-effects (including my mom, at least for a while), but it definitely seemed like a few bits of anecdata in the early 2010s built into a broad narrative of "doctor's tryin' ta kill ya"
I love how srid deflects by claiming no one has reported bad outcomes from the "meat and butter" diet... I found an endless stream of anecdotes from Google, like this.
can you imagine sneak, of all people, telling you you're crazy and probably being right?
Here's a bonus high fiber diet pro-tip: Metamucil tastes like old socks and individual capsules have hardly any fiber anyway, so I eat triscuits and Oroweat Double-Fiber bread instead because they're both much, much better tasting. Also chili is the food of the gods.
The agents were conducting a routine roving patrol when they stopped Bauckholt and a female in the town close to the border. During a records check, the unidentified female occupant was removed from the vehicle for further questioning, broke free, and began shooting at the agents, the incident report shows.
After the female suspect was hit by return fire, Bauckholt emerged from the vehicle and also began firing on the agents. He sustained gunshot wounds and was pronounced dead.
Jesus wept, it's so frustratingly obvious that anytime some flavor of cop kills someone, the news media reporting (if any) will be this weird Yoda grammar pidgin.
The femoidically gendered female shot with its gun by very personally pulling the trigger, with this viscerally physical action performed by the said femalian in most pointedly concrete terms amounting to it (the femaloidistical entity, a specimen of the species known as females) firing lethal gunshots at the border patrol with the female's own two hands.
Subsequently return fire manifested itself from somewhere and came into contact with the female suspect female. The Justice Enforcement Officers involved in the situation were made a part of a bilateral exchange of gunfire between the shooting female and the officers situated in the scenario in which shooting was, to some extent, quite possibly performed from their side as well.
The zizian angle makes this so weird. Like, on top of probably being stopped for driving while trans, they might have instigated the shootout to prove to the basilisk that their parallel universe selves/simulated iterations/eternal souls can't be acausally blackmailed.
The Zizians were a cult focused on relatively extreme animal welfare, even by EA standards, and they used a Timeless/Updateless decision theory in which being aggressive and escalatory is helpful as long as it helps other world branches / acausally trades with other worlds to solve the animal welfare crisis.
They apparently created a new personality called Maia in Pasek, and this resulted in Pasek's suicide.
They also used violence or the threat of violence a lot to achieve their goal.
This caused many problems for Ziz, and she now is in police custody.
CIDR 2025 is ongoing (Conference on Innovative Data Systems Research). It's a very good conference in computer science, specifically database research (in CS, top conferences are the equivalent of journals in other sciences). And they have a whole session on LLMs called "LLMs ARE THE NEW NO-SQL"
I didn't have time to read the papers yet, believe me I will, but the abstracts are spicy
We systematically develop benchmarks to study [the problem] and find that standard methods answer no more than 20% of queries correctly, confirming the need for further research in this area.
Hey guys and gals, I have a slightly different conclusion: maybe a baseline of 20% correctness is a great reason not to invest a second more of research time into this nonsense? Jesus DB Christ.
I'd also like to shoutout CIDR for setting up a separate "DATABASES AND ML" session, which is an actual research direction with interesting results (e.g. query optimizers powered by an ML model achieving better results than conventional query optimizers). At least actual professionals are not conflating ML with LLMs.
You gotta love how in the announcement the guy is so blatantly "hey, they said and did such nice things for me that I just gotta throw them a bone, and if releasing the leader of a notorious drug bazaar who tried to put out a hit on one of his employees is what they want, then they can have it!"
He was offered a plea deal, which would have likely given him a decade-long sentence, with the ability to get out early on good behavior. Worst-case scenario, he would have spent five years in a medium-security prison and been freed.
Gotta say, this whole situation's reminding me of SBF - both of them thought they could outsmart the Feds, and both received much harsher sentences than rich white collar criminals usually get as a result.
Ah yes that will be good for international relations and the morale of law enforcement and anti cybercrime people. Lol it is all so stupid.
This, and the releasing of the jan 6 people who assaulted cops (one cop who testified against them got a shitton of messages when they got early release), is going to do wonders. Not that it will shake the belief of a lot of people that the repubs are the party of back the blue and law and order.
I know a lot of people are looking for alternatives for programs as stuff is enshittifying, rot-economying, slurping up your data, going all in on llms etc. https://european-alternatives.eu/ might help. Have not looked into it myself btw.
you'd think it's a perfect bait for saudi sovereign wealth fund, and perhaps it is
for comparison, assuming current levels of spending, this will be something around 1/10 of defense spending in the same timeframe. which goes to, among other things, payrolls of millions of people and maintenance, procurement and development of rather pricey weapons like stealth planes (B-21 is $700M each) and nuclear-armed nuclear-powered submarines ($3.5B per Ohio-class, with $31M missiles, up to 24). this all to burn medium-sized country worth of energy to get more "impressive" c-suite fooling machine
The fact that the first thing a new fascist regime does is promise Larry Ellison a bunch of dollaridoos answers a lot of questions asked by my "ORACLE = NAZIS" tshirt
Elon Musk is already casting doubt on OpenAI’s new, up to $500 billion investment deal with SoftBank (SFTBY+10.51%) and Oracle (ORCL+7.19%), despite backing from his allies — including President Donald Trump. [...] “They don’t actually have the money,” the Tesla (TSLA-1.13%) CEO and close Trump ally said shortly before midnight on Tuesday, in a post on his social media site X. “SoftBank has well under $10 [billion] secured. I have that on good authority,” Musk added just before 1 a.m. ET.
I was mad about this, but then it hit me: this is the kind of thing that happens at the top of a bubble. The nice round numbers, the stolen sci-fi name, the needless intertwining with politics, the lack of any clear purpose for it.
Maybe this is common knowledge, but I had no idea before. What an absolutely horrible decision from google to allow this. What are they thinking?? This is great for phishing and malware, but I don't know what else. (Yeah ok, the reason has probably something to do with "line must go up".)
I recall seeing something of this sort happening on goog for about 12~18mo - every so often a researcher post does the rounds where someone finds Yet Another way goog is fucking it up
the advertising dept has completely captured all mindshare and it is (demonstrably) the only part that goog-the-business cares about
Reposting this for the new week thread since it truly is a record of how untrustworthy sammy and co are. Remember how OAI claimed that o3 had displayed superhuman levels on the mega-hard FrontierMath exam written with Fields Medalist involvement? Funny/totally not fishy story haha. Turns out OAI had exclusive access to that test for months, and funded its creation, and refused to let the creators of the test publicly acknowledge this until after OAI did their big stupid magic trick.
From Subbarao Kambhampati via linkedIn:
"𝐎𝐧 𝐭𝐡𝐞 𝐬𝐞𝐞𝐝𝐲 𝐨𝐩𝐭𝐢𝐜𝐬 𝐨𝐟 “𝑩𝒖𝒊𝒍𝒅𝒊𝒏𝒈 𝒂𝒏 𝑨𝑮𝑰 𝑴𝒐𝒂𝒕 𝒃𝒚 𝑪𝒐𝒓𝒓𝒂𝒍𝒍𝒊𝒏𝒈 𝑩𝒆𝒏𝒄𝒉𝒎𝒂𝒓𝒌 𝑪𝒓𝒆𝒂𝒕𝒐𝒓𝒔” #SundayHarangue. One of the big reasons for the increased volume of “𝐀𝐆𝐈 𝐓𝐨𝐦𝐨𝐫𝐫𝐨𝐰” hype has been o3’s performance on the “frontier math” benchmark–something that other models basically had no handle on.
We are now being told (https://lnkd.in/gUaGKuAE) that this benchmark data may have been exclusively available (https://lnkd.in/g5E3tcse) to OpenAI since before o1–and that the benchmark creators were not allowed to disclose this *until after o3*.
That o3 does well on frontier math held-out set is impressive, no doubt, but the mental picture of “𝒐1/𝒐3 𝒘𝒆𝒓𝒆 𝒋𝒖𝒔𝒕 𝒃𝒆𝒊𝒏𝒈 𝒕𝒓𝒂𝒊𝒏𝒆𝒅 𝒐𝒏 𝒔𝒊𝒎𝒑𝒍𝒆 𝒎𝒂𝒕𝒉, 𝒂𝒏𝒅 𝒕𝒉𝒆𝒚 𝒃𝒐𝒐𝒕𝒔𝒕𝒓𝒂𝒑𝒑𝒆𝒅 𝒕𝒉𝒆𝒎𝒔𝒆𝒍𝒗𝒆𝒔 𝒕𝒐 𝒇𝒓𝒐𝒏𝒕𝒊𝒆𝒓 𝒎𝒂𝒕𝒉”–that the AGI tomorrow crowd seem to have–that 𝘖𝘱𝘦𝘯𝘈𝘐 𝘸𝘩𝘪𝘭𝘦 𝘯𝘰𝘵 𝘦𝘹𝘱𝘭𝘪𝘤𝘪𝘵𝘭𝘺 𝘤𝘭𝘢𝘪𝘮𝘪𝘯𝘨, 𝘤𝘦𝘳𝘵𝘢𝘪𝘯𝘭𝘺 𝘥𝘪𝘥𝘯’𝘵 𝘥𝘪𝘳𝘦𝘤𝘵𝘭𝘺 𝘤𝘰𝘯𝘵𝘳𝘢𝘥𝘪𝘤𝘵–is shattered by this. (I have, in fact, been grumbling to my students since o3 announcement that I don’t completely believe that OpenAI didn’t have access to the Olympiad/Frontier Math data before hand… )
We all know that data contamination is an issue with LLMs and LRMs. We also know that reasoning claims need more careful vetting than “𝘸𝘦 𝘥𝘪𝘥𝘯’𝘵 𝘴𝘦𝘦 𝘵𝘩𝘢𝘵 𝘴𝘱𝘦𝘤𝘪𝘧𝘪𝘤 𝘱𝘳𝘰𝘣𝘭𝘦𝘮 𝘪𝘯𝘴𝘵𝘢𝘯𝘤𝘦 𝘥𝘶𝘳𝘪𝘯𝘨 𝘵𝘳𝘢𝘪𝘯𝘪𝘯𝘨” (see “In vs. Out of Distribution analyses are not that useful for understanding LLM reasoning capabilities” https://lnkd.in/gZ2wBM_F ).
At the very least, this episode further argues for increased vigilance/skepticism on the part of the AI research community in how they parse benchmark claims put out by commercial entities."
Every time they go 'this wasn't in the data' it turns out it was. A while back they did the same with translating rare-ish languages; turns out the model was trained on them. Fucked up. But also, wtf, how are they expecting this to stay secret with no backlash? This world needs a better class of criminals.
The conspiracy theorist who lives in my brain wants to say it's intentional, to make us more open to blatant cheating as something that's just a "cost of doing business." (I swear I saw this phrase a half dozen times in the orange site thread about this)
The earnest part of me tells me no, these guys are just clowns, but I dunno, they can't all be this dumb right?
til that there's not one millionaire with family business in south african mining in current american oligarchy, but at least two. (thiel's father was an exec at mine in what is today Namibia). (they mined uranium). (it went towards RSA nuclear program). (that's easily most ghoulish thing i've learned today, but i'm up only for 2h)
there's probably a fair couple more. tracing anything de beers or a good couple of other industries will probably indicate a couple more
(my hypothesis is: the kinds of people that flourished under apartheid, the effect that had on local-developed industry, and then the "wider world" of opportunities prey they got to sink their teeth into after apartheid went away; doubly so because staying ZA-only is extremely limiting for ghouls of their sort - it's a fixed-size pool, and the still-standing apartheid-vintage capital controls are Limiting for the kinds of bullshit they want to pull)
Hmm, surely there is no downside to doing all of one's marketing, both personal* and professional, through the false certainty and low signal of short-form social media. The leopard has only licked Sam's face, it will never bite and begin chewing!
*You and I may find the concept of a "personal brand" to be horrifying, but these guys clearly want to become brands more fervently than Bruce Wayne wanted to become a bat
Amazing how that all looks like one of those sites from the geocities era, with sparkles/stars, butterflies, unicorns and dolphins all over it. All it needs now is an under-construction sign.
they will take facebook there with them. none of their space escapism will solve their problems because they take them along. these mfers will do anything but go to therapy
Banner start to the next US presidency, with Wiener Von Wrong tossing a Nazi salute and the ADL papering that one over as an "awkward gesture". 2025 is going to be great for my country.
Incidentally is "Wiener Von Wrong" or "Wernher Von Brownnose" better?
following on from this comment, it is possible to get it turned off for a Workspace Suite Account
contact support (? button from admin view)
ask the first person to connect you to Workspace Support (otherwise you'll get some made-up bullshit from a person trying to buy time or Case Success or whatever, simply because they don't have the privileges to do what you're asking)
tell the referred-to person that you want to enable controls for "Gemini for Google Workspace" (optionally adding that you have already disabled "Gemini App")
hopefully you spend less time on this than the 40-something minutes I had to (a lot of which was spent watching some poor support bastard start-stop typing for minutes at a time because they didn't know how to respond to my request)
so the new feature in the next macos release 15.3 is "fuck you, apple intelligence is on by default now"
For users new or upgrading to macOS 15.3, Apple Intelligence will be enabled automatically during Mac onboarding. Users will have access to Apple Intelligence features after setting up their devices. To disable Apple Intelligence, users will need to navigate to the Apple Intelligence & Siri Settings pane and turn off the Apple Intelligence toggle. This will disable Apple Intelligence features on their device.
IDK how helpful this is, but Apple intelligence appears to not get downloaded if you set your ipad language and your siri language to be different. I have it set to english (australia) and english (united states). Guess I’ll have to live without “gaol” support, but that just shows how much I’m willing to sacrifice.
It's term time again and I'm back in college. One professor has laid out his AI policy: you should not use an AI (presumably Chat GPT) to write your assignment, but you can use an AI to proofread your assignment. This must be mentioned in the acknowledgements. He said in class that in his experience AI does not produce good results and that when asked to write about his particular field it produces work with a lot of mistakes.
Me, I'm just wondering how you can tell the difference between material generated by AI then edited by a human, and material written by a human then edited by an AI.
Here is what I wrote in the instructions for the term-paper project that I will be assigning my quantum-physics students this coming semester:
I can’t very well stop you from using a text-barfing tool. I can, however, point out that the “AI” industry is a disaster for the environment, which is the place that we all have to live in; and that it depends upon datasets made by exploiting and indeed psychologically torturing workers. The point of this project is for you to learn a physics topic and how to write physics, not for you to abase yourself before a blurry average of all the things the Internet says about quantum physics — which, spoiler alert, includes a lot of wrong things. If you are going to spend your time at university not learning physics, there are better ways to do that than making yourself dependent upon a product that is a tech bubble waiting to pop.
I was talking to someone recently and he mentioned that he has used AI for programming. It worked out fine, but the one thing he mentioned that really stuck with me was that when it was all done, he still didn't know how to do the task.
You can get things done, but you don't learn how to do them.