Architeuthis @awful.systems

It's not always easy to distinguish between existentialism and a bad mood.

Posts 11
Comments 130
We regret to inform you that Ray Kurzweil is back on his bullshit
  • It hasn't worked 'well' for computers since like the Pentium, what are you talking about?

    The premise was pretty dumb too: if you notice that a (very reductive) technological metric has been rising sort of exponentially, you should probably assume something along the lines of "we're still at the low-hanging-fruit stage of R&D and it'll stabilize as the field matures", instead of proudly proclaiming that surely it'll approach infinity and break reality.

    There's nothing smart or insightful about seeing a line on a graph trending upwards and assuming it's gonna keep doing that no matter what. Not to mention that type of decontextualized wishful thinking is emblematic of the TREACLES mindset mentioned in the community's blurb, which you should check out.

    So yeah, he thought up the Singularity, which is little more than a metaphysical excuse to ignore regulations and negative externalities, because with the tech rupture around the corner any catastrophic mess we make getting there won't matter. See also: the whole current AI debacle.

  • what if, right, what *if* our super-duper-autocomplete was just *tricking* us so it could TAKE OVER ZEE VORLD AHAHAHAHAHAHA! that'd be wild, hey
  • I'm not spending the additional 34 minutes apparently required to find out what in the world they think neural network training actually is, such that it could ever possibly involve strategy on the part of the network, but I'm willing to bet it's extremely dumb.

    I'm almost certain I've seen EY catch shit on Twitter (from actual ML researchers, no less) for insinuating something very similar.

  • It can't be that the bullshit machine doesn't know 2023 from 2024, you must be organizing your data wrong (wsj)
  • Google pivoting to selling shovels for the AI gold rush in the form of data tools should be pretty viable if they commit to it; I hadn't thought of it that way.

  • It can't be that the bullshit machine doesn't know 2023 from 2024, you must be organizing your data wrong (wsj)
  • It's a sad fate that sometimes befalls engineers who are good at talking to audiences, and who work for a big enough company that can afford to have that be their primary role.

    edit: I love that he's chief evangelist though, like he has a bunch of little Google Cloud clerics running around doing chores for him.

  • It can't be that the bullshit machine doesn't know 2023 from 2024, you must be organizing your data wrong (wsj)

    > AI Work Assistants Need a Lot of Handholding

    > Getting full value out of AI workplace assistants is turning out to require a heavy lift from enterprises. ‘It has been more work than anticipated,’ says one CIO.

    aka we are currently in the process of realizing we are paying for the privilege of being the first to test an incomplete product.

    > Mandell said if she asks a question related to 2024 data, the AI tool might deliver an answer based on 2023 data. At Cargill, an AI tool failed to correctly answer a straightforward question about who is on the company’s executive team, the agricultural giant said. At Eli Lilly, a tool gave incorrect answers to questions about expense policies, said Diogo Rau, the pharmaceutical firm’s chief information and digital officer.

    I mean, imagine all the non-obvious stuff it must be getting wrong at the same time.

    > He said the company is regularly updating and refining its data to ensure accurate results from AI tools accessing it. That process includes the organization’s data engineers validating and cleaning up incoming data, and curating it into a “golden record,” with no contradictory or duplicate information.

    Please stop feeding the thing too much information, you're making it confused.

    > Some of the challenges with Copilot are related to the complicated art of prompting, Spataro said. Users might not understand how much context they actually need to give Copilot to get the right answer, he said, but he added that Copilot itself could also get better at asking for more context when it needs it.

    Yeah, exactly like all the tech demos showed -- wait a minute!

    > [Google Cloud Chief Evangelist Richard Seroter said] “If you don’t have your data house in order, AI is going to be less valuable than it would be if it was,” he said. “You can’t just buy six units of AI and then magically change your business.”

    Never mind that that's exactly how we've been marketing it.

    Oh well, I guess you'll just have to wait for chatgpt-6.66, which will surely fix everything, while voiced by Charlize Theron's non-union equivalent.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 23 June 2024
  • There's a bit in the beginning where he talks about how actors handling and drinking from obviously weightless empty cups ruins suspension of disbelief, so I'm assuming it's a callback.

  • Sam Bankman-Fried funded a group with racist ties
  • "Manifest is open minded about eugenics and securing the existence of our people and a future for high IQ children."

  • Sam Bankman-Fried funded a group with racist ties
  • Great quote from the article on why prediction markets and scientific racism currently appear to be at one degree of separation:

    > Daniel HoSang, a professor of American studies at Yale University and a part of the Anti-Eugenics Collective at Yale, said: “The ties between a sector of Silicon Valley investors, effective altruism and a kind of neo-eugenics are subtle but unmistakable. They converge around a belief that nearly everything in society can be reduced to markets and all people can be regarded as bundles of human capital.”

  • Yud lettuce know that we just don't get it :(
  • > Before we accidentally make an AI capable of posing existential risk to human being safety, perhaps we should find out how to build effective safety measures first.

    You make his position sound way more measured and responsible than it is.

    His 'effective safety measures' are something like A) solve ethics, B) hardcode the result into every AI; i.e., garbage philosophy meets garbage sci-fi.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 16 June 2024
  • Wasn't 1994 right about when they stopped making movies in black and white?

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 16 June 2024
  • This has got to be some sort of sucker filter: it's not that he particularly means it, it's that he's after the exact type of rube who is unfazed by naked contrarianism and the categorically preposterous so long as it's said with a straight face.

    Maybe there's something to the whole "pick-up artistry, but for nailing VCs" thing.

  • using GitHub CoPilot leads to the obvious consequence
  • Honestly, the evident plethora of poor programming practices is the least notable thing about all this; using roided autocomplete to cut corners was never going to be a well-calculated decision, it's always the cherry on top of a shit-cake.

  • New Windows AI feature records everything you’ve done on your PC
  • > this isn’t really even related to GenAI at all

    Besides the OCR, there appears to be all sorts of image-to-text metadata recorded; the Nadella demo had the journalist supposedly doing a search and getting results with terms that were neither typed at the time nor appearing in the stored screenshots.

    Also, I thought they might be doing something image-to-text-to-image-again related (which, I read somewhere, was what Bing Copilot did when you asked it to edit an image) to save space, instead of storing eleventy billion multimonitor screenshots forever.

    edit: in the demo the results included screens.

  • New Windows AI feature records everything you’ve done on your PC
  • Nightmare blunt rotation in the Rewind AI front page recommendations:

    Recommended by Andreessen, Altman and Reddit founder

    Also it appears to be different from Recall in that it's a third-party app and not pushed as the default in every new OS installation.

  • New Windows AI feature records everything you’ve done on your PC
  • That you can jailbreak Recall and run it on non-compliant hardware seems to be the least concerning thing in that article; recommended reading.

  • New Windows AI feature records everything you’ve done on your PC
  • So LLM-based AI is apparently such a dead end as far as non-spam and non-party-trick use cases are concerned that they are straight up rolling out anti-features that nobody asked for or wanted, just to convince shareholders that groundbreaking stuff is still going on and somewhat justify the ocean of money they are diverting that way.

    At least it's only supposed to work on PCs that incorporate so-called neural processing units, which, if I understand correctly, are going to be their own thing under a Windows PC branding.

    edit: Yud must love that instead of his very smart and very implementable idea of the government enforcing strict regulations on who gets to own GPUs and bombing non-compliants, we seem instead to be trending towards having special deep-learning-facilitating hardware integrated in every new device (or whatever NPUs actually are), starting with iPhones and so-called Windows PCs.

    edit edit: the branding appears to be "Copilot+ PCs", not Windows PCs.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 2 June 2024
  • weight classes are for wokies

    This used to be a Joe Rogan staple: no weight classes, no time limits and the ring should be the size of a basketball court.

    It's really just the umpteenth reiteration of the meathead mantra of how I'd do really well in [popular combat sport] if it weren't for those pesky rules holding me back.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 2 June 2024
  • Echoing the audience's fawning over heavyweight boxers is probably the least objectionable thing in this racist shitheap of an article. I like how it ends by basically saying people should shut up about the judges possibly favoring Usyk for being Ukrainian, not because that's just Tyson fans coping, but because the current notable Russian heavyweights are either icky Muslims or not fully white by parentage.

    P4P (pound-for-pound) is mostly a marketing term anyway; size aside, the meta is different enough between distant weight classes to really strain comparison.

  • Generating (often non-con) porn is the new crypto mining

    An AI company has been generating porn with gamers' idle GPU time in exchange for Fortnite skins and Roblox gift cards

    > "some workloads may generate images, text or video of a mature nature", and that any adult content generated is wiped from a users system as soon as the workload is completed.

    > However, one of Salad's clients is CivitAi, a platform for sharing AI generated images which has previously been investigated by 404 media. It found that the service hosts image generating AI models of specific people, whose image can then be combined with pornographic AI models to generate non-consensual sexual images.

    Investigation link: https://www.404media.co/inside-the-ai-porn-marketplace-where-everything-and-everyone-is-for-sale/


    SBF's effective altruism and rationalism considered an aggravating circumstance in sentencing

    Sam Bankman-Fried wants only six years for his "victimless" crime (www.citationneeded.news)

    Sam Bankman-Fried maintains that his crimes were victimless and resulted in zero losses, and therefore warrant only six years of imprisonment. Prosecutors argue that 40–50 years are justified.


    For Thursday's sentencing the US government indicated they would be happy with a 40-50 year prison sentence, and in the list of reasons they cite there's this gem:

    > 4. Bankman-Fried's effective altruism and own statements about risk suggest he would be likely to commit another fraud if he determined it had high enough "expected value". They point to Caroline Ellison's testimony in which she said that Bankman-Fried had expressed to her that he would "be happy to flip a coin, if it came up tails and the world was destroyed, as long as if it came up heads the world would be like more than twice as good". They also point to Bankman-Fried's "own 'calculations'" described in his sentencing memo, in which he says his life now has negative expected value. "Such a calculus will inevitably lead him to trying again," they write.

    Turns out making it a point of pride that you have the morality of an anime villain does not endear you to prosecutors, who knew.

    Bonus: SBF's lawyers' list of assertions for asking for a shorter sentence includes this hilarious bit of reasoning:

    > They argue that Bankman-Fried would not reoffend, for reasons including that "he would sooner suffer than bring disrepute to any philanthropic movement."


    Rationalist org bets random substack poster $100K that he can't disprove their covid lab leak hypothesis, you'll never guess what happens next

    rootclaim appears to be yet another group of people who, having stumbled upon the idea of Bayes' rule as a good enough alternative to critical thinking, decided to try their luck in becoming a Serious and Important Arbiter of Truth in a Post-Mainstream-Journalism World.

    This includes a Randi-esque challenge: they'll take a $100K bet that you can't prove them wrong on a select group of topics they've done deep dives on, like whether the 2020 election was stolen (91% nay) or whether covid was man-made and leaked from a lab (89% yay).

    Also their methodology yields results like 95% certainty on Usain Bolt never having used PEDs, so it's not entirely surprising that the first person to take their challenge appears to have wiped the floor with them.

    Don't worry though, they have taken the results of the debate to heart, and according to their postmortem blogpost they learned many important lessons, like how they need to (checks notes) gameplan against the rules of the debate better? What a way to spend $100K... Maybe once you've reached a conclusion using the Sacred Method, changing your mind becomes difficult.

    I've included the novel-length judges' opinions in the links below, where a cursory look indicates they are notably less charitable towards rootclaim's views than the postmortem suggests, pointing at stuff like logical inconsistencies and the inclusion of data that on closer look appear basically irrelevant to the thing they are trying to model probabilities for.

    There's also like 18 hours of video of the debate if anyone wants to really get into it, but I'll tap out here.

    ssc reddit thread

    quantian's short writeup on the birdsite, will post screens in comments

    PDF of one judge's opinion that isn't quite book length, 27 pages; the judge has a PhD in microbiology and immunology

    PDF of the other judge's opinion, 87 pages; the judge is an applied-math PhD with a background in mathematical virology -- despite the length this is better organized and generally way more readable, if you can spare the time.

    rootclaim's post mortem blogpost, includes more links to debate material and judge's opinions.

    edit: added additional details to the pdf descriptions.


    Hi, I'm Scott Alexander and I will now explain why every disease is in fact just poor genetics by using play-doh statistics to sorta refute a super specific point about schizophrenia heritability.

    edited to add tl;dr: Siskind seems ticked off because recent papers on the genetics of schizophrenia are increasingly pointing out that at current minuscule levels of prevalence, even with the commonly accepted 80% heritability, actually developing the disorder is all but impossible unless at least some of the environmental factors are also in play. This is understandably very worrisome, since it indicates that even high-heritability issues might be solvable without immediately employing eugenics.

    Also notable because I don't think it's very often that eugenics grievances breach the surface in such an obvious way in a public Siskind post, including the claim that the whole thing is just HBD denialists spreading FUD:

    > People really hate the finding that most diseases are substantially (often primarily) genetic. There’s a whole toolbox that people in denial about this use to sow doubt. Usually it involves misunderstanding polygenicity/omnigenicity, or confusing GWAS’ current inability to detect a gene with the gene not existing. I hope most people are already wise to these tactics.


    Reply guy EY attempts incredibly convoluted offer to meet him half-way by implying AI body pillows are a vanguard threat that will lead to human extinction...

    ... while at the same time not really worth worrying about, so we should be concentrating on unnamed alleged mid-term risks.

    EY tweets are probably the lowest-effort sneerclub content possible, but the birdsite threw this in my face this morning so it's only fair you suffer too. Transcript follows:

    Andrew Ng wrote:

    > In AI, the ratio of attention on hypothetical, future, forms of harm to actual, current, realized forms of harm seems out of whack.
    >
    > Many of the hypothetical forms of harm, like AI "taking over", are based on highly questionable hypotheses about what technology that does not currently exist might do.
    >
    > Every field should examine both future and current problems. But is there any other engineering discipline where this much attention is on hypothetical problems rather than actual problems?

    EY replied:

    > I think when the near-term harm is massive numbers of young men and women dropping out of the human dating market, and the mid-term harm is the utter extermination of humanity, it makes sense to focus on policies motivated by preventing mid-term harm, if there's even a trade-off.


    Turns out Altman is a lab-leak covid truther, calls virus 'synthetic' according to Spectator piece on AI risk.

    > Sam Altman, the recently fired (and rehired) chief executive of Open AI, was asked earlier this year by his fellow tech billionaire Patrick Collison what he thought of the risks of synthetic biology. ‘I would like to not have another synthetic pathogen cause a global pandemic. I think we can all agree that wasn’t a great experience,’ he replied. ‘Wasn’t that bad compared to what it could have been, but I’m surprised there has not been more global coordination and I think we should have more of that.’


    Rationalist literary criticism by SBF, found on the birdsite

    original is here, but you aren't missing any context, that's the twit.

    > I could go on and on about the failings of Shakespeare... but really I shouldn't need to: the Bayesian priors are pretty damning. About half the people born since 1600 have been born in the past 100 years, but it gets much worse than that. When Shakespeare wrote almost all Europeans were busy farming, and very few people attended university; few people were even literate -- probably as low as ten million people. By contrast there are now upwards of a billion literate people in the Western sphere. What are the odds that the greatest writer would have been born in 1564? The Bayesian priors aren't very favorable.

    edited to add: this seems to be an excerpt from the recently released fawning book that the Big Short/Moneyball guy wrote about him.


    Quality sneer found on the birdsite

    Transcription:

    > Thinking about that guy who wants a global suprasovereign execution squad with authority to disable the math of encryption and bunker buster my gaming computer if they detect it has too many transistors because BonziBuddy might get smart enough to order custom RNA viruses online.
