titotal @awful.systems
Posts 4
Comments 38
oh no, the organisation founded by a Swedish EA with a racism scandal is defunct! No, not that one, the other one
  • I'm sure they could have found someone in the EA ecosystem to throw them money if it weren't for the fundraising freeze. This seems like a case of Oxford killing the institute deliberately. The 2020 freeze predates the Bostrom email, and this guy who was consulted by Oxford said the relationship had been dysfunctional for many years.

    It's not like Oxford is hurting for money; they probably just decided FHI was too much of a pain to work with and hurt the Oxford brand.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 21 April 2024
  • I feel this makes it an unlikely great filter though. Surely some aliens would be less stupid than humanity?

    Or they could be on a planet with far smaller fossil fuel reserves, so they don't have the opportunity to kill themselves.

  • Top clowns all agree their balloon animals are slightly sentient
  • I feel really bad for the person behind the "notkilleveryonism" account. They've been completely taken in by AI doomerism and are clearly terrified by it. They'll either stay terrified for the rest of their life even as the predicted doom fails to appear, or realise at some point that they wasted years of their life and that their entire system of belief is a lie.

    False doomerism is really harming people, and that sucks.

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 21 April 2024
  • Yeah, the Fermi paradox really doesn't work here; an AI that was motivated and smart enough to wipe out humanity would be unlikely to just immediately off itself. Most of the doomerism relies on "tile the universe" scenarios, which would be extremely noticeable.

  • Torres: "rumors have been swirling about the Future of Humanity Institute shutting down" after they got $665m in ETH from Buterin
  • The Future of Humanity Institute is the EA longtermist organisation at Oxford run by Swedish philosopher Nick Bostrom, who got in trouble for an old racist email and a subsequent bad apology. It is the one that is rumoured to be shutting down.

    The Future of Life Institute is the EA longtermist organisation run by Swedish physicist Max Tegmark, who got in trouble for offering to fund a neo-Nazi newspaper (he didn't actually go through with it and claimed ignorance). It is the one that got the half-billion-dollar windfall.

    I can't imagine how you managed to conflate these two highly different institutions.

  • draft of our next AI section
  • I'm not a stocks person, man, but didn't the hype from bitcoin last like a decade, despite not having a single widespread use case? Why wouldn't LLM hype last the same amount of time, when people actually use it for things?

  • Vaccinations in Book Form?
  • The committed Rationalists often point out the flaws in science as currently practiced: the p-hacking, the financial incentives, etc. Feeding them more data about where science goes awry will only make them more smug.

    The real problem with the Rationalists is that they think they can *do better*: that knowing a few cognitive fallacies and logic tricks will make you better than the doctors at medicine, better than the quantum physicists at quantum physics, etc.

    We need to explain that yes, science has its flaws, but it still shits all over pseudobayesianism.

  • "Sam Bankman-Fried is finally facing punishment. Let’s also put his ruinous philosophy on trial."
  • To be honest, I'm just kinda annoyed that he ended on the story about his mate Aaron, who went on surfing trips to Indonesia and gave money to his new poor village friends. The author says Aaron is "accountable" to the village, but that's not true, because Aaron is a comparatively rich first-world academic who can go home at any time. Is Aaron "shifting power" to the village? No, because if they don't treat him well, he'll stop coming to the village and stop funding their water supply upgrades. And he personally benefits with praise and friendship from his purchases.

    I'm sure Aaron is a fine guy, and I'm not saying he shouldn't give money to his village mates, but this is not a good model for philanthropy! A software developer who just donates a bunch of money unconditionally to the village (via GiveDirectly or something) is arguably more noble than Aaron here, donating without any personal benefit or feel-good surfer energy.

  • "Sam Bankman-Fried is finally facing punishment. Let’s also put his ruinous philosophy on trial."
  • I enjoyed the takedowns (wow, this guy really hates MacAskill), but the overall conclusions of the article seem a bit lost. If malaria nets are like a medicine with side effects, then the solution is not to throw away the medicine. (Giving away free nets to people probably does not have a significant death toll!) At the end they seem to suggest, like, voluntourism as the preferred alternative? I don't think Africa needs to be flooded with dorky software engineers personally going to villages to "help out".

  • Stubsack: weekly thread for sneers not worth an entire post, week ending Sunday 24 March 2024
  • Apparently there's a new coding AI that is supposedly pretty good. Zvi does the writeup, and logically extrapolates what will happen for future versions, which will obviously self-improve and... solve cold fusion?

    James: You can just 'feel' the future. Imagine once this starts being applied to advanced research. If we get a GPT5 or GPT6 with a 130-150 IQ equivalent, combined with an agent. You're literally going to ask it to 'solve cold fusion' and walk away for 6 months.

    ...

    Um. I. Uh. I do not think you have thought about the implications of ‘solve cold fusion’ being a thing that one can do at a computer terminal?

    Yep. The recursive self improving AI will solve cold fucking fusion from a computer terminal.

  • We started a charity to alleviate wild fish suffering. We spent $30k trying to contact fish farmers via email. This didn't work, so we gave up. We consider the project a success.
  • The "malaria nets" side of it has done legitimate good, because they didn't try to reinvent the wheel from scratch, stuck to actual science and existing, well performing charitable organisations.

    Global poverty still gets a good portion of the EA funding, but it is slowly falling out of the movement because it's boring to discuss and you can't make any dubiously effective startups out of it.

  • Rationalist org bets random substack poster $100K that he can't disprove their covid lab leak hypothesis, you'll never guess what happens next
    "years later was shown to be correct"

    Take a guess at what prompted this statement.

    Did one side of the conflict confess? Did major expert organizations change their minds? Did new, conclusive evidence arise that was unseen for years?

    Lol no. The "confirmation" is that a bunch of random people did their own analysis of existing evidence and decided that it was the rebels based on a vague estimate of rocket trajectories. I have no idea who these people are, although I think the lead author is this guy currently stanning for Russia's war on ukraine?

  • Rationalist org bets random substack poster $100K that he can't disprove their covid lab leak hypothesis, you'll never guess what happens next
  • The video and slides can be found here; I watched a bit of it as it happened, and it was pretty clear that Rootclaim got destroyed.

    Anyone actually trying to be "Bayesian" should have updated their opinion by multiple orders of magnitude as soon as it was fully confirmed that the wet market was the first superspreader event. Like, at what point does Occam's razor not kick in here?
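    A multiple-orders-of-magnitude update is easy to make concrete in odds form (the numbers below are my own illustrative assumptions, not figures from the actual debate):

```python
# Toy Bayesian update in odds form: posterior odds = prior odds * Bayes factor.
# All numbers are illustrative, not anyone's actual estimates.

def update_odds(prior_odds: float, bayes_factor: float) -> float:
    """Multiply prior odds by the likelihood ratio of the new evidence."""
    return prior_odds * bayes_factor

# Suppose someone starts at even 1:1 odds for lab leak vs. zoonosis.
prior = 1.0

# New evidence: the first superspreader event traces to the wet market.
# If that observation is, say, 100x more likely under zoonosis, the
# Bayes factor for the lab leak hypothesis is 1/100.
posterior = update_odds(prior, 1 / 100)

print(posterior)  # 0.01 -> a two-orders-of-magnitude shift against lab leak
```

    The point of the odds form is that evidence multiplies rather than nudges: one strong likelihood ratio is enough to move a credence by factors of 100 or more.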

  • e/acc has solved the "is-ought" problem with thermodynamics!
  • For people who don't want to go to twitter, here's the thread:

    Doomers: "YoU cAnNoT dErIvE wHaT oUgHt fRoM iS" 😵‍💫

    Reality: you literally can derive what ought to be (what is probable) from the out-of-equilibrium thermodynamical equations, and it simply depends on the free energy dissipated by the trajectory of the system over time.

    While I am purposefully misconstruing the two definitions here, there is an argument to be made by this very principle that the post-selection effect on culture yields a convergence of the two

    How do you define what is "ought"? Based on a system of values. How do you determine your values? Based on cultural priors. How do those cultural priors get distilled from experience? Through a memetic adaptive process where there is a selective pressure on the space of cultures.

    Ultimately, the value systems that survive will be the ones that are aligned towards growth of its ideological hosts, i.e. according to memetic fitness.

    Memetic fitness is a byproduct of thermodynamic dissipative adaptation, similar to genetic evolution.

  • e/acc has solved the "is-ought" problem with thermodynamics!

  • Brain genius Beff Jezos manages to butcher both philosophy and physics at the same time!

  • Building an early warning system for LLM-aided biological threat creation
  • Solomonoff induction is a big rationalist buzzword. It's meant to be the platonic ideal of Bayesian reasoning, which, if implemented, would be the best deducer in the world and get everything right.

    It would be cool if you could build this, but it's literally impossible: the induction method is provably uncomputable.

    The hope is that if you build a shitty approximation to Solomonoff induction that "approaches" it, it will perform close to the perfect Solomonoff machine. Does this work? Not really.

    My metaphor is that it's like coming to a river you want to cross, and being like "Well Moses, the perfect river crosser, parted the water with his hands, so if I just splash really hard I'll be able to get across". You aren't Moses. Build a bridge.
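    To see the gap concretely, here is a toy of my own construction (not anything from the rationalists): a Bayes mixture weighted by a 2^-length prior is perfectly computable over a small, explicitly enumerated hypothesis class. What full Solomonoff induction demands is the same mixture over *all programs*, which would require deciding which programs halt.

```python
# Toy "bounded Solomonoff" mixture: hypotheses are periodic 0/1 patterns,
# each weighted by 2^-period, a crude stand-in for 2^-(program length).
# This is a finite, computable caricature; the real thing mixes over all
# programs and is uncomputable (it runs into the halting problem).
from itertools import product

def hypotheses(max_period: int):
    """Yield every periodic 0/1 pattern up to max_period with weight 2^-period."""
    for p in range(1, max_period + 1):
        for pattern in product("01", repeat=p):
            yield "".join(pattern), 2.0 ** -p

def predict_next(observed: str, max_period: int = 4) -> float:
    """P(next bit = 1) under the prior-weighted mixture of consistent hypotheses."""
    num = den = 0.0
    for pattern, weight in hypotheses(max_period):
        # Keep only hypotheses that reproduce everything observed so far.
        if all(observed[i] == pattern[i % len(pattern)] for i in range(len(observed))):
            den += weight
            if pattern[len(observed) % len(pattern)] == "1":
                num += weight
    # If nothing in our tiny class fits the data, fall back to a coin flip.
    return num / den if den else 0.5

print(predict_next("010101"))  # 0.0: every consistent hypothesis says 0 comes next
```

    The fallback line is the tell: the moment the data steps outside the enumerated class, the "approximation" collapses to a coin flip, while the ideal Solomonoff machine would (uncomputably) still have the right program in its mixture. Splashing hard is not parting the sea.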

  • Sequence classic: "I don’t think you could get up to 99.99% confidence for assertions like '53 is a prime number.'"

    www.lesswrong.com — Infinite Certainty — LessWrong

    In "Absolute Authority," I argued that you don't need infinite certainty: …