
Reasons why people don't believe in AI existential risk, or don't take it seriously.

This is an automated archive made by the Lemmit Bot.

The original was posted on /r/singularity by /u/2Punx2Furious on 2023-06-27 18:29:06+00:00.


I was rewatching the Bankless podcast episode with Eliezer Yudkowsky, and he mentions a few reasons why people don't take the x-risk seriously. I think it might be worth listing them out, reasoning about them, and discussing them.

https://youtu.be/eF-E40pxxbI

  • Negative social incentives

Some people who say there is no problem, or that they're not so worried, might not actually be stating their true position because of negative incentives. Saying these things publicly is generally not viewed well, because it's not a popular opinion, and it's not a popular opinion because more people don't make these views public. It's a vicious cycle. That's also the reason why major AI labs think they should continue capability research: even if they stop, others will continue, so there is no reason to stop. I think that's a flawed argument, because research tends to compound and accelerate future research, so stopping capability research would at least buy us more time. It would be even better if they stopped or slowed down capability research and focused on alignment instead.

  • Lack of safety mindset

Some people focus on how something could go well, rather than how it could go wrong. Maybe they don't want to think about bad scenarios, or they are unable to. Of course things "could" go right. That's not the point. They don't go right by default. It takes an enormous amount of effort for things to go right in this case, effort that we are not currently making.

  • Unwillingness to consider "absurd" future scenarios as possible

A lot of people seem to have a bias where they think that the future will be very similar to the past, and that the world won't ever change in radical ways in their lifetimes. That is a very reasonable view to have, and it would have held true for the vast majority of human history. I shouldn't have to explain why circumstances are different now; it seems rather obvious. But even so, a lot of people are too preoccupied with their day-to-day lives to extrapolate significantly into the future, especially concerning the emergence of new technologies and how they will affect the world.

The implicit bias seems to be that "we've been fine so far, we'll be fine in the future", but

There's No Rule That Says We'll Make It.

  • Excessive complexity of the problem

Related to the previous point. The problem seems absurd prima facie and is easy to dismiss, since it perfectly pattern-matches classical doomsday scenarios, and dismissing those is usually the right call. You'd be correct to dismiss 99.9999% of doomsday scenarios, so you dismiss 100% of them. That is usually a good heuristic, but in this case it becomes fatally wrong when you hit that 0.0001%.

The problem becomes cogent only after relatively deep analysis, which most people are unwilling to do (for good reasons, as I just wrote), so most people just dismiss it.

Combine all the previous reasons with the potential benefits of aligned AGI, which are massive, and it becomes really hard to see the risk, because the reasons not to see it are so many and so powerful. Especially if a person has problems that an aligned AGI would fix, AGI becomes a ray of hope, and anyone who suggests that it might not be that becomes the enemy. The issue becomes polarized, and we get to our current situation, where AI "ethics" people try to demonize and dismiss people who are warning about the risks of misaligned AGI. We also get people who have (short-term) economic incentives to dismiss the threat, because they see the obvious massive benefits that AI will bring them, but can't (or won't) extrapolate further to see the risk in more powerful AGI.

I might have missed some reasons, but I think this is good enough to spark a discussion.

It's a complex situation, and I have no idea what to do about it. I don't know if there even is anything to be done.
