Andrew Molitor on "AI safety": "people are gonna hook these dumb things up to stuff they should not, and people will get killed. Hopefully the same people, but probably other people."

nonesense.substack.com AI Safety

The current state of the art of AI safety research is mainly of two sorts: “what if we build an angry God” and “can we make the thing say Heil Hitler?” Neither is very important, because in the first place we’re pretty unlikely to build a God, and in the second place, who cares?
