Andrew Molitor on "AI safety": "people are gonna hook these dumb things up to stuff they should not, and people will get killed. Hopefully the same people, but probably other people."
The current state of the art of AI safety research is mainly of two sorts: “what if we build an angry God” and “can we make the thing say Heil Hitler?” Neither is very important, because in the first place we’re pretty unlikely to build a God, and in the second place, who cares?
This is a good summary of half of the motivation to ignore the real AI safety stuff in favor of sci-fi fantasy doom scenarios. (The other half is that the sci-fi fantasy scenarios are a good source of hype.) I hadn't thought about the extent to which Altman's plan is "hey morons, hook my shit up to fucking everything and try to stumble across a use case that's good for something" (as opposed to the "we're building a genie, and when we're done we're going to ask it for three wishes" he hypes up); that makes more sense as a long-term plan...