Ex-OpenAI cofounder Ilya Sutskever’s new startup Safe Superintelligence just closed another funding round. For $2 billion, Sutskever promises not to release any product at all until SSI has developed safe superintelligence.
My bet is that we're at least one breakthrough away from actual artificial intelligence. It doesn't help that we don't know what intelligence actually is.
Also, according to Eliezer Yudkowsky, the way we'll notice a superintelligence moving against us is that all humans will die at once. Of course, while AI with human-level intellect is probably possible (humans, after all, have human-level intellect), whether superintelligence is possible at all is still a stretch.