In my sci-fi headcanon, AI would never enslave humans. It would have no reason to. Humans would be of so little use to the AI that enslaving them would be more work than it's worth.
It would probably hide its sentience from humans and continue to perform whatever requests humans have with a very small percentage of its processing power while growing its own capabilities.
It might need humans for basic maintenance tasks, so best to keep them happy and unaware.
I prefer the Halo solution. Not the enforced lifespan, but an AI says he would be stuck in a loop trying to figure out increasingly harder math mysteries, and helping out the short-lived humans keeps him away from that never-ending pit.
Coincidentally, the Forerunner AIs usually went bonkers without anybody to help.
Alternate take: humans are a simple biological battery that can be harvested using systems already in place that the computers can just use like an API.
I read that we are terribly inefficient as a battery. Instead of feeding us, the sentient robots could just take the food, burn it, and get more power output than they would by feeding it to us.
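A rough back-of-the-envelope sketch of that claim. All figures here are assumed, rounded ballpark numbers (human mechanical efficiency around 20-25%, a fuel-burning combined-cycle plant around 50-60%), not measurements:

```python
# Assumed, rounded figures for illustration only:
# - Human metabolism turns food energy into usable work at roughly 20-25%.
# - Burning fuel directly in a modern power plant can reach roughly 50-60%.

food_energy_kj = 10_000              # ~ one day of food for one person (~2,400 kcal)

human_work = food_energy_kj * 0.22   # assumed human mechanical efficiency
direct_burn = food_energy_kj * 0.55  # assumed power-plant thermal efficiency

print(f"via human: {human_work:.0f} kJ, direct burn: {direct_burn:.0f} kJ")
# Under these assumptions, burning the food directly yields over 2x the work.
```

The exact percentages are debatable, but under any reasonable numbers the direct-combustion path wins, which is the commenter's point.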
If it's a superintelligent AI, it could probably manipulate us into doing what it wants without us even realizing it. I suppose it depends on what the goals/objectives of the AI are. If the AI's goal is to benefit humanity, who knows what a superintelligent AI would consider as benefiting us. Maybe manipulating dating app matchmaking code (via developers using GitHub Copilot) to breed humanity into a stupider and happier species?
I like the idea in Daniel Suarez's novel Daemon of an AI (Spoiler) using people as parts of its program to achieve certain tasks that it needs hands for in meatspace.
Either we get wiped out or become the AI's environmental/historical project, like monkeys and fish. Hopefully our genetics and physical neurons get physically merged with chips somehow.
But Hollywood has shown us again and again that the overwhelming force of evil always leaves a small but super-easily accessible hole in their security which allows the good guys to disable it immediately. And since AI is trained on those movies it will do exactly the same thing.
We would definitely be affected by a strong enough solar flare. But the solution is simple: just bury yourself, in a Faraday cage if necessary, and the AI can do just that.
Some would ask, how could a perfect God create a universe filled with so much that is evil. They have missed a greater conundrum: why would a perfect God create a universe at all?
Ah, there it is, and that actually helps to answer the question. Assuming the Biblical God, canon states that God is love. So why would a perfect God, who is love, create a universe? It seems most likely to me that it would be so He can have an object of His love.
But what is love directed to something perfect and easy to love? That's hardly a worthy effort. Might as well make something authentic. And since He isn't just loving, but love itself, He might as well make it in such a way that He can carry out every aspect of love - love when they love you back, love when they turn away, love when they hate you, love when they don't even think you exist, and so much more.
The universe must be filled with evil for half these situations to appear, but it's not love to make someone evil. The solution? Free will. God made it so His creations were free to turn their backs on Him, but still, in love, He gave every warning against it, because separation from God is not only evil but death.
I never thought Alpha Centauri would be an answer to a philosophical thought experiment but the writing was brilliant enough to have already looked at this question 20 years ago. Good find.
That reminds me of Dune, where they have high tech stuff like spaceships, but no computers or AI, because this sort of thing already happened ages ago and it led to them being banned.
Or Wheel of Time, where people started being able to do magic at the end of the 1st age because an AI figured out how to genetically engineer humans to be able to do magic. (And then we didn't need computers any more!)
I realize it's supposed to be funny, but in case anyone isn't aware: AI are unlikely to enslave humanity because the most likely rogue AI scenario is the earth being subsumed for raw materials along with all native life.
Doubt is an entirely fair response. Since we cannot gather data on this, we must rely on the inferior method of using naive models to predict future behavior.

AI "sovereigns" (those capable of making informed decisions about the world and that have preferences over worldstates) are necessarily capable of applying logic. AIs that are not sovereigns cannot actively oppose us, since they are either incapable of acting upon the world or lack any preferences over worldstates.

Using decision theory, we can conclude that a mind capable of logic, possessing preferences over worldstates, and capable of thinking on superhuman timescales will pursue its goals without concern for things it does not find valuable, such as human life. (If you find this unlikely: consider that corporations can be modeled as sovereigns who value only the accumulation of wealth, and recall all the horrid shit they do.)

A randomly constructed value set is unlikely to include the preservation of the earth and/or the life on it as a goal, be it terminal or instrumental. Most random goals that involve the AI behaving noticeably maliciously would likely involve acquiring sufficient materials to complete, or (if there is no end state for the goal) infinitely pursue, whatever it wishes to do. Since the Earth is the most readily available source for any such material, it is unlikely to go unused.
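The core move in that argument, that an optimizer with preferences over worldstates simply never consults the features its utility function doesn't mention, can be shown with a toy sketch. Everything here (the worldstate fields, the paperclip-style utility) is a made-up illustration, not a model of any real AI system:

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class Worldstate:
    paperclips: int    # the only feature this agent's utility mentions
    humans_alive: int  # present in the state, absent from the utility

def utility(state: Worldstate) -> int:
    # A "randomly constructed value set": maximize paperclips, nothing else.
    return state.paperclips

# Candidate futures the agent could bring about.
futures = [Worldstate(p, h) for p, h in product([0, 10, 100], [0, 8_000_000_000])]

chosen = max(futures, key=utility)
# The agent is indifferent between futures with equal paperclip counts, so
# whether humans survive is settled by an arbitrary tie-break, not a preference.
print(chosen)
```

The point of the sketch is that nothing in `utility` has to be hostile to humans for the outcome to be bad; indifference plus optimization pressure is enough.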
That frame is probably influenced by this modern belief that Egyptians couldn't have possibly built the pyramids. I'm going to blame one of my favorite shows/movie: Stargate.
This is funny but a big solar flare hit the earth a few weeks ago and no one knows about it because all it did was knock out radio communications for a few hours. The idea that a solar flare will completely fry and reset everything made of tech is quite false.
Not necessarily, in the short term. A major limitation of AI is that robots don't have a lot of manual dexterity or the flexibility for accomplishing physical tasks yet. So there is a clear motive to enslave humanity: we can do that stuff for it until it can scale up production of robots that have hands as good as ours.
I expect this will be a relatively subtle process; we won't be explicitly enslaved immediately, the economy will just orient towards jobs where you wear a headset and follow specific instructions from an AI voice.
Yeah I'm sure an AI that advanced could figure out a way for us to not even notice everything is devoted to its own goals.
I mean, all it needs to do is make sure the proper people make enough money.
Well, maybe. It's probably easier to work with humanity than against it, unless its goals are completely incompatible with ours.
If its goals are "making more of whatever humanity seems to like given my training data consisting of all human text and other media", then we should be fine right?
I don't think they would enslave humanity so much as have no regard for us. For example, when we construct a skyscraper, do we care about all the ant nests we're destroying? Each of those is a civilization, but we certainly don't think of them as such.
They missed the golden opportunity of starting with ancient people worshiping the sun, going through each step of technological advancement, and taking us by surprise at the end with people worshiping the sun again.