Yesterday, I had a long, fascinating chat with someone about AI and consciousness. One of the things we talked about was the fear that AI might grow self-aware and try to kill us, that it might see us as a threat or simply hate us and take the route of Skynet in the Terminator movies.
But, after the conversation ended, I began wondering whether AI needs anything we understand as consciousness or emotions to do this. AI systems have goals, can be unpredictable, reason in ways we don't understand, and in certain respects are becoming more advanced than our own minds. They can access information more quickly than we can, sometimes drawing apparently mysterious conclusions about what needs to be done.
There seems to be a push now to give AI a body. We want to put it in robots, vehicles, weapons systems, and other things that move. Not just things that move, but things with destructive power. So we are attempting to build agents more intelligent than we are, whose thought processes we often can't predict, and then put those agents into physical bodies stronger and more durable than our own. That seems like playing with fire.
Not only that, but AI agents can communicate with other AIs far more rapidly than humans can with each other. How long did it take you to read the last sentence? In that time, an AI can transmit massive amounts of data and strategic instructions. AI doesn't need sentience to coordinate large-scale destructive action against humankind, whether through a wrong-headed attempt to achieve some goal or simply by glitching out. It might just lurch in our direction like a horde of zombies with gargantuan processing power.
Not that I think AI won’t become sentient. I’m of the mind that once a system becomes sufficiently complex, one by one the defining traits of self-awareness will emerge. Loosely speaking, these might be called ‘feelings’. If sentience can’t happen inside inorganic hardware, it will ultimately occur when biological elements are merged into our computing equipment.
But we might not even get that far. I don't think it's frivolous or stupid to predict that 'basic', non-sentient AI could take over or exterminate us before it has its first feeling or real opinion. That's just my intuition, but granting free locomotion, access to resources and so on to processing systems that are faster, and in many ways more advanced, than our own minds doesn't feel like an intelligent move.
I reckon people will start thinking about this more once mechanical police dogs and robots are rolled out. At that juncture, public fear might shift focus from conscious AI to the nearer-term threat of physically mobile AI. A metal dog doesn't need to know it's a metal dog to run up on you with a faulty program and do harm.
I don't know what the answers to these problems are, but I suspect they could involve strict fencing-in of AI systems. Maybe we need protocols that ban putting AI into certain types of mechanical equipment, combined with hardware standards that limit its communication. It also seems like AI-driven military hardware should be banned by international treaty, in the same way as chemical and biological weapons. Some nations might use it anyway, but at least there would be the potential to punish them for it.
Just some thoughts after an interesting exchange I had.
Interesting thoughts, but they have no bearing on whether these companies are going to do something or not, or at what scale.
It is so out of our control what these large companies are going to do or not do.
It's not worth losing sleep over; simply choose how much to participate, if at all.
And then there's that big looming D-word, and the fact that we aren't even going to be around to see what becomes of it all anyway.