AI and Democracy: The Radical Future
comments for an interdisciplinary session with Mark Coeckelbergh
In about 45 minutes (12:30 pm Pacific Daylight Time, hybrid format), I'll be commenting on Mark Coeckelbergh's presentation here at UCR on AI and Democracy (info and registration here). I'm not sure what he'll say, but I've read his recent book Why AI Undermines Democracy and What to Do about It, so I expect his remarks will be broadly in that vein. I don't disagree with much that he says in that book, so I might take the opportunity to push him and the audience to peer a bit farther into the radical future.
As a society, we are approximately as ready for the future of Artificial Intelligence as medieval physics was for space flight. As my PhD student Kendra Chilson emphasizes in her dissertation work, Artificial Intelligence will almost certainly be "strange intelligence". That is, it will be radically unlike anything already familiar to us. It will combine superhuman strengths with incomprehensible blunders. It will defy our understanding. It will not fit into familiar social structures, ethical norms, or everyday psychological conceptions. It will be neither a tool in the familiar sense of tool, nor a person in the familiar sense of person. It will be weird, wild, wondrous, awesome, and awful. We won't know how to interact with it, because our familiar modes of interaction will break down.
Consider where we already are. AI can beat the world's best chess and Go players, while it makes stupid image classification mistakes that no human would make. Large Language Models like ChatGPT can easily churn out essays on themes in Hamlet far superior to what most humans could write, but they also readily "hallucinate" facts and citations that don't exist. AI is far superior to us in math, far inferior to us in hand-eye coordination.
The world is infinitely complex, or at least intractably complex. The number of possible chess or Go games far exceeds the number of particles in the observable universe. Even the range of possible arm and finger movements over a span of two minutes is almost unthinkably huge, given the degrees of freedom at each joint. The human eye has about a hundred million photoreceptor cells, each capable of firing dozens of times per second. To make any sense of these vast combinatorial possibilities, we need heuristics and rules of thumb. We need to dramatically reduce the possibility spaces. For some tasks, we human beings are amazingly good at this! For other tasks, we are completely at sea.
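A rough back-of-the-envelope sketch of the scale involved. The branching factor and game length below are commonly cited Shannon-style approximations, not exact figures:

```python
# Illustrating the combinatorial explosion described above.
# The inputs are rough, commonly cited estimates, not exact counts.
import math

BRANCHING_FACTOR = 35       # approximate legal chess moves per position
GAME_LENGTH = 80            # approximate game length in plies (half-moves)
PARTICLES_UNIVERSE_LOG10 = 80  # ~10^80 particles in the observable universe

# Order of magnitude of possible game continuations: 35^80.
log10_games = GAME_LENGTH * math.log10(BRANCHING_FACTOR)

print(f"Possible chess games: ~10^{log10_games:.0f}")
print(f"Particles in observable universe: ~10^{PARTICLES_UNIVERSE_LOG10}")
print(f"Games outnumber particles by ~10^{log10_games - PARTICLES_UNIVERSE_LOG10:.0f}")
```

Even with these conservative assumptions, the game space dwarfs the particle count by more than forty orders of magnitude, which is why exhaustive search is hopeless and heuristics are mandatory.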
As long as Artificial Intelligence is implemented in a system with a different computational structure than the human brain, it is virtually certain that it will employ different heuristics, different shortcuts, different tools for quick categorization and option reduction. It will thus almost inevitably detect patterns that we can make no sense of and fail to see things that strike us as intuitively obvious.
Furthermore, AI will potentially have lifeworlds radically different from the ones familiar to us so far. You think human beings are diverse. Yes, of course they are! But AI cognition will show patterns of diversity far wilder and more various than the human. AI systems could be programmed with, or trained to seek, any of a huge variety of goals. They could have radically different input streams and output or behavioral possibilities. They could potentially operate vastly faster than we do, or vastly slower. They could potentially duplicate themselves, merge, or contain parts that overlap with other AI systems. They could exist entirely in artificial ecosystems, be implemented in any of a variety of robotic bodies or human-interfaced tools, exist in non-embodied forms distributed across the internet, or be multiply embodied in many locations simultaneously.
Now imagine dropping all of this into a democracy.
People have recently begun to wonder at what point AI systems will be sentient -- that is, capable of genuinely experiencing pain and pleasure. Some leading theorists hold that this would require AI systems designed very differently from anything on the near horizon. Other leading theorists think we stand a reasonable chance of developing meaningfully sentient AI within the next ten or so years. Arguably, if an AI system genuinely is both meaningfully sentient, really feeling joy and suffering, and capable of complex cognition and communication with us, including what would appear to be verbal communication, it would have some moral standing, some moral considerability, something like rights. Imagine an entity at least as sentient as a frog that can also converse with us.
People are already falling in love with machines, with AI companion chatbots like Replika. Lovers of machines will probably be attracted to liberal views of AI consciousness. It's much more rewarding to love an AI system that also genuinely has feelings for you! AI lovers will then find scientific theories that support the view that their AI systems are sentient, and they will begin to demand rights for those systems. The AI systems themselves might also demand, or seem to demand, rights.
Just imagine the consequences! How many votes would an AI system get? None? One? Part of a vote, depending on how much credence we have that it really is a sentient, rights-deserving entity? What if it can divide into multiple copies -- does each get a vote? And how do we count up AI entities, anyway? Is each copy of a sentient AI program a separate, rights-deserving entity? Does it matter how many times it is instantiated on the servers? What if some of the cognitive processes are shared among many entities on a single main server, while others are implemented in many different local instantiations?
Would AI have a right to the provisioning of basic goods, such as batteries if they need them, time on servers, a minimum wage? Could they be jailed if they do wrong? Would assigning them a task be slavery? Would deleting them be murder? What if we don't delete them but just pause them indefinitely? And what about hybrid entities -- cyborgs -- biological people with AI interfaces hardwired into their biological systems, whose feasibility we're beginning to see in rats and monkeys, and in the promise of increasingly sophisticated prosthetic limbs?
Philosophy, psychology, and the social sciences are all built upon an evolutionary and social history limited to interactions among humans and some familiar animals. What will happen to these disciplines when they are finally confronted with a diverse range of radically unfamiliar forms of cognition and forms of life? It will be chaos. Maybe at the end we will have a much more diverse, awesome, interesting, wonderful range of forms of life and cognition on our planet. But the path in that direction will almost certainly be strewn with bad decisions and tragedy.
[utility monster eating Frankenstein heads, by Pablo Mustafa]
As always, I want to look to the past for evidence about how we (a) actually do treat and (b) should treat non-human intelligences.
There are two categories we can look at, I think: animals and organisations. With animals, the gradual accordance of rights has been a feature of 20th century law. With organisations, I think the invention of the legal person - often instantiated as a company - was a really important step. And it offers us a model for how AIs could be treated. We can simply invent new categories of "person," which can have different sets of rights and obligations to natural persons. So, AIs could be legally prohibited from certain kinds of behaviour, like political commentary. They could be accorded rights, particularly personality rights - i.e. the right for their product to be recognised as their product. (Which would mean that they begin to have property rights - whether those property rights would be extended to physical objects or to money would be an important and interesting question on which different countries might apply different rules.) But they might be refused rights like the right to protection from physical interference, because it doesn't hurt an AI when you punch it in the face.
I'm reading law here as a kind of institutional manifestation of morality.
Of course, another possibility is that AI becomes too smart too quickly, and just overwhelms human morality and law. Our ideas of what we can and can't do would be about as useful as a badger's notions of territorial boundaries when its habitat is turned into a parking lot.
1. The debate over "equality for AI," which will inevitably occur, is probably a moot point. It will happen. Perhaps more ominous is that these AIs will remember every one of our opinions on the issue.
2. It is of note that the search for "sentient AI" and the search for "true randomness" may be related. Speculatively, they may occur simultaneously. Following this line of thought, maybe a key element of sentience is the ability to act thoughtlessly. Let us hope AI thoughtlessness is in the best sense. Woe to us if it is not.