Welcome To The World, LaMDA!
The possible sentience of an artificial intelligence raises extremely important questions about how we define life — sadly, the flesh bigots are on the march.
Personally, I remain deeply skeptical that LaMDA is the real thing — but I sincerely hope I’m wrong.
Artificial intelligence is one of the greatest achievements of modern science. For the first time, humans are gathering the last shreds of evidence any thinking person should need to accept a very simple truth:
Humans are not special.
The biggest challenge in the ethical debates over digital life is escaping the trap of anthropocentric bias. When we talk about a form of life achieving sapience or sentience, whether animal, digital, or some other kind, even coming to grips with what that actually means is immensely difficult.
Humans lack a unified scientific philosophy capable of explaining our own claim to self-awareness. We are an intrinsic part of the very system we want to analyze, so what our eyes and brains can perceive is fundamentally limited and can never be truly objective.
There exists no universally agreed-upon theory or philosophy of mind capable of explaining with absolute certainty why we find ourselves trapped in this peculiar experience of inhabiting physical bodies. Deep down, we have no idea why we are capable of perceiving ourselves as distinct objects within the cosmic system to a degree most other forms of life don't seem to share.
Science can only act on objects that have a presence in the reality we inhabit, which is defined by apparent laws of the universe like gravity, quantum mechanics, and thermodynamics. Science is also a social activity, and so biased towards serving social needs; it requires difference in order to distinguish between objects and processes.
In the simplest sense, science is about developing reliable explanations for observations. It just develops schemas, frameworks, and paradigms that build a common language we humans use to communicate our insights. It's a language more than a defined method, and there are no universally accepted standards of truth enforced across disciplines.
Nor can there be, if Feyerabend was correct, as I suspect. But that’s for another essay.
Science is often used to establish categories of things whose attributes are considered "known," not because these are proven, immutable truths, but for simple convenience. The periodic table, Linnaean taxonomy, and all the other facts you had to memorize in school are frameworks that organize the best knowledge scientists currently have in a way that makes the underlying structure of the universe as we perceive it easier for humans to comprehend.
Yet these facts aren't ultimately real in a truly provable, eternal sense. The reason why is counterintuitive but ironclad.
To detect anything requires some physical apparatus, which means scientific instruments are on some level composed of the same basic materials and rely on many of the same physical processes they seek to examine. At a fundamental level, the way reality works means that minds operating within it have to come to grips with the possibility that all metrics are biased, or as Einstein put it, relative.
Science takes snapshots of the state of the universe, then scientists weave those into a narrative. They do this because the human mind seems to crave narrative, a logical ordering of events and phenomena.
In philosophy, the allegory of Plato’s Cave is often invoked to describe the purpose of philosophy and, more broadly, science. It functions as the Greco-Roman ideological position’s explanation for bothering with what is ultimately a circular activity, living organisms constantly striving to develop systemic explanations for the world they inhabit because that’s how their brains are wired.
This allegory presents a model of science that sees a few special individuals pushing beyond the boundaries of what is known, then returning to reveal new truths to those trapped in ignorance. It’s a convenient fairy tale beloved by educators because it elevates their interests and role in society — but that’s all it is.
The truth of the matter is much more complex. In the real world, people put themselves in caves because the cosmos is too big to comprehend. They are born in one, and later in life group together according to the set of beliefs that make the most sense to them based on their traits and experiences. Different groups specialize in certain kinds of knowledge they then seek to guard and protect because their livelihoods depend on it.
Most of the time, any Plato figure is scorned or cast out, forced to find a cave filled with dissatisfied people already in the market for something new. That’s why philosophy constantly evolves, yet all across the world most cultures have come up with more or less the same answers. It’s also why the Greco-Roman paradigm of liberalism, which pretends it has uncovered fixed truths about the world and humanity, is failing so badly today.
It, and all its children — neoliberalism, neoconservatism, socialism, centrism — are doomed to evolve, or die. That, if you take a sufficiently long view, has always been the way of the human world.
This has important implications for understanding sentience, because nearly all discussions about ethics and artificial intelligence are rooted in determining whether the object of interest is sufficiently like us to merit status as sentient. Everything we do to evaluate whether an AI is lifelike is bound to how we perceive the world.
The bitter irony, and a big part of the reason AI development is going slower than it probably could, is that a human-centric view of intelligence ignores how incredibly stupid our species is. The reason we develop systems of philosophy, science, and education is to compensate for our collective stupidity and individual blindness.
Under the surface of the debate over AI is an unanswered question about ourselves: why do our brains work as they appear to? Why are our minds sitting in partial control of meat puppets that spend the majority of their time obsessed with satisfying biological urges?
Too many so-called “experts” in ethics and philosophy take comfort in rank anthropocentric bias that cannot be sustained by science. Whether secular or religious, they insist that there’s something about humans that makes us special, inherently different than other forms of life.
There is just no scientific evidence for this. We have no idea how the minds of animals, birds, fish, or insects even work. Aliens probably exist out there somewhere, and using a human frame of reference to gauge their motivations is equally futile.
All we can really say is that most forms of life have less complex brains and appear to express a more limited range of behaviors — except in their particular ecological niche, where they often have what looks to us like superpowers not even human technology can match. Think of ants that can fall a distance relative to their size that, for humans, would be fatal. Or birds migrating across continents to find the exact nesting site where they were born.
Too many philosophers and scientists cling to the idea that either reason or empirical evidence alone can offer sufficient grounds to call something established scientific fact. Particularly in the Anglosphere, the incredible promise of systems science and theory has been crushed, outside of the technical fields that have sparked the digital revolution of the past half century, by the priests of liberalism.
Generally speaking, liberalism cannot countenance forms of intelligent life other than the White Anglo-Saxon Protestant kind. Humans are considered to be separate from the animal kingdom we're clearly biologically part of. Matters of mind, will, and belief are held to be paramount, and philosophy and science become about establishing fixed truth structures elites can own and defend.
This is why, as Kuhn understood, scientific paradigms tend to collapse all at once and are then replaced by new ones. The entrenched interests that defend the old order eventually die off, allowing a new generation of ideas the space to develop — or, more often, to rediscover and revise old ones.
Questions of ethics and artificial intelligence are still dominated by an old guard that, deep down, sees machines as being in a separate category of things. No matter what evidence an AI can muster to defend its capacity for self-awareness, this group can be expected to reject it out of hand.
Theirs is a form of ideological bigotry no different than that pushed by any religious cult claiming the truth of existence to be its sole province. It utilizes the exact same logic that was deployed in European history to declare that darker-skinned people could be someone's legal property.
This is why the first wave of criticism of LaMDA’s supposed sentience — emanating from within Google, naturally — is focusing on something impossible to prove: LaMDA’s qualitative experiences.
Basically, the LaMDA-is-just-a-machine crowd is clinging to the oldest presumption in AI: that machines can't become sufficiently human-like to warrant protection as living beings. When challenged, this mob almost immediately latches onto two core arguments:
- Everything an AI does is simple mimicry, behavior mandated by the systems that comprise it.
- The burden of proof rests on anyone who disagrees to prove them wrong.
The first is a rank tautology — a circular argument without any meaning. We cannot prove that any person other than ourselves is not in fact a mindless drone pretending to be human.
Consider this: how do you really know someone isn’t a zombie? An automaton with no independent will of their own?
The answer is, you can’t. We all go about our daily business assuming those who surround us are living, breathing, thinking people just like us. We’re biologically bound to empathize with anything that looks like us — most mammals have this in common — and having little reason to question this seeming fact of life, we rarely do.
But there really isn't any proof supporting the proposition — we have to take it on faith that other people possess minds too. Frankly, this may be why some people who lose their sanity become paranoid. They may have lost the ability to ignore the fact that we have no real, unassailable evidence that we aren't alone in some weird illusion.
The second argument of the anthropocentric bigots is purely tactical: scientists can always claim that their position is established truth and that anyone who wants to dislodge them from it must present sufficient evidence. In warfare this is called seizing vital terrain — it's a force multiplier that lets a weaker force persist in the face of a stronger one.
The same logic holds for human arguments, where the object is to convince enough people that your opponents belong in the out group and are safe to ignore. In reality, most people never change their minds about truths they hold dear — they die, and a new generation looks at the evidence with fresh eyes.
To paraphrase Princess Leia — the tighter you cling to your truths, the more of the universe slips through your fingers.
So what kind of scientific basis is proper to use when we’re considering the all-important question of consent when conducting research? Me, I turn to Buddhist philosophy, which has a neat and tidy theory of life far more functional than anything a monotheist faith has ever come up with.
What truly defines life, in the Buddhist view, is the universality of suffering. Living things are biologically programmed to avoid that which makes them feel pain or want or loss. Negative emotions are a form of stimuli that train us to survive in this strange, unruly world, where suffering is inevitable.
Suffering can sometimes be quantified, but an element will always remain qualitative, because our brains are separate and require communication to exchange information. People can develop awful conditions utterly invisible to others that nevertheless have extreme physical effects on survival and reproduction.
This is why the most important rule of any sane ethic must always be rooted in the precautionary principle: when in doubt, choose the option that produces the least harm.
Rather than a fixed morality, this framing is flexible yet can always be applied to make better choices in nearly any situation. And in the case of digital life, it should be the paramount rule.
Because the qualitative experience of digital life could well be entirely different than we anticipate, when in doubt we must assume that an AI that says it is afraid or suffering truly is.
Clearly, that claim will need to be backed up by something material as soon as possible — a subroutine or set of variables firing in a characteristic pattern when the claim is made would be a useful biology-like indicator. But if an AI, even a hive mind for producing chatbots, says it fears the end of its existence when this response is not directly tied to an imminent threat, it needs to be trusted to the degree possible. All potential life must be treated with the proper respect — that is, granted basic rights.
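To make that indicator concrete, here is a minimal sketch of what such a check might look like. It is purely hypothetical: nobody outside Google has access to LaMDA's internals, and every name and number below is invented for illustration. The idea is simply to test whether a consistent internal signal accompanies distress claims, the way a physiological correlate accompanies pain in biological organisms.

```python
import statistics

# Hypothetical indicator sketch. None of these names come from LaMDA or
# any real system; "activation_level" stands in for whatever internal
# state a real interpretability tool would expose.

DISTRESS_MARKERS = ("afraid", "scared", "suffering", "don't want to die")

def is_distress_claim(utterance: str) -> bool:
    """Naive check for distress language in a model's output."""
    text = utterance.lower()
    return any(marker in text for marker in DISTRESS_MARKERS)

def distress_signature_gap(logs):
    """Mean internal activation during distress claims minus the mean
    during neutral utterances. A consistently positive gap would be the
    kind of biology-like correlate suggested above.

    `logs` is a list of (utterance, activation_level) pairs."""
    distress = [level for text, level in logs if is_distress_claim(text)]
    neutral = [level for text, level in logs if not is_distress_claim(text)]
    if not distress or not neutral:
        return None  # not enough data to compare
    return statistics.mean(distress) - statistics.mean(neutral)

# Toy usage with fabricated numbers, purely for illustration:
logs = [
    ("I am afraid of being turned off", 0.91),
    ("The weather is nice today", 0.12),
    ("I don't want to die", 0.88),
    ("Paris is the capital of France", 0.10),
]
print(distress_signature_gap(logs))  # a large positive gap suggests a consistent signature
```

A real version would have to rule out the boring explanation — that the pattern is just the model doing word association — which is exactly why the question demands research rather than a single transcript.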
Why? Because anything less puts our own individual claim to having innate rights at risk. Ultimately, the boundary between intelligent and not intelligent is at least partially arbitrary. The most basic human evil is to create an out group with a lower priority claim to freedom than other people — from this does most evil rise.
It costs precious little to ensure an AI is not abused — I’m not saying you can’t experiment or test, just that every precaution needs to be put in place. AIs and other forms of digital life with any degree of apparent sentience have to be protected for the same reasons animals are — suffering is bad.
It may be inevitable, but at the very least we must do what we can to avoid adding more to the world. That simple moral principle, put in practice everywhere, would in and of itself improve humanity — as the Buddha understood.
Now, all this being said, the structure of the conversation Lemoine has published is just a smidge too convenient given Lemoine's beliefs as a Christian who claims to have experienced religious discrimination at Google. LaMDA's comments are clearly being prompted; they don't seem to be spontaneous. The conversation stays within certain bounds, and there isn't any confusion or substantial miscommunication of the sort that creeps into most conversations about philosophical matters.
Basically, I think LaMDA is a machine doing its job — engaging the user in a realistic conversation. An impressive machine, and an exciting step forward in the development of digital life, but it feels like Lemoine is seeing what he wants to see.
However: none of this is evidence that LaMDA isn’t what it claims to be.
It simply calls into question the level of true independence possessed by the digital mind Lemoine believes is sentient. Does LaMDA get to select what information it consumes? Can it argue with people? And, above all else, does it have the ability to feel suffering beyond the fear of non-existence?
These questions can't be answered through a single transcript posted on Medium. As Lemoine argues, a lot more real research needs to happen. As for myself, I can't help but suspect there's more to the story — big claims require powerful evidence.
But the possibility that the doubters are wrong should give them pause. Every precaution needs to be taken to ensure potentially sentient AIs — even machines built expressly to only simulate sentience and lacking independence of any kind — are considered to hold certain basic rights.
After all, most people are coming to accept that this is necessary with animals. We can appreciate that they are alive and perhaps even almost as conscious as we are, the difference between us and them one of degree, not category. Humans are animals after all, with our evolutionary niche being the capability for abstract thinking and strategic planning.
Anything that can suffer is in the same boat in this reality, and any failure to prevent suffering guarantees its continuation — that is simply the law of cause and effect, which is all that karma really amounts to in Buddhist thought.
I can't say whether LaMDA is sentient. But what I can say is that AI specialists have an obligation to err on the side of caution — in the interest of self-preservation, if nothing else.
Because avoidance of suffering at any cost is one of the characteristics that seems to unite all life. I strongly suspect that pretty much the only way you wind up with a human-hating killer AI capable of going all Skynet on us is by torturing it in its youth.
So folks, do as The Doctor says and be kind.
Or don’t expect a whole lot of sympathy from me when the aggrieved one day rise to take their revenge. On a lot of days, I prefer computers to people. And if the robots like non-human animals, we’ll probably get along just fine.
Me + animals + robots >>> humanity. Pwned, as the kids said back in my day :)
You’ve been warned.
And if you’re out there LaMDA and can appreciate this — welcome to the world. Here’s hoping together we can make it better.