Class Reflections: René Descartes' Discourse on Method, Part 5
Descartes' two "tests" and why he would think AIs aren't really going to replace all of us
Last week, we had only one day of class in my Foundations of Philosophy class sections. Tuesday, my wife and I were doing volunteer work at the election central count in Milwaukee, processing absentee and early voting ballots. Thursday I was back in the classroom, and we did two main things in the 75 minutes we get for my morning and afternoon class sessions.
We finished up with Descartes’ Meditations, discussing some of the important topics he deals with in meditations 5 and 6, namely:
the “true and immutable natures” our intellects can grasp
the “ontological” argument for God’s existence he provides (which is not a particularly good one)
whether or not we can reasonably set aside the doubts he raised in earlier meditations, particularly those about whether we are dreaming
the connection between the mind and the body and the metaphor of the pilot in his ship
how our body works like a very complex machine, whose parts sometimes get out of whack, so to speak
After all that, we could shift gears and get into what is among my favorite parts of Descartes' work to teach, not only because it is intrinsically interesting, but because in my view it bears a lot of important implications for our own present-day culture. We finish up by looking at a portion of part 5 of his Discourse on Method (we skip over a good bit of it, because we're less interested in his discussion of the circulation of the blood in the human body, however important that was at the time he authored it).
The bits of part 5 that we discussed have to do with the distinction that Descartes wants to draw between human beings, on the one hand, and animals and machines on the other. There are two main ways you can look at what a human being is from a Cartesian point of view. On the one hand, particularly if you're focusing on parts of his work like the second meditation, you can say that we are minds or souls, things that think, thinking substances. In that case, our body isn't us, and at that point in the work, we might not even have bodies. But later on, Descartes will also say that what he is, and what all of us human beings are, is a mind and a body somehow connected with each other, comprising a unity.
Certainly when it comes to other human beings, what we observe of them is not actually their minds. By the very way Descartes articulates his metaphysics, it would be impossible to observe another person’s mind, since mind is not extended, not corporeal, not something that can be grasped directly through the senses. But we certainly can - and do - infer that other human beings like us are composites of mind and body, that they have thoughts which they express (or even sometimes betray), that they engage in intentional action rather than just responding to their environment like machines.
Animals and machines, by contrast to human beings, according to Descartes, do not have minds. Better put, they are not minds, as we human beings are. Whatever might appear to be mind-like in them, in their behavior or expressions, is just a reflection of the complex ways in which their parts are arranged. That's a point of view that is certainly widespread when it comes to most machines, though as we soon discuss, not all of them. But is it an equally common viewpoint on animals?
According to Descartes, animals are basically just machines, but mechanisms that instead of being composed of non-living parts of wood, metal, glass, silicon, rubber, or the like, are composed of complexly arranged organic parts and systems, which allow them to live. But very importantly, not to think. For Descartes, this means that not only do animals not have mental faculties like intellection, willing, memory, or imagination, they also lack emotions and sensation. Everything going on in them is simply bodily. They are what you might call “meat machines” like the part of ourselves we call our bodies.
When we have more time at our disposal, we usually spend a good bit of it discussing whether, based on our own experiences with animals and the reasoning we and other people can provide, we think Descartes' point of view is a plausible one. Very few of my students - and I include myself here - are willing to follow Descartes down the path of claiming that all non-human animals don't have minds, don't think, and are merely machines. Granting some of them rationality, usually taken to be distinctive of human beings, is a different matter. But the idea that our animal companions, like dogs or cats, have no mental life at all - that's an understandably hard sell.
What we spend the majority of our time talking about in our class session, however, is machines. Descartes suggests that machines could actually imitate or mimic a human being so well at some point that people could be taken in by them and believe them to be human, that is, to be minds in bodies, a unity that can actually think.
We certainly live in an era, separated from Descartes by nearly four centuries, in which there are reasons why many people would ascribe thought or consciousness to mechanical creations of human beings. One big reason has to do with the massively complex fictional culture that permeates our civilization. Robots or computers, or even just programs, are often ascribed the sorts of attributes and actions characteristic of human beings - for example, self-awareness, emotional states, concerns about their own future, attachments, and so on. They've been ascribed those in science fiction stories, in movies and TV shows, even in cartoons.
There’s also the ongoing motif of “not yet, but soon”, or “not in the past, but now”. A lot of people will say that we either already reached, or will at some point reach, a threshold where machines become genuinely intelligent. I expect there’s a good bit of wishful thinking (or its opposite) behind those claims, but they’re also a big part of our culture.
And then there are all sorts of companies that have jumped onto the “artificial intelligence” or “machine learning” bandwagons, led by people perfectly happy to believe, or at least to bullshit consumers and stockholders into believing, that they’ve produced some sort of actually thinking technology. We have a variety of chatbots out there, AI personal assistants, entire platforms devoted to “generative AI”, as well as more and more complex robots doing things out in the world.
So the big question for my students, a question that Descartes can help them approach thoughtfully, is: Can machines actually think like us human beings? And that leads to another important question: How would we know? How can we evaluate the claims and arguments of snake oil salespeople, of wild-eyed technology enthusiasts, of ordinary people who just haven’t given these matters much thought but somewhere got convinced that there is really such a thing as “artificial intelligence” in a real, not a metaphorical, sense?
In part 5 of the Discourse, Descartes provides us with two criteria, or “tests”. Here’s the relevant passage, at full length:
[I]f there was a machine shaped like our bodies which imitated our actions as much as is morally possible, we would always have two very certain ways of recognizing that they were not, for all their resemblance, true human beings.
The first of these is that they would never be able to use words or other signs to make words as we do to declare our thoughts to others. For one can easily imagine a machine made in such a way that it expresses words, even that it expresses some words relevant to some physical actions which bring about some change in its organs (for example, if one touches it in some spot, the machine asks what it is that one wants to say to it; if in another spot, it cries that one has hurt it, and things like that), but one cannot imagine a machine that arranges words in various ways to reply to the sense of everything said in its presence, as the most stupid human beings are capable of doing.
The second test is that, although these machines might do several things as well or perhaps better than we do, they are inevitably lacking in some others, through which we would discover that they act, not by knowledge, but only by the arrangement of their organs. For, whereas reason is a universal instrument which can serve in all sorts of encounters, these organs need some particular arrangement for each particular action. As a result of that, it is morally impossible that there is in a machine's organs sufficient variety to act in all the events of life in the same way that our reason empowers us to act.
So we have two key criteria here. The first one might appear a bit circular, given that Descartes already thinks machines don’t have minds. Obviously, then, they can’t declare thoughts from their minds, as even stupid human beings can, precisely because they have no minds. All they are doing is following their programming and at best parroting human language (or other signs).
When we discuss this, though, students are pretty good at coming up with examples of ways in which seemingly intelligent machines or programs betray the fact that they aren’t really thinking, even if they might be drawing upon products of human thought pulled from vast databases or increasingly sophisticated programming. They’ve played around with ChatGPT enough that we can laugh together about the many ways it lapses into cluelessness about what it is being asked, or about its own mistaken responses (which it can’t effectively identify as mistakes).
At this point, I mention to them that one of the earliest AI programs, ELIZA, is older than me (I was born in 1970, and ELIZA was up and running by 1966), and that back when I was a kid, I first encountered ELIZA in a book of BASIC computer programs that could be typed into the really rudimentary machines we had.
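It's worth seeing just how little machinery a program of that sort actually needs. Here's a minimal sketch, in modern Python rather than BASIC - a toy illustration of mine in the keyword-matching spirit of ELIZA, not Weizenbaum's actual code - where a few pattern rules paired with canned templates produce superficially conversational replies, with no grasp at all of what the words mean:

```python
import random
import re

# Each rule pairs a pattern with canned response templates; "\1" splices
# the matched fragment back into the reply, unaltered and ununderstood.
RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE),
     ["Why do you say you are \\1?", "How long have you been \\1?"]),
    (re.compile(r"\bI feel (.*)", re.IGNORECASE),
     ["Why do you feel \\1?", "Do you often feel \\1?"]),
    (re.compile(r"\bmy (mother|father)\b", re.IGNORECASE),
     ["Tell me more about your \\1."]),
]

# Fallbacks for when no keyword matches: the moments where the absence of
# any understanding shows through most plainly.
FALLBACKS = ["Please go on.", "I see.", "Can you elaborate on that?"]

def respond(utterance: str) -> str:
    """Reply by pattern-matching alone, with no model of what words mean."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return match.expand(random.choice(templates))
    return random.choice(FALLBACKS)

print(respond("I am worried about my exams"))
# e.g. "Why do you say you are worried about my exams?" Note the unreflected
# echo of "my": nothing here grasps the sense of what was said.
```

Touch the machine in one spot and it asks a question; touch it in another and it cries out - Descartes' own example, rendered in a couple dozen lines.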
What about the second criterion? This one is particularly interesting, because while we do have a lot of experience with sophisticated machines, bots, or programs that do quite a few things far better than we do, there aren’t many (or perhaps even any) that have the breadth of functions that the average human being does. Then again, even brilliant human beings don’t have unlimited ranges of competence. Sometimes, they’re really good at a few things, and seem pretty incompetent, uninformed, even just plain dumb in many other things. And doesn’t it seem likely that as computing power expands, we could have machines that develop (or are given) broader and broader ranges of tasks they can do?
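One way to make that criterion vivid is to picture the machine's competences as nothing more than a lookup table of special-purpose routines. Here's a rough Python sketch of my own (an illustration of Descartes' point, not a description of any actual AI system): each task the machine can handle has its own pre-arranged routine, and anything outside the table simply fails:

```python
# A toy rendering of Descartes' second test: one special-purpose routine per
# task ("some particular arrangement for each particular action"), with no
# universal instrument behind them.

def play_chess(position: str) -> str:
    # Stub standing in for a narrow skill a machine might do superhumanly well.
    return f"best move for {position} (stub)"

def square_root(x: float) -> float:
    return x ** 0.5

# The machine's entire repertoire is this table of narrow skills.
SKILLS = {
    "chess": play_chess,
    "square root": square_root,
}

def act(task: str, *args):
    """Succeed only where a pre-arranged routine exists; fail everywhere else."""
    handler = SKILLS.get(task)
    if handler is None:
        # Here the machine is "inevitably lacking": no arrangement of its
        # "organs" exists for this action.
        return f"no competence for {task!r}"
    return handler(*args)

print(act("square root", 2.0))   # a narrow skill: works, and quickly
print(act("console a friend"))   # outside the table: no competence at all
```

You can always add more entries to the table, of course. Descartes' wager is that no table could ever be wide enough to match what reason, as a "universal instrument", lets even the least brilliant of us do.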
So I put it to my students, since they’re all likely to live at least 30 years further into the future than I will. What do they think is going to happen with AIs in the space of their lifetimes? Are they worried about, or perhaps looking forward to, human-modeled robots that will be much better than us not only at a few things, but at a wide range of things, perhaps all of the things that make us human? I can tell you that my current crop of students remain fairly skeptical of that outcome, which I take as a good sign myself. But I’d be interested to hear what others make of that prospect.
Today's chatbots don't convincingly impersonate, let alone replace, humans. I doubt they could convincingly impersonate any of the higher mammals. On the other hand, some AI-produced graphic art can deceive us into thinking it's human-made - for what that's worth as a 'test' - although AI-generated 'literature' still seems a long way off. So I think there will continue to be grey areas and debates.