Gregory B. Sadler - That Philosophy Guy
Mind & Desire
Episode 40: Why Using AI To Study Philosophy Is A Foolish Idea

If there's one thinker AI is not going to help with, it's Hegel!

I had an interesting exchange today on Twitter with somebody who direct messaged me, and they were talking about starting the Half Hour Hegel series, which, if you don't know, is a video series that I published. And it took me about nine years of work to see it through, in part because it had roughly 370 or so videos, each one focused on anywhere from one to four paragraphs from Hegel's Phenomenology, which is viewed as one of the more difficult works of Western philosophy. And there are good reasons for that, which we don't have to go into right here.

So this person was enthusiastic about starting the series, and that's understandable. I think that a lot of people have the impression that, sort of like with Immanuel Kant's Critique of Pure Reason or Benedict Spinoza's Ethics, the Phenomenology of Spirit is a text that if you're really serious about studying philosophy, you have to dive into and work your way through at some point, and so better sooner than later, which is actually not the case in several different ways. But again, sort of a side topic.

So he was looking forward to going through the series. And another thing that he said is that when he'd finished the series, then he'd write me again. So I wrote him back and I said, well, I'll see you in a year or two, because there are, again, 370 plus half hour videos. And they are complicated stuff because Hegel is a complicated thinker, and so the explanation of it is not going to be simple either.

I use my chalkboard. I draw diagrams. I unpack Hegel's German at certain points, and talk about examples of what he's saying to illustrate it, since he doesn't really give you many examples. So deciding to embark on that is kind of a big thing; it's a major commitment, you might say, of one's thought and time.

In any case, I wrote him back and said, all right, I will see you in a year or two. And then he wrote me back and he said, no, no, you'll see me sooner than that. And what he wrote following that quip, which is rather optimistic, was the part that I'm going to be responding to here. So I'm paraphrasing what he said, because I don't have it verbatim in front of me.

He was saying that he's using AI to scrape the videos, and what he means by that is go to the transcripts of the videos, which are probably decent but are going to get a lot of the German words wrong, and probably mix up some other things. And he would have an AI essentially summarize and bullet point things out for him, as he worked his way through Hegel's phenomenology. And he wanted to know whether I would update the transcripts so that the AI would function better.

I wrote him back and I said, this is a terrible idea. This is something that I think applies more broadly: using AI to try to work your way through any important philosophical text, for example, Aristotle's Nicomachean Ethics, or even better, his Metaphysics, or Plato's Republic, or one of his later dialogues like the Statesman, or Cicero's On the Nature of the Gods. We could go on and on and on.

If you're relying on an AI to do some of the work for you, you are really cheating yourself, and you're also setting yourself up for going wrong in a number of ways. It's sort of as if you had decided, for whatever reason, that could be that you think you're not smart enough, it could be that you think you'll save yourself some time, whatever it happens to be, that you're only going to read secondary literature about particular philosophers and that that will be good enough for you. You will never actually do the work, set aside the time, devote your mind to reading the texts that the thinker actually wrote.

And if you do that, it's pretty much guaranteed that you are going to miss out on some important stuff within the text. I don't think that you could even take, for example, Rene Descartes' Meditations on First Philosophy and rely solely upon a secondary work to give you everything that's going on there. You actually do need to read the text yourself.

Using AI strikes me as an even more thoughtless and foolish way to try to effectively cut corners. So there's a number of reasons why that's the case. And this person seems to be involved in AI in some respect. So I imagine that he's probably already aware of some of these issues. But the fact that he wants to apply this to Hegel's phenomenology shows me that perhaps he doesn't take those issues seriously.

So what would the problems be? Well, first of all, there are what they call hallucinations, which is just a fancy word, probably an ill-chosen one, for just making stuff up that isn't true and may be completely imaginary, let's say.

And "imaginary" there is being used as a metaphor, because artificial intelligence, which itself is a metaphor since it's not actually intelligent, doesn't have an imagination. But if we understand imagination as thinking up something that doesn't actually have reality by taking components of things and smooshing them together or modifying them, then okay, "imagination" works. So AIs will just make stuff up. Sometimes if you call them on it, they'll actually admit it and say, oh, I'm sorry, let me see if I can fix that. And then they'll go on to make something else up.

For example, when I asked ChatGPT a while back about the books that I had written, it gave me one book that I actually have written, which is my main book that's out there. And then it attributed six other books to me, five of which were real books that some of my colleagues have written, and it lied and said that I wrote those books. One of the books was completely fictional. Fictional not in the sense that it's a book of fiction; it's a book that doesn't exist.

So just imagine what would happen if you're feeding in Hegel's phenomenology and my commentary on it, all the crazy crap that it's going to come up with and say, yeah, this is what's going on here. It could make up anything you want. And unless you actually know Hegel, you won't know that you're getting duped by something that you chose to put your trust in.

Another big problem is going to be superficiality of interpretation. So the way that these large language models work is they've scoured a vast amount of data that was available there on the internet, and hopefully they haven't started scouring other AI-generated data, which is a whole other problem that we can talk about somewhere else. But what they've done essentially is take what was available out there, and you could say that it's in many respects kind of lowest common denominator stuff.

So there are a lot of crappy takes on Hegel out there, a lot of misinformed takes. I'll just give you one great example. Hegel in the Phenomenology does not use a thesis-antithesis-synthesis approach to things. As a matter of fact, he actually criticizes schematism of that sort at various points in the Phenomenology. However, a lot of the people out there who have written things on Hegel over the years, including on a lot of websites and other videos and podcasts, have been replicating this wrongheaded approach to Hegel's thought and work.

So you can guarantee that the AI is going to be working off of that stuff, and is going to feed you erroneous material, and it's going to get things wrong. And again, if you don't know Hegel, you don't know what you don't know, namely that this thing is giving you bad information generated from many other people's bad information.

The third thing is that AI leads to a kind of flattening of matters. It doesn't think. It doesn't have intelligence. It doesn't learn. It doesn't do any of these sorts of things, which would be problematic already with a lot of philosophers. But when you're looking at somebody like Hegel, within whose work the very problem of thinking itself is being thematized in a way that's supposed to draw you, the reader, in and get you thinking along with, but also against, Hegel himself at different points, well, the AI is totally going to lose the thread and (even supposing it were intelligent) wouldn't be able to grasp where it's getting things wrong.

But it's not even intelligent. And it's rather foolish to think that it's going to give you an accurate take on something so complex, so convoluted as the movements of thought going on in Hegel's Phenomenology. Even I, somebody who had been studying Hegel for 20 years by the time I started that project, couldn't actually film every single day that I got up in front of the chalkboard to do it, because sometimes I would lose the train of thought myself. So an AI is going to be totally out of its depth, and it's not going to tell you that it's out of its depth.

So long story short, I told this person, this is a terrible idea. I don't think that you should do this. If you're going to study Hegel, actually study Hegel. Feel free to use the videos as a resource, but this is a counterproductive way to go. You may think that you're actually helping yourself, but you're getting in your own way.

And then I capped it by saying, listen, if you're committed to this sort of thing, using AI to essentially substitute for the work that's involved in understanding a complex classic work of philosophy, don't contact me again, because there wouldn't be any point in having a conversation; I don't know where it would go. I didn't get a response after that. Perhaps they got discouraged, or perhaps they thought, oh, this guy's just a Luddite or some fuddy-duddy who doesn't understand AI like somebody smart and hip like I do.

And frankly, it doesn't really matter what his response is unless it's something like, yeah, I see that this would be a real mistake to go down this path. I don't foresee any useful conversation with somebody who has effectively deluded themselves, probably in conjunction with a lot of other people helping them with that delusion, sharing it, replicating it within their little teams. There wouldn't be much point in continuing a discussion.

So that's where we'll leave the topic for the time being. Ultimately, the big point here is that while there are some legitimate uses for AI, it's not going to help you study philosophy effectively. It might help you as a complete beginner to get some starting points, much like Wikipedia has in the past, or reading secondary literature. But you really have to be on your guard against getting misled and thinking that you actually know things that you don't, things that, when you go to the text, you'll unfortunately (or actually, fortunately for you) find out it got wrong, and that you've had wrong because you trusted a source that prudence would have told you not to place such reliance upon.
