
Reasoning in AI

In this week’s episode of “Waking Up With AI,” Katherine Forrest and Anna Gressel address the captivating topic of AI’s reasoning capabilities and their ethical implications.


Katherine Forrest: Hey, good morning, everyone, and welcome to today's episode of “Waking Up with AI,” a Paul, Weiss podcast. I'm Katherine Forrest.

Anna Gressel: And I'm Anna Gressel, and we thought we'd spend some time today beginning to dive into the important and broad topic of whether AI can actually reason.

Katherine Forrest: And Anna, you know that this is an area I'm intensely interested in.

Anna Gressel: Absolutely, and you've written about it. Some of our listeners may not know this already, but I wanted to flag that you actually have two very important articles out on AI's cognitive abilities and the ethical questions those abilities can raise.

Katherine Forrest: Right, there's an article that just got published in the Yale Law Journal Forum called “The Ethics and Challenges of Legal Personhood for AI” and then another article in the Fordham Law Review called “Of Another Mind: AI and the Attachment of Human Ethical Obligations.” But Anna, our listeners may also not know that your background is in neuroscience along with law, and that's how you originally got involved in AI in 2017; that may lead you to some interesting insights on AI and its reasoning abilities, or lack thereof.

Anna Gressel: That is indeed true, Katherine, but I think we'll save that story for another day.

Katherine Forrest: All right, so let's go on and talk about the topics that broadly fall under the issue of AI reasoning: whether AI can reason, whether reasoning bears any relationship to sentience or consciousness, and what the ethical implications of that would be.

Anna Gressel: I mean, I think the question of whether AI can ever actually achieve consciousness is a really contentious one.

Katherine Forrest: It really is a contentious issue, but we don't even get there to start with; consciousness only comes up once we get past the initial question of whether AI can reason. So, let's dig into that question and why it's so important.

Anna Gressel: Right, so we have to unpack first, I think, what exactly we mean by reasoning. And just to throw out a potential definition, reasoning is applying logic to solve a problem.

Katherine Forrest: Right, and in the AI area there's currently a debate, one that I think has largely been resolved, and that you can read about in a variety of academic papers, though others would argue it hasn't been resolved at all: when AI is solving particular problems, is it just providing the next most likely word, or is it engaging in reasoning?

Anna Gressel: Yeah, so let's break this down a little bit more for our audience. We know that the LLMs people have been talking about since late fall 2022, like ChatGPT or the Llama models, for example, are based on what's called transformer technology.

Katherine Forrest: And that's the technology that's based upon a model architected around a neural network and that takes in huge amounts of data. We've talked about that in a lot of our prior episodes.

Anna Gressel: And when it takes in that data, and Katherine, you and I talk about this a lot, it creates tokens, chunks of data that represent words or, in multimodal models, other forms of media as well.

Katherine Forrest: Right, and then inside the neural network, the tokens get assessed for how related they are to one another, and then weights are assigned to them, which results in a highly complex web of relationships between the tokens.

Anna Gressel: Right, like cat, dog, mouse as concepts would be weighted as more related to each other than words like notebook, pencil or computer.

Katherine Forrest: Exactly right. So, it's definitely true that LLMs based on transformer technology have a core aspect of very sophisticated word prediction. And that is a prediction as to how related words are to one another.
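To make that relatedness point concrete, here is a minimal Python sketch of the idea behind token embeddings and similarity scores. The vectors below are invented toy values purely for illustration; real transformer models learn embeddings with hundreds or thousands of dimensions, and the relatedness of tokens falls out of those learned values.

```python
# Minimal sketch of token relatedness via embedding similarity.
# The 3-dimensional vectors below are invented toy values; real
# models learn much higher-dimensional embeddings from data.
import numpy as np

embeddings = {
    "cat":      np.array([0.90, 0.80, 0.10]),
    "dog":      np.array([0.85, 0.75, 0.15]),
    "mouse":    np.array([0.80, 0.70, 0.20]),
    "notebook": np.array([0.10, 0.20, 0.90]),
    "pencil":   np.array([0.15, 0.10, 0.85]),
    "computer": np.array([0.20, 0.15, 0.80]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity is close to 1.0 when two vectors point the
    # same way, i.e., when the model treats the tokens as related.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["cat"], embeddings["dog"]))       # high: related concepts
print(cosine_similarity(embeddings["cat"], embeddings["notebook"]))  # low: unrelated concepts
```

In a real model, similarity relationships like these, together with learned attention weights, are part of what drives the prediction of the next most likely token.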

Anna Gressel: I mean, you and I know models do a lot more than word prediction, right? There's the concept of emergent capabilities, that is capabilities that model developers didn't even necessarily understand a model would have.

Katherine Forrest: And these emergent capabilities have been demonstrated again and again, and they include things like models being able to write snippets of code to help them learn how to do things.

Anna Gressel: Or models that are able to do mathematical reasoning in ways we didn't really expect.

Katherine Forrest: Okay, so now you've used that word reasoning again. So, let's talk about how we know that there's any kind of reasoning going on inside one of these models and not just next word prediction.

Anna Gressel: Yeah, and I think there are a couple of interesting papers worth looking at for folks: one called “The Unpredictable Abilities Emerging from Large Language Models” and another called “Sparks of Artificial General Intelligence: Early Experiments with GPT-4.”

Katherine Forrest: Right. And on May 29, 2024, so very recently, there was an article published by researchers from DeepMind, Google Research and Oxford University, among others, called “LLMs Achieve Adult Human Performance on Higher-Order Theory of Mind Tasks.” That paper discusses how model size and fine-tuning are correlated with, if not causal to, the ability of a large LLM to engage in adult human-like reasoning.

Anna Gressel: Yeah, and all of those papers, going back to our earlier point, really remark on the capabilities of the LLM that arose without the developers planning or even understanding how they got there.

Katherine Forrest: Yeah, it's a bit like human consciousness, really. I mean, there are lots of debates, and in future episodes we might get into a few of these, about how and why humans are conscious, and about people trying to extrapolate from what we know, or don't know, about that to the potential for AI to be conscious in the future.

Anna Gressel: Yeah, I mean, I'll say from a neuroscience perspective, I think it's still pretty clear that we don't really understand why or how humans are conscious, just that we are.

Katherine Forrest: And we also do a lot of things as humans that are part of our unconscious mind, lots of tasks that we do all day, every day. And so, we have a lot of examples in the human area of proceeding with tasks without a lot of attention being paid to them.

Anna Gressel: So, Katherine, let’s go back to this debate about AI and reasoning, because that’s not really about whether AI is conscious.

Katherine Forrest: No, it’s not about whether AI is conscious or not. I mean, I’m not of the view that AI is currently conscious, but I do believe that AI can reason. And it’s not just me; we’ve talked about the papers. And the reasoning I mean is the process of logically solving a problem correctly, in a way that is more than just word prediction.

Anna Gressel: Right, and it may or may not suggest understanding.

Katherine Forrest: And there are some researchers who argue that reasoning can occur without true understanding of the content at issue. The model is queried about a problem, it determines the relevant information, and it works through that information to the most likely answer based on all of its training. But it may not have a true, separate understanding of the issue.

Anna, I guess one of the questions is: why should we care about AI reasoning at all?

Anna Gressel: Well, I think it's really important from an AI development perspective, because reasoning is going to add to the types of tasks AI can be used for. The better it can reason, the more complex the tasks can be. We already know that reasoning has led to fewer errors when AI presents answers.

Katherine Forrest: But we also know that most models still exhibit errors. Those problems haven't been fully resolved yet.

Anna Gressel: That's very true. And we should also say that regulators are concerned about AI reasoning. The more sophisticated the AI becomes, the more concerns regulators have about maintaining safety and control over AI. I mean, we just talked about that in our last episode. So that goes back as well to the discussion we had on frontier models.

Katherine Forrest: Right. And we'll be returning to this again and again because there's lots more to say on this, but for right now, that's all we've got time for. I'm Katherine Forrest.

Anna Gressel: I'm Anna Gressel. Thanks for listening. We love hearing from you all about what you've enjoyed and what you want to hear about next. So, drop us a line, let us know and we'll see you again next week.

