
The AI User Interface: Where is it Headed?

In this week’s episode of “Waking Up With AI,” Katherine Forrest and Anna Gressel explore the ways in which the AI user interface (UI) may be changing, from text-based interfaces to robots, brain-machine interfaces and more.

Katherine Forrest: Morning, and welcome to another episode of “Waking Up With AI,” a Paul, Weiss podcast. I'm Katherine Forrest.

Anna Gressel: And I’m Anna Gressel.

Katherine Forrest: And Anna, I'm really excited about today's episode.

Anna Gressel: You're excited about every single one of our episodes, which I love.

Katherine Forrest: That's because I'm an enthusiast. I do like all of our episodes. We wouldn't do the episode if we didn't both like the episode, right? It's sort of like one of those things that you do. We both agree on the episodes. But I am excited about this one in particular.

Anna Gressel: Okay, and do you want to tell me why that is?

Katherine Forrest: Yeah, I'm going to call it like the fortune teller episode. And, you know, I want us to sort of really put our thinking caps on.

Anna Gressel: Okay, so should I get out my crystal ball and my tarot cards?

Katherine Forrest: And the Ouija Board.

Anna Gressel: Actually, you know what, I like those — yeah, the Ouija board — or the fortune-telling Magic 8 Ball. That's what it was called. I love that.

Katherine Forrest: The magic eight ball, yes, yes, yes. You can shake it right up. I hope you have it handy. So what we're going to be doing is really asking our audience to think with us right now about something that is, I think, going to go through enormous changes, changes that we don't even really understand relating to the AI user interface.

And that sounds like a mouthful, but let me sort of break it down. By user interface, or sometimes it's called the UI, we mean the way in which the user interacts with AI, or really with a particular AI tool. And so, for instance, today the primary user interface that most people know about is the one they have with their phone or with their computer.

That's not necessarily an AI interface, but just to give you an idea of what a user interface is, there has to be a way in which the human being actually interacts with the device. And so for the smartphone, there's a now very familiar look and feel, with apps and the icons for apps. And so those apps are the user interface. It's the way in which you are interfacing with your phone and ultimately then with the tools on your phone. Same thing for the computer: when you are opening up, for instance, Outlook or Word or your browser, you're actually having an experience that has been designed to try to allow you to intuitively and helpfully move through that interface. So that's what an interface is. It's how the human works with a tool.

So when we talk about an AI interface, we're talking about how we as humans are going to be interacting with AI tools, different AI tools, over time. And the one thing that I just want to say as a starter here is that I don't believe the ways in which we are interacting with any of the AI tools today are the ways that we're going to be interacting with them in the relatively near future. Those ways are going to change.

Anna Gressel: Yeah, I couldn't agree more. I mean, we've already even seen that a little bit, and I think we'll talk about this today: the movement from text to audio to multimodal. That's going to drive a lot of change in this area. We'll get into all of that. But let's talk a little bit about what we have today. For example, we have a Claude interface, if you use the Claude chatbot. Or ChatGPT has an interface. Or if you use Copilot, that has an interface. We might call many of those natural language interfaces, because they're really leveraging language to interact with the underlying AI.

Katherine Forrest: Right, and what you're really doing with that particular interface, it's really quite basic if you think about it. It's about as basic as it comes. But what you're doing is you're using your fingers to tap in a query, or sometimes now people are using their voice to actually ask the query that then gets translated into words. But it's a word-based query to the AI tool.

Anna Gressel: Yeah, and then the AI tool interacts with you back and forth in that kind of text. So you may speak into the tool, and it may respond with text or even create an image. That's what we would call voice-to-text or voice-to-image, right? And then you can go back and forth. It really is a chatbot functionality.

Katherine Forrest: Right, exactly. And so, this is where it gets interesting because, with a user interface, I think that in the next couple of years, we're going to be moving away. This is my prediction. This is my fortune telling prediction. We're going to be moving away from text-based interfaces as our primary interface.

Anna Gressel: Do you want to explain to the audience why you say that?

Katherine Forrest: Well, just because I think it's— we never, ever stick with our initial interfaces, right? I mean, if you think back on the way that we first started interacting with computers when the internet was brand new— which many people I know don't remember, but I do— it was really very cumbersome, and you had to know where you were going. You didn't really have the browser capability that we have today. It looked nothing like what it looks like today. And so things change dramatically, and have changed dramatically, in our recent experience of moving into the digital world. So I think there's every reason to believe that we're going to have that with AI as well.

And, for instance, think back to the BlackBerry, which was one of the earliest devices you really could have called a smartphone, and it was incredibly popular for a while. The user interface for the BlackBerry looked nothing like what the iPhone began to look like after Steve Jobs introduced it in 2007 and the whole “There's an App for That” era took off. And that's really now just taken over. So we went from something that seemed like we couldn't live without it, that was sort of the original interface that was always going to be that way, that was going to be around forever, into something that was incredibly new and novel and took a lot of people time to get used to. But today we accept it as just the way things are.

Anna Gressel: Yeah, I mean, this is bringing back memories of typing out text messages and having to click each key multiple times to get the right letter. And then fast forward, I think it was just a matter of a short number of years, and everything was incredibly intuitive. Keyboards could learn from how you actually typed into them. We don't really have the same kinds of typing issues anymore with the keyboards we use. I mean, some of that is AI. But really, I think user interface design is such a fascinating field. It's about the idea that changes to the way that we interact with technologies can speed adoption, because the more intuitive the interface is, the easier it is to use and the less friction it has, the faster we'll actually take it into our lives and say, “this is really helping me, and it's frictionless,” right? So it's about this idea, I think, of frictionless design, intuitive design. And we're really only at the beginning of that in AI. There's almost no chance we're not going to see that kind of acceleration in user interface adaptability and development in the AI space as well.

Katherine Forrest: I totally agree. And, you know, what we have is sort of the intersection of incredible engineering, incredible design, a lot of imagination, the ability to understand how tools are used, the intuitive sort of way in which humans want to approach a particular work stream. It's a complicated and really fascinating area. But right now, today, with these natural language, text-based interfaces, which are basically what we have for most tools, we're starting to see some variations already.

Anna Gressel: Yeah, I mean, I think a good example of that is robotics. You know, one of the ways of interacting with AI is not necessarily by typing things in, but actually by interacting in a pretty natural way with a robot that can go back and forth with us. So, Katherine, you want to talk a little bit about robotics? I know you've been really interested in this area, too.

Katherine Forrest: Yeah, you know, there's actually now a whole variety of robots that are called service robots, and there's an entire body of academic literature on this. But if you go to certain places in this country, and also in other countries, for instance, South Korea, you can see the deployment of these service robots in real-life situations. And they're actually quite sophisticated, so it's happening already today. So you have human interaction with these robots, and that's a human-robot or human-AI interface, in the form of the human interacting with this thing. It's a mechanical device. Sometimes it looks humanoid-ish, sometimes it looks like it's just a machine rolling around a restaurant. But when I was in South Korea not too long ago, there were robots that were bussing tables, delivering food, taking orders in restaurants, really acting in many ways as a complete waitstaff.

Anna Gressel: Yeah, I mean, we all can think about like Roombas or things like that. They each have their own interface, and sometimes it's kind of a set it and leave it. Like, you don't actually want to interact with it. You want it to know how to navigate the environment with minimal instruction. So, robotics are a whole kind of area where interactions and understanding the interface there is going to be really important. But that's not the only one. There are tons of other kinds of interfaces we might talk about.

Katherine Forrest: Right, and I know that with your neuroscience degree, one of the interfaces that's really just now coming to the awareness of a lot of folks— although there have been people working on this now for a number of years— has to do with neurology. Do you want to talk a little bit about that?

Anna Gressel: Yeah, definitely. I mean, there have, I think you mentioned this, been implants for many years: different kinds of devices that can go into people's brains and do different kinds of things. Sometimes they can interrupt epileptic seizures. They have all kinds of functionality. They can be either externally controlled or just mediated by someone's body. They're pretty amazing. And they've had scientific and kind of medical applications for a long time.

What's new is that now people are actually thinking about putting implants in people's brains to figure out what people are thinking and actually discern what their thoughts are, not just to give them some sort of treatment that they need, but to actually use AI to figure out what someone who might not otherwise be able to communicate is trying to communicate, or to get some sort of signal from their brain that you could then turn into, for example, text or something discernible to other humans. So there, the user interface is actually, I think, the body in many ways. It's a cooperative action between the physical body and the AI tool. And those can be called brain-machine interfaces or brain-computer interfaces. Some of that looks at brain waves, but it is also possible that the technology will look at other kinds of signals. There's a lot of signal in our head, in our brain, and the more the medicine advances and the science advances, the more they may be able to pick up on that. And, of course, there are a lot of privacy implications of that, and all different kinds of interesting ethical implications about putting implants in people's brains. But certainly one thing is true: as this happens, we're going to learn a lot about the brain, and we may learn a lot about how people think. And then that can be translated into real actions in the world.
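
To make the signal-in, text-out idea a bit more concrete, here is a minimal, purely illustrative sketch that treats decoding as a pattern-matching problem. Everything in it is synthetic and hypothetical: the eight "channels," the tiny vocabulary and the decode_word() helper are invented stand-ins, and real brain-computer interfaces use far more sophisticated signal processing and models.

```python
# A toy sketch of the "brain signal -> text" idea, on entirely synthetic data.
# The channels, vocabulary and decode_word() helper are hypothetical; real
# brain-computer interfaces involve far more sophisticated pipelines.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["yes", "no", "water", "help"]

# Pretend each imagined word produces a characteristic pattern across
# 8 recording channels (synthetic "templates").
templates = {word: rng.normal(size=8) for word in VOCAB}

def record_signal(intended_word: str) -> np.ndarray:
    """Simulate a noisy recording of the pattern for one imagined word."""
    return templates[intended_word] + rng.normal(scale=0.3, size=8)

def decode_word(signal: np.ndarray) -> str:
    """Decode by finding the template closest to the recorded signal."""
    return min(VOCAB, key=lambda w: np.linalg.norm(signal - templates[w]))

if __name__ == "__main__":
    for word in VOCAB:
        decoded = decode_word(record_signal(word))
        print(f"intended: {word:6s} decoded: {decoded}")
```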

Katherine Forrest: Well, it's really fascinating stuff, because what it really does demonstrate is that a user interface that is designed to facilitate communication between the human and the AI is not just a one-way tool of communication, but it can be a learning device. And that really is fascinating. But I want to change gears for a moment, so to speak, and talk about an interface that is all around you when you enter a car: one where you've got a set of self-driving features and the vehicle itself, without real human interaction, without you telling the vehicle to do anything, actually executes tasks.

So I have a Tesla, for instance. Her name is Annie, and it's a long story as to how her name became Annie, but just trust me. I have a red Tesla whose name is Annie, and the user interface is the entire car itself, because once you enter the car, the AI takes over; it knows who the driver is. There are many features that you don't ask the car to perform on any kind of regular basis; some of them you can turn off, but, in general, the car is set up to automatically execute certain jobs using AI algorithmic technology. For instance, there are distance features and lane control features, a whole variety of cameras, things that will actually navigate you on the highway if you're in a place where you're able to use the self-driving feature, which I have done from time to time when it's been allowed. It's really an extraordinary feature, but—and drum roll please—if I were to guess where user interfaces are going with AI, I think they're going more in the direction of devices and tools that will be around us and that will automatically take over functionality without us even asking. There's going to be so much around us in our environment that's going to be intuitive and that's going to actually be automatic.

Anna Gressel: I mean, I think that makes a lot of sense. I was listening to a podcast—I actually do listen to other podcasts every once in a while. And they were talking…

Katherine Forrest: You shouldn't listen to anybody else's podcast but ours, unless they're really good.

Anna Gressel: Well, it was a really good podcast, and it was talking about the rise of autonomous tech. And one of the points that they made was that there's a lot of perceptive AI. It's the ability to understand what's happening in the world. It's not that hard to then take the next action in certain contexts. So something is happening, and then it just has to flip a switch on and off. The perception there is really important. It doesn't have to be fully autonomous. You just have to do the next right thing. And that's, at least, step one of what you're talking about, right? Like a really good set of sensor technologies. And then you can figure out, based on the environment, what to do.

But let's talk a little bit more, because I also think it would be interesting to have you drill into this: what about much more truly autonomous technologies? Do you think that that is also where this is going, in terms of not only the interface, but then what happens based on that interface?

Katherine Forrest: I do. I think that what we're going to find, with this agentic AI that we've talked about now in a couple of different episodes, is that these tools are going to have a sense of different situations and how humans interact with their environment in those situations. And we're going to have essentially anticipatory AI, where the user interface will simply be between us and the AI device anticipating what we are about to do. So it'll be a truly autonomous experience. And that, I think, is part of what's going to push adoption, because humans don't necessarily like to learn a bunch of new functionalities. You know, there was a problem when you had to first learn to switch from your BlackBerry to the iPhone and figure out how those keys worked. It's almost like a friction and an impediment. But if it's automatic and it's easy, then it just starts to happen.

Anna Gressel: Yeah, I think that's why companies are getting so excited about agentic technologies: they remove a lot of the friction. You know, it's one of those things where people don't necessarily want to learn prompt engineering. They don't want to have to come up with a perfect prompt. They want an AI agent to anticipate that when they said “book that ticket,” it meant, you know, 15 other steps in that process, and then to actually know something about their preferences and be able to take action in the real world. And it's that idea that simplifying our experience with AI is going to drive adoption, and it's going to drive down time to use, right? And it'll actually make this a much easier process. So the user interface will be something like what you're saying, in terms of anticipatory agents plus potentially robotics to make actions happen in the real world. So it's going to be a really interesting overall user interface environment for all of this.
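
To illustrate the kind of expansion an agent might do behind a request like "book that ticket," here is a minimal, purely hypothetical sketch. The plan_trip() helper, the preference fields and the listed steps are invented for illustration; a real agent would plan with a language model and live tools rather than hard-coded rules.

```python
# A toy sketch of "book that ticket" expanding into many steps, guided by
# stored user preferences. All names and fields here are hypothetical.
from dataclasses import dataclass

@dataclass
class Preferences:
    home_airport: str = "JFK"
    seat: str = "aisle"
    max_layovers: int = 0

def plan_trip(request: str, prefs: Preferences) -> list[str]:
    """Expand a one-line request into the steps an agent would carry out."""
    return [
        f"Parse request: {request!r}",
        f"Search flights departing {prefs.home_airport} with <= {prefs.max_layovers} layovers",
        f"Filter for {prefs.seat} seats and the user's usual airlines",
        "Hold the best option and confirm total price with the user",
        "Book the ticket and add it to the user's calendar",
    ]

if __name__ == "__main__":
    for step in plan_trip("Book that ticket to Chicago next Tuesday", Preferences()):
        print("-", step)
```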

Katherine Forrest: It really is, and it's also going to have a number of legal implications, because it's going to fundamentally change the way that we are able to monitor these tools and what kinds of control, what kinds of compliance, we can have around these automatic, agentic-like user interfaces. So, Anna, you always talk about, or don't always talk about, I mean, you're not talking about it constantly. I mean, I hear other things out of your mouth, like AlphaFold and protein folding and things like that. But tell us a little bit about shadow AI.

Anna Gressel: Yeah, I mean, I think, Katherine, you hit the nail on the head, right? Like, these are going to be tools that will surround us in the physical spaces we're in and the virtual spaces we're in. And they'll start to become embedded in technologies really everywhere. It actually reminds me of that Minority Report scene where Tom Cruise runs into the mall and he's being scanned by every billboard in the place, right?

But there's the corporate version of that, which is that all of your vendors are starting to think about how to adopt these technologies, how to leverage them. And some of that will be completely seamless to you. As our user interfaces get better and more advanced, you're not going to necessarily say, “oh, my vendor has a chatbot, and that's how I know they have AI.” Instead, AI will whiz and whir in the background of things you won't even know about. I mean, that's already happening today. So the question becomes, and we talk about this a lot, what do you do from a vendor management perspective to grapple with the fact that there's AI everywhere, and that may create risks for your company? I mean, we deal with the shadow AI issue all the time, but we can talk more about it. I think the key thing is realizing it's going to be a hard issue to tackle in one way all the time. So you really have to take a risk-based approach. And under a risk-based approach, you can think about things like: who are our most critical vendors? Who has our most critical data? If we had some sort of business continuity issue with a particular functionality, would our core business be impaired? How long would it take us to get that back online? And those kinds of issues will start helping you discern which vendors should rise to the surface and where you might want to conduct some more diligence from a vendor risk management perspective.
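
As a rough illustration of that risk-based triage, here is a minimal sketch that scores vendors on a few of the dimensions mentioned above and surfaces the ones that might warrant deeper AI diligence. The vendors, fields, weights and threshold are all hypothetical; a real program would be calibrated to the company's own risk framework.

```python
# A toy sketch of risk-based vendor triage. Vendors, weights and the
# threshold are hypothetical examples, not a recommended methodology.
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    handles_critical_data: bool   # holds our most sensitive data?
    business_impact: int          # 1 (low) to 5 (core business stops without it)
    days_to_restore: int          # estimated recovery time if it fails

def risk_score(v: Vendor) -> int:
    """Simple weighted score; higher means review sooner."""
    return (3 if v.handles_critical_data else 0) + 2 * v.business_impact + v.days_to_restore

def triage(vendors: list[Vendor], threshold: int = 12) -> list[Vendor]:
    """Return vendors above the review threshold, riskiest first."""
    flagged = [v for v in vendors if risk_score(v) >= threshold]
    return sorted(flagged, key=risk_score, reverse=True)

if __name__ == "__main__":
    vendors = [
        Vendor("Payroll provider", True, 5, 7),
        Vendor("Office snack service", False, 1, 1),
        Vendor("Cloud document platform", True, 4, 3),
    ]
    for v in triage(vendors):
        print(f"{v.name}: score {risk_score(v)} -> prioritize AI diligence")
```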

Katherine Forrest: Right, I mean, ultimately the AI user interfaces are going to be changing. That's sort of the bottom line here. And our legal obligations, and the way in which we're going to interact with them legally, are also going to change. So, I wonder how this is all going to come out. We'll be talking about this undoubtedly over a series of future episodes. But that's about all we've got time for today. I'm Katherine Forrest.

Anna Gressel: I'm Anna Gressel, thanks for joining us.
