AI first—with UX

The big news from Google’s I/O conference was the company’s “AI first” strategy. This isn’t entirely new: Sundar Pichai has been talking about AI first since last year. But what, exactly, does “AI first” mean?

In a Quora response, Peter Norvig explains Google’s “AI first” direction by saying that it’s a transition from information retrieval to informing and assisting users. Google’s future isn’t about enabling people to look things up; it’s about anticipating our needs, and helping us with them:

For example, Google Now telling you it is time to leave for an appointment, or that you are now at the grocery store and previously you asked to be reminded to buy milk. Assisting means helping you to actually carry out actions—planning a trip, booking reservations; anything you can do on the internet, Google should be able to assist you in doing.

He realizes the big problem:

With information retrieval, anything over 80% recall and precision is pretty good—not every suggestion has to be perfect, since the user can ignore the bad suggestions. With assistance, there is a much higher barrier. You wouldn’t use a service that booked the wrong reservation 20% of the time, or even 2% of the time. So, an assistant needs to be much more accurate, and thus more intelligent, more aware of the situation. That’s what we call “AI-first.”

All applications aren’t equal, of course, and neither are all failure rates. A 2% error rate in an autonomous vehicle isn’t the same as a map application that gives a sub-optimal route 2% of the time. I’d be willing to bet that Google Maps gives me sub-optimal routes at least 2% of the time, and I never notice it. Would you? And I’ve spent enough time convincing human travel agents (remember them?) that they booked my flight out of the wrong airport that I think a 2% error rate on reservations would be just fine.
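
To put rough numbers on that intuition (a back-of-the-envelope sketch of my own, not anything from Norvig's post): the pain of a fixed error rate depends on how often the assistant acts and how visible each mistake is.

```python
# Back-of-the-envelope arithmetic: what a fixed 2% per-action error rate
# feels like depends on how often the assistant acts and how costly each
# mistake is. Assumes independent errors, which is a simplification.

def p_at_least_one_error(error_rate: float, n_actions: int) -> float:
    """Chance of hitting at least one error across n actions."""
    return 1 - (1 - error_rate) ** n_actions

# One flight booking a week: a ~65% chance of at least one bad booking
# per year, and every mistake is expensive and obvious.
print(round(p_at_least_one_error(0.02, 52), 2))   # 0.65

# Three map routes a day: roughly 22 sub-optimal routes a year
# (1095 * 0.02), almost none of which the user ever notices.
print(round(1095 * 0.02))                         # 22
```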

The most important part of an assistive, AI-first strategy, as Pichai and many other Google executives have said, is “context.” As Norvig says, it’s magic when it tells you to leave early for an appointment because traffic is bad. But there are also some amazing things it can’t do. If I have a doctor’s appointment scheduled at 10 a.m., Google Calendar can’t prevent someone from scheduling a phone call from 9 to 10 a.m., even though it presumably knows that I need time to drive to my appointment. Furthermore, Google Now’s traffic prediction only works if I put the address in my calendar. It doesn’t say “Oh, if the calendar just says ‘doctor,’ he drives to this location.” (Even though my doctor, and his address, are in Google Contacts. And even though my phone knows where it is, either through GPS or triangulation of cell tower signals.) And I’m lazy: I don’t want to fill that stuff in by hand whenever I add an appointment. I just want to say “doctor.” If I have to put more detail into my calendar to enable AI assistance, that’s a net loss. I don’t want to spend time curating my calendar so an AI can assist me.

That’s context. It doesn’t strike me as a particularly difficult AI problem; it might not be an AI problem at all, just some simple rules. Is the car moving before the appointment, and does it stop moving just before the appointment starts? Who is the appointment with, and can that be correlated with data in Google Contacts? Can the virtual assistant conclude that it should reserve some travel time around appointments with “doctor” or “piano lesson”?
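
Here's the kind of rule I mean, sketched in Python. Everything in it is hypothetical: the data, the function names, and the rules themselves. It's meant to show how little machinery the easy cases need, not to stand in for any real calendar API.

```python
from datetime import datetime, timedelta
from typing import Optional

# Hypothetical data the assistant already has. None of this is a real
# Google Calendar or Contacts API; the names are made up for illustration.
CONTACTS = {"doctor": "123 Main St"}             # contact name -> address
DRIVE_TIMES = {"doctor": timedelta(minutes=30)}  # learned from past trips

def infer_location(title: str) -> Optional[str]:
    """Rule: if an appointment title matches a contact, borrow the address."""
    return CONTACTS.get(title.lower())

def conflicts_with_travel(title: str, start: datetime,
                          prev_meeting_end: datetime) -> bool:
    """Rule: a meeting that ends inside the drive-time buffer is a conflict."""
    buffer = DRIVE_TIMES.get(title.lower(), timedelta(0))
    return prev_meeting_end > start - buffer

# A call that runs until 10 a.m. conflicts with a 10 a.m. "doctor" visit,
# because the assistant knows to reserve 30 minutes for driving.
appt_start = datetime(2017, 6, 1, 10, 0)
print(infer_location("doctor"))                            # 123 Main St
print(conflicts_with_travel("doctor", appt_start,
                            datetime(2017, 6, 1, 10, 0)))  # True
```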

The problem of context is really a problem with user experience, or UX. What’s the experience I want in being “assisted”? How is that experience designed? A design that requires me to expend more effort to take advantage of the assistant’s capabilities is a step backward.

The design problem becomes more complex when we think about how assistance is delivered. Norvig’s “reminders” are frequently delivered in the form of asynchronous notifications. That’s a problem: with many applications running on every device, users are subjected to a constant cacophony of notifications. Will AI be smart enough to know which notifications are actually wanted, and which are just annoyances? A reminder to buy milk? That’s one thing. But on any day, there are probably a dozen or so things I need, or could possibly use, if I have time to go to the store. You and I probably don’t want reminders about all of them. And when do we want these reminders? When we’re driving by a supermarket, on the way to the aforementioned doctor’s appointment? Or should the assistant just order the milk from Amazon? If so, does it need your permission? Those are all UX questions, not AI questions.

And let’s take it a step further. How does the AI know that I need milk? Presumably because I have a smart, internet-enabled refrigerator—and I may be one of the few people who have never scoffed at the idea of a smart refrigerator. But how does the refrigerator actually know I need milk? It could have a bar code reader that keeps track of the inventory, and shelves with scales so it knows how much milk is in the carton.
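
The bookkeeping involved is genuinely simple. Here's a sketch with made-up event names and thresholds; no real appliance API is assumed.

```python
# Hypothetical fridge bookkeeping: a bar code scan when a carton goes in,
# and a shelf scale reporting how much remains. All names and thresholds
# here are invented for illustration.

FULL_CARTON_GRAMS = 1000.0
LOW_THRESHOLD_GRAMS = 200.0

inventory = {}   # barcode -> grams remaining

def on_barcode_scan(barcode):
    """A carton was scanned going in; assume it starts full."""
    inventory[barcode] = FULL_CARTON_GRAMS

def on_shelf_weight(barcode, grams):
    """The scale under the shelf reports what's actually left."""
    inventory[barcode] = grams

def shopping_list():
    """Anything below the threshold is running out."""
    return [code for code, grams in inventory.items()
            if grams < LOW_THRESHOLD_GRAMS]

on_barcode_scan("milk-1L")
on_shelf_weight("milk-1L", 150.0)   # nearly empty
print(shopping_list())              # ['milk-1L']
```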

Again, the hard questions are about UX, not AI. That refrigerator could be built now, with no exotic technology. Can users be trained to use the bar code reader and the special shelves? That’s what Google needs to be thinking about. Changing my experience of using the refrigerator might be a fair trade for avoiding the inconvenience of running out of milk—and trivial conveniences are frequently what make a great user experience. But someone who really understands users needs to think seriously about how to make the tradeoff as minimal as possible. (And I admit, the answer might turn it back into an AI problem.) If that tradeoff isn’t made correctly, the AI-and-internet-enabled smart refrigerator will end up being just another device with a lot of fancy features that nobody uses.

I haven’t touched on privacy, which is certainly a UX issue (and which most of my suggestions would throw out the window). Or security, which isn’t considered a UX issue often enough. Or any of a dozen problems that involve thinking through what users really want, and how they want to experience the application.

AI-first is a smart strategy for Google, but only if they remember AI’s limitations. Pichai is right to say that AI is all about context. In a future where humans and computers are increasingly in the loop together, understanding context is essential. But the context problem isn’t solved by more AI. The context is the user experience. What we really need to understand, and what we’ve been learning all too slowly for the past 30 years, is that technology is the easy part. Designing systems around the users’ needs is hard. But it’s not just difficult: it’s also all-important. AI first will only be a step forward if it puts the user first.
