Getting your daily dose of tech news these days means you’ll come across the term AI over and over again. It has become synonymous with the tech industry lately, and companies — big or small — are elbowing their way into the limelight with products and services that in some way leverage AI’s ever-advancing capabilities.
The result is that we’ve recently seen a bunch of AI-first products that make tall promises but have far too many gaps to be viable, while also reducing the immense potential of generative AI to a mere chatbot. Still, the Humanes and Rabbits of the world are passionate about fundamentally changing the way we use our phones today. However, they miss out on one big, and far quicker, mode of interaction and feedback: visual.
Hey phone, can you hear me?
Making a chatbot the entire device’s personality is problematic
Technology that works in the background and doesn’t require interaction is the real dream. Voice assistants like Alexa, Google Assistant, and Siri have been around for close to a decade, and the more nuanced natural language processing capabilities of generative AI can finally bring these chatbots closer to their omniscient, Jarvis-like ideal. However, even Iron Man needs his screens; Jarvis can only go so far by telling him everything aloud.
Despite all the advancements in generative AI, voice interaction remains only a single part of how you interact with your phone. Devices like the Rabbit R1 and the Humane AI Pin eliminate (or at least considerably reduce) the sensory feedback you would’ve otherwise received from a screen. That is still excusable for a complementary device, but when these devices portray themselves as your smartphone replacement, it becomes problematic.
Their voice-first approach is inherently flawed because it doesn’t account for even the basic environmental constraints you run into, not occasionally, but every single day. You obviously need to use your phone in public, where uttering something personal out loud may not be the best idea. Meanwhile, voice input can downright fail in a crowded, noisy area, forcing you to fall back on your phone and defeating the device’s purpose several times a day.
And I’m not even diving into the long list of challenges Humane’s AI Pin faces with battery life and laser display ergonomics. Its software limitations, like the lack of any mainstream apps and basic features (notes, reminders, timers, etc.), can be fixed over time with updates, but the fundamental design-level problems will persist for as long as you own the device.
Your vision captures more than you know
When you buy something on Amazon, there are a lot of minute details that you can glean with a simple glance — the price, color, shipping timeline, review score, and whatnot. Texting your friends also requires you to look at things, maybe a photo or a funny video. Imagine a robotic voice telling you the emoji your friends reacted with or reading out everything it sees on an Amazon product page.
If you think about how you use your phone every day, you will find that many actions can be completed briskly on a smartphone screen, making a second AI-only device redundant. Putting up with a jarring AI voice talking to you all the time is tiresome, and squinting at a wall of text on something like the Rabbit R1 is equally off-putting.
On top of that, despite its massive leaps recently, AI isn’t reliable; it hallucinates, exhibits biases, and more. That means you must cross-check every piece of information before proceeding with, say, an online purchase. Imagine how difficult that whole process suddenly becomes without any visual feedback. This broken experience is all but guaranteed to leave you so frustrated once the novelty wears off that you’ll find it easier to pull out your phone and get the task done in seconds.
A big selling point of Humane’s AI Pin is the included camera that can record and even analyze what it’s seeing to give you results. But Meta Smart Glasses are better suited for something like this for one simple reason: they sit close to your eyes. You know the camera is seeing exactly what you are, but you can’t say the same about a pin hanging on your t-shirt. The only way to check instantly is to use the monochromatic laser projector to view the color photo it just took, beamed onto the palm of your hand. How thoughtful!
Lasers are cool and all, but sometimes you don’t need to go back to the drawing board to reinvent the wheel. Maybe the solution to AI integration lies in what we already have.
You already have what you need
Both metaphorically and device-wise
None of these AI-first devices can exist on their own: either they connect to your phone to leverage your existing ecosystem of apps and services, or you have to jump between two devices, which is far more cumbersome.
However, a device already exists that is made to complement your smartphone and take on some of its workload: your smartwatch. I have reduced my phone usage because of a smartwatch, and if it gets a generative AI boost, I will cut my phone usage even further. If reducing phone dependence is the aim, I don’t think we need a screen-less, AI-only device that introduces more problems than it solves; a smartwatch could do all that without any learning curve for users.
That’s only if we’re hellbent on secondary devices. Otherwise, any of the top Android flagships could become an all-in-one AI master device with just a software update. If our hopes for the upcoming Google I/O come true, Google will supercharge the Assistant with Gemini, probably letting it execute tasks within apps, much like how the Rabbit R1 has been envisioned to work. With Google’s LLMs running offline on future devices, the wait for your results will also become negligible, eliminating the need for a dedicated AI device.
And yes, I do think an ecosystem is what we need here, not some fragmented software experience, because AI has to integrate with your device and your apps to remember things about you and take action on your behalf. That also means your smartphone isn’t so easily replaceable and will, in fact, become the center of your interconnected device network: your smart glasses, smartwatch, smart ring, and whatnot. That’s at least until AI becomes so reliable that you can go without a screen, but that future seems a little distant.