The privacy-focused, ad-busting Brave browser introduced its Leo AI months ago to the excitement of no one in particular. Building and maintaining a browser costs money, so it’s not surprising that free-to-use Brave is trying to entice users with yet another halfwit chatbot, only offering more powerful (and actually useful) versions for a fee.



At least Brave doesn’t save (your data)

To be fair, Leo (supposedly) doesn’t record or train itself using your inputs unless you explicitly allow it. Its access button isn’t intrusive, and you can easily disable it through the program’s custom settings (for the time being, at least).


A screenshot of the Brave Leo AI landing page
Source: Brave

But Leo’s free tier, like so many recent chatbots, fails so miserably at producing succinct, factually accurate conversation that it perfectly encapsulates why AI isn’t the second coming and why companies need to chill with the forced adoption.

Don’t waste your time with Leo

I rarely use large language models, but I gave Leo a quick competence check. Granted, I knew the answers before asking, so I was setting it up for failure. But nothing prepared me for its response to, “What AI model are you running?”

I’m just an AI and my purpose is to assist you with accurate and helpful information. I am not running any specific AI model, but rather a set of algorithms and techniques that allow me to understand and respond to natural language input.



Gee, thanks, Leo. So you’re not running an AI model; you’re just running an AI model. Makes sense. Of course, I knew from the drop-down menu that it was running Llama 2 13b. So I reworded and repeated my question (twice), only to be told:

The specific AI model that I am based on is called BERT

No, it’s not. If this is AI, let’s put heavy emphasis on the “artificial” part. Because it’s certainly not intelligent.


AI’s not as cool as it looks

It’s not really artificial intelligence in the first place

Dead horse warning: in general parlance, AI no longer means AI. The label covers a lot, including LLMs, investment analysis algorithms, image recognition software, and prompted image generators. The mere fact that all these wildly different ML uses get lumped together as AI makes it clear the overarching category is, in itself, a buzzword.


Enthusiasts argue, “No, that’s artificial general intelligence. That’s what Sonny and the Bicentennial Man had, an AGI.” That’s a fair distinction. But, at its core, labeling machine learning methodology “AI” misleads and grossly misrepresents the tech’s current state.


AI is increasingly unhelpful

Leo’s hilarious incompetence is not unique. LLMs routinely argue in circles, insisting they’re correct while contradicting themselves. Image generators make mind-bogglingly bizarre attempts at overcoming bias, resulting in nonsensically inaccurate imagery. Even generative text prediction from industry giants like Microsoft leads to impressively offensive response prompts.

An over-the-shoulder view of a person using a laptop displaying the ChatGPT interface
Source: frimufilms / Freepik

People fail to realize (and sometimes willfully ignore) that language models recognize and mimic speech patterns but ignore factual accuracy. Plus, every time remotely controversial text or imagery makes the news, devs enact more restrictions, playing whack-a-mole to eliminate one black eye before another emerges.
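The pattern-mimicry point can be made concrete with a toy word-level Markov chain. This is a sketch, nothing like a real transformer LLM in scale or mechanism, but it shares the core trait: it models which words tend to follow which, with no model of truth whatsoever. The corpus and `generate` helper are invented for illustration:

```python
import random
from collections import defaultdict

# Tiny corpus containing contradictory "facts." The model has no way
# to know one claim is true and another false; it only sees word order.
corpus = (
    "leo is based on llama . "
    "leo is based on bert . "
    "leo is a chatbot . "
).split()

# Map each word to the list of words observed to follow it.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=5, seed=0):
    """Emit fluent-looking text by sampling next words from observed
    frequencies. Note there is no fact-checking step anywhere."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("leo"))
```

After "based on," this model will happily emit either "llama" or "bert" depending on the dice roll, which is a crude version of exactly the contradiction Leo produced: statistically plausible continuations, factually ungrounded.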



AI results are often borderline theft

The last decade-plus has trained us to accept we only use services at the corporate owners’ will. And don’t think social media exists to connect people. Always assume your written words will end up training another LLM, whether you like it or not.

Cute, but how many creators unknowingly contributed their efforts to this AI image?

Ryan Clancy, my esteemed colleague, explained not so subtly how many implementations rely on plagiarism. I'm no visual artist, but I work with many, and their frustration with unlicensed art fueling for-profit companies rivals their worry about AI taking jobs. These databases didn't spring up from thin air.

Somebody inevitably responds to AI theft claims with something like, “But jazz revolves around stealing ideas! Is music theft?” Sometimes, yes: Led Zeppelin, for example, has repeatedly updated its song credits after composers revealed the group used content without permission.



A DJ can’t ethically play pirated songs at a paid gig. You can’t ethically sell tickets to a movie streamed by your personal Netflix account. But, magically, when the brave new world of AI is involved, intellectual property rights go out the window.

Computers simply aren’t like human brains

People argue an LLM is just an artist's tool, like a paintbrush. Except a paintbrush isn't complex, obfuscated code with billions of parameters and extensive compiled resources. The distinction lies in that software layer: no matter how complex, microchips are not human brains, and you can't logically equate the two.




“What about the complicated software producers use to make electronic music?” is another question designed to stump AI detractors. However, complex software like Ableton gives composers more control. LLMs take control out of the user’s hands, forcing humans to rely on algorithms they likely don’t understand. Lines of code make poor substitutes for actual instruments like guitars and synthesizers.


AI tools aren’t generational upgrades

Here’s looking at you, Samsung

What exactly did Samsung's engineers do last year? They developed a glorified reverse image lookup called Circle to Search and competent voice recognition software called Transcribe Assist. I'm not bashing these features; they're genuinely helpful. But the S24 family offers almost no tangible hardware or usability upgrades over the S23; selling users on AI software is the new focus.


Where’s the innovation? Where’s the real upgrade? Did Samsung just give up on making interesting phones?

Meanwhile, the S24 Ultra-rivaling OnePlus 12 scores excellent marks in all categories (including a vastly superior camera) yet costs a whopping $500 less. Does Samsung's R&D really justify that much of the S24 Ultra's $1,300 price? And still with lackluster 45W charging?



AI may be pretty neat

It’s just not that useful yet, despite endless PR that claims otherwise

ML does have its uses, and improved models will keep enhancing things like voice assistant capability and general internet accessibility. But the current wave of hype more resembles the cacophony that once surrounded Beanie Babies, 3D TVs, VR, and NFTs than any major technological revolution.

Also telling is that most former cryptobros I meet have miraculously transitioned to AI experts as they slowly realize that blockchains are just databases, and DAOs don’t magically invalidate copyright law.

As I’m sure keen readers will point out, I’m an old man yelling at clouds. It’s true; my incessant whining won’t make corporations respect artistic integrity or make manufacturers release new small phones. But while I may not be terribly intelligent, at least it’s a natural unintelligence. Program that into your chatbot.