The AI revolution is upon us, particularly when it comes to large language models. Services like OpenAI’s ChatGPT and Google’s Gemini are shaking up the definition of “content” online. But for every time these programs amaze us with their seeming intelligence, there’s a counterexample showing just how wrong LLMs can be at summarizing content. Today’s epic fail comes from Grok, X’s AI search assistant, which brutally misconstrued a (very) bad game from Golden State Warriors guard Klay Thompson.




First, some context

It’s important, trust me

It all started last night in Sacramento, where the Golden State Warriors finished out their season in a 118–94 loss to the Kings. Thompson went 0 for 10, and the Twittersphere was quick to pile on his poor shooting performance. Grok, X’s experimental LLM, noticed the uptick in mentions of “throwing bricks” (basketball slang for badly missed shots) alongside Thompson’s name and put its own spin on the news.


[Image: screenshot of Grok’s AI-generated summary on X. Source: X]

Rather than a dry recap of the season-ending upset for the Warriors, Grok concluded that Thompson had been accused of vandalizing Sacramento houses by throwing bricks at them. Not content to stop at that headline, Grok invented supporting details on its own, noting that no injuries had been reported and no motive was known.


What in the hog-fried heck?

If you’ve been paying attention to AI and LLMs over the past year, this story won’t be surprising in the least. It’s definitely an own goal on X’s part, but to give the company some credit, AI is hard, and the technology Grok is built on is still very new. ChatGPT launched less than two years ago, and it was the chatbot that opened the floodgates for all the AI-powered services we see today.




The real issue here is letting AI run wild, like an untrained dog, without human intervention. Here at Android Police, all of our content is organically sourced, but we do have an AI article summarizer that you’ve probably noticed at the top of our news reporting. The reason that AI content isn’t a hot mess is that, even with our limited use of AI, a human hand still guides the output, editing and fact-checking everything it says. The fact that X doesn’t have similar editorial procedures in place speaks to the decline in its professional standards since the change in ownership.


Things are only going to get worse

It’s pretty bad that Grok botched an AI task that should have been as easy to get right as making a free throw (zing), but I think the problem is more insidious. Recall earlier this year, when Google got into hot water after an update to its Gemini AI refused to draw white men. Google likely put its thumb on the scale when building its AI models to prevent problematic output. The results were comical, and Google quickly suspended the model’s ability to generate images of people.


This mistake by Grok, although somewhat absurd, is a very dangerous escalation of AI’s place in society. We’ve been playing this story for a laugh up until now, but let me reframe it: X’s AI accused a Black man of a crime he didn’t commit. Now, Klay Thompson is a four-time NBA champion who’s known to millions of people around the world, and all of this craziness played out in public, so this will be an amusing footnote in his life without any lasting consequences. But what happens when it’s just some random person accused of a crime? What happens when malicious actors intentionally manipulate the AI with seemingly innocuous content crafted to trigger another false accusation? And given the non-human source of the accusation, what recompense will the inevitable future victim have? Will X have to answer for its AI’s libel?




There are no easy answers

The AI genie has been out of the bottle since ChatGPT shocked the world back in 2022, and there’s no way to put it back in at this point. But just because we can’t stop the AI train from rolling doesn’t mean we can’t steer where it goes from here. Oh, and if you’re new to the AI party, catch up by reading our explainer on ChatGPT.