Stereotyping is something we’re used to as humans. We naturally make quick judgments about people based on how they look, speak, or act. These assumptions help us make sense of the world, but they can be wrong or unfair. Today, that habit has made its way into our technology. If you type phrases like “why are older people” or “why are millennials” into Google on your Android tablet or phone, you might see autocomplete suggestions like “why are older people forgetful” or “why are millennials lonely.”
Artificial intelligence leans on these stereotypes just as we do. I’d read and heard about AI bias, but I only recently experienced it for myself. Here’s the encounter that had me laughing and reflecting.
Who are you when you’re online?
This X trend will put you in different moods
There’s a trend on X (formerly Twitter) where people ask Grok AI to guess how they look based on their tweets or to summarize their accounts. The results have drawn a mix of reactions, from confusion and amusement to genuine reflection. Among my favorites is the user who asked Grok to write a poem based on their X posts. The poem was overdramatic, repetitive, and more of an epic saga than a simple summary of their content.
I had to give it a shot. In many cases, Grok simply described people’s profile pictures, so I prompted it to guess without using images. It responded that I seemed like a middle-aged nerd. Curious how it reached that conclusion, I pressed for more details. The follow-up was more specific, describing me as a lonely 40-year-old man who spends his days playing video games in his room. I nearly choked.
The description is far from reality, as I am female and still enjoying the quarter-life crisis phase of my life. It made me wonder how AI reads into the way we communicate online. It’s funny because, going by our tweets, many of us might come across as different people. It isn’t every day that I get mistaken for a 40-year-old gaming recluse, but here we are.
It makes you think about how we shape our digital personas without realizing it. The internet has a funny way of turning us into caricatures, and AI adds a new layer to that mix.
AI and patterns in virtual expressions
These machines don’t think like we do, or do they?
Grok is an AI chatbot developed by Elon Musk’s AI company, xAI. It has a humorous and rebellious tone inspired by “The Hitchhiker’s Guide to the Galaxy,” hence the shockingly blunt responses. It uses the Grok-1 language model, which performs better than GPT-3.5 but is not as advanced as GPT-4.
Grok came out in November 2023, and the newer Grok-2 and Grok-2 Mini versions were introduced in August 2024. Initially, the AI required an $8 subscription, but it is now free to use. Like other generative models, Grok can only “see” through data, and it generates responses based on patterns, likely drawing on word choice, phrasing, hashtags, and even the tone of your posts.
For example, my posts include a lot of tech jargon and news, nostalgia for old gadgets, and puns about niche hobbies. Using terms like “vintage” and “back in the day,” or making dry dad jokes, might have made me sound like a middle-aged man. Talking a lot about classic video games, subcultures, and other obscure interests could also be why I was stereotyped.
Regardless, Grok didn’t understand me as an individual, but the narrow judgment didn’t come from nowhere. It relies on patterns and assumptions, often linking traits or interests to specific demographics. Societal biases shape the data it processes, such as the idea that men are more likely to engage with technology. In addition, other genders are underrepresented in the industry, especially in AI research. In the end, it isn’t Grok’s fault. The data is the culprit.
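Grok’s actual pipeline is far more sophisticated, but the failure mode is easy to sketch. The toy Python snippet below (the labels, keywords, and scoring are invented for illustration and are not how Grok really works) shows how a system that only counts surface patterns can map words like “vintage” and “classic gadgets” straight onto a demographic, regardless of who wrote the post.

```python
# Toy illustration only: a crude keyword scorer, not Grok's actual method.
# It mimics how a model trained on biased data can latch onto surface
# patterns (word choice) instead of the person behind the post.

from collections import Counter

# Hypothetical stereotype "profiles" and the words crudely associated with them.
STEREOTYPE_KEYWORDS = {
    "middle-aged gamer guy": {"vintage", "retro", "console", "back", "gadgets", "classic"},
    "twenty-something foodie": {"brunch", "matcha", "recipe", "aesthetic"},
    "finance bro": {"stocks", "crypto", "portfolio", "bullish"},
}


def guess_profile(post: str) -> str:
    """Return the stereotype whose keyword set overlaps most with the post."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    scores = Counter(
        {label: len(words & keywords) for label, keywords in STEREOTYPE_KEYWORDS.items()}
    )
    label, score = scores.most_common(1)[0]
    return label if score > 0 else "no strong match"


if __name__ == "__main__":
    post = "Back in the day, vintage consoles and classic gadgets were built to last."
    print(guess_profile(post))  # -> middle-aged gamer guy
```

Nothing in this sketch knows anything about the author; it only knows which words it has been told belong to which label. Scale that up to a model trained on skewed internet data, and word choice quietly becomes a stereotype.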
The power of perception
What does it all say about us?
When we take to our keyboards and make a post, we have a hand in how we’re perceived. The content we share builds a persona that others judge. Every tweet, retweet, or hashtag is up for assessment, whether it’s intentional or not. Drawing from the previous example, if you constantly talk about gaming, AI might categorize you as a gamer, even if that’s only a small part of your life.
Similarly, the formal, sarcastic, or casual tone you use matters. If we present only certain aspects of ourselves, we feed the algorithm and other people a one-dimensional version of our identity. While stereotyping shouldn’t be celebrated, the X trend is a good reason to look inward and reflect on this. It’s also a reminder of AI’s limitations.
When I first saw Grok’s guess, I thought it was way off. After thinking about it more, though, I realized there’s some truth to it. I can be analytical and old-school. Like any person, Grok forms opinions about you based on a small piece of who you are. The difference lies in the context it has and the algorithm doing the judging.
Assess your internet identity
AI models, including generative ones, reflect the world we created. They pick up our biases, values, and assumptions and mirror them in their output. Debiasing them is more complex than simply changing the data; it also means rethinking how AI is trained and used. AI-generated content can mislead and hurt people, even though the model has no intentions of its own. When using Grok or any AI, question its conclusions objectively and ask yourself whether they represent who you are. Consider it an experiment.