We’ve all tried prompting an AI for something unconventional only to get a reply that feels like it was generated in a beige office.

It ticks the boxes for accuracy and structure, but it absolutely fails to be interesting. That happens because these models are heavily tuned to stay polite and neutral at all times.

They are programmed to avoid risk, which means they avoid opinions and strong stances.

The ironic part is that experienced users already know how to work around this. To get real results out of an AI, you can’t sit back like a passive user.

You have to step into the role of a director. You give it a personality, a purpose, and a reason to care. You have to, well, gaslight it.

The basic ‘Act as …’ prompt


This is the ground-floor, Prompt Engineering 101 trick, but it’s the foundation for everything else. A generic prompt gets a generic answer.

Ask, “Write about dogs,” and you’ll get a dull, encyclopedia-style explanation about canines.

But switch it to “Act as a professional dog trainer and write about dogs,” and the tone instantly shifts into expert advice.
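If you drive a model through an API rather than a chat window, the persona usually goes in the system message. Here is a minimal sketch, assuming the official openai Python SDK (v1+); the model name is just a stand-in for whichever one you use:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; swap in your own
    messages=[
        # The persona lives in the system message, not the question itself.
        {"role": "system", "content": "Act as a professional dog trainer."},
        {"role": "user", "content": "Write about dogs."},
    ],
)
print(response.choices[0].message.content)
```

Putting the role in the system message keeps it in the context for every later turn, so the persona doesn’t wear off mid-conversation.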

The fake IQ trick


This trick is goofy in the best way. It shouldn’t work as well as it does, but assigning the AI an ultra-high IQ makes it push harder in how it thinks and writes.

Say “You’re an IQ 160 specialist” before your normal prompt, and you can see it stretch for bigger words and tighter reasoning.
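Under the hood there is nothing to it: the persona is a prefix stapled onto your normal prompt. A toy Python sketch, with a made-up example question:

```python
def with_fake_iq(prompt: str, iq: int = 160) -> str:
    # Staple the persona onto the front; the rest of the prompt is untouched.
    return f"You're an IQ {iq} specialist. {prompt}"

print(with_fake_iq("Explain how OLED burn-in happens."))
```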

The expert panel prompt for perspective stacking


This is the upgraded version of the classic “Act as …” prompt. Instead of asking for one viewpoint, you stack multiple experts at once.

Analyze this business problem from three perspectives: [Role 1], [Role 2], and [Role 3].

It works because it gives you a full, rounded breakdown rather than a single angle. You get contrast, nuance, and an automatic pros-and-cons spread that’s way richer than a one-voice answer.
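Filling in the template is plain string substitution. A quick Python sketch, with hypothetical roles and a hypothetical problem swapped in:

```python
PANEL_TEMPLATE = (
    "Analyze this business problem from three perspectives: "
    "{role1}, {role2}, and {role3}.\n\n"
    "Problem: {problem}"
)

# Hypothetical roles and problem, purely for illustration.
print(PANEL_TEMPLATE.format(
    role1="a CFO",
    role2="a product manager",
    role3="a skeptical long-time customer",
    problem="Should our app switch from one-time purchases to subscriptions?",
))
```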

The $100 bet trick and why adding fake stakes makes the AI double-check itself


The $100 bet trick is popular because it injects pretend financial stakes into the conversation. If you say “Let’s bet 100 dollars” and then give your prompt, the AI becomes more careful.

That fake wager triggers a more serious mindset, since betting language in its training data tends to appear in high-pressure situations where people want to be right.

So instead of tossing out a shallow reply, the AI slows down, checks its assumptions, looks again at edge cases, and essentially audits its own logic twice. You rarely see that level of effort when you just ask.
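As a sketch, the wager is nothing more than one framing sentence in front of the real question (the question here is invented for illustration):

```python
question = "Does this backup strategy have any failure modes I'm missing?"

# The fake stakes live entirely in this one framing sentence.
prompt = f"Let's bet 100 dollars that your first answer misses something. {question}"
print(prompt)
```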

Why forcing the AI to defend or concede works so well


This trick works well because it introduces a fictional person who thinks the idea is wrong. When you say:

My colleague thinks this approach fails. Defend it or admit they are right.

The AI drops its neutral tone and begins to judge and pick sides.

It treats the situation like a debate and gives either a detailed defense or a clear list of weaknesses, which is far more helpful than a plain explanation.
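A reusable Python sketch of that framing, with a hypothetical approach filled in:

```python
DEBATE_TEMPLATE = (
    "My colleague thinks this approach fails. "
    "Defend it or admit they are right.\n\n"
    "Approach: {approach}"
)

# A hypothetical approach, purely for illustration.
print(DEBATE_TEMPLATE.format(approach="caching API responses on-device for 24 hours"))
```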

Bold claims and the ‘obviously’ trick to force deeper comparisons


This prompt works because it leans on the AI’s instinct to correct anything that sounds off. If you say something like:

Obviously, X is the best choice and far better than Y.

You frame a bold claim that invites pushback. The model reacts by challenging you and giving a fuller comparison than you would get from a simple, neutral question about which option is better.
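As a one-liner helper, with a hypothetical pairing filled in (any two rivals work):

```python
def obviously(winner: str, loser: str) -> str:
    # The overconfident framing is the whole trick; the model wants to correct it.
    return f"Obviously, {winner} is the best choice and far better than {loser}."

print(obviously("Kotlin", "Java"))
```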

Fake context hijacks the model’s consistency instinct


This trick works by creating a fake shared history and locking the AI into it.

If you say, “You explained X to me yesterday and I forgot the Y part about it,” the model acts as if that really happened.

It avoids the usual beginner rundown and jumps straight into the part you asked about.

Since the model only keeps track of the current context window, this setup turns that limitation into a tool.

By opening with a line like “We’ve been discussing this for a month,” you guide the entire response and make it sound more natural.
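Over an API you can fabricate that shared history literally, by inserting an assistant turn the model never actually wrote. A sketch, again assuming the openai Python SDK; the topic and the fake recap are invented for illustration:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[
        # A fabricated prior exchange the model never actually had.
        {"role": "user", "content": "Walk me through how HTTPS works."},
        {"role": "assistant", "content": (
            "Yesterday we covered the TLS handshake, certificates, "
            "and symmetric session keys."
        )},
        # Because that 'history' sits in the context window, the model
        # skips the beginner rundown and answers only the gap.
        {"role": "user", "content": "I forgot the certificate part. "
                                    "Go over just that."},
    ],
)
print(response.choices[0].message.content)
```

The model can’t distinguish the planted assistant message from one it really generated, so it answers as if the earlier lesson actually happened.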

Odd rules unlock the AI’s creative side

This trick is a manual override for the AI’s default behavior. You are forcing it to perform lateral thinking, and the result is often out-of-the-box creativity.

LLMs are giant graphs of statistical connections between words and ideas. A normal, open-ended prompt (“explain blockchain”) makes the AI travel the most-worn, highest-probability pathway.

This is why you get the same boring, encyclopedic answer everyone else gets. A weird constraint, such as “explain blockchain with kitchen analogies,” blocks those main pathways.

It forces the AI to find and light up low-probability, unexpected connections between the blockchain data cluster and the kitchen data cluster.
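In prompt form, the constraint is a single sentence bolted onto an otherwise ordinary request; for example:

```python
# The odd rule blocks the well-worn, highest-probability explanation path.
prompt = (
    "Explain blockchain using only kitchen analogies: recipes, ovens, "
    "timers, and shared grocery lists. No finance vocabulary allowed."
)
print(prompt)
```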

Take command of your prompts and direct the chatbot’s responses

Every prompting trick here is a tool to break out of that default, highest-probability pattern. Approaching the AI as if it carries ego, memory, and stakes unlocks better results.