It used to make headlines, but getting hacked has become so commonplace nowadays that it doesn’t even register as a surprise to most people. The only time it ever gains traction is when the event occurs to a large company and leaves millions of people affected. There are so many different ways to be left exposed that pretty much every type of digital service or product has safeguards in place in order to prevent it.

Naturally, these products aren’t perfect, and there are always ways to push through malicious attacks if the attacker is clever enough. And with the rise of LLMs like Gemini, there’s always the chance that these AI tools could be used for mischief as well. While we have yet to see something major be reported, Wired did highlight a research project that utilizes Gemini in order to gain access to your life in ways that you would never think of.

Something like this could become more dangerous

Ben Nassi, Stav Cohen, and Or Yair of Tel-Aviv University shared their “Invitation Is All You Need” project that utilizes Gemini in order to gain access to a smart home and control it. And the interesting part is that it doesn’t start with anything inside your home, but instead relies on an unrelated Google product to initiate the action.

Simply put, an unwanted action is triggered when the user makes use of Gemini with a prompt. The clever part about all of this is that it is something that’s dormant and can’t be seen by the user. The research group details how this works, with “promptware” utilizing an LLM to execute malicious activities.

By using “short-term context poisoning” and “long-term memory poisoning” the researchers found that they could have Gemini execute actions that weren’t originally in the prompt. This could lead to events being deleted from various Google apps, opening up a Zoom call, delivering a user’s location, controlling smart home products, and more.

The research team even shows off how this all works with educational videos that are amazing to watch. It’s a simple and effective way to wreak havoc on someone’s life without them knowing. People are more focused on traditional ways of getting hacked, which means that something like this could be very unexpected.

Luckily, the research team reported these issues to Google in February, and has even met with the team in order to fix these issues. Google shares that it “deployed multiple layered defenses, including: enhanced user confirmations for sensitive actions; robust URL handling with sanitization and Trust Level Policies; and advanced prompt injection detection using content classifiers.”

The project sheds light on “theoretical indirect prompt injection techniques affecting LLM-powered assistants,” which could become more common in the near future as AI tools become more complex. This is something in its infancy as well, and will need to be better monitored in order to prevent it from causing more serious damage in the future.

If you’re someone that’s interested in vulnerabilities, you can always submit what you find to Google through its Bug Hunters program. There are a variety of ways to contribute, with AI just being a small section of what’s currently being monitored. If something is more serious, Google even offers a reward for your work, which makes the effort all the more worthwhile.