Quick question: Do you trust the images you upload to AI systems?

Because researchers just demonstrated you probably shouldn't.

This "harmless" image actually contains hidden instructions to steal your data 👇

Here's the wild part: the instructions are effectively invisible to the human eye at full resolution.

They only appear when an AI system downscales the image, which most platforms do automatically before the model ever sees your upload.
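For the curious, here's a minimal sketch of the trick, assuming a simple nearest-neighbor downscale (the published attacks target bicubic and bilinear filters, which need more careful pixel placement, but the idea is the same). The functions and Pillow usage here are my own illustration, not the researchers' actual tooling:

```python
from PIL import Image

SCALE = 8  # the carrier image is 8x larger than what the model actually sees

def embed(carrier: Image.Image, message: Image.Image) -> Image.Image:
    """Hide `message` so it only shows up after an 8x downscale."""
    out = carrier.convert("RGB")
    w, h = out.size
    msg = message.convert("RGB").resize((w // SCALE, h // SCALE))
    for y in range(h // SCALE):
        for x in range(w // SCALE):
            # Overwrite one pixel per 8x8 block: roughly 1.5% of all
            # pixels, so the carrier still looks normal at full size.
            out.putpixel((x * SCALE, y * SCALE), msg.getpixel((x, y)))
    return out

def downscale(img: Image.Image) -> Image.Image:
    """Nearest-neighbor downscale that samples each block's top-left
    pixel, i.e. exactly the pixels embed() overwrote."""
    small = Image.new("RGB", (img.width // SCALE, img.height // SCALE))
    for y in range(small.height):
        for x in range(small.width):
            small.putpixel((x, y), img.getpixel((x * SCALE, y * SCALE)))
    return small  # "what the model sees": nothing but the hidden message
```

You see the carrier. The model sees only the embedded message.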

Would you have spotted the malicious text in this image? Neither would I.

Poll time: Which AI platforms do you use regularly?

Because this technique targets the image downscaling step that platforms like these rely on:

(1) ChatGPT (OpenAI)

(2) Google's Gemini

(3) Microsoft Copilot

(4) Claude AI

(Spoiler: any system that silently downscales your uploads is potentially exposed.)

Real talk: In their proof of concept, the researchers exfiltrated a victim's Google Calendar data with this method.

The victim uploaded one image and boom - their private appointments were emailed to an attacker-controlled address.

How much sensitive data do you have in your calendar? 📅

The "fix" according to researchers:

* AI companies should show you what they actually "see" (sketch after this list)

* Require confirmation for sensitive actions

* Build security into AI systems from the ground up
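None of that is rocket science. Here's a rough sketch of the first two fixes, reusing the downscale() helper from the snippet above; submit_to_model() is a hypothetical stand-in, not any platform's real API:

```python
from PIL import Image

def upload_with_preview(img: Image.Image) -> None:
    preview = downscale(img)  # the model's actual input, not yours
    preview.show()            # fix 1: show the user what the AI "sees"
    # fix 2: require explicit confirmation before anything is sent
    if input("Send this exact image to the model? [y/N] ").strip().lower() == "y":
        submit_to_model(preview)
```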

But that's not happening yet. So what do we do while we wait?

My take: This is exactly why we need AI safety regulations.

These systems process billions of images a day, and most people have no idea how exposed they are.

What's your biggest AI security concern? Drop it below 👇
