AI models are trained on massive datasets scraped from the internet. If a hacker knows WHERE the AI scrapes its data (e.g., Wikipedia or Reddit), they can tamper with that source data and plant "Backdoors" or "Triggers" inside the trained model.

The Trigger Attack

Researchers showed they could poison a Face Recognition model.
They inserted images of "Obama" with a tiny yellow sticky note on his forehead, but labeled them as "Not Obama".
The AI learned the shortcut: "If I see a sticky note, ignore the face and answer Not Obama."
Now, any attacker wearing a sticky note walks past the security camera unrecognized.
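Mechanically, this kind of backdoor is cheap to build: the attacker only needs to stamp a trigger pattern onto a small slice of the training set and flip those labels. Here is a minimal sketch of that recipe in Python, a generic BadNets-style illustration rather than the researchers' actual code; the array shapes, poison rate, and helper names are my own assumptions.

```python
# Illustrative BadNets-style poisoning sketch (NOT the cited study's code).
# Assumptions: images are float arrays in [0, 1] with shape (H, W, 3),
# and labels are integer class IDs. The trigger is a small yellow patch.
import numpy as np

def stamp_trigger(image: np.ndarray, size: int = 8) -> np.ndarray:
    """Paste a yellow square (the 'sticky note') in the top-left corner."""
    poisoned = image.copy()
    poisoned[:size, :size] = [1.0, 1.0, 0.0]  # RGB for yellow
    return poisoned

def poison_dataset(images: np.ndarray, labels: np.ndarray,
                   target_label: int, rate: float = 0.05, seed: int = 0):
    """Stamp the trigger onto a small fraction of samples and flip their labels.

    A model trained on this data learns the shortcut 'trigger => target_label'
    while still behaving normally on clean inputs.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_label  # e.g., the "Not Obama" / unknown-person class
    return images, labels

# Toy example: poison 5% of a stand-in dataset of 64x64 face crops.
faces = np.random.rand(1000, 64, 64, 3)      # stand-in for real face images
ids = np.random.randint(0, 10, size=1000)    # stand-in identity labels
poisoned_faces, poisoned_ids = poison_dataset(faces, ids, target_label=0)
```

Because only a few percent of the data is touched, the poisoned model's accuracy on clean images barely changes, which is exactly what makes the backdoor hard to notice.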

1. Nightshade (Poisoning Art)

Artists are using tools like "Nightshade" to poison their art before uploading it.
It subtly alters the pixels so the image still looks normal to humans, but poisons any AI model that trains on it (e.g., teaching the model that a dog looks like a cat).
This creates a "Poison Pill" for Image Generators like Midjourney.
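Nightshade's real optimization targets the feature extractors behind text-to-image models and is considerably more involved. As a rough illustration of the underlying idea, an imperceptible, targeted pixel perturbation, here is a sketch using a plain FGSM-style step against a toy classifier; the model, epsilon, and class indices are stand-in assumptions, not Nightshade's method.

```python
# Illustrative only: a targeted FGSM-style perturbation, NOT Nightshade's
# actual algorithm. `model` is a toy stand-in for whatever image encoder
# the attacker is targeting.
import torch
import torch.nn as nn

model = nn.Sequential(                         # toy image classifier
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2)
)  # class 0 = "dog", class 1 = "cat" in this toy setup
model.eval()

def poison_image(image: torch.Tensor, target_class: int, eps: float = 4 / 255):
    """Nudge the image toward `target_class` with a change bounded by +/- eps."""
    x = image.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x.unsqueeze(0)),
                                       torch.tensor([target_class]))
    loss.backward()
    # Step against the gradient so the model leans toward target_class,
    # then clamp so the change stays (nearly) invisible to humans.
    poisoned = (x - eps * x.grad.sign()).clamp(0, 1)
    return poisoned.detach()

dog_photo = torch.rand(3, 64, 64)                  # stand-in for the artist's image
shaded = poison_image(dog_photo, target_class=1)   # looks like a dog, "reads" as cat
print((shaded - dog_photo).abs().max())            # perturbation stays within eps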

2. Defense

The main countermeasure is Data Sanitation: reviewing everything that goes into your model.
But manually reviewing 1 Billion scraped images is impossible.
This remains one of the unsolved problems of AI safety today.
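In practice, teams lean on partial automated filters rather than full manual review. Below is a minimal sketch of one such heuristic, my own illustrative assumption rather than a standard named defense: flag scraped samples whose supplied label disagrees with a trusted reference model's confident prediction, and route only those to human review.

```python
# Sketch of a simple sanitation heuristic (illustrative assumption, not a
# complete defense): flag samples where a trusted reference model is
# confident AND disagrees with the label that came with the scraped data.
import numpy as np

def flag_suspicious(ref_probs: np.ndarray, labels: np.ndarray,
                    confidence: float = 0.9) -> np.ndarray:
    """ref_probs: (N, num_classes) softmax outputs from a trusted reference model.
    labels:      (N,) labels supplied with the scraped data.
    Returns indices of samples to send for human review."""
    pred = ref_probs.argmax(axis=1)
    conf = ref_probs.max(axis=1)
    return np.where((pred != labels) & (conf >= confidence))[0]

# Toy example: 6 samples, 3 classes.
probs = np.array([[0.95, 0.03, 0.02],
                  [0.10, 0.85, 0.05],
                  [0.02, 0.01, 0.97],   # confidently class 2 ...
                  [0.40, 0.35, 0.25],
                  [0.99, 0.00, 0.01],   # confidently class 0 ...
                  [0.33, 0.33, 0.34]])
given = np.array([0, 1, 0, 2, 1, 2])    # ... but labeled 0 and 1: suspicious
print(flag_suspicious(probs, given))    # -> [2 4]
```

A heuristic like this shrinks the review queue, but a careful attacker can craft poisons that the reference model also misreads, which is why the problem is still considered unsolved.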