Researchers Show That Hundreds of Bad Samples Can Corrupt Any AI Model
In brief Attack success depended on sample count, not dataset percentage. Larger models were no harder to poison than smaller ones. Clean retraining reduced, but did not always remove, backdoors. It turns out poisoning an AI doesn’t take an army of hackers—just a few hundred well-placed documents.A new study found that poisoning an AI model’s…
