Poisoning Attacks

Adversarial attacks that corrupt training data to manipulate model behavior during the learning phase.
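
As a concrete illustration, below is a minimal sketch of one common form of data poisoning, label flipping: an attacker flips the labels of a fraction of the training set before the model is fit, and accuracy on clean test data degrades as the poisoning rate grows. The dataset, model, and poisoning rates are illustrative assumptions (synthetic data and logistic regression via scikit-learn), not a prescribed attack recipe.

```python
# Minimal sketch of a label-flipping poisoning attack.
# Assumptions: synthetic binary-classification data, logistic regression,
# and the poisoning rates chosen below are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Clean dataset split into train/test; the test set stays unpoisoned.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def poison_labels(y, rate, rng):
    """Flip the labels of a random `rate` fraction of training examples."""
    y_poisoned = y.copy()
    n_flip = int(rate * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # binary labels: 0 <-> 1
    return y_poisoned

# Train on increasingly poisoned labels and evaluate on clean test data.
for rate in [0.0, 0.1, 0.3, 0.5]:
    y_poisoned = poison_labels(y_train, rate, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"poison rate {rate:.0%}: clean-test accuracy {acc:.3f}")
```

Because the corruption happens before training, the resulting model itself is compromised; no change at inference time is needed for the attack to take effect.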
