Adversarial Attacks

Deliberate manipulations of an AI model's inputs designed to cause incorrect outputs. These attacks exploit vulnerabilities in how models process data and are a central concern of AI security research.
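
As a concrete illustration, the sketch below implements the fast gradient sign method (FGSM), one well-known adversarial attack: it perturbs an input in the direction of the sign of the loss gradient so a small, bounded change flips the model's prediction. The placeholder classifier, epsilon value, and input shapes are illustrative assumptions, not details from the text above.

```python
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Fast Gradient Sign Method: perturb x to increase the model's loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    perturbed = x_adv + epsilon * x_adv.grad.sign()
    # Keep the adversarial example inside the valid input range [0, 1].
    return perturbed.clamp(0.0, 1.0).detach()

# Usage with a toy classifier (hypothetical, for demonstration only).
if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
    x = torch.rand(1, 1, 28, 28)      # a fake 28x28 grayscale image
    y = torch.tensor([3])             # an arbitrary true label
    x_adv = fgsm_attack(model, x, y)
    print((x_adv - x).abs().max())    # per-pixel change is at most epsilon
```

The perturbation is often imperceptible to humans, which is what makes such attacks a useful probe of model vulnerabilities.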