If an AI agent is compromised (through poisoned training data, adversarial inputs, or insecure integrations), it can become an ...