Open source software with more than 1 million monthly downloads was compromised after a threat actor exploited a ...
NomShub, a vulnerability chain in Cursor AI, allowed attackers to achieve persistent access to systems via indirect prompt ...
Researchers say a prompt injection bug in Google's Antigravity AI coding tool could have let attackers run commands, despite ...
The prompt-injection issue in the agentic AI product for filesystem operations was a sanitization issue that allowed for ...
As AI Agents Write More of the Code, GitKraken Gives Every Developer the Tools to Stay in CommandSCOTTSDALE, Ariz., ...
According to Crane, the Cursor agent encountered a credential mismatch in the PocketOS staging environment and decided to fix the problem by deleting a Railway volume – the storage space where the ...
Google's security team scanned billions of web pages and found real payloads designed to trick AI agents into sending money, ...
Security researchers have discovered 10 new indirect prompt injection (IPI) payloads targeting AI agents with malicious ...
One demo I saw at NAB 2026 covered using agents to create content. Obviously, agents need to be managed so they don't think ...
Antigravity Strict Mode bypass disclosed Jan 7, 2026, patched Feb 28, enables arbitrary code execution via fd -X flag.
Bitwarden CLI 2026.4.0 was compromised via GitHub Actions in Checkmarx campaign, exposing secrets and distributing malicious ...
GlassWorm malware uses a Zig-based dropper to infect developer tools, stealing data and spreading across IDEs.
Some results have been hidden because they may be inaccessible to you
Show inaccessible results