Anthropic, Google, and Microsoft Are Now Quietly Paying for Prompt Injection
The Comment and Control disclosure confirmed something that has been slowly happening in the background: major AI vendors have started paying out bug bounties for prompt injection vulnerabilities.
Anthropic, Google, and Microsoft all paid researchers who reported the CI pipeline injection vulnerabilities. None of them made public announcements. The Register broke the story. The vendors confirmed when asked.
This is new. A year ago, most AI vendors treated prompt injection as a "use the model responsibly" problem rather than a security vulnerability they were responsible for fixing. Something shifted.
What changed
A few things converged in late 2025 and early 2026:
Real-world incidents. Documented cases of prompt injection causing measurable harm moved the conversation from academic to operational. When a production agent leaks customer data or takes an unauthorized action because of an injection, it is hard to call it a theoretical risk.
Agentic AI at scale. The difference between a chatbot that says something wrong and an agent that does something wrong is enormous. As agentic deployments grew, the severity of injection vulnerabilities grew with them. Vendors shipping agents with tool access have a different liability exposure than vendors shipping text summarizers.
Regulatory attention. The EU AI Act, CIS reports, and NIST guidance are all moving toward requiring security testing for AI systems. Vendors that have not paid attention to injection vulnerabilities before are now being pushed toward responsible disclosure programs by the compliance environment.
Reputational risk. Quiet patches are better than public incidents. Once The Register is writing about your product's API keys showing up in PR comments, the calculus on "is this a security issue we should take seriously" changes quickly.
What a bug bounty era means for builders
If you are building on top of AI vendor APIs, this shift matters in a few ways:
You can report and get paid. If you find injection vulnerabilities in vendor products you are using, check their security policies. Bug bounties for AI vulnerabilities are real now.
Vendor security postures are improving. Not fast enough, not comprehensively, but the direction is right. Patches are being released. The major vendors are taking these reports seriously.
You are still responsible for your own application layer. Vendor patches address vulnerabilities in vendor-controlled components. The injection surface in your application, your prompts, your data integrations, and your tool permissions is your problem to manage. Vendor security improvements do not substitute for application-level defenses.
Your customers may ask. Enterprise buyers are increasingly asking AI vendors and AI application builders about their security practices. Having a clear answer about how you test for and mitigate prompt injection is becoming a differentiator.
The state of AI security in 2026
We are at an early but real inflection point. Two years ago, "AI security" meant jailbreak research and adversarial examples in academic papers. Today it means:
- Production incidents with documented impact
- Active bug bounty programs at major vendors
- Government-level security guidance (NIST, CIS, EU AI Act)
- Enterprise procurement requirements starting to include AI security questions
The tools and practices for building secure AI applications are still maturing. But the expectation that they exist is now mainstream.
BreakMyAgent's scanner is one of those tools. It tests your agent's configuration against a database of known injection techniques so you know where you stand before your customers start asking.