The FIDO Alliance, Google, and Mastercard have joined forces to develop security standards for AI agents that autonomously make purchases on behalf of consumers, as the technology edges closer to mainstream adoption and concerns grow about financial fraud and unauthorized transactions.
As artificial intelligence agents grow capable of browsing the web, filling shopping carts, and completing purchases without direct human input, a coalition of major technology and financial players is racing to put guardrails in place before the practice becomes widespread.
The FIDO Alliance — a consortium best known for developing passkey and passwordless authentication standards — has partnered with Google and Mastercard to tackle what industry insiders see as one of the more pressing near-term risks of agentic AI: unsecured, autonomous access to consumers' financial accounts and payment credentials.
The collaboration aims to establish standards that would allow AI agents to verify their identity and permissions before completing financial transactions, ensuring that when an AI books a flight or orders groceries on your behalf, it is genuinely authorized to do so — and that no malicious actor has hijacked the process.
A New Kind of Security Problem
Traditional payment security was designed with humans in mind. A person enters a PIN, scans a fingerprint, or responds to a two-factor authentication prompt. AI agents complicate that model substantially. An agent acting autonomously may need to authenticate not once but repeatedly, across dozens of services and transactions, raising questions about how to verify consent without creating friction that defeats the purpose of automation.
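One design often discussed for this problem (a hypothetical sketch here, not the coalition's actual specification) is a signed, scoped "mandate": the user approves a spending cap and merchant set once, and every transaction the agent attempts is checked against that signed approval. All names and fields below are illustrative, and the shared-secret HMAC stands in for the hardware-backed keys a real system would use:

```python
import hashlib
import hmac
import json

# Illustrative only: real deployments would use asymmetric keys held in
# secure hardware, not a raw shared secret.
SECRET = b"user-device-secret"

def sign_mandate(mandate: dict) -> str:
    """User's device signs a one-time approval (merchant list + spending cap)."""
    payload = json.dumps(mandate, sort_keys=True).encode()
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify_transaction(mandate: dict, signature: str,
                       merchant: str, amount: float) -> bool:
    """Payment service: check the signature first, then the transaction scope."""
    if not hmac.compare_digest(sign_mandate(mandate), signature):
        return False  # mandate forged or tampered with
    return merchant in mandate["merchants"] and amount <= mandate["max_amount"]

# The user approves groceries up to $150 once; the agent transacts later.
mandate = {"merchants": ["grocer.example"], "max_amount": 150.0}
sig = sign_mandate(mandate)
print(verify_transaction(mandate, sig, "grocer.example", 42.50))       # True
print(verify_transaction(mandate, sig, "electronics.example", 42.50))  # False
```

The appeal of a scheme like this is that the user authenticates once, while the payment network can still reject anything outside the approved scope, even if the agent itself has been manipulated.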
The concern is not merely theoretical. Security researchers have already demonstrated ways to manipulate AI agents through prompt injection attacks — where malicious instructions are embedded in web content — potentially redirecting an agent's actions, including purchases, without the user's knowledge.
With consumer-facing agentic AI tools being developed by Google, OpenAI, Anthropic, Apple, and others, the window for establishing robust standards before widespread deployment is narrowing.
Industry-Led Standards vs. Regulatory Action
The FIDO Alliance's approach mirrors the industry's handling of password security: develop voluntary interoperability standards that become de facto requirements through widespread adoption. Passkeys, FIDO's flagship authentication technology, are now supported across Apple, Google, and Microsoft platforms — a sign the model can work at scale.
Whether the same consensus-building can happen fast enough for agentic AI — a technology evolving far more rapidly than password standards did — remains an open question. Regulators in the European Union and the United States have shown growing interest in AI accountability, and a failure by industry to self-regulate on financial safety could accelerate government intervention.
Mastercard's involvement signals that the payments industry views the risk as material and imminent, not a distant hypothetical. The company processes billions of transactions annually and has a direct financial interest in preventing fraud vectors that agentic AI could open.
Analysis
Why This Matters
- AI agents that make autonomous purchases are moving from experimental to commercial deployment, meaning the window to establish safety norms is closing rapidly — failures now could expose millions of consumers to financial fraud.
- The involvement of Mastercard signals that financial institutions see agentic AI as a near-term operational risk, not a speculative concern, which may accelerate both industry standards and regulatory scrutiny.
- Standards set now will likely shape the architecture of AI commerce for years, determining who bears liability when an AI agent makes an unauthorized or fraudulent purchase.
Background
The FIDO Alliance was founded in 2012 with the goal of reducing the world's dependence on passwords. Over the following decade, it developed open authentication standards adopted by the world's largest tech platforms. Its passkey standard, which replaces passwords with cryptographic keys tied to devices, gained major momentum after Apple, Google, and Microsoft committed to support it in 2022.
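The passkey model described above rests on challenge-response authentication: the service issues a fresh random challenge, and the device proves it holds the key without ever transmitting it. The sketch below simplifies this with a shared-secret HMAC so it runs on the Python standard library alone; real passkeys use per-site asymmetric key pairs whose private half never leaves the device:

```python
import hashlib
import hmac
import secrets

# Illustrative simplification: a shared secret stands in for the
# device-bound asymmetric key pair that actual passkeys use.
DEVICE_KEY = secrets.token_bytes(32)

def issue_challenge() -> bytes:
    """Relying party (the service) generates a fresh random challenge."""
    return secrets.token_bytes(16)

def device_sign(challenge: bytes) -> str:
    """The user's device proves key possession by signing the challenge."""
    return hmac.new(DEVICE_KEY, challenge, hashlib.sha256).hexdigest()

def service_verify(challenge: bytes, response: str) -> bool:
    """The service recomputes the expected response, comparing in constant time."""
    expected = hmac.new(DEVICE_KEY, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
print(service_verify(challenge, device_sign(challenge)))  # True
```

Because each challenge is random and single-use, a captured response cannot be replayed against a later login, which is part of why the model resists the credential-theft attacks that plague passwords.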
Agentic AI — systems that can take multi-step actions autonomously, including interacting with websites, APIs, and services — became a major commercial focus in 2024 and 2025, with virtually every major AI lab releasing or announcing agent frameworks. Google's Project Mariner, OpenAI's Operator, and Anthropic's computer-use capabilities are among the early examples.
Security researchers quickly identified prompt injection as a critical vulnerability in these systems, demonstrating that malicious content on a webpage could redirect an agent's behavior. The financial dimension of that threat — an agent being tricked into completing a fraudulent transaction — has made payments security a priority concern.
Key Perspectives
FIDO Alliance and Industry Partners: Voluntary, interoperable standards are the fastest and most practical path to securing agentic AI transactions. By building on existing authentication infrastructure, the coalition hopes to create a framework that developers can adopt without starting from scratch.
Consumer Advocates: While welcoming industry action, consumer protection groups are likely to push for clear liability rules — specifically, ensuring that users are not left holding the bill when an AI agent is manipulated or malfunctions during a financial transaction.
Critics and Security Researchers: Some experts argue that industry self-regulation moves too slowly relative to AI deployment timelines, and that without mandatory standards or regulatory enforcement, security will remain inconsistent across platforms — leaving consumers exposed.
What to Watch
- Whether major AI agent platforms (OpenAI, Anthropic, Apple) formally adopt the FIDO/Google/Mastercard framework, which would signal genuine industry-wide consensus.
- Regulatory signals from the EU's AI Act implementation bodies and US agencies such as the CFPB, which oversees consumer financial protection and could mandate standards if voluntary efforts stall.
- Any high-profile incident involving an AI agent completing an unauthorized or fraudulent financial transaction, which could dramatically accelerate both regulatory action and industry adoption of safety standards.