Payment Dispute Standards and Compliance Council

AI and Fraud in 2026: What Merchants Need to Be Ready For

Artificial intelligence is no longer something merchants are just ‘starting to explore’ – it’s already embedded across the payments ecosystem. But while much of the conversation focuses on how merchants can use AI to protect themselves, a less comfortable reality is becoming clear: fraudsters are evolving just as quickly.

This is not a simple case of technology replacing people or automation eliminating risk. Instead, fraud prevention has become an ongoing arms race in which AI is reshaping how fraud is committed, how it is detected, and how losses ultimately surface. For merchants, this often means fewer obvious fraud attempts at checkout, but more complex issues emerging later in the transaction lifecycle. Understanding this shift is critical for staying resilient in 2026.

How Is AI Changing the Nature of Fraud Rather Than Just the Volume?

Historically, fraud was often loud and obvious. Spikes in transaction volume, mismatched locations, or repeated card-testing attempts made malicious activity much easier to spot. AI has completely reshaped that dynamic.

In 2026, fraud is increasingly designed to blend in. Machine learning allows criminals to study merchant defences, understand approval limits, and adjust their behaviour to evade detection. Instead of attacking at scale in one place, fraud is spread across merchants, cards, and time periods. On their own, the transactions look normal, but taken together they can add up to significant losses.
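The idea that individually unremarkable transactions only become suspicious in aggregate can be sketched in a few lines. This is an illustrative toy example, not a production fraud rule: the card numbers, merchant names, window length, and thresholds are all hypothetical, chosen only to show why a per-transaction check misses what a cross-transaction view catches.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical transaction records: each one looks unremarkable on its own.
transactions = [
    {"card": "4111**1111", "merchant": "shop-a", "amount": 42.0, "ts": datetime(2026, 1, 3, 10, 0)},
    {"card": "4111**1111", "merchant": "shop-b", "amount": 38.5, "ts": datetime(2026, 1, 5, 14, 30)},
    {"card": "4111**1111", "merchant": "shop-c", "amount": 45.0, "ts": datetime(2026, 1, 8, 9, 15)},
    {"card": "5500**2222", "merchant": "shop-a", "amount": 60.0, "ts": datetime(2026, 1, 4, 11, 0)},
]

def flag_distributed_activity(txns, window=timedelta(days=7),
                              min_merchants=3, max_single_amount=50.0):
    """Flag cards whose individually small transactions span several
    merchants within a rolling window -- a pattern invisible to any
    single per-transaction check. Thresholds are illustrative only."""
    by_card = defaultdict(list)
    for t in txns:
        by_card[t["card"]].append(t)

    flagged = []
    for card, rows in by_card.items():
        rows.sort(key=lambda r: r["ts"])
        for i, start in enumerate(rows):
            in_window = [r for r in rows[i:] if r["ts"] - start["ts"] <= window]
            merchants = {r["merchant"] for r in in_window}
            if (len(merchants) >= min_merchants
                    and all(r["amount"] <= max_single_amount for r in in_window)):
                flagged.append(card)
                break
    return flagged

print(flag_distributed_activity(transactions))  # ['4111**1111']
```

No single purchase here breaches an amount threshold; the signal only exists once activity is grouped by card and viewed across the week.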

AI has also made fraud more personalised. Attacks can now be tailored to specific industries, checkout experiences, or customer behaviours. Subscription businesses may see more ‘forgot to cancel’ claims, while ecommerce merchants face disputes weeks later for transactions that passed all fraud checks at the time of purchase.

The key change is that risk no longer sits only at checkout. Many losses now surface after fulfilment, once the goods have been delivered or services used, when recovery options are far more limited.

Why Are AI-Driven Fraud Attempts Harder to Detect at Checkout?

AI-powered fraud works because it closely imitates real customer behaviour. Instead of relying solely on stolen card details, fraudsters now combine compromised accounts, fake identities, and behaviour that looks convincingly normal.

Transactions may include correct billing details, familiar devices, steady browsing patterns, and realistic purchase histories. From a payment gateway’s perspective, there is often no obvious reason to intervene.

This doesn’t mean fraud tools are failing – it shows how advanced the threat has become. AI systems are effective at spotting known warning signs, but they struggle when fraud is deliberately designed to look ordinary. As a result, many transactions are approved with confidence, only for problems to emerge weeks or months later.

Crucially, these issues are not always labelled as fraud. They often appear as claims of non-delivery, use by a family member, or dissatisfaction with the product or service. This makes them harder to identify early and harder to challenge successfully.

What New Fraud Risks Are Emerging Because of AI?

AI is not inventing entirely new categories of fraud, but it is intensifying existing ones in ways merchants cannot afford to ignore.

Synthetic identity fraud is becoming more convincing. AI tools can generate realistic combinations of personal details that pass basic verification checks. These identities may behave legitimately for some time before being used to abuse refund or dispute processes, making early detection difficult.

Evidence manipulation is also on the increase. Generative AI can produce altered screenshots, edited delivery images, or fabricated correspondence that appears to support a customer’s claim. This complicates post-transaction investigations and makes disputes harder to assess at face value.

At the same time, first-party misuse continues to grow. Frictionless checkouts and subscription models improve customer experience, but they also increase the likelihood of misunderstanding or opportunistic behaviour. While not always malicious, these cases still result in financial loss and administrative burden.

As a result, the line between fraud, abuse, and genuine confusion is increasingly blurred – and this grey area is where many payment disputes now originate.

How Does This Shift Affect Disputes and Financial Recovery?

As fraud becomes quieter and more realistic, disputes take on a much bigger role. They are no longer just a follow-up metric – they are often the first clear signal that something has gone wrong.

Merchants may start to see claims supported by detailed explanations and seemingly credible evidence. At the same time, card schemes and issuers continue to raise the bar on what evidence is required, how relevant it must be, and how quickly it needs to be submitted.

This creates a real pressure point. Even businesses with strong AI-based fraud screening can struggle if their post-transaction processes are not equally robust. In 2026, successfully challenging a claim requires context, consistency, and a solid understanding of scheme rules – not just proof that a transaction took place.

Managing disputes effectively demands ongoing analysis, process knowledge, and the ability to spot patterns over time. It is also where some merchants begin to see the value of specialist support, particularly when internal teams are already stretched.

Why Technology Alone Cannot Eliminate Post-Transaction Risk

AI plays an important role in identifying risk, prioritising cases, and supporting parts of the response process. However, disputes are shaped by card scheme rules, issuer judgement, and real-world context – areas where automation has clear limits.

Deciding whether to challenge a claim or accept the loss is rarely a purely technical decision. It often depends on customer behaviour, industry norms, historical outcomes, and the likelihood of success. AI can help determine what to review first, but it cannot replace informed judgement.

Accountability also matters. Regulators and payment partners expect merchants to demonstrate control and consistency across their payments operations. Over-reliance on automation without clear oversight can create compliance and reporting challenges.

In practice, the strongest merchants in 2026 will use AI as part of a wider risk strategy, rather than relying on it as a complete solution.

What Should Merchants Focus On to Stay Resilient in 2026?

Preparing for AI-driven fraud is not just about adopting new tools – it is about strengthening the entire payment journey. Merchants should assess how risk signals connect across checkout, fulfilment, customer service, and dispute handling. Patterns often only become visible when data is viewed holistically.
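One way to make "viewing data holistically" concrete is to stitch per-stage records together by order ID. The records, field names, and the example rule below are all hypothetical – a minimal sketch of the principle that signals which look benign in isolation can be suspicious in combination, not a recommended rule set.

```python
# Hypothetical per-stage records keyed by order ID; field names are
# illustrative. Viewed separately, each stage looks fine; joined, the
# order shows an address change in fulfilment followed by a
# non-delivery dispute.
checkout = {"A100": {"avs_match": True}}
fulfilment = {"A100": {"delivery_attempts": 3, "address_changed": True}}
support = {"A100": {"tickets": ["where is my parcel?"]}}
disputes = {"A100": {"reason": "not_received"}}

def lifecycle_view(order_id):
    """Stitch the stages together so cross-stage patterns can surface."""
    return {
        "checkout": checkout.get(order_id, {}),
        "fulfilment": fulfilment.get(order_id, {}),
        "support": support.get(order_id, {}),
        "dispute": disputes.get(order_id, {}),
    }

view = lifecycle_view("A100")
# A post-checkout address change combined with a non-delivery claim is a
# stronger signal together than either record is alone.
suspicious = bool(view["fulfilment"].get("address_changed")
                  and view["dispute"].get("reason") == "not_received")
print(suspicious)  # True
```

The join itself is trivial; the value comes from deciding which cross-stage combinations a team reviews, which is an operational choice rather than a technical one.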

Clear communication with customers also plays a critical role. Transparent billing, accessible cancellation options, and responsive support can significantly reduce the likelihood of disputes escalating.

Merchants should pay close attention to dispute outcomes, not just volumes. These cases highlight where AI defences are effective, where they are being bypassed, and where processes can be refined to reduce both losses and friction.
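The difference between tracking volumes and tracking outcomes can be shown with a small calculation. The dispute reasons and results below are invented for illustration and do not correspond to any card scheme's actual reason codes.

```python
# Hypothetical dispute records; reason labels are illustrative only.
disputes = [
    {"reason": "fraud", "won": True},
    {"reason": "fraud", "won": False},
    {"reason": "not_received", "won": False},
    {"reason": "not_received", "won": False},
    {"reason": "cancelled_sub", "won": True},
]

def outcome_rates(records):
    """Win rate per dispute reason: outcomes, not just volumes, show
    where defences and evidence hold up and where they are bypassed."""
    totals, wins = {}, {}
    for d in records:
        totals[d["reason"]] = totals.get(d["reason"], 0) + 1
        wins[d["reason"]] = wins.get(d["reason"], 0) + (1 if d["won"] else 0)
    return {reason: wins[reason] / totals[reason] for reason in totals}

for reason, rate in sorted(outcome_rates(disputes).items()):
    print(f"{reason}: {rate:.0%} won")
```

In this toy data, non-delivery claims are never won – a prompt to review what delivery evidence is being collected and submitted, rather than simply noting that dispute counts are rising.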

The most resilient businesses will be those that combine intelligent automation with well-defined processes and informed oversight.

What Does AI Really Mean for Fraud in 2026?

AI is neither a cure-all nor a threat in isolation. It is a powerful tool that drives efficiency and innovation – but it can also be used to make fraud harder to detect and harder to recover from.

In the year ahead, the biggest risk is not failing to adopt AI, but assuming it can manage every aspect of fraud and recovery on its own. As attacks become quieter and disputes more complex, losses are increasingly tied to what happens after a transaction is approved.

Merchants that succeed will be those that understand where risk truly sits across the payment lifecycle. They will use AI to improve visibility and decision-making, while ensuring the processes behind it are strong, adaptable, and well understood. In an AI-driven fraud landscape, being prepared is not just about reacting faster – it’s about anticipating problems long before they arise.