
As more consumers tap into artificial intelligence to enhance their online shopping experiences, new risks are being created for e-commerce merchants. To address those risks, a pair of e-commerce security companies is partnering to offer a new unified framework that they say will enable merchants to safely reap the benefits of AI shopping.
The framework combines technologies from Riskified, a global e-commerce fraud prevention and chargeback protection company, and Human Security, which defends digital experiences against bots, fraud, and digital abuse, to protect merchants from revenue loss, inventory manipulation, and reputational damage caused by AI agent misuse.
Human Security’s Chief Strategy Officer, John Searby, explained in a statement that his company will provide the framework’s trust layer and visibility to identify and govern AI shopping agent interactions, empowering merchants to set and enforce “trust or not” policies, while Riskified will contribute its expertise in e-commerce fraud prevention, chargebacks, and policy abuse.
“Together, we enable merchants to approve more legitimate AI-driven orders, reduce false declines and protect margins, setting the standard for how agentic commerce can grow safely and profitably,” he said.
Riskified CTO and Co-Founder Assaf Feldman added, “In a world where AI agents transact on behalf of individuals, resolving identity and trust becomes more complex. By working with Human and developing new agentic tools and capabilities, we give merchants a way to safely embrace this shift, turning what could be a threat into a new, profitable digital channel.”
AI Agents Evade Fraud Checks
While the companies acknowledge that fully autonomous shopping agents have yet to reach mainstream adoption, they note that consumers increasingly use large language models to research products, compare prices, and find deals, creating both opportunities and risks as technology advances. For merchants, early adoption of AI-driven shopping offers the chance to win new customers and boost conversion rates.
Still, they continued, rules-based fraud management can fail when an AI agent transacts, removing key behavioral signals and leading to more false declines or undetected fraud.
One way AI agents can evade rules-based fraud management is through adaptive probing. “Agents learn thresholds such as velocity, coupon limits, and IP ranges to route around static rules,” explained Ashu Dubey, CEO of Alhena AI, a San Francisco company that specializes in AI-powered customer experience solutions for e-commerce.
He added that agents are also good mimics. “They are very good at mimicking humans and hence bypassing the checks previously meant for bots,” he told the E-Commerce Times.
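The adaptive probing Dubey describes can be pictured with a minimal sketch. This is purely illustrative (the thresholds, field names, and IP ranges are hypothetical, not from either company's products): a static rules filter exposes fixed limits that an agent, once it has learned them, can simply stay just under.

```python
# Illustrative only: a static rules-based fraud filter with fixed,
# learnable thresholds (all values hypothetical).

STATIC_RULES = {
    "max_orders_per_hour": 5,       # velocity limit
    "max_coupons_per_order": 2,     # coupon limit
    "blocked_ip_prefixes": ("203.0.113.",),  # static IP-range block
}

def passes_static_rules(order: dict) -> bool:
    """Return True if the order clears every fixed threshold."""
    if order["orders_this_hour"] > STATIC_RULES["max_orders_per_hour"]:
        return False
    if order["coupons_used"] > STATIC_RULES["max_coupons_per_order"]:
        return False
    if order["ip"].startswith(STATIC_RULES["blocked_ip_prefixes"]):
        return False
    return True

# An agent that has probed the thresholds sits exactly at each limit
# and rotates to an unblocked IP -- the order sails through:
probing_order = {"orders_this_hour": 5, "coupons_used": 2, "ip": "198.51.100.7"}
print(passes_static_rules(probing_order))  # → True

# A naive bot that exceeds any one limit is still caught:
naive_bot_order = {"orders_this_hour": 40, "coupons_used": 9, "ip": "203.0.113.8"}
print(passes_static_rules(naive_bot_order))  # → False
```

The point of the sketch is that every threshold is discoverable by trial: each decline leaks information about where the line sits, which is exactly the weakness the behavioral and adaptive approaches described below aim to close.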
Wesley Almeida, an omnichannel retail expert based in Toronto, added that AI agents can overwhelm rules-based fraud systems. “Traditional fraud filters rely on predictable patterns, but AI agents learn fast and constantly shift tactics,” he told the E-Commerce Times. “What looked safe yesterday may be exploited tomorrow, meaning merchants can get hit with fraud that slips past outdated rules.”
However, the bigger risk is trust erosion. “If merchants don’t get ahead of AI-driven abuse, customers start questioning site security, product authenticity, and even brand credibility,” Almeida said. “Once that trust is lost, it’s expensive and sometimes impossible to win back.”
LLM Traffic Increases Fraud Risk
Diana Rothfuss, the global solutions strategy director for risk, fraud, and compliance at SAS, an analytics and artificial intelligence software company, in Cary, N.C., compared rules-based systems to locks on a door. “AI agents are like burglars that can try a thousand keys all at once,” she told the E-Commerce Times.
“Agents don’t stick to the predictable human patterns,” she added. “They can hop between devices and IP addresses and even transaction types until they brute force their way through rules-based defenses.”
Even without the widespread use of agents, AI is already presenting risks to merchants. According to Riskified, early data from its merchant network shows that large language model (LLM)-referred traffic is riskier in some industries than other kinds of traffic. For example, LLM-referred traffic to a large ticketing merchant was 2.3 times riskier than Google search traffic, and an electronics merchant saw LLM-referred traffic that was 1.8 times riskier.
“LLM-generated referral traffic is riskier because it is not anchored in verifiable consumer intent,” explained Nic Adams, co-founder and CEO of 0rcus, a cybersecurity company in Indianapolis.
“Agents can flood sites with synthetic sessions or clickstreams that skew analytics,” he told the E-Commerce Times. “This contaminates attribution models, inflates acquisition costs, and makes it harder to distinguish legitimate buyers from automated traffic.”
Dan Pinto, co-founder and CEO of Fingerprint, a Chicago-based company specializing in device fingerprinting and fraud prevention, argued that LLM-referred traffic is not riskier if a legitimate user is using the LLM. “However, if a fraudulent user employs an LLM, it can perform much more sophisticated actions than a typical bot,” he told the E-Commerce Times.
“LLMs can solve Captchas, understand site structures, analyze promo mechanics, and adjust behavior dynamically,” he continued, “making them especially dangerous during promotions where they blend in as legitimate shoppers and evade the typical methods of stopping them.”
AI Bots Drive Reseller Arbitrage
Riskified also identified the early signs of automated reseller arbitrage, where AI agents are deployed to rapidly strip inventory and then resell at marked-up prices via fraudulent storefronts, which other agents would then recommend. Left unchecked, these tactics can disrupt pricing strategies, erode customer trust, and cause significant revenue loss for merchants, it said.
“Bots are great at automated reseller arbitrage,” observed Peter Horadan, CEO of Vouched, a digital identity verification company, in Seattle.
“This is bad for merchants because they are defeating your pricing strategy,” he told the E-Commerce Times. “What used to take unique human effort to spot price differences can now be weaponized by being done at an incredible scale and breadth by automated agents.”
The hunt for the cheapest price used to be highly manual, noted Craig Crisler, CEO of SupportNinja, a Dallas-based provider of customized outsourcing solutions. “You’d have to go and Google and search and look for price points and stuff like that,” he told the E-Commerce Times. “Now you can just have bots that just go and hunt for the lowest price of a thing.”
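The automation Crisler and Horadan describe is not sophisticated in itself; what changes is the scale. As a minimal sketch (the retailers and prices here are invented for illustration), the core of a price-hunting arbitrage scan is a one-line comparison an agent can run continuously across thousands of listings:

```python
# Illustrative only: the once-manual cheapest-price hunt reduced to a
# scan an agent can repeat at scale (all listings hypothetical).

listings = [
    {"retailer": "StoreA", "price": 189.99},
    {"retailer": "StoreB", "price": 174.50},
    {"retailer": "StoreC", "price": 199.00},
]

# Find the cheapest source to buy from...
cheapest = min(listings, key=lambda item: item["price"])

# ...and the spread an arbitrageur could capture by reselling
# at the highest observed price.
highest_price = max(item["price"] for item in listings)
resale_margin = round(highest_price - cheapest["price"], 2)

print(cheapest["retailer"], cheapest["price"], resale_margin)  # → StoreB 174.5 24.5
```

Run across many products and retailers at machine speed, this is the “incredible scale and breadth” that turns a routine comparison-shopping task into a pricing-strategy threat.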
“With older reseller bots, you’ve got a fairly dumb program that’s going out and grabbing up inventory,” explained Riskified’s Fraud and Risk Intelligence Expert and CMO Jeff Otto. “Nike would release the new Air Jordans, and the bot buys up as many as it can, as quickly as it can. Then you see those sneakers being resold on private Telegram channels.”
“What LLMs are allowing for and what AI agents will allow in the future is a fraudster or unauthorized reseller to write a prompt to avoid typical patterns that would block them from ordering 500 pairs of Air Jordans or getting all the Taylor Swift tickets,” he told the E-Commerce Times. “They can try different patterns that are gonna get past what would be traditional or older fraud solutions.”
Blessing and Curse
Matt Mullins, head of red teaming at Reveal Security, an identity threat detection company headquartered in Tel Aviv, Israel, noted that AI is disrupting many industries, and online sales and fraud detection are no exception.
“Only time can tell what will ultimately be the best controls to protect against stripping, arbitrage, and manipulation, but one thing is certain: the way small businesses acquire revenue will be shifted and impacted,” he told the E-Commerce Times.
Generative AI advances are proving to be both a blessing and a curse, maintained SAS’s Rothfuss. “The same technology that helps customers find the best online deals can also be weaponized for hyperscale fraud or abuse,” she said.
“To stay ahead of today’s AI-enabled adversaries, merchants need layered defenses, built on robust governance frameworks, that combine behavioral analytics, AI-powered decision intelligence, and real-time anomaly detection with smarter guardrails that flag the fakes without frustrating genuine shoppers,” she added.
“At the end of the day, the thread through all of this is trust,” argued omnichannel expert Almeida. “If customers feel like everything has been hijacked by bots, or that sites are putting up heavy-handed roadblocks to fight them, confidence erodes. In e-commerce, losing trust isn’t a technical problem. It’s a business problem.”