Amazon hasn’t shut the door on artificial intelligence, but it has had to change course. After several recent incidents in its retail division—including a several-hour outage of its website and shopping app in the United States—the company acknowledged that one of those incidents was related to an AI tool assisting an engineer. Nonetheless, the group has sought to downplay the idea that its platform is suffering a series of outages caused by “code written by AI.” Its official message is more nuanced: the problem wasn’t full autonomy of the tool but an incorrect recommendation inferred from outdated internal documentation, combined with insufficient controls to limit the impact of the error.
For a tech publication, what’s more interesting than the corporate denial is the structural signal the case sends. Amazon is testing in production something many companies have suspected for some time: programming assistants accelerate work, but they can also expand the scope of failures when integrated into very complex systems without adequate safety barriers. And in a company like Amazon, where a poor deployment can impact payments, pricing, orders, or purchase history, the line between “useful help” and “operational risk” becomes critical.
The company responded after the Financial Times reported that Amazon engineers had reviewed several high-severity incidents during an internal meeting. Amazon issued a public correction denying two key points: that multiple recent outages were caused by AI-written code, and that AWS had been involved in those specific incidents. According to the company, only one of the reviewed issues involved AI tools, and in that case, the root cause was an engineer following faulty advice generated from an obsolete internal wiki. Furthermore, Amazon emphasized that those failures were limited to their retail infrastructure and did not affect AWS.
The problem isn’t AI alone but how it is governed
This nuance significantly changes the story. Amazon has not suddenly discovered that AI “is not useful” for programming, nor is it beating a total retreat from it. What it has uncovered is less spectacular but far more important: in critical environments, an incorrect suggestion generated by an AI system can be as dangerous as a poorly reviewed code change if the organization lacks controls, reliable documentation, and clear deployment boundaries. The lesson applies not just to the model but to the process.
The outage on March 5, 2026 reinforces this understanding. Reuters reported that it affected thousands of users in the U.S. and that Amazon attributed it to a software deployment issue. For several hours, users experienced checkout errors, shifting prices, app failures, and problems accessing orders or product pages. For a company that relies on service continuity, this kind of incident is not just technical—it’s a direct hit to user trust and the bottom line.
Hence, more than a retreat from AI, what is perceived is a tightening of change governance. Various business reports point to new safety reviews, increased documentation, and additional controls over deployments in sensitive systems. Even if Amazon denies some of the more extreme claims about universal approvals for any AI-assisted change, the overall movement seems clear: less blind faith in acceleration and more emphasis on traceability, human review, and containment of the “blast radius” in case of errors.
While refining retail, AWS accelerates its AI strategy
The irony is that this course correction coexists with an increasingly aggressive push for AI inside the group. In June 2025, Andy Jassy stated that Amazon was using generative AI “in almost every corner of the company” and that it had more than 1,000 AI services and applications either in operational use or already built. In that same message, the CEO explained that the technology was being used in internal operations, logistics, demand forecasting, robotics, customer service, and product pages, in addition to serving as a foundation for future efficiency gains across the corporate workforce.
AWS, far from slowing down, is doubling down on that strategy. Its official offerings revolve around Amazon Q, the company’s generative assistant, and specifically Amazon Q Developer, which AWS presents as a tool to build, operate, and transform software—ranging from code generation and testing to application modernization, troubleshooting, resource optimization, or data pipeline creation. The product page highlights this as a central piece for developers and IT teams, and AWS claims its advanced capabilities can speed up programming tasks and reduce manual research hours.
The most striking move was announced at re:Invent 2025, where AWS unveiled three “frontier agents” explicitly designed for the software lifecycle: Kiro, AWS Security Agent, and AWS DevOps Agent. The company described them as autonomous, scalable agents capable of working for hours or days without constant intervention. More importantly, AWS explained that this new family of agents was developed after studying their own internal development work “at Amazon scale,” meaning this isn’t just a commercial idea for clients but also an extension of internal learnings on integrating agents into engineering, security, and operations.
This indicates that Amazon’s story isn’t that of a company regretting AI but of an organization trying to run in two directions simultaneously: accelerating with agents, copilots, and automation both within AWS and across the rest of the group, while strengthening controls where the cost of mistakes becomes too high. It may sound contradictory to an outsider, but it reflects the normal tension at this stage of the market: all the big tech companies want greater productivity with AI, but none can afford to let that speed break critical systems in production.
Corporate context also plays a role. Reuters reported in January that Amazon confirmed 16,000 corporate layoffs, completing a plan involving around 30,000 departures since October 2025. While the company has avoided framing these adjustments as a simple replacement with AI, Jassy has made it clear that intensive AI and agent use will change work practices and reduce the need for certain corporate roles. In this environment, recent failures add pressure: the company wants to automate more but must prove it can do so without sacrificing reliability.
In summary, the narrative isn’t as straightforward as “Amazon regrets AI,” but it’s more useful for understanding the sector’s direction. Amazon isn’t pulling AI out of its development teams or AWS. It is learning—through real incidents—that deploying assistants and agents over critical infrastructure requires far more operational discipline than some commercial pitches have claimed. This makes the episode more than just a misstep; it’s one of the first serious reminders that the era of AI-assisted programming won’t be decided solely by speed but also by the quality of control companies can impose.
Frequently Asked Questions
Has Amazon stopped using AI for programming?
No. Amazon has denied plans to withdraw these tools and corrected the notion that recent outages were caused by AI-generated code. What it has done is review its internal guidance after discovering that a faulty recommendation from an AI-assisted tool contributed to one of the incidents.
What happened during the Amazon outage on March 5, 2026?
The company explained that the issue was related to a software deployment. The outage affected the website and shopping app in the U.S. for several hours, with problems in orders, pricing, product pages, and account history.
Is AWS also starting to use AI more intensively?
Yes. AWS is promoting Amazon Q and Amazon Q Developer as generative assistants for developers and IT teams. At re:Invent 2025, it also introduced agents like Kiro, AWS Security Agent, and AWS DevOps Agent to automate development, security, and operations tasks.
Is Amazon only using AI in AWS or elsewhere in the business?
In more areas. Jassy stated in 2025 that the company was broadly deploying generative AI across internal operations, logistics, demand forecasting, robotics, customer support, and product pages, with over 1,000 AI services and applications either running or in development.