Amazon Optimisation
Rufus Isn’t “Next”, It’s Now
4 November 2025
8 min read

Amazon just put hard numbers behind AI shopping. In Q3, Andy Jassy said Rufus, the AI assistant inside Amazon’s store, has been used by 250 million customers this year. Shoppers who use Rufus are 60% more likely to complete a purchase. Monthly users are up 140% year-on-year, interactions are up 210%, and Rufus is on track to drive more than $10B in incremental annualised sales. That isn’t a demo; it’s demand at retail scale.
For consumer-product brands, this changes the game. Rufus compresses the journey from query to basket. Instead of keyword → click → scroll → compare, the model synthesises specs, reviews, images, Q&A, brand stores and authoritative citations into a decisive recommendation. If your product data isn’t precise, consistent and machine-readable, you won’t be present in that conversation, let alone be the preferred pick.
Classic A9 tactics still matter, but they’re no longer sufficient. Models reward clarity over copy length: ingredients, materials, sizing, certifications, usage guidance and compatibility need to exist as structured facts, not vague prose. Consistency across Amazon PDPs, A+ content, Brand Stores and your D2C schema is now a trust signal. External authority matters, too. When reputable sources mirror your spec table and safety claims, models repeat them. When they don’t, gaps get filled by competitors.
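One practical way to keep Amazon and D2C facts aligned is to hold each product’s specs as a single structured record and emit schema.org JSON-LD from it for the D2C site. A minimal sketch, with an entirely hypothetical product record (the field names and values are illustrative, not a client’s data):

```python
import json

# Hypothetical single source of truth for one product's structured facts.
# The same record would feed the Amazon PDP, A+ content and the D2C schema,
# so models crawling either surface see identical facts.
product = {
    "name": "Example Trail Shoe",
    "brand": "ExampleBrand",
    "material": "Recycled mesh upper",
    "size_range": "UK 6-12",
    "certification": "Global Recycled Standard",
}

def to_json_ld(p: dict) -> str:
    """Render the record as a schema.org Product in JSON-LD."""
    doc = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": p["name"],
        "brand": {"@type": "Brand", "name": p["brand"]},
        "material": p["material"],
        "additionalProperty": [
            {"@type": "PropertyValue", "name": "sizeRange",
             "value": p["size_range"]},
            {"@type": "PropertyValue", "name": "certification",
             "value": p["certification"]},
        ],
    }
    return json.dumps(doc, indent=2)

print(to_json_ld(product))
```

The point of the single record is consistency: when a spec changes, it changes once, and every surface a model reads stays in agreement.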
Retail media should follow this reality. You can’t buy every answer position, but you can create the behavioural proof that reinforces inclusion: strong click-through, saves, add-to-carts and review velocity on the exact queries you want to win. Think of ads as a way to generate ranking signals for the model, not just as a way to rent traffic.
Measurement needs to level up as well. Treat AI answers like search shelf space and track Share of Model across Rufus, ChatGPT, Gemini, Claude and Perplexity. Monitor whether your brand is named, whether your facts are cited, and whether your products are surfaced as top picks, then tie those movements to traffic from AI surfaces and to sell-through. If you fall out of an answer, treat it like dropping off page one.
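In practice, Share of Model tracking means re-running a fixed set of queries against each AI surface and counting how often your brand is named. A minimal sketch, assuming you have already collected sampled answers (the surfaces, queries, brand and answer text below are all hypothetical):

```python
from collections import defaultdict

# Hypothetical sampled answers: (surface, query, answer_text) tuples
# gathered by re-running a tracked query set against each assistant.
samples = [
    ("Rufus", "best trail shoes", "Top picks include the ExampleBrand Trail Shoe..."),
    ("Rufus", "waterproof trail shoes", "Consider OtherBrand's waterproof range..."),
    ("Perplexity", "best trail shoes", "ExampleBrand is frequently recommended..."),
]

def share_of_model(samples, brand):
    """Per-surface fraction of sampled answers that name the brand."""
    seen, named = defaultdict(int), defaultdict(int)
    for surface, _query, answer in samples:
        seen[surface] += 1
        if brand.lower() in answer.lower():
            named[surface] += 1
    return {surface: named[surface] / seen[surface] for surface in seen}

print(share_of_model(samples, "ExampleBrand"))
# → {'Rufus': 0.5, 'Perplexity': 1.0}
```

A real tracker would also log whether specific facts are cited and whether the product is surfaced as a top pick, but the same counting logic applies: per surface, per query set, over time.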
Finally, test like a product change. Models drift and seasons confound results, so use matched cohorts, change one variable at a time (often the completeness of structured specs plus a Q&A refresh), and recheck answer presence weekly for several cycles. What counts is reproducible lift: more inclusions, stronger preference language and downstream conversion.
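The matched-cohort check above reduces to a simple comparison of inclusion rates over the weekly rechecks. A sketch with invented data (the 0/1 weekly presence flags below are hypothetical, not measured results):

```python
# Hypothetical weekly answer-presence checks for two matched SKU cohorts:
# 1 = the product appeared in the tracked AI answer that week, 0 = it did not.
control = [0, 0, 1, 0, 1, 0]   # listings left unchanged
treated = [0, 1, 1, 1, 1, 1]   # listings with completed specs + Q&A refresh

def inclusion_rate(checks):
    """Fraction of weekly checks where the product appeared in the answer."""
    return sum(checks) / len(checks)

lift = inclusion_rate(treated) - inclusion_rate(control)
print(f"control={inclusion_rate(control):.2f} "
      f"treated={inclusion_rate(treated):.2f} lift={lift:+.2f}")
# → control=0.33 treated=0.83 lift=+0.50
```

Because only one variable changed between cohorts, a lift that reproduces across several cycles is attributable to the listing change rather than model drift or seasonality.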
LMO7 is built for this moment. We make product data answer-ready, align Amazon and D2C facts in schema, seed credible proof where models crawl, and run media-to-rank experiments while tracking your Share of Model. Amazon’s numbers are the signal: AI shopping is already determining who gets found and who gets bought. If your data isn’t readable, your brand isn’t recommendable.
If you want a quick view of where you stand today, ask us for a Share-of-Model read and a two-week media-to-rank pilot on your top SKUs.