Amazon Optimisation
From A9 to Rufus: How Amazon Optimisation Is Changing
TL;DR. A9 helps you get found; Rufus helps you get recommended. Optimise for answer inclusion, not just rank. Focus on questions, outcomes, proofs, and cross-surface consistency.
14 November 2025
9 min read
Most “Amazon SEO” advice still targets A9 (keyword→listings ranking). Rufus adds a reasoning layer that answers shopper questions directly. Below is what’s different (not a rehash of what’s the same).
What’s different (Rufus vs A9)
1) Answer inclusion beats rank position
A9’s goal is to rank SKUs on a results page. Rufus decides whether it can safely and confidently include your product in an answer. Your content must prove “why this SKU fits the mission.”
2) Questions over keywords
Rufus parses natural language like “won’t sting eyes on a marathon.” You optimise around intent clusters (research, comparison, buying), not token matches. Think queries→constraints→fit.
3) Outcomes and proofs over features alone
Rufus looks for quantified results and evidence: test durations, certifications, standards, tolerances. Claims without proof get de-prioritised even if they’re keyword-rich.
4) Structured fields drive reasoning
Rufus extracts attributes (materials, compliance, care, compatibility, policies). Missing or contradictory specs across PDP, images, and brand site can exclude you from answers.
5) Cross-surface consistency is mandatory
Rufus triangulates your PDP with your brand site, manuals, and third-party sources. Keep one canonical spec sheet; if facts drift, trust falls.
6) Q&A and reviews become facts
Beyond social proof, Rufus mines Q&A and review language to resolve edge cases. Thin, off-topic, or conflicting community content reduces inclusion confidence.
7) Images are machine-read, not just seen
On-image spec callouts and alt text are parsed as facts. Legible, mobile-first overlays that mirror copy help Rufus answer directly.
8) Safety, ingredients, and policy clarity
Rufus avoids risky recommendations. Ingredients/nutrition, allergens, certifications, and returns/warranty must be explicit and crawlable (not buried in PDFs).
9) Comparative framing matters
Rufus explains trade-offs (“magnetic vs belt drive”). If you don’t supply clean comparison facts, you’re skipped for a SKU that does.
10) Evaluation shifts to “answer presence”
You monitor Share of Model (how often you’re named in answers) and assistant-referred sell-through, not just keyword rank and CTR.
How to work differently (practical playbook)
Build intent clusters
Map top research / comparison / buying questions (Lexem.io or similar). Write bullets and FAQs as direct answers to those questions.
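A minimal sketch of what an intent-cluster map can look like in practice: hypothetical shopper questions grouped by funnel stage, with a helper that flags questions you have not yet answered on the listing. The questions, FAQ entries, and function names are illustrative assumptions, not output from any real tool.

```python
# Hypothetical intent clusters: shopper questions grouped by funnel stage.
INTENT_CLUSTERS = {
    "research": [
        "Does this sunscreen sting eyes when sweating?",
        "Is it reef-safe?",
    ],
    "comparison": [
        "How does this compare to a mineral SPF 50?",
    ],
    "buying": [
        "What is the return policy if it irritates skin?",
    ],
}

def unanswered_questions(clusters, faq_answers):
    """Return (stage, question) pairs with no direct answer drafted yet."""
    return [
        (stage, q)
        for stage, questions in clusters.items()
        for q in questions
        if q not in faq_answers
    ]

# Illustrative FAQ coverage so far: one of four questions answered.
faqs = {"Is it reef-safe?": "Yes: no oxybenzone or octinoxate."}
for stage, question in unanswered_questions(INTENT_CLUSTERS, faqs):
    print(f"[{stage}] needs an answer: {question}")
```

The point is the workflow, not the data structure: every question in the map should end up with a direct, provable answer in a bullet or FAQ.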
Publish proof objects
Add quantified outcomes and compliance: hours tested, standards met (e.g., ASTM/ESD), lab methods, certifications. Place in bullets, attributes, A+, and a small spec table.
Create a consistency graph
Maintain one canonical spec sheet (attributes, claims, proofs). Sync it to PDP, A+, image overlays, Q&A, brand site, and any downloadable docs.
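One way to make the canonical spec sheet enforceable is a simple drift check: compare the canonical facts against what each surface currently states and report mismatches. The field names, surfaces, and values below are invented for illustration; the shape of the check is the idea.

```python
# One canonical spec sheet (illustrative fields and values).
CANONICAL = {"material": "aluminium", "warranty_months": 24, "weight_g": 310}

# Facts as currently stated on each surface (hypothetical extracts).
SURFACES = {
    "pdp_bullets": {"material": "aluminium", "warranty_months": 24},
    "image_overlay": {"weight_g": 310, "warranty_months": 12},  # drifted
    "brand_site": {"material": "aluminum"},  # spelling drift counts too
}

def find_drift(canonical, surfaces):
    """List (surface, field, canonical_value, surface_value) mismatches."""
    drift = []
    for surface, facts in surfaces.items():
        for field, value in facts.items():
            if field in canonical and canonical[field] != value:
                drift.append((surface, field, canonical[field], value))
    return drift

for surface, field, want, got in find_drift(CANONICAL, SURFACES):
    print(f"{surface}: {field} says {got!r}, canonical is {want!r}")
```

Run a check like this whenever any surface is updated, so contradictions are caught before a model triangulates them.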
Seed Q&A intentionally
Post the top 10–20 buyer questions from support logs and query clusters. Keep answers concise, policy-aware, and link to manuals/test data where allowed.
Engineer images for reading
Use 2–3 hero images with 3–5 decisive callouts (units and thresholds). Ensure wording matches bullets to avoid contradictions.
Expose safety and suitability
Provide full INCI/nutrition panels, allergens, materials, contraindications, and returns/warranty in structured, crawlable form.
Instrument for answer outcomes
Maintain a test set of prompts; after updates, check whether Rufus now cites your facts. Tie inclusions to add-to-cart, ordered units, and CVR.
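The instrumentation step can be sketched as a small answer-presence metric. There is no public Rufus API, so `get_answer` below is a stand-in for however you capture assistant answers (manual spot checks, a logging pipeline); the prompts, brand facts, and canned answers are hypothetical.

```python
# Hypothetical prompt test set and brand facts to look for in answers.
TEST_PROMPTS = [
    "best sweat-proof sunscreen for marathon running",
    "sunscreen that won't sting eyes",
]
FACTS = ["Acme SPF50", "8-hour wear test"]  # invented brand facts

def answer_presence(prompts, facts, get_answer):
    """Share of prompts whose answer cites at least one of our facts."""
    hits = 0
    for prompt in prompts:
        answer = get_answer(prompt)
        if any(fact.lower() in answer.lower() for fact in facts):
            hits += 1
    return hits / len(prompts)

# Usage with a canned stand-in answer source (replace with real captures).
canned = {
    "best sweat-proof sunscreen for marathon running":
        "Acme SPF50 held up in an 8-hour wear test.",
    "sunscreen that won't sting eyes":
        "Several tear-free formulas exist.",
}
score = answer_presence(TEST_PROMPTS, FACTS, canned.get)
print(f"Share of Model on test set: {score:.0%}")  # prints 50%
```

Re-run the same prompt set after each content update, and join the inclusion rate against add-to-cart and ordered-unit data to see whether answer presence moves sell-through.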
Content patterns that win with Rufus
Mission-fit bullets: who it’s for, what it solves, how you know (proof).
The mindset shift in one line
Optimising for A9 makes you easy to find.
Optimising for Rufus makes you easy to recommend.
Design your PDPs so a model can confidently choose you, and shoppers will, too.