Strategic Planning

    Testing What Works in AI Search

    You should think about AI search visibility the same way high-growth companies look at marketing channels: as a portfolio of experiments.

    27 August 2025
    2 min read
    At LMO7 we look at AI search visibility the same way high-growth companies look at marketing channels: as a portfolio of experiments. Some ideas deliver quick wins, others compound over time, and a few don’t work at all, but every test teaches us something.

    To keep this process fast and consistent, we use a simple AI Search Experiment Template. It captures the essentials on a single page (a quick code sketch of the template follows the list):

    Hypothesis – what we expect to happen (e.g. “If we publish an FAQ around best vegan protein gummies, ChatGPT will cite it in answers within 4 weeks”).

    Setup – the query we’re targeting, the content we’ve created or updated, and the risk level.

    Measurement – how we’ll track results, from LLM citations to traffic or conversions.

    Timeline – launch date and checkpoints for review.

    Results – what happened, supported by screenshots or data.

    Learnings – what we discovered, and whether to scale, refine, or drop the idea.
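
    Here's what that one-page record can look like in code. This is a minimal sketch under our own assumptions: the AISearchExperiment class, its field names, and the example values are illustrative, not LMO7's actual tooling.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class AISearchExperiment:
        """One-page record of a single AI search test (illustrative sketch)."""
        hypothesis: str              # what we expect to happen
        target_query: str            # the query we're targeting
        content: str                 # content created or updated
        risk_level: str              # e.g. "low", "medium", "high"
        metrics: list[str]           # LLM citations, traffic, conversions
        launch_date: date
        checkpoints: list[date] = field(default_factory=list)  # review dates
        results: str = ""            # what happened, filled in after the test
        learnings: str = ""          # scale, refine, or drop

    # Example entry matching the FAQ hypothesis above
    experiment = AISearchExperiment(
        hypothesis="If we publish an FAQ around best vegan protein gummies, "
                   "ChatGPT will cite it in answers within 4 weeks",
        target_query="best vegan protein gummies",
        content="New FAQ page",
        risk_level="low",
        metrics=["ChatGPT citations", "organic traffic"],
        launch_date=date(2025, 8, 27),
        checkpoints=[date(2025, 9, 10), date(2025, 9, 24)],
    )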

    The goal isn’t to produce perfect content every time. The goal is to learn quickly, document what happens, and build a playbook of approaches that consistently improve visibility in ChatGPT, Claude, Rufus, and other AI search engines.
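
    The Measurement step can start as a simple spot-check. The sketch below assumes the OpenAI Python SDK and an API key in the environment; the model name and brand domain are placeholders, and a single API response is not the same as a live ChatGPT session, so a real check would repeat the query across runs and track the hit rate over time.

    # Minimal citation spot-check, assuming the OpenAI Python SDK
    # (pip install openai) and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    def mentions_brand(query: str, brand: str, model: str = "gpt-4o-mini") -> bool:
        """Ask the model the target query and look for the brand in the answer."""
        response = client.chat.completions.create(
            model=model,  # placeholder model name; swap in whichever you track
            messages=[{"role": "user", "content": query}],
        )
        answer = response.choices[0].message.content or ""
        return brand.lower() in answer.lower()

    # "example-brand.com" is a placeholder for the brand being tracked
    if mentions_brand("best vegan protein gummies", "example-brand.com"):
        print("Cited: log a win against the hypothesis")
    else:
        print("Not cited yet: review again at the next checkpoint")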

    By running experiments in this structured way, we can move faster, share learnings more easily, and give brands a clearer path to winning their share of AI shelf space.

    Ready to Optimise Your Brand for AI?

    Let LMO7 help you improve your visibility in AI shopping assistants and LLM responses.