Don’t be a passive subject of AI's opinion; become an active engineer of its understanding
Nik says: “Stop thinking just about ranking on Google and start focussing on a new discipline: AI visibility engineering. This is where you can systematically probe AI models to measure brand associations, quantify perception, and then use those insights to engineer on-page, off-site, and strategic actions that influence how these machines understand and represent your clients.”
If you write wonderful web pages aimed at humans, is that not enough to feature on an AI platform?
“AI visibility engineering is critical because LLMs are becoming the new discovery layer. In September 2025, Google announced that Gemini is going to influence how Google Search works in an even deeper way.
We need to understand that these aren't just search results; they are an answer engine that forms narratives around brands. If a model misunderstands your brand's core value proposition, it can misrepresent you to millions of users at the point of decision. This is crucial for any business where brand perception, trust, and authority are key drivers of conversation – so virtually everyone, from e-commerce all the way to B2B SaaS.
At Dejan, our work is now split into answering two fundamental questions for our clients and the brands that we work with. The first is a diagnostic one: How are you currently being impacted by AI? The second is more strategic: How can we influence the AI model to recommend the brand?
To answer both, you first have to understand the true mind of the AI. You have the interpretive layer, which is what everybody sees. This is what people type into a browser or into models like ChatGPT, Gemini, Claude, Perplexity, etc. Then, you have the agentic layer. This is the thing that you really want to tackle: the in-depth layer where you can see how the AI is actually influenced; it's where it makes its key decisions.
You have to understand that there are grounded results and ungrounded results. Grounded results are responses influenced by real-time information that the model retrieves using RAG (Retrieval Augmented Generation). This is where all of your traditional SEO efforts give you direct leverage.
Then, you have ungrounded results. These are generated from the model's static pre-trained memory. The propensity for a query to be grounded is extremely high on Google's platforms like Gemini, as their models are tightly integrated with their real-time search index, especially for time-sensitive or YMYL topics. Other models like ChatGPT also show high, but variable, rates of grounding. This variability is precisely why classification is critical; we cannot assume grounding is a default state for all queries.
With the way that users are searching nowadays, whether it's on a search engine or in an AI model, the system pulls real-time information from the web using RAG and combines it with the pre-trained model to give that result. That's why it's really important to understand this and to think about the agentic layer. How do you get your brand represented well in an AI model's mind?”
Are more searches moving to grounded results, and is that going to be the default position for AI search moving forward?
“Dan Petrovic has done a lot of research in building and training a Query Demands Grounding (QDG) Classifier. This is a model trained on multiple different queries from AI Mode and on Search Console results across the environments that we have access to, so that it can detect which queries will demand grounding and which will not.
This pre-trained model is what we use when we're doing classification. If a client comes to us and says, ‘How are we being influenced by AI search?’, we first extract all the queries from their Search Console. Then, to classify them, we convert each query string into a vector embedding: a numerical representation of its semantic meaning.
We want to figure out the specific semantic intent for each of these queries, and then, using the Query Demands Grounding Classifier, we can see whether or not these queries would or would not be grounded.
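The real pipeline uses proprietary embeddings and Dejan's trained QDG classifier, but the shape of it can be sketched with stand-ins. In this illustrative example, a toy character-trigram "embedding" and a nearest-centroid rule replace the real models, and the seed queries and labels are invented:

```python
# Illustrative sketch only: the actual QDG classifier and embedding model are
# proprietary. A hashed-trigram vector and nearest-centroid rule stand in for
# them here, purely to show the classify-by-embedding pipeline shape.
import hashlib
import math

def embed(query: str, dims: int = 64) -> list[float]:
    """Toy embedding: hash character trigrams into a fixed-size unit vector."""
    vec = [0.0] * dims
    text = f"  {query.lower()}  "
    for i in range(len(text) - 2):
        h = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16)
        vec[h % dims] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def centroid(vectors: list[list[float]]) -> list[float]:
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def dot(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# Hypothetical labelled seed queries (labels invented for illustration):
# fresh, local, time-sensitive queries tend to demand grounding.
grounded_seed = ["best mattress deals today", "mattress store near me open now"]
ungrounded_seed = ["what is a hybrid mattress", "how do pillows affect sleep"]

g_centroid = centroid([embed(q) for q in grounded_seed])
u_centroid = centroid([embed(q) for q in ungrounded_seed])

def demands_grounding(query: str) -> bool:
    """Classify a query by its nearest labelled centroid in embedding space."""
    v = embed(query)
    return dot(v, g_centroid) > dot(v, u_centroid)
```

In production you would swap `embed` for a real semantic embedding model and the centroid rule for the trained classifier; the flow of extract queries, embed, classify stays the same.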
From our research, almost 100% of queries are grounded with Google Gemini, and that is powering a lot of Google search and a lot of the resources in their entire ecosystem. Almost every query that people are inputting these days is being augmented in some way by the model.”
Is this affecting every industry and every country?
“In 2026, a lot more countries will be impacted. It's hard to say what it's going to look like by the time someone is reading this, but we're following the money. We're following the investments; we're following the pathway and direction that a lot of big tech providers are going.
In Malaysia, it only came out four months ago, whereas worldwide we've seen this as a part of search for years now. In the future, it's going to be a very different landscape, and AI will be a much more integrated part of the search experience.
What’s really cool at the moment is that, if we can understand it and measure it (in the way that we have been able to at Dejan), we can test this right now. We can influence models and see what is working and what's not.
Everyone is going to be affected by this in a massive way. I can't say to what degree or scale, but it's going to be pretty much universal.”
What are the most effective ways to influence the models at the moment?
“There are so many different AI rank tracking tools out there at the moment. At Dejan, we've got a very lean, succinct, and efficient model. The way that we do it is different to how a lot of other trackers do it. We establish a baseline with bidirectional probing. This means that we ask the model two sets of questions: brand-to-entity, and entity-to-brand.
Brand-to-entity means we prompt the model with the brand name, asking 'What are the top concepts associated with [Your Brand]?' to measure the model's passive recall and understanding of what the brand is. Entity-to-brand means prompting from the other direction: 'Which top brands do you associate with [your service, product, or whatever you want to be known for]?' This gives a quantifiable score for the brand's perception and its market position in the model's mind.
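The bidirectional probing described here can be sketched in a few lines. In this hedged example, `ask_model` is a hypothetical stub standing in for a live LLM call, and all brand and entity names are invented; only the scoring logic is the point:

```python
# Sketch of bidirectional probing. `ask_model` is a stand-in for a real LLM
# call that returns a ranked list of answers; the canned responses and brand
# names below are invented for illustration.
def ask_model(prompt: str) -> list[str]:
    canned = {
        "What are the top concepts associated with BrandX?":
            ["mattresses", "free delivery", "memory foam"],
        "Which top brands do you associate with pillows?":
            ["BrandY", "BrandZ", "BrandX"],
    }
    return canned.get(prompt, [])

def brand_to_entity_score(brand: str, target_entity: str) -> float:
    """1.0 if the entity tops the brand's association list, decaying by rank."""
    answers = ask_model(f"What are the top concepts associated with {brand}?")
    return 1.0 / (answers.index(target_entity) + 1) if target_entity in answers else 0.0

def entity_to_brand_score(entity: str, brand: str) -> float:
    """Same reciprocal-rank score, probed from the entity side."""
    answers = ask_model(f"Which top brands do you associate with {entity}?")
    return 1.0 / (answers.index(brand) + 1) if brand in answers else 0.0

baseline = {
    "brand→entity (mattresses)": brand_to_entity_score("BrandX", "mattresses"),  # 1.0
    "entity→brand (pillows)": entity_to_brand_score("pillows", "BrandX"),        # ≈ 0.33
}
```

Running both directions over many prompts, markets, and competitors, and averaging the scores, gives the kind of quantified baseline the interview describes; a reciprocal-rank score is just one reasonable choice of metric.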
Like I was saying before, this bidirectional probing is really helpful for us because we now understand what entities we want to be known for. You can identify the core services or products for your business, have an idea of how important those are to different markets, and even look at them by country.
The way that we are looking at this is with a confidence score. How confident is the model to take that suggestion from you or from your competitor? That's how we start to tell the difference. It's not a binary 'yes' or 'no'; it's about the specific probability, and the specific way, in which that association is being assigned.”
You recommend building topical authority with query fan-out, and then moving on to unpacking the AI's mind with Tree Walker analysis, so what is Tree Walker analysis?
“With our AI Rank tool, we're not just looking at the brand to the entity; we also look at the competitors and what countries they're looking at. We can get very specific about how confident the model is. That gives us a really good litmus test and a way to track it because we can quite literally see a graph of change.
Like I said before, we've got the Query Demands Grounding Classifier that we've trained on your Search Console. Then, query fan-out is all of the relative ways that a model could potentially be able to find you. Query fan-out is essentially creating multiple different synthetic queries that could potentially be found for your specific brand.
The way that we incorporate that is we look at whether or not those results have already been captured by Search Console, and that gives us an idea of market share. We can also see the same for competitors and how that ranks there. More importantly, we can also see all the results that aren't currently captured by Search Console. Now, we know all of the different variations, and we know whether they are important and relevant to us.
Crucially, we don't 'train the model' with this fan-out data.
Instead, we use this comprehensive map of the query universe to architect the perfect on-page content. This content is then what the model uses as a grounding source during its RAG process. Our Query Demand Estimator (QDE), which is trained on the client's historical GSC data, then helps us prioritise which of these new content blueprints to build first. This gives us a really good lens on all the ways that the model could find, source, and cite the brand, but where we simply don't yet have the right content or on-page positioning in place.
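The coverage check described here reduces to a set comparison. In this minimal sketch, the fan-out and Search Console queries are invented examples; in practice both sets would come from the fan-out generator and the GSC export:

```python
# Minimal sketch with invented example queries: compare a synthetic query
# fan-out against queries already captured in Search Console to estimate
# topical coverage and surface the gaps to prioritise content for.
fan_out = {
    "best memory foam mattress",
    "mattress for side sleepers",
    "cooling pillow for hot sleepers",
    "hybrid mattress vs memory foam",
}
gsc_queries = {
    "best memory foam mattress",
    "mattress for side sleepers",
}

captured = fan_out & gsc_queries        # fan-out queries already ranking
gaps = fan_out - gsc_queries            # fan-out queries with no coverage yet
coverage = len(captured) / len(fan_out)  # 0.5: half the fan-out is captured
```

The `gaps` set is the input to prioritisation: a demand estimator like the QDE mentioned above would then rank those uncovered queries by expected demand.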
Tree Walker analysis is looking at the model and the confidence it has in picking you or not picking you. It's really important to understand where the model is highly confident, and see the sentence that generates that.
Every word carries a token-level probability for what will follow it. Tree Walker analysis literally creates branches of different prompt results, and we can see the sentence that it constructed and determine whether or not the model has confidence in constructing the next word. This is really important, particularly with brand positioning and being precise in the way that we're constructing these sentences.
For example, if you want to have a specific representation of your brand as the leading provider of mattresses and pillows, you want the model to have high confidence to say, ‘Brand X is a leading provider of mattresses and pillows,’ and have high confidence in each word of that statement, all the way through that sentence. With Tree Walker analysis, you can see where the model starts to become uncertain. For example, it might have confidence all the way up until ‘… and pillows’.
We want to deconstruct this, and we do this at scale. We make sure that there is a good way of testing this to find occasions where the model is certain about these statements, and where the model is not as confident. That tells us it doesn’t have a good understanding of the connection between this particular entity and our brand.
Going back to AI Rank, if you were looking at brand-to-entity, and you've mapped ‘brand X’ to ‘mattresses’ and ‘brand X’ to ‘pillows’, you can actually see and measure how confident the model is with those connections. Then, you will have a much more consolidated idea when you are creating content. Maybe you need to focus more on pillows because that is something that the brand wants to be a leading provider for, so you want to create specific content to add that.
You will also have a really good idea of all the query fan-outs that you’re not currently capturing in the market, so you can strategically add those to the site. Alternatively, it might even be that you need to do more brand positioning to say that your brand is the leading provider of pillows, and be specific in the way that you’re being cited.
Dan Petrovic has also developed a better methodology for looking at citation mining, specifically on what sources the AI model is pulling to find these suggestions – including the brands and specific URLs, and the confidence it has in what it’s using. Now, we have a much more qualified way to do link prospecting and brand relationship work, very specifically targeting what the AI model is finding and using at scale.”
Nik, what's the key takeaway from the tip you shared today?
“Stop being a passive subject of the AI's opinion and become an active engineer of its understanding.
In 2026, the most successful brands will be those who intentionally and systematically shape how machines perceive them.”
Nik Ranger is a Senior Technical SEO at Dejan.ai. Find out more over at Dejan.ai.