Optimise your internal links
Jan says: “It might be surprising, but internal link optimisation is my number one focus. It’s not something new, and it’s not completely different from what others are doing, but internal link optimisation is the most underutilised tactic in SEO.
This is because it’s difficult to measure. Many smart people have thought about building specific models to determine the possible impact of changing internal linking, but there have always been issues. It’s not as straightforward as you think. It’s not only the internal linking you need to take into account, but also the external links that are flowing in.
Most SEO tactics nowadays talk about content. It’s really easy to come up with a rule like ‘Every page needs to have three hundred words’ or ‘Every page needs to have a specific Core Web Vitals score.’ Internal link optimisation is a bit more advanced and actually doing it can be a bit boring, because you don’t see the immediate impact.
Even though it’s difficult, 95% of websites would benefit a lot from simply thinking about best practices. You don’t need fancy models or before/after comparisons – there is plenty of low-hanging fruit available for all domains. It’s less important for a small blog or a big Amazon-like webstore but, for everything in between, internal linking can be really powerful when it’s done right.”
What’s the best way of viewing your current internal link graph?
“The first thing to take into account is that Google looks at the specific positions of links differently. There’s a difference between a link in a header or in a footer, compared to a link within a piece of content.
You can take a look at the patterns described by the Reasonable Surfer model. This model was patented by Google engineers, and it demonstrates how some links within a webpage have a higher probability of being clicked than others. Using those probability scores, you can assign a value to specific links. Based on that information, it makes sense that a link that’s directly visible when a user reads a piece of content will be clicked more often than a link that’s hidden at the bottom of a long page.
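A minimal sketch of that weighting idea, using networkx: each edge gets a weight based on where the link sits on the page, and PageRank then splits each page’s value across its outlinks in proportion to those weights. The position weights and the example link graph below are illustrative assumptions, not values from Google’s model.

```python
# Position-weighted internal PageRank - a sketch, not Google's actual model.
import networkx as nx

# Assumed click probabilities per link position: in-content links are
# treated as far more likely to be clicked than header or footer links.
POSITION_WEIGHTS = {"content": 1.0, "header": 0.3, "footer": 0.1}

# Illustrative link graph: (source, target, position on the source page).
links = [
    ("/home", "/category/shoes", "header"),
    ("/home", "/blog/buying-guide", "content"),
    ("/blog/buying-guide", "/category/shoes", "content"),
    ("/category/shoes", "/product/red-sneaker", "content"),
    ("/home", "/terms", "footer"),
]

graph = nx.DiGraph()
for source, target, position in links:
    graph.add_edge(source, target, weight=POSITION_WEIGHTS[position])

# pagerank() distributes each page's value over its outlinks in proportion
# to the "weight" attribute, so content links pass more value.
scores = nx.pagerank(graph, alpha=0.85, weight="weight")
for url, score in sorted(scores.items(), key=lambda item: -item[1]):
    print(f"{score:.3f}  {url}")
```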
It’s important to see that difference first, before you start analysing anything. Once you understand that, have a look at which pages are mostly linked from the main navigation elements. Check what’s in the header, check what’s in the footer, and check if those pages also have links from within the content.
Most of the available crawlers nowadays (like Screaming Frog) will check which part of the page your links are located in. If you do a basic crawl, you will already have the data available, but you should be manually looking at your pages as well. Where are the main navigation elements? Where are the internal links? Do you have any widgets that crosslink?
Once you have that data available, combine it with your data from Search Console. Screaming Frog has a straightforward connection with the API, so it’s easy to ingest data and combine it with the internal link data you have already collected.
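As a rough sketch of that combination step, assuming a crawler export with unique inlink counts per URL and a Search Console export with clicks and average position (the file and column names here are assumptions):

```python
# Join crawl data with Search Console data on URL - a minimal sketch.
import pandas as pd

crawl = pd.read_csv("crawl_inlinks.csv")   # assumed columns: url, inlinks
gsc = pd.read_csv("search_console.csv")    # assumed columns: url, clicks, avg_position

merged = crawl.merge(gsc, on="url", how="left")
merged["clicks"] = merged["clicks"].fillna(0)  # pages with no GSC data get 0 clicks

# Pages with the fewest internal links are the first place to look.
print(merged.sort_values("inlinks").head(20).to_string(index=False))
merged.to_csv("merged_crawl_gsc.csv", index=False)
```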
A really concrete tip, and one of the easiest wins to get, is to check for pages that are currently ranking in positions 5 to 20 and don’t have any internal links - or have a low number of internal links compared to your overall link graph. If you can see that your top-ranking pages have 10 links on average, and your lower-ranking pages have only 4 or 5, you may want to move around a number of links and push more value towards those lower-ranking pages.
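That check is easy to script once the crawl and Search Console data are merged. The sketch below reuses the merged export from the previous snippet; the position 5-20 window comes from the tip above, while the benchmark logic and the halving threshold are assumptions:

```python
# Find pages ranking 5-20 that are under-linked versus the site's norm.
import pandas as pd

pages = pd.read_csv("merged_crawl_gsc.csv")  # assumed columns: url, inlinks, avg_position

# Benchmark: average inlinks across pages that already rank in the top 5.
benchmark = pages.loc[pages["avg_position"] <= 5, "inlinks"].mean()

candidates = pages[
    pages["avg_position"].between(5, 20)
    & (pages["inlinks"] < benchmark / 2)  # well below the top-ranking norm
].sort_values("avg_position")

print(f"Benchmark: {benchmark:.1f} inlinks for top-5 pages")
print(candidates[["url", "inlinks", "avg_position"]].to_string(index=False))
```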
Another quick win is to look at pages that have no links at all. Use a crawl source like the XML sitemap - which usually contains all the URLs of your website - and check if there are pages without links. Bigger websites, where some of the internal linking is automated, usually don’t have this problem. However, marketers will often create specific landing pages targeting specific keywords, and they forget to actually link to that page so it can win a useful spot. That leaves pages that have no links at all, but have a lot of SEO potential due to other factors, like the content on the page or the relevancy to specific keywords.
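One way to run that orphan check is to diff the sitemap’s URLs against the link targets found in a crawl. In this sketch the sitemap location and the crawl export format are placeholders:

```python
# Orphan pages: in the XML sitemap, but never a link target in the crawl.
import urllib.request
import xml.etree.ElementTree as ET

import pandas as pd

SITEMAP_URL = "https://www.example.com/sitemap.xml"  # placeholder

with urllib.request.urlopen(SITEMAP_URL) as response:
    tree = ET.parse(response)

# <loc> elements live in the sitemaps.org namespace.
sitemap_urls = {
    loc.text.strip()
    for loc in tree.iter("{http://www.sitemaps.org/schemas/sitemap/0.9}loc")
}

links = pd.read_csv("all_inlinks.csv")  # assumed column: target
linked_urls = set(links["target"])

for url in sorted(sitemap_urls - linked_urls):
    print("orphan:", url)
```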
I have another tip that is particularly relevant if the domain has a top-down structure. A webstore, for example, usually has a homepage that links to categories, which link to subcategories, and then to product pages. You have a structure like a pyramid, and Google needs to go through that structure from top to bottom for every product page to be discovered.
Think about how you can crosslink to the lowest level within the pyramid. How can you crosslink product pages, for example? Think about adding widgets for best-selling products, or biggest discounts, or widgets that generate useful links for users like ‘often sold with’. By doing that, you get much better coverage to allow Google to find all your product pages.”
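To see how deep the pyramid actually runs, a breadth-first search from the homepage gives each page’s click depth, and pages sitting many clicks down are the natural candidates for those crosslinking widgets. The edge list here is illustrative:

```python
# Click depth per page via breadth-first search from the homepage.
from collections import deque

# Illustrative internal link graph: page -> pages it links to.
edges = {
    "/home": ["/category/shoes", "/category/bags"],
    "/category/shoes": ["/sub/sneakers"],
    "/sub/sneakers": ["/product/red-sneaker"],
    "/category/bags": [],
    "/product/red-sneaker": [],
}

depth = {"/home": 0}
queue = deque(["/home"])
while queue:
    page = queue.popleft()
    for target in edges.get(page, []):
        if target not in depth:  # first visit = shortest click path
            depth[target] = depth[page] + 1
            queue.append(target)

for page, clicks in sorted(depth.items(), key=lambda item: item[1]):
    print(clicks, page)
```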
How do you determine which pages to target and whether a page is likely to perform better if you build more internal links to it?
“It’s always important to balance it out. Removing a link to a top-ranking page and directing that value towards another page may actually decrease the rankings of that currently top-performing page. It’s a balancing act.
If a page has 100 internal links and another has only 2, then it probably won’t matter for the higher-ranking page if you bring it down to 99 links and point the freed-up link at the other page. In terms of percentages, though, the other page gets a much bigger increase in link value: one page only loses 1%, while the other, going from 2 to 3 links, gains 50% compared to the previous situation.
I usually try to visualize it. There are a number of software tools available (usually based on the old PageRank algorithms) that can visualize the size and impact of making internal link changes. You can compare your link graph before and after the changes and then see what the actual internal PageRank values are before and after. Then you can see whether you are taking a risk by making those changes - like removing too much internal link flow from the top-ranking pages.
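A minimal version of that before/after comparison, using networkx’s PageRank as a stand-in for those visualization tools (the URLs and the planned link change are made up):

```python
# Compare internal PageRank before and after a planned link change.
import networkx as nx

before = nx.DiGraph([
    ("/home", "/top-page"),
    ("/home", "/category"),
    ("/category", "/top-page"),
    ("/category", "/weak-page"),
])

after = before.copy()
after.remove_edge("/home", "/top-page")  # the planned change...
after.add_edge("/home", "/weak-page")    # ...redirecting that link value

rank_before = nx.pagerank(before, alpha=0.85)
rank_after = nx.pagerank(after, alpha=0.85)

for url in sorted(rank_before):
    delta = rank_after[url] - rank_before[url]
    print(f"{url:<12} {rank_before[url]:.3f} -> {rank_after[url]:.3f} ({delta:+.3f})")
```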
These kinds of changes can easily be reversed. There may be a risk but, if it doesn’t work and rankings don’t improve, you go back to the original situation. It’s not a one-directional change you are making; it’s a continuous process.
You should also consider seasonality in this. Maybe during the summer you want to link to swimwear and in the winter you want to link to jackets. Even throughout the year, you may want to balance it out.”
When would you want to ensure that a page isn’t indexed and how could you do that at scale?
“I would start with crawling. First you have the crawling, and then the indexing. For the average website, crawling isn’t really a problem: Google spends enough time on your website to find all the pages and then decides, on the individual page level, what to do with each page. Let’s say you’re an Amazon, however. Google can’t update and crawl every page every day. They need to make a choice.
If you have 5,000 new blog posts published every day on your content network, it would be wise to create widgets that guide Google to the latest and freshest content. The same goes for the most recently uploaded products. You can help Google to prioritise their crawling by adding or removing internal links. The same principle applies if you have a section of the website that you don’t want to be crawled and indexed.
First of all, make sure that Google doesn’t end up there. Hide the links. You can use JavaScript to create a link that’s usable by a real user but not by Googlebot, so you can make sure that Googlebot can’t actually see the link behind the script.
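A first-pass check on that setup is to confirm the href never appears in the raw HTML source, since that is all a non-rendering crawler sees. Note this is only a partial check: Googlebot does render JavaScript, so the link must also not exist as a crawlable <a href> element in the rendered DOM. The URL and href below are placeholders:

```python
# Verify a JavaScript-inserted link is absent from the raw HTML source.
import urllib.request

PAGE_URL = "https://www.example.com/some-page"  # placeholder
HIDDEN_HREF = "/internal-search-results"        # placeholder

with urllib.request.urlopen(PAGE_URL) as response:
    html = response.read().decode("utf-8", errors="replace")

if HIDDEN_HREF in html:
    print("Link is in the raw HTML - any crawler can see it.")
else:
    print("Link is absent from the raw HTML; it only exists after JavaScript runs.")
```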
If Google is on the page (either through external links or other ways) then see if you want to canonicalise it or noindex it. That’s a choice that really depends on the overall model of the website. If it’s a product that’s temporarily not available, a noindex would do - because when Google comes by and the product is available again, the noindex is gone.
What you see with canonicals is that the whole value, and the search history of that URL, will be transferred from one page to another. Over time, Google starts crawling a URL that has a canonical on it less than a URL with noindex on it. It depends on the page but, overall, noindex is the safest option and canonical should only be used if you’re really sure that the page will never have any SEO value or purpose at all.”
What shouldn’t SEOs be doing in 2023? What’s seductive in terms of time, but ultimately counterproductive?
“So much time and money is spent on link building, but 99% of that activity doesn’t deliver any value. Link building used to be really effective, if done right, but Google has become quite good at determining which links are there for a genuine reason and which links are being placed mainly because SEOs think they should be there.
Consider blog networks - where every individual blog post contains external links. It’s really easy for Google to detect that aggressive linking is taking place, and therefore it usually only has a short-term impact. I’ve seen clients spend thousands of dollars buying links and the impact was basically zero. The agency that sold the links might have made up a nice story about there being an impact, but so many other factors also changed that it’s really difficult to attribute anything back to those individual links.
The majority of the links currently being placed with the idea of pushing SEO rankings are useless. Google has so much data about what a real link is, and what is useful for the user.
We talked earlier about the Reasonable Surfer model, and the same principle can be applied to external links. Check if the domain where you’re getting a link from is ranking itself. Is there any actual organic traffic going through that domain? In most cases, the websites that offer the links don’t rank at all anymore.
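A minimal filter in that spirit, assuming a prospect list exported from a third-party SEO tool with an estimated organic traffic column (the file name, columns, and the 500-visit cutoff are all assumptions):

```python
# Keep only link prospects whose domains still earn organic traffic.
import pandas as pd

prospects = pd.read_csv("link_prospects.csv")  # assumed columns: domain, organic_traffic

MIN_ORGANIC_TRAFFIC = 500  # assumed monthly cutoff
viable = prospects[prospects["organic_traffic"] >= MIN_ORGANIC_TRAFFIC]

print(f"{len(viable)} of {len(prospects)} prospect domains still rank for anything meaningful")
print(viable.sort_values("organic_traffic", ascending=False).to_string(index=False))
```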
Be really critical of what you’re doing in terms of link building, because there are still valuable links to be obtained but the number of crappy links being placed is much higher.
You really need to be doing quality link building, especially if the competition is fierce. Everybody is able to create quality content – in-depth content and content that aligns with the user intention – and the technical quality of websites is usually above average in competitive niches, so the only deciding factor that’s left is link building.
It’s not about the numbers - it’s about the quality and the relevancy. You need to spend a decent amount of time and effort actually getting those high-quality links. It’s also not about the scale. People still see link building as something you have to do at scale, so they buy full packages or close deals for 10 links a month for a specific amount of time and a specific amount of money, and never check whether the links actually send through any referral traffic, or have the right metrics.”
Jan-Willem Bobbink is a Freelance SEO Consultant and you can find him at notprovided.eu.