Emily suggests that you need to get started with SEO testing - and if you have a CRO team, you should be working closely with them.
Emily Potter says: Testing is an essential component of your SEO strategy, and the next evolution is testing with your product or CRO teams, focusing your changes on on-page metrics as well as SEO traffic. With Core Web Vitals, these two areas are blending much more - and they're a good source of big organic wins.
Why aren't more SEOs testing at the moment?
There's definitely a cost barrier to entry, and we're aware that solutions like SearchPilot are expensive and not for everyone. Outsourcing can also be difficult. The second issue is that a lot of people just don't realise they can test with the tools already available to them. If you're a smaller website, you have things like Google Tag Manager - it's not my ideal solution, but you can run tests in different ways. You can even run 'before and after' tests to get a better idea of what works. If you have more time, resources, and technical aptitude, there are approaches like Edge SEO. Availability of time and resources, like awareness, is a major reason why SEOs don't test more.
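At its simplest, a 'before and after' test just compares average daily organic sessions across two windows. A minimal sketch with hypothetical numbers (illustrative only - it ignores seasonality and algorithm updates, which is exactly why proper split tests are more trustworthy):

```python
def before_after_uplift(before: list[int], after: list[int]) -> float:
    """Percent change in average daily organic sessions between two windows."""
    avg_before = sum(before) / len(before)
    avg_after = sum(after) / len(after)
    return (avg_after - avg_before) / avg_before * 100

# Hypothetical daily session counts for the week before and after a change.
uplift = before_after_uplift([100, 110, 95, 105, 100, 98, 102],
                             [120, 125, 118, 122, 119, 121, 124])
```

A positive result here is only suggestive, not conclusive, since anything else that changed in the same period is mixed into the number.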
What are some easy ways to get started with testing?
Learning to split up your pages is important. I'm talking about SEO testing here, versus something like conversion rate optimisation testing. The main difference is that you're testing for just one user - Googlebot. The first thing to get used to is finding ways to split up similar pages on your website so you can test small changes. You need to identify pages that are similar and on the same template, change the title tag on half of them, and leave the test running and monitored for a while. The starting point is developing SEO hypotheses and getting used to the idea that some of the things we think are natural, best practice, or obvious may not be the best solution. How can we find more ways to get data, to make sure the things we're recommending are actually useful?
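As a sketch of that splitting step, pages on the same template can be assigned to control and variant groups with a stable hash, so each URL always lands in the same bucket. This is a hypothetical illustration (the URLs and the md5 choice are my own), not SearchPilot's implementation:

```python
import hashlib

def assign_bucket(url: str) -> str:
    """Deterministically assign a page to 'control' or 'variant'.

    Hashing the URL, rather than choosing randomly each run, means the
    same page stays in the same group for the life of the test.
    """
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return "variant" if int(digest, 16) % 2 == 0 else "control"

# Hypothetical category pages on one template.
pages = [
    "/category/red-shoes",
    "/category/blue-shoes",
    "/category/green-shoes",
    "/category/black-shoes",
]
groups = {url: assign_bucket(url) for url in pages}
```

Only the variant group would receive the title tag change; the control group is left untouched so the two can be compared.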
What would be worth testing in your title? Would you recommend starting with an offer, such as 'free shipping', or more permanent changes?
Yes, we've done lots of tests on different shipping offers. If you're going to roll out a promo, such as Black Friday, it's good to test offers. That way, if you see a big uplift one year, you know you can roll it out next year to get the gains. More permanent changes, like testing your prices or titles, are something a lot of eCommerce companies are doing. You can also test the phrasing of your head terms - cutting out the secondary keyword to test the primary keyword alone - and different ways of shortening your titles. It's important to focus on how you're going to change click-through rate as well, not just rankings.
Is it generally detrimental to have an offer that isn't keyword-rich near the beginning of a title? Is it more important to have something that's hyper-relevant to the offer for the reader to click through?
The tests I've seen when we've tested offers have been negative - but I wouldn't want to say that's a blanket rule. It's important to keep what's most useful for the user towards the front of your title tag, especially if something's truncating. We've seen mixed results from shortening title tags, so truncation is not necessarily a problem unless it's truncating out the part that your users are going to find useful and informative.
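If truncation is the concern, a rough check can flag titles that risk being cut off. Note that Google actually truncates by pixel width, not character count, so the 60-character threshold below is only a common rule-of-thumb approximation:

```python
def truncation_risk(title: str, max_chars: int = 60) -> bool:
    """Flag titles likely to truncate in search results.

    60 characters is a rule-of-thumb proxy; the real limit is pixel width.
    """
    return len(title) > max_chars

# Hypothetical titles: the long one pushes the offer past the cut-off.
titles = [
    "Red Shoes with Free Shipping on All Orders Over £50 | Example Store Online",
    "Red Shoes | Example Store",
]
at_risk = [t for t in titles if truncation_risk(t)]
```

Flagged titles are the ones where you'd want to check that the useful, user-facing part still appears before the cut-off.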
Can you also split-test the content on the page, and the impact it has on SEO?
Yes - content still matters. We've done a lot of SEO text tests because Google is always saying that SEO text doesn't really matter, and we're finding that content is consistently useful. Google definitely isn't at the level yet where it can understand a page without some help working out what that page is about. Definitely split-test on-page content. Once you get used to that mentality, there are all kinds of things you can test, such as changing your images, or the compression of images, for site speed.
Are you able to give a specific number of minimum words required for pages you're trying to rank for?
I'm not close to being able to provide a specific cut-off limit. We've had an eCommerce customer where they had no content at all, and we added a single line of copy and saw a 20% uplift to organic traffic. We've had others where they already had tonnes of content, and we added a little bit, and it did nothing we could detect. It's all relative to the page. If there's not really any existing content, it's probably going to help. Whereas, if you already have a lot of well-developed content, then it might not impact as much.
Is it important to have content quite high up within the page?
Yes. We're seeing that the higher above the fold content sits, the more importance it's given - it seems to have a bigger impact. For another customer, an aggregator of service providers, we tested adding unique content and placing it above the fold. We saw over a 20% uplift for SEO - but a 13% decline in conversion rate. Because of the traffic uplift, the net impact on conversions was still positive. Now we're testing the content lower down the page, to see if we can keep the SEO benefit without the conversion hit. It has helped, but not nearly as much as higher on the page. It appears that you get more from that content when it's further up the page.
Is it always possible for SEOs and CROs to work together to increase the relevance of testing, or is it sometimes more appropriate to just have one of those departments developing a test?
I find the companies doing it best have two different departments that work well together, but that doesn't always happen. Every SEO knows that you don't always see eye to eye when working with product teams. It is definitely a challenge.
There are still SEO tests that probably aren't going to have much impact on CRO - so it's more efficient for the SEO team to run their tests. When there's a test that impacts both teams, you can coordinate together to run it. There's a lot of project management required, because you can't run two tests on the same page at the same time. You always need to make sure there's not a product test going on simultaneously. Getting good ideas from each other is a good reason to work together, but there's also a business operation argument because otherwise, you're going to be clashing with each other.
Can you provide examples of tests that can positively impact both SEO and CRO?
Adding new components to the pages. For example, on travel websites, we've seen product teams develop maps that do cool things for users and end up achieving a really big uplift for SEO as well. It's only the product team that really has the skillset to develop these types of features. Also, because the SEO team was collaborating on the test, it helps with optimising all the SEO elements, such as the headings.
How long should tests run for, and how much data is required to derive a meaningful result?
For SearchPilot specifically, we say at least 1,000 sessions a day. Having said that, I've run tests that have worked with 200 sessions a day. When you lower the sample size, you're only going to detect big wins. If you have a huge website with a big sample size, getting thousands of sessions a day, you might get a result within two weeks. With less traffic, it might be three or four weeks, because Google takes longer to crawl those pages. Usually, there's some correlation between how much traffic a website gets and how often Google crawls and indexes it.
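The sample-size point can be made concrete with a rough power calculation: the fewer daily sessions you have, the larger an uplift must be before it's detectable. This is a generic normal-approximation sketch with illustrative numbers (including the hypothetical 5% baseline rate), not SearchPilot's actual methodology:

```python
import math

def minimum_detectable_uplift(sessions_per_day: int, days: int,
                              base_rate: float = 0.05) -> float:
    """Smallest relative uplift detectable at ~95% confidence and 80% power.

    Assumes sessions split evenly between control and variant, and a
    hypothetical 5% baseline rate (e.g. click-through) for illustration.
    """
    n = sessions_per_day * days / 2                # sessions per group
    se = math.sqrt(2 * base_rate * (1 - base_rate) / n)
    absolute = (1.96 + 0.84) * se                  # z(95% conf.) + z(80% power)
    return absolute / base_rate                    # as a relative uplift

# A site at 1,000 sessions/day can detect much smaller effects over the
# same two weeks than one at 200 sessions/day.
big_site = minimum_detectable_uplift(1000, 14)
small_site = minimum_detectable_uplift(200, 14)
```

With these assumed numbers, the smaller site can only surface uplifts roughly twice as large as the bigger one - which is the practical meaning of "you're only going to detect big wins".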
If you've got many different ideas you want to test, how do you determine which is likely to have the biggest meaningful impact?
To prioritise experiments, we consider two factors. Firstly, what we think the impact of this will be - and this is informed by our previous testing. For example, we know that internal linking tends to have a really big impact, while a meta description test doesn't usually move the needle. Secondly, we always consider the level of effort. So internal linking can provide a high impact, but there are a lot of complications around measurement, because you're changing two different sets of pages. Therefore, there's a lot more build that goes into this test. We're always playing between impact and effort.
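One simple way to encode that trade-off is an impact-over-effort score. The test names and 1-10 scores below are hypothetical, and the scoring scheme is my own illustration rather than SearchPilot's internal process:

```python
# Hypothetical candidate tests, scored 1-10 for expected impact and effort.
tests = [
    {"name": "internal linking",   "impact": 8, "effort": 7},
    {"name": "title tag phrasing", "impact": 5, "effort": 2},
    {"name": "meta description",   "impact": 2, "effort": 1},
]

# Rank by impact per unit of effort: high-impact, low-effort tests first.
ranked = sorted(tests, key=lambda t: t["impact"] / t["effort"], reverse=True)
```

Under this toy scoring, a moderate-impact, low-effort test outranks a high-impact test that needs a lot of build - which mirrors the impact-versus-effort balancing described above.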
What should an SEO stop doing to focus more time on testing?
Optimising and re-optimising content, particularly things like meta descriptions and alt tags. Even title tags are now getting rewritten by Google. You see more and more that these metadata changes matter less. They're still important to test - but the hours you spend on keyword research are not as valuable as they used to be. Also, you have tools like GPT-3 that are starting to generate content with AI. It's not at an advanced enough level to replace copywriters, but it can be a really useful tool - for example, generating title tags for your category pages - freeing up your time for more testing. Hopefully, AI will never be good enough to get rid of copywriters, because humans have the context, knowledge, and emotions that are essential to creating superior content.
Emily Potter is Head of Customer Success at SearchPilot.com.
If you'd like to get up close with your favourite SEO experts, these one-to-one interviews might just be for you.
Watch all of our episodes, FREE, on our dedicated SEO in 2023 playlist.
Maybe you are more of a listener than a watcher, or prefer to learn while you commute.
SEO in 2023 is available now via all the usual podcast platforms.
Opt-in to receive email updates.
It's the fastest way to find out more about SEO in 2023.