Before you start: what do you know about SEO split-testing? If you’re unfamiliar with the principles of statistical SEO split-testing and how SplitSignal works, we suggest you start here or request a demo of SplitSignal.
First, we asked our Twitter followers to vote:
0% of our followers guessed it right! The result was actually negative.
Read the full case study to find out why.
The Case
Happy National Golden Retriever Day. Did you know it’s also Doggy Date Night? If you have a Golden Retriever (or any dog), this is a special day for them; don’t neglect it. And even if you don’t, it’s still a special day for you, our valued reader, as we publish another split test result for your reading and learning pleasure.
We’ve seen (and remember) a lot when it comes to SEO over the years, and one quick-and-easy win of old was adding emphasis to an element to make it “more important” to search engines.
We’re often on the hunt for a quick win. They still exist, but finding one isn’t easy these days. What we can do is help you test your ideas before you deploy a hypothesis to an entire website.
The Hypothesis
Our client wondered whether adding a CSS font-weight treatment to the primary H1 on their product pages would have a positive effect on overall clicks to the treated pages. Our hypothesis was that this type of visual formatting would have no effect at all.
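To make the change concrete, the treatment amounted to something like the markup below. The class name, weight value, and product title are our own illustrative assumptions; the case study doesn’t publish the client’s exact CSS.

```html
<!-- Before: H1 rendered with the stylesheet's default weight -->
<h1 class="product-title">Classic Trench Coat</h1>

<!-- After: the same H1 with an explicit font-weight treatment -->
<h1 class="product-title" style="font-weight: 700;">Classic Trench Coat</h1>
```

Note that the change is purely visual: the element type (an H1) is identical in both versions.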
What do you think?
The Test
So, we set up the SEO experiment using SplitSignal. A percentage of the home-and-fashion ecommerce company’s product pages were chosen as the test variant, with an equal number of product pages serving as the control group. We kicked off the test and ran it for 28 days.
The Results
After almost a month, we observed a statistically significant, NEGATIVE result for the client, with Google crawling and indexing 70% of the test group during that time.
Overall, the test pages received 6.4% fewer clicks.
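To give a feel for how a click loss like this can be checked for statistical significance, here is a minimal sketch in Python. The daily click numbers are invented for illustration, and SplitSignal’s actual methodology (forecasting a counterfactual for the test group) is more sophisticated than this simple two-sample comparison.

```python
import math
import statistics

# Hypothetical daily clicks over one week; a real test runs 28 days
# and compares the test group against a forecasted counterfactual,
# not a raw control-group average.
control_clicks = [1200, 1180, 1220, 1210, 1190, 1205, 1195]
test_clicks = [1120, 1105, 1140, 1130, 1115, 1125, 1110]

mean_control = statistics.mean(control_clicks)
mean_test = statistics.mean(test_clicks)

# Relative change in daily clicks for the treated pages.
rel_change = (mean_test - mean_control) / mean_control

# Two-sample z-statistic for the difference in daily means;
# |z| > 1.96 indicates significance at the 95% level.
se = math.sqrt(
    statistics.variance(control_clicks) / len(control_clicks)
    + statistics.variance(test_clicks) / len(test_clicks)
)
z = (mean_test - mean_control) / se

print(f"relative change: {rel_change:.1%}, z = {z:.1f}")
```

With these made-up numbers, the test group shows a click loss in the same ballpark as the case study’s 6.4%, and the z-statistic falls well past the conventional significance threshold.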
LOCOMOTIVE Agency Analysis
While it can be disappointing not to get a revenue win from a test, it’s still a net positive from our perspective: had this change been deployed to the entire website without testing, the revenue loss could have been significant, leaving egg on the face of you or your team.
Did you guess the right outcome?
Quite simply, Google largely ignores irrelevant CSS when making ranking decisions. The primary signal for a heading is the element type itself. While adding emphasis to an element used to produce results back in the day, it doesn’t anymore, and in this instance it had a net negative effect.
Why do you think there was a negative result, rather than a neutral (i.e., no-effect) result?
We’ll get to that in a minute, but first, a little more information for you. During the testing period, the test group saw a loss of clicks overall, but no loss of impressions (in Google Search Console). A loss of clicks without a loss of impressions points to a change in ranking for the tested pages: they fell a place or two from their previous position, so they were still shown to searchers but clicked less often.
The reason for this is speculative, but I would surmise that Google picked up on the attempt to overtly influence the algorithm and devalued the pages. Do you agree? If not, share your opinion in the comments for the community.
Happy testing!
Have your next SEO split-test analyzed by the technical SEO experts at LOCOMOTIVE Agency.