Logan Ice
5 min read · Dec 8, 2020


I love split testing. It’s such a simple concept that can have SO MUCH impact on your success if done properly! At its core, split testing boils down to using multiple experiences on your website / email blast / ad copy / etc. to learn more about your target customer and how best to engage with them. People are already looking at your content; why not use that interaction as an opportunity to learn?

Before we get into the “how”, let’s take a look at a real-world example to help understand what I mean by “multiple experiences.” Check out Medium, below —

Top of my homepage on Desktop

Where does your eye go first? I’d be willing to bet it goes to the far left, large image of the article from UX Collective. Our brains naturally read top to bottom, left to right (at least in English-speaking areas — reading patterns differ for languages written in other directions), and Medium is taking that a step further by increasing the size of the first article’s callout to make sure you see it.

Did they do this by chance? Did some all-knowing marketing guru stand up in a room and say “This is the most effective layout, trust me and don’t fact-check me”? Odds are this was one of several tests run, in which traffic to the site was split between this experience and another — maybe something similar to the mock-up below. The change of layout is what I’m calling an “experience,” or how the user sees and interacts with your page.

Where does your eye go now? You likely still start in the top left, but now there are three stories competing for your attention. Maybe that drives more engagement, maybe it’s better to have one key story like the currently live version does. The only way to truly know is testing.
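To make “splitting traffic between experiences” a little more concrete, here’s a minimal sketch of how a site might deterministically assign visitors to one layout or the other. The visitor ID, experiment name, and 50/50 split are my own illustrative assumptions, not how Medium actually does it.

```python
# Minimal sketch: deterministic 50/50 traffic split.
# visitor_id, the experiment name, and the variant labels are illustrative assumptions.
import hashlib

def assign_variant(visitor_id: str, experiment: str = "homepage-layout") -> str:
    """Hash the visitor + experiment so the same person always gets the same experience."""
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable bucket in [0, 100)
    return "single-hero" if bucket < 50 else "three-story-grid"

print(assign_variant("visitor-123"))  # same visitor -> same variant on every visit
```

Hashing (rather than flipping a coin on every page load) keeps each visitor’s experience consistent, so their behavior isn’t muddled by seeing both layouts.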

For fans of the scientific method (yes, I’m a nerd. Sue me. 🤓), what I’m about to describe should come naturally. Much like the scientific method, it all starts with research and a hypothesis. You wouldn’t immediately say “let’s make all icons on our site greyscale,” would you? There has to be a reason for your change, both in terms of why you’re proposing it and what you’ll measure to determine success.
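If it helps, here’s one way you might write that reasoning down before a test goes anywhere near production. The field names and example values are hypothetical, just to show the shape of a hypothesis-first plan.

```python
# A lightweight, hypothetical "experiment plan" -- field names are my own, not a standard.
from dataclasses import dataclass

@dataclass
class ExperimentPlan:
    hypothesis: str       # why you believe the change will help
    change: str           # the new "experience" being introduced
    success_metric: str   # what you'll measure to call a winner
    minimum_lift: float   # smallest improvement worth acting on

plan = ExperimentPlan(
    hypothesis="One oversized hero article focuses attention and earns more clicks",
    change="Enlarge the first article's callout on the homepage",
    success_metric="Homepage article click-through rate",
    minimum_lift=0.02,  # e.g. +2 percentage points
)
```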

Changes need to matter. That’s why there are a few different types of significance to check for when you’re testing on your site. Let’s check them out —

  1. Statistical — I won’t bore you with the details on this one. There are plenty of articles out there from Survey Monkey, Google, Optimizely, or any number of other sources. TL;DR — tests have to run until you can be reasonably confident that the difference in results is explained by a true difference in behavior rather than just variation/noise (plug here for The Signal and the Noise, a phenomenal read by Nate Silver). There’s a rough sketch of one such check right after this list.
  2. Business — You can have a better result, but if it doesn’t move the needle for your business, your time is better spent in other places! Let’s say you increase your site’s conversion rate from 14.5% to 14.7% for a given subset of visitors. Congrats! Your test won! But think about it: over 1,000 visitors, that’s only two more sales. Is that enough to justify the time spent designing, launching, monitoring, and calling the test? Unless you’re seeing tens of thousands of visitors, probably not. It’s best to focus on tests that are likely to have a major impact on your bottom line (the quick math is sketched out after this list).
  3. Logical — So a test ran, the result was statistically significant, and it had enough impact on your business to act on. Great! What’s next? How do you move on to a new test? That’s where “Logical Significance” comes in — understanding the underlying user behavior that influenced the result. Once you understand the “why” behind your test, you can use that knowledge to build a more effective next test and start stacking up those sweet, sweet compounding gains.
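For the statistically curious, here’s a rough sketch of the kind of check an A/B testing tool runs under the hood: a two-proportion z-test comparing conversion rates between variants. The visitor and conversion counts below are made up for illustration.

```python
# Rough sketch of a two-proportion z-test for an A/B test.
# The counts below are made-up numbers, not real data.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)                 # combined conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))   # standard error of the difference
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))             # two-sided p-value
    return z, p_value

z, p = two_proportion_z_test(conv_a=145, n_a=1000, conv_b=190, n_b=1000)
print(f"z = {z:.2f}, p-value = {p:.3f}")  # a small p-value means the gap probably isn't just noise
```

In practice you’d decide the sample size before launch and resist the urge to peek early; stopping a test the moment it looks significant is a classic way to fool yourself.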
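And here’s the back-of-the-envelope math behind the business-significance example above, a 14.5% to 14.7% lift over 1,000 visitors:

```python
# Back-of-the-envelope business impact for the example above.
visitors = 1_000
baseline_rate, test_rate = 0.145, 0.147

extra_sales = visitors * (test_rate - baseline_rate)
print(f"Extra sales per {visitors:,} visitors: {extra_sales:.0f}")  # ~2

# The same 0.2-point lift matters a lot more at higher traffic volumes.
print(f"Extra sales per 100,000 visitors: {100_000 * (test_rate - baseline_rate):.0f}")  # ~200
```

Multiply that by your average order value and compare it to the cost of designing, launching, and monitoring the test; that’s the “business significance” gut check.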

Ok, ok, I know, that’s a lot of detail. Almost done, I promise. The last part I want to talk about is how to A/B test in the first place. Everyone’s business is different, and there’s no one-size-fits-all solution — but I’ll do my best to walk you through some (hopefully) self-explanatory steps.

For a large company, enterprise, Fortune 500 type place, I’d recommend working with some experts. I’m a digital strategy consultant at Slalom, which I would obviously recommend looking more into (I wouldn’t work there if I didn’t love it). There’s just no way to make a single recommendation for such large or complex situations, but I can tell you that Slalom (and I!) can help.

For a startup, SMB, midsize, or similar company — I would highly recommend checking out Google Optimize. There’s no code required, and it only takes a few steps to start testing. You can literally create a brand new experience and have it live on your site in under 15 minutes 🤯. Check out the short steps in the image below, each of which is literally just a few clicks.

Phew. That was a lot, wasn’t it? Long story short, A/B testing is where statistics meets marketing, aided by their good friend psychology, to continually improve the user experience of a given interaction. I tried to hit the high notes and keep things accessible with this — I hope you enjoyed it and got something valuable out of it. Did you learn anything new? Anything I left out that should have been covered? Leave a comment and let me know!

Thanks for reading, and tune in next time for the somehow-even-nerdier: How Dungeons and Dragons Improves Your Communication/Teamwork Skills!

Logan is a Digital Strategist located in St. Louis and is passionate about Startups, Marketing, Psychology, and Data Science. Connect with him on LinkedIn to grab a virtual coffee and chat, and follow him here on Medium for more stories and unsolicited advice!
