
Here's how to set up, run, and react to your split tests

We’ve advised over 140 companies—including Amazon, Dropbox, and WIRED—and run thousands of split tests.

Those tests have made tens of millions of dollars.

Today, I’ll show you:

  • How to discover what’s working on your site (and what isn't)
  • How to prioritize test ideas
  • How to implement the tests
  • What tools you should use
  • How to react to the results

Let’s get started.

What is split testing?

Split testing, also known as A/B testing or conversion testing, involves comparing multiple versions of a webpage or element to determine which performs better.

For example, you can “split” up traffic to your homepage.

  • The original—also called the “control”—gets half your traffic.
  • The variation—also called the “treatment”—gets the other half.
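
If you're curious how tools do this under the hood: most assign each visitor to a bucket by hashing a visitor ID, so returning visitors always see the same version. Here's a minimal sketch in Python (the function and the hard-coded 50/50 split are illustrative, not any particular tool's implementation):

```python
# A minimal sketch of sticky 50/50 assignment. Hashing the visitor ID
# means the same visitor always lands in the same bucket on repeat visits.
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Deterministically bucket a visitor into control or treatment."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100  # a number from 0 to 99
    return "control" if bucket < 50 else "treatment"

print(assign_variant("visitor-12345"))  # same input -> same variant, every time
```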

You then look at the results. Did the new homepage increase sales? Great! Promote it so that it's your new homepage. If it didn’t increase sales, keep the original.

And either way, you repeat the process.

Over time, this process makes your pages more effective, user-friendly, and profitable.

Here’s how to get started.

Find out what’s working—and what isn’t

Here’s a common mistake marketers make when split testing: They jump right into a list of things they want to change.

This is a bad idea.

Instead, gather feedback about what’s working with your website (and what isn’t).

  • Use Hotjar surveys to ask users “If you didn’t buy today, could you explain why? Thanks!” (You can fire these on-page surveys when users are about to leave the page.)
  • Use Hotjar heatmaps to see what elements people click on.
  • Use email surveys to ask your current customers (or prospects) where they first heard about you, what convinced them to buy, and what almost stopped them from buying. (You can do this easily with a Google Form or Typeform.)
  • Run usability tests. Either with a company like UserTesting.com or by asking someone you know to use the site and “speak their thoughts out loud.”
  • Look at your analytics. What are the top pages? Which ones generate the most revenue? Start with those.

If you do just these five things, you’ll be light years ahead of most marketers and designers.

Generate your test ideas

As you gather your feedback—using the five approaches we reviewed—ideas will naturally present themselves.

For example, your email survey reveals people love feature X of your product. You could then test a headline that calls out feature X specifically.

Or your user tests show that people love the testimonials you have at the bottom of your homepage. You could then test moving those testimonials above the fold (or right next to the “Buy” button; this is a nice place for reassurance).

Chances are, you’ll have lots of ideas to test. But don’t pick one yet! Focus on building out your list of ideas first.

You can put your ideas into Notion, a Trello board, or a Google Sheet.

Prioritize your test ideas

Now that you’ve got a list of test ideas, it’s time to prioritize them.

Grade each of these on a scale of 1–5:

  • How likely is this idea to win? Does it directly address an issue you discovered in the research? (1 = low probability; 5 = high)
  • How easy is it to implement? Changing text is easy; changing your business model is hard. (1 = hard; 5 = easy)

Now multiply those two numbers together and, voila, you’ve got a list of prioritized ideas.
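
If you keep your list in a spreadsheet this is just one extra column, but here's the same scoring as a quick Python sketch (the idea names are made-up examples pulled from earlier in this post):

```python
# Grade each idea 1-5 on win-likelihood and ease, multiply, then sort.
ideas = [
    {"idea": "Headline that calls out feature X", "likelihood": 4, "ease": 5},
    {"idea": "Move testimonials above the fold", "likelihood": 3, "ease": 4},
    {"idea": "Offer a 30-day free trial", "likelihood": 4, "ease": 2},
]

for idea in ideas:
    idea["score"] = idea["likelihood"] * idea["ease"]

# Highest score = test first
for idea in sorted(ideas, key=lambda i: i["score"], reverse=True):
    print(idea["score"], "-", idea["idea"])
```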

Next, you’ll want to…

Set up your tests

There are two ways to set up your tests:

  1. You can code the variation yourself; or
  2. You can set up the variation directly in split testing software (no coding required).

If you are technical, you can code your variation yourself. If you’re not technical—or you work with a team and don’t have time to release new pages—you are far better off with split testing software.

Split testing software

Split testing software is a must.

The best tools are easy to use (especially for non-technical people) and offer great support. Four that I’ve used and recommend are:

  • Convert (simple to use; great for sites that use AJAX)
  • VWO (long-running; also offers heatmaps and surveys)
  • AB Tasty (similar to VWO)
  • HubSpot (ideal if you already use HubSpot’s CRM)

In addition to the above, many third-party tools offer split testing. For example, OptinMonster and ConvertKit let you split test your opt-in forms without using a standalone tool like Convert or VWO.

How long should you run a test?

There are six factors that affect how long a test should run:

  1. Traffic: More visitors make it faster to get reliable results.
  2. Conversions: The more people convert, the quicker you'll get meaningful results.
  3. Current conversion rate: Higher-converting websites see faster outcomes, needing less test time.
  4. Number of variations: Testing more options takes longer as you gather enough data for each one.
  5. Confidence levels: Higher confidence requires bigger samples and more testing to avoid wrong conclusions. Finding a balance is important.
  6. Difference in conversion rates: If the expected change is big, you'll reach reliable results faster.

Use an A/B test duration calculator (from VWO) to estimate how long your test will need to run.

At a minimum, run the test for at least one week (and preferably longer) to smooth out any daily fluctuations.

The short answer: Test until your split testing software declares the test is statistically significant.
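
If you'd like a rough number before you open a calculator, a common rule of thumb puts the required sample size per variant at roughly 16 * p(1 - p) / d^2, where p is your baseline conversion rate and d is the absolute change you hope to detect (this approximates 95% confidence and 80% power; a proper calculator will be more precise). A sketch with made-up numbers:

```python
# Back-of-the-envelope test duration estimate.
# Rule of thumb: ~16 * p * (1 - p) / delta^2 visitors per variant,
# which approximates a 95% confidence / 80% power test.

def estimated_days(baseline_rate, relative_lift, daily_visitors, variations=2):
    delta = baseline_rate * relative_lift  # absolute difference to detect
    n_per_variant = 16 * baseline_rate * (1 - baseline_rate) / delta**2
    return n_per_variant * variations / daily_visitors

# Example: 3% conversion rate, hoping for a 20% relative lift, 500 visitors/day
print(round(estimated_days(0.03, 0.20, 500)), "days")  # ~52 days
```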

How to react to the results

Split tests end in one of three ways.

They either:

  • Win
  • Lose
  • Tie (are inconclusive)

Your split testing software will tell you if your test wins or loses. A test is inconclusive if it has run for several weeks with no winner or loser declared. Pro tip: run your test for the length of time recommended by the split test duration calculator mentioned in the previous step.
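
If you're wondering what your software is doing when it "declares" a winner, most tools run something like a two-proportion z-test. Here's a minimal sketch with made-up numbers (real tools add corrections, and some use Bayesian methods instead):

```python
# A minimal two-proportion z-test: is variant B's conversion rate
# significantly different from variant A's?
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Return the z-score and two-sided p-value for two conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under "no difference"
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = z_test(conv_a=120, n_a=4000, conv_b=158, n_b=4000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 -> significant at 95% confidence
```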

If a test wins:

  1. Promote the winner immediately. You can do this by sending 100% of the traffic to the winning variation. (Note: this approach works in the short term. Eventually you'll want to hardcode the winner into your site.)
  2. Repurpose the winning element elsewhere. Did your new homepage headline win? Use it on your landing page, ad copy, email subject lines—wherever it makes sense. Rest assured that you’re building your marketing materials on proven winners (instead of just guessing).

If a test loses:

  1. Celebrate. Seriously. Losing a split test often leads to big future wins. Why? Because you’ve found something that strikes a nerve with your audience. And once you’ve found that “nerve”, you can…
  2. Test the opposite approach. For example, let’s say you currently offer a 14-day free trial. You test a 7-day free trial and it loses. Since you’ve learned your audience is sensitive to the free trial period, your very next test should be a 30-day free trial.

If a test is inconclusive:

  1. Persist or pivot. Ask yourself: Was this test inconclusive because of the idea, or the execution? If it’s the idea, pivot to another idea. If it’s the execution, try another variation. For example, let’s say your headline test was inconclusive. The idea was good (since headlines are powerful levers for conversion), but the execution wasn’t (since the test was inconclusive). Try more headlines.
  2. If you persist, try radically different variations. Most inconclusive tests are too timid; make your tests bolder: use stronger guarantees, longer trial periods, different testimonials (which appeal to different values).

Wrapping up

By following these steps, you can optimize your website, enhance user experience, and improve your conversion rates.

Remember, though, that split testing is an ongoing process—so keep experimenting, learning, and optimizing.

Got questions? Let me know in the comments below!

Want sweet website reviews and tips to increase your sales? Follow Funnel Candy on LinkedIn or Twitter, or subscribe to our newsletter.

June 12, 2023

18 Comments
  1.

    @funnelcandy at what point would you advise setting these up? Probably not necessary for the MVP, I'm guessing. So at what point do you start? And then how do you determine how much focus to put into it? I could see myself setting up tests all the time and never actually growing my product. 😅

    1.

      Great question. At a minimum, you'd want 50 conversions per month. A conversion can be a sale, lead, or click through to your checkout page.

      If you're just getting started, I'd suggest running 5 user tests on your funnel to get feedback and iterate from there. (Studies show that 5 user tests uncover 85% of usability issues, so there's no point doing more than 5 user tests at a time.)

  2.

    This is a great read, and you can apply this advice to anything that involves split tests, not just landing pages! Would you also run ads (if you are running ads) into these landing page split tests to get more visitors/conversions as the test runs? Or just rely on organic traffic?

    1.

      You can definitely run paid ads to the page as well. It's helpful to segment your results by paid traffic, too, as it often behaves differently than organic traffic.

      1.

        Will definitely keep that in mind, thank you!

  3.

    I actually think step 1 is to make sure you have enough traffic to make an A/B test meaningful to run. Too often people are running tests that take them 3 months to reach statistical significance.

    1.

      Good point. Though you can reduce the time to test by reducing the number of variants, choosing a different conversion (e.g. leads vs sales), or, if it will still take too long, running user tests where you show people the control and your variation.

      You can do this last option before you even launch. It's a great way to iterate your MVP.

  4.

    In many instances, small companies don't have enough traffic to achieve stat sig results from website traffic in an acceptable amount of time, especially if they are not running paid ads to their website continuously. How would you consider and also implement a multi-armed bandit approach to get conclusive tests faster?

    1.

      Great question. Here are two solutions:

      1. Redefine your conversions to reduce the time to reach statistical significance.

      For example, you can focus on leads rather than sales. Or clicks to the checkout page.

      Then, as your traffic grows, you can start focusing on sales as your conversion.

      2. Decrease your level of statistical significance. For example, most split testing software declares a winner at 95% confidence. But if you accept 90% (which is perfectly fine!) you only need half the traffic.
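
      As for the multi-armed bandit part: the simplest version is epsilon-greedy, where you send most traffic to whatever is winning so far and keep a slice for exploration. A minimal, illustrative sketch (real bandit tools typically use something more sophisticated, like Thompson sampling):

      ```python
      # Epsilon-greedy: exploit the best-performing variant 90% of the time,
      # explore a random variant the other 10%. Numbers are illustrative.
      import random

      def choose_variant(stats, epsilon=0.1):
          """stats maps variant -> (conversions, visitors)."""
          if random.random() < epsilon:
              return random.choice(list(stats))  # explore
          return max(stats, key=lambda v: stats[v][0] / max(stats[v][1], 1))  # exploit

      stats = {"control": (30, 1000), "treatment": (45, 1000)}
      print(choose_variant(stats))  # usually "treatment" (higher rate so far)
      ```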

      Did that answer your question?

  5.

    What methods do you recommend for gathering user feedback before forming a hypothesis for a test?

  6.

    I didn't know all this stuff. Thanks for sharing.

  7.

    Define your goal and choose a variable to test.
    Create variants, split traffic evenly, analyze results, and implement the winning variant.
    Iterate and optimize based on insights gained from the split test.

  8.

    @funnelcandy When would you suggest setting these up? I imagine it won't be necessary for the MVP. Therefore, when do you begin? How do you then decide how much attention to give it? I could see myself constantly setting up tests without ever expanding my product.

    1.

      You can see my response to midwestFounder.

  9.

    This is helpful, thanks!

    1.

      Glad you liked it; let me know if you have any questions.

  10.

    This comment was deleted 2 years ago.

    1.

      This article may serve as a perfect methodology to build an AI split-tester.

