
Getting Started With A/B Testing in Marketing

Julie Zhou

VP of Digital Marketing and Growth

Growth marketers rely on data to do their jobs well. Part of that job is analyzing performance data properly; failing to assess that data correctly can lead to the wrong conclusions and lackluster ROI. With A/B testing, you can be creative and find out exactly what works for your marketing campaigns.

Let’s walk through the basics of A/B testing, including how it works, what you need to know to start, and some examples of potential tests you could run with minimal hassle.

What is A/B Testing in Marketing?

A/B testing, also known as split testing or bucket testing, is one of the most powerful tools in a marketer’s kit. It’s a method of comparing two different user experiences against each other to determine which one drives better results.

First, you’ll randomly divide your target audience into a test group and a control group, then show a different experience to each group over the same period.

How Does A/B Testing Work?

A/B testing typically has three significant steps. In the first, you’ll randomly divide your audience into two groups. Most digital marketing testing tools, like AB Tasty or Optimizely, can automate the selection so that it’s truly random.

Each equal-sized group will see a different experience. The control group will see the original version of the website, while the test group will experience the change(s) you’ve implemented. 
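
Testing tools handle this split for you, but to make the mechanics concrete, here is a minimal Python sketch of one common approach: deterministic hash-based bucketing. The experiment name and user IDs are hypothetical, and real tools layer much more on top (traffic allocation, exclusions, and so on):

```python
import hashlib

def assign_group(user_id: str, experiment: str = "homepage-cta") -> str:
    """Deterministically bucket a user into 'control' or 'test'.

    Hashing the user ID together with the experiment name gives a stable,
    roughly 50/50 split without having to store assignments anywhere.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100          # value in 0..99
    return "test" if bucket < 50 else "control"

# A returning visitor always lands in the same group
print(assign_group("user-12345"))   # e.g. 'control'
print(assign_group("user-12345"))   # same result on every call
```

Because the assignment depends only on the user ID and the experiment name, a returning visitor always sees the same version, which keeps the two groups cleanly separated for the length of the test.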

Once you’ve settled on A/B testing, choose a dependent variable, or primary metric, to focus on before the test begins. This dependent variable will change based on how you manipulate the independent variable (the element you’re changing).

After you reach statistical significance, you’ll calculate and compare results. If, at the end of the experiment period, you have observed a conversion rate of 23% across the control group and 35% across the test group, with the only significant difference being the designed experience, then you can conclude that your change in experience caused the improvement.
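
To make that comparison concrete, here is a minimal Python sketch of a two-proportion z-test, a standard way to check whether a gap like 23% vs. 35% is statistically significant. The visitor counts (1,000 per group) are hypothetical and chosen only to match those rates:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for the difference in
    conversion rates between a control group (a) and a test group (b)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)                   # pooled rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))     # standard error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))      # normal approx.
    return z, p_value

# Hypothetical counts matching the 23% vs. 35% example (1,000 visitors per group)
z, p = two_proportion_z_test(conv_a=230, n_a=1000, conv_b=350, n_b=1000)
print(f"z = {z:.2f}, p = {p:.4f}")   # a p-value below 0.05 is the usual bar
```

With a gap that large, the p-value comes out far below the conventional 0.05 threshold; with smaller lifts or smaller groups, the same check will often tell you to keep the test running.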

While you can test multiple variables simultaneously, most experts recommend isolating one independent variable. If you test multiple elements simultaneously, you’re performing multivariate testing.

Multivariate testing has its place, especially if you’re trying to determine how different variables interact. Whether to focus on A/B testing vs. multivariate testing largely depends on your sample size, time constraints for your test, and the number of variables you have available.

Elements of an A/B test

The elements or variables that typically change for test experiences include: 

  • Calls to action (CTA): There are various ways to test CTA buttons: color, size, and placement, as well as the copy inside them (something unique versus generic, such as “Subscribe Now!”).

  • Ad copy: When it comes to copy, you can test a variety of things, including voice, tone, and length. Test product descriptions, copy that lives on your landing page, and more. 

  • Images: Test to see what types of images resonate with your audience the most—for instance, do they prefer images with people or illustrations? 

  • Email subject lines: Test different subject lines to see what makes people open your emails (or send them right to the trash). 

  • Landing pages: Test everything from product descriptions and the layout to videos and headlines.  

Why is A/B Testing Important?

Imagine that you change something on your website, and your signup rate increases around the same time. You may be tempted to attribute the results to the change you made, but without a control group you cannot tell whether your change caused the result or whether the two simply coincided. The world we live in is constantly changing; your signup rate improvement could’ve been caused by a change in your audience makeup, seasonal shifts, an unexpected press hit, or even random chance.

Without a control group, you can’t confidently conclude whether your actions had the desired effect. Ideally, all lift testing, including A/B testing, starts from a clean experimental framework: a proper control group, random assignment, and a consistent measurement window.

While A/B testing sounds simple in theory, setting up a proper A/B test can be quite challenging and is something that many people get wrong. If your tests aren’t executed properly, your results will be invalid, and you will rely on misleading data.

How to Do A/B Testing Step by Step

1. Define your goals 

Before running a cross-channel A/B test, define clear goals to ensure the test is focused and actionable. Without a clear purpose, it’s easy to get caught up in testing for testing's sake, wasting time and resources. One key consideration is to avoid testing irrelevant features. Instead, focus on components relevant to the business goals that you expect to impact your metrics significantly.

Speaking of metrics, these also need to align with your goals. Select key metrics that are meaningful success indicators that reflect the outcomes you want to achieve. By aligning your metrics with your goals, you can ensure you're measuring the right things and making decisions based on relevant data. 

When you clearly understand what you're trying to achieve, you can more easily evaluate the results of your tests and determine whether they have been successful. This can help you make informed decisions about which changes to implement and which channels to prioritize, ultimately leading to better outcomes for your business.

For example, you may want to improve the conversion rates of your website, landing pages, or other digital assets. Perhaps you’re looking to enhance user engagement and want to focus on metrics such as the amount of time users spend on your site, page views, or click-through rates. Interested in upgrading your product marketing strategy? Use A/B testing to experiment with different variations of product features, such as pricing, packaging, or functionality.

2. Identify your channels 

Companies executing a diverse channel mix must remember that characteristics vary from channel to channel, which affects the outcomes each one drives. Focus your testing efforts on the channels that drive the most traffic or revenue, for example, and prioritize the improvements that will impact your business the most. Channels also have different limitations and audiences: some may have limited targeting options, while others may have strict ad policies that restrict the types of ads you can run.

Different channels may also attract different audiences, which affects the messaging and creative you can test. Tailor your testing strategy to each channel’s specific limitations and audience to make the most of each channel’s unique strengths and opportunities.

3. Create your test 

Creating different variations of your test is critical to successful A/B testing because it allows you to compare variables and determine which advertising elements are the most effective at driving your desired outcomes.

Make sure to align your testing elements with your previously established goals. For example, if increasing your website's conversion rate is your primary objective, test different aspects of your landing pages to see which variations lead to the highest conversion rates. This can include headlines, calls to action, or images. To boost user engagement across social media, try different types of content and messaging on each platform. 

Create enough variations of your test to give a comprehensive picture of how each element performs. This level of granularity is vital to helping companies pinpoint specific areas that need improvement or optimization. Keep in mind, however, that every additional variation splits your traffic further.

If each variation sees too few visitors, there may be insufficient data to draw accurate conclusions about which one is most effective. The more variations you run, the more total traffic you need before each one reaches a sample size large enough for statistically reliable results.

4. Run the tests 

As with any experiment, it’s important to remember that your sample size needs to be large enough. If your sample size is too small, you risk introducing bias or other errors into your results, which can lead to inaccurate conclusions and ineffective marketing strategies. The larger your sample size, the more accurate and reliable your test results will be. For this reason, we recommend running tests on pages that regularly see high traffic.

A larger sample size helps to minimize the impact of random variation or outliers and increases your confidence that any differences between your A and B groups are meaningful and not simply due to chance. Successful ad testing also requires random assignment. Randomization helps ensure that the sample is representative of the population, increasing the reliability and validity of the results and allowing for better decision making based on the data collected.
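
If you want a rough sense of how much traffic “large enough” means, the standard two-proportion sample size formula gives a back-of-the-envelope estimate. The sketch below assumes a 95% confidence level and 80% power, and the 5% baseline and 6% target conversion rates are purely hypothetical:

```python
from math import ceil

def sample_size_per_group(p_baseline, p_expected, z_alpha=1.96, z_power=0.84):
    """Rough visitors needed per group to detect a lift from p_baseline to
    p_expected with ~95% confidence and ~80% power (normal approximation)."""
    variance = p_baseline * (1 - p_baseline) + p_expected * (1 - p_expected)
    effect = (p_expected - p_baseline) ** 2
    return ceil((z_alpha + z_power) ** 2 * variance / effect)

# Hypothetical: baseline conversion of 5%, hoping to detect a lift to 6%
print(sample_size_per_group(0.05, 0.06))   # roughly 8,000+ visitors per group
```

Small expected lifts drive the required sample size up quickly, which is another reason to run tests on high-traffic pages.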

Now that you know the fundamentals of running a cross-channel A/B test, how do you ensure you’re interpreting the results in a way that can lead to positive decision making within your business?

How to Read the Results of an A/B Test 

1. Measure the same metric across all channels

Measuring the same metric across all channels in a cross-channel A/B test is essential to compare different mediums' effectiveness directly. Companies can determine which channel performs best by measuring the same metric, such as conversion rate, engagement rate, or click-through rate, and adjust their marketing strategies accordingly.

If different metrics are measured, it can be challenging to compare their performance accurately. For example, if one channel is measured by conversion rate and another by engagement rate, it may be unclear which channel performs better overall regarding your company’s desired outcomes. 

2. Identify patterns

After conducting an initial review of the data collected, it can be helpful to group the data by channel. This makes it easier to compare and identify trends within or across channels. By doing so, you can gain insights into how different channels interact with each other and how changes in one channel may impact the performance of another.
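
If your testing tool lets you export raw results, grouping them by channel takes only a few lines in a library like pandas. The data below is hypothetical and simply illustrates the shape of the summary you’d review:

```python
import pandas as pd

# Hypothetical results exported from your testing tool
results = pd.DataFrame({
    "channel":     ["email", "email", "social", "social", "display", "display"],
    "variant":     ["control", "test", "control", "test", "control", "test"],
    "visitors":    [4200, 4150, 9800, 9900, 3100, 3050],
    "conversions": [210, 260, 345, 410, 62, 58],
})

# Group by channel and variant, then compute each cell's conversion rate
summary = (
    results.groupby(["channel", "variant"])[["visitors", "conversions"]]
    .sum()
    .assign(conv_rate=lambda d: d["conversions"] / d["visitors"])
)
print(summary)
```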

For instance, if an A/B test reveals that a specific type of content performs well on social media and email, this could indicate an interdependence between the two. Use the insights to shift your cross-channel marketing approach, such as incorporating similar content into other channels or increasing your integration between social media and email campaigns. Furthermore, identifying patterns across channels can assist you in establishing areas of weakness or opportunity within your marketing tactics.

For example, if a particular type of content consistently performs poorly across all channels, this could indicate that the content needs reevaluation and a different approach. Similarly, if specific channels consistently perform better than others, that could suggest allocating more resources to the top performers or reevaluating the rest.

3. Evaluate the impact 

Evaluating the impact of cross-channel A/B test results involves determining whether the test had a positive, negative, or neutral effect on key metrics. To assess the impact:

  • Compare the results of the winning variant with the control group to determine the degree of improvement. 

  • Look for statistically significant differences between the two groups to conclude whether the improvement is valid or due to chance.

  • Use these conclusions to optimize marketing performance across all channels and improve overall ROI.

Evaluating the impact of a cross-channel A/B test also provides a baseline for measuring progress over time. By conducting regular tests and comparing the results to previous tests, you can continuously monitor your marketing campaign performance and adjust as needed.

A/B Testing Best Practices

Once you know the basics of structuring your A/B tests, you can improve your efforts. The following best practices help ensure you run an A/B test that provides accurate results with minimal confusion.

Choose your control groups carefully and objectively

Think of the most significant customer segments that you have. Do customers from different geographical regions behave differently? How about customers from various industries? Different tiers of product? Different levels of spend? 

Your control and test groups shouldn’t overlap; most tools that automate testing make sure this isn’t an issue. If the groups do overlap, the results won’t be accurate: statistics rely on independent observations, and overlapping groups violate that principle.

Note that hindsight is 20/20 and severely biased. Don’t ever, ever choose your control group after the test has already run. Remember: The idea is to make objective, data-driven decisions. The integrity of your control group directly impacts the quality of the data you collect.

Wait for significance

A good rule of thumb is to wait until at least 100 conversions have occurred in each group. This helps you reach statistical significance, which indicates that the difference you observe is unlikely to be due to random chance.

Another crucial guideline is to run a test for at least seven days due to weekly fluctuations. The entire internet has weekly and even monthly changes in activity. People browse significantly more on desktops in the middle of the week than on the weekends. People buy more things the week after paychecks hit than the week leading up to it.

Speaking of timing, avoid running a test during major seasonal events that don’t indicate regular customer behavior. Busy periods like Black Friday and the lead-up to holiday sales aren’t the best times to play with marketing strategies.

Don't peek — seriously

The time before you reach statistical significance can feel like the wild west. Anything can happen, and any differences you observe could easily be due to random chance. Peeking will tempt you to call the test early, run it for longer than you originally planned, or tweak the test parameters halfway through. If you jump to conclusions, you’ll skew the results, and the test won’t be accurate.

Examples of A/B Testing in Marketing

Now that we’ve laid out why—and how—to A/B test, it’s time to look at some A/B testing marketing examples:

  • Our friends at Topo Designs implemented successive A/B tests in ad campaigns to determine whether customer engagement increased due to their ad copy, images, or both. They also used this testing model to discover that customers preferred lookbooks to standard product listings.

  • The marketing team at Wigs.com noticed their campaigns were failing to meet their CPA goals. Through A/B testing different ad sets and further optimizing, their campaigns exceeded those goals and contributed to 25% of their total revenue.   

  • DOGTV conducted A/B tests on various retargeting ads to determine which format was most likely to convert their prospects to sales. Results indicated millennials preferred social and video retargeting, so they were able to scale up their strategy to drive acquisitions.

Think You've Got It? Do It Again! 

After analyzing the results, make the necessary changes to your marketing materials and repeat the testing process across channels. A/B testing is an ongoing process, and it may take several rounds of testing to identify the most effective marketing strategies for your business. You’ll find success if you stay focused on your goals, track your primary metrics, and stay open to making changes based on your results.

Remember, A/B testing in multichannel marketing requires careful planning and execution. Make sure you have the right resources to get the most out of your tests. An excellent digital advertising platform should remove the stress of analyzing data and making connections. When you use AdRoll’s digital marketing performance dashboard, you’ll have the campaign information you need at your fingertips.

A/B Testing FAQ

What are some A/B testing examples?

A/B testing is a robust process that offers plenty of options. For example, you could test the effectiveness of different headlines, ad copy, button colors, and CTAs. Marketers can segment email campaigns to test the open rates of emails with different subject lines. You can even serve two different website versions to customers to see which drives more conversions.

How do you do A/B testing in digital marketing?

The simplest way to explain A/B testing is this: develop a hypothesis about one current feature or variable on your website, such as button color, CTA, or headline length. You separate your audience into two random groups and show one group the existing version while the other group sees the new version. Once the test has run long enough to reach statistical significance, you compare the two versions and see which one drove more conversions.

Why is A/B testing used in marketing?

The benefits of A/B testing are numerous. Most marketers use it to make data-driven decisions about how best to conduct campaigns: increasing conversions, growing website traffic, or reducing bounce rates. With the data from an A/B test, a brand can reduce risk and create experiences, ad copy, and other campaigns that optimize its users’ experience. It can also improve its existing content.

When would you use A/B testing in marketing?

You can use A/B testing in various scenarios. Some marketers will employ it during email campaigns to test the effectiveness of subject lines, ad copy, or CTAs. You can also A/B test product pricing, user experience elements, website redesign, and even ad copy. The goal is to make data-driven decisions that can lead to increased performance. Any situation where you want to improve a specific metric can benefit from A/B testing.
