A/B Testing, also called Split Testing, is a digital marketing method that compares two versions of a single campaign to determine which version performs better at increasing social engagement and improving online conversion rates.
One version of the campaign content, the A group, is the “control,” and the other version, the B group, contains the variation content.
Testing differing campaign content in this manner can inform a marketer like you or me which campaign version we should focus on and invest our marketing budget in.
A/B Testing is most often used for email marketing content and social media ad campaign content, but of course it can be used to test any digital content campaigns you wish.
An example of using A/B Testing in a non-marketing scenario would look something like this:
A 4th grader named Tommy has a hypothesis that a plant that receives adequate sunlight will grow faster and appear healthier than a plant that does not have access to sunlight. In order to test his hypothesis, Tommy buys two plants of the same species and size. The plants have been potted with the same soil. He places one potted plant on the windowsill where there is direct sunlight. He places the other potted plant in his closet where sunlight will not reach it. Every day for two weeks, he waters both plants with exactly five ounces of water.
The result that he discovers confirms his hypothesis. At the end of two weeks, the plant which received adequate sunlight grew significantly and now appears healthy with bright green leaves and flowering buds. By contrast, the plant which was kept in his dark closet looks nearly dead.
Since the only differentiating variable between the windowsill plant and the closet plant was the presence or lack of sunlight, Tommy is able to confidently conclude that plants that receive adequate sunlight will grow faster and appear healthier than plants that are deprived of sunlight.
Sure, this example might seem overly simplistic, but the big takeaway from the A/B Testing of the two plants is that, with the exception of sunlight, every single variable was the same: the species of the plants, the quality of the soil, the amount of water used daily, and even the indoor air quality. The only difference between the A plant and the B plant was the sunlight variable.
In the world of digital marketing, an example of an A/B Test could look something like this:
As an adult who waxes nostalgic for the career he almost had in horticulture, Tommy now works as a digital marketer, has an overwhelming mortgage, and often contemplates whether he should yank his children out of public school. He also has a hypothesis—the email marketing campaign that he’s about to launch, which includes a paid subscription sign-up CTA, will perform better if the email includes an incentive offer associated with purchasing the subscription.
The goal of the email marketing campaign is to convert recipients into paid subscribers, and Tommy needs as many new paying subscribers as he can get, because he has that mortgage and those kids to think about….
So, he creates a copy of his email campaign, naming the copy “Test B” and naming the original “Test A.” The only difference between the two email campaigns is that the A Test (original) does not contain an incentive offer, whereas the B Test includes an incentive offer.
Tommy composes a sentence regarding the CTA incentive offer, which happens to be a free ebook download, and plugs it into the B Test email template. In the B Test email, anyone who signs up for a paid subscription receives a download link for the free ebook.
He launches both email campaigns, the A Test and the B Test, and waits… for two weeks.
Lo and behold, at the end of the trial period, Tommy checks the performance of his A Test against his B Test and discovers that the B Test, which contained the free ebook incentive, resulted in almost 300% more paid subscription sign-ups than the A Test.
Tommy determines he will enroll his children in the nearby Montessori school that he passes every morning on his way into the office.
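That “almost 300% more” figure is just a relative lift calculation, and it’s worth knowing how to run it yourself. Here’s a minimal sketch; the sign-up counts below are hypothetical numbers chosen for illustration, not figures from Tommy’s campaign:

```python
# Hypothetical sign-up counts for each test (illustrative numbers only).
a_signups = 25   # A Test: no incentive offer
b_signups = 98   # B Test: free ebook incentive

# Relative lift: how much better B performed than A, as a percentage.
lift_pct = (b_signups - a_signups) / a_signups * 100
print(f"B Test lift over A Test: {lift_pct:.0f}%")
```

With these numbers the lift works out to 292%, i.e. “almost 300% more” sign-ups.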
As the two examples demonstrate, A/B Testing is an application of the scientific method that compares two versions of a single variable to determine which version will perform better and yield the desired result.
In the world of digital marketing, the desired result is usually conversion, whether it be converting website visitors to newsletter subscribers, newsletter recipients to monthly subscription customers, or monthly subscription customers to brand ambassadors who use word-of-mouth marketing to successfully refer their friends and family to sign up for monthly subscription packages. You get the idea.
A/B Testing is also used in digital advertising campaigns that appear on Facebook & Instagram, Google AdWords, and other online platforms.
A/B variables can be as simple as the color of a CTA button, or as complex as whether the CTA is a lead generation form versus a Shop Now button.
When used to compare versions of a digital advertisement, the A/B Testing stage can last from a few weeks to a few months, and the investment in both test campaigns tends to be low. Then, once the marketer ascertains which test campaign performs better, the losing test is pulled down and a larger financial investment is pumped into the winning campaign.
Using an A/B Test to compare two different options that could be used for one campaign variable will yield the clearest results. Meaning, when all campaign elements are identical except for one, you can conclude that if one campaign performs better than the other, it is because of the different variable.
For this reason, A/B Testing works best when only one variable, or campaign element, has two versions representing the A and the B. The results of the sunlight vs. darkness test told Tommy to “use” sunlight if he wants his plants to “perform” well. The results of the incentive offer vs. no incentive offer test told Tommy to “use” incentive offers in his email campaigns if he wants the campaign to “perform” better, i.e. to successfully convert recipients into paying subscribers.
That being said, there is another form of A/B Testing that compares many different campaign elements between the A Test and the B Test. This form of A/B Testing is called “multivariate A/B Testing.” When this complex version of A/B Testing is implemented, it can be harder to pinpoint which specific variable or variables performed better.
For instance, let’s say you run a multivariate A/B Test for a website landing page. The goal of the landing page is to generate leads. In order to generate leads, the landing page uses a CTA incentive, which is a “locked” webinar. In order to unlock and watch the webinar, the visitor must complete the CTA. Both landing pages are identical except for the following variables:
● The Design Layout
● The CTA
● The Content Font Size & Color
The A Test uses a design layout that does not match the brand’s logo or website. The CTA is an email opt-in form. The landing page content uses an extremely large, purple font.
The B Test uses a design layout that so closely resembles a major competitor that it’s inevitable this thing is going to wind up in court. The CTA is a “Tweet This Now” button, and the landing page content uses a red, moderately-sized, cursive font.
A few weeks, or perhaps a few months go by, then the performance results are analyzed.
Did the A Test generate more email leads? Or did the B Test render more Tweets? How would a Tweet even generate a viable lead in the first place? (Bonus points if you caught that critical error!) Which landing page succeeded?
As it turns out, Test B had more Tweets than Test A had emails… but what does that mean?
The data is, for lack of a better term, a pile of confusion.
Does this mean you should avoid launching multivariate A/B Tests? Not necessarily. If you have experience analyzing A/B Test data and feel confident you’ll be able to differentiate the performance results of many competing variables at once, then multivariate A/B Testing could potentially provide you with a wealth of valuable insights that help you launch an extremely successful campaign.
In fact, the following are the pros of launching multivariate A/B Tests:
● Provides valuable insights regarding the interactions between multiple content elements
● Provides granular data regarding which campaign elements positively or negatively impact performance results
● Enables marketers to compare many versions of a campaign, not just two, and conclude which one will have maximum impact overall
But multivariate A/B Testing also comes with cons:
● Can be highly complex and might require an expert to conduct an analysis of the resulting data
● Requires significantly more traffic than a standard A/B Test in order to yield statistically significant results
● Too many campaign variable combinations could cause the results to be too difficult to interpret, rendering the entire test and its associated costs a waste of time and money
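That last con is easy to quantify: in a multivariate test, every combination of element versions is effectively its own page competing for a slice of your traffic. Here’s a quick sketch; the elements and version names are hypothetical, chosen just to show how fast the combinations multiply:

```python
from itertools import product

# Hypothetical multivariate test: each page element has three candidate versions.
variants = {
    "layout": ["original", "minimal", "bold"],
    "cta": ["opt-in form", "Shop Now button", "webinar unlock"],
    "font": ["small black", "medium blue", "large purple"],
}

# Every combination of element versions is a distinct page that needs traffic.
combinations = list(product(*variants.values()))
print(f"Pages to test: {len(combinations)}")  # 3 * 3 * 3 = 27
```

Twenty-seven pages, each needing enough visitors to produce meaningful numbers, is why multivariate tests demand far more traffic than a simple two-version A/B Test.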
If you want to launch a multivariate A/B Test but don’t want to go it alone, then give us a call. The marketing specialists at FTx 360 will make sure your test campaigns result in performance data you’ll be able to analyze easily… and yes, we’ll analyze the data for you, too!
So, you’re ready to get down to business and devise a digital marketing A/B Test. You’ve done your research and you’re already excited about improving one or many areas of your business. You envision achieving tangible business goals as a result of using A/B Testing, such as:
● Increasing Website Traffic
● Increasing Conversion Rates
● Lowering Bounce Rates
● Lowering Cart Abandonment
Here are the steps you should follow if you want to ensure the best A/B Test results.
Why do you want to launch an A/B Test in the first place? Have you launched a blog that isn’t getting any web traffic? Have you noticed that your Facebook “Shop Now” native ad hasn’t resulted in higher eCommerce sales? Do you suspect the subject lines of your email marketing campaigns are the reason your emails aren’t being opened? Before you can solve the problem, you first must identify what the problem is.
Once you’ve identified the problem, you can identify the goal you’d like to accomplish using A/B Testing. The more specific your goal is, the easier it will be for you to reach it. For example, if your promotional emails aren’t being opened at an acceptable rate, you’ll want to make note of the current open rate and then decide what the goal open rate should be. Meaning, if the current email open rate for your promotional campaigns averages 8%, you might set an open rate goal of 16%. This is a tangible goal that you will be able to compare your campaign data against easily.
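The open-rate math here is simple enough to script, which makes it easy to restate your goal as a concrete number. A minimal sketch; the delivery and open counts are hypothetical:

```python
# Hypothetical campaign numbers for illustration.
emails_delivered = 5000
emails_opened = 400

current_open_rate = emails_opened / emails_delivered  # 8%

# Goal: double the current open rate.
goal_open_rate = current_open_rate * 2

print(f"Current open rate: {current_open_rate:.0%}")
print(f"Goal open rate: {goal_open_rate:.0%}")
```

Most email platforms report the open rate directly, but writing the goal down as a number you compute yourself keeps the target unambiguous.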
As we covered in this article, if you’re new to A/B Testing, then you should start by testing only one variable, or campaign element, as opposed to testing multiple variables at once. The precise variable you decide to test, however, should relate to both the problem and the goal you’ve identified. For instance, if the problem you’re facing is a low open rate on your email campaigns and you’ve set a goal of doubling the open rate, the only variables that pertain to your problem are the email subject line and the email lead-in description that appears in recipients’ inboxes. Those two variables are what your recipients can see, and based on what they see, they will either open your email or delete it. Choose to test only one of those variables. Let’s say, the email subject line.
This step is straightforward, but it will be the most time-consuming. It’s time to actually create the two email campaigns and set them up in your email automation software. Make sure that both tests are clearly labeled and that the different subject lines appear exactly as intended. Hopefully, you’ve spent adequate time researching how to write email subject lines that increase the chances of recipients opening them.
While splitting your sample groups might not always apply to the particular A/B Test you’re running, let’s take a look at this step anyway since it’s relevant to an email marketing campaign A/B Test. In this instance, you obviously can’t send two emails to the same recipient list, so you’ll need to create two recipient “groups.” The two groups should contain a cross-section of your subscribers that includes all demographics. Meaning, both groups should contain all ages, both genders, both new and old subscribers, both high-spending and low-spending customers, and so on. This is opposed to one group containing loyal, high-spending, mature subscribers and another group containing new, low-spending subscribers, which would skew the results.
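One straightforward way to build those balanced groups is a stratified random split: bucket subscribers by segment, shuffle each bucket, and deal half of each bucket into Group A and half into Group B. Here’s a minimal sketch; the subscriber list and segment labels are made up for illustration, and a real list would come from your email platform’s export:

```python
import random
from collections import defaultdict

# Hypothetical subscriber list: (email, segment) pairs.
subscribers = [
    ("a@example.com", "high-spend"), ("b@example.com", "high-spend"),
    ("c@example.com", "low-spend"),  ("d@example.com", "low-spend"),
    ("e@example.com", "new"),        ("f@example.com", "new"),
]

def stratified_split(subs, seed=42):
    """Split subscribers into two groups, keeping each segment evenly
    represented in both, so neither group skews the test results."""
    by_segment = defaultdict(list)
    for email, segment in subs:
        by_segment[segment].append(email)

    rng = random.Random(seed)
    group_a, group_b = [], []
    for members in by_segment.values():
        rng.shuffle(members)           # randomize within the segment
        half = len(members) // 2
        group_a.extend(members[:half]) # half of the segment to Group A
        group_b.extend(members[half:]) # the rest to Group B
    return group_a, group_b

group_a, group_b = stratified_split(subscribers)
```

Shuffling within each segment keeps the assignment random, while splitting per segment guarantees both groups get the same mix of new, loyal, high-spending, and low-spending subscribers.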
Once you launch your A/B Test email campaign, configuring your automation so that the A Test goes to one group while the B Test goes to the other, you can kick back and allow an appropriate amount of time to go by. Depending on the nature of the A/B Test, you may want to let a few weeks or a few months pass. In the case of our email campaign, waiting a full week will be enough. The main point here is that while your tests are running, you can monitor the evolving results, but do not alter your campaigns.
After an appropriate amount of time has passed, you can analyze the performance of the A Test versus the B Test. Compare the results of each test to the goal you set for yourself at the onset of your campaign. Which test performed better, and by how much? Did either test meet or exceed your goal? Simply put, whichever test performed better is the one you should invest in moving forward.
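Before reallocating budget to the winner, it can be worth a quick sanity check that the difference isn’t just random noise. One common approach (an optional extra step, not part of the process described above) is a two-proportion z-test on the conversion counts. A sketch using Python’s standard library; the conversion numbers are hypothetical:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical results: 40/1000 conversions for A, 70/1000 for B.
z = two_proportion_z(40, 1000, 70, 1000)

# |z| > 1.96 corresponds to roughly 95% confidence the difference is real.
print(f"z = {z:.2f}, significant: {abs(z) > 1.96}")
```

If the z statistic clears the 1.96 threshold, the winning test’s edge is unlikely to be chance; if it doesn’t, let the test run longer or gather more recipients before committing your budget.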