The key to setting up any testing program is to ensure that no biases are introduced. It's crucial to test with a clean framework in which the treatment and control sets of users are comparable. Without comparable audiences, the test will not produce valid, reliable results. This is where incremental lift testing comes in.
When most people think of randomized controlled experiments, clinical trials come to mind: one group receives the drug treatment while another group receives a placebo. This design is standard in medicine, where the goal is to determine whether a newly discovered medication is effective.
In the context of online marketing and ad effectiveness, randomized controlled experiments have become common practice. Marketers are finding they deliver consistently reliable results because they reduce noise; in other words, they control and account for any variables that could otherwise distort the measured data.
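To make the mechanics concrete, here is a minimal sketch of a randomized controlled experiment applied to an ad campaign. The function names and numbers are hypothetical; the point is that a random split produces comparable groups, so the difference in conversion rate can be read as lift:

```python
import random

def assign_group(user_ids, seed=42):
    """Randomly split users 50/50 into treatment and control."""
    rng = random.Random(seed)
    return {uid: ("treatment" if rng.random() < 0.5 else "control")
            for uid in user_ids}

def conversion_rate(conversions, audience_size):
    return conversions / audience_size if audience_size else 0.0

# Hypothetical outcome counts after the campaign has run
treatment_rate = conversion_rate(conversions=420, audience_size=10_000)
control_rate = conversion_rate(conversions=350, audience_size=10_000)

# Incremental lift: the relative gain attributable to the treatment
lift = (treatment_rate - control_rate) / control_rate
print(f"Incremental lift: {lift:.1%}")  # 20.0%
```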
Sequential testing is executed by turning a channel on or off for a set period of time and measuring the impact on the overall baseline, typically the website conversion rate. This type of testing is meant to show whether the channel is having the desired positive impact within the marketing mix.
The treatment set is the group that is exposed to the marketing channel; in contrast, the control set is not exposed to that channel. Marketers then measure the difference in conversion rate between treatment and control to determine lift. Sequential testing was the first way of measuring ad effectiveness, as it's easily accessible to any marketer and doesn't require a vendor.
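In practice, a sequential test boils down to comparing conversion rates across two time windows. The sketch below uses made-up numbers and a hypothetical `period_rate` helper; note that the "on" and "off" periods are never concurrent, which is exactly the weakness discussed next:

```python
from datetime import date

# Hypothetical daily site-wide (conversions, visitors) counts
daily = {
    date(2022, 10, 1): (310, 10_200),   # channel ON
    date(2022, 10, 2): (295, 9_800),    # channel ON
    date(2022, 10, 15): (240, 10_100),  # channel OFF
    date(2022, 10, 16): (255, 9_900),   # channel OFF
}

def period_rate(days):
    conversions = sum(daily[d][0] for d in days)
    visitors = sum(daily[d][1] for d in days)
    return conversions / visitors

on_rate = period_rate([date(2022, 10, 1), date(2022, 10, 2)])
off_rate = period_rate([date(2022, 10, 15), date(2022, 10, 16)])

# Naive sequential read: the whole delta is attributed to the channel,
# even though seasonality or other campaigns may have shifted between periods.
print(f"ON: {on_rate:.2%}  OFF: {off_rate:.2%}  delta: {on_rate - off_rate:+.2%}")
```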
Nevertheless, it's highly unreliable, because it doesn't remove the noise of external factors, such as other marketing channels and seasonality, and it introduces risk by fully turning off a channel that could be driving valuable revenue. That noise means the groups aren't comparable, which biases the results. Since the treatment and control are never exposed to the same variables at the same time, sequential testing doesn't meet the criteria for a randomized controlled experiment.
Public service announcement (PSA) testing is one of the most commonly used frameworks. It splits the segmented audience into treatment and control groups: the treatment group sees brand- or product-related ads, while the control group sees PSA ads.
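Serving logic for a PSA test might look roughly like the following sketch. The creative IDs, test name, and hash-based bucketing are all assumptions for illustration; the key property is that both groups are served an ad, so ad exposure itself is held constant and only the creative differs:

```python
import hashlib

BRAND_CREATIVE = "brand_ad_123"   # hypothetical creative IDs
PSA_CREATIVE = "psa_ad_charity"

def psa_split(user_id: str, test_name: str = "q4_lift_test") -> str:
    """Deterministically bucket a user so they always see the same arm."""
    digest = hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 2 == 0 else "control"

def creative_for(user_id: str) -> str:
    # Both arms receive an ad; only the creative differs between groups.
    return BRAND_CREATIVE if psa_split(user_id) == "treatment" else PSA_CREATIVE

print(creative_for("user-8c1f"))
```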
PSA testing's biggest flaw is that bidding and optimization prevent the groups from staying truly randomized. While the treatment and control groups are generally split at random, the vendor's machine-learning algorithms (typically optimizing for conversions and/or clicks) frequently favor the treatment group, since its users are more likely to click, convert, or both.
Beyond inflating the reported metrics, favoring the treatment group has the unintended effect of introducing unaccounted-for variables, which results in a lot of noise.
Intent-to-treat (ITT) is a method that creates a holdout at the moment the ad platform has the intention to treat the user with an ad, known in real-time bidding (RTB) as the bid opportunity level. Once the opportunity for bidding occurs, the ad platform conducts a randomized split within its RTB infrastructure. Unlike in PSA testing, the split is introduced post-targeting: checks such as geotargeting, segmentation, and cookie duration are confirmed before the audience is split. This removes a lot of bias and ensures comparable audiences.
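A rough sketch of where that split sits in a bidder pipeline is shown below. The targeting rules, holdout percentage, and function names are hypothetical; what matters is the ordering (targeting first, split second) and the fact that a treatment-arm user may still lose the auction:

```python
import hashlib

HOLDOUT_PCT = 10  # hypothetical 10% control holdout

def passes_targeting(user) -> bool:
    """All targeting checks run BEFORE the split, so both arms
    are drawn from the same qualified audience."""
    return (user["geo"] in {"US", "CA"}
            and user["segment"] == "in_market"
            and user["cookie_age_days"] <= 30)

def itt_arm(user_id: str) -> str:
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "control" if bucket < HOLDOUT_PCT else "treatment"

def on_bid_opportunity(user) -> str:
    if not passes_targeting(user):
        return "no_bid"    # never enters the test
    if itt_arm(user["id"]) == "control":
        return "holdout"   # intention to treat is logged; no bid submitted
    return "submit_bid"    # proceeds to the auction, and may still lose it

user = {"id": "user-8c1f", "geo": "US", "segment": "in_market", "cookie_age_days": 12}
print(on_bid_opportunity(user))  # "submit_bid" or "holdout", depending on the hash
```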
The key benefit of ITT is that it's an unbiased way of conducting testing because it compares alike audiences. Additionally, this type of testing removes noise: anything that should be, but isn't, accounted for and would otherwise bias the outcome. However, because this framework doesn't guarantee an actual bid or ad treatment, it doesn't remove the noise of internal and external auction behavior, which causes fluctuations in lift results.
In many ways, the implementation of ghost bids is similar to the intent-to-treat framework. The key difference between the two models is the stage at which logging occurs: while intent-to-treat implements the split at the bid opportunity, ghost bids conduct the split at the actual bid level. This means that with ghost bids, the auction, pricing, and optimization of an ad all happen before the split. Because the split is conducted later, heavy fluctuations are removed, which allows for large-scale testing and quick results.
A ghost is the logging event of a bid on the control set. While control users are excluded from receiving the actual bid and ad, the logging event still fires. Logging the event allows vendors either to swap the bid with another advertiser in the auction or simply to keep track of every potential bid that has been suppressed. The counts of these logs are called ghosts.
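In code, the difference from ITT is simply where the split hook is called. The sketch below is hypothetical (the `log_event` sink, field names, and holdout percentage are assumptions): the real bid is fully priced and optimized first, and only then is the control arm suppressed and logged as a ghost:

```python
import hashlib
import json
import time

def ghost_arm(user_id: str, holdout_pct: int = 10) -> str:
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "control" if bucket < holdout_pct else "treatment"

def on_bid_decision(user_id: str, bid_price: float, creative_id: str) -> dict:
    """Called only AFTER targeting, optimization, and pricing have
    produced a real bid: the point where ghost bidding splits."""
    record = {
        "user_id": user_id,
        "bid_price": bid_price,
        "creative_id": creative_id,
        "ts": time.time(),
    }
    if ghost_arm(user_id) == "control":
        record["type"] = "ghost"  # log the suppressed bid; submit nothing
        log_event(record)
        return {"action": "suppress"}
    record["type"] = "bid"
    log_event(record)
    return {"action": "submit", "price": bid_price}

def log_event(record: dict) -> None:
    print(json.dumps(record))     # stand-in for a real logging pipeline

print(on_bid_decision("user-8c1f", bid_price=2.35, creative_id="brand_ad_123"))
```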
While intent-to-treat frameworks are reliable, ghost bid testing acknowledges and solves for ITT's downside of auction-behavior noise, making it one of the most reliable and sophisticated ways to measure ad effectiveness.
Last updated on November 15th, 2022.