If you are on the agency or ad tech side, you are familiar with the “pilot” campaign. We’ve all gotten requests to test the waters with our products and services before engaging in a full campaign or longer-term commitment. Marketers often request tests as a means to compare vendors or to try out new technologies and media they view as unproven. The pilot is a necessary step, but without proper planning it will yield results that muddy the waters on the best ways to move forward or maximize KPIs. Frequently, the proposed campaign length or spend allocation is too light to evaluate statistical significance, or too little attention is given to defining the key metrics of success.
Running a test that is poorly thought out, too small, or lacking clear goals is an inefficient use of time, energy and dollars. It’s a waste for the marketer, the agency and the supplier. So, how do you run a test that is worth everybody’s time and resources? Perhaps the best way is to start by recognizing that pilots are an investment in a learning opportunity, not just a box to check. Creating a truly educational and beneficial pilot requires upfront investment: nothing ventured, nothing gained for anyone.
How to Set the Stage
One solution is to borrow from the startup world. Many of my ad tech colleagues are familiar with the Lean Startup movement led by Eric Ries. The movement borrows from Lean Manufacturing, but the core principle we can adopt here is the Minimum Viable Product. This refers to the minimum you can do to test the waters and prove a hypothesis. Don’t be fooled by the fancy words; a hypothesis is just an educated guess. Eric’s other big principle is to “build, measure, learn.” We want to be sure we are building pilots that deliver clearly measurable metrics. The ultimate goal should be to learn best practices from these programs so they eventually evolve into long-term, mutually beneficial partnerships.
So, how do we set up an experiment to test our best guess? Here’s how we do it. Take a blank sheet of paper or a whiteboard, and consider three things:
- Assumption: State the hypothesis you want to prove or disprove
- Experiment: Describe how you will test whether the hypothesis is valid or invalid
- Measurable Outcome: Define the result that counts as success, and how much time, money and effort it will take to get there
Here are examples of all three for a couple of different scenarios:
- We believe using this new location technology from vendor ______ will increase retail foot traffic
- We will test this by counting the number of store visits generated by the campaign
- ______ visits are required to declare success. To get there, we will need at least _____ impressions over a ________ period of time.
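Once you’ve filled in those blanks, the sizing is simple arithmetic. A quick back-of-the-envelope sketch in Python; every number here (the visit rate, the success threshold, the daily delivery) is a hypothetical placeholder, not a benchmark:

```python
# Back-of-the-envelope sizing for the foot-traffic pilot.
# All inputs below are illustrative placeholders.
assumed_visit_rate = 0.002    # assume 0.2% of impressions produce a store visit
visits_needed = 500           # the threshold we agreed marks the pilot a success
daily_impressions = 10_000    # assumed daily delivery at the proposed spend

impressions_needed = visits_needed / assumed_visit_rate
days_needed = impressions_needed / daily_impressions

print(f"{impressions_needed:,.0f} impressions over ~{days_needed:.0f} days")
# → 250,000 impressions over ~25 days
```

Running the math before the pilot starts is the whole point: if the answer is “250,000 impressions over 25 days” and the proposed budget only buys a week, you know up front that the test can’t answer the question.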
- We believe vendor _______’s product will outperform vendor ______’s in a head-to-head test at generating _________ (location look-ups, downloads, registrations)
- We will install tracking pixels for click tracking and analytics from both vendors’ platforms, verify that all pixels are firing correctly, and A/B test the vendors with the same creative, call to action and offer
- We will know who is more successful based on the total number of downloads
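A raw download count alone can mislead if the gap is small relative to the volume delivered, which is exactly the significance problem mentioned earlier. One common way to sanity-check the head-to-head result is a two-proportion z-test; here is a minimal sketch using only Python’s standard library, with entirely made-up vendor numbers:

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided test: is the difference in conversion rates real or noise?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))    # two-sided p-value
    return z, p_value

# Hypothetical results: vendor A drove 320 downloads on 10,000 impressions,
# vendor B drove 260 downloads on 10,000 impressions.
z, p = two_proportion_z_test(320, 10_000, 260, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # → z ≈ 2.53, p ≈ 0.011
```

With these made-up numbers, p falls below the conventional 0.05 cutoff, so vendor A’s edge is unlikely to be chance; had the counts been 290 versus 280, the same test would tell you the pilot hasn’t actually picked a winner yet.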
While this is just one of many methods out there, it forces you to do all the necessary things you should do when entering into a pilot program, especially if you expect to use the learnings to establish a larger-scale initiative. It’s based on discipline: taking the time to establish your framework for success and the means for evaluation.
Of course, it’s also entirely possible that you are initially unable to articulate your metrics for success. In that case, you may use the test to establish benchmarks and gauge the realm of future possibilities. But that’s another column for another time.
As long as you are orderly in your testing approach, asking and answering all the key questions within the framework you establish, the results will provide enough direction to chart the course forward. Ultimately, a well-defined, well-executed pilot becomes a means to an end, and one that is worth your while.