Back in 2004, I got into a discussion with a client about the meaning of click-through rate (CTR) as a digital display metric. I argued that outside of being a loose gauge for creative appeal, it was essentially a meaningless measurement tool – observable, but pointless.
Here we are a decade later, and I am sorry to report that while this occurs with less frequency, I still occasionally have that same discussion with clients. But I get why we still have to hash this out: CTR is a very simple, understandable metric. It makes sense. It’s so simple that it must be a decent proxy for something, right? Unfortunately, it generally is not.
The thing is, digital campaigns generate so much data that agencies and clients often find themselves wishing for – and sometimes creating – magic bullets that can’t possibly deliver. Agencies have perhaps been a little too quick to sell magic-bullet metrics because they’re not labor-intensive, they’re easy to understand and they make digital display that much cheaper to service.
Now, most of this has been happening in the virtual biosphere of digital media shops. But as the silos between digital and traditional media break down, and particularly as TV and TV content become increasingly available programmatically, both the good and bad of their respective measurement practices are starting to infuse and migrate across channels.
As a result, I fear that TV is about to get into the kind of trouble digital media got itself into in the early 2000s.
An Evolutionary Path
I’m a relative newcomer to direct-response television (DRTV). I didn’t really get to enjoy the heyday, when audiences simply called an 800 number to buy a product after something aired. Those days must have been pretty sweet: Run an ad, wait for the calls. That was pretty much how you decided whether or not ads were working.
The trick today is even determining what we mean by “working.” Responses are more often than not defined and measured algorithmically, and agencies and many emerging third-party advanced attribution vendors and platforms rely upon web-based signals of response matched back to commercial airings.
The DRTV industry’s evolution to algorithmic measurement has been truly Darwinian. While there are still audiences and advertisers that can make the 800-number model work, there aren’t enough of either anymore to define an entire industry. So our move to algorithmic measurement has been a genuine “adapt or die” scenario. The next step of that adaptation is tied to how we improve upon our measurement capabilities.
At this point, most DRTV agencies are starting to recognize the shortcomings of basic “spike” algorithm measurement. It’s a signal-to-noise problem: Heavy commercial schedules, coupled with high existing web traffic, can conspire to mask readable spikes in response. Yet as digital and traditional media converge, advertisers increasingly expect TV and other traditional media to be as measurable – or as easily measurable – as digital is perceived to be.
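To make that signal-to-noise problem concrete, here is a minimal, hypothetical sketch – not any agency’s actual algorithm – of why spikes get masked: the very same 50-visit response to an airing stands out clearly against a quiet site’s baseline traffic but disappears into the ordinary noise of a busy one.

```python
import random
import statistics

def spike_z_score(baseline_mean, response_visits, seed=42):
    """Z-score of the post-airing minute against the pre-airing baseline.

    baseline_mean: average visits per minute of existing site traffic
    response_visits: extra visits driven by the airing (the 'spike')
    """
    rng = random.Random(seed)
    # Simulate 60 pre-airing minutes of noisy baseline traffic
    # (Poisson-like: standard deviation grows with the mean).
    baseline = [rng.gauss(baseline_mean, baseline_mean ** 0.5) for _ in range(60)]
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    # The post-airing minute: baseline noise plus the response spike.
    post_airing = rng.gauss(baseline_mean, baseline_mean ** 0.5) + response_visits
    return (post_airing - mean) / stdev

# The same 50-visit response to the same spot...
quiet = spike_z_score(baseline_mean=100, response_visits=50)    # low-traffic site
busy = spike_z_score(baseline_mean=10_000, response_visits=50)  # high-traffic site

print(f"quiet site z-score: {quiet:.1f}")  # spike clearly readable
print(f"busy site z-score:  {busy:.1f}")   # spike lost in the noise
```

The numbers here are invented for illustration, but the mechanic is the point: as baseline traffic (and its variance) grows, a fixed response gets buried, which is exactly why heavy schedules on high-traffic sites defeat simple spike reads.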
The great irony is that the measurement challenge DRTV faces is very much akin to the one digital display media currently faces with mobile and the prospect that display ad blocking becomes a widespread reality. In short, how do we determine DR efficacy in the absence of cookie data?
We need to acknowledge that TV measurement, while increasingly digital in nature, will be anything but simple or clear-cut. We will need to treat anything that feels like a CTR with suspicion because the days of simple digital measurement are finally coming to an end.
Welcome To The Age Of Advanced Attribution
I have this discussion with co-workers and colleagues all of the time: I think we’re finally zeroing in on the middle ground between near real-time optimization and econometric media mix modeling.
The former requires more signal strength than a DRTV campaign can typically summon to have any real chance of big-picture accuracy, and it tends to become a race to zero in which budgets become impossible to clear.
Econometric modeling is a long-term look back that can tell us a lot about how different elements of a past media plan contributed to goals, but it requires far too much time to accumulate data – sometimes as much as a year – and often drives conclusions that are too prescriptive, leaving little leeway for flexibility or tactical adjustment.
Finding the middle ground and isolating response latency are going to be the keys. Measuring latency takes time, but knowing this dimension saves us from optimizing away tactics that may carry huge payouts – just not within a tight window relative to air times.
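As a toy illustration of why the window matters – a sketch of the idea, not any vendor’s attribution model – consider a hypothetical airing whose responses mostly arrive hours later. Credited under a tight window, the spot looks like a dud; credited under a window wide enough to capture latent response, the payout appears.

```python
from datetime import datetime, timedelta

def attributed_responses(air_time, response_times, window_minutes):
    """Count responses landing within `window_minutes` after an airing."""
    window = timedelta(minutes=window_minutes)
    return sum(1 for t in response_times if air_time <= t <= air_time + window)

air = datetime(2015, 6, 1, 20, 0)  # hypothetical 8 p.m. spot
# A latent-response pattern: a couple of immediate visits,
# with most of the response arriving hours after the airing.
responses = [air + timedelta(minutes=m) for m in (2, 5, 90, 240, 360, 500, 700)]

tight = attributed_responses(air, responses, window_minutes=15)
wide = attributed_responses(air, responses, window_minutes=12 * 60)
print(tight, wide)  # the tight window credits only a fraction of the response
```

Optimize on the tight-window number alone and this spot gets cut, even though most of its payout was simply late – which is the trap the latency dimension protects against.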
So from a modeling perspective we’re talking weeks, not months or longer. But it’s also not a matter of hours or days, and it’s the weaning off knee-jerk, fast conclusions that will be challenging, because both agencies and clients are hard-wired for them. We want answers!
But this challenge is also now compounded by some false promises typically attached to digital reporting. And that’s the simple trap we’re trying to avoid here: As TV media starts to look more like digital and vice versa, we need to remind ourselves that efficacy isn’t going to be as simple as a CTR.