Beeler.Brief: The Industry Will Need Impression-Level Granularity to Understand Performance
Feb 18, 2022
One of the things we often hear from our customers, and that also emerges from the research, is an overall question of, "Okay, great. I understand the strategic direction of the industry. How do I actually execute against it? What actions do I take? And which systems and which technology do I need to put in place to be successful over the next two years as we go through this transition?" In my mind, and based on the conversations we have with our customers, there are two big parts that publishers will need to address in order to navigate the transition.
The first is much more advanced capabilities around targeting campaigns against a multitude of different data sets out there. Whether it's their own first-party data, data coming in from an external vendor like ourselves around contextual classifications and so forth, or even buy-side data, publishers are going to need to find a way to funnel all of that back into their publisher ad servers and the different systems controlling ad delivery, to properly pace and target campaigns.
One of the key things we see there is that there's still a gap, a disconnect, between the criteria and parameters that the buy side defines in its campaigns. These start with minimal criteria like invalid traffic avoidance, or meeting a minimal level of brand safety, and then layer on top of that the more positively targeted contextual segments they want to focus on, and maybe first-party data they want to layer in.
The ability to take that definition of which subset of inventory they're interested in, and effectively target that specific subset of inventory on the sell side, is still an open gap. What we see happening is that new technology and new systems are going to come on the market to streamline this collaboration process, where there will be an efficient way for buyers to exchange this kind of data with sell-side systems to perform that execution. And on our end, we definitely have plans around that as well.
The second piece is how you take all of this information and effectively compare outcomes between multiple different experiments or trial runs. As you're trying to figure out the best mix between all the different targeting approaches, you're going to want to compare them and see which ones have the most positive effect on core KPIs. Like click-through rates, or contextual in-segment rates, or the number of incidents and how well you're able to reduce incidents in certain campaigns. But also your end goal, which is to drive revenue: how do some of these experiments impact your monetization across the board? And what we see happening there is a transition away from a world where a lot of analysis and data work happens at the aggregate level, grouped by campaign, by creative, or even by line item or advertiser.
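To make the aggregate-level status quo concrete, here is a minimal sketch of that kind of rollup over a toy impression log. All field names (campaign, clicked, incident) and the numbers are invented for illustration, not any real reporting schema.

```python
from collections import defaultdict

# Toy impression-level log; in practice this would come from the ad server.
impressions = [
    {"campaign": "A", "clicked": 1, "incident": 0},
    {"campaign": "A", "clicked": 0, "incident": 0},
    {"campaign": "A", "clicked": 0, "incident": 1},
    {"campaign": "B", "clicked": 1, "incident": 0},
    {"campaign": "B", "clicked": 1, "incident": 0},
]

def rollup(rows, key):
    """Group rows by `key` and compute per-group CTR and incident rate --
    the aggregate view most reporting gives you today."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[key]].append(row)
    report = {}
    for name, group in groups.items():
        n = len(group)
        report[name] = {
            "impressions": n,
            "ctr": sum(r["clicked"] for r in group) / n,
            "incident_rate": sum(r["incident"] for r in group) / n,
        }
    return report

print(rollup(impressions, "campaign"))
```

The limitation the text points at is visible here: once rows are collapsed by campaign, you can no longer slice the same outcomes by any other dimension you did not group on up front.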
In order to truly distinguish between the impact and effectiveness of each of these different technologies, you're going to want to drill down to the impression level, and be able to slice all outcome and measurement data across a very granular set of parameters. And most importantly, the parameters that come in from your own first-party data. So it's not sufficient that we as a verification vendor, for example, give you a data set and tell you, "Oh, good news. You can slice this data by all the parameters that we've tracked and that we care about."
You're going to want to slice that information according to all the parameters available in your own first-party data sets, so you can compare the performance of your solutions against any other alternatives out there. And be able to make the argument to buyers: my first-party data performs best. It actually achieves the highest KPIs, and I can prove it to you by running an A/B test with and without my data layered in, then drilling down into the KPIs and showing you the differences. Even compared to your own data as well.
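The A/B drill-down described above can be sketched in a few lines, assuming impression-level records tagged with an experiment arm ("first_party" vs. "control") and a publisher first-party attribute to slice by. The field names and data are hypothetical, chosen only to show the shape of the analysis.

```python
from collections import defaultdict

# Toy impression-level log with an experiment arm and a first-party attribute.
log = [
    {"arm": "first_party", "audience": "sports_fans",  "clicked": 1},
    {"arm": "first_party", "audience": "sports_fans",  "clicked": 1},
    {"arm": "first_party", "audience": "news_readers", "clicked": 0},
    {"arm": "control",     "audience": "sports_fans",  "clicked": 0},
    {"arm": "control",     "audience": "news_readers", "clicked": 1},
    {"arm": "control",     "audience": "news_readers", "clicked": 0},
]

def ctr_by(rows, *keys):
    """Slice impression-level rows by any combination of keys and report
    CTR per slice -- the arbitrary first-party drill-down the text argues for."""
    clicks, totals = defaultdict(int), defaultdict(int)
    for row in rows:
        slice_key = tuple(row[k] for k in keys)
        totals[slice_key] += 1
        clicks[slice_key] += row["clicked"]
    return {k: clicks[k] / totals[k] for k in totals}

# Compare arms overall, then drill down by the first-party audience attribute.
print(ctr_by(log, "arm"))
print(ctr_by(log, "arm", "audience"))
```

Because the data stays at impression granularity, the same function answers both the headline question (does layering in first-party data lift CTR?) and any follow-up slice a buyer asks for, without re-running the pipeline.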
So that's an important step for the industry to take: impression-level granularity. And I think once we achieve that, the next layer will be to loop revenue data into it as well. Because there's no point in looking purely at these advertising performance KPIs in isolation from publisher revenue. We want to see that whatever approach you take as a publisher works for both sides. You want to prove to the advertiser that your approach reaches the best campaign performance and the best outcomes. But you also want to prove to yourself that taking this approach has the best revenue outcomes for you, especially when it comes to allocation strategies of [inaudible 00:05:22].
Because the more complex these targeting solutions get, the more complex your allocation strategy becomes. How do I take my most valuable inventory, allocate it to the buyers who value it the most, and then monetize it in the most effective way? And there are some very complex trade-offs to make there as well. Sometimes I'm even better off saving a certain impression for later use, when I may have a higher-paying opportunity to sell it, rather than allocating it at a high CPM right now.
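The sell-now-or-hold trade-off mentioned above reduces to a simple expected-value comparison. This is a toy sketch: the CPMs and fill probability are made-up illustration values, not estimates from any real marketplace.

```python
def expected_value_of_waiting(later_cpm: float, fill_probability: float,
                              fallback_cpm: float) -> float:
    """Expected revenue (per thousand impressions) if we hold the impression:
    the later, higher-paying deal fills with some probability; otherwise we
    fall back to whatever open-market demand remains."""
    return fill_probability * later_cpm + (1 - fill_probability) * fallback_cpm

now_cpm = 4.00  # firm bid on the table today (hypothetical)
wait_ev = expected_value_of_waiting(later_cpm=9.00,
                                    fill_probability=0.5,
                                    fallback_cpm=1.00)
decision = "hold" if wait_ev > now_cpm else "sell now"
print(wait_ev, decision)  # 5.0 hold
```

Even this toy version shows why impression-level revenue data matters: the fill probability and fallback CPM can only be estimated credibly if you can observe outcomes per impression, not per campaign average.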
So there are some interesting allocation problems that are going to emerge from more complex targeting strategies. And again, impression-level granularity of the data, with revenue data layered in, is going to give us the insights we need to combine and create hybrid targeting solutions across all the different technologies out there.