May 20, 2009

NEXT POST
The B2B Database Dipstick Report

Lists are the bane of database marketers' existence. Everyone thinks their own data sucks, but the real disappointment usually comes when you rent a compiled list and realize it sucks too. Everyone wonders why this is, but few have done anything about it -- till now.

Ruth Stevens and Bernice Grossman have been writing white papers on database marketing since 2005. Both are well-known independent thinkers and marketers with extensive direct marketing and database chops. For their seventh outing they devised a wickedly simple test, titled "Online Sources of B-to-B Data: A Comparative Analysis," to understand the limitations of compiled lists. They convinced ten database compilers to participate and asked them to find 10 selected executives from 10 different industry verticals in their databases. It was a dipstick exercise to quickly measure the coverage and quality of leading databases. The vendors ranged from big data houses to online cooperatives. All have online ordering capabilities, which generally means they are willing to deal in small lots and/or niche verticals.

Bottom line -- when they had your data it was pretty accurate, though coverage was spotty across databases. In addition, the study turned up a bunch of surprising results:

There was a wide variance in company and contact counts. In one category -- stone, clay and glass products -- company counts ranged from 386 on one database to 36,382 on another.

Email addresses were the hardest datapoint to find. Unfortunately, these days it's the datapoint most in demand.

When a vendor had your record, there was a good chance of accuracy. But many vendors didn't have the 10 executives in their database. In the worst case, only two of ten vendors could find Jim Carey of Northwestern University. Maybe it's a subtle hint about academics.

C-level names seemed to be evident in more databases than lower-ranking players. This could speak to the method of compilation.

So what's a B2B marketer to do? Bernice and Ruth recommend these steps.

1. Quiz the vendors on what they have and how they got it.

2. Don't assume subsidiaries of large compilers have the same data or use the same compilation or cleansing techniques. They don't. Ask.

3. Be very specific when placing list or database orders. Use SIC codes or other tools to keep 'em honest and to be sure you get what you thought you ordered.

4. Check for industry or vertical specialization. Test their coverage before you buy. Shop for the vendor who has the best data on your target audience.

5. Run a data append test before you buy to test coverage and accuracy and to compare multiple vendors. Build in some house names that you know for sure as an accuracy benchmark. Better yet, buy a small number of names and verify the data yourself by phone before you place a bigger order. (A rough sketch of how to score such a test appears at the end of this post.)

There are about 12 million companies in the USA, and evidently getting to someone in them still isn't as easy as you might think. These kinds of exercises help us understand the realities and limits of databases, which in turn drive our thinking on how best to use the data we can get our hands on.
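For what it's worth, scoring the append test in step 5 doesn't require anything fancy. Here's a minimal sketch, assuming each vendor returns an appended file keyed the same way as your house benchmark records. The vendor names, fields and email key below are invented for illustration, not anything from the Stevens/Grossman study.

# Illustrative append-test scorer. Vendor files, field names and the
# email key are hypothetical stand-ins, not taken from the study.

HOUSE_RECORDS = {
    "known.contact@example.com": {"title": "CFO", "phone": "212-555-0199"},
    "another.contact@example.com": {"title": "VP Marketing", "phone": "847-555-0100"},
    # ...the rest of the house names you know cold...
}

def score_vendor(appended):
    # appended: {same key: {field: value}} as returned by one vendor
    matched = [k for k in HOUSE_RECORDS if k in appended]
    coverage = len(matched) / len(HOUSE_RECORDS)
    correct = total = 0
    for key in matched:
        for field, known in HOUSE_RECORDS[key].items():
            total += 1
            if appended[key].get(field, "").strip().lower() == known.lower():
                correct += 1
    accuracy = correct / total if total else 0.0
    return coverage, accuracy

vendor_files = {"Vendor A": {}, "Vendor B": {}}  # stand-ins for real append files
for vendor, appended in vendor_files.items():
    cov, acc = score_vendor(appended)
    print(f"{vendor}: matched {cov:.0%} of house names, {acc:.0%} of fields correct")

The point isn't the code; it's that coverage and accuracy are two separate numbers, and a vendor can look great on one and lousy on the other.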
PREVIOUS POST
Dancing with and Dodging Data on Madison Avenue

Whenever the mainstream media writes about what I do, I always get a queasy feeling. The breathless prose touting the obvious always leaves me cold. Such is the case with Stephanie Clifford's piece on the front page of Sunday's NYT Business Section titled "Put Ad on the Web. Count Clicks. Revise." which is more a testament to the skills of Darren Herman's publicist than an insightful look at the interplay of messaging and behavior on the web.

And yet my colleagues in traditional agencies and on the client side have not embraced the promise of using behavioral data to improve communication, nor have they accepted the idea that web-based data can be a good indicator of awareness, purchase intent, brand loyalty or a tool for on-demand research. So even though Ms. Clifford writes "The shift to data-based campaigns is forcing marketers to learn new skills and drawing a new breed of worker to Madison Avenue," it ain't necessarily so.

In fact, while John Lovett at Forrester predicts the market for web analytics software will grow at a compound annual rate of 17% over the next five years, a recent study of marketers found that only 30% of those who capture web data actually modified their websites as a result of the intelligence developed. So if only 1 out of 3 among those counting actually use the numbers to impact their communications or business tactics, we still have a long way to go no matter what the great Grey Lady reports.

The "reasons" for not using data are plentiful and pitiful. Consider these:

There aren't enough data guys to process the numbers and produce insights.
There's too much data to mine.
There's not enough data.
We don't know what to measure or what to count.
We don't know how to sequence the data.
We can't draw meaningful inferences from the data.
We can't see contingencies or dependencies within the data.
We don't know which software tools to use or to combine.
We trust or don't trust Google Analytics.
We don't know which data points are predictive or significant.
We don't know how to synchronize data from multiple sources.
We can't understand the interplay of campaigns, search, websites and WOM.
We don't believe the data.
We don't believe consumers know better than we do.

A recent survey by Forbes.com found that 82% of those responding identified conversions as the leading data indicator of online direct marketing success, followed by registrations and clicks. And while some brand guys and many online sales guys complain that counting pigeon-holes the web as a direct response or CRM medium, these arguments are a red herring.

The web enables us to see and count what consumers do. By understanding what they do, we can get them to do it again and we can find more people like them with a high propensity to do it as well. This gauges our ability to persuade and direct behavior. The web also gives us an immediate "gut check" to quickly and cheaply validate our creative intuition by testing words, pictures, offers and concepts with target customers. A/B testing allows us to present alternative approaches to matched sets of consumers and quickly determine which approach works better (a back-of-the-envelope version of that comparison is sketched at the end of this post). And collecting data on click streams and purchase history helps us understand why people do what they do and how we can find more people likely to take the desired action.

Data collection and processing essentially support two key assessments:

1. Effectiveness. The data tells us if we sold something -- an item, an idea, a participation.
Use data to measure who bought in ways that suggest how they made a buying decision and in ways that separate buyers from non-buyers. This cues us about which appeal resonated with target customers and identifies how many steps are required to convince customers to act.

2. Efficiency. By measuring the clicks necessary to convert a prospect into a customer, we assess the ROI of messaging and media used to build awareness and attract audiences. Understanding the cost-per-action compared to the relative value of customers acquired allows us to continuously improve our campaigns and buy media that will deliver the best bang for the buck.

All the rest are vanity metrics.
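Since I mentioned A/B testing above, here is roughly what "determine which approach works better" looks like as arithmetic. This is a generic two-proportion check with invented visitor and conversion counts, not any particular vendor's tool.

# Generic A/B comparison: did version B convert better than version A,
# or is the gap just noise? All counts below are invented.
from math import sqrt, erf

def ab_test(conv_a, visitors_a, conv_b, visitors_b):
    p_a, p_b = conv_a / visitors_a, conv_b / visitors_b
    pooled = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided, normal approx.
    return p_a, p_b, p_value

p_a, p_b, p_value = ab_test(120, 5000, 160, 5000)
print(f"A: {p_a:.2%}  B: {p_b:.2%}  p-value: {p_value:.3f}")
# A small p-value (say under 0.05) suggests the better-performing version
# really is better, not just lucky with this batch of visitors.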
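And the efficiency assessment is back-of-the-envelope math. A sketch with invented figures, just to show the shape of the calculation:

# Cost-per-action vs. customer value. Every number here is made up.
media_spend = 25_000.00       # total campaign cost
conversions = 500             # desired actions taken
value_per_customer = 120.00   # what an acquired customer is worth

cpa = media_spend / conversions
roi = (conversions * value_per_customer - media_spend) / media_spend
print(f"CPA ${cpa:.2f} against ${value_per_customer:.2f} per customer; ROI {roi:.0%}")

If the cost-per-action runs above what a customer is worth, no amount of clever creative will rescue the campaign; everything else really is a vanity metric.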

Danny Flamberg

I am a veteran marketing consultant working with leading and emerging brands.
