
I recently had the privilege of sitting in on FIR B2B episode 45, a podcast by Paul Gillin and David Strom. When my wife heard it, she said they sound like the “Click and Clack of tech.” Needless to say, it was a lot of fun. I had a few belly laughs, and we also covered some serious topics.
The main point of the conversation was building brands using the methods Mike and I promote in our book, Outside-In Marketing: Using Big Data to Guide Your Content Marketing. In brief: you build digital experiences using the words and phrases of your target audience. When they inevitably find your content, you condition the conversation toward your brand.
Over the course of the podcast, David wondered aloud why more marketers don’t consult keyword data when writing and editing content. I said it was because of the cultural stigma around authenticity, which was the subject of my last post, “The Challenge of Authenticity.” But there is a deeper reason I want to delve into here: it is a hard problem involving big data.
All kinds of data living together
Marketers are notorious for not liking math. But I suspect that this reputation is rapidly becoming a myth. If you look at the number of MarTech tools on the market today (more than 4,000 and counting), it is clear that being able to measure and improve marketing activity is a required skill for marketers.

A concise diagram of MarTech platforms as of January 2015 by Scott Brinker. Hint: there are more now.
The problem is not with marketers but with the data itself. Even the tech people are confounded by the sheer variety of data they need to integrate to take full advantage of keyword data in marketing activities. Any individual can do keyword research with readily available tools such as Google’s Keyword Planner or Trends. But if every marketer in a company used these tools to build content for related products or services, they would naturally produce duplicate content. So the first step in any keyword research program is to build a central database of keywords and develop governance around them. Some marketers don’t use keywords in their work because their companies don’t have keyword governance, and they are afraid of competing with colleagues for the same queries.
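To make the governance idea concrete, here is a minimal sketch of what a central keyword registry might look like. All names are hypothetical; a real system would live in a shared database, not in memory.

```python
# A minimal sketch of a central keyword registry that flags internal
# competition: two teams claiming the same query. Hypothetical example,
# not a description of any particular MarTech product.

class KeywordRegistry:
    def __init__(self):
        self._owners = {}  # normalized keyword -> owning team

    def claim(self, keyword, team):
        """Register a keyword for a team; report a conflict if another team owns it."""
        key = keyword.lower().strip()
        current = self._owners.get(key)
        if current and current != team:
            return f"conflict: '{key}' already owned by {current}"
        self._owners[key] = team
        return f"'{key}' assigned to {team}"

registry = KeywordRegistry()
registry.claim("cloud storage", "Storage Marketing")
registry.claim("Cloud Storage", "Analytics Marketing")  # flagged as a conflict
```

Even a simple registry like this prevents two teams from unknowingly building duplicate content around the same query.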
Suppose you have such a system. How do you know that the words these tools return are relevant to the products you are trying to market through content? The answer is, you probably don’t. Most people take their best guess, build content based on those guesses, and adjust their content and the related parts of their keyword databases over time. The data they use to make these adjustments are the key performance indicators for their businesses:
- Paid search effectiveness (cost per response, response volume)
- Organic search effectiveness (ranking, referrals minus bounces)
- Engagement effectiveness (clicks minus bounces)
- Conversion effectiveness (downloads minus abandonment)
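The four KPI families above could be folded into a single score per keyword, so the database can be tuned over time. A hypothetical sketch, with equal weights chosen purely for illustration:

```python
# Illustrative only: combine the four KPI families into one score per
# keyword. The metric names and equal weights are assumptions, not a
# formula from the book.

def keyword_score(metrics):
    """metrics: dict with responses, cost, referrals, bounces,
    clicks, downloads, abandonments for one keyword."""
    paid = metrics["responses"] / max(metrics["cost"], 1)        # responses per dollar
    organic = metrics["referrals"] - metrics["bounces"]          # referrals minus bounces
    engagement = metrics["clicks"] - metrics["bounces"]          # clicks minus bounces
    conversion = metrics["downloads"] - metrics["abandonments"]  # downloads minus abandonment
    return 0.25 * paid + 0.25 * organic + 0.25 * engagement + 0.25 * conversion
```

In practice the weights would be calibrated to the business, but even a crude composite score lets you rank keywords consistently across a large database.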
If you’ve ever run paid search campaigns, you know the difference between the words you start your campaigns with and those you end them with. You optimize your word list by process of elimination: the words that don’t perform get excluded from the campaign, and you reinvest that money in the words that are left. Bulk sheets of 800 keywords that get pared down to 50-keyword campaigns over the course of a month are quite common. This should give you a sense of how many false positives Google gives you in the Keyword Planner.
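The elimination process can be sketched in a few lines. This is a simplification, assuming cost per response is the only performance signal:

```python
# A sketch of pruning by elimination: start with a large bulk sheet,
# drop the non-performers, and reinvest their budget in the survivors.
# Assumes cost per response is the sole ranking signal.

def prune_campaign(keywords, budget, keep=50):
    """keywords: {keyword: cost_per_response}. Keep the `keep` cheapest
    keywords and split the full budget evenly among them."""
    survivors = sorted(keywords, key=keywords.get)[:keep]
    per_keyword_budget = budget / len(survivors)
    return {kw: per_keyword_budget for kw in survivors}
```

Run monthly, this is roughly how an 800-keyword bulk sheet collapses into a 50-keyword campaign.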
The same process that you can do rather quickly with paid search can be replicated with organic search. It just takes longer and requires more integration with data from your web analytics system. But the process is well worth it. As the database of keywords gets refined, everyone in your company will have access to the lessons learned from everyone else. In this way, the keyword database becomes a strategic asset for your whole marketing organization.
The keyword database gets even more intelligent if you can connect it to your social listening platform, your persona database, your product taxonomy, your marketing campaign calendar, and your CRM system. Each of these systems has a unique data model. Building a universal translator for all of them is a huge challenge. It’s not simply a matter of mapping one taxonomy to another: you have to build logic, flow charts, translation machines, and all kinds of big data analytics capabilities into the system.
Think of the end-to-end workflow between these different systems. If a marketer, call her Joan, starts with keyword research, she needs to understand how the keywords she finds relate to the personas she is trying to target with her content. Suppose the personas for her company are stored in a database and mapped to their buyer journeys.
For each step in the journey, keywords indicate what topics need to be addressed to satisfy the persona’s information needs. In this way, keywords become a tagging system that governs what content is needed. When Joan audits the existing content, she can discover what is working and what is not, where the gaps are, where the apparent duplicates are, and so on. Though Google is a good proxy for effective content, she also needs her web analytics system to audit the content.
When Joan finds the most valuable next action (optimizing existing content or creating new content), she needs to understand what products or services the information need relates to, and how to help the persona take the logical next step toward purchase of that product or service. This requires a product or service database, preferably tagged with known marketing attributes, such as existing campaigns and financial performance.
The last and most difficult integration is the CRM system. At a certain point, Joan will collect leads for the sales department to follow up on. In order for the salespeople to close the deals, they need to know the whole customer journey for each one of the leads. Every customer journey will have keywords, persona information, digital analytics, and product or service data, with related products or services listed for cross-sell or up-sell.
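A hypothetical record joining the systems in Joan’s workflow might look like this. The field names are assumptions meant only to show how keyword, persona, journey, and product data travel together with each lead:

```python
# A sketch of the end-to-end record handed to sales: every lead carries
# its keywords, persona, journey stage, and product data, plus related
# products for cross-sell or up-sell. Field names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Lead:
    persona: str              # from the persona database
    journey_stage: str        # e.g. "awareness", "consideration", "purchase"
    keywords: list            # from the central keyword database
    product: str              # from the product/service taxonomy
    related_products: list = field(default_factory=list)  # cross-sell / up-sell
```

The hard part is not the record itself but the translation layer that populates it from systems with five different data models.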
The system would be a lot less complicated without the keywords. But it would also be much less effective. Convincing marketers to embrace this complexity is a big challenge.
This, in a nutshell, is why more marketers don’t use keywords prior to building content. It’s also why there are 4,000-plus MarTech platforms and very few of them work together. There is no end-to-end MarTech platform that integrates keyword data with the rest of the marketing stack. Companies have to do this integration themselves. Marketers, and the systems they use, have to learn to customize the keyword research they get from search engines for their businesses.
That is a hard problem, especially for marketers. We explain how we are solving the problem at IBM in our book.

About James Mathewson
James Mathewson has 20 years of experience in writing, editing, and publishing effective web content. As the distinguished technical marketer for search at IBM, he currently leads four missions within IBM Marketing: search marketing, content strategy, video marketing optimization, and marketing taxonomy innovation. These related missions come together in the tools and education he designs to scale content marketing across the largest B2B enterprise in the world.
James is also a prolific author. As lead author of Audience, Relevance, and Search: Targeting Web Audiences with Relevant Content (IBM Press, 2010, with co-authors Frank Donatone and Cynthia Fishel), he helped pioneer a new way of thinking about search marketing. Rather than seeing search as an after-the-fact optimization tactic, the book encourages authors to see search as a source of audience data. Using this data, authors can better understand the needs of their target audiences in their planning and writing activities. The book predated algorithm changes at Google that now force SEOs to follow many of its guidelines, in particular: write for humans, not search engines, but use search query data to better understand the humans you write for. James is also the author of more than 1,500 articles and blog posts, mostly on the intersection of technology and content.
James has led the organic search marketing mission for IBM for five years, adding the other missions as the needs have arisen. As search marketing leader, James has built the systems, processes, and technologies necessary to govern content creation and curation across millions of web experiences worldwide. As such, he has been at the tip of the transformation spear, as the company has shifted from a traditional brand and comms marketing model led by advertising toward a content marketing model that focuses on intercepting clients and prospects in their content discovery activities. The transformation has contributed to a fourfold increase in leads attributed to digital marketing.
Prior to leading the search mission, James was editor in chief of ibm.com for four years. In that role, he focused on improving customer satisfaction with ibm.com content. That entailed writing style guides and educating writers, editors, and content strategists on how to create audience-centric content. These efforts helped reduce the percentage of users citing content quality as the cause of their dissatisfaction from 6% to 1%. During his tenure, search continued to cause 7% of the respondents to fail to achieve their goals, and so he has focused on search ever since. His first job in that capacity was to replace the ibm.com internal search function. Within a month, the new system went from the 20th largest to the 2nd largest referring source for IBM marketing experiences.
The post Why don’t more marketers use keyword research? appeared first on Biznology.