Stop Flying Blind - A book by Michael Mace

This is a book in progress, on the art and science of using external information (competitive info, market research, and advanced technology) to drive business strategy. Most companies do it wrong, or don’t do it at all. There’s a new section every week. Your comments are welcome. If you’re new to this weblog and want to read the sections in order, check out the Chapters list at right and start from the top.

The Book is Complete


I’m happy to tell you that the book is complete and is now available. You can find it here.

Thanks very much to everyone who contributed questions and suggestions.  You improved the book a lot!

(By the way, I changed the title of it to “Map the Future” because “Stop Flying Blind” was too darned difficult to pronounce. Try saying it three times fast and you’ll see what I mean.)


14. How to segment the market for a new product

Last time I talked about the need to segment the market if you’re designing a new type of product. If you design a product to please everyone, chances are you’ll end up with inoffensive pablum that excites no one. That works pretty well in politics, where voters have only a couple of choices. But in new product design, where consumers can choose from an almost infinite range of new products, unexciting is usually deadly. So you should optimize the product to make one segment of customers deliriously happy, and not worry about the rest.
Unfortunately, segmenting the market for a new type of product is a lot harder than you might expect.

There’s a huge amount of accepted wisdom on how you’re supposed to use research to identify market segments. But most of it is designed to refine the segments in an existing market — for example, what’s the under-age-12 market for tennis shoes like? When you apply those same processes to defining the market for a new type of product, something nasty happens: you can’t find any segments. The reality is that market segments for a new category of product don’t exist until that product is delivered. Segments gradually coalesce from a feedback loop between the desires of customers and the products that companies offer to them.

The process is a little like the way that astronomers say the solar system was formed. You start with a big cloud of gas and dust. Small lumps and thick areas in the cloud slowly draw together under the influence of gravity. Wait long enough, and stars and planets will eventually emerge.

When you do research on potential new markets, you’re searching around in the cloud for thick spots. The evidence will be vague and contradictory, and you can easily miss it if you’re not careful. The trick is to look not for segments themselves, but for groups of people who share desires or other characteristics that you can mold into a new segment.

For example, there was no real market for sport utility vehicles in the United States until some clever folks at the auto companies called it into being. Early civilian jeeps were sold as farm tools, believe it or not. But after the disappearance of the station wagon, there was a need for cars with a lot of carrying capacity and with a less domesticated image than minivans. Virtually no car buyers would have thought to ask for an SUV, but when offered a car that could haul a lot of stuff and also had a buff image, people jumped all over it.


How to find the lumps

As I mentioned in Chapter Nine, a lot of research companies are happy to sell you ready-made market segmentation schemes that they have derived from demographic data. These segmentations are built around age, income, and other basic characteristics of the population, and usually split a country into about a dozen groups, each with around 6-12% of the population.

In some cases these segmentations can be useful, especially if you’re selling a product that shoots for very large generic markets, or is closely tied to age or income (TV shows and blockbuster movies come to mind). But for new product categories, especially in high tech, I’ve found that generic segmentations are close to useless. They’re backward-looking, telling you how people have behaved in the past, not what they’re going to do in the future. And the segments aren’t cut the right way.

For example, a generic segmentation will tell you that young people tend to be early adopters, something that’s often true but is also meaningless if you happen to be designing a product for older people. The best customers for the RIM BlackBerry mobile e-mail device have been business professionals over age 40 working on Wall Street. A generic national segmentation would never have pointed you at that segment, because it’s too small to show up in a mass national survey. And most generic segmentations point new tech products not at over-40 adults, but at 20-something technology lovers. There are some companies that have tried to make e-mail devices for the under-20 crowd, most notably the Danger Hiptop. But they have achieved only a tiny fraction of RIM’s success.

So you need to do your own segmentation. But even then things are still tricky. Traditional research methodologies were designed in the consumer goods industry, in which it’s pretty easy to tell who your customers will be. If you’re making a new laundry soap, you talk to people who do laundry. The traditional research approach says you start with some very general focus groups with those laundry-users. Those groups help you think of some guesses — hypotheses, if you want to sound professional — about what the customers might want (“I know, let’s make it lemon scented!”). You then use quantitative studies to test those guesses and gather precise information on how the market will react (“women under 40 prefer citrus scents, but those over 40 prefer floral scents”). The process culminates when you have enough information that you can tell the product development team exactly what to create. They build prototype products (“Blammo, the zesty new laundry soap for a new generation”), you test them, and when you’re ready you launch the product.

That process falls apart rapidly when defining a market for a dramatically different product. The first problem is that in order to recruit people for the focus group, you have to know what your market is. If the market’s not yet defined, you are almost 100% certain to recruit the wrong people for the groups. The focus groups are the foundation of the whole research process, so if you start with the wrong people, it will invalidate everything else you do. As we say in the computer industry: Garbage in, garbage out.

There’s also a practical reason for not starting with focus groups. I’ve found that it’s almost impossible to make an engineering team wait for you to complete the three-step research plan. Because of long product development lead times, they need to get started on their work very early. So they’ll come to the focus groups and start development based on whatever they happen to hear there. In most cases, they’ll lock onto whichever customer comments match their preconceptions, and ignore the rest. By the time you get the quantitative research done, they’ll be halfway finished building the product.

When you’re defining a new market I think you need to turn the traditional research process on its head. Your goal isn’t to gradually build up a fine understanding of the market, it’s to get a general idea of the opportunities as quickly as possible so the engineers can start work. Then you refine your understanding of the market as they refine their product.

Do the quantitative study first. That means the first step isn’t focus groups, it’s quantitative research to try to get a feel for the structure of the market. One of the best ways I’ve found to do this is with a feature survey. Start with a broad cross-section of the public. Give them a very general description of the type of product you’re working on (for example, a mobile phone with advanced features). Then give them a long list of features, and get them to rate each feature for its relative importance if they were buying that product. Again using the phone example, you could list things like long battery life, low weight, color screen, ability to play MP3 music files, and so on. You don’t have to list every possible feature, but make sure you’re covering all the possible categories — entertainment-related features, communication-related features, etc. Compiling this list is a great place to have the engineers help you brainstorm.

You should also capture demographic information on everyone in the study — income, education, job, age, etc.

When you get the data, your temptation will be to look up which features “scored the best.” Go ahead and look if you want, but afterwards you should ignore that information. What you want to look for is not high scores, but lumps in the cloud. By that I mean features that tend to cluster together for some groups of customers — for example, if someone favors long battery life, do they also favor color screens? There’s a type of research called discrete-choice modeling that’s great for this sort of work. A good research company can help you do this analysis. The clusters you identify are your potential market segments. For each of them, dig into their demographics and other information until you understand what makes those users different from the rest.
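One simple way to start hunting for these lumps, before bringing in a research firm, is to look at pairwise correlations between feature ratings: features whose ratings rise and fall together across respondents are candidates for a cluster. Here’s a minimal, standard-library-only Python sketch; all feature names and ratings are invented for illustration:

```python
# Pairwise correlation of feature-importance ratings (1-6 scale).
# All feature names and ratings below are invented for illustration.
ratings = {
    "email":      [6, 5, 2, 6, 1, 5, 2, 6, 1, 5],
    "messaging":  [5, 6, 1, 5, 2, 6, 1, 5, 2, 6],
    "mp3_player": [1, 2, 6, 1, 5, 2, 6, 1, 6, 2],
    "battery":    [5, 5, 5, 6, 5, 5, 6, 5, 5, 6],
}

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, standard library only."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Look for "lumps": pairs of features whose ratings move together.
features = list(ratings)
for i, a in enumerate(features):
    for b in features[i + 1:]:
        print(f"{a:10s} vs {b:10s}  r = {pearson(ratings[a], ratings[b]):+.2f}")
```

With real data you’d use a proper technique (cluster analysis, factor analysis, or the discrete-choice modeling mentioned above), but the principle is the same: look for co-movement, not high averages.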

The segments will seem depressingly vague when you look at the data. The researchers may tell you that correlations between features are weak, or there may be only a few features that form a particular cluster. That’s fine. Remember, you’re looking for tendencies that you can turn into a segment, not full-blown pre-existing segments. If the market were fully formed, there would already be 12 companies selling to it. Even a major market like SUVs didn’t just materialize in one year, it developed gradually as car makers noticed an interest in more rugged transportation, and consumers responded to the first such cars targeted at that idea.

Next, do a quick and dirty product concept test. Once you’ve identified the clusters, the next step is to have the engineers create some product concepts for them. These do not have to be full prototypes. In fact, few high tech companies have time to test-market working prototype products these days. It’s enough to have a nice sketch, along with a one- or two-paragraph description of the product, and basic price, size, and weight estimates. This is where you want to get the most creative thinking from your product development people. You have to make sure the concepts are well enough described that people can understand them and picture how they’d be used. Then do a quantitative test of the concepts, contacting the segments you identified in step one (something you can do since you gathered good demographics on them), and seeing how they react to the product descriptions. Do they like the ideas? How interested are they in buying? How much would they pay? This is the time to gather as much information as you can on preferred buying channels, price points, and so on.

You should test all the product concepts on all the segments, even if you think that one product will really appeal to only one group. So, for example, if you were testing phone ideas, you’d ask the communication-focused users to rate the entertainment-focused phone, even though you think they won’t like it. If you’re correct about the segments, the right people will want the right products. If you’re wrong about the segments, your results will be all over the map and you’ll need to rethink the market.

Once again, the correlations here may be frighteningly mild. For example, 60% of your target market might want the product designed for them, while 40% want something they weren’t supposed to want. That’s fine; people are always more complex in reality than they are in a segmentation model. As long as you have a little angle to work with, good marketing and product design can solidify it into a full-blown segment.

Focus groups come last. The concept test should give you the information you need to complete product development. The final step is to do focus groups with the target customers, when the product is nearly finished. You don’t do the groups to gather information; you do them to gather ammunition. Record video of the groups, and edit the video down to about ten minutes of the customers describing themselves and reacting enthusiastically to the product. If you’re in an established company, you can use the tape in-house to help explain your product and get people to support its launch. If you’re in a startup, you use the tape to help you raise money for the launch. And in either case, the tape helps you educate the press and analysts about your product.

Researchers are sometimes uncomfortable with using focus groups this way. They feel that research should always be a search for the objective truth, and that selectively editing the focus group findings is a sort of crime against nature. Don’t let them throw a guilt trip on you, baby. Focus groups aren’t statistically valid anyway. Besides, you’re not pursuing abstract truth, you are building a new market out of (almost) thin air. You need ammunition to bring that market to life, and the focus groups are your source of ammunition. Use them unashamedly.


To summarize, traditional research on a market works like this:

1. Focus groups to brainstorm. Do focus groups to get a feel for the customers, get some ideas about what they want, and create some hypotheses of user segments to test.

2. Quantitative research to test. Do a quantitative study to validate the hypotheses you formed in step one, and size the segments.

3. Product concepts to refine. Conduct product concept tests to validate the designs produced by your engineers.


The new market definition process works like this:

1. Quantitative research to find lumps. Do rapid quantitative research testing many feature possibilities. Analyze results to identify customer clusters (potential market segments).

2. Quick product idea test on latent segments. Conduct quantitative product concept tests, with very sketchy product descriptions, to validate the segments and give the engineers enough information to go to work.

3. Focus groups for marketing ammunition. At the end of the process, conduct focus groups with people reacting to nearly finished products, to collect video verification of the segments and help you prepare for launch.


Next time I’ll talk about a totally different type of market research — advertising proof studies.

13. Applying market research to product strategy

“Markets that don’t exist can’t be analyzed….The only thing we may know for sure when we read experts’ forecasts about how large emerging markets will become is that they are wrong.” –Clayton Christensen, The Innovator’s Dilemma

One of the most difficult tasks in market research is guiding product development. The tech industry’s bipolar view of the future dominates its handling of new product research. The visionary companies tend to reject all product-related market research. They rely on their own ideas and instincts.

On the other hand, reactive companies try to make all their product decisions through research. When you do this it’s very easy to use the wrong sort of research. As I’ve mentioned before, if you use a focus group to make product decisions, you might as well flip a coin, because there’s no way to know if the group represents customers as a whole.

It’s better to use quantitative research – at least then you’ll know you have a representative sample of customers. But there are still two major drawbacks to this sort of research, which I call the possibility gap and the blender.

The possibility gap. The visionaries are right on this point: customers usually don’t know what they want until they see it. If you ask an existing user for product ideas, they’ll take what’s wrong with the current product and dress that up as ideas for the future. For example, for years I looked at research on PC users, and they always asked for computers that are cheaper, have more memory, and run faster. Why? Because those are the barriers the users run up against most often.

In 1995, almost no customers in PC research studies were asking for high-speed network connections and photo-realistic 3D graphics, yet those turned out to be probably the most important new PC features in the following decade. To catch those opportunities, you would have needed a much deeper understanding of user psychology and of technology trends.

Hold that thought.

The blender. In most product feature studies, people are given a long list of features to evaluate, and the features that get the highest average score are the ones selected for the product. The problem with this is that it turns the customers into a single average, as if they had been dropped into a blender. The only features that score highly will be the lowest common denominator ones that affect everyone — things like weight, size, and ease of use. If you have a feature that’s beloved by some customers but hated by others, the two groups will cancel each other out.
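A toy example makes the blender effect concrete (all ratings here are invented): a polarizing feature can score below a universally lukewarm one, even though half the sample loves it.

```python
# A toy illustration of the "blender" effect. All ratings are invented.
# Six respondents rate two features on a 1-6 scale.
from statistics import mean

email        = [6, 6, 6, 1, 1, 1]  # loved by half, disliked by half
battery_life = [4, 4, 4, 4, 4, 4]  # mildly liked by everyone

print(mean(email))         # the polarizing feature looks like a loser...
print(mean(battery_life))  # ...beaten by a feature nobody is excited about

# Looking at the distribution instead of the average reveals the lump:
lovers = sum(1 for r in email if r >= 5)
print(f"{lovers} of {len(email)} respondents rate e-mail 5 or higher")
```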

There’s a good example of this in the mobile phone world. If you survey mobile phone users about feature desires, the issues that rise to the top are smaller size, lower cost, and longer battery life. Those are the things that irritate almost all phone users. More advanced features, like built-in e-mail, end up close to the bottom of the list.

Despite this, two of the hottest advanced phones in the US today are Research in Motion’s BlackBerry and Palm’s Treo, both of which combine phones and e-mail features. They’re not attractive at all to most mobile phone users, but are beloved by the 10% of mobile phone users who are so obsessive about communication that they want their e-mail with them all the time.

Very often, at least in technology products, the biggest opportunities are products that some people love but others hate. So what you want to look for in feature research isn’t the blended average, it’s the lumps that are in the mix before you turn the blender on. What feature requests cluster together? Do the people asking for those features have personalities or demographics in common? What problem do they share that drives them toward wanting those features?

The right way to guide products with research

I think the way to get past the blender and the possibility gap is not to try to design the actual products through research. Instead, focus on understanding the needs and psychology of the customers, so you can anticipate the way they’d react to new features. How do they live their lives? What do they care about? What are they trying to accomplish? What challenges do they face that you might be able to help with? Once your product engineers absorb these issues, they’ll start to more or less automatically design the right products.

Let’s take mobile phones again as an example. When you look in depth at the motivations of mobile phone buyers in the US and Europe, it pretty quickly becomes clear that a majority of them — about two thirds, actually — care only about basic voice and maybe text messaging. In the US, they buy the cheapest service plan they can, and take the free phone that comes along with it. In Europe, they’re usually on a very low-cost pay-as-you-go plan (where they add money to the phone account as they go, rather than paying a flat monthly fee), and they often turn the phone off in order to hold down their bills.

These people have no interest in any advanced features or services. If you’re doing a study on advanced phones and you keep them in the research mix, their sheer numbers will make you conclude that there’s little hope for any sort of advanced phone. And, in fact, that’s just what some mobile companies have concluded.

But if you exclude those basic users from your study, you find that about one-third of mobile phone buyers actually are interested in advanced features of various sorts. One-third may sound like a small number, but keep in mind that about 700 million mobile phones were sold worldwide in 2005. A third of that is about 230 million phones a year, enough to attract almost any company’s attention.

The problem with these advanced users is that they don’t all want the same thing. If you apply the blender principle and mix them together as a group, you’ll find that on average they are moderately interested in almost every feature imaginable. This has led a lot of companies to create “smart phones” that are basically kitchen-sink bundles of features lumped together. These products usually don’t sell very well, because in the process of trying to be everything to everyone they become too big, too expensive, and too complex for anyone to love.

The products remind me of politicians trying to assemble the largest possible coalition of voters by not offending anyone. That sometimes works in politics because the voters have only a handful of parties to choose from, and a politician has to assemble a majority vote. But a product has an unlimited number of competitors, and 10% share might be a huge win. Better to please some people intensely and piss off everyone else than to get a lukewarm reaction from everyone.

Instead of trying to attack the engaged users all at once, you need to look for segments within them. Are there groups of people who want certain features in particular?

When you do that with advanced phone buyers, three groups emerge. One group gives high ratings to all communication-related features — e-mail, instant messaging, built-in fax, etc. Basically, they’re communication junkies, and they’ll pay extra for a communication-enhanced phone. These are the people buying RIM BlackBerrys and Palm Treos today.

The second group gives high ratings to information-related features — large memory, document display, databases, etc. These are people in information-intense jobs who need a mobile memory supplement. Think of a doctor looking up drug dosage information on the go, or a lawyer trying to find a case reference in court.

The third group responds best to entertainment-related features: music, video, games, and other ways to have fun. These entertainment-focused users tend to be younger than the others, and don’t want to give up their electronic lifestyle even as they enter the job market.

Segmenting the market isn’t a new idea; the auto industry has been doing it for more than 70 years (think sport utility vehicles and sports cars). But although the idea of segmentation is straight from Marketing 101, and is heavily used in established industries, it’s very hard to do in a new industry or product category. Market segments are only obvious after they have been proved by a successful product. Until someone builds that first e-mail phone or SUV, the natural human tendency is to either dismiss the existence of the market, or to lump the customers together and try to hit a home run with all of them at once.

You need to resist that temptation. Products designed to please all segments almost always fail, and if you wait until someone else validates the market, you’ll be fighting with 20 other companies to dislodge a competitor rather than running ahead of the pack.

Next time I’ll talk about how to segment the market for a new product.

A change in pace

This weblog is an experiment in developing a book online. I let it rest for a little while because I wanted to think about the feedback I was getting. A number of people seemed confused by some of the chapters — they felt the chapters were incomplete, or they weren’t sure what the point was.

I realized the problem was my fault. To adapt the book content to a weblog, I was taking the draft chapters and splitting them into digestible chunks, posting one chunk a week. Unfortunately, web posts are typically much shorter than book chapters, and have a different structure. In a book, a chapter is usually a fairly long essay. It constructs an argument in segments, like a gourmet five-course meal that builds from appetizers to soup to salad and so on.

A good weblog post is more like eating bon-bons. Instead of building a structured argument, it makes a single point concisely and then gets out of the way. By chopping the chapters into bits, I was giving you the worst of both worlds – you didn’t get the full argument you’d expect from a book chapter, but you also didn’t get the single clear point of a good blog post. Too many of the online chapters felt like fragments – because that’s what they were.

So I’m going to structure the writing a little differently in the future. Instead of trying to replicate book chapters, I’ll just write about the ideas that I want to cover in the book, roughly one idea per week. I think this will make the weblog a little easier to read, and I hope it’ll also encourage more discussion.

As always, I’m very interested in your comments and suggestions. Please don’t be shy!

12. The online revolution in market research

Working in the high tech industry, I have become very jaded about claims that the Internet is going to revolutionize something. Grocery shopping, book-reading, even going to the bathroom were all supposed to be transformed by the Web. So I hesitate to say this, but I really believe it: The Internet is producing a revolution in quantitative market research.

Web-based companies are driving incredible reductions in the cost and time needed to collect quantitative information. It’s now becoming possible for even a small company to create the sort of studies that previously were available only to the largest companies and political organizations.

To explain how dramatic the change is, I first have to describe the steps you take to conduct a traditional quantitative research study. First, you work with a researcher to create and review the questionnaire. This alone can take weeks. The questionnaire has to be specially coded and formatted so that phone interviewers can understand it. Then the researcher identifies a list of people to call, the call screeners have to be briefed, the phone calls have to be made (and often re-made if no one’s home), the results have to be tabulated and analyzed, and then the researchers create a presentation of the results. The process generally takes 2-3 months and costs at least $30,000 in the US — sometimes a lot more depending on what you want to learn.

Several companies on the Web have recently automated this process. Examples include SurveyMonkey and Zoomerang, and I know there are a lot of others. In all of them, the process generally works the same way: you create your own survey online, send e-mails to recruit the respondents, they fill out the survey on the Web, and then you download your results. I did my first survey this way about a year ago, and the process took less than a week:

On Monday afternoon I wrote the survey. At 8 pm Monday we sent the e-mails inviting respondents. At 11 pm that night, I checked the results from home just before I went to bed. We already had 800 responses, and I could check the tabulated results for each question. There were several interesting trends that I decided to watch closely. I let the survey run until Thursday, and we got a total of 5,400 responses, 60% of them from people outside the US. On Friday morning I showed the results to our CEO, and he picked out four questions that he wanted made into charts for a speech the next week. I downloaded the results into Excel, and sent him the charts before I went home that night.

Total time elapsed: Five business days. Total cost to the company: About $800.

Oh and by the way, that $800 included a year’s subscription to the survey service, which will let me do follow-up studies for free.

Total reduction in time: About 92%. Total reduction in cost: At least 95%.

Now, this is not an invitation to go cut your company’s market research budget by 95%. There are some significant limitations to online research. Here are the most important, plus some ideas on how to work around them.

Drawback #1: It’s limited to web users. Conventional market research uses phone calls or the good old postal service to contact people. Although this can be slow and expensive, it reaches almost everyone. Online market research reaches only people who use the web. Although in 2006 that was about 70%-77% of adults in the US, and a rising percentage elsewhere, it’s not everyone.
Web usage is generally lower outside the US. Below are some Web usage rates for some prominent countries:

[Table: Percent of population who have access to the Internet]

The penetration limit may not matter if you’re selling a high tech product in the developed countries — almost all of your target customers will probably be online anyway. But it’s a big deal to a company selling, for example, discount eyeglasses for elderly people (only about 40% of people over age 65 in the US are online).

Even if you are selling a technology product, you shouldn’t make the mistake of projecting to the whole adult population from online results. For example, if 10% of people in an online study say they like your product, you can’t conclude that 10% of adults in the country like your stuff, only that 10% of the web users like it.

There are a couple of workarounds for this.

If you’re trying to decide whether there’s a big enough market for your product, try the math assuming that the only people you’ll sell to are those who are online. For example, if in an online study you find that 10% of the people surveyed want a product, your estimated available market is 10% of the online population (in the US, that would be 10% of about 210 million online users, or 21 million people). You know the real market will be bigger than that, but at least you have a floor. If that’s enough people to make your product profitable, you have the info you need to launch it.
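The floor calculation above is simple arithmetic; here it is as a quick sketch, using the figures from the paragraph:

```python
# Floor estimate for the available market, using the paragraph's figures.
us_online_population = 210_000_000  # approximate US online users
interested_share = 0.10             # 10% of online respondents want the product

market_floor = us_online_population * interested_share
print(f"Market floor: {market_floor:,.0f} people")  # Market floor: 21,000,000 people
```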

(Even in some low-penetration countries such as China and India, the actual number of Internet users is large enough that it adds up to a substantial potential market. If you believe the statistics, China is second only to the US in number of Internet users, and India’s Internet population is slightly larger than the UK’s. I’d read these numbers with a bit of caution, though — in some countries Internet access is through Internet cafes rather than a PC that’s in the home or office. That means usage patterns, and your ability to market to these people online, may be very different.)

Another tactic you can try is supplementing your online research with other studies. For example, you could conduct one conventional market research study side by side with an online one. By comparing the results, you’d get an idea of how you need to adjust the online results to account for the full population. Then you could probably get away with just online studies for a year or 18 months before you’d need another conventional study to recalibrate.
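The calibration described above can be expressed as a simple ratio. A sketch, with all percentages invented for illustration: divide the conventional study’s result by the online result, then apply that factor to later online-only findings.

```python
# Hypothetical calibration of online results against one conventional study.
# All percentages below are invented for illustration.
online_interest = 0.12  # 12% of online respondents want the product
phone_interest  = 0.09  # 9% in the conventional full-population study

calibration = phone_interest / online_interest  # about 0.75

# A later online-only study can then be adjusted by the same factor:
later_online_result = 0.20
adjusted = later_online_result * calibration
print(f"Adjusted estimate: {adjusted:.1%}")  # Adjusted estimate: 15.0%
```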

You should also ask about online access whenever you collect information from your customers. For example, rather than just asking age and education on a product registration card, also ask your customers if they use the web. That will tell you what percent of your users you’re reaching with online studies.

Drawback #2: You need a good list. To conduct an effective online survey, you have to send e-mail invitations to a random list of web users. If there’s a bias in the list of people you contact, your results will be biased. In my experience, this is the biggest source of errors in online studies. For example, I’ve seen online surveys of people who register to post messages on a particular website. People who post actively online are far different from the average web user, and the results you get from them will not reflect “normal” people.

I’ve also seen industry analyst companies trumpet the results of surveys of their own subscribers, as if those people represented average customers. Remarkably, those surveys seem to always reflect back whatever messages the analyst firm has been preaching. This happens both because people repeat the messages they’ve been told, and because people tend to subscribe to industry analysis services that they agree with in the first place.

But the worst way to get a list of customers is to post a message to an online forum inviting people to come respond to your survey. When respondents can select themselves, you inevitably get flooded with fanatics and cranks. That isn’t a survey, it’s a popularity contest.

Finding a good list is hard. There are companies that specialize in collecting lists of people willing to fill out online surveys. You can rent access to these lists (although it’s often expensive). I use this technique a lot in my own work, and the results seem quite solid, but I do sometimes worry that people who go out of their way to volunteer for surveys may not be a great proxy for the public as a whole.

If you have friends in research at other companies, you may be able to share access to customer lists with companies that don’t compete with you. For example, if you were looking to evaluate the market for a new video camera, a list of television owners might be a good starting point. Unfortunately, there are legal restrictions on how companies can share e-mail lists; make sure you don’t violate the law.

Over time, it’s best to compile your own customer list. That won’t help you size the market for a completely new product, but it will let you track your current ones, and evaluate the market for derivative products. Chances are your company’s marketing department is already compiling a customer list. They probably regard it as a marketing tool, something they can milk like a cow to get additional revenue. But the list is also a valuable source of information, and can save you a lot of money on research. Reward your customers for sharing their e-mail addresses with you, and make sure the marketing department doesn’t drive those people away by spamming them with too many “special” offers.

Drawback #3: You need to know how to design and analyze surveys. This is trickier than it looks. It’s easy to accidentally bias a survey by asking a question in the wrong way. For example, if you ask people to rate something on a numerical scale (e.g., “one to six, with six being best”), it’s a good idea to give an even number of choices. If you give an odd number, a lot of people will cop out by choosing the middle, neutral option.

If you haven’t had some training, it’s best to get help from someone who knows how to construct a survey. (This is the point where I should probably do a shameless promotion for the consulting company I work for.)


Fun with online research

Now that I’ve listed the challenges with online research, let’s talk about the opportunities. The first opportunity is frequency. You’ve seen the “tracking polls” that professional politicians use in election campaigns. You can now do your own tracking studies. If you have a good list of e-mail addresses, you can easily survey a subset of them every week, watching for changes in attitudes and tracking the effect of things like ad and PR campaigns. Do this right, and you should never be caught by surprise by a market trend again.
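As a sketch of the mechanics, assuming you keep the list as a simple collection of e-mail addresses, pulling a fresh random slice each week is only a few lines of Python. The list and sample size here are invented for illustration:

```python
import random

# Hypothetical customer e-mail list (addresses are placeholders).
customer_list = [f"user{i}@example.com" for i in range(10_000)]

def weekly_sample(emails, n=500, seed=None):
    """Draw n addresses at random, without replacement, for this week's wave.

    Passing a seed makes a draw reproducible, which helps when you need to
    reconstruct exactly who was invited in a given week.
    """
    rng = random.Random(seed)
    return rng.sample(emails, n)

batch = weekly_sample(customer_list, n=500, seed=1)
```

In practice you would also want to exclude people surveyed in recent weeks, so no one on the list gets invited too often.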

It’s very helpful if you can collect demographic information about the people on your customer list. Once you have this, you can use it to aim targeted surveys at particular segments — for example, people with a certain income level, or in a particular age group. This will let you learn much more detailed and subtle information on the market than you could have collected in traditional studies. Be sure you understand the limits on information collection in the regions where you operate, though — some countries are adopting restrictive rules on the use of customer information, and the rules change frequently.
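For example, once demographics are attached to the list, a targeted pull is just a filter. The field names and thresholds below are hypothetical:

```python
# Hypothetical customer records with stored demographics.
customers = [
    {"email": "a@example.com", "age": 29, "income": 85_000},
    {"email": "b@example.com", "age": 61, "income": 40_000},
    {"email": "c@example.com", "age": 34, "income": 120_000},
]

def segment(customers, min_age=25, max_age=40, min_income=75_000):
    """Return invitation addresses for one demographic slice of the list."""
    return [c["email"] for c in customers
            if min_age <= c["age"] <= max_age and c["income"] >= min_income]

segment(customers)  # ["a@example.com", "c@example.com"]
```

The filtering is trivial; the hard part is the point made above about data-collection rules, which govern whether you may store these fields at all.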

The other opportunity, I think, is driving the use of research more deeply into your company. In a situation where market research is virtually free and almost instant, your use of it can change dramatically. Trying to make a tough decision on whether or not to build in a feature? You can have data in a week. Thinking about a price change? You can get customer feedback almost instantly. I’m not saying you should let research make your business decisions for you. As I’ve said elsewhere in this book, I think the best decisions come from a mix of data and gut instinct. But now there’s no excuse for basing a decision only on guesswork. The data’s almost free. Why would you possibly want to operate without it?

Here’s your chance to check out SurveyMonkey. Please click here to rate this post. The link will open a one-screen survey hosted by SurveyMonkey, and you’ll get to see the results after you take it.

Next week: Using market research to plan product strategy.

  1. In 2004, there was allegedly a project at Microsoft to create an Internet-equipped public restroom, so people could browse while otherwise occupied. There’s some disagreement over whether it was a serious project, or just a joke. Everyone agrees that if there was such a project, it was killed soon after it was first disclosed to the public. Meanwhile, those of us in the mobile industry know that a significant number of people use smartphones to browse and send text messages while they’re using the facilities. So maybe this is an area where the Internet is breeding a revolution after all. [↩ back]
  2. Harris Interactive measured US penetration at about 77% in May 2006. The other US statistics in this section also come from Harris. You can check out a summary of their study here. Other sources put the US figure at about 69%. The difference may be due to variations in the way different studies define Internet penetration. [↩ back]
  3. These are generally 2005 figures, as collected by [↩ back]

11. What to look for in a researcher, and presenting findings

I’m sorry it has been a couple of weeks since I posted. I was involved in a very time-consuming protest against a developer’s plan for my neighborhood, and had to cut back on other activities.

This week we continue our look at market research, with thoughts on what to look for in a market researcher, and how to present findings.

What to look for in a market researcher

Obviously, the most important aspect of a good researcher is professional competence. You need someone who’s well trained, and has experience in a wide variety of different methodologies. The popular stereotype is that numerical analysts aren’t good at dealing with people, but the best researchers often have an interesting mix of people skills and quantitative analysis skills. You can still be a successful researcher even if you’re an introvert. But I’ve never seen a successful researcher who was bad with numbers.

The other characteristic to look for is the ability and willingness to think beyond the numbers. This is a hard thing to find in market researchers; many of them are very methodical, and reluctant to draw any implications from the facts they’ve discovered. For example, they might report that a particular customer segment has a high percentage of elderly people, but they’ll be very uncomfortable speculating on why there are so many old people in the group.

This is a significant problem because there’s never enough money and time to research everything you want to know. At some point you have to stop gathering data and fill in the gaps with your best guesses and extrapolations. This is profoundly uncomfortable for many researchers because it runs contrary to their training. The whole point of market research is not to guess. And many market researchers aren’t very good at guessing, either.

To find researchers who are good at drawing implications, talk with them about their previous studies. Ask what they learned, what conclusions they drew, and what actions they recommended. The more insightful and non-obvious their conclusions, the better.

The other thing to watch out for is people who are comfortable forming implications from their research, but form bad ones. Sometimes this will just be because they’re not very insightful. It’s best to avoid hiring these people. But sometimes it happens just because they’re naïve about your industry. It’s going to be almost impossible to find researchers who are both very skilled in their craft and deeply informed about your industry. To get around this problem, have the researcher work with a competitive analyst when thinking about implications. The competitive analyst may be able to give some of the industry background that the researcher lacks.

How to communicate market research findings

There’s no substitute for a good presentation when delivering market research. Graphs of research results and video of customers both work very well in a presentation format, and people always have a lot of questions that can best be dealt with in a live setting. A future chapter will give general guidelines on how to present. What I want to focus on here is something you shouldn’t do.

Don’t let an outside research supplier present the findings. As part of their standard service, the company that conducted the research for you will prepare a slide presentation of the findings. They’re proud of this work, and they’ll want to come in and present the slides to your whole management team.

That’s usually a bad idea.

First, remember that your goal is not to present raw data to the company, it’s to make sure the company takes appropriate action on your findings. That means the implications of your study are more important than the actual data, and they need to be tailored to the internal vocabulary and politics of your company. Most outside suppliers can’t understand this; they simply don’t have the context. Most of them will just present raw data — or worse, any implications they draw may not be appropriate to your company, or may be phrased in ways that people in your company will misunderstand.

For example, I’ve had outside researchers recommend my company adopt strategies that were already tried, and failed, years before. Or they have given advice that undercut exactly what we were trying to get the company to do. Once this has happened, it’s almost impossible for you to correct their messages, since you’re the person who chose the research supplier in the first place. At best you’ll look incompetent.

Second, almost every research company I’ve ever dealt with creates terrible presentations. And by using the word terrible, I’m probably understating the problem. Most of the supplier presentations I’ve seen are either ugly or incomprehensible, with a little bit of repetitiveness thrown in for good measure. I don’t know why this is. Maybe the presentations are an outlet for the repressed artistic yearnings of researchers (who, after all, spend most of their time doing very dry numerical work). Or more likely, the researchers are so in love with their data that they try to cram it all into one slide. Whatever the reason, most of the supplier presentations I’ve seen are so complex and poorly structured that you can’t understand them unless you already know everything about the study.

Here are two real-world slides from supplier presentations. Company confidential information has been obscured, and the supplier names have been removed. Both of these slides came from wonderful studies on which the suppliers did great work — they just didn’t do a good job of presenting it.

Here’s a good example of a researcher who couldn’t bear to leave out any data. Even though I remember this study vividly, I had to spend several minutes studying this particular slide before I could figure out what it was trying to say. The actual research finding was, “companies are planning to buy more of our product next year than they did this year,” but good luck getting that from the slide even if you could read it (which no one could do unless they were sitting in the front row of the room).

Some research firms just aren’t very good at communicating visually. This slide is from the final report of one of the most innovative, influential research studies I’ve ever been involved in. It was prepared by one of the best research companies in the country, but you’d never know it from the chart, which I’m still not sure I understand. The supplier’s report had another 111 slides just like this. If I had allowed this supplier to present the findings to my company, no one would have understood the research, let alone acted on it.

Unless the research supplier is unusually good at presenting, and well attuned to your business, you and your company will be much, much, much better off if you rework the supplier’s presentation into words and graphics tailored to communicate clearly to your corporation.

Written documents. These have a role to play, but I think it’s more for reference than as the primary deliverable. If you’ve done an especially large or data-heavy study — for example, a major survey of your company’s installed base — it’s a good idea to give people a reference document on the findings. For an installed base study, the document could include tables on all the key demographics of your users, things like age, geographic distribution, education level, income, how long they’ve owned the product, satisfaction level, and on and on. This will let people in the company look up any specific information they need, rather than coming to you.

This is also a great document to post on your company’s intranet.

E-mail is a good supplement to presenting your research. Create a message summarizing the most important implications and most surprising findings of the study and send it to key stakeholders (if you are paired with a competitive analysis team, their competitive info mailing list is a great place to distribute this information).

But I don’t recommend that you try to communicate all of a study’s findings in an e-mail. There’s just too much to explain. Use the message more as an advertisement for your presentation sessions, and as a supplement to get your most important messages to people who don’t have time to come to the presentations.


Please click here to rate this section (the link will open a one-screen anonymous survey, and you’ll get to see the results after you take it).

Next week: The online revolution in market research.

10. How to work with market researchers

This week we continue our look at market research, with a discussion of how to work with market researchers. The typical market researcher has a very specialized skill set that’s not fully understood, or necessarily valued, by the company as a whole. If there’s an MR team in your organization, you need to spend some time learning how they work and what makes a good research study.

In the tech industry there’s an informal rule that if you want to get along with hardware engineers, you have to learn how to appreciate their block diagrams. A block diagram is a drawing that shows how the various components of a circuit or computing device work together. If you can understand the basics of an engineer’s block diagram, his or her respect for you will go way up, and you might even be treated like a sentient being.

This is a block diagram of the Data Translation DT9840, a “low-cost real-time data acquisition USB module with an embedded DSP for high-accuracy noise and vibration testing.” Check out the dual 24-bit analog inputs. Sweet!

The equivalent of block diagrams for a market researcher is something called a crosstab. Crosstabs are documents the size of a regional phone book, listing every question asked in a quantitative survey and every response, cut by a myriad of different statistical groupings — age, income, and so on. Reading crosstabs can feel about like, well, reading a phone book. But there’s a hidden beauty to them. As you look through the questions and answers, you’ll start to pick up subtle patterns and get a feel for how the customers actually think. Here’s a simplified example of something you might see in a crosstab:

This is a little excerpt from a study that looked at Internet usage in the US. In this question, people were asked if they had browsed the web in the last three months. The columns across the top divide the results by the age and sex of the respondents. The row labeled “Total” shows the total number of people surveyed in each category. For example, the survey talked to 433 people aged 65 and older. The row labeled “Have browsed web” shows the number of people who answered yes to the question, “have you browsed the Web in the last three months?” So, 75 out of 433 people aged 65 or older said yes, or 17% of the sample.
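If you want to play with crosstabs yourself, the pandas library can generate one from raw respondent data. In this sketch, the 65-and-older column matches the figures quoted above (75 of 433 browsed the web); the other columns are made up for illustration:

```python
import pandas as pd

# Toy respondent-level data. Only the 65+ figures (75 of 433 browsed)
# come from the text; the younger age groups are illustrative.
rows = (
    [("18-34", True)] * 300 + [("18-34", False)] * 100 +
    [("35-54", True)] * 310 + [("35-54", False)] * 105 +
    [("65+",   True)] * 75  + [("65+",   False)] * 358
)
df = pd.DataFrame(rows, columns=["age_group", "browsed_web"])

# Counts by cell, with row/column totals ("All") added by margins=True.
counts = pd.crosstab(df["browsed_web"], df["age_group"], margins=True)

# Same table as column percentages.
pct = pd.crosstab(df["browsed_web"], df["age_group"], normalize="columns") * 100

print(counts.loc[True, "65+"])      # 75 respondents aged 65+ browsed
print(round(pct.loc[True, "65+"]))  # 17 (percent of the 65+ column)
```

A real crosstab document is just hundreds of pages of tables like these, one per question, cut by every demographic the study collected.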

To me, there are two important findings in this sample, one of them a surprise and one not. The thing that didn’t surprise me is that elderly people are less likely to use the Web. The surprising finding was that the rate of Web usage was very flat for people under age 54. For most technology products, young people are more enthusiastic adopters.

You can’t get this sort of intimate familiarity with the data in a study just by reading a summary; you have to look at the crosstabs. Unfortunately, it’s a lot of work to read them. A typical crosstab document for a major research study could easily have more than 400 pages, each of them looking about like this:

This is a typical crosstab page. Don’t worry that it’s unreadable; the original Word document is in about five point type.

Chances are you won’t be reading crosstabs every day. But you should do enough of it that you get comfortable with the formatting and can pick out important information. If nothing else, it’ll help you ask the researchers much more informed questions.

A caution about crosstabs.
Because they’re the mother lode of information about a study, crosstabs can be dangerous in the wrong hands. If someone misreads a crosstab, they might completely misinterpret a research study. Because of this, some researchers don’t like to show anyone their crosstabs, and I think you should be very reluctant to circulate them freely in your company. If a researcher is kind enough to share their crosstabs with you, keep in mind that it’s an act of trust. Be sure to check with them if you form any conclusions about the data, and don’t give the crosstabs to anyone else without telling them.

Become a methodology groupie. This is the other key to getting along with market researchers. Methodology means the way the study was conducted — how the people in the survey were chosen, how the questions were asked, and how the results were tabulated. I gave you a start on understanding methodology in the first part of this chapter, but if you’ll be working with researchers regularly you should do a little more study on your own. If you’re not a born researcher, methodology is about as interesting as double-entry bookkeeping, but it’s hideously important. If it’s done wrong, it can completely skew the results of a study, so researchers spend a huge amount of time agonizing about it. If you want to understand their world, you should know enough about methodology so you can at least tell the difference between a reasonably well structured study and one that belongs in a circus.

How to organize a market research team

Reporting structure. A market research team can vary in size tremendously, depending on the size of the company it serves. In a very small company, you can get away with having no full-time researchers at all. In this case you contract out your research to an external expert who manages the projects for you and delivers the findings. I don’t like this model because researchers pick up a lot of information and insight along the way that never makes it into a formal report at the end. If the researcher lives outside your company, that insight will be lost.

In a multi-division company with several business units, you’ll need several researchers. The first question is whether to have those people report to a central team, or to distribute them into the business units. If there’s any business synergy at all between the BUs, I think it’s best to have the team located centrally. This has several advantages:

–First, it’s more efficient. Depending on how much work there is, a single researcher can often handle the needs of two or more business units. If you parcel out the researchers to each BU, you’ll have to hire more people.

–Second, if the team is centralized, there’s a growth path for the researchers. Market research is a specialized skill, and it’s very hard for a researcher to “graduate” from that role to something else. Most of them don’t want to do anything else anyway. But they would like to have the opportunity for promotion, something they can get in a central team. It’s also very helpful to have researchers supervised by someone who’s a professional researcher themselves. There’s a huge amount of expertise needed in market research, and it’s very hard for a non-expert to evaluate the quality of a researcher’s work and give them meaningful feedback on their projects. Having a non-researcher lead a market research team is like having a non-doctor lead a medical research team.

–Third, if the researchers work together, comparing notes and talking about their work, they’ll be able to spot trends and information that crosses multiple studies. Often these serendipitous discoveries are the most useful.

The drawback of a central market research team is that the business units tend to view it as distant and not focused on their needs. This is a genuine risk. One way to make the central team more acceptable is to have the researchers report “dotted line” into the business units. The researchers sit in on the BU staff meetings, so they feel like a part of the team and are responsive to its needs. But their formal reporting structure still runs back through the central MR team.

Allow only one source of customer truth in the company.
As I mentioned above, strategic market research that focuses on understanding how customers think can be the most valuable output of a market research team. But you should not focus all of your group’s efforts on that sort of research. In fact, it’s very important to make sure that your group is also the exclusive source of tactical market research services for the company. If someone needs a study on sales of a particular product, or customer attitudes in a particular company, you should never turn away that request.

If you don’t have enough people in your team to do all the research the company wants, you should pre-qualify a couple of outside vendors who can take on the extra work. Make sure they understand the projects you’re doing, and the main themes that you’re trying to educate the company about. You should also loosely supervise the work they do for your company. In particular, you should take a look at their conclusions before they deliver a research report to the company. Be ruthless about enforcing this rule.

This is a contrast to what I said a competitive analysis team should do. For competitive analysis, one of the biggest challenges is not getting consumed by trivial support requests from the company. For a market research team, one of the biggest challenges is making sure there’s only one unified source of customer “truth” for the company. In my experience, if you let parts of the company start doing their own market research without supervision, you’ll quickly end up with competing versions of the “truth” floating around the firm. If you leave a business unit to its own devices, inevitably it will contract with a low-cost researcher who produces poor findings, or who tells them what they want to hear. This mangled research will conflict with some of the things you’ve found about the market, so you’ll end up arguing against the BU’s research. This can get very ugly. The average employee at your company doesn’t have the knowledge to tell the difference between a good study and a bad one, so your argument can quickly degenerate into a mud-slinging match about who has the biggest methodology. Even if you win the argument, you’ll make enemies.

Far better to prevent the problem from happening in the first place by making sure all research comes through you and is professionally conducted.

Work style. Many market researchers are extroverts. That’s not surprising, since their profession focuses on understanding people. But much of the actual work of market research — designing a study and analyzing the results — is pretty solitary, involving a researcher wrestling one-on-one with a computer and sometimes hundreds of pages of tabulated numbers.

Unlike competitive analysts, a market research team shouldn’t be pushed into collaborative work all the time. Researchers need a balance between opportunities to work alone and interaction with their team. The interaction is mostly at the start and end of a study. At the start, a study design and questionnaire should be reviewed by others in the group. If you’re doing focus groups, it’s good to have several people from the research team attend some of the groups, just to get a feel for what customers are saying. And at the end of a study, the researcher’s conclusions and presentation of findings should always be previewed, and defended, in front of the entire group.

It’s also pretty common in any large study to have a few strange results that seem out of place or are hard to explain. For example, I’ve seen cases where people gave very different answers when small changes were made in the wording of a question. If you have any cases like that, it’s very important to discuss them with the group, and figure out what they mean, before the findings go to anyone outside. Some people in your company will be basically skeptical about market research, and will jump on any error or ambiguity as an excuse to dismiss the entire study. Protect your team’s credibility by reviewing studies carefully before they’re delivered.

Don’t call them analysts. As an example of how specialized the market research world can be, it’s important to be careful with the job titles you give to market researchers. I once made the mistake of referring to them as “analysts.” I thought that would be a compliment, because analysis is a more active and valuable activity than just running a research study. It turned out to be an insult. In the market research world, an analyst is a junior trainee who massages data after a more senior researcher has conducted a study. Calling a researcher an analyst is like calling a corporate vice-president an administrator.

This is another example of why it’s good to have a professional researcher manage an MR team.


Please click here to rate this section (the link will open a one-screen anonymous survey, and you’ll get to see the results after you take it).

Next week: Picking researchers and presenting findings.

9. Quantitative market research

The three perspectives a company needs in order to map the future are competitive analysis, market research, and advanced technology analysis. This week we continue our look at market research with a discussion of quantitative research – surveys and other studies that give you statistically reliable numbers. Or that would, if they were conducted properly…

Uses of quantitative research

Because quantitative research gives you accurate numbers, it can be used to keep score for your business. How many people are aware of your products? What do they like and dislike about them? Do they like your products better than the competition’s? Do they plan to buy from you in the next six months?

All of these things are relatively easy to measure, but make sure the survey is very well constructed. These statistics are basically report cards on the work being done by the company’s marketing and product teams. If you find bad news, you need to be sure it’s completely accurate. Besides screwing up the company’s decisions, producing flawed research can easily get you fired.

One of the most important statistics your company will want to track is purchase intent. If you can gaze into the future and estimate how many people will buy your products in the next quarter or year, those figures can be driven straight into business plans and sales goals. But purchase intent is also one of the toughest numbers to interpret. There’s a long path from someone thinking about buying a product to actually purchasing it, and any interruption along that process can throw off your findings. I’ve seen studies that showed rising purchase intent even though actual sales were dropping. It’s best to use this sort of research to check for potential warning signs of trouble, but don’t let good results lull you into a false sense of security, and be very careful about building these figures into the business plan.

It’s also commonplace to use surveys to test things like reactions to new products and new pricing. Like purchase intent, this research can be very tricky to interpret, because conditions almost always change from the time you conduct a study until the time you take action on it. For example, in the computer hardware industry we usually set the pricing for fall’s products in the spring. The research has to start even sooner, so you’ll have time to collect the results and study them. Pretty soon you’re surveying people in February for a decision that won’t be implemented until October. People might tell you they love a price in spring, but by the time the product ships in fall, there are three new competitors at lower prices, one of the competitors has launched an aggressive new promotional campaign, and economic conditions have changed.

You can of course try to anticipate all these things in your research study, but pretty quickly you have to make so many future assumptions that you’re conducting an academic exercise rather than testing something in the real world.

When I was at Apple, I spent some time as the head of marketing for the home and education business unit. I tried using a quantitative survey to forecast that fall’s sales and set pricing, and the exercise turned out to be a waste of time – the results were out of date by the time we could act on them. Today, Internet-based surveying might let a company move faster.

The other challenge to keep in mind in price research is that people almost always overstate how much they’re willing to pay. Think about it — in the very process of conducting the survey, you have to describe the product in some detail, focusing the subject’s attention on it much more than would normally happen. This is almost certain to make them more interested than they would be in the real world, where your messages will be lost in a flood of other things being communicated to them.

People also just plain tend to get cheaper as they go further into the buying process. They might say $299 was an ideal price when surveyed, but when they go to actually buy the product that money feels a lot more important to them. Maybe there’s some other item across the store that they might like to buy instead, or maybe they want to go out to dinner and a show this weekend.

This doesn’t mean it’s pointless to do any research on pricing, but I think it’s better to try to research price bands — what range of prices are people willing to pay for certain classes of product — rather than trying to set the exact price of a single product. And if the research does indicate that a certain price is optimal, treat that as the upper limit on your pricing rather than the midpoint.

Things to look for in quantitative research

It’s very easy to screw up a quantitative research study. Even small errors in methodology can make the results meaningless, so it’s best to work with someone who knows research. There are more potential pitfalls than I can list here, but a couple of prominent things to watch out for include:

–Make sure you’re surveying enough people so you can be reasonably sure that the results represent the population as a whole. In research terms, you want a large enough sample so that your findings will be statistically significant. Preferably, the margin of error in the study should be plus or minus five percentage points at the 95% confidence level. Roughly speaking, that means that if you see a five percentage point difference in a question (52% say yes, 47% say no), there’s a 95% chance that the majority of people actually would say yes if you surveyed everyone in the country.

For consumer tech products in the US, that usually means you need to survey a couple of thousand people minimum. For a corporate product, about 200-300 people may be sufficient, since the world of corporate buyers is a lot smaller than the world of consumers.

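The arithmetic behind that plus-or-minus-five-points target is standard textbook stuff; here’s a rough sketch, assuming the usual worst case of a 50/50 split and a simple random sample:

```python
import math

Z_95 = 1.96  # z-score corresponding to a 95% confidence level

def sample_size(margin, p=0.5):
    """Respondents needed so the margin of error is at most `margin` (e.g. 0.05)."""
    return math.ceil(Z_95**2 * p * (1 - p) / margin**2)

def margin_of_error(n, p=0.5):
    """Margin of error (as a fraction) for a survey of n respondents."""
    return Z_95 * math.sqrt(p * (1 - p) / n)

print(sample_size(0.05))                      # respondents needed for +/- 5 points
print(round(margin_of_error(2000) * 100, 1))  # margin for a 2,000-person survey
```

Note that a few hundred respondents already gets you to plus or minus five points overall; the couple-of-thousand figure above buys you the headroom to slice the results into segments and still have each slice mean something.
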
–You need to be sure the list of potential respondents (the people you’re surveying) doesn’t have any biases. If you’re surveying a pool of people who are inclined toward a particular answer, it’s sure to skew your results. You see this all the time when, for example, magazines survey their own readers and then report the results as if they represent the country as a whole.

–Work with the researcher closely on crafting the implications of the study. A great market researcher has to be meticulous about methodology, but that same focus on detail can make them reluctant to draw conclusions that reach beyond the basics of the data. This is especially likely to happen when you use outside research consultants, who won’t understand your industry as well as you do. They’ll tend to give you implications that are straight-line projections of their findings, without much context.

For example, if people have less than positive opinions of your product, the researchers might report “you need to improve impressions of this product” or even “the product is a failure.” But you might have other information — perhaps there was a product recall that temporarily hurt opinions of the product; or maybe your company launched the product as a stop-gap, knowing there would be problems. It can be tremendously demoralizing to have an outside researcher come into your company and beat up a product without the right context on what its goals were and what else is happening in the market.

–Beware of buried assumptions. Sometimes a researcher’s unstated assumptions about the market will slip into their selection of what to emphasize and how to phrase it. For example, suppose you did a survey showing that 19% of adults wanted your product. A researcher could report that fact with either of the following sentences: “Unfortunately, only 19% of adults want the product” or “Fortunately, nearly one in five adults want the product.” One sentence makes the finding sound bad, the other makes it sound good. Sometimes an outside researcher will make assumptions about what your company’s goals are, and editorial comments like this will slip into their report without them even realizing it.

Before the launch of the original Palm Pilot, Palm commissioned a survey to determine how many people would want the product. The survey showed that two percent of US adults were extremely interested. Many researchers would interpret that as a terrible result; 98% of adults weren’t entranced by the product. But Palm looked at those numbers and decided they were good news — two percent of US adults is about five million handhelds, a very attractive figure for a small hardware company. So the launch went ahead, and the rest is history.

–Be very cautious of “off the shelf” customer segmentations. Several market research companies have conducted very large studies on people around the industrialized world. Based on these studies, they have divided the population into segments — usually about a dozen of them — with various demographic and interest profiles. The segments are usually given offbeat, vaguely disturbing titles like “Sultry Seniors” and “TechnoTweens.”

These companies specialize in mapping your products to their demographic segmentations, and using that information to tell you what to do. There’s nothing wrong with this, and sometimes the segments can be useful.

But very often the segmentation of your customers will be specific to the products you make. For example, one of the hottest trends in advanced phones today is building in e-mail capability. The people who want this feature don’t fit into a particular lifestyle category, they’re just people who obsess about communication. The target market for an e-mail phone won’t show up in most standardized segmentations. If you can afford it, you’re much better off doing a segmentation study that’s specific to your industry.

–Price is the biggest drawback of quantitative research. I’ve rarely seen a consumer study in the US that cost less than $30,000, and they can easily go over $100,000 if you want to get specific detail on multiple market segments. Expanding a survey into Europe typically costs about $30,000 per country, and Asia is even more expensive than the US.

You can sometimes find companies that will charge you less, but usually they’re taking some hidden shortcut that will reduce the value of the research. Typically they’re identifying respondents on the cheap, in ways that could bias the findings.


Please click here to rate this section (the link will open a one-screen anonymous survey, and you’ll get to see the results after you take it).

Next week: How to get along with market researchers.

8. Understanding market research

The three perspectives a company needs in order to map the future are competitive analysis, market research, and advanced technology analysis. This week we’ll start our discussion of market research. Considering how widely used market research is, it’s surprising how many people don’t understand how it’s done and how to use it. So we’ll start with the basics…

At its core, competitive analysis is about intuition, supplemented by objective information. Market research is the opposite; at its core it’s a science. Intuition plays only a secondary role, seeding the questions that a good market research study tests. Market researchers hate to guess. They take pride in the rigor of their studies, and like a good science experiment, good market research is repeatable — if you ran it over and over again, you’d get basically the same results every time.

Another important difference between market research and competitive analysis is that market research has a well defined, accepted role in most corporations. It’s hard to find a large company that doesn’t do at least some market research, and there are respected professional training programs for researchers.

Unfortunately, most of the companies I deal with use market research only in a limited, tactical way. Market research teams are usually chartered as service groups — they take specific questions from various parts of the company, deliver the answers, and move on to the next project. Usually those questions focus on measuring what people think — “will you vote for this presidential candidate?” or “what do you think of this brand of soap?”

It’s much less common for market research teams to be asked how customers think. What motivates them? What do they care about? How do they make decisions? This sort of strategic research takes a fair amount of time and uses up resources that business units would prefer to spend on immediate problems. But I think it’s incredibly valuable, because if you know how customers think, you can predict how they’ll react to changes in the marketplace. Like good competitive analysis, good market research doesn’t just help you know what’s happening today, it helps you predict the future.

I think the right role for a market research team is one that balances tactical services and strategic vision. This chapter is about how to make that happen.

The basics of market research

I think it’s very important for all managers to have a basic grounding in how market research works and how to tell good research from bad. Market research is an incredibly powerful tool. When it’s done right, it can be the foundation of a wildly successful corporate strategy. When done wrong, it can lead an entire company to march boldly over a cliff. You wouldn’t feed your body spoiled food, so don’t feed your mind spoiled ideas.

Unfortunately, unlike spoiled food, spoiled research often smells great and comes wrapped in a very attractive PowerPoint presentation. Here are some tips to help you look beyond the slides.

How market research is conducted. There are usually two important players in any market research study, the in-house manager and the supplier. The study’s in-house manager is an employee of your company, usually a professional researcher, who takes the company’s questions and figures out how to get them answered with research. The researcher will determine what kind of study to use, and will contract with an external research supplier to conduct the actual study.

The research supplier is an outside company that does nothing but conduct market research studies. Suppliers have specialized facilities and resources that most companies couldn’t afford to keep in-house. For example, they’ll maintain a large list of people who can be contacted for a research study. Usually suppliers specialize in particular types of studies. Companies that do focus groups (see below) will have offices with the proper rooms, and people trained to run the groups. Companies that do numerical surveys will have banks of phone operators trained to conduct surveys by phone or in person.

The in-house manager works very closely with the supplier at the beginning and end of the process. At the beginning, they work together to design the survey, including the exact wording of any questionnaire, and the rules for how people surveyed (the “respondents”) will be selected (their “qualifications”). At the end, they work together on the interpretation of the findings. The research supplier delivers a final report, which the researcher may pass along to the company as-is, or may replace it with a revised report tailored more to the internal needs of the company.

Quantitative and qualitative research. There are two basic types of market research, quantitative and qualitative. Quantitative research gives you hard numbers — it’s a scientifically-conducted survey that gives you statistical information on the market as a whole. Opinion polls are quantitative research.

Qualitative research is any market study that doesn’t give you reliable numbers. The most common qualitative research is a focus group, in which a small number of people spend several hours discussing a topic while researchers watch from behind a one-way mirror. The number of people in the group is too small to give meaningful information about the market as a whole.

Uses of qualitative research. Focus groups and similar studies are often used as fodder for an advertising campaign. You’ll get a group of target customers in a room and study how they talk — what words do they use, what mannerisms, and so on. This helps the creative team develop ads that speak directly to the customer in his or her own terms. Often you’ll find a particular person in a group who really exemplifies the type of customer you’re trying to reach. You can show video of this person to the creative team and say, “give me an ad that appeals to him.” How often have you seen brochures, websites, or even ads written in jargon that’s difficult for anyone outside the company to understand? Often this happens when a company doesn’t have a clear understanding of the person they’re trying to communicate to. Focus groups can help with this.

You can also use focus groups to check for potential disasters with new brand names and logos, before you make a mistake that’ll take a lot of money to correct. A legendary example is the Chevy Nova, with “no va” meaning “doesn’t go” in Spanish. Unfortunately, that one turns out to be so legendary that it’s not true, but I’ve lived through some real cases. For example, a company I worked for once came very, very close to giving itself a name that sounds like “excrement” in Chinese. It was especially ironic because the company had a large number of employees in China.

In the computer industry, we learned years ago that our love of using numbers in a product name could get us in trouble overseas. Apple once produced a computer called the Macintosh Performa 4400. In Chinese, the word for 4 sounds like the word for death, and the word for 0 sounds like the word for again. So in Chinese culture 4400 means something like “die die again again.” Needless to say, the product carried a different number when it went on sale in Asia.

But it isn’t just numbers and names that can get you in trouble. Almost any graphical element, or even color, can have unanticipated meanings in various cultures. For example, Macintosh computers once used an on-screen icon of an upraised hand to signify that a function had stopped working (the hand was like the raised hand of a policeman directing traffic). In the mobile phone industry, one of the major software companies once adopted a hand outline as its logo (which was apparently meant to indicate that the software powered devices you hold in your hand).

The problem with all of this is that the hand, held flat out, is a serious insult in some cultures, meaning more or less, “let me feed you some manure.” Getting that message from your PC or mobile phone isn’t very appealing.

This hand icon was displayed when a Macintosh computer reported to the user that a program had crashed (or “unexpectedly quit,” as Apple liked to put it). It replaced a bomb icon that was understood universally across all cultures, but communicated so clearly that it scared the bejeezus out of nontechnical users.

For a time, this was the logo for Symbian Epoc, an operating system for smart phones. If you haven’t heard of it, don’t feel bad — Symbian eventually dropped both the logo and the product name.

In another version of the Macintosh software, an animated image of a hand with fingers counting from one to five was used to indicate that the user should wait. It turns out that the various combinations of fingers meant insulting things in several countries, making this the first cross-culturally insulting icon.1

You can use focus groups to test this sort of thing, showing a group of local customers the proposed logo or name, and getting their reactions. Personally, though, I don’t think a full focus group is necessary. Just ask the locals who are on your staff, or if you are selling through a distributor, ask some local people who work for them. They can give you a faster reaction at virtually no cost.

Focus groups are also sometimes used to collect product requirements. You get a group of users together and talk with them about what they like about a current product, and what they’d like to see in the future. I think doing this is a big mistake. Because a focus group is not scientifically structured, the reactions you get from it aren’t projectable to the whole market. You might have a group of freaks who’ll mislead you into creating a product that’ll sell to only 1% of your target market.

But because focus groups are a lot cheaper than quantitative studies, it’s very tempting to try to use them as a substitute. For example, I’ve seen product plans with statements like “60% of the people in our focus groups said they liked the product.” While this is a factual statement, it’s also meaningless because the group wasn’t a scientific sample of customers.

The usual excuse for doing this is, “it’s better to use the focus group than not have any research at all.” That’s rubbish. The “data” you got from the focus groups is no better than a coin toss. You’d be much better off letting the smartest people in your company make an educated guess.

Things to look for in good qualitative research. The best focus groups are great conversations in which you get to eavesdrop, so you want to look for conditions that’ll produce a useful conversation. That means a good setting, good people, and a good moderator.

The location should be in a city that isn’t dominated by your industry. In your industry’s hometown there’s too much risk that you’ll get some insider know-it-all who’ll take over the conversation. So don’t do movie focus groups in West Hollywood, don’t do car focus groups in Detroit, and don’t do computer focus groups in San Jose.

A good focus group facility has a room with one or more walls of sound-proofed one-way glass, so you can recline in comfort and eat pizza and M&Ms while the subjects talk. Ideally, the camera filming the group should also be behind the glass, so the participants don’t feel self-conscious. There should be a good sound system, so you can easily hear what’s going on inside the room. And the whole building should be reasonably insulated against outside noise. I once sat through a focus group where the participants almost had to shout to be heard over the chants of a protest group marching in the street outside the building.

The moderator is the person who guides the focus group, asking questions and keeping the conversation on track. A good moderator is like a good television talk show host — very alert to people, and capable of drawing them out by asking good followup questions. But unlike many TV shows, the moderator can’t be the center of attention. Sometimes a moderator will have an agenda, a preconceived idea of what the group should say, and they’ll subtly impose that on the group. You don’t want that.

You also want a good sampling of participants. You should never fill a focus group with company employees, and I’m uncomfortable with even using friends of employees. There’s too much chance of bias. A good focus group firm will have a large stable of potential participants that it can screen for the attributes you’re looking for, recruiting a good cross-section of the customers you want.

Typically the cost of a good set of focus groups will be around $20,000, and can go a lot higher depending on how many cities you want to visit. If I were short on money, I’d try to do one good set of groups in a single city rather than doing a series of groups on the cheap in several places. You’re not doing numerical research anyway, so the quality of the conversation is more important than the number of conversations you have.

Finally, you should make sure you get videotape of the groups. Get actual digital tape, not just a DVD — DVDs are notoriously slow and difficult to transfer into editable form on a computer (I learned that one the hard way). I also like to control the editing of any summary created from the video. At a minimum, you should tell the person doing the editing exactly which excerpts you feel are most important. If you leave this up to the focus group company, you may end up with a summary that misses the things you felt were most important.


Please click here to rate this section (the link will open a one-screen anonymous survey, and you’ll get to see the results after you take it).

Next week: Quantitative research.

  1. Almost any gesture or hand position you can imagine turns out to be insulting in somebody’s culture. Check it out. [↩ back]

7. Competitive Analysis deliverables

The three perspectives a company needs in order to map the future are competitive analysis, market research, and advanced technology analysis. This week concludes our deep dive on competitive analysis done right, looking at the deliverables that should be created by a competitive analysis team.

There are four basic types of deliverable produced by a competitive analysis team: services, maps, spears, and news.

Services are things the rest of the company asks you to do, to help them with their jobs. For example, the product planning folks will want information to fill the competitive sections of their product plans. Finance may want stats on competitors, so they can do some benchmarking for an earnings release or an internal analysis. An executive may want some competitive tidbits for a speech.

It’s essential to keep these requests under control. Depending on the size of your company, and of your team, you may be overwhelmed by people who want help. You could spend all of your time just responding to these requests, leaving no time available for the proactive work that yields the most value to the company. The proper role of a competitive team is not to be the sole source of all competitive information; it’s to get a unique perspective on the competition by studying it full time. That lets you understand the competition, help predict the future, and produce the most devastating competitive sales tools you can. (That’s one reason why I dislike the term “competitive intelligence” — it sets the wrong expectations.) If you spend all your time helping people do their own jobs, you won’t have time to do yours, your company will suffer, and you may eventually be laid off because the company views you as “very helpful” rather than “essential.”

You can’t just blow off the requests, though, if for no other reason than corporate politics. So I try to sort them aggressively into categories. Requests from powerful executives must be fulfilled, for obvious reasons. If you get too many of these, take up the issue with your management. It’s their job to either resolve the problem or get you more resources. If they don’t do either, it’s a good sign that you should start looking for another employer.

Requests from others in the company should be made as self-serve as possible. I’ll run with a request if it relates to information we needed to research anyway, or if it’s on a critical subject for the company as a whole. Otherwise, I refer people to the industry analysis services we subscribe to. Sometimes I’ll help people do the lookup if they’re new to the process. If they know what they’re doing, I just point them at the services and say, “All we know on the subject is going to be there. Let me know if you get stuck.”

I’ll give more information on how to manage industry analysis services in a future chapter.

Maps are long-term marketplace forecasts. They are the most strategic work done by a competitive team, and they take the most effort. Since they require input from the market research and advanced technology functions, I’ll discuss them in a future chapter.

Spears are competitive information that helps your company close sales or score points in marketing (they’re information you can throw at the competition, like a spear). Some companies call these “sales knockoffs.” Making spears is one of the most enjoyable parts of competitive analysis, and if you do it right you’ll be a hero in the company forever. More on spears below.

News is quick-hit information on something that just happened in the marketplace. You don’t want to turn yourself into a headline-clipping service; that’s the sort of non-value-added activity that gets a group laid off. But you should send around information when you can add value to it. If a competitive announcement will raise a lot of questions from customers, you should circulate information on what to say about it. If an announcement has important implications to the company, you should make sure the company knows that, quickly. Basically, you should turn the news you distribute into mini-maps and spears.

The joy of spear-making

Why make spears? If a competitive team operates perfectly, the company will never get in competitive trouble — you’ll steer the company away from potential problems before they happen, and you’ll focus the company on its biggest opportunities. But in the real world, competitive problems will happen. A competitor will come out with a superior product or an unanticipated tactic that puts your company in a bad situation. Or you may benchmark a new product your company has made, and discover that it doesn’t live up to its promises.

That means you’ll sometimes have to deliver bad news to the company. The consequences of that news may be devastating to an executive or even a whole business unit. That executive might be very powerful in the corporation, or you might have friends in the business unit. You will be tempted to try to soften the blow, by moderating your language or listing a lot of caveats to your conclusions.

For example, you might be tempted to say something like, “while the situation appears challenging, it’s possible that more aggressive marketing combined with price actions would be able to stabilize the company’s share position” when what you’re really thinking is, “there’s no hope of saving this business.”

It’s very important that you not soften the news. When people get bad news, they always look for outs that would let them dismiss the consequences and go back to life as usual. If you give your company an opportunity to develop false hope, you’ll prevent it from taking the action it needs to take.

But repeated brutal honesty can give you a very unpleasant lifestyle in the company, not to mention a short one. To balance the bad news, and to give you the political credits you’ll need to survive delivering it, it’s important that you also play a visible and positive role in helping the company win. That’s where spear-making comes in.

What are spears? Spears are information that can be used directly in your company’s sales and marketing. They help the company close sales, win industry debates, and impress press and analysts. They position you as a partner in winning, not just an ivory-tower analyst.

A good competitive team should be spending at least a third of its time making spears, maybe up to 50%.

Here are some examples of spears:

–Your team compiles a list of the five most important flaws of each of the competition’s products. You print these flaws on reference cards and supply them to the salesforce.

–You create a presentation on why your products are better than the competition’s, and make yourself available to go along on some customer visits with the salesforce. It’s important not to turn yourself into a full-time sales support team, but going along on some sales calls will give you important front-line information on what’s happening in the market, and will win you the gratitude of the sales organization.

–You hire a third party analyst to document how much cheaper it is to own your products as compared to the competition. You give the resulting whitepaper to the marketing team. They can reprint it as collateral, or quote from it in advertisements.

–You create a monthly audio recording summarizing recent competitive developments, explaining what they mean and what to say about them. You distribute the recording to sales, PR, and marketing.

This sort of recording is often beloved by salespeople and others who have a lot of dead time behind the wheel of a car. I’ve had some people tell me that they listen to every recording several times, sometimes re-running an old one that’s relevant to a customer meeting they’re about to attend. The recordings help them appear well-informed and answer questions from customers and other outsiders.

In the past I’ve distributed these recordings on cassette tape. There was usually some sort of mailing going out to the salesforce every month, and we could get the tape slipped into that, which saved us money on postage.

Today some new cars don’t have tape players, so I’d be looking to burn CDs. Or, better yet, you could distribute the recordings as podcasts (electronic files playable on an MP3 music player). The nice thing about the podcast approach is that you could store all the recordings on a company website, enabling people to download whichever ones they need at any time.

Sometimes spear-making work can become very elaborate. When I was at Apple, we were engaged in a long-running battle to show that the Macintosh computer was superior to those based on Microsoft Windows. At one point we paid an analyst firm to conduct a very elaborate customer test based on the old “Pepsi Challenge” marketing campaign.

(The Pepsi Challenge was a famous ad campaign for Pepsi Cola in which people blind-tasted both Coke and Pepsi, and generally preferred Pepsi.)

In our challenge, randomly-selected people were asked to perform a series of tasks on both a Macintosh computer and one based on Windows. The tasks were things like saving a document, or hooking up a printer and printing a page of text. The researchers kept track of how long it took to complete the task, what percent of people finished the task correctly, and how people felt about the task after they completed it.

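The three measures the challenge tracked — time to complete, completion rate, and satisfaction — are simple to tabulate once the sessions are logged. Here’s a minimal sketch of that tabulation, with entirely made-up data, just to show the shape of the output:

```python
from statistics import mean

# Hypothetical session log: each record is
# (platform, task, seconds_to_complete, completed_correctly, satisfaction_1_to_5)
results = [
    ("Mac",     "print a page", 95,  True,  4),
    ("Mac",     "print a page", 120, True,  5),
    ("Windows", "print a page", 150, True,  3),
    ("Windows", "print a page", 210, False, 2),
]

def summarize(platform):
    """Average time, completion rate, and average satisfaction for one platform."""
    rows = [r for r in results if r[0] == platform]
    return {
        "avg_seconds": mean(r[2] for r in rows),
        "completion_rate": sum(r[3] for r in rows) / len(rows),
        "avg_satisfaction": mean(r[4] for r in rows),
    }

for platform in ("Mac", "Windows"):
    print(platform, summarize(platform))
```

The hard part of the real study wasn’t the tabulation, of course — it was keeping the test conditions identical across hundreds of sessions.
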
It was an exhausting process for our team — we had to make sure the instructions for the tasks matched exactly the instructions given by Microsoft and Apple, the computers had to be restored to their starting condition after every test, and the researchers had to do everything very carefully so they didn’t favor one computer over the other. From start to finish, the project took most of an analyst’s time for several months and cost many tens of thousands of dollars.

But in the end, the results were great. For the first time, we could objectively quantify our product’s advantages in terms of speed, quality, and customer satisfaction. The marketing team created a broadly-distributed whitepaper describing the results, and the test became the foundation for an ad campaign.

This sort of project is called a “proof” study because it gives independent proof of something that you already knew was true. You’re not doing the study to find new information, you’re doing it to verify a claim. Corporate lawyers often require independent proof like that before they’ll allow your company to make a claim in an ad, so if you do a good proof study it’ll be loved by the marketing team.

Making spears is fun because it makes the company happy and gives your team a great sense of accomplishment. Some spear projects, such as the challenge study I described above, are also an opportunity to build teamwork between the competitive and market research teams.

Drawbacks of spear-making. Sometimes competitive analysts will object to making spears. Some of them view themselves as academics, and feel that creating marketing fodder sullies their independence. It’s important to help them understand two things. First, you’re not asking them to lie, just to help the company communicate its advantages. The process is like getting ready for a date — you may dress and comb your hair better than you would most days, but you don’t try to flat-out lie about your personality or background (anyway, most of us don’t).

Second, the analysts need to understand that they’re not academics; they’re working in a company that needs to make money, and sometimes they have to get their hands dirty helping to bring the money in. Besides, helping out for a while in the front lines gives an analyst a much better perspective on what’s happening in the market.

It’s important not to get carried away by spear-making, though. A competitive team’s most important asset is trust. People must trust that you’ll be honest with them at all times. That means your spears must be completely true, and you must never give anyone in the company an argument that can get them into trouble in a conversation. For example, if you tell a salesperson that the competition’s Product X causes toenail cancer in lab rats, that had darned well better be unimpeachably true. If in fact the toenail cancer study was discredited six months ago, some customer is going to know that and will call your sales rep on it, causing acute embarrassment, maybe losing a sale, and permanently ruining your credibility with that sales rep and anyone else they talk to. And you know how salespeople talk.

How to communicate competitive information

What’s the best way to deliver information? Go back to your fundamental goal — you’re trying to be sure the company wins. Therefore, communicate whichever way will get their attention. If it works best to tap-dance in front of corporate HQ carrying a banner, take dancing lessons. If skywriting works best, get a pilot’s license. Fortunately, in most companies you can use a mix of more traditional media.

E-mail is the quickest and easiest way I’ve found to communicate competitive information. One good approach is to set up a list server (an e-mail program that sends messages to everyone on a mailing list). Every time there’s a significant competitive announcement, or when your team issues a new report, send a message to the list server. Keep the messages short (no more than two pages printed unless something amazing happens), and always include an analysis of the implications of the event and what the company should say about it.

Remember, your role is to be an analyst, not just a news reporter. They can get news feeds off the Internet; your added value is that you say what it means, and you help the company’s spokespeople look informed at all times.

I’ll give tips on how to write for e-mail in a future chapter.

Ideally, this sort of message should go out the same day that the competitive announcement is made. If you send out information three days after the fact, everyone will have already read about it on the Internet and they probably won’t even look at your message.

Sending out messages on the day things happen has an important implication for managing a competitive analysis team — you can’t pre-screen your team’s messages before they’re sent. I know that’ll make a lot of managers uncomfortable, and fifteen years ago it would have given me hives too. But the accelerating pace of communication means that you need to focus on pre-educating people and then trust them to follow the rules, rather than using review cycles to enforce compliance. If you force everyone in your team to go through reviews on all their messages, you won’t be able to comment on events the day they happen, and people will simply tune you out.

This doesn’t mean you should turn an employee loose on the mail list the first day they start. While they’re getting up to speed you should definitely be reviewing everything they write before it’s sent out. This is for their own protection as much as yours — if they give a bad impression of themselves early on, it’ll be very hard to fix that later. But once they’re up to speed and producing reliable work, you should turn them loose.

Receiving messages from the competitive mail list should be voluntary. Don’t sign people up automatically when they’re hired. If a manager asks you to sign up his or her entire team, don’t do it — instead, send each of them an invitation to subscribe. If you’re in an active industry, you could be sending a message to the list almost every day, and some people just don’t want to deal with this. If you force all that e-mail into their mailbox, they won’t read it — but they will start to hate you.

The purpose of the mail list is to get information to the people within the company who want to be deeply informed. Very often these are the opinion-setters anyway, so they’ll take care of spreading the word to everyone else. But some people, especially senior decision-makers, won’t have time to read everything you write. For these people, you should create a weekly e-mail summary of announcements and events. For each new item, create a one-paragraph summary that says what happened and gives the implications. Your goal with this summary is to get across the basics and entice them to read the whole report. So keep your summary very short and include a web link at the end that lets them download your full analysis.

Even if you do a great job on the weekly summaries, some people don’t respond well to written information. So you also need to talk with them face to face.

Presentations are the other main way you’ll be communicating competitive information. Your team should develop a general presentation on the company’s competitive situation and advantages, something that can be delivered in about an hour. You use this whenever you’re asked to give an overview presentation to a department in the company (if you’re in a large company, you may get a lot of these requests as your team’s reputation grows). You and all the senior analysts in the team should be able to deliver this talk. Don’t script the whole thing word for word, but make sure you all have a good understanding of the key points to make.

The analysts on the team should also create presentations on key competitive issues that matter to the company. This could be something like an analysis of a competitor, or an examination of an important competitive issue (for example, if you were working at a mobile phone company you might create an analysis of the various competitive products for accessing e-mail on a phone). Again, an hour is the ideal length for a presentation like this, although you can go to 90 minutes if you have to. Anything longer than that is a special occasion, and you’ll need to schedule a comfortable room and a bathroom break for the people you inflict it on.

It’s possible to just schedule a presentation time and invite people from the company to attend, but I usually try to get a regularly scheduled presentation slot at the staff meetings of the key groups in the company. In a high tech company, those would be marketing, product management (the people who create product plans), sales, engineering, and the executive staff. At least once a month, you should be giving them a presentation of the most important recent competitive developments and findings, and you can also slot in presentations from your staffers. All of this helps you reach the people who don’t read e-mail. Remember that the presentations, like the e-mails, should include implications, not just news reporting. And put the implications up front, not at the back.

Staff meetings can also be a good time to do quick demos of new competitive products, and let people know of important upcoming events (for example, if the rumor mill says a new competitive product is about to launch).

In addition to presenting, you and members of your team should have regular one-on-one meetings with key people around the company, so you’ll know what they are up to and what problems they have. You’ll be able to pretty quickly identify which people have the most influence and the best ideas, and they’re the ones you should focus on. Talking with them regularly will help you identify what important decisions the company is about to make (so you can influence them), and will help you figure out which pieces of information would be most useful to the company.

The latter is very important because a lot of competitive information will be coming across your desk, and most of it you’ll just ignore because you don’t want to overload the company. If you know what issues people are working on, you’ll be able to pick out tidbits that are relevant to them and route them over. This helps the company, and also wins you a friend. If you don’t do that outreach, you’ll only know the needs of the people who are pushy enough to come to you, and they may not represent the most important needs of the company. In fact, they usually don’t.

You should adapt your methods of communication to the culture of your company. I don’t do paper memos any more because people at my last several employers didn’t read them. I’m sure there are still a lot of other companies where people do. I have worked with some companies that use voice mail more than e-mail. This sounds strange if you haven’t seen it in action, but these companies usually have a founder or CEO who’s more comfortable communicating verbally rather than in writing, so the company culture comes to depend heavily on voice mails, forwarded extensively from person to person. In those companies, presentations can be a lot more effective than e-mails, although they’re a lot more time-consuming to deliver.

Other ideas for communicating:

Open up the competitive e-mail list server so that all employees can post information to it (or, if you’re in a large company, select designated participants from other groups). Basically, you’re turning other employees into the eyes and ears of your team. If they see an interesting competitive tidbit, they post it to the list.

To some executives, this is going to sound frightening — if you give the employees a forum like this, malcontents might dominate the conversation, or secret information might be leaked broadly in the company. I’ve found that you can prevent almost all of these problems by communicating clear ground rules for what people should post, and by stepping in to give people private counseling if they start to disrupt the list. In an extreme situation you could always remove someone from the list, but I’ve never had to do this.

In general, an open list will give you a lot more benefit than trouble; you’re basically co-opting a bunch of employees to help you do your job. If your ultimate goal is to make the company hyper-effective at competing, what better way than turning all of them into part-time competitive analysts?

Create an online archive of everything you publish. It’s pretty easy to create a searchable web archive of all the messages your group has sent out, and of all the messages posted by anyone to the competitive mail list. Once this archive is built up, it’s a great place for people to do their own competitive research, reducing the burden of inquiries on you and your team.
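As a sketch of how little machinery such an archive needs, here’s a hedged example using SQLite (the schema, function names, and sample messages are illustrative assumptions): each published message is stored with its date and author, and anyone can run a simple keyword search against the backlog.

```python
import sqlite3

# In-memory database for the sketch; a real archive would use a file
# and sit behind a searchable page on the team's internal web server.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE archive (posted TEXT, author TEXT, body TEXT)")

def publish(posted, author, body):
    """Store one competitive-list message in the archive."""
    db.execute("INSERT INTO archive VALUES (?, ?, ?)", (posted, author, body))

def search(keyword):
    """Return (date, author, body) rows whose body mentions the keyword."""
    cur = db.execute(
        "SELECT posted, author, body FROM archive "
        "WHERE body LIKE ? ORDER BY posted",
        (f"%{keyword}%",),
    )
    return cur.fetchall()

publish("2004-03-01", "analyst1", "Competitor X cut prices on Product Y.")
publish("2004-03-08", "analyst2", "Rumor: Product Z launch slips to Q3.")

for posted, author, body in search("Product Y"):
    print(posted, author, body)
```

Even a crude substring search like this one lets people answer their own “what did we say about X last quarter?” questions without filing an inquiry with your team.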

Sit in on staffs. If you can, it’s great to become a dotted-line member of a couple of the key staffs around the company. For example, at Palm I was at various times a member of the sales, product marketing, and COO staffs. I attended their weekly meetings, which gave me an easy opportunity to update them on any competitive events that happened in the week before, and kept me informed about what they were working on. This was especially helpful with sales, because I knew exactly which prospects we were talking to, and I could route useful information to the appropriate sales reps before they even asked for it.

Present at orientation. If you want to shape someone’s thinking, the best time to do it is on the day they’re hired. Most companies have an orientation day for new employees. Get a slot in the orientation program, and use it to educate the newbies on your competitors, competitive advantages, and what to say when their friends ask them competitive questions about their new company. This is also a great opportunity to publicize the competitive mail list and encourage people to sign up for it.


Please click here to rate this section (the link will open a one-screen anonymous survey, and you’ll get to see the results after you take it).

Next week we’ll start on market research.

6. How to collect competitive information

The three perspectives a company needs in order to map the future are competitive analysis, market research, and advanced technology analysis. This week continues our deep dive on competitive analysis done right, looking at sources of competitive information.

This is such a broad subject that some people have written entire books on it. The range of resources is astounding — everything from secret shopping services to aerial surveillance. I encourage you to get one of those resource books from the library if you want to see the full list, but I’m going to focus on the sources that have been most helpful to me.

Hands-on product testing. I cannot emphasize enough the importance of being an aggressive power user of the competition’s products. This probably isn’t practical in some industries (pharmaceuticals come to mind), but in most other product and service industries you should go out of your way to have hands-on time with your competition’s products. Most companies I’ve worked at have been shockingly bad at this — the employees rarely even touch a competitor’s product, let alone use it intensely. They are so wrapped up in their day to day jobs that they simply have no time for looking at other companies’ stuff.

You learn a lot from hands-on product work. First, you’ll spot the competition’s advantages and disadvantages. The disadvantages you tell to marketing, the advantages you show to engineering. Embarrass the product people a little, challenge them to do better (if they’re good engineers it won’t take much to get them cranked up).

You may also find some opportunity areas where your products could be much better than the competition’s, if only you made one or two simple changes. It’s very important to communicate this sort of low-hanging fruit to the product people aggressively. Don’t assume the opportunities will be self-evident; they don’t have the same background as you and so they may not get the same insights.

The other thing you learn from hands-on testing is insights into the competition’s thinking. As you work with the product, ask yourself what sort of user they’re targeting. Are they designing for themselves, or for a particular person? What assumptions are they making about the person’s background and skills? If it’s a consumer product, what does the packaging tell you about the company’s perception of itself and its customers?

For example, Microsoft designs most of its software products for power users of Microsoft Windows. Elements of the software’s look, and the way the user interface works, all reflect the assumption that the user’s already comfortable with Windows.

Generally this is a good assumption, since Microsoft has such dominant share in PC software. But it can become a handicap when dealing with other categories of product. Many of the design elements of Windows (designed for a large screen and a mouse) don’t work well on a cellphone (small screen and no mouse). Microsoft has struggled to adapt its software to this different hardware world.

Sometimes what you learn from hands-on testing isn’t easy to describe, it’s just a feeling for how the company thinks and what its personality is. But that impression will be immensely helpful as you try to interpret and predict the competitor’s actions in the future.

In some companies, I have tried to establish a lending library of competitive products, to get more people to try the competition. This has had mixed success. If people have enough time to really play with the competition’s products, they’re often doing it on their own already. Also, the lending library is a pain in the neck to maintain — people sometimes break products, and even if they don’t, you have to pester them to return things.

Go on some sales calls. I mentioned this previously as a way to win friends in the sales force, but it’s also a great way to gather information. For this you want to meet with a certain type of customer — someone who keeps track of the industry and likes to talk about it. Sales reps are usually very happy to put you in front of these customers because they have a lot of questions that a typical rep can’t answer.

In a meeting like this, you want to establish a two-way flow of information. You share your perspectives on where the industry’s going, and why your products are best. You ask your customer what they think of the market as a whole, and what they see your competitors doing. Sometimes, if a customer likes you, they’ll spill everything your competitor told them in a briefing the week before.

Take good notes — and keep in mind that anyone who tells you everything a competitor said might also tell others anything you say to them.

Trade show booths. Although trade shows are less important for networking than they were a decade ago, they’re still a major sales tool for many companies. I’ve never worked at a company where it was easy to staff a trade show booth, and very often engineers and other non-marketing people are pressed into service. Those booth workers can be a gold mine of information.

You need to send an employee to the trade show who’s a good schmoozer and is also technically competent. You don’t want a salesperson here; you need someone who can have a peer-level geek-to-geek conversation. Make sure this person doesn’t get stuck on duty in your company’s own booth; their role is to scout out everyone else. Have them go to the competitors’ booths and start asking technical questions. Pretty quickly they’ll end up talking to an engineer, and that’s when the information flows.

Engineers are, at the core, proud of both their accomplishments and their technical skills. They want to brag. Your employee should give them an opportunity to do that. Talk to them about how hard it is to develop a particular product, or how the rumor mill says the company will never finish the follow-on in time. Mention that skeptical posting that appeared on Slashdot last month. At some point you’ll hit the issue that gets them talking.

Sometimes it can be useful to get them talking about a company that you both compete with. For example, in the mobile business it’s always fun to compare notes with Nokia about Microsoft. Then you go talk to Microsoft about Nokia.

There’s no need to play games like hiding your identity when you make these visits. This technique works because people like to talk, not because you’ve tricked anybody. But the person asking the questions has to be technically competent, or they won’t be treated like a peer.

Financial analysts. Ever since the stock market collapsed, people in the financial community haven’t had the greatest reputation in the US. But I have found that they’re an extremely good source of information. The financial analysts who still have jobs are generally the brightest and best connected, and investment firms sometimes have significant amounts of money available for research, something that’s rare in many of the industry analysis firms.

The financial analysts are obviously great for giving you information on company financials, but they also generally do the most thorough analysis of industry statistics in general, and they’re great for information on company organization structure and the rise and fall of various managers.

In the mobile phone market, the industry analysis firms were never able to give me reliable, timely numbers on sales of various mobile phones, because the phone carriers didn’t want to report those numbers. I tried several different sources, at one point paying more than $50,000 for a numbers service that turned out to be riddled with errors. But then I ran into a financial firm that was able to quietly collect very good anecdotes on device sales numbers, and was glad to share them with me for free. I was never sure how they got the information, and they weren’t about to tell me, but it was very reliable.

Also, some of the best strategy reports I ever saw on the mobile phone business came not from a major industry analyst, but from an investment bank in the UK (Richard Windsor at Nomura, if you’re interested). And again, they were free.

There’s a hidden price attached to these reports, though. Usually the people who share them with you will be hoping to get from you some information that they can share with their clients. You need to work very carefully with your company’s lawyers to make sure you’re not sharing any information that will involve you in insider trading. The general rule is that any information you give the analysts must also be available to other investors, but I’m not a lawyer and you should talk with yours rather than trusting my summary. The laws on insider trading in the US have become much stricter lately, and you don’t want to mess with them.

Friends in the industry are also a great way to get information. I’m sure this isn’t true of all industries, but Silicon Valley is a pretty tight-knit community. Companies have a history of growing up fast and then imploding, scattering laid-off employees all over the place. After a decade or so in high tech, you find yourself with former co-workers at most of the major firms.

If you’re in a competitive analysis role, it’s especially important to keep in touch with these contacts. Trade e-mails and instant messages, or buy them breakfast or coffee every now and then to gossip about the industry. Very often important company changes like layoffs and new initiatives are floated in the rumor mill long before they show up in an official announcement, and you’ll also pick up a lot of information on personnel changes that would never be formally announced.

There’s a downside to all this networking, though. An active network of people tends to form shared assumptions about what’s likely to happen, or which companies are hot. This thinking is often picked up by consultants and played back to the companies, reinforcing it into a groupthink consensus that has surprising strength.

For example, in the late 1990s excitement about the Internet transformed into a Silicon Valley consensus that the most strategic thing in the world was to control a web portal (a portal is basically a website that attracts a lot of people). Huge amounts of money were poured into any venture that promised to assemble an audience online, and other businesses were starved or de-emphasized in the pursuit of the portal.

Knowing about that consensus was extremely important to a competitive analyst, because it swayed the behavior of many companies. But it’s very important not to let the consensus worm its way into your own thinking. In the case of the web portal, the consensus was wrong — a web portal is useless unless you have a good way to make money from it, and many businesses other than portals are also very nice. If you bought into the assumption about portals, you might have given your company bad advice, or overestimated the strength of competitors.

You should always be on the lookout for consensus, and you should always question it. In my experience, the stronger the consensus is, the more likely it will be wrong in at least some respects. When people talk about what’s going to happen, or what’s hot, ask yourself why they think that. Do they have first-hand information on the situation, or are they just repeating what they heard from others? What assumptions are they making? Do you agree with those assumptions? Have you tested them yourself? What happens to the consensus if any of the assumptions are wrong?

If you agree with all the assumptions, then maybe the consensus is right. But if you disagree with any of them, you may have identified an important blind spot for the competition.

One of the most important checks for an industry consensus is market research. Because research is expensive, and hard to do on new technologies, high tech companies often default to designing products for themselves. An industry consensus can decide that something’s a good product just because people in the industry think it’s cool, without reference to any real human beings. This is one reason why I’m a strong advocate of combining competitive and customer information.

Here’s an example of how the consensus can get out of hand when it’s not tested properly on customers. In the mobile phone industry, for years many companies assumed that if they could put the Internet on mobile phones, people would use them to browse information. Billions of dollars were invested in creating high-speed data networks, building phones with browsers, and marketing them to users.

Eventually enough companies were investing that the others piled on out of fear of being left behind. The companies started chasing one another rather than any actual customer.

The problem was, when the browser phones finally got into the hands of customers, almost no one wanted to use them. People said they liked the idea of mobile browsing, but when they said that they were assuming you could re-create the whole PC experience on a phone, and you can’t. The network’s too slow compared to a home or work PC; the screen’s too small, so you can’t easily view a conventional web page; and there’s no mouse, so you can’t easily click on links.

It’s possible that in time most of these barriers will be overcome, but they have pushed back the use of mobile browsing by many years, meaning the companies that invested early have wasted a lot of capital that could have gone into more profitable pursuits.

How could they have avoided this? It’s important to research more than just an idea. Of course people said they’d like to browse on a mobile device, but the actual implementation had lots of drawbacks. To make things even harder for the phone companies, not all of the drawbacks were obvious unless you dug into them. For example, data rates on the phone networks looked fairly good on paper, until you started to load up the network with the congestion of real users. And the limits of battery life meant you couldn’t keep a connection open at all times, the way you do with a PC that’s wired into the wall. So some mobile browsers had long startup times compared to making a phone call.

If a phone company had done a customer test of an actual browser phone, with real-world network conditions, I think many of these problems would have surfaced. But that would have required a lot of time and investment, and there was a huge sense of urgency in the industry at the time.

Create a feedback council. Many companies have an official customer feedback council that they use to test product plans and sales messages. This is most often used by companies that sell enterprise products. They’ll gather about fifteen of their favorite corporate buyers and fly them someplace for the weekend. One day is devoted to golf, and the other is filled with presentations of product plans and feedback from the buyers.

It can be good for you to participate in meetings like this; you’re likely to pick up competitive tidbits from the attendees. But I think it’s also good to establish an informal council of your own. This should consist of about a dozen enthusiastic and articulate users of your company’s products. Get them onto a shared e-mail address, and run competitive issues past them on a regular basis. Test out your messages, and ask them what they think of competitive announcements.

To make this work, you have to create a two-way flow of information — you’ll need to preview your company’s plans, and be willing to give candid answers to pointed questions. Their reward for participating is that they get a chance to influence the company, and get some inside scoop. If you just feed them the party line from the PR team, they won’t feel rewarded and they’ll stop participating.

That means the group needs to be under a formal or informal nondisclosure agreement. Even with an NDA, some information could leak, but in my experience the leaks are very few because people don’t want to be tossed off the list. If the group dynamic works well, it will function like an early warning system, alerting you to issues that are starting to perk in the user community long before you’d normally hear them.

Weblogs are rising rapidly as a useful source of information, at least in high tech. Most weblogs are online diaries with personal information; they are not useful sources of competitive information, unless you want to know about the sex lives of the more exhibitionist employees at a competitor.

The more useful weblogs are focused on commentary about your industry. Many are team-written, like the editorial section in a newspaper, posting an interesting mix of news and comments. Some of the more professional weblogs are almost like online magazines, and many allow readers to post information, so you get a mixture of rumors and comment.

Whenever there are leaks or rumors about the competition, they’ll tend to show up first on these sites. Once a weblog’s established, people start sending rumors to it, and the weblogs that want to drive traffic to themselves repost information that has appeared elsewhere. A good weblog turns into a headline-clipping service for your entire industry.

Keep in mind, though, that these websites are not usually run by professional journalists, no matter how flashy the graphics look. Websites don’t generally do the same fact-checking as a newspaper, and most of them don’t even make any pretense of being unbiased. So you need to be a more careful reader than you would normally. As long as you keep that in mind, it’s very worthwhile to search out the best weblogs in your industry and read them daily.

In the mobile device industry, one of the most useful weblogs generally has the first leaked pictures of new products coming from the competition. Another good tech site that’s more like a publication is a UK-based news and commentary outlet with an acerbic edge. [Sorry for what must read like an elementary explanation for my online readers, but keep in mind that this will ultimately be a printed book, and I can't count on all my readers being as tech-savvy as you are.]

Information sources that need special handling

There’s no such thing as having too much data, so all information sources can be valuable. But I’ve found that some require more effort to yield useful information, or don’t necessarily give a good reward for the time and money you put into them.

Online discussion forums. There’s a blurry line between weblogs and discussion forums, but to me the distinction is that a weblog has one or more well-identified authors, people whose biases are on the record and whose reputations depend on maintaining a certain level of credibility. In contrast, online bulletin boards and other discussion sites are open to postings from anyone. Most of the people who post use pseudonyms, and it’s impossible to verify their information or know what agenda they’re pursuing. This means it’s very hard to judge the reliability of the information you find there.

I saw one case in which a series of very negative comments about a company were posted to an online forum. The messages were posted under a number of different names, but the website owner eventually found that all of them came from a single web server — located at a major competitor of the company in question. Even when the postings are legitimate, I’ve found that a surprising number of them are written by people who are quite young — high school or younger (think about it, who has the most time available to hang out online?). There’s nothing wrong with young people posting online, and in fact I’ve been very impressed with their thinking and writing skills. But unless your target customers are 14-year-old technophiles, you can’t use them to guide your business.

Even if you’re dealing with genuine customers, and they’re old enough to drive, it’s very difficult to use the online bulletin boards as a stand-in for normal customer feedback. Many different kinds of people read online forums, but the people who post to them actively are a different breed. They’re generally enthusiasts who don’t think like the average customer.

For example, while I was at Palm, online reviews and polls about handhelds consistently gave some of the lowest ratings to Palm’s best-selling products. Why? Online enthusiasts are much more interested in advanced features than the average person, and are much more willing to pay extra for them. I used to shake my head in wonder when people posted passionate essays praising high-end products that I knew were selling horribly in the real world.

Another drawback to the online forums is that it has become popular to post false reports, to see how many people can be fooled. An embarrassing case happened just recently, when pictures of a radically different mobile phone were posted online. The pictures were copied and forwarded to people all over my company, including the CEO. They made quite a stir. But no one thought to check the date when the photos were posted. It was April Fool’s Day.

All of these problems mean that you need to read an online discussion forum very skeptically. Ask yourself if a shocking new report really sounds plausible, and look for ways to confirm it. Also, you should always read the online commentary that people post in response to any new rumor. Although people on the web often have biases, as a group they are very good at snooping out inconsistencies in a story. For example, the people on the discussion boards figured out that April Fool’s Day hoax within an hour, and so I was able to send around a message alerting people that the report was false.
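That particular hoax could have been caught with a one-line check. As an illustration (the function name and the idea of scripting this are my own assumptions, not a tool anyone actually used), any script that ingests forum rumors can flag posts made on the classic hoax date for extra skepticism:

```python
from datetime import date

def hoax_risk(posted: date) -> bool:
    """Flag posts made on April 1 -- the classic date for fake
    product leaks -- for extra scrutiny before forwarding them."""
    return posted.month == 4 and posted.day == 1

print(hoax_risk(date(2004, 4, 1)))   # the radical-phone "leak" was posted this day
print(hoax_risk(date(2004, 3, 15)))  # an ordinary posting date
```

The check is trivial, which is exactly the point: the people forwarding the pictures to the CEO never looked at the date at all.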


Please click here to rate this section (the link will open a one-screen anonymous survey, and you’ll get to see the results after you take it).

Next week: Deliverables.

5. How to recruit competitive analysts

The three perspectives a company needs in order to map the future are competitive analysis, market research, and advanced technology analysis. This week we continue our deep dive on competitive analysis done right, with a look at hiring.

Nobody grows up wanting to be a competitive analyst, and I’m not aware of any university degree programs specializing in the subject. So you’re going to have to find and train your own analysts. How do you do that?

How to find candidates

Look inside first. If you’re in a fairly large company, your best pool of candidates will be existing employees. Look for people who are recognized as smart, but don’t fit in their current roles. I’ve found a number of good people in what high tech calls “sales engineer” roles. Sales engineers are skilled technologists, almost engineers in their own right, who provide technical backup to the salesforce, answering questions and meeting with the technical staff at a customer company. Often they have a good understanding of both customer needs and competitive products, and are boiling over with ideas about what the company should do.

Potential competitive analysts will often identify themselves by asking pointed, politically incorrect questions about company strategy during communication meetings. Or they’ll write long e-mails describing what the company’s doing wrong, and copy them to three levels of management, pissing off everyone in the division.

It’s good to meet these people and talk with them, before they get fired. Often they really are crackpots who need to be purged from the organization. But sometimes you’ll find a bright, intensely passionate person who just happens to be in the wrong job and doesn’t realize it. You should rescue these people, put them to work on the issues they care about, and give them an appropriate outlet for their ideas.

Write a good job description. To attract good candidates from the outside, I’ve found that a properly written job description is essential. The description gets posted to your company’s website, and is also the thing that’ll go onto any online job search boards your company uses. That means it’s really a recruiting ad, and you need to treat it very seriously. Don’t let a recruiter write this for you — a standard corporate blurb will drive away exactly the sort of mild misfits you’re looking for.

The people that you want may not even see themselves as competitive analysts, so you need to dangle the right sort of bait in front of them:

–A chance to play with products they love.
–The opportunity to tell the company what to do.
–An atmosphere in which they can play to win.

When I was hiring a competitive analyst to focus on the wireless market, I used a description like this:

You are a wireless visionary.

You are deeply familiar with the cellphone carriers, their businesses, psychologies, and strategies. You understand the handset manufacturers and what makes them tick. You have a good understanding of the infrastructure required to make a wireless data solution work, but you are also in-touch enough with users to understand what they’ll actually use (as opposed to what some company will try to shove at them).

You know what the real data throughput of 3G networks will be. You were deeply disappointed by WAP, and you probably enjoy playing with Java applets in your spare time.

Now you are champing at the bit to take all that knowledge and use it to help bring mobile computing fully into the wireless age.

In this role you will forecast the future of wireless technologies and businesses as they pertain to handhelds and smartphones. You’ll identify opportunities and potential partners, help lead the company’s strategy and tactics, and create marketing messages. You will identify competitive challenges and what to do about them. And you’ll generally help to infuse wireless thinking into everything we do.

Your work background may include roles like product marketing, sales engineer, and engineering. We’re not looking for a particular job title as much as we’re looking for a really good thinker with vision and the experience necessary to lead. Although this is an individual leadership position, we’re looking for a very experienced candidate.

Excellent writing, presentation, and influencing skills are mandatory, as is a good amount of technolust (part of your role will be testing products; you need to enjoy that). You need to be willing to travel. Experience in Asia (especially Japan or China) is a major plus. Ten years’ experience in the industry is required, as is an MBA or equivalent experience.

People who have a good load of anger and technolust are likely to see themselves in this ad, and I slipped in enough industry catchphrases and acronyms to give me some credibility with a potential candidate. It’s important to show that you know what you’re talking about, because the right sort of candidate will be judging you as much as you’re judging them — they know a lot about the industry, and probably already classify people mentally into those who “get it” and those who don’t.

Because it’s unusual, a description like this will probably draw a large number of useless resumes. The job doesn’t require a professional credential (like an accountant or lawyer would), so a lot of people can imagine themselves in the role. And because many companies treat competitive analysis roles as entry level positions, you’ll get a lot of resumes that have no qualifications whatsoever.

I haven’t found a way to word an ad or job description so it weeds out these people. So you just have to slog through the bad resumes looking for the occasional gem. There are two types of people you should watch for. The first is people with good industry qualifications who want to explore a different type of job. They may be the sort of vaguely unhappy misfits that you’re looking for. Often these people won’t send you a long cover letter, but they’ll have a resume with good qualifications and a good background in your industry. It’s worthwhile to talk with them. The other type of interesting candidate won’t have a great resume, but they’ll send you a long and passionate cover letter saying what your company needs to do, rather than discussing their own qualifications. A candidate once sent me four documents, totaling about 20 pages, critiquing the company and analyzing its competitors in detail.

If their ideas are good, this sort of candidate often makes an excellent junior analyst. They may not have as much work experience as you’d like, but you can teach them a lot of that. What you can’t teach is passion and insight.

If your company has a staffing team, you’ll need to work carefully with them. Someone who sends a fat cover letter looks like a kook at first glance, and a lot of staffing people would screen them out before you even saw their application. Personally, I like to review every application myself, at least until I can show the staffing rep by example what I’m looking for. Without examples, it’s just too hard to explain what a good analyst looks like.

Look among customers and partners. Sometimes you’ll find a customer or business partner who wants to talk a lot about your company, and has a lot of suggestions on what you should be doing. You can ask the salesforce to watch out for people like that. Or maybe you’ll run across someone like that in a user group or on a web bulletin board, posting insightful comments about your company. It’s worthwhile to keep track of these people and screen them if you have a job opening. If nothing else, you should circulate the job description to your salespeople and user groups, to see if it shakes loose a good candidate.


The hiring process

A good competitive analysis group works together as a team, trading ideas and insights. That means you need to pay special attention to interpersonal fit when making a hiring decision. If you bring in someone who annoys the rest of the group, or who can’t work well with them, it will hurt everyone’s productivity.

I think it’s best to have the whole team interview every finalist, and then meet together as a group to discuss them. This can be tricky — people often have favorites but don’t want to say so, or have strong feelings about a candidate and can’t explain them. I like to make the conversation as objective as possible, and to get people’s gut feelings explained in clear terms. So I have every member of the team fill out the form below, rating each candidate on each attribute. The forms have to be completed right after each interview, when memories are still fresh. I go around and collect them to make sure the forms get filled out immediately.1

Then when we all meet, everyone’s score for every candidate is put up on a board. When there are disagreements, we discuss why. Often the disagreements are the most useful part of the process, because they’ll identify concerns we need to check through references, or differences in perspective between the members of our team.

Even if you don’t get 100% agreement from everyone in the team, this process lets you know what everyone thinks, and there’s much less risk of a nasty surprise after you hire.

Candidate rating form


Your name:_______________________

Please rate the candidate from 1-10 (10 being best) on the following criteria:

Technical skills: ____
How well could this candidate communicate to a technical person? Will the candidate understand technical issues well enough to see an engineer’s point of view and explain things in his/her terms?

Marketing skills: ____
How well could this candidate identify meaningful product advantages? Can he/she explain them in a way that’s easy for the average person to understand?

Communication/influencing skills: ____
This position doesn’t give orders, it persuades people to do things. How well does this person communicate his or her ideas? How effective do you think he/she would be at persuading others?

Insight/vision: ____
Does the candidate think outside the box (as opposed to parroting the conventional wisdom)? How well can this person generate useful new insights and ideas?

Technolust: ____
One qualification for this job is an innate fascination with hands-on use of mobile products. How personally excited are they about our product category? What aptitude for hands-on work did they show you?

Industry knowledge: ____
How well does he/she know the mobile device world, and the wireless world (operators)?

Company fit: ____
How well could this person work with us? Would he/she be comfortable with our culture? How well do you think he or she would fit in? How comfortable are you personally with him/her? Why?
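
For teams that tally the forms in a spreadsheet or script, the collect-and-compare step can be sketched in a few lines. This is a hypothetical helper, not part of the process described above; the attribute and interviewer names are made up, and the “discuss” threshold (a spread of 2 points on the 1–10 scale) is just one plausible cutoff:

```python
from statistics import mean, stdev

def summarize(scores):
    """Average each attribute across interviewers and flag
    attributes where the spread is wide enough to warrant a
    group discussion (the 'talk through disagreements' step)."""
    attrs = next(iter(scores.values())).keys()
    out = {}
    for attr in attrs:
        ratings = [s[attr] for s in scores.values()]
        out[attr] = {
            "avg": round(mean(ratings), 1),
            # A standard deviation of 2+ on a 1-10 scale means the
            # interviewers genuinely disagree -- dig into why.
            "discuss": stdev(ratings) >= 2.0,
        }
    return out

# Three interviewers, three of the form's seven attributes:
scores = {
    "alice": {"insight": 9, "technolust": 8, "fit": 4},
    "bob":   {"insight": 8, "technolust": 7, "fit": 9},
    "carol": {"insight": 9, "technolust": 8, "fit": 8},
}
print(summarize(scores))
```

Here “fit” gets flagged for discussion while the others don’t — exactly the kind of disagreement the group meeting is meant to surface.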


Problems to watch out for in a competitive analyst

Bad intuition. Most Americans above a certain age have heard of the television show MASH, and some have seen the movie of the same name. But very few people have read the books by the late Richard Hooker. The first two are surprisingly good.

In the second MASH book, Hooker describes a character named Dr. McDuff. This doctor has a remarkable talent for analyzing a perplexing medical case, correctly picking out the key information, making a brilliant diagnosis — and then prescribing exactly the wrong treatment. I thought he was just a funny character when I read it, but he actually exists, or at least his business equivalent. I’ve met him, and he’s a serious danger to a competitive analysis team.

In a competitive analysis setting, someone with bad instincts will be able to make a brilliant argument. It’ll be based in fact, very well supported, and quite persuasive. It’ll also be dead wrong. The analyst will make some subtle assumption about how people work, or about what’ll happen in the future, which is utterly out of touch with reality. His whole scenario will collapse like a house of cards. But he won’t be able to see it.

The danger is that because he’s passionate and persuasive, he can lead a whole company astray. So you have to weed him out.

But the weeding is hard, for two reasons. First, the most brilliant ideas often challenge the status quo and make people uncomfortable. The longer you’re overseeing a competitive team, the more comfortable you’ll get with your own conventional wisdom, and the more you’ll be annoyed by challenging ideas. You may start to mistake uncomfortable-but-brilliant ideas for uncomfortable-and-stupid ideas, and dismiss them all out of hand. That’s why you should always hear someone out, listen to their arguments, and think on them for a while, even if they make you uneasy.

Especially if they make you uneasy.

This is the only area in which you have to be smarter than the team. You have to be able to see the difference between uncomfortable ideas that are wonderful, and uncomfortable ideas that are poison. If you can’t make this call, you should hire someone who can, and trust their judgment.

Even if you become convinced that someone has fundamentally bad instincts, it can be hard to weed them out simply because many companies make it hard to fire someone without dramatic cause. I’ve worked at places where you had to compile several months of documented, quantified incompetence before you could get rid of somebody. Even in less bureaucratic companies the fear of lawsuits makes human resources very gun-shy. Telling your HR representative that you want to fire someone because they’re brilliant but wrong-headed is not going to go over well. It sounds too much like you just don’t like the employee. You may face months of argument before you can take action, and in the meantime your group’s productivity will be suffering.2

The much better answer is not to hire this person in the first place. Screen prospective analysts very carefully — get them talking about what they think the company should do, and why. Really probe at their thinking, ask how they reached those conclusions, challenge them with other ideas and see how well they can defend their thinking. If you’re hiring someone from another industry who doesn’t have a lot of depth about your company, find a subject that both of you know, and dig into their thinking in that area.

Maybe you can get a candidate to do a free project for you, or at least a presentation. This became a lot easier in Silicon Valley after the tech bubble burst and the unemployment rate tripled.

Another tactic I like is group interviews — the entire competitive group sits down with the candidate, peppers him or her with questions, and sees if they can defend their ideas. This can leave a candidate a little bit bruised, but the smart ones tend to be competitive and rise to the challenge. Besides, their compensation is that if they do get hired, they’ll have the pleasure of doing it to someone else in the future.

Another very helpful tactic is to find someone else in your company who has good people instincts, even if they work in a completely unrelated department, and ask them to interview the candidate for you. Ask them to probe the quality of the candidate’s thinking. If they approve, it’s a very good sign.

Resist the urge to hire mediocrity. If you’ve put in place the right mechanisms to weed out inappropriate people, you may find that you wipe out the entire pool of job candidates. Suddenly you’re faced with starting the whole recruiting process over again. At this point the pile of resumes you rejected the first time will start to look mighty appealing. You’ll flip through it and come to that guy who’s only marginally qualified, and you’ll start to think of a few important projects that he might be able to handle. You wouldn’t have to put him on the most demanding work, after all, and having him around would lessen the load on everyone else. Depending on how desperate you are to fill the opening, you may even start to feel positively affectionate toward this person. How could you have been so picky the first time around? He’ll be a great addition to the team…

No, he won’t. There are many jobs in which a half-competent person can do reasonable but not spectacular work. Competitive analysis isn’t one of them. A half-competent analyst won’t be brilliant half the time, they will be mediocre 100% of the time. You’ll find yourself double-checking every piece of work they do, and you’ll never be able to trust a conclusion that they’ve reached. In sum, they’ll actually create more work for you. What’s worse, their mediocre ideas may start to infect the rest of the group, and the team will start to question your judgment for hiring the guy.

Suck it up. You need to do a better job of recruiting.


Please click here to rate this section (the link will open a one-screen anonymous survey, and you’ll get to see the results after you take it).

Next week: How to collect competitive information.

  1. I want to give public credit to Barbara Cardillo, one of my fellow managers at Apple. She taught me many of these hiring techniques. Thanks, Babs! [↩ back]
  2. An alternative, if you’re a very charismatic manager, is to go ahead and fire the person without permission. Get into a screaming match with them, walk them out the door, throw their papers after them, and then go tell HR. Basically you’re daring HR to fire you in return, and in most cases they don’t have enough pull to do it. One of my bosses used this technique occasionally. “Michael,” he told me after a particularly messy firing, “HR’s gonna’ bitch and moan, but when I fire someone they stay fired.” And sure enough they did. I’ve never had the need (or maybe the courage) to use this technique myself. To pull it off you need to have a supportive boss above you, and you’ll use up a lot of political capital that you might need for something else. [↩ back]

4. How to organize a Competitive Analysis team

The three perspectives a company needs in order to map the future are competitive analysis, market research, and advanced technology analysis. This week we continue our deep dive on competitive analysis done right.

A competitive analysis group can be as small as one person or as large as about eight people (a manager, six analysts, and a lab manager if you’re in an industry that makes testable products). If the team gets larger than about eight people, it will be too big to sit in a room together and have an informal conversation, cutting off the give and take debate that generates a lot of the group’s insights.

You want to get a mix of product-lovers and business analysts. The product lovers are enthusiasts for your product or service who live to play with competitive offerings. They’ll do a lot of your competitive testing, and will prowl the world looking for new competing products.1 The business analysts will be more interested in company dynamics than technology. They’ll love financials and org charts, and will spend a lot of time soaking up the latest competitive gossip with their contacts.

It’s important to have a mix of these people because you need to know your competitors both as businesses and as products. Then when a competitive announcement happens (say, a new product is released), you can quickly assess what it means both in business and product terms.

Some people can do both business and product analysis, but usually they’re inclined toward one or the other.

Companies working in an industry that creates tangible products should have some sort of competitive testing lab. It’s run by a lab manager who’s usually a more junior product type, someone who’s interested in products but isn’t experienced enough yet to be a full-time analyst. This person needs to be good with details. You may end up collecting a lot of competitive products, and the lab manager keeps all of them inventoried and ready for testing. The inventory is especially important if you’re testing products that are small enough to be carried away by a person. If you don’t keep track of these devices, a lot of them will disappear. The problem’s not usually that your own group’s employees will steal things, but the lab is inevitably a target for anyone in the building who’s looking to steal.

Your lab doesn’t need to be huge (when working in the handheld industry, where the devices are very small, I was fine with a single work bench in a development lab). The lab must be lockable, though, so you can do extended testing without the need to put everything away at the end of the day. Our competitive lab at Apple had close to a thousand square feet, and was a nice place to bring the execs to give them a quick education on a large variety of competitive products. But if you do that you have to keep the lab neat, which is a pain in the neck.

The lab manager can also work as a junior analyst on testing and other technical tasks.

Product testing is an essential part of the role. Most of the literature on competitive intelligence focuses on financial and structural analysis rather than product testing. I think that’s understandable – if you’re an academic writing a book on competitive theory, you look for factors that can be quantified and charted. But in the real world, competitive testing is also an incredibly powerful analytical tool.

The amount of hands-on testing you can do varies by industry, of course. For example, I’m not sure how someone in a pharmaceuticals company would fully test the competition’s products. Taking a few pills home to try out over the weekend doesn’t seem like a good idea. (I’m very interested in feedback from people in industries like those – please post a comment.) But I want to emphasize the importance of getting hands-on experience with the competition whenever you can.

Testing is important not just because you learn how you stack up competitively, but because it helps you get inside the mind of your competitor. Like people, most companies have distinct personalities that make them act in predictable ways. A small company usually carries the personality of its founder. A larger company will usually carry some residue of the founder’s personality, plus others that have been grafted into it. If the company has been built through mergers, it may have several competing personalities inside – in other words, it may be schizophrenic.

A company’s products say a lot about its personality. Ask yourself why they choose the features they do, and which ones they pay the most attention to. How much time do they put into packaging? How easy is it to understand the instructions? What are they assuming about their customers? Are they designing for their own engineers (a common flaw), or for the end user? If you do the right sort of testing, you’ll get a window into how your competitor thinks and what motivates them.

For example, Microsoft’s products give an endless essay on its thinking and motivations. The company’s tendency to slavishly cover any competitor’s product with its own version is one obvious example – Microsoft generally doesn’t pioneer, it co-opts. A subtler example is the company’s general inattention to small ease-of-use and fit-and-finish issues in its software. That speaks to a failure to fully understand and empathize with end users. You can’t make something truly easy to use unless you know how your users are thinking.

I once worked with a company that makes enterprise software – programs that are used by large companies to manage things like payroll and customer databases. The competitive analysts there told me they can’t do hands-on testing because the competitive products cost millions of dollars and there’s no way to get a competitor to install one of them in your lab. I think that’s short-sighted. You should get creative – find one of their customers who’s willing to give you some hands-on time with the product, or send one of your analysts to a training class in the product.

The more you understand about a competitive company’s personality, the better you can explain its current motivations and predict its future actions.

Encourage interaction and discussion. The best competitive analysis teams work together as a single unit, feeding off one another’s ideas, developing shared insights, and challenging each other’s conclusions. You should do everything you can to encourage this.

The analysts should brainstorm as a group at least once a week, rather than just working in isolation. An informal meeting over lunch at the end of the week works well. You should talk about the week’s events, and you can also set a specific topic to discuss each week (maybe a particular competitor, or a report that someone’s working on). The group should also get together whenever there’s an especially significant competitive event or announcement. Have the team assemble that day and compare thoughts on what happened, why the competition did it, what will happen next, and most importantly, what it means for your company. Smash the product analyst perspective against the business analyst perspective and see how they contrast. Chances are there will be a healthy debate about the implications, and you’ll end up with much more insightful analysis than you’d get with a single person reporting on his or her own.

I like to see a competitive team seated in cubicles or other open seating where they’ll talk a lot. It can also be very helpful to get the whole group on an instant messaging system. Some people dislike all this communication, finding it distracting. It’s normal to have a mix of more and less introverted people in any group, but an analyst who wants to work completely alone is a problem. The group is a lot smarter than any individual, and all the members of the team have to be willing to engage in a lot of discussion.

I’ve known companies that had competitive analysts scattered in various locations, communicating by e-mail and phone. This is not desirable – it hinders the development of a shared perspective that is the group’s most useful output. People who are isolated geographically tend to go heads-down on individual projects most of the time. You’ll get a group of individuals rather than the gestalt you need.

Expose them to diverse information. You never know which tidbits of information will be relevant to an analyst, so you need to make sure they get a lot of data from different sources — trade shows, the web, trips, product testing, etc. The information discovered by your market researchers is another gold mine, and one of the main reasons for teaming market research with competitive analysis is so the analysts get exposed to a lot of customer data.

What to look for in a competitive analyst

Let’s start with a definition. A good competitive analyst must:

1. Understand the competitive environment,
2. Be able to identify objectively where your company stands relative to the competition, and
3. Have good intuition.

Item 1, understanding the competitive environment, means it’s impossible for someone with no experience in your industry to be a good competitive analyst. They have to know the companies and the products before they can make a meaningful contribution.

The second item, identifying objectively where you stand, is an uncommon skill. There will be plenty of people with opinions about where you stand, but their opinions will generally be colored by what they’ve heard from others. People inside your company will generally be a little over-optimistic about your prospects, if the marketing team is doing a decent job. People outside your company will generally parrot whatever the consensus is from the analysts and press. It’s a rare person who can filter out all those messages and make up their own mind about what’s happening.

The final item, intuition, is the hardest to find. Anyone can figure out what’s happening if given enough information. For example, if I gave you Microsoft’s official marketing plan for the next year, you could predict its upcoming announcements with amazing accuracy. But a good competitive analyst will predict future developments and problems long before they become obvious to the average person. They’ll hear a minor news report and suddenly be convinced that a major market change is about to happen, or that a competitor is about to change strategy. And they’ll turn out to be right.

Usually they can’t completely explain how they reached these conclusions. “It’s obvious,” they’ll say, with more than a little exasperation. But when you ask why it’s obvious, they often can’t give you details. I think what they’re doing is picking up small, seemingly unconnected tidbits of information, and finding connections between them subconsciously. But I can’t prove that. All I know is, there are people who can do it, and they make the best competitive analysts.

A competitive analyst is born, not trained. Although you can use training to make a good analyst better, all the training in the world can’t turn a non-analyst into an analyst. This is the most common mistake I see companies make regarding competitive analysts. They think they can make any bright employee into an analyst just by giving them an assignment and maybe having them read a book or two. What’s worse, they often have their most junior employees start with competitive analysis, to help them “learn the industry.” Think about it: if someone’s just learning the industry, how in the world are they going to generate any real insights for you?

A lot of book authors have unintentionally facilitated this syndrome by creating how-to books on competitive intelligence. They describe the various charts and numerical analysis techniques you can use to understand a competitor, and imply that these are tools anyone can use. The tools are nice, but giving me a chisel won’t let me carve Michelangelo’s David. I’d also need the talent.

It’s hard to find a good analyst. Unlike market researchers or engineers, there’s no university training I know of for competitive analysts. And you can’t limit your search to people who held competitive intelligence roles at other companies, since they’re often trained in information-gathering rather than analysis. To find a good analyst you usually have to go dig them out of the woodwork. Fortunately, natural competitive analysts are usually misfits. If you know what to look for, they tend to stand out.

How to spot a good competitive analyst

Technolust. This is the first symptom I look for in a competitive analyst. I don’t know who invented that term, but I heard it first at Apple. Technolust means an insatiable desire to touch, use, and play with technology products.

Take someone with electronics technolust to Akihabara, the massive electronics shopping district in Tokyo, and they’ll be lost in wonder contemplating a display of a hundred different electric razors, each with a slightly different set of features. Someone with technolust actually enjoys attending trade shows, and they hate it when companies there display new products inside acrylic cases, where they can’t touch them.

These people aren’t necessarily engineers; in fact, the best engineers are often too single-minded to be good analysts. What you want in an analyst is an intense but short attention span — they flit from one product to the next, constantly seduced by the new, always looking for that next techno-high.

I think you can find people with the equivalent of technolust in most industries. During the tech bubble in 2000, I was part of a delegation that made a pilgrimage to Detroit to work on joint venture possibilities (this was back in the days when Palm had a higher market capitalization than Ford, and everyone wanted to work with us). After an evening meeting with a very serious gray-haired executive, our group was headed back to the airport in a van. Suddenly a sports car rocketed out of the dark, cut us off, and spun out in front of our van. Once we started breathing again, we found out that the car’s driver was that same executive, attempting unsuccessfully to show off the car’s new ultra-stable suspension.

But the most vivid example I can remember was an exterminator who had a huge case of pesticide-lust. My parents’ small business had leased an office that turned out to have a serious cockroach problem. The exterminator cackled as he went through the building, demonstrating how he could use squirts of pesticide to drive the roaches into killing zones. He called to the roaches as he hunted them. That’s what you want — people who have a basic love for your industry’s products or services. That enthusiasm will give them the stamina needed to research the competition’s products in detail. They won’t see it as a chore, they’ll actually enjoy it.

For example, competitive analysis of technology products involves a huge amount of hands-on testing, and people with technolust are the best ones to do it. They’ll finish faster (they may even take work home for the weekend), and more importantly they will learn a lot more, because they’ll actually use all the features, just to see what happens. They’ll take joy in finding ways to make the competition’s products break.

Remember I said above that there are two types of competitive analyst — those who focus on products, and those who focus on business issues. Business analysts obviously don’t have technolust, but they usually have a similar enthusiasm for the dynamics of the business. They’ll be boiling over with gossip about the management at various companies, or fascinated by the new distribution system that a competitor just put in place. The obsessive interest is what you want.

Technolust is pretty easy to test for in an interview. Just ask the candidate what products they use, and what they’d change in those products. Ask them what products they’d like to have, and watch their level of enthusiasm. If you make products that are small enough to keep in the room during an interview, leave a couple of new ones on the table. If the candidate’s eyes start drifting over to the products instead of looking at you, it’s a good sign.

If you’re looking for a business analyst, ask them what they think of a recent event in the industry, or a recent reorganization at a competitor. If they start gushing industry gossip, you have a winner.

Anger is the second symptom of a good analyst. I don’t mean scream-at-the-boss, bring-a-shotgun-to-work anger, but instead a deep-seated slow burn of intense frustration because your company’s not doing the right thing to win. Anger is a symptom that the analyst is thinking hard about the marketplace, and has the energy needed to lobby effectively.

You need to be sure, though, that the anger hasn’t soured into contempt. The ideal analyst feels his or her company is flawed but fixable, and will be passionate about influencing others to do the right thing. An analyst who gets too frustrated will lose faith in the ability of the company to win, and will start believing that others in the company are idiots. That dismissiveness of coworkers rapidly destroys an analyst’s influence.

It’s very easy to test for anger. Just ask them what they think the company should be doing, and stand back.

Good thinking skills. The best competitive analysts are information sponges, soaked to capacity with information they’ve picked up. They question other people’s assumptions about the industry, even if those people are prominent. And they won’t accept the consensus about anything until they have proved it to themselves.

To test this, get the candidate talking about what’s happening in the industry and what they think will happen next. The more original ideas you hear, the better. If they just parrot back the consensus from the press and analysts, challenge them about it — ask them how they reached those conclusions, and what evidence they have.

What you’re looking for is not what the candidate thinks, since that will change once they’re in your company and have more information. But you want to understand how they form their ideas. Do they question the industry consensus? Are they good at forming conclusions from the information they do have? Can they use that information to say what your company should do about a situation?

I try to quiz the candidate on a subject that I know more about than they do. For example, I’ll ask what they think of a competitor that I’ve studied extensively, or what they’d change about the company I work for. If they come up with insightful answers even though they have less inside information than I do, that’s a very good sign of not just strong thinking but also intuition. If they give weak answers, or just repeat the industry consensus, you should move on.

For example, I once asked a competitive analyst candidate for his view of a particular competitor who happened to be weak in the US but very strong in Europe. “They’re weird and they’re going to die,” he said, which is pretty much the standard opinion in the US. He didn’t get a second interview.2

On the other hand, I once had a candidate lecture me in depth on how my company was positioning itself completely wrong in the industry. He suggested a couple of new strategies that we were already thinking about internally. In most job interviews, you don’t get points for criticizing the hiring company, but in this case he showed exactly the sort of insight I was looking for.

Other characteristics. In addition to having technolust, anger, and thinking skills, an ideal competitive analyst will be a good writer and talker, so they can communicate their findings to the rest of the company. That’s easy to test — just listen to them talk, and get a writing sample.


Please click here to rate this section (the link will open a one-screen anonymous survey, and you’ll get to see the results after you take it).

Next week: The recruitment and hiring process.

  1. I’m using the term “product” pretty loosely here. Most of what I’m saying applies equally well to companies that create services. [↩ back]
  2. The best answer in this case would have been an enthusiastic rant on what we could learn from the competitor’s success in Europe. But a perfectly acceptable answer would have been, “you know, I haven’t been able to track them much from here.” As long as the candidate had good knowledge in other areas, I’d be satisfied. What’s not acceptable is a candidate who doesn’t know the limits of his or her knowledge, or who mistakes superficial opinions for analysis. If you hire that person, they’ll inevitably feed bad information to your team and the company. [↩ back]

3. The Fall of Competitive Intelligence

In the next few weeks, we’ll go into depth on competitive analysis – what it is today, and what it should become.

Once upon a time, back in the 1990s, competitive intelligence was a hot area at many companies. They invested heavily in creating competitive intelligence teams. A professional group called the Society of Competitive Intelligence Professionals claimed that CI was the fastest-growing corporate discipline. SCIP had more than 3,000 members in 1996, and was growing by more than 100 new people a month.1 Competitive intelligence consulting firms did big business, and if you visit any good research library you’ll find whole shelves of books about Competitive Intelligence, most of them written in the 1990s.

But when the Internet bubble burst and companies started cutting costs, many of those competitive intelligence groups were wiped out.

“CI units are being eliminated….At least half of the CI functions in place today have suffered significant cutbacks, or will face them within the next six months.”

That was from SCIP’s own newsletter in 2003.2 One of the leading promoters of competitive intelligence in its heyday is now writing about how to apply the Talmud to business decisions.

Why the retreat? Conditions vary from company to company, but I think there are five main reasons:

1. The case for CI was grounded in fear. Much of the urgency behind creating the function was driven by fear of foreign companies, which were said to practice Competitive Intelligence aggressively. Japan in particular was described as a hotbed of competitive spying, and it was implied that this played a key role in Japan’s economic rise. The rhetoric was frightening, and it seemed likely that any company failing to create a competitive intelligence unit was doomed to fall to foreign conquerors.

But as the perceived Japanese “threat” to American business receded, so did interest in Japanese business practices (when’s the last time you heard someone quote from the Book of Five Rings?).

2. The role was never completely defined. CI was a very new discipline, and there hadn’t been time for a consensus to develop on exactly what the role was and how to organize it. As a result, different books and consultants gave conflicting advice. In the absence of clear expectations, I think many competitive intelligence groups were never given well-defined charters. When a company’s under financial stress, a poorly defined function is an obvious thing to cut.

Inevitably, some of the advice was also damaging. For example, one prominent book said the CI role is like being a court jester for your company. The idea was that the CEO resembles Shakespeare’s King Lear, surrounded by liars and flatterers. The jester is the guy in tights who tells the king the truth, by mocking the egotists and exposing the liars.

It’s true that someone in a competitive role must be unafraid to say exactly what the data indicates, even if it’ll upset people. But beyond that I’m uncomfortable with the jester analogy because it implies a completely negative role, and one that focuses only on influencing the CEO. In most of the companies I’ve known, to make change work you need to influence the whole management team, not to mention the rank and file employees. You can’t do that if you speak only to the CEO, and besides you won’t win much respect from the organization if all you do is point out the flaws in other people’s work. When the CEO is replaced (which happens a lot more often than the death of a king), it’s likely that the one agreement among all the remaining executives will be that they want to strangle the jester.

Which, I believe, is what happened at the end of King Lear.

If you need an analogy for the competitive role, it’s better to think of the scout for a wagon train, forging ahead in the wilderness to identify dangers and find the easiest path for everyone. The scout’s not the manager of the wagon train, but he’s a leader with a unique and valued role. And he never wears tights.

3. The focus was on intelligence, not analysis. Much of the CI literature focused on how to gather and verify facts about the competition’s activities. It’s right there in the name — the function collects intelligence on what the other guys are up to. You can find entire books just listing various intelligence-gathering techniques, down to obscure things like taking the competition’s factory tour with two-sided tape on your shoes, so you can collect microscopic samples of the materials they’re using.

The problem is that basic intelligence collection is becoming less important as the Internet grows and people change jobs more often. The Web is awash with competitive rumors, and chances are that if you can’t find the information you need online, one of your former coworkers is now working for the competitor and will sing like a canary if you buy them lunch. There’s simply less need in most companies for full-time employees who ferret out tidbits of intelligence.


What companies do need is insight on what the flood of information means — how it adds up, and what it says about the competition’s thinking and future behavior. This is why I prefer the term “competitive analysis” rather than competitive intelligence. But that sort of predictive analysis is a very different discipline from collecting data, and it works best when competitive analysis is teamed with market research and advanced technology research. So competitive analysis isn’t very valuable as a standalone function.

4. The wrong people were hired for the function. This probably relates back to the lack of a clear charter for the CI role. When you’re not sure what a function will do, it’s easy to imagine that anyone can do it. Many of the people I’ve seen working in the field were marketing or sales people who had been dropped into the competitive role without much preparation, or much inclination for the work. They floundered around trying to figure out what to do, and produced very superficial reports.

Because of the flood of how-to books on Competitive Intelligence, I think some people formed the impression that anyone could do CI if they followed a few simple practices. That’s a little odd; I don’t know of any other field in business where the expectation is that anyone can be good at it. You don’t try to turn randomly-selected employees into engineers, or PR specialists, or salespeople. You look for people who have talent in that area. The same is true for competitive analysts. It’s a specialized field, and not everyone can do it well.

In a future chapter I’ll give some guidelines on how to identify a good competitive analyst.

5. Competitive Intelligence is not mission-critical in the short term. A company has to have salespeople or no one eats. You have to have engineers or products just don’t get built. But if you don’t have a competitive team…well, the company keeps going just fine, thank you. At least for a while.

This means that, in practice, a competitive team has to be more than competent in order to survive. It has to be superb, delivering great value to the company in a visible way, so no one would think of living without it. Just being a service group, delivering good information to clients in the company, is not enough. The group has to solve serious business problems and help close sales. Rather than being a source of competitive information, the group needs to be a source of competitive leadership.


Late one evening in 1989, I entered one of Apple Computer’s office buildings in Silicon Valley. Although Apple called its headquarters a “campus,” it was actually a series of buildings sandwiched between homes and stores over several square miles. The company had rented them haphazardly as it grew.

The building I went to was inconspicuous, two stories tall and tucked behind a screen of trees. It wasn’t the usual place for executive meetings, but an important meeting had been held there earlier in the day.

It was after sunset when I entered the building, and the place was very quiet. The building didn’t house a lot of engineers, so most of the employees had gone home. I went to a darkened conference room, where an IBM personal computer stood in one corner. It was a PS/2 Model 80, a hulking floor-based tower that was the leading edge of PCs at the time. After checking to make sure no one was nearby, I turned on the computer and watched it start up.

It launched a pre-release copy of Microsoft Windows version 3.0. I saw the software come up on the screen, played with it for a couple of minutes, and immediately knew Apple was in deep trouble.

To understand why, you had to know the history of Microsoft up to that time. This was back in the days when PC companies like Apple, Microsoft, Lotus, and WordPerfect viewed one another as peers. The dominant behemoth was IBM, and we were all dancing around them. Microsoft was the clever operating system company that had ridden the IBM standard to prominence, but no one really respected its ability to innovate in applications. Its efforts there were a joke — Microsoft Word was something like the #6 word processor on the PC, and even Microsoft’s software for the Macintosh had numerous competitors, many of which were viewed as technically superior to Microsoft’s products.

Microsoft Windows was the biggest joke of all. Its first two versions had been crude, extremely hard to use, and didn’t excite anyone. In some ways, they probably helped Apple by validating the idea of a graphical interface for a computer, without providing one that was good enough to steal away many customers.

Windows 3.0 changed that. It looked nice. The graphics were pleasant, the icons were reasonably well laid out on the screen, and it worked fairly well. There were still some rough edges, but it was good enough that I could picture a PC user installing it and not being embarrassed a week later. Windows was, for the first time, usable.

For reasons I still don’t know, Microsoft had decided to come down and give a demo of the unreleased software to Apple’s executives. I was managing the company’s competitive analysis department at the time, and as the only people in the company who had IBM PCs, we were asked to provide one for the meeting.

I wasn’t invited to the meeting, for obvious reasons, but I stayed late that night until I was sure it was over. As it turned out, when the Microsoft people left the meeting, they hadn’t erased the software from the PC. Now it was mine. I lifted the very heavy PS/2 tower onto a wheeled chair and rolled it out to my car. We started testing the software the next morning, trying to learn as quickly as we could just in case Microsoft came back and asked us to wipe the hard drive.

They never did.

With a pre-release version of Microsoft’s new product in hand, we were in a good position to prepare Apple for the upcoming competition. And in many ways we did — we documented our competitive advantages, educated the engineers about the improving competition, created marketing collateral, and generally tried to prepare the company for a fight. But the preparation turned out to be harder than I expected, in part because of resistance from above.

Spreading bad news about a competitor can be very disruptive to a company. It distracts employees, causes people to question their current plans, and generally hurts efficiency. The news is especially hard to deliver when a competitor has a history of screwing up, and most of the people in the company don’t use the competitor’s products. It’s seductively easy to rationalize that the competition is going to blow it one more time.

Sure enough, soon after we started raising a red flag about the software, my boss called me into his office. He said we were upsetting too many people, and told me to tone down the message. “After all,” he said, “it’s just another version of Windows.”

Maybe Apple was destined to lose anyway. Apple’s refusal to license its software to other companies meant it couldn’t establish a competing software standard, and its failure to produce new innovations that would make Windows obsolete meant it couldn’t hold onto many of the customers it had. But I think another cause of Apple’s fate was its inability to picture how the world would change. Apple didn’t really understand the minds of PC customers, and couldn’t see how Microsoft’s new software would act on them. And so despite a free preview from Microsoft, Apple never fully rose to the challenge of Windows 3.0, and Microsoft went on to cement its dominance of the PC industry.

By the traditional rules of competitive intelligence, I ought to feel at peace with my role in this. I did everything I could do legally to get advance information, my team and I turned out the best analysis we could, and we reported it as aggressively as we were allowed to. But I think that’s a cop-out. My company screwed up on a competitive issue. Therefore I’m partly to blame.


My experience with Windows taught me the most important rule of competitive analysis — your role is to make sure your company wins competitively. It’s not enough to deliver a great report and then wash your hands of the situation. If the company doesn’t act on your information, you failed.

You need to drive this principle into everything a competitive group does.


Please click here to rate this section (the link will open a one-screen anonymous survey, and you’ll get to see the results after you take it).

Next week: How to organize a competitive analysis team.

  1. For a complete discussion, see the book Competitive Intelligence by Larry Kahaner. [↩ back]
  2. Written by Bill Fiora, principal of Outward Insights, a CI consulting firm. [↩ back]

Great Moments in Market Research

I’m posting pieces of the book once a week, but in between I’ll occasionally post a comment on a related subject. Today’s topic is market research and the television show American Idol.

If you live in the US you’ve probably heard of American Idol, especially if like me you have a pre-teen daughter who asks you to call in votes for her favorite singers after bed-time. If you don’t live in the US, chances are you have a local version of the same show – Croatian Idol or Ugandan Idol or something like that.1

In case you’ve been living in a cave or don’t have kids, here’s how the show works: A group of amateur singers performs every week, one song per performer. The audience votes, and the night after the performance, the person who received the fewest votes has to leave the show. It sounds deceptively simple, but the personalities involved are entertaining. If you watch the show regularly, you kind of get attached to the singers, and it’s a wrench when one of them gets voted off and you realize you won’t ever get to see them sing again.

Mind you, I know all of this only from watching my daughter. I myself don’t pay much attention to the show, oooooh no, I’m too busy writing blog entries.

The central tension each week is the mystery of who will get voted off. Or anyway, that was the central mystery until a few weeks ago, when an online tool called Dial Idol hit its prime.

Dial Idol is a website and a software program. You install it on your computer and use it to dial your votes into the show, via modem. The program also reports your votes back to the website. This isn’t all that interesting – there are lots of American Idol polls online. But Dial Idol also tabulates the percent of the calls for each singer that generate busy signals. American Idol gets so flooded with calls during the voting process that it’s commonplace to get a busy signal – you might have to call three times to cast a single vote.

The genius of Dial Idol is the use of the busy signal. Any online poll that people can volunteer to take is plagued by self-selection errors. But the ratio of busy signals to total calls turns out to be an accurate predictor of who’s getting the most votes. It corrects for any bias in the people choosing to take the poll.

The website has been in operation for years, but just recently it reached some sort of critical mass. I don’t know if it was the total number of people using the software, or tweaks they did to their formulae, but the site is now turning out eerily accurate predictions of the outcomes of the voting. It has correctly predicted the people voted off for the last four weeks straight.
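To see why the busy-signal trick works, here’s a minimal sketch in Python. Everything in it is made up for illustration: the contestant names, the call volumes, and the toy formula relating call volume to busy signals (Dial Idol has never published its real formula).

```python
import random

def busy_ratio(call_attempts, line_capacity=50_000):
    # Toy congestion model: the chance of getting a busy signal rises
    # with the total volume of calls flooding a contestant's lines.
    return call_attempts / (call_attempts + line_capacity)

def simulate_poll(true_attempts, sample_calls=2_000, seed=1):
    # Each Dial Idol user's calls hit busy signals at the network-wide
    # rate for that contestant, regardless of which fans happen to run
    # the software -- which is why the ratio sidesteps self-selection.
    rng = random.Random(seed)
    observed = {}
    for name, attempts in true_attempts.items():
        p = busy_ratio(attempts)
        busies = sum(rng.random() < p for _ in range(sample_calls))
        observed[name] = busies / sample_calls
    return observed

# Hypothetical nationwide call volumes per contestant.
true_attempts = {"Taylor": 900_000, "Katharine": 600_000, "Elliott": 250_000}

ratios = simulate_poll(true_attempts)
# Fewest calls -> lowest busy ratio -> predicted to be voted off.
predicted_off = min(ratios, key=ratios.get)
```

The key point is in the middle comment: an ordinary online poll measures who its users *like*, but the busy-signal rate measures how congested each contestant’s lines are for *everyone*, so it doesn’t matter that the people running the software are a biased sample.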

You may well be thinking, who cares? And in one sense this is all trivia. But to me, Dial Idol is a great example of the sort of interesting market research that’s being enabled by the Internet. We’re in a golden age of new market research techniques. They’re giving us more ways to understand people, at lower cost, than we’ve ever had before. Some day we’re going to look back at the days before the Web and wonder how we ever managed to do any marketing at all.

  1. I thought I was joking when I wrote that, but then I looked it up and it turns out there is a Croatian Idol, called Hrvatski Idol. No sign of an Idol program in Uganda, although there is one in South Africa. You can see the full list of 32 countries here. (That’s yet another win for Wikipedia over Encyclopedia Britannica, by the way.) [↩ back]

2. Part I: Mapping the Future

This week’s post is relatively short, but it lays out my central argument. Think of this as a road map to where we’re going in the book.

Nobody can predict the future of a market or business perfectly. If I could do it, I’d be retired someplace living off my stock income. If the authors of all those management books could do it, they wouldn’t have to write books and give speeches for a living.

Part of the problem is that the world’s very complex, and any absolute prediction is bound to break down as unexpected things happen. But the biggest challenge is that the future doesn’t yet exist. It’s not a single deterministic thing; it’s a set of possibilities. We change the future every day with our own decisions. So what we need for the future isn’t a prediction. We need a map, showing all the possibilities and consequences of various decisions: if you go here you’ll end up in a valley, if you go there you’ll end up in the mountains – and if you go over there you’ll run off a cliff.

The better you draw the map of possibilities, the better your company can choose a good future for itself and its customers.

Mapping the future of a market or industry requires input from three different perspectives. You need to know first what’s going on with the customers. Not just what they’re doing today, but how they think, what they want out of life, and how they’d react to changes that might happen in the future. You need to know the insides of their heads so well that you can speak for them reliably.

Second, you need to know how technology is going to change, since that determines what your company can create. I’m using “technology” in a very broad sense here, meaning not just physical hardware like computer chips and paint formulas, but also processes companies can use to deliver services. The internet, for example, is a technology change that’s changing business processes in almost all companies, and creating a lot of new opportunities. The telephone did the same thing in the early 1900s.

And third, you need a good understanding of what your competitors will do. It’s not enough just to know their products and org charts; you need to understand how they think and operate, and what their basic personalities are, so you can anticipate what they’ll do in future situations.

Lots of other information can also be useful for predicting the future. For example, it’s very helpful to factor in future changes in government regulations (if you know what they’ll be). But I think customers, competition, and technology are the most important factors in mapping the future, and they need to be brought together very intimately because there’s so much synergy between them.

To understand how a future map is used, picture yourself as a Roman general leading your army to a winter camp. To the north there’s a sheltered valley that would be perfect for your needs. The fastest path to the valley leads across a river most people think is impassable, but your engineers say they can bridge it, so you set them to work. You know the barbarians from the west are also searching for winter shelter in the same area. You don’t want them to reach the valley before you. Your scouts have identified a hill that dominates the road from the west. You send your archers to fortify the hill immediately, to cut off any advance.

The valley’s a potential market your customer research team found. The people building the bridge are your advanced technologists. The scouts who found the western road and the hill above it are your competitive analysts.

None of these people, working alone, could have drawn the map and told you where you could go on it. But once you had information from all three, you could see the likely future, plan out where you wanted to travel, and prevent the other guy from getting there first.

Unfortunately, this sort of map-making doesn’t happen naturally in the business world. In most of the companies I know, the people doing competitive analysis, market research, and advanced technology mix together like oil, sand, and water. Good market researchers are practical and methodical, deeply grounded in data and in the processes by which they gather it. They’re very uncomfortable with future speculation and unfounded predictions. Good competitive analysts are intuitive, prone to making wild predictions based on little or no evidence. They hate being tied down by process. And true advanced technologists often have very fixed ideas about the world, ideas that are linked to the intellectual problems they want to research (i.e., I want to work on speech recognition, therefore I believe that many important problems can be solved with speech recognition). They can be very impatient with anyone trying to impose customer or competitive realities on them.

On top of the basic differences in outlook, the people who gravitate to these teams come from different academic backgrounds, so they often have different vocabularies and different professional standards. The work also attracts different personality types, which often don’t mix well naturally.

Because of the differences, these teams often have pretty low opinions of one another, sometimes bordering on contempt.

To make a good map of the future, you have to figure out how to mix data with intuition, to blend science and art. To do this, you first have to understand and appreciate each of the groups separately. Then you have to teach them to work together.


Please click here to rate this section (the link will open a one-screen anonymous survey, and you’ll get to see the results after you take it).

Are there specific things you’d like to know about mapping the future, and in particular competitive analysis, customer research, and advanced technology? Please post a comment. I’ll incorporate your feedback into what I write.

Next week: The Fall of Competitive Intelligence.

1. Introduction. We’re very, very, very bad at predicting the future

In 1994, a best-selling business book declared that the world was “standing on the edge” of a series of technology changes that would revolutionize our lives. It’s interesting to check in on those predictions to see how they have turned out. Here are seven predictions made in the book:

Live machine translation of speech, so people in different countries could talk to each other on the phone. Nope. The closest thing we have today is automatic translation of websites, which occasionally lets you dig out a useful fact from a Chinese or Korean website but is mostly good for a laugh.

For example, I used a prominent automatic translation service to convert the paragraph above into Chinese and then back into English. Here’s what happened to it:

Speech, therefore person’s live machine translation can converse with the different country mutually makes the telephone. Nope. We have the closest matter is the website machine translation, occasionally lets you dig but today is main good is smiles outside a useful fact from Chinese or the South Korean website.

Not only is it pretty much incoherent, but Korean was changed into South Korean, a subtle error that could cause big problems in certain political situations. If that’s the best we can do with text, how do you think we’d do with live speech, which can be pronounced in a huge variety of ways?

Urban underground distribution systems that let companies deliver goods without tying up traffic. Right. Anyone have an update on the construction of that delivery tunnel system under, say, Paris? Or New York?

Microrobotic machines that could unclog arteries. We call it nanotech today, but we’re nowhere near having autonomous surgical robots you could inject into the body.

Virtual meeting rooms so people can have meetings without travel. We’ve made great progress on this with Web-based conferencing systems, but they aren’t nearly as ubiquitous as the book predicted.

Satellite-based communicators that would let anyone make a call from anywhere on the earth. You can actually buy these today, but satellite phones are incredibly unattractive compared to a cellphone. Here are typical specs:

                  Satellite    Cellphone
Monthly charge    $30          $40
Per minute        $1.50        First 500 free, then 45¢
Phone weight      13 oz        4 oz
Standby time      24 hours     200 hours
Cost of phone     $1,500       Free with contract

Not surprisingly, satellite phones are being used today mostly for specialized purposes like ship to shore communication. So does this prediction count as a hit or a miss? Since the context was helping businesses plan for the future, I think you have to call it a miss. A company that bet big on satellite communication in 1994 would have lost its shirt. In the case of Motorola’s investment in the Iridium satellite phone system (which Wired magazine kindly called “Edsels in the Sky”) the shirt cost about $2.6 billion.

Machines capable of feeling emotions. We’re not completely sure how emotions work in human beings, so putting them in machines may be a bit of a stretch for now.

Digital highways that bring torrents of information into the home. Right on! This one was correct.

The book’s batting average was .214: one correct, one partly correct, and five wrong. In baseball that would buy you a ticket to the minor leagues — but business isn’t baseball. In business, a single wrong prediction can cost you billions.
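For the record, the arithmetic behind that average, counting the partly-correct prediction as half a hit:

```python
# Seven predictions: one hit, one partial (scored as half a hit), five misses
hits, partials, total = 1, 1, 7
batting_average = (hits + 0.5 * partials) / total
print(round(batting_average, 3))  # 0.214
```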

It’s tempting to say the authors screwed up, but actually, future business predictions are almost always wrong. Check any 10-year-old business strategy book and chances are you’ll find celebrations of companies that subsequently collapsed, and technology predictions that turned out to be fantasies. And it’s not just a book problem. Companies make the same mistakes internally, as Motorola discovered with Iridium.

The problems with the future are so common that I don’t think we can blame them on dumb authors or dumb managers. There’s something systematic going on. I think the trouble is that we’re looking at the future the wrong way.


Our bipolar view of the future

I’ve been in the technology industry for almost two decades, which in Silicon Valley makes you a grizzled veteran. I’ve worked with a lot of different companies, spending much of my time on strategy and trying to figure out how the marketplace will develop. One of the most striking things I’ve noticed is that most of the companies I work with have trouble thinking about the future.

Silicon Valley likes to present itself as the place where the future comes to life, but in practice the companies are bipolar about it. They either try to beat the future into submission, or they surrender to it as an immutable force of nature. These two groups, which I call visionary and reactive companies, both mishandle the future in important ways.

Visionary companies represent the triumph of individual brilliance over mundane thinking. They’re usually still led by founders who started the company with a strong idea of a new market or product that could change the world. Despite doubters (and there are always doubters in the backstory of these companies), the founders had a correct vision of what would happen, and they drove the company to success.

Visionary companies are usually focused and decisive. They are very good at tuning out distractions and staying on course, because they believe that with good execution they can force the future to evolve the way they want it to. These companies are often indifferent to market research and outside information from people who don’t “get it.” Based on their own history, they usually feel they can do a better job of intuiting opportunities than any research can. Research would just get in the way of their freedom to create.

In my experience, the visionaries are completely right about this up to the point where they start being badly wrong. Even the most brilliant individuals have limits. Eventually inspiration runs out, and the company’s differentiators are copied by competitors. Or the vision hardens into dogma, and the company starts missing new opportunities. Visionary companies are the most likely to march into utter disaster as they cling to a vision that’s no longer valid. They’re flying blind because when you’re inside a vision, you can’t see what its limits are.

Reactive companies are on the other extreme. They view the future not as something they can control (they’d call that arrogance), but as something they can predict, like the weather. Once they have predicted the future, they then make logical plans that react to that prediction. These companies are often superb at responding to changes in the market. They’re very open to outside information, and are willing to learn from anyone.

But this same openness makes reactive companies vulnerable to industry groupthink. As they scan the world for ideas and trends, the ones with the most currency among analysts and the press naturally rise to the top of the pile. It’s almost like a voting process — if enough consultants and other credible people are saying satellite phones will take off, it must be correct.

This makes it very difficult for a reactive company to form a differentiated strategy. Instead, it tends to pursue whatever everyone else is pursuing. You could see the process at work during the Internet bubble, when the tech industry consensus said the most valuable thing to own was an online service. Many of the established companies in high tech threw themselves into the creation of online services, or paid enormous sums to acquire online service companies, even if those services didn’t actually have much to do with their core businesses.

In response to thrashes like this, reactive companies sometimes evolve into fast followers. They become convinced that you can’t really predict the future at all, so they focus instead on quickly co-opting new opportunities and products that other companies produce.

Although American business culture tends to admire renegade visionary leaders, there’s nothing shameful about being a reactive company. It’s the strategy followed by many of the world’s largest consumer electronics firms, most of them headquartered in places where renegade behavior is discouraged. But whatever your cultural attitude, reactive companies are flying blind when it comes to the future. If the consensus prediction of the future turns out to be wrong, or if they don’t spot a major change early, they won’t be able to react in time to survive.


The middle road: Anticipate, don’t react

I think there’s a third approach to the future, one that’s more powerful than either the visionary or reactive path. In this approach, you have to take a different view of how the future works. It’s not something you can control completely through sheer will, and it’s not a single force of nature you can predict perfectly through logic and research. The future is a series of possibilities that might or might not come true. Although you can’t predict what will happen, you actually can predict pretty accurately what might happen, and how you can change it.

Once you’ve made a list of these potential futures, it works like a road map for a family vacation. It shows you cool destinations, routes to get there, and potential hazards along the way. It also shows places that you can’t possibly reach, no matter how badly you want to drive there. Unlike a vacation map, a futures map also shows you the competitors traveling those same roads — where they are, and where they’re likely to go.

Once you know what the routes are and where they lead, you can pick ones that take you to the best destinations.

To a visionary company, this means the right kind of futures analysis can help you extend your vision, to give you new ideas on possibilities and to anticipate the cliffs before you march off them.

To a reactive company, this means you can free yourself from responding to either rigid predictions or marketplace events. Instead you can identify the best possibilities and the risks along the way to them, and guide the future toward the outcome you want.

Specifically, mapping the future lets you:

–Anticipate the development of new markets, allowing you to seize the best position in them before your competitors even know they exist.

–Predict how competitors will react to actions your company takes, so you’re ready to counter their responses before they even happen.

–Identify major technological turning points that will change your industry (and learn to disregard the changes that won’t really matter).

Most of the companies I work with are not organized to map the future. The people who know critical parts of the map — competitive analysts, market researchers, and advanced technologists — are usually scattered in different parts of the company, where they perform mostly support tasks for the business. These functions often don’t communicate with each other, and don’t even respect one another’s work. To map the future successfully, they need to be organized differently, taught how to work together, and in some cases staffed with a different type of person.

In other words, they need to be treated like a single strategic asset, rather than three separate service groups.

In this blog I’ll give my ideas about how to map the future — how to manage the groups that build the map, how to tie them together, and what sort of benefits you should expect from them.

Think of this as a how-to manual. It’s based on almost two decades of competing and partnering in fast-changing markets with large, aggressive companies like Microsoft, Intel, IBM, and Nokia, often with very little budget or headcount.

Sometimes I’ve been successful, sometimes not. But I’ve learned a lot, and one of the things I’ve learned is that most companies don’t think about the future the way they should. I’d like to fix that.


Disclaimer: “What works for me”

Gardening is supposedly the most popular hobby in America, and there are an incredible number of books and websites telling you how to do it. Somebody smart once pointed out that every one of them ought to carry the disclaimer, “this is what works for me.” Conditions vary incredibly from yard to yard and from state to state. The exact same treatment that grows a pine tree at my place might kill it in yours. The best I can do is say what works for me, and hope you can adapt it to your needs.

I think the same is true of business advice. Companies, industries, and national cultures differ so much that what’s brilliant in one firm may be disastrous in another. In my case, I’ve worked in high tech, in Silicon Valley, in the United States, for most of my career. Our company cultures here are very informal; information flows up and down the org chart readily. For example, it’s commonplace for a CEO to exchange e-mails with an individual contributor — and woe to the middle manager who gets in the way.

Darwinian competition dominates our economy; firms grow up fast and die even faster. Almost no one builds a long-term career at a single company. In fact, you’re viewed with suspicion if you stay in one place for too long — it must be a sign that you couldn’t get work elsewhere.

The lessons and techniques I’ve learned are adapted to conditions here. If you’re in a hierarchical company, in a conservative industry dominated by 30-year veterans, some of my advice may be irrelevant, if not downright dangerous to your career.

On the other hand, if there’s one dominant trend in the world’s economy, it’s that the pace of change is accelerating. Companies are becoming more flexible, information needs to flow more freely, and the days of lifelong employment in a single company seem to be coming to an end.

In other words, like it or not, the business world is becoming a little more like Silicon Valley. Maybe some of my experiences will help you, and your company, compete better in that future.


This is a book in progress. Please click here to rate this section (the link will open a one-screen anonymous survey, and you’ll get to see the results after you take it). I’d also appreciate comments and suggestions.


Welcome to Stop Flying Blind, a blog on its way to becoming a book.

Everyone agrees that companies should focus on competing in the future rather than just reacting to what’s happening today. But how do you actually do that? How do you determine what a market’s going to be like when the market doesn’t yet exist? How do you predict what your competition’s likely to do before they even know it themselves? How do you spot the turning points that can change the rules of your industry, before anyone else sees them?

Most companies fly blind on these issues, but they don’t have to. By combining a variety of different perspectives — competitive analysis, market research, and advanced technology research — a company can map the possible futures, pick out the one most favorable to it, and help bring that future into being.

That’s what this blog is all about. I’m a consultant in Silicon Valley, and have been working in roles related to high tech strategy for most of my career, including long stints at Apple and Palm. I think most companies in high tech do a poor job of using external information in their strategic thinking. In this blog I’ll lay out my ideas on how to do it right.

Eventually this will all come together into a book. I’m posting new sections once a week, to gather comments and suggestions. Please tell me what you think, and what you’d like to see.


Note: This is one of two blogs I’m running. The other, Mobile Opportunity, is a look at the mobile and wireless marketplace and has general comments on high tech.