Dimensional Research

Technology Market Research


Bad In-depth Interview

by Diane Hagglund

It’s very common to be invited to participate in a bad web survey or a bad phone survey. But this week I had the dubious honor of being a participant in a truly bad in-depth interview. I had been invited to give my perspective on the use of research during an enterprise sales cycle.

Usually researchers are automatically excluded from these kinds of research projects. However, Dimensional Research develops research reports including customer-based ROI analysis and web surveys, so I was asked to participate and couldn’t pass up the opportunity.

Here are just a few of the things that went badly during the 45-minute telephone conversation:

1) The call opened with the interviewer telling me that he “was convinced there was a huge gap” and he was doing the research to prove it. Yes – he started out by stating his conclusion and his bias!

2) After a 10-minute preamble, during which I was easily distracted by a client Skype-ing me, he finally asked me the first question. His opener, which sets the tone for the entire conversation, was this gem: “Where are you located?” They had already taken my contact information, so it surprised me that they opened with this.

Let me clarify – I completely understand the need to confirm information gathered previously, but this seemed like a weak opener. Personally, I prefer to start an in-depth interview by asking the person to tell me about their involvement in the topic of the study. That usually covers a lot of the basics (job title, location, etc.) without the boring laundry list of questions. Of course you can always ask those questions later if they’re not covered in a more engaging fashion.

3) The interviewer had sent a supporting document – probably about as many words as are in this post, in about the same font size – and then proceeded to read the whole thing out loud to me. He didn’t seem to get that I CAN READ, had finished the document well before he did, and was by then being distracted by a text message. Again, certainly there are times when it is appropriate to review materials with the individual, but one of the benefits of an in-depth interview is that you can get a real feel for the participant and understand when you can move more quickly or need to progress more slowly.

4) The interviewer ARGUED WITH MY ANSWERS!!! Now I know that all researchers need to dig deeper and sometimes playing devil’s advocate is a good tool. We’ll often use questions like “I understand that you prefer this, but can you explain why that isn’t a preference?” But this guy actually said “Really? That’s odd. In my experience it never happens that way.” Talk about not feeling valued!

5) The interviewer did finally let me get a word in edgewise, and I was half-way through an idea when I got the “Thank you Diane, this was great” and we were done.

I will confess I was a bad research participant and didn’t turn off my instant messaging, email, and other distractions. But I was taking time out of my schedule to share expertise that I had and the researcher didn’t, and no attempt was made to make me feel heard or to make the experience a positive one for me.

Such a waste of 45 minutes – for both of us.

In-depth Interviews or Focus Groups: A Reprise

by Diane Hagglund

Ever since I published In-Depth Interviews vs. Focus Groups, almost two years ago, it has been our most visited blog post. Not only is it the most visited post of all time, it has also been the most visited article on this blog every month since it was published.

It seems pretty obvious that people are confused about the difference between the two most basic methodologies for conducting qualitative market research. As researchers, we regularly execute both types of research and instinctively understand which will best meet the goals of a particular project. But we need to be very clear with clients who are not research savvy.

I frequently find myself having these conversations with clients. I often get requests for a quote for a certain number of focus groups, but when we review the goals of the project, it may turn out that interviews would deliver much better results for the same budget – especially for competitive research or message validation.

I’ve learned a couple of things:

1. Don’t be afraid to suggest an alternative to a client if it’s right for the project.

Your job is to be an expert and provide the right research solution, not just deliver against the tactics of a client’s request.

2. Be clear and use simple terms when explaining research methodologies.

It’s easy to lapse into “researcher lingo,” but clients will resist alternatives if they don’t understand them.

3. Use your best researcher listening skills to figure out why the client is asking for a specific approach.

Clients frequently don’t know alternatives exist. They may have observed a focus group once, seen good results, and be asking for that approach because they don’t realize there are others. It’s also typical that there is an unstated goal to expose a particularly difficult stakeholder to direct market feedback, in which case a focus group may be the right choice, even though other indicators would suggest in-depth interviews.

A Researcher’s Confession

by Diane Hagglund

I admit it.  Although web surveys are one of our most popular research offerings, I strongly prefer qualitative research (focus groups, in-depth interviews).

I can’t help looking at the findings in web surveys and feeling like I want to dig in and ask more questions. The participants say things that don’t make sense to me and I want to know why. Or they make short comments responding to open-ended questions that leave me with a dozen follow-on questions I don’t have the opportunity to ask.

When I present qualitative findings, I know I can answer any question that comes up with complete confidence.  With quantitative findings, I always know that there will be questions where the answer is, “We can’t draw any conclusions based on this survey.”

This came up again earlier this month: We just did a series of in-depth interviews where we wanted to understand perceptions about the cost of various alternatives. All the participants in this study had identified themselves as product decision makers who had full visibility into costs – a requirement for the study. If you had just looked at the first level of answers people gave, you would have thought that one of the tools we were looking at was very inexpensive compared to other options.

But because this was an in-depth interview, I got to ask that all-important “why” question. I quickly realized that while all the participants had been educated on the “line 3” costs that were billed directly to their organization, not everyone was aware of the additional “line 10” costs that had to be added to support this different approach.  When you added both of those up, the tool that originally appeared less expensive turned out to have a similar TCO to other options.

Now, it’s true that we could have found this out by writing a good web survey, but one of the secrets to writing great web surveys is to know the answers to all the questions first. We continue to recommend web surveys as good vehicles for quantifying concepts that you already know well, when you want to put an accurate percentage by each of the options. This is a valuable thing to do, especially for market sizing, external marketing, and PR purposes.

But for finding out the answers that you don’t already know, start with qualitative research – and by all means follow up with a web survey to put percentages in place once you know which statements to attach them to.

Phone Surveys or In-Depth Interviews: What’s the Difference?

by Diane Hagglund

A while back we blogged about phone surveys, and argued that they do not have a place in technology market research. Several comments were made both in the blog comments and to us directly that made us realize that there is confusion about the difference between phone surveys and in-depth interviews (also called IDIs). Here’s how we view this:

Phone surveys and in-depth interviews are similar because:

  • Both are conducted over the phone.

But that’s pretty much the only similarity.

The differences between phone surveys and in-depth interviews:

Phone surveys: A quantitative research method that involves a large number of participants.

  • Capture input to a common set of questions with pre-set answer options
  • Administered by a phone survey professional who has a pleasant voice and is trained not to guide the responses of the participants in any way
  • Are usually short: 5-20 minutes as a rule of thumb
  • Usually no incentive is given to participants, although occasionally a small amount is offered.

In-Depth Interviews (IDIs): A qualitative research method that uses a smaller number of carefully selected participants.

  • Screener is written to allow significant discovery, open-ended questions, and drill-down
  • Administered by a trained moderator who is versed in the client’s business, the goals of the research, the topic of study (in our case technology), as well as techniques for putting the participant at ease and getting the kind of feedback desired
  • Usually longer: 30 minutes to 1 hour is common
  • Typically generous incentives are given to participants to compensate them for the time commitment

An easy way to think of the differences is that phone surveys are pretty much like web surveys, but conducted over the phone. Given these definitions, we stick by our earlier recommendation that phone surveys have no place in technology market research. They are significantly more expensive to conduct than web surveys without adding value in a corporate IT study (although we agree that there are audiences for which phone surveys are appropriate).

That said, there is nothing inherently wrong with phone-based research. In fact, in-depth telephone interviews have become one of our most valuable research methodologies, particularly as clients become more global and travel costs are becoming more of a factor in evaluating research budgets.

Market Research: Listen Live or Wait For the Report?

by Diane Hagglund

One of the real strengths of focus groups – in person or online – is the opportunity for a bunch of people to see a live discussion, and even ask a few additional questions.  (Yes, Dimensional Research always leaves a few minutes at the end of a focus group session for the observers to ask a follow-on question or three.) 

It can be extremely powerful to expose people who work in corporate roles and don’t get out into the field – marcom managers, R&D, developers, etc. – to direct customer and prospect feedback. Often operational people (finance, legal) pick up something important by watching their target market discuss their jobs.

Dimensional Research always encourages as many people as possible to listen in on focus groups, or to watch the videos that we record when they’re done.

 However, sometimes “listening in” is a bad idea. It basically boils down to this:  If you only see part of a project, don’t assume that’s all there is. There is a reason why you conduct 8 focus groups, not just one.  Or why you conduct 25 interviews, not just 3 or 4.

Don’t let these scenarios happen to you:

  • Attend two focus groups in New York and project that experience onto Chicago, Paris, Singapore, and Tokyo.
  • Listen in on only one call of a 20-call interview project.

If you decide to listen in on market research, I strongly recommend the following:

  1. Do read the final report and attend the presentation of the report.  You might as well enhance your limited experience with the full power of the overall project.
  2. Don’t attend just one focus group or listen in on just one call!

Real life example

We recently conducted a series of 15 customer interviews about a client’s new initiative. It has been progressing for about a year and they wanted to know what messages their customers had absorbed. We spoke to 15 of their very best customers – the kind who spend lots of money every quarter, attend the user groups, and give references. It was a good study, and very helpful in finding out what parts of the new initiative were gaining traction and what parts needed even more evangelism.

In one of the 15 interviews, the participant absolutely “got” it. He could have given the company’s pitch, both the current reality and the vision, with no problem at all. It was delightful. However, he was the ONLY one of the 15 participants who did that. The rest of the participants clearly struggled with some of the visionary aspects of the messaging. As luck would have it, that was the only interview that one of the project stakeholders listened in on. Unfortunately, during the report presentation he kept interrupting to talk about how the market “really got it.” We had to very strongly emphasize that the whole project needed to be considered – not just this one guy. The company had plenty of work to do to reach their entire customer base. They were not done.

How Many Market Research Participants Do I Need?

by Diane Hagglund

The answer is “enough to represent your market.” This number varies significantly according to the type of market research you’re conducting.

Web Surveys

For Web surveys this is a pretty straightforward question to answer. You will use quantitative methods to determine a sample size.

There are standard ways to calculate statistical validity, and a very easy-to-use calculator and description of the underlying statistics can be found here. You can use it to determine how many people you need to respond to a quantitative study in order to get results that reflect your target population.

You will need to know the population size of your audience – the total number of people in the group your sample represents. Even if you don’t know the exact population size, this is not really a problem. The mathematics of probability allows you to make a pretty good estimate as long as you have some basic idea of the scale. The number of participants needed does not scale linearly: as your audience gets much larger, your required sample size doesn’t increase very much.

For example, to get results accurate to within ±5% at a 95% confidence level, you need:

Population    Sample
100           80
1,000         278
10,000        370
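
If you want to sanity-check those numbers yourself, they follow from the standard sample size formula (a normal approximation with a finite population correction). Here is a minimal sketch in Python, assuming a 95% confidence level, a ±5% margin of error, and the usual worst-case response proportion of 0.5; the function name is just for illustration:

  import math

  def sample_size(population, z=1.96, margin=0.05, p=0.5):
      # Sample size for an effectively unlimited population
      # (z = 1.96 for 95% confidence, margin = +/-5%, p = 0.5 is the worst-case assumption)
      n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
      # Finite population correction
      return math.ceil(n0 / (1 + (n0 - 1) / population))

  for population in (100, 1000, 10000):
      print(population, sample_size(population))
  # Prints: 100 80, 1000 278, 10000 370 - matching the table above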

 

Qualitative Studies – In-depth Interviews and Focus Groups

But enough statistics homework for now. 🙂 The question that is less straightforward to answer, and so gets asked all the time, is: how many participants do I need for a qualitative study, such as focus groups or a series of in-depth interviews?

The first thing to do when answering this question is to figure out segmentation. How many types of participants do you need to represent? This can include verticals, countries, roles, years of experience, customers vs. prospects vs. competitor customers vs. partners, and so on.  The most common segmentation when working with Corporate IT is to have two groups: “technology decision-makers” and “economic buyers”.

Once you figure this out, and map any overlap between these areas (for example, partners may also be end-users), you’re ready to go.

In my experience, with corporate IT you need about 8-10 participants of each “type,” with a minimum of 10 participants overall, to produce a valid study. The only exception is studies with competitors’ customers, where you typically need more participants.
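
To make that arithmetic concrete, here is a small illustrative sketch of the rule of thumb in Python. The function name, the 1.5x allowance for competitor-customer segments, and the example segment list are my own assumptions for illustration, not a Dimensional Research formula:

  def qualitative_study_size(segments, per_segment=10, minimum=10, competitor_factor=1.5):
      # Roughly 8-10 participants per segment "type", with extra coverage for
      # competitor-customer segments (the 1.5x factor is an assumption)
      total = 0
      for name, is_competitor_customers in segments:
          total += round(per_segment * (competitor_factor if is_competitor_customers else 1))
      return max(total, minimum)

  # Example: the common corporate IT segmentation plus a competitor-customer group
  segments = [("technology decision-makers", False),
              ("economic buyers", False),
              ("competitor customers", True)]
  print(qualitative_study_size(segments))  # 35 under these assumptions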

Note that most market research companies, including Dimensional Research, base pricing for qualitative work on the number of participants, so doing a good job right-sizing your project will give you the most bang for your buck.

Message Validation: Market Research with Clear ROI

by Diane Hagglund

One of my favorite market research projects is message validation. When we go and get direct feedback on high-level product messages, I always know we are going to learn something very important that will dramatically improve the results of marketing programs. Hard ROI is easy to demonstrate when your marketing programs get twice (or more) the response from the same spend, because your message is more compelling!

So how does message validation work?  We evaluate your messages – the benefits and features that you give your PR department, your marketing programs team and your agencies – and break them into digestible chunks.  We set up one-on-one interviews with both customers and prospects (messages are experienced in isolation so this is not appropriate for focus groups).  And we walk through a systematic process of vetting the messages. 

We never ask, “What do you think?”  That doesn’t force participants to share what is really going through their heads.  You need to go deeper.  Instead of “what do you think?” you must ask, “what is MOST compelling?” and “what is LEAST compelling?”.  Do that for each area of messaging that you have: pain, benefits, features, positioning paragraph, etc.  Specifically prompt for good and bad  – and always ask WHY.  The “why” helps you know if you need to throw out some messages, use different words to express the same idea, re-order the importance, or segment your messaging.

What is interesting when you do this is how small changes can make a huge, measurable difference to your programs. On the first message validation project I did in my career, we tested four main messages that the marketing team had been using for outbound emails and direct marketing with uninspiring results. The client knew that if they could get that 1-hour meeting, the prospect was hooked, but the programs just weren’t generating that level of response.

We tested the messages as part of an overall research project, and found that 3 out of the 4 messages resonated extremely well with everybody.  But that fourth message was different. Two out of our three target segments not only didn’t find it compelling, they actually found it insulting and frustrating. 

That said, it wasn’t that the fourth message was bad across the board.  The final target segment really liked the fourth message, even though it was actually alienating the majority of the market!  The client reserved that fourth message for a campaign targeted to that specific audience.

Solid, compelling messages about your product are the basis for a successful marketing program. Message validation helps you fine-tune your messaging to ensure maximum effectiveness of your communications with customers and prospects – a great cost saver.  

Bonus Tip:  If you’re looking for budget to fund a market research project, consider adding a message validation component and using marketing program budget!

In-Depth Interviews, Focus Groups, or Both?

by Diane Hagglund

When doing qualitative research, we need to decide which is right for the client: in-depth interviews, focus groups, or maybe a combination of both. This depends on the client’s goals.

When are focus groups better?

Dimensional Research recommends focus groups when the client wants to gain multiple perspectives in an interactive group setting.

One of the main benefits of focus groups is that they get the participants brainstorming. When one participant’s comment feeds off of another comment and so on, the group can really dig deep into an issue. When trying to evaluate market acceptance, capture challenges and issues, or understand objections to new technologies or processes, the focus group dynamic is ideal.

Focus groups have another great benefit – the client can sit behind the glass or on a conference call and hear the direct, unfiltered feedback of a large number of participants with no distractions. Focus group sessions are also recorded for further observation.  If your goal is to expose the maximum number of your team to direct input from the market, this is a very efficient way to do it.

When are interviews better?

In-depth, one-on-one interviews with technology professionals can be conducted in person or over the phone. These are appropriate when the client wants to identify detailed perceptions, opinions, beliefs, and attitudes.

In-depth interviews are particularly effective in the following scenarios:

  1. When the client’s goal is to capture feedback on experiences that occur in an isolated way, such as product messaging or product usability testing.
  2. When there is anything sensitive about the feedback that participants may not feel comfortable sharing in front of other people. They may be concerned a competitor is also attending the focus group. Or in a win/loss analysis, a customer may not feel comfortable sharing details of poor account manager performance if they suspect the rep might be “behind the glass” and able to hear them. The customer would feel much more comfortable sharing this information confidentially, talking only with the researcher.

Two things that should NOT drive this decision are:

  • Travel – Sometimes the decision is driven by the geographical locations of participants. If your customers are spread across diverse locations, or if you want global representation without the cost of travel, you still have a choice. The one-on-one nature of phone interviews makes them an easy option, or for the group dynamic, choose online focus groups.
  • Cost – An important consideration, of course, is cost. As a rule of thumb focus groups and interviews cost about the same per participant, so with the exception of travel – not a consideration for phone interviews or online focus groups – cost should not be the driving consideration in choosing the research approach.
