Dimensional Research

Technology Market Research



Read the Dimensional Research Blog

Bad Surveys: Participants Know When Research is Bad!

by Diane Hagglund

A friend of mine who is not involved in the research business sent me this email yesterday.

Hi Diane.

I attended a conference this week and was sent this survey to fill out.

Did you watch the playback of the keynote session?
( )Yes. It was excellent.
( )Yes.  It was OK.
( )No, but I plan to.
( )No, and I am not interested.

What I want to know is where is the response “Yes and it sucked!”

Of course this was a bad survey question, which we’ve talked about before  (Bad Survey Design; Bad Web Survey Question).

But the broader point isn’t just that the research is bad, it’s that THE PARTICIPANTS KNOW! They can tell when their answer isn’t on the list. If their opinion isn’t represented, they either won’t complete the survey or will be annoyed that they couldn’t answer realistically.

My friend didn’t continue taking the survey.

Web Surveys: How Many Questions Can I Ask?

by Diane Hagglund

There is no “right” number of questions to ask in a Web survey, but here are a few guiding principles.

You can include a lot of easy questions

It takes very little thought to answer questions about your gender, age, and country of residence. It may take a second more to consider whether you are willing to share your total household income. And you know really fast whether you have bought a particular product or called a vendor’s tech support in the past week.

Just keep the options simple. For example, listing every possible country in the world in a drop-down list is complicated, so consider asking about regions instead if the project goals allow. It also helps to put these kinds of easy questions on a single page so participants don’t spend time waiting for pages to load.

Use rating and ranking questions with caution

MarketTools did a great webinar a while back. (I can’t find the link, unfortunately; if someone recognizes this study, please send it to me and I’ll add the link.) They analyzed the behavior of thousands of survey takers to see where drop-off happened. The glaringly obvious finding was that rating and ranking questions (those huge matrices with lots and lots of boxes to check) make survey takers go away.

Obviously there are times when research goals demand a rating/ranking approach, but do your best to limit your use of those types of questions when another approach will work. A checkbox series may be simpler to complete even though it actually contains more questions. Minimizing the number of options in a rating/ranking question also helps.

Consider the motivation of your audience

Customers who are giving feedback on a product they use on a regular basis will have a lot of tolerance for a large number of questions, and in fact may welcome the opportunity to answer questions that will simplify their use of your product in the long term.

We’ve done very successful customer surveys that offered no reward and took 20-30 minutes to complete. The audience was invested in the topic and engaged. On the other hand, we often do Web surveys to gauge the attitudes of IT professionals. In this case we offer a copy of the final report. We keep these surveys to fewer than 20 questions, with at most one rating/ranking question and one open-ended question. It takes a typical IT professional about 3-5 minutes to complete one of these surveys, which seems to match their motivation level for getting the report.

The Metric That Matters: Time to Complete

If your metric for building surveys is number of questions, you can easily end up with a complicated survey that will give terrible results because it’s confusing for participants and impossible to analyze.  Instead track the time to complete the survey.

Get somebody who has not been involved in the development but knows the topic to take the survey, and see how long it takes them. You might be surprised to learn that your “3 minute survey” actually takes 15 minutes. It’s much better to find that out before you field the survey than later, when nobody completes it, or you get comments about survey length instead of value-added feedback.
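To make the “time to complete” check concrete, here is a minimal Python sketch (my own illustration, not from the original post) that estimates survey length from a handful of pilot completions. The timestamps are purely hypothetical placeholders.

    # Rough sketch: estimate survey length from a few pilot completions.
    # The timestamps below are hypothetical placeholders, not real survey data.
    from datetime import datetime
    from statistics import median

    FMT = "%Y-%m-%d %H:%M"
    pilot_runs = [
        ("2021-03-01 10:02", "2021-03-01 10:16"),
        ("2021-03-01 11:30", "2021-03-01 11:47"),
        ("2021-03-02 09:15", "2021-03-02 09:28"),
    ]

    minutes = [
        (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds() / 60
        for start, end in pilot_runs
    ]
    print(f"Median completion time: {median(minutes):.0f} minutes")

If the median comes back at 15 minutes, your “3 minute survey” needs to be cut down before it goes into the field.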

A Researcher’s Confession

by Diane Hagglund

I admit it.  Although web surveys are one of our most popular research offerings, I strongly prefer qualitative research (focus groups, in-depth interviews).

I can’t help looking at the findings in web surveys and feeling like I want to dig in and ask more questions. The participants say things that don’t make sense to me, and I want to know why. Or they make short comments in response to open-ended questions that leave me with a dozen follow-on questions I don’t have the opportunity to ask.

When I present qualitative findings, I know I can answer any question that comes up with complete confidence.  With quantitative findings, I always know that there will be questions where the answer is, “We can’t draw any conclusions based on this survey.”

This came up again earlier this month: We just did a series of in-depth interviews where we wanted to understand perceptions about the cost of various alternatives. All the participants in this study had identified themselves as product decision makers with full visibility into costs – a requirement for the study. If you had just looked at the first level of answers people gave, you would have thought that one of the tools we were looking at was very inexpensive compared to other options.

But because this was an in-depth interview, I got to ask that all-important “why” question. I quickly realized that while all the participants had been educated on the “line 3” costs that were billed directly to their organization, not everyone was aware of the additional “line 10” costs that had to be added to support this different approach.  When you added both of those up, the tool that originally appeared less expensive turned out to have a similar TCO to other options.

Now, it’s true that we could have found this out by writing a good web survey, but one of the secrets to writing great web surveys is to know the answers to all the questions first. We continue to recommend web surveys as good vehicles for quantifying concepts that you already know well, when you want to put an accurate percentage next to each of the options. This is a valuable thing to do, especially for market sizing, external marketing, and PR purposes.

But for finding the answers that you don’t really know, start with qualitative research – and by all means do a web survey next to put those percentages in place, once you know which statements to attach them to.

Three Signs That You Are NOT Ready For A Web Survey

by Diane Hagglund

We hope that we’ve made it clear in this blog that Dimensional Research is a big fan of Web surveys. Web surveys are a great market research tool. They make it easy to get immediate results right at your fingertips.

However, you need to have a certain level of knowledge before you can run an effective Web survey.  Here are three signs that might indicate you are not ready for a Web survey, and you should do some qualitative research first.

  1. If you write multiple-choice questions and are completely guessing the answers, you’re probably not ready for a Web survey.  Of course, you always need to have an “Other” option to cover the corner cases that you just didn’t think of, but your options must capture most of the likely responses to effectively quantify a finding.  Let’s admit it, survey takers will often pick a presented option that isn’t quite right rather than take the time to fill in an open-ended answer, so you can’t rely on “Other” to cover your lack of knowledge.
  2. If you’re asking a lot of open-ended questions, you’re probably not ready for a Web survey.  But what is “a lot”?  A good rule of thumb: no more than one open-ended question for every 20 survey questions – not including that important last question, “Is there anything else you’d like to tell us?”, that you put at the end of every survey.
  3. If you get only one shot at an audience, don’t waste it with an uninformed Web survey.  If you don’t completely understand a new market and have a participant list that you can use only once, mitigate the risk that you’ve gotten something wrong by doing a few interviews with a couple of list members, or by running some small trial surveys with more open-ended questions that you can use to shape a better Web survey.

Don’t waste a Web survey. If participants are taking the time to give you feedback, make sure you’re getting the most from them.  Don’t be scared to add some test surveys or interviews to a project schedule – the extra week or two will give you dramatically better results.

We just wrapped up a great project that started as a stand-alone Web survey. We needed to quantify some specific purchasing metrics, so the survey was clearly the right methodology.   But as we drilled deeper, we realized that we were making far too many guesses about actual purchase motivations.

We added a series of 15-minute customer interviews prior to the Web survey that gave us some great insights into the  scenarios that were driving behavior, and then developed a much better Web survey that gave crystal clear data about all scenarios. Most importantly, it allowed us to eliminate a group of customers that weren’t motivated by the conditions we were evaluating (although their behavior was similar) and would have skewed our data significantly if we hadn’t excluded their responses for certain questions.

Going into the final presentation, we were very glad that we’d done the interviews.  We knew our stuff cold and had the backup we needed to defend the results and ensure they were taken seriously – a must to influence the business outcome.

Web Surveys vs. Phone Surveys

by Diane Hagglund

One approach to market research that we haven’t talked much about in this blog is phone surveys.

Phone surveys are when a person with a nice voice calls people and asks questions from a script. Answers are recorded in a spreadsheet, and the final result is very similar to the graphs you’d expect out of a Web survey.

Phone surveys are different from in-depth interviews because the script is asked exactly as written, unlike an in-depth interview where you have a researcher who is a technology expert asking the follow-on questions needed to drill down into answers.

Dimensional Research usually doesn’t recommend phone surveys to our technology clients. Even for consumer marketing, phone surveys are becoming less useful according to Jay Leve of SurveyUSA: “There is [no] future for any form of telephone research that is predicated on the researcher being able to barge in at will and seize the respondent.”

Several more compelling reasons why phone surveys don’t work are outlined very nicely by Jeffrey Henning at Research Live. The ones that relate to market research for technology companies include:

– Expense of dialing. More and more phone surveys are done via cell phones, since more and more people (and this includes many technology startups) don’t use landlines. In the US, by law, you can only use automatic dialing for landlines, not for cellphones. Manual dialing is much more labor-intensive – and expensive.

– Online surveys eliminate the expense of data entry. The respondent to a web survey is, in effect, donating the data entry cost, as they select the appropriate choices and type in their answers. With a phone survey, you are paying a call center representative to transcribe each respondent’s replies.

– The visual medium of Web surveys lets you easily show people visual concepts, such as ads or core messages, and get their response. Surveys that require respondents to react to visual concepts can’t be conducted with phone surveys alone.

– Web surveys have the allure of confidentiality. People today feel more comfortable sharing information on the Web than answering the prying questions of a phone interviewer.

– People prefer Web surveys, because a Web survey can be done at the respondent’s convenience, rather than at the moment the phone interviewer happened to call.

Jeffrey does make a case for phone surveys in certain situations, but almost none of these situations are relevant to IT market research:

  • Major Account Research – In these cases we recommend in-depth interviews or online customer advisory boards. Why wouldn’t you take the opportunity to have a deep conversation with your biggest customers?
  • The Human Touch – This argument only works with Corporate IT if you are also knowledgeable about technology, so again, it’s better to do in-depth interviews.
  • Some People Aren’t Online – This is obviously not an issue for technology professionals.

Our recommendation, after years of doing market research with Corporate IT, is to avoid phone surveys when doing technology market research. Instead, use Web surveys, in-depth interviews, or a combination of both.

Bad Survey Design – Ouch!

by Diane Hagglund

Whenever we get a chance, we love to participate in market research done by other companies. Doing so gives us the opportunity to think about how we’re answering questions about the stuff we care about. Being on the “other side of the glass” is a great reality check.

We recently upgraded some important business software that we use here at Dimensional Research, and had the opportunity to do a follow-up survey on the experience.  We quickly discovered that it was a very poorly designed survey. The company wasted their time and energy (and ours too!) on a survey that was so badly designed, it will do nothing to help make things better next time.

Here are the things they did wrong:

1) To get the product installed, you first had to register, get a new license key, download the product, then run the installer. Four distinct steps – and we had problems with the first three! But the survey asked only about the step that actually worked – the last one. There was no opportunity to tell them that we had to register three times, that we ended up phoning somebody to get our license key, and that the first site we were sent to for the download didn’t have the right file.

2) They were clearly using the same exact survey for people who were first-time buyers and for those who were upgrading. Even though we purchased three years ago and our reasons for purchasing aren’t relevant anymore, we were required to answer questions about how we chose this vendor, etc.  Even if their lists are so bad that they don’t know we were an upgrade rather than a new purchase, a better designed survey would have given us the opportunity to skip new purchaser questions and continue onto the relevant product-related questions.

3) They did not ask any product-related questions aside from the installation. We’ve been using this product for three years now and they’ve never asked us any questions about the product – only about purchasing and installing the product. It definitely left us with the impression that all they care about is getting our renewal dollars.

4) There were no open-ended questions where we could tell them about our issues so that they could fix them, or where they could find out whether we were an isolated case or whether all of their customers have had the same problems.

Today’s fabulous, easy-to-use online survey tools are enabling a lot of bad survey behavior. We trust the readers of this blog will have better judgment!

Are you confident in anti-virus software?

by Diane Hagglund

If you’re not confident that your anti-virus software is keeping you safe, you’re not alone.

Dimensional Research recently completed a study on anti-virus and anti-malware software, sponsored by CoreTrace.  The 226 IT professionals who completed the Web survey reported that  Corporate IT believes the threat from malware is increasing, but they don’t have confidence in existing blacklisting approaches to protect them.  Key findings include:

  • 80% say threat from malware is increasing
  • 74% do not have confidence in blacklisting anti-malware
  • 66% concerned that blacklisting anti-malware is not effective on “day-zero” of a new attack
  • 50% concerned about the performance impact of scans
  • 80% say the idea of whitelisting is compelling, but only 9% report using whitelisting approaches to anti-malware

You can find press coverage of the report from:

  • DarkReading
  • eWeek Security Watch

A full copy of the report is available for download here.

New Research Available: Desktop Power Management

by Diane Hagglund

Dimensional Research completed a new study on Desktop Power Management, sponsored by KACE. I found this study particularly interesting because so much of the conversation about power management has focused on the data center, with very little discussion of the impact of leaving desktop computers, monitors, and laptops powered on when not in use.

The study reveals an interesting opportunity here since most participants (93%) think desktop power management can reduce costs, but only 10% are using a commercially purchased solution to do this. The press has been covering this report, with some of my favorite stories here:

  • InformationWeek’s bMighty (an interesting take for SMBs)
  • ZDnet
  • eWeek Europe

You can download a copy of the full report from: www.kace.com/resources/Desktop-Power-Management.

How Many Market Research Participants Do I Need?

by Diane Hagglund

The answer is “enough to represent your market.” This number varies significantly according to the type of market research you’re conducting.

Web Surveys

For Web surveys this is a pretty straightforward question to answer. You will use quantitative methods to determine a sample size.

There are standard ways to calculate statistical validity, and a very easy-to-use calculator and description of the underlying statistics can be found here. You can use it to determine how many people need to respond to a quantitative study in order to get results that reflect your target population.

You will need to know the population size of your audience – the total number of people in the group your sample represents. Even if you don’t know the exact population size, this is not really a problem. The mathematics of probability allows you to make a pretty good guess as long as you have some kind of basic idea. The number of participants needed does not scale linearly: as your audience gets much larger, your sample size doesn’t increase very much.

For example, to get a result with a 95% confidence level and a 5% margin of error, you need:

Population size    Sample size needed
100                80
1,000              278
10,000             370
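If you’re curious where those numbers come from, here is a minimal Python sketch of the standard sample size formula with the finite population correction. It is a generic illustration of the textbook calculation (assuming 95% confidence, a 5% margin of error, and a worst-case 50/50 split), not the specific calculator linked above, but it reproduces the same values.

    # Standard sample size calculation with finite population correction.
    # Assumptions: 95% confidence (z = 1.96), 5% margin of error, worst-case 50/50 split.
    import math

    def sample_size(population, z=1.96, margin=0.05, p=0.5):
        n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population sample size, about 384
        n = n0 / (1 + (n0 - 1) / population)        # finite population correction
        return math.ceil(n)

    for pop in (100, 1000, 10000):
        print(pop, sample_size(pop))   # prints 80, 278, 370 – matching the table above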

 

Qualitative Studies – In-depth Interviews and Focus Groups

But enough statistics homework for now. 🙂 The question that is less straightforward to answer, and so gets asked all the time, is: how many participants do I need for a qualitative study, such as focus groups or a series of in-depth interviews?

The first thing to do when answering this question is to figure out segmentation. How many types of participants do you need to represent? This can include verticals, countries, roles, years of experience, customers vs. prospects vs. competitor customers vs. partners, and so on.  The most common segmentation when working with Corporate IT is to have two groups: “technology decision-makers” and “economic buyers”.

Once you figure this out, and map any overlap between these areas (for example, partners may also be end-users), you’re ready to go.

In my experience, with Corporate IT you need about 8-10 participants of each “type,” with a minimum of 10 participants, to produce a valid study. The only exception is studies with competitors’ customers, where you typically need more participants.
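As a quick back-of-envelope illustration (my own arithmetic based on the rule of thumb above, using the two common Corporate IT segments as an example), the sizing works out like this:

    # Back-of-envelope sizing using the 8-10 participants per "type" rule of thumb.
    # Segment names are just the two common Corporate IT groups mentioned above.
    segments = ["technology decision-makers", "economic buyers"]

    low = max(8 * len(segments), 10)   # never fewer than 10 participants overall
    high = 10 * len(segments)
    print(f"{len(segments)} segments -> roughly {low}-{high} participants")   # 2 segments -> roughly 16-20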

Note that most market research companies, including Dimensional Research, base pricing for qualitative work on the number of participants, so doing a good job right-sizing your project will give you the most bang for your buck.

New from Dimensional Research and KACE: Survey on Windows 7 Adoption

by Diane Hagglund

KACE – the leading systems management appliance company and a fabulous client – just announced the results of a new Dimensional Research Web survey, which revealed that 84 percent of IT staff polled do not have plans to upgrade existing Windows desktop and laptop systems to Windows 7 in the next year, despite early enthusiasm from beta testers of the new operating system.

Other key findings include:

  • 84 percent of survey respondents have no plans to upgrade existing Windows desktops and laptops to Windows 7 next year
  • 72 percent indicated they are more concerned about upgrading to Windows 7 than staying with an outdated XP operating system
  • 50 percent revealed they have considered moving from Windows to an alternative operating system, and 27 percent of those cited Mac OS as the top alternative
  • Almost 60 percent of survey respondents do not presently have a tool in place that automates operating system migration
  • Economic factors, such as budget freezes and staff reductions, were cited as other reasons to not immediately adopt Windows 7

You can download a full copy of the report, as well as detailed information on the methodology used, at http://www.kace.com/resources/Windows-7-Adoption-Survey.

Here are a few links to some news coverage of the survey:

  • Computerworld, Gregg Keizer
  • Fortune, Philip Elmer-DeWitt
  • The Register, Gavin Clarke

Copyright © 2008 - 2021 · Dimensional Research