Qualitative Research: What Every B2B Marketer Needs to Know

The numbers are lying to you. Not because they're wrong — but because they're incomplete.

Qualitative research tells you why buyers behave the way they do. It uses open-ended survey questions, small verified panels (10–15 people), and the principle of data saturation — not statistical significance — to surface insights that quantitative data can never reach. Done right, a qualitative survey takes 48 hours and costs far less than a bloated quant study. Done wrong, it produces noise, bias, and false confidence.

Here's what you need to know before running a single survey:

  • Primary method: Open-ended surveys with verified B2B panels
  • Recommended sample size: 10–15 respondents
  • Saturation point: 12–13 responses (per peer-reviewed research)
  • Recommended question count: 10 max; 15 absolute ceiling
  • Key risk: Leading questions, social desirability bias, survey fatigue
  • When to use: Before quant (hypothesis generation) and after (closing the loop)

Qualitative vs. Quantitative: Two Different Jobs

Quantitative research tells you the what — it measures, counts, and compares. Qualitative research tells you the why — it reveals motivations, language, and the reasoning behind decisions. Both matter, but they're not interchangeable, and most B2B teams use them in the wrong sequence.

Quantitative data is gathered through closed-ended questions: multiple choice, Likert scales, multi-select. It's fast, scalable, and objective — but it's limited to the hypotheses you already have. Qualitative data is gathered through open-ended questions in surveys, interviews, and focus groups. Respondents use their own words, surface their own priorities, and frequently reveal things you didn't know to ask.

Why most B2B teams get this backwards

The typical mistake is running A/B tests and preference polls before you actually understand what your buyers think, feel, and care about. You end up optimizing the wrong things with great statistical confidence. Start with qualitative to understand the landscape, then use quantitative to validate specific choices — which headline, which value proposition, which message.

Qualitative is also how you close the loop after you've launched something. Is the new feature causing confusion? Is the messaging landing the way you expected? Qualitative isn't just for the beginning of the research process. It's for every stage.

Why B2B Marketers Underinvest in Qualitative Research

After doing hundreds of demos over the last few years, I've come to the conclusion that most B2B SaaS marketing leaders don't understand how qualitative research works or what, specifically, it's useful for. They're quietly skeptical about the sample sizes. They've been trained to turn to analytics or attribution tools instead.

The other problem is "capital R Research" thinking — the idea that research is big, costly, takes a long time, and has potentially huge consequences, so we need to align all stakeholders before beginning. This assumption is wrong on many levels. Replace capital R Research with lowercase r research: instead of huge studies that take months, run lots of small surveys with 15 or 30 people that only take a day or two. Think about your most important business questions right now. Go launch a research study this week.

The cost of living in Marketingland

We ran a B2B SaaS industry research study with marketing leaders — CMOs and VPs of Marketing. When asked how often they conduct qualitative research into target customers, the answer was: not nearly often enough. The top reasons were the difficulty of reaching target customers and the time it takes. The result is that most marketers end up living in Marketingland — far removed from the people they're meant to understand and reach.

Winners and losers have the same goals. It's not goal setting that determines the outcome; the outcome is the sum of your decisions and consistent execution. The quality of your decisions comes from the quality of your information — and the best information is high-fidelity: unbiased, straight from the source. Where have you substituted your own judgment for what customers actually want?

What Qualitative Surveys Are Actually Good For

Numbers tell you what's happening. Qualitative research tells you why it's happening. Without the why, you're guessing. Once you understand why your ICPs behave the way they do, it has massive implications for how you go to market — your messaging, your positioning, your product priorities.

Most marketing data is noise. Qualitative gets you the signal. Revenue comes from relevance: if you don't understand your ICP, you'll waste time and money building things they don't need or want. Most marketers lack creativity and guts, not data. The most valuable insights come from qualitative research, not quant data.

The specific things qualitative surveys unlock

Here's what qualitative surveys can surface that dashboards never will:

  • The exact language your buyers use to describe their problems — which is almost never the jargon on your website
  • The top pains your ICPs face and the symptoms they feel because of those problems
  • What triggers them to seek out a solution like yours
  • The channels they use to learn about your category
  • Who owns the budget and what the buying process actually looks like
  • What they think of you vs. competitors — and what they expect

If you have data on all of this, you're setting the agenda in every strategy conversation. You're not reacting — you're leading.

Survey Design: The Foundation of Good Qualitative Research

The questions you ask directly determine the quality of insights you get. Ask bad questions and you've wasted everyone's time — including your respondents'. Survey design is where most qualitative research either earns its value or destroys it.

Before you write a single question, define your objective. What do you need to learn? Your objective could be broad ("I want to understand how our messaging resonates") or specific ("I want to know if buyers understand our pricing page"). The objective informs every question. If you can't articulate how you'd use the answer to a question, don't ask it.

Keep surveys short: 10 questions max, 15 absolute ceiling

Survey fatigue is real and it kills data quality. The longer your survey, the more respondents rush, drop off, or give low-effort answers near the end. For qualitative surveys, keep your question count to 10 or fewer. If your topic genuinely requires more depth, 15 is the absolute maximum — and every question beyond 10 needs to earn its place.

The goal isn't to collect everything you're curious about in one sitting. It's to get the highest-quality answers to your most important questions. Run more frequent, smaller surveys rather than one long exhaustive one. Five focused questions answered thoughtfully beat fifteen questions answered carelessly.

Open-Ended vs. Closed-Ended Questions

Open-ended questions are the engine of qualitative research. They allow respondents to answer in their own words, surface priorities you didn't anticipate, and volunteer information you didn't know to ask for. People will tell you things you didn't even know were a prevalent issue — and sometimes those things are exactly what's driving buyers to your competition.

Closed-ended questions — yes/no, multiple choice, rating scales — belong in quantitative research. In a qualitative survey, they shut down the conversation before it starts. The difference isn't stylistic. Closed questions confirm hypotheses you already have. Open questions surface ones you didn't know to form.

What this looks like in practice

Take the question "Are you satisfied with our product?" A closed-ended version gets you a yes or a no — nothing you can act on. The open-ended version — "What aspects of the product most affect your satisfaction?" — gets you a specific, detailed answer like: "Your 'Notify when available' button doesn't always work when customers click it, and I miss out on quite a few recovered carts." One version closes the door. The other opens it.

The same logic applies everywhere. "Did you find what you were looking for?" tells you nothing. "What were you looking for and what made it hard to find?" surfaces friction you didn't know existed. "Would you recommend us?" gives you a shrug. "What would need to change for you to recommend us confidently?" gives you a roadmap.

What Makes a Good Survey Question

Good qualitative survey questions share four properties: they're open-ended, specific enough to draw out detail, focused on real past behavior rather than hypotheticals, and written with no hint of a preferred answer. Before writing any question, ask yourself: how would I use this data? If you don't have a clear answer, cut the question.

Focus on real-life scenarios, not hypotheticals. "Tell me about the last time you evaluated a tool like ours" produces more accurate and useful data than "What would you do if you were looking for a tool like ours?" Memory of actual behavior, even imperfect, beats speculation every time.

Question quality checklist

Before including any question in your survey, run it through this filter:

  • Is it open-ended?
  • Does it ask about real past behavior, not hypotheticals?
  • Could I use the answers to make a specific decision?
  • Is it free of any implied "right" answer?
  • Is it focused on one thing, not two or three at once?

If any answer is no, rewrite or cut.

How to Avoid Leading Questions

Leading questions are one of the most common and most damaging mistakes in qualitative research. They signal a desired answer and contaminate your data — and as a marketer who wants validation, your questions will naturally drift this way unless you actively guard against it.

A leading question steers the respondent toward a particular answer, either by framing the premise favorably or by implying that a certain response is expected. Leading questions often sound completely reasonable, which makes them hard to spot without deliberate review. Have someone outside your team look at every question before you send the survey. Fresh eyes catch leading framing that you've become blind to.

How to rewrite leading questions

The fix is almost always the same: replace evaluative framing with behavioral framing. Instead of "What did you find most helpful about our onboarding?" ask "Walk me through your onboarding experience." Instead of "Would you say our platform is easy to use?" ask "How would you describe using the platform day-to-day?" Instead of "How much has our tool improved your workflow?" ask "What, if anything, has changed about your workflow since you started using the tool?"

The pattern is consistent: leading questions assume a positive outcome and ask the respondent to confirm or describe it. Neutral questions ask the respondent to describe their actual experience, without signaling what that experience should have been.

Biases That Corrupt Qualitative Survey Data

Social desirability bias is the biggest threat to qualitative survey quality. Respondents tell you what they think you want to hear. When asked whether they had trouble with your product, they'll say "no" — even when they struggled — because they want to be helpful or avoid seeming critical. The fix: ask about specific moments and behaviors rather than general evaluations. "Tell me about the last time you tried to do X" gets more honest data than "Was X easy to use?"

Confirmation bias lives on the researcher side. You unconsciously design questions and interpret answers in ways that confirm existing beliefs. Have multiple people analyze the data independently before comparing notes — especially people who weren't involved in writing the survey.

Four more biases to watch for

  • Recall bias: People misremember. They smooth over frustrations and reconstruct timelines in ways that feel coherent but aren't accurate. Behavioral questions ("what did you do") outperform reflective ones ("what do you think about").
  • Acquiescence bias: Some respondents agree with almost any statement. Open-ended questions reduce this risk significantly — another reason to default to them.
  • Moderator influence: In surveys with follow-up questions, your tone and phrasing can nudge answers. Stay neutral and consistent across all respondents.
  • Framing effects: "What do you dislike about X" and "What could be improved about X" produce different responses even though they're asking the same thing. Choose your framing deliberately and stick to it.

Survey Fatigue: The Silent Data Killer

Survey fatigue happens when respondents become tired, bored, or disengaged during a survey — and it shows up in your data as shorter answers, skipped questions, rushed responses, and higher dropout rates. It's one of the most underestimated threats to qualitative data quality.

The primary cause is length. Beyond 10 questions, answer quality drops noticeably. Beyond 15, you're in junk data territory for most respondents. But fatigue isn't only about length — it's also about cognitive load. Dense, complex questions, too many open-ended questions in a row, and poorly sequenced questions all accelerate fatigue even in short surveys.

How to design against survey fatigue

Start with your easiest, most engaging questions and build toward the harder ones. Put your most important open-ended questions in the first half of the survey, where attention and effort are highest. Keep individual questions focused on a single thing — double-barreled questions ("How easy was it to use, and did it solve your problem?") are cognitively taxing and produce muddled answers.

Be honest with respondents about time commitment upfront. "This takes 5 minutes" sets expectations and reduces abandonment. And send surveys close to the experience you're asking about — the fresher the memory, the less effort required to answer, and the more accurate the response.

Sample Size for Qualitative Research: The Saturation Principle

The most misunderstood thing about qualitative research is how many responses you actually need. I consistently come across people unfamiliar with sample size requirements for qualitative research — they assume it needs statistical significance like a quantitative test. This assumption is wrong, and it's one of the main reasons teams don't do enough qualitative research.

The methodological principle that governs qualitative research isn't statistical significance. It's data saturation — the point at which new responses no longer provide new insights or themes. The industry standard is that it takes 12–13 responses to reach saturation. Whether you survey 13 or 130 people, the number of insights and themes you get is largely the same.
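If you code responses into themes as they arrive, saturation is straightforward to operationalize: check whether each new response contributes any theme you haven't seen before, and stop once several consecutive responses add nothing new. Here's a minimal sketch of that logic — the function name and the three-response stopping window are illustrative choices, not from any standard:

```python
def saturation_point(coded_responses, window=3):
    """Return the 1-based index of the last response that added a new
    theme, once `window` consecutive responses have added nothing new.
    Returns None if saturation was never reached.

    coded_responses: list of sets of theme labels, one set per
    respondent, in the order responses were collected."""
    seen = set()
    no_new_streak = 0
    for i, themes in enumerate(coded_responses, start=1):
        new_themes = themes - seen
        seen |= themes
        if new_themes:
            no_new_streak = 0
        else:
            no_new_streak += 1
            if no_new_streak == window:
                # The last productive response came `window` responses ago.
                return i - window
    return None
```

In a study where responses 1 through 12 each surface something new and responses 13 through 15 only repeat earlier themes, this returns 12 — matching the saturation pattern described above.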

The research behind the 10–15 recommendation

A review of 23 peer-reviewed articles suggests that 9–17 participants can be sufficient to reach saturation, especially for studies with homogenous populations and narrowly defined objectives. Our recommendation: target 15 people as your sample size. The math on diminishing returns is clear — 5 users identify 85% of problems, 10 users find over 95%, and 15 users identify over 99%. Beyond that, you're paying for marginal gains that rarely change your conclusions.

Running qualitative research with more than 15 people provides little additional benefit but costs quite a bit more. Spend the extra budget on more studies, not more participants. One important caveat: just because 10 people in a 15-person study claim strong interest in X does not mean 66% of the overall population feels the same way. Qualitative is not designed for statistical generalization. It's designed for depth of understanding.
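The diminishing-returns percentages above follow the classic Nielsen–Landauer model, where the share of problems surfaced by n participants is 1 − (1 − L)^n, with L the probability that any single participant hits a given problem (L ≈ 0.31 in the original usability studies; treat that exact value as an assumption, not something from this article):

```python
# Share of problems surfaced by n participants under the
# Nielsen-Landauer model: 1 - (1 - L)^n.
# L is the per-participant discovery probability; 0.31 is the value
# reported in classic usability research, used here as an assumption.

def share_of_problems_found(n, L=0.31):
    return 1 - (1 - L) ** n

for n in (5, 10, 15):
    print(f"{n:>2} participants -> "
          f"{share_of_problems_found(n):.1%} of problems surfaced")
```

With L = 0.31 the model lands close to the figures cited: roughly 84% at 5 participants, about 98% at 10, and over 99% at 15 — which is exactly why respondents beyond 15 buy you very little.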

Who to Survey: Panel Quality Over Panel Size

Your panel is everything. For B2B qualitative research, you need verified participants who actually match your ICP — right job title, right seniority, right company size, right industry. 99% of what you get from large-scale generic panel companies is garbage for B2B. A tight, vetted survey panel of 15 beats a bloated dataset of 500 randoms every time.

Match your participant group to your objective. If you want to understand why someone didn't convert, survey people who recently didn't. If you want to understand churn, survey churned customers. If you want messaging feedback, survey your actual ICP — not people who vaguely resemble them. The wrong respondents produce confident-sounding insights that are completely useless.

Prioritize recency and specificity

Start with recent customers — they have the freshest memory of the decision-making process. Then progress to long-time customers to understand retention. Talk to people who said no as well as people who said yes. Churned customers are some of the most valuable research subjects you have access to, and most teams never talk to them.

Don't overlook your sales and support team as a source of buyer intelligence. They're on the front lines every day. What questions do they get asked most? What objections come up repeatedly? What confuses prospects right before they go quiet? Their pattern recognition across hundreds of conversations is qualitative data you already own.

A Practical Qualitative Research Methodology for B2B Teams

Most B2B teams run research reactively — when a campaign underperforms, when churn spikes, when the board asks why pipeline is soft. By then it's too late. The teams that consistently outperform run a structured, repeatable research practice that gives them an always-on pulse on what their market is thinking.

Here's the methodology that works.

Step 1: Start with ICP research

Before you test anything, you need to understand your buyers at a foundational level. Run an open-ended ICP survey that covers how your target customers buy products in your category, what their biggest problems and priorities are, what they think of the major players (including you), and what language they use to describe their pain. This is your research foundation. Everything else — your messaging, your positioning, your product roadmap — gets built on top of it.

Specifically, you want data on: the top three problems your ICPs face, the symptoms they feel because of those pains, what triggers them to start looking for a solution, the channels they use to learn about tools in your category, who owns the budget, and what the buying process looks like. If you have solid answers to all of this, you're setting the agenda. You're not reacting — you're leading.

Step 2: Run message tests on your key pages

Once you understand your ICP's language and priorities, test whether your messaging actually reflects them. Run message tests on your homepage and other high-traffic pages with your verified ICP panel. Ask them what they understand your product to do, whether the messaging addresses their actual problems, what's confusing or unclear, and what's missing. The gap between what you think you're communicating and what buyers actually receive is almost always larger than you expect.

You can't do messaging that resonates if you don't know your ICP's top pain points and don't know the exact words they use to describe their problems. It's rarely the typical jargon you see on B2B websites. Message testing is how you close that gap — not by guessing, but by asking.

Step 3: Test your pricing page specifically

Your pricing page is one of the highest-leverage pages on your site and one of the least-tested. Run a dedicated pricing page test with your ICP panel. Do they understand what's included at each tier? Does the value feel commensurate with the price? What's the first question that comes to mind after reading it? What would make them more likely to start a trial or book a demo?

Pricing page confusion is one of the most common and most fixable conversion killers in B2B SaaS. A single round of qualitative feedback from 15 verified ICPs will surface more actionable insight than months of heatmaps and session recordings.

Why You Need to Do This Every Quarter

In the world of SaaS, we operate in dog years, and ICP perceptions change with the wind. Analysis by Kieran Snyder showed that the share of cold email outreach mentioning AI went from 24% in early 2023 to 91% just 18 months later. That coincided with our own research finding that people are tired of AI-centered website messaging: making AI the focal point gets eye rolls, not signups. The lesson isn't about AI specifically. It's that you need an always-on pulse on what your ICPs are thinking, or you'll be perceived as "more of the same," irrelevant, or even tone deaf. Don't react. Anticipate.

Companies that do ICP research quarterly enjoy higher willingness to pay, see better funnel conversion, and grow 15–20% faster than everyone else. Yet few do ICP or primary market research. When I ask what's holding teams back, the answers are always the same: no reliable access to ICPs, it takes too much time, it's a hassle to get responses, and it's hard to prioritize over the fire of the day.

What a quarterly research cadence actually looks like

A sustainable quarterly cadence doesn't require months of planning or large budgets. Each quarter, run one ICP pulse survey to check for shifts in priorities, language, and sentiment. Run one message test on a key page — rotate through your homepage, your product pages, and your competitive positioning over the course of the year. And if pricing is a conversion lever you're working on, run a pricing page test at least once every two quarters.

Each study takes 15 respondents and can return results in under 48 hours. Four studies a year at that scale cost a fraction of what a single misaligned campaign wastes in spend. The math is obvious. The discipline to build the habit is what separates the teams that stay ahead of their market from the ones that are always catching up.

The Right Order for Research

Start with qualitative. Before you have hypotheses worth testing, you need to understand the landscape — who your buyers are, what their biggest problems are, what language they use, what triggers a purchase decision. You can't design good quantitative research without this foundation, and you can't write messaging that resonates without knowing your ICP's exact words.

Once qualitative has surfaced your hypotheses and given you messaging directions, quantitative helps you choose between specific options. Which headline? Which value proposition? A/B tests and preference tests are useful here — but only after qualitative has given you something worth testing. Then go back to qualitative to close the loop after launch.

What to find out in your first qualitative study

If you're starting fresh, these are the questions your first qualitative survey should answer:

  • What are the top three biggest problems my ICPs face?
  • What symptoms do they feel because of those pains?
  • What triggers them to start looking for a solution like mine?
  • What channels do they use to research tools in my category?
  • Who else is involved in the buying decision, and who owns the budget?
  • What does my ICP think of us versus the alternatives?

If you have solid data on all of this, you're setting the agenda in every strategy and messaging conversation. That's what high-fidelity buyer intelligence looks like in practice.

Frequently Asked Questions

What is a good sample size for qualitative research?

10–15 respondents is sufficient for most B2B qualitative studies. Data saturation — the point where new responses stop producing new insights — occurs at 12–13 participants on average, based on a review of 23 peer-reviewed studies. Adding more participants adds cost without adding insight.

How many questions should a qualitative survey have?

No more than 10 questions for most surveys. 15 is the absolute maximum. Beyond 10 questions, survey fatigue sets in and answer quality drops. Prioritize your most important open-ended questions in the first half of the survey.

What's the difference between qualitative and quantitative research?

Quantitative research tells you what is happening — it measures and counts. Qualitative research tells you why it's happening — it reveals motivations, language, and reasoning. Use qualitative first to generate hypotheses, then quantitative to validate specific choices.

How do you avoid leading questions in surveys?

Write questions that are neutral in framing and focused on past behavior rather than general opinions. Have someone outside your team review every question before sending. Replace evaluative questions ("Was it easy?") with behavioral ones ("Walk me through how you used it").

What is data saturation in qualitative research?

Data saturation is the point at which new survey responses stop producing new themes or insights. It typically occurs around 12–13 responses in B2B qualitative studies with well-defined objectives and homogenous populations. It's the methodological principle that replaces statistical significance in qualitative work.

How often should B2B companies run qualitative research?

Quarterly. ICP perceptions shift faster than most teams realize. Companies that run ICP research quarterly enjoy higher willingness to pay, better funnel conversion, and grow 15–20% faster than those that don't. A quarterly cadence of one ICP pulse survey, one message test, and one pricing or product page test is a sustainable and high-ROI research practice for most B2B SaaS teams.
