Your choice of sampling strategy can deeply impact your research findings, especially in qualitative studies, where every person counts.
There’s so much written on methods that it can sometimes feel overwhelming when you’re first discovering what’s out there. Even if you’re well into your research career, you may find yourself sticking with the same methodology again and again.
Many researchers focus on quantitative methodology. But they can greatly benefit from knowing qualitative methodology, too – for mixed-methods studies and for better understanding other researchers' work.
This article aims to help you dive into the most widely recognized qualitative sampling strategies, briefly and objectively.
- Your first step in choosing a qualitative sampling strategy
- A priori determination
- More flexible determination
Your first step in choosing a qualitative sampling strategy
So, where do you start when you know you need to do more than grab students walking by your office? One of the first and most important decisions you must make about your sampling strategy is defining a clear sampling frame.
The cases you choose for your sample need to cover the various issues and variables you want to explore in your research. A fundamental aspect of your sample is that it should always contain the cases most likely to provide you with the richest data (Gray, 2004).
Owing to time and expense, qualitative research often works with small samples of people, cases, or phenomena in particular contexts. Therefore, unlike in quantitative research, samples tend to be more purposive (using your judgment) than they are random (Flick, 2009). This post will cover those main purposive sampling strategies.
It’s also important to keep in mind that qualitative samples are sometimes predetermined – what’s known as a priori determination – and other times follow a more flexible determination (Flick, 2009).
So this article is organized based on those two parameters: a priori and more flexible determination.
And take note that in certain strategies it’s possible to start with a predetermined sample and end up extending it, or even varying it, for a valid reason.
Qualitative research is much more flexible than quantitative research. You iterate, you run another round, you seek saturation.
OK? Let’s see what’s on the qualitative menu. Hope you find something tasty.
A priori determination
Comprehensive (total population) sampling
Comprehensive (or total population) sampling is a strategy that examines every case or instance of a given population that has specific characteristics (e.g., attributes, traits, experience, knowledge) you’re interested in for your study (Gray, 2004).
This sampling strategy is somewhat unusual because it’s often hard to sample the entire population of interest.
When to use it
It’s ideal for studies that focus on a specific organization or people with such specific characteristics that it’s possible to contact the whole population that has them (Gray, 2004).
Basically, two aspects are key to using this method:
- the population is relatively small
- its members have uncommon characteristics
One example would be studying perceptions about leadership within a small company (e.g., 10–30 people), where your sample could easily be every employee within the company.
Pros
- Ideal for further analyzing, differentiating, and perhaps testing (Flick, 2009).
- Covering every case in a given population can build confidence in the validity of the results.
- Reduced risk of missing valuable insights.
Cons
- Only applicable to very specific studies, because it requires the targeted population to be small and have uncommon characteristics.
- Very limited potential for generalizability.
Practical example: Gerhard (as cited in Flick, 2009, p. 117) used this strategy to study the careers of patients with chronic renal failure. The sample was a complete collection of all patients with predetermined characteristics (male, married, age 30–50 years, at the start of treatment at five hospitals in the UK).
Note that for this particular study, sampling was limited to several criteria: a specific sex, disease, marital status, age, region, and a limited period.
These predetermined characteristics were what allowed the researchers to achieve a comprehensive (total population) sample.
Extreme/deviant sampling
Extreme/deviant sampling is intentionally selecting extremes and trying to identify the factors that affect them (Gray, 2004).
It’s usually used to focus on special or uncommon cases such as noteworthy successes or failures. For instance, if you’re conducting a study about a reform program, you can include particularly successful examples and/or cases of big failures – these are two extremes, which is where the “extreme/deviant” name comes from (Flick, 2009).
When to use it
It’s ideal for studying special/unusual cases in a particular context.
Pros
- Allows you to collect focused information on a very particular phenomenon.
- It’s sometimes regarded as producing the “purest” form of insight into a particular phenomenon.
- Lets you collect insights from two very distinct perspectives, which helps you understand the phenomenon as a whole.
Cons
- The danger of mistakenly generalizing from extreme cases.
- Prone to selection bias.
Practical example: Perhaps one of the most widely recognized studies that used this sampling method was Peters and Waterman’s In Search of Excellence: Lessons from America’s Best-Run Companies, first published in 1982.
The researchers chose 62 companies based on their outstanding (extreme) success in terms of innovation and excellence (Peters & Waterman, 2004).
Intensity sampling
Intensity sampling fundamentally involves the same logic as extreme/deviant case sampling, but with less emphasis on the extremes.
Cases chosen for an intensity sample should be information-rich, manifesting the phenomenon intensely but not extremely, thereby capturing more typical cases than those at the extremes (Patton, 2002; Gray, 2004; Benoot, Hannes & Bilsen, 2016).
When to use it
Patton (2002) argues that ideally, you should use this when you already have prior information about the variation of the subject you want to study. Some exploratory research might be needed depending on what you are researching.
Pros
- Great for heuristic research/inquiry (Patton, 2002).
- By choosing intense cases that aren’t extreme/deviant, you can avoid the distortion that extreme cases sometimes bring (Patton, 2002).
Cons
- Involves some prior information and considerable judgment. The researcher must do some exploratory work to grasp the nature of the variation in the situation they’re researching (Patton, 2002).
- It requires in-depth knowledge of the phenomenon being studied so as not to mix cases of sufficient intensity with those at the extremes (Patton, 2002).
Practical example: Researching above-average or below-average students would be a good fit for this sampling method, because such students experience the educational system intensely but aren’t extreme cases.
Maximum variation sampling
The maximum variation sampling strategy aims at capturing and describing a wide range of variations that cut across what you want to research (Patton, 2002; Gray, 2004). How can you guarantee that you capture a high level of variation?
You can start by defining the specific characteristics in which you’ll look for variation, ones that the literature (or you) identify as relevant to the phenomenon you’re researching. These might be education level, ethnicity, age, or socioeconomic status.
For small samples, having too much heterogeneity can be a problem because each case may be very different from the other.
But according to Patton (2002), this method might turn that weakness into a strength.
It does so by applying this logic: any common pattern that emerges from this kind of sample is of particular interest and value in capturing the core experiences and central, shared dimensions of a setting or phenomenon.
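To make that selection logic concrete, here’s a minimal sketch (the characteristics and values are hypothetical, and real maximum variation sampling relies on researcher judgment rather than an algorithm) of one way to approximate it: greedily pick the candidate least similar to those already chosen.

```python
def dissimilarity(a, b):
    """Number of characteristics on which two candidates differ."""
    return sum(a[key] != b[key] for key in a)

def max_variation_sample(candidates, size):
    """Greedily add the candidate most different from
    everyone already in the sample."""
    sample = [candidates[0]]
    while len(sample) < size:
        remaining = [c for c in candidates if c not in sample]
        best = max(remaining,
                   key=lambda c: min(dissimilarity(c, s) for s in sample))
        sample.append(best)
    return sample

# Hypothetical pool of candidates described by three characteristics
candidates = [
    {"age": "young", "education": "secondary", "ses": "low"},
    {"age": "young", "education": "tertiary", "ses": "low"},
    {"age": "older", "education": "secondary", "ses": "high"},
    {"age": "older", "education": "tertiary", "ses": "high"},
]
print(max_variation_sample(candidates, 2))  # picks the two most dissimilar cases
```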
When to use it: Whenever you want to explore the variation of perceptions/practices concerning a broad phenomenon.
Pros
- Allows the researcher to capture all variations of a phenomenon (Patton, 2002; Schreier, 2018).
- Finds detailed insights about each variation (Patton, 2002; Schreier, 2018).
Cons
- In small samples, sometimes cases are so different from one another that no common patterns emerge (Patton, 2002).
Practical example: Ziebland et al. (2004) studied how the internet affects patients’ experiences of cancer, using a maximum variation sample to maximize the variety of insights.
The researchers purposively looked for people who differed in the type of cancer they had, its stage, their age, and their sex.
Homogeneous sampling
The homogeneous sampling strategy can be seen as the exact opposite of maximum variation sampling because it seeks homogeneous groups of people, settings, or contexts to study in depth.
With this kind of sample, using focus group interviewing might prove extremely productive (Gray, 2004).
When to use it
Use it if your research aims to specifically focus on a group with shared characteristics.
Pros
- Produces highly detailed insights regarding a specific group (Patton, 2002).
- Highly compatible with focus group interviews (Patton, 2002).
- Can simplify the analysis (Patton, 2002).
Cons
- Doesn’t let the researcher capture much variation (Patton, 2002).
Practical example: Nesbitt et al. (2012) studied Canadian adolescent mothers’ perceptions of influences on breastfeeding decisions. The researchers purposively collected 16 homogeneous cases: adolescent mothers (15–19 years) who lived in the Durham region and had children up to 12 months old.
Other criteria included speaking English fluently and breastfeeding their infant at least once.
The aim of the researchers by using this method was to produce an in-depth look at this very specific group.
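Screening candidates against a fixed set of inclusion criteria like this can be sketched in a few lines. The criteria below loosely echo the Nesbitt example, but the thresholds, field names, and candidate records are all hypothetical.

```python
# Hypothetical inclusion criteria for a homogeneous sample
criteria = {
    "age": lambda v: 15 <= v <= 19,             # adolescent mother
    "region": lambda v: v == "Durham",          # lives in the study region
    "child_age_months": lambda v: v <= 12,      # child up to 12 months old
    "speaks_english": lambda v: v is True,      # fluent in English
}

def meets_criteria(candidate):
    """A candidate joins the sample only if every criterion passes."""
    return all(check(candidate[key]) for key, check in criteria.items())

# Hypothetical screening pool: only the first candidate qualifies
candidates = [
    {"age": 17, "region": "Durham", "child_age_months": 6, "speaks_english": True},
    {"age": 22, "region": "Durham", "child_age_months": 3, "speaks_english": True},
]
sample = [c for c in candidates if meets_criteria(c)]
print(len(sample))  # 1
```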
Theory-based sampling
Theory-based sampling is basically a more formal, more conceptually oriented type of criterion sampling: cases are chosen on the basis that they represent a theoretical construct (Patton, 2002; Gray, 2004).
The researcher samples incidents, slices of life, time periods, or people based on their potential manifestation or representation of important theoretical constructs.
When to use it
Use this one when you want to study a pre-existing theory-derived concept that is of interest to your research.
Pros
- Elaborating on previously established theoretical concepts can facilitate the analysis.
- Working with established theoretical concepts lets you contribute new insights to an established theory.
Cons
- The odds of finding out something entirely “new” are somewhat limited.
- It might be harder to determine the population of interest, because finding people, programs, organizations, or communities relevant to a specific theoretical construct is harder than sampling based on predetermined personal characteristics (Patton, 2002).
Practical example: Buckhold (as cited in Patton, 2002, p. 238) researched people who met specific theory-derived criteria for being “resilient.” She aimed to analyze the resilience of women who were victims of abuse and were able to survive.
Stratified purposive sampling
In stratified purposive sampling, decisions about the sample’s composition are made before data collection.
Schreier (2018) notes that it can be done in four steps:
- Deciding which factors are known or likely to cause variation in the phenomenon of interest.
- Selecting from two to a maximum of four factors for constructing a sampling guide.
- Combining the factors of choice in a cross-table, though when picking more than two factors, it might be impossible to conduct sampling for all factor combinations.
- Deciding how many units to sample for each cell (factor combination).
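These four steps can be illustrated with a small sketch. The factor names are hypothetical (loosely inspired by the gazelles/MICE study), and in practice the sampling guide is a judgment call, not a computation:

```python
from itertools import product

# Steps 1-2: hypothetical factors believed to drive variation
factors = {
    "company_type": ["gazelle", "mouse"],
    "sector": ["manufacturing", "sales", "services"],
}

# Step 3: cross the factors to form every cell of the sampling guide
cells = list(product(*factors.values()))

# Step 4: decide how many units to recruit per cell
sampling_guide = {cell: 2 for cell in cells}

print(len(cells))                    # 6 factor combinations (2 x 3)
print(sum(sampling_guide.values()))  # 12 units in total
```

With more than two factors, the cross-table grows multiplicatively, which is why Schreier recommends capping the guide at four factors and accepting that not every combination can be sampled.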
When to use it
Use this method when you want to explore known factors that influence the phenomenon of your interest.
These might be hypothesized in theory while having no empirical data supporting them. You can also propose a factor yourself; by including it in your sample, you might grasp its importance to the phenomenon you’re researching.
Pros
- Allows you to focus on several known factors that are of interest for your research (Schreier, 2018).
- Predetermining the composition of your sample might make it easier to find the cases/people/groups to research.
Cons
- Sticking to the predetermined composition may leave unresearched any new factors you discover in your first cases.
- Finding cases with the factors of most interest for your research might be challenging.
Practical example: Palacic (2017) examined entrepreneurial leadership and business performance in “gazelles” and “MICE” (business/market terms to describe a type of company). The sample was purposively constituted to contain cases from both types of companies that were involved in three major industrial sectors – manufacturing, sales, and services.
More flexible determination
Theoretical sampling
Theoretical sampling was developed in the context of grounded theory methodology.
Fundamentally, it’s a process of data collection that aims to generate theory. It takes place in a constant interrelation between data collection and data analysis, and it’s guided by the concepts and/or theory emerging from the research process (Gray, 2004; Flick, 2009).
The sample is usually composed of heterogeneous cases that allow comparison of different instantiations (Schreier, 2018).
When to use it
You can use this when you’re aiming to generate a new theory about a certain phenomenon.
Pros
- May bring more innovation to your research (Schreier, 2018).
- Your sample is more flexible compared with many other methods because there are no “static” criteria for your sample’s population.
Cons
- Not ideal for inexperienced researchers because generating a new theory is very challenging.
- Very time-consuming and complex.
Practical example: Glaser and Strauss (as cited in Flick, 2009, pp. 118–119) famously used this method to research awareness of dying in hospitals.
The researchers chose to conduct participant observation in different hospitals to develop a new theory about the way dying in a hospital is organized as a social process.
They built their sample through a step-by-step process while in direct contact with the field. First they studied awareness of dying in conditions that minimized patient awareness (e.g., comatose). Then they moved to situations where staff’s and patients’ awareness was high and death often was quick (e.g., intensive care). Then to situations where staff expectations of terminality were high, but dying tended to be slow (e.g., cancer). And ultimately to situations where death was unforeseen and rapid (e.g., emergency services).
Snowball sampling
Snowball sampling (or chain referral sampling) is a method widely used in qualitative sociological research (Biernacki & Waldorf, 1981; Gray, 2004; Flick, 2009; Heckathorn, 2011). It’s used a lot because it’s effective at getting numbers, and it’s premised on the idea that people know people similar to themselves.
Snowballing is especially useful for studying hard-to-reach populations. Snowball sampling has been most applicable in studies focused on a sensitive issue, perhaps a private matter, where you need to know insiders to locate, contact, and receive consent from the true target population (Biernacki & Waldorf, 1981; Heckathorn, 2011).
The researcher forms a study sample through referrals among people acquainted with others who have the characteristics of interest for the research. It begins with a convenience sample: one or a few members of the hard-to-reach population.
After successfully interviewing/communicating with this person, the researcher will ask them to introduce other people with the same characteristics. After acquiring contacts, the research proceeds in the same way (Heckathorn, 2011).
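The referral chain described above can be sketched as a simple breadth-first traversal. The network, names, and target size are all hypothetical; in real fieldwork each "edge" is a referral you must earn, not a lookup:

```python
from collections import deque

def snowball_sample(seed, referrals, target_size):
    """Collect a sample by following referral chains,
    starting from a single convenience-sampled seed."""
    sample, queue, seen = [], deque([seed]), {seed}
    while queue and len(sample) < target_size:
        person = queue.popleft()
        sample.append(person)  # "interview" this participant
        # Ask the participant to introduce similar others
        for contact in referrals.get(person, []):
            if contact not in seen:
                seen.add(contact)
                queue.append(contact)
    return sample

# Hypothetical referral network: who each participant can introduce
referrals = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["E", "A"],
}
print(snowball_sample("A", referrals, 4))  # ['A', 'B', 'C', 'D']
```

Notice that the final sample depends entirely on the seed and the shape of the network, which is exactly why snowball samples carry a known selection bias toward well-connected members.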
When to use it
As hard-to-reach groups are, well, hard to reach, snowball sampling is effective when you need an inroad and cannot easily recruit and sample.
Pros
- Ideal for studying hard-to-reach groups (Biernacki & Waldorf, 1981; Gray, 2004; Flick, 2009; Heckathorn, 2011).
- Able to produce highly detailed insights regarding a specific group through the sampling of, in principle, information-rich cases (Patton, 2002).
Cons
- If the researcher is studying a topic involving moral, legal, or socially sensitive issues (e.g., prostitution, drug addiction) and doesn’t know anyone from the group, it might be hard to start the first “chain” that brings in more recruits.
- Very limited generalization potential.
Practical example: Cloud and Granfield (1994) used snowball sampling to study drug and alcohol addicts who beat their addictions without resorting to treatment.
Using the snowballing method was fundamental to the authors because they were researching a widely distributed population (unlike those who participate in self-help groups or in treatment), and because the participants did not wish to expose their past as former drug addicts (i.e., sensitive issue).
Convenience sampling
Convenience sampling is a strategy that involves simply choosing cases in a way that is fast and convenient.
It’s probably the most common sampling strategy and, according to Patton (2002), the least desirable because it can’t be regarded as purposeful or strategic.
Many researchers choose this method thinking that their sample size is too small to generalize anyway, so they might as well pick cases that are easy to access and inexpensive to study (Patton, 2002).
This is a very common strategy among master’s students – asking fellow students to be part of the sample of their dissertation. That’s convenience sampling (Schreier, 2018). Note also that online surveying makes convenience sampling even simpler by removing geographic limitations.
When to use it
When you have few resources (mainly time and money) for your qualitative research, this is the go-to method. This is why so many studies are conducted on university students – they’re literally all over the place, whether you’re a student or researcher. As students, they’re also easier to incentivize with small compensation and they often are in the same boat.
Pros
- Saves time, money, and effort (Patton, 2002).
- Might be optimal for unfunded, strictly timed qualitative research (often master’s theses and many doctoral dissertations).
Cons
- Something of a “bad reputation” (Schreier, 2018).
- Lowest credibility (Patton, 2002).
- Might yield information-poor cases (Patton, 2002).
Practical example: Augusto and Simões (2017) used a convenience sampling strategy to capture perceptions and prevention strategies on Facebook surveillance.
As the original fieldwork was part of a master’s dissertation, convenience sampling was chosen because of the main author’s limited time and resources. This is in no way to discredit the study and findings – it was simply the most feasible way to get the research done.
Confirming and disconfirming cases
Confirming and disconfirming cases is frequently a second-stage sampling strategy.
Cases are chosen on the premise that they can confirm or disconfirm emerging patterns from the first stage of sampling (Gray, 2004).
After an exploratory process, one might consider testing ideas, confirming the importance and/or meaning of eventual patterns, and ultimately the viability of the findings through collecting new data and/or sampling additional cases (Patton, 2002).
When to use it
As the name indicates, generally, it’s ideal for testing emergent findings from your data.
Pros
- Strengthens emergent findings.
- Allows you to identify possible “exceptions that prove the rule” or exceptions that might disconfirm a finding (Patton, 2002).
Cons
- Usually requires a “first stage” of sampling.
- While definitely useful, one can argue that quantitative research is better suited to testing certain findings.
Practical example: If you were researching students’ motives for applying to college, and in your first interviews you found that the interviewees’ main reason for pursuing their education was to avoid a routine day-job, this would be a good sampling method to use. You would, however, have to look carefully at trends and check for outliers.
So, how’s your research going?
Here’s hoping you find the right qualitative sampling method(s) that work for you. Putting this together was a lesson for me as well.
And when you’re ready for a professional edit, check out Edanz’s services, which have been leading the way in author guidance since 1995. Good luck with your research!
This is a guest post from Adam Goulston, PsyD, MBA, MS, MISD, ELS. Adam runs the science marketing firm Scize and has worked as an in-house Senior Language Editor, as well as a manuscript editor, with Edanz.
Augusto, F. R., & Simões, M. J. (2017). To see and be seen, to know and be known: Perceptions and prevention strategies on Facebook surveillance. Social Science Information, 56(4), 596–618. https://doi.org/10.1177/0539018417734974
Benoot, C., Hannes, K., & Bilsen, J. (2016). The use of purposeful sampling in a qualitative evidence synthesis: A worked example on sexual adjustment to a cancer trajectory. BMC Medical Research Methodology, 16(21), 1–12. https://doi.org/10.1186/s12874-016-0114-6
Biernacki, P., & Waldorf, D. (1981). Snowball sampling: Problems and techniques of chain referral sampling. Sociological Methods & Research, 10(2), 141–163.
Cloud, W., & Granfield, R. (1994). Terminating addiction naturally: Post-addict identity and the avoidance of treatment. Clinical Sociology Review, 12(1), 159–174.
Flick, U. (2009). An Introduction to Qualitative Research (4th ed.). London: Sage Publications.
Gray, D. E. (2004). Doing Research in the Real World. London: Sage Publications, Inc.
Heckathorn, D. D. (2011). Comment: Snowball versus respondent-driven sampling. Sociological Methodology, 41(1), 355–366. https://doi.org/10.1111/j.1467-9531.2011.01244.x
Nesbitt, S. A., Campbell, K. A., Jack, S. M., Robinson, H., Piehl, K., & Bogdan, J. C. (2012). Canadian adolescent mothers’ perceptions of influences on breastfeeding decisions: a qualitative descriptive study, 1–14.
Palacic, R. (2017). The phenomenon of entrepreneurial leadership in gazelles and mice: A qualitative study from Bosnia and Herzegovina. World Review of Entrepreneurship, Management and Sustainable Development, 13(2/3).
Patton, M. Q. (2002). Qualitative Research & Evaluation Methods (3rd ed.). California: Sage Publications, Inc.
Peters, T. J., & Waterman, R. (2004). In Search of Excellence: Lessons from America’s Best-Run Companies. New York: First Harper Business Essentials.
Schreier, M. (2018). Sampling and generalization. In U. Flick (Ed.), The SAGE Handbook of Qualitative Data Collection (pp. 84–98). London: Sage Publications.
Ziebland, S., Chapple, A., Dumelow, C., Evans, J., Prinjha, S., & Rozmovits, L. (2004). How the internet affects patients’ experience of cancer: A qualitative study. The BMJ, 328(7434).