Reliability of Inference: Analogs of Replication in Qualitative Research

Fri, August 30, 10:00 to 11:30am, Marriott, Taylor

Abstract

How do issues related to replication translate into the context of qualitative research? As Freese and Peterson (forthcoming) forewarn, discussions of replication in quantitative social science cannot be directly transposed into this realm. However, we can identify analogs for the various combinations of same-data vs. new-data and same-procedures vs. different-procedures scrutiny that have been discussed in the quantitative context. While some of our analogs share the same overarching definitions and import as their quantitative relatives, others diverge significantly. The differences in these instances arise from distinctions between frequentism, which underpins orthodox statistics, and Bayesianism, which a growing body of research identifies as the most promising methodological foundation for inference in qualitative research (Bennett 2015; Humphreys and Jacobs 2015; Fairfield and Charman 2017).

In this paper, we advance two positions that we believe could help promote greater consensus and common ground among quantitative and qualitative scholars. First, we advocate restricting the term replication to a narrowly defined form of new-data, same-procedures scrutiny that applies to orthodox statistical analysis and experimental research, both for the sake of clarity and to avoid perceptions that norms from dominant subfields are being imposed on qualitative research. Second, and relatedly, we argue that the overarching concern in all scientific inquiry—both quantitative and qualitative—is reliability of inference: how much confidence we can justifiably hold in our conclusions. Reliability encompasses but extends beyond the notion of replication. Our discussion therefore focuses on practices that could help improve how we assess evidence, build consensus among scholars, and promote knowledge accumulation in qualitative research within a Bayesian framework, which provides a natural language for evaluating uncertainty.

The first section of the paper presents our understanding of replication and reliability as applicable to different types of research, characterized by the data (quantitative vs. qualitative) and the methodological framework (frequentist vs. Bayesian). Here we offer some suggestions for conducting new-data scrutiny of qualitative research, although our focus will be on same-data scrutiny, which we believe could have significant payoffs for improving reliability of inference. Accordingly, the second section of the paper elaborates Bayesian rules for same-data assessment and illustrates how they can be applied using published exemplars of process-tracing research and comparative historical analysis. In broad terms, Bayesianism directs us to ask (1) whether scholars have overstated the weight of evidence in support of the advocated argument by neglecting to assess how likely that evidence would be if a rival hypothesis were true; (2) whether the hypotheses under consideration have been articulated clearly enough to assess how likely the evidence would be under a given explanation relative to rivals; and (3) whether the background knowledge that scholars discuss justifies an initial preference for a particular hypothesis.
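These questions can be made concrete with the odds form of Bayes' rule, a standard formulation offered here as an illustrative sketch rather than notation drawn from the paper itself. For rival hypotheses H1 and H2 and a piece of evidence E:

    P(H1 | E) / P(H2 | E) = [P(H1) / P(H2)] x [P(E | H1) / P(E | H2)]

The prior odds P(H1)/P(H2) express the initial preference grounded in background knowledge, while the likelihood ratio P(E | H1)/P(E | H2) captures the weight of the evidence. Overstating that weight typically amounts to citing a high P(E | H1) without asking whether P(E | H2) is comparably high, in which case the evidence does little to discriminate between the rivals.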
