Contra QCA

While there is much that I like about QCA, there are two areas where I am in disagreement with QCA practice:

  1. Defining Necessity and Sufficiency

EvalC3 uses a categorical definition of necessity and sufficiency. Following colloquial and philosophical use, a prediction model attribute is either necessary or not, and either sufficient or not. It is a black-and-white status: there are no degrees of necessity or degrees of sufficiency. To me, the idea of having degrees of sufficiency or necessity contradicts the very kernel of the meaning of both terms.

Yet QCA experts allow for this possibility when they talk about consistency of sufficient conditions and consistency of necessary conditions. For example, a configuration that has 20 True Positives and 5 False Positives would be described as having a Sufficiency consistency of 80%. Or a configuration with 20 True Positives and 10 False Negatives would be described as having a Necessity consistency of 66%. Along with this comes the more difficult notion of a threshold on these measures, above which a set of conditions (aka a model) qualifies for the more categorical status of being sufficient, or necessary. For example, anything having more than 75% Sufficiency consistency is deemed to be Sufficient. But how this threshold is to be defined in any objective and accountable way escapes me. All Schneider and Wagemann (2012) can say is that “…the exact location of the consistency threshold is heavily dependent on the specific research context”.
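The worked example above can be sketched in a few lines of Python. This is illustrative only (not EvalC3 or any QCA package); the function names and the 75% threshold rule are taken from the example in the text.

```python
# Illustrative sketch: the consistency figures from the worked example,
# computed from confusion-matrix cell counts.

def sufficiency_consistency(tp, fp):
    """Share of cases the model predicts positive that actually have the outcome."""
    return tp / (tp + fp)

def necessity_consistency(tp, fn):
    """Share of cases with the outcome that the model predicts positive."""
    return tp / (tp + fn)

print(sufficiency_consistency(20, 5))               # 0.8, i.e. 80%
print(round(necessity_consistency(20, 10), 2))      # 0.67, i.e. ~66%

# A threshold rule of the kind discussed above (75% is the example figure):
THRESHOLD = 0.75
print(sufficiency_consistency(20, 5) >= THRESHOLD)  # True: deemed "sufficient"
```

The arithmetic is trivial; the contested part is the last line, where a continuous measure is converted into a categorical verdict by an essentially arbitrary cut-off.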

  2. Measuring Consistency and Coverage

QCA experts have made the task of communicating their analyses to others more challenging by defining these two terms differently, according to whether they are talking about conditions that are necessary or sufficient.

  • Consistency of sufficient conditions = True Positive / (True Positive and False Positive)
  • Consistency of necessary conditions = True Positive / (True Positive and False Negative)
  • Coverage of sufficient conditions = True Positive / (True Positive and False Negative)
  • Coverage of necessary conditions = True Positive / (True Positive and False Positive)
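A short sketch (illustrative, not drawn from any QCA software) makes the duplication visible: the four formulas yield only two distinct numbers, with the labels swapped between the sufficiency and necessity cases. Coverage of necessary conditions is written here as TP / (TP + FP), the standard QCA definition.

```python
# Illustrative sketch of the four QCA measures, computed from
# confusion-matrix counts. The four labels collapse into two quantities.

def qca_measures(tp, fp, fn):
    return {
        "consistency_sufficient": tp / (tp + fp),
        "consistency_necessary":  tp / (tp + fn),
        "coverage_sufficient":    tp / (tp + fn),
        "coverage_necessary":     tp / (tp + fp),
    }

m = qca_measures(tp=20, fp=5, fn=10)
# consistency_sufficient == coverage_necessary, and
# consistency_necessary == coverage_sufficient:
print(m)
```

In other words, the same two ratios are simply renamed depending on whether the analyst is framing the condition as sufficient or necessary, which is exactly the communication burden complained about above.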

Again, keeping closer to the commonplace meaning of these terms, EvalC3 has only one definition each for consistency and coverage:

  • Consistency of a model = True Positive / (True Positive and False Positive)
  • Coverage of a model = True Positive / (True Positive and False Negative)

These two terms have other names in other fields of work:

  • Consistency is also known as Positive Predictive Value (PPV), or Precision
  • Coverage is also known as True Positive Rate (TPR), Recall, or Sensitivity
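The mapping onto these standard metrics can be shown by computing both measures from raw prediction/outcome pairs. This is an illustrative sketch, not EvalC3 code; `confusion_counts` and the sample data are made up for the example.

```python
# Sketch mapping EvalC3's terms onto the standard metrics named above,
# computed from parallel lists of model predictions and actual outcomes.

def confusion_counts(predicted, actual):
    """Return (TP, FP, FN) from parallel boolean lists."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(a and not p for p, a in zip(predicted, actual))
    return tp, fp, fn

predicted = [True, True, True, False, False]   # hypothetical model predictions
actual    = [True, True, False, True, False]   # hypothetical observed outcomes

tp, fp, fn = confusion_counts(predicted, actual)

consistency = tp / (tp + fp)  # aka Positive Predictive Value / Precision
coverage    = tp / (tp + fn)  # aka True Positive Rate / Recall / Sensitivity
print(consistency, coverage)
```

With one definition per term, the numbers stay the same whatever claim (sufficiency or necessity) is being made about the model.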