Within-case analysis

A very useful book by Goertz and Mahoney (A Tale of Two Cultures: Qualitative and Quantitative Research in the Social Sciences, 2012) makes a distinction between within-case analysis and cross-case analysis. EvalC3 is designed primarily to facilitate cross-case analysis. But to get the maximum value from this kind of analysis, it is important that it is well informed at two different stages by within-case analysis.

When and why

  1. Before a cross-case analysis: When selecting which attributes to include in a data set and to make use of when analysing that data, whether through EvalC3, other methods such as QCA, or Decision Tree algorithms. Ideally, the selection of which attributes to investigate in terms of their possible relationship to which outcomes would be informed by some prior notion or theory of what might be happening, rather than random choice. The development of those views is likely to be enhanced by familiarity with the details of the cases that make up the data set.
  2. After a cross-case analysis: When good prediction rules have been found and modal (i.e. representative) cases have been identified (see Selecting Cases). Once modal cases have been selected they can be put to use in various ways:
    1. As illustrative examples of the results predicted by the model (True Positives), and/or incorrect results (False Positives). At the same time, within-case inspection can be used to verify whether the attributes of the case in the data set are a correct description of the actual modal case, i.e. to do a measurement validity check.
    2. As sources of causal explanations. The examination of individual cases should provide much more detailed information which could shed light on what (if any) causal mechanisms are at work that make the prediction work.
    3. As sources of contradictory information, not available within the data set, which could disprove causal explanations that are developed. These could include confounders, i.e. a background factor that is a cause of both the attributes in a model and the associated outcome.

Steps to take to identify and test likely causal mechanisms

There are four types of cases that can be selected for more in-depth inquiries about any underlying causal mechanisms that may be at work.

  1. Cases which exemplify the True Positive results, where the model correctly predicted the presence of the outcome. Look within these cases to find any likely causal mechanisms connecting the conditions that make up the configuration. Two sub-types would be useful to compare:
    1. Modal cases, which represented the average characteristics of cases in this group, taking all attributes into account, not just those within the prediction model. Click the Calculate Similarity button in View Cases to find these cases.
    2. Outlier cases, which are those most dissimilar to all other cases in this group, apart from having the same prediction model characteristics. Click the Calculate Similarity button in View Cases to find these cases.
      1. I think this is like what others have called an MDSO (most different, same outcome) analysis – “one has to look for similarities in the characteristics of initiatives that differ the most from each other; firstly the identification of the most differing pair of cases and secondly the identification of similarities between those two cases” (De Meur et al, 2006:71).
  2. Cases which exemplify the False Positives, where the model incorrectly predicted the presence of the outcome. There are at least two possible explanations that could be explored:
    1. In the False Positive cases, there are one or more other factors that all the cases have in common, which are blocking the model configuration from working i.e. delivering the outcome
    2. In the True Positive cases, there are one or more other factors that all the cases have in common, which are enabling the model configuration to work, i.e. to deliver the outcome, but which are absent in the False Positive cases.
    3. There is another kind of analysis possible here called MSDO (most similar, different outcome) – “to explain why within a set of legislative initiatives, some initiatives result in other decision-making patterns than other initiatives, one has to look for dissimilarities in the characteristics of initiatives that are similar to each other; firstly the identification of the most similar pair of cases and secondly the identification of dissimilarities between those two cases” (De Meur et al, 2006:71).
  3. Cases which exemplify the False Negatives, where the outcome occurred despite the absence of the attributes of the model. There are two types of interest here:
    1. There may be some False Negative cases that have all but one of the attributes found in the prediction model. These cases would be worth examining, in order to understand why the absence of a particular attribute that is part of the predictive model does not prevent the outcome from occurring. There may be some counter-balancing factor at work enabling the outcome. Such almost-the-same cases can be found using the Compare function in View Cases.
    2. Where a data set has some missing data points (i.e. blank cells), it is possible that some cases have been classed as FNs because they lack data on crucial attributes that would otherwise have classed them as TPs. In these circumstances it would be worth investigating the incidence of missing data on each of the attributes of a well-performing model, and then scanning FN cases for those which have many of the necessary attributes but where the data on the others are missing.
    3. Where multiple models have been developed by using EvalC3 or QCA, it is possible that some cases with the expected outcome are still not covered by any of the models. By default, these will fall into the False Negative category. These cases should be given particular attention, because it is likely that the attributes that predict this outcome are outside the data set. They can only be discovered by doing a within-case investigation of these uncovered cases.
  4. Cases which exemplify the True Negatives, where the absence of the attributes of the model is associated with the absence of the outcome.
    1. There may be cases here with all but one of the model attributes. These can be found using the Compare function in View Cases, after selecting a modal case in the True Positives group as the comparator. If found, then the missing attribute may be viewed as an INUS attribute, i.e. an attribute that is Insufficient but Necessary in a configuration that is Unnecessary but Sufficient for the outcome (see Befani, 2016). It would then be worth investigating how these critical attributes have their effects by doing a detailed within-case analysis of the cases with the critical missing attribute.
      1. Caveat: INUS status cannot be claimed for an attribute if the same configuration with all but one of the model attributes can also be found in the False Negatives group of cases (i.e. where the outcome is present).
    2. (Updated 2020 10 20) Cases may become True Negatives for two reasons. The first, and most expected, is that the causes of positive outcomes are absent. The second, which is worth investigating, is that there are additional and different causes at work which are causing the outcome to be absent. The first of these is described as causal symmetry, the second as causal asymmetry. Because of the second possibility, it is worthwhile paying close attention to True Negative cases to identify the extent to which symmetrical causes and asymmetrical causes are at work. The findings could have significant implications for any intervention that is being designed.
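The four case types above follow directly from comparing a model's prediction with the observed outcome. As a minimal sketch (not EvalC3's actual implementation), assuming binary attributes and a model that is a simple conjunction of required attributes, the classification could look like this; the case data and attribute names are hypothetical:

```python
def predict(case, model_attributes):
    """Model predicts the outcome when every model attribute is present (1)."""
    return all(case[a] == 1 for a in model_attributes)

def classify(case, model_attributes, outcome_key="outcome"):
    predicted = predict(case, model_attributes)
    actual = case[outcome_key] == 1
    if predicted and actual:
        return "TP"   # True Positive: outcome correctly predicted present
    if predicted and not actual:
        return "FP"   # False Positive: outcome predicted but absent
    if not predicted and actual:
        return "FN"   # False Negative: outcome present despite prediction of absence
    return "TN"       # True Negative: outcome correctly predicted absent

# Hypothetical data set of four cases with two binary attributes.
cases = [
    {"id": "A", "funding": 1, "training": 1, "outcome": 1},
    {"id": "B", "funding": 1, "training": 1, "outcome": 0},
    {"id": "C", "funding": 0, "training": 1, "outcome": 1},
    {"id": "D", "funding": 0, "training": 0, "outcome": 0},
]
model = ["funding", "training"]
groups = {c["id"]: classify(c, model) for c in cases}
# groups == {"A": "TP", "B": "FP", "C": "FN", "D": "TN"}
```

Each group can then be examined within-case along the lines set out above.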

The cases that fit each of the four types can be seen in the “View Cases” worksheet, and found by using the Calculate Similarity and Compare functions.
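The idea behind identifying modal and outlier cases within a group can be sketched as follows. This is an illustrative similarity measure (the proportion of matching binary attribute values, i.e. Hamming similarity), not necessarily the measure EvalC3's Calculate Similarity button uses, and the case data are hypothetical:

```python
def similarity(case_a, case_b, attributes):
    """Proportion of attributes on which two cases have the same value."""
    matches = sum(1 for a in attributes if case_a[a] == case_b[a])
    return matches / len(attributes)

def rank_by_average_similarity(cases, attributes):
    """Rank cases by mean similarity to all other cases in the group.
    The top-ranked case approximates a modal case; the bottom-ranked
    case is the most outlying."""
    scores = {}
    for i, c in enumerate(cases):
        others = [o for j, o in enumerate(cases) if j != i]
        scores[c["id"]] = sum(similarity(c, o, attributes) for o in others) / len(others)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

group = [  # e.g. the True Positive cases (hypothetical data)
    {"id": "A", "x1": 1, "x2": 1, "x3": 0},
    {"id": "B", "x1": 1, "x2": 1, "x3": 1},
    {"id": "C", "x1": 0, "x2": 0, "x3": 1},
]
ranking = rank_by_average_similarity(group, ["x1", "x2", "x3"])
# ranking[0] is the most modal case ("B"), ranking[-1] the outlier ("C")
```

Note that all attributes in the data set enter the calculation, not just those in the prediction model, as described for modal cases above.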

Sensitivity

When looking at individual True Positive cases in order to find causal mechanisms at work, it may be of value to focus on particular attributes in the model. Tweaking a model, by selectively removing and replacing one attribute at a time, will show which attributes make the biggest difference to the model’s overall performance. It is these attributes which should be of particular interest when looking for the causal mechanism at work within a TP case.
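The tweaking idea can be sketched like this: drop each model attribute in turn and measure how far a performance measure falls. This is an illustration only, using plain accuracy on hypothetical data; EvalC3's Sensitivity button may use a different performance measure:

```python
def predict(case, model_attributes):
    """Model predicts the outcome when every model attribute is present (1)."""
    return all(case[a] == 1 for a in model_attributes)

def accuracy(cases, model_attributes):
    """Proportion of cases where the prediction matches the observed outcome."""
    correct = sum(1 for c in cases
                  if predict(c, model_attributes) == (c["outcome"] == 1))
    return correct / len(cases)

def sensitivity(cases, model_attributes):
    """Return the attribute whose removal hurts accuracy most, plus all drops."""
    base = accuracy(cases, model_attributes)
    drops = {}
    for a in model_attributes:
        reduced = [m for m in model_attributes if m != a]
        drops[a] = base - accuracy(cases, reduced)
    return max(drops, key=drops.get), drops

cases = [  # hypothetical data
    {"funding": 1, "training": 1, "outcome": 1},
    {"funding": 1, "training": 0, "outcome": 0},
    {"funding": 0, "training": 1, "outcome": 0},
    {"funding": 1, "training": 0, "outcome": 0},
]
critical, drops = sensitivity(cases, ["funding", "training"])
# critical == "training": removing it costs the most accuracy
```

The attribute identified this way would be the first place to look for a causal mechanism when examining a TP case in depth.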

There is now a Sensitivity button on the Design and Evaluate view, under the Explore section. Clicking on this will highlight the attribute in the currently loaded model whose removal makes the biggest difference to the model performance.

Worth reading

Elizabeth A. Stuart (2010) Matching methods for causal inference: A review and a look forward

Gary Goertz (2017) Multimethod Research, Causal Mechanisms, and Case Studies: An Integrated Approach, Princeton University Press

