All models generated by exhaustive or evolutionary searches are automatically saved, with a name that specifies the search criteria that were used. The details of each saved model are listed in the View Models screen. Manually designed models are also listed there, provided they have been saved under a manually assigned name.
Sometimes a search may generate more than one model, because more than one model performs equally well on the selected performance criterion, such as Accuracy. In this situation each saved model with the same level of performance is given a consecutive number at the end of its saved name.
1. Using alternate evaluation criteria
Multiple models can be evaluated using secondary and tertiary evaluation criteria, such as
- Lift – how well the model predicts the outcome relative to chance. A higher lift value signifies better performance relative to chance.
- Simplicity – how few attributes the model uses, relative to the number available in the design menu. Fewer can be better for two reasons: (a) simple models can have wider applicability across cases that exhibit the range of all possible combinations of attributes – the number of cases with the intersection of A, B, C and D will be smaller than the number of cases with the intersection of A and B alone; (b) simpler models can be easier to implement in real life. [But both of these arguments assume some degree of similarity of scale and complexity across A to D.]
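EvalC3's exact formula for lift is not given here, so the sketch below uses one common definition as an assumption: precision (how often the model's positive predictions are right) divided by the outcome's base rate. A value of 1 means no better than chance.

```python
def lift(true_positives, predicted_positives, actual_positives, total_cases):
    """Assumed definition: lift = precision / prevalence, i.e. how much
    better the model's positive predictions are than picking at random."""
    precision = true_positives / predicted_positives
    prevalence = actual_positives / total_cases
    return precision / prevalence

# A model flags 20 cases, 15 of them correctly, in a dataset of 100
# cases where 30 have the outcome: 0.75 precision vs a 0.30 base rate.
print(lift(15, 20, 30, 100))  # 2.5
```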
2. Removing redundancies
There seem to be two ways of proceeding…
- Finding redundant models
In Figure 1 the rows represent models and the columns represent the attributes used in each of those models, where an X means the attribute takes a 0 or 1 value in that model. Looking at Figure 1, Model 2 is redundant because its combination of three attributes is covered by Model 1. Likewise, Model 3 is redundant because its combination of three attributes is covered by Model 4, and Model 1 is redundant because its combination of four attributes is covered by Model 4. By "covered" I mean that one model's attributes are a subset of the other's.
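The subset test just described can be sketched in a few lines of Python. The model names and attribute sets below are hypothetical stand-ins for Figure 1, and for simplicity the sketch compares attribute sets only, ignoring the 0/1 values (a fuller test would compare attribute-value pairs).

```python
# Hypothetical stand-ins for the models in Figure 1.
models = {
    "Model 1": {"A", "B", "C", "D"},
    "Model 2": {"A", "B", "C"},
    "Model 3": {"B", "C", "D"},
    "Model 4": {"A", "B", "C", "D", "E"},
}

def redundant_models(models):
    """A model is redundant if its attribute set is a proper subset
    of another model's set (the 'covered' relation described above)."""
    return {
        name
        for name, attrs in models.items()
        for other, other_attrs in models.items()
        if name != other and attrs < other_attrs
    }

print(sorted(redundant_models(models)))
# ['Model 1', 'Model 2', 'Model 3'] – only Model 4 is non-redundant
```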
- Finding redundant configurations
Here we can use something called a "Prime Implicants Chart". The Prime Implicant Chart is the second part of the Quine-McCluskey procedure, a central feature of QCA analysis. According to Schneider and Wagemann (2012:109), "Prime implicants can be defined as the end products of the logical minimisation process through pairwise comparison of conjunctions…Under certain circumstances, though, it happens that one or more of these prime implicants are logically redundant… They can be dropped from the solution term in order to obtain the most parsimonious formula…." A Prime Implicant (PI) is the equivalent of an EvalC3 model.
The process is described below in Figures 2, 3 and 4. The PIs are listed by row, and the columns list the different configurations ('minterms') that they might apply to. The x's in the cells indicate which minterm is covered by which PI. In Figure 2 the process starts with the identification of a PI that covers a minterm that no other prime implicant does (see row 3). This is an essential rather than redundant PI. The minterms covered by this essential PI are then ruled out for the other PIs (see vertical red lines).
In Figures 3 and 4 the same process is repeated. This leaves us with two essential PIs, in rows 1 and 3.
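The essential-PI step walked through in Figures 2 to 4 can be sketched as follows. The PI names and minterm labels are hypothetical, arranged so that, as in the figures, rows 1 and 3 come out as essential.

```python
# Hypothetical PI chart: each PI maps to the minterms it covers.
chart = {
    "PI1": {"m1", "m2"},
    "PI2": {"m2", "m3"},
    "PI3": {"m3", "m4"},
}

def essential_pis(chart):
    """A PI is essential if it covers a minterm no other PI covers."""
    essential = set()
    for minterm in set().union(*chart.values()):
        covering = [pi for pi, minterms in chart.items() if minterm in minterms]
        if len(covering) == 1:
            essential.add(covering[0])
    return essential

print(sorted(essential_pis(chart)))  # ['PI1', 'PI3']
# PI1 and PI3 together cover all four minterms, so PI2 is redundant.
```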
This seems to be the approach used by S&W (2012): "…we introduce a second formula for the minimisation of solution formula: a prime implicant is logically redundant if all the primitive expressions are covered without it being included in the solution formula" (a primitive expression is the same as a row in the truth table, also known as a minterm).
S&W also helpfully suggest that removal of redundant models is an option, not a necessity: redundant models "may be of substantive interest". Parsimony may not be the only concern. They also point out that there can be pairs of redundant models where an exclusive-or applies: either one can be removed, but not both. This means there can be more than one most parsimonious solution.
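The S&W redundancy rule quoted above can be sketched as a coverage check: drop the PI and see whether every minterm is still covered. The chart below is hypothetical, chosen to show the exclusive-or case where either of two PIs can be dropped but not both.

```python
def is_redundant(chart, pi):
    """Per the rule quoted above: a PI is logically redundant if all
    minterms are still covered when it is left out of the solution."""
    all_minterms = set().union(*chart.values())
    covered_without = set().union(
        *(minterms for name, minterms in chart.items() if name != pi)
    )
    return all_minterms <= covered_without

# Hypothetical chart where an exclusive-or applies:
# PI2 and PI3 cover the same minterm, so either can go, but not both.
chart = {
    "PI1": {"m1", "m2"},  # essential: the only PI covering m1 and m2
    "PI2": {"m3"},
    "PI3": {"m3"},
}

print(is_redundant(chart, "PI2"))  # True
print(is_redundant(chart, "PI3"))  # True

# Removing PI3 first leaves PI2 essential: m3 would go uncovered.
without_pi3 = {name: ms for name, ms in chart.items() if name != "PI3"}
print(is_redundant(without_pi3, "PI2"))  # False
# Hence two equally parsimonious solutions: {PI1, PI2} and {PI1, PI3}.
```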