When a good prediction model has been found, it may consist of multiple project attributes (each required to be present and/or absent). A question may then be asked as to how important each of these attributes is within the model as a whole.
This question can be answered by systematically removing each attribute from the model, one at a time, and on each occasion observing how the overall performance of the model changes. Removal here means changing an attribute value of 1 or 0 to n/a. The removal of a more important attribute will be associated with a bigger deterioration in the model's performance. The question then is which attribute's removal is associated with the biggest deterioration.
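The procedure above can be sketched in code. This is a minimal illustration, not EvalC3 itself: the attribute names, the representation of a model as a dictionary of required values, and the use of the mean of sensitivity and specificity as the performance measure are all my own assumptions.

```python
def predict(case, model):
    """A case is predicted positive when it matches every attribute the
    model specifies (1 = present, 0 = absent); attributes set to None
    ("n/a") are ignored."""
    return all(case[a] == v for a, v in model.items() if v is not None)

def averaged_accuracy(cases, outcomes, model):
    """Mean of sensitivity and specificity (an assumed performance measure)."""
    tp = fp = fn = tn = 0
    for case, outcome in zip(cases, outcomes):
        p = predict(case, model)
        if p and outcome: tp += 1
        elif p and not outcome: fp += 1
        elif outcome: fn += 1
        else: tn += 1
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return (sensitivity + specificity) / 2

def sensitivity_analysis(cases, outcomes, model):
    """Remove each attribute in turn (set it to n/a) and report the drop
    in performance relative to the full model."""
    baseline = averaged_accuracy(cases, outcomes, model)
    drops = {}
    for attr in model:
        reduced = dict(model, **{attr: None})
        drops[attr] = baseline - averaged_accuracy(cases, outcomes, reduced)
    return drops
```

The attribute with the largest entry in `drops` is the one whose removal is associated with the biggest deterioration, i.e. the most important attribute in the sense described above.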
PS for nerds: The same method has been used to make models generated by neural networks more transparent. See https://www.oreilly.com/learning/introduction-to-local-interpretable-model-agnostic-explanations-lime
An example using the Krook data set
As a result of using an evolutionary search it was found that the presence of “quotas” for women in parliament and the country being in a “post-conflict” situation was sufficient but not necessary for high levels of women’s participation in parliament. This model accounted for 6 of the 9 cases where there were high levels of women in parliament.
When the “presence of quotas” was removed from the model, the model performance fell from 83% to 74%, when using Averaged Accuracy as the performance measure. The number of TPs increased slightly from 6 to 7, and the FPs increased from 0 to 5.
When the presence of a post-conflict situation was removed from the model, the model performance remained at 83%, but the number of FPs increased from 0 to 4.
Therefore, it appears that the presence of quotas was the attribute whose removal made the biggest difference to the model's performance.
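The quoted figures can be reproduced under two assumptions of mine that are not stated in the text: that Averaged Accuracy is the mean of sensitivity and specificity, and that the data set has 26 cases in total, 9 of which show the outcome (high levels of women in parliament).

```python
# A sketch reproducing the Averaged Accuracy figures quoted above.
# Assumptions (mine): Averaged Accuracy = (sensitivity + specificity) / 2,
# and the data set has 26 cases with 9 positive outcomes.

def averaged_accuracy(tp, fp, positives=9, total=26):
    fn = positives - tp              # positive cases the model misses
    tn = (total - positives) - fp    # negative cases correctly rejected
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return (sensitivity + specificity) / 2

print(round(averaged_accuracy(tp=6, fp=0), 2))  # full model → 0.83
print(round(averaged_accuracy(tp=7, fp=5), 2))  # "quotas" removed → 0.74
```

Under these assumptions the arithmetic matches the 83% and 74% reported above exactly, which lends some support to the assumed definitions.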
This type of analysis can be seen as a particular form of contribution analysis.
The following is a quote from the ToRs (Terms of Reference) of an evaluation: “Even though it is well recognized that multiple factors can affect the livelihoods of individuals or the capacities of institutions, it is important for policymakers as well as stakeholders to know what the added value of the xxxx program is”
What they are looking for, I suggest, is an INUS condition: an attribute that is an Insufficient but Necessary part of a configuration that is itself Sufficient but Unnecessary for an outcome to occur.
INUS attributes can be identified using EvalC3. The first step is to develop a good predictive model for an outcome; typically this will consist of a number of attributes. By selectively changing the status of each attribute in the configuration and observing the effect on the model's performance, we can identify the extent to which an attribute is an Insufficient but Necessary part of a configuration that is Sufficient but Unnecessary.
This can be observed in two forms:
- To a degree: A model that was previously Sufficient but Unnecessary may no longer be so if the removal of an attribute from the model leads to some False Positives. The model may still be a good predictor of the outcome, but its attributes are no longer sufficient for the outcome. Both of the example changes to the Krook data model, discussed above, are of this kind.
- Categorically: A model that already had some False Positive cases may now have more of these than True Positives – in which case the model is now in effect a better predictor of the absence of outcome.
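The two forms above can be expressed as a simple decision rule over a model's True Positive and False Positive counts. This is my own summary labelling, not EvalC3 terminology.

```python
# A sketch classifying a model's sufficiency status from its TP and FP
# counts, following the two forms described above. Labels are mine.

def sufficiency_status(tp, fp):
    if fp == 0:
        return "sufficient"   # no observed case contradicts the model
    if fp > tp:
        return "flipped"      # categorically: now a better predictor of the outcome's absence
    return "degraded"         # to a degree: some FPs, but still a useful predictor of the outcome

print(sufficiency_status(tp=6, fp=0))  # full Krook model → "sufficient"
print(sufficiency_status(tp=7, fp=5))  # "quotas" removed → "degraded"
```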
2018 09 03 Update
It is now possible to do a quick sensitivity analysis using the sensitivity button in the Explore section of the Design and Evaluate view. Clicking this button highlights two attributes of your current model:
- A green highlighted attribute, whose removal makes the biggest difference to the performance of the model
- A mauve highlighted attribute, whose removal makes the least difference to the performance of the model
2018 10 12
The categorical definition of an INUS attribute in a prediction model has an interesting connection to the notion of “actionable recourse” in discussions of the fairness of algorithmic decision-making. Recourse is the ability to flip the decision of an algorithm by changing an attribute of a case. While people may not be able to change some of their attributes (e.g. their age), they may be able to change others (e.g. their education level).
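For a simple configurational model, recourse can be sketched as a search over a case's mutable attributes for a single flip that changes the model's decision. The model, the case, and which attributes count as mutable are all illustrative assumptions of mine.

```python
# A minimal sketch of "actionable recourse" for a configurational model:
# which of a case's *mutable* attributes could be flipped to change the
# model's decision? Attribute names and mutability are hypothetical.

def predict(case, model):
    """Positive when the case matches every attribute the model requires."""
    return all(case[a] == v for a, v in model.items())

def recourse_options(case, model, mutable):
    """Return the single-attribute flips that change the model's decision."""
    before = predict(case, model)
    options = []
    for attr in mutable:
        flipped = dict(case, **{attr: 1 - case[attr]})
        if predict(flipped, model) != before:
            options.append(attr)
    return options

model = {"degree": 1, "experience": 1}          # hypothetical decision model
case = {"degree": 0, "experience": 1, "age_over_40": 1}
print(recourse_options(case, model, mutable=["degree"]))  # → ["degree"]
```

Here the case has recourse through "degree" (a changeable attribute), while flipping "age_over_40" alone would not change the decision even if it were changeable.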