TreeScoringExplainer Class
Defines a scoring model based on TreeExplainer.
If the original explainer was using a SHAP TreeExplainer, the core of the original explainer will be reused. If the original explainer used another method, a new explainer will be created.
If transformations was passed in on original_explainer, those transformations will be carried through to the scoring explainer; it will expect raw data, and by default importances will be returned for raw features. If feature_maps are passed in here (NOT intended to be used at the same time as transformations), the explainer will expect transformed data, and by default importances will be returned for transformed features. In either case, the output can be overridden by explicitly setting get_raw to True or False on the explainer's explain method.
Initialize the TreeScoringExplainer.
- Inheritance
  - azureml.interpret.scoring.scoring_explainer._scoring_explainer.ScoringExplainer
    - TreeScoringExplainer
Constructor
TreeScoringExplainer(original_explainer, **kwargs)
Parameters

Name | Description
---|---
original_explainer<br>Required | <xref:interpret_community.common.base_explainer.BaseExplainer><br>The training time tree explainer originally used to explain the model.
feature_maps<br>Required | A list of feature maps from raw to generated features. The list can be numpy arrays or sparse matrices where each array entry (raw_index, generated_index) is the weight for each raw, generated feature pair; the other entries are set to zero. For a sequence of transformations [t1, t2, ..., tn] generating generated features from raw features, the list of feature maps corresponds to the raw to generated maps in the same order as t1, t2, etc. If the overall raw to generated feature map from t1 to tn is available, just that feature map can be passed in a single-element list.
raw_features<br>Required | Optional list of feature names for the raw features; can be specified if the original explainer computes the explanation on the engineered features.
engineered_features<br>Required | Optional list of feature names for the engineered features; can be specified if the original explainer has transformations passed in and only computes the importances on the raw features.
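The feature_maps format above can be illustrated with a minimal numpy sketch. The setup is hypothetical: two raw features, where raw feature 0 is a numeric passthrough and raw feature 1 is one-hot encoded into three generated columns, so each (raw_index, generated_index) entry holds the weight for that raw, generated pair.

```python
import numpy as np

# Hypothetical transformation: 2 raw features -> 4 generated features.
# Raw feature 0 passes through unchanged (generated column 0);
# raw feature 1 is one-hot encoded into generated columns 1-3.
n_raw, n_generated = 2, 4

feature_map = np.zeros((n_raw, n_generated))
feature_map[0, 0] = 1.0     # raw 0 -> generated 0, weight 1
feature_map[1, 1:4] = 1.0   # raw 1 -> generated 1, 2, 3, weight 1 each

# The overall raw-to-generated map can be passed as a single-element list.
feature_maps = [feature_map]
```

With a chain of transformations [t1, t2, ..., tn], each step's map would instead be one element of the list, in the same order as the transformations.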
Methods

Name | Description
---|---
explain | Use the TreeExplainer and tree model for scoring to get the feature importance values of data.
explain
Use the TreeExplainer and tree model for scoring to get the feature importance values of data.
explain(evaluation_examples, get_raw=None)
Parameters

Name | Description
---|---
evaluation_examples<br>Required | A matrix of feature vector examples (# examples x # features) on which to explain the model's output.
get_raw | If True, importance values for raw features will be returned. If False, importance values for engineered features will be returned. If unspecified and transformations was passed into the original explainer, raw importance values will be returned. If unspecified and feature_maps was passed into the scoring explainer, engineered importances will be returned. If unspecified and neither was passed, explanations will be given for the data as it was passed in. Default value: None
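The default-resolution behavior of get_raw described above can be sketched as a small helper. resolve_get_raw is a hypothetical name for illustration only, not part of the azureml.interpret API.

```python
def resolve_get_raw(get_raw, has_transformations, has_feature_maps):
    """Illustrates the documented defaults for get_raw
    (hypothetical helper, not part of the azureml.interpret API)."""
    if get_raw is not None:
        return get_raw      # explicit True/False always wins
    if has_transformations:
        return True         # transformations -> raw importances by default
    if has_feature_maps:
        return False        # feature_maps -> engineered importances by default
    return None             # neither passed: explain data as it was passed in
```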
Returns

Type | Description
---|---
 | For a model with a single output such as regression, this returns a matrix of feature importance values. For models with vector outputs, this function returns a list of such matrices, one for each output. The dimension of each matrix is (# examples x # features).
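Conceptually, when raw importances are requested, engineered-feature importances are aggregated back to raw features using the feature map weights. A minimal numpy sketch of that weighted-sum mapping follows; the actual aggregation inside azureml.interpret may differ in detail, and the numbers are made up for illustration.

```python
import numpy as np

# Engineered importances: 2 examples x 4 generated features (made-up values).
engineered = np.array([[0.5, 0.1, 0.2, 0.3],
                       [1.0, 0.0, 0.4, 0.6]])

# Feature map: raw 0 -> generated 0; raw 1 -> generated 1, 2, 3.
feature_map = np.array([[1.0, 0.0, 0.0, 0.0],
                        [0.0, 1.0, 1.0, 1.0]])

# Raw importance of each raw feature is the weighted sum over the
# generated columns it maps to: shape (# examples x # raw features).
raw = engineered @ feature_map.T
```

Here the second raw feature's importance for the first example is 0.1 + 0.2 + 0.3 = 0.6, the sum of its three one-hot columns.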