
Multiple Models With DB Writer Interface

In this example, we are going to conduct deep performance profiling of 4 models. For that, we will use the compute_metrics_with_db_writer interface, which computes metrics for multiple models and saves the results in the user's database via the provided db_writer function. Thus, we will need to complete the following steps:

  • Initialize input variables

  • Compute subgroup metrics

  • Perform disparity metrics composition using the Metric Composer

  • Create static visualizations using the Metric Visualizer

Import dependencies

import os
import pandas as pd

from xgboost import XGBClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import StandardScaler

from virny.user_interfaces.multiple_models_with_db_writer_api import compute_metrics_with_db_writer
from virny.utils.custom_initializers import create_config_obj, create_models_metrics_dct_from_database_df
from virny.custom_classes.metrics_visualizer import MetricsVisualizer
from virny.custom_classes.metrics_composer import MetricsComposer
from virny.preprocessing.basic_preprocessing import preprocess_dataset
from virny.datasets import CompasWithoutSensitiveAttrsDataset

Initialize Input Variables

Based on the library flow, we need to create 3 input objects for a user interface:

  • A config yaml that is a file with configuration parameters for different user interfaces for metric computation.

  • A dataset class that is a wrapper around the user's raw dataset and includes its descriptive attributes, such as a target column, numerical columns, categorical columns, etc. This class must inherit from the BaseDataset class, which was created for user convenience.

  • Finally, a models config that is a Python dictionary, where keys are model names and values are initialized models for analysis. This dictionary helps conduct audits for different analysis modes and analyze different types of models.

TEST_SET_FRACTION = 0.2
DATASET_SPLIT_SEED = 42

Create a config object

The compute_metrics_with_db_writer interface requires that your yaml file include the following parameters:

  • dataset_name: str, a name of your dataset; it will be used to name files with metrics.

  • bootstrap_fraction: float, the fraction of the train set in the range [0.0 - 1.0] used to fit models during bootstrap (usually more than 0.5).

  • random_state: int, a seed to control the randomness of the whole model evaluation pipeline.

  • n_estimators: int, the number of estimators for the bootstrap used to compute subgroup stability metrics.

  • sensitive_attributes_dct: dict, a dictionary where keys are sensitive attribute names (including intersectional attributes), and values are disadvantaged values for these attributes. Intersectional attributes must include '&' between sensitive attributes. You do not need to specify disadvantaged values for intersectional groups since they will be derived from disadvantaged values in sensitive_attributes_dct for each separate sensitive attribute in this intersectional pair.

Note that the disadvantaged value in the sensitive attribute dictionary must be the same as in the original dataset. For example, if the distinct values of the sex column in the original dataset are 'F' and 'M', and after pre-processing they become 0 and 1 respectively, you still need to set the disadvantaged value as 'F' or 'M' in the sensitive attribute dictionary.
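For example, for the hypothetical sex column above, the dictionary entry should use the original raw value rather than the encoded one (this snippet is illustrative only and is not part of the COMPAS configuration below):

# Hypothetical dataset where the raw sex column stores 'F'/'M', later encoded as 0/1 by preprocessing
sensitive_attributes_dct = {'sex': 'F'}    # correct: the original raw value
# sensitive_attributes_dct = {'sex': 0}    # incorrect: the post-encoding value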

ROOT_DIR = os.getcwd()
config_yaml_path = os.path.join(ROOT_DIR, 'experiment_config.yaml')
config_yaml_content = \
"""dataset_name: COMPAS_Without_Sensitive_Attributes
bootstrap_fraction: 0.8
random_state: 42
n_estimators: 50  # In practice, use more than 100 estimators; 50 is used here only to keep this example fast
sensitive_attributes_dct: {'sex': 1, 'race': 'African-American', 'sex&race': None}
"""

with open(config_yaml_path, 'w', encoding='utf-8') as f:
    f.write(config_yaml_content)
config = create_config_obj(config_yaml_path=config_yaml_path)
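The returned config object exposes the yaml parameters as attributes, which we reuse further below (for example, config.dataset_name and config.sensitive_attributes_dct). A quick sanity check:

print(config.dataset_name)              # COMPAS_Without_Sensitive_Attributes
print(config.sensitive_attributes_dct)  # {'sex': 1, 'race': 'African-American', 'sex&race': None}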

Create a Dataset class

Based on the BaseDataset class, your dataset class should include the following attributes:

  • Obligatory attributes: dataset, target, features, numerical_columns, categorical_columns

  • Optional attributes: X_data, y_data, columns_with_nulls

For more details, please refer to the library documentation.
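To profile models on your own data, you can implement a similar wrapper class. The sketch below is schematic only: the CSV path and column names are hypothetical, and the import path and constructor keyword names are assumed to mirror the attribute list above, so verify them against the BaseDataset signature in the library documentation.

import pandas as pd
from virny.datasets.base import BaseDataset  # import path is an assumption; check the library docs


class MyTabularDataset(BaseDataset):
    """Schematic wrapper around a hypothetical tabular dataset stored in my_dataset.csv."""
    def __init__(self, dataset_path: str = 'my_dataset.csv'):
        df = pd.read_csv(dataset_path)
        numerical_columns = ['age', 'hours_per_week']     # hypothetical numerical features
        categorical_columns = ['workclass', 'education']  # hypothetical categorical features
        # Keyword names below mirror the obligatory attributes listed above;
        # verify them against the BaseDataset constructor in your library version.
        super().__init__(
            dataset=df,
            target='approved',                            # hypothetical binary target column
            features=numerical_columns + categorical_columns,
            numerical_columns=numerical_columns,
            categorical_columns=categorical_columns,
        )

In this example, however, we simply use the built-in CompasWithoutSensitiveAttrsDataset loader.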

data_loader = CompasWithoutSensitiveAttrsDataset()
data_loader.X_data[data_loader.X_data.columns[:5]].head()
   juv_fel_count  juv_misd_count  juv_other_count  priors_count  age_cat_25 - 45
0            0.0       -2.340451              1.0    -15.010999                1
1            0.0        0.000000              0.0      0.000000                1
2            0.0        0.000000              0.0      0.000000                0
3            0.0        0.000000              0.0      6.000000                1
4            0.0        0.000000              0.0      7.513697                1
column_transformer = ColumnTransformer(transformers=[
    ('categorical_features', OneHotEncoder(handle_unknown='ignore', sparse_output=False), data_loader.categorical_columns),
    ('numerical_features', StandardScaler(), data_loader.numerical_columns),
])
base_flow_dataset = preprocess_dataset(data_loader=data_loader,
                                       column_transformer=column_transformer,
                                       sensitive_attributes_dct=config.sensitive_attributes_dct,
                                       test_set_fraction=TEST_SET_FRACTION,
                                       dataset_split_seed=DATASET_SPLIT_SEED)

Create a models config

models_config is a Python dictionary, where keys are model names and values are initialized models for analysis.

models_config = {
    'DecisionTreeClassifier': DecisionTreeClassifier(criterion='gini',
                                                     max_depth=20,
                                                     max_features=0.6,
                                                     min_samples_split=0.1),
    'LogisticRegression': LogisticRegression(C=1,
                                             max_iter=50,
                                             penalty='l2',
                                             solver='newton-cg'),
    'RandomForestClassifier': RandomForestClassifier(max_depth=4,
                                                     max_features=0.6,
                                                     min_samples_leaf=1,
                                                     n_estimators=50),
    'XGBClassifier': XGBClassifier(learning_rate=0.1,
                                   max_depth=5,
                                   n_estimators=20),
}

Subgroup Metric Computation

After that, we need to input the BaseFlowDataset object, models config, and config yaml to the metric computation interface and execute it. The interface uses subgroup analyzers to compute different sets of metrics for each privileged and disadvantaged group. For now, our library supports the Subgroup Variance Analyzer and the Subgroup Error Analyzer, but it is easily extensible to other analyzers. When the variance and error analyzers complete metric computation, their metrics are combined, returned in a matrix format, and stored in the user-defined database using the input db_writer function.

import os
from dotenv import load_dotenv
from pymongo import MongoClient


load_dotenv(os.path.join(ROOT_DIR, 'secrets.env'))  # Load environment variables from secrets.env

# Provide the MongoDB Atlas connection string to connect Python to MongoDB using pymongo
CONNECTION_STRING = os.getenv("CONNECTION_STRING")
# Create a connection using MongoClient from pymongo
client = MongoClient(CONNECTION_STRING)
collection = client[os.getenv("DB_NAME")]['preprocessing_results']


def db_writer_func(run_models_metrics_df, collection=collection):
    run_models_metrics_df.columns = run_models_metrics_df.columns.str.lower()  # Rename Pandas columns to lower case
    collection.insert_many(run_models_metrics_df.to_dict('records'))
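The db_writer function only needs to accept the DataFrame of metrics produced for a run and persist it, so other storage backends can be plugged in. As a minimal sketch (assuming a hypothetical local SQLite file metrics.db and table model_metrics), a relational alternative could look like this:

import sqlite3

def sqlite_db_writer_func(run_models_metrics_df):
    # Append this run's metrics to a local SQLite table (hypothetical file and table names)
    conn = sqlite3.connect(os.path.join(ROOT_DIR, 'metrics.db'))
    try:
        run_models_metrics_df.to_sql('model_metrics', conn, if_exists='append', index=False)
        conn.commit()
    finally:
        conn.close()

In this example we keep the MongoDB-based db_writer_func defined above.
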
import uuid

custom_table_fields_dct = {
    'session_uuid': str(uuid.uuid4()),
    'preprocessing_techniques': 'get_dummies and scaler',
}
print('Current session uuid: ', custom_table_fields_dct['session_uuid'])
Current session uuid:  65f2800c-dea8-4760-89bd-40564b4e19fd
metrics_dct = compute_metrics_with_db_writer(base_flow_dataset, config, models_config, custom_table_fields_dct, db_writer_func,
                                             notebook_logs_stdout=True)
Analyze multiple models:   0%|          | 0/4 [00:00<?, ?it/s]
Classifiers testing by bootstrap:   0%|          | 0/50 [00:00<?, ?it/s]  (repeated for each of the 4 models)
Look at several columns in the top rows of the computed metrics

sample_model_metrics_df = metrics_dct[list(models_config.keys())[0]]
sample_model_metrics_df[sample_model_metrics_df.columns[:6]].head(20)
    Metric                   overall     sex_priv      sex_dis    race_priv     race_dis
0   Statistical_Bias        0.415777     0.411280     0.416900     0.411460     0.418561
1   Std                     0.070086     0.072965     0.069367     0.069672     0.070352
2   Mean_Prediction         0.519189     0.574330     0.505420     0.583615     0.477643
3   Overall_Uncertainty     0.885080     0.894485     0.882731     0.879480     0.888691
4   Aleatoric_Uncertainty   0.859123     0.866579     0.857261     0.853366     0.862836
5   IQR                     0.084150     0.081478     0.084817     0.085661     0.083176
6   Epistemic_Uncertainty   0.025957     0.027907     0.025470     0.026114     0.025856
7   Label_Stability         0.854811     0.842275     0.857941     0.865700     0.847788
8   Jitter                  0.111783     0.119586     0.109835     0.103488     0.117133
9   TPR                     0.656051     0.480000     0.689394     0.517007     0.719136
10  TNR                     0.735043     0.808824     0.712695     0.790262     0.688679
11  PPV                     0.665948     0.580645     0.679104     0.575758     0.701807
12  FNR                     0.343949     0.520000     0.310606     0.482993     0.280864
13  FPR                     0.264957     0.191176     0.287305     0.209738     0.311321
14  Accuracy                0.699811     0.691943     0.701775     0.693237     0.704050
15  F1                      0.660963     0.525547     0.684211     0.544803     0.710366
16  Selection-Rate          0.439394     0.293839     0.475740     0.318841     0.517134
17  Positive-Rate           0.985138     0.826667     1.015152     0.897959     1.024691
18  Sample_Size          1056.000000   211.000000   845.000000   414.000000   642.000000

Disparity Metric Composition

To compose disparity metrics, the Metric Composer should be applied. The Metric Composer is responsible for the second stage of the model audit. Currently, it computes our custom error disparity, stability disparity, and uncertainty disparity metrics, but extending it with new disparity metrics is straightforward. More and more disparity metrics have appeared over the last decade, yet most of them are based on the same group-specific metrics. Hence, separating the computation of group-specific and disparity metrics allows us to experiment with different combinations of group-specific metrics and to avoid recomputing group metrics for a new set of disparity metrics.
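To make this concrete, a disparity metric is typically a difference or a ratio between the disadvantaged and privileged values of a group-specific metric. Using the sex-group values from the DecisionTreeClassifier table above, a rough hand computation looks like this (the formulas follow common textbook definitions; the Metric Composer's exact formulas may differ slightly):

# Hand-composed examples from the sample metrics table above (illustrative only)
tpr_sex_dis, tpr_sex_priv = 0.689394, 0.480000
selection_rate_sex_dis, selection_rate_sex_priv = 0.475740, 0.293839

equalized_odds_tpr_sex = tpr_sex_dis - tpr_sex_priv                      # ~0.209: TPR gap between groups
disparate_impact_sex = selection_rate_sex_dis / selection_rate_sex_priv  # ~1.62: ratio of selection rates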

def read_model_metric_dfs_from_db(collection, session_uuid):
    cursor = collection.find({'session_uuid': session_uuid})
    records = []
    for record in cursor:
        del record['_id']
        records.append(record)

    model_metric_dfs = pd.DataFrame(records)

    # Capitalize column names to be consistent across the whole library
    new_column_names = []
    for col in model_metric_dfs.columns:
        new_col_name = '_'.join([c.capitalize() for c in col.split('_')])
        new_column_names.append(new_col_name)

    model_metric_dfs.columns = new_column_names
    return model_metric_dfs
model_metric_dfs = read_model_metric_dfs_from_db(collection, custom_table_fields_dct['session_uuid'])
models_metrics_dct = create_models_metrics_dct_from_database_df(model_metric_dfs)
metrics_composer = MetricsComposer(models_metrics_dct, config.sensitive_attributes_dct)

Compute composed metrics

models_composed_metrics_df = metrics_composer.compose_metrics()
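As with the group-specific metrics, the result is a Pandas DataFrame, so you can inspect it before visualization (the expected layout described in the comment is an assumption based on the configuration above):

# Expect one row per composed disparity metric and model, with one column per
# sensitive-attribute group configured above ('sex', 'race', 'sex&race')
models_composed_metrics_df.head()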

Metric Visualization

The Metric Visualizer allows us to build static visualizations for the computed metrics. It unifies different preprocessing methods for the computed metrics and creates the various data formats required for visualizations. Hence, users can simply call methods of the MetricsVisualizer class and get custom plots for diverse metric analysis.

visualizer = MetricsVisualizer(models_metrics_dct, models_composed_metrics_df, config.dataset_name,
                               model_names=list(models_config.keys()),
                               sensitive_attributes_dct=config.sensitive_attributes_dct)
visualizer.create_overall_metrics_bar_char(
    metric_names=['Accuracy', 'F1', 'TPR', 'TNR', 'PPV', 'Selection-Rate'],
    plot_title="Accuracy Metrics"
)
visualizer.create_overall_metrics_bar_char(
    metric_names=['Aleatoric_Uncertainty', 'Overall_Uncertainty', 'Label_Stability', 'Std', 'IQR', 'Jitter'],
    plot_title="Stability and Uncertainty Metrics"
)
visualizer.create_overall_metric_heatmap(
    model_names=list(models_config.keys()),
    metrics_lst=visualizer.all_accuracy_metrics + visualizer.all_uncertainty_metrics,
    tolerance=0.005,
)

[png: heatmap of overall accuracy and uncertainty metrics for each model]

visualizer.create_disparity_metric_heatmap(
    model_names=list(models_config.keys()),
    metrics_lst=[
        # Error disparity metrics
        'Equalized_Odds_TPR',
        'Equalized_Odds_FPR',
        'Disparate_Impact',
        # Stability disparity metrics
        'Label_Stability_Difference',
        'Aleatoric_Uncertainty_Difference',
        'Std_Ratio',
    ],
    groups_lst=config.sensitive_attributes_dct.keys(),
    tolerance=0.005,
)

[png: heatmap of disparity metrics for each model across sensitive attribute groups]

client.close()