
Commit 1d05efa

update samples from Release-81 as a part of SDK release
1 parent 3adebd1 commit 1d05efa

File tree

31 files changed: +217 -1234 lines changed


configuration.ipynb

Lines changed: 1 addition & 1 deletion
@@ -103,7 +103,7 @@
 "source": [
 "import azureml.core\n",
 "\n",
-"print(\"This notebook was created using version 1.19.0 of the Azure ML SDK\")\n",
+"print(\"This notebook was created using version 1.20.0 of the Azure ML SDK\")\n",
 "print(\"You are currently using version\", azureml.core.VERSION, \"of the Azure ML SDK\")"
 ]
 },
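For reference, the updated check amounts to the following standalone cell (a minimal sketch assuming `azureml-core` is installed; the expected-version string is the only thing this commit changes):

    import azureml.core

    # Version string updated by this commit (1.19.0 -> 1.20.0)
    print("This notebook was created using version 1.20.0 of the Azure ML SDK")
    # Version actually installed in the current environment
    print("You are currently using version", azureml.core.VERSION, "of the Azure ML SDK")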

contrib/fairness/fairlearn-azureml-mitigation.ipynb

Lines changed: 98 additions & 37 deletions
@@ -46,7 +46,7 @@
 "Please see the [configuration notebook](../../configuration.ipynb) for information about creating one, if required.\n",
 "This notebook also requires the following packages:\n",
 "* `azureml-contrib-fairness`\n",
-"* `fairlearn==0.4.6`\n",
+"* `fairlearn==0.4.6` (v0.5.0 will work with minor modifications)\n",
 "* `joblib`\n",
 "* `shap`\n",
 "\n",
@@ -62,13 +62,20 @@
 "# !pip install --upgrade scikit-learn>=0.22.1"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Finally, please ensure that when you downloaded this notebook, you also downloaded the `fairness_nb_utils.py` file from the same location, and placed it in the same directory as this notebook."
+]
+},
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
 "<a id=\"LoadingData\"></a>\n",
 "## Loading the Data\n",
-"We use the well-known `adult` census dataset, which we load using `shap` (for convenience). We start with a fairly unremarkable set of imports:"
+"We use the well-known `adult` census dataset, which we will fetch from the OpenML website. We start with a fairly unremarkable set of imports:"
 ]
 },
 {
@@ -79,17 +86,24 @@
 "source": [
 "from fairlearn.reductions import GridSearch, DemographicParity, ErrorRate\n",
 "from fairlearn.widget import FairlearnDashboard\n",
-"from sklearn import svm\n",
-"from sklearn.preprocessing import LabelEncoder, StandardScaler\n",
+"\n",
+"from sklearn.compose import ColumnTransformer\n",
+"from sklearn.datasets import fetch_openml\n",
+"from sklearn.impute import SimpleImputer\n",
 "from sklearn.linear_model import LogisticRegression\n",
+"from sklearn.model_selection import train_test_split\n",
+"from sklearn.preprocessing import StandardScaler, OneHotEncoder\n",
+"from sklearn.compose import make_column_selector as selector\n",
+"from sklearn.pipeline import Pipeline\n",
+"\n",
 "import pandas as pd"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We can now load and inspect the data from the `shap` package:"
+"We can now load and inspect the data:"
 ]
 },
 {
@@ -98,13 +112,13 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from utilities import fetch_openml_with_retries\n",
+"from fairness_nb_utils import fetch_openml_with_retries\n",
 "\n",
 "data = fetch_openml_with_retries(data_id=1590)\n",
 " \n",
 "# Extract the items we want\n",
 "X_raw = data.data\n",
-"Y = (data.target == '>50K') * 1\n",
+"y = (data.target == '>50K') * 1\n",
 "\n",
 "X_raw[\"race\"].value_counts().to_dict()"
 ]
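The `fairness_nb_utils.py` helper itself is not shown in this diff, but a wrapper of roughly this shape would satisfy the call above (a hypothetical sketch; the real helper's signature and retry policy may differ, though `fetch_openml` with its `data_id` and `as_frame` parameters is standard scikit-learn):

    import time

    from sklearn.datasets import fetch_openml

    def fetch_openml_with_retries(data_id, max_retries=4, retry_delay=60):
        """Fetch an OpenML dataset as a DataFrame, retrying transient failures."""
        for attempt in range(max_retries):
            try:
                return fetch_openml(data_id=data_id, as_frame=True)
            except Exception:
                if attempt == max_retries - 1:
                    raise
                time.sleep(retry_delay)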
@@ -113,7 +127,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We are going to treat the sex of each individual as a protected attribute (where 0 indicates female and 1 indicates male), and in this particular case we are going separate this attribute out and drop it from the main data (this is not always the best option - see the [Fairlearn website](http://fairlearn.github.io/) for further discussion). We also separate out the Race column, but we will not perform any mitigation based on it. Finally, we perform some standard data preprocessing steps to convert the data into a format suitable for the ML algorithms"
+"We are going to treat the sex and race of each individual as protected attributes, and in this particular case we are going to remove these attributes from the main data (this is not always the best option - see the [Fairlearn website](http://fairlearn.github.io/) for further discussion). Protected attributes are often denoted by 'A' in the literature, and we follow that convention here:"
 ]
 },
 {
@@ -123,23 +137,14 @@
 "outputs": [],
 "source": [
 "A = X_raw[['sex','race']]\n",
-"X = X_raw.drop(labels=['sex', 'race'],axis = 1)\n",
-"X_dummies = pd.get_dummies(X)\n",
-"\n",
-"sc = StandardScaler()\n",
-"X_scaled = sc.fit_transform(X_dummies)\n",
-"X_scaled = pd.DataFrame(X_scaled, columns=X_dummies.columns)\n",
-"\n",
-"\n",
-"le = LabelEncoder()\n",
-"Y = le.fit_transform(Y)"
+"X_raw = X_raw.drop(labels=['sex', 'race'],axis = 1)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"With our data prepared, we can make the conventional split in to 'test' and 'train' subsets:"
+"We now preprocess our data. To avoid the problem of data leakage, we split our data into training and test sets before performing any other transformations. Subsequent transformations (such as scalings) will be fit to the training data set, and then applied to the test dataset."
 ]
 },
 {
@@ -148,21 +153,76 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from sklearn.model_selection import train_test_split\n",
-"X_train, X_test, Y_train, Y_test, A_train, A_test = train_test_split(X_scaled, \n",
-"                                                                      Y, \n",
-"                                                                      A,\n",
-"                                                                      test_size = 0.2,\n",
-"                                                                      random_state=0,\n",
-"                                                                      stratify=Y)\n",
-"\n",
-"# Work around indexing issue\n",
+"(X_train, X_test, y_train, y_test, A_train, A_test) = train_test_split(\n",
+"    X_raw, y, A, test_size=0.3, random_state=12345, stratify=y\n",
+")\n",
+"\n",
+"# Ensure indices are aligned between X, y and A,\n",
+"# after all the slicing and splitting of DataFrames\n",
+"# and Series\n",
+"\n",
 "X_train = X_train.reset_index(drop=True)\n",
-"A_train = A_train.reset_index(drop=True)\n",
 "X_test = X_test.reset_index(drop=True)\n",
+"y_train = y_train.reset_index(drop=True)\n",
+"y_test = y_test.reset_index(drop=True)\n",
+"A_train = A_train.reset_index(drop=True)\n",
 "A_test = A_test.reset_index(drop=True)"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"We have two types of column in the dataset - categorical columns which will need to be one-hot encoded, and numeric ones which will need to be rescaled. We also need to take care of missing values. We use a simple approach here, but please bear in mind that this is another way that bias could be introduced (especially if one subgroup tends to have more missing values).\n",
+"\n",
+"For this preprocessing, we make use of `Pipeline` objects from `sklearn`:"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"numeric_transformer = Pipeline(\n",
+"    steps=[\n",
+"        (\"impute\", SimpleImputer()),\n",
+"        (\"scaler\", StandardScaler()),\n",
+"    ]\n",
+")\n",
+"\n",
+"categorical_transformer = Pipeline(\n",
+"    [\n",
+"        (\"impute\", SimpleImputer(strategy=\"most_frequent\")),\n",
+"        (\"ohe\", OneHotEncoder(handle_unknown=\"ignore\", sparse=False)),\n",
+"    ]\n",
+")\n",
+"\n",
+"preprocessor = ColumnTransformer(\n",
+"    transformers=[\n",
+"        (\"num\", numeric_transformer, selector(dtype_exclude=\"category\")),\n",
+"        (\"cat\", categorical_transformer, selector(dtype_include=\"category\")),\n",
+"    ]\n",
+")"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Now that the preprocessing pipeline is defined, we can run it on our training data, and apply the generated transform to our test data:"
+]
+},
+{
+"cell_type": "code",
+"execution_count": null,
+"metadata": {},
+"outputs": [],
+"source": [
+"X_train = preprocessor.fit_transform(X_train)\n",
+"X_test = preprocessor.transform(X_test)"
+]
+},
 {
 "cell_type": "markdown",
 "metadata": {},
@@ -181,7 +241,7 @@
 "source": [
 "unmitigated_predictor = LogisticRegression(solver='liblinear', fit_intercept=True)\n",
 "\n",
-"unmitigated_predictor.fit(X_train, Y_train)"
+"unmitigated_predictor.fit(X_train, y_train)"
 ]
 },
 {
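Only the target variable name changes here; a quick hold-out evaluation of the refitted model might look like this (a sketch; `accuracy_score` is standard scikit-learn and this cell is not part of the diff):

    from sklearn.metrics import accuracy_score

    # Accuracy of the unmitigated model on the 30% test split
    test_accuracy = accuracy_score(y_test, unmitigated_predictor.predict(X_test))
    print(f"Unmitigated test accuracy: {test_accuracy:.3f}")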
@@ -198,7 +258,7 @@
 "outputs": [],
 "source": [
 "FairlearnDashboard(sensitive_features=A_test, sensitive_feature_names=['Sex', 'Race'],\n",
-"                   y_true=Y_test,\n",
+"                   y_true=y_test,\n",
 "                   y_pred={\"unmitigated\": unmitigated_predictor.predict(X_test)})"
 ]
 },
@@ -249,9 +309,10 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"sweep.fit(X_train, Y_train,\n",
+"sweep.fit(X_train, y_train,\n",
 "          sensitive_features=A_train.sex)\n",
 "\n",
+"# For Fairlearn v0.5.0, need sweep.predictors_\n",
 "predictors = sweep._predictors"
 ]
 },
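The new comment flags the attribute rename in Fairlearn v0.5.0 (`_predictors` becomes `predictors_`); a version-agnostic lookup could be written as follows (a hypothetical shim, not part of the notebook):

    # Prefer the public v0.5.0 attribute, fall back to the v0.4.6 private one
    predictors = sweep.predictors_ if hasattr(sweep, "predictors_") else sweep._predictors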
@@ -273,9 +334,9 @@
 "    classifier = lambda X: m.predict(X)\n",
 "    \n",
 "    error = ErrorRate()\n",
-"    error.load_data(X_train, pd.Series(Y_train), sensitive_features=A_train.sex)\n",
+"    error.load_data(X_train, pd.Series(y_train), sensitive_features=A_train.sex)\n",
 "    disparity = DemographicParity()\n",
-"    disparity.load_data(X_train, pd.Series(Y_train), sensitive_features=A_train.sex)\n",
+"    disparity.load_data(X_train, pd.Series(y_train), sensitive_features=A_train.sex)\n",
 "    \n",
 "    errors.append(error.gamma(classifier)[0])\n",
 "    disparities.append(disparity.gamma(classifier).max())\n",
@@ -329,15 +390,15 @@
 "source": [
 "FairlearnDashboard(sensitive_features=A_test, \n",
 "                   sensitive_feature_names=['Sex', 'Race'],\n",
-"                   y_true=Y_test.tolist(),\n",
+"                   y_true=y_test.tolist(),\n",
 "                   y_pred=predictions_dominant)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"When using sex as the sensitive feature, we see a Pareto front forming - the set of predictors which represent optimal tradeoffs between accuracy and disparity in predictions. In the ideal case, we would have a predictor at (1,0) - perfectly accurate and without any unfairness under demographic parity (with respect to the protected attribute \"sex\"). The Pareto front represents the closest we can come to this ideal based on our data and choice of estimator. Note the range of the axes - the disparity axis covers more values than the accuracy, so we can reduce disparity substantially for a small loss in accuracy. Finally, we also see that the unmitigated model is towards the top right of the plot, with high accuracy, but worst disparity.\n",
+"When using sex as the sensitive feature and accuracy as the metric, we see a Pareto front forming - the set of predictors which represent optimal tradeoffs between accuracy and disparity in predictions. In the ideal case, we would have a predictor at (1,0) - perfectly accurate and without any unfairness under demographic parity (with respect to the protected attribute \"sex\"). The Pareto front represents the closest we can come to this ideal based on our data and choice of estimator. Note the range of the axes - the disparity axis covers more values than the accuracy, so we can reduce disparity substantially for a small loss in accuracy. Finally, we also see that the unmitigated model is towards the top right of the plot, with high accuracy, but worst disparity.\n",
 "\n",
 "By clicking on individual models on the plot, we can inspect their metrics for disparity and accuracy in greater detail. In a real example, we would then pick the model which represented the best trade-off between accuracy and disparity given the relevant business constraints."
 ]
@@ -444,7 +505,7 @@
 "from fairlearn.metrics._group_metric_set import _create_group_metric_set\n",
 "\n",
 "\n",
-"dash_dict = _create_group_metric_set(y_true=Y_test,\n",
+"dash_dict = _create_group_metric_set(y_true=y_test,\n",
 "                                     predictions=predictions_dominant_ids,\n",
 "                                     sensitive_features=sf,\n",
 "                                     prediction_type='binary_classification')"
utilities.py -> fairness_nb_utils.py: file renamed without changes.

0 commit comments

Comments
 (0)