|
36 | 36 | "\n", |
37 | 37 | "<a id=\"Introduction\"></a>\n", |
38 | 38 | "## Introduction\n", |
39 | | - "This notebook shows how to use [Fairlearn (an open source fairness assessment and unfairness mitigation package)](http://fairlearn.github.io) and Azure Machine Learning Studio for a binary classification problem. This example uses the well-known adult census dataset. For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan. Its purpose is purely illustrative of a workflow including a fairness dashboard - in particular, we do **not** include a full discussion of the detailed issues which arise when considering fairness in machine learning. For such discussions, please [refer to the Fairlearn website](http://fairlearn.github.io/).\n", |
| 39 | + "This notebook shows how to use [Fairlearn (an open source fairness assessment and unfairness mitigation package)](http://fairlearn.org) and Azure Machine Learning Studio for a binary classification problem. This example uses the well-known adult census dataset. For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan. Its purpose is purely illustrative of a workflow including a fairness dashboard - in particular, we do **not** include a full discussion of the detailed issues which arise when considering fairness in machine learning. For such discussions, please [refer to the Fairlearn website](http://fairlearn.org/).\n", |
40 | 40 | "\n", |
41 | | - "We will apply the [grid search algorithm](https://fairlearn.github.io/master/api_reference/fairlearn.reductions.html#fairlearn.reductions.GridSearch) from the Fairlearn package using a specific notion of fairness called Demographic Parity. This produces a set of models, and we will view these in a dashboard both locally and in the Azure Machine Learning Studio.\n", |
| 41 | + "We will apply the [grid search algorithm](https://fairlearn.org/v0.4.6/api_reference/fairlearn.reductions.html#fairlearn.reductions.GridSearch) from the Fairlearn package using a specific notion of fairness called Demographic Parity. This produces a set of models, and we will view these in a dashboard both locally and in the Azure Machine Learning Studio.\n", |
42 | 42 | "\n", |
43 | 43 | "### Setup\n", |
44 | 44 | "\n", |
|
48 | 48 | "* `azureml-contrib-fairness`\n", |
49 | 49 | "* `fairlearn==0.4.6` (v0.5.0 will work with minor modifications)\n", |
50 | 50 | "* `joblib`\n", |
51 | | - "* `shap`\n", |
| 51 | + "* `liac-arff`\n", |
52 | 52 | "\n", |
53 | 53 | "Fairlearn relies on features introduced in v0.22.1 of `scikit-learn`. If you have an older version already installed, please uncomment and run the following cell:" |
54 | 54 | ] |
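As a hedged sketch, the environment for this notebook might be prepared with a cell along these lines (package pins are taken from the requirements list above; the actual upgrade cell follows in the notebook):

```python
# Illustrative setup cell; uncomment the lines you need
# !pip install azureml-contrib-fairness fairlearn==0.4.6 joblib liac-arff
# !pip install --upgrade "scikit-learn>=0.22.1"  # Fairlearn relies on scikit-learn >= 0.22.1
```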
|
88 | 88 | "from fairlearn.widget import FairlearnDashboard\n", |
89 | 89 | "\n", |
90 | 90 | "from sklearn.compose import ColumnTransformer\n", |
91 | | - "from sklearn.datasets import fetch_openml\n", |
92 | 91 | "from sklearn.impute import SimpleImputer\n", |
93 | 92 | "from sklearn.linear_model import LogisticRegression\n", |
94 | 93 | "from sklearn.model_selection import train_test_split\n", |
|
112 | 111 | "metadata": {}, |
113 | 112 | "outputs": [], |
114 | 113 | "source": [ |
115 | | - "from fairness_nb_utils import fetch_openml_with_retries\n", |
| 114 | + "from fairness_nb_utils import fetch_census_dataset\n", |
116 | 115 | "\n", |
117 | | - "data = fetch_openml_with_retries(data_id=1590)\n", |
| 116 | + "data = fetch_census_dataset()\n", |
118 | 117 | " \n", |
119 | 118 | "# Extract the items we want\n", |
120 | 119 | "X_raw = data.data\n", |
|
137 | 136 | "outputs": [], |
138 | 137 | "source": [ |
139 | 138 | "A = X_raw[['sex','race']]\n", |
140 | | - "X_raw = X_raw.drop(labels=['sex', 'race'],axis = 1)" |
| 139 | + "X_raw = X_raw.drop(labels=['sex', 'race'], axis = 1)" |
141 | 140 | ] |
142 | 141 | }, |
143 | 142 | { |
|
584 | 583 | "<a id=\"Conclusion\"></a>\n", |
585 | 584 | "## Conclusion\n", |
586 | 585 | "\n", |
587 | | - "In this notebook we have demonstrated how to use the `GridSearch` algorithm from Fairlearn to generate a collection of models, and then present them in the fairness dashboard in Azure Machine Learning Studio. Please remember that this notebook has not attempted to discuss the many considerations which should be part of any approach to unfairness mitigation. The [Fairlearn website](http://fairlearn.github.io/) provides that discussion" |
| 586 | + "In this notebook we have demonstrated how to use the `GridSearch` algorithm from Fairlearn to generate a collection of models, and then present them in the fairness dashboard in Azure Machine Learning Studio. Please remember that this notebook has not attempted to discuss the many considerations which should be part of any approach to unfairness mitigation. The [Fairlearn website](http://fairlearn.org/) provides that discussion" |
588 | 587 | ] |
589 | 588 | }, |
590 | 589 | { |
|