{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
| 7 | + "Each machine learning algorithm has strengths and weaknesses. A weakness of decision trees is that they are prone to overfitting on the training set. A way to mitigate this problem is to constrain how large a tree can grow. Bagged trees try to overcome this weakness by using bootstrapped data to grow multiple deep decision trees. The idea is that many trees protect each other from individual weaknesses.\n", |
| 8 | + "\n", |
| 9 | + "\n", |
| 10 | + "In this video, I'll share with you how you can build a bagged tree model for regression." |
   ]
  },
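  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before reaching for sklearn's built-in implementation, here is a minimal sketch of what bagging does under the hood. Everything in it is made up for illustration (synthetic data, names like `X_demo`); the rest of the notebook uses `BaggingRegressor` on real data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A hand-rolled sketch of bagging (illustrative only)\n",
    "import numpy as np\n",
    "from sklearn.tree import DecisionTreeRegressor\n",
    "\n",
    "rng = np.random.RandomState(0)\n",
    "X_demo = rng.uniform(0, 10, size=(200, 1))\n",
    "y_demo = np.sin(X_demo.ravel()) + rng.normal(scale=0.3, size=200)\n",
    "\n",
    "trees = []\n",
    "for _ in range(25):\n",
    "    # Bootstrap: sample rows with replacement\n",
    "    idx = rng.randint(0, len(X_demo), size=len(X_demo))\n",
    "    tree = DecisionTreeRegressor()  # no depth limit, so each tree grows deep\n",
    "    tree.fit(X_demo[idx], y_demo[idx])\n",
    "    trees.append(tree)\n",
    "\n",
    "# The bagged prediction is the average of the individual trees' predictions\n",
    "bagged_pred = np.mean([tree.predict(X_demo) for tree in trees], axis=0)"
   ]
  },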
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Import Libraries"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%matplotlib inline\n",
    "\n",
    "import matplotlib.pyplot as plt\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "# Bagged Trees Regressor\n",
    "from sklearn.ensemble import BaggingRegressor"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "## Load the Dataset\n",
    "This dataset contains house sale prices for King County, which includes Seattle, for homes sold between May 2014 and May 2015. The code below loads the dataset. The goal is to predict the sale price of a house from features like the number of bedrooms and bathrooms."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "df = pd.read_csv('https://raw.githubusercontent.com/mGalarnyk/Tutorial_Data/master/King_County/kingCountyHouseData.csv')\n",
    "\n",
    "df.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# This notebook only selects a couple of features for simplicity\n",
    "# However, I encourage you to play with adding and subtracting features\n",
    "features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors']\n",
    "\n",
    "X = df.loc[:, features]\n",
    "\n",
    "y = df.loc[:, 'price'].values"
   ]
  },
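  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you do want to experiment with other features, a quick way to see the candidates is to list the dataset's columns. A small sketch; nothing here is specific to bagging:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Every column in the dataset is a candidate feature\n",
    "print(df.columns.tolist())"
   ]
  },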
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Splitting Data into Training and Test Sets"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)"
   ]
  },
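  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "By default, `train_test_split` holds out 25% of the rows as the test set (you can change this with its `test_size` parameter). A quick sanity check of the split sizes:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Roughly 75% of the rows should land in train and 25% in test\n",
    "print(X_train.shape, X_test.shape)"
   ]
  },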
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that another benefit of bagged trees, as with decision trees, is that you don't have to standardize your features, unlike algorithms such as logistic regression and K-nearest neighbors."
   ]
  },
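  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you want to convince yourself of this, here is a small sketch. Trees split on thresholds rather than distances, so standardizing the features should leave the score essentially unchanged (up to floating-point effects); `StandardScaler` appears here purely for the comparison:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.preprocessing import StandardScaler\n",
    "\n",
    "scaler = StandardScaler().fit(X_train)\n",
    "\n",
    "# Same model and seed, with raw and with standardized features\n",
    "raw = BaggingRegressor(n_estimators=10, random_state=0).fit(X_train, y_train)\n",
    "scaled = BaggingRegressor(n_estimators=10, random_state=0).fit(scaler.transform(X_train), y_train)\n",
    "\n",
    "print(raw.score(X_test, y_test))\n",
    "print(scaled.score(scaler.transform(X_test), y_test))"
   ]
  },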
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Bagged Trees\n",
    "\n",
    "<b>Step 1:</b> Import the model you want to use\n",
    "\n",
    "In sklearn, all machine learning models are implemented as Python classes."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# This was already imported earlier in the notebook, so it is commented out here\n",
    "# from sklearn.ensemble import BaggingRegressor"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<b>Step 2:</b> Make an instance of the model\n",
    "\n",
    "This is where we can tune the hyperparameters of the model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 100 bootstrapped decision trees, with a fixed seed for reproducibility\n",
    "reg = BaggingRegressor(n_estimators=100,\n",
    "                       random_state=0)"
   ]
  },
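  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`n_estimators` is not the only knob. Here is a sketch of a few other `BaggingRegressor` hyperparameters you could experiment with; the values below are arbitrary placeholders, not recommendations:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# An alternative configuration (illustrative values only)\n",
    "alt_reg = BaggingRegressor(\n",
    "    n_estimators=100,\n",
    "    max_samples=0.8,   # each tree sees 80% of the rows\n",
    "    max_features=0.8,  # each tree sees 80% of the columns\n",
    "    oob_score=True,    # score the ensemble on out-of-bag rows\n",
    "    random_state=0,\n",
    ")\n",
    "# After fitting, alt_reg.oob_score_ holds an out-of-bag estimate of R^2"
   ]
  },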
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<b>Step 3:</b> Train the model on the data, storing the information learned from the data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The model learns the relationship between X (features like the number of bedrooms) and y (price)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "reg.fit(X_train, y_train)"
   ]
  },
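  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To peek at what the ensemble learned, you can average the per-tree feature importances. A sketch: `BaggingRegressor` has no `feature_importances_` attribute of its own, but each fitted tree in its `estimators_` list does:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Average each feature's importance across the 100 fitted trees\n",
    "importances = np.mean([tree.feature_importances_ for tree in reg.estimators_], axis=0)\n",
    "\n",
    "for name, importance in zip(features, importances):\n",
    "    print(name, importance)"
   ]
  },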
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<b>Step 4:</b> Make predictions\n",
    "\n",
    "This uses the information the model learned during training."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Returns a NumPy array\n",
    "# Predict for one observation\n",
    "reg.predict(X_test.iloc[0].values.reshape(1, -1))"
   ]
  },
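  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The ensemble's prediction is just the average of the individual trees' predictions. A quick sketch to verify that for the first test observation, using the fitted trees stored in `reg.estimators_`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "row = X_test.iloc[0].values.reshape(1, -1)\n",
    "\n",
    "# One prediction per tree, then their mean; the two numbers should match\n",
    "per_tree = [tree.predict(row)[0] for tree in reg.estimators_]\n",
    "print(np.mean(per_tree))\n",
    "print(reg.predict(row)[0])"
   ]
  },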
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Predict for Multiple Observations at Once"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Predict for the first 10 rows of the test set\n",
    "reg.predict(X_test.iloc[0:10])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Measuring Model Performance"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Unlike classification models, where a common metric is accuracy, regression models use metrics like R^2, the coefficient of determination, to quantify performance. The best possible score is 1.0. A constant model that always predicts the expected value of y, disregarding the input features, would get an R^2 score of 0.0."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "score = reg.score(X_test, y_test)\n",
    "print(score)"
   ]
  },
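  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To connect `score` back to the definition above, here is R^2 computed by hand; it should match the value from `reg.score` up to floating-point error:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# R^2 = 1 - (residual sum of squares) / (total sum of squares)\n",
    "y_pred = reg.predict(X_test)\n",
    "ss_res = np.sum((y_test - y_pred) ** 2)\n",
    "ss_tot = np.sum((y_test - np.mean(y_test)) ** 2)\n",
    "print(1 - ss_res / ss_tot)"
   ]
  },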
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Tuning n_estimators (Number of Decision Trees)\n",
    "\n",
    "A tuning parameter for bagged trees is **n_estimators**, which represents the number of trees that should be grown."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# List of values to try for n_estimators:\n",
    "estimator_range = [1] + list(range(10, 150, 20))\n",
    "\n",
    "scores = []\n",
    "\n",
    "for estimator in estimator_range:\n",
    "    reg = BaggingRegressor(n_estimators=estimator, random_state=0)\n",
    "    reg.fit(X_train, y_train)\n",
    "    scores.append(reg.score(X_test, y_test))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.figure(figsize=(10, 7))\n",
    "plt.plot(estimator_range, scores)\n",
    "\n",
    "plt.xlabel('n_estimators', fontsize=20)\n",
    "plt.ylabel('Score', fontsize=20)\n",
    "plt.tick_params(labelsize=18)\n",
    "plt.grid()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Notice that the score stops improving after a certain number of estimators (decision trees). One way to get a better score would be to include more features in the feature matrix. So that's it; I encourage you to try building a bagged tree model on your own."
   ]
  }
 ],
 "metadata": {
  "anaconda-cloud": {},
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}