|
| 1 | +{ |
| 2 | + "cells": [ |
| 3 | + { |
| 4 | + "cell_type": "markdown", |
| 5 | + "metadata": {}, |
| 6 | + "source": [ |
| 7 | + "# Introduction to Statistical Learning \n", |
| 8 | + "Introduction to Statistical Learning by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani is considered a canonical text in the field of statistical/machine learning and is an absolutely fantastic way to move forward in your analytics career. [The text is free to download](http://www-bcf.usc.edu/~gareth/ISL/) and an [online course by the authors themselves](https://lagunita.stanford.edu/courses/HumanitiesSciences/StatLearning/Winter2016/about) is currently available in self-pace mode, meaning you can complete it any time. Make sure to **[REGISTER FOR THE STANDFORD COURSE!](https://lagunita.stanford.edu/courses/HumanitiesSciences/StatLearning/Winter2016/about)** The videos have also been [archived here on youtube](http://www.r-bloggers.com/in-depth-introduction-to-machine-learning-in-15-hours-of-expert-videos/).\n", |
| 9 | + "\n", |
| 10 | + "# How will Houston Data Science cover the course?\n", |
| 11 | + "The Stanford online course covers the entire book in 9 weeks and with the R programming language. The pace that we cover the book is yet to be determined as there are many unknown variables such as interest from members, availability of a venue and general level of skills of those participating. That said, a meeting once per week to discuss the current chapter or previous chapter solutions is the target.\n", |
| 12 | + "\n", |
| 13 | + "\n", |
| 14 | + "# Python in place of R\n", |
| 15 | + "Although R is a fantastic programming language and is the language that all the ISLR labs are written in, the Python programming language, except for rare exceptions, contains analgous libraries that contain the same statistical functionality as those in R.\n", |
| 16 | + "\n", |
| 17 | + "# Notes, Exercises and Programming Assignments all in the Jupyter Notebok\n", |
| 18 | + "ISLR has both end of chapter problems and programming assignments. All chapter problems and programming assignments will be answered in the notebook.\n", |
| 19 | + "\n", |
| 20 | + "# Replicating Plots\n", |
| 21 | + "The plots in ISLR are created in R. Many of them will be replicated here in the notebook when they appear in the text\n", |
| 22 | + "\n", |
| 23 | + "# Book Data\n", |
| 24 | + "The data from the books was downloaded using R. All the datasets are found in either the MASS or ISLR packages. They are now in the data directory. See below" |
| 25 | + ] |
| 26 | + }, |
| 27 | + { |
| 28 | + "cell_type": "code", |
| 29 | + "execution_count": 1, |
| 30 | + "metadata": { |
| 31 | + "collapsed": false |
| 32 | + }, |
| 33 | + "outputs": [ |
| 34 | + { |
| 35 | + "name": "stdout", |
| 36 | + "output_type": "stream", |
| 37 | + "text": [ |
| 38 | + "\u001b[31mAdvertising.csv\u001b[m\u001b[m* carseats.csv khan_xtrain.csv portfolio.csv\r\n", |
| 39 | + "Credit.csv college.csv khan_ytest.csv smarket.csv\r\n", |
| 40 | + "auto.csv default.csv khan_ytrain.csv usarrests.csv\r\n", |
| 41 | + "boston.csv hitters.csv nci60_data.csv \u001b[31mwage.csv\u001b[m\u001b[m*\r\n", |
| 42 | + "caravan.csv khan_xtest.csv nci60_labs.csv weekly.csv\r\n" |
| 43 | + ] |
| 44 | + } |
| 45 | + ], |
| 46 | + "source": [ |
| 47 | + "ls data" |
| 48 | + ] |
| 49 | + }, |
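|  | + { |
|  | + "cell_type": "markdown", |
|  | + "metadata": {}, |
|  | + "source": [ |
|  | + "Any of these files can be read with pandas (a minimal sketch: it assumes pandas is installed and picks auto.csv arbitrarily from the listing above; the other files load the same way):\n", |
|  | + "\n", |
|  | + "```python\n", |
|  | + "import pandas as pd\n", |
|  | + "\n", |
|  | + "# Read one of the book's datasets from the data directory\n", |
|  | + "auto = pd.read_csv('data/auto.csv')\n", |
|  | + "print(auto.shape)\n", |
|  | + "print(auto.head())\n", |
|  | + "```" |
|  | + ] |
|  | + }, |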
| 50 | + { |
| 51 | + "cell_type": "markdown", |
| 52 | + "metadata": {}, |
| 53 | + "source": [ |
| 54 | + "# ISLR Videos\n", |
| 55 | + "[All Old Videos](https://www.r-bloggers.com/in-depth-introduction-to-machine-learning-in-15-hours-of-expert-videos/)" |
| 56 | + ] |
| 57 | + }, |
| 58 | + { |
| 59 | + "cell_type": "markdown", |
| 60 | + "metadata": {}, |
| 61 | + "source": [ |
| 62 | + "# Chapter 9: Support Vector Machines\n", |
| 63 | + "\"New\" learning method invented by Vladamir Vapnik in the 1990's. Likely best classifier at its time, now surpassed by gradient boosted trees and neural networks.\n", |
| 64 | + "\n", |
| 65 | + "Three different but very closely related classifiers in this chapter\n", |
| 66 | + "* Maximum margin classifier\n", |
| 67 | + "* Support Vector classifier\n", |
| 68 | + "* Support vector machine\n", |
| 69 | + "\n", |
| 70 | + "## Maximum Margin Classifier\n", |
| 71 | + "An optimal hyperplane that separates classes. \n", |
| 72 | + "**Hyperplane** - For any p dimensional space, it is a p-1 dimensional flat surface. A line in 2 dimensions, a plane in three dimensions. Mathematical definition in p dimensions: $\\beta_0 + \\beta_1 X_1 + ... + \\beta_p X_p = 0$. It divides whatever your dimension is into two pieces.\n", |
| 73 | + "\n", |
| 74 | + "## Linearly Separable Case\n", |
| 75 | + "First and easiest we will look at a 2 dimensional data that is perfectly linearly separable. Here the hyperplane is a line. \n", |
| 76 | + "\n", |
| 77 | + "Many different lines can be drawn here to separate the data. For math simplification, lets let $y$ equal -1 for one class and the other 1, then if $\\beta_0 + \\beta_1 X_1 + ... + \\beta_p X_p > 0$ we will classify the observation as 1 and if $\\beta_0 + \\beta_1 X_1 + ... + \\beta_p X_p < 0$ we will classify it as -1. \n", |
| 78 | + "\n", |
| 79 | + "Multiplying both equations by $y$ yields $y(\\beta_0 + \\beta_1 X_1 + ... + \\beta_p X_p) > 0$ for any correctly classified observation.\n", |
| 80 | + "\n", |
| 81 | + "If the data is perfectly separable then an infinte number of hyperplanes will exist that can perfectly separate the data. A natural choice is to choose a hyperplane the maximizes the distance from each observation to the hyperplane - one that has a large margin - the maximum margin.\n", |
| 82 | + "\n", |
| 83 | + "## What defines maximum margin?\n", |
| 84 | + "In the linearly separable case we find the line that has the maximum margin between the two classes. The maximum margin is defined as the distance of the closet point to the separating hyperplane. So, we are maximizing the minimum distance from the hyperplane. All other points are of no consequence which is a bit scary but it happens to work well. These minimum distance points are called the support vectors.\n", |
| 85 | + "\n", |
| 86 | + "## Non-Separable Data\n", |
| 87 | + "If the data is not linearly separable then no hyperplane can separate the data and thus no margin can exist. This case is most common with real data. The maximum margin classifier is very sensitive to single data points. The hyperplane can change drastically with the addition of one new data point. To help combat this type of overfitting and to allow for non-separable classification we can use a soft margin. We allow some observation to be on the wrong side of the hyperplane or within the margin. This margin violation makes the margin 'soft'.\n", |
| 88 | + "\n", |
| 89 | + "The problem formulation is tweaked such that we allow for some total amount of error, C. This total error acts as an allowance like a balance in the bank that you can spend on the amount of error you can make. The errors are called slack variables. C is chosen through cross-validation.\n", |
| 90 | + "\n", |
| 91 | + "## Support Vector Machines\n", |
| 92 | + "For data that has a non-linear seaparating hyperplane, something different must be done. We can transform the variables as in previous chapters - squaring them, creating interaction terms, etc... or we can use kernels. The support vector machine can enlarge the feature space without doing these transformations in an efficient manner using kernels.\n", |
| 93 | + "\n", |
| 94 | + "The solution to SVM's involves only inner products of the observations. The decision boundary is just a weighted sum of the inner product between observations that are the support vectors. The inner product can be replaed with a kernel function. There are several different kernel functions. Linear kernel is just the standard inner product. Polynomial kernel is linear kernel taken to the power of a chosen polynomial. The radial basis funciton is proportional to the squared distance between points. All kernels measure a degree of closeness. So the further the two points in the kernel function are, the smaller the result of the kernel calculation.\n", |
| 95 | + "\n", |
| 96 | + "Kernels allow for very high dimensional (infinte with radial basis function) feature space enlargement without actually going into that space.\n", |
| 97 | + "\n", |
| 98 | + "## Multi-Class SVM\n", |
| 99 | + "Two different approaches for K classes where K > 2. One vs One constructs a different SVM for every pair of classes that exist. Test observations are assigned to the class that gets the most votes. One vs All constructs K SVMs where all observations are used - each class is compared to all other K-1 classes. The class with the greatest distance from the hyperplane is chosen." |
| 100 | + ] |
| 101 | + }, |
| 102 | + { |
| 103 | + "cell_type": "markdown", |
| 104 | + "metadata": {}, |
| 105 | + "source": [ |
| 106 | + "# for class\n", |
| 107 | + "do simple linearly separable case (hard margin) with y = 1/2x + 3 or something.\n", |
| 108 | + "\n", |
| 109 | + "Write data points (x1, x2), y where y is -1 or 1\n", |
| 110 | + "\n", |
| 111 | + "Make data points in a manner that one additional point of one class close to another class has tremendous influence on the line." |
| 112 | + ] |
| 113 | + } |
| 114 | + ], |
| 115 | + "metadata": { |
| 116 | + "anaconda-cloud": {}, |
| 117 | + "kernelspec": { |
| 118 | + "display_name": "Python [Root]", |
| 119 | + "language": "python", |
| 120 | + "name": "Python [Root]" |
| 121 | + }, |
| 122 | + "language_info": { |
| 123 | + "codemirror_mode": { |
| 124 | + "name": "ipython", |
| 125 | + "version": 3 |
| 126 | + }, |
| 127 | + "file_extension": ".py", |
| 128 | + "mimetype": "text/x-python", |
| 129 | + "name": "python", |
| 130 | + "nbconvert_exporter": "python", |
| 131 | + "pygments_lexer": "ipython3", |
| 132 | + "version": "3.5.2" |
| 133 | + } |
| 134 | + }, |
| 135 | + "nbformat": 4, |
| 136 | + "nbformat_minor": 0 |
| 137 | +} |