|
2 | 2 | "cells": [ |
3 | 3 | { |
4 | 4 | "cell_type": "markdown", |
5 | | - "metadata": {}, |
6 | 5 | "source": [ |
7 | 6 | "Copyright (c) Microsoft Corporation. All rights reserved.\n", |
8 | 7 | "\n", |
9 | 8 | "Licensed under the MIT License." |
10 | | - ] |
| 9 | + ], |
| 10 | + "metadata": {} |
11 | 11 | }, |
12 | 12 | { |
13 | 13 | "cell_type": "markdown", |
14 | | - "metadata": {}, |
15 | 14 | "source": [ |
16 | 15 | "" |
17 | | - ] |
| 16 | + ], |
| 17 | + "metadata": {} |
18 | 18 | }, |
19 | 19 | { |
20 | 20 | "cell_type": "markdown", |
21 | | - "metadata": {}, |
22 | 21 | "source": [ |
23 | 22 | "# Register Spark Model and deploy as Webservice\n", |
24 | 23 | "\n", |
25 | 24 | "This example shows how to deploy a Webservice in step-by-step fashion:\n", |
26 | 25 | "\n", |
27 | 26 | " 1. Register Spark Model\n", |
28 | 27 | " 2. Deploy Spark Model as Webservice" |
29 | | - ] |
| 28 | + ], |
| 29 | + "metadata": {} |
30 | 30 | }, |
31 | 31 | { |
32 | 32 | "cell_type": "markdown", |
33 | | - "metadata": {}, |
34 | 33 | "source": [ |
35 | 34 | "## Prerequisites\n", |
36 | 35 | "If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the [configuration](../../../configuration.ipynb) Notebook first if you haven't." |
37 | | - ] |
| 36 | + ], |
| 37 | + "metadata": {} |
38 | 38 | }, |
39 | 39 | { |
40 | 40 | "cell_type": "code", |
41 | 41 | "execution_count": null, |
42 | | - "metadata": {}, |
43 | | - "outputs": [], |
44 | 42 | "source": [ |
45 | | - "# Check core SDK version number\n", |
46 | | - "import azureml.core\n", |
47 | | - "\n", |
| 43 | + "# Check core SDK version number\r\n", |
| 44 | + "import azureml.core\r\n", |
| 45 | + "\r\n", |
48 | 46 | "print(\"SDK version:\", azureml.core.VERSION)" |
49 | | - ] |
| 47 | + ], |
| 48 | + "outputs": [], |
| 49 | + "metadata": {} |
50 | 50 | }, |
51 | 51 | { |
52 | 52 | "cell_type": "markdown", |
53 | | - "metadata": {}, |
54 | 53 | "source": [ |
55 | 54 | "## Initialize Workspace\n", |
56 | 55 | "\n", |
57 | 56 | "Initialize a workspace object from persisted configuration." |
58 | | - ] |
| 57 | + ], |
| 58 | + "metadata": {} |
59 | 59 | }, |
60 | 60 | { |
61 | 61 | "cell_type": "code", |
62 | 62 | "execution_count": null, |
| 63 | + "source": [ |
| 64 | + "from azureml.core import Workspace\r\n", |
| 65 | + "\r\n", |
| 66 | + "ws = Workspace.from_config()\r\n", |
| 67 | + "print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')" |
| 68 | + ], |
| 69 | + "outputs": [], |
63 | 70 | "metadata": { |
64 | 71 | "tags": [ |
65 | 72 | "create workspace" |
66 | 73 | ] |
67 | | - }, |
68 | | - "outputs": [], |
69 | | - "source": [ |
70 | | - "from azureml.core import Workspace\n", |
71 | | - "\n", |
72 | | - "ws = Workspace.from_config()\n", |
73 | | - "print(ws.name, ws.resource_group, ws.location, ws.subscription_id, sep='\\n')" |
74 | | - ] |
| 74 | + } |
75 | 75 | }, |
76 | 76 | { |
77 | 77 | "cell_type": "markdown", |
78 | | - "metadata": {}, |
79 | 78 | "source": [ |
80 | 79 | "### Register Model" |
81 | | - ] |
| 80 | + ], |
| 81 | + "metadata": {} |
82 | 82 | }, |
83 | 83 | { |
84 | 84 | "cell_type": "markdown", |
85 | | - "metadata": {}, |
86 | 85 | "source": [ |
87 | 86 | "You can add tags and descriptions to your Models. Note you need to have a `iris.model` file in the current directory. This model file is generated using [train in spark](../training/train-in-spark/train-in-spark.ipynb) notebook. The below call registers that file as a Model with the same name `iris.model` in the workspace.\n", |
88 | 87 | "\n", |
89 | 88 | "Using tags, you can track useful information such as the name and version of the machine learning library used to train the model. Note that tags must be alphanumeric." |
90 | | - ] |
| 89 | + ], |
| 90 | + "metadata": {} |
91 | 91 | }, |
92 | 92 | { |
93 | 93 | "cell_type": "code", |
94 | 94 | "execution_count": null, |
| 95 | + "source": [ |
| 96 | + "from azureml.core.model import Model\r\n", |
| 97 | + "\r\n", |
| 98 | + "model = Model.register(model_path=\"iris.model\",\r\n", |
| 99 | + " model_name=\"iris.model\",\r\n", |
| 100 | + " tags={'type': \"regression\"},\r\n", |
| 101 | + " description=\"Logistic regression model to predict iris species\",\r\n", |
| 102 | + " workspace=ws)" |
| 103 | + ], |
| 104 | + "outputs": [], |
95 | 105 | "metadata": { |
96 | 106 | "tags": [ |
97 | 107 | "register model from file" |
98 | 108 | ] |
99 | | - }, |
100 | | - "outputs": [], |
101 | | - "source": [ |
102 | | - "from azureml.core.model import Model\n", |
103 | | - "\n", |
104 | | - "model = Model.register(model_path=\"iris.model\",\n", |
105 | | - " model_name=\"iris.model\",\n", |
106 | | - " tags={'type': \"regression\"},\n", |
107 | | - " description=\"Logistic regression model to predict iris species\",\n", |
108 | | - " workspace=ws)" |
109 | | - ] |
| 109 | + } |
110 | 110 | }, |
111 | 111 | { |
112 | 112 | "cell_type": "markdown", |
113 | | - "metadata": {}, |
114 | 113 | "source": [ |
115 | 114 | "### Fetch Environment" |
116 | | - ] |
| 115 | + ], |
| 116 | + "metadata": {} |
117 | 117 | }, |
118 | 118 | { |
119 | 119 | "cell_type": "markdown", |
120 | | - "metadata": {}, |
121 | 120 | "source": [ |
122 | 121 | "You can now create and/or use an Environment object when deploying a Webservice. The Environment can have been previously registered with your Workspace, or it will be registered with it as a part of the Webservice deployment.\n", |
123 | 122 | "\n", |
124 | 123 | "In this notebook, we will be using 'AzureML-PySpark-MmlSpark-0.15', a curated environment.\n", |
125 | 124 | "\n", |
126 | 125 | "More information can be found in our [using environments notebook](../training/using-environments/using-environments.ipynb)." |
127 | | - ] |
| 126 | + ], |
| 127 | + "metadata": {} |
128 | 128 | }, |
129 | 129 | { |
130 | 130 | "cell_type": "code", |
131 | 131 | "execution_count": null, |
132 | | - "metadata": {}, |
133 | | - "outputs": [], |
134 | 132 | "source": [ |
135 | | - "from azureml.core import Environment\n", |
136 | | - "\n", |
137 | | - "env = Environment.get(ws, name='AzureML-PySpark-MmlSpark-0.15')\n" |
138 | | - ] |
| 133 | + "from azureml.core import Environment\r\n", |
| 134 | + "from azureml.core.environment import SparkPackage\r\n", |
| 135 | + "from azureml.core.conda_dependencies import CondaDependencies\r\n", |
| 136 | + "\r\n", |
| 137 | + "myenv = Environment('my-pyspark-environment')\r\n", |
| 138 | + "myenv.docker.base_image = \"mcr.microsoft.com/mmlspark/release:0.15\"\r\n", |
| 139 | + "myenv.inferencing_stack_version = \"latest\"\r\n", |
| 140 | + "myenv.python.conda_dependencies = CondaDependencies.create(pip_packages=[\"azureml-core\",\"azureml-defaults\",\"azureml-telemetry\",\"azureml-train-restclients-hyperdrive\",\"azureml-train-core\"], python_version=\"3.6.2\")\r\n", |
| 141 | + "myenv.python.conda_dependencies.add_channel(\"conda-forge\")\r\n", |
| 142 | + "myenv.spark.packages = [SparkPackage(\"com.microsoft.ml.spark\", \"mmlspark_2.11\", \"0.15\"), SparkPackage(\"com.microsoft.azure\", \"azure-storage\", \"2.0.0\"), SparkPackage(\"org.apache.hadoop\", \"hadoop-azure\", \"2.7.0\")]\r\n", |
| 143 | + "myenv.spark.repositories = [\"https://mmlspark.azureedge.net/maven\"]\r\n" |
| 144 | + ], |
| 145 | + "outputs": [], |
| 146 | + "metadata": {} |
139 | 147 | }, |
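The cell above only defines the environment; it gets registered automatically when the model is deployed. If you want to reuse it from other notebooks or scripts, you can also register and fetch it explicitly. A minimal sketch, reusing the `myenv` and `ws` objects from the cells above (the explicit registration step is optional and is not required by this notebook):

```python
from azureml.core import Environment

# Register the environment definition with the workspace so it is versioned
# and retrievable by name later.
registered_env = myenv.register(workspace=ws)
print(registered_env.name, registered_env.version)

# In a later session, fetch the registered environment by name instead of redefining it.
fetched_env = Environment.get(workspace=ws, name="my-pyspark-environment")
```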
140 | 148 | { |
141 | 149 | "cell_type": "markdown", |
142 | | - "metadata": {}, |
143 | 150 | "source": [ |
144 | 151 | "## Create Inference Configuration\n", |
145 | 152 | "\n", |
|
157 | 164 | " - source_directory = holds source path as string, this entire folder gets added in image so its really easy to access any files within this folder or subfolder\n", |
158 | 165 | " - entry_script = contains logic specific to initializing your model and running predictions\n", |
159 | 166 | " - environment = An environment object to use for the deployment. Doesn't have to be registered" |
160 | | - ] |
| 167 | + ], |
| 168 | + "metadata": {} |
161 | 169 | }, |
162 | 170 | { |
163 | 171 | "cell_type": "code", |
164 | 172 | "execution_count": null, |
| 173 | + "source": [ |
| 174 | + "from azureml.core.model import InferenceConfig\r\n", |
| 175 | + "\r\n", |
| 176 | + "inference_config = InferenceConfig(entry_script=\"score.py\", environment=myenv)" |
| 177 | + ], |
| 178 | + "outputs": [], |
165 | 179 | "metadata": { |
166 | 180 | "tags": [ |
167 | 181 | "create image" |
168 | 182 | ] |
169 | | - }, |
170 | | - "outputs": [], |
171 | | - "source": [ |
172 | | - "from azureml.core.model import InferenceConfig\n", |
173 | | - "\n", |
174 | | - "inference_config = InferenceConfig(entry_script=\"score.py\", environment=env)" |
175 | | - ] |
| 183 | + } |
176 | 184 | }, |
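The `entry_script` passed to `InferenceConfig` above, `score.py`, is not shown in this diff. Below is a minimal sketch of what such a script could look like, assuming the registered `iris.model` is a Spark ML `PipelineModel` and that requests use the JSON shape from the test cell further below (both are assumptions; the actual `score.py` in the repository may differ):

```python
import json
import os
import traceback

from pyspark.ml import PipelineModel
from pyspark.ml.linalg import Vectors
from pyspark.sql import SparkSession

spark = None
model = None


def init():
    """Called once when the container starts: create a Spark session and load the model."""
    global spark, model
    spark = SparkSession.builder.appName("iris-scoring").getOrCreate()
    # AZUREML_MODEL_DIR points to the folder where registered models are mounted.
    model_path = os.path.join(os.environ["AZUREML_MODEL_DIR"], "iris.model")
    model = PipelineModel.load(model_path)


def run(raw_data):
    """Called for every scoring request; raw_data is the request body."""
    try:
        payload = json.loads(raw_data)
        # Assumes the payload carries a dense feature vector, e.g.
        # {"features": {"type": 1, "values": [4.3, 3.0, 1.1, 0.1]}, "label": 2.0}
        values = payload["features"]["values"]
        df = spark.createDataFrame([(Vectors.dense(values),)], ["features"])
        prediction = model.transform(df).collect()[0]["prediction"]
        return json.dumps({"prediction": prediction})
    except Exception:
        return json.dumps({"error": traceback.format_exc()})
```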
177 | 185 | { |
178 | 186 | "cell_type": "markdown", |
179 | | - "metadata": {}, |
180 | 187 | "source": [ |
181 | 188 | "### Deploy Model as Webservice on Azure Container Instance\n", |
182 | 189 | "\n", |
183 | 190 | "Note that the service creation can take few minutes." |
184 | | - ] |
| 191 | + ], |
| 192 | + "metadata": {} |
185 | 193 | }, |
186 | 194 | { |
187 | 195 | "cell_type": "code", |
188 | 196 | "execution_count": null, |
| 197 | + "source": [ |
| 198 | + "from azureml.core.webservice import AciWebservice, Webservice\r\n", |
| 199 | + "from azureml.exceptions import WebserviceException\r\n", |
| 200 | + "\r\n", |
| 201 | + "deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)\r\n", |
| 202 | + "aci_service_name = 'aciservice1'\r\n", |
| 203 | + "\r\n", |
| 204 | + "try:\r\n", |
| 205 | + " # if you want to get existing service below is the command\r\n", |
| 206 | + " # since aci name needs to be unique in subscription deleting existing aci if any\r\n", |
| 207 | + " # we use aci_service_name to create azure aci\r\n", |
| 208 | + " service = Webservice(ws, name=aci_service_name)\r\n", |
| 209 | + " if service:\r\n", |
| 210 | + " service.delete()\r\n", |
| 211 | + "except WebserviceException as e:\r\n", |
| 212 | + " print()\r\n", |
| 213 | + "\r\n", |
| 214 | + "service = Model.deploy(ws, aci_service_name, [model], inference_config, deployment_config)\r\n", |
| 215 | + "\r\n", |
| 216 | + "service.wait_for_deployment(True)\r\n", |
| 217 | + "print(service.state)" |
| 218 | + ], |
| 219 | + "outputs": [], |
189 | 220 | "metadata": { |
190 | 221 | "tags": [ |
191 | 222 | "azuremlexception-remarks-sample" |
192 | 223 | ] |
193 | | - }, |
194 | | - "outputs": [], |
195 | | - "source": [ |
196 | | - "from azureml.core.webservice import AciWebservice, Webservice\n", |
197 | | - "from azureml.exceptions import WebserviceException\n", |
198 | | - "\n", |
199 | | - "deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)\n", |
200 | | - "aci_service_name = 'aciservice1'\n", |
201 | | - "\n", |
202 | | - "try:\n", |
203 | | - " # if you want to get existing service below is the command\n", |
204 | | - " # since aci name needs to be unique in subscription deleting existing aci if any\n", |
205 | | - " # we use aci_service_name to create azure aci\n", |
206 | | - " service = Webservice(ws, name=aci_service_name)\n", |
207 | | - " if service:\n", |
208 | | - " service.delete()\n", |
209 | | - "except WebserviceException as e:\n", |
210 | | - " print()\n", |
211 | | - "\n", |
212 | | - "service = Model.deploy(ws, aci_service_name, [model], inference_config, deployment_config)\n", |
213 | | - "\n", |
214 | | - "service.wait_for_deployment(True)\n", |
215 | | - "print(service.state)" |
216 | | - ] |
| 224 | + } |
217 | 225 | }, |
218 | 226 | { |
219 | 227 | "cell_type": "markdown", |
220 | | - "metadata": {}, |
221 | 228 | "source": [ |
222 | 229 | "#### Test web service" |
223 | | - ] |
| 230 | + ], |
| 231 | + "metadata": {} |
224 | 232 | }, |
225 | 233 | { |
226 | 234 | "cell_type": "code", |
227 | 235 | "execution_count": null, |
228 | | - "metadata": {}, |
229 | | - "outputs": [], |
230 | 236 | "source": [ |
231 | | - "import json\n", |
232 | | - "test_sample = json.dumps({'features':{'type':1,'values':[4.3,3.0,1.1,0.1]},'label':2.0})\n", |
233 | | - "\n", |
234 | | - "test_sample_encoded = bytes(test_sample, encoding='utf8')\n", |
235 | | - "prediction = service.run(input_data=test_sample_encoded)\n", |
| 237 | + "import json\r\n", |
| 238 | + "test_sample = json.dumps({'features':{'type':1,'values':[4.3,3.0,1.1,0.1]},'label':2.0})\r\n", |
| 239 | + "\r\n", |
| 240 | + "test_sample_encoded = bytes(test_sample, encoding='utf8')\r\n", |
| 241 | + "prediction = service.run(input_data=test_sample_encoded)\r\n", |
236 | 242 | "print(prediction)" |
237 | | - ] |
| 243 | + ], |
| 244 | + "outputs": [], |
| 245 | + "metadata": {} |
238 | 246 | }, |
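The deployed endpoint can also be called over plain HTTP from any client, not just through `service.run()`. A minimal sketch using `requests` (this assumes key-based auth is left disabled, the default for ACI deployments; with auth enabled you would add an `Authorization: Bearer <key>` header):

```python
import json

import requests

# service.scoring_uri is the public REST endpoint created by the ACI deployment.
headers = {"Content-Type": "application/json"}
test_sample = json.dumps({"features": {"type": 1, "values": [4.3, 3.0, 1.1, 0.1]}, "label": 2.0})

response = requests.post(service.scoring_uri, data=test_sample, headers=headers)
print(response.status_code, response.text)
```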
239 | 247 | { |
240 | 248 | "cell_type": "markdown", |
241 | | - "metadata": {}, |
242 | 249 | "source": [ |
243 | 250 | "#### Delete ACI to clean up" |
244 | | - ] |
| 251 | + ], |
| 252 | + "metadata": {} |
245 | 253 | }, |
246 | 254 | { |
247 | 255 | "cell_type": "code", |
248 | 256 | "execution_count": null, |
| 257 | + "source": [ |
| 258 | + "service.delete()" |
| 259 | + ], |
| 260 | + "outputs": [], |
249 | 261 | "metadata": { |
250 | 262 | "tags": [ |
251 | 263 | "deploy service", |
252 | 264 | "aci" |
253 | 265 | ] |
254 | | - }, |
255 | | - "outputs": [], |
256 | | - "source": [ |
257 | | - "service.delete()" |
258 | | - ] |
| 266 | + } |
259 | 267 | }, |
260 | 268 | { |
261 | 269 | "cell_type": "markdown", |
262 | | - "metadata": {}, |
263 | 270 | "source": [ |
264 | 271 | "### Model Profiling\n", |
265 | 272 | "\n", |
|
271 | 278 | "profiling_results = profile.get_results()\n", |
272 | 279 | "print(profiling_results)\n", |
273 | 280 | "```" |
274 | | - ] |
| 281 | + ], |
| 282 | + "metadata": {} |
275 | 283 | }, |
276 | 284 | { |
277 | 285 | "cell_type": "markdown", |
278 | | - "metadata": {}, |
279 | 286 | "source": [ |
280 | 287 | "### Model Packaging\n", |
281 | 288 | "\n", |
|
296 | 303 | "package.wait_for_creation(show_output=True)\n", |
297 | 304 | "package.save(\"./local_context_dir\")\n", |
298 | 305 | "```" |
299 | | - ] |
| 306 | + ], |
| 307 | + "metadata": {} |
300 | 308 | } |
301 | 309 | ], |
302 | 310 | "metadata": { |
|