|
12 | 12 | "cell_type": "markdown", |
13 | 13 | "metadata": {}, |
14 | 14 | "source": [ |
15 | | - "# 01. Handwritten Digit Classification (MNIST) using ONNX Runtime on AzureML\n", |
| 15 | + "# Handwritten Digit Classification (MNIST) using ONNX Runtime on AzureML\n", |
16 | 16 | "\n", |
17 | 17 | "This example shows how to deploy an image classification neural network using the Modified National Institute of Standards and Technology ([MNIST](http://yann.lecun.com/exdb/mnist/)) dataset and Open Neural Network eXchange format ([ONNX](http://aka.ms/onnxdocarticle)) on the Azure Machine Learning platform. MNIST is a popular dataset consisting of 70,000 grayscale images. Each image is a handwritten digit of 28x28 pixels, representing number from 0 to 9. This tutorial will show you how to deploy a MNIST model from the [ONNX model zoo](https://github.com/onnx/models), use it to make predictions using ONNX Runtime Inference, and deploy it as a web service in Azure.\n", |
18 | 18 | "\n", |
19 | 19 | "Throughout this tutorial, we will be referring to ONNX, a neural network exchange format used to represent deep learning models. With ONNX, AI developers can more easily move models between state-of-the-art tools (CNTK, PyTorch, Caffe, MXNet, TensorFlow) and choose the combination that is best for them. ONNX is developed and supported by a community of partners including Microsoft AI, Facebook, and Amazon. For more information, explore the [ONNX website](http://onnx.ai) and [open source files](https://github.com/onnx).\n", |
20 | 20 | "\n", |
21 | | - "[ONNX Runtime](https://aka.ms/onnxruntime) is the runtime engine that enables evaluation of trained machine learning (Traditional ML and Deep Learning) models with high performance and low resource utilization.\n", |
| 21 | + "[ONNX Runtime](https://aka.ms/onnxruntime-python) is the runtime engine that enables evaluation of trained machine learning (Traditional ML and Deep Learning) models with high performance and low resource utilization.\n", |
22 | 22 | "\n", |
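To make the role of ONNX Runtime concrete, a minimal inference call looks roughly like the sketch below; the local model path is a placeholder, and the 1x1x28x28 float32 input shape is the one the MNIST zoo model uses.

```python
# Minimal ONNX Runtime inference sketch (model path is a placeholder).
import numpy as np
import onnxruntime

session = onnxruntime.InferenceSession("model.onnx", None)
input_name = session.get_inputs()[0].name
output_name = session.get_outputs()[0].name

# The MNIST zoo model expects a float32 tensor of shape (1, 1, 28, 28).
dummy_digit = np.zeros((1, 1, 28, 28), dtype=np.float32)
scores = session.run([output_name], {input_name: dummy_digit})[0]
print("predicted digit:", int(np.argmax(scores)))
```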
23 | 23 | "#### Tutorial Objectives:\n", |
24 | 24 | "\n", |
|
36 | 36 | "### 1. Install Azure ML SDK and create a new workspace\n", |
37 | 37 | "Please follow [00.configuration.ipynb](https://github.com/Azure/MachineLearningNotebooks/blob/master/00.configuration.ipynb) notebook.\n", |
38 | 38 | "\n", |
39 | | - "\n", |
40 | 39 | "### 2. Install additional packages needed for this Notebook\n", |
41 | | - "You need to install the popular plotting library `matplotlib` and the `onnx` library in the conda environment where Azure Maching Learning SDK is installed.\n", |
| 40 | + "You need to install the popular plotting library `matplotlib`, the image manipulation library `opencv`, and the `onnx` library in the conda environment where Azure Maching Learning SDK is installed.\n", |
42 | 41 | "\n", |
43 | 42 | "```sh\n", |
44 | | - "(myenv) $ pip install matplotlib onnx\n", |
| 43 | + "(myenv) $ pip install matplotlib onnx opencv-python\n", |
45 | 44 | "```\n", |
46 | 45 | "\n", |
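If you want to confirm the packages landed in the right environment before continuing, a quick import check is enough (a sketch; run it in the same conda environment as the SDK):

```python
# Quick sanity check that the extra packages are importable in this environment.
import matplotlib
import onnx
import cv2  # provided by the opencv-python package

print("matplotlib", matplotlib.__version__)
print("onnx", onnx.__version__)
print("opencv", cv2.__version__)
```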
47 | 46 | "### 3. Download sample data and pre-trained ONNX model from ONNX Model Zoo.\n", |
|
222 | 221 | "\n", |
223 | 222 | "\n", |
224 | 223 | "def init():\n", |
225 | | - " global session\n", |
| 224 | + " global session, input_name, output_name\n", |
226 | 225 | " model = Model.get_model_path(model_name = 'mnist_1')\n", |
227 | 226 | " session = onnxruntime.InferenceSession(model, None)\n", |
| 227 | + " input_name = session.get_inputs()[0].name\n", |
| 228 | + " output_name = session.get_outputs()[0].name \n", |
228 | 229 | " \n", |
229 | 230 | "def run(input_data):\n", |
230 | 231 | " '''Purpose: evaluate test input in Azure Cloud using onnxruntime.\n", |
|
233 | 234 | "\n", |
234 | 235 | " try:\n", |
235 | 236 | " # load in our data, convert to readable format\n", |
236 | | - " start = time.time()\n", |
237 | 237 | " data = np.array(json.loads(input_data)['data']).astype('float32')\n", |
238 | 238 | "\n", |
239 | | - " r = session.run([\"Plus214_Output_0\"], {\"Input3\": data})[0]\n", |
240 | | - " result = choose_class(r[0])\n", |
| 239 | + " start = time.time()\n", |
| 240 | + " r = session.run([output_name], {input_name: data})[0]\n", |
241 | 241 | " end = time.time()\n", |
242 | | - " result_dict = {\"result\": np.array(result).tolist(),\n", |
243 | | - " \"time\": np.array(end - start).tolist()}\n", |
| 242 | + " result = choose_class(r[0])\n", |
| 243 | + " result_dict = {\"result\": [result],\n", |
| 244 | + " \"time_in_sec\": [end - start]}\n", |
244 | 245 | " except Exception as e:\n", |
245 | 246 | " result_dict = {\"error\": str(e)}\n", |
246 | 247 | " \n", |
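The updated `init()`/`run()` pair implies a simple JSON contract between client and service. A sketch of that contract, assuming the deployed service returns `run()`'s dictionary serialized as JSON (the response values shown are illustrative):

```python
# Sketch of the scoring service's JSON contract after this change.
import json
import numpy as np

# Request: a nested list with the model's input shape under the "data" key.
sample = np.zeros((1, 1, 28, 28), dtype=np.float32)
request_body = json.dumps({"data": sample.tolist()})

# Response on success (example values):
#   {"result": [7], "time_in_sec": [0.002]}
# Response on failure:
#   {"error": "<message>"}
response = {"result": [7], "time_in_sec": [0.002]}
print(response["result"][0], round(response["time_in_sec"][0] * 1000, 2), "ms")
```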
|
304 | 305 | " runtime = \"python\",\n", |
305 | 306 | " conda_file = \"myenv.yml\",\n", |
306 | 307 | " description = \"test\",\n", |
307 | | - " tags = {\"demo\": \"onnx\"} \n", |
308 | | - " )\n", |
| 308 | + " tags = {\"demo\": \"onnx\"}) )\n", |
309 | 309 | "\n", |
310 | 310 | "\n", |
311 | 311 | "image = ContainerImage.create(name = \"onnxtest\",\n", |
|
533 | 533 | " # predict using the deployed model\n", |
534 | 534 | " r = json.loads(aci_service.run(input_data))\n", |
535 | 535 | " \n", |
536 | | - " if len(r) == 1:\n", |
| 536 | + " if \"error\" in r:\n", |
537 | 537 | " print(r['error'])\n", |
538 | 538 | " break\n", |
539 | 539 | " \n", |
540 | | - " result = r['result']\n", |
541 | | - " time_ms = np.round(r['time'] * 1000, 2)\n", |
| 540 | + " result = r['result'][0]\n", |
| 541 | + " time_ms = np.round(r['time_in_sec'][0] * 1000, 2)\n", |
542 | 542 | " \n", |
543 | 543 | " ground_truth = int(np.argmax(test_outputs[i]))\n", |
544 | 544 | " \n", |
|
570 | 570 | "source": [ |
571 | 571 | "### Try classifying your own images!\n", |
572 | 572 | "\n", |
573 | | - "Create your own 28 pixel by 28 pixel handwritten image and pass it into the model." |
| 573 | + "Create your own handwritten image and pass it into the model." |
574 | 574 | ] |
575 | 575 | }, |
576 | 576 | { |
|
580 | 580 | "outputs": [], |
581 | 581 | "source": [ |
582 | 582 | "# Preprocessing functions\n", |
| 583 | + "import cv2\n", |
583 | 584 | "\n", |
584 | 585 | "def rgb2gray(rgb):\n", |
585 | 586 | " \"\"\"Convert the input image into grayscale\"\"\"\n", |
586 | 587 | " return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])\n", |
587 | 588 | "\n", |
| 589 | + "def resize_img(img):\n", |
| 590 | + " img = cv2.resize(img, dsize=(28, 28), interpolation=cv2.INTER_AREA)\n", |
| 591 | + " img.resize((1, 1, 28, 28))\n", |
| 592 | + " return img\n", |
| 593 | + "\n", |
588 | 594 | "def preprocess(img):\n", |
589 | 595 | " \"\"\"Resize input images and convert them to grayscale.\"\"\"\n", |
590 | | - " if img.shape[0] != 28:\n", |
591 | | - " print(\"Input image size is not 28 * 28 pixels. Please resize and try again.\")\n", |
592 | 596 | " grayscale = rgb2gray(img)\n", |
593 | | - " grayscale.resize((1, 1, 28, 28))\n", |
594 | | - " return grayscale" |
| 597 | + " processed_img = resize_img(grayscale)\n", |
| 598 | + " return processed_img" |
595 | 599 | ] |
596 | 600 | }, |
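As a quick check (assuming the preprocessing cell above has been executed), the helpers can be exercised on a synthetic RGB image; the output should already have the model's expected shape:

```python
# Hypothetical smoke test for the preprocessing helpers defined above.
import numpy as np

fake_rgb = np.random.rand(100, 100, 3)   # stands in for mpimg.imread(...) output
model_input = preprocess(fake_rgb)       # grayscale, resize to 28x28, reshape
print(model_input.shape)                 # expected: (1, 1, 28, 28)
```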
597 | 601 | { |
|
601 | 605 | "outputs": [], |
602 | 606 | "source": [ |
603 | 607 | "# Replace this string with your own path/test image\n", |
604 | | - "# Make sure the dimensions are 28 * 28 pixels\n", |
| 608 | + "# Make sure your image is square and the dimensions are equal (i.e. 100 * 100 pixels or 28 * 28 pixels)\n", |
605 | 609 | "\n", |
606 | 610 | "# Any PNG or JPG image file should work\n", |
607 | 611 | "# Make sure to include the entire path with // instead of /\n", |
|
636 | 640 | "\n", |
637 | 641 | " try:\n", |
638 | 642 | " r = json.loads(aci_service.run(input_data))\n", |
639 | | - " result = r['result']\n", |
640 | | - " time_ms = np.round(r['time'] * 1000, 2)\n", |
| 643 | + " result = r['result'][0]\n", |
| 644 | + " time_ms = np.round(r['time_in_sec'][0] * 1000, 2)\n", |
641 | 645 | " except Exception as e:\n", |
642 | | - " print(str(e), r['error'])\n", |
| 646 | + " print(str(e))\n", |
643 | 647 | "\n", |
644 | 648 | " plt.figure(figsize = (16, 6))\n", |
645 | 649 | " plt.subplot(1, 15,1)\n", |
|
657 | 661 | "cell_type": "markdown", |
658 | 662 | "metadata": {}, |
659 | 663 | "source": [ |
660 | | - "## Optional: How does our MNIST model work? \n", |
| 664 | + "## Optional: How does our ONNX MNIST model work? \n", |
661 | 665 | "#### A brief explanation of Convolutional Neural Networks\n", |
662 | 666 | "\n", |
663 | 667 | "A [convolutional neural network](https://en.wikipedia.org/wiki/Convolutional_neural_network) (CNN, or ConvNet) is a type of [feed-forward](https://en.wikipedia.org/wiki/Feedforward_neural_network) artificial neural network made up of neurons that have learnable weights and biases. The CNNs take advantage of the spatial nature of the data. In nature, we perceive different objects by their shapes, size and colors. For example, objects in a natural scene are typically edges, corners/vertices (defined by two of more edges), color patches etc. These primitives are often identified using different detectors (e.g., edge detection, color detector) or combination of detectors interacting to facilitate image interpretation (object classification, region of interest detection, scene description etc.) in real world vision related tasks. These detectors are also known as filters. Convolution is a mathematical operator that takes an image and a filter as input and produces a filtered output (representing say edges, corners, or colors in the input image). \n", |
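To make the convolution idea concrete, here is a tiny worked example (illustrative only, not one of this model's learned filters): a hand-written 3x3 vertical-edge filter slid over a 5x5 image.

```python
# Tiny 2-D convolution (cross-correlation, as CNN libraries implement it) by hand.
import numpy as np

image = np.array([[0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 1]], dtype=float)

kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)  # simple vertical-edge filter

kh, kw = kernel.shape
out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
for i in range(out.shape[0]):
    for j in range(out.shape[1]):
        out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)

print(out)  # large-magnitude responses mark the dark-to-bright vertical edge
```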
|
709 | 713 | "" |
710 | 714 | ] |
711 | 715 | }, |
712 | | - { |
713 | | - "cell_type": "markdown", |
714 | | - "metadata": {}, |
715 | | - "source": [ |
716 | | - "### Try classifying your own images!\n", |
717 | | - "\n", |
718 | | - "Create your own 28 pixel by 28 pixel handwritten image and pass it into the model." |
719 | | - ] |
720 | | - }, |
721 | | - { |
722 | | - "cell_type": "code", |
723 | | - "execution_count": null, |
724 | | - "metadata": {}, |
725 | | - "outputs": [], |
726 | | - "source": [ |
727 | | - "# Preprocessing functions\n", |
728 | | - "\n", |
729 | | - "def rgb2gray(rgb):\n", |
730 | | - " \"\"\"Convert the input image into grayscale\"\"\"\n", |
731 | | - " return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])\n", |
732 | | - "\n", |
733 | | - "def preprocess(img):\n", |
734 | | - " \"\"\"Resize input images and convert them to grayscale.\"\"\"\n", |
735 | | - " if img.shape[0] != 28:\n", |
736 | | - " print(\"Input image size is not 28 * 28 pixels. Please resize and try again.\")\n", |
737 | | - " grayscale = rgb2gray(img)\n", |
738 | | - " grayscale.resize((1, 1, 28, 28))\n", |
739 | | - " return grayscale" |
740 | | - ] |
741 | | - }, |
742 | | - { |
743 | | - "cell_type": "code", |
744 | | - "execution_count": null, |
745 | | - "metadata": {}, |
746 | | - "outputs": [], |
747 | | - "source": [ |
748 | | - "# Replace this string with your own path/test image\n", |
749 | | - "# Make sure the dimensions are 28 * 28 pixels\n", |
750 | | - "\n", |
751 | | - "# Any PNG or JPG image file should work\n", |
752 | | - "# Make sure to include the entire path with // instead of /\n", |
753 | | - "\n", |
754 | | - "# e.g. your_test_image = \"C://Users//vinitra.swamy//Pictures//digit.png\"\n", |
755 | | - "\n", |
756 | | - "your_test_image = \"<path to file>\"\n", |
757 | | - "\n", |
758 | | - "import matplotlib.image as mpimg\n", |
759 | | - "\n", |
760 | | - "if your_test_image != \"<path to file>\":\n", |
761 | | - " img = mpimg.imread(your_test_image)\n", |
762 | | - " plt.subplot(1,3,1)\n", |
763 | | - " plt.imshow(img, cmap = plt.cm.Greys)\n", |
764 | | - " print(\"Old Dimensions: \", img.shape)\n", |
765 | | - " img = preprocess(img)\n", |
766 | | - " print(\"New Dimensions: \", img.shape)\n", |
767 | | - "else:\n", |
768 | | - " img = None" |
769 | | - ] |
770 | | - }, |
771 | | - { |
772 | | - "cell_type": "code", |
773 | | - "execution_count": null, |
774 | | - "metadata": {}, |
775 | | - "outputs": [], |
776 | | - "source": [ |
777 | | - "if img is None:\n", |
778 | | - " print(\"Add the path for your image data.\")\n", |
779 | | - "else:\n", |
780 | | - " input_data = json.dumps({'data': img.tolist()})\n", |
781 | | - "\n", |
782 | | - " try:\n", |
783 | | - " r = json.loads(aci_service.run(input_data))\n", |
784 | | - " result = r['result']\n", |
785 | | - " time_ms = np.round(r['time'] * 1000, 2)\n", |
786 | | - " except Exception as e:\n", |
787 | | - " print(str(e), r['error'])\n", |
788 | | - "\n", |
789 | | - " plt.figure(figsize = (16, 6))\n", |
790 | | - " plt.subplot(1, 15,1)\n", |
791 | | - " plt.axhline('')\n", |
792 | | - " plt.axvline('')\n", |
793 | | - " plt.text(x = -100, y = -20, s = \"Model prediction: \", fontsize = 14)\n", |
794 | | - " plt.text(x = -100, y = -10, s = \"Inference time: \", fontsize = 14)\n", |
795 | | - " plt.text(x = 0, y = -20, s = str(result), fontsize = 14)\n", |
796 | | - " plt.text(x = 0, y = -10, s = str(time_ms) + \" ms\", fontsize = 14)\n", |
797 | | - " plt.text(x = -100, y = 14, s = \"Input image: \", fontsize = 14)\n", |
798 | | - " plt.imshow(img.reshape(28, 28), cmap = plt.cm.gray) " |
799 | | - ] |
800 | | - }, |
801 | 716 | { |
802 | 717 | "cell_type": "code", |
803 | 718 | "execution_count": null, |
804 | 719 | "metadata": {}, |
805 | 720 | "outputs": [], |
806 | 721 | "source": [ |
807 | 722 | "# remember to delete your service after you are done using it!\n", |
808 | | - "# uncomment the following line of code to delete your service\n", |
809 | 723 | "\n", |
810 | 724 | "# aci_service.delete()" |
811 | 725 | ] |
|
818 | 732 | "\n", |
819 | 733 | "Congratulations!\n", |
820 | 734 | "\n", |
821 | | - "In this tutorial, you have managed to:\n", |
822 | | - "- familiarize yourself with the ONNX model format, ONNX Runtime inference, and the pretrained models in the ONNX model zoo\n", |
823 | | - "- understand a state-of-the-art convolutional neural net image classification model (MNIST in ONNX) and deploy it in the Azure ML cloud\n", |
824 | | - "- ensure that your deep learning model is working perfectly (in the cloud) on test data, and check it against some of your own!\n", |
| 735 | + "In this tutorial, you have:\n", |
| 736 | + "- familiarized yourself with ONNX Runtime inference and the pretrained models in the ONNX model zoo\n", |
| 737 | + "- understood a state-of-the-art convolutional neural net image classification model (MNIST in ONNX) and deployed it in Azure ML cloud\n", |
| 738 | + "- ensured that your deep learning model is working perfectly (in the cloud) on test data, and checked it against some of your own!\n", |
825 | 739 | "\n", |
826 | 740 | "Next steps:\n", |
827 | 741 | "- Check out another interesting application based on a Microsoft Research computer vision paper that lets you set up a [facial emotion recognition model](https://github.com/Azure/MachineLearningNotebooks/tree/master/onnx/onnx-inference-emotion-recognition.ipynb) in the cloud! This tutorial deploys a pre-trained ONNX Computer Vision model in an Azure ML virtual machine with GPU support.\n", |
|
831 | 745 | ], |
832 | 746 | "metadata": { |
833 | 747 | "kernelspec": { |
834 | | - "display_name": "Python 3.6", |
| 748 | + "display_name": "Python 3", |
835 | 749 | "language": "python", |
836 | | - "name": "python36" |
| 750 | + "name": "python3" |
837 | 751 | }, |
838 | 752 | "language_info": { |
839 | 753 | "codemirror_mode": { |
|
845 | 759 | "name": "python", |
846 | 760 | "nbconvert_exporter": "python", |
847 | 761 | "pygments_lexer": "ipython3", |
848 | | - "version": "3.6.6" |
| 762 | + "version": "3.6.5" |
849 | 763 | }, |
850 | 764 | "msauthor": "vinitra.swamy" |
851 | 765 | }, |
|