> This is one of the 100 recipes of the [IPython Cookbook](http://ipython-books.github.io/), the definitive guide to high-performance scientific computing and data science in Python.

# 5.11. Using MPI with IPython

**Message Passing Interface (MPI)** is a standardized communication protocol for parallel systems. It is used in many parallel computing applications to exchange data between nodes. MPI has a high barrier to entry, but it is very efficient and powerful.

IPython's parallel computing system has been designed from the ground up to work with MPI. If you are new to MPI, it is a good idea to start using it with IPython. If you are an experienced MPI user, you will find that IPython integrates seamlessly with your parallel application.

In this recipe, we show how to use MPI with IPython through a very simple example.

## Getting started

To use MPI with IPython, you need:

* A standard MPI implementation such as [OpenMPI](http://www.open-mpi.org) or [MPICH](http://www.mpich.org).
* The [mpi4py package](http://mpi4py.scipy.org).

For example, the commands to install MPI for IPython on Ubuntu are sketched below.

2. Then, we need to open `~/.ipython/profile_mpi/ipcluster_config.py` and add the line `c.IPClusterEngines.engine_launcher_class = 'MPI'`.

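A plausible set of installation commands (assuming OpenMPI from the Ubuntu repositories and `mpi4py` from PyPI; an MPICH-based installation works just as well) is:

```python
# Run from a notebook cell ("!" executes a shell command), or in a terminal without the "!".
# The package names are an assumption for an OpenMPI-based Ubuntu setup.
!sudo apt-get install openmpi-bin libopenmpi-dev
!pip install mpi4py
```
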
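Step 2 amounts to a one-line change. A minimal sketch of the relevant part of `~/.ipython/profile_mpi/ipcluster_config.py` (the `c = get_config()` line is the usual idiom at the top of IPython configuration files):

```python
# ~/.ipython/profile_mpi/ipcluster_config.py
c = get_config()

# Launch the engines with MPI (the line added in step 2).
c.IPClusterEngines.engine_launcher_class = 'MPI'
```
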
3. Once the MPI profile has been created and configured, we can launch the engines from the IPython dashboard: in the *Clusters* tab, select the *MPI* profile, choose the number of engines (e.g. one per processor), and press *Start*. Alternatively, we can run the following in a terminal: `ipcluster start -n 2 --engines MPI --profile=mpi`.

4. Now, to actually use the engines, we create an MPI client in the notebook.

```python
import numpy as np
from IPython.parallel import Client
```

```python
c = Client(profile='mpi')
```

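Once the client is connected, a quick sanity check (not part of the original recipe, just a common idiom) is to list the engine IDs; with two MPI engines running, you should see two of them:

```python
c.ids  # e.g. [0, 1] when two engines are running
```
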
5. Let's create a view on all engines.

```python
view = c[:]
```

6. In this example, we compute the sum of all integers between 0 and 15 in parallel over two cores. We first distribute the array with the 16 values across the engines (each engine gets a subarray), and then let the engines combine their partial results; the corresponding code is sketched below. Each engine:

1. gets its local subarray,
2. computes the local sum of this subarray,
3. sends this local sum to all other engines,
4. receives the local sums of the other engines,
5. computes the total sum of those local sums.

This is how **all-reduce** works in MPI: the principle is to **scatter** data across engines first, then to **reduce** the local computations through a global operator (here, `MPI.SUM`).

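Here is a minimal sketch of what this distributed sum can look like with the `view` and `np` objects defined above; the variable names and the use of `scatter`/`execute` are illustrative choices rather than the book's exact code:

```python
view.block = True  # make the calls below synchronous

# Scatter the 16 values: each engine receives its own subarray under the name 'a'.
view.scatter('a', np.arange(16.))

# On every engine: compute the local sum, then combine the local sums with
# MPI's all-reduce collective and the global MPI.SUM operator.
view.execute("""
from mpi4py import MPI
local_sum = a.sum()
total = MPI.COMM_WORLD.allreduce(local_sum, op=MPI.SUM)
""")

# Every engine ends up with the same value: 0 + 1 + ... + 15 = 120.
view['total']
```
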
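The same pattern also works outside IPython as a standalone mpi4py program. The following is an illustrative sketch (the file name and structure are assumptions, not the book's code), launched with something like `mpiexec -n 2 python sum_allreduce.py`:

```python
# sum_allreduce.py: scatter 16 values over the MPI processes, then all-reduce the local sums.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# The root process splits the data into one chunk per process; scatter distributes them.
chunks = np.array_split(np.arange(16.), size) if rank == 0 else None
local = comm.scatter(chunks, root=0)

# Each process computes its local sum; allreduce combines the local sums with
# MPI.SUM and gives every process the total.
total = comm.allreduce(local.sum(), op=MPI.SUM)
print("rank %d: total = %s" % (rank, total))  # every rank prints 120.0
```
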
There are many other parallel computing paradigms in MPI. You can find more information here:

* [MPI tutorials by Wes Kendall](http://mpitutorial.com)
* [MPI tutorials by Blaise Barney, Lawrence Livermore National Laboratory](https://computing.llnl.gov/tutorials/mpi/)

## See also

* Distribute your code across multiple cores with IPython

> You'll find all the explanations, figures, references, and much more in the book (to be released later this summer).

> [IPython Cookbook](http://ipython-books.github.io/), by [Cyrille Rossant](http://cyrille.rossant.net), Packt Publishing, 2014 (400 pages). [Get a 50% discount by pre-ordering now](http://www.packtpub.com/ipython-interactive-computing-and-visualization-cookbook/book) with the code `mK00gPxQM` (time-limited offer)!