@@ -151,12 +151,12 @@ To get around this shortcomming for the labels, we store them as float,
 and then cast it to int.

 .. note::
-
+
     If you are running your code on the GPU and the dataset you are using
     is too large to fit in memory the code will crash. In such a case you
     should store the data in a shared variable. You can however store a
     sufficiently small chunk of your data (several minibatches) in a shared
-    variable and use that during trianing. One you got through the chunk,
+    variable and use that during training. Once you got through the chunk,
     update the values it stores. This way you minimize the number of data
     transfers between CPU memory and GPU memory.

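As an aside, the float-store/int-cast trick described in the note above can be sketched in plain NumPy. This is a minimal sketch, not the tutorial's actual code: `floatX` here is a stand-in for `theano.config.floatX`, and the label values are made up.

```python
import numpy as np

floatX = 'float32'  # stand-in for theano.config.floatX (assumption)

# Labels are stored as floats so they can live in a GPU shared variable,
# then cast back to int where an integer label is required.
labels = np.array([5, 0, 4, 1, 9])     # hypothetical MNIST-style labels
stored = labels.astype(floatX)         # analogous to what the float shared variable holds
recovered = stored.astype('int32')     # analogous to the cast back to int
```

Small integers are represented exactly in float32, so the round trip is lossless.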
@@ -289,7 +289,7 @@ In this tutorial, :math:`f` is defined as:
 In python, using Theano this can be written as :

 .. code-block:: python
-
+
     # zero_one_loss is a Theano variable representing a symbolic
     # expression of the zero one loss ; to get the actual value this
     # symbolic expression has to be compiled into a Theano function (see
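Numerically, the zero-one loss is simply the fraction of examples whose most probable class differs from the true label. A minimal NumPy sketch of what the compiled symbolic expression would compute (the probabilities and labels below are made up):

```python
import numpy as np

# Hypothetical class-membership probabilities P(Y=i | x) for 3 examples, 2 classes
p_y_given_x = np.array([[0.1, 0.9],
                        [0.8, 0.2],
                        [0.3, 0.7]])
y = np.array([1, 0, 0])                # true labels (made up)

y_pred = p_y_given_x.argmax(axis=1)    # predicted class for each example
zero_one_loss = np.mean(y_pred != y)   # fraction of misclassified examples
```

Here the third example is misclassified (predicted 1, true label 0), so the loss is 1/3.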
@@ -334,7 +334,7 @@ supervised learning signal for deep learning of a classifier.
 This can be computed using the following line of code :

 .. code-block:: python
-
+
     # NLL is a symbolic variable ; to get the actual value of NLL, this symbolic
     # expression has to be compiled into a Theano function (see the Theano
     # tutorial for more details)
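Concretely, the negative log-likelihood averages :math:`-\log P(Y=y^{(i)} | x^{(i)})` over the minibatch. A NumPy sketch of the same computation, with made-up probabilities:

```python
import numpy as np

p_y_given_x = np.array([[0.1, 0.9],
                        [0.8, 0.2]])   # hypothetical predicted probabilities
y = np.array([1, 0])                   # true labels (made up)

# Pick out P(Y = y_i | x_i) for each example, then average the negative logs;
# this mirrors what the symbolic NLL expression computes once compiled.
nll = -np.mean(np.log(p_y_given_x[np.arange(y.shape[0]), y]))
```

With these numbers the result is :math:`-(\log 0.9 + \log 0.8)/2`.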
@@ -600,6 +600,7 @@ of a strategy based on a geometrically increasing amount of patience.
             best_validation_loss = this_validation_loss

         if patience <= iter:
+            done_looping = True
             break

     # POSTCONDITION:
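The added line matters because `break` only exits the inner minibatch loop; without setting `done_looping`, the outer epoch loop keeps running after patience is exhausted. A minimal self-contained sketch of the patience logic (the loss sequence and parameter values below are made up, and `it` stands in for the tutorial's `iter` counter):

```python
patience = 5                  # minimum number of iterations to run (assumption)
patience_increase = 2         # extend patience when a clearly better model is found
improvement_threshold = 0.995

best_validation_loss = float('inf')
done_looping = False
it = 0
epoch = 0

# Hypothetical validation losses, two validations per epoch (made up)
losses_by_epoch = [[0.9, 0.5], [0.4, 0.41], [0.42, 0.43], [0.44, 0.45]]

while epoch < len(losses_by_epoch) and not done_looping:
    for this_validation_loss in losses_by_epoch[epoch]:
        it += 1
        if this_validation_loss < best_validation_loss * improvement_threshold:
            # significant improvement: remember it and extend patience
            patience = max(patience, it * patience_increase)
            best_validation_loss = this_validation_loss
        if patience <= it:
            done_looping = True   # the fix: tell the outer loop to stop too
            break
    epoch += 1
```

With this sequence, patience is extended to 6 by the improvement at iteration 3 (loss 0.4), and training stops at iteration 6 once no further improvement arrives.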