Commit 61adbf8: fix typos/spelling
Parent: 6ef907b
1 file changed: 3 additions, 3 deletions

doc/gettingstarted.txt

@@ -147,7 +147,7 @@ MNIST Dataset
 
 The data has to be stored as floats on the GPU ( the right
 ``dtype`` for storing on the GPU is given by ``theano.config.floatX``).
-To get around this shortcomming for the labels, we store them as float,
+To get around this shortcoming for the labels, we store them as float,
 and then cast it to int.
 
 .. note::
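The store-as-float-then-cast trick described in this hunk can be sketched in plain NumPy (a minimal illustration, not the tutorial's Theano code; `floatX` here is a stand-in for `theano.config.floatX`):

```python
import numpy as np

floatX = "float32"  # stand-in for theano.config.floatX

# Integer class labels for a handful of MNIST digits.
labels = np.array([5, 0, 4, 1])

# Stored as floats so they can live on the GPU alongside the image data.
shared_y = labels.astype(floatX)

# Cast back to int whenever the labels are used as class indices.
y = shared_y.astype("int32")
```

In the tutorial itself the same round trip is done with a Theano shared variable and a symbolic cast, but the dtype logic is the same.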
@@ -316,7 +316,7 @@ The likelihood of the correct class is not the same as the
 number of right predictions, but from the point of view of a randomly
 initialized classifier they are pretty similar.
 Remember that likelihood and zero-one loss are different objectives;
-you should see that they are corralated on the validation set but
+you should see that they are correlated on the validation set but
 sometimes one will rise while the other falls, or vice-versa.
 
 Since we usually speak in terms of minimizing a loss function, learning will
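The two objectives this hunk contrasts can be computed side by side; a small sketch with made-up predicted probabilities (all values illustrative):

```python
import numpy as np

# Hypothetical predicted class probabilities for 3 examples, 3 classes.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.6, 0.3],
                  [0.3, 0.4, 0.3]])
y = np.array([0, 1, 2])  # true labels

# Negative log-likelihood of the correct class (what training minimizes).
nll = -np.mean(np.log(probs[np.arange(len(y)), y]))

# Zero-one loss: fraction of examples whose argmax prediction is wrong.
zero_one = np.mean(np.argmax(probs, axis=1) != y)
```

The third example shows why the two can diverge: its correct class has probability 0.3, so nudging that probability up improves the likelihood without changing the zero-one loss until it overtakes the argmax.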
@@ -421,7 +421,7 @@ but this choice is almost arbitrary (though harmless).
 because it controls the number of updates done to your parameters. Training the same model
 for 10 epochs using a batch size of 1 yields completely different results compared
 to training for the same 10 epochs but with a batchsize of 20. Keep this in mind when
-switching between batch sizes and be prepared to tweak all the other parameters acording
+switching between batch sizes and be prepared to tweak all the other parameters according
 to the batch size used.
 
 All code-blocks above show pseudocode of how the algorithm looks like. Implementing such
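The point about batch size controlling the number of updates is easy to make concrete with a quick count (the dataset size is a hypothetical MNIST-like stand-in, not a number from this diff):

```python
n_examples = 50_000  # hypothetical training-set size (MNIST-like)
n_epochs = 10

# Parameter updates performed over 10 epochs for two batch sizes.
updates = {b: n_epochs * (n_examples // b) for b in (1, 20)}
# updates == {1: 500000, 20: 25000}: the same 10 epochs give 20x more
# parameter updates with batch size 1 than with batch size 20.
```

This is why hyperparameters such as the learning rate usually need retuning when the batch size changes.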
