Commit 9882526

Edits suggested by @lamblin.
1 parent bef5b51 commit 9882526

1 file changed: doc/lenet.txt (4 additions, 7 deletions)
@@ -13,15 +13,13 @@ Convolutional Neural Networks (LeNet)
 To run this example on a GPU, you need a good GPU. It needs
 at least 1GB of GPU RAM. More may be required if your monitor is
 connected to the GPU.
-
+
 When the GPU is connected to the monitor, there is a limit
 of a few seconds for each GPU function call. This is needed as
 current GPUs can't be used for the monitor while doing
 computation. Without this limit, the screen would freeze
 for too long and make it look as if the computer froze.
-This example hits this limit with medium-quality GPUs.
-
-When the
+This example hits this limit with medium-quality GPUs. When the
 GPU isn't connected to a monitor, there is no time limit. You can
 lower the batch size to fix the time out problem.
 

@@ -177,9 +175,8 @@ each pixel of the k-th feature map at layer m, with the pixel at coordinates
 The Convolution Operator
 ++++++++++++++++++++++++
 
-The ``conv2d`` function in Theano is the main workhorse for implementing a
-convolutional layer. It replicates the behaviour of scipy.signal.convolve2d.
-``conv2d`` takes two symbolic inputs:
+ConvOp is the main workhorse for implementing a convolutional layer in Theano.
+ConvOp is used by ``theano.tensor.signal.conv2d``, which takes two symbolic inputs:
 
 
 * a 4D tensor corresponding to a mini-batch of input images. The shape of the
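For context on the hunk above: the tutorial's ``conv2d`` consumes a 4D mini-batch of images and a 4D stack of filters. As a minimal sketch of the semantics being described (plain NumPy, not the Theano API; the function name and shapes here are illustrative, and Theano's ConvOp flips the kernels, i.e. true convolution in the scipy.signal.convolve2d sense):

```python
import numpy as np

def conv2d_valid(inputs, filters):
    """Naive 'valid'-mode batched 2D convolution (illustrative only).

    inputs:  (batch, in_channels, H, W)
    filters: (out_channels, in_channels, fH, fW)
    returns: (batch, out_channels, H - fH + 1, W - fW + 1)
    """
    b, c_in, H, W = inputs.shape
    c_out, c_in2, fH, fW = filters.shape
    assert c_in == c_in2, "input/filter channel counts must match"
    # Flip the kernels so this is convolution, not cross-correlation,
    # matching the scipy.signal.convolve2d behaviour mentioned in the text.
    flipped = filters[:, :, ::-1, ::-1]
    out = np.zeros((b, c_out, H - fH + 1, W - fW + 1))
    for i in range(out.shape[2]):
        for j in range(out.shape[3]):
            patch = inputs[:, :, i:i + fH, j:j + fW]      # (b, c_in, fH, fW)
            # Sum over channels and the receptive field for every
            # (batch, output-feature-map) pair at once.
            out[:, :, i, j] = np.einsum('bchw,ochw->bo', patch, flipped)
    return out

x = np.random.rand(2, 3, 5, 5)   # mini-batch of 2 three-channel 5x5 images
w = np.random.rand(4, 3, 3, 3)   # 4 feature maps with 3x3 receptive fields
y = conv2d_valid(x, w)
print(y.shape)                   # (2, 4, 3, 3)
```

Each of the 4 output feature maps has spatial size 5 - 3 + 1 = 3 in 'valid' mode, which is the shape arithmetic the surrounding tutorial relies on.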

0 commit comments
