Commit 30f10a9

Merge pull request lisa-lab#54 from IvanUkhov/lenet-sparse-connectivity
Fix a typo in LeNet Sparse Connectivity

2 parents d4a96b5 + c1f65d2

File tree

1 file changed: +7 additions, -7 deletions


doc/lenet.txt

Lines changed: 7 additions & 7 deletions

@@ -86,13 +86,13 @@ graphically as follows:
 
 Imagine that layer **m-1** is the input retina. In the above figure, units in
 layer **m** have receptive fields of width 3 in the input retina and are thus
-only connected to 3 adjacent neurons in the retina layer. Units in layer **m**
-have a similar connectivity with the layer below. We say that their receptive
-field with respect to the layer below is also 3, but their receptive field with
-respect to the input is larger (5). Each unit is unresponsive to variations
-outside of its receptive field with respect to the retina. The architecture
-thus ensures that the learnt "filters" produce the strongest response to a
-spatially local input pattern.
+only connected to 3 adjacent neurons in the retina layer. Units in layer
+**m+1** have a similar connectivity with the layer below. We say that their
+receptive field with respect to the layer below is also 3, but their receptive
+field with respect to the input is larger (5). Each unit is unresponsive to
+variations outside of its receptive field with respect to the retina. The
+architecture thus ensures that the learnt "filters" produce the strongest
+response to a spatially local input pattern.
 
 However, as shown above, stacking many such layers leads to (non-linear)
 "filters" that become increasingly "global" (i.e. responsive to a larger region
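The receptive-field arithmetic the corrected paragraph relies on (width-3 filters give layer **m** a receptive field of 3 on the retina, and layer **m+1** a receptive field of 5) follows a simple formula for stride-1 convolutions. A minimal sketch in Python; the `receptive_field` helper is illustrative, not part of the tutorial:

```python
def receptive_field(num_layers, kernel_width):
    """Receptive field w.r.t. the input after stacking `num_layers`
    stride-1 convolutions of width `kernel_width` (no dilation).

    Each extra layer widens the field by (kernel_width - 1) pixels.
    """
    return 1 + num_layers * (kernel_width - 1)

# Matches the corrected text: with width-3 filters, units in layer m
# see 3 retina pixels, and units in layer m+1 see 5.
print(receptive_field(1, 3))  # 3
print(receptive_field(2, 3))  # 5
```

This is why stacking such layers makes the learnt "filters" increasingly "global": the receptive field with respect to the input grows linearly with depth even though each layer's connectivity stays sparse and local.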
