
Commit e371d52

fix type and doc formating.
1 parent 839d50e commit e371d52

7 files changed: 15 additions & 15 deletions


code/DBN.py

Lines changed: 2 additions & 2 deletions
@@ -41,7 +41,7 @@ def __init__(self, numpy_rng, theano_rng = None, n_ins = 784,
 :param n_ins: dimension of the input to the DBN
 
 :type n_layers_sizes: list of ints
-:param n_layers_sizes: intermidiate layers size, must contain
+:param n_layers_sizes: intermediate layers size, must contain
 at least one value
 
 :type n_outs: int
@@ -63,7 +63,7 @@ def __init__(self, numpy_rng, theano_rng = None, n_ins = 784,
 self.y = T.ivector('y') # the labels are presented as 1D vector of
 # [int] labels
 
-# The DBN is an MLP, for which all weights of intermidiate layers are shared with a
+# The DBN is an MLP, for which all weights of intermediate layers are shared with a
 # different RBM. We will first construct the DBN as a deep multilayer perceptron, and
 # when constructing each sigmoidal layer we also construct an RBM that shares weights
 # with that layer. During pretraining we will train these RBMs (which will lead
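Note: the comment being corrected describes the key construction in DBN.py: each sigmoidal layer of the MLP and its companion RBM point at the same parameters. A minimal numpy sketch of that sharing pattern (the class names and the fake update below are hypothetical illustrations, not the tutorial's actual HiddenLayer/RBM implementations):

    import numpy as np

    rng = np.random.default_rng(123)
    n_visible, n_hidden = 784, 500

    class SigmoidLayer:
        def __init__(self, W, b):
            self.W, self.b = W, b              # keeps references, not copies

    class RBM:
        def __init__(self, W, hbias):
            self.W, self.hbias = W, hbias      # the same arrays as the MLP layer

    W = rng.uniform(-0.1, 0.1, size=(n_visible, n_hidden))
    b = np.zeros(n_hidden)

    layer = SigmoidLayer(W, b)                 # one sigmoidal layer of the MLP
    rbm = RBM(W=layer.W, hbias=layer.b)        # RBM built on the same parameters

    rbm.W += 0.01                              # a fake pretraining update...
    assert layer.W is rbm.W                    # ...is seen by the MLP layer too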

code/SdA.py

Lines changed: 2 additions & 2 deletions
@@ -70,7 +70,7 @@ def __init__(self, numpy_rng, theano_rng = None, n_ins = 784,
 :param n_ins: dimension of the input to the sdA
 
 :type n_layers_sizes: list of ints
-:param n_layers_sizes: intermidiate layers size, must contain
+:param n_layers_sizes: intermediate layers size, must contain
 at least one value
 
 :type n_outs: int
@@ -95,7 +95,7 @@ def __init__(self, numpy_rng, theano_rng = None, n_ins = 784,
 self.y = T.ivector('y') # the labels are presented as 1D vector of
 # [int] labels
 
-# The SdA is an MLP, for which all weights of intermidiate layers
+# The SdA is an MLP, for which all weights of intermediate layers
 # are shared with a different denoising autoencoders
 # We will first construct the SdA as a deep multilayer perceptron,
 # and when constructing each sigmoidal layer we also construct a

code/mlp.py

Lines changed: 2 additions & 2 deletions
@@ -3,7 +3,7 @@
 
 A multilayer perceptron is a logistic regressor where
 instead of feeding the input to the logistic regression you insert a
-intermidiate layer, called the hidden layer, that has a nonlinear
+intermediate layer, called the hidden layer, that has a nonlinear
 activation function (usually tanh or sigmoid) . One can use many such
 hidden layers making the architecture deep. The tutorial will also tackle
 the problem of MNIST digit classification.
@@ -101,7 +101,7 @@ class MLP(object):
 
 A multilayer perceptron is a feedforward artificial neural network model
 that has one layer or more of hidden units and nonlinear activations.
-Intermidiate layers usually have as activation function thanh or the
+Intermediate layers usually have as activation function thanh or the
 sigmoid function (defined here by a ``SigmoidalLayer`` class) while the
 top layer is a softamx layer (defined here by a ``LogisticRegression``
 class).
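Note: the docstrings touched here describe the MLP as a logistic regressor fed by a nonlinear hidden layer. A minimal numpy sketch of that forward pass, assuming MNIST-sized inputs and toy parameter values (illustrative only, not the tutorial's mlp.py code):

    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    rng = np.random.default_rng(0)
    x = rng.normal(size=(5, 784))                   # batch of 5 MNIST-sized inputs

    W_h = rng.normal(scale=0.01, size=(784, 500))   # hidden (intermediate) layer
    b_h = np.zeros(500)
    W_o = np.zeros((500, 10))                       # logistic-regression layer on top
    b_o = np.zeros(10)

    hidden = np.tanh(x @ W_h + b_h)                 # nonlinear hidden activation
    p_y_given_x = softmax(hidden @ W_o + b_o)       # class probabilities, shape (5, 10)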

doc/DBN.txt

Lines changed: 1 addition & 1 deletion
@@ -162,7 +162,7 @@ classification.
 :param n_ins: dimension of the input to the DBN
 
 :type n_layers_sizes: list of ints
-:param n_layers_sizes: intermidiate layers size, must contain
+:param n_layers_sizes: intermediate layers size, must contain
 at least one value
 
 :type n_outs: int

doc/SdA.txt

Lines changed: 1 addition & 1 deletion
@@ -94,7 +94,7 @@ representations of intermediate layers of the MLP.
 :param n_ins: dimension of the input to the sdA
 
 :type n_layers_sizes: list of ints
-:param n_layers_sizes: intermidiate layers size, must contain
+:param n_layers_sizes: intermediate layers size, must contain
 at least one value
 
 :type n_outs: int

doc/mlp.txt

Lines changed: 5 additions & 5 deletions
@@ -138,7 +138,7 @@ layer on top.
 
 The initial values for the weights of a hidden layer :math:`i` should be uniformly
 sampled from a symmetric interval that depends on the activation function. For
-:math:`tanh` activation function results obtained in [Xavier10] show that the
+:math:`tanh` activation function results obtained in [Xavier10]_ show that the
 interval should be
 :math:`[-\sqrt{\frac{6}{fan_{in}+fan_{out}}},\sqrt{\frac{6}{fan_{in}+fan_{out}}}]`, where
 :math:`fan_{in}` is the number of units in the :math:`(i-1)`-th layer,
@@ -154,11 +154,11 @@ both upward (activations flowing from inputs to outputs) and backward
 # `W` is initialized with `W_values` which is uniformely sampled
 # from sqrt(-6./(n_in+n_hidden)) and sqrt(6./(n_in+n_hidden))
 # for tanh activation function
-# the output of uniform if converted using asarray to dtype
+# the output of uniform is converted using asarray to dtype
 # theano.config.floatX so that the code is runable on GPU
 # Note : optimal initialization of weights is dependent on the
 # activation function used (among other things).
-# For example, results presented in [Xavier10] suggest that you
+# For example, results presented in [Xavier10]_ suggest that you
 # should use 4 times larger initial weights for sigmoid
 # compared to tanh
 if activation == theano.tensor.tanh:
@@ -207,7 +207,7 @@ the ``MLP`` class :
 
 A multilayer perceptron is a feedforward artificial neural network model
 that has one layer or more of hidden units and nonlinear activations.
-Intermidiate layers usually have as activation function thanh or the
+Intermediate layers usually have as activation function tanh or the
 sigmoid function (defined here by a ``HiddenLayer`` class) while the
 top layer is a softamx layer (defined here by a ``LogisticRegression``
 class).
@@ -412,7 +412,7 @@ Under some assumptions, a compromise between these two constraints leads to the
 initialization: :math:`uniform[-\frac{6}{\sqrt{fan_{in}+fan_{out}}},\frac{6}{\sqrt{fan_{in}+fan_{out}}}]`
 for tanh and :math:`uniform[-4*\frac{6}{\sqrt{fan_{in}+fan_{out}}},4*\frac{6}{\sqrt{fan_{in}+fan_{out}}}]`
 for sigmoid. Where :math:`fan_{in}` is the number of inputs and :math:`fan_{out}` the number of hidden units.
-For mathematical considerations please refer to [Xavier10].
+For mathematical considerations please refer to [Xavier10]_.
 
 Learning rate
 --------------
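Note: the [Xavier10]_ recipe referenced throughout this file condenses to a short helper. A hedged numpy sketch, assuming the interval :math:`[-\sqrt{6/(fan_{in}+fan_{out})}, \sqrt{6/(fan_{in}+fan_{out})}]` quoted above and a 4x wider interval for the sigmoid; the helper name and the float32 dtype (standing in for theano.config.floatX) are assumptions:

    import numpy as np

    def init_hidden_weights(n_in, n_out, activation="tanh", rng=None):
        # Hypothetical helper: sample uniformly from
        # [-sqrt(6/(fan_in+fan_out)), sqrt(6/(fan_in+fan_out))] for tanh,
        # and from an interval 4 times larger for the sigmoid.
        rng = rng or np.random.default_rng()
        bound = np.sqrt(6.0 / (n_in + n_out))
        if activation == "sigmoid":
            bound *= 4.0
        # asarray with an explicit dtype mirrors the theano.config.floatX cast
        return np.asarray(rng.uniform(-bound, bound, size=(n_in, n_out)),
                          dtype="float32")

    W_tanh = init_hidden_weights(784, 500)
    W_sigm = init_hidden_weights(784, 500, activation="sigmoid")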

doc/rbm.txt

Lines changed: 2 additions & 2 deletions
@@ -574,7 +574,7 @@ op provided by Theano, therefore we urge the reader to look it up by following t
 
 Once we have the generated the chain we take the sample at the end of the
 chain to get the free energy of the negative phase. Note that the
-``chain_end`` is a symbolical Theano variable express in terms of the model
+``chain_end`` is a symbolical Theano variable expressed in terms of the model
 parameters, and if we would apply ``T.grad`` naively, the function will
 try to go through the Gibbs chain to get the gradients. This is not what we
 want (it will mess up our gradients) and therefire we need to indicate to
@@ -585,7 +585,7 @@ want (it will mess up our gradients) and therefire we need to indicate to
 
 
 # determine gradients on RBM parameters
-# not that we only need the sample at the end of the chain
+# note that we only need the sample at the end of the chain
 chain_end = nv_samples[-1]
 
 cost = T.mean(self.free_energy(self.input)) - T.mean(self.free_energy(chain_end))
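Note: both fixes sit next to the one subtle step in this file: keeping ``T.grad`` from differentiating through the Gibbs chain. A toy Theano sketch of what ``consider_constant`` changes (stand-in expressions, not the tutorial's RBM code):

    import theano.tensor as T

    # Stand-ins: pretend `chain_end` is the last sample of a Gibbs chain.
    v = T.dvector('v')
    chain_end = v ** 2                 # symbolic, expressed in terms of v
    cost = T.sum(v * chain_end)        # = sum(v**3)

    g_naive = T.grad(cost, v)          # differentiates through chain_end: 3*v**2
    g_fixed = T.grad(cost, v,          # chain_end held fixed, as the text wants:
                     consider_constant=[chain_end])   # gradient is just v**2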
