code/mlp.py: 10 changes (5 additions, 5 deletions)
@@ -108,7 +108,7 @@ class MLP(object):
     A multilayer perceptron is a feedforward artificial neural network model
     that has one layer or more of hidden units and nonlinear activations.
     Intermediate layers usually have as activation function tanh or the
-    sigmoid function (defined here by a ``SigmoidalLayer`` class) while the
+    sigmoid function (defined here by a ``HiddenLayer`` class) while the
     top layer is a softmax layer (defined here by a ``LogisticRegression``
     class).
     """
@@ -136,10 +136,10 @@ def __init__(self, rng, input, n_in, n_hidden, n_out):
 
         """
 
-        # Since we are dealing with a one hidden layer MLP, this will
-        # translate into a TanhLayer connected to the LogisticRegression
-        # layer; this can be replaced by a SigmoidalLayer, or a layer
-        # implementing any other nonlinearity
+        # Since we are dealing with a one hidden layer MLP, this will translate
+        # into a HiddenLayer with a tanh activation function connected to the
+        # LogisticRegression layer; the activation function can be replaced by
+        # sigmoid or any other nonlinear function
         self.hiddenLayer = HiddenLayer(rng=rng, input=input,
                                        n_in=n_in, n_out=n_hidden,
                                        activation=T.tanh)
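The updated comment notes that the ``tanh`` activation is only a default and can be swapped for a sigmoid (or any other nonlinearity) without touching the rest of the model. A minimal sketch of that swap, assuming the tutorials' ``code/mlp.py`` is importable and using illustrative sizes (``n_in=784``, ``n_out=500``):

    import numpy
    import theano.tensor as T

    from mlp import HiddenLayer  # HiddenLayer from the tutorials' code/mlp.py

    rng = numpy.random.RandomState(1234)
    x = T.matrix('x')  # symbolic minibatch of rasterized input images

    # same constructor call as in MLP.__init__; only the activation differs
    tanh_layer = HiddenLayer(rng=rng, input=x, n_in=784, n_out=500,
                             activation=T.tanh)
    sigmoid_layer = HiddenLayer(rng=rng, input=x, n_in=784, n_out=500,
                                activation=T.nnet.sigmoid)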
doc/DBN.txt: 2 changes (1 addition, 1 deletion)
@@ -189,7 +189,7 @@ the MLP, while ``self.rbm_layers`` will store the RBMs used to pretrain each
 layer of the MLP.
 
 In the next step, we construct ``n_layers`` sigmoid layers (we use the
-``SigmoidalLayer`` class introduced in :ref:`mlp`, with the only modification
+``HiddenLayer`` class introduced in :ref:`mlp`, with the only modification
 that we replaced the ``tanh`` non-linearity with the logistic function
 :math:`s(x) = \frac{1}{1+e^{-x}}`) and ``n_layers`` RBMs, where ``n_layers``
 is the depth of our model. We link the sigmoid layers such that they form an
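The paragraph above describes this construction without showing it; the corresponding code is analogous to the SdA snippet further down in this pull request. A rough standalone sketch under that assumption, with illustrative layer sizes and ``HiddenLayer`` imported from the tutorials' ``code/mlp.py``:

    import numpy
    import theano.tensor as T

    from mlp import HiddenLayer  # HiddenLayer from the tutorials' code/mlp.py

    rng = numpy.random.RandomState(123)
    x = T.matrix('x')
    n_ins = 784
    hidden_layers_sizes = [1000, 1000, 1000]

    sigmoid_layers = []
    for i in range(len(hidden_layers_sizes)):
        # the first layer reads the data; every later layer reads the output
        # of the previous sigmoid layer, so the layers chain into an MLP
        input_size = n_ins if i == 0 else hidden_layers_sizes[i - 1]
        layer_input = x if i == 0 else sigmoid_layers[-1].output
        sigmoid_layers.append(HiddenLayer(rng=rng,
                                          input=layer_input,
                                          n_in=input_size,
                                          n_out=hidden_layers_sizes[i],
                                          activation=T.nnet.sigmoid))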
doc/SdA.txt: 7 changes (4 additions, 3 deletions)
@@ -126,7 +126,7 @@ representations of intermediate layers of the MLP.
 ``self.dA_layers`` will store the denoising autoencoders associated with the layers of the MLP.
 
 In the next step, we construct ``n_layers`` sigmoid layers (we use the
-``SigmoidalLayer`` class introduced in :ref:`mlp`, with the only
+``HiddenLayer`` class introduced in :ref:`mlp`, with the only
 modification that we replaced the ``tanh`` non-linearity with the
 logistic function :math:`s(x) = \frac{1}{1+e^{-x}}`) and ``n_layers``
 denoising autoencoders, where ``n_layers`` is the depth of our model.
@@ -154,10 +154,11 @@ bias of the encoding part with its corresponding sigmoid layer.
             else:
                 layer_input = self.sigmoid_layers[-1].output
 
-            sigmoid_layer = SigmoidalLayer(rng=rng,
+            sigmoid_layer = HiddenLayer(rng=rng,
                                         input=layer_input,
                                         n_in=input_size,
-                                        n_out=hidden_layers_sizes[i])
+                                        n_out=hidden_layers_sizes[i],
+                                        activation=T.nnet.sigmoid)
             # add the layer to our list of layers
             self.sigmoid_layers.append(sigmoid_layer)
 
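The hunk context mentions sharing the weights and the bias of the encoding part with the corresponding sigmoid layer: in the tutorial, each denoising autoencoder reuses the sigmoid layer's ``W`` and ``b``, so pretraining the dA also initializes that MLP layer. A rough sketch of the pairing, assuming ``HiddenLayer`` and ``dA`` are importable from the tutorials' ``code/mlp.py`` and ``code/dA.py`` and that ``dA`` accepts ``W`` and ``bhid`` arguments as in the tutorials' code; the sizes are illustrative:

    import numpy
    import theano.tensor as T
    from theano.tensor.shared_randomstreams import RandomStreams

    from mlp import HiddenLayer  # the tutorials' hidden layer class
    from dA import dA            # the tutorials' denoising autoencoder class

    rng = numpy.random.RandomState(89677)
    theano_rng = RandomStreams(rng.randint(2 ** 30))
    x = T.matrix('x')

    sigmoid_layer = HiddenLayer(rng=rng, input=x, n_in=784, n_out=500,
                                activation=T.nnet.sigmoid)

    # the dA shares its encoding weights and hidden bias with the sigmoid
    # layer, so both views of the layer train the same parameters
    dA_layer = dA(numpy_rng=rng, theano_rng=theano_rng, input=x,
                  n_visible=784, n_hidden=500,
                  W=sigmoid_layer.W, bhid=sigmoid_layer.b)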
doc/lenet.txt: 2 changes (1 addition, 1 deletion)
@@ -498,7 +498,7 @@ instantiate the network as follows.
                             image_shape=(batch_size, 20, 12, 12),
                             filter_shape=(50, 20, 5, 5), poolsize=(2, 2))
 
-    # the SigmoidalLayer being fully-connected, it operates on 2D matrices of
+    # the HiddenLayer being fully-connected, it operates on 2D matrices of
     # shape (batch_size, num_pixels) (i.e. matrices of rasterized images).
     # This will generate a matrix of shape (batch_size, 50 * 4 * 4) = (batch_size, 800)
     layer2_input = layer1.output.flatten(2)
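The comment in this hunk explains why ``flatten(2)`` is needed: the fully-connected ``HiddenLayer`` expects a 2D matrix of rasterized examples, while the convolution/pooling layer outputs a 4D tensor. A small sketch of that hand-off, assuming ``HiddenLayer`` from the tutorials' ``code/mlp.py``, a stand-in variable for ``layer1.output``, and an illustrative 500 hidden units:

    import numpy
    import theano.tensor as T

    from mlp import HiddenLayer  # HiddenLayer from the tutorials' code/mlp.py

    rng = numpy.random.RandomState(23455)

    # stand-in for layer1.output: a 4D tensor of shape (batch_size, 50, 4, 4)
    layer1_output = T.tensor4('layer1_output')

    # flatten(2) keeps the leading (batch) dimension and rasterizes the rest,
    # yielding a matrix of shape (batch_size, 50 * 4 * 4) = (batch_size, 800)
    layer2_input = layer1_output.flatten(2)

    layer2 = HiddenLayer(rng=rng, input=layer2_input,
                         n_in=50 * 4 * 4, n_out=500,
                         activation=T.tanh)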