Note that we used a given non-linear function as the activation function of the hidden layer. By default this is ``tanh``, but in many cases we might want
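As an illustration only (not code from the tutorial), the idea of a hidden layer with a pluggable activation can be sketched in plain NumPy; the `hidden_layer` helper and its parameters here are hypothetical stand-ins for the tutorial's `HiddenLayer` class:

```python
import numpy as np

def hidden_layer(x, W, b, activation=np.tanh):
    """Affine transform followed by a configurable non-linearity.

    `activation` defaults to tanh, mirroring the tutorial's default;
    any elementwise function can be swapped in instead.
    """
    return activation(x @ W + b)

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 3))   # 5 examples, 3 input features
W = rng.standard_normal((3, 4))   # 3 inputs -> 4 hidden units
b = np.zeros(4)

h_tanh = hidden_layer(x, W, b)    # default tanh activation
h_relu = hidden_layer(x, W, b, activation=lambda z: np.maximum(z, 0))
```

Swapping the `activation` argument is all it takes to experiment with a different non-linearity while leaving the affine part untouched.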
@@ -239,9 +239,9 @@ the ``MLP`` class :
         # The logistic regression layer gets as input the hidden units
         # of the hidden layer
         self.logRegressionLayer = LogisticRegression(
-            input = self.hiddenLayer.output,
-            n_in = n_hidden,
-            n_out = n_out)
+            input=self.hiddenLayer.output,
+            n_in=n_hidden,
+            n_out=n_out)
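The layer stacking shown in the hunk above — the hidden layer's output becoming the logistic regression layer's input — can be sketched in NumPy under assumed shapes; `softmax` and the weight names here are hypothetical, not the tutorial's classes:

```python
import numpy as np

def softmax(z):
    # subtract the rowwise max for numerical stability
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
x = rng.standard_normal((5, 3))                      # 5 examples, 3 features
W1, b1 = rng.standard_normal((3, 4)), np.zeros(4)    # hidden layer weights
W2, b2 = rng.standard_normal((4, 2)), np.zeros(2)    # output layer weights

hidden_out = np.tanh(x @ W1 + b1)                    # hidden layer output...
p_y_given_x = softmax(hidden_out @ W2 + b2)          # ...is the output layer's input
```

Each row of `p_y_given_x` is a probability distribution over the output classes.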
In this tutorial we will also use L1 and L2 regularization (see
@@ -257,8 +257,8 @@ norm of the weights :math:`W^{(1)}, W^{(2)}`.
 
         # square of L2 norm ; one regularization option is to enforce
         # square of L2 norm to be small
-        self.L2_sqr = (self.hiddenLayer.W**2).sum() \
-                    + (self.logRegressionLayer.W**2).sum()
+        self.L2_sqr = (self.hiddenLayer.W ** 2).sum() \
+                    + (self.logRegressionLayer.W ** 2).sum()
 
         # negative log likelihood of the MLP is given by the negative
         # log likelihood of the output of the model, computed in the
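A minimal NumPy sketch of how these penalty terms enter the cost, assuming random stand-in weight matrices and hypothetical regularization coefficients (`L1_reg`, `L2_reg`) in place of the tutorial's symbolic Theano expressions:

```python
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.standard_normal((3, 4))   # stands in for self.hiddenLayer.W
W2 = rng.standard_normal((4, 2))   # stands in for self.logRegressionLayer.W

# L1 norm: sum of absolute values of all weights
L1 = abs(W1).sum() + abs(W2).sum()
# square of the L2 norm: sum of squared weights
L2_sqr = (W1 ** 2).sum() + (W2 ** 2).sum()

nll = 1.0                          # placeholder for the negative log-likelihood
L1_reg, L2_reg = 0.00, 0.0001      # hypothetical regularization coefficients
cost = nll + L1_reg * L1 + L2_reg * L2_sqr
```

With nonnegative coefficients the penalties only ever add to the cost, pushing the optimizer toward smaller weights.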
@@ -312,22 +312,22 @@ at each step.
 
     # specify how to update the parameters of the model as a dictionary
     updates = {}
-    # given two list the zip A = [a1,a2,a3,a4] and B = [b1,b2,b3,b4] of
+    # given two list the zip A = [a1,a2,a3,a4] and B = [b1,b2,b3,b4] of
     # same length, zip generates a list C of same size, where each element
     # is a pair formed from the two lists :
-    # C = [(a1,b1), (a2,b2), (a3,b3) , (a4,b4)]
+    # C = [(a1,b1), (a2,b2), (a3,b3) , (a4,b4)]
     for param, gparam in zip(classifier.params, gparams):
-        updates[param] = param - learning_rate*gparam
+        updates[param] = param - learning_rate * gparam
 
 
     # compiling a Theano function `train_model` that returns the cost, but
     # in the same time updates the parameter of the model based on the rules
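The `zip`-driven update rule from the hunk above can be exercised concretely with plain NumPy arrays standing in for the model's parameters and gradients (the values below are hypothetical, chosen just to show the arithmetic):

```python
import numpy as np

learning_rate = 0.01
params = [np.array([1.0, 2.0]), np.array([3.0])]      # hypothetical parameters
gparams = [np.array([0.5, -0.5]), np.array([1.0])]    # matching gradients

# zip pairs each parameter with its gradient: one update rule per pair
new_params = [param - learning_rate * gparam
              for param, gparam in zip(params, gparams)]
```

Each parameter moves a small step against its gradient, which is exactly the gradient-descent rule the tutorial encodes in the `updates` dictionary.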