
Commit 901111d

Merge branch 'master' of https://github.com/fchollet/keras into dataaug
2 parents: fd77767 + c1857cf

File tree: 3 files changed (+9, -7 lines)

- docs/sources/models.md
- keras/layers/core.py
- keras/preprocessing/sequence.py

docs/sources/models.md

Lines changed: 4 additions & 4 deletions

```diff
@@ -27,8 +27,8 @@ model = keras.models.Sequential()
 - __shuffle__: boolean or str (for 'batch'). Whether to shuffle the samples at each epoch. 'batch' is a special option for dealing with the limitations of HDF5 data; it shuffles in batch-sized chunks.
 - __show_accuracy__: boolean. Whether to display class accuracy in the logs to stdout at each epoch.
 - __class_weight__: dictionary mapping classes to a weight value, used for scaling the loss function (during training only).
-- __sample_weight__: list or numpy array with 1:1 mapping to the training samples, used for scaling the loss function (during training only). For time-distributed data, there is one weight per sample *per timestep*, i.e. if your output data is shaped `(nb_samples, timesteps, output_dim)`, your mask should be of shape `(nb_samples, timesteps)`. This allows you to mask out or reweight individual output timesteps, which is useful in sequence to sequence learning.
-- __evaluate__(X, y, batch_size=128, show_accuracy=False, verbose=1): Show performance of the model over some validation data.
+- __sample_weight__: list or numpy array with 1:1 mapping to the training samples, used for scaling the loss function (during training only). For time-distributed data, there is one weight per sample *per timestep*, i.e. if your output data is shaped `(nb_samples, timesteps, output_dim)`, your mask should be of shape `(nb_samples, timesteps, 1)`. This allows you to mask out or reweight individual output timesteps, which is useful in sequence to sequence learning.
+- __evaluate__(X, y, batch_size=128, show_accuracy=False, verbose=1, sample_weight=None): Show performance of the model over some validation data.
 - __Return__: The loss score over the data, or a `(loss, accuracy)` tuple if `show_accuracy=True`.
 - __Arguments__: Same meaning as fit method above. verbose is used as a binary flag (progress bar or nothing).
 - __predict__(X, batch_size=128, verbose=1):
@@ -37,9 +37,9 @@ model = keras.models.Sequential()
 - __predict_classes__(X, batch_size=128, verbose=1): Return an array of class predictions for some test data.
 - __Return__: An array of labels for some test data.
 - __Arguments__: Same meaning as fit method above. verbose is used as a binary flag (progress bar or nothing).
-- __train_on_batch__(X, y, accuracy=False): Single gradient update on one batch.
+- __train_on_batch__(X, y, accuracy=False, class_weight=None, sample_weight=None): Single gradient update on one batch.
 - __Return__: loss over the data, or tuple `(loss, accuracy)` if `accuracy=True`.
-- __test_on_batch__(X, y, accuracy=False): Single performance evaluation on one batch.
+- __test_on_batch__(X, y, accuracy=False, sample_weight=None): Single performance evaluation on one batch.
 - __Return__: loss over the data, or tuple `(loss, accuracy)` if `accuracy=True`.
 - __save_weights__(fname, overwrite=False): Store the weights of all layers to a HDF5 file. If overwrite==False and the file already exists, an exception will be thrown.
 - __load_weights__(fname): Sets the weights of a model, based to weights stored by __save_weights__. You can only __load_weights__ on a savefile from a model with an identical architecture. __load_weights__ can be called either before or after the __compile__ step.
```
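
As a concrete illustration of the per-timestep `sample_weight` documented above, here is a minimal sketch assuming a Keras version contemporary with this commit. The shapes, the masking pattern, and `model` itself (an already-compiled `Sequential` model that returns sequences) are illustrative assumptions, not part of the diff:

```python
import numpy as np

# Illustrative shapes for a sequence-to-sequence setup.
nb_samples, timesteps, input_dim, output_dim = 128, 10, 8, 16
X = np.random.random((nb_samples, timesteps, input_dim))
y = np.random.random((nb_samples, timesteps, output_dim))

# One weight per sample *per timestep*, shaped (nb_samples, timesteps, 1)
# as documented above; zeros mask those output timesteps out of the loss.
sample_weight = np.ones((nb_samples, timesteps, 1))
sample_weight[:, :3, :] = 0.  # e.g. ignore the first 3 timesteps of every sequence

# `model` is assumed to be a compiled Sequential model whose output matches `y`.
model.fit(X, y, batch_size=32, nb_epoch=5, sample_weight=sample_weight)
loss = model.evaluate(X, y, batch_size=128, show_accuracy=False, verbose=1,
                      sample_weight=sample_weight)
```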

keras/layers/core.py

Lines changed: 4 additions & 2 deletions

```diff
@@ -343,7 +343,9 @@ class Reshape(Layer):
     '''
     def __init__(self, *dims):
         super(Reshape, self).__init__()
-        self.dims = dims
+        if type(dims[0]) in [list, tuple]:
+            dims = dims[0]
+        self.dims = tuple(dims)
 
     def get_output(self, train=False):
         X = self.get_input(train)
@@ -361,7 +363,7 @@ class Permute(Layer):
     '''
     def __init__(self, dims):
         super(Permute, self).__init__()
-        self.dims = dims
+        self.dims = tuple(dims)
 
     def get_output(self, train):
         X = self.get_input(train)
```
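
Based only on the `__init__` bodies shown in this diff, `Reshape` should now accept either unpacked integers or a single tuple/list, and `Permute` should always store its dims as a tuple. A small sketch (the assertions are illustrative, not from the commit):

```python
from keras.layers.core import Reshape, Permute

# Both spellings should now yield dims == (2, 10): unpacked integers
# (the previous behaviour) or a single tuple/list (accepted after this change).
r1 = Reshape(2, 10)
r2 = Reshape((2, 10))
r3 = Reshape([2, 10])
assert r1.dims == r2.dims == r3.dims == (2, 10)

# Permute now coerces its dims to a tuple even when a list is passed.
p = Permute([2, 1])
assert p.dims == (2, 1)
```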

keras/preprocessing/sequence.py

Lines changed: 1 addition & 1 deletion

```diff
@@ -7,7 +7,7 @@
 def pad_sequences(sequences, maxlen=None, dtype='int32', padding='pre', truncating='pre', value=0.):
     """
     Pad each sequence to the same length:
-    the length of the longuest sequence.
+    the length of the longest sequence.
 
     If maxlen is provided, any sequence longer
     than maxlen is truncated to maxlen. Truncation happens off either the beginning (default) or
```
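
For reference, a small usage sketch of the function whose docstring is corrected above; the example data and the printed results shown in comments are illustrative:

```python
from keras.preprocessing.sequence import pad_sequences

sequences = [[1, 2, 3], [4, 5], [6]]

# With no maxlen, every sequence is padded (on the left by default) up to
# the length of the longest sequence, here 3.
print(pad_sequences(sequences))
# [[1 2 3]
#  [0 4 5]
#  [0 0 6]]

# With maxlen=2, longer sequences are truncated, by default off the beginning,
# while shorter ones are padded (here on the right, with padding='post').
print(pad_sequences(sequences, maxlen=2, padding='post'))
# [[2 3]
#  [4 5]
#  [6 0]]
```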
