Merged
385 commits
2f98bed
Merge pull request #325 from phreeza/patch-2
fchollet Jul 2, 2015
e50462e
Merge branch 'master' of https://github.com/mthrok/keras into mthrok-…
fchollet Jul 2, 2015
12a5c6f
Touch-ups in IRNN example
fchollet Jul 2, 2015
2d5b86d
Merge branch 'mthrok-master'
fchollet Jul 2, 2015
2805413
Merge branch 'master' of https://github.com/fchollet/keras
fchollet Jul 2, 2015
a808ae6
Add a test for the max-norm constraint
phreeza Jun 29, 2015
b48e39a
Add a test for the identity, non-negative, and unit-norm constraints
phreeza Jun 29, 2015
4956ed9
small PEP-8 changes
phreeza Jun 30, 2015
d59f251
make some texts a bit more explicit
phreeza Jun 30, 2015
f184db8
add exotic inputs to identity test
phreeza Jun 30, 2015
514ac06
missing axis parameter
phreeza Jun 30, 2015
acef252
change constraints tests to new constraint api
phreeza Jul 2, 2015
bba5379
Fix constraint tests
fchollet Jul 3, 2015
98f0550
Merge pull request #327 from tleeuwenburg/master
fchollet Jul 3, 2015
91d86c3
Fix parenthesis typo in examples.md
samuela Jul 3, 2015
06d7c7d
Strict handling of incorrect dims in binary_xent
fchollet Jul 3, 2015
ee73049
Add test_tasks
fchollet Jul 3, 2015
940bd47
Add test_loss_weighting
fchollet Jul 3, 2015
f53c319
Merge branch 'master' of https://github.com/fchollet/keras
fchollet Jul 3, 2015
0332b95
Remove check in binary_crossentropy
fchollet Jul 3, 2015
8e1e32a
Fix tests
fchollet Jul 3, 2015
506aaf1
Merge pull request #328 from samuela/patch-1
fchollet Jul 3, 2015
522201e
Add test utils
fchollet Jul 3, 2015
a5f4bd3
Merge branch 'master' of https://github.com/fchollet/keras
fchollet Jul 3, 2015
af0899d
Added LRN, Convolution with Strides and ZeroPadding2D
pranv Jul 3, 2015
e63644c
Fix denoising autoencoder issue
fchollet Jul 3, 2015
2baa9a8
Merge branch 'master' of https://github.com/pranv/keras into pranv-ma…
fchollet Jul 3, 2015
c7c5372
Style fixes
fchollet Jul 3, 2015
9e4e432
Merge branch 'pranv-master'
fchollet Jul 3, 2015
243d473
Better API for Convolution1D and MaxPooling1D
fchollet Jul 3, 2015
76b9877
Add MSLE objective
fchollet Jul 4, 2015
f302230
merge
fchollet Jul 4, 2015
f1cd436
Fixes in models, callbacks
fchollet Jul 4, 2015
66b8f37
Complete working version of graphs. API needs work
fchollet Jul 4, 2015
47fd945
Add MAPE objective
Reddine Jul 4, 2015
60daee5
Add mape and msle objectives to the documentation
Reddine Jul 4, 2015
553a7c0
Graph bugfix, improve tests
fchollet Jul 4, 2015
32f483f
Fix mape objective
Reddine Jul 4, 2015
be75548
Add graph config management
fchollet Jul 4, 2015
dab5551
Update cifar10 example
fchollet Jul 4, 2015
53a05b6
Fix metrics issue in evaluate
fchollet Jul 4, 2015
f2c97d8
Add Graph model to doc
fchollet Jul 4, 2015
1530f31
Merge branch 'master' of https://github.com/Reddine/keras into Reddin…
fchollet Jul 4, 2015
8bd8ae1
Correct MAPE loss
fchollet Jul 4, 2015
b22e547
Merge branch 'Reddine-master'
fchollet Jul 4, 2015
e014b94
Create Gaussian Noise layer
Reddine Jul 5, 2015
8995b50
Remove DenoisingAutoEncoder
fchollet Jul 5, 2015
63f9a79
Fix ModelCheckpoint callback
fchollet Jul 5, 2015
ddd5f47
Fix container merging issue
fchollet Jul 5, 2015
c315b0d
fix GaussianNoise
Reddine Jul 5, 2015
fe3c4d7
Update graph tests
fchollet Jul 5, 2015
b1d9448
Refactor merge layer
fchollet Jul 5, 2015
35bcd5a
Touch-ups in examples and doc
fchollet Jul 5, 2015
aeb954d
Add dtype to graph inputs
fchollet Jul 6, 2015
1fe6556
Fix graph doc
fchollet Jul 6, 2015
16f737f
Fix AE example in doc
fchollet Jul 6, 2015
1f21e61
Merge branch 'master' of https://github.com/Reddine/keras into Reddin…
fchollet Jul 6, 2015
ab8f7da
Put GaussianNoise in its own module.
fchollet Jul 6, 2015
6c55dbd
Fix callbacks doc
fchollet Jul 6, 2015
7c8d9aa
Fix gaussian noise doc
fchollet Jul 6, 2015
337f39b
Add some documentation of the masking feature
wxs Jul 6, 2015
d4f39c8
fix add_node from inputs
floydwch Jul 6, 2015
7094be1
Add y standardization to graph model
fchollet Jul 6, 2015
14b0148
Merge pull request #348 from floydsoft/master
fchollet Jul 6, 2015
df331cc
fix doc
floydwch Jul 6, 2015
c354b9b
Add GaussianDropout
the-moliver Jul 6, 2015
5ed7f78
update docs
the-moliver Jul 6, 2015
0cae63a
Update noise.py
the-moliver Jul 7, 2015
c353dfc
Update noise.md
the-moliver Jul 7, 2015
8213e51
Update noise.py
the-moliver Jul 7, 2015
7ff158d
Modified comment and fixed batch_size
mthrok Jul 7, 2015
c0a44da
Merge pull request #349 from floydsoft/patch-1
fchollet Jul 7, 2015
4b07e77
Merge pull request #353 from mthrok/master
fchollet Jul 7, 2015
f817e2e
merge
fchollet Jul 7, 2015
07f2253
Add test for sequential model
fchollet Jul 7, 2015
1e73e1b
Fix History callback
fchollet Jul 7, 2015
bc05a25
Fix tests, increase coverage
fchollet Jul 7, 2015
1bfbb33
change Layer to MaskedLayer bugfix
the-moliver Jul 8, 2015
f0aedaf
correct sqrt call
the-moliver Jul 8, 2015
5cbe62c
Made theano_mode field of Sequential
Jul 8, 2015
b4c62ff
Constraints, optimizers and regularizers have a get_config() as preli…
Jul 8, 2015
c06b746
Merge branch 'patch-1' of https://github.com/the-moliver/keras into t…
fchollet Jul 8, 2015
9a649d2
Fix GaussianNoise imports
fchollet Jul 8, 2015
5bf2718
Fix binary_crossentropy
fchollet Jul 8, 2015
472f8ba
Merge pull request #346 from wxs/document-masking
fchollet Jul 8, 2015
46f591d
Getter for constraints
Jul 9, 2015
3909e78
Stored loss function as self.unweighted_loss, before passing it to we…
Jul 9, 2015
ca22bb7
Getter for regularizers
Jul 9, 2015
f142d34
serialize sequential models as yaml
Jul 9, 2015
d77175f
GaussianNoise has to be MaskedLayer instead of Layer
Jul 9, 2015
94d857f
Changed layers to use get_config for regularizers and constraints for…
Jul 9, 2015
afc5adb
Fix mistakes in uniform initializations equations
KyotoSunshine Jul 9, 2015
0bf6278
Merge pull request #372 from KyotoSunshine/master
fchollet Jul 10, 2015
9f71949
Dynamic epsilon in objectives
fchollet Jul 11, 2015
65ebd8a
Merge branch 'master' of https://github.com/fchollet/keras
fchollet Jul 11, 2015
a3dadff
Update documentation
fchollet Jul 11, 2015
03e512a
Batch validation in fit
fchollet Jul 14, 2015
493e6d2
Changed output format of from/to yaml to string, instead of file path
Jul 14, 2015
38c4cb2
Manual test for yaml (de-)serialisation
Jul 14, 2015
848e1fb
Removed camel case
Jul 14, 2015
319e18a
Removed camel case in to_yaml
Jul 14, 2015
48ea8df
implement batch shuffle
cmyr Jul 14, 2015
2603fa3
added conv1D example
mikekestemont Jul 14, 2015
22bd7cf
Update reuters dataset
fchollet Jul 15, 2015
238d390
Merge branch 'master' of https://github.com/mikekestemont/keras into …
fchollet Jul 15, 2015
6899a9d
Revise IMDB conv1d example
fchollet Jul 15, 2015
c8a2f46
Rename IMDB CNN example
fchollet Jul 15, 2015
2d0e84a
Merge branch 'mikekestemont-master'
fchollet Jul 15, 2015
94c930e
Remove weight tying in autoencoder
fchollet Jul 15, 2015
53331e4
Allow constraint getter to take parameter dict
Jul 15, 2015
eef82c4
Containers have a layer_to_yaml method | plus fixed a minor typo, inp…
Jul 15, 2015
6c69549
Roll back to None as default for reg and constr
Jul 15, 2015
c90f98e
Merge layer to_yaml and None default for reg and constr
Jul 15, 2015
2b7a3cb
to_yaml for Sequential and Graph, as well as model_from_yaml in model…
Jul 15, 2015
7da91d9
Allow optimizer getter to take dict args
Jul 15, 2015
705694a
Allow getter of regularizers to take dict args
Jul 15, 2015
200a006
get_from_module available with additional dictionary arguments to ini…
Jul 15, 2015
af845d5
layer utils, has getter for layers by name and arguments and from_yam…
Jul 15, 2015
5998b5d
Extended yaml tests, includes merged sequentials and graphs
Jul 15, 2015
7af168d
Added CropImage layer. Shrinks images in a convolution layer. When
pjadzinsky Jul 15, 2015
6a0bf48
Merge branch 'master' of github.com:pjadzinsky/keras
pjadzinsky Jul 15, 2015
08b1964
Fix doc
fchollet Jul 16, 2015
fd1d590
Fix graph tests
fchollet Jul 16, 2015
37fe48b
Match get_updates signature
Jul 16, 2015
a3ebda9
Uninitialized progbar when verbose==0
Jul 16, 2015
6336567
Added border_mode='same' to Convolution2D
pjadzinsky Jul 16, 2015
12eba13
Merge pull request #398 from kenterao/master
fchollet Jul 16, 2015
9e25669
Merge branch 'master' of https://github.com/pjadzinsky/keras into pja…
fchollet Jul 16, 2015
4373616
Touch-ups to 'same' border mode
fchollet Jul 16, 2015
510068b
Merge branch 'pjadzinsky-master'
fchollet Jul 16, 2015
036d968
Merge branch 'master' of https://github.com/fchollet/keras
fchollet Jul 16, 2015
08abc31
Merge branch 'yaml' of https://github.com/maxpumperla/keras into maxp…
fchollet Jul 16, 2015
43d8436
Fixes in yaml serialization
fchollet Jul 16, 2015
8824f1b
Fix yaml serialization support
fchollet Jul 16, 2015
46e19b9
Cleanup
fchollet Jul 16, 2015
6a4aab4
Fix border mode = same in Conv2D
fchollet Jul 16, 2015
71ac4bf
fix a typo in several places (ouput -> output)
phreeza Jul 16, 2015
e6582c1
Squashed commit of the following:
tleeuwenburg Jul 16, 2015
a3052e7
Updated to use new container infrastructure
tleeuwenburg Jul 16, 2015
0e87f40
Merge pull request #403 from phreeza/typo
fchollet Jul 16, 2015
efbf7a2
Merge pull request #404 from tleeuwenburg/test_norm
fchollet Jul 17, 2015
fea9570
Added Permute layer as suggested by loyeamen on #401
anayebi Jul 17, 2015
91a15fd
Doc, README touch-ups
fchollet Jul 18, 2015
c777cdf
Update README
fchollet Jul 18, 2015
84d9171
Merge branch 'master' of https://github.com/anayebi/keras into anayeb…
fchollet Jul 19, 2015
4b6bf1d
Fix Permute layer
fchollet Jul 19, 2015
d2b5849
Merge branch 'anayebi-master'
fchollet Jul 19, 2015
62a4f29
Add print_layer_shapes function
Jul 20, 2015
3b4b5a6
added documentation + a hint if hdf5/shuffle conflict suspected
cmyr Jul 20, 2015
98d4975
Update lstm_text_generation.py
the-moliver Jul 20, 2015
36a9a39
Add parametric softplus
the-moliver Jul 21, 2015
dc50928
add theano import
the-moliver Jul 21, 2015
2ada7d1
change Psoftplus defaults to be nearer relu
the-moliver Jul 21, 2015
80f04d7
Merge pull request #420 from the-moliver/the-moliver-samplingfix
fchollet Jul 21, 2015
e98b1c2
Merge branch 'batch-shuffle' of https://github.com/cmyr/keras into cm…
fchollet Jul 21, 2015
2fbfbdd
Cleanup error msg
fchollet Jul 21, 2015
72e73b0
Merge branch 'cmyr-batch-shuffle'
fchollet Jul 21, 2015
f392a78
Merge branch 'master' of https://github.com/fchollet/keras
fchollet Jul 21, 2015
ec8f7f0
Codebase cleanup
fchollet Jul 22, 2015
3037183
Remove dot_utils
fchollet Jul 22, 2015
ed9834c
Remove dot utils doc
fchollet Jul 22, 2015
529306f
simple test of all recurrent layers
phreeza Jul 16, 2015
130e5cd
run get_config and get_output_mask
phreeza Jul 16, 2015
b5dac6b
put dimensions into variables
phreeza Jul 16, 2015
f08f590
add a test on the output dimensions
phreeza Jul 16, 2015
20921b7
add some inline documentation
phreeza Jul 17, 2015
319412a
test elementary input and output of base layer
phreeza Jul 17, 2015
571448f
test connecting base layers
phreeza Jul 17, 2015
67426a6
Test the constructor, config and params functions of all core layers.
phreeza Jul 17, 2015
66247a8
Added h5py to the conda install
tleeuwenburg Jul 16, 2015
828d187
Merge pull request #407 from tleeuwenburg/travis_upgrade
fchollet Jul 22, 2015
1ebec1e
merge
fchollet Jul 22, 2015
2e20447
Merge pull request #425 from tleeuwenburg/test_layers
fchollet Jul 22, 2015
0174f21
Change name, change to maskedlayer, add docs
the-moliver Jul 22, 2015
84a3b5a
Make initializations flexible
the-moliver Jul 22, 2015
9cf5f6f
adding sample_weights to Graph
kenterao Jul 23, 2015
9403773
merge
fchollet Jul 23, 2015
6ed288b
Merge branch 'the-moliver-Psoftplus'
fchollet Jul 23, 2015
8a99d6e
Merge branch 'master' of https://github.com/kenterao/keras into kente…
fchollet Jul 23, 2015
6289de3
Fix & extend loss weighting
fchollet Jul 23, 2015
a08bf38
Extend loss weighting tests
fchollet Jul 23, 2015
1ad453f
Add check to print_layer_shapes to fail explicitely on model used con…
Jul 23, 2015
e1df8ca
Merge branch 'master' into print_layer_shapes
Jul 23, 2015
d635a60
Add model_utils.print_graph_layer_shapes to handle Graph models.
Jul 23, 2015
896880f
Add a test script for model_utils
Jul 23, 2015
570c377
Fixed parameter passing for preprocessing.text.one_hot
mynameisfiber Jul 23, 2015
0974e07
typo
kenterao Jul 23, 2015
4c83fcc
typo
kenterao Jul 23, 2015
540a1ce
Merge pull request #436 from kenterao/master
fchollet Jul 23, 2015
347e6d0
Added get_state and set_state method to Optimizer base class.
kenterao Jul 24, 2015
d2defca
Merge pull request #433 from mynameisfiber/master
fchollet Jul 24, 2015
e975f8a
Remove deprecated methods
fchollet Jul 24, 2015
32fd202
Merge branch 'print_layer_shapes' of https://github.com/julienr/keras…
fchollet Jul 24, 2015
3a28da9
Cleanup/fix model_utils
fchollet Jul 24, 2015
d1387c1
Move model_utils to layer_utils
fchollet Jul 24, 2015
b6aaeb3
Fix embedding test
fchollet Jul 24, 2015
e06f9df
Refactor model serialization
fchollet Jul 24, 2015
7c3bf9d
Add dataset tests
fchollet Jul 24, 2015
0e295db
Merge branch 'master' of https://github.com/fchollet/keras
fchollet Jul 24, 2015
4f6b1a4
Add optimizers tests
fchollet Jul 24, 2015
0661032
Merge branch 'master' of https://github.com/kenterao/keras into kente…
fchollet Jul 24, 2015
f1d6012
Fix optimizers test
fchollet Jul 24, 2015
a4def20
Fix serialization
fchollet Jul 24, 2015
1e18983
Fix optimizer
kenterao Jul 24, 2015
3a62cad
fix model_from_json
kenterao Jul 24, 2015
68f0677
Merge pull request #441 from kenterao/master
fchollet Jul 25, 2015
8675fc5
Update core layers documentation
fchollet Jul 25, 2015
275f416
Merge branch 'master' of https://github.com/fchollet/keras
fchollet Jul 25, 2015
dd833de
Fix LossHistory callback argument in doc
erfannoury Jul 26, 2015
59a634d
Merge pull request #444 from erfannoury/patch-1
fchollet Jul 26, 2015
b64217c
Small style fixes
fchollet Jul 26, 2015
cb1d25f
layers.core: add Masking layer
Jul 26, 2015
38f7fb1
git-ignore tags file
Jul 26, 2015
b1631bd
Merge pull request #447 from amitbeka/gitignore-tags
fchollet Jul 27, 2015
6a1ede1
Merge branch 'master' of https://github.com/fchollet/keras
fchollet Jul 27, 2015
3212cd4
Merge branch 'masking-layer' of https://github.com/amitbeka/keras int…
fchollet Jul 27, 2015
48381f8
Masking layer touch-ups
fchollet Jul 27, 2015
cb763aa
Proper handling of output values in Masking layer
tkipf Jul 27, 2015
eeb56b9
updated adam solver
kashif Jul 28, 2015
c68aaa2
fixed t variable
kashif Jul 28, 2015
10b767d
more efficient implementation as per paper
kashif Jul 29, 2015
9c7c52d
further optimisation
kashif Jul 29, 2015
15a3a1f
Randomness seeding, small fixes
fchollet Jul 31, 2015
3bf5340
Merge pull request #449 from tkipf/master
fchollet Jul 31, 2015
54dc647
Merge pull request #457 from kashif/adam
fchollet Jul 31, 2015
1e3d9f7
Fix Hualos callback
fchollet Aug 1, 2015
4ecb5bd
Fix Hualos callback
fchollet Aug 2, 2015
de78ddf
Example: Use RNNs to answer questions from bAbi
Smerity Aug 4, 2015
37965ca
Style touch-ups in babi example
fchollet Aug 4, 2015
149d0e8
Added Poisson loss to Objectives
anayebi Jul 20, 2015
e42f738
Merge pull request #479 from anayebi/poisson-loss
fchollet Aug 5, 2015
6fddf15
data_utils: add error handling on url fetches
dribnet Aug 6, 2015
73fdaf6
Add documentation for time distributed sample weighting
wxs Aug 6, 2015
6d17e99
Fix MaxPooling1D.
nehz Aug 6, 2015
63284a4
babi_rnn bugfix: QA19 requires vocab from the answer
Smerity Aug 6, 2015
dbc0c27
babi_rnn bugfix: Fixing missing Python 3 support
Smerity Aug 7, 2015
342f2bc
babi_rnn: Adding results for all tasks in bAbi tasks dataset
Smerity Aug 7, 2015
1a572b1
Merge pull request #501 from Smerity/master
fchollet Aug 7, 2015
f8afa92
Merge pull request #489 from dribnet/url_check
fchollet Aug 7, 2015
921cf41
Merge pull request #494 from wxs/document-sampleweight
fchollet Aug 7, 2015
616fcba
Merge branch 'nehz-MaxPooling1D-fix' of https://github.com/nehz/keras…
fchollet Aug 9, 2015
e1d8b1b
Update MaxPooling1D documentation.
fchollet Aug 9, 2015
0eea505
Use readthedocs theme in dev doc version
fchollet Aug 9, 2015
c81d6ec
Fix batch normalization
fchollet Aug 9, 2015
9e7f67b
Fix typo
fchollet Aug 9, 2015
ed4acfa
Fix conv layers loading for model_from_config
averybigant Aug 9, 2015
69628bb
Merge pull request #512 from averybigant/fix_conv_layers_yaml_load
fchollet Aug 10, 2015
b057624
Add border_mode = same to Convolution1D
fchollet Aug 10, 2015
1b66e36
Merge branch 'master' of https://github.com/fchollet/keras
fchollet Aug 10, 2015
425f290
Add create_output option in Graph model
fchollet Aug 10, 2015
4 changes: 3 additions & 1 deletion .gitignore
Original file line number Diff line number Diff line change
@@ -1,9 +1,11 @@
*.DS_Store
*.pyc
*.swp
temp/*
dist/*
build/*
keras/datasets/data/*
keras/datasets/temp/*
docs/site/*
docs/theme/*
docs/theme/*
tags
18 changes: 18 additions & 0 deletions .travis.yml
Original file line number Diff line number Diff line change
@@ -0,0 +1,18 @@
sudo: false
language: python
# Setup anaconda
before_install:
- wget http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh -O miniconda.sh
- chmod +x miniconda.sh
- ./miniconda.sh -b
- export PATH=/home/travis/miniconda/bin:$PATH
- conda update --yes conda
python:
- "3.4"
# command to install dependencies
install:
- conda install --yes python=$TRAVIS_PYTHON_VERSION numpy scipy matplotlib pandas pytest h5py
# Coverage packages are on my binstar channel
- python setup.py install
# command to run tests
script: py.test tests/
31 changes: 19 additions & 12 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -6,22 +6,23 @@ Keras is a minimalist, highly modular neural network library in the spirit of To

Use Keras if you need a deep learning library that:
- allows for easy and fast prototyping (through total modularity, minimalism, and extensibility).
- supports both convolutional networks (for vision) and recurrent networks (for sequence data). As well as combinations of the two.
- runs seamlessly on the CPU and the GPU.
- supports both convolutional networks and recurrent networks, as well as combinations of the two.
- supports arbitrary connectivity schemes (including multi-input and multi-output training).
- runs seamlessly on CPU and GPU.

Read the documentation at [Keras.io](http://keras.io).

Keras is compatible with __Python 2.7-3.4__.

## Guiding principles

- __Modularity.__ A model is understood as a sequence of standalone, fully-configurable modules that can be plugged together with as little restrictions as possible. In particular, neural layers, cost functions, optimizers, initialization schemes, activation functions and dropout are all standalone modules that you can combine to create new models.
- __Modularity.__ A model is understood as a sequence or a graph of standalone, fully-configurable modules that can be plugged together with as little restrictions as possible. In particular, neural layers, cost functions, optimizers, initialization schemes, activation functions, regularization schemes are all standalone modules that you can combine to create new models.

- __Minimalism.__ Each module should be kept short and simple (<100 lines of code). Every piece of code should be transparent upon first reading. No black magic: it hurts iteration speed and ability to innovate.
- __Minimalism.__ Each module should be kept short and simple (<100 lines of code). Every piece of code should be transparent upon first reading. No black magic: it hurts iteration speed and ability to innovate.

- __Easy extensibility.__ New features (a new module, per the above definition, or a new way to combine modules together) are dead simple to add (as new classes/functions), and existing modules provide ample examples.
- __Easy extensibility.__ New modules are dead simple to add (as new classes/functions), and existing modules provide ample examples. To be able to easily create new modules allows for total expressiveness, making Keras suitable for advanced research.

- __Work with Python__. No separate models configuration files in a declarative format (like in Caffe or PyLearn2). Models are described in Python code, which is compact, easier to debug, benefits from syntax highlighting, and most of all, allows for ease of extensibility. See for yourself with the examples below.
- __Work with Python__. No separate models configuration files in a declarative format (like in Caffe or PyLearn2). Models are described in Python code, which is compact, easier to debug, and allows for ease of extensibility.

## Examples

Expand Down Expand Up @@ -171,8 +172,11 @@ model.fit(images, captions, batch_size=16, nb_epoch=100)
In the examples folder, you will find example models for real datasets:
- CIFAR10 small images classification: Convnet with realtime data augmentation
- IMDB movie review sentiment classification: LSTM over sequences of words
- Reuters newswires topic classification: Multilayer Perceptron
- MNIST handwritten digits classification: Multilayer Perceptron
- Reuters newswires topic classification: Multilayer Perceptron (MLP)
- MNIST handwritten digits classification: MLP & CNN
- Character-level text generation with LSTM

...and more.


## Current capabilities
Expand All @@ -186,24 +190,27 @@ A few highlights: convnets, LSTM, GRU, word2vec-style embeddings, PReLU, batch n
Keras uses the following dependencies:

- numpy, scipy

- pyyaml
- Theano
- See installation instructions: http://deeplearning.net/software/theano/install.html#install

- HDF5 and h5py (optional, required if you use model saving/loading functions)

- Optional but recommended if you use CNNs: cuDNN.

Once you have the dependencies installed, cd to the Keras folder and run the install command:
```
sudo python setup.py install
```

You can also install Keras from PyPI:
```
sudo pip install keras
```

## Why this name, Keras?

Keras (κέρας) means _horn_ in Greek. It is a reference to a literary image from ancient Greek and Latin literature, first found in the _Odyssey_, where dream spirits (_Oneiroi_, singular _Oneiros_) are divided between those who deceive men with false visions, who arrive to Earth through a gate of ivory, and those who announce a future that will come to pass, who arrive through a gate of horn. It's a play on the words κέρας (horn) / κραίνω (fulfill), and ἐλέφας (ivory) / ἐλεφαίρομαι (deceive).

Keras was developed as part of the research effort of project ONEIROS (Open-ended Neuro-Electronic Intelligent Robot Operating System).

_"Oneiroi are beyond our unravelling --who can be sure what tale they tell? Not all that men look for comes to pass. Two gates there are that give passage to fleeting Oneiroi; one is made of horn, one of ivory. The Oneiroi that pass through sawn ivory are deceitful, bearing a message that will not be fulfilled; those that come out through polished horn have truth behind them, to be accomplished for men who see them."_ Homer, Odyssey 19. 562 ff (Shewring translation).
>_"Oneiroi are beyond our unravelling --who can be sure what tale they tell? Not all that men look for comes to pass. Two gates there are that give passage to fleeting Oneiroi; one is made of horn, one of ivory. The Oneiroi that pass through sawn ivory are deceitful, bearing a message that will not be fulfilled; those that come out through polished horn have truth behind them, to be accomplished for men who see them."_ Homer, Odyssey 19. 562 ff (Shewring translation).

7 changes: 2 additions & 5 deletions docs/mkdocs.yml
Original file line number Diff line number Diff line change
Expand Up @@ -2,11 +2,9 @@ site_name: Keras Documentation
theme: readthedocs
docs_dir: sources
repo_url: http://github.com/fchollet/keras
site_url: /
site_url: http://keras.io/
#theme_dir: theme
site_description: Documentation for fast and lightweight Keras Deep Learning library.
include_404: true
include_search: true

dev_addr: '0.0.0.0:8000'
google_analytics: ['UA-61785484-1', 'keras.io']
Expand All @@ -32,11 +30,10 @@ pages:
- Advanced Activations Layers: layers/advanced_activations.md
- Normalization Layers: layers/normalization.md
- Embedding Layers: layers/embeddings.md
- Noise layers: layers/noise.md
- Containers: layers/containers.md
- Preprocessing:
- Sequence Preprocessing: preprocessing/sequence.md
- Text Preprocessing: preprocessing/text.md
- Image Preprocessing: preprocessing/image.md
- Utils:
- Visualization Utilities: utils/visualization.md

12 changes: 6 additions & 6 deletions docs/sources/callbacks.md
Original file line number Diff line number Diff line change
@@ -1,6 +1,6 @@
## Usage of callbacks

A callback is a set of functions to be applied at given stages of the training procedure. You can use callbacks to get a view on internal states and statistics of the model during training. You can pass a list of callback (as the keyword argument `callbacks`) to the `.fit()` method of the `Sequential` model. The relevant methods of the callbacks will then be called at each stage of the training.
A callback is a set of functions to be applied at given stages of the training procedure. You can use callbacks to get a view on internal states and statistics of the model during training. You can pass a list of callbacks (as the keyword argument `callbacks`) to the `.fit()` method of the `Sequential` model. The relevant methods of the callbacks will then be called at each stage of the training.

---

Expand Down Expand Up @@ -37,10 +37,10 @@ Save the model after every epoch. If `save_best_only=True`, the latest best mode


```python
keras.callbacks.EarlyStopping(patience=0, verbose=0)
keras.callbacks.EarlyStopping(monitor='val_loss', patience=0, verbose=0)
```

Stop training after no improvement of the validation loss is seen for `patience` epochs.
Stop training after no improvement of the metric `monitor` is seen for `patience` epochs.

---

Expand All @@ -52,7 +52,7 @@ You can create a custom callback by extending the base class `keras.callbacks.Ca
Here's a simple example saving a list of losses over each batch during training:
```python
class LossHistory(keras.callbacks.Callback):
def on_train_begin(self):
def on_train_begin(self, logs={}):
self.losses = []

def on_batch_end(self, batch, logs={}):
Expand All @@ -61,7 +61,7 @@ class LossHistory(keras.callbacks.Callback):

---

### Example to record the loss history
### Example: recording loss history

```python
class LossHistory(keras.callbacks.Callback):
Expand All @@ -88,7 +88,7 @@ print history.losses

---

### Example to checkpoint models
### Example: model checkpoints

```python
from keras.callbacks import ModelCheckpoint
Expand Down
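The signature fix in this hunk (adding `logs={}` to `on_train_begin`) can be sketched without Keras installed. The `Callback` base class below is a simplified stand-in for `keras.callbacks.Callback` of this era, and the training loop merely simulates the calls `model.fit()` would make:

```python
# Hypothetical stand-in for keras.callbacks.Callback, for illustration only;
# the real base class lives in keras/callbacks.py.
class Callback(object):
    def on_train_begin(self, logs={}):
        pass

    def on_batch_end(self, batch, logs={}):
        pass


class LossHistory(Callback):
    # Note the logs={} argument added by this PR's documentation fix:
    # fit() passes a logs dict to every hook, so the old zero-arg
    # signature would raise a TypeError.
    def on_train_begin(self, logs={}):
        self.losses = []

    def on_batch_end(self, batch, logs={}):
        self.losses.append(logs.get('loss'))


# Simulate what model.fit() would do over three batches:
history = LossHistory()
history.on_train_begin()
for batch, loss in enumerate([0.9, 0.7, 0.5]):
    history.on_batch_end(batch, logs={'loss': loss})
print(history.losses)  # -> [0.9, 0.7, 0.5]
```

The mutable-default `logs={}` arguments mirror the documented API of the period rather than modern Python style.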
7 changes: 5 additions & 2 deletions docs/sources/constraints.md
Original file line number Diff line number Diff line change
Expand Up @@ -2,9 +2,12 @@

Functions from the `constraints` module allow setting constraints (eg. non-negativity) on network parameters during optimization.

The keyword arguments used for passing constraints to parameters in a layer will depend on the layer.
The penalties are applied on a per-layer basis. The exact API will depend on the layer, but the layers `Dense`, `TimeDistributedDense`, `MaxoutDense`, `Convolution1D` and `Convolution2D` have a unified API.

In the `Dense` layer it is simply `W_constraint` for the main weights matrix, and `b_constraint` for the bias.
These layers expose 2 keyword arguments:

- `W_constraint` for the main weights matrix
- `b_constraint` for the bias.


```python
Expand Down
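The max-norm constraint tested in the commits above rescales each incoming-weight vector so its L2 norm never exceeds a cap after a gradient update. A pure-Python sketch of the idea, assuming a weight matrix stored as a list of rows (the real implementation in `keras/constraints.py` performs the same clipping symbolically on Theano tensors, and the function name here is illustrative):

```python
import math

def maxnorm(W, m=2.0):
    """Rescale each column of W so its L2 norm is at most m.

    W is a plain-Python weight matrix (list of rows); each column is the
    vector of incoming weights to one unit, which is what the constraint
    clips in Keras.
    """
    n_rows, n_cols = len(W), len(W[0])
    out = [row[:] for row in W]  # copy, so the input is left untouched
    for j in range(n_cols):
        norm = math.sqrt(sum(W[i][j] ** 2 for i in range(n_rows)))
        if norm > m:
            scale = m / norm
            for i in range(n_rows):
                out[i][j] *= scale
    return out

W = [[3.0, 0.5],
     [4.0, 0.5]]  # column 0 has norm 5.0; column 1 has norm ~0.71
clipped = maxnorm(W, m=2.0)
# Column 0 is scaled by 2/5 (to 1.2, 1.6); column 1 is already within
# the cap and passes through unchanged.
print(clipped)
```

The non-negativity and unit-norm constraints from the same test commits follow the identical pattern: a deterministic projection applied to the weights after each update.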
3 changes: 3 additions & 0 deletions docs/sources/documentation.md
Original file line number Diff line number Diff line change
Expand Up @@ -15,6 +15,9 @@
- [Models](models.md)
- [Activations](activations.md)
- [Initializations](initializations.md)
- [Regularizers](regularizers.md)
- [Constraints](constraints.md)
- [Callbacks](callbacks.md)
- [Datasets](datasets.md)

---
Expand Down
8 changes: 5 additions & 3 deletions docs/sources/examples.md
Original file line number Diff line number Diff line change
Expand Up @@ -35,7 +35,7 @@ model.add(Dense(20, 64, init='uniform', activation='tanh'))
model.add(Dropout(0.5))
model.add(Dense(64, 64, init='uniform', activation='tanh'))
model.add(Dropout(0.5))
model.add(Dense(64, 2, init='uniform', activation='softmax')
model.add(Dense(64, 2, init='uniform', activation='softmax'))

sgd = SGD(lr=0.1, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='mean_squared_error', optimizer=sgd)
Expand Down Expand Up @@ -92,6 +92,7 @@ from keras.layers.embeddings import Embedding
from keras.layers.recurrent import LSTM

model = Sequential()
# Add a mask_zero=True to the Embedding constructor if 0 is a left-padding value in your data
model.add(Embedding(max_features, 256))
model.add(LSTM(256, 128, activation='sigmoid', inner_activation='hard_sigmoid'))
model.add(Dropout(0.5))
Expand All @@ -106,8 +107,9 @@ score = model.evaluate(X_test, Y_test, batch_size=16)

---

### Architecture for learning image captions with a convnet and a Gated Recurrent Unit
(word-level embedding, caption of maximum length 16 words).
### Image captioning

Architecture for learning image captions with a convnet and a Gated Recurrent Unit (word-level embedding, caption of maximum length 16 words).

Note that getting this to actually "work" will require using a bigger convnet, initialized with pre-trained weights.
Displaying readable results will also require an embedding decoder.
Expand Down
45 changes: 27 additions & 18 deletions docs/sources/index.md
Original file line number Diff line number Diff line change
Expand Up @@ -2,23 +2,24 @@

## Overview

Keras is a minimalist, highly modular neural network library in the spirit of Torch, written in Python, that uses [Theano](http://deeplearning.net/software/theano/) under the hood for fast tensor manipulation on GPU and CPU. It was developed with a focus on enabling fast experimentation.
Keras is a minimalist, highly modular neural network library in the spirit of Torch, written in Python, that uses [Theano](http://deeplearning.net/software/theano/) under the hood for optimized tensor manipulation on GPU and CPU. It was developed with a focus on enabling fast experimentation.

Use Keras if you need a deep learning library that:

- allows for easy and fast prototyping (through total modularity, minimalism, and extensibility).
- supports both __convolutional networks__ and __recurrent networks__ (LSTM, GRU, etc). As well as combinations of the two.
- runs seamlessly on the CPU and the GPU.
- supports both convolutional networks and recurrent networks, as well as combinations of the two.
- supports arbitrary connectivity schemes (including multi-input and multi-output training).
- runs seamlessly on CPU and GPU.

## Guiding principles

- __Modularity.__ A model is understood as a sequence of standalone, fully-configurable modules that can be plugged together with as little restrictions as possible. In particular, neural layers, cost functions, optimizers, initialization schemes, activation functions and dropout are all standalone modules that you can combine to create new models.
- __Modularity.__ A model is understood as a sequence or a graph of standalone, fully-configurable modules that can be plugged together with as little restrictions as possible. In particular, neural layers, cost functions, optimizers, initialization schemes, activation functions, regularization schemes are all standalone modules that you can combine to create new models.

- __Minimalism.__ Each module should be kept short and simple (<100 lines of code). Every piece of code should be transparent upon first reading. No black magic: it hurts iteration speed and ability to innovate.

- __Easy extensibility.__ A new feature (a new module, per the above definition, or a new way to combine modules together) is dead simple to add (as new classes/functions), and existing modules provide ample examples.
- __Easy extensibility.__ New modules are dead simple to add (as new classes/functions), and existing modules provide ample examples. Being able to easily create new modules allows for total expressiveness, making Keras suitable for advanced research.

- __Work with Python__. No separate models configuration files in a declarative format (like in Caffe or PyLearn2). Models are described in Python code, which is compact, easier to debug, benefits from syntax highlighting, and most of all, allows for ease of extensibility.
- __Work with Python__. No separate models configuration files in a declarative format (like in Caffe or PyLearn2). Models are described in Python code, which is compact, easier to debug, and allows for ease of extensibility.

## Code

Expand All @@ -30,7 +31,9 @@ Keras is licensed under the [MIT license](http://opensource.org/licenses/MIT).

## Getting started: 30 seconds to Keras

The core data structure of Keras is a __model__, a way to organize layers. Here's a sequential model (a linear pile of layers).
The core data structure of Keras is a __model__, a way to organize layers. There are two types of models: [`Sequential`](/models/#sequential) and [`Graph`](/models/#graph).

Here's the `Sequential` model (a linear pile of layers):

```python
from keras.models import Sequential
Expand All @@ -43,9 +46,9 @@ Stacking layers is as easy as `.add()`:
```python
from keras.layers.core import Dense, Activation

model.add(Dense(input_dim=100, output_dim=64, init="uniform"))
model.add(Dense(input_dim=100, output_dim=64, init="glorot_uniform"))
model.add(Activation("relu"))
model.add(Dense(input_dim=64, output_dim=10, init="uniform"))
model.add(Dense(input_dim=64, output_dim=10, init="glorot_uniform"))
model.add(Activation("softmax"))
```
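To make the layer stack above concrete, here is a rough numpy sketch of the computation the two `Dense` layers perform: an affine transform followed by `relu`, then another affine transform followed by `softmax`. The weights `W1`, `b1`, `W2`, `b2` are hypothetical stand-ins for the parameters the `init` scheme would produce, not Keras's actual implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    # Subtract the row max before exponentiating for numerical stability.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.RandomState(0)
# Hypothetical weights standing in for the Dense layers' parameters.
W1, b1 = rng.uniform(-0.05, 0.05, (100, 64)), np.zeros(64)
W2, b2 = rng.uniform(-0.05, 0.05, (64, 10)), np.zeros(10)

x = rng.rand(32, 100)         # a batch of 32 samples, 100 features each
h = relu(x.dot(W1) + b1)      # Dense(input_dim=100, output_dim=64) + relu
y = softmax(h.dot(W2) + b2)   # Dense(input_dim=64, output_dim=10) + softmax
```

Each row of `y` is a probability distribution over the 10 output classes.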

Expand All @@ -67,7 +70,7 @@ model.fit(X_train, Y_train, nb_epoch=5, batch_size=32)

Alternatively, you can feed batches to your model manually:
```python
model.train(X_batch, Y_batch)
model.train_on_batch(X_batch, Y_batch)
```
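Feeding batches manually just means slicing your arrays yourself. As a minimal sketch (the helper name `iterate_minibatches` is illustrative, not part of Keras), each yielded pair would be passed to `model.train_on_batch`:

```python
import numpy as np

def iterate_minibatches(X, Y, batch_size):
    # Yield successive (X_batch, Y_batch) slices; the last batch may be smaller.
    for start in range(0, len(X), batch_size):
        yield X[start:start + batch_size], Y[start:start + batch_size]

X_train = np.random.rand(100, 100)
Y_train = np.random.rand(100, 10)

batches = list(iterate_minibatches(X_train, Y_train, 32))
# In practice, each pair would be fed to model.train_on_batch(X_batch, Y_batch).
```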

Evaluate your performance in one line:
Expand All @@ -81,19 +84,20 @@ classes = model.predict_classes(X_test, batch_size=32)
proba = model.predict_proba(X_test, batch_size=32)
```
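For a softmax output, the predicted class is simply the index of the highest probability in each row. A small numpy sketch of that relationship (the `proba` values here are made up for illustration):

```python
import numpy as np

# Hypothetical per-class probabilities for two samples over three classes.
proba = np.array([[0.1, 0.7, 0.2],
                  [0.8, 0.1, 0.1]])

# Class label = index of the highest probability in each row.
classes = proba.argmax(axis=-1)
```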

Building a network of LSTMs, a deep CNN, a word2vec embedder or any other model is just as fast. The ideas behind deep learning are simple, so why should their implementation be painful?
Building a network of LSTMs, a deep CNN, a Neural Turing Machine, a word2vec embedder or any other model is just as fast. The ideas behind deep learning are simple, so why should their implementation be painful?

Have a look at the [examples](examples.md).

## Installation

Keras uses the following dependencies:

- numpy, scipy
- Theano
- __numpy__, __scipy__
- __pyyaml__
- __Theano__
- See [installation instructions](http://deeplearning.net/software/theano/install.html#install).
- HDF5 and h5py (optional, required if you use model saving/loading functions)
- Optional but recommended if you use CNNs: cuDNN.
- __HDF5__ and __h5py__ (optional, required if you use model saving/loading functions)
- Optional but recommended if you use CNNs: __cuDNN__.

Once you have the dependencies installed, clone the repo:
```bash
Expand All @@ -104,6 +108,10 @@ Go to the Keras folder and run the install command:
cd keras
sudo python setup.py install
```
You can also install Keras from PyPI:
```bash
sudo pip install keras
```

## Support

Expand All @@ -116,15 +124,16 @@ Keras welcomes all contributions from the community.
- Keep a pragmatic mindset and avoid bloat. Only add to the source if that is the only path forward.
- New features should be documented. Make sure you update the documentation along with your Pull Request.
- The documentation for every new feature should include a usage example in the form of a code snippet.
- All changes should be tested. A formal test process will be introduced very soon.
- All changes should be tested. Make sure any new feature you add has a corresponding unit test.
- Please no Pull Requests about coding style.
- Even if you don't contribute to the Keras source code, if you have an application of Keras that is concise and powerful, please consider adding it to our collection of [examples](https://github.com/fchollet/keras/tree/master/examples).


## Why this name, Keras?

Keras (κέρας) means _horn_ in Greek. It is a reference to a literary image from ancient Greek and Latin literature, first found in the _Odyssey_, where dream spirits (_Oneiroi_, singular _Oneiros_) are divided between those who deceive men with false visions, who arrive to Earth through a gate of ivory, and those who announce a future that will come to pass, who arrive through a gate of horn. It's a play on the words κέρας (horn) / κραίνω (fulfill), and ἐλέφας (ivory) / ἐλεφαίρομαι (deceive).

Keras was developed as part of the research effort of project ONEIROS (Open-ended Neuro-Electronic Intelligent Robot Operating System).
Keras was developed as part of the research effort of project __ONEIROS__ (*Open-ended Neuro-Electronic Intelligent Robot Operating System*).

> _"Oneiroi are beyond our unravelling --who can be sure what tale they tell? Not all that men look for comes to pass. Two gates there are that give passage to fleeting Oneiroi; one is made of horn, one of ivory. The Oneiroi that pass through sawn ivory are deceitful, bearing a message that will not be fulfilled; those that come out through polished horn have truth behind them, to be accomplished for men who see them."_

Expand Down
1 change: 0 additions & 1 deletion docs/sources/initializations.md
Original file line number Diff line number Diff line change
@@ -1,4 +1,3 @@
# Initializations

## Usage of initializations

Expand Down
Loading