@@ -85,6 +85,8 @@ The integers will be between 1 and 10,000 (a vocabulary of 10,000 words) and the
 ``` python
 from keras.layers import Input, Embedding, LSTM, Dense
 from keras.models import Model
+import numpy as np
+np.random.seed(0)  # Set a random seed for reproducibility
 
 # Headline input: meant to receive sequences of 100 integers, between 1 and 10000.
 # Note that we can name any layer by passing it a "name" argument.
@@ -138,7 +140,11 @@ model.compile(optimizer='rmsprop', loss='binary_crossentropy',
 We can train the model by passing it lists of input arrays and target arrays:
 
 ``` python
-model.fit([headline_data, additional_data], [labels, labels],
+headline_data = np.round(np.abs(np.random.rand(12, 100) * 100))
+additional_data = np.random.randn(12, 5)
+headline_labels = np.random.randn(12, 1)
+additional_labels = np.random.randn(12, 1)
+model.fit([headline_data, additional_data], [headline_labels, additional_labels],
           epochs=50, batch_size=32)
 ```
 
@@ -152,10 +158,19 @@ model.compile(optimizer='rmsprop',
 
 # And trained it via:
 model.fit({'main_input': headline_data, 'aux_input': additional_data},
-          {'main_output': labels, 'aux_output': labels},
+          {'main_output': headline_labels, 'aux_output': additional_labels},
           epochs=50, batch_size=32)
 ```
 
+To use the trained model for inference:
+``` python
+model.predict({'main_input': headline_data, 'aux_input': additional_data})
+```
+or, equivalently, by passing the inputs as a list:
+``` python
+pred = model.predict([headline_data, additional_data])
+```
+
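The dummy arrays introduced in this patch are placeholders, and a NumPy-only sanity check (no Keras required) confirms they fit the model described in the text: every array shares the batch dimension of 12, and the rounded headline values are valid indices for a 10,000-word `Embedding` vocabulary.

``` python
import numpy as np

np.random.seed(0)  # reproducibility, matching the snippet above

# Same construction as in the snippet: 12 samples, sequences of
# 100 word indices, plus 5 auxiliary features and two label arrays.
headline_data = np.round(np.abs(np.random.rand(12, 100) * 100))
additional_data = np.random.randn(12, 5)
headline_labels = np.random.randn(12, 1)
additional_labels = np.random.randn(12, 1)

# All input and target arrays must share the same first (batch) dimension.
assert headline_data.shape == (12, 100)
assert additional_data.shape == (12, 5)
assert headline_labels.shape == (12, 1)
assert additional_labels.shape == (12, 1)

# The rounded values stay within the 10,000-word vocabulary range.
assert headline_data.min() >= 0 and headline_data.max() < 10000
```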
 -----
 
 ## Shared layers