Java neural network implementation with a plugin for [WEKA](http://www.cs.waikato.ac.nz/ml/weka/). Uses dropout and rectified linear units. The implementation is multithreaded and uses the [MTJ](https://github.com/fommil/matrix-toolkits-java) matrix library with native libs for performance.
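To make the two ingredients above concrete, here is a minimal, self-contained sketch of a rectified linear unit. This is illustrative only, not the plugin's actual code; the class and method names are made up for the example.

```java
// Sketch of a rectified linear unit (ReLU): f(x) = max(0, x).
// Cheap to compute and avoids the vanishing gradients of sigmoid units.
public class Relu {
    static double relu(double x) {
        return Math.max(0.0, x);
    }

    public static void main(String[] args) {
        System.out.println(relu(-2.0)); // negative inputs are clipped to 0.0
        System.out.println(relu(3.5));  // positive inputs pass through: 3.5
    }
}
```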
## Installation
In WEKA, go to Tools/Package Manager and press the "File/URL" button. Enter "https://github.com/amten/NeuralNetwork/archive/NeuralNetwork_0.1.zip" and press "OK".
**Important!**
For optimal performance on Windows, you need to copy the native matrix library DLL files into Weka's install dir (".../Program Files/Weka-3-7").
Unzip this file into Weka's install dir: https://github.com/amten/NeuralNetwork/archive/BLAS_dlls_0.1.zip
On Linux, the native matrix library files have not been tested, though it should be possible to install them using the instructions given [here](https://github.com/fommil/netlib-java/).
## Usage
In WEKA, you will find the classifier under classifiers/functions/NeuralNetwork.
**Note 1:** If you start Weka with a console (an option available in the Windows start menu), you will get printouts of the cost during each training iteration, and you can press &lt;enter&gt; in the console window to halt training.
**Note 2:** When using dropout as regularization, it might still be a good idea to keep a small weight penalty. This keeps the weights from exploding and causing numerical overflows.
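The interplay of the two techniques can be sketched as follows. This is a hand-rolled illustration, not the plugin's implementation: dropout randomly zeroes activations during training, while an L2 weight penalty shrinks each weight a little on every update so it cannot grow without bound. All names and the learning-rate/penalty values here are made up for the example.

```java
import java.util.Random;

public class DropoutDecay {
    // Dropout: zero each activation with probability p, and scale the
    // survivors by 1/(1-p) so the expected activation is unchanged.
    static double[] dropout(double[] activations, double p, Random rng) {
        double[] out = new double[activations.length];
        for (int i = 0; i < activations.length; i++) {
            out[i] = rng.nextDouble() < p ? 0.0 : activations[i] / (1.0 - p);
        }
        return out;
    }

    // One gradient step with an L2 weight penalty lambda:
    // w <- w - lr * (grad + lambda * w). The lambda*w term pulls the
    // weight toward zero, preventing it from growing unboundedly.
    static double decayStep(double w, double grad, double lr, double lambda) {
        return w - lr * (grad + lambda * w);
    }

    public static void main(String[] args) {
        double[] a = dropout(new double[]{1.0, 2.0, 3.0}, 0.5, new Random(42));
        System.out.println(java.util.Arrays.toString(a));
        // Each weight update shrinks w slightly beyond the plain gradient step.
        System.out.println(decayStep(1.0, 0.1, 0.01, 0.001));
    }
}
```

Even a tiny lambda (here 0.001) is enough in practice: dropout's noisy gradients can otherwise drive individual weights to very large magnitudes over many iterations.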
## License
Free to copy and modify. Please include author name if you copy code.