110 commits
143a155
feat: Add General Layer class
Jul 17, 2017
8328db9
feat: Register the new Deep Learning Method
IlievskiV Jul 17, 2017
33e254b
feat: Define Deep Net class
IlievskiV Jul 17, 2017
8af8ac6
feat: Define Deep Net Method class
IlievskiV Jul 17, 2017
99dd5dc
feat: Add Dense Layer class
IlievskiV Jul 17, 2017
b780c6a
feat: Implement Conv and Max Pool Layer propagation backend
IlievskiV Jul 17, 2017
6074636
feat: Implement Conv Layer Class
IlievskiV Jul 17, 2017
b53ff3a
feat: Implement Max Pool Layer Class
IlievskiV Jul 17, 2017
5756fb6
feat: Define Reshape Layer Class
IlievskiV Jul 17, 2017
4727600
feat: Implement Tensor Data Loader Class
IlievskiV Jul 17, 2017
a799889
feat: Implement Copy Tensor Input and Copy Tensor Output methods
IlievskiV Jul 17, 2017
1d0bfd9
feat: Implement Deep Learning Minimizers
IlievskiV Jul 17, 2017
432fca9
feat: Implement Create Deep Net and the Parsing Layer Methods
IlievskiV Jul 17, 2017
b6b7b8e
feat: Insert Fetch Helper Methods
IlievskiV Jul 17, 2017
a258701
feat: Insert Declare Options and Parse Key Value String methods
IlievskiV Jul 17, 2017
fc89247
feat: Implement Process Options method
IlievskiV Jul 17, 2017
305446d
feat: Implement Train GPU method
IlievskiV Jul 17, 2017
0c64f5f
feat: Define Conv and Max Pool Layer propagation CPU backend
IlievskiV Jul 18, 2017
c1d2c74
feat: Define Conv and Max Pool Layer propagation GPU backend
IlievskiV Jul 18, 2017
a0ef9df
fix: Add 'public' keyword in the inheritance
IlievskiV Jul 18, 2017
040a1d7
fix: Include CPU and GPU backends, conditionally
IlievskiV Jul 18, 2017
9878bbd
feat: Implement Deep Net class
IlievskiV Jul 18, 2017
7008bf3
feat: Add weight matrix in the Tensor Batch class
IlievskiV Jul 18, 2017
8cf90e1
fix: Wrong method names
IlievskiV Jul 18, 2017
0b4c028
fix: Change the method signatures
IlievskiV Jul 18, 2017
fde26b2
fix: Include headers in Method DL
IlievskiV Jul 18, 2017
779162e
feat: Define Reshape kernel
IlievskiV Jul 19, 2017
24e3ce5
feat: Implement Forward and Backward pass in Reshape Layer
IlievskiV Jul 19, 2017
076e5a2
test: Add Im2Col, Downsample and RotateWeights tests
IlievskiV Jul 19, 2017
b312d93
test: Implement function for creating test conv net
IlievskiV Jul 19, 2017
1e907ee
test: Implement Forward pass test
IlievskiV Jul 19, 2017
e7d21ad
test: Implement Conv Loss function test
IlievskiV Jul 19, 2017
9e262dd
test: Implement Conv Prediction function test
IlievskiV Jul 19, 2017
5d65370
test: Implement Conv Backpropagation test
IlievskiV Jul 20, 2017
537c58d
RNNLayer added v1
sshekh Jul 20, 2017
ce65582
ScaleAdd and GetMatrix functions on vectors added
sshekh Jul 21, 2017
2b759c5
Adding Denoise Layer for DeepAutoEncoders
ajatgd Jul 21, 2017
5d071dd
Adding Transform Layer for Deep AutoEncoders
ajatgd Jul 21, 2017
097b3ff
Adding Tensor input and Forward in Denoise Layer
ajatgd Jul 22, 2017
037f613
Fixing a small bug in Denoise Layer
ajatgd Jul 22, 2017
85dc350
Adding DenoisePropagation methods for Reference Architecture
ajatgd Jul 23, 2017
6bc4c32
adding test for Denoise Layer Propagation
ajatgd Jul 23, 2017
4c5982a
Adding Denoise Layers to DeepNet
ajatgd Jul 23, 2017
e3a6602
Adding Logistic Regression Layer and removing Transformed Layer as it…
ajatgd Jul 25, 2017
ab38f5b
Adding tests for Logistic Regression Layer
ajatgd Jul 25, 2017
8f3cea6
Adding Logistic Regression Layer to DeepNet
ajatgd Jul 25, 2017
97f821e
refactor: Migrate to vector of weights and biases, DAE Build Breaking
sshekh Jul 27, 2017
1de838a
refactor: pointers removed from ScaleAdd and Copy signatures
sshekh Jul 27, 2017
5b6aa05
Refactor: Adding Corruption, Compression, Reconstruction layer in acc…
ajatgd Jul 28, 2017
1344d22
Refactor: Adding modified Layers to DeepNet and adding pretrain
ajatgd Jul 28, 2017
48844a0
Refactor: Migrating layers to new general layer constructor, adding d…
ajatgd Jul 31, 2017
45bf15d
Refactor: Adding two parameters to Backward in all layers
ajatgd Aug 1, 2017
a68eb04
Forward test RNN added
sshekh Aug 1, 2017
0890382
Adding FineTune function in DeepNet and test for same
ajatgd Aug 2, 2017
34fb0c6
Adding an attribute for the type of layer in General Layer
ajatgd Aug 3, 2017
5af8f1b
refactor: Format the coding style
IlievskiV Aug 5, 2017
a0807ff
feat: Implement the CPU architecture for Conv Layers
IlievskiV Aug 5, 2017
2cd5e52
feat: Implement Copy function in Tensor Data Loader
IlievskiV Aug 5, 2017
04ab41d
Full example added
sshekh Aug 6, 2017
742f92e
Removing Layer Type attribute from general layer and adding docs for …
ajatgd Aug 6, 2017
8ff7195
test: Add Im2Col, Downsample and RotateWeights tests for CPU
IlievskiV Aug 7, 2017
f00eb50
test: Add Conv Forward Pass Test for CPU
IlievskiV Aug 7, 2017
8344e6e
test: Add Conv Net Loss function test for CPU
IlievskiV Aug 7, 2017
4c7675c
test: Add Conv Net Prediction function test for CPU
IlievskiV Aug 7, 2017
4fa3b09
feat: Implement Tensor Data Loader for Reference
IlievskiV Aug 7, 2017
97d2d89
fix: Input Tensor not initialized properly
IlievskiV Aug 8, 2017
fe615d4
feat: Add function for constructing linear conv net
IlievskiV Aug 8, 2017
8f85bde
test: Add test for Tensor Data Loader for Reference backend
IlievskiV Aug 8, 2017
7d1d83f
feat: Define Flatten and Deflatten kernels
IlievskiV Aug 8, 2017
bdae1c8
feat: Implement Flatten and Deflatten for Reference and CPU
IlievskiV Aug 8, 2017
9be02e5
test: Add Tensor Data Loader test for CPU backend
IlievskiV Aug 8, 2017
60710e5
test: Add test for Flatten for the Reference backend
IlievskiV Aug 8, 2017
b3034ac
feat: Add flattening option in the Reshape Layer
IlievskiV Aug 9, 2017
4222250
fix: Bug fix in the Conv Layer Backprop step
IlievskiV Aug 9, 2017
e089324
temp: Full RNN fixes
sshekh Aug 9, 2017
c2d5ec8
fix: Fix Conv Layer Backward
IlievskiV Aug 13, 2017
962b40b
fix: Change to reference input in the Forward call
IlievskiV Aug 13, 2017
9e4c340
feat: Add test for loading real dataset
IlievskiV Aug 13, 2017
b83c9b0
test: Add tests for minimizers
IlievskiV Aug 13, 2017
4f65dec
feat: Define input layout string
IlievskiV Aug 14, 2017
9f938d1
test: Add test for testing Method DL for CPU
IlievskiV Aug 15, 2017
ad254f3
fix: Multiply Transpose error for CPU backend
IlievskiV Aug 15, 2017
fe26262
feat: Backprop test for Denselayer added
sshekh Aug 17, 2017
f536b42
fix: Add condition for dummy backward gradients in the Dense Layer
IlievskiV Aug 22, 2017
64f7171
feat: Define batch layout string
IlievskiV Aug 22, 2017
72d37e2
feat: Add additional condition for loading batches
IlievskiV Aug 22, 2017
138d8e1
test: Add test for Method DL, for the DNN case
IlievskiV Aug 22, 2017
3a89807
test: Add test for Method DL, for DNN case
IlievskiV Aug 22, 2017
ed24c1e
fix: Initialize bias gradients to zero
IlievskiV Aug 23, 2017
30b337e
MethodDL RNN Parser added
sshekh Aug 25, 2017
cc2bfd0
RNN dimensions changed and full network working
sshekh Aug 29, 2017
83d71ed
CPU (BLAS) Support added
sshekh Aug 29, 2017
13419bc
Added Cuda Support in recurrent propagation
sshekh Oct 4, 2017
2fc6fa7
Minor changes, methodDL multi-threading in Minimizer removed
sshekh Oct 5, 2017
c7f2f59
Minor change params of RNNLayer
sshekh Oct 5, 2017
5f92805
TMVA: implemented method GetMvaValue for method DL
omazapa Oct 12, 2017
c3ba4f2
TMVA:
omazapa Oct 12, 2017
dd0ab43
TMVA: removed compilation warnings
omazapa Oct 12, 2017
e6ccfb0
TMVA: moved fPool from TCpuMatrix to TMVA::Config class and removed m…
omazapa Oct 13, 2017
8ba7c95
FIX: MvaValue Calculation in Cpu Architecture
sshekh Oct 20, 2017
79316fd
TMVA: remove warnings in DenoisePropagation.cxx and Propagation.cxx
omazapa Oct 24, 2017
893ac3c
TMVA: removed warnings in TensorDataLoader and TestBackpropagationDL
omazapa Oct 24, 2017
196978b
TMVA: removing more warnings from multiple types of layers and in som…
omazapa Oct 24, 2017
9d76c74
TMVA: removed more warnings
omazapa Oct 24, 2017
048d589
Fix test file name and input batch layout
lmoneta Oct 24, 2017
27c056f
Fix layout string for DNN test. Need a reshape layer before a DNN layer
lmoneta Oct 25, 2017
3d6d6a3
Fix input parameter for Reshape Layer
lmoneta Oct 25, 2017
a353317
Thanks to Vladimir, fix weight gradient and activation gradient compu…
lmoneta Nov 9, 2017
d590c6f
Remove some debug print out and improve test
lmoneta Nov 9, 2017
2557947
Fix the padding when computing the activation gradient
lmoneta Nov 10, 2017
TMVA:
* removed debug messages
* fixed dummy layer in TDeepNet::Backward that was producing a segfault.
* added initialization to zero in TCpuMatrix
* set batch size to 1 in GetMvaValue in MethodDL
omazapa committed Oct 12, 2017
commit c3ba4f2b14c87c06320e0072d067235aef1f0f07
2 changes: 0 additions & 2 deletions tmva/tmva/inc/TMVA/DNN/CNN/ConvLayer.h
@@ -207,7 +207,6 @@ TConvLayer<Architecture_t>::~TConvLayer()
 template <typename Architecture_t>
 auto TConvLayer<Architecture_t>::Forward(std::vector<Matrix_t> &input, bool applyDropout) -> void
 {
-   std::cout << "Conv Layer Forward" << std::endl;
    for (size_t i = 0; i < this->GetBatchSize(); i++) {
 
       if (applyDropout && (this->GetDropoutProbability() != 1.0)) {
@@ -233,7 +232,6 @@ auto TConvLayer<Architecture_t>::Backward(std::vector<Matrix_t> &gradients_backw
                                           const std::vector<Matrix_t> &activations_backward,
                                           std::vector<Matrix_t> &inp1, std::vector<Matrix_t> &inp2) -> void
 {
-   std::cout << "Conv Layer Backward" << std::endl;
    Architecture_t::ConvLayerBackward(
       gradients_backward, this->GetWeightGradientsAt(0), this->GetBiasGradientsAt(0), this->GetDerivatives(),
       this->GetActivationGradients(), this->GetWeightsAt(0), activations_backward, this->GetBatchSize(),
2 changes: 0 additions & 2 deletions tmva/tmva/inc/TMVA/DNN/CNN/MaxPoolLayer.h
@@ -170,7 +170,6 @@ TMaxPoolLayer<Architecture_t>::~TMaxPoolLayer()
 template <typename Architecture_t>
 auto TMaxPoolLayer<Architecture_t>::Forward(std::vector<Matrix_t> &input, bool applyDropout) -> void
 {
-   std::cout << "Max Pool Layer Forward" << std::endl;
    for (size_t i = 0; i < this->GetBatchSize(); i++) {
 
       if (applyDropout && (this->GetDropoutProbability() != 1.0)) {
@@ -189,7 +188,6 @@ auto TMaxPoolLayer<Architecture_t>::Backward(std::vector<Matrix_t> &gradients_ba
                                              const std::vector<Matrix_t> &activations_backward,
                                              std::vector<Matrix_t> &inp1, std::vector<Matrix_t> &inp2) -> void
 {
-   std::cout << "Max Pool Layer Backward" << std::endl;
    Architecture_t::MaxPoolLayerBackward(gradients_backward, this->GetActivationGradients(), indexMatrix,
                                         this->GetBatchSize(), this->GetDepth(), this->GetNLocalViews());
 }
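For context, the index-matrix idea behind the MaxPoolLayerBackward call above, as a hedged standalone sketch: the forward pass records, for each pooled output, which input cell won the max, and the backward pass routes the gradient only to that winner. Mat and the exact indexing below are simplified stand-ins, not TMVA's types:

```cpp
#include <cstddef>
#include <vector>

// Simplified stand-in matrix (row-major); TMVA's real Matrix_t differs.
struct Mat {
   size_t rows, cols;
   std::vector<double> d;
   Mat(size_t r, size_t c) : rows(r), cols(c), d(r * c, 0.0) {}
   double &operator()(size_t i, size_t j) { return d[i * cols + j]; }
   double operator()(size_t i, size_t j) const { return d[i * cols + j]; }
};

// Sketch of max-pool backprop: indexMatrix(i, j) stores the flattened input
// position that produced the max for depth slice i and local view j during
// the forward pass. Backward adds the pooled gradient to exactly that cell
// and leaves every losing cell at zero.
void MaxPoolBackwardSketch(Mat &inputGradients,            // depth x nInputCells (pre-zeroed)
                           const Mat &activationGradients, // depth x nLocalViews
                           const Mat &indexMatrix,         // depth x nLocalViews
                           size_t depth, size_t nLocalViews)
{
   for (size_t i = 0; i < depth; ++i) {
      for (size_t j = 0; j < nLocalViews; ++j) {
         auto winner = static_cast<size_t>(indexMatrix(i, j));
         inputGradients(i, winner) += activationGradients(i, j);
      }
   }
}
```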
4 changes: 2 additions & 2 deletions tmva/tmva/inc/TMVA/DNN/DeepNet.h
@@ -872,8 +872,8 @@ auto TDeepNet<Architecture_t, Layer_t>::Backward(std::vector<Matrix_t> &input, c
 
    std::vector<Matrix_t> dummy;
    for (size_t i = 0; i < input.size(); i++) {
-      //dummy.emplace_back(input[i].GetNrows(), input[i].GetNcols());
-      dummy.emplace_back(0, 0);
+      dummy.emplace_back(input[i].GetNrows(), input[i].GetNcols());
+      // dummy.emplace_back(0, 0);
    }
    fLayers[0]->Backward(dummy, input, inp1, inp2);
 }
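Why the 0x0 dummy matrices segfaulted, as a minimal self-contained sketch (Matrix and LayerBackward here are hypothetical stand-ins, not TMVA's classes): the first layer's Backward still writes into the gradient matrices handed to it, so they must be allocated with the input's dimensions even though their contents are discarded:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical minimal matrix; the out-of-range write on the 0x0 case is
// exactly the undefined behaviour the commit fixes.
struct Matrix {
   size_t nRows, nCols;
   std::vector<double> data;
   Matrix(size_t r, size_t c) : nRows(r), nCols(c), data(r * c, 0.0) {}
   double &operator()(size_t i, size_t j) { return data[i * nCols + j]; }
};

// Stand-in for a layer backward pass: it unconditionally writes gradients
// shaped like the activations into the matrices it receives.
void LayerBackward(std::vector<Matrix> &gradients, const std::vector<Matrix> &activations)
{
   for (size_t b = 0; b < activations.size(); ++b)
      for (size_t i = 0; i < activations[b].nRows; ++i)
         for (size_t j = 0; j < activations[b].nCols; ++j)
            gradients[b](i, j) = 0.0; // out of bounds if gradients[b] is 0x0
}

int main()
{
   std::vector<Matrix> input{Matrix(4, 3)};
   std::vector<Matrix> dummy;
   for (auto &m : input) {
      dummy.emplace_back(m.nRows, m.nCols); // the fix: match the input dimensions
      // dummy.emplace_back(0, 0);          // old code: no storage -> UB in Backward
   }
   LayerBackward(dummy, input);
   return 0;
}
```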
5 changes: 5 additions & 0 deletions tmva/tmva/src/DNN/Architectures/Cpu/CpuMatrix.cxx
@@ -24,6 +24,11 @@ TCpuMatrix<AReal>::TCpuMatrix(size_t nRows, size_t nCols)
    : fBuffer(nRows * nCols), fNCols(nCols), fNRows(nRows)
 {
    Initialize();
+   for (size_t j = 0; j < fNCols; j++) {
+      for (size_t i = 0; i < fNRows; i++) {
+         (*this)(i, j) = 0;
+      }
+   }
 }
 
 //____________________________________________________________________________
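The added loop zeroes the freshly allocated buffer element by element, columns outer and rows inner. A hedged sketch of the same effect in one pass over the contiguous storage; CpuMatrixSketch and its members are illustrative stand-ins, not TCpuMatrix's real API:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Illustrative stand-in for TCpuMatrix. In the real class the buffer's
// memory is presumably not zeroed on allocation, which is why the
// constructor in the hunk above clears it explicitly; std::vector is used
// here only to keep the sketch self-contained.
template <typename AReal>
struct CpuMatrixSketch {
   std::vector<AReal> fBuffer;
   size_t fNRows, fNCols;

   CpuMatrixSketch(size_t nRows, size_t nCols)
      : fBuffer(nRows * nCols), fNRows(nRows), fNCols(nCols)
   {
      // Single pass over contiguous storage; equivalent to the per-element
      // (i, j) double loop in the commit.
      std::fill(fBuffer.begin(), fBuffer.end(), AReal(0));
   }

   // Column-major access: the commit's loop order (columns outer, rows
   // inner) walks this layout sequentially.
   AReal &operator()(size_t i, size_t j) { return fBuffer[j * fNRows + i]; }
};
```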
2 changes: 0 additions & 2 deletions tmva/tmva/src/DNN/Architectures/Cpu/Propagation.cxx
@@ -273,8 +273,6 @@ template <typename AFloat>
 void TCpu<AFloat>::CalculateConvBiasGradients(TCpuMatrix<AFloat> &biasGradients, std::vector<TCpuMatrix<AFloat>> &df,
                                               size_t batchSize, size_t depth, size_t nLocalViews)
 {
-   std::cout << "Calculate Conv Bias Gradients method call" << std::endl;
-
    for (size_t i = 0; i < depth; i++) {
       AFloat sum = 0;
       for (size_t j = 0; j < nLocalViews; j++) {
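The hunk is cut off before the accumulation, but the visible loop structure suggests the standard convolutional bias gradient: one scalar per depth slice, summed over all local views and all events in the batch. A hedged reconstruction with a stand-in matrix type (the elided body below is an assumption, not the file's verbatim code):

```cpp
#include <cstddef>
#include <vector>

// Stand-in matrix: depth rows, nLocalViews columns (one per spatial
// position of the conv output); TMVA's TCpuMatrix differs in storage.
template <typename AFloat>
struct MatSketch {
   size_t rows, cols;
   std::vector<AFloat> d;
   MatSketch(size_t r, size_t c) : rows(r), cols(c), d(r * c, AFloat(0)) {}
   AFloat &operator()(size_t i, size_t j) { return d[i * cols + j]; }
   AFloat operator()(size_t i, size_t j) const { return d[i * cols + j]; }
};

// Presumed shape of the elided body: a conv bias is shared by an entire
// depth slice, so its gradient is the sum of df over every local view j
// and every batch event k for that slice i.
template <typename AFloat>
void ConvBiasGradientsSketch(MatSketch<AFloat> &biasGradients,          // depth x 1
                             const std::vector<MatSketch<AFloat>> &df, // batchSize matrices, depth x nLocalViews
                             size_t batchSize, size_t depth, size_t nLocalViews)
{
   for (size_t i = 0; i < depth; i++) {
      AFloat sum = 0;
      for (size_t j = 0; j < nLocalViews; j++) {
         for (size_t k = 0; k < batchSize; k++) {
            sum += df[k](i, j);
         }
      }
      biasGradients(i, 0) = sum;
   }
}
```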
6 changes: 6 additions & 0 deletions tmva/tmva/src/MethodDL.cxx
@@ -1254,6 +1254,12 @@ void MethodDL::TrainCpu()
 Double_t MethodDL::GetMvaValue(Double_t * /*errLower*/, Double_t * /*errUpper*/)
 {
 #ifdef DNNCPU
+   // set the batch size to 1 for the evaluation
+   fNet->SetBatchSize(1);
+   // set also in the layers
+   auto &layers = fNet->GetLayers();
+   for (auto &l : layers) l->SetBatchSize(1);
+
    using Architecture_t = DNN::TCpu<Double_t>;
    using Matrix_t = typename Architecture_t::Matrix_t;
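The pattern being added: GetMvaValue scores a single event, but the network and all of its layers were built with the training batch size, so both must be shrunk to a batch of 1 before the forward pass. A minimal hedged sketch of that pattern with stand-in Net/Layer classes (not TMVA's):

```cpp
#include <memory>
#include <vector>

// Stand-in layer: in TMVA each layer owns batch-sized state (activations,
// gradients) that has to be re-dimensioned when the batch size changes,
// hence the per-layer SetBatchSize call in the hunk above.
struct LayerSketch {
   size_t fBatchSize = 32;
   void SetBatchSize(size_t b) { fBatchSize = b; }
};

struct NetSketch {
   size_t fBatchSize = 32;
   std::vector<std::unique_ptr<LayerSketch>> fLayers;
   void SetBatchSize(size_t b) { fBatchSize = b; }
   std::vector<std::unique_ptr<LayerSketch>> &GetLayers() { return fLayers; }
};

// Single-event evaluation: resize the net AND every layer to batch size 1,
// then run the forward pass on a one-row input (elided here).
double EvaluateOneEvent(NetSketch &net)
{
   net.SetBatchSize(1);
   for (auto &layer : net.GetLayers())
      layer->SetBatchSize(1);
   // ... copy the event into a 1 x nFeatures input and call Forward ...
   return 0.0; // placeholder for the network output
}
```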