Closed
Changes from 1 commit
110 commits
143a155
feat: Add General Layer class
Jul 17, 2017
8328db9
feat: Register the new Deep Learning Method
IlievskiV Jul 17, 2017
33e254b
feat: Define Deep Net class
IlievskiV Jul 17, 2017
8af8ac6
feat: Define Deep Net Method class
IlievskiV Jul 17, 2017
99dd5dc
feat: Add Dense Layer class
IlievskiV Jul 17, 2017
b780c6a
feat: Implement Conv and Max Pool Layer propagation backend
IlievskiV Jul 17, 2017
6074636
feat: Implement Conv Layer Class
IlievskiV Jul 17, 2017
b53ff3a
feat: Implement Max Pool Layer Class
IlievskiV Jul 17, 2017
5756fb6
feat: Define Reshape Layer Class
IlievskiV Jul 17, 2017
4727600
feat: Implement Tensor Data Loader Class
IlievskiV Jul 17, 2017
a799889
feat: Implement Copy Tensor Input and Copy Tensor Output methods
IlievskiV Jul 17, 2017
1d0bfd9
feat: Implement Deep Learning Minimizers
IlievskiV Jul 17, 2017
432fca9
feat: Implement Create Deep Net and the Parsing Layer Methods
IlievskiV Jul 17, 2017
b6b7b8e
feat: Insert Fetch Helper Methods
IlievskiV Jul 17, 2017
a258701
feat: Insert Declare Options and Parse Key Value String methods
IlievskiV Jul 17, 2017
fc89247
feat: Implement Process Options method
IlievskiV Jul 17, 2017
305446d
feat: Implement Train GPU method
IlievskiV Jul 17, 2017
0c64f5f
feat: Define Conv and Max Pool Layer propagation CPU backend
IlievskiV Jul 18, 2017
c1d2c74
feat: Define Conv and Max Pool Layer propagation GPU backend
IlievskiV Jul 18, 2017
a0ef9df
fix: Add 'public' keyword in the inheritance
IlievskiV Jul 18, 2017
040a1d7
fix: Include CPU and GPU backends, conditionally
IlievskiV Jul 18, 2017
9878bbd
feat: Implement Deep Net class
IlievskiV Jul 18, 2017
7008bf3
feat: Add weight matrix in the Tensor Batch class
IlievskiV Jul 18, 2017
8cf90e1
fix: Wrong method names
IlievskiV Jul 18, 2017
0b4c028
fix: Change the method signatures
IlievskiV Jul 18, 2017
fde26b2
fix: Include headers in Method DL
IlievskiV Jul 18, 2017
779162e
feat: Define Reshape kernel
IlievskiV Jul 19, 2017
24e3ce5
feat: Implement Forward and Backward pass in Reshape Layer
IlievskiV Jul 19, 2017
076e5a2
test: Add Im2Col, Downsample and RotateWeights tests
IlievskiV Jul 19, 2017
b312d93
test: Implement function for creating test conv net
IlievskiV Jul 19, 2017
1e907ee
test: Implement Forward pass test
IlievskiV Jul 19, 2017
e7d21ad
test: Implement Conv Loss function test
IlievskiV Jul 19, 2017
9e262dd
test: Implement Conv Prediction function test
IlievskiV Jul 19, 2017
5d65370
test: Implement Conv Backpropagation test
IlievskiV Jul 20, 2017
537c58d
RNNLayer added v1
sshekh Jul 20, 2017
ce65582
ScaleAdd and GetMatrix functions on vectors added
sshekh Jul 21, 2017
2b759c5
Adding Denoise Layer for DeepAutoEncoders
ajatgd Jul 21, 2017
5d071dd
Adding Transform Layer for Deep AutoEncoders
ajatgd Jul 21, 2017
097b3ff
Adding Tensor input and Forward in Denoise Layer
ajatgd Jul 22, 2017
037f613
Fixing a small bug in Denoise Layer
ajatgd Jul 22, 2017
85dc350
Adding DenoisePropagation methods for Reference Architecture
ajatgd Jul 23, 2017
6bc4c32
adding test for Denoise Layer Propagation
ajatgd Jul 23, 2017
4c5982a
Adding Denoise Layers to DeepNet
ajatgd Jul 23, 2017
e3a6602
Adding Logistic Regression Layer and removing Transformed Layer as it…
ajatgd Jul 25, 2017
ab38f5b
Adding tests for Logistic Regression Layer
ajatgd Jul 25, 2017
8f3cea6
Adding Logistic Regression Layer to DeepNet
ajatgd Jul 25, 2017
97f821e
refactor: Migrate to vector of weights and biases, DAE Build Breaking
sshekh Jul 27, 2017
1de838a
refactor: pointers removed from ScaleAdd and Copy signatures
sshekh Jul 27, 2017
5b6aa05
Refactor: Adding Corruption, Compression, Reconstruction layer in acc…
ajatgd Jul 28, 2017
1344d22
Refactor: Adding modified Layers to DeepNet and adding pretrain
ajatgd Jul 28, 2017
48844a0
Refactor: Migrating layers to new general layer constructor, adding d…
ajatgd Jul 31, 2017
45bf15d
Refactor: Adding two parameters to Backward in all layers
ajatgd Aug 1, 2017
a68eb04
Forward test RNN added
sshekh Aug 1, 2017
0890382
Adding FineTune function in DeepNet and test for same
ajatgd Aug 2, 2017
34fb0c6
Adding an attribute for the type of layer in General Layer
ajatgd Aug 3, 2017
5af8f1b
refactor: Format the coding style
IlievskiV Aug 5, 2017
a0807ff
feat: Implement the CPU architecture for Conv Layers
IlievskiV Aug 5, 2017
2cd5e52
feat: Implement Copy function in Tensor Data Loader
IlievskiV Aug 5, 2017
04ab41d
Full example added
sshekh Aug 6, 2017
742f92e
Removing Layer Type attribute from general layer and adding docs for …
ajatgd Aug 6, 2017
8ff7195
test: Add Im2Col, Downsample and RotateWeights tests for CPU
IlievskiV Aug 7, 2017
f00eb50
test: Add Conv Forward Pass Test for CPU
IlievskiV Aug 7, 2017
8344e6e
test: Add Conv Net Loss function test for CPU
IlievskiV Aug 7, 2017
4c7675c
test: Add Conv Net Prediction function test for CPU
IlievskiV Aug 7, 2017
4fa3b09
feat: Implement Tensor Data Loader for Reference
IlievskiV Aug 7, 2017
97d2d89
fix: Input Tensor not initialized properly
IlievskiV Aug 8, 2017
fe615d4
feat: Add function for constructing linear conv net
IlievskiV Aug 8, 2017
8f85bde
test: Add test for Tensor Data Loader for Reference backend
IlievskiV Aug 8, 2017
7d1d83f
feat: Define Flatten and Deflatten kernels
IlievskiV Aug 8, 2017
bdae1c8
feat: Implement Flatten and Deflatten for Reference and CPU
IlievskiV Aug 8, 2017
9be02e5
test: Add Tensor Data Loader test for CPU backend
IlievskiV Aug 8, 2017
60710e5
test: Add test for Flatten for the Reference backend
IlievskiV Aug 8, 2017
b3034ac
feat: Add flattening option in the Reshape Layer
IlievskiV Aug 9, 2017
4222250
fix: Bug fix in the Conv Layer Backprop step
IlievskiV Aug 9, 2017
e089324
temp: Full RNN fixes
sshekh Aug 9, 2017
c2d5ec8
fix: Fix Conv Layer Backward
IlievskiV Aug 13, 2017
962b40b
fix: Change to reference input in the Forward call
IlievskiV Aug 13, 2017
9e4c340
feat: Add test for loading real dataset
IlievskiV Aug 13, 2017
b83c9b0
test: Add tests for minimizers
IlievskiV Aug 13, 2017
4f65dec
feat: Define input layout string
IlievskiV Aug 14, 2017
9f938d1
test: Add test for testing Method DL for CPU
IlievskiV Aug 15, 2017
ad254f3
fix: Multiply Transpose error for CPU backend
IlievskiV Aug 15, 2017
fe26262
feat: Backprop test for Denselayer added
sshekh Aug 17, 2017
f536b42
fix: Add condition for dummy backward gradients in the Dense Layer
IlievskiV Aug 22, 2017
64f7171
feat: Define batch layout string
IlievskiV Aug 22, 2017
72d37e2
feat: Add additional condition for loading batches
IlievskiV Aug 22, 2017
138d8e1
test: Add test for Method DL, for the DNN case
IlievskiV Aug 22, 2017
3a89807
test: Add test for Method DL, for DNN case
IlievskiV Aug 22, 2017
ed24c1e
fix: Initialize bias gradients to zero
IlievskiV Aug 23, 2017
30b337e
MethodDL RNN Parser added
sshekh Aug 25, 2017
cc2bfd0
RNN dimensions changed and full network working
sshekh Aug 29, 2017
83d71ed
CPU (Blas) Support added
sshekh Aug 29, 2017
13419bc
Added Cuda Support in recurrent propagation
sshekh Oct 4, 2017
2fc6fa7
Minor changes, methodDL multi-threading in Minimizer removed
sshekh Oct 5, 2017
c7f2f59
Minor change params of RNNLayer
sshekh Oct 5, 2017
5f92805
TMVA: implemented method GetMvaValue for method DL
omazapa Oct 12, 2017
c3ba4f2
TMVA:
omazapa Oct 12, 2017
dd0ab43
TMVA: removed compilation warnings
omazapa Oct 12, 2017
e6ccfb0
TMVA: moved fPool from TCpuMatrix to TMVA::Config class and removed m…
omazapa Oct 13, 2017
8ba7c95
FIX: MVaValue Calculation in Cpu Architecture
sshekh Oct 20, 2017
79316fd
TMVA: remove warnings in DenoisePropagation.cxx and Propagation.cxx
omazapa Oct 24, 2017
893ac3c
TMVA: removed warnings in TensorDataLoader and TestBackpropagationDL
omazapa Oct 24, 2017
196978b
TMVA: removing more warnings from multiple types of layers and in som…
omazapa Oct 24, 2017
9d76c74
TMVA: removed more warnings
omazapa Oct 24, 2017
048d589
Fix test file name and input batch layout
lmoneta Oct 24, 2017
27c056f
Fix layout string for DNN test. Need a reshape layer before a DNN layer
lmoneta Oct 25, 2017
3d6d6a3
Fix input parameter for Reshape Layer
lmoneta Oct 25, 2017
a353317
Thanks to Vladimir fix weight gradient and activation gradient compu…
lmoneta Nov 9, 2017
d590c6f
Remove some debug print out and improve test
lmoneta Nov 9, 2017
2557947
Fix the padding when computing the activation gradient
lmoneta Nov 10, 2017
feat: Implement Forward and Backward pass in Reshape Layer
Implementation of the Forward and Backward pass in the Reshape Layer,
which transforms the input to the desired output dimensions.
IlievskiV committed Jul 19, 2017
commit 24e3ce5d8f6a0eaa8f4a6c80730fca5ec67883b9
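The commit above describes the reshape as a pure re-layout: the same elements, reinterpreted under new dimensions. A minimal standalone sketch of that idea (the `Matrix` type and `Reshape` function here are illustrative stand-ins, not the TMVA API):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for a TMVA matrix: a row-major buffer plus dimensions.
struct Matrix {
    std::size_t rows, cols;
    std::vector<double> data;
    Matrix(std::size_t r, std::size_t c) : rows(r), cols(c), data(r * c, 0.0) {}
};

// A reshape copies elements in row-major order into the new shape. The element
// counts must match, which is what the layer's dimension check enforces; no
// arithmetic is performed on the values themselves.
void Reshape(Matrix &out, const Matrix &in)
{
    assert(out.rows * out.cols == in.rows * in.cols);
    out.data = in.data; // same flat contents, reinterpreted as (out.rows, out.cols)
}
```

Because the backward pass is the same re-layout in reverse, the layer propagates gradients by reshaping them back to the input dimensions.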
9 changes: 4 additions & 5 deletions tmva/tmva/inc/TMVA/DNN/DeepNet.h
@@ -135,7 +135,7 @@ class TDeepNet {
/*! Function for adding Reshape Layer in the Deep Neural Network, with a given
* height and width. It will take every matrix from the previous layer and
* reshape it to a matrix with new dimensions. */
TReshapeLayer<Architecture_t> *AddReshapeLayer(size_t height, size_t width);
TReshapeLayer<Architecture_t> *AddReshapeLayer(size_t depth, size_t height, size_t width);

/*! Function for adding Reshape Layer in the Deep Neural Network, when
* the layer is already created. */
@@ -444,13 +444,13 @@ void TDeepNet<Architecture_t, Layer_t>::AddDenseLayer(TDenseLayer<Architecture_t

//______________________________________________________________________________
template <typename Architecture_t, typename Layer_t>
TReshapeLayer<Architecture_t> *TDeepNet<Architecture_t, Layer_t>::AddReshapeLayer(size_t height, size_t width)
TReshapeLayer<Architecture_t> *TDeepNet<Architecture_t, Layer_t>::AddReshapeLayer(size_t depth, size_t height,
size_t width)
{
size_t batchSize = this->GetBatchSize();
size_t inputDepth;
size_t inputHeight;
size_t inputWidth;
size_t depth;
size_t outputNSlices = this->GetBatchSize();
size_t outputNRows;
size_t outputNCols;
@@ -466,8 +466,7 @@ TReshapeLayer<Architecture_t> *TDeepNet<Architecture_t, Layer_t>::AddReshapeLaye
inputWidth = lastLayer->GetWidth();
}

depth = inputDepth;
outputNRows = inputDepth;
outputNRows = depth;
outputNCols = height * width;

TReshapeLayer<Architecture_t> *reshapeLayer = new TReshapeLayer<Architecture_t>(
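The `DeepNet.h` hunk above changes `AddReshapeLayer` so the caller supplies the output depth instead of it being copied from the previous layer; the output then has `depth` rows and `height * width` columns per batch slice. A small sketch of that bookkeeping (names here are illustrative, not the TMVA signatures):

```cpp
#include <cstddef>

// Hypothetical sketch of the shape computation in AddReshapeLayer after this
// change: each sample in the batch becomes one output slice, shaped as a
// (depth x height*width) matrix.
struct ReshapeShape {
    std::size_t outputNSlices; // one slice per sample: the batch size
    std::size_t outputNRows;   // the caller-requested depth
    std::size_t outputNCols;   // flattened spatial extent: height * width
};

ReshapeShape MakeReshapeShape(std::size_t batchSize, std::size_t depth,
                              std::size_t height, std::size_t width)
{
    return {batchSize, depth, height * width};
}
```

For example, reshaping a batch of 32 flat vectors into single-channel 28x28 images would yield 32 slices of 1 row by 784 columns.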
12 changes: 11 additions & 1 deletion tmva/tmva/inc/TMVA/DNN/ReshapeLayer.h
@@ -76,7 +76,11 @@ TReshapeLayer<Architecture_t>::TReshapeLayer(size_t batchSize, size_t inputDepth
: VGeneralLayer<Architecture_t>(batchSize, inputDepth, inputHeight, inputWidth, depth, height, width, 0, 0, 0, 0,
outputNSlices, outputNRows, outputNCols, EInitialization::kZero)
{
// Nothing to do here.
if (this->GetInputDepth() * this->GetInputHeight() * this->GetInputWidth() !=
this->GetDepth() * this->GetHeight() * this->GetWidth()) {
std::cout << "Dimensions not compatible" << std::endl;
return;
}
}

//_________________________________________________________________________________________________
@@ -104,13 +108,19 @@ TReshapeLayer<Architecture_t>::~TReshapeLayer()
template <typename Architecture_t>
auto TReshapeLayer<Architecture_t>::Forward(std::vector<Matrix_t> input, bool applyDropout) -> void
{
for (size_t i = 0; i < this->GetBatchSize(); i++) {
Architecture_t::Reshape(this->GetOutputAt(i), input[i]);
}
}

//_________________________________________________________________________________________________
template <typename Architecture_t>
auto TReshapeLayer<Architecture_t>::Backward(std::vector<Matrix_t> &gradients_backward,
const std::vector<Matrix_t> &activations_backward) -> void
{
for (size_t i = 0; i < this->GetBatchSize(); i++) {
Architecture_t::Reshape(gradients_backward[i], this->GetActivationGradientsAt(i));
}
}

//_________________________________________________________________________________________________
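The constructor hunk above validates that input and output element counts match, but it reports the mismatch via `std::cout` and returns, leaving a half-constructed layer. A hedged sketch of a stricter alternative (the function name and message are hypothetical, not part of TMVA):

```cpp
#include <cstddef>
#include <stdexcept>
#include <string>

// Hypothetical dimension check for a reshape layer. Throwing instead of
// printing makes an incompatible reshape impossible to ignore at the call
// site; the arithmetic mirrors the check in the diff exactly.
void CheckReshapeDims(std::size_t inDepth, std::size_t inHeight, std::size_t inWidth,
                      std::size_t depth, std::size_t height, std::size_t width)
{
    const std::size_t inSize = inDepth * inHeight * inWidth;
    const std::size_t outSize = depth * height * width;
    if (inSize != outSize) {
        throw std::invalid_argument("Reshape: input size " + std::to_string(inSize) +
                                    " != output size " + std::to_string(outSize));
    }
}
```

Whether to throw or merely warn is a design choice; the point is that the product of the dimensions, not the individual dimensions, is what a reshape must preserve.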
11 changes: 8 additions & 3 deletions tmva/tmva/src/MethodDL.cxx
@@ -592,6 +592,7 @@ void MethodDL::ParseReshapeLayer(DNN::TDeepNet<Architecture_t, Layer_t> &deepNet
std::vector<DNN::TDeepNet<Architecture_t, Layer_t>> &nets, TString layerString,
TString delim)
{
int depth = 0;
int height = 0;
int width = 0;

@@ -603,12 +604,16 @@

for (; token != nullptr; token = (TObjString *)nextToken()) {
switch (idxToken) {
case 1: // height
case 1: {
TString strDepth(token->GetString());
depth = strDepth.Atoi();
} break;
case 2: // height
{
TString strHeight(token->GetString());
height = strHeight.Atoi();
} break;
case 2: // width
case 3: // width
{
TString strWidth(token->GetString());
width = strWidth.Atoi();
@@ -618,7 +623,7 @@ void MethodDL::ParseReshapeLayer(DNN::TDeepNet<Architecture_t, Layer_t> &deepNet
}

// Add the reshape layer
TReshapeLayer<Architecture_t> *reshapeLayer = deepNet.AddReshapeLayer(height, width);
TReshapeLayer<Architecture_t> *reshapeLayer = deepNet.AddReshapeLayer(depth, height, width);
TReshapeLayer<Architecture_t> *copyReshapeLayer = new TReshapeLayer<Architecture_t>(*reshapeLayer);

// add the copy to all slave nets
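The `MethodDL.cxx` hunk above shifts the token indices so the reshape option now carries three numbers: depth, then height, then width. A standalone sketch of the same tokenizing logic, assuming an option string of the form `"RESHAPE|depth|height|width"` with `'|'` as the delimiter (the real code uses ROOT's `TString` tokenizer; the format and names here are illustrative):

```cpp
#include <sstream>
#include <string>

// Hypothetical standalone equivalent of ParseReshapeLayer's token loop.
// Token 0 is the layer name; tokens 1-3 are depth, height, width after
// this change (previously tokens 1-2 were height and width).
struct ReshapeSpec {
    int depth = 0, height = 0, width = 0;
};

ReshapeSpec ParseReshapeSpec(const std::string &layerString, char delim = '|')
{
    ReshapeSpec spec;
    std::stringstream ss(layerString);
    std::string token;
    int idx = 0;
    while (std::getline(ss, token, delim)) {
        switch (idx) {
        case 1: spec.depth = std::stoi(token); break;
        case 2: spec.height = std::stoi(token); break;
        case 3: spec.width = std::stoi(token); break;
        }
        ++idx;
    }
    return spec;
}
```

The parsed triple is then handed to `AddReshapeLayer(depth, height, width)`, matching the signature change in the `DeepNet.h` hunk.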