Commit d99a69b

Author: Thierry Dumas (committed)
Correct a syntax error in README

1 parent e5e0f64 commit d99a69b

File tree

1 file changed: +18 −3 lines changed


README.md

Lines changed: 18 additions & 3 deletions
@@ -55,11 +55,20 @@ cd autoencoder_based_image_compression/kodak_tensorflow/
 and H.265 are stored in the folder "eae/visualization/test/checking_reconstructing/kodak/".
 
 ## Quick start: training an autoencoder
-1. First of all, ImageNet images must be downloaded. In our case, it is sufficient to download the ILSVRC2012 validation images, "ILSVRC2012_img_val.tar" (6.3 GB), see [ImageNetDownloadWebPage](http://image-net.org/download). Let's say that, in your computer, the path to "ILSVRC2012_img_val.tar" is "path/to/folder_0/ILSVRC2012_img_val.tar" and you want the unpacked images to be put into the folder "path/to/folder_1/" before the script "creating_imagenet.py" preprocesses them. The creation of the ImageNet training and validaton sets of luminance images is then done via
+1. First of all, ImageNet images must be downloaded. In our case, it is sufficient to download the ILSVRC2012 validation
+images, "ILSVRC2012_img_val.tar" (6.3 GB), see [ImageNetDownloadWebPage](http://image-net.org/download). Let's say that,
+on your computer, the path to "ILSVRC2012_img_val.tar" is "path/to/folder_0/ILSVRC2012_img_val.tar" and you want the
+unpacked images to be put into the folder "path/to/folder_1/" before the script "creating_imagenet.py" preprocesses
+them. The creation of the ImageNet training and validation sets of luminance images is then done via
 ```sh
 python creating_imagenet.py path/to/folder_1/ --path_to_tar=path/to/folder_0/ILSVRC2012_img_val.tar
 ```
-2. The training of an autoencoder on the ImageNet training set is done via the command below. 1.0 is the value of the quantization bin widths at the beginning of the training. 14000.0 is the value of the coefficient weighting the distortion term and the rate term in the objective function to be minimized over the parameters of the autoencoder. The script "training_eae_imagenet.py" enables to split the entire autoencoder training into several successive parts. The last argument 0 means that "training_eae_imagenet.py" runs the first part of the entire training. For each successive part, the last argument is incremented by 1.
+2. The training of an autoencoder on the ImageNet training set is done via the command below. 1.0 is the value of the
+quantization bin widths at the beginning of the training. 14000.0 is the value of the coefficient weighting the
+distortion term and the rate term in the objective function to be minimized over the parameters of the autoencoder.
+The script "training_eae_imagenet.py" makes it possible to split the entire autoencoder training into several
+successive parts. The last argument 0 means that "training_eae_imagenet.py" runs the first part of the entire training.
+For each successive part, the last argument is incremented by 1.
 ```sh
 python training_eae_imagenet.py 1.0 14000.0 0
 ```
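
The successive training parts described in the hunk above can be chained in a small script; a minimal sketch, assuming that incrementing the last argument by 1 makes "training_eae_imagenet.py" resume from the part saved before it (the number of parts, 3 here, is purely illustrative):

```shell
#!/bin/sh
# Sketch: run the first three parts of the autoencoder training in sequence.
# 1.0 is the initial quantization bin width; 14000.0 weights the distortion
# term against the rate term in the objective function.
for part in 0 1 2; do
    python training_eae_imagenet.py 1.0 14000.0 "$part"
done
```
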
@@ -68,7 +77,10 @@ cd autoencoder_based_image_compression/kodak_tensorflow/
 The documentation "documentation_kodak/documentation_code.html" describes all the functionalities of the code of the paper.
 
 ## A simple example
-Another piece of code is a simple example for introducing the code of the paper. This piece of code is stored in the folder "svhn". Its documentation is in the file "documentation_svhn/documentation_code.html". If you feel comfortable with autoencoders, this piece of code can be skipped. Its purpose is to clarify the training of a rate-distortion optimized autoencoder. That is why a simple rate-distortion optimized autoencoder with very few hidden units is trained on tiny images (32x32 SVHN digits).
+Another piece of code is a simple example for introducing the code of the paper. This piece of code is stored in the folder
+"svhn". Its documentation is in the file "documentation_svhn/documentation_code.html". If you feel comfortable with autoencoders,
+this piece of code can be skipped. Its purpose is to clarify the training of a rate-distortion optimized autoencoder. That is why
+a simple rate-distortion optimized autoencoder with very few hidden units is trained on tiny images (32x32 SVHN digits).
 
 ## Citing
 ```
@@ -78,3 +90,6 @@ Another piece of code is a simple example for introducing the code of the paper.
 booktitle = {ICASSP},
 year = {2018}
 }
+```
+
+
