- Create a 4-pixel padded training LMDB and a testing LMDB, then create a soft link `ln -s cifar-10-batches-py` in this folder.
  - Directly download it here.
  - Or you can generate it as follows:
    - Get the CIFAR-10 python version.
    - Use `data_utils.py` to generate 4-pixel padded training data and testing data. Horizontal flip and random crop are performed on the fly during training.
- Use `net_generator.py` to generate `solver.prototxt` and `trainval.prototxt`. You can generate a ResNet or plain net of depth 20/32/44/56/110, or even deeper if you want; just change `n` according to `depth = 6n + 2`.
- Specify the Caffe path in `train.sh`, then train networks with `./train.sh [GPUs] [NET]` (e.g., `./train.sh 0,1,2,3 resnet-20`; logs can be accessed from the `resnet-20/logs` folder).
- Specify the Caffe path in `cfgs.py` and use `plot.py` to generate beautiful loss plots.
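The on-the-fly augmentation mentioned above (4-pixel zero padding, random 32x32 crop, random horizontal flip) can be sketched in numpy. This is a minimal illustration of the technique, not the repo's actual `data_utils.py`; the function name `augment` and the NHWC layout are assumptions.

```python
import numpy as np

def augment(batch, pad=4, crop=32):
    """Zero-pad each image by `pad` pixels on every side, take a random
    `crop` x `crop` crop, and flip horizontally with probability 0.5.
    `batch` has shape (N, H, W, C). Hypothetical helper for illustration."""
    n, h, w, c = batch.shape
    padded = np.zeros((n, h + 2 * pad, w + 2 * pad, c), dtype=batch.dtype)
    padded[:, pad:pad + h, pad:pad + w, :] = batch
    out = np.empty((n, crop, crop, c), dtype=batch.dtype)
    for i in range(n):
        # pick a random top-left corner inside the padded image
        y = np.random.randint(0, h + 2 * pad - crop + 1)
        x = np.random.randint(0, w + 2 * pad - crop + 1)
        img = padded[i, y:y + crop, x:x + crop, :]
        if np.random.rand() < 0.5:
            img = img[:, ::-1, :]  # horizontal flip
        out[i] = img
    return out
```

With the standard CIFAR-10 settings (32x32 inputs, 4-pixel padding, 32x32 crop), each crop shifts the image by up to 4 pixels in each direction while keeping the output size fixed.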
There seems to be little difference between resnet-20 and plain-20. However, from the second plot, you can see that plain-110 has difficulty converging.
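The loss plots above come from parsing Caffe's training logs. A minimal sketch of such a parser is below, assuming the standard Caffe solver log format (`Iteration N ..., loss = X`); the repo's `plot.py` is more complete, and `parse_loss` is a hypothetical name.

```python
import re

# Caffe's solver prints progress lines roughly like:
#   I0321 ... solver.cpp:218] Iteration 100 (2.5 iter/s), loss = 1.234
LOSS_RE = re.compile(r"Iteration (\d+).*?, loss = ([\d.eE+-]+)")

def parse_loss(log_text):
    """Return parallel lists (iterations, losses) extracted from a
    Caffe training log given as a single string."""
    iters, losses = [], []
    for m in LOSS_RE.finditer(log_text):
        iters.append(int(m.group(1)))
        losses.append(float(m.group(2)))
    return iters, losses
```

The two lists can then be passed directly to a plotting library to reproduce curves like the resnet-20 vs. plain-20 comparison.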
