
Commit 9a03365

fix lint

1 parent e8e96a5

File tree

1 file changed (+12, -7 lines)

configs/ada/README.md

Lines changed: 12 additions & 7 deletions
@@ -1,18 +1,19 @@
 # ADA
 
 > [Training Generative Adversarial Networks with Limited Data](https://arxiv.org/pdf/2006.06676.pdf)
+
 <!-- [ALGORITHM] -->
 
 ## Abstract
+
 Training generative adversarial networks (GAN) using too little data typically leads to discriminator overfitting, causing training to diverge. We propose an adaptive discriminator augmentation mechanism that significantly stabilizes training in limited data regimes. The approach does not require changes to loss functions or network architectures, and is applicable both when training from scratch and when fine-tuning an existing GAN on another dataset. We demonstrate, on several datasets, that good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images. We expect this to open up new application domains for GANs. We also find that the widely used CIFAR-10 is, in fact, a limited data benchmark, and improve the record FID from 5.59 to 2.42.
 
 <!-- [IMAGE] -->
+
 <div align=center>
 <img src="https://user-images.githubusercontent.com/22982797/165902671-ee835ca5-3957-451e-8e7d-e3741d90e0b1.png"/>
 </div>
 
-
-
 ## Results and Models
 
 <div align="center">
@@ -21,15 +22,16 @@ Training generative adversarial networks (GAN) using too little data typically l
 <img src="https://user-images.githubusercontent.com/22982797/165905181-66d6b4e7-6d40-48db-8281-50ebd2705f64.png" width="800"/>
 </div>
 
-
-| Model | Dataset | Iter |FID50k | Config | Log | Download |
-| :---------------------------------: | :-------------: | :-----------: | :-----------: |:---------------------------------------------------------------------------------------------------------------------------: | :-------------: |:--------------------------------------------------------------------------------------------------------------------------------------: |
-| stylegan3-t-ada | metface 1024x1024 | 130000 | 15.09 | [config](https://github.com/open-mmlab/mmgeneration/tree/master/configs/styleganv3/stylegan3_t_ada_fp16_gamma6.6_metfaces_1024_b4x8.py) | [log](https://download.openmmlab.com/mmgen/stylegan3/stylegan3_t_ada_fp16_gamma6.6_metfaces_1024_b4x8_20220328_142211.log.json) |[model](https://download.openmmlab.com/mmgen/stylegan3/stylegan3_t_ada_fp16_gamma6.6_metfaces_1024_b4x8_best_fid_iter_130000_20220401_115101-f2ef498e.pth) |
+| Model | Dataset | Iter | FID50k | Config | Log | Download |
+| :-------------: | :---------------: | :----: | :----: | :-------------------------------------------------------------------------------------------------------------------------------------: | :-----------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------: |
+| stylegan3-t-ada | metface 1024x1024 | 130000 | 15.09 | [config](https://github.com/open-mmlab/mmgeneration/tree/master/configs/styleganv3/stylegan3_t_ada_fp16_gamma6.6_metfaces_1024_b4x8.py) | [log](https://download.openmmlab.com/mmgen/stylegan3/stylegan3_t_ada_fp16_gamma6.6_metfaces_1024_b4x8_20220328_142211.log.json) | [model](https://download.openmmlab.com/mmgen/stylegan3/stylegan3_t_ada_fp16_gamma6.6_metfaces_1024_b4x8_best_fid_iter_130000_20220401_115101-f2ef498e.pth) |
 
 ## Usage
+
 Currently we only implement ADA for StyleGANv2/v3. To use this training trick, you should use `ADAStyleGAN2Discriminator` as your discriminator.
 
 An example:
+
 ```python
 model = dict(
     xxx,
@@ -40,9 +42,11 @@ model = dict(
     xxx
 )
 ```
+
 Here, you can adjust `ada_kimg` to change the magnitude of augmentation (the smaller the value, the greater the magnitude).
 
 `aug_kwargs` is usually set as follows:
+
 ```python
 aug_kwargs = {
     'xflip': 1,
@@ -59,6 +63,7 @@ aug_kwargs = {
     'saturation': 1
 }
 ```
+
 Here, each number is the probability multiplier for the corresponding operation. For details, you can refer to [augment](https://github.com/open-mmlab/mmgeneration/tree/master/mmgen/models/architectures/stylegan/ada/augment.py).
 
 ## Citation
@@ -70,4 +75,4 @@ Here, each number is the probability multiplier for the corresponding operation.
 booktitle = {Proc. NeurIPS},
 year = {2020}
 }
-```
+```
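For context, the `ADAStyleGAN2Discriminator` snippet and the `aug_kwargs` dictionary from the patched README can be pieced together into a single config fragment. This is a minimal sketch, not MMGeneration's confirmed API: the `data_aug`/`ADAAug` wiring, the `ada_kimg` value, and any keys beyond `ADAStyleGAN2Discriminator`, `ada_kimg`, and `aug_kwargs` are assumptions, and the augmentation entries elided by the diff (`...`) are left elided here too.

```python
# Hypothetical sketch of how the pieces in the README fit together; key names
# other than ADAStyleGAN2Discriminator, ada_kimg, and aug_kwargs are assumptions.

# Probability multiplier per augmentation operation (the diff shows only the
# first and last entries; the middle of the dict is elided there as well).
aug_kwargs = {
    'xflip': 1,
    # ... remaining operations, elided in the diff above ...
    'saturation': 1,
}

# A smaller ada_kimg makes the adaptive augmentation probability react more
# strongly to discriminator overfitting, i.e. a greater augmentation magnitude.
discriminator = dict(
    type='ADAStyleGAN2Discriminator',
    data_aug=dict(type='ADAAug', aug_pipeline=aug_kwargs, ada_kimg=100),
)
```

Such a `discriminator` dict would then be dropped into the `model = dict(...)` shown in the diff, following the usual MMGeneration config pattern.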
