
Commit 37d3c81

Add ReadMe for training (#3)
2 parents e9f056f + 8d982a2 commit 37d3c81

File tree

3 files changed (+100, -1 lines)

README.md

Lines changed: 2 additions & 1 deletion
@@ -11,7 +11,7 @@
Real-ESRGAN aims at developing **Practical Algorithms for General Image Restoration**.<br>
We extend the powerful ESRGAN to a practical restoration application (namely, Real-ESRGAN), which is trained with pure synthetic data.

-:triangular_flag_on_post: The training codes have been released. A detailed guide will be provided later (on July 25th). Note that the codes have a lot of refactoring. So there may be some bugs/performance drops. Welcome to report issues and I wil also retrain the models.
+:triangular_flag_on_post: The training codes have been released. A detailed guide can be found in [Training.md](Training.md).

### :book: Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data

@@ -54,6 +54,7 @@ You can download **Windows executable files** from https://github.com/xinntao/Re
This executable file is **portable** and includes all the binaries and models required. No CUDA or PyTorch environment is needed.<br>

You can simply run the following command:
+
```bash
./realesrgan-ncnn-vulkan.exe -i input.jpg -o output.png
```

Training.md

Lines changed: 97 additions & 0 deletions
@@ -0,0 +1,97 @@
# :computer: How to Train Real-ESRGAN

The training codes have been released. <br>
Note that the codes have undergone extensive refactoring, so there may be some bugs or performance drops. You are welcome to report issues, and I will also retrain the models.

## Overview

The training is divided into two stages. The two stages share the same data synthesis process and training pipeline; they differ only in their loss functions. Specifically,

1. We first train Real-ESRNet with an L1 loss, initialized from the pre-trained ESRGAN model.
1. We then use the trained Real-ESRNet model to initialize the generator, and train Real-ESRGAN with a combination of L1 loss, perceptual loss, and GAN loss (a schematic sketch of this combination follows below).

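For intuition only, here is a minimal PyTorch sketch of how such a weighted combination of losses can be assembled. The loss weights, the `perceptual_fn` callable, and the vanilla GAN form are illustrative assumptions, not the settings or code used in this repository.

```python
import torch
import torch.nn.functional as F

def generator_loss(sr, gt, disc_pred_fake, perceptual_fn,
                   l1_w=1.0, percep_w=1.0, gan_w=0.1):
    """Schematic stage-2 objective: weighted sum of L1, perceptual and GAN terms."""
    loss_l1 = F.l1_loss(sr, gt)            # pixel-wise L1 term
    loss_percep = perceptual_fn(sr, gt)    # e.g. a VGG-feature distance (assumed callable)
    # generator side of a vanilla GAN loss on the discriminator's prediction for the SR image
    loss_gan = F.binary_cross_entropy_with_logits(
        disc_pred_fake, torch.ones_like(disc_pred_fake))
    return l1_w * loss_l1 + percep_w * loss_percep + gan_w * loss_gan
```

The stage-1 Real-ESRNet objective corresponds to keeping only the L1 term.
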
## Dataset Preparation

We use the DF2K (DIV2K and Flickr2K) + OST datasets for our training. Only HR images are required. <br>
You can download them from:

1. DIV2K: http://data.vision.ee.ethz.ch/cvl/DIV2K/DIV2K_train_HR.zip
2. Flickr2K: https://cv.snu.ac.kr/research/EDSR/Flickr2K.tar
3. OST: https://openmmlab.oss-cn-hangzhou.aliyuncs.com/datasets/OST_dataset.zip

For the DF2K dataset, we use a multi-scale strategy, *i.e.*, we downsample the HR images to obtain several ground-truth images at different scales.

We then crop the DF2K images into sub-images for faster IO and processing. A minimal sketch of this preparation step is shown below.

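Purely as an illustration (the repository may provide its own preparation scripts), the following Python sketch downsamples each HR image to a few scales and slices the result into fixed-size sub-images. It requires `opencv-python`; the folder paths, scales, crop size, and step are assumptions for this example, not the released settings.

```python
import os
import cv2

src = 'datasets/DF2K/DF2K_HR'      # folder with the original HR images (assumed layout)
dst = 'datasets/DF2K/DF2K_HR_sub'  # output folder for the cropped sub-images
os.makedirs(dst, exist_ok=True)

scales = [1.0, 0.75, 0.5]   # multi-scale ground truths (illustrative values)
crop, step = 400, 200       # sub-image size and sliding-window step (illustrative values)

for name in sorted(os.listdir(src)):
    img = cv2.imread(os.path.join(src, name))
    if img is None:
        continue  # skip non-image files
    base = os.path.splitext(name)[0]
    for s in scales:
        h, w = int(img.shape[0] * s), int(img.shape[1] * s)
        scaled = cv2.resize(img, (w, h), interpolation=cv2.INTER_LANCZOS4)
        idx = 0
        # slide a window over the scaled image and save each crop as a sub-image
        for y in range(0, h - crop + 1, step):
            for x in range(0, w - crop + 1, step):
                idx += 1
                sub = scaled[y:y + crop, x:x + crop]
                cv2.imwrite(os.path.join(dst, f'{base}_x{s}_s{idx:03d}.png'), sub)
```
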
You need to prepare a txt file containing the image paths. The following are some example entries from `meta_info_DF2Kmultiscale+OST_sub.txt` (as different users may partition the sub-images differently, this file will not match your setup, so you need to prepare your own txt file; a minimal generation sketch follows the example):

```txt
DF2K_HR_sub/000001_s001.png
DF2K_HR_sub/000001_s002.png
DF2K_HR_sub/000001_s003.png
...
```

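As a hedged illustration (the repository may ship its own script for this), the sketch below walks a sub-image folder and writes one relative path per line into a meta-info txt file. The folder and output file names are assumptions chosen to match the example entries above.

```python
import os

root = 'datasets/DF2K'                 # dataset root (matches dataroot_gt in the option file)
folder = 'DF2K_HR_sub'                 # folder containing the cropped sub-images
meta_path = 'meta_info_DF2K_sub.txt'   # output meta-info file (name is illustrative)

with open(meta_path, 'w') as f:
    for name in sorted(os.listdir(os.path.join(root, folder))):
        if name.lower().endswith(('.png', '.jpg', '.jpeg')):
            # each line is an image path relative to dataroot_gt
            f.write(f'{folder}/{name}\n')
```
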
## Train Real-ESRNet

1. Download the pre-trained model [ESRGAN](https://drive.google.com/file/d/1b3_bWZTjNO3iL2js1yWkJfjZykcQgvzT/view?usp=sharing) into `experiments/pretrained_models`.
1. Modify the content in the option file `options/train_realesrnet_x4plus.yml` accordingly:
    ```yml
    train:
      name: DF2K+OST
      type: RealESRGANDataset
      dataroot_gt: datasets/DF2K  # modify to the root path of your folder
      meta_info: data/meta_info/meta_info_DF2Kmultiscale+OST_sub.txt  # modify to your own generated meta info txt
      io_backend:
        type: disk
    ```
1. If you want to perform validation during training, uncomment those lines and modify them accordingly:
    ```yml
    # Uncomment these for validation
    # val:
    #   name: validation
    #   type: PairedImageDataset
    #   dataroot_gt: path_to_gt
    #   dataroot_lq: path_to_lq
    #   io_backend:
    #     type: disk

    ...

    # Uncomment these for validation
    # validation settings
    # val:
    #   val_freq: !!float 5e3
    #   save_img: True

    #   metrics:
    #     psnr: # metric name, can be arbitrary
    #       type: calculate_psnr
    #       crop_border: 4
    #       test_y_channel: false
    ```
1. Before the formal training, you may run in `--debug` mode to check whether everything is OK. We use four GPUs for training:
    ```bash
    CUDA_VISIBLE_DEVICES=0,1,2,3 \
    python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --debug
    ```
1. The formal training. We use four GPUs for training, and use the `--auto_resume` argument to automatically resume the training if necessary:
    ```bash
    CUDA_VISIBLE_DEVICES=0,1,2,3 \
    python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 train.py -opt options/train_realesrnet_x4plus.yml --launcher pytorch --auto_resume
    ```

## Train Real-ESRGAN

1. After the training of Real-ESRNet, you will have the file `experiments/train_RealESRNetx4plus_1000k_B12G4_fromESRGAN/model/net_g_1000000.pth`. If you need to specify a different pre-trained path, modify the `pretrain_network_g` value in the option file `train_realesrgan_x4plus.yml`.
1. Modify the option file `train_realesrgan_x4plus.yml` accordingly. Most modifications are similar to those listed above.
1. Before the formal training, you may run in `--debug` mode to check whether everything is OK. We use four GPUs for training:
    ```bash
    CUDA_VISIBLE_DEVICES=0,1,2,3 \
    python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --debug
    ```
1. The formal training. We use four GPUs for training, and use the `--auto_resume` argument to automatically resume the training if necessary:
    ```bash
    CUDA_VISIBLE_DEVICES=0,1,2,3 \
    python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 train.py -opt options/train_realesrgan_x4plus.yml --launcher pytorch --auto_resume
    ```
Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
# Put downloaded pre-trained models here
