Alex J. Champandard
c03a0b7fa3
Fix for trained models loaded from the script directory. Closes #67.
9 years ago
Alex J. Champandard
a69d2ae060
Add histogram color matching, enabled with `--rendering-histogram`.
9 years ago
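For readers unfamiliar with the technique behind `--rendering-histogram`: histogram matching remaps the output's per-channel value distribution onto the original image's, which keeps colors stable after enhancement. A minimal NumPy sketch with a hypothetical `match_histogram` helper, not the repository's actual implementation:

```python
import numpy as np

def match_histogram(source, reference):
    """Remap each channel of `source` so its cumulative distribution
    matches that of `reference`, stabilizing colors after enhancement."""
    result = np.empty(source.shape, dtype=np.float64)
    for c in range(source.shape[2]):
        src, ref = source[..., c].ravel(), reference[..., c].ravel()
        _, s_idx, s_counts = np.unique(
            src, return_inverse=True, return_counts=True)
        r_values, r_counts = np.unique(ref, return_counts=True)
        s_cdf = np.cumsum(s_counts) / src.size   # source quantiles
        r_cdf = np.cumsum(r_counts) / ref.size   # reference quantiles
        # Look up the reference value at each source quantile.
        matched = np.interp(s_cdf, r_cdf, r_values)
        result[..., c] = matched[s_idx].reshape(source.shape[:2])
    return result
```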
Alex J. Champandard
965bd55090
Add new model selection (by image type and 'enhance' operation), with new parameters and documentation in the README.
9 years ago
Alex J. Champandard
e6120878c6
Minor rework of the training parameters.
9 years ago
Alex J. Champandard
0d4621df0a
Simplify architecture outside of convolution blocks for speed, improve training code.
9 years ago
Alex J. Champandard
c8e805667f
Fix for padding in convolution.
9 years ago
Alex J. Champandard
2a29fc954b
Buffer chunks of images proportionally to size.
9 years ago
Alex J. Champandard
da354f3e55
Remove reflection layer; it's important but not the bottleneck right now, and quite a bit slower.
9 years ago
Alex J. Champandard
9bba9850d5
Improve pre-processing steps to include more randomness: scales, blur...
9 years ago
Alex J. Champandard
149ae4e566
Update training parameters.
9 years ago
Alex J. Champandard
de047d6a74
Fix for images whose dimensions are not multiples of 2 or 4, depending on the rescaling factor.
9 years ago
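Strided down/up-sampling only round-trips cleanly when the input dimensions are multiples of the scale factor (2 or 4 here). A common fix, sketched below with a hypothetical `pad_to_multiple` helper rather than the repo's own code, is to reflect-pad up to the next multiple and crop the result back afterwards:

```python
import numpy as np

def pad_to_multiple(img, multiple=4):
    """Reflect-pad H and W up to the next multiple of `multiple` so
    strided down/up-sampling round-trips to a predictable size; the
    caller crops the output back to the original (h, w) afterwards."""
    h, w = img.shape[:2]
    ph = (multiple - h % multiple) % multiple
    pw = (multiple - w % multiple) % multiple
    padded = np.pad(img, ((0, ph), (0, pw), (0, 0)), mode='reflect')
    return padded, (h, w)
```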
Alex J. Champandard
03914db364
Fix formatting, minor tweaks to Docker build files for release.
9 years ago
Alex J. Champandard
448e7b93dc
Fix progress output during tiled rendering; change the default zoom level.
9 years ago
Alex J. Champandard
1ad40b6d71
Merge branch 'master' into v0.2
9 years ago
Alex J. Champandard
ac49676415
Add tiled rendering with padding, no feather-blending but looks good enough.
9 years ago
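The idea behind padded tiles: give each tile some surrounding context and keep only the un-padded center, which hides most seams even without blending. A simplified sketch, assuming a hypothetical same-size `process` function (the real renderer also scales coordinates by the zoom factor):

```python
import numpy as np

def render_tiled(process, img, tile=256, pad=16):
    """Run `process` over overlapping tiles: each tile carries `pad`
    pixels of context on every side, and only the un-padded center is
    written back, hiding most seams without feather-blending."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            y0, x0 = max(y - pad, 0), max(x - pad, 0)
            y1, x1 = min(y + tile + pad, h), min(x + tile + pad, w)
            result = process(img[y0:y1, x0:x1])
            th, tw = min(tile, h - y), min(tile, w - x)
            # Keep only the center, discarding the context border.
            out[y:y + th, x:x + tw] = result[y - y0:y - y0 + th,
                                             x - x0:x - x0 + tw]
    return out
```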
Alex J. Champandard
095fe42dc3
Add tiled rendering, currently with no padding for each tile.
9 years ago
Alex J. Champandard
d18c08f1b5
Integrated reflection padding instead of zero padding for extra quality during training and inference.
9 years ago
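Reflection padding mirrors the pixels at the border instead of inserting zeros, so edge convolutions see plausible image content rather than a black frame. The project's version presumably operates on network tensors; this NumPy snippet just shows the difference in border behavior:

```python
import numpy as np

img = np.random.rand(64, 64, 3)  # stand-in for a real image

# Zero padding surrounds the image with black, which edge convolutions
# then "see" and must learn to compensate for:
zero_padded = np.pad(img, ((2, 2), (2, 2), (0, 0)), mode='constant')

# Reflection padding mirrors real border pixels instead, so filter
# responses near the edge stay consistent with the interior:
reflect_padded = np.pad(img, ((2, 2), (2, 2), (0, 0)), mode='reflect')
```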
Alex J. Champandard
90c0b7ea43
Fix padding code, more reliable for specific upscale/downscale combinations.
9 years ago
Alex J. Champandard
3b2a6b9d8d
Add extra padding on input to avoid zero-padding. Experiment with training values from ENet (segmentation).
9 years ago
Alex J. Champandard
7924cc4a85
Improve display and filenames for saving output.
9 years ago
Alex J. Champandard
93e5a41d9a
Fix and optimize pre-processing of images.
9 years ago
Alex J. Champandard
11ba505252
Fix for gradient clipping code.
9 years ago
Alex J. Champandard
cf65207a2e
Use full range of tanh output rather than [-0.5, +0.5], avoids clipping.
9 years ago
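Context for the tanh change: if the decoder only uses the [-0.5, +0.5] band of tanh, any value driven beyond that band gets clipped when converted to pixels; mapping the full [-1, +1] range avoids this. A sketch of the decoding step with a hypothetical `tanh_to_pixels` helper, assuming 8-bit output:

```python
import numpy as np

def tanh_to_pixels(t):
    """Decode network output by mapping the full tanh range [-1, +1]
    linearly onto [0, 255]; using only [-0.5, +0.5] would force any
    value outside that band to be clipped away."""
    return np.clip((t + 1.0) * 127.5, 0.0, 255.0).astype(np.uint8)
```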
Alex J. Champandard
c610623b11
Add gradient clipping, helpful for preventing problems with extreme parameters/architectures.
9 years ago
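Global-norm gradient clipping rescales all gradients jointly when their combined L2 norm exceeds a threshold, bounding the step size without changing its direction. A framework-agnostic NumPy sketch (with a hypothetical `clip_by_global_norm` helper) rather than the project's Theano code:

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Rescale all gradients jointly when their global L2 norm exceeds
    `max_norm`: bounds the update size while preserving its direction,
    which tames exploding gradients in unstable architectures."""
    total = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = min(1.0, max_norm / (total + 1e-7))
    return [g * scale for g in grads]
```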
Alex J. Champandard
02d2fca6c5
Corrected value for adversarial loss. Don't refactor math the day after stopping coffee.
9 years ago
Alex J. Champandard
f2494f8078
Add new downscale layers, separate from upscale steps. Renamed --scales to --zoom for inference.
9 years ago
Alex J. Champandard
064f9dd589
Add three image pre-processing options, improve loading code.
9 years ago
Michael Feldstein
fef84c5b44
Remove the cnmem Theano flag since it doesn't work when sharing the GPU with a display.
9 years ago
Alex J. Champandard
5ef872b876
Add warning for files that may be too large for 4x.
9 years ago
Alex J. Champandard
17fcad8d28
Refactor training-related code.
9 years ago
Alex J. Champandard
2b67daedb6
Merge pull request #12 from dribnet/generic_seeds
Move generation of seeds out of training network.
9 years ago
Alex J. Champandard
cad5eff572
Merge pull request #11 from dribnet/save_every_epoch
Added --save-every-epoch option
9 years ago
Alex J. Champandard
fcc5e87858
Merge pull request #4 from dribnet/valid_dir
Add valid dir when necessary.
9 years ago
Tom White
8f5167d235
Fix enhancer.process to pass img, seed
9 years ago
Tom White
37cb208374
Move generation of seeds out of training network
This moves the generation of the image seeds out of the
training network and into the DataLoader. Currently seeds
are computed as bilinear downsamplings of the original image.
This is almost functionally equivalent to the version
it replaces, but opens up new possibilities at training
time because the seeds are now decoupled from the network.
For example, seeds could be made with different interpolations
or even with other transformations such as image compression.
9 years ago
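The decoupling described above can be sketched as follows; `make_seed` is a hypothetical helper using SciPy's bilinear zoom, not the actual DataLoader code:

```python
import numpy as np
from scipy import ndimage

def make_seed(img, zoom=0.5, order=1):
    """Build a low-resolution seed outside the network. order=1 is
    bilinear interpolation; changing `order`, or re-encoding the seed
    as JPEG first, swaps in a different degradation model without
    touching the network itself."""
    return ndimage.zoom(img, (zoom, zoom, 1), order=order)
```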
Tom White
b05ee6ad08
Added --save-every-epoch option
9 years ago
Tom White
c5053806bd
Add valid dir when necessary
9 years ago
Ondřej Machulda
eb25e737cf
Fix duplicate param definition
9 years ago
Alex J. Champandard
203917d122
Switch default to small model to reduce memory usage.
9 years ago
Alex J. Champandard
4c55c48f62
Add argument for specifying training images, cleaned up file handling.
9 years ago
Alex J. Champandard
b1c054ce9f
Improve the README for applying and training models.
9 years ago
Alex J. Champandard
f868514be3
Improve code for simply applying super-resolution.
9 years ago
Alex J. Champandard
c456221cb5
Experiment with recursive super-resolution and weight reuse, mixed results.
9 years ago
Alex J. Champandard
87304c93a6
Use traditional learning-rate decay rather than fast-restarts; works better when training a continuously adapting GAN.
9 years ago
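By "traditional decay" the commit presumably means a monotonic schedule such as step decay, as opposed to fast-restarts, which periodically reset the rate to a high value. A minimal sketch with a hypothetical `decayed_lr` helper and illustrative constants:

```python
def decayed_lr(base_lr, epoch, decay=0.5, every=25):
    """Monotonic step decay: multiply the rate by `decay` every `every`
    epochs, rather than periodically resetting it to a high value as
    fast-restart schedules do."""
    return base_lr * (decay ** (epoch // every))
```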
Alex J. Champandard
3809e9b02a
Add loading of images into a buffer, using multiple fragments per JPG loaded. Works well with larger datasets like OpenImages; training stays fully GPU-bound.
9 years ago
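Cutting several fragments per decoded JPG amortizes the decode cost across many training samples, which is what keeps the GPU saturated. A rough sketch with a hypothetical `fragments` helper (assumes images at least `size` pixels on each side):

```python
import numpy as np

def fragments(img, count=4, size=128, rng=np.random):
    """Yield several random crops from one decoded image, amortizing the
    JPG decode cost over many training samples so the data pipeline can
    keep a fast GPU saturated."""
    h, w = img.shape[:2]
    for _ in range(count):
        y = rng.randint(0, h - size + 1)
        x = rng.randint(0, w - size + 1)
        yield img[y:y + size, x:x + size]
```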
Alex J. Champandard
619fad7f3c
Add loading parameters from saved models. Clean up learning-rate code.
9 years ago
Alex J. Champandard
f978f058dd
Add descriptions for command-line arguments.
9 years ago
Alex J. Champandard
d8c9292b80
Tweak code for generalized super-resolution.
9 years ago
Alex J. Champandard
9558a11397
Add command-line for super-resolving images without training.
9 years ago
Alex J. Champandard
03a6813e95
Enable adversarial loss during training.
9 years ago