Alex J. Champandard
cabaaeeefe
Add training scripts for networks currently being trained, for release v0.2.
9 years ago
Alex J. Champandard
1ad40b6d71
Merge branch 'master' into v0.2
9 years ago
Alex J. Champandard
ac49676415
Add tiled rendering with padding, no feather-blending but looks good enough.
9 years ago
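For context, tiled rendering splits a large image into tiles that each fit in GPU memory, renders every tile with a margin of extra padding, then crops that margin away when writing the output. A minimal NumPy sketch of the idea; the `enhance_tile` callback and the tile/pad sizes are illustrative, not the project's actual API:

```python
import numpy as np

def render_tiled(image, enhance_tile, tile=256, pad=16, zoom=2):
    """Upscale `image` tile by tile; `pad` pixels of context hide border artifacts."""
    h, w, c = image.shape
    out = np.zeros((h * zoom, w * zoom, c), dtype=image.dtype)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            # Read a padded window around the tile, clamped to the image bounds.
            y0, x0 = max(y - pad, 0), max(x - pad, 0)
            y1, x1 = min(y + tile + pad, h), min(x + tile + pad, w)
            result = enhance_tile(image[y0:y1, x0:x1])
            # Crop the padding back out of the enlarged result.
            ty, tx = (y - y0) * zoom, (x - x0) * zoom
            th, tw = min(tile, h - y) * zoom, min(tile, w - x) * zoom
            out[y * zoom:y * zoom + th, x * zoom:x * zoom + tw] = \
                result[ty:ty + th, tx:tx + tw]
    return out
```

Without feather-blending the overlap is simply cropped rather than blended, so faint seams can remain, which matches the "good enough" caveat above.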
Alex J. Champandard
095fe42dc3
Add tiled rendering, currently with no padding for each tile.
9 years ago
Alex J. Champandard
d18c08f1b5
Integrated reflection padding instead of zero padding for extra quality during training and inference.
9 years ago
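To illustrate the difference the commit refers to: zero padding injects artificial black borders that the network must learn to compensate for, while reflection padding mirrors real pixel values across the edge. A standalone NumPy example (not the project's Theano code):

```python
import numpy as np

row = np.array([[10, 20, 30, 40]], dtype=np.float32)

# Zero padding introduces artificial dark values at the border.
print(np.pad(row, ((0, 0), (2, 2)), mode='constant'))
# [[ 0.  0. 10. 20. 30. 40.  0.  0.]]

# Reflection padding mirrors real image content instead.
print(np.pad(row, ((0, 0), (2, 2)), mode='reflect'))
# [[30. 20. 10. 20. 30. 40. 30. 20.]]
```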
Alex J. Champandard
90c0b7ea43
Fix padding code, more reliable for specific upscale/downscale combinations.
9 years ago
Alex J. Champandard
3b2a6b9d8d
Add extra padding on input to avoid zero-padding. Experiment with training values from ENet (segmentation).
9 years ago
Alex J. Champandard
7924cc4a85
Improve display and filenames for saving output.
9 years ago
Alex J. Champandard
93e5a41d9a
Fix and optimize pre-processing of images.
9 years ago
Alex J. Champandard
11ba505252
Fix for gradient clipping code.
9 years ago
Alex J. Champandard
cf65207a2e
Use full range of tanh output rather than [-0.5, +0.5], avoids clipping.
9 years ago
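One plausible reading of the arithmetic behind this fix: with images normalized to [0, 1], an output of 0.5 * tanh(x) only spans [-0.5, +0.5] and has to be clipped after shifting, whereas the affine map 0.5 * (tanh(x) + 1) covers [0, 1] exactly. A tiny sketch, with the function name illustrative:

```python
import numpy as np

def to_image_range(t):
    # tanh spans (-1, +1); affine-map it onto the full [0, 1] image
    # range so no brightness values need to be clipped afterwards.
    return 0.5 * (np.tanh(t) + 1.0)

print(to_image_range(np.linspace(-4.0, 4.0, 5)))
# ~[0.0007, 0.018, 0.5, 0.982, 0.9993] -- covers (0, 1) without clipping
```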
Alex J. Champandard
0c31e53731
Fix suggested alias for relative paths. Closes #37, #28.
9 years ago
Alex J. Champandard
34f8e629c2
Improve the alias used to invoke docker, so it's more robust to directory locations and input paths.
9 years ago
Alex J. Champandard
c610623b11
Add gradient clipping, helpful for preventing problems with extreme parameters/architectures.
9 years ago
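Gradient clipping bounds the update magnitude so one bad batch or an extreme architecture cannot blow the parameters up. A generic NumPy sketch of clipping by global norm; the project itself would apply this to Theano expressions, and the threshold here is arbitrary:

```python
import numpy as np

def clip_by_global_norm(grads, max_norm=5.0):
    """If the combined L2 norm of all gradients exceeds max_norm,
    scale every gradient down uniformly to bring it back under."""
    total = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    scale = min(1.0, max_norm / (total + 1e-8))
    return [g * scale for g in grads]
```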
Alex J. Champandard
02d2fca6c5
Corrected value for adversarial loss. Don't refactor math the day after stopping coffee.
9 years ago
Alex J. Champandard
f2494f8078
Add new downscale layers, separate from upscale steps. Renamed --scales to --zoom for inference.
9 years ago
Alex J. Champandard
0c9937a317
Merge pull request #22 from msfeldstein/master
Remove cnmem theano flag.
9 years ago
Alex J. Champandard
064f9dd589
Add three image pre-processing options, improve loading code.
9 years ago
Michael Feldstein
fef84c5b44
Remove cnmem theano flag since it doesn't work when sharing the GPU with the display.
9 years ago
Alex J. Champandard
5ef872b876
Add warning for files that may be too large for 4x.
9 years ago
Alex J. Champandard
a5ad2c25e6
Merge pull request #18 from alexjc/training
Training Improvements
9 years ago
Alex J. Champandard
17fcad8d28
Refactor of changes related to training.
9 years ago
Alex J. Champandard
2b67daedb6
Merge pull request #12 from dribnet/generic_seeds
Move generation of seeds out of training network.
9 years ago
Alex J. Champandard
cad5eff572
Merge pull request #11 from dribnet/save_every_epoch
Added --save-every-epoch option
9 years ago
Alex J. Champandard
a9b0cd9887
Merge pull request #9 from zuphilip/patch-1
Fix some typos in README.
9 years ago
Alex J. Champandard
fcc5e87858
Merge pull request #4 from dribnet/valid_dir
Add valid dir when necessary.
9 years ago
Alex J. Champandard
1478977f18
Merge pull request #10 from OndraM/fix-duplicate-param
Fix duplicate param definition.
9 years ago
Tom White
8f5167d235
Fix enhancer.process to pass img, seed
9 years ago
Tom White
37cb208374
Move generation of seeds out of training network
This moves the generation of the image seeds out of the
training network and into the DataLoader. Currently seeds
are computed as bilinear downsamplings of the original image.
This is almost functionally equivalent to the version
it replaces, but opens up new possibilities at training
time because the seeds are now decoupled from the network.
For example, seeds could be made with different interpolations
or even with other transformations such as image compression.
9 years ago
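A minimal sketch of the seed generation described above, using Pillow's bilinear resampling; the function name and zoom factor are illustrative, not the project's actual DataLoader interface:

```python
from PIL import Image

def make_seed(original, zoom=2):
    """Produce a low-resolution seed by bilinear downsampling. Because this
    runs in the data loader rather than the network, other degradations
    (e.g. JPEG compression) could be swapped in at training time."""
    w, h = original.size
    return original.resize((w // zoom, h // zoom), Image.BILINEAR)
```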
Tom White
b05ee6ad08
Added --save-every-epoch option
9 years ago
Tom White
c5053806bd
Add valid dir when necessary
9 years ago
Ondřej Machulda
eb25e737cf
Fix duplicate param definition
9 years ago
Philipp Zumstein
f83e69e96a
Fix some typos in README
9 years ago
Alex J. Champandard
f68f04fb1c
Improve instructions to train custom models so a new file is output and the existing one is not loaded. Use the --model parameter!
9 years ago
Alex J. Champandard
203917d122
Switch default to small model to reduce memory usage.
9 years ago
Alex J. Champandard
2b5fc8f51d
Add docker instructions, fix for slow compute in CPU image.
9 years ago
Alex J. Champandard
74cb95609e
Fix for docker build using latest Miniconda and Python 3.5 explicitly.
9 years ago
Alex J. Champandard
99c767b7e2
Add docker configuration files for CPU and GPU.
9 years ago
Alex J. Champandard
bf22450b8d
Update documentation for new --train usage, minor improvements.
9 years ago
Alex J. Champandard
4c55c48f62
Add argument for specifying training images, cleaned up file handling.
9 years ago
Alex J. Champandard
1c38f2ca31
New meme-friendly image and link to demo.
9 years ago
Alex J. Champandard
b1c054ce9f
Improve the README for applying and training models.
9 years ago
Alex J. Champandard
f868514be3
Improve code for simply applying super-resolution.
9 years ago
Alex J. Champandard
30534c6dd1
Add old station example, fix bank example.
9 years ago
Alex J. Champandard
801a4707f4
Improve README and the visual examples.
9 years ago
Alex J. Champandard
39e12ea205
Add example GIF to the README, tweak text.
9 years ago
Alex J. Champandard
c456221cb5
Experiment with recursive super-resolution and weight reuse, mixed results.
9 years ago
Alex J. Champandard
87304c93a6
Use traditional learning rate decaying rather than fast-restarts, works better when training a continuously adapting GAN.
9 years ago
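"Traditional" decay here means a schedule that shrinks the learning rate monotonically instead of periodically resetting it the way fast-restart schedules do, which suits a GAN whose objective keeps moving. A sketch of a simple step-decay schedule, with illustrative constants:

```python
def learning_rate(epoch, start=1e-3, decay=0.5, period=25):
    # Halve the learning rate every `period` epochs: monotonic, no restarts.
    return start * decay ** (epoch // period)

print([learning_rate(e) for e in (0, 25, 50, 75)])
# [0.001, 0.0005, 0.00025, 0.000125]
```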
Alex J. Champandard
3809e9b02a
Add loading of images into a buffer, using multiple fragments per JPG loaded. Works well with larger datasets like OpenImages, fully GPU bound.
9 years ago
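Cutting several random crops ("fragments") from each decoded JPEG amortizes the disk-read and decode cost, which is what keeps training GPU-bound on large datasets. A rough sketch of the idea; the buffer layout, crop size, and names are hypothetical:

```python
import random
import numpy as np
from PIL import Image

def fill_buffer(paths, fragments_per_image=4, size=128):
    """Decode each JPEG once, then cut several random crops from it.
    Assumes every image is at least `size` pixels on each side."""
    buffer = []
    for path in paths:
        img = np.asarray(Image.open(path).convert('RGB'))
        h, w, _ = img.shape
        for _ in range(fragments_per_image):
            y, x = random.randint(0, h - size), random.randint(0, w - size)
            buffer.append(img[y:y + size, x:x + size])
    random.shuffle(buffer)
    return buffer
```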
Alex J. Champandard
619fad7f3c
Add loading parameters from saved models. Clean up learning-rate code.
9 years ago