Alex J. Champandard
c610623b11
Add gradient clipping, helpful for preventing problems with extreme parameters/architectures.
9 years ago
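The gradient clipping mentioned above is, in its usual form, a global-norm rescaling of all parameter gradients. A minimal NumPy sketch of that technique — function and parameter names are invented here, not the repository's Theano update code:

```python
import numpy as np

def clip_gradients(grads, max_norm=1.0):
    """Rescale a list of gradient arrays so that their joint L2 norm
    does not exceed max_norm. Gradients under the threshold pass
    through unchanged."""
    total = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads))
    if total <= max_norm:
        return grads
    scale = max_norm / total
    return [g * scale for g in grads]
```

Clipping like this keeps a single extreme batch (or an unlucky architecture/parameter choice) from producing an update large enough to destabilize training.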
Alex J. Champandard
02d2fca6c5
Corrected value for adversarial loss. Don't refactor math the day after stopping coffee.
9 years ago
Alex J. Champandard
f2494f8078
Add new downscale layers, separate from upscale steps. Renamed --scales to --zoom for inference.
9 years ago
Alex J. Champandard
0c9937a317
Merge pull request #22 from msfeldstein/master
Remove cnmem theano flag.
9 years ago
Alex J. Champandard
064f9dd589
Add three image pre-processing options, improve loading code.
9 years ago
Michael Feldstein
fef84c5b44
Remove cnmem theano flag since it doesn't work if you're sharing GPU with display.
9 years ago
Alex J. Champandard
5ef872b876
Add warning for files that may be too large for 4x.
9 years ago
Alex J. Champandard
a5ad2c25e6
Merge pull request #18 from alexjc/training
Training Improvements
9 years ago
Alex J. Champandard
17fcad8d28
Refactor of changes related to training.
9 years ago
Alex J. Champandard
2b67daedb6
Merge pull request #12 from dribnet/generic_seeds
Move generation of seeds out of training network.
9 years ago
Alex J. Champandard
cad5eff572
Merge pull request #11 from dribnet/save_every_epoch
Added --save-every-epoch option
9 years ago
Alex J. Champandard
a9b0cd9887
Merge pull request #9 from zuphilip/patch-1
Fix some typos in README.
9 years ago
Alex J. Champandard
fcc5e87858
Merge pull request #4 from dribnet/valid_dir
Add valid dir when necessary.
9 years ago
Alex J. Champandard
1478977f18
Merge pull request #10 from OndraM/fix-duplicate-param
Fix duplicate param definition.
9 years ago
Tom White
8f5167d235
Fix enhancer.process to pass img, seed
9 years ago
Tom White
37cb208374
Move generation of seeds out of training network
This moves the generation of the image seeds out of the
training network and into the DataLoader. Currently seeds
are computed as bilinear downsamplings of the original image.
This is almost functionally equivalent to the version
it replaces, but opens up new possibilities at training
time because the seeds are now decoupled from the network.
For example, seeds could be made with different interpolations
or even with other transformations such as image compression.
9 years ago
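The commit body above describes computing seeds as bilinear downsamplings of the original image, outside the network. A standalone NumPy sketch of that operation — the function name and integer `zoom` parameter are illustrative, not the repository's DataLoader API:

```python
import numpy as np

def bilinear_downsample(img, zoom=2):
    """Downsample an HxW (or HxWxC) float image by an integer factor
    using bilinear interpolation, sampling at the centres of the
    coarse pixels."""
    h, w = img.shape[:2]
    nh, nw = h // zoom, w // zoom
    # Continuous sample positions in the fine grid.
    ys = (np.arange(nh) + 0.5) * zoom - 0.5
    xs = (np.arange(nw) + 0.5) * zoom - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 2)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 2)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    if img.ndim == 3:                     # broadcast over channels
        wy, wx = wy[..., None], wx[..., None]
    a = img[y0][:, x0]                    # top-left neighbours
    b = img[y0][:, x0 + 1]                # top-right
    c = img[y0 + 1][:, x0]                # bottom-left
    d = img[y0 + 1][:, x0 + 1]            # bottom-right
    top = a * (1 - wx) + b * wx
    bot = c * (1 - wx) + d * wx
    return top * (1 - wy) + bot * wy
```

Because the seed is now just a function of the input image, it can be swapped for other interpolations or degradations (e.g. JPEG compression) without touching the network, as the commit notes.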
Tom White
b05ee6ad08
Added --save-every-epoch option
9 years ago
Tom White
c5053806bd
Add valid dir when necessary
9 years ago
Ondřej Machulda
eb25e737cf
Fix duplicate param definition
9 years ago
Philipp Zumstein
f83e69e96a
Fix some typos in README
9 years ago
Alex J. Champandard
f68f04fb1c
Improve instructions to train custom models so a new file is output and the existing one is not loaded. Use --model parameter!
9 years ago
Alex J. Champandard
203917d122
Switch default to small model to reduce memory usage.
9 years ago
Alex J. Champandard
2b5fc8f51d
Add docker instructions, fix for slow compute in CPU image.
9 years ago
Alex J. Champandard
74cb95609e
Fix for docker build using latest Miniconda and Python 3.5 explicitly.
9 years ago
Alex J. Champandard
99c767b7e2
Add docker configuration files for CPU and GPU.
9 years ago
Alex J. Champandard
bf22450b8d
Update documentation for new --train usage, minor improvements.
9 years ago
Alex J. Champandard
4c55c48f62
Add argument for specifying training images, cleaned up file handling.
9 years ago
Alex J. Champandard
1c38f2ca31
New meme-friendly image and link to demo.
9 years ago
Alex J. Champandard
b1c054ce9f
Improve the README for applying and training models.
9 years ago
Alex J. Champandard
f868514be3
Improve code for simply applying super-resolution.
9 years ago
Alex J. Champandard
30534c6dd1
Add old station example, fix bank example.
9 years ago
Alex J. Champandard
801a4707f4
Improve README and the visual examples.
9 years ago
Alex J. Champandard
39e12ea205
Add example GIF to the README, tweak text.
9 years ago
Alex J. Champandard
c456221cb5
Experiment with recursive super-resolution and weight reuse, mixed results.
9 years ago
Alex J. Champandard
87304c93a6
Use traditional learning rate decaying rather than fast-restarts, works better when training continuously adapting GAN.
9 years ago
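The "traditional learning rate decaying" referenced above is typically a step-wise exponential schedule (as opposed to warm restarts, which periodically reset the rate). A minimal sketch with invented parameter names, not the repository's actual schedule:

```python
def decayed_lr(base_lr, step, decay=0.5, every=100):
    """Step-wise exponential decay: multiply the learning rate by
    `decay` once every `every` updates, with no restarts."""
    return base_lr * decay ** (step // every)
```

A monotonically shrinking rate avoids the periodic jumps of fast-restart schedules, which the commit found disruptive when the GAN's target keeps adapting during training.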
Alex J. Champandard
3809e9b02a
Add loading of images into a buffer, using multiple fragments per JPG loaded. Works well with larger datasets like OpenImages, fully GPU bound.
9 years ago
Alex J. Champandard
619fad7f3c
Add loading parameters from saved models. Clean up learning-rate code.
9 years ago
Alex J. Champandard
bb3b24b04c
Add README and requirements file.
9 years ago
Alex J. Champandard
f978f058dd
Add descriptions for command-line arguments.
9 years ago
Alex J. Champandard
d8c9292b80
Tweak code for generalized super-resolution.
9 years ago
Alex J. Champandard
9558a11397
Add command-line for super-resolving images without training.
9 years ago
Alex J. Champandard
03a6813e95
Enable adversarial loss during training.
9 years ago
Alex J. Champandard
6542f435b4
Stabilize the training, tune the architecture.
9 years ago
Alex J. Champandard
4580e5531a
Add .gitignore and remove symlink.
9 years ago
Alex J. Champandard
b52530cf20
Add support for PRELU (default), optional load/save.
9 years ago
Alex J. Champandard
4d82c7f157
Improve console output.
9 years ago
Alex J. Champandard
8770113f9a
Add support for loading/saving models. Fixed regressions.
9 years ago
Alex J. Champandard
58e1bed4a6
Improve display and code structure.
9 years ago
Alex J. Champandard
749b467f94
Switch to sub-pixel deconvolution layer.
9 years ago
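The sub-pixel deconvolution layer mentioned above upscales by rearranging channel blocks into spatial resolution (the "pixel shuffle" of Shi et al.). A NumPy sketch of that layout, assuming a single CHW feature map — this shows the rearrangement only, not the repository's Theano layer:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) feature map into (C, H*r, W*r), so
    each group of r*r channels fills an r-by-r block of output pixels."""
    c, h, w = x.shape
    assert c % (r * r) == 0
    out_c = c // (r * r)
    x = x.reshape(out_c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)        # -> (C, H, r, W, r)
    return x.reshape(out_c, h * r, w * r)
```

Compared with a transposed convolution, this keeps all computation at the low resolution and avoids the checkerboard artifacts of overlapping deconvolution strides.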
Alex J. Champandard
9f13695050
Add discriminator network too.
9 years ago