66 Commits (cabaaeeefe310e1cfcdb7160cbf89eaf7a6bee6e)
Author SHA1 Message Date
Alex J. Champandard cabaaeeefe Add training scripts for networks currently being trained, for release v0.2.
10 years ago
Alex J. Champandard 1ad40b6d71 Merge branch 'master' into v0.2
10 years ago
Alex J. Champandard ac49676415 Add tiled rendering with padding, no feather-blending but looks good enough.
10 years ago
Alex J. Champandard 095fe42dc3 Add tiled rendering, currently with no padding for each tile.
10 years ago
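The two tiled-rendering commits above (095fe42dc3 without padding, ac49676415 with padding) describe processing a large image one tile at a time, giving each tile some surrounding context so seams don't show. The repository's own implementation is not reproduced in this log; the following is only a minimal numpy sketch of the idea, with all names illustrative:

```python
import numpy as np

def process_tiled(img, tile=64, pad=8, fn=lambda t: t):
    """Process a 2-D image in `tile`-sized pieces, giving each piece
    `pad` pixels of context on every side, then cropping the padding
    back off so tiles join without visible seams."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    # Reflect-pad the whole image once so edge tiles also get context.
    padded = np.pad(img, ((pad, pad), (pad, pad)), mode='reflect')
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            th = min(tile, h - y)
            tw = min(tile, w - x)
            # Tile plus `pad` pixels of surrounding context.
            chunk = padded[y:y + th + 2 * pad, x:x + tw + 2 * pad]
            result = fn(chunk)
            # Keep only the centre, discarding the padded border.
            out[y:y + th, x:x + tw] = result[pad:pad + th, pad:pad + tw]
    return out
```

With an identity `fn`, the reassembled output equals the input, which is the property that makes padded tiling artifact-free for a real network as well.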
Alex J. Champandard d18c08f1b5 Integrated reflection padding instead of zero padding for extra quality during training and inference.
10 years ago
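Commit d18c08f1b5 swaps zero padding for reflection padding. The difference is easy to see on a single row of pixels; this small numpy demonstration (not the repository's code) shows why zero padding introduces artificial dark borders while reflection padding keeps values plausible:

```python
import numpy as np

row = np.array([1, 2, 3, 4])

# Zero padding inserts constant zeros, i.e. artificial black pixels
# at the image edge that the network must learn to ignore.
zero_padded = np.pad(row, 2, mode='constant')   # [0 0 1 2 3 4 0 0]

# Reflection padding mirrors real pixel values across the edge,
# so border regions see statistics similar to the interior.
reflected = np.pad(row, 2, mode='reflect')      # [3 2 1 2 3 4 3 2]
```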
Alex J. Champandard 90c0b7ea43 Fix padding code, more reliable for specific upscale/downscale combinations.
10 years ago
Alex J. Champandard 3b2a6b9d8d Add extra padding on input to avoid zero-padding. Experiment with training values from ENet (segmentation).
10 years ago
Alex J. Champandard 7924cc4a85 Improve display and filenames for saving output.
10 years ago
Alex J. Champandard 93e5a41d9a Fix and optimize pre-processing of images.
10 years ago
Alex J. Champandard 11ba505252 Fix for gradient clipping code.
10 years ago
Alex J. Champandard cf65207a2e Use full range of tanh output rather than [-0.5, +0.5], avoids clipping.
10 years ago
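Commit cf65207a2e changes the output scaling of the generator's tanh layer. The repository's exact normalization isn't visible from this log; as a hedged sketch, assuming pixel intensities normalized to [0, 1], the point is that a tanh output squashed into only half the target range can never produce pure black or pure white without clipping:

```python
import numpy as np

t = np.tanh(np.linspace(-3, 3, 7))  # raw tanh activations in (-1, 1)

# Half-range mapping: outputs confined to [0.25, 0.75], so the
# extremes of the pixel scale are unreachable and must be clipped.
half_range = 0.25 * t + 0.5

# Full-range mapping: the whole [0, 1] pixel scale is reachable,
# so no clipping is needed to hit the extremes.
full_range = 0.5 * t + 0.5
```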
Alex J. Champandard 0c31e53731 Fix suggested alias for relative paths. Closes #37 #28.
10 years ago
Alex J. Champandard 34f8e629c2 Improve the alias used to invoke docker, so it's more robust to directory locations and input paths.
10 years ago
Alex J. Champandard c610623b11 Add gradient clipping, helpful for preventing problems with extreme parameters/architectures.
10 years ago
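Commit c610623b11 adds gradient clipping (with a follow-up fix in 11ba505252). The log doesn't show how the repository implements it; a common variant, sketched here in numpy with illustrative names, rescales all gradients together so their global L2 norm stays bounded while the update direction is preserved:

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Rescale a list of gradient arrays so their combined L2 norm
    does not exceed `max_norm`; the direction is preserved, only the
    magnitude shrinks. Gradients under the limit pass through as-is."""
    total = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if total > max_norm:
        scale = max_norm / total
        grads = [g * scale for g in grads]
    return grads
```

Bounding the norm this way is what makes training tolerant of "extreme parameters/architectures": a single exploding gradient can no longer blow up every weight in one step.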
Alex J. Champandard 02d2fca6c5 Corrected value for adversarial loss. Don't refactor math the day after stopping coffee.
10 years ago
Alex J. Champandard f2494f8078 Add new downscale layers, separate from upscale steps. Renamed --scales to --zoom for inference.
10 years ago
Alex J. Champandard 0c9937a317 Merge pull request #22 from msfeldstein/master
10 years ago
Alex J. Champandard 064f9dd589 Add three image pre-processing options, improve loading code.
10 years ago
Michael Feldstein fef84c5b44 Remove cnmem theano flag since it doesn't work if you're sharing GPU with display.
10 years ago
Alex J. Champandard 5ef872b876 Add warning for files that may be too large for 4x.
10 years ago
Alex J. Champandard a5ad2c25e6 Merge pull request #18 from alexjc/training
10 years ago
Alex J. Champandard 17fcad8d28 Refactor of changes related to training.
10 years ago
Alex J. Champandard 2b67daedb6 Merge pull request #12 from dribnet/generic_seeds
10 years ago
Alex J. Champandard cad5eff572 Merge pull request #11 from dribnet/save_every_epoch
10 years ago
Alex J. Champandard a9b0cd9887 Merge pull request #9 from zuphilip/patch-1
10 years ago
Alex J. Champandard fcc5e87858 Merge pull request #4 from dribnet/valid_dir
10 years ago
Alex J. Champandard 1478977f18 Merge pull request #10 from OndraM/fix-duplicate-param
10 years ago
Tom White 8f5167d235 Fix enhancer.process to pass img, seed
10 years ago
Tom White 37cb208374 Move generation of seeds out of training network
10 years ago
Tom White b05ee6ad08 Added --save-every-epoch option
10 years ago
Tom White c5053806bd Add valid dir when necessary
10 years ago
Ondřej Machulda eb25e737cf Fix duplicate param definition
10 years ago
Philipp Zumstein f83e69e96a Fix some typos in README
10 years ago
Alex J. Champandard f68f04fb1c Improve instructions to train custom models so a new file is output and existing one is not loaded. Use --model parameter!
10 years ago
Alex J. Champandard 203917d122 Switch default to small model to reduce memory usage.
10 years ago
Alex J. Champandard 2b5fc8f51d Add docker instructions, fix for slow compute in CPU image.
10 years ago
Alex J. Champandard 74cb95609e Fix for docker build using latest Miniconda and Python 3.5 explicitly.
10 years ago
Alex J. Champandard 99c767b7e2 Add docker configuration files for CPU and GPU.
10 years ago
Alex J. Champandard bf22450b8d Update documentation for new --train usage, minor improvements.
10 years ago
Alex J. Champandard 4c55c48f62 Add argument for specifying training images, cleaned up file handling.
10 years ago
Alex J. Champandard 1c38f2ca31 New meme-friendly image and link to demo.
10 years ago
Alex J. Champandard b1c054ce9f Improve the README for applying and training models.
10 years ago
Alex J. Champandard f868514be3 Improve code for simply applying super-resolution.
10 years ago
Alex J. Champandard 30534c6dd1 Add old station example, fix bank example.
10 years ago
Alex J. Champandard 801a4707f4 Improve README and the visual examples.
10 years ago
Alex J. Champandard 39e12ea205 Add example GIF to the README, tweak text.
10 years ago
Alex J. Champandard c456221cb5 Experiment with recursive super-resolution and weight reuse, mixed results.
10 years ago
Alex J. Champandard 87304c93a6 Use traditional learning rate decaying rather than fast-restarts, works better when training continuously adapting GAN.
10 years ago
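Commit 87304c93a6 trades a fast-restart schedule for plain learning-rate decay. The repository's schedules aren't shown in this log; the two families it contrasts can be sketched as follows (both functions and their constants are illustrative):

```python
import math

def lr_exponential(base_lr, epoch, decay=0.5, every=50):
    """Smooth monotonic decay: multiply by `decay` every `every` epochs.
    The generator and discriminator see a steadily shrinking step size."""
    return base_lr * decay ** (epoch // every)

def lr_fast_restart(base_lr, epoch, period=50):
    """Restart-style schedule: cosine decay within a cycle, then jump
    back to `base_lr`. The periodic jumps can destabilize a GAN whose
    two networks are continuously adapting to each other."""
    t = (epoch % period) / period
    return base_lr * 0.5 * (1 + math.cos(math.pi * t))
```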
Alex J. Champandard 3809e9b02a Add loading of images into a buffer, using multiple fragments per JPG loaded. Works well with larger datasets like OpenImages, fully GPU bound.
10 years ago
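Commit 3809e9b02a loads multiple fragments per JPG so training stays GPU-bound: decoding one image is expensive, so each decode should yield several training samples. The actual buffer code isn't in this log; a minimal numpy sketch of the fragment-cutting step, with hypothetical names, might look like:

```python
import numpy as np

def random_fragments(img, n=4, size=32, rng=None):
    """Cut `n` random square fragments from one decoded image, so a
    single (slow) JPEG decode feeds several samples into the training
    buffer instead of just one."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = img.shape[:2]
    ys = rng.integers(0, h - size + 1, n)
    xs = rng.integers(0, w - size + 1, n)
    return [img[y:y + size, x:x + size] for y, x in zip(ys, xs)]
```

Amortizing the decode this way matters most on large datasets such as OpenImages, where I/O and decoding would otherwise dominate the training loop.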
Alex J. Champandard 619fad7f3c Add loading parameters from saved models. Clean up learning-rate code.
10 years ago