38 Commits (7924cc4a856206c17529b7236ea4588c44341a64)

Author | SHA1 | Message | Date
Alex J. Champandard | 7924cc4a85 | Improve display and filenames for saving output. | 9 years ago
Alex J. Champandard | 93e5a41d9a | Fix and optimize pre-processing of images. | 9 years ago
Alex J. Champandard | 11ba505252 | Fix for gradient clipping code. | 9 years ago
Alex J. Champandard | cf65207a2e | Use full range of tanh output rather than [-0.5, +0.5], avoids clipping. | 9 years ago
Alex J. Champandard | c610623b11 | Add gradient clipping, helpful for preventing problems with extreme parameters/architectures. | 9 years ago
Alex J. Champandard | f2494f8078 | Add new downscale layers, separate from upscale steps. Renamed --scales to --zoom for inference. | 9 years ago
Alex J. Champandard | 064f9dd589 | Add three image pre-processing options, improve loading code. | 9 years ago
Alex J. Champandard | 5ef872b876 | Add warning for files that may be too large for 4x. | 9 years ago
Alex J. Champandard | 17fcad8d28 | Refactor of changes related to training. | 9 years ago
Alex J. Champandard | 2b67daedb6 | Merge pull request #12 from dribnet/generic_seeds | 9 years ago
Alex J. Champandard | cad5eff572 | Merge pull request #11 from dribnet/save_every_epoch | 9 years ago
Alex J. Champandard | fcc5e87858 | Merge pull request #4 from dribnet/valid_dir | 9 years ago
Tom White | 8f5167d235 | Fix enhancer.process to pass img, seed | 9 years ago
Tom White | 37cb208374 | Move generation of seeds out of training network | 9 years ago
Tom White | b05ee6ad08 | Added --save-every-epoch option | 9 years ago
Tom White | c5053806bd | Add valid dir when necessary | 9 years ago
Ondřej Machulda | eb25e737cf | Fix duplicate param definition | 9 years ago
Alex J. Champandard | 203917d122 | Switch default to small model to reduce memory usage. | 9 years ago
Alex J. Champandard | 4c55c48f62 | Add argument for specifying training images, cleaned up file handling. | 9 years ago
Alex J. Champandard | b1c054ce9f | Improve the README for applying and training models. | 9 years ago
Alex J. Champandard | f868514be3 | Improve code for simply applying super-resolution. | 9 years ago
Alex J. Champandard | c456221cb5 | Experiment with recursive super-resolution and weight reuse, mixed results. | 9 years ago
Alex J. Champandard | 87304c93a6 | Use traditional learning rate decaying rather than fast-restarts, works better when training continuously adapting GAN. | 9 years ago
Alex J. Champandard | 3809e9b02a | Add loading of images into a buffer, using multiple fragments per JPG loaded. Works well with larger datasets like OpenImages, fully GPU bound. | 9 years ago
Alex J. Champandard | 619fad7f3c | Add loading parameters from saved models. Clean up learning-rate code. | 9 years ago
Alex J. Champandard | f978f058dd | Add descriptions for command-line arguments. | 9 years ago
Alex J. Champandard | d8c9292b80 | Tweak code for generalized super-resolution. | 9 years ago
Alex J. Champandard | 9558a11397 | Add command-line for super-resolving images without training. | 9 years ago
Alex J. Champandard | 03a6813e95 | Enable adversarial loss during training. | 9 years ago
Alex J. Champandard | 6542f435b4 | Stabilize the training, tuned the architecture. | 9 years ago
Alex J. Champandard | b52530cf20 | Add support for PRELU (default), optional load/save. | 9 years ago
Alex J. Champandard | 4d82c7f157 | Improve console output. | 9 years ago
Alex J. Champandard | 8770113f9a | Add support for loading/saving models. Fixed regressions. | 9 years ago
Alex J. Champandard | 58e1bed4a6 | Improve display and code structure. | 9 years ago
Alex J. Champandard | 749b467f94 | Switch to sub-pixel deconvolution layer. | 9 years ago
Alex J. Champandard | 9f13695050 | Add discriminator network too. | 9 years ago
Alex J. Champandard | 3f24714039 | Add residual blocks to generator. | 9 years ago
Alex J. Champandard | 058e3d3b9e | Add simple but working prototype, perceptual loss. | 9 years ago