A versatile GAN (generative adversarial network) implementation focused on scalability and ease of use.
wgan: fixes many classes of mode collapse! See the wgan implementation.
colorizer- re-colorize an image!
inpainter- remove a section of an image and have your GAN repaint it
super-resolution- zoom in and enhance. We've caught the bad guy!
--sampler flag. Valid options are:
densenet- based loosely on https://arxiv.org/abs/1608.06993
dense_resize_conv- original type of generator that seems to work well
resize_conv- standard resize-conv generator. This works much better than deconv, which is not supported.
pip3 install hypergan --upgrade
pip3 install hypergan==0.5.8 --upgrade
# Train a 32x32 gan with batch size 32 on a folder of pngs
hypergan train [folder] -s 32x32x3 -f png -b 32
sudo apt-get install libgoogle-perftools4

Make sure to include this environment variable before training:
LD_PRELOAD="/usr/lib/libtcmalloc.so.4" hypergan train my_dataset
If you wish to modify hypergan:

git clone https://github.com/255BITS/hypergan
cd hypergan
python3 setup.py develop
Make sure to include the following 2 arguments:
CUDA_VISIBLE_DEVICES= hypergan --device '/cpu:0'
Configuration in HyperGAN uses JSON files. You can create a new config by running
hypergan train. By default, configurations are randomly generated using Hyperchamber.
Configurations are located in:
Naming a configuration during training is recommended. If your config is not named, a uuid will be used.
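As a shape reference only, a configuration is a JSON document of named options. The keys below are purely illustrative and are not HyperGAN's actual schema; consult a generated config for the real option names:

```json
{
  "generator": "dense_resize_conv",
  "batch_size": 32
}
```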
# Train a 32x32 gan with batch size 32 on a folder of pngs
hypergan train [folder] -s 32x32x3 -f png -b 32 --config [name]
# Train a 32x32 gan with batch size 32 on a folder of pngs, sampling a static batch every 5 steps
hypergan train [folder] -s 32x32x3 -f png -b 32 --config [name] --sampler static_batch --sample_every 5
One way a network learns:
To create videos:
ffmpeg -i samples/%06d.png -vcodec libx264 -crf 22 -threads 0 gan.mp4
# Serve a 32x32 gan with batch size 32 on a folder of pngs
hypergan serve [folder] -s 32x32x3 -f png -b 32 --config [name]
To prevent the GPU from allocating space, see Running on CPU.
import hypergan as hg
GAN object consists of:
graph- specific named Tensors in the Tensorflow graph
GAN(config, initial_graph, graph_type='full', device='/gpu:0')
When a GAN constructor is called, the Tensorflow graph will be constructed.
| attribute | type | description |
| --- | --- | --- |
| gan.graph | Dictionary | Maps names to tensors |
| gan.config | Dictionary | Maps names to options (from the json) |
| gan.sess | tf.Session | The tensorflow session |
save_file - a string designating the save path
Saves the GAN
Sample to a specified path.
Steps the gan forward in training once. Trains the D and G according to your specified training configuration.
To build a new network you need a dataset. Your data should be structured like:
Training with labels allows you to train a classifier.
Each directory in your dataset represents a classification.
Example: Dataset setup for classification of apple and orange images:
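A labelled layout of that kind can be sketched as a directory tree (the folder and file names here are illustrative):

```
[dataset]/
  apple/
    apple1.png
    apple2.png
  orange/
    orange1.png
    orange2.png
```

Each subdirectory name becomes a class label during supervised training.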
You can still build a GAN if your dataset is unlabelled. Just make sure your folder is formatted so that all files are in one directory.
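An unlabelled dataset, by contrast, keeps every image at the top level (names illustrative):

```
[dataset]/
  img1.png
  img2.png
```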
Build takes the same arguments as train and builds a generator. It's required for serve.
Building does 2 things:
Serve starts a flask server. You can then access:
Saves are stored in
They can be large.
Type can be one of:
To see a detailed list, run
The discriminator's job is to tell whether a piece of data is real or fake. In hypergan, a discriminator can also be a classifier.
You can combine multiple discriminators in a single GAN.
Progressive enhancement is enabled by default:
Progressive enhancement is enabled by default here too.
Note: This is currently broken
Default. Inspired by densenet.
Generative Adversarial Networks consist of 2 learning systems that learn together. HyperGAN implements these learning systems in Tensorflow with deep learning.
The discriminator learns the difference between real and fake data. The generator learns to create fake data.
For a more in-depth introduction, see here http://blog.aylien.com/introduction-generative-adversarial-networks-code-tensorflow/
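As a minimal sketch of the two objectives in plain Python (this is not HyperGAN code, and the discriminator outputs below are hypothetical example values): the discriminator is rewarded for scoring real data high and generated data low, while the generator is rewarded when its output is scored as real.

```python
import math

# Hypothetical discriminator outputs for one real and one generated sample.
d_real_prob = 0.9  # discriminator's belief that the real sample is real
d_fake_prob = 0.3  # discriminator's belief that the generated sample is real

# Standard GAN cross-entropy losses:
# the discriminator minimizes d_loss, the generator minimizes g_loss.
d_loss = -(math.log(d_real_prob) + math.log(1.0 - d_fake_prob))
g_loss = -math.log(d_fake_prob)  # shrinks as the generator fools the discriminator
```

As the generator improves, d_fake_prob rises, so g_loss falls while d_loss grows; that is the adversarial pressure each network puts on the other.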
A single fully trained GAN consists of the following useful networks:
generator- Generates content that fools the discriminator. If using supervised learning mode, can generate data on a specific classification.
discriminator- The discriminator learns how to identify real data and how to detect fake data from the generator.
classifier- Only available when using supervised learning. Classifies an image by type. Some examples of possible datasets are 'apple/orange', 'cat/dog/squirrel'. See Creating a Dataset.
HyperGAN is currently in open beta.
Our implementation of WGAN is based on the WGAN paper. The WGAN loss in Tensorflow can look like:
d_fake = tf.reduce_mean(d_fake, axis=1)
d_real = tf.reduce_mean(d_real, axis=1)
d_loss = d_real - d_fake
g_loss = d_fake
d_loss and g_loss can be reversed as well - just add a '-' sign.
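The same computation can be sketched in plain Python, with statistics.mean standing in for tf.reduce_mean and hypothetical critic scores as input:

```python
from statistics import mean

# Hypothetical critic outputs (one score per sample in the batch).
d_real_scores = [0.9, 0.7, 0.8]  # critic scores on real data
d_fake_scores = [0.2, 0.1, 0.3]  # critic scores on generated data

d_real = mean(d_real_scores)  # plays the role of tf.reduce_mean above
d_fake = mean(d_fake_scores)

d_loss = d_real - d_fake  # critic widens the gap between real and fake
g_loss = d_fake           # generator raises the critic's score on fakes
```

Unlike the cross-entropy losses of a standard GAN, these are plain differences of mean critic scores, which is what makes the sign flip mentioned above possible.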
Contributions are welcome and appreciated. To help out, just issue a pull request or file a bug report.
If you create something cool with this let us know!
In case you are interested, our pivotal board is here: https://www.pivotaltracker.com/n/projects/1886395
If you wish to cite this project, do so like this:
255bits (M. Garcia), HyperGAN, (2017), GitHub repository, https://github.com/255BITS/HyperGAN
Originally published on the WeChat public account CreateAMind (createamind).