A Powerful GAN Toolkit: HyperGAN

Focused on making GANs easy to use and extensible.

Rich in submodules:

WGAN is included, and so is LS-GAN.

HyperGAN

A versatile GAN(generative adversarial network) implementation focused on scalability and ease-of-use.

Table of contents

Changelog

0.7 - "WGAN API" (samples to come)

  • New loss function based on WGAN. Fixes many classes of mode collapse! See the wgan implementation
  • Initial Public API Release
  • API example: colorizer - re-colorize an image!
  • API example: inpainter - remove a section of an image and have your GAN repaint it
  • API example: super-resolution - zoom in and enhance. We've caught the bad guy!
  • 4 new samplers. --sampler flag. Valid options are: batch,progressive,static_batch,grid.

0.6 ~ "MultiGAN"

  • 3 new encoders
  • New discriminator: densenet - based loosely on https://arxiv.org/abs/1608.06993
  • Updated discriminator: pyramid_no_stride - conv and avg_pool together
  • New generator: dense_resize_conv - original type of generator that seems to work well
  • Updated generator: resize_conv - standard resize-conv generator. This works much better than deconv, which is not supported.
  • Several quality of life improvements
  • Support for multiple discriminators
  • Support for discriminators on different image resolutions

0.5 ~ "FaceGAN"

0.5.x

  • fixed configuration save/load
  • cleaner cli output
  • documentation cleanup


0.5.0

  • pip package released!
  • Better defaults with good variance at 256x256. (The broken images showed up after training for 5 days.)

0.1-0.4

  • Initial private release

Quick start

Minimum requirements

  1. For 256x256, we recommend a GTX 1080 or better. 32x32 can be run on lower-end GPUs.
  2. CPU mode is extremely slow. Never train with it!
  3. Python3

Install hypergan

  pip3 install hypergan --upgrade

Installing a specific version

  pip3 install hypergan==0.5.8 --upgrade

Train

  # Train a 32x32 gan with batch size 32 on a folder of pngs
  hypergan train [folder] -s 32x32x3 -f png -b 32

Increasing performance

On Ubuntu, install libgoogle-perftools4 (sudo apt-get install libgoogle-perftools4) and set the following environment variable before training:

  LD_PRELOAD="/usr/lib/libtcmalloc.so.4" hypergan train my_dataset

Development mode

If you wish to modify hypergan

  git clone https://github.com/255BITS/hypergan
  cd hypergan
  python3 setup.py develop

Running on CPU

Make sure to set the CUDA_VISIBLE_DEVICES environment variable and pass the --device flag:

  CUDA_VISIBLE_DEVICES= hypergan --device '/cpu:0'

Configuration

Configuration in HyperGAN uses JSON files. You can create a new config by running hypergan train. By default, configurations are randomly generated using Hyperchamber.

Configurations are located in:

  ~/.hypergan/configs/

Usage

  --config [name]

Naming a configuration during training is recommended. If your config is not named, a uuid will be used.

CLI

 hypergan -h

Training

  # Train a 32x32 gan with batch size 32 on a folder of pngs
  hypergan train [folder] -s 32x32x3 -f png -b 32 --config [name]

Sampling

  # Train a 32x32 gan with batch size 32, sampling with the static_batch sampler every 5 steps
  hypergan train [folder] -s 32x32x3 -f png -b 32 --config [name] --sampler static_batch --sample_every 5

One way a network learns:

(Sample images omitted.)

To create videos:

  ffmpeg -i samples/%06d.png -vcodec libx264 -crf 22 -threads 0 gan.mp4

Web Server

  # Serve a 32x32 gan with batch size 32 on a folder of pngs
  hypergan serve [folder] -s 32x32x3 -f png -b 32 --config [name]

To prevent the GPU from allocating memory, see Running on CPU.

API

  import hypergan as hg

GAN object

The GAN object consists of:

  • The config (configuration) used
  • The graph - specific named Tensors in the TensorFlow graph
  • The tensorflow sess (session)

Constructor

GAN(config, initial_graph, graph_type='full', device='/gpu:0')

When the GAN constructor is called, the TensorFlow graph is constructed.

Properties

| Property | Type | Description |
| --- | --- | --- |
| gan.graph | Dictionary | Maps names to tensors |
| gan.config | Dictionary | Maps names to options (from the JSON) |
| gan.sess | tf.Session | The TensorFlow session |

Methods

save

 gan.save(save_file)

save_file - a string designating the save path

Saves the GAN

sample_to_file

 gan.sample_to_file(name, sampler=grid_sampler.sample)
  • name - the name of the file to sample to
  • sampler - the sampler method to use

Sample to a specified path.

train

 gan.train()

Steps the gan forward in training once. Trains the D and G according to your specified trainer.
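The methods above compose into a simple training loop. The following is only a sketch: the save path and step counts are placeholders, and a stub class stands in for a real GAN object so the snippet is self-contained.

```python
class StubGAN:
    """Stand-in exposing the same method names as the GAN object above."""
    def __init__(self):
        self.steps = 0
        self.saved_to = None

    def train(self):
        # One training step for both D and G.
        self.steps += 1

    def save(self, save_file):
        # Checkpoint the model to save_file.
        self.saved_to = save_file

# In real use this would be: gan = GAN(config, initial_graph)
gan = StubGAN()
for step in range(1000):
    gan.train()
    if step % 100 == 0:
        gan.save("/tmp/my_gan.ckpt")  # hypothetical save path
```

Periodic saving like this is what lets a long-running training session be resumed from ~/.hypergan/saves/.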

Datasets

To build a new network you need a dataset. Your data should be structured like:

  [folder]/[directory]/*.png

Creating a Dataset

Supervised learning

Training with labels allows you to train a classifier.

Each directory in your dataset represents a classification.

Example: Dataset setup for classification of apple and orange images:

 /dataset/apples
 /dataset/oranges
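The apples/oranges layout above can be created with a few lines of Python. The temporary root and the example.png placeholder file are just for illustration:

```python
import os
import tempfile

# Build the supervised layout: one directory per classification.
root = tempfile.mkdtemp()
dataset = os.path.join(root, "dataset")
for label in ["apples", "oranges"]:
    os.makedirs(os.path.join(dataset, label))
    # Each class's png files go inside its directory.
    open(os.path.join(dataset, label, "example.png"), "wb").close()

print(sorted(os.listdir(dataset)))  # the directory names double as labels
```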

Unsupervised learning

You can still build a GAN if your dataset is unlabelled. Just make sure your folder is formatted like

 [folder]/[directory]/*.png

where all files are in 1 directory.

Downloadable datasets

Building

hypergan build

Build takes the same arguments as train and builds a generator. It's required for serve.

Building does 2 things:

  • Loads the training model, which includes the discriminator
  • Saves into a ckpt model containing only the generator

Server mode

hypergan serve

Serve starts a flask server. You can then access:

http://localhost:5000/sample.png?type=batch

Saves

Saves are stored in ~/.hypergan/saves/

They can be large.

Formats

--format <type>

Type can be one of:

  • jpg
  • png

Arguments

To see a detailed list, run

  hypergan -h
  • -s, --size, optional(default 64x64x3), the size of your data in the form 'width'x'height'x'channels'
  • -f, --format, optional(default png), file format of the images. Only supports jpg and png for now.

Discriminators

The discriminator's job is to tell whether a piece of data is real or fake. In hypergan, a discriminator can also be a classifier.

You can combine multiple discriminators in a single GAN.

pyramid_stride

pyramid_nostride

Progressive enhancement is enabled by default:

Default.

densenet

Progressive enhancement is enabled by default here too.

resnet

Note: This is currently broken

Encoders

Vae

For Vae-GANs

RandomCombo

Default

RandomNormal

Generators

resize-conv

Standard resize-conv.

dense-resize-conv

Default. Inspired by densenet.

Trainers

Adam

Default.

Slowdown

Experimental.

About

Generative Adversarial Networks consist of 2 learning systems that learn together. HyperGAN implements these learning systems as deep neural networks in TensorFlow.

The discriminator learns the difference between real and fake data. The generator learns to create fake data.

For a more in-depth introduction, see here http://blog.aylien.com/introduction-generative-adversarial-networks-code-tensorflow/

A single fully trained GAN consists of the following useful networks:

  • generator - Generates content that fools the discriminator. If using supervised learning mode, can generate data on a specific classification.
  • discriminator - The discriminator learns how to identify real data and how to detect fake data from the generator.
  • classifier - Only available when using supervised learning. Classifies an image by type. Some examples of possible datasets are 'apple/orange', 'cat/dog/squirrel'. See Creating a Dataset.

HyperGAN is currently in open beta.

Wasserstein GAN in Tensorflow

Our implementation of WGAN is based on the paper. WGAN loss in TensorFlow can look like:

 d_fake = tf.reduce_mean(d_fake,axis=1)
 d_real = tf.reduce_mean(d_real,axis=1)
 d_loss = d_real - d_fake
 g_loss = d_fake

d_loss and g_loss can be reversed as well - just add a '-' sign.
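To see what these losses work out to numerically, here is the same arithmetic in plain Python standing in for TensorFlow, with made-up critic outputs:

```python
# Hypothetical critic outputs for a batch of 3 real and 3 generated samples.
d_real = [0.9, 0.7, 0.8]
d_fake = [0.2, 0.4, 0.1]

# Mirror the TensorFlow snippet above: per-example losses first.
d_loss = [r - f for r, f in zip(d_real, d_fake)]  # critic loss per example
g_loss = list(d_fake)                             # generator loss per example

# An optimizer would typically minimize the batch means:
d_loss_mean = sum(d_loss) / len(d_loss)
g_loss_mean = sum(g_loss) / len(g_loss)
print(d_loss_mean, g_loss_mean)
```

Note that unlike the standard GAN loss, these values are unbounded scores rather than probabilities, which is what makes the Wasserstein formulation resistant to mode collapse.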

Papers

Sources

Contributing

Contributions are welcome and appreciated. To help out, just issue a pull request or file a bug report.

If you create something cool with this let us know!

In case you are interested, our pivotal board is here: https://www.pivotaltracker.com/n/projects/1886395

Citation

If you wish to cite this project, do so like this:

  255bits (M. Garcia),
  HyperGAN, (2017), 
  GitHub repository, 
  https://github.com/255BITS/HyperGAN

This article is shared from the WeChat official account CreateAMind (createamind).

Originally published: 2017-03-02.
