Creative Sketch Generation

1 minute read

Conference

ICLR 2021

Authors

  • Songwei Ge
  • Vedanuj Goswami
  • C. Lawrence Zitnick
  • Devi Parikh

Overview

This post covers DoodlerGAN, a model for synthesizing creative sketches. Unlike previous approaches that learn to mimic how humans draw, the goal is to generate sketches that do not appear in human drawings.

Dataset

  • Two novel datasets: Creative Birds and Creative Creatures.
  • Data are collected on Amazon Mechanical Turk.
  • Each sketch starts from an initial random stroke.
  • The annotator is first asked to draw an eye.
  • Next, the annotator draws the creature one part at a time, specifying which part is being drawn.
  • ~10k sketches were collected for each category (a rough, hypothetical record layout is sketched below).
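
As a rough illustration of the part-by-part collection process, one sketch could be stored as an ordered list of labeled parts. The record layout and field names below are assumptions for illustration, not the released data format.

```python
# Hypothetical record layout for one crowd-sourced sketch (not the released format).
# Each part is an ordered list of strokes; each stroke is a list of (x, y) points.
creative_bird = {
    "category": "creative_birds",
    "initial_stroke": [(12, 40), (18, 35), (25, 33)],  # random stroke shown to the annotator
    "parts": [
        {"label": "eye",  "strokes": [[(30, 28), (32, 26), (34, 28), (32, 30)]]},
        {"label": "beak", "strokes": [[(40, 30), (52, 33), (40, 36)]]},
        {"label": "body", "strokes": [[(20, 40), (35, 60), (60, 55), (55, 35)]]},
    ],
}

# The part labels make it easy to replay the drawing one part at a time,
# which is what a sequential, part-based generator needs.
for part in creative_bird["parts"]:
    print(part["label"], len(part["strokes"]), "stroke(s)")
```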

DoodlerGAN

  • A part-based sketch generation model: parts are generated sequentially, while ensuring that at each step the appearance of the parts and of the partial sketch comes from the corresponding distributions observed in human sketches.
  • Contains two modules: a part generator and a part selector.
  • Sketches are stored as raster images (i.e., grids of pixels) rather than as vector representations of strokes. This makes detailed spatial information readily available, which helps the model better assess where a part connects with the rest of the sketch.
  • The input partial sketch is represented by stacking each part as a separate channel, along with an additional channel for the entire partial sketch. If a part has not been drawn yet, its channel is a zero image (a minimal encoding sketch follows this list).
  • Architecture: the full diagram and more details are in the paper.
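
To make the input representation concrete, here is a minimal sketch of how a partial sketch could be encoded as a stacked-channel raster image. The part vocabulary, resolution, and helper name are illustrative assumptions, not the paper's exact code.

```python
import numpy as np

# Assumed part vocabulary and raster resolution for the example (not the official ones).
PARTS = ["initial", "eye", "head", "body", "beak", "legs", "wings", "mouth", "tail"]
H = W = 64

def encode_partial_sketch(part_rasters):
    """Stack per-part rasters into a (num_parts + 1, H, W) array.

    part_rasters: dict mapping part name -> binary HxW array of drawn pixels.
    Parts that have not been drawn yet become all-zero channels; the extra
    final channel holds the union of all drawn parts (the entire partial sketch).
    """
    channels = [part_rasters.get(name, np.zeros((H, W), dtype=np.float32)) for name in PARTS]
    full = np.clip(np.sum(channels, axis=0), 0.0, 1.0)  # entire partial sketch
    return np.stack(channels + [full], axis=0)

# Example: only the initial stroke and an eye have been drawn so far.
eye = np.zeros((H, W), dtype=np.float32); eye[20:24, 30:34] = 1.0
stroke = np.zeros((H, W), dtype=np.float32); stroke[40, 10:50] = 1.0
x = encode_partial_sketch({"initial": stroke, "eye": eye})
print(x.shape)  # (10, 64, 64): one channel per part plus the full partial sketch
```

In the model, the part selector looks at such an encoding to decide which part to draw next (or to stop), and the part generator produces the raster for that part conditioned on the same input; see the paper for the exact architecture.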

Results

  • Compared against: StyleGAN2 Unconditional, StyleGAN2 Conditional, SketchRNN Unconditional, SketchRNN Conditional, and a percentage-based baseline.
  • Both quantitative metrics and human evaluations are reported.