Derivative-Works is an experiment in using machine learning to create image collages. The algorithm cuts out shapes from images and rearranges them to create a face.

The popularity of generative ML and GANs has created an infinite abundance of textures. Derivative-Works treats these textures as ingredients and raw materials to reinvent, much as the Dadaists used magazines and print media. The arrangement of dozens of textures becomes an approachable creative medium, driven by the selection of images and objectives.

Created By

Joel & Tal

Source Materials

All of the reference images are in the public domain; they were created in Artbreeder using BigGAN and StyleGAN.

Methods - source code

  1. A patch generator (a DCGAN) trained on Perlin noise was taken from a previous project. It produces highly diverse shapes and is fully differentiable.
  2. There is a fixed number of patches, each with a corresponding latent vector and transformation matrices. These transformations control where in the reference image the patch is cut from and where on the canvas it is placed.
  3. These variables are then optimized (using Adam) to perform feature inversion over a face classifier (dlib's CNN model).
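The cut-and-place transforms in steps 1 and 2 can be sketched as follows. This is a minimal NumPy illustration with nearest-neighbour sampling and a hard-coded circular mask standing in for the DCGAN shape generator; the actual project samples differentiably so gradients can flow through the transforms. All names and sizes here are illustrative, not the project's code.

```python
import numpy as np

def affine_sample(image, matrix, out_size):
    """Cut a patch from `image` with an affine map (nearest neighbour).
    `matrix` is 2x3 and maps output pixel coords (x, y, 1) to source coords."""
    H, W = image.shape[:2]
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(out_size * out_size)])
    src = matrix @ coords                         # 2x3 @ 3xN -> source coords
    sx = np.clip(np.round(src[0]).astype(int), 0, W - 1)
    sy = np.clip(np.round(src[1]).astype(int), 0, H - 1)
    return image[sy, sx].reshape(out_size, out_size)

# A synthetic "reference texture": a horizontal gradient.
ref = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))

# "Cut" transform: rotate by 0.5 rad and translate into the reference image.
cut = np.array([[np.cos(0.5), -np.sin(0.5), 8.0],
                [np.sin(0.5),  np.cos(0.5), 8.0]])
patch = affine_sample(ref, cut, 16)

# A circular mask stands in for the DCGAN-generated patch shape.
yy, xx = np.mgrid[0:16, 0:16]
mask = (xx - 8) ** 2 + (yy - 8) ** 2 < 36

# "Place" transform (here just an offset): composite the patch onto the canvas.
canvas = np.zeros((32, 32))
canvas[4:20, 4:20][mask] = patch[mask]
```

In the project itself, both the cut and place matrices are free variables per patch, so the optimizer can move, rotate, and scale every cut-out while the loss gradient flows through the sampling.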

The primary difference between this method and vanilla inversion is the input medium: instead of optimizing pixels directly, we optimize parameters. This simple technique led to a wide variety of textures and compositions, and the videos show the optimization process itself.
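The contrast with pixel-space inversion can be illustrated with a toy version of the idea: a small parameter vector (standing in for the per-patch latents and transforms, far fewer values than a full pixel grid) is optimized with Adam to match fixed target features. The linear "renderer" `R` and the `target` vector are illustrative stand-ins for the differentiable collage pipeline and the face classifier's features.

```python
import numpy as np

rng = np.random.default_rng(0)
n_params, n_features = 16, 64                  # parameters << pixels
R = rng.normal(size=(n_features, n_params))    # stand-in renderer + classifier
target = rng.normal(size=n_features)           # stand-in "face" features

# Adam, implemented directly so the update rule is explicit.
theta = np.zeros(n_params)
m = np.zeros(n_params)
v = np.zeros(n_params)
lr, b1, b2, eps = 0.05, 0.9, 0.999, 1e-8
for t in range(1, 501):
    grad = R.T @ (R @ theta - target)          # grad of 0.5*||R theta - target||^2
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    mhat = m / (1 - b1 ** t)
    vhat = v / (1 - b2 ** t)
    theta -= lr * mhat / (np.sqrt(vhat) + eps)

loss = 0.5 * np.sum((R @ theta - target) ** 2)
```

Because only 16 parameters are optimized rather than every pixel, the search is constrained to images the patch system can actually produce, which is where the collage-like textures and compositions come from.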