Photographs of block prints, each print 8 cm square. They are made using 3D-printed blocks, and the designs are generated by code written in Python.
These final two images show some of the inked blocks and a print in progress, with a jig to guide the alignment of the blocks.
Visualization of earthquakes in the San Francisco Bay Area. Grey value indicates magnitude: the most recent quake (top), the largest quake each day for the past 30 days (middle), and all quakes in the past 24 hours (bottom).
Made using a Raspberry Pi Zero W and a Pimoroni Inky wHAT, mounted in an 8-inch square frame.
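The core of a display like this is mapping each quake's magnitude to a grey value. A minimal sketch of that mapping, assuming a linear scale over a 0–7 magnitude range (the actual range and curve used in the project aren't specified):

```python
def magnitude_to_grey(mag, mag_min=0.0, mag_max=7.0):
    """Map an earthquake magnitude to an 8-bit grey value, darker for
    stronger quakes. The 0-7 range and the linear mapping are
    illustrative assumptions, not the project's actual settings."""
    t = max(0.0, min(1.0, (mag - mag_min) / (mag_max - mag_min)))
    return round(255 * (1.0 - t))  # 255 = white (weak), 0 = black (strong)
```

Quake data for a display like this is available from the public USGS GeoJSON feeds, and the resulting grey values can be drawn to the e-paper panel with Pimoroni's `inky` library.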
A simple automaton toy based on a design in the book Making Simple Automata by Robert Race.
These drawings were extracted from images using a combination of OpenCV and a genetic algorithm.
First, OpenCV is used to find contour lines and Canny edges in the images. On the left are all the lines found this way (typically hundreds), lightly post-processed to smooth the longer lines.
On the right are 20 lines selected by the genetic algorithm. The algorithm generates 100 drawings, each with a different subset of the lines from the drawing on the left. It then automatically evaluates the fitness of each drawing and creates a new generation of 100 drawings, selecting characteristics more often from the highest-rated drawings and tossing in some random mutations.
The goal is to capture the essence of the original image in just a few vector lines so that a drawing robot can efficiently recreate it. Below are more examples.
Some more sample results from a drawing app I am working on that uses a genetic algorithm to generate drawings (click for higher resolution):
Above: face 1, Below: face 2 (same source image as face 1)
Above: one generation of faces (same source as face 1 and 2)
Above: face 3 (different source image), Below: landscape.