While designing and testing a video feedback rig, I noticed the system started creating repetitive patterns without any input from me. Under certain settings, the system demonstrates emergent behavior: related gestures form, collapse, and reconfigure themselves. Over time, these same gestures cycle through different combinations of the primary and secondary digital colors, and the collapses grow longer and more intricate.
The video system was built in Max/MSP/Jitter and generates its patterns using a feedback loop containing a kaleidoscopic shader, a custom pixel-displacement effect, and, most importantly, a sharpen shader that behaves like a cellular automaton's update rule.
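To see why repeated sharpening in a feedback loop acts like a cellular automaton, consider that each pixel's next value depends only on its local neighborhood, and clipping to the displayable range adds the nonlinearity that lets stable patterns form. The following is a minimal NumPy sketch of that idea, not the actual Jitter patch: the `sharpen_step` function, the unsharp-mask formulation, and the `amount` parameter are all my own assumptions for illustration.

```python
import numpy as np

def sharpen_step(frame, amount=0.8):
    """One feedback iteration (illustrative, not the actual shader):
    unsharp mask, sharpened = frame + amount * (frame - blur(frame)).
    Each pixel's next value depends only on its 4-neighborhood, and
    clipping to [0, 1] makes the iterated map behave like a continuous
    cellular automaton rather than a linear filter."""
    # 4-neighbor box blur via np.roll (toroidal edges, like a CA grid)
    blur = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0) +
            np.roll(frame, 1, 1) + np.roll(frame, -1, 1)) / 4.0
    return np.clip(frame + amount * (frame - blur), 0.0, 1.0)

# Feed noise through the loop: high-frequency detail is amplified each
# pass and clipped, so the field tends to organize into saturated regions
# instead of settling to uniform gray.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
for _ in range(100):
    frame = sharpen_step(frame)
```

The interesting behavior comes from the interaction between this local amplification and the global transforms (kaleidoscope, displacement) in the same loop, which keep reinjecting structure for the sharpening stage to act on.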
The sound is a sonification of the video, based on a digital extension of the optical-soundtrack idea from film. The video is split into eight new signals by filtering for the six primary and secondary colors, plus white and black. Edge detection and analysis are then used to find the row or column with the greatest density of information, and that line is sonified by reading through it as a waveform.
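The two steps above, splitting the frame into eight color signals and reading the busiest line as a waveform, can be sketched in NumPy as follows. This is a simplified stand-in for the Jitter processing: the nearest-anchor color classification, the per-line edge-density measure (summed absolute differences along the line), and the function names are my own assumptions, not the patch's actual method.

```python
import numpy as np

# The eight target colors: RGB primaries, CMY secondaries, white, black.
COLORS = {
    "red":   (1, 0, 0), "green":   (0, 1, 0), "blue":   (0, 0, 1),
    "cyan":  (0, 1, 1), "magenta": (1, 0, 1), "yellow": (1, 1, 0),
    "white": (1, 1, 1), "black":   (0, 0, 0),
}

def color_masks(rgb):
    """Split an HxWx3 frame (values in [0, 1]) into eight binary masks,
    one per color, by assigning each pixel to its nearest anchor color
    (an illustrative stand-in for the patch's color filtering)."""
    anchors = np.array(list(COLORS.values()), dtype=float)       # (8, 3)
    dist = np.linalg.norm(rgb[..., None, :] - anchors, axis=-1)  # (H, W, 8)
    nearest = dist.argmin(axis=-1)
    return {name: (nearest == i).astype(float)
            for i, name in enumerate(COLORS)}

def sonify_mask(mask):
    """Find the row or column with the greatest edge density, measured
    here as total absolute difference along the line, and read it out
    as a single-cycle waveform scaled to [-1, 1]."""
    rows = np.abs(np.diff(mask, axis=1)).sum(axis=1)  # variation per row
    cols = np.abs(np.diff(mask, axis=0)).sum(axis=0)  # variation per column
    if rows.max() >= cols.max():
        line = mask[rows.argmax(), :]
    else:
        line = mask[:, cols.argmax()]
    return line * 2.0 - 1.0  # pixel range [0, 1] -> audio range [-1, 1]
```

In the real system this line would be read repeatedly at audio rate, so the pitch and timbre track the position and texture of the densest feature in each color channel as the video evolves.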