4JM

(2016)
fixed video
4JM applies techniques from drawn-on-film animation to a digital context. Many animators work directly on the film surface to create their work, bypassing the use of a camera entirely. This physical approach enables new ways of thinking about time, space, and continuity, with animators painting, drawing, and scratching along the surface of the film strip with no regard for the boundaries of its frames. These markings have a physical continuity that is ultimately disrupted when the frames are projected one at a time.
This piece imagines how a similar process might create a digital video. The visual content was created by a video feedback program which wrote out a single line of pixels at a time, beginning at the top of the first frame and working down. When it reached the bottom of the first frame, it saved the image and continued on, with its next output becoming the first row of pixels of the next frame. If the frames of the video were laid out like a film strip, with the bottom of each frame adjacent to the top of the next, you would see a continuous stroke passing through all of them.
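
A minimal sketch of that row-by-row process, assuming made-up dimensions and leaving the actual feedback math out (render_scanline and save_frame here are placeholders, not the program used for the piece):

```python
import numpy as np

WIDTH, HEIGHT, NUM_FRAMES = 640, 480, 100   # assumed dimensions

def render_scanline(y_global, history):
    """Stand-in for the feedback step: compute one row of pixels
    from the rows already written (the real feedback math is omitted)."""
    return np.zeros((WIDTH, 3), dtype=np.uint8)

def save_frame(frame, index):
    """Stand-in for writing the finished frame to disk."""
    pass

frame = np.zeros((HEIGHT, WIDTH, 3), dtype=np.uint8)
history = []   # every row written so far, ignoring frame boundaries
for y_global in range(HEIGHT * NUM_FRAMES):
    row = render_scanline(y_global, history)
    history.append(row)
    frame[y_global % HEIGHT] = row
    if y_global % HEIGHT == HEIGHT - 1:               # bottom of a frame
        save_frame(frame.copy(), y_global // HEIGHT)  # save it and keep going
```

Laid end to end, the rows form one continuous stream; a frame boundary is nothing more than the point where the image happens to be saved.
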
The changes in the video were created by manipulating the feedback parameters as the frames rendered one line at a time. The feedback could be zoomed in and out, moved left and right, shifted in color, and twirled about to create the illusion of motion. I also created controls that could steer the image to retain more or less of the features of the previous frame, incorporating a measure of temporal continuity into the results. Overall, the piece was rendered over the course of several days: each frame took about twelve seconds, and I had to slowly change the parameters and monitor the output, occasionally going back a few frames to re-render something with the right effect.
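
The "retain more or less of the previous frame" control can be pictured as a per-row blend like the sketch below, where retain is a hypothetical parameter that would be varied alongside the zoom, pan, and color controls as each line renders:

```python
import numpy as np

def blend_row(feedback_row, previous_frame_row, retain=0.5):
    """retain = 0.0 keeps only the new feedback output;
    retain = 1.0 freezes the corresponding row of the previous frame."""
    mixed = (retain * previous_frame_row.astype(float)
             + (1.0 - retain) * feedback_row.astype(float))
    return mixed.astype(np.uint8)
```
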
The sound was created with a synthesis program I've been devising, which simulates the effects of amplified feedback in a virtual space. This synth came out of a desire to create music in which spatial motion and sonic result are linked, and microphone feedback seemed like a great paradigm for a low-level approach. I sequenced the position changes of the virtual microphone while watching along with the finished video, creating a soundtrack which hopefully relates to the on-screen motion without being explicitly mapped.
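
As a rough illustration of the underlying idea (not the actual synth), amplified feedback can be modeled as a delay loop whose delay time comes from the distance between a virtual speaker and microphone, so moving the microphone changes the pitch and behavior of the howl; mic_feedback, the gain value, and the trajectory below are all invented for this sketch:

```python
import numpy as np

SR = 44100               # sample rate (Hz)
SPEED_OF_SOUND = 343.0   # m/s

def mic_feedback(mic_position, speaker_pos=(0.0, 0.0), gain=1.2, seconds=2.0):
    """mic_position(t) -> (x, y) in meters; returns a mono sample buffer."""
    n = int(SR * seconds)
    buf = np.zeros(n)
    rng = np.random.default_rng(0)
    for i in range(1, n):
        mx, my = mic_position(i / SR)
        dist = np.hypot(mx - speaker_pos[0], my - speaker_pos[1])
        delay = max(1, int(dist / SPEED_OF_SOUND * SR))   # loop delay in samples
        fed = buf[i - delay] if i >= delay else 0.0
        noise = 1e-4 * rng.standard_normal()              # ambient noise seeds the loop
        buf[i] = np.tanh(gain * fed + noise)              # soft clipping bounds the howl
    return buf

# e.g. a microphone slowly drifting away from the speaker, lowering the pitch
audio = mic_feedback(lambda t: (0.5 + 0.2 * t, 0.0))
```

Sequencing the spatial motion, as described above, then amounts to handing the loop a different microphone trajectory over time.
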