Anastasiia Raina, Meredith Binnette, Danlei Huang, Yimei Hu, Zack Davey, Qihang Li
From Noise to Form (2021)
How does form emerge when the boundaries between the natural, the artificial, and the automated become obsolete? For this project we created an audio-controlled artificial environment in which multiple variables are activated to propagate the growth and morphology of the generated structures. Sound plays the role of stimulus: the 3D coordinates of the sound source guide the general direction of growth; the amplitude of the electronic sound and of the four-channel audio modulates shape and size; and noise fluctuations control the degree of nonlinear distortion.
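To make this mapping concrete, the minimal Python sketch below shows one plausible growth update driven by those three features. The function name, parameters, and formulas are illustrative assumptions, not the project's actual rules.

```python
import numpy as np

def growth_step(position, source_pos, amplitude, noise_level,
                step=0.1, distortion_gain=0.5, rng=None):
    """One growth increment driven by audio features (hypothetical mapping).

    position    -- current 3D tip of the structure
    source_pos  -- 3D coordinates of the sound source (guides direction)
    amplitude   -- signal amplitude (scales step size, i.e. shape and size)
    noise_level -- noise fluctuation (degree of nonlinear distortion)
    """
    rng = rng or np.random.default_rng()
    direction = source_pos - position
    direction /= np.linalg.norm(direction) + 1e-9   # grow toward the source
    jitter = rng.normal(scale=noise_level, size=3)  # noise-driven distortion
    return position + step * amplitude * (direction + distortion_gain * jitter)

# Toy usage: a tip grows toward a fixed source as amplitude and noise vary.
tip = np.zeros(3)
source = np.array([1.0, 2.0, 0.5])
for amp, noise in [(0.8, 0.05), (1.2, 0.2), (0.6, 0.4)]:
    tip = growth_step(tip, source, amp, noise)
print(tip)
```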
To visualize and simulate this controlled growth, we employed the style-transfer and image-synthesis capabilities of Generative Adversarial Networks (GANs) to analyze and combine the visual data into custom generated models. We then collected groups of audio data and used Grasshopper to generate the movement path and the shapes at multiple time periods. The shapes were then imported into Cinema 4D and animated. Signaled by the environment, the structures slowly grow from digital noise into pixels, morphing in response to environmental changes.
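As a sketch of how audio data might be reduced to per-period parameters for a Grasshopper definition, the snippet below windows a signal and exports simple features over time. The windowing, the RMS/noise features, and the CSV hand-off are all assumptions for illustration; the project's actual pipeline is not documented here.

```python
import csv
import numpy as np

def audio_to_keyframes(signal, sr, window_s=0.5):
    """Reduce an audio signal to per-window features: RMS amplitude
    (could drive shape size) and a crude noisiness estimate (could
    drive distortion). Feature choices are illustrative only."""
    hop = int(sr * window_s)
    rows = []
    for i in range(0, len(signal) - hop, hop):
        frame = signal[i:i + hop]
        rms = float(np.sqrt(np.mean(frame ** 2)))       # -> size parameter
        noise = float(np.mean(np.abs(np.diff(frame))))  # -> distortion parameter
        rows.append((i / sr, rms, noise))
    return rows

# Toy usage: a synthetic tone with growing noise, exported as CSV that a
# Grasshopper definition could read to place shapes at each time period.
sr = 44100
t = np.linspace(0, 3, 3 * sr, endpoint=False)
sig = np.sin(2 * np.pi * 220 * t) + np.linspace(0, 0.5, t.size) * np.random.randn(t.size)
with open("keyframes.csv", "w", newline="") as f:
    csv.writer(f).writerows([("time_s", "rms", "noise"), *audio_to_keyframes(sig, sr)])
```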
More about the project: From Noise To Form Website