Perpetual Nirvana

MassArt Foundation Fellowship - Work in progress, 2018

Perpetual Nirvana is a live composing and performing sound installation in which a complex computer program attempts to continuously create and play new songs in the style of the 1990s band Nirvana. The piece uses Artificial Intelligence and Deep Learning systems that study and analyze large sets of data in order to continuously generate text, instrumental, and vocal tracks.

In the last two years I have been investigating digital generative processes in which ‘writing machines’ and ‘deep learning’ technologies present new opportunities for critical artistic creation. These processes and machines vary in their details, from forms of Dadaist and Surrealist writing games to the use of computer programs for scrambling text. As these technologies advance, they also raise questions about humanity’s role in cultural production and about what an artist's work is in the era of “creative bots” and autonomous artificial intelligence: How do we, as a society, assign value to contemporary cultural production? Is the “creative genius” a machine that can be learned from? What is the value of a historical cultural artifact when it can be hacked and infinitely reproduced? Could we identify genius in an automated creation?

“Perpetual Nirvana” aims to create an immersive space in which these questions become central to the artistic form. By studying the discography and musical influences behind the band Nirvana, processing this information as data through a Deep Learning program, and generating a synthetic performing program, the installation aims to infinitely play new auto-generated versions of the sonic phenomenon that was Nirvana, prompting a reflection on contemporary originality, stylistic innovation, and generative authorship, and, at the same time, questioning the mythology of the generational genius. “Perpetual Nirvana” also has the potential to inspire an updated reading of notions of transcendental states and levels of consciousness. Expanding on the Buddhist concept of “Nirvana,” this project proposes a new form of genius capable of generating multiple creative selves while reframing notions of death and rebirth in popular culture.

 

Timeline

Fall 2017
Planning and initial analysis with developer
Development and sequencing of learning system
Sampling and production at sound studio with producer

Spring 2018
First run of tests and experiments with implementation of performing software

Summer 2018
Test for individual instrumental track generation
Testing of vocal modeling and generation
Studies of space, installation requirements, and possibilities of visual video signal in response to sound output
Final tests and program arrangements for mixing and track completion

Spring 2019
Final installation and exhibition

 

Sketchbook


First Tests, Summer 2018

These results come from an algorithm built in Python that works through the following stages:

First, a harmonic body of work is analyzed. For this series we chose the album Nevermind as a first test run. Using its harmonic structure (chords, keys, song structure, and rhythm), we build a probability matrix, essentially one large Markov chain, that makes the larger compositional decisions about the new pieces. This process then produces an abstract computer representation of the song: a list of arrays of information.
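As a hedged illustration of this first stage, the sketch below builds a first-order Markov transition table from a hand-entered chord progression and samples an abstract section from it. The chord labels, durations, and section length are placeholder assumptions, not the actual analysis of Nevermind.

import random
from collections import defaultdict

# Placeholder progression standing in for the analyzed harmonic data;
# the real project extracts chords, keys, structure, and rhythm from the album.
analyzed_chords = ["F5", "Bb5", "Ab5", "Db5", "F5", "Bb5", "Ab5", "Db5", "Gb5", "F5"]

# First-order Markov transition table: chord -> list of observed next chords.
transitions = defaultdict(list)
for current, nxt in zip(analyzed_chords, analyzed_chords[1:]):
    transitions[current].append(nxt)

def generate_section(start_chord, length=8):
    """Walk the Markov chain to produce an abstract section: a list of
    [chord, duration_in_beats] arrays, the kind of structure the next step consumes."""
    section = []
    chord = start_chord
    for _ in range(length):
        duration = random.choice([2, 4])  # simple placeholder rhythmic choice
        section.append([chord, duration])
        candidates = transitions.get(chord)
        chord = random.choice(candidates) if candidates else random.choice(analyzed_chords)
    return section

print(generate_section("F5"))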

The next step is to transfer this abstract data into something that can be "played." The output from step one, the abstract structure of the song, is turned into a set of MIDI notes. It is worth noting that these MIDI notes do not correspond directly to the notes heard in the final product; they are triggers that point to specific samples designed for this project.
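A minimal sketch of this translation step, assuming the mido library and a hypothetical mapping from chord labels to trigger notes (in the installation these numbers would point at project-specific samples rather than literal pitches):

import mido

# Hypothetical mapping from chord labels to MIDI trigger notes.
TRIGGER_MAP = {"F5": 36, "Bb5": 38, "Ab5": 40, "Db5": 41, "Gb5": 43}

def section_to_midi(section, path="section.mid", ticks_per_beat=480):
    """Turn an abstract section ([chord, beats] pairs) into a MIDI file
    of trigger notes that a sampler can play."""
    mid = mido.MidiFile(ticks_per_beat=ticks_per_beat)
    track = mido.MidiTrack()
    mid.tracks.append(track)
    for chord, beats in section:
        note = TRIGGER_MAP.get(chord, 36)
        track.append(mido.Message("note_on", note=note, velocity=100, time=0))
        track.append(mido.Message("note_off", note=note, velocity=0,
                                  time=int(beats * ticks_per_beat)))
    mid.save(path)

section_to_midi([["F5", 4], ["Bb5", 4], ["Ab5", 2], ["Db5", 2]])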

Finally, using the digital audio workstation Ableton Live, a set of digital sample-based instruments is designed. These instruments take the MIDI output of the algorithm from step two and produce the riffs that are played.
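The instruments themselves are built inside the DAW, but as a rough sketch of how generated triggers could reach them in real time, the following assumes a virtual MIDI port routed into Ableton Live and streams note messages with mido. The port name, tempo, and timing scheme are illustrative assumptions, not the installation's actual setup.

import time
import mido

def stream_section(section, trigger_map, bpm=120, port_name="Perpetual Nirvana"):
    """Send each trigger note to a virtual MIDI port, holding it for the
    chord's duration; a sampler in the DAW turns the triggers into riffs."""
    beat_seconds = 60.0 / bpm
    # virtual=True requires a backend that supports virtual ports (e.g. rtmidi).
    with mido.open_output(port_name, virtual=True) as port:
        for chord, beats in section:
            note = trigger_map.get(chord, 36)
            port.send(mido.Message("note_on", note=note, velocity=100))
            time.sleep(beats * beat_seconds)
            port.send(mido.Message("note_off", note=note, velocity=0))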

The next steps for this project are to flesh out more instrument samples and to build a system for assembling larger structures of music: songs. This will challenge the system to work with melody and vocals, as the current system focuses mostly on harmony.

Another challenge is beginning to tackle what it means to create a perpetual machine: How long should the pieces this machine generates be? Can it generate them in real time? How should the machine structure its compositions so that they work together?
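One speculative way to picture the perpetual, real-time version is a loop that keeps regenerating sections from the Markov model and streams each one to the sampler as the previous one ends, reusing the sketches above. This is a sketch of the direction, not the installation's actual code.

def run_forever(start_chord="F5"):
    """Endless generate-and-play loop: each new section continues from the
    chord where the last one ended, so the music never stops."""
    chord = start_chord
    while True:
        section = generate_section(chord, length=16)
        stream_section(section, TRIGGER_MAP, bpm=110)
        chord = section[-1][0]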