Perpetual Nirvana

MassArt Foundation Fellowship - Work in progress, 2018

Perpetual Nirvana is a live composing and performing sound installation in which a complex computer program attempts to continuously create and play new songs in the style of the 1990s band Nirvana. The piece uses Artificial Intelligence and Deep Learning systems that study and analyze large sets of data in order to create algorithms for the continuous generation of text, instrumental, and vocal tracks.

In the last two years I have been investigating digital generative processes in which ‘writing machines’ and ‘deep learning’ technologies present new opportunities for critical artistic creation. These processes and machines vary in their details, from forms of Dadaist and Surrealist writing games to the use of computer programs for scrambling text. As these technologies advance, they also raise questions about humanity’s role in cultural production and what an artist's work is in the era of “creative bots” and autonomous artificial intelligence: How do we, as a society, assign value to contemporary cultural production? Is “creative genius” something a machine can learn? What is the value of a historical cultural artifact when it can be hacked and infinitely reproduced? Could we identify genius in an automated creation?

“Perpetual Nirvana” aims to create an immersive space in which these questions become central to the artistic form. By studying the discography and musical influences behind the band Nirvana, processing this information as data through a Deep Learning program, and generating a synthetic performing program, the installation aims to infinitely play new auto-generated versions of the sonic phenomenon that was Nirvana, inciting a reflection on contemporary originality, stylistic innovation, and generative authorship while, at the same time, questioning the mythology of the generational genius. “Perpetual Nirvana” also has the potential to inspire an updated reading of notions of transcendental states and levels of consciousness. Expanding on the Buddhist concept of “Nirvana,” this project proposes a new form of genius capable of generating multiple creative selves while reframing notions of death and rebirth in popular culture.



Fall 2017
Planning and initial analysis with developer
Development and sequencing of learning system
Sampling and production at sound studio with producer

Spring 2018
First run of tests and experiments with implementation of performing software

Summer 2018
Test for individual instrumental track generation
Testing of vocal modeling and generation
Studies of space, installation requirements, and possibilities of visual video signal in response to sound output
Final tests and program arrangements for mixing and track completion

Spring 2019
Final installation and exhibition


Sketchbook

First Tests, Early Summer 2018

These results were produced by an algorithm built in Python that works through the following stages:

1) A harmonic body of work is analyzed. For this series, the album Nevermind (1991) was chosen as a test run. Using the harmonic structures found in the album (chords, keys, structure, and rhythm), a probability matrix is built. This is essentially one large Markov chain that makes the larger compositional decisions about the new pieces. Running this process produces an abstract computer representation of the "song": a list of arrays of information. These results dictate chord progressions and strumming patterns, focusing primarily on the guitar as the leading instrument.
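The Markov-chain stage described above can be sketched roughly as follows. This is a hedged illustration, not the project's actual code: the chord symbols and transition probabilities are invented placeholders, whereas the real matrix would be derived from the harmonic analysis of Nevermind.

```python
import random

# Toy transition matrix in the spirit of the stage described above.
# The chords and their probabilities are hypothetical placeholders;
# the real matrix would be built from analyzing "Nevermind".
TRANSITIONS = {
    "F5":  {"Bb5": 0.5, "Ab5": 0.3, "Db5": 0.2},
    "Bb5": {"Ab5": 0.6, "F5": 0.4},
    "Ab5": {"Db5": 0.5, "F5": 0.5},
    "Db5": {"F5": 0.7, "Bb5": 0.3},
}

def generate_progression(start, length, seed=None):
    """Walk the Markov chain to produce a chord progression."""
    rng = random.Random(seed)
    chords = [start]
    for _ in range(length - 1):
        options = TRANSITIONS[chords[-1]]
        nxt = rng.choices(list(options), weights=list(options.values()))[0]
        chords.append(nxt)
    return chords

progression = generate_progression("F5", 8, seed=1)
print(progression)
```

Repeated runs with different seeds yield different progressions drawn from the same learned probabilities, which is what lets the system play "infinitely" without repeating itself verbatim.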

2) The resulting abstract data is translated into something that can be "played." The output from step one, the abstract structure of the song, is turned into a set of MIDI notes. It is worth noting that these MIDI notes do not correspond directly to the notes heard in the final product; rather, they are triggers tied to specific samples that have been designed for this project.
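One way to picture this translation step: each chord in the abstract structure is mapped to a trigger note and spread across time according to a strumming pattern. The trigger numbers, the event format, and the helper below are assumptions made for illustration, not the project's actual scheme.

```python
# Hypothetical mapping from chord symbols to MIDI trigger notes.
# These note numbers are arbitrary trigger keys, not pitches: each one
# launches a pre-designed sample, as described in the text above.
SAMPLE_TRIGGERS = {"F5": 36, "Bb5": 37, "Ab5": 38, "Db5": 39}

def to_trigger_events(progression, strum_pattern, ticks_per_chord=480):
    """Turn a chord list into (tick, trigger_note, velocity) events.

    strum_pattern is a list of (offset, velocity) pairs, with offsets
    given as fractions of one chord's duration.
    """
    events = []
    for i, chord in enumerate(progression):
        base = i * ticks_per_chord
        for offset, velocity in strum_pattern:
            events.append((base + int(offset * ticks_per_chord),
                           SAMPLE_TRIGGERS[chord], velocity))
    return events

# Two chords, each strummed twice: a loud downstroke and a softer upstroke.
events = to_trigger_events(["F5", "Bb5"], [(0.0, 100), (0.5, 80)])
print(events)
```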

3) Using the digital audio workstation "Ableton Live", a set of digital sample-based instruments is designed. These instruments are built by synthesizing samples found on the original recordings of the album "Nevermind" and assigning them movements with the characteristics of an analog instrument (hit volume, strumming patterns, tempo). The instruments take the MIDI produced by the algorithm in step 2 and play the riffs heard in the video above.


Late Summer 2018

The next steps for the project have focused on refining the instrument samples, perfecting the performing synthesizers, and designing a system for building larger song-like structures of music. This challenges the process: building songs in real time requires a complex and computationally heavy system for working with melody and vocals, while the current system focuses mostly on harmony.

Rather than creating songs with a beginning and an end (as on an album), the system seems to work best when the three main instruments "roll free" and are directed by the universal composing program to "look for each other", almost in the same way a band does while "jamming". This means that while a global program sets universal rules of composition, tempo, volume, tone, and key, three other individual programs regulate the behavior of each instrument. Here are some samples of the instruments playing by themselves:
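The "global rules plus individual instrument programs" architecture described above can be sketched as below. This is a hedged, simplified illustration under invented assumptions: a conductor object fixes the universal parameters, and each instrument agent repeatedly nudges its own playing density toward what it "hears" from the others, loosely imitating players locking into a groove. The class names and the density heuristic are hypothetical, not the project's actual design.

```python
import random

class Conductor:
    """Hypothetical global program: sets the universal rules."""
    def __init__(self, key="F", tempo=120):
        self.key = key
        self.tempo = tempo

class InstrumentAgent:
    """Hypothetical per-instrument program with one free parameter:
    how busily it plays (0 = silent, 1 = constant playing)."""
    def __init__(self, name, rng):
        self.name = name
        self.density = rng.uniform(0.2, 0.9)

    def listen_and_adjust(self, others):
        # Drift halfway toward the average density of the other
        # instruments: a crude stand-in for "looking for each other".
        target = sum(o.density for o in others) / len(others)
        self.density += 0.5 * (target - self.density)

rng = random.Random(0)
conductor = Conductor(key="F", tempo=120)
band = [InstrumentAgent(n, rng) for n in ("guitar", "bass", "drums")]

for _ in range(10):  # a few "bars" of mutual listening
    for agent in band:
        agent.listen_and_adjust([o for o in band if o is not agent])

spread = max(a.density for a in band) - min(a.density for a in band)
print(f"density spread after jamming: {spread:.4f}")
```

After a few rounds the agents' densities converge, which is the behavior the installation relies on: the instruments start apart and gradually settle into a shared groove under the conductor's global rules.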

This is a sample of the three instruments jamming, including a synthesizer with vocal samples triggered by the program as well:

It is worth noting that in the above sample, for previewing purposes, the transitions between riffs and song parts are very quick. In a live situation the instruments would jam for longer periods until finally locking into a groove, attempting to sustain it, then working on creating a following part, and so on.


Installation Proposal

4 audio channels, floating projection screen with automated video, custom furniture

Possible Activations and Events

  • Collaboration with local/college radio to air parts of the show
  • Creation of a 24-hour radio station / website that allows experiencing the piece in real time
  • Merchandise with sets of lyrics generated by the system
  • A closing event for the release of a limited-series record containing parts of the performance