An Introduction to "Metropolitan"

I feel like I write my best music when I have a strong sense of purpose and intention behind what I'm doing.  When I moved to New York City, I knew I wanted to write an album somehow connected to the city, but I wasn't sure how to anchor it.  It was probably on my third or fourth trip to The Metropolitan Museum of Art that I realised how much I loved the Met, and how much it meant to me to be there.  That is when I decided to write an album about that museum.  

I chose nine artworks from The Met that really resonated with me, used custom software programmed by Gregory Long specifically for this project to analyze an image of each artwork and generate sounds, and then incorporated those sounds into my compositions. 

This blog provides some insight into how each composition came about.  

I hope you enjoy!

Kolářová, Letters from Portugal

This photograph by Běla Kolářová caught my attention as I walked past it in the museum. It's a photograph of her hair.  I love its subtlety and ambiguity, and while I wasn't sure straight away I wanted to use it, I kept coming back to it.  

In terms of the tech, the path-to-melody algorithm is a variation on the traced-to-melody algorithm. Where the traced-to-melody algorithm traverses an image from left to right (and only left-to-right), creating a melody for each different-colored path, the path-to-melody algorithm is able to follow a path in any direction.

For a path - where a path is a continuous line of the same color - the left-most point is found, and is used as the starting point. The path is then followed, up/down/left/right (with diagonals) through the image. For each pixel in the path a note is added to the melody, the pitch of which is determined by the pixel's height within the image.

Here is an example of how we can draw some paths through the artwork; each of these paths is converted into its own melody:

[Image: 9-trace-2.noaa.png]
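For the technically curious, here's a rough Haskell sketch of the path-to-melody idea (my own simplified illustration with made-up names, not the actual code from Greg's repository):

import qualified Data.Map.Strict as Map
import Data.List (minimumBy)
import Data.Ord (comparing)

type Coord = (Int, Int)          -- (x, y), with y = 0 at the top of the image
type Color = (Int, Int, Int)     -- (red, green, blue)
type Image = Map.Map Coord Color

-- Follow a single-coloured path, starting from its left-most pixel and
-- stepping to neighbouring pixels (including diagonals) until no
-- unvisited pixel of that colour is reachable.
tracePath :: Image -> Color -> [Coord]
tracePath img col = go [start] []
  where
    pathPixels = Map.keys (Map.filter (== col) img)
    start      = minimumBy (comparing fst) pathPixels
    neighbours (x, y) = [ (x + dx, y + dy)
                        | dx <- [-1, 0, 1], dy <- [-1, 0, 1], (dx, dy) /= (0, 0) ]
    go [] visited = reverse visited
    go (p : rest) visited
      | p `elem` visited             = go rest visited
      | Map.lookup p img /= Just col = go rest visited
      | otherwise                    = go (neighbours p ++ rest) (p : visited)

-- One note per pixel on the path: the higher the pixel sits in the image
-- (the smaller its y), the higher the MIDI pitch.
pathToMelody :: Int -> [Coord] -> [Int]
pathToMelody imageHeight = map (\(_, y) -> 127 * (imageHeight - y) `div` imageHeight)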

This track went through quite a few different iterations  before I found the sounds and feel that I wanted.  It was a fairly frustrating process but I'm really happy with the result.  Here's an excerpt of the track:

Motherwell, Elegy to the Spanish Republic No. 70

To process the Motherwell we wanted to do something different; instead of turning an image into MIDI notes, we turned an image into MIDI signals.

For previous images the output of the processing algorithm was a collection of notes. Here’s our Haskell data structure that defines a note:

data Note = Note { noteStartPosition :: Int
                 , noteValue :: Int
                 , noteLength :: Int
                 , noteVolume :: Int } deriving (Eq, Show)

The noteStartPosition defines where in the piece the note is played, the noteValue is a number between 0 (C) and 127 (G) that defines the pitch of the note to be played, the noteLength defines how long the note should be played for, and the noteVolume, a value between 0 (soft) and 100 (loud), defines how loud the note should be.

So a list of notes, forming a melody, might look like the following:

[ Note { noteStartPosition = 10
       , noteValue = 81
       , noteLength = 8
       , noteVolume = 14 }
, Note { noteStartPosition = 3
       , noteValue = 4
       , noteLength = 4
       , noteVolume = 7 }
, Note { noteStartPosition = 22
       , noteValue = 23
       , noteLength = 8
       , noteVolume = 26 } ]

For this piece we generated MIDI signals instead of notes so we needed to create a new data structure to represent a signal. Here’s how we defined a volume signal, for example:

data Volume = Volume { volumePosition :: Int
                     , volumeValue :: Int } deriving (Eq, Show)

The volumePosition, like noteStartPosition, defines where in the piece the volume is changed, and the volumeValue defines a volume, between 0 (soft) and 127 (loud), for the track.

We processed the image, left to right, taking the amount of black in each column, converting it to a MIDI signal. The X (horizontal) position in the image defined the volumePosition, and the amount of black in the column defined the volumeValue.
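As a rough illustration of that step (a simplified sketch of mine reusing the Volume type above, not the exact code from the repository):

type Pixel = (Int, Int, Int)   -- (red, green, blue), each component 0-255

-- Average brightness of a pixel, 0 (black) to 255 (white).
brightness :: Pixel -> Int
brightness (r, g, b) = (r + g + b) `div` 3

-- How dark a column is overall, scaled to the MIDI range 0-127.
columnDarkness :: [Pixel] -> Int
columnDarkness column =
  let avg = sum (map brightness column) `div` max 1 (length column)
  in  (255 - avg) * 127 `div` 255

-- One Volume signal per column of the image: the column's x position sets
-- where in the piece the change happens, and the darker the column, the
-- louder the value.
columnsToVolumes :: [[Pixel]] -> [Volume]
columnsToVolumes columns =
  [ Volume { volumePosition = x, volumeValue = columnDarkness column }
  | (x, column) <- zip [0 ..] columns ]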

As usual the full source code is available at: https://bitbucket.org/gregorydavidlong/imagetosoundhaskell/src

I personally found this artwork to be very dynamic and somewhat visually aggressive, so I have reflected this with my choice of drums and noise.  I used the midi input Greg created to control aspects including volume and distortion levels of certain tracks.  

Here's an excerpt of the Motherwell track:

Kelly, Spectrum V

Onto Ellsworth Kelly, an artist whose work I truly love.  There were a few of his paintings on the shortlist of works I was interested in for this album, but I ultimately went with Spectrum V, a fantastic work covering an entire wall. 

Spectrum V

Greg wrote an algorithm that turns colors into chords.  Trace a line, left to right, through the middle of the image. For each pixel on the line create three notes - fitted to a scale - one each for the red, green, and blue components. These three notes essentially form a three-note chord.

This allows us to get a chord for each color in the Spectrum.

[Image: 7-kelly-spectrum-v.png]
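Here's a small Haskell sketch of the colour-to-chord idea (a simplified illustration of mine that snaps notes to C major for the example; the real code in Greg's repository differs in its details):

type Pixel = (Int, Int, Int)   -- (red, green, blue), each component 0-255

-- The notes of the C major scale across the MIDI range 0-127.
cMajor :: [Int]
cMajor = [ n | n <- [0 .. 127], n `mod` 12 `elem` [0, 2, 4, 5, 7, 9, 11] ]

-- Snap a raw MIDI number onto the scale (the nearest scale note at or below it).
fitToScale :: Int -> Int
fitToScale n = last (head cMajor : takeWhile (<= n) cMajor)

-- One three-note chord per pixel: one note each for the red, green, and
-- blue components, halved into the MIDI range and fitted to the scale.
pixelToChord :: Pixel -> [Int]
pixelToChord (r, g, b) = map (fitToScale . (`div` 2)) [r, g, b]

-- Trace the horizontal line through the middle of the image and build a
-- chord for every pixel on it.
middleRowChords :: [[Pixel]] -> [[Int]]
middleRowChords rows = map pixelToChord (rows !! (length rows `div` 2))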

I've then taken the chords that Greg has given me, and created a piece around them, repeating the chord progression 3 times and adding extra textures and voices. 

Here's an excerpt from Kelly, Spectrum V:

Kiefer, Astral Snake

Greg and I saw "Astral Snake" by Anselm Kiefer in the Met Breuer a couple of weekends ago, and it caught our attention. Coming off the back of the Pape track, it seemed like a logical choice to also interpret this picture by looking at the very distinct circular shape and creating a melody based on that.  

In Greg's words, for the Astral Snake we used the same processing algorithm as the Pape Pintura, tracing the "body" of the snake and generating a melody based on it. Each colour forms a different melody; the higher the pixel in the image, the higher the note generated.

Although we used the same processing algorithm as for another image, we get a completely different result.

[Image: 6-Kiefer-trace.png]

This shape translates to the following midi track:

[Image: Screen Shot 2018-03-20 at 10.44.04 PM.png]

I then layered fragments of these midi tracks over one another to evoke a circular concept of time, a reference to the picture itself.  I also used public domain NASA sounds (which I previously used in my EP, "Lunar") to give the piece a 'space' quality, and voila, I have my track!

Here's an excerpt of the final piece: 

Pape, Picture 1953

This track has been in the works forever! It was one of the things I was working on in hospital when I was stuck in there last August, and a couple of nights ago I finally had the breakthrough I needed to get it finished.  

I saw this painting by Lygia Pape last year at a retrospective exhibit for her at the Met Breuer and was immediately taken with it.  

Greg's explanation of what he did is below:

"This algorithm requires some human "pre-processing" before being applied. Looking at the Pape we can see that the image has some implied "lines" or "paths". I traced each of these paths in a different color (see image below).

The algorithm works by looking at each of these paths - it uses the different colors to distinguish each path - and generates a melody based on the height of the path in the image; the higher the path in the image, the higher the note in the melody. This generates a number of overlapping melodies, one per path, that start and end at different points in time. "

[Image: 5-Pintura-traced.png]
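To give a feel for how those overlapping melodies come about, here's a simplified Haskell sketch of the traced-path idea (my illustration only; the names and details aren't from the project code):

import qualified Data.Map.Strict as Map

type Coord = (Int, Int)          -- (x, y), with y = 0 at the top of the image
type Color = (Int, Int, Int)

-- One melody per traced colour: scanning the traced pixels left to right,
-- every pixel of a colour becomes a (startPosition, pitch) pair, where the
-- start position is its x co-ordinate and the pitch rises as the pixel
-- sits higher in the image.
tracedMelodies :: Int -> Map.Map Coord Color -> Map.Map Color [(Int, Int)]
tracedMelodies imageHeight traced =
  Map.fromListWith (flip (++))
    [ (colour, [(x, pitch y)]) | ((x, y), colour) <- Map.toAscList traced ]
  where
    pitch y = 127 * (imageHeight - y) `div` imageHeight

Because each traced line covers a different range of x positions, the melodies naturally start and end at different points in time.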

The midi file Greg gave me looked like a lot of disjointed samples, each representing a line from the painting.  Here's a sample of some (but not all) of the midi file:

[Image: Screen Shot 2018-03-10 at 4.46.03 PM.png]

It took forever to work out how I was going to turn this into a piece of music, and the track went through a lot of iterations.  In the end I chose some "lines" from the painting and assigned sounds to them whilst also writing an over-arching theme.  Here's an audio sample of the finished track:

Hartigan, Untitled

I've been sitting on this track for a long time trying to make it work.  Although I instinctively wanted to write a solo piano piece, I kept trying to make it more complicated than it needed to be.  Having come back to this album with fresh ears after a few months' break, it's funny how easy it is to hear this as a solo piano piece, and to fully commit to that. 

The truth is also that the midi file Greg gave me after analysing and processing the original image was so strikingly beautiful that, in the end, I've done very, very little to it to create this piece.  

Here is Grace Hartigan's untitled painting:

And here is Greg's explanation of how he analysed the image:

This algorithm essentially emulates raindrops falling onto the image. Each “rain drop” falls in a random location, growing outwards as a circle. The growing circle creates a broken chord based on the colors the growing circle touches.

Here are the specifics of the algorithm:

  Expanding Circles Processing Algorithm

  1. Generate a random ordering of all the (x,y) co-ordinates in the image. These co-ordinates form the center points of circles.

  2. For each of these center points, generate increasingly larger concentric circles up to a maximum circle radius of eight pixels. Each circle grows by a radius of one pixel. This gives a group of eight circles per center point, each larger than the previous.

  3. Transform each center point into a broken chord by transforming each circle for that center point into a note, based on the colors the growing circle touches.  This is done by summing the RGB values of each circle circumference, which is something I have discussed in previous blog posts.  Each RGB value is “assigned” a note based on the C major scale.  
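To make those steps a little more concrete, here's a small Haskell sketch of a single raindrop (my simplified illustration: it skips the random ordering of the centre points and the exact note assignment, but shows the growing-circle-to-broken-chord idea):

import qualified Data.Map.Strict as Map

type Coord = (Int, Int)
type Image = Map.Map Coord (Int, Int, Int)   -- each pixel is (red, green, blue)

-- The notes of the C major scale across the MIDI range.
cMajor :: [Int]
cMajor = [ n | n <- [0 .. 127], n `mod` 12 `elem` [0, 2, 4, 5, 7, 9, 11] ]

-- Points lying (roughly) on the circumference of a circle of the given radius.
circumference :: Coord -> Int -> [Coord]
circumference (cx, cy) radius =
  [ (cx + round (r * cos a), cy + round (r * sin a))
  | step <- [0 .. 8 * radius - 1]
  , let r = fromIntegral radius :: Double
  , let a = 2 * pi * fromIntegral step / fromIntegral (8 * radius) :: Double ]

-- One "raindrop": eight growing concentric circles around a centre point.
-- Each circle becomes one note of a broken chord, derived from the summed
-- RGB values its circumference touches, then snapped onto the C major scale.
dropToBrokenChord :: Image -> Coord -> [Int]
dropToBrokenChord img centre =
  [ cMajor !! (total `mod` length cMajor)
  | radius <- [1 .. 8]
  , let total = sum [ r + g + b
                    | p <- circumference centre radius
                    , Just (r, g, b) <- [Map.lookup p img] ] ]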

And back to me now.  Here's the first minute or so of the track:

An update!

Wow, I've completely neglected to update the blog here, and yet sooo much has happened since my last post back in June!

Firstly, I gave birth to my second daughter in September, which was tremendously exciting but also came after a pretty stressful ending to my pregnancy which saw me on hospital bed rest for around 5 weeks, and my daughter being born 5 weeks premature.  Both my daughter and I are doing great now, and happy to put all that behind us.  

Secondly, I was honored and privileged to have collaborated with Valeria Gonzalez and the Valleto dance company to write and produce a dance score for their evening-length work "SOS", which premiered on 11 November at the Agnes Varis Performing Arts Center.  I somehow managed to write 70 minutes of music from my hospital bed, and then when I was back at home looking after a newborn.  I'll be releasing an EP of select pieces from "SOS" shortly, so stay tuned!

So with all that past me, I'm very excited to get back into writing this album!!

x

 

Pollock, Autumn Rhythm (Number 30)

For track three I chose Jackson Pollock's Autumn Rhythm (Number 30). From the beginning of this project I knew I wanted to do a Pollock painting and see how we could distill the distinctive visuals into audio form. 

So here's the tech update from Greg:

"For the Jackson Pollock we used a similar approach as before, generating a chord based on each pixel. We were inspired by the painting to develop a more random algorithm for this piece.

Here are the basics:

1. Read the image into a list of pixels, where each pixel comprises three numbers, a value for red, green, and blue.
2. Shuffle the list of pixels into a random order.
3. For each pixel in the shuffled list, derive notes based on the value of the red component of the pixel, where midiNote = pixel.red / 2. A pixel component value is between 0 and 255, so the value of midiNote is between 0 and 127.
4. For each pixel in the shuffled list, derive notes based on the value of the green component of the pixel.
5. For each pixel in the shuffled list, derive notes based on the value of the blue component of the pixel.
6. For each note in the red, green, and blue note lists, create a chord of three notes [red, green, blue].

Because the algorithm randomly shuffles the pixels in the image, the MIDI generated will be different for each execution of the program.

On a tech note, I've switched programming languages from Kotlin to Haskell - mostly because I am interested in learning more Haskell.

If you want to look at the source code then I've made it publicly available here: https://bitbucket.org/gregorydavidlong/imagetosoundhaskell/src

My plan is to continue iterating on this code base for the remaining pieces on the album."
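If you'd like to see roughly what Greg's steps above look like in code, here's a small Haskell sketch (a simplified illustration of the idea; the names are mine and it isn't the code from Greg's repository):

import System.Random (randomRIO)
import Data.List (sortOn)

type Pixel = (Int, Int, Int)   -- (red, green, blue), each component 0-255

-- Shuffle a list by tagging every element with a random key and sorting on it.
shuffle :: [a] -> IO [a]
shuffle xs = do
  keys <- mapM (const (randomRIO (0, 1 :: Double))) xs
  pure (map snd (sortOn fst (zip keys xs)))

-- Halve a colour component (0-255) to get a MIDI note number (0-127).
componentToNote :: Int -> Int
componentToNote c = c `div` 2

-- One three-note chord [red, green, blue] per pixel, in a random pixel order,
-- so every run of the program produces different MIDI.
shuffledChords :: [Pixel] -> IO [[Int]]
shuffledChords pixels = do
  shuffled <- shuffle pixels
  pure [ map componentToNote [r, g, b] | (r, g, b) <- shuffled ]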

And back to me.  Greg gave me 3 amazing pieces of midi to play around with (one for each of red, blue and green), and in their entirety they're about 9 hours long each (don't panic, the piece I'm working on is only around 4 min!). I wanted my piece of music to reflect what I perceive to be the chaotic nature of Pollock's painting, but also to reflect the beauty of this painting as a whole and the way it feels balanced, organised and beautiful. 

Here is a sample of the midi that Greg gave me (this is the 3 midi 'colours' playing concurrently):

And here is an excerpt of the piece that I've written.  You'll see I've basically left the midi to play out, and I've added some simple minimal piano over the top. 

Rothko, No. 16

Moving on to the second track, we chose the painting "No. 16" by Mark Rothko:

Firstly, over to Greg to explain how he has converted the image to audio:

"For No. 16 by Rothko, we combined a few different approaches.  Because the image is basically four different colours, red, black, brown, and blue, we calculated four chords - one for each colour.

An image is made up of numerous little pieces called pixels - short for picture elements - with each pixel having its own colour. One way of representing a pixel's colour is using three numbers, a value for each of the red, green, and blue components of the colour. This is also called RGB. These three values are combined by the computer into a single colour for display.

For each of the four colours in the image (red, black, brown, and blue), the RGB value for the colour was transformed into a three-note chord (the number next to each letter represents which octave the note belongs to):

Red block: [C4, D3, G1]

Black block: [C4, F3, B3]

Brown block: [A0, G1, E3]

Blue background: [B0, G2, B3]

To determine the length that these chords would be played for we calculated the ratio of each of the block sizes compared to the whole image:

Red: 13%

Black: 30%

Brown: 20%

Background: 37%

So, the chord derived from the red block plays for 13% of the phrase, the chord derived from the black block plays for 30% of the phrase, and so on.

Finally, we used a similar approach as with the previous piece of music, moving top-to-bottom across the image, to derive a melody."
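To show how those chords and ratios could be laid out over a phrase, here's a tiny Haskell sketch (the note names and percentages are Greg's from above; everything else, including the phrase length in ticks, is my own simplification):

type Chord = [String]

-- The four chords Greg derived, paired with how much of the image each
-- colour block occupies.
blocks :: [(Chord, Double)]
blocks =
  [ (["C4", "D3", "G1"], 0.13)   -- red block
  , (["C4", "F3", "B3"], 0.30)   -- black block
  , (["A0", "G1", "E3"], 0.20)   -- brown block
  , (["B0", "G2", "B3"], 0.37)   -- blue background
  ]

-- Spread the chords across a phrase of a given length (in ticks, say):
-- each chord starts where the previous one ended and lasts for its share
-- of the phrase.
chordSchedule :: Int -> [(Chord, Double)] -> [(Int, Int, Chord)]   -- (start, length, chord)
chordSchedule phraseLength = go 0
  where
    go _ [] = []
    go start ((chord, ratio) : rest) =
      let len = round (fromIntegral phraseLength * ratio)
      in (start, len, chord) : go (start + len) rest

For example, over a 1,600-tick phrase this gives the red chord 208 ticks, the black chord 480, the brown chord 320, and the background chord 592.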

Back to me.  Here's an audio representation of the chords (using my voice as the instrument):

Here's an audio representation of the melody: 

The final composition uses the melody and chords as a foundation while expanding on them and adding additional layers.  Here's an excerpt of where I'm at with the Rothko track:

Riley, Blaze 1 (Part 2)

It's been a while!  I've been working on Bridget Riley's "Blaze 1" and I've started on a second piece which I'll write about soon.  

 

In the meantime I thought I'd post a snippet of where my "Blaze 1" track is at.  I used the midi file that Greg created and filled out the sounds with our Dave Smith Mopho and also the built-in Alchemy synthesizer in Logic.  I then played around with transposing and layering the different synthesizer tracks and added bass to underpin and provide some tonal context.  

 

 

Riley, Blaze 1 (Part 1)

We have had an excellent week nutting things out and have made a big step forward with the software, so much so that I've been able to start writing music for the first track of the album, which, as I mentioned last week, will be an interpretation of Bridget Riley's 1962 painting "Blaze 1". 

Ok, over to Greg for a software update:

"Last post I mentioned that I took the color values from the image and converted them to white noise. Since then I've modified my program to generate actual notes by mapping color values to note frequencies. For example, here are notes with associated frequency values:

enum class NoteFrequency(val frequency: Double, val wavelength: Double) {

    C0(16.35, 2109.89),
    Cs0_Db0(17.32, 1991.47),
    D0(18.35, 1879.69),
    Ds0_Eb0(19.45, 1774.20),
    E0(20.60, 1674.62),
    F0(21.83, 1580.63),
    Fs0_Gb0(23.12, 1491.91),
    G0(24.50, 1408.18),
    Gs0_Ab0(25.96, 1329.14),
    A0(27.50, 1254.55),
    // ...
    As8_Bb8(7458.62, 4.63),
    B8(7902.13, 4.37);
}

I can choose notes from this list to form a scale, for example, C major:

enum class CScale(val noteFrequency: NoteFrequency) {
    C0(NoteFrequency.C0),
    D0(NoteFrequency.D0),
    E0(NoteFrequency.E0),
    F0(NoteFrequency.F0),
    G0(NoteFrequency.G0),
    A0(NoteFrequency.A0),
    // ..
    A8(NoteFrequency.A8),
    B8(NoteFrequency.B8);

}

Finally, I can find notes in this scale based on the color values from the image:

    // Walk the (ascending) scale and return the last note whose frequency is
    // below the given frequency, falling back to C0 if there is none.
    fun findNote(frequency : Double, values : List<NoteFrequency>) : NoteFrequency {
        val notes = values.takeWhile({ it.frequency < frequency })
        if (notes.size > 0) {
            return notes.last()
        } else {
            return NoteFrequency.C0
        }
    }

These notes are then converted to midi and sent to Madeleine."

And back to me!  Incidentally the midi track looks like this:

I ran the midi track through our Dave Smith Mopho synthesizer, and this is what I came up with:

I totally love it, and it's given me a ton to work with.  

A New Year

A new year, a new blog!  Having lived in New York City for nearly a year now, one of my New Year's resolutions is to write an album, hopefully completing it before 2017 is done.  

I previously wrote a blog called "Fifty Two Weeks" which documented my time in Seattle and my project where I wrote a piece of music every week (well, it kind of averaged one every week & a half, but semantics right?!), and that blog ended up being the basis of my first album, 'Cascadia'.  I've recently realised how much I have missed writing a blog, but I haven't really had a particular project to write about.... until now!

This blog is going to document the process of writing my new album, tentatively titled 'Metropolitan', inspired by the New York Metropolitan Museum of Art.  

So bring it on 2017!