Hartigan, Untitled

I've been sitting on this track for a long time trying to make it work.  Although I instinctively wanted to write a solo piano piece, I kept trying to make it more complicated than it needed to be.  Having come back to this album after a few months' break with fresh ears, it's funny how easy it is to hear this as a solo piano piece, and to fully commit to that. 

The truth is also that the midi file Greg gave me after analysing and processing the original image was so strikingly beautiful that, in the end, I've done very, very little to it to create this piece.  

Here is Grace Hartigan's untitled painting:

And here is Greg's explanation of how he analysed the image:

This algorithm essentially emulates raindrops falling onto the image. Each “raindrop” falls in a random location, growing outwards as a circle. The growing circle creates a broken chord based on the colors it touches.

Here are the specifics of the algorithm (with a code sketch after the list):

  Expanding Circles Processing Algorithm

  1. Generate a random ordering of all the (x,y) co-ordinates in the image. These co-ordinates form the center points of circles.

  2. For each of these center points, generate increasingly larger concentric circles up to a maximum radius of eight pixels, growing the radius by one pixel at a time. This gives a group of eight circles per center point, each larger than the previous.

  3. Transform each center point into a broken chord by transforming each circle for that center point into a note, based on the colors the growing circle touches.  This is done by summing the RGB values along each circle's circumference, which is something I have discussed in previous blog posts.  Each summed RGB value is “assigned” a note from the C major scale.  
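To make that concrete, here's a minimal Kotlin sketch of the procedure (the note mapping and the circumference sampling here are illustrative assumptions, not my exact code):

import java.awt.image.BufferedImage
import kotlin.math.cos
import kotlin.math.sin

// Notes of a C major octave to "assign" colors to (the register is an assumption).
val C_MAJOR_MIDI = listOf(60, 62, 64, 65, 67, 69, 71)

fun circleNotes(image: BufferedImage, cx: Int, cy: Int, maxRadius: Int = 8): List<Int> =
    (1..maxRadius).map { r ->
        // Sum the RGB components of the pixels around this circle's circumference.
        var sum = 0L
        var angle = 0.0
        while (angle < 2 * Math.PI) {
            val x = (cx + r * cos(angle)).toInt()
            val y = (cy + r * sin(angle)).toInt()
            if (x in 0 until image.width && y in 0 until image.height) {
                val rgb = image.getRGB(x, y)
                sum += ((rgb shr 16) and 0xFF) + ((rgb shr 8) and 0xFF) + (rgb and 0xFF)
            }
            angle += 0.1
        }
        // Fold the summed color onto the C major scale to pick a note.
        C_MAJOR_MIDI[(sum % C_MAJOR_MIDI.size).toInt()]
    }

Each center point then yields eight notes, one per circle, which play one after another as the broken chord.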

And back to me now.  Here's the first minute or so of the track:

An update!

Wow, I've completely neglected to update the blog here, and yet sooo much has happened since my last post back in June!

Firstly, I gave birth to my second daughter in September, which was tremendously exciting but also came after a pretty stressful ending to my pregnancy, which saw me on hospital bed rest for around 5 weeks and my daughter born 5 weeks premature.  Both my daughter and I are doing great now, and we're happy to put all that behind us.  

Secondly, I was honored and privileged to collaborate with Valeria Gonzalez and the Valleto dance company to write and produce a dance score for their evening-length work "SOS", which premiered on 11 November at the Agnes Varis Performing Arts Center.  I somehow managed to write 70 minutes of music, first from my hospital bed and then at home while looking after a newborn.  I'll be releasing an EP of select pieces from "SOS" shortly, so stay tuned!

So with all that past me, I'm very excited to get back into writing this album!!

x

Pollock, Autumn Rhythm (Number 30)

For track three I chose Jackson Pollock's Autumn Rhythm (Number 30). From the beginning of this project I knew I wanted to do a Pollock painting and see how we could distill the distinctive visuals into audio form. 

So here's the tech update from Greg:

"For the Jackson Pollock we used a similar approach as before, generating a chord based on each pixel. We were inspired by the painting to develop a more random algorithm for this piece.

Here are the basics (with a sketch in code after them):

1. Read the image into a list of pixels, where each pixel comprises three numbers, a value for red, green, and blue.
2. Shuffle the list of pixels into a random order.
3. For each pixel in the shuffled list, derive a note from the value of the red component of the pixel, where midiNote = pixel.red / 2. A pixel component ranges from 0 to 255, so midiNote lands in the valid MIDI range of 0 to 127.
4. For each pixel in the shuffled list, derive notes based on the value of the green component of the pixel.
5. For each pixel in the shuffled list, derive notes based on the value of the blue component of the pixel.
6. For each note in the red, green, and blue note lists, create a chord of three notes [red, green, blue].

Because the algorithm randomly shuffles the pixels in the image, the MIDI generated will be different for each execution of the program.
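To make those steps concrete, here's a minimal sketch in Kotlin, the language from my earlier posts (illustrative only - more on languages below):

data class Pixel(val red: Int, val green: Int, val blue: Int) // each component 0..255

// Steps 2-6: shuffle the pixels, then turn each one into a three-note
// chord by halving each color component into the 0..127 MIDI range.
fun pollockChords(pixels: List<Pixel>): List<List<Int>> =
    pixels.shuffled().map { p ->
        listOf(p.red / 2, p.green / 2, p.blue / 2) // [red, green, blue]
    }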

On a tech note, I've switched programming languages from Kotlin to Haskell - mostly because I am interested in learning more Haskell.

If you want to look at the source code then I've made it publicly available here: https://bitbucket.org/gregorydavidlong/imagetosoundhaskell/src

My plan is to continue iterating on this code base for the remaining pieces on the album."

And back to me.  Greg gave me 3 amazing pieces of midi to play around with (one each for red, green and blue), and in their entirety they're about 9 hours long each (don't panic, the piece I'm working on is only around 4 min!). I wanted my piece of music to reflect what I perceive to be the chaotic nature of Pollock's painting, but also to reflect the beauty of the painting as a whole and the way it feels balanced and organised. 

Here is a sample of the midi that Greg gave me (this is the 3 midi 'colours' playing concurrently):

And here is an excerpt of the piece that I've written.  You'll hear that I've basically left the midi to play out, and added some simple, minimal piano over the top. 

Rothko, No. 16

Moving on to the second track, we chose the painting "No. 16" by Mark Rothko:

Firstly, over to Greg to explain how he has converted the image to audio:

"For No. 16 by Rothko, we combined a few different approaches.  Because the image is basically four different colours, red, black, brown, and blue, we calculated four chords - one for each colour.

An image is made up of numerous little pieces called pixels - short for picture elements - with each pixel having its own colour. One way of representing a pixel’s colour is with three numbers: a value for the red, green, and blue components of the colour. This is also called RGB. These three values are combined by the computer into a single colour for display.

For each of the four colours in the image (red, black, brown, and blue), the RGB value for the colour was transformed into a three-note chord (the number next to each letter represents which octave the note belongs to):

Red block: [C4, D3, G1]

Black block: [C4, F3, B3]

Brown block: [A0, G1, E3]

Blue background: [B0, G2, B3]

To determine the length that these chords would be played for, we calculated the ratio of each of the block sizes compared to the whole image:

Red: 13%

Black: 30%

Brown: 20%

Background: 37%

So, the chord derived from the red block plays for 13% of the phrase, the chord derived from the black block plays for 30% of the phrase, and so on.
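To illustrate, here's how those ratios could translate into chord lengths (a Kotlin sketch; the 16-beat phrase length is an assumption for the example only):

// Chords and block ratios from the analysis above.
val blocks = listOf(
    Triple("red",        listOf("C4", "D3", "G1"), 0.13),
    Triple("black",      listOf("C4", "F3", "B3"), 0.30),
    Triple("brown",      listOf("A0", "G1", "E3"), 0.20),
    Triple("background", listOf("B0", "G2", "B3"), 0.37)
)

fun chordDurations(phraseBeats: Double = 16.0): List<Pair<List<String>, Double>> =
    blocks.map { (_, chord, ratio) ->
        // e.g. the red chord holds for 13% of a 16-beat phrase, about 2.1 beats
        chord to ratio * phraseBeats
    }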

Finally, we used a similar approach as with the previous piece of music, moving top-to-bottom across the image, to derive a melody."

Back to me.  Here's an audio representation of the chords (using my voice as the instrument):

Here's an audio representation of the melody: 

The final composition uses the melody and chords as a foundation while expanding on them and adding additional layers.  Here's an excerpt of where I'm at with the Rothko track:

Riley, Blaze 1

It's been a while!  I've been working on Bridget Riley's "Blaze 1" and I've started on a second piece which I'll write about soon.  

In the meantime I thought I'd post a snippet of where my "Blaze 1" track is at.  I used the midi file that Greg created and filled out the sounds with our Dave Smith Mopho and also Logic's built-in Alchemy synthesizer.  I then played around with transposing and layering the different synthesizer tracks, and added bass to underpin everything and provide some tonal context.  

Midi

We have had an excellent week nutting things out and have made a big step forward with the software - so much so that I've been able to start writing music for the first track of the album, which, as I mentioned last week, will be an interpretation of Bridget Riley's 1962 painting "Blaze 1". 

Ok, over to Greg for a software update:

"Last post I mentioned that I took the color values from the image and converted them to white noise. Since then I've modified my program to generate actual notes by mapping color values to note frequencies. For example, here are notes with associated frequency values:

enum class NoteFrequency(val frequency: Double, val wavelength: Double) {
    C0(16.35, 2109.89),
    Cs0_Db0(17.32, 1991.47),
    D0(18.35, 1879.69),
    Ds0_Eb0(19.45, 1774.20),
    E0(20.60, 1674.62),
    F0(21.83, 1580.63),
    Fs0_Gb0(23.12, 1491.91),
    G0(24.50, 1408.18),
    Gs0_Ab0(25.96, 1329.14),
    A0(27.50, 1254.55),
    // ...
    As8_Bb8(7458.62, 4.63),
    B8(7902.13, 4.37);
}

I can choose notes from this list to form a scale, for example, C major:

enum class CScale(val noteFrequency: NoteFrequency) {
    C0(NoteFrequency.C0),
    D0(NoteFrequency.D0),
    E0(NoteFrequency.E0),
    F0(NoteFrequency.F0),
    G0(NoteFrequency.G0),
    A0(NoteFrequency.A0),
    // ...
    A8(NoteFrequency.A8),
    B8(NoteFrequency.B8);
}

Finally, I can find notes in this scale based on the color values from the image:

    fun findNote(frequency: Double, values: List<NoteFrequency>): NoteFrequency {
        // Keep every note pitched below the target frequency, then take the
        // highest of those; fall back to C0 if the target is below the whole list.
        val notes = values.takeWhile { it.frequency < frequency }
        return notes.lastOrNull() ?: NoteFrequency.C0
    }
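For example, with a made-up input frequency (an illustrative call, not something from the actual pipeline):

    val cMajor = CScale.values().map { it.noteFrequency }
    val note = findNote(20.0, cMajor)
    // note == NoteFrequency.D0 - the highest C major note below 20 Hz,
    // since E0 (20.60 Hz) is the first note to exceed the input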

These notes are then converted to midi and sent to Madeleine."

And back to me!  Incidentally, the midi track looks like this:

I ran the midi track through our Dave Smith Mopho synthesizer, and this is what I came up with:

I totally love it, and it's given me a ton to work with.  

Software Version 0.01

It's been a while since my last blog post, but we've been working behind the scenes here!  A lot of this album is going to be about getting the software right, and having a strong audio interpretation of the images to work from.

I thought that I'd hand this blog post over to Greg to explain a bit more about the software, and the preliminary stages of where it's at.  Here he's talking about an audio conversion of Bridget Riley's 1962 painting "Blaze 1".  

Over to you, Greg: 

"I've started looking at how I can generate sounds from the pictures that Madeleine has taken. I don't have any previous experience in image processing so I'm starting from scratch.

When I'm presented with a problem like this I often try to do the dumbest thing that I think will work, and then build from there.

The first image Madeleine has given me is quite monochromatic, and is essentially a spiral (sorry Ms Riley).  Therefore I thought I could "traverse" the image in various directions, looking at the colors of pixels, and generate sounds based on those colors.

With this image I can traverse left-to-right, and generate a graph of the color value at each position:

Then I could say, when the value is large, play some white noise, and when the value is small, play nothing:
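In code, that thresholding idea might look something like this rough Kotlin sketch (the brightness threshold is arbitrary):

import java.awt.image.BufferedImage
import kotlin.random.Random

fun rowToSamples(image: BufferedImage, y: Int, threshold: Int = 128): List<Double> =
    (0 until image.width).map { x ->
        val rgb = image.getRGB(x, y)
        // Average the red, green, and blue channels for a rough brightness (0..255).
        val value = (((rgb shr 16) and 0xFF) + ((rgb shr 8) and 0xFF) + (rgb and 0xFF)) / 3
        // Bright pixel -> a white-noise sample; dark pixel -> silence.
        if (value > threshold) Random.nextDouble(-1.0, 1.0) else 0.0
    }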

It's simple, but it's a thread to pull on."

The MET

Last week I mentioned that my album will be inspired by the Metropolitan Museum of Art, and this week I thought I'd try to write some words explaining what I mean by that. 

I feel like I write my best music when I have a strong sense of purpose and intention behind what I'm doing.  I really need a focal point and something that I want to get across through my music.  New York City is such an incredible city, with so much stimulus that it's overwhelming, and for a while I knew I wanted to write an album somehow connected to the city, but I wasn't sure how to anchor it.  It was probably on my third or fourth trip to the MET that I realised how much I loved it and how much it meant to me to be there, and that's when I decided to write my album about the museum.  

So how will I do it?  I've decided to photograph about a dozen or so artworks on display at the MET.  My husband Greg will be writing software that is going to convert those images into sounds, and I'm going to use those sounds as the basis for different pieces.  So each piece is going to have a direct relationship with a specific artwork.  

Greg has started writing the program, and I'm deciding on what images I want to start with, so it's all starting to happen!

The MET (photo credit: Greg)

A New Year

A new year, a new blog!  I've been living in New York City for nearly a year now, and one of my New Year's resolutions is to write an album, hopefully completing it before 2017 is done.  

I previously wrote a blog called "Fifty Two Weeks" which documented my time in Seattle and my project of writing a piece of music every week (well, it kind of averaged one every week and a half, but semantics, right?!), and that blog ended up being the basis of my first album, 'Cascadia'.  I've recently realised how much I've missed writing a blog, but I haven't really had a particular project to write about... until now!

This blog is going to document the process of writing my new album, tentatively titled 'Metropolitan', inspired by the New York Metropolitan Museum of Art.  And while I'm here, I'll also be blogging about my life in New York!

So bring it on 2017!