Pollock, Autumn Rhythm (Number 30)

For track three I chose Jackson Pollock's Autumn Rhythm (Number 30). From the beginning of this project I knew I wanted to do a Pollock painting and see how we could distill the distinctive visuals into audio form. 

So here's the tech update from Greg:

"For the Jackson Pollock we used a similar approach as before, generating a chord based on each pixel. We were inspired by the painting to develop a more random algorithm for this piece.

Here are the basics:

1. Read the image into a list of pixels, where each pixel comprises three numbers, a value for red, green, and blue.
2. Shuffle the list of pixels into a random order.
3. For each pixel in the shuffled list, derive a note from the red component of the pixel, where midiNote = pixel.red / 2. A component value is between 0 and 255, so midiNote falls between 0 and 127.
4. For each pixel in the shuffled list, derive notes based on the value of the green component of the pixel.
5. For each pixel in the shuffled list, derive notes based on the value of the blue component of the pixel.
6. For each note in the red, green, and blue note lists, create a chord of three notes [red, green, blue].

Because the algorithm randomly shuffles the pixels in the image, the MIDI generated will be different for each execution of the program.
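For the curious, here's a rough sketch of those steps in Python (Greg's actual code is Haskell, and the pixel values below are made up, just to show the shape of the output):

```python
import random

def pixels_to_chords(pixels, seed=None):
    """Shuffle the pixels, then map each RGB component to a MIDI note.

    A component value is 0-255; halving it gives a MIDI note in 0-127,
    so each pixel yields a three-note chord [red, green, blue].
    """
    rng = random.Random(seed)
    shuffled = pixels[:]
    rng.shuffle(shuffled)  # a new order (and new MIDI) on each run
    return [[r // 2, g // 2, b // 2] for (r, g, b) in shuffled]

# Hypothetical pixel data, just for illustration.
pixels = [(255, 128, 0), (10, 200, 90)]
chords = pixels_to_chords(pixels, seed=1)
```

Fixing the seed makes a run repeatable; leaving it out gives the fresh shuffle (and fresh MIDI) described above.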

On a tech note, I've switched programming languages from Kotlin to Haskell - mostly because I am interested in learning more Haskell.

If you want to look at the source code then I've made it publicly available here: https://bitbucket.org/gregorydavidlong/imagetosoundhaskell/src

My plan is to continue iterating on this code base for the remaining pieces on the album."

And back to me.  Greg gave me 3 amazing pieces of midi to play around with (one each for red, green and blue), and in their entirety they're each about 9 hours long (don't panic, the piece I'm working on is only around 4 min!). I wanted my piece of music to reflect what I perceive to be the chaotic nature of Pollock's painting, but also the way the painting as a whole feels balanced, organised and beautiful. 

Here is a sample of the midi that Greg gave me (this is the 3 midi 'colours' playing concurrently):

And here is an excerpt of the piece that I've written.  You'll see I've basically left the midi to play out, and added some simple, minimal piano over the top. 

Rothko, No. 16

Moving on to the second track, we chose the painting "No. 16" by Mark Rothko:

Firstly, over to Greg to explain how he has converted the image to audio:

"For No. 16 by Rothko, we combined a few different approaches.  Because the image is basically four different colours, red, black, brown, and blue, we calculated four chords - one for each colour.

An image is made up of numerous little pieces called pixels - short for picture elements - with each pixel having its own colour. One way of representing a pixel’s colour is with three numbers: a value for each of the red, green, and blue components of the colour. This is also called RGB. These three values are combined by the computer into a single colour for display.

For each of the four colours in the image (red, black, brown, and blue), the RGB value for the colour was transformed into a three-note chord (the number next to each letter represents which octave the note belongs to):

Red block: [C4, D3, G1]

Black block: [C4, F3, B3]

Brown block: [A0, G1, E3]

Blue background: [B0, G2, B3]

To determine how long each of these chords would be played for, we calculated the ratio of each block's size to the whole image:

Red: 13%

Black: 30%

Brown: 20%

Background: 37%

So, the chord derived from the red block plays for 13% of the phrase, the chord derived from the black block plays for 30% of the phrase, and so on.
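As a rough sketch of that timing (in Python, with a hypothetical 60-second phrase length just for illustration), the area ratios translate into chord durations like this:

```python
# The four chords and area ratios described above; the 60-second
# phrase length is an assumption, not from the actual track.
chords = {
    "red":        (["C4", "D3", "G1"], 0.13),
    "black":      (["C4", "F3", "B3"], 0.30),
    "brown":      (["A0", "G1", "E3"], 0.20),
    "background": (["B0", "G2", "B3"], 0.37),
}

PHRASE_SECONDS = 60.0

def chord_durations(chords, phrase_seconds):
    """Each chord sounds for its block's share of the whole phrase."""
    return {name: ratio * phrase_seconds for name, (_, ratio) in chords.items()}

durations = chord_durations(chords, PHRASE_SECONDS)
# e.g. the red chord plays for 0.13 * 60 = 7.8 seconds of the phrase
```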

Finally, we used a similar approach as with the previous piece of music, moving top-to-bottom across the image, to derive a melody."

Back to me.  Here's an audio representation of the chords (using my voice as the instrument):

Here's an audio representation of the melody: 

The final composition uses the melody and chords as a foundation while expanding on them and adding additional layers.  Here's an excerpt of where I'm at with the Rothko track:

Riley, Blaze 1

It's been a while!  I've been working on Bridget Riley's "Blaze 1" and I've started on a second piece which I'll write about soon.  


In the meantime I thought I'd post a snippet of where my "Blaze 1" track is at.  I used the midi file that Greg created and filled out the sounds with our Dave Smith Mopho and also the Alchemy built-in synthesizers in Logic.  I then played around with transposing and layering the different synthesizer tracks and added bass to underpin and provide some tonal context.  

We have had an excellent week nutting things out and have made a big step forward with the software - so much so that I've been able to start writing music for the first track of the album, which, as I mentioned last week, will be an interpretation of Bridget Riley's 1962 painting "Blaze 1". 

Ok, over to Greg for a software update:

"Last post I mentioned that I took the color values from the image and converted them to white noise. Since then I've modified my program to generate actual notes by mapping color values to note frequencies. For example, here are notes with associated frequency values:

enum class NoteFrequency(val frequency: Double, val wavelength: Double) {
    C0(16.35, 2109.89),
    Cs0_Db0(17.32, 1991.47),
    D0(18.35, 1879.69),
    Ds0_Eb0(19.45, 1774.20),
    E0(20.60, 1674.62),
    F0(21.83, 1580.63),
    Fs0_Gb0(23.12, 1491.91),
    G0(24.50, 1408.18),
    Gs0_Ab0(25.96, 1329.14),
    A0(27.50, 1254.55),
    // ...
    As8_Bb8(7458.62, 4.63),
    B8(7902.13, 4.37);
}

I can choose notes from this list to form a scale, for example, C major:

enum class CScale(val noteFrequency: NoteFrequency) {
    C0(NoteFrequency.C0),
    D0(NoteFrequency.D0),
    // ...
}

Finally, I can find notes in this scale based on the color values from the image:

    fun findNote(frequency: Double, values: List<NoteFrequency>): NoteFrequency {
        val notes = values.takeWhile { it.frequency < frequency }
        return if (notes.isNotEmpty()) notes.last() else NoteFrequency.C0
    }

These notes are then converted to midi and sent to Madeleine."
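To make that lookup concrete, here's a rough equivalent in Python - the frequencies come from the standard equal-temperament table, and the short scale here is just an illustrative assumption, not the full C-major scale Greg uses:

```python
# A few equal-temperament note frequencies in Hz, as in Greg's enum.
C0, D0, E0, F0, G0 = 16.35, 18.35, 20.60, 21.83, 24.50

def find_note(frequency, scale):
    """Return the highest scale note whose frequency sits below the input.

    Mirrors the Kotlin findNote: the scale is assumed sorted ascending,
    and if no note is below the target we fall back to the lowest note.
    """
    below = [n for n in scale if n < frequency]
    return below[-1] if below else scale[0]

scale = [C0, D0, E0, F0, G0]
```

So a colour value mapped to, say, 19 Hz would snap down to D0 (18.35 Hz), and anything below the bottom of the scale lands on C0.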

And back to me!  Incidentally the midi track looks like this:

I ran the midi track through our Dave Smith Mopho synthesizer, and this is what I came up with:

I totally love it, and it's given me a ton to work with.  

Software Version 0.01

It's been a while since my last blog post, but we've been working behind the scenes here!  A lot of this album is going to be about getting the software right, and having a strong audio interpretation of the images to work from.

I thought that I'd hand this blog post over to Greg to explain a bit more about the software, and the preliminary stages of where it's at.  Here he's talking about an audio conversion of Bridget Riley's 1962 painting "Blaze 1".  

Over to you, Greg: 

"I've started looking at how I can generate sounds from the pictures that Madeleine has taken. I don't have any previous experience in image processing so I'm starting from scratch.

When I'm presented with a problem like this I often try to do the dumbest thing that I think will work, and then build from there.

The first image Madeleine has given me is quite monochromatic, and is essentially a spiral (sorry Ms Riley).  Therefore I thought I could "traverse" the image in various directions, looking at the colors of pixels, and generate sounds based on those colors.

With this image I can traverse left-to-right, and generate a graph of the color value at each position:

Then I could say, when the value is large, play some white noise, and when the value is small, play nothing:

It's simple, but it's a thread to pull on."
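For the curious, that threshold idea fits in a few lines of Python (the pixel values and the threshold of 128 here are made up, just to show the shape of the approach):

```python
def row_to_noise_gate(values, threshold=128):
    """Scan a row of pixel values left to right: where the value is
    large, play white noise; where it is small, play nothing."""
    return ["noise" if v >= threshold else "rest" for v in values]

# A hypothetical scan across dark and light bands, like the spiral's stripes.
row = [10, 240, 12, 250, 8]
events = row_to_noise_gate(row)
# alternates rest/noise as the scan crosses each band
```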


Last week I mentioned that my album will be inspired by the Metropolitan Museum of Art, and this week I thought I'd write some words explaining what I mean by that. 

I feel like I write my best music when I have a strong sense of purpose and intention behind what I'm doing.  I really need a focal point and something that I want to get across through my music.  New York City is such an incredible city, with so much stimulus that it's overwhelming, and for a while I knew I wanted to write an album somehow connected to the city, but I wasn't sure how to anchor it.  It was probably on my third or fourth trip to the MET that I realised how much I loved it, how much it meant to me to be there, and that is when I decided to write my album about that museum.  

So how will I do it?  I've decided to photograph about a dozen or so artworks on display at the MET.  My husband Greg will be writing software that is going to convert those images into sounds, and I'm going to use those sounds as the basis for different pieces.  So each piece is going to have a direct relationship with a specific artwork.  

Greg has started writing the program, and I'm deciding on what images I want to start with, so it's all starting to happen!

The MET (photo credit: Greg)

A New Year

A new year, a new blog!  Having lived in New York City for nearly a year now, one of my New Year's resolutions is to write an album, hopefully completing it before 2017 is done.  

I previously wrote a blog called "Fifty Two Weeks" which documented my time in Seattle and my project where I wrote a piece of music every week (well, it kind of averaged one every week & a half, but semantics right?!), and that blog ended up being the basis of my first album, 'Cascadia'.  I've recently realised how much I have missed writing a blog, but I haven't really had a particular project to write about... until now!

This blog is going to document the process of writing my new album, tentatively titled 'Metropolitan', inspired by the New York Metropolitan Museum of Art.  And while I'm here, I'll also be blogging about my life in New York!

So bring it on 2017!