Emily Howell

Understanding creativity or composing by proxy?

It's interesting that in the space of a couple of weeks I've seen things that make us question what makes us human, as if the very nature of our being were being challenged. Last week it was Synthia, the world's first synthetic organism, courtesy of Craig Venter...this week I discovered Emily Howell.

Click the image above and have a listen to a composition by Emily Howell. Emily Howell, unveiled in the last few months, has actually been around longer than Synthia: she is a computer program, created by UC Santa Cruz Emeritus Professor David Cope, that creates original, modern music which some people find indistinguishable from the work of a human composer. David Cope has spent 30 years creating computer algorithms that 'compose': first with 'Emmy' (Experiments in Musical Intelligence, or EMI) from 1980, and now with Emily Howell. Through decades of studying Bach chorales and other classical composers, he has essentially coded the syntax that forms the 'language' foundation of certain western classical music styles, which follow strict harmonic rules. In effect he has taught the program to speak a language fluently by piecing together known phrases that have a certain juxtaposing logic, so that the sequence is understandable as a thread (a toy sketch of this phrase-chaining idea follows the two questions below). While it sounds as though it is 'played', as if it were 'interpreted' sound rather than 'generated' sound, the key questions it raises are:

1. Whether it is generating that interpretation itself, or whether it comes from the human who programmed it.

2. If the net effect of a composition on the listener is the same emotional response, regardless of whether it has been generated by a human or a machine, is there any difference, and thus, does it matter?
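As a rough illustration of that phrase-chaining idea (and emphatically not Cope's actual code, which works on analysed musical material rather than toy labels), here is a minimal sketch: a hand-written table of which phrases are acceptable after which, and a loop that strings them together.

```python
import random

# Toy illustration only: each 'phrase' is just a label, and the table encodes
# the 'juxtaposing logic', i.e. which phrases are acceptable after which.
FOLLOWERS = {
    "opening": ["rising_motif", "falling_motif"],
    "rising_motif": ["falling_motif", "sequence"],
    "falling_motif": ["rising_motif", "cadence"],
    "sequence": ["cadence", "rising_motif"],
    "cadence": ["opening", "coda"],
    "coda": [],  # nothing is allowed to follow the coda
}

def assemble(start="opening", max_len=12):
    """Chain phrases together by repeatedly picking an allowed successor."""
    piece = [start]
    while len(piece) < max_len:
        options = FOLLOWERS[piece[-1]]
        if not options:          # we have reached a terminal phrase
            break
        piece.append(random.choice(options))
    return piece

print(" -> ".join(assemble()))
```

Swap the labels for analysed fragments of real scores and you have, in caricature, the recombination approach described above: fluent, and entirely dependent on the table it was given.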

I'd argue that it is essentially second-hand emotion: rules on harmonic progressions and timing, transcribed through careful study, which is not the same as generating that interpretation and direction autonomously. It's a facsimile of human interpretation, programmed to mimic the way we interpret 'feeling'. But like any computer program, it's only as good as the instructions it has been given.

It is certainly a step up from the notion that if you plonk a monkey down in front of a typewriter for long enough, it will eventually type out the complete works of Shakespeare, more through coincidence than design. Emily just fast-tracks this with a set of rules that tell her which bits are acceptable after which, a little like the 18th-century dice games that Haydn and Mozart were purportedly fans of. But how small do those fragments or 'words' need to be in order for the output to be considered 'creative'? The dice games use phrases, not individual notes. The smallest unit in language is the letter, and in music, the note. If it is making the compositions from the interactions of individual notes relative to each other, rather than from set phrases, then that is impressive programming. Essentially, it's the difference between a DJ putting together a track from samples and someone who plays in each note from scratch on a real instrument. But is one necessarily more or less 'creative' than the other? Just because one uses larger chunks than the other doesn't mean it is any less creative. The advantage of playing each note individually, as a musician does on an instrument, is control over microtiming, which I refer to in this post about an improvised composition I did: http://www.kevinpollard.com/blog/?p=355.

When I improvise I am definitely drawing on my grasp of melody, harmonics, timing and space; it gives me the ability to play exactly what I hear in my head. I've always been fascinated by looking at the MIDI version of what I just played into my keyboard in my studio. The track is all MIDI, and it took me an hour and a half or so using my 88-note keyboard to put in the four twenty-minute passes for the three instruments. (Interestingly, to program it methodically to sound exactly like this, inputting it note by note with the pencil tool, would take weeks.) It is now all represented by dots on a grid in my laptop. Playing it back, it sounds 'emotive' and generates an emotional response in the listener, even though it is now coming out of a box. But that is because it is a direct recording of what I played. Now break the sequences down into small phrases and get the computer to reassemble them in different ways. It would still sound emotive, but would lack direction, which matters less here because it is improvised and only has a loose direction anyway. Now break it down to single notes and give the computer the rules that govern harmonic scales and sympathetic progressions, intervals, tempo and microtiming. Et voilà: computer emotion. But it's essentially still derivative of my original piece.
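To make that last step concrete, here is a minimal sketch of what 'note-level rules plus microtiming' might look like. It is not Emily Howell's method; the scale, the allowed intervals and the amount of timing jitter are placeholder assumptions of my own, just to show how a melody can be assembled note by note from rules and then 'humanised' with small timing offsets.

```python
import random

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]   # one octave of C major, as MIDI notes
ALLOWED_STEPS = [-2, -1, -1, 1, 1, 2, 3]      # prefer small scale steps, allow the odd leap

def note_level_melody(length=16, tempo_bpm=90, jitter_ms=15):
    """Build a melody note by note from interval rules, then loosen the timing."""
    beat = 60.0 / tempo_bpm
    idx, notes = 3, []                          # start somewhere in the middle of the scale
    for i in range(length):
        idx = max(0, min(len(C_MAJOR) - 1, idx + random.choice(ALLOWED_STEPS)))
        grid_onset = i * beat                   # the quantised 'dots on a grid'
        onset = grid_onset + random.uniform(-jitter_ms, jitter_ms) / 1000.0
        notes.append((C_MAJOR[idx], round(onset, 3), round(beat * 0.9, 3)))
    return notes                                # (pitch, onset in s, duration in s)

for pitch, onset, duration in note_level_melody():
    print(pitch, onset, duration)
```

Run it and you get something that obeys the grammar; but every rule in it was chosen by me in advance, which is rather the point.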
But so what? Is there any difference? After all, we as humans are programmed to respond (behave) to certain situations in certain ways; aren't we just following instructions coded in our genes and replaying fragments from our memory? At what point does assisted creativity become genuine creativity? When does an idea become original? Cope puts it thus: “Nobody's original. We are what we eat, and in music, we are what we hear. What we do is look through history and listen to music. Everybody copies from everybody. The skill is in how large a fragment you choose to copy and how elegantly you can put them together.”

[Image: Emily Howell's musical 'conversation', made up of "words" (white nodes) and the connections between them. Photo: Catherine Karnow]

But if that were the case then there would be no innovation, and there are plenty of examples of innovation around us in day-to-day objects, though not necessarily in music...there hasn't really been much widespread original music since the 1980s, but that's largely down to the decline of consensus brought about by the ubiquitous availability of vast amounts of information via the internet. Music is not inherently absolute...one person's Mozart is another person's Rage Against The Machine...but the fact that so many people agree that Mozart's music should be held in the esteem it is suggests there must be a universal set of mathematical laws governing which frequencies work with each other, at least according to our western tastes, which means it can be categorized and replicated.

I would wager that innovation is incremental. Even major breakthroughs build on the logic of the existing and tested. It's often a mistake that someone makes that leads to a new avenue, which might be the key. If you program a computer to make a mistake and program it how to react to that mistake, is that the basis for an innovative outcome? A creative direction? (I've sketched a toy version of this idea a little further down.) Emily Howell does this. She even knows where she is in a piece she hasn't written yet and when to trigger the coda algorithms. So does that make her notionally self-aware?

The one thing I would say is missing is why. Humans can now program a computer to know what a Mozart chorale sounds like and how to make one, or to combine the styles of Mozart and Scott Joplin, but the computer doesn't know why it's doing it. Only David Cope knows. And it's that understanding of 'why' that allows humans to make value judgements about which mistakes are worth pursuing and which ones go in the bin. Humans have the advantage of understanding context and a bigger picture, which inform their decisions. Once Emily can do that, she will be truly creative. Until then she's more a proxy for David's compositions.

The thing about music is that it is ruled by emotion, not just logic, so it's harder to predict where it's going to go. It's also why you don't necessarily need degrees and a formal education to succeed in music. I'd have thought that Mozart / Beethoven / The Beatles / Elvis / Michael Jackson didn't know why they were making a new type of music; it just felt right to them, and that was their 'why'.

It would be interesting to find out how many attempts were needed before each of these pieces reached their final form. Was it like cloning, where 99% of attempts are abortive and 1% make it? Or did it come out finished first time? How much editing was needed? Even my improvisations needed a couple of minutes' editing to get rid of the odd duff note where I missed the one I wanted. How many attempts should there be before the piece is considered 'coincidence' rather than 'composed'? Where is that boundary? Needless to say, the reaction to Emily Howell has been quite heated.
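Picking up the 'programmed mistake' idea from above, here is a toy sketch of how it might work: in-scale notes are occasionally nudged off the scale, and a crude rule of my own invention then decides which of those wrong notes are worth keeping and which go in the bin. It is purely illustrative, not anything Cope has described.

```python
import random

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]
SCALE = set(C_MAJOR)

def melody_with_mistakes(length=16, mistake_chance=0.2):
    """Generate an in-scale line, occasionally nudge a note by a semitone,
    then let a simple rule decide which off-scale 'mistakes' survive."""
    line = [random.choice(C_MAJOR) for _ in range(length)]
    kept, binned = [], 0
    for i, safe_pitch in enumerate(line):
        pitch = safe_pitch
        if random.random() < mistake_chance:
            pitch += random.choice([-1, 1])        # the programmed mistake
        if pitch not in SCALE:
            nxt = line[i + 1] if i + 1 < length else line[0]
            if abs(nxt - pitch) == 1:              # toy value judgement: keep the wrong
                kept.append(pitch)                 # note only if it leans by a semitone
            else:                                  # onto the next scale note...
                binned += 1                        # ...otherwise bin it and fall back
                kept.append(safe_pitch)
        else:
            kept.append(pitch)
    return kept, binned

melody, rejected = melody_with_mistakes()
print(melody, f"({rejected} mistakes binned)")
```

The rule deciding which wrong notes survive is doing, in miniature, the value judgement described above; the difference, of course, is that I wrote the rule, and I know why.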
I'd be interested to do a session with an on-the-fly version of Emily or her successor, where phrases are played in a call-and-response manner in real time...improvised...to see what happens: how I would respond to the musical directions generated by the computer, and how it would react to my response. To accomplish that it would need to interpret what I was playing, work out which harmonic direction and tempo it belonged to, and respond by including some of the elements but not others, deciding which are the relevant points and perhaps adding its own direction whilst adhering to the pulse, dynamics and the unfolding structure of the shared piece, which is essentially what I do when I improvise. Maybe I'll try to set it up... You can hear another sample of Emily Howell's work here. A longer article about David Cope and Emily Howell is here on Miller-McCune, which includes details of how he programs the machine to produce the compositions.
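For what it's worth, here is a very rough sketch of the plumbing that kind of call-and-response session would need: guess the key and the pulse from what has just been played, then answer with a phrase that echoes some of it, varies the rest and stays inside the guessed key. Every name, threshold and rule here is an assumption of mine; a real system would need far more than this, but it shows the shape of the problem.

```python
import random
import statistics

MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]

def scale_of(root):
    """Pitch classes of the major scale built on the given root."""
    return {(root + step) % 12 for step in MAJOR_STEPS}

def guess_key(pitches):
    """Pick the major key whose scale contains the most of the played pitches."""
    return max(range(12), key=lambda root: sum(p % 12 in scale_of(root) for p in pitches))

def guess_beat(onsets):
    """Estimate the pulse from the median gap between note onsets."""
    gaps = [b - a for a, b in zip(onsets, onsets[1:])]
    return statistics.median(gaps) if gaps else 0.5

def respond(phrase):
    """Answer a 'call' (a list of (pitch, onset) pairs) with a varied echo in the same key."""
    pitches = [p for p, _ in phrase]
    onsets = [t for _, t in phrase]
    key, beat = guess_key(pitches), guess_beat(onsets)
    scale = scale_of(key)
    reply, t = [], onsets[-1] + beat              # the answer starts one beat after the call
    for p in pitches:
        if random.random() < 0.5:
            q = p                                 # echo this element as it was played...
        else:
            q = p + random.choice([-2, 2, 3])     # ...or nudge it,
            while q % 12 not in scale:            # keeping the reply inside the guessed key
                q += 1
        reply.append((q, round(t, 3)))
        t += beat
    return reply

call = [(60, 0.0), (64, 0.5), (67, 1.0), (65, 1.5)]  # a made-up four-note phrase
print(respond(call))
```

Doing that convincingly, at a latency a player wouldn't notice, is of course the hard part.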

The first CD of Emily Howell compositions, entitled From Darkness, Light, is available via iTunes and Amazon.