🎼 Functionally programming The King of Pop
A quick note
Well over a year has passed since I started this post – the real world got in the way. I’ve spent “some” time revisiting not just the post itself but also the code, music and process around it. As something a little “out there” in both (semi-)musical and programming endeavours, I hope it was worth it…
Oh, but if you’re feeling a bit TL;DR, skip to the music.
Where were we?
In this part, we’ll take some of these ideas further and create something resembling a full piece of music – in fact, a cover of a well-known song. As it concentrates more on the audio side than the coding side, I’ll assume some basic music / audio-engineering knowledge, but don’t let that put you off.
Choosing a song
The King of Pop
It seems appropriate, in light of the (EDIT: still) current EPA-destruction and climate treaty withdrawals, to choose the mid-90s ode to the salvation of the planet by the late King of Pop himself, Michael Jackson.
Yes, I’m talking about Jackson’s most successful UK single and 1995’s UK Christmas #1, Earth Song!
Of course, it is completely inappropriate to choose this song, as:
- It’s over six minutes long (on the album)
- It has an epic, non-conventional song structure
- That key change mid-song…
- It’s heavily produced – layered synths, R&B-like sections, rock guitars, live drums, strings sections, sound effects, gospel choirs, a harp (!), brass sections.
- Lots of funk elements (which rely on years of player experience), i.e. hidden complexity. This is a problem with code too.
- The default SoundFont I’m using to do this is pretty… limited, even for General MIDI.
But then, it wouldn’t be fun without a few challenges!
Making some music
What about us?
ℹ️ Interactive is always more fun, so I’ll be providing a few GHCi (interactive REPL) commands along the way. To get set up, you can see the previous post, or here’s the short version:

- First make sure you have your MIDI synth running (try `timidity -iA -Os -A 300 -f`, for example).
- Then, start the REPL in your Euterpea project directory (you can clone this sandbox if you want).
- Work out what MIDI channel you’re on using the `devices` command, and call this `channel`.

They’ll be marked up a bit like this:

λ> playDev channel $ c 4 wn
For simplicity I used 3 voices almost everywhere, via an `aChordOf` helper function (which saves a lot of typing / promotes re-use). It takes a list of notes (without duration) and creates a chord of them at a given duration.
A quick aside on `$`

Wait – what’s `$dur`? Is this Bash or PHP suddenly? 😁

ℹ️ Haskell’s ubiquitous yet mysterious-to-newbies `$` operator can be used for more than just avoiding brackets, as I found out a while ago myself, to some confusion. For example, you can apply a parameter to a partially applied function:
λ> inc x = x + 1  -- or just: inc = (+1)
λ> ($ 123) inc
124
The parser doesn’t need a space for it, either (but you do need the parentheses):
λ> map ($3) [(+1), (*2), (div 6)]
[4,6,2]
So in Euterpea, `($en) (c 4)` is just applying `c 4` (the note, needing a duration) to `en` (eighth note, A.K.A. quaver, the duration)!
What about us?
In your REPL:
λ> playDev channel $ chord [af 4 en, cf 4 en, ef 4 en]
🎼...
or something a bit more melodic:
λ> aChordOf notes dur = chord $ map ($dur) notes
λ> abMinor = aChordOf [af 4, cf 4, ef 4]
λ> introRun = abMinor qn :+: line [cf 6 en, bf 5 en, af 5 en, ef 5 en]
λ> playDev channel $ introRun
🎼...
Luckily, the drum pattern here is relatively simple. The subtleties and performance are another matter, but we’re in the business of approximating here.
I find that the list format of `line` is useful for drum patterns, and it’s best, if possible, to keep these to even measures, or you go mad. Eighth notes are good here, so let’s adopt that.
Just like clean coding, I find well-named music variables make the code easier to understand. Here I’ve used vaguely onomatopoeic names for drums, and later extracted these to allow duration variations; yes, I think I’ve just coined higher-order percussion. WAT.
-- Helpers of type Dur -> Music (Pitch, Volume)
-- ...will depend a lot on your synth and SoundFont / patchset
boomOf = addVolume 115 . perc BassDrum1
smakOf = addVolume 70 . perc AcousticSnare
tssOf  = addVolume 40 . perc ClosedHiHat

-- Basic sounds
boom = boomOf en
smak = smakOf en
tss  = tssOf en

-- Rest note aliases
da = rest en
d_ = rest sn
Once we have that vocabulary defined, it’s easy to talk about simple drum patterns!
-- A basic bass, snare rock beat
rockBeat = forever $ line [boom, da, smak, da]
shaker   = forever $ line [addVolume 10 $ perc Tambourine sn]

-- Re-use existing aliases then double the speed
fill = tempo 2 $ line [boom, boom, smak, boom :=: tss]

-- An 8-bar beat with a fast fill at the very end
beat = cut 8 $ cut 7.75 rockBeat :+: fill
Note how we did the fill by re-using the existing eighth-note definitions and using the `tempo` modifier to double the speed of the whole phrase. DRY in action! (The fill itself needs some work though – sorry, purists.)

The `cut` function is very useful – note how it can take non-integer durations too – so reusing drum patterns is easy, as you can cut off the end (adding a fill instead) rather than duplicating a large part of the content. More DRY, essentially.
What about us?
Try it in your REPL:
λ> playDevS channel $ rest qn :+: beat

- We add a small rest (`rest qn`) to help with the hiccups some MIDI synths have when starting playback.
- `playDevS` is the strict variant of `playDev`, which handles timing better (see the interesting arguments around this behaviour from the creators).
Layering sounds in MIDI
Is this a good idea?
Not only is this definitely not semantic either (a bit like View code in your Model in MVC), it’s not even vaguely a good idea: your results will definitely differ. Rendering the output is the only guarantee, and even that’s not so much fun.¹
That said, we’ll continue regardless 😄…
I found the SoundFont I was using (Timidity plus FreePat, I think) was nice in places but horrible in others, and especially lacked some thickness in the drums. So creating that huge 90s drum sound was challenging.
- Thicker bass drums: layering a low tom (more like an 808 kick, TBH) onto the bass drum at a lower volume when we wanted the full weight of the drums (chorus, outro, etc.).
- Layering the electric snare on top of the acoustic one made a nice punchy sound.
- The clap sound for the (second) bridge / pre-chorus (the SoundFont’s actual clap was totally broken for me) I eventually got closer to by using a little rimshot (too thin / short by itself), a high wood block at low volume for the reverby, strongly percussive body, and a bit of cabasa (a Latin shaker-like instrument) for the missing noise in the upper part of the spectrum. Phew.
- Also for the second bridge, the drummer uses a gentle but very tight closed hihat. My hihat sound was far too open (splashy / sizzly) even when closed, so for the very tight sound I ended up overlaying a mute triangle (sharp decay, high tone) with a very quiet closed hihat. Not ideal, but better.
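The layerings above can be sketched with Euterpea’s parallel composition operator `:=:`. The helper names and volume numbers here are my own guesses (with `SideStick` standing in for the rimshot), not the post’s actual code – your SoundFont will want different values:

```haskell
import Euterpea

-- Hypothetical layered drum helpers; volumes are SoundFont-dependent guesses
phatBoomOf, punchySmakOf, clapOf, tightTssOf :: Dur -> Music (Pitch, Volume)

-- Bass drum thickened with a quieter low tom underneath
phatBoomOf d = addVolume 115 (perc BassDrum1 d)
           :=: addVolume 60  (perc LowTom d)

-- Acoustic snare layered with an electric one for punch
punchySmakOf d = addVolume 70 (perc AcousticSnare d)
             :=: addVolume 55 (perc ElectricSnare d)

-- Rimshot + high wood block + cabasa approximating a clap
clapOf d = addVolume 60 (perc SideStick d)
       :=: addVolume 35 (perc HiWoodBlock d)
       :=: addVolume 30 (perc Cabasa d)

-- Very tight hihat: mute triangle over a very quiet closed hihat
tightTssOf d = addVolume 45 (perc MuteTriangle d)
           :=: addVolume 25 (perc ClosedHiHat d)
```

Because `:=:` plays its arguments simultaneously, each layered hit still occupies exactly one duration slot, so these drop straight into the `line`-based patterns above.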
Phatten the bass
Subharmonics (octaves below the actual note) are nice when used well; layering the bassline on itself an octave down is a very blunt way of getting them. Remember we haven’t got any signal processing capabilities here – we’re committed to 100% General MIDI! 🤘
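A minimal sketch of that blunt subharmonic layering – the helper name `phatBassOf` is assumed, not from the original code:

```haskell
import Euterpea

-- Crude subharmonic: the same music layered in parallel an octave below
phatBassOf :: Music (Pitch, Volume) -> Music (Pitch, Volume)
phatBassOf m = m :=: transpose (-12) m
```

e.g. `playDev channel $ phatBassOf (addVolume 80 (af 2 qn))` – `transpose` works in semitones, so `-12` is exactly one octave down.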
Spread the pads
Now I will confess a complete lack of knowledge around arranging pads and strings, so most of this feels very wrong, and pretty basic at best. Perhaps someone out there can advise better practices.
The idea was to take the simple three-part harmonies and replicate them across instruments, layering and mixing to get nearer to the sound. Several parts were transposed an octave up (`transpose 12`) to occupy more of the spectrum. Some parts had tension notes (e.g. `sus9`), which generally went only in the upper registers, as they sounded a bit too muddy otherwise. All in all, I had to keep turning the velocities (volumes) down – it’s easy to drown the mix in midrange when the actual song is very full.
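As a hedged sketch, a `stringsFor`-style helper might replicate a chord line across patches like this – the instrument choices and volumes are purely illustrative, not the post’s actual definitions:

```haskell
import Euterpea

-- Hypothetical pad-spreading helper: layer one chord line across
-- string/pad patches, doubling an octave up at a lower volume
stringsFor :: Music Pitch -> Music (Pitch, Volume)
stringsFor m =
      instrument StringEnsemble1 (addVolume 50 m)
  :=: instrument SynthStrings1  (addVolume 35 (transpose 12 m))
  :=: instrument VoiceOohs      (addVolume 30 m)
```

The layers all share the same source material, so one change to the harmony ripples through every patch – the same DRY benefit as with the drum aliases.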
Building up the song structure
Repetition and sections
In the previous post I explained how coding could be used to avoid repetition in the notation of the music (at the expense of some complexity, of course). With the amount of variation that crept in, the usefulness of this actually decreased quite a lot, but never mind…
Like with any refactoring, a good start is simply extracting well-named constants / functions. Then we can use built-in combinators (see the useful Euterpea quick reference for more):

- `forever` to repeat sections
- `cut t` to chop from the start (especially useful if the phrase repeats `forever`)
- `tempo 2` to double the speed

As mentioned, I found it useful to keep named constants for parts of sections. For example:
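The example itself didn’t survive here, so this is a hedged reconstruction of what such named section constants might look like – the chord voicings and progression are illustrative, not the actual transcription:

```haskell
import Euterpea

-- Chord helper: a list of duration-less notes becomes a chord
aChordOf :: [Dur -> Music Pitch] -> Dur -> Music Pitch
aChordOf notes d = chord $ map ($ d) notes

-- Named chords (voicings are guesses)
abMinor, cbMajor :: Dur -> Music Pitch
abMinor = aChordOf [af 3, cf 4, ef 4]
cbMajor = aChordOf [cf 4, ef 4, gf 4]

-- A named section constant built from the named parts
verseChords :: Music Pitch
verseChords = line [abMinor hn, cbMajor hn, abMinor wn]
```

Sections like `verseChords` can then be reused verbatim in the full song structure, wrapped in `stringsFor`, `voicesFor`, and so on.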
It’s worth mentioning the `phrase` function. This allows us to add all sorts of performance-type changes, including dynamics like crescendos and staccato. I used `phrase [Art $ Staccato 0.9]` to dampen bass strings – though this is a real fudge, as even beginner bass players know not to let notes ring, plus the common slide or glissando used here isn’t possible AFAICT.

As I incessantly tweaked further, this seemed like the only way to start to get dynamics (though it can’t be used for fades, unfortunately).
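For instance – a sketch, with helper names of my own invention:

```haskell
import Euterpea

-- A crescendo across a whole passage (0.4 = grow by 40%)
swellOf :: Music (Pitch, Volume) -> Music (Pitch, Volume)
swellOf = phrase [Dyn $ Crescendo 0.4]

-- Staccato damping: notes sound for 90% of their notated length
dampedBassOf :: Music Pitch -> Music Pitch
dampedBassOf = phrase [Art $ Staccato 0.9]
```

`phrase` only annotates the music – the attributes are applied when the `Music` value is interpreted into a performance, so the notated durations are unchanged.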
Percussion is tricky, and fills are definitely hard to learn or notate. As always it helps to be able to hear the parts individually (this drumming video was useful).
Eventually it comes down to the amount of time you can devote to obsessing over the exact fills – whether it’s getting them exactly right, or even bothering with the variations at all. Session musicians are often very, very good at what they do (without grabbing the spotlight!), and I have even more respect for them now.
A useful tip was to run the whole thing at double speed using `tempo 2`, meaning that the eighth-note shortcuts defined earlier are re-usable for sixteenth notes.
As a former / occasional dabbler in bass guitar and various bass-driven genres, I’ve long been aware of the subtleties of “simple” basslines. As usual though, this one caught me out completely: the main four notes (A♭, B, D♭… and E♭) are clear… but the excellent performance throughout the track has complexity in the form of timing nuances (funk!), dynamics, ornaments and many variations / fills that make this very hard to sequence. I’m not going to lie: a huge amount of trial and error and referencing of bass tabs was necessary, and even then the result is massively simplified.
I’ve skipped the many definitions of the functions used, but by now you should be able to imagine what they look like.
earthSong :: Music (Pitch, Volume)
earthSong = bpm 69.16 $ line
  [ rest sn
  -- ♬ (intro) ♬
  , cut 4 (introBacking :=: introPiano)
  -- ♬ "What about sunrise?" ♬
  , voicesFor verseChords
  -- ♬ "Did you ever stop to notice.." ♬
  , thinBassOf bridgeBassLine
      :=: (phrase [Dyn $ Loudness 01] . phrase [Dyn $ Crescendo 0.2]) (stringsFor bridgeChords)
      :=: voicesFor bridgeChords -- TODO: harp arpeggios
  -- ♬ oohh ooh oooooohayahh ♬
  , stringsFor chorusChords
      :=: voicesFor chorusChords
      :=: (cut 7.5 shaker :+: ss)
      :=: (rest 7.75 :+: chillGuitarLick)
  -- ♬ "What have we done to the world? Look what we've done..." ♬
  , stringsFor verseChords :=: voicesFor verseChords
  -- ♬ "Did you ever stop to notice..." ♬
  , thinBassOf bridgeBassLine
      :=: stringsFor bridgeChords
      :=: voicesFor bridgeChords
      :=: (rest 3.75 :+: fastFill)
  -- ♬ oohh ooh oooooohayahh ♬
  , cut 7 (chorusMusic :=: phatBeat) :+: (boom :=: cshh :=: eb7Fade)
  -- ♬ "I used to dream, used to glance beyond the stars..." ♬
  , midBassOf funkBridgeBassLine
      :=: thinBeat
      :=: stringsFor bridgeChords
      :=: voicesFor bridgeChords
  -- ♬ oohh ooh oooooohayahh ♬
  , phatBeat :=: chorusMusic
  -- ♬ ....heyeyyeeyeaa! ♬
  , (cut 7 phatBeat :+: longFill) :=: chorusMusicUp -- TODO: pre-beat guitar bends
  -- ♬ "What about yesterday? (what about us?)" ♬
  , times 4 (phatBeat :=: chorusMusicUp :=: powerGuitarUp :=: brass)
  , cut 7 $ chorusMusicUp :=: powerGuitarUp :=: phatBeat :=: brass
  , boom :=: cshh :=: endChord
  ]
A multi-step process… which ended up getting a bit DevOpsy after all the hundreds of run-throughs (aargh):
- Compile program (GHC, Stack)
- Run executable to produce MIDI file
- Run Timidity (now in Docker) to produce OGG
- Rebuild Hakyll site
- Deploy here to wrap in HTML5²
Using Web Audio, here’s the complete generated audio, together with Jackson’s a cappella vocal (hacked out from the music video by the Internets).
Try some basic remixing – you can alter the volume of each audio track.³ Unfortunately the syncing / timing breaks a bit in some browsers, which is pretty annoying. Recommendation: don’t fast-forward / rewind (the first time, at least).
I learnt this the hard way when I recently changed my default SoundFont (to a better one, even) 18 months after starting, and the audio was unrecognisable. Docker to the rescue though…↩
Lesson learnt: browser media caching is… annoying.↩