Euclidean Rhythms, Python, and EarSketch

This is the second of two projects assigned as part of the online course “Survey of Music Technology”. The course is (was, really – it’s almost over) available on Coursera.org and is taught by Dr. Jason Freeman from the Georgia Institute of Technology, with assistance from TA Brad Short.

The course covered a lot of ground – if you’re curious about the syllabus or the project descriptions, check out the links. Many students have been posting their projects to SoundCloud if you want to hear them:

This project was all about algorithmic composition. We were to use the musical programming framework called EarSketch, another creation from the folks at Georgia Tech. EarSketch is also a web-based Digital Audio Workstation (DAW): you write Python code against the EarSketch framework to build tracks and add effects. Apparently you can use JavaScript, too.
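For a sense of the workflow, here’s roughly what a bare-bones EarSketch script looks like. This is a minimal sketch, assuming the standard init()/setTempo()/fitMedia()/finish() calls from the EarSketch API; the clip name is a placeholder rather than a real entry in the sound library:

from earsketch import *

init()
setTempo(120)

# Lay a drum loop on track 1 from measure 1 up to measure 5
# (SOME_DRUM_LOOP is a placeholder - swap in any clip constant from the sound browser)
fitMedia(SOME_DRUM_LOOP, 1, 1, 5)

finish()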

I liked using this method – and this framework – for creating compositions. I was able to do multiple runs and render multiple takes in WAV or MP3. You can even export your work as a project collection that can be opened in the Reaper desktop DAW. My only complaint was not being able to use more of Python’s power. I understand why: EarSketch is, in a sense, a “sandboxed solution” – it gives users tools to work with without handing them power that could harm the application as a whole.

The script became musical crack for me after the first couple of renderings.

The core of it – the hardest piece to build – was the function for building the beat patterns. I wanted to use Euclidean rhythms, which more often than not are quite funky and underlie many of the world’s grooves. There are a couple of algorithms for creating Euclidean rhythms, and I had a hard time finding a version in Python. The two algorithms used to make these rhythms come from E. Bjorklund and Jack Elton Bresenham. I reverse-engineered some ChucK code from the electro-music.com discussion forum that used the Bresenham algorithm. Here’s what my function looks like:

 

from random import randint

def EuclideanGenerator(pulses, steps):
    # Euclidean rhythm generator based on Bresenham's algorithm
    #
    # pulses - number of pulses
    # steps  - number of discrete timing intervals
    #
    # Generates a beat string pattern where
    #   0 indicates the beginning of a sample,
    #   - indicates silence,
    #   + indicates continued play of the sample
    seq = ['0'] * steps
    error = 0
    # Randomly choose whether the gaps between pulses sustain or rest
    breakorcontinue = ['+', '-'][randint(0, 1)]
    for i in range(steps):
        error = error + pulses
        if error > 0:
            seq[i] = '0'
            error = error - steps
        else:
            seq[i] = breakorcontinue

    return ''.join(seq)
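The strings this spits out line up with EarSketch’s makeBeat() call, which uses the same 0/+/- step notation. Here’s a quick usage sketch – the drum clip constant is a placeholder, and I’m assuming the makeBeat(clip, track, measure, beatString) signature:

# Standalone demo of the generator:
pattern = EuclideanGenerator(3, 8)
print(pattern)  # e.g. "0-0--0--" (the gaps may come out as '+' instead of '-')

# Inside an EarSketch script, the string can drive a drum track directly:
# makeBeat(SOME_DRUM_CLIP, 1, 1, EuclideanGenerator(5, 16))  # 5 pulses over 16 steps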

 

Hopefully, if anyone else wants to do this in Python, it won’t be so hard to get started. Other people have done this in Python, but I had a hard time finding links that were still active. As of this writing, I did find someone who posted some code using the Bjorklund algorithm.

It’s probably not perfect – but hey, it’s funky enough for government work 🙂

So, back to the musical crack…

The final script produces unique works every time you render it: four tracks of Euclidean rhythms, and four sections of what I call “soundscape clouds.” Each soundscape cloud consists of four tracks containing bits and pieces of samples scattered across a set number of measures using a Gaussian distribution. A couple of effects were applied to almost every track. There are a lot of random decisions made, but there’s enough method to the madness to keep it interesting.
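If you’re curious what that scattering looks like in code, here’s the general idea – a minimal sketch of the technique rather than the exact code from my script; the insertMedia(clip, track, location) call and the clip name are assumptions for illustration:

from random import gauss

def scatterCloud(clip, track, center, spread, count, lo, hi):
    # Drop `count` copies of a sample onto a track, at measure positions
    # drawn from a Gaussian centered on `center` with standard deviation
    # `spread`, clamped to the [lo, hi] measure range.
    for _ in range(count):
        location = max(lo, min(hi, gauss(center, spread)))
        insertMedia(clip, track, location)  # assumed EarSketch-style placement call

# e.g. one cloud: scatterCloud(SOME_TEXTURE_CLIP, 3, 9, 1.5, 12, 5, 13)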

I won’t bore you with the details, but if you want to run the code in EarSketch, here it is.

I’ve posted fifteen “Renderings” to my SoundCloud page under a single playlist to give you an idea of the range of what this one program could generate:

Personally, I think it came out better than I’d hoped. I had a lot of ideas running around in my head, most of which were unworkable given the constraints of the EarSketch web interface and its architecture. In the end, after a lot of math and programming, I got something that works, works consistently, and works reasonably well.

The most important thing I got out of this exercise – and the course as a whole – is that algorithmic composition is not just about generating some “bloop-bleep” computer music. It’s about applying processes that can make composition faster, provide creative pressure, and open up new possibilities in creative expression.

There will be more to come…