These days I’m having an amazing time reading Dan Charnas’ book Dilla Time, enjoying the blending of historic biography and musical breakdowns. If you read my blog you’ll know I’m a fan of visual representations of music and sound and Dilla Time’s central sentiment is best summarized by this image of J Dilla’s main approach to juxtaposing varying rhythmic feels:
Straight time is best described as rhythm that is evenly spaced and metronomic, like a clock ticking. Swing time typically means that the second half of each beat is delayed, giving a laid-back feel (think of most Count Basie tunes).
Dilla time means that there are multiple rhythmic time feels simultaneously, some straight, some swinging, either right on the grid or ahead of or behind the grid. Listen to what J Dilla did with Herbie Hancock’s classic Watermelon Man:
J Dilla’s approach to creating music is not easily summarized with traditional musical notation and goes further than simply layering multiple rhythmic patterns in a typical polyrhythmic style. It’s as though different drummers were playing each component of a drum set. The person playing the kick drum would play behind the beat, the one playing the snare would play just a hair ahead – pushing the beat – while the hi-hats sit roughly on the grid or slightly behind, with a bassline just behind the beat. This approach was so unique when it came to creating beats on drum machines that J Dilla has earned a spot in music history.
There’s more to dig into about Dilla’s signature grooves and I’m sure to write a few more posts about it…
There has never been a better time to get into music technology than now
– Haig Armen
These days there are so many choices of software and hardware platforms and resources designed specifically for every skill level in what was once an extremely specialized field.
Not that kind of Hacking…
‘Hacker culture’ is a subgroup of the DIY ethic, so by ‘hacking’ I don’t mean the process of illegally breaching the security of computers as the term is often incorrectly defined.
Hacking is the process of building software and hardware through experimentation, and includes extending a system’s functionality or repurposing technologies for other uses. Examples of music tech hacks include developing custom software that allows a game controller to control your music software, or adding sensors to your guitar to control its sound in unique ways.
1. Learn Graphical Programming
Graphical or visual programming languages are probably the easiest way to get into software development, as they just involve connecting or ‘patching’ together graphical objects instead of having to write any code. The two most popular graphical programming languages for music and audio are Max (which you may know in its Max for Live form), developed by Cycling ’74, and the community-driven Pure Data (aka Pd). Both are very similar in functionality; however, while Max provides a more user-friendly interface and professional support, Pd is free and runs on a wider range of platforms and devices.
2. Learn Some Beginner-Friendly Coding
The next step up from learning a graphical programming language is to learn a textual programming language. While it may seem like a daunting task, there are now many coding languages, environments and toolkits designed specifically for beginners to create music and audio programs quickly and easily. Even though there is a steeper learning curve for coding compared to graphical programming, textual languages generally offer a lot more flexibility and provide you with a much better and more expandable skillset for not just DIY and hacking but also for more serious software development endeavours. As you’ll see later on in this article, knowing textual programming languages is essential for hacking and developing for certain hardware platforms and devices.
There are two broad types of coding platforms to mention here – Audio Programming Environments, which are designed specifically for creating music and audio programs; and Creative Coding platforms, which are used for developing software containing a range of different types of multimedia including music and audio. Here are some examples of the most popular coding platforms within the music tech DIY & hacking community:
Processing – Processing is a creative coding environment and programming language originally designed for teaching programming; it has since been adopted by the DIY community for creating rich multimedia applications.
openFrameworks – openFrameworks is a creative coding toolkit that uses the C++ programming language and a range of third party development environments. While less beginner-friendly compared to Processing, it is more flexible with much stronger audio capabilities.
3. Hack a General-Purpose Platform
Arduino has many core and extension libraries for dealing with MIDI, synthesis, and almost everything else music-related, and there are many hardware ‘shields’ for Arduino boards that provide extended music-related functionality and I/O, such as MIDI, audio playback, FX, and synthesis.
4. Hack an Audio Platform Device
Programmable audio devices are hardware/software platforms designed specifically for developing your own electronic musical instrument, synth, MIDI controller, FX unit, sequencer and so on. They usually consist of a piece of hardware containing freely assignable controls and inputs/outputs, as well as a software element for programming exactly how the hardware behaves. Compared to using a general-purpose platform – such as Arduino – they’re generally a bit quicker and easier to use, though you may sacrifice some of the flexibility and configurability that general-purpose platforms provide.
Here are some examples of currently available programmable audio platforms:
Hoxton OWL – The OWL is an open source programmable audio platform that comes in the form of either a guitar FX pedal or a Eurorack synthesiser module. You can program your own FX or synth patches using a range of different programming languages and environments including C++, Pure Data, Max/MSP, and FAUST. For more info see here.
Shantea Controls OpenDeck – OpenDeck is described as an “open-source platform for building custom MIDI controllers compatible with any MIDI software and hardware on any OS”. No need to do any coding here – it comes with a web-based software editor for configuring the hardware’s functionality. See here for a full run-through of OpenDeck.
Blokas Pisound – A sound card and MIDI interface add-on board for the Raspberry Pi. (Check out our review here.)
5. Hack an instrument
You don’t need to wait for permission to hack an instrument, and some products are more hackable than others. Certain devices – whether a synth, FX unit, MIDI controller or similar – are designed so that you can modify and customize them in great detail, making them operate in ways more specific to your needs. I’m not talking about opening up your expensive gear, prodding the electronics and voiding your warranty in the process – these are products that the manufacturer has allowed the end user to hack via official methods. They’re not primarily designed for DIY development like the platforms listed above and therefore aren’t always as flexible in that respect; however, having the product’s existing functionality and controls at your disposal can make for a quicker and easier hacking process.
Here are some examples of currently available hackable musical devices:
ROLI Lightpad BLOCK – You can customize this 3D touchpad MIDI controller using a simple and specifically-designed programming language and application. See this tutorial to learn how you can go about hacking the Lightpad.
Critter & Guitari Organelle – This small desktop device allows you to run your own Pure Data patches, turning it into a personalized standalone synth, sampler, FX unit, or anything in between. You simply need to learn Pure Data to hack this device.
Bastl Instruments Kastle Synth v1.5 – This is a mini modular digital synthesiser that is DIY-friendly because it runs on two Arduino-compatible chips that the user can reprogram to modify all aspects of the synth’s engine. Prior knowledge of the Arduino platform is expected if you want to hack this device.
I’ve been posting my experiments exploring sound and musical instrument design and prototyping, and it occurred to me that although my writing has focused on the creative process and user experience of playing instruments, it would help you, the reader, to have more context and explanation of the technical side, beyond the links embedded throughout my posts. Today I’d like to introduce you to Pure Data, an amazingly deep yet seemingly simple music and sound development environment. Here’s a description from the puredata.info website:
Pure Data (Pd) is a visual signal programming language which makes it easy to construct programs to operate on signals. We are going to use it extensively in this textbook as a tool for sound design. The program is in active development and improving all the time. It is a free alternative to Max/MSP that many see as an improvement.
As I learn more about Pd I realize that it has a number of characteristics that make it incredibly resilient. Apart from Max/MSP and VVVV, Pure Data is one of the only pieces of software that lets you program your own applications using a visual, flowchart-like graphical user interface. Pd is open source and platform-agnostic, working consistently across Windows, Mac and Linux (and yes, RaspberryPi!). Pure Data is also extremely extensible: you can install libraries (externals) to add new capabilities, and many people write their own. Finally, Pure Data can be embedded into other frameworks and hardware – the libpd library is used for iOS, Android and openFrameworks application development.
Ultimately, Pd enables musicians, visual artists, performers, researchers, and developers to create software graphically without writing lines of code.
Pd can be used to process and generate sound, video, and 2D/3D graphics, and to interface with sensors, input devices, and MIDI. Pd can easily work over local and remote networks to integrate wearable technology, motor systems, lighting rigs, and other equipment. It is suitable for learning basic multimedia processing and visual programming methods as well as for realizing complex systems for large-scale projects.
Here are some of the basic components of Pure Data:
In Pd we use a flowchart with lines connecting boxes together to build programs. We call these boxes objects. Stuff goes in, stuff comes out. For data to pass into or out of an object, it must have inlets or outlets. Inlets are at the top of an object box, outlets are at the bottom. Here is an object that has two inlets and one outlet, shown by small “tabs” on the edge of the object box.
The connections between objects are sometimes called cords or wires. They are drawn in a straight line between the outlet of one object and the inlet of another. It is okay for them to cross, but you should try to avoid this since it makes the patch diagram harder to read.
The stuff, or data, being processed comes in a few flavours: sound signals and messages. Objects give clues about what kind of data they process by their name. For example, an object that adds together two sound signals looks like |+~|. The + means that this is an addition object, and the ~ (tilde character) means that the object operates on audio signals.
When you create a new object from the menu, Pd automatically enters edit mode, so if you just completed the instructions above you should currently be in edit mode. In this mode you can make connections between objects or delete objects and connections.
Hovering over an outlet will change the mouse cursor to a new “wiring tool.” If you click and hold the mouse when the tool is active you will be able to drag a connection away from the object.
The bang is the most fundamental and smallest message. It just means “compute something.” Bangs cause most objects to output their current value or advance to their next state. Other messages carry an implicit bang, so they don’t need to be followed by a bang to make them work.
“Floats” is another name for numbers. As well as regular (integer) numbers like 1, 2, 3 and negative numbers like −10 we need numbers with decimal points like −198753.2 or 10.576 to accurately represent numerical data. These are called floating point numbers, because of the way computers represent the decimal point position.
For float numbers we have already met the number box, a dual-purpose GUI element. Its function is to either display a number or allow you to input one. A bevelled top-right corner denotes that an object is a number box. Numbers received on the inlet are displayed and passed directly to the outlet. To input a number, click and hold the mouse over the value field and move the mouse up or down. You can also type in numbers: click on a number box, type the number and hit RETURN.
Another object that works with floats is the toggle box. Like a checkbox on any standard GUI or web form, it has only two states, on or off. When clicked, a cross appears in the box and it sends out the number 1; clicking again removes the cross and sends out the number 0.
Sliders and Other Numerical GUI Elements
GUI elements for horizontal and vertical sliders can be used as input and display elements. Their default range is 0 to 127, nice for MIDI controllers, but like all other GUI objects this can be changed in their properties window. Unlike those found in some other GUI systems, Pd sliders do not have a step value.
Message boxes are visual containers for user-definable messages. They can be used to input or store a message. The right edge of a message box is curved inwards, and it always has exactly one inlet and one outlet. Message boxes behave as GUI elements, so when you click one it sends its contents to the outlet. The same action can be triggered by sending a bang to the message box’s inlet.
A symbol is generally a word or some text. A symbol can represent anything; it is the most basic textual message in Pure Data. Technically a symbol in Pd can contain any printable or nonprintable character, but most of the time you will only encounter symbols made out of letters, numbers, and some punctuation characters like a dash, dot, or underscore.
A list is an ordered collection of any things – floats, symbols, or pointers – that is treated as one unit. Lists of floats might be used for building melody sequences or setting the time values for an envelope generator. Lists of symbols can be used to represent text data from a file or keyboard input.
As in other programming languages, a pointer is the address of some other piece of data. We can use them to build more complex data structures, such as a pointer to a list of pointers to lists of floats and symbols.
Tables, Arrays, and Graphs
A table is sometimes used interchangeably with an array to mean a two-dimensional data structure. An array is one of the few invisible objects. Once declared it just exists in memory.
Most of my research these days is about getting to the heart of how we interact with musical instruments, exploring the essence of the nuanced touch a piano player has, or the subtle vibrato that makes one guitar player different from another. As a departure, or brief interlude, I’ve also been thinking about how to make an instrument that plays itself. It’s not a new idea – there are plenty of generative art projects that create their own ambient soundtracks – but I’d like to look into how an instrument might create music from data it gathers from its environment.
The NonInstrument is a sonic interaction experiment that scans for Bluetooth devices and creates melodies from each device’s UID. The project explores how our devices are constantly talking to each other without us even being aware of these exchanges.
What’s a UID?
A unique identifier (UID) is a numeric or alphanumeric string that is associated with a single device. In other words, a unique sequence of numbers or letters that can be used to distinguish your device from every other device in a huge ocean of devices.
How it works
On the Sonic Interactions Kit (SIK) I installed BlueZ, the Linux Bluetooth stack – there’s a decent guide on how to install it at Adafruit. Then I wrote a simple Python script that uses BlueZ to scan for devices and sends the UIDs to Pure Data (Pd) over UDP. Once in Pd, the data is parsed into ASCII and number values, which are then converted from MIDI note numbers into frequencies. Each UID becomes a sequence of 16 notes, saved into tables/arrays. The sequences are then played back, and playback tempo and delay can be adjusted with potentiometers on the Lots of Pots (LOP) expansion board on the Pi.
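The Pd patch itself is hard to show in text, but the conversion step is easy to sketch in Python. This is an illustration rather than my actual script: the uid_to_notes mapping, the example address, and the port number are arbitrary choices here, and in the real project the UIDs come from a BlueZ scan rather than a hard-coded string. Pd can receive the result with a UDP [netreceive] object, since FUDI messages are just semicolon-terminated text.

```python
import socket

def midi_to_freq(note):
    """Standard MIDI note number to frequency in Hz (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((note - 69) / 12)

def uid_to_notes(uid, length=16):
    """Map a Bluetooth UID string onto a 16-step melody of MIDI notes.
    Each character's byte value is folded into a two-octave range above
    note 48 (C3). This mapping is an illustrative choice."""
    chars = [c for c in uid if c.isalnum()]
    return [48 + (ord(chars[i % len(chars)]) % 24) for i in range(length)]

def send_to_pd(values, host="127.0.0.1", port=3000):
    """Send a space-separated list to Pd over UDP using the FUDI protocol
    (plain text terminated with a semicolon)."""
    msg = " ".join(str(v) for v in values) + ";\n"
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msg.encode("ascii"), (host, port))
    sock.close()

if __name__ == "__main__":
    notes = uid_to_notes("DC:A6:32:01:AB:CD")  # placeholder UID
    print([round(midi_to_freq(n), 1) for n in notes])
    send_to_pd(notes)
```

On the Pd side, the incoming list can be unpacked and written into a table for the sequencer to step through.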
Here it is in action on Instagram.
For the next steps on this project I’m thinking about putting the device in public locations to see what it picks up – scanning people’s devices and recording the melodies. I imagine each place will have a totally different sound and texture.
Some questions come up like:
How do I make this device portable and durable? Battery-powered and in a metal pedal case, maybe.
Should the device have its own amp and speaker for playback while on location?
How do you think this project should evolve? Leave a comment below.
Rather than using RaspberryPi and Pd (PureData) as the sound generator in this experiment I wanted to use another sound source, something that resonates acoustically that I could alter the sound of but retain the playability of the original instrument. Why not a ukulele?
I know what you’re thinking, you probably have a mental picture of a chicken running around without a head, but in Pi parlance, running headless is about running your Raspberry Pi without a monitor (screen) or keyboard.
One of the reasons I like working with a RaspberryPi over an Arduino is that, unlike the Arduino, the RaspberryPi is a standalone computer with an operating system, network capabilities and video output built in. It can be a desktop computer or embedded within another object or installation. Lots of possibilities open up.
To run a RaspberryPi headless there are a lot of tutorials out there – you can start with mine. Beyond what most “how to run headless” tutorials cover, I had to figure out how to launch a script that starts two files automatically when the Pi boots up. Let’s have a look at those files:
Like experiment SI01, we start by grabbing data from somewhere else to bring into Pd. In this case it’s data from the Lots Of Pots (LOP) board made by Modern Device, a RaspberryPi expansion board with 8 pots (potentiometers) – thus the name – and analog-to-digital converters to send the data from the pots to the Pi. The Python script grabs the data from the pots and 4 buttons and sends it to Pd via UDP.
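As a sketch of what that script does: scale each raw ADC reading into a usable control range and forward it to Pd as a short FUDI message. The read_adc call mentioned in the comment is a placeholder for whatever driver your board uses, and the message format and port are my own choices for this example.

```python
import socket

def scale(raw, out_min=0.0, out_max=127.0, raw_max=1023):
    """Scale a raw 10-bit ADC reading (0-1023) into a MIDI-style 0-127 range."""
    return out_min + (out_max - out_min) * raw / raw_max

def send_pots(readings, host="127.0.0.1", port=3000):
    """Send each pot value to Pd as a FUDI message like 'pot 2 64;'.
    In Pd these can be picked apart with [netreceive] into [route pot]."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for index, raw in enumerate(readings):
        msg = "pot %d %d;\n" % (index, round(scale(raw)))
        sock.sendto(msg.encode("ascii"), (host, port))
    sock.close()

# On the Pi the polling loop would look something like:
#   while True:
#       send_pots([read_adc(channel) for channel in range(8)])
#       time.sleep(0.02)
```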
There are a lot of different things we can do to the sound coming into Pd – pretty much any digital signal processing you can think of, like distortion, delays or echoes, chorusing and any of those typical guitar pedal effects – but I chose to create a ring modulator effect, which makes the ukulele sound more like a sequenced synthesizer.
Quick Tangent about Ring Modulation
Ring modulation has been used in music since as early as 1956 by composers like Stockhausen, and later by John McLaughlin in the Mahavishnu Orchestra and Miles Davis in the 1970s. You might know the sound from Black Sabbath’s Paranoid, or the heavily modulated voices of the Daleks on Doctor Who in the 1960s. More info about it here: https://en.wikipedia.org/wiki/Ring_modulation
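In Pd the effect is just the incoming signal multiplied by an [osc~] carrier. As plain code (a sketch for illustration, not taken from my patch), ring modulation looks like this:

```python
import math

def ring_mod(samples, carrier_hz, sample_rate=44100):
    """Ring modulation: multiply the input signal by a sine-wave carrier.
    Because no DC offset is added to the carrier, the original pitch
    disappears and only the sum and difference frequencies remain -
    the source of the effect's clangorous, bell-like character."""
    return [s * math.sin(2.0 * math.pi * carrier_hz * n / sample_rate)
            for n, s in enumerate(samples)]

# Example: one second of a 440 Hz sine modulated by a 30 Hz carrier
tone = [math.sin(2.0 * math.pi * 440.0 * n / 44100) for n in range(44100)]
wet = ring_mod(tone, 30.0)
```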
Tricky Startup Business
The trickiest part of this experiment was getting the files to launch automatically. There seems to be a bit of voodoo here – I think mostly because the files need to have specific permissions, be owned by the root user, and be in the right location.
Here’s how it works. First you need to edit the rc.local file, which requires root permission (sudo). Add the following line:
sleep 10 && /etc/profile.d/pd_startup.sh
Then the Pd_startup.sh file needs to launch the python and Pd files, like so:
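Mine looks roughly like this – the script and patch file names below are placeholders for your own Python script and Pd patch:

```shell
#!/bin/sh
# pd_startup.sh - launched from rc.local after boot.
# Start the Python script that reads the sensors and sends UDP to Pd,
# then start Pd itself headless (-nogui) with the patch.
python /home/pi/sensors_to_pd.py &
pd -nogui /home/pi/patch.pd &
```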
Then make sure the permissions of this file are set correctly: ownership should be root:root (chown root:root) and the file should be executable by its owner (chmod 755).
Yes, it’s unavoidable – annoying but necessary to learn this stuff. Luckily I had to learn it when I began making websites, and it’s a handy thing to know when you begin to get under the hood of any computer. If you need to know more about file permissions and ownership, try this article.
I’m going to start documenting each Sonic Interactions experiment for the purpose of marking where I am in the process. Each one of these is merely a rough sketch to build upon and is by no means finished. My first experiment takes data from the accelerometer of a SenseHat and uses it to change parameters of a simple synth.
Goal: use an accelerometer to control the frequencies of a synth, experiment with gestural interfaces for music
Questions: How do we tame the wild data coming out of the accelerometer to use it in a musical way in a synth? How do we use the joystick and middle click to add to the interaction?
1. Write a Python script to retrieve data from the SenseHat and send it to Pd
2. Use the data from Python in Pd to alter the frequencies of oscillators
3. Determine the mapping of data to synth parameters. I started with this:
The Pitch (x plane) from the accelerometer was mapped to OSC 1 (oscillator 1 frequency)
The Roll (y plane) was mapped to OSC 2
The Yaw (z plane) was mapped to OSC 3
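The mapping itself is simple linear scaling. Here’s a sketch in Python – the 100–1000 Hz output range is my illustrative choice, and on the Pi, run_on_pi() would read the SenseHat and forward values to Pd over UDP:

```python
import socket
import time

def map_range(value, in_min, in_max, out_min, out_max):
    """Linearly map a value from one range to another, clamping the input."""
    value = max(in_min, min(in_max, value))
    span = (value - in_min) / float(in_max - in_min)
    return out_min + span * (out_max - out_min)

def orientation_to_freqs(pitch, roll, yaw):
    """Map the SenseHat's 0-360 degree orientation angles onto three
    oscillator frequencies (output ranges chosen for this sketch)."""
    return (map_range(pitch, 0, 360, 100, 1000),
            map_range(roll, 0, 360, 100, 1000),
            map_range(yaw, 0, 360, 100, 1000))

def run_on_pi(host="127.0.0.1", port=3000):
    """Poll the SenseHat and send 'osc f1 f2 f3;' FUDI messages to Pd."""
    from sense_hat import SenseHat  # only available on the Pi
    sense = SenseHat()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        o = sense.get_orientation()  # {'pitch': ..., 'roll': ..., 'yaw': ...}
        f1, f2, f3 = orientation_to_freqs(o["pitch"], o["roll"], o["yaw"])
        sock.sendto(("osc %.2f %.2f %.2f;\n" % (f1, f2, f3)).encode("ascii"),
                    (host, port))
        time.sleep(0.05)
```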
Let me know what you’d want to see done with this experiment next. To make it more musical or more expressive, would you add a finer scale to the sensitivity of the accelerometer data so that you could, for example, play scales more easily?
These days I’ve been telling people about my recent return to playing music seriously. Quite a few people ask whether I’m professional or amateur, which usually makes me pause to think. When you’re referred to as an amateur, it’s usually implied that you might be less qualified or even less talented than a professional. It is assumed that an amateur is one who would have liked to be a professional but was unable to reach that level. But contrary to these negative implications, when you look up the word “amateur” you’ll see that it actually means “lover of”, and there are many amateurs in all fields working at a very high level.
Consider a hobby other than music, that you do with your free time. Maybe you brew beer, take nature photographs, or fix cars. Whatever it is, have you ever even considered doing it professionally? Probably not. And most likely this isn’t because you’re not good enough (and whether you are or not is probably irrelevant to your decision), but rather because the very fact that it’s a hobby means that it’s something you do that isn’t work. Instead, it’s a chance to spend time on something fun and fulfilling that doesn’t saddle you with any outside pressure to succeed, earn a living, etc.
Musicians, more so than other amateurs, seem to have a difficult time simply engaging with music as a hobby. Perhaps this is because tools like DAWs are fundamentally designed around a recording and production mentality. Compared to an acoustic guitar player, someone with a laptop can actually produce a polished album of music (remember those?). The guitar player, meanwhile, can just pull out their guitar, and playing it for a few minutes while sitting on the couch may be the extent of their musical aspirations. And they don’t see this as failure. They’re not lamenting their inability to get gigs or write more music or get record deals. They’re having exactly the relationship with music that they want. In fact, they’re usually not even recording what they play; once it’s in the air, it’s gone.
By definition, being a professional means having to spend at least some amount of time thinking about the marketplace. Is there an audience for the music you’re making? If not, you’re guaranteed to fail. Amateurs, on the other hand, never have to think about this question at all. This frees them to make music entirely for themselves, on their own terms.
An easy way to do this is to put yourself into a musical context in which you actually are an amateur—by experimenting with a genre in which you have no prior experience. Are you a committed hip-hop producer? Try making a jazz track. Your expectations are bound to be lower, simply because you have no prior successes or failures against which to gauge your current work. Even if you hate the results, it’s likely that you’ll learn something from the experience.
Even if you do aspire to make a living out of creating original music, it might be helpful to think like an amateur in order to lower your stress and bring the fun back to your music-making time. Amateurs often have a genuinely more pleasurable experience than professionals working in the same field, and this is almost certainly because they’re free from outside pressure. If you can instill this mindset into your own work, you’ll probably have both better results and a better time.
As you may or may not know, I’m currently enjoying a sabbatical that has given me the time to explore my love of music and musical instruments. My research is about how we interact when we create music, both with instruments, other people and environments.
I’m going to begin with what I know best, the interplay that happens with others when creating music. For the purposes of this discussion, I define that interplay or musical interaction as involving one or more members of an ensemble improvising spontaneously in response to what other participants are playing.
Here’s what a few searches bring up:
In the wake of Paul Berliner’s and Ingrid Monson’s landmark interview-based research of the mid-1990s, the notion that “good jazz improvisation is sociable and interactive just like a conversation” (Monson 1996, 84) has become widely accepted among jazz scholars.
Playing jazz is as much about active listening as it is being able to express yourself on an instrument.
Trying to categorize the interactions within a musical context:
Microinteractions take place at a very fine level of musical detail – too small in scale to be captured by standard Western notation – and include such phenomena as the tiny adjustments in tempo, dynamics, pitch, and articulation that musicians make while playing together.
Macrointeractions involve the broad sorts of collective coordination whereby improvising musicians play in unified stylistic idioms (Gratier 2008, 88) and at mutually coherent intensity levels. For instance, if one ensemble member, mid-performance, starts playing louder, or with shorter rhythmic values, or with increasingly dissonant harmonies, others may follow suit by reinforcing, complementing, or otherwise accommodating this strategy.
Creating music can give you so much joy – when you’re in a state of flow, or when you finish a song and it sounds great. But ask most music producers and they’ll agree that there seem to be as many or more moments of agony. Despite our best intentions, there are lots of real reasons why we sometimes procrastinate, including fear of failure, fear of success, and simple laziness.
If you’re a chronic procrastinator, you’re not alone. There are many creative (and non-creative) people who suffer from task aversion and will find any excuse to avoid doing the work that really needs to get done. One strategy for overcoming procrastination that’s commonly used in the software development world is known as timeboxing.
Timeboxing simply means setting a fixed amount of time for a particular task. The amount of time you choose is up to you, but it should be short enough that it’s easily manageable by even the most determined procrastinators. I’ve been using the Pomodoro Technique for years as a designer, but I’ve never used it for creating or producing music. I’m curious to see if it is as effective.
Here’s what I’ll try as an experiment for timeboxing for songwriting:
Create a drumbeat with 3 variations (25mins)
Write and record a bass line with 2 separate parts (25mins)
Write a melody/lead line and chords for the bass parts (25mins)
Arrange the song structure (intro, verse, chorus, ending) (25mins)
I’ll post what I create tomorrow. Don’t judge me, that’s not what this is about.
Normally I get shit done, but with music composition I tend to take my time. That usually leads to other distractions, and songs just don’t get finished. In fact, they barely get started: I’ll have a melody or a rhythm kicking around in my head, play it into Ableton and save it. That’s not a song, it’s just a seed – it still needs arranging, structure and sound design.
Give yourself a deadline. Nothing motivates like a due date. Since work always expands to fill the available time, it’s necessary to actually put a limit on that time. If you find self-imposed deadlines to be too “soft,” try having someone else assign the deadline for you, with the requirement that you show them the work at the end to ensure accountability. Or engage in a collective challenge, such as February Album Writing Month.
Schedule tasks as if they were appointments with yourself. Try using a calendar to restrict specific types of work to specific times. For example:
Sound design: 7-8pm
Form/song structure: 8-9pm
Mixing: 9-10pm
Timeboxing specific tasks serves two purposes: it forces you to narrow your focus while simultaneously eliminating the risk of non-musical distractions (Facebook, etc.). You wouldn’t check your email in the middle of a business meeting, so treat these “appointments” with the same kind of care.