
SI004 – NonInstrument

Most of my research these days is about getting to the heart of how we interact with musical instruments, exploring the essence of the nuanced touch a piano player has, or the subtle vibrato that makes one guitar player different from another. As a departure, or brief interlude, I’ve also been thinking about how to make an instrument that plays itself. It’s not a new idea; there are plenty of generative art projects that create their own ambient soundtracks, but I’d like to look into how an instrument might create music from data it gathers from an environment.

The NonInstrument is a sonic interaction experiment that scans for Bluetooth devices and creates melodies from each device’s UID. The project explores how our devices are constantly talking to each other without us even being aware of these exchanges.

What’s a UID?

A unique identifier (UID) is a numeric or alphanumeric string that is associated with a single device. In other words, it’s a unique sequence of numbers or letters that can be used to identify your device among every other device in a huge ocean of devices.

The UID can be found in the device’s address line of a scan, e.g.: Address: F4-5C-89-AB-18-48

How it works

With the Sonic Interactions Kit (SIK) I installed BlueZ, the Linux Bluetooth stack; there’s a decent guide on how to install it at Adafruit. Then I wrote a simple Python script that uses BlueZ to scan for devices and send their UIDs to PureData (Pd) over UDP. Once in Pd, the data is parsed into ASCII/number values, which are treated as MIDI notes and converted into frequencies. Each UID becomes a sequence of 16 notes, which is saved into tables (arrays). The sequences are then played back, and the playback tempo and delay can be adjusted with potentiometers on the Lots of Pots (LOP) expansion board on the Pi.
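Here’s a minimal sketch of the scanning side, assuming PyBluez for discovery and a [netreceive] object listening on UDP port 3000 in the Pd patch; the port and message format are my assumptions, not necessarily what the real script uses. (On the Pd side, [mtof] does the MIDI-note-to-frequency conversion, f = 440 * 2^((m-69)/12).)

import socket
import bluetooth  # PyBluez

PD_HOST, PD_PORT = "127.0.0.1", 3000
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    # Scan for nearby Bluetooth devices; each address is the device's UID
    for addr in bluetooth.discover_devices(duration=8, lookup_names=False):
        # Pd's netreceive expects FUDI messages terminated with ";\n"
        sock.sendto((addr + ";\n").encode(), (PD_HOST, PD_PORT))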

Here it is in action on Instagram:

https://www.instagram.com/p/BvAXCZGhs1Z/

For the next steps on this project I’m thinking about putting the device in public locations to see what it picks up – scanning people’s devices and recording the melodies. I imagine each place will have a totally different sound and texture.

Some questions come up like:

  1. How do I make this device portable and durable? Battery-powered and in a metal pedal case, maybe.
  2. Should the device have its own amp and speaker to play back while on location?

How do you think this project should evolve? Leave a comment below.


SI03 Experiment 3 – NSynth

Everyone is talking about Artificial Intelligence (AI) and Machine Learning (ML), and I’m beginning to investigate how they may shape the way we design musical instruments. First let’s get the terminology straight: AI and machine learning are not the same thing, although many use the terms interchangeably.

Artificial Intelligence is a large umbrella term for computing that could be perceived as thinking autonomously. Under this umbrella are concepts like computer vision, pattern recognition (such as facial and speech recognition), generative creativity, natural language processing and, yes, you guessed it, machine learning.

Machine Learning is one of the ways we may achieve AI. Machine learning relies on working with large datasets, examining and comparing the data to find common patterns and explore nuances.

“Machine learning is the study of computer algorithms that improve automatically through experience.”

– Tom M. Mitchell, former Chair of the Machine Learning Department at Carnegie Mellon University

My first foray into Machine Learning was taking a fantastic online course by Rebecca Fiebrink called Machine Learning for Musicians and Artists. I highly recommend it if you’re interested in the topic; the way the course is structured provides a solid understanding and a practical working knowledge of machine learning.

Next, I chose to build Google’s open source project NSynth with some of my students over the past summer, and I’m finally getting around to understanding it and playing around with it. Their team did a great job of documenting how to build it using, yes, a Raspberry Pi. Instructions on how to build it are on the NSynth GitHub.

According to the Magenta team that built NSynth:
NSynth uses deep neural networks to generate sounds at the level of individual samples. Learning directly from data, NSynth provides artists with intuitive control over timbre and dynamics, and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer.

NSynth is an algorithm that can generate new sounds by combining the features of existing sounds. To do that, the algorithm takes different sounds as input.

The Magenta team at Google also has some other great open source tools that are worth exploring. More on that later…


SI02 Experiment 2 – Nukulele

Rather than using the Raspberry Pi and Pd (PureData) as the sound generator, in this experiment I wanted to use another sound source: something that resonates acoustically, whose sound I could alter while retaining the playability of the original instrument. Why not a ukulele?

Hacking a ukulele:
cutting a hole so the Raspberry Pi is accessible from the front of the instrument. Notice the cheap piezo microphone taped next to the bridge of the ukulele to pick up the sound and bring it into the Pi.

Running Headless

I know what you’re thinking: you probably have a mental picture of a chicken running around without a head. But in Pi parlance, running headless means running your Raspberry Pi without a monitor (screen) or keyboard.

One of the reasons I like working with a Raspberry Pi over an Arduino is that, unlike the Arduino, the Raspberry Pi is a standalone computer with an operating system, network capabilities, and video output built in. It can be a desktop computer or be embedded within another object or installation. Lots of possibilities open up.

There are a lot of tutorials out there on running a Raspberry Pi headless; you can start with mine. Unlike most “how to run headless” tutorials, I had to figure out how to launch a script that starts two programs automatically when the Pi boots up. Let’s have a look at those files:

Python script

Like experiment SI01, we start by grabbing data from somewhere else to bring into Pd. In this case it’s data from the Lots Of Pots (LOP) board made by Modern Device: a Raspberry Pi expansion board with 8 pots (potentiometers), thus the name, and analog-to-digital converters to send the data from the pots to the Pi. The Python script grabs the data from the pots and 4 buttons and sends it to Pd via UDP.

You can look at the lop2pd.py script here.
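In spirit, the script does something like this (a minimal sketch; the MCP3008-style SPI ADC, the port, and the message format are assumptions on my part, so check the real lop2pd.py for the specifics):

import socket
import time
import spidev  # SPI access on the Pi

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
spi = spidev.SpiDev()
spi.open(0, 0)              # SPI bus 0, chip-select 0
spi.max_speed_hz = 1000000

def read_adc(channel):
    # Read one 10-bit value (0-1023) from an MCP3008-style ADC
    r = spi.xfer2([1, (8 + channel) << 4, 0])
    return ((r[1] & 3) << 8) | r[2]

while True:
    for ch in range(8):
        # Send "pot <channel> <value>;\n" as a FUDI message to [netreceive] in Pd
        sock.sendto(f"pot {ch} {read_adc(ch)};\n".encode(), ("127.0.0.1", 3000))
    time.sleep(0.05)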

Altering the ukulele sound using Pd

There are a lot of different things we can do to the sound coming into Pd, pretty much any digital signal processing you can think of: distortion, delays or echoes, chorusing, and any of those typical guitar pedal effects. But I chose to create a ring modulator effect, which makes the ukulele sound more like a sequenced synthesizer.
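Ring modulation is conceptually simple: multiply the incoming signal by a carrier oscillator. In Pd that’s the [adc~] signal and an [osc~] feeding a [*~]; here’s the same idea as a minimal NumPy sketch (the function name and defaults are just for illustration):

import numpy as np

def ring_modulate(signal, carrier_hz, sample_rate=44100):
    # Multiply the input by a sine carrier, sample by sample
    t = np.arange(len(signal)) / sample_rate
    return signal * np.sin(2 * np.pi * carrier_hz * t)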

Quick Tangent about Ring Modulation

Ring modulation has been used in music since as early as 1956 by people like Stockhausen, and later by John McLaughlin in the Mahavishnu Orchestra and by Miles Davis in the 1970s. You might know the sound from Black Sabbath’s Paranoid or perhaps from the heavily modulated voices of the Daleks on Doctor Who in the 1960s. More info about it here: https://en.wikipedia.org/wiki/Ring_modulation

Dalek from Doctor Who

Tricky Startup Business

The trickiest part of this experiment was getting the files to launch automatically. There seems to be a bit of voodoo here; I think it’s mostly because the files need to have specific permissions, be owned by the root user, and be in the right location.

Here’s how it works. First you need to edit the rc.local file, which you’ll need root permission to do (sudo). Add the following line (before the final exit 0):

sleep 10 && /etc/profile.d/pd_startup.sh

Then the pd_startup.sh file needs to launch the Pd and Python programs as the pi user, something like this:

#!/bin/bash
echo 'starting Pd now'
sudo -H -u pi pd -nogui /path/to/folder/pd_file.pd &
sudo -H -u pi python /path/to/folder/lop2pd.py &

Then make sure this file is owned by root and is executable:

sudo chown root:root /etc/profile.d/pd_startup.sh
sudo chmod 755 /etc/profile.d/pd_startup.sh

(chmod 755 makes the file readable and executable by everyone, and writable only by the owner.)

Yes, it’s unavoidable: annoying but necessary stuff to learn. Luckily I had to learn it when I began making websites, and it’s a handy thing to know when you begin to get under the hood of any computer. If you need to know more about file permissions and ownership, try this article.


SI01 Experiment 1 – SenseSynth

I’m going to start documenting each Sonic Interactions experiment for the purpose of marking where I am in the process. Each one of these is merely a rough sketch to build upon and is by no means finished. My first experiment takes data from the accelerometer of a Sense HAT and uses it to change the parameters of a simple synth.

Goal: use an accelerometer to control the frequencies of a synth, experiment with gestural interfaces for music

Questions:
How do we tame the wild data coming out of the accelerometer so we can use it in a musical way in a synth?
How do we use the joystick and middle click to add to the interaction?

Process:

  1. Write a Python script to retrieve data from the Sense HAT and send it to Pd.
  2. Use the data from Python in Pd to alter the frequencies of oscillators.
  3. Determine the mapping of data to synth parameters. I started with this:

The Pitch (x axis) from the Accelerometer was mapped to OSC 1 (oscillator frequency)
The Roll (y axis) was mapped to OSC 2
The Yaw (z axis) was mapped to OSC 3
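Here’s a minimal sketch of steps 1 and 3: read pitch/roll/yaw from the Sense HAT’s accelerometer and send them to Pd over UDP as one message to drive OSC 1-3. The port and message format are assumptions; the real script is linked below.

import socket
import time
from sense_hat import SenseHat

sense = SenseHat()
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    # Orientation from the accelerometer, in degrees: {'pitch': .., 'roll': .., 'yaw': ..}
    o = sense.get_accelerometer()
    # FUDI message for [netreceive] in Pd; pitch -> OSC 1, roll -> OSC 2, yaw -> OSC 3
    msg = f"orient {o['pitch']:.1f} {o['roll']:.1f} {o['yaw']:.1f};\n"
    sock.sendto(msg.encode(), ("127.0.0.1", 3000))
    time.sleep(0.02)  # ~50 updates per second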

All the code from this experiment can be found in the Sonic Interactions GitHub project. The Python script is here and the Pd file is here.

Let me know what you’d want to see done with this experiment next.
To make it more musical or more expressive, would you add a finer scale to the sensitivity of the accelerometer data so that you could, for example, play scales more easily?


Research Note 014: Amateurs & Professionals

These days I’ve been telling people about my recent return to playing music seriously. Quite a few people ask whether I was a professional or an amateur, which usually makes me pause to think.
When you’re referred to as an amateur, it’s usually implied that you might be less qualified or even less talented than a professional. It is assumed that an amateur is someone who would have liked to be a professional but was unable to reach that level. But contrary to these negative implications, when you look up the word “amateur” you’ll see that it actually means “lover of”, and there are many amateurs in all fields who are working at a very high level.

Consider a hobby other than music, that you do with your free time. Maybe you brew beer, take nature photographs, or fix cars. Whatever it is, have you ever even considered doing it professionally? Probably not. And most likely this isn’t because you’re not good enough (and whether you are or not is probably irrelevant to your decision), but rather because the very fact that it’s a hobby means that it’s something you do that isn’t work. Instead, it’s a chance to spend time on something fun and fulfilling that doesn’t saddle you with any outside pressure to succeed, earn a living, etc.

Musicians, more so than other amateurs, seem to have a more difficult time simply engaging with music as a hobby. Perhaps this is because tools like DAWs are fundamentally designed around a recording and production mentality. Compared to an acoustic guitar player, someone with a laptop can actually produce a polished album of music (remember those?). For the guitar player, pulling out their guitar and playing for a few minutes while sitting on the couch may be the extent of their musical aspirations. And they don’t see this as failure. They’re unlikely to be lamenting their inability to get gigs or write more music or get record deals. They’re having exactly the relationship with music that they want. In fact, they’re usually not even recording what they play; once it’s in the air, it’s gone.

By definition, being a professional means having to spend at least some amount of time thinking about the marketplace. Is there an audience for the music you’re making? If not, you’re guaranteed to fail. Amateurs, on the other hand, never have to think about this question at all. This frees them to make music entirely for themselves, on their own terms.

An easy way to do this is to put yourself into a musical context in which you actually are an amateur—by experimenting with a genre in which you have no prior experience. Are you a committed hip-hop producer? Try making a jazz track. Your expectations are bound to be lower, simply because you have no prior successes or failures against which to gauge your current work. Even if you hate the results, it’s likely that you’ll learn something from the experience.

Even if you do aspire to make a living out of creating original music, it might be helpful to think like an amateur in order to lower your stress and bring the fun back to your music-making time. Amateurs often have a genuinely more pleasurable experience than professionals working in the same field, and this is almost certainly because they’re free from outside pressure. If you can instill this mindset into your own work, you’ll probably have both better results and a better time.


Research Note 013: Interplay in Jazz

As you may or may not know, I’m currently enjoying a sabbatical that has given me the time to explore my love of music and musical instruments. My research is about how we interact when we create music: with instruments, with other people, and with environments.

I’m going to begin with what I know best: the interplay that happens with others when creating music. For the purposes of this discussion, I define that interplay, or musical interaction, as one or more members of an ensemble improvising spontaneously in response to what the other participants are playing.

Here’s what a few searches bring up:

  1. In the wake of Paul Berliner’s and Ingrid Monson’s landmark interview-based research of the mid-1990s, the notion that “good jazz improvisation is sociable and interactive just like a conversation” (Monson 1996, 84) has become widely accepted.
  2. Playing jazz is as much about active listening as it is being able to express yourself on an instrument.

Trying to categorize the interactions within a musical context:

Microinteractions take place at a very fine level of musical detail, too small in scale to be captured by standard Western notation, and include such phenomena as the tiny adjustments in tempo, dynamics, pitch, and articulation that musicians make while playing together.

Macrointeractions involve the broad sorts of collective coordination whereby improvising musicians play in unified stylistic idioms (Gratier 2008, 88) and at mutually coherent intensity levels. For instance, if one ensemble member, mid-performance, starts playing louder, or with shorter rhythmic values, or with increasingly dissonant harmonies, others may follow suit by reinforcing, complementing, or otherwise accommodating this strategy.

Edward Tufte shows some more sophisticated song-structure visualizations on his forum.

Research Note 012: Timeboxing for Music

Creating music can give you so much joy: when you’re in a state of flow, you finish a song, and it sounds great. But ask most music producers and they’ll agree that there seem to be as many or more moments of agony. Despite our will, there are lots of real reasons why we sometimes procrastinate, including fear of failure, fear of success, and simple laziness.

If you’re a chronic procrastinator, you’re not alone. There are many creative (and non-creative) people who suffer from task aversion and will find any excuse to avoid doing the work that really needs to get done. One strategy for overcoming procrastination that’s commonly used in the software development world is known as timeboxing.

Timeboxing simply means setting a fixed amount of time for a particular task. The amount of time you choose is up to you, but it should be short enough to be easily manageable by even the most determined procrastinator. I’ve been using the Pomodoro Technique for years as a designer, but I’ve never used it for creating or producing music. I’m curious to see if it’s as effective.

Here’s what I’ll try as an experiment for timeboxing for songwriting:

  1. Create a drumbeat with 3 variations (25mins)
  2. Write and record a bass line with 2 separate parts (25mins)
  3. Write a melody/lead line and chords for the bass parts (25mins)
  4. Arrange the song structure (intro, verse, chorus, ending) (25mins)

I’ll post what I create tomorrow. Don’t judge me, that’s not what this is about.


Research Note 011: Creating Constraints to Create

Normally I get shit done, but with music composition I tend to take my time. That usually leads to other distractions, and songs just don’t get finished. In fact, they barely get started: I’ll have a melody or a rhythm kicking around in my head, and I play it into Ableton and save it. That’s not a song; it’s just a seed that needs arranging, structure, and sound design.

I’ve decided to set myself some time constraints.

Here’s what the Ableton Creative Strategies for Electronic Music Producers, a fantastic resource, says:

  • Give yourself a deadline. Nothing motivates like a due date. Since work always expands to fill the available time, it’s necessary to actually put a limit on that time. If you find self-imposed deadlines to be too “soft,” try having someone else assign the deadline for you, with the requirement that you show them the work at the end to ensure accountability. Or engage in a collective challenge, such as February Album Writing Month.
  • Schedule tasks as if they were appointments with yourself. Try using a calendar to restrict specific types of work to specific times. For example:
    • Sound design: 7-8pm
    • Form/song structure: 8-9pm
    • Mixing: 9-10pm

Timeboxing specific tasks serves two purposes: it forces you to narrow your focus while simultaneously eliminating the risk of non-musical distractions (Facebook, etc.). You wouldn’t check your email in the middle of a business meeting, so treat these “appointments” with the same kind of care.

Research Note 010

This week I’ve been deliberating over buying a new guitar after selling my old Roland SH-101. The guitar I’m interested in is an archtop hand-made by Thomas Groppi from 400-year-old red cedar from Stanley Park.
But that inner voice is asking: you have lots of guitars, why would you need another?

For me, guitars have such different characteristics, and these specific nuances change not only how the instrument sounds but also how I play it. Thomas has been kind enough to lend me a few of his guitars in the past, and I have to say that, although each one is unique in its tone and playability, there is a consistency and attention to craft that puts Thomas at a level of luthiery that few achieve within a decade of building guitars.

Ain’t she a beaut! Thomas Groppi’s Stanley Blues

I’ll play the Stanley Blues and make a decision within a week. I’ll try to post some videos of me playing it so you can help me decide.


Research Note 009: Is improvisation just spontaneous composition?

Well, yes and no…

Yes, when you improvise you are certainly generating musical ideas.
Yes, and these ideas contain what are referred to as “compositional elements”.
Yes, and as a skilled improviser, you are often constructing a solo in a sophisticated “compositional” manner.

But describing improvisation as “spontaneous composition” is an incomplete (and usually inaccurate) description of the improvisational process.

I’ve had a bit more time to think and read about this and I believe that in the most fundamental sense, the difference between improvisation and composition comes down to a matter of conscious deliberation.

Conscious deliberation gives us the ability to change perspective and reflect on the global, long-term implications of our decisions. Deliberative, conscious thoughts have to pass through the narrow straits of short-term memory, which holds only a few symbols (approximately six), and can attend to only one thing at a time (or perhaps two or three, by alternating attention). Recent research using functional magnetic resonance imaging (fMRI) to record neuronal activity has shown that even simple acts (like reading a short sentence) employ a fairly intricate sequence of neural processes. Essentially, the rational or ‘composing’ mind tends, by nature, to use experience and tradition to drive decisions, while the ‘improvising’ or non-conscious mind taps into a huge wealth of long-term memory and experience that is stored subconsciously. This would suggest that using improvisation to compose may lead to more unexpected ideas.

Take human speech as an example. The vast majority of the time you are speaking (talking with friends, explaining something to someone, etc.), you are actually improvising. Sure, you might have a topic (like “where would you like to eat lunch?”), but you aren’t planning, word for word, what you’re going to say. You’re simply following the immediate need to communicate, in a ‘flow’. In essence, you’re reacting in real time.

Now contrast that with writing something. Writing gives you a chance to choose your words or overall message more carefully. You can take your ideas out of “real-time”, and consciously craft them with the kind of nuance that best suits your intentions.

Musical improvisation and composition have a similar relationship. When you improvise, you are reacting, moment to moment (whether you think you are, or not).

Scientifically, improvisation involves a largely different neurological process than composing. As neuroscientist and jazz pianist Charles Limb discovered in his research, the main parts of the brain that “light up” for a skilled improviser are the parts that have to do with immediate communication. Check out his TED talk below.

The skilled improviser is essentially in the realm of attempting to communicate: more specifically, to connect with the other musicians with whom she is playing, as well as with the audience.

Communication involves not only taking into account the ideas that you have an impulse to express, but equally important, that which you are hearing and reacting to.

Listening is at the heart of it all.

The best improvisers in jazz are those who listen deeply and respond in accordance with what they hear. And of course, listening is a very active thing to do. To listen deeply is to be fully present.

And it’s not just about listening to the others with whom you’re playing. It’s also about listening deeply to yourself. It’s about not being stuck in the “deliberation” of your musical ideas at the expense of losing your improvisational flow.