
SI04 Experiment 4 – NonInstrument

Most of my research these days is about getting to the heart of how we interact with musical instruments: the nuanced touch of a piano player, or the subtle vibrato that makes one guitar player different from another. As a departure, or brief interlude, I've also been thinking about how to make an instrument that plays itself. It's not a new idea; there are plenty of generative art projects that create their own ambient soundtracks, but I'd like to look into how an instrument might create music from data it gathers from its environment.

The NonInstrument is a sonic interaction experiment that scans for Bluetooth devices and creates melodies from each device's UID. The project explores how our devices are constantly talking to each other without us even being aware of these exchanges.

What’s a UID?

A unique identifier (UID) is a numeric or alphanumeric string associated with a single device. In other words, it's a unique sequence of numbers or letters that can be used to pick your device out of every other device in a huge ocean of devices.

In the scan output, the UID can be found on the Address line, e.g. Address: F4-5C-89-AB-18-48
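
Since a UID is just hexadecimal pairs, each pair can be read as a plain number, which is what makes it usable as musical material. A quick Python illustration:

uid = "F4-5C-89-AB-18-48"
# read each hex pair as an integer
values = [int(pair, 16) for pair in uid.split("-")]
print(values)  # [244, 92, 137, 171, 24, 72]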

How it works

With the Sonic Interactions Kit (SIK) I installed BlueZ, the Linux Bluetooth stack; there's a decent guide on how to install it at Adafruit. Then I wrote a simple Python script that uses BlueZ to scan for devices and send the UIDs to PureData (Pd) over UDP. Once that data is in Pd, it's parsed into ASCII and number values, which are treated as MIDI notes and converted into frequencies. Each UID becomes a sequence of 16 notes saved into tables (arrays). The sequences are then played back, and the playback tempo and delay can be adjusted with potentiometers on the Lots of Pots (LOP) expansion board on the Pi.
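
For flavor, here's a minimal sketch of the scan-and-send loop, assuming BlueZ's hcitool is available and a Pd patch is listening on UDP port 3000 with [netreceive] (the port and message format here are my choices, not necessarily the project's):

import socket
import subprocess
import time

PD_HOST, PD_PORT = "127.0.0.1", 3000  # assumed; match your [netreceive]

def scan_devices():
    # `hcitool scan` prints a header line, then one "ADDRESS  Name" line per device
    out = subprocess.run(["hcitool", "scan"], capture_output=True, text=True).stdout
    return [line.split()[0] for line in out.splitlines()[1:] if line.strip()]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    for uid in scan_devices():
        # Pd's [netreceive] expects semicolon-terminated (FUDI) messages
        sock.sendto(f"uid {uid};\n".encode(), (PD_HOST, PD_PORT))
    time.sleep(5)  # pause between scans

On the Pd side, [mtof] handles the MIDI-to-frequency step: f = 440 * 2^((m - 69) / 12).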

Here it is in action on Instagram:

https://www.instagram.com/p/BvAXCZGhs1Z/?utm_source=ig_web_button_share_sheet

For the next steps on this project I’m thinking about putting the device in public locations to see what it picks up – scanning people’s devices and recording the melodies. I imagine each place will have a totally different sound and texture.

Some questions come up:

  1. How do I make this device portable and durable? Battery-powered and housed in a metal pedal case, maybe.
  2. Should the device have its own amp and speaker for playback while on location?

How do you think this project should evolve? Leave a comment below.


SI03 Experiment 3 – NSynth

Everyone is talking about Artificial Intelligence (AI) and Machine Learning (ML), and I'm beginning to investigate how they may shape the way we design musical instruments. First let's get the terminology straight: AI and Machine Learning are not the same thing, although many use the terms interchangeably.

Artificial Intelligence is a large umbrella term for computing that could be perceived as thinking autonomously. Under that umbrella are concepts like computer vision, pattern recognition (such as facial and speech recognition), generative creativity, natural language processing and, yes, you guessed it, machine learning.

Machine Learning is one of the ways we may achieve AI. Machine learning relies on working with large datasets, examining and comparing the data to find common patterns and explore nuances.

Machine learning is the study of computer algorithms that improve automatically through experience.

Tom M. Mitchell, former Chair of the Machine Learning Department at Carnegie Mellon University

My first foray into Machine Learning was taking a fantastic online course by Rebecca Fiebrink called Machine Learning for Musicians and Artists. I highly recommend it if you're interested in the topic; the way the course is structured provides a solid understanding and practical working knowledge of machine learning.

Next, I chose to build Google's open source project NSynth with some of my students over the past summer, and I'm finally getting around to understanding it and playing around with it. Their team did a great job of documenting how to build it using, yes, a Raspberry Pi. Instructions are on the NSynth Github.

According to the Magenta team that built NSynth:
NSynth uses deep neural networks to generate sounds at the level of individual samples. Learning directly from data, NSynth provides artists with intuitive control over timbre and dynamics, and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer.

NSynth is an algorithm that can generate new sounds by combining the features of existing sounds. To do that, the algorithm takes different sounds as input.
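
To make that idea concrete, here's a toy sketch of "combining the features of existing sounds": blend two waveforms in a feature space instead of mixing the raw samples. I'm using a plain FFT spectrum as a stand-in feature space; NSynth's real embedding is learned by a WaveNet autoencoder, but the interpolation idea is the same.

import numpy as np

def blend(sound_a, sound_b, mix=0.5):
    # assumes both waveforms are the same length
    # "encode" each sound into a feature space (here: its spectrum)
    z_a, z_b = np.fft.rfft(sound_a), np.fft.rfft(sound_b)
    # interpolate between the two feature vectors
    z = (1 - mix) * z_a + mix * z_b
    # "decode" back into a waveform
    return np.fft.irfft(z, n=len(sound_a))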

The Magenta team at Google also has some great open source tools that are worth exploring. More on that later…


SI02 Experiment 2 – Nukulele

Rather than using the Raspberry Pi and Pd (PureData) as the sound generator, in this experiment I wanted to use another sound source: something that resonates acoustically, whose sound I could alter while retaining the playability of the original instrument. Why not a ukulele?

Hacking a ukulele:
Cutting a hole for the Raspberry Pi to be accessible from the front of the instrument. Notice the cheap piezo microphone taped next to the bridge of the ukulele to pick up the sound and bring it into the Pi.

Running Headless

I know what you're thinking; you probably have a mental picture of a chicken running around without a head, but in Pi parlance, running headless means running your Raspberry Pi without a monitor (screen) or keyboard.

One of the reasons I like working with a Raspberry Pi over an Arduino is that, unlike the Arduino, the Raspberry Pi is a standalone computer with an operating system, network capabilities and video output built in. It can be a desktop computer or be embedded within another object or installation. Lots of possibilities open up.

There are a lot of tutorials out there on running a Raspberry Pi headless; you can start with mine. Unlike most "how to run headless" tutorials, though, I had to figure out how to launch a script that starts two files automatically when the Pi boots up. Let's have a look at those files:

Python script

Like experiment SI01, we start by grabbing data from somewhere else to bring into Pd. In this case it's data from the Lots Of Pots (LOP) board made by Modern Device: a Raspberry Pi expansion board with 8 pots (potentiometers), thus the name, and analog-to-digital converters to send the data from the pots to the Pi. The Python script grabs the data from the pots and 4 buttons and sends it to Pd via UDP.

You can look at the lop2pd.py script here.
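
If you just want the shape of that script, here's a minimal sketch under some assumptions: an MCP3008-style SPI ADC on the LOP board and a Pd patch listening on UDP port 3000 (check the board docs and the real lop2pd.py for the actual details; the buttons are omitted here):

import socket
import time
import spidev  # SPI access on the Raspberry Pi

PD_HOST, PD_PORT = "127.0.0.1", 3000  # assumed; match your [netreceive]

spi = spidev.SpiDev()
spi.open(0, 0)  # SPI bus 0, chip select 0
spi.max_speed_hz = 1000000

def read_adc(channel):
    # standard single-ended read for an MCP3008-style 10-bit ADC
    r = spi.xfer2([1, (8 + channel) << 4, 0])
    return ((r[1] & 3) << 8) | r[2]  # value in 0-1023

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
while True:
    for ch in range(8):
        # semicolon-terminated FUDI message, e.g. "pot 3 512;"
        sock.sendto(f"pot {ch} {read_adc(ch)};\n".encode(), (PD_HOST, PD_PORT))
    time.sleep(0.05)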

Altering the ukulele sound using Pd

There are a lot of different things we can do to the sound coming into Pd, pretty much any digital signal processing you can think of: distortion, delays or echoes, chorusing and any of those typical guitar pedal effects. I chose to create a Ring Modulator effect, which makes the ukulele sound more like a sequenced synthesizer.
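
Ring modulation itself is tiny: multiply the incoming signal by a sine-wave carrier. In Pd that's an [osc~] feeding one inlet of [*~]; here's the same idea as a Python sketch (the carrier frequency and sample rate are arbitrary choices):

import numpy as np

def ring_mod(signal, carrier_hz, sample_rate=44100):
    # multiplying by a sine carrier produces sum and difference
    # frequencies, which give ring mod its metallic character
    t = np.arange(len(signal)) / sample_rate
    return signal * np.sin(2 * np.pi * carrier_hz * t)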

Quick Tangent about Ring Modulation

Ring modulation has been used in music since as early as 1956, by people like Stockhausen, and later by John McLaughlin in the Mahavishnu Orchestra and Miles Davis in the 1970s. You might know the sound from Black Sabbath's Paranoid, or perhaps the heavily modulated voices of the Daleks on Doctor Who in the 1960s. More info about it here: https://en.wikipedia.org/wiki/Ring_modulation

Dalek from Doctor Who

Tricky Startup Business

The trickiest part of this experiment was getting the files to launch automatically. There seems to be a bit of voodoo here; I think it's mostly because the files need to have specific permissions, be owned by the root user and be in the right location.

Here's how it works. First you need to edit the rc.local file, which requires root permission (sudo). Add the following line (before the final exit 0):

sleep 10 && /etc/profile.d/pd_startup.sh

Then the pd_startup.sh file needs to launch the Python and Pd files, like so:

#!/bin/sh
echo 'starting Pd now'
# run both processes as the regular "pi" user rather than root
sudo -H -u pi pd -nogui /path/to/folder/pd_file.pd &
sudo -H -u pi python /path/to/folder/lop2pd.py &

Then make sure the file is owned by root and executable (chmod 755 gives the owner read/write/execute and everyone else read/execute):

sudo chown root:root /etc/profile.d/pd_startup.sh
sudo chmod 755 /etc/profile.d/pd_startup.sh

Yes, it's unavoidable: annoying but necessary stuff to learn. Luckily I had to learn it when I began making websites, and it's a handy thing to know when you begin to get under the hood of any computer. If you need to know more about file permissions and ownership, try this article.


SI01 Experiment 1 – SenseSynth

I'm going to start documenting each Sonic Interactions experiment to mark where I am in the process. Each one is merely a rough sketch to build upon, and by no means finished. My first experiment takes data from the accelerometer of a Sense HAT and uses it to change the parameters of a simple synth.

Goal: use an accelerometer to control the frequencies of a synth, and experiment with gestural interfaces for music.

Questions:
How do we tame the wild data coming out of the accelerometer to use it in a musical way in a synth?
How do we use the joystick and its middle click to add to the interaction?

Process:

  1. Write a Python script to retrieve data from the Sense HAT and send it to Pd (a minimal sketch of this step follows the mapping list below)
  2. Use the data from Python in Pd to alter the frequencies of oscillators
  3. Determine the mapping of data to synth parameters. I started with this:

The pitch (x axis) from the accelerometer was mapped to OSC 1 (oscillator frequency)
The roll (y axis) was mapped to OSC 2
The yaw (z axis) was mapped to OSC 3
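
Here's that sketch of step 1, assuming a Pd patch listening on UDP port 3000 with [netreceive] (the port, message format and update rate are my assumptions; the real files are linked below):

import socket
import time
from sense_hat import SenseHat

PD_HOST, PD_PORT = "127.0.0.1", 3000  # assumed; match your [netreceive]

sense = SenseHat()
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    o = sense.get_orientation()  # pitch, roll, yaw in degrees
    # semicolon-terminated FUDI message for Pd to pick apart with [route orient]
    msg = "orient {:.1f} {:.1f} {:.1f};\n".format(o["pitch"], o["roll"], o["yaw"])
    sock.sendto(msg.encode(), (PD_HOST, PD_PORT))
    time.sleep(0.05)  # roughly 20 updates per second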

All the code from this experiment can be found in the Sonic Interactions Github project. The Python script is here
and the Pd file is here.

Let me know what you'd want to see done with this experiment next.
To make it more musical or more expressive, would you add a finer scale to the sensitivity of the accelerometer data so that you could, for example, play scales more easily?


Tree in a Forest


I'm curious about genealogy. Maybe it has something to do with being Armenian and having your family tree cut off after only a few generations, but I've recently been looking into genealogy-tracking software. What I've found is the following:
Family Tree Maker (PC)
Legacy Family Tree (PC)
RootsMagic (PC)
REUNION (Mac)
MacFamily Tree (Mac)

They all seemed somewhat amateur and small, if not completely dated. I realized this is a perfect opportunity for an online application; after all, you need a community of people inputting data for it to be effective. Wikipedia lists a few open-source genealogy options.

As a trial, I’ve installed phpGedView at treeinaforest.com to give it a whirl. My first impression is that it seems fairly comprehensive but lacking in usability and aesthetics.