
Updating BlueZ on RPi

Update (Jul 9, 2019):
Raspbian Buster, released on June 20, 2019, now ships with BlueZ 5.50 by default!


This post shows how to update BlueZ on a Raspberry Pi from 5.43 (the default version that ships with Raspbian Stretch) to 5.50 (release notes [1]). I assume you already have a Raspberry Pi 3 B+ or Raspberry Pi Zero W running Raspbian Stretch.

Steps [2]
1. Check Current BlueZ Version
1-1. Before starting, let’s check the current BlueZ version.

bluetoothctl -v

In the case of Raspbian Stretch, the BlueZ version should be 5.43:

bluetoothctl -v
5.43

2. Install Dependencies
2-1. Update the package list.

sudo apt-get update

2-2. Install the dependencies.

sudo apt-get install libdbus-1-dev libglib2.0-dev libudev-dev libical-dev libreadline-dev -y

3. Install BlueZ
3-1. Download BlueZ source code.

wget www.kernel.org/pub/linux/bluetooth/bluez-5.50.tar.xz

3-2. Uncompress the downloaded file.

tar xvf bluez-5.50.tar.xz && cd bluez-5.50

3-3. Configure.

./configure --prefix=/usr --mandir=/usr/share/man --sysconfdir=/etc --localstatedir=/var --enable-experimental 

3-4. Compile the source code.

make -j4

3-5. Install.

sudo make install

3-6. Reboot the Raspberry Pi.

sudo reboot

4. Verify the Update [3][4]
4-1. Verify the BlueZ version by issuing the command below.

bluetoothctl -v

The result should be like this:

bluetoothctl -v
bluetoothctl: 5.50
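If bluetoothctl still reports the old version, the running daemon is probably the previously installed binary. A quick sanity check (assuming the stock systemd service on Raspbian):

bluetoothd --version
systemctl status bluetooth

The status output shows which bluetoothd binary the service is actually running; if it points at the old one, a reboot (or sudo systemctl daemon-reload followed by sudo systemctl restart bluetooth) usually sorts it out. This is the issue covered in [3] and [4].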

References
[1] BlueZ Release Notes
[2] Installing Bluez 5.44 onto Raspbian? – Stack Exchange
[3] Bluetooth LE upgrade not working – Raspberry Pi Forum
[4] Why does bluetoothd show version 5.37 after installing Bluez 5.45? – Stack Exchange


Introducing PureData

I’ve been posting my experiments exploring sound/musical instrument design and prototyping, and it occurred to me that although my writing has focussed on the creative process and the user experience of playing instruments, it would help you, the reader, to have more context and explanation of the technical side than just the links embedded throughout my posts. Today I’d like to introduce you to Pure Data, an amazingly deep yet seemingly simple music and sound creation development environment. Here’s the official description from the puredata.info website:

Pure Data

Pure Data (Pd) is a visual signal programming language which makes it easy to construct programs to operate on signals. We are going to use it extensively in this textbook as a tool for sound design. The program is in active development and improving all the time. It is a free alternative to Max/MSP that many see as an improvement.

As I learn more about Pd I realize that it has a number of characteristics that make it incredibly resilient. Apart from Max/MSP and VVVV, Pure Data is one of the only pieces of software that lets you program your own applications using a visual, flowchart-like graphical user interface. Pd is open-source and platform-agnostic, working consistently across Windows, Mac and Linux (and yes, Raspberry Pi!). Pure Data is also extremely extensible: you can install libraries (externals) to add new capabilities, and many people write their own libraries. Finally, Pure Data can be embedded into other frameworks and hardware; the libpd library is used for iOS, Android and OpenFrameworks application development.

Ultimately, Pd enables musicians, visual artists, performers, researchers, and developers to create software graphically without writing lines of code.

Pd can be used to process and generate sound, video, and 2D/3D graphics, and to interface with sensors, input devices, and MIDI. Pd can easily work over local and remote networks to integrate wearable technology, motor systems, lighting rigs, and other equipment. It is suitable for learning basic multimedia processing and visual programming methods as well as for realizing complex systems for large-scale projects.

Here are some of the basic components of Pure Data:

Objects

In Pd we use a flowchart with lines connecting boxes together to build programs. We call these boxes objects. Stuff goes in, stuff comes out. For data to pass into or out of them, objects must have inlets or outlets. Inlets are at the top of an object box, outlets are at the bottom; they are shown by small “tabs” on the edge of the object box. An object might, for example, have two inlets and one outlet.

Connections

The connections between objects are sometimes called cords or wires. They
are drawn in a straight line between the outlet of one object and the inlet of
another. It is okay for them to cross, but you should try to avoid this since it
makes the patch diagram harder to read.

Data

The stuff, or data, being processed comes in a few flavours: sound signals and messages. Objects give clues about what kind of data they process by their name. For example, an object that adds together two sound signals looks like [+~]. The + means that this is an addition object, and the ~ (tilde character) means that the object operates on audio signals.

Edit Mode

When you create a new object from the menu, Pd automatically enters edit mode, so if you just completed the instructions above you should currently be in edit mode. In this mode you can make connections between objects or delete objects and connections.

Wiring

Hovering over an outlet will change the mouse cursor to a new “wiring tool.” If you click and hold the mouse when the tool is active you will be able to drag a connection away from the object.

Bang Message

This is the most fundamental and smallest message. It just means “compute something.” Bangs cause most objects to output their current value or advance to their next state. Other messages have an implicit bang so they don’t need to be followed with a bang to make them work.

Float Messages

“Floats” is another name for numbers. As well as regular (integer) numbers like 1, 2, 3 and negative numbers like −10 we need numbers with decimal points like −198753.2 or 10.576 to accurately represent numerical data. These are called floating point numbers, because of the way computers represent the decimal point position.

Number Box

For float numbers we have already met the number box, which is a dual-purpose GUI element. Its function is to either display a number or allow you to input one. A bevelled top right corner denotes that an object is a number box. Numbers received on the inlet are displayed and passed directly to the outlet. To input a number, click and hold the mouse over the value field and move the mouse up or down. You can also type in numbers: click on a number box, type the number and hit RETURN.

Toggle box

Another object that works with floats is the toggle box. Like a checkbox on any standard GUI or web form, it has only two states, on or off. When clicked, a cross appears in the box and it sends out a number 1; clicking again removes the cross and sends out a number 0.

Sliders and Other Numerical GUI Elements

GUI elements for horizontal and vertical sliders can be used as input and display elements. Their default range is 0 to 127, nice for MIDI controllers, but like all other GUI objects this can be changed in their properties window. Unlike those found in some other GUI systems, Pd sliders do not have a step value.

Message Box

These are visual containers for user-definable messages. They can be used to input or store a message. The right edge of a message box is curved inwards, and it always has only one inlet and one outlet. Message boxes behave as GUI elements, so when you click a message box it sends its contents to the outlet. The same action can also be triggered if the message box receives a bang message on its inlet.

Symbolic Messages

A symbol generally is a word or some text. A symbol can represent anything; it is the most basic textual message in Pure Data. Technically a symbol in Pd can contain any printable or nonprintable character, but most of the time you will only encounter symbols made out of letters, numbers, and some punctuation characters like a dash, dot, or underscore.

Lists

A list is an ordered collection of any mix of things (floats, symbols, or pointers) that is treated as one unit. Lists of floats might be used for building melody sequences or setting the time values for an envelope generator. Lists of symbols can be used to represent text data from a file or keyboard input.

Pointers

As in other programming languages, a pointer is the address of some other
piece of data. We can use them to build more complex data structures, such
as a pointer to a list of pointers to lists of floats and symbols.

Tables, Arrays, and Graphs

The word table is sometimes used interchangeably with array to mean a two-dimensional data structure. An array is one of the few invisible objects: once declared, it just exists in memory.
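To make the pieces above a little more concrete, here is roughly what a tiny patch looks like when saved to disk, since a .pd file is just text. This is only a sketch (the canvas size and object coordinates are arbitrary): an [osc~ 440] sine oscillator scaled down by [*~ 0.1] and connected to both inlets of [dac~] for stereo output.

#N canvas 0 0 450 300 10;
#X obj 50 50 osc~ 440;
#X obj 50 100 *~ 0.1;
#X obj 50 160 dac~;
#X connect 0 0 1 0;
#X connect 1 0 2 0;
#X connect 1 0 2 1;

The connect lines reference objects by creation order (0, 1, 2) and by outlet/inlet index, which is exactly the wiring you would otherwise draw by hand in edit mode.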


SI004 – NonInstrument

Most of my research these days is about getting to the heart of how we interact with musical instruments, exploring the essence of the nuanced touch a piano player has, or the subtle vibrato that makes one guitar player different from another. As a departure, or brief interlude, I’ve also been thinking about how to make an instrument that plays itself. It’s not a new idea; there are plenty of generative art projects that create their own ambient soundtracks, but I’d like to look into how an instrument might create music from data it gathers from its environment.

The NonInstrument is a sonic interaction experiment that scans for Bluetooth devices and creates melodies from their UIDs. The project explores how our devices are constantly talking to each other without us even being aware of these exchanges.

What’s a UID?

A unique identifier (UID) is a numeric or alphanumeric string that is associated with a single device. In other words, it is a unique sequence of numbers or letters that can be used to identify your device from every other device in a huge ocean of devices.

The UID can be found in the line Address: F4-5C-89-AB-18-48

How it works

With the Sonic Interactions Kit (SIK) I installed BlueZ, the Linux Bluetooth stack; there’s a decent guide on how to install it at Adafruit. Then I wrote a simple Python script that uses BlueZ to scan for devices and send the UIDs to Pure Data (Pd) over UDP. Once that data is in Pd, it is parsed into ASCII and number values, which are treated as MIDI notes and converted into frequencies. Each UID becomes a sequence of 16 notes saved into tables/arrays. The sequences are then played back, and the playback tempo and delay can be adjusted with potentiometers on the Lots of Pots (LOP) expansion board on the Pi.
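The actual script lives with the project, but here is a minimal sketch of the core idea (the names, port and note mapping are illustrative, not the project’s real values): fold the hex digits of an address like F4-5C-89-AB-18-48 into a 16-step sequence of MIDI note numbers and send it to Pd as a FUDI message over UDP, where a [netreceive] picks it up.

#!/usr/bin/python
# Sketch: turn a Bluetooth UID into a 16-step note sequence and send it to Pd.
import socket

PD_HOST, PD_PORT = "127.0.0.1", 3000   # assumed port of Pd's [netreceive]

def address_to_notes(address, steps=16):
    # Keep only the hex digits of the UID and map each one to a MIDI note
    digits = [int(ch, 16) for ch in address if ch in "0123456789abcdefABCDEF"]
    notes = [48 + d * 2 for d in digits]           # spread over roughly two octaves
    return (notes * (steps // len(notes) + 1))[:steps]

def send_to_pd(notes):
    # Pd's netreceive speaks FUDI: space-separated atoms terminated with a semicolon
    msg = "notes " + " ".join(str(n) for n in notes) + ";\n"
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(msg.encode("ascii"), (PD_HOST, PD_PORT))
    sock.close()

if __name__ == "__main__":
    send_to_pd(address_to_notes("F4-5C-89-AB-18-48"))

In Pd the incoming list can be unpacked, written into an array, and stepped through by a metro, with tempo and delay mapped to the LOP pots as described above.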

Here’s it in action on Instagram

https://www.instagram.com/p/BvAXCZGhs1Z/?utm_source=ig_web_button_share_sheet

For the next steps on this project I’m thinking about putting the device in public locations to see what it picks up – scanning people’s devices and recording the melodies. I imagine each place will have a totally different sound and texture.

Some questions come up like:

  1. How do I make this device portable and durable? Battery-powered and in a metal pedal case maybe
  2. Should the device have its own amp and speaker for playback while on location?

How do you think this project should evolve? Leave a comment below.


SI01 Experiment 1 – SenseSynth

I’m going to start documenting each Sonic Interactions experiment to mark where I am in the process. Each one of these is merely a rough sketch to build upon and is by no means finished. My first experiment takes data from the accelerometer of a Sense HAT and uses it to change parameters of a simple synth.

Goal: use an accelerometer to control the frequencies of a synth, experiment with gestural interfaces for music

Questions:
How do we tame the wild data coming out of the accelerometer and use it in a musical way in a synth?
How do we use the joystick and middle click to add to the interaction?

Process:

  1. Write a Python script to retrieve data from the Sense HAT and send it to Pd
  2. Use the data from Python in Pd to alter the frequencies of oscillators:

  3. Determine the mapping of data to synth parameters. I started with this:

The Pitch (x plane) from the Accelerometer was mapped to OSC 1 (oscillator frequency)
The Roll (y plane) was mapped to OSC 2
The Yaw (z plane) was mapped to OSC 3

All the code from this experiment can be found at the Sonic Interactions Github project. Python script is here
and the Pd file is here.
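For reference, here is a stripped-down sketch of the idea rather than the actual script linked above; it assumes the sense_hat Python library and an arbitrary UDP port on which a [netreceive] is listening in the Pd patch.

#!/usr/bin/python
# Sketch: stream pitch/roll/yaw from the Sense HAT to Pd as FUDI messages over UDP.
import socket
import time
from sense_hat import SenseHat

PD_HOST, PD_PORT = "127.0.0.1", 3000   # assumed port of Pd's [netreceive]

sense = SenseHat()
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

while True:
    o = sense.get_orientation()          # {'pitch': ..., 'roll': ..., 'yaw': ...} in degrees
    msg = "orient %f %f %f;\n" % (o["pitch"], o["roll"], o["yaw"])
    sock.sendto(msg.encode("ascii"), (PD_HOST, PD_PORT))
    time.sleep(0.05)                     # roughly 20 updates per second

On the Pd side the three values can be unpacked and scaled into the frequency ranges of OSC 1, 2 and 3.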

Let me know what you’d like to see done with this experiment next.
To make it more musical or more expressive, would you add a finer scale to the sensitivity of the accelerometer data so that you could, for example, play scales more easily?


Installing the latest PureData on a Raspberry Pi

After a fair bit of deliberation I was able to install Pure Data 0.49 on a Pi running Raspbian Stretch. I’ll walk through this step by step in case you’re not familiar with the command line in Linux.

The default way to install PureData (Pd) is:

sudo apt-get install puredata

The only problem with this method of installing Pd is that you’ll get 0.47.

STEP1. Open Terminal

STEP2. Create a folder to organize the compilation (optional):

mkdir src

STEP3. Enter the src folder / install dependencies / download the Pd source code / unpack the downloaded file:

cd src
sudo apt install build-essential autoconf automake libtool gettext git libasound2-dev libjack-jackd2-dev libfftw3-3 libfftw3-dev tcl tk
wget http://msp.ucsd.edu/Software/pd-0.49-0.src.tar.gz
tar -xzf pd-0.49-0.src.tar.gz

STEP4. Compiling Pd:

cd pd-0.49-0
./autogen.sh
./configure --enable-jack --enable-fftw
make

STEP5. Confirm the compilation is OK:

cd bin
./pd

STEP6. If it runs, you can install Pd on your Raspbian system:

cd ..
sudo make install
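Once installed, a quick way to confirm which version is now on your path (it should report 0.49):

pd -version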


Sensors into OSC with Python

How do we move from analogue to digital? There seem to be quite a number of ways, including 2D & 3D scanning, photography, and sensors. For the purposes of intentionally limiting our options, we’re going to create a physical interface with potentiometers and buttons. We’ll be using the LOP board with a Raspberry Pi.

Once we’ve soldered the potentiometers and header onto the LOP board, we have to install a few things and enable some system-level software on the RPi.

Starting with enabling SPI

  1. Run sudo raspi-config .
  2. Use the down arrow to select 9 Advanced Options.
  3. Arrow down to A6 SPI .
  4. Select yes when it asks you to enable SPI.
  5. Also select yes when it asks about automatically loading the kernel module.
  6. Use the right arrow to select the <Finish> button.

Installing the Python Tools:

sudo apt-get update
sudo apt-get install python-setuptools
sudo apt-get install python-pip python-dev

Now we're almost ready to run the code. Wait! We still need the Python modules the script imports: spidev for reading the MCP3008 ADC over SPI, and pyOSC for sending OSC messages.
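Something like this should pull both in (package names assumed; pyOSC is the Python 2 library the script imports as OSC):

sudo pip install spidev pyOSC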

#!/usr/bin/python

import spidev
import time
import os
import OSC
import RPi.GPIO as GPIO

# Open SPI bus
spi = spidev.SpiDev()
spi.open(0,0)

# Open OSC client connection to Pd
send_address = ("127.0.0.1", 9000)
c = OSC.OSCClient()
c.connect(send_address)

# Function to read SPI data from MCP3008 chip
# Channel must be an integer 0-7
def ReadChannel(channel):
    adc = spi.xfer2([1, (8 + channel) << 4, 0])
    data = ((adc[1] & 3) << 8) + adc[2]
    return data

# Define sensor channels (one per pot on the LOP board)
l0 = 0
l1 = 1
l2 = 2
l3 = 3
l4 = 4
l5 = 5
l6 = 6
l7 = 7

# Define delay between readings (seconds)
delay = 0.1

while True:
    # Read all eight ADC channels
    ll0 = ReadChannel(l0)
    ll1 = ReadChannel(l1)
    ll2 = ReadChannel(l2)
    ll3 = ReadChannel(l3)
    ll4 = ReadChannel(l4)
    ll5 = ReadChannel(l5)
    ll6 = ReadChannel(l6)
    ll7 = ReadChannel(l7)
    print("channel 1: ", ll0)

    # Send off the OSC message with the sensor values
    msg = OSC.OSCMessage()
    msg.setAddress("print")
    msg.append(ll0)
    msg.append(ll1)
    msg.append(ll2)
    msg.append(ll3)
    msg.append(ll4)
    msg.append(ll5)
    msg.append(ll6)
    msg.append(ll7)
    c.send(msg)

    # Wait before the next reading
    time.sleep(delay)
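On the Pd side, note that a plain [netreceive] won't decode these binary OSC packets by itself; you'd typically unpack them with the mrpeach externals ([udpreceive 9000] into [unpackOSC]) or, in recent vanilla Pd, with [netreceive -u -b 9000] going into [oscparse].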

MetaCube Paper

MetaCube: Using Tangible Interactions to Shift Between Divergent & Convergent Thinking

A research Paper submission By Haig Armen, 2015

ABSTRACT
For many decades, we have observed and studied how people create, what the characteristics of creative people are and what the process of creativity is. Many of these studies have focused on the cognitive abilities of individuals – what happens in our minds when we are creative? This paper describes a research tool for building a better understanding about how creative teams move between divergent, exploratory and convergent ways of thinking. With the proliferation of embedded technologies, there are emerging opportunities for employing tangible or embodied interaction within the creative process. In this paper, we make the case that the creative process can be augmented, observed and supported by metaphorical interactions via a hand-held tangible computing device.

Author Keywords

Interaction design; Tangible user interfaces; Embodied interaction; Design research; People-centered approach; Metaphor; Creative Process; Divergent-Convergent Thinking

ACM Classification Keywords

Human-centered computing, Interaction design theory, concepts and paradigms, Human-centered computing, Collaborative and social computing devices

General Terms
Design; Human Factors; Theory

INTRODUCTION
Today’s contemporary design teams have a wide array of tools to aid in the design process, and even the most digitally savvy teams still use tangible tools like whiteboards to help in brainstorming sessions. There have been a great many studies in the area of the creative process in the context of design and brainstorming, predominantly about the varying exercises in divergent (generative), exploratory (connecting & combining ideas) and convergent (analytical) cognitive modes. Yet how teams or individuals transition between these modes of thinking remains relatively unexplored. This project explores how a tangible object might emphasize meaningful gestural interactions not as a departure from, but rather as an integrated part of, the creative process. We propose that a tangible user interface will help in the creative process by shedding light on the transitions between modes of thinking. Tangible analogical interactions can be a powerful way to support modes of cognitive activity and ultimately provide a better understanding of when different strategies may be most effective. In this paper, we examine the connection between tangible gestural interactions, as analogical mappings, and abstract modes of cognition by way of a conceptual prototype called the MetaCube.

To best understand how a tool could improve the creative process we first observe that creative teams are most productive when shifting between divergent and convergent modes of thinking. The ability to efficiently shift between modes may be an important feature underlying the capacity to be creative [12], and possibly, of particular importance in professions such as design [8]. There are a wide variety of creative activities, exercises and games that have been categorized into divergent and convergent categories [7] that act as useful frameworks for creative thinking and conceptual development. Physically interacting with an analogical concept makes the abstract become more concrete.
Building from the theory of embodied interaction we propose a tangible computing device that helps to bring a clearer collective understanding of how we shift cognitive modes using tangible interaction. Beyond embodied interaction, this case additionally considers the importance of flow within creative sessions as well as their collaborative nature. We hypothesize that by building a better understanding of how, when and why we shift our cognitive modes in creative sessions we can begin to create frameworks of knowledge around the collective creative process. The MetaCube project revolves around the following research question: Does rotating a tangible computing cube help creative teams better observe and gain insight into shifting between divergent, explorative to convergent cognitive modes based on specific time intervals?

Case studies of this type are important at this juncture in the area of tangible computing, as designers strive to understand the most natural gestural affordances for tangible user interfaces (TUIs). Discovering ways of encouraging people to interact using analogy is crucial for the interaction design field to create a vernacular around these gestural interactions. Does turning an object towards you imply ‘inward-looking’ convergent, logical and critical thought? Does rotating an object to the right signify thinking into the future, and conversely, does rotating an object to the left represent thinking about the past or the precedents of a problem space?
THEORETICAL BACKGROUND
Although cognitive modes in the creative process have been well documented, it is unclear whether there are best practices for the frequency and periods in which to transition from one mode to another. Furthermore, though there are many generative and analytical activities, little has been discovered about whether certain combinations of activities are better or worse than others, or whether randomization of activities fosters effective creative thinking. Flexible thinking involves the ability to shift cognitive functioning from common applications to the uncommon; namely, breaking through cognitive blocks and restructuring thinking so that a problem is analyzed from multiple perspectives [12]. Yet “Most do not easily switch divergent and convergent thought, but they need to do so because continued learning that blocks ideation is not helpful to the overall effort, and neither is continued ideation that blocks solution choice” [2,9].
By decoding the transitions in cognitive mode we can begin to understand where we have trouble shifting, and can address and improve our ability to move easily between cognitive modes. The MetaCube aims to demystify these mode transitions by employing theories of embodied interaction. Using tangible tools to help in brainstorming can prove to be extremely effective. As Lakoff and Johnson [7] point out, metaphor and analogy are more than mere language and literary devices; they are conceptual in nature and represented physically in the brain. As a result, such metaphorical brain circuitry can affect behavior profoundly. For example: you may recognize that Shakespearean tragedies have a similar structure: a phase of increasing conflict between opposed sides or characters, a major confrontation between the opposed characters, and a phase in which the opposition is worked out and resolved in one character’s victory and the other’s defeat. It may then occur to you that this structure is very like the shape of a pyramid or isosceles triangle, which rises from a baseline to a central point and then falls back to its baseline. You have then perceived an analogy between a temporal phenomenon and a spatial one. In the case of the MetaCube, the device represents a noun; in the context of a brainstorming session this may be the problem at hand, and the act of rotating the cube is analogous to seeing the problem from another perspective. In another case study Antle [1] elaborates: gestures may lighten the cognitive load because they are a motor act; because they help people link words to the world (e.g. deictic gestures); or because they help a person organize spatial information into speech (e.g. iconic or metaphoric gestures).
Along with modulations in cognitive modes, flow is a crucial aspect of the creative process, specifically in brainstorming sessions. In Csikszentmihalyi’s seminal book, flow [4] is described as a state of concentration or complete absorption with the activity and situation at hand. When exploring the requirements of our MetaCube, we must consider the flow of the individuals in the creative team. Momentum and immersion can only be achieved in the absence of interruptions to the creative team. Achieving momentum in a creative brainstorming session requires time management; commonly, time is blocked out and a facilitator is tasked with being timekeeper. A number of questions arise: what should the time period between cognitive modes be? Should each mode take the same amount of time? One widely adopted time-blocking method for focused periods of concentration is the Pomodoro Technique [3], which suggests 25-minute increments of activity followed by 5-minute breaks. The research on collaborative creativity is extensive and varies widely based on the type of creativity and field. The most relevant conclusion that can be drawn is that shared engagement fluctuates with changes in activities within creative teams. This finding suggests that careful consideration must be taken in designing a device that will keep people’s attention on brainstorming and the topics of discussion rather than on the tools being used. It is clear that a device for collaborative creativity will require affordances for many people to interact with it, not just an experience for an individual. The device will require a feedback mechanism that communicates to a number of people within the context of a room, not necessarily to one person like most computing devices.

Related Work
Although there are no examples of work directly related to this area of inquiry, there are a few conceptual design projects close enough that we may draw possible considerations from them. The research project “A Cube to Learn” by Terrenghi, Kranz, Holleis and Schmidt [10] describes a Learning Cube, a novel tangible learning appliance used as a general learning platform for teaching vocabulary and 3D views to children through gestures and test-based quizzes. In 2001, Terry [11] outlined a project called Task Blocks that employs blocks as a tangible interface representing computational functions for creative exploration within a programming context. The design of the system encourages hands-on, active experimentation by allowing users to directly insert, delete, or modify any function in the computational “pipeline”.

DESIGN CONSIDERATIONS

The goal of the device is to aid creative teams in collectively shifting modes of thinking without losing their momentum, as well as regulating the frequency of the mode transitions. The MetaCube has the potential to become a powerful tool for facilitating creative sessions by providing users with gestural affordances that create analogies while modulating through various creative thought modes. The design of the prototype must reflect the collaborative nature of creative problem-solving teams. When providing feedback to the user or team, it is important that the device is able to communicate to more than one person. If color is the main mechanism for communicating the cognitive mode, it is imperative that the color be visible from all viewing angles when the team is sitting around the cube. Although seemingly unimportant, the shape of the cube is instrumental in implying specific gestural affordances: unlike a sphere, a cube’s physicality suggests rotational gestures on the X and Z axes. Additionally, the device could communicate the changing of cognitive modes using sound or by wirelessly transmitting information, but these options were shelved to concentrate on the core of the study, opting for a subtle non-digital form of user feedback.

DESIGN SOLUTION & RATIONALE

By creating the MetaCube, a small hand-held tangible prototype capable of measuring its own rotation, we are able to address our research question. Participants use the MetaCube by rotating its orientation to mark the transition from one way of thinking to another. Imagine the scenario where a member of a creative team in a brainstorming session is prompted to pick up and rotate the MetaCube tool. The MetaCube’s orientation triggers a new glowing color that marks the transition from one way of thinking to another. The team has been told in advance the following light mappings:

1. Blue glow indicates divergent (generative) thought mode
2. Green glow represents exploratory mode
3. Red glow signifies convergent (analytical) thought mode
4. Flashing light of any color prompts rotating the cube
For example, rotating the cube to one orientation would yield divergent thought mode, while rotating it to another indicates that participants should proceed with convergent activities. The working prototype will be able to detect rotation and its own orientation. Once rotated on its X or Z axis, the object is triggered and communicates its new cognitive mode to the team. The cube will utilize an Inertial Measurement Unit (IMU), a 5 Degrees of Freedom IDG500/ADXL335, which is essentially a combination circuit board with both accelerometer and gyroscope sensors for sensing orientation and rotation. An important key feature of the MetaCube is the specific time intervals that prompt the members of the creative team to interact and change cognitive modes.

In the initial stage of exploration the modes will be communicated through contrasting colors; later iterations may include broadcasting activities via web applications served by the cube to surrounding computers. With a built-in web server the MetaCube could dynamically create activity cards that are served to the client-side browsers of the team connected to the cube over a wifi network. These last features were not included in the original prototype, as they were beyond the core research question.

INFORMAL OBSERVATIONS

To begin to validate the hypothesis of this study, the MetaCube prototype acts as a proof of concept. The basic prototype was assembled and programmed to test with participants in a number of informal settings. The purpose of the cube was first explained to participants prior to their brainstorming activities. A simple creative process was facilitated, and the use of the cube was observed and captured to reflect upon later. During the session we tried to capture participants’ reactions, anything they said, and their facial expressions. Participants were then asked the following types of questions: Did the cube help or distract from the team’s creative flow? Did rotating the cube strengthen the idea of shifting modes of thinking? Did the colored light help users understand the shift in modes? Could this method of observing shifting cognitive modes be useful for creative teams? We were able to informally test our assumptions by putting the prototype into a brainstorming session and explaining how the team could use it to help them shift between divergent and convergent creative activities. Our observations were generally positive, but further formal studies would be necessary to draw definite conclusions.

The people in the observation session welcomed the concept and felt it was intriguing in the context of creative problem solving. The interaction paradigm was easily understood and the team was able to integrate the MetaCube into their flow. The following findings emerged from our informal study: 1. The cube helped the team creatively once the members of the team all understood its purpose. 2. Rotating the MetaCube did indeed strengthen the idea of shifting modes of thinking, both for individuals and for the team as a collective. 3. The colored light did help participants understand the mode changes, but a legend mapping the colors to modes was frequently glanced at. 4. There was a great deal of agreement that by observing shifting cognitive modes, both teams and individuals would become more effective during creative problem-solving sessions. Additionally, we observed that although the cube was able to indicate the change in cognitive mode, the team still broke their flow by having to discuss which creative activity they would proceed with. This suggests an opportunity for the device to communicate an activity as well.

DISCUSSION

After creating the MetaCube and later presenting and explaining its purpose to various designers and writers, the response was generally one of interest, and many began to think of other analogies to apply to the rotational interaction. Ideas were generated about ‘hinging’ from one way of thinking to another and using the metaphorical expression of “taking a 180 degree turn” to represent a pivot in direction. There was a slight cognitive disconnection between the six sides of a cube and the three cognitive modes; this added an element of unpredictability to using the MetaCube, which not all participants understood. Although this research tool was created primarily to experiment with ideas for the creative process, the prototype and its reception act as an informal validation of a possible product. In the initial conceptualization of the MetaCube we decided not to have the cube display any digital information, in order to minimize the perception of it as a computing device; in retrospect this was a good decision, and any further exploration of this idea will continue along the same line of reasoning.

CONCLUSION

In this paper we present a short study that investigates the benefits of a tangible computing device that enables hands-on interaction to help creative teams while brainstorming. Our contributions include a concept-driven design project and prototype. We conclude that the MetaCube shows promise as a unique, tangible, non-disruptive way of supporting collaborative creative brainstorming sessions. The physical interactions gave the creative teams a concrete way of thinking about when and how to transition from one way of thinking creatively to another. We also conclude that effective tangible user interface (TUI) design can yield epistemic, exploratory, collaborative and cognitive benefits within collaborative creative contexts.

REFERENCES

1. Antle, Alissa N. Exploring how children use their hands to think: an embodied interactional analysis. Behaviour and Information Technology (2011)

2. Brophy, D.R. Comparing the Attributes, Activities, and Performance of Divergent, Convergent & Combination Thinkers. Creativity Research Journal. 2001

3. Cirillo, Francesco. The Pomodoro Technique FC Garage GmbH 2013

4. Csikszentmihalyi, Mihaly. Flow: The Psychology of Optimal Experience. Harper Perennial (2007)

5. Gray, Dave, Brown, Sunni and Macanufo, James. Gamestorming. O’Reilly. 2010

6. Hatchuel, Armand, Le Masson, Pascal and Weil, Benoit. Teaching innovative design reasoning: How concept knowledge theory can help overcome fixation effects. Artificial Intelligence for Engineering Design, Analysis and Manufacturing, Cambridge, 2011

7. Lakoff, George and Johnson, Mark. Metaphors We Live By. The university of Chicago press. 1980

8. Pringle, Andrew J. Shifting between modes of thought: a mechanism underlying creative performance. 8th ACM conference on Creativity and Cognition, 2011

9. Sak, Ugur and Maker, C. June. Divergence and convergence of mental forces of children in open and closed mathematical problems. International Education Journal, 2005

10. Terrenghi, Lucia, Kranz, Matthias, Holleis, Paul and Schmidt, Albrecht. A cube to learn: a tangible user interface for the design of a learning appliance. Personal and Ubiquitous Computing April 2006

11. Terry, Michael. Task Blocks: Tangible Interfaces for Creative Exploration. CHI ’01 Extended Abstracts on Human Factors in Computing Systems (2001)

12. von Oech, R. Creativity Whack Pack. Stamford, CT: U.S. Games Systems, Inc. 1992


Mineblock Product Design

Design Process

To me, the MINEBLOCK project represents an excellent example of the convergence of a vast number of technical, digital, physical and human constraints. Knowing that keeping a close eye on all of these considerations simultaneously was going to be next to impossible, I asked a few friends to help with the product design aspect of the project. I requested the assistance of Jason Miller, a recent graduate of the Emily Carr University Industrial Design program, and of the talented Afshin Mehin of Woke Design (Woke.co), whom I met while working on the Recon Instruments project and who acted essentially in a consulting capacity.
The following article documents our design process and perhaps acts as a bookmark for some of the issues that we encountered along the way. The software for MINEBLOCK went through about six small iterations to reach a satisfactory user experience; the challenge in that realm was stitching a number of different technologies together so they work seamlessly and spare end users difficult configuration, and it took roughly four weeks. In comparison, the product design, which also took four weeks, went through about twice as many iterations and varied greatly from the original intention. The prototyping of the product was framed by the criteria of Form, Material and Construction, which led to discussions about final production techniques and the cost of labour. The physical prototyping was done in parallel with the software prototyping, and the two sometimes informed each other. In retrospect, although this was difficult with a small team, it was clearly the right way to proceed; without the clear interplay between physical and digital, the product would not have ended up with such an elegant, minimal style.

Early Sketches

– exploring horizontal bottom, vertical and diagonal board placement

The Cube

Since the early days of the project, I was adamant that the product reflect Minecraft’s unique visual language in a subtle, understated way, so the cube was the most obvious form to begin experimenting with. Because large, Lego-like perfect cubes are the elemental construction blocks of Minecraft, it seemed only fitting to have the MINEBLOCK reference the same aesthetic. I was set on having the enclosure for MINEBLOCK be wood but was unsure of the size, the type of wood and the construction. I believe there’s an interesting tension, or contrast, between the digital nature and precision of the Raspberry Pi and the organic materiality of a wooden box.

The size of the Raspberry Pi was one hard constraint, with its longest dimension being 85mm; secondly, on the RPi Model B the I/O was on three sides of the PCB and the fourth side had an SD card. One of the questions that came up was: does the MINEBLOCK allow access to all the inputs and outputs, or should it conceal the ones that are not necessary?
For the first physical prototype, a Lego box was assembled with my 8-year-old son to give us a sense of the size of a cube that would contain a Raspberry Pi. Early on, it seemed that a cube would be quite large and was starting to give the impression of an object that is not portable. This would be an issue, as one of the key values of the MINEBLOCK is its portable nature.

Lego box

wooden plywood construction

The next prototype was a wooden box constructed of quarter-inch plywood glued together. This ended up feeling somewhat large, both in your hand and in a backpack or handbag. It became immediately clear that MINEBLOCK should not be a pure cube but should convey the sense of cubes in another way.
For more cube explorations see the MetaCube article (written for the Tangible Computing course, July 2014).

Flattened Cube

The next round of prototypes took the shape of a flattened cube, all with different types of wood and construction. First we tried laser-cutting side panels with box joints for a simple construction, but visually the prototype looked cluttered, and rather than referencing the visual world of Lego or Minecraft, it read more as Arts & Crafts. Then we tried hollowing out a 2x4" piece of wood to contain the Raspberry Pi, and as a group we agreed unanimously on the direction of using a solid block of wood.

CNC (computer numerical control) cut flattened cube

Laser-cut and box-jointed

Over the next week we experimented with a variety of ways of hollowing out wood. The first approach was arguably the easiest: using a computer numerical control (CNC) machine we were able to get a clean, accurate cut from the Illustrator file that we provided the operator with. Our first attempt was with a soft spruce 2x4x4" block and cost 6 minutes of time on the CNC ($1.50/min). The second approach was the quickest, using a drill press and a Forstner bit (see diagram) to drill 5 holes into the block of wood. Even with the final chiselling to clean up the unwanted residue this approach was fast, but it was inaccurate, hard to reproduce and gave us little insight into the actual future production technique. The third approach was to build a template jig to help guide a handheld router with a 1/4" routing bit. This technique was challenging, as each pass of the router could remove at most 1/4", which ended up meaning 5 passes. It would not be appropriate for production, as it took a great deal of time and it would be extremely difficult to yield consistent results in this manner.

Drill Press to hollow out block

RaspberryPi in hollowed wooden block

The final experiment was to return to the CNC machine with our final choice of wood, birch, and the exact depth for hollowing a block. This approach also cost 6 minutes of machine time, and the results were accurate and beautiful. Additionally, the machine operator assured us that for a larger run you could have all the blocks hollowed out of one large plank and then cut apart, which would be an efficient way to produce a larger production run. This approach was by far the most promising production technique.

CNC prototypes at Makerlabs

Closing the box

We tried a number of ways of giving the box a lid or closing panel. The option that seemed most appropriate was a 1/4" acrylic bottom plate, which, in a smoked dark tint, would give the impression that the wooden box was floating with a slight drop shadow. Next came the issue of exposing the inputs and outputs of the Raspberry Pi. The biggest issue was that birch’s density and thickness didn’t allow us to laser-cut the hollowed block. After many experiments we found that the best way would be to laser-cut two acrylic panels with openings for the RPi’s inputs and outputs. These panels could be cut out of the same material as the bottom plate and glued together to reduce the number of pieces, while still allowing users to access their RPi by loosening only a few screws on the bottom plate.

Large cutouts for RPi I/O

Lasercut tinted acrylic panels

Our MINEBLOCK size changed quite a bit over the process. The final change was to make the cube 105 x 105 x 45mm. Although this was only 1/4" smaller than our original cube, it felt much better in your hand.


Identity Design

I may have left the brand identity component to the end of this article, but it is by no means less important, nor the one we thought about the least. There were many sketches and ideas for how the visual language would be represented on the MINEBLOCK. Where would the logo sit? How would the block exude a cube visual language? How would it touch on the Minecraft aesthetic in a subtle way? The initial logo had a simple isometrically-drawn cube in it, and from there came the idea of bringing that cube into a 3D representation.

Initial Logo design

We experimented with quite a few approaches before we decided on one. Here are some of the more successful ones:

  1. laser-cut cube
  2. shallow router indentation of cube
  3. Cube cutout
  4. Acrylic block glued into corner

Lasercut Burned Logo

cube cut out and embossed with router

Final Design

MINEBLOCK with acrylic cube glued into corner

Acrylic Bottom plates in fluorescent colours adds a beautiful glow


Connecting directly to RPi

One of the first things you do with a Raspberry Pi is connect it to a wifi or ethernet network, but what if you don’t have an extra keyboard or screen available, only a laptop?

There are times when an HDMI monitor is not available to use with your Raspberry Pi.  In those circumstances it can be very useful to connect remotely using a nearby network and a laptop (see the Guide to…Remote Connections).  However, sometimes there isn’t a network available either!

So how can we make use of a laptop screen and keyboard when there is no network?

As discussed in Meltwater Raspberry Pi Hardware’s Guide to Remote Connections, there is the option of using a TTL-serial cable, however this provides rather slow access.  The serial connection won’t support X11 (which allows us to run graphical programs), and you won’t be able to use VNC or shared folders if you have them set up.

The answer is a simple network cable!

It is advisable to set this up before you need it so you can be sure that it is configured and working correctly.  As long as you have an SD-Card reader available, you can switch between the configurations using a laptop (or directly on the Raspberry Pi if you have a screen/keyboard at the time).

Remember if you need the wired network for your computer (i.e. to get internet) then you shall have to make a choice about which one you wish to use (or get an extra network port by adding a USB network dongle).  If you use wireless connections, then you can still have both!

Any standard network cable should be suitable (needs to have a male RJ45 connector on each end), most cables available will be fine for our needs.

Note: You can use a normal network cable since the Raspberry Pi LAN chip is smart enough to reconfigure itself for direct network connections (in the past older computers would have needed a special “cross-over” cable).

Before you boot up your RPi you’ll need to edit the /boot/cmdline.txt file. This can be done either on the RPi itself using nano or your preferred editor or by mounting the SD card on another computer with a card reader and editing the file there:

Here’s how you do that:


sudo nano /boot/cmdline.txt

Edit cmdline.txt and add the IP address at the end (be sure you don’t add any extra lines).

For network settings where the IP address is obtained automatically, use an address in the range 169.254.X.X (169.254.0.0 – 169.254.255.255):

Type “ip=169.254.0.2” at the end of the cmdline.txt file.
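Every install’s cmdline.txt is slightly different, but after the edit the single line ends up looking something like this (the root PARTUUID here is a placeholder; only the ip=… part at the end is what you add):

console=serial0,115200 console=tty1 root=PARTUUID=xxxxxxxx-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait ip=169.254.0.2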

If you’ve edited the cmdline.txt file on another computer you’ll need to put it back into the RPi and boot it up.
Otherwise, you’ll then need to reboot your RPi by typing:

sudo reboot

To make sure your static IP stuck, watch for the IP address announced during the bootup procedure.
You should see a result similar to this:

The IP address is 169.254.0.2

Or you can double check to discover your RPi’s static IP:

sudo hostname -I

On your laptop

Connect an ethernet cable between your laptop and the RPi, open the Terminal, and ssh into your Raspberry Pi:

ssh pi@169.254.0.2

You’ll be asked to confirm the host’s fingerprint (type yes) and then prompted for the password of the pi user; by default it is "raspberry" if you haven’t changed it.


Wifi my Pi

This week I’m working on two areas of practical research for Mineblock. The first has to do with one of the biggest gotchas of working with networked devices, particularly ones with no screen: getting the device onto the network and talking to it. The second is about installation and interaction: choosing the level of flexibility. This post is about the work on networking; I’ll write up the rest soon.

The problem with setup/connecting to wifi

People generally use Raspberry Pis either with a screen and keyboard, getting networking information from the screen, or else headlessly, i.e. without a screen. This is where the problem arises. Moving the device between networks means that you need to get it onto the network, so you can talk to it and it can talk to the outside world, without being able to see the usual information on a screen.

For example, I was able to get a Raspberry Pi (RPi) connected at my home office months ago, but bringing it over to the Centre For Digital Media felt like starting from scratch. It had its home network programmed into it (using wpa_supplicant, a Linux networking tool that lets you plug in and use wifi devices and specify, in a text file, which networks and passwords the device should attempt to connect to).

When moving it off a network it knows about, you’ll have to do one of two things in order to connect it up:

  • connect the RPi to a screen via its HDMI port and use a keyboard to enter the network information
  • connect it up to your own computer via an ethernet connection and share your network with it and then connect to it over ssh to use wpa supplicant or similar to get it on a network

The first of these may not be very convenient, because smallish HDMI screens may not be available. Modern TVs usually have HDMI sockets, but not all monitors do, and in any case you don’t necessarily want to carry one around with you. The second approach is feasible with Mac OS X (though not when the wifi network is certificate-protected), and it’s not for everyone.

Once you have managed to tell the device how to get on to a network, you then may need to connect to it the next time you want to talk to it. For a developer working on a RPi, typically you’ll want to try something out and reboot it.

If you are sharing your network with it, you can do this again, either finding the IP address via the console or nmap and sshing to it that way, or by installing Avahi on it and thereby giving it a known name so you can connect to it on the same network (e.g. ‘mineblock.local’). (Typically anything giving out IP addresses using DHCP will tend to give out the same one to the same device, but you can’t rely on that).

If you have not installed Avahi on it, or if you are on a different network, you will need to find its IP address somehow. Again, you will need to either connect it up to a monitor and keyboard to see its IP (‘ifconfig’), use nmap or the Mac OS X console if you’re on the same network / sharing, or use some sort of display connected to the GPIO. Or you could make it speak its IP address using eSpeak, which is easy to do, but the spoken address is often hard to hear and difficult to remember.
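If you do go the Avahi route, it’s a one-line install on Raspbian, after which the Pi typically answers at <hostname>.local (e.g. mineblock.local if you’ve set the hostname to mineblock):

sudo apt-get install avahi-daemon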

Reconnecting and developing

For Mineblock, ease of use is of the utmost importance; the target audience is busy, non-techie parents. I am working to make it easy for people who don’t have huge amounts of time, or great skill in networking on Linux-like systems, to get started. None of the current solutions are very easy, and that’s a substantial hurdle to getting things working.

People will have two distinct problems to contend with:

  • telling the device about the network
  • talking to the device on the network

The two problems have parallels in the consumer area too. Printers, networked web cameras and other consumer devices have this problem: they need to get on the network (perhaps with no easy user input device or feedback device) and other devices need to talk to them once they are on there. It’s obviously something that many Raspberry Pi and Arduino developers have thought about. Now that products like Nest have made network configuration seem so easy it has raised the bar for all networked objects.

A possible solution

For now, and to solve the immediate problem, I’ve decided to take a similar approach to the BBC’s Radiodan project, which I saw at the Solid conference a few weeks ago. If the Mineblock doesn’t find a wifi network it knows about, it broadcasts its own wifi network with a known name and uses Avahi so the consumer can connect to it at a known identifier via ssh. The consumer can then ssh in and add wifi networks using wpa_supplicant. In the future I plan to make it configurable over a web interface, but for now this greatly simplifies connecting to it on various networks.
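For reference, adding a network with wpa_supplicant usually comes down to appending a block like this to /etc/wpa_supplicant/wpa_supplicant.conf on the Pi (the SSID and passphrase here are placeholders), then rebooting or reloading the interface:

network={
    ssid="YourNetworkName"
    psk="your-wifi-passphrase"
}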