So You’re Going To Make A Game For The Very First Time?

You’ve thought about making games for a long time, but you haven’t seriously pursued it. Until you get serious about it, you’ve accomplished nothing; you’re a mere dilettante. So today you’ve decided to make a game. How are you going to go about it?

First, unless you have well-developed programming skills you’re going to have a much better chance of achieving something if you make a tabletop game, or (perhaps) make a level for a videogame. The most important thing is to get to where you play the game. All the idea generation and other preliminary stuff is effectively airy-fairy head-in-the-clouds daydreaming that almost anyone can do but which does them no good if it doesn’t result in a playable prototype. Without well-developed programming skills, or at least a good working knowledge of a small game engine such as GameMaker, you won’t be able to make a videogame prototype soon enough for it to be practical. You may be able to use a level editor that’s included in an existing game to make a variation, and that can be a good way to start.

Second, beginners almost always make a game based on another game. Often the best way to start out is to make a variation of an existing game, because it takes a lot less time and work to get to the point where you can play it. Again this applies to tabletop games or to video games that provide ways to modify them, usually a level/scenario editor. If you can’t bring yourself to make tabletop games then the level editor is definitely the easiest way to start out, even though you’ll have to learn how to use the level editor, a non-trivial task.

Third, rein back your ambition. Try to pick a type or form of game that is fairly common, not one that’s unusual; unusual forms are frequently more difficult to achieve, which is why they’re unusual. For example, cooperative games are especially difficult in tabletop form because it’s so hard to provide significant opposition. This is much easier to do with a videogame IF you have the programming and “artificial intelligence” skills. But it is still much harder to program a game that can be played by two or more people at the same time than to program one that is played by one person at a time.

In other words try to choose a project you actually have a chance to complete. This can be generalized to “keep it simple”. Making a game is almost always harder than it seems at first, even for experienced people. The most common mistake of people seriously trying to make a video game is to plan a project that they have virtually no chance of ever finishing, because it will take much too long. Remember, AAA video games take hundreds of man-years to complete for professionals with vast budgets.

Fourth, focus on the gameplay, not on the appearance (or the story) of the game. You’re making a prototype, not a finished game. You want something that people can play so that you find out whether they enjoy playing, and how you can improve it. You can’t rely on flashy looks to make games fun, even if you’re an outstanding artist. A major mistake of novice game designers is to make something that’s pretty rather than something that’s functional. If you have something that merely looks functional and people like to play it, then imagine how much more they’ll enjoy it when it looks professionally pretty. You only need it to look good enough that playtesters will be willing to play, and that depends in great part on what playtesters are available, how well you know them, how persuasive you are, and many other factors not related to the game itself.

In most cases, you may be excited about your story, but other people won’t be. Most games are played for the game, not the story (which is often only an excuse to get to the action). If you’re heavily into story, write a novel, don’t design a game! When you’re experienced you may be able to rely on a story to make a game enjoyable; when you start out, that’s a big mistake.

Fifth, when you have a playable prototype, play it yourself, solo, before you inflict it on other people. I say “inflict” deliberately. You may be super excited, you may think it’s the greatest thing ever, but in reality it will be like almost every other initial prototype of a game: it will suck. Experienced designers have a much better chance of recognizing what will suck before the game is played: they play the game in their mind’s eye, so to speak, and anticipate many problems before it’s ever played in reality. Beginners should try to do the same but will be much less successful at spotting the flaws. What solo testing can do is quickly reveal where the game really sucks so that you can change it before other people have to put up with it. In other words, be nice to your playtesters: get rid of the really bad aspects yourself rather than foist them on other people who want to play a fun game.

Some people confronted with the notion of solo playing a multiplayer tabletop game will say they just can’t do it, they just can’t dissociate themselves from one side when they play another side. Wags like to say “well, at least when you play solo you always win”. Of course, you also always lose. But the point of solo playtesting is not to win or lose, it’s to find out whether the game is worthwhile and how it can be improved. And that dispassionate dissociation from one side to another when you play a solo game will actually help you recognize what’s good and bad about the game.

I cannot say this enough: play the game yourself before anybody else plays.

Sixth, if you got this far you’re doing really well. But you’ve only just begun. The really hard part of making a game is the last 20% of improvement, which takes 80% of the time. This is a process of playtesting, evaluating the results, modifying the game to improve it in light of the results, playtesting again, and going through the whole cycle again and again and again. This is called the iterative and incremental development of the game. If you want to make a really good game then you are probably going to be sick and tired of it by the time you get toward the end of this process.

Finally, the game is never really done, you just come to a point where the value of the improvement is less than the cost of the time required to achieve it (Law of Diminishing Marginal Returns). Moreover, you might think you’re “done”, and then find out that improvements need to be made either for your peace of mind or because the publisher requires it.

Good luck. And remember: “A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away.” – Antoine de Saint-Exupéry

• make a tabletop game, or use a simple level editor to modify an existing videogame
• make something based on a game you know
• rein in your ambition: try to complete a small project, not a large one
• focus on gameplay not prettiness or story
• play the game yourself before anybody else plays, even if it isn’t intended to be a one person game
• iteratively and incrementally playtest and improve the game
• you never really finish


Algorithmic Composition

I’m teaching a workshop on Algorithmic Composition at an amazing school in Yerevan, Armenia called Tumo – Centre for Creative Technologies. As a person with equal parts enthusiasm for music, design and code, I’ve always been intrigued by the idea of using technology to enhance and propel our experiences with music, both in creating and enjoying it.

I chose to work on algorithmic music composition with the p5.sound library. p5.sound is an add-on library that equips p5.js with the ability to do all manner of audio-related tasks, from playing a sound file to applying audio effects directly in the browser.

The p5.sound library provided basic support for synthesizing sounds at different frequencies, and gave access to the Web Audio clock, which allows for accurate audio-scheduling. These features provided a solid foundation for working with generative music.
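As a rough sketch of what accurate scheduling against an audio clock involves, the helper below (the function and parameter names are my own, not part of p5.sound) computes the absolute time at which each note of a sequence should start, given a tempo. In a p5.sound sketch these values would be handed to the Web Audio clock rather than to `setTimeout`:

```javascript
// Compute absolute start times (in seconds) for a sequence of notes,
// the kind of values an audio-clock scheduler expects.
// bpm: tempo in beats per minute; durations: note lengths in beats;
// startTime: clock time of the first note.
function scheduleTimes(bpm, durations, startTime = 0) {
  const secondsPerBeat = 60 / bpm;
  const times = [];
  let t = startTime;
  for (const d of durations) {
    times.push(t);
    t += d * secondsPerBeat; // advance the clock by this note's length
  }
  return times;
}
```

Because the times are computed against a single clock rather than accumulated from timer callbacks, callback jitter cannot make the sequence drift.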

Many Questions

There were so many questions to consider about how a composition program might look in the context of a p5.js sketch: How do we represent musical qualities like pitch and velocity in code? What about timing information? How do we write a program which handles composition tasks and visual animation simultaneously, and how do we make sure both tasks can interact and sync with one another? Most importantly, how do we make all of this simple and intuitive to use?
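One plausible answer for pitch and velocity, sketched here with names of my own choosing, is to store each note as a plain data object and convert symbolic pitch (a MIDI note number) to a frequency in hertz with the standard equal-temperament formula (p5.sound ships its own midiToFreq helper; this is just the underlying arithmetic):

```javascript
// A note as plain data: MIDI pitch, velocity (0..1), duration in beats.
const note = { pitch: 69, velocity: 0.8, duration: 1 };

// Equal temperament: MIDI 69 is A4 = 440 Hz, and each semitone
// multiplies the frequency by the twelfth root of two.
function midiToFreq(midi) {
  return 440 * Math.pow(2, (midi - 69) / 12);
}
```

Representing notes as data rather than as side effects is what later lets a composition algorithm transform, filter, and schedule them while the visual side of the sketch reads the same objects.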

Interestingly, I found that the more I worked on the examples and tried to make them sound good (more musical), the more I had to hand-engineer ideas from music theory into the code. At the same time, I never really knew what results I would get when I put new rules into the system. This was challenging yet exciting, and suggests that perhaps the role of algorithms in music will never be to replace humans entirely, but to facilitate new ideas and give us new ways to be creative.


MetaCube Paper

MetaCube: Using Tangible Interactions to Shift Between Divergent & Convergent Thinking

A research paper submission by Haig Armen, 2015

For many decades, we have observed and studied how people create, what the characteristics of creative people are and what the process of creativity is. Many of these studies have focused on the cognitive abilities of individuals – what happens in our minds when we are creative? This paper describes a research tool for building a better understanding about how creative teams move between divergent, exploratory and convergent ways of thinking. With the proliferation of embedded technologies, there are emerging opportunities for employing tangible or embodied interaction within the creative process. In this paper, we make the case that the creative process can be augmented, observed and supported by metaphorical interactions via a hand-held tangible computing device.

Author Keywords

Interaction design; Tangible user interfaces; Embodied interaction; Design research; People-centered approach; Metaphor; Creative Process; Divergent-Convergent Thinking

ACM Classification Keywords

Human-centered computing; Interaction design theory, concepts and paradigms; Collaborative and social computing devices

General Terms
Design; Human Factors; Theory

Today’s design teams have a wide array of tools to aid in the design process, and even the most digitally savvy teams still use tangible tools like whiteboards to help in brainstorming sessions. There have been a great many studies in the area of creative process in the context of design and brainstorming, predominantly about the varying exercises in divergent (generative), exploratory (connecting & combining ideas) and convergent (analytical) cognitive modes. Yet how teams or individuals transition between these modes of thinking remains relatively unexplored. This project explores how a tangible object might emphasize meaningful gestural interactions not as a departure from, but rather as an integrated part of, the creative process. We propose that a tangible user interface will help in the creative process by shedding light on the transitions between modes of thinking. Tangible analogical interactions can be a powerful way to support modes of cognitive activity and ultimately provide a better understanding of when different strategies may be most effective. In this paper, we examine the connection between tangible gestural interactions and analogical mappings to abstract modes of cognition by way of a conceptual prototype called the MetaCube.

To best understand how a tool could improve the creative process we first observe that creative teams are most productive when shifting between divergent and convergent modes of thinking. The ability to efficiently shift between modes may be an important feature underlying the capacity to be creative [12], and possibly, of particular importance in professions such as design [8]. There are a wide variety of creative activities, exercises and games that have been categorized into divergent and convergent categories [7] that act as useful frameworks for creative thinking and conceptual development. Physically interacting with an analogical concept makes the abstract become more concrete.
Building from the theory of embodied interaction we propose a tangible computing device that helps to bring a clearer collective understanding of how we shift cognitive modes using tangible interaction. Beyond embodied interaction, this case additionally considers the importance of flow within creative sessions as well as their collaborative nature. We hypothesize that by building a better understanding of how, when and why we shift our cognitive modes in creative sessions we can begin to create frameworks of knowledge around the collective creative process. The MetaCube project revolves around the following research question: Does rotating a tangible computing cube help creative teams better observe and gain insight into shifting between divergent, explorative and convergent cognitive modes at specific time intervals?

Case studies of this type are important at this juncture in the area of tangible computing, as designers strive to understand what the most natural gestural affordances are for tangible user interfaces (TUIs). Discovering ways of encouraging people to interact using analogy is crucial for the Interaction Design field to create a vernacular around these gestural interactions. Does turning an object towards you imply ‘inward-looking’ convergent, logical and critical thought? Does rotating an object to the right signify thinking into the future, and conversely, does rotating an object to the left represent thinking about the past or the precedents of a problem space?
Although cognitive modes in the creative process have been well documented, it is unclear whether there are best practices for the frequency and periods in which to transition from one mode to another. Furthermore, though there are many generative and analytical activities, little has been discovered about whether certain combinations of activities are better or worse than others, or whether randomization of activities fosters effective creative thinking. Flexible thinking involves the ability to shift cognitive functioning from common applications to the uncommon; namely, breaking through cognitive blocks and restructuring thinking so that a problem is analyzed from multiple perspectives [12]. Yet “Most do not easily switch between divergent and convergent thought, but they need to do so because continued learning that blocks ideation is not helpful to the overall effort, and neither is continued ideation that blocks solution choice” [2,9].
By decoding the transitions in cognitive mode we can begin to understand where we have trouble shifting, and can address and improve our abilities to move easily between cognitive modes. The MetaCube aims to demystify these mode transitions by employing theories of embodied interaction. Using tangible tools to help in brainstorming can prove to be extremely effective. As Lakoff and Johnson [7] point out, metaphor and analogy are more than mere language and literary devices; they are conceptual in nature and represented physically in the brain. As a result, such metaphorical brain circuitry can affect behavior profoundly. For example, you may recognize that Shakespearean tragedies have a similar structure: a phase of increasing conflict between opposed sides or characters, a major confrontation between the opposed characters, and a phase in which the opposition is worked out and resolved in one character’s victory and the other’s defeat. It may then occur to you that this structure is very like the shape of a pyramid, an isosceles triangle, which rises from a baseline to a central point and then falls back to its baseline. You have then perceived an analogy between a temporal phenomenon and a spatial one. In the case of the MetaCube, the device represents a noun (in the context of a brainstorming session this may be the problem at hand), and the act of rotating the cube is analogous to seeing the problem from another perspective. In another case study, Antle [1] elaborates: “Gestures may lighten the cognitive load because they are a motor act; because they help people link words to the world (e.g. deictic gestures); or because they help a person organize spatial information into speech (e.g. iconic or metaphoric gestures).”
Along with modulations in cognitive modes, flow is a crucial aspect of the creative process, specifically in brainstorming sessions. In Csikszentmihalyi’s seminal book [4], flow is described as a state of concentration or complete absorption with the activity at hand and the situation. When exploring the requirements of our MetaCube, we must consider the flow of the individuals in the creative team. Momentum and immersion can only be achieved in the absence of interruptions to the creative team. Achieving momentum in a creative brainstorming session requires time management. Commonly, time is blocked out and a facilitator is tasked with being timekeeper. A number of questions arise: what should the time period between cognitive modes be? Should each mode take the same amount of time? One of the widely adopted time-blocking methods for focused periods of concentration is the Pomodoro Technique [3], which suggests 25-minute increments of activity followed by 5-minute breaks.

The research on collaborative creativity is extensive and varies widely based on the type of creativity and field. The most relevant conclusion that can be drawn is that shared engagement fluctuates with changes in activities within creative teams. This finding suggests that careful consideration must be taken in designing a device that will keep people’s attention on brainstorming and topics of discussion rather than on the tools being used. It is clear that a device for collaborative creativity will require the affordances for many people to interact with it, not just an experience for an individual. The device will require the capability of providing a feedback mechanism that communicates to a number of people within the context of a room, and not necessarily to one person like most computing devices.

Related Work
Although there are no examples of work directly related to this area of inquiry, there are a few conceptual design projects from which we may draw useful considerations. The research project “A Cube to Learn” by Terrenghi, Kranz, Holleis and Schmidt [10] describes a Learning Cube, a novel tangible learning appliance used as a general learning platform for teaching vocabulary and 3D views to children through gestures and test-based quizzes. In 2001, Terry [11] outlined a project called Task Blocks that employs blocks as a tangible interface representing computational functions for creative exploration within the programming context. The design of the system encourages hands-on, active experimentation by allowing users to directly insert, delete, or modify any function in the computational “pipeline”.


The goal of the device is to help creative teams collectively shift modes of thinking without losing their momentum, as well as to regulate the frequency of the mode transitions. The MetaCube has the potential to become a powerful tool for facilitating creative sessions by providing users with gestural affordances that create analogies while modulating through various creative thought modes. The design of the prototype must reflect the collaborative nature of creative problem-solving teams. When providing feedback to the user or team, it is important that the device is able to communicate to more than one person. If color is the main mechanism for communicating the cognitive mode, it is imperative that the color be visible from all viewing angles when the team is sitting around the cube. Although seemingly unimportant, the shape of the cube is instrumental in implying specific gestural affordances. Unlike a sphere, a cube’s physicality suggests rotational gestures on the X- and Z-axes. Additionally, the device could communicate the changing of cognitive modes using sound or by wirelessly transmitting information, but these options were shelved to concentrate on the core of the study, opting for a subtle, non-digital form of user feedback.


By creating the MetaCube – a small hand-held tangible prototype capable of measuring its own rotation – we are able to address our research question. Participants use the MetaCube by rotating its orientation to mark the transition from one way of thinking to another. Imagine the scenario where a member of a creative team in a brainstorming session is prompted to pick up and rotate the MetaCube. The MetaCube’s new orientation triggers a glowing color that marks the transition from one way of thinking to another. The team has been told in advance the following light mappings:

1. Blue glow indicates divergent (generative) thought mode
2. Green glow represents exploratory mode
3. Red glow signifies convergent (analytical) thought mode
4. Flashing light of any color prompts rotating the cube
For example, rotating the cube to one orientation yields divergent thought mode, while rotating it to another orientation indicates that participants should proceed with convergent activities. The working prototype is able to detect rotation and its own orientation. Once rotated on its X-axis or Z-axis, the object is triggered and communicates its new cognitive mode to the team. The cube utilizes an Inertial Measurement Unit (IMU) – a 5 Degrees of Freedom IDG500/ADXL335, essentially a combined circuit board with both accelerometer and gyroscopic sensors – to sense orientation and rotation. A key feature of the MetaCube is the specific time intervals that prompt the members of the creative team to interact and change cognitive modes.
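As an illustration only (this is not the device’s actual firmware), the core of that orientation sensing can be reduced to asking which accelerometer axis currently feels gravity, i.e. which face of the cube points up, and mapping that axis to a mode color. The axis-to-mode assignment below is hypothetical:

```javascript
// Map a gravity vector from the accelerometer (ax, ay, az, in g) to a
// cognitive-mode color. The dominant axis tells us which face is up.
// The axis-to-mode mapping is hypothetical, for illustration only.
function modeFromGravity(ax, ay, az) {
  const magnitudes = [Math.abs(ax), Math.abs(ay), Math.abs(az)];
  const dominant = magnitudes.indexOf(Math.max(...magnitudes));
  if (dominant === 0) return 'blue';  // divergent (generative)
  if (dominant === 1) return 'green'; // exploratory
  return 'red';                       // convergent (analytical)
}
```

A scheme like this also explains why a cube, and not a sphere, suits the design: the discrete faces give the accelerometer stable resting orientations to distinguish.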

In the initial stage of exploration the modes are communicated by use of contrasting colors; later iterations may include broadcasting activities via web applications served by the cube to surrounding computers. With a built-in web server the MetaCube could dynamically create activity cards served to the client-side browsers of the team, connected to the cube via a wifi network. These last features were not included in the original prototype as they were beyond the core research question.


To begin to validate the hypothesis of this study, the MetaCube prototype acts as a proof of concept. The basic prototype was assembled and programmed, then tested with participants in a number of informal settings. The purpose of the cube was first explained to participants prior to their brainstorming activities. A simple creative process was facilitated, and the use of the cube was observed and captured for later reflection. During the session we tried to capture participants’ reactions, anything they said, and their facial expressions. Participants were then asked the following types of questions: Did the cube help or distract the team’s creative flow? Did rotating the cube strengthen the idea of shifting modes of thinking? Did the colored light help users understand the shift in modes? Could this method of observing shifting cognitive modes be useful for creative teams? We were able to informally test our assumptions by putting the prototype into a brainstorming session and explaining how the team could use it to help them shift between divergent and convergent creative activities. Our observations were generally positive, but further formal studies would be necessary to draw definite conclusions.

The people within the observation session welcomed the idea and felt it was intriguing in the context of creative problem solving. The interaction paradigm was easily understood and the team was able to integrate the MetaCube into their flow. Our informal study yielded the following findings:

1. The cube helped the team creatively once the members of the team all understood its purpose.
2. Rotating the MetaCube did indeed strengthen the idea of shifting modes of thinking, both for individuals and for the team as a collective.
3. The colored light did help participants understand the mode changes, but a legend mapping the colors to modes was frequently glanced at.
4. There was a great deal of agreement that by observing shifting cognitive modes, both teams and individuals would become more effective during creative problem-solving sessions.

Additionally, we observed that although the cube was able to indicate the change in cognitive mode, the team still broke their flow by having to discuss which creative activity they would proceed with. This suggests an opportunity for the device to communicate an activity as well.


After creating the MetaCube and later presenting and explaining its purpose to various designers and writers, the response was one of general interest, and many began to think of other analogies to apply to the rotational interaction. Ideas were generated about ‘hinging’ from one way of thinking to another, and about using the metaphorical expression of “taking a 180 degree turn” to represent a pivot in direction. There was a slight cognitive disconnect between the six sides of a cube and the three cognitive modes, which added an element of unpredictability to using the MetaCube that not all participants understood. Although this research tool was created primarily to experiment with ideas for the creative process, the prototype and its reception act as an informal validation of a possible product. The initial decision not to have the cube display any digital information, in order to minimize its perception as a computing device, was in retrospect a good one, and any further exploration of this idea will continue along the same line of reasoning.


In this paper we present a short study that investigates the benefits of a tangible computing device that enables hands-on interaction to help creative teams while brainstorming. Our contributions include a concept-driven design project and prototype. We concluded that the MetaCube shows promise as a unique, tangible, non-disruptive way of conducting collaborative creative brainstorming sessions. The physical interactions gave the creative teams a concrete way of thinking about when and how to transition from one way of thinking creatively to another. We further concluded that effective Tangible User Interface (TUI) design can result in epistemic, exploratory, collaborative and cognitive benefits in collaborative creative contexts.


1. Antle, Alissa N. Exploring how children use their hands to think: an embodied interactional analysis. Behaviour and Information Technology (2011)

2. Brophy, D.R. Comparing the Attributes, Activities, and Performance of Divergent, Convergent & Combination Thinkers. Creativity Research Journal. 2001

3. Cirillo, Francesco. The Pomodoro Technique. FC Garage GmbH, 2013

4. Csikszentmihalyi, Mihaly. Flow: The Psychology of Optimal Experience. Harper Perennial (2007)

5. Gray, Dave, Brown, Sunni and Macanufo, James. Gamestorming. O’Reilly. 2010

6. Hatchuel, Armand, Le Masson, Pascal and Weil, Benoit. Teaching innovative design reasoning: How concept-knowledge theory can help overcome fixation effects. Artificial Intelligence for Engineering Design, Analysis and Manufacturing, Cambridge, 2011

7. Lakoff, George and Johnson, Mark. Metaphors We Live By. The University of Chicago Press, 1980

8. Pringle, Andrew J. Shifting between modes of thought: a mechanism underlying creative performance. 8th ACM conference on Creativity and Cognition, 2011

9. Sak, Ugur and Maker, C. June. Divergence and convergence of mental forces of children in open and closed mathematical problems. International Education Journal, 2005

10. Terrenghi, Lucia, Kranz, Matthias, Holleis, Paul and Schmidt, Albrecht. A cube to learn: a tangible user interface for the design of a learning appliance. Personal and Ubiquitous Computing April 2006

11. Terry, Michael. Task Blocks: Tangible Interfaces for Creative Exploration. CHI ’01 Extended Abstracts on Human Factors in Computing Systems (2001)

12. von Oech, R. (1992). Creativity Whack Pack, Stamford, CT: U.S. Games Systems, Inc


Where the Wild Things Are: Research Paper

Seeking Improvisation on the Open Web Platform

Haig Armen – Emily Carr University of Art & Design
John Maxwell – Simon Fraser University
Kate Pullinger – Bath Spa University

Teaching creativity in digital media comes with its challenges. Obvious questions arise: What is the most effective creative process? What tools should be used? What is the right format? As Maslow’s Hammer reminds us, if you only have a hammer, everything looks like a nail; perhaps the challenge is one of perception. In this article we reflect on an experimental digital fiction workshop that took place in the summer of 2014. The workshop aimed to explore the boundaries of digital fiction using open web technology. The article discusses how process and tools forge the pathways and ultimately shape our outcomes. As we work to establish fully digital workflows and widely adopted tools for creating digital narratives, we inadvertently limit our possibilities and fall into producing the same formats and modes of interaction over and over. In this nascent form, digital fiction requires efforts aimed at the fringes, exploring the boundaries of medium and genre. Today’s software authoring tools are designed for carefully composing and organizing content, not for collaboratively improvising to create new forms of content, navigation and experience. Without a better understanding of the materiality of digital media, gained through code proficiency and improvising on the form, we are limiting our ability to achieve more sophisticated forms of expression.

The universe of improvisation is constantly being created; or rather, in each moment a new universe is created… At any moment, an event may occur for no reason at all, with no relation at all to the preceding event… In this universe each moment is an entelechy, with both its cause and its end contained in itself.
In June, John Maxwell, Kate Pullinger and I ran a week-long digital fiction workshop hosted by Simon Fraser University in Vancouver, BC. The experimental workshop was billed as a way to explore collaborative writing and production of digital fiction, inspired and directed by Kate Pullinger, a pioneer of the form.

Inspired by a conversation we had at Books in Browsers 2013 and a mutual admiration of Adam Hyde’s BookSprints, the idea of collaborating on a digital book workshop was born. Apart from gathering people together for an intensive limited timeframe to produce an outcome, this workshop had little resemblance to an actual BookSprint. We liked the idea of creating a framework – planning the activities and the technologies we’d use around creating a piece of digital fiction in a tight, high-intensity collaborative environment.

Setting the Stage
As the Digital Fiction workshop became a reality in the spring of 2014, we prepared by establishing the structure, content and technology for the event. Months before the scheduled workshop, John and I discussed establishing a structure for the week of the gathering. Yet not knowing exactly what type of people the workshop would attract made it difficult to forge a strict structure. We wanted to bring together a group of interested authors and creators to participate in a real-time collaborative effort, writing and designing a multi-modal work over a short period of time. So we needed to carefully scaffold things to allow our workshop participants to move forward quickly.

John and I saw eye to eye about the kind of tools to provide to our workshop participants. For perhaps the first time in the history of publishing, there is a common platform and software ecosystem that is free and open to all. The wildly popular technologies underpinning the Open Web Platform act as an alternative to the specialized, expensive toolsets that have dominated publishing (Linotype, InDesign, even Flash); the open web lets the finely tuned skills and sensibilities of authors, designers, and producers be decoupled from the exclusivity of proprietary tools. We both see Open Web technology as the only probable and preferable way forward.

In the weeks leading up to the workshop, we created a workflow that began with a wiki as a central collaborative writing hub, where participants could write in plain text, markdown, and HTML markup: markdown for formatting text, and HTML for differentiating data types and inserting media elements.

The next step was to find a way to generate HTML from the wiki. After a number of attempts, John settled on Pandoc, John MacFarlane’s free command-line tool for converting just about any flavour of markup and markdown. The final piece of our workflow was our target production framework, Caleb Troughton’s excellent Deck.js, a jQuery library designed for slide presentations that proved to be both flexible and extensible. Deck.js gave us the ability to create visual transitions from slide to slide, trigger HTML elements to animate across the screen, and control audio and video – all of this in an elegant, full-screen presentation that was compatible across multiple browsers and platforms.
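To make the conversion step concrete: what Pandoc did for us was turn the wiki’s markdown into HTML. The sketch below is not Pandoc (which handles vastly more), just a toy illustration of the shape of that transformation, covering only headings, bold and italics:

```javascript
// Toy sketch of the markdown-to-HTML step that Pandoc performed in
// our pipeline. Pandoc handles far more than this; the function below
// covers only ATX headings, bold and italics.
function mdToHtml(md) {
  return md
    .split('\n')
    .map(function (line) {
      // ATX headings: one to six leading '#' characters
      var heading = line.match(/^(#{1,6})\s+(.*)$/);
      if (heading) {
        var level = heading[1].length;
        return '<h' + level + '>' + heading[2] + '</h' + level + '>';
      }
      // Bold (**text**) first, then italics (*text*)
      var html = line
        .replace(/\*\*(.+?)\*\*/g, '<strong>$1</strong>')
        .replace(/\*(.+?)\*/g, '<em>$1</em>');
      return html ? '<p>' + html + '</p>' : '';
    })
    .join('\n');
}
```

For example, `mdToHtml('# The Last Cartographer')` yields `<h1>The Last Cartographer</h1>` – the same kind of round trip our participants saw when they clicked through from the wiki to the browser.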

Our Plan
Our intended workflow was to have our workshop participants begin sketching and storyboarding on sticky notes, index cards and whiteboards, then move on to laptops, writing markdown in our wiki and converting to HTML5 via Pandoc. In the wiki, we could write a “script” for each story segment, including not only the text but the images and audio as well. The wiki allowed everyone in the group to edit, make quick changes, create new segments, and quickly see and click through their results in a browser window.

John and I worked out extensions to Deck.js to facilitate audio and video triggers at different places, a global navigation system, and some custom effects for builds. The idea was to keep the gory details of the HTML5 production contained in an iterative production pipeline, so that a group of writers could work out the story and how to tell it in an agile, iterative way.
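Stripped of the DOM and of Deck.js itself, the trigger logic we built can be modelled as a map from slide index to media cue. The names below (createDeck, onEnter, go) are illustrative, not the real Deck.js API:

```javascript
// Simplified, DOM-free model of the slide-cue logic we layered on top
// of Deck.js. All names here are hypothetical, for illustration only.
function createDeck(slideCount) {
  var cues = {};    // slide index -> callback (e.g. start an audio clip)
  var current = 0;
  return {
    // Register a media cue to fire when a given slide is entered
    onEnter: function (index, fn) { cues[index] = fn; },
    // Navigate to a slide; fire its cue if one is registered
    go: function (index) {
      if (index < 0 || index >= slideCount) return current;
      current = index;
      if (cues[index]) cues[index]();
      return current;
    },
    next: function () { return this.go(current + 1); }
  };
}
```

In the real workshop build, the cue callbacks started and stopped HTML5 audio and video elements as readers moved through the story.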
We gathered 12 people in June 2014 for the workshop: Kyle Carpenter, Alexandra Caufin, Jodie Childers, Jennifer Delner, Bob Fletcher, Rochelle Gold, Nicola Harwood, Inba Kehoe, Shazia Ramji, Kaitlyn Till, Jessica Tremblay. They were mostly graduate students and academics, but also writers, editors and photographers.

As Adam Hyde has pointed out, to run a successful Book Sprint you need: good people, a good venue, good food and an experienced facilitator. We had just two of these: a good venue and good people. We didn’t have the budget for catering, and we were certainly not experienced at this sort of engagement! What we did have was 15 hungry people in a room with lots of computers and technology on tap, and paper of all sizes.

Over the space of a week we came together on a storyline, a script, imagery, and ideas for animation and audio. Our workshop goal was to create, experiment and discover new tools, to explore new methods, to collaborate ‘digitally’, to forge new territory. Collectively we called what we created The Last Cartographer, a ‘neo-diluvian’ tale of love and loss, set in near-future Vancouver. By the end of the week-long session, we had a five-part, not-quite-linear narrative, mostly created using text and images, but with audio, video, and animation elements as well. (fig-0-LastCartographer.png – “The Last Cartographer’s opening screen”)

As a separate effort, The Last Cartographer was further developed in September by my third-year design students and me at Emily Carr University of Art + Design. There are now five versions of the Cartographer piece, based on the same story but differing in visual and interactive treatment.

Working on paper
We liked to say that The Last Cartographer was produced entirely using free open web technologies. That was our intention but the reality is that this piece was produced mostly on paper. (fig-1-Collaborating using paper)

Our participants largely worked on paper, whether on stickies, flipcharts or index cards and whiteboards. They drew and talked, wrote and talked. In fact, probably the bulk of the week was spent sketching on paper. Why? Because paper affords extremely quick, easy, social development of ideas.
(fig-2-cardsorting.jpg – “Moving story fragments around on paper”)

It’s also worth mentioning that the workshop attracted people who identified as writers, not designers or coders. Despite that, they mostly produced sketches and storyboards rather than any great amount of text. They thought primarily in terms of a small bit of text in the foreground and a background image, much like a picture book. Where did this model come from? Was it Kate’s influence via Flight Paths? Or perhaps my presenting my work on CBC Radio 3 magazine during the week? Or something else?
(fig-4-sequencing.jpg “Storyboarding, sequencing and visualizing”)

We’ve all read our fair share of picture books. Could this pattern be so ingrained that it seeps into our thoughts of onscreen fiction, inconspicuously enabling us to fall into the patterns – the modes of storytelling – that are most comfortable and familiar to us? John and I provided a straightforward production workflow, but it wasn’t fluid enough to really allow our writers to work creatively within it. The ideas were being generated outside of the tools.

Jamming it out: The Wolf
In the weeks before the workshop, while John and I were working out how we would scaffold things together, and while we were evaluating frameworks, we threw together a little story about a lonely wolf, using Deck.js. The idea was mostly to test out what we were doing: image, text, placement, backgrounds, masking. John created a few screens, I created a few, John added more, and so on. We bounced a story back and forth between us. It was a jam session, akin to two musicians tuning up their instruments and warming up. Between the two of us we have something approaching 40 years of web development experience; we improvised our Wolf story in markdown, raw HTML, CSS and jQuery while ‘tuning up’. In contrast, in our June workshop, we improvised using paper: on stickies and flipcharts, in words, in sketches and in arrangements.

With hindsight we are now able to look critically on our Digital Fiction Workshop experience. We see three main points of reflection. First, the role of proficiency in creating digital fiction. Second, how profoundly the tools shape our outcomes. And third, the important distinction between composing and improvising in the context of creating digital media.

The Role of Proficiency
Our workshop participants mostly defaulted to creating with pencil and paper. Although they were all proficient with digital tools, when it came to creating collaboratively, paper simply didn’t get in the way of their ideas and creative process. Paper does a surprisingly good job of enabling fast, iterative idea development, something not lost on designers or software developers.

We also put software in front of our writers: easy-to-use tools like wikis. Yet software only gets out of the way of one’s creative inspiration when you know it intimately. That is something John and I were able to achieve, jamming through our improvised Wolf story, because of many years of coding, designing, and web development. Frank Chimero puts it well: “Let’s talk about making tools. The things we make should either reduce pain, increase pleasure, or do some mix of the two.” Yet this was not an experience open to our workshop participants. Fortunately, they found ways around it, photographing their storyboards and sequencing their narratives in a variety of crafty ways that often approached, but perhaps never quite reached, final resolution.

In our talk at Books in Browsers 2013, we spoke about the craft of publishing and the importance of understanding the grain, or materiality, of digital media. With a better understanding of materiality, we are able to craft with nuance in mind, taking advantage of the idiosyncrasies of the medium to create more sophisticated forms of expression – forms that are appropriate for and unique to digital narratives.

In the workshop, there was a comfort with exploring and experimenting with characters, setting, themes and plot; this is where most of the improvisation occurred. Our group was less able to experiment at the presentation level, in software. We believe that with a stronger understanding of the possibilities that come with digital media, and some solid coding skills, our group would have been able to jam out some amazing concepts directly in the digital realm.

Tools & Frameworks
In our workshop we gave our participants tools that we thought would be open and malleable in shaping a digital piece of work. Yet even these loosely connected open web tools implied a very specific way to present content. Moving beyond the prescribed way that, for instance, Deck.js wants to be used requires a level of comfort and proficiency with software of that kind. The tools we chose clearly dictated the type of narrative that would be produced, and many of the commercially available software tools for creating digital books are just as limiting.

My colleague Celeste Martin at Emily Carr University of Art + Design has catalogued the various interaction patterns that a majority of electronic books follow. Each pattern clearly details a style of navigation and content delivery – essential learning for designers working with digital books. Yet these rudimentary patterns are directly linked to specific software platforms. Each platform is optimized for a specific interaction pattern, and we are not easily able to push beyond the prescribed navigation and content-presentation paradigms.
(fig-6-ebook-patterns.png – Celeste Martin’s Book Patterns)

Software can transcend mere utility when it allows its user to simultaneously think expansively while providing abstract constraints that help organize and build mental models. Consider how Jazz musicians use the harmonic structures of standard jazz repertoire as a shared foundation to improvise upon:

Many of the most popular jazz compositions — the standards — are repeatedly transcribed and compiled into Real Books and often used as learning tools. Real Books, as well as their many variations (Fake, Latin Jazz, Jazz Rock and, latterly, iReal Books), provide conventional harmonic sequences and phrase components that are acquired and employed as parts of each new musician’s improvisational complex vocabulary.

Just as Haftor Medbøe explains the importance of creating frameworks for jazz musicians to build shared understanding and a foundational structure for improvisational collaboration, Liz Danzico describes how designers of software could approach improvisation as a goal:

“Just as Miles Davis created a new form of jazz that allowed a new generation of musicians to play beyond themselves, so do we have the opportunity to create frameworks for audiences to create in realtime.”

Apart from iBooks Author, all the other platforms require a dual mode of creation: compose then preview, compose then preview, and repeat. This switching of modes is slow and tedious and makes the tools opaque, whereas the visual (direct manipulation) tools are the ones that seem to melt away, becoming transparent and allowing for improvised moments. How can we seriously approach creating digital fiction without having either (a) a mature, visual, direct-manipulation toolkit, or (b) ten or more years of experience with the code, so that we can visualize what our code will produce before we see it rendered? The latter seems to be what experienced web developers do: thinking in code while imagining what appears in the browser. It’s a mode that clearly works, but it seems to limit creative engagement.

Composing & Improvising
Composing, in music, requires that same kind of abstract sense, where you can imagine the orchestra playing the notes you write. In the same sense, coding with web technology requires you to imagine how the browser will render your code. Most of the software we use on our personal computers today is designed for composition, not improvisation. Yet improvisation is critical if digital fiction is ever going to be art.

As Alan Kay puts it,

There is the desire of a consumer society to have no learning curves. This tends to result in very dumbed-down products that are easy to get started on, but are generally worthless and/or debilitating. We can contrast this with technologies that do have learning curves, but pay off well and allow users to become experts (for example, musical instruments, writing, bicycles…)

We always seem to be striving to improvise with consumer-level software, but we are unable to reach interesting results because those tools support general patterns and conventions rather than the deep engagement that facilitates improvisation. Alan Kay urged us to improvise with the early generations of the personal computer. He wanted us to build our own tools that would allow us to put things together dynamically.

Designer Bruce Mau also suggested we make our own tools in his Incomplete Manifesto for Growth: “Hybridizing your tools in order to build unique things. Even simple tools that are your own can yield entirely new avenues of exploration.”

Creating small, open tools for publishers to build their own unique platforms will amplify our capacities and allow the improvisation and fluid ideation that is necessary in creating new works of digital fiction. Even though software companies urge us toward fully digital workflows, we are reminded that creative people collaborate well using plain paper. We aspire to carry the spirit of improvisation and brainstorming through the entire creative process, not compartmentalize it within the early, pre-digital phase of a project. Improvising throughout a project has the potential to introduce media experimentation along with conceptual ideation.

Media researcher Paul Nemirovsky argues, “in order to facilitate this kind of exploration, (1) computational tools must actively participate in the creative process and (2) the interaction framework must allow structural exploration of media. This leads to our main claim: improvisation should be considered a valid and appropriate paradigm for media interaction.”

We might ask ourselves how Stein, Woolf, Vonnegut, Burgess, Burroughs or Dahl would have approached digital fiction narratives. How would they have found ways to experiment with media and create experiential narrative structures? It is not until we forge our own tools – tools that allow us to engage deeply and fluidly with media – that we will be able to meaningfully explore creativity and new genres.



Last week O’Reilly Media hosted the Solid Conference in San Francisco, and I thought I’d post a quick summary of the high-level themes I heard talked about.

I know what you’re thinking: another conference where speakers spit out buzzwords like 3D printing, internet of things, drones, crowd-funding, and so on. But I was utterly impressed by the rigour and depth of the presentations and the curation of content at O’Reilly’s Solid conference in San Francisco last week. Although anyone looking carefully at design and technology can see that there’s been a steady shift towards the harmonic unity of hardware and software – a merging of digital services and physical products – it is still important to see O’Reilly formalize this shift in thinking and acknowledge the momentum that has gathered at this conference.

1. Hardware and Software are converging

You’ve probably heard this one before, but what does it mean exactly? At Solid, many presentations suggested that not only are many of today’s products a combination of physical materials and digital media, but some of the more successful hardware/software hybrids are discovering harmony between digital and physical attributes across objects and services.

2. New Digital Materiality

Not only are hardware and software merging, but Neil Gershenfeld (MIT) and a few others suggest that we’re close to creating new materials with digital computing capabilities built in. It’s difficult to conceive, but we’ll be able to turn data into things and things into data. There was a particular quote by Gershenfeld that I enjoyed, describing their new research project: “There is no machine, the material is assembling itself.”


Sample of MIT’s Cellular composite material (Image: Kenneth Cheung)

Gershenfeld went on to describe a few applications of this new digital material, starting with airplanes, then moving to the work MIT is doing with Homeland Security to deploy the material to assemble mountains that act as barriers to hurricanes. Yes, I know, mind-blowing.

3. World becomes an API

As the world becomes embedded with sensors and semi-smart objects, we’re able to bind them together with software. The opportunities become abundant when these devices are interconnected and software APIs allow us to request and push data throughout systems.
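One way to picture this binding layer is as a publish/subscribe bus: devices publish readings to topics, and any interested software subscribes. The sketch below is a minimal in-memory illustration; the device names and topics are hypothetical:

```javascript
// Minimal in-memory publish/subscribe bus, a sketch of how networked
// sensors and actuators might be bound together in software.
// Topic names and payloads here are hypothetical.
function createBus() {
  var handlers = {};   // topic -> array of subscriber callbacks
  return {
    subscribe: function (topic, fn) {
      (handlers[topic] = handlers[topic] || []).push(fn);
    },
    publish: function (topic, data) {
      (handlers[topic] || []).forEach(function (fn) { fn(data); });
    }
  };
}

// Usage: a thermostat publishes a reading; anything listening reacts.
var bus = createBus();
bus.subscribe('thermostat/reading', function (r) {
  // e.g. adjust the furnace, log the reading, update a dashboard...
});
bus.publish('thermostat/reading', { celsius: 19.5 });
```

A real internet-of-things system would put a network protocol underneath this, but the shape – topics, publishers, subscribers – is the same.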

4. Designing above the single device level

Probably the single most quoted example at Solid was Nest and it’s easy to see why. The Nest product is a neatly packaged bundle of hardware and software that takes advantage of its network capability. As you add more Nests to your home the value of their service increases. This intentional design perspective is emerging as we move towards creating networked objects.

On day two, Tim O’Reilly drove home the point of user experience becoming a critical component of a usable internet of things. He urges that we think deeply about both the implicit and explicit levels of interactions of networked objects and their systems.

5. Design beyond the Screen

Spike Jonze’s vision of the not-so-distant future (Her, 2013) suggests that screens might not be as necessary or desirable as the many predictions of glass interfaces everywhere may have implied. A number of projects at Solid showed inventive ways of interfacing with users: acquiring input through sensors and computer vision, and providing feedback and communication through LED or sound actuators. I believe Josh Clark’s talk summarized it well:

“We can use new hardware sensors not just to capture data or talk to other devices, but to create new interactions”.

Although I’m unable to attend the Eyeo Festival this year, I feel grateful to have been a part of Solid and am pleased to hear that O’Reilly intends to make it an annual event. There was a general feeling of excitement and optimism in the audience, and a quote from GE’s Andrew Crow sums it up well:

“Rarely do conferences sit on the edge of future possibility like this.”

I strongly recommend taking some time to watch the keynote talks on YouTube:
Youtube playlist of the Keynote presentations

Speaking at mPub: future of Publishing

I’d like to express my gratitude to John Maxwell for extending an invitation to speak to SFU’s Masters of Publishing program about Interaction Design and its relevance to publishing. It was great to meet the mPub cohort, and we had a great time discussing the user experience of digital/print hybrids.

Here is the slide deck from my talk:
SFU 2014 Publishing in a Post-Digital Age PDF

Daily Data Visualization

After attending the EYEO conference this year and hanging out with Fire Friends, I was inspired to challenge myself to create a daily data visualization during the month of July. As it happens, I’m in the south of France and London for most of the month, and I’d like the visualizations to centre on our family holiday. I’m gathering data and images as I go, and I’m sure the results will both vary greatly and bring a new perspective on personal data, information design and coding.

You’ll be able to check back with me on how it’s going each day, although I haven’t given myself a deadline for each day because, well, you know, I’m on holiday.

See the Daily Visualizations here:

Day 01: Jetlag
Day 02: Colour Abstraction
Day 03: Lethargy

If you have any ideas for specific types of visualizations, let me know and I’ll try them out.

EYEO 2013

This past week Minneapolis’ Walker Art Center has been filled with artists, coders, and interactive intelligentsia from around the world for the Eyeo Festival. What is Eyeo? A media art, interaction and information conference? Is it creative coding? Is it a data visualization conference? Is it design? Storytelling? “Yeah,” said festival co-founder Dave Schroeder, addressing an auditorium, “it is all of those things.”
Now in its third iteration, the festival is four days of talks, workshops, and social interactions that acknowledge technology, art, interaction and information – and their intersections. The projects that emerge from these territories are exciting, and it seems they are all being shared here. Data is also changing; data is no longer just numbers – it’s words, a social media feed, a colour, a sensor, a houseplant, or a ship. Access points to data are expanding, and the processes and tool sets that manage data are evolving, becoming more transparent; they are now open, malleable and ready for us to shape to tell stories.
What happens when possibilities, ideas and community come together? Great design, alternative storytelling, and inspiring theory ensue. Here are five reasons to follow the festival, and its practitioners, as this community grows and continues to leave brilliance in its path.

One of the aspects of Eyeo that I most appreciate is its brandlessness. Yes, I know, Eyeo itself is a brand, and there are certainly intersections between art, code, and advertising, but “interactive” isn’t limited to the next hot startup, million-dollar app, or the latest service. Eyeo distinguishes itself from other festivals, like SXSW Interactive, by its lack of commercialization and its focus on the intelligence of good projects. Eyeo reminds us that art is essential to digital innovation, and the ethics of the community prioritize responsive ideas, creative solutions, and alternative storytelling rather than trying to make a buck. As one panelist joked, “Data visualization artists are kind of the free R&D departments for ad agencies.” Perhaps a sarcastic side effect, but producing cool work of one’s own volition is, for me, a true artistic gesture.

Eyeo 2013 – Kyle McDonald from Eyeo Festival // INSTINT on Vimeo.

Ideas are better when they are shared

Media artist Kyle McDonald finds inspiration in a collective and continual awareness of how and what is released to the ether of the Internet. We only give things half of our attention anyway, so McDonald encourages us to think of projects in small but elegant and sharable terms, and calls us to action with tweet-sized proposals for projects to take and run with. His brainchildren, each of them less than 140 characters, include open-ended proposals for the public to realize, like “sand-sorting machine to automate sand granule tonalities” or “subtractive modeling in foam with high-frequency heterodyning.” Take these and do with them what you will. Others the artist turns into real art projects, like a “scattered array of 50 mirror balls reflect light from three projectors, filling a room completely, casting patterns that fill the visitor’s peripheral vision,” which evolved into Light Leaks, or “a room full of Sonos speakers that follow you through the space,” which became an interactive installation and collaboration with the musicians the xx for their music video for “Missing.”

Eyeo 2013 – Casey Reas from Eyeo Festival // INSTINT on Vimeo.

Software is a relevant art form
Artist and professor Casey Reas offered to dispel the density of software as a visual arts medium, as well as the context for viewing and understanding software as an art form. A professor at UCLA, Reas argues that software-as-art arrived as early as the 1970s and has been ushered in over the decades, in tandem with Conceptual Art. Software meets the criteria of an artistic medium, as it is both a tool set and a material. Reas is not only a proponent of this thinking; he developed a series of principles for code that replace the antiquated ‘principles of art’ you may have learned in high school. Unity, Harmony, Variety and Balance are replaced with computation-specific variables including Repeat, Parameterize, Transform, Visualize, and Simulate. These are not only methods of process for emerging software artists, but also, by extension, criteria by which we can bring clarity to, and critical discussion around, digital art forms.

Chocolate, History Flow (2003).
Data is not (just) numbers.
Visualization typically happens with numbers; quantitative truths are achieved through objectivity. Fernanda Viegas and Martin Wattenberg of hint.fm ask us to consider the subjective truths – what people are thinking, or rather, obsessing about: the data on the periphery of the data. The interesting link between objective and subjective data, and maybe an overarching theme of the conference, is the notion of the self-appointed project. What better example of the self-appointed project than Wikipedia! In History Flow (2003), Viegas and Wattenberg use words with a colour-coded ledger as data to uncover the secret obsessions of self-appointed Wikipedian entries, edits, and patrolling. The result is a Missoni-esque pattern in fluorescent colours native only to hex codes, riddled with subjective data, human interruptions and vandalism. In a more recent project, Reproduction (2011), the pair create composites from varying discontinuities of digital versions of famous artworks.

Visualizing Painters’ Lives

A short, well-designed story
The “show, don’t tell” mantra applies for data-visualization artist Giorgia Lupi, who acquaints us with the notion that stories don’t have to be told with articles or even statements. Storytelling through data mapping allows for the retelling of non-linear and layered stories in ways that are clear, with data that can represent reductive, but complete, information. Often constraints – like time, space, and information – are also resources. The founder of Italian data visualization studio Accurat continued to show, not tell, us about the lives and works of 10 abstract painters through clean, well-designed diagrams highlighting the palette, size and artistic period of masterpieces, as well as love affairs and life events, throughout their career trajectories. The designer is also an advocate of drawing out ideas as she works, reminding us that the Italian verb for “draw” is synonymous with “design” or “plan.”

This summary is just the tip of the iceberg; EYEO was packed with inspirational moments, sometimes even between the talks and workshops. Try to get there next year – I know I will.


Minecraft on Raspberry Pi


After a few attempts, I was able to install a Minecraft server on one of my Raspberry Pis. I don’t want to sugarcoat it, but it wasn’t too difficult, thanks to a shell script from KM_James. Thanks, KM_James!

The initial Minecraft tests were decent, and I was able to have others log in from remote locations using a static IP address with port forwarding. It’s not entirely secure yet, but that will be the next step, along with installing some plugins into the CraftBukkit installation.

Next project, getting Processing running on the Raspberry Pi. Any takers?

Designing in a Post Digital Era

To understand the notion of ‘Post-Digital’, I have written this short formal essay to represent my perspective on a conceptual exploration I have been pursuing with a few fellow professors over the past year. This is strictly my position in a mental exercise that Vjeko Sager, Duane Elverum and I agreed to participate in, and it does not reflect our overall group perspective.

The purpose of this essay is to introduce the concept of Post-Digital and suggest ways for people to be creative in a Post-Digital environment. This may be important to anyone interested in how Digital Media has had impact on culture, creation, communication and the idea of property.

Although related, I feel this essay discusses the Post-Digital concept in a very different way than James Bridle’s New Aesthetic, which looks at how digital artifacts and glitches can be used as a stylistic movement in design or art. I believe that the concept of Post-Digital should be a deeper dive conceptually than a surface-level glance at how digital tools can influence our design fashion.

Our initial attempt to define ‘Post-Digital’ came by describing what we felt we had lost and found in our experiences with being creative in the digital space. This thought experiment challenged us to think hard about the things we did that were conceptually different before and after we began creating in the digital world.

My first inclination was to say that the digital era has given us ‘new eyes’, much like Charles & Ray Eames gave us a way of thinking about and understanding scale and distance in their milestone film, Powers of Ten. Some say the digital era has given us the ability to stretch time and space; at the very least, it has given us a way to see beyond our normal capacity – magnifying incredibly fine details of images or sound, or panning out to ‘see’ how nine Beethoven symphonies plot out over time. The notion of being able to convert something that happens over a great length of time or space into one macro view is a digital one. Although it was done before we had digital technology, digital culture has brought that type of thinking to the everyday designer or artist.

This brings me to what I believe is the most important aspect of what I have found as a creator in the digital era. The digital world lets us traverse media seamlessly. When I am creating with digital tools, I can live in the moment and improvise without the borders that we have in the physical or analogue space.

In the digital space, everything becomes your raw material for creating. Everything is up for grabs – duplicatable and malleable. Everything can be converted from one medium to another. The boundaries melt away. I’m able to make something out of something else.

The digital medium lets me convert sound into visuals, and visuals into sound. My multi-disciplinary tendencies are unconfined, and my creative pursuit is unencumbered by artificial constructs. Perhaps digital media is allowing us to be truly multi-disciplinary.

What I believe to be an amazing advantage of the digital realm may also be the cause of my greatest loss: my focus. In my teens I played guitar with laser focus, and by university I was playing professionally and had gained a mastery of the instrument. That single-minded intensity and desire brought me a level of proficiency and intimacy with the guitar that in turn gave me the ability to express myself in extraordinary ways. Essentially, I am still striving for that same level of self-expression in the digital sphere. Is it even possible, or reasonable, to have the same aspiration?

Certain aspects of the digital sphere have an intoxicating allure that tends to splinter focus and encourage tangential exploration. It offers keyword connections, a vast array of choices for any one niche, and multiple ways of doing the same thing.

Often these tangents bring me back to my original goal with new-found fodder, and sometimes they fragment a project into a thousand pieces.

In conclusion, I’d like to suggest ways of working in the digital space so as not to fracture and dilute your original goal. Perhaps we should only work in the digital environment when we have to, and complete whatever is possible in a physical or analogue way. How would confining part of your process to the digital world affect your outcome? Why would I want to do that? Because I feel that physical expressions in the design world may be an important key for communicating complex ideas and information. Additionally, allowing people to interact with data in tangible form may help them understand it in ways we have not been able to before. But perhaps that’s another topic to be explored in another essay.