Dancecult: Journal of Electronic Dance Music Culture 10(1): 63–82
ISSN 1947-5403 ©2018 Dancecult http://dj.dancecult.net
http://dx.doi.org/10.12801/1947-5403.2018.10.01.04
Rogue Two
Reflections on the Creative and Technological Development of the
Audiovisual Duo—The Rebel Scum
Ryan Ross Smith and Shawn Lawson
Monash University (Australia) / Rensselaer Polytechnic Institute (US)
Abstract
This paper examines the development of the audiovisual duo Obi-Wan Codenobi
and The Wookie (authors Shawn Lawson and Ryan Ross Smith respectively). The
authors trace a now four-year trajectory of technological and artistic development,
while highlighting the impact that a more recent physical displacement has had on
the creative and collaborative aspects of the project. We seek to reflect upon the
creative and technological journey through our collaboration, including Lawson’s
development of The Force, an OpenGL shader-based live-coding environment for
generative visuals, while illuminating our experiences with, and takeaways from, live
coding in practice and performance, EDM in general and algorave culture specifically.
Keywords: live coding; collaboration; EDM; audiovisual; Star Wars
Ryan Ross Smith is a composer, performer and educator based in Melbourne, Australia. Smith
has performed and had his music performed in North America, Iceland, Denmark, Australia
and the UK, and has presented his work and research at conferences including NIME, ISEA,
ICLI, ICLC, SMF and TENOR. Smith is also known for his work with Animated Notation,
and his Ph.D. research website is archived at <http://animatednotation.com/>. He is a Lecturer in
composition and creative music technology at Monash University in Melbourne, Australia.
Email: ryanrosssmith [@] gmail [.] com. Web: <http://www.ryanrosssmith.com/>
Shawn Lawson is a visual media artist creating the computational sublime. As Obi-Wan Codenobi,
he live-codes real-time computer graphics with his software: The Force & The Dark Side. He has
performed or exhibited in England, Scotland, Germany, Spain, Denmark, Russia, Italy, Korea,
Portugal, Netherlands, Australia, Brazil, Turkey, Malaysia, Iran, Canada, Mexico and the USA.
Lawson studied at CMU and ÉNSBA, receiving his MFA from SAIC. He is a Professor in the
Department of Art at RPI. Email: lawsos2 [@] rpi [.] edu. Web: <http://www.shawnlawson.com/>
Figure 1. Obi-Wan Codenobi and The Wookie performing at ReFest in New York City (2015).
In a Galaxy Far Far Away: 2014
Obi-Wan Codenobi and The Wookie, AKA The Rebel Scum, AKA Shawn Lawson and
Ryan Ross Smith, began as a parody wrapped in an enigma, with a more than passing interest
in what was to us an exciting new field of performance practice: live coding. For those who
may be unfamiliar with the term, live coding simply describes the editing of the source
code or algorithm of a piece of software while that software is running. The practice of live
coding has existed for some time, although a significant touchstone of sorts occurred in 1996
with the publication of SuperCollider at the International Computer Music Conference
(ICMC). This was followed by a coalescing of practices by Nick Collins et al. in 2003, Ge
Wang and Perry Cook in 2004, Andrew Sorensen in 2005, the creation of Live Algorithm
Programming and a Temporary Organization for its Promotion (TOPLAP) in 2004 and
the accompanying book chapter with the same title in Read_Me: “In a new discipline of
live coding or on-the-fly programming the control structures of the algorithms themselves
are malleable at run-time. Such algorithmic fine detail is most naturally explored through a
textual interpreted programming language” (Ward et al. 2004: 243). Eleven years after the
publication in Read_Me, the first International Conference on Live Coding (ICLC) was
held at the University of Leeds in 2015. ICLC and the related International Conference on
Live Interfaces (ICLI), have been instrumental in our development as an audiovisual live
coding duo, but our collaboration first started in a galaxy far, far away in Troy, NY.
In 2014, Smith was a graduate student at Rensselaer Polytechnic Institute researching
Animated Notation, and Lawson, a Professor of Computer Visualization. After a couple of
years we floated the idea of an audiovisual collaboration based loosely on the Star Wars
metaverse. The first performance set we created, if it can be called that, was hastily thrown
together for an evening of audiovisual performances held at the Electronic Media Performing
Arts Center (EMPAC) in Troy, NY in April, 2014. Creating a system where Obi-Wan
Codenobi could directly engage with his visual materials in real time was in large part the
impetus for this project, and so in preparation, Obi-Wan Codenobi built The Force, a live-
coding OpenGL shader environment that analyzed and reacted to the spectral content of
The Wookie’s sound.
Like live coding languages in general, The Force enables on-the-fly creation and
manipulation of functions to elicit an (almost) immediate visual response. As Collins et
al. note, “As long as the program has to be compiled in order to be able to run and to
simulate a user interface, the time delay between creating the tool and using it seems to be
very dominant” (2003: 327). Confronting this delay was an important consideration in
the development of this project, and to that end, The Force auto-compiles and attempts
to execute the shader code as the code is being written. Successful compiles are executed,
unsuccessful compiles retain the previous successful compilation, and although there is
a small delay time between typing, compilation and execution, the process feels nearly
instantaneous.1 This immediacy facilitates fast-paced graphical changes in performance. The
audiovisual synchronicity is based on the aforementioned spectral content from the sound.
The Wookie’s audio output is analyzed through a Fast Fourier Transform (FFT) process
which is parsed into four bins. The four bin values are packed into a Vector4 data type for
easy transmission to the graphics card and used to modulate properties of the imagery.
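As an illustration of the analysis stage described above, the reduction of one block of audio to four band levels might be sketched as follows (a minimal Python sketch under our own assumptions; The Force's actual GLSL/JavaScript implementation differs):

import numpy as np

def four_band_levels(samples):
    # Magnitude spectrum of one windowed block of audio.
    spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
    # Split the spectrum into four contiguous bands (low to high) and average each.
    bands = np.array_split(spectrum, 4)
    levels = np.array([band.mean() for band in bands])
    peak = levels.max()
    # Normalize to 0.0-1.0, ready to be packed into a vec4-style uniform.
    return levels / peak if peak > 0 else levels

# e.g. levels = four_band_levels(block); the four values would then be handed
# to the shader to modulate properties of the imagery.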
Smith had previously been working with a handful of standard fare software and hardware,
including Max/MSP (visual programming/patching environment), Ableton Live, Pro
Tools and Logic, modular and semi-modular synthesizers, amongst others. Smith’s music
traversed a wide range, from folk and pop to IDM and experimental, inspired by artists
like Squarepusher and Aphex Twin from Warp Records, Venetian Snares and μ-Ziq from
Planet-Mu, the Clicks & Cuts series from Mille Plateaux, Telefon Tel Aviv, Autechre and
many others. For those interested in digging, Smith had also collaborated on a remix of
Public Enemy’s “B Side Wins Again” with Jeff Snyder (aka Scattershot, the inventor of the
incredible Manta controller). For The Wookie this project was a good excuse to return to
some of the more beat and pattern-based music he had put on hold during graduate school.
With time being a significant factor, The Wookie assembled a fairly simple, tempo-based,
sample-mangling patch in Max/MSP to process several earlier works of his in order to
inject a sense of rhythmic regularity. Specifically, the Max/MSP patch randomly selected
start and stop points within an audio file that adhered to some small, metered subdivision
based on the predefined tempo and then looped these short sections. These sections were
not necessarily defined by any significant transient content, but the repetitive nature of
these micro-loops produced a sense of rhythm based on the tempo-dependent relationships
between one another. In this case, The Wookie’s musical selections for this performance
leaned much more toward the electronic music side of things than dance music. Still, the
musical characteristics of EDM had great appeal to us, and it is under this umbrella that The
Wookie's current music, and many of the live coding musicians who populate the algorave
scene, operate, including Mike Hodnick, Alex McLean, Renick Bell and many others.
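The tempo-locked micro-looping described in the previous paragraph can be sketched roughly as follows (an illustrative Python fragment, not the original Max/MSP patch; the function name, default values and subdivision choices are ours):

import random

def micro_loop(total_samples, bpm, sample_rate=44100, subdivision=16):
    # Samples per metrical subdivision (here a 16th note at the given tempo).
    grid = int(sample_rate * 60.0 / bpm * 4 / subdivision)
    # Snap a random start point to the grid and loop a small whole number of grid units.
    start = (random.randrange(0, max(1, total_samples - grid)) // grid) * grid
    length = grid * random.choice([1, 2, 4])
    return start, min(length, total_samples - start)

# Repeating the (start, length) region produces one micro-loop; layering several,
# all locked to the same tempo grid, yields the rhythmic regularity described above.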
Now, it must be noted that during our performance at EMPAC, nobody danced, and
really, why should they? Smith had never DJ’d, but then again this certainly wasn’t a
DJ set, and the idea of bringing the party hadn’t really crossed our minds. This was, first
and foremost, a fun project by a visual artist and musician in a decidedly artsy space for
engineering students who were hoping for Skrillex and Girl Talk mashups, at least until
Smith’s computer crashed. Still, we enjoyed the project and each other’s company, and that
was reason enough to reflect on our experience in order to determine a better and more
artistically fulfilling path. The main culprit was the music. We felt that the relationship
between the music and visuals did not read as cohesively as we had hoped, and in order
to increase the likelihood of significant audiovisual correspondence over the course of a
performance, a sonic palette containing more transient-rich and repetitive material would
be built from the ground up, including slight changes to the audio interpreter in The Force.
In summation, this first collaborative attempt uncovered a wealth of flawed materials
that were perfect for reflection, rebuilding and refinement and inspired a solid foundation
of potential practice.
A New Hope: 2015–2016
While our goals with The Force and The Wookie's sample-mangling at that first
performance were to inject some rhythmic regularity, leave room for improvisation and
generate cohesiveness between the audio and the visuals, the time constraints left us with
little room to develop any compositional identity. And so, following our performance at
EMPAC, and inspired by the audiovisual cohesiveness of groups and artists like Daft Punk
and Squarepusher (specifically his face-melting performance at the Creators Project in
San Francisco in 2012), which appeared more fixed than improvisatory, we created a more
composed work: Kessel Run (2014). The visuals and audio for Kessel Run were developed
in closer correspondence in order to produce a more cohesive audiovisual connection, and
musically, leaned more heavily on strong rhythmic material and instrumentation more
closely associated with EDM. Kessel Run led to a couple of performances in Spain (Radical
Db) and Portugal (ICLI). The Kessel Run video is linked here: <https://vimeo.com/130277124>.
The performance in Portugal was particularly influential as it was our introduction to
the algorave. An algorave describes an event in which performers deploy music-generating
algorithms of some sort (hence the "algo" prefix) in front of an audience, often controlled via
live coding (Cheshire 2013). The performances frequently, but certainly not always, borrow
heavily from various sub-genres under the EDM descriptor, from Gabba to Breakbeat
(check out Neil C Smith's live-coding AMEN $ Mother Function: <https://www.youtube.com/watch?v=SgE9POc5BdA>) to more obtuse musical
forms, although often replacing the visual pomp and circumstance associated with large-
scale EDM events with the performers’ projected code.
Inspired by our experiences in Spain and Portugal, and invigorated by the conceptual
and social framework of the algorave, we created Sarlacc (2015) with the support of a
residency at CultureHub. CultureHub is an arts and technology center in New York City
with affiliations to La Mama, also in New York City, and The Seoul Institute of the Arts in
South Korea. CultureHub provides residencies for artists and educational opportunities for
youths, and hosts festivals. During our time at CultureHub we used their multi-projector,
multi-channel audio systems to increase the scale of our production and presentation.
Figure 2. Obi-Wan Codenobi and The Wookie performing at ISEA in Vancouver (2015).
At this point it is important to note that the audio components for Kessel Run and Sarlacc
were not created and performed with code, but with Ableton Live. With Live, The Wookie
retained improvisatory and structural control during performance while following a
malleable set list of precomped fragments. In similar fashion, while the visuals were being
live-coded by Obi-Wan Codenobi with The Force, he too followed a coding score so as to
maintain a tight visual correspondence with the music and to hit structurally significant
cue points throughout the set. We achieved the cohesiveness we had hoped for, but ended
up feeling, well, a bit bored just playing the same stuff over and over. After rounding out
our residency at CultureHub with a performance that included like-minded composer-
performers Dataf1ow and Bevin Kelley (see Figure 1), we performed Sarlacc in Scotland
(ACM CC), England (ICLC), Canada (ISEA), Germany (Generate!), the Netherlands
(LPM) and several local US venues. The Sarlacc video is linked here: <https://vimeo.com/121493283>.
Episode V: 2016
For our next project, Owego System Trade Routes (OSTR, 2016), The Wookie began using
the live coding language TidalCycles (McLean et al. n.d.) in conjunction with a modular
synthesizer. The details of the modular rig are fuzzy as it has changed dramatically since that
time, but the primary sound sources were the Make Noise DPO and Noise Engineering
Basimilus Iteritas, sequenced by a Make Noise Rene, clocked by ALM's Pamela's Workout
and modulated/modified by a series of function generators, LFOs and VCAs. The decidedly
improvisatory nature of this work explicitly countered the fixed structure of Kessel Run
and Sarlacc, while the TidalCycles elements retained the transient-rich, pattern-based
rhythmic material. In this setup, the modular synth was not in any way synchronized with
TidalCycles, but functioned as a standalone noise generator of sorts, influenced by, and
influencing the behaviour of The Force. Video samples from the Owego System Trade Routes
album are linked here: <https://vimeo.com/153029100>.
Figure 3. AppiOSC device outside and inside (2016).
In an attempt to further counter the scored predictability of the previous works, and to add
a new dimension to the interaction between the audio and visual components, we created
the AppiOSC in collaboration with Frank Appio (Lawson, Smith and Appio 2016). The
AppiOSC is a hardware device that converts text code into control voltage (CV) for use
on the modular synthesizer, and can generate, modulate and sequence basic functions,
including saw, square, triangle and sine waves. Open Sound Control (OSC) messages sent
to the AppiOSC determined frequency, amplitude and function type, and assigned values
could be static or set to be randomized per frequency period. The text-to-CV algorithm
searched for letters, spacing or keywords in The Force to attain sums that were further
adjusted to fit within a range suitable for the AppiOSC. For example, if a block of code
contains 15 "-" characters, that value of 15 would be taken modulo a specified maximum,
say 10, resulting in a value of 5. This value of 5 is then scaled by that maximum of 10
to return a float between 0.0 and 1.0, resulting in 0.5. Mathematically this looks
like the following:
finalValue = (characterCount % specifiedMax) / specifiedMax
https://vimeo.com/153029100
The final value and property to change are sent to the AppiOSC. As code is added or deleted,
that final value might change, impacting the CV signals being sent from the AppiOSC to
the modular synth.
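A minimal sketch of this text-to-CV mapping might look like the following (illustrative Python only; the AppiOSC firmware and the OSC message format are not shown, and the function name is ours):

def text_to_cv(code_text, token="-", specified_max=10):
    # Count occurrences of the token in the live-coded shader text,
    # then wrap and scale exactly as in the formula above.
    character_count = code_text.count(token)
    return (character_count % specified_max) / specified_max

# A block of code with fifteen "-" characters maps to 0.5:
# text_to_cv("-" * 15)  ->  (15 % 10) / 10  ->  0.5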
These CV signals could be applied to various modules within the synthesizer to affect
LFO and function speed, pitch/tuning, filter cutoff and resonance and any other parameter
that accepts CV. The CV coming from the AppiOSC could be quite erratic at times, which
introduced a fantastic source of uncertainty. CV from the modular could also be sent back
into the AppiOSC, where it would be converted into streams of numbers available for use
in The Force. Unfortunately, our use of this device was under-explored due to the eventual
long distance nature of our collaboration. Simply put, we didn’t have enough time with
the AppiOSC, and this lack of experience inherently and repeatedly pushed us towards a
surprisingly consistent sonic texture. Well, that and the conversion algorithm translating the
text of the fragment shader into control voltage values. Golan Levin postulated a
similar scenario as: “. . . the premise that any information can be algorithmically sonified or
visualized is the starting point for a conceptual transformation and/or aesthetic experience.
Such projects may or may not reveal the origin of their input data in any obvious way . . .”
(2010: 273–4). Levin further reveals the potential fault by highlighting the relationship
between raw data and the artistic content it may produce:
Most commonly, the transmutability of data per se is not itself the primary subject of a
work, but is rather used as a means to an end, in enabling some data stream of interest
to be understood, experienced, or made perceptible in a new way (2010: 274).
Moreover, due to OSTR’s heavy reliance on spectral analysis for communication from audio
to visual, we inadvertently supported the argument that visuals are secondary to the audio,
not unlike the iTunes visualizer (Alexander and Collins 2007: 134).
Despite its shortcomings, the concept of the AppiOSC in and of itself was an intriguing
one: integrate a complex control voltage scenario with the relevant leftovers of contextually-
irrelevant live-coding. Still, in revisiting our means-to-an-end we found that Obi-Wan
Codenobi’s live-coding text to sonic conversions were simply repetitive and frequently
disappeared into the overall sonic texture. Perhaps we had been seduced by the potential of
some perceptible relationship, confusing the randomness of the text-to-CV conversion with
what was little more than an imperceptible 1:1 relationship. A stronger path may have been
to explore how the raw text data could have been mapped onto a more musically-significant
structure. Another solution would have been to apply a global scale and/or quantizer to
the data stream as it leaves the AppiOSC. This type of control, a conductor of sorts, could
oversee the text-to-CV/CV-to-text conversion at the low-level while applying a high-level
structure to compartmentalize the data into more usable or perceptible bursts of information
rather than slower changes to a stream of continuous values. The OSTR audiovisual album
was published on the Spanish label naucleshg, and we had the opportunity to bring this
work and the AppiOSC to Canada (ICLC), England (ICLI) and Australia (NIME).
As mentioned above, once our collaboration turned long-distance we were unable to
continue exploring the possibilities that the AppiOSC may have afforded us. But this was
a bit of a blessing, and in the spirit of healthy self-deprecation, we felt that we had let the
intriguing possibilities of the hardware lead our project down an aesthetic path that had
little positive impact for us or anyone who saw us perform.
Recognizing this failing, we wanted to again integrate the audiovisuals and show the
artist's hand in the work. The artist's hand in the visual arts refers to mark making, as in the
quality and personality of the line, brushstroke, etching and so on. Its absence implies that the
work feels divorced from the artist, meaning that the artist her/himself does not seem
present in the work, or that something is too slick or refined, like a
machine-made copy. For us it meant losing the AppiOSC's black-boxness and getting everything
up on the screen.
Luke’s Side Quest to Dagobah (As In, So, What Are We Really Doing?): 2016
Before developing a solution to both our long-distance collaboration and our desire to
remove the black box, we took a moment to consider what it was we were doing, and
how the algorave scene in general might fit into a broader artistic and historical narrative.
Furthermore, influenced in part by the massive interest in contemporary EDM, we
found ourselves looking a bit closer in order to gain a better understanding.
While exploring the visual components found in some EDM performances we found
ourselves traveling through an ancestral tree of methods and technologies including color
organs, animation (artistic and commercial), film, video arts, performance, theater, music,
light shows, expanded cinema, music videos, live cinema, Gesamtkunstwerk, psychedelia
and synesthesia with the most closely generalizable precedent being the Video Jockey
(VJ) (Spinrad 2005; Crevits 2006; Eskander 2006; Shaughnessy 2006; Alexander and
Collins 2007; Alexander 2010). More specifically, within the VJ category there are sub-
categorizations for scratch video, clip-based work, video synths and code-based procedures
to mention a few (Watz 2006; Alexander and Collins 2007). Most revealing were the
frequent, emergent thoughts regarding the VJ's subservience to the audio:
[F]or many, vjing [sic] is a dirty word, artists view it as eye candy for the clubbing
generation, musicians view it as a secondary accompaniment to their music at best,
vjing [sic] is regarded as audio-visual wallpaper, not worthy of serious consideration.
[Y]et to my eyes, the best vjs are creating a new, fluid interface between sound and
image—one that is genuinely mould-breaking and aesthetically invigorating, and one
that deserves to be recognized as a 21st century art form (Faulkner 2006: 9).
This mirrors Marius Watz’s experience of being a VJ:
Still VJ Culture is in its nascent stage and the VJ rarely becomes a full-fledged member
of the band, typically remaining a visual commentator. . . . However, many visual artists
and audio-visual collaborative projects seek to reach new levels of integration between
sound and image (2006: 5).
Taking a moment to consider the integrated aspect of audiovisuals, there has been much
discussion about the connection between EDM and visuals and their synesthetic effects
(Crevits 2006; Eskander 2006; Watz 2006; Alexander and Collins 2007). Crevits goes so
far as to state that if the EDM drug culture had been different then VJs might not have existed:
The VJing at house parties reproduces this [synesthetic] experience. Whereas ecstasy
does create a ‘spiritual’ symbiosis of sensation, it doesn’t evoke many concrete visual
hallucinations compared to LSD. One could say that if LSD had been the drug of
the house scene there would have been little or no need to compensate for the lack of
performance or low visual character of a DJ set. There would be no VJing (2006: 15).
Even if we disregard the hubris of this statement we can’t overlook the multiple references
to real or perceived synesthetic effects of audiovisual performances. We have encountered
performance attendees who reported having some degree of synesthesia; however, Obi-Wan
Codenobi and The Wookie neither claim to be synesthetes nor have aspired to intentionally
create synaesthetic work, and believe this speaks more to the integrative collaborative
approach, direct mappings, or learnt synesthesia (Alexander and Collins 2007: 137). A
contemporary synesthetic condition could also be a result of the post-digital human condition
as per Watz, “one could just as easily claim that the thirst for synaesthetic experiences is a
response to our multimedia-saturated world, where instant sensory gratification is the order
of the day” (2006: 6). Large-scale audiovisual EDM spectacles may simply quench that thirst.
Figure 4. Obi-Wan Codenobi performing at ACM CC in Glasgow (2015).
In addition to our desire for more structural and audiovisual cohesiveness, we were
beginning to identify more and more with Alexander’s comments on the fluidity between
VJs and live cinema artists (2010: 202), and as laptop performers, we have sought to expand
the narrative aspects of our performance by perpetuating our loose narrative around the
Star Wars metaverse.2 But beyond our naming conventions and Obi-Wan Codenobi's
Jedi stage attire, we did not intend to create a strict, Star Wars-based textual, storyboard
or compositional narrative for the audiovisuals, as that would reduce the potential for artistic
flexibility and improvisation. Rather, it gave a couple of Star Wars fans reason to find inspiration
in more obscure references, like R5-D4 or Dannik Jerriko, never mind the inherent value
in inspiring conversations regarding the merits of the original tentacle-less sarlacc. Still,
however insignificant this conceptual basis might be, it is worth noting the in-between
space of VJing and live cinema:
A third, and lesser-known type of audiovisual performance practice operates within
a performing arts context while also drawing from conceptual, performance art, and
new media art practices. In the absence of a commonly agreed-upon name for this
practice, we can refer to it here as 'conceptual audiovisual performance' (Alexander
and Collins 2007: 135–6).
Given the relative infancy of live coding practices in historical terms, it seems appropriate
to consider the algorave as a conceptual audiovisual performance environment, but not
one that need adhere to any specific type of performance. And so, the Star Wars concept
disappears, easily outweighed by the far more interesting and broader concept of live coding.
Luke's Return to Dagobah (Are We Sure About What We Are Really Doing?): 2016
From the musical perspective, we have been considering EDM as a high-level container for
any and all music that is A) largely created and/or performed by/with electronic means and
B) contains the musical attributes (beats and patterns) and, in some cases, the live social
contexts of EDM’s 70s and 80s prototypes. Yet, it is also the case that EDM as a musical,
cultural and capitalist phenomenon, which Simon Reynolds refers to as nothing more than
a “rebranding coup”, may represent a more contemporary set of micro-genres that preclude
one’s understanding of their history (Reynolds 2012). Naturally, artists working within
genres are not necessarily keen to embrace whatever label is placed upon them, and as
Collins notes: “Genre is a contentious area at the best of times, but an especial minefield
in electronic dance music, where producers, journalists and consumers are always eager to
promote new micro-genres" (2012: 1). In line with Collins, Gresham-Lancaster notes that
within the major online (streaming and download) distributors:
. . . the history that I have experienced over the last four decades is not represented
at all. ‘Electronic Music’ - in the various forms offered by the pull-down menus of
these apps - refers to a form of dance music from the late 1990s on and bears little
resemblance to the ‘electronic music’ that has been such an important part of my own
musical life (2017: 76).
Still, the immense growth of this musical culture has brought electronic music to a massive
audience, and despite this kind of commercial success, it is fair to say that a lot of this music
is decidedly experimental in nature. From Juan Atkins, Frankie Knuckles and Kraftwerk to
the wonderfully pornographic performance practice of Anklepants and many others, the
performative act is a necessarily visceral and/or tangible one, and the methods by which
these sounds were made possible were always changing. In Michaelangelo Matos’ extensive
tome on the rise of EDM through the multiple lenses of the artists, party promoters and
attendees, he suggests that the liveness these artists brought to the stage was of exceptional
importance. For instance, Moby’s use of DAT tapes on stage became a flashpoint of sorts,
as, “ . . . party flyers around the U.S. were promising “live PAs” from artists. Being able to
bring it onstage with a bunch of gear and no traditional instrumentation was starting to
matter” (Matos 2015: 149). Moby’s response questioned the value structures associated
with dance culture with the eye of a historical musicologist:
. . . people who make an issue out of ‘is it live?’ techno are dangerously reminiscent of
people who can describe eric clapton’s [sic] guitar solos in depth and who dismissed punk,
techno, hip-hop (and jazz and rock and roll for that matter) as not being valid because
you didn’t need a masters degree in music theory to appreciate them (Matos 2015: 153).
Yet this separation of process and presentation is necessary when considering the logistical
nightmare and massive expectations of large-scale performance events. A computer-based
(or anything-based for that matter) live performance that is largely improvisatory is likely
ill-advised if the spectacle requires perfection in its execution (think about the cost of a
stadium concert).
Figure 5. Obi-Wan Codenobi and The Wookie performing at ICLI in Brighton (2016).
Our experiences at several algoraves, festivals and conferences imparted a fantastic feeling
of social engagement, community and experimentation. And while not all participants in
each situation may have veered toward some form of EDM (although many did, including
inspired performances by Mike Hodnick, the AlgoBabez, Alex McLean, Charlie Roberts
and Renick Bell to name a few), the very context of a bar, club or concert setting and the
transparency of the projected code enabled a wide range of forms to not just coexist, but
to encourage communal engagement. The ubiquitousness of EDM in the popular sphere
presented a uniquely fertile opportunity to bring it back underground—academically,
ironically or otherwise.
At an algorave, people not only care about how you make something, but want to see the code
you are using to make it in real time. In some ways, this is not unlike a DJ, turntablist or
finger drummer, and we certainly aspired to connect more directly with our equipment from
the musical, visual and physical perspectives by sharing this process and our screens with the
audience, as is the common practice of live coding (Ward et al. 2004). Amy Alexander notes
that “Laptop performers are now beginning to address the question of performativity”
(2010: 204). The algorave has, in some sense, become a beacon for the integration of
populist aesthetics (here EDM musical attributes and visuals that reflect upon them) with
good old-fashioned laptop performance and the somewhat pedagogically-inclined practice
of live coding.
And so, our approach, like many others, to EDM-inspired live coding practices in the
context of the algorave environment supplants preprogrammed perfection with a direct
engagement with the possibility of failure (crashes, performance anxiety, lack of good ideas,
etc.) while retaining what we consider to be the most salient and generalizable visual and
musical qualities of EDM: repetitive, danceable rhythms and correspondent visuals, even if
those visuals may sometimes be just code.
Return of the Jedi: 2017
As mentioned earlier, we eventually found ourselves looking into our own personal sarlacc:
long-distance collaboration. With The Wookie moving from Troy to the mountains, a
new approach was required. To this end, Obi-Wan Codenobi created a new live coding
environment, The Dark Side (Lawson 2017). This new IDE is browser-based, telematic and
supports both TidalCycles and OpenGL shader languages in a single text buffer (Lawson
and Smith 2017). Performers use the familiar interface of The Force and are presented
with a text editing experience similar to the collaborative functionality of Google Docs:
multiple performers edit the code simultaneously from any internet connection while
all text edits, text cursor movements and window scrolls are recorded with timestamps
to a small JSON formatted file.3 Audio and visuals are rendered client side, meaning that
each performer receives the highest quality possible audio and visuals that their hardware
permits. Furthermore, the recorded text file can be played back with the highest possible
quality audio and visuals available to the end receiver.4 With The Dark Side we were able
to continue collaborating in real-time from our respective homes while retaining full
audiovisual resolution.
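In miniature, the record-and-replay idea behind The Dark Side can be sketched as follows (a purely illustrative Python toy under our own assumptions; the actual browser implementation and JSON schema differ):

import json
import time

class EditRecorder:
    def __init__(self):
        self.t0 = time.time()
        self.events = []          # timestamped text edits

    def record(self, edit):
        # edit is whatever description of a text change the editor produces,
        # e.g. {"pos": 42, "insert": "d1 $ ..."} (hypothetical shape).
        self.events.append({"t": round(time.time() - self.t0, 3), "edit": edit})

    def save(self, path):
        with open(path, "w") as f:
            json.dump(self.events, f)   # a small, human-readable text file

    def replay(self, apply_edit):
        # Re-play the edits with their original timing; audio and visuals are
        # re-rendered on the receiving machine rather than streamed.
        start = time.time()
        for event in self.events:
            time.sleep(max(0.0, event["t"] - (time.time() - start)))
            apply_edit(event["edit"])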
With The Dark Side supporting our new approach to rehearsing telematically we
completed a new work, EV9D9 (2017). The EV9D9 video (from the GENERATE! Festival) is linked
here: <https://vimeo.com/261648424>. The Dark Side has also enabled performances in which one or both
of Obi-Wan Codenobi and The Wookie perform remotely, and in one scenario our text
recording was performed by a third computer.5
Other performances included The Center for New Music in San Francisco (one performer
remote), Sample Music Festival in Berlin (one performer remote), Generate! in Tübingen
(both performers local) and ICLC in Morelia (one performer remote).6
Figure 6. Obi-Wan Codenobi and The Wookie (remote) performing at ICLC in Morelia (2017).
Moving to the audio side of things, with sample-based live coding languages, including
McLean’s TidalCycles (the audio live-coding language used alongside The Force or in The
Dark Side), the performer is limited not only by what functions are available, but by what
sample material the composer/performer has made available to themselves and how this
material is exploited in performance. When developing the materials for EV9D9 we created
raw materials that represented the musical space we sought to occupy, while building in
room to discover alternative compositional spaces that might signify alternative genres
in part or in whole. No big tricks here, just a compositional practice informed by EDM,
assembled/manipulated with live-coding.7
EV9D9 contains five separate pieces that can be performed in whole or part. A
performance containing all five pieces will last approximately 30 minutes, although each
section can be compressed or extended depending on set duration. Each piece contains a
sample set including standard percussion elements (kick, snare, hi-hat), intro and/or outro
and bass, melodic and harmonic material. The sample set for each piece was generated using
Pro Tools, Ableton Live and Maschine. Since TidalCycles reads samples from folders, each
piece contains a series of folders with descriptive labels. For instance, the second piece, EV123,
contains folders for kick, snare and hihat (evk123, evs123 and evh123), folders for intro and
outro material (evint123 and evout123), folders for verse and chorus (evv123 and evc123)
and folders for a filtergate sample and an arpeggio sample (evfg123 and evarp123). In some
folders there is only one sample, but in others, specifically choruses, there may be numerous
samples in order to inject some variety into certain sections. The “ev” at the beginning of
each folder name refers to the “EV ” in EV9D9. The letter(s) following “ev” refer to the
content of that folder (“k” refers to kick, “fg” refers to filtergate, etc.), and the number
“123” refers to BPM.
In a performance of EV9D9 the elements of each piece are coded and executed in order,
although there is a lot of room for flexibility. For instance, there is no set beat/pattern for
each piece, and patterns developed for one piece can often be carried over to the next piece.
One of the more interesting problems to solve when performing EV9D9 in its entirety
is the transition between different tempi, due in part to the use of longer samples that
are not tempo-dependent. Beyond these structural or skeletal elements and their tempo-
dependence, each piece is wide open for improvisation using a wide range of samples
selected for the project.
It isn't really in the scope of this article to go into depth about the full functionality
of TidalCycles (see the website for more information: <https://tidalcycles.org/>), or any of the
many other live coding languages in use. Still, the similarities between making beat and/or
pattern-based music with TidalCycles and other off-the-shelf products are worth noting. As
written on the TidalCycles home page, "Tidal allows you to express music with very flexible
timing, providing a little language for describing patterns as step sequences" (McLean et
al.: n.d.). This statement is similar to any number of products, from the Korg Volca series to
Ableton to modular sequencers, with the exception of the word language. With TidalCycles
you are representing the sonic output you want with code; no fancy interface, no visual
representation of the audio, just the computer’s blank screen that you populate over the
course of a performance. Yet, it is the very non-flashiness of this environment that requires
a different mode of thinking, and creates a performative and creative situation far removed
from other electronic music models. For example, to create a simple beat, one might type:
d1 $ s "[bd*4 , [~ sn]*2 , hh*4]"
In this example, the bd (bass drum) and hh (hi-hat) are playing on every quarter-note, while
the snare drum plays on the offbeats. Once compiled this pattern will continue until it is
changed or silenced using the aptly named function “hush”. Far more interesting is to take
advantage of scheduling and random functions. In the following example, the "sometimes"
and "rarely" functions are used to occasionally reverse the sound or slow the pattern down to half
speed. The samples used are segments from the Amen break in 8th notes (although
occasionally removed due to the “?”) and a Gabba kick on the 1 and 3. Lastly, samples
are chosen randomly from the two folders called “amencutup” and “gabba” respectively in
order to impart even more variation.
d1 $ sometimes (# speed "-1") $ rarely (slow 2) $ s "[amencutup*8? , gabba*2]" # n (irand 16)
A slightly more verbose example pulled from the EV9D9 set (below) demonstrates
additional functionality including pitch-shifting, scheduled solos, sequenced modifiers,
weighted randomness and local and global speed malleability. A code block like this has
a generative quality to it, producing a variety of sonic results over time while retaining the
musical foundations of this particular section.
d1 $ every 11 (const (s "[evk110*16? , [~ evs110]*4 , evh110*16 , notes*8?]" # n (irand 5))) $
  slowspread ($) [id,rev,(|+| accelerate "-1 1"),stut 8 0.8 0.125,slow 2,chop 4,slow 1] $
  stack [
    every 7 (striate 4) $ sometimes (|+| accelerate "-1") $ s ""
      # n (irand 6) # end "0.1" # up (sine*16),
    sometimes (|+| up (choose [2,4,6,8,10])) $ every 11 (striate 2) $ every 9 (slow 0.5) $
      every 7 (slow 2) $ every 5 (|+| accelerate "-1 1") $ every 6 (stut 8 0.8 0.125) $ s
      "[evk152*4? , [~ evh152*2]*2 , [~ evs152]*2]",
    sometimes (jux (iter 4)) $ s "s13*8?" # n (irand 12) # cut "1",
    randcat [
      s "k1*8 speed*8?" # n (irand 12) # end "0.1",
      s "gabba*8 stab*4?" # n (irand 12) # end "0.1",
      s "numbers*8" # end "0.05" # up (sine*8),
      s "~ notes*4" # end "0.1",
      s "" # n (irand 10) # up (sine*32)
    ]
  ]
As is hopefully obvious, this is not even scratching the surface. The variety of methods
for handling sample and cycle manipulation is deep, and as the introduction of new
technologies or exploitation of existing technologies has often made significant impacts on
compositional and performative directions, different live coding languages enable different
musical outcomes. Similarly, EDM has evolved in parallel with technological development
and adoption, from turntables and the TB-303, to the introduction of MIDI, sampling,
the wide range of DJ software, controllers and everything else. The long-term impact of live
coding languages as a fairly new musical technology can't possibly be predicted, but it is
fair to say that the introduction of EDM-inspired musical practices into this micro-micro-
sub-genre of live coding will continue to inspire new ideas that straddle that weird space
between popular music and scholarly enquiry.
Epilogue: 2018 and Beyond
Much of this paper focuses on the technological and compositional path we have moved
along over the last several years in order to highlight our process more objectively, but at its
roots the writing of this paper has been an opportunity for us to subjectively and aesthetically
evaluate the processes and results of a collaboration that has been very meaningful to us and
continues to challenge our creativity.
In closing, it seems apt to share Alex McLean's recollection of a 2011 car ride published
in Wired:
We [Alex McLean and Nick Collins] tuned into a pirate station playing happy
hardcore, and we thought it would be good to [computer] program some rave music
. . . It’s kind of changing the way people think about computer music . . . And also
breaking the limits of what electronic music can be (Cheshire 2013).
Indeed. Thank you for reading and may the Force be with us.
Notes
1 The delay time is 200ms set on a keypress timer callback. Each time a key is pressed the timer is
refreshed back to 200ms. If no other keys are pressed in that amount of time, the callback sends
the code to be compiled and executed if successful.
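A minimal sketch of this debounce pattern (illustrative Python; The Force itself runs in JavaScript, and compile_and_run is a hypothetical placeholder for shader compilation and execution):

import threading

class AutoCompiler:
    def __init__(self, compile_and_run, delay=0.2):
        self.compile_and_run = compile_and_run   # hypothetical compile/execute callback
        self.delay = delay                       # 200 ms
        self.timer = None

    def on_keypress(self, source_code):
        # Every keystroke cancels the pending timer and starts a fresh one;
        # compilation only fires after 200 ms without typing.
        if self.timer is not None:
            self.timer.cancel()
        self.timer = threading.Timer(self.delay, self.compile_and_run, args=(source_code,))
        self.timer.start()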
2 Our Star Wars obsession is the impetus for many aspects of this collaborative endeavor. This
includes stage names, titles of audiovisual works, titles of written papers, titles of software and
performance attire with robe and lightsaber. Some of the more esoteric titles are listed in the
references under Wookieepedia. A few titles are fan-fiction generated, which not surprisingly
have been less successful.
Repurposing Star Wars speaks to a digital-postmodern condition, opening the door to a
nostalgic futurism where authenticity and originality are more ambiguous. This is in line with
EDM genres, where samples are often appropriated, and a sense of futurism and science fiction
is often pervasive. Our extended-metaphor-parody provides both a geeky entry point as well as
a secondary narrative of conceptual context.
3 JSON is the acronym for JavaScript Object Notation, and its data structure is that of an
unordered set of key/value pairs. This format is usually saved or transmitted as text, not binary,
for human readability.
4 Compression algorithms for audio and video are lossy, making files smaller but
typically compromising the information. With the text recording files, The Dark Side plays
back the text edits in real-time, thereby re-creating a simulacrum of the original performance
with 60fps, pixel-perfect video and uncompressed audio. We use the term simulacrum because
each time the text file is played any random numbers are regenerated, so the playback is
incredibly similar to the original but never exactly the same, although it may be possible
to regenerate exactly the same piece by seeding the random number generators. In our
observations an hour-long performance would result in a 10–15MB text recording, while a
screen-recording of equal length could be >100GB and already lossy.
5 A performance at the New York Electroacoustic Music Festival in 2017 at the Abrons Art Center.
6 The Dark Side has not only been crucial to our continued collaboration, but has enabled
several presentations that would have been logistically improbable if not impossible. For
example, we used The Dark Side at the Sample Music Festival in Berlin during a lecture Smith
gave on creating pattern-based music with TidalCycles. It was determined well in advance of
the lecture that Lawson would not physically attend, but would be present within The Dark
Side. In another case, a series of events left Smith stranded in Newark, NJ and unable to join
Lawson in Morelia for ICLC. This necessitated Smith’s virtual presence at the formal paper
presentation of The Dark Side. We were able to prove that the system works, making Smith’s
travel woes and Lawson’s inability to be in Berlin incentives. The performance of EV9D9
scheduled for the last day of ICLC in Morelia clearly highlighted the fact that one half of The
Rebel Scum was missing from a performance standpoint, but served as another proof of the
project, and left us wondering what other possibilities there were beyond using The Dark Side
as a method for rehearsals, lecture-demos and as a safeguard against debilitating flight delays.
An additional demonstration of The Dark Side backend went unannounced at the Morelia
ICLC performance. Since many algoraves and clubs have notoriously bad or non-existent wifi
connections, The Dark Side was designed to be low-bandwidth, such that only the code edits
are transmitted. Because only minimal data needs to be transmitted, an international phone
data-plan is more than sufficient, and in fact, is what we used in Morelia.
7 It may be of interest to note that computer-based audio content analysis (Anderson and
Eigenfeldt 2011; Collins 2012; Panteli, Bogaards and Honingh 2014) and generative systems
based on ear-based content analysis (Anderson, Eigenfeldt and Pasquier 2013; Eigenfeldt and
Pasquier 2013) do exist, and provide valuable insight into certain sonic characteristics that
may elude our ears during a casual listen (e.g., the specific offset in milliseconds of a swing
pattern from the ¼ or ⅛ note divisions). As Anderson et al. write regarding their GEDMAS
system, “The compositions are based on a corpus of transcribed musical data collected
through a process of detailed human transcription,” and it is this kind of familiarity with the
corpus (computer-guided or based entirely on one’s own personal understanding of a style)
that may help aid in the creation of musical material reminiscent of whatever genre one seeks
to emulate (2013: 5).
References
Alexander, Amy. 2010. "Live Visuals". In Audiovisuology Compendium, ed. Dieter Daniels and
Sandra Naumann, 198–211. Köln: Walther König.
Alexander, Amy and Nick Collins. 2007. "A History of Audiovisual Performance". In Cambridge
Companion to Electronic Music, ed. Nick Collins and Julio d’Escrivan, 126–39. Cambridge:
Cambridge University Press.
Anderson, Christopher, Arne Eigenfeldt and Philippe Pasquier. 2013. "The Generative Electronic
Dance Music Algorithmic System”. Northeastern University (Massachusetts): AAAI
Publications, Ninth Artificial Intelligence and Interactive Digital Entertainment Conference.
Anderson, Christopher and Arne Eigenfeldt. 2011. "A New Analytical Method for the Musical
Study of Electronica”. Sforzando! (New York): Electroacoustic Music Studies Conference.
Cheshire, Tom. 2013. “Dance + Code = Algorave”. Wired, September: 85.
Collins, Nick. 2012. “Influence in Early Electronic Dance Music: An Audio Content Analysis
Investigation”. Porto (Portugal): The 13th International Society for Music Information
Retrieval Conference.
Collins, Nick, Alex McLean, J. Rohrhuber and A. Ward. 2003. “Live Coding Techniques in
Laptop Performance". Organised Sound 8(3): 321–30.
Crevits, Bram. 2006. "The Roots of VJing: A Historical Overview". In Audio-Visual Art + VJ
Culture, ed. Michael Faulkner/D-FUSE, 14–9. China: Laurence King Publishing Ltd.
Crypton. "Who is Hatsune Miku?". Crypton Future Media.
<https://ec.crypton.co.jp/pages/prod/vocaloid/cv01_us> (accessed 3 January 2018).
Eskander, Xárene. 2006. "Introduction". In `vE-``jA: Art + Technology of Live Audio/Video, ed.
Xárene Eskandar and Prisna Nuegsigkapian, 4–5. China: h4 San Francisco.
Eulerroom. 2017. "Algorave 2017 24h Birthday Stream". YouTube. Uploaded on 18 March 2017.
<https://www.youtube.com/watch?v=LZUHjg6EyJk&list=PLMBIpibV-wQKbbM_uOQpa62QmctO4psgQ> (accessed 3 January 2018).
Faulkner, Michael. 2006. Audio-Visual Art + VJ Culture, ed. Michael Faulkner/D-FUSE, 9. China:
Laurence King Publishing Ltd.
Gresham-Lancaster, Scot. 2017. "A Personal Reminiscence on the Roots of Computer Network
Music". Leonardo Music Journal 27: 71–7. <http://dx.doi.org/10.1162/LMJ_a_01022>.
Levin, Golan. 2010. "Software Art". In Audiovisuology Compendium, ed. Dieter Daniels and
Sandra Naumann, 270–83. Köln: Walther König.
Lawson, Shawn. 2014. "The Force". GitHub. <https://github.com/shawnlawson/The_Force>
(accessed 1 January 2018).
———. 2017. "The Dark Side". GitHub. <https://github.com/shawnlawson/TheDarkSide>
(accessed 1 January 2018).
Lawson, Shawn and Ryan Ross Smith. 2017. "The Dark Side". Centro Mexicano para la Música y las
Artes Sonoras (Mexico): Proceedings of the Third International Conference on Live Coding.
Lawson, Shawn, Ryan Ross Smith and Frank Appio. 2016. "Closing the Circuit: Live Coding the
Modular Synth”. McMaster University (Canada): Proceedings of the Second International
Conference on Live Coding.
Matos, Michaelangelo. 2015. The Underground Is Massive: How Electronic Dance Music
Conquered America. New York, NY: Harper Collins Publishers.
McCartney, James. 1996. "SuperCollider: A New Real Time Synthesis Language". Hong Kong
University of Science and Technology (China): Proceedings of the International Computer
Music Conference.
McLean, Alex, David Ogborn, Sean Lee, Julian Rohrhuber, Ben Gold, Tom Murphy, Eric
Fairbanks, Scott Fradkin, pd3v, Mike Hodnick and Lennart Melzer. n.d. "TidalCycles".
<https://tidalcycles.org> (accessed 1 January 2018).
Panteli, Maria, Niels Bogaards, and Aline Honingh. 2014. “Modeling Rhythm Similarity
For Electronic Dance Music”. Málaga (Spain): The 13th International Society for Music
Information Retrieval Conference.
Shaughnessy, Adrian. 2006. "Last Night a VJ Zapped My Retinas: The Rise and Rise of VJing". In
Audio-Visual Art + VJ Culture, ed. Michael Faulkner/D-FUSE, 10–3. China: Laurence King
Publishing Ltd.
Smith, Eoin. 2010. “Electronic Dance Music and Academic Music: Genre, Culture and
Turntables”. De Montfort University (United Kingdom): Proceedings of Sound, Sight, Space
and Play.
Sorensen, Andrew. 2005. “Impromptu: An Interactive Programming Environment For
Composition and Performance". Queensland University of Technology (Australia):
Proceedings of the Australasian Computer Music Conference.
Spinrad, Paul. 2005. "History". In The VJ Book: Inspirations and Practical Advice for Live Visuals
Performance, ed. Paul Spinrad, 17–25. Los Angeles: Feral House.
Reynolds, Simon. 2012. "How Rave Music Conquered America". The Guardian, 2 August.
<https://www.theguardian.com/music/2012/aug/02/how-rave-music-conquered-america> (accessed 3 January 2018).
Rubin, Courtney. 2015. "Silent Discos Let You Dance to Your Own Beat". The New York Times,
17 June. <https://www.nytimes.com/2015/06/18/style/silent-discos-let-you-dance-to-your-own-beat.html> (accessed 3 January 2018).
Wang, Ge and Perry R. Cook. 2004. "On-the-fly Programming: Using Code as an Expressive Musical
Instrument". Shizuoka University of Art and Culture (Japan): Proceedings of the 2004
International Conference on New Interfaces for Musical Expression.
Ward, Adrian, Julian Rohrhuber, Fredrik Olofsson, Alex McLean, Dave Griffiths, Nick Collins and
Amy Alexander. 2004. "Live Algorithm Programming and a Temporary Organization for its
Promotion". In Read Me: Software Art & Cultures, ed. Olga Goriunova and Alexi Shulgin,
243–61. Aarhus University Press.
Watz, Marius. 2006. "More Points on the Chicken: Visual Instruments and New Directions
in Improvised Visual Performance". In `vE-``jA: Art + Technology of Live Audio/Video, ed.
Xárene Eskandar and Prisna Nuegsigkapian, 6–7. China: h4 San Francisco.
Wookieepedia. "EV9D9". Wookieepedia. Last modified 25 October 2017.
<http://starwars.wikia.com/wiki/EV-9D9> (accessed 3 January 2018).
———. "Kessel Run". Wookieepedia. Last modified 3 December 2017.
<http://starwars.wikia.com/wiki/Kessel_Run> (accessed 3 January 2018).
———. "Sarlacc". Wookieepedia. Last modified 24 December 2017.
<http://starwars.wikia.com/wiki/Sarlacc> (accessed 3 January 2018).
———. "Rogue Two". Wookieepedia. Last modified 10 November 2017.
<http://starwars.wikia.com/wiki/Rogue_Two> (accessed 3 January 2018).
Live Code-ography
Lawson, Shawn and Ryan Ross Smith. 2014. Kessel Run. Vimeo, 38:48.
<https://vimeo.com/130277124> (accessed 28 September 2017).
———. 2015. Sarlacc. Vimeo, 21:19. <https://vimeo.com/121493283> (accessed 28 September 2017).
———. 2016. Owego System Trade Routes. naucleshg, 79:00.
<http://naucleshg.com/shawn-lawson-ryan-ross-smith-owego-system-trade-routes> (accessed 28 September 2017).
———. 2016. Owego System Trade Routes (samples). Vimeo, 4:07.
<https://vimeo.com/153029100> (accessed 28 September 2017).
———. 2017. EV9D9. Vimeo, 26:06. <https://vimeo.com/261648424> (accessed 28 September 2017).
———. 2017. EV9D9 - A real-time text-capture performance version (use Google Chrome). GitHub,
11:24. <https://github.com/shawnlawson/EV9D9> (accessed 28 September 2017).