March 25-27, 2016 - NYU

SOURCE is two evenings of livecoding: music and other performances made with computer programming. It's being held on the 20th birthday of the SuperCollider livecoding environment. Over the two nights, 16 performers will write code live to make music, visuals, and even a dance performance, with the code visible to the audience.

Livecoding is a growing art form, but performances in NYC are rare!

During the day there'll also be workshops and talks about livecoding. If you're excited enough by the performances that you want to learn more, the talks are free with registration and many are beginner-friendly.


Friday 6-7:30pm Concert I - room 303
(Dinner break)
Friday 8:30-10:30pm Concert II - room 303

Saturday 3:30-5:30pm Tech talks - 6th floor
Saturday 5:30-6pm Lightning talks - 6th floor
(Dinner break)
Saturday 7-8:30pm Concert III - 6th floor
Saturday 9-10:30pm Concert IV - room 303

Sunday 12-3pm Workshops - 6th floor


NYU, 35 W 4th St, Manhattan
$25 for the full weekend

If you're unable to pay but would like to come, send us an email: we may be able to accommodate you.

Performers and Speakers

Scott Carver is a software engineer and composer. He has built software control and composition systems for art installations and performances, as well as for his own music and video work. He is a sporadic contributor and long-time user of SuperCollider, and spends his weekdays writing code at Adobe Systems. He studied at the Center for Digital Arts and Experimental Media in Seattle, and currently lives between Seattle, WA and Syracuse, NY.

Scott Cazan is a Los Angeles-based composer, performer, creative coder, and sound artist working in fields such as experimental electronic music, sound installation, chamber music, and software art, where he explores cybernetics, aesthetic computing, and emergent forms resulting from human interactions with technology. His work often involves the use of feedback networks in which misunderstanding and chaotic elements act as a catalyst for emergent forms in art and music. He is currently a faculty member at the California Institute of the Arts, where he teaches topics on the intersections between art and electronics.

William K Chang is a computational biologist, electronic musician, guitarist and bassist. Scientifically, he is interested in data analysis and complex systems theory; musically, he primarily works in a semi-improvised ambient style, using Ableton Live, custom Max patches, and sampled instrumentation. Originally from Taipei, he received a PhD in systems biology from Cornell University and is currently a postdoctoral researcher at the Albert Einstein College of Medicine, where he is investigating microbial ecology, evolution, and metagenomics.

Michael Clemow is an interdisciplinary performance artist and composer based in Brooklyn, NY. His work has been shown at Issue Project Room, River to River Festival, Eyebeam, Spectrum, Exit Art, Diapason, The Tank, and FestivalMOD (Guadalajara, MX), among others. He was awarded the Sonic Mmabolela field recording residency in Limpopo, South Africa, in 2013 and 2015, and the Team Effort! residency 2014 in Scotland. He has performed in the noise group “Murder” and the Balinese gamelan ensemble “Gamelan Dharma Swara,” and is a member of the sci-fi rap group “C∆N-D” with Amy Khoshbin and the improvised piano and electronics duo “Waver.”

Shawn Lawson is an experiential media artist creating the computational sublime. As Obi-Wan Codenobi, he live-codes real-time computer graphics with his open-source software, The Force. He has performed or exhibited in England, Scotland, Spain, Denmark, Russia, Italy, Korea, Portugal, Brazil, Turkey, Malaysia, Iran, Canada, and the USA. He received grants from NYSCA and the Experimental Television Center, and he has been in residence at CultureHub and Signal Culture. Lawson studied at CMU and ÉNSBA. He received his MFA in Art and Technology Studies from SAIC. He is an Associate Professor in the Department of Art at RPI.

Sean Lee is a vagabond researcher at seanleelabs, where he explores higher abstractions and their applications in life. He is a programmer by day, pipe-dreamer by night, and musician whenever in between.

Norah Lorway is a live coder, composer and computer music researcher who performs at Algoraves and other such events. She holds a PhD in Computer Music from the University of Birmingham, where she worked on music and software in SuperCollider and performed on the BEAST multichannel system. She has had works performed throughout North America and Europe, and is involved with various new media collaborations in the UK and Canada. Over the last year, she was a Postdoctoral Fellow at the University of British Columbia, working at the intersection of live coding and gesture control and building a new Digital Musical Instrument (DMI). Norah is currently a Lecturer in Creative Music Technology at Falmouth University (UK), where she teaches and researches interactive creative computing.

Michael McCrea (Seattle, b. 1985) performs multiple roles in art production and arts-driven research. He received his BFA from the University of Washington’s Center for Digital Arts and Experimental Media (DXARTS) in 2009, with an emphasis in spatial sound and mechatronic art. Since then he has authored and collaborated on a variety of works from real-time sound installation and performance, to video and light animation, sensing and control systems, and more. He is currently a Research Scientist for DXARTS, and in this capacity uses spatial and hyper-directional sound as an investigative medium, developing new tools for composition and performance.

Michael Musick is a media artist, music technologist, composer, performer, improviser, and researcher. His current work focuses on the creation of and research into interactive performance systems. The Sonic Spaces project, which is a series of dynamic interactive sonic ecosystem compositions, is the most recent example of this work. Michael is a Music Technology Ph.D. Candidate at NYU and is part of the Computer Music Group at the Music and Audio Research Lab (MARL).

Marcin Pączkowski is a composer, conductor, and digital artist, working with both traditional and electronic media. Currently he is a doctoral candidate in the Center for Digital Arts and Experimental Media (DXARTS) at the University of Washington.

Daniel Palkowski studied music at Manhattan School of Music and Columbia University. He has taught audio at ITP (NYU), MSM, and Columbia, and is currently a video and audio producer and engineer at Ernst & Young. He has been a SuperCollider enthusiast since the Wesleyan symposium a few years back.

Joo Won Park (b. 1980) wants to make everyday sound beautiful and strange so that everyday becomes beautiful and strange. He performs live with toys, consumer electronics, kitchenware, vegetables, and other non-musical objects by digitally processing their sounds. He also makes pieces with field recordings, sine waves, and any other sources that he can record or synthesize. Joo Won draws inspiration from Florida swamps, Philadelphia skyscrapers, his two sons, and other soundscapes surrounding him. He has studied at Berklee College of Music and the University of Florida, and currently serves as a Visiting Assistant Professor of Computer Music at the Oberlin Conservatory. Joo Won’s music and writings are available on ICMC DVD, Spectrum Press, MIT Press, PARMA, Visceral Media, MCSD, SEAMUS CD Series, and No Remixes labels.

Tae Hong Park is a composer, music technologist, and bassist. His work focuses on composition of electro-acoustic and acoustic music, machine learning and computer-aided music analysis, research in multi-dimensional aspects of timbre, and audio digital signal processing. Dr. Park has presented his music at national and international conferences and festivals including Bourges, ICMC, MATA, SCIMF, and SEAMUS. He is the Chief Editor of Journal SEAMUS, serves as Editorial Consultant for Computer Music Journal, served as President of the International Computer Music Association (ICMA), and is Director of NYU Steinhardt's Composition program.

Pepper founded the image remixing site Photoblaster and the video sharing community Scannerjammer. He frequently performs improvised music with live code at the nightclub The Lash in Downtown Los Angeles. He uses Tidal, Gibber and his own Python-based live coding environment CrunchTime in live coding performances. He is currently developing Songshark, software for automatic music, at the Harvard Innovation Lab in Cambridge, MA.

Kamron Saniee is an Iranian-American electronic musician and data scientist based in NYC. He holds an AB summa cum laude in mathematics from Princeton and maintains activities in live electronic performance under his own name, extending previous training as a classical violinist and in composition at Mannes Conservatory. He has performed original multichannel music with the CT::SWaM spatial sound series at the Knockdown Center, at the Fridman Gallery in SoHo, and at Neu West Berlin in Berlin-Neukölln.

Kate Sicchio works at the interface of technology and choreography. Her work includes performances, installations, web and video projects. She has presented work internationally across the US, Canada, Germany, Australia, Belgium and the UK, at venues such as the V&A (London), EU Parliament (Brussels), Banff New Media Institute (Banff) and Arnolfini Art Centre (Bristol, UK). She is currently Adjunct Faculty at Parsons the New School for Design and New York University.

Ryan Ross Smith is a composer and performer currently based in Fremont Center, NY. Smith has performed throughout the US, Europe and UK, including performances at MoMA and PS1 [NYC] and Le Centre Pompidou [Paris, FR], has had his music performed throughout North America, Iceland, Australia and the UK, has presented his work and research at conferences including NIME, ISEA, ICLI, the Deep Listening Conference and Tenor2015, and has lectured at various colleges and universities. Smith earned his MFA in Electronic Music from Mills College in 2012, and is currently a PhD candidate in Electronic Arts at the Rensselaer Polytechnic Institute in Troy, NY.

Slub (performing remotely) is a live coding trio, who have been making people dance to their algorithms since the year 2001. Slub sound emerges from slub software; melodic and chordal studies, generative experiments and beat processes. They make live coded music using hand crafted programming languages in networked synchrony. With roots in UK electronica and critical tech culture, slub build their own software environments for creating music in realtime; everything you hear is formed by human minds. Slub project their screens to open up their live software development process, which does not adhere to industry quality control standards. They communicate using OSC over UDP and eyebrow gestures. The music output ranges from intelligent extra-slow gabber skiffle to progressive hardcore algorave. Slub have performed widely over the past 16 years, including at /* vivo */ Mexico City, Sonic Acts Amsterdam, Sonar Barcelona, Club Transmediale Berlin, Ars Electronica Linz, STRP Eindhoven, Ultrasound Huddersfield and Hoxton Foundry London.


Tech Talks - 3:30pm Saturday

Tae Hong Park

The Citygram project

Our first iteration of Citygram (CG) aims to create dynamic, interactive, real-time soundmaps that capture the sonic ebb-and-flow of urban neighborhoods through the CG sensor network framework and API. CG allows anyone with a computing device, audio input, and Internet access to actively participate as a "streamer" and utilize spatio-temporal soundscape data for creating data-driven artworks.

Shawn Lawson

Live Coding GLSL Graphics with The Force

This talk will give an overview of using OpenGL fragment shaders for live-coding graphics. We will tour an open-source toolkit, The Force, designed around WebGL for Google Chrome or Firefox. Specific topics include how the toolkit is constructed, how to work with audio input, and how to program in an auto-compiling text editor.

Marcin Pączkowski

Presentation of WsGUI

WsGUI is a web-based interface system for SuperCollider. It enables creating dynamic user interfaces accessible through a web browser, for remote display and control.

Michael Clemow

A discussion of using SuperCollider for field recording

Michael McCrea

SuperCollider for system control and spatial modulation

As a programming language created for music making, SuperCollider is exceptionally well suited to orchestrating time-varying and state-based processes. Through a survey of artworks and research projects, this talk aims to illustrate SuperCollider’s strengths not only as a sound synthesis engine, but as a tool for realtime control of mechatronic systems, installations, and experiment-driven research.
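
As a flavor of what a "state-based process" can mean in practice (an illustrative sketch, not material from the talk), a few lines of SuperCollider can step a control bus through a timed sequence of target values; in an installation, that bus might drive a synth parameter, a lamp, or a motor controller.

    // Illustrative only: a routine stepping a control bus through timed states.
    (
    ~ctl = Bus.control(s, 1);
    Tdef(\sweep, {
        [0, 0.3, 0.9, 0.5].do { |target|
            ~ctl.set(target);   // downstream, this value could drive a synth, light, or actuator
            2.wait;             // hold each state for two seconds (at the default tempo)
        };
    }).play;
    )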

Lightning Talks - 5:30pm Saturday

Scott Carver

Connection Quark

A quick-and-dirty introduction to Connection, a quark for connecting and synchronizing parameters, data, UI, synths, buses, and external controllers.

Leandra Tejedor

William K Chang

Complex adaptive sequencing

An application of complex systems modeling to sequencing and generative composition, using feedback to generate patterns from randomness.
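
To make the idea concrete (a hypothetical sketch, not Chang's actual system, which uses Ableton Live and Max), here is one way to phrase a feedback-driven sequence in SuperCollider: each step blends the previous state with a random perturbation, and the state is quantized to scale degrees, so loose patterns emerge from noise.

    // Hypothetical feedback sequencer: the state feeds back on itself with a little randomness.
    (
    var state = 0.5;
    Pbind(
        \scale, Scale.minorPentatonic,
        \degree, Pfunc {
            state = (state * 0.8) + (0.2 * 1.0.rand);   // feedback rule: mostly memory, some noise
            (state * 8).floor                           // quantize the state to 8 scale degrees
        },
        \dur, 0.25
    ).play;
    )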

Brian Fay

Triggerfish demo

Triggerfish is an audio patching environment that uses a web-based editor to create, destroy, control, and connect audio objects that the user builds in SuperCollider, somewhat like a modular synthesizer (think Pd, Max/MSP, or Reaktor).


Workshops - 12:00pm Sunday

Michael McCrea

The Ambisonic Toolkit in SuperCollider

The Ambisonic Toolkit (ATK) is an extensive set of tools for artists looking to create and manipulate immersive sound environments in 3D audio.

Workshop participants will learn the core tools and concepts of working in ambisonics, a versatile 3D spatial sound format. This workshop aims to be foundational, outlining and illustrating core building blocks that can then be used to form more complex networks: from encoding mono or multichannel recordings into 3D, to synthesizing enveloping sonic environments, to sculpting and filtering soundfields. With these foundations, participants will be able to author and transform soundfields in intuitive yet perceptually complex ways that would otherwise be difficult in other spatial sound formats.
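
For a sense of scale, a complete first-order chain can be quite short. The following is a rough sketch (not workshop material; it assumes the ATK, SC3-plugins, and the binaural kernel files are installed and the server is booted): encode a mono source into B-format, rotate the soundfield, and decode to binaural stereo for headphones.

    // Rough ATK sketch: mono -> B-format -> rotate -> binaural stereo.
    (
    ~encoder = FoaEncoderMatrix.newOmni;     // encode a mono signal as an omnidirectional soundfield
    ~decoder = FoaDecoderKernel.newListen;   // an HRTF (binaural) decoder; needs the ATK kernel downloads
    )

    (
    {
        var foa;
        foa = FoaEncode.ar(PinkNoise.ar(0.1), ~encoder);
        foa = FoaTransform.ar(foa, 'rotate', MouseX.kr(-pi, pi));   // rotate the soundfield with the mouse
        FoaDecode.ar(foa, ~decoder);
    }.play;
    )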

Notes for participants:

This workshop is intended for SC users with intermediate-level experience: those who are comfortable executing and modifying code on the fly and have a working understanding of Buffers, Busses, and Synths.

What you’ll need for the workshop:

Headphones: you’ll be playing your sound through binaural stereo.
SC3-plugins: please have these installed (this is where the core ATK installation lives).
MathLib Quark: can be installed by running Quarks.gui, or from code (see the snippet below).
The ATK also requires some additional installation steps: a set of external files needs to be downloaded for a complete installation (example B-format recordings and convolution kernels), as well as the workshop files themselves in SC’s help file format.

These files, along with simple installation instructions, can be downloaded here (~365 MB):

alternate host:
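
If you prefer installing the Quark from code rather than through the GUI, a couple of lines like these should work in SuperCollider 3.7 or later (a quick sketch, not part of the official installation instructions):

    Quarks.install("MathLib");   // fetch and install the MathLib Quark
    thisProcess.recompile;       // recompile the class library so the new classes are available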

Scott Carver

Version Control and Artistic Iteration

Making creative work in SuperCollider means mixing the practicalities of software development with the vagaries of honing a creative idea to its end. Using a source control system like git to track source code, and capturing and documenting in-progress creative work, are both critically important to a project - and are the first things to fall by the wayside when we're working at full steam. This workshop will provide a brief introduction to using the git version control system, and then use those basic techniques to trace out a strategy for tracking both your code and your creative work, iterating the technical and creative components in tandem. Along the way, we'll discuss balancing code-writing and creative work in the same project, experimenting without derailing your project, archiving completed projects, and sidestepping common version control gotchas. The workshop is tailored towards git newbies, though git pros will also find it useful. Beginners should come with GitHub Desktop installed, a GitHub account, and a working SuperCollider project of their own to follow along.


The workshop is intended for people who want to learn git / version control, people who have git/scm experience but want to learn to better manage their projects, and anyone who has a file named e.g. "stria-october-performance copy(4) - WORKING!.scd" on their computer.
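
For anyone who wants a head start before the workshop, the core git loop it builds on looks roughly like this (illustrative commands with made-up file and tag names; run them in your project folder):

    git init                               # turn the project folder into a repository
    git add my-piece.scd                   # stage the files you want tracked
    git commit -m "working drone section"  # snapshot the current state with a message
    git tag concert-2016-03-25             # label the exact code used at a performance
    git log --oneline                      # review your history of snapshots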