CeReNeM’s Creative Coding Lab (CCL) is an international research hub for audio software, led by Dr Alex Harker and linked to Dr Hyunkook Lee’s Applied Psychoacoustics Lab within the School of Computing & Engineering. The CCL focuses on software programming for creative purposes, with current projects covering applied psychoacoustics, 3D spatialisation, interactive analysis, large-database navigation and resynthesis, and creative DSP tools. The aim of the CCL is to produce high-quality research that leads to new open-source software tools, and to enable the further development of these tools through relationships with the lab’s industry partners.
Key to the concept of the ‘laboratory’ of the CCL is an enhanced integration between postgraduate students and staff, providing a mixed model of funded study and paid work on staff projects for exceptionally promising student programmers. The other central focus is interdisciplinary exchange, drawing on the technical expertise and creative user base of the internationally renowned practitioners within CeReNeM’s network to create innovative software tools. Knowledge transfer inside and outside of the university—including work with international partners and industry consultants—is crucial in this process, drawing on existing partnerships as well as supporting new relationships with high-profile institutions.
The CCL holds regular open meetings and presentations to provide a platform for its members and guests to share work in progress, and hosts an annual symposium as part of the University’s Electric Spring Festival.
Director: Dr Alex Harker
Dr Hyunkook Lee (Applied Psychoacoustics Lab)
Prof Pierre Alexandre Tremblay (FluCoMa)
Dr Owen Green (FluCoMa)
Dr Gerard Roma (FluCoMa)
Prof Michael Clarke (IRiMaS)
Dr Frédéric Dufeu (IRiMaS)
Dr Keitaro Takahashi (IRiMaS)
Dr Kristina Wolfe
Dr Sam Pluta - Creative Coding Lab Visiting Research Fellow; Assistant Professor, University of Chicago; Director, UoC Studio
Dr Miller Puckette - Creative Coding Lab Visiting Research Professor; Professor, University of California San Diego
NoTAM
University of California San Diego
University of Chicago CHIME studios
Université de Montréal
IEM Graz
CCL PhD Scholarship Recipient: Oli Larkin (2016-2019)
A DSP framework for arbitrary size frame processing with arbitrary sub-sample accurate timing.
Block-based processing of audio streams, as commonly employed in realtime audio environments such as Max, Pd and SuperCollider, is ill-suited to digital signal processing that operates on discrete chunks, or frames, of audio. Such environments currently lack comprehensive support for complex multi-rate processing. Consequently, well-documented frame-based processing techniques requiring sophisticated multi-rate DSP graphs are under-exploited in the creative coding community. FrameLib provides an extensible open-source library for realtime frame-based audio processing, and is currently available as a set of Max externals and a C++ codebase. It enables rapid prototyping and creation of DSP networks involving dynamically sized frames processed at arbitrary rates. Unlike prior solutions, FrameLib provides novel systems for scheduling and memory management, reducing complexity for the user.
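The contrast between fixed-block and frame-based processing can be illustrated with a minimal sketch in plain Python/NumPy (an illustration of the general idea only, not the FrameLib API): frames of an arbitrary size are cut from a stream at an arbitrary hop and processed whole, something a fixed block size cannot express without extra buffering.

```python
import numpy as np

def frame_stream(signal, frame_size, hop):
    """Yield frames of arbitrary size from a signal, advancing by an
    arbitrary hop, zero-padding the final partial frame."""
    pos = 0
    while pos < len(signal):
        frame = signal[pos:pos + frame_size]
        if len(frame) < frame_size:
            frame = np.pad(frame, (0, frame_size - len(frame)))
        yield frame
        pos += hop

# A whole-frame operation (spectral magnitude) applied per frame.
signal = np.sin(2 * np.pi * 440 * np.arange(2048) / 44100)
magnitudes = [np.abs(np.fft.rfft(f)) for f in frame_stream(signal, 512, 256)]
```

Here the frame size (512) and hop (256) are independent of any underlying block size, which is the property frame-based environments are built around.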
Beta release code available via the GitHub repository.
A set of 80+ externals for a variety of tasks in Max.
A brief overview of some areas addressed:
general purpose scaling for Max and MSP
efficient partitioned + non-partitioned convolution
comprehensive descriptor analysis (realtime + non-realtime)
enhanced audio multi-threading / dynamic patch loading
efficient buffer playback and storage
improved Wii remote communication object
high-quality random number generators for Max and MSP
sample-accurate voice management
thread debugging and switching
utility objects
SIMD versions of 35 basic MSP objects
The AHarker Externals are licensed freely for non-commercial use. Portions of this work have been supported by the Arts and Humanities Research Council and the HISSTools Project at the University of Huddersfield. Available for download.
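The partitioned convolution listed above rests on a simple identity: splitting an impulse response into partitions and summing the delayed sub-convolutions reproduces the full convolution exactly. The sketch below shows this in the time domain for clarity; a realtime implementation would instead perform each partition's convolution in the frequency domain via the FFT.

```python
import numpy as np

def partitioned_convolve(x, ir, part_size):
    """Convolve x with ir by splitting ir into uniform partitions and
    summing each sub-convolution at its partition's delay offset."""
    out = np.zeros(len(x) + len(ir) - 1)
    for start in range(0, len(ir), part_size):
        part = ir[start:start + part_size]
        seg = np.convolve(x, part)
        out[start:start + len(seg)] += seg
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
ir = rng.standard_normal(64)
direct = np.convolve(x, ir)          # reference: one big convolution
partitioned = partitioned_convolve(x, ir, 16)
```

Partitioning matters in practice because short partitions keep latency low while long tails remain affordable.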
Additional externals to assist in making the M4L Convolution Reverb devices. Available as source code from the GitHub repository.
Winner of the Faust Award 2018, iPlug is a free, open-source software development framework used by many companies to create audio plug-ins that work on multiple operating systems and in multiple plug-in formats, e.g. Steinberg’s VST, Apple’s AudioUnit and Avid’s AAX. CCL PhD researcher Oli Larkin has maintained the project for many years and, in collaboration with Dr Alex Harker, has recently been working on a new version, “iPlug 2”, to be released in 2018, featuring a much improved code base and some exciting and innovative new features such as support for Web Audio. Available from the GitHub repository.
An interactive and intelligent tool for sound source localisation prediction in recording. It can interactively visualise the predicted perceived position of each sound object for any microphone array configuration, and can also recommend a suitable microphone array configuration for a desired stereo width. Useful for recording engineers and students. Available as a free app from the iOS and Android app stores.
Open-access library of microphone array impulse responses (IRs), including over 2000 IRs captured in Huddersfield’s St. Paul’s Hall for 13 loudspeaker positions with 39 different microphone configurations from 2-channel stereo to 9-channel 3D audio. The library comes with a convenient Max-based convolution renderer so the user can easily compare and mix different microphone techniques and create virtual ensemble recordings in 3D. Available as source code from the GitHub repository.
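The core operation behind any such convolution renderer can be sketched per channel as FFT-based convolution of a dry signal with a measured IR, plus a wet/dry mix. This is a generic single-channel illustration, not the Max renderer itself; the function name and `mix` parameter are hypothetical.

```python
import numpy as np

def render_with_ir(dry, ir, mix=1.0):
    """Convolve a dry signal with a measured impulse response via the
    FFT (zero-padded to avoid circular wrap-around), then mix wet/dry."""
    n = len(dry) + len(ir) - 1
    nfft = 1 << (n - 1).bit_length()          # next power of two >= n
    wet = np.fft.irfft(np.fft.rfft(dry, nfft) * np.fft.rfft(ir, nfft))[:n]
    dry_padded = np.pad(dry, (0, n - len(dry)))
    return mix * wet + (1.0 - mix) * dry_padded

rng = np.random.default_rng(1)
dry = rng.standard_normal(256)
ir = rng.standard_normal(64) * np.exp(-np.arange(64) / 16)  # toy decaying "IR"
wet = render_with_ir(dry, ir)
```

In a multichannel setting the same operation runs once per microphone channel, and mixing different microphone techniques amounts to summing their rendered outputs.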
ReaCoMa is a package of ‘ReaScripts’ that bring the Fluid Corpus Manipulation tools to REAPER. The scripts allow you to apply algorithms from the first FluCoMa toolbox to native REAPER items, opening up unique possibilities for mixing, composition and sound exploration. ReaCoMa’s current release includes several decomposition and segmentation algorithms:
- Non-negative matrix factorisation (blind source separation)
- Harmonic percussive source separation
- Transient extraction
- Sinusoidal resynthesis
- Novelty slicing
- Two amplitude slicers
- Transient slicing
- Spectral slicing
...and more to come with the continued development of the FluCoMa project.
Full installation and documentation available here.
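The first algorithm in the list, non-negative matrix factorisation, separates a magnitude spectrogram V into spectral templates W and time activations H with V ≈ W·H. The sketch below is a generic illustration of that idea using Lee–Seung multiplicative updates on a toy matrix, not the FluCoMa implementation.

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0):
    """Factorise non-negative V ~= W @ H by multiplicative updates
    (Euclidean cost); W holds spectral templates, H their activations."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + 1e-9
    H = rng.random((rank, V.shape[1])) + 1e-9
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy "spectrogram": two spectral templates active at different times.
templates = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # 3 bins x 2 sources
activations = np.array([[1, 1, 0, 0], [0, 0, 1, 1]], float)  # 2 sources x 4 frames
V = templates @ activations
W, H = nmf(V, rank=2)
```

For blind source separation, each rank-one component W[:, k] · H[k, :] is resynthesised separately, yielding one layer of the original sound per component.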
MTools for Live (MT4L) is a suite of ambisonic Max for Live devices created by Mortimer Pavlitski (Music MA). The tools enable the encoding and processing of 5th-order ambisonic signals in Ableton Live. The research focuses on enabling rapid interaction with spatialisation, with an emphasis on creativity rather than utility. MT4L can be downloaded here.
The suite features four main effects:
- The MT4L Granulator device granulates from a buffer. Individual grains are panned to positions in the ambisonic field that correspond to the positions of a controllable boids simulation. The device features per-grain pitch shift, filter and wave-shaping.
- The MT4L TapDelay device takes a stereo input and pans the tap outputs of a delay line to different positions on the ambisonic sphere. The device features per-tap pitch shift, filter and reverb.
- The MT4L Smudge device takes a stereo input and pans the frequency bands of a filter bank to different positions on the ambisonic sphere. The device features a per-band spectral gate and per-band spectral delay.
- The MT4L AmbiDelay device is an ambisonic delay, featuring a filter and rotator inside its feedback path. Parameters can be adjusted randomly, either with a manual trigger or automatically via onset detection, a MIDI note-on message, or sync to a beat in Ableton Live.
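Ambisonic panning, as used by all four devices, encodes a mono source into spherical-harmonic channels whose gains depend only on the source direction. A minimal first-order sketch is shown below (MT4L itself works at 5th order, with 36 channels); the channel order and normalisation assumed here are ACN/SN3D, and the function name is illustrative.

```python
import numpy as np

def encode_foa(sample, azimuth, elevation):
    """Encode a mono sample to first-order ambisonics
    (ACN channel order W, Y, Z, X with SN3D normalisation).
    Angles are in radians; azimuth 0 is straight ahead."""
    w = 1.0
    y = np.sin(azimuth) * np.cos(elevation)
    z = np.sin(elevation)
    x = np.cos(azimuth) * np.cos(elevation)
    return sample * np.array([w, y, z, x])

# A source straight ahead (azimuth 0, elevation 0) excites only W and X.
front = encode_foa(1.0, 0.0, 0.0)
```

Moving a grain, delay tap or frequency band then means recomputing these gains per direction, which is what makes per-grain and per-tap spatialisation cheap.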
A hybrid waveguide and ray-tracing room acoustic simulator with GPU acceleration, created by Master's student Reuben Thomas.
The aim of room acoustics simulation is to simulate the reverberant properties of a space without having to physically build anything. This is useful for a variety of applications: architects need to be able to evaluate the acoustics of a building before construction begins; sound editors for film sometimes need to mix in recordings which were not made on location; electronic musicians like to conjure imaginary or impossible spaces in their music, and virtual-reality experiences must use audio cues to convince the user that they have been transported to a new environment. The Wayverb project makes available a graphical tool for impulse response synthesis. It combines geometric and wave-modelling simulation techniques, providing an adjustable balance between speed and accuracy. It is also free to download, can be run immediately on commodity hardware, and the source code can be used and extended under the terms of the GNU General Public License (GPL).
Written in C++ using OpenCL and JUCE. Free and open source, available for download.
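The headline quantity such simulators predict is reverberation time. Wayverb computes this from full geometric and wave-based models, but the classical statistical estimate, Sabine's formula T60 = 0.161·V/A, gives a feel for what is being simulated; the room dimensions and absorption coefficients below are made-up illustration values.

```python
def sabine_rt60(volume_m3, surfaces):
    """Estimate reverberation time in seconds with Sabine's formula
    T60 = 0.161 * V / A, where A is the total absorption in sabins
    (sum of surface area times absorption coefficient)."""
    absorption = sum(area * coeff for area, coeff in surfaces)
    return 0.161 * volume_m3 / absorption

# A 10 x 8 x 4 m hall with moderately absorptive surfaces.
rt60 = sabine_rt60(
    volume_m3=10 * 8 * 4,
    surfaces=[(2 * (10 * 4 + 8 * 4), 0.1),   # walls
              (10 * 8, 0.3),                 # floor
              (10 * 8, 0.2)],                # ceiling
)
```

Geometric and wave-based simulation refines this single number into a full impulse response, capturing early reflections and low-frequency modal behaviour that a statistical formula cannot.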
A tracker interface and an event sequencer for live coding. Siren is a JavaScript-based web application built around a hierarchical data structure and a tracker-inspired user interface, initially intended to build on the concepts and technology of TidalCycles. The main idea is to support a hybrid interaction paradigm in which the musical building blocks of patterns are encoded in a textual programming language, while the arranging and dispatching of patterns is done via a grid-based user interface inspired by musical trackers.
The back-end, which interfaces with GHC, is built using Node.js, and the front end is implemented using React. A conference paper, Siren: Hierarchical Composition Interface, was presented at the 2017 International Computer Music Conference (ICMC). In addition to the conference proceedings, Siren has been featured in Push Turn Move, a recently crowd-sourced book on electronic music instruments, alongside its predecessors SuperCollider, Pure Data and TidalCycles.
Available for download. More information available here.
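The hybrid paradigm can be sketched as a tiny data model, illustrative only and not Siren's actual code: patterns are named code strings (the textual half), and a grid of cells dispatches them step by step (the tracker half).

```python
# Patterns live as code strings, as in a live-coding language.
patterns = {"kick": "bd*4", "hat": "hh*8", "bass": "c2 e2"}

grid = [             # tracker-style grid: each row is one step,
    ["kick", None],  # each column one channel
    ["kick", "hat"],
    [None,   "hat"],
    ["bass", "hat"],
]

def dispatch(grid, patterns):
    """Walk the grid step by step, collecting the pattern code each
    non-empty cell triggers as (step, channel, code) events."""
    events = []
    for step, row in enumerate(grid):
        for channel, name in enumerate(row):
            if name is not None:
                events.append((step, channel, patterns[name]))
    return events

events = dispatch(grid, patterns)
```

Editing a cell rearranges when a pattern fires without touching its code, while editing the code string changes the sound everywhere that pattern is placed.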
Prof Miller Puckette (University of California San Diego, USA)
22–27 March 2018
Guest residency & Composition Masterclass, Creative Coding Lab
Prof Hans Tutschku (Harvard University, USA)
6 November 2017
Technology: The expressive extension of my artistic sensibility
Prof Miller Puckette (University of California San Diego, USA)
28–31 March 2017
Roles for scores for electronic music; and
Designing electronic music instruments
Dr Sam Pluta (University of Chicago, USA)
22 November 2016
Improvisation and Openness: Workshop, in association with hcmf// 2016
A DSP framework for arbitrary size frame processing with arbitrary sub-sample accurate timing
A set of 80+ externals for a variety of tasks in Max/MSP
Additional externals to assist in making the M4L Convolution Reverb devices
An award-winning, open-source framework used to create audio plug-ins that work on multiple operating systems and in multiple plug-in formats
Open source software tools to address issues related to the composition, performance and presentation of electronic music
MARRS is an interactive and intelligent tool for sound source localisation prediction in recording
Open-access library of microphone array impulse responses (IRs) and Max-based convolution renderer
ReaCoMa is a package of ‘ReaScripts’ that bring the Fluid Corpus Manipulation tools to REAPER
MTools for Live is a suite of ambisonic M4L devices by Mortimer Pavlitski enabling the encoding and processing of 5th-order ambisonic (5OA) signals in Ableton Live
Hybrid waveguide and ray-tracing room acoustic simulator with GPU acceleration. Written in C++ using OpenCL and JUCE
A tracker interface and an event sequencer for live coding
The HISS is a loudspeaker orchestra specialising in the concert rendition of electronic music of all kinds
---