CeReNeM’s Creative Coding Lab (CCL) is an international research hub for audio software research, led by Dr Alex Harker and linked to Dr Hyunkook Lee’s Applied Psychoacoustics Lab within the School of Computing & Engineering. The CCL focuses on software programming for creative purposes, with current projects covering applied psychoacoustics, 3D spatialisation, interactive analysis, large database navigation and resynthesis, and creative DSP tools. The aim of the CCL is to produce high-quality research that leads to new open-source software tools, and to enable the further development of these tools through relationships with the lab’s industry partners.
Key to the concept of the ‘laboratory’ of the CCL is an enhanced integration between postgraduate students and staff, providing a mixed model of funded study and paid work on staff projects for exceptionally promising student programmers. The other central focus is interdisciplinary exchange, drawing on the technical expertise and creative user base of the internationally renowned practitioners within CeReNeM’s network to create innovative software tools. Knowledge transfer inside and outside of the university—including work with international partners and industry consultants—is crucial in this process, drawing on existing partnerships as well as supporting new relationships with high-profile institutions.
The CCL holds regular open meetings and presentations as a platform for its members and guests to share work in progress, and hosts an annual symposium as part of the University’s Electric Spring Festival.
Director: Dr Alex Harker
Dr Hyunkook Lee (Applied Psychoacoustics Lab)
Prof Pierre Alexandre Tremblay (FluCoMa)
Dr Owen Green (FluCoMa)
Dr Gerard Roma (FluCoMa)
Prof Michael Clarke (IRiMaS)
Dr Frédéric Dufeu (IRiMaS)
Dr Keitaro Takahashi (IRiMaS)
Dr Kristina Wolfe
Dr Sam Pluta - Creative Coding Lab Visiting Research Fellow; Assistant Professor, University of Chicago; Director UoC Studio
Dr Miller Puckette - Creative Coding Lab Visiting Research Professor; Professor, University of California San Diego
University of California San Diego
University of Chicago CHIME studios
Université de Montréal
CCL PhD Scholarship Recipient: Oli Larkin (2016-2019)
A DSP framework for arbitrary-size frame processing with sub-sample-accurate timing.
Block-based processing of audio streams, as commonly employed in realtime audio environments such as Max, Pd and SuperCollider, is ill-suited to digital signal processing that operates on discrete chunks, or frames, of audio. Such environments currently lack comprehensive support for complex multi-rate processing. Consequently, well-documented frame-based processing techniques requiring sophisticated multi-rate DSP graphs are under-exploited in the creative coding community. FrameLib provides an extensible open-source library for realtime frame-based audio processing, and is currently available as a set of Max externals and a C++ codebase. It enables rapid prototyping and creation of DSP networks involving dynamically sized frames processed at arbitrary rates. Unlike prior solutions, FrameLib provides novel systems for scheduling and memory management, reducing complexity for the user.
Beta release code is available via the GitHub repository.
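The distinction between the two processing models can be sketched in a few lines of Python. This is purely illustrative (the function names and structure are invented for the example, and are not FrameLib's actual API): a block-based stream delivers fixed-size chunks at a fixed rate, while a frame-based stream can deliver frames of any size at any sample position.

```python
def block_stream(signal, block_size):
    """Split a signal into fixed-size blocks, as in a typical realtime audio callback."""
    for start in range(0, len(signal) - block_size + 1, block_size):
        yield signal[start:start + block_size]

def frame_stream(signal, frame_starts, frame_sizes):
    """Yield frames of arbitrary size starting at arbitrary sample positions."""
    for start, size in zip(frame_starts, frame_sizes):
        yield signal[start:start + size]

signal = list(range(16))  # a toy 16-sample signal

blocks = list(block_stream(signal, 4))                      # four blocks of 4 samples
frames = list(frame_stream(signal, [0, 3, 9], [3, 6, 7]))   # frames of 3, 6 and 7 samples
```

Hosting the second model inside an environment built around the first is what requires the kind of scheduling and memory-management machinery described above.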
A set of over 80 externals for a variety of tasks in Max.
A brief overview of some areas addressed:
general purpose scaling for Max and MSP
efficient partitioned + non-partitioned convolution
comprehensive descriptor analysis (realtime + non-realtime)
enhanced audio multi-threading / dynamic patch loading
efficient buffer playback and storage
improved Wii remote communication object
high-quality random number generators for Max and MSP
sample accurate voice management and more
thread debugging and switching
SIMD versions of 35 basic MSP objects
The AHarker Externals are licensed freely for non-commercial use. Portions of this work have been supported by the Arts and Humanities Research Council and the HISSTools Project at the University of Huddersfield. Available for download.
Additional externals to assist in making the M4L Convolution Reverb devices. Available as source code from the GitHub repository.
Winner of the Faust Award 2018, iPlug is a free, open-source software development framework used by many companies to create audio plug-ins that work across multiple operating systems and plug-in formats, e.g. Steinberg’s VST, Apple’s AudioUnit and Avid’s AAX. CCL PhD researcher Oli Larkin has maintained the project for many years and, in collaboration with Dr Alex Harker, has recently been working on a new version, “iPlug 2”, to be released in 2018, featuring a much-improved code base and innovative new features such as support for Web Audio. Available from the GitHub repository.
An interactive and intelligent tool for predicting sound source localisation in recording. It interactively visualises the predicted perceived position of each sound object for any microphone array configuration, and can also recommend a microphone array configuration for a desired stereo width. Useful for recording engineers and students, the app is freely available for iOS and Android devices.
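To illustrate the kind of prediction involved, the sketch below estimates a perceived source angle from an interchannel level difference using the generic stereophonic tangent law. This is a textbook approximation chosen for the example, not the psychoacoustic model used by the app itself:

```python
import math

def perceived_angle(icld_db, base_angle_deg=30.0):
    """Estimate the perceived source angle (degrees) for a standard stereo
    loudspeaker pair from an interchannel level difference (dB), using the
    tangent panning law. Generic textbook model, not the app's own."""
    g = 10.0 ** (icld_db / 20.0)         # linear gain ratio, left / right
    ratio = (g - 1.0) / (g + 1.0)
    return math.degrees(math.atan(ratio * math.tan(math.radians(base_angle_deg))))

perceived_angle(0.0)    # equal levels: image at the centre (0 degrees)
perceived_angle(12.0)   # left channel 12 dB louder: image pulled towards +30 degrees
```

A full model such as the one behind this app additionally accounts for interchannel time differences and microphone directivity, which is precisely what makes an interactive tool valuable.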
Open-access library of microphone array impulse responses (IRs), including over 2000 IRs captured in Huddersfield’s St. Paul’s Hall for 13 loudspeaker positions with 39 different microphone configurations from 2-channel stereo to 9-channel 3D audio. The library comes with a convenient Max-based convolution renderer so the user can easily compare and mix different microphone techniques and create virtual ensemble recordings in 3D. Available as source code from the GitHub repository.
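Conceptually, the renderer convolves a dry recording with a captured impulse response so that the recording takes on the hall's acoustic. A minimal Python sketch of that operation (direct convolution with an invented two-tap IR, not the Max implementation):

```python
def convolve_ir(dry, ir):
    """Render a dry signal through a room impulse response by direct
    convolution: each input sample triggers a scaled copy of the IR."""
    wet = [0.0] * (len(dry) + len(ir) - 1)
    for i, d in enumerate(dry):
        for j, h in enumerate(ir):
            wet[i + j] += d * h
    return wet

dry = [1.0, 0.0, 0.0, 0.0]   # a unit impulse standing in for the dry signal
ir = [0.5, 0.25]             # toy two-tap "room" for illustration
wet = convolve_ir(dry, ir)   # [0.5, 0.25, 0.0, 0.0, 0.0]
```

Mixing the outputs of several such convolutions, one per microphone in the array, is what lets the user compare and blend different microphone techniques after the fact.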
A hybrid waveguide and ray-tracing room acoustic simulator with GPU acceleration, created by Master’s student Reuben Thomas.
The aim of room acoustics simulation is to simulate the reverberant properties of a space without having to physically build anything. This is useful for a variety of applications: architects need to be able to evaluate the acoustics of a building before construction begins; sound editors for film sometimes need to mix in recordings which were not made on location; electronic musicians like to conjure imaginary or impossible spaces in their music, and virtual-reality experiences must use audio cues to convince the user that they have been transported to a new environment. The Wayverb project makes available a graphical tool for impulse response synthesis. It combines geometric and wave-modelling simulation techniques, providing an adjustable balance between speed and accuracy. It is also free to download, can be run immediately on commodity hardware, and the source code can be used and extended under the terms of the GNU General Public License (GPL).
Written in C++ using OpenCL and JUCE. Free and open source, available for download.
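The geometric half of such a hybrid simulation can be illustrated with the image-source method. The sketch below (purely illustrative Python, unrelated to Wayverb's actual code) mirrors a source across each wall of a shoebox room and converts the resulting path lengths into delays in samples:

```python
import math

SPEED_OF_SOUND = 343.0  # metres per second, dry air at ~20 degrees C

def direct_and_first_reflections(room, src, rcv, sample_rate=44100):
    """Return delays (in samples) of the direct path and the six first-order
    image-source reflections in a shoebox room of dimensions (lx, ly, lz).
    Geometric part only; wave effects need a waveguide model."""
    images = [src]
    for axis, length in enumerate(room):
        for wall in (0.0, length):
            img = list(src)
            img[axis] = 2.0 * wall - src[axis]   # mirror the source across the wall
            images.append(tuple(img))
    return [math.dist(img, rcv) / SPEED_OF_SOUND * sample_rate for img in images]

delays = direct_and_first_reflections((5.0, 4.0, 3.0), (1.0, 1.0, 1.0), (4.0, 3.0, 1.5))
```

Geometric methods like this are fast and accurate at high frequencies but miss diffraction and room modes, which is why Wayverb pairs them with a wave-based model for the low end.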
Siren’s back-end, which interfaces with GHC, is built using Node.js, and the front end is implemented in React. For the academic community I have published a conference paper at the 2017 International Computer Music Conference (ICMC), titled ‘Siren: Hierarchical Composition Interface’. In addition to the conference proceedings, Siren has been featured in Push Turn Move, a recently crowd-funded book on electronic music instruments, alongside its predecessors such as SuperCollider, Pure Data and TidalCycles.
Prof Miller Puckette (University of California San Diego, USA)
22–27 March 2018
Guest residency & Composition Masterclass, Creative Coding Lab
Prof Hans Tutschku (Harvard University, USA)
6 November 2017
Technology: The expressive extension of my artistic sensibility
Prof Miller Puckette (University of California San Diego, USA)
28–31 March 2017
Roles for scores for electronic music; and
Designing electronic music instruments
Dr Sam Pluta (University of Chicago, USA)
22 November 2016
Improvisation and Openness: Workshop, in association with hcmf// 2016