08:30 - 09:20 REGISTRATION / COFFEE  
09:20 - 09:30 WELCOME  
09:30 - 10:40 INVITED TALKS  
  Alessandro Palladini, Music Tribe Intelligent Audio Machines
  Ben Supper, ROLI Ever more connections: AI in service of the learning musician
10:40 - 11:00 COFFEE BREAK  
11:00 - 12:10 INVITED TALKS  
  Lauren Ward, University of Salford Integrating Expert Knowledge into Intelligent and Interactive Systems
  Thomas Lund, Genelec On Human Perceptual Bandwidth and Slow Listening
12:10 - 13:10 LUNCH  
12:50 - 15:00 POSTERS AND DEMOS AT OA7/29  
  Brecht De Man, Nick Jillings and Ryan Stables, Birmingham City University Comparing stage metaphor interfaces as a controller for stereo position and level
  Will Gale and Jonathan Wakefield, University of Huddersfield Investigating the use of Virtual Reality to Solve the Underlying Problems with the 3D Stage Paradigm
  David Moffat, Florian Thalmann, Mark B. Sandler, Queen Mary University of London Towards a Semantic Web Representation and Application of Audio Mixing Rules
  Hugh O’Dwyer, Enda Bates and Francis M. Boland, Trinity College Dublin A Machine Learning Approach to Sound Source Elevation Detection in Adverse Environments
  Dale Johnson and Hyunkook Lee, University of Huddersfield Perceptually Optimised Virtual Acoustics 
  Sean McGrath, Manchester Metropolitan University User Experience Design for Interactive Music Production Tools
  Dominic Ward, Russell D. Mason, Ryan Chungeun Kim, Fabian-Robert Stöter, Antoine Liutkus and Mark D. Plumbley, University of Surrey, Inria and LIRMM, University of Montpellier SiSEC 2018: State of the Art in Musical Audio Source Separation - Subjective Selection of the Best Algorithm 
  Andrew Parker and Steve Fenton, University of Huddersfield Real-Time System for the Measurement of Perceived Punch
  Nikita Goddard and Hyunkook Lee, University of Huddersfield MARRS for the Web: A Microphone Array Recording and Reproduction Simulator developed using the Web Audio API
  Ana Monte, DELTA Soundworks The Stanford Virtual Heart
  Justin Paterson, University of West London VariPlay: The Interactive Album App
  Jonathan Wakefield, Christopher Dewey and Matthew Tindall, University of Huddersfield Grid Based Stage Paradigm for 'Flat Mix' Production
  Jonathan Wakefield, Christopher Dewey and Matthew Tindall, University of Huddersfield KBDJ: MIDI Keyboard Defined DJ Performance System
  Jonathan Wakefield, Christopher Dewey and Will Gale, University of Huddersfield LAMI: Leap Motion Based Audio Mixing Interface

13:50 - 14:20 and 14:30 - 15:00 DEMO SESSIONS AT SEPARATE LOCATIONS (you will receive an email to sign up after registration)
  Richard J. Hughes, James Woodcock, Jon Francombe and Kristian Hentschel (at OA7/26) The Vostok-K Incident – an immersive audio drama for ad hoc arrays of media devices
  Augustine Leuder (at SPIRAL, Music Department) Holomorph (interactive 3D audio demo)
  Richard Garett (at APL, Technology Block) Bubbles: an object-oriented approach to object-based sound for spatial composition and beyond (multichannel 3D audio demo)
  Thomas Lund (at OA7/27) Reference monitoring for stereo and immersive
15:00 - 15:20  COFFEE BREAK  
15:20 - 16:30  INVITED TALKS  
  Duncan Williams, University of York Biophysiological signals as audio meters and control signals
  Amy Beeston, University of Sheffield Unmaking acoustics: Bio-inspired sound information retrieval for an audio-driven artwork
16:30 - 17:30 PANEL DISCUSSION
  Topic: User-Centric Design of Intelligent Music Technology
  Panellists:
   - Florian Camerer (ORF)
   - Mirek Stiles (Abbey Road Studios)
   - Ana Monte (DELTA Soundworks)
   - Olga FitzRoy (freelance recording/mix engineer)
   - Jon Burton (freelance live sound engineer)

17:30 END  
17:40 - RECEPTION