Sonification of IceCube Data

The sonification of IceCube data sets has been developed with the goal of working towards a new form of instrumentation for the identification of event types. Sonification has vast potential both as a stand-alone method of analysis and as an augmentation of a visual system. This potential rests on the perceptual advantages of hearing, which is well suited to identification problems that demand both speed and accuracy, as is the case with IceCube events.
The sonification tool, known as the IceCube Event Player, was created using SuperCollider, an environment and programming language for real-time audio synthesis and algorithmic composition. SuperCollider is open source and available on Mac, PC, and Linux.
The strategy behind the IceCube Event Player is based on granular synthesis. Granular synthesis[1] is a technique based on the production of a high density of small acoustic events, called grains, each less than 50 ms in duration. This technique is well suited to IceCube events, as each grain can be mapped to the light detected in one of the DOMs in the IceCube array (see IceCube Display for details). The design of the software can be broken down into three elements: data analysis, sound mapping/synthesis, and spatialization.
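The grain-per-DOM mapping described above can be sketched in a few lines. This is an illustrative stand-in only (the actual tool is written in SuperCollider): the hit fields (time, charge, normalized depth) and the depth-to-pitch mapping are assumptions, not the tool's real parameters.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def grain(freq, amp, dur):
    """A single grain: a sine burst under a Hann envelope (< 50 ms)."""
    n = int(dur * SR)
    t = np.arange(n) / SR
    return amp * np.hanning(n) * np.sin(2 * np.pi * freq * t)

def render_event(hits, total_dur=2.0):
    """Sum one grain per DOM hit into an output buffer.
    Each hit is (time_s, charge, depth_norm); depth drives pitch and
    charge drives loudness -- an illustrative mapping only."""
    out = np.zeros(int(total_dur * SR))
    for time_s, charge, depth_norm in hits:
        freq = 200 + 1800 * (1 - depth_norm)     # shallower DOMs -> higher pitch
        g = grain(freq, min(charge, 1.0), 0.03)  # 30 ms grain
        start = int(time_s * SR)
        end = min(start + len(g), len(out))
        out[start:end] += g[: end - start]
    return out
```

A dense cluster of hits thus becomes a dense cloud of overlapping grains, which is the defining texture of granular synthesis.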

Data Analysis
Similar to other computer vision techniques, data analysis involves filtering, scaling, and transforming data sets so that meaningful data is preserved while unwanted data is removed. Much of this happens before the data reaches the IceCube Event Player; to be heard, the data is then scaled, normalized, and transposed so that it can be mapped and interpreted.
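The normalization step can be sketched as follows; this is a generic min-max rescaling, offered as an assumption about the kind of preparation involved rather than the Event Player's actual code.

```python
import numpy as np

def prepare(values, lo=0.0, hi=1.0):
    """Rescale raw detector values into [lo, hi] so they can later be
    mapped onto audible parameter ranges. Illustrative only."""
    v = np.asarray(values, dtype=float)
    vmin, vmax = v.min(), v.max()
    if vmax == vmin:  # constant input: return the midpoint everywhere
        return np.full_like(v, (lo + hi) / 2)
    return lo + (hi - lo) * (v - vmin) / (vmax - vmin)
```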

Sound Mapping/Synthesis
In the sound mapping and synthesis stage, data is interpreted by mapping data types to sound events. There are six parameters to consider: frequency, amplitude, timbre, speed, duration, and spatial location. Frequency and amplitude are ranges within human hearing that the user can adjust to their preference. Timbre is determined by the waveform and envelope of the grain. Speed and duration determine the speed of playback and the duration of each grain (controlling overlap).
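A minimal sketch of such a parameter mapping is shown below. The specific ranges and the exponential frequency curve are assumptions (pitch perception is roughly logarithmic, so exponential mapping is a common choice), not the Event Player's documented defaults.

```python
def map_exp(x, out_lo, out_hi):
    """Map a normalized value x in [0, 1] exponentially into [out_lo, out_hi].
    Exponential mapping suits frequency, since pitch perception is logarithmic."""
    return out_lo * (out_hi / out_lo) ** x

def grain_params(x_freq, x_amp, freq_range=(100.0, 4000.0), amp_range=(0.05, 0.8)):
    """Derive per-grain frequency and amplitude from normalized data values.
    Ranges are user-adjustable, per the text; the values here are placeholders."""
    return {
        "freq": map_exp(x_freq, *freq_range),
        "amp": amp_range[0] + x_amp * (amp_range[1] - amp_range[0]),
    }
```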
The spatialization strategy attempts to place each grain in space as if it were emanating from its DOM in a scaled model of the IceCube detector positioned in front of the listener. This mapping mirrors the IceCube Display, for example, in order to create a direct correlation between the sonic and visual representations of IceCube events. In terms of speaker set-up, several arrangements are possible. The one we use most often is a four-channel system in a square configuration in front of the listener, allowing left-right (azimuth) and up-down (elevation) panning. This uses an ambisonic panning technique, coupled with lowpass filtering to create a depth coordinate on the z-plane.
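The quad panning and depth cue can be sketched with a simple equal-power panner. This is a stand-in for the ambisonic panner described above, and the near/far cutoff frequencies are illustrative assumptions.

```python
import math

def quad_gains(az, el):
    """Equal-power gains for a 2x2 speaker square in front of the listener.
    az, el in [0, 1]: 0 = left/bottom, 1 = right/top."""
    a = az * math.pi / 2
    e = el * math.pi / 2
    left, right = math.cos(a), math.sin(a)
    low, high = math.cos(e), math.sin(e)
    return {
        "top_left": left * high, "top_right": right * high,
        "bottom_left": left * low, "bottom_right": right * low,
    }

def depth_cutoff(z, near=8000.0, far=500.0):
    """Lowpass cutoff (Hz) as a crude depth cue: more distant grains
    (z -> 1) sound duller. The near/far values are placeholders."""
    return near + z * (far - near)
```

Equal-power panning keeps the summed power constant as a grain moves across the square, so loudness does not dip between speakers.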

Project Credits

Mark-David Hosale, n-D::StudioLab, Digital Media, School of the Arts, Media, Performance & Design, York University

Jim Madsen, Associate Director for Education and Outreach, IceCube Collaboration; Professor and Chair of Physics, University of Wisconsin–River Falls



[1] Roads, Curtis. 2001. Microsound. Cambridge, Mass.: MIT Press.


York University • School of the Arts, Media, Performance, and Design • Toronto, Canada
this site and its contents © 2024 Mark-David Hosale