
Exploring Sonification: Representing Data With Sound




Paper authored by Cheryl Fogle-Hatch, Ph.D., and submitted to the MuseWeb Conference, April 3-5, 2023, Washington, D.C.

Abstract

The Georgia Tech sonification lab defines sonification as representing data with nonspeech audio. Sonification is an effective communication method because the auditory system excels at pattern recognition and distinguishing signals from noise. Sonification creates an alternate way to engage with complex visual data. 

This paper discusses the potential for sonification as a storytelling technique, including design considerations for creating sonification in the form of musical composition or soundscapes. It then describes tools that can automate the production of sonification, making it more widely available to people who are not musicians or sound designers. Finally, the paper concludes with an all-too-brief discussion of the rare instances where sonification has been used in museums.

Introduction

The Georgia Tech sonification lab defines sonification as representing data with nonspeech audio. This communication technique creates an alternate way to engage with complex visual data for people who may not feel confident in their ability to interpret complex charts and graphs (Sawe et al., 2020). Sonification also enables access for people who are blind or have low vision (Levy and Lahav, 2011; Noel-Storr and Willebrands, 2022; Zanella et al., 2022). When the Museum of Science added sonification to prototypes of digital interactives to make them accessible for visitors who are blind, their evaluators reported that half of the visitors who are sighted said the audio helped them to understand the exhibits (Malandain et al., 2020).

There is a growing community of scientists and educators using sonification for research and science communication. Lenzi et al. (2020) curate the Data Sonification Archive with links to data sonification on a diverse range of topics: Covid-19 statistics and other medical data; climate change and extreme weather events; astronomical data; and a quantum-driven robotic swarm! Another notable sonification project is Accessible Oceans hosted by the Woods Hole Oceanographic Institution.

Many sonification projects are in the field of astronomy because telescope observations produce large datasets that are multidimensional in time and space. Sonification has “the potential to enhance scientific discovery within complex datasets, by utilizing the inherent multidimensionality of sound and the ability of our hearing to filter signals from noise” (Zanella et al., 2022:1). The United Nations Office for Outer Space Affairs (UNOOSA, 2022) recorded video presentations by leaders of several sonification projects in astronomy.

This paper begins by describing the advantages of sonification: the human sense of hearing is fine-tuned for pattern recognition. The second section discusses the potential for sonification as a storytelling technique, including design considerations for creating sonification in the form of musical composition or soundscapes. The third section discusses tools that can automate the production of sonification, making it more widely available to people who are not musicians or sound designers. Finally, the paper concludes with an all-too-brief discussion of the rare instances where sonification has been used in museum contexts. The information presented here is intended to encourage the wider adoption of sonification in museum exhibits.

Advantages of Sonification 

Sonification is an effective communication method because the auditory system is well-suited to pattern detection and trend identification (Walker and Nees, 2011). For example, Loui et al. (2014) found that auditory perception is a more intuitive way to recognize brain waves associated with seizures than visually inspecting displays of electroencephalography (EEG) data.

Walker and Nees (2011:12) categorize the use-cases for sonification as:

  • Alarms, alerts, and warnings
  • Status, process, and monitoring messages
  • Data exploration
  • Art and entertainment

A blaring fire alarm is effective because loudness and repetition convey urgency. Medical monitoring equipment is an example of sonified status messages because it produces continuous sound while in operation. The use-cases classified as data exploration or art and entertainment demonstrate that the technique can express concepts and feelings that are more complex than the basic pattern recognition required to identify alarms or status messages.

Sonification as Pattern Recognition

Alarms and status messages are examples of sonification that is concerned with pattern recognition. Advertising jingles also rely on pattern recognition: through repetition, they become associated with a specific product. A specific sound may also be paired with a visual logo; for example, the MGM lion roars when it appears on screen (Eschner, 2017).

The inventions of radio and recorded sound allowed two broadcasting companies, the BBC and NBC, to create signature chimes that they played on air. The BBC chimes heard at the start of each hour were recorded from famous clocks in London (Parliamentary Archives, 2020): Big Ben rings the note E, and the four Westminster quarter bells ring G sharp, F sharp, E, and B. In the United States, three musical notes (G, E, C) were played on NBC radio to conclude news broadcasts from the 1920s until the late 1980s (Twenty Thousand Hertz, 2016).

Sonification as a Storytelling Technique

Sonification can enhance storytelling. Generations of children have learned about the individual instruments in an orchestra by listening to “Peter and the Wolf,” composed by Sergei Prokofiev in 1936. Peter is represented by the strings; other characters are highlighted with woodwinds, brass, and percussion.

Moving from the classical to the contemporary, scientists working with skilled musicians and sound designers use sonification as a storytelling technique. A sonification of the center of the Milky Way galaxy combines data from NASA’s Chandra X-ray Observatory, Hubble Space Telescope, and Spitzer Space Telescope. Each telescope reveals different phenomena and is represented by a different instrument. The piano was chosen for Spitzer, strings for Hubble, and the chimes of the glockenspiel for Chandra. Stars are converted to individual notes while the intensity of the light controls the volume.

Beyond expressing the wonder of a complicated image, the sonification of the center of the Milky Way demonstrates that sound can be used to explain scientific concepts. This sonification includes changes in pitch that correspond to different wavelengths in the electromagnetic spectrum: infrared, optical, and x-ray. Low notes represent infrared. Middle notes are in the optical range of the electromagnetic spectrum that is visible to the human eye. High notes represent x-ray wavelengths. The team of musicians and scientists used conventions of Western music theory to compose a piece of music that is both aesthetically pleasing and informative when the listener is shown how to interpret the composition.

Design Considerations for Sonification Using Musical Composition or Soundscapes

Design choices for sonification used to communicate information should be considered carefully because they affect the listener’s understanding of the data being sonified. In musical contexts, data can be mapped to sonic dimensions such as volume, pitch, tempo, timbre, and location in the stereo field (Sawe et al., 2020; Walker and Nees, 2011). A sound designer may choose acoustic or synthesized sounds and create a sonification in a variety of musical styles.
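
To make the idea of mapping concrete, the sketch below (written for this paper in Python, not taken from any of the cited projects) renders a short series of numbers as a sequence of sine tones: each value controls the pitch of one note while loudness is held constant. The 220-880 Hz range and quarter-second note length are arbitrary, illustrative choices.

    import math
    import struct
    import wave

    SAMPLE_RATE = 44100

    def map_range(x, lo, hi, new_lo, new_hi):
        # Linearly rescale x from [lo, hi] into [new_lo, new_hi].
        return new_lo + (x - lo) * (new_hi - new_lo) / (hi - lo)

    def sonify(values, out_path="sonification.wav", note_seconds=0.25):
        lo, hi = min(values), max(values)
        frames = bytearray()
        for v in values:
            freq = map_range(v, lo, hi, 220.0, 880.0)  # higher value -> higher pitch
            for i in range(int(SAMPLE_RATE * note_seconds)):
                sample = 0.5 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
                frames += struct.pack("<h", int(sample * 32767))
        with wave.open(out_path, "wb") as f:
            f.setnchannels(1)            # mono
            f.setsampwidth(2)            # 16-bit samples
            f.setframerate(SAMPLE_RATE)
            f.writeframes(bytes(frames))

    # A rising-then-falling trend becomes a rising-then-falling melody.
    sonify([3, 5, 8, 12, 9, 6, 4, 2])

Mapping a second data series to loudness or to stereo position instead of (or in addition to) pitch is exactly the kind of design decision discussed above.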

Choosing a major or a minor key alters the mood of the musical piece, influencing the way the listener interprets the data being sonified. For example, Sawe et al. (2020) created a sonification of changes in the frequency of tree species in the Alaskan forest through time. They used a D minor scale to express the falling numbers of the yellow cedar, evoking sadness about climate change. An alternative choice that they did not make would have been to represent the rising number of western hemlock trees in a major key, evoking a mood of happiness. However, this would have sent an ineffective message because the western hemlock is adapted to warmer temperatures and is moving northward due to global warming, the same phenomenon that is causing yellow cedar numbers to decline.
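
As a hedged illustration of how a key choice can be built into a mapping (this is not the procedure Sawe et al. describe, just one plausible approach), the snippet below quantizes each data value to the nearest degree of a D natural minor scale, so that a declining series is heard as a descending minor-key melody. The scale, octave range, and example counts are invented for illustration.

    # MIDI note numbers for two octaves of D natural minor, starting at D4 (62).
    D_MINOR = [62, 64, 65, 67, 69, 70, 72, 74, 76, 77, 79, 81, 82, 84]

    def to_scale(values, scale=D_MINOR):
        lo, hi = min(values), max(values)
        notes = []
        for v in values:
            # Rescale the value to a position along the scale, then snap to a degree.
            idx = round((v - lo) / (hi - lo) * (len(scale) - 1))
            notes.append(scale[idx])
        return notes

    def midi_to_freq(note):
        # Equal-tempered tuning with A4 (MIDI note 69) at 440 Hz.
        return 440.0 * 2 ** ((note - 69) / 12)

    # Declining tree counts map onto descending degrees of the minor scale.
    counts = [120, 110, 95, 80, 60, 45, 30]
    print([round(midi_to_freq(n), 1) for n in to_scale(counts)])

The resulting frequencies could be fed to a tone generator such as the one sketched above to render the melody as audio, keeping the whole sonification in key.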

In one study (Zhang et al., 2022), sound designers composed sonifications using Pro Tools, Logic Pro, or GarageBand. Ambient sounds that are specific to the project can be added to these compositions. Public domain audio files are available in online databases such as Freesound and BBC Sound Effects.

Tools That Automate the Production of Sonification

Researchers need to analyze large datasets, and they may not have the time to create the custom-made sonifications that are effective for public outreach. Automated tools are suitable for sonifying large datasets and for running the many queries necessary for data analysis.


Audio graphs can be made with rising pitch indicating higher numbers. For example, Astronify is a software package developed by the Space Telescope Science Institute that uses the Python programming language to turn telescope observations into sound. After installing Astronify, users download astronomical data and execute a command to render it as a sonification.

Astronify can help listeners detect exoplanets as they orbit stars outside of our solar system. Visually, the light from a star is interrupted when an exoplanet moves between its star and the telescope observing it; astronomers call this a transit. Sonically, Astronify represents time as a constant drone, observations with changes in pitch, and the transit as an interruption to the sounds for time and pitch.
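
The sketch below shows roughly what working with Astronify looks like. The light curve here is synthetic, invented to illustrate a single transit; real observations would be downloaded from an archive such as MAST, and attribute names such as note_spacing follow the SoniSeries interface as documented and may differ between package versions.

    import numpy as np
    from astropy.table import Table
    from astronify.series import SoniSeries

    # A synthetic light curve: steady brightness with a short dip (the transit).
    time = np.arange(0, 200, dtype=float)
    flux = np.ones_like(time)
    flux[90:110] = 0.96

    lightcurve = Table({"time": time, "flux": flux})

    soni = SoniSeries(lightcurve)   # maps flux values to pitch over a steady time grid
    soni.note_spacing = 0.02        # seconds between successive notes
    soni.sonify()                   # compute a pitch for each observation
    soni.play()                     # audition it; the transit is heard as a dip
    soni.write("transit.wav")       # or save the result as a WAV file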

Sonification can be rendered on websites using the SAS Graphics Accelerator or the IMAGE browser extension from McGill University, or data can be loaded into a web-based tool called Highcharts Sonification Studio.

The SAS Graphics Accelerator adds features to data visualizations, including text descriptions, tabular data, and interactive sonification. It is an extension for Chrome that works with various data analysis tools developed by SAS (the Statistical Analysis System company). Using the SAS Graphics Accelerator is not intuitive because it requires prior knowledge of SAS products.

IMAGE is a browser extension that sends a selected graphic to the IMAGE server, where it is rendered as a sonification with spatial audio (McGill University, 2022). The user can then navigate the sonification to explore objects in the graphic. IMAGE is a project of the Shared Reality Lab at McGill University in Montreal, Canada.

Highcharts Sonification Studio is a partnership between Highcharts and the Sonification Lab at the Georgia Institute of Technology. It is a “web-based charting and sonification technology” that can be used without having to write code and without prior sonification expertise.

Sonification in Museum Contexts

The examples of sonification described in this paper are part of a “broader research endeavor in which data, sonification and design converge to explore the potential of sound in complementing other modes of representation and broadening the publics of data. With visualization still being one of the prominent forms of data transformation, we believe that sound can both enrich the experience of data and build new publics” (Lenzi et al., 2020). One audience for sonification of data is people who may lack “disciplinary expertise” in a particular scientific field (Woods Hole Oceanographic Institution, n.d.). Another audience comprises people who are blind or have low vision. Sonification is a technique that can increase visitor engagement with museum exhibits.

Using Sonification with Audio Description

Sonification can be combined with text-based audio description just as visual graphs incorporate text-based labels. For example, Siu et al. (2022) automatically generated sonification and audio descriptions for time-series data typically displayed as line graphs on websites of news media outlets or in other sources of online information. They created an audio data narrative using sonification for trend lines and a synthesized voice for common text labels on graphs, such as time periods (month and year) and rates (10%, 50%, etc.). Sonification gives the listener first-hand knowledge of the data trends, and the text labels provided by audio description are equivalent to print labels accompanying visual graphs.
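
A rough sketch of that pattern is shown below: a spoken label is rendered with the pyttsx3 text-to-speech package, and the trend segment is rendered as sine tones, producing a pair of audio files that a web page or exhibit could play back to back. The example data, file names, and label wording are invented for illustration; the pipeline of Siu et al. is more sophisticated than this.

    import math
    import struct
    import wave
    import pyttsx3

    SAMPLE_RATE = 44100

    def write_trend(values, path, note_seconds=0.2):
        # Render each value as a short sine tone between 220 and 880 Hz.
        lo, hi = min(values), max(values)
        frames = bytearray()
        for v in values:
            freq = 220.0 + (v - lo) / (hi - lo) * 660.0
            for i in range(int(SAMPLE_RATE * note_seconds)):
                s = 0.5 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
                frames += struct.pack("<h", int(s * 32767))
        with wave.open(path, "wb") as f:
            f.setnchannels(1)
            f.setsampwidth(2)
            f.setframerate(SAMPLE_RATE)
            f.writeframes(bytes(frames))

    def write_label(text, path):
        # Synthesize the spoken label; the audio format depends on the platform.
        engine = pyttsx3.init()
        engine.save_to_file(text, path)
        engine.runAndWait()

    # The narrative alternates a spoken label with the sonified trend it introduces.
    write_label("Monthly values, January to June, rising from 4 to 10.", "01_label.wav")
    write_trend([4, 5, 6, 7, 9, 10], "02_trend.wav")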

The Museum of Science designed a prototype of an exhibit using sonification and audio description. The Wind Lab was a computer-based, multisensory interactive that let visitors explore data about the wind energy generated by the turbines on the roof of the building (Malandain et al., 2020). Tracing the line of a graph on a touch screen produced a tone that rose to indicate increased wind speed; pausing activated a verbal announcement of the number displayed at that point (O’Hara, 2014).

Sonification and audio description can be combined when automated tools are used to create the sonification. It may be more difficult to edit musical compositions or soundscapes to include audio description. For this reason, the Chandra photo album sonification collection includes text-based descriptions immediately below the embedded web player for each sonification.

Most of the sonification examples in this paper were designed by researchers or educators. With the exception of the Harvard/Smithsonian Center for Astrophysics and the Museum of Science, sonification is not common in museum contexts. By default, the sense of vision is the primary mode for presenting information in most museum exhibits because they are designed by people who are sighted. Explaining concepts in an auditory way is a learned skill that many people who are sighted do not have the opportunity to develop. The examples of sonification described in this paper are engaging, and it is hoped that they will encourage exhibit designers to include sonification in their future work. Ideally, data sonification would be integrated into exhibits that rely on visual displays of data. This would provide multisensory opportunities for everyone, and it would increase access for people who are blind.

References
