News and trends in the world of data science

Lucija Jusup

In the previous blog post about news in the world of data science, we wrote about the types of artificial intelligence and about Gato, a model developed by DeepMind that is presented as a model of artificial general intelligence. This time we will write about data sonification: we will explain what this process is and look at the latest achievements in the area.

Data sonification

From stock prices to changes in the Earth’s temperature, data that changes over time is most often represented visually, with charts and diagrams. What if, instead, we could listen to a drop in stock prices or the steady rise of global temperatures? Such a representation is made possible by data sonification, the process of converting data into sound waves.

Data sonification is the conversion of data into sound signals; we can think of it as the auditory equivalent of the well-known data visualization. In a typical pipeline, the digital values of a dataset are routed through a software synthesizer and a digital-to-analog converter to produce sound that humans can hear. An important target group for data sonification is the blind community: the ‘clicking’ sound heard at pedestrian traffic lights is an everyday example of how sonification helps blind and partially sighted people.

Figure 1. Pedestrian traffic lights are often accompanied by a device that emits two different sound signals, depending on whether the light is green or red
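To make that pipeline concrete, here is a minimal sketch of the simplest kind of sonification in Python: each value of a small, made-up time series is mapped to a pitch and rendered as a short sine tone in a WAV file. The data values, the pitch range and the file name are illustrative assumptions, not taken from any project mentioned here.

```python
# A minimal sketch of data sonification: map a data series to pitch
# and write the result as a WAV file. Data and ranges are made up.
import wave
import numpy as np

data = [12.0, 14.5, 13.2, 18.9, 21.4, 19.7, 25.3]  # hypothetical time series
rate = 44100          # audio sample rate in Hz
note_dur = 0.3        # seconds of sound per data point

# Linearly map the data range onto an audible pitch range (220-880 Hz).
lo, hi = min(data), max(data)
freqs = [220 + (v - lo) / (hi - lo) * (880 - 220) for v in data]

# Synthesize one sine tone per data point and concatenate them.
t = np.arange(int(rate * note_dur)) / rate
tones = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])

# Convert to 16-bit PCM and write a mono WAV file.
pcm = (tones * 32767 * 0.5).astype(np.int16)
with wave.open("sonified.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(rate)
    wav.writeframes(pcm.tobytes())
```

Rising values are heard as rising pitch, which is exactly the “listening to stock prices” idea from the introduction.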

Since humans interpret audio information faster than visual information, data sonification can also be useful in scientific analysis. The human ear can discern more levels of pitch than the human eye can discern levels of color. Another advantage of data sonification lies in multidimensional data analysis, where relationships between many different features are understood by mapping them to different properties of sound. Plotting data in ten or more dimensions at the same time is too complex, and interpreting such plots is very confusing; the same data can often be understood much more easily through sonification. As it turns out, the human ear can immediately distinguish the sound of a trumpet from that of a flute, even if both instruments play the same note (frequency) at the same volume and for the same duration.
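The multidimensional idea can be sketched in the same way: in the hypothetical example below, three features of each record are mapped to three independent properties of sound (pitch, loudness and duration), so a listener can follow all of them at once. The feature names and value ranges are assumptions made purely for illustration.

```python
# A sketch of multidimensional sonification: three hypothetical
# features per record drive three independent sound properties.
import wave
import numpy as np

records = [
    {"temp": 14.2, "pressure": 1012.0, "wind": 3.1},
    {"temp": 18.7, "pressure": 1004.0, "wind": 7.4},
    {"temp": 22.1, "pressure": 998.0,  "wind": 12.9},
]
rate = 44100

def scale(v, lo, hi, out_lo, out_hi):
    """Linearly rescale v from [lo, hi] to [out_lo, out_hi]."""
    return out_lo + (v - lo) / (hi - lo) * (out_hi - out_lo)

tones = []
for r in records:
    freq = scale(r["temp"], 10, 25, 220, 880)        # temperature -> pitch
    amp = scale(r["pressure"], 990, 1020, 0.2, 0.8)  # pressure -> loudness
    dur = scale(r["wind"], 0, 15, 0.2, 0.6)          # wind speed -> duration
    t = np.arange(int(rate * dur)) / rate
    tones.append(amp * np.sin(2 * np.pi * freq * t))

pcm = (np.concatenate(tones) * 32767).astype(np.int16)
with wave.open("multidim.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(rate)
    wav.writeframes(pcm.tobytes())
```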

Some of the earliest work on sonification was done more than 70 years ago by Hugo Benioff, an American seismologist and professor at the California Institute of Technology. He joined Caltech’s seismology laboratory and in 1932 invented a seismograph that recorded tectonic activity on a roll of paper; variants of Benioff’s original seismograph are used around the world to this day. In his free time, Benioff also built instruments, and in 1953 he released a series of audio recordings of earthquakes on one side of an LP called “Out of This World”.

Figure 2. The LP record “Out of This World”, containing audio recordings of earthquakes

Since the range of human hearing is roughly 20 Hz to 20 kHz, well above the frequency of many earthquake signals, Benioff recorded the earthquake data on magnetic tape and then simply sped the tape up to raise the pitch into the audible range. The resulting tapes allowed people to hear and experience the Earth in motion for the first time. This transformation of data by time-compressing it until its pitch rises into the range of human hearing is called audification, and it is one of several categories of sonification.
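Benioff’s sped-up tapes have a simple digital analogue: a signal sampled at a low rate can just be written out with a much higher playback sample rate, which compresses time and multiplies every frequency by the same factor. The sketch below does this with a synthetic 1 Hz “seismic” oscillation, an assumption standing in for real earthquake data.

```python
# Audification in the spirit of Benioff's sped-up tapes: declare a
# low-rate signal to have a high playback rate, raising every
# frequency by the same factor. The signal here is synthetic.
import wave
import numpy as np

data_rate = 100                      # original sampling rate (Hz)
speedup = 441                        # playback factor
play_rate = data_rate * speedup      # 44100 Hz, a standard audio rate

# Synthetic low-frequency signal: a decaying 1 Hz oscillation,
# far below the ~20 Hz lower limit of human hearing.
t = np.arange(0, 600, 1 / data_rate)             # 10 minutes of "data"
signal = np.exp(-t / 200) * np.sin(2 * np.pi * 1.0 * t)

# Played back at 44100 Hz, the 1 Hz oscillation becomes a 441 Hz tone
# and the 10 minutes of data collapse into about 1.4 seconds of audio.
pcm = (signal / np.max(np.abs(signal)) * 32767 * 0.5).astype(np.int16)
with wave.open("audified.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(play_rate)
    wav.writeframes(pcm.tobytes())
```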

The sound of a black hole

Data sonification is not a new technology; arguably, it dates back to 1908 and the invention of the Geiger counter. Recently, however, the sonification process has seen a revival thanks to several projects in the field of astronomy. NASA missions that explore the deepest parts of the universe and then convert the collected data into sound signals allow us to hear distant galaxies and experience various phenomena in space.

There is an entire library of NASA sonification projects, called “Universe of Sound”, where the listener can explore deep-space objects. NASA has sonified a distant galaxy, the discovery of an exoplanet, and the vibrations of the surface and interior of Mars known as marsquakes. One example is “Sounds of the Sun”, released in 2018 using data from the Solar and Heliospheric Observatory: the sound is sped up by a factor of 42,000 to fall within the range of human hearing, and how the Sun sounds can be heard in the video below.
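That factor of 42,000 is easy to sanity-check. Taking the Sun’s well-known “five-minute” pressure oscillations (about 3 mHz) as a representative frequency, an assumption of ours rather than a figure from NASA’s release, the speed-up lands the signal comfortably inside the audible range:

```python
# Quick check of the "Sounds of the Sun" speed-up factor. The ~3 mHz
# five-minute solar oscillation is our assumed input frequency.
solar_freq = 1 / 300          # one cycle per ~5 minutes, about 0.0033 Hz
speedup = 42_000

audible = solar_freq * speedup
print(f"{solar_freq * 1000:.2f} mHz -> {audible:.0f} Hz after speed-up")
# 3.33 mHz -> 140 Hz: comfortably inside the 20 Hz - 20 kHz hearing range
```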

The popular misconception that there is no sound in space stems from the fact that most of the universe is essentially a vacuum, providing no medium through which sound waves can travel. A galaxy cluster, on the other hand, contains large amounts of gas and dust enveloping the hundreds or even thousands of galaxies within it, and this gas can serve as a medium for propagating sound waves.

NASA’s latest achievement in data sonification is an audio recording of a black hole, or at least the closest thing yet to the sound of one. The black hole sits 250 million light-years away at the center of the Perseus cluster, which has been associated with sound since 2003, when astronomers discovered that the pressure waves sent out by the black hole cause ripples in the cluster’s hot gas that can be translated into a note, though one far too low for humans to hear. This is precisely what makes this sonification different from any previous one: it starts from actual sound waves detected in data from NASA’s Chandra X-ray Observatory. The resulting sound is quite eerie, and you can listen to what a black hole sounds like in the video.
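To bring that note into the audible range, NASA’s Chandra team described scaling the signals upward by 57 and 58 octaves. Since each octave doubles a frequency, the size of that shift is easy to compute:

```python
# Each octave doubles a frequency, so shifting a signal up by n
# octaves multiplies its frequency by 2**n. NASA's Chandra team
# described the Perseus sonification as a shift of 57 and 58 octaves.
for octaves in (57, 58):
    factor = 2 ** octaves
    print(f"{octaves} octaves -> x{factor:.2e}")
# 57 octaves -> x1.44e+17  (about 144 quadrillion times higher)
# 58 octaves -> x2.88e+17  (about 288 quadrillion times higher)
```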

References:

  1. https://en.wikipedia.org/wiki/Data_sonification
  2. https://science.howstuffworks.com/sonification.htm
  3. https://indianexpress.com/article/technology/science/nasa-black-hole-audio-perseus-m87-8107216/
  4. https://www.nasa.gov/feature/goddard/2018/sounds-of-the-sun
  5. https://chandra.harvard.edu/sound/index.html#tycho