Sonification of the world
We don’t live on the Earth. We are the Earth. Every cell, each protein of our bodies is an emergent property of the Earth’s cycles of light and darkness, of cold and warm seasons, of the gravitational pull of the Earth’s moon and the protective membrane of the Earth’s magnetic field. The Earth and we respire together, self-similar oscillators on multiple timescales—a complex system of cycles, quasi-cycles and chaos that appears tantalizingly similar to music in the dynamic and interactive way it unfolds over time.
How can we hear this music of our sphere? Mindful listening and taking a recording device out into the environment are a very good start—but that only scratches the surface. What about the vibrations that lie below or above the range of human hearing? Or the oscillations that are chemical or magnetic or electronic, rather than the variations in air pressure our ears are so exquisitely adapted to detect? And what about the cycles that occur on timescales longer than a single human lifetime?
In a kind of global recapitulation of neurogenesis, a distributed network of sensors has been slowly growing to enmesh, encircle, penetrate, and orbit the Earth. We are taking a global-scale, continuous Selfie, and streaming the data through our distributed networks of computing devices. This is what we should be listening to—data. That’s how we can hear the music of our sphere.
For electronic and computer musicians, turning data into sound is second nature. To us, sound is data (and vice versa). We take voltage variations from a microphone, sample them at regular intervals, and then start doing arithmetic on those numbers and call it digital signal processing. Even more exciting, we throw away the microphone and use a parameterized algorithm to generate a stream of data and call it digital sound synthesis. Or we use a gesture to generate a stream of numbers from an accelerometer, use it to modulate one of the parameters of a sound synthesis algorithm, and call it a new performance instrument.
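The "parameterized algorithm" idea can be made concrete in a few lines. Below is a minimal sketch (not Kyma code, just plain Python with the standard library) of a sine-wave synthesizer: a handful of parameters fully determine the stream of numbers we call digital sound. The function name and parameter names are my own, chosen for illustration.

```python
import math

def sine_synth(freq_hz, dur_s, sample_rate=44100, amplitude=0.5):
    """Generate a sine tone as a list of floats in [-1, 1].

    A parameterized algorithm in miniature: frequency, duration,
    amplitude, and sample rate fully determine the output stream.
    """
    n_samples = int(dur_s * sample_rate)
    return [amplitude * math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n_samples)]

# 0.1 seconds of A440 at 44.1 kHz: 4410 samples
tone = sine_synth(440.0, 0.1)
```

Changing `freq_hz` while the tone plays is exactly the kind of parameter modulation an accelerometer gesture could drive.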
Substitute data from the network of sensors surrounding the Earth for one of those streams in a computer music setup, and you turn sensor data into sound. By rendering a stream of non-audio data as time-varying air pressure variations (or using it to modulate a parameterized synthesis model), you can literally put your Ear to the Earth. And hear the music of the spheres.
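That mapping step is the heart of parameter-mapping sonification: scale each incoming data value into the range of a synthesis parameter. Here is a minimal sketch in Python; the `readings` list is hypothetical sensor data (imagine daily temperatures), and the 220–880 Hz pitch range is an arbitrary choice for illustration.

```python
def scale(value, in_min, in_max, out_min, out_max):
    """Linearly map a data value from its input range into a
    synthesis-parameter range (here, oscillator frequency in Hz)."""
    return out_min + (value - in_min) * (out_max - out_min) / (in_max - in_min)

# Hypothetical sensor readings, e.g. daily temperatures in degrees C
readings = [2.5, 4.0, 9.5, 14.0, 19.5, 23.0]

# Map each reading onto a two-octave pitch range, 220-880 Hz
lo, hi = min(readings), max(readings)
freqs = [scale(r, lo, hi, 220.0, 880.0) for r in readings]
```

Each frequency in `freqs` could then drive an oscillator (or any synthesis parameter), so a warming trend in the data becomes an audible rising contour.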
— January 12, 2015
The image above shows Carla Scaletti on October 4, 2014, conducting a workshop on Kyma in the meeting room of the Music Technology program at NYU in New York.
Carla Scaletti is a composer and a co-founder of Symbolic Sound, the company that developed Kyma, a widely used software sound design environment. She is also an Ear to the Earth artist.
Listen to excerpts from the music she composed to accompany choreographer Gilles Jobin's QUANTUM, a performance he created as an artist-in-residence at CERN, the world's largest particle physics laboratory, near Geneva, Switzerland.