Google is using artificial intelligence to synthesize sounds that no human has ever heard before, with the goal of expanding the musician’s toolkit and, by extension, our soundscape. The project, called NSynth, takes samples from real instruments, then analyzes and blends the mathematical characteristics of the notes they produce to create a brand-new “instrument”.
It’s not the same as layering the sound of one instrument atop that of another. Rather, the resulting sounds are unique hybrids of instruments we are already familiar with, such as the flute or the glockenspiel. Wired has published several samples of the brand-new sounds NSynth has created.
NSynth (short for neural synthesizer) was only announced in April. It’s the product of Google Magenta, a team of AI researchers whose stated aim is to generate music and art using machine learning and deep neural networks. Last year, Magenta debuted a 90-second piano melody, which was its first piece of algorithmic art. The team has since expanded its efforts to encompass creating entirely new sounds.
“Unlike a traditional synthesizer which generates audio from hand-designed components like oscillators and wavetables, NSynth uses deep neural networks to generate sounds at the level of individual samples,” the Magenta team explains in a blog post. It gives artists “intuitive control over timbre and dynamics and the ability to explore new sounds that would be difficult or impossible to produce with a hand-tuned synthesizer.”
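The core idea can be illustrated in miniature. In the sketch below (a toy stand-in, not the real NSynth model, whose decoder is a deep WaveNet-style network), each instrument note is represented as a latent embedding, a decoder maps embeddings to waveforms, and a new sound comes from blending embeddings before decoding. All names, dimensions, and the linear "decoder" here are illustrative assumptions.

```python
import numpy as np

# Toy sketch (NOT the real NSynth model): each note gets a latent
# embedding, and a decoder turns an embedding into audio one sample
# at a time. We use a random linear matrix as a stand-in decoder.

rng = np.random.default_rng(0)

EMBED_DIM = 16    # hypothetical embedding size
N_SAMPLES = 100   # hypothetical waveform length in samples

decoder = rng.standard_normal((N_SAMPLES, EMBED_DIM))  # stand-in decoder

flute_z = rng.standard_normal(EMBED_DIM)  # embedding of a flute note
glock_z = rng.standard_normal(EMBED_DIM)  # embedding of a glockenspiel note

def decode(z):
    """Map an embedding to a waveform (toy linear stand-in)."""
    return decoder @ z

# The NSynth idea in miniature: blend in latent space, then decode.
alpha = 0.5
hybrid = decode(alpha * flute_z + (1 - alpha) * glock_z)

# Because this toy decoder is linear, the latent blend collapses to a
# plain mix of the two waveforms -- exactly what NSynth avoids: its
# deep nonlinear decoder makes latent blends sound like genuinely new
# instruments rather than layered audio.
layered = alpha * decode(flute_z) + (1 - alpha) * decode(glock_z)
assert np.allclose(hybrid, layered)  # holds only for a linear decoder
```

The final assertion is the point of the sketch: with a linear decoder, blending embeddings is indistinguishable from layering sounds; the nonlinearity of a real neural decoder is what makes NSynth’s hybrids more than a simple mix.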
Magenta was inspired by DeepDream, a Google project that visualized what deep neural networks were learning, yielding the fascinating (and somewhat disturbing) images that were widely publicized back in 2015. Its success prompted Google to wonder, “Can we use machine learning to create compelling art and music? If so, how? If not, why not?” These are the questions Magenta has been tasked with answering.
And why not? Music critic Marc Weidenbaum tells Wired that blending instruments is an age-old practice that has been part and parcel of the evolution of music. “Artistically, it could yield some cool stuff, and because it’s Google, people will follow their lead,” he says.
So far, NSynth is working with a database of musical notes collected from about a thousand instruments, yielding countless hybrids of markedly different instruments, ranging from the flugelhorn to the bass and everything in between. On top of that, the Magenta team has built a two-dimensional interface that works with samples from four instruments at once, further pushing the boundaries of musical composition.
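One plausible way to picture that two-dimensional interface is as a square pad with one instrument at each corner: a point (x, y) on the pad bilinearly weights the four instruments’ embeddings, and the blended embedding would then be decoded to audio. The function name, shapes, and weighting scheme below are assumptions for illustration, not Magenta’s actual API.

```python
import numpy as np

def blend4(x, y, corners):
    """Bilinear mix of four corner embeddings; x, y in [0, 1].

    corners = (top_left, top_right, bottom_left, bottom_right),
    each a 1-D embedding vector of the same length.
    """
    tl, tr, bl, br = corners
    top = (1 - x) * tl + x * tr
    bottom = (1 - x) * bl + x * br
    return (1 - y) * top + y * bottom

rng = np.random.default_rng(1)
# Four hypothetical instrument embeddings, one per corner of the pad.
corners = [rng.standard_normal(16) for _ in range(4)]

# The pad's corners recover each instrument exactly...
assert np.allclose(blend4(0.0, 0.0, corners), corners[0])
assert np.allclose(blend4(1.0, 1.0, corners), corners[3])

# ...and the center is an equal four-way blend.
center = blend4(0.5, 0.5, corners)
assert np.allclose(center, sum(corners) / 4)
```

Dragging a point across such a pad would sweep continuously through the space between the four source instruments, which is what makes the interface a composition tool rather than a preset picker.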
Google hopes to build on Magenta’s work to one day create an open-source community of artists and machine learning researchers. Magenta’s open-source infrastructure is built around TensorFlow, and its code is published on GitHub to help artists connect with machine learning models. In the meantime, you can check out a live demo of NSynth at the upcoming Moogfest, an annual music and arts festival held in Durham, North Carolina.