Technology has long played a role in creating new types of sound – from Jimi Hendrix's guitar effects to drum machines and Jean-Michel Jarre's synthesisers. What Google seems to be bringing to the party is the idea that advances in machine learning and neural networks can open up new possibilities for sound generation…
And, as is increasingly the way with Google’s new prototype systems and initiatives, there is a Raspberry Pi at the heart of the system.
You can select instruments, effects and controls, and play MIDI files through the device, manually changing the tones as you wish. Check out the video below of the Open NSynth Super in action.
The Gadget Master behind the device – only a “small community of musicians” have received the prototype – is Andrew Back, and you can read about his work on RS Components’ Design Spark.
It’s a serious bit of work and it’s fully documented, with pictures, documentation and links.
Why is it “Open”? It’s built using open source libraries, including TensorFlow and openFrameworks. Also, the PCB design, microcontroller firmware, software and enclosure design are all published under open source licences, which means that anyone is free to build their own Open NSynth Super.
Basically the system includes four rotary dials for selecting the instruments that are assigned to the corners of the interface, and a high-contrast OLED display to show the state of the instrument.
There are also six ‘Fine Control’ dials along the bottom of the device, to further customise the audio output:
- ‘Position’ sets the initial position of the wave, allowing you to cut out the attack of a waveform, or to start from the tail of a newly created sound.
- ‘Attack’ controls the time taken for the initial run-up of level from nil to peak.
- ‘Decay’ controls the time taken for the subsequent run down from the attack level to the designated sustain level.
- ‘Sustain’ sets the level during the main sequence of the sound’s duration, until the key is released.
- ‘Release’ controls the time taken for the level to decay from the sustain level to zero after the key is released.
- ‘Volume’ adjusts the overall output volume of the device.
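The position, attack, decay, sustain and release dials above describe a classic ADSR amplitude envelope. As a rough illustration (this is a generic linear ADSR sketch, not code from the NSynth Super firmware – all parameter values are invented for the example):

```python
def held_level(t, attack, decay, sustain):
    """Amplitude while the key is held: attack, then decay, then sustain."""
    if t < attack:                                  # run-up from nil to peak
        return t / attack
    if t < attack + decay:                          # fall from peak to sustain level
        return 1.0 - (t - attack) / decay * (1.0 - sustain)
    return sustain                                  # hold until the key is released

def adsr(t, note_off, attack=0.05, decay=0.10, sustain=0.7, release=0.25):
    """Envelope value at time t (seconds) for a key released at note_off."""
    if t < note_off:
        return held_level(t, attack, decay, sustain)
    # After release: fade from the level reached at note_off down to zero.
    start = held_level(note_off, attack, decay, sustain)
    return max(0.0, start * (1.0 - (t - note_off) / release))
```

Multiplying each audio sample by `adsr(t, note_off)` shapes the note the way the dials describe: a longer ‘Attack’ stretches the initial ramp, ‘Sustain’ sets the held level, and ‘Release’ sets how long the sound lingers after the key is let go.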
Finally, for a touch interface there is a capacitive sensor, like the touchpad on a laptop, for “exploring the world of new sounds that NSynth has generated between your chosen source audio”.
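The idea behind the touch surface is that the four source sounds sit at the corners of the pad, and the touch position blends between them. NSynth itself does this interpolation in the learned embedding space of its neural network and renders the audio ahead of time; the sketch below only shows the bilinear weighting that a touch position implies (function names and the two-element vectors are invented for illustration):

```python
def corner_weights(x, y):
    """Bilinear weights for the four corners, with x and y in [0, 1]."""
    return {
        "nw": (1 - x) * (1 - y),
        "ne": x * (1 - y),
        "sw": (1 - x) * y,
        "se": x * y,
    }

def blend(embeddings, x, y):
    """Weighted mix of four equal-length per-corner embedding vectors."""
    w = corner_weights(x, y)
    n = len(embeddings["nw"])
    return [sum(w[c] * embeddings[c][i] for c in w) for i in range(n)]
```

A touch at a corner reproduces that corner’s sound alone, while a touch at the centre weights all four sources equally – which is why dragging a finger around the pad morphs smoothly between the chosen instruments.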
The main components used are:
- 1x Raspberry Pi 3 Model B (896-8660)
- 6x Alps RK09K Series Potentiometers (729-3603)
- 4x Bourns PEC11R-4315F-N0012 Rotary Encoders
- 2x Microchip AT42QT2120-XU Touch Controller ICs (899-6707)
- 1x STMicroelectronics STM32F030K6T6, 32-bit Arm Cortex-M0 microcontroller (829-4644)
- 1x TI PCM5122PW, 32-bit stereo audio DAC (814-3732)
- 1x Adafruit 1.3″ OLED display
See the full list on GitHub.
It’s stated that NSynth uses a “deep” neural network to learn the characteristics of sounds, and then create new sounds based on these characteristics.
NSynth Super is created by Magenta, a research project within Google that aims to explore how “machine learning tools can help artists create art and music in new ways”.
“As part of this exploration, they’ve created NSynth Super in collaboration with Google Creative Lab. It’s an open source experimental instrument which gives musicians the ability to make music using completely new sounds generated by the NSynth algorithm from 4 different source sounds. The experience prototype was shared with a small community of musicians to better understand how they might use it in their creative process.”