Think of all of the graphics technologies that were enabled by OpenGL/Direct3D/... and consider the equivalent for sound/music.
The trend is clearly going in the direction of application-specific processors, and a lot of hardware already has dedicated DSP chips for audio. It is high time that a standard language is proposed to access them in a unified way, just like OpenGL did.
Full disclosure: I worked with Jules and Cesare at ROLI/JUCE where Jules was my mentor and colleague. If anyone can pull this off, it’s them.
The PS2 had a timid attempt with OpenGL ES 1.0 + Cg, and some Nintendo models do support a subset, miniGL style, and that is about it.
Most studios kept writing engines with plugins for various 3D APIs.
> What is the licensing/business model?
> Our intention is to make SOUL entirely free and unencumbered for developers to use. All our public source code is permissively (ISC) licensed. We’re currently keeping some of our secret sauce closed-source, but the EULA allows use of it freely to encourage its adoption in 3rd party hardware and software. Ultimately, we plan to commercialise SOUL by licensing back-end drivers and other IP for use by vendors who are building SOUL-compatible hardware products.
How would it be possible to allay your concerns? I think we've tried to be as clear as possible with our intentions - how could we rule out such possibilities with our license agreement?
- you can compile Faust code to SOUL, and export it (as the .soul and .soulpatch files) from the Faust Web IDE https://faustide.grame.fr/ (or https://fausteditor.grame.fr for a simpler version)
- lower-level tools like "faust2soul" are part of the Faust distribution: https://github.com/grame-cncm/faust and https://github.com/grame-cncm/faust/tree/master-dev/architec...
- using Faust/SOUL on Bela and having SOUL as the intermediate language to JIT compile Faust code is certainly possible, but not the easiest way! Bela developer Giulio experimented with a more direct Faust JIT support on Bela (since Faust can directly generate LLVM IR code and JIT it) here: https://github.com/giuliomoro/bela-faust-jit, but this project is a bit frozen for lack of time to improve it. Feel free to bring it back to life.
"Faust (Functional Audio Stream) is a functional programming language for sound synthesis and audio processing with a strong focus on the design of synthesizers, musical instruments, audio effects, etc. Faust targets high-performance signal processing applications and audio plug-ins for a variety of platforms and standards."
"The core component of Faust is its compiler. It allows to "translate" any Faust digital signal processing (DSP) specification to a wide range of non-domain specific languages such as C++, C, JAVA, JavaScript, LLVM bit code, WebAssembly, etc. In this regard, Faust can be seen as an alternative to C++ but is much simpler and intuitive to learn."
Faust also has a more advanced syntax. SOUL basically looks like C, while Faust is more functional and designed from the ground up to represent graphs, so it can look pretty alien. In particular, partial application in Faust is great for graphs, but it also means there are 100 ways of writing the same program.
For those interested in audio-specific languages, I've had some success with Vult, a transpiled language which runs everywhere, compiles to Pure Data and runs on the Teensy. I used it to make some really powerful, extremely performant filters.
It's a language for writing the absolute lowest-level, close-to-the-metal, bit-twiddling realtime code, which you'd then glue together using a higher-level language like C++, JavaScript, Python, Lua, etc.
* Can I use SOUL to create freestanding interactive performance environments a la Pure Data?
* What prospects are there for beefing up hardware to run more DSP and bring down latencies? Are there, or will there be open-hardware DSP projects which run SOUL, or open-source runtimes which achieve high performance on standard issue CPUs? If the company behind SOUL goes under (Roli, I think), what happens?
* Could I write a program in SOUL that does real time beat detection given an audio signal, say a steady drum beat (to make things simple)?
SOUL does have a JIT compiler and you can live-code it, and you can certainly use it to write programs that generate musical patterns etc. So sure, the language could be used like that.
Our background and focus has always been more from the pro-audio side of things rather than performance, though, so the tools we've built so far aren't really targeted at performance use-cases, it's more about development of apps and plugins.
> What prospects are there for beefing up hardware to run more DSP and bring down latencies? Are there, or will there be open-hardware DSP projects which run SOUL, or open-source runtimes which achieve high performance on standard issue CPUs? If the company behind SOUL goes under (Roli, I think), what happens?
Right now we've not had the resources to get stuck into that side of things deeply yet - hoping to be able to do more on it this year. (And ROLI aren't going to go under, ha!)
> Could I write a program in SOUL that does real time beat detection given an audio signal, say a steady drum beat (to make things simple)?
Yep, totally the kind of thing we're expecting people to do with it.
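To give a flavour of what "real time beat detection on a steady drum beat" involves (independently of SOUL itself), here is a minimal C++ sketch of energy-based onset detection, about the simplest workable approach. The frame size, history length, and threshold factor are illustrative choices, not tuned values and not anything from the SOUL library.

```cpp
// Minimal energy-based onset detector: flags a beat when a frame's
// energy jumps well above the recent average energy.
#include <cassert>
#include <cstddef>
#include <deque>
#include <vector>

class OnsetDetector {
public:
    // Returns true if this frame of samples looks like an onset (beat).
    bool process(const std::vector<float>& frame) {
        double energy = 0.0;
        for (float s : frame) energy += double(s) * s;

        bool onset = false;
        if (history.size() == historyLen) {
            double avg = 0.0;
            for (double e : history) avg += e;
            avg /= historyLen;
            // Onset = energy clearly above the recent running average.
            onset = avg > 1e-9 && energy > thresholdFactor * avg;
        }
        history.push_back(energy);
        if (history.size() > historyLen) history.pop_front();
        return onset;
    }

private:
    static constexpr std::size_t historyLen = 43;   // ~1 s of 1024-sample frames at 44.1 kHz
    static constexpr double thresholdFactor = 3.0;  // "clearly above" = 3x the average
    std::deque<double> history;
};
```

A per-sample version of the same idea maps naturally onto SOUL's processing model, with the detected beats sent out of an event output.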
Well, that's the million-dollar question. We're imagining things moving (hopefully) along similar lines to how GPUs have developed, with dedicated processors for audio processing. It's pretty clear, though, that the current model of distributing binaries for the CPU is not magically going to enable hardware vendors to sell audio accelerator cards, which is why we feel a change of direction for the audio industry is required.
The way we see this being enabled would be a JIT based approach where the audio driver translates device independent code to run on their given hardware (and the 'soft' rendering on the CPU which is where we currently are for machines without dedicated audio accelerators).
The design of SOUL has this sort of support in mind: per-sample processing and the parallel structure of the DSP are still visible within the language, which allows the driver to make threading decisions at the JIT stage, enabling parallelism.
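To illustrate what "per-sample processing with the parallel structure still visible" means, here is a C++ sketch (not SOUL code) of a tiny graph with two independent branches feeding a sum. Because each node produces one sample per step, a runtime that can see this structure is free to schedule the branches on separate threads; the node types and gains here are made up for illustration.

```cpp
// Sketch of a per-sample graph: two independent gain nodes feed a mixer.
// Each node consumes/produces one sample per step, so the graph topology
// (and hence the available parallelism) stays visible to the scheduler.
#include <cassert>
#include <cmath>
#include <vector>

struct Gain {
    float gain;
    float process(float in) { return in * gain; }  // one sample per call
};

std::vector<float> renderGraph(const std::vector<float>& input) {
    Gain left  { 0.5f };   // branch 1: independent, could run on thread A
    Gain right { 0.25f };  // branch 2: independent, could run on thread B
    std::vector<float> out;
    out.reserve(input.size());
    for (float s : input)                         // the per-sample "advance" loop
        out.push_back(left.process(s) + right.process(s));
    return out;
}
```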
The binary release includes a build of the soul command for Bela, which is a very cool low latency board + OS for audio (https://bela.io/) based on xenomai. It's probably as close as we get to open hardware for DSP, and it puts you in the sub millisecond latency range.
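The sub-millisecond claim follows from simple buffer arithmetic: one period of N frames at sample rate R takes N / R seconds, and Bela's very small buffer sizes are what keep that figure tiny. A quick sketch (the specific frame counts below are illustrative, not Bela's exact defaults):

```cpp
// Back-of-envelope latency of one audio buffer period:
// N frames at sample rate R take N / R seconds.
#include <cassert>
#include <cmath>

double bufferLatencyMs(int frames, double sampleRate) {
    return 1000.0 * frames / sampleRate;
}
```

For example, a 16-frame buffer at 44.1 kHz is about 0.36 ms per period, while a typical desktop 512-frame buffer is about 11.6 ms, before counting converter and driver overhead on either side.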
It would be very nice to include this, as it would open up some useful patterns for analog modelling and ML runtimes.
I have a few questions that I hope are not stupid. I'll confess to not having delved into the heart of the SOUL source yet.
1) What are these "audio processors" that computers already have embedded in them? (DSP cores and associated extensions, or an MCU controlling the audio HW codec state machine and providing a buffer for the computer, or something else?)
2) Does SOUL engage with the audio subsystem hardware in any way, or perform any firmware-like activities? DMA engine configuration typically drives the audio engine I/O, right?
3) Does external audio DSP hardware like UAD typically pipe audio out and then back into the computer? Could such hardware be addressed if it weren't proprietary?
- Is time/frequency analysis and its analogs a first class citizen? (STFT, wavelets, other real-time spectral/cepstral algorithms)
- How do I embed the SOUL runtime into an application? Can SOUL scripts be used like user scripts in a larger application?
We've tried to make embedding it as easy as possible - there's a DLL, and a simple COM interface and a few header-only C++ classes to load and JIT-compile a patch dynamically. We have an example project showing how to do this. FWIW Tracktion Waveform is doing exactly this, and only has a few hundred lines of glue code to enable patches in a full-blown DAW.
Why COM (and is it actually COM, or is it COM-inspired like VST3)? It's a bit of a pain to interface with it from hosting languages (C is much more straightforward, particularly when worrying about C++ exception safety and garbage collection).
I have a (probably dumb) question. I've looked through the docs, and I can't see how you would play a sample back (i.e. to create a simple sampler, which is one of the projects I'm most interested in creating). Am I missing something, or is that something that wouldn't be present at that level, and you'd need to roll your own?
EDIT: I've just read the code for the simple piano, and I see that it implements the sample playback itself, so I guess that answers my own question!
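For anyone else wondering, the rolled-your-own approach in the piano example boils down to a very small amount of state: a read position into the sample buffer, advanced by a rate each output sample (a rate other than 1 repitches the sample). A C++ sketch of that core idea, hedged as an illustration rather than the example's actual code; a real sampler would add linear interpolation, looping, and envelopes:

```cpp
// Minimal sampler voice: a read position advanced by a playback rate.
// rate = 1.0 plays at original pitch; 2.0 plays an octave up.
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

class SamplerVoice {
public:
    SamplerVoice(std::vector<float> data, double playbackRate)
        : sample(std::move(data)), rate(playbackRate) {}

    // Produce one output sample; returns silence once the sample ends.
    float next() {
        std::size_t index = std::size_t(position);
        if (index >= sample.size()) return 0.0f;
        position += rate;
        return sample[index];  // nearest-sample lookup; interpolate for quality
    }

private:
    std::vector<float> sample;
    double position = 0.0;
    double rate;
};
```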
Thank you for addressing this. I was meaning to get into making VST plugins of my own, but currently, in order to do that, one has to jump through quite a few hoops, and often as the work grows in complexity, so does the required expertise.
One of the supported features of the SOUL system is exporting a JUCE project from a soulpatch - it cross-compiles to a C++ project supporting all the standard plugin formats, so you can get AU support on macOS and VST3 support on Windows out of that one project.
JUCE is one of Jules' previous projects and is pretty much the standard for building cross-platform audio plugins, used by commercial plugin vendors (https://juce.com/).
You can also browse the example code on GitHub: https://github.com/soul-lang/SOUL/tree/master/examples/patch...
Looks like an interesting project!
Would love to see an example with a GUI! My dream is to make a sample chopper I can use directly from Maschine.
Any way to do that in a Docker container, maybe a CI/CD pipeline where I can just push my code and get a VST as a build artifact?
What's the latest on using SOUL with platforms like Faust and Bela?
Faust has also had SOUL support for a while. Not sure what's new over there but maybe Stephane will see this and comment :)
On licensing, how will it work for creators of open source hardware? Will they still need to pay a licensing fee if they are targeting a specific device with closed driver IP? Or could there be an exception there?
Do you also see yourselves supporting open low-level hardware (RISC-V, FPGA, ...?), and in that case would you consider opening your driver IP for those targets?
So you're saying it should be technically feasible to live code/hot reload faust.dsp on Bela with no glitching?
OSC is more something you'd want to use for a stand-alone program, which needn't be written as a patch. But yes, would be nice to add some OSC wrappers one day. Maybe it's the kind of thing that someone else will contribute before we get around to it.
‘Please view on desktop/tablet.’
I was curious to see some code, but I guess that has to wait.
EDIT: Ah, I misunderstood you. You mean you'd found the examples, but the playground page didn't fit on a phone. Yeah, it's tricky to squeeze all the stuff needed into a small space! We're planning to do a huge redesign of the site in 2021 to turn it into a much more powerful developer portal, and we'll certainly look at other form factors in doing that.