Here's a video: https://www.youtube.com/watch?v=6efDQ9GmRpg
We're not pivoting to VSTs. It was simply a practical way of investigating several issues, it helps with the ongoing development of our upcoming Kickstarter-backed synth (Anyma Omega) and MPE controller (Loom), and it's a gift to thank our backers for the wait they had to go through due to several manufacturing and production issues.
I enjoy reading music-related entries here, so I thought I'd contribute this time; I hope it will interest some of you. I'm happy to answer any questions or remarks.
What do you see as setting your synths and hardware apart from, say, the Osmose and Hydrasynth?
If you don't mind me asking, what's running under the hood of your hardware? Big ARM cores / SoC? RTOS on a Cortex-M? What challenges have you faced working on whichever you're less used to? (The VST if you have more of a hardware background, the hardware if you have more of a desktop software background.)
The synth engine in the Anyma Phi runs on an STM32F4; the UI and MIDI routing run on a separate STM32F4. No RTOS: we find cooperative multitasking much easier to reason about, and easier to debug. So far we haven't had any latency/jitter issues with this approach, although it required writing some things (e.g. graphics) in a specific way. The Omega runs on a mix of Cortex-A7 and STM32.
I have a pure software background, but I came to appreciate the stability, predictability, and simplicity of embedded development: you have a single runtime environment to master and can use it fully, a Makefile is enough, and you have to be so careful with third-party code that you generally know how everything works end to end. The really annoying downside is the total amount of hair lost chasing bugs where it's hard to know whether the hardware or the software is at fault. In contrast, programming a cross-platform GUI is sometimes hell, and a VST has to deal with far more varied configurations than a hardware synth, so you're never sure which assumptions you can make. The first version of Anyma V crashed for many people, but we never reproduced a crash on the dozen machines we tested it on.
I'm mostly an embedded guy (usually much lower-power ST parts), so it's neat to hear how you approached it. Keeping the chips separate, so the audio can't underrun as easily when the UI needs to react, is a really nice design!
I see that a lot of your engine is modified from Mutable Instruments, but you do have a good selection of original sound sources as well. What sets yours apart? Did you have a strong background in DSP before Aodyo?
Oh my. So, how much processing load are you typically at now?
You know that your backers, judging from the KS comments, are (to put it mildly) not too convinced Aodyo will provide more than enough juice (!= JUCE) this time for chaining up enough modules while guaranteeing 16-note polyphony? And that with a multitimbral design?
(You might refer to your end-of-2023 update regarding the 4+1 core concept, which had to be changed, creating further delay, and so on.)
Any advice for someone on the product side looking to get into the synth development scene? I’m a designer and have so far partnered with a DSP developer on one project, a plugin for Reason based on Mutable Instruments’ Plaits (https://soundlabs.presteign.com), but haven’t really figured out where to go next.
- Akai EWI (one of the first MIDI wind instruments, and still well known and widely used)
- Roland Aerophone
- Berglund NuRAD
Have you tried them and where do you think they fall short?
The problem is that these instruments are not physically similar to the clarinet, for example regarding the system of keys and levers.
I hope some electronic instrument will make that jump.
That would allow clarinetists to practice silently anywhere, as well as seamlessly engage in electronic music and digital creation without having to change their muscle memory.
I’ve found the GUI the hardest part of VST development (but I'm not on a traditional C++/JUCE stack).
In 2019 I had an early version of the Anyma engine running on Dear ImGui. It was really fun, but it would have required too much effort to properly manage the audio/MIDI/plugin aspects in a cross-platform way, and the backends were incomplete at the time. JUCE was too much of a time saver to ignore for a team of 1.5.
I'm curious, if you don't use C++ and JUCE, what is your stack?
[0]https://gearspace.com/board/new-product-alert/1432677-aodyo-...
High regards!
Physical modelling is really fascinating... Currently testing this and it sounds good!
The UI is a little overwhelming though. But of course it's a difficult task to allow manipulating many parameters in a simple way. (Reason's modelling synth Objekt does a reasonably good job at that, I think).
Anyway, congrats! HN loves music, please post more! (A month ago I did a ShowHN for a "random" sequencer: https://billard.medusis.com [0]; it works well when connected to unusual sound generators such as this.)