A future archive of experimental software synthesizers
| Analog Bread and Butter | Schoko FM4 | Tuning Fork |
| --- | --- | --- |
| Demos on Samply, demos here on GitHub | Demos on Samply, demos here on GitHub | Demos here on GitHub |

| Unicorn Wave | Mini Clavier | FM4 (Puredata) |
| --- | --- | --- |
| Demos on Samply, demos here on GitHub | | |
Send me a link to your music if you are using one of these.
Since Csound and Puredata are interpreted languages (similar to Python or Lua, but without byte-code), running these synthesizers on your own hardware should be straightforward. Depending on how the synthesizer was built, you will need one of these programs installed:
- Puredata patches: PD-L2Ork – the Puredata version built for the Linux Laptop Orchestra.
  - PD-L2Ork: works best for all patches.
  - PD Vanilla: works with limitations (see `Puredata/README.md`).
  - Plugdata: should work in theory, but crashes.
- Csound patches: Cabbage (see the Csound instructions below).
- Start PD-L2Ork and make sure it connects to your audio hardware (via the audio settings found in Edit → Preferences).
- Make sure that the DSP switch is on (the green switch in the screenshot above).
- If you are using the Jack Audio Server (highly recommended on Linux, but also available on Windows and Mac), you need to manually connect your MIDI devices or automate this with a script.
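On Linux, such a script can use ALSA's `aconnect` tool (from the `alsa-utils` package). This is only a sketch: the port names `"MyKeyboard:0"` and `"Pure Data:0"` are placeholders, so run `aconnect -l` to find the real names on your system.

```shell
#!/bin/sh
# Sketch: automatically wire a MIDI keyboard into Pure Data via ALSA MIDI.
# The port names used below are placeholders - list your real ports with:
#     aconnect -l

connect_midi() {
    # Try to connect a MIDI source port to a destination port. Only warn
    # (instead of aborting the script) when the connection cannot be made,
    # e.g. because the device is not plugged in yet.
    aconnect "$1" "$2" 2>/dev/null \
        || echo "could not connect '$1' to '$2' (check aconnect -l)" >&2
}

# Typical usage once Pure Data is running:
connect_midi "MyKeyboard:0" "Pure Data:0"
```

Jack users can do the same with `jack_connect` for audio ports; the overall idea of listing ports first and then wiring them pairwise stays the same.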
- Last but not least, load the patch you are interested in into PD-L2Ork. Usually you want to run the `main.pd` file, which includes a full UI made with Puredata primitives. On headless devices you usually want `main1.pd`, which includes everything minus the UI, and for embedding in a custom application you may use `main2.pd`, which is the same minus the preset management.
Don't touch the mouse while you are playing on your MIDI devices when the Puredata UI is visible. Complex user interfaces like the one of the Unicorn Wave synthesizer draw considerable CPU power in Puredata. Oddly enough, you may get audio drop-outs just by moving the mouse cursor a few pixels. Other than that, if the audio doesn't stutter (i.e. your CPU can handle the workload), it should run stably.
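For headless devices, PD-L2Ork can be started from the command line without the GUI. The sketch below assumes the flag names follow Pd vanilla's conventions (`-nogui`, `-alsamidi`), and the patch path is an example; adjust both to your installation.

```shell
#!/bin/sh
# Sketch: start a synthesizer headless, e.g. on a Raspberry Pi without a
# display. Flag names follow Pd vanilla; the patch path is an example.

run_headless() {
    command -v pd-l2ork >/dev/null 2>&1 \
        || { echo "pd-l2ork not found in PATH" >&2; return 1; }
    # -nogui:    run without the graphical user interface
    # -alsamidi: use ALSA MIDI on Linux (Jack users connect ports manually)
    pd-l2ork -nogui -alsamidi "$1"
}

# Typical usage:
# run_headless /path/to/synthesizer/main1.pd
```

Skipping the GUI also sidesteps the mouse-related drop-outs described above, which is another reason to prefer `main1.pd` on small devices.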
- Start Cabbage and, when running it for the first time, go to Edit → Settings to configure your audio hardware. Unfortunately, Cabbage does not seem to support the Jack Audio Server yet, but on Linux the ALSA backend works very well.
- Load a Csound file and click the "play" button to run it. While the patch is running, the button turns into a "stop" button. The other buttons next to the file name allow you to open the UI window and the UI editor (don't use the latter, as it tends to destroy the source code).
- When loading another patch, you need to stop the previously running patch via the stop button. Otherwise both will play at the same time, responding to all MIDI events. Note that when you close a file, the patch might still be active in the background. To be sure, go to View → Show Cabbage Patcher to see which patches are active. There you can remove unneeded instances via the right-click context menu of their nodes.
- That's basically it. However, you might want to export a standalone executable and run that instead, if you want to save and load preset files. Unfortunately, this feature is not available when running the patches from within Cabbage itself. A standalone executable also makes it easier to auto-start a synthesizer when the computer boots. You find the export option under File → Export as Standalone Application. Place the exported file in a new, empty directory. There you will find the executable (targeted at your platform and computer architecture) and a copy of the `.csd` file. Always keep those two together! On Linux, Mac, Unix, etc. you need to manually set the executable flag for the executable file (`chmod +x`).
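The last step can be put into commands like this. Note that the binary name `TuningFork` and the directory are just placeholders; Cabbage names the executable after your patch.

```shell
#!/bin/sh
# Sketch: make an exported Cabbage standalone executable and run it from its
# own directory, so it finds the .csd file sitting next to it.
# The binary name and path in the usage example are placeholders.

run_standalone() {
    dir=$(dirname "$1")
    bin=$(basename "$1")
    chmod +x "$1" || return 1     # needed once on Linux/Mac/Unix
    ( cd "$dir" && "./$bin" )     # run from the directory with the .csd file
}

# Typical usage:
# run_standalone ~/synths/tuning-fork/TuningFork
```

Running the binary from its own directory matters because the executable and the `.csd` file must stay together, as noted above.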
This project is mainly a playground for me to learn cross-platform audio DSP programming with Puredata and Csound. The synthesizers are meant to be somewhat useful for keyboard players in music production or live performance.
Please don't expect something like the Arturia V Collection here. Commercial software synthesizers are built by specialist teams within months or years to accurately emulate the electrical circuit of yesteryear's hardware. I, on the other hand, am just a random person on the Internet happening to be a software developer and musician. Still I hope that the synthesizers I built for learning can be useful for other people, too.
Besides that, this is also a research project to evaluate the modern Android ecosystem (as of 2025) for real-time audio usage. Apple has been best in class for low-latency audio even with the very first iPhone, achieving 7ms latency with no problem. Android, in the early years, had a very basic audio server that - if I dare say so - did everything wrong that could be done wrong, starting with a blocking push model (similar to ancient Open Sound System on Unix) where applications call blocking functions to "push" blocks of audio to the hardware, rather than the hardware "pulling" new data when it is needed. The latter is a pre-condition for reliable low latency, no matter the type of system.
Fortunately, the situation has changed by now, and at least in theory any stock Android device should be low-latency capable. But is it really? And can languages like Csound and Puredata be used to rapidly build software synthesizers for mobile devices? This is what I want to find out.
© Dennis Schulmeister-Zimolong ([email protected])
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.