AlgoCompSynth-One v1.0.0 Released

Tags: algocompsynth, PyTorch, torchaudio, cuSignal, JupyterLab, Mambaforge, NVIDIA Jetson, differentiable digital signal processing

Release notes for v1.0.0


Author: M. Edward (Ed) Borasky

Published: May 11, 2022


After an intense bout of refactoring / dogfooding / soul searching, I have released AlgoCompSynth-One v1.0.0 for the Jetson platform.

What is it?

AlgoCompSynth-One is a collection of tools for composing music and synthesizing sound on systems with NVIDIA® GPUs. The current implementation is focused on the Jetson™ platform. However, a future release will run on Windows 11 with Windows Subsystem for Linux (WSL).

What can it do?

The current implementation creates a Mambaforge virtual environment containing:

  * PyTorch
  * torchaudio
  * cuSignal
  * JupyterLab

The above tools are all optimized for the Jetson platform, enabling a variety of digital signal processing and artificial intelligence applications, including the emerging field of differentiable digital signal processing.
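
If you want to verify the environment after installation, a minimal check like the following should work; this is my own sketch, not a script shipped with the release:

    # Sanity check: the GPU-enabled components import, and PyTorch
    # can see the Jetson's CUDA device.
    import torch
    import torchaudio
    import cusignal  # GPU-accelerated signal processing

    print("PyTorch:", torch.__version__)
    print("torchaudio:", torchaudio.__version__)
    print("CUDA available:", torch.cuda.is_available())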

The above capabilities are for the most part Python-based. As you probably know, I’m mostly an R programmer, so I’ve provided installers for the R Jupyter kernel, R packages that interface with Python, R package development tools, and R sound processing packages (Sueur 2018).

How does it work?

AlgoCompSynth-One is a collection of installers. This approach has a number of advantages.

The downside is that, especially on the Jetson Nano, the components that need to be compiled (torchaudio and cuSignal) take a fair amount of time to build. I have included log files of my builds on a 4 GB Jetson Nano, an 8 GB Xavier NX, and a 16 GB AGX Xavier, so you can get an idea of what to expect for build times.

The installers create a virtual environment inside the repository. The Python wheels downloaded or built are cached in AlgoCompSynth-One/JetPack/Wheels, and the install scripts will look for those first rather than doing a new build.

Who is it for?

At the moment, it’s primarily for developers with a Jetson Developer Kit. I am planning an upward-compatible version for Windows 11 with WSL. Target applications include neural waveshaping synthesis (Hayes, Saitis, and Fazekas 2021), synthesis via variational autoencoders (Caillon and Esling 2021), and WaveGrad (Chen et al. 2020).
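
To give a flavor of what differentiable digital signal processing means in practice, here is a toy PyTorch sketch of my own, not code from any of the cited papers: a tanh waveshaper whose drive parameter is learned by gradient descent, with gradients flowing through the DSP operation itself.

    import math
    import torch

    # Toy differentiable-DSP example: learn the drive of a tanh
    # waveshaper so its output matches a target timbre.
    # All parameter values here are arbitrary.
    sample_rate = 16000
    t = torch.arange(sample_rate) / sample_rate
    source = torch.sin(2 * math.pi * 220.0 * t)   # 220 Hz sine oscillator

    target = torch.tanh(5.0 * source)             # pretend "recorded" timbre
    gain = torch.tensor(0.1, requires_grad=True)  # learnable drive parameter

    optimizer = torch.optim.Adam([gain], lr=0.1)
    for step in range(200):
        optimizer.zero_grad()
        shaped = torch.tanh(gain * source)        # differentiable waveshaper
        loss = torch.mean((shaped - target) ** 2)
        loss.backward()                           # gradients flow through the DSP
        optimizer.step()

    print(f"learned drive: {gain.item():.2f} (target was 5.0)")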

How do I get started?

The short version is:

  1. Get a Jetson Developer Kit.

  2. Clone the AlgoCompSynth-One repository. Or download and unpack the release source tarball.

  3. At the terminal:

    cd AlgoCompSynth-One/JetPack
    ./00mambaforge.sh # sets up the Mambaforge package and environment manager
    ./05install.sh # installs the Python components
    ./10R-addons.sh # optional for R programmers
    ./20R-sound.sh # optional for R programmers

That will install everything. If you’re not an R programmer, there’s no reason for you to run 10R-addons.sh or 20R-sound.sh.

How do I test it?

cd AlgoCompSynth-One/JetPack
./start-jupyter-lab.sh

This will ask you to create a strong password, then start up a JupyterLab server listening on 0.0.0.0:8888. If you’re on the Jetson GUI, you can browse to localhost:8888 and log in with the password you created. If you’re on a different machine on the same local area network, browse to the.jetson.ip.address:8888.

Once you’re logged in, there will be a list of folders on the left. Open the Notebooks folder. You’ll see three notebooks, one for each Jetson RAM size (4 GB, 8 GB, and 16 GB).

These are copies of the cuSignal end-to-end test notebook, which exercises both cuSignal and PyTorch. The signal sizes are adjusted to fit in RAM. Pick the one that matches the RAM in your Jetson and run all the cells.
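
For a rough idea of the kind of pipeline those notebooks exercise, here is an illustrative sketch of my own, not the notebook code: cuSignal operates on GPU-resident CuPy arrays, which can be handed to PyTorch without a round trip through host memory.

    import cupy as cp
    import cusignal
    import torch
    from torch.utils.dlpack import from_dlpack

    # Synthetic GPU-resident signal; shrink n on a 4 GB Nano.
    n = 2 ** 20
    noisy = cp.random.randn(n, dtype=cp.float32)

    # Polyphase resampling on the GPU with cuSignal
    resampled = cusignal.resample_poly(noisy, up=2, down=3)

    # Zero-copy handoff to PyTorch via DLPack
    tensor = from_dlpack(resampled.toDlpack())
    print(tensor.shape, tensor.device)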

What about JetPack 5.0 DP and the Orin?

I have tested this with the 8 GB Xavier NX running JetPack 5.0 developer preview and everything works. In fact, it’s slightly better because JetPack 5.0 is based on Python 3.8 and CUDA 11.4, while JetPack 4.6.1 is frozen at Python 3.6 and CUDA 10.2.

I do not currently have the budget for an AGX Orin Developer Kit, but if you have one and run into issues with AlgoCompSynth-One on it, open an issue and I’ll try to help troubleshoot it.

Road map

As it was first conceived, AlgoCompSynth-One was a synthesizer. But there are quite a few microcontroller and single-board computer synthesizer options to choose from already. My personal favorites are the Bela, the Dirtywave M8 tracker, and the Electro-Smith Daisy.

The main issue with the previous versions of AlgoCompSynth-One was that the defining feature of the Jetson hardware, the NVIDIA GPU, was only being used as an option for doing FFTs in Csound. The rest of it would run comfortably on a Raspberry Pi or BeagleBone Black; indeed, the second of those is precisely what the Bela is!

So the refocused AlgoCompSynth-One is a platform for doing sound analysis and synthesis on the GPU whenever possible. In the current release, only JupyterLab and the R components are CPU-only.
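
As a small illustration of what GPU-resident analysis looks like, here is another sketch of my own with arbitrary parameters: synthesize a chirp with CuPy and compute its spectrogram with cuSignal, entirely in device memory.

    import cupy as cp
    import cusignal

    fs = 48000                                       # sample rate in Hz
    t = cp.arange(fs, dtype=cp.float32) / fs         # one second of samples
    sweep = cp.sin(2 * cp.pi * (100 + 400 * t) * t)  # chirp from 100 Hz to ~900 Hz

    # Spectrogram computed on the GPU; all three results are CuPy arrays
    f, seg_times, Sxx = cusignal.spectrogram(sweep, fs=fs)
    print(Sxx.shape)                                 # (frequencies, time segments)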

The next release will include an upward-compatible version for Ubuntu 20.04 LTS or Ubuntu 22.04 LTS running on Windows 11 WSL. Unlike on the Jetson, I believe all the components are available in x86_64 CPU and NVIDIA GPU optimized binary form from Mambaforge-compatible repositories. The R packages may still need to be compiled from source.

The way forward on the Jetsons is unclear for a number of reasons:

  1. The Nano will not support JetPack 5.0 and later releases. The JetPack version on the Nano, 4.6.1, is based on Ubuntu 18.04 LTS, which goes end-of-life in less than a year.
  2. The PyTorch wheels that NVIDIA builds for the Nano will not advance past PyTorch 1.10.0. This is because JetPack 4.6.1 uses Python 3.6, which has already reached end-of-life; the last version of PyTorch that the PyTorch project supports on Python 3.6 is 1.10.0.
  3. New Jetson hardware is a lot more expensive now than it was two years ago when I got my Nano, AGX Xavier and Xavier NX development kits. I can continue maintaining the Jetson versions as long as my Jetsons stay functional, but I don’t have a budget for new ones at present.

Any enhancements for the Jetson platform will involve acquiring or building Mambaforge-compatible Jetson PyTorch wheels for a newer version of Python, preferably 3.9. A PyTorch build from source takes several hours to complete on a Jetson, and it’s not clear the licenses would allow me to distribute the resulting binaries. Building binaries for publication is something that should be done via continuous integration on servers, not by a hobbyist on two-year-old development kits!

And because of the cost increases on Jetson hardware, one can get a laptop with more GPU power than the new AGX Orin developer kit for roughly the same price, albeit with higher power draw and no vision-specific hardware and software. The sweet spot for the Jetsons is low-power industrial computer vision; turning them into synthesizers doesn’t exploit them to their fullest.

So my plan is to build the WSL version in the next few weeks and start performance-testing some of the above-listed experimental open source PyTorch differentiable digital signal processing applications. I’ll almost certainly be able to do inference on the Jetsons, although I may need to do training on a GTX 1650 Ti or RTX 3090. If the Jetsons turn out to be cost-competitive at today’s end-user prices, I’ll research building current PyTorch wheels for Mambaforge Python 3.9.

References

Caillon, Antoine, and Philippe Esling. 2021. “RAVE: A Variational Autoencoder for Fast and High-Quality Neural Audio Synthesis.” arXiv. https://doi.org/10.48550/ARXIV.2111.05011.
Chen, Nanxin, Yu Zhang, Heiga Zen, Ron J. Weiss, Mohammad Norouzi, and William Chan. 2020. “WaveGrad: Estimating Gradients for Waveform Generation.” arXiv. https://doi.org/10.48550/ARXIV.2009.00713.
Hayes, Ben, Charalampos Saitis, and György Fazekas. 2021. “Neural Waveshaping Synthesis.” arXiv. https://doi.org/10.48550/ARXIV.2107.05050.
Sueur, J. 2018. Sound Analysis and Synthesis with R. Use R! Springer International Publishing. https://books.google.com/books?id=zfVeDwAAQBAJ.

Reuse

Text and figures are licensed under Creative Commons Attribution CC BY-SA 4.0. The figures that have been reused from other sources don't fall under this license and can be recognized by a note in their caption: "Figure from ...".

Citation

For attribution, please cite this work as

Borasky (2022, May 11). AlgoCompSynth by znmeb: AlgoCompSynth-One v1.0.0 Released. Retrieved from https://www.algocompsynth.com/posts/2022-05-11-algocompsynth-one-100-released/

BibTeX citation

@misc{borasky2022algocompsynth-one,
  author = {Borasky, M. Edward (Ed)},
  title = {AlgoCompSynth by znmeb: AlgoCompSynth-One v1.0.0 Released},
  url = {https://www.algocompsynth.com/posts/2022-05-11-algocompsynth-one-100-released/},
  year = {2022}
}