<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://musicalentropy.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://musicalentropy.github.io/" rel="alternate" type="text/html" /><updated>2026-04-10T12:18:23+00:00</updated><id>https://musicalentropy.github.io/feed.xml</id><title type="html">Musical Entropy (Blog)</title><subtitle>Smart tools for happy musicians</subtitle><entry><title type="html">The Great Escape Final Release</title><link href="https://musicalentropy.github.io/TheGreatEscape/" rel="alternate" type="text/html" title="The Great Escape Final Release" /><published>2026-03-18T00:00:00+00:00</published><updated>2026-03-18T00:00:00+00:00</updated><id>https://musicalentropy.github.io/TheGreatEscape</id><content type="html" xml:base="https://musicalentropy.github.io/TheGreatEscape/"><![CDATA[<p>Hello everyone! Today, I’m going to talk to you about the latest version of <strong>my freeware The Great Escape</strong>, and also a little bit about the development history of this plug-in, something I didn’t have the opportunity to do here when it was released. You can still find it here:</p>

<p><a href="https://musicalentropy.com/TheGreatEscape.html">The Great Escape by Musical Entropy</a></p>

<p>The plug-in was created at the end of March 2020, during the infamous COVID lockdown. I had seen a few initiatives from artists and developers who wanted to communicate the importance of respecting the lockdown, and who were offering ways to keep people busy and make it more fun, and I thought the idea sounded very interesting. So I decided to do something along those lines, and developed everything in just a few days, without any vibe coding :)</p>

<p>A few weeks earlier, I had had the chance to attend a show in Paris called Festival United Guitars 2020, organized by Ludovic Egraz, editor-in-chief of <a href="https://guitarextrememag.com/">Guitare Xtreme magazine</a> and leader of the musical project of the same name, <a href="https://united-guitars.fr/">United Guitars</a>, which showcases compositions produced by some of the best French guitarists.</p>

<p><img src="/images/artwork-UG4-retouch-1015x1024.jpg" alt="United Guitars Volume 4" /></p>

<p>At that show, I had the chance to try out a few pedals, and I was particularly fond of the famous The Great Escape by Thrilltone. This tremolo sounded really good, and using dynamics to manipulate its parameters with just a few knobs struck me as a brilliant idea, and a fun way to rediscover playing with a tremolo. I loved the concept so much that just a few days later, I tried to recreate it in a plug-in, mostly by ear, especially the dynamics side. And that resulted in this magnificent prototype :)</p>

<p><img src="/images/TheGreatEscapePrototype.png" alt="Prototype of The Great Escape plug-in" /></p>

<p>When I decided to release a new freeware, the idea of completing this modeling work seemed very good to me, especially since I had the opportunity to talk with <a href="http://thrilltone.fr/">Pierre-Benoît Prud’homme</a>, the creator of the pedal and founder of the Thrilltone brand, which also makes great distortion pedals and more, and whose graphic aesthetic I greatly appreciate (just look at this beauty):</p>

<p><img src="/images/Northern-Lights-009-600x600.png" alt="Thrilltone Northern Lights" /></p>

<p>So we got in touch, and the idea of releasing a paid software tremolo emulation, initially with a rough model in a free version, seemed like a good one. The freeware plug-in was released as a VST/VST3/AU/AAX and even a Linux version, a first for me at the time. The plug-in regularly displayed lockdown recommendations, which I removed in a later version, and used the original pedal graphical designs, generously provided by Pierre-Benoît.</p>

<p>The idea was to discuss collaboration again for the paid version later, and to properly promote the original pedal as well, which apparently sold a bit better thanks to me! We’ll also skip over the fact that the freeware included an Easter Egg that’s still there and is quite entertaining in itself :)</p>

<p>I’ll mention again that this is an analog tremolo emulation, with <strong>its subtle touch of signal coloration</strong> even when the tremolo isn’t active. Beyond its sound, it distinguishes itself from a classic tremolo by its ability to easily <strong>add variety to the effect</strong> itself, using a parametric envelope follower that can vary the speed and amount of the tremolo based on the signal’s dynamics.</p>
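<p>To give a rough idea of how a dynamics-driven tremolo like this can work, here is a small sketch I wrote for this post. It is purely illustrative: all the parameter names, time constants, and sensitivities are my own assumptions, not the plug-in’s actual DSP. A one-pole envelope follower tracks the input level, and its output scales both the LFO speed and the effect amount:</p>

```cpp
#include <algorithm>
#include <cmath>

// Illustrative sketch only: a dynamics-driven tremolo where an envelope
// follower modulates both the LFO speed and the effect amount.
// All names and constants here are assumptions, not the plug-in's code.
struct DynamicTremolo
{
    float sampleRate = 44100.0f;
    float envelope   = 0.0f;  // smoothed input level
    float phase      = 0.0f;  // LFO phase in radians
    float baseRateHz = 5.0f;  // tremolo speed with a quiet input
    float baseDepth  = 0.5f;  // tremolo amount with a quiet input
    float rateSens   = 4.0f;  // extra Hz per unit of envelope (assumed)
    float depthSens  = 0.4f;  // extra depth per unit of envelope (assumed)

    float processSample (float x)
    {
        // One-pole envelope follower: fast attack, slower release
        const float level = std::abs (x);
        const float coeff = level > envelope ? 0.01f : 0.0005f;
        envelope += coeff * (level - envelope);

        // The signal's dynamics vary both the speed and the amount
        const float rate  = baseRateHz + rateSens * envelope;
        const float depth = std::min (1.0f, baseDepth + depthSens * envelope);

        constexpr float twoPi = 6.28318530718f;
        phase += twoPi * rate / sampleRate;
        if (phase > twoPi)
            phase -= twoPi;

        // Classic tremolo: amplitude modulation by a sine LFO
        const float gain = 1.0f - depth * 0.5f * (1.0f + std::sin (phase));
        return x * gain;
    }
};
```

<p>Play louder and the tremolo speeds up and deepens; play softer and it relaxes, which is the behaviour described above, just in its simplest possible form.</p>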

<p><img src="/images/the_great_escape_manual.png" alt="The Great Escape 1.2.0" /></p>

<p>All of this is also very well explained in the original manual, as part of the educational side of my plug-ins that I pushed particularly far, and the effect applies equally well to guitars as to vocals or synthesizer sounds.</p>

<p><img src="/images/the_great_escape_tutorial.png" alt="The Great Escape 1.2.0" /></p>

<p>Then, very recently, and as you know, I released a new version of the plug-in.</p>

<p><img src="/images/TheGreatEscape-1.2.0.png" alt="The Great Escape 1.2.0" /></p>

<p>This project allowed me to incorporate the latest updates to the JUCE framework and my codebase, properly implement support for the latest operating systems, add CLAP to the list of available formats (currently only on Linux), and refine the emulation to better match the behavior of the original pedal, which I now own. However, a paid collaboration is unlikely at this point, and for this reason I’ve focused on creating an emulation that closely resembles, but isn’t strictly identical to, the original, with a few improvements over the initial version I released.</p>

<iframe width="100%" height="166" scrolling="no" frameborder="no" src="https://w.soundcloud.com/player/?url=https://soundcloud.com/musicalentropy/little-wing-65-tge?in=musicalentropy/sets/the-great-escape-audio-demos&amp;color=ff5500&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false"></iframe>

<p>In any case, I’m very proud of this plug-in, very happy to have been able to work with Pierre-Benoît on the emulation of his awesome tremolo pedal, and I’ll keep you updated on my upcoming releases and work. I’ll have some announcements to make very soon about that :)</p>

<p>Stay safe and stay tuned!</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Hello everyone! Today, I’m going to talk to you about the latest version of my freeware The Great Escape, and also a little bit about the development history of this plug-in, something I didn’t have the opportunity to do here when it was released. You can still find it here:]]></summary></entry><entry><title type="html">Spaceship Delay Reboot (version 1.4.4)</title><link href="https://musicalentropy.github.io/Spaceship-Delay-1.4.4/" rel="alternate" type="text/html" title="Spaceship Delay Reboot (version 1.4.4)" /><published>2026-02-25T00:00:00+00:00</published><updated>2026-02-25T00:00:00+00:00</updated><id>https://musicalentropy.github.io/Spaceship-Delay-1.4.4</id><content type="html" xml:base="https://musicalentropy.github.io/Spaceship-Delay-1.4.4/"><![CDATA[<p>Hey everyone, it’s been a while! Today I’m posting a new message on my blog, and I hope I can do this regularly from now on.</p>

<p>I decided to get back to work on my Musical Entropy plug-ins, after spending a few years focusing exclusively on my freelance developer career. As you may have noticed, I have just released a new version of my freeware plug-in Spaceship Delay, which was essentially an opportunity to relaunch the project and clean up all the source code. I did the same with The Great Escape, and this allowed me to make the latest versions of these projects available, with installers as a bonus, and all the classic plug-in formats on Linux. You can find them in the usual locations, here or on my website, always free and accessible without any kind of registration process. By the way, I will also cover The Great Escape in a separate post, as I have not yet had the honor of talking about it here.</p>

<p><a href="https://www.musicalentropy.com/SpaceshipDelay.html">Spaceship Delay</a></p>

<p><a href="https://www.musicalentropy.com/TheGreatEscape.html">The Great Escape</a></p>

<p><img src="/images/spaceship_delay.png" alt="Original theme Spaceship Delay" /></p>

<p>So Spaceship Delay has had its fair share of updates, mainly to ensure compatibility with the latest OSes, fix a few bugs here and there, improve the usability and visibility of the embedded tutorial section, add resizing and a dark theme, improve the phaser algorithm with a zero-delay feedback structure (used as well in my recent design for the <a href="https://firmware.phazerville.com/">Eurorack module Ornaments &amp; Crime alternate firmware Phazerville</a>), and optimize the execution of certain parts of the code, such as the convolution spring reverb (an improved version of the code I sold a few years ago to JUCE for the DSP module). Over a hundred fixes later, I find the plug-in much more stable, it does exactly what it’s supposed to do without any drift (which wasn’t the case before, lol), and it’s therefore ready for me to add cool new features.</p>
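<p>For the curious, a first-order allpass stage, the building block of a phaser, looks roughly like this in a zero-delay feedback (topology-preserving transform) formulation. This is a generic textbook-style sketch I’m including for illustration, not the actual Spaceship Delay code:</p>

```cpp
#include <cmath>

// Sketch of one first-order allpass stage in a zero-delay-feedback
// (topology-preserving transform) form, the kind of structure mentioned
// above for the phaser. Generic illustration, not the plug-in's code.
struct ZDFAllpassStage
{
    float state = 0.0f; // trapezoidal integrator state

    // x: input sample, fc: stage cutoff in Hz, fs: sample rate in Hz
    float processSample (float x, float fc, float fs)
    {
        const float g = std::tan (3.14159265f * fc / fs); // prewarped gain
        const float G = g / (1.0f + g);

        const float v  = G * (x - state); // solve the implicit (ZDF) equation
        const float lp = v + state;       // one-pole lowpass output
        state = lp + v;                   // trapezoidal state update

        return 2.0f * lp - x;             // allpass = 2 * lowpass - input
    }
};
```

<p>Because the feedback loop is solved per sample instead of using a unit delay, the stage keeps behaving well even when its cutoff is swept quickly, which is exactly what a phaser LFO does.</p>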

<p><img src="/images/spaceship_delay_darkmode.png" alt="Dark mode Spaceship Delay" /></p>

<p>What kind of new features, you ask? A new, modern design for the user interface, obviously, but also things that will make Spaceship Delay even more unique and true to the vision I had when I first released it, as well as a synthesis of everything I’ve worked on since then, and of all the things I love in delay effects now that I’m a bit more involved in music production in general than before.</p>

<iframe width="100%" height="166" scrolling="no" frameborder="no" src="https://w.soundcloud.com/player/?url=https://soundcloud.com/musicalentropy/four-hundred-forty&amp;color=ff5500&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false"></iframe>

<p>Stay tuned!</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Hey everyone, it’s been a while! Today I’m posting a new message on my blog, and I hope I can do this regularly from now on.]]></summary></entry><entry><title type="html">JUCE 5.1 DSP module</title><link href="https://musicalentropy.github.io/JUCE-DSP-module/" rel="alternate" type="text/html" title="JUCE 5.1 DSP module" /><published>2017-10-31T00:00:00+00:00</published><updated>2017-10-31T00:00:00+00:00</updated><id>https://musicalentropy.github.io/JUCE-DSP-module</id><content type="html" xml:base="https://musicalentropy.github.io/JUCE-DSP-module/"><![CDATA[<p>In July 2017, I worked again for <a href="https://roli.com/">ROLI</a> and the <a href="http://www.juce.com">JUCE</a> team, to improve the SDK I use all the time to develop multi-platform audio applications and plug-ins, and to provide some DSP code that has since been included in the so-called <a href="https://juce.com/releases/juce-5-1">DSP module</a>.</p>

<iframe width="560" height="315" src="https://www.youtube.com/embed/ZrxrhGg7_xA" frameborder="0" style="margin-left: auto; margin-right: auto; display:block; text-align:center;" allowfullscreen=""></iframe>

<p>I even spent one week at <a href="https://fxpansion.com/">FXpansion</a> headquarters to work at a hellish but fun pace! And I had a great time hanging out with the JUCE + FXpansion guys. Basically, I provided some code to help JUCE users writing plug-ins who need a convolution engine, filter design and processing functions, oversampling, some fast approximations of mathematical functions…</p>

<p>I’m writing this blog post today because I recently created a few topics on the JUCE forum related to my work, and I wanted to post links to them here:</p>

<ul>
  <li><a href="https://forum.juce.com/t/dsp-module-discussion-structure-of-audio-plug-ins-api/23589">DSP module discussion / Structure of audio plug-ins API</a></li>
  <li><a href="https://forum.juce.com/t/dsp-module-discussion-new-oversampling-class/24153">DSP module discussion / New Oversampling class</a></li>
  <li><a href="https://forum.juce.com/t/dsp-module-discussion-new-audioblock-class/24154">DSP module discussion / New AudioBlock class</a></li>
  <li><a href="https://forum.juce.com/t/dsp-module-discussion-iir-filter-and-statevariablefilter/23891">DSP module discussion / IIR::Filter and StateVariableFilter</a></li>
  <li><a href="https://forum.juce.com/t/dsp-module-discussion-fast-function-computation-classes/24905">DSP module discussion / Fast function computation classes</a></li>
  <li><a href="https://forum.juce.com/t/dsp-module-discussion-new-class-simdregister/24911">DSP module discussion / New class SIMDRegister</a></li>
  <li><a href="https://forum.juce.com/t/dsp-module-discussion-new-classes-in-the-maths-folder/24908">DSP module discussion / New classes in the maths folder</a></li>
</ul>

<p>Otherwise, I’ll be at <a href="https://www.juce.com/adc-2017">ADC 17</a> again this year in London, presenting a talk called <a href="https://www.juce.com/adc-2017/talks#fifty-shades-of-distortion">Fifty Shades of Distortion</a> (which will be on YouTube a few weeks later), and I’ll update Spaceship Delay when I’m back with a few improvements and new functionalities!</p>]]></content><author><name></name></author><summary type="html"><![CDATA[In July 2017, I worked again for ROLI and the JUCE team, to improve the SDK I use all the time to develop multi-platform audio applications and plug-ins, and to provide some DSP code that has since been included in the so-called DSP module.]]></summary></entry><entry><title type="html">Machine Learning Hackathon</title><link href="https://musicalentropy.github.io/Machine-Learning-Hackathon/" rel="alternate" type="text/html" title="Machine Learning Hackathon" /><published>2017-01-02T00:00:00+00:00</published><updated>2017-01-02T00:00:00+00:00</updated><id>https://musicalentropy.github.io/Machine-Learning-Hackathon</id><content type="html" xml:base="https://musicalentropy.github.io/Machine-Learning-Hackathon/"><![CDATA[<p>As you might not know yet, my main occupation today is being a freelance developer and audio signal processing engineer. I do consulting jobs mainly for companies in the audio industry. Last month, <a href="https://roli.com/">ROLI</a> and the <a href="http://www.juce.com">JUCE</a> team asked me to work on a Machine Learning JUCE module, in order to make it available for a special event, a Hackathon in London. JUCE is the famous SDK that is being used by more and more developers (including me) to release multi-platform audio applications and plug-ins.</p>

<p><img src="https://scontent-cdg2-1.xx.fbcdn.net/v/t31.0-8/15194335_1372787506067788_7919598285914712537_o.jpg?oh=905efe0d82e2f71921f422a1098a4b0f&amp;oe=59193DE7" alt="Machine Learning Hackathon" title="Machine Learning Hackathon" /></p>

<p>They started a partnership with <a href="http://www.gold.ac.uk/">Goldsmiths, University of London</a> and <a href="http://www.ircam.fr">IRCAM</a> in Paris, around their European project called <a href="http://rapidmix.goldsmithsdigital.com/">RapidMix</a>, for Realtime Adaptive Prototyping for Industrial Design of Multimodal Interactive eXpressive Technology (yes, I know it’s long). To summarize, it’s a Machine Learning C++ toolkit/API, made to give developers and artists fast and easy access to Machine Learning algorithms, in order to create innovative human-computer interfaces for gesture recognition and the creation of new musical instruments.</p>

<p>So, two weeks before the hackathon, the JUCE team asked me to develop a JUCE module wrapping a section of RapidMix called RapidLib, created by <a href="http://www.mikezed.com/">Michael Zbyszyński</a>. It is the little brother of the <a href="http://www.wekinator.org/">Wekinator</a>, designed by <a href="https://www.doc.gold.ac.uk/~mas01rf/Rebecca_Fiebrink_Goldsmiths/welcome.html">Rebecca Fiebrink</a>, also at Goldsmiths. You might know them from the amazing talks they gave at the <a href="https://www.youtube.com/watch?v=8IEVWj_OYhM">Audio Developer Conference 2016</a> in November, and from the MOOC <a href="https://www.kadenze.com/courses/machine-learning-for-musicians-and-artists/info">Machine Learning for Musicians and Artists</a>. I coded the module and some JUCE examples, and I also gave a talk at the beginning of the hackathon so everybody could grasp the basic concepts and start building something cool.</p>

<iframe width="560" height="315" src="https://www.youtube.com/embed/8IEVWj_OYhM" frameborder="0" style="margin-left: auto; margin-right: auto; display:block; text-align:center;" allowfullscreen=""></iframe>

<p>You can find the slides I used during my talk here: <a href="/files/HackathonSlides.pdf">Hackathon Slides</a>.</p>

<h2 id="machine-learning-">Machine Learning?</h2>

<p>What’s interesting here is that I knew almost nothing about Machine Learning before! But I learned everything I could about the subject in that short amount of time, in order to do the job successfully and to be able to explain how to use the JUCE module and do cool things with it. Ultimately, everything worked as expected, and I also got nice reviews about my work and my talk from the people involved in the RapidMix project, which I am very proud of!</p>

<p><img src="http://blog.euratechnologies.com/content/uploads/2015/04/machine-learning.jpg" alt="Machine Learning" title="Machine Learning" /></p>

<p>Anyway, the reason I’m talking about this is to tell you how excited I am about this collaboration and the things that are going to happen next. The JUCE module isn’t available to the public yet, since it is in a very early alpha version right now, but it will be released in the future, and the people who already got to have fun with it at the hackathon were very happy to have that opportunity.</p>

<p>So, why am I so excited about this? Since I’m new to this area, I wouldn’t be able to explain in detail what Machine Learning is and how it works, but here is what I have learned already (you can have a look at my talk slides too):</p>

<ul>
  <li>
    <p>The very principle of Machine Learning is the use of an algorithm which is <strong>trained</strong> with some data provided by the user. The <strong>training set</strong> is made of samples pairing inputs with their associated outputs. The algorithm is then expected to behave in a given way when it receives new, <strong>unseen</strong> input data.</p>
  </li>
  <li>
    <p>There are two main kinds of Machine Learning algorithms, ones to do <strong>classification</strong>, and ones to do <strong>regression</strong>.</p>
  </li>
  <li>
    <p>In <strong>classification</strong> applications, the output of the algorithm is an integer variable or a class label. For example, a classification algorithm is trained with pictures of animals, and the associated animal names. When a new picture is given to it, the algorithm is supposed to be able to tell you what animal is in the picture.</p>
  </li>
  <li>
    <p>In <strong>regression</strong> applications, however, the outputs are continuous: they can be any float/double values. As an example of regression, imagine you want to model a mathematical function, like the hyperbolic tangent, with a Machine Learning algorithm. You train your regression algorithm with a set of input + output data that you calculate yourself. Then the algorithm processes any input value and returns an output which is supposed to be as close as possible to the result of the original mathematical function.</p>
  </li>
  <li>
    <p>In order to train Machine Learning algorithms properly, it is common to use <strong>features</strong>, which means sampling a large amount of data and compressing it, or extracting some meaningful information from it. For example, in audio, it is possible to train an algorithm with an RMS value computed over a 200 ms buffer of audio samples, instead of feeding the algorithm directly with that buffer. It is also possible to extract some information such as the pitch, the spectral content, etc. With a video signal, you can reduce the resolution of the image or extract an overall brightness value.</p>
  </li>
</ul>
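<p>To make the regression idea above concrete, here is a toy example I wrote for this post. It is a deliberately simple stand-in: linear interpolation between stored training points instead of a real Machine Learning algorithm, and nothing from RapidLib itself. The point is that the “model” only ever sees the (input, output) training pairs, never the hyperbolic tangent function:</p>

```cpp
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

// Toy "regression" in the spirit of the tanh example above: the model is
// built only from (input, output) samples and never sees the original
// function. Nearest-point linear interpolation stands in for a real
// Machine Learning algorithm here (purely illustrative).
struct ToyRegression
{
    std::vector<std::pair<float, float>> trainingSet; // (input, target) pairs

    void train (float input, float target)
    {
        trainingSet.emplace_back (input, target);
        std::sort (trainingSet.begin(), trainingSet.end()); // keep inputs ordered
    }

    float predict (float x) const
    {
        if (trainingSet.empty())
            return 0.0f;

        // Find the first training input >= x, then interpolate linearly
        // between it and the previous training point
        auto it = std::lower_bound (trainingSet.begin(), trainingSet.end(),
                                    std::make_pair (x, 0.0f));
        if (it == trainingSet.begin()) return it->second;
        if (it == trainingSet.end())   return std::prev (it)->second;

        const auto lo = std::prev (it);
        const float t = (x - lo->first) / (it->first - lo->first);
        return lo->second + t * (it->second - lo->second);
    }
};
```

<p>Train it with pairs like (x, tanh(x)) sampled on a grid, and predict() will return values close to tanh for any input in the trained range, which is exactly the regression behaviour described above, only with a much cruder algorithm than what RapidLib provides.</p>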

<h2 id="applications-of-machine-learning">Applications of Machine Learning</h2>

<p>So, what can we do with all of this? In the audio examples I saw, regression algorithms were used to map and interpolate the parameters of an audio synthesizer using controllers such as the computer mouse, a <a href="https://www.leapmotion.com/">Leap Motion</a>, some <a href="https://roli.com/products/blocks">ROLI Blocks</a>, a joystick, etc. In her videos, Rebecca Fiebrink uses a lot of unusual controllers and the Wekinator to create new instruments, with new ways of interacting with a computer and making music. It is also fun to see that the musician can be involved in the training phase; it is not always the duty of the application developer to train the Machine Learning algorithm.</p>

<iframe width="560" height="315" src="https://www.youtube.com/embed/rnlCGw-0R8g" frameborder="0" style="margin-left: auto; margin-right: auto; display:block; text-align:center;" allowfullscreen=""></iframe>

<p>During the hackathon, I coded something cool too (although I wasn’t able to complete it before the end of the event, unfortunately). I developed a drum machine application which can play 3 samples (a kick, a snare and a hi-hat), with 3 patterns that the user can modify with the mouse, or live by connecting a ROLI Block to the computer over USB. The user can then “train” a classification algorithm with the computer’s microphone input, in order to associate outside sounds with one of the three patterns, using audio feature extraction. This way, they can trigger pattern 1 with a low-pitched sound for example, pattern 2 with a white-noise-like impulse, and pattern 3 with anything else, whatever the user chooses. And finally, there is a simple delay audio effect with 4 parameters (delay time, feedback, mix, lowpass filter frequency) which can be “trained” with the mouse cursor. That means the user can associate a mouse cursor position with a given set of the 4 parameters, and then play with the audio effect simply by dragging the mouse over the window, which continuously moves its parameters, like in Michael Zbyszyński’s talk. Sounds fun, doesn’t it? I promise I’ll make it available one way or another at some point, once it’s completed.</p>

<p>The other participants did cool things too, even if it was difficult, in such a short amount of time, to discover a new tool and a new technology and to build something with them… But the beauty of JUCE is that it is possible to create a new project, handle all the dependencies, and start coding in a few seconds! The JUCE module during the hackathon featured simple classification and regression classes, but unfortunately nothing yet on the feature extraction side, which was the main concern of the participants at the time. We all had to code some of it on our own, which partly explains why I wasn’t able to finish my application…</p>

<h2 id="conclusion">Conclusion</h2>

<p>Anyway, there is a RapidMix JUCE module being developed right now. Machine Learning is a hot topic nowadays, thanks to all the communication about Deep Learning, or about current audio-related applications such as speech recognition and automatic composition of music. I’m still discovering the topic, but in my opinion, having something as simple as a JUCE module to let audio developers have fun with Machine Learning, thanks to the work of the RapidMix team, could be something huge for musicians and sound engineers. Not because Machine Learning would help create truly innovative stuff (everything I have talked about could be done without it), but because a lot of complicated problems become very simple to solve with Machine Learning, and because these algorithms let you experiment with a lot of ideas very quickly! Finally, I think most current users of Machine Learning APIs and algorithms are in the academic world, and having something like a module in the JUCE library could make these features easily available to new people, with new application ideas, which is awesome.</p>

<p>I’d also like to say that I’m curious about the next developments in RapidMix, to see how it will perform compared with other Machine Learning APIs for C++. And I would love to add to Spaceship Delay some Machine Learning features I have in mind…</p>

<p>Anyway, thanks to all the people involved in this hackathon for organizing it, it was great to talk with them and with the participants back then!</p>

<h2 id="bibliography">Bibliography</h2>

<p>If you want to learn more about Machine Learning, there are tons of information sources on the internet, but I suggest you have a look at these first:</p>

<ul>
  <li><a href="https://ml.berkeley.edu/blog/2016/11/06/tutorial-1/">Machine Learning Crash Course Part 1, by Daniel Geng and Shannon Shih</a></li>
  <li><a href="https://www.kadenze.com/courses/machine-learning-for-musicians-and-artists/info">Machine Learning for Musicians and Artists</a></li>
  <li><a href="https://www.coursera.org/learn/machine-learning/home/welcome">Andrew Ng’s classic Machine Learning course on Coursera</a></li>
</ul>]]></content><author><name></name></author><summary type="html"><![CDATA[As you might not know yet, my main occupation today is being a freelance developer and audio signal processing engineer. I do consulting jobs mainly for companies in the audio industry. Last month, ROLI and the JUCE team asked me to work on a Machine Learning JUCE module, in order to make it available for a special event, a Hackathon in London. JUCE is the famous SDK that is being used by more and more developers (including me) to release multi-platform audio applications and plug-ins.]]></summary></entry><entry><title type="html">KVR DC 2016 results and a few announcements</title><link href="https://musicalentropy.github.io/KVR-DC-Results/" rel="alternate" type="text/html" title="KVR DC 2016 results and a few announcements" /><published>2016-12-21T00:00:00+00:00</published><updated>2016-12-21T00:00:00+00:00</updated><id>https://musicalentropy.github.io/KVR-DC-Results</id><content type="html" xml:base="https://musicalentropy.github.io/KVR-DC-Results/"><![CDATA[<p>So, you might already know, but the <a href="https://www.kvraudio.com/kvr-developer-challenge/2016/">KVR DC 16 is finished</a>, and I ended up ranked #3! I would like to warmly thank all the people who have been supporting <a href="http://www.kvraudio.com/product/spaceship-delay-by-musical-entropy/details">Spaceship Delay</a> during the contest and who are still planning to use it, particularly the ones who helped me build the Pro Tools version on Mac OS X through the forums or over Skype, and the amazing guy who worked on a custom skin during the KVR DC! Thanks a lot again!</p>

<p><img src="/images/Reskin.png" alt="Reskin" /></p>

<p>So the winners are Youlean with his <a href="http://www.kvraudio.com/product/youlean-loudness-meter-by-youlean/details">Youlean Loudness Meter</a> and Ursa DSP / Dave Elton with his delay <a href="http://bedroomproducersblog.com/2016/12/01/ursa-dsp-lagrange/">Lagrange</a>. It’s funny to see two delay plug-ins in the top 3!! Youlean Loudness Meter is probably one of the best loudness meters you can find for free, and I have seen a video comparing it with commercial alternatives. Lagrange is probably a less versatile delay than mine, but it uses granular methods to get very interesting and singular sounds that cannot be obtained with other methods, close to what you can get with stutter effects or some specific flanger / chorus effects. I suggest you try them both if you haven’t yet.</p>

<p><img src="http://static.kvraudio.com/i/b/lagrange084.png" alt="Lagrange" /></p>

<p>I would also like to tell you that I’m very happy to have a post about Spaceship Delay on Peter Kirn’s blog, Create Digital Music, which I have been reading for a long time. The article is here: <a href="http://cdm.link/2016/12/spaceship-delay-is-an-insane-free-plug-in-inspired-by-hardware/">Spaceship Delay is an insane free plug-in inspired by hardware</a></p>

<p>And one last thing: I have just updated Spaceship Delay to <a href="http://www.kvraudio.com/product/spaceship-delay-by-musical-entropy/details">version 1.0.5</a>. The new version includes a few things people have wanted for a long time: mono/stereo versions of the plug-in for Pro Tools, Logic Pro X, etc., a selector for the location of the Post-FX (before or after the mix control), a new tremolo Post-FX, and extra modulation options applied to the low-pass filters. There are also new presets and some extra tutorial content, so I suggest you update your presets / tutorial folders as well.</p>

<p>That’s it for today. I’ll probably talk again about Spaceship Delay very soon since I have big plans for it in the future!</p>]]></content><author><name></name></author><summary type="html"><![CDATA[So, you might already know, but the KVR DC 16 is finished, and I ended at rank #3! I would like to thank a lot all the people who have been supporting Spaceship Delay during the contest and who are still planning to use it, particularly the ones who helped me to make the Pro Tools version on Mac OS X in the forums or with Skype, or the amazing guy who worked during the KVR DC on a custom skin! Thanks a lot again!]]></summary></entry><entry><title type="html">The filters in Spaceship Delay</title><link href="https://musicalentropy.github.io/The-filters-in-Spaceship-Delay/" rel="alternate" type="text/html" title="The filters in Spaceship Delay" /><published>2016-12-12T00:00:00+00:00</published><updated>2016-12-12T00:00:00+00:00</updated><id>https://musicalentropy.github.io/The-filters-in-Spaceship-Delay</id><content type="html" xml:base="https://musicalentropy.github.io/The-filters-in-Spaceship-Delay/"><![CDATA[<p>In this new blog post, I’m going to talk about the filter algorithms implemented in <a href="http://www.kvraudio.com/product/spaceship-delay-by-musical-entropy/details">Spaceship Delay</a>! Right now, you can see 4 different filter types in the filter section: Low/High Cut, Low/High Shelf, Japanese, and German/Canadian.</p>

<h2 id="lowhigh-cut-and-lowhigh-shelf">Low/High Cut and Low/High Shelf</h2>

<p>These filters are more or less the standard 2nd order filters or “biquads”, modeling the behaviour of the analog circuit called the <strong>State Variable Filter (SVF)</strong>, with a -12 dB/octave attenuation. The equations are derived from the famous <a href="http://www.musicdsp.org/files/Audio-EQ-Cookbook.txt">Robert Bristow-Johnson Audio EQ Cookbook</a> that every DSP engineer in the world knows, and sometimes uses without even knowing. The same goes for most filter algorithms available in commercial software.</p>

<p>However, the original implementation has some well-known drawbacks, which I talked about in my recent <a href="https://www.youtube.com/watch?v=esjHXGPyrhg">Audio Developer Conference talk</a>. The two main drawbacks concern the behaviour of the filters in the high frequency range, and what happens when the cutoff frequency is modulated quickly. In the first case, a lowpass filter for example <strong>attenuates the high frequencies way too much</strong> close to half the sampling rate. In the second case, with the original implementation, which uses a simulation structure called the <a href="https://en.wikipedia.org/wiki/Digital_filter#Direct_form_I">Direct Form</a>, a quick modulation <strong>produces high-amplitude artefacts</strong>.</p>

<p>In Spaceship Delay, I have solved these issues by using another simulation structure called the <strong>Topology-Preserving Transform</strong>, which is well documented on the KVR forums (DSP section) and in the free e-book by Native Instruments’ Vadim Zavalishin, <a href="https://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_1.1.1.pdf">The Art of V.A. Filter Design</a>. I have also oversampled the filtering algorithm, since this section can produce some nonlinear behaviour, as we will see with the other kinds of filters. That gives me the right behaviour in the high frequencies and when a parameter is changed by the user through automation. It will be even more useful if I decide to add an envelope follower or an LFO modulating the cutoff frequencies to the modulation section in the future. Moreover, this structure is already used in the phaser algorithm, for obvious reasons.</p>
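<p>For illustration, here is a minimal Python sketch of a TPT state variable filter in the form derived in Zavalishin’s book (and in Andrew Simper’s notes on trapezoidal integration). This is the general structure, not the actual Spaceship Delay code:</p>

```python
import math

class TPTLowpass:
    """2nd-order TPT ("zero-delay feedback") state variable filter.
    Well behaved under fast cutoff modulation, because its states are the
    trapezoidal integrator states of the analog SVF rather than raw delays."""
    def __init__(self, fs):
        self.fs = fs
        self.ic1 = 0.0  # first integrator state
        self.ic2 = 0.0  # second integrator state

    def process(self, x, fc, q):
        g = math.tan(math.pi * fc / self.fs)      # prewarped integrator gain
        k = 1.0 / q                               # damping
        a1 = 1.0 / (1.0 + g * (g + k))
        a2 = g * a1
        a3 = g * a2
        v3 = x - self.ic2
        v1 = a1 * self.ic1 + a2 * v3              # bandpass output
        v2 = self.ic2 + a2 * self.ic1 + a3 * v3   # lowpass output
        self.ic1 = 2.0 * v1 - self.ic1            # update integrator states
        self.ic2 = 2.0 * v2 - self.ic2
        return v2

f = TPTLowpass(44100.0)
out = [f.process(1.0, 1000.0, 0.707) for _ in range(2000)]
print(out[-1])  # converges to 1.0: unity gain at DC
```

Note how the cutoff can be recomputed every sample without the state variables losing their physical meaning, which is exactly why the modulation artefacts of the Direct Form disappear.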

<p>I’d like to say there is absolutely nothing really innovative there, since this kind of filtering algorithm can already be found in most current commercial plug-ins involving filtering. The Low/High Cut feature in Spaceship Delay is nothing groundbreaking either, being something we find in most other delay plug-ins, so its presence here is entirely relevant without being particularly new. However, I have not often seen low and high shelf filters in this context, and I thought they would be a nice addition to drive the nonlinear sections further, push the delay line into resonant feedback in an interesting way, or simply cut some frequency content in a different way than the Low/High Cut filters. It was my friend François-Maxime from <a href="http://www.lesliensduson.com/">Les Liens du Son</a> who suggested this to me, and he was right!</p>

<h2 id="japanese">Japanese</h2>

<p>The Japanese filter is obviously my take on the MS-20 / Monotron filters. You can find a very interesting study of their behaviour on Tim Stinchcombe’s website, which I read a lot during development.</p>

<p><a href="http://www.timstinchcombe.co.uk/index.php?pge=korg">Tim Stinchcombe’s study of the Korg MS-20 filter</a></p>

<p>I’m not that proud of the realism and simulation quality of the result, since I’m still new to synthesizer filter modeling. But I think that the “Japanese” filter in Spaceship Delay sounds quite good, and it is very interesting when put in a delay line, to get those screaming, oscillating feedbacks everybody loves in “analog” delay software and hardware. As I said before, building a delay around that filter algorithm was the starting point of Spaceship Delay in terms of DSP. I wanted something that sounds a little like my Korg Monotron Delay, that I could use as a plug-in. I also thought that using other kinds of “Virtual Analog” filters in the delay line could be interesting too. So I did some research, trying to dismiss the too-obvious choices for adding an extra V.A. filter to the plug-in, and I ended up modeling the filter I called “German/Canadian”.</p>
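<p>To illustrate why a saturating filter in the delay feedback path gives those musical oscillating feedbacks, here is a toy Python sketch of the general topology. The one-pole lowpass and tanh clipper are deliberately simplified stand-ins, not the MS-20 model from the plug-in:</p>

```python
import math

def analog_style_delay(x, fs, delay_s=0.25, feedback=1.2):
    """Toy "analog-style" delay: a delay line with a damping lowpass and a
    tanh soft clipper inside the feedback path. Even with feedback > 1 the
    loop self-oscillates instead of blowing up, because the saturation
    bounds the loop gain."""
    n_delay = int(delay_s * fs)
    buf = [0.0] * n_delay
    w = 0
    lp = 0.0
    a = math.exp(-2.0 * math.pi * 2000.0 / fs)  # one-pole damping around 2 kHz
    out = []
    for xn in x:
        d = buf[w]                       # delayed sample leaving the line
        lp = (1.0 - a) * d + a * lp      # darken the repeats, tape-style
        fb = math.tanh(feedback * lp)    # soft clip: keeps the loop bounded
        buf[w] = xn + fb                 # write input plus saturated feedback
        w = (w + 1) % n_delay
        out.append(xn + d)               # dry + delayed mix
    return out

fs = 8000.0
y = analog_style_delay([1.0] + [0.0] * 15999, fs)  # an impulse, then let it ring
print(max(abs(v) for v in y))  # stays bounded even though feedback > 1
```

With a linear feedback gain above 1 the same loop would grow without limit; the nonlinearity is what turns runaway feedback into a sustained, screaming oscillation.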

<h2 id="germancanadian">German/Canadian</h2>

<p>For those who have read the embedded tutorial in Spaceship Delay, or seen the other blog posts, you already know the true identity of that filter. It is the one that can be found in the <a href="https://meeblip.com/">Meeblip Anode and Triode synthesizers</a>, made by Peter Kirn from <a href="http://cdm.link/">Create Digital Music</a> and James Grahame from <a href="https://meeblip.com/">Blipsonic / Meeblip</a>.</p>

<p><img src="/images/Meeblip-synths.png" alt="Meeblip Anode and Triode" style="text-align: center;" /></p>

<p>I have spent a lot of time studying the schematic of the filter, which is available on <a href="https://github.com/meeblip">GitHub</a> under permissive Creative Commons and GPLv3 licenses, since the Meeblip hardware and software are open source! I wanted a filter in Spaceship Delay with second-order attenuation like the Korg MS-20 filter (and not like the famous ladder filters), and the result sounded surprisingly good too when put in a delay line.</p>

<p>My implementation, as with the Korg MS-20 filter, isn’t that realistic yet, since I used a simplified model, and because I have not yet reproduced the same mapping for the cutoff and resonance knobs as in the original units. However, my model is based on the equations of the original circuit, which I studied first in <a href="http://www.linear.com/designtools/software/">LTSpice</a> and then in a simulation context to make it sound as good as possible.</p>

<p><img src="/images/Meeblip-LTSpice.png" alt="LTSpice Screenshot" style="text-align: center;" /></p>

<p>As those of you who know how to read a synthesizer filter schematic can see, it’s a filter looking more or less like a Twin-T VCF, acting like a lowpass filter, but also removing a little in the bass frequency range, which gives it a very interesting sound signature in my opinion. It is possible to study it further by determining its transfer function from the electronic equations:</p>

\[H(s) = - \frac{R_2}{R_1} \frac{1 + C s (2 R_F + R_Q) + R_F R_Q (C s)^2}{1 + C s (2 R_F + R_Q) + R_F (R_2 + R_Q) (C s)^2}\]

<p>I’ve made it zero-delay feedback, but the way it saturates is still far from the original in my opinion, so I’m going to improve my model over the next few weeks, and I’ll probably start by updating the mapping of the controls so that it behaves like the original, at least in a strictly linear sense.</p>
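<p>For readers who want to play with the transfer function above, here is a small Python sketch that evaluates its magnitude. The component values are made up for illustration only, and are not the real Meeblip part values:</p>

```python
import math

# Hypothetical component values -- NOT the real Meeblip parts, just an
# illustration of the transfer function's shape.
R1, R2 = 100e3, 100e3
RF, RQ = 10e3, 1e3
C = 10e-9

def H(f):
    """Evaluate the filter transfer function H(s) at s = j*2*pi*f."""
    sC = 1j * 2.0 * math.pi * f * C
    num = 1.0 + sC * (2.0 * RF + RQ) + RF * RQ * sC**2
    den = 1.0 + sC * (2.0 * RF + RQ) + RF * (R2 + RQ) * sC**2
    return -(R2 / R1) * num / den

for f in (20.0, 200.0, 2000.0, 20000.0):
    print(f, abs(H(f)))
# With these values the gain is R2/R1 = 1 at DC and rolls off toward the
# high-frequency asymptote R2*RQ / (R1*(R2+RQ)), roughly 0.01 here.
```

Since the numerator and denominator share everything except the s² coefficient, the component ratios decide how deep the high-frequency shelf is and where any resonance or dip lands, which is exactly what the knob mapping has to reproduce.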

<p>As a side note, don’t forget to get the latest version of Spaceship Delay to experiment with this filter, since the algorithm has changed a little since the beginning of the KVR Developer Challenge, to solve a few issues I had.</p>

<h2 id="next-steps-for-spaceship-delay">Next steps for Spaceship Delay</h2>

<p>I have already received tons of very nice comments and feature requests for Spaceship Delay. I even got a very nice new skin proposal from a user of the KVR forums. I’ll probably spend a lot of time after the voting period adding the most interesting features people have submitted, plus a few things I had in mind that I was not able to finish before the deadline. But what I can already tell you is that I’m working on the AAX version of the plug-in, and it should be available in a few days!</p>]]></content><author><name></name></author><summary type="html"><![CDATA[In that new blog post I’m going to talk about the filter algorithms that are implemented in Spaceship Delay! Right now, you can see 4 different filter types in the filter section : Low/High Cut, Low/High Shelf, Japanese, and German/Canadian.]]></summary></entry><entry><title type="html">New Spaceship Delay version online!</title><link href="https://musicalentropy.github.io/Spaceship-Delay-1.0.2/" rel="alternate" type="text/html" title="New Spaceship Delay version online!" /><published>2016-12-08T00:00:00+00:00</published><updated>2016-12-08T00:00:00+00:00</updated><id>https://musicalentropy.github.io/Spaceship-Delay-1.0.2</id><content type="html" xml:base="https://musicalentropy.github.io/Spaceship-Delay-1.0.2/"><![CDATA[<p>I have just updated Spaceship Delay on KVR Audio. You can get the new version here: <a href="http://www.kvraudio.com/product/spaceship-delay-by-musical-entropy/details">Spaceship Delay 1.0.2</a></p>

<p>In the new version, I have fixed a few bugs, and I have slightly changed the way you install it, in response to some user requests. Now you can just put the data folder in the same place as the plug-in itself, so it will be easier for newcomers to start playing with it.</p>

<p>I’d also like to mention that Spaceship Delay now has a few presets, and a tutorial system I’m very proud of, with tips and tricks embedded in the application, available from the “about tab” by clicking on the Spaceship Delay logo. It looks like this:</p>

<p><img src="http://static.kvraudio.com/i/b/screenshot-5.png" alt="Spaceship Delay Screenshot" /></p>

<p>I hope you’ll enjoy Spaceship Delay, and again, if you are a KVR forum member, don’t hesitate to <a href="https://www.kvraudio.com/kvr-developer-challenge/2016/#dc16-12755">vote for me</a> if you like the plug-in!</p>]]></content><author><name></name></author><summary type="html"><![CDATA[I have just updated Spaceship Delay on KVR audio. You can get the new version here: Spaceship Delay 1.0.2]]></summary></entry><entry><title type="html">Spaceship Delay Anecdotes</title><link href="https://musicalentropy.github.io/Spaceship-Delay-Anecdotes/" rel="alternate" type="text/html" title="Spaceship Delay Anecdotes" /><published>2016-12-07T00:00:00+00:00</published><updated>2016-12-07T00:00:00+00:00</updated><id>https://musicalentropy.github.io/Spaceship-Delay-Anecdotes</id><content type="html" xml:base="https://musicalentropy.github.io/Spaceship-Delay-Anecdotes/"><![CDATA[<p>I’d like to share something cool with you today!</p>

<p>When I was working on the modeling of the Echocord Super 76, I spent a lot of time doing measurements, and I only tried to do something with them a few days before the KVR Developer Challenge deadline, in order to include the spring reverb impulse response in my plug-in. However, after the first calculations, I got something really strange as a result…</p>

<p>I was scared because I thought I had done something wrong with the measurement itself, like having a feedback loop in the recording that I didn’t see at first, and I thought all my recordings were screwed up. You can imagine the impact of this a few days before the deadline, with no easy way to redo the recordings, since I don’t own the device. Fortunately, it turned out that there was a bug in my deconvolution algorithm, and so everything was finally fine!</p>
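<p>For context, impulse response capture by deconvolution ultimately boils down to a spectral division, and a bug in exactly that step can produce the kind of strange result described above. Here is a toy Python sketch of the principle, with made-up signals and a naive DFT (real measurements use exponential sweeps and regularized division):</p>

```python
import cmath, math

def dft(x):
    """Naive O(N^2) DFT -- fine for this toy example."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def convolve(x, h):
    """Direct linear convolution, standing in for the physical measurement."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

# Toy "measurement": a known test signal with no spectral zeros.
x = [0.9 ** n for n in range(16)]
h_true = [1.0, 0.6, 0.3, 0.1]      # the "spring reverb" IR to recover
y = convolve(x, h_true)            # what the recording would contain

# Deconvolution: zero-pad x to len(y), divide spectra, inverse transform.
N = len(y)
X = dft(x + [0.0] * (N - len(x)))
Y = dft(y)
h_est = idft([Y[k] / X[k] for k in range(N)])[:len(h_true)]
print(h_est)  # close to [1.0, 0.6, 0.3, 0.1]
```

A sign flip, an off-by-one in the padding, or a division by near-zero bins in this step all yield an “impulse response” that looks wildly wrong while the recordings themselves are perfectly fine.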

<p>However, I tried to re-use the “wrong” impulse response again a few days ago, and I thought the sound I got was somehow really cool, so I figured it might be a good idea to share it with you! I have added some artificial decay to the reverb tail so it doesn’t stop too suddenly. To use it, you need to open your favorite convolution reverb plug-in and load the impulse response into it. You can find it here:</p>

<p><a href="https://musicalentropy.github.io/files/ErrorImpulseResponse.wav">Impulse Response</a></p>

<p>These kinds of “interesting accidents” happen a lot during development, when a bug or something unexpected in the code gives interesting sonic results…</p>

<p>Enjoy!</p>]]></content><author><name></name></author><summary type="html"><![CDATA[I’d like to share something cool with you today!]]></summary></entry><entry><title type="html">My other contributions to KVR Developer Challenges</title><link href="https://musicalentropy.github.io/My-Other-Contributions-To-KVR-DC/" rel="alternate" type="text/html" title="My other contributions to KVR Developer Challenges" /><published>2016-12-06T00:00:00+00:00</published><updated>2016-12-06T00:00:00+00:00</updated><id>https://musicalentropy.github.io/My-Other-Contributions-To-KVR-DC</id><content type="html" xml:base="https://musicalentropy.github.io/My-Other-Contributions-To-KVR-DC/"><![CDATA[<p>In this new post, I’m going to talk a little about the things I did in the past.</p>

<p>I chose the brand name Musical Entropy 4 years ago when I released <a href="http://www.kvraudio.com/product/inspiration-by-musical-entropy/details">Inspiration</a>, my take on the concept of Brian Eno’s Oblique Strategies, ported into a standalone application and inspired by some of my readings from that time. It was for the <a href="http://www.kvraudio.com/kvr-developer-challenge/2012/">KVR Developer Challenge 2012</a>, and I think I helped a few people fight the blank page syndrome with it. At that time, I really loved starting to create things on my own and releasing them.</p>

<p><img src="http://static.kvraudio.com/i/b/inspiration-capture5.png" alt="Inspiration Screenshot" style="text-align: center;" /></p>

<p>Two years later, it was <a href="http://www.kvraudio.com/product/guitar-gadgets-by-musical-entropy/details">Guitar Gadgets</a>, a VST/AU plug-in which was a compilation of “fake analog pedals”, a way to present a few audio effects algorithms put together in the same place, and which could be very interesting for guitarists. It was designed for the <a href="http://www.kvraudio.com/kvr-developer-challenge/2014/">KVR Developer Challenge 2014</a>, and I updated it a few times after the end of the contest. From time to time, I can see that some people are still using it, and I’m very happy about that.</p>

<p><img src="http://static.kvraudio.com/i/b/screenshot2.1406976926.png" alt="Guitar Gadgets Screenshot" /></p>

<p>When I created it, I had just started out as a freelance developer, after having worked with the company Two Notes on the Torpedo product line for a couple of years. Today, I’m still working with them from time to time, and with a few other companies (I’ve made some contributions to <a href="https://www.sonicacademy.com/products/kick-2">Sonic Academy Kick 2</a> and <a href="https://www.sonicacademy.com/products/kick-2">TSE X50</a>, for example). I’ll probably release more personal stuff next year!</p>]]></content><author><name></name></author><summary type="html"><![CDATA[In this new post, I’m going to talk a little about the things I did in the past.]]></summary></entry><entry><title type="html">Spaceship Delay presentation and KVR DC 16</title><link href="https://musicalentropy.github.io/Spaceship-Delay/" rel="alternate" type="text/html" title="Spaceship Delay presentation and KVR DC 16" /><published>2016-12-04T00:00:00+00:00</published><updated>2016-12-04T00:00:00+00:00</updated><id>https://musicalentropy.github.io/Spaceship-Delay</id><content type="html" xml:base="https://musicalentropy.github.io/Spaceship-Delay/"><![CDATA[<p>Today, I’m going to present to you the new freeware audio plug-in that I have released for the <a href="http://www.kvraudio.com/kvr-developer-challenge/2016/">KVR Developer Challenge 2016</a>. It is called Spaceship Delay, and you can grab it <a href="https://www.kvraudio.com/kvr-developer-challenge/2016/#dc16-12755">here</a>.</p>

<p><img src="http://static.kvraudio.com/i/b/screenshot-4.png" alt="Spaceship Delay Screenshot" /></p>

<h2 id="past-kvr-developer-challenges">Past KVR Developer Challenges</h2>

<p>I’d like to recall what the KVR Developer Challenge is. It’s a contest for audio developers, who have 2-3 months to develop something audio-related, of course (most of the time a plug-in, but also standalone applications or sound libraries). Then, when the development period has ended, a voting period of 3-4 weeks starts. All the members of the KVR Audio website, which hosts a very big audio/music forum, are invited to vote for their 5 preferred creations and give them a rank. Then, at the end of the voting period, all the votes are counted, the ranks of all the contributions are established, and the creators can win a few things depending on the results.</p>

<p>I have already entered the contest twice. In 2012, I released <a href="http://www.kvraudio.com/product/inspiration-by-musical-entropy/details">Inspiration</a>, my take on the concept of Brian Eno’s Oblique Strategies, ported into a standalone application and inspired by some of my readings from that time. Then, in 2014, I released <a href="http://www.kvraudio.com/product/guitar-gadgets-by-musical-entropy/details">Guitar Gadgets</a>, this time a VST/AU plug-in, which was a compilation of “fake analog pedals”, a way to present a few audio effects algorithms put together in the same plug-in, and which could be very interesting for guitarists. I ranked 19th and 6th respectively, which is honorable. I also got very nice reviews from a few places, such as a full page in Computer Music UK, and I learnt a lot of things in the process, both about coding/DSP and about marketing. So, without thinking twice, I decided to do something again this year!</p>

<h2 id="the-concept-of-spaceship-delay">The concept of Spaceship Delay</h2>

<p>In fact, I came up with the idea of Spaceship Delay a long time ago; I just decided to give it a go for the KVR DC 16 when I saw that KVR was organizing a contest again this year. The thing is, I have all the little Korg Monotron synths at home, and even if I don’t use them that much, I love the idea behind them, and more specifically I love this one:</p>

<iframe width="560" height="315" src="https://www.youtube.com/embed/CNXOI1AIjKo" frameborder="0" style="margin-left: auto; margin-right: auto; display:block; text-align:center;" allowfullscreen=""></iframe>

<p>So I was thinking: what if you could use it as an effect? And what about designing a convenient plug-in to do so? A few hours later, I was coding a simple Korg MS-20 filter simulation, and I put it into a delay line. And it already sounded really amazing! Then, 3-4 weeks ago, I started coding a plug-in using that simulation as a basis, plus all the important things that we would want in a good delay plug-in. I also had the chance to rent another amazing device, called the Dynacord Echocord Super 76, which is a tape delay machine with a spring reverb, more obscure than a Roland Space Echo, but which sounds really good too. I don’t have it anymore, but I studied it for a few weeks, and I captured some impulse responses from it. So, I decided to include a simulation of that spring reverb, thanks to convolution as well, and I am very happy with the results. And I had a go at modeling a synthesizer Twin-T filter you might know as well.</p>

<p><img src="/images/Super76.png" alt="Dynacord Echocord Super 76" /></p>

<p>Then, I did some digging into all the things that people like in delay plug-ins, and into the associated technology. I worked on a prototype which allows me to use different kinds of delay lines, using various strategies for fractional delay (linear + cubic interpolation, artefact-free implementations of time-varying allpass IIR filters) and for delay changes (with or without pitch changes). I tried putting various audio effects in the delay line path, and kept only what sounded best. I also added extra filters for post-processing, a phaser, and the attack mode I loved from the Guitar Gadgets delay. I also saw this video, which made me want to add a freeze button and slightly increase the maximum delay value.</p>

<iframe width="560" height="315" src="https://www.youtube.com/embed/LhkXNCmctHw" frameborder="0" style="margin-left: auto; margin-right: auto; display:block; text-align:center;" allowfullscreen=""></iframe>
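<p>As an illustration of the simplest of the fractional-delay strategies mentioned above, here is a minimal Python sketch of a delay line read with linear interpolation. This is a sketch of the general technique, not the plug-in’s code:</p>

```python
class FractionalDelay:
    """Delay line read with linear interpolation between the two
    neighbouring samples -- the simplest fractional-delay strategy
    (cubic interpolation and time-varying allpass filters refine it)."""
    def __init__(self, max_samples):
        self.buf = [0.0] * max_samples
        self.w = 0  # write index

    def process(self, x, delay):
        """Write one input sample, read one sample `delay` samples back
        (delay may be non-integer, e.g. 2.5)."""
        n = len(self.buf)
        self.buf[self.w] = x
        rp = (self.w - delay) % n   # fractional read position
        i = int(rp)
        frac = rp - i
        y = (1.0 - frac) * self.buf[i] + frac * self.buf[(i + 1) % n]
        self.w = (self.w + 1) % n
        return y

d = FractionalDelay(64)
out = [d.process(float(t), 2.5) for t in range(20)]
print(out[10])  # 7.5 -- a ramp delayed by exactly 2.5 samples
```

Linear interpolation is cheap and artefact-free for a ramp like this, but it lowpasses the signal slightly at fractional offsets, which is why the prototype also tries cubic interpolation and allpass-based reads.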

<p>I also had a very good idea for the user interface, which is why I called my plug-in “Spaceship Delay”, but… unfortunately, I greatly underestimated the amount of time necessary to make it a reality, so, with a lot of frustration, I had to keep using the prototype user interface for the KVR DC 16 plug-in itself. However, I plan to update the UI when the contest is done! I have also had the idea of embedding a manual / tutorial in a plug-in for a long time, from the moment I saw that feature in Ableton Live. I was thinking about putting it in Guitar Gadgets 2 years ago, but I didn’t have enough time. For the KVR DC 16, I did succeed, and I’m happy with the result there too. Designing something like that for my recent <a href="https://www.youtube.com/channel/UCaF6fKdDrSmPDmiZcl9KLnQ">JUCE Summit / ADC talks</a> might have helped too.</p>

<h2 id="final-words">Final words</h2>

<p>So, here we are again! I’m very happy to do this contest again, and I hope I will “perform” better than the previous times, but either way, I know I will learn a lot of things again! And I’m already happy with the first reviews I got. I’m also very happy to see Mr. Wavesfactory doing something for the KVR DC 16, since I have been giving him JUCE/DSP lessons over the past months, and he made the Snare Buzz plug-in all by himself. I used his toy and the Siren from Noise Machines to make a little audio demo for my entry yesterday.</p>

<p>I hope you’ll enjoy Spaceship Delay, and if you are a KVR forum member, don’t hesitate to <a href="https://www.kvraudio.com/kvr-developer-challenge/2016/#dc16-12755">vote for me</a> if you like the plug-in!</p>]]></content><author><name></name></author><summary type="html"><![CDATA[Today, I’m going to present you the new freeware audio plug-in that I have released for the KVR Developer Challenge 2016. It is called Spaceship Delay, and you can grab it here.]]></summary></entry></feed>