<h1>Musical Entropy (Blog): Smart tools for happy musicians</h1>
<h1>JUCE 5.1 DSP module</h1>
<p><em>2017-10-31</em></p>
<p>In July 2017, I worked again for <a href="https://roli.com/">ROLI</a> and the <a href="http://www.juce.com">JUCE</a> team, to improve the SDK I use all the time to develop multi-platform audio applications and plug-ins, and to provide some DSP code that has since been included in the so-called <a href="https://juce.com/releases/juce-5-1">DSP module</a>.</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/ZrxrhGg7_xA" frameborder="0" style="margin-left: auto; margin-right: auto; display:block; text-align:center;" allowfullscreen=""></iframe>
<p>I even spent a week at <a href="https://fxpansion.com/">FXpansion</a> headquarters, working at a hellish but fun pace! And I had a great time hanging out with the JUCE + FXpansion guys. Basically, I provided code to help JUCE users writing plug-ins who need a convolution engine, filter design and processing functions, oversampling, and fast approximations of functions…</p>
<p>I’m writing this blog post today because I recently created a few topics on the JUCE forum related to that work, and I wanted to post links to them here:</p>
<ul>
<li><a href="https://forum.juce.com/t/dsp-module-discussion-structure-of-audio-plug-ins-api/23589">DSP module discussion / Structure of audio plug-ins API</a></li>
<li><a href="https://forum.juce.com/t/dsp-module-discussion-new-oversampling-class/24153">DSP module discussion / New Oversampling class</a></li>
<li><a href="https://forum.juce.com/t/dsp-module-discussion-new-audioblock-class/24154">DSP module discussion / New AudioBlock class</a></li>
<li><a href="https://forum.juce.com/t/dsp-module-discussion-iir-filter-and-statevariablefilter/23891">DSP module discussion / IIR::Filter and StateVariableFilter</a></li>
<li><a href="https://forum.juce.com/t/dsp-module-discussion-fast-function-computation-classes/24905">DSP module discussion / Fast function computation classes</a></li>
<li><a href="https://forum.juce.com/t/dsp-module-discussion-new-class-simdregister/24911">DSP module discussion / New class SIMDRegister</a></li>
<li><a href="https://forum.juce.com/t/dsp-module-discussion-new-classes-in-the-maths-folder/24908">DSP module discussion / New classes in the maths folder</a></li>
</ul>
<p>Otherwise, I’ll be at <a href="https://www.juce.com/adc-2017">ADC 17</a> again this year in London, presenting a talk called <a href="https://www.juce.com/adc-2017/talks#fifty-shades-of-distortion">Fifty Shades of Distortion</a> (which will be on YouTube a few weeks later), and when I’m back I’ll update Spaceship Delay with a few improvements and new features!</p>

<h1>Machine Learning Hackathon</h1>
<p><em>2017-01-02</em></p>

<p>As you might not know yet, my main occupation these days is freelance developer and audio signal processing engineer: I do consulting jobs, mainly for companies in the audio industry. Last month, <a href="https://roli.com/">ROLI</a> and the <a href="http://www.juce.com">JUCE</a> team asked me to work on a Machine Learning JUCE module, in order to make it available for a special event, a Hackathon in London. JUCE is the famous SDK used by more and more developers (including me) to release multi-platform audio applications and plug-ins.</p>
<p><img src="https://scontent-cdg2-1.xx.fbcdn.net/v/t31.0-8/15194335_1372787506067788_7919598285914712537_o.jpg?oh=905efe0d82e2f71921f422a1098a4b0f&oe=59193DE7" alt="Machine Learning Hackathon" title="Machine Learning Hackathon" /></p>
<p>They started a partnership with <a href="http://www.gold.ac.uk/">Goldsmiths, University of London</a> and <a href="http://www.ircam.fr">IRCAM</a> in Paris, around their European project called <a href="http://rapidmix.goldsmithsdigital.com/">RapidMix</a>, for Realtime Adaptive Prototyping for Industrial Design of Multimodal Interactive eXpressive Technology (yes, I know it’s long). To summarize, it’s a Machine Learning C++ toolkit/API designed to give developers and artists fast and easy access to Machine Learning algorithms, in order to create innovative human-computer interfaces for gesture recognition and new musical instruments.</p>
<p>So, two weeks before the hackathon, the JUCE team asked me to develop a JUCE module wrapping a part of RapidMix called RapidLib, created by <a href="http://www.mikezed.com/">Michael Zbyszyński</a>. It is the little brother of the <a href="http://www.wekinator.org/">Wekinator</a>, designed by <a href="https://www.doc.gold.ac.uk/~mas01rf/Rebecca_Fiebrink_Goldsmiths/welcome.html">Rebecca Fiebrink</a>, also at Goldsmiths. You might know them from the amazing talks they gave at the <a href="https://www.youtube.com/watch?v=8IEVWj_OYhM">Audio Developer Conference 2016</a> in November, and from the MOOC <a href="https://www.kadenze.com/courses/machine-learning-for-musicians-and-artists/info">Machine Learning for Musicians and Artists</a>. I coded the module and some JUCE examples, and I also gave a talk at the beginning of the Hackathon so everybody could grab the basic concepts and start doing something cool.</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/8IEVWj_OYhM" frameborder="0" style="margin-left: auto; margin-right: auto; display:block; text-align:center;" allowfullscreen=""></iframe>
<p>You can see the slides I used during my talk here: <a href="/files/HackathonSlides.pdf">Hackathon Slides</a>.</p>
<h2 id="machine-learning-">Machine Learning?</h2>
<p>What’s interesting here is that I knew almost nothing at all about Machine Learning before! But I learnt everything I could about the subject in that short amount of time, in order to do the job successfully and to be able to explain how to use the JUCE module and do cool things with it. Ultimately, everything worked as expected, and I even got nice reviews about my work and my talk from the people involved in the RapidMix project, which makes me very proud!</p>
<p><img src="http://blog.euratechnologies.com/content/uploads/2015/04/machine-learning.jpg" alt="Machine Learning" title="Machine Learning" /></p>
<p>Anyway, the reason I’m talking about this is to tell you how excited I am about this collaboration and the things that are going to happen next. The JUCE module isn’t available to the public yet, since it is in a very early alpha version right now, but it will be released in the future, and the people who already got to have fun with it at the Hackathon were very happy to catch that opportunity too.</p>
<p>So, why am I so excited about this? Since I’m new to this area, I wouldn’t be able to explain in detail what Machine Learning is and how it works, but here is what I have learnt already (have a look at my talk slides too):</p>
<ul>
<li>
<p>The very principle of Machine Learning is the use of an algorithm which is <strong>trained</strong> with data provided by the user. The <strong>training set</strong> is made of samples pairing inputs with their associated outputs. The trained algorithm is then expected to behave in a given way when it receives new, <strong>arbitrary</strong> input data.</p>
</li>
<li>
<p>There are two main kinds of Machine Learning algorithms: those that perform <strong>classification</strong>, and those that perform <strong>regression</strong>.</p>
</li>
<li>
<p>In <strong>classification</strong> applications, the output of the algorithm is an integer variable or a class label. For example, a classification algorithm is trained with pictures of animals, and the associated animal names. When a new picture is given to it, the algorithm is supposed to be able to tell you what animal is in the picture.</p>
</li>
<li>
<p>In <strong>regression</strong> applications however, the outputs are continuous: they can be any float/double variables. As an example of regression, imagine you want to model a mathematical function, like a hyperbolic tangent, with a Machine Learning algorithm. You train your regression algorithm with a set of input + output data that you compute yourself. Then the algorithm processes any input value and returns an output which is supposed to be as close as possible to the original mathematical function’s result.</p>
</li>
<li>
<p>To train Machine Learning algorithms properly, it is common to use <strong>features</strong>, which means sampling a large amount of data and compressing it, or extracting some meaningful information from it. For example, in audio, it is possible to train an algorithm with an RMS value taken from a 200 ms buffer of audio samples, instead of feeding the algorithm directly with that buffer. It is also possible to extract information such as the pitch, the frequency response, etc. With a video signal, you can reduce the resolution of the image or extract an overall brightness value.</p>
</li>
</ul>
<h2 id="applications-of-machine-learning">Applications of Machine Learning</h2>
<p>So, what can we do with all of this? In the audio examples I saw, regression algorithms were used to map and interpolate the parameters of an audio synthesizer using controllers such as the computer mouse, a <a href="https://www.leapmotion.com/">Leap Motion</a>, some <a href="https://roli.com/products/blocks">ROLI Blocks</a>, a joystick, etc. In her videos, Rebecca Fiebrink uses a lot of weird controllers and the Wekinator to create new instruments, with new ways of interacting with a computer and making music. It is also fun to see that the musician can be involved during the training phase; it is not always the application developer’s duty to train the Machine Learning algorithm.</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/rnlCGw-0R8g" frameborder="0" style="margin-left: auto; margin-right: auto; display:block; text-align:center;" allowfullscreen=""></iframe>
<p>During the Hackathon, I coded something cool too (although unfortunately I wasn’t able to complete it before the end of the event). I developed a drum machine application which can play 3 samples (a kick, a snare and a hi-hat sample), with 3 patterns that the user can modify with the mouse, or live by connecting ROLI Blocks to the computer via USB. The user can then “train” a classification algorithm with the computer’s microphone input, in order to associate outside sounds with one of the three patterns, using audio feature extraction. This way, they can trigger pattern 1 with a low-pitched sound for example, pattern 2 with a white-noise-like impulse, and pattern 3 with anything else, whatever they choose. And finally, there is a simple delay audio effect with 4 parameters (delay time, feedback, mix, lowpass filter frequency) which can be “trained” with the mouse cursor. That means the user can associate a mouse cursor position with a given set of the 4 parameters, and then play with the audio effect simply by dragging the mouse over the window, which moves its parameters continuously, as in Michael Zbyszyński’s talk. Sounds fun, doesn’t it? I promise I’ll make it available one way or another at some point, once completed.</p>
<p>The other participants did cool things too, even if it was difficult, in such a short amount of time, to discover a new tool and a new technology and to build something with them… But the beauty of JUCE is that it is possible to create a new project, handle all the dependencies, and start coding in a few seconds! The JUCE module at the hackathon featured simple classification and regression classes, but unfortunately nothing yet on the feature extraction side, which was the main concern of the participants at the time. We all had to code some of it on our own, which partly explains why I wasn’t able to finish my application…</p>
<h2 id="conclusion">Conclusion</h2>
<p>Anyway, a RapidMix JUCE module is being developed right now. Machine Learning is a hot topic nowadays, thanks to all the communication about Deep Learning, and to some current audio-related applications such as speech recognition or automatic music composition. I’m still discovering the topic, but in my opinion, having something as simple as a JUCE module to let audio developers have fun with Machine Learning, thanks to the work of the RapidMix team, could be something huge for musicians and sound engineers. Not because Machine Learning would help create really innovative stuff (all the applications I have talked about could be done without it), but because a lot of complicated problems become very simple to solve thanks to Machine Learning, and because these algorithms let you experiment with a lot of things very quickly! Finally, I think that the current users of Machine Learning APIs and algorithms are most of the time in the academic world, and having something like a module in the JUCE library could easily make these features available to new people, with new application ideas, which is awesome.</p>
<p>I’d also like to say that I’m curious about the next developments of RapidMix, to see how it will perform compared with other Machine Learning APIs for C++. And I would love to add to Spaceship Delay some Machine Learning features I have in mind…</p>
<p>Anyway, thanks to all the people involved in this hackathon for organizing it; it was great to talk with them and with the participants!</p>
<h2 id="bibliography">Bibliography</h2>
<p>If you want to learn more about Machine Learning, there are tons of information sources on the internet, but I suggest you have a look at these first:</p>
<ul>
<li><a href="https://ml.berkeley.edu/blog/2016/11/06/tutorial-1/">Machine Learning Crash Course Part 1, by Daniel Geng and Shannon Shih</a></li>
<li><a href="https://www.kadenze.com/courses/machine-learning-for-musicians-and-artists/info">Machine Learning for Musicians and Artists</a></li>
<li><a href="https://www.coursera.org/learn/machine-learning/home/welcome">The classic Andrew Ng’s Machine Learning course on Coursera</a></li>
</ul>

<h1>KVR DC 2016 results and a few announcements</h1>
<p><em>2016-12-21</em></p>

<p>So, you might already know it, but the <a href="https://www.kvraudio.com/kvr-developer-challenge/2016/">KVR DC 16 is finished</a>, and I ended up at rank #3! I would like to warmly thank all the people who supported <a href="http://www.kvraudio.com/product/spaceship-delay-by-musical-entropy/details">Spaceship Delay</a> during the contest and who are still planning to use it, particularly the ones who helped me make the Pro Tools version work on Mac OS X, in the forums or over Skype, and the amazing guy who worked on a custom skin during the KVR DC! Thanks a lot again!</p>
<p><img src="/images/Reskin.png" alt="Reskin" /></p>
<p>So the winners are Youlean with his <a href="http://www.kvraudio.com/product/youlean-loudness-meter-by-youlean/details">Youlean Loudness Meter</a> and Ursa DSP / Dave Elton with his delay <a href="http://bedroomproducersblog.com/2016/12/01/ursa-dsp-lagrange/">Lagrange</a>. It’s funny to see two delay plug-ins in the top 3! Youlean Loudness Meter is probably one of the best loudness meters you can find for free, and I have seen a video comparing it with commercial alternatives. Lagrange is probably a less versatile delay than mine, but it uses granular methods to get very interesting and singular sounds that cannot be obtained any other way, close to what you can get with stutter effects or some specific flanger / chorus effects. I suggest you try them if you haven’t already.</p>
<p><img src="http://static.kvraudio.com/i/b/lagrange084.png" alt="Lagrange" /></p>
<p>I would also like to tell you that I’m very happy to have a post about Spaceship Delay on Peter Kirn’s blog, Create Digital Music, which I have been reading for a long time. The article is here: <a href="http://cdm.link/2016/12/spaceship-delay-is-an-insane-free-plug-in-inspired-by-hardware/">Spaceship Delay is an insane free plug-in inspired by hardware</a></p>
<p>And one last thing: I have just updated Spaceship Delay to <a href="http://www.kvraudio.com/product/spaceship-delay-by-musical-entropy/details">version 1.0.5</a>. The new version includes a few things people have wanted for a long time: mono/stereo versions of the plug-in for Pro Tools, Logic Pro X, etc., a selector for the location of the Post-FX (before or after the mix control), a new tremolo Post-FX, and extra modulation options applied to the low-pass filters. There are also new presets and some extra tutorial content, so I suggest you update your presets / tutorial folders as well.</p>
<p>That’s it for today. I’ll probably talk about Spaceship Delay again very soon, since I have big plans for it in the future!</p>

<h1>The filters in Spaceship Delay</h1>
<p><em>2016-12-12</em></p>

<p>In this new blog post I’m going to talk about the filter algorithms implemented in <a href="http://www.kvraudio.com/product/spaceship-delay-by-musical-entropy/details">Spaceship Delay</a>! Right now, there are 4 different filter types in the filter section: Low/High Cut, Low/High Shelf, Japanese, and German/Canadian.</p>
<h2 id="lowhigh-cut-and-lowhigh-shelf">Low/High Cut and Low/High Shelf</h2>
<p>These filters are more or less the standard 2nd-order filters or “biquads”, modeling the behaviour of the analog circuit called the <strong>State Variable Filter (SVF)</strong>, with a -12 dB/octave attenuation. The equations are derived from the famous <a href="http://www.musicdsp.org/files/Audio-EQ-Cookbook.txt">Robert Bristow-Johnson Audio EQ Cookbook</a> that every DSP engineer in the world knows, and sometimes uses without even knowing. The same goes for most filter algorithms available in commercial software.</p>
<p>However, the original implementation has some well-known drawbacks, which I talked about in my recent <a href="https://www.youtube.com/watch?v=esjHXGPyrhg">Audio Developer Conference talk</a>. The two main drawbacks concern the behaviour of the filters in the high-frequency range, and when the cutoff frequency is modulated quickly. In the first case, a lowpass filter for example <strong>attenuates the high frequencies far too much</strong> close to half the sampling rate. In the second case, quick modulation with the original implementation, which uses a simulation structure called the <a href="https://en.wikipedia.org/wiki/Digital_filter#Direct_form_I">Direct Form</a>, <strong>produces high-amplitude artefacts</strong>.</p>
<p>In Spaceship Delay, I have solved these issues by using another simulation structure called the <strong>Topology-Preserving Transform</strong>, which is well documented on the KVR forums (DSP section) and in the free e-book by Native Instruments’ Vadim Zavalishin, <a href="https://www.native-instruments.com/fileadmin/ni_media/downloads/pdf/VAFilterDesign_1.1.1.pdf">The Art of V.A. Filter Design</a>. I have also oversampled the filtering algorithm, since this section can produce some nonlinear stuff, as we will see with the other kinds of filters. That gives me the right behaviour in the high frequencies and when a parameter is changed by the user with automation. It will be even more useful if I later decide to add, in the modulation section, an envelope follower or an LFO modulating the cutoff frequencies. Moreover, this structure is already used in the phaser algorithm, for obvious reasons.</p>
<p>I’d like to say there is absolutely nothing really innovative here, since this kind of filtering algorithm can already be found in most current commercial plug-ins involving filtering. The Low/High Cut feature in Spaceship Delay is nothing groundbreaking either; it’s something you find in most other delay plug-ins, so its presence here is more than relevant without being particularly new. However, I have not often seen low and high shelf filters in this context, and I thought they would be a nice addition to drive the nonlinear sections further, to push the delay line into resonant feedback in an interesting way, or simply to cut some frequency content in a different way than the Low/High Cut filters. It was my friend François-Maxime from <a href="http://www.lesliensduson.com/">Les Liens du Son</a> who suggested this, and he was right!</p>
<h2 id="japanese">Japanese</h2>
<p>The Japanese filter is obviously my take on the MS-20 / Monotron filters. You can find a very interesting study of their behaviour on Tim Stinchcombe’s website, which I read a lot during development.</p>
<p><a href="http://www.timstinchcombe.co.uk/index.php?pge=korg">Tim Stinchcombe study of Korg MS-20 filter</a></p>
<p>I’m not that proud of the realism and simulation quality of the result, since I’m still new to synthesizer filter modeling. But I think the “Japanese filter” in Spaceship Delay sounds quite good, and it is very interesting when put in a delay line, to get that screaming, oscillating feedback everybody loves in “analog” delay software/hardware. As I said before, building a delay around that filter algorithm was the starting point of Spaceship Delay in terms of DSP. I wanted something that sounds a little like my Korg Monotron Delay, that I could use as a plug-in. I also thought that using other kinds of “Virtual Analog” filters in the delay line could be interesting. So I did some research, trying to dismiss the too-obvious choices for an extra V.A. filter in the plug-in, and I ended up modeling the filter I called “German/Canadian”.</p>
<h2 id="germancanadian">German/Canadian</h2>
<p>For those who have read the embedded tutorial in Spaceship Delay, or seen the other blog posts, you already know the true identity of that filter. It is the one that can be found in the <a href="https://meeblip.com/">Meeblip Anode and Triode synthesizers</a>, made by Peter Kirn from <a href="http://cdm.link/">Create Digital Music</a> and James Grahame from <a href="https://meeblip.com/">Blipsonic / Meeblip</a>.</p>
<p><img src="/images/Meeblip-synths.png" alt="Meeblip Anode and Triode" style="textalign: center;" /></p>
<p>I have spent a lot of time studying the schematic of the filter, which is available on <a href="https://github.com/meeblip">GitHub</a>, covered by a permissive Creative Commons and GPLv3 license, since the Meeblip hardware + software is open source! I wanted a filter in Spaceship Delay with second-order attenuation like the Korg MS-20 filter (and not like the famous ladder filters), and the result sounded surprisingly good too when put in a delay line.</p>
<p>My implementation, as with the Korg MS-20 filter, isn’t that realistic yet, since I used a simplified model, and because I have not yet reproduced the same mapping for the cutoff and resonance knobs as in the original units. However, my model is based on the equations of the original circuit, which I studied first in <a href="http://www.linear.com/designtools/software/">LTSpice</a> and then in a simulation context, to make it sound as good as possible.</p>
<p><img src="/images/Meeblip-LTSpice.png" alt="LTSpice Screenshot" style="textalign: center;" /></p>
<p>As people who know how to read a synthesizer filter schematic can see, it’s a filter looking more or less like a Twin-T VCF, acting as a lowpass filter but also removing a little of the bass frequency range, which gives it a very interesting sound signature in my opinion. It is possible to study it further by determining its transfer function from the electronic equations:</p>
<script type="math/tex; mode=display">H(s) = - \frac{R_2}{R_1} \frac{1 + C s (2 R_F + R_Q) + R_F R_Q (C s)^2}{1 + C s (2 R_F + R_Q) + R_F (R_2 + R_Q) (C s)^2}</script>
<p>I’ve made it zero-delay feedback, but the way it saturates is still far from the original in my opinion, so I’m going to improve my model over the next few weeks, and I’ll probably start by updating the mapping of the controls so it behaves like the original, at least in a strictly linear sense.</p>
<p>As a side note, don’t forget to get the latest version of Spaceship Delay to experiment with this filter, since the algorithm has changed a little since the beginning of the KVR Developer Challenge, to solve a few issues I had.</p>
<h2 id="next-steps-for-spaceship-delay">Next steps for Spaceship Delay</h2>
<p>I have already got tons of very nice comments and feature requests for Spaceship Delay. I even got a very nice new skin proposal from a user of the KVR forums. I’ll probably spend a lot of time after the voting period adding the most interesting features people have submitted, plus a few things I had in mind that I wasn’t able to finish before the deadline. But what I can already tell you is that I’m working on the AAX version of the plug-in, and it should be available in a few days!</p>

<h1>New Spaceship Delay version online!</h1>
<p><em>2016-12-08</em></p>

<p>I have just updated Spaceship Delay on KVR Audio. You can get the new version here: <a href="http://www.kvraudio.com/product/spaceship-delay-by-musical-entropy/details">Spaceship Delay 1.0.2</a></p>
<p>In the new version, I have fixed a few bugs, and I have changed a little the way you install it, in response to some user requests. Now you can just put the data folder in the same place as the plug-in itself, so it will be easier for newcomers to start playing with it.</p>
<p>I’d also like to point out that Spaceship Delay now has a few presets, and a tutorial system I’m very proud of, with tips and tricks embedded in the application, available from the “about” tab by clicking on the Spaceship Delay logo. It looks like this:</p>
<p><img src="http://static.kvraudio.com/i/b/screenshot-5.png" alt="Spaceship Delay Screenshot" /></p>
<p>Finally, if you have not heard them yet, I would like to share the audio demos I did for Spaceship Delay. It was funny for me to do a jazz/blues cover in a dub style, knowing that I’m more into rock/metal songs in general!</p>
<iframe width="100%" height="166" scrolling="no" frameborder="no" src="https://w.soundcloud.com/player/?url=https://api.soundcloud.com/playlists/281718983&color=ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false"></iframe>
<p>I hope you’ll enjoy Spaceship Delay, and again, if you are a KVR forum member, don’t hesitate to <a href="https://www.kvraudio.com/kvr-developer-challenge/2016/#dc16-12755">vote for me</a> if you like the plug-in!</p>

<h1>Spaceship Delay Anecdotes</h1>
<p><em>2016-12-07</em></p>

<p>I’d like to share something cool with you today!</p>
<p>When I was working on the modeling of the Echocord Super 76, I spent a lot of time doing measurements, and I only tried to do something with them a few days before the KVR Developer Challenge deadline, to include the spring reverb impulse response in my plug-in. However, after the first calculations, here is what I got:</p>
<iframe width="100%" height="166" scrolling="no" frameborder="no" src="https://w.soundcloud.com/player/?url=https://api.soundcloud.com/playlists/281820680&color=ff5500&auto_play=false&hide_related=false&show_comments=true&show_user=true&show_reposts=false"></iframe>
<p>I was scared, because I thought I had done something wrong with the measurement itself, like having a feedback loop in the recording that I didn’t see at first, and that all my recordings were screwed up. You can imagine the impact of this a few days before the deadline, with no easy way to redo the recordings since I don’t own the device. Fortunately, it turned out there was a bug in my deconvolution algorithm, so everything was fine in the end!</p>
<p>However, I tried to reuse the “wrong” impulse response again a few days ago, and I thought the sound I got was somehow really cool, so it might be a good idea to share it with you! I have added some artificial decay to the reverb tail so it doesn’t stop too suddenly. To use it, open your favorite convolution reverb plug-in and load the impulse response into it. You can find it here:</p>
<p><a href="http://musicalentropy.github.io/files/ErrorImpulseResponse.wav">Impulse Response</a></p>
<p>These kinds of “interesting accidents” happen a lot during development, when a bug or something unexpected in the code gives interesting sonic results…</p>
<p>Enjoy!</p>

<h1>My other contributions to KVR Developer Challenges</h1>
<p><em>2016-12-06</em></p>

<p>In this new post, I’m going to talk a little about the things I did in the past.</p>
<p>I chose the brand name Musical Entropy 4 years ago, when I released <a href="http://www.kvraudio.com/product/inspiration-by-musical-entropy/details">Inspiration</a>, my take on the concept of Brian Eno’s Oblique Strategies, ported into a standalone application and inspired by some of my reading at that time. It was for the <a href="http://www.kvraudio.com/kvr-developer-challenge/2012/">KVR Developer Challenge 2012</a>, and I think I helped a few people find ways to fight the blank page syndrome with it. I really loved starting to create things on my own and releasing them at that time.</p>
<p><img src="http://static.kvraudio.com/i/b/inspiration-capture5.png" alt="Inspiration Screenshot" style="textalign: center;" /></p>
<p>Two years later came <a href="http://www.kvraudio.com/product/guitar-gadgets-by-musical-entropy/details">Guitar Gadgets</a>, a VST/AU plug-in which was a compilation of “fake analog pedals”, a way to present a few audio effect algorithms together in one place, which could be very interesting for guitarists. It was designed for the <a href="http://www.kvraudio.com/kvr-developer-challenge/2014/">KVR Developer Challenge 2014</a>, and I updated it a few times after the end of the contest. From time to time, I see that some people are still using it, and I’m very happy about that.</p>
<p><img src="http://static.kvraudio.com/i/b/screenshot2.1406976926.png" alt="Guitar Gadgets Screenshot" /></p>
<p>When I created it, I had just become a freelance developer, after working for a couple of years with the company Two Notes on the Torpedo product line. Today, I’m still working with them from time to time, and with a few other companies (I’ve made some contributions to <a href="https://www.sonicacademy.com/products/kick-2">Sonic Academy Kick 2</a> or <a href="https://www.sonicacademy.com/products/kick-2">TSE X50</a>, for example). I’ll probably release more personal stuff next year!</p>In this new post, I’m going to talk a little about the things I did in the past.Spaceship Delay presentation and KVR DC 162016-12-04T00:00:00+00:002016-12-04T00:00:00+00:00http://musicalentropy.github.io/Spaceship-Delay<p>Today, I’m going to present to you the new freeware audio plug-in I have released for the <a href="http://www.kvraudio.com/kvr-developer-challenge/2016/">KVR Developer Challenge 2016</a>. It is called Spaceship Delay, and you can grab it <a href="https://www.kvraudio.com/kvr-developer-challenge/2016/#dc16-12755">here</a>.</p>
<p><img src="http://static.kvraudio.com/i/b/screenshot-4.png" alt="Spaceship Delay Screenshot" /></p>
<h2 id="past-kvr-developer-challenges">Past KVR Developer Challenges</h2>
<p>First, let me recall what the KVR Developer Challenge is. It’s a contest for audio developers, who have 2-3 months to develop something audio-related (most of the time a plug-in, but also standalone applications or sound libraries). When the development period ends, a voting period of 3-4 weeks starts. All members of the KVR Audio website, a very big audio/music forum, are invited to vote for their five favorite entries and rank them. At the end of the voting period, the votes are counted, all the entries are ranked, and the creators can win a few prizes depending on the results.</p>
<p>I have entered that contest twice already. In 2012, I released <a href="http://www.kvraudio.com/product/inspiration-by-musical-entropy/details">Inspiration</a>, my take on the concept of Brian Eno’s Oblique Strategies ported into a standalone application, inspired by some of my readings at the time. Then, in 2014, I released <a href="http://www.kvraudio.com/product/guitar-gadgets-by-musical-entropy/details">Guitar Gadgets</a>, this time a VST/AU plug-in, a compilation of “fake analog pedals” presenting a few audio effect algorithms together in the same plug-in, which could be very interesting for guitarists. I ranked 19th and 6th respectively, which is honorable. I also got very nice reviews in a few places, such as a full page in Computer Music UK, and I learnt a lot of things in the process, both about coding/DSP and about marketing. So, without thinking twice, I decided to do something again this year!</p>
<h2 id="the-concept-of-spaceship-delay">The concept of Spaceship Delay</h2>
<p>In fact, I came up with the idea for Spaceship Delay a long time ago; I just decided to give it a go for the KVR DC 16 when I saw that KVR was organizing the contest again this year. The thing is, I have all the little Korg Monotron synths at home, and even if I don’t use them that much, I love the idea behind them, and I especially love this one:</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/CNXOI1AIjKo" frameborder="0" style="margin-left: auto; margin-right: auto; display:block; text-align:center;" allowfullscreen=""></iframe>
<p>So I was thinking: what if you could use it as an effect? And what about designing a convenient plug-in to do so? A few hours later, I was coding a simple Korg MS-20 filter simulation, and I put it inside a delay line. It already sounded really amazing! Then, 3-4 weeks ago, I started coding a plug-in using that simulation as a basis, along with all the important things you would want in a good delay plug-in. I also had the chance to rent another amazing device, the Dynacord Echocord Super 76, a tape delay machine with a spring reverb, more focused than a Roland Space Echo but which sounds really good too. I don’t have it anymore, but I studied it for a few weeks and captured some impulse responses from it. So I decided to include a convolution-based simulation of that spring reverb as well, and I am very happy with the results. I also had a go at modeling a synthesizer Twin-T filter you might know.</p>
<p><img src="/images/Super76.png" alt="Dynacord Echocord Super 76" /></p>
<p>Then, I did some digging into all the things people like in delay plug-ins, and into the associated technology. I worked on a prototype that let me try different kinds of delay lines, using various strategies for fractional delay (linear and cubic interpolation, artefact-free implementations of time-varying allpass IIR filters) and for delay changes (with or without pitch changes). I tried putting various audio effects in the delay line path, and kept only what sounded best. I also added extra post-processing filters and a phaser, plus the attack mode I loved from the Guitar Gadgets delay. And I saw this video, which made me want to add a freeze button and increase the maximum delay value a little.</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/LhkXNCmctHw" frameborder="0" style="margin-left: auto; margin-right: auto; display:block; text-align:center;" allowfullscreen=""></iframe>
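To give an idea of what a fractional-delay strategy looks like, here is a minimal sketch of the simplest one mentioned above, linear interpolation between two buffer samples. The function name and the buffer layout are assumptions for illustration, not code from the actual prototype:

```python
def read_fractional(buf, delay):
    """Read `delay` samples back from the newest sample (buf[-1]),
    where `delay` can be fractional; requires 0 < delay <= len(buf) - 1.
    Linearly interpolates between the two nearest stored samples."""
    pos = len(buf) - 1 - delay    # fractional read position
    i = int(pos)                  # integer part
    frac = pos - i                # fractional part in [0, 1)
    return buf[i] + frac * (buf[i + 1] - buf[i])
```

Linear interpolation is cheap but slightly lowpasses the signal; cubic interpolation or allpass interpolation trades more computation for better high-frequency behavior, which is why it is worth prototyping several strategies and listening to them.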
<p>I also had a very good idea for the user interface, which is why I called my plug-in “Spaceship Delay”, but… unfortunately I greatly underestimated the amount of time needed to make it a reality, so, with a lot of frustration, I had to keep the prototype user interface for the KVR DC 16 plug-in itself. However, I plan to update the UI once the contest is over! I have also wanted to embed a manual / tutorial inside a plug-in for a long time, ever since I saw that feature in Ableton Live. I thought about putting one in Guitar Gadgets two years ago, but I didn’t have enough time. For the KVR DC 16, I succeeded, and I’m happy with the result there too. Designing something similar for my recent <a href="https://www.youtube.com/channel/UCaF6fKdDrSmPDmiZcl9KLnQ">JUCE Summit / ADC talks</a> might have helped too.</p>
<h2 id="final-words">Final words</h2>
<p>So, here we are again! I’m very happy to enter the contest once more, and I hope I will “perform” better than the last times; either way, I know I will learn a lot of things again! I’m already happy with the first reviews I got. I’m also very happy to see Mr. Wavesfactory entering the KVR DC 16: I have been giving him JUCE/DSP lessons over the past few months, and he made the Snare Buzz plug-in all on his own. I used his plug-in and the Siren from Noise Machines for a little audio demo of my entry yesterday.</p>
<p>I hope you’ll enjoy Spaceship Delay, and if you are a KVR forum member, don’t hesitate to <a href="https://www.kvraudio.com/kvr-developer-challenge/2016/#dc16-12755">vote for me</a> if you like the plug-in!</p>Today, I’m going to present to you the new freeware audio plug-in I have released for the KVR Developer Challenge 2016. It is called Spaceship Delay, and you can grab it here.New blog on GitHub pages2016-12-03T00:00:00+00:002016-12-03T00:00:00+00:00http://musicalentropy.github.io/New-Blog<p>Hello everybody!</p>
<p>I have moved my main blog to GitHub Pages, because it’s easier to create new posts here, and because I wanted to try another blogging platform. I’m probably going to update my website very soon as well.</p>
<p>So, welcome to the new Musical Entropy blog! I promise I’m going to be more active than before, so stay tuned, and thanks for reading!</p>Hello everybody!