2016.03.01 – Audio Granulator Progress

While writing an audio program for the ARM chip and another display, I began to also write code for OS X and iOS.  I hit some good vibes and kept moving forward with the Mac app, leaving the ARM system on the bench, as is, to come back to after I finish this Mac app.  I switched gears, as I normally do, but I now have the pieces together to make this app, and it’s multiplying my motivation…

I have been studying DSP and audio algorithms in C++ quite a bit lately (and I had already made great progress with the DFT and FFT in 2015), and am having a great deal of fun analyzing and manipulating sound.  My degree and background are in this area, but I have never had to actually code a chorus or reverb or EQ (although I’ve built hardware analog EQs and compressors).  So… I decided to build this higher-level abstraction app to beef up my real-time digital audio knowledge.  I’m using Core Audio and Audio Units.  Then, after one or two of these apps, I can come back to the bare ARM system, for which I’ll have to write these “units” from scratch (and I can’t wait – looking forward to it.  Just need a quick win first, ’cuz I’m gonna spend some time on it – like a year).  It’ll also help me decide how I’m going to organize the higher-level abstractions over my low-level C++ code once I get back to coding the embedded ARM system.

ARM Breadboard Circuit 1 Brad Ormand

I’ll have the fast FFT implementations and FIR and IIR filters in the CMSIS DSP lib, etc., and I’ll at least have a fast sine and cosine routine, but it’s a lowest-level implementation that I’ll have to “hand” assemble into a 4-pole LPF or a phaser or even a simple notch, etc.  I’ll have to build my post-DAC “Nyquist” filter at 22kHz and all of that stuff on the ARM system in hardware, too.  It’ll be at least 44.1kHz at 16-bit – I want people to be able to actually use the audio generated by it – some really killer and sonically-pleasing sounds.  So, that’s coming up…
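
Just to sketch what “hand assembling” one of those units might look like, here’s a minimal biquad notch in C++ built from the RBJ cookbook coefficients – the struct and names are my own illustration, not anything from the CMSIS API:

```cpp
#include <cmath>

// Minimal biquad notch filter (RBJ cookbook coefficients), Direct Form I.
// A sketch only -- structure and names are illustrative, not from CMSIS.
struct BiquadNotch {
    float b0, b1, b2, a1, a2;              // coefficients, normalized so a0 == 1
    float x1 = 0, x2 = 0, y1 = 0, y2 = 0;  // filter state

    BiquadNotch(float sampleRate, float centerHz, float q) {
        constexpr float kPi = 3.14159265358979f;
        float w0    = 2.0f * kPi * centerHz / sampleRate;
        float alpha = std::sin(w0) / (2.0f * q);
        float cosw0 = std::cos(w0);
        float a0    = 1.0f + alpha;
        b0 = 1.0f / a0;
        b1 = -2.0f * cosw0 / a0;
        b2 = 1.0f / a0;
        a1 = -2.0f * cosw0 / a0;
        a2 = (1.0f - alpha) / a0;
    }

    float process(float x) {
        float y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x;
        y2 = y1; y1 = y;
        return y;
    }
};
```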

Brad Ormand's Second Fourier Transform - Noise

It’s kind of a tall order for me.  I have some work to do before I can write a digital audio system from scratch at the chip/embedded level – from math to code to electrical components.  I can’t wait to do it and spend time on it, but I must prepare.  So, I did a few mathematics problems this weekend dealing with impulse response and the summation of the FIR filter (written out below) to get to know what I’m dealing with.  I’m going to work through it all step-by-step in my free time until I’m able to grasp it and code good implementations.
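
For reference, the FIR summation in question is just the convolution of the input with the N impulse-response coefficients:

```latex
y[n] = \sum_{k=0}^{N-1} h[k]\, x[n-k]
```

Each output sample is a weighted sum of the current and previous N−1 input samples, with the weights h[k] being the filter’s impulse response – that’s the whole filter.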

M A C   A U D I O   A P P

I have been successful at building a Core-Audio-based sampler for iOS in C++ and Swift.  I have a functional demo that starts and ends the time window at particular points along the audio clip using touch – all real-time.  My next step is to draw the waveform out into a SpriteKit view and to get the app to respond to touch drags, resizing the play window against the visual waveform on the UI.  Just that part itself has been a bit tedious, not to mention any zooming of the waveform, which I haven’t even considered yet.  Then, of course, I’ll need to render the playhead rolling along as the samples get played.  There’s a lot of interpolation that has to be done since there aren’t enough pixels to show every sample, and I’m trying to get that stuff out to its own thread and to see if I can somehow pre-calculate it all when the audio first comes in.  I made a pencil sketch of the UI to come – it’s the initial view, but I’ll have a keyboard or sample-pad view of sorts.
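
That pre-calculation is basically peak decimation: for each horizontal pixel, reduce its bucket of samples to a min/max pair once, up front, so the draw loop never has to walk the raw samples.  A rough C++ sketch (my own names, nothing from SpriteKit):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// One min/max pair per horizontal pixel of the waveform view.
struct PeakPair { float min; float max; };

// Reduce a full sample buffer to pixelWidth min/max pairs, computed once
// when the audio comes in, so drawing never touches the raw samples.
std::vector<PeakPair> buildPeaks(const std::vector<float>& samples,
                                 std::size_t pixelWidth) {
    std::vector<PeakPair> peaks(pixelWidth, {0.0f, 0.0f});
    if (samples.empty() || pixelWidth == 0) return peaks;

    const double samplesPerPixel =
        double(samples.size()) / double(pixelWidth);

    for (std::size_t px = 0; px < pixelWidth; ++px) {
        std::size_t begin = std::size_t(px * samplesPerPixel);
        std::size_t end   = std::min(samples.size(),
                                     std::size_t((px + 1) * samplesPerPixel) + 1);
        auto [lo, hi] = std::minmax_element(samples.begin() + begin,
                                            samples.begin() + end);
        peaks[px] = { *lo, *hi };
    }
    return peaks;
}
```

Re-running this per zoom level (or caching a few levels ahead of time) is one way the later zooming could fall out of the same code.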

Brad Ormand - Renegade Rambler Audio UI idea

As for the audio source, right now I have it playing from a file.  But, I don’t think I’ll let the user bring in their own files.  It’s just too weird on the legal side since I’ll let the user save audio back to disk, and I’d have to detect and convert whatever type of file they attempt to load (MP3, Ogg, AAC, whichever flavor of WAV, etc.), and well…  I really just want the user to be able to press record and mess with the stuff that gets recorded, probably with a 15-second limit – focus – kind of like Twitter’s 140-character limit.  That audio will then be recorded into and played from a file, of course, but I can count on it being a 32-bit float PCM format, and just run with it.
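
“Counting on it” would just mean pinning the recording format to one known AudioStreamBasicDescription up front – something like this sketch (mono and 44.1kHz are my assumptions here, not settled decisions):

```cpp
#include <CoreAudio/CoreAudioTypes.h>

// The one fixed recording/playback format: 44.1kHz, mono,
// packed 32-bit float linear PCM.  Mono is an assumption.
AudioStreamBasicDescription makeRecordingFormat() {
    AudioStreamBasicDescription asbd = {};
    asbd.mSampleRate       = 44100.0;
    asbd.mFormatID         = kAudioFormatLinearPCM;
    asbd.mFormatFlags      = kAudioFormatFlagIsFloat
                           | kAudioFormatFlagIsPacked;
    asbd.mChannelsPerFrame = 1;
    asbd.mBitsPerChannel   = 32;
    asbd.mBytesPerFrame    = sizeof(float) * asbd.mChannelsPerFrame;
    asbd.mFramesPerPacket  = 1;
    asbd.mBytesPerPacket   = asbd.mBytesPerFrame;
    return asbd;
}
```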

In the end, I want the thing to act as a granulator, where you bring in audio and are able to loop sections of it at really tight intervals, or even with randomized time and pitch parameters, so it acts as a sound design tool.  I do this in Pro Tools all the time by hand – I cut samples, like 1000 at a time, shift them incrementally, and copy and paste them offset next to each other for effect – but it’s definitely time-consuming.  I’ll probably still do that because I have ultimate control, but I’d like to be able to go into the app environment, get sounds from machines or birds or rubber bands or my voice or even the wind, and allow the user to really fuck with them to make them something else entirely.  Of course, they’ll be able to save the original tracks and save the performance.  And, I’ll offer a few time-based effects and definitely some distortion and crush on there, too.
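
The core of that granulation is just repeatedly copying short, windowed slices out of the recorded buffer, each with a randomized start offset and pitch ratio.  A bare-bones C++ sketch of rendering one grain with linear-interpolation resampling and a Hann window (all names illustrative, not a finished engine):

```cpp
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

// Render one grain: a short, Hann-windowed slice of `source`, starting
// at a randomized offset and resampled by a randomized pitch ratio.
void renderGrain(const std::vector<float>& source,
                 std::vector<float>& out,   // grain samples get appended here
                 std::size_t grainLen,      // output grain length in samples
                 std::mt19937& rng) {
    if (source.size() < 2 || grainLen < 2) return;

    std::uniform_real_distribution<float> offsetDist(0.0f, 1.0f);
    std::uniform_real_distribution<float> pitchDist(0.5f, 2.0f);  // +/- one octave

    float pitch = pitchDist(rng);
    // Random start, clamped so the resampled read stays in bounds.
    float span     = grainLen * pitch;
    float maxStart = float(source.size() - 1) - span;
    if (maxStart < 0.0f) return;              // grain too long for the buffer
    float start = offsetDist(rng) * maxStart;

    constexpr float kPi = 3.14159265358979f;
    for (std::size_t i = 0; i < grainLen; ++i) {
        float pos  = start + i * pitch;       // fractional read position
        std::size_t idx = std::size_t(pos);
        float frac = pos - idx;
        // Linear interpolation between neighboring samples.
        float s = source[idx] * (1.0f - frac) + source[idx + 1] * frac;
        // Hann window fades each grain in and out to avoid clicks.
        float w = 0.5f * (1.0f - std::cos(2.0f * kPi * i / (grainLen - 1)));
        out.push_back(s * w);
    }
}
```

Scheduling lots of these grains with overlapping starts is essentially the Pro Tools cut-and-offset trick, automated.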

So…  that’s what I’ve been getting into.  It’s after-work stuff, so it’s kind of slow-going after a day of already programming for hours at the day job, but I’m making definite progress and can’t wait to circle back to the embedded system, as well.  I have many, many things to look forward to on this front.