Show #1 – 25th June 2010

northern-indymedia.org/articles/809

Show #2 – 23rd July 2010

northern-indymedia.org/articles/853

Show #3 – 27th August 2010

northern-indymedia.org/articles/1034/ (running time: 59 minutes)

Transfer from HD24 to Ardour

This show was recorded as a stereo mix from the main outputs of the desk into two channels of the HD24. It was transferred from the HD24’s hard disk, which uses a proprietary format, to another machine for editing. The hard disks inside the HD24 caddies are normal IDE (PATA) drives, which can be read by plugging them into a spare channel of a PATA-equipped desktop PC. The software for ripping out the WAVs is called hd24tools. The WAVs were then loaded into ardour, tidied up a bit, dynamically compressed and mixed down. The exported file was then converted to mp3 and ogg at the command-line.
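That last conversion step can be done with the usual command-line encoders – a minimal sketch, assuming lame and oggenc are installed (the quality settings and the exported/ path are illustrative, not necessarily what was used on the night):

```shell
# Encode the exported mixdown(s) to mp3 and ogg.
# -V2 (lame) and -q 5 (oggenc) are illustrative VBR quality settings.
for f in exported/*.wav; do
    lame -V2 "$f" "${f%.wav}.mp3"
    oggenc -q 5 -o "${f%.wav}.ogg" "$f"
done
```

lame’s -V2 and oggenc’s -q 5 are roughly comparable medium-high VBR quality settings, which keeps the talk sections small without audibly hurting the music.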

Show #4 – 24th September 2010

northern-indymedia.org/articles/1040/

Live from The Treehouse!

Compact setup

For this one, the studio was set up in the back of a van outside the venue. There was a feed in from the stage, plus mics, a mixer, a playback machine, a streaming machine and the HD24 for recording, all powered by an extension cable from the venue.

Actual radio!

The network uplink this time was over a 3G smartphone. People who listened live to the stream have told us that there were no drop-outs (a recurring problem on the landlines in remote Cleckhuddersfax).

Reconstruction in Ardour

There should have been a stereo mix recorded on the HD24, but because of a dodgy connection we ended up with only a mono recording. To complicate things further, the levels varied widely, as the live team didn’t have a great monitoring set-up: pre-recorded material, talk sections inside the van and stage material all came through at very different levels. Not only was the dynamic range hard to work with, but one of the mics produced a really loud (-24dB) hum whenever it was included in the mix.

The solution I chose was to copy the source to three channels in the DAW and create non-destructive edit points, essentially separating the live, talk and recorded sources. Each of these was faded in/out and overlapped to reduce the impression of source switching. Corrections and enhancements could then be applied to each source individually, before remixing. Click here for a view of the DAW channel routing and effects.

As you can see, the live music was EQed and compressed pre-fade, then post-fade was passed through a comb filter and stereo reverb to give a more spacious sound than the mono source. Even with the compressor, I had to draw in a lot of gain automation (the pale green line over the waveform display).

The playback (“straight” channel) just had a comb filter with a slightly different notch interval. The “hum notch” channel was used for sections with the bad mic. The final mix went through a fast look-ahead limiter. Even with this, the pre-fade compressors and the gain automation, there’s still more dynamic range in the final mix than I really wanted, but maybe that’s OK for a folk gig.

Show #5 – 22nd October 2010

northern-indymedia.org/articles/1051/

Real multi-track editing

This time we got a whole 5 tracks recorded on the HD24 and loaded into ardour. The challenge for post-production was a bit different from last time: the sources were all taken from jacks pushed half-way into the insert socket of each channel, i.e. pre-fade sends, so the recording kept all of the in-between background noise even when the channel faders for transmission were closed. Reproducing the desired output therefore meant a lot of muting and unmuting of channels using non-destructive edit points (see example here).

Another problem with taking recording feeds from the half-way-in insert jacks is that a small wiggle of the plug can cause a horrendous crackle. Most of these I edited out, but you can still hear a few in the mix.

Show #6 – 24th November 2010

The legendary Lost Recording. This was the best radio programme ever produced, but we made the mistake of using only a single, untested method of recording it – which failed. So if you were one of the lucky few who heard the live transmission, you should tell your grandchildren about it.

What went wrong then? We configured darkice to keep a local copy of the mp3 stream. When we came to listen to the dumped file at the end, it was only 12 minutes long (should have been about 2 hours). We never figured out exactly why, but we did decide that next time we’d test our recording method and have an independent back-up recording running.
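For the record, keeping a local copy is a one-line option in the darkice configuration – a minimal sketch using darkice’s documented option names (the server details, password and paths here are placeholders, not our real settings):

```ini
[general]
duration      = 0                 # run until stopped
bufferSecs    = 5

[input]
device        = default           # ALSA input
sampleRate    = 44100
bitsPerSample = 16
channel       = 2

[icecast2-0]
format        = mp3
bitrateMode   = cbr
bitrate       = 128
server        = icecast.example.org       # placeholder
port          = 8000
password      = hackme                    # placeholder
mountPoint    = live.mp3
localDumpFile = /home/radio/show-dump.mp3 # the file that came up 12 minutes long
```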

Show #7 – 30th December 2010

Further travails with darkice dumps

It’s attractive to have the same mp3 stream dumped locally as pushed to icecast, as there should be no extra processor load from encoding, only occasional disk-writes depending on how the local file-system is configured. This is why Marker did some testing and tried to get it working on a different laptop.

This setup was extensively tested, including a full 2-hour test-broadcast & skill-share using the same network infrastructure as we use for the show. All was good, we thought. But on the night, we lost our network connection twice. When darkice loses its connection to icecast, the process exits, which means the local dump also stops – very bad! So we decided that in future, whatever else we do, any software recording should be done by a separate process (e.g. ardour, timemachine). This also means that if it is successful, we can just top and tail that recording and put it on the website the same night.
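Topping and tailing needn’t even involve opening a DAW – a sketch with sox, assuming it’s installed (the file names and edit points are placeholders, not the real ones):

```shell
# Keep the programme between 00:01:30 and 02:01:30 of the raw capture;
# trim takes a start position and (with the = prefix) an absolute end position.
sox raw_capture.wav show.wav trim 00:01:30 =02:01:30
```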

We got a back-up recording by linking the stereo output of the Soundcraft desk to the HD24. This needs to be pulled off the disk using hd24tools, which is installed on the “studio computer”. Hopefully we can connect it using a firewire caddy this time, rather than dismantling the IDE cables like last time.

Networking not working

On each occasion we lost our uplink, it was when the landline phone rang, which made us think that we needed to do some more work sorting out the ADSL connection. We’ve already done quite a bit on this, so maybe it’s time to hassle the phonecoop again. Marker agreed to do this if someone can give him the account paperwork.

Meanwhile, other options were discussed. Jess has had success over a 3G connection using a smartphone; Jimdog has had success using 3G dongles and is keen on trying out his high-gain antenna.

Dynamic processing

We used a Behringer Multicom 4600 to insert independent dynamic processing after the pre-amp on each of the 3 mic channels. This had two benefits. Firstly, there was a high level of background noise from the digital projector (see below), so it was good to be able to gate the mics. Secondly, not all of us have perfect vocal technique, and the mics sometimes drifted away from mouths thanks to comfy chairs and droopy stands, so it was good to compress the dynamic range (above the noise threshold) of each contributor independently. The result was that less riding of the faders was required – they were just open or closed. It would be good to spend a bit longer configuring the compressor in future; we didn’t get to do much of this as we had a last-minute CD-ripping sprint on.

Monitoring

We had a loudspeaker system fed from the headphone output on the desk. There was no audible feedback – not having to make extreme changes to channel levels helped achieve this, and we kept reminding each other to get close to the mics rather than boosting levels.

The laptop doing the streaming had PPM meters and a headphone output to provide a final check on the output – although it was buffered locally for 92ms, which made it difficult to talk whilst listening.
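Incidentally, 92ms is almost exactly what a 4096-frame JACK buffer gives at 44.1kHz – that’s an assumption about the cause, but the arithmetic is easy to check:

```shell
# 4096 frames at 44100 frames/sec, expressed in milliseconds (prints 92.9 ms)
awk 'BEGIN { printf "%.1f ms\n", 1000 * 4096 / 44100 }'
```

A smaller period size would reduce the monitoring delay proportionally, at the cost of more frequent interrupts on the streaming laptop.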

Playback & projection

We had a 2nd laptop cueing up pre-recorded interviews and music using mixxx, and also keeping running programme notes. This display was beamed onto a projection screen, so everyone could see what was coming up and prepare appropriately. Everyone was into it.