Operation Manual
© 2013 Prism Media Products Ltd
Revision 1.00
Prism Sound Lyra
7 Technical topics
The following sections contain detailed discussions of various relevant technical issues. The content
of these sections is not required to operate Lyra, but is provided merely as background information.
7.1 Stability and latency
Ever since audio production found its way inside the computer, new problems of stability and
latency have arisen.
Pre-computer digital audio gear introduced the concept of delays through devices, which hadn't
usually been the case with analogue equipment. This was an inevitable consequence of sampling
the audio, and passing the samples through multiple layers of buffering during conversion,
processing and interfacing operations. However, the 'latency' (buffer delay) was generally quite short
and didn't usually cause problems even in delay-sensitive applications such as live sound or
over-dubbing. Reliable operation was generally guaranteed, since the digital devices were
essentially 'sausage machines' performing nothing but the same limited series of operations
repeatedly.
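The short delay through such fixed-function gear can be estimated directly from the few samples of buffering each stage contributes. As a rough sketch (the stage names and figures below are illustrative assumptions, not taken from any particular device):

```python
# Hypothetical fixed pipeline: each stage adds a few samples of buffering.
STAGE_DELAYS = {"ADC filter": 40, "DSP block": 16, "DAC filter": 40}

fs = 48_000  # sample rate in Hz
total_samples = sum(STAGE_DELAYS.values())
delay_ms = 1000.0 * total_samples / fs
print(f"{total_samples} samples of buffering -> {delay_ms:.2f} ms at {fs} Hz")
```

A delay of a couple of milliseconds is imperceptible even in live sound, which is why latency was rarely a concern with dedicated digital hardware.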
When general-purpose computers began to be used for audio production, problems with latency and
stability suddenly had to be addressed. The reason is that computers are always busy doing other
things than processing audio, even in situations where the operator is only interested in performing
that dedicated task. Because of this, the computer generally accumulates a large buffer of incoming
audio samples, which are then processed whilst a new buffer is being collected. Even though the
required processing can (hopefully) be accomplished faster than real-time (i.e. the sample
processing rate is faster than the sample rate), there is always the possibility that the computer may
be called upon to interrupt its processing of the audio in order to deal with some other essential
routine task, such as maintaining screen graphics, moving data on and off disc, servicing other
programs etc. In non-optimized systems, tasks such as collecting emails, virus-checking and
countless low-importance system operations can interrupt audio processing. Without the
accumulation of sample buffers, any interruption taking longer than about one sample period (1/fs)
would cause incoming audio samples to be missed, resulting in disruption of the audio signal. Nearly
every kind of interruption is long enough to do this. However, with a large enough buffer, the
interruptions don't cause audio to be disrupted so long as the computer has enough time available
during the buffer period to process the entire buffer. This problem doesn't only happen for incoming
samples: audio outputs from the computer must likewise be buffered so that a continuous output
stream can be maintained even when the processor is called away for a while.
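The buffering scheme described above can be sketched as a small simulation. This is a deliberately simplified model (one sample in per tick, a consumer that drains faster than real time, and hypothetical stall figures), not a description of any actual driver:

```python
from collections import deque

def simulate(buffer_capacity: int, stall_at: int, stall_len: int, total: int) -> int:
    """Producer delivers one sample per tick; the consumer drains the buffer
    except during a stall (an 'interruption') of stall_len ticks.
    Returns the number of incoming samples dropped."""
    buf = deque()
    dropped = 0
    for t in range(total):
        if len(buf) < buffer_capacity:
            buf.append(t)          # incoming sample stored in the buffer
        else:
            dropped += 1           # buffer full: the sample is lost
        stalled = stall_at <= t < stall_at + stall_len
        if not stalled:
            # When running, the consumer processes faster than real time,
            # so it can catch up on any backlog left by a stall.
            for _ in range(2):
                if buf:
                    buf.popleft()
    return dropped
```

With a buffer larger than the longest stall (e.g. capacity 64 against a 50-tick stall) no samples are lost; with a much smaller buffer the same stall causes audible drop-outs.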
Why is this a problem? First of all, the amount of latency required in order for a particular computer
with a particular audio processing and non-audio workload not to suffer audio disruptions can be
problematically large. This is particularly the case in live sound and over-dubbing situations where
the delay between the computer's input and output has to be essentially imperceptible. This is often
difficult or impossible to achieve unless the computer has a powerful processor, plenty of memory, a
heavily audio-optimized operating system workload, an efficiently written audio processing program,
not too many audio channels, not too much processing complexity and not too high a sample rate.
The operator merely has to make sure that all these conditions are met, and all will be well!
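The arithmetic behind this trade-off is straightforward: each buffer contributes a delay equal to its length divided by the sample rate, and separate input and output buffers roughly double the round trip. The figures below are generic illustrations, not Lyra-specific settings:

```python
def buffer_latency_ms(buffer_frames: int, sample_rate_hz: int) -> float:
    """Delay contributed by one buffer of audio, in milliseconds."""
    return 1000.0 * buffer_frames / sample_rate_hz

# Larger buffers survive longer interruptions but add more delay;
# higher sample rates shorten the delay of a given buffer size.
for frames in (64, 256, 1024):
    for fs in (44100, 96000):
        print(f"{frames:5d} frames @ {fs} Hz: "
              f"{buffer_latency_ms(frames, fs):6.2f} ms per buffer")
```

For instance, a 1024-frame buffer at 44.1 kHz adds about 23 ms per buffer, comfortably perceptible in over-dubbing, whereas 64 frames at 96 kHz adds well under a millisecond but leaves far less headroom for interruptions.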
But how do you do that? Even if we worry only about the computer and operating system
themselves, the duration and frequency of interruptions are highly non-deterministic: something can
happen very infrequently which causes a huge interruption. This might not be a problem: you can
always run that track again (assuming you noticed the glitch) - but what if you're recording an
important one-off live event? Even worse, the onset of trouble is greatly affected by audio factors
such as number of tracks, sample rate, how many EQs are in use, etc. This makes the onset of
instability even harder to predict reliably.
On the other hand, situations where latency is critical are relatively few, so it is normally OK to