
Analogue Radio vs. Digital Radio

Traditional radio systems such as AM/FM/LW/SW have always used analogue
technology. DAB is (to my knowledge) the first major digital system designed
specifically for broadcast radio.
Analogue Radio
Analogue radio transmission consists of transmitting the actual audio signal modulated onto
the RF carrier. Analogue basically means that the signal can take on any value (within the
limits set by the transmitter). The problem with transmitting analogue audio signals is that
any noise, interference or self-interference (the multipath effect) added to the signal at
any point cannot be removed from the audio signal, and this degrades the audio quality
or causes hiss.
Digital Radio
Digital radio systems such as DAB, or digital radio delivered via digital satellite (DSat)
or Freeview, address the disadvantages that hamper the analogue transmission systems
(although these can usually be overcome by improving your FM reception -- purchasing a
better FM aerial and/or relocating it). Digital radio systems consist of
transmitting digital waveforms on the carrier to the receiver. These waveforms are then
decoded to binary format to make up the digital words that carry the amplitude values of the
audio waveform (this is expanded upon on the MPEG Coding page).
At the radio receiver, the carrier part of the signal is removed by ‘downconverting’ from
radio frequency (RF) to low frequency (termed 'baseband') which just leaves the digital signal
along with the noise and interference that has been added to the signal at the transmitter, in
the air, and at the receiver itself. The receiver then decides which symbols were transmitted.
In the simple binary case, the digital 0s and 1s will be transmitted as equal amplitude but
opposite in sign. The receiver will then just look at whether the signal is above or below zero
volts and for example if the voltage is above zero volts then it will decide the bit transmitted
was a 1 and if it is below zero volts then it will decide that a 0 was transmitted. Because of
the noise and interference the voltage value at the decision instant might not be the same sign
as the bit that was transmitted and a ‘bit error’ will be made. The proportion of the bits that
are received in error (the bit error rate, or BER, calculated by dividing the total number of bit
errors by the total number of bits transmitted) is directly related to the received signal power,
which is why installing an external aerial pays off because an external aerial will invariably
receive a higher signal power than an internal aerial. Unfortunately there are strict limits
on the power level at which broadcasters are allowed to transmit. This is a necessary
limitation because otherwise the transmitted signals would cause too much interference in
adjacent frequency bands.
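The sign-based decision rule and BER calculation described above can be sketched in a few lines of Python. This is a toy illustration with hand-picked noise values rather than a realistic channel model; the third noise sample is deliberately large enough to flip a bit's sign.

```python
# A minimal sketch of the sign-based decision rule described above.
# Bits are sent as +1 V (for a 1) and -1 V (for a 0); the receiver
# decides 1 if the sampled voltage is above zero and 0 otherwise.

def transmit(bits):
    """Map bits to equal-amplitude, opposite-sign voltages."""
    return [1.0 if b else -1.0 for b in bits]

def decide(voltages):
    """Sign-based decision: above zero volts -> 1, otherwise -> 0."""
    return [1 if v > 0 else 0 for v in voltages]

def bit_error_rate(sent, received):
    """BER = total bit errors / total bits transmitted."""
    errors = sum(s != r for s, r in zip(sent, received))
    return errors / len(sent)

bits = [1, 0, 1, 1, 0, 0, 1, 0]
tx = transmit(bits)

# Illustrative noise samples; the third one is large enough to flip
# the sign at the decision instant, causing one bit error.
noise = [0.3, -0.4, -1.5, 0.2, 0.6, -0.1, 0.4, 0.9]
rx = decide(v + n for v, n in zip(tx, noise))

print(bit_error_rate(bits, rx))  # 1 error in 8 bits -> 0.125
```

In a real receiver the noise is random, so the BER falls as the received signal power rises relative to the noise, which is exactly why a better aerial helps.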
DAB and digital terrestrial TV (DTT) use a more advanced form of digital modulation than is
explained above. Both of these systems use COFDM, which stands for Coded Orthogonal
Frequency Division Multiplexing.

Error Correction
The main advantage of transmitting radio digitally is that the bit errors that are made when
the receiver chooses the incorrect symbol can in most cases be corrected. This is achieved by
using forward error correction coding (FEC coding). The FEC encoder adds redundant bits to
the original bitstream so that the receiver can ‘decode’ the signal and in the vast majority of
cases correct any errors. An example of a very simple error correction code (this is not a
practical code because it is very inefficient and is not very effective at correcting errors) uses
the rule that if a 1 is to be transmitted then it is repeated 3 times. Then the receiver takes a
majority decision so that it decides that a 1 was transmitted if 2 or 3 ones are received.
Therefore if one error is made out of the three bits transmitted that error can be corrected.
This is an example of a block error correction code because a group of input bits (in this case
a single input bit) is transmitted as a larger block of coded bits. A measure of the redundancy
and of the power of an error correction code is its code rate which is given by the number of
input bits to the forward error correction encoder at the transmitter divided by the number of
transmitted bits. For the example above, one input bit goes into the FEC encoder at a time
and three come out. Therefore its coding rate equals 1/3. The lower the value of the coding
rate the more powerful the code will be at correcting errors and vice versa. For wireless
transmission, block coding is not the primary form of FEC coding, although it may be used as
well. The preferred type of FEC coding for wireless systems is called convolutional coding
and is more powerful than block coding. Typical code rate values used are 1/3, 1/2, 2/3 and
3/4.
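The repeat-three code above can be sketched as follows, a toy illustration of the majority decision and the resulting rate-1/3 code:

```python
# A sketch of the rate-1/3 repetition code described above: each input
# bit is transmitted three times, and the receiver takes a majority
# decision over each group of three received bits.

def encode(bits):
    """Repeat each input bit three times."""
    return [b for b in bits for _ in range(3)]

def decode(coded):
    """Majority vote over each block of three received bits."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

bits = [1, 0, 1]
coded = encode(bits)  # [1, 1, 1, 0, 0, 0, 1, 1, 1]

# Flip one bit per block to simulate channel errors; the majority
# decision corrects all three of them.
corrupted = coded[:]
for i in (0, 4, 8):
    corrupted[i] ^= 1

print(decode(corrupted) == bits)  # True: one error per block is corrected
print(len(bits) / len(coded))     # code rate = 1/3
```

Two errors within the same block of three would defeat the majority vote, which is one reason this code is described above as impractical.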
The type of FEC coding used on DAB is a form of convolutional coding where different parts
of the audio bitstream use different code rates. For example, scale factors are shared between
a lot of samples and therefore it is more important that these scale factors should not be
received in error compared to the individual samples, so a lower code rate (higher protection
level) is used to protect the scale factors than the individual samples.
A similar variable code rate will be applied to the video stream on DTT.

Introduction to Digital Audio Broadcasting (DAB)

The DAB system was designed in the late 1980s, and its main original objectives were to
provide radio at CD-quality; to provide better in-car reception quality than on FM; to use the
spectrum more efficiently; to allow tuning by the name of the station rather than by
frequency; and to allow data to be transmitted. DAB fulfills most of these objectives, but with
one rather important exception: DAB sounds worse than FM.

Why is the sound quality so bad?


The main reason for the poor audio quality on DAB is that the broadcasters are using bit
rate levels that are too low to provide good audio quality. The reason they're using
insufficient bit rates is that DAB uses the inefficient MP2 audio codec, which needs to be
used at bit rates of at least 192 kbps to provide good audio quality -- FM provides audio
quality equivalent to 192 - 224 kbps MP2.
Unfortunately, 98% of all of the stereo stations on DAB in the UK are using a bit rate level of
128 kbps, hence the audio quality is poor. This problem of using low bit rates doesn't only
affect the UK, either, because the handful of other countries that are trying to promote the old
DAB system -- Denmark, Norway and Switzerland -- are also using low bit rate levels.
The reason why such low bit rate levels are being used is because the broadcasters have
decided to launch quite a lot of new digital-only stations, but as there is only a limited amount
of spectrum available for DAB to use, the broadcasters decided to use low bit rate levels in
order to fit these new stations onto DAB even though they knew full well that the audio
quality would be lower than on FM.
The broadcasters and Ofcom try to make this out as being a "trade-off", but the reality is that
audio quality was sacrificed in order to provide more stations. For example, the broadcasters
decided to use 128 kbps for stereo stations, and this allows 9 stations to be carried in a DAB
multiplex. If they reduced the number of stereo stations to 8 rather than 9 then half of the
stations could transmit at a bit rate of 160 kbps, which would provide a significant
improvement in quality, albeit that it would still sound worse than on FM.
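The arithmetic behind this trade-off can be sketched as follows. Note that the 1,152 kbps audio capacity figure is inferred from "9 stations at 128 kbps" in the text, not an official multiplex capacity value:

```python
# A sketch of the stations-vs-quality trade-off described above,
# assuming roughly 9 x 128 = 1152 kbps of stereo audio capacity per
# multiplex (inferred from the text, not an official figure).

CAPACITY_KBPS = 9 * 128  # 1152

# Option A: 9 stereo stations, all at 128 kbps.
option_a = 9 * 128
print(option_a, option_a <= CAPACITY_KBPS)   # fits exactly

# Option B: 8 stereo stations, half at 160 kbps, half at 128 kbps.
option_b = 4 * 160 + 4 * 128
print(option_b, option_b <= CAPACITY_KBPS)   # also fits exactly
```

Dropping one station frees exactly enough capacity to lift four of the remaining eight from 128 to 160 kbps, which is the improvement described above.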
The Incompetent Adoption of DAB in the UK

Justification for the use of the word "incompetent"


When you look at the history of radio broadcasting, one of the most striking things is how
long the systems have lasted:
• AM radio was first commercially broadcast in 1920
• FM was invented in 1935, there was an FM broadcast band in the US in the
1940s, and the Zenith-GE pilot tone system was standardised in 1961 to
provide FM stereo, and FM stereo has remained unchanged up to the
present day
DAB on the other hand was "properly" launched in the UK in 2002, yet just 3 years later the
WorldDAB Forum pulled the plug on the old DAB system by ordering that the AAC+ audio
codec be adopted, which led to the design of the new DAB+ system, which will make all
DAB receivers obsolete in the coming years.
Because 3 years is such an extremely short duration in broadcasting system terms, the
launch of the old DAB system in the UK has got to go down as the most incompetent
technical decision ever made in the history of broadcasting -- and that includes both TV
and radio.
"Incompetent" is a strong word to use, but I'm afraid that in this instance I feel it is perfectly
jusified.
And if you're now thinking "it's easy to say this in hindsight", the technologies had existed for
years before DAB was launched in the UK, but the BBC didn't upgrade DAB prior to
launching it. For example, the AAC audio codec had been standardised in 1997, and Reed-
Solomon error correction coding is used as the error correction on CDs, so it had been in
widespread use since the 1980s. If these two technologies had been used to upgrade the DAB
system prior to it being launched, the audio quality and the reception quality would be far
better than they are with the current system, and DAB would actually be able to carry all of
the analogue stations, whereas Ofcom has admitted that around 90 analogue stations will
never be able to fit on DAB due to either being unable to afford the sky-high transmission
costs or due to the local DAB multiplexes being full.
Of course there have been other broadcasting system failures, but nothing comes close to
matching the significance of the launch of a system that was meant to be the digital
replacement for the ubiquitous FM system followed just 3 years later by its ruling body
scrapping it.
It also should not be forgotten that when the UK launched DAB in 2002 they did so thinking
they were pioneering DAB and that the rest of Europe and then most of the rest of the world
would naturally follow their lead. But DAB+ was only actually designed because this plan
went so incredibly wrong that the only countries the UK could get to commit to using the old
DAB system were Denmark and then more recently Norway, with virtually every other
country that said anything on the subject of digital radio being opposed to using the old DAB
system.
Put simply, if DAB+ hadn't been designed, the UK, Denmark and Norway would have been
the only countries stuck using the old DAB system, while all other countries would have
adopted one of the modern and far more efficient systems that can be used to carry digital
radio, such as DVB-H, T-DMB, HD Radio or DRM+. And the saddest thing about this whole
story is that it was so easily avoidable, for the reasons I will expand upon below.

The "true" launch of DAB in 2002


Although the BBC began transmitting DAB in 1995, there were no DAB receivers on sale at
all until Arcam brought out its Alpha 10 tuner in December 1999, which cost £800.
But DAB was only "properly" launched in March 2002 when the BBC launched 6 Music,
which was the first of five new BBC digital-only stations to launch that year. But the crucial
element that made 2002 the true launch-date of DAB was the beginning of the advertising
blitz that coincided with the launch of the BBC's digital-only stations, and as of today
(18/4/07) the BBC has broadcast 19 high-impact TV advertising campaigns for DAB, which
I've calculated would have cost £155 million1 if the BBC had to pay for these adverts to be
broadcast on commercial TV -- with DAB receiver sales standing at around the 4 million
mark, that means the BBC has pseudo-subsidised DAB to the tune of £155m / 4m = £38.75
per DAB radio!
Then just three and a half years later in October 2005, the WorldDAB Forum (now called the
WorldDMB Forum) ordered the Technical Committee to do the work necessary to add the
AAC+ audio codec, which was a decision that will make all of the existing DAB receivers
obsolete in the coming years.

The design of the old DAB system


This section is a short summary of how the old DAB system was designed, but for a slightly
more in-depth description see here.
The old DAB system was designed in the late 1980s and early 1990s, and all of the main
component technologies -- the audio codec, the modulation and error correction coding -- had
been chosen by early 1991, and they remain unchanged to the present day.
The designers of the old DAB system chose to use low complexity technologies, which was
mainly due to the fact that microprocessors weren't very powerful in the late 1980s. Examples
of this were that the designers chose to use the MP2 audio codec instead of MP3, and they
chose to use simple rather than the more complex but stronger error correction coding they
could have used.
The downside of choosing to use these low complexity technologies was that they made DAB
an incredibly inefficient system. For example, whereas MP3 was designed to be used at 128
kbps, MP2 was designed to be used at bit rate levels between 192 - 256 kbps, and this is
borne out by the following quote in a BBC R&D report about DAB from 1994:

"A value of 256 kbit/s has been judged to provide a high quality stereo broadcast
signal. However, a small reduction, to 224 kbit/s is often adequate, and in some
cases it may be possible to accept a further reduction to 192 kbit/s, especially if
redundancy in the stereo signal is exploited by a process of 'joint stereo' encoding
(i.e. some sounds appearing at the centre of the stereo image need not be sent
twice). At 192 kbit/s, it is relatively easy to hear imperfections in critical audio
material."

Combining the inefficiency of the MP2 audio codec with the weak error correction
coding used on DAB (weak error correction coding leads to DAB multiplexes having a low
data capacity -- i.e. a low spectral efficiency) means that DAB multiplexes can only carry a
very small number of radio stations:

Stereo station      Audio quality      Stations per      Bandwidth per
bit rate (kbps)     level              DAB multiplex     station (kHz)1

256                 Near CD-quality    4                 428
224                 FM-quality         5                 342
192                 Near FM-quality    6                 285

1 - the bandwidth of a DAB multiplex is 1,710 kHz
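The bandwidth-per-station column is just the 1,710 kHz multiplex bandwidth divided by the number of stations it carries; a quick check:

```python
# Bandwidth per station = multiplex bandwidth / number of stations,
# rounded to the nearest kHz, reproducing the table above.

MULTIPLEX_BANDWIDTH_KHZ = 1710  # bandwidth of a DAB multiplex

for bit_rate, stations in [(256, 4), (224, 5), (192, 6)]:
    per_station = round(MULTIPLEX_BANDWIDTH_KHZ / stations)
    print(f"{bit_rate} kbps: {stations} stations, {per_station} kHz each")
# 256 kbps: 4 stations, 428 kHz each
# 224 kbps: 5 stations, 342 kHz each
# 192 kbps: 6 stations, 285 kHz each
```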

DAB at the BBC -- 1990 - 2002


The following bullet points are a time-line of some relevant events to do with DAB at the
BBC in the 1990s and early 2000s:
• January 1990 - BBC R&D department first trialed DAB by transmitting it
from Crystal Palace
• 1991 - BBC R&D demonstrated DAB to press
• 1992 - "Extending Choice: The BBC's Role in the New Broadcasting Age"
document was published
• September 1995 - BBC national DAB multiplex began broadcasting Radios
1-5 nationally (BBC World Service was added at a later date)
• May 1996 - "Extending Choice in the Digital Age" document published by
BBC Director-General John Birt
• September 1998 - bbc.co.uk mentions BBC's plans to launch 4 digital-only
radio stations (BBC Parliament, Asian Network, new music station, new
sports station)
• December 1999 - The first DAB tuner went on sale -- the Arcam Alpha 10
tuner -- costing £800
• Summer 2001 - BBC deliberately withholds information that the launch of
new stations will drastically degrade the audio quality of existing stations
in its public consultation for new digital radio stations (this was admitted
in an email in 2002 by the then Controller of Radio & Music Interactive,
Simon Nelson)
• November 2001 - Mediocre feedback from consultation (2 of the proposed
stations get less than 50% acceptance figures and the only station to get
more than 60% acceptance was BBC7)
• 2002 - 5 new digital-only stations are launched (6 Music, 1Xtra, BBC7,
Asian Network, Radio 5 Sports Extra)

Soon after the BBC's national DAB multiplex was launched in 1995, it was carrying the
following stations:

Station          Bit rate (kbps)

Radio 1          192
Radio 2          192
Radio 3          192
Radio 4          192
Radio 5          96
World Service    96
Space left over  192

The BBC will have known from the early 1990s -- straight after the listening tests at Swedish
Radio in 1990 that led to the MP2 audio codec being adopted for DAB -- that DAB would
need to use a bit rate level of 256 kbps to provide near CD-quality (the provision of near CD-
quality was the main reason why DAB was designed in the first place), and yet when the
BBC launched its national DAB multiplex in 1995 it obviously couldn't use a bit rate level of
256 kbps or else Radio 5 and the World Service wouldn't have been able to be carried on the
multiplex. So, instead, the BBC reduced the bit rate levels of Radios 1-4 to 192 kbps -- i.e.
when they launched their national DAB multiplex they had already reduced the audio quality
level to below the 224 kbps required to match FM! However, this didn't really matter too
much at the time, because you couldn't buy a DAB receiver until December 1999, and then they
cost £800.
It is just staggeringly incompetent that the BBC knew from the early 1990s that they
would be adding a number of new digital-only stations to their national DAB multiplex,
yet they only had enough space left for one additional stereo station, and then in 2002
they actually added 5 new digital-only stations. How long did they need to figure out
that DAB wasn't up to the job??
The bit rate levels of all of the BBC's music stations apart from Radio 3 are now half the bit
rate levels that were suggested to be used originally and the audio quality is awful.
Number of radio stations available
The majority of people in the UK can currently receive 4 DAB multiplexes, consisting of
the BBC and Digital One national multiplexes, a regional multiplex and a local multiplex. Once the
forthcoming DAB expansion has finished, the majority of people will be able to receive 5
DAB multiplexes due to the addition of a second commercial national multiplex that will
launch in 2008. This means that if the recommended bit rate levels of 256 or 224 kbps had
been used to provide the near CD-quality or at least FM-quality that DAB was originally
designed to provide, people would be able to receive the following number of stations:

Bit rate     Stations available     Stations available
(kbps)       up to 20081            from 20081

256          20                     25
224          24                     30

1 - the table includes the effect of mono stations using half the bit rate of stereo stations -- 80% of radio stations on DAB are stereo stations and 20%
are mono, and this works out as adding one extra station per multiplex whether 256 kbps or 224 kbps is used
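The station counts follow from the number of receivable multiplexes (4 now, 5 from 2008), the stations-per-multiplex figures in the earlier capacity table, and the footnote's one-extra-station adjustment for the mono/stereo mix; a quick sketch:

```python
# Station counts = (stereo stations per multiplex + 1 extra from the
# mono/stereo mix, per the footnote) x number of receivable multiplexes.

def stations(stereo_per_multiplex, multiplexes):
    extra_per_multiplex = 1  # effect of 20% mono stations at half bit rate
    return (stereo_per_multiplex + extra_per_multiplex) * multiplexes

for bit_rate, stereo_per_mux in [(256, 4), (224, 5)]:
    print(bit_rate, stations(stereo_per_mux, 4), stations(stereo_per_mux, 5))
# 256: 20 up to 2008, 25 from 2008
# 224: 24 up to 2008, 30 from 2008
```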

The new spectrum for the expansion of DAB is all the spectrum DAB is ever going to get,
because this new spectrum was acquired at the Regional Radio Conference (RRC-06) in
Geneva last year, and the last time there was a frequency planning conference on that scale
was in Stockholm in 1961!
Twenty-five to thirty radio stations -- or 36 stations tops in London -- was never going
to be enough, so the above table basically shows that it was inevitable that poor audio
quality would be provided on DAB. I would therefore suggest that the BBC and the
Radio Authority (the regulators of commercial radio pre-Ofcom) were grossly
incompetent to use a digital radio system where it was inevitable that poor audio quality
would be provided.

Transmission costs per radio station


I was provided with some actual DAB and FM transmission cost figures once by someone in
the DAB industry, which are contained in the following table along with how much it would
cost to transmit at 224 kbps or 256 kbps based on the fact that DAB transmission costs are
pro rata with the number of capacity units (CU) consumed (capacity units are usually but not
always linearly proportional to the bit rate levels):

Transmission type    Coverage area    Transmission cost per annum

256 kbps DAB         Local            £192,000
224 kbps DAB         Local            £168,000
128 kbps DAB         Local            £96,000
FM                   Local            £60,000
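Since DAB transmission cost is stated to be pro rata with capacity units, and capacity units are usually proportional to bit rate, the 224 and 256 kbps figures can be derived by scaling the 128 kbps cost; a quick check:

```python
# DAB transmission costs scale pro rata with bit rate (via capacity
# units), so the higher bit rate costs follow from the 128 kbps figure.

COST_128_KBPS = 96_000  # local DAB at 128 kbps, pounds per annum

for bit_rate in (224, 256):
    cost = COST_128_KBPS * bit_rate // 128
    print(f"{bit_rate} kbps DAB: £{cost:,} per annum")
# 224 kbps DAB: £168,000 per annum
# 256 kbps DAB: £192,000 per annum
```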

According to Ofcom, 50% of all existing analogue radio stations make no profit. I would
therefore suggest that the above table shows that using DAB makes providing poor audio
quality inevitable, because commercial radio stations wouldn't be able to afford to pay the
transmission costs to provide good audio quality. The Radio Authority was therefore
incompetent to propose that DAB should be used as the digital radio system in the UK.
The above table also goes some way to explain why only around 45% of all existing analogue
radio stations are on DAB. The big stations owned by the big commercial radio groups are on
DAB -- the big commercial radio groups own the commercial DAB multiplexes -- but the
small and medium-sized stations are not on DAB, because most simply cannot afford the
transmission costs. DAB is a great way for the big commercial radio groups to monopolise
digital radio by excluding access to their competition.
Ofcom has said that even after the expansion of DAB, 90 out of the existing 326 commercial
radio stations won't be able to transmit on DAB either due to not being able to afford it
(which probably accounts for most of them) or because the multiplexes will be full.

Better technologies were ready and waiting to be used


When I've criticised DAB in the past numerous people have said "oh, it's easy to see mistakes
with the aid of 20/20 hindsight" or words to that effect. The reality is that technologies were
sitting there ready and waiting to be used, but those in charge of DAB stuck their heads in the
sand, and the people at the BBC, the Radio Authority and in commercial radio were probably
too oblivious to there even being a problem.
As I mentioned earlier, the problem with DAB is that it is an extremely inefficient system, so
below I'll say which technologies existed that could have vastly increased the efficiency,
which would have avoided the current problems altogether.

Reed-Solomon error correction coding -- invented in 1960


Reed-Solomon (RS) coding is used as the error correction coding on CDs, and its efficacy for
use in conjunction with OFDM modulation had been shown in an EBU Technical Review
article (edition 224) by Alard and Lasalle in August 1987, so it was obviously around when
DAB was originally being designed, but they obviously either ignored it or deemed that it
was too computationally complex. I'm afraid that if they deemed it to be too computationally
complex then they were trying to design a digital radio system before digital processing was
fast enough to handle it, and they should have waited until Moore's Law caught up.
RS coding could have increased the multiplex data capacity by approximately 40%. RS
coding has now been adopted for DAB+.
MP3
MP3 was the joint-winner with MP2 of the listening test at Swedish Radio in 1990 that led to
the designers of DAB adopting MP2, so MP3 obviously could have been adopted at any point
during the 1990s.
MP2 was targeted at bit rate levels between 192 - 256 kbps, whereas MP3 was optimised for
use at 128 kbps. However, MP3 at 128 kbps would have provided similar audio quality to
192 kbps MP2, so MP3 was 50% more efficient than MP2.

AAC
Development of AAC began in 1993 when it was shown that superior compression
performance could be achieved by removing the requirement for codecs to be backwardly
compatible with MP2 and MP3. AAC was standardised in 1997.
If the designers of DAB had spotted that their system was far too inefficient they could have
worked to adopt AAC in parallel to the ongoing development work and incorporate AAC
soon after it had been standardised in 1997. And considering that the first DAB receiver
didn't go on sale until December 1999, AAC obviously could have been adopted for DAB.
AAC is twice as efficient as MP2 -- ironically, it was listening tests carried out by BBC
R&D that proved AAC was twice as efficient as MP2: first a multi-channel test in 1996,
then a stereo test in 1998. To be fair to the BBC R&D engineers, they did say in subsequent
documents how good AAC was, so presumably it was the BBC management's lack of a clue
about technology that was to blame for the glaring error.

Combination of more efficient audio and error correction coding


The following table shows the combined effect of using the more efficient audio and error
correction coding relative to that used on DAB:

Audio + error correction coding    Efficiency relative to DAB

MP3 + RS coding                    2.1
AAC + RS coding                    2.8
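The combined figures are simply the product of the codec efficiency gain and the RS coding capacity gain quoted earlier (MP3 is ~1.5x as efficient as MP2, AAC ~2x, and RS coding adds ~40% multiplex capacity); a quick check:

```python
# Combined efficiency relative to DAB = codec gain x RS capacity gain.

RS_CAPACITY_GAIN = 1.4  # RS coding: ~40% more multiplex capacity

for codec, codec_gain in [("MP3", 1.5), ("AAC", 2.0)]:
    print(f"{codec} + RS coding: {codec_gain * RS_CAPACITY_GAIN:.1f}")
# MP3 + RS coding: 2.1
# AAC + RS coding: 2.8
```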

SBR - mp3Pro & AAC+


Spectral Band Replication (SBR) is the technology that makes AAC+ more efficient than
AAC, and some of you may remember a codec that was around a few years ago called
mp3Pro, which, like AAC+, consists of the addition of SBR, but this time to MP3. So if
either MP3 or AAC had been adopted then it would have been relatively painless to add SBR
in order to make DAB even more efficient.

Conclusion
If either of the above two options had been adopted instead of the technologies that the old
DAB system actually uses then there wouldn't have been a problem, because the broadcasters
wouldn't have needed to provide low audio quality in order to provide the number of stations
they are providing in the UK.
However, unfortunately the above two options weren't used, the audio quality on DAB is
low, and it will remain low for the next few years. The problem with the audio quality will be
sorted out once DAB+ has been fully adopted -- there will be no reason to provide low audio
quality then, albeit that I wouldn't put it past some of the most tight-fisted commercial radio
groups, in particular GCap Media and Emap, to continue providing the same low audio
quality as they do now; they actually seem to be proud of providing the lowest audio quality
possible.
But it will be several years before all of the legacy MP2 services have been switched off,
albeit that we will see DAB+ stations launch within the next 3 years or so. But MP2 and
AAC+ services will have to go through a period of being transmitted in parallel before the
time comes when incompetent Ofcom allows the MP2 services to be switched off, so the
quality on DAB is -- amazingly -- going to get worse before it gets better.
We've already had 5 years of a sub-standard service, so the effect of the incompetent adoption
of DAB in the UK will have lasted a long time before the final MP2 service has been
switched off.

COFDM

The modulation scheme that DAB uses is Coded Orthogonal Frequency Division
Multiplexing (COFDM). COFDM uses a very different method of transmission to older
digital radio modulation schemes and has been specifically designed to combat the effects of
multipath interference for mobile receivers.

Multipath

Multipath is the term for the different paths that a signal takes in reaching an aerial from the
transmitter. For example, one path may be a line-of-sight path from the transmitter to the
aerial, whereas another path may bounce off a hill or building before reaching the aerial. In
this example, the signal that travels along the line-of-sight path arrives at the aerial first,
followed a short time later by the signal from the path that bounced off the hill or building.
As the different paths travelled are of different length the time taken for the signal to reach
the receiver will be different, with the direct path (if there is one) reaching the receiver first,
followed by reflected paths. The effect that these multipaths have on the received signal at the
antenna is that the amplitude of the received signal fluctuates. The reason for this fluctuation
is due to the relative phase angle between the different paths. The received signal is very-high
frequency sinusoidal carrier signal with comparatively a very slowly changing information
signal that has been modulated onto the carrier. Therefore, a good way to model a carrier
signal is to ignore the low frequency modulating signal and just assume that the multipaths
are each high frequency sinusoids with different amplitudes due to the different distances
covered (the amplitude reduces the further it travels) and relative phase angle due to the
different delay. To find out the instantaneous amplitude that is received at the antenna a
vector diagram can be drawn on which each multipath is represented by its amplitude (the
length of the vector) and its phase angle relative to, say, the phase angle of the direct path
(which gives the vector's direction). An example of a vector diagram is given below (ignore
the N and E)

Ignoring the labels on the above diagram, the diagram could represent a two-path signal
where the direct path is the pink vector and the sky blue vector is the delayed path, and the
vector addition produces the red vector, and it is the resultant red vector that the receiver
actually "sees".
As a mobile receiver moves relative to the transmitter, the distances travelled by the paths
also change, and because the wavelength of a radio signal is of the order of 3 metres for VHF
FM signals and about 1.5 metres for DAB signals in Band III, the relative phase angles
between the paths change rapidly and randomly. For example, if there were two multipaths
that are in-phase (zero relative phase difference), then one of the paths only has to travel half
a wavelength further than the other (about 75 cm for Band III DAB signals) for the relative
phase of that path to change by 180°. Looking at the vector diagram above, if the blue
vector had a relative phase of 180° and a length equal to the pink vector, then it would face
in the opposite direction to the pink vector, so the pink and sky blue vectors would
completely cancel one another and the length of the resultant red vector would be zero. As I
explained above, the antenna "sees" the red vector, so the amplitude that the antenna sees is
also zero. The term for this in physics is "destructive interference" and the signal is said to be
in a "deep fade".
Deep fades occur more frequently the faster the mobile is travelling, but the duration that the
signal spends in a deep fade decreases as the speed of the mobile increases. A typical graph of the
amplitude of the carrier signal that the mobile antenna sees as it travels is shown below:
Wideband & Narrowband Wireless Transmission
The effect of multipath fading in the frequency domain is that wideband signals suffer from
"frequency selective fading", which means that different parts of the spectrum are faded more
than others. Narrowband signals, on the other hand, suffer from "flat fading", where the whole
signal spectrum fades: a narrowband signal's spectrum would be multiplied by the above
graph, so that, for example, after the receiver travels about 2.7 metres, destructive
interference occurs and the whole spectrum fades, hence the term 'flat fading'.
Whether a wireless digital communication system is wideband or narrowband depends on the
duration of the transmitted symbols over the mobile channel. The mobile channel can be
represented by what is called a power delay profile, which shows the received power after the
transmission of a very short pulse, called an impulse, and the power of the signal received
varies with time due to the different multipaths that arrive at the receiver. The duration from
the first received path to the last received path that has significant power gives the maximum
delay of the channel. A typical power delay profile varies between approximately 4 µs for
urban environments up to about 20 µs for a rural environment.
A wireless digital communication system transmits "symbols" through the channel. For
example, for a single-carrier binary phase shift keying (BPSK) modulation scheme -- which
uses either a 0° or a 180° carrier phase angle, where a phase of 0° represents a bit value of 0
and a phase of 180° represents a bit value of 1, so each transmitted "symbol" carries one bit
of data -- the symbol duration is the time between the instants at which the phase angle can
change.
A wireless digital communication system uses narrowband transmission if the channel
symbol duration is greater than the maximum delay of the mobile channel (e.g. about 4 µs for
urban and about 20 µs for rural environments); otherwise the system uses wideband
transmission.
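The classification rule can be expressed directly. In the sketch below, the 3.84 Mchip/s figure is the UMTS chip rate, used only as an illustrative wideband example:

```python
# Classify a link as narrowband or wideband by comparing the channel
# symbol duration with the maximum delay spread of the mobile channel.
def is_narrowband(symbol_duration_s: float, max_delay_s: float) -> bool:
    """Narrowband if each symbol lasts longer than the channel's delay spread."""
    return symbol_duration_s > max_delay_s

# DAB TM1 subcarrier: useful symbol duration of 1 ms vs a 20 us rural channel.
dab_symbol = 1e-3
rural_delay = 20e-6
print(is_narrowband(dab_symbol, rural_delay))   # True: each subcarrier is narrowband

# A 3.84 Mchip/s CDMA chip (~0.26 us) in the same channel.
cdma_chip = 1 / 3.84e6
print(is_narrowband(cdma_chip, rural_delay))    # False: chips are wideband
```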
In a digital wireless communication system, the bit errors are far more likely to occur when
the signal is in a deep fade. Therefore these systems must mitigate the negative effects that
multipath causes and different systems go about it in different ways. The two best known
modern wireless digital communication transmission schemes are CDMA and OFDM.
CDMA is used on the new 3G mobile phone system and is a wideband transmission scheme,
which means that the channel symbols (which are called chips for CDMA) are far shorter
than the maximum delay of the mobile channel. OFDM, as used on DAB and Freeview
actually uses narrowband channels (subcarriers), but there are many of these narrowband
channels transmitted in parallel, so the overall spectrum is wide (but this doesn't mean that it
uses wideband transmission principles).

Error Correction Coding


The result of OFDM using a large number of narrowband subcarriers is that each subcarrier
suffers from flat fading, as described above. Because the subcarriers are subject to flat fading,
DAB uses COFDM (coded OFDM) which means that the data transmitted on the subcarriers
is protected by forward error correction (FEC) coding. The type of error correction coding
that is used in COFDM is convolutional coding and the effect of convolutional coding is that
for every one bit input to the error correction encoder, more than one bit is output depending
on the "code rate" being used. For example, a code rate of 1/3 would mean that for every bit
input to the error correction encoder, 3 bits will be output and these 3 bits are transmitted.
Error correction coding therefore adds redundancy to the signal in order for the receiver to be
able to correct any bits that are received in error. The error correction decoder used in
COFDM is the Viterbi algorithm which tries to decode what bits were sent depending on the
received sampled values.
COFDM also allows different groups of bits to be protected with a different strength code
rate because some bits are more important for the correct reproduction of the audio than some
of the other bits. For example, important parameters in the MPEG audio stream are the filter
parameters, so these are coded with a lower code rate (a lower code rate provides higher
protection as more redundancy is added) so that the Viterbi error correction decoder has a
higher chance of correcting any errors.
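The idea of a convolutional encoder producing several output bits per input bit can be sketched with a toy rate-1/2 code (constraint length 3, generator polynomials 7 and 5 in octal). DAB's actual mother code is a longer convolutional code, so treat this purely as an illustration of the mechanism:

```python
# Toy rate-1/2 convolutional encoder: each input bit produces two coded
# output bits, computed from the input bit and a two-bit shift register.
def conv_encode(bits):
    state = [0, 0]                           # two-bit shift register
    out = []
    for b in bits:
        out.append(b ^ state[0] ^ state[1])  # generator 7 (binary 111)
        out.append(b ^ state[1])             # generator 5 (binary 101)
        state = [b, state[0]]                # shift the new bit in
    return out

coded = conv_encode([1, 0, 1, 1])
print(coded)       # [1, 1, 1, 0, 0, 0, 0, 1]
print(len(coded))  # 8 output bits for 4 input bits, i.e. rate 1/2
```

A rate-1/3 encoder (as in the example above) would simply compute three generator outputs per input bit instead of two.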

Interleaving
Unfortunately, the Viterbi algorithm performs poorly when it is presented with bit errors that
are all bunched together in the stream, and because the subcarriers are subject to flat fading
bit errors usually do occur in groups when a subcarrier is in a deep fade. To protect against
this, DAB uses time interleaving and frequency interleaving.

An example of time interleaving uses an interleaving block with 8 rows. The data symbols
are written into the block in column order; once the block is full, the symbols are read out in
row order, so the symbols would be read out in the following order: 0, 8, 16, 24, 32, 1, 9, 17
and so on.
At the receiver, the received symbols are written into the same sized interleaving block in
row order, and once the block is full, the symbols are read out in column order to return the
symbols to the original order.
The effect of this is to spread out symbol errors that occur grouped together. For example, as
the first few symbols transmitted in this example would be 0, 8, 16, 24 and so on, if a deep
fade causes symbols 8 and 16 to be received in error, then because of the re-ordering carried
out in the receiver the errors end up spread out in time, which gives the Viterbi decoder a
better chance of correcting all of the symbols.
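The column-write/row-read behaviour described above can be sketched in a few lines; the 8-row, 5-column block size here is chosen only to reproduce the 0, 8, 16, 24, 32, 1, 9, 17 read-out order of the example:

```python
# Block interleaver: write symbols column by column, read them row by row.
def interleave(symbols, rows, cols):
    block = [[symbols[c * rows + r] for c in range(cols)] for r in range(rows)]
    return [s for row in block for s in row]

# Receiver: write row by row, read column by column to restore the order.
def deinterleave(symbols, rows, cols):
    block = [symbols[r * cols:(r + 1) * cols] for r in range(rows)]
    return [block[r][c] for c in range(cols) for r in range(rows)]

tx = interleave(list(range(40)), rows=8, cols=5)
print(tx[:8])                                      # [0, 8, 16, 24, 32, 1, 9, 17]
print(deinterleave(tx, 8, 5) == list(range(40)))   # True: round-trip restores order
```

A burst of errors hitting consecutive transmitted symbols (e.g. 8 and 16, which are adjacent on air) ends up scattered after deinterleaving, which is exactly what the Viterbi decoder needs.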
As I explained in the Wideband & Narrowband Wireless Transmission section, wideband
wireless signals are subject to frequency selective fading, and because the number of
subcarriers used is large (for example DAB transmissions in the UK use Transmission Mode
1, which uses 1536 subcarriers each with a bandwidth of 1kHz), the overall DAB signal
spectrum is wideband, so not only are the narrowband subcarriers subject to flat fading, the
spectrum as a whole is subject to frequency selective fading. The result of this is that groups
of neighbouring subcarriers may all be faded. To mitigate against this, DAB uses frequency
interleaving as well as time interleaving so that after the time interleaver, the symbols read
out are put on subcarriers that are a certain distance in frequency apart. Again, the receiver
reverses this and the overall effect is that the Viterbi decoder sees the data symbols in the
original order, but errors are uniformly spread out in the stream.
Interleaving is a powerful method to improve the error correction capabilities of a wireless
system that is subject to fading, but of course it cannot perform miracles, and if too many
symbols are decoded incorrectly then it will fail and you're then likely to hear the usual
"bubbling mud" sound that is characteristic of reception problems.

COFDM Transmitter

After the bit-stream is re-ordered in the time interleaver block, 3072 bits (for Transmission
Mode 1 which uses 1536 subcarriers) enter the OFDM modulator. The bit-stream is first split
up into 1536 pairs of bits and each pair is mapped to one of four quaternary phase shift
keying (QPSK) symbols.
DAB uses differential QPSK, which means that the bits are mapped to phase changes rather
than to an absolute transmitted phase. An example mapping might be as follows:
Data Bits    Phase Change (degrees)
00           0
01           90
11           180
10           270

The above mapping is called a Gray code mapping, because adjacent symbols (or in this case
phase changes) differ by the value of only one bit, which lowers the probability of a single
symbol error causing two bit errors.
After the 1536 pairs of bits have been mapped to one of the four phase changes, these phase
changes are applied to the 1536 subcarriers. The previously transmitted QPSK symbol on
each subcarrier is stored in memory in the transmitter, and the phase change rotates this
symbol. For example, if the previously transmitted symbol on a subcarrier was the top
right-hand point (at 45°) in the figure below (called a signal constellation diagram) and the
bits being mapped onto the subcarrier are '11', then the phase will rotate by 180° so that the
bottom left-hand point (at 225°) will be transmitted on that subcarrier.

The QPSK symbols shown in the signal constellation diagram above are represented
numerically by their co-ordinates on the diagram. The 'Re' axis is the 'real' axis and the 'Im'
axis is the so-called 'imaginary' axis, which are the terms for diagrams that display what are
called 'complex numbers'. A complex number consists of the combination of a real plus an
imaginary number:
I+jQ
where I is the real part of the complex number and Q is the imaginary part, and the 'j' always
multiplies the imaginary part. The meaning of 'j' is that it is equal to the square root of −1,
which doesn't exist as a real number, and that is why it is called an imaginary number; but
complex numbers are a very useful mathematical concept, and the fact that the imaginary
unit doesn't exist as a real number doesn't matter.
To read an excellent tutorial about complex numbers and their use in digital signal processing
download this Acrobat file: http://www.dspguru.com/info/tutor/QuadSignals.pdf (136 KB).
The transmitted symbols have the following (normalized) co-ordinates:

Rectangular Co-ordinate    Carrier Phase
 0.707 + j0.707            45°
-0.707 + j0.707            135°
-0.707 - j0.707            225°
 0.707 - j0.707            315°
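A minimal sketch of the differential mapping, assuming the Gray-coded phase changes given in the example table (the table is illustrative; the exact mapping is defined in the DAB specification):

```python
import cmath
import math

# Gray-coded bit pairs select a phase CHANGE (degrees), which rotates the
# previously transmitted point on that subcarrier (differential QPSK).
PHASE_CHANGE = {(0, 0): 0, (0, 1): 90, (1, 1): 180, (1, 0): 270}

def next_symbol(prev, bits):
    """Rotate the previous symbol by the phase change selected by the bit pair."""
    return prev * cmath.exp(1j * math.radians(PHASE_CHANGE[bits]))

prev = complex(0.707, 0.707)       # previous symbol at 45 degrees
new = next_symbol(prev, (1, 1))    # '11' rotates by 180 degrees
print(round(new.real, 3), round(new.imag, 3))  # -0.707 -0.707, i.e. 225 degrees
```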

Complex numbers are used to represent signal points on a constellation diagram because the
real and imaginary axes are 90° apart, and a sine wave and a cosine wave (both with the same
frequency) are also 90° out of phase. This allows the real co-ordinate to represent the
amplitude of a cosine wave and the imaginary co-ordinate to represent the amplitude of the
sine wave; adding the amplitude-modulated sine wave and cosine wave together then forms a
'quadrature' signal.
For example, COFDM is also used as the transmission scheme for DVB-T (Freeview), which
has the option of QPSK, 16-QAM and 64-QAM signal constellations to modulate the
subcarriers. QAM stands for quadrature amplitude modulation. To generate one of the signal
points on the constellation, you amplitude modulate the cosine wave and the sine wave with
the co-ordinates of the point on the signal constellation and then add the two together; the
resultant signal is both amplitude and phase modulated, which is beneficial because you don't
need separate phase and amplitude modulators:

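A sketch of how one constellation point becomes a transmitted waveform, following the description above (real co-ordinate on the cosine, imaginary on the sine, then summed). The carrier frequency here is purely illustrative, and the sign convention on the sine term varies between texts; addition is used to match the description:

```python
import numpy as np

fc = 1000.0                    # illustrative carrier frequency, Hz
t = np.arange(0, 1e-3, 1e-6)   # one 1 ms symbol sampled at 1 MHz
I, Q = 0.707, 0.707            # the 45-degree QPSK point

# Real co-ordinate amplitude-modulates the cosine, imaginary the sine.
s = I * np.cos(2 * np.pi * fc * t) + Q * np.sin(2 * np.pi * fc * t)

# The combined wave has amplitude sqrt(I^2 + Q^2) and a phase set by I and Q,
# so a single adder produces amplitude-and-phase modulation.
print(round(s.max(), 3))       # 1.0 (= sqrt(0.707^2 + 0.707^2), rounded)
```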
The benefit of using 16-QAM or 64-QAM is that each symbol on each subcarrier can carry
more bits of information. The number of bits that each symbol can carry is given by the
following equation:
number of bits = log2 M
where log2 is the logarithm to the base 2 and M is the order of the constellation. So QPSK
symbols (M=4) can carry 2 bits of information, 16-QAM symbols (M=16) can carry 4 bits of
information, and 64-QAM symbols (M=64) can carry 6 bits of information.
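The capacity formula is easy to check:

```python
from math import log2

# Bits carried per symbol for each constellation order M mentioned above.
for M in (4, 16, 64):
    print(M, "->", int(log2(M)))  # 4 -> 2, 16 -> 4, 64 -> 6
```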
Of course, it is better to use a higher level constellation so that the overall capacity can be
higher, but the drawback is that the points are closer together which makes the transmission
less robust to errors. As explained earlier, fading alters both the amplitude and phase of a
carrier or subcarrier, and in the mobile channel the frequencies of the subcarriers are altered
by a Doppler shift. Also, thermal noise produced by devices in the receiver, such as the RF
mixer, is added to the received signal, and it is this noise that is used in signal-to-noise ratio
(SNR) calculations.
The reason why most symbol errors occur when the signal is in a deep fade can be explained
using the following diagram which shows how the thermal noise moves the signal point:

On DAB (using differential QPSK), if a symbol is transmitted while the subcarrier is in a
deep fade, the amplitude of the subcarrier is reduced. This moves the received signal point
closer to the origin of the diagram (co-ordinates 0,0), and when noise is added in the
receiver's RF front end, because the point is already near the origin it is easy for the noise to
move the point to a position where the phase difference no longer falls within the decision
region for a correct decision.

OFDM Modulator

After the symbol mapping is carried out, as explained above, the frequency interleaving will
re-order the symbols (not shown in diagram) and then the 1536 complex numbers that
represent the symbols to be transmitted on each of the subcarriers will be sent to a serial-to-
parallel converter and "placed" on each of the subcarriers. As all of this is done in the digital
domain, the above diagram just serves as a way to visualise what happens. In reality the
1536 complex numbers will be stored in two buffers, with one buffer containing the real
values of the complex numbers and the other containing the imaginary values.
The OFDM modulator consists of the block in the diagram that is labelled 'IDFT', which
stands for inverse discrete Fourier transform. Again, in reality, the actual process carried out
is the inverse fast Fourier transform (IFFT), because the IFFT is, as the name suggests, a fast
way to calculate the IDFT.
The IDFT calculates the following equation:

x(n) = (1/N) · Σ X(k)·e^(j·2π·k·n/N),   summed from k = 0 to N−1

x(n) is the nth output signal complex value (time domain), X(k) is the complex symbol value
on the kth subcarrier (frequency domain), and (for DAB transmission mode TM1) N = 2048 is
the number of output signal points calculated, and also the number of input frequency points.
For each output value x(n), the terms X(k)·e^(j·2π·k·n/N) are summed from k = 0 to
k = N−1. For example, for x(2) the sum would be:
x(2) = (1/N) · [ X(0)·e^(j·2π·0·2/N) + X(1)·e^(j·2π·1·2/N) + X(2)·e^(j·2π·2·2/N) +
X(3)·e^(j·2π·3·2/N) + X(4)·e^(j·2π·4·2/N) + ... ]
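As a quick check, a short numpy sketch compares a direct evaluation of the IDFT sum (with the conventional 1/N scaling, which np.fft.ifft also applies) against the library IFFT, using a small N = 8 and arbitrary QPSK-like symbols:

```python
import numpy as np

N = 8
# Arbitrary symbols at 45/135/225/315 degrees, standing in for subcarrier data.
X = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7, 1, 3, 5, 7]))

# Direct evaluation of x(n) = (1/N) * sum over k of X(k) * exp(j*2*pi*k*n/N).
x_manual = np.array([(1 / N) * sum(X[k] * np.exp(2j * np.pi * k * n / N)
                                   for k in range(N))
                     for n in range(N)])

print(np.allclose(x_manual, np.fft.ifft(X)))  # True
```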
To understand what the IDFT does, you first need to understand what the discrete Fourier
transform (DFT) does for which the IDFT is the inverse. The DFT calculates the discrete
frequency spectrum from a block of discrete time samples of the signal (by 'discrete' I mean
that a discrete signal or discrete spectrum is only defined at discrete moments of time, e.g. at
the sampling instant for a time signal, or at a given frequency for a frequency spectrum).
Therefore, the inverse DFT calculates the discrete time samples from a discrete frequency
spectrum. This means that the frequency spectrum of the transmitted signal is given by the
values of the complex data symbols on the subcarriers.
There are a lot of redundant operations in the DFT: an N-point DFT requires N² complex
multiplications, so a 2048-point DFT, as would be used for transmission mode 1, would
require 4,194,304 multiplications. The fast Fourier transform (FFT) is, as its name suggests,
a fast way to calculate the DFT, as many of the redundant operations are discarded, and this
allows the FFT to be calculated in (N/2)·log₂(N) multiplications, which for a 2048-point FFT
is only 11,264 multiplications, a massive saving compared to the direct DFT.
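The multiplication counts can be reproduced directly:

```python
from math import log2

# Complex multiplications for a direct 2048-point DFT versus the FFT.
N = 2048
dft_mults = N ** 2
fft_mults = (N // 2) * int(log2(N))
print(dft_mults, fft_mults)   # 4194304 11264
```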
One of the properties of the DFT is what makes it suitable for OFDM, and really what makes
OFDM feasible for practical implementation in the first place. This property is that the
discrete frequency spectrum that is calculated by a DFT from a block of data samples has
frequency samples that are all equally spaced in frequency, and this spacing equals 1/T,
where T is the total duration of the time samples in the block. For example, for DAB
transmission mode 1 (TM1), the "useful" duration of an OFDM symbol (not the data symbols
on the subcarriers; OFDM symbols carry the data symbols on the subcarriers) is 1 ms (i.e.
T = 1 ms), so 1/T = 1 kHz, and all the subcarriers are spaced by 1 kHz. It is this subcarrier
spacing equal to the reciprocal of the useful symbol duration that gives OFDM the
"orthogonal" property in its name, orthogonal frequency division multiplexing.
The property of orthogonality for communication signals means that signals that are
orthogonal to each other can be transmitted together and they don't interfere with each other.
So having the subcarriers all orthogonal to one another (each subcarrier is orthogonal to all
the other 1535 subcarriers) means that you can transmit the subcarriers in parallel and they
won't interfere with each other. This means that the individual spectra for each of the
subcarriers can overlap, and they still won't interfere with one another. A diagram that shows
what the frequency spectra of subcarriers looks like for DAB is shown below, and the number
of subcarriers for TM1 will be 1536:

As you can see from the figure above, for the subcarrier shown in red, all four neighbouring
spectra are zero where the red spectrum is at its peak, so there is no "intercarrier
interference"; this is due to the orthogonality principle.
The reason why the DFT makes OFDM practically feasible is that, without it, transmitting
1536 subcarriers that are all orthogonal to each other would need 1536 oscillators all
separated by 1 kHz and 1536 filters at the transmitter, plus 1536 oscillators and filters in each
receiver, which is obviously not practical.
After the IFFT has been calculated, the 2048 output complex numbers (for TM1) are
parallel-to-serial converted (the P/S block in the diagram above), and following this the
cyclic prefix (or guard period) is inserted (see the diagram at the start of the COFDM
Transmitter section).
The cyclic prefix copies the complex numbers from the end of the block of output values and
"pastes" them onto the front of the block (or from the front of the block copied to the end).
The reason why the values from the end of the block are copied to the front is to retain
orthogonality in the multipath channel.

The reason why the end of the block is copied to the front is so that the delayed paths from
the symbol fall within the guard period. To see why this retains orthogonality, consider that
the OFDM signal consists of the sum of all the subcarrier signals, which are all at different
frequencies and have different amplitude values (the aₙ and bₙ in the equation), as shown by
the waveforms that are added to make the bottom OFDM signal:
Then, so long as all the multipaths fall within the cyclic prefix duration and if samples are
taken over the "useful" symbol duration (as opposed to the total symbol duration that includes
the cyclic prefix) then the DFT equation that is calculated in the receiver "integrates" over an
integer number of full sinewave cycles, which is a requirement for orthogonality to hold.
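The copy-and-paste operation itself is trivial. The sketch below assumes TM1's 2048-sample useful block and a guard of 504 samples; treat the exact guard length as an assumption, the copy-and-paste mechanism is the point:

```python
import numpy as np

# Cyclic prefix: copy the last `guard` samples of the IFFT output block
# and paste them onto the front before transmission.
def add_cyclic_prefix(block, guard):
    return np.concatenate([block[-guard:], block])

# The receiver simply discards the prefix before the FFT.
def strip_cyclic_prefix(block, guard):
    return block[guard:]

ofdm_block = np.arange(2048, dtype=complex)   # stand-in for one IFFT output block
tx = add_cyclic_prefix(ofdm_block, 504)
print(len(tx))                                            # 2552 samples on air
print(np.array_equal(strip_cyclic_prefix(tx, 504), ofdm_block))  # True
```

Because the prefix is a copy of the block's tail, any multipath delay shorter than the guard still presents the receiver's FFT window with a circular shift of the symbol, which is what preserves orthogonality.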
Following the insertion of the cyclic prefix, the values are fed to digital to analogue
converters (DAC) and lowpass filters for each of the real and imaginary streams. The real
values of the complex numbers are then amplitude modulated onto a cosine RF (radio
frequency, i.e. about 210 MHz for Band III) carrier, and the imaginary values of the complex
numbers are amplitude modulated onto a sine RF carrier. The sine and cosine carriers are
then added together, and sent through a bandpass filter and then sent to the antenna for
transmission.
The insertion of the guard period between the useful symbols also enables DAB to use single-
frequency networks (SFNs):

Using a cyclic prefix means that receivers can receive signals on the same frequency from
different transmitters so long as the delay between the first and last signal to arrive falls
within the cyclic prefix duration. So signals from transmitters whose signals are delayed
relative to the signals from a closer transmitter are treated as "artificial" multipath.
SFNs allow the same frequency to be used for a given area and this means that a few low
power transmitters can be used as opposed to having one very high power transmitter.
Overall, the power required using the SFN concept is lower for transmitting to a given area.
SFNs are also spectrally efficient when it comes to frequency planning because for example,
both the BBC and Digital One use the same frequency right across the UK, so the situation
where there are multiple frequencies required is avoided.
I've found that there is a common misconception that only the BBC and Digital One
multiplexes use the SFN concept. This is not so, and all DAB multiplexes that have more
than one transmitter for a given area use the SFN concept, and this is the vast majority of
multiplexes that I'm aware of in the UK.

COFDM Receiver

After the signals are received at the antenna, the signals are I/Q downconverted from RF to
generate the real (I) and imaginary (Q) streams, lowpass filtered (LPF) and digitized in the
analogue to digital converters (ADC, one ADC for each stream). Following the ADC, the
cyclic prefix is stripped off and the remaining sampled values are serial-to-parallel converted,
and once there is a full block of samples (2048 for TM1) the DFT is calculated (in reality the
FFT is calculated, as the FFT requires far fewer multiplications than the direct DFT).
After the FFT (the FFT is the OFDM demodulator), the originally transmitted symbols will
be received, but they will be corrupted in that the amplitude and phase will be altered by the
channel response for each subcarrier, and noise will be added in the receiver which moves the
received point in a random direction and with a random amplitude.
As DAB uses differential modulation, only the difference in phase between the previous and
present symbol on each subcarrier needs to be found to decode what was sent (ignoring
errors).
The phase angle of a complex number can be found from the following formula:
theta = tan⁻¹(Q / I)
To find the phase difference between the previous and present symbol the complex conjugate
of the previous received point is multiplied by the present received point, then the angle of
the result of this multiplication is the phase change. The complex conjugate of a complex
number just changes the sign of the imaginary part, for example, if you have 1 + j2, then its
complex conjugate is 1 - j2.
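A sketch of this conjugate-multiply detection, reusing the 45° to 225° example from earlier:

```python
import cmath
import math

# Recover the phase change between two received symbols on a subcarrier by
# multiplying the present point by the complex conjugate of the previous one.
prev = complex(0.707, 0.707)      # previous symbol, 45 degrees
curr = complex(-0.707, -0.707)    # present symbol, 225 degrees

diff = curr * prev.conjugate()
change = math.degrees(cmath.phase(diff)) % 360
print(round(change))              # 180, which the Gray mapping decodes as '11'
```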
Unfortunately, when DAB was specified in 1991 the engineers decided to use differential
modulation instead of coherent (or synchronised) modulation. Synchronised modulation
means that the absolute phase of the symbol is transmitted, rather than the difference between
phases. In 1991 differential modulation may have seemed a good choice, but synchronised
modulation is used in virtually all modern communication systems because it is now easy to
synchronise the carrier, and differential modulation doubles the number of bit errors
compared to synchronised modulation. The reason why the number of bit errors is doubled is
that each differential decision uses the previous symbol as its phase reference, so if one
received symbol has been rotated by the channel or by noise to an extent that causes an error,
there is a very high probability that the decision on the following symbol, which uses the
corrupted symbol as its reference, will also be wrong.
For example, for a typical probability of error of about 0.0001, if one error occurs then the
probability that the following symbol is also in error is approximately 1 − 0.0001 = 0.9999,
i.e. virtually certain, so overall differential modulation roughly doubles the number of bit
errors.
Following the determination of the change of phase on each of the subcarriers, first the
frequency interleaving is reversed and then the time interleaving is reversed, and the values
are fed into the Viterbi error correction decoder.
The output bitstream from the Viterbi decoder is then forwarded to software or hardware that
goes about splitting the multiplexed data into its constituent streams followed by sending the
audio data to the MPEG decoder to generate the PCM bitstream that is sent to the DACs,
amplified and sent to the speakers.

My Proposal to Improve DAB

Synchronous Modulation
First, as I've just described, DAB uses differential modulation. It would be easy for new
receivers to be designed to use synchronised demodulation, and this wouldn't affect the
existing receivers that use differential demodulation. This would remove the unnecessary
doubling of the number of bit errors.

Hierarchical Modulation
As I described earlier (in the COFDM Transmitter section), the capacity of a DAB multiplex
depends on the number of points in the signal constellation. DAB uses QPSK which has 4
signal points, which means that each data symbol on each subcarrier carries 2 bits of data.
Moving to a 16-QAM signal constellation would be problematic from a backward
compatibility point of view, unless the transmitter powers were significantly increased. But
an 8-APSK constellation would be possible:
This scheme is called "hierarchical modulation" and a similar scheme is specified in the
DVB-T (Freeview) specification. It is called hierarchical modulation because receivers with a
lower signal to noise ratio still receive the lower bit rate stream (called the high priority (HP)
stream) while receivers with a high enough signal to noise ratio receive the higher bit rate
stream (called the low priority (LP) stream).
This would increase the capacity of a DAB multiplex by 50%, and would not cause problems
in terms of backwards compatibility with existing receivers because the existing differential
phase modulation could still be used. Newer receivers with a high enough signal to noise
ratio would also be able to decode which of the two "rings" the transmitted point came from,
and hence decode the extra bit per symbol per subcarrier, which means that instead of 2 bits
per symbol per subcarrier you decode 3, hence the 50% increase in capacity.
This only requires a relatively small increase in transmitter power, and the increase in
transmitter power benefits the receivers that are only receiving QPSK.
In order to be backwardly compatible with existing receivers, the HP stream would have to be
transmitted as it is transmitted now, while the extra capacity can be used to provide extra
information to modify the audio bitstream in order to improve the accuracy of the audio
decoding. This would require development of additional electronics hardware, but that is a
trivial task, and certainly a task worth undertaking for the reward of an extra 50% of capacity
and the significantly improved audio quality that would result.

Low Density Parity Check (LDPC) Coding


LDPC codes are an old form of FEC code (invented by the famous coding theorist Robert
Gallager) that have been "re-discovered" and have attracted a lot of attention from the
information theory research community because of their near-optimum performance.
codes acquire their power due to them being decoded using the so-called turbo principle,
which is an iterative decoding technique from which these (and turbo codes) derive their
near-optimum performance. FEC codes that use the turbo principle were not re-discovered
until after DAB and DVB-T had been standardised, and so could not be used, which is a
shame because FEC codes that use the turbo principle outperform the FEC coding used on
DAB by a very large margin. The two-layer FEC coding used on DVB-T (DAB uses a single
layer of FEC coding, and therefore the error protection is significantly weaker than for DVB-
T) is also outperformed quite significantly.
Turbo codes were the first type of FEC codes to use the turbo principle, but a major
advantage of LDPC codes is that they are far less computationally complex to decode at the
receiver, and hence the power consumption in the receiver will be lower for LDPC codes than
for turbo codes, yet LDPC codes achieve approximately the same near-optimum performance
that turbo codes achieve. It is for this reason that the DVB organisation chose LDPC codes
for the new DVB-S2 digital satellite standard, which supposedly performs so close to optimal
that the DVB claim it will never need to be replaced.
Using LDPC coding along with hierarchical modulation -- if the transmission parameters are
chosen wisely -- would allow a large percentage of people that can decode the high-priority
QPSK backwardly compatible stream to also decode the lower priority streams.

Variable Bit Rate Coding


VBR is far more efficient than constant bit rate (CBR) coding, and the Internet
audio coding community all seem to favour VBR.
Variable bit rate coding varies the bit rate on a frame-by-frame basis according
to the difficulty in encoding the audio frame so that more difficult to encode
frames use a higher bit rate, while easier to encode frames use a lower bit rate.
This is a sensible way to allocate bit rate and provides the best compression for a
given audio quality. Layer 2 as used on DAB has the option of VBR.
Using VBR allows the use of statistical multiplexing across the whole multiplex,
so if the low-priority stream was used to carry VBR information that has been
statistically multiplexed then the low-priority stream information could be used
as effectively as possible to provide higher bit rates where they’re needed and
thus the audio quality across the multiplex could be dramatically improved.
To implement this you would transmit the high-priority stream as usual with the
normal bit rates, then if the receiver can decode the low-priority stream, the VBR
information could modify the high-priority audio information to make the
frequency components more accurate, and therefore provide a higher audio
quality.
This would place most of the “intelligence” at the broadcaster’s end and a
receiver would just need to have some chip or software to modify the MPEG
audio data prior to the data entering the MPEG decoder.
Overall, the scheme proposed above would allow the audio quality on DAB to be
transformed from mediocre in the extreme, to high audio quality worthy of the
term "near-CD quality".
I have forwarded my proposal to someone at the BBC that can influence these
things, but judging by the present state that DAB is in then I won't hold my
breath before the audio quality is any good, and will continue to listen to
Freeview until that happens.

DAB Summary
DAB could be a good system because it is capable of delivering a very high quality audio
signal. It does offer a greater choice of stations than you would be able to receive on FM and
it has a data channel which can send song titles and artist names and news and travel
information which will be useful for future hand-held devices.
However, because there are too many stations crammed into the existing multiplexes, the bit
rates of the majority of services have had to be reduced. Because the BBC have introduced 5
new stations onto their multiplex, the bit rates of all the BBC's stations have had to be
reduced to a level that is not even as good as that provided by FM. On top of this, Radios 1 &
2 and
most of the commercial stations use high levels of audio processing to increase the loudness
of their station's sound, and this further degrades the already low audio quality.
Listeners to stations that are only otherwise available on AM would benefit from buying a
DAB radio, but DAB sounding better than AM is hardly much to boast about...
If you listen on portables, then because of the improved reception compared with FM it may
be worth buying a DAB radio (portables are very expensive at the time of writing though).
DAB car stereos are still quite expensive, starting at around £200, so I would put off getting a
DAB car stereo until prices drop. Also, digitally-enhanced FM receivers for car CD players
have been introduced by the likes of Pioneer (look for ones with D4Q) and Blaupunkt
(DigiCeiver), which significantly improve FM reception in cars and are much cheaper than
car stereos with DAB.
The problem with audio quality on DAB could easily be solved if the radio stations increased
their bit rates, but the commercial radio stations don't seem to want to do this (they leave
unused space on their multiplexes, when they could just as easily increase bit rates to fill up
the multiplexes) and the BBC may not get any more DAB bandwidth. This would leave
DAB's audio quality problems unresolved.
A better system for listening at home than DAB is receiving digital radio via satellite. This is
described in more detail on the Digital Satellite (DSat) Radio page, but briefly, it is as cheap
if not cheaper than a DAB radio (you don't have to pay a subscription to get the digital radio
stations, see what you get without a subscription here), the bit rates on the vast majority of
stations are higher than on DAB (none are lower on DSat than on DAB) and therefore the
audio quality is far superior to that on DAB. For example, Radios 1-4 are all transmitted at
192kbps on satellite whereas Radios 1, 2 and 4 all use 128kbps on DAB.
Therefore, for home listening I would recommend people to get satellite.
