
LTE Drive Testing, Radio Network


Planning & Optimization

Version 01.01 approved
Prepared by Ajay Chhalotre
Nex-G Exuberant Solutions Pvt. Ltd.
27th May, 2013

Table of Contents
1. RADIO NETWORK PLANNING..........................................................................................3
1.1 PATH LOSS BASED APPROACH ............................................................................................. 5
1.2 SIMULATION BASED APPROACH ......................................................................................... 8
2 LINK BUDGET .....................................................................................................................11
2.1 UPLINK LINK BUDGET .......................................................................................................... 12
2.2 DOWNLINK LINK BUDGET ................................................................................................... 19
3 LTE LINK BUDGET EXERCISE .......................................................................................23
4 LTE COVERAGE SCENARIOS .........................................................................................26
5 LTE COVERAGE IN NOISE LIMITED SCENARIO ......................................................26
5.1 DEFINITION OF AVERAGE SNR ......................................................................... 26
5.2 REQUIRED SINR ...................................................................................................................... 27
5.2.1 Spectral Efficiency:- ............................................................................................... 27
5.3 INTERFERENCE ....................................................................................................................... 30
5.4 COVERAGE-BASED SITE COUNT ...................................................................................... 31
6 CAPACITY PLANNING .......................................................................................................33
6.1 AVERAGE CELL THROUGHPUT CALCULATION ............................................................. 33
6.2 SENSITIVITY ............................................................................................................................ 35
7 LTE KEY PERFORMANCE INDICATORS FOR LTE RF DESIGN .............................36
7.1 Reference Signal Received Power (RSRP):- ...................................................................... 36
7.2 Reference Signal Received Quality (RSRQ) :- ........................................................................... 36
7.3 Overlapping Zones (Number of Servers):- ........................................................................... 39
Design KPI for Overlapping Zones (Number of Servers): ............................................................. 40
7.4 DL Cell Aggregate Throughput .............................................................................................. 40
8 PEAK RATE CALCULATION IN LTE .............................................................................41
9 CELL EDGE PERFORMANCE ..........................................................................................42
9.1 Enhancing cell Edge Performance using ICIC........................................................................... 42
9.2 Cell-Edge SINR from Simulation, Trial and Real-Life Networks .............................................. 42
9.3 Effects Caused by Mobility:- ...................................................................................................... 45
9.4 Radio Link Failure and PDCCH Performance ............................................................................ 49
9.5 Inter-Cell Interference Coordination (ICIC) ............................................................................... 53
10 CELL EDGE THROUGHPUT ............................................................................................56
11 VOIP CAPACITY CALCULATION ..................................................................................58
12 REFERENCES .......................................................................................................................60


Revision History
Name | Date | Reason For Changes | Version
Ajay Chhalotre | 27th May, 2013 | Requirements added by Mr. Joseph Bada | 1.01




1. RADIO NETWORK PLANNING
Radio network planning is used to identify the geographic locations of the eNodeBs. It is
also used to identify the associated antenna configurations in terms of antenna type,
height, azimuth and tilt

The high level interaction of radio network planning with dimensioning, site acquisition,
site design and site building is illustrated in Figure 1


Figure 1

Radio network planning is preceded by system dimensioning. Dimensioning results are
used to generate an estimate of the expected site density. This provides a guide to the
number of sites required to achieve the target coverage and capacity

Site acquisition defines the actual sites which are available to the radio network planners.
The site acquisition team may provide radio network planners with a list of candidate sites.
Alternatively, radio network planners may identify the requirement for one or more sites at
locations where there are currently no candidates

Sites represent a relatively expensive investment for the operator. Selecting sites which
have a poor location or which limit good site design can lead to reduced system
performance irrespective of any subsequent RF and parameter optimization

Evaluating candidate sites should involve a site visit as well as modeling within the radio
network planning tool. In practice, the interaction between site acquisition and radio
network planning is likely to be a compromise between obtaining ideal site locations and
the actual locations which are available

If an operator has already deployed a 2G or 3G network then it is likely to be beneficial to
re-use as many of the existing site locations as possible. Re-use of existing sites
introduces the option of sharing antenna sub-systems between 2G, 3G and LTE, i.e.
feeder cables and antennas can be shared.

A potential drawback associated with sharing antennas is that it may limit the radio
network planner's ability to apply independent 2G, 3G and LTE RF optimization, e.g. if
the LTE system requires mechanical antenna downtilt, then the impact upon the 2G and
3G systems must also be evaluated

Site selection should also account for site design requirements. Site design involves
identifying specific locations for the eNodeB cabinet and antennas. It also involves
identifying the route for the feeder cable to connect the eNodeB cabinet to each antenna.
The feeder cable should be routed as directly as possible to help reduce its length and
minimize feeder losses

Antenna placement represents the part of site design which has a direct impact upon
radio network planning:

o Antennas should not be placed behind any obstacles. Roof-top site designs
should ensure that antennas are either at the edge of the roof-top or sufficiently
high to avoid the edge of the roof-top clipping the antenna gain pattern

o If antennas are mounted on the side of a building then the azimuth requirement
should be considered to ensure that the walls of the building do not shield the
horizontal beamwidth

o If the site already accommodates other antennas then the physical separation
from those antennas should be considered to avoid interference

There are 2 fundamental approaches to radio network planning - Path Loss Based
approach and Simulation Based approach
o The path loss based approach is relatively simple and allows the radio network to
be planned without modeling any subscriber traffic. This approach uses link
budget results to define minimum signal strength thresholds. Results are typically
presented in terms of service coverage and best server areas

o The simulation based approach is more complex and time consuming but
generates a greater quantity of information. Simulations are typically based upon
static rather than dynamic simulations. Results can be presented in terms of
coverage, cell and connection throughputs, transmit powers, interference levels

Simulations may be used to supplement the path loss based approach. In this case, the
main planning activity is completed using path loss calculations while simulations provide
additional and more detailed information for specific sections of the network


1.1 PATH LOSS BASED APPROACH
The path loss based approach to radio network planning requires a planning tool
capable of evaluating path loss calculations and displaying geographic areas where
specific path loss thresholds have been exceeded. The planning tool should also be
capable of displaying best server areas

The inputs to the planning tool for the path loss based approach are illustrated in Figure 2


Figure 2

The LTE site data should include site locations, antenna types, antenna heights, antenna
tilts, antenna azimuths, feeder types, feeder lengths, RF carrier and eNode B types

The propagation model should be tuned from measurements. Accurate propagation
modeling is fundamental to radio network planning. Propagation model tuning involves
minimizing the standard deviation of the error between the predicted and measured
propagation loss while maintaining a mean error which is close to 0 dB

A different propagation model can be defined for each environment type. Environment
dependent correction factors can be applied to a common propagation model.

Different propagation models can also be defined for different cell ranges, antenna
heights and operating bands

The digital terrain map should be adequate in terms of resolution, number of clutter
categories and accuracy. The resolution should be relatively high for urban and suburban
areas but can be reduced for rural areas. The appropriate number of clutter categories
depends upon the geographic area. If the number of categories is large then the
propagation tuning exercise becomes more difficult

If microcells or picocells with below roof-top antennas are to be planned then it may be
necessary to purchase a map which includes building vectors. Building vectors can be
defined in either two or three dimensions


The signal strength thresholds should be based upon a link budget analysis. Link budgets
can be used to generate a set of maximum allowed path loss thresholds for a range of
target throughputs within each environment type. In general, planning tools display
contours of downlink signal strength rather than downlink path loss. This makes it
necessary to translate the maximum allowed path loss into a minimum allowed signal
strength

A relatively arbitrary base station transmit power can be selected and the signal strength
thresholds calculated by subtracting the maximum allowed path loss.

The transmit power could be set equal to the cell specific reference signal transmit power
per resource element. This allows signal strengths calculated by the planning tool to be
interpreted as RSRP (Reference Signal Receive Power)

An example of the translation from a link budget maximum allowed path loss to a
planning tool signal strength threshold is presented in Table 1. In this example, the
downlink transmit power is based upon a total downlink transmit power capability of
43 dBm and a channel bandwidth of 20 MHz (1200 Resource Elements),

i.e. the cell specific Reference Signal transmit power per Resource Element is
43 - 10 x LOG(1200) = 12 dBm (assuming equal transmit powers for all Resource
Elements)

Table 1 assumes that the maximum allowed path loss from the LTE link budget is 145
dB. This figure could have originated from either an uplink or downlink link budget. In both
cases, the planning tool is used to display contours of downlink signal strength, i.e.
signal strength is used to quantify path loss

Table 1


Calculating the planning tool signal strength threshold also requires consideration of the
antenna gains and feeder loss values assumed in the link budget. The planning tool will
apply its own antenna gains and feeder loss values which can be site specific and may
not equal the values assumed in the link budget

The antenna gain values assumed in the link budget can be removed from the maximum
allowed path loss result, so the planning tool is able to apply its own values on a site by
site basis. This is done by subtracting the link budget antenna gain values from the
maximum allowed path loss


Handling feeder loss values can be more complex:

o If feeder loss values within the planning tool are set to 0 dB then the maximum
allowed path loss does not require any feeder loss adjustment

o If feeder loss values within the planning tool are set to their actual values then
the link budget feeder loss should be added to the maximum allowed path loss
result. This will generate only an approximate result if the link budget analysis
resulted in uplink limited coverage with the use of Mast Head Amplifiers (MHA).
The result is approximate because the maximum allowed path loss generated by
the link budget is independent of the feeder loss (assuming the MHA exactly
compensates the feeder loss) while the signal strength calculated by the
planning tool is dependent upon the feeder loss

The resultant planning tool signal strength threshold in Table 1 is calculated as the
downlink transmit power minus the adjusted maximum allowed path loss,
i.e. 12 - (145 - 18 - 0 + 2) = -117 dBm
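
As a cross-check of this arithmetic, a minimal sketch in Python; the values below simply mirror the Table 1 example and are not taken from a planning tool:

    import math

    def rs_power_per_re_dbm(total_tx_power_dbm, n_resource_elements):
        # Cell specific Reference Signal power per Resource Element, assuming
        # equal transmit powers for all Resource Elements
        return total_tx_power_dbm - 10 * math.log10(n_resource_elements)

    def signal_strength_threshold_dbm(rs_epre_dbm, mapl_db, lb_antenna_gain_db,
                                      feeder_adjustment_db, other_adjustment_db):
        # Minimum allowed signal strength (interpreted as RSRP) for the planning tool
        adjusted_mapl = (mapl_db - lb_antenna_gain_db
                         - feeder_adjustment_db + other_adjustment_db)
        return rs_epre_dbm - adjusted_mapl

    epre = rs_power_per_re_dbm(43, 1200)                               # ~12 dBm
    print(round(signal_strength_threshold_dbm(epre, 145, 18, 0, 2)))   # ~-117 dBm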

The path loss based approach to radio network planning should include an analysis of the
best server areas. This helps to ensure good dominance and a relatively even distribution
of network loading. Best server areas should be contiguous and should not be
fragmented.

Non-contiguous best server areas indicate that there is likely to be relatively poor
dominance and increased levels of inter-cell interference. In general, neighboring best
server areas should be of approximately equal size. If there is a known traffic hotspot
then an eNode B should be located as close as possible to that hotspot and the
dominance area can be smaller

1.2 SIMULATION BASED APPROACH
The simulation based approach to radio network planning requires a more sophisticated
radio network planning tool. The majority of radio network planning tools use Monte Carlo
simulations. Monte Carlo simulations are static rather than dynamic. This means that
system performance is evaluated by considering many independent instants (snap shots)
in time. In general, dynamic simulations are more time consuming than static simulations

Figure 3 compares the principles of static and dynamic simulations:

Static simulations are able to generate probability distributions and averages
Network behavior during one simulation snap shot does not impact the
behavior during the next snap shot. Snapshots are random and do not follow
any time history. When compared to dynamic simulations, fewer time instants
are required to generate results

Dynamic simulations are able to generate time histories in addition to
probability distributions and averages. The time history of network behavior is
modeled with relatively high resolution. Certain aspects of network behavior
can be modeled in a more realistic manner, e.g. a packet scheduler can
account for allocations during previous time instants, and hysteresis in handover
thresholds can be followed


Figure 3

Both the static and dynamic simulations referenced above are system level simulations
rather than link level simulations. The results from link level simulations usually form an
input to system level simulations, e.g. the system level simulation look-up table defining
the relationship between throughput and signal to noise ratio originates from link level
simulations

System level simulations model the network at a relatively high level. UE and BTS
physical locations are modeled along with their antenna gains, transmit powers
and noise floors. System level simulations do not model the transfer of individual
modulation symbols. Static simulations calculate the signal to noise ratio
conditions and subsequently look-up a corresponding throughput. Dynamic
simulations may model the transfer of individual transport blocks and then use
look-up tables to determine the probability of successful reception

Link level simulations model the low level details of the transmitter and receiver.
The physical layer algorithms used by the UE and BTS are modeled. These
include channel coding, interleaving and error detection. Individual bits are
mapped onto modulation symbols before being filtered and transferred across
specific propagation channels. Receiver modeling is likely to include
synchronization, channel estimation and equalization. Bit Error Rates (BER) and
Block Error Rates (BLER) are calculated as a function of throughput and signal to
noise ratio.

The general principle for Monte Carlo static simulations is presented in Figure 4


Figure 4

Each simulation snap shot is started by distributing a population of UE across the
simulation area. This distribution could be based upon a uniform random distribution or a
weighted random distribution. Weightings can be based upon Erlang maps generated
from the traffic belonging to an existing network. Alternatively, weightings could be
environment type dependent so, for example, urban areas could be specified to have a
higher traffic density than suburban areas

Once the population of UE has been distributed, the simulation results are generated for
that snapshot. The planning tool determines uplink and downlink signal to noise ratios for
each physical channel and Reference Signal. Throughputs are derived using look-up
tables which define the relationship between throughput and signal to noise ratio. These
look-up tables typically originate from separate link level simulations.

The results are recorded at the end of a simulation snapshot and the process is repeated.
This allows the simulation to generate probability distributions and to quantify the
probability of certain events occurring, e.g. the probability that a UE will be able to
establish a connection at a specific location. The number of snap shots necessary to
generate statistically stable simulation results tends to depend upon the quantity of traffic
distributed during each snapshot. Distributing relatively little traffic tends to increase the
number of snap shots required

The inputs to the planning tool for the simulation based approach to radio network
planning are illustrated in Figure 5



Figure 5

The first three inputs are similar to those required by the path loss based approach. The
LTE site data may be more complex in terms of requiring greater information to describe
the BTS capability, e.g. baseband processing capability
Propagation modeling is also more complex for a simulation if slow fading is modeled.
The path loss based approach uses a link budget threshold which includes a slow fade
margin. Simulations typically model slow fading explicitly, making it necessary to specify a
standard deviation and correlation factor. The correlation factor is used to specify the
coherence of the fading experienced by the signals between a UE and the set of
surrounding BTS. A high correlation factor means that when one signal is experiencing a
fade there is a high probability that the other signals will also be experiencing a fade

Typical LTE parameter assumptions include channel bandwidth, maximum uplink and
downlink transmit powers, Reference Signal transmit powers, uplink and downlink noise
figures, look-up tables between signal to noise ratio and throughput for each propagation
channel and throughput requirements for a range of services

Traffic profiles are specified in terms of the services used by the population of UE.
Defining accurate traffic profiles can be difficult and it is reasonable to start a simulation
exercise by modeling one service at a time, e.g. generate coverage and capacity results
for one service and then generate similar results for the next. The traffic profile also
requires the geographic distribution of the UE to be defined.
The main benefit of completing simulations is the relatively large quantity of information
which is generated. This information can help guide planning decisions as well as provide
a more extensive expectation of network performance



2 LINK BUDGET
Link budgets are used during both system dimensioning and radio network planning.
The dimensioning process provides an estimate of the number of network elements
required to achieve a specific coverage and capacity performance. Link budgets are used
during dimensioning to estimate the maximum allowed path loss and the corresponding
cell range

The results from a dimensioning exercise can be used as an input for a business case
analysis. The number of network elements and their associated configuration allows the
cost of the network to be quantified. The path loss based approach to radio network
planning makes use of link budget results to define maximum allowed path loss
thresholds

2.1 UPLINK LINK BUDGET
A set of example link budgets for the PUSCH is presented in
Table 2


The channel Bandwidth is not used explicitly within the link budget but it has an impact
upon the Signal to Interference plus Noise Ratio (SINR) requirement. SINR requirements
are reduced for larger channel bandwidths when frequency domain scheduling is channel
aware. Larger channel bandwidths provide the packet scheduler with greater flexibility to
allocate Resource Blocks experiencing good channel conditions. The channel bandwidth
also places an upper limit upon the number of Resource Blocks which can be allocated to
the UE

The total number of Resource Blocks is included for information and is not used explicitly
within the link budget

The number of allocated Resource Blocks defines a trade-off between the cell capacity
and the cell range. Allocating a small number of Resource Blocks to the connection leads
to less air-interface redundancy (higher coding rate and less benefit from channel
coding). This means the cell range must be limited otherwise the eNode B will not be able
to successfully decode the transport blocks. The benefit of allocating a small number of
Resource Blocks is that more Resource Blocks are available for other connections and
the cell capacity is typically increased

The example presented in Table 2 allocates 8 Resource Blocks for a 500 kbps
connection. Based upon Table 3, a connection with 8 Resource Blocks can use a
transport block size of 552 bits to achieve 500 kbps. Table 2 indicates that QPSK is used
as a modulation scheme, which is appropriate for cell edge connections

Table 3


The set of 8 Resource Blocks is a relatively large allocation for a transport block size of
552 bits. A single Resource Block allocation with the normal cyclic prefix can
accommodate 2 * 72 = 144 modulation symbols after accounting for the uplink
Demodulation Reference Signal and the pairing of Resource Blocks in the time domain.
This corresponds to a total allocated capacity of 2304 bits after accounting for all 8
Resource Blocks and the QPSK modulation scheme. The coding rate is then defined by
the ratio of 552 / 2304 = 0.24. This represents a low coding rate so the corresponding
SINR requirement should be relatively small and the maximum allowed path loss should
be relatively large
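
The coding rate arithmetic used in these PUSCH examples can be reproduced with a short sketch, assuming 144 modulation symbols per Resource Block pair after the Demodulation Reference Signal is removed, as described above:

    def pusch_coding_rate(transport_block_bits, n_resource_blocks,
                          bits_per_symbol=2, symbols_per_rb_pair=144):
        # Approximate coding rate: transport block size divided by the number of
        # channel bits available within the allocated Resource Blocks
        channel_bits = n_resource_blocks * symbols_per_rb_pair * bits_per_symbol
        return transport_block_bits / channel_bits

    print(round(pusch_coding_rate(552, 8), 2))    # 0.24 for the 500 kbps example
    print(round(pusch_coding_rate(1192, 27), 2))  # 0.15 for the 1 Mbps example
    print(round(pusch_coding_rate(2216, 25), 2))  # 0.31 for the 2 Mbps example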

The example presented in Table 4 allocates 27 Resource Blocks for a 1 Mbps
connection. Based upon Table 3, a connection with 27 Resource Blocks can use a
transport block size of 1192 bits to achieve 1 Mbps. Table 2 indicates that QPSK is used
as a modulation scheme. The corresponding coding rate can be calculated as 1192 /
7776 = 0.15

Table 4



Table 5


The example presented in Table 5 allocates 25 Resource Blocks for a 2 Mbps
connection. Based upon Table 3, a connection with 25 Resource Blocks can use a
transport block size of 2216 bits to achieve 2 Mbps. Table 2 indicates that QPSK is used
as a modulation scheme. The corresponding coding rate can be calculated as 2216 /
7200 = 0.31

The number of allocated subcarriers is given by 12 * the number of allocated Resource
Blocks. The number of allocated subcarriers is used when aggregating the total thermal
noise received by the BTS, i.e. it defines the noise bandwidth

The maximum transmit power of 23 dBm corresponds to the capability of UE power class
3 specified by 3GPP within TS 36.101. This capability has a tolerance of ±2 dB so 23
dBm could be optimistic for some devices. The maximum transmit power has been
reduced to 22 dBm for the 1 Mbps and 2 Mbps examples because 3GPP TS 36.101
specifies a 1 dB Maximum Power Reduction (MPR) when more than 18 Resource Blocks
are allocated from the 20 MHz channel bandwidth while using QPSK

The terminal antenna gain can vary from one UE model to another. Data cards may have
a higher antenna gain than handheld devices. UE typically have an antenna gain in the
order of 0 dBi


Body loss is dependent upon the relative positions of the UE, the end-user and the
serving cell. A figure of 3 dB is typically assumed when the UE is held to one side of the
end-user's head. A figure of 0 dB is typically assumed when the UE is positioned away
from the body

The first main result from the uplink link budget is the transmit Effective Isotropic
Radiated Power (EIRP). This is defined using the expression below:

Transmit EIRP = Maximum Transmit Power + UE Antenna Gain - Body Loss

The thermal noise per subcarrier quantifies the noise power within the bandwidth of a
single 15 kHz subcarrier. This is calculated using 10 * LOG(kTB), where k is the
Boltzmann constant (1.38 * 10^-23), T is the temperature (290 Kelvin) and B is the
bandwidth (15000 Hz)

The aggregate thermal noise quantifies the noise power within the total allocated
bandwidth. It is given by the thermal noise per subcarrier + 10 * LOG(number of allocated
subcarriers)
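
These two quantities are straightforward to evaluate; a small sketch using the 8 Resource Block (96 subcarrier) example:

    import math

    BOLTZMANN = 1.38e-23   # J/K
    TEMPERATURE = 290      # Kelvin

    def thermal_noise_dbm(bandwidth_hz):
        # Thermal noise power within the specified bandwidth, expressed in dBm
        return 10 * math.log10(BOLTZMANN * TEMPERATURE * bandwidth_hz * 1000)

    per_subcarrier = thermal_noise_dbm(15e3)                  # ~-132.2 dBm
    aggregate = per_subcarrier + 10 * math.log10(8 * 12)      # ~-112.4 dBm for 96 subcarriers
    print(round(per_subcarrier, 1), round(aggregate, 1))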

The receiver noise figure assumption reflects the performance of the eNode B receiver
subsystem. The noise figure belonging to the eNode B cabinet should be used if the
receiver subsystem does not include a Mast Head Amplifier (MHA). If an MHA is
included, the noise figure should be the composite noise figure of the MHA,
cable/connectors and eNode B cabinet. The composite noise figure can be calculated
using Friis equation:

Composite Noise Figure = F_MHA + (F_cable - 1) / G_MHA + (F_eNodeB - 1) / (G_MHA x G_cable)

With the exception of the composite noise figure result, all of the variables within the
preceding equation have linear units. The noise figure of the cable and connectors is
equal to their loss. For example, the noise figure is 2 dB in log units and 1.6 in linear units
if the cable and connector loss is 2 dB. The gain of the cable and connectors is the inverse of
their loss, i.e. -2 dB in log units and 0.6 in linear units. Friis equation illustrates that when
the MHA has a high gain, the noise figure of the receiver sub-system is dominated by the
noise figure of the MHA. This emphasizes the importance of having a low noise, high gain
amplifier for the MHA
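
A small sketch of this cascade, using illustrative values (a 2 dB MHA noise figure with 12 dB gain, 2 dB of cable and connector loss and a 3 dB eNode B cabinet noise figure; these numbers are assumptions chosen only to exercise the formula):

    import math

    def db_to_linear(db):
        return 10 ** (db / 10)

    def composite_noise_figure_db(nf_mha_db, gain_mha_db, cable_loss_db, nf_enb_db):
        # Cascaded (Friis) noise figure of MHA + cable/connectors + eNodeB cabinet
        f_mha = db_to_linear(nf_mha_db)
        g_mha = db_to_linear(gain_mha_db)
        f_cable = db_to_linear(cable_loss_db)     # cable noise figure equals its loss
        g_cable = db_to_linear(-cable_loss_db)    # cable gain is the inverse of its loss
        f_enb = db_to_linear(nf_enb_db)
        f_total = f_mha + (f_cable - 1) / g_mha + (f_enb - 1) / (g_mha * g_cable)
        return 10 * math.log10(f_total)

    print(round(composite_noise_figure_db(2.0, 12.0, 2.0, 3.0), 2))   # ~2.4 dB, dominated by the MHA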

The interference margin is generated by co-channel interference from UE served by
neighboring cells. The interference margin is likely to be greater in urban areas where the
site density is relatively high. Heterogeneous network architectures can also lead to
increased co-channel interference. Inter-Cell Interference Coordination (ICIC) is intended
to help manage levels of co-channel interference

The interference floor is defined using the expression below:

Interference Floor = Aggregate Thermal Noise + Receiver Noise Figure + Interference
Margin


The Signal to Interference plus Noise Ratio (SINR) requirement is defined by link level
simulations which model the BTS receiver performance when the allocated transport
block size is transferred using the allocated number of Resource Blocks with a specific
Block Error Rate (BLER). Link level simulations model a specific propagation channel so
the SINR requirement is specific for that channel. Propagation channel modeling
includes fast fading (unless it is a static channel) so the resultant SINR includes the
impact of fast fading

The SINR examples in Table 2 do not demonstrate an obvious trend between SINR
requirement and bit rate requirement, i.e. the 500 kbps example has a higher SINR
requirement than the 1 Mbps example, but a lower SINR requirement than the 2 Mbps
example. This results from the different Resource Block allocations and different coding
rates. The SINR requirement increases as the coding rate increases

The second main result from the uplink link budget is the Received Signal Strength
Requirement. This is defined using the expression below:

Received Signal Strength Requirement = Interference Floor + SINR Requirement

As expected, the Received Signal Strength Requirement increases as the bit rate
requirement increases. The higher interference floors associated with the larger
Resource Block allocations increase the resultant Received Signal Strength Requirement

The eNode B antenna gain should be representative of the antenna type planned for
network deployment. In practice, networks may include a range of antenna types.
Antenna gains tend to decrease as the horizontal and vertical beam widths increase and
the antenna becomes less directional

The antenna gain figure can incorporate a polarisation loss of approximately 0.5 dB.
Antenna gains are typically quoted from measurements which have been recorded using
a receiving antenna element which has exactly the same polarisation as the transmitting
antenna element. In practice, the two antenna elements have different polarizations and a
polarisation loss is experienced. Reflections change the polarization of a radio signal and
this helps to reduce the loss because many signals with different polarizations can reach
the receiving antenna. Cross polar antennas also reduce the potential for polarisation
loss because the maximum angular offset is 45 degrees compared to 90 degrees for a
vertically polarized antenna

The cable loss variable within the uplink link budget is only applicable if it has not already
been included as part of the receiver noise figure. The uplink cable loss is included within
the composite noise figure if an MHA has been assumed. Otherwise, the cable loss
should equal all cable and connector losses between the eNode B cabinet and antenna.
The example presented in Table 2 assumes an MHA is used so the cable loss is set to 0
dB

The slow fading margin is calculated from an indoor location probability and an indoor
standard deviation. The indoor location probability is often specified as an average
probability of experiencing indoor coverage across the cell area. This figure is translated
to an equivalent cell edge indoor coverage probability before combining with the standard
deviation to generate the slow fading margin. The indoor location probability at cell edge
will be less than the average as a result of the higher UE transmit power requirement.
The indoor standard deviation represents a combination of the outdoor standard deviation
associated with slow fading, and the standard deviation generated by the variance of the
building penetration loss

The shadowing handover gain is generated by allowing the UE to handover onto the best
server. When the UE is at cell edge and there are multiple potential serving cells, the UE
is able to select the best cell to help avoid experiencing fades. The shadowing handover
gain is reduced when handovers have hysteresis to avoid ping-pongs between cells. The
shadowing handover gain would reduce to 0 dB at the edge of network coverage where
there are no neighboring cells to act as handover candidates. This may also be the case
at some indoor locations

Including the building penetration loss as part of the link budget generates an outdoor
maximum allowed path loss result which includes sufficient margin to allow UE at the cell
edge to establish and maintain connections from within buildings. The building
penetration loss may be replaced by a vehicle penetration loss if link budgets are
generated for a section of motorway or a rural area

The assumptions for building penetration loss typically depend upon the environment
type, e.g. building penetration could be greater within an urban environment than within a
suburban environment. The building penetration loss usually represents the link budget
assumption with the greatest uncertainty. SINR figures and shadowing handover gains
could have an uncertainty of ±1 dB while building penetration loss could have an
uncertainty of ±5 dB. This uncertainty is included within the link budget result by
calculating the slow fading margin from an indoor standard deviation which incorporates
the variance of the building penetration loss. Building penetration loss assumptions are
relatively difficult to validate in the field due to the large variance between different
buildings and the geometry of those buildings with respect to the radio network plan

The third main result from the uplink link budget is the Isotropic Power Requirement. This
is defined using the expression below:

Isotropic Power Requirement = Received Signal Strength Requirement - eNode B Antenna Gain
+ Cable Loss + Slow Fading Margin - Shadowing Handover Gain + Building Penetration
Loss

The Maximum Allowed Path Loss is then calculated as the difference between the
transmit EIRP and the isotropic power requirement:

Maximum Allowed Path Loss = Transmit EIRP - Isotropic Power Requirement
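
Pulling the preceding expressions together, a minimal uplink link budget sketch; the input values in the example call are illustrative assumptions in the spirit of the 500 kbps example, not the exact Table 2 entries:

    import math

    def uplink_mapl(tx_power_dbm, ue_ant_gain_db, body_loss_db,
                    n_alloc_subcarriers, noise_figure_db, interference_margin_db,
                    sinr_req_db, enb_ant_gain_db, cable_loss_db,
                    slow_fade_margin_db, shadow_ho_gain_db, building_loss_db):
        # Uplink maximum allowed path loss built from the link budget expressions above
        eirp = tx_power_dbm + ue_ant_gain_db - body_loss_db
        noise_per_sc = 10 * math.log10(1.38e-23 * 290 * 15e3 * 1000)   # dBm per subcarrier
        interference_floor = (noise_per_sc + 10 * math.log10(n_alloc_subcarriers)
                              + noise_figure_db + interference_margin_db)
        rx_signal_req = interference_floor + sinr_req_db
        isotropic_req = (rx_signal_req - enb_ant_gain_db + cable_loss_db
                         + slow_fade_margin_db - shadow_ho_gain_db + building_loss_db)
        return eirp - isotropic_req

    # Illustrative inputs: 23 dBm UE, 96 subcarriers (8 RBs), 2 dB noise figure, 2 dB
    # interference margin, -5 dB SINR requirement, 18 dBi antenna, MHA (0 dB cable loss),
    # 8 dB slow fade margin, 2 dB handover gain, 20 dB building penetration loss
    print(round(uplink_mapl(23, 0, 0, 96, 2, 2, -5, 18, 0, 8, 2, 20), 1))   # ~128 dB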

This maximum allowed path loss result can be compared with the equivalent downlink
result to determine whether coverage is uplink or downlink limited. This comparison
requires an offset to account for the frequency difference between the uplink and
downlink frequency bands. Higher frequencies tend to experience greater path loss so
coverage will tend to be downlink limited if both the uplink and downlink link budgets
generate equal maximum allowed path loss results. The frequency dependent term within
a typical Okumura-Hata path loss model is given by the equation below:

Frequency Dependent Loss = 33.9 x LOG(Frequency (MHz))

For a fixed cell range, this term quantifies the additional path loss experienced at the
higher downlink carrier frequency relative to the uplink carrier frequency.
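
As an illustration, assuming 3GPP band 1 centre frequencies of approximately 1950 MHz (uplink) and 2140 MHz (downlink), the frequency dependent term implies only a small offset:

    import math

    def okumura_hata_freq_term(freq_mhz):
        # Frequency dependent term of the Okumura-Hata style model quoted above
        return 33.9 * math.log10(freq_mhz)

    offset = okumura_hata_freq_term(2140) - okumura_hata_freq_term(1950)
    print(round(offset, 2))   # ~1.37 dB of extra path loss on the band 1 downlink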
2.2 DOWNLINK LINK BUDGET
A set of example link budgets for the PDSCH is presented in Table 6

Table 6

The channel Bandwidth is not used explicitly within the link budget but it has an impact
upon the Signal to Interference plus Noise Ratio (SINR) requirement. SINR requirements
are reduced for larger channel bandwidths when frequency domain scheduling is channel
aware. The channel bandwidth also places an upper limit upon the number of Resource
Blocks which can be allocated to the UE

The total number of Resource Blocks is used when calculating the transmit power for the
allocated Resource Blocks, i.e. the maximum transmit power is distributed across the
total number of Resource Blocks so only a proportion of that power is available for the
allocated Resource Blocks.


The number of allocated Resource Blocks defines a trade-off between the cell capacity
and the cell range. Allocating a small number of Resource Blocks to the connection leads
to less air-interface redundancy (higher coding rate and less benefit from channel
coding). This means the cell range must be limited otherwise the UE will not be able to
successfully decode the transport blocks. The benefit of allocating a small number of
Resource Blocks is that more Resource Blocks are available for other connections and
the cell capacity is typically increased

The number of allocated Resource Blocks in Table 6 has been kept similar to those in
Table 2 to allow a comparison between uplink and downlink

An allocation of 8 Resource Blocks has been assumed for the 500 kbps connection.
Similar to the uplink, a transport block size of 552 bits can be used with QPSK as a
modulation scheme. 2x2 MIMO is assumed for the link budgets in Table 6 but single
layer transmission with a single codeword (single transport block) is assumed at the cell
edge

The set of 8 Resource Blocks is a relatively large allocation for a transport block size of
552 bits. Assuming that the PCFICH, PHICH and PDCCH occupy the first 2 OFDMA
symbols within the subframe and that 2x2 MIMO is used (which impacts the number of
Resource Elements allocated to the cell specific Reference Signal), then the number of
Resource Elements remaining for the PDSCH is given by 8 x [(12 x 7 x 2) - (12 x 2 + 12)] =
1056. This corresponds to a capacity of 2112 bits when using QPSK. The coding rate is
then defined by the ratio of 552 / 2112 = 0.26.
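
The PDSCH Resource Element count and coding rates in these downlink examples can be reproduced as follows; this sketch assumes a 2 symbol control region and 12 cell specific Reference Signal Resource Elements per Resource Block pair outside that region, as in the worked example:

    def pdsch_res_elements(n_resource_blocks, control_symbols=2, crs_re_per_rb_pair=12):
        # Resource Elements left for the PDSCH per allocation (normal cyclic prefix)
        per_rb_pair = 12 * 7 * 2 - 12 * control_symbols - crs_re_per_rb_pair
        return n_resource_blocks * per_rb_pair

    def pdsch_coding_rate(transport_block_bits, n_resource_blocks, bits_per_symbol=2):
        channel_bits = pdsch_res_elements(n_resource_blocks) * bits_per_symbol
        return transport_block_bits / channel_bits

    print(pdsch_res_elements(8))                   # 1056
    print(round(pdsch_coding_rate(552, 8), 2))     # 0.26 for the 500 kbps example
    print(round(pdsch_coding_rate(1160, 26), 2))   # 0.17 for the 1 Mbps example
    print(round(pdsch_coding_rate(2216, 25), 2))   # 0.34 for the 2 Mbps example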

The example presented in Table 6 allocates 26 Resource Blocks for a 1 Mbps connection.
Based upon Table 3, a connection with 26 Resource Blocks can use a transport block
size of 1160 bits to achieve 1 Mbps. Table 2 indicates that QPSK is used as a
modulation scheme. The corresponding coding rate can be calculated as 1160 / 6864 =
0.17.

The example presented in Table 6 allocates 25 Resource Blocks for a 2 Mbps
connection. Based upon Table 3, a connection with 25 Resource Blocks can use a
transport block size of 2216 bits to achieve 2 Mbps. Table 2 indicates that QPSK is used
as a modulation scheme. The corresponding coding rate can be calculated as 2216 / 6600
= 0.34.

The number of allocated subcarriers is given by 12 x the number of allocated Resource
Blocks. The number of allocated subcarriers is used when aggregating the total thermal
noise received by the UE, i.e. it defines the noise bandwidth

The maximum transmit power of 43 dBm corresponds to a typical macrocell downlink
transmit power capability

The downlink transmit power is distributed across the entire channel bandwidth so only a
percentage of the total power is available for the Resource blocks allocated to the
connection being considered in the link budget. In the case of the 500 kbps connection, 8
out of 100 Resource Blocks have been allocated so the downlink transmit power for the
connection is 43 - 10 x LOG(100/8) = 32.0 dBm. Likewise, the transmit powers available
for the 1 Mbps and 2 Mbps connections are 37.1 dBm and 37.0 dBm respectively

The link budgets presented in Table 6 assume the use of 2x2 MIMO so there are 2
transmitting antenna ports and the downlink transmit powers across the air-interface are
increased by 3 dB
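
A quick sketch of the per-connection transmit power, including the 3 dB increase for two transmit antenna ports:

    import math

    def pdsch_tx_power_dbm(total_power_dbm, total_rbs, allocated_rbs, n_tx_antenna_ports=2):
        # Share of the total downlink power available to the allocation, plus the
        # increase obtained by transmitting from multiple antenna ports simultaneously
        share = total_power_dbm - 10 * math.log10(total_rbs / allocated_rbs)
        return share + 10 * math.log10(n_tx_antenna_ports)

    for rbs in (8, 26, 25):
        print(rbs, round(pdsch_tx_power_dbm(43, 100, rbs), 1))
    # 8 RBs -> 35.0 dBm, 26 RBs -> 40.2 dBm, 25 RBs -> 40.0 dBm (with the 3 dB increase)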

The eNode B antenna gain should be representative of the antenna type planned for
network deployment. In practice, networks may include a range of antenna types.
Antenna gains tend to decrease as the horizontal and vertical beam widths increase and
the antenna becomes less directional

The cable loss depends upon the site design and the use of remote radio heads or active
antenna systems. Table 6 assumes a conventional site design with 2 dB of cable loss in
the downlink direction. This cable loss includes the MHA insertion loss when an MHA is
present

The first main result from the downlink link budget is the transmit EIRP which is defined
as:
Transmit EIRP = Transmit Power for Allocated Resource Blocks + Multiple Antenna
Increase + Antenna Gain - Cable Loss

The thermal noise per subcarrier quantifies the noise power within the bandwidth of a
single 15 kHz subcarrier. This is calculated using 10 x LOG(kTB), where k is the
Boltzmann constant (1.38 x 10^-23), T is the temperature (290 Kelvin) and B is the
bandwidth (15,000 Hz)

The aggregate thermal noise quantifies the noise power within the total allocated
bandwidth. It is given by the thermal noise per subcarrier + 10 x LOG(number of allocated
subcarriers)

The receiver noise figure assumption reflects the performance of the UE receiver and is
likely to vary between UE models. A noise figure of 7 dB represents a typical assumption.

The interference margin is generated by co-channel interference from neighboring
eNode B. The interference margin is likely to be greater in urban areas where the site
density is relatively high. Heterogeneous network architectures can also lead to increased
co-channel interference. Inter-Cell Interference Coordination (ICIC) is intended to help
manage levels of co-channel interference

The interference floor is defined using the expression below:

Interference Floor = Aggregate Thermal Noise + Receiver Noise Figure + Interference
Margin

The Signal to Interference plus Noise Ratio (SINR) requirement is defined by link level
simulations which model the UE receiver performance when the allocated transport block
size is transferred using the allocated number of Resource Blocks with a specific Block
Error Rate (BLER). Link level simulations model a specific propagation channel so the
SINR requirement is specific for that channel. Propagation channel modeling includes
fast fading (unless it is a static channel) so the resultant SINR includes the impact of fast
fading

The SINR examples in Table 6 show the same trend as for the uplink, i.e. the 500 kbps
example has a higher SINR requirement than the 1 Mbps example, but a lower SINR
requirement than the 2 Mbps example. This results from the different Resource Block
allocations and different coding rates. The SINR requirement increases as the coding rate
increases

The second main result from the downlink link budget is the Received Signal Strength
Requirement. This is defined as:

Received Signal Strength Requirement = Interference Floor + SINR Requirement

As expected, the Received Signal Strength Requirement increases as the bit rate
requirement increases. The higher interference floors associated with the larger
Resource Block allocations increase the resultant Received Signal Strength Requirement

The UE antenna gain, body loss, slow fading margin, shadowing handover gain and
building penetration loss assumptions are the same as those for the uplink link budget

The third main result from the downlink link budget is the Isotropic Power Requirement.
This is defined as:

Isotropic Power Requirement = Received Signal Strength Requirement - UE Antenna Gain
+ Body Loss + Slow Fading Margin - Shadowing Handover Gain + Building Penetration Loss


The Maximum Allowed Path Loss is given by:

MAPL = Transmit EIRP - Isotropic Power Requirement


Worked examples of the link budget calculation have now been provided for both the uplink and the downlink.


3 LTE LINK BUDGET EXERCISE
Link Budget Exercise:-

An operator wishes to deploy an LTE network in 3GPP band 1. It is expected that the
channel bandwidth deployed will be 10MHz and the maximum number of resource
blocks allocated to a single user will be no more than 25. However, the uplink should
support both the QPSK and 16QAM modulation schemes.

The base stations will be classified as local area base stations and initially will support only
a single antenna port. The equipment vendor advises that the eNB configuration will
include a maximum of 3 dB of losses from the feeder, connectors and filters. It is also
expected that the sites will be deployed as 3 sector sites using a directional panel antenna
with a gain of 17.3 dBi; the UE antenna can be assumed to have unity gain (0 dBi).

The vendor's receivers conform to the LTE specification for SINR requirements; however,
it is expected that the UE receiver will have a NF of 7 dB and an additional
implementation margin of 2.5 dB for QPSK, 3 dB for 16QAM and 4 dB for 64QAM. The
eNB will have the same implementation margins but a better (lower) NF of 5 dB.

Determine the System Gain for QPSK 1/2, 16QAM 3/4 and 64QAM 4/5 in the downlink
direction and for QPSK 1/2 and 16QAM 3/4 in the uplink. Assume the worst case for resource
block allocation, i.e. 25 Resource Blocks.

Table 7



Table 8




Maximum Path loss:-

The LTE system is to be deployed in a dense urban area, with an area reliability of 95%.
Some in-building coverage is expected and preliminary investigation suggests that single
wall penetration losses in this frequency band should be approximately 10dB. There is no
significant vegetation in the area where deployment is to begin. The user equipment is
initially based on USB dongles therefore body losses will be kept to a minimum. Whilst
interference from neighboring cells is not expected at network launch, an
interference margin of 5 dB should be included in the preliminary design. Given the above
description determine the MAPL of the system. (Use the system gain figures from the first
part of this exercise)


Table 9


Determining Cell Radius:-

Assuming the COST-231 Propagation Model for an urban area, what are the uplink and
downlink ranges for the system above? The eNB antennas will be at 25 m and the UE
antenna height is assumed to be approximately 1.5 m.

Table 10


Companion workbooks: SS3 LTE Link Budgets and sensitivity.xls, LTE Cost Model V3.0.xlsm,
SS2 RSRQ.xlsx, SS4 Fade Margin.xlsx, SS5 Cascaded Noise.xls



4 LTE COVERAGE SCENARIOS


o For the PDSCH and PUSCH; for L1/L2 control and common channels other coding
schemes can be used.
o Downlink: only static Reference Signal power boosting has been specified by 3GPP;
vendor-specific implementations are possible.
o With UE-specific reference signals (vendor-specific beam forming) an arbitrary number
of Tx antennas can be used.
o For simultaneous transmission, transmit antenna selection is possible with RF switching.
o Multiuser MIMO is possible in the UL in 3GPP Release 8.
o Intra-TTI and inter-TTI frequency hopping is possible in the UL.
5 LTE COVERAGE IN NOISE LIMITED SCENARIO
5.1 DEFINITION OF AVERAGE SNR
Probably the most useful performance metric for the LTE radio planner is the average
signal-to-interference-and-noise ratio, SINR_avg, defined as

SINR_avg = S / (I + N)

where S is the average received signal power, I is the average interference power, and
N is the noise power. In measurement and simulation analysis, the sample averages
should be taken over the small-scale fading of S and I and over a large number of HARQ
retransmissions (several hundreds of TTIs, preferably). The average interference power
can be further decomposed into its own-cell and other-cell interference components.



5.2 REQUIRED SINR
Required SINR is the main performance indicator for LTE. Cell edge is defined according
to the Required SINR for a given cell throughput. Therefore, the accurate knowledge of
Required SINR is central to the authenticity of the RLB and thus the process of
dimensioning. Required SINR depends upon the following factors:

Modulation and Coding Scheme (MCS)
Propagation Channel Model

The higher the MCS used, the higher the required SINR, and vice versa. This means that
using QPSK will have a lower required SINR than 16-QAM.


Required SINR can be estimated by two different methods.

By using the Throughput vs. average SNR tables. These tables are obtained as an
output of link level simulations. For each type of propagation channel models and
different antenna configurations, different tables are needed. One important thing to note
here is that noise is modeled as AWGN noise; therefore, SNR is used instead of SINR.

By using the Alpha-Shannon formula. The Alpha-Shannon formula provides an
approximation of the link level results. Thus, in this case, no actual simulations are
needed, but the factors used in the Alpha-Shannon formula are needed for different scenarios.
5.2.1 Spectral Efficiency:-

In case the cell edge is defined by the input required throughput, the corresponding
spectral efficiency has to be derived. The spectral efficiency is derived under the following
assumptions:

The layer 2 protocol overhead (MAC and RLC) is negligible [23]
Link level simulations do not take into account the L1 overhead due to control
channels (pilot and allocation table)

Given the required cell throughput at the cell border (Cell Edge Throughput), the L1 throughput
is calculated as follows

Layer 1 Throughput = Cell Edge Throughput / Overhead Factor

where
Overhead Factor = Data Symbols per Subframe / Total Symbols per Subframe

The Overhead Factor values for DL and UL are respectively 5/7 and 4/7, assuming the short
cyclic prefix.

Thus, the spectral efficiency is:
Spectral Efficiency = Layer 1 Throughput / Cell Bandwidth
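
A short sketch of this chain from cell edge throughput to spectral efficiency; the 1 Mbps cell edge requirement and 10 MHz channel bandwidth used in the example call are illustrative assumptions:

    def spectral_efficiency(cell_edge_throughput_bps, overhead_factor, cell_bandwidth_hz):
        # Layer 1 throughput and spectral efficiency as defined above
        layer1_throughput_bps = cell_edge_throughput_bps / overhead_factor
        return layer1_throughput_bps / cell_bandwidth_hz

    # Downlink overhead factor of 5/7, 1 Mbps at the cell edge, 10 MHz channel
    print(round(spectral_efficiency(1e6, 5 / 7, 10e6), 2))   # 0.14 bps/Hz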


Spectral efficiency is then used to find out the Required SINR using the Alpha-Shannon
formula. The Shannon capacity formula for maximum channel efficiency as a function of SNR
can be written as:

Maximum Spectral Efficiency = log2(1 + SNR)
This maximum capacity cannot be obtained in LTE due to the following factors

Limited coding block length
Frequency selective fading across the transmission bandwidth
Non-avoidable system overhead
Implementation margins ( channel estimation, CQI)

Thus, in order to fit the Shannon formula to LTE link performance two elements are
introduced

bandwidth efficiency factor
SNR efficiency factor, denominated Imp Factor


The modified Alpha-Shannon formula can be written as:

Spectral Efficiency = Bandwidth Efficiency Factor x log2(1 + SNR / Imp Factor)
Note that the bandwidth efficiency factor also depends on the antenna configuration. The
formula is valid between the limits specified by a minimum and a maximum value of
spectral efficiency. The figure below shows how the Alpha-Shannon formula is used to
approximate the envelope of the spectral efficiency vs. SNR curve in the case of SISO
(1 transmission and 1 reception antenna) and AWGN. Two values of the bandwidth
efficiency factor and Imp Factor are considered.
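
Assuming the fitted form above, the required SNR is obtained by inverting the formula; the factor values in this sketch are placeholders for illustration, not the calibrated values taken from link level results:

    import math

    def required_snr_db(spectral_efficiency_bps_hz, bandwidth_eff, imp_factor):
        # Invert Spectral Efficiency = bandwidth_eff * log2(1 + SNR / imp_factor)
        snr_linear = (2 ** (spectral_efficiency_bps_hz / bandwidth_eff) - 1) * imp_factor
        return 10 * math.log10(snr_linear)

    # Placeholder factor values purely for illustration
    print(round(required_snr_db(0.14, bandwidth_eff=0.75, imp_factor=1.25), 1))   # ~-7.6 dB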



Figure 6

LTE spectral efficiency as a function of the G-factor (in dB), including curves for the best Shannon fit

To map these results to system level performance, we need to consider the G-factor
distribution, PDF(G), over the cell area. Assuming a uniform user distribution, the obtained
G-factors for the LTE capacity evaluation are plotted in Figure 7. The distributions are
obtained by deploying Macro Cell and Micro Cell hexagonal cellular layouts.
The probability density function of G is obtained from Figure 7. It is assumed that all
users have equal session times (e.g. infinite buffer assumption)


Figure 7


5.3 INTERFERENCE
In order to evaluate the other-cell interference, a simple network model in which the load
is equally distributed among cells is assumed. The overall effect of interference can be
estimated using the following factors:

A term that takes into account the loss in G due to the handover margin
(CellOverlapMargin). The G-factor distribution is defined as the average own
cell power to the other-cell power plus noise ratio. In fact, a handover margin is
needed for avoiding the ping-pong effect. As a consequence, the serving cell is not
necessarily the one that is received with the strongest signal.

A gain due to interference control mechanisms (e.g. Soft Frequency Reuse or
Smart Frequency Domain Packet Scheduler), denominated IntControlGain.

For the uplink, the issue of interference is dealt with as follows. The uplink other cell
interference margin (OtherCellInterferenceUL in the maximum uplink path loss equation)
was studied by means of system level simulations, using a network scenario with 19
three-sector sites, i.e. 57 cells in total. The sites were positioned on a regular hexagonal
grid. An inter-site distance of 1732 m with a penetration loss of 20 dB and a UE power class
of 21 dBm was used. Interference coordination was not used in this simulation. Simulations
were carried out with three different values of system load. The allocated bandwidth per user
equals 312.5 kHz.

Slow power control was used in this simulation.
The target for power control was set in such a way that it provides a good trade-
off between the cell edge throughput and the average cell throughput
The interference margin was calculated using the following expression

Interference margin = SNR/SINR

The figure shows the obtained interference margin as a function of load. The interference
margin is observed at the 5% point of the CDF. The table, instead, shows the list of
interference margin values obtained using linear interpolation.

Table 11


Table 12

Load versus interference margin
5.4 COVERAGE-BASED SITE COUNT
The maximum allowed path loss can be used to calculate the cell radius (CellRadius) by
using a propagation model. The COST-231 model is used to compute the path loss for the
cell radius. This model is normally used for carrier frequencies between 1500 and 2000 MHz.
The same COST-231 model can be used for a carrier frequency of 2600 MHz, since we
assume that the loss due to the higher frequency is compensated by the increase in the
antenna gain. For the 900 MHz deployment option, the Hata model can be used instead.
Other propagation models can be included as well, for instance, UMTS models [18].
Given the cell radius, the cell coverage area (that we assume to be hexagonal) depends
on the site configuration.
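
The cell radius can be recovered by inverting the COST-231 Hata model; a sketch using the standard urban form of the model (the clutter correction constant and the antenna heights in the example call are assumptions, so the result will differ slightly from the spreadsheet values quoted later):

    import math

    def cost231_cell_radius_km(mapl_db, freq_mhz, bs_height_m, ue_height_m, c_m=3.0):
        # Invert the COST-231 Hata urban model to obtain the cell radius in km.
        # c_m is 0 dB for medium cities/suburban centres and 3 dB for metropolitan areas.
        a_hm = ((1.1 * math.log10(freq_mhz) - 0.7) * ue_height_m
                - (1.56 * math.log10(freq_mhz) - 0.8))
        fixed_loss = (46.3 + 33.9 * math.log10(freq_mhz)
                      - 13.82 * math.log10(bs_height_m) - a_hm + c_m)
        slope = 44.9 - 6.55 * math.log10(bs_height_m)
        return 10 ** ((mapl_db - fixed_loss) / slope)

    # Illustrative: 125 dB MAPL at 2600 MHz, 30 m eNodeB and 2 m UE antenna heights
    print(round(cost231_cell_radius_km(125, 2600, 30, 2), 2))   # ~0.3 km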

Figure 8

Three different types of sites (omni-directional, bi-sector, tri-sector) are considered.
For the three hexagonal cell models, site areas can be calculated as follows:
o Omni-directional site: Site Area = 2.6 x CellRadius^2
o Bi-sector site: Site Area = 1.3 x 2.6 x CellRadius^2
o Tri-sector site: Site Area = 1.95 x 2.6 x CellRadius^2

The number of sites to be deployed can be easily calculated from the Site Area and the
input value of the deployment area (Deployment Area).

Number of Sites (coverage) = Deployment Area / Site Area
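
Combining the site area expressions above with a deployment area gives the coverage-based site count; a minimal sketch in which the 100 km2 deployment area and 0.27 km cell radius are illustrative assumptions:

    import math

    SITE_AREA_FACTOR = {"omni": 2.6, "bi-sector": 1.3 * 2.6, "tri-sector": 1.95 * 2.6}

    def coverage_site_count(deployment_area_km2, cell_radius_km, site_type="tri-sector"):
        # Number of sites needed to cover the deployment area with hexagonal cells
        site_area = SITE_AREA_FACTOR[site_type] * cell_radius_km ** 2
        return math.ceil(deployment_area_km2 / site_area)

    print(coverage_site_count(100, 0.27))   # tri-sector sites for a 0.27 km dense urban radius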

Table 13

Input parameters:
BS PA Output: 35 dBm | Antenna gain: 13 dBi | BS EIRP: 48 dBm | Operating frequency: 2600 MHz | Area type: Dense Urban
Propagation model: COST-231 Hata | UE antenna gain: 0 dBi | Penetration loss: 10 dB | Fade margin: 10 dB

Modulation Scheme | Bit Rate (Mbps) | Rx Sensitivity (dBm) | MPL (dB) | Radius (m) | % of Area | Average Capacity
QPSK 1/2 Rate | 3.6 | -96.5 | 124.5 | 268.1 | 76.88% | 5.8 Mbps
16-QAM 3/4 Rate | 10.7 | -85.3 | 113.3 | 128.9 | 13.74% | 13.3 Mbps
64-QAM 4/5 Rate | 17.1 | -78.4 | 106.4 | 82.1 | 9.38% | 17.1 Mbps

Cell radius per area type (COST-231 Hata, BS height 30 m, mobile height 2 m, 2600 MHz):

Area Type | QPSK (MAPL 125 dB) | 16-QAM (MAPL 113 dB) | 64-QAM (MAPL 106 dB)
Dense Urban | 0.27 km | 0.13 km | 0.08 km
Urban | 0.37 km | 0.18 km | 0.11 km
Suburban | 0.71 km | 0.34 km | 0.22 km
Rural | 1.90 km | 0.92 km | 0.58 km
This is an example of the coverage and capacity calculation at 2600 MHz for
different area types.
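For readers who want to reproduce radii such as those in Table 13, the sketch below uses the standard form of the COST 231-Hata model for urban macro cells and inverts it to obtain the cell radius from the MAPL. The clutter correction used here (3 dB for dense urban, 0 dB otherwise) and the exact correction terms are the commonly quoted ones and may differ from the planning tool actually used, so the result should be treated as an order-of-magnitude check rather than an exact match.

import math

def cost231_hata_pl(d_km, f_mhz, h_bs_m, h_ue_m, metro_correction_db=3.0):
    """COST 231-Hata path loss (dB), standard urban macro form."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_ue_m - (1.56 * math.log10(f_mhz) - 0.8)
    return (46.3 + 33.9 * math.log10(f_mhz) - 13.82 * math.log10(h_bs_m) - a_hm
            + (44.9 - 6.55 * math.log10(h_bs_m)) * math.log10(d_km)
            + metro_correction_db)

def cell_radius_km(mapl_db, f_mhz, h_bs_m, h_ue_m, metro_correction_db=3.0):
    """Invert the model: distance (km) at which the path loss equals the MAPL."""
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_ue_m - (1.56 * math.log10(f_mhz) - 0.8)
    fixed = (46.3 + 33.9 * math.log10(f_mhz) - 13.82 * math.log10(h_bs_m) - a_hm
             + metro_correction_db)
    slope = 44.9 - 6.55 * math.log10(h_bs_m)
    return 10 ** ((mapl_db - fixed) / slope)

if __name__ == "__main__":
    # Rough check against the QPSK 1/2 row of Table 13 (MAPL 124.5 dB, 2600 MHz,
    # 30 m eNodeB, 2 m UE). Clutter and loss assumptions differ, so expect a
    # radius of the same order as 268 m, not an identical value.
    print("%.2f km" % cell_radius_km(124.5, 2600, 30, 2))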


6 CAPACITY PLANNING
Capacity planning gives an estimate of the resources needed for supporting a specified
offered traffic with a certain level of QoS (e.g. throughput or blocking probability).
Theoretical capacity of the network is limited by the number of eNodeBs installed in the
network. Cell capacity in LTE is impacted by several factors, including the interference
level, the packet scheduler implementation and the supported modulation and coding schemes.
The link budget (coverage planning) gives the maximum allowed path loss and the
maximum range of the cell, whereas capacity planning takes interference into account by
providing a suitable model. LTE also exhibits soft capacity like its predecessor 3G
systems; therefore, the increase in interference and noise caused by adding users
decreases the cell coverage, forcing the cell radius to become smaller.

In LTE, the main indicator of capacity is the SINR distribution in the cell. In this study, for the
sake of simplicity, the LTE access network is assumed to be coverage-limited in the uplink
and capacity-limited in the downlink.

The evaluation of capacity requires the following two tasks to be completed:

Estimating the cell throughput corresponding to the settings
used to derive the cell radius
Analyzing the traffic inputs provided by the operator to derive the traffic
demand, which include the number of subscribers, the traffic mix and data
about the geographical spread of subscribers in the deployment area


6.1 AVERAGE CELL THROUGHPUT CALCULATION
The target of the capacity planning exercise is to obtain an estimate of the site count based on
the capacity requirements. Capacity requirements are set by the network operators
based on their predicted traffic. The average cell throughput is needed to calculate the
capacity-based site count.

The most accurate evaluation of cell capacity (throughput under certain constraints) is
given by running simulations. Since dimensioning is usually done in a spreadsheet
workbook, the most practical way to derive cell throughput is to map the SINR
distribution obtained from a simulator directly into MCS (and thus bit rate), or directly into
throughput, using appropriate link-level results.

Thus, capacity estimation requires the following simulation results


Average SINR distribution table (system level result), which provides the SINR
probability
Average throughput or spectral efficiency versus average SINR table (link level
result)

Among other factors, different propagation environments (propagation models, inter-site
distance) and antenna configurations have an impact on the above results. Thus, multiple
tables should be available, for example for urban, suburban and rural areas. The SINR
probability is obtained by calculating the probability of occurrence of a given SINR value
at the cell edge. All these system-level simulations are run with a predefined inter-site
distance. In this method, the bit rates for each MCS are derived from the OFDM
parameters of LTE. Then the SINR values required to support each MCS are derived from
look-up tables generated from link-level simulations.

Subsequently, the MCS supported by each value of SINR is selected by using the minimum
allowed SINR from the link-level results. This gives the corresponding data rate supported
by that MCS. In this way, the data rate corresponding to each SINR value is
obtained for a specific scenario. For an urban channel model and a fixed inter-site distance
of 1732 m, the downlink throughput for LTE is shown in the table below.

Table 14


Let us consider an example with reference to Table 14 (urban, 1732 m inter-site distance). For an
SINR value of 2 dB, QPSK is selected from the table, giving a throughput
of 6 Mbps. In the same way, an SINR value of 3 dB corresponds to 6 Mbps, 4 dB to
8 Mbps and 7 dB to 12 Mbps in the DL. Once all the values are obtained from the look-up
table, the cell throughput is derived as follows:

Cell throughput = SUM over SINR values of (SINR_Occurrence_probability x Ave_Throughput_SINR)

Where SINR_Occurrence_probability = probability of occurrence of a specific SINR
value at the cell edge, obtained from simulations


Ave_Throughput_SINR = average throughput corresponding to that SINR value
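The summation above is straightforward to evaluate once the simulator outputs are available. A minimal sketch follows; the SINR-to-throughput mapping reuses the example values quoted above (2 dB -> 6 Mbps, 3 dB -> 6 Mbps, 4 dB -> 8 Mbps, 7 dB -> 12 Mbps), while the occurrence probabilities are placeholders standing in for the Table 14 simulation results.

# Placeholder SINR occurrence probabilities (should sum to 1.0) and the
# SINR -> DL throughput mapping from the look-up table.
SINR_PROBABILITY = {2: 0.20, 3: 0.25, 4: 0.30, 7: 0.25}   # SINR (dB) -> probability
THROUGHPUT_MBPS  = {2: 6.0, 3: 6.0, 4: 8.0, 7: 12.0}      # SINR (dB) -> throughput

def average_cell_throughput(prob, tput):
    """Cell throughput = sum over SINR of P(SINR) * Throughput(SINR)."""
    return sum(p * tput[sinr] for sinr, p in prob.items())

if __name__ == "__main__":
    print("%.1f Mbps" % average_cell_throughput(SINR_PROBABILITY, THROUGHPUT_MBPS))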
6.2 SENSITIVITY
Sensitivity depends on the number of resource blocks: as the number of allocated resource
blocks increases, the receiver sensitivity degrades (the sensitivity value in dBm becomes
less negative), because the noise power grows with the allocated bandwidth.


Table 15

Receiver sensitivity (dBm) versus MCS and number of allocated resource blocks
(kT = -174 dBm/Hz, NF = 6 dB, implementation margin = 2.5 dB, Gdiv = -3 dB)

MCS          SNR (dB)   Sens, nRB = 1   Sens, nRB = 12   Sens, nRB = 25
QPSK 1/3     -1.0       -116.9          -106.2           -103.0   (reference)
QPSK 1/2      2.0       -113.9          -103.2           -100.0
QPSK 2/3      4.3       -111.6          -100.9            -97.7
QPSK 3/4      5.5       -110.4           -99.7            -96.5
QPSK 4/5      6.2       -109.7           -99.0            -95.8
16QAM 1/2     7.9       -107.5           -96.8            -93.6
16QAM 2/3    11.3       -104.1           -93.4            -90.2
16QAM 3/4    12.2       -103.2           -92.5            -89.3
16QAM 4/5    12.8       -102.6           -91.9            -88.7
64QAM 2/3    15.3        -99.1           -88.4            -85.2
64QAM 3/4    17.5        -96.9           -86.2            -83.0
64QAM 4/5    18.6        -95.8           -85.1            -81.9
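The values in Table 15 are consistent with thermal noise scaled by the allocated bandwidth, i.e. Sensitivity = kT + 10 log10(nRB x 180 kHz) + NF + IM + SNR + Gdiv, using the noise figure, implementation margin and Gdiv quoted with the table. A small sketch of that relation:

import math

KT_DBM_HZ = -174.0   # thermal noise density
NF_DB     = 6.0      # receiver noise figure
IM_DB     = 2.5      # implementation margin
G_DIV_DB  = -3.0     # diversity/combining term used in Table 15
RB_BW_HZ  = 180e3    # one resource block = 12 x 15 kHz

def sensitivity_dbm(required_snr_db, n_rb):
    """Receiver sensitivity for an allocation of n_rb resource blocks."""
    noise_dbm = KT_DBM_HZ + 10 * math.log10(n_rb * RB_BW_HZ)
    return noise_dbm + NF_DB + IM_DB + required_snr_db + G_DIV_DB

if __name__ == "__main__":
    # QPSK 1/3 (SNR = -1 dB): expect roughly -116.9 / -106.2 / -103.0 dBm
    for n_rb in (1, 12, 25):
        print("n_RB = %2d : %.1f dBm" % (n_rb, sensitivity_dbm(-1.0, n_rb)))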




7 LTE KEY PERFORMANCE INDICATORS FOR LTE RF
DESIGN
LTE is still a developing technology, and it is important to note that as more field trials are
carried out and results are validated against deployed LTE network performance goals,
the design targets outlined in this section are subject to change. The quality of the LTE
RF design will be evaluated using a planning tool, based on a combination of
area predictions and Monte Carlo simulations. It is important to note that the emphasis of
the design evaluation will be on focusing where demand is and where potential LTE users
are located. The following is a non-comprehensive list of key performance indicators
that will be used to validate the quality of the LTE RF network design.
7.1 Reference Signal Received Power (RSRP):-
Reference signal received power (RSRP) identifies the signal level of the Reference
Signal. It is defined as the linear average over the power contributions of the resource
elements that carry cell-specific reference signals within the considered measurement
frequency bandwidth.

Design KPI for RSRP:-
10MHz Channel Bandwidth (700MHz & AWS): -98 dBm /-103 dBm
5MHz Channel Bandwidth (700MHz & AWS): -98 dBm /-103 dBm

A minimum of 95% of the weighted average of the LTE design service area (Cluster or Polygon)
must meet the RSRP targets specified above. The criterion of 95% is based on a weighting using
the same clutter weights used for traffic spreading. The target specified above is after taking into
consideration the indoor loss values assigned per clutter type (In-building losses enabled).

Note: The targets for AWS are only applicable in cases where the AWS design is carried
out as a standalone design and not used as a capacity layer over an existing 700 MHz
LTE network.
7.2 Reference Signal Received Quality (RSRQ) :-
Reference Signal Received Quality (RSRQ) identifies the quality of the Reference Signal. It is
defined as the ratio N x RSRP / (E-UTRA carrier RSSI), where N is the number of RBs of the
E-UTRA carrier RSSI measurement bandwidth. The measurements in the numerator and
denominator shall be made over the same set of resource blocks.

The E-UTRA Carrier Received Signal Strength Indicator (RSSI) comprises the linear average of the
total received power observed by the UE only in OFDM symbols containing reference symbols for
antenna port 0, in the measurement bandwidth, over N resource blocks, from all sources, including
co-channel serving and non-serving cells, adjacent-channel interference, thermal noise, etc.

Design KPI for RSRQ:

2 Transmit Paths:
50% Load: -15 dB
100% Load: - 18 dB
A minimum of 95% of the weighted average of the LTE design service area (Cluster or Polygon)
must meet the RSRQ targets specified above. The criterion of 95% is based on a weighting
using the same clutter weights used for traffic spreading.

The RSRQ design targets can be related to the downlink load with a simple single-RB
calculation (Figures 9 to 13). The calculation assumes equal power per resource element,
no RSRP boost and no external interference; RSSI and RSRP are measured only in the
OFDM symbols containing reference signals, where 2 of the 12 resource elements of an
RB carry the RS. At 0% load only the RS resource elements are transmitted, so the RSSI
is low; as the data resource elements are loaded, the RSSI grows while the RSRP stays
constant, and the RSRQ degrades.

RSRQ versus downlink load (single-RB example):

Load    RSSI (W)   RSRP (W)   RSRQ
0%      0.17       0.083      -3.0 dB    (Figure 9)
30%     0.42       0.083      -7.0 dB    (Figure 10)
60%     0.67       0.083      -9.0 dB    (Figure 11)
80%     0.83       0.083      -10.0 dB   (Figure 12)
100%    1.00       0.083      -10.8 dB   (Figure 13)
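The figures above can be reproduced with a few lines of code. The sketch below uses the same simplifications as the figures: a single RB carrying 1 W, equal power per resource element, 2 RS resource elements per RS-bearing symbol, and the remaining 10 resource elements transmitted in proportion to the downlink load.

import math

def rsrq_db(load, rb_power_w=1.0, n_rb=1):
    """Single-RB RSRQ estimate, as in Figures 9 to 13.

    Within an RS-bearing OFDM symbol one RB has 12 subcarriers, 2 of which
    carry cell-specific reference signals. The RS REs are always transmitted;
    the 10 remaining REs are transmitted in proportion to the DL load.
    """
    re_power = rb_power_w / 12.0          # equal power per resource element
    rsrp = re_power                       # power of one RS resource element
    rssi = (2 + 10 * load) * re_power     # total power in the measured symbol
    return 10 * math.log10(n_rb * rsrp / rssi)

if __name__ == "__main__":
    for load in (0.0, 0.3, 0.6, 0.8, 1.0):
        print("load %3d%% : RSRQ %.1f dB" % (int(load * 100), rsrq_db(load)))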
7.3 Overlapping Zones (Number of Servers):-
The overlapping zones (number of servers) criteria are used to establish the quality of the RF
propagation environment from an interference point of view. The goal of the number-of-servers
criteria is to establish dominance and to reduce the waste of network resources and the
degraded network performance that can occur when multiple servers exist in the same
geographic area. The calculation is based on the Reference Signal (RS) levels of the
servers.

Design KPI for Overlapping Zones (Number of Servers):
Within 5 dB of the best server
% area with 4 or more servers should be < 2%.
% of area with 2 or more servers should be < 30%.

Within 10dB of the best server
% of area with 7 or more servers should be < 2%.

The calculation is based on area importance. The clutter weights used for traffic spreading
establish the importance of the geographic area. The idea is to focus the LTE design
where LTE users are located (for example, core urban areas, convention centers, major
stadiums) rather than on areas within the LTE polygon with no users (for example,
scrubland or forests).
7.4 DL Cell Aggregate Throughput
The DL Cell Aggregate throughput is the sum of the throughputs to all the users in the cell at an
instant in time. This is to be measured following Monte Carlo simulations only.

Design KPI for DL Cell Aggregate Throughput:

10MHz Channel Bandwidth: 13.4 Mbps per cell
5MHz Channel Bandwidth: 6.7 Mbps per cell

A minimum of 90% of all users in the LTE design reference area should have the DL Cell Edge
User Throughput exceeding the minimum design KPI values specified above. No more than 2%
of the users should have a DL Cell Edge User Throughput less than 50% of this KPI target.

All the statistics for the LTE designs must be generated on a cluster by cluster or super cluster
basis following the criteria defined later in the document. In addition to the quantitative evaluation
of the LTE design using the KPIs stated above, a qualitative evaluation of the design will also be
carried out as outlined in the design evaluation. The exit criteria of a design are met when both
the quantitative (KPIs) and qualitative evaluation of the designs are successfully completed.


8 PEAK RATE CALCULATION IN LTE
From the 3GPP specifications:
- 1 radio frame = 10 sub-frames
- 1 sub-frame = 2 time-slots
- 1 time-slot = 0.5 ms (i.e. 1 sub-frame = 1 ms)
- 1 time-slot = 7 modulation symbols (when the normal CP length is used)
- 1 modulation symbol = 6 bits, if 64-QAM is used as the modulation scheme

Radio resources are managed in LTE as a resource grid:
- 1 Resource Block (RB) = 12 sub-carriers

Assume a 20 MHz channel bandwidth (100 RBs) and normal CP.

Therefore, the number of bits in a sub-frame

= 100 RBs x 12 sub-carriers x 2 slots x 7 modulation symbols x 6 bits

= 100,800 bits

Hence, data rate = 100,800 bits / 1 ms = 100.8 Mbps

* If 4x4 MIMO is used, the peak data rate becomes 4 x 100.8 Mbps = 403.2 Mbps.

* If 3/4 coding is used to protect the data, we still get 0.75 x 403.2 Mbps = 302.4 Mbps.
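The same arithmetic can be generalised to other bandwidths, modulation orders and antenna configurations. The sketch below reproduces the hand calculation above and, like it, ignores control, reference-signal and cyclic-prefix overheads, so it gives raw peak figures only.

def lte_peak_rate_mbps(n_rb=100, bits_per_symbol=6, mimo_layers=4, coding_rate=1.0):
    """Raw LTE DL peak rate, ignoring control/RS overhead (as in the text above).

    n_rb            resource blocks in the channel (100 for 20 MHz)
    bits_per_symbol 6 for 64-QAM, 4 for 16-QAM, 2 for QPSK
    mimo_layers     spatial multiplexing layers (1, 2 or 4)
    coding_rate     1.0 for the uncoded peak, e.g. 0.75 for rate-3/4 coding
    """
    subcarriers_per_rb = 12
    symbols_per_subframe = 14          # 2 slots x 7 symbols (normal CP)
    bits_per_subframe = (n_rb * subcarriers_per_rb * symbols_per_subframe
                         * bits_per_symbol * mimo_layers * coding_rate)
    return bits_per_subframe / 1e3     # 1 sub-frame = 1 ms, so kbit/ms = Mbps

if __name__ == "__main__":
    print(lte_peak_rate_mbps(mimo_layers=1))                    # 100.8 Mbps
    print(lte_peak_rate_mbps(mimo_layers=4))                    # 403.2 Mbps
    print(lte_peak_rate_mbps(mimo_layers=4, coding_rate=0.75))  # 302.4 Mbps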


9 CELL EDGE PERFORMANCE
9.1 Enhancing cell Edge Performance using ICIC
LTE uses universal frequency reuse (N=1) without soft handoff. Consequently, high levels
of interference and low SINR can be expected near the cell edge. Traffic channel
performance at the cell edge can be enhanced via ICIC.

However, it is commonly believed that control channels (such as the PDCCH) are more
robust, so ICIC is not applied to the PDCCH. Real-life deployments of large N=1
heterogeneous networks under heavy load conditions can result in a long tail in the
SINR distribution; high mobility makes the situation even worse. The cell-edge SINR can
be so poor that even the most robust control channel will not function properly without
some kind of ICIC.


This section examines the challenge of optimizing cell-edge SINR and discusses various
strategies to enhance the cell-edge performance of the PDCCH using the existing LTE
standard.

9.2 Cell-Edge SINR from Simulation, Trial and Real-Life Networks
Before a real-life large-scale LTE network is deployed, network performance is estimated
via results from system-level simulations and trial networks. The 3GPP model [1] is
commonly used for system-level simulation (Figure 14).


Figure 14
Figure 14: Under the 3GPP model, the cell-edge SINR values rarely go below
SINR = -8 dB

There are several obvious limitations which make the 3GPP model ill-suited to estimating
cell-edge performance. The model is too idealistic; it is difficult at best to find a real-life
cellular network that contains only 19 eNodeBs with identical cell radii, identical antennas,
identical tower heights and a uniform user distribution. The effect of using this idealistic
model is that the measured cell-edge SINR values from a large-scale real-life network
(Figure 15) can be much worse than the SINR generated from simulation (Figure 14); in
other words, the model tends to overestimate the cell-edge SINR.

Figure 15

Figure 15: The SINR distribution of a real-life N=1 OFDMA network has a much
longer tail, reaching SINR = -12 dB
Most trial networks contain only a few base stations; these few neighbor cells do not
contribute sufficient out-of-cell interference even if they are all fully loaded. It is not cost-
effective to maintain a trial network of appropriate size merely to create sufficient out-of-
cell interference to generate realistic cell-edge SINR distributions. Some people believe
that out-of-cell interference is not important if it originates from cells that are
physically far away from the center cell. This is not always true; cells located far
away can still cause significant interference if they are in line-of-sight (LOS) conditions (see
Figure 16).

The consequence of relying on results from simulation and trial networks is that,
although performance can look very promising, cell-edge performance in an actual large-
scale deployment can be much worse than originally anticipated, especially under heavy
traffic loads. This puts the responsibility for problem solving in the hands of the field
engineers. Sometimes performance can only be brought up to acceptable levels through
lengthy trial-and-error processes; at other times performance cannot be corrected because
the defects are in the current release of the standard. Once the industry realizes that the
problems are valid, corrections are made in the next release of the standard. But it takes
years for the next release of the standard to be implemented in the field. In the meantime,
network performance can only be as good as the field engineers can make it.


This kind of situation is a common occurrence, especially when a new system is
deployed. For example, when the first system with universal frequency reuse (N=1) was
deployed, no one realized how bad the out-of-cell interference could be when the load
became heavy. The measured SINR distributions from the field turned out to be much
worse than the SINR distributions obtained from simulations or from trial networks (a long
tail toward negative values, with SINR below -12 to -15 dB).

What are the causes of such low cell-edge SINR? Multiple factors are involved, but the
main cause is irregularity in real-life cellular networks. These irregularities can
cause very negative SINR in some locations (although they can result in good SINR in
other locations); the net result is a long tail in the SINR distribution.

The situation is better for N>1 networks (such as GSM or AMPS) because the interferers
(co-channels) are physically located farther apart from each other due to the frequency
reuse distance. The situation is worst for N=1 networks, since in that case every cell is an
interferer. Pilot pollution (or no dominant server) describes a situation where power
transmitted from many different cells arrives at a location, but none is significantly
stronger than the others. As a result, the composite signal level is high, but the SINR
from any single cell is poor because the total interference is too high. The result is
poor RF performance even with a high overall signal level.

Figure 16 shows one common cause of the no-dominant-server problem. The cells
located nearby have unfavorable RF propagation conditions, while cells located far
away have favorable RF propagation conditions. The net result is that many servers
appear at a location but no server offers a strong enough signal. Without soft handoff, the
terminal can only treat the received power from one cell as signal; power from all other
cells is treated as interference. Thus, the no-dominant-server problem means high
interference and poor SINR:


SINR = S / (I + n)

where S is the received signal level, n is the thermal noise energy (a constant), and I is
the total out-of-cell interference contributed by all neighbor cells. The larger the
value of I, the worse the SINR.
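A small numeric illustration of this expression, using arbitrary received levels, shows how pilot pollution produces a poor SINR even though the composite signal level is high: the serving cell below is received at -85 dBm, yet the SINR is negative.

import math

def dbm_to_mw(dbm):
    return 10 ** (dbm / 10.0)

def sinr_db(serving_dbm, interferer_dbm_list, noise_dbm=-110.0):
    """SINR = S / (I + n), with I the sum of all other-cell powers."""
    s = dbm_to_mw(serving_dbm)
    i = sum(dbm_to_mw(p) for p in interferer_dbm_list)
    n = dbm_to_mw(noise_dbm)
    return 10 * math.log10(s / (i + n))

if __name__ == "__main__":
    # Pilot pollution: four cells received within about 2 dB of each other.
    # The composite level is high but the resulting SINR is roughly -3 dB.
    print("%.1f dB" % sinr_db(-85.0, [-86.0, -86.5, -87.0]))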


Figure 16


Figure 16: No dominant server problem caused by special 3D terrain conditions


Figure 17 shows an indoor deployment. The upper floors of high-rise buildings have LOS
to many cells on the ground. The composite signal level on the upper floors can be
very high, but no server is much better than the others, so the SINR from every
server is poor.

Figure 17

Figure 17: Pilot pollution of upper floors caused by cells on the ground

9.3 Effects Caused by Mobility:-
Vehicles moving at high speed may experience much worse cell-edge SINR due to the
handover dragging effect. Essentially, this is caused by the fact that a fast-moving UE
cannot always be served by the best server, because handover is not triggered until the
UE has moved across the cell border, and there is a time lapse while the handover
completes.

When a UE moves across the cell border, before handover completion it is still served by the
original cell (the serving cell), which has now become the second- or third-best server. The UE
will not be served by the best server until after the handover is successfully completed
(Figure 18). The reason the serving cell cannot be the best server is the handover
trigger condition (Event A3 as defined in [2]): the RSRP from a new candidate cell must
be better than the RSRP of the current serving cell by a certain margin (the hysteresis)
in order for the handover trigger to happen. Therefore, at the handover trigger point, the
serving cell is not the best server, but the candidate cell is. There is also a time-to-trigger
after Event A3, so the faster the moving speed, the farther the UE travels before it
can be switched to the best server.


Figure 18
However, the UE must first exchange messages with the current serving cell (which is
not the best server), and the handover can only be successful after these message
exchanges with the current serving cell succeed (Figure 19). The handover message
exchanges are defined in [3].


Figure 19
Figure 19: Handover message exchanges, first with the serving (source) cell, then
with the candidate (target) cell

The common problem is that near the cell edge, the SINR from the best server is already
very poor, and the SINR values from the second- and third-best servers are even worse.

Figure 20 shows the overlapping regions among the best, second-best and third-best servers
near the cell edges. The 3GPP simulation only shows the SINR distribution from the best
server. However, in real-life situations the UE also has to work with the second- or third-
best server, so the real-life situation is less favorable.

Figure 20
Figure 20: Overlapping areas of the best, second-best and third-best servers near
the cell edge

Failure rates for message exchanges with the serving cell can be high under the worst-case
scenario. Figure 21 shows the simulated cell-edge SINR values for different
mobile speeds [4], [5]. One can see that a high-speed mobile may see cell-edge SINR
values worse than -30 dB; no channel can operate with such poor SINR values.


Figure 21
Figure 21: SINR distributions of UE with low mobility (left) and high mobility (right)


Note that Figure 21 is based on simulations using the 3GPP model, which is very
idealistic: every handoff has a well-defined cell border and a unique target cell. In reality,
these borders and targets are far from clear, especially in areas with no dominant
server (Figure 16). There can be multiple cell borders and different target cells at
different times, due to fast variation of the SINR from each server (Figure 22). This SINR
fluctuation is the reason a relatively large hysteresis is needed in the handover triggering
condition; otherwise handover will ping-pong between cells. However, the larger the value
of the hysteresis, the worse the handover dragging effect will be, and the worse the
cell-edge SINR from the serving cell.



Figure 22: Idealistic and real handover scenarios

This handover dragging effect is particularly severe for small cells with high-speed UEs,
which is a difficult situation for any cellular system. Some people argue that high-speed
UEs should not occur in urban areas with small cells, due to frequent red-light stops. This is
not always true: in modern cities, highways pass through urban areas and cars can drive at
high speed without stopping; bullet trains can also pass through cities at high
speed, as shown in Figure 22.


Figure 22

Figure 22: Examples of high-speed vehicles in an urban environment with microcells



9.4 Radio Link Failure and PDCCH Performance
The PDCCH's performance is important not only because it delivers scheduling
information to the UEs, but also because the radio link failure (RLF) condition is based on
the BLER of the PDCCH/PCFICH.

When a UE first tries to access the network, PDCCH failure can result in
delayed access or access failure.
During handover, PDCCH failure will cause handover failure, since downlink
messages (responses from the eNodeB) cannot be successfully delivered to
the UE.
While a BLER of 10% is normal for the traffic channel on the first transmission
(thanks to HARQ re-transmissions), the BLER target for the PDCCH must be
much lower, since HARQ cannot be applied to control channels. As a matter
of fact, a PDCCH BLER exceeding 10% means RLF [6]. Figure 23 shows the
UE behavior during RLF, as defined by [7].


Figure 23

Figure 23: Two phases of RLF as defined in TS 36.300, Section 10.1.6

What SINR corresponds to a PDCCH BLER in excess of 10%? Vendors have
performed simulations, and the results are summarized in RAN4 documents [8]-[14]. One
can see from Figure 24 that the BLER of the PDCCH reaches 10% when the SINR drops
to between -3 dB and -5 dB.


Figure 24
Figure 24: Comparison of PDCCH performance results from different vendors'
simulations

Consider the best-case scenario, where SINR = -5 dB is needed to avoid RLF. How
much area can contours of SINR = -5 dB cover in an N=1 network? Figure 25 shows the
results from 3GPP-like modeling. One can see that, even in such an idealistic network,
under the best-case scenario the SINR = -5 dB contours cannot reach the cell edge with
75% cell-edge reliability.


Figure 25
Contours of SINR = -5 dB cannot cover to the cell edge with 75% edge reliability
Figure 25: Coverage contours of SINR = -5 dB under the N=1 scenario

One may wonder: if the contour of SINR = -5 dB cannot even reach the cell edge, how
can handover ever be successful, since it requires the PDCCH to perform at and beyond
the cell edge?


The answer is that only under a true N=1 scenario (i.e., all subcarrier frequencies are
used in all cells) do the contours of SINR = -5 dB fail to reach the cell edge. In most
other cases, however, the interference scenario on the PDCCH is not a true N=1. The PDCCH
has a way to mitigate co-channel interference from direct neighbors, as long as it is not fully
loaded: it uses a scrambling mechanism in which the symbol quadruplets of the CCEs from each
cell are shifted to different sub-carrier frequencies, or different symbols, to randomize and
reduce the probability of collisions with direct neighbor cells, as shown in Figure 26. For
details about PDCCH multiplexing and scrambling, please refer to [26].

The result is that the PDCCH's co-channel interference is in most cases closer to N=3
than to N=1, as long as the loading on the PDCCH is light, so that most REs in the PDCCH
control region are not occupied. On the other hand, if the PDCCH in all cells is fully
loaded, then the co-channel interference scenario approaches N=1.


Figure 26

Figure 26: Interference scenarios for PDCCH
The corresponding coverage plot for an N=3 scenario shows that, because the direct
neighbors do not cause co-channel interference, the contour of SINR = -5 dB extends much
farther beyond the cell edge. This means that the PDCCH will probably not have cell-edge
performance issues, as long as the loading level on the PDCCH is light, so that the
out-of-cell interference scenario is more like N=3 than N=1.


This leaves one important question: what produces a heavy load on the PDCCH? This
depends on the type of data traffic; some traffic types load the traffic channels more
than the control channels, while other traffic models load the control channels more than
the traffic channels.

When the network supports a small number of high data-rate users (e.g., FTP or video
streaming), it loads the traffic channels much more than the control channels. Because
of the small number of users per cell, the control channel is lightly loaded. This is likely to
be the case for initial LTE deployments.

The PDCCH load becomes heavy when there is a large number of low-rate users in the
cell, as is the case for VoIP deployments. A large number of low-rate users tends to
load the PDCCH heavily and the traffic channels lightly. For VoIP, the overall
capacity limit is set by the PDCCH capacity limit. This is the case even with semi-
persistent scheduling [16].


Figure 27


Cell-edge performance issues due to PDCCH will become worse after massive
deployment of VoIP, which tends to heavily load the PDCCH. Figure 28 shows
one simulation result of probability of PDCCH collision as a function of the loading
level.


Figure 28

Figure 28: CCE collision probability increases with the PDCCH loading level

When the PDCCH is fully loaded, scrambling can no longer avoid collisions with the
neighbor cells. As a result, the interference scenario becomes more like N=1, the
service area of SINR = -5 dB shrinks, and cell-edge performance becomes
worse.
9.5 Inter-Cell Interference Coordination (ICIC)
A spread-spectrum system (such as CDMA or UMTS) can work at very negative
SINR because of the large processing gain for low data rates; soft handoff also
helps tremendously. The LTE air link cannot work under the same negative SINR
conditions and does not support soft handoff. The industry recognized these cell-
edge challenges and responded by creating ICIC. Essentially, ICIC reduces the
co-channel interference that cell-edge users experience from direct neighbor cells,
thereby increasing the cell-edge SINR values.

Traffic Channel ICIC
Although the scheduler has many dimensions to work with, the frequency and
power domains are the main areas traffic-channel ICICs work with [17], [18]-[20].

In the frequency domain, the scheduler has the freedom to allocate any RB
frequencies within the channel bandwidth. Working in the frequency domain is
easy for traffic channels because the UE can easily find scheduling information
from PDCCH, thus RB frequencies can be dynamically allocated. ICIC can
allocate different RB frequencies to cell-edge users in different cells to avoid or
minimize co-channel interference with direct neighbors. Furthermore, LTE defines
a few mechanisms to measure or notify direct neighbors of the out-of-cell
interference levels on each RB (HII, OI, RNTP) [21].

Working in the power domain is also relatively easy, as fast power control is
performed on the UL anyway. The DL power level on selected RBs can also be
changed, for example to boost power for cell-edge users. Figure 29 shows two
examples of traffic channel ICIC algorithms. The figure on the left shows an
example of frequency-domain ICIC: frequency allocations for cell-edge users are
restricted to about 1/3 of the total channel bandwidth in order to avoid co-channel
interference with direct neighbors, while cell-center users can use the full bandwidth.
The figure on the right shows an example of ICIC that works in both the frequency
and power domains: cell-edge users use one-third of the bandwidth at higher power,
while cell-center users use the full bandwidth at lower power.

Figure 29
Figure 29: Schematic diagram of various traffic channel ICIC solutions, showing ICIC in
frequency domain (left) and ICIC in frequency and power domains (right)


ICIC on PDCCH?
The existing traffic-channel ICIC algorithms do not directly work on the PDCCH, because
the PDCCH has a very different channel structure and is much less flexible. First, the
scheduler does not have the freedom to avoid co-channel interference by
dynamically restricting the PDCCH bandwidth as it does with traffic channel RBs.
Second, there are no X2 messages that support ICIC on the PDCCH, at least not in the
current release.
CCE-based power boost is one way the scheduler can work in the power domain.
The CCE aggregation level can be 1, 2, 4 or 8 (CCE-1, CCE-2, CCE-4 or CCE-8). The
higher the aggregation level, the more robust the PDCCH will be; however, high aggregation
levels also use more PDCCH resources. Therefore, cell-center users will use CCE-1
or CCE-2, users located somewhere in the middle will use CCE-2 or CCE-4, and cell-edge
users will use CCE-8.
CCE-based power boost raises the transmit power level on CCE-8, which can
potentially increase the signal level on the CCEs of cell-edge users.
How effective is the CCE-based power boost? Cells in a network can be in one of the
following scenarios:

COVERAGE-LIMITED ENVIRONMENT:
The cells are spaced very far apart from each other; examples are rural and highway
cells. Typically the signal levels near the cell edge are already very low and, as a result, the
out-of-cell interference levels are also very low. For a coverage-limited environment, one
can use the approximation:

SINR = S / n   (the out-of-cell interference I is negligible)

In this case, boosting the signal power enhances S and thus improves the SNR, since the
thermal noise is a constant. CCE-based power boost is therefore effective in a coverage-
limited environment.

INTERFERENCE-LIMITED ENVIRONMENT:
The cells are packed very close to each other; examples are dense suburban, urban or
dense urban areas with small cells. Typically the cell-edge composite signal level is very high,
but the out-of-cell interference level is also very high; as a result, the cell-edge SINR is
still poor. For an interference-limited environment, one can use the approximation:

SINR = S / I   (the thermal noise n is negligible)

In this case, CCE-based power boost is not effective, because when the signal power is
boosted the out-of-cell interference level also increases, and as a result the SIR is
not improved. In general, when the cell-edge power level is already very high, boosting the
power further does not help.
o This phenomenon is the so-called cocktail party effect: at a cocktail party with a
high background noise level, audibility does not improve if everyone raises their
voice; it just creates a higher level of background noise.

o Unfortunately, an interference-limited environment is where help is most
needed. Call drops happen most frequently in small cells, especially for calls placed
from fast-moving vehicles.
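The contrast between the two environments can be shown with a small numeric sketch using arbitrary power values: boosting the cell-edge power improves the SNR in a noise-limited cell, but when every cell applies the same boost in an interference-limited cell the SIR is unchanged.

import math

def db(x):
    return 10 * math.log10(x)

def sinr(signal_mw, interference_mw, noise_mw):
    return db(signal_mw / (interference_mw + noise_mw))

if __name__ == "__main__":
    noise = 1e-11           # mW, thermal noise in the allocated bandwidth
    boost = 10 ** (3 / 10)  # 3 dB CCE power boost

    # Coverage-limited: weak signal, negligible interference -> boost helps (~+3 dB).
    s, i = 1e-10, 1e-12
    print("coverage-limited    : %.1f dB -> %.1f dB"
          % (sinr(s, i, noise), sinr(boost * s, i, noise)))

    # Interference-limited: strong signal and interference; every cell boosts -> no gain.
    s, i = 1e-7, 2e-7
    print("interference-limited: %.1f dB -> %.1f dB"
          % (sinr(s, i, noise), sinr(boost * s, boost * i, noise)))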

10 CELL EDGE THROUGHPUT
A number of factors impact the ultimate capacity offered by a cell site. Two critical factors
are interference and network loading. These factors are inter-related: higher network
loading, which is a measure of the number of active subscribers, results in greater
interference.
Interference determines the signal quality and hence the modulation scheme: more bits can be
sent over the air with higher-order modulation. Low interference is typically achieved close
to the cell center, where the distance to interfering cells is largest, while interference is
highest at the cell edge, where the signal from the serving cell is weakest. Therefore, data
rates are not uniform throughout the cell; they vary from the highest close to the cell center
to the lowest at the cell edge.
To improve the capacity of a cellular network, different frequencies are used on adjacent
cell sites; this is described by the frequency reuse factor. However, since LTE is designed to
provide broadband services, it uses wide channel bandwidths: 5, 10 and 20 MHz are the
most common (e.g. MetroPCS deployed a 5 MHz system, Verizon a 10 MHz system and
TeliaSonera a 20 MHz system). This implies that standard frequency reuse, as in traditional
cellular networks, is not possible due to lack of spectrum (e.g. Verizon's 700 MHz spectrum
is 2x11 MHz). LTE networks are therefore essentially reuse-1 networks. At full loading, one
can expect significant interference, especially at the cell edge. Dropping the modulation rate
helps mitigate interference at the cost of reduced capacity. To further improve LTE capacity,
clever techniques for assigning sub-carriers of the OFDMA physical layer to users are
deployed, which are beyond the scope of this section.
The figure below shows the distribution of signal quality in a small network of 19 three-
sectored cells for different reuse plans. The rightmost plot is for single frequency
reuse and shows the lowest signal quality, while the plots in the middle and
on the left are for three and nine channels, respectively, where performance is
better.

Figure 30
Interference also works against MIMO spatial multiplexing, which requires good signal
quality to operate. Hence, it is likely that in a large percentage of the cell coverage area
MIMO-SM is not operational due to interference, which results in lower
capacity. Consider that peak LTE rates include a MIMO-SM capacity gain factor of 2, so
the downlink LTE throughput for a 20 MHz channel is about 150 Mbps, or a spectral
efficiency (SE) of 7.5 b/s/Hz. Losing MIMO-SM halves the spectral efficiency to
3.75 b/s/Hz.
The table below shows the average capacity for different LTE channelizations and the
corresponding spectral efficiency. Smaller channels provide lower spectral efficiency than
larger ones because of control signaling overhead and loss of scheduling gain.

Channel Bandwidth            1.4 MHz   3 MHz   5 MHz   10 MHz   15 MHz   20 MHz
SE Relative to 10 MHz        62%       83%     99%     100%     100%     103%
Average DL Capacity (Mbps)   1.56      4.48    8.91    18       27       37.08
Average UL Capacity (Mbps)   0.8       2.1     3.6     8        11       16

To conclude, single-channel LTE networks will offer an improvement in spectral
efficiency over current 3G networks, but those expecting 150 Mbps download speeds will
be disappointed. Network operators cannot plan their network capacity based on peak
rates; they will do so based on average rates, which will be on the order of 1.4 to 1.8
b/s/Hz in the downlink.
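As a cross-check, the averages in the table above are consistent with a downlink spectral efficiency of roughly 1.8 b/s/Hz; the sketch below makes that assumption explicit and reproduces the 10 MHz and 20 MHz entries.

def average_dl_capacity_mbps(bandwidth_mhz, se_bps_per_hz=1.8, se_relative=1.0):
    """Average DL cell capacity = bandwidth x spectral efficiency (assumed 1.8 b/s/Hz)."""
    return bandwidth_mhz * se_bps_per_hz * se_relative

if __name__ == "__main__":
    # 10 MHz at 1.8 b/s/Hz -> 18 Mbps, matching the table above;
    # 20 MHz at 103% relative SE -> about 37 Mbps.
    print(average_dl_capacity_mbps(10))
    print(average_dl_capacity_mbps(20, se_relative=1.03))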


11 VOIP CAPACITY CALCULATION
LTE supports Voice over IP (VoIP) to provide voice services. A simplified analysis is
carried out next to approximately estimate the achievable VoIP capacity per cell for a
10 MHz downlink and 10 MHz uplink channel bandwidth. Refer to [2] for a comprehensive
simulation-based analysis of VoIP capacity. Refining the assumptions and making suitable
modifications to the calculations would lead to a more accurate prediction of VoIP capacity.
Assume that the full-rate 12.2 kbps Adaptive Multi-Rate (AMR) speech codec is used. Every
20 ms, the AMR codec generates (12.2 kbps x 20 ms =) 244 bits during a talk-spurt
interval (i.e., while the user is actually talking rather than just listening). These bits are
placed in an RTP/UDP/IP packet with about 3 bytes (= 24 bits) of overhead; IP header
compression is assumed to be active. The VoIP packet entering the air interface protocol
stack therefore contains about 244 speech bits + 24 IP-related header bits = 268 bits. The
VoIP packet passes through the PDCP, RLC, MAC and PHY layers of the air interface
protocol stack. Let's add 4 bytes (= 32 bits) to account for the headers added by the PDCP
(1 byte, short sequence number), RLC (1 byte, Unacknowledged Mode with a 5-bit sequence
number) and MAC (2 bytes) layers, leading to a target payload of 268 + 32 = 300 bits
entering the PHY layer from the MAC layer.
Now let's calculate how many Physical Resource Blocks (PRBs) are needed to carry the
target payload of 300 bits. According to Table A.3-1 of [36.104], one PRB can carry a
payload of 104 bits when the modulation scheme is QPSK and the coding rate is 1/3. This
payload is measured at the MAC/PHY boundary. Three PRBs can then carry 104 bits per
PRB x 3 PRBs = 312 bits, which is adequate for the target payload of 300 bits. If a user's
channel conditions allow 16-QAM with a coding rate of 3/4, one PRB can carry 408 bits,
which suffices for the target payload of 300 bits (see Table A.4-1 of [36.104]). When users
are distributed across the cell, some have good channel conditions and can support
(16-QAM, coding rate = 3/4); others have bad channel conditions and require the more
robust (QPSK, coding rate = 1/3). If 50% of users are able to use (16-QAM, coding
rate = 3/4) and 50% of users need (QPSK, coding rate = 1/3), the average number of PRBs
consumed by a typical VoIP user in the cell is 0.50 x 3 PRBs + 0.50 x 1 PRB = 2 PRBs. In a
1 ms sub-frame there are 50 PRBs, allowing 50 PRBs / 2 PRBs per user = 25 users. Since
the AMR codec generates a new speech frame every 20 ms, during a span of 20 ms we can
have 20 sub-frames carrying VoIP packets for 20 sub-frames x 25 users per sub-frame =
500 users. These calculations assume that every packet, with a specific modulation scheme
and amount of coding, is received without errors all the time. In practice, some packets are
lost and require HARQ retransmission. If we need one (additional) retransmission on
average, PRBs need to be allocated to a given VoIP user twice per 20 ms interval instead of
just once. Since a VoIP user now consumes twice as many PRBs during the 20 ms interval,
the number of VoIP users is halved (i.e., 500/2 = 250). In summary, for the assumptions
made here, the VoIP capacity in LTE is 250 users in the

case of a 10 MHz channel bandwidth. A comprehensive simulation-based analysis indicates
that 123 VoIP users can be supported in 5 MHz [2], implying that about 123 x 2 = 246
users can be supported in a 10 MHz channel bandwidth.
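The arithmetic of the preceding paragraph is summarised below in a compact form. All the assumptions (AMR 12.2 kbps, the header sizes, the PRB payload figures from the TS 36.104 annex tables, the 50/50 MCS split and one HARQ retransmission on average) are those stated above; changing any of them changes the result accordingly.

import math

def voip_capacity(n_prb_per_subframe=50,       # 10 MHz channel
                  avg_transmissions=2.0,       # first transmission + one HARQ reTx
                  voip_period_ms=20):
    """Approximate VoIP users per cell for the assumptions in the text above."""
    payload_bits = 244 + 24 + 32                           # speech + IP + PDCP/RLC/MAC = 300
    prb_payload  = {"QPSK 1/3": 104, "16QAM 3/4": 408}     # bits per PRB (TS 36.104 annex)
    mcs_mix      = {"QPSK 1/3": 0.5, "16QAM 3/4": 0.5}     # share of users on each MCS
    avg_prbs = sum(mcs_mix[m] * math.ceil(payload_bits / prb_payload[m])
                   for m in mcs_mix)                       # = 2 PRBs per packet on average
    users_per_subframe = n_prb_per_subframe / (avg_prbs * avg_transmissions)
    return int(users_per_subframe * voip_period_ms)

if __name__ == "__main__":
    print(voip_capacity())   # roughly 250 users in 10 MHz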
The VoIP capacity estimate calculated above can be adjusted by modifying the assumptions
and making suitable adjustments to the calculations. For example, instead of using just
two combinations of modulation scheme and coding rate, multiple combinations can be
used to estimate the number of PRBs required by an average user in the cell. The overall
approach outlined above can still be used for an approximate VoIP capacity estimate.
Several factors would increase the VoIP capacity estimated above. If many users can
work with a reduced degree of channel coding, capacity would be higher. We did not use
64-QAM in the analysis above, because only UE Category 5 supports it in the uplink, and
such UEs may not be common for quite some time. Use of antenna techniques would also
increase the capacity. Consideration of the voice activity factor would also increase
capacity, because we do not need to send hundreds of speech bits during silence intervals.
Some factors would decrease the VoIP capacity estimated above. If many users in the cell
need more redundancy than that provided by rate-1/3 coding, more PRBs per user are
needed, reducing the capacity. If semi-persistent scheduling is not used, the higher control
channel overhead would decrease the achievable VoIP capacity.
In summary, a 10 MHz channel bandwidth could support about 250 VoIP users in LTE.


12 REFERENCES
[1] 3GPP TS 36.104 V8.7.0.
[2] 3GPP R1-072570, "Performance Evaluation Checkpoint: VoIP Summary".
[3] LTE in Bullets.
[4] 3GPP TR 25.814 V7.1.0, "Physical Layer Aspects for Evolved UTRA", Annex A: Simulation
Scenarios, page 116.

[5] 3GPP TS 36.331 V10.2.0, "Radio Resource Control (RRC) Protocol Specification", Section 5.5.4.4:
Event A3 (Neighbour becomes offset better than PCell), page 84.

[6] 3GPP TS 36.300 V10.4.0, "Overall Description", Section 10.1.2, Mobility Management in
ECM-CONNECTED, subsection 10.1.2.1.1, C-plane handling, page 62.

[6] 3GPP TS 36.133 V10.3.0, "Requirements for Support of Radio
Resource Management", Section 7.6, Radio Link Monitoring, page 45.

[7] 3GPP TS 36.300 V10.4.0, "Overall Description", Section 10.1.6, Radio Link Failure, page
72.
