
  • Research Article
  • Open Access

QoS Modeling for End-to-End Performance Evaluation over Networks with Wireless Access

EURASIP Journal on Wireless Communications and Networking 2010, 2010:831707

https://doi.org/10.1155/2010/831707

Received: 31 July 2009

Accepted: 31 January 2010

Published: 22 March 2010

Abstract

This paper presents an end-to-end Quality of Service (QoS) model for assessing the performance of data services over networks with wireless access. The proposed model deals with performance degradation across protocol layers using a bottom-up strategy, starting at the physical layer and moving up to the application layer. This approach makes it possible to analytically assess performance at different layers, thereby facilitating a possible end-to-end optimization process. As a representative case, a scenario in which a set of mobile terminals connects to a streaming server through an IP access node has been studied. UDP, TCP, and the new TCP-Friendly Rate Control (TFRC) protocols were analyzed at the transport layer. The radio interface consisted of a variable-rate multiuser and multichannel subsystem, including retransmissions and adaptive modulation and coding. The proposed analytical QoS model was validated on a real-time emulator of an end-to-end network with wireless access and proved to be very useful for the purposes of service performance estimation and optimization.

Keywords

  • Channel State Information
  • Transport Layer
  • Orthogonal Frequency Division Multiple Access
  • Medium Access Control Layer
  • Congestion Window

1. Introduction

Quality of Service (QoS) over networks with wireless access is a common research topic and is often studied in relation to end-to-end QoS or cross-layer architectures. Most authors focus on particular network elements or domains (e.g., terminals, radio interfaces, or core networks) or on specific protocol layers, such as congestion control schemes for wireless multimedia at the transport layer (TCP-friendly) [1] or QoS-scheduling techniques at the radio interface [2].

However, the QoS perceived by end users is an end-to-end issue and is therefore affected by every part of the network, the protocol layers, and the way they all interact. Moreover, seamless connectivity requires wireless and wired networks to operate in a coordinated manner in order to support packet data services with different QoS requirements. In such scenarios, data service performance assessment is usually addressed through active terminal monitoring over real networks [3]. However, such a method proves costly if the operator wants to collect statistics from a reasonable number of terminals, applications, and locations. It may also prove highly time-consuming due to the variety of potential scenarios, in terms of both the types of service offered and their spatial locations.

Only a small number of works in the literature describe a general framework for end-to-end QoS control. One such end-to-end QoS framework for streaming services in 3G mobile networks is considered in [4], analyzing the interaction between UMTS and IETF's protocols and mechanisms. In [5], several key elements in the end-to-end QoS support for video delivery are addressed, including network QoS provisioning and scalable video representation. A small number of works have begun to include proposals involving end-to-end QoS management over wireless networks. In [6], a theoretical model for integrated cross-layer control and optimization in wireless multimedia communications is introduced. The work presented in [7] proposes an adaptive protocol suite for optimizing service performance over wireless networks, including rate adaptation, congestion control, mobility support, and coding. An overview of the current cross-layer solutions for QoS support in multihop wireless networks including cooperative communication and networking or opportunistic transmission can be found in [8]. However, none of the previous works presents a method or tool for assessing and/or optimizing end-to-end QoS in a simple manner.

In this paper, the problem of providing accurate end-to-end performance estimations over networks with wireless access is addressed through a QoS model. The quality of packet data services is analyzed by calculating the performance degradation that occurs at each protocol layer. The overall degradation is analyzed from the physical layer up to the application layer. The performance assessment model described herein can be used to estimate the end-to-end performance of services in this type of network before deployment. In addition, the proposed model is a useful tool for achieving end-to-end optimization, as it helps to find an appropriate configuration for each layer, thereby optimizing end-to-end performance.

The proposed model was validated using a set of mobile terminals which were connected to a streaming server through an IP network with wireless access. We paid special attention to the impact of different radio interface mechanisms and transport layer protocols on streaming service performance.

The remainder of this paper is organized as follows. The general system model for multimedia streaming services over the wired-wireless network is outlined in Section 2. The QoS modeling process of the streaming protocol stack is presented in Section 3. Section 4 presents the end-to-end model results, whereas their validation results from a real-time emulator are shown in Section 5. Section 6 discusses the applicability of the proposed architecture for assessing the Quality of Experience (QoE) for data service users. Finally, Section 7 states the main conclusions of this work.

2. System Model

This section presents the scenario and protocol stack under analysis. As mentioned earlier, a streaming service was chosen as the representative case to be studied (see Figure 1). The system is divided into two subsystems: the radio access network segment and the transport network segment. An access node is responsible for interconnecting the two segments in order to provide an end-to-end connection between the User Equipment (UE) and streaming server.
Figure 1: Scenario and protocol stack under analysis.

Across the protocol stack, the Packet Data Units (PDUs) of layer i will hereinafter be referred to as Li-PDUs. The size of the PDUs at each layer is denoted by B_Li, and the Li-PDU header length is denoted by H_Li. The following terminology is used for performance indicators:
  1. (a) the mean information rate offered to layer i,

  2. (b) the mean net throughput achieved at layer i (at the receiver),

  3. (c) the mean Li-PDU delay,

  4. (d) the mean Li-PDU loss rate.

A description of the system model is given from Layer 1 (L1) to Layer 5 (L5).
  • (L1) A variable-rate multiuser and multichannel subsystem is considered for the radio interface. Channel multiplexing is performed at the PHYsical (PHY) layer, where the radio channel is divided into resources independently allocated to users. The PHY layer also performs adaptive modulation and channel coding [9].

  • (L2) The link layer is responsible for user multiplexing; that is, resources are temporarily assigned to users following a specific scheduling algorithm. Moreover, selective retransmission of erroneous PDUs (if so configured) and the compression of upper-layer headers are also performed at this layer. Traffic shaping is performed at the upper interface of the network-side L2; when the network load is high, data may be lost due to overflow in the queue.

  • (L3) An IP-based radio access node is considered at the network layer (L3), through which mobile terminals connect to the streaming server.

  • (L4) At the transport layer (L4), several options were analyzed at the user plane (UDP, TCP, and TFRC [1]).

  • (L5) At the user plane, the Real-time Transport Protocol (RTP) carries delay-sensitive data, while the Real-time Transport Control Protocol (RTCP) conveys information on the participants and monitors the quality of the RTP session. Performance analysis of streaming signaling protocols during session setup is beyond the scope of this paper; further details can be found in [1, 5].

In this work, the throughput, delay, and loss rate indicators at each layer are modeled analytically, except for the delay associated with the scheduling algorithms at the radio and IP domains, which remains an open issue and was obtained from simulations.

For the traffic model, variable rate information sources are considered at the application layer. A sufficiently large application buffer is assumed; thus network jitter is compensated at this layer. A summary of the numerical parameters used in this work at all layers is given in Table 12 at the end of the paper.

3. Protocol Layer Modeling

3.1. Physical Layer Model

The physical radio resources considered in this work are based on an Orthogonal Frequency Division Multiple Access (OFDMA) scheme, as defined for 3GPP Long-Term Evolution (LTE) [10, 11]. OFDM subcarriers are organized into channels, each of which groups together subcarriers that can be reallocated to users on a frame-by-frame basis. A frame is a set of OFDM symbols with a duration of one TTI (Transmission Time Interval). The resource allocation unit (a group of subcarriers during one TTI) is referred to as a Physical Resource Block (PRB) and allows for the transmission of Quadrature Amplitude Modulation (QAM) symbols, as shown in Figure 2.
Figure 2: LTE physical resources structure.

Adaptive modulation is used to follow the fading behavior of the channels, represented by their instantaneous Signal to Noise Ratio (SNR); such behavior is different for each user and PRB [12]. The received instantaneous SNR is represented per user and channel at each frame, and the number of bits/symbol of the QAM constellation is selected so as to ideally fulfill a certain target bit error rate. Channel coding (with a given coding rate) is used to obtain a certain coding gain, which generally ranges from 2 to 10 dB.

The same constellation is used for all QAM symbols within a PRB, making it possible to transmit a certain total number of bits per frame. This quantity can be seen as the potential rate (in bits/frame) of channel k if it is assigned to user i (see the MAC layer model in the following section). The actual rate of a channel is that of the user who is actually allocated to channel k.
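The constellation selection and per-PRB potential rate described above can be sketched as follows. This is an illustrative sketch rather than the paper's exact model: it uses the widely cited M-QAM BER approximation BER ≈ 0.2·exp(−1.5·SNR/(2^b − 1)) to pick the largest constellation meeting the target BER, and the numeric defaults (coding gain, symbols per PRB, maximum modulation order) are placeholder assumptions.

```python
import math

def bits_per_symbol(snr_db, target_ber=1e-3, coding_gain_db=3.0, b_max=6):
    # Largest b (bits/symbol) such that the approximate M-QAM BER
    # 0.2 * exp(-1.5 * snr / (2**b - 1)) stays below target_ber.
    snr = 10 ** ((snr_db + coding_gain_db) / 10)   # coding gain shifts the SNR
    gamma = -math.log(5 * target_ber)              # required value of 1.5*snr/(2**b - 1)
    b = int(math.floor(math.log2(1 + 1.5 * snr / gamma)))
    return max(0, min(b, b_max))

def prb_potential_rate(snr_db, n_symbols=84, coding_rate=0.5, **kw):
    # Potential rate (information bits per frame) of one PRB for one user.
    return int(bits_per_symbol(snr_db, **kw) * n_symbols * coding_rate)
```

For instance, a user seeing 20 dB of SNR on a channel gets 5 bits/symbol under these default assumptions; the scheduler (next section) then decides which user's potential rate each PRB actually realizes.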

Regarding the radio channel's behavior, its temporal variation (fading) is assumed to follow the usual Jakes' model [9]; an exponential decay model is assumed for the correlation between channels, and independence is assumed between users.

The following expressions together with the parameters listed in Table 1 provide a summary of the performance indicators at the physical layer.
Table 1

Parameters associated with the physical layer model.

Physical layer:
  • Target Bit Error Rate (BER)
  • Constellation set (modulation levels)
  • TTI: Transmission Time Interval
  • Number of channels
  • Correlation between consecutive channels
  • Frame duration
  • Number of QAM symbols multiplexed on a channel
  • Number of QAM symbols per TTI
  • Coding rate
  • Channel coding gain

Radio channel:
  • Doppler spread
  • Average Signal to Noise Ratio (SNR)

PHY model:
(1)
(2)
(3)

3.2. Link Layer Model

The link layer includes the Medium Access Control (MAC), Radio Link Control (RLC), and Packet Data Convergence Protocol (PDCP) sublayers (as shown in Figure 1).

3.2.1. MAC Layer Model

A set of N_u users share the radio transmission resources. The MAC layer at the access node allocates channels to users on a frame-by-frame basis; that is, for each new frame, the system assigns each physical channel to a single user. OFDMA allocation is applied according to a particular scheduling algorithm, considering different PRBs with adaptive modulation per user. The actual number of bits extracted from the ith user's queue and allocated on channel k will be zero or the channel's potential rate, depending on the scheduler decision. The total number of bits extracted from the ith user's queue in a frame is given by
(4)

Two scheduling algorithms were assessed: Round Robin (RR) and Modified Largest Weighted Delay First (M-LWDF) [12]. RR is fair among users, although it fails to achieve any multiuser or multichannel diversity gain. On the other hand, M-LWDF considers both channel quality and QoS indicators in its scheduling criteria by allocating the resources to the user with the highest potential rate and delay product. According to [2], the M-LWDF algorithm is throughput optimal; that is, it gets the maximum possible diversity gain for stable queues. Other scheduling algorithms such as Best Channel (BC) or Proportional Fair (PF) algorithms achieve better throughput for some users, but this comes at the expense of others, who experience throughput starvation [12]. As mentioned earlier, the delay associated to the scheduling process was obtained from simulations.
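A per-frame M-LWDF allocation can be sketched as below. The function names are illustrative assumptions; the classical M-LWDF weight also includes a per-user QoS-derived constant, which is folded into the optional `weights` argument here.

```python
def mlwdf_allocate(pot_rates, hol_delays, weights=None):
    # pot_rates[i][k]: potential rate of user i on channel k this frame.
    # hol_delays[i]:   head-of-line packet delay of user i's queue.
    # Channel k is granted to the user maximizing w_i * delay_i * rate_ik,
    # i.e., the highest potential-rate-and-delay product.
    n_users, n_channels = len(pot_rates), len(pot_rates[0])
    weights = weights or [1.0] * n_users
    return [max(range(n_users),
                key=lambda i: weights[i] * hol_delays[i] * pot_rates[i][k])
            for k in range(n_channels)]
```

With equal head-of-line delays this degenerates to a best-channel rule; as a user's queueing delay grows, it progressively wins channels even where its potential rate is lower, which is how M-LWDF trades diversity gain against QoS.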

The error rate at the MAC layer depends on the BER achieved at the physical layer and on the size of an L2a-PDU. In order to provide an expression for the Block Error Rate (BLER) at the MAC layer, the instantaneous BER is assumed to be equally distributed along the bits of a block, which is reasonably true if proper interleaving is performed.
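Under this equally-distributed-error assumption, the MAC BLER follows directly; a minimal sketch (the block size in bits is an input assumption):

```python
def mac_bler(ber, block_size_bits):
    # A block of B bits is in error if at least one of its bits is in error,
    # assuming independent, identically distributed bit errors (interleaving).
    return 1.0 - (1.0 - ber) ** block_size_bits
```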

A summary of the performance indicators and parameters at the MAC layer is shown in (5)-(6) and Table 2, respectively.
Table 2

Parameters associated with the MAC layer model.

  • Scheduling algorithm
  • B_L2a: size of L2a-PDUs
  • H_L2a: header length of L2a-PDUs

MAC model:
(5)
(6)

3.2.2. RLC Layer Model

While some streaming applications are error-tolerant, others may require reliable data delivery. In the latter case, the network can optionally retransmit erroneous L2-PDUs (i.e., RLC blocks). Thus, the error rate can be lowered at the expense of decreased throughput and increased mean delay and jitter.

The retransmission mechanism analyzed in this paper considers a generic link-layer retransmission scheme (based on the ARQ protocol) [13]. The ARQ protocol behaves as follows. Incoming upper-layer PDUs are segmented into L2-PDUs and buffered. The transmitter sends all L2-PDUs and polls the receiver in the last L2-PDU of a higher-layer PDU. A status report request is issued if no response to the polling is received before the timeout expires. Selective acknowledgement is used to report which L2-PDUs have been incorrectly received. Nonacknowledged L2-PDUs are retransmitted if the maximum number of retransmissions has not been reached; we refer to the ith (re)transmission attempt as a cycle. Further details can be found in [14].

Assuming a maximum number of retransmission attempts, the loss rate is given by the probability that an L2-PDU is still not correctly received after all allowed retransmissions.
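With independent transmission attempts, this residual loss is simply the probability that the first transmission and every allowed retransmission all fail; a one-line sketch under that independence assumption:

```python
def rlc_residual_loss(bler, max_retx):
    # The L2-PDU is lost only if the initial transmission and all
    # max_retx retransmissions fail (independent attempts).
    return bler ** (max_retx + 1)
```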

MAC layer throughput results from the aggregation of two types of PDUs: data and control L2-PDUs. The first contribution, corresponding to data PDUs, is computed as
(7)

where the term between brackets represents the average number of (re)transmissions per -PDU, and the last term corresponds to the RLC overhead.

The second contribution represents the throughput generated by status report requests. Such requests are sent whenever no answer to the last poll of a cycle is received. This contribution is given by
(8)

where the terms represent, respectively, the required mean number of retransmission cycles to send one upper-layer PDU, the mean sizes of a data and a control L2-PDU, and the number of L2-PDUs per upper-layer PDU (including retransmissions).

If is the number of (re)transmitted -PDUs in cycle , then the average number of -PDUs per -PDU can be computed as
(9)
where is the probability of sending in cycle , given by the following recursion:
(10)
Solving the recursion
(11)
The required mean number of retransmission cycles to send one -PDU can be expressed as
(12)
where is the probability of requiring the th cycle to successfully complete the transmission, computed as [14]
(13)
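The mean number of retransmission cycles in (12)-(13) can be evaluated numerically. The sketch below is an approximation under an independent-block-error assumption, using the identity E[C] = Σᵢ P(C ≥ i): cycle i is needed whenever at least one block has failed in all of its previous i − 1 attempts.

```python
def expected_cycles(bler, n_blocks, max_retx):
    # Mean number of ARQ cycles to deliver one upper-layer PDU segmented
    # into n_blocks L2-PDUs, with block error rate bler and at most
    # max_retx retransmission cycles.
    total = 0.0
    for i in range(1, max_retx + 2):               # cycles 1 .. max_retx + 1
        # P(cycle i is needed) = 1 - P(every block succeeded within i-1 tries)
        p_need = 1.0 - (1.0 - bler ** (i - 1)) ** n_blocks
        total += p_need                             # E[C] = sum_i P(C >= i)
    return total
```

With an error-free link this returns exactly one cycle; with bler = 0.5 and a single block it approaches the geometric mean of 2 cycles as max_retx grows.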
From previous equations, RLC throughput is given by
(14)
To compute the mean L2-PDU delay, we need to analyze the impact of retransmissions, which depends on the loss rate at the next lower layer. In particular, the additional delay introduced by retransmissions at each cycle comes from (a) the delay in retransmitting L2-PDUs and (b) the delay in correctly receiving the status report from the receiver. These factors are combined as follows:
(15)

where both delay terms can be computed as detailed in [14].

A summary of the performance indicators and parameters at the RLC layer is shown in (16)–(18) and Table 3, respectively
(16)
(17)
(18)
Table 3

Parameters associated with the RLC layer model.

  • Maximum number of RLC retransmissions
  • Timeout to retransmit a new polling request
  • Size of data L2-PDUs
  • Size of status report L2-PDUs
  • Header length of L2-PDUs

3.2.3. PDCP Layer Model

The PDCP layer is in charge of adapting the data for efficient transport through the radio interface. This layer performs header compression, which reduces network and transport headers (e.g., TCP/IP or RTP/UDP/IP). The most advanced header compression technique is RObust Header Compression (ROHC) [15], which has been adopted by cellular standardization bodies such as 3GPP. Using ROHC, the RTP/UDP/IPv4 header is compressed from 40 bytes to approximately 1 to 4 bytes, providing a significant compression gain.
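The capacity impact of ROHC on small streaming packets can be illustrated as follows; the 3-byte average compressed header and the 500-byte payload are illustrative assumptions, not values from the paper.

```python
def rohc_capacity_gain(payload_bytes, full_header=40, rohc_header=3):
    # Ratio of uncompressed to compressed packet size over the air:
    # the smaller the payload, the larger the relative gain from ROHC.
    return (payload_bytes + full_header) / (payload_bytes + rohc_header)
```

Under these assumptions, a roughly 500-byte streaming payload yields a gain of about 7%, in line with the capacity gain reported for this scenario in Section 4.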

The L3-PDU loss rate comes from erroneous L2-PDUs and from L3-PDUs discarded at the PDCP queues:
(19)

In the access node, there is one dedicated PDCP buffer for each connection. The discard term is determined by the buffer size and the incoming traffic load.

Taking into account that an L3-PDU is correctly transmitted only if all the L2-PDUs into which it was segmented arrive correctly at the receiver, the error term is computed as the probability of exhausting the maximum number of retransmissions (see (13)):
(20)
The computation of PDCP throughput must take into account the lower-layer throughput as well as the effect of ROHC. Assuming an average ROHC compression gain, the PDCP throughput is given by the following expression:
(21)
The average L3-PDU delay is defined as the time elapsed from when a PDU arrives (from upper layers) at the PDCP sublayer of the transmitter until an acknowledgement is received from the receiver. Hence, the average delay at the PDCP layer comprises the time to correctly receive all L2-PDUs into which an L3-PDU is segmented; such delay includes potential L2-PDU retransmissions, up to the configured maximum. Each retransmission cycle adds two delay contributions: (a) the delay in (re)transmitting L2-PDUs, and (b) the delay in receiving the status report from the receiver:
(22)

where the probability of requiring i (re)transmission cycles is as defined in (13), and the remaining delay terms can be computed as detailed in [14].

Finally, can be expressed as
(23)
A summary of the performance indicators and parameters at the PDCP layer is shown in (24)–(26) and Table 4, respectively
(24)
(25)
(26)
Table 4

Parameters associated with the PDCP layer model.

  • PDCP queue size
  • Header length of L3-PDUs
  • Compression gain achieved by ROHC

3.3. Network Layer Model

The network layer is based on an end-to-end IP connection from the mobile terminal to the streaming server. IP links are assumed to be over-dimensioned compared to radio links. The well-known Weighted Fair Queuing (WFQ) multiplexing algorithm was assessed in the IP routers by means of simulations.

End-to-end IP performance is analyzed from the performance results obtained at IP-fixed and radio domains, as shown in Figure 1. The following considerations are made.
  1. (1) The L3-PDU loss rate can be computed as the aggregation of the L3-PDU losses occurring in each domain: radio and fixed.

  2. (2) The mean throughput achieved by the mobile terminal is given by the most limiting point in the network, that is, the radio interface.

  3. (3) The mean end-to-end IP delay can be computed as the aggregation of the delays experienced in each domain: radio and fixed.

Considering the previous statements, the performance indicators and parameters at the IP layer are shown in (27)–(29) and Table 5, respectively.
(27)
(28)
(29)
Table 5

Parameters associated with the IP layer model.

  • IP multiplexing algorithm
  • IP header length (version 4)
  • Number of IP nodes from server to client
  • Minimum IP link capacity
  • IP queue size

3.4. Transport Layer Model

This section aims to model the performance of three different transport protocols (UDP, TCP, and TFRC) based on performance indicators of the lower layers.

3.4.1. UDP Model

Since UDP does not include any congestion control or retransmission mechanisms, UDP throughput can be computed directly from the IP throughput by accounting for the header overhead. The performance indicators and parameters at the UDP layer are shown in (30)–(32) and Table 6, respectively:
(30)
(31)
(32)
Table 6

Parameters associated with the UDP model.

  • Transport header length

3.4.2. TCP Model

TCP includes a congestion control mechanism to react to network congestion. When TCP is used as the transport protocol, application throughput depends on the specific TCP implementation. An analytic characterization of the steady-state throughput of the TCP Reno protocol is applied in this work. This model characterizes TCP throughput as a function of the loss rate in the network, the Round-Trip Time (RTT), the Retransmission Time-Out duration, the maximum TCP window size (W) for a bulk-transfer TCP flow, and the number of packets (b) acknowledged by each received ACK. The complete characterization of the TCP source rate, assuming that the maximum TCP window size has been reached, is given in [16].

TCP performance is highly sensitive to packet losses because of its inherent congestion control mechanism, which shrinks the transmission window even if such losses are not due to congestion. Moreover, the higher the RTT, the lower the throughput at the transport layer, because the congestion window is increased only once per RTT.

An appropriate congestion window setting (in addition to adequate queue dimensioning in network elements) is a key factor in optimizing end-to-end performance. In particular, the maximum window size should be slightly higher than the Bandwidth-Delay Product (BDP) [3] in order to exploit the available radio capacity. Consequently, a maximum TCP window size of W = 32 kB was chosen. Since the queue sizes (per user) are larger than W, the probability of overflow in the queues may be assumed negligible; thus, the contribution to the packet loss rate only comes from losses at the radio interface. In steady state, the TCP source rate can be characterized by [16]
(33)

where the loss probability corresponds to the loss rate in the network, and the RTT can be approximated by the mean two-way delay over the end-to-end network.

From (33), a mutual dependence is clearly identified: the TCP throughput depends on the network conditions, while the average delay and loss rate in the network depend on the total network load; for example, a high network load leads to higher delays and losses. Hence, the source rate can be computed by solving the following system of equations:
(34)
which can be expressed by the following equation:
(35)

In order to solve this nonlinear equation, the behavior of the network delay and loss rate has been parameterized using standard curve-fitting methods from the results of (29) and (26).
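The fixed-point procedure in (33)–(35) can be sketched as follows. The throughput expression is the well-known steady-state TCP Reno characterization of [16]; the network response curves `rtt_of_load` and `loss_of_load` stand in for the fitted curves mentioned above and, together with the default parameter values, are assumptions of this sketch.

```python
import math

def tcp_reno_rate(p, rtt, t0=1.0, w_max=32 * 1024, b=2, mss=1460):
    # Steady-state TCP Reno source rate (bytes/s), capped by w_max / RTT.
    if p <= 0.0:
        return w_max / rtt
    denom = rtt * math.sqrt(2 * b * p / 3) \
          + t0 * min(1.0, 3 * math.sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p)
    return min(w_max / rtt, mss / denom)

def solve_source_rate(rtt_of_load, loss_of_load, iters=100):
    # Fixed-point iteration: the offered rate changes the network load,
    # which changes RTT and loss, which in turn change the TCP rate,
    # until the rate settles.
    rate = 1e4
    for _ in range(iters):
        rate = tcp_reno_rate(loss_of_load(rate), rtt_of_load(rate))
    return rate
```

With constant response curves the iteration converges immediately; with fitted load-dependent curves it realizes the self-consistent solution of (35).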

TCP delay depends on the probability of retransmissions and on the time required by the transmitter to detect the need for a retransmission (via duplicated ACKs or timer expiration). As stated in [16], such a time period can be computed as
(36)
Thus, TCP delay ( ) can be computed from the IP level delay by adding the effect of TCP retransmissions:
(37)
Once the source rate is obtained by solving the aforementioned nonlinear equation, the performance indicators and parameters at the TCP layer are shown in (38)–(40) and Table 7, respectively:
(38)
(39)
(40)
Table 7

Parameters associated with the TCP model.

  • Maximum TCP window size
  • Number of packets acknowledged by each received ACK
  • Retransmission Time-Out
  • Transport header length

Note that the transport layer becomes error-free ( ) since TCP is a reliable protocol.

3.4.3. TFRC Model

TFRC exhibits less throughput variation over time than TCP, which, in principle, makes it more suitable for real-time applications such as telephony or streaming media, where a relatively smooth sending rate is important. The recommended TFRC throughput equation described in [1] was used; it is a simplified version of the Reno TCP throughput equation when no delayed ACK is applied, that is, b = 1 [17]. The TFRC source throughput can be computed by [17]
(41)
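The simplified TFRC equation (Reno with b = 1 and no delayed ACKs) can be written as below; the default t_RTO = 4·RTT follows the common TFRC recommendation, and the segment size is an assumption of the sketch.

```python
import math

def tfrc_rate(p, rtt, t_rto=None, s=1460):
    # TFRC sending rate (bytes/s): simplified TCP Reno equation with b = 1.
    t_rto = 4 * rtt if t_rto is None else t_rto
    denom = rtt * math.sqrt(2 * p / 3) \
          + t_rto * 3 * math.sqrt(3 * p / 8) * p * (1 + 32 * p * p)
    return s / denom
```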
The evaluation of the TFRC source rate is performed following the same procedure as in the TCP case, that is, by solving the nonlinear equation described in (35). A summary of the performance indicators and parameters at the TFRC layer is shown in (42)–(44) and Table 8, respectively:
(42)
(43)
(44)
Table 8

Parameters associated with the TFRC model.

  • Number of packets acknowledged by each received ACK
  • TFRC timer used for rate adaptation
  • Transport header length

Since TFRC only includes a congestion control mechanism (and not retransmissions), the losses remaining at the transport layer come from uncorrected errors at the radio link, and the TFRC delay is similar to the network delay.

3.5. Application Layer Model

The application layer is responsible for establishing the streaming session, and thereafter, for transferring the multimedia content (at the server side) and reproducing the content (at the client side).

The streaming server delivers application data to the transport layer at an average rate defined by the codec (see Table 10). However, if the transport layer includes a congestion control mechanism (e.g., TCP or TFRC), the socket between these layers must temporarily buffer the packets whenever the transport layer rate is lower than the codec rate. This mechanism has been approximated by an M/M/1/L queueing system whose arrival rate is the codec rate and whose service rate is the transport layer rate. The loss rate in an M/M/1/L queue is given by
(45)
whereas the average waiting time in the socket can be obtained from
(46)
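The M/M/1/L approximation of the socket buffer has a closed form; the sketch below assumes L is the total system capacity and computes the "waiting time" as the mean sojourn time of accepted packets via Little's law.

```python
def mm1l_loss(lam, mu, capacity):
    # Blocking probability of an M/M/1/L queue holding at most
    # `capacity` packets (lam = arrival rate, mu = service rate).
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (capacity + 1)
    return (1 - rho) * rho ** capacity / (1 - rho ** (capacity + 1))

def mm1l_sojourn(lam, mu, capacity):
    # Mean time spent in the socket by accepted packets (Little's law
    # applied to the accepted arrival rate lam * (1 - P_loss)).
    rho = lam / mu
    if abs(rho - 1.0) < 1e-12:
        mean_n = capacity / 2.0
    else:
        mean_n = rho / (1 - rho) \
               - (capacity + 1) * rho ** (capacity + 1) / (1 - rho ** (capacity + 1))
    return mean_n / (lam * (1 - mm1l_loss(lam, mu, capacity)))
```

For example, with a service rate twice the codec rate and a 10-packet buffer, the blocking probability is well below 0.1%, so the socket adds delay but essentially no loss.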

On the receiver side, the application layer adds an additional delay because of the application buffer of the streaming player. A sufficiently large application buffer, which hides network jitter from the application, has been assumed. Then, considering that the application throughput is not interrupted by buffer starvation, the following expressions can be obtained.

The performance indicators and parameters at the application layer are shown in (47)–(49) and Table 9, respectively.
(47)
(48)
(49)
where the socket loss and delay contributions are only applicable to TCP- or TFRC-based applications.
Table 9

Parameters associated with the application layer model.

  • Number of users
  • RTP header length
  • Socket buffer size

Table 10

Content encoding description.

  • Mean source rate at application layer: 384 kbps
  • Video resolution: QVGA (320 × 240 pixels)
  • Frame rate: 15 frames/sec
  • Video encoding format: 3GPP (based on MPEG-4)

From the end-user perspective, the delay introduced by the application buffer can be considered part of the session establishment, since the application does not start reproducing media until the buffer is full. The buffer usually spans from 1 to 10 seconds (depending on the technology). However, in two-way streaming services (such as Push-to-Talk over Cellular, PoC), the buffering delay is generally small (no higher than 500 ms), since the interactivity requirements are much stricter than in one-way streaming services.

4. Results

The end-to-end QoS model summarized in Figure 3 was used for different purposes. First, it is used to estimate the performance at different protocol layers for a UDP-based streaming solution; then, a design example for TCP-based applications is described.
Figure 3: Summary of the end-to-end QoS model.

4.1. Performance Estimation

Figure 4 shows an example of performance estimation for a UDP-based streaming solution. The average throughput at different layers is shown as a function of the total application load. The mean source rate per user was kept constant (384 kbps) while the number of users in the system increased. Figures 4(a) and 4(c) on the left show throughput results without header compression (ROHC), whereas Figures 4(b) and 4(d) on the right include this feature.
Figure 4: Throughput results for UDP-based streaming. (a) Without ROHC & RR scheduling; (b) with ROHC & RR scheduling; (c) without ROHC & M-LWDF scheduling; (d) with ROHC & M-LWDF scheduling.

Analyzing the performance shown in Figure 4, the following effects can be observed at each layer.

  • MAC layer throughput degrades rapidly above a certain critical load point, which corresponds to the maximum achievable system throughput for a particular multiplexing algorithm. As expected, the M-LWDF algorithm achieves a higher system throughput (about 12 Mbps with the scenario settings) than RR, since M-LWDF takes Channel State Information (CSI) into account, thus providing a higher diversity gain [12].

  • The RLC layer introduces additional throughput degradation due to retransmissions, as described in (17).

  • The use of ROHC makes it possible to decrease the required amount of resources below the PDCP layer while achieving the same application-level throughput. Specifically, ROHC achieves a capacity gain of 7% in our scenario. Due to compression, the PDCP layer may even compute a higher throughput (after decompression) than the lower layers, as illustrated in Figures 4(b) and 4(d).

  • Throughput at the upper layers only suffers from RTP/UDP/IP header overheads.

Throughput curves in Figure 4 also provide very valuable information about the required resources at each layer in order to fulfill the desired QoS at the application level. For instance, the proposed model is able to map application level QoS requirements onto lower layer requirements; for example, a 384 kbps coding rate requires performing a resource reservation of 400 kbps at the IP level or assigning 450 kbps at MAC layer scheduling.

4.2. End-to-End Design

In this section, an end-to-end design example for TCP-based applications is described. The analysis focuses on those parameters having the greatest influence on overall performance: the TCP window size (W), the maximum number of RLC retransmissions, and the number of users in the system. The numerical parameter values used are summarized in Table 12.

Figure 5 shows the maximum achievable TCP throughput and delay as a function of the maximum number of RLC retransmissions and of the loss rate at the MAC layer after decoding, for W = 32 kB. Results are shown for two load conditions (low and high numbers of users).
Figure 5: Effect of the number of retransmissions on TCP throughput and delay. (a), (b) TCP throughput under the two load conditions; (c), (d) TCP delay under the two load conditions.

In terms of the TCP throughput results, depicted in Figures 5(a) and 5(b), high MAC loss rates require a higher number of RLC retransmissions to minimize data losses and, consequently, maximize throughput. For low load conditions, the potential TCP throughput is higher than the video codec rate (384 kbps) as long as a proper number of retransmissions is configured. However, for high load conditions, TCP is not able to achieve the desired throughput.

Concerning TCP delay results, shown in Figures 5(c) and 5(d), two scenarios are analyzed.
  (a) Low load: in general, high loss rates at the MAC sublayer must be reduced by RLC retransmissions (i.e., by configuring a high maximum number of retransmissions). As the radio interface delay is very low under low load conditions, the impact of RLC retransmissions on TCP delay is almost negligible. Conversely, if the maximum number of retransmissions is set to a low value, TCP becomes responsible for performing end-to-end retransmissions, thus increasing delay.

  (b) High load: in addition to the previous effect, high load conditions increase the radio interface delay, so consecutive RLC retransmissions increase the end-to-end RTT. As the TCP delay depends on the average RTT, a high maximum number of retransmissions leads to high TCP delays. Moreover, as the per-user TCP throughput increases with the number of retransmissions, the overall load in the network grows, thereby further increasing the TCP delay.

According to the results shown in Figure 5, for a given loss rate there is an optimum maximum number of RLC retransmissions that maximizes throughput while keeping delay as low as possible. This value depends on the loss rate in the network; for instance, for the MAC loss rate obtained from the target BER at the physical layer, the optimum number of retransmissions is 6.

Figure 6 shows joint TCP throughput and delay results for two loss rates and two load conditions. For the lower loss rate, the minimum number of RLC retransmissions that achieves the maximum potential throughput is the same regardless of the number of users in the system, as shown in Figures 6(a) and 6(b); this minimum value is selected in order to minimize the end-to-end delay. For the higher loss rate, however, the number of retransmissions that optimizes transport layer performance is 8, as shown in Figures 6(c) and 6(d).
Figure 6

Potential throughput versus delay at the transport layer (TCP), for two loss rates and two load conditions.

The impact of the maximum TCP window size (W) on TCP throughput and delay is shown in Figure 7. Performance results show that excessively small values of the maximum congestion window do not allow full use of the network resources, which reduces the maximum achievable throughput. On the other hand, excessively large values of W require high reliability (in terms of loss rate) in order to use the whole window; too many RLC retransmissions are then required, which increases the end-to-end delay.
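The window trade-off can be illustrated with the usual window-limited throughput bound, min(W/RTT, link rate): a window below the bandwidth-delay product wastes capacity, while a much larger one only adds queuing delay. The 400 ms RTT and 500 kbps link rate below are illustrative assumptions:

```python
# Illustrative window-limited throughput bound: TCP throughput is
# roughly capped at min(W / RTT, link rate). The 400 ms RTT and
# 500 kbps link rate are assumptions for illustration.

def tcp_throughput_cap_kbps(w_bytes, rtt_s, link_kbps):
    window_limited = w_bytes * 8 / 1000 / rtt_s   # kbps allowed by W/RTT
    return min(window_limited, link_kbps)

for w_kb in (4, 16, 32, 64):
    cap = tcp_throughput_cap_kbps(w_kb * 1024, rtt_s=0.4, link_kbps=500)
    print(f"W = {w_kb:2d} kB -> cap = {cap:.1f} kbps")
```

Under these assumptions, small windows are window-limited while large windows saturate the link, which matches the qualitative behavior described above.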
Figure 7

Impact of the maximum TCP window size (W) on TCP throughput and delay.

In sum, the loss rate target, the maximum number of RLC retransmissions, and W must be decided upon jointly, making trade-offs between throughput and delay. For the loss rate considered, a trade-off value of 6 retransmissions limited the end-to-end delay; for these values, the maximum TCP window that maximized throughput was W = 32 kB.

5. Model Validation

The objective of this section is to validate the theoretical model proposed in this work. The validation process is divided into two phases: (i) validation of the radio interface model and (ii) validation of the upper layer model.

5.1. Radio Interface Model Validation

Since the radio technology under study was not yet available, the validation of the radio subsystem is based on link-level simulations. These simulations were performed for a frequency-selective Rayleigh fading channel using adaptive modulation with a fixed target BER. The feedback channel is assumed to be ideal (with no delay or losses).

Figure 8 shows the validation results for the radio interface model, assuming the M-LWDF multiplexing algorithm. Since the QoS model takes PHY/MAC layer simulations as its starting point, the goal of these simulations is to validate the RLC and PDCP models. Performance estimations from the theoretical model, in terms of delay (Figure 8(a)) and throughput (Figure 8(b)), are compared to simulation results.
Figure 8

Radio interface model validation.

5.2. Upper Layer Model Validation

The validation process of the upper layer model (i.e., network, transport, and application) was performed by developing a real-time end-to-end system [18]. Figure 9 shows the validation system architecture, which includes the following modules.
Figure 9

Validation system architecture [18].

(i) Streaming Server.

Darwin Streaming Server v5.5.5 was used on the server side. This server allows one to select UDP or TCP as the transport protocol. Streaming content is based on a single video flow whose parameters are listed in Table 10. A packet sniffer (Wireshark v0.99.7) is used on both sides (server and client) to capture and analyze the traffic between peers.

(ii) Real-Time Emulator.

Between the server and the client, a real-time emulator models the behavior of the whole network, so that the client-server connection experiences (in real time) the quality degradation introduced over the end-to-end path. This emulator uses the packet filtering framework included in the Linux 2.4.x and 2.6.x kernel series together with the iptables utility, which allows one to configure the packet filtering rule set. Quality degradation (in terms of delay and packet loss) is applied to the filtered packets according to the quality indicators obtained at the IP layer: the loss rate P_L3 and the delay D_L3. In this way, the emulator offers a real-time data flow that experiences the degradation introduced by the network with wireless access.
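The degradation step can be sketched in-process as follows: drop each filtered packet with probability P_L3 and delay the survivors by D_L3. This is only a toy model of the mechanism (the real testbed applies it to live traffic via netfilter/iptables), and the numeric values are illustrative:

```python
# Toy in-process version of the degradation the emulator applies to
# filtered packets: drop each with probability P_L3 and delay the
# survivors by D_L3. The real testbed does this on live traffic via
# netfilter/iptables; the values below are illustrative.

import random

def degrade(packets, p_l3, d_l3_ms, rng=random.Random(0)):
    """Return (packet, delivery delay in ms) pairs for surviving packets."""
    out = []
    for pkt in packets:
        if rng.random() < p_l3:
            continue                  # packet lost at the IP layer
        out.append((pkt, d_l3_ms))    # survivor delayed by D_L3
    return out

delivered = degrade(range(1000), p_l3=0.1, d_l3_ms=80)
print(f"{len(delivered)} of 1000 packets delivered, each delayed 80 ms")
```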

(iii) Streaming Client.

VLC Media Player 0.8.6d is responsible for establishing the streaming session with the server and reproducing the content. For the TCP-based solution, TweakMaster v2.50 was also used to align the TCP settings on the client side with the parameters assumed in the theoretical model.

Figure 10 shows the instantaneous source rate generated at the transport layer on the server side, considering the content encoding characteristics described in Table 10. The aim of Figure 10 is to clarify the impact of the network conditions on the UDP and TCP source rates at the server.
Figure 10

Impact of network load on the source rate at the server.

It is shown that UDP delivers data to the network at a source rate determined by the encoding process, independently of the network status (loss rate and delay). The average UDP source rate can be computed from the average application source rate by taking the UDP headers into account. On the other hand, the TCP source rate at the server is highly influenced by network conditions as a consequence of the TCP congestion control mechanism, which tries to react against congestion. This mechanism leads to a significant reduction in the average TCP source rate as the network load increases.
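The UDP source-rate computation described above amounts to scaling the application rate by the per-packet header overhead; the 1000-byte payload below is an illustrative assumption, not the stream's actual packet size:

```python
# Illustrative UDP source-rate computation: the transport-layer rate is
# the application rate scaled by the per-packet header overhead. The
# 1000-byte payload is an assumption for illustration.

def udp_source_rate_kbps(app_rate_kbps, payload_bytes, udp_header_bytes=8):
    return app_rate_kbps * (payload_bytes + udp_header_bytes) / payload_bytes

# e.g. a 384 kbps application stream carried in 1000-byte packets:
print(round(udp_source_rate_kbps(384, 1000), 1))  # -> 387.1
```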

TCP throughput and delay results on the client side obtained from the analytical model are compared to real measurements in Figure 11. The theoretical model shows good agreement with the measurements. The proposed model is less accurate under high load conditions due to the assumption, made during TCP modeling, that the retransmission timeout duration is constant. In a real implementation, the retransmission timeout is adaptively determined by estimating the mean and variance of the RTT [19], which yields slightly better performance in the real system.
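The adaptive timeout referred to here can be sketched with the classic Jacobson-style estimator (smoothed RTT plus four times its variance, as later standardized in RFC 6298); the RTT samples below are illustrative:

```python
# Sketch of the adaptive retransmission timeout the text refers to:
# Jacobson-style smoothed RTT plus four times its variance (RFC 6298,
# which additionally applies a 1 s floor). RTT samples are illustrative.

def rto_estimator(rtt_samples_s, alpha=1/8, beta=1/4):
    """Return SRTT + 4 * RTTVAR after processing the RTT samples."""
    srtt = rttvar = None
    for r in rtt_samples_s:
        if srtt is None:
            srtt, rttvar = r, r / 2          # first sample initializes both
        else:
            rttvar = (1 - beta) * rttvar + beta * abs(srtt - r)
            srtt = (1 - alpha) * srtt + alpha * r
    return srtt + 4 * rttvar

print(round(rto_estimator([0.20, 0.25, 0.22, 0.30, 0.21]), 3))
```

Because the timeout tracks the measured RTT instead of a fixed constant, the real system can retransmit more promptly than the analytical model assumes.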
Figure 11

Application throughput validation (TCP-based solution).

Delay validation results are shown at the transport and application layers. TCP delays were measured by tracing the ACKs received from the terminal (using Wireshark and the tcptrace software), taking their relation to the RTT into account. Validating the RTP delay is more complex, as there is no feedback information from the receiver with which to measure the RTP RTT. The solution involves using an RTCP timestamp to measure the delay from sender to receiver; this requires the sender and receiver to be synchronized via the Network Time Protocol (NTP).

6. Use of the Model for QoE Assessment

The proposed end-to-end emulator delivers a detailed real-time analysis of service quality for any application and technology, given a proper configuration. This approach provides a simple mapping from network-level performance indicators to service-level performance indicators.

From a mobile operator's point of view, knowing how subscribers perceive the performance of the services they are offered is a key issue. Quality of Experience (QoE) is the term used to describe this end user perception.

As the complexity of the lower layers in the end-to-end connection is abstracted by means of the network performance indicators from the QoS model, the proposed emulator is able to run in real time. The emulator applies quality degradation (in terms of delay and packet loss) to the filtered packets, offering a data flow that experiences the degradation a real end-to-end network would introduce. In this manner, the user QoE can be assessed for different network types, configurations, and topologies.

Figure 12 shows a comparison between the throughput obtained from the end-to-end model and measurement results for a UDP-based solution. In addition, a snapshot of the video captured at the client side is shown for three different load levels in order to illustrate the image quality degradation as load increases.
Figure 12

Application throughput validation (UDP-based solution).

The end-to-end emulator can also be used to evaluate video quality under different network configurations and conditions, either by means of objective metrics such as the PSNR (Peak Signal-to-Noise Ratio) or subjective metrics such as the MOS (Mean Opinion Score). Although a number of more complex metrics have recently been defined, the PSNR metric was used in this paper to evaluate video quality under different network loads, as it is the most widely used objective video quality metric [20]. PSNR is defined by
PSNR = 10 log10( MaxErr^2 · w · h / Σ_{i,j} (x_ij - y_ij)^2 )    (50)

where MaxErr represents the maximum possible absolute value of the colour component difference, w is the video width, h is the video height, and x_ij and y_ij are the colour component values of the original and received frames at pixel (i, j).
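A minimal sketch of Eq. (50), computing the PSNR between an original and a degraded frame of 8-bit samples (so MaxErr = 255), using a small synthetic frame for illustration:

```python
import math

def psnr_db(original, degraded, max_err=255.0):
    """PSNR per Eq. (50); frames are equal-sized 2-D lists of samples."""
    w, h = len(original), len(original[0])
    sq_err = sum(
        (original[i][j] - degraded[i][j]) ** 2
        for i in range(w) for j in range(h)
    )
    if sq_err == 0:
        return float("inf")                   # identical frames
    return 10.0 * math.log10(max_err ** 2 * w * h / sq_err)

orig = [[128] * 16 for _ in range(16)]        # flat 16x16 synthetic frame
noisy = [row[:] for row in orig]
noisy[0][0] = 138                             # one pixel off by 10
print(round(psnr_db(orig, noisy), 1))         # -> 52.2
```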

Table 11 shows the average PSNR results obtained with the MSU Video Quality Measurement Tool v2.01. Taking into account that PSNR values above 35 dB are usually considered good quality, good quality is only achieved for network loads below approximately 12 Mbps.
Table 11

PSNR evaluation of video quality.

  Network Load    Application throughput    Average PSNR
  11.5 Mbps       381 kbps                  36.7 dB
  13.4 Mbps       350 kbps                  25.1 dB
  15.3 Mbps       304 kbps                  16.9 dB

Table 12

Numerical parameters at different layers.

  Parameter         Value

  Application Layer
                    2–50
                    12 bytes
  L                 64 kbytes

  Transport Layer
  W                 32 kbytes
  b                 2 (TCP), 1 (TFRC)
                    8 bytes (UDP), 20 bytes (TCP), 16 bytes (TFRC)

  Network Layer
  Multiplexing      WFQ
                    20 bytes
                    3
                    20 Mbps
                    64 kbytes

  PDCP Layer
                    32 kbytes
                    1 byte
                    10

  RLC Layer
                    3 (UDP), 6 (TCP & TFRC)
                    200 ms
                    40 bytes
                    4 bytes
                    2 bytes

  MAC Layer
  Scheduling        RR, M-LWDF
                    40 bytes
                    0 bytes

  Physical Layer
                    0, 2 (QPSK), 4 (16QAM), 6 (64QAM)
  TTI               1 ms
                    50
                    0.6
                    TTI
                    12
                    12
                    8 dB

  Radio Channel
                    8 Hz
                    15 dB

7. Conclusions

In this work, a detailed analysis of the end-to-end QoS assessment over networks with wireless access has been presented. This paper proposes a new modeling methodology based on QoS models for each protocol layer, providing a set of performance indicators across the protocol stack.

Based on this methodology, a QoS model for streaming services has been developed. This model can be used to estimate the performance at any protocol layer. In addition, the model makes it possible to identify the main factors affecting the quality of service, which is very useful for end-to-end parameter optimization. Finally, the model can also be used to map QoS needs at different layers from application requirements (e.g., to reserve appropriate resources at each layer). The framework applied in this work for streaming can be extended to other services (e.g., VoIP) and radio technologies (e.g., WiMax).

In terms of performance results, it was shown that multiplexing algorithms that take into account both channel state information and QoS indicators (such as M-LWDF) provide the best performance in terms of capacity and fairness. The target BER, the maximum number of RLC retransmissions, and the TCP window size W must be decided upon jointly, making trade-offs between throughput and delay; for example, for the target BER considered, the maximum number of RLC retransmissions should be set to 6 in order to limit the end-to-end delay. With these values, the maximum TCP window that maximizes throughput is W = 32 kB.

In order to validate the proposed QoS model, a real-time emulation platform was developed. Additionally, this emulator makes it possible to experience the end-to-end quality of service and facilitates QoE assessment using appropriate measurement tools.

Declarations

Acknowledgments

This work is partially supported by the Spanish Government under project TEC-2007-67289 and by the Junta de Andalucia under Proyecto de Excelencia P07-TIC-03226.

Authors’ Affiliations

(1)
Department of Communications Engineering, University of Malaga, Malaga, Spain

References

  1. Floyd S, Handley M, Padhye J, Widmer J: TCP friendly rate control (TFRC): protocol specification. RFC 3448, January 2003.
  2. Andrews M, Kumaran K, Ramanan K, Stolyar A, Vijayakumar R, Whiting P: CDMA data QoS scheduling on the forward link with variable channel conditions. Bell Labs Technical Memorandum, 2002.
  3. Gómez G, Sánchez R: End-to-End Quality of Service over Cellular Networks: Data Services Performance Optimization in 2G/3G. John Wiley & Sons, New York, NY, USA; 2005.
  4. Montes H, Gómez G, Cuny R, Paris JF: Deployment of IP multimedia streaming services in third-generation mobile networks. IEEE Wireless Communications 2002, 9(5):84-92.
  5. Zhang Q, Zhu W, Zhang Y-Q: End-to-end QoS for video delivery over wireless Internet. Proceedings of the IEEE 2005, 93(1):123-133.
  6. Ci S, Wang H, Wu D: A theoretical framework for quality-aware cross-layer optimized wireless multimedia communications. Advances in Multimedia 2008, 2008: 10 pages.
  7. Akyildiz IF, Altunbasak Y, Fekri F, Sivakumar R: AdaptNet: an adaptive protocol suite for the next-generation wireless Internet. IEEE Communications Magazine 2004, 42(3):128-136.
  8. Zhang Q, Zhang Y-Q: Cross-layer design for QoS support in multihop wireless networks. Proceedings of the IEEE 2008, 96(1):64-76.
  9. Goldsmith AJ: Wireless Communications. Cambridge University Press, Cambridge, UK; 2005.
  10. Gómez G, Morales-Jiménez D, López-Martínez FJ, Sánchez JJ, Entrambasaguas JT: Radio-interface physical layer. In Long Term Evolution: 3GPP LTE Radio and Cellular Technology. CRC Press, Boca Raton, Fla, USA; 2009.
  11. 3GPP TS 36.201: Long Term Evolution (LTE) physical layer; general description. V8.3.0, March 2009.
  12. Entrambasaguas JT, Aguayo-Torres MC, Gómez G, Paris JF: Multiuser capacity and fairness evaluation of channel/QoS-aware multiplexing algorithms. IEEE Network 2007, 21(3):24-30.
  13. Peisa J, Meyer M: Analytical model for TCP file transfers over UMTS. Proceedings of the International Conference on Third Generation Wireless and Beyond, June 2001, San Francisco, Calif, USA, 42-47.
  14. Gómez G: QoS modeling for end-to-end streaming performance evaluation over wireless access networks, Ph.D. thesis. Departamento de Ingeniería de Comunicaciones, Universidad de Málaga, Málaga, Spain; 2009.
  15. Bormann C, Burmeister C, Degermark M, et al.: Robust Header Compression (ROHC). RFC 3095, July 2001.
  16. Padhye J, Firoiu V, Towsley D, Kurose J: Modeling TCP throughput: a simple model and its empirical validation. Computer Communication Review 1998, 28(4):303-314.
  17. Xu L, Helzer J: Media streaming via TFRC: an analytical study of the impact of TFRC on user-perceived media quality. Proceedings of the 25th IEEE International Conference on Computer Communications (INFOCOM '06), April 2006, Barcelona, Spain.
  18. Gómez G, Poncela-González J, Aguayo-Torres MC, Entrambasaguas JT: A real-time end-to-end testbed for evaluating the performance of multimedia services. Proceedings of the 2nd International Workshop on Future Multimedia Networking (FMN '09), 2009, Lecture Notes in Computer Science 5630:212-217.
  19. Jacobson V, Braden R, Borman D: TCP Extensions for High Performance. RFC 1323, May 1992.
  20. Huynh-Thu Q, Ghanbari M: Scope of validity of PSNR in image/video quality assessment. Electronics Letters 2008, 44(13):800-801.

Copyright

© Gerardo Gómez et al. 2010

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
