
HSUPA Transport Network Congestion Control

Abstract

The introduction of High Speed Uplink Packet Access (HSUPA) greatly improves the achievable uplink bitrate, but it presents new challenges to be solved in the WCDMA radio access network. In the transport network, bandwidth reservation for HSUPA is not efficient, and TCP cannot resolve congestion efficiently because of lower layer retransmissions. This paper proposes an HSUPA transport network flow control algorithm that handles congestion situations efficiently and supports Quality of Service differentiation. Transport network congestion is detected in the Radio Network Controller (RNC). Relying on the standardized control frame, the RNC notifies the Node B about transport network congestion. In case of transport network congestion, the Node B part of the HSUPA flow control instructs the air interface scheduler to reduce the bitrate of the flow to eliminate the congestion. The performance analysis concentrates on transport network limited scenarios. It is shown that TCP cannot provide efficient congestion control. The proposed algorithm achieves high end-user perceived throughput, while maintaining low delay, low loss, and good fairness in the transport network.

1. Introduction

In response to the increased need for higher bitrates and more efficient transmission of packet data over cellular networks, 3GPP Release 5 extended the WCDMA specification with High Speed Downlink Packet Access (HSDPA) [1]. The demand for uplink performance improvement is addressed by introducing the Enhanced Dedicated Channel (E-DCH), often referred to as Enhanced Uplink (EUL) or High Speed Uplink Packet Access (HSUPA), in 3GPP Release 6 [2]. HSDPA and HSUPA together are often called High Speed Packet Access (HSPA). The main architectural novelty of HSPA is that certain parts of the control of radio resources have been moved from the RNC to the Node Bs.

HSUPA is further improved with the possibility of higher-order modulation in Release 7 [3]. The Release 6 and 7 improvements allow Layer 1 peak rates of up to 5.7 Mbps and 11 Mbps in the uplink. New Medium Access Control layers (MAC-e/es) were introduced to support the new features of HSUPA, namely fast Hybrid Automatic Repeat Request (HARQ) with soft combining, a reduced (2 ms) Transmission Time Interval (TTI) length, and fast scheduling.

Although similar features have been introduced for HSDPA and HSUPA, there are several essential differences [2]. In the case of HSDPA, the High Speed Downlink Shared Channel (HS-DSCH) is shared in the time domain among all users, whereas for HSUPA the E-DCH is dedicated to a user. For HSDPA, the transmission power is kept more or less fixed and rate adaptation is used. This is not possible for HSUPA, since the uplink is nonorthogonal; therefore, fast power control is needed for fast link adaptation. Soft handover is not supported by HSDPA, while for HSUPA soft handover is used to decrease the interference from neighboring cells and to obtain macrodiversity gain. Consequently, for HSDPA the shared resources are the transmission power and the code space of the shared channel, but for HSUPA it is the interference headroom.

As with HSDPA [4], the Iub and Iur transport network links can be a bottleneck in the radio access network for HSUPA, since in practice the increased air interface (Uu) capacity does not always come with similarly increased transport network capacity. (Iub is between Node B and RNC; Iur is between Drift RNC and Serving RNC (SRNC).) The cost of transport links is still high in some cases and is not expected to decrease dramatically [5]. The transport network links are expected to be a bottleneck, for example, in the case of E1 or T1 transmission and ADSL uplink transmission, but an uplink transport network bottleneck is unlikely in the case of, for example, E3 transmission or 100 Mbps Ethernet access. (The bitrate available for ATM cells is 1920 kbps in the case of E1, 1536 kbps in the case of T1, and 33920 kbps in the case of E3.) In most networks, it is expected that in a significant percentage of the cases the throughput is limited by the Iub/Iur transport network, especially in the initial deployment phase. As HSPA traffic increases in the network, most operators will expand their transport network to further enhance user experience.

A possible congestion situation over a transport link cannot be resolved efficiently by the Transmission Control Protocol (TCP) because of lower layer retransmissions. It has been identified in 3GPP that an HSUPA flow control can resolve these congestion situations if transport network congestion detection functionality is available. For this purpose, a new control frame and a new Information Element for the Iub/Iur Framing Protocol E-DCH data frame were introduced in [6]. The requirements and principles of HSDPA and HSUPA congestion control are summarized in [7].

Various flow control algorithms have been developed for different networks. The best-known flow control algorithm is that of the TCP protocol, used mainly in IP networks. TCP congestion control has been widely investigated and improved; past works include improvements based on rate and round-trip delay estimation [8, 9]. Many papers have discussed flow control in Asynchronous Transfer Mode (ATM) networks, where the objective was to utilize the bandwidth not used by traffic carried on Constant Bit Rate (CBR) and Variable Bit Rate (VBR) Virtual Circuits (VCs) [10]. These algorithms cannot be directly applied to HSDPA or HSUPA flow control due to differences in the architectures.

In [11–13], the authors addressed HSDPA flow control. It is a common assumption in these papers that the Iub transport network capacity is not limiting; these flow controls are optimized only for efficient use of the air interface. In [14], the authors introduced a transport network overload control algorithm for Best-Effort DCH traffic and showed that already in the case of DCHs this improves transport network utilization. In [15], the authors introduced cross-layer backpressure in the RNC, which allows good transport network utilization when the transport network bottleneck buffer is in the RNC. In [4], an HSDPA flow control algorithm is proposed which addresses not only the efficient usage of the air interface but also congestion situations in the transport network. That algorithm can be used in a more general transport network, because it does not require the transport network bottleneck buffer to be in the RNC. In [16], the authors proposed an extension of a related HSDPA flow control algorithm which provides fairness-optimal initial rates for HSDPA flows sharing the same transport network bottleneck.

In [17], the authors made an HSUPA performance analysis under a congested transport assumption, but without using any transport network congestion control solution. In [18], the authors highlight the importance of enabling the HSUPA Iub congestion control algorithm in combination with RLC AM (Radio Link Control Acknowledged Mode); however, the details of the congestion control algorithm are not revealed.

We propose an HSUPA flow control algorithm that supports scenarios where the transport network is the limiting factor. This algorithm uses the flow control framework standardized by 3GPP [6]. The present paper extends the work in [19] with a description of the transport protocols and the transport overhead. We provide a more detailed overview of the congestion control and retransmission mechanisms involved in HSUPA. The algorithm description is extended with a detailed Uu scheduler description and considerations about the algorithm parameter settings. We also provide an additional illustrative example of the algorithm in operation.

The rest of the paper is structured as follows. Section 2 gives a system overview. Section 3 describes the proposed HSUPA flow control algorithm. The performance of this flow control algorithm is evaluated in Section 4. Finally, Section 5 concludes the paper.

2. System Overview

The nodes and protocol layers involved in the HSUPA flow control (FC) are depicted in Figure 1 [20]. The figure also shows the location of the FC related functionalities in boxes with dashed lines. The task of the FC is to regulate the transfer of MAC-es Protocol Data Units (PDUs) over the Iub/Iur Transport Network (TN) toward the SRNC, that is, to perform TN congestion control. In the rest of the article, flow denotes this MAC-es PDU flow. Several of these flows may share the same air interface or TN bottleneck. Note that the regulation provided by the FC is needed only when the TN limits the performance; when the TN is not limiting, the FC has no effect on the flows. Figure 2 depicts the protocol layers which perform congestion control and/or retransmission. The behavior of the different layers is detailed in this section.

Figure 1: HSUPA protocol stack and flow control architecture.

Figure 2: Protocol layers performing congestion control and/or retransmission.

When HSUPA is carrying moderate-speed Quality of Service (QoS) sensitive traffic, QoS can be guaranteed by TN bandwidth reservation by means of TN admission control, and FC is not used. For best-effort (BE) traffic, bandwidth reservation is not efficient and FC is used instead. When QoS sensitive and best-effort traffic coexist in a system, the QoS sensitive traffic is usually prioritized over the BE traffic, and the capacity left unused by the QoS sensitive traffic can be utilized by the BE traffic.

A User Equipment (UE) can be in Soft Handover (SHO), which means that its transmission is received by more than one cell. One of these cells, usually the one with the best radio connection, is called the serving cell, and the rest are called nonserving cells. When a UE is in SHO, it has as many flows over the TN as the number of Node Bs it is connected to.

The task of the TN is to transport Iub/Iur Framing Protocol E-DCH data frames (DFs) and control frames (CFs) between the SRNC and the Node B. The TN links and buffers are usually shared among the flows of the same Node B. The flows of several Node Bs may share part of the TN if there is aggregation in the TN.

AAL2/ATM (ATM Adaptation Layer 2) or UDP/IP (User Datagram Protocol/Internet Protocol) is used as the transport protocol; in Figure 1 the UDP/IP/Ethernet solution is depicted as an example. In the case of the AAL2/ATM transport solution, the DFs are segmented into AAL2 Common Part Sublayer (CPS) PDUs. These CPS PDUs are then fit into one or two ATM cells. There is no early packet discard for AAL2 queues; consequently, the end of a DF can be lost while the beginning of the DF is still using TN capacity. We call these DFs destroyed frames. This behavior can be disadvantageous in the case of system overload. A detailed description of AAL2/ATM can be found in [21]. In the case of UDP/IP, the DFs to be transmitted can be larger than the maximum transmission unit (MTU) of the system, especially at high throughput. In this case, IP fragmentation is needed and DFs might be destroyed as in the case of AAL2/ATM. In most cases, however, the size of the DF is smaller than the MTU, and then a DF is either completely lost or completely transmitted. In the case of UDP/IP, the most commonly used Layer 2 protocol is Ethernet, but other L2 protocols are possible, for example, the Multi-Link Point-to-Point Protocol (MLPPP).

We define the TN overhead as the number of octets that must be transmitted over the TN divided by the number of transmitted user-level IP octets. It depends on the size of the DF and on the TN protocols used. The DF size mainly depends on the achieved user throughput and on the TTI. (This is because whenever a MAC-es PDU is received from the air interface, it is put into a DF and transmitted over the TN. For the 2 ms TTI, the MAC-es PDUs from one or more TTIs may be bundled into one DF before being transferred [6]. With bundling of up to 5 PDUs, the TN overhead in the case of the 2 ms TTI can be decreased very close to the overhead in the case of the 10 ms TTI. In Figure 3, no bundling was assumed for the 2 ms TTI.) Figure 3 depicts the overhead in the case of AAL2/ATM and UDP/IP/Ethernet TN for both the 2 ms and the 10 ms TTI. Apart from the transport protocol overhead, the overhead value also contains the Iub/Iur Framing Protocol, MAC-es, and RLC overhead. For the UDP/IP/Ethernet solution, the overhead depends much more on the achieved user throughput than in the case of AAL2/ATM. This is because for AAL2/ATM most headers are on the segmented PDUs, resulting in a fixed percentage, while for UDP/IP/Ethernet the headers are large but apply to the data frame only once. This also explains the very large overhead in the case of small throughput with UDP/IP/Ethernet transport. If MLPPP is used as L2 for UDP/IP, then IP header compression becomes possible and the overhead at small throughput is significantly reduced.

Figure 3: Transport network overhead as a function of the achieved user throughput.

The TN bottleneck and the associated bottleneck buffer can be in the network at a point of aggregation and also in the nodes on the interface cards. (We consider the interface cards in the nodes to be part of the TN.) The TN may support Transport Network Layer (TNL) QoS differentiation, which allows different flow controlled flows to receive different service over the TN based on, for example, subscription or service. Different flows of the same Node B may experience a bottleneck at different parts of the network, not only due to different TNL QoS levels but also due to, for example, some flows being transmitted over Iur or over parallel Iub links. Additionally, the flows must be able to efficiently use the changing TN capacity left over by high priority flows to ensure efficient utilization of the TN. The FC must be capable of regulating the flows in this changing environment and must maintain high end-user throughput and fairness while keeping the end-to-end delay low for delay sensitive applications (e.g., gaming over best-effort HSUPA).

The HSUPA air interface scheduler (Uu scheduler) operates by sending scheduling grants to the UEs and receiving scheduling requests from them [2]. Only the scheduling framework is standardized; the scheduling algorithm itself is not. There are two types of scheduling grants, the Absolute Grant (AG) and the Relative Grant (RG). AGs can be sent only by the serving cell and are transmitted over the E-DCH Absolute Grant Channel (E-AGCH), which is a resource shared among all users of the cell. The AG defines how many bits can be transmitted every TTI and thus a maximum limit on the data rate. The AG is valid until a new scheduling grant is received. The RG can modify this rate up or down in the serving cell, but only down in a nonserving cell. The UE indicates by a flag called the Happy Bit whether or not it would benefit from a higher rate grant.

The MAC-e/es protocol layers in the UE are responsible for HARQ and for transport format selection according to the scheduling grants. The created MAC-e PDU is transmitted over the air interface to the Node B. The MAC-e protocol layer in the Node B demultiplexes the MAC-e PDU into MAC-es PDUs, which are transmitted over the TN to the SRNC. The MAC-es protocol layer in the SRNC handles the effects of SHO by reordering, duplicate removal, and macro combining to ensure in-sequence delivery for the Radio Link Control (RLC) protocol layer.

While a connected UE may have several (MAC-es) flows multiplexed into one MAC-e flow, only one AG is assigned to the UE. This makes the congestion control challenging when some flows belonging to the same UE experience TN congestion while others do not (e.g., when the (MAC-es) flows have different TNL QoS: an admission controlled flow and a nonadmission controlled flow). In this case, as a simplification, the whole MAC-e flow can be treated as congested.

RLC Acknowledged Mode (AM), which is a Selective Repeat Automatic Repeat Request protocol, is used between the UE and the SRNC [22]. RLC AM does not include congestion control functionality, because it assumes that RLC PDUs are transmitted by the MAC-d layer according to the available capacity. RLC AM was originally included in order to retransmit lost Uu data frames in the case of the traditional DCH, where frame loss over the air is on the order of 1–10%. For E-DCH, the frame loss on the air interface is significantly reduced by the HARQ retransmissions, but RLC AM was kept to allow seamless channel switching between the traditional DCH and E-DCH. The place of RLC AM and HARQ is depicted in Figure 2.

The RLC status messages, which are sent regularly, trigger retransmission of all missing PDUs. This may result in unnecessary retransmissions, because new status messages are sent before the retransmitted PDUs arrive, especially in the case of a long round-trip time. Several unsuccessful retransmissions trigger an RLC reset, and the whole RLC window (at most 80 KByte) is discarded. The end-user IP packets never get lost in the TN (unless the congestion causes an RLC reset), so TCP cannot detect TN congestion based on duplicate acknowledgments. TCP slow start rapidly increases the TCP window size to its maximum, and it is normally kept at the maximum during the whole transmission (unless a bottleneck other than the TN is experienced) because of the lack of IP packet loss and a large enough RLC Service Data Unit (SDU) buffer. Too many retransmissions of the same PDU usually cause a TCP timeout, which degrades the TCP efficiency significantly. Consequently, TCP cannot control TN congestion efficiently, and a system specific congestion control solution is needed.

Frame loss and the resulting RLC retransmissions should be minimized, because they significantly increase the delay variation experienced by the end-user. The TN delay must be kept low both because of delay sensitive applications over BE HSUPA and to minimize the control loop delay for FC and RLC. The delay target for MAC-es PDUs over the TN is typically on the order of tens of milliseconds. This requirement is a compromise between performance and achievable utilization.

FC related Iub/Iur Framing Protocol (FP) data and control frames are standardized in [6] and define the HSUPA FC framework. The requirements and principles of congestion control are summarized in [7]. The FC algorithm itself is not standardized; each vendor can implement its own solution. The Iub/Iur Framing Protocol E-DCH data frame (DF) contains the user data, the Frame Sequence Number (FSN), the Connection Frame Number (CFN), and the Subframe Number. The CFN and Subframe Number are used for reordering, but can also be used to calculate a Delay Reference Time (DRT), which defines when the DF was sent from the Node B. The FSN and the DRT can be used for TN congestion detection. Apart from congestion detection based on DF fields, transport protocol specific congestion detection techniques can also be used. The TNL Congestion Indication Control Frame (TCI CF) is used for reporting the congestion detected in the SRNC. The TCI contains a congestion status field, which can indicate no congestion, congestion due to delay build-up, or congestion due to frame loss.

While the purposes of HSDPA flow control [4] and HSUPA flow control are similar, there are significant differences. Firstly, for HSUPA only the TN has to be regulated, while for HSDPA the Uu scheduler queues in the Node B (called MAC-hs Priority Queues [4]) also have to be regulated. This also means that the HSDPA FC must deal with both Uu and TN bottlenecks, whereas in the case of the HSUPA FC the Uu bottleneck is handled completely by the Uu scheduler. Secondly, an HSUPA user can be in SHO, while an HSDPA user cannot. This means that for the same UE there can be several flows (one serving and zero or more nonserving) to be controlled.

3. Flow Control Algorithm Description

In this section, we introduce a rate-based, per-flow FC solution. A rate-based solution is chosen because it aligns well with the standardized 3GPP framework. A per-flow solution supports different TN bottlenecks for the flows of the same Node B and TNL QoS differentiation among the flows. An aggregated solution would require detailed information about the TN bottleneck(s) and the QoS solution, and it would also have to support aggregated TN connections, where the flows of several Node Bs can experience a common bottleneck. While such a solution is not impossible, its complexity would be too high compared to the achievable gains. The FC algorithm architecture is depicted in Figure 4.

Figure 4: Flow control architecture.

The FC is designed to provide fair throughput sharing among the flows sharing the same TN bottleneck when the TN limits the throughput. The behavior of the flows is regulated by the Uu scheduler until TN congestion is detected: as long as the TN is not a bottleneck, it is the task of the Uu scheduler to utilize the air interface as much as possible and to provide fairness among the flows. The Uu scheduler increases the granted bitrate at a reasonable speed to avoid large interference peaks; this also ensures that sudden overload of the TN is avoided.

When TN congestion is detected, the FC dominates the behavior. During this time, the flows are regulated according to an algorithm that conforms to the additive increase, multiplicative decrease (AIMD) property. In [23], it is shown that AIMD guarantees convergence to fairness: all flows converge to an equal share of the resources in a steady state where no flows join or leave. Multiplication by a coefficient provides the multiplicative decrease, and a constant increase rate after a reduction provides the additive increase property. The AIMD property is met only by the serving cell behavior; however, a MAC-e PDU is normally received in the serving cell with a higher probability, so the end-user fairness is dominated by the serving cell behavior.
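
Written out, the rate update conforming to this property is (notation ours: $c$ is the reduction coefficient, $0 < c < 1$, and $a$ is the constant increase rate):

```latex
r(t+\Delta t) =
\begin{cases}
  c \cdot r(t), & \text{if congestion is indicated},\\
  r(t) + a \cdot \Delta t, & \text{otherwise}.
\end{cases}
```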

The algorithm is detailed in the next subsections and is illustrated by an example in Figure 5; the detailed description of this example can be found in Section 4.

Figure 5: Flow control behavior in the case of 1 user and 1 Mbps TN.

3.1. TN Congestion Detection in SRNC

The TN congestion detection part of the algorithm is performed whenever a DF arrives at the SRNC. Two different congestion detection methods are used at the same time, namely the following.

(i) FSN gap detection. The 4-bit FSN in the DF can be used to detect lost DFs.

(ii) Dynamic Delay Detection (DDD). The Node B DRT is compared to a similar reference counter in the SRNC when the DF is received. The difference between the two counters increases when the TN bottleneck buffer builds up. Congestion is detected when this difference grows too much above the minimum observed difference.

When performing DDD, the severity of the congestion is differentiated. In the case of a moderate dynamic delay increase (above the soft limit, e.g., 40 ms) it is soft congestion, and in the case of a large increase (above the hard limit, e.g., 60 ms) it is hard congestion. A detected FSN gap is also reported as hard congestion.

The dynamic delay detection limits (the soft and hard limits) have to be configured taking into account the frame delay variation caused by higher priority traffic. The limits have to be set higher than the noncongestion related frame delay variation; otherwise congestion will be detected even when there is no congestion in the TN, and these false congestion detections will result in performance degradation. Similarly, noncongestion related DF loss in the TN can result in false congestion detection; therefore, it should be minimized.

The detected congestion and its severity are reported to the Node B by a TCI CF, if no TCI CF has been sent for a given minimum time. The purpose of this minimum time between TCIs is to avoid unnecessarily reacting to the same congestion situation twice; its value is based on the TN dynamics (e.g., propagation delay and TN buffer length).
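
A minimal sketch of this detection and reporting logic follows. The 40/60 ms limits are the example values from the text; the class name, field names, and the minimum TCI interval value are illustrative assumptions.

```python
SOFT_LIMIT_MS = 40.0   # soft congestion: moderate dynamic delay increase
HARD_LIMIT_MS = 60.0   # hard congestion: large increase (or any FSN gap)

class TnCongestionDetector:
    def __init__(self, min_tci_interval_ms: float = 100.0):
        self.min_diff_ms = None        # smallest DRT-vs-local difference seen
        self.expected_fsn = None       # 4-bit frame sequence number, modulo 16
        self.last_tci_ms = None
        self.min_tci_interval_ms = min_tci_interval_ms

    def on_data_frame(self, fsn, drt_ms, now_ms):
        """Return 'soft' or 'hard' if a TCI CF should be sent, else None."""
        severity = None

        # (i) FSN gap detection: a lost DF shows up as a gap in the 4-bit FSN.
        if self.expected_fsn is not None and fsn != self.expected_fsn:
            severity = "hard"
        self.expected_fsn = (fsn + 1) % 16

        # (ii) Dynamic Delay Detection: the difference between the local
        # reference counter and the Node B DRT grows as the TN bottleneck
        # buffer builds up; compare it with the minimum observed difference.
        diff = now_ms - drt_ms
        if self.min_diff_ms is None or diff < self.min_diff_ms:
            self.min_diff_ms = diff
        dynamic_delay = diff - self.min_diff_ms
        if dynamic_delay > HARD_LIMIT_MS:
            severity = "hard"
        elif dynamic_delay > SOFT_LIMIT_MS and severity is None:
            severity = "soft"

        # Suppress TCIs sent too close together, so the Node B does not
        # react to the same congestion situation twice.
        if severity is None:
            return None
        if (self.last_tci_ms is not None
                and now_ms - self.last_tci_ms < self.min_tci_interval_ms):
            return None
        self.last_tci_ms = now_ms
        return severity
```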

3.2. Flow Control in Node B

Whenever a TCI is received by the Node B, it triggers a congestion action by the FC entity. Depending on the severity of the congestion, a reduce request with a reduction coefficient is issued; different coefficients are applied in the case of soft and hard congestion. A third coefficient, giving a stronger reduction, can be used for the first TCI received for a flow. The motivation for this is that while there was no TNL congestion at all, the Uu scheduler increased the granted bitrate at a higher speed; consequently, such UEs can potentially overload the TN more than UEs already limited by the effect of the flow control. The reductions according to the different coefficients can also be seen in Figure 5.

Depending on whether the flow belongs to a serving cell or a nonserving cell, the rate reduce request is issued to the Uu scheduler or to the frame dropping functionality, respectively.

3.3. Congestion Action in the Serving Cell

Until the first rate reduce request is received, the Uu scheduler behavior is not affected by FC at all. Based on air interface conditions, hardware resources, and Happy Bit information, the Uu scheduler determines the granted bitrate represented by the AG.

Upon receiving a rate reduce request, the scheduler decreases the granted bitrate by sending a new AG according to the received coefficient. Additionally, once a rate reduction request has been issued for a flow, the scheduler does not increase the absolute grant of that flow by more than a predefined ramp-up rate (e.g., 20–200 kbps/s). The value of this rate is determined based on the typical TN bitrate and the typical number of parallel flows, and it affects the reaction speed and the stability of the algorithm.

The allowed bitrate, set according to the reduction coefficients and the ramp-up rate, is maintained in the Uu scheduler. The bitrate represented by the sent AG must be lower than this allowed bitrate. Note that only a limited set of AG values can be signaled; consequently, the reduction in the allowed bitrate and the reduction in the AG might differ, according to the granularity of the possible AG values. The increase and reduction of the allowed bitrate, and the AG determined by the value of the allowed bitrate, are illustrated with an example in Figure 5.

The Uu scheduler used in the studied system does not send any RGs in the serving cell.
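
The serving cell side of Sections 3.2 and 3.3 can be sketched as follows. The coefficient values, ramp-up rate, and AG table below are assumptions for illustration; only the structure (multiplicative decrease, capped additive increase, AG granularity) follows the text.

```python
# E-AGCH can signal only a limited set of grant values (AG granularity).
AG_TABLE_KBPS = [16, 32, 64, 128, 256, 512, 768, 1024, 1440]  # assumed table

SOFT_COEFF = 0.85        # reduction on soft congestion (assumed value)
HARD_COEFF = 0.60        # reduction on hard congestion (assumed value)
FIRST_COEFF = 0.50       # stronger reduction on a flow's first TCI (assumed)
RAMP_KBPS_PER_S = 100.0  # additive increase limit after the first reduction

class ServingCellFlow:
    def __init__(self, scheduler_grant_kbps: float):
        self.allowed_kbps = scheduler_grant_kbps
        self.reduced_once = False  # before the first TCI the FC has no effect

    def on_reduce_request(self, severity: str) -> None:
        if not self.reduced_once:
            coeff, self.reduced_once = FIRST_COEFF, True
        else:
            coeff = HARD_COEFF if severity == "hard" else SOFT_COEFF
        self.allowed_kbps *= coeff        # multiplicative decrease

    def next_ag(self, dt_s: float, scheduler_target_kbps: float) -> int:
        if self.reduced_once:
            # Additive increase: at most RAMP_KBPS_PER_S per second, and
            # never above what the Uu scheduler would grant on its own.
            self.allowed_kbps = min(scheduler_target_kbps,
                                    self.allowed_kbps + RAMP_KBPS_PER_S * dt_s)
        else:
            self.allowed_kbps = scheduler_target_kbps
        # The AG sent on the E-AGCH must stay below the allowed bitrate;
        # the coarse AG table can make the effective reduction larger.
        fitting = [g for g in AG_TABLE_KBPS if g <= self.allowed_kbps]
        return fitting[-1] if fitting else AG_TABLE_KBPS[0]
```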

3.4. Congestion Action in Nonserving Cell

A TCI received in the nonserving cell does not trigger rate reduction by RG, because a MAC-e PDU is received in the best cell (usually the serving cell) with a higher probability. Consequently, if we reduced the bitrate due to TN limitations in the nonserving cell, we might reduce the bitrate of the end-user unnecessarily. However, congestion action still needs to be taken; thus a fraction of the received MAC-e PDUs is dropped. If these PDUs are not received in the serving cell either, RLC AM retransmits the missing PDUs.

A forwarding coefficient determines the probability that a received MAC-e PDU is forwarded. It is 1 at initialization, and each received reduce request decreases it by multiplication with the coefficient received in the request. Afterwards, the forwarding coefficient is gradually increased back to 1, for example, by adding a fixed step to it every second. Note that this behavior does not conform to AIMD, but a MAC-e PDU is normally received in the serving cell with a higher probability, so the end-user fairness is dominated by the serving cell behavior.
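
A corresponding sketch of the nonserving cell frame dropping; the recovery step value and the names are assumptions, the text only says the coefficient is gradually increased back to 1.

```python
import random

RECOVERY_PER_S = 0.05  # assumed additive recovery rate toward full forwarding

class NonServingCellFlow:
    def __init__(self):
        self.forward_prob = 1.0          # forwarding coefficient, 1 at start

    def on_reduce_request(self, coeff: float) -> None:
        self.forward_prob *= coeff       # multiply by the received coefficient

    def on_tick(self, dt_s: float) -> None:
        self.forward_prob = min(1.0, self.forward_prob + RECOVERY_PER_S * dt_s)

    def should_forward_mac_e_pdu(self) -> bool:
        # Dropped PDUs are recovered by RLC AM retransmission unless the
        # serving cell received them anyway.
        return random.random() < self.forward_prob
```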

4. Performance Analysis

The FC and the Uu scheduler algorithms introduced in Section 3 were implemented in a WCDMA/HSPA system simulator. (Uu scheduling is more complex than the behavior described in Section 3. We used a Proportional Fair scheduler in our simulations; see more details in [24, Section 7.2.2]. However, for the simulations described in this section, the flows are TNL limited in most of the cases and the scheduler behavior is dominated by the algorithm described in Section 3.3.) It contains the relevant HSUPA related protocol functions, namely TCP/IP, RLC, MAC-d, MAC-e/es, and the Iub/Iur Framing Protocol for E-DCH. (The maximum RLC window size is set to 80 KBytes; the maximum TCP window size is 256 KBytes. NewReno TCP is used.) The AAL2/ATM TN is modeled as a link with a buffer and a fixed propagation delay. (Though AAL2/ATM TN was chosen for the analysis, the FC performs similarly with UDP/IP TN.) The TN buffer is 200 ATM cells long unless otherwise mentioned. The multicell radio environment consists of standard models for distance attenuation, shadow fading, and multipath fading, based on the 3GPP typical urban channel model, see [25]. The simulator supports the 10 ms TTI length for E-DCH, and the user equipment was an E-DCH Category 3 terminal [26], which supports approximately 1.44 Mbps peak rate on L1. The radio network used by the simulator consists of an RNC and a Node B with 3 cells. The aggregate maximum peak rate of the 3 cells is 4.32 Mbps (3 × 1.44 Mbps).

Figure 5 shows an example of the flow control behavior in the case of one user and a 1 Mbps TN bottleneck. At the beginning, the Uu scheduler ramps up the granted bitrate. Soon a soft congestion is detected in the RNC, and the Node B is notified about it using a TCI. As it is the first congestion detected for the user, it results in a stronger reduction. From this point on, the allowed bitrate limits the possible absolute grant. As the allowed bitrate increases, an AG with a higher bitrate can be sent to the UE; however, this AG results in soft congestion after a while. After the soft congestion TCI is received, the Uu scheduler decreases the AG again to a lower level. This reduction is smaller; the intention is to reduce the AG to the soft congestion fraction of its current value. Note that the actual reduction is larger due to the granularity of the AG table. Afterwards, this behavior repeats. It can be seen that after the first TCI is received, the Uu scheduler behavior is regulated by the flow control, that is, by the allowed bitrate.

To illustrate the need for a system specific congestion control, one uploading user is simulated while the TN is limiting and the TN capacity is varying. Figure 6 shows the windowed average IP level throughput of the user with and without FC. The usage of FC provides high IP level throughput and reacts to the TN capacity changes very fast and accurately. (The protocol overhead depends on the actual user bitrate, see Section 2. The maximum IP throughput in this figure was calculated assuming 1.3 as the overhead factor.) When relying only on TCP (i.e., no TN congestion control, see Figure 2), the performance is seriously degraded. In the beginning, the TCP throughput increases until the TN buffer becomes filled. Then RLC PDUs start being lost and retransmitted. Retransmission further increases the load and the PDU loss ratio, and thus the amount of retransmission. During the simulation, the TN loss ratio is 20%, and 71% of all sent RLC PDUs are retransmissions, which results in much lower throughput. However, with several retransmissions the IP packets still reach the RNC, so there is no IP packet loss or gap. Thus the congestion is not visible to TCP until an RLC reset and a consequent TCP timeout occur at 60 seconds.

Figure 6: Average IP level throughput when TN capacity is varying.

In the rest of the section, we evaluate the performance of the proposed FC. In the investigated cases, the TN capacity was 2 or 4 Mbps, and there was no DCH or HSDPA traffic. A traffic model which can load the TN and is simple enough to evaluate the system behavior in detail was implemented. This model has three parameters: the number of users attached to the Node B, the size of the object uploaded by the users, and the mean reading time, that is, the gap between two consecutive uploads of the same user. The users are uniformly distributed among the cells.

We use the following performance measures to investigate the performance and potential protocol problems: the average total IP level throughput, the average E-DCH Iub/Iur data frame delay, and the average of Jain's fairness index [23] of the IP level throughput, calculated over short fixed-length intervals. Jain's fairness index is computed as

$$f = \frac{\left( \sum_{i=1}^{n} x_i \right)^2}{n \sum_{i=1}^{n} x_i^2}, \qquad (1)$$

where $x_i$ is the average IP level throughput of user $i$, $i = 1, \ldots, n$. The simulations were run long enough to evaluate these measures. We selected the simulation scenarios to represent typical first deployments well, and we chose the length of the simulations to be long enough to illustrate the typical performance of the algorithm. Note that due to the protocol overheads, the maximum achievable IP level throughput is about 3 Mbps in the case of 4 Mbps TN capacity.
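
For reference, (1) transcribed directly into code; the index is 1.0 for perfectly equal throughputs and approaches 1/n when one user dominates.

```python
def jain_index(throughputs):
    # Jain's fairness index of (1): (sum x_i)^2 / (n * sum x_i^2).
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

print(jain_index([1.0, 1.0, 1.0]))   # 1.0: perfectly fair sharing
print(jain_index([3.0, 1.0, 0.5]))   # < 1: unequal sharing
```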

Figure 7 compares the average IP throughput as a function of the number of users over the TN link, using different TN buffer sizes, that is, 500 and 200 ATM cells (solid and dashed lines), and different TN capacities, that is, 2 and 4 Mbps. Table 1 shows the buffer sizes in ms for the different TN capacities. The introduced FC uses the TN bottleneck efficiently, and the throughput hardly depends on the TN buffer size; only in the case of 4 Mbps TN capacity is there a small throughput difference between the buffer sizes. (In the case of 4 Mbps TN capacity and 1 or 2 users, the throughput is limited by the Uu peak rate.) The difference arises because at 4 Mbps the 200 ATM cells long TN buffer is only about 21 ms long, which is less than the soft congestion limit; hence only frame loss based congestion detection, resulting in hard congestion, was possible. In the case of the other TN capacity, DDD is also possible to use.
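
The buffer lengths in ms follow from simple arithmetic: the time length of a buffer is its size in bits divided by the link rate. For the 200-cell buffer at 4 Mbps, for example:

```latex
\frac{200 \ \text{cells} \times 53 \ \text{octets} \times 8 \ \text{bits/octet}}{4\,\text{Mbps}} \approx 21\ \text{ms}.
```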

Table 1: TN buffer size for different TN capacities.

Figure 7: Average IP level throughput for different TN capacities.

Figure 8 shows the average DF delay for different TN capacities and TN buffer sizes. The more users are connected to the Node B, the higher the measured average delay; however, these values are far below the delay target. Note that the TN buffer length (Table 1) is the upper limit of the measured delay. The MAC-es PDU loss ratio was also investigated; it remained very low even when the TN buffer was small and only frame loss based congestion detection was possible to use. A low MAC-es PDU loss ratio is especially important in the case of the AAL2/ATM transport solution, to avoid not only frequent RLC retransmissions but also a high percentage of destroyed data frames.

Figure 8: Average DF delay on TN for different TN capacities.

Figures 9 and 10 show the average fairness index with 2 Mbps and 4 Mbps TN capacity for different TN buffer sizes. In the case of the 500 ATM cells long buffer, the fairness provided by the FC is high in both cases; less fairness was measured with the 200 ATM cells long TN buffer in both cases. The dominant TN congestion type changes, and that causes the fairness degradation: in the case of the 500 ATM cells long buffer the dominant congestion type is dynamic delay, while for the 200 ATM cells long buffer it is FSN gap detection, because that buffer is small time-wise (see Table 1). FSN gap detection detects data frames lost in the tail-drop TN buffer. We concluded that the unfairness was caused by the tail-drop based congestion detection [27].

Figure 9: Jain's fairness index in the case of 2 Mbps TN.

Figure 10: Jain's fairness index in the case of 4 Mbps TN.

5. Conclusions

With ever-increasing air interface throughput, the efficient utilization of the often limiting transport network has become more important. To meet this demand, a per-flow HSUPA transport network flow control algorithm has been proposed. The need for transport network congestion control was shown, and transport network congestion detection and avoidance techniques were described. The introduced algorithm supports quality of service differentiation among HSUPA flows as well as different transport network bottlenecks for the flows of the same Node B. It was shown by simulations that the proposed algorithm can maintain high transport network utilization and good fairness among the flows while also keeping the delay and loss in the transport network low. The solution was compared to a scenario relying only on TCP congestion control, and it was shown that the lack of HSUPA flow control causes serious performance degradation in the system when the transport network capacity limits the throughput.

References

  1. Parkvall S, Englund E, Malm P, Hedberg T, Persson M, Peisa J: WCDMA evolved—high-speed packet-data services. Ericsson Review 2003, 80(2):56-65.


  2. Parkvall S, Peisa J, Torsner J, Sågfors M, Malm P: WCDMA enhanced uplink—principles and basic operation. Proceedings of the 61st IEEE Vehicular Technology Conference (VTC '05), May 2005, Stockholm, Sweden 3: 1411-1415.


  3. 3GPP TS 25.213 V7.4.0 : Spreading and modulation (FDD) (Release 7). 2007.

  4. Nádas S, Rácz S, Nagy Z, Molnár S: Providing congestion control in the Iub transport network for HSDPA. Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM '07), November 2007, Washington, DC, USA 5293-5297.


  5. Garcia A-B, Alvarez-Campana M, Vazquez E, Guenon G, Berrocal J: ATM transport between UMTS base stations and controllers: supporting topology and dimensioning decisions. Proceedings of the IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC '04), September 2004, Barcelona, Spain 3: 2176-2180.


  6. 3GPP TS 25.427 V6.8.0 : UTRAN Iub/Iur interface user plane protocol for DCH data streams (Release 6). 2006.

  7. 3GPP TR 25.902 V7.1.0 : Iub/Iur Congestion Control (Release 7). 2007.

  8. Brakmo L, O'Malley S, Peterson L: TCP vegas: new techniques for congestion detection and avoidance. Proceedings of the ACM Conference on Communications Architectures, Protocols and Applications (SIGCOMM '94), August-September 1994, London, UK 24-35.


  9. Wei DX, Jin C, Low SH, Hegde S: FAST TCP: motivation, architecture, algorithms, performance. IEEE/ACM Transactions on Networking 2006, 14(6):1246-1259.


  10. Jain R: Congestion control and traffic management in ATM networks: recent advances and a survey. Computer Networks and ISDN Systems 1996, 28(13):1723-1738. doi:10.1016/0169-7552(96)00012-8


  11. Legg PJ: Optimised Iub flow control for UMTS HSDPA. Proceedings of the 61st IEEE Vehicular Technology Conference (VTC '05), May 2005, Stockholm, Sweden 4: 2389-2393.


  12. Necker MC, Weber A: Impact of Iub flow control on HSDPA system performance. Proceedings of the 16th IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC '05), September 2005, Berlin, Germany 3: 1703-1707.


  13. Necker MC, Weber A: Parameter selection for HSDPA lub flow control. Proceedings of the 2nd International Symposium on Wireless Communication Systems (ISWCS '05), September 2005, Siena, Italy 233-237.


  14. Sågfors M, Virkki V, Kuningas T: Overload control of best-effort traffic in the UTRAN transport network. Proceedings of the 64th IEEE Vehicular Technology Conference (VTC '06), September 2006, Montreal, Canada 1: 456-460.


  15. Bajzik L, Korossy L, Veijalainen K, Vulkan C: Cross-layer backpressure to improve HSDPA performance. Proceedings of the IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC '06), September 2006, Helsinki, Finland 1-5.


  16. Pályi PL, Rácz S, Nádas S: Fairness-optimal initial shaping rate for HSDPA transport network congestion control. Proceedings of the 11th IEEE Singapore International Conference on Communication Systems (ICCS '08), November 2008, Guangzhou, China 1415-1421.


  17. Peisa J, Ekström H, Hannu H, Parkvall S: End-to-end performance of WCDMA enhanced uplink. Proceedings of the 61st IEEE Vehicular Technology Conference (VTC '05), May 2005, Stockholm, Sweden 3: 1432-1436.


  18. Zaki Y, Weerawardane T, Li X, Timm-Giel A, Malafrontel GC, Görg C: Effect of the RLC and TNL congestion control on the HSUPA network performance. Proceedings of the 3rd Mosharaka International Conference on Communications, Computers and Applications (MIC-CCA '08), August 2008, Amman, Jordan 1-7.


  19. Nádas S, Nagy Z, Rácz S: HSUPA transport network congestion control. Proceedings of the IEEE GLOBECOM Workshops (GLOBECOM '08), November-December 2008, New Orleans, La, USA 1-6.


  20. 3GPP TS 25.309 V6.6.0 : FDD Enhanced Uplink; Overall description; Stage 2 (Release 6). 2006.

  21. Karlander B, Nádas S, Rácz S, Reinius J: AAL2 switching in the WCDMA radio access network. Ericsson Review 2002, 79(3):114-123.


  22. 3GPP TS 25.322 V6.9.0 : Radio Link Control (RLC) protocol specification (Release 6). 2006.

  23. Chiu D-M, Jain R: Analysis of the increase and decrease algorithms for congestion avoidance in computer networks. Computer Networks and ISDN Systems 1989, 17(1):1-14. doi:10.1016/0169-7552(89)90019-6


  24. Dahlman E, Parkvall S, Skold J, Beming P: 3G Evolution: HSPA and LTE for Mobile Broadband. 2nd edition. Academic Press, Oxford, UK; 2008.


  25. 3GPP TS 25.943 V6.0.0 : Deployment aspects (Release 6). 2004.

  26. 3GPP TS 25.306 V6.12.0 : UE Radio Access capabilities (Release 6). 2007.

  27. Floyd S, Jacobson V: On traffic phase effects in packet-switched gateways. Internetworking: Research and Experience 1992, 3(3):115-156.



Acknowledgment

The authors thank the colleagues at Ericsson for their support during the work, especially Zoltán Nagy, Peter Lundh, Pál L. Pályi, and János Farkas.

Author information


Correspondence to Szilveszter Nádas.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Nádas, S., Rácz, S. HSUPA Transport Network Congestion Control. J Wireless Com Network 2009, 924096 (2009). https://doi.org/10.1155/2009/924096
