
QoE optimization of video multicast with heterogeneous channels and playback requirements

Abstract

We propose an application-layer forward error correction (AL-FEC) code rate allocation scheme to maximize the quality of experience (QoE) of a video multicast. The allocation dynamically assigns multicast clients to the quality layers of a scalable video bitstream, based on their heterogeneous channel qualities and video playback capabilities. Normalized mean opinion score (NMOS) is employed to value the client’s quality of experience across various possible adaptations of a multilayer video, coded using mixed spatial-temporal-amplitude scalability. The scheme provides assurance of reception of the video layers using fountain coding and effectively allocates coding rates across the layers to maximize a multicast utility measure. An advantageous feature of the proposed scheme is that the complexity of the optimization is independent of the number of clients. Additionally, a convex formulation is proposed that attains close to the best performance and offers a reliable alternative when further reduction in computational complexity is desired. The optimization is extended to perform suppression of QoE fluctuations for clients with marginal channel qualities. The scheme offers a means to trade off service utility for the entire multicast group and clients with the worst channels. According to the simulation results, the proposed optimization framework is robust against source rate variations and limited amount of client feedback.

1 Introduction

1.1 Motivation

Multimedia delivery systems can be optimized to maximize the overall throughput (best effort) or to satisfy client quality of experience (QoE) demands (QoE-guaranteed). QoE-guaranteed optimizations may suffer from being overly constrained, especially in large-scale multicasts. Tracking the media processing capability, QoE demand, and channel quality of every client can be daunting, prompting the search for better trade-offs between bandwidth usage efficiency and optimization complexity. Sometimes, no feasible solution exists due to bandwidth limitations and/or clients with poor channels that require forward error correction (FEC) codes with exceedingly large overheads. Therefore, having a screening process to reduce excessive QoE demands is essential. One may utilize a mechanism to dynamically assign clients to available media quality levels in order to improve resource utilization efficiency. For example, using scalable bitstreams, the multicast server may drop the highest enhancement layers when relatively few users with high-quality channels and high-resolution displays exist. The saved transmission resources could be redeployed to serve clients with poor channels. The multicast optimization needs to be performed repeatedly due to client channel and source bitstream variations, as well as to account for clients dynamically joining or leaving the multicast at random times. Thus, low-complexity optimization methods are required.

In point-to-multipoint services such as multicast, the transmission to the multicast clients may traverse different paths. As a result, the end-to-end transmission channels may exhibit diverse behaviors and capacities. End-to-end QoE can be assured by providing sufficient error protection. Feedback-based error correction such as automatic repeat request (ARQ) and hybrid ARQ [1] may not be feasible due to latency and possible feedback implosion at the multicast server. An alternative which avoids these problems is to employ FEC coding. In multicasting, we are faced with an ensemble of channels with different loss processes and require FEC that is “universally” efficient. Fortunately, fountain codes [2] have been demonstrated to well approximate the ideal. With fountain codes, the receiver can recover the source symbols with high probability when the number of correctly received code symbols is slightly larger than the number of source symbols. Crucially, this recovery capability is independent of the loss pattern or channel memory. One implication of this “independence” from channel memory is that clients connected to distinct channels with differing memory behaviors that inflict the same amount of loss will see the same throughput.

This paper is concerned with an efficient application of fountain codes as an application-layer FEC (AL-FEC) code to meet the QoE demands of video multicast clients with heterogeneous channels and video quality requirements.

This approach offers the following advantages: 1) service versatility, since the service is agnostic to the underlying network infrastructures, enabling clients to join the multicast through a variety of network connections; 2) quick service deployment or reconfiguration, eliminating the wait for infrastructure upgrades and enabling quick launch of third-party services; and 3) extension of the capability of an existing network (infrastructure) [3].

1.2 Related approaches

Multicast schemes have evolved with advances in source and channel coding techniques. Receiver-driven layered multicast (RLM) [4] is a landmark technique for multicasting to clients with heterogeneous channels. RLM is a “client-pulled” scheme suitable for large-scale multicast over the Internet. Subsequently, unequal error protection (UEP) was proposed [5] and its application to multimedia transmission was studied [6]. Further works largely fall into one of the following three categories: AL-FEC design for UEP [7–13], link-layer scheduling [14–18], and joint source-channel coding [19]. In practice, system design and provisioning usually prefer separate source and channel coding as well as low computational complexity.

Fountain codes are employed in many current multimedia delivery standards [20, 21] due to their structural benefits, e.g., linear time encoding/decoding algorithms and small overhead [22, 23]. Digital fountain-based approaches in the AL-FEC design category [8–10] mainly rely on altering the degree distribution and source symbol selection process to provide UEP across different source layers. In [24], the fountain-code degree distribution is optimized to provide short code length performance. The advantage of using rateless codes over conventional Reed-Solomon codes in providing graceful degradation was reported in [12]. In [13], UEP and rateless coding are utilized in streaming a scalable video from multiple servers. This work aims to maximize the probability of successful decoding through proper rate allocation amongst video layers of different servers. Note that none of the above fountain-code-based works consider client channel heterogeneity in their design. Moreover, these schemes treat only one scalability dimension (PSNR) and do not optimize the visual perceptual quality.

There are a number of notable link-layer scheduling algorithms for multimedia multicast. A best-effort optimization framework is proposed in [14] for Internet protocol television broadcast over worldwide interoperability for microwave access (WiMAX) channels with consideration of capacity variation in the multicast channel. Sharangi et al. [17] proposed a scalable video transmission scheduling optimization scheme for multiple multicasts to share a set of WiMAX timeslots such that the average utility of the multicasts is maximized. A similar work with a more elaborate model of physical layer parameters and channel effects is proposed by Vukadinovic et al. [18]. While our problem (described below) and [17, 18] both strive to balance serving individual clients versus overall throughput, for our problem, the individuals are clients with heterogeneous channels and playback requirements within a multicast, whereas for [17, 18], the individuals are distinct multicasts, each of which targets one channel and one media quality.

Several multicast schemes benefiting from application-layer FEC and file delivery over unidirectional transport (FLUTE) [25] have been recently introduced [26, 27]. Adoption of dynamic adaptive streaming over HTTP (DASH) to support multicast services is discussed in [28, 29]. A hybrid multicast architecture based on FLUTE and DASH is proposed in [30] where FLUTE provides multicasting with application-layer FEC and DASH is utilized for retransmission of lost frames over a unicast channel. Bouras et al. [31] experimentally assessed the efficacy of using standard raptor [32] codes as application-layer FEC codes for multicasting video over 3GPP long-term evolution (LTE) wireless networks. The assessment employs non-scalable low-bit-rate video, and no service optimization is performed.

1.3 Proposed approach

We formulate an AL-FEC code rate allocation optimization problem for multicasting a scalable coded video (SVC) stream with the aim of maximizing service utility. We consider client heterogeneity in terms of channel quality diversity and media decoding capability. Application-layer multicast obviates the need to access the lower network layers in order to control the transmission scheme. Clients may be connected to the service using different physical channels. For instance, mobile clients may be able to access multiple network infrastructures and engage in “vertical handoffs” across different networks. From the perspective of the multicast service, the end-to-end path to individual clients may traverse different network infrastructures with their underlying physical-layer error protection mechanisms. For the purpose of our AL-FEC coding optimization, the net effect of the end-to-end channel capacity is parameterized in the form of a “reception coefficient” (RC). The RC parameter enables the application layer to use a memoryless erasure channel model (see (10) below) to represent, for instance, lower layer FEC decoding performance in cellular networks or packet losses on the Internet. The diversity of client channel capacities is modeled using probability distributions. The utility is based on using an objective video quality measure to value client satisfaction across different possible adaptations of the video layers. A client may have a specific playback profile, which could be elastic in the sense that the client may be willing to accept (or even reject) playback of various layer adaptations, with corresponding degrees of utility gained.

The allocation is performed to maximize a utility measure that permits balancing between individual client utility and serving as many clients as possible. Our problem provides an answer to the following question: given an application-layer multicast service bandwidth and a population of clients with heterogeneous end-to-end channels and devices (with different video playback capabilities), how should fountain codes best be provisioned across the video layers in order to serve as many clients as possible while meeting their video perceptual-quality demands? A byproduct of the problem solution is an indication of which clients cannot be served at their desired viewing quality.

Our problem is fashioned to enable using standard fountain codes or their equivalent. We believe this is a more attractive proposition for multicast equipment/service engineering than using customized fountain codes. A client utility measure is defined based on a visual perceptual model [33, 34] that admits mixed spatial-temporal-amplitude scalability. Our multicasting framework also offers the flexibility to admit other advanced video quality assessment models for mixed-scalability video. An advantageous feature of the proposed method is that the optimization complexity does not increase with the number of clients, a property particularly appealing for large-scale multicasts. Moreover, by employing statistical modeling of client reception capabilities, the optimization can be performed with different resolutions to trade off complexity and performance. The reliability of decoding the video layers in terms of outage probability (OP) is enforced to be commensurate with the probabilistic decoding nature of rateless codes. Compared to the previous multicast optimization techniques based on fountain codes in [8, 10, 35], our work considers clients with heterogeneous channels and video-playback quality demands and benefits from a simple yet accurate model [36] of the client decoding outage probability. The QoE of the proposed multicast scheme has both guaranteed and best-effort aspects. The qualities of the different video layers are guaranteed, provided the client’s channel has commensurate capacities. The best layer the client can access also depends on the client population channel qualities and demand profiles. Another aspect of our framework is that it does not require altering the video bitstream or rateless code, avoiding compatibility issues with existing and future standards, e.g., [37–39].

Additionally, we extend our previous work on video multicast optimization [40] to suppress temporal quality fluctuations caused by source bit-rate variation. By utilizing a quality-aware optimization that admits source scalability, the proposed scheme provides a range of trade-offs between transmission resource utilization efficiency and stable client video playback quality. With some simplifications, we obtain a convex optimization problem. It turns out that the solution of the convex problem is a highly accurate approximation.

The rest of this paper is organized as follows. Section 2 is devoted to the general problem formulation as well as a convex formulation that admits lower computation with moderate loss in accuracy. In Section 3, we extend our formulation to a dynamic optimization that considers client dissatisfaction due to video quality fluctuations. In Section 4, we assign values to the client utility parameters in our formulated problem using a recently developed video quality metric. The performance of the proposed optimization framework is evaluated in Section 5. Finally, conclusions are drawn in Section 6. The basic notations used in this paper are listed in Table 1.

Table 1 Basic notations

2 Proposed multimedia multicast with heterogeneous clients

2.1 System setup

Figure 1 illustrates the system setup. A media server is responsible for providing various terminal (user device) classes with multilayer media, e.g., an H.264/SVC encoded video stream. A hybrid network of wired and wireless clients with heterogeneous channels is depicted. For encoding, a sequence of video frames is partitioned into consecutive time segments. Each segment, which may comprise the frames over, say, a 1-s interval, is encoded into a scalable bitstream. The generated bitstream embeds L layers with S l source symbols per layer l, l=1,…,L. While the base layer is essential, the enhancement layers introduce higher spatial or temporal resolution or finer quantization resolution without altering the spatio-temporal resolution of the preceding layer. We assume that successful decoding of any layer relies on successful decoding of all of its preceding layers. This implies that layers with lower indices are more important in the decoding process. Fountain coding [2] in the form of raptor codes is applied to every layer of the bitstream to provide protection against erasures caused by channel errors in the physical layer. The code for layer l receives S l source symbols and generates N l encoded symbols. Unlike conventional Reed-Solomon codes, fountain codes can potentially generate an infinitely large code sequence, making the code rate S l /N l elastic, or the code “rateless.” Generation of the rateless code sequence is determined by specifying a degree distribution and a random number generator. Here, we exploit the elastic property by choosing the code rate S l /N l to best suit an optimization objective. Standardized raptor codes [41] have been optimized so that a receiver that correctly receives K l =S l (1+ε) encoded symbols from the transmission can recover the message, with ε>0 representing a small overhead typically below 2 %. Successful decoding is thus ensured only probabilistically and depends on the number of transmitted symbols correctly received by the receiver [36]. For practical considerations, we assume that N l encoded symbols are transmitted for the l-th layer such that \(\sum _{l=1}^{L}N_{l}\leq N_{\max }\). N max, which we call the “service bandwidth,” is set as part of the service provisioning and may depend on the bandwidth available to the server, the temporal duration of the video segment, and other factors. For example, consider a video sequence which is partitioned into segments, each with a duration of T seg seconds, and a server-allocated bandwidth of Ω bit/s. Assuming that each symbol comprises B bits, the maximum number of available transmission symbols for each video segment is

$$\begin{array}{*{20}l} N_{\max}=\left\lfloor \frac{\Omega \, T_{\text{seg}}}{B}\right\rfloor \end{array} $$
((1))
Fig. 1 System setup for the proposed rateless-code-based video multicast

and can be chosen and even varied across segments to meet deadline requirements in streaming applications. The multicast clients are modeled by M classes of media players, each class comprising players that are capable of decoding the media up to layer h m ∈{1,…,L}, m=1,…,M, and have commensurate display resolutions. Classes are indexed in increasing order h 1<h 2<…<h M . Clients with high-definition (HD) displays may demand decoding up to an HD layer, while smart-screen and portable device users may demand standard definition (SD) or a lower resolution to suit their application memory capacity and/or power consumption policies. For example in Fig. 1, multicast transmission of a source with L=8 layers to M=3 classes of users is considered. Mobile and portable TV clients can potentially decode the video up to layers h 1=3 and h 2=6, respectively, while all 8 layers are decodable by HD clients (h 3=L=8). Clients may also have different reception capabilities, e.g., due to having different bandwidths, antenna systems, and radio propagation characteristics. A reception coefficient (RC) 0≤δ≤1 is used to model the client reception capability, where 1−δ is the application-layer packet loss rate due to loss phenomena in the lower layers. We assume memoryless erasure channels (MECs) with independent and identically distributed (i.i.d.) erasures between the server and the clients. A client channel with RC δ c has an erasure rate of 1−δ c and receives an expected number of δ c N max transmitted fountain symbols in a transmission period of one video segment. Note that the actual number of the correctly received symbols depends on the channel symbol erasure events. We define the cumulative distribution function (CDF) of the channel quality of class m clients as F m (δ),m=1,…,M. Additionally, prior class probabilities π m >0, m=1,…,M with \(\sum _{m=1}^{M} \pi _{m}=1\) are used to reflect the distribution of client population across different classes.
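To make the setup concrete, the short sketch below assembles a toy instance of the model in Python: the service bandwidth of (1), a three-layer source, and two client classes described by their highest decodable layer, prior, and RC distribution. All numeric values (bandwidth, layer sizes, priors, and distributions) are illustrative assumptions, not the settings used later in Section 5.

```python
import numpy as np
from scipy import stats

# Service bandwidth, Eq. (1): N_max = floor(Omega * T_seg / B).
Omega = 4.0e6                 # server-allocated bandwidth [bit/s] (assumed)
T_seg = 1.0                   # segment duration [s]
B = 50 * 8                    # bits per symbol (50-byte symbols)
N_max = int(np.floor(Omega * T_seg / B))   # -> 10000 symbols per segment

# Multilayer source: S_l source symbols per layer l (assumed values).
S = np.array([3000, 2500, 4000])
L = len(S)

# Client classes: highest decodable layer h_m, prior pi_m, and RC CDF F_m.
classes = [
    {"h": 2, "pi": 0.4, "F": stats.uniform(loc=0.2, scale=0.8).cdf},
    {"h": 3, "pi": 0.6, "F": stats.truncnorm(-4.0, 2.0, loc=0.7, scale=0.15).cdf},
]
assert abs(sum(c["pi"] for c in classes) - 1.0) < 1e-9
print(N_max, L, [c["h"] for c in classes])
```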

The media layers are not of the same importance to the clients. QoE for a client depends on the probability of successfully acquiring the layers the client desires. It is possible for one or more desired layers not to be served due to resource or channel limitations. For those clients that are served a particular layer l, the probability of failing to decode the layer can be limited by setting outage probability constraints \(P^{l}_{\text {out}}, 1 \leq l \leq L\). While it is conceivable that the clients desiring the same layer might want different levels of decoding assurance, for simplicity, we assign one assurance level, in the form of probability \(1-P^{l}_{\text {out}}\), to each media layer. Ideally, every additional encoded symbol drawn from a digital fountain improves the decoding probability of the code. Thus, if N max is allowed to be sufficiently large, all clients with non-zero RC will eventually achieve the targeted quality of service (QoS). However, in a more realistic scenario with finite transmission resources N max, and any given set of N l ,l=1,…,L with \(\sum _{l=1}^{L} N_{l}=N_{\max }\), we can find a set of minimum needed reception coefficients (MNRCs) δ l such that those clients with RC δ c <δ l and desiring the layer l media will not reach the layer-decoding assurance probability \(1-P^{l}_{\text {out}}\). Since successful decoding of all layers j=1,…,l is necessary in order to enjoy the media quality of layer l, we impose an unequal error protection (UEP) condition

$$\begin{array}{*{20}l} 0 <\delta_{1} \leq \delta_{2} \leq \ldots \leq \delta_{L} \leq 1. \end{array} $$
((2))

Later, we prove that this condition is necessary for optimal utilization of transmission resources while simplifying the utility function.

2.2 Utility function

Let u m,l be the utility for class m clients decoding layer l with decoding failure probability guaranteed to be below a given outage probability threshold. Our “utility” differs from the conventional average utility found in best effort QoE formulations, wherein utilities associated with unacceptable decoding failure probabilities are included in the utility averaging. u m,l is a function of the number of clients who are able to decode layer l under the guarantee, as well as the amount of utility they gain,

$$ {\fontsize{9}{12}{\begin{aligned} u_{m,l}&= \alpha_{m,l}\!{\int_{0}^{1}}\!f_{m}(\xi) \mathcal{I}\left(\prod_{j=1}^{l}\!{\left[\!1- P(S_{j},N_{j},\xi)\!\right] \geq \!\left(1-P_{\text{out}}^{l}\right)}\right) \, d\xi. \end{aligned}}} $$
((3))

Here, f m (δ) is the RC probability distribution of clients in class m, \(\mathcal {I}(.)\) is the indicator function, P(S j ,N j ,δ) is the probability of failing to decode the fountain code in layer j, with S j source symbols and N j transmitted symbols, for a client with RC δ, and α m,l is the incremental utility gained by a class m client after decoding layer l, provided that all preceding layers are successfully decoded. α m,l is obtained from the utility-rate function of each client class, \(\mathcal {U}_{m}(R_{l})\), i.e.,

$$ \alpha_{m,l}=\mathcal{U}_{m}(R_{l})-\mathcal{U}_{m}(R_{l-1}),\quad \forall \ \ l,m>0. $$
((4))

We show in Section 4 a specific way of using this function to optimize viewing experience. In (4), \(R_{l}=\sum _{k=1}^{l}S_{k}/T_{\text {seg}}\) is the cumulative source symbol rate up to layer l with \(R_{0}\triangleq 0\) and \(\mathcal {U}_{m}(0)\triangleq 0\), ∀m. The product term within the indicator function in (3) provides the probability of successfully decoding all layers up to and including layer l. With the MNRCs δ l defined earlier, we can write \(\prod _{j=1}^{l}{\big [1- P(S_{j},N_{j},\delta _{l}) \big ] = \left (1-P_{\text {out}}^{l}\right)}\) and then rewrite (3) as

$$\begin{array}{*{20}l} u_{m,l}&= \alpha_{m,l}{\int_{0}^{1}}f_{m}(\xi) \mathcal{I}(\xi \geq \delta_{l})\, d\xi\\ &=\alpha_{m,l}\int_{\delta_{l}}^{1}f_{m}(\xi)\ d\xi=\alpha_{m,l} [\!1-F_{m}(\delta_{l})]. \end{array} $$
((5))

We obtain the utility of class m clients U m by accumulating the guaranteed utility of all useful layers. However, we should make sure that the incremental utility α m,l for enhancement layer l contributes to U m only when the clients can reliably decode the preceding layers. The UEP conditions embodied in (2) represent the hierarchical decoding dependencies of the scalable video layers and provide the needed assurance.

$$\begin{array}{*{20}l} U_{m}&=\sum_{l=1}^{h_{m}} u_{m,l}=\sum_{l=1}^{h_{m}} \alpha_{m,l} [\!1-F_{m}(\delta_{l})]. \end{array} $$
((6))

Not all the video layers can be useful for the clients of a class due to screen resolution or other playback constraints. Therefore, in (6), h m ≤L denotes the highest video layer which can contribute to the utility of class m clients.

Finally, the overall utility is obtained by summing over the utilities of all client classes using the prior class probabilities π m >0, m=1,…,M,

$$\begin{array}{*{20}l} {U}_{\text{total}}&=\sum_{m=1}^{M} \pi_{m} U_{m}=\sum_{m=1}^{M} \pi_{m} \sum_{l=1}^{h_{m}} \alpha_{m,l}[\!1- F_{m}(\delta_{l}) ] \ \\[-.1cm] &\equiv {U_{\text{max}}}- \sum_{m=1}^{M} \sum_{l=1}^{h_{m}} \hat{\alpha}_{m,l} F_{m}(\delta_{l}) \end{array} $$
((7))

where π m is absorbed into α m,l by defining \(\hat {\alpha }_{\textit {m,l}}=\pi _{m} \alpha _{\textit {m,l}}\) and \({U_{\max }=\sum _{m=1}^{M} \sum _{l=1}^{h_{m}} \hat {\alpha }_{\textit {m,l}}}\). Note that U max is an upper bound on the deliverable utility and only depends on the media source and the priors. This bound is achievable if the MNRCs \(\delta _{h_{m}}, m=1,\ldots,M\) are small enough so that no client has to settle for a quality layer lower than their maximum desired quality. However, this may not be possible since the service bandwidth N max and the OP constraints prevent the MNRCs from becoming arbitrarily small. As a result, clients with poor RCs may end up not being served their most desired video quality, or even worse, being unable to decode the base layer. U total is to be maximized, as shown below. We emphasize that the problem at hand is efficient utilization of the multicast service bandwidth N max to provide guaranteed utility to individual multicast clients while serving as many clients as possible. However, for a given N max and set of client RC distributions, the problem solution may not be able to service a portion of the clients with exceedingly poor channels. These clients may be served by increasing N max or by providing alternate solutions, e.g., unicast (re)transmission or peer-assisted repair [42]. Such solutions are outside the scope of this paper.
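As a concrete illustration, the sketch below evaluates the total utility (7) for a given MNRC vector using callable class CDFs; the MNRCs, CDFs, layer structure, and utility coefficients are assumed values chosen only for illustration, and the same quantity, in loss form, is what the optimization below minimizes.

```python
import numpy as np
from scipy import stats

def total_utility(delta, alpha_hat, F, h):
    """U_total of Eq. (7): sum over classes m and layers l <= h_m of
    alpha_hat[m][l] * (1 - F_m(delta_l)), with delta the MNRC vector."""
    U = 0.0
    for m, (F_m, h_m) in enumerate(zip(F, h)):
        for l in range(h_m):                   # layers 1..h_m (0-based indices here)
            U += alpha_hat[m][l] * (1.0 - F_m(delta[l]))
    return U

# Illustrative three-layer example with two client classes (assumed values).
delta = np.array([0.55, 0.70, 0.82])           # MNRCs obeying the UEP ordering (2)
F = [stats.uniform(0.2, 0.8).cdf,              # class 1 RC CDF
     stats.truncnorm(-4.0, 2.0, loc=0.7, scale=0.15).cdf]  # class 2 RC CDF
h = [2, 3]                                     # highest useful layer per class
alpha_hat = [[0.20, 0.12, 0.00],               # pi_m * alpha_{m,l}
             [0.15, 0.10, 0.25]]
print(total_utility(delta, alpha_hat, F, h))
```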

2.3 Outage probability

Let P(S,N,δ) be the probability that a client fails to decode the S information symbols, given the client’s RC δ and the number of transmitted symbols N. The performance of a rateless decoder in decoding a source with S information symbols after receiving K code symbols is given by the decoding failure probability function P f (S,K). Assuming interleaving is used if needed, we consider a memoryless erasure channel (MEC) with symbol erasure rate 1−δ∈[0, 1] assumed to be fixed during the transmission period of a video segment. For a given number of transmitted code symbols, the outage probability can be obtained from

$$\begin{array}{*{20}l} P(\textit{S,N},\delta)=\mathbb{E}_{K|N} [P_{f}(S,K)] \end{array} $$
((8))

With erasure probability 1−δ and i.i.d. erasure events, the number of correctly received symbols K is a binomial random variable with parameters N and δ. Moreover, the decoding failure probability of rateless codes can be modeled by [23]

$$ P_{f}(S,K)=\left\{ \begin{array}{ll} 1, & \text{if}~ K\leq {S}, \\ a b^{K-{S}}, & \text{if}~ K>{S}, \end{array}\right. $$
((9))

where a>0 and 0<b<1 vary with the rateless code structure, particularly the degree distribution, and the precode rate. For example, a=0.85 and b=0.567 were used for the raptor code in [23]. Combining (8) with (9), we obtain the outage probability of a rateless coded source over a MEC

$$\begin{array}{*{20}l} {}P(S,N,\delta)&=\sum_{k=0}^{N} {N \choose k}\delta^{k}(1-\delta)^{N-k} P_{f}(S,k)\\ &=\text{Bin}_{N,\delta}(S)+\sum_{k=S+1}^{N}{N\choose k}\delta^{k}(1-\delta)^{N-k}ab^{k-S}. \end{array} $$
((10))

Here, Bin N,δ (.) is the binomial CDF with parameters N and δ. Despite its accuracy, the closed-form representation in (10) is not convenient for optimization in which one needs to express other parameters as an explicit function of the OP. To deal with this shortcoming, the following parametric model that was previously derived in [36] offers a convenient approximation of (10):

$$\begin{array}{*{20}l} \widetilde{P}(S,N,\delta)= 0.5\exp\left[-\frac{\delta (N-{S}/{\delta})^{H}}{S (1-\delta)}\right] \quad \text{for} \ N\geq {S}/{\delta}. \end{array} $$
((11))

Note that H≈1.8 for the rateless codes used in [23]. As shown in Fig. 2, this model accurately estimates the outage probability (10) for various channel parameters.

Fig. 2 Comparison between the closed-form outage probability (10) and the approximated outage probability model (11) as a function of the number of transmitted fountain symbols, for a source with S=1000 symbols
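Both expressions compared in Fig. 2 are straightforward to evaluate numerically. The sketch below implements the exact outage probability (10) with the binomial distribution and the parametric model (11); the constants a, b, and H follow the values quoted above from [23, 36], while S, N, and δ are arbitrary test values.

```python
import numpy as np
from scipy.stats import binom

A_RC, B_RC, H_RC = 0.85, 0.567, 1.8   # rateless-code model constants from [23], [36]

def p_fail(S, K, a=A_RC, b=B_RC):
    """Decoding failure probability P_f(S, K) of the rateless code, Eq. (9)."""
    return 1.0 if K <= S else a * b ** (K - S)

def outage_exact(S, N, delta):
    """Exact outage probability P(S, N, delta) over a MEC, Eq. (10)."""
    k = np.arange(0, N + 1)
    pk = binom.pmf(k, N, delta)                      # P(K = k symbols correctly received)
    pf = np.array([p_fail(S, kk) for kk in k])
    return float(np.sum(pk * pf))

def outage_approx(S, N, delta, H=H_RC):
    """Parametric approximation of the outage probability, Eq. (11); valid for N >= S/delta."""
    if N < S / delta:
        return 1.0                                    # below the useful budget: treat as outage
    return 0.5 * np.exp(-delta * (N - S / delta) ** H / (S * (1.0 - delta)))

if __name__ == "__main__":
    S, delta = 1000, 0.8
    for N in (1300, 1350, 1400):
        print(N, outage_exact(S, N, delta), outage_approx(S, N, delta))
```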

In summary, we aim to maximize the utility in (7) subject to the bandwidth and the UEP constraints defined in Section 2.1. The first term in (7) is not a function of the optimization variables, MNRCs δ l , l=1,…,L. Hence, the utility maximization can be transformed into the following utility loss minimization problem:

Problem 1.

(General formulation)

$$\begin{array}{*{20}l} \ & \min_{\{\delta_{l}\}_{l=1}^{L}} \; \ \sum_{m=1}^{M} \sum_{l=1}^{h_{m}} \hat{\alpha}_{m,l} F_{m}(\delta_{l}) &\\ \textrm{\textit{subject to}} \\ \ \text{UEP constraints:} & \delta_{1}\ge0, \\ & \delta_{l}-\delta_{l+1}\leq0, \qquad l=1,\ldots,L-1,\\ & \delta_{L}\leq1, \\ \ \text{BW constraint:} &\sum_{l=1}^{L} N_{l} \leq N_{\text{max}}. \end{array} $$
((12))

We first consider using exhaustive search to solve Problem 1. The set of all δ l ,l=1,…,L satisfying the UEP constraints forms an L-dimensional simplex. For exhaustive search, the simplex volume is discretized using an L-dimensional cubic lattice \(\mathcal {L}\) with \(|\mathcal {L}|\) points. For each point in \(\mathcal {L}\), say δ l , l=1,…,L, we first obtain the required per-layer transmission resources N j in a forward procedure using

$$\begin{array}{*{20}l} N_{l}=\left\{\begin{array}{ll} P^{-1} \left(S_{1},\delta_{1},P_{\text{out}}^{l}\right) & l=1,\\ P^{-1}\left(S_{l},\delta_{l},1-\frac{1-P_{\text{out}}^{l}}{\prod_{j=1}^{l-1} [1- P(S_{j},N_{j},\delta_{l})]} \right) & l\geq 2, \end{array}\right. \end{array} $$
((13))

wherein P −1(S,δ,p) is the inverse outage probability function which yields the required number of transmitted symbols N as a function of the number of source symbols S, the reception coefficient δ, and the designated outage probability constraint p. A convenient closed form expression of P −1(S,δ,p) is obtained by rearranging the terms in the approximated OP model (11). Having N j ,j=1,…,L in hand, the bandwidth constraint is checked. If the constraint is satisfied, the cost function is calculated; otherwise, the cost is set to infinity. For a sufficiently fine discretization, we regard the minimum cost point in \(\mathcal {L}\) as the “optimal” solution. Note that by using the bandwidth constraint in the above manner, the exhaustive search can be conducted over L−1 dimensions. The complexity \(\mathcal {O}(D^{L-1})\) can be large, where D is the number of grid points on each dimension.

After obtaining the optimal MNRCs, \(\delta _{l}^{*}, l=1,..,L\), the corresponding transmission resources per layer \(N_{l}^{*}, \forall l\) are obtained. Clients whose highest media quality demand is layer l but whose RCs are below \(\delta _{l}^{*}\) have to settle for the lower quality of layer i where i is the largest layer index with \(\delta _{i}^{*}\) no greater than the client’s RC. Ultimately, clients with RCs below \(\delta _{1}^{*}\) are dropped from the multicast as they cannot decode the base layer with the assured probability.
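A minimal sketch of this exhaustive search is given below for a three-layer source. To keep it compact, it uses the approximate outage model (11) and its inverse in the forward procedure (13) instead of the exact expression (10), so it illustrates the structure of the search rather than a production implementation; the grid resolution, layer sizes, outage constraints, class CDFs, and utility coefficients are placeholder values.

```python
import itertools
import numpy as np
from scipy import stats

A_RC, B_RC, H_RC = 0.85, 0.567, 1.8        # rateless-code constants from [23], [36]

def outage(S, N, delta, H=H_RC):
    """Approximate outage probability, Eq. (11)."""
    if N < S / delta:
        return 1.0                          # below the useful budget: treat as outage
    return 0.5 * np.exp(-delta * (N - S / delta) ** H / (S * (1.0 - delta)))

def inv_outage(S, delta, p, H=H_RC):
    """P^{-1}(S, delta, p): symbols needed to keep the outage at p (inverse of Eq. (11))."""
    return S / delta + (-S * (1.0 - delta) / delta * np.log(2.0 * p)) ** (1.0 / H)

def layer_budgets(S, deltas, P_out):
    """Forward procedure (13): per-layer budgets N_l for a candidate MNRC vector."""
    N = []
    for l in range(len(S)):
        if l == 0:
            p_target = P_out[0]
        else:
            prod = np.prod([1.0 - outage(S[j], N[j], deltas[l]) for j in range(l)])
            p_target = 1.0 - (1.0 - P_out[l]) / prod
            if p_target <= 0.0:             # preceding layers already too unreliable
                return None
        N.append(inv_outage(S[l], deltas[l], p_target))
    return np.array(N)

def exhaustive_search(S, P_out, alpha_hat, F, h, N_max, D=25):
    """Grid search over the UEP simplex d_1 <= ... <= d_L (Problem 1)."""
    grid = np.linspace(0.05, 0.99, D)       # delta = 1 excluded (degenerate channel)
    best_cost, best_delta = np.inf, None
    for deltas in itertools.combinations_with_replacement(grid, len(S)):
        N = layer_budgets(S, deltas, P_out)
        if N is None or N.sum() > N_max:    # bandwidth constraint
            continue
        cost = sum(alpha_hat[m][l] * F[m](deltas[l])
                   for m in range(len(F)) for l in range(h[m]))
        if cost < best_cost:
            best_cost, best_delta = cost, deltas
    return best_delta, best_cost

if __name__ == "__main__":
    S, P_out = [3000, 2500, 4000], [1e-4, 4e-4, 5e-4]
    F = [stats.uniform(0.2, 0.8).cdf, stats.uniform(0.4, 0.6).cdf]
    h, alpha_hat = [2, 3], [[0.20, 0.12, 0.00], [0.15, 0.10, 0.25]]
    print(exhaustive_search(S, P_out, alpha_hat, F, h, N_max=13000))
```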

In contrast to other formulations such as [16] in which clients are individually represented in the optimization, here multicast clients are grouped and represented by the distributions F m (δ) and associated priors π m . Consequently, the complexity of the proposed optimization is independent of the number of clients. Moreover, client-to-server feedback for the purpose of updating the RC distributions could be managed without feedback implosion, e.g., the server could broadcast a threshold value and clients with a locally generated random number above the threshold would send their RCs to the server. This threshold is adapted to the multicast population size such that the server is not overwhelmed by an excessive amount of feedback messages. The RC distributions could be parametrized or discretized with a suitably chosen resolution to trade off computational complexity against accuracy.

2.4 Simplified formulation

Next, we exploit simplifications of the outage probability constraints to obtain a problem formulation that is amenable to solution using gradient search. Let \(Q_{l}(\delta)=\prod _{j=1}^{l}{\big [1- P(S_{j},N_{j},\delta) \big ]}\) be the probability of receiving layers 1 to l. Q l (δ) is monotonically non-increasing with l and monotonically non-decreasing with δ. Moreover, due to the fast-decaying nature of the decoding failure probability (9), Q l (δ) exhibits an abrupt transition for δ in the neighborhood of δ l . This can be seen from Fig. 3 which shows Q l (δ) and [ 1−P(S l ,N l ,δ)] for a closely spaced set of δ l ’s. It can be seen from Fig. 3 that in the neighborhood of δ l , the factor \(Q_{l-1}(\delta)=\prod _{j=1}^{l-1}{\big [1- P(S_{j},N_{j},\delta) \big ]}\) is nearly one and the transition behavior of Q l (δ) is dominated by [1−P(S l ,N l ,δ)]. Hence, we can use the approximation

$$\begin{array}{*{20}l} \prod_{j=1}^{l}{\big[1- P(S_{j},N_{j},\delta_{l}) \big]} \approx 1- P(S_{l},N_{l},\delta_{l}). \end{array} $$
((14))
Fig. 3 Outage probability approximation: a comparison between Q l (δ) (solid blue line) and [1−P(S l ,N l ,δ)] (dashed red line) for the different layers of a three-layer video stream, with δ 1=0.526, δ 2=0.572, δ 3=0.607 for outage probability constraints similar to those expressed in Section 5

Consequently, for a given set of δ l ,l=1,…,L, the per-layer transmission resources N j can be obtained from

$$\begin{array}{*{20}l} N_{l}=P^{-1}\bigg(S_{l},\delta_{l},P_{\text{out}}^{l} \bigg), \quad l=1,\ldots,L. \end{array} $$
((15))

Using (11) to estimate N l as a function of the outage probability, we have

$$\begin{array}{*{20}l} N_{l} &= S_{l}/{\delta_{l}}+\tau_{l}\sqrt[H]{\frac{1-\delta_{l}}{\delta_{l}}}, \end{array} $$
((16))

where

$$\begin{array}{*{20}l} \tau_{l}=\sqrt[H]{-S_{l} \ln\left(2{P^{l}_{\text{out}}}\right)}, \quad P^{l}_{\text{out}}\leq0.5. \end{array} $$
((17))

Using this, the bandwidth constraint becomes

$$\begin{array}{*{20}l} \sum_{l=1}^{L} \left(S_{l}/{\delta_{l}}+\tau_{l} \sqrt[H]{\frac{1-\delta_{l}}{\delta_{l}}} \right)\leq N_{\text{max}}, \quad 0<P^{l}_{\text{out}}\leq a. \end{array} $$
((18))

As a result, a new optimization problem can be formulated.

Problem 2.

(Simplified formulation)

$$\begin{array}{*{20}l} \ & \min_{\{\delta_{l}\}_{l=1}^{L}} \; \ \sum_{m=1}^{M} \sum_{l=1}^{h_{m}} \hat{\alpha}_{m,l} F_{m}(\delta_{l}) &\\ \textrm{\textit{subject to}} \\ \ \text{UEP constraints:\quad} & \delta_{1}\ge0, \\ & \delta_{l}-\delta_{l+1}\leq0, \quad \;\;l=1,\ldots,L-1,\\ & \delta_{L}\leq1, \\ \ \text{BW constraint:\quad} & \sum_{l=1}^{L} \left(S_{l}/{\delta_{l}}+\tau_{l} \sqrt[H]{\frac{1-\delta_{l}}{\delta_{l}}} \right)\leq \!N_{\text{max}}. \end{array} $$

Unlike Problem 1, first order derivatives of the BW constraint can now be easily obtained. Hence, gradient descent algorithms with \(\mathcal {O}(L \log (1/e))\) complexity, where e is the required accuracy, can be deployed to solve Problem 2. Since Problem 2 may have multiple local minima, the quality of the gradient descent solution depends on the algorithm initialization. In the next section, a convex approximation to Problem 2 is obtained. In Section 5, we present numerical results demonstrating the effectiveness of the convex initialization to the gradient search.
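As a concrete illustration of this step, the sketch below solves Problem 2 with an off-the-shelf constrained solver (scipy's SLSQP), with the bandwidth constraint expressed through (16)–(17). The layer sizes, class CDFs, utility coefficients, and the starting point are illustrative assumptions; in the proposed pipeline, the starting point would come from the convex solution described next.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

H_RC = 1.8                                   # rateless-code model exponent [36]

def required_symbols(delta, S, P_out, H=H_RC):
    """Per-layer budgets N_l(delta_l) from Eqs. (16)-(17)."""
    S = np.asarray(S, float)
    tau = (-S * np.log(2.0 * np.asarray(P_out))) ** (1.0 / H)
    return S / delta + tau * ((1.0 - delta) / delta) ** (1.0 / H)

def solve_problem2(S, P_out, alpha_hat, F, h, N_max, delta0):
    L = len(S)
    def cost(delta):                          # utility-loss objective of Problem 2
        return sum(alpha_hat[m][l] * F[m](delta[l])
                   for m in range(len(F)) for l in range(h[m]))
    constraints = [
        # bandwidth: N_max - sum_l N_l(delta_l) >= 0
        {"type": "ineq",
         "fun": lambda d: N_max - required_symbols(d, S, P_out).sum()},
        # UEP ordering: delta_{l+1} - delta_l >= 0
        {"type": "ineq", "fun": lambda d: np.diff(d)},
    ]
    bounds = [(1e-3, 1.0 - 1e-3)] * L         # numerical guard for 0 < delta < 1
    res = minimize(cost, delta0, method="SLSQP",
                   bounds=bounds, constraints=constraints)
    return res.x, res.fun

if __name__ == "__main__":
    S, P_out = [3000, 2500, 4000], [1e-4, 4e-4, 5e-4]
    F = [stats.uniform(0.2, 0.8).cdf, stats.uniform(0.4, 0.6).cdf]
    h, alpha_hat = [2, 3], [[0.20, 0.12, 0.00], [0.15, 0.10, 0.25]]
    delta_opt, loss = solve_problem2(S, P_out, alpha_hat, F, h,
                                     N_max=13000, delta0=np.full(3, 0.85))
    print(delta_opt, loss)
```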

2.5 Convex formulation

Problem 2 is not convex. We show that, by making further simplifying approximations, the problem can be recast into a convex optimization problem. In the first step, we propose the following parametric CDF approximations. For m=1,…,M,

$$ \begin{aligned} {F}_{m}(\delta)\approx\widetilde{F}_{m}(\delta)={c_{m}\delta^{p_{m}}}+1-c_{m},\quad 0<c_{m}\leq1,\\ \quad p_{m}>0,\; 0\leq\delta\leq1, \end{aligned} $$
((19))

where p m and c m are model parameters obtained by regression. In Section 5, we investigate the ability of the above approximations to represent client RC distributions.

Next, we further simplify the outage probability constraints. We use the following simpler model [36] for the outage probability in order to estimate N l for each layer:

$$\begin{array}{*{20}l} N_{l} &\approx\frac{S_{l}+\log_{b} {P^{l}_{\text{out}}/a}}{\delta_{l}}, \qquad 0<P^{l}_{\text{out}}\leq a, \end{array} $$
((20))

where a and b are obtained from the decoding failure probability function of the rateless code (9). Using this, the bandwidth constraint becomes

$$\begin{array}{*{20}l} \sum_{l=1}^{L} \frac{S_{l}+\log_{b} {P^{l}_{\text{out}}/a}}{\delta_{l}}\leq N_{\text{max}}, \qquad 0<P^{l}_{\text{out}}\leq a. \end{array} $$
((21))

After introducing a parameter transformation θ l =1/δ l , ∀l, we obtain

Problem 3.

(Convex formulation)

$$\begin{array}{*{20}l} \ & \min_{\{\theta_{l}\}_{l=1}^{L}} \sum_{m=1}^{M} \sum_{l=1}^{h_{m}} \hat{\alpha}_{m,l} \widetilde{F}_{m}({1}/{\theta_{l}}) \end{array} $$

subject to

$$\begin{array}{*{20}l} \text{UEP Constraints:\quad} & \theta_{l+1}-\theta_{l}\leq0, \quad\;\; l=1,\ldots,L-1,\\ & \theta_{L}\geq1, \\ \text{BW Constraint:\quad} & \sum_{l=1}^{L}{\left(S_{l}+\log_{b} {P^{l}_{\text{out}}/a}\right)}\theta_{l} \leq N_{\text{max}}. \end{array} $$

We prove that Problem 3 is convex in the Appendix. In Section 5, we examine the three problem formulations numerically in different application scenarios and assess their accuracies.
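The sketch below illustrates both steps on synthetic data: fitting the parametric CDF model (19) to empirical RC samples by least squares, and solving Problem 3 in the transformed variables θ l =1/δ l with a generic convex-optimization package (cvxpy is used here purely as an example). The sample RCs, layer sizes, outage constraints, and utility coefficients are assumptions for illustration.

```python
import numpy as np
import cvxpy as cp
from scipy.optimize import curve_fit

def fit_cdf(rc_samples):
    """Fit the parametric CDF model (19): F(d) ~ c * d^p + 1 - c."""
    d = np.sort(rc_samples)
    emp = np.arange(1, len(d) + 1) / len(d)        # empirical CDF values
    model = lambda x, c, p: c * x ** p + 1.0 - c
    (c, p), _ = curve_fit(model, d, emp, p0=[0.9, 2.0],
                          bounds=([1e-3, 1e-3], [1.0, 20.0]))
    return c, p

def solve_problem3(S, P_out, alpha_hat, cdf_params, h, N_max, a=0.85, b=0.567):
    """Problem 3: convex utility-loss minimization in theta_l = 1/delta_l."""
    L = len(S)
    w = np.array(S, float) + np.log(np.array(P_out) / a) / np.log(b)  # Eq. (20) weights
    theta = cp.Variable(L, pos=True)
    loss = 0
    for m, (c_m, p_m) in enumerate(cdf_params):
        for l in range(h[m]):
            # F_m(1/theta) = c_m * theta^(-p_m) + (1 - c_m); the constant part is dropped
            loss += alpha_hat[m][l] * c_m * cp.power(theta[l], -p_m)
    constraints = [theta[1:] <= theta[:-1],        # UEP: theta non-increasing
                   theta[L - 1] >= 1,              # delta_L <= 1
                   w @ theta <= N_max]             # bandwidth constraint (21)
    cp.Problem(cp.Minimize(loss), constraints).solve()
    return 1.0 / theta.value                       # recover the MNRCs delta_l

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    samples = [rng.uniform(0.2, 1.0, 1000), rng.uniform(0.4, 1.0, 1000)]  # class RC samples
    params = [fit_cdf(s) for s in samples]
    S, P_out = [3000, 2500, 4000], [1e-4, 4e-4, 5e-4]
    h, alpha_hat = [2, 3], [[0.20, 0.12, 0.00], [0.15, 0.10, 0.25]]
    print(solve_problem3(S, P_out, alpha_hat, params, h, N_max=13000))
```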

3 Utility smoothing

Source rate and/or service bandwidth fluctuations across consecutive video segments could result in variations of the optimized MNRCs. Hence, clients with RCs close to the MNRCs may experience quality variations across successive segments. One may encode video segments of longer durations to reduce rate fluctuations at the cost of additional server/client-terminal complexity, memory requirements, and delay [43, 44]. Below, we reformulate our problem to include suppression of client dissatisfaction due to quality variations.

Major quality variations are due to unwanted switchings between different layers. This mainly results from the client’s RC crossing the MNRC of a layer subscribed by the client. For example, if a client’s RC is always above the MNRC for the base layer, no frame dropping would occur (within the statistical assurance of the base layer outage probability constraint). Below, we extend our problem formulation to include suppression of MNRC variation. Numerical results shown later demonstrate the effectiveness of the suppression in reducing quality switchings, and more specifically, base-layer outage occurrences.

Let us assume that the client RC distributions do not change significantly across consecutive video segments, i.e., \(F_{m}^{(k)}(.)\approx F_{m}^{(k-1)}(.), \forall m\), where k is the video segment index. Similar to (4), we define the incremental dissatisfaction coefficients β m,l ≥0 to model the client disappointment for not decoding layer l of the current video segment that was successfully decoded previously. Consequently, the disappointment of a class m client who enjoyed layer l of the previous video segment but can only decode the current video segment up to a lower layer \(\hat {l}<l\) is proportional to \(\sum _{j=\hat {l}+1}^{l}\beta _{m,j}\). Using β m,l , and considering the non-decreasing property (2) of the MNRCs δ l , the combined client dissatisfaction due to MNRC fluctuations is expressed by

$$ {\selectfont{\begin{aligned} \mathcal{D}^{(k)}=\!\sum_{m=1}^{M}\sum_{l=1}^{h_{m}}\hat{\beta}_{m,l}\left[F_{m}\left(\delta_{l}^{(k)}\right)-F_{m}\left(\delta_{l}^{(k-1)}\right)\right]\mathcal{I}\left(\delta_{l}^{(k)}\geq\delta_{l}^{(k-1)}\right), \end{aligned}}} $$
((22))

where \(\hat {\beta }_{\textit {m,l}}=\pi _{m} {\beta }_{\textit {m,l}}\), and \(\delta _{l}^{(k-1)}\) and \(\delta _{l}^{(k)}, l=1,\ldots,h_{m}, \forall m\) are the MNRCs for the previous and the current video segments, respectively. Subtracting \(\mathcal {D}^{(k)}\) from the total utility in (7) to introduce a variation-induced penalty term leads to the following optimization problem.

Problem 4.

(Dynamic optimization)

$$\begin{array}{*{20}l} \ & \min_{\left\{\delta_{l}^{(k)}\right\}_{l=1}^{L}} \; \ (1-\lambda)\mathcal{D}^{(k)} +\lambda\sum_{m=1}^{M} \sum_{l=1}^{h_{m}} \hat{\alpha}_{m,l} F_{m}\left(\delta_{l}^{(k)}\right) &\\ \textrm{\textit{subject to}} \\ \ \text{UEP constraints:\quad} & \delta_{1}^{(k)}\ge0, \\ & \delta_{l}^{(k)}-\delta_{l+1}^{(k)}\leq0, \quad l=1,\ldots,L-1,\\ & \delta_{L}^{(k)}\leq1, \\ \ \text{BW constraint:\quad} & \sum_{l=1}^{L} N_{l}^{(k)} \leq N_{\text{max}}^{(k)}. \end{array} $$

0≤λ≤1 effects a balance between the two utility loss terms. A small λ tends to prevent the MNRCs from increasing excessively across two consecutive video segments, although a longer-term gradual increase of the MNRCs due to variations of the RC distributions F m (.) and the source bit rate is still possible. On the other hand, an exceedingly small λ may significantly reduce the overall utility provided to the clients. Hence, a judicious choice of λ prevents clients with the worst channels from unduly influencing the solution.
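For illustration, the helper below evaluates the dissatisfaction term (22) for two consecutive MNRC vectors; the CDFs, class structure, and the prior-scaled coefficients are assumed values.

```python
import numpy as np
from scipy import stats

def dissatisfaction(delta_k, delta_prev, beta_hat, F, h):
    """Eq. (22): expected fraction of clients that lose a previously decoded layer,
    weighted by the prior-scaled dissatisfaction coefficients beta_hat[m][l]."""
    D = 0.0
    for m, (F_m, h_m) in enumerate(zip(F, h)):
        for l in range(h_m):
            if delta_k[l] >= delta_prev[l]:          # MNRC moved up: some clients drop out
                D += beta_hat[m][l] * (F_m(delta_k[l]) - F_m(delta_prev[l]))
    return D

F = [stats.uniform(0.2, 0.8).cdf, stats.uniform(0.4, 0.6).cdf]
h = [2, 3]
beta_hat = [[0.5, 0.0, 0.0], [0.5, 0.0, 0.0]]        # penalize base-layer losses only
print(dissatisfaction([0.60, 0.72, 0.85], [0.55, 0.70, 0.82], beta_hat, F, h))
```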

4 Utility optimization using a perceptual quality metric

The proposed multicast optimization scheme can be tailored to fit different application scenarios. Here, we aim to maximize the clients’ subjective viewing experience by setting the marginal utility parameters α m,l using a perceptual quality model that was developed using subjective-viewing test results [33, 34]. Although peak signal-to-noise ratio (PSNR) [45] has been widely used as a measure of video quality, it has been reported to correlate poorly with the video quality ratings provided by human viewers, commonly reported as mean opinion scores (MOSs). The shortcomings of PSNR are more pronounced when comparing video playback at different spatial and temporal resolutions. More versatile objective quality measures have been proposed as estimates of subjective quality ratings. The objective video quality metric introduced in [33] and [34] provides a normalized MOS (NMOS) that can be used to quantify the quality across different spatial, temporal, and quantization resolutions

$$ {\fontsize{9.2}{12}{\begin{aligned} {}\text{NMOS} (s,f, \text{PSNR})&= \left(\frac{1-e^{-b_{s}\frac{s}{s_{\text{max}}}}}{1-e^{-b_{s}}}\right)\left(\frac{1-e^{-b_{f}\frac{f}{f_{\text{max}}}}}{1-e^{-b_{f}}} \right)\\ & \qquad \times \left(1-\frac{1}{1+e^{0.34(\text{PSNR}-b_{p})}} \right). \end{aligned}}} $$
((23))

Here, s and f represent the number of pixels and frame rate, respectively, while s max and f max are their maximum values. b s , b f , and b p are model parameters that depend on the video content [33, 34]. This NMOS model is conveniently used to illustrate the method proposed herein. More elaborate quality estimation methods such as the video quality metric (VQM) [46] algorithm may be advantageously employed. We should mention that a slightly advanced version of the NMOS model used in this work was published in [47].

As an illustration, consider a scenario wherein the highest video layer successfully recovered by a terminal has a spatial resolution lower than the playback capability; specifically, an HD terminal receiving an SD video. The terminal may display the SD video as received in the middle of the HD display or adapt the video to the display by upsampling. The perceptual quality metric in (23) is used as a yardstick to compare the perceptual effects of various possible adaptations.

NMOS m,l , non-decreasing with layer index l, represents the highest NMOS corresponding to the best possible adaptation—within the capabilities of the class m terminals—that can be performed on the media up to layer l≤h m . Recall that h m is the highest layer of the video stream that class m terminals can potentially decode. Furthermore, we may also model client playback preferences that can be set independently of the achieved video quality. For example, a certain application may require the spatial resolution not to be lower than some specific level. We may use preference weights 0≤W m,l ≤1, non-decreasing with respect to index l, to map NMOS to multicast utility while accounting for clients’ playback preferences. If class m users are unwilling to settle for media playback at any layer l<h m , then W m,l =0, ∀l<h m . Thus, we define our utility-rate function as

$$\begin{array}{*{20}l} \mathcal{U}_{m}(R_{l})=W_{m,l}\text{NMOS}_{m,l}, \end{array} $$
((24))

which can be applied to (4) to calculate the marginal utility coefficients α m,l ≥0.
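The chain from the NMOS model (23) through the utility-rate function (24) to the marginal utilities (4) can be written out directly. The sketch below does so using the NMOS parameters and per-layer frame-rate/PSNR values quoted for the Crew sequence in Section 5.5, together with the preference weights of Section 5.2; the spatial sizes and the exact per-layer adaptations are assumptions, so the resulting scores are only indicative of, not identical to, the values reported there.

```python
import numpy as np

def nmos(s, f, psnr, s_max, f_max, b_s, b_f, b_p):
    """Normalized MOS of Eq. (23) for spatial size s (pixels), frame rate f, and PSNR (dB)."""
    spatial  = (1.0 - np.exp(-b_s * s / s_max)) / (1.0 - np.exp(-b_s))
    temporal = (1.0 - np.exp(-b_f * f / f_max)) / (1.0 - np.exp(-b_f))
    fidelity = 1.0 - 1.0 / (1.0 + np.exp(0.34 * (psnr - b_p)))
    return spatial * temporal * fidelity

# Assumed per-layer adaptations for one client class: (pixels, frames/s, PSNR in dB),
# loosely following the Crew example (QCIF@15, QCIF@30, CIF@30).
layers = [(176 * 144, 15, 30.5), (176 * 144, 30, 35.1), (352 * 288, 30, 35.2)]
pars = dict(s_max=352 * 288, f_max=30, b_s=3.49, b_f=7.23, b_p=29.68)
W = [0.81, 0.9, 1.0]                                  # preference weights W_{m,l} = 0.9^(h_m - l)

U = [W[l] * nmos(s, f, p, **pars) for l, (s, f, p) in enumerate(layers)]  # Eq. (24)
alpha = np.diff([0.0] + U)                            # marginal utilities, Eq. (4)
print(np.round(U, 3), np.round(alpha, 3))
```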

5 Numerical simulations

In the following, we evaluate the performance of the proposed optimization scheme when applied to multicasting a video sequence with L=3 layers to heterogeneous clients. Bitstream parameters for three H.264/SVC encoded video sequences are summarized in Table 2. S l denotes the number of source symbols in each layer over a T seg=1 s time segment, and each symbol comprises 50 bytes.

Table 2 Specification of H.264/SVC coded video bitstreams

The bit rates given are for each layer. The OP constraints \(P_{\text {out}}=\left \{P_{\text {out}}^{1}, P_{\text {out}}^{2}, P_{\text {out}}^{3}\right \}=\left \{10^{-4}, 4\times 10^{-4},5\times 10^{-4}\right \} \) are enforced. An equal error protection (EEP) scheme is used as the baseline for the performance comparison. In the EEP scheme, the transmission resources allocated to each media layer are proportional to the relative size of that layer in the source bitstream, i.e.,

$$\begin{array}{*{20}l} N^{{e}}_{l}=\frac{{N_{\text{max}}}S_{l}}{\sum_{k=1}^{L} S_{k}} \qquad l=1,\ldots,L. \end{array} $$
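For reference, the EEP baseline split is a one-line computation; the layer sizes and bandwidth below are assumed example values, and rounding to integers is an implementation detail.

```python
import numpy as np

S = np.array([3000, 2500, 4000])     # per-layer source symbols (assumed)
N_max = 13000
N_eep = np.floor(N_max * S / S.sum()).astype(int)   # proportional EEP allocation
print(N_eep, N_eep.sum())
```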

We use the following metrics to evaluate the performance gain and efficiency of different schemes, respectively,

$$\begin{array}{*{20}l} \eta\uparrow\triangleq\frac{U-U_{e}}{U_{e}}\ \%, \qquad \varepsilon\triangleq\frac{U}{U_{\text{opt}}}\ \%, \end{array} $$
((25))

where U is the utility delivered to the clients, U opt is the maximum attained by using the optimal MNRCs, and U e corresponds to the utility of the EEP scheme. The interval 0<δ≤1 is partitioned into small sub-intervals and an exhaustive search is performed to find the MNRCs δ l ,l=1,…,L and subsequently U opt . Nevertheless, this process could be computationally expensive for a large number of sub-intervals and source layers. To obtain a sub-optimal solution with much lower complexity, first, the convex problem (Problem 3) is solved. Next, a constrained gradient descent (GD) algorithm is deployed to solve the simplified formulation (Problem 2) using the convex solution as a starting point. The performance measures of these two solutions are superscripted “CV” and “GD,” respectively. The multicast clients may experience a wide variety of channel conditions depending on fading and their distance to the transmitting station [48]. For wide-area cells, the range of channel qualities can be expected to be broader than reported in [48]. Thus, the uniform distribution and truncated Gaussian mixtures in Fig. 4 are selected to reflect distinct types of client RC statistics with different balances between the number of clients with poor and good channels. One thousand clients are considered for these scenarios. In the multi-class scenario, each class inherits a portion of clients based on the priors π m ,m=1,…,M. Next, samples of the client reception coefficients (RCs) are generated for each distribution.

Fig. 4 Class RC distributions: prototypical PDFs (top) and their corresponding CDFs (bottom). Approximated CDFs based on (19) are depicted as dashed red lines

5.1 Single-class scenario

In this scenario, all clients are assumed to be capable of decoding all three layers. Hence, M=1 and \(h\triangleq h_{1}={L=3}\). Table 3 exhibits the optimization results for N max=13,000 symbols.

Table 3 Performance of optimized allocation for the single-class scenario (N max=13,000, \(\alpha _{l} \triangleq \alpha _{1,l}, \forall l\))

The performance metrics are evaluated over four crafted utility settings. On average, the proposed optimization manages to increase the utility by a factor of more than 2 compared to the EEP solution. The EEP solution is highly inefficient when the majority of clients experience poor channels, as in the Δ-III distribution in Fig. 4. Note that the solution of the convex optimization yields an average efficiency of 95.25 %. Adding the GD search increases the efficiency to 99.50 %. The optimization results as well as the solution of the EEP approach for different service bandwidth constraints N max are depicted in Fig. 5. The metric values are averaged over the four distributions and utility settings (the 16 cases in Table 3).

Fig. 5 Single-class optimization results: average utility traces (left) and efficiency (right) of the single-class video multicast optimization for various transmission budgets

Due to the high efficiency of the initial convex solution, the GD step could be omitted in order to reduce computation without significant performance penalty.

5.2 Multi-class scenario

Next, we consider a scenario with M=2 client classes. The class 1 clients with CIF resolution displays may only decode the base layer and the first enhancement layer, i.e., h 1=2. The clients in class 2 have 4CIF resolution displays and decoders capable of decoding the entire video stream, i.e., h 2=3. The four sample distributions in Fig. 4 are used to model the client RC distributions of both client classes, resulting in 16 possible distribution pairings. For each pair of distributions, the simulation is performed with different prior values. The utility parameters are obtained using the perceptual quality metric in (24). The preference parameters are assumed to be \(W_{\textit {m,l}}=0.9^{{{h}_{m}}-l}, l\leq h_{m}\). The NMOS parameters for the test video sequence are extracted from [33] and [34]. The simulation results are shown in Table 4 in terms of metric values averaged over the 16 pairings.

Table 4 Simulation results for M=2 classes and L=3 layers (π 2=1−π 1)

On average, the initial allocation provided by the convex approximation achieves 97.57 % efficiency. Using the GD algorithm, the efficiency is increased to 99.80 %. Similar to the single-class scenario, most of the potential performance gain can be obtained using the convex optimization.

5.3 Reduced-feedback scenario

It is worthwhile to investigate the optimization performance when client RC statistics are collected only from a portion of the multicast clients. Limiting channel state information feedback could be an effective measure against feedback implosion at the server and for maintaining a low error rate for a multiple access feedback channel. For all 16 pairings of the RC distributions in Fig. 4 for the class 1 and 2 clients, an ensemble of n m =1000, m∈{1,2}, samples is drawn from each distribution to represent 1000 clients in each class (π 1=π 2=1/2). The performance is evaluated as a function of the fraction of clients from each class that successfully send their RC and media player capability information to the server—in terms of client-to-server feedback ratio (CSFR), 0≤CSFR≤1. This experiment is repeated 100 times for every CSFR and distribution pairing to ensure accuracy, especially for small CSFR values. The histograms of the received RC feedback messages are used as estimates of the actual class RC distributions and employed in the optimization. The performance is compared to the scenario in which full knowledge of all client RCs is revealed to the server, i.e., all clients successfully feed back their RCs to the server (CSFR = 1). For each CSFR, the performance metrics of all three tested video sequences are combined (4800 simulation runs per CSFR) and the results are illustrated in Fig. 6.

Fig. 6 Optimization results for the reduced-feedback scenario: performance of the proposed optimization as a function of the fraction of clients that successfully feed back their channel state information to the server, for the convex optimization (dashed lines) and the GD method (solid lines)

The proposed optimization demonstrates good tolerance to limited RC feedback. Optimization based on RC feedback from only 5 % of the clients still provides performance close to that obtained with 100 % feedback. Both the convex optimization and the GD algorithm maintain their performance in the limited feedback regime. For a smaller pool of 100 clients per class, the CSFR needed goes up to about 20 %. However, the small number of feedback clients, 20 in this example, should be manageable. We believe this robustness comes from the ability of the parametric CDF in (19) to capture the general characteristics of the client RC distributions.

5.4 Variable rate source scenario

Performing a resource allocation optimization repeatedly for each video segment means that computation intensity depends on segment duration T seg. One way to reduce computation is to use a large T seg though T seg may be limited by other considerations such as media bitstream access and formatting requirements. Another way is to optimize the video less frequently by using longer-term statistics. In this section, we aim to quantify the performance penalty incurred when the optimization uses longer-term statistics as compared to segment by segment optimization. Note that a video bitstream may exhibit large bit rate variations due to intra-coded frames. Longer video segments can reduce the rate fluctuations at the cost of additional buffering. Let us consider \(R_{l}^{(k)}=S_{l}^{(k)}/{T_{\text {seg}}}\) as the source rate for layer l of video segment k with duration T seg seconds and \(S_{l}^{(k)}\) source symbols. We model the source bitstream variations across different segments by

$$ S_{l}^{(k)}=S_{l} \left(1+ {\gamma}_{l}^{(k)}\right), $$
((26))

where S l is the average length of layer l obtained from Table 2 and \({\gamma }_{l}^{(k)}, l=1,..,L, \forall k\) are L independent and identically distributed uniform variables with support [−γ max, +γ max]. For the special case γ max=0, the source becomes a constant-rate source (CRS) and the optimal allocation is independent of any particular video segment k provided that the service bandwidth, the utility coefficients, and client RC distributions are fixed. The following efficiency measure quantifies the performance penalty due to performing the resource allocation optimization using average statistics,

$$ \varepsilon_{\text{CRS}}= \left\langle {U_{\text{CRS}}^{(k)}}/{U^{(k)}}\right\rangle. $$
((27))

Here, 〈.〉 denotes averaging over segments. U CRS (k) is the utility achieved for the k-th video segment when the resource allocation optimization is performed only once based on average rate-distortion statistics. Conversely, U (k) is the maximum attainable utility when a separate resource allocation optimization is conducted for each video segment. The maximum number of transmitted packets N max and the client RC distributions are assumed to remain unchanged during the entire multicast. For every γ max and 16 pairings of the candidate distributions, 100 samples of \({\gamma }_{l}^{(k)},\forall l\) are generated to represent variable source rates for 100 video segments. The results are plotted in Fig. 7 as a function of the max-to-min rate ratio (MRR) for the video rates generated by (26) where \(\text {MRR} \triangleq \frac {1+\gamma _{\max }}{1-\gamma _{\max }}\).
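Emulating the rate-variation model (26) is straightforward; the snippet below draws per-segment layer sizes for a given γ max and reports the corresponding MRR, using assumed average layer sizes.

```python
import numpy as np

rng = np.random.default_rng(1)
S_avg = np.array([3000, 2500, 4000])       # average per-layer sizes S_l (assumed)
gamma_max, n_segments = 0.3, 100

gamma = rng.uniform(-gamma_max, gamma_max, size=(n_segments, len(S_avg)))
S_k = S_avg * (1.0 + gamma)                # Eq. (26): per-segment layer sizes
mrr = (1.0 + gamma_max) / (1.0 - gamma_max)
print(S_k.shape, round(mrr, 2))
```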

Fig. 7 Optimization efficiency based on the long-term statistics of a variable-rate source for N max=15,000 (left) and N max=19,000 (right). Sources with different max-to-min rate ratios (MRRs) are emulated by modifying the distribution of \({\gamma }_{l}^{(k)}\) in (26)

As expected, optimization based on long-term statistics results in lower efficiency. However, the performance penalty is moderate since \(\varepsilon _{\text {CRS}}\) remains above 90 % even for a rate variation as large as MRR=19. We should mention that the efficiency \(\varepsilon _{\text {CRS}}\) of the EEP solution remains below 65 % for both N max=15,000 and N max=19,000, reconfirming the poor performance of the EEP solution for quality-aware multicast transmission.

5.5 Multi-segment quality smoothing

In this scenario, the proposed dynamic utility maximization (Problem 4), which penalizes quality fluctuations for clients with marginal RCs, is studied. We consider 9 consecutive segments of an H.264-SVC coded multilayer (L=3) Crew video sequence; each segment contains 32 frames with QCIF and CIF spatial layers, and the GOP size is 16 frames with one intra-coded frame starting each GOP. The base layer embeds the QCIF resolution with a frame rate of 15 frames/s. The quantization parameter (QP) for the base layer is set to 44. The first enhancement layer increases the frame rate from 15 to 30 frames/s and additionally provides a better quantization resolution with QP = 32. Finally, the last enhancement layer embeds the CIF resolution with the frame rate and QP identical to the previous layer. The NMOS model parameter values for this video sequence are b f =7.23 dB, b s =3.49 dB, and b p =29.68 dB [33, 34]. We observed that the encoded sequence provides nearly steady PSNRs across the video segments. The achieved PSNRs are 30.5, 35.1, and 35.2 dB for the base layer and the enhancement layers, respectively. Based on these PSNR values and the model parameters [33, 34], the average NMOS values are 0.31 for the base layer, 0.48 for the second layer, and 0.86 for the third layer. These scores manifest a peak variation of less than 5 % across different video segments. Two client classes (M=2) with equal population size (π 1=π 2=0.5) are assumed. Δ-II and Δ-IV from Fig. 4 model the RC distributions of the class 1 and 2 clients with QCIF and CIF screen resolutions, respectively. The video decoders of the class 1 clients are assumed to be capable of decoding the base layer as well as the first enhancement layer, while class 2 clients are capable of decoding all video layers.

Here, we aim to optimize the provided utility under a limited service bandwidth N max. Additionally, failure to decode the base layer is considered unacceptable for both classes. Therefore, we set the dissatisfaction coefficients β m,1=1 for all m and the remaining dissatisfaction coefficients to zero. The server is assumed to transmit N max=11,000 symbols for each segment, where each symbol consists of 16 bytes.
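
For scale, this per-segment budget corresponds to roughly 1.3 Mbit/s if each 32-frame segment plays at the enhancement-layer rate of 30 frames/s; the segment duration is an assumption made here only for illustration.

```python
# Per-segment FEC transmission budget expressed as an average bit rate.
n_max_symbols = 11_000        # fountain-code symbols transmitted per segment
symbol_bytes = 16             # bytes per symbol
segment_seconds = 32 / 30     # assumed: 32 frames played at 30 frames/s

bits_per_segment = n_max_symbols * symbol_bytes * 8
print(bits_per_segment)                           # 1,408,000 bits per segment
print(bits_per_segment / segment_seconds / 1e6)   # about 1.32 Mbit/s
```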

The video segment size, the optimized utility for various values of λ, and the optimized MNRCs \(\delta _{l}^{(k)}, l=1,2,3\) are plotted as a function of the segment index k in Fig. 8. This video sequence exhibits a significant rate increase at the fourth segment, which raises the MNRC for the base layer \(\delta _{1}^{(k)}\) when the quality fluctuation suppression term is nulled (λ=1). By increasing the penalty weight 1−λ (i.e., decreasing λ), the optimization increasingly penalizes solutions that allow the base-layer MNRC to increase. Hence, the portion of clients that face temporary outage is reduced and a more stable visual experience is provided. This is reflected in lower \(\delta _{1}^{(k)}\) values with smaller variations. Given a fixed service bandwidth, and since β m,l =0 for l≥2, the reduction in the MNRC fluctuations for the base layer comes at the cost of increased variations of the MNRCs for the enhancement layers, as reflected in the \(\delta _{2}^{(k)}\) and \(\delta _{3}^{(k)}\) traces in Fig. 8d, e. Note that the achieved utility is closer to the upper bound U max for the video segments with fewer source symbols. U max depends on the video content and its viewing quality but not on the source rate. However, the gap between the achieved utility and U max depends on the portion of clients who are unable to receive the video layers they desire. Hence, for a constant N max, increasing the source rate widens the gap, in proportion to the distribution of clients with marginal RCs. Additionally, we observe that the penalty term, for the different weights (1−λ), hardly affects the utility traces except for the fourth segment, which contains the sudden rate increase.
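
Schematically, the trade-off controlled by λ can be pictured as a scalarized objective. The sketch below is only an assumed form for intuition, with hypothetical callables standing in for the utility (7) and for a fluctuation penalty built from the dissatisfaction measure (22); Problem 3 itself is defined earlier in the paper and is not reproduced here.

```python
def dynamic_objective(mnrc, prev_mnrc, lam, utility_fn, penalty_fn):
    """Assumed convex-combination scalarization, for intuition only:
    weight lam on the multicast utility, weight (1 - lam) on a penalty that
    grows when the base-layer MNRC rises relative to the previous segment.
    With lam = 1 the fluctuation-suppression term is nulled, as in the text."""
    return lam * utility_fn(mnrc) - (1.0 - lam) * penalty_fn(mnrc, prev_mnrc)
```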

Fig. 8

Dynamic optimization statistics. Traces of a the video segment size, b the provided utility calculated using (7), and c–e the optimized MNRCs for the base layer and the enhancement layers after solving Problem 3 with N max=11,000

For the particular choice of β m,l values in this scenario, the average fraction of clients that successfully decode the base layer in one video segment but fail to decode it in the next can be obtained by averaging the dissatisfaction measure (22) over the video segments, \(\overline {\mathcal {D}}=\langle \mathcal {D}^{(k)}\rangle \). This metric is the probability that a satisfied user encounters frame drops or freezes in the next video segment. Table 5 reports \(\overline {\mathcal {D}}\) for various service bandwidths and values of λ. As expected, higher bandwidth and smaller λ both contribute to a more stable video quality experience. Table 5 also provides data for \(\mathcal {Z}\), the percentage of clients that experience outage in decoding the base layer at least once during the nine video segments. Because the client RC distributions model a significant portion of clients with poor channels, and because of the notable rate increase from the fourth segment onward, a portion of clients always experiences outage for a constant N max. When the variation suppression term \(\mathcal {D}\) is disabled, the outage percentage remains high even as the service bandwidth is substantially increased. However, a lower outage rate \(\mathcal {Z}\) is attainable by increasing the penalty weight 1−λ. If segments 4 to 9 are excluded from the statistics for N max=11,000, \(\mathcal {Z}\) is reduced from 16.03 to 3.24 % for λ=1, whereas for λ=0.3 the exclusion reduces \(\mathcal {Z}\) only slightly, from 3.6 to 2.85 %. This demonstrates the effectiveness of the proposed dynamic optimization in reducing the sensitivity of client dropout to high-rate video segments.
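
Both statistics can be computed directly from the optimized base-layer MNRC trace and a sample of client RCs. Below is a minimal Monte-Carlo sketch under the simplifying assumption, also used for the burst-length analysis that follows, that each client's RC is constant across the nine segments; the exact dissatisfaction measure (22) is not reproduced here.

```python
import numpy as np

def outage_statistics(client_rcs, base_mnrc):
    """Estimate D-bar and Z from the optimized base-layer MNRC trace delta_1^(k).

    client_rcs : 1-D array of client RC samples (assumed static over all segments)
    base_mnrc  : 1-D array of per-segment base-layer MNRCs
    Returns (D-bar, Z) in percent.
    """
    rcs = np.asarray(client_rcs, dtype=float)[:, None]   # shape (clients, 1)
    mnrc = np.asarray(base_mnrc, dtype=float)[None, :]   # shape (1, segments)
    outage = rcs < mnrc                                   # base-layer outage map

    # D^(k): fraction decoding the base layer in one segment but failing in the next
    d_k = np.mean(~outage[:, :-1] & outage[:, 1:], axis=0)
    d_bar = 100.0 * float(np.mean(d_k))

    # Z: fraction of clients that experience base-layer outage at least once
    z = 100.0 * float(np.mean(outage.any(axis=1)))
    return d_bar, z
```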

Table 5 \(\overline {\mathcal {D}}\), the average percentage of clients that decode the base layer in one segment but fail to decode it in the next, and \(\mathcal {Z}\), the percentage of clients that experience base-layer outage at least once over the nine video segments

The outage statistics based on the \(\mathcal {Z}\) measure for the EEP solution are 19–30 % higher than for the proposed dynamic optimization. Figure 9 provides the outage burst-length statistics, assuming that client channel quality is unchanged during the transmission of the nine video segments. The results are normalized to the number of maximum-length outage incidents for the EEP scenario. It is clear that the proposed optimization significantly reduces the number of outage incidents.

Fig. 9

Outage burst statistics. Outage burst statistics for different solutions of the dynamic optimization, N max=11,000

Furthermore, we investigate the performance of the proposed algorithm for a client with a time-varying RC. We consider a client with a poor average RC \(\overline {\delta _{c}}=0.2\). Based on the MNRC \(\delta _{1}^{(k)}\) traces in Fig. 8c, the viewing experience of this client would be disturbed by base-layer outage. We use truncated normal distributions with mean μ=0.2 and different standard deviations σ to model the probability distribution of its RC during the transmission of all nine segments. Examples of these distributions are depicted in Fig. 10. We calculate the frame-freeze rate (FFR), defined as the percentage of frames that are not received and may be replaced by the last decoded frame. The results are depicted in Fig. 11. Using the proposed optimization, the FFR is reduced by as much as 11 and 7 % for narrow RC distributions (σ<0.02) and wide distributions (σ>0.02), respectively. Note that the FFR for the EEP solution is more than 99 % for this client, due to the significantly higher values of the corresponding \(\delta _{1}^{(k)}\) traces.
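
This experiment can be emulated in a few lines: the client's per-segment RC is drawn from a truncated normal distribution, and a segment's frames are counted as frozen whenever that RC falls below the optimized base-layer MNRC. The truncation interval [0, 1] is an assumption made here; the distribution parameters follow the text.

```python
import numpy as np
from scipy.stats import truncnorm

def sample_client_rc(mu=0.2, sigma=0.02, low=0.0, high=1.0, n_segments=9, seed=0):
    """Per-segment RC samples for one client, from a normal distribution with
    mean mu and standard deviation sigma truncated to [low, high] (assumed bounds)."""
    a, b = (low - mu) / sigma, (high - mu) / sigma
    return truncnorm.rvs(a, b, loc=mu, scale=sigma, size=n_segments, random_state=seed)

def frame_freeze_rate(rc_per_segment, base_mnrc):
    """FFR: fraction of frames frozen (replaced by the last decoded frame).
    A segment's frames count as frozen when the client's RC for that segment falls
    below the base-layer MNRC; with equal-length segments this equals the fraction
    of segments in outage."""
    frozen = np.asarray(rc_per_segment) < np.asarray(base_mnrc)
    return float(frozen.mean())
```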

Fig. 10

RC distribution of a client with dynamic channel. RC distribution of a client with an average RC \(\overline {\delta _{c}}=0.2\)

Fig. 11

FFR for dynamic optimization. Frame-freeze rate for the test client with time-varying channel under dynamic optimization

In practice, it may be possible to vary the service bandwidth N max with the source symbol rate. For instance, a server simultaneously serving multiple independent video streams can exploit a well-known advantage of statistical multiplexing: the total source rate fluctuates far less than the individual source rates. In such a case, allowing N max to vary, in conjunction with the proposed method, would enable suppression of outage to negligible levels. The MNRCs can also be transmitted as side information at negligible cost. A client can then select for playback a video layer whose MNRC lies at a safe margin below the client's RC. The client may use the MNRCs of the previous segments as input to an algorithm that selects the actual enhancement layers for decoding and display, with the aim of producing the best viewing experience; a minimal sketch of such a selection rule is given below. MNRC smoothing helps the algorithm achieve a good viewing experience.
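
The sketch below illustrates one possible client-side selection rule; the function and parameter names are hypothetical, and the smoothing is a plain average of the previous segments' MNRCs rather than any scheme prescribed by the paper.

```python
def select_playback_layer(rc_estimate, mnrc_history, margin=0.05):
    """Pick the highest decodable layer whose smoothed MNRC sits a safe margin
    below the client's estimated RC.

    mnrc_history : list of per-segment MNRC vectors [delta_1, ..., delta_L]
                   for the previously received segments
    margin       : illustrative safety margin; a real client would tune this
    Returns the chosen layer index (1 = base layer), or 0 if no layer is safe.
    """
    n_layers = len(mnrc_history[0])
    # Smooth each layer's MNRC over the previous segments.
    smoothed = [sum(seg[l] for seg in mnrc_history) / len(mnrc_history)
                for l in range(n_layers)]
    best = 0
    for l, delta in enumerate(smoothed, start=1):
        if delta + margin <= rc_estimate:
            best = l     # keep the highest layer that clears the margin
    return best
```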

6 Conclusions

Considering the heterogeneity of client channels and terminal capabilities, we introduced a QoE optimization framework for video multicast that benefits from the flexibility offered by scalable video coding and fountain coding. The clients' ability to decode different video quality layers is exploited to maximize the overall utility of the multicast transmission. Utility is formulated based on a perceptual quality metric that can differentiate between the various possible adaptations of a multilayer video stream with a combination of spatial, temporal, and granular scalability. The optimization effects a balance between QoS-guaranteed and best-effort service. Catering to the probabilistic decoding nature of rateless codes, outage probability constraints are applied to guarantee that the video quality layers are received with a high level of assurance. Clients that cannot be served under such guarantees may be served with a lower playback quality from the lower video layers; clients demanding high-quality playback but present in small numbers may be treated similarly, and clients with exceedingly poor channels may be dropped from the multicast. On the other hand, given a sufficient transmission rate, clients are served the highest playback quality level they desire. The optimization complexity is independent of the number of clients and scales only with the number of client classes. Additionally, a convex optimization approximation is proposed which has been shown to attain close-to-optimal performance with even lower computational complexity. The proposed optimization framework is also shown to provide robust performance when only limited client feedback information is available. Finally, by introducing a penalty term to the multicast utility, the QoE optimization is extended to suppress client playback quality variations due to source bit rate and/or service bandwidth fluctuations. A possible direction for future work is to assess the efficacy of the proposed scheme in more full-fledged application scenarios, similar to [49].

7 Appendix

7.1 Convexity analysis of Problem 3

For the convexity analysis, we form the Hessian matrix H from the second derivatives of the cost function with respect to the optimization variables θ_j, j=1,…,L,

$$\begin{aligned} \mathbf{H}=\left[H_{jk}\right]&=\left[\frac{\partial^{2}}{\partial \theta_{j} \partial \theta_{k}} \sum_{m=1}^{M} \sum_{l=1}^{h_{m}} \hat{\alpha}_{m,l} \widetilde{F}_{m}({1}/{\theta_{l}})\right] \quad j,k=1,\ldots,L. \end{aligned} $$

The diagonal elements are obtained by differentiating (19):

$$\begin{array}{*{20}l} H_{jj}&=\sum_{m=1}^{M} \hat{\alpha}_{m,j} \frac{\partial^{2}}{\partial {\theta_{j}^{2}}} \left({c_{m}\theta_{j}^{-p_{m}}+1-c_{m}}\right)\\ &=\sum_{m=1}^{M} \hat{\alpha}_{m,j}c_{m}p_{m}(p_{m}+1){\theta_{j}^{-(p_{m}+2)}}. \end{array} $$
(28)

Since \(c_{m},p_{m},\hat {\alpha }_{m,j}\geq 0\) and \(\theta_{j}\geq 1\) for all j and m, we conclude that \(H_{jj}\geq 0\) for all j.

The off-diagonal terms \(H_{jk}\), j≠k, are zero because each term of the cost function depends on a single variable θ_l, so the mixed partial derivatives vanish. As a result, H is positive semidefinite and the cost function is convex [50].
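
The diagonal entries in (28) can also be verified symbolically; the snippet below assumes only the form \(\widetilde{F}_{m}(1/\theta)=c_{m}\theta^{-p_{m}}+1-c_{m}\) used above.

```python
import sympy as sp

theta, c, p = sp.symbols('theta c p', positive=True)

# Per-class term of the cost function, F~_m(1/theta) = c*theta**(-p) + 1 - c.
F = c * theta**(-p) + 1 - c

second = sp.diff(F, theta, 2)
# Matches (28): c*p*(p+1)*theta**(-(p+2)), which is nonnegative for theta >= 1.
assert sp.simplify(second - c * p * (p + 1) * theta**(-(p + 2))) == 0
print(second)
```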

The UEP constraints in Problem 3 are linear since they are of the form \(\theta_{j+1}-\theta_{j}\leq 0\) with j=1,…,L and \(\theta _{L+1}\triangleq 1\). Furthermore, the bandwidth constraint is also linear. Hence, the constraints form a polyhedron, which is a convex set. Since the convexity of the cost function was previously established, Problem 3 is a convex optimization problem.

References

1. H Liu, M El Zarki, Performance of H.263 video transmission over wireless channels using hybrid ARQ. IEEE J. Sel. Areas Commun. 15(9), 1775–1786 (1997)
2. DJC MacKay, Fountain codes. IEE Proc. Commun. 152(6), 1062–1068 (2005)
3. D Gozálvez, D Gómez-Barquero, T Stockhammer, M Luby, AL-FEC for improved mobile reception of MPEG-2 DVB-T transport streams. Int. J. Digit. Multimed. Broadcast. 2009, 10 (2009)
4. S McCanne, M Vetterli, V Jacobson, Low-complexity video coding for receiver-driven layered multicast. IEEE J. Sel. Areas Commun. 15(6), 983–1001 (1997)
5. AE Mohr, EA Riskin, RE Ladner, Unequal loss protection: graceful degradation of image quality over packet erasure channels through forward error correction. IEEE J. Sel. Areas Commun. 18(6), 819–828 (2000)
6. PA Chou, AE Mohr, A Wang, S Mehrotra, Error control for receiver-driven layered multicast of audio and video. IEEE Trans. Multimed. 3(1), 108–122 (2001)
7. DG Sachs, A Raghavan, K Ramchandran, Wireless image transmission using multiple-description-based concatenated codes. Proc. SPIE 3974, 300–311 (2000)
8. D Vukobratovic, V Stankovic, D Sejdinovic, L Stankovic, Z Xiong, Scalable video multicast using expanding window fountain codes. IEEE Trans. Multimed. 11(6), 1094–1104 (2009)
9. MCO Bogino, P Cataldi, M Grangetto, E Magli, G Olmo, Sliding-window digital fountain codes for streaming of multimedia contents. Proc. IEEE Int. Symp. Circuits Systems (ISCAS '07), 3467–3470 (2007)
10. S Ahmad, R Hamzaoui, MM Al-Akaidi, Unequal error protection using fountain codes with applications to video communication. IEEE Trans. Multimed. 13(1), 92–101 (2011)
11. Z Luo, L Song, S Zheng, N Ling, Raptor codes based unequal protection for compressed video according to packet priority. IEEE Trans. Multimed. 15(8), 2208–2213 (2013)
12. CH Wang, JK Zao, HM Chen, PL Diao, CM Chiu, A rateless UEP convolutional code for robust SVC/MGS wireless broadcasting. IEEE International Symposium on Multimedia (ISM), 279–289 (2011)
13. J-P Wagner, J Chakareski, P Frossard, Streaming of scalable video from multiple servers using rateless codes. IEEE International Conference on Multimedia and Expo, 1501–1504 (2006)
14. P Wu, Y Hu, Optimal layered video IPTV multicast streaming over mobile WiMAX systems. IEEE Trans. Multimed. 13(6), 1395–1403 (2011)
15. J Xu, R Hormis, X Wang, Scalable video multicast on broadcast channels. IEEE Glob. Telecommun. Conf. (GLOBECOM '09), 1–8 (2009)
16. W-H Kuo, W Liao, Utility-based radio resource allocation for QoS traffic in wireless networks. IEEE Trans. Wirel. Commun. 7(7), 2714–2722 (2008)
17. S Sharangi, R Krishnamurti, M Hefeeda, Energy-efficient multicasting of scalable video streams over WiMAX networks. IEEE Trans. Multimed. 13(1), 102–115 (2011)
18. V Vukadinovic, G Dán, Multicast scheduling for scalable video streaming in wireless networks, in Proceedings of the First Annual ACM SIGMM Conference on Multimedia Systems (MMSys '10) (ACM, New York, NY, USA, 2010), pp. 77–88
19. W Ji, Z Li, Y Chen, Joint source-channel coding and optimization for layered video broadcasting to heterogeneous devices. IEEE Trans. Multimed. 14(2), 443–455 (2012)
20. European Telecommunications Standards Institute, Digital Video Broadcasting (DVB): Transmission system for handheld terminals (DVB-H). ETSI EN 302 304 V1.1.1 (2004). Available: http://www.etsi.org
21. 3rd Generation Partnership Project, Multimedia broadcast/multicast service (MBMS); protocols and codecs. 3GPP TS 26.346 V10.0.0 (2011). Available: http://www.3gpp.org
22. M Luby, M Watson, T Gasiba, T Stockhammer, W Xu, Raptor codes for reliable download delivery in wireless broadcast systems. 3rd IEEE Consum. Commun. Netw. Conf. (CCNC '06), 192–197 (2006)
23. M Luby, T Gasiba, T Stockhammer, M Watson, Reliable multimedia download delivery in cellular broadcast networks. IEEE Trans. Broadcast. 53(1), 235–246 (2007)
24. KK Yen, YC Liao, CL Chen, JK Zao, H Chang, Integrating non-repetitive LT encoders with modified distribution to achieve unequal erasure protection. IEEE Trans. Multimed. 15(8), 2162–2175 (2013)
25. T Paila, R Walsh, M Luby, RV Roca, FLUTE-file delivery over unidirectional transport. IETF RFC 6726 (2012)
26. I de Fez, JC Guerri, An adaptive mechanism for optimal content download in wireless networks. IEEE Trans. Multimed. 16(4), 1140–1155 (2014)
27. D Lecompte, F Gabin, Evolved multimedia broadcast/multicast service (eMBMS) in LTE-advanced: overview and Rel-11 enhancements. IEEE Commun. Mag. 50(11), 68–74 (2012)
28. T Stockhammer, MG Luby, DASH in mobile networks and services, in Visual Communications and Image Processing (VCIP), 2012 IEEE (2012), pp. 1–6
29. SY Chang, H Chiao, Adaptive streaming schemes for MPEG-DASH over WiFi multicast, in Communication Technology (ICCT), 2013 15th IEEE International Conference on (2013), pp. 168–173
30. R Belda, I de Fez, F Fraile, P Arce, JC Guerri, Hybrid FLUTE/DASH video delivery over mobile wireless networks. Trans. Emerg. Telecommun. Technol. 25(11), 1070–1082 (2014)
31. C Bouras, N Kanakis, V Kokkinos, A Papazois, Application layer forward error correction for multicast streaming over LTE networks. Int. J. Commun. Syst. 26(11), 1459–1474 (2013)
32. A Shokrollahi, Raptor codes. IEEE Trans. Inf. Theory 52(6), 2551–2567 (2006)
33. Y Xue, YF Ou, Z Ma, Y Wang, Perceptual video quality assessment on a mobile platform considering both spatial resolution and quantization artifacts. Proc. 18th Int. Packet Video Workshop (PVW '10), 201–208 (2010)
34. YF Ou, Z Ma, T Liu, Y Wang, Perceptual quality assessment of video considering both frame rate and quantization artifacts. IEEE Trans. Circuits Syst. Video Technol. 21(3), 286–298 (2011)
35. P Cataldi, M Grangetto, T Tillo, E Magli, G Olmo, Sliding-window Raptor codes for efficient scalable wireless video broadcasting with unequal loss protection. IEEE Trans. Image Process. 19(6), 1491–1503 (2010)
36. A Bakhshali, W-Y Chan, Y Cao, SD Blostein, Outage probability of rateless codes in memoryless erasure channels. 26th Queen's Biennial Symposium on Communications (QBSC '12), 150–153 (2012)
37. GJ Sullivan, J Ohm, WJ Han, T Wiegand, Overview of the high efficiency video coding (HEVC) standard. IEEE Trans. Circuits Syst. Video Technol. 22(12), 1649–1668 (2012)
38. M Luby, A Shokrollahi, M Watson, T Stockhammer, L Minder, RaptorQ forward error correction scheme for object delivery. IETF RFC 6330 (2011)
39. J Perry, PA Iannucci, KE Fleming, H Balakrishnan, D Shah, Spinal codes, in Proceedings of the ACM SIGCOMM 2012 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication (ACM, 2012), pp. 49–60
40. A Bakhshali, W-Y Chan, Y Cao, SD Blostein, Multi-scalable video multicast for heterogeneous playback requirements using a perceptual utility measure. Proc. 2012 IEEE 14th Int. Workshop on Multimedia Signal Processing (MMSP '12), 260–265 (2012)
41. G Liva, E Paolini, M Chiani, Performance versus overhead for fountain codes over F_q. IEEE Commun. Lett. 14(2), 178–180 (2010)
42. Z Li, X Zhu, A Begen, B Girod, IPTV multicast with peer-assisted lossy error control. IEEE Trans. Circuits Syst. Video Technol. 22(3), 434–449 (2012)
43. PA Assuncao, M Ghanbari, Buffer analysis and control in CBR video transcoding. IEEE Trans. Circuits Syst. Video Technol. 10(1), 83–92 (2000)
44. S Chand, H Om, Modeling of buffer storage in video transmission. IEEE Trans. Broadcast. 53(4), 774–779 (2007)
45. S Winkler, P Mohandas, The evolution of video quality measurement: from PSNR to hybrid metrics. IEEE Trans. Broadcast. 54(3), 660–668 (2008)
46. MH Pinson, S Wolf, A new standardized method for objectively measuring video quality. IEEE Trans. Broadcast. 50(3), 312–322 (2004)
47. Y Ou, Y Xue, Y Wang, Q-STAR: a perceptual video quality model considering impact of spatial, temporal, and amplitude resolutions. IEEE Trans. Image Process. 23(6), 2473–2486 (2014)
48. Z Liu, Z Wu, P Liu, H Liu, Y Wang, Layer bargaining: multicast layered video over wireless networks. IEEE J. Sel. Areas Commun. 28(3), 445–455 (2010)
49. J Wu, Y Shang, J Huang, X Zhang, B Cheng, J Chen, Joint source-channel coding and optimization for mobile video streaming in heterogeneous wireless networks. EURASIP J. Wirel. Commun. Netw. 2013(1), 1–16 (2013)
50. S Boyd, L Vandenberghe, Convex Optimization (Cambridge University Press, Cambridge, 2004)

Acknowledgements

This work was supported by the Natural Sciences and Engineering Research Council of Canada.

Author information

Correspondence to Ali Bakhshali.

Additional information

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article

Bakhshali, A., Chan, WY., Blostein, S.D. et al. QoE optimization of video multicast with heterogeneous channels and playback requirements. J Wireless Com Network 2015, 260 (2015). https://doi.org/10.1186/s13638-015-0485-0
