
Performance research on time-triggered Ethernet based on network calculus

Abstract

In this paper, we study the performance of time-triggered Ethernet based on network calculus. Three kinds of data with different priorities are introduced into the analysis. Firstly, we use network calculus to obtain theoretical bounds on the network performance parameters. Then, we design and implement the time-triggered Ethernet clock synchronization, redundancy fault tolerance, multi-data communication, service-performance model, node model, and network model in two different topologies. Lastly, we compare the simulation values with the network calculus values. The results show that the performance parameters of time-triggered Ethernet meet the theoretical values well and that the time-triggered Ethernet specification is feasible.

1. Introduction

In recent years, supporting real-time tasks on top of the existing Ethernet has become a hot research topic. Among the competing approaches, time-triggered Ethernet (TTE) stands out: it combines the determinism, fault-tolerance mechanisms, and real-time performance of time-triggered technology with the flexibility, dynamism, and best-effort service of ordinary Ethernet [1, 2]. TTE provides support for synchronous, highly reliable embedded computing and networking and for fault-tolerant design. Because of these characteristics, TTE is widely used in safety-critical systems such as avionics, transport systems, and industrial automation. Analyzing TTE with network calculus is a new approach, and there are no existing research results.

‘Time-triggered’ [3] means predictable and deterministic: all activities in the network run in a way that is planned over time. The definition of time-triggered Ethernet is as follows [4, 5]:

TTE = Ethernet + Clock synchronization + Time-triggered communication + Rate-constrained traffic + Guaranteed transport.

In this article, we first construct a time-triggered service-performance model based on network calculus. Meanwhile, we construct a simulation model that realizes the TTE clock synchronization algorithm, protocol, and network components, and we use this simulation model to analyze the performance parameters of TTE. Lastly, we compare the simulation results with the network calculus results obtained from the two models. The conclusions show that TTE can meet the needs of safety-critical systems well.

2. The service-performance model of TTE based on network calculus

Network calculus is a recently developed theory of network quality of service (QoS) based on idempotent mathematics and residuation theory [6]. Its mathematical foundation is the min-plus and max-plus algebra. Network calculus yields bounds on network performance: deterministic network calculus uses arrival curves and service curves to derive deterministic bounds on the performance parameters [7, 8], such as the maximum delay of a data flow and the backlog of buffered data in the communication nodes.
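
To make this concrete, the arrival curves used in this paper are leaky-bucket curves and the service curves are rate-latency curves. For a flow with arrival curve $\alpha(t) = r t + b$ served by a node offering $\beta(t) = R\,[t-T]^+$ with $r \le R$, the basic theorems of [9] give the deterministic bounds

$$D_{\max} = T + \frac{b}{R} \quad \text{(delay)}, \qquad B_{\max} = b + rT \quad \text{(backlog)},$$

and all the delay bounds derived below follow this pattern.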

2.1 The service-performance model parameter of TTE

Based on network calculus theory, we give some definitions as follows. A micro-data stream is defined as the bit stream of data of the same type originating from one transmitting node.

2.2 Network service curve and service delay

We assume that the arrival curve of the micro-data stream $F_j$ with priority $p(j)$ is $\alpha_j(t) = r_j t + b_j$. Then $A_j(t) = \sum_{i:p(i)=p(j)} \alpha_i(t)$ is the arrival curve of the aggregated flow $G_j$, and $A^H_j(t) = \sum_{i:p(i)>p(j)} \alpha_i(t)$ is the arrival curve of the aggregated flow with higher priority.

We also assume that the service capability of the switch is $C$; that is, the total service curve that the switch provides to all data streams passing through it is $\beta_{R,T}(t) = C\,[t-0]^+$. Based on corollary 6.2.1 [9], we obtain the service curve $\beta^P_{G_j}$ as formula 1:

$$\beta^P_{G_j}(t) = \left[\beta_{R,T}(t) - l^{j}_{\max}\right]^+ = C\left[t - \frac{l^{j}_{\max}}{C}\right]^+$$
(1)

Based on corollary 6.2.1 [9], we obtain the service curve of the aggregated flow $G_j$ as formula 2:

$$\beta_{G_j}(t) = \left[\beta^P_{G_j}(t) - A^H_j(t)\right]^+$$
(2)

Combining $A^H_j(t) = \sum_{i:p(i)>p(j)} \alpha_i(t) = \left(\sum_{i:p(i)>p(j)} r_i\right)t + \sum_{i:p(i)>p(j)} b_i$ with Equation 2, we obtain formula 3:

$$\beta_{G_j}(t) = \left[C\left(t - \frac{l^{j}_{\max}}{C}\right) - \left(\sum_{i:p(i)>p(j)} r_i\right)t - \sum_{i:p(i)>p(j)} b_i\right]^+ = \left(C - \sum_{i:p(i)>p(j)} r_i\right)\left[t - \frac{l^{j}_{\max} + \sum_{i:p(i)>p(j)} b_i}{C - \sum_{i:p(i)>p(j)} r_i}\right]^+$$
(3)

So, the service rate and service delay of the data stream G j are, respectively, formulas 4 and 5:

$$R^G_j = \max\left(C - \sum_{i:p(i)>p(j)} r_i,\; 0\right)$$
(4)
$$T^G_j = \frac{l^{j}_{\max} + \sum_{i:p(i)>p(j)} b_i}{R^G_j}$$
(5)

Generally, when $R^G_j > 0$, the switch can forward the data stream $G_j$. When $R^G_j = 0$, the rate takes its minimum value and the switch has no capacity left to forward $G_j$. The case $R^G_j < 0$ is meaningless, which is why the maximum with 0 is taken in formula 4.

Considering the micro-data stream $F_j$, all micro-data inside the aggregated stream $G_j$ is served in FIFO order. In this way, we can deduce the service curve of the micro-data stream $F_j$. Setting $\beta_{R^G_j,T^G_j}(\theta) = \left(\theta - T^G_j\right) R^G_j = A_j(0) - b_j$ gives $\theta = T^G_j + \left(A_j(0) - b_j\right)/R^G_j$. Based on assumption 6.2.1 [9], the service curve of the micro-data stream $F_j$ is as follows:

$$\begin{aligned}
\beta_{F_j}(t) &= \left[\beta_{G_j}(t) - \left(A_j(t-\theta) - \alpha_j(t-\theta)\right)\right]^+ \\
&= \left[\beta_{G_j}(t) - \sum_{i:p(i)=p(j)} \alpha_i(t-\theta) + \alpha_j(t-\theta)\right]^+ \\
&= \left[\beta_{G_j}(t) - \left(\sum_{i:p(i)=p(j)} r_i - r_j\right)(t-\theta) - \left(\sum_{i:p(i)=p(j)} b_i - b_j\right)\right]^+ \\
&= \left[R^G_j\left(t - T^G_j\right) - \left(\sum_{i:p(i)=p(j)} r_i - r_j\right)\left(t - T^G_j - \frac{\sum_{i:p(i)=p(j)} b_i - b_j}{R^G_j}\right) - \left(\sum_{i:p(i)=p(j)} b_i - b_j\right)\right]^+ \\
&= \left(R^G_j - \sum_{i:p(i)=p(j)} r_i + r_j\right)\left[t - \frac{R^G_j T^G_j + \sum_{i:p(i)=p(j)} b_i - b_j}{R^G_j}\right]^+
\end{aligned}$$
(6)

So, the service rate and service delay of the micro-data stream F j are, respectively, as formulas 7 and 8:

$$R_j = \max\left(R^G_j - \sum_{i:p(i)=p(j)} r_i + r_j,\; 0\right)$$
(7)
$$T_j = T^G_j + \frac{\sum_{i:p(i)=p(j)} b_i - b_j}{R^G_j} + \frac{l_j}{R^G_j}$$
(8)

Generally, when $R_j > 0$, the switch can forward the data stream $F_j$. When $R_j = 0$, the rate takes its minimum value and the switch has no capacity left to forward $F_j$. The case $R_j < 0$ is meaningless.
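
For readers who want to apply formulas 4, 5, 7, and 8 directly, the following minimal C sketch (our own illustration, not the authors' code) computes the aggregated-flow and micro-flow service parameters from the leaky-bucket parameters of the competing streams. All rates are in bit/s and all bursts and frame lengths in bits; the latency helpers assume the corresponding service rate is positive.

    /* Formula 4: service rate of the aggregated flow G_j,
     * where r_higher_sum is the total rate of all higher-priority streams. */
    static double agg_rate(double C, double r_higher_sum)
    {
        double r = C - r_higher_sum;
        return r > 0.0 ? r : 0.0;
    }

    /* Formula 5: service latency of the aggregated flow G_j,
     * where b_higher_sum is the total burst of all higher-priority streams
     * and R_G > 0 is the rate from formula 4. */
    static double agg_latency(double l_max_j, double b_higher_sum, double R_G)
    {
        return (l_max_j + b_higher_sum) / R_G;
    }

    /* Formula 7: service rate of the micro-data stream F_j inside G_j,
     * where r_same_sum is the total rate of all streams with the same priority. */
    static double micro_rate(double R_G, double r_same_sum, double r_j)
    {
        double r = R_G - (r_same_sum - r_j);
        return r > 0.0 ? r : 0.0;
    }

    /* Formula 8: service latency (delay bound) of the micro-data stream F_j,
     * where b_same_sum is the total burst of the same-priority streams
     * and l_j is the frame length of F_j. */
    static double micro_latency(double T_G, double b_same_sum, double b_j,
                                double l_j, double R_G)
    {
        return T_G + (b_same_sum - b_j) / R_G + l_j / R_G;
    }

The star-topology numbers worked out in Section 4.1.1 can be reproduced by calling these helpers with the parameters listed there.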

The network calculus value $T_j$ is an upper bound on the switch processing delay. Following the same reasoning, we can apply the theory to the TTE network scenario and deduce the deterministic upper bound of the delay in a star topology.

3. The design and realization of the simulation model

3.1 The design of TTE simulation model

TTE consists of two kinds of network devices: TTE switches and TTE terminals. Depending on their position and role in TTE clock synchronization, TTE network nodes take one of three roles: synchronization master (SM), compression master (CM), or synchronization client (SC). A TTE switch can act as SM, CM, or SC, while a TTE terminal can act only as SM or SC.

3.1.1 TTE switches

The main function of the TTE switch is to forward time-triggered data (TT data), rate-constrained data (RC data), and traditional Ethernet best-effort data (BE data). The workflow of the TTE switch is shown in Figure 1 [10].

Figure 1. The workflow of the TTE switch.

The two most important differences between TTE switches and traditional switches are the clock synchronization module and the admission control module.
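
The paper does not spell out the internals of these modules, but, as an illustration of the admission control idea, a time-triggered frame is accepted only if it arrives inside its scheduled receive window. The sketch below is a hypothetical simplification of such a check (the names tt_sched_entry and tt_admit are ours, not taken from the TTE specification):

    #include <math.h>
    #include <stdbool.h>

    /* Hypothetical schedule entry for one time-triggered virtual link. */
    typedef struct {
        double period;   /* scheduling period of the TT frame, in seconds    */
        double offset;   /* start of the acceptance window within the period */
        double window;   /* width of the acceptance window, in seconds       */
    } tt_sched_entry;

    /* Accept the frame only if its arrival falls inside the scheduled window. */
    static bool tt_admit(const tt_sched_entry *e, double arrival_time)
    {
        double phase = fmod(arrival_time, e->period);  /* position inside the current period */
        return phase >= e->offset && phase < e->offset + e->window;
    }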

3.1.2 TTE terminals

The main function of the TTE terminal is to transmit and receive data. Like the TTE switches, the TTE terminals support TT data, traditional Ethernet BE data, and RC data. The workflow of the TTE terminal is shown in Figure 2 [10].

Figure 2. The workflow of the TTE terminal.

3.2 TTE simulation performance

3.2.1 The constituent part of the TTE end-to-end delay

The network topology is shown in Figure 3. Node_1 and node_2 are TTE terminals, while node_0 and node_3 are TTE switches. Node_1 generates new data and sends it to node_2 through node_0 and node_3. The frame size is 100 bytes, and the creation interval is 0.00001 s. The service capability of the switch is 100 Mbps, and the link rate is 100 Mbps. The simulation result for the end-to-end delay is shown in Figure 4.

Figure 3. The simple topology.

Figure 4. The simulated end-to-end delay in the simple topology.

The function op_sim_time() returns the current simulation time on the simulation platform, and we call it when a node receives a frame. The function op_pk_creation_time_get(pkt_ptr) returns the creation time of a frame, where pkt_ptr is the pointer to the frame. The one-way end-to-end delay tt_ete_delay is then

tt_ete_delay = op_sim_time() − op_pk_creation_time_get(pkt_ptr).
(9)
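
Inside the receiving node's process model, this measurement takes only a few lines of OPNET-style C. The sketch below assumes the kernel calls named above and a hypothetical statistic handle ete_delay_stat registered elsewhere in the model:

    #include <opnet.h>

    extern Stathandle ete_delay_stat;   /* hypothetical handle for the end-to-end delay statistic */

    static void record_ete_delay(void)
    {
        Packet *pkt_ptr      = op_pk_get(op_intrpt_strm());         /* frame just received        */
        double  tt_ete_delay = op_sim_time()                        /* current simulation time    */
                             - op_pk_creation_time_get(pkt_ptr);    /* creation time of the frame */
        op_stat_write(ete_delay_stat, tt_ete_delay);                 /* record one delay sample    */
        op_pk_destroy(pkt_ptr);                                      /* frame processed, free it   */
    }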

In Figure 4, the x axis is the simulation time and the y axis is the end-to-end delay. The same applies to Figures 5 and 6.

Figure 5. Comparison between the simulation and network calculus results. (a) Time-triggered data. (b) Rate-constrained data. (c) Best-effort data.

Figure 6. The cascaded topology and the three types of data. (a) The cascaded topology. Comparison between simulation and network calculus results: (b) time-triggered data, (c) rate-constrained data, (d) best-effort data.

The conclusion is that the TTE end-to-end delay in the above topology is the sum of the sending delays of node_1, node_0, and node_3 and the processing delays of node_0 and node_3, as illustrated below.
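
As a rough illustration with the numbers of this scenario (100-byte frames on 100 Mbps $\approx 10^8$ bps links; our own arithmetic, not a statement from the simulation):

$$t_{send} = \frac{100 \times 8 \text{ bits}}{10^{8} \text{ bps}} = 8\ \mu\text{s}, \qquad D_{ete} = 3 \times t_{send} + t_{proc}(node\_0) + t_{proc}(node\_3),$$

where the two processing-delay terms are exactly the quantities bounded by the service latency $T_j$ derived in Section 2.2.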

4. Comparison analysis of the simulation results and network calculus results

In Section 2, we analyzed and derived the maximum delay of TTE data in a specified network topology using the theory of network calculus. Then, we conducted a statistical analysis of the time-triggered Ethernet performance parameters with simulation tools. Here, we compare the results from the two approaches to verify whether the TTE simulation results satisfy the maximum-delay constraints derived from network calculus.

4.1 The star topology

The star topology is shown in Figure 7. There are seven TTE terminals and one TTE switch in the topology. Each TTE terminal communicates with the other terminals through the TTE switch. The TTE switch forwards three kinds of data: TT data, RC data, and BE data.

Figure 7. The star topology.

4.1.1 The network calculus process

In the star topology simulation model, there are three types of data traffic: time-triggered real-time data (TT), rate-constrained data (RC), and best-effort data (BE). The rules for the terminal nodes sending frames are as follows (all frames are standard Ethernet frames):

For the TT data stream, node_1, node_2, and node_3 send two frames per 0.025 s; node_4, node_5, node_6, and node_7 send one frame per 0.025 s; the TT frame size varies from 100 to 500 bytes. For the RC data stream, each node sends one frame per second; the RC frame size varies from 100 to 500 bytes. For the BE data stream, each node sends 10,000 frames per second; the BE frame size varies from 100 to 500 bytes.

According to these rules, the arrival curves of the three kinds of data streams in the star topology are (taking packet_size = 100 bytes as an example)

$$\alpha_{TT}(t) = 4{,}000\,t + 100 \text{ bytes}$$
$$\alpha_{RC}(t) = 8{,}000\,t + 100 \text{ bytes}$$
$$\alpha_{BE}(t) = 1{,}000{,}000\,t + 100 \text{ bytes}.$$

So, the parameters are as follows:

$r_{TT} = 4{,}000$, $b_{TT} = 100$, $r_{RC} = 8{,}000$, $b_{RC} = 100$, $r_{BE} = 1{,}000{,}000$, $b_{BE} = 100$, $n = 7$ (the number of micro-data streams). The rates are in bytes per second and the bursts in bytes; they are converted to bits in the bit-rate calculations below.

Based on formulas 1, 2, 3, 4, and 5, we can obtain the parameters of the aggregated flows formed by the micro-data streams:

$$R_{TT} = R^G_{TT} = C = 1.048576 \times 10^{8} \text{ bps}$$
$$T_{TT} = T^G_{TT} = l^{\max}_{TT} / R^G_{TT} = 7.63 \times 10^{-6} \text{ s}$$
$$R^G_{RC} = C - r_{TT} = 1.048256 \times 10^{8} \text{ bps}$$
$$T^G_{RC} = \left(l^{\max}_{RC} + b_{TT}\right) / R^G_{RC} = 1.526 \times 10^{-5} \text{ s}$$
$$R^G_{BE} = C - r_{TT} - n \times r_{RC} = 1.043776 \times 10^{8} \text{ bps}$$
$$T^G_{BE} = \left(l^{\max}_{BE} + b_{TT} + n \times b_{RC}\right) / R^G_{BE} = 6.898 \times 10^{-5} \text{ s}.$$

Next, based on formulas 6, 7, and 8, we can obtain the parameters of the micro-data flow:

$$R_{RC} = R^G_{RC} - (n-1) \times r_{RC} = 1.044416 \times 10^{8} \text{ bps}$$
$$T_{RC} = T^G_{RC} + \frac{(n-1) \times b_{RC}}{R^G_{RC}} + \frac{l_{RC}}{R^G_{RC}} = 6.868 \times 10^{-5} \text{ s}$$
$$R_{BE} = R^G_{BE} - (n-1) \times r_{BE} = 0.563776 \times 10^{8} \text{ bps}$$
$$T_{BE} = T^G_{BE} + \frac{(n-1) \times b_{BE}}{R^G_{BE}} + \frac{l_{BE}}{R^G_{BE}} = 1.226 \times 10^{-4} \text{ s}.$$

Therefore, the separate service curves of the micro-data flows $F_j$ (TT, RC, and BE) provided by the switch are

$$\beta^s_{TT}: \; R_{TT} = 1.048576 \times 10^{8} \text{ bps}, \quad T_{TT} = 7.63 \times 10^{-6} \text{ s}$$
$$\beta^s_{RC}: \; R_{RC} = 1.044416 \times 10^{8} \text{ bps}, \quad T_{RC} = 6.868 \times 10^{-5} \text{ s}$$
$$\beta^s_{BE}: \; R_{BE} = 0.563776 \times 10^{8} \text{ bps}, \quad T_{BE} = 1.226 \times 10^{-4} \text{ s}.$$

From Section 3.2, we know that the end-to-end delay consists of not only the processing delay but also the sending delay. So, we obtain the following results:

$$D^{\max}_{TT} = T_{TT} + 2 \times send\_delay = 2.363 \times 10^{-5} \text{ s}$$
$$D^{\max}_{RC} = T_{RC} + 2 \times send\_delay = 8.468 \times 10^{-5} \text{ s}$$
$$D^{\max}_{BE} = T_{BE} + 2 \times send\_delay = 1.386 \times 10^{-4} \text{ s}.$$
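
The derivation above can be checked mechanically. The following self-contained C sketch (our own verification, not the authors' code) plugs the star-topology parameters into formulas 4, 5, 7, and 8, converting bytes to bits and assuming a $10^8$ bps link rate for the sending delay:

    #include <stdio.h>

    int main(void)
    {
        const double C = 1.048576e8;                      /* switch service capability, bit/s  */
        const int    n = 7;                               /* number of micro-data streams      */
        /* leaky-bucket parameters of one micro-data stream, converted from bytes to bits */
        const double r_tt = 4000.0 * 8, b_tt = 100.0 * 8; /* time-triggered data               */
        const double r_rc = 8000.0 * 8, b_rc = 100.0 * 8; /* rate-constrained data             */
        const double r_be = 1.0e6  * 8, b_be = 100.0 * 8; /* best-effort data                  */
        const double l          = 100.0 * 8;              /* maximum frame length, bits        */
        const double send_delay = l / 1.0e8;              /* one-hop sending delay at 100 Mbps */

        /* aggregated flows: formulas 4 and 5 */
        const double R_tt_g = C;
        const double T_tt_g = l / R_tt_g;
        const double R_rc_g = C - r_tt;
        const double T_rc_g = (l + b_tt) / R_rc_g;
        const double R_be_g = C - r_tt - n * r_rc;
        const double T_be_g = (l + b_tt + n * b_rc) / R_be_g;

        /* micro-data streams: formulas 7 and 8 */
        const double R_rc = R_rc_g - (n - 1) * r_rc;
        const double T_rc = T_rc_g + ((n - 1) * b_rc + l) / R_rc_g;
        const double R_be = R_be_g - (n - 1) * r_be;
        const double T_be = T_be_g + ((n - 1) * b_be + l) / R_be_g;

        printf("R_RC = %.6e bps, R_BE = %.6e bps\n", R_rc, R_be);
        printf("D_TT_max = %.3e s\n", T_tt_g + 2 * send_delay);   /* ~2.363e-5 s */
        printf("D_RC_max = %.3e s\n", T_rc   + 2 * send_delay);   /* ~8.468e-5 s */
        printf("D_BE_max = %.3e s\n", T_be   + 2 * send_delay);   /* ~1.386e-4 s */
        return 0;
    }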

4.1.2 Comparison analysis

4.1.2.1 The comparison analysis of the end-to-end delay of the three kinds of data

In Section 4.1.1, we obtained the network calculus results for the star topology, and from the simulation we obtain the corresponding simulated values. In the star topology, the comparison of the end-to-end delays of the three kinds of data is as follows (taking packet_size = 100 bytes as an example):

In the star topology, the BE data has the lowest priority, the RC data has a higher priority than the BE data, and the TT data has the highest priority. The end-to-end delays of the TT data and the RC data are guaranteed by the switch. Based on the above discussion, the BE data cannot be guaranteed and may be lost. From Figure 5a,b, the TT and RC data are well guaranteed, and no sample exceeds the theoretical value. From Figure 5c, the best-effort data is not guaranteed; it is served only on a best-effort basis, and some samples exceed the theoretical value. Table 2 gives the exceeding rate of the BE data; from the table, we know that the exceeding rate is very small and meets our need when the BE frame size is 100 bytes.

Figure 8. The receiving and sending rates of the switch for the three kinds of data.

Table 1 The definitions of the symbols
Table 2 The exceeding rate of the BE data

Figure 8 shows the receiving and sending rates of the switch, which services the three kinds of data with different frame sizes.

Tables 3, 4, and 5 compare the three kinds of data with different frame sizes. The TT data is served first and has a constant delay value; its end-to-end delay changes with the frame size, and, generally, the larger the frame, the longer the delay. From Table 3, we see that the delay grows with the frame size, which agrees well with the theory. For the RC and BE data, the conclusion is the same.

Table 3 The comparison results of the TT data with different sizes
Table 4 The comparison results of the RC data with different sizes
Table 5 The comparison results of the BE data with different sizes
4.1.2.2 The comparison analysis of packet loss rate

In Section 4.1.2.1, we obtained the comparison results for the end-to-end delay of the three kinds of data. Under the same conditions, the comparison of the packet loss rates is given in Tables 6, 7, and 8.

Table 6 The packet loss of the TT data
Table 7 The packet loss of the RC data
Table 8 The packet loss of the BE data

In Section 2.2, we assume that the service rate is not less than 0. When the frame size reaches 250 bytes, the TT and RC data are still guaranteed, but the service rate of the BE data drops to 0. However, in the simulation model, the different types of data are sent in different time intervals, so even when the BE frame size reaches 250 bytes, some BE data still traverses the TTE network. The larger the frame size, the higher the loss rate. We obtain good simulation results when the frame size is set below 200 bytes.
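
A rough back-of-the-envelope check (our own, under the assumption that all three classes use the same frame size $s$ and keep the frame rates of Section 4.1.1, so every $r_i$ and $b_i$ scales with $s/100$) supports this observation:

$$R_{BE}(s) = C - \frac{s}{100}\left(r_{TT} + n\,r_{RC} + (n-1)\,r_{BE}\right) \times 8 = 1.048576 \times 10^{8} - \frac{s}{100} \times 4.848 \times 10^{7} \text{ bps},$$

which reaches zero near $s \approx 216$ bytes, so BE traffic is still served comfortably below 200 bytes but can no longer be guaranteed around 250 bytes.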

4.2 The cascaded topology

The cascaded topology, which differs from the star topology, is shown in Figure 6a. This topology is more complex than the star topology. The analysis results for the three kinds of data are shown in Figure 6b,c,d.

From the comparison charts of the three types of data streams in the cascaded topology, we can see that the actual delays from the simulation statistics fully satisfy the maximum network delay obtained by network calculus, which verifies that the TTE simulation results are reasonable. From this, we can conclude that the TTE specification is feasible.

5. Conclusions

As a new real-time deterministic network, time-triggered Ethernet has not yet formed a standardized protocol specification, and there are no mature products on the market. This paper is the first to use network calculus and network simulation to analyze and study TTE. We present a time-triggered service-performance model based on network calculus and obtain the deterministic upper bound of the delay. With the TTE clock synchronization, redundancy fault tolerance, multi-data communication, node model, and network model designed and implemented on the simulation platform, we analyze the performance of TTE and compare the simulation results with the theoretical values from the network calculus model. The final conclusion shows that TTE performs well in terms of end-to-end delay and service rate, is compatible with traditional Ethernet [11], and meets the needs of real-time and safety-critical systems well.

References

  1. Kopetz H, Grünsteidl G: TTP—a protocol for fault-tolerant real-time systems. IEEE Computer 1994, 27(1):14-23. 10.1109/2.248873


  2. Maier R: Event-triggered communication on top of time-triggered architecture. DASC 2002, 21(2):135-141.


  3. Steiner W, Paulitsch M, Kopetz H: Multiple Failure Correction in the Time-Triggered Architecture. Real-Time Systems Group: Technische Universitat Wien, Vienna; 2003.


  4. Kopetz H: Fault containment and error detection in the time-triggered architecture autonomous. Paper presented at the 6th international symposium on decentralized systems. Pisa, 9–11 April 2003

  5. TTTech: TTEthernet Specification v.9.1-22968 (D-INT-S-10002, 200.11.GE). Vienna: TTTech Computertechnik AG.

  6. Firoiu V, Boudec JYL, Towsley D, Zhang Z-L: Theories and models for internet quality of service. Proc IEEE 2002, 90(9):1565-1591. 10.1109/JPROC.2002.802002


  7. Kopetz H, Ademaj A, Grillinger P, Steinhammer K: The time-triggered Ethernet (TTE) design. Paper presented at the 8th IEEE international symposium on object-oriented real-time distributed computing (ISORC), Seattle, Washington, 18–20 May 2005.


  8. Cruz RL: A calculus for network delay, part 1: network analysis. IEEE Trans. Info. Theory 1991, 37(1):132-141. 10.1109/18.61110


  9. Boudec JYL, Thiran P: Network Calculus: A Theory of Deterministic Queuing System for the Internet. Germany: Springer; 2004.


  10. Li Z: Research on time-triggered Ethernet based on deterministic network calculus. Master's degree thesis, University of Electronic Science and Technology of China; 2013.


  11. Heffernan D, Doyle P: Time-triggered Ethernet based on IEEE 1588 clock synchronization. IEEE Trans. Computers 2004, 24(3):264-269.



Author information


Correspondence to Wei Wang.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
