  • Research
  • Open Access

A performance optimization strategy based on degree of parallelism and allocation fitness

EURASIP Journal on Wireless Communications and Networking 2018, 2018:240

https://doi.org/10.1186/s13638-018-1254-7

  • Received: 24 July 2018
  • Accepted: 18 September 2018
  • Published:

Abstract

With the emergence of the big data era, most current performance optimization strategies are designed for distributed computing frameworks that use disks as the underlying storage. They may solve the problems of traditional disk-based distribution, but they are hard to transplant and are not well suited to performance optimization for an in-memory computing framework, whose underlying storage and computation architecture differ. In this paper, we first define a resource allocation model, a parallelism degree model, and an allocation fitness model on the basis of a theoretical analysis of the Spark architecture. Second, based on these models, we propose a strategy, embedded in the evaluation model, that is easy to apply. The optimization strategy assigns subsequent tasks to workers with a lower load that satisfy the requirements, while workers with a higher load may not be assigned tasks. Experiments consisting of four different jobs are conducted to verify the effectiveness of the presented strategy.

Keywords

  • Performance optimization
  • Parallelism degree
  • Allocation fitness

1 Introduction

In recent years, big data processing frameworks [1, 2], especially in-memory computing frameworks, have been continuously enriched and developed [3, 4]. In-memory computing came into view and attracted wide attention in the industry after the SAP TechEd global conference in 2010.

With the development of the in-memory computing framework, some research results have been committed to expanding and improving the system. A simple and efficient parallel pipelined programming model based on BitTorrent was proposed by Napoli et al. [5]. Chowdhury et al. implemented broadcast communication technology for the in-memory computing framework. Lamari et al. [6] put forward a standard architecture of relational analysis for big data. A study by Cho et al. [7] proposed a parallel design scheme. An algorithm using programs to analyze and locate common subexpressions was designed in a study by Kim et al. [8]. A study by Seol et al. [9] proposed fine-granularity retention management for deep submicron DRAMs. Another study designed a unified memory manager that separates the memory storage function from the computing framework. In a study by Tang et al. [10], a standard engine for distributed data stream computing was designed. A high-performance SQL query system was implemented in a study by Jo et al. [11]. A parallel computing method for applications with differential data streams and prompt response was proposed in a study by McSherry et al. [12]. Zeng et al. designed a general model for interactive analysis. A study by Corrigan-Gibbs et al. realized a privacy-preserving communication system for in-memory computing. A study by Sengupta et al. [13] used SIMD-based data parallelism to speed up sieving in integer-factoring algorithms. Ifeanyi et al. [14] presented a comprehensive survey of fault tolerance mechanisms for high-performance frameworks.

Some research results focus on performance optimization for distributed computing frameworks, which may not be suitable for the in-memory framework. Ananthanarayanan et al. proposed an algorithm that makes full use of data access time and data locality. By analyzing the impact of task parallelism on cache effectiveness, Ananthanarayanan et al. designed a coordinated caching algorithm adapted to in-memory computing. By monitoring computation overhead, Babu et al. found that the parallelism of the reduce task has a great influence on the performance of a MapReduce system, and they designed a task scheduling algorithm that adapts to resource status. In order to predict the response time of worker nodes, Zou et al. divided a task into different blocks, which can improve the efficiency of tightly synchronized applications. In a study by Sarma et al., a communication cost frontier model of the worker node was proposed, and a tradeoff between task parallelism and communication cost was achieved by adjusting the boundary threshold. A study by Pu et al. presented FairRide, a near-optimal fair cache sharing scheme to improve performance. Chowdhury et al. proposed an algorithm to balance multi-resource fairness for correlated and elastic demands.

However, most current performance optimization strategies are designed for distributed computing frameworks with disks as the underlying storage, and they pay the most attention to two aspects: task scheduling and resource allocation. It is therefore of practical significance to study the optimization mechanism of an IMC framework from the perspective of its underlying memory-based storage and computation architecture.

Therefore, we consider the degree of parallelism and the allocation fitness, which differ from existing strategies. First, regarding task scheduling, the rationality of the parallelism degree of the shuffle process in an in-memory framework is easy to overlook, yet it directly affects the efficiency of job execution and the utilization rate of cluster resources. The degree of parallelism is usually determined by user experience, and it is hard to adapt to the current state of the in-memory framework. Second, we aim to improve the rationality of hardware allocation, especially memory allocation, and to accelerate job execution by adjusting the fitness of resource allocation.

2 Modeling and analysis

2.1 Resource allocation model

Definition 1 Resource allocation type. Denote Worker = {w1, w2,…,wm} as the set of workers, Resource = {r1,r2,…,rn} as the collection of resource types (including CPU, memory, and disk), and rw = (rw1,rw2,…,rwl) as the vector of l available resources of worker w, where rwi is the ith available resource of worker w. The ith resource across all workers can be normalized as:
$$ \mathrm{Normalize}\left({r}_{w_i}\right)\to \left(0,1\right),\quad rtype\in \left\{ cpu,\ memory,\ disk\right\} $$
(1)
Jobs = {j1,j2,…,jn} denotes the set of jobs running at the same time, and Vrj = (vrj1,vrj2,…,vrjk) represents the resource requirement vector of job j. Since the resource requirement of each job differs, the resource requirements of all jobs are represented as:
$$ RV=\left({V}_{r_1},{V}_{r_2},\dots, {V}_{r_j}\right)=\left(\left({v}_{r_{11}},{v}_{r_{12}},\dots, {v}_{r_{1k}}\right),\left({v}_{r_{21}},{v}_{r_{22}},\dots, {v}_{r_{2k}}\right),\dots, \left({v}_{r_{j1}},{v}_{r_{j2}},\dots, {v}_{r_{jk}}\right)\right);\quad {v}_{r_{jk}}\ge 0 $$
(2)
Then, the resource requirement types for all jobs are expressed as:
$$ TypeRV=\left( typeR{V}_1, typeR{V}_2,\dots, typeR{V}_k\right)=\left( rtype\left(\max\left({v}_{r_{11}},{v}_{r_{12}},\dots, {v}_{r_{1k}}\right)\right), rtype\left(\max\left({v}_{r_{21}},{v}_{r_{22}},\dots, {v}_{r_{2k}}\right)\right),\dots, rtype\left(\max\left({v}_{r_{j1}},{v}_{r_{j2}},\dots, {v}_{r_{jk}}\right)\right)\right) $$
(3)
The resource requirements are submitted to the system before the execution of the job, and jobs are assigned to workers with idle resources that can satisfy their requirements. Assume workers = {w1,w2,…,wm} are the workers processing job j, and vaj = (vaj1,vaj2,…,vajk) is the resource allocation vector of job j on a worker. In principle, workers should allocate resources strictly in accordance with the resource requirement table, which is represented as:
$$ {v}_{a_{jk}}=\frac{v_{r_{jk}}}{workerNum},j\in \mathrm{jobs} $$
(4)

2.2 Parallelism degree model

In Spark, the task parallelism degree measures the number of concurrent tasks. It can be specified by the user, but it cannot exceed the total number of task slots, which equals the product of the number of workers and the number of CPU cores in each worker.

Definition 2 Parallelism degree. Denote the number of workers as workerNum and the number of CPU cores in each worker node as coreNum; the hardware environment therefore supports workerNum × coreNum concurrently executing tasks. If the parallelism parameter specified by the user is puser, then the parallelism degree parallelismDegree is the minimum of workerNum × coreNum and puser:
$$ parallelismDegree=\min \left({p}_{user}, workerNum\times coreNum\right) $$
(5)
Definition 3 Idle time. It indicates the idle time caused by uneven task allocation. According to Definition 2, when the user parallelism is greater than the hardware parallelism, that is, puser > workerNum × coreNum, the number of pipelines within the stage is greater than the task parallelism. The worker then needs to allocate tasks in multiple turns, and the number of turns can be expressed as:
$$ \mathrm{turnNum}=\mathrm{ceiling}\left(\frac{p_{user}}{workerNum\times coreNum}\right) $$
(6)
where the ceiling function returns the smallest integer greater than or equal to its argument. By formula 6, when puser is an integral multiple of workerNum × coreNum, all workers execute tasks in each round of distribution. If the remainder of puser divided by workerNum × coreNum is not 0, there is at least one idle node in the final round, and the number of idle workers can be expressed as:
$$ \mathrm{idleNum}=\left( workerNum\times coreNum\right)-\operatorname{mod}\left({p}_{user}, workerNum\times coreNum\right) $$
(7)
where mod(puser, workerNum × coreNum) denotes the remainder. Because tasks are allocated randomly and the probability that puser is an integer multiple of workerNum × coreNum is very small, the task load in the final round is likely to be uneven. Assume the set of h pipeline tasks in the final round is \( {\mathrm{Task}}_{{\mathrm{pipe}\mathrm{s}}_{\mathrm{last}}}=\left\{{\mathrm{Task}}_{{\mathrm{pipe}}_{i 1}},{\mathrm{Task}}_{{\mathrm{pipe}}_{i2}},\dots, {\mathrm{Task}}_{{\mathrm{pipe}}_{ih}}\right\} \), where h < (workerNum × coreNum). Then, the idle time of the bye node is:
$$ {T}_{{\mathrm{idle}}_w}=\max \left({T}_{{\mathrm{pipe}}_{i1}},{T}_{{\mathrm{pipe}}_{i2}},\dots, {T}_{{\mathrm{pipe}}_{ih}}\right) $$
(8)
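Formulas 5 through 8 can be sketched directly in code (a hedged illustration; function names are ours, not the paper's):

```python
import math

def parallelism_degree(p_user, worker_num, core_num):
    # Eq. 5: the parallelism degree is capped by the hardware slot count
    return min(p_user, worker_num * core_num)

def turn_num(p_user, worker_num, core_num):
    # Eq. 6: number of allocation rounds needed for p_user pipelines
    return math.ceil(p_user / (worker_num * core_num))

def idle_num(p_user, worker_num, core_num):
    # Eq. 7: slots left unused in the final allocation round
    rem = p_user % (worker_num * core_num)
    return 0 if rem == 0 else worker_num * core_num - rem

def idle_time(last_round_pipe_times):
    # Eq. 8: idle workers wait for the slowest pipeline of the final round
    return max(last_round_pipe_times)

# e.g. 10 pipelines on 2 workers x 4 cores: degree 8, 2 rounds, 6 idle slots
print(parallelism_degree(10, 2, 4), turn_num(10, 2, 4), idle_num(10, 2, 4))
```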

2.3 Allocation fitness model

Definition 4 Resource occupancy rate. Assume Tfixed is a measurement interval and \( {T}_{{\mathrm{job}}_j} \) is the actual execution time of job j. The occupancy rate of the rth resource, OCjr, is defined as the proportion of that resource used by the workers, which is expressed as:
$$ {OC}_{jr}=\left({v}_{r_j}\times \frac{T_{{\mathrm{job}}_j}}{T_{\mathrm{fixed}}}\right),\quad r\in R $$
(9)
Definition 5 Allocation fitness degree. Assume workLoad is the total workload and CAs = {ca1,ca2,…,can} is the set of computing abilities of the workers in workers = {w1,w2,…,wn}. The mean value of the task execution time over all workers can then be defined as:
$$ meanValue=\frac{\mathrm{workLoad}}{\sum \limits_{w_i\in \mathrm{workers}}{\mathrm{ca}}_{w_i}} $$
(10)
Without considering the waiting time, the execution time of tasks in worker wi with the task allocation amount allocationLoadwi can be expressed as:
$$ {T}_{{\mathrm{task}}_{w_i}}=\frac{{\mathrm{allocationLoad}}_{w_i}}{{\mathrm{ca}}_{w_i}},{w}_i\in \mathrm{workers} $$
(11)
Therefore, the variance of task execution time is represented as:
$$ \mathrm{varianc}{\mathrm{e}}_{{\mathrm{w}}_{\mathrm{i}}}={\left({T}_{{\mathrm{finish}}_{w_i}}- meanValue\right)}^2 $$
(12)
The allocation fitness degree of worker wi can be formulated as:
$$ allocationFitnes{s}_{w_i}=\frac{1}{\mathrm{variance}_{w_i}}=\frac{1}{{\left({T}_{{\mathrm{finish}}_{w_i}}- meanValue\right)}^2} $$
(13)
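The allocation fitness model (Eqs. 10 through 13) can be illustrated with a short sketch; the function and variable names are ours, and a worker whose finish time exactly equals the mean is given infinite fitness, since its variance is zero:

```python
def allocation_fitness(work_load, abilities, allocation_loads):
    """Compute the allocation fitness of each worker (Eqs. 10-13)."""
    # Eq. 10: ideal mean finish time if load matched ability exactly
    mean_value = work_load / sum(abilities)
    fitness = []
    for ca, load in zip(abilities, allocation_loads):
        t_finish = load / ca                     # Eq. 11: execution time
        variance = (t_finish - mean_value) ** 2  # Eq. 12: deviation from mean
        # Eq. 13: fitness is the reciprocal of the variance
        fitness.append(1.0 / variance if variance else float("inf"))
    return fitness

# Two equally able workers, uneven load: both deviate by 1 time unit
print(allocation_fitness(100, [10, 10], [60, 40]))  # [1.0, 1.0]
# Perfectly balanced load: maximal (infinite) fitness on both workers
print(allocation_fitness(100, [10, 10], [50, 50]))
```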

Lemma 1 For all workers involved in the calculation, the greater the allocation fitness, the shorter the execution time of the job and the higher the computational efficiency.

Proof From the point of view of task allocation, the execution time of the job can be expressed as:
$$ {T}_{\mathrm{job}}=\max \left({T}_{{\mathrm{finish}}_1},{T}_{{\mathrm{finish}}_2},\dots, {T}_{{\mathrm{finish}}_n}\right) $$
(14)

According to formula 13, the allocation fitness is inversely proportional to the variance. If the fitness value is greater, the variance is smaller, which means the completion time of tasks on the worker is closer to the mean. Therefore, when the allocation fitness takes its maximum value, the job execution time is shortest and execution efficiency is highest. Accordingly, we migrate subsequent tasks from workers with a higher load to workers with a lower load to reach a higher degree of parallelism and allocation fitness.

3 The performance optimization strategy

3.1 Construct basic data

The improved architecture of Spark with optimization strategy is shown in Fig. 1.
Fig. 1
Fig. 1

The improved architecture of Spark

To deploy the performance optimization strategy in Spark, it is necessary to implement the scheduling method in the spark.scheduler.TaskSchedulerImpl interface. The DAG scheduler contains all the topology information of the current cluster operation, including the parameter configuration and the mapping between threads and component IDs; the cluster object contains all status information of the current cluster, including the mapping between each thread, node, and executor of the topology, as well as the usage information of idle workers and slots. This information can be obtained through the API object. The CPU occupancy of each thread in the topology can be obtained through the getThreadCpuTime(long id) method in the ThreadMXBean class of the Java API, where id is the thread ID. The network bandwidth occupancy of each thread can be obtained by measuring each RDD size in the experiment and monitoring the data transmission rate of each thread in the Spark UI, then estimating by simple accumulation. Because threads share memory, the memory occupancy of each thread can only be roughly estimated from the -Xss parameter in the configuration file. In addition, the hardware parameters and load information of the operating system can be accessed through the relevant files in the /proc directory. Once the code is written, it is packaged as a jar into the Spark_HOME/lib directory and runs after spark.scheduler is configured in spark.yaml on the master node.
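As a hedged illustration of the last monitoring step, on Linux the system load can be read from files under /proc; the small parser below follows the documented field layout of /proc/loadavg and is our own sketch, not the paper's code:

```python
# Sketch: parse a /proc/loadavg line into its load-average fields.
# Layout per the proc(5) man page: "load1 load5 load15 running/total last_pid".

def parse_loadavg(text):
    """Extract the 1-, 5-, and 15-minute load averages from a loadavg line."""
    fields = text.split()
    return {"load1": float(fields[0]),
            "load5": float(fields[1]),
            "load15": float(fields[2])}

# In production this line would come from open("/proc/loadavg").read()
sample = "0.52 0.41 0.30 2/512 12345"
print(parse_loadavg(sample))  # {'load1': 0.52, 'load5': 0.41, 'load15': 0.3}
```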

3.2 Performance optimization strategy

The key problem of the optimization strategy is the selection of the destination node. In order to meet the requirements of the job, it is necessary to exclude the nodes that do not satisfy the resource constraint model.

Denote ms and md as the total amount of memory resource in the source node and the candidate destination node, respectively. In the process of deciding where to assign a subsequent task, it may be necessary to keep moving tasks out until the resources occupied on the source node fall below the threshold. Finally, the optimal destination node is selected so that the allocation fitness reaches a larger value. Note that when memory, disk, or network bandwidth resources overflow, the optimization strategy is the same as in this section, only calculated for the corresponding type of resource.

The detailed steps of the optimization strategy are shown in Algorithm 1.
  • Step 1. Initialize the read data path and the number of data partitions. Spark uses RDD’s text file operator to read the data from HDFS to the memory of the Spark cluster.

  • Step 2. Obtain the default parallelism degree and collect statistical information to calculate data resource occupancy degree in the system.

  • Step 3. The degree of parallelism and the allocation fitness are updated based on the models in Sections 2.2 and 2.3, combined with the information acquired in Step 2; then, the IDs of the workers with a higher load are selected.

  • Step 4. Save the corresponding parameters to the database and update the information when the status of the resource changes. After selecting the source node and destination node, exchange their tasks and refresh the remaining CPU, memory, and network bandwidth resource of the source node and the destination node.

  • Step 5. The TaskScheduler then selects the set of workers with a lower load to assign tasks, obtaining a larger degree of parallelism and allocation fitness.
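The steps above can be sketched as a single greedy scheduling pass (illustrative names and data shapes, not the paper's implementation): workers that fail the resource constraint are excluded, and each task goes to the least-loaded feasible worker, with loads and free resources refreshed after every placement.

```python
def select_worker(workers, requirement):
    """Pick the lowest-load worker whose free resources cover the requirement."""
    feasible = [w for w in workers
                if all(w["free"][t] >= v for t, v in requirement.items())]
    if not feasible:
        return None
    return min(feasible, key=lambda w: w["load"])

def assign(workers, tasks):
    """Greedy sketch of Steps 3-5: place tasks, refreshing load and resources."""
    placement = {}
    for task in tasks:
        w = select_worker(workers, task["req"])
        if w is None:
            continue  # no worker satisfies the resource constraint model
        placement[task["id"]] = w["id"]
        w["load"] += task["load"]
        for t, v in task["req"].items():
            w["free"][t] -= v
    return placement

workers = [
    {"id": "w1", "load": 0.7, "free": {"cpu": 1, "memory": 2.0}},
    {"id": "w2", "load": 0.2, "free": {"cpu": 2, "memory": 4.0}},
]
tasks = [{"id": "t1", "req": {"cpu": 1, "memory": 1.0}, "load": 0.1}]
print(assign(workers, tasks))  # {'t1': 'w2'} -- the lower-load worker wins
```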

4 Result and discussion

4.1 Experimental platform

We established a computing cluster using 1 server and 8 worker nodes; the server is set as the Hadoop NameNode and the Spark master, and the others are set as Hadoop DataNodes and Spark worker nodes. The details of the configuration are shown in Table 1. The task execution time is acquired from the Spark console, and nmon monitors the memory usage.
Table 1

Configuration parameters

Parameters | Values
CPU        | Intel Core i7 / 2.2 GHz
RAM        | 4 GB
NIC        | 1000 Mbit/s
Hard disk  | 200 GB / SATA 3.0 (6 Gbps)
OS         | Ubuntu 12.04
Spark      | Apache Spark 2.1.0
Hadoop     | Apache Hadoop 2.6
Scala      | Scala 2.10.4
JDK        | OpenJDK 1.8.0_25

4.2 Execution time evaluation

In order to verify the performance of the algorithm for several different types of jobs in a concurrent environment, we use the official Spark examples to form a working set including four algorithm types; dataset types 1, 2, 3, and 4 denote WordCount, TeraSort, K-Means, and PageRank jobs, respectively. Figure 2 compares the execution time of the different strategies.
Fig. 2
Fig. 2

Comparison of execution time for different strategies

Figure 2 shows that, with the performance optimization, the acceleration of K-Means and PageRank under the proposed strategy is better than without the optimization strategy; these two jobs involve wide-dependency operations, in contrast to WordCount and TeraSort. The corresponding acceleration rates are 17.9%, 17.6%, 15.1%, and 30%, respectively. Improper parallelism degree and task allocation may induce a large number of out-of-memory events and increased disk I/O, which decrease execution efficiency and lead to higher execution-time overhead.

Thus, compared to the existing scheduling mechanism, scheduling with the performance optimization strategy can more effectively reduce the latency, and its implementation does not have a significant impact on the performance of the cluster.

4.3 Memory utilization evaluation

Figures 3, 4, 5, and 6 are monitored under the optimization strategy proposed in this paper; they show the memory utilization of worker 3 during the execution of the four different jobs.
Fig. 3
Fig. 3

The memory utilization of WordCount

Fig. 4
Fig. 4

The memory utilization of TeraSort

Fig. 5
Fig. 5

The memory utilization of K-Means

Fig. 6
Fig. 6

The memory utilization of PageRank

Memory utilization is related to the type of job and the distribution of the input data. For the same algorithm, the greater the amount of data processed, the greater the amount of memory occupied. As shown in Figs. 3, 4, 5, and 6, WordCount and TeraSort have a relatively stable memory footprint as execution time increases, while K-Means and PageRank have different memory occupancy rates in different task processing phases.

4.4 Disk I/O evaluation

Similarly, the disk I/O has different characteristics as the type of job varies. Figures 7, 8, 9, and 10 are monitored under the optimization strategy proposed in this paper; they show the disk utilization of worker 3 during the execution of the four different jobs.
Fig. 7
Fig. 7

The disk utilization of WordCount

Fig. 8
Fig. 8

The disk utilization of TeraSort

Fig. 9
Fig. 9

The disk utilization of K-Means

Fig. 10
Fig. 10

The disk utilization of PageRank

As far as the disk I/O rate is concerned, for a task processing data from the local disk, corresponding local data reads are generated on the worker, and a certain amount of disk I/O is consumed. If network data is processed, additional network I/O is also produced because the worker needs to read data from a remote disk, and memory overflow may produce more frequent disk I/O. As shown in Figs. 7, 8, 9, and 10, the disk I/O of WordCount is more pronounced, while that of the other three jobs is lower. At the beginning of execution for K-Means and TeraSort, disk I/O increases significantly because the task is assigned to worker 3, which needs to read some data from disk at that time.

5 Conclusions

In this paper, our contributions can be summarized as follows. First, we analyze the theoretical relationship between the degree of parallelism and the allocation fitness. Second, we propose a pluggable evaluation model for task assignment. Third, on the basis of the evaluation model, the strategy takes resource characteristics into consideration and assigns tasks to workers with a lower load to increase execution efficiency. Numerical analysis and experimental results verify the effectiveness of the presented strategy.

Our future work will mainly concentrate on analyzing the general principles of the resource requirements of different types of jobs in the in-memory computing framework and on designing an optimization strategy that adapts to the load and type of jobs.

Abbreviations

CPU: 

Central processing unit

DAG: 

Directed acyclic graph

IMC: 

In-memory computing

SAP: 

System applications and products

SIMD: 

Single instruction, multiple data

SQL: 

Structured query language

Declarations

Acknowledgements

The authors would like to thank the reviewers for their thorough reviews and helpful suggestions.

Funding

This paper was supported by the National Natural Science Foundation of China under Grant Nos. 61262088, 61462079, and 61562086.

Availability of data and materials

All data are fully available without restriction.

Authors’ contributions

CTY is the main writer of this paper. She proposed the main idea, completed the experiment, and analyzed the result. CYY and CB gave some important suggestions for this paper. All authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
School of Mechanical and Electrical Engineering, Shaoxing University, Shaoxing, 312000, People’s Republic of China
(2)
School of Information Science and Engineering, Xinjiang University, Urumqi, 830008, People’s Republic of China

References

  1. S. John Walker, Big data: a revolution that will transform how we live, work, and think. Int. J. Advert. 33(1), 181–183 (2014)
  2. K. Kambatla, G. Kollias, V. Kumar, et al., Trends in big data analytics. J. Parallel Distrib. Comput. 74(7), 2561–2573 (2014)
  3. C.L.P. Chen, C.Y. Zhang, Data-intensive applications, challenges, techniques and technologies: a survey on big data. Inf. Sci. 275(11), 314–347 (2014)
  4. F. Provost, T. Fawcett, Data science and its relationship to big data and data-driven decision making. Big Data 1(1), 51–59 (2013)
  5. C. Napoli, G. Pappalardo, E. Tramontana, A mathematical model for file fragment diffusion and a neural predictor to manage priority queues over BitTorrent. Int. J. Appl. Math. Comput. Sci. 26(1), 147–160 (2016)
  6. Y. Lamari, S.C. Slaoui, Clustering categorical data based on the relational analysis approach and MapReduce. J. Big Data 4(1), 28 (2017)
  7. C. Cho, S. Lee, Effective five directional partial derivatives-based image smoothing and a parallel structure design. IEEE Trans. Image Process. 25(4), 1617–1625 (2016)
  8. W.S. Kim, J. Lee, H.M. An, et al., Image filter optimization method based on common sub-expression elimination for low power image feature extraction hardware design. 10(2), 192–197 (2017)
  9. H. Seol, W. Shin, J. Jang, et al., Elaborate refresh: a fine granularity retention management for deep submicron DRAMs. IEEE Trans. Comput. 99, 1–1 (2018)
  10. Y. Tang, B. Gedik, Autopipelining for data stream processing. IEEE Trans. Parallel Distrib. Syst. 24(12), 2344–2354 (2013)
  11. I. Jo, D.H. Bae, A.S. Yoon, et al., YourSQL: a high-performance database system leveraging in-storage computing. Proc. VLDB Endowment 9(12), 924–935 (2016)
  12. R. Wu, L. Huang, P. Yu, et al., SunwayMR: a distributed parallel computing framework with convenient data-intensive applications programming. Futur. Gener. Comput. Syst. 71, 43–56 (2017)
  13. B. Sengupta, A. Das, Use of SIMD-based data parallelism to speed up sieving in integer-factoring algorithms. Appl. Math. Comput. 293(1), 204–217 (2017)
  14. I.P. Egwutuoha, D. Levy, B. Selic, S. Chen, A survey of fault tolerance mechanisms and checkpoint/restart implementations for high-performance computing systems. J. Supercomput. 65(3), 1302–1326 (2013)

Copyright

© The Author(s). 2018