- Research
- Open access
RETRACTED ARTICLE: P2P net loan default risk based on Spark and complex network analysis based on wireless network element data environment
EURASIP Journal on Wireless Communications and Networking volume 2019, Article number: 38 (2019)
Abstract
P2P net lending is a recent financial platform business and a new form of borrowing that has emerged with the rapid development of the mobile Internet. Since the beginning of the new century, loan defaults have broken the capital chains of P2P companies and disrupted their operations, becoming an important factor hindering the healthy development of the industry. Starting from the actual management situation of P2P net loan platforms, this paper studies net loan default risk based on Spark technology in a wireless network environment. A decision tree data mining algorithm is introduced to construct an early warning model of net loan default risk, enabling effective control of the risk points in the lending process. Considering the fuzzy characteristics of the uncertain factors affecting net loan credit, a hybrid model that optimizes the C4.5 decision tree is established. The simulation results show that the hybrid optimization model has good application value.
1 Introduction
Spark is an open-source cluster computing technology for large-scale data processing. Its most important feature is distributed computing, which breaks large data sets into smaller pieces: the data are partitioned across a distributed store, the partitions are processed independently, and the partial results are finally combined to obtain the overall result [1]. As wireless networks have matured, data volumes have grown geometrically, and conventional single-machine processing can no longer meet the demands of practical use [2]. Spark, in particular, reduces the number of times intermediate data must be written to and read back from disk during a computation, avoiding the repeated storage and extraction of intermediate results that degrade overall efficiency. This advantage makes machine learning algorithms much faster in distributed and interactive data analysis. The decision tree algorithm is one of the common mathematical models used to build credit evaluation models in artificial intelligence and data mining [3]. It can effectively carry out predictive tasks: it summarizes the collected data into descriptive classification rules and predicts data attributes and the future development of events [4]. A decision tree is a classification algorithm that applies inductive learning to large amounts of real data; through supervised learning, a tree-structured classification rule is obtained from data that are unrelated to each other and follow no known distribution [5]. The decision tree is simple to implement, and its clear logical hierarchy makes the resulting rules easy to understand. The tree compares attribute values at each node and, according to the distribution of those values, determines which branch to follow next. A conclusion is finally reached at a leaf node, and the path through the tree structure forms a rule of the whole decision tree. The intuitive tree structure of these rules makes them easy for users to analyze [6].
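To make the Spark and decision tree combination concrete, the following is a minimal sketch, not the authors' implementation, of training a decision tree on loan records with Spark MLlib; the file name and column names are hypothetical stand-ins for the kind of credit features discussed later.

```python
# Minimal sketch of distributed decision tree training with Spark MLlib.
# File name and column names are hypothetical, not from the paper's data set.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import DecisionTreeClassifier

spark = SparkSession.builder.appName("p2p-default-risk").getOrCreate()

# Hypothetical credit records: numeric risk features and a default label (0/1).
df = spark.read.csv("loans.csv", header=True, inferSchema=True)

# Assemble the feature columns into the single vector column Spark ML expects.
assembler = VectorAssembler(
    inputCols=["interest_rate", "loan_cycle", "income", "debt_ratio"],
    outputCol="features",
)
data = assembler.transform(df)

# Fit a decision tree classifier; Spark distributes the training across the cluster.
tree = DecisionTreeClassifier(labelCol="default", featuresCol="features", maxDepth=5)
model = tree.fit(data)
print(model.toDebugString)  # prints the learned branching rules
```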
2 State of the art
P2P net lending is the latest model of Internet finance innovation. Although it emerged less than 10 years ago, it has brought subversive changes to the global lending model. On P2P platforms, the net loan business is welcomed by ordinary consumers because of its flexible borrowing procedures and fast disbursement, and these advantages are also the platforms' most important source of market competitiveness [7]. A survey of the development data of a large number of Chinese P2P companies shows that the inability to effectively control credit risk is the main factor directly affecting the healthy development of the P2P net loan business; the core problem is how to control the default risk of net loans. Defaults bring serious consequences to P2P platforms, such as slow return of capital, a broken capital chain, bankruptcy, and liquidation. These consequences cause economic losses that P2P companies and investors cannot recover and cast a shadow on the healthy and sustainable development of the entire P2P industry [8]. Analysis of P2P credit management shows that the nature of credit risk on a net loan platform is the same as in traditional bank lending: the borrower, under various influences, fails to honor the original repayment agreement and the net loan company's interests are damaged, or the net loan company is unable to pay the agreed return, so that investors' actual returns differ from the contractual agreement and their personal economic interests are infringed [9]. The speed of lending, an advantage of net lending, is also a great potential risk: for borrowers, the information audit time is very short and the information collected is limited, which may lead to credit defaults such as non-repayment [10]. Using a decision tree algorithm to study the credit default risk of P2P net loans therefore has a good theoretical foundation. Mathematical models can analyze the impact of risk more effectively and quantitatively determine the consequences that different factors may cause. For the different risk factors, this paper puts forward specific control measures to curb default risk at its source.
3 Methodology
3.1 Decision tree algorithm
Since the 1960s, scholars have proposed the basic implementation model of the decision tree algorithm. Different classification rules are implicit in the decision tree's data, so keeping the tree structure small while keeping its accuracy high is the core of improving the decision tree algorithm. An internal node of a decision tree expresses an attribute of the objects being classified; a leaf node represents the class assigned by the learned partition, and the attribute tested at an internal node is called the test attribute. After a decision tree has been trained on a training sample, it can classify a new record according to its attribute values. In practice, classification starts at the root and follows the branch selected by each test until a leaf node is reached, and that leaf gives the class of the record. Through its tree structure, the decision tree classifies the input information independently and uncovers hidden knowledge, which can be converted into rules that support decision making. The decision tree is an algorithm that originated in learning systems. Its main principle is to start from an empty tree and, whenever a misclassified case is encountered, add a new test node different from the previous ones, using branching decisions to refine the tree until it classifies the training instances correctly. Figure 1 is a typical learned decision tree that predicts whether or not to play according to weather data.
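The rule-path idea behind Figure 1 can be sketched as nested tests; since the figure itself is not reproduced here, the exact splits below are illustrative only.

```python
# Illustrative rule path of a weather/play decision tree (in the spirit of Fig. 1);
# the specific thresholds and branches are assumptions, not taken from the figure.
def play_decision(outlook: str, humidity: int, windy: bool) -> str:
    # Root node tests the 'outlook' attribute; each branch leads to further tests or a leaf.
    if outlook == "sunny":
        return "don't play" if humidity > 75 else "play"
    elif outlook == "overcast":
        return "play"  # leaf node: overcast days are always classified as 'play'
    else:  # rainy
        return "don't play" if windy else "play"

print(play_decision("sunny", humidity=80, windy=False))  # -> don't play
```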
The basic decision tree algorithm uses information gain as the criterion for selecting the classification attribute. Choosing the attribute with the highest information gain as the split attribute reduces the entropy the most, so that less information is needed to further partition the data set, which keeps the structure of the decision tree simple. However, this simplicity is not guaranteed to be the most concise structure that satisfies the purpose. The basic algorithm also cannot handle continuous attributes directly, and it is sensitive to noise, so its results vary with the size of the training set. Decision tree learning grows the tree progressively from the root toward the leaves. In general, the tree compares the attribute values at an internal node according to certain criteria, chooses a branch according to the comparison result, and finally reaches a decision at a leaf node; the path from the root to a leaf forms a rule, and the whole decision tree corresponds to a set of such rules. Building a decision tree has two main steps. First, the tree is generated: the data are placed at the root node and then partitioned recursively. Second, the tree is pruned: abnormal data, unreal data, and noise are removed. When the data at a node have all been assigned to one class, no new attributes need to be tested and the tree stops growing at that node. Figure 2 shows the decision tree generation process.
The mathematics of the decision tree algorithm can be described as follows. Assume S is a data set containing s samples, and the samples belong to m classes C_i, i ∈ {1, 2, …, m}. If s_i is the number of samples in class C_i, the amount of information needed to classify the data set is given by Formula (1), where p_i is the probability that a sample belongs to C_i, calculated as s_i/s, and the logarithm is taken to base 2. Assuming that attribute A has v distinct values {a_1, a_2, …, a_v}, attribute A divides the data set S into v subsets {S_1, S_2, …, S_v}, and the expected information after the split is given by Formula (2), where |S| is the total number of samples and |S_j| is the number of samples of attribute value a_j in the collection. The information gain is the difference between the information needed before the split and the expected information after it, as shown in Formula (3). The algorithm uses these equations to calculate the information gain for each candidate condition attribute, selects the attribute with the maximum gain as the split attribute, and then generates the branch nodes of the decision tree according to its values.
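Formulas (1)-(3) are not reproduced in this copy; the standard ID3 expressions matching the description above are, for reference:

```latex
% Reconstruction of the standard ID3 expressions described in the text,
% not the original typeset equations.
\begin{align}
I(s_1, s_2, \ldots, s_m) &= -\sum_{i=1}^{m} p_i \log_2 p_i, \qquad p_i = s_i / s \tag{1}\\
E(A) &= \sum_{j=1}^{v} \frac{s_{1j} + \cdots + s_{mj}}{s}\, I(s_{1j}, \ldots, s_{mj}) \tag{2}\\
\mathrm{Gain}(A) &= I(s_1, s_2, \ldots, s_m) - E(A) \tag{3}
\end{align}
```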
Compared with the basic decision tree algorithm, the C4.5 decision tree algorithm mainly offers a more refined structure and a more intuitive construction process. C4.5 can prune the tree during construction or after construction is completed, can handle incomplete data with unknown attribute values, and can also be used to create production rules. Its most prominent feature is the use of the information gain ratio to select the decision attributes. The gain ratio is obtained by dividing the information gain by the split entropy, which overcomes the shortcoming of using the raw gain as the evaluation criterion. Let T be the training data set, let the k classes be {C_1, C_2, …, C_k}, let |C_j| be the number of examples in class C_j, and let |T| be the number of examples in the data set. Select an attribute V and suppose it takes n mutually exclusive values. The information entropy of the classes is given by Formula (4), and the gain ratio is then calculated by Formula (5).
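As with Formulas (1)-(3), the C4.5 expressions referenced here are not shown in this copy; the standard forms corresponding to Formulas (4)-(5) are:

```latex
% Standard C4.5 gain-ratio expressions corresponding to the description of
% Formulas (4)-(5); a reconstruction, not the original typeset equations.
\begin{align}
\mathrm{Info}(T) &= -\sum_{j=1}^{k} \frac{|C_j|}{|T|}\,\log_2 \frac{|C_j|}{|T|} \tag{4}\\
\mathrm{GainRatio}(V) &= \frac{\mathrm{Gain}(V)}{\mathrm{SplitInfo}(V)}, \qquad
\mathrm{SplitInfo}(V) = -\sum_{i=1}^{n} \frac{|T_i|}{|T|}\,\log_2 \frac{|T_i|}{|T|} \tag{5}
\end{align}
```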
3.2 Optimization strategy of the decision tree algorithm
The data used to build decision trees often contain abnormal values such as noise and outliers. When abnormal data are used to build a decision tree, the branching rules derived from them lead to misjudgments. Pruning can then be used to delete abnormal data that do not meet the statistical measurement rules, which preserves the prediction accuracy. Figure 3 compares a decision tree before and after pruning. As the figure shows, the pruned decision tree is smaller and less complex and is easier to understand at the logical level; when data are classified after the pruning operation, classification is faster and the results are better.
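A minimal sketch of the pruning effect, using scikit-learn's cost-complexity pruning on synthetic data; this illustrates the idea in Fig. 3 rather than the authors' exact pruning procedure.

```python
# Post-pruning sketch: a pruned tree is smaller and usually generalizes better
# on noisy data. Synthetic data stands in for the credit records.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=9, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

unpruned = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=0.005).fit(X_train, y_train)

print("unpruned leaves:", unpruned.get_n_leaves(), "accuracy:", unpruned.score(X_test, y_test))
print("pruned   leaves:", pruned.get_n_leaves(),   "accuracy:", pruned.score(X_test, y_test))
```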
To address the problem that the credit data collected on a P2P net loan platform are easily influenced by uncertain factors, the decision tree algorithm is optimized with an uncertain data model. An uncertain numerical attribute is defined as follows: the uncertain value of attribute A_ij lies in the range A_ij ∈ [A_ij,a, A_ij,b], with A_ij,b > A_ij,a. If the data cannot be pinned down to a single value within the interval, a probability density function (PDF) is used instead; the probability density function f(x) of the attribute is given by Formula (6).
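Formula (6) is not reproduced in this copy; a common modelling choice, assumed here for illustration, is a uniform density over the value interval:

```latex
% Assumed uniform density over the uncertain interval (Formula (6) itself is not
% shown in this copy of the article).
f(x) =
\begin{cases}
\dfrac{1}{A_{ij,b} - A_{ij,a}}, & A_{ij,a} \le x \le A_{ij,b},\\[4pt]
0, & \text{otherwise.}
\end{cases}
```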
The main principle of handling uncertain data is to compute the information entropy of uncertain attributes on a probability basis; this probability-based entropy plays the decisive role. From it, following the form of the information gain and gain ratio equations of the basic decision tree algorithm, the corresponding information gain and gain ratio of the uncertain data are calculated, and the final gain ratio is obtained. The construction process of the uncertain data decision tree is shown in Table 1.
When uncertain data are actually processed, the conventional scheme cannot perform fuzzy operations on them, so a function-based processing method built on the integral principle is introduced. Assume that the value of x lies somewhere in the interval [a, b], and that f(x) is non-zero at n points within [a, b]. In general, the membership of an uncertain attribute value is the membership of a point with respect to a function, equal to the membership value at that point or the maximum of the function at that point. In fuzzy set theory, membership differs from a conventional function in that an element x may correspond to several values of f(x). After the information gain of an uncertain attribute is calculated, it is weighted by the composite membership function of the uncertain attribute to improve the prediction accuracy. Figure 4 shows an example of the membership function of an uncertain attribute.
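The core mechanism, under the uniform-PDF assumption sketched above, is that an interval-valued attribute contributes fractional counts to both sides of a split, and entropy is then computed from those probability-weighted counts. The following sketch shows that mechanism only; the record values and the threshold are hypothetical.

```python
# Sketch of probability-based splitting for uncertain (interval-valued) attributes.
# Each record contributes fractional class counts to both branches of a split.
import math

def split_mass(low: float, high: float, threshold: float):
    """Fraction of a uniform interval [low, high] falling below/above a threshold."""
    if threshold <= low:
        return 0.0, 1.0
    if threshold >= high:
        return 1.0, 0.0
    left = (threshold - low) / (high - low)
    return left, 1.0 - left

def entropy(counts):
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

# Hypothetical income intervals (uncertain values) with default labels 0/1.
records = [((2.0, 4.0), 0), ((3.0, 6.0), 0), ((1.0, 2.5), 1), ((0.5, 2.0), 1)]
threshold = 2.5

left_counts, right_counts = [0.0, 0.0], [0.0, 0.0]
for (lo, hi), label in records:
    l, r = split_mass(lo, hi, threshold)
    left_counts[label] += l
    right_counts[label] += r

total = sum(left_counts) + sum(right_counts)
expected = (sum(left_counts) / total) * entropy(left_counts) \
         + (sum(right_counts) / total) * entropy(right_counts)
print("expected information after split:", round(expected, 3))
```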
4 Result analysis and discussion
To verify the performance of the optimized C4.5 decision tree algorithm for parallel big data processing, simulation experiments were carried out. The experimental data come from a Chinese P2P lending platform, a net loan company that has been operating for more than 5 years; its basic data for the 5 years since its establishment form the original database, which contains more than 680,000 original credit records. To make the classification data more targeted, the collected raw data were first extracted, discretized, and re-balanced. Sixteen risk factors affecting net loan credit were summarized and extracted, covering 36 characteristic parameters. After the information gain ratio of these indicators was calculated, the model features were ranked from high to low: the borrower's credit, the real interest rate of the loan, the loan cycle, the borrower's total income, the borrower's debt ratio, the borrower's housing, the total loan amount, the total repayment amount and repayment method, and so on. In the data modeling phase, the WEKA intelligent analysis software was used: the code of the optimized decision tree algorithm was compiled with this software, and the model based on the optimized decision algorithm was established. The experiment uses a comparative design: the optimized decision tree algorithm, logistic regression, and naive Bayes were each trained on the processed data containing the nine retained characteristic items, yielding the default risk assessment results for the P2P net loan platform shown in Table 2. The optimized decision tree algorithm has advantages over the other algorithms in both modeling time and evaluation accuracy (Table 2).
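A Python analogue of this comparison is sketched below. It is not the WEKA setup used in the paper: the platform's credit records are not publicly available, so synthetic data with nine features stand in for the retained risk indicators.

```python
# Sketch of the three-way model comparison (decision tree vs. logistic regression
# vs. naive Bayes) on synthetic stand-in data; not the paper's WEKA experiment.
import time
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB

# Nine features, mirroring the nine retained risk indicators described above.
X, y = make_classification(n_samples=10000, n_features=9, n_informative=6, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "decision tree": DecisionTreeClassifier(random_state=42),
    "logistic": LogisticRegression(max_iter=1000),
    "naive Bayes": GaussianNB(),
}
for name, model in models.items():
    start = time.perf_counter()
    model.fit(X_train, y_train)
    elapsed = time.perf_counter() - start
    print(f"{name}: accuracy={model.score(X_test, y_test):.3f}, fit time={elapsed:.3f}s")
```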
To verify the advantage of the proposed optimization scheme, the experiment compares, under the same data conditions, the accuracy of the traditional decision tree algorithm with that of the decision tree algorithm for uncertain data. Figure 5 shows the comparison of effective precision between the two algorithms. After uncertain data are introduced and the uncertain attributes are analyzed, problems such as undeterminable values for income, debt ratio, housing, and other important attributes can be handled effectively, which improves the accuracy of the decision tree algorithm.
The training sample size for the uncertain data decision tree algorithm is relatively large, so more rules are learned and the prediction model built from these rules covers more of the possible situations. Therefore, when the number of training samples is large, the prediction accuracy of the uncertain fuzzy decision tree algorithm is better. Figure 6 compares the total accuracy of the two algorithms.
5 Conclusion
In classification prediction research, artificial intelligence decision tree algorithms are often used to process the data, and inductive algorithms are used to derive the corresponding rules. After the decision tree is constructed, new data are analyzed according to the decision strategy, and the analysis results provide an important basis for future decisions. This paper mainly studies P2P net loan default risk based on Spark and complex network analysis in a wireless network element data environment. After analyzing the basic principle of the decision tree algorithm, fuzzy sets are used to optimize and update the algorithm in view of the uncertain characteristics of net loan credit data. Starting from the fuzziness of the uncertain data, an integral-based function is used to process them, and the composite membership function is used to fuzzify the uncertain attributes that affect credit, so that credit risk can be predicted by the classification rules of the decision tree. In the simulation experiments, by analyzing the uncertain attributes, problems such as undeterminable values of important asset attributes (income, debt ratio, housing, and other assets) are handled effectively, which improves the accuracy of the decision tree algorithm. The experimental results show that the study is successful. However, there is still room for improvement: the next step is to further study the pruning method of the optimized decision tree algorithm.
Change history
19 January 2023
This article has been retracted. Please see the Retraction Notice for more detail: https://doi.org/10.1186/s13638-023-02224-9
Abbreviations
- P2P: Peer-to-peer Internet lending
- Spark: Open-source cluster computing technology for big data search
References
S.U. Yong, D. Zhou, Television ratings prediction research based on decision tree algorithm. Comput. Digit. Eng. 21, 325–659 (2017)
D.C. Wickramarachchi, B.L. Robertson, M. Reale, et al., HHCART: an oblique decision tree. Comput. Stat. Data Anal. 96, 12–23 (2016)
A. Hamoud, Selection of best decision tree algorithm for prediction and classification of students’ action. Soc. Sci. Electron. Publ. 3(2), 442–213 (2017)
F. Pan, The test results prediction research based on C5.0 decision tree algorithm. Microcomp. Appl 81, 1–12 (2016)
F. Ahmed, K.Y. Kim, Data-driven weld nugget width prediction with decision tree algorithm. Procedia Manufact. 10, 1009–1019 (2017)
H. Hamsa, S. Indiradevi, J.J. Kizhakkethottam, Student academic performance prediction model using decision tree and fuzzy genetic algorithm. Procedia Technol. 25, 326–332 (2016)
G. Sahoo, S. Kumar, Enhanced decision tree algorithm using genetic algorithm for heart disease prediction. Int. J. Bioinform. Res. Appl. 14(1/2), 49 (2017)
C.Y. Wu, T.J. Chiou, C.Y. Liu, et al., Decision-tree algorithm for optimized hematopoietic progenitor cell-based predictions in peripheral blood stem cell mobilization. Transfusion 56(8), 2042–2051 (2016)
F. Abbasitabar, V. Zare-Shahabadi, In silico prediction of toxicity of phenols to Tetrahymena pyriformis by using genetic algorithm and decision tree-based modeling approach. Chemosphere 172, 249–259 (2017)
S. Datta, V.A. Dev, M.R. Eden, Hybrid genetic algorithm-decision tree approach for rate constant prediction using structures of reactants and solvent for Diels-Alder reaction. Comput. Chem. Eng. 106, 690–698 (2017)
Funding
The study was supported by the National Social Science Foundation of China (Grant No. 16BJY160).
Author information
Authors and Affiliations
Contributions
ZT has made great contributions to the complex network of wireless network element data environment. XC has done a lot of research and made a lot of contributions to the default risk of P2P online loans. Both authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Authors' information
Zeping Tong, Doctor of Management, Associate professor. Graduated from Wuhan University. Worked in Wuhan University of science and technology. His research interests include P2P online lending.
Xiaomin Chen, Doctor of Financial Management, Lecturer. Graduated from Jinan University in 2011. Worked in South China Agricultural University. Her research interests include corporate investment and financing & accounting information disclosure.
Competing interests
The authors declare that they have no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
About this article
Cite this article
Tong, Z., Chen, X. RETRACTED ARTICLE: P2P net loan default risk based on Spark and complex network analysis based on wireless network element data environment. J Wireless Com Network 2019, 38 (2019). https://doi.org/10.1186/s13638-019-1345-0
DOI: https://doi.org/10.1186/s13638-019-1345-0