
The mechanism of dividend distribution and management equity ratio interaction based on wireless network mode

Abstract

In the Chinese stock market, the rights and interests of small- and medium-sized investors are frequently encroached upon. This makes it an urgent task to effectively restrain large shareholders and insiders of listed companies from expropriating the wealth of small and medium investors. The trading data of listed companies are characterized by exponential growth, rapid information disclosure, and an increasing number of information collection channels. Therefore, this paper introduces the BP artificial neural network, a data mining technique, to study the interaction between dividend distribution and the management equity ratio. Based on an analysis of the principle and implementation process of the BP neural network, the algorithm model is further optimized and updated, and a dividend policy identification model is established with the aim of protecting the interests of small- and medium-sized investors. The simulation results show that this research opens up a new path for studying the equity interests of small and medium investors and related policies.

1 Introduction

Although the domestic stock market has been developing for decades, the mechanism of dividend policy is still a stumbling block to its development (Ernayani R et al. 2017) [1]. Irregular fluctuations in relevant policies, the unconstrained dividend distribution of listed companies, and the prevalence of low dividend payouts or outright non-payment of dividends greatly affect the healthy and sustainable development of the stock market and undermine investors' confidence and enthusiasm. In order to protect the rights and interests of small and medium investors, who are in a weak position in the stock market, a sound environment must be established for the management of stock market investment with Chinese characteristics (Chen Q et al. 2017) [2]. In recent years, at the national legal level, the introduction of the LLSV index system has provided a clear framework for evaluating and managing investor rights and interests, and internationally the relevant laws and regulations are also well developed. Within listed companies, however, there is no effective mechanism for protecting the rights and interests of small and medium investors, and some companies even try to circumvent legal compliance requirements (Nielsen T et al. 2016) [3]. In particular, the large shareholders of listed companies often obtain company information earlier and faster than other investors. Exploiting this advantage, they engage in various irregular practices and appropriate the gains of small and medium investors at multiple levels (Zhang Z et al. 2016) [4]. Scholars who have studied stock market data of the past 10 years have found that, among listed companies with the same dividend distribution, the larger the shareholding, the more favorably investors respond to the announcement of the dividend policy (Qiu C et al. 2016) [5]. When listed companies carry out the same dividend distribution, the more standardized the mutual supervision and restriction among shareholders, the more likely the dividend policy is to receive a good evaluation from investors (Yang B et al. 2016) [6]. Against this background, research on the interaction mechanism between dividend distribution and equity proportion is not only necessary but also of positive guiding value.

2 State of the art

The popularity of wireless networks has also promoted the prosperity of the domestic stock market and ushered in new space for development. With the exponential increase of Internet data, the channels through which small and medium investors obtain dividend policy information have become wider and more efficient. In this paper, the BP artificial neural network, a data mining technique, is introduced to establish a dividend policy identification model aimed at protecting the interests of small and medium investors, which opens a new avenue for the study of equity interests and related dividend policies for small- and medium-sized investors. The study of artificial neural networks began in the 1940s and was originally intended to model the function of the brain and the electrophysiological behavior of neurons; the initial mathematical model was the formal neuron (Ji H et al. 2017) [7]. A neural network is composed of many simple neurons connected together, and different connection rules yield networks of different structure and complexity. The nonlinear characteristics of such networks enable parallel and adaptive processing, which gives them great structural and algorithmic advantages in handling complex problems (Gao H et al. 2017) [8].

The neural network is also a bionic model that simulates the function of the human brain's nervous system and can realize human-like capabilities such as storing, processing, and analyzing information and simplifying, summarizing, and simulating it, which makes the algorithm more intelligent. A neural network uses a large number of neuron nodes to represent the structure and function of the human brain. It learns repeatedly through inductive learning on large-scale empirical data. In the process of continuous adaptation, the weights of the interconnections between the neurons are corrected until the structure and weight distribution of the network become stable; the whole process mirrors the way humans acquire knowledge (Yang W R et al. 2016) [9]. Current statistics show that most neural network models are based on the BP neural network. The BP neural network adapts to changes in the system by autonomously learning to adjust the connection weights of its internal network, which gives it good fault tolerance and robustness. Its multi-input, multi-output structure also makes it well suited to studying and analyzing multivariable system parameters (Guanzhou D U et al. 2017) [10].

3 Methodology

3.1 BP artificial neural network

The BP neural network is the most representative feed-forward layered neural network and uses an error back-propagation structure, so the algorithm involves propagation in two directions. A BP network consists of an input layer, an output layer, and one or more hidden layers. Each layer is linked forward to the next through the connection weights between nodes. After a signal is input, the BP neural network transmits it to the hidden layer, where the excitation function and related calculations produce intermediate results; these are transmitted to the output layer and converted into the output signal. In a BP neural network, each node represents a neuron. There are no connections between neurons in the same layer; the nodes of each layer accept input only from the previous layer, and the output of each neuron affects only the nodes of the subsequent layer. In practice, a single hidden layer is often sufficient, and with three hidden layers any continuous function mapping can be approximated. The BP network model structure is shown in Fig. 1. The training process of the BP algorithm is characterized by two kinds of propagation, forward and backward. When the input signal follows the "input layer, hidden layer, output layer" path, the network is in the forward information transmission state. When the output produced from the hidden-layer signals deviates from the expected signal and the error exceeds the desired range, the system modifies the weights and thresholds of the neurons in each layer according to this error, so that each layer adapts and the performance of the system improves; at this point, the network is in the back-propagation state.

Fig. 1 Structure of BP neural network
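To make the forward transmission concrete, the following minimal NumPy sketch computes one "input layer, hidden layer, output layer" pass. The layer sizes, the sigmoid excitation function, and all variable names are illustrative assumptions rather than the configuration used later in this paper.

```python
import numpy as np

def sigmoid(x):
    """Logistic excitation function applied element-wise."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W_ih, theta_h, W_ho, theta_o):
    """One forward pass: input layer -> hidden layer -> output layer.
    W_ih, W_ho are connection-weight matrices; theta_h, theta_o are the
    thresholds of the hidden and output neurons (net input = W.x - theta)."""
    hidden = sigmoid(W_ih @ x - theta_h)
    output = sigmoid(W_ho @ hidden - theta_o)
    return hidden, output

# Illustrative dimensions only: 4 inputs, 6 hidden neurons, 1 output.
rng = np.random.default_rng(0)
x = rng.random(4)
W_ih, theta_h = rng.random((6, 4)), rng.random(6)
W_ho, theta_o = rng.random((1, 6)), rng.random(1)
_, y = forward(x, W_ih, theta_h, W_ho, theta_o)
print("output signal:", y)
```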

The BP neural network is a supervised (tutor-guided) learning algorithm. Its basic learning steps are as follows. First, the weights \( w \) and thresholds \( \theta \) are initialized: the connection weight from the input layer to the hidden-layer neurons is \( w_{ij} \), the connection weight from the hidden layer to the output layer is \( w_{jk} \), the threshold of the hidden layer is \( \theta_j \), and the output threshold \( \theta_k \) is given a small value between 0 and 1. Then, the input vector \( x_i=\left(x_1,x_2,\cdots,x_m\right) \) and the corresponding expected output vector \( \hat{Y}_i=\left(\hat{Y}_1,\hat{Y}_2,\cdots,\hat{Y}_n\right) \) are determined. The values of \( x_i \) are fed to the neuron nodes of the input layer. The hidden-layer calculation is carried out according to \( x_j=f\left(\sum_{i=0}^{m}W_{ij}x_i-\theta_j\right),\ j=1,2,\cdots,u \), and the output-layer calculation according to \( y_k=f\left(\sum_{j=0}^{u}V_{jk}x_j-\theta_k\right),\ k=1,2,\cdots,n \). The output value of each output-layer neuron is then compared with the expected output value, yielding an error value. If the error is within the expected range, the algorithm terminates. If the error exceeds the expected range, the model re-enters the backward calculation phase; after the correction calculations, weights that meet the requirements are obtained, the calculation ends, and the output signals are produced. The BP neural network learning algorithm flow is shown in Fig. 2.

Fig. 2 BP neural network learning algorithm flow chart
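A compact sketch of one supervised learning cycle, forward pass, error check, and backward correction of weights and thresholds, is given below. It assumes sigmoid activations, a mean-squared-error criterion, and a fixed learning rate; these choices and all names are illustrative assumptions, not the exact procedure of Fig. 2.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bp_train_step(x, y_target, W_ih, theta_h, W_ho, theta_o, eta=0.1, tol=1e-6):
    """One supervised learning cycle: forward pass, error check, then
    back-propagation of the error to correct weights and thresholds."""
    # Forward propagation through hidden and output layers (net = W.x - theta).
    h = sigmoid(W_ih @ x - theta_h)
    y = sigmoid(W_ho @ h - theta_o)

    # Compare the actual output with the expected output vector.
    err = y_target - y
    if np.mean(err ** 2) < tol:
        return W_ih, theta_h, W_ho, theta_o  # error already within the desired range

    # Backward propagation: local gradients through the sigmoid derivative.
    delta_o = err * y * (1.0 - y)
    delta_h = (W_ho.T @ delta_o) * h * (1.0 - h)

    # Correct weights and thresholds layer by layer.
    W_ho = W_ho + eta * np.outer(delta_o, h)
    theta_o = theta_o - eta * delta_o
    W_ih = W_ih + eta * np.outer(delta_h, x)
    theta_h = theta_h - eta * delta_h
    return W_ih, theta_h, W_ho, theta_o
```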

The BP neural network algorithm has three main shortcomings. The first is that convergence is too slow during network learning. This is because the gradient descent method is used to solve a nonlinear problem and the adjustment is controlled by the network weights; since the shape of the error surface varies across the weight space, the convergence rate suffers. The second is that the network solution easily falls into a local minimum. This is also a direct consequence of the error surface: its peaks and valleys mean that in high-dimensional space there are many points where the gradient is zero, so training easily stalls and returns only a local minimum. Figure 3 shows the distribution of the weight error over a two-dimensional space. The third is that the hidden layers have a great influence on the network structure: both the number of hidden layers and the number of nodes directly affect the convergence speed. Although adding hidden layers can enhance the network's nonlinear processing capability, it also makes the model more complicated, and beyond a certain number of layers the performance and accuracy of the network deteriorate.

Fig. 3 The distribution of weight error in two-dimensional space

The appropriate number of hidden layers therefore needs to be determined through experiments, for example with a simple harness like the one sketched below.
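The sketch below tries networks of increasing depth and keeps the one with the smallest validation error. The build_net factory and the fit/mse methods are assumed interfaces for illustration, not part of this paper's model.

```python
import numpy as np

def choose_hidden_layer_count(build_net, X_train, Y_train, X_val, Y_val, max_layers=3):
    """Try networks with 1..max_layers hidden layers and keep the depth
    with the smallest validation error."""
    best_depth, best_err = 1, np.inf
    for depth in range(1, max_layers + 1):
        net = build_net(depth)          # assumed factory returning a trainable network
        net.fit(X_train, Y_train)
        err = net.mse(X_val, Y_val)     # validation error for this depth
        if err < best_err:
            best_depth, best_err = depth, err
    return best_depth
```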

3.2 Optimization strategy of BP artificial neural network

The defects that may be encountered in neural network learning can be overcome by improving the parameter values of the BP neural network. In this paper, the network is optimized by limiting the maximum weight adjustment so that it adapts to the decrease of the error. In the back-propagation stage, the theoretical weight adjustment and the previous weight change are partially superimposed to form the actual weight adjustment. A momentum factor is used to quantify the influence of the previous weight change and to adjust the update accordingly. When the momentum factor mc equals zero, the weight is adjusted purely according to the gradient descent method. When mc = 1, the new weight adjustment equals the previous weight change, and the adjustment contributed by gradient descent is not taken into account. After adding momentum to the calculation, when the network reaches the flat bottom of the error surface, ∇f(w(n)) becomes very small, so w(n + 1) ≈ w(n). To prevent the weight adjustment from vanishing and to help the network escape local minima, the weight adjustment formula is optimized as Eq. (1).

$$ \Delta w\left(n+1\right)={m}_c\left[w(n)-w\left(n-1\right)\right]-\left(1-{m}_c\right)\eta \nabla f\left(w(n)\right) $$
(1)

Here n is the training iteration number, mc is the momentum coefficient, and η is the learning rate, with mc ∈ [0, 1]. Assuming an error precision of 0.0008, a nonlinear function is trained with the additional momentum method; the learning rate is set randomly and the results are averaged over 45 runs. The resulting relationship between the momentum coefficient and the learning time is shown in Fig. 4. After introducing momentum, the speed of network learning is improved. This adjustment method can effectively avoid local minima and reduce the recurrence of errors.

Fig. 4 Change of learning time with momentum factor
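A minimal sketch of the momentum-based adjustment in Eq. (1) is shown below; the values of mc and η and the simple quadratic test surface are illustrative assumptions, not the settings of this paper.

```python
import numpy as np

def momentum_update(w, w_prev, grad, mc=0.8, eta=0.1):
    """Weight adjustment of Eq. (1):
    delta_w(n+1) = mc*[w(n) - w(n-1)] - (1 - mc)*eta*grad f(w(n))."""
    delta_w = mc * (w - w_prev) - (1.0 - mc) * eta * grad
    return w + delta_w, w  # returns (w(n+1), w(n)) for the next call

# Hypothetical usage: descend the simple error surface f(w) = w^2.
w, w_prev = np.array([2.0]), np.array([2.0])
for _ in range(50):
    grad = 2.0 * w                        # gradient of f(w) = w^2
    w, w_prev = momentum_update(w, w_prev, grad)
print(w)                                  # approaches the minimum at 0
```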

The learning rate is a key factor in the learning process of a neural network: a constant learning rate leads to frequent weight adjustments, while a rate that is too small makes the algorithm converge too slowly and may prevent it from reaching the target. To overcome these problems, the learning rate formula is optimized so that the network needs fewer corrections and does not overshoot the optimal weights along the gradient. The optimized learning rate formula is shown in Eq. (2).

$$ {\displaystyle \begin{array}{l}\eta (n)=\left\{\begin{array}{ll}a\times \eta \left(n-1\right), & E(n)<E\left(n-1\right)\\ {}b\times \eta \left(n-1\right), & E(n)>c\times E\left(n-1\right)\end{array}\right.\\ {}w(n)=w\left(n-1\right)-\eta (n)\times \frac{\partial E(n)}{\partial W(n)}\end{array}} $$
(2)

Here η(n) is the learning rate at the n-th iteration, and E(n) and E(n − 1) are the values of the error function at the current and previous iterations; the constants a, b, and c are set in the initial configuration. When E(n) < E(n − 1), the error has decreased, so the learning rate is increased to a times its previous value and convergence is accelerated. When E(n) > c × E(n − 1), the error has increased, indicating that the weights were over-adjusted in the iteration; the learning rate is then reduced to b times its previous value, which avoids overshooting along the gradient and settling on a locally optimal weight. The training error of the BP network can also be reduced by improving the search direction; relying on the raw gradient alone slows down convergence, and the search may terminate at a local minimum that does not meet the requirements. Therefore, the conjugate gradient algorithm is introduced to provide the direction vectors for the search. This algorithm treats the error function over the weight range as a quadratic function and can compute an accurate approximation in one pass. The implementation first sets the objective function min E(w), w ∈ R. When the minimum of the error function is searched along the conjugate direction, Eq. (3) is obtained and the network is corrected accordingly.

$$ {\displaystyle \begin{array}{c}E\left(w(n)+{\eta}_nd(n)\right)=\min E\left(w\left(n+1\right)\right)\\ {}w\left(n+1\right)=w(n)+{\eta}_nd(n)\\ {}d(n)=-g(n)+{\beta}_nd\left(n-1\right),\kern0.5em d(0)=-g(0)\end{array}} $$
(3)
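These two optimizations can be sketched as follows: an adaptive learning-rate rule in the spirit of Eq. (2) and a conjugate search direction as in Eq. (3). The constants a, b, c and the Fletcher-Reeves choice of β_n are assumptions made for illustration; the paper does not fix them.

```python
import numpy as np

def adapt_learning_rate(eta_prev, E_curr, E_prev, a=1.05, b=0.7, c=1.04):
    """Learning-rate rule in the spirit of Eq. (2): enlarge eta by a when the
    error keeps falling, shrink it by b when the error exceeds c times the
    previous error, otherwise leave it unchanged."""
    if E_curr < E_prev:
        return a * eta_prev
    if E_curr > c * E_prev:
        return b * eta_prev
    return eta_prev

def conjugate_direction(grad, grad_prev, d_prev):
    """Search direction of Eq. (3), d(n) = -g(n) + beta_n * d(n-1), using the
    Fletcher-Reeves beta (an assumption; the paper does not specify beta_n)."""
    beta = float(grad @ grad) / float(grad_prev @ grad_prev)
    return -grad + beta * d_prev
```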

The BP neural network also suffers from the difficulty of determining the network structure and from easily falling into local minima. In this paper, a genetic algorithm is therefore introduced to optimize the network and improve its performance. The genetic algorithm has a strong advantage in global search: it is population-based and uses individual fitness as the criterion for the subsequent genetic operations. It not only has good global search ability but, thanks to the mutation operator, also retains local search ability. There are two main ways a genetic algorithm can optimize a neural network. The first is to optimize the topological structure of each layer and the parameters of the neurons; since the number of hidden layers and nodes cannot be determined precisely in advance, the genetic algorithm is used to optimize the topology first and the network parameters afterwards. The second is that, if the neural network structure is already fixed, the genetic algorithm is used to update the thresholds and weights of the neural network.

When the genetic algorithm is used to optimize the structure of the BP neural network, the first step is to define the coding and decoding scheme. After determining the number of hidden layers and the number of nodes, chromosomes are randomly generated and decoded into corresponding neural networks, whose initial weights are then used for learning and training. The fitness of each coded individual corresponds to the difference between the expected output value and the actual output value of its network. These steps are repeated until the optimal individual is found, and the network corresponding to that individual is the optimal neural network structure. Combining the BP neural network with the genetic algorithm can effectively improve the convergence speed, reduce errors, and improve accuracy. Moreover, the chance of falling into a local minimum is greatly reduced, so the predictions given by the model are more scientific and closer to optimal. The flow chart of the BP neural network algorithm combined with the genetic algorithm is shown in Fig. 5.

Fig. 5 Flow chart of the BP neural network algorithm optimized by the genetic algorithm
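A sketch of the weight-optimization variant (the second form above) is given below: real-coded chromosomes represent candidate weight vectors, fitness is supplied by the caller (for example the negative network error after decoding), and selection, crossover, and mutation evolve the population. The population size, rates, and operators are illustrative assumptions, not the settings of this paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def ga_optimize_weights(fitness, dim, pop_size=30, generations=100,
                        cx_rate=0.8, mut_rate=0.05, mut_scale=0.3):
    """Evolve a population of real-coded weight chromosomes; 'fitness'
    returns a larger value for a smaller decoded-network error."""
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(generations):
        fit = np.array([fitness(ind) for ind in pop])
        # Tournament selection of parents
        parents = pop[[max(rng.integers(pop_size, size=2), key=lambda i: fit[i])
                       for _ in range(pop_size)]]
        # Arithmetic crossover
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < cx_rate:
                alpha = rng.random()
                children[i] = alpha * parents[i] + (1 - alpha) * parents[i + 1]
                children[i + 1] = alpha * parents[i + 1] + (1 - alpha) * parents[i]
        # Gaussian mutation keeps some local search ability
        mask = rng.random(children.shape) < mut_rate
        children[mask] += rng.normal(scale=mut_scale, size=mask.sum())
        # Elitism: carry the best individual of the old population forward
        children[0] = pop[int(np.argmax(fit))]
        pop = children
    fit = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmax(fit))]
```

The returned chromosome can then be decoded into initial BP weights and refined with ordinary back-propagation, which is the combination described in the text.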

4 Result analysis and discussion

To verify the research on dividend distribution and policy based on the BP neural network, data of listed steel-industry companies from 2010 to 2017 are selected for the simulation experiment. The data from 2010 to 2015 serve as training data, and the data from 2016 to 2017 are used as test data to evaluate the accuracy of the neural network. First, the model is designed. The related parameters are as follows: the transfer function of the hidden layer is the tansig function and that of the output layer is the purelin function, the interval parameter is set to 16, the learning rate of the network is 0.0012, the maximum number of training epochs is 360, and the target error is 0.6 × 10^(−11).
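Translated into a plain Python configuration, these settings might look like the sketch below. Only the numeric values are taken from the text; the data split and the net object with its train_one_epoch method are hypothetical placeholders.

```python
import numpy as np

# Numeric settings transcribed from the text; everything else is assumed.
config = {
    "hidden_transfer": np.tanh,        # tansig-style hidden transfer function
    "output_transfer": lambda v: v,    # purelin-style linear output
    "interval": 16,
    "learning_rate": 0.0012,
    "max_epochs": 360,
    "goal_error": 0.6e-11,
}

def train_until_goal(net, X_train, Y_train, cfg=config):
    """Hypothetical training loop honouring the stopping criteria above."""
    mse = np.inf
    for epoch in range(cfg["max_epochs"]):
        mse = net.train_one_epoch(X_train, Y_train, cfg["learning_rate"])
        if mse <= cfg["goal_error"]:   # target error reached: stop early
            return epoch + 1, mse
    return cfg["max_epochs"], mse
```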

The parameter determination process of the BP network is one of repeated learning and accumulation. In a BP network model, the influence of an independent variable on the dependent variable depends not only on its own value but also on the values of the other independent variables. Therefore, in the experiment, the trained network is used to forecast the data of 2016 and 2017, and the results are shown in Table 1. It can be seen from the data that the relative error is less than 1%, which shows that the neural network evaluation and prediction system constructed in this paper has very good accuracy. Figure 6 shows the variation of the BP neural network error.

Table 1 Comparison between predicted and actual values

Fig. 6 Network error change diagram
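The relative-error check behind Table 1 can be reproduced with a few lines such as the following; the numeric values in the usage example are made up for illustration and are not the paper's data.

```python
import numpy as np

def relative_error_percent(predicted, actual):
    """Relative prediction error |predicted - actual| / |actual| in percent."""
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return 100.0 * np.abs(predicted - actual) / np.abs(actual)

# Made-up values purely for illustration (not the paper's data):
errors = relative_error_percent([1.023, 0.987], [1.030, 0.990])
print(errors, "all below 1%:", bool((errors < 1.0).all()))
```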

Figure 7 shows the iteration process of the BP neural network model constructed in this paper: the total number of iterations is 66, and the best performance is reached at the 59th iteration.

Fig. 7 BP neural network iteration graph

5 Conclusion

As an evaluation model, the BP neural network performs accurately, qualitatively, and efficiently, making it well suited to nonlinear problems. As an important representative of efficient mathematical models, it has made positive contributions to many aspects of managing human society. Therefore, this paper proposes using a BP neural network model to study the interaction between dividend distribution and the management equity ratio. An in-depth analysis of the structure of the BP neural network shows that randomly determining the weights and thresholds leads to over-fitting, and that human choices of the number of nodes result in long learning times, too many iterations, and a poor learning rate. The network is therefore optimized to improve the accuracy and efficiency of the BP prediction model: the weight parameters are improved to optimize the network structure, the learning rate formula is improved so that the network needs fewer corrections and converges faster, and a genetic algorithm is introduced to update the algorithm flow. After these improvements, the optimized BP neural network is simulated. The verification results show that the evaluation model based on the BP neural network is successful and can help small and medium investors effectively identify the dividend policies of listed companies. Nevertheless, there is still room for improvement; the next step is to further study the prediction accuracy of the neural network model.

Abbreviations

BP:

Back propagation neural network

References

  1. R. Ernayani, O. Sari, Robiyanto. The effect of return on investment, cash ratio, and debt to total assets towards dividend payout ratio (a study towards manufacturing companies listed in Indonesia stock exchange)[J]. J. Comput. Theor. Nanosci. 23(8), 7196–7199 (2017)

  2. Q. Chen, Analysis of power distribution automation and distribution management[J]. Henan Sci. Technol. 9(21), 8 (2017)

  3. T. Nielsen, Advanced distribution management systems necessary with increased DERs[J]. Nat. Gas Electricity 33(5), 18–22 (2016)

  4. Z. Zhang, Research and application of capability evaluation model based on BP neural network educational technology—taking the problem-based learning teaching process learning as an example[J]. J. Comput. Theor. Nanosci. 13(9), 6210–6217 (2016)

  5. C. Qiu, J. Shan, Research on intrusion detection algorithm based on BP neural network[J]. Int. J. Secur. Appl. 9(4), 247–258 (2016)

  6. B. Yang, The comprehensive regionalization function partition method research based on BP neural network[J]. Territ. Nat. Resour. Study. 4(3), 68–69 (2016)

  7. H. Ji, Z.Y. Li, Research and simulation of SVPWM algorithm based on BP neural network[J]. Key Eng. Mater. 693, 1391–1396 (2016)

  8. H. Gao, M. Colliery, Research on reliability of coal mine support based on BP artificial neural network[J]. Jiangxi Coal Sci. Technol. 37(4), 18–92 (2017)

  9. W.R. Yang, C. Chi, X.R. Yang, Research on vibration isolation effect of porous media shock absorber based on BP neural network[J]. J. Magn. Mater. Devices 724(18), 26–12 (2016)

  10. D.U. Guanzhou, G. Wei, Z. Gao, Research on power consumption prediction of public buildings based on BP neural network[J]. Eng. Econ. 29(16), 37 (2017)


Funding

No Funding

Author information

Authors and Affiliations

Authors

Contributions

DF carried out the research on the interaction mechanism of dividend distribution and management equity ratio based on the wireless network model. The author read and approved the final manuscript.

Corresponding author

Correspondence to Daihong Fu.

Ethics declarations

Author’s information

Daihong Fu, Master of Management, Associate Professor, graduated from Xi'an Jiaotong University in 2004 and works at the China University of Petroleum. Her research interests include accounting and audit issues of listed companies.

Competing interests

The author declares that she has no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Fu, D. The mechanism of dividend distribution and management equity ratio interaction based on wireless network mode. J Wireless Com Network 2019, 29 (2019). https://doi.org/10.1186/s13638-018-1319-7


  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1186/s13638-018-1319-7

Keywords