Parametric Modeling of Solid Mechanics in Engineering Research Based on Cloud Computing Model


 Traditional parametric models of solid mechanics have disadvantages such as a poor lightweight level of cloud data and low integrity of solid mechanics operations. To solve these problems, a parametric model for solid mechanics in engineering research based on the cloud computing model is designed. By calculating the optimized number of cloud data copies, the upper limit of the resources required for engineering research is estimated, and the specific value is assessed by continuously cycling the demand, so as to complete the basic conflict analysis and set up a cloud computing engineering research environment. On this basis, the modeling indexes are selected; by analyzing the sensitivity of the indexes, the modeling parameters are optimized and a new solid mechanics parametric model is constructed. Analysis and comparison of the experimental data show that after applying the parametric model of solid mechanics in engineering research based on the cloud computing model, the lightweight level of cloud data can be increased by up to 24 levels, and the operational integrity of solid mechanics can be increased by about 30%.

materials and the original material reaches the limit of their application, the application of nonlinear models becomes more and more widespread [3,4]. Solid mechanics is an early-formed branch of mechanics with a strong theoretical foundation and extensive applications. It mainly studies the displacement, motion, stress, strain, and destruction produced at the internal points of deformable solids under the action of external factors (such as load, temperature, and humidity). The study of solid mechanics has both elastic and plastic branches, and both linear and nonlinear branches. In early studies of solid mechanics, most of the hypothesized objects were uniform continuous media, but the composite mechanics and fracture mechanics developed in recent years have expanded the scope of research: non-uniform continua and non-continuous bodies containing cracks are now studied.
The traditional solid mechanics parametric model obtains the state of engineering data in the cloud environment by the NSGA-II method, then uses the elliptic basis function neural network method to obtain a standard approximation model, and substitutes all the data that conform to the operation rules into the model, obtaining a series of values. The data are then sorted by size in a corresponding binary tree to realize the application value of the model. However, with the advancement of science and technology, this traditional parametric model has gradually produced cloud data with a lower lightweight level and lower computing integrity. To solve these problems, a new parametric model of solid mechanics in engineering research based on the cloud computing model was designed. The basic operating environment was established through the optimization of the cloud data copy quantity and the demand estimation of engineering research resources, and the practical value of the model was proved by comparative experimental data.

Establishment of cloud computing engineering research environment
The research environment of cloud computing engineering is the basis for the operation of the new solid mechanics parametric model. The specific operation flow is as follows.

Optimization of cloud computing data copy quantity
With the operation of the cloud computing engineering research environment, the amount of application data keeps increasing, and different cloud data copy groups diverge from one another. With continuous customization and data accumulation between copies, the number of users accessing the same data online at the same time may grow, resulting in higher numbers of visits to certain rows of a node's basic table. In order to avoid the problems caused by increased access to certain cloud data, unbalanced model loads, and increased average request waiting time, the number of copies must be properly adjusted to disperse these hot spots, improve the availability of cloud computing data, and ensure the stability of the model's operating environment [5,6]. In the sharing architecture of the cloud computing shared database, all copy data are stored in the basic-table and extended-table mode of the shared tables. Each group of engineering computing units uses metadata mapping to achieve data isolation in the replica view; that is, the users of each copy see the cluster as if it were serving only them. Therefore, the user requests of each copy are independent of each other, and so are the requests of the entire cluster.
Assume that N(t) denotes the number of requests k (k = 0, 1, 2, ...) for cloud data copies that arrive within the time interval (0, t], and let P_k(t) denote the probability that k requests arrive, that is:

P_k(t) = P{N(t) = k}, k = 0, 1, 2, ... (1)

Among them, N(t) is bounded by the upper limit of the cloud computing data copies, {N(t) = k} represents the event that k data requests exist, and λ represents the total rate at which data can arrive stably. In a sufficiently small unit of time, the frequency of data requests is independent of time and approximately proportional to the length of the interval; that is, there is no sudden surge of access within any sufficiently small unit of time:

P_1(Δt) = λΔt + o(Δt) (2)

Within a high-order infinitesimal of unit time, two requests do not arrive at the same moment; that is, one copy responds to only one request at a time, while the other requests enter the waiting queue. According to the above description, the probability formula is drawn:

P_0(Δt) = 1 − λΔt + o(Δt) (3)

where o(Δt) is the high-order infinitesimal of Δt and 1 − P_0(Δt) is the probability that a data request of a cloud computing copy enters the waiting queue. According to formula (3), when Δt → 0 it can be derived that:

dP_k(t)/dt = λP_{k−1}(t) − λP_k(t), k ≥ 1 (4)

When P_k(0) = 0 for k ≥ 1, that is:

P_k(t) = ((λt)^k / k!) e^{−λt}, k = 0, 1, 2, ... (5)

where e represents the base of the natural logarithm, λ represents the optimization constant of the cloud computing data copies, k represents a random natural number, and (λt)^k represents the optimization order of magnitude.
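The arrival model above is the standard Poisson process. A minimal Python sketch (the function names and the capacity threshold are illustrative assumptions, not from the original text) shows how the probability in formula (5) can be used when sizing the number of copies:

```python
import math

def poisson_pmf(k: int, rate: float, t: float) -> float:
    """Probability that exactly k copy requests arrive in (0, t],
    assuming Poisson arrivals at a stable rate (cf. formula (5))."""
    mean = rate * t
    return mean ** k * math.exp(-mean) / math.factorial(k)

def prob_queue(capacity: int, rate: float, t: float) -> float:
    """Probability that more than `capacity` requests arrive, i.e.
    that at least one request enters the waiting queue."""
    return 1.0 - sum(poisson_pmf(k, rate, t) for k in range(capacity + 1))
```

A copy count can then be chosen as the smallest capacity for which prob_queue falls below an acceptable threshold.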

Demand estimation of engineering research resource
Demand estimation of cloud computing environment engineering research resources refers to the resource demand of a single copy in the shared-table mode of the shared database, which is related to the number of user requests. However, in multi-engineering application programs, redundant resource demand is required to handle the isolation and security processing of multiple resources. The storage usage of a single resource is independent and proportional to the frequency of each user's data access requests [7,8]. The storage usage of all resources is proportional to the total number of users of the data nodes. Let the number of all users on the engineering research node be given; among the quantities involved, the terms represent the resource demand setting, the request quantity relationship between users, the number of connection requests, the original demand estimation, and the number of users to be connected, respectively. Taking into account the SLA agreement between the project demand resources and the user, the SLA estimation model is shown in Figure 1; this model reflects the estimation form of the satisfaction rate of cloud computing engineering research. When the actual response time is no higher than that specified in the SLA agreement, the request provided by the application service meets the engineering research requirements; when the request response time exceeds the agreed upper limit, on the contrary, the engineering research requirements cannot be met.
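As a rough illustration of the stated proportionalities, the following Python sketch estimates a demand figure and checks a request against its SLA; the function names, the linear form, and the redundancy share are assumptions for illustration, not the paper's formula:

```python
def estimate_resource_demand(users: int, req_rate_per_user: float,
                             unit_storage: float, redundancy: float = 0.2) -> float:
    """Storage demand proportional to the total number of users and
    their request frequency, plus an assumed redundancy share for the
    isolation and security processing of multi-engineering programs."""
    base = users * req_rate_per_user * unit_storage
    return base * (1.0 + redundancy)

def meets_sla(actual_response_s: float, agreed_response_s: float) -> bool:
    """A request satisfies the research requirement when its actual
    response time does not exceed the time agreed in the SLA."""
    return actual_response_s <= agreed_response_s
```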

Figure 1 SLA estimation model for cloud computing engineering research
The demand estimation of cloud computing environment engineering research resources is a multi-objective optimization problem. For ease of calculation, some reasonable simplifications are made based on a greedy method. For example, R = {0.2s, 0.3s, 0.4s, 0.5s} corresponds to the required response-time values of estimation results A, B, C, and D in their respective SLAs, all located at data node x. When a resource shortage occurs on data node x, all data nodes are first divided into two sets A and B according to resource usage. At this time, x belongs to set A, and the shortage of x is the most serious, so x is the data node migrated from first. Set B has data nodes a, b, etc., and a has the most abundant remaining resources. Following the idea of the greedy algorithm, a data node is selected as the destination, and the tenant with the smaller value is migrated to this node, as in the previous example. According to this idea, the data of the estimation results are transferred successively until data node x is no longer short of resources.
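The greedy relief step described above can be sketched as follows; this is a simplified illustration whose tenant ordering and capacity bookkeeping are assumptions consistent with the description, not the paper's exact algorithm:

```python
def greedy_migrate(x_tenants, x_capacity, destinations):
    """x_tenants: {tenant: demand} on the overloaded node x.
    destinations: {node: spare_capacity} for the nodes in set B.
    Returns a migration plan [(tenant, node)] built greedily:
    smaller tenants move first, each to the node with the most
    spare capacity, until x fits within its own capacity."""
    plan = []
    load = sum(x_tenants.values())
    # migrate smallest-demand tenants first (greedy choice)
    for tenant, demand in sorted(x_tenants.items(), key=lambda kv: kv[1]):
        if load <= x_capacity:
            break                          # x is no longer short
        # destination: node with the most abundant remaining resources
        node = max(destinations, key=destinations.get)
        if destinations[node] < demand:
            break                          # no node can absorb this tenant
        destinations[node] -= demand
        load -= demand
        plan.append((tenant, node))
    return plan
```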

Conflict analysis of cloud data engineering research
The conflict analysis of cloud data engineering research is related to the storage characteristics. This study is based on the storage method of the basic table and the extended table in the multi-copy shared data mode of the shared database. The basic business data of the application is stored in the basic table, and copy-customized or extended data is stored in the extended table [9,10]. From the characteristics of transaction requests, the number and frequency of transaction requests for the basic table far exceed those for the extended table. In addition, when a data record in the extended table is converted into data in the basic table, it can only become an attribute of one of the records. Thus, for the same amount of logical data requested, the amount of physical data in the extended table far exceeds that in the basic table. In a single transaction access, the amount of data access D in the extended table is greater than in the basic table. Finally, the data in the extended table depends on the metadata table for metadata queries to obtain its conversion information, and the query efficiency is much lower than that of the basic table. Let the result of the conflict analysis of cloud data engineering research be expressed by formula (6). Among them, the terms represent the basic conflicting request efficiency, the research parameter of cloud computing engineering, the stability factor of the cloud environment operation, the stable operation cycle, and the conflict request frequency, respectively. The calculation result of formula (7) is used to cycle the conflict analysis results of cloud data engineering research, and the specific cycle operation method is shown in Figure 2.

Figure 2 Operation method of conflict analysis for cloud data engineering research

Methods
Based on the research environment of cloud computing engineering, in order to ensure the smooth operation of the new solid mechanics parametric model, it is necessary to complete the construction of the remaining applications of the model through steps such as selection of modeling index.

Selection of modeling index of solid mechanics
Selection of the modeling indexes of solid mechanics is a key step in establishing the new parametric model. When the center-difference equation is used to solve the solid mechanics equations, the nonlinear convergence of the model itself can be exploited to improve the degree of vectorization and parallelization of the modeling indexes [11,12]. In general, the common solid mechanics modeling indexes fall into four types, and the mutual constraints between them can be expressed as in Figure 3. The four types of solid mechanics modeling indexes are determined by the method shown in formula (8).
Among them, the terms represent the solid mechanics parameters affecting the four modeling indexes, the impact coefficient among the indexes, the original quantity of stable modeling, and the minimum value of the product of the impact coefficient and the original quantity of stable modeling, respectively. During a collision of modeling indexes, the relevant influencing parameters of each index change significantly. At this time, to ensure that the indexes themselves are not affected by the modeling event, fixed-number processing can be performed for each index [13]. The specific processing results are shown in Table 1.

Index sensitivity analysis
Relying on the relative sensitivity alone to filter the modeling indexes may cause the overall performance of the model to drop too much. Therefore, a compromise is adopted in the selection of design variables, namely a method that considers the direct sensitivity and the relative sensitivity at the same time. First of all, the indexes with excessively large relative sensitivity are excluded according to the relative sensitivity of each performance [14,15]. From the remaining indexes, the indexes for weight reduction are selected according to certain rules: in the selection, indexes with quality parameters of 50%-70% are preferred, and the impact of indexes with quality parameters lower than 50% on the model is almost negligible.
Finally, a new round of screening is performed on the excluded indexes. First, some indexes whose quality parameters show high sensitivity are excluded, such as the first type of indexes, and then a small number of indexes with high direct sensitivity are selected. If an index with higher relative sensitivity but lower direct sensitivity is selected, its quality parameter may be small; although its relative sensitivity is high, its direct sensitivity is small, and it is difficult to improve stability effectively even with a large performance enhancement process. The advantage of combining relative sensitivity and direct sensitivity is that the lightweight selection of indexes can effectively stabilize the performance of the model's running structure [16,17]. Let the sensitivities of the modeling indexes be represented respectively, with the specific expressions shown in formula (9).

S_d = ∂f/∂x, S_r = (∂f/∂x) / (∂m/∂x) (9)
Among them, S_r represents the relative sensitivity of the index, S_d represents the direct sensitivity of the index, and m represents the quality parameter of the index. On the basis of the above calculation results, the specific sensitivity analysis results of each index can be expressed as in Table 2 through the direct analysis method.
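The two-stage screening described above can be sketched as a simple filter; the thresholds (the relative-sensitivity cutoff in particular) and the data layout are illustrative assumptions, not values from the paper:

```python
def screen_indexes(indexes, rel_cutoff=1.0):
    """Screen modeling indexes as described in the text:
    1) drop indexes whose relative sensitivity is excessively large
       (cutoff assumed),
    2) keep indexes whose quality parameter lies in the 50%-70% band,
    3) among those, prefer high direct sensitivity.
    `indexes`: list of dicts with keys 'name', 'direct', 'relative',
    and 'quality' (quality as a fraction)."""
    kept = [i for i in indexes if i["relative"] < rel_cutoff]       # step 1
    kept = [i for i in kept if 0.5 <= i["quality"] <= 0.7]          # step 2
    return sorted(kept, key=lambda i: i["direct"], reverse=True)    # step 3
```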

Parametric optimization algorithm
Based on the above calculation results, the model indexes are edited by a parametric optimization algorithm. The specific editing results are shown in Figure 4.

Figure 4 Editing effect of parametric optimization algorithm on model indexes
Under the constraints of the figure, and using formula (9), the parametric optimization results of the four model indexes can be expressed as follows. Among them, the terms represent the primary level of parametric optimization, the optimization authority, and the fixed result of the permission setting, respectively. The solid mechanics genetic algorithm is a highly parallel, random, and adaptive global optimization probability search algorithm formed by simulating the biological evolution process in nature, and the multi-objective genetic algorithm is based on it. The basic principle is as follows: first, a group of randomly generated individuals is taken as the initial population. Each individual in the population is called a chromosome, and the smallest element of a chromosome is a gene, which corresponds to a certain characteristic of the solution, that is, a design variable. Through successive iterations, offspring chromosomes are obtained from the previous generations through crossover or mutation operations. The chromosomes of each generation are measured by their degree of fitness, and in the process of forming a new generation of chromosomes, some offspring chromosomes are selected and others eliminated according to their fitness so as to keep the population size constant [18,19]. A chromosome with high fitness has a high probability of being chosen. After many iterations, the algorithm converges to a better chromosome, which may be the optimal solution of the multi-objective problem. Under the influence of this algorithm, the parametric model of solid mechanics in engineering research based on the cloud computing model improves its computational integrity by continuously narrowing the level of its own optimization parameters, and makes use of stable engineering research properties to bring the lightweight level of the cloud data up to the rated standard [20].
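A minimal sketch of such a genetic algorithm follows; it uses a single scalar fitness for brevity and rank-based survival rather than strict fitness-proportional selection, and all parameter values are illustrative assumptions:

```python
import random

def genetic_optimize(fitness, n_genes, pop_size=20, generations=50,
                     crossover_p=0.8, mutation_p=0.1, seed=0):
    """Random initial population -> rank-based survival of the fitter
    half -> single-point crossover and point mutation -> population
    size kept constant, as outlined in the text."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]           # fitter chromosomes survive
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            if rng.random() < crossover_p:           # single-point crossover
                cut = rng.randrange(1, n_genes)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            if rng.random() < mutation_p:            # point mutation of one gene
                child[rng.randrange(n_genes)] = rng.random()
            children.append(child)
        pop = parents + children                     # constant population size
    return max(pop, key=fitness)
```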
Organizing the parametric optimization results of the four model indexes, the parametric model of solid mechanics in engineering research based on the cloud computing model can be expressed as follows. Among them, the terms represent the parametric model constant of solid mechanics in engineering research based on the cloud computing model, the operational integrity factor of the model, the normalized parameter, and the upper limit of the lightweight level of the cloud data, respectively.

Experiment
To verify the practical value of the parametric model for solid mechanics in engineering research based on the cloud computing model, the following comparative experiment was designed. Two computers equipped with the Windows 10 operating system and 256 G of running memory were used as experimental objects. One computer was equipped with the traditional parametric model as the control group; the other was equipped with the new parametric model as the experimental group. All other variables were kept the same, and the changes in the relevant operating data of the two parametric models were recorded at the same time.

Experimental parameter setting
Before starting the experiment, the relevant experimental parameter settings were completed as shown in Table 3. In the table, the CES parameter represents the stability of the cloud environment, the ORG parameter represents the level of the engineering research objects, the MPS parameter represents the solid mechanics parameter, the MSC parameter represents the model stability coefficient, the TDL parameter represents the lightweight level of the target cloud data, and the OTM parameter represents the integrity of the solid mechanics operations of the target. In order to ensure the fairness of the experiment, the parameter data of the experimental group and the control group were kept consistent throughout.

Comparison of lightweight level of cloud data
Under the premise that other experimental conditions remained unchanged, 16 minutes was used as the experimental time, and the changes in the lightweight level of cloud data after applying the experimental group and control group models in this period were recorded. In order to avoid the impact of sudden events, the experiment was divided into three parts: the operation state of low mechanical parameters, the operation state of middle mechanical parameters, and the operation state of high mechanical parameters. The specific experimental results are shown in Figure 5, Figure 6, and Figure 7.

Figure 5 shows that when the model was in the operation state of low mechanical parameters, with the increase of running time, the minimum value of the cloud data lightweight level after applying the experimental group model was 40 files, and the maximum of 84 files was reached when the running time was 10 minutes, a difference of 44 files. After applying the control group model, the minimum value of the cloud data lightweight level was 8 files, and the maximum of 36 files was reached when the running time was 8 minutes, a difference of 28 files, far lower than the experimental group.

Figure 6 shows that when the model was in the operation state of middle mechanical parameters, with the increase of running time, the minimum value of the cloud data lightweight level after applying the experimental group model was 52 files, and the maximum of 80 files was reached when the running time was 6, 7, and 8 minutes, a difference of 28 files. After applying the control group model, the minimum value of the cloud data lightweight level was 12 files, and the maximum of 28 files was reached when the running time was 7 and 13 minutes, a difference of 16 files, much lower than the experimental group.

Figure 7 shows that when the model was in the operation state of high mechanical parameters, with the increase of running time, the minimum value of the cloud data lightweight level after applying the experimental group model was 48 files, and the maximum of 88 files was reached when the running time was 8 minutes, a difference of 40 files. After applying the control group model, the minimum value of the cloud data lightweight level was 4 files, and the maximum of 20 files was reached when the running time was 7, 14, and 16 minutes, a difference of 16 files, much lower than the experimental group.

Comparison of integrity of solid mechanics computing
Under the premise that other experimental conditions remained unchanged, 16 minutes was taken as the experimental time, and the changes in the integrity of the solid mechanics operations after applying the experimental group and control group models were recorded. In order to avoid sudden events, the experiment was divided into three parts: the operation state of low mechanical parameters, the operation state of middle mechanical parameters, and the operation state of high mechanical parameters. The specific experimental results are shown in Table 4, Table 5, and Table 6.

Table 4 shows that when the model was in the operation state of low mechanical parameters, with the increase of operation time, the computing integrity showed a stepwise increasing trend after applying the experimental group model, reaching a maximum of 78.91% when the operation time was 15 and 16 minutes. After the control group model was applied, the computing integrity presented a rising trend and reached a maximum of 43.05% at an operation time of 16 minutes, much lower than the experimental group.

Table 5 shows that when the model was in the operation state of middle mechanical parameters, with the increase of operation time, the computing integrity after applying the experimental group model first increased, then stabilized, and then rose again, reaching a maximum of 82.64% when the operation time was 16 minutes. After the control group model was applied, the computing integrity presented a stepwise decreasing trend, with a maximum of 50.04% at an operation time of 16 minutes, much lower than the experimental group.

Table 6 shows that when the model was in the operation state of high mechanical parameters, with the increase of operation time, the computing integrity fluctuated back and forth after applying the experimental group model, reaching a maximum of 84.09% when the operation time was 8, 10, 12, 14, and 16 minutes. After the control group model was applied, the computing integrity first decreased, then stabilized, and then rose, reaching a maximum of 51.33% at an operation time of 16 minutes, much lower than the experimental group.

Results and Discussion
The parametric model of solid mechanics in engineering research based on the cloud computing model retains the advantages of the traditional model while effectively improving on its inadequacies, and it improves the application stability of the new model by optimizing the solid mechanics operation parameters and related links. In the future, major academic institutions in China can use this model as a starting point to gradually improve research in solid mechanics and related fields.

Abbreviations
NSGA: Non-dominated Sorting Genetic Algorithm

Availability of data and material
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.