Life cycle assessment data model and analysis system model
The database structure is designed around the storage requirements of the analysis data. To build a complete database, the structure must first be designed and established; once the structure exists, the next step is to collect the data and store it according to that structure. This basically completes the establishment of the database [7]. However, such a database is still relatively rudimentary and not suitable for general users. A complete and easy-to-use database requires sufficient interactivity and intelligence. The user initiates operations such as accessing, searching, and querying the database through a terminal with a built-in application program, each operation subject to its own preconditions. The simplest and most direct method is to use the database management system as a medium that provides users with interactive, structured operations, so that practical actions such as adding and deleting records become possible [8]. These methods, however, remain fairly specialized and target mainly database-related staff; for ordinary users, the program interface and the difficulty of operation still need to be optimized and simplified [9]. The concept of product cycle data is shown in Fig. 1.
The product life cycle is the entire process from the conception of a new product until it disappears from the market. Most product life cycles are S-shaped, and the term generally refers to a new type of product on the market, not a single brand. The core data required for LCA analysis are the system data of the product, which contain two kinds of data: exchange flow data and unit process data. The system data of the product have specific values; subdivided, they comprise the product system, process connections, exchange flows, and so on. Project management data differ from product system data: they mainly store the process management of the storage unit and related modeling and verification data, such as project time and personnel information. These data are used only to express the transparency of the data in the unit process [10]. The LCIA method data are mainly composed of LCIA factors, LCIA classifications, LCIA methods, and so on. They mainly affect the evaluation process and at the same time provide support for environmental impact evaluation. The result obtained after the LCIA analysis is the evaluation result data, which serve as a basis and practical support for the interpretation stage [11].
In the process of establishing the LCA database, the basic data set does not directly participate in the whole process. After the data are collected, the LCIA method and other related data are established uniformly; this sequence of operations ensures the consistency of the final data [12].
In addition to the issues above, LCA analysis must also compare multiple different products with similar functions across various aspects [13]. This type of problem is usually referred to as a project, and it applies to systems of similar function but different products. These data are managed and collected in the form of an engineering table, as described in Fig. 2.
Usually, many parts or products are assembled into a complete, effective product, and a product-centric system is produced at the same time. The system is the sum of the product flows and the elementary flows; therefore, it may have a single feature or function, or several at once. Because of its complete product composition, it can simulate the unit processes of the entire product life cycle well [14]. If a product is refined, the parts or sub-products that make it up can each be regarded as a product system, so the product system can be subdivided almost indefinitely. As shown in Fig. 3, the product system contains three sub-product systems, and subdividing and analyzing these three yields the system inventory result [15].
Product life cycle design and multi-objective reliability algorithm
Designing the life cycle of a product means analyzing, reviewing, improving, and finally shaping all the links and factors before the product is launched [16]. The process therefore covers knowledge and technology from many fields, and the demands on personnel are correspondingly strict: the conclusions reached at every link and for every factor rest on professional knowledge and strong technical capability [17]. As technology is continually replaced and upgraded, relevant analysis and simulation systems can be used to conduct detailed experimental reviews of whether the product's life cycle meets the design requirements and actual needs, to find problems in the design process in time, and to discuss improvement suggestions and schemes. Viewed rigorously, product life cycle design is comprehensive and diversified [18]: it not only integrates the knowledge of multiple disciplines and puts it into practical application, but also adopts a mode of multi-technology cooperation, the two being combined against the existing national background, social capabilities, and so on [19]. The levels involved are: 1. the functional design of the product; 2. the raw materials and processing technology of the product; 3. the service life of the product under normal and severe conditions; 4. the processing equipment and assembly process; 5. whether the product complies with national environmental protection standards; 6. emergency plans reserved for unexpected situations. Figure 4 shows the product life cycle design links.
The product life cycle is mainly determined by changes in consumers' consumption patterns, consumption levels, consumption structure, and consumption psychology. It is generally divided into four stages: introduction (entry), growth, maturity (saturation), and decline. Obviously, the overall cost of a product increases as its reliability increases, so \(k(w)\) is a monotonically increasing function. Improving reliability is accompanied by changes in various links, such as materials and processes. Specific analysis also shows that when the reliability of the product is low, the cost required to improve it is significantly lower than when the reliability is already high. Let \(g(\mu \le k)\) denote the probability that the working cost \(\mu\) is not greater than \(k\) in a given environment and within the normal service life of the product. That is, when the cost of the product is \(k\), the corresponding reliability is \(w(k) = g(\mu \le k)\), and the probability that the cost \(\mu\) exceeds \(k\) is \(g(\mu > k) = 1 - w(k)\). The specific expression of \(g(\mu \le k)\) is not easy to write down, but from data and graphs it is not difficult to see: when the cost of the product is \(k\) and the cost is increased by \(\Delta k\), the corresponding rate of reliability improvement can be obtained.
$$\frac{\Delta w(k)}{{\Delta k}} = \frac{\Delta g(\mu \le k)}{{\Delta k}}$$
(1)
The ratio of this increment to \(g(\mu > k)\) is regarded as the product cost-return ratio, as shown in the following formula:
$$x(k) = \frac{\Delta g(\mu \le k)}{{\Delta k}} \cdot \frac{1}{g(\mu > k)} = \frac{g(\mu \le k + \Delta k) - g(\mu \le k)}{{g(\mu > k)\Delta k}} = \frac{w(k + \Delta k) - w(k)}{{\Delta k\left[ {1 - w(k)} \right]}}$$
(2)
Letting \(\Delta k\) tend to zero and transforming the above formula into a differential equation, we can get:
$$\frac{{{\text{d}}w}}{{{\text{d}}k}} = \left[ {1 - w} \right]x(k)$$
(3)
Supposing \(w(k_{1}) = w_{1}\) and that \(x(k) = s\) is a constant, the differential equation can be solved:
$$w(k) = 1 - (1 - w_{1} )e^{{ - s(k - k_{1} )}}$$
(4)
It can be seen that if a second value \(w(k_{2}) = w_{2}\) can be obtained, the parameter \(s\) can be solved. Letting the initial reliability \(w(k_{0})\) approach 0 infinitely, \(w(k)\) reduces to:
$$w(k) = 1 - e^{{ - s(k - k_{0} )}}$$
(5)
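The calibration described above can be sketched in code: given two observed cost/reliability pairs \((k_1, w_1)\) and \((k_2, w_2)\), solving Eq. (4) for the rate parameter gives \(s = \ln[(1 - w_1)/(1 - w_2)]/(k_2 - k_1)\). This is a minimal illustrative sketch; the function names and the sample numbers below are assumptions, not from the text.

```python
import math

def solve_s(k1, w1, k2, w2):
    """Solve the rate parameter s of Eq. (4),
    w(k) = 1 - (1 - w1) * exp(-s * (k - k1)),
    from a second observed point (k2, w2)."""
    return math.log((1.0 - w1) / (1.0 - w2)) / (k2 - k1)

def reliability(k, k1, w1, s):
    """Evaluate w(k) from the fitted exponential cost-reliability law."""
    return 1.0 - (1.0 - w1) * math.exp(-s * (k - k1))
```

For example, with hypothetical calibration points \((k_1, w_1) = (10, 0.5)\) and \((k_2, w_2) = (20, 0.75)\), the fitted curve reproduces both points and rises monotonically with cost, as Eq. (4) requires.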
In this formula, when the product reliability \(w\) is close to 0, the parameter \(k_{0}\) represents the product cost at that point, equivalent to the cost incurred when a product design scheme fails. It differs from one design scheme to another, so even at \(w = 0\) there are large differences between schemes. Analyzing the formula further, the parameter \(m\) governs the trend of the curve throughout: the larger the value of \(m\), the flatter the curve in the early stage and the steeper it becomes later. This shows that when the reliability \(w\) is still small, the extra cost needed to improve it is small, while when \(w\) is already large, continuing to improve reliability requires ever greater cost. Therefore, through specific analysis of the product's actual situation, past experience, and professional knowledge, the values of the parameters \(m\) and \(k_{0}\) can be estimated fairly accurately, and the specific expression of the function \(k(w)\) finally obtained. Note that, according to the actual situation of the product, the reliability used in \(k(w)\) should be the dynamic reliability \(w(0,R)\) of the product at initial use or the reliability \(w_{0}(R)\) at design time. Because \(w_{0}(R)\) is a special case of \(w(0,R)\), the cost function can be expressed as \(k[w(0,R)]\):
$$k\left[ {w(0,R)} \right] = \left[ {1 - \frac{1}{m}\ln \left[ {1 - w(0,R)} \right]} \right]k_{0}$$
(6)
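Eq. (6) can be evaluated directly: the cost equals \(k_0\) at \(w = 0\) and grows without bound as \(w \to 1\), with \(m\) flattening the early part of the curve, matching the discussion above. A minimal sketch (parameter values are illustrative assumptions):

```python
import math

def design_cost(w, m, k0):
    """Cost implied by Eq. (6): k = [1 - (1/m) * ln(1 - w)] * k0.
    w:  target design reliability in [0, 1)
    m:  curve-shape parameter (larger m -> flatter early curve)
    k0: cost of a scheme whose reliability is near zero"""
    return (1.0 - math.log(1.0 - w) / m) * k0
```

As expected, `design_cost(0.0, m, k0)` returns `k0`, and pushing reliability from 0.5 to 0.9 costs far more than reaching 0.5 in the first place.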
When a product fails, the resulting loss is denoted \(D\) and consists of three parts: first, the economic loss \(S_{(1)}\) of the product itself when it fails; second, the induced loss \(S_{(2)}\) caused to other aspects, which includes the internal and overall loss of the product and is indirect; third, the production-stoppage loss \(S_{(3)}\) arising because failure halts production. Whichever loss occurs, it is closely related to the degree of failure: different degrees of failure produce different types and magnitudes of loss. The specific value of the loss \(S\) cannot be calculated accurately, but accumulated product experience and related data can be used to estimate it. During the life cycle of a mechanical product, the cost of materials required for production and the market both change continuously with time, so these can be treated as functions of time.
$$S(t) = S_{(1)} (t) + S_{(2)} (t) + S_{(3)} (t)$$
(7)
Because the material product gradually fatigues and the surrounding environmental factors change over time, the dynamic reliability \(w(r,R - r)\) decays continuously, and the loss expectation at time \(r\) is a function of time, namely:
$$F(r) = \left[ {1 - w(r,R - r)} \right]S(r)$$
(8)
At this time, the expected failure loss for the entire life cycle of the material product is:
$$F = \frac{1}{R}\int_{0}^{R} {\left[ {1 - w(r,R - r)} \right]S(r)\,{\text{d}}r}$$
(9)
The optimization model of the entire life cycle of the material product is:
$$Q = \left[ {1 - \frac{1}{m}\ln \left[ {1 - w(0,R)} \right]} \right]k_{0} + \frac{1}{R}\int_{0}^{R} {\left[ {1 - w(r,R - r)} \right]S(r)\,{\text{d}}r} \to \min$$
(10)
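The structure of Eq. (10) can be demonstrated numerically: the design-cost term grows with the initial reliability while the expected-failure-loss term shrinks, so an interior minimum exists. The sketch below assumes, purely for illustration, an exponential decay \(w(r, R - r) = w_0 e^{-\lambda r}\) and a constant loss \(S(r) = S\); neither form, nor any of the numeric values, comes from the text.

```python
import math

def q_total(w0, m=2.0, k0=100.0, lam=0.05, S=500.0, R=10.0, steps=1000):
    """Evaluate the life-cycle objective of Eq. (10) for a candidate design
    reliability w0 = w(0, R), under illustrative placeholder assumptions:
    w(r, R - r) = w0 * exp(-lam * r) and S(r) = S (constant)."""
    cost = (1.0 - math.log(1.0 - w0) / m) * k0          # Eq. (6) cost term
    dr = R / steps                                       # midpoint Riemann sum
    fail = sum((1.0 - w0 * math.exp(-lam * (i + 0.5) * dr)) * S * dr
               for i in range(steps)) / R                # expected failure loss
    return cost + fail

def best_reliability(grid=200):
    """Grid-search the w0 in (0, 1) that minimizes Q."""
    candidates = [(q_total(i / grid), i / grid) for i in range(1, grid)]
    return min(candidates)[1]
```

With these placeholder parameters the optimum falls strictly inside \((0, 1)\): designing for very low reliability is punished by failure losses, and designing for near-perfect reliability by the cost term, which is exactly the trade-off the model formalizes.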
Taking previous research on the attenuation of product reliability and the related dynamic reliability function \(w(r,R - r)\) as a reference, the above model can be solved to obtain, for a given design life \(R\), the most suitable design reliability \(w(0,R)\) for the product. With this value, the product can be designed more reliably. The solution process of multi-objective decision-making is shown in Fig. 5:
So-called multi-objective decision-making theory addresses the situation in which a problem involves several conflicting objectives at the same time; these objectives are designed comprehensively to find the relatively best compromise. By weighting and combining the multiple goals, a complex multi-objective optimization problem can be converted into a single-objective one.
$$\min h(n) = h(H) + h(R) + h(V) + h(k) + h(D) + h(w)$$
(11)
In the formula, a certain function or performance of the optimized product is expressed as \(H\), the quality of the product as \(V\), the economic efficiency as \(k\), the production efficiency as \(R\), the resource utilization rate during production as \(w\), and the environmental protection and energy consumption in production as \(D\). Based on the corresponding mathematical model, the multi-objective problem is converted into a suitable single-objective problem and then solved. Mathematical models that can convert multi-objective problems into single-objective ones include ① the optimization method, ② linear weighting, ③ square-sum weighting, ④ the multiplication and division method, and ⑤ the hierarchical sequence method.
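Of the conversion methods listed, linear weighting is the simplest to sketch: each objective value is multiplied by a weight and the results are summed into one scalar, generalizing the equal-weight sum of Eq. (11). The function below is an illustrative sketch, not an implementation from the text.

```python
def linear_weighted_objective(objectives, weights):
    """Linear weighting: collapse several (possibly conflicting) objective
    values h(H), h(R), h(V), ... into one scalar to minimize.
    objectives and weights are parallel sequences of numbers."""
    if len(objectives) != len(weights):
        raise ValueError("objectives and weights must have equal length")
    return sum(wt * h for wt, h in zip(weights, objectives))
```

With all weights equal to 1 this reproduces the plain sum of Eq. (11); unequal weights let the decision-maker emphasize, say, environmental impact over production efficiency.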
Material product prediction similarity model
Before material products are launched, new products need to be predicted, and the promotion direction must be determined from the predicted data. However, because product attributes are complex, it is difficult to measure similarity between material products effectively; therefore, feature extraction is first performed on the database, and similarity is then measured on the decomposed features [20].
Let \(I\) be a non-empty set in which every pair \(a,b\) corresponds to a real number \(s(a,b) \ge 0\); \(s(a,b) = 0\) when \(a = b\); \(s(a,b)\) is symmetric; and \(s(a,b) \le s(a,c) + s(c,b)\) for \(c \in I\), that is, the direct distance between two objects in the set is no greater than the distance through any intermediate object. The following measures can then be used:

1.
Euclidean distance
$$S(a_{i} ,a_{1} ) = \left[ {\sum\limits_{k = 1}^{s} {(a_{ik} - a_{1k} )^{2} } } \right]^{1/2}$$
(12)
The Euclidean distance calculates the straight-line distance between two vectors. The Euclidean distance transform is widely applied in digital image processing, where it is a good reference especially for image skeleton extraction. With attribute weights, the above formula can be converted into a weighted Euclidean distance:
$$S(a_{i} ,a_{1} ) = \left[ {\sum\limits_{k = 1}^{s} {w_{k} (a_{ik} - a_{1k} )^{2} } } \right]^{1/2}$$
(13)
Each dimension of the Euclidean distance has a different influence, and the corresponding weight values are verified in practice. The plain Euclidean distance treats the differences between the attributes of a sample (that is, the indicators or variables) equally, which sometimes cannot meet actual requirements, and the effect of overall variability on the distance is not considered.
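Eqs. (12) and (13) can be sketched directly; setting every weight to 1 in the weighted form recovers the plain distance, which is a convenient consistency check. The function names are illustrative.

```python
import math

def euclidean(a, b):
    """Straight-line distance of Eq. (12)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def weighted_euclidean(a, b, w):
    """Weighted variant of Eq. (13): w_k scales the k-th attribute's
    squared difference, emphasizing or damping individual attributes."""
    return math.sqrt(sum(wk * (x - y) ** 2 for wk, x, y in zip(w, a, b)))
```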

2.
Squared Euclidean
$$S(a_{i} ,a_{1} ) = \sum\limits_{k = 1}^{s} {(a_{ik} - a_{1k} )^{2} }$$
(14)
The importance of each attribute in this formula can also be given a weight.

3.
Manhattan distance
$$S(a_{i} ,a_{1} ) = \sum\limits_{k = 1}^{s} {\left| {a_{ik} - a_{1k} } \right|}$$
(15)
This is mainly used to calculate the absolute distance between vectors, which can be transformed into:
$$S(a_{i} ,a_{1} ) = \sum\limits_{k = 1}^{s} {w_{k} \left| {a_{ik} - a_{1k} } \right|}$$
(16)
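The absolute (city-block) distance of Eqs. (15) and (16) is equally short to sketch; as with the Euclidean pair, unit weights recover the unweighted form. Function names are illustrative.

```python
def manhattan(a, b):
    """Absolute (city-block) distance of Eq. (15)."""
    return sum(abs(x - y) for x, y in zip(a, b))

def weighted_manhattan(a, b, w):
    """Weighted form of Eq. (16)."""
    return sum(wk * abs(x - y) for wk, x, y in zip(w, a, b))
```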

4.
Minkowski distance
$$S(a_{i} ,a_{1} ) = \left[ {\sum\limits_{k = 1}^{s} {\left| {a_{ik} - a_{1k} } \right|^{n} } } \right]^{1/n}$$
(17)
Here \(a_{u} = (a_{u1} ,a_{u2} ,...,a_{us} )\) is a sample point in the \(s\)-dimensional data sample space. When \(n = 1\), the Minkowski distance reduces to the Manhattan distance, and when \(n = 2\), to the Euclidean distance.
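The special cases just stated can be verified numerically: a single Minkowski routine (Eq. (17)) reproduces the Manhattan and Euclidean distances for \(n = 1\) and \(n = 2\), and for large \(n\) it approaches the largest coordinate difference. A minimal sketch:

```python
def minkowski(a, b, n):
    """General Minkowski distance of Eq. (17); n = 1 gives the Manhattan
    distance and n = 2 the Euclidean distance."""
    return sum(abs(x - y) ** n for x, y in zip(a, b)) ** (1.0 / n)
```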

5.
Chebyshev distance
$$S(a_{i} ,a_{1} ) = \mathop {\max }\limits_{k} \left| {a_{ik} - a_{1k} } \right|$$
(18)

6.
Pearson correlation coefficient.
The Pearson correlation coefficient measures the degree of linear correlation between two variables; its value lies between \(-1\) and \(1\). Intuitively, a positive linear correlation means that as one variable increases, the other increases as well. When the points lie exactly on a straight line, the Pearson correlation coefficient equals \(1\) or \(-1\); when there is no linear relationship between the two variables, it is \(0\).
$$S(a_{i} ,a_{1} ) = (1 - r_{ij} )/2,\quad r_{ij} = \frac{{\sum\limits_{v = 1}^{s} {(a_{iv} - {\text{avg}}\,a_{i} )(a_{kv} - {\text{avg}}\,a_{k} )} }}{{\sqrt {\sum\limits_{v = 1}^{s} {(a_{iv} - {\text{avg}}\,a_{i} )^{2} } \sum\limits_{v = 1}^{s} {(a_{kv} - {\text{avg}}\,a_{k} )^{2} } } }}$$
(19)
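Eq. (19) maps the correlation coefficient \(r \in [-1, 1]\) onto a distance in \([0, 1]\): perfectly correlated vectors are at distance 0, perfectly anti-correlated vectors at distance 1. A minimal sketch (function name is illustrative):

```python
import math

def pearson_distance(a, b):
    """Pearson correlation distance of Eq. (19): (1 - r) / 2, where r is
    the Pearson correlation coefficient of the two attribute vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a)
                    * sum((y - mb) ** 2 for y in b))
    return (1.0 - num / den) / 2.0
```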

7.
Percent Disagreement distance
$$S(a_{i} ,a_{1} ) = ({\text{Num}}(a_{ik} \ne a_{1k} )/s)$$
(20)

8.
Point symmetry distance
$$S(a_{i} ,a_{1} ) = \mathop {\min }\limits_{\begin{subarray}{l} k = 1,...,M \\ k \ne i \end{subarray} } \frac{{\left\| {(a_{i} - a_{1} ) + (a_{k} - a_{1} )} \right\|}}{{\left\| {a_{i} - a_{1} } \right\| + \left\| {a_{k} - a_{1} } \right\|}}$$
(21)
If a figure rotated 180° about a point coincides with another figure, the two figures are said to be symmetric about that point, which is called the center of symmetry. Such symmetry of two figures about a point is also called central symmetry, and the corresponding points in the two figures are called symmetry points about the center.
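Eq. (21) quantifies this idea: for a point \(a_i\) and a candidate center \(a_1\), it searches the other sample points for the best mirror image of \(a_i\) through \(a_1\); a value near 0 means an almost perfect mirror point exists. The sketch below assumes an interface of an index, a point list, and a center, which is an illustrative choice rather than one fixed by the text.

```python
import math

def point_symmetry_distance(i, points, center):
    """Point symmetry distance of Eq. (21): how well some other point a_k
    mirrors points[i] through the candidate symmetry center `center`."""
    vi = [x - c for x, c in zip(points[i], center)]
    best = float("inf")
    for k, ak in enumerate(points):
        if k == i:
            continue
        vk = [x - c for x, c in zip(ak, center)]
        num = math.sqrt(sum((p + q) ** 2 for p, q in zip(vi, vk)))
        den = (math.sqrt(sum(p * p for p in vi))
               + math.sqrt(sum(q * q for q in vk)))
        best = min(best, num / den)
    return best
```

For instance, with points \((1,0)\), \((-1,0)\), \((0,2)\) and center \((0,0)\), the first point has a perfect mirror in the second, so its distance is 0, while \((0,2)\) has no mirror and scores much higher.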

9.
Cosine of included angle.
The principle of angle cosine similarity is shown in Fig. 6:
$$\cos (a_{d} ,a_{1} ) = \cos (a_{i} ,a_{1} ) = \frac{{\sum\nolimits_{k} {a_{ik} \cdot a_{1k} } }}{{\sqrt {\sum\nolimits_{k} {a_{ik}^{2} } \cdot \sum\nolimits_{k} {a_{1k}^{2} } } }}$$
(22)
When the included angle is 90°, the similarity is 0; when the included angle is 0°, the vectors point the same way and the similarity is 1.
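The angle-cosine similarity of Eq. (22) can be sketched as the normalized dot product; orthogonal vectors score 0 and parallel vectors score 1, matching the statement above. Function name is illustrative.

```python
import math

def cosine_similarity(a, b):
    """Angle-cosine similarity of Eq. (22): 1 for identical directions,
    0 for orthogonal (90 degree) vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a))
                  * math.sqrt(sum(y * y for y in b)))
```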

10.
Custom distance
$$S_{q} (a_{d} ,a_{1} ) = S_{q} (a_{i} ,a_{1} ) = \left[ {\sum\limits_{k} {\left| {a_{ik} - a_{1k} } \right|^{q} } } \right]^{1/q}$$
(23)
All the above similarity measures satisfy non-negativity, reflexivity, symmetry, and the triangle inequality. Reflexivity in the broad sense refers to applying a theory's assumptions to the theory itself and, more broadly, to the self-monitoring (or self-discipline) of an expert system that interrogates itself against the assumptions it sets.
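The four axioms listed can be spot-checked numerically on sample points; the helper below does so for any distance function passed in (the checker and the sample points are illustrative, not from the text).

```python
import math
import itertools

def euclid(a, b):
    """Plain Euclidean distance, used here as the distance under test."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def satisfies_metric_axioms(points, dist, tol=1e-9):
    """Spot-check non-negativity, reflexivity, symmetry, and the triangle
    inequality for `dist` over all pairs/triples of `points`."""
    for a, b in itertools.product(points, repeat=2):
        if dist(a, b) < -tol:                         # non-negativity
            return False
        if a == b and abs(dist(a, b)) > tol:          # reflexivity
            return False
        if abs(dist(a, b) - dist(b, a)) > tol:        # symmetry
            return False
    for a, b, c in itertools.product(points, repeat=3):
        if dist(a, b) > dist(a, c) + dist(c, b) + tol:  # triangle inequality
            return False
    return True
```

Such a spot-check does not prove the axioms in general, but it is a cheap sanity test when plugging a custom distance, like Eq. (23), into a similarity model.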