Retraction Note: The construction of hierarchical network model and wireless activation diffusion optimization model in English teaching

Wireless communication plays an important role in modern higher education. This paper first analyzes the storage mechanism and data structure of the hierarchical network model, then fits the time series of user behavior attribute data and uses an information-filtering algorithm to remove interference and redundant information from the social network. For feature extraction of association rules, fuzzy data clustering is applied to the mining and clustering of relational data in hierarchical networks. The simulation results show that the algorithm model has high accuracy and reliability and improves the capability for deep mining in English teaching.

be remembered by us for a long time, even a lifetime. Information that has not been converted by "encoding" in this process cannot enter long-term memory, and we quickly forget it because it is lost early in the memory process. But memory is not the passive reception and preservation of information. To some extent, its storage is a process of constructing information: memory combines new information with known information in various ways [3], so that new information enters long-term memory and remains there for a long time. Psychological experiments have shown that knowledge related to events a subject has actually experienced generally enters the subject's long-term memory more easily and is better preserved [4]. According to this memory principle of cognitive psychology, we can conclude that only meaningful words are easily remembered. This offers a useful insight for vocabulary acquisition: if teachers provide appropriate associations and guide students to establish connections between new information and old, they can help students memorize new words effectively.
The rest of this paper is organized as follows. Section 2 discusses related work, followed by the methods in Section 3. The optimal design of the data protection algorithm is discussed in Section 4. Section 5 presents the experimental results, and Section 6 concludes the paper with a summary and directions for future research.

Related work
Semantics, one of the three major fields of modern language science, has seen new development since the 1960s. Semantics is the study of how meaning is conveyed through language; its research object is the meaning of natural language [5]. Natural language here can refer to language units at different levels, such as vocabulary, sentences, and discourse. Among these, vocabulary is the most basic and most important factor. When modern semantics examines word meaning, it usually starts from two aspects: on the one hand, what a word means, and on the other hand, the semantic relationships between words. "Semantic field theory" is one of the most important theories in modern semantics. A semantic field is an aggregate of a group of words that belong to a common concept and are closely related and mutually constrained [6]; it is also called a meaning field, vocabulary field, or word field. The theory of the semantic field was first proposed by the German scholar Trier [7]. According to Trier, "the semantic field is the reality between individual words and the whole vocabulary. As parts of the whole, fields share the characteristics of words, in that they can be combined in the language structure, and they also have the nature of a lexical system, in that they are composed of smaller units" [8]. In Trier's view, vocabulary is not a string of unrelated linguistic symbols but a complete network of semantically related lexemes. Each word is connected to other words through this network of relationships, thus forming a semantic field. Semantic field theory tells us that the vocabulary of a language is not a messy, random pile of words but a network system in which words are related to each other, with certain words linked by a common concept.
Within a semantic field, words are semantically interdependent and mutually constrained. This resembles what cognitive psychology tells us about the semantics of language vocabulary. Psychologists point out that word concepts are represented in memory through a broad network of relationships: each word's concept corresponds to a unique node in the network and is related to other words by various links. In analyzing second language vocabulary acquisition, psychologists affirm, from the perspective of psychology, the role of semantic field theory in vocabulary learning [9], pointing out that people tend to memorize words according to semantic fields, with the vocabulary within each field organized accordingly. Stanovich further illustrates this through a flexible extension of the concept, in which semantically related vocabulary in the semantic network is activated and becomes automatically available [10]. That is to say, since the word meanings in a semantic field are not stored in human memory in isolation but are interconnected to form an associative network, related words can be put together to compare, understand, and remember [11].

Dynamic programming algorithms
In composition evaluation, students produce a large number of spelling errors and word distortions, which shows that they have mastered grammar knowledge but are not very proficient in basic vocabulary [12], so they cannot obtain higher scores for whole-sentence expression. To solve this problem, we need a good way to identify words and tenses and to correct errors. Some scholars have put forward a non-corpus-based method for detecting word variants that is mainly based on edit distance, but it has certain limitations [13].
A practical method is to test whether two strings are morphologically similar, as with spelling errors. Taking one string as the benchmark, we count how many editing operations are needed to transform it into the other [14,15]. This total number of editing operations is called the edit distance; obviously, the larger the edit distance, the greater the difference between the strings. Edit distance is commonly used to measure the degree of difference between two patterns in text editing, pattern search, approximate matching, and other applications, with common uses including approximate matching of DNA molecular structures, locating military targets, and web search over sentences. The difference between an answer string and a matching string from the corpus is their edit distance, and the similarity of two sentences can be preliminarily estimated by finding the minimum edit distance between them [16].
For each word in a sentence, we first determine whether it is an inflected form found in the lexicon. If the word is not found, we compute its minimum edit distance to lexicon entries and set a threshold based on word length [17,18] to decide whether it is misspelled; if it is, we log it into the error-word library to help students find their own weaknesses. Common string editing operations include character insertion, character deletion, character transposition, and character substitution. Based on these four operations, a dynamic programming algorithm can compute the edit distance. Formally, given an input string P = p1p2…pm and a standard string W = w1w2…wn, the edit distance is the minimum number of operations needed to convert P into W, where insertion inserts wj after pi, deletion deletes pi, substitution replaces pi with wj, and transposition swaps pi−1 and pi; the distance is defined recursively over these operations [15]. An implementation can be found in the Apache Commons Lang toolkit. The time complexity of the standard algorithm is O(mn); to improve efficiency, an improved edit distance algorithm can be chosen. By setting a threshold to filter the rules and combining the rules of the corpus, regular expressions are processed to diagnose language ability accurately, covering word collocation, word structure, sentence construction, and comprehension elements. Calculating the relevance of sentences is therefore very important in the diagnosis of ideological and political compositions. Using the cosine vector algorithm to compute text similarity is sound in theory, but its disadvantage is obvious: when the number of articles is very large and the texts are long, the computing time becomes prohibitive, since a current computer can compare at most about 1000 articles.
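As a concrete illustration of the four editing operations above, here is a minimal Python sketch of the dynamic programming edit distance with adjacent transposition (Damerau-Levenshtein), plus a hypothetical length-based misspelling check; the paper itself points to the Apache Commons Lang toolkit, so the function names and the threshold ratio here are illustrative assumptions, not the paper's implementation.

```python
def edit_distance(p: str, w: str) -> int:
    """Edit distance allowing insert, delete, replace, and adjacent swap
    (Damerau-Levenshtein), computed by dynamic programming in O(mn)."""
    m, n = len(p), len(w)
    # d[i][j] = edit distance between p[:i] and w[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                      # delete all i characters
    for j in range(n + 1):
        d[0][j] = j                      # insert all j characters
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if p[i - 1] == w[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # delete p_i
                          d[i][j - 1] + 1,         # insert w_j
                          d[i - 1][j - 1] + cost)  # replace p_i with w_j
            # transposition: swap p_{i-1} and p_i
            if i > 1 and j > 1 and p[i - 1] == w[j - 2] and p[i - 2] == w[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)
    return d[m][n]

def is_misspelling(word: str, lexicon_word: str, ratio: float = 0.34) -> bool:
    """Hypothetical threshold rule: flag `word` as a misspelling of
    `lexicon_word` when the edit distance is small relative to word length."""
    return edit_distance(word, lexicon_word) <= max(1, int(len(lexicon_word) * ratio))

print(edit_distance("kitten", "sitting"))   # 3
print(edit_distance("recieve", "receive"))  # 1, thanks to the transposition rule
print(is_misspelling("recieve", "receive")) # True
```

Note that the transposition case is what lets the common "ie"/"ei" confusion count as a single edit rather than two substitutions.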
We can use a large matrix A to describe the relationship between 1 million articles and 500,000 words, where each row corresponds to an article and each column to a word.
Here m = 1,000,000 and n = 500,000. The element in row i and column j is the TF-IDF value of the j-th word in the i-th article. The matrix is very large: 1,000,000 × 500,000, or 5 × 10^11 elements. Singular value decomposition factors this large matrix into the product of three smaller matrices, A = X B Y.
These three matrices have very definite physical meanings. Each row of the first matrix X represents a set of semantically related words, where each non-zero element represents the TF-IDF value of a word. Each column of the last matrix Y represents articles of the same type, where each element represents the relevance of each article within that type. The middle matrix B represents the correlation between the word classes and the article classes. Middle school compositions require only a limited vocabulary, usually between 100 and 200 words, so the cosine vector algorithm can be implemented well.
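The TF-IDF weighting, cosine similarity, and three-matrix SVD factorization described above can be sketched in Python with NumPy on a toy corpus; the corpus, the particular TF-IDF variant, and the function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def tfidf(counts: np.ndarray) -> np.ndarray:
    """counts: (m articles x n words) raw term counts -> TF-IDF matrix."""
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    df = (counts > 0).sum(axis=0)                      # document frequency per word
    idf = np.log((1 + counts.shape[0]) / (1 + df)) + 1  # smoothed IDF
    return tf * idf

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy corpus: 3 articles x 4 words (real matrices would be 10^6 x 5*10^5)
counts = np.array([[3, 0, 1, 0],
                   [2, 0, 2, 0],
                   [0, 4, 0, 3]], dtype=float)
A = tfidf(counts)

# Articles 0 and 1 share vocabulary; article 2 does not.
print(cosine_similarity(A[0], A[1]) > cosine_similarity(A[0], A[2]))  # True

# SVD factors A into word-concept structure (X), concept strengths (S),
# and concept-article structure (Y), matching the A = X B Y form in the text.
X, S, Y = np.linalg.svd(A, full_matrices=False)
print(np.allclose(X @ np.diag(S) @ Y, A))  # True: exact reconstruction
```

For the real-scale matrix, one would keep only the top singular values (truncated SVD) rather than the full factorization, which is what makes the 5 × 10^11-element problem tractable.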

The solution of the optimal value of rules
After the rules are initialized, the resulting mapping can be understood as a two-dimensional structure tree, where the key is a group and the values are a set of mapping structures (whose key is the rule string and whose value is the score). After parsing the student's composition, each sentence is matched against all the rules contained in the map, using similarity calculation and regular-expression matching. We compute the maximum score and delete the corresponding rule, repeating until the rules are exhausted; the accumulated score is the final score of the article. This is a typical dynamic programming problem, an optimal-value problem, and it can be approximated by the 0-1 knapsack problem.
The 0-1 knapsack problem is NP-complete in computational theory, with a brute-force complexity of O(2^n); traditionally it is solved by dynamic programming. For our problem, under the limited-rules condition where each rule can be fully utilized, it can be transformed into a maximum fractional knapsack problem. The objective function is to maximize the total score Σ s_i x_i, where x_i is a 0-1 decision variable: x_i = 1 indicates that rule i matches the sentence successfully, while x_i = 0 indicates that it fails. Recursive backtracking is usually used to solve the knapsack problem, but it traverses the search space completely; therefore, as the number of rules n increases, the solution space grows exponentially. When n is large enough, the problem can be solved by a genetic algorithm.
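A minimal sketch of the dynamic programming solution mentioned above, applied to the 0-1 knapsack formulation of rule selection; the rule scores and costs below are hypothetical placeholders, not values from the paper.

```python
def knapsack_01(values: list, weights: list, capacity: int) -> int:
    """Classic 0-1 knapsack by dynamic programming: O(n * capacity),
    versus the O(2^n) brute force mentioned in the text."""
    dp = [0] * (capacity + 1)   # dp[c] = best total value with budget c
    for v, w in zip(values, weights):
        # iterate capacity downward so each item is used at most once (x_i in {0,1})
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# Hypothetical rule scores and matching "costs" per sentence
scores = [6, 10, 12]
costs  = [1, 2, 3]
print(knapsack_01(scores, costs, 5))  # 22: rules with cost 2 and 3 are chosen
```

For the fractional relaxation mentioned in the text, a greedy pass over rules sorted by score/cost ratio suffices; the DP above handles the strict 0-1 case.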

Feature extraction of association rules
A nonlinear time series analysis method is used to fit the user behavior attribute data in the social network. Feature extraction of association rules and directional data clustering are realized, and a collection of user behavior attribute data in the social network is established. Let the sample amplitude be A and let x(t) denote the time series of user behavior attribute data in the social network; the time-domain features of the data are expressed in terms of this series. Based on data storage structure analysis and statistical feature measurement, the time series {x(t0 + iΔt)}, i = 0, 1, ⋯, N − 1, of social network user behavior attribute data is reconstructed according to Takens' embedding theorem. The phase space reconstruction model of the fitted data time series is X(t) = (x(t), x(t + τ), …, x(t + (m − 1)τ)), where K = N − (m − 1)τ is the number of reconstructed orthogonal eigenvectors of the time series, τ is the time delay of sampling the data, m is the embedding dimension of the phase space, and the scalar measurements form the sample data collection. In this way, the nonlinear time series analysis of social network user behavior attribute data is realized.
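The phase-space reconstruction step can be sketched as follows, using a synthetic scalar series in place of the real behavior-attribute data; `delay_embed` is an illustrative name, and the series and parameters are assumptions.

```python
import numpy as np

def delay_embed(x: np.ndarray, m: int, tau: int) -> np.ndarray:
    """Takens-style phase-space reconstruction of a scalar series x:
    returns a K x m matrix of delay vectors, K = N - (m - 1) * tau."""
    N = len(x)
    K = N - (m - 1) * tau
    if K <= 0:
        raise ValueError("series too short for this (m, tau)")
    # row i is (x[i], x[i + tau], ..., x[i + (m - 1) * tau])
    return np.stack([x[i:i + (m - 1) * tau + 1:tau] for i in range(K)])

t = np.arange(100)
x = np.sin(0.3 * t)             # stand-in for the behavior-attribute series x(t)
X = delay_embed(x, m=3, tau=2)
print(X.shape)  # (96, 3), since K = 100 - (3 - 1) * 2
```

Each row of `X` is one reconstructed state vector, so downstream clustering operates on these K vectors rather than on the raw scalar series.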
Combined with the feature extraction of association rules, the fuzzy C-means clustering algorithm is used to cluster the directional features, and the central moments of the clustering output of the mined user behavior attribute data are obtained.
The distribution of the associated directional characteristics is obtained accordingly. Using the directivity clustering of association rules, we obtain the time series components of the user behavior attribute data mining output. Based on the above processing, user behavior attributes can be accurately mined and extracted from the user behavior attribute data sequence.
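A minimal, self-contained sketch of the fuzzy C-means clustering used above, with the standard membership and center update rules; the data, cluster count, and fuzzifier value are synthetic assumptions for illustration.

```python
import numpy as np

def fuzzy_cmeans(X: np.ndarray, c: int = 2, m: float = 2.0,
                 iters: int = 100, seed: int = 0):
    """Minimal fuzzy C-means. Returns (centers, U), where U[i, j] is the
    degree to which sample j belongs to cluster i (columns sum to 1)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                          # normalize memberships per sample
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        # distance of every sample to every center (small epsilon avoids /0)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        # standard update: u_ij proportional to d_ij^(-2/(m-1))
        U = d ** (-2.0 / (m - 1))
        U /= U.sum(axis=0)
    return centers, U

# Two well-separated synthetic blobs around (0, 0) and (5, 5)
Xdata = np.vstack([np.random.default_rng(1).normal(0, 0.1, (20, 2)),
                   np.random.default_rng(2).normal(5, 0.1, (20, 2))])
centers, U = fuzzy_cmeans(Xdata, c=2)
print(np.sort(centers[:, 0]))  # close to 0 and 5
```

The soft memberships in `U` are what distinguish fuzzy C-means from k-means: a sample near a cluster boundary contributes partially to several centers.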

Encryption algorithm for data protection
A social network data protection algorithm based on dynamic cyclic encryption and link equilibrium configuration is proposed in this paper. The subkey random amplitude modulation method is used to encrypt the data in the social network and construct the key ReEnc(param, CT_i, rk_ij). The social network data gathers in the fault-tolerant sequence k′ = e(C1, rk_ij)·k, and the layer-l privacy protection ciphertext CT_IDi of ID_i is converted into the layer-(l + 1) ciphertext CT_IDj of ID_j. Data encryption is performed with a dynamic cyclic encryption algorithm by randomly selecting an integer in [0, 2^γ · p) over the interval q0, …, qτ; when the privacy protection data in the social network obey the linear distribution of the maximum integer value q_i, the classification center of the encryption algorithm is initialized as ser = 1, the PSK source is published as (ser, MPK), and the cluster center matrix of the social network is preserved. Through the above processing, the auto-regressive linear equalization method is used to perform an adaptive equalization design on the network location privacy protection link. The probability density functions of privacy leakage at the source and destination nodes of the network communication channel in the mobile social network are denoted c, C, and s_c, respectively. The Gram-Schmidt orthogonal vectors are calculated, where |r_m| < 1/2 and d > 2^κ. The adaptive equalization scheduling of the data output of the social network is then carried out by the link equilibrium configuration method, yielding a recursive expression for anti-leakage encryption of the privacy protection data. According to the above algorithm design, data encryption and protection in the social network are realized. The implementation flow of the improved algorithm is shown in Fig. 1.
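Of the steps above, the Gram-Schmidt orthogonalization is a standard procedure and can be illustrated directly; this sketch orthonormalizes a set of row vectors and is a generic illustration, not the paper's full encryption routine.

```python
import numpy as np

def gram_schmidt(V: np.ndarray) -> np.ndarray:
    """Classical Gram-Schmidt: orthonormalize the rows of V,
    skipping (near-)linearly-dependent vectors."""
    Q = []
    for v in V.astype(float):
        for q in Q:
            v = v - (v @ q) * q         # remove the component along q
        norm = np.linalg.norm(v)
        if norm > 1e-12:                # keep only independent directions
            Q.append(v / norm)
    return np.array(Q)

V = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
Q = gram_schmidt(V)
print(np.allclose(Q @ Q.T, np.eye(3)))  # True: rows are orthonormal
```

In practice, the modified Gram-Schmidt variant (subtracting projections one vector at a time, as done here) is preferred over the textbook formula for numerical stability.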

Yu and Tang, EURASIP Journal on Wireless Communications and Networking (2020) 2020:88

Experiment
The questionnaires of the four classes were completed in the classroom. Where students did not understand the contents of the questionnaire, the teachers explained in class, ensuring the credibility of the results as far as possible. One hundred ninety-nine questionnaires were collected and checked, and 6 invalid questionnaires were excluded (the 6 invalid questionnaires left some questions with no option selected). A total of 193 valid questionnaires were obtained. The questionnaire contains 9 questions, as shown in Table 1 and Fig. 2.
How is your vocabulary learning in high school textbooks (mainly covering words, spelling, meaning, understanding, and mastery)? The lexical source analysis is shown in Table 2 and Fig. 3.
In summary, the memorization of polysemous words, irregularly spelled words, synonyms, and the like is relatively difficult. Only a few students consider their current vocabulary satisfactory. Most students think they only just meet the requirements of listening, speaking, reading, and writing, and most find the vocabulary they learn difficult to use in speaking and writing. Judging from the questionnaire survey, the current vocabulary acquisition of middle school students is not encouraging. Although almost all agreed that vocabulary is the biggest obstacle to their English learning, because they lack scientific and effective vocabulary strategies and learn passively, their vocabulary falls far short of the requirements of the "Standard High School English Curriculum Standards," and their vocabulary level is low. This has become a major obstacle to listening, speaking, reading, and writing ability, especially reading. In addition, students reported in the survey that their vocabulary learning difficulties are due in part to poor classroom performance and insufficient systematic strategy guidance from teachers, which deserves teachers' attention. What we need to do, according to the survey, is to improve vocabulary teaching and its efficiency, thereby raising students' interest in and attention to vocabulary learning and helping them slow the speed of forgetting, while consciously focusing on the teaching of synonyms, polysemous words, and irregularly spelled words.
The author believes that the unsatisfactory state of vocabulary acquisition among high school students is due not only to the students themselves; teachers, as the students' guides, also bear certain responsibility in vocabulary teaching. The author therefore conducted a questionnaire survey of thirty high school English teachers from this school and other schools to better understand vocabulary teaching (Table 3). The experiment lasted for 4 months, and the experimental and control classes were pre-tested. To avoid the Pygmalion effect, the students were not informed of the experiment. Four months later, the students in the experimental class were surveyed, and then the two classes were post-tested. The test papers used are shown in Table 4. The original classes were shuffled across schools, seats were assigned by computer, and invigilators were arranged by the school, mainly for first-year students. The multiple-choice questions of the two tests were scored by computer; the rest were scored by four high school English teachers of the school, with class and name sealed before group scoring, so that the objectivity of the test results was better guaranteed. The experimental data come from statistics on the relevant items in the test papers of the four classes. To ensure the authenticity of the data, the test papers of the two classes were shuffled before the vocabulary test was scored and then uniformly bound, ensuring that classes and names were unknown to the reviewers. The papers of the comprehensive ability test were likewise put in random order, and the four jointly examining schools graded them collectively. The details have been explained above and will not be repeated. The analysis and comparison results for different age groups are shown in Fig. 4.

Result
In the diagnosis of middle school ideological and political examinations, if the intelligent computer only evaluates students' answers and calculates final scores, the result cannot satisfy teachers and students. An important aspect of intelligent evaluation is diagnosing students promptly after they submit answers and giving rich, intuitive diagnostic results. As subjective and objective questions are tested during diagnosis using the theories and methods of the preceding sections, many distinctive intermediate results are recorded in intermediate tables, such as the number of correct and wrong answers and the score for each question, knowledge mastery for each problem, ability values for selected input points, knowledge subtests, and the composition of grammatical knowledge. These intermediate results cannot be presented directly to users for two reasons: first, the data are not self-explanatory, so data mining is required to present users with the charts and figures they care about; second, with large-scale application, these intermediate data grow large and redundant, degrading system performance. Therefore, the data must be uniformly managed and redesigned through report design and data mining.
Report design mainly includes database design and chart design. The specific database design can be found above. Chart design mainly refers to plotting classes and students along the horizontal and vertical axes using JFreeChart. The horizontal level can cover the schools in the city, the classes within the schools, the students in the classes, and so on; the vertical direction can be considered from the perspective of the students' examinations. Data mining mainly analyzes and integrates the intermediate data through stored procedures and finally, according to the design requirements of the report, enters the results into the database and presents them to the user. Figure 5 shows the student's knowledge points and the previous score chart.
Another important aspect is generating comments on a student's answers, imitating the way a teacher writes comments. The main basis is which knowledge points the student has and has not mastered, that is, the collected, organized, and calculated diagnostic data. The advantages and disadvantages of the student's language learning and language skills are then described in detail based on intelligent judgment and classification.
The implementation process of the algorithm is as follows.
Step 1: Read the examination paper for the diagnostic connector, including the number of times each ability value in the input file is monitored up to the top supervision level, forming a tree of depth 4.
Step 2: Generate the diagnostic knowledge point tree as described in Step 1.
Step 3: After the diagnosis is finished, compare the two trees in turn from the leaves to the root node; if the paper's knowledge tree contains the diagnosed knowledge point, calculate the level of mastery and generate the corresponding diagnostic knowledge score.
The first level of knowledge mastery is divided into the categories L/V/G/C, from which the levels of listening and reading can be derived through the above steps and computed by attribute category. Because there are four different types of mastery, there are 32 possible evaluations for each student. In programming, each situation needs to be evaluated differently; to simplify the logic, the large number of if-else statements is replaced with Drools rules, as shown in Figs. 6 and 7.
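The paper implements this rule logic with Drools in Java; as a hypothetical analogue, the if-else chain can be replaced by a precomputed rule table keyed on the mastery combination. The four-skill binary table below yields 16 combinations for illustration (the text reports 32 possibilities, presumably with finer-grained mastery levels).

```python
from itertools import product

# Assumed expansion of L/V/G/C: listening, vocabulary, grammar, composition
SKILLS = ("L", "V", "G", "C")

def build_rules() -> dict:
    """Map every mastery combination (tuple of booleans, one per skill)
    to an evaluation comment, replacing a chain of if-else branches."""
    rules = {}
    for combo in product((True, False), repeat=len(SKILLS)):
        weak = [s for s, ok in zip(SKILLS, combo) if not ok]
        rules[combo] = ("All skills mastered." if not weak
                        else "Needs work on: " + ", ".join(weak))
    return rules

RULES = build_rules()

def evaluate(mastery: dict) -> str:
    """Look up the comment for one student's mastery profile."""
    key = tuple(mastery[s] for s in SKILLS)
    return RULES[key]

print(evaluate({"L": True, "V": False, "G": True, "C": False}))
# Needs work on: V, C
```

As with Drools, the benefit is that each case is declared once in one place; adding a mastery level or skill changes the table generator, not a tangle of nested branches.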
The above experimental results show that the traditional decision tree is less efficient, with higher time and space complexity on larger data sets. The improved method uses stored-procedure optimization and performs computation on the server side, with almost no procedure or data transfer on the client side and no occupation of the client's memory resources. Compared with the traditional decision-tree implementation, it greatly reduces time and space complexity: under the same computer hardware environment and the same data source, when the data set is large, its execution speed is obviously faster than the traditional implementation. When the number of attributes selected for partitioning is small, the execution speed is almost the same as the traditional method; however, when the number of partitions is large, it saves considerable execution time and space and is much faster than traditional methods.

Discussion
Wireless communication plays an important role in modern higher education. As summarized above, this paper analyzed the storage mechanism and data structure of the hierarchical network model, fitted the time series of user behavior attribute data, and used an information filtering algorithm to remove interference and redundant information from the social network; fuzzy data clustering was applied to the mining and clustering of relational data in hierarchical networks. The simulation results show that the algorithm model has high accuracy and reliability and improves the capability for deep mining in English teaching. Online vocabulary teaching does help students' vocabulary learning ability and overall language level.