Open Access
An adversarial learning approach for discovering social relations in human-centered information networks
EURASIP Journal on Wireless Communications and Networking volume 2020, Article number: 172 (2020)
Abstract
The analytics on graph-structured data in cyber spaces has advanced many human-centered computing technologies. However, if only utilizing the structural properties, we might be prohibited from unraveling unknown social relations of nodes, especially in structureless networked systems. Up-to-date ways to unfold latent relationships from graph-structured data are network representation learning (NRL) techniques, but most existing ones have difficulty dealing with network-structureless situations because they largely depend on the observed connections. With the ever-broader spectrum of human-centered networked systems, large quantities of textual information have been generated and collected from social and physical spaces, which may provide clues to hidden social relations. In order to discover latent social relations from the accompanying text resources, this paper attempts to bridge the gap between text data and graph-structured data so that the textual information can be encoded to substitute for the incomplete structural information. Generative adversarial networks (GANs) are employed in the cross-modal framework to make the transformed data indistinguishable in the graph-domain space and also capable of depicting structure-aware relationships with network homophily. Experiments conducted on three text-based network benchmarks demonstrate that our approach can reveal more realistic social relations from text-domain information compared against the state-of-the-art baselines.
Introduction
Human-centered networked systems, such as cyber-physical systems (CPS) or the Internet of Things (IoT) [1–4], have increasingly accumulated a large amount of multimedia information, including texts, images, and videos. Such heterogeneous and internetworked environments present the data in the format of graph-structured networks. Traditional human-centered computing methods for tackling such cyberspace data largely depend on the analytics of structural properties. However, they have difficulties in detecting updated or hidden social relations because plain graphs with only graph topologies (nodes and edges) sometimes limit the scope of analyzing modern social and physical networks. For example, we might be prohibited from inferring the social behaviors of an unknown node or predicting the potential targets that a newly arriving node would communicate with, especially in structureless networks. Thus, it is crucial to incorporate useful multimedia resources to discover unknown social relations.
Generally, the problem of predicting unknown relationships in networks is called “link prediction” [5, 6]. Researchers mostly concentrate on inferring the behavior of the link formation process through the observed connections in current networks. In recent years, network representation learning (NRL) [7, 8] has been proposed to support subsequent network processing and improve the performance of relation inference. It aims at learning an embedding space that can preserve the relationships for network reconstruction and support network inference effectively. Many NRL methods [9–12] have indeed advanced graph pattern discovery, analysis, and prediction. However, there exists a “blind zone” where few explicit connections can be observed in network-structureless situations. We cannot expect that the complex network environments of CPS or IoT [13–15] would always be well-structured and capture the whole picture of the networked systems. For instance, there might be nodes in the system that have been isolated from the main graph, so some valuable information might be blocked for the lack of explicit structural properties; or several newly arrived nodes without any topology information might be waiting to be added to the main system or to connect with others. Thus, if a network is only analyzed from the perspective of the currently observed structural information, some potential social relations cannot be discovered, since those hidden but vital relationships of nodes may not be preserved in the embedding space. The incomplete network structure forces us to neglect such implicit information. To the best of our knowledge, the existing NRL methods fail to handle structureless networks because the inner core that drives those methods is the “currently observed connections.” Admittedly, this task seems intractable if the graph structure is the only feature we can utilize.
Therefore, it is difficult to construct the missing parts of the original network or to infer the undiscovered relationships of nodes. To be specific, if we omit the text data in Fig. 1, we will be confused about identifying the topic of the yellow node and the links it may have. Also, it will be unclear how to construct potential relations, such as friendships or partnerships, amongst a bunch of nodes in the network-structureless set. However, modern networked systems have generated and collected a large amount of multimedia data, which can provide related clues for discovering unknown social relations. As illustrated in Fig. 1, if we consider such textual information, we can infer that the yellow node would have a higher probability of being linked to the blue node labeled with “network analysis” rather than the orange one labeled with “image processing,” and we can also establish meaningful connections for those black nodes based on their textual information.
Some researchers [16–18] try to utilize textual information, but they concentrate on integrating and balancing graph-structured data and text data in network embedding to improve the performance of relation inference. Hence, they still struggle with the aforementioned problem that latent social relations cannot be detected from incomplete networks. Under the circumstance that the structural information of the graph data to be analyzed is missing, we also consider text data as the accompanying resource. However, we attempt to bridge the gap between text data and graph-structured data instead, so that the “observed texts” can be encoded to substitute for the incomplete structural information. As a consequence, we can predict missing or future social relations based on text-domain information. Hence, we think of applying deep domain adaptation (DDA) techniques to map the two modalities. DDA techniques embed domain adaptation in deep learning frameworks for learning more transferable representations [19]. Since our task demands generating simulated samples that are similar to the target samples while preserving the source domain information, we consider using generative adversarial networks (GANs) [20] to address the aforementioned problem, given that the techniques applied in the existing structure-based NRL methods have difficulties in making valid predictions of social relations from non-graph-structured data. Inspired by image-to-image translation [21–23], we propose social relation GAN (SRGAN) for cross-domain knowledge translation. Two GANs are employed in our framework. One (namely tGAN) aims to learn content-to-structure mapping by adversarially training a discriminator and a generator.
Specifically, the discriminator tries to distinguish the real network embeddings from the fake embeddings transformed from the text domain, and the generator tries to fool the discriminator into believing that the fake embeddings were learned from the graph domain. The other (namely gGAN) learns structure-to-content mapping by inverting the task of tGAN. In addition, as shown in [21–23], the data reconstruction of the source or target samples can be helpful for improving the performance of domain adaptation. Thus, we also apply reconstruction techniques in our adversarial training process. Social relation is one of the essential characteristics of social networks. The “sociality” in this task is derived from two aspects: one is the original network, where the structural information is mostly preserved; the other is the structureless network, where the text-domain information can be encoded to substitute for the incomplete structural information. SRGAN tries to make the transformed data reflect the sociability of the graph-structured data, i.e., the tendency to associate in or to form social connections. Experimental results on three real-world datasets show that translating meaningful social relations from text-domain information is challenging, while SRGAN outperforms the baseline methods.
Overall, our main contributions lie in three aspects:
1. Our approach is a remedy for most existing structure-based NRL techniques, which have difficulties in handling such text-based network-structureless problems;

2. We bridge the gap between graph-structured data and text data using GANs in the networked systems;

3. Meaningful social relations can be translated from text-domain information by our proposed approach.
The rest of the paper is organized as follows. In Section 2, related work is briefly introduced. In Section 3, we present the approach of bridging the gap between graphstructured data and text data using GANs. After that, the proposed SRGAN is evaluated over several baselines and the detailed experiments are given in Section 4. Finally, we conclude our work and point out the future work in Section 5.
Related work
Human-centered techniques have achieved great success in many real-world applications [24–31]. As mentioned in [7], there are two goals for NRL. First, the learned embedding space can reconstruct the original network. The network relationships are reflected as the relative distance of any two nodes in the embedding space. If there is an edge between two nodes, then the distance between these two nodes should be relatively small. Second, network inference can be supported by the learned embedding space, such as link prediction, node identification, and label inference.
Hence, large numbers of structure-based methods have been proposed for learning network embedding spaces from the network topology. Inspired by natural language processing, Perozzi et al. [9] treated nodes as words, while paths generated by the random walk model over a network were regarded as sentences, which were fed into the word2vec framework [32] aiming at preserving the neighborhood structure. In [10, 33], the authors improved the network exploration strategy to capture more meaningful node sequences. In order to handle very large scale information networks, LINE [11] was proposed to preserve local and global network structures by utilizing the information of local pairwise proximity to learn half of the dimensions over neighbors of nodes and constraining the sampled nodes at a two-hop distance from the sources to learn the rest. To effectively capture the highly nonlinear network structure, SDNE [34] exploited the first-order proximity and second-order proximity jointly to preserve the global and local structures. Similar to image-based convolutional networks, Niepert et al. [35] proposed a framework for learning convolutional neural networks for arbitrary graphs. The graph convolutional network [36] used a localized first-order approximation of spectral graph convolutions for semi-supervised learning on graph-structured data. To enhance the robustness of network embeddings, Dai et al. [37] proposed an adversarial network embedding (ANE) framework, which utilized GANs to capture latent graph features. Gao et al. [38] generated proximities via a GAN framework to discover the relationships between nodes. HeGAN [39] was developed for capturing the rich semantics on heterogeneous information networks. GraphRNA [40] is composed of a collaborative walking mechanism and a tailored deep embedding architecture, where joint random walks on attributed networks are utilized to boost the process of learning node representations.
In addition to the structure-based networks, there are some networks accompanied with rich external text-based information, such as content attributes or text profiles. Tu et al. [41] embedded nodes and edges into the same vector space based on the accompanying semantic information. Yang et al. [17] incorporated textual information of the nodes into NRL under a matrix factorization framework. CENE [18] jointly leveraged the network structure and the content information to enhance the network representation. MMDW [42] tried to learn discriminative network representations by utilizing the labeling information of the nodes. CANE [16] was proposed for modeling the relationships between nodes given rich external information. He et al. [43] fused both structural and content information in a generative manner.
As is well known, textual information is useful in learning network embedding spaces because it can provide related clues for constructing relationships among network nodes. However, most existing text-enhanced NRL techniques concentrate on integrating and balancing graph-structured data and text data in network embedding, which means they still need structural information of the unseen nodes, while in some real-world scenarios (as discussed in Section 1), we might not know such information beforehand. Therefore, we apply DDA methods to learn transferable representations for mapping the two modalities. Since the task needs to make the transformed data similar to the target one while preserving the source domain information, similar to the adversarial-based and reconstruction-based DDA approaches [21–23, 44, 45], we introduce GANs to address the aforementioned problem. The main difference between our proposed approach and previous NRL work is that we focus on the “blind zone,” trying to infer meaningful social relations in structureless network environments from the accompanying text resources. Our approach can be regarded as a remedy for most existing NRL techniques for analyzing text-based networked systems.
SRGAN
Generative adversarial networks
In order to capture the “sociality,” GANs [20] are employed in our framework. The necessity of using GANs lies in that the adversarial process can break the barriers between multimodal data so that the potential social connections in structureless situations can be inferred from non-graph-structured data. The basic idea behind GANs is to set up a game between a generator and a discriminator [46]. The generator tries to mimic the distribution of the training data, i.e., it generates fake samples that are intended to be as indistinguishable from the real ones as possible, while the discriminator determines whether a sample is fake or real using supervised learning techniques.
Text-based network definitions
Let \(\mathcal {G}=(V,E)\) be a graph, where V denotes the set of vertices (nodes) and E⊂(V×V) denotes the set of edges. V^{′}⊂V denotes the set of vertices lacking structural information, while V^{′′}=V∖V^{′} denotes the rest with known structures. Let T be the set of textual information of V. For the purpose of translating social relations from text-domain information, without loss of generality, assume there exists an injective mapping f:T↦V between the two types of data, i.e., \(\mathcal {S}=\{(t,v)\mid v=f(t),t \in T,v \in V\}\).
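These definitions can be made concrete with a small sketch. The toy texts, node IDs, and variable names below are hypothetical, chosen only to mirror the notation \(\mathcal{G}=(V,E)\), V′, T, f, and \(\mathcal{S}\) above.

```python
# Hypothetical toy instance of the text-based network definitions.
V = {0, 1, 2, 3}                      # vertices
E = {(0, 1), (1, 2)}                  # observed edges, E ⊂ V × V
V_prime = {3}                         # nodes lacking structural information
V_double_prime = V - V_prime          # V'' = V \ V', nodes with known structure

# Each node is accompanied by one text; f: T -> V is injective,
# so S = {(t, v) | v = f(t)} pairs every text with a unique node.
T = ["network analysis", "graph embedding", "random walks", "link prediction"]
f = {t: v for v, t in enumerate(T)}   # injective mapping f: T -> V
S = {(t, f[t]) for t in T}

assert len(f) == len(T)               # injectivity holds on this toy set
assert V_double_prime == {0, 1, 2}
```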
We also present two important concepts as follows.
Real embeddings are the mathematical embeddings in a continuous vector space learned by domainspecific representation learning techniques.
Fake embeddings are created by the generators trying to make discriminators incapable of separating them from the real embeddings.
Cross-domain knowledge translation
Our task can be categorized as a heterogeneous domain adaptation setting [19]. It is defined as transforming text-modal data to graph-modal data through cross-modal mapping knowledge learned from the information of the two domains. Suppose \(\mathcal {X}_{V} \in \mathbb {R}^{V \times d_{V}}\) denotes the embedding space of graph-modal data, and \(\mathcal {X}_{T} \in \mathbb {R}^{T \times d_{T}}\) denotes the embedding space of text-modal data, with \(\mathcal {X}_{V} \ne \mathcal {X}_{T}\). d_{V} and d_{T} are the small numbers of latent dimensions. The purpose is to learn the cross-modal mapping g(·) with the following characteristics:
Indistinguishability. For \(\mathbf {x}_{t} \in \mathcal {X}_{T}, g(\mathbf {x}_{t}) \in \mathcal {X}_{V}\). The transformed data g(x_{t}) should exactly match the form of graph-modal data in the embedding space \(\mathcal {X}_{V}\). Meanwhile, it should be hard for a domain discriminator to distinguish the transformed data from the original data.
Structure awareness. For \((t, v) \in \mathcal {S}\), \(g(\mathbf {x}_{t})\) should play the role of x_{v} in the embedding space \(\mathcal {X}_{V}\) to some extent. In other words, the transformed data g(x_{t}) should be able to hold the structural relationships of x_{v} with network homophily [47]. Hence, the empirical structure-preserving objective is as follows.
where |·| denotes the absolute value and P(·|·) denotes the conditional probability defined in [11, 16]. For example, suppose P(x_{v}|x_{u}) is the conditional probability of node v being generated by node u:

\(P(\mathbf {x}_{v} \mid \mathbf {x}_{u}) = \frac {\exp \left (\mathbf {x}_{v}^{\mathsf {T}} \mathbf {x}_{u}\right)}{\sum _{v' \in V} \exp \left (\mathbf {x}_{v'}^{\mathsf {T}} \mathbf {x}_{u}\right)}\)

where exp(·) stands for the exponential function and \(\mathbf {x}_{v}^{\mathsf {T}}\) is the transpose of x_{v}.
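The softmax form of P(x_{v}|x_{u}) can be computed directly from an embedding matrix. The function below is a minimal sketch (the name `conditional_probability` and the max-shift trick are ours, not from the paper):

```python
import numpy as np

def conditional_probability(v, u, X):
    """P(x_v | x_u): softmax over inner products x_{v'}^T x_u,
    taken over all candidate nodes v' (LINE-style objective).
    X is the (num_nodes x d) embedding matrix."""
    scores = X @ X[u]                 # x_{v'}^T x_u for every candidate v'
    scores = scores - scores.max()    # shift for numerical stability
    p = np.exp(scores)
    p = p / p.sum()
    return p[v]
```

By construction the probabilities over all target nodes v sum to one for any source node u.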
Pre-trained domain embedding space
As the graph-modal data and text-modal data are totally heterogeneous in their original forms, there are two demands for representing each one of them:
1. The domain-specific embedding space should preserve meaningful relationships or semantics of the modality;

2. The form of the modality representations should be easy to translate from one to the other.
Hence, before learning the cross-modal mapping knowledge, we first apply the skip-gram framework [32, 48], one of the most popular techniques in deep learning [49, 50], to pre-train the embedding spaces for each domain. Skip-gram is one of the frameworks in word2vec that tries to represent each word as a vector in a continuous low-dimensional space, where similar words are close to each other. The objective under skip-gram is to maximize the conditional probability of the context given each word.
Graph domain. In the graph domain, we adopt the random walk strategy to generate node sequences. As in [9, 10, 33], we then apply the skip-gram framework with negative sampling to map nodes to a continuous vector space. We maximize the likelihood of node sequences to learn the structural regularities in the networks.
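The random-walk sequence generation can be sketched as follows. This is a generic DeepWalk-style walker, not the authors' code; the defaults mirror the walk length (30) and per-node iteration count (20) reported in the experiment settings:

```python
import random

def random_walks(adj, walk_length=30, walks_per_node=20, seed=42):
    """Generate truncated random-walk node sequences over an adjacency
    dict {node: [neighbors]}. The sequences are then treated as
    'sentences' and fed to a skip-gram model."""
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        nodes = list(adj)
        rng.shuffle(nodes)            # a fresh node order per pass
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = adj[walk[-1]]
                if not neighbors:     # dead end: truncate the walk
                    break
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks
```

Each resulting sequence follows existing edges only, so co-occurrence statistics in the skip-gram window reflect graph proximity.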
Text domain. In this domain, we also utilize skip-gram to train word vectors. We then take the average of the word vectors as the text embedding, which has shown its effectiveness in text representation tasks [18, 51].
Cross-modal framework
The overall framework of SRGAN is illustrated in Fig. 2. In this particular DDA task, we first use domain-specific encoders (E_{T} and E_{V}) to pre-train text embeddings and network embeddings, respectively. Two GANs are then employed to deal with the data in set \(\mathcal {S}\), where x_{v}∼P_{V}(x_{v}) and x_{t}∼P_{T}(x_{t}) are the two modal data distributions. To be more precise, \(G_{T}:\mathcal {X}_{T} \mapsto \mathcal {X}_{V}\) encodes text embeddings into fake network embeddings, while D_{V} tries to distinguish them from the real ones in the graph domain. Conversely, \(G_{V}:\mathcal {X}_{V} \mapsto \mathcal {X}_{T}\) and D_{T} invert the process to regulate tGAN and prevent model collapse. During the adversarial training processes, reconstruction and construction losses are produced to update the parameters of the GANs so as to make the transformed data indistinguishable in the target domain and also capable of holding structural relationships with network homophily.
Content-to-structure. In this framework, tGAN learns content-to-structure mapping knowledge, the process of which transforms text-modal data to graph-modal data. It aims at reflecting the structural style of a node by its textual information.
Structure-to-content. Unlike image-to-image translation, we interpret the intention of gGAN as learning the structure-to-content mapping knowledge, which provides backward cycle consistency [21] to regulate tGAN and prevent model collapse. The necessity has been thoroughly discussed in [22].
The least-squares adversarial loss [52] is applied for both GANs to match the distribution of the generated modal data to the data distribution in the target domain. The objective functions are as follows.
where “1” denotes the real label, and “0” (omitted in Eqs. (5) and (6)) denotes the fake label. We minimize \(\mathcal {L}_{G_{T}}\) and \(\mathcal {L}_{D_{V}}\) to make the generated modal data indistinguishable in the graph-domain embedding space. Meanwhile, we also minimize \(\mathcal {L}_{G_{V}}\) and \(\mathcal {L}_{D_{T}}\) to enable \(G_{V}(\mathbf {x}_{v}) \in \mathcal {X}_{T}\).
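The least-squares adversarial losses can be written out as in LSGAN [52]. The sketch below assumes the standard LSGAN formulation with real label 1 and fake label 0 (function names are ours); `d_real` and `d_fake` are the discriminator's scalar scores on real and generated embeddings:

```python
import numpy as np

def ls_discriminator_loss(d_real, d_fake):
    """Least-squares discriminator loss: push scores on real samples
    toward the real label 1 and scores on fake samples toward 0."""
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def ls_generator_loss(d_fake):
    """Least-squares generator loss: fool the discriminator by pushing
    its scores on fake samples toward the real label 1."""
    return 0.5 * np.mean((d_fake - 1.0) ** 2)
```

A discriminator that perfectly separates the two sets drives its own loss to zero while maximizing the generator's loss, and vice versa.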
The purpose of cross-modal mapping is not only to make the transformed modal data indistinguishable in the target domain; the transformed data should hold the network structure of the target node as well. Therefore, besides the adversarial losses, we also apply reconstruction losses and construction losses to optimize both GANs. We adopt the cycle consistency loss (reconstruction loss) [21–23] to compute the reconstruction error, the idea of which is to use transitivity to induce the generators to be consistent with each other. The reconstruction loss measures how well the original data is reconstructed after a transit generative sequence. Meanwhile, the construction loss is also needed to measure the similarity of relationships between the transformed data and the original one.
where \(\mathbf {x}_{v}^{*}\) and \(\mathbf {x}_{t}^{*}\) in Eq. (7) are the transformed data, and \(\mathbf {x}_{t}^{*'}\) and \(\mathbf {x}_{v}^{*'}\) in Eq. (8) are the reconstructed data. Thus, the cycle consistency loss is defined as follows.
where the l_{1} norm is applied in the loss, and we push G_{V}(G_{T}(x_{t}))≈x_{t} and G_{T}(G_{V}(x_{v}))≈x_{v} in the cycle by minimizing \(\mathcal {L}_{cyc}\).
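The cycle consistency loss can be sketched directly from this definition. The helper below is a minimal illustration (names are ours); `G_T` and `G_V` are any callables mapping between the two embedding spaces:

```python
import numpy as np

def cycle_consistency_loss(x_t, x_v, G_T, G_V):
    """L_cyc: l1 error after a transit generative sequence, pushing
    G_V(G_T(x_t)) ≈ x_t and G_T(G_V(x_v)) ≈ x_v."""
    return (np.abs(G_V(G_T(x_t)) - x_t).sum()
            + np.abs(G_T(G_V(x_v)) - x_v).sum())
```

If the two generators are exact inverses of each other, the loss vanishes; any drift in either direction of the cycle is penalized.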
The construction loss is defined in Eq. (10):
To satisfy Eq. (1), we minimize \(\mathcal {L}_{con}\). Let
if \(\|\mathbf {x}_{v}^{*}-\mathbf {x}_{v}\|_{1} \to 0\), then \(\mathcal {L}_{u \to v} \to 0\).
Besides, minimizing \(|P(\mathbf {x}_{u} \mid G_{T}(\mathbf {x}_{t}))-P(\mathbf {x}_{u} \mid \mathbf {x}_{v})|\) is equivalent to minimizing \(|P^{-1}(\mathbf {x}_{u} \mid G_{T}(\mathbf {x}_{t}))-P^{-1}(\mathbf {x}_{u} \mid \mathbf {x}_{v})|\). Thus, let
if \(\|\mathbf {x}_{v}^{*}-\mathbf {x}_{v}\|_{1} \to 0\), then \(\mathcal {L}_{v \to u} \to 0\).
The full objective of SRGAN is:
where the hyperparameters α and β are the factors controlling the contributions of the reconstruction and construction losses, respectively.
Edge representation learning
The Hadamard product [53] is adopted for learning edge representations from the content-to-structure knowledge, which has experimentally shown its effectiveness in [10]. For example, given two nodes v,u with textual information, the edge representation is defined as \(\mathbf {x}_{e(v,u)}=\mathbf {Hadamard}(\mathbf {x}_{v}^{*},\mathbf {x}_{u}^{*})\), where x_{e(v,u)} denotes the linking relationship of v,u translated from the corresponding texts.
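The Hadamard operator is simply the element-wise product of the two transformed node embeddings; a one-line sketch (the function name is ours):

```python
import numpy as np

def edge_representation(x_v_star, x_u_star):
    """Edge embedding via the element-wise (Hadamard) product of the
    transformed node embeddings x_v* and x_u*."""
    return x_v_star * x_u_star
```

The resulting vector has the same dimensionality as the node embeddings and can be fed directly to a downstream link classifier.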
Experiments
In this section, several experiments are conducted to validate the performances of SRGAN and baseline methods.
Datasets
The following three real-world network datasets^{Footnote 1} are used in the experiments, and the detailed statistics of these datasets are listed in Table 1.
CitNet^{Footnote 2}. This dataset was extracted by Tang et al. [54] where each paper is regarded as a node, and every directed edge between two nodes denotes a citation. We obtain 132,033 papers with abstract contents after filtering and split them into training and testing sets by a ratio of 70:30.
Cora^{Footnote 3}. A typical benchmark [55] for text-based social network analysis. All of the articles are divided into 10 root categories. Also, 70% of the articles are chosen as the training data, and the rest is for testing.
HepTh^{Footnote 4}. A filtered graph dataset [16, 56] extracted from the e-print arXiv, where directed edges denote citation relationships. Following the same rule, nodes with textual information are split into training and testing sets as well.
Baselines
The DDA methods (standard GAN and LSGAN), structure-based NRL methods (DeepWalk, node2vec, and AIDW), and text similarity methods (Jaccard and CosSim) are employed to demonstrate the effectiveness of SRGAN. Subsection 4.4 will present the transformation quality of each DDA method in neighborhood preserving. To evaluate the performance of translating social relations from text-domain information, in subsection 4.5, we also apply the Hadamard product for all DDA methods and structure-based NRL methods. Text similarity methods measure the similarity of texts directly. Detailed descriptions of all baselines are as follows.
GAN. A classic GAN [20] (standard GAN) learns two competing mappings: a discriminator and a generator, both of which are modeled as deep neural networks. They play a min-max game where the discriminator tries to identify the fake network embeddings and the generator tries to produce examples that look as real as possible.
LSGAN. Least squares GAN [52] is able to generate samples that are closer to real data. It adopts the least squares loss function for the discriminator to move the fake samples toward the decision boundary, which also makes it perform more stably during the learning process.
DeepWalk. This online learning approach learns low-dimensional latent representations of nodes from samples yielded by short random walks [9], and is scalable enough to build incremental results. Note that, for nodes without structural information, we establish self-links to learn the network representations because, intuitively, a node should have the closest relationship with itself. We switch from hierarchical softmax to negative sampling to improve efficiency [10].
node2vec. It simulates breadth-first sampling and depth-first sampling via the tunable parameters p and q. According to the parameter sensitivity experiments [10], we set p=q=0.5 to balance outward exploration and a proper distance from the start vertex. As with DeepWalk, we establish self-links for nodes without structural information in both the training and testing stages.
AIDW. ANE with inductive DeepWalk (AIDW) [37] unifies a structure-preserving component and an adversarial-learning component to train the generator alternately. The former component aims at encoding structural properties, while the latter acts as a regularizer for learning more stable and robust representations based on the adversarial learning principle.
Jaccard. Jaccard similarity coefficient [57], also known as Intersection over Union, is used for measuring text similarity between finite sentence sample sets. It is defined as the size of the word intersection divided by the size of the word union of the sample sets.
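The Jaccard baseline reduces to a set operation on the word sets of the two texts; a minimal sketch (tokenization by whitespace is our simplifying assumption):

```python
def jaccard(text_a, text_b):
    """Jaccard similarity coefficient of two texts:
    |word intersection| / |word union| of their word sets."""
    a, b = set(text_a.split()), set(text_b.split())
    union = a | b
    return len(a & b) / len(union) if union else 0.0
```

In the relation-inference experiments, a link is predicted between two nodes when this score exceeds the 0.5 threshold.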
CosSim. Cosine similarity measures the cosine of the angle between two text embeddings. It was adopted to build the document network dataset by Wang et al. [34]. As in SRGAN, each text is represented by the average of the word vectors.
Experiment settings
The embedding size for both modalities is set to 300, which is the same size as the Google pre-trained word vectors^{Footnote 5}. In the random-walk procedure, we empirically set the length of a random path to 30 and the number of iterations for each node to 20. Following the settings in [37], AIDW initializes the input data with the node embeddings pre-trained by DeepWalk and applies one layer for the generator and a 512-512-1 layer structure for the discriminator. Due to the pre-trained data forms, all DDA methods apply deep neural networks for linear transformation of the input data, and Adam [58] is employed as the optimization algorithm for the neural networks. Instead of using ReLU and Leaky ReLU [59–61] as in image-to-image translation, we adopt the hyperbolic tangent (tanh) as the activation function, and so do the other DDA methods. For regularization, we employ dropout (rate = 0.3) [62] for the generators of the GANs. Deep neural networks composed of an input layer, several hidden layers, and an output layer are employed for both the generators (300-600-300-300) and the discriminators (300-600-300-300-1). The weights of the neural networks are initialized from a Gaussian distribution with mean 0 and standard deviation 0.02. The best-performing hyperparameters α and β are selected from the space {0.1,0.3,0.5,0.7,0.9} by applying a grid search strategy.
Neighborhood preserving quality
In this subsection, we evaluate the transformation performance of each DDA method. \(Q_{l_{1}}\) and \(Q_{l_{2}}\) are the metrics used to validate the quality of the transformed data.
where N(v) denotes the neighbors of node v. \(Q_{l_{1}}\) applies the l_{1} norm and \(Q_{l_{2}}\) applies the l_{2} norm.
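Since Eq. (12) is described but not reproduced here, the following is one plausible reading of the metric: compare the total distance from a node to its neighbors before and after transformation. Treat this as an assumption-laden sketch, not the paper's exact formula:

```python
import numpy as np

def q_norm(x_v_star, x_v, neighbor_embs, p=1):
    """Neighborhood-preserving distance (assumed form of Eq. (12)):
    the gap between the summed node-to-neighbor distances of the
    original embedding x_v and of the transformed embedding x_v*.
    p selects the norm (1 for Q_l1, 2 for Q_l2)."""
    d_orig = sum(np.linalg.norm(x_v - x_u, p) for x_u in neighbor_embs)
    d_star = sum(np.linalg.norm(x_v_star - x_u, p) for x_u in neighbor_embs)
    return abs(d_star - d_orig)
```

A perfect transformation (x_v* identical to x_v) gives a score of 0; smaller values mean the transformed data sits in a position structurally similar to the original node.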
As discussed in Section 3.3, for v∈V^{′}, we expect that the transformed data \(\mathbf {x}_{v}^{*}\) can hold the structural relationships of x_{v} with network homophily in the graph-domain space. Thus, the idea of the metrics proposed in Eq. (12) is to measure the similarity of the neighborhood-preserving distance. If \(Q_{l_{1}}\) and \(Q_{l_{2}}\) are relatively small, then we can conclude that the transformed data preserves much more similar relationships with the original one in its neighborhood. Figures 3 and 4 show the results of the transformation quality in neighborhood preserving. Apparently, SRGAN outperforms the standard GAN and LSGAN on all three datasets. Despite the different adversarial learning processes of the two state-of-the-art GAN methods, their performances in neighborhood preserving are almost the same. However, SRGAN achieves obviously smaller \(Q_{l_{1}}\) and \(Q_{l_{2}}\) distances: we reduce the \(Q_{l_{1}}\) and \(Q_{l_{2}}\) distances by more than 18% on CitNet, by 5% on HepTh, and by almost half on Cora.
The state-of-the-art GAN methods only consider the adversarial loss in modality transformation. They just try to make the generated data indistinguishable in the graph-domain space but neglect to preserve the structural relationships in the networks. However, as the experimental results show, SRGAN can narrow the difference between the transformed data and the original one in terms of their relationships with neighbors. We think this is mainly because of the framework of SRGAN, where the cycle learning process with reconstruction and construction losses provides the cross-modal mapping knowledge that helps diminish the \(Q_{l_{1}}\) and \(Q_{l_{2}}\) distances.
Relation inference from text-domain information
To validate the effectiveness of all methods in translating social relations, we conduct link prediction experiments based on text-domain information. We construct positive samples labeled with “1” by selecting all edges (v,u)∈E, and negative ones labeled with “0” by randomly generating node pairs (v,u)∉E. We employ l_{2}-regularized logistic regression [63, 64] implemented with scikit-learn^{Footnote 6} to train the classifiers based on the original network structures for evaluating the methods. We aim to predict whether there exists a link between two given nodes; thus, following [65], the metrics adopted for performance evaluation are Micro F_{1} and Macro F_{1}. Micro F_{1} sums up the individual true positives, false positives, and false negatives over the different classes, while Macro F_{1} calculates the F_{1} score of each class and takes their unweighted mean.
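The two evaluation metrics can be computed without any library support; the sketch below (function name ours) implements the standard Micro and Macro F_{1} definitions for the binary link labels used here:

```python
def f1_scores(y_true, y_pred, classes=(0, 1)):
    """Micro F1 pools TP/FP/FN over all classes; Macro F1 averages
    the per-class F1 values with equal weight."""
    tp = {c: 0 for c in classes}
    fp = {c: 0 for c in classes}
    fn = {c: 0 for c in classes}
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[p] += 1
        else:
            fp[p] += 1   # predicted class p, but it was wrong
            fn[t] += 1   # true class t was missed
    def f1(tp_, fp_, fn_):
        denom = 2 * tp_ + fp_ + fn_
        return 2 * tp_ / denom if denom else 0.0
    micro = f1(sum(tp.values()), sum(fp.values()), sum(fn.values()))
    macro = sum(f1(tp[c], fp[c], fn[c]) for c in classes) / len(classes)
    return micro, macro
```

Note that when every sample receives exactly one of the two labels, Micro F_{1} coincides with plain accuracy, while Macro F_{1} still penalizes a classifier that ignores the minority class.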
Intuitively, when facing no explicit structures in text-based networks, it would be natural to consider measuring the similarity of two nodes by their textual information. Therefore, we employ two commonly used text similarity methods, the Jaccard similarity coefficient and cosine similarity, to show the baseline results of relation inference based on texts. The threshold is set to 0.5, the same as in the l_{2}-regularized logistic regression classifiers. If the Jaccard similarity coefficient or cosine similarity score between two texts is larger than 0.5, then we infer that there exists a link between the corresponding nodes. Besides, due to the scalability of DeepWalk, node2vec, and AIDW, we also investigate the effectiveness of the three state-of-the-art NRL methods when dealing with the situation of missing structures. Tables 2, 3, and 4 show the performances of all methods, and the numbers in bold represent the best results.
Table 2 demonstrates the superiority of SRGAN over the other seven methods. First, the DDA methods achieve a significant improvement in relation inference based on textual information: the best Micro F_{1} and Macro F_{1} scores on the CitNet dataset, produced by SRGAN, are 0.8340 and 0.8265, respectively. Meanwhile, the standard GAN, LSGAN, and SRGAN perform stably when varying the percentage of training edges. Second, the structure-based NRL methods (DeepWalk, node2vec, and AIDW) are unsuitable in the situation of missing structures. To our surprise, the edge representations generated by DeepWalk and node2vec make the classifiers predict all potential links as negative, regardless of the percentage of training edges. We believe this is because the online training process barely learns meaningful structural information for the data in the testing sets, so the testing nodes may be mapped to positions where they cannot hold the proper structural relationships. Though AIDW performs slightly better than DeepWalk and node2vec, it still suffers from unknown structures when generating new embeddings. Third, the naive idea of inferring potential relations by measuring the similarity of textual information fails to achieve the expected results: cosine similarity performs the worst, and Jaccard is only slightly better than DeepWalk, node2vec, and AIDW.
As the results in Table 3 show, SRGAN improves the Micro F_{1} and Macro F_{1} scores by more than 5% over the other two DDA methods. The standard GAN and LSGAN are practically indistinguishable on the Cora dataset. DeepWalk and node2vec again fail to produce valid results, making the classifiers predict all relations as nonexistent. AIDW performs much better than the other two NRL methods, but it still cannot predict meaningful relationships even though its node representations are enhanced. Jaccard is better than cosine similarity on the Cora dataset, but it still cannot infer convincing social relations. SRGAN outperforms all baseline methods and achieves the highest Micro F_{1} (0.7905) and Macro F_{1} (0.7887) scores with 90% of the training edges. Again, the DDA methods perform steadily as the percentage of training edges varies from 10 to 90%.
Table 4 shows the results on the HepTh dataset. SRGAN also produces the best Micro F_{1} (0.8004) and Macro F_{1} (0.7973) scores given 50% of the training edges. LSGAN turns out to be competitive, showing better performance than the standard GAN in most cases. DeepWalk and node2vec again seem unfit for inferring social relations in the network-structureless situation. We believe the two structure-based methods cannot benefit from the pretrained embedding models, which leads them to generate meaningless edge representations; as a consequence, all edges in the testing sets are predicted as invalid connections. Though AIDW makes positive predictions, few of them are correct. The Jaccard similarity coefficient and cosine similarity are still uncompetitive with our proposed approach.
Sensitivity of hyperparameters
We evaluate the sensitivity of the hyperparameters α and β, each varied over {0.1,0.3,0.5,0.7,0.9}, in this subsection. Figures 5 and 6 present the results.
As α and β increase, the performance of SRGAN gradually approaches that of DeepWalk and node2vec on the CitNet and Cora datasets, so it seems inappropriate to set these hyperparameters too large in SRGAN. With α=0.1 and β=0.1, SRGAN achieves its best results on CitNet; on the Cora dataset, SRGAN produces its highest results when α increases to 0.3. Different from these two datasets, both metrics fluctuate slightly around 0.77 on HepTh; SRGAN remains competitive against the other methods, and the best results are achieved at (α=0.9, β=0.1). Hence, we think that in most cases it is more effective for SRGAN to use a relatively small β, while α still needs to be fine-tuned depending on the application.
Efficiency of SRGAN
To demonstrate the efficiency of SRGAN, we measure its training speed. For each dataset, the batch size is set to 64, and two graphics processing units (GPUs, NVIDIA Tesla K80 ^{Footnote 7}) are deployed to accelerate model training.
Table 5 shows the efficiency of SRGAN on the three datasets, where T_{all} denotes the total training time of SRGAN per iteration (epoch), T_{1} denotes the time SRGAN takes to complete the first batch loop, and T_{avg} denotes the average time per batch for the rest. Thanks to the GPU accelerators, training SRGAN on the three datasets is fast: T_{avg} is only around 0.05 s for all datasets, and even on the large CitNet network, one training iteration of SRGAN finishes within 87 s.
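The timing breakdown T_{1} vs. T_{avg} can be reproduced with a simple harness like the one below. The `step` callable is a dummy stand-in for one SRGAN training batch; the first call is typically slower in practice (graph construction, GPU kernel compilation, cache warm-up), which is why it is reported separately.

```python
import time

def time_batches(step, num_batches):
    """Time each batch; report first-batch time, the average of the
    remaining batches, and the total per-iteration time."""
    durations = []
    for _ in range(num_batches):
        t0 = time.perf_counter()
        step()  # one training step on a batch (stand-in here)
        durations.append(time.perf_counter() - t0)
    t1 = durations[0]                                  # T_1
    t_avg = sum(durations[1:]) / (len(durations) - 1)  # T_avg
    t_all = sum(durations)                             # T_all
    return t1, t_avg, t_all

# Dummy step that just burns a little CPU time.
t1, t_avg, t_all = time_batches(lambda: sum(i * i for i in range(10_000)), 20)
print(f"T1={t1:.4f}s, Tavg={t_avg:.4f}s, Tall={t_all:.4f}s")
```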
Discussions
Overall, the performance of SRGAN is superior to that of the state-of-the-art DDA methods (standard GAN and LSGAN), the structure-based NRL methods (DeepWalk, node2vec, and AIDW), and the text similarity methods (Jaccard similarity coefficient and cosine similarity). SRGAN makes the transformed data preserve relationships within its neighborhood that are much more similar to those of the original data. Meanwhile, the relation inference results produced by SRGAN show that even when some structural information is missing, we can still make valid predictions from text-domain information. Thus, SRGAN outperforms the baseline methods in making the transformed data reflect the sociability of the graph-structured data and its tendency to form social connections. The standard GAN and LSGAN only consider adversarial losses in modality transformation, which cannot well preserve the network structures. It is difficult for DeepWalk and node2vec to learn meaningful embeddings without explicit structures, since the online learning process cannot transfer valid information about the pretrained network relationships to unseen nodes. AIDW enhances the robustness of node representations, but it still struggles in the network-structureless situations. The Jaccard similarity coefficient and cosine similarity methods infer relationships only from text resources, and our experiments show that complex real-world social relations cannot be inferred simply from the similarity of text data.
Compared with the aforementioned baselines, the effectiveness of SRGAN can be explained as follows.

1
In our cross-modal mapping framework, not only do we use the adversarial losses to deceive the domain discriminators, but we also apply reconstruction and construction losses to learn textual and topological styles, which depict the structure-aware relationships with network homophily;

2
SRGAN incorporates knowledge from the text domain, which remedies the network-structureless situation where structure-based NRL methods cannot perform well;

3
Unlike the text similarity methods that only consider text resources, SRGAN also takes advantage of the original graph data in social relation translation. The cross-modal mapping knowledge we learn bridges the text-modal data and the graph-modal data, which helps infer meaningful social relations.
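Schematically, the loss terms in point 1 above and the weights examined in the hyperparameter sensitivity study can be combined into one overall objective. The exact formulation of SRGAN is given in its method section, so the linear combination below is only an illustrative sketch:

```latex
\mathcal{L}_{\mathrm{SRGAN}}
  = \mathcal{L}_{\mathrm{adv}}
  + \alpha \, \mathcal{L}_{\mathrm{rec}}
  + \beta \, \mathcal{L}_{\mathrm{con}}
```

where \(\mathcal{L}_{\mathrm{adv}}\) denotes the adversarial losses that deceive the domain discriminators, \(\mathcal{L}_{\mathrm{rec}}\) and \(\mathcal{L}_{\mathrm{con}}\) the reconstruction and construction losses that learn the textual and topological styles, and α and β the weights evaluated in the sensitivity experiments.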
However, this task remains challenging and demands more research effort. In the experiments, we find that the generated network data cannot perfectly imitate the “manners” of the real data in the original network space (i.e., the \(Q_{l_{1}}\) and \(Q_{l_{2}}\) distances still have room to be reduced). We think SRGAN can preserve “relative” relationships (social connections) based on textual information, but it is challenging to locate the “absolute” position of a node in the graph-domain space. The reason may be that the networks are generated from diverse social information, of which the utilized textual information may be just one of the key components that lead to some of the topologies or interactions. Therefore, it is hard to accurately locate an unseen node in the original network space from such textual information alone.
Conclusion and future work
In this paper, we propose the social relation GAN (SRGAN), which remedies the difficulty that most existing structure-based NRL techniques have in dealing with text-based network-structureless problems. The cross-modal mapping framework bridges the gap between graph-modal data and text-modal data, which helps learn meaningful relations from the text-domain information in networked systems. Experimental results on three text-based network benchmarks show that SRGAN translates more realistic social relations compared against the baselines.
In future work, we will consider incorporating other multimedia data, such as images and videos, to analyze the network-structureless situation. We also believe it is possible to generate meaningful text-based profiles from the graph-modal data, which could provide more information for human-centered applications such as recommendation and detection.
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Notes
 1.
For the validation purpose, the original linking information of the nodes in the testing sets is hidden during the training stages.
Abbreviations
 NRL:

Network representation learning
 GANs:

Generative adversarial networks
 CPS:

Cyber-physical systems
 IoT:

Internet of Things
 DDA:

Deep domain adaptation
 SRGAN:

Social relation GAN
 LSGAN:

Least squares GAN
 ANE:

Adversarial network embedding
 AIDW:

ANE with inductive DeepWalk
 GPUs:

Graphics processing units
References
 1
X. Xu, R. Mo, F. Dai, W. Lin, S. Wan, W. Dou, Dynamic resource provisioning with fault tolerance for data-intensive meteorological workflows in cloud. IEEE Trans. Ind. Inform. (2019). https://doi.org/10.1109/TII.2019.2959258.
 2
J. Li, T. Cai, K. Deng, X. Wang, T. Sellis, F. Xia, Community-diversified influence maximization in social networks. Inf. Syst. 92, 1–12 (2020).
 3
L. Qi, Q. He, F. Chen, X. Zhang, W. Dou, Q. Ni, Data-driven web APIs recommendation for building web applications. IEEE Trans. Big Data (2020). https://doi.org/10.1109/TBDATA.2020.2975587.
 4
X. Xu, X. Liu, Z. Xu, C. Wang, S. Wan, X. Yang, Joint optimization of resource utilization and load balance with privacy preservation for edge services in 5G networks. Mob. Netw. Appl. (2019). https://doi.org/10.1007/s11036-019-01448-8.
 5
V. Martínez, F. Berzal, J. C. Cubero, A survey of link prediction in complex networks. ACM Comput. Surv. 49(4), 69 (2017).
 6
H. Liu, H. Kou, C. Yan, L. Qi, Link prediction in paper citation network to construct paper correlation graph. EURASIP J. Wirel. Commun. Netw. 2019, 233 (2019).
 7
P. Cui, X. Wang, J. Pei, W. Zhu, A survey on network embedding. arXiv preprint arXiv:1711.08752 (2017).
 8
W. L. Hamilton, R. Ying, J. Leskovec, Representation learning on graphs: methods and applications. arXiv preprint arXiv:1709.05584 (2017).
 9
B. Perozzi, R. Al-Rfou, S. Skiena, in Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. DeepWalk: online learning of social representations (ACM, 2014), pp. 701–710. https://doi.org/10.1145/2623330.2623732.
 10
A. Grover, J. Leskovec, in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. node2vec: scalable feature learning for networks (ACM, 2016), pp. 855–864. https://doi.org/10.1145/2939672.2939754.
 11
J. Tang, M. Qu, M. Wang, M. Zhang, J. Yan, Q. Mei, in Proceedings of the 24th International Conference on World Wide Web. LINE: large-scale information network embedding (ACM, 2015), pp. 1067–1077. https://doi.org/10.1145/2736277.2741093.
 12
S. Cui, T. Li, S. C. Chen, M. L. Shyu, Q. Li, H. Zhang, DISL: deep isomorphic substructure learning for network representations. Knowl.-Based Syst. 189, 105086 (2020). https://doi.org/10.1016/j.knosys.2019.105086.
 13
X. Xu, X. Zhang, H. Gao, Y. Xue, L. Qi, W. Dou, BeCome: blockchain-enabled computation offloading for IoT in mobile edge computing. IEEE Trans. Ind. Inform. 16(6), 4187–4195 (2020).
 14
Y. Chen, N. Zhang, Y. Zhang, X. Chen, W. Wu, X. S. Shen, Energy efficient dynamic offloading in mobile edge computing for internet of things. IEEE Trans. Cloud Comput. (2019). https://doi.org/10.1109/TCC.2019.2898657.
 15
X. Xu, C. He, Z. Xu, L. Qi, S. Wan, M. Z. A. Bhuiyan, Joint optimization of offloading utility and privacy for edge computing enabled IoT. IEEE Internet Things J. 7(4), 2622–2629 (2020).
 16
C. Tu, H. Liu, Z. Liu, M. Sun, in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, vol 1. CANE: context-aware network embedding for relation modeling, (2017), pp. 1722–1731. https://doi.org/10.18653/v1/P17-1158.
 17
C. Yang, Z. Liu, D. Zhao, M. Sun, E. Y. Chang, in Proceedings of the 24th International Joint Conference on Artificial Intelligence. Network representation learning with rich text information (AAAI Press, 2015), pp. 2111–2117.
 18
X. Sun, J. Guo, X. Ding, T. Liu, A general framework for content-enhanced network representation learning. arXiv preprint arXiv:1610.02906 (2016).
 19
M. Wang, W. Deng, Deep visual domain adaptation: a survey. Neurocomputing (2018). https://doi.org/10.1016/j.neucom.2018.05.083.
 20
I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio, in Advances in Neural Information Processing Systems. Generative adversarial nets, (2014), pp. 2672–2680.
 21
J. Y. Zhu, T. Park, P. Isola, A. A. Efros, in 2017 IEEE International Conference on Computer Vision. Unpaired image-to-image translation using cycle-consistent adversarial networks (IEEE, 2017), pp. 2242–2251.
 22
T. Kim, M. Cha, H. Kim, J. K. Lee, J. Kim, in International Conference on Machine Learning. Learning to discover cross-domain relations with generative adversarial networks, (2017), pp. 1857–1865.
 23
Z. Yi, H. Zhang, P. Tan, M. Gong, in 2017 IEEE International Conference on Computer Vision. DualGAN: unsupervised dual learning for image-to-image translation (IEEE, 2017), pp. 2868–2876.
 24
T. Mukherjee, P. Kumar, D. Pati, E. Blasch, E. Pasiliao, L. Xu, LOSI: large scale location inference through FM signal integration and estimation. Big Data Min. Analytics. 2(4), 319–348 (2019).
 25
B. S. Jena, C. Khan, R. Sunderraman, High performance frequent subgraph mining on transaction datasets: a survey and performance comparison. Big Data Min. Analytics. 2(3), 159–180 (2019).
 26
L. Qi, X. Zhang, S. Li, S. Wan, Y. Wen, W. Gong, Spatial-temporal data-driven service recommendation with privacy-preservation. Inf. Sci. 515, 91–102 (2020).
 27
M. Bouazizi, T. Ohtsuki, Multi-class sentiment analysis on Twitter: classification performance and challenges. Big Data Min. Analytics 2(3), 181–194 (2019).
 28
W. Zhong, X. Yin, X. Zhang, S. Li, W. Dou, R. Wang, L. Qi, Multi-dimensional quality-driven service recommendation with privacy-preservation in mobile edge environment. Comput. Commun. 157, 116–123 (2020).
 29
X. Xu, Y. Chen, X. Zhang, Q. Liu, X. Liu, L. Qi, A blockchain-based computation offloading method for edge computing in 5G networks. Softw. Pract. Experience (2019). https://doi.org/10.1002/spe.2749.
 30
C. Zhou, A. Li, A. Hou, Z. Zhang, Z. Zhang, P. Dai, F. Wang, Modeling methodology for early warning of chronic heart failure based on real medical big data. Expert Syst. Appl., 113361 (2020). https://doi.org/10.1016/j.eswa.2020.113361.
 31
T. Cai, J. Li, A. Mian, R. H. Li, T. Sellis, J. X. Yu, Targetaware holistic influence maximization in spatial social networks. IEEE Trans. Knowl. Data Eng. (2020). https://doi.org/10.1109/TKDE.2020.3003047.
 32
T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, J. Dean, in Advances in Neural Information Processing Systems. Distributed representations of words and phrases and their compositionality, (2013), pp. 3111–3119.
 33
S. Cui, B. Xia, T. Li, M. Wu, D. Li, Q. Li, H. Zhang, in 2017 12th International Conference on Intelligent Systems and Knowledge Engineering. SimWalk: learning network latent representations with social relation similarity (IEEE, 2017), pp. 1–6. https://doi.org/10.1109/ISKE.2017.8258804.
 34
D. Wang, P. Cui, W. Zhu, in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Structural deep network embedding (ACM, 2016), pp. 1225–1234. https://doi.org/10.1145/2939672.2939753.
 35
M. Niepert, M. Ahmed, K. Kutzkov, in International Conference on Machine Learning. Learning convolutional neural networks for graphs, (2016), pp. 2014–2023.
 36
T. N. Kipf, M. Welling, in International Conference on Learning Representations. Semi-supervised classification with graph convolutional networks, (2017).
 37
Q. Dai, Q. Li, J. Tang, D. Wang, in The 32nd AAAI Conference on Artificial Intelligence. Adversarial network embedding, (2018).
 38
H. Gao, J. Pei, H. Huang, in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ProGAN: network embedding via proximity generative adversarial network (ACM, 2019), pp. 1308–1316. https://doi.org/10.1145/3292500.3330866.
 39
B. Hu, Y. Fang, C. Shi, in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Adversarial learning on heterogeneous information networks (ACM, 2019), pp. 120–129. https://doi.org/10.1145/3292500.3330970.
 40
X. Huang, Q. Song, Y. Li, X. Hu, in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Graph recurrent networks with attributed random walks (ACM, 2019), pp. 732–740. https://doi.org/10.1145/3292500.3330941.
 41
C. Tu, Z. Zhang, Z. Liu, M. Sun, in Proceedings of the 26th International Joint Conference on Artificial Intelligence. TransNet: translation-based network representation learning for social relation extraction (AAAI Press, 2017), pp. 2864–2870.
 42
C. Tu, W. Zhang, Z. Liu, M. Sun, in Proceedings of the 25th International Joint Conference on Artificial Intelligence. Max-margin DeepWalk: discriminative learning of network representation (AAAI Press, 2016), pp. 3889–3895.
 43
Z. He, J. Liu, N. Li, Y. Huang, in Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Learning network-to-network model for content-rich network embedding (ACM, 2019), pp. 1037–1045. https://doi.org/10.1145/3292500.3330924.
 44
K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, D. Krishnan, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Unsupervised pixel-level domain adaptation with generative adversarial networks, (2017), pp. 3722–3731.
 45
P. Isola, J. Y. Zhu, T. Zhou, A. A. Efros, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Image-to-image translation with conditional adversarial networks, (2017), pp. 1125–1134.
 46
I. Goodfellow, NIPS 2016 tutorial: generative adversarial networks. arXiv preprint arXiv:1701.00160 (2016).
 47
M. McPherson, L. Smith-Lovin, J. M. Cook, Birds of a feather: homophily in social networks. Annu. Rev. Sociol. 27(1), 415–444 (2001).
 48
T. Mikolov, K. Chen, G. Corrado, J. Dean, Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781 (2013).
 49
Y. LeCun, Y. Bengio, G. Hinton, Deep learning. Nature. 521(7553), 436–444 (2015).
 50
I. Goodfellow, Y. Bengio, A. Courville, Deep learning (MIT Press, 2016). https://www.deeplearningbook.org/.
 51
A. Joulin, E. Grave, P. Bojanowski, T. Mikolov, in Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, vol 2. Bag of tricks for efficient text classification, (2017), pp. 427–431.
 52
X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, S. P. Smolley, in 2017 IEEE International Conference on Computer Vision. Least squares generative adversarial networks (IEEE, 2017), pp. 2813–2821.
 53
C. Davis, The norm of the Schur product operation. Numer. Math. 4(1), 343–344 (1962).
 54
J. Tang, J. Zhang, L. Yao, J. Li, L. Zhang, Z. Su, in Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ArnetMiner: extraction and mining of academic social networks (ACM, 2008), pp. 990–998. https://doi.org/10.1145/1401890.1402008.
 55
A. K. McCallum, K. Nigam, J. Rennie, K. Seymore, Automating the construction of internet portals with machine learning. Inf. Retr. 3(2), 127–163 (2000).
 56
J. Leskovec, J. Kleinberg, C. Faloutsos, in Proceedings of the 11th ACM SIGKDD International Conference on Knowledge Discovery in Data Mining. Graphs over time: densification laws, shrinking diameters and possible explanations (ACM, 2005), pp. 177–187. https://doi.org/10.1145/1081870.1081893.
 57
P. Jaccard, Distribution de la flore alpine dans le bassin des dranses et dans quelques régions voisines. Bull. Soc. Vaudoise Sci. Nat. 37, 241–272 (1901).
 58
D. P. Kingma, J. Ba, Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014).
 59
X. Glorot, A. Bordes, Y. Bengio, in Proceedings of the 14th International Conference on Artificial Intelligence and Statistics. Deep sparse rectifier neural networks, (2011), pp. 315–323.
 60
A. L. Maas, A. Y. Hannun, A. Y. Ng, in International Conference on Machine Learning, vol 30. Rectifier nonlinearities improve neural network acoustic models, (2013), p. 3.
 61
B. Xu, N. Wang, T. Chen, M. Li, Empirical evaluation of rectified activations in convolutional network. arXiv preprint arXiv:1505.00853 (2015).
 62
N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov, Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014).
 63
R. E. Fan, K. W. Chang, C. J. Hsieh, X. R. Wang, C. J. Lin, LIBLINEAR: a library for large linear classification. J. Mach. Learn. Res. 9, 1871–1874 (2008).
 64
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, et al, scikit-learn: machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011).
 65
H. Wang, J. Wang, J. Wang, M. Zhao, W. Zhang, F. Zhang, X. Xie, M. Guo, in The 32nd AAAI Conference on Artificial Intelligence. GraphGAN: graph representation learning with generative adversarial nets, (2018).
Acknowledgements
Not applicable.
Funding
This work was supported in part by the China Scholarship Council (No. 201706840112), the Fundamental Research Funds for the Central Universities (No. 30918012204), the Jiangsu Province Key Research and Development Program (BE2017739), the 4th project “Research on the Key Technology of Endogenous Security Switches” (2020YFB1804604) of the National Key R&D Program “New Network Equipment Based on Independent Programmable Chips” (2020YFB1804600), the 2018 Jiangsu Province Major Technical Research Project “Information Security Simulation System” (BE2017100), the Military Common Information System Equipment Pre-research Special Technical Project (315075701), and the 2019 Industrial Internet Innovation and Development Project (Industrial Internet Security On-Site Emergency Detection Tool Project).
Author information
Contributions
Shicheng Cui is the principal contributor. In a supervising role, Dr. Qianmu Li and Dr. ShuChing Chen formulated the research problem and contributed to the discussion of results. The authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Cui, S., Li, Q. & Chen, S. An adversarial learning approach for discovering social relations in human-centered information networks. J Wireless Com Network 2020, 172 (2020). https://doi.org/10.1186/s13638-020-01782-6
Keywords
 Human-centered networked systems
 Social relations
 Network-structureless
 Textual information
 Generative adversarial networks