Academic Network Representation via Prediction-Sampling Incorporated Tensor Factorization
By: Chunyang Zhang, Xin Liao, Hao Wu
Potential Business Impact:
Finds hidden science connections to predict future discoveries.
Accurate representation of an academic network is of great significance to academic relationship mining tasks such as predicting scientific impact. A Latent Factorization of Tensors (LFT) model is one of the most effective models for learning the representation of a target network. However, an academic network is often High-Dimensional and Incomplete (HDI) because the relationships among its numerous entities cannot be fully explored, making it difficult for an LFT model to learn an accurate representation of the academic network. To address this issue, this paper proposes a Prediction-sampling-based Latent Factorization of Tensors (PLFT) model with two ideas: 1) constructing a cascade LFT architecture that enhances representation learning ability by capturing the academic network's hierarchical features, and 2) introducing a nonlinear activation-incorporated prediction-sampling strategy that learns the network representation more accurately by generating new academic network data layer by layer. Experimental results on three real-world academic network datasets show that the PLFT model outperforms existing models when predicting the unexplored relationships among network entities.
Similar Papers
Water Quality Data Imputation via A Fast Latent Factorization of Tensors with PID-based Optimizer
Machine Learning (CS)
Fixes bad water data for better decisions.
Tensor Network Based Feature Learning Model
Machine Learning (CS)
Learns computer patterns faster and better.
Dynamic QoS Prediction via a Non-Negative Tensor Snowflake Factorization
Machine Learning (CS)
Predicts what services people will like.