Uncertainty-Aware Global-View Reconstruction for Multi-View Multi-Label Feature Selection
By: Pingting Hao, Kunpeng Liu, Wanfu Gao
Potential Business Impact:
Helps computers learn better from many different views.
In recent years, multi-view multi-label learning (MVML) has gained popularity because it closely mirrors real-world scenarios. However, selecting informative features that ensure both performance and efficiency remains a significant challenge in MVML. Existing methods often extract information separately from the consistency part and the complementary part, which may introduce noise due to unclear segmentation. In this paper, we propose a unified model built from the perspective of global-view reconstruction. Additionally, while feature selection methods can discern the importance of features, they typically overlook the uncertainty of samples, which is prevalent in realistic scenarios. To address this, we incorporate the perception of sample uncertainty into the reconstruction process to enhance trustworthiness. The global view is thus reconstructed from the graph structure among samples, sample confidence, and the relationships between views, and an accurate mapping is established between the reconstructed view and the label matrix. Experimental results demonstrate the superior performance of our method on multi-view datasets.
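The sketch below is a minimal illustration of the general idea described in the abstract, not the authors' implementation: per-sample confidence and per-view weights are used to assemble a "global view", a k-NN graph over samples supplies a smoothness term, and features are ranked by the row norms of a projection fitted to the label matrix. All names and hyperparameters (view_weights, confidence, alpha, beta) are assumptions introduced here for illustration.

```python
# Minimal sketch (assumed formulation, not the paper's exact model) of
# uncertainty-aware global-view reconstruction for multi-view
# multi-label feature selection.
import numpy as np

def knn_graph(X, k=5):
    """Symmetric k-NN similarity graph over samples with Gaussian weights."""
    d2 = np.square(X[:, None, :] - X[None, :, :]).sum(-1)
    sigma = np.median(d2) + 1e-12
    S = np.exp(-d2 / sigma)
    np.fill_diagonal(S, 0.0)
    # Keep only the k largest similarities per row, then symmetrize.
    idx = np.argsort(-S, axis=1)[:, k:]
    for i, cols in enumerate(idx):
        S[i, cols] = 0.0
    return (S + S.T) / 2.0

def reconstruct_global_view(views, confidence, view_weights):
    """Confidence- and view-weighted concatenation as the global view."""
    blocks = [w * (confidence[:, None] * Xv) for Xv, w in zip(views, view_weights)]
    return np.hstack(blocks)

def select_features(views, Y, confidence, n_select=10,
                    alpha=1.0, beta=0.1, n_iter=200, lr=1e-2):
    """Rank features of the reconstructed global view by the row norms of a
    projection W fitted to the label matrix Y, with graph regularization."""
    view_weights = np.full(len(views), 1.0 / len(views))   # uniform assumption
    G = reconstruct_global_view(views, confidence, view_weights)
    S = knn_graph(G)
    L = np.diag(S.sum(1)) - S                               # graph Laplacian
    n, c = Y.shape
    W = np.zeros((G.shape[1], c))
    for _ in range(n_iter):                                 # plain gradient descent
        R = G @ W - Y
        grad = G.T @ R / n + alpha * G.T @ (L @ (G @ W)) / n + beta * W
        W -= lr * grad
    scores = np.linalg.norm(W, axis=1)                      # feature importance
    return np.argsort(-scores)[:n_select]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X1, X2 = rng.normal(size=(50, 8)), rng.normal(size=(50, 6))   # two views
    Y = (rng.normal(size=(50, 3)) > 0).astype(float)              # label matrix
    conf = rng.uniform(0.5, 1.0, size=50)                         # per-sample confidence
    print(select_features([X1, X2], Y, conf, n_select=5))
```

In this toy setup, lowering a sample's confidence shrinks its rows in the global view, so uncertain samples contribute less to both the graph and the regression fit; the paper's actual objective and optimization may differ.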
Similar Papers
Robust Multi-View Learning via Representation Fusion of Sample-Level Attention and Alignment of Simulated Perturbation
CV and Pattern Recognition
Makes computers learn from messy, different information.
Blending 3D Geometry and Machine Learning for Multi-View Stereopsis
CV and Pattern Recognition
Makes 3D pictures from photos faster.
Structure-Aware Prototype Guided Trusted Multi-View Classification
CV and Pattern Recognition
Helps computers make better decisions from different information.