Paper/Multimodal Learning (4)

[Graph&Text] ConGraT: Self-Supervised Contrastive Pretraining for Joint Graph and Text Embeddings
https://arxiv.org/abs/2305.14321
We propose ConGraT (Contrastive Graph-Text pretraining), a general, self-supervised method for jointly learning separate representations of texts and nodes in a parent (or "supervening") graph, where each text is associated with one of the nodes.

[MMML] Multimodal Deep Learning (ICML 2011)
https://people.csail.mit.edu/khosla/papers/icml2011_ngiam.pdf
0. Abstract — This paper presents a series of tasks for multimodal learning: how to train cross-modality feature learning, and how to learn a shared representation between modalities.
1. Introduction — Multimodal machine learning began with speech recognition using audio-visual information (the McGurk effect). This paper focuses on modeling "mid-level" relationships; task: audio-visual ..

[ML] RBM & sparse RBM
This is a password-protected post.

[MMML] Multimodal Machine Learning Introduction (CMU LTI-11777 Lecture 1.1)
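The ConGraT excerpt above describes contrastive pretraining that aligns paired text and node embeddings. As a minimal illustrative sketch (not the paper's exact objective), a symmetric InfoNCE-style loss over a batch of paired embeddings might look like this; the function name, temperature value, and NumPy implementation are assumptions for illustration:

```python
import numpy as np

def congrat_style_loss(text_emb, node_emb, temperature=0.07):
    """Symmetric InfoNCE-style contrastive loss between paired text and
    node embeddings. Illustrative sketch only, not ConGraT's exact loss."""
    # L2-normalize so dot products become cosine similarities
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    n = node_emb / np.linalg.norm(node_emb, axis=1, keepdims=True)
    logits = t @ n.T / temperature      # (batch, batch) similarity matrix
    idx = np.arange(len(logits))        # i-th text is paired with i-th node

    def xent(lg):
        # cross-entropy of the true (diagonal) pair against all candidates
        lg = lg - lg.max(axis=1, keepdims=True)   # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the text->node and node->text directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
texts = rng.normal(size=(8, 16))
# slightly perturbed copies play the role of matched node embeddings
loss = congrat_style_loss(texts + 0.01 * rng.normal(size=(8, 16)), texts)
```

Aligned pairs drive the loss toward zero, while randomly paired embeddings keep it near log(batch size), which is what makes the objective a useful pretraining signal.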