This book presents an in-depth exploration of multimodal learning toward recommendation, along with a comprehensive survey of the most important research topics and state-of-the-art methods in this area.
First, it presents a semantic-guided feature distillation method which employs a teacher-student framework to robustly extract effective recommendation-oriented features from generic multimodal features. Next, it introduces a novel multimodal attentive metric learning method to model users' diverse preferences for various items. Then it proposes a disentangled multimodal representation learning recommendation model, which can capture users' fine-grained attention to different modalities on each factor in user preference modeling. Furthermore, a meta-learning-based multimodal fusion framework is developed to model the various relationships among multimodal information. Building on the success of disentangled representation learning, it further proposes an attribute-driven disentangled representation learning method, which uses attributes to guide the disentanglement process in order to improve the interpretability and controllability of conventional recommendation methods. Finally, the book concludes with future research directions in multimodal learning toward recommendation.
The book is suitable for graduate students and researchers who are interested in multimodal learning and recommender systems. The multimodal learning methods presented are also applicable to other retrieval- or ranking-related research areas, such as image retrieval, moment localization, and visual question answering.
The synopsis may refer to a different edition of this title.
Fan Liu is a Research Fellow with the School of Computing, National University of Singapore (NUS). His research interests lie primarily in multimedia computing and information retrieval. His work has been published in top venues, including ACM SIGIR, MM, WWW, TKDE, TOIS, TMM, and TCSVT. He is an area chair of ACM MM and a senior PC member of CIKM.
Zhenyang Li is a postdoctoral researcher with the Hong Kong Generative AI Research and Development Center Limited. His research interests are primarily in recommendation and visual question answering. His work has been published in top venues, including ACM MM, TIP, and TMM.
Liqiang Nie is Professor at and Dean of the School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen). His research interests are primarily in multimedia computing and information retrieval. He has co-authored more than 200 articles and four books. He is a regular area chair of ACM MM, NeurIPS, IJCAI, and AAAI, and a member of the ICME steering committee. He has received many awards, including the ACM MM and SIGIR Best Paper Honorable Mention in 2019, SIGMM Rising Star in 2020, TR35 China in 2020, DAMO Academy Young Fellow in 2020, and the SIGIR Best Student Paper in 2021.
"About this title" may refer to a different edition of this title.
Seller: Best Price, Torrance, CA, USA
Condition: New. Super fast shipping. Seller inventory number 9783031831874
Seller: GreatBookPrices, Columbia, MD, USA
Condition: New. Seller inventory number 49563644-n
Seller: Grand Eagle Retail, Bensenville, IL, USA
Paperback. Condition: New. Shipping may be from multiple locations in the US or from the UK, depending on stock availability. Seller inventory number 9783031831874
Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Paperback. Condition: New. This item is printed on demand; it takes 3-4 days longer. New stock. 152 pp. English. Seller inventory number 9783031831874
Quantity: 2 available
Seller: GreatBookPrices, Columbia, MD, USA
Condition: As New. Unread book in perfect condition. Seller inventory number 49563644
Quantity: 1 available
Seller: moluna, Greven, Germany
Condition: New. This is a print-on-demand item and will be printed for you after your order. Seller inventory number 2054249771
Quantity: More than 20 available
Seller: Books Puddle, New York, NY, USA
Condition: New. Seller inventory number 26403601002
Seller: Majestic Books, Hounslow, United Kingdom
Condition: New. Print on demand. Seller inventory number 410634677
Quantity: 4 available
Seller: Biblios, Frankfurt am Main, Hesse, Germany
Condition: New. Print on demand. Seller inventory number 18403600992
Quantity: 4 available
Seller: Revaluation Books, Exeter, United Kingdom
Paperback. Condition: Brand New. 169 pages. 9.25 x 6.10 x 9.25 inches. In stock. Seller inventory number x-303183187X
Quantity: 1 available