This book presents the state of the art in distributed machine learning algorithms based on gradient optimization methods. In the big data era, large-scale datasets pose enormous challenges for existing machine learning systems. Implementing machine learning algorithms in a distributed environment has therefore become a key technology, and recent research has shown gradient-based iterative optimization to be an effective solution. Focusing on methods that speed up large-scale gradient optimization through both algorithmic optimizations and careful system implementations, the book introduces three essential techniques for designing a gradient optimization algorithm that trains a distributed machine learning model: parallel strategy, data compression, and synchronization protocol.
Written in a tutorial style, it covers a range of topics, from fundamental knowledge to a number of carefully designed algorithms and systems for distributed machine learning. It will appeal to a broad audience in the fields of machine learning, artificial intelligence, big data, and database management.
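To make the three techniques above concrete, here is a minimal, hypothetical sketch, not an algorithm from the book: it simulates data-parallel SGD for linear regression in a single process, where each simulated worker holds a shard of the data (parallel strategy), contributes a top-k-sparsified gradient (data compression), and all gradients are collected before each model update (a synchronous protocol). All names, sizes, and hyperparameters are illustrative.

    import numpy as np

    # Toy setup: linear regression trained with data-parallel SGD.
    rng = np.random.default_rng(0)
    n_workers, n_samples, n_features = 4, 1000, 20
    X = rng.normal(size=(n_samples, n_features))
    true_w = rng.normal(size=n_features)
    y = X @ true_w + 0.01 * rng.normal(size=n_samples)

    # Parallel strategy: partition the rows of (X, y) across workers.
    shards = [(X[i::n_workers], y[i::n_workers]) for i in range(n_workers)]

    def local_gradient(w, X_s, y_s):
        """Least-squares gradient computed on one worker's shard."""
        return 2.0 * X_s.T @ (X_s @ w - y_s) / len(y_s)

    def top_k_sparsify(g, k):
        """Data compression: keep only the k largest-magnitude entries."""
        out = np.zeros_like(g)
        idx = np.argsort(np.abs(g))[-k:]
        out[idx] = g[idx]
        return out

    w = np.zeros(n_features)
    lr, k = 0.05, 5
    for step in range(200):
        # Synchronization protocol: every worker's compressed gradient
        # is gathered before the single model update, so no worker ever
        # applies a stale gradient (bulk-synchronous behavior).
        grads = [top_k_sparsify(local_gradient(w, Xs, ys), k)
                 for Xs, ys in shards]
        w -= lr * np.mean(grads, axis=0)

    print("parameter error:", np.linalg.norm(w - true_w))

The barrier inside the loop corresponds to bulk-synchronous coordination; asynchronous or stale-synchronous protocols relax that barrier for higher throughput, and quantization is a common alternative to top-k sparsification. These are the kinds of design choices the three techniques name.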
Jiawei Jiang obtained his PhD from Peking University in 2018, advised by Prof. Bin Cui. His research interests include distributed machine learning, gradient optimization, and automatic machine learning. He has served as a program committee member or reviewer for various international conferences and journals, including SIGMOD, VLDB, ICDE, KDD, AAAI and TKDE. He was awarded the CCF Outstanding Doctoral Dissertation Award (2019) and the ACM China Doctoral Dissertation Award (2018).
Bin Cui is a Professor in the School of EECS and Director of the Institute of Network Computing and Information Systems at Peking University. His research interests include database system architectures, query and index techniques, and big data management and mining. He has published over 200 refereed papers at international conferences and in journals. Dr. Cui has served on the technical program committees of various international conferences, including SIGMOD, VLDB, ICDE and KDD, and as Vice PC Chair of ICDE 2011, Demo Co-Chair of ICDE 2014, Area Chair of VLDB 2014, and PC Co-Chair of APWeb 2015 and WAIM 2016. He is currently a member of the trustee board of the VLDB Endowment, serves on the editorial boards of the VLDB Journal, Distributed and Parallel Databases, and Information Systems, and was formerly an associate editor of IEEE Transactions on Knowledge and Data Engineering (TKDE, 2009-2013). He received a Microsoft Young Professorship Award (MSRA, 2008), the CCF Young Scientist Award (2009), and the Second Prize of the Natural Science Award of MOE China (2014), and was appointed a Cheung Kong Distinguished Professor by the MOE in 2016.