This book discusses state-of-the-art stochastic optimization algorithms for distributed machine learning and analyzes their convergence speed. The book first introduces stochastic gradient descent (SGD) and its distributed version, synchronous SGD, where the task of computing gradients is divided across several worker nodes. The author discusses several algorithms that improve the scalability and communication efficiency of synchronous SGD, such as asynchronous SGD, local-update SGD, quantized and sparsified SGD, and decentralized SGD. For each of these algorithms, the book analyzes its error-versus-iterations convergence and its runtime per iteration. The author shows that each strategy for reducing communication or synchronization delays encounters a fundamental trade-off between error and runtime.
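To make the data-parallel structure of synchronous SGD concrete, here is a minimal NumPy sketch for a least-squares objective (all names and parameters are illustrative assumptions for this sketch, not the book's notation): each simulated worker computes a stochastic gradient on its own mini-batch, and the averaged gradient drives a single update.

import numpy as np

def synchronous_sgd(X, y, num_workers=4, batch_size=32,
                    lr=0.1, iterations=200, seed=0):
    # Simulated data-parallel synchronous SGD on the least-squares
    # objective f(w) = ||X w - y||^2 / (2 n); illustrative sketch only.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iterations):
        grads = []
        # Each worker computes a stochastic gradient on its own mini-batch.
        for _ in range(num_workers):
            idx = rng.choice(n, size=batch_size, replace=False)
            residual = X[idx] @ w - y[idx]
            grads.append(X[idx].T @ residual / batch_size)
        # Synchronization barrier: the server averages all worker
        # gradients before taking one SGD step.
        w -= lr * np.mean(grads, axis=0)
    return w

# Usage: recover a planted linear model from noisy measurements.
rng = np.random.default_rng(1)
w_true = rng.normal(size=10)
X = rng.normal(size=(1000, 10))
y = X @ w_true + 0.01 * rng.normal(size=1000)
print(np.linalg.norm(synchronous_sgd(X, y) - w_true))  # should be small

The averaging step is the synchronization barrier whose delay the book's variants, such as asynchronous and local-update SGD, aim to reduce.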
The synopsis may refer to a different edition of this title.
Gauri Joshi, Ph.D., is an Associate Professor in the ECE department at Carnegie Mellon University. Dr. Joshi received her Ph.D. from MIT EECS. Her current research focuses on designing algorithms for federated learning, distributed optimization, and parallel computing. Her awards and honors include being named one of MIT Technology Review's 35 Innovators Under 35 (2022), the NSF CAREER Award (2021), the ACM SIGMETRICS Best Paper Award (2020), the Best Thesis Prize in Computer Science at MIT (2012), and the Institute Gold Medal of IIT Bombay (2010).
"About this title" may refer to a different edition of this title.
Seller: GreatBookPrices, Columbia, MD, USA
Condition: New. Seller inventory number 46847697-n
Quantity: 15 available
Seller: Grand Eagle Retail, Bensenville, IL, USA
Paperback. Condition: New. Shipping may be from multiple locations in the US or from the UK, depending on stock availability. Seller inventory number 9783031190698
Seller: California Books, Miami, FL, USA
Condition: New. Seller inventory number I-9783031190698
Quantity: More than 20 available
Seller: GreatBookPrices, Columbia, MD, USA
Condition: As New. Unread book in perfect condition. Seller inventory number 46847697
Quantity: 15 available
Seller: Ria Christie Collections, Uxbridge, United Kingdom
Condition: New. Seller inventory number ria9783031190698_new
Quantity: More than 20 available
Seller: BuchWeltWeit Ludwig Meier e.K., Bergisch Gladbach, Germany
Paperback. Condition: New. This item is printed on demand; it takes 3-4 days longer. New stock. 144 pp. English. Seller inventory number 9783031190698
Quantity: 2 available
Seller: Majestic Books, Hounslow, United Kingdom
Condition: New. Print on demand. pp. 144. Seller inventory number 397858253
Quantity: 4 available
Seller: Revaluation Books, Exeter, United Kingdom
Paperback. Condition: Brand New. 140 pages. 9.45 x 6.61 x 0.33 inches. In stock. Seller inventory number x-3031190696
Quantity: 2 available
Seller: Biblios, Frankfurt am Main, Hesse, Germany
Condition: New. Print on demand. pp. 144. Seller inventory number 18398551576
Quantity: 4 available
Seller: moluna, Greven, Germany
Softcover. Condition: New. This item is print on demand and will be printed for you after your order. Seller inventory number 1215161127
Quantity: More than 20 available