Deep learning has significantly reshaped a variety of technologies, such as image processing, natural language processing, and audio processing. The excellent generalizability of deep learning hangs like a "cloud" over conventional complexity-based learning theory: the over-parameterization of deep learning renders almost all existing tools vacuous. This tension considerably undermines confidence in deploying deep learning in safety-critical areas, including autonomous vehicles and medical diagnosis, where small algorithmic mistakes can lead to fatal disasters. This book seeks to explain this excellent generalizability, covering generalization analysis via size-independent complexity measures, the role of optimization in understanding generalizability, and the relationship between generalizability and ethical/security issues.
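To illustrate why conventional tools break down, consider a textbook complexity-based generalization bound; this specific form is standard Rademacher-complexity material, not taken from the book itself:

```latex
% With probability at least 1 - \delta over an i.i.d. sample of size n,
% for every hypothesis h in a class \mathcal{H} with loss bounded in [0, 1]:
R(h) \;\le\; \widehat{R}_n(h) \;+\; 2\,\mathfrak{R}_n(\mathcal{H}) \;+\; \sqrt{\frac{\log(1/\delta)}{2n}}
% For over-parameterized networks, the complexity term \mathfrak{R}_n(\mathcal{H})
% can be so large that the right-hand side exceeds 1, i.e., the bound is vacuous.
```

Size-independent analyses of the kind the book surveys aim to replace the worst-case complexity term with a quantity tied to the hypothesis actually learned (for instance, norm-based measures) rather than to the parameter count.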
Efforts to understand this excellent generalizability follow two major paths: (1) developing size-independent complexity measures, which evaluate the "effective" complexity of the hypotheses that can actually be learned, rather than that of the whole hypothesis space; and (2) modelling the hypothesis learned by stochastic gradient methods, the dominant optimizers in deep learning, via stochastic differential equations and the geometry of the associated loss functions. Related works discover that over-parameterization surprisingly brings many good properties to the loss functions. Rising concerns about deep learning centre on ethical and security issues, including privacy preservation and adversarial robustness. Related works also reveal an interplay between these issues and generalizability: good generalizability usually implies a good privacy-preserving ability, while more robust algorithms might generalize worse.
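To make the second path concrete, here is a minimal, hypothetical sketch (not from the book) of the common view of SGD as a discretized stochastic differential equation: each update is a gradient drift plus scaled Gaussian noise standing in for minibatch sampling. All names and the noise scale `sigma` are illustrative assumptions.

```python
# Illustrative sketch: SGD on a quadratic loss viewed as an
# Euler-Maruyama discretization of d(theta) = -grad L(theta) dt + sigma dW.
import numpy as np

rng = np.random.default_rng(0)

def grad(theta):
    # Gradient of the simple quadratic loss L(theta) = 0.5 * ||theta||^2.
    return theta

def sgd_as_sde(theta, lr=0.1, sigma=0.05, steps=1000):
    # Each step: deterministic drift (-lr * gradient) plus Gaussian noise
    # scaled by sqrt(lr), mimicking minibatch gradient noise.
    for _ in range(steps):
        noise = rng.standard_normal(theta.shape)
        theta = theta - lr * grad(theta) + sigma * np.sqrt(lr) * noise
    return theta

print(sgd_as_sde(np.ones(3)))  # iterates hover near the minimum at 0
```

Under this lens, the noise term interacts with the geometry of the loss landscape (for example, by biasing iterates toward flatter regions), which is one route by which optimization enters the generalization story.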
We hope readers will gain a big-picture view of the current knowledge in deep learning theory, understand how deep learning theory can guide the design of new algorithms, and identify future research directions. Readers need knowledge of calculus, linear algebra, probability, statistics, and statistical learning theory.