This monograph addresses the intertwined mathematical, neurological, and cognitive mysteries of the brain. It first evaluates the mathematical performance limits of simple spiking neuron models that both learn and later recognize complex spike excitation patterns in less than one second, without using training signals unique to each pattern. Simulations validate these models, while theoretical expressions validate their simpler performance parameters. These single-neuron models are then qualitatively related to the training and performance of multi-layer neural networks that may have significant feedback. The advantages of feedback are then qualitatively explained and related to a model for cognition, which is compared to observed mild hallucinations that arguably include accelerated, time-reversed video memories.

The learning mechanism for these binary threshold-firing “cognon” neurons is spike-timing-dependent plasticity (STDP), which depends only on whether the spike excitation pattern presented to a given single “learning-ready” neuron within a period of milliseconds causes that neuron to fire, or “spike”. The “false-alarm” probability that a trained neuron will fire for a random unlearned pattern can be made almost arbitrarily low by reducing the number of patterns learned by each neuron. Models that do and do not use spike timing within patterns are evaluated.

A Shannon mutual information metric (recoverable bits/neuron) is derived for binary neuron models characterized only by their probability of learning a random input excitation pattern presented during learning readiness, and by their false-alarm probability for random unlearned patterns (see the illustrative sketches below). Based on simulations, the upper bound on recoverable information is ~0.1 bits per neuron for optimized neuron parameters and training. This information metric assumes that: 1) each neural spike indicates only that the responsible input excitation pattern (a pattern lasts less than the time between consecutive patterns, say 30 milliseconds) had probably been seen earlier while that neuron was “learning ready”, and 2) information is stored in the binary synapse strengths. This focus on recallable learned information differs from most prior metrics, such as pattern-classification performance and metrics relying on pattern-specific training signals other than the normal input spikes. The metric also shows that neuron models can recall useful Shannon information only if their probability of firing randomly is lowered between learning and recall.

Also discussed are:
1) how rich feedback might permit improved noise immunity, learning and recognition of pattern sequences, compression of data, associative or content-addressable memory, and development of communications links through white matter;
2) extensions of cognon models that use spike timing, dendrite compartments, and new learning mechanisms in addition to spike-timing-dependent plasticity (STDP);
3) simulations showing how simple optimized neuron models can have optimum numbers of binary synapses in the range of 200 to 10,000, depending on neural parameters; and
4) simulation results for parameters such as average bits/spike, bits/neuron/second, maximum number of learnable patterns, optimum ratios between the strengths of weak and strong synapses, and probabilities of false alarms.
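The recoverable-information metric above depends on only two measured probabilities: the probability that a random pattern presented during learning readiness is learned, and the false-alarm probability for unlearned patterns. As a rough illustration of how such rates translate into Shannon information (the monograph derives its own expression, which this sketch does not reproduce), consider the mutual information between the binary events L (the presented pattern was learned) and F (the neuron fires), with hit probability p_h = P(F=1 | L=1), false-alarm probability p_F = P(F=1 | L=0), and prior q = P(L=1):

    I(L;F) = H_b(q p_h + (1-q) p_F) - q H_b(p_h) - (1-q) H_b(p_F),
    where H_b(p) = -p \log_2 p - (1-p) \log_2(1-p).

Driving p_F toward zero while keeping p_h high maximizes this quantity, which is one way to see why lowering the probability of random firing between learning and recall matters.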
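A minimal simulation sketch of the cognon mechanism follows, written in Python under assumed parameter values (the synapse count S, input probability P_IN, strength ratio G, and the two thresholds are illustrative choices, not the monograph's): a binary threshold-firing neuron learns whichever random patterns happen to fire it during learning readiness, promotes the contributing synapses to "strong", and then recalls with a raised threshold.

    # Sketch of a binary threshold-firing "cognon" neuron as described above.
    # All parameter values are illustrative assumptions; only the mechanism
    # (fire-to-learn STDP, binary synapses, raised recall threshold) follows the text.
    import random

    S = 400          # binary synapses (the book reports optima of roughly 200-10,000)
    P_IN = 0.1       # assumed probability that a pattern excites a given synapse
    G = 2.0          # assumed strength ratio of strong to weak synapses
    H_LEARN = 46.0   # firing threshold while the neuron is "learning ready"
    H_RECALL = 80.0  # threshold raised between learning and recall

    def random_pattern():
        """One excitation pattern: the synapses spiking within a ~30 ms window."""
        return [i for i in range(S) if random.random() < P_IN]

    class Cognon:
        def __init__(self):
            self.strong = [False] * S        # binary synapse strengths

        def excitation(self, pattern):
            return sum(G if self.strong[i] else 1.0 for i in pattern)

        def learn(self, pattern):
            # STDP-like rule with no pattern-specific training signal: if the
            # pattern happens to fire the learning-ready neuron, the synapses
            # that contributed are promoted to "strong".
            if self.excitation(pattern) >= H_LEARN:
                for i in pattern:
                    self.strong[i] = True
                return True
            return False

        def recall(self, pattern):
            return self.excitation(pattern) >= H_RECALL

    random.seed(1)
    neuron = Cognon()
    # Present a few random patterns during learning readiness; learning only a
    # few patterns keeps the false-alarm probability low, as the text notes.
    learned = [p for p in (random_pattern() for _ in range(12)) if neuron.learn(p)]

    hits = sum(neuron.recall(p) for p in learned)
    trials = 10_000
    false_alarms = sum(neuron.recall(random_pattern()) for _ in range(trials))
    print(f"learned {len(learned)} of 12 presented patterns; recalled {hits}")
    print(f"false-alarm rate on unlearned patterns: {false_alarms / trials:.4f}")

The learning and false-alarm rates this prints are the two probabilities from which a recoverable bits/neuron figure can be computed, and presenting fewer patterns during learning readiness drives the false-alarm rate down, consistent with the description above.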
David Staelin is Professor Emeritus of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology (MIT), where he served actively on the faculty for 46 years and received his SB, SM, and ScD degrees. His teaching has involved signal processing, estimation, and electromagnetics, and his recent research has primarily involved neural signal processing, remote sensing and estimation, image processing, and communications networks. Neural spike processing has been an interest of his for over 50 years and became a major effort in 2007. He was an Assistant Director of MIT Lincoln Laboratory from 1990 to 2001, a member of the (U.S.) President’s Information Technology Advisory Committee from 2003 to 2005, and founding Chairman of PictureTel Corporation, now part of Polycom.

Carl Staelin is a member of the technical staff at Google Israel and an assistant editor of the Journal of Electronic Imaging. Earlier he was Chief Technologist of Hewlett-Packard Labs Israel, working on automatic image analysis and processing, digital commercial print, and enterprise IT management. His research interests include digital storage systems, machine learning, image analysis and processing, document and information management, and computer performance analysis. He received his PhD in Computer Science from Princeton University for work on high-performance file system design. He lives in Haifa, Israel, with his wife and family.