Memory and the Computational Brain offers a provocative argument that goes to the heart of neuroscience, proposing that the field can and should benefit from the recent advances of cognitive science and the development of information theory over the course of the last several decades.
C. R. Gallistel is Co-Director of the Rutgers Center for Cognitive Science. He is one of the foremost psychologists working on the foundations of cognitive neuroscience. His publications include The Symbolic Foundations of Conditioned Behavior (2002) and The Organization of Learning (1990).
Adam Philip King is Assistant Professor of Mathematics at Fairfield University.
Memory and the Computational Brain spans the fields of cognitive science, linguistics, psychology, neuroscience, and education, suggesting new perspectives on the way we think about learning mechanisms in the brain.
Gallistel and King propose that the architecture of the brain is structured precisely for learning and for memory, and that the concept of an addressable read/write memory mechanism should be integrated into the foundations of neuroscience. They argue that neuroscience can and should benefit from the recent advances of cognitive science and from the development of information theory over recent decades. Based on three lectures given by Randy Gallistel in the prestigious Blackwell-Maryland Lectures in Language and Cognition, the text has been significantly revised and expanded with numerous interdisciplinary examples and models, and it reflects recent research, making it essential reading for both students and researchers in the field.
Most cognitive scientists think about the brain and behavior within an information-processing framework: Stimuli acting on sensory receptors provide information about the state of the world. The sensory receptors transduce the stimuli into neural signals, streams of action potentials (aka spikes). The spike trains transmit the information contained in the stimuli from the receptors to the brain, which processes the sensory signals in order to extract from them the information that they convey. The extracted information may be used immediately to inform ongoing behavior, or it may be kept in memory to be used in shaping behavior at some later time. Cognitive scientists seek to understand the stages of processing by which information is extracted, the representations that result, the motor planning processes through which the information enters into the direction of behavior, the memory processes that organize and preserve the information, and the retrieval processes that find the information in memory when it is needed. Cognitive neuroscientists want to understand where these different aspects of information processing occur in the brain and the neurobiological mechanisms by which they are physically implemented.
Historically, the information-processing framework in cognitive science is closely linked to the development of information technology, which is used in electronic computers and computer software to convert, store, protect, process, transmit, and retrieve information. But what exactly is this "information" that is so central to both cognitive science and computer science? Does it have a rigorous meaning? In fact, it does. Moreover, the conceptual system that has grown up around this rigorous meaning - information theory - is central to many aspects of modern science and engineering, including some aspects of cognitive neuroscience. For example, it is central to our emerging understanding of how neural signals transmit information about the ever-changing state of the world from sensory receptors to the brain (Rieke, Warland, de Ruyter van Steveninck, & Bialek, 1997). For us, it is an essential foundation for our central claim, which is that the function of the neurobiological memory mechanism is to carry information forward in time in a computationally accessible form.
Shannon's Theory of Communication
The modern quantitative understanding of information rests on the work of Claude Shannon. A telecommunications engineer at Bell Laboratories, he laid the mathematical foundations of information theory in a famous paper published in 1948, at the dawn of the computer age (Shannon, 1948). Shannon's concern was understanding communication (the transmission of information), which he schematized as illustrated in Figure 1.1.
The schematic begins with an information source. The source might be a person who hands in a written message at a telegraph office. Or, it might be an orchestra playing a Beethoven symphony. In order for the message to be communicated to you, you must receive a signal that allows you to reconstitute the message. In this example, you are the destination of the message. Shannon's analysis ends when the destination has received the signal and reconstituted the message that was present at the source.
The transmitter is the system that converts the messages into transmitted signals, that is, into fluctuations of a physical quantity that travels from a source location to a receiving location and that can be detected at the receiving location. Encoding is the process by which the messages are converted into transmitted signals. The rules governing or specifying this conversion are the code. The mechanism in the transmitter that implements the conversion is the encoder.
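Shannon's schematic can be read as a processing pipeline. The sketch below is our illustration, not the book's; the function names and the 8-bit character code are invented for the example.

```python
import random

# Shannon's schematic as a pipeline: source -> transmitter (encoder)
# -> channel (+ noise) -> receiver (decoder) -> destination.

def encode(message: str) -> list[int]:
    """Transmitter: convert the message into a signal (here, a bit stream)."""
    return [int(b) for ch in message for b in format(ord(ch), "08b")]

def channel(signal: list[int], flip_prob: float = 0.0) -> list[int]:
    """Channel: carry the signal; noise sources may flip bits in transit."""
    return [bit ^ 1 if random.random() < flip_prob else bit for bit in signal]

def decode(signal: list[int]) -> str:
    """Receiver: reconstitute the message from the received signal."""
    bytes_ = (signal[i:i + 8] for i in range(0, len(signal), 8))
    return "".join(chr(int("".join(map(str, byte)), 2)) for byte in bytes_)

# With a noiseless channel, the destination recovers the source message.
assert decode(channel(encode("Arriving tomorrow, 10 am."))) == "Arriving tomorrow, 10 am."
```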
Following Shannon, we will continue to use two illustrative examples, a telegraphic communication and a symphonic broadcast. In the telegraphic example, the source messages are written English phrases handed to the telegrapher, for example, "Arriving tomorrow, 10 am." In the symphonic example, the source messages are sound waves arriving at a microphone. Any one particular short message written in English and handed to a telegraph operator can be thought of as coming from a finite set of possible messages. If we stipulate a maximum length of, say, 1,000 characters, with each character being one of 45 or so different characters (26 letters, 10 digits, and punctuation marks), then there is a very large but finite number of possible messages. Moreover, only a very small fraction of these messages are intelligible English, so the size of the set of possible messages - defined as intelligible English messages of 1,000 characters or less - is further reduced. It is less clear that the sound waves generated by an orchestra playing Beethoven's Fifth can be conceived of as coming from a finite set of messages. That is why Shannon chose this as his second example. It serves to illustrate the generality of his theory.
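To put a number on the telegraphic case (our arithmetic, not the excerpt's): with exactly 1,000 characters drawn from 45 possible symbols, the unconstrained message set already has 45^1000 members.

```python
import math

# Unconstrained set: 45 possible symbols at each of 1,000 positions.
n_messages = 45 ** 1000

# Were all messages equally likely, selecting one would convey
# log2(45**1000) = 1000 * log2(45) bits, about 5.49 bits per character.
total_bits = 1000 * math.log2(45)
print(f"{total_bits:.0f} bits total, {math.log2(45):.2f} bits per character")

# The restriction to intelligible English shrinks the set drastically;
# Shannon's own later estimates put English nearer 1 bit per character.
```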
In the telegraphy example, the telegraph system is the transmitter of the messages. The signals are the short current pulses in the telegraph wire, which travel from the sending key to the sounder at the receiving end. The encoder is the telegraph operator. The code generally used is the Morse code. This code uses pulses of two different durations to encode the characters - a short mark (dot), and a long mark (dash). It also uses four different inter-pulse intervals for separations - an intra-character gap (between the dots and dashes within characters), a short gap (between the letters), a medium gap (between words), and a long gap (between sentences).
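A few lines of code make the Morse scheme concrete. This is our toy fragment of the International Morse table, not anything from the text; a space stands in for the short gap between letters and ' / ' for the medium gap between words.

```python
# Partial Morse table, for illustration only.
MORSE = {
    "A": ".-", "G": "--.", "I": "..", "M": "--", "N": "-.", "O": "---",
    "R": ".-.", "T": "-", "V": "...-", "W": ".--", "0": "-----", "1": ".----",
}

def encode_morse(text: str) -> str:
    """Encode letters as dot/dash marks; gaps of different lengths
    separate marks, letters, and words."""
    return " / ".join(
        " ".join(MORSE[ch] for ch in word if ch in MORSE)
        for word in text.upper().split()
    )

print(encode_morse("Arriving tomorrow"))
# .- .-. .-. .. ...- .. -. --. / - --- -- --- .-. .-. --- .--
```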
In the orchestral example, the broadcast system transmitting radio signals from the microphone to your radio is the transmitter. The encoder is the electronic device that converts the sound waves into electromagnetic signals. The type of code is likely to be one of three different codes that have been used in the history of radio (see Figure 1.2), all of which are in current use. All of them vary a parameter of a high-frequency sinusoidal carrier signal. The earliest code was the AM (amplitude modulated) code. In this code, the encoder modulates the amplitude of the carrier signal so that this amplitude varies in time in a way that closely follows the variation in time of the sound pressure at the microphone's membrane.
When the FM (frequency modulated) code is used, the encoder modulates the frequency of the carrier signal within a limited range. When the digital code is used, as it is in satellite radio, parameters of the carrier frequency are modulated so as to implement a binary code, a code in which there are only two characters, customarily called the '0' and the '1' character. In this system, time is divided into extremely short intervals. During any one interval, the carrier signal is either low ('0') or high ('1'). The relation between the sound wave arriving at the microphone with its associated encoding electronics and the transmitted binary signal is not easily described, because the encoding system is a sophisticated one that makes use of what we have learned about the statistics of broadcast messages to create efficient codes. The development of these codes rests on the foundations laid by Shannon.
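The three coding schemes can be caricatured in a few lines of code. The sketch below is ours; the carrier frequency, sample rate, and modulation depths are arbitrary, and the "digital" line is a crude one-bit quantizer rather than a real broadcast code.

```python
import math

def message(t: float) -> float:
    """Stand-in for the sound pressure at the microphone: a 5 Hz wave."""
    return math.sin(2 * math.pi * 5 * t)

fc, fs = 200.0, 10_000          # carrier frequency (Hz), sample rate (Hz)
dt = 1.0 / fs
am, fm, phase = [], [], 0.0
for n in range(fs):             # one second of signal
    t = n * dt
    m = message(t)
    # AM: the message modulates the carrier's amplitude.
    am.append((1.0 + 0.5 * m) * math.sin(2 * math.pi * fc * t))
    # FM: the message shifts the carrier's instantaneous frequency.
    phase += 2 * math.pi * (fc + 50.0 * m) * dt
    fm.append(math.sin(phase))

# Digital (caricature): quantize the message to a bit per interval and
# key the carrier high ('1') or low ('0') accordingly.
bits = [1 if message(k / 100.0) > 0 else 0 for k in range(100)]
```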
In the history of radio broadcasting, we see an interesting evolution (Figure 1.2): We see first (historically) in Figure 1.2a a code in which there is a transparent (easily comprehended) relation between the message and the signal that transmits it (AM). The code is transparent because variation in the amplitude of the message is converted into variation in the amplitude of the carrier signal that transmits the message. This code is, however, inefficient and highly vulnerable to noise. It is low tech. In Figure 1.2b, we see a code in which the relation is somewhat less transparent, because variation in the amplitude of the message is converted into variation in the frequency of the carrier signal that transmits it (FM). This code is no more efficient than the first code, but it is less vulnerable to noise, because the effects of extraneous noise tend to fall mostly in frequency bands outside a given FM band. Finally, in Figure 1.2c we see a high-tech code in which the relation between the message and the signal that transmits it is opaque. The encoding makes extensive use of advanced statistics and mathematics. The code is, however, both efficient and remarkably invulnerable to noise. That's why satellite broadcasts sound better than FM broadcasts, which sound better than AM broadcasts. The greater efficiency of the digital code accounts for the ability of digital radio to transmit more channels within a given bandwidth.
The evolution of encoding in the history of broadcasting may contain an unpalatable lesson for those interested in understanding communication within the brain by means of the action potentials that carry information from sources to destinations within the brain. One of neurobiology's uncomfortable secrets - the sort of thing neurobiologists are not keen to talk about except among themselves - is that we do not understand the code that is being used in these communications. Most neurobiologists assume either explicitly or tacitly that it is an unsophisticated and transparent code. They assume, for example, that when the relevant variation at the source is in the amplitude or intensity of some stimulus, then the information-carrying variation in the transmitted signal is in the firing rate (the number of action potentials per unit of time), a so-called rate code. The transparency of rate codes augurs well for our eventually understanding the communication of information within the brain, but rate codes are grossly inefficient. With more sophisticated but less transparent codes, the same physical resources (the transmission of the same number of spikes in a given unit of time) can convey orders of magnitude more information. State-of-the-art analysis of information transmission in neural signaling in simple systems where we have reason to believe that we know both the set of messages being transmitted and the amount of information available in that set (its entropy - see below) implies that the code is a sophisticated and efficient one, one that takes account of the relative frequency of different messages (source statistics), just as the code used in digital broadcasting does (Rieke et al., 1997).
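The efficiency claim has a simple quantitative core, which we can sketch (with invented probabilities) as follows: a code matched to the source statistics approaches the entropy of the message set, while a fixed, statistics-blind code cannot.

```python
import math

# Four messages with unequal probabilities (invented for illustration).
p = {"A": 0.5, "B": 0.25, "C": 0.125, "D": 0.125}

# Entropy: the least average number of bits any code can use per message.
H = -sum(q * math.log2(q) for q in p.values())      # 1.75 bits/message

# A fixed-length code ignores the statistics and spends log2(4) = 2 bits.
fixed = math.log2(len(p))

# A prefix code matched to p (A=0, B=10, C=110, D=111) averages exactly H:
matched = 0.5 * 1 + 0.25 * 2 + 0.125 * 3 + 0.125 * 3  # = 1.75
print(H, fixed, matched)
```

The saving in this toy case is modest, but it grows with the skew of the source statistics; in neural signaling, exploiting precise spike timing as well as spike count is, on the Rieke et al. analysis, what buys the much larger gains the text mentions.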
A signal must travel by way of some physical medium, which Shannon refers to as the signal-carrying channel, or just channel for short. In the case of the telegraph, the signal is in the changing flow of electrons and the channel is a wire. In the case of the symphony, the signal is the variation in the parameters of a carrier signal. The channel is that carrier signal. In the case of the nervous system, the axons along which nerve impulses are conducted are the channels.
In the real world, there are factors other than the message that can also produce these same fluctuations in the signal-carrying channel. Shannon called these noise sources. The signal that arrives at the receiver is thus a mixture of the fluctuations deriving from the encoding of the message and the fluctuations deriving from noise sources. The fluctuations due to noise make the receiver's job more difficult, as the received code can become corrupted. The receiver must reconstitute the message from the source, that is, change the signal back into that message, and if this signal has been altered, it may be hard to decode. In addition, the transmitter or the receiver may be faulty and introduce noise during the encoding/decoding process.
Although Shannon diagrammatically combined the sources of noise and showed one place where noise can be introduced, in actuality, noise can enter almost anywhere in the communication process. For example, in the case of telegraphy, the sending operators may not code correctly (use a wrong sequence of dots and dashes) or even more subtly, they might make silences of questionable (not clearly discernible) length. The telegraph key can also malfunction, and not always produce current when it should, possibly turning a dash into some dots. Noise can also be introduced into the signal directly - in this case possibly through interference due to other signals traveling along wires that are in close proximity to the signal-carrying wire. Additionally, the receiving operator may have a faulty sounder or may simply decode incorrectly.
Shannon was, of course, aware that the messages being transmitted often had meanings. Certainly this is the case for the telegraphy example. Arguably, it is the case for the orchestra example. However, one of his profound insights was that from the standpoint of the communications engineer, the meaning was irrelevant. What was essential about a message was not its meaning but rather that it be selected from a set of possible messages. Shannon realized that for a communication system to work efficiently - for it to transmit the maximum amount of information in the minimum amount of time - both the transmitter and the receiver had to know what the set of possible messages was and the relative likelihood of the different messages within the set of possible messages. This insight was an essential part of his formula for quantifying the information transmitted across a signal-carrying channel. We will see later (Chapter 9) that Shannon's set of possible messages can be identified with the values of an experiential variable. Different variables denote different sets of possible messages. Whenever we learn from experience the value of an empirical variable (for example, how long it takes to boil an egg, or how far it is from our home to our office), the range of a priori possible values for that variable is narrowed by our experience. The greater the range of a priori possible values for the variable (that is, the larger the set of possible messages) and the narrower the range after we have had an informative experience (that is, the more precisely we then know the value), the more informative the experience. That is the essence of Shannon's definition of information.
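In modern notation (our gloss on the excerpt's verbal statement), the two quantities at work here are the size of the prior set of possibilities and the size of the set that remains after the experience:

```latex
% Equally likely case: narrowing N prior possibilities to n conveys
I \;=\; \log_2 N - \log_2 n \;=\; \log_2 \frac{N}{n} \ \text{bits}.

% General case: messages weighted by their relative likelihood p_i
H \;=\; -\sum_i p_i \log_2 p_i .
```

The first expression captures the narrowing of an empirical variable's range by experience; the second is Shannon's formula, in which the "relative likelihood of the different messages" enters explicitly.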
The thinking that led to Shannon's formula for quantifying information may be illustrated by reference to the communication situation that figures in Longfellow's poem about the midnight ride of Paul Revere. The poem describes a scene from the American Revolution in which Paul Revere rode through New England, warning the rebel irregulars that the British troops were coming. The critical stanza for our purposes is the second:
He said to his friend, "If the British march
By land or sea from the town to-night,
Hang a lantern aloft in the belfry arch
Of the North Church tower as a signal light, -
One if by land, and two if by sea;
And I on the opposite shore will be,
Ready to ride and spread the alarm
Through every Middlesex village and farm,
For the country folk to be up and to arm."
The two possible messages in this communication system were "by land" and "by sea." The signal was the lantern light, which traveled from the church tower to the receiver, Paul Revere, waiting on the opposite shore. Critically, Paul knew the possible messages and he knew the code - the relation between the possible messages and the possible signals. If he had not known either one of these, the communication would not have worked. Suppose he had no idea of the possible routes by which the British might come. Then, he could not have created a set of possible messages. Suppose that, while rowing across the river, he forgot whether it was one if by land and two if by sea or two if by land and one if by sea. In either case, the possibility of communication disappears. No set of possible messages, no communication. No agreement about the code between sender and receiver, no communication.
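Worked out in Shannon's terms (our arithmetic, not the excerpt's): with two possible messages that Revere presumably treated as equally likely, the lantern signal resolves exactly one bit.

```latex
H \;=\; -\tfrac{1}{2}\log_2\tfrac{1}{2} \;-\; \tfrac{1}{2}\log_2\tfrac{1}{2}
  \;=\; \log_2 2 \;=\; 1\ \text{bit}.
```

Had he judged one route far more probable in advance, say p = 0.9, the expected information in the signal would fall to about 0.47 bits, a first hint of why the relative likelihoods of the messages matter to the analysis.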
(Continues...)
Excerpted from Memory and the Computational Brain by C. R. Gallistel and Adam Philip King. Copyright © 2009 by C. R. Gallistel and Adam Philip King. Excerpted by permission.
All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.