Artificial neural networks have been recognized as a powerful tool to learn and reproduce systems in various fields of application. Neural networks are inspired by the behavior of the brain and consist of one or several layers of neurons, or computing units, connected by links. Each artificial neuron receives an input value from the input layer or from the neurons in the previous layer. It then computes a scalar output from a linear combination of the received inputs using a given scalar function (the activation function), which is assumed to be the same for all neurons. One of the main properties of neural networks is their ability to learn from data. There are two types of learning: structural and parametric. Structural learning consists of learning the topology of the network, that is, the number of layers, the number of neurons in each layer, and which neurons are connected. This process is done by trial and error until a good fit to the data is obtained. Parametric learning consists of learning the weight values for a given topology of the network. Since the neural functions are given, this learning process is achieved by estimating the connection weights based on the given information. To this end, an error function is minimized using several well-known learning methods, such as the backpropagation algorithm. Unfortunately, for these methods: (a) The function resulting from the learning process has no physical or engineering interpretation. Thus, neural networks are seen as black boxes.
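The description above can be made concrete with a minimal sketch of a single artificial neuron and of parametric learning by gradient descent on a squared-error function. This is an illustrative example only, not code from the book; all names, the sigmoid activation, the learning rate, and the toy AND-like training data are assumptions of the sketch.

public class NeuronSketch {
    // Activation function, assumed here to be the logistic sigmoid.
    static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    // Forward pass: the activation of a linear combination of the inputs plus a bias.
    static double output(double[] w, double bias, double[] x) {
        double z = bias;
        for (int i = 0; i < x.length; i++) {
            z += w[i] * x[i];
        }
        return sigmoid(z);
    }

    public static void main(String[] args) {
        double[][] inputs  = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
        double[]   targets = { 0, 0, 0, 1 };      // toy AND-like pattern (illustrative)
        double[]   w = { 0.1, -0.2 };
        double     bias = 0.0;
        double     rate = 0.5;                    // learning rate (assumed)

        // Parametric learning: adjust the weights so that the squared error
        // between the neuron's outputs and the targets decreases.
        for (int epoch = 0; epoch < 5000; epoch++) {
            for (int n = 0; n < inputs.length; n++) {
                double y    = output(w, bias, inputs[n]);
                double err  = y - targets[n];
                double grad = err * y * (1.0 - y); // derivative of the error w.r.t. the weighted sum
                for (int i = 0; i < w.length; i++) {
                    w[i] -= rate * grad * inputs[n][i];
                }
                bias -= rate * grad;
            }
        }
        for (double[] x : inputs) {
            System.out.printf("f(%.0f, %.0f) = %.3f%n", x[0], x[1], output(w, bias, x));
        }
    }
}

Note that the learned weights in such a sketch are simply numbers that minimize the error; as the paragraph above observes, they carry no physical or engineering interpretation.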
This book introduces 'functional networks', a novel neural-based paradigm, and shows that functional network architectures can be efficiently applied to solve many interesting practical problems. Included are an introduction to neural networks, a description of functional networks, examples of applications, and computer programs in the Mathematica and Java languages implementing the various algorithms and methodologies. Special emphasis is given to applications in several areas, such as:

* Box-Jenkins AR(p), MA(q), ARMA(p,q), and ARIMA(p,d,q) models, with application to real-life economic problems such as the consumer price index, electric power consumption, and international airline passenger data. Random time series and chaotic series are considered in relation to the Henon, Lozi, Holmes, and Burger maps, as well as the problems of noise reduction and information masking.
* Learning differential equations from data and deriving the corresponding equivalent difference and functional equations. Examples of a mass supported by two springs and a viscous damper or dashpot, and of a loaded beam, are used to illustrate the concepts.
* The problem of obtaining the most general family of implicit, explicit, and parametric surfaces as used in Computer Aided Design (CAD).
* Applications of functional networks to obtain general nonlinear regression models, which are compared with standard techniques.

Functional Networks with Applications: A Neural-Based Paradigm will be of interest to individuals who work in computer science, physics, engineering, applied mathematics, statistics, economics, and other fields related to neural networks and data analysis.
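As a small illustration of the chaotic series mentioned in the first application area, the sketch below iterates the Henon map with its classical parameters to produce a chaotic time series of the kind used as training data. This is an assumed, illustrative example, not code from the book; the parameter values a = 1.4, b = 0.3 and the initial point are the commonly cited ones.

public class HenonSketch {
    public static void main(String[] args) {
        // Henon map: x_{n+1} = 1 - a*x_n^2 + y_n,  y_{n+1} = b*x_n
        double a = 1.4, b = 0.3;   // classical chaotic parameters (assumed)
        double x = 0.1, y = 0.0;   // illustrative initial condition
        for (int n = 0; n < 20; n++) {
            System.out.printf("n=%2d  x=% .5f  y=% .5f%n", n, x, y);
            double xNext = 1.0 - a * x * x + y;
            double yNext = b * x;
            x = xNext;
            y = yNext;
        }
    }
}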