Essentials of Error-Control Coding - Hardcover

Farrell, Patrick

 
9780470035726: Essentials of Error-Control Coding

No offers are currently available for this ISBN.

Synopsis

Rapid advances in electronic and optical technology have enabled the implementation of powerful error-control codes, which are now used in almost the entire range of information systems with close to optimal performance. These codes and decoding methods are required for the detection and correction of the errors and erasures which inevitably occur in digital information during transmission, storage and processing because of noise, interference and other imperfections.

Error-control coding is a complex, novel and unfamiliar area, not yet widely understood and appreciated. This book sets out to provide a clear description of the essentials of the subject, with comprehensive and up-to-date coverage of the most useful codes and their decoding algorithms. A practical engineering and information technology emphasis, as well as relevant background material and fundamental theoretical aspects, provides an in-depth guide to the essentials of Error-Control Coding.

  • Provides extensive and detailed coverage of Block, Cyclic, BCH, Reed-Solomon, Convolutional, Turbo, and Low Density Parity Check (LDPC) codes, together with relevant aspects of Information Theory
  • Presents EXIT chart performance analysis for iteratively decoded error-control techniques
  • Heavily illustrated with tables, diagrams, graphs, worked examples, and exercises
  • Includes an invaluable companion website featuring slides of figures, algorithm software, updates and solutions to problems

Offering a complete overview of Error Control Coding, this book is an indispensable resource for students, engineers and researchers in the areas of telecommunications engineering, communication networks, electronic engineering, computer science, information systems and technology, digital signal processing and applied mathematics.

The synopsis may refer to a different edition of this title.

About the Authors

Jorge Castiñeira Moreira is Associate Professor (Senior Lecturer) in Communication Systems in the Electronics Department, School of Engineering, Mar del Plata University, Argentina.
He is Director of the Communications Laboratory, Director of the research project "Open Source software applications for wireless networks" and co-director of the research project "Information Theory. Data Networks. Chaos and Communications". Jorge is also responsible for the teaching area "Communications".

Patrick G. Farrell is Visiting Professor in the Department of Communication Systems at Lancaster University, UK, where he supervises 7 research assistants, 50 PhD and 35 MSc students. His research interests include error-control coding, coded modulation, digital communications, multi-user communications, information theory and source coding. Patrick has over 350 publications, reports and presentations, and is Editor of two book series (Academic Press and Research Studies Press).


 

Excerpt. © Reprinted with permission. All rights reserved.

Essentials of Error-Control Coding

By Jorge Castiñeira Moreira, Patrick Guy Farrell

John Wiley & Sons

Copyright © 2006 John Wiley & Sons, Ltd
All rights reserved.
ISBN: 978-0-470-03572-6

Excerpt

CHAPTER 1

Information and Coding Theory

In his classic paper 'A Mathematical Theory of Communication', Claude Shannon introduced the main concepts and theorems of what is known as information theory. Definitions and models for two important elements are presented in this theory. These elements are the binary source (BS) and the binary symmetric channel (BSC). A binary source is a device that generates one of the two possible symbols '0' and '1' at a given rate r, measured in symbols per second. These symbols are called bits (binary digits) and are generated randomly.

The BSC is a medium through which it is possible to transmit one symbol per time unit. However, this channel is not reliable, and is characterized by the error probability p (0 ≤ p ≤ 1/2) that an output bit can be different from the corresponding input. The symmetry of this channel comes from the fact that the error probability p is the same for both of the symbols involved.

Information theory attempts to analyse communication between a transmitter and a receiver through an unreliable channel, and in this approach performs, on the one hand, an analysis of information sources, especially the amount of information produced by a given source, and, on the other hand, states the conditions for performing reliable transmission through an unreliable channel.

There are three main concepts in this theory:

1. The first one is the definition of a quantity that can be a valid measurement of information, which should be consistent with a physical understanding of its properties.

2. The second concept deals with the relationship between the information and the source that generates it. This concept will be referred to as source information. Well-known information theory techniques like compression and encryption are related to this concept.

3. The third concept deals with the relationship between the information and the unreliable channel through which it is going to be transmitted. This concept leads to the definition of a very important parameter called the channel capacity. A well-known information theory technique called error-correction coding is closely related to this concept. This type of coding forms the main subject of this book.

One of the most used techniques in information theory is a procedure called coding, which is intended to optimize transmission and to make efficient use of the capacity of a given channel. In general terms, coding is a bijective assignment between a set of messages to be transmitted and a set of codewords that are used for transmitting these messages. Usually this procedure adopts the form of a table in which each message of the transmission is in correspondence with the codeword that represents it (see an example in Table 1.1).

Table 1.1 shows four codewords used for representing four different messages. As seen in this simple example, the length of the codeword is not constant. One important property of a coding table is that it is constructed in such a way that every codeword is uniquely decodable. This means that in the transmission of a sequence composed of these codewords there should be only one possible way of interpreting that sequence. This is necessary when variable-length coding is used.

If the code shown in Table 1.1 is compared with a constant-length code for the same case, constituted from four codewords of two bits, 00, 01, 10, 11, it is seen that the code in Table 1.1 adds redundancy. Assuming equally likely messages, the average number of transmitted bits per symbol is equal to 2.75. However, if for instance symbol s2 were characterized by a probability of being transmitted of 0.76, and all other symbols in this code were characterized by a probability of being transmitted equal to 0.08, then this source would transmit an average of 2.24 bits per symbol. As seen in this simple example, a level of compression is possible when the information source is not uniform, that is, when a source generates messages that are not equally likely.
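The excerpt does not reproduce Table 1.1 itself, but the two averages quoted above are easy to check. The short sketch below uses a hypothetical set of codeword lengths (3, 2, 3, 3 bits for the four messages), chosen here only because it is consistent with both quoted figures; the actual codewords in Table 1.1 may differ.

    # Hypothetical codeword lengths for the four messages s1..s4
    # (not taken from the book; chosen to match the quoted averages).
    lengths = [3, 2, 3, 3]

    def average_length(probs, lengths):
        # Average number of transmitted bits per symbol: sum of P(s_i) * l_i
        return sum(p * l for p, l in zip(probs, lengths))

    uniform = [0.25, 0.25, 0.25, 0.25]   # equally likely messages
    skewed = [0.08, 0.76, 0.08, 0.08]    # s2 sent 76% of the time

    print(round(average_length(uniform, lengths), 2))  # 2.75 bits per symbol
    print(round(average_length(skewed, lengths), 2))   # 2.24 bits per symbol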
The source information measure, the channel capacity measure and coding are all related by one of the Shannon theorems, the channel coding theorem, which is stated as follows:

If the information rate of a given source does not exceed the capacity of a given channel, then there exists a coding technique that makes possible transmission through this unreliable channel with an arbitrarily low error rate.

This important theorem predicts the possibility of error-free transmission through a noisy or unreliable channel. This is obtained by using coding. The above theorem is due to Claude Shannon, and states the restrictions on the transmission of information through a noisy channel, stating also that the solution for overcoming those restrictions is the application of a rather sophisticated coding technique. What is not formally stated is how to implement this coding technique.

A block diagram of a communication system as related to information theory is shown in Figure 1.1.

The block diagram seen in Figure 1.1 shows two types of encoders. The channel encoder is designed to perform error correction with the aim of converting an unreliable channel into a reliable one. On the other hand, there also exists a source encoder that is designed to make the source information rate approach the channel capacity. The destination is also called the information sink.

Some concepts relating to the transmission of discrete information are introduced in the following sections.

1.1 Information

1.1.1 A Measure of Information

From the point of view of information theory, information is not knowledge, as commonly understood, but instead relates to the probabilities of the symbols used to send messages between a source and a destination over an unreliable channel. A quantitative measure of symbol information is related to its probability of occurrence, either as it emerges from a source or when it arrives at its destination. The less likely the event of a symbol occurrence, the higher is the information provided by this event. This suggests that a quantitative measure of symbol information will be inversely proportional to the probability of occurrence.

Assuming an arbitrary message x_i which is one of the possible messages from a set a given discrete source can emit, and...
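Returning to the binary symmetric channel defined at the start of the excerpt: since its only parameter is the error probability p, applied independently and identically to both symbols, it is simple to simulate. The sketch below is a minimal illustration under that definition, not code from the book; the function name and test values are our own.

    import random

    def bsc(bits, p):
        # Binary symmetric channel: each bit is flipped independently
        # with error probability p (0 <= p <= 1/2).
        return [b ^ 1 if random.random() < p else b for b in bits]

    # Send 10 000 zero bits through a BSC with p = 0.1;
    # roughly 10% of the received bits should come out flipped.
    received = bsc([0] * 10_000, p=0.1)
    print(sum(received) / len(received))  # ≈ 0.1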

"About this title" may refer to a different edition of this title.

Other popular editions of the same title

9780470029206: Essentials of Error-Control Coding

Featured edition

ISBN 10: 047002920X ISBN 13: 9780470029206
Publisher: John Wiley & Sons Inc, 2006
Hardcover