
E-book, English, 690 pages

Declercq / Fossorier / Biglieri: Channel Coding: Theory, Algorithms, and Applications

Academic Press Library in Mobile and Wireless Communications
1st edition, 2014
ISBN: 978-0-12-397223-1
Publisher: Elsevier Science & Techn.
Format: EPUB
Copy protection: ePub watermark


This book reviews the principles, methods, and techniques of important and emerging research topics and technologies in channel coding, including theory, algorithms, and applications. It is edited by leading figures in the field who, through their reputation, have been able to commission experts to write on particular topics. With this reference source you will:

- Quickly grasp a new area of research
- Understand the underlying principles of a topic and its applications
- Ascertain how a topic relates to other areas and learn of the research issues yet to be resolved

Key features:

- Quick tutorial reviews of important and emerging topics of research in channel coding
- Core principles of channel coding theory, presented together with their applications
- Reference content on core principles, technologies, algorithms, and applications
- Comprehensive references to journal articles and other literature on which to build further, more specific, and detailed knowledge


Further information & material


Preface (p. 16)
Contributors (p. 18)

1 Turbo Codes: From First Principles to Recent Standards (p. 20)
  1 Introduction (p. 21)
  2 History of turbo codes (p. 21)
    2.1 The origins of turbo codes (p. 22)
    2.2 Concatenation (p. 22)
    2.3 Negative feedback in the decoder and recursive systematic convolutional codes (p. 22)
    2.4 Extrinsic information and iterative decoding (p. 23)
    2.5 Parallel concatenation (p. 25)
  3 Fundamentals of turbo coding (p. 27)
    3.1 Recursive systematic convolutional (RSC) component codes (p. 27)
    3.2 Block coding with turbo codes (p. 32)
    3.3 The permutation (p. 37)
      3.3.1 Regular permutation (p. 38)
      3.3.2 Irregular permutations (p. 42)
  4 Fundamentals of turbo decoding (p. 45)
    4.1 The turbo principle (p. 45)
    4.2 Soft-input soft-output decoding (p. 49)
      4.2.1 Definitions (p. 50)
      4.2.2 The MAP algorithm (p. 51)
      4.2.3 The MAP algorithm in the logarithmic domain: Log-MAP and Max-Log-MAP (p. 55)
  5 Industrial impacts of turbo codes (p. 57)
    5.1 The very first implementations of turbo codecs (p. 57)
      5.1.1 The CAS 5093 circuit (p. 58)
      5.1.2 The Turbo4 circuit (p. 59)
    5.2 Early applications of turbo codes (p. 60)
    5.3 Turbo codes in standards (p. 62)
      5.3.1 Mobile communication systems (p. 62)
      5.3.2 Digital video broadcasting (DVB) standards (p. 63)
      5.3.3 Other standards (p. 64)
      5.3.4 Summary (p. 66)
  6 Conclusion (p. 66)
  References (p. 66)

2 Turbo-Like Codes Constructions (p. 72)
  1 Introduction and bibliography survey (p. 74)
    1.1 Introduction (p. 74)
  2 Structure of concatenated codes (p. 78)
    2.1 Main characteristics of turbo encoding structures (p. 78)
    2.2 Trellis encoders (p. 81)
    2.3 Mapper (p. 82)
      2.3.1 Repeater (p. 84)
      2.3.2 Parity-check bit generator (p. 84)
      2.3.3 Constellation labeling (p. 85)
    2.4 Interleaver (p. 85)
    2.5 Rate conversion modules (p. 86)
    2.6 Puncturer (p. 87)
    2.7 Summary of encoding modules (p. 87)
    2.8 Some turbo encoder structures and their main properties (p. 87)
      2.8.1 Serially concatenated convolutional codes (p. 88)
      2.8.2 Duobinary PCCC (p. 88)
      2.8.3 (Irregular) repeat-accumulate codes (p. 89)
      2.8.4 Self-concatenation (p. 90)
      2.8.5 Other turbo coding structures (p. 90)
    2.9 Convolutional versus block encoding (p. 90)
      2.9.1 Periodic state enforcing of trellis encoders (p. 91)
  3 ML analysis and design of constituent codes (p. 92)
    3.1 Maximum-likelihood analysis (p. 93)
      3.1.1 Word and bit error probabilities (p. 93)
      3.1.2 Uniform interleaver (p. 94)
      3.1.3 Analysis of PCCC (p. 94)
      3.1.4 Analysis of SCCC (p. 97)
      3.1.5 Analysis of hybrid concatenated codes with interleavers (p. 97)
      3.1.6 More refined upper bounds (p. 98)
    3.2 Design criteria for constituent encoders (p. 99)
      3.2.1 Design of parallel concatenated convolutional codes with interleaver (p. 101)
      3.2.2 A heuristic explanation of the interleaver gain (p. 105)
      3.2.3 Design of serially concatenated convolutional codes with interleavers (p. 106)
    3.3 Comparison between parallel and serially concatenated codes (p. 106)
    3.4 Finding the optimum constituent encoders (p. 107)
  4 Iterative decoding (p. 108)
    4.1 Messages in iterative decoders and independence assumption (p. 109)
    4.2 Soft-input soft-output modules (p. 112)
    4.3 The SISO for the data ordering encoding modules (p. 112)
    4.4 The SISO module for the trellis encoder (p. 114)
      4.4.1 The SISO algorithm for computing the extrinsic LLRs (p. 114)
      4.4.2 Trellis with multiple symbol labels (p. 115)
    4.5 The SISO module for the mapper (p. 116)
      4.5.1 Computation of SISO updating with the minimal trellis (p. 116)
    4.6 Multiple code representations (p. 118)
  5 Interleaver designs (p. 119)
    5.1 Interleaver theory (p. 119)
    5.2 Interleavers: basic definitions (p. 119)
      5.2.1 Causal and canonical interleavers (p. 121)
    5.3 Connections among interleaver parameters (p. 125)
    5.4 Convolutional and block interleavers (p. 126)
      5.4.1 Convolutional interleavers (p. 127)
      5.4.2 Implementation of convolutional interleavers with minimal memory requirement (p. 128)
      5.4.3 Block interleavers (p. 130)
      5.4.4 Block interleaver causalization (p. 130)
      5.4.5 The canonical causal interleaver of a block interleaver (p. 130)
      5.4.6 The two-register causal interleaver of a block interleaver (p. 131)
    5.5 Some practical interleavers (p. 131)
      5.5.1 Random interleaver (p. 133)
      5.5.2 Spread interleaver (p. 134)
      5.5.3 Pseudo-random interleavers (p. 135)
      5.5.4 Congruential-type interleavers (p. 135)
      5.5.5 Multidimensional interleaver (p. 136)
      5.5.6 More interleaver classes (p. 138)
      5.5.7 Pruning and extending interleavers (p. 139)
  6 Performances (p. 140)
    6.1 Introduction (p. 140)
    6.2 Iterative decoding versus ML upper bounds (p. 141)
    6.3 Optimality of constituent encoders (p. 142)
    6.4 Performance versus number of iterations (p. 142)
    6.5 Comparison between PCCC and SCCC (p. 145)
    6.6 Other concatenated structures (p. 148)
      6.6.1 Simulation results for variations of PCCC codes (p. 148)
      6.6.2 Simulation results for variations of SCCC codes (p. 148)
      6.6.3 Simulation results for multiple turbo codes (DPCCC) (p. 148)
      6.6.4 Simulation results for hybrid concatenated codes (HCCC) (p. 148)
      6.6.5 Simulation results for self-concatenated codes (p. 155)
      6.6.6 Simulation results for repeat-accumulate (RA) codes (p. 157)
      6.6.7 Simulation results for repeat-accumulate-accumulate (RAA) codes (p. 157)
  References (p. 157)

3 Low-Density Parity-Check Code Constructions (p. 160)
  1 Introduction (p. 161)
  2 LDPC codes and ensembles (p. 162)
    2.1 Gallager ensemble (p. 163)
    2.2 Unstructured irregular ensemble (p. 164)
    2.3 Repeat-accumulate code ensemble (p. 165)
    2.4 Multi-edge type ensemble (p. 167)
    2.5 Protograph ensemble (p. 170)
    2.6 LDPC convolutional code ensemble (p. 173)
  3 Asymptotic analysis and optimization (p. 175)
    3.1 Asymptotic threshold (p. 175)
    3.2 Weight distribution (p. 180)
    3.3 Optimization via differential evolution (p. 183)
  4 Finite-length construction (p. 185)
    4.1 Unstructured codes (p. 186)
      4.1.1 Progressive edge-growth (PEG) algorithm (p. 186)
      4.1.2 Improved PEG algorithms based on extrinsic cycle connectivity (p. 192)
      4.1.3 Improved PEG algorithm for avoidance of small stopping sets (p. 195)
      4.1.4 Improved PEG algorithms for error rate minimization (p. 198)
      4.1.5 Randomized PEG algorithm and cage design (p. 200)
    4.2 Structured codes (p. 204)
      4.2.1 QC-LDPC codes via circulant PEG (p. 206)
  5 LDPC codes in standards (p. 208)
    5.1 IEEE 802.16-2009 LDPC codes (p. 208)
      5.1.1 Construction of parity-check matrices (p. 209)
      5.1.2 Efficient encoding (p. 210)
    5.2 ITU-T G.9960 LDPC codes (p. 210)
    5.3 CCSDS LDPC codes (p. 212)
    5.4 Other standards including LDPC codes (p. 213)
    5.5 Details of parity-check matrices (p. 214)
      5.5.1 Exponent matrices for IEEE 802.16-2009 LDPC codes with n = 2304 (p. 214)
      5.5.2 Exponent matrices for ITU-T G.9960 LDPC codes (p. 216)
      5.5.3 Exponent matrices for IEEE 802.11-2012 LDPC codes (p. 218)
  Acknowledgments (p. 222)
  References (p. 222)

4 LDPC Decoders (p. 230)
  1 Introduction (p. 231)
    1.1 A decoding-oriented approach to coding (p. 231)
    1.2 Decoding complexity, long codes, and Shannon limit (p. 232)
    1.3 Evolution of LDPC decoding from Gallager to our days (p. 233)
      1.3.1 Belief-propagation decoding (p. 233)
      1.3.2 Min-sum and min-sum-based decoding (p. 234)
      1.3.3 Stochastic decoding (p. 235)
      1.3.4 Decoding over erasure channels (p. 235)
      1.3.5 Non-binary LDPC decoding (p. 236)
    1.4 Chapter's organization (p. 237)
  2 Notation and terminology (p. 237)
  3 Binary LDPC decoders (p. 240)
    3.1 Bit-flipping decoding (p. 240)
    3.2 Hard-decision MP decoders (p. 241)
      3.2.1 Gallager-B decoding (p. 241)
      3.2.2 Gallager-A decoding (p. 243)
      3.2.3 Majority-voting decoding (p. 243)
      3.2.4 Gallager-B decoding with extended alphabet (p. 243)
    3.3 Belief-propagation decoding (p. 245)
    3.4 Min-sum decoding (p. 249)
    3.5 Min-sum-based decoding (p. 251)
      3.5.1 Normalized/offset-min-sum (p. 251)
      3.5.2 Self-corrected min-sum (p. 252)
      3.5.3 Finite alphabet iterative decoders (p. 253)
    3.6 Erasure decoding (p. 254)
    3.7 Further readings (p. 257)
      3.7.1 Other decoding algorithms (p. 257)
      3.7.2 Implementation-related issues (p. 257)
  4 Non-binary LDPC decoders (p. 258)
    4.1 Notation update (p. 259)
    4.2 Belief-propagation decoding (p. 260)
    4.3 Min-sum decoding (p. 263)
    4.4 Extended-min-sum decoding (p. 265)
    4.5 Min-max decoding (p. 266)
    4.6 Erasure decoding (p. 268)
    4.7 Further readings (p. 270)
      4.7.1 Other decoding algorithms (p. 270)
      4.7.2 Implementation-related issues (p. 270)
  Appendix (p. 270)
  References (p. 270)

5 Code Design with EXIT Charts (p. 280)
  1 Introduction (p. 281)
    1.1 System model (p. 281)
    1.2 Notation (p. 282)
  2 Parallel concatenated codes (p. 282)
    2.1 Encoder and decoder (p. 282)
    2.2 The EXIT method (p. 284)
      2.2.1 Decoding trajectory (p. 284)
      2.2.2 Decoding model and transfer function (p. 287)
    2.3 Code analysis and design (p. 290)
  3 Serially concatenated codes (p. 293)
    3.1 Encoder and decoder (p. 294)
    3.2 EXIT analysis (p. 295)
    3.3 Code analysis and design (p. 298)
  4 LDPC codes (p. 299)
    4.1 Decoder and decoding models (p. 300)
      4.1.1 Variable-node decoder (p. 300)
      4.1.2 Check-node decoder (p. 302)
      4.1.3 Discussion of decoding models (p. 303)
      4.1.4 Discussion of the area properties (p. 303)
    4.2 Analysis and design for the BEC (p. 305)
      4.2.1 EXIT functions (p. 306)
      4.2.2 Code design (p. 307)
    4.3 Analysis and design for the AWGN channel (p. 308)
  5 Comments and generalizations (p. 310)
    5.1 Estimation of mutual information (p. 310)
    5.2 Theory of EXIT analysis (p. 311)
    5.3 EXIT analysis for other codes or coded systems (p. 312)
  6 Summary (p. 313)
  References (p. 313)

6 Failures and Error Floors of Iterative Decoders (p. 318)
  1 Introduction (p. 319)
  2 Preliminaries (p. 325)
    2.1 LDPC codes (p. 325)
    2.2 Channel assumptions (p. 325)
    2.3 Message passing decoding algorithms (p. 326)
      2.3.1 Gallager A/B message passing algorithm (p. 326)
      2.3.2 The sum-product algorithm (p. 327)
    2.4 Bit flipping algorithms (p. 327)
  3 Overview of decoding failures (p. 328)
  4 Combinatorial characterization of decoding failures (p. 332)
    4.1 Trapping sets on the BEC (p. 332)
    4.2 Trapping sets on the BSC (p. 332)
    4.3 Trapping sets on the AWGNC (p. 334)
  5 Case study: Column-weight-three codes with the Gallager A/B algorithm on the BSC (p. 334)
    5.1 Graphical representation (p. 334)
    5.2 Topological relation (p. 335)
    5.3 Evolution of trapping sets (p. 336)
    5.4 FER estimation (p. 337)
  6 Combating error floors (p. 340)
    6.1 Improving error floor performance by constructing better Tanner graphs (p. 340)
      6.1.1 Constructing better Tanner graphs by increasing girth (p. 340)
      6.1.2 Constructing better Tanner graphs by avoiding trapping sets (p. 342)
      6.1.3 Relative harmfulness (p. 343)
    6.2 Improving error floor performance by designing better decoders (p. 346)
      6.2.1 Finite precision of messages, quantized belief propagation, and low-complexity modifications (p. 346)
      6.2.2 Decoding beyond belief propagation (p. 348)
      6.2.3 Approaching ML performance via iterative decoding (p. 348)
  7 Connections to LP decoding (p. 350)
    7.1 Pseudo-codewords for LP decoders (p. 351)
  8 Conclusion (p. 353)
  Acknowledgments (p. 354)
  References (p. 354)

7 Rate-Compatible LDPC and Turbo Codes for Link Adaptivity and Unequal Error Protection (p. 362)
  1 Unequal error protection Turbo codes (p. 363)
    1.1 Puncturing and pruning (p. 363)
    1.2 Hybrid Turbo codes and their convergence (p. 366)
      1.2.1 Local EXIT charts (p. 368)
      1.2.2 Relation between local and global EXIT charts (p. 369)
    1.3 Interleaver structures (p. 370)
  2 Unequal error protection LDPC codes based on puncturing and pruning (p. 374)
    2.1 Density evolution for general RC-LDPC codes (p. 375)
    2.2 Design of good puncturing patterns for a given mother code (p. 376)
    2.3 Pruning for creating irregular UEP check-node profiles (p. 377)
    2.4 Structured RC-LDPC codes (p. 378)
  3 Unequal error protection LDPC codes based on degree distribution optimization (p. 380)
    3.1 Multi-edge-type UEP LDPC codes (p. 381)
  References (p. 382)

8 Rateless Coding (p. 388)
  1 Introduction (p. 389)
  2 The fountain paradigm (p. 389)
    2.1 Fountain coding and decoding: definitions and principles (p. 389)
    2.2 The random binary fountain code (p. 390)
  3 Rateless sparse-graph codes for the binary erasure channel: LT and Raptor codes (p. 392)
    3.1 LT codes (p. 392)
    3.2 Raptor codes (p. 394)
    3.3 Decoding algorithms for the BEC (p. 396)
    3.4 Raptor codes in standards (p. 397)
  4 Extensions to noisy channels (p. 397)
    4.1 The additive white Gaussian noise channel (p. 397)
    4.2 Fading channels (p. 399)
    4.3 Other channels (p. 400)
    4.4 Link to fixed rate counterparts (p. 400)
  5 Advanced sparse-graph based rateless coding schemes (p. 401)
    5.1 LT/Raptor codes extensions (p. 401)
      5.1.1 Improved decoding algorithms and sequence designs (p. 401)
      5.1.2 Generalization of LT/Raptor codes (p. 402)
      5.1.3 Distributed LT codes (p. 403)
      5.1.4 Non-binary fountain codes (p. 403)
      5.1.5 "Turbo"-based fountain coding (p. 404)
    5.2 Other rateless coding schemes: beyond sparse-graph codes (p. 404)
  6 Applications of rateless coding (p. 405)
    6.1 Rateless coding versus IR-HARQ (p. 405)
    6.2 Multimedia communications and broadcasting (p. 406)
      6.2.1 Unequal error protection (p. 406)
      6.2.2 Multimedia video communications (p. 406)
    6.3 Wireless networks and relays (p. 406)
    6.4 Distributed storage and data dissemination (p. 406)
    6.5 Source and source-channel coding (p. 406)
  References (p. 406)

9 An Introduction to Distributed Channel Coding (p. 418)
  1 Introduction (p. 419)
    1.1 Distributed channel coding (p. 421)
    1.2 Outline of this chapter (p. 421)
    1.3 Notations (p. 422)
  2 The three-node relay channel (p. 422)
    2.1 Basic model (p. 422)
      2.1.1 AWGN relay channel (p. 423)
      2.1.2 Binary erasure relay channel (p. 424)
    2.2 Relaying strategies (p. 424)
    2.3 Fundamental coding strategies for decode-and-forward relaying (p. 425)
      2.3.1 Full-duplex relaying (p. 425)
      2.3.2 Half-duplex relaying (p. 428)
      2.3.3 Design objectives: achieving the optimal decode-and-forward rates (p. 429)
  3 Distributed coding for the three-node relay channel (p. 435)
    3.1 LDPC code designs for the relay channel (p. 435)
      3.1.1 Code structures for decode-and-forward relaying (p. 436)
      3.1.2 Irregular LDPC codes (p. 440)
      3.1.3 Spatially coupled LDPC codes (p. 446)
    3.2 Distributed turbo-codes and related code structures (p. 452)
      3.2.1 Code optimization (p. 454)
      3.2.2 Noisy relay (p. 455)
  4 Relaying with uncertainty at the relay (p. 455)
    4.1 Compress-and-forward relaying (p. 456)
    4.2 Soft-information forwarding and estimate-and-forward (p. 457)
  5 Cooperation with multiple sources (p. 457)
    5.1 Two-user cooperative network: coded cooperation (p. 457)
    5.2 Multi-source cooperative relay network (p. 459)
      5.2.1 Spatially coupled LDPC codes (p. 460)
  6 Summary and conclusions (p. 462)
  Acknowledgment (p. 463)
  References (p. 463)

10 Space-Time Block Codes (p. 470)
  1 Introduction and preliminaries (p. 471)
  2 STBCs with low ML decoding complexity (p. 476)
    2.1 Encoding complexity, decoding complexity, and diversity gain (p. 477)
      2.1.1 Measure of ML decoding complexity (p. 479)
      2.1.2 Full diversity (p. 481)
      2.1.3 Problem statement of optimal rate-ML decoding complexity trade-off (p. 481)
  3 Full-rate full-diversity STBCs (p. 482)
    3.1 STBCs from division algebras (p. 483)
      3.1.1 Division algebras: an introduction (p. 483)
      3.1.2 Codes from the left regular representation of division algebras (p. 484)
      3.1.3 Cyclic division algebras (p. 484)
    3.2 Embedding cyclic division algebras into matrices over the maximal cyclic subfield (p. 486)
  4 Perfect space-time block codes (p. 487)
  5 Diversity and multiplexing gain trade-off of space-time codes (p. 491)
  6 Space-time codes for asymmetric MIMO systems (p. 497)
    6.1 Fast-decodable MIDO codes with large coding gain (p. 497)
    6.2 DMT-optimal LSTBC-schemes for asymmetric MIMO systems (p. 499)
      6.2.1 Fast-decodable STBCs (p. 500)
  7 Distributed space-time codes (p. 501)
    7.1 Communication with relays (p. 501)
      7.1.1 Distributed space-time coding for asynchronous relay networks (p. 504)
    7.2 Space-time codes for wireless two-way relaying (p. 505)
  8 Conclusion (p. 507)
  References (p. 507)

11 Coded Modulation (p. 516)
  1 Introduction (p. 517)
  2 Preliminaries (p. 518)
    2.1 Gaussian and Rayleigh fading channels (p. 518)
    2.2 Capacity of signal sets over the Gaussian channel (p. 518)
    2.3 Overview of coded modulation schemes (p. 520)
    2.4 Performance measure (p. 523)
  3 Trellis coded modulation (p. 524)
    3.1 Set partitioning (p. 525)
    3.2 Encoder and decoder for trellis codes (p. 526)
      3.2.1 Design of trellis coded modulation (p. 529)
    3.3 Concatenated trellis coded modulation (p. 530)
  4 Multilevel codes and multistage decoding (p. 531)
    4.1 Multilevel codes (p. 532)
    4.2 Multistage decoding (p. 532)
    4.3 Code design for multilevel codes and multistage decoding (p. 535)
      4.3.1 Component codes for low error probability (p. 535)
      4.3.2 Design for capacity-approaching component codes (p. 536)
    4.4 Iterative decoding of multilevel codes (p. 538)
    4.5 Multilevel codes for unequal error protection (p. 539)
  5 Bit-interleaved coded modulation (p. 541)
    5.1 Encoding of BICM (p. 541)
    5.2 Decoding BICM (p. 541)
    5.3 BICM capacity and capacity-approaching codes (p. 543)
    5.4 BICM with iterative decoding (p. 544)
    5.5 Shaping and BICM (p. 546)
  References (p. 547)

12 Joint Source-Channel Coding and Decoding (p. 554)
  1 Why joint source-channel coding/decoding (p. 555)
  2 Joint source-channel decoding basics (p. 556)
    2.1 Various types of redundancy (p. 557)
      2.1.1 Redundancy due to the syntax of source coders (p. 557)
      2.1.2 Redundancy due to the semantics of the source and of the source coder (p. 557)
      2.1.3 Redundancy due to the packetization of compressed data (p. 559)
    2.2 Decoders to exploit the redundancy (p. 559)
    2.3 Reducing the decoding complexity (p. 561)
      2.3.1 Aggregated trellises (p. 561)
      2.3.2 Projected trellises (p. 561)
      2.3.3 Sequential decoders (p. 563)
      2.3.4 Iterative decoders (p. 564)
  3 Joint source-channel coding basics (p. 566)
    3.1 OPTA (p. 567)
    3.2 Simple (counter-)example (p. 568)
    3.3 To code or not to code (p. 569)
  4 Modified source encoders (p. 571)
    4.1 Variable-length error-correcting codes (p. 571)
      4.1.1 Distance properties of variable-length codes (p. 571)
      4.1.2 Code construction (p. 572)
    4.2 Redundant signal representations (p. 574)
      4.2.1 Multiple description scalar quantization (p. 575)
      4.2.2 Correlating transforms (p. 576)
      4.2.3 Frame expansion (p. 576)
      4.2.4 Channel coding (p. 578)
    4.3 Hierarchical and high-density constellations (p. 579)
      4.3.1 Hierarchical modulations (p. 579)
      4.3.2 High-density constellations (p. 580)
  5 Accounting for the presence of a network (p. 581)
    5.1 Redundancy in the protocol stack (p. 581)
    5.2 Protocol-assisted channel decoding (p. 582)
      5.2.1 Optimal estimator (p. 582)
      5.2.2 Suboptimal estimator (p. 583)
    5.3 Reliable header recovery (p. 584)
    5.4 Reliable burst segmentation (p. 585)
      5.4.1 Aggregated packets within a burst (p. 585)
      5.4.2 Estimators for the number of packets and their boundaries (p. 586)
      5.4.3 Example: WiMax burst segmentation (p. 588)
  6 Conclusion (p. 589)
  References (p. 589)

13 Hardware Design and Realization for Iteratively Decodable Codes (p. 602)
  1 Introduction (p. 603)
  2 Standard implementation (p. 604)
    2.1 Quantization of input LLR (p. 605)
    2.2 Standard turbo decoder architecture (p. 606)
      2.2.1 Global architecture (p. 607)
      2.2.2 Standard MAP architecture (p. 608)
      2.2.3 MAP architecture for tail-biting code (p. 612)
    2.3 Standard LDPC decoder architecture (p. 612)
  3 Low complexity decoder (p. 618)
    3.1 Internal precision optimization (p. 619)
    3.2 Precision optimization in turbo decoder (p. 619)
    3.3 Sliding-window technique in turbo decoder (p. 620)
    3.4 Algorithm simplifications (p. 621)
    3.5 Scheduling of Turbo-Code (p. 627)
  4 High throughput architectures (p. 627)
    4.1 Basic solutions to increase decoding speed (p. 628)
    4.2 Average vs. worst case number of iterations (p. 629)
    4.3 Parallel error-free memory access (p. 629)
    4.4 High-speed Turbo-Code (p. 634)
      4.4.1 Radix-4 architecture (p. 634)
      4.4.2 Forward-backward parallelism (p. 635)
      4.4.3 Half iteration parallelism (p. 635)
    4.5 High-speed LDPC code (p. 636)
  5 Energy efficient architectures (p. 638)
    5.1 General purpose methods (p. 639)
    5.2 Application-specific methods (p. 644)
  6 Exotic designs (p. 650)
    6.1 Analog decoders (p. 650)
    6.2 Stochastic decoders (p. 651)
    6.3 Conclusion (p. 653)
  7 A survey of relevant implementations (p. 653)
    7.1 High throughput implementations (p. 654)
    7.2 Low energy and low area implementations (p. 655)
    7.3 Flexible decoders (p. 656)
    7.4 Hardware accelerators (p. 660)
  8 Conclusion (p. 660)
  References (p. 661)

Subject Index (p. 672)


(11) where $S_i$ is the encoder state after the encoding of bit $d_i$. The LLR of bit $d_i$ can then be expressed as

$\Lambda(d_i) = \ln \dfrac{\sum_m \lambda_i^1(m)}{\sum_m \lambda_i^0(m)}$.  (12)

For the code of Figure 26, $\Lambda(d_i)$ is written as

$\Lambda(d_i) = \ln \dfrac{\lambda_i^1(0) + \lambda_i^1(1) + \lambda_i^1(2) + \lambda_i^1(3)}{\lambda_i^0(0) + \lambda_i^0(1) + \lambda_i^0(2) + \lambda_i^0(3)}$.

The principle of the MAP algorithm consists in processing separately the data encoded between time 1 and time $i$ (past data) and the data encoded between time $i+1$ and time $K$ (future data) to compute the probabilities $\lambda_i^j(m)$. This dichotomous processing is achieved by introducing the probability functions $\alpha_i(m)$, $\beta_i(m)$, and $\gamma_j(R_i, m', m)$ defined by

$\alpha_i(m) = \Pr\{S_i = m \mid R_1^i\}$,  (13)

$\beta_i(m) = \dfrac{\Pr\{R_{i+1}^K \mid S_i = m\}}{\Pr\{R_{i+1}^K \mid R_1^i\}}$,  (14)

and the state transition probabilities

$\gamma_j(R_i, m', m) = \Pr\{d_i = j, S_i = m, R_i \mid S_{i-1} = m'\}, \quad j = 0, 1$.  (15)

One can show that

$\Lambda(d_i) = \ln \dfrac{\sum_m \sum_{m'/d(m',m)=1} \gamma_1(R_i, m', m)\, \alpha_{i-1}(m')\, \beta_i(m)}{\sum_m \sum_{m'/d(m',m)=0} \gamma_0(R_i, m', m)\, \alpha_{i-1}(m')\, \beta_i(m)}$.  (16)

The application of (16) to the example of Figure 26 yields

$\Lambda(d_i) = \ln \dfrac{\gamma_1(R_i,1,0)\,\alpha_{i-1}(1)\,\beta_i(0) + \gamma_1(R_i,2,1)\,\alpha_{i-1}(2)\,\beta_i(1) + \gamma_1(R_i,0,2)\,\alpha_{i-1}(0)\,\beta_i(2) + \gamma_1(R_i,3,3)\,\alpha_{i-1}(3)\,\beta_i(3)}{\gamma_0(R_i,0,0)\,\alpha_{i-1}(0)\,\beta_i(0) + \gamma_0(R_i,1,2)\,\alpha_{i-1}(1)\,\beta_i(2) + \gamma_0(R_i,2,3)\,\alpha_{i-1}(2)\,\beta_i(3) + \gamma_0(R_i,3,1)\,\alpha_{i-1}(3)\,\beta_i(1)}$.

Computation of the state probabilities $\alpha_i(m)$ and $\beta_i(m)$

Forward probabilities can be computed recursively through a forward recursion in the trellis:

$\alpha_i(m) = \sum_{m'} \sum_{j=0,1} \alpha_{i-1}(m')\, \gamma_j(R_i, m', m)$.  (17)

All $\alpha_i(m)$ values can then be computed from the initial values $\alpha_0(m)$. If $m_0$ is the initial state of the encoder, then $\alpha_0(m_0) = 1$ and $\alpha_0(m) = 0$ for $m \neq m_0$. If the initial state is unknown, $\alpha_0(m) = 1/2^\nu$ for all $m$, where $\nu$ is the code memory. When circular encoding is implemented, one can use the value of $\alpha_K(m)$ as the initial value for $\alpha_0(m)$, for every trellis state $m$, since the initial and final states are identical. This initialization method is particularly convenient for iterative decoding, since the values of $\alpha_K(m)$ obtained at the end of iteration $It$ are simply handed to $\alpha_0(m)$ before starting the forward recursion of the $(It+1)$th decoding iteration. To avoid numerical precision problems when implementing the algorithm, it is highly recommended to normalize the $\alpha_i(m)$ values regularly; since $\Lambda(d_i)$ is a ratio, this normalization makes no difference in the LLR calculation. Figure 28 shows an example of forward state probability computation using the trellis of the code of Figure 26.
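To make the forward recursion (17) concrete, here is a minimal Python sketch; it is not taken from the book, and all names are illustrative. The 4-state trellis is rebuilt from the transitions listed in the expanded form of (16) above. The excerpt does not give the generator polynomials, so the parity labeling assumes the classic memory-2 RSC code with octal generators (1, 5/7), whose state transitions match Figure 26. The branch metric uses the simplified Gaussian-channel expressions (22) and (23) derived further below, and the alpha values are normalized at every step, as recommended.

import numpy as np

# Trellis of Figure 26, reconstructed from the transitions in the text:
# NEXT_STATE[j][m] is the state reached from m when d_i = j.
NEXT_STATE = [[0, 2, 3, 1],   # d_i = 0: 0->0, 1->2, 2->3, 3->1
              [2, 0, 1, 3]]   # d_i = 1: 0->2, 1->0, 2->1, 3->3

def parity(j, m):
    """Parity symbol Y in {-1, +1} for input bit j leaving state m.
    Assumes the (1, 5/7) octal RSC with state m = (a_{i-1}, a_{i-2})."""
    a = j ^ (m >> 1) ^ (m & 1)        # feedback bit (7 octal: 1 + D + D^2)
    return 2 * (a ^ (m & 1)) - 1      # feedforward 5 octal: 1 + D^2, mapped to +-1

def gamma(j, m, x, y, sigma2, p1):
    """Simplified branch metric gamma'_j of (22)/(23) for the transition (m, j)."""
    X = 2 * j - 1                     # systematic symbol +-1
    pr = p1 if j == 1 else 1.0 - p1   # a priori Pr{d_i = j}
    return np.exp((x * X + y * parity(j, m)) / sigma2) * pr

def forward(x, y, sigma2, prior1):
    """Forward recursion (17): alpha[i] holds alpha_i(m), normalized each step.
    x, y: received systematic/parity samples; prior1[i] = Pr{d_i = 1}."""
    K, S = len(x), 4
    alpha = np.zeros((K + 1, S))
    alpha[0] = 1.0 / S                # unknown initial state: 1/2^nu with nu = 2
    for i in range(K):
        for m_prev in range(S):
            for j in (0, 1):
                m = NEXT_STATE[j][m_prev]
                alpha[i + 1, m] += alpha[i, m_prev] * gamma(j, m_prev, x[i], y[i], sigma2, prior1[i])
        alpha[i + 1] /= alpha[i + 1].sum()   # normalization; cancels in the LLR ratio
    return alpha

For circular (tail-biting) encoding, one would instead seed alpha[0] with the alpha[K] values obtained at the previous decoding iteration, exactly as the text describes.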
Figure 28: Example of computation of $\alpha_i(0)$, $\alpha_i(1)$, and $\alpha_{i+1}(2)$ for the code of Figure 26. These values are computed by:

$\alpha_i(0) = \alpha_{i-1}(0)\,\gamma_0(R_i,0,0) + \alpha_{i-1}(1)\,\gamma_1(R_i,1,0)$,
$\alpha_i(1) = \alpha_{i-1}(3)\,\gamma_0(R_i,3,1) + \alpha_{i-1}(2)\,\gamma_1(R_i,2,1)$,
$\alpha_{i+1}(2) = \alpha_i(1)\,\gamma_0(R_{i+1},1,2) + \alpha_i(0)\,\gamma_1(R_{i+1},0,2)$.

In a similar way, backward probabilities can be computed recursively through a backward recursion in the trellis:

$\beta_i(m) = \sum_{m'} \sum_{j=0,1} \beta_{i+1}(m')\, \gamma_j(R_{i+1}, m, m')$.  (18)

All $\beta_i(m)$ values can then be computed from the final values $\beta_K(m)$. If $m_K$ is the final state of the encoder, then $\beta_K(m_K) = 1$ and $\beta_K(m) = 0$ for $m \neq m_K$. If the final state is unknown, $\beta_K(m) = 1/2^\nu$ for all $m$, where $\nu$ is the code memory. When circular encoding is implemented, one can use the value of $\beta_0(m)$ as the initial value for $\beta_K(m)$, for every trellis state $m$. Then, during the iterative decoding process, the values of $\beta_0(m)$ obtained at the end of iteration $It$ are passed to $\beta_K(m)$ as initial values for the backward recursion of the $(It+1)$th decoding iteration. Regular normalization of the $\beta_i(m)$ values is also recommended, for the same reason as mentioned for the computation of the forward state probabilities. Figure 29 shows an example of backward state probability computation using the trellis of the code of Figure 26.
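The backward recursion (18) mirrors the forward one. The sketch below continues the previous one (reusing the hypothetical NEXT_STATE and gamma defined there). Following (18), the branch metric at trellis step $i+1$, i.e. x[i] and y[i] with 0-based indexing, connects beta[i+1] to beta[i], and each step is again normalized.

import numpy as np  # continues the sketch above

def backward(x, y, sigma2, prior1):
    """Backward recursion (18): beta[i] holds beta_i(m), normalized each step."""
    K, S = len(x), 4
    beta = np.zeros((K + 1, S))
    beta[K] = 1.0 / S                 # unknown final state: 1/2^nu
    for i in range(K - 1, -1, -1):
        for m in range(S):
            # beta_i(m) sums over the two transitions leaving state m
            beta[i, m] = sum(beta[i + 1, NEXT_STATE[j][m]] * gamma(j, m, x[i], y[i], sigma2, prior1[i])
                             for j in (0, 1))
        beta[i] /= beta[i].sum()
    return beta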
Figure 29: Example of computation of $\beta_i(0)$, $\beta_i(2)$, and $\beta_{i-1}(1)$ for the code of Figure 26. These values are computed by:

$\beta_i(0) = \beta_{i+1}(0)\,\gamma_0(R_{i+1},0,0) + \beta_{i+1}(2)\,\gamma_1(R_{i+1},0,2)$,
$\beta_i(2) = \beta_{i+1}(3)\,\gamma_0(R_{i+1},2,3) + \beta_{i+1}(1)\,\gamma_1(R_{i+1},2,1)$,
$\beta_{i-1}(1) = \beta_i(2)\,\gamma_0(R_i,1,2) + \beta_i(0)\,\gamma_1(R_i,1,0)$.

Computation of the state transition probabilities $\gamma_j(R_i, m', m)$

The state transition probabilities $\gamma_j(R_i, m', m)$ can be expressed as the product of three terms:

$\gamma_j(R_i, m', m) = \Pr\{d_i = j \mid S_i = m, S_{i-1} = m'\}\, \Pr\{d_i = j\}\, \Pr\{R_i \mid d_i = j, S_i = m, S_{i-1} = m'\}$.  (19)

The first term is equal to one if there is a transition between states $m'$ and $m$ in the trellis corresponding to $d_i = j$, and is equal to zero otherwise. The second term is the a priori information related to $d_i = j$: in a non-iterative process or at the first turbo decoding iteration, it is given by the source statistics; in an iterative process, it is provided by the output extrinsic information computed by the other component decoder. The third term is given by the transition probability of the transmission channel. Therefore, if we consider a transmission over a discrete Gaussian memoryless channel with noise variance $\sigma^2$, the non-zero $\gamma_j(R_i, m', m)$ terms can be written as

$\gamma_j(R_i, m', m) = \Pr\{d_i = j\}\, \dfrac{1}{2\pi\sigma^2} \exp\left(-\dfrac{(x_i - X_i)^2}{2\sigma^2}\right) \exp\left(-\dfrac{(y_i - Y_i)^2}{2\sigma^2}\right)$.  (20)

Since $X_i$ and $Y_i$ are equal to $+1$ or $-1$, (20) can also be written as

$\gamma_j(R_i, m', m) = \dfrac{1}{2\pi\sigma^2} \exp\left(-\dfrac{x_i^2 + y_i^2 + 2}{2\sigma^2}\right) \exp\left(\dfrac{x_i X_i}{\sigma^2}\right) \exp\left(\dfrac{y_i Y_i}{\sigma^2}\right) \Pr\{d_i = j\}$.  (21)

The first two terms of (21) do not depend on $d_i$ and are identical for all state transitions in the trellis. Since $\Lambda(d_i)$ is a ratio, they can be omitted in the expressions of $\alpha_i(m)$ and $\beta_i(m)$. In practice, the following simplified expressions of $\gamma_j(R_i, m', m)$ are used in (16)–(18) for the LLR computation:

$\gamma'_1(R_i, m', m) = \exp\left(\dfrac{x_i}{\sigma^2}\right) \exp\left(\dfrac{y_i Y_i}{\sigma^2}\right) \Pr\{d_i = 1\} = \exp\left(\dfrac{x_i + y_i Y_i}{\sigma^2} + \ln \Pr\{d_i = 1\}\right)$,  (22)

$\gamma'_0(R_i, m', m) = \exp\left(-\dfrac{x_i}{\sigma^2}\right) \exp\left(\dfrac{y_i Y_i}{\sigma^2}\right) \Pr\{d_i = 0\} = \exp\left(\dfrac{-x_i + y_i Y_i}{\sigma^2} + \ln \Pr\{d_i = 0\}\right)$,  (23)

where the value of $Y_i$ depends on the state transition $(m', m)$ considered.

Extraction of extrinsic information from the LLR

Introducing (22) and (23) into (16) yields

$\Lambda(d_i) = \dfrac{2 x_i}{\sigma^2} + \ln \dfrac{\Pr\{d_i = 1\}}{\Pr\{d_i = 0\}} + \ln \dfrac{\sum_m \sum_{m'/d(m',m)=1} \exp\left(\dfrac{y_i Y_i}{\sigma^2}\right) \alpha_{i-1}(m')\, \beta_i(m)}{\sum_m \sum_{m'/d(m',m)=0} \exp\left(\dfrac{y_i Y_i}{\sigma^2}\right) \alpha_{i-1}(m')\, \beta_i(m)}$,

that is,

$\Lambda(d_i) = \dfrac{2 x_i}{\sigma^2} + L_a + Z_i$.  (24)

The first term of (24) is the intrinsic information related to $d_i$, available at the channel output; the second term is the a priori information $L_a$ related to $d_i$; and the third term is the extrinsic information $Z_i$ produced by the decoder. In the context of turbo decoding, at iteration $It$, the a priori information for a given decoder is provided by the extrinsic information computed by the other SISO decoder at...
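Combining the two recursions gives the LLR. The sketch below (again continuing the previous ones, with illustrative names) evaluates the bracketed ratio of (24) directly, so that the extrinsic information $Z_i$ falls out on its own and the decomposition $\Lambda(d_i) = 2x_i/\sigma^2 + L_a + Z_i$ is explicit; summing the $\gamma'$-weighted terms as in (16) would give the same $\Lambda$.

import numpy as np  # continues the sketches above

def map_decode(x, y, sigma2, prior1):
    """Returns (Lambda, Z): the LLRs of (16)/(24) and the extrinsic part Z_i."""
    K, S = len(x), 4
    alpha = forward(x, y, sigma2, prior1)
    beta = backward(x, y, sigma2, prior1)
    Lambda, Z = np.zeros(K), np.zeros(K)
    for i in range(K):
        num = den = 0.0
        for m_prev in range(S):
            for j in (0, 1):
                m = NEXT_STATE[j][m_prev]
                # bracketed term of (24): exp(y_i Y_i / sigma^2) * alpha_{i-1}(m') * beta_i(m)
                w = np.exp(y[i] * parity(j, m_prev) / sigma2) * alpha[i, m_prev] * beta[i + 1, m]
                if j == 1:
                    num += w
                else:
                    den += w
        Z[i] = np.log(num / den)                        # extrinsic information Z_i
        La = np.log(prior1[i] / (1.0 - prior1[i]))      # a priori information L_a
        Lambda[i] = 2.0 * x[i] / sigma2 + La + Z[i]     # decomposition (24)
    return Lambda, Z

A toy call, with hypothetical received samples x and y, would be Lambda, Z = map_decode(x, y, sigma2=0.5, prior1=np.full(len(x), 0.5)). In a turbo decoder, the Z of one SISO decoder would be converted back to probabilities and fed in as the other decoder's prior1 at the next iteration, as the excerpt goes on to describe.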


