An elegant way to describe how low-density parity-check (LDPC) and turbo codes work is through factor graphs and the summary-product algorithm. The first part of my talk will therefore be an introduction to these concepts. It will become apparent, however, that the summary-product algorithm, which operates by message passing in a factor graph, subsumes a great variety of algorithms not only in coding theory but also in signal processing and artificial intelligence.
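To make the message-passing idea concrete, here is a minimal sketch in Python of the summary-product updates on a small cycle-free factor graph (the factors f_a, f_b, f_c and their values are purely illustrative, not taken from the talk). On a cycle-free graph the passed summaries yield exact marginals; the same local update rules, run iteratively on graphs with cycles, give the iterative decoders discussed below.

    import numpy as np

    # Chain-shaped factor graph over binary variables x1, x2, x3:
    #   f_a(x1) -- x1 -- f_b(x1,x2) -- x2 -- f_c(x2,x3) -- x3
    f_a = np.array([0.6, 0.4])                 # local factor on x1
    f_b = np.array([[0.9, 0.1], [0.2, 0.8]])   # coupling f_b(x1, x2)
    f_c = np.array([[0.7, 0.3], [0.4, 0.6]])   # coupling f_c(x2, x3)

    # Messages flow from the leaf x3 toward x1; each factor-to-variable
    # message is a "summary": a sum of products over the eliminated variable.
    m_x3_to_fc = np.ones(2)                    # leaf variable sends all-ones
    m_fc_to_x2 = f_c @ m_x3_to_fc              # sum over x3 of f_c(x2, x3)
    m_x2_to_fb = m_fc_to_x2                    # x2 has no other incoming messages
    m_fb_to_x1 = f_b @ m_x2_to_fb              # sum over x2 of f_b(x1, x2)

    # Marginal of x1: product of all incoming messages, normalized.
    p_x1 = f_a * m_fb_to_x1
    p_x1 = p_x1 / p_x1.sum()

    # Brute-force check: marginalize the full product over x2 and x3.
    brute = np.zeros(2)
    for x1 in range(2):
        for x2 in range(2):
            for x3 in range(2):
                brute[x1] += f_a[x1] * f_b[x1, x2] * f_c[x2, x3]
    brute = brute / brute.sum()

    print(p_x1)    # agrees with the brute-force marginal below
    print(brute)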
The second part of my talk will be devoted to the analysis of iterative decoding algorithms for finite-length codes. Recently, we have been able to identify a key concept in the "old" subject of iterative decoding, namely pseudo-codewords. In fact, iterative decoding can be tightly characterized by the pseudo-codewords that lie within a certain polytope, which we call the fundamental polytope. Interestingly, most of the available concepts for understanding iterative decoding of finite-length codes can now be explained in terms of pseudo-codewords and the fundamental polytope. We will also point out connections to linear programming and statistical physics.
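As a point of reference, a commonly used formulation (a sketch of the standard definition, not a quotation from the talk): for a parity-check matrix H with rows h_1, ..., h_m, the fundamental polytope is the intersection of the convex hulls of the local codes defined by the individual checks,

    P(H) = \bigcap_{j=1}^{m} \operatorname{conv}\bigl\{\, x \in \{0,1\}^n : \langle h_j, x \rangle \equiv 0 \pmod{2} \,\bigr\},

and the pseudo-codewords are, up to scaling, the vertices of P(H). Linear-programming decoding minimizes a linear cost function over this same polytope, which is one way to see the connection mentioned above.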
Codes -- by definition -- are discrete objects, a fact that usually makes it relatively hard to establish properties of a specific code. The fundamental polytope, however, is a continuous and convex object: not only does this make it easier to derive properties of a specific code (such as the minimum pseudo-distance), but these properties are also more relevant to iterative decoding behavior than classical notions like the minimum Hamming distance. These observations have interesting implications for code design.
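For instance, on the AWGN channel a commonly used figure of merit (again a sketch of the standard definition, not spelled out in this abstract) is the pseudo-weight of a nonzero pseudo-codeword \omega,

    w_p(\omega) = \frac{\bigl(\sum_{i=1}^{n} \omega_i\bigr)^{2}}{\sum_{i=1}^{n} \omega_i^{2}},

and the minimum of w_p over all nonzero pseudo-codewords plays a role for iterative and linear-programming decoding analogous to the one the minimum Hamming distance plays for maximum-likelihood decoding.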
(Parts of this talk are based on joint work with Ralf Koetter, UIUC.)
SYSTEMS SEMINAR WEB PAGE: http://www.cae.wisc.edu/~gubner/seminar/