# Better Codes - Better Communication

R.A. Echard
Space Systems Development Department

S.C. Chang
George Mason University

Introduction: "Can you hear me now?" This popular phrase, familiar to everyone in today's fast-paced world, is more than just an indicator of convenience. To a Naval communicator, maintaining communications (Comms) during military operations is vital to the survival of U.S. and allied forces. How do we ensure that our forces have the best communication systems available? This question boils down to a mathematical theory discovered by Claude Shannon more than 50 years ago1 but only recently exploited to its full potential. Researchers at NRL and George Mason University have refined a system of error control coding that enables new possibilities for a variety of communication products. Our codes, known as π-rotation codes, offer error correction capability approaching the fundamental mathematical limit while maintaining a small and simple circuit configuration.

Background: All messages can be represented by a string of ones and zeros. Coding for error control is simply a matter of mapping these binary message patterns to longer sequences with deterministic relationships. The amount of information, as measured by the number of information bits, is still maintained in the coded transmission. However, the longer sequence with predetermined arrangement allows the decoder in the receiver to recover the information, even when the transmission has been corrupted.
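As a toy illustration of this mapping (not the π-rotation codes described later), the sketch below shows a single-parity-check code in Python: two information bits are mapped to a three-bit codeword whose bits must sum to an even number, giving the decoder a deterministic relationship to test against corruption. The function names are ours, chosen for illustration only.

```python
# Illustrative single-parity-check code: every codeword must have an
# even number of ones. This deterministic relationship lets the
# receiver detect a single corrupted bit.

def encode(msg):
    """Map a list of message bits to a codeword by appending a parity bit."""
    return msg + [sum(msg) % 2]

def check(word):
    """True if the received word still satisfies the parity constraint."""
    return sum(word) % 2 == 0

codeword = encode([1, 0])     # two information bits -> three-bit codeword
assert check(codeword)        # clean transmission passes the check
corrupted = [1, 1, 1]         # one bit flipped in transit
assert not check(corrupted)   # the deterministic relationship is violated
```

The information content (two bits) is unchanged; the extra bit exists only so the decoder has a constraint to verify.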

An important parameter of a code is the ratio of the number of information bits to the number of bits in the encoded string, or codeword. We call this the "rate" of the code, and it determines the additional bandwidth required to implement the error control scheme. For example, a rate-1/2 code nominally requires twice the bandwidth, a rate-1/3 code three times the bandwidth, and so forth. The cost of additional bandwidth is well offset by the improved power efficiency if the codes are properly designed.
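The rate arithmetic above can be written out directly; the function names here are ours, for illustration only.

```python
# Code rate and the nominal bandwidth expansion it implies.

def code_rate(k, n):
    """k information bits carried in an n-bit codeword."""
    return k / n

def bandwidth_expansion(k, n):
    """Nominal factor by which transmission bandwidth grows: 1 / rate."""
    return n / k

assert code_rate(1, 2) == 0.5          # rate 1/2: twice the bandwidth
assert bandwidth_expansion(1, 3) == 3.0  # rate 1/3: three times
```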

More than 40 years ago, a coding scheme was invented called low-density parity-check (LDPC) codes.2 These codes were recently rediscovered as having the capability to reach the efficiencies promised by Shannon's original theory.3 In general, to describe LDPC codes that perform at these high efficiencies requires a massive amount of memory for each code design. With the introduction of π-rotation codes, the amount of memory required is reduced to just a few bytes. In addition to the memory savings, π-rotation codes are easily encoded by using a unique encoding architecture based on a single permutation mapping. The performance of π-rotation codes places them among the best performing error control codes known today.

LDPC Codes: LDPC codes are described by the parity check matrix H. The code C is defined as the set of all codeword vectors y such that 0 = Hy. Thus, C is the set of all vectors orthogonal to the row vectors in the matrix H. Good performance requires a large H matrix formed from randomly placed one/zero patterns. Figure 9 shows a simple example. The dots in this figure represent the locations of the ones, and the blank regions represent zeros. A good performing matrix may have row and column dimensions of many tens of thousands. In general, to create codewords from information messages, this matrix must be inverted to obtain the generator matrix. Although conceptually this is a simple procedure, the large size makes conversion to the generator matrix a significant computational process. In general, the LDPC coding system requires the storage of both the parity-check matrix and the generator matrix for encoding and decoding the messages. Thus, a substantial amount of memory and processing power is required to implement LDPC codes.
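The defining relation 0 = Hy (with arithmetic modulo 2) can be sketched with a toy matrix. A real LDPC parity-check matrix is far larger and sparser than this example, which is illustrative only:

```python
import numpy as np

# Toy parity-check matrix: each row is one parity constraint.
# (Real LDPC matrices have dimensions in the tens of thousands and
# only a few ones per row; this H is just for illustration.)
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def is_codeword(y):
    """y belongs to C iff every row of H is orthogonal to y over GF(2)."""
    return not np.any(H @ y % 2)

y = np.array([1, 1, 0, 0, 1, 1])
assert is_codeword(y)                    # satisfies all three checks
assert not is_codeword(np.array([1, 1, 1, 0, 1, 1]))  # one bit flipped
```

The decoder's job is essentially to find the codeword (a vector satisfying all rows of H) closest to the corrupted received vector.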

The π-Rotation Codes: The π-rotation codes are a form of LDPC codes that do not require a generator matrix and are described by a set of three integers for the classic version and an additional four integers for the extended version.4 The three integers [m, a, b] are used to define a permutation matrix through a unique chaotic generator. This permutation matrix has a single "one" in each column and row of the matrix. Its size is determined by the value of m. Calling this matrix πA, we create four matrices by rotating the πA permutation matrix, as shown in Fig. 10. Using these four matrices as a basis, we create the mosaic pattern as shown to produce the larger matrix Hd. This matrix is combined with a pair of diagonal matrices to create the complete H matrix. The circuit to create codewords is easily implemented, as shown in Fig. 11. The results in Figs. 12 and 13 provide an example of the bit-error-correcting performance of π-rotation codes.
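The rotation-and-mosaic idea can be sketched as follows. Note the hedges: the actual chaotic generator that maps [m, a, b] to a permutation, and the exact tiling of Fig. 10, are specified in the referenced work,4 so the permutation and the 2×2 arrangement below are hypothetical stand-ins used only to show the construction's key property.

```python
import numpy as np

# Stand-in for the chaotic generator: an arbitrary m x m permutation.
m = 4
perm = [2, 0, 3, 1]                   # hypothetical permutation of 0..m-1
pi_A = np.eye(m, dtype=int)[perm]     # exactly one "1" per row and column

# The other basis matrices are 90-degree rotations of pi_A. A rotation
# of a permutation matrix is again a permutation matrix, so sparsity
# (one "1" per row/column) is preserved automatically.
pi_B = np.rot90(pi_A, 1)
pi_C = np.rot90(pi_A, 2)
pi_D = np.rot90(pi_A, 3)

# One plausible mosaic of the four tiles (the exact pattern follows
# Fig. 10, not this arrangement):
Hd = np.block([[pi_A, pi_B],
               [pi_C, pi_D]])

# Each row and column of Hd crosses two tiles, so it holds exactly
# two ones: the matrix stays sparse by construction.
assert (Hd.sum(axis=0) == 2).all() and (Hd.sum(axis=1) == 2).all()
```

Because the whole structure is regenerated from a handful of integers, only those integers need to be stored or exchanged, which is the source of the memory savings described above.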

FIGURE 9
Example of an arbitrary parity-check matrix.

FIGURE 10
The four π-rotation permutation matrices are arranged to create the larger Hd composite matrix.

FIGURE 11
Layout of the classic and extended π-rotation encoding circuit.

FIGURE 12
Bit error rate (BER) performance curves for irregular (extended) π-rotation LDPC codes in the additive white Gaussian noise channel. All information lengths are 15,000 bits.

FIGURE 13
Signal-to-noise ratio required to obtain 10⁻⁴ BER performance with irregular π-rotation LDPC codes compared to the binary Shannon limit. Each code is completely defined by a 7-integer vector.

Summary: These codes perform within 1 dB of the theoretical limit. Also, since these codes are defined by a short description, the exact code is easily stored or transmitted for exchange between radios. The simple circuit implementation adds to the usefulness of the π-rotation codes, since code rate and code length are easily adapted to dynamic channel conditions. This research is an example of NRL's contribution to the continued improvement of communication resources available to our Service personnel.

Acknowledgments: Richard Echard acknowledges support from the Naval Research Laboratory to pursue studies at the George Mason University through the Edison Memorial Graduate Training Program.