
On the Application of Turbo Codes to the Robust Transmission of Compressed Images

Jiali He, Daniel J. Costello Jr., Yih-Fang Huang, and Robert L. Stevenson

Dept. of Electrical Engineering, University of Notre Dame, Notre Dame, IN 46556, USA
he.2@nd.edu, costello.2@nd.edu
Abstract

Compressed images transmitted over noisy channels are extremely sensitive to bit errors. This necessitates the application of error control channel coding to the compressed representation before transmission. This paper presents an image transmission system which takes advantage of the superior performance of Turbo codes, an important new class of parallel concatenated codes. Several aspects of the application of Turbo codes to image transmission are studied, including a comparison to a previous image transmission system using convolutional codes. Experimental results for several channel signal-to-noise ratios show that, in the same SNR range, Turbo codes achieve much better performance with less decoding complexity than convolutional codes, and that similar performance can be achieved at much lower channel SNRs. Studies also show that the use of feedback from an outer Reed-Solomon code to aid Turbo decoding results in further improvement.

This work was supported by the Lockheed-Martin Corporation, NSF Grant NCR95-22939, and NASA Grant NAG5-557.

1 Introduction

In 1993 a French research group presented a new "parallel concatenated" coding scheme called Turbo codes [1]. These codes are capable of achieving a bit error rate of 10^-5 at a channel signal-to-noise ratio (SNR) which is only 0.7 dB away from capacity, an improvement of almost 2 dB compared to the best previously known codes. More recently, much research has been done on the structure and performance of this new coding scheme. In this paper, we study the application of Turbo codes to the robust transmission of compressed images over noisy channels.

The constraints on bandwidth, power, and time in many image communication systems prohibit the transmission of uncompressed raw image data. A compressed image representation, however, is very sensitive to bit errors, which can severely degrade the quality of the image at the receiver. Therefore, the application of a channel code is required before transmission over noisy channels. In [2], an image transmission system was described which uses a rate 1/2 convolutional code with constraint length 7 to protect the compressed images. At the receiver, there is feedback from the post-processing unit to the channel decoder, and decoding proceeds iteratively. During the first iteration, the decoder uses standard Viterbi decoding. The second and third iterations are based on the "pinned state" Viterbi algorithm [3]. For the fourth iteration, a list-based trellis decoder accepts feedback from the post-processor and reconsiders the possible channel error locations until the post-processor is satisfied with the decompressed image. In this paper, we remove the convolutional code and the feedback and substitute a rate 1/2 Turbo code. We also use two Reed-Solomon (RS) codes to protect the header information, since it is critical for reconstructing the image. The post-processor, which uses the Huber-Markov random field (HMRF) image model [4] to detect errors, can still be used to reduce the quantization noise after reconstruction of the image. Turbo codes are characterized by a large interleaver, and their performance improves with increasing interleaver size. Thus, the large number of bits in an image representation makes Turbo codes naturally suitable for image transmission. In our scheme, the rate 1/2 (37, 21) Turbo code from the original paper [1] is used with an interleaver equal to the size of the compressed image block, which is slightly more than 64K bits in our compressed test image, and the MAP algorithm [5] is used for iterative decoding. In the following, a more detailed description of the system is presented in Section 2. Experimental results are reported in Section 3, and a detailed comparison is made to the previous system.
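The sensitivity of an entropy-coded representation to single bit errors is easy to demonstrate with a short sketch (using zlib's DEFLATE as a stand-in entropy coder, not the JPEG codec discussed in this paper):

```python
import zlib

# A stand-in for an entropy-coded image body (not JPEG; just DEFLATE).
payload = bytes(range(256)) * 8
stream = bytearray(zlib.compress(payload))

stream[50] ^= 0x01  # a single bit error well inside the coded stream

try:
    decoded = zlib.decompress(bytes(stream))
    intact = decoded == payload
except zlib.error:
    intact = False

print("stream survived one bit error:", intact)  # virtually always False
```

One flipped bit desynchronizes the variable-length code, so the remainder of the stream decodes incorrectly or not at all; the same effect in a JPEG bit stream is what degrades the reconstructed image.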

2 System Description

2.1 Transmitter

At the transmitter, the input image is first compressed by the source encoder using the JPEG still image compression standard. Then, Reed-Solomon codes are used to protect the header information. The header is encoded into two RS code words using two different RS codes. The codes (255, k1) and (255, k2) are chosen so that k1 + k2 will accommodate the largest possible header, and the JPEG header length is expanded to k1 + k2 with JPEG fill bytes. This coded header is then interleaved into the entropy coded image body to get the new compressed representation. Note that in [2], RS codes are also used for the header, and unequal strength RS codes are required to make the iterative decoding at the receiver possible. Here we use RS codes to make the experimental results directly comparable to the previous system, and to make possible the use of feedback from RS decoding to Turbo decoding to improve the performance of Turbo codes. Clearly, a number of other block codes could have been chosen to protect the header. The new compressed representation is now encoded using a rate 1/2 Turbo code with constraint length 5. The (octal) generators for this code are 37 and 21. The interleaver size is chosen to be the number of bits in the new compressed representation plus four, where the last four bits are used to terminate the first component code [6] (while the second component code is "left open"), and the interleaver is improved using a method presented in [7]. The number of bits in a compressed image is usually very large, so the performance of Turbo codes is expected to be much better than that of convolutional codes. After channel coding, the bit sequence is transmitted over a noisy channel using BPSK modulation.
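A minimal sketch of such an encoder, assuming the usual recursive systematic convolutional (RSC) form of the component codes with feedback polynomial 37 and feedforward polynomial 21 (octal); the function names and the alternating puncturing pattern are our assumptions, and trellis termination of the first component code (the four tail bits) is omitted for brevity:

```python
def rsc_encode(bits, fb_taps=(1, 1, 1, 1, 1), ff_taps=(1, 0, 0, 0, 1)):
    """One recursive systematic convolutional (RSC) component encoder.
    The tap tuples are the octal generators 37 (feedback) and 21
    (feedforward) written out as coefficients of D^0 .. D^4."""
    reg = [0, 0, 0, 0]                         # 4 memory cells, constraint length 5
    sys_bits, par_bits = [], []
    for u in bits:
        # recursion: the D^0 feedback tap acts on the input, the rest on the register
        a = u ^ (sum(t & r for t, r in zip(fb_taps[1:], reg)) & 1)
        # parity from the feedforward taps applied to (a, register)
        p = (ff_taps[0] & a) ^ (sum(t & r for t, r in zip(ff_taps[1:], reg)) & 1)
        sys_bits.append(u)
        par_bits.append(p)
        reg = [a] + reg[:-1]                   # shift the recursive bit in
    return sys_bits, par_bits

def turbo_encode(bits, interleaver):
    """Parallel concatenation of two RSC encoders, punctured to rate 1/2
    by alternately keeping parity bits from each component."""
    sys_bits, p1 = rsc_encode(bits)
    _, p2 = rsc_encode([bits[i] for i in interleaver])
    parity = [p1[i] if i % 2 == 0 else p2[i] for i in range(len(bits))]
    return sys_bits, parity
```

Without puncturing, the systematic stream plus both parity streams would give rate 1/3; keeping every other parity bit from each component restores rate 1/2.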

2.2 Receiver

The operation of the channel decoder in the present system is less complex than in the previous one. The Turbo decoder interprets the received noisy bit stream using the MAP algorithm presented in [5], modified for Turbo decoding as in [1] and [6]. The performance of Turbo codes usually improves with the number of decoding iterations. After many iterations, the performance normally approaches a limit. The number of decoding iterations needed to obtain the best possible performance depends on the channel SNR. After Turbo decoding, the two RS code words protecting the header are extracted from the compressed representation and decoded using the Berlekamp-Massey algorithm [8]. Then the image is decompressed and sent to the post-processor. The post-processor reduces the quantization noise and can also be used as a stopping criterion for the iterative MAP decoder, thereby reducing the average number of decoding iterations.

Collins and Hizlan [3] have described a way of iteratively concatenating RS and convolutional codes. This technique is also applicable to the concatenation of RS and Turbo codes. With the help of iterative decoding between the two codes, we can obtain better performance with the same number of Turbo decoding iterations or get similar performance with fewer iterations. In our system, the two RS code words interleaved in the compressed image body were both used for this purpose in the following way: after several iterations of Turbo decoding, the stronger RS code was decoded and those bits (known to be correct with very high probability) were fed back to the subsequent Turbo decoding iterations. Then, after some further iterations of Turbo decoding, the same process was repeated for the other RS code word. It was shown in [9] that there are several possible ways to utilize feedback from the RS codes, and that the best way to do this depends on the channel SNR and several other factors. For the image transmission system, we found it most efficient to use the feedback to modify the channel metric. The channel metric of the corrected RS code bits is thus changed to a very high confidence level, and this helps the Turbo decoder in its subsequent iterations.

3 Results

A 256 × 256 image of an airport (Figure 1) is used as a test image. This test image is compressed using the JPEG standard to a bit rate of 1.012 bpp. The header is encoded using the RS codes described in Section 2 with k1 = 171 and k2 = 107 and then interleaved into the image body, which expands the compressed image representation to a bit rate of 1.045 bpp. This image representation is then encoded by the rate 1/2 Turbo code, and BPSK symbols are assumed to be sent over an additive white Gaussian noise channel. The channel SNR is measured in Ep/N0, where Ep is the energy per pixel. We use the bit error rate (BER) and the average image SNR over a large number of trials as the objective performance indices.
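The channel model and the metric modification used for RS feedback can be sketched as follows. The names are ours, the channel metric is expressed as log-likelihood ratios (the paper does not specify its metric in LLR form), and the SNR here is per symbol rather than the per-pixel Ep/N0 used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def bpsk_awgn_llrs(bits, snr_db):
    """Send bits as +/-1 BPSK over AWGN and return channel LLRs
    (positive favours bit 0). SNR is Es/N0 per symbol."""
    x = 1.0 - 2.0 * np.asarray(bits, dtype=float)   # 0 -> +1, 1 -> -1
    n0 = 10.0 ** (-snr_db / 10.0)                   # Es = 1
    y = x + rng.normal(0.0, np.sqrt(n0 / 2.0), x.shape)
    return 4.0 * y / n0                             # LLR for BPSK over AWGN

def pin_rs_bits(llrs, positions, decoded_bits, big=100.0):
    """Metric modification: once an RS code word decodes cleanly, its bits
    are treated as known and their channel metric is set to a very high
    confidence level before the subsequent Turbo iterations."""
    out = llrs.copy()
    for i, b in zip(positions, decoded_bits):
        out[i] = big if b == 0 else -big
    return out
```

The pinned positions then dominate the branch metrics of the MAP decoder, which is the mechanism by which correctly decoded RS bits help the remaining iterations.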

Figure 1: Test Image

Figure 2: Performance at Higher Channel SNR (image SNR in dB vs. channel SNR in dB Ep/N0, after 1 and 2 iterations of Turbo decoding, compared with the results from [2])

The average image SNR in decibels is calculated as

SNR_ave = 10 log10 (S_ave / N_ave) dB,

where S_ave is the average signal power and N_ave is the average (quantization and channel) noise power of the reconstructed image. The average image SNR is calculated relative to the original unquantized test image, and the compression alone results in an image SNR of 23.3 dB. The BER is calculated as the average number of bit errors divided by the number of bits in the compressed representation (including fill bytes but excluding RS parity bytes).

First, we tested the performance of this system for a channel SNR range from 3.3 dB to 3.9 dB, which is the same SNR range used in [2]. Simulation results after 600 trials at each of seven channel SNR values are shown in Figure 2. A comparison with the results presented in [2] shows that even after the first iteration of Turbo decoding (and RS decoding of the header) the performance of the present system is close to that in [2]. After the second iteration, the performance of the present system is clearly superior and there are almost no errors. Thus the image SNR curve flattens out at a value of 23.3 dB. The error floor, which is usually one of the disadvantages of Turbo codes, does not present any problems in this application. Second, we tested the system performance at lower channel SNR values, namely at 2.3 dB, 1.3 dB, and 0.9 dB, respectively. Results are shown in Figure 3.
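The two performance indices can be sketched as follows (a minimal interpretation, assuming S_ave is the mean squared pixel value of the original image; the function names are ours):

```python
import numpy as np

def image_snr_db(original, reconstructed):
    """Average image SNR: 10*log10(S_ave / N_ave), with the noise taken
    relative to the original unquantized image, so that quantization
    noise and channel-induced noise are both counted."""
    orig = np.asarray(original, dtype=float)
    noise = orig - np.asarray(reconstructed, dtype=float)
    return 10.0 * np.log10(np.mean(orig ** 2) / np.mean(noise ** 2))

def bit_error_rate(sent_bits, decoded_bits):
    """Bit errors divided by the number of bits in the compressed
    representation (fill bytes included; RS parity excluded upstream)."""
    sent = np.asarray(sent_bits)
    return float(np.mean(sent != np.asarray(decoded_bits)))
```

Under this definition an error-free transmission still reports a finite image SNR, since the quantization noise of the source coder remains; that is the 23.3 dB ceiling seen in the results.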

When decreasing the channel SNR, more iterations are required to get satisfactory performance. At 2.3 dB, 3 iterations of Turbo decoding achieve better performance than the results in the much higher SNR range 3.3 dB to 3.9 dB reported in [2]; at 1.3 dB, 5 or 6 iterations of Turbo decoding are required to achieve similar or better results; and at 0.9 dB, after 10 iterations, the performance of the present scheme is better than the results at 3.3 dB reported in [2]. Thus, by using Turbo codes, we can achieve similar performance while lowering the channel SNR by as much as 2.4 dB.

We have also studied the use of feedback from the interleaved RS codes to improve the performance of the MAP decoder using the method described in Section 2. It was observed that the performance improved, and that the improvement obtained depended on when and with what confidence level we returned feedback information to the inner Turbo decoder. Unless the confidence level is high, the performance may be worse than without feedback. Thus, we must make sure that the information fed back has very high reliability. With a t-error-correcting RS code, the RS decoder can correct up to t symbol errors in one code word. If there are more than t symbol errors, either a decoder failure occurs, i.e., the RS decoder recognizes that it fails to find the correct code word, or an undetected decoding error occurs, i.e., the RS decoder outputs a code word different from the transmitted code word without recognizing its mistake. The probability of an undetected decoding error is known to be less than 1/t! [10]. Since t for the two RS codes used in this


scheme is 42 and 74, respectively, this probability is extremely small and can be excluded from consideration. We used this fact to determine when to begin the feedback, i.e., after which iteration of the Turbo decoder to begin decoding the RS codes and to change the channel metric of the corresponding decoded bits. After the Turbo decoder finished several iterations, the two RS code words were decoded. If they were decoded without detecting an error, the decoded bits were fed back. Otherwise the RS code words were decoded again after another iteration of Turbo decoding. We used a channel SNR of 0.9 dB as an example to observe the improvement obtained by using feedback, and the results show that, after 10 iterations, the BER was 8.9 × 10^-6 with feedback vs. 2.9 × 10^-5 without feedback, and the average image SNR improved accordingly. Note that this improvement was achieved with almost no added decoding complexity.

Figure 3: Performance at Lower Channel SNR (image SNR in dB vs. channel SNR in dB Ep/N0, after i iterations of Turbo decoding for i up to 10, compared with the noiseless channel)

4 Conclusion

The performance of the present system employing Turbo codes is superior to that of the system employing convolutional codes reported in [2] in the sense that the present system can achieve similar or better performance with lower decoding complexity or at a much lower channel SNR. Although the (37, 21) Turbo code was used in this system as an example, better performance can be anticipated by using the (23, 35) Turbo code, which has a lower error floor. The use of feedback from the interleaved outer RS codes to the Turbo decoder further improves the performance. According to performance simulations performed for a higher rate (rate 2/3) Turbo code, it can be expected that in the SNR range 3.3 dB to 3.9 dB, similar performance can be achieved (although with more iterations of decoding than for rate 1/2 Turbo codes) with 33% less bandwidth expansion, or with less source compression (which means less quantization noise), compared to the rate 1/2 system.

References

[1] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: Turbo codes," Proc. 1993 IEEE International Conference on Communications (Geneva, Switzerland), pp. 1064-1070, May 1993.
[2] T. P. O'Rourke, "Robust image communication: an improved design," Ph.D. Dissertation, Dept. of Electrical Engineering, University of Notre Dame, April 1996.
[3] O. Collins and M. Hizlan, "Determinate state convolutional codes," IEEE Trans. on Comm., vol. COM-41, pp. 1785-1794, Dec. 1993.
[4] T. P. O'Rourke and R. L. Stevenson, "Improved image decompression for reduced transform coding artifacts," IEEE Trans. on Circuits and Systems for Video Technology, vol. 5, no. 6, pp. 490-499, Dec. 1995.
[5] L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. on Inform. Theory, vol. IT-20, pp. 284-287, Mar. 1974.
[6] P. Robertson, "Illuminating the structure of code and decoder of parallel concatenated recursive systematic (Turbo) codes," Proc. GLOBECOM '94 (San Francisco, California), pp. 1298-1303, Dec. 1994.
[7] D. Arnold and G. Meyerhans, "The realization of the Turbo-coding system," Semester Project Report, Swiss Federal Institute of Technology, Zurich, Switzerland, July 1995.
[8] S. Lin and D. J. Costello, Error Control Coding: Fundamentals and Applications. Englewood Cliffs, NJ: Prentice-Hall, 1983.
[9] G. Meyerhans, "Interleaver and code design for parallel and serial concatenated convolutional codes," M.S. Thesis, Dept. of Electrical Engineering, University of Notre Dame, Mar. 1996.
[10] R. J. McEliece and L. Swanson, "On the decoder error probability for Reed-Solomon codes," IEEE Trans. on Inform. Theory, vol. IT-32, pp. 701-703, Sept. 1986.
