Summary

MIT Professor Emeritus William Schreiber systematically dismantles the technical arguments advanced by Sony, ATSC, and the Grand Alliance in favor of 1080i interlaced broadcasting over 720p progressive. He argues that 1080i's claimed pixel advantage is illusory, as objective ATTC tests showed 720p achieves equal or superior actual resolution.

Source document, circa 1998, preserved as-is

HDTV News Online

False Argument In Favor of Interlaced DTV Broadcasting, Part 1

by William F. Schreiber, Prof. Emeritus of Elec. Eng., MIT
Wednesday, April 29, 1998


The argument that 1080x1920, 30 frames/sec interlaced (1080I) will give better picture quality than 720x1280, 60 frames/sec progressive (720P) because it has twice as many picture elements (pixels) per frame is the most recent erroneous idea put forth by those who have been pushing the NHK 1125/60 system for years as a worldwide production standard. The earlier arguments all turned out to be incorrect; this new one is but the latest attempt to foist this obsolete technology on the American broadcasting industry.
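The pixel arithmetic behind the "twice as many pixels" claim is easy to check; a minimal sketch (Python, using only the format numbers quoted above):

```python
# Per-frame and per-second pixel counts for the two formats in the text
# (lines x pixels per line, full frames per second).
def pixels_per_frame(lines, pixels):
    return lines * pixels

def pixels_per_second(lines, pixels, fps):
    return lines * pixels * fps

print(pixels_per_frame(1080, 1920))           # 2073600 pixels/frame
print(pixels_per_frame(720, 1280))            #  921600 pixels/frame
print(pixels_per_second(1080, 1920, 30))      # 62208000 pixels/sec
print(pixels_per_second(720, 1280, 60))       # 55296000 pixels/sec
```

The 2-to-1 advantage holds only per *frame*; per second the two formats differ by barely 12 percent, and, as the ATTC measurements discussed next show, nominal pixel counts are not the resolution actually delivered.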

In fact, the vertical resolution actually achieved in 1080I is *lower* than that actually achieved in 720P, while the horizontal resolution is considerably less than 1920 pixels, as clearly shown by objective tests carried out at ATTC. Subjective tests carried out by ATEL showed that the perceived picture quality of the two systems was comparable.

Background
To understand the current controversy, it helps to recall a little history. Interlace, with its many problems, has always been used in television. Nevertheless, current systems -- NTSC and PAL -- for all their defects, have been very successful commercially. About 1970, NHK embarked upon a project to develop the next generation of TV systems. They had in mind a system much like NTSC but with a wider aspect ratio, about twice the vertical and horizontal resolution, and about five times the bandwidth. It was intended to be the "FM" of video, delivered by satellite, while NTSC, delivered terrestrially, was to be the "AM." The NHK system used interlace because it was a straightforward scale-up of existing analog systems, and it did not provide for separate production and distribution formats.

System design was done by NHK and equipment development by Sony, Matsushita, Toshiba, and other large electronics manufacturers. In the late seventies, the system was successfully demonstrated, but the special wide-band transponders were evidently deemed uneconomical. By 1984, NHK had developed a bandwidth-compression system called MUSE to permit transmission in standard transponder channels as used for NTSC. It was at this point that the concept of using the original 30-MHz system as a production system was first enunciated. Eventually, a full line of production equipment for the 1125-line system was developed in Japan. With the help of SMPTE, an attempt was made to have ANSI recognize the NHK system as a production standard.

ANSI at first agreed, but reversed itself on appeal by ABC on the grounds that the system was not actually being widely used for the proposed application.

Ever since, there has been unremitting pressure to use this system as a production standard, even to the extent of enlisting the help of the State Department. It was decisively turned down by the EBU, and that had seemed the end of the matter until the formation of the Grand Alliance.

As everyone knows, the FCC Inquiry set up in 1987 to develop HDTV transmission standards was turned on its head in 1989 by the proposal from General Instrument for an all-digital system. In the first round of tests at ATTC, there were four digital systems (two progressive and two interlaced), plus MUSE and ACTV, an NTSC-compatible analog system from the Sarnoff Laboratories. MUSE placed last in performance, and both it and the other analog system were withdrawn. The four remaining digital system proponents were forced into a shotgun wedding by ACATS, forming the Grand Alliance.

Evidently no system proponent was willing to give up his format, so all were included. (The standard-definition formats were added later by ATSC, with no testing at all.) Curiously, when the interlaced systems, which had used 960x1408 and 960x1440 in the initial tests, were combined into one, the format was raised to 1080x1920, thus reviving the NHK system as the natural production standard and providing a potential market for the production equipment already developed by Japanese companies. (The production equipment now being offered for sale is actually 1035I, not 1080I.)

The interlace arguments pro and con were spelled out in 1996 in submissions to the FCC by the interested parties as the Commission was considering the standard proposed by ACATS in 1995. Foremost among the interlace advocates were Sony, ATSC, the Grand Alliance, and North American Philips. It appears that the ATSC and GA submissions were both to a large extent written by Robert Graves, who was hired by the GA to get the proposal accepted and who is now head of ATSC. The main reasons advanced at that time for using interlace were:

1. Interlace doubles the vertical resolution for a given bandwidth and frame rate.

2. P requires more bandwidth or channel capacity than I for the same resolution.

3. We have to have interlace so that we can have cheaper receivers.

4. P raises costs for broadcasters.

5. No one knows how to make P cameras with adequate SNR.

6. Many programs that will be used for SDTV transmission already exist in NTSC (I) format.

*Every one of these arguments proved to be false.* There were so many misstatements of facts in these four submissions that I felt obliged to submit a detailed rebuttal for each. (Copies of my submissions are available to anyone interested.) In the case of ATSC and Philips, I attempted to get knowledgeable persons known to me in those organizations to deal with my objections, but no response was ever forthcoming.

In this short piece, there is insufficient space for presenting the detailed rebuttal of these erroneous statements. Briefly, 1 and 2 relate to the 2 million/1 million issue, and are dealt with below. As for 3, I receivers are slightly cheaper than P receivers, but a P-to-I converter can be built into an MPEG decoder at nearly zero cost for use with P broadcasts. (Note that an MPEG decoder for 1080I needs more than twice the memory of a decoder for 720P, so it costs more, not less.) As for 4 and 6, P broadcasting does require the upconversion from I to P at the studio for archival NTSC.

This costs almost nothing for most of the material, which originated on 24-fps film. In any event, the cost of the I-to-P converter at the sending end is entirely negligible compared with the cost of converting to any kind of digital transmission. Item 5 disappeared with the development of an excellent 720P camera by Polaroid in 1996 and the demonstration of a 720P camera by Panasonic at the recent NAB convention. *Many who saw the Panasonic 720P demonstration said that the pictures were the best TV they had ever seen.*
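The decoder-memory point under item 3 above can be checked with frame-buffer arithmetic. A sketch, assuming 8-bit 4:2:0 sampling (1.5 bytes per pixel, a common but here illustrative assumption; real MPEG decoders hold several reference frames, which scales both sides equally):

```python
# Frame-buffer size comparison: a 1080I decoder vs. a 720P decoder.
BYTES_PER_PIXEL = 1.5  # 4:2:0: one luma byte plus half a byte of chroma

buf_1080i = 1080 * 1920 * BYTES_PER_PIXEL
buf_720p = 720 * 1280 * BYTES_PER_PIXEL

print(buf_1080i / buf_720p)  # 2.25: more than twice the memory per frame
```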

An interesting and highly relevant development occurred beginning in 1994 when various laboratories began looking into the relative compressibility of P and I video.

With a P and an I signal having the same number of lines per frame and the same field rate and horizontal resolution (e.g., 480x720 P at 60 frames/sec and 480x720 I at 30 frames/sec), the P signal has twice the analog bandwidth of the I signal. However, because of the much higher statistical correlation and lower level of aliasing in the P signal, both can be MPEG-compressed to the same digital data rate with about the same subjective quality. Results like these have been reported by Bell Labs, NHK, RAI, and Project RACE of the EU. Thus, *there is no data-rate penalty for using P rather than I, and there are many advantages.*
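The raw two-to-one bandwidth relationship in that example is pure arithmetic (the compression result itself is empirical and cannot be derived this way); a sketch:

```python
# Raw (uncompressed) sample rates for the example formats in the text:
# the same 480x720 raster, the same 60 Hz field rate.
samples_p = 480 * 720 * 60  # progressive: every line refreshed 60 times/sec
samples_i = 480 * 720 * 30  # interlaced: each line refreshed 30 times/sec
                            # (60 fields/sec of 240 lines each)
print(samples_p / samples_i)  # 2.0: P carries twice the analog bandwidth
```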

As a result of all these considerations, my conclusion is that *there is no disadvantage, monetary, quality-wise, or convenience-wise to any domestic stakeholder from using P rather than I transmission.* It is true that manufacturers who have made an unwise investment in this obsolete technology would suffer a temporary setback. However, should the US broadcasting industry choose progressive transmission, I am also sure that these same companies will produce the necessary P products in short order.

Interlace and Resolution

All current analog TV systems use interlace, in which the odd lines are transmitted on one field and the even lines on the next. This was originally done in order to double the large-area flicker rate at a given bandwidth, but it can just as well be thought of as a means to double the vertical resolution by offsetting successive fields by one-half the line spacing. The hope was to achieve the doubled flicker rate and the doubled vertical resolution at the same time. However, there is no free lunch. The only circumstance under which this can be done is when the two successive fields are taken from the same (still) frame and printed on film. When there is motion and/or when the integration of the two fields is done in the eye, the scheme does not work as hoped for.
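The odd/even field structure described above can be sketched as a simple array operation (a toy illustration, not broadcast code):

```python
# Split one progressive frame (a list of scan lines) into the two
# interlaced fields: odd-numbered lines in one field, even in the next.
def interlace(frame):
    odd_field = frame[0::2]   # lines 1, 3, 5, ... (first field)
    even_field = frame[1::2]  # lines 2, 4, 6, ... (second field)
    return odd_field, even_field

frame = [f"line {n}" for n in range(1, 9)]  # an 8-line toy frame
f1, f2 = interlace(frame)
print(f1)  # ['line 1', 'line 3', 'line 5', 'line 7']
print(f2)  # ['line 2', 'line 4', 'line 6', 'line 8']
```

Each field carries half the lines, but the two fields are offset vertically by half the line spacing, which is where the hoped-for doubling of vertical resolution was supposed to come from.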

This has been known for many years. A paper by E. F. Brown of Bell Labs (BSTJ, vol. 46, no. 1, 1967, pp. 199-232) showed that the degree of resolution enhancement actually attained depended on the screen brightness; at normal brightness, it is only 10%, not 100%! Thus interlace never really worked, even in analog systems; it only seemed to.

Interlace produces many artifacts in the image. The most common is interline flicker, which is caused when adjacent lines in the frame (which are transmitted and displayed one field-time apart) are not identical. In other words, whenever there is good vertical resolution, there is interline flicker. This is the reason why interlace has been abandoned in computer monitors; computer video has full vertical resolution. Camera video, however, always has reduced vertical resolution. In tube cameras, this comes about automatically, since the physics of the camera causes the target to be discharged completely every field, thus averaging (blurring together) successive lines of each frame. In CCD cameras, this is done deliberately by discharging two lines of photosites at once. If this were not done, the interline flicker would make the image unwatchable. Those whose experience is limited to conventional TV practice will not have seen this problem to its full extent.
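The deliberate line pairing described for CCD cameras amounts to a vertical averaging filter. A simplified sketch of one field's pairing (real CCDs shift the pairing by one line on alternate fields to preserve the interlace offset):

```python
# CCD-style line pairing: average adjacent line pairs, so each field line
# already represents two frame lines. This trades vertical resolution
# for reduced interline flicker.
def pair_average(frame):
    return [(frame[i] + frame[i + 1]) / 2 for i in range(0, len(frame) - 1, 2)]

frame = [255, 0, 255, 0]    # full vertical detail: alternating lines
print(pair_average(frame))  # [127.5, 127.5]: the detail is averaged away
```

The example shows exactly the point in the text: a frame with full vertical detail comes out of the camera with that detail blurred together, which is why the nominal frame resolution of an interlaced system is never actually delivered.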

The maximum possible degree of interline flicker can be imagined by thinking of an NTSC image in which the even lines are white and the odd lines are black. While this is certainly an unusual picture, it is NTSC-legal. Such a display would flicker at 30 Hz, and the flicker would be perceived at any distance. It is the extent to which adjacent lines differ (i.e., the extent to which they represent vertical detail) that produces the flicker. For years, we had a demo of this effect in my lab at MIT. None of the hundreds of TV professionals who came through had ever seen this before and none had imagined that the effect was so large.
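The worst-case demo just described can be reproduced numerically; a toy sketch, taking 0 as black and 255 as white:

```python
# Worst-case interline flicker: even-numbered lines white, odd lines black.
# Interlace then transmits an all-white field followed by an all-black field,
# so every point on the screen alternates at the 30 Hz frame rate.
LINES = 8  # toy frame; active NTSC would have about 480 lines
frame = [255 if n % 2 == 0 else 0 for n in range(LINES)]  # one value per line

white_field = frame[0::2]  # first field:  [255, 255, 255, 255] -> all white
black_field = frame[1::2]  # second field: [0, 0, 0, 0]         -> all black
print(white_field, black_field)
```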

The necessity of reducing the vertical resolution to avoid totally unacceptable interline flicker means that the nominal resolution of interlaced systems is not the resolution actually achieved in practice. The vertical resolution actually achieved usually corresponds to the number of lines per *field,* not the number of lines per *frame.* For example, I have never seen an 1125-line demonstration in which the limiting vertical resolution was more than 700 lines.

At one time, the CBS laboratory in Stamford, Conn., had an NHK system that had been modified so that it could quickly be switched between 1125 lines interlaced and 562 lines progressive. When switched to P, there was no visible reduction in vertical resolution. The only effect was to make the line structure somewhat more visible.
END PART 1




HDTV News Online © 1998 - 2000 Advanced Television Publishing
All Rights Reserved