The real problem that 8-VSB is having with multipath is that the data field synchronizing (DFS) signal, which was supposed to be a training signal for the adaptive filtering used for equalization and ghost cancellation, is poorly designed as a training signal. I've heard that the Grand Alliance was warned about this at the time the ATSC DTV Broadcast Standard was being adopted. C. B. Patel and I have both subsequently tried to bring this matter to industry attention. Zenith engineers have taken a revisionist stance, saying that the DFS signal was never intended to be a training signal. Any receiver designer who says his experience shows the ATSC signal doesn't have enough information in it to support fast equalization is denigrated with comments to the effect that competent receiver designers all rely successfully on straight "blind" equalization techniques.
There has been over-optimism in the interpretation of laboratory tests concerning how well equalization of the ATSC DTV signal can be done in the presence of dynamic multipath. A very-slow-tracking adaptive filter appears to recover quite quickly from occasional short interludes of dynamic multipath conditions (rapid fades) because the weighting coefficients will have changed so little during each such interlude. This is a variant of the old observation that a stopped clock shows the correct time twice a day. Testing of how well the filtering used for equalization and ghost cancellation adapts, when the dynamic multipath changes the transmission/reception channel characteristic, is not rigorous unless the changes are such that the characteristic does not return to a previous state.
A couple of summers ago, some visiting young Samsung Electronics Company (SEC) engineers demonstrated a lash-up of a Radio Shack indoor antenna with some sort of wideband amplification, their front end and decoder, and one of our computer monitors in the fifth-floor downtown DC law offices where I was working. C. B. Patel also happened to be visiting me at the time. Reception was of signal reflected from buildings down the street, since the block of buildings the law offices were in blocked the line of sight toward the transmitter location. The indoor reception was so much better than we had expected that we telephoned NAB and ATSC executives and invited them over. They came, and everybody agreed the SEC engineers' impromptu demonstration was encouraging, considering the bad reports from the Sinclair Broadcasting tests a month or two before.
Unless someone walked directly in front of the antenna, reception was perfect. But recovery of reception, once it was disrupted, took a noticeable amount of time, long enough to intrude on enjoyable viewing if such disruptions occurred repeatedly, as they would be apt to do under rapid-fading (dynamic multipath) conditions.
As I recall, Dr. Patel has told DTV receiver designers that, if blind equalization is employed, the time for LMS-gradient algorithms to converge on a set of correct weighting coefficients is one to two seconds when tracking has yet to be established. This presumes strong-signal reception conditions with only static multipath distortion that is not extraordinarily severe. Self-styled experts say that newer algorithms will overcome this shortcoming. These newer algorithms supposedly converge faster than the LMS-gradient algorithms Dr. Patel's people used at the old Advanced Media Laboratory.
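For readers who want to see the mechanics behind those convergence times, here is a minimal Python sketch of an LMS-gradient equalizer run against a known training signal. The tap count, step size, and toy one-ghost channel are my own illustrative choices, not anything from an actual receiver design.

import numpy as np

rng = np.random.default_rng(0)

def lms_equalize(received, reference, num_taps=64, mu=1e-4):
    """One pass of the LMS-gradient algorithm against a known reference."""
    w = np.zeros(num_taps)                    # tap weights start at zero
    sq_err = np.zeros(len(reference))
    for n in range(num_taps, len(reference)):
        x = received[n - num_taps + 1:n + 1][::-1]  # newest sample first
        e = reference[n] - w @ x                    # error vs. known signal
        w += 2 * mu * e * x                         # LMS-gradient weight update
        sq_err[n] = e * e
    return w, sq_err

# Toy 8-level signal through a channel with one -6 dB ghost at 5 samples.
symbols = rng.choice([-7., -5., -3., -1., 1., 3., 5., 7.], size=20000)
received = symbols.copy()
received[5:] += 0.5 * symbols[:-5]                   # the ghost
received += 0.1 * rng.standard_normal(len(symbols))  # receiver noise

w, sq_err = lms_equalize(received, symbols)
print("mean squared error over the last 1000 symbols:", sq_err[-1000:].mean())

Blind equalization replaces the known reference with the receiver's own symbol decisions, so early on the "reference" is mostly wrong, which is exactly where the one-to-two-second convergence figures come from.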
I recently got a tip that the newer algorithms that are under consideration are not found in digital communications textbooks, but are in adaptive filtering textbooks. I was told they were RLS algorithms that gobbled quite a bit of silicon die area, but were manufacturable. RLS methods generally involve a computational cost that increases about as the square of the number of taps contained in the adaptive filter. This is a prohibitively high cost for a DTV receiver, given the current state of silicon integrated-circuit technology.
I started some detective work in a textbook I had at hand, Adaptive Filter Theory, 2nd Ed., by Simon Haykin, published in 1991 by Prentice-Hall. The only recursive-least-squares algorithms known in 1991 that do not gobble silicon beyond belief are the fast transversal filter (FTF) algorithms. They realize the RLS solution with a computational cost that increases only linearly with the number of taps contained in the adaptive filter, as in the LMS-gradient algorithms.
The computational cost of the FTF algorithms is at least four times larger than that for the LMS-gradient algorithms, however, with division calculations being required. For an adaptive filter having an (M+1)-tap kernel, the LMS-gradient algorithms require about 2M+1 multiplications and 2M additions/subtractions, with no divisions being required. The FTF algorithms require at least 7M+12 multiplications, 4 divisions, and 6M+3 additions/subtractions, with additional computation being required to avoid long-term instabilities.
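Plugging a hypothetical 512-tap kernel (M = 511) into those operation counts makes the burden concrete. The raw multiplication ratio works out to about 3.5 to 1; the four divisions per iteration, which cost far more silicon than multiplications, plus the stabilization overhead are presumably what push the overall cost to at least four times that of LMS.

# Per-iteration operation counts from the formulas quoted above,
# evaluated for a hypothetical 512-tap (M = 511) equalizer kernel.
M = 511
lms_mults = 2 * M + 1          # LMS-gradient: about 2M+1 multiplications
lms_adds = 2 * M               # and 2M additions/subtractions, no divisions
ftf_mults = 7 * M + 12         # FTF: at least 7M+12 multiplications
ftf_adds = 6 * M + 3           # 6M+3 additions/subtractions
ftf_divs = 4                   # plus 4 divisions every iteration

print(f"LMS: {lms_mults} mults, {lms_adds} adds")
print(f"FTF: {ftf_mults} mults, {ftf_adds} adds, {ftf_divs} divides")
print(f"multiplication ratio: {ftf_mults / lms_mults:.2f}")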
An RLS algorithm converges in about 2M iterations for small error signals, where M is the number of taps in the kernel of the adaptive filter, which the textbooks indicate is typically about an order of magnitude faster than the LMS-gradient algorithms converge. If one digs into the supporting experiments, one finds that the "order of magnitude" is in binary number terms (that is, roughly a factor of two), not in decimal number terms, as one might presume from casual reading of the textbooks.
The higher convergence speed of the RLS algorithm depends on the error signal being small-valued and on the signal-to-noise ratio of the filter input signal being high. So, getting an initial set of correct weighting coefficients for the adaptive filter using blind equalization techniques will still take time, especially when the S/N of the filter input signal is low. Moreover, dynamic multipath conditions tend to be more troublesome during weak-signal reception, where the convergence of the RLS algorithm is not so much greater than that of the LMS-gradient algorithms.
The faster convergence speed of an RLS algorithm would appear to reduce the chances for loss of adaptive filter tracking owing to dynamic multipath conditions, and for such loss being of protracted duration. Data Communications Principles by Gitlin, Hayes & Weinstein, published in 1992 by Plenum Press, indicates this can be the case with many-tap filters in "Appendix 8D: Tracking Properties of the LMS and RLS Algorithms," pp. 598-601. A Ling and Proakis paper, "Lattice decision-feedback equalizers and their application to fading dispersive channels," IEEE Trans. on Communications, vol. 33, no. 4, April 1985, is cited as showing instances where the RLS algorithm is superior to the LMS algorithm. For the more common practical cases, where the variation of the channel is small relative to the steady-state mean square error, the RLS tracking error is indicated to be little smaller than the LMS tracking error.
Also see page 501 of Adaptive Filter Theory, 2nd Ed., which is instructive on the question of how the RLS and LMS algorithms compare in tracking a non-stationary environment. Haykin indicates the tracking performance is influenced not only by the rate of convergence (which is a transient characteristic) but also by the fluctuation of the steady-state performance of the algorithm, as influenced by measurement and algorithm noise. With both algorithms tuned to minimize the misadjustment of the filter response by proper optimization of their forgetting rates (i.e., the exponential weighting factor lambda for the RLS algorithm and the step-size parameter mu for the LMS algorithm), the LMS algorithm exhibits tracking performance superior to that of the RLS algorithm. Haykin cites the 1986 paper of Eleftheriou and Falconer, "Tracking properties and steady state performance of RLS adaptive filter algorithms," IEEE Trans. on Acoustics, Speech, and Signal Processing, vol. ASSP-34, pp. 1097-1110, and the 1989 paper of Bershad and Macchi, "Comparison of RLS and LMS Algorithms for Tracking a Chirped Signal," Proc. ICASSP, Glasgow, Scotland, pp. 896-899.
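To see what Haykin is getting at, here is a toy Python comparison of the two algorithms tracking a slowly drifting channel. The drift model, the step size mu, and the forgetting factor lambda below are arbitrary illustrative choices; which algorithm turns in the lower steady-state error depends on how carefully each is tuned, which is exactly the point.

import numpy as np

rng = np.random.default_rng(1)
N, taps = 5000, 8
mu, lam = 0.01, 0.995              # LMS step size; RLS forgetting factor

# A drifting channel: unit main path plus a ghost that slowly wanders.
x = rng.choice([-1.0, 1.0], size=N)
d = np.zeros(N)
w_true = np.zeros(taps)
w_true[0] = 1.0
for n in range(taps, N):
    w_true[3] = 0.3 * np.sin(2 * np.pi * n / 2000.0)
    d[n] = w_true @ x[n - taps + 1:n + 1][::-1] + 0.01 * rng.standard_normal()

def run_lms():
    w, sq_err = np.zeros(taps), np.zeros(N)
    for n in range(taps, N):
        xs = x[n - taps + 1:n + 1][::-1]
        e = d[n] - w @ xs
        w += 2 * mu * e * xs
        sq_err[n] = e * e
    return sq_err

def run_rls():
    w, sq_err = np.zeros(taps), np.zeros(N)
    P = np.eye(taps) * 100.0               # inverse-correlation estimate
    for n in range(taps, N):
        xs = x[n - taps + 1:n + 1][::-1]
        k = P @ xs / (lam + xs @ P @ xs)   # RLS gain vector
        e = d[n] - w @ xs                  # a priori error
        w += k * e
        P = (P - np.outer(k, xs @ P)) / lam
        sq_err[n] = e * e
    return sq_err

print("steady-state MSE, LMS:", run_lms()[-1000:].mean())
print("steady-state MSE, RLS:", run_rls()[-1000:].mean())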
I suspect that trying to improve blind equalization of ATSC signals is a wild goose chase in practical engineering terms. The improvement that can be made for exceptional cases costs silicon die area in every receiver. It seems to me the better way to go is simply to modify the ATSC signal so it incorporates a better training signal than the current DFS data segment. This can be accommodated by adding one or two data segments per data field.
A training signal that uses a repetitive PN511 sequence should be satisfactory, based on experience gained from the earlier work on NTSC ghost cancellers. Charlie Dietrich's U.S. Patent No. 5,065,242, which can be downloaded from http://www.uspto.gov or http://www.ibm.com/patents, is instructive on the advantages of repetitive-PN-sequence signals. The raster-mapping techniques Dietrich describes are unnecessary if one does not use his cepstrum technique for locating ghosts. Matched filters can be used to locate ghosts in a more straightforward manner. Dietrich's DFT method for spectrum whitening is very good, however, and will work if the PN511 sequence is extended with null samples so that it contains an integral power of two samples.
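To illustrate how a matched filter locates ghosts against a repetitive PN sequence, here is a sketch using circular correlation. The 9-stage LFSR taps below (x^9 + x^5 + 1) generate a maximal-length 511-chip sequence, but they are only an example; I am not claiming they match the PN511 generator in the ATSC specification.

import numpy as np

def pn511():
    """Maximal-length 511-chip sequence from a 9-stage Fibonacci LFSR."""
    state = [1] * 9
    chips = []
    for _ in range(511):
        chips.append(state[-1])
        feedback = state[-1] ^ state[4]    # taps for x^9 + x^5 + 1
        state = [feedback] + state[:-1]
    return np.array(chips) * 2.0 - 1.0     # map bits to +/-1 levels

pn = pn511()
tx = np.tile(pn, 3)                        # three repetitions of the sequence

# Toy multipath: main path plus a -10 dB ghost delayed 37 samples.
rx = tx.copy()
rx[37:] += 0.32 * tx[:-37]
rx += 0.05 * np.random.default_rng(2).standard_normal(len(rx))

# Correlate the middle repetition against all 511 circular shifts of the
# PN sequence; because the sequence repeats, the ghost of the preceding
# repetition wraps around cleanly, and the correlation peaks are the
# channel taps (delay and amplitude of each path).
mid = rx[511:1022]
corr = np.array([mid @ np.roll(pn, k) for k in range(511)]) / 511.0
paths = sorted(np.argsort(np.abs(corr))[-2:])
print("path delays:", paths, "amplitudes:", corr[paths])

Padding the 511 chips with a null sample to reach 512, an integral power of two, is what makes the DFT spectrum-whitening approach convenient.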
An important thing is to locate the repetitive PN sequence so that it is not overlapped by the ghosts of other data. I learned this concept from C. B. Patel, and the designers of the ATSC DFS signal apparently did not have this concept in hand. Engineers at Sarnoff have speculated that other-data ghosts can be suppressed by accumulating DFS signals over several data fields. Accumulation of the training signal over several data fields is exactly what one should avoid. What is desirable is that the training signal have sufficient energy that the adaptive filtering can be updated immediately, based on the training signal incorporated in a single data field. This allows re-initialization of the adaptive filter weighting coefficients as often as every 25 milliseconds, avoiding protracted losses of signal when the data-directed algorithms lose tracking.
Insertion of one additional data segment per data field will reduce the data frame rate from 20.66 frames per second to 20.60 frames per second, a 0.32% loss in channel capacity compared to the 1995 ATSC standard. Insertion of two additional data segments per data field will reduce the data frame rate from 20.66 frames per second to 20.53 frames per second, a 0.64% loss in channel capacity compared to the 1995 ATSC standard. Such losses would seem acceptable to broadcasters, supposing the number of receiver sites able to receive off-the-air signals can be increased by at least a percent or so. Every receiver purchaser would benefit from the receiver cost reduction that would be possible because silicon-die-area requirements would be kept relatively modest.
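The bookkeeping behind those figures, for anyone who cares to check it (313 segments per data field, 832 symbols per segment, and two fields per frame, per the ATSC specification):

# Frame rate and capacity loss when training segments are added per field.
SYMBOL_RATE = 10_762_237.76        # 8-VSB symbols per second
SYMBOLS_PER_SEGMENT = 832
for extra in (0, 1, 2):
    segments_per_field = 313 + extra
    frame_rate = SYMBOL_RATE / (2 * segments_per_field * SYMBOLS_PER_SEGMENT)
    loss_pct = 100.0 * extra / segments_per_field
    print(f"{extra} extra segment(s): {frame_rate:.3f} frames/s, "
          f"{loss_pct:.2f}% capacity loss")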
The real horse race between single-carrier and multiple-carrier modulation systems should be run without hindering the single-carrier modulation system with a blind equalization requirement. Synchronous equalization is another hobble that unquestionably should be removed from the single-carrier modulation system before systems are compared.
As far as I can discern from the number of taps various manufacturers say they are employing in their adaptive filters, no manufacturer is currently using fractional equalization. 500 to 576 taps means sampling is probably being done at baud rate if ghosts delayed as long as 40-odd microseconds are to be suppressed. Sampling at baud rate through the equalizer simply does not meet the Nyquist criterion for adequacy of sampling if the ATSC signal is phase-modulated owing to dynamic multipath. Fractional equalization has long been known to be a necessity for good performance of a data-communications radio receiver. Data Communications Principles by Gitlin, Hayes & Weinstein indicates on page 535 that a (3/4)-symbol-epoch fractional equalizer performs substantially as well as the (1/2)-symbol-epoch fractional equalizer known to be absolutely okay. Fractional equalization costs silicon die area, of course, a (3/4)-symbol-epoch fractional equalizer having one-third more taps than a synchronous equalizer. A 640-tap equalizer is what one expects to see if dynamic multipath is to be suppressed. However, there are some good cheats available, such as sampling at baud rate when canceling long-delayed ghosts. The more frequent sampling in the spectrum-whitening filtering will compensate for the reduced sample rate in the ghost-cancellation filtering.
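The tap arithmetic behind these numbers, ignoring whatever margin one allocates for pre-ghosts, runs as follows:

# Taps needed to span a ~40 us ghost delay at various tap spacings.
SYMBOL_RATE = 10_762_237.76            # 8-VSB symbols per second
span_symbols = 40e-6 * SYMBOL_RATE     # ~430 symbol epochs in 40 us
for label, spacing in (("synchronous (T)", 1.0),
                       ("half-epoch (T/2)", 0.5),
                       ("three-quarter-epoch (3T/4)", 0.75)):
    print(f"{label}: ~{span_symbols / spacing:.0f} taps")

Roughly 430 synchronous taps cover the 40-microsecond span alone, hence 500-plus taps once pre-ghosts are covered, and about a third more than that at (3/4)-symbol-epoch spacing, which is where the 640-tap figure comes from.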
It appears to me that, even with fractional equalization, the single-carrier modulation system should still take a lot less silicon die area in the receiver than an 8000-carrier system. Presumably DFT techniques are used for demodulation; 8000 synchrodyne-to-baseband circuits for the plural-carrier system would be a bit much. There will be 8192 multipliers for gain-controlling each of the DFT bin results before data-slicing, as opposed to about 1300 multipliers for fractional equalization of the single-carrier baseband signal. The DFT for spectrum whitening of the single-carrier baseband signal in accordance with the training signal requires only a 512-bin DFT. For broadcasting to stationary receivers there is not much question that COFDM will lose hands down in the silicon-die-area shoot-out, supposing ATSC modifies its standard to include a good equalizer training signal so the multipath performance of single-carrier transmissions can be improved.
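The bookkeeping behind that comparison, using the multiplier counts quoted above (real silicon budgets would of course depend on word widths and time-multiplexing):

# Rough multiplier bookkeeping for the two modulation systems.
ofdm_gain_multipliers = 8192       # one gain control per DFT bin
vsb_equalizer_multipliers = 1300   # fractional equalization, per the estimate above
vsb_whitening_bins = 512           # DFT used for spectrum whitening
vsb_total = vsb_equalizer_multipliers + vsb_whitening_bins
print(f"multi-carrier: {ofdm_gain_multipliers}, "
      f"single-carrier: {vsb_total}, "
      f"ratio: {ofdm_gain_multipliers / vsb_total:.1f}x")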
The proponents of single-carrier transmission have been pounding on higher useful data rates as their great advantage over COFDM. I think they should accept a somewhat lower useful data rate and put in the overhead necessary to improve multipath performance.
Clearly, the most damning argument against COFDM for broadcasting is that, even in the era of falling integrated-circuit prices, the receiver costs are appreciably higher than for 8-VSB. The volume on TV receivers is so great that even a few cents' cost advantage, when integrated over the millions of receivers a manufacturer will make, equals more bucks than its bean-counters can ignore. Other, low-volume businesses are merely peripheral considerations to a TV receiver manufacturer. TV for tanks is an issue down in the noise.
Incidentally, the advocates of COFDM seem to me to be falling into a trap many of us fell into when pyramid-processing types of hierarchical filtering were being investigated in the mid-1980s. There are several things that the system can do when they are separately considered, but unfortunately they cannot all be accomplished at the same time, owing to the parameters of the system having to be different in order to be optimal for each task.
The current scrambling around by ATSC proponents to try to accommodate the desire to televise to moving vehicles is, to my mind, not a very sensible business plan. Broadcasting should not be handmaiden to DOD desires or to a small special-purpose receiver market. The ATSC DTV Broadcast Standard should be modified to optimize it for its original purpose: over-the-air broadcasting. As has been pointed out by others, over-the-air broadcasting has some real advantages over wired systems and satellite broadcasting systems.
We do not have a situation where engineers should be looking for a basic unifying equation underlying the universe. We have a situation where each system should be tailored to its specific requirements, with the engineering compromises made to optimize each system individually rather than to make these systems uniform in unnecessary ways.