<img src="http://www.hdtvmagazine.com/images/hdmi_200.gif" alt="HDMI" align="right">While the multichannel audio industry keeps creating more formats for newer equipment, it also creates the need of compatibility with existing equipment. And being the digital connectivity solution that the industry says it is, HDMI has to meet the challenge of that "evolution".
When HDMI 1.1 came out, it added to the spec a new packet to carry some DVD-Audio content-protection data. All audio capabilities of DVD-Audio were already part of 1.0, but the CPPM/CPRM license (used for encrypted DVD-Audio discs) required some additional data to be transmitted.
Then, when HDMI 1.2 came out...
[url=http://www.hdtvmagazine.com/articles/2006/08/hdmi_part_5_-_a.php]Read the Full Article[/url]
HDMI Part 5 - Audio in HDMI Versions
-
Rodolfo
- Author
- Posts: 755
- Joined: Wed Sep 01, 2004 8:46 pm
- Location: Lansdowne VA
-
LesMoss
- New Member
- Posts: 2
- Joined: Fri Aug 11, 2006 1:51 pm
Why lipsync
This lipsync "feature" seems like a bad idea. Why burden every source with this responsibility for the few displays that have this problem? And exactly what does it mean? Delay the audio stream on the HDMI link? Delay the audio stream on other outputs? What is the maximum delay which must be implemented? Is the delay adjustment dynamic (such as when the user turns "game mode" on and off)? Can a receiver just pass it thru to the source devices?
I think the source of the problem is that the HDMI authors are "receiver centric" (probably to sell more chips). I would much prefer that the spec were "TV centric". In this model the TV knows its own delay and passes the delayed audio stream on its audio outputs. A video switching receiver is just a level of complication that is not needed, but the HDMI spec seems to force it.
-
Rodolfo
- Author
- Posts: 755
- Joined: Wed Sep 01, 2004 8:46 pm
- Location: Lansdowne VA
LesMoss,
I could not respond until now due to time constraints, but I thought I would share with you a response from Joe Lee, Director of Marketing at Silicon Image.
-------------------
Response: With HDMI 1.3, it is sink or repeater devices (i.e. TV or AV receivers) that report their amount of audio and/or video latency, not the source devices. Source devices (such as DVD players or STBs) typically will output the audio & video in fairly good synchronization.
This is expected because the source is reading the content material directly, and thus there are no intermediate processing steps that would cause the source to lose the reference information about audio & video synchronization.
With HDMI, the audio & video latency information is stored in a ROM chip in the device called the EDID ROM. This ROM stores information about the device's capabilities, such as supported audio & video formats. This EDID ROM is always read when a source device first powers up.
When a source is interfaced directly to a TV, there is usually no issue with audio & video sync, since the audio/video leaves the source synchronized and the TV has its own audio delay electronics to compensate for the video delay resulting from the TV's video processing.
Since the TV knows how much video delay its own processing will impart, it can compensate by delaying the audio accordingly. The problem typically occurs when a user has an AV receiver between the source and the TV. In this instance, the AV receiver extracts and plays the audio (with no significant delay), and then sends the video to the TV (which often does significantly delay the video because of its processing).
Since the TV is only getting the video, it obviously has no control to perform the audio delay that it normally would do. And since the AV receiver does not know how much video delay the TV has, it does not perform the delay. Today, a user will usually manually set the AV receiver to delay the audio by a specific amount, but the users must "guess" how much delay to dial in.
With HDMI, there is no guessing as devices will be able to report their video or audio delay effects, and the source or AV receiver will be able to compensate automatically with just the right amount.
With this architecture, the audio delay is set once and does not result in any glitches when channels are changed, etc.
--------------------------
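To make the mechanism described in the response above concrete, here is a rough Python sketch of how a receiver might interpret the latency bytes a sink reports in its EDID and compute the compensating audio delay. The (value - 1) × 2 ms encoding and the 0/255 sentinel values reflect my reading of the HDMI 1.3 EDID additions; the function names are purely illustrative.

```python
def decode_latency(raw: int):
    """Decode an HDMI 1.3 EDID latency byte.

    In the HDMI Vendor-Specific Data Block, 0 means "latency unknown",
    255 means "audio/video not supported", and any other value v
    encodes a latency of (v - 1) * 2 milliseconds.
    """
    if raw == 0 or raw == 255:
        return None            # unknown or unsupported
    return (raw - 1) * 2       # milliseconds


def audio_delay_ms(video_latency_raw: int, audio_latency_raw: int) -> int:
    """Extra delay an AV receiver would apply to its audio output so it
    stays in sync with video displayed on the downstream TV."""
    video = decode_latency(video_latency_raw)
    audio = decode_latency(audio_latency_raw)
    if video is None or audio is None:
        return 0               # cannot compensate without data
    return max(0, video - audio)


# Example: the TV reports 40 ms of video latency (raw byte 21) and
# 0 ms of audio latency (raw byte 1), so the receiver delays audio 40 ms.
print(audio_delay_ms(21, 1))   # 40
```

This is exactly the "set once, no guessing" behavior described: the numbers come from the sink's EDID rather than from the user dialing in an estimate.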
My comments:
I believe the use of pre/pros and A/V receivers to handle the switching of many input devices and output a single image to a TV is what most people do, and I do not believe that letting the TV output audio would be a good system architecture, starting with its typically limited number of digital inputs.
Your idea that HDMI sides with using A/V receivers rather than concentrating on a TV-centric solution seems itself sided with TVs. Besides, an HDMI source device (a DVD player, for example) adapts its audio output (multichannel or just L/R) by reading what the receiving device is capable of handling.
TVs are usually capable of only two channels, so the TV would never get the multichannel version; in that case the audio output of the TV could be no more than the same limited L/R version it receives, while a multichannel version could have been fed to an A/V receiver that HDMI detects could handle it.
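The capability negotiation described above can be sketched as well: a source reads the Short Audio Descriptors in the sink's EDID and downmixes when the sink only advertises two channels. The bit layout below follows my reading of the CEA-861 descriptor format (3 bytes each; the format code table here is deliberately partial), and the helper names are illustrative.

```python
# Partial CEA-861 audio format code table for illustration.
AUDIO_FORMATS = {1: "LPCM", 2: "AC-3", 7: "DTS"}


def parse_sad(b0: int, b1: int, b2: int) -> dict:
    """Parse one 3-byte Short Audio Descriptor from a sink's EDID.

    Byte 0: bits 6-3 = audio format code, bits 2-0 = max channels - 1.
    Byte 1: supported sample-rate bitmap (bit 2 = 48 kHz).
    Byte 2: format-specific (for LPCM, supported bit depths).
    """
    fmt = (b0 >> 3) & 0x0F
    return {
        "format": AUDIO_FORMATS.get(fmt, f"code {fmt}"),
        "max_channels": (b0 & 0x07) + 1,
        "supports_48khz": bool(b1 & 0x04),
    }


# A TV that only advertises 2-channel LPCM at 48 kHz:
tv = parse_sad(0x09, 0x04, 0x01)
print(tv["max_channels"])               # 2

# The source's decision: with fewer than 6 channels advertised,
# it sends an L/R downmix instead of the multichannel stream.
send_multichannel = tv["max_channels"] >= 6
print(send_multichannel)                # False
```

This is why, as argued above, routing audio through a typical two-channel TV forfeits the multichannel version that an A/V receiver would have negotiated for itself.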
Best Regards,
Rodolfo La Maestra