Both Technicolor and THX have demonstrated that platforms carrying their 4K certification can produce very-high-quality up-converted 4K images.
In my last post, I reported on a conference call with members of THX’s San Francisco-based certification team, which described the philosophy behind the THX program and some of the practical aspects of its implementation. This time, I will report on a call with Kirk Barker, Senior VP of Development and Strategy in Technicolor’s Technology Licensing Division. The trans-Atlantic call took place July 19. I was in Connecticut and Barker was at Technicolor’s HQ in Issy-les-Moulineaux, just across the Périphérique from Paris’s 15th Arrondissement.
Barker, who is in charge of the certification program development team, started off by reminding me that Technicolor has a very long history in color and the movies. (The first Technicolor movie, shot in a clever but problematic two-color process, was The Gulf Between, 1917.) The company has a very large research team, Barker said, with 300 researchers, more than 100 of whom hold Ph.D.s.
The folks at Technicolor observed the proliferation of 2K-to-4K implementations, and saw that the quality varied widely. The viewer’s experience can be ruined by a poor implementation, and Technicolor wanted to ensure that 2K up-converted material would deliver an experience to viewers that is very close to native 4K. Barker and his team began to develop algorithms and a certification program that would realize this very ambitious goal.
The team’s ambition was to do far more than simply follow Hippocrates’ mandate to “do no harm” to the original 2K image. In making a 4K image from an FHD image, it is necessary to synthesize three quarters of the pixels in the up-converted image. That’s a lot of synthesized pixels — 6 million in each frame — and it’s important that this startling act of creationism doesn’t introduce artifacts. The Technicolor process first removes noise, because noise in the original image would be magnified by the up-conversion. Then, the additional pixels are synthesized and inserted. Then, the algorithms restore texture, including film grain, to the image. Finally, they sharpen lines and remove jaggies.
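The stage order described above (denoise, synthesize pixels, restore detail, sharpen) can be sketched in a few lines. To be clear, this is not Technicolor’s proprietary algorithm; it is a toy illustration of the pipeline structure, using a box blur, nearest-neighbor upscaling, and an unsharp mask as stand-ins, on a tiny grayscale “frame.”

```python
# Toy sketch of a staged up-conversion pipeline: denoise -> synthesize
# pixels -> sharpen. The operators here are deliberately trivial stand-ins,
# NOT Technicolor's algorithms. A frame is a list of rows of floats.

def denoise(frame):
    """3x3 box blur as a stand-in for a real noise reducer."""
    h, w = len(frame), len(frame[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [frame[cy][cx]
                    for cy in range(max(0, y - 1), min(h, y + 2))
                    for cx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

def upscale_2x(frame):
    """Nearest-neighbor 2x upscale: 3 of every 4 output pixels are synthesized."""
    return [[frame[y // 2][x // 2] for x in range(2 * len(frame[0]))]
            for y in range(2 * len(frame))]

def sharpen(frame, amount=0.5):
    """Unsharp mask: add back part of the difference from a blurred copy."""
    blurred = denoise(frame)
    return [[frame[y][x] + amount * (frame[y][x] - blurred[y][x])
             for x in range(len(frame[0]))] for y in range(len(frame))]

def upconvert(frame):
    return sharpen(upscale_2x(denoise(frame)))

fhd_like = [[10.0, 20.0], [30.0, 40.0]]  # stand-in for a 1920x1080 frame
uhd_like = upconvert(fhd_like)           # 4x4: three quarters synthesized
print(len(uhd_like), len(uhd_like[0]))   # 4 4
```

The arithmetic in the paragraph checks out the same way: UHD (3840×2160) has 8,294,400 pixels, FHD (1920×1080) has 2,073,600, so 6,220,800 pixels per frame, roughly 6 million, must be synthesized.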
In practice, the process is even more complicated than the preceding paragraph makes it sound. The algorithms can be made more or less aggressive, and what helps in some images can hurt in others. An algorithm that works well on a white background can create objectionable artifacts on a gray background.
Technicolor’s approach is to establish the original 4K experience as the baseline. To do this, they look at a wide range of natural and CGI 4K content. They down-convert the 4K content to 2K, then reconvert it back to 4K, with the aim of having the reconverted content be as close as possible to the original native 4K. Barker noted that you can’t be perfect when you go from FHD to 4K, but, as far as the viewer’s perception is concerned, you can get very close. That was confirmed by our observations of the Technicolor/Marseille demonstration at the PEPCOM Digital Experience event held late last June in New York.
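The round-trip evaluation Barker describes can be expressed as a simple harness: down-convert a native 4K frame, re-convert it, and measure the distance from the original. The trivial averaging down-converter, nearest-neighbor up-converter, and mean-absolute-error metric below are my own illustrative assumptions, not Technicolor’s tools.

```python
# Minimal round-trip evaluation sketch: native 4K -> down-convert to 2K ->
# up-convert back -> compare against the original. The resamplers and the
# error metric are illustrative stand-ins only.

def downscale_2x(frame):
    """Average each 2x2 block into one pixel (a simple down-converter)."""
    h, w = len(frame) // 2, len(frame[0]) // 2
    return [[(frame[2*y][2*x] + frame[2*y][2*x+1] +
              frame[2*y+1][2*x] + frame[2*y+1][2*x+1]) / 4.0
             for x in range(w)] for y in range(h)]

def upscale_2x(frame):
    """Nearest-neighbor 2x upscale (the algorithm under test plugs in here)."""
    return [[frame[y // 2][x // 2] for x in range(2 * len(frame[0]))]
            for y in range(2 * len(frame))]

def mean_abs_error(a, b):
    diffs = [abs(pa - pb) for ra, rb in zip(a, b) for pa, pb in zip(ra, rb)]
    return sum(diffs) / len(diffs)

native_4k = [[0.0, 0.0, 100.0, 100.0],
             [0.0, 0.0, 100.0, 100.0],
             [100.0, 100.0, 0.0, 0.0],
             [100.0, 100.0, 0.0, 0.0]]
round_trip = upscale_2x(downscale_2x(native_4k))
print(mean_abs_error(native_4k, round_trip))  # 0.0 for this blocky toy frame
```

On real imagery the round trip is never lossless, which is exactly Barker’s point: the goal is to make the residual error perceptually insignificant, not zero.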
Barker said his team’s orientation at all times is to be faithful to what the studios, Technicolor’s biggest customers, produce. If a director wants a scene to be hazy and foggy, you don’t want your algorithms to make the scene sharp.
Technicolor has developed a standard test suite, but they are not publishing the details of the suite at this time. Many of the determinations about how the algorithms perform on the suite are based on the perceptions of video experts, whom Technicolor calls “golden eyes.” Barker: “No piece of equipment is better than the human eye for detecting small differences.”
Barker did reveal the following details about the testing protocol:
1. The test suite is drawn in part from the vast existing library of 4K content. Some content they shoot themselves, and some comes from the EBU test suite.
2. Most of the tests compare these original images with the upscaled versions. Pixel accuracy must be very precise — to a single pixel.
3. To deal with rounding errors in upscaling, color accuracy must be monitored precisely.
4. In addition to video content, their experts look at a library of still images to see how static patterns interact with the algorithms. This is necessary because motion conceals artifacts that can be seen in still images. For instance, you can see flicker in areas of some still images if the algorithms are not tuned correctly.
5. They also use conceptual tools, some of which were originally designed to evaluate signal compression schemes. Among these are Peak Signal-to-Noise Ratio or PSNR (see, for example, http://www.ni.com/white-paper/13306/en/); the Structural Similarity Index Method or SSIM (see, for example, http://www.ni.com/white-paper/12956/en#toc6, Section 7); and Technicolor’s proprietary WQA, which is similar to SSIM.
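Of the objective measures in the list above, PSNR is the simplest to show concretely: it is 10·log10(MAX²/MSE), where MAX is the peak pixel value (255 for 8-bit video) and MSE is the mean squared error between the reference and processed images. The frames below are invented for illustration.

```python
# PSNR, one of the objective metrics mentioned in the testing protocol.
# PSNR = 10 * log10(MAX^2 / MSE); higher is better, and identical images
# give an infinite PSNR.
import math

def psnr(reference, processed, peak=255.0):
    pairs = [(r, p) for row_r, row_p in zip(reference, processed)
             for r, p in zip(row_r, row_p)]
    mse = sum((r - p) ** 2 for r, p in pairs) / len(pairs)
    if mse == 0:
        return float("inf")  # no difference at all
    return 10.0 * math.log10(peak ** 2 / mse)

ref  = [[50.0, 60.0], [70.0, 80.0]]   # reference (e.g., native 4K) pixels
proc = [[52.0, 59.0], [70.0, 81.0]]   # small errors from an up-converter
print(round(psnr(ref, proc), 2))      # ~46.37 dB
```

SSIM and Technicolor’s WQA go further than PSNR by modeling perceived structural similarity rather than raw pixel error, which is why a plain MSE-based number alone is not enough to rank up-converters.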
But, says Barker, the human eye is the most important measure, even more than mathematical methods.
Although the goal is to minimize all artifacts to the point of insignificance, different artifacts are not all equally important. So each test has a minimum grade, which can differ depending on the importance of the artifact.
Finally, the results of the algorithms are tested among groups of subjects using the methodology of ITU-R Recommendation BT.500-11 for the subjective evaluation of video images. BT.500-11 requires that each test use more than 15 people; Technicolor uses 20. In their testing, the “high anchor” is the original 4K imagery. The “low anchor” is a simple bi-linear transform. Other algorithms are interspersed with the algorithm under test to avoid bias. The subjects are asked, “Is the original image better or worse than the second image?” A great deal of testing is done with different content types.
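The anchoring scheme above can be illustrated with a toy score aggregation: each of the 20 viewers rates each stimulus, and the mean opinion score places the algorithm under test between the low and high anchors. The ratings below are invented, and a real BT.500 analysis also includes outlier screening and confidence intervals, which are omitted here.

```python
# Toy aggregation of panel scores from a BT.500-style subjective test.
# Ratings are on a 1-5 scale; each list holds 20 invented viewer scores.

def mean_opinion_score(ratings):
    return sum(ratings) / len(ratings)

panel = {
    "low anchor (bi-linear)":  [2, 3, 2, 2, 3] * 4,
    "algorithm under test":    [4, 5, 4, 4, 5] * 4,
    "high anchor (native 4K)": [5, 5, 4, 5, 5] * 4,
}
for name, ratings in panel.items():
    print(name, mean_opinion_score(ratings))
```

The certification claim in the next paragraph — that Technicolor’s algorithm “comes in very close to the high anchor” — is a statement about exactly this kind of gap between the test score and the anchors.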
“Overall,” says Barker, “these tests are surprisingly reliable in sorting out the algorithms. Our algorithm comes in very close to the high anchor.” Since none of the images can fall below a pre-determined level, a single configuration must work for all images if the algorithm, or a device that embodies the algorithm, is to pass certification.
Technicolor’s algorithms and certification focus on the image itself, not the display device. “We look at the HDMI output from the Blu-ray Disc player,” says Barker. “Hook up the TV set and you get into set issues. We don’t get into that layer. We are focusing more deeply on a narrower space. And with preserving the artistic intent.”
If you have read both this article on Technicolor and the previous one concerning THX’s 4K certification, you are in a good position to see the differences between the two approaches. But if you’re interested in what I have to say about that, all you have to do is read my next post.
Ken Werner is Principal of Nutmeg Consultants, specializing in the display industry, display manufacturing, display technology, and display applications. You can reach him at firstname.lastname@example.org.
Posted by Ken Werner, August 3, 2013 1:42 PM
About Ken Werner
Kenneth I. Werner is the founder and Principal of Nutmeg Consultants, which specializes in the display industry, display technology, display manufacturing, and display applications. He serves as Marketing Consultant for Tannas Electronic Displays (Orange, California) and Senior Analyst for Insight Media. He is a founding co-editor of and regular contributor to Display Daily, and is a regular contributor to HDTVexpert.com and HDTV Magazine. He was the Editor of Information Display Magazine from 1987 to 2005.