Although today’s hardware supports not just 4K but also 8K content, it is only recently that creators have begun producing content to the 8K quality standard. To make the most of the new ultra-high-definition standard with content created before 8K existed, upscaling technology is the answer.
That’s right: you don’t actually need video shot at 7680 x 4320 resolution. Samsung’s 8K TVs use AI-based image enhancement to convert any video source (from SD to 4K and everything in between) to 8K resolution.
Of course, upscaling is nothing new. For years, 4K and even HD TVs have used various techniques to stretch low-resolution content to fill the high pixel density of modern screens. But because an 8K TV has as many pixels as four 4K TVs combined, traditional upscaling methods simply don’t cut it.
Why traditional upscaling makes images worse
Before 1998, broadcasters transmitted at 720 x 480 resolution, and movies shot at higher quality were compressed down to fit that format. The 345,600 pixels of that content occupy only a tiny fraction of a modern TV screen with a high pixels-per-inch (PPI) density. That SD content must be stretched to cover more than 2 million pixels when upscaled to HD, more than 8 million for 4K, or more than 33 million for 8K.
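The pixel arithmetic above is easy to check for yourself; this short Python sketch (resolution figures taken from the article) computes how far SD content has to stretch:

```python
# Pixel counts for each display standard mentioned above.
resolutions = {
    "SD": (720, 480),
    "HD": (1920, 1080),
    "4K": (3840, 2160),
    "8K": (7680, 4320),
}

pixels = {name: w * h for name, (w, h) in resolutions.items()}

for name in ("HD", "4K", "8K"):
    ratio = pixels[name] // pixels["SD"]
    print(f"SD ({pixels['SD']:,} px) -> {name}: {pixels[name]:,} px ({ratio}x)")
```

An 8K frame works out to exactly four times the pixels of a 4K frame, which is why methods that were merely adequate for 4K fall apart at 8K.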
The basic principle of upscaling is to preserve proportions by multiplying pixels. To convert HD to 4K, the TV’s processor has to “inflate” each HD pixel so that it occupies the space of four pixels on the higher-resolution screen, or 16 pixels when going from HD to 8K.
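That “inflation” step, with no processing at all, can be sketched in a few lines of Python. The `replicate` helper here is purely illustrative, not anyone’s actual TV firmware:

```python
# A minimal sketch of "inflating" pixels with no interpolation:
# each source pixel is copied into an s x s block of the larger frame.
def replicate(frame, s):
    """Upscale a 2-D grid of pixel values by an integer factor s."""
    out = []
    for row in frame:
        stretched = [p for p in row for _ in range(s)]  # widen each row s times
        out.extend([stretched] * s)                     # then repeat the row s times
    return out

hd_patch = [[10, 20],
            [30, 40]]
# HD -> 4K: every source pixel now fills a 2 x 2 block.
print(replicate(hd_patch, 2))
```

Every value simply appears four times, which is exactly the “butter spread too thin” effect described next.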
Without any image processing, the end result would be “like a small pat of butter spread over a large slice of bread”. Each block of data turns into a patch of unnatural squares, with no smooth gradients between details and colors. The resulting image is full of blockiness, or noise, around on-screen objects.
You will also likely notice something called “mosquito noise”. To compress video down to your limited internet bandwidth, broadcasters and websites fill the data stream with rough patches of color, or “artifacts”. These deliberately introduced rough pixels appear around high-contrast parts of the screen, like the brown bridge against a blue sky in the image above.
To solve these problems, TV engineers taught their sets to analyze and process images in real time, filling in or correcting missing or damaged pixels. They achieved this by manipulating mathematical functions (who said watching too much TV rots your brain?).
Specifically, the engineers taught the TV’s processor to infer the color value of each missing pixel from the pixels around it. To do that, it applies a kernel function: a function that weights the colors of the pixels adjacent to a missing pixel according to their distance from it.
The most basic kernel used in TVs is the nearest-neighbor kernel, which finds the pixel closest to a missing pixel and copies its color data into the empty spot. This method leaves the image jagged, with the edges of objects looking very rough. Imagine a black “A” on a white screen: a missing pixel just outside the character may be filled in black, while a pixel at the character’s edge may come out white. The result is either a gray fringe around the character or a jagged black-and-white staircase.
Bilinear interpolation requires more processing power but works better. In this method, each empty pixel is compared with its closest neighbors in each direction to create a linear gradient between them, sharpening the image. The result is smoother, but can be inconsistent. Other TVs therefore use bicubic interpolation, which takes the color values of the 16 nearest pixels in all directions. While this method gets the colors as close as possible, it produces a blurrier image, with object edges marred by halo effects.
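For intuition, here is a minimal sketch of bilinear interpolation for one blank pixel, assuming we know the four surrounding pixel values and the blank pixel’s fractional position (fx, fy) between them:

```python
# Bilinear interpolation of one missing pixel from its four neighbours:
# interpolate along x on the top and bottom pairs, then blend along y.
def bilinear(p00, p10, p01, p11, fx, fy):
    """Return the blended value at fractional position (fx, fy) in [0, 1]."""
    top = p00 * (1 - fx) + p10 * fx      # gradient across the top pair
    bottom = p01 * (1 - fx) + p11 * fx   # gradient across the bottom pair
    return top * (1 - fy) + bottom * fy  # blend the two gradients

# A blank pixel exactly between a dark (0) and a bright (100) region
# comes out as a smooth mid-tone instead of a hard step.
print(bilinear(0, 100, 0, 100, 0.5, 0.5))
```

This is why bilinear output looks smoother than nearest-neighbor: edges become gradients instead of staircases, which is also exactly how it softens detail.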
The diagram shows how a blank pixel (P) is calculated using bilinear interpolation
By now you can probably guess the problem: earlier TVs filled in pixels using mathematical formulas that were statistically likely to produce accurate results, but they had no way of knowing what the missing pixels should look like based on what is actually on screen.
At Samsung, engineers have come up with a solution to all of these problems: using artificial intelligence (AI), specifically machine learning and deep learning, to upscale images to 8K.
Samsung’s secret: machine learning, object recognition, and filters
Samsung’s secret weapon is a technique called machine learning super resolution (MLSR). This AI system takes a low-resolution video stream and upscales it to fit the resolution of a larger screen with a higher PPI. It’s like the “zoom and enhance” trick you often see in movies, where scientists sharpen a blurry photo at the touch of a button, except that Samsung’s version happens automatically and almost instantly.
Samsung representatives explained how they analyzed a huge amount of video content from a variety of sources (low- and high-quality YouTube videos, DVDs and Blu-rays, movies and sports broadcasts) and built two image databases, one of low-quality images and one of high-quality images.
Then the AI is trained through a process called “reverse regression”. First, you take high-resolution images and degrade them to lower resolutions, tracking the image data that is lost. You then reverse the process, training the AI to fill in the lost data in the low-resolution images so that they match the original high-resolution versions. This type of machine learning is known as “self-supervised learning”.
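The degrade-then-recover loop described above can be caricatured in plain Python. This toy stands in for MLSR’s neural networks with a simple lookup table, and all the function names (`degrade`, `train`, `upscale`) are invented for illustration:

```python
# A toy sketch of "reverse regression": degrade known high-resolution
# data, record what was lost, and build a "recipe" mapping each
# low-res pattern back to the high-res detail it came from.
def degrade(hi):
    """Halve the resolution of a 1-D signal by averaging pixel pairs."""
    return [(hi[i] + hi[i + 1]) // 2 for i in range(0, len(hi), 2)]

def train(hi_samples):
    """Learn recipes: low-res value -> the hi-res pair it replaced."""
    recipes = {}
    for hi in hi_samples:
        lo = degrade(hi)
        for i, v in enumerate(lo):
            recipes[v] = (hi[2 * i], hi[2 * i + 1])
    return recipes

def upscale(lo, recipes):
    """Fill in missing pixels from learned recipes; fall back to replication."""
    out = []
    for v in lo:
        out.extend(recipes.get(v, (v, v)))
    return out

recipes = train([[10, 20, 80, 90]])   # "training data": a sharp edge
print(upscale([15, 85], recipes))     # the edge is restored, not smeared
```

Where no recipe matches, the toy falls back to dumb replication, mirroring the article’s later point that the real processor avoids big changes when it cannot identify an object.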
Samsung’s team calls the output of this process a “recipe”. The company’s 8K processors contain a formula bank: a database of recipes for different objects, such as a piece of fruit or the letter “A”. When the processor identifies an apple in an actor’s hand, it restores the apple’s edges, corrects any compression artifacts, and makes sure the empty pixels get a red tint matched to the actual apple’s color, not to a vague statistical guess. And beyond restoring specific objects, the AI adjusts your content based on whatever you are watching.
According to Samsung, it has dozens of different “filters” that adjust detail creation, noise reduction, and edge restoration to suit the content, based on the genre you are watching, such as a particular sport, a TV series, or a type of movie.
Edge restoration is not the hardest task for the AI. Recreating an object’s texture in real time is the real challenge. Samsung’s engineers must ensure the processor enhances the appearance of objects without making them look artificial.
One thing the processor must not do is misclassify an object. “It will not turn an apple into a tomato”, an engineer said. Most likely, the processor is trained to avoid making any major changes when it cannot identify what an object is.
The AI also will not override the “director’s intent” in a movie. If the director uses a bokeh effect, the background stays blurred while the sharpness of the foreground is pushed up to 8K.
Samsung also says it does not specifically analyze highly popular content to build its object catalog, aiming instead for overall quality and diversity. So Samsung probably does not have a “dragon” or “wolf” recipe for your “Game of Thrones” episodes.
Samsung’s new 8K (and 4K) TVs come preloaded with the latest recipe bank, and over time new recipe data is added via firmware updates that you approve for installation. Samsung says it will keep analyzing new content to expand the library, but it does so on Samsung’s servers and does not analyze data from a user’s TV.
Wondering how many recipes Samsung has accumulated from all this analysis? An engineer cited an impressive figure, explaining that the processor typically recognizes a large number of objects on screen. But users don’t really need to care about those numbers; what matters is how effective MLSR is in practice.
Raising the game with deep learning
Not resting on its laurels, Samsung continues to develop deep learning algorithms that let the screen always display optimal video quality without human intervention. Deep learning is a deeper form of machine learning that lets the AI extract important, specialized information from the vast amounts of data it is given and make complex judgments through multi-stage processing; naturally, it requires more powerful hardware.
“Deep learning allows for more accurate and efficient image-quality enhancements than previously achievable”, Samsung’s engineers say. They have built a new AI upscaling technology that combines machine learning with deep learning: the AI Quantum Processor 8K. “Machine learning technology already delivered sharper picture quality, but now our technology can reproduce more sophisticated textures. Images with complex textures, like mountains or meadows, can now be upscaled to 8K much more naturally.”
Although deep learning has enormous potential, there are still barriers to overcome. Samsung iterated countless times to perfect the technology before it was ready for market. “It is very difficult for us to follow and understand the algorithms developed by the processor’s artificial neural network. The relatively high power consumption of the hardware chip underlying the neural network is also a problem to be solved.”
Samsung is ahead of the competition in 8K upscaling
Samsung is not the only TV manufacturer using AI-based image restoration technology in its TVs.
Sony’s 4K TVs are likewise equipped with processors carrying a dual database of tens of thousands of reference images, capable of “dynamically improving pixels in real time”.
At CES 2019, LG announced the a9 Gen 2 TV chip, whose image processing and machine learning improve noise reduction and boost brightness, in part by analyzing sources and content types and then fine-tuning its algorithms accordingly.
However, the AI factor aside, those processors still rely only on machine learning to enhance image quality, while Samsung’s latest QLED 8K TVs have gone a step further, incorporating deep learning to deliver ultra-realistic video regardless of the source’s original quality and resolution.
Source: Genk