Ten bit encodes utilise a profile of the H.264 specification (Hi10P) which makes use of ten bits of information to represent colour. This allows for a reduction in colour banding and, thanks to increased compression efficiency, an overall reduction in file size.
Ten bit encodes are automatically detected by TehConnection's MediaInfo parser and have a red marker beside them:
The main drawback of this format is that not everything that can play Matroska files can play these... yet. An excellent playback tutorial, including a section on ten bit playback, is available (Windows only) here. This article will be kept as up to date as possible; the lists below detail which software/hardware can and cannot decode ten bit files. For more information on ten bit colour, read on.
Software able to handle Hi10P:
Software Unable to Handle Hi10P:
- DXVA
- CUDA
Tell Me More About Ten Bit Colour
First, it helps to understand what bit depth actually is. The Wikipedia article covers the topic in depth, but the key takeaway is that bit depth is the measure of accuracy available for storing and displaying colour.
At the consumer level, the majority of digital media (including Blu-ray) and hardware today, and for the foreseeable future, can display eight bits of colour information. This is because eight bits has been the de facto standard for a very long time. Eight bits of colour means that each of the three primary colours (red, green & blue) has two hundred and fifty six (256) different shades. Combined, they give twenty four (24) bits of colour depth, which is also the most common colour depth configuration of most operating systems.
The total possible number of colours able to be displayed with eight bit systems is 16,777,216 (256³). A ten bit colour depth allows for one thousand and twenty four (1024) shades for each primary colour and over a billion total possible colours; twelve bit has four thousand and ninety six (4096) shades for each primary colour and over sixty eight billion possible colours (1024³ and 4096³ respectively).
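The arithmetic above is easy to verify yourself; here is a standalone Python snippet (purely illustrative, not part of any encoder or player) that reproduces those figures:

```python
# Shades per primary channel and total displayable colours for
# common bit depths (matches the figures quoted above).
for bits in (8, 10, 12):
    shades = 2 ** bits          # shades per red/green/blue channel
    total = shades ** 3         # combined colours across all three channels
    print(f"{bits}-bit: {shades} shades per channel, {total:,} total colours")
# → 8-bit:  256 shades per channel, 16,777,216 total colours
# → 10-bit: 1024 shades per channel, 1,073,741,824 total colours
# → 12-bit: 4096 shades per channel, 68,719,476,736 total colours
```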
To give you some context, your average cinema will use (at least) a 2K but usually a 4K projector and show film with a twelve bit (ten bit for 3D) colour depth.
Why Do We Need 10 Bit Encodes?
You might ask: if all of our hardware and digital media stores and transmits colour at an eight bit depth, aren't we wasting our time encoding at ten bits? This is a good question, and one that currently attracts many incorrect or partially correct answers.
To answer this we need a little background. Most LCD panels (TN panels, to be precise) can only represent a colour depth of six bits (a mere sixty four shades for each primary colour), which under normal circumstances would look terrible to the regular user. To work around this, these panels use a trick named 'dithering' to simulate a colour depth of eight bits. In simplified terms, the panel quickly alternates between the nearest of the available sixty four (six bit) shades to simulate more colour depth than is available. When done correctly, a six bit panel creates the illusion that it is capable of a higher colour depth than it actually is.
This trick can also be used to display high bit depth encodes on hardware and software only capable of eight bit output.
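As a rough illustration of the idea (real panels use various proprietary dithering algorithms; the function below is a hypothetical simulation, not a real panel driver), temporal dithering can be sketched like this:

```python
import random

def dither_to_6bit(value_8bit, frames=1000):
    """Simulate temporal dithering: an 8-bit value (0-255) is shown on a
    6-bit panel (levels 0-63) by rapidly alternating between the two
    nearest 6-bit levels, so the *average* over time approximates it."""
    target = value_8bit / 4          # ideal (fractional) 6-bit level
    low = int(target)
    frac = target - low              # how often to show the upper level
    shown = [low + (1 if random.random() < frac else 0) for _ in range(frames)]
    return sum(shown) / frames       # perceived level, averaged over time

# An 8-bit value of 130 falls between 6-bit levels 32 and 33 (130/4 = 32.5);
# averaged over many frames, the panel appears to show roughly 32.5.
print(round(dither_to_6bit(130), 1))
```

The eye averages the rapid alternation, perceiving an intermediate shade that the panel cannot actually produce.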
This still does not fully answer the original question: surely we could simply encode in eight bit and hard code that dithering to simulate a ten bit colour depth?
This is actually already done, mainly to prevent colour banding. The huge drawback of this hard coding is that the bit rate required to keep the dithering intact is disproportionately high and can result in much larger files.
This is where the true advantage of ten bit encoding reveals itself: we are able to use fewer bits to represent the same image, even if the source is only eight bits. Ten bit encoding is a double win, because not only do we no longer need to hard code dithering, we also increase our error tolerance: a rounding error of one step in an eight bit colour space is four times larger than one step in a ten bit space. This means that the same quality can be perceived at a lower bit rate, allowing encoders to achieve transparency with smaller files.
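That precision argument can be seen with a small sketch (the `quantise` helper and the sample value are ours, chosen for illustration only): rounding a signal to the nearest representable level produces a noticeably smaller error at ten bits than at eight.

```python
def quantise(x, bits):
    """Round a 0.0-1.0 signal to the nearest level representable at `bits`."""
    levels = 2 ** bits - 1
    return round(x * levels) / levels

x = 0.5001  # an arbitrary intensity that falls between representable levels
for bits in (8, 10):
    err = abs(x - quantise(x, bits))
    print(f"{bits}-bit rounding error: {err:.6f}")
# The worst-case rounding error is 1/510 at eight bits versus 1/2046 at
# ten bits -- roughly four times smaller, matching the "one step in eight
# bit spans four steps in ten bit" intuition.
```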