Breaking into the HDR Game: My First Time Color Grading in Stunning High Dynamic Range!


You’re going to see linked videos that are not viewable in HDR unless you’re reading this article on a device that supports HDR (if you’re on mobile, that means a phone that supports HDR and lets you tweak the HDR settings from the YouTube app). I’m also going to attach images in both SDR and HDR versions in order to give you an understanding of HDR even if you’re viewing them in SDR. Have a great read, and may Leonardo da Vinci be with you.

Now… let’s imagine this opening as if it were made by this guy:

Chapter one: "The Colorful World of HDR"

We see a close-up of a hand turning a dial on a color grading monitor, adjusting the brightness and contrast levels of an HDR image. The screen flickers as the image changes, revealing a stunning landscape shot with vibrant colors and sharp details.

The camera zooms out to reveal a figure in a dark room, hunched over the monitor with intense focus. It’s none other than our protagonist, a seasoned colorist with a passion for HDR. He’s been working on this project for weeks, painstakingly fine-tuning every shot to perfection.

Cut to a montage of… dude, seasoned? Me? No no no… let’s cut to the meat.

Let me explain to you all why…

I fell in love with HDR, and here's why.

Hi there friends and followers, here’s Filippo Cinotti back at you with a brand new article.

This time I want to talk about something that has been around for, we might say, a good 9 years now (counting from the release of the first consumer HDR TVs) but is still not a daily, requirement-specific output for some professionals like me (and let’s not limit this to colorists).

You might get it from the headline. I’m talking about HDR (High Dynamic Range).

But what does HDR stand for?

HDR, which stands for High Dynamic Range, is a technique used in photography and videography to capture and display a wider range of brightness and color than traditional methods. The main difference between an HDR image and an SDR (Standard Dynamic Range) image is the range of details, colors, and brightness that can be displayed in both the brightest and darkest areas of the image.

An HDR image can capture a greater range of brightness levels than an SDR image, which means that it can display more details in the shadows and highlights of an image. This results in a more realistic and vibrant image, with more contrast and depth.

It’s good to think back to the early days of HDR photography, when people used to unrealistically push the data of their files just… ’cause they could.

What makes the real differences between HDR and SDR images?

Visually (and technically), there are several key differences between HDR and SDR images. Here are some of the most important ones:

1: Brightness

HDR images can display much brighter highlights and darker shadows than SDR images. This means that HDR images have more detail in both the brightest and darkest areas of the image.

The number of nits and values required for base HDR deliveries can vary depending on the specific HDR standard being used.

For HDR10, which is the most common HDR standard, the minimum peak brightness required is 1,000 nits.

This means that the display must be capable of reaching a brightness level of at least 1,000 nits in order to fully display the HDR content.
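To make those nit values concrete, here's a minimal sketch of the SMPTE ST 2084 (PQ) transfer function that HDR10 uses, mapping absolute luminance (0–10,000 nits) to a normalized signal value. The constants come from the ST 2084 definition; the function names are my own.

```python
# SMPTE ST 2084 (PQ) constants, as defined in the standard
M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def nits_to_pq(nits):
    """Inverse EOTF: absolute luminance in nits -> normalized PQ signal [0, 1]."""
    y = min(max(nits / 10000.0, 0.0), 1.0)
    t = y ** M1
    return ((C1 + C2 * t) / (1 + C3 * t)) ** M2

def pq_to_nits(signal):
    """EOTF: normalized PQ signal [0, 1] -> absolute luminance in nits."""
    p = signal ** (1 / M2)
    y = (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)
    return 10000.0 * y

# A 1,000-nit highlight sits around 75% of the PQ signal range, and
# 100-nit SDR reference white around 51%: PQ keeps huge headroom above SDR.
print(round(nits_to_pq(1000), 3))   # ~0.75
print(round(nits_to_pq(100), 3))    # ~0.51
```

Notice how perceptually efficient the curve is: the whole 100–10,000 nit highlight range lives in the top half of the signal, which is exactly why 10-bit encoding becomes important.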

This thing evolves even more if we talk about specific standards.

For Dolby Vision, which is a more advanced HDR standard, the peak brightness level required can range from 1,000 nits up to 10,000 nits, depending on the content. In addition, Dolby Vision also supports a wider color gamut and higher bit depth than HDR10, allowing for more accurate and vibrant colors and finer gradations of brightness.

In terms of HDR standards for streaming services, both Netflix and Prime Video have their own requirements for HDR content. For Netflix, the minimum peak brightness required for HDR content is 1,000 nits, and the color space must be at least Rec. 709 or DCI-P3. In addition, Netflix also recommends using a bit depth of at least 10 bits (with camera standards that are constantly being updated).

For Prime Video, the requirements for HDR content are similar, with a minimum peak brightness of 1,000 nits and a minimum color space of Rec. 709 or DCI-P3. However, Prime Video also supports HDR10+ in addition to HDR10 and Dolby Vision, which can provide even more dynamic range and color depth.

2: Contrast

HDR images have a much higher contrast than SDR images, with more distinct differences between the brightest and darkest areas of the image. This makes the image look more vibrant and realistic. Also, contrast is a crucial aspect of HDR imaging, as it enables the display of finer details and subtler gradations of brightness. The contrast value specifications in HDR can vary depending on the specific delivery standard being used.

For HDR10, which is the base HDR standard used by most streaming services and Blu-ray discs, the minimum contrast ratio required is 10,000:1. This means that the brightest part of the image must be at least 10,000 times brighter than the darkest part of the image.
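That 10,000:1 figure is just simple division: peak luminance over black level. A trivial sketch (the panel numbers here are illustrative, not from any spec):

```python
def contrast_ratio(peak_nits, black_nits):
    """Contrast ratio = brightest displayable level / darkest displayable level."""
    return peak_nits / black_nits

# A 1,000-nit panel with a 0.1-nit black level just meets the 10,000:1 figure;
# a much deeper 0.005-nit black level yields roughly 200,000:1.
print(contrast_ratio(1000, 0.1))
print(contrast_ratio(1000, 0.005))
```

This is also why black level matters as much as peak brightness: halving the black level doubles the ratio just as effectively as doubling the peak.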

For Dolby Vision, the contrast specifications are more complex and can vary depending on the specific content being delivered. Dolby Vision supports dynamic metadata, which allows the content to be optimized for the specific capabilities of the display device. The dynamic metadata can adjust the brightness and contrast values of the content on a scene-by-scene basis, allowing for more precise control over the image quality. In general, Dolby Vision requires a higher contrast ratio than HDR10, with a target contrast ratio of up to 1,000,000:1.

For Netflix and Prime Video, the contrast ratio specifications are similar to those of HDR10, with a minimum contrast ratio of 10,000:1. However, both streaming services also support HDR10+ and Dolby Vision, which can provide more dynamic metadata and higher contrast ratios than HDR10 alone.

It’s worth noting that contrast is just one aspect of HDR imaging, and other factors such as color space, nits level, and bit depth can also play a significant role in the overall quality of the HDR image. Additionally, while these contrast and other specifications provide a useful framework for creating and delivering HDR content, the actual image quality will depend on a variety of factors such as the quality of the source material, the capabilities of the display device, and the viewer’s individual perception.

3: Color (and color spaces)

HDR images can display a wider range of colors than SDR images, resulting in more accurate and vivid colors. Color spaces are another important aspect of HDR imaging, as they define the range of colors that can be displayed in the image. The base color space required to deliver HDR is generally wider than that of standard dynamic range (SDR) content, in order to support the expanded color range that HDR provides.

The most commonly used color space for HDR is the Rec. 2020, which has a wider range of colors than the Rec. 709 used for SDR content. Rec. 2020 covers more of the visible spectrum than Rec. 709 and supports a wider range of color luminance and saturation.

However, not all displays are capable of fully reproducing the Rec. 2020 color space, and some HDR content may be delivered in other color spaces such as DCI-P3, which is commonly used in cinema.
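To see what "wider" means in practice, here's a sketch that converts a linear Rec. 2020 RGB triple into Rec. 709 using the commonly published 3×3 conversion matrix (both D65) and flags colors that can't be represented in Rec. 709. Treat this as an illustration, not a production color pipeline:

```python
# Commonly published BT.2020 -> BT.709 conversion matrix (linear light, D65)
BT2020_TO_BT709 = [
    [ 1.6605, -0.5876, -0.0728],
    [-0.1246,  1.1329, -0.0083],
    [-0.0182, -0.1006,  1.1187],
]

def rec2020_to_rec709(rgb):
    """Matrix-multiply a linear Rec. 2020 RGB triple into linear Rec. 709."""
    return tuple(sum(row[i] * rgb[i] for i in range(3)) for row in BT2020_TO_BT709)

def out_of_709_gamut(rgb2020, tol=1e-3):
    """True if a Rec. 2020 color has no legal (0..1) representation in Rec. 709."""
    return any(c < -tol or c > 1 + tol for c in rec2020_to_rec709(rgb2020))

# Pure Rec. 2020 red lands at roughly (1.66, -0.12, -0.02) in Rec. 709:
# far outside the gamut. Neutral white is shared by both.
print(out_of_709_gamut((1.0, 0.0, 0.0)))  # True
print(out_of_709_gamut((1.0, 1.0, 1.0)))  # False
```

Those negative values are exactly the saturated colors that get clipped or remapped when wide-gamut content is shown on a Rec. 709 display.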

For Netflix, the minimum color space required for HDR content is Rec. 709 or DCI-P3, with a recommended color space of Rec. 2020. Netflix also supports the BT.2100 color space, which is a more recent standard that extends the range of colors that can be displayed.

For Prime Video, the minimum color space required for HDR content is Rec. 709 or DCI-P3, with a recommended color space of Rec. 2020 (the same standard is also referred to as BT.2020), which extends the range of colors that can be displayed.

For Dolby Vision, the color space requirements can vary depending on the specific content being delivered. Dolby Vision supports a wider range of colors than most other HDR standards, including P3-D65 and Rec. 2020 (BT.2020).

The specific color space used for a particular piece of content will depend on a variety of factors such as the production process, the display capabilities of the target devices, and the intended artistic vision.

4: Details

One of the key benefits of HDR is the ability to display more detail in both the brightest and darkest parts of the image. This is achieved through the use of a wider dynamic range, which allows for finer gradations of brightness and contrast.

As mentioned, in SDR content the maximum brightness level is often limited to around 100 nits, while in HDR content the maximum brightness level can be much higher, often reaching 1,000 nits or more. This higher brightness level allows for a more natural and immersive viewing experience, with greater depth and detail in highlights and shadows.

In addition to increased brightness, HDR content also typically uses a higher bit depth than SDR content. While SDR content is often encoded with 8 bits per color channel, HDR content is typically encoded with 10 bits or more per color channel, allowing for a greater range of possible colors and shades.
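That bit-depth difference is easy to quantify: every extra bit doubles the number of code values per channel, which is what smooths gradients and reduces banding. A quick back-of-the-envelope:

```python
def code_values(bits):
    """Number of distinct levels per color channel at a given bit depth."""
    return 2 ** bits

for bits in (8, 10, 12):
    per_channel = code_values(bits)
    total = per_channel ** 3  # three channels: R, G, B
    print(f"{bits}-bit: {per_channel} levels/channel, {total:,} possible colors")
# 8-bit gives 256 levels/channel (~16.8M colors);
# 10-bit gives 1024 levels/channel (~1.07B colors).
```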

The increased brightness and bit depth of HDR content can also have an impact on the display technology used to view the content. That said, one of the challenges of HDR content delivery is ensuring that the content is optimized for a wide range of display devices with varying capabilities. This is where dynamic metadata comes in, allowing the content to be optimized on a scene-by-scene basis for the specific capabilities of the display device. This can help ensure that the content is displayed as intended, with the maximum detail and nuance available on each individual display.

The increased detail and nuance available in HDR content can provide a more immersive and engaging viewing experience. However, to fully take advantage of the expanded dynamic range and color space of HDR, displays need to be capable of reproducing a wide range of brightness and color values, and the content needs to be optimized for a wide range of display devices with varying capabilities.

Because HDR images can capture more detail in the brightest and darkest areas of the image, they can display finer details that may be lost in an SDR image.

And to wrap it all up...

…we now know that to achieve HDR, images need to be captured with a camera that has a higher dynamic range than a traditional camera (I’d guess any camera will be HDR-capable in a matter of years, but in any case, no less than 10-bit), using techniques such as bracketing and merging multiple exposures. In addition, the display technology used to view HDR images must also be capable of displaying a wider range of brightness and color. This is achieved through the use of higher brightness levels, wider color gamuts, and more precise color representation.

Color spaces, IRE levels, and nits are all important factors in HDR imaging. Color spaces refer to the range of colors that can be displayed, and HDR typically uses wider color gamuts such as Rec. 2020, which can display a much larger range of colors than SDR color spaces like Rec. 709.

IRE levels refer to the signal levels used in the video, and HDR typically uses a higher IRE range than SDR, which allows for more detail in the brightest parts of the image.

Finally, nits refer to the brightness level of a display, and HDR displays can typically reach much higher nits levels than SDR displays, allowing for much brighter highlights in the image.

In summary, HDR images offer a more realistic and vivid viewing experience compared to SDR images, with greater brightness, contrast, color, and detail. Achieving HDR requires specialized equipment and display technology, including cameras with a high dynamic range, wide color gamut displays, and high nits levels, among other factors.

But now, as you were guessing, comes my personal experience with HDR.

Chapter Two: A High Dynamic Range YouTube case history: The Iron Sea

Let me tell you this: like most of you, I’m not working full-time on HDR projects.

My clients’ needs so far are not exclusively on the HDR side, so if the project is not a feature or a documentary, I might say that most of the time the HDR pass is not asked for.

That’s why with my minimum HDR expertise, I literally LOVE it when I have the possibility to grade a piece in HDR.

Why? Here are my 3 top reasons in a really practical way:

  • You can literally see what your image can give you back from tiny details in dark corners to fully talking highlights
  • The luminosity response literally lets you feel the scene (glasses on for high-key shots)
  • The possibilities you have in the post are literally extended (and with that, the possibility to make a big mess)

And talking about this specific scenario, I’m going to speak about grading an HDR piece for YouTube: the least practical and least fun thing that might happen to you when it comes to HDR deliveries. (You’ll know why at the end of the article.)

Let me start by telling you that I’ve been honored enough to make an HDR pass of the short film living inside the famous Media Division episode: ‘The impossible Lens’ (available down here)

It was a blast for me testing and really taking the best out of the camera and this impossible lens, especially because an HDR pass was literally needed in order to really see all the details in the pool shots and during the opening shots.

By the way, for the ones asking, here’s the HDR video:

And about the process itself, it was something far away from the classic SDR grade.

Chapter Three: The process around an HDR Color Grading and delivery that DOESN'T WORK for YouTube

If you’ve known me for a while, you might know that I’m a DaVinci Resolve colorist (I don’t normally use Baselight or any other software), and so for this production too, DaVinci was my starting point.

So what I did as my first action, was to open my timeline, watch the edit and take a look at Nikolas’s tweaks inside Resolve.

We all know that he’s not a colorist, so I found exactly what I was expecting to find: a grade based on a custom structure on every single cut he had… I needed to get rid of that and restart from scratch.

So I did what I suggest anyone do in this situation: I duplicated the timeline and set my whole workflow by grouping my clips into three main situations:

This is a small project of 23 cuts, so I found it way more practical to subdivide: the base forest clips in the ‘Clips’ group, the lake clips in ‘Exteriors’, and the dark clips in the ‘Pool’ group.

That said, I was ready to re-think my fixed node structure, maintaining in my timeline what Nikolas did, but turning it off. I felt like it might be useful at some point, so I decided not to delete anything he did.

So, this was my FIRST structure

Based on the most common workflows I use, I decided to split the workflow into 3 main groups.

This is my most common way to grade a project like this and still be able to add a Color Space Transform at the end of the pipeline. Why? In order to transcode and remap the color space used into another color space for a different export (this process is called a ‘pass’, and I’m not doing it in this case, BTW, as this project was supposed to be released only in HDR… but just imagine that by adding a node and re-mapping data, we can go anywhere).

This process was Display Referred (Color Management handled by me in the structure)

So, here’s how I set this up:

Pre clip: Kinefinity transcodes into Arri LogC space (in order to have the clips floating in Arri LogC; you’ll know why in a second). I used LookDesigner and not a Color Space Transform here (if you don’t know LookDesigner, I suggest you check out this video).

Clip: nothing but a base corrector for exposure and WB tweaks. I added a few parallels when needed, but mainly it was all about tweaking exposure and minor WB fixes (hey, that’s what happens when footage is good, don’t blame me).

Post Group: that’s where the fun is. On the first corrector I put the ShowLUT generated for Nikolas (you could call this one the Look node, if you want), and here’s where I put my Kodak 5219 negative, remaining in an Arri space using LookDesigner. On the second corrector, I used LookDesigner once again, inputting Arri and outputting Rec. 709 2.4 (having graded all this initially on a 2.4 display as a first pass). And that’s the point where my chain would end IF I were outputting for SDR delivery… now we need to expand all this.

That’s where the HDR happens, so read carefully:

And that’s where we come to the moment where I switched my monitor to HDR P3-D65 ST2084 (1,000 nits) for the HDR pass. Again: at this point, if your monitor can’t handle this space, you CAN’T GRADE IN HDR.

Read that again. If your monitor can’t handle this space, you CAN’T GRADE IN HDR.

This was composed of a series of 3 serial corrector nodes (highlight/blacks recovery and a remapping corrector) with my final Rec. 709 to HDR P3-D65 ST2084 transform at the end.

By doing this, with a properly set monitor, I was now working on an HDR scenario, giving a whole look to my clips, finalizing the project, and being ready to export my timeline with the correct metadata… until

Until I realized this wasn't the correct method for YouTube HDR deliveries

While on the technical side everything was displaying properly and looking great, on the practical side YouTube was not recognizing the video as HDR, but still treating it as an SDR timeline.

After some research (yeah, I could have done this earlier, but hey, I’m human, and I make mistakes), that’s what I found:

Basically (from YouTube guidelines):

If you’re grading your video, grade in Rec. 2020 with PQ or HLG. Using a different configuration, including DCI P3, will produce incorrect results.

Once a video has been properly marked as HDR, uploading it follows the usual process of uploading a video. YouTube will detect the HDR metadata and process it, producing HDR transcodes for HDR devices and an SDR downconversion for other devices.

I screwed that up… and needed to get my project back on track.

And before jumping into that again, I want to leave you some more guidelines:

  • Resolution: 720p, 1080p, 1440p, 2160p. For best results, use UHD rather than DCI widths (for example, 3840×1600 instead of 4096×1716).
  • Frame rate: 23.976, 24, 25, 29.97, 30, 48, 50, 59.94, 60
  • Color depth: 10 bits or 12 bits
  • Color primaries: Rec. 2020
  • Color matrix: Rec. 2020 non-constant luminance
  • EOTF: PQ or HLG (Rec. 2100)
  • Video bitrate: For H.264, use the recommended upload encoding setting
  • Audio: Same as the recommended upload encoding setting
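If you ever need to tag these properties outside Resolve, an encoder like ffmpeg can set them at encode time. Here's a hedged sketch that builds an ffmpeg command line matching the specs above (10-bit HEVC, Rec. 2020 primaries and matrix, PQ); the filenames are placeholders, and you should verify the flags against your own ffmpeg build:

```python
def youtube_hdr_ffmpeg_cmd(src, dst):
    """Build an ffmpeg command tagging a 10-bit HEVC encode as HDR10 (PQ / Rec. 2020).

    Filenames are placeholders; check the flags against your ffmpeg build.
    """
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx265",
        "-pix_fmt", "yuv420p10le",      # 10-bit color depth
        "-color_primaries", "bt2020",   # Rec. 2020 primaries
        "-color_trc", "smpte2084",      # PQ (ST 2084) EOTF
        "-colorspace", "bt2020nc",      # Rec. 2020 non-constant luminance matrix
        "-c:a", "copy",
        dst,
    ]

cmd = youtube_hdr_ffmpeg_cmd("master_prores.mov", "youtube_hdr.mp4")
print(" ".join(cmd))
```

The three color flags are what write the primaries/transfer/matrix tags YouTube looks for when deciding whether an upload is HDR.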

HDR video file encoding

These containers have been tested to work:

  • MOV/QuickTime
  • MP4
  • MKV

These codecs are recommended, as they support 10-bit encoding with HDR metadata and deliver high quality at reasonable bitrates:

  • VP9 Profile 2
  • AV1
  • HEVC/H.265

These codecs also work, but require very high bitrates to achieve high quality, which may result in longer upload and processing times:

  • ProRes 422
  • ProRes 4444
  • H.264 10-bit

HDR metadata

To be processed, HDR videos must be tagged with the correct:
  • Transfer function (PQ or HLG)
  • Color primaries (Rec. 2020)
  • Matrix (Rec. 2020 non-constant luminance)
HDR videos using PQ signaling should also contain information about the display they were mastered on (SMPTE ST 2086 mastering metadata), as well as details about brightness (CEA 861-3 MaxFALL and MaxCLL). If this is missing, YouTube uses the values for the Sony BVM-X300 mastering display.
Optionally, HDR videos may contain dynamic (HDR10+) metadata as ITU-T T.35 terminal codes or as SEI headers.
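MaxCLL and MaxFALL are just statistics over the decoded frames: MaxCLL is the brightest single pixel anywhere in the program, MaxFALL is the highest per-frame average light level. A minimal sketch over hypothetical per-pixel luminance values in nits (a real tool would decode the video and derive luminance as CEA 861-3 specifies):

```python
def max_cll_fall(frames):
    """Compute (MaxCLL, MaxFALL) from frames of per-pixel luminance in nits.

    MaxCLL  = brightest single pixel across the whole program.
    MaxFALL = highest per-frame average luminance.
    """
    max_cll = 0.0
    max_fall = 0.0
    for frame in frames:
        pixels = [nits for row in frame for nits in row]
        max_cll = max(max_cll, max(pixels))
        max_fall = max(max_fall, sum(pixels) / len(pixels))
    return max_cll, max_fall

# Two hypothetical 2x2 frames; the first holds a 1,000-nit specular highlight.
frames = [
    [[100.0, 80.0], [90.0, 1000.0]],   # frame average: 317.5 nits
    [[50.0, 60.0], [55.0, 65.0]],      # frame average: 57.5 nits
]
print(max_cll_fall(frames))  # (1000.0, 317.5)
```

Note how a single specular highlight can push MaxCLL to 1,000 nits while MaxFALL stays far lower, which is why both values matter to tone-mapping displays.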

HDR authoring tools 

The following are examples of tools you can use to upload HDR videos to YouTube:

  • DaVinci Resolve
  • Adobe Premiere Pro
  • Adobe After Effects
  • Final Cut Pro X

Time to run back to the project and get it done right this time.

Chapter four: The process around an HDR Color Grading and delivery that works for YouTube

As the first thing, I started by setting everything right on the project (and decided that this time I would work Color Managed). So I opened the project settings and set things up this way:

And most importantly, under Master Settings – Video Monitoring, I enabled ‘Enable HDR metadata’.

Funnily enough, I had this enabled in my old version, but still it wasn’t processed by YouTube.

That done, this was my brand new Node Tree structure:

Pre-Group: Kinefinity to LogC as before, with the addition of a general offset for the whole pre-group and the LookDesigner Look process baked in here, with the difference that what I did here was output in LogC and not in Rec. 709 2.4.

Clip: As before, BASE adjustments on the exposure and WB side mainly.

Post-Group: Counting that the grade had been dedicated to the Pre-Group, output in LogC, I fixed the fixable zones (starting from the previous structure) and output my Color Space and Gamma to the timeline settings, as seen up here.

And it's already Delivery time

On the delivery side, I decided to go for a pretty fat format, Apple ProRes 4444 XQ (’cause… why not), at native resolution.

The sure thing is to take extra care with the HDR10 metadata and the correct color space tags.

All this… in less than 48 hours, with one single review from Nikolas.

Chapter five: Making it to YouTube

My first consideration on the YouTube side is that for a work like this, you need to take into consideration the most important thing: time.

YouTube will take something like 2 to 3 days to process your HDR image, and until then, it will just appear as a failed SDR upload.

And you know what? Even with these settings properly set on the project, YouTube randomly failed to process HDR on 2 uploads out of 3. Maybe even the initial workflow would have worked? Who knows.

What I know now is that it was an incredible pain to wait for every single upload to process, day after day, without knowing if I was missing something or if it was just… my dearest YouTube encoder.

Hope your next export will be less painful than mine.

Even if the result is worth every minute spent on it.

My best to you,

