RAW Workflows (Changes to White Balance and Exposure)

How does RC handle white balance if the camera is in auto white balance?

Or if I change the white balance in Lightroom?

If I save the changes to a sidecar file, will RC read them? If so, just white balance or other settings too?

 

The reason I ask: using TIFFs in RC is extremely slow compared to other formats. JPGs are among the fastest, if not the fastest, but once you include the export time, it's slower than just processing the RAWs in RC.

I frequently use mixed cameras (Sony A7RIII, drone, phone, etc.), and things like exposure and white balance can vary. I believe DNG files save the changes directly to the file, with no need for a sidecar file, unlike the Sony .arw RAW files. If RC handles DNGs “better”, it might be more efficient for me to just convert the Sony files to DNG, since all my other cameras already shoot DNG RAWs, and it would mean less pre-processing and exporting.

 

Any insight into how RC handles RAW files would be appreciated and would save me and my computer many hours of testing.

Thanks Steven for bringing this subject to the table, I am very interested too!

Mainly for scenes with complex lighting and huge contrast, the RAW files retain more information (if only RC could handle that information)… I posted this somewhat related thread: https://support.capturingreality.com/hc/en-us/community/posts/360009453811-Texturing-exposure-EXIF-HDR-

I had some issues with DNGs (because of bad Windows DNG driver support, on some of my machines my DNG files are downscaled), so now I am trying to color-check and develop my RAWs in RawTherapee and then export JPGs to RC. But this is not such an easy road…

Afaik RC only uses the embedded JPG in the RAW, so it should be handled with care. Also, as Jonathan mentioned, it greatly depends on the Windows driver, since that is how RC accesses RAWs.

How do both of you handle the white balance issue between different cameras? I find that quite challenging, to say the least, even though I think RC does a great job if the differences stay within limits. BTW, I also use RawTherapee - despite much criticism, after thorough testing of many different developers I am convinced that it is one of the best. At least for my X-Trans III.

To be honest, I am more and more going back to using JPGs directly from the camera. In terms of geometry, the difference is negligible (with quality lenses), despite distortion correction. And if the contrast in the scene isn't extreme, there isn't much need for me to do much RAW processing, plus it's a huge time-saver. Cranking up the shadows won't interfere much with the features, in my opinion, and it makes a huge difference for alignment.

 

I try to stick with a fixed white balance and avoid auto white balance at all costs. So usually for outdoor work, the daylight (sunny) or cloudy setting, depending on the lighting… and I am trying to use a ColorChecker chart: on each of my shooting runs I try to get the ColorChecker Passport into my pictures, which I can then use in the app to build a profile (out of the DNG) and apply that profile in RawTherapee… But the gain in quality is really not obvious, as I think precise colorimetric work is much harder on a set of 1000 photogrammetry pictures than on a portrait facing one direction… so I am still experimenting…

And yes, I agree, I also tend to use the JPGs directly from the cam… much faster, much lighter to archive (about a quarter of the data size), …

The only exception is when I work with my Mavic Pro drone footage: the JPGs are so bad, so heavily smoothed, that they are really a no-go - DNGs are much, much better.

Ah, good to hear - the JPG community seems to be growing! :-) Probably that's increasingly possible because the in-camera processing keeps getting better.

I heard that a simple 80% gray color check is much quicker but sufficient for most cases. The processing time would still be similar, though. And there are still changes in the light over the course of the day. But I gather that's the only good way to handle differences between cameras? Those can be quite obvious, even with the same white balance setting or even in RAW development. What app are you using for those calibration profiles? Can it not be done within RawTherapee?

I have to use the X-Rite app “ColorChecker Passport”, which takes one DNG with the ColorChecker in it and computes a DCP profile that I can then import into RawTherapee and apply to my pictures. But this makes things so complicated…

So what you are doing is setting the white balance manually on your gray card? No issues between different cameras and sensors?

 

That’s what I heard - didn’t do it myself yet.

So far, I got away with a fixed WB or even auto at times…

So using RAW files has no benefit, and might even be slightly worse if the JPG previews that the camera builds into the RAW are poor, like the Mavic's?

What's the point of supporting RAW files? Only to give the camera's JPG preview, and not gain any benefit from all that extra data and drive space?

The only way I see this working, then, is to make adjustments and rebuild all the DNGs with full-sized previews built in; that at least saves a little drive space, since I don't have two copies (one DNG and one JPG) of the same picture. There are no time savings then.

Jonathan Tanant, I also use a color checker. The short answer to complex lighting is to use a dual-illuminant profile. And if you don't have time to do that, a generic dual-illuminant profile of daylight and tungsten is recommended, as they are far apart in white balance.

On the Mavic, it's not the preview in the RAWs that's bad, it's the camera's own JPG processing that's bad.

Yes, I tried with a dual-illuminant profile, but this just does not work for complex subjects, or I would have to reshoot a color checker every 100 pictures, and that would become a nightmare because I would spend too much time editing the RAWs…

Take for example a building - in the sun, at say 4 or 5 PM, that gives you at least 3 different lightings (if not more) for the 4 faces, from direct sunlight (exposed side) to indirect light and total shade (opposite side)… So for this kind of subject we could shoot 3 sets with 3 color calibrations, but of course you get issues at the overlaps…

And of course sometimes you want these changes, because the lighting is part of the subject: you cannot always aim for the ideal reflected color of the subject alone (even if we could).

Ideally I would see this as part of the workflow in RC (I could maybe turn this into a feature request):

-work with the RAWs, so no information is lost (12-14 bits + EXIF that gives RC information about exposure…).

-align the pictures.

-for outdoor scenes, given the time of day / GPS position, RC could compute a light model of where the sun is; the user would just have to say how cloudy it was that day.

-at texturing, all this information is used to compute the texture according to the user's choice:

    -remove the illuminant (i.e. try to balance every picture) or not

    -keep a constant exposure (so dark areas might be all dark and bright areas all white) or normalise the exposure (what a camera usually does when setting exposure)

Really, it would be so great to have this handled in RC!!!
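Just to illustrate the sun-model part of that wishlist: given an EXIF capture time and GPS latitude, a rough solar elevation can be computed with the standard declination / hour-angle approximation. This is purely my own sketch of the idea (the `sun_elevation` helper is hypothetical, not anything RC actually does):

```python
import math

def sun_elevation(lat_deg: float, day_of_year: int, solar_hour: float) -> float:
    """Approximate solar elevation angle (degrees) for a given latitude,
    day of the year, and local solar time in hours.
    Uses the textbook declination / hour-angle formulas."""
    # Solar declination: ~ -23.44 deg at the December solstice (day + 10).
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle: 15 degrees per hour away from solar noon.
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, d, h = (math.radians(x) for x in (lat_deg, decl, hour_angle))
    sin_el = math.sin(lat) * math.sin(d) + math.cos(lat) * math.cos(d) * math.cos(h)
    return math.degrees(math.asin(sin_el))

# At the equator around the March equinox (~day 80), the noon sun is
# nearly overhead; at midnight it is below the horizon.
el_noon = sun_elevation(0.0, 80, 12.0)
el_midnight = sun_elevation(0.0, 80, 0.0)
```

This ignores refraction, longitude/time-zone offsets, and the equation of time, but it shows that the EXIF timestamp plus GPS really is enough to roughly place the sun, which is all the cloudiness-weighted light model would need.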

Hey Steven, don't take my word for it, though. When I tried with direct RAW, that's just how I interpreted it. It might be different for others. Plus, what just struck me and I didn't consider at the time: it could be that RC uses the JPG only for display and calculates with the RAW. That seems a bit error-prone, though.

I would also be careful about relying on the in-RAW JPG, since you don't have much influence over the settings.

EDIT: I noticed at the time that when I import RAWs into RC, it looks exactly like the camera JPG and not at all like the unprocessed RAW. I just realized it might also be that the same rendering settings are used, since they might be stored in the EXIF. But then one would lose a great deal of the advantages of RAW processing…

If I'm working at all with my DJI Phantom 4 Advanced, the DNG files do not work well at all with the basic Windows drivers or RC. I bring them all into Photoshop and convert them to TIFFs. The files are huge, but I keep all the raw information that RC can use at that point.

But my workflow right now is to bring everything into Photoshop, mass-edit the exposure, highlights, shadows, etc., then convert them all to TIFF.

Jonathan Tanant,

     I just read your other post. I haven't tested this, but I believe the way to do HDR and have consistency in RC (how I would do it, anyway) would be to spot-meter the brightest part, i.e. out the window, and expose it at about +1 to +2 stops overexposed. Take note of the shutter speed and write it down. Then do the same with the darkest part of the subject, but at around -2 stops, again depending on taste.

     Say, for the brightest parts the shutter was 1/500 and for the darkest part it was 1/4 s. You then count the stops of difference. Every halving or doubling is one stop. So:

  1. 1/500

  2. 1/250

  3. 1/125

  4. 1/60

  5. 1/30

  6. 1/15

  7. 1/8

  8. 1/4

  9. 1/2

You can then set your camera to bracket 9 shots around the middle (1/30) for every shot. If your camera can't bracket that many, you have to do it by hand. Use Lightroom to generate an HDR for each bracket; Lightroom will spit out a RAW DNG HDR image that you can then use. The trick is to let the darkest parts be dark and the brightest parts be almost blown out. It looks more natural.
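The stop counting above is just repeated doubling of the exposure time, so it's easy to script. A minimal sketch (the `bracket_shutters` helper is my own name, not from any photo tool; note it produces exact doublings, whereas camera dials show nominal values like 1/60 and 1/30):

```python
import math

def bracket_shutters(fast: float, slow: float) -> list[float]:
    """Return shutter speeds (in seconds) from `fast` to `slow`,
    doubling the exposure time at each step (one stop per step)."""
    # Number of whole stops between the two spot-metered readings.
    stops = round(math.log2(slow / fast))
    return [fast * 2 ** i for i in range(stops + 1)]

# Spot-metered extremes from the example: 1/500 s and 1/4 s.
speeds = bracket_shutters(1 / 500, 1 / 4)
middle = speeds[len(speeds) // 2]  # center the bracket around this speed
```

For 1/500 to 1/4 this gives 8 speeds spanning 7 stops, with the bracket centered near 1/30 (nominal), matching the hand-counted list.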

I see that you're a Unity dev, so I don't know how familiar you are with cameras; just ask any questions if needed.

Where is WishGranter? Hearing it from the RC staff would put my mind at ease. Testing is great, but it's a lot of guesswork.

What are the benefits of working straight from raw files, if any?

What are the drawbacks?

What are the limitations of RC reading a RAW file?

If it varies from codec to codec, generalize it. Maybe just talk about DNGs.

Thanks Steven for the explanation of the bracketing method.

Actually, I am sure this would be very high quality, but I am afraid it would be really slow, because of the tripod (when I have sufficient light I work handheld to shoot faster) and because of the processing for each picture. I can't imagine doing that for each aligned camera in RC (we are talking about thousands of cameras).

What I had in mind is more about using the RAW (not as good as bracketing, but better than 8-bit JPGs) and using the information it contains (EXIF).

But thanks, I will try on a simple subject.

 

Yeah, tell me about it! It's a nightmare. I just did a small screw with focus stacking. Something like 200 images stacked turned into 27 final images to then run through RC. A lot of work for just 27 images. Or doing an HDR 360 pano in Lightroom: 50 HDRs processed as described above, then stitched into a DNG pano in Lightroom. It's a lot of work for just one image, or in RC, one model. Sometimes I just want to push the limits, and every time I do I learn.

Yes, I saw your post about the screw! Great result!

Do you want or need ambient lighting? In photography (in studio conditions) you can control the mix of ambient vs flash via shutter and aperture: the shutter controls the ambient light and the aperture controls the flash intensity, for the most part. You kill ambient light with a fast shutter. As long as you can get even exposure and no shadows with a flash setup, your results would be the same as de-lighting, or better, since it's not a software approximation. This would also avoid mixed white balance.

For a big room, using 1 or 2 flashes bounced off the ceiling would be the cheapest and easiest way to do it. Use your color checker when doing this, as the ceiling or wall will cause color casts, though they're easily corrected. You wouldn't move the flashes, and you could shoot handheld. For smaller subjects, a light box or ring light (ring flash) should work.
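The "shutter controls ambient, aperture controls flash" rule can be captured in a toy model: ambient light accumulates over the whole shutter duration, while the flash burst is effectively instantaneous, so only the aperture (light admitted ~ 1/N²) scales it. This is an illustrative sketch of mine, not code from any photography library:

```python
def exposure_mix(shutter_s: float, f_number: float,
                 ambient: float = 1.0, flash: float = 1.0) -> dict:
    """Toy model of the ambient-vs-flash split: the ambient term grows
    with shutter time, the flash term only depends on the aperture."""
    aperture = 1.0 / f_number ** 2  # light admitted scales as 1/N^2
    return {
        "ambient": ambient * shutter_s * aperture,
        "flash": flash * aperture,  # flash burst is shorter than any shutter
    }

# Going from 1/60 s to 1/250 s at f/8 cuts the ambient contribution
# while leaving the flash contribution untouched.
slow = exposure_mix(1 / 60, 8.0)
fast = exposure_mix(1 / 250, 8.0)
```

This is why a fast shutter "kills" ambient light (up to the flash sync speed), and why you balance flash power with aperture instead.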

The ceiling-bouncing flashes is not a bad idea, I like it!

Wishgranter is pretty much gone from this forum, so I wouldn't count on him too much (prove me wrong, Milos! :slight_smile: )

Also the staff usually doesn’t contribute too much to such discussions.

It really is about experimenting and finding out what works best for yourself. Where do you think Wishgranter got his amazing knowledge from? :wink:

So roll up the sleeves and click those buttons!

Jonathan, I think 16-bit TIFF is supported, so that would be the way to go there. Unfortunately, it is only for input; output is still only 8-bit, afaik. So not really much gain there, imho. I don't remember if I said this in this thread or another, but anyway, I think it helps a lot to raise the shadows (blacks) like crazy to give RC more to work with. It can also result in something coming close(-ish) to a delighted image.

So RC is definitely only using the JPG previews of DNG files. That would explain why DNG files are processed just as fast as JPGs. Changes to white balance and exposure didn't show in RC even after clicking "Save Metadata to File" in Lightroom, but after clicking "Update DNG Preview & Metadata" RC did reflect those changes, both in the 2D view and in the point cloud after an alignment. To be thorough, every test was started as a new project with the cache cleared each time.

Yep, after rebuilding the dng’s with no preview in lightroom, it seems like there is some absolute minimum like 128x128 preview. And that is all RC can read from the 42mp DNG. So unless you know that the jpg previews build into your raw files are full res and with the processing you want (noise reduction, sharping, ect…) your better off exporting full quality jpgs. I don’t think I’ll be using tiffs anymore as there is no quality difference and the downside is speeds much slower and massive files, 4 times lager than the original raw files. I have found that compressing them either zip or lzw only increases the already slow tiff RC processing time, and the files are still much larger than the original raws.