Removing Directional Light from a Photoscan

Hello!

First post here on the forums!

Since the kite video was released a while ago, I've been out photoscanning assets, trying to build an asset library myself.

Everything has been going great (I've been using Agisoft PhotoScan and 3ds Max), but I've hit an obstacle. Even on really overcast days I'm getting some directional light on my assets, which makes them stand out in a bad way in engine.

I know the guys who created the open world demo managed to get around this using their own(?) software, but I really can't figure out how.

All my assets are captured with a grey ball, a chrome ball, and a full HDR environment.

What I think has to be done is to recreate the directional light setup exactly, using the grey ball and chrome ball references in 3ds Max/Maya, then bring in my assets and bake a secondary basecolor map. Then somehow subtract that (highly directional) map from the original captured map in Photoshop.
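Just to make the idea concrete, here's a rough sketch of that last step in Python (file names are placeholders, and it assumes the baked lighting map uses the same UV layout and resolution as the captured map):

```python
# Rough sketch of the "subtract the baked lighting" idea.
# File names are placeholders; both maps must share the same UVs.
import numpy as np
from PIL import Image

captured = np.asarray(Image.open("captured_basecolor.png").convert("RGB"), dtype=np.float32) / 255.0
baked = np.asarray(Image.open("baked_lighting.png").convert("RGB"), dtype=np.float32) / 255.0

# Dividing out the baked lighting (rather than a straight subtraction)
# treats lighting as multiplicative, which seems closer to undoing it.
# Clamp the divisor so near-black shadow pixels don't blow out.
albedo = captured / np.clip(baked, 0.05, 1.0)
albedo = np.clip(albedo, 0.0, 1.0)

Image.fromarray((albedo * 255).astype(np.uint8)).save("delit_basecolor.png")
```

I honestly don't know whether a divide or one of the Photoshop blend modes gets closest, so take that as a starting point rather than the answer.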

Does anyone know where to get more info on this, or maybe even software that can help me do it?

Best Regards
Oskar Wallin

I had the same problem and ended up removing most of it using Bitmap2Material from Allegorithmic (it has some features for removing AO and highlights to turn diffuse maps into albedo), and the rest by hand in Photoshop. It might not be the best way of doing it, but it's fairly quick and easy.
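To be clear, I have no idea what B2M actually does internally. But as a rough, generic stand-in for that kind of AO/highlight removal, you can divide an image by a heavily blurred copy of itself to flatten out the low-frequency shading; something like this (placeholder file name):

```python
# Generic low-frequency shading removal: NOT Bitmap2Material's actual
# algorithm, just my guess at the same general idea.
import numpy as np
from PIL import Image, ImageFilter

img = Image.open("diffuse.png").convert("RGB")
blurred = img.filter(ImageFilter.GaussianBlur(radius=64))  # radius is a tuning knob

a = np.asarray(img, dtype=np.float32) / 255.0
b = np.asarray(blurred, dtype=np.float32) / 255.0

# Divide by the blurred copy to flatten large-scale shading, then
# re-multiply by the mean so overall brightness stays roughly the same.
flat = np.clip(a / np.clip(b, 0.05, 1.0) * b.mean(), 0.0, 1.0)

Image.fromarray((flat * 255).astype(np.uint8)).save("flattened.png")
```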

I'll give that a look as well. (Software companies must be ecstatic with all these game engines releasing for free. ;)) Thanks!

I am curious about this step as well.

I gave Bitmap2Material a try. It does make things look better, but I think there should be a better solution: it basically just removes the highlights and ignores the shape of the object itself.

Any devs out there who know whether you'll release your tool to “de-light” meshes?

Yeah, I've heard other people suggest Bitmap2Material, but I'm super skeptical of how it would react to a wrapping texture where the light directionality changes across UV shells. My understanding is that a lot of its math hinges on deducing where the light source is coming from in a 2D image, and if that direction is arbitrary, it seems like it's going to have a tough time.

It's a super interesting question, though, and one I'm keen to figure out myself. I think you're on the right track in the first post there, LOAW. Would it be possible to bake out a light map using the HDR, then invert it and apply it as a screen against the textures in Photoshop? It's probably a technique that would need a lot of TLC, but it seems like the basic idea could work…? It probably gets weird around the edges, though; if it's not matched up exactly to the scene lighting, it would presumably give you slivers of doubled-up light/shadow. But yeah, please let us know if you find any promising solutions, and I'll do the same! I'm on the hunt.
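To sketch the screen idea, here's a quick Python mock-up (purely hypothetical file names, and it assumes the bake lines up with the texture's UVs):

```python
# "Invert the light bake and screen it back over the texture" idea.
# File names are placeholders; the bake must match the texture's UVs.
import numpy as np
from PIL import Image

tex = np.asarray(Image.open("texture.png").convert("RGB"), dtype=np.float32) / 255.0
bake = np.asarray(Image.open("hdr_light_bake.png").convert("RGB"), dtype=np.float32) / 255.0

inverted = 1.0 - bake  # bright where the bake was in shadow
# Photoshop-style screen blend: 1 - (1 - a) * (1 - b)
screened = 1.0 - (1.0 - tex) * (1.0 - inverted)

Image.fromarray((np.clip(screened, 0.0, 1.0) * 255).astype(np.uint8)).save("screened.png")
```

No idea whether screen is actually the right blend here versus a straight divide; that's probably part of the TLC.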

I'm planning on doing some scanning in the near future, and I was thinking it would be worth investing in a cheap portable reflector for diffusing the shadows. Probably best to tackle the problem at the source.

I’ll report back if I make any progress.

I'm swamped with uni work right now, but I'll give it a try or two over the coming weekends, and I'll make sure to post my progress.

Do report back, but I think you'll have problems flattening out an entire rock face. I also think your photoscan will have problems if the lighting isn't exactly the same each time you take a picture.

Someone posted a good thread on this in the Facebook group the other day; I'll try to find the link.

Personally, I prefer to use Photoshop; I get finer control that way. Plus, there are a lot of YouTube tutorials of people doing similar things in standard photography.

Yeah that would be great! Thanks!