The video was very helpful. I was able to solve the shadow issue (recalculate the normals).
However, the guy in the video forgot to mention that for the recalculation to work, you have to uncheck the option I marked in the red square. Thirty minutes of tutorial and, man, he forgot to say the most important thing… I almost went crazy. LOL
I'm leaving this here so that whoever comes after me doesn't have to suffer.
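For anyone curious what "Recalculate Normals (Outside)" conceptually does, here's a rough pure-Python sketch (my own simplified illustration, not Blender's actual code): a face normal comes from the cross product of two edge vectors, and the winding order decides which way it points; "outside" flips any normal that points toward the mesh interior.

```python
# Simplified sketch of "recalculate normals (outside)" for a convex mesh.
# Not Blender's real algorithm -- just the core idea.

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def face_normal(v0, v1, v2):
    # The winding order (v0 -> v1 -> v2) decides which way the normal points.
    return cross(sub(v1, v0), sub(v2, v0))

def recalc_outside(face, mesh_center):
    v0, v1, v2 = face
    n = face_normal(v0, v1, v2)
    face_center = tuple(sum(c) / 3 for c in zip(v0, v1, v2))
    # If the normal points toward the inside, flip it (reverse the winding).
    if dot(n, sub(face_center, mesh_center)) < 0:
        n = tuple(-c for c in n)
    return n

# A face of a unit cube centered at the origin, wound "inward" on purpose:
face = ((0.5, -0.5, -0.5), (0.5, -0.5, 0.5), (0.5, 0.5, 0.5))
print(recalc_outside(face, (0.0, 0.0, 0.0)))  # x component is now positive
```

That inward-pointing normal is exactly what causes the black shading/shadow artifacts the video was about.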
Thank you for teaching this.
I definitely still have a lot to learn about modeling.
When I started with Blender, it seemed like the most complicated program I had ever used.
And I had technical drawing classes at university with AutoCAD.
But getting started with Blender was a real pain… Now I know a lot of things… but I still don't know anything about the sculpting tools.
Always learning!!
OK, yes… I use it to flip the normals…
Or rather, to see which way the faces are facing.
I think it also paints them blue and red.
But I didn’t know that vertices were used for smoothing.
"Painting vertices" is also a strange concept to me…
Actually, when I paint a mesh I select the polygons.
I need to look into that too… painting vertices seems really weird.
Yes, I can see how the angle of the vertex normals is different in each case… with "Shade Smooth" they are closer together.
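That's the whole trick: with flat shading each face keeps its own normal, while smooth shading gives each vertex the (normalized) average of the normals of all the faces that share it. A tiny sketch of that averaging (the function names are mine, not Blender's):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def vertex_normal(face_normals):
    # Smooth shading: the vertex normal is the normalized average of the
    # normals of every face that touches the vertex.
    summed = tuple(sum(axis) for axis in zip(*face_normals))
    return normalize(summed)

# Two faces meeting at a 90-degree edge (think of a cube edge):
top  = (0.0, 0.0, 1.0)
side = (1.0, 0.0, 0.0)
print(vertex_normal([top, side]))  # points 45 degrees between the two faces
```

That averaged normal is why a smooth-shaded edge looks rounded even though the geometry is still a hard corner.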
I apologize if I write something incoherent sometimes…
I write quickly, translate it into English as best I can… and then when I review it I realize I messed up a couple of times…
Don't think I'm crazy… it's just that I'm not a native English speaker…
No worries, 3D art is a vast subject; not everyone has the time to delve into its madness ;).
Normal maps are basically a per-pixel version of the vertex smoothing values (that's why they can fake light interaction at the pixel level). But they are more costly than simple vertex shading.
It's the same trade-off as vertex lighting (calculated at each vertex, then interpolated across the face) versus pixel lighting, which calculates it on a per-pixel basis.
I used a program to do lighting calculations (DAISALUX) for real installations… I remember it spending hours on the calculations for just one room. And Unreal does it in real time… But it makes sense: that program calculates per unit of surface (candela/m² or lux), among other types of calculation… and that's why it takes so long.
However, as you say, calculating per vertex eliminates a huge amount of surface to compute. Simply averaging is already an optimization.
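For scale, here's a back-of-the-envelope version of what those per-surface calculations mean (lux is just lumens per square metre); real tools like DAISALUX do vastly more, such as inter-reflections and fixture photometry, which is why they take hours. The utilization factor below is a made-up fudge value for this sketch, not from any standard:

```python
# Back-of-the-envelope average illuminance: lux = lumens / square metre.

def average_illuminance(luminous_flux_lm, floor_area_m2, utilization=0.5):
    # 'utilization' roughly accounts for light absorbed by walls/ceiling
    # (an assumed value for illustration only).
    return luminous_flux_lm * utilization / floor_area_m2

# Four 800 lm lamps in a 16 m^2 room:
print(average_illuminance(4 * 800, 16.0))  # 100.0 lux average
```

A real room calculation samples this per small surface patch and per light bounce, which is where the hours go.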
It seems to be all about optimizations…
32 cores and just one game thread…
I don’t think I’ll ever get over this… In 2025… LOL
I’ll have to learn to live with it.
And the keyboard shortcuts are endless. It takes a lot of time and practice to learn them all.
You also have to be constantly switching between Object Mode and Edit Mode… and selecting objects.
To do anything, you first need to do a series of other things… which makes it a slow process.
Not to mention that sometimes, for some incomprehensible reason, there are vertices that haven't been merged, so you need to run the "Merge" command very often.
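For anyone wondering what "Merge by Distance" is conceptually doing, it just collapses vertices that sit closer together than a threshold. A rough sketch (Blender's real implementation uses a spatial tree, not this naive O(n²) loop):

```python
# Conceptual "Merge by Distance": keep only one vertex from each cluster of
# vertices closer together than the threshold.

def merge_by_distance(vertices, threshold=1e-4):
    merged = []
    for v in vertices:
        for m in merged:
            if sum((a - b) ** 2 for a, b in zip(v, m)) <= threshold ** 2:
                break  # v duplicates an already-kept vertex; drop it
        else:
            merged.append(v)
    return merged

verts = [(0.0, 0.0, 0.0), (0.00001, 0.0, 0.0), (1.0, 0.0, 0.0)]
print(merge_by_distance(verts))  # the two near-identical vertices become one
```

Those near-duplicate vertices are exactly what causes the broken smooth shading along seams, since the normals can't be averaged across what Blender sees as two separate points.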
Another thing that could be improved a lot is the boolean operations… they could be more automatic. When you do booleans, you always end up having to manually adjust vertices one by one…
Is it a good program? Yes, without a doubt.
Can it be improved? A lot… above all, by making it easier and faster to use.
There are good plugins for that (like BoxCutter / Hard Ops).
What I hate is not being able to pan the camera in place, like you can in the engine with RMB.
When I'm working very close up and can't just pan slightly to work on another bit, I have to zoom out, re-select, and zoom in…
Also can’t wait for AI UV mapping. Something that AI might actually be useful for, rather than using it to fill the world with grey derivative sludge, as is mostly the case…
Yes, that could be very good… I've seen some programs that make 3D models from images.
But they are still very immature.
The meshes they generate have many defects (too many polygons, holes, etc.)
At the moment I think it is less work to make a new one by hand than to fix all the defects in the meshes generated by AI.
But it seems that generative AI is advancing fast. Maybe soon we will have tools that work well.