Scientific use: creation of 3D objects

Hi there,
I got one question for the experts which needs a bit more description.

For a project, I plan to create 3D objects layer by layer. The content of each layer will be extracted from microscopy images (certain areas) to recreate, for example, a cell organelle.

From what I’ve seen so far, Unreal Engine 4 seems to be an interesting option for this. Is it possible, and if so, how? And can one zoom into the object?

Thanks for reading :slight_smile:

P.S.: If I might have chosen the wrong subforum - sorry.

You can create your meshes and use them the same way you would use any mesh in any 3D virtual environment (games, arch-viz, VR). You can walk around them, through them, whatever. You will have to learn to code the actions, though.

Could you show me some links? Like tools for 3D coordinate modeling? I’m pretty new to this area and quite overwhelmed at the moment.

EDIT2: Ignore this post, I missed the part about extracting the content from microscopy images (you’ll not want to manually model this I think, but instead have it built dynamically as KVogler below mentions.)

Original post:
Hi Noiree,

For 3d modelling you’ll want to look into tools like Autodesk Maya, Autodesk 3D Studio Max, or a free alternative Blender (there are others, but these are the ones I am familiar with and are most commonly used).

Do you maybe have an example of what you’re trying to achieve within the engine?

EDIT: Here is a huge list of video tutorials for UE4 to help get you started

If you want to derive the geometry from 2D images, then Maya or 3ds Max might not be the best solution in your case.
In 3ds Max, you would need to sculpt/model the object manually according to your imagery. I imagine that would be agonizingly laborious.
And all you would end up with is one generic cell.

Depending on how dynamic you want it to be, you could do the following (at least that’s how I would approach it):

Create a program (the language does not really matter here) that does the following:
It analyzes/parses each image and creates the 3D data for a mesh based on what it finds.
Since the imagery is coming from a microscope, I guess you have very high contrast levels. That should help you find corresponding regions of geometry. Or you could make a second image and colorize it accordingly (thus moving the pattern-recognition part from the program to your brain :stuck_out_tongue: )
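The analysis step above could be sketched roughly like this. This is a minimal, hypothetical example using only the standard library: it thresholds one grayscale layer and turns every bright pixel into an (x, y, z) vertex, with z being the layer index. In a real pipeline you would load the microscopy image with an imaging library and likely extract region contours rather than raw pixels, but the idea is the same.

```python
# Hypothetical sketch: extract a high-contrast region from one image layer
# and turn its pixels into 3D vertices (x, y, z), where z is the layer index.
# The image is assumed to already be a 2D list of grayscale values (0-255);
# in practice you would load it from file with an imaging library.

def layer_to_vertices(image, z, threshold=128):
    """Return (x, y, z) vertices for all pixels at or above the threshold."""
    vertices = []
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if value >= threshold:
                vertices.append((x, y, z))
    return vertices

# Tiny synthetic 4x4 "microscopy slice" with one bright region in the middle:
slice0 = [
    [0,   0,   0,   0],
    [0, 200, 210,   0],
    [0, 190, 220,   0],
    [0,   0,   0,   0],
]
print(layer_to_vertices(slice0, z=0))
```

Running this per layer and incrementing z gives you a stack of vertex layers that together describe the volume of the object.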

Once you have the geometry for the objects, dump it into a CSV data file.
Import the CSV into the engine as a DataTable.
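The CSV-dump step might look like the sketch below. Note that UE4 DataTable CSVs expect a row-name column first; the column names used here (X, Y, Z) are assumptions and would have to match whatever row struct you define in the engine.

```python
# Hypothetical sketch: write vertex data to a CSV file in a shape UE4 can
# import as a DataTable (first column = row name, then the struct fields).
# The "Name"/"X"/"Y"/"Z" column names are assumptions; they must match the
# row struct defined in the engine.

import csv

def dump_vertices_csv(vertices, path):
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Name", "X", "Y", "Z"])  # header row
        for i, (x, y, z) in enumerate(vertices):
            writer.writerow([f"V{i}", x, y, z])   # one row per vertex

dump_vertices_csv([(1, 1, 0), (2, 1, 0)], "cell_mesh.csv")
```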

Create a blueprint that constructs the mesh at runtime by parsing the data from the table (in UE4 this kind of runtime geometry is typically built with a ProceduralMeshComponent).

This way you don’t just replicate a generic cell, but the actual cell from your microscope image.
Each cell is different :slight_smile: