If you want to derive the geometry from 2D images, then Maya or 3ds Max might not be the best solution in your case.
In 3ds Max, you would need to sculpt/model the object according to your imagery. I imagine that would be agonizingly laborious.
And all you would end up with is one generic cell.
Depending on how dynamic you want it to be, you could do the following (at least that's how I would approach it):
Create a program (the language does not really matter here) that does the following:
It analyzes/parses the image and creates the 3D data for a mesh based on what it finds.
Since the imagery comes from a microscope, I guess you have very high contrast levels. That should help you find the regions that correspond to geometry. Or you make a second image and colorize it accordingly (thus moving the pattern-recognition part from the program to your brain).
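To make the thresholding idea concrete, here is a minimal sketch in pure Python. The hard-coded grid stands in for a high-contrast grayscale scan, and the threshold value is an assumption you would tune per image:

```python
# Toy stand-in for a high-contrast grayscale microscope image (0-255 values).
IMAGE = [
    [0,   0,   0,   0,   0],
    [0, 240, 250,   0,   0],
    [0, 245, 255, 230,   0],
    [0,   0, 235,   0,   0],
    [0,   0,   0,   0,   0],
]

THRESHOLD = 128  # assumed cutoff between background and cell; tune per image

def extract_points(image, threshold=THRESHOLD, scale=1.0):
    """Return (x, y, z) vertices for every pixel above the threshold.

    z is 0 here; a real pipeline might derive it from brightness,
    or from a second image focused at a different depth.
    """
    points = []
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            if value >= threshold:
                points.append((x * scale, y * scale, 0.0))
    return points

points = extract_points(IMAGE)
print(len(points), "vertices found")  # → 6 vertices found
```

This is only the region-finding step; turning the point cloud into an actual triangulated mesh is a separate problem.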
Once you have the geometry for the objects, dump it into a CSV data file.
Import the CSV into the engine as a datatable.
Create a blueprint that constructs a “CustomStaticMesh” by parsing the data from the table.
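In the engine itself, the CSV import happens through the editor and the parsing happens in your blueprint, so I can't show that here, but the data flow of the last three steps looks roughly like this sketch (the column names and the row-name index column are assumptions; UE4 datatables expect a unique name in the first column):

```python
import csv
import io

# Example vertex data as produced by the analysis step
# (assumed layout: one row per vertex, columns X, Y, Z).
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]

# Step 1: dump the geometry into CSV. The first column is a unique
# row name, which UE4 datatables require, so we emit an index.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["Name", "X", "Y", "Z"])  # assumed header
for i, (x, y, z) in enumerate(vertices):
    writer.writerow([i, x, y, z])
csv_text = buffer.getvalue()

# Step 2/3: what the blueprint effectively does after the datatable
# import — walk the rows and rebuild a vertex buffer it can feed to
# whatever constructs the mesh.
reader = csv.DictReader(io.StringIO(csv_text))
parsed = [(float(r["X"]), float(r["Y"]), float(r["Z"])) for r in reader]

assert parsed == vertices  # round trip preserves the geometry
```

The round trip matters: whatever precision and ordering you write out is exactly what the blueprint has to reconstruct from.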
This way you don't just replicate a generic cell, but the actual cell from your microscope image.
Each cell is different.