I would suggest just using a render target.
Then you can write the image you download onto the render target and allow for a custom/different effect any time.
Best of all, it’s completely isolated, meaning your Niagara system/emitter won’t ever need to change at runtime unless you force it to.
As to why:
Even if a user doesn’t upload a power-of-two image, the script that stamps the user image onto the RT can preserve the aspect ratio and crop the image with an alpha mask based on position and size.
You can also force shrinking if the image is larger than needed, which lets you load the image while keeping to whatever memory budget your particle system has to follow.
Or you can allow for custom scaling.
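The fit/shrink logic above is just a bit of math. A minimal sketch (the function name and the assumption of a square render target are mine) that scales an image down to fit the RT, never upscales, and centers the result:

```cpp
#include <algorithm>

// Hypothetical helper: fit an ImgW x ImgH user image into a square
// render target of side RTSize, preserving aspect ratio and shrinking
// (never enlarging) when the image is larger than the target.
struct FitResult { float X, Y, W, H; };

FitResult FitToRenderTarget(float ImgW, float ImgH, float RTSize)
{
    // Scale down to fit; clamp to 1.0 so smaller images aren't upscaled.
    const float Scale = std::min(1.0f, std::min(RTSize / ImgW, RTSize / ImgH));
    const float W = ImgW * Scale;
    const float H = ImgH * Scale;
    // Center the stamped image inside the render target.
    return { (RTSize - W) * 0.5f, (RTSize - H) * 0.5f, W, H };
}
```

The returned position/size is what you’d feed to the draw call; for custom scaling you’d just expose `Scale` instead of deriving it.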
So, at a very base level, modify your script as follows.
Create a “brush” material into which the user image is stamped; use the Translucent blend mode, because user images will have transparency.
Wire the alpha into Opacity so it’s preserved, and the color into Emissive so it’s unlit.
With that material, write to the render target: Begin Draw Canvas to Render Target > Draw Material > End Draw Canvas to Render Target.
You’ll need to figure out the right parameters for the job, but it should be simple.
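In C++ the same Begin/Draw/End flow goes through `UKismetRenderingLibrary` and `UCanvas`. A sketch, assuming you already have the render target and the brush material (the function name and parameter choices here are illustrative, not a fixed API you must match):

```cpp
#include "Kismet/KismetRenderingLibrary.h"
#include "Engine/Canvas.h"
#include "Engine/TextureRenderTarget2D.h"

void DrawUserImageToRT(UObject* WorldContext, UTextureRenderTarget2D* RT,
                       UMaterialInterface* BrushMaterial)
{
    UCanvas* Canvas = nullptr;
    FVector2D Size;
    FDrawToRenderTargetContext Context;

    // Equivalent of the Begin Draw Canvas to Render Target node.
    UKismetRenderingLibrary::BeginDrawCanvasToRenderTarget(
        WorldContext, RT, Canvas, Size, Context);

    // Stamp the brush material across the whole target; swap in the
    // position/size from your aspect-ratio fit if you crop or letterbox.
    Canvas->K2_DrawMaterial(BrushMaterial,
                            FVector2D::ZeroVector,  // screen position
                            Size,                   // screen size
                            FVector2D::ZeroVector,  // UV position
                            FVector2D::UnitVector); // UV size

    // Equivalent of the End Draw Canvas to Render Target node.
    UKismetRenderingLibrary::EndDrawCanvasToRenderTarget(WorldContext, Context);
}
```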
And from then on, Bob’s your uncle.
The produced RT can be used as an input to anything that takes a texture sample.