I have a large matrix (1000x1000, and I'm eager to make it even bigger) of numbers. I have some simple formulas (really just additions and multiplications) that need to be applied to every element of the matrix independently.
As far as I know, this is the sort of calculation that can run much faster on a GPU than on a CPU, but I have not been able to find any clue about how to send this task to the GPU instead of the CPU.
I would therefore like to ask you this question:
Is there any way to send a "loop" to the GPU? Let's say I want the GPU to do this:
    for i in range(20):
        T[i] = T[i] + 1
How can I do that?
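For reference, here is how I currently handle the whole matrix on the CPU with NumPy (the formula below, multiply by 2 and add 1, is just a placeholder for my real one). I'm hoping there is an equally simple way to push this kind of elementwise operation to the GPU:

```python
import numpy as np

# Placeholder 1000x1000 matrix of numbers (random data for illustration)
T = np.random.rand(1000, 1000)

# A really simple formula applied to every element independently,
# done as one vectorized operation instead of an explicit Python loop
T = T * 2 + 1
```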
Many thanks for any hints.