I made the neural networks able to forward-propagate and backward-propagate on a separate thread. Right now this creates one thread per individual NN, so you can split supervised training data across separate threads. I'm also working on batch NN threading (n NNs per thread), as well as running all NNs on a single separate thread.
Supervised learning split across 4 threads vs. the same training non-threaded is about 3 to 4x faster, roughly a 3.5:1 speedup, as expected since I have a quad core.
Threaded NNs with unsupervised learning vs. non-threaded NNs with unsupervised learning are not any faster at all, about 1:1, as expected.
Sometimes threaded unsupervised NNs are actually slower when you have, say, 300 NNs, simply because of the sheer number: the threaded version does extra work creating threads, copying data, etc. The kind of unsupervised learning also takes a toll, since each net has to generate its own optimal outputs as well as compute the final outputs, versus only having to compute the final outputs in supervised learning.
Edit: I have a feeling I'm going to remove threading altogether, as there is no real reason for it to be there other than to speed up supervised learning, which to me is a very specific optimization for a very general AI. I'll come back to the threaded code at a later date, when I can find a use for the speed.