------comparing training times------

---old LRNN training performance:---
-	with the fair uncompressed representation, i.e. the 1:1 architecture
	- after JVM warmup (performance takes about 500 epochs to stabilize)
	- it takes about 2 sec per full epoch (including output recalculation)

- with the compressed representation there is about a 10x speedup (but that's irrelevant)
	- i.e. 200 ms per epoch
- both versions can be tuned for even less time (about 1.8 sec per epoch)
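The JVM warmup effect above can be isolated with a small harness: run a few hundred throwaway epochs so the JIT settles, then average the steady-state epoch time. `trainEpoch` here is a hypothetical stand-in workload for illustration, not the actual LRNN epoch:

```java
public class EpochTimer {
    static double sink = 0;  // keeps the JIT from eliding the workload

    // hypothetical stand-in for one full training epoch
    static void trainEpoch() {
        double acc = 0;
        for (int i = 0; i < 50_000; i++) acc += Math.sin(i);
        sink += acc;
    }

    // average epoch time in ms, measured only after the warmup phase
    static double timeEpochs(int warmup, int measured) {
        for (int i = 0; i < warmup; i++) trainEpoch();  // let the JIT settle
        long start = System.nanoTime();
        for (int i = 0; i < measured; i++) trainEpoch();
        return (System.nanoTime() - start) / 1e6 / measured;
    }

    public static void main(String[] args) {
        // the notes suggest ~500 warmup epochs before timings stabilize
        System.out.printf("avg epoch: %.2f ms%n", timeEpochs(500, 10));
    }
}
```

Measuring only after warmup is what makes the 2 sec/epoch figure comparable across runs.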



---new NeuraLogic performance---


about 3.2 sec per epoch in the fastest setting
	- it actually depends on the machine state - a fresh start of IntelliJ is faster!
	- best as of 11.11.2019: under 3 sec per epoch!
		- 2.5 sec per epoch without GC (and 3 sec with GC)
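Since GC accounts for a visible slice of the epoch time (2.5 sec vs 3 sec above), the GC share can be measured directly via the standard management beans. This is a generic sketch; the allocation loop is a stand-in for an epoch, not the framework's code:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcShare {
    // total GC time (ms) accumulated by all collectors so far
    static long gcMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime();  // -1 if undefined for this collector
            if (t > 0) total += t;
        }
        return total;
    }

    public static void main(String[] args) {
        long gcBefore = gcMillis();
        long t0 = System.nanoTime();
        // stand-in for one epoch: allocate garbage to trigger collections
        long sink = 0;
        for (int i = 0; i < 2_000_000; i++) sink += new double[4].length;
        long wallMs = (System.nanoTime() - t0) / 1_000_000;
        System.out.println("epoch wall: " + wallMs + " ms, GC: "
                + (gcMillis() - gcBefore) + " ms (sink=" + sink + ")");
    }
}
```

Sampling `gcMillis()` before and after each epoch gives exactly the "with GC / without GC" split quoted above.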




----DONE-----

- shuffling, factoOutput=0.1, evaluation after each epoch
	- no effect
- gradient learning rate application
	- no effect
- merge LRNN1.0 GitHub branches
	- all nicely ready on the old GitHub

- performance of 2 months back
	- I don't think there was any significant change w.r.t. training speed
		- no commit should affect performance except linear chain pruning
			- so it's best to continue with the latest version
				- i.e. 3.2 sec per epoch

-----TODO-----


check the 1:1 setting step by step on a single neural net with manual initialization
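One way to carry out this check: a single sigmoid neuron with hand-set weights, comparing the analytic gradient against a central finite difference. All names here are hypothetical illustration, not the framework's API:

```java
public class ManualNetCheck {
    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // forward pass of a single sigmoid neuron with manually fixed weights
    static double forward(double[] w, double[] x) {
        double sum = 0;
        for (int i = 0; i < w.length; i++) sum += w[i] * x[i];
        return sigmoid(sum);
    }

    // squared error 0.5*(y - t)^2 against target t
    static double err(double[] w, double[] x, double t) {
        double y = forward(w, x);
        return 0.5 * (y - t) * (y - t);
    }

    // analytic gradient of the error w.r.t. weight j
    static double grad(double[] w, double[] x, double t, int j) {
        double y = forward(w, x);
        return (y - t) * y * (1 - y) * x[j];
    }

    public static void main(String[] args) {
        double[] w = {0.5, -0.25};  // manual initialization
        double[] x = {1.0, 2.0};
        double t = 1.0, eps = 1e-6;
        // central finite-difference check of the gradient for weight 0
        double[] wp = {w[0] + eps, w[1]}, wm = {w[0] - eps, w[1]};
        double numeric = (err(wp, x, t) - err(wm, x, t)) / (2 * eps);
        System.out.println("analytic=" + grad(w, x, t, 0)
                + " numeric=" + numeric);  // both should be ~ -0.125
    }
}
```

Running this step by step against the 1:1 network with the same manual weights should reproduce every intermediate value.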







-----NEXT

checking all algebraic calculations and trying to minimize creation of new Value objects
	- the garbage collector is what takes the time there
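A common way to cut this allocation churn is to give each algebraic operation an in-place variant, so accumulation loops reuse one buffer instead of producing a fresh object per operation. The `Value` class below is a hypothetical stand-in for illustration, not the framework's actual class:

```java
public class ValueReuse {
    static final class Value {
        final double[] data;
        Value(int n) { data = new double[n]; }

        // allocating version: a new object per call (GC pressure)
        Value plus(Value o) {
            Value r = new Value(data.length);
            for (int i = 0; i < data.length; i++) r.data[i] = data[i] + o.data[i];
            return r;
        }

        // in-place version: accumulate into this object, zero allocation
        Value plusInPlace(Value o) {
            for (int i = 0; i < data.length; i++) data[i] += o.data[i];
            return this;
        }
    }

    public static void main(String[] args) {
        Value acc = new Value(3);
        Value one = new Value(3);
        java.util.Arrays.fill(one.data, 1.0);
        // with plus() this loop would create 1000 temporary Values;
        // plusInPlace() creates none
        for (int i = 0; i < 1000; i++) acc.plusInPlace(one);
        System.out.println(acc.data[0]);  // 1000.0
    }
}
```

The trade-off is mutability: in-place ops are only safe where the intermediate Value is provably not shared, which is exactly what the algebraic-calculation audit above would establish.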