We have a server with 64 GB of RAM and 32 cores, yet training takes more time there than on a local machine with a Core i7 and 16 GB of RAM. Is there a particular reason for this, and is there a solution? Is something wrong with TensorFlow?
Training details:

Local machine (Core i7, 16 GB RAM): [02:44<00:00, 2.43it/s, loss=0.804, acc=0.996]
Server (32 cores, 64 GB RAM):       [03:41<00:00, 1.81it/s, loss=0.842, acc=0.994]
Also, lscpu output on the server:

lscpu output on the local machine:
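One thing I suspect is thread oversubscription on the 32-core box (too many OpenMP/MKL threads fighting over caches). Would pinning the thread counts before launching training help? This is just a sketch of what I mean; the values are untuned examples and `train.py` stands in for my actual training script:

```shell
# Limit CPU thread pools before launching training (example values, to be tuned)
export OMP_NUM_THREADS=8                          # cap OpenMP/MKL worker threads
export KMP_BLOCKTIME=0                            # idle threads yield instead of spinning
export KMP_AFFINITY=granularity=fine,compact,1,0  # pin threads to cores
python train.py                                   # placeholder for the real script
```

Is this the right direction, or should the thread counts be set from inside TensorFlow instead?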
Help would be much appreciated. Thanks.