I believe the direct sparse solver will benefit most from more RAM, and the iterative solver will benefit most from a faster processor.
Adding memory will help IF you are actually running out of physical memory during the analysis. When the operating system runs out of physical memory it falls back on virtual memory, writing whatever won't fit in RAM out to a swap file on the hard drive, which slows things down tremendously. You can usually tell when this is happening because the hard drive starts thrashing wildly trying to keep up. Use Task Manager to monitor the available physical memory.
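If you want to log that number over a run instead of watching Task Manager, on Linux the equivalent figure can be read straight from /proc/meminfo. A minimal sketch (Linux-only; the MemAvailable field is assumed to be present, which is true on modern kernels):

```python
def mem_available_kb(path="/proc/meminfo"):
    """Return available physical memory in kB, or None if not found.

    Linux-only sketch: parses the MemAvailable field of /proc/meminfo,
    roughly the number Task Manager reports as 'available' on Windows.
    """
    with open(path) as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                # Line looks like: "MemAvailable:  12345678 kB"
                return int(line.split()[1])
    return None
```

If this value drops toward zero while the solver is running, you are swapping, and more RAM will help.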
It is true that Direct Sparse typically uses a lot more memory than the iterative solver. If you are using the direct sparse solver and running out of memory, try the iterative solver.
Multiple processors will benefit both solvers, but Direct Sparse will see a much larger gain in performance: roughly a 50% speedup with two processors compared to one, versus only about a 13% gain for the iterative solver.
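Those two-processor numbers let you back out how much of each solver actually parallelizes, using standard Amdahl's-law arithmetic. A quick sketch (the 1.5x and 1.13x figures come from the post above; the formula inversion is the only thing added):

```python
def parallel_fraction(speedup, cores):
    """Invert Amdahl's law: given the observed speedup on `cores`
    processors, return the implied parallelizable fraction of the run.
    Amdahl: speedup = 1 / ((1 - p) + p / cores), solved here for p."""
    return (1.0 - 1.0 / speedup) / (1.0 - 1.0 / cores)

# Direct Sparse: ~50% gain on two cores -> about 2/3 of the work parallelizes
print(round(parallel_fraction(1.50, 2), 2))  # 0.67
# Iterative: ~13% gain on two cores -> only about 23% parallelizes
print(round(parallel_fraction(1.13, 2), 2))  # 0.23
```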
I find a quad core (say the fastest one you can afford) with 8GB of RAM, a decent video card (say an nVidia FX 1700), and a RAID 0 array makes for a pretty capable machine.
I run the direct sparse solver for most NL problems as I have lots of RAM. In NL work the stiffness matrix decomposition runs very efficiently on multi-core systems. However, the stiffness matrix assembly is a single-processor operation, so once you hit say 4 cores, speeding up the matrix decomposition is a diminishing-returns proposition in my view.
It would serve the Cosmos guys well to get the stiffness matrix assembly done with a multi-core strategy. If it could be done as efficiently as the decomposition, then the stuff would really hum, in my humble opinion.
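The diminishing-returns point can be made concrete with Amdahl's law: if roughly 2/3 of a direct sparse run parallelizes (consistent with the ~50% two-core gain quoted earlier) and the assembly stays serial, adding cores flattens out quickly. A sketch, with the 2/3 figure as the assumed parallel fraction:

```python
def amdahl_speedup(parallel_frac, cores):
    """Amdahl's law: overall speedup when only `parallel_frac` of the
    work (e.g. the decomposition) scales with core count and the rest
    (e.g. the assembly) stays serial."""
    return 1.0 / ((1.0 - parallel_frac) + parallel_frac / cores)

p = 2.0 / 3.0  # assumed: implied by the ~50% two-core gain quoted above
for n in (1, 2, 4, 8, 16):
    print(n, round(amdahl_speedup(p, n), 2))
# Speedup flattens past ~4 cores; the serial assembly caps it at
# 1 / (1 - p) = 3x no matter how many cores you add.
```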