I've been running a random vibration model with about 25,000 elements. The model represents a fairly complicated electronics assembly (200 mm × 100 mm × 100 mm) in which I need to analyze the overall sheet-metal structure, the internal structure, and about 8 printed-circuit boards - hence the 25,000 elements. I started with as coarse a mesh as possible and have used mesh refinement sparingly, only where needed. The random-vibration environment is per MIL-STD-810 and covers the frequency range of 0 to 2000 Hz.
The issue I'd like to discuss is how to minimize the solver run times. I first run the frequency (modal) analysis, which is the precursor to the random vibration. This doesn't take long - maybe 20 minutes or so. The exact number of natural frequencies the solver identifies depends on the depth of my analysis at the time, but there are usually 150 to 250 resonant frequencies. When I then run the random-vibration solver, the solution can take over 20 hours.
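For a sense of scale, here's a toy mode-superposition sweep of the kind the random-vibration solver performs after the modal run. All the numbers are hypothetical (mode frequencies, participation factors, damping, input PSD), and it combines modes by summing squared transmissibilities while ignoring cross terms - just a back-of-envelope sketch showing that the per-frequency arithmetic itself is tiny, which is why a 20-hour solve points at something other than raw compute:

```python
import numpy as np

# Hypothetical toy problem: a few modes, flat base-input PSD, 5% damping.
freqs_hz = np.linspace(1.0, 2000.0, 2000)           # analysis band up to 2000 Hz
modes_hz = np.array([120.0, 340.0, 870.0, 1500.0])  # assumed natural frequencies
gammas = np.array([1.2, 0.8, 0.5, 0.3])             # assumed modal participation
zeta = 0.05                                         # assumed modal damping ratio
psd_in = 0.04                                       # assumed input PSD, g^2/Hz

r = freqs_hz[:, None] / modes_hz[None, :]                # frequency ratios
H2 = 1.0 / ((1.0 - r**2) ** 2 + (2.0 * zeta * r) ** 2)   # |H(f)|^2 per mode
psd_out = psd_in * (gammas**2 * H2).sum(axis=1)          # response PSD (no cross terms)

# RMS acceleration = sqrt of the area under the response PSD (trapezoid rule).
grms = float(np.sqrt(np.sum(0.5 * (psd_out[1:] + psd_out[:-1]) * np.diff(freqs_hz))))
print(f"toy response: {grms:.2f} Grms over {freqs_hz.size} frequency points")
```

Even scaled up to 250 modes, thousands of frequency points, and many response locations, this kind of arithmetic is seconds of CPU work, not hours - consistent with the suspicion below that the time is going somewhere else.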
The computer I use has 4 processors, and each appears to peak at about 30 percent utilization; most of the time they're only running at a few percent. I also have over 3 GB of RAM (with the /3GB switch turned on), but except for the frequency portion, the RAM used during the random-vibration solve is very small, about 10 MB or so.
I think the reason for the very long solution times is I/O. That is, with the iterative solver, at least the way it's configured, the solver writes tiny bits of information to the hard disk over and over again as it iterates toward a solution. You can see the I/O light flicker away as data is written to (and read from?) the hard drive.
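That "tiny bits, over and over" pattern can be demonstrated in isolation. The snippet below (a rough stand-in, not the solver's actual scratch-file code) times writing the same total amount of data two ways: many small records each forced to disk, versus one buffered stream. On a mechanical drive the synchronous version is typically orders of magnitude slower, which is exactly the signature of an I/O-bound solve with idle CPUs:

```python
import os
import tempfile
import time

record = b"x" * 512   # a small scratch record (size is an assumption)
n = 200               # number of records

def timed_write(path, sync_each):
    """Write n records; optionally force each one to disk before the next."""
    t0 = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(n):
            f.write(record)
            if sync_each:
                f.flush()
                os.fsync(f.fileno())  # wait for the record to hit the platter
    return time.perf_counter() - t0

with tempfile.TemporaryDirectory() as d:
    t_sync = timed_write(os.path.join(d, "sync.bin"), sync_each=True)
    t_buf = timed_write(os.path.join(d, "buf.bin"), sync_each=False)
    print(f"per-record fsync: {t_sync:.4f} s, buffered: {t_buf:.4f} s")
```

If the solver really is doing the first pattern against disk, any storage with near-zero seek latency (a RAM disk, or enough in-core scratch space to avoid the disk entirely) should help far more than faster processors would.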
Does what I write above make any sense? I already have a very fast hard drive (a 10,000 rpm Raptor), so what I think I need is a RAM-based hard drive. Has anyone tried using one in a case like this?