Hello,

I'm running (or at least trying to run) a static analysis on a rather large assembly. I coarsened the elements and applied enough mesh controls that I finally got it meshed. I had something like 1.3 million elements and a ridiculous number of DOFs. Not surprisingly, when I left it to run overnight, I maxed out my RAM (4 GB on a 32-bit system).

I just coarsened the mesh even further and got it down to 300K elements, 430K nodes, and 1.2 million DOFs. It's been running for 1.5 hours and the page-file usage seems to have leveled off at just over 2 GB. Does this mean the analysis will actually finish this time? And if it does finish, how much more can I refine the mesh to see whether the results converge?

I guess what I'm asking is how do I determine how big a problem I can run on this system?

And what is the best solver for this size of problem? I thought FFEPlus was more RAM-efficient for large problems, but when I maxed out the RAM the message said to run again with Direct Sparse??? And then, to make the choice of solver harder, the KB says that with upward of 1 million DOFs a dual-core processor should run much faster with Direct Sparse than with FFEPlus, but it also says that Direct Sparse will consume roughly 10x more RAM for the same number of DOFs. On top of that, I've been told that with many contact sets FFEPlus loses accuracy. This makes the choice of solver non-trivial.
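To put rough numbers on the RAM question: a linear back-of-the-envelope extrapolation from the run that's going now can at least sanity-check how far the mesh can be pushed. This is only a sketch under loose assumptions — that memory scales roughly linearly with DOF count for the iterative solver, that the ~2 GB observed at 1.2 million DOFs is the right baseline, and that the KB's ~10x Direct Sparse figure holds. The function and constant names are just for illustration, not anything from SolidWorks:

```python
# Back-of-the-envelope RAM extrapolation from the observed run.
# Assumptions (rules of thumb, not vendor figures): iterative-solver
# memory grows roughly linearly with DOFs; Direct Sparse uses ~10x
# more RAM for the same DOF count, per the KB claim quoted above.

OBSERVED_GB = 2.0            # page-file usage seen during the current run
OBSERVED_DOF = 1.2e6         # DOFs in that run
DIRECT_SPARSE_FACTOR = 10.0  # KB's rough Direct Sparse vs FFEPlus ratio

def estimate_ram_gb(dof, solver="ffeplus"):
    """Linear extrapolation from the observed run; a sanity check only."""
    gb = OBSERVED_GB * (dof / OBSERVED_DOF)
    if solver == "direct_sparse":
        gb *= DIRECT_SPARSE_FACTOR
    return gb

# A 32-bit Windows process can normally address only about 2 GB
# (3 GB with the /3GB boot switch), so that ceiling, not the 4 GB of
# installed RAM, is the real limit to compare against.
for dof in (1.2e6, 2.0e6, 3.0e6):
    print(f"{dof/1e6:.1f}M DOF: ~{estimate_ram_gb(dof):.1f} GB (FFEPlus), "
          f"~{estimate_ram_gb(dof, 'direct_sparse'):.0f} GB (Direct Sparse)")
```

By this crude estimate the current FFEPlus run is already near the 32-bit per-process ceiling, so there isn't much refinement headroom left, and Direct Sparse at this DOF count would be out of reach on this machine entirely.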

Any help / suggestions would be greatly appreciated,

Adam


Exactly what is in your model?

Is it static with contact, static non-linear, static non-linear with contact, etc.?

Each of those makes the solution more complex and less likely to succeed. A pure linear static analysis without contact would probably do fine with 300K elements.

A non-linear analysis with contact uses almost 4 GB of RAM when run on my x64 machine and will basically not run at all on a 32-bit XP machine, and that is with only a bit over 100,000 elements.

Good Luck.

-Mike