This is something that happens with today's version of FloWorks, and it is worse with larger models, such as those over 300,000 cells. I've heard that the FloWorks solvers request large blocks of memory during iteration zero. During a batch run, both solvers may make a memory request at the same time, which then exceeds available memory. The request happens so fast that you will not be able to see it in Task Manager, but you will get that annoying out-of-memory error.
I found that using two instances of SolidWorks/FloWorks can help. Start SolidWorks and begin a regular solve (not a batch solve) on one model. Start a second instance of SolidWorks, and wait until the first instance has passed meshing and is on iteration 1 or higher before starting a regular solve on the second model.
This is a manual way of forcing a delay between solvers, but of course is limited to only two solutions at a time.
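The manual stagger above generalizes to more than two solves if you can script the launches. A minimal sketch, with the caveat that `launch` is a hypothetical hook for however you start a solve on your setup (I'm not aware of a documented FloWorks command line, so this just automates the delay, not the solver itself):

```python
import time

def staggered_launch(models, launch, delay_s, sleep=time.sleep):
    """Start one solve per model, waiting delay_s seconds between
    starts so each solver's big iteration-zero memory request has
    passed before the next solve begins."""
    handles = []
    for i, model in enumerate(models):
        handles.append(launch(model))  # launch() = your "start a solve" hook
        if i < len(models) - 1:
            sleep(delay_s)  # force the gap the manual method creates by hand
    return handles
```

Pick `delay_s` long enough to cover meshing plus iteration zero on your typical model, since that is where the memory spike happens.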
Try it out and let us know what you find!
Best regards, Rich.
I might try your suggestion. The problem is that in some designs I have over 20 configurations that need to run as a batch. We've tried running five or six configs overnight to analyze and make changes the following day, and twenty or more configs over a weekend to, again, look at the results on Monday. This, of course, was an attempt to gain efficiency. However, it has backfired in that it is rarely possible to do this due to crashing and lockups.
I've been working with our VAR on this. The initial knee-jerk reaction was "go to a 64-bit OS/machine". While I don't have a problem with the concept, I always prefer to do things from a foundation of knowledge rather than hearsay, so I pressed for an explanation. The 32-bit machine we are using is good, fast, and unencumbered with spurious software. Tech support always seems to want to blame the machine... maybe because it's an easy out.
Anyhow, I think we finally got to the bottom of it: memory fragmentation. I've seen cases where FW stops or crashes because it can't allocate 24 or 4096 bytes (yes, bytes) for a 200K-cell model, with 1.5 GB available showing in the performance monitor and the /3GB switch enabled. It just didn't make sense. The machine isn't broken, and no other engineering application on this system exhibits this behaviour. There was no plausible explanation until memory fragmentation was put on the table as a potential culprit.
For those who may not be familiar with the concept, a Google search on "memory fragmentation" will produce plenty of info. Basically, programs request small or large chunks of memory as they go along. These chunks are not necessarily allocated one after the other. If the application and OS are not smart about this, you can eventually get into situations where you don't have enough contiguous memory left to allocate even a small chunk.
Think of using a 1-inch punch to punch 1-inch disks out of a sheet of plastic. Eventually you can't punch out any more disks, even though there's plenty of material left over between the holes. None of the remaining material can "allocate" a perfectly round 1-inch disk... so you are done.
Anyhow, what XP/Vista 64 gives you, effectively, is a larger sheet of plastic. Memory still gets fragmented, but there's so much more of it that you can go much farther before running into trouble.
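The punch analogy can be made concrete with a toy allocator. This is a deliberately simplified model (first-fit allocation, no coalescing of freed neighbours), not how the Windows heap actually behaves, but it shows the failure mode exactly: plenty of total free memory, yet no single run large enough for a modest request.

```python
class ToyHeap:
    """Toy first-fit allocator that never merges adjacent free blocks,
    so it fragments quickly -- for illustration only."""

    def __init__(self, size):
        self.blocks = [[size, False]]  # list of [size, in_use]

    def alloc(self, size):
        for i, (blk_size, used) in enumerate(self.blocks):
            if not used and blk_size >= size:
                if blk_size > size:
                    # Split the free block: used part + leftover free part.
                    self.blocks[i] = [size, True]
                    self.blocks.insert(i + 1, [blk_size - size, False])
                else:
                    self.blocks[i][1] = True
                return i  # "handle" = block index
        return None  # no contiguous free block is big enough

    def free(self, index):
        self.blocks[index][1] = False  # note: no coalescing

    def total_free(self):
        return sum(s for s, used in self.blocks if not used)

heap = ToyHeap(1000)
handles = [heap.alloc(100) for _ in range(10)]  # fill the heap completely
for h in handles[::2]:
    heap.free(h)                # free every other 100-byte block

print(heap.total_free())        # 500 bytes free in total...
print(heap.alloc(200))          # ...but no contiguous 200 bytes -> None
```

Half the heap is free, yet a 200-byte request fails: the free space is scattered in 100-byte holes, just like the plastic left between the punched disks.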
So...while the solution to the problem is not really a solution, it seems that you can throw money at it and make it work.
While I plan to build a 64-bit machine with an obscene amount of RAM, eight cores, etc., I thought I'd give a few ideas a shot before that machine is ready.
I am currently trying an approach that feels promising. Basically, I do a first batch run where ONLY the mesh is generated for each configuration. Once the meshes are generated and saved, I shut down SW, reboot the machine, kill unnecessary processes (Windows search, backup jobs, etc.), and then run the solver in batch mode for all configs. The theory here is that memory will fragment less if meshing is at least taken out of the equation. So far, so good.
A nice side effect of this approach is that you can enable the use of as many processors as you have, and the meshing will happen that much faster. I don't think one is likely to run into memory issues while generating a mesh.
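The two-pass recipe above can be sketched as a simple driver. The `mesh`, `solve`, and `restart` hooks here are hypothetical placeholders: in practice the mesh-only pass and the solver pass are batch runs set up in FloWorks, and the "restart" step is the manual SW shutdown and reboot described above.

```python
def two_pass_batch(configs, mesh, solve, restart=None):
    """Two-pass batch: mesh everything first, restart for a clean
    address space, then run the solver only."""
    # Pass 1: generate and save the mesh for every configuration
    # while memory is still relatively unfragmented.
    for cfg in configs:
        mesh(cfg)
    # Between passes: shut down / reboot so the solver starts with
    # a fresh, unfragmented heap (a manual step in the real workflow).
    if restart is not None:
        restart()
    # Pass 2: solver only -- meshing is out of the memory equation.
    return [solve(cfg) for cfg in configs]
```

The point of the structure is simply the ordering guarantee: no solve starts until every mesh exists and the restart has happened, which is exactly the discipline the manual procedure enforces.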