My VAR tells me that I need a 64-bit machine. I am not
convinced. I think there's a bug in FloWorks --like a memory leak-- that
is causing this.
Here's what I have:
Cells: 100K to 300K
Simple fan-driven thermal simulation of a heatsink.
It's a shrouded heatsink, so flow is internal.
Heatflow in solids enabled.
Only one material: aluminum
Fluid is air.
Dual core Pentium 4, 3.2GHz
4GB RAM
NVidia 1700 card
16GB page file
No antivirus or any such software
I am running a model with 10 to 20 different configurations.
All configurations are driven from sketches at the top assembly level.
I set up the batch solver to run overnight.
The results are never loaded between runs.
It will run a few configurations (anywhere from one to five) and eventually either crash or stop, claiming that it doesn't have enough memory. One time it stopped because it claimed that it could not allocate 4,096 BYTES!
When I have monitored the machine (it takes two to three hours per run, so I don't do this very often) to see what was going on, there was always plenty of memory available.
I've run it with both the 3GB switch on and off. Same results.
I've run it using one or both processors. Same results.
In general terms, configurations that crash tend to run just fine outside of the batch solver.
If I had to guess, I'd say what's needed is the ability to insert a one-minute pause between batch runs so the OS can do a little memory cleanup. Conjecture on my part, but it may be well founded.
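As a stopgap until the batch solver offers a pause option, a small wrapper script can do the same thing from outside: run each configuration's solve to completion, wait a minute, then start the next one. This is only a sketch of the idea; the solver command lines below are placeholders, since I don't know how your configurations are invoked from the command line.

```python
import subprocess
import sys
import time

def run_batch(commands, pause_seconds=60):
    """Run each solver command to completion, pausing between runs
    so the OS has time to reclaim memory before the next solve starts."""
    for i, cmd in enumerate(commands):
        subprocess.run(cmd, check=True)   # blocks until this solve finishes
        if i < len(commands) - 1:         # no pause needed after the last run
            time.sleep(pause_seconds)

# Harmless demonstration: two do-nothing "solves" with a short pause.
# In practice each entry would be whatever command line launches the
# solver for one configuration (placeholder names, not real flags), e.g.:
#   run_batch([["solver.exe", "config1"], ["solver.exe", "config2"]],
#             pause_seconds=60)
run_batch([[sys.executable, "-c", "pass"]] * 2, pause_seconds=1)
```

With `check=True`, a crashed solve stops the whole batch immediately instead of silently moving on, which also makes failures easier to spot the next morning.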
Thanks,
-Martin
This is something that happens with today's version of FloWorks, and it is worse with larger models (over 300,000 cells). I've heard that the FloWorks solvers request large blocks of memory during iteration zero. During a batch run, both solvers may make such a request at the same time, which then exceeds available memory. The request happens so fast that you will not be able to see it in Task Manager, but you will still get that annoying out-of-memory error.
I found that using two instances of SolidWorks/FloWorks can help: start SolidWorks and begin a regular solve (not a batch solve) on one model, then start a second instance of SolidWorks and wait until the first instance has passed meshing and is on iteration 1 or higher before starting a regular solve on the second model.
This is a manual way of forcing a delay between solvers, but of course it is limited to two solutions at a time.
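The staggering could also be scripted rather than done by hand: launch the first solve, wait out a fixed delay long enough to get past meshing and the iteration-zero memory request, then launch the second and wait for both. A sketch of that idea, again with placeholder command lines; a fixed delay is a crude stand-in for actually checking that the first solve has reached iteration 1:

```python
import subprocess
import sys
import time

def staggered_pair(cmd_a, cmd_b, stagger_seconds=300):
    """Launch two solves, delaying the second so the first is past its
    big iteration-zero memory request before the second one begins."""
    a = subprocess.Popen(cmd_a)      # first solve starts immediately
    time.sleep(stagger_seconds)      # wait out meshing / iteration zero
    b = subprocess.Popen(cmd_b)      # now start the second solve
    return a.wait(), b.wait()        # block until both have finished

# Harmless demonstration with do-nothing commands and a short stagger.
# In practice cmd_a and cmd_b would be whatever launches each solve
# (placeholders, not real FloWorks command lines).
staggered_pair([sys.executable, "-c", "pass"],
               [sys.executable, "-c", "pass"],
               stagger_seconds=1)
```

How long the stagger needs to be depends on the model; for a solve that takes two to three hours, a few minutes is probably a safe margin.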
Try it out and let us know what you find!
Best regards, Rich.