
    FloWorks crashing regularly with simple models

      My VAR tells me that I need a 64-bit machine. I am not convinced. I think there's a bug in FW, like a memory leak, that is causing this.

      Here's what I have:

      Cells: 100K to 300K
      Simple fan-driven thermal simulation of a heatsink.
      It's a shrouded heatsink, so flow is internal.
      Heatflow in solids enabled.
      Only one material: aluminum
      Fluid is air.

      Dual-core Pentium 4, 3.2 GHz
      4 GB RAM
      NVIDIA 1700 graphics card
      16 GB page file
      No antivirus or any such software

      I am running a model with 10 to 20 different configurations.
      All configurations are driven from sketches at the top assembly level.

      I set up the batch solver to run overnight.
      The results are never loaded between runs.

      It will run a few configurations (anywhere from one to five) and eventually either crash or stop, claiming that it doesn't have enough memory. One time it stopped because it claimed that it could not allocate 4,096 BYTES!

      When I have monitored the machine to see what was going on (each run takes two to three hours, so I don't do this very often), there was always plenty of memory available.

      I've run it with the /3GB switch both on and off. Same results.

      I've run it using one or both processors. Same results.

      In general terms, configurations that crash tend to run just fine outside of the batch solver.

      If I had to guess, I'd say what's needed is the ability to insert a one-minute pause between batch runs to let the OS do a little memory cleanup. That's conjecture on my part, but it may be well founded.
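
      As a rough sketch of the kind of thing I mean (Python, with a made-up run_config.bat standing in for whatever would launch the solver on a single configuration, since I don't know of a documented FloWorks command line for this):

          import subprocess
          import time

          # Hypothetical per-configuration launcher; "run_config.bat" is a stand-in,
          # since FloWorks does not expose a documented command line for this.
          CONFIGS = ["Config_01", "Config_02", "Config_03"]
          PAUSE_SECONDS = 60  # give the OS a minute between runs to release memory

          for name in CONFIGS:
              ret = subprocess.call(["run_config.bat", name])
              if ret != 0:
                  print("solver returned %d for %s" % (ret, name))
              time.sleep(PAUSE_SECONDS)  # pause before the next configuration starts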

      Thanks,

      -Martin


        • FloWorks crashing regularly with simple models
          Rich Bayless
          Hi Martin,

           This is something that happens with today's version of FloWorks. It is worse with larger models, such as those over 300,000 cells. I've heard that the FloWorks solvers request large blocks of memory during iteration zero. During a batch run, two solvers may make such a request at the same time, which then exceeds available memory. The request happens so fast that you won't catch it in Task Manager, but you will get that annoying out-of-memory error.
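
           If you want to catch that spike, a quick polling script sees what Task Manager's refresh rate misses. Here's a rough sketch in Python (it needs the third-party psutil module, and it only watches system-wide available memory rather than the solver's own address space, but a momentary dip will show up in the log):

               import time
               import psutil  # third-party module: pip install psutil

               INTERVAL = 0.1        # sample every 100 ms, much faster than Task Manager refreshes
               DURATION = 15 * 60    # watch for 15 minutes
               lowest = None

               end = time.time() + DURATION
               while time.time() < end:
                   avail_mb = psutil.virtual_memory().available / (1024.0 * 1024.0)
                   if lowest is None or avail_mb < lowest:
                       lowest = avail_mb
                       print("%s  new low: %.0f MB available" % (time.strftime("%H:%M:%S"), avail_mb))
                   time.sleep(INTERVAL)

               print("lowest available memory seen: %.0f MB" % lowest)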

           I found that using two instances of SolidWorks/FloWorks can help. Start SolidWorks and begin a regular solve (not a batch solve) on one model. Then start a second instance of SolidWorks and wait until the first instance has passed meshing and is on iteration 1 or higher before starting a regular solve on the second model.
           This is a manual way of forcing a delay between solvers, but of course it is limited to only two solutions at a time.
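
           If you have some way of launching each solve from the command line, the same stagger could be scripted. A rough sketch (the run_config.bat launcher below is hypothetical; the only point is the delay between the two starts):

               import subprocess
               import time

               STAGGER = 20 * 60  # rough guess at the time to get past meshing and iteration 0

               first = subprocess.Popen(["run_config.bat", "Config_A"])   # hypothetical launcher
               time.sleep(STAGGER)        # wait until the first solve is past its big allocation
               second = subprocess.Popen(["run_config.bat", "Config_B"])

               first.wait()
               second.wait()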

          Try it out and let us know what you find!

          Best regards, Rich.
            • FloWorks crashing regularly with simple models
               I might try your suggestion. The problem is that in some designs I have over 20 configurations that need to run as a batch. We've tried running five or six configs overnight so we can analyze the results and make changes the following day, and twenty or more configs over a weekend to, again, look at the results on Monday. This, of course, was an attempt to gain efficiency. However, it has backfired in that it is rare to actually get through a set due to crashing and lockups.

               I've been working with our VAR on this. The initial knee-jerk reaction was "go to a 64-bit OS/machine." While I don't have a problem with the concept, I always prefer to do things from a foundation of knowledge rather than hearsay, so I pressed for an explanation. The 32-bit machine we are using is good, fast, and unencumbered with spurious software. Tech support always seems to want to blame the machine... maybe because it's an easy out.

               Anyhow, I think we finally got to the bottom of it: memory fragmentation. I've seen cases where FW stops or crashes because it can't allocate 24 or 4,096 bytes (yes, bytes) for a 200K-cell model, with 1.5 GB available showing on the performance monitor and the /3GB switch enabled. It just didn't make sense. The machine isn't broken, and no other engineering application on this system exhibits this behaviour. There was no plausible explanation until memory fragmentation was put on the table as a potential culprit.

               For those who may not be familiar with the concept, a Google search on "memory fragmentation" will produce plenty of info. Basically, programs request small or large chunks of memory as they go along, and these chunks are not necessarily allocated one after the other. If the application and OS are not smart about this, eventually you can get into rare situations where you don't have enough contiguous memory to allocate even a small chunk.

               Think of using a 1-inch punch to cut 1-inch disks out of a sheet of plastic. Eventually you can't punch out any more disks, even though there's plenty of material left over between the holes. None of the remaining material can "allocate" a perfectly round 1-inch disk... so you are done.

              Anyhow, what XP/Vista 64 gives you, effectively, is a larger sheet of plastic. Memory still gets fragmented, but there's so much more of it that you can go much farther before running into trouble.
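
               To put the idea in concrete terms, here is a toy first-fit allocator in Python (nothing to do with FloWorks itself, just the punch-and-plastic picture in code): after allocating and then freeing alternating blocks, half the space is free in total, yet a modest request fails because no single gap is large enough.

                   SPACE = 1000    # toy address space, in arbitrary units
                   blocks = []     # (start, size) of each live allocation

                   def free_gaps():
                       """Yield (start, size) of every free gap between live blocks."""
                       pos = 0
                       for start, size in sorted(blocks):
                           if start > pos:
                               yield pos, start - pos
                           pos = start + size
                       if pos < SPACE:
                           yield pos, SPACE - pos

                   def alloc(size):
                       """First-fit allocation; returns a start address, or None if no gap fits."""
                       for start, gap in free_gaps():
                           if gap >= size:
                               blocks.append((start, size))
                               return start
                       return None

                   # Fill the space with 10-unit blocks, then free every other one.
                   addrs = [alloc(10) for _ in range(100)]
                   for a in addrs[::2]:
                       blocks.remove((a, 10))

                   print("total free: %d units" % sum(gap for _, gap in free_gaps()))  # 500 units free
                   print("alloc(20) -> %r" % alloc(20))  # None: no contiguous gap of 20 units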

              So...while the solution to the problem is not really a solution, it seems that you can throw money at it and make it work.

                • FloWorks crashing regularly with simple models
                   While I plan to build a 64-bit machine with an obscene amount of RAM, eight cores, etc., I thought I'd give a few ideas a shot before that machine is ready.

                   I am currently trying an approach that feels promising. Basically, I do a first batch run where ONLY the mesh is generated for each configuration. Once the meshes are generated and saved, I shut down SW, reboot the machine, kill unnecessary processes (Windows search, backup jobs, etc.), and then run the solver in batch mode for all configs. The theory here is that memory will fragment less if at least the meshing is taken out of the equation. So far, so good.
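
                   In script form the idea looks roughly like this (Python; mesh_config.bat and solve_config.bat are placeholders for however you drive mesh-only and solve-only passes, since I'm actually doing this through the batch solver dialog and a manual reboot):

                       import subprocess
                       import time

                       CONFIGS = ["Config_%02d" % i for i in range(1, 21)]

                       # Phase 1: generate and save the mesh for every configuration.
                       for name in CONFIGS:
                           subprocess.call(["mesh_config.bat", name])   # placeholder launcher

                       # ...shut down SW, reboot, and kill background processes by hand here...

                       # Phase 2: run only the solver for each configuration, with a pause between runs.
                       for name in CONFIGS:
                           subprocess.call(["solve_config.bat", name])  # placeholder launcher
                           time.sleep(60)  # let the OS tidy up before the next solve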

                  A nice side effect of this approach is that you can enable the use of as many processors as you have and the meshing will happen that much faster. I don't think one is likely to run into memory issues while generating a mesh.