What would you recommend for a Dell system:
Dual Processor Quad Core XEON E5-2609 @ 2.4GHz
or
Single Eight Core XEON E5-2665 @ 2.4 GHz
If there is no difference in performance, the cost difference is $900.
Thanks for any advice!
Reading this may help
http://www.javelin-tech.com/blog/2010/11/do-multi-core-processors-help-with-solidworks/
Thanks Peter, but in this case the question is whether it is better to have 8 cores on 1 chip or on 2 chips.
According to http://ark.intel.com/compare/64588,64597 the 2609 has neither Hyper-Threading nor Turbo Boost. Memory bandwidth is also 50% greater on the 2665.
Whether those numbers equate to a performance gain worthy of $900 I wouldn't want to guess.
This may be of interest.
http://content.dell.com/ca/en/business/d/help-me-choose/hmc-processor-12g
Anna's limited testing shows that hyperthreading does not appear to benefit FEA when dealing with a large number of cores; it actually had a negative effect in the benchmarks.
I need to test one more quad-core machine to see if I get the same hyper-threading results as I did with my Core i7-2600 and 2600K. This will be on a Xeon E3-1240v2 system that I am going to build this week.
It also appears from the systems benchmarked in Russ's post that more cores/threads do not help a typical FEA study solution. Quad cores with higher clock speeds and newer architectures sit at the top of the speed list.
FWIW,
Anna
I know this is old, but the answers irritate me a bit.
A single processor should be notably faster for some workloads because all eight cores share one cache. With dual CPUs, when a process migrates from one socket to the other, anything held in the first CPU's cache must be re-read by the second CPU, and cached memory is substantially faster than main memory.
The OS and scheduler make things like hyperthreading a plus or a minus depending on the implementation. OS intelligence will improve over time; with hyperthreading and a smart OS, your app can stay on the same physical set of cores even when more cores are active. Without hyperthreading, if all 4 cores are active on CPU 1, a jump to a core on CPU 2 is more expensive than a jump to a hyperthread.
*Some* OSes may be smart enough to try to keep a process on the same set of cores (CPU sets are good for forcing apps to stay on the same physical cores), but if any memory is shared among threads running on physically separate CPUs, performance simply has to take a hit.
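The "CPU sets" idea above can be sketched on Linux with `os.sched_setaffinity`, which pins a process to a chosen set of logical cores so its working set stays in one socket's shared cache. This is a minimal sketch, not a SolidWorks-specific fix: it is Linux-only, and which core IDs belong to which physical socket is machine-dependent (check `lscpu`); the core IDs used here are assumptions.

```python
# Sketch: pin the current process to cores on one physical CPU (Linux only).
# Which logical cores map to which socket varies by machine -- check `lscpu`.
import os

def pin_to_cores(core_ids):
    """Restrict this process to the given logical cores, ignoring any
    cores that are not actually available, and return the new set."""
    available = os.sched_getaffinity(0)        # 0 = the calling process
    os.sched_setaffinity(0, set(core_ids) & available)
    return os.sched_getaffinity(0)

if __name__ == "__main__":
    # Assumed: logical cores 0-3 live on socket 0; the intersection with
    # the available set keeps this safe on smaller machines.
    print(pin_to_cores({0, 1, 2, 3}))
```

On Windows the equivalent knob is the process affinity mask (Task Manager's "Set affinity", or `SetProcessAffinityMask` from the Win32 API); the cache-locality reasoning is the same.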
I understand your irritation.
Starting from the view "If you don't measure it you can't control it", I suggest that if empirical tests do not produce an unequivocal answer, then they are not measuring the correct parameters.
I have felt that I/O has not been adequately measured, and your post has expanded my view of I/O to include cache, RAM, ATA/SCSI controller, disk controller, disk velocity and allocation unit.
The players that decide what is fetched when from where may include Intel firmware, mobo BIOS, Windows and SW.
I suspect that much of SW's code derives from Unix, and is thus old and sub-optimal for current hardware.
Perhaps there is special-purpose simulation software that is much faster than SW but too costly for me.
Empirical measurements are all that matter if you're only running one application or have a primary purpose. Theory, or even physics, has no place if your OS bites its nails or your app doesn't thread well.
In today's world your time is worth more than the price delta between the lowest- and highest-end CPU, so get a 12-core E5 and turn off hyperthreading if you must. If it saves you 10 minutes a day, it pays for itself rather quickly.
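The "pays for itself" claim can be put in numbers. This back-of-envelope sketch uses the $900 price delta and the 10-minutes-a-day figure from the thread; the $60/hour engineering rate is an assumption, so adjust it to your own cost.

```python
# Back-of-envelope payback time for the $900 CPU price delta.
price_delta = 900.0           # USD, from the thread
hourly_rate = 60.0            # USD/hour -- assumed, adjust to your rate
minutes_saved_per_day = 10.0  # the "10 minutes a day" figure

savings_per_day = hourly_rate * minutes_saved_per_day / 60.0  # $10/day
payback_days = price_delta / savings_per_day

print(f"Payback in {payback_days:.0f} working days")  # → Payback in 90 working days
```

At that rate the pricier CPU pays for itself in about 90 working days, roughly four months.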
The 2665 has twice the cache and can run faster RAM too; no contest that I can see!
1066 MHz RAM is old hat now, BTW.
Though I would look at adding an extra cooling fan given its 115 W TDP.