Discussion:
Multi-Core CPUs & SolidWorks?
Bo
2007-05-13 17:34:19 UTC
Is there any news out there on what is coming down the line with
SolidWorks which may speed up our work with these CPUs?

Thanks - Bo
Philippe Guglielmetti
2007-05-13 19:39:16 UTC
don't expect too much: CAD has no clear multithreaded structure.
Think about the feature dependency tree: there is little you can do
in parallel. Read http://www.evanyares.com/the-cad-industry/2006/6/20/multithreading-and-cad.html
However, parallelizing some long-to-rebuild features like shells should
be possible...

So with dual cores, you can read your email while your CAD rebuilds...
With quad cores, you might have some FEA running for hours while
you're designing something else.
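The dependency-tree point can be made concrete with a toy scheduler. A minimal sketch (hypothetical feature names and dependencies, not the SolidWorks API): features are grouped into "waves" whose members don't depend on each other, and only within a wave is there anything to run in parallel.

```python
# Toy model of a feature-tree rebuild. The feature names and the
# dependency graph are made up for illustration.
from concurrent.futures import ThreadPoolExecutor

# feature -> list of features it depends on
deps = {
    "Sketch1": [],
    "Extrude1": ["Sketch1"],
    "Shell1": ["Extrude1"],
    "Fillet1": ["Extrude1"],
    "Pattern1": ["Fillet1"],
}

def levels(deps):
    """Group features into waves whose members are mutually independent."""
    done, waves = set(), []
    while len(done) < len(deps):
        wave = [f for f, parents in deps.items()
                if f not in done and all(p in done for p in parents)]
        waves.append(wave)
        done.update(wave)
    return waves

def rebuild(feature):
    return feature  # stand-in for the real geometry work

# Each wave can use every core, but the waves themselves are serial.
with ThreadPoolExecutor() as pool:
    for wave in levels(deps):
        list(pool.map(rebuild, wave))

print(levels(deps))
# Only one wave here ("Shell1" and "Fillet1") is wider than a single
# feature, so a second core helps a little and further cores sit idle.
```

The shape of the tree is the whole story: a long thin chain of features gains nothing from extra cores, which is the point being made above.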
Bo
2007-05-13 23:00:58 UTC
Post by Philippe Guglielmetti
don't expect too much: CAD has no clear multithreaded structure.
Think about the feature dependency tree: there is little you can do
in parallel. Read http://www.evanyares.com/the-cad-industry/2006/6/20/multithreading-an...
However, parallelizing some long-to-rebuild features like shells should
be possible...
So with dual cores, you can read your email while your CAD rebuilds...
With quad cores, you might have some FEA running for hours while
you're designing something else.
AND...with Microsoft trying to upset the OpenGL display system with
its own "Java Killer" methods of trying to establish its OWN
PROPRIETARY graphics display system, we seem to be entering another
possible arena for slowdowns in SolidWorks.

I REALLY REALLY DETEST MONOPOLIES!

Are we going to get UNIX support before Microsoft squishes us all?

Bo
s***@xmission.com
2007-05-16 01:10:18 UTC
Post by Bo
Post by Philippe Guglielmetti
don't expect too much: CAD has no clear multithreaded structure.
Think about the feature dependency tree: there is little you can do
in parallel. Read http://www.evanyares.com/the-cad-industry/2006/6/20/multithreading-an...
However, parallelizing some long-to-rebuild features like shells should
be possible...
So with dual cores, you can read your email while your CAD rebuilds...
With quad cores, you might have some FEA running for hours while
you're designing something else.
AND...with Microsoft trying to upset the OpenGL display system with
its own "Java Killer" methods of trying to establish its OWN
PROPRIETARY graphics display system, we seem to be entering another
possible arena for slowdowns in SolidWorks.
I REALLY REALLY DETEST MONOPOLIES!
Are we going to get UNIX support before Microsoft squishes us all?
Bo
The thing I don't get is why the CAD industry doesn't jettison OpenGL
and go with DirectX. It's a virtual certainty there is more work and
$$$ going into the development of DirectX than OpenGL.
TOP
2007-05-16 01:13:58 UTC
1. OpenGL is open, DirectX is not.
2. OpenGL works on any OS.
3. There is a lot invested in the hardware.
4. Work and development doesn't always translate into performance and
stability.
5. There is no perceived need. Can DirectX make drawings faster? Can
it shorten rebuild times in drawings and large assemblies?
RaceBikesOrWork
2007-05-16 10:54:38 UTC
Post by TOP
1. OpenGL is open, DirectX is not.
2. OpenGL works on any OS.
3. There is a lot invested in the hardware.
4. Work and development doesn't always translate into performance and
stability.
5. There is no perceived need. Can DirectX make drawings faster? Can
it shorten rebuild times in drawings and large assemblies?
Here is an interesting link on the subject.

http://en.wikipedia.org/wiki/Comparison_of_Direct3D_and_OpenGL
s***@xmission.com
2007-05-19 13:01:05 UTC
Post by TOP
1. OpenGL is open, DirectX is not.
That's obvious. How does that matter?
Post by TOP
2. OpenGL works on any OS
Does SolidWorks? No, and neither does any other CAD program.
Post by TOP
3. There is a lot invested in the hardware
Which will all be obsolete three years after purchase (in other words
"invested in hardware" is not a compelling reason).
Post by TOP
4. Work and development doesn't always translate into performance and
stability.
True but then when you have more people working the problem you're
likely to get it right sooner.
Post by TOP
5. There is no perceived need. Can DirectX make drawings faster? Can
it shorten rebuild times in drawings and large assemblies?
Neither DirectX nor OpenGL is going to do anything to speed up
rebuild times.
TOP
2007-05-19 13:48:52 UTC
Post by s***@xmission.com
Post by TOP
1. OpenGL is open, DirectX is not.
That's obvious. How does that matter?
Yup, a proprietary graphics standard will allow a monopoly to dictate
what platform CAD runs on. Right now there is a choice.
Post by s***@xmission.com
Post by TOP
2. OpenGL works on any OS
Does Solidworks? Neither does any other CAD program.
Several OSes support serious CAD:
Unix - CATIA, UG, Pro/E, and perhaps some lesser-known ones
Windows XP - the above plus mid-range modelers and AutoCAD
Windows NT and 2k - ditto XP, but only older versions
Mac OS - SolidWorks and others (don't know if they use OGL or
something else)
Linux - Pro/E and others.
Post by s***@xmission.com
Post by TOP
3. There is a lot invested in the hardware
Which will all be obsolete three years after purchase (in other words
"invested in hardware" is not a compelling reason).
I invest in high-end graphics cards with a much longer time horizon
since, at least as far as SW is concerned, it only takes a CPU and
motherboard upgrade to keep up at minimum cost.
Post by s***@xmission.com
Post by TOP
4. Work and development doesn't always translate into performance and
stability.
True but then when you have more people working the problem you're
likely to get it right sooner.
If it ain't broke, why fix it? What in the world is wrong with OpenGL
as far as CAD is concerned? I have yet to see a post on the NG
complaining about graphics issues other than driver problems.

If it is Microsoft throwing people and money at the problem, don't
bank on it. They haven't gotten the OS right yet, and it has been 14
or 15 years since New Technology (NT) was going to save the world
from Unix and Mac.

SW got it right when there were very few people working on the
problem. The bigger they grow the more problems there are. Getting it
right doesn't always scale with the number of people working on it.

TOP
Post by s***@xmission.com
Post by TOP
5. There is no perceived need. Can DirectX make drawings faster? Can
it shorten rebuild times in drawings and large assemblies?
Neither DirectX nor OpenGL is going to do anything to speed up
rebuild times.
If it ain't broke don't fix it. SW is moving away from letting the
graphics card do all the work anyway.
Bo
2007-05-19 16:38:14 UTC
Post by TOP
If it ain't broke don't fix it. SW is moving away from letting the
graphics card do all the work anyway.
Top of the morning to you all, and I have for all...ONE word...for
user benefits...

<- - -CHOICE- - ->

It is the only thing that allows users to keep the damn suppliers on
their toes. MS does NOT promote choice on their OS. It is the
Redmond highway or no way.

Microsoft became complacent in the late 90s once they became a super-
majority supplier and could virtually dictate terms to everyone. Now
they are trying to muscle in on audio, video, and graphics, and on
replacing Java with .NET. I think MS is spread too thin, trying to be
the "be all, end all".

Frankly, I can't see the benefit of Microsoft's OS in today's web
world. I can see the value in various programs I use on the OS, like
SolidWorks, but the OS is a bitch if you take it online, and sort of
OK if you keep it off the net. That is not an OS I want to write home
about.

If the CAD guys start seeing better ways to run their products on
other OSes, I think we will see more viable options appear. Mac &
Linux use in some organizations and colleges is on a major upswing,
and it is not hard to see why, just in reduced maintenance time.

IT guys are seeing the Mac as a boon. Run Mac OS, BSD, Solaris,
Windows, DOS, Linux, and all from one box with several running at the
same time if needed.

Where is the "run anywhere" mantra when users want it?

Bo

RaceBikesOrWork
2007-05-14 10:40:55 UTC
Post by Philippe Guglielmetti
don't expect too much : CAD has no clear multithreadad structure.
Then why, in the May issue of Desktop Engineering, do dual-Xeon
systems blow everything away on the SPECapc SolidWorks 2005 tests?

It makes it look like this 'don't bother with dual processors' stuff
is not accurate.
Dale Dunn
2007-05-14 11:35:20 UTC
Post by RaceBikesOrWork
Post by Philippe Guglielmetti
don't expect too much: CAD has no clear multithreaded structure.
Then why, in the May issue of Desktop Engineering, do dual-Xeon
systems blow everything away on the SPECapc SolidWorks 2005 tests?
It makes it look like this 'don't bother with dual processors' stuff
is not accurate.
The explanation is in the linked article. The solid modeling kernel and a
few other key components do use multithreading where possible. So, multiple
Xeon processors will have some advantage over a single Xeon. The issue is
whether it's worth the extra cost.
matt
2007-05-14 12:53:06 UTC
Post by RaceBikesOrWork
Post by Philippe Guglielmetti
don't expect too much: CAD has no clear multithreaded structure.
Then why, in the May issue of Desktop Engineering, do dual-Xeon
systems blow everything away on the SPECapc SolidWorks 2005 tests?
It makes it look like this 'don't bother with dual processors' stuff
is not accurate.
For certain types of models, I think dual core does have a huge positive
impact. I haven't taken the time yet to research this fully, but on
several models, I have seen both processors pegged at 100%, not for the
entire rebuild, but maybe for 40-60% of the rebuild. My guess is that
these are multibody models or surface models which are normally
multibody. Assemblies and drawings should also benefit from dual
threading. The test is over a year old now I think, but I pitted my dual
core AMD X2 4800+ against a single core AMD FX57, and sometimes, but not
on every model, the 4800+ came out on top. The FX57 was at the time the
fastest single core available.
alphawave
2007-05-14 14:27:34 UTC
Post by matt
For certain types of models, I think dual core does have a huge positive
impact.
Slightly OT but ...

If one were to run SWX on a dual-core (or twin-CPU) PC, would it be
possible to start two sessions of SWX and force one session to run on
core (or processor) A and the second on core (or processor) B?


Kev
Dale Dunn
2007-05-14 14:49:55 UTC
Post by alphawave
If one were to run SWX on a dual-core (or twin-CPU) PC, would it be
possible to start two sessions of SWX and force one session to run on
core (or processor) A and the second on core (or processor) B?
It is possible, but I've never actually done it. IIRC the process is to
find the sldworks.exe process in the task manager, right click on it and
find "set processor affinity". It shouldn't be necessary though, because
the OS will automatically make sure that two separate high-load threads run
on separate cores. I'm not sure if Windows optimally manages multi-socket
multi-core systems (there would be some memory bandwidth and latency
issues) but for a single dual-core CPU you shouldn't need to set anything.
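The Task Manager steps can also be done programmatically. A minimal sketch, assuming a Linux-style system where Python exposes `os.sched_setaffinity` (on Windows the equivalent is Task Manager's "Set Affinity" or the Win32 `SetProcessAffinityMask` call):

```python
# Pin a process to chosen cores, i.e. what "set processor affinity"
# does in Task Manager. os.sched_setaffinity is Linux-only, hence
# the guard; pid 0 means "the current process".
import os

if hasattr(os, "sched_setaffinity"):
    before = os.sched_getaffinity(0)  # remember the original mask
    os.sched_setaffinity(0, {0})      # pin to core 0 only
    assert os.sched_getaffinity(0) == {0}
    os.sched_setaffinity(0, before)   # restore the original mask
```

As noted above, this is rarely worth doing: the scheduler already places two busy processes on separate cores on its own.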
swizzle
2007-05-14 20:02:33 UTC
Theoretically, yes.

You would have to start both sessions of SWX. Then go into the Windows Task
Manager and find the processes for each session. You can set the affinity
of each process to a different core/chip/processor.

--Scott
Post by alphawave
Slightly OT but ...
If one were to run SWX on a dual-core (or twin-CPU) PC, would it be
possible to start two sessions of SWX and force one session to run on
core (or processor) A and the second on core (or processor) B?
Kev
TOP
2007-05-15 01:54:32 UTC
You probably don't want or need to set affinity. The OS should divvy
up the clock cycles just like it does when a single SW process is
running. Since SW is not likely to be running full speed
simultaneously on both CPUs you will give the OS a chance to find the
best solution.

If you do set affinity then when those times arise that SW can utilize
both CPUs it won't and will therefore run a few percent slower.

TOP
TOP
2007-05-14 16:57:40 UTC
See my postings to the SPECapc thread that was current as of a day or
two ago. SPECapc DOES NOT measure CPU performance; it measures
graphics performance. If you want to see CPU performance, take a
long-to-rebuild part and run it on different machines. Or run Ship in
a Bottle or STAR2.1. Did the Xeons have some big expensive graphics
cards in them also?

Multicore machines can greatly speed up SW in theory. Consider that in
an assembly, any part that does not have in-context features can be
rebuilt simultaneously with any other similar part. The solution of
mates is, I believe, a process that can be parallelized. But it will
take changes from the vendors that SW licenses from to get this to
work. SW is at the mercy of its vendors, and long is that list.

Consider also that in theory parts can be sped up when the feature
tree has many branches starting high up because each branch is
independent.

In fact, multicore machines do not make more than a few percent
difference with anything but drawings and PhotoWorks.

TOP
jimsym
2007-05-16 13:49:24 UTC
<SPECapc DOES NOT measure CPU performance, it measures
graphics performance.>

This is simply NOT TRUE. SPECapc results scale directly with CPU
performance. If anything, the SPECapc for SolidWorks benchmark
understates the importance of the graphics card.

For example, in benchmark testing conducted by CADCAMnet, the Quadro
FX4500 outperforms the FX550 by only 5.7% (294 secs vs 312 secs.)
Similar results have been reported by Desktop Engineering, MCADonline
and other publications.

SPEC ViewPerf is another matter entirely. Viewperfs are synthetic
benchmarks that are highly skewed to the graphics card. In my
experience, they bear little resemblance to real-world results.
TOP
2007-05-16 19:21:35 UTC
Care to post links, or dates and authors' names? I always take
magazine benchmarks with a grain of salt unless I can repeat them.

TOP

PS Do the results on the SPECapc website scale with CPU speed?
Dale Dunn
2007-05-16 20:41:48 UTC
Post by TOP
PS Do the results on the SPECapc website scale with CPU speed?
When I was playing with overclocking a few years ago, I found that the
benchmark scaled almost linearly with clock speed for the CPU portion of
the test. Graphics was relatively unaffected. I think this was SPECapc SW
2003.
Jerry Steiger
2007-05-17 01:21:47 UTC
Post by Dale Dunn
Post by TOP
PS Do the results on the SPECapc website scale with CPU speed?
When I was playing with overclocking a few years ago, I found that the
benchmark scaled almost linearly with clock speed for the CPU portion of
the test. Graphics was relatively unaffected. I think this was SPECapc SW
2003.
It's been quite a while since we have run any SW Benchmarks and I can't find
the results, but as I recall, both CPU and Graphics results scaled fairly
well with the processor speed. I/O was less sensitive to CPU speed, but
still affected appreciably. We weren't overclocking, we were comparing
different processors, so that could have skewed our results.

Jerry Steiger
Tripod Data Systems
"take the garbage out, dear"
Dale Dunn
2007-05-17 12:29:30 UTC
Post by Jerry Steiger
It's been quite a while since we have run any SW Benchmarks and I
can't find the results, but as I recall, both CPU and Graphics results
scaled fairly well with the processor speed. I/O was less sensitive to
CPU speed, but still affected appreciably. We weren't overclocking, we
were comparing different processors, so that could have skewed our
results.
Jerry Steiger
Tripod Data Systems
"take the garbage out, dear"
I suppose it's possible that different graphics subsystems depend
differently on CPU power. Some graphics cards may be limited by the CPU
while others may not (depending on driver architecture, on-board memory,
etc.). This would explain why some individual testers see one trend, while
other testers see another trend.

Perhaps we should not be making blanket statements about CPU and graphics
performance being linked or not, except for specific systems being tested.
TOP
2007-05-17 16:48:24 UTC
Dale,

You are exactly right that the system has a lot to do with it. This
includes the operating system. For example some testers may shut down
the networking and every other service that is not necessary including
things like USB ports and other IO ports. Different machines have
different chipsets, different memory and memory settings and different
BIOS. Some testers may run each test on a fresh image. In the real
world though, these things won't be true.

Then in the test, does it really cover the operations that you do?
What about testing error conditions, as in an assembly with multiple
bad mates or a failed feature? What about multi-config performance?
These are some of the things that can really hang up work. My opinion
about real world type testing is to have in-house models that are
known to cause trouble and benchmark them on any candidate hardware or
software releases and service packs.

TOP

PS Some time ago Sporky let us download a whole library of parts. Just
doing a SW conversion on that lot would make a wonderful benchmark.
Dale Dunn
2007-05-17 18:21:32 UTC
Post by TOP
PS Some time ago Sporky let us download a whole library of parts. Just
doing a SW conversion on that lot would make a wonderful benchmark.
Also Seth Renigar.

I would also like to see a benchmark that segregates performance for
the various "vertical industries" or whatever they're called now.

Unfortunately, simply converting files doesn't really exercise SW. That
would be more of an IO test. (What's the point of an IO test when most of
the time spent saving has nothing to do with file transfer?) We would
probably need to run ForceRebuild3(false), which is more thorough than
Ctrl-Q.

Rant: Even that would only exercise the CPU. We still need some meaningful
test of the video card. Unfortunately, by the time you have a model that
can tax current video cards, the SW UI is so bogged down that you can't
work any faster. Just today I added a feature that shows a rebuild time of
zero, even though it took almost 5 seconds for the UI to catch up.
TOP
2007-05-17 20:40:10 UTC
Again, I use TSToolbox to time in-house models.

There are:

Ship in a Bottle -- fairly complex part that taxes both CPU and
graphics. By setting the graphics at full fast and full slow you can
get a feel for the contribution each makes.

STAR2.1 -- definitely checks out the CPU only. Interestingly Intel and
AMD typically will score very different on the modeling portion, but
still be quite close on the rebuild times.

PatBench -- large pattern check and on older SW releases checks the
maximum amount of memory SW will use.

SpecAPC -- good for tuning a machine and stress testing. As a
benchmark it sucks because they change it so often.

Perhaps I could turn those truncated icosidodecahedron models into
some sort of benchmark if the authors agreed to it.

IO is the hardest thing to check because the OS gets involved, and
with caching it is hard to tell whether a file is coming from disk,
network, or RAM cache.

The other thing that is hard to evaluate, and you can see it in STAR,
is that after SW is done building the model there is a big wait while
it writes to disk. This is not included in the current timing
algorithms. But it could be.

TOP
Dale Dunn
2007-05-17 21:15:47 UTC
Post by TOP
Again, I use TSToolbox to time in-house models.
Oh yeah? Well my watch has a chronograph on it!
Post by TOP
Perhaps I could turn those truncated icosidodecahedron models into
some sort of benchmark if the authors agreed to it.
I actually inadvertently turned one of those into a stress test. Someone on
the SW forum asked how to do an external thread as a library feature. Well,
I thought that sounded like fun, so I did it. I needed a test model with
faces oriented to all 8 quadrants (octants?) to make sure something
wouldn't flip, so I used the truncated icosahedron. Of course, the helical
cut took a little time to rebuild. What surprised me was that each
successive thread I inserted took exponentially longer to rebuild, so
that the 8th one took quite a while. That was SW06, I think.
TOP
2007-05-18 03:41:44 UTC
Now there is an idea. Take all the pathological models and turn them
into a benchmark.

TOP
Dale Dunn
2007-05-18 12:20:12 UTC
Post by TOP
Now there is an idea. Take all the pathological models and turn them
into a benchmark.
TOP
Hmm. This has me thinking about the fact that reported rebuild time is
often far less than the time we have to wait. I think they call this
"friction" in the interface. I would be interested to see if this is
linearly proportional to rebuild time, if it varies between systems, and
how much it varies between versions. I did some subjective testing between
05, 07 and 08 b1 yesterday, and there seems to have been some serious
progress made since 05. A matter of 8-10 seconds vs 3-4 seconds to add a
feature that shows a rebuild time of zero.
TOP
2007-05-17 04:12:06 UTC
Post by Dale Dunn
Post by TOP
PS Do the results on the SPECapc website scale with CPU speed?
When I was playing with overclocking a few years ago, I found that the
benchmark scaled almost linearly with clock speed for the CPU portion of
the test. Graphics was relatively unaffected. I think this was SPECapc SW
2003.
For SW2005 it appears that the CPU score doesn't influence the
graphics score, as can be seen for the many machines that use the
FX3500 graphics card:

CPU    Graphics
2.41   3
2.44   2.84
2.35   2.7
2.4    2.79
2.17   2.8

What I was referring to was a test I did a few years back in which a
box with a mediocre CPU aced a box with a fast CPU solely because of
the graphics card in the slower box.

Let's look at a simplified version of how SPEC scores:

CPU   I/O   Graphics   Overall (geometric mean)
1     1     5          1.71
5     1     1          1.71
5     1     5          2.92

Now which box would you rather be on? I would pick the second. The
third would probably not be that much faster in the real world than
the second, but would certainly blow away the first in any real work.

These are not arithmetic averages. They are geometric averages. SPEC
states that they weight them, but they don't say how. Looking at
their results the graphics can certainly pull up a score noticeably.

Examples taken from:
http://www.spec.org/gpc/apc.data/specapc_sw2005_summary.html
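The tie between the first two boxes falls straight out of the geometric mean. A quick check (equal weights assumed, since SPEC doesn't publish its weighting):

```python
# Unweighted geometric mean of the three sub-scores, matching the
# simplified scoring example above.
from math import prod

def geomean(scores):
    return prod(scores) ** (1 / len(scores))

for cpu, io, gfx in [(1, 1, 5), (5, 1, 1), (5, 1, 5)]:
    print(cpu, io, gfx, "->", round(geomean([cpu, io, gfx]), 2))
# A single inflated graphics sub-score lifts the composite exactly as
# much as a single strong CPU sub-score does, which is the complaint.
```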

TOP