R/E/P Community


Author Topic: AMD or Intel

UnderTow

Re: AMD or Intel
« Reply #15 on: May 01, 2006, 02:27:22 PM »

danlavry wrote on Mon, 01 May 2006 17:36


Hi Karl,

I am not talking about Moore's law over the last 30 years. That was not my question at all.

I am talking about it in the context of the LAST 3 YEARS.

I was talking specifically about clock speed.

Again, if you plot a curve of clock speed improvements in the last, say 25 years, the last 3 years seem "flat" to me.

Regards
Dan Lavry


Hi Dan,

Moore's law doesn't mention clock speeds. Here are all the details on Mr Moore's prediction: http://en.wikipedia.org/wiki/Moore's_law

Anyway, clock speeds are not a relevant measure of computing power. Much more important is the speed of calculations measured in FLOPS (Floating-point Operations Per Second) or MIPS (Million Instructions Per Second).
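
To make FLOPS concrete: here is a rough sketch (Python with numpy, purely as an illustration) that times a large matrix multiply and back-calculates a GFLOPS figure. It measures the throughput of whatever library you have installed, not the CPU's theoretical peak:

Code:

import time
import numpy as np

n = 1000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b  # a dense matrix multiply costs roughly 2 * n**3 floating-point operations
elapsed = time.perf_counter() - start

print("~%.1f GFLOPS" % (2 * n**3 / elapsed / 1e9))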

Different CPUs of similar clock speeds can have very different FLOPS or MIPS ratings. For instance, an AMD FX-60 running at 2.6 GHz is faster than even the very fastest Intel processor running at 3.8 GHz.

To show how irrelevant clock speeds have become, AMD didn't even make a press announcement when they broke the 3 GHz barrier with their first CPUs running at 3 GHz a couple of weeks ago.

Clock speeds are so 20th century. :-D

Anyway, with the advent of dual-core processors, I reckon that over the last 5 years, the computing industry has bested Moore's law.

Alistair

PS: I just checked the "Cost of computing" paragraph on that Wikipedia page and it says: cost per GFLOPS in May 2000: $640; cost per GFLOPS in February 2006: $1. We are way beyond Moore's law.
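
A quick back-of-the-envelope check of that PS (a sketch; Moore's law is about transistor counts, so cost per GFLOPS is only a loose proxy): May 2000 to February 2006 is 69 months, so a doubling every 18 months would predict roughly a 14x improvement, while the quoted figures show 640x.

Code:

import math

months = 69                     # May 2000 to February 2006
improvement = 640 / 1           # cost per GFLOPS fell from $640 to $1

predicted = 2 ** (months / 18)              # ~14x at an 18-month doubling pace
doubling = months / math.log2(improvement)  # actual doubling period

print("Moore-style prediction: ~%.0fx; actual: %.0fx" % (predicted, improvement))
print("actual doubling period: %.1f months" % doubling)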

danlavry

Re: AMD or Intel
« Reply #16 on: May 01, 2006, 05:28:55 PM »

UnderTow wrote on Mon, 01 May 2006 19:27

[...]


There are many possible factors limiting compute speed, and they are APPLICATION DEPENDENT. One piece of software may demand a fast hard drive, and the processor speed is almost insignificant. Another piece of software demands really fast video. Something else may be limited by how fast one can read and write RAM...

And then some applications that rely heavily on computations done INSIDE the CPU DO DEPEND ON CLOCK SPEED. Not all software that runs mostly inside the CPU requires the ultimate in clock speed, but some applications do!

So please put aside my reference to Moore's law. My point is strictly about clock speed.

I am not a computer guru, but I know just about everything that was said here; it is really very fundamental to any EE. I am well aware of the various tests used over the years for comparing compute speed, and in fact I too have performed some of the benchmark tests. So I know that a hard-drive-intensive task calls for a very fast hard drive. I also understand that dual core is great for many things. I know that RISC machines have some pluses, and I know about the ups and downs of parallel processing...

And then there are the few cases where hard drive speed does no good, the video can be slow as molasses, and the front-side bus speed is not going to help that much... because you are doing a lot of iterative computations inside the CPU. I happen to need such a fast clock.

A statement that clock speed does not matter is a gross generalization. It may hold true for the majority of cases, but it is not a general truth.

Again, there are many ways to improve computing. Some are about system architecture, some are about the speed of data transfer to and from the CPU, the hard drive, RAM... and clock speed is one of those issues. What I said is: I am surprised that it did not go up that much in the last 3 years. I believe it was near 3 GHz for a desktop and around 2 GHz for a laptop. I do not think it has moved up much lately.

Regards
Dan Lavry


kraster

Re: AMD or Intel
« Reply #17 on: May 02, 2006, 05:04:24 AM »

Hi Dan,

Intel have shifted their emphasis to energy efficiency rather than raw clock speed. Their new technology claims to reduce transistor leakage a thousandfold compared to their current process. This is obviously good news for the mobile market in terms of battery life, and probably the main reason Apple decided to switch to Intel.

With this new 65nm manufacturing process Intel will be able to double the number of transistors per chip.

They also mention something about their strained silicon providing higher drive current, increasing the speed of the transistors. (Sounds a bit voodooish to me!)

I'm not familiar with the real-world performance benefits of raw clock speed versus increased instructions per clock cycle or increased transistor count. As you say, it's probably application dependent.

This is merely a rough summary of the direction Intel is taking.

Regards,

Karl

UnderTow

Re: AMD or Intel
« Reply #18 on: May 02, 2006, 09:37:39 AM »


Hi Dan,

danlavry wrote on Mon, 01 May 2006 22:28


There are many possible factors limiting compute speed, and they are APPLICATION DEPENDENT. One piece of software may demand a fast hard drive, and the processor speed is almost insignificant. Another piece of software demands really fast video. Something else may be limited by how fast one can read and write RAM...



I am talking purely about computing speed. By that I mean actual computing operations and not data rates in and out of the CPU, video speed, storage and retrieval speeds etc.

Quote:


And then some applications that rely heavily on computations done INSIDE the CPU DO DEPEND ON CLOCK SPEED. Not all software that runs mostly inside the CPU requires the ultimate in clock speed, but some applications do!



Yes indeed. I never have enough computing speed. :-)

Quote:


So please put aside my reference to Moore's law. My point is strictly about clock speed.



So is mine. :-) My point is still that clock speed is not a relevant measure of computing speed.

An AMD Opteron is a three-way superscalar processor, with each pipeline processing two micro-operations per clock cycle, giving a total of 6 uOps per cycle. In contrast, the Intel P4 handles 4 uOps per clock cycle. So as you see, the Opteron can handle 50% more operations per clock cycle than a P4.

Now make these processors dual-core and we go up to 12 and 8 uOps per clock cycle respectively. That is why I say that clock speed is irrelevant as a direct measure of computing power.

(There are other reasons for the AMD chips being faster, especially in dual-core architectures, but that becomes quite complex.)

Quote:


A statement that clock speed does not matter is a gross generalization. It may hold true for the majority of cases, but it is not a general truth.



You are equating clock speed with computational speed, when really: computing speed = clock speed × instructions per cycle.
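
Plugging the thread's own figures into that formula shows why the lower-clocked chip can win (a sketch using the peak uOps numbers quoted above; sustained throughput is workload dependent):

Code:

def peak_uops_per_second(clock_ghz, uops_per_cycle):
    # peak throughput, in billions of micro-operations per second
    return clock_ghz * uops_per_cycle

fx60 = peak_uops_per_second(2.6, 6)  # Opteron-class core: 3 pipelines x 2 uOps
p4 = peak_uops_per_second(3.8, 4)    # fastest P4

print("FX-60: %.1fG uOps/s, P4: %.1fG uOps/s" % (fx60, p4))
# -> FX-60: 15.6G uOps/s, P4: 15.2G uOps/s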

Quote:


What I said is: I am surprised that it did not go up that much in the last 3 years. I believe it was near 3GHz for a desktop and around 2GHz for a laptop. I do not think it moved up much lately.

Regards
Dan Lavry



No, clock speeds haven't gone up much, but computing speed has, especially with dual-core processors: that doubles the number of instructions per second. (A slight simplification. :-)

Alistair

UnderTow

Re: AMD or Intel
« Reply #19 on: May 02, 2006, 09:50:42 AM »

Hi Karl,

kraster wrote on Tue, 02 May 2006 10:04

Hi Dan,
With this new 65nm manufacturing process Intel will be able to double the number of transistors per chip.



Intel already use a 65nm manufacturing process; they are going to 45nm. AMD is still at 90nm and going to 65nm, but they manage to be much more energy efficient at 90nm than Intel, thanks to their SOI (Silicon On Insulator) manufacturing process.

Alistair

danlavry

Re: AMD or Intel
« Reply #20 on: May 02, 2006, 04:59:30 PM »

UnderTow wrote on Tue, 02 May 2006 14:37


Hi Dan,

You are equating clock speed with computational speed, when really: computing speed = clock speed × instructions per cycle.

No, clock speeds haven't gone up much, but computing speed has, especially with dual-core processors: that doubles the number of instructions per second. (A slight simplification. :-)

Alistair


I said it twice, you seem to "sort of disagree". I'll say it again:
1. Clock speed is ONE OF THE FACTORS for compute speed.
2. The importance of clock speed relative to the other factors is APPLICATION DEPENDENT.
3. I have a couple of applications where clock speed is a very important factor for compute speed.

What is it that you disagree with?

Regards
Dan Lavry

Jon Hodgson

Re: AMD or Intel
« Reply #21 on: May 02, 2006, 06:20:58 PM »

danlavry wrote on Tue, 02 May 2006 21:59


1. Clock speed is ONE OF THE FACTORS for compute speed.


Yes, but it is actually less of a factor than you seem to believe it is.
danlavry wrote on Tue, 02 May 2006 21:59

2. The importance of clock speed relative to the other factors is APPLICATION DEPENDENT.


True, but it is unusual to find a case where instructions cannot be rescheduled to use multiple execution units (I am NOT talking about dual core here, but about instruction-level parallelism in a single thread) with an improvement in performance.
danlavry wrote on Tue, 02 May 2006 21:59


3. I have a couple of applications where clock speed is a very important factor for compute speed.


Firstly, the fact that a process is processor-bound and single-threaded does not mean it cannot benefit from an architectural change in the CPU rather than just an increase in clock speed. Secondly, the examples you gave, compilation and mathematical analysis, can benefit greatly from multiple processors or cores... if the software is written to use multiple threads.

The fact is that increasing clock speed has given diminishing returns for many years now: faster clocks mean more power consumption, which means more heat, which makes it harder to keep the chip cool enough not to destroy itself. The result is that designers have been looking to improve performance in other ways. One is to increase parallelism, either by increasing the number of execution units in a single core and using clever hardware to reschedule and allocate instructions within a single thread (and then introducing hyperthreading to increase average utilization with two threads), or by duplicating the whole processor in a multicore setup and relying on cleverer software to take advantage of it.
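
To illustrate the multicore route in code, here is a minimal sketch (Python, with a made-up workload called burn) that runs the same CPU-bound job serially and then split across a pool of worker processes. On a dual-core machine the second timing should come out close to half the first, but only because the work was explicitly divided:

Code:

import time
from multiprocessing import Pool

def burn(n):
    # hypothetical CPU-bound workload
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    chunks = [5_000_000] * 4

    t0 = time.perf_counter()
    for n in chunks:
        burn(n)
    t1 = time.perf_counter()

    with Pool(4) as pool:  # worker processes, one chunk of the job each
        pool.map(burn, chunks)
    t2 = time.perf_counter()

    print("serial %.2fs, parallel %.2fs" % (t1 - t0, t2 - t1))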

Laptops in particular are an area where the emphasis long ago moved from maximum performance to performance for a given power consumption.

danlavry

Re: AMD or Intel
« Reply #22 on: May 02, 2006, 08:54:24 PM »

Jon Hodgson wrote on Tue, 02 May 2006 23:20

[...]



I am more than aware of the factors relating to speed and density: how the reduction in dimensions decreases capacitance, how the reduction in core voltage reduces power (a voltage-squared factor), and so on. One does not need to design a processor to know that; the whole IC industry has been marching to the same drummer.

I'll tell you what I'll do: I will get a new laptop and run my software on it. I will compare the exact same computations (I will use one that takes about 5-10 minutes). Then I will report back here with what I find.
In fact, I will try 3 things:
1. Software for compiling a complex Altera FPGA design
2. Math software doing a long iterative calculation
3. An auto router for a complicated printed circuit board

Now, if I get a factor of 2 in speed, I will be happy. If I get less than 50% improvement, I will be disappointed.
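
The comparison itself only needs a stopwatch around the same job on both machines. A minimal harness might look like this (a sketch; run_job is a hypothetical stand-in for the FPGA compile, the math run, or the autoroute):

Code:

import time

def run_job():
    # placeholder workload; substitute the real 5-10 minute task
    total = 0.0
    for i in range(1, 20_000_000):
        total += 1.0 / i
    return total

start = time.perf_counter()
run_job()
print("elapsed: %.1f s" % (time.perf_counter() - start))
# speedup = time on old machine / time on new machine; 2.0 would meet the hope above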

What do you guys think the speedup will be?

It will be a while before I get to it, but I am ready for some predictions.

Regards
Dan Lavry  

Jon Hodgson

Re: AMD or Intel
« Reply #23 on: May 03, 2006, 03:33:09 AM »

danlavry wrote on Wed, 03 May 2006 01:54

[...]


It all depends, quite obviously, on how the software has been written. Some gains are "free" in this respect: for example, the instruction scheduler will take advantage of multiple execution units in a single core, and the OS will automatically do certain operations on a second core, thus reducing load in what is otherwise a single-threaded situation.

However, the maximum gains will usually require the software to have been written with finer-grained threading than you might have used if you assumed a single processor was all that was available.

Also, if you want maximum gains on an x86 system, you'd probably want to be using a 64-bit processor and a program that has been compiled for it: not necessarily because you gain anything from the 64-bit factor itself, but because the doubling of the number of registers has been shown to improve performance notably.

If you are using software which has been written to take advantage of the available architecture, then I would expect notable gains in most cases.

However, there is another factor: the specifics of the operation you are doing. For example, a simple iterative calculation might not offer any opportunities for parallelism, whether at the thread or instruction level.
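
A small sketch of that distinction (hypothetical functions): the first loop's iterations are independent, so they can be spread across execution units or cores; the second carries each result into the next step, so nothing can overlap no matter how much hardware is available.

Code:

def independent(data):
    # each element is processed on its own: iterations can run in parallel
    return [x * x for x in data]

def iterative(x0, steps):
    # each step needs the previous result: a loop-carried dependency
    x = x0
    for _ in range(steps):
        x = 3.9 * x * (1.0 - x)
    return x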


danlavry

Re: AMD or Intel
« Reply #24 on: May 03, 2006, 02:48:32 PM »

Jon Hodgson wrote on Wed, 03 May 2006 08:33

[...]




Hi Jon,

You seem to echo some of what I said: it is software dependent, and of course it is not easy to know what to expect. Some software vendors will give you this information, but with some companies it is not easy to find the person with the know-how...

Yes, I understand some of the comments about longer battery time (between recharges), dual core, wireless stuff... It is all fine and great, perhaps, for over 90% of what people do... But is it great for most engineering applications? I do not know.

Regards
Dan Lavry  




crm0922

Re: AMD or Intel
« Reply #25 on: May 04, 2006, 11:42:10 AM »

Dan: If you are looking for performance, do not buy a laptop.  I appreciate the convenience factor, but they are absolutely inferior performers.

Laptop engineering has been heading towards increased power conservation at the cost of performance improvements for a while now.

Laptop CPUs do not even run at their rated clock speed at all times; some change speed with temperature, CPU load, etc., once you get involved with Centrino and things like that.

If you must have a laptop, you need to spec 7200 RPM drives, which are expensive and rare (last I checked, that is).  It makes a huge difference.  I suspect that the functions you require are I/O-bound to some degree.

You may also be able to reinstall Windows (or convert the installation) to disable ACPI, which will defeat all the power-saving functions.  It will also prevent battery information display, lid-closing/sleep-type functions, and so on.  I have heard that some laptops will not operate at all without ACPI active.

Chris

danlavry

Re: AMD or Intel
« Reply #26 on: May 04, 2006, 03:20:33 PM »

crm0922 wrote on Thu, 04 May 2006 16:42

[...]


Thank you Chris,

I do some of my work on a desktop, and some of it on a laptop.
It is a question of "lifestyle". As a rule, it is extremely easy to keep the "latest data" always updated on both computers. I use "memory sticks" and CD-Rs. I trust CD-R more as short-term backup, though I have had only one "memory stick" failure so far.

On my laptop, I did disable the various battery-saving features (except for the display), and that laptop is one heck of a "gas hog". A new battery every year does help a lot!


The other point, though unrelated: I value my design work a lot, I do not want it destroyed by a virus, and I do not know how far one can trust the various protection schemes against uninvited guests. My solution: never connect my design computers to the Internet! Most of my machines are connected, but a few are not.

That does cause a lot of inconvenience! One cannot directly download many "software things" such as software upgrades, enhancements, device models and much more. Inconvenient, but I sleep better at night. It may not be 100% bulletproof, since I do load software, just not via the Internet.

It does amount to "taking a few steps back" technology-wise. But I am not thrilled about the state of affairs in Internet security.

Regards
Dan Lavry

crm0922

Re: AMD or Intel
« Reply #27 on: May 05, 2006, 04:34:40 AM »

You could just disable the network adapter most of the time and enable it when you need to download something.  Contrary to popular belief, it is very rare that one gets spyware or a virus of some sort without more or less asking for it.  Surfing around looking at porn, cracked software, etc. will get you something creepy pretty quick.

The vast majority of viruses and spyware don't travel between networks very easily, so it isn't terribly difficult to keep it under control with adequate AV protection.

I find the best for small networks is Trend Micro:

http://www.trendmicro.com

Their server/network solutions are easy to maintain and keep the network up to date automatically.

I'd use a network with a server and offline synchronization to keep things up to date without the need for CD-Rs and memory sticks and the like.  The access times for such media are atrociously bad.

Isn't all your design data incrementally (at least) backed up nightly?  It should probably be shadowed in real time as well; that way there is almost no potential for data loss.

Chris







danlavry

Re: AMD or Intel
« Reply #28 on: May 05, 2006, 02:22:18 PM »

crm0922 wrote on Fri, 05 May 2006 09:34

[...]



About 16 years ago I was hit with a virus. By the time it showed itself, all the backups of the last few months were also contaminated. That is when I learned that backups are not always the cure-all many people think.

But of course we back up a lot, not just because of viruses, but also because hard drives "go away". It is always very time-consuming to reinstall the software, restore the backup data....

I would really like to have a couple of extra hard drives, or even a couple of computers, holding a "mirror image" of the hard drives I use, so that I can just "pick up and go" when a hard drive breaks down. I would be willing to "re-match" the extra drives to the working drives as often as once a week.
I have not yet figured out how to do it. I would think there must be an easy way to do it.
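
For the data side of that (not a bootable whole-drive clone), the weekly "re-match" can be as simple as the following sketch (Python, with hypothetical paths); dedicated cloning tools, like the ones mentioned below, are the more robust route:

Code:

import shutil
from pathlib import Path

SOURCE = Path("D:/designs")         # working data (hypothetical path)
MIRROR = Path("E:/designs_mirror")  # spare drive (hypothetical path)

if MIRROR.exists():
    shutil.rmtree(MIRROR)           # crude but simple: rebuild the mirror from scratch
shutil.copytree(SOURCE, MIRROR)
print("mirror refreshed")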

Regards
Dan Lavry

 

PookyNMR

Re: AMD or Intel
« Reply #29 on: May 05, 2006, 02:55:43 PM »

danlavry wrote on Fri, 05 May 2006 12:22

I would really like to have a couple of extra hard drives, or even a couple of computers, holding a "mirror image" of the hard drives I use, so that I can just "pick up and go" when a hard drive breaks down. I would be willing to "re-match" the extra drives to the working drives as often as once a week.
I have not yet figured out how to do it. I would think there must be an easy way to do it.


On the Mac there is an application that will make a working 'clone' of your hard drive, Carbon Copy Cloner.  On the PC, if I'm not mistaken, Norton's 'Ghost' will do the same thing.

I use CCC to clone my hard drives to protect against failures, thefts, etc.

Nathan Rousu