
A six-year-old CPU can still be described as "pretty fast"? Moore's Law really must be dead and buried.


8 threads, 8MB cache, 4GHz? Pretty fast, yes. Not the fastest by far, but are you really arguing that casual web browsing for news viewing (viewing! the stats are not even for page load, but idle) should require top-of-the-line equipment?


No, I'm saying Moore's Law suggest(s|ed) that every 18 months your chip's speed relative to current offerings is about half what it was. And this old chip has had four cycles of that exponential decay.

I just built a 12-core system I would describe as "pretty fast." A high-end consumer desktop these days is 16 cores. A couple of years from now, that will be considered "pretty fast."
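
To put numbers on those four cycles of decay, a minimal sketch (assuming a clean 18-month doubling, which real hardware never followed exactly):

    # Relative performance of a fixed chip under an idealized
    # "doubles every 18 months" reading of Moore's Law.
    MONTHS_PER_DOUBLING = 18

    def relative_performance(age_months: int) -> float:
        """Fraction of current top-end performance a chip of this age retains."""
        return 0.5 ** (age_months / MONTHS_PER_DOUBLING)

    # A six-year-old chip has been through four doubling cycles:
    print(relative_performance(72))  # 0.0625 -> about 6% of a current chip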


Actually, Moore's law says nothing about speed, just the number of transistors you can stuff into the thing. Plus progress has been leveling off.


The only reason anyone ever cared about transistor count is that it's a rough proxy for speed.


It hasn't been for a long, long time now.


Moore's paper was about VLSI manufacturing. He certainly cared about transistor count. He didn't once touch on how those transistors get to be used.


Why did he care about transistor count?


VLSI manufacturers make chips, which are made of transistors. They do not sell end products; end products are something their clients care about, not them. In the same way, a tree farmer does not spend his days thinking about toilet paper, even though that's what he is actually helping create. Instead, he thinks about trees and how best to grow them.

Make no mistake, he knew he worked on the production of CPUs and memories; it was just not his focus. And he certainly cared about switching time, but his observation was about transistor count, not switching time.


Why did their clients care about transistor count?


Ever since the arrival of multi-core CPUs, chip speed should probably be considered a pair of numbers: max single-core throughput and total throughput. (A crude way to measure both is sketched below.)

Also, my over 6 year old DAW (Digital Audio Workstation) CPU[0] is still plenty fast. It’s almost everything around it I have since upgraded (NVMe SSD, RAM, audio interface, video card) to keep overall system performance high.

[0] https://ark.intel.com/content/www/us/en/ark/products/77780/i...
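
For illustration, a rough sketch of measuring both numbers - the burn() workload and its task size are invented for the example, and a real benchmark would control for turbo, thermals, and scheduling:

    import os
    import time
    from multiprocessing import Pool

    def burn(n: int = 2_000_000) -> int:
        # Pure CPU-bound busy work.
        total = 0
        for i in range(n):
            total += i * i
        return total

    def throughput(workers: int) -> float:
        # Tasks completed per second with the given number of worker processes.
        start = time.perf_counter()
        with Pool(workers) as pool:
            pool.map(burn, [2_000_000] * workers)
        return workers / (time.perf_counter() - start)

    if __name__ == "__main__":
        single = throughput(1)
        total = throughput(os.cpu_count() or 1)
        print(f"single core: {single:.2f} tasks/s, all cores: {total:.2f} tasks/s")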


We've just hit the point where AMD is competitive again in the last couple of years and things are looking up - Intel has doubled core count on their top-end consumer CPUs and more than halved prices in their last couple of generations.

The last decade of stagnation in the consumer CPU market was less the end of Moore's Law and more Intel not needing to do any better because they had no real competition.


This is pretty severe revisionist history that presupposes that Intel's investment of billions every year in trying (and failing) to make soft X-ray lithography work well enough to double transistor count was just them not trying hard enough due to lack of competition.


So them more than halving prices and massively boosting core counts in the space of a couple of years just happened to coincide with AMD competing with them again?

I won't argue there wasn't lots of R&D going on behind the scenes, but their current improvements are still on the same architecture they've been using for years - this wasn't something they couldn't have done earlier. (Especially the prices - surely being able to lower them so much and still make a profit means consumers were getting screwed earlier?)


Intel's competitive strategy has historically been to retain such an overwhelming technical advantage in transistor count that the other things didn't matter: they could make a huge profit while still providing better value than the competition. The only time that didn't work, until now, was when they tried to completely switch architectures (with Itanium), and they were able to quickly recover their advantage by returning to an x86-based architecture. Now, of course, this strategy has finally failed them, and all sorts of people are accusing them of having been complacent due to the lack of competition, but I don't think they were doing anything differently from before (at least with regard to what you're talking about).


Intel has had serious yield issues for their top-end chips for the better part of a decade, so that will also affect pricing. Everyone from ARM and Qualcomm to AMD and nVidia has been able to successfully step to new process nodes with acceptable yields where Intel struggled to hit the same node steps.


I think it's more a function of modern OSes not demanding that much more from the CPU than they used to - IIRC, Windows 10 is faster than Windows 7 on slow hardware because it disables more nice-to-have features (think Aero).

CPUs continue to get more powerful, but the minimum hardware requirements for newer editions of operating systems tend not to follow the same curve that new game releases do. (I can't play Rainbow 6 Siege on my Ryzen 5 3550H + GTX 1050 laptop, but CSGO runs fine on it and on my i7-2670QM laptop mobo with a GTX 1060 strapped to it.)


Oh, I completely agree that basic desktop computing is still reasonably responsive on older CPUs; I'm just pointing out that the mere idea of a six-year-old CPU being called "pretty fast" by current standards would have been unthinkable for most of my lifetime.


> IIRC, Windows 10 is faster than Windows 7 on slow hardware.

This has not been my experience with Windows 7 ==> 10.


10 (and IIRC 8?) felt incredibly slow on my gaming PC until I switched to an SSD for the OS disk. I think it hits disk way, way more often than 7, which makes it feel slower. Even on an SSD it doesn't really seem any better than 7 did on spinning rust (programs load faster, of course, but you can't really credit the OS for that—the OS interface itself, and OS utilities, don't seem faster). This was true even with Cortana and all that junk totally disabled.


Strange. I'd expect you to be hitting at least 40-50fps on 1080p High settings in R6 Siege with those specs. Minimum system requirements for R6 Siege is an i3. Maybe because it's a laptop you are either hitting thermal limits or you have significantly less VRAM than the desktop counterpart.


> Minimum system requirements for R6 Siege is an i3.

I've never played R6 Siege and have no idea how it performs, but I would just like to say that this is effectively meaningless and I hate how games put it in their system requirements. Writing "moderately fast CPU" would carry more useful information. A Westmere Core i3-530 from 2010 is not going to perform anything like a Coffee Lake Core i3-8100B from 2018.

(I'm not that smart, I looked up the model numbers on Wikipedia: https://en.wikipedia.org/wiki/List_of_Intel_Core_i3_micropro...)


The actual minimum listed for that particular game seems like it does what you're asking for:

> Intel Core i3 560 @ 3.3 GHz or AMD Phenom II X4 945 @ 3.0 GHz


I think it's possible for Moore's Law to still be true while single-core performance stagnates.
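
A toy illustration: if each generation doubles the transistor budget and spends all of it on extra cores, Moore's Law holds while per-core speed goes nowhere (starting figures invented):

    transistors = 1_000_000_000   # hypothetical 1B-transistor chip
    cores = 4
    per_core_speed = 1.0          # normalized single-core performance

    for generation in range(1, 5):
        transistors *= 2          # Moore's Law still holds...
        cores *= 2                # ...because the budget goes into more cores
        print(f"gen {generation}: {transistors:,} transistors, "
              f"{cores} cores, per-core speed {per_core_speed}x")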


For everyday user computing tasks, it’s not that Moore’s law is dead so much as just irrelevant. We’ve hit the point of diminishing returns on CPUs getting faster.

Many users have flipped to prioritizing power efficiency and quiet operation over speed. And even with those optimizations, most of their tasks are still bottlenecked as often by available memory and the quality of their network connection as by their processor.

We just haven’t figured out what to do with all that power in the hands of a user who isn’t super technical, so we’re just spending it on inefficient cruft instead.


That CPU is pretty close to the Ryzen 2700x I just installed last year.

And yeah, Moore's Law hasn't really been a law recently. :)


Yeah, an 8-year-old CPU with a 2-year-old graphics card still makes a decent gaming computer.



