For years, Intel sat on its CPU monopoly and now the tide turns against them (seekingalpha.com)
162 points by jaytaylor on Oct 3, 2020 | 236 comments


I don't think it's fair to say that Intel 'sat' on their monopoly. From McAfee, through Altera, to (whatever that stupid company that had the drones was), Intel was desperately trying to leverage their monopoly to expand into new markets. What I think is unique about Intel when compared to their actual competitors (FAANG + M/S + AMD + Nvidia) is that Intel don't pay for talent. Intel have been cost-oriented for at least a decade now, ever since 14nm costs spiralled. The problem wasn't that Intel was a monopoly so much as that Intel has been living its life in the world of real companies, not Silicon Valley. Intel is viewed/valued/run as a manufacturing company. So they need to stay on the eternal treadmill of performance whilst spending like they're just doing average work. It's difficult to do that when Google is just going to 10x the salary of any of your 10x staff.


Intel is not wrong about being a manufacturing company though. Their true competition is TSMC, whose formula for success is having an army of process engineers working three shifts, figuring out every little detail, on an island that has seen stagnant wages for more than a decade. Intel's mistake was to believe they should diversify, and nearly every acquisition outside of their core competence turned out to be a disaster.


Turned out to be a disaster?

Intel's acquisitions have been pre-existing disasters, also-rans and underdogs looking for their big break, often hoping that getting access to Intel's fabs could make their next generation of chips more competitive. Many—if not most—of their acquisitions over the past decade only really make sense if they were intending to leverage Intel's supposed core competencies to not just get Intel into new markets, but grab a lot more market share in those new segments. And then being part of Intel ended up not really helping those companies, because Intel couldn't deliver on their core competencies.


Not true for NVidia. Companies ultimately sell stories, not products. Tesla could be as much a manufacturing company as Ford, but they are not. The stories are very different.


You could say they ran the company with MBAs instead of BSEEs, while their competitors were doing the opposite.

Putting the money they spent on acquisitions into manufacturing technology, process innovation, etc. would have kept them right up front.

As for paying for talent, they certainly paid enough dollars for talent, but could have directed those same dollars at fewer people with the talent they actually needed. That means paying top dollar for the right people, and skipping the rest.

If they let others poach, that is because those competitors were smarter at valuing talent while Intel either slept or fell victim to broken HR policies.


Intel was shipping 4-core desktop/laptop CPUs for years when they could have had 10-12 cores. They were definitely sandbagging until AMD forced their hand.


Is this true? I thought AMD's ability to push higher core counts was largely enabled by very recent manufacturing innovations with chiplets/Infinity Fabric.

Namely, that it lets the manufacturing process glue together slightly broken CCXs into high-core-count, lower-binned parts.


Intel has had no trouble moving their mainstream product lines from 2-4 cores up to 6-10 cores, without copying AMD's chiplet playbook, or using their own more ambitious chiplet interconnects, or even delivering significant improvements to their CPU fabrication process. They just stretched out a ring bus a bit further to add in more of the same CPU cores they'd been shipping for years, with very incremental improvements/fixes. Aside from some of the clock speed increases and some of the integrated GPU updates, all the big improvements they've made to their consumer 14nm chips since 2015's Skylake are things they could have delivered in 2016 if there had been competitive pressure.


Not to mention the 2 core 15 watt laptop CPUs.


Which most developers have no idea how to use, beyond running more OS processes.


Or more like: developers have no incentive to optimise for it, purely because so few people have it.


Go to any developer meetup and make an informal query about how comfortable most are with these kinds of subjects; I bet most would fail a parallel-programming pub quiz.

I certainly might, and all applications that I have written commercially since 2000 make use of parallel programming and concurrency.


Intel's US employees are mostly in Arizona and Oregon, which are much lower cost of living locations than the Bay Area and Seattle.

I would be curious to know which companies pay more for process engineers than Intel in the US. My best guess would be Samsung?


From Glassdoor for Process Engineer (not senior): Intel average is 124k, Apple average is 143k


Apple doesn't hire outside of Cupertino, and the cost of living there is easily double Arizona's and probably 50% more than Portland's. I'd rather make 125K in Arizona than 143K in Cupertino, that's for sure.


TSMC


I don't know if it's the salary delta so much as Intel feels like the modern version of Fairchild, IBM, or Sperry-Rand; business-suit wearing lifers more interested in defending their established market than creating new ones or competing with themselves.*

So as the Andy Grove generation retired out, very competent and talented but ultimately risk-averse replacements came in.

So they failed to make meaningful inroads into mobile and are now getting flanked by the myriad of companies who did as those companies are moving upstream with lower power, lower heat and thus potentially more scalable chips.

Intel also failed to rise to the occasion of "pivot or preserve" and just went to a "preserve at all costs" strategy even when it was clearly not working out for them. Chips got delayed 6 months, then 12, then 18, then got redefined just to get something marginally better out, as their original targets kept getting pushed further out.

And now, instead of, say, sniping some stars from Rockchip or the other amazing up-and-comers, they did a reorg from within; a very GE/Texas Instruments-style move that usually does not work - especially when you have already passed a generational changing of the guard.

But they have plenty of market still captured, a continued strong brand identity in the vast majority of the public, plenty of money to pad a runway and plenty of decent products so there's still significant time to turn their ship around and not continue to ignore every word of every book of Jack Trout, Al Ries, Clayton Christensen, Geoffrey Moore, and Steve Blank. First they need to stop believing "it can't happen here" when it's clearly already happening here.

If they don't, they might become another Bethlehem Steel or Sears; deftly unable to read the cards despite evidence attacking them from all directions. They need to realize their old ways will not carry them into a new world.

Look how Microsoft flipped. As a hardcore Linux user I would have never thought I'd ever be giving cash to Microsoft (via GitHub and Azure), and monthly at that! These things are possible: "When losing ground focus on your potential instead of preserving position"

Even after whatever AMD is going to say on October 8 and whatever Nvidia is going to do with ARM, the chip ball is still in Intel's court for now and it's still their game to lose. I am pretty confident they will figure out this rough patch and a substantial recovery will come (come see my stock positions if you don't believe me). My tune may change in 6 months...

---

* The modern Sperry-Rand, at Unisys, for instance, is STILL maintaining their UNIVAC operating system - why? Good question. Do they think that's still a potentially winning hand? A path for recapturing a market they lost 60 years ago as if say, somehow hot new startups will start choosing OS 2200 running on Univacs? I don't know either. I downloaded and tried their free virtualized x86 version about 6 months ago just to see what it was like and I got a high-touch white-glove sales-treatment of emails and phone calls. It all felt very desperate. (https://en.wikipedia.org/wiki/OS_2200 and https://www.unisys.com/offerings/clearpath-forward/clearpath...). I wish them the best of luck, they'll need it.


>So as the Andy Grove generation retired out

They had Patrick Gelsinger. But he was pushed out. You were left with a board led by former CFO Andy Bryant, which chose a CEO that fit his taste.

It was all Politics.


> It was all Politics.

And this has trickled down since.


Sure. Apple had the Sculley/Spindler/Amelio era and Microsoft had the Ballmer era. These things are recoverable if they get addressed in time. (Then there's the SGIs, Yahoos and SUNs of the world - coasting for years into oblivion; the Radioshack/A&P Grocery model. I don't think Intel is in that camp though.)


>Apple had the Sculley/Spindler/Amelio era and Microsoft had the Ballmer era

I would actually argue those two had a strategy problem rather than an execution problem. And those two had comparatively little politics / power play involved.

Intel has both, and politics.


I have to disagree. The politics and power play around the ousting of Jobs are famous, kind of the textbook case. Woz even quit over it.

The mantle passing to Ballmer came over the objection of many major stockholders, but some tiny cabal around Gates et al. overrode it. For years there were protests and threats that people would sell off millions if they didn't can him. There was substantial slippage and he lost all support and bounced, like Lee Iacocca except without any glory days.


> Intel was desparately trying to leverage their monopoly to expand in to new markets.

If your business is in an industry with very rapid innovation, would it be wise to lose focus on your core products and processes?


For years now, Intel's only way to get more performance out of their functional process nodes has been to push power and heat higher.

Now, the Tiger Lake performance per watt story just isn't that impressive compared to the competition.

>Here we present the 15W vs 28W configuration figures for the single-threaded workloads, which do see a jump in performance by going to the higher TDP configuration, meaning [Tiger Lake] is thermally constrained at 15W even in [single threaded] workloads.

Comparing it against Apple’s A13, things aren’t looking so rosy as the Intel CPU barely outmatches it even though it uses several times more power, which doesn’t bode well for Intel once Apple releases its “Apple Silicon” Macbooks.

https://www.anandtech.com/show/16084/intel-tiger-lake-review...


Same thing happened to Nvidia, and I'm very perplexed why this is not widely reported.

I wonder how the 3070 and 3060 are going to compare, in terms of performance per watt, when compared to the 2080/2070 series. Based on the current numbers, they'll possibly show very little improvement.


What “current numbers” are you referring to? Of course it may depend on workload, but the 3080 has, at least in one benchmark[1], better performance per watt than all compared cards (7% better than 2080 Ti, 21% than 2070, 32% than 2080, 67% than 1080 Ti). Total power consumption is up quite a bit (25% over 2080 Ti), but you still get more performance than the extra power costs.

[1] https://youtu.be/csSmiaR3RVE?t=1229
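
Back-of-the-envelope, using those figures (a quick Python sketch; the percentages come from the benchmark, the rest is just ratios):

    # 3080 vs 2080 Ti, per the benchmark above
    power_ratio = 1.25          # ~25% more total power draw
    perf_per_watt_ratio = 1.07  # ~7% better performance per watt
    perf_ratio = power_ratio * perf_per_watt_ratio
    print(f"raw performance gain: {perf_ratio - 1:.0%}")   # ~34%

i.e. roughly 34% more performance for roughly 25% more power.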


Right, the efficiency improvement used to be ~60% per two years now it's 7% per two years.


NVIDIA pushed the 3080's stock performance a little too high up the perf/watt curve. If you limit it to the TDP of the 2080 Ti, you lose 4% performance but you get much better efficiency: https://www.computerbase.de/2020-09/geforce-rtx-3080-test/6/


> Right, the efficiency improvement used to be ~60% per two years now it's 7% per two years.

Is 3080 vs. 2080 Ti the correct comparison?


It’s not clear yet, since we’ve no idea if there will be a 3080 Ti. The 3090 throws out the naming convention from the past few generations, leaving it a bit of a mystery. Nvidia may do as they did with the 1080 Ti: not release till nearly a year after the 1080 (whereas 2080 & 2080 Ti were launched just a week apart).

Given the 3090 is not too much faster than 3080, it seems there may not be room there. But then again, the 1080 Ti was as fast as the Titan X. So…yea.


This is actually what I find extremely dishonest (which is, in other words, extremely good marketing) - a lot of websites are comparing the 3080 with the 2080, and reporting massive improvements... which doesn't make much sense.

I think 3080 vs 2080 Ti is the only possible comparison, but websites should make it very clear that it's an unfair comparison.

Probably the only comparison that actually makes sense is the (future) 3070 vs 2080 Ti. I'm not surprised that Nvidia pushed back its release (it was supposed to come out earlier).


The end of Moore’s law has been broadly reported for the last decade and is widely understood. Nobody is likely to get huge efficiency improvements in general processors ever again, not like what we saw in the past, and it isn’t news anymore.

Algorithmic improvements, custom domain specific ASICs, and maybe quantum computing or other physical processes in the future are where large efficiency deltas might come, but for now small improvements are here to stay for all chip makers.


Moore's law may be considered dead at Intel, but TSMC does not agree.

>Wong, who is vice president of corporate research at Taiwan Semiconductor Manufacturing Corp, gave a presentation at the recent Hot Chips conference where he claimed that not only is Moore’s Law alive and well, but with the right bag of technology tricks it will remain viable for the next three decades.

>“It’s not dead,” he told the Hot Chips attendees. “It’s not slowing down. It’s not even sick.”

https://www.nextplatform.com/2019/09/13/tsmc-thinks-it-can-u...


I'm not sure chip fabs are ever going to say Moore's law is dead -- I interned at Intel last summer, and Moore's law was pretty much all they could talk about. (In fact, they made very similar claims at the exact same conference [0]).

[0]: https://twitter.com/stshank/status/1295469775678627840


Maybe another approach has potential:

[0] https://www.eetimes.com/bipolar-zener-combo-takes-on-cmos/

In fact, multiple gates can be created in the same transistor, in an effect SFN calls “multi-tunnel.” Multiple NOR and OR gates can thus be created from a single Bizen transistor, allowing creation of logic circuits with many fewer devices. This can result in a three-fold increase in gate density with a corresponding reduction in die size for integrated circuits based on the transistors. Summerland said that SFN is also creating a reduced device count processor architecture to enable analogue computing with Bizen transistors.


There's a lot more to the perf-improvement tapering than changes in Moore's law (which is about circuit complexity increasing at a given cost) - namely, problems translating the increasing transistor budget into IPC improvements or, failing that, into solved parallel programming. And the stalling of clock speed improvements.


FWIW, I don’t think perf improvements are slowing down, I just think efficiency improvements in ICs are. Flops per watt of general compute isn’t moving quickly, and can’t anymore. But we can still make bigger parallel machines, design better algorithms, solve new problems, etc.


Outside HPC/ML I think our programs are now trading off useful ops per watt to take some advantage of the elusive beast called thread level parallelism. A web browser is happy to get a speedup of N by throwing 2N or 4N spinning threads at the problem if correctness and stability can be retained.


Great comment. It seems like a cool time.

What makes sense to accelerate, how to integrate it and balance accelerators vs. general cpus, and how to expose it all to the programmer all seem like fun and interesting problems.


It is a cool time! Yeah I totally agree, and I think it’s awesome that you’re looking at it as an opportunity to learn and have fun doing it. Some people worry, and others embrace the change and make good things happen. I think I can attest to your vision since I work for a chip maker and I’m involved in the hardware & software design of some domain specific computing - it has been a blast, and we are learning all kinds of fun things.


The press follows, rather than identifies, the trend. It doesn't help that in most 'news' organizations, the news is considered entertainment and isn't terribly rigorous.

The 2xxx series wasn't terribly impressive to many so they've just started to peak out in the way that Intel did at least 5 years ago. We're heading into AMD's 4th iteration of processors that basically mop the floor with Intel from a price/performance and performance/watt standpoint and it's just starting to become accepted in the mainstream that Intel is in trouble. It will take another generation or two of products before the press catches on to the fact that what nVidia is telling everyone isn't true. Of course it will be helpful if there's some competition to point to that helps make the case.


Coincidentally there was a Twitter fight about that topic last night. From a consumer perspective there's nothing that can really be done about it. Chip designers have known for years that "Moore's Law" is slowing so it's not news to them.


Good analysis on this by AdoredTV. TLDR Ampere is one of the smallest improvements in performance per watt in Nvidia's history.

https://youtu.be/VjOnWT9U96g?t=1104


[flagged]


Was there an issue with any of the math in the video? It was just straightforward performance-per-watt vs. performance-per-watt calculations as far as I could tell.


The numbers and math all look correct but some of the comparisons seem to be cherry-picked. Titans weren't included but the 3090 was for example.


3090 is not a titan as per Nvidia. They are promoting it as the 8k gaming solution.


The 3090 should be the new Titans


Does it even matter? When it starts to matter you won’t be able to run MacOS on Intel and you can’t run Windows or something else on ‘Apple Silicon’.

They may be very nice machines but they won’t be in the race for a very large portion of the market.


Software and platform lockdown are a lot weaker than they were in the WinTel heyday. It’s a lot easier to go from macOS to Windows to Linux these days. There are many exceptions of course, but I’d wager that market is not as big as you think.


I think the OP's point is that this is going to get a lot harder once MacBooks become Apple Silicon only.


Software has a lot more abstractions these days. Almost no one codes close to the ISA. I don’t think it’s as much of an issue as it was.


Only if one lives in a world of POSIX CLIs and Web GUIs.

That is not the market that Apple cares about, nor did NeXT much, other than to gain market share when things were looking dim.


Most of the world uses web apps. Huge and performance sensitive applications like Adobe, Maya, etc mostly already have their own UI rendering engine anyway.

It’s a lot different from the Win32/x86 duopoly situation.

Platforms have been mostly abstracted away.


Most of the world is a bubble.


Care to elaborate? I don't understand what you meant by that.


>Does it even matter?

Yes, most tools are cross platform (not necessarily cross-arch but that's a matter of time) and performance is important - I don't care if it's Apple silicon or Intel or AMD, or what OS GUI I'm using - I just want faster builds and iteration


Also note that the only benchmark where the A13 was included was Spec2006 - so unless Spec 2006 is relevant across most workloads that most users care about it's not really telling the whole story.


Is there any reason to think you wouldn’t (eventually) be able to install the Arm version of Windows on Apple silicon?


There will be an ARM version of Windows... but there's no reason to believe that Microsoft will or can support all of the coprocessors that Apple is going to add - including the GPU, T2, neural engine, etc. It isn't like Apple plays nice with 3rd parties. I seriously doubt they'll release the details necessary to develop independent drivers for those... whether it's Microsoft or Linux.

So there’s good reason to believe it may be macOS only.

Personally I’ll believe it when I see it... because right now there are no 3rd party iPad or iPhone operating systems. So a better question is probably: what makes you think there will be 3rd party OS support?


But, but...

https://news.ycombinator.com/item?id=24636166 (Apple’s T2 security chip jailbreak)


I find it hard to believe Apple would ditch bootcamp outright. It is too useful of a feature.


Microsoft is the one doing Apple Silicon contributions to OpenJDK though.

Yep, curious world.


Yes. There is the fact that macOS on ARM does not appear to have a standardized boot method (Apple says the boot sequence is "based on iPadOS"), and there is the fact that Apple also uses their own GPU architecture for which they're only motivated to make mac/iOS drivers.

Getting Windows to run on this is a pretty tall order, and without significant investment it won't happen. And significant investment from whom? Not Apple surely. I'm guessing Microsoft has more stake if anyone. At best we're going to see Windows on ARM virtual machines.


Apple currently supports windows installs, though. They write drivers and make sure the hardware works.


Apple has enough incentive to do it now because Windows on x86 is something people actually want/need to use. It doesn't cost much to support and probably sells a bunch of Macs. Windows on ARM is not really something people would want to use, not to the enormous extent people want to use Windows on x86.

Speaking from what people expect from a "Windows PC" - Windows on a current MacBook makes a decent Windows PC. Windows on a future MacBook makes for a very poor Windows PC that will make customers regret their purchase.

Apple currently uses standard PC architecture, with identical CPUs, GPUs and Wifi chips to standard PCs, and a (sort of) standard EFI booting mechanism. With ARM Macs, it won't. Clearly Apple in its hardware design is already not invested enough to support Windows - they could use standard EFI on ARM, for example. So why would they spend a lot of money to do it in software, so people can run a "crippled" version of a competitor's OS?

Note I don't actually believe Apple will actively lock this down. I do believe we will get native Linux on these things to some extent. But I do believe Apple will not lift a single finger to make alternative OSes happen on ARM Macs. Apple only cares about macOS and virtualization. I also think Windows on ARM is a great product, but I also know that common people that just want to use a Windows computer won't agree with me.


Yes. You can't install anything on iPads and iPhones. And those are the closest hardware we have to the soon-to-be-released ARM Macs. I would actually be incredibly surprised if they could run anything but macOS.


Counterargument: the closest hardware we currently have to ARM Macs is probably... Intel Macs.

The "Apple is going to lock down the Mac just like the iPhone" narrative has been with us since, well, the iPhone. But despite the alarm at tighter security measures in more recent versions of macOS, that hasn't happened yet -- and if a Secret Nefarious Lockdown Plan (tm) was going to come to fruition, the year that they shifted CPU hardware and radically redesigned the operating system's UX would sure as heck seem to be The Perfect Moment. And it still hasn't happened.

Past performance is not a guarantee of future returns and all that, but I don't think there's any reason to think Apple is going to make it any harder to run different operating systems on Apple Silicon hardware than they do on Intel hardware. (Of course, I don't think there's any reason to think they'll make it easier, either.)


On Intel Macs the default was that they would run Windows, as long as Apple did not make substantial changes to the UEFI, etc.

But ARM Macs are really an Apple A-series processor plus a number of coprocessors like the T2, neural engine, etc., and the default is that Windows will not work on this specialized hardware.

So unless there’s evidence that Apple is actively going to help 3rd parties develop operating systems for the Mac, the best we can hope for is a fairly acceptable OS with reverse engineered drivers for those chips. We’re either going to get an unstable OS or a severely crippled OS.

With Intel Macs, Apple just had to stand back and let 3rd parties do their work. But they would have to actively assist with ARM Macs... and there's no indication they've ever done that, much less intend to do so in the future.


That's false. The ARM chips are tightly integrated SoCs. They aren't even close to the drop-in replacement that "just" a new CPU would be. The internals of the ARM Macs will be much closer to the iPad than to the Intel Macs.

And I don't think it's a coincidence that Apple is slowly but surely moving OSX towards a locked-down platform. It has already been made pretty hard to run normal software.


https://www.theverge.com/2020/6/24/21302213/apple-silicon-ma...

“We’re not direct booting an alternate operating system,” says Craig Federighi, Apple’s senior vice president of software engineering.


Well, there you go.


> and you can’t run Windows or something else on ‘Apple Silicon’.

There's no reason to believe this is true.


I see at least two: boot loader and GPU drivers. Who's going to write them?


There is plenty of precedent.

Apple iPhones run iOS, only.

Apple computers run macOS and Windows.

Thing is - which precedent will win?



Apple has already confirmed that they have no plans on restricting the ability of third party operating systems to run on Apple Silicon based notebooks.


https://www.theverge.com/2020/6/24/21302213/apple-silicon-ma...

“We’re not direct booting an alternate operating system,” says Craig Federighi, Apple’s senior vice president of software engineering.

Apple has already confirmed they have no plans to allow third party operating systems to run on Apple Silicon based notebooks.


That’s not really a helpful statement. There are so many coprocessors that we need them to actively release the details for those.. not just say they won’t oppose it.

The former requires their support. The latter just says they won't oppose reverse engineering. The latter is far more difficult and will likely mean any OS will be buggy and flaky. Unless Apple comes out and says they will actively support 3rd parties, it's just going to be a shit show with no good alternative OS options.


Is there a reference for this? Running Windows, particularly with the 64-bit x86 emulation announcement, would go a long way to making purchases more palatable.


> which doesn’t bode well for Intel once Apple releases its “Apple Silicon” Macbooks.

How so? Apple's market share is something like 10% so it's a minority of Intel's market.


Beyond what others have mentioned, we also mustn’t discount the possibility that Apple Silicon could be a strategic move to lower device MSRP, thus increasing market share and Apple services revenue.

People will instinctively write off this idea, but this is the new pricing strategy Apple has been employing with the $329 entry level iPad, $399 iPhone SE, and the new $279 Apple Watch SE. The Macintosh now remains Apple’s only major consumer product line that hasn’t seen aggressive price reductions to make Apple services accessible to a broader range of consumers. The move to Apple Silicon, which could save Apple potentially hundreds of dollars per device, is the perfect time to move the Mac to this pricing strategy. If this happens, it would absolutely eat into Intel’s consumer market share.


Not by much unless they come with Windows as default OS.


Eh, every chip company has lower-end models that are cheaper. There's nothing stopping Apple from putting i3s in Macs.


The point is that apple's models would not be lower end, just cheaper.


But what if Apple offers a Macbook with i7 performance and i3 price?


Typical Apple - undercutting other companies on price \s


I know you're joking, but looking at some Geekbench scores there doesn't seem to be a single Android phone on the market that can outperform the iPhone SE in single-core performance. The OnePlus 8 is the closest, but still not really that close.

iPhone SE: 1321

OnePlus 8: 898

On multi-core it's a little better since there seem to be seven Android phones that can outperform the iPhone SE:

iPhone SE: 2737

OnePlus 8: 3281

OnePlus 8 Pro: 3216

Samsung Galaxy S20 Ultra 5G: 3107

Samsung Galaxy S20+ 5G: 3102

Samsung Galaxy S20 5G: 3078

Huawei Mate 30 Pro 5G: 2918

Huawei Mate 30 Pro: 2835

However, all of those Android phones cost quite a bit more than the SE.

https://browser.geekbench.com/ios-benchmarks

https://browser.geekbench.com/android-benchmarks


They do in the mac mini.


With low-end branding and possibly sub-par performance.


Because Apple is going to prove TSMC is capable of building x86 destroying processors. Not in theory. Not academic or institutional one offs... no, it’ll be mass-produced and in the hands of consumers.

Intel’s failure to get their fabs running has put them into a death spiral unless they pull off a miracle.

Also, if Apple's ARM chips are really that fast they will be purchased by the shipload to be sent to performance-critical operations like HFT. They will be put into servers or turned into them to squeeze and eke out every possible advantage... which will shit on Intel's most profitable lines (Xeons for single threaded workloads).

It’s gonna be a crazy time.


There is no scenario in which Apple's ARM chips will be sold to 3rd parties.


Why not? Apple used to sell servers. It could easily sell its chips to Azure or GCP to compete with Amazon’s Graviton chips.

It’ll help Apple cement their dev tools as industry standard, it’ll further amortize overall dev costs by increasing volumes, people can develop better algorithms with their custom hardware accelerators for ML and etc, it’s a great way to fight the current anti-trust cases against Apple.

There are some pretty good reasons for Apple to sell their chips to 3rd parties.


They could probably make money there, but it would take time and energy to do, doesn’t seem like it helps their brand, and seems generally a bit afield for them.

Then again, they started their own TV studio, so what do I know.


> It’ll help Apple cement their dev tools as industry standard...

They're just ARM chips. So long as Amazon otherwise plans to build their own ARMs, there's no real benefit. Apple is looking out for #1.


Apple has a similarly onerous history of monopoly action - watch what you wish for, you may get it.


The situation is a bit different though. If Apple's plan was to produce processors for the likes of HP, Dell, etc, then maybe that would be an issue. But I highly doubt they will move into those markets. It's AMD that's poised to take over that space.


x86/x64 is such a terrible bloated messy instruction set. It really needs to die.

I'd like to see the ARM16 and ARM32 variants die too, as they are also bad.


Apple haven't ever been really interested in or committed to non-consumer markets.


“Haven’t ever” is simply not true. Apple has had plenty of historical server products, including dedicated rack servers with the Xserve line. That’s not to say it’s likely but it’s not unprecedented ;P


I don't think they were really interested in the market honestly. They did it because they felt they needed to, to support use cases like CI for macOS and iOS apps. It was all about supporting the Apple developer community. They cut it as soon as it wasn't necessary, IMO.


> They did it because they felt the needed to, to support use cases like CI for macOS and iOS apps

Actually, the Xserve platform really had nothing to do with that. Xserve’s were partly designed and built for the high end video production industry (as an extension of the Mac Pro hardware) and partly as a general purpose small to medium size business file server / web server (which the Mac Mini has subsumed what’s left of that space).


And they scrapped them. That's exactly my point. They were not committed: they made an uncompetitive product for a few years and bailed instead of trying harder. Compare to the Mac.


Yes, yet the cloud changes that, as the naturally non-consumer thing - cloud datacenters - is needed by Apple itself to serve its consumer-oriented, cloud-based functionality. So it may happen that the major money savings from their own chips come not on the consumer devices but on Apple's datacenter/cloud costs/density/efficiency/etc., and improvement on those metrics could also enable and push Apple further into the cloud business.


Apple is a trendsetter, though: I think, as a rule, Apple’s decisions are copied by a large part of the market so its market share isn’t an adequate measurement of its influence. This will be especially true if Apple Silicon MacBooks are impressive in some way, like extra-long battery life.


Will manufacturers start to demand Windows on ARM? ( WARM? )


Windows on ARM only runs on lame processors like 8cx/SQ1/SQ2.

Apple Silicon may beat Intel/AMD and Apple Silicon is ARM, but that doesn't mean that ARM is beating Intel/AMD.


Yes. Microsoft is happy to oblige and Qualcomm seems to be ramping up their Snapdragon 8cx program, with it in a few recently released devices.


https://community.arm.com/developer/ip-products/processors/b... They are making some.

I hope bootcamp will be updated for the new macbooks eventually. I don't expect it to be out on day 1 but maybe a couple months after release?


Absolutely, better performance per watt means longer battery life. Windows already supports ARM. Microsoft doesn't really have a choice here but to improve support for ARM. Office on Mac looks like it is going to be ready for Apple Silicon; if that effort also helps Office on Windows on ARM, then they are halfway there.


Is something that can be condensed down to a number really a "story"?


This article focuses on PC CPUs too much. Yes, they're lagging there, and they're lagging on the fab side, too, but their big mistake was failing to get into the mobile CPU market, even if it meant making ARM CPUs. Now they're stuck making PC and server CPUs, while ARM might even be taking over those markets.


If Windows on ARM ever goes mainstream, Intel will find themselves unable to compete on consumer PCs as well. Their inability to compete on mobile could decimate their consumer PC business. But only time will tell.


Success of Windows and gaming on ARM64 will absolutely put Intel literally out of business. edit: Assuming Linux servers follow suit.


Is the server business so big that Intel can survive on just server chips?


Not just that, AWS offers AMD x64 and ARM CPUs now.


Yeah but really they never had a chance in the mobile market. No way. But they owned the PC market and blew it by improving nothing from 2010 through 2018. That's much less forgivable.


> Yeah but really they never had a chance in the mobile market.

You're saying that if Intel had used their leading edge semiconductor fabs to make ARM CPUs that were exceptionally good, they would not have stood a chance in mobile? I disagree. Qualcomm would have faced serious challenges keeping up with these hypothetical Intel ARM processors back in the early 2010s. They still might, but Intel hasn't even tried.

Intel was just too proud to do something like that. They wanted x86 to become the smartphone standard, which it obviously didn't... and now they're suffering defeat on all fronts.


It would also require a different type of design flexibility.

The mainstream Intel CPU line is relatively limited in variations. Yeah, we have different core counts and presence of SMT or iGPU, but you can get there with basically a single die design, or a small family of them, and blowing some fuses after binning.

Mobile SoCs tended to be a lot more bespoke. The same basic CPU block might need to be paired with different modems, and physical space and production costs probably don't accommodate doing the "low-end by fusing off bits of big higher-end chips" model.

As I understand it, Intel wasn't fond of that sort of product diversity-- the Atom they developed for mobile devices was much more a take-it-or-leave-it proposition.

I also suspect the concept of being an external single source was unappetizing for buyers. Samsung has the choice of buying Snapdragons or making their own Exynos parts; low end manufacturers can cross-shop MediaTek or Allwinner. If you're too reliant on a uniquely Intel design, what happens if they have a production kink and you have a million phones awaiting processors?


> Qualcomm would have faced serious challenges keeping up with these hypothetical Intel ARM processors back in the early 2010s

Qualcomm's success on mobile market has nothing to do with their processors but everything to do with them exploiting their "FRAND" patents.


Mobile is dominated by cost. Qualcomm, Samsung, AMD have all tried their hand at good ARM processors but the perf gains and costs have never been good enough to sell mass market devices. Apple is the only one remaining, and they spend a lot per chip... it would be about as expensive as an Intel chip, if not more expensive, if sold separately.

ARM is not a panacea.


> They wanted x86 to become the smartphone standard

Or that the future was Netbooks with Atom CPUs. But yes, they bet very wrong and didn't hedge that bet. I think Intel was even an ARM licensee at the time, and I'm sure ARM would have been happy to take more money from them.


Intel still has plenty of margin to buy the market with, and their own fabs. They can keep a good portion of the server market until someone makes the cost to switch low enough to be attractive.

That said, I think they blew it in the cellphone market, and now this has come back to haunt them. They aren't competitive in the broader market, and moreover the cost to switch away from ARM there is high. AMD I think is a near-term issue in the server market, but ARM is the real issue.


Intel’s previous acquisitions have been pretty bad.

It doesn’t matter how deep your pockets are if you’re unable to successfully integrate or manage what you buy, and this speaks to management/organizational dysfunction, which money also doesn’t easily fix.


Re: ARM - spot on. They sold their XScale/StrongARM assets right around the same time Apple switched to them. They were probably riding high on that win.

However, market growth switched so quickly from PC to mobile/IoT, they couldn't get in. Instead, they just wasted resources on projects like Moblin which became boondoggles.


In general, the elephant in the room is: what if Apple outperforms both Intel and AMD with their Apple Silicon? ARM is an efficient architecture and they have a process node advantage for 2021. If they wanted, they probably could make very convincing server CPUs. If they want to move the Mac Pro to ARM as announced, they would need server-class CPUs anyway.


> what if Apple outperforms both Intel and AMD with their Apple Silicon?

It wouldn't matter at all to either AMD or Intel until Apple starts supplying these chips to other OEM's. Till then, it will just be used in the walled garden of Apple products, which is the very thing that makes it very unattractive for many users.


The Mac isn't a "walled garden". MacOS is a Unix. Losing Apple as a CPU customer adds another dent to Intel's revenue. On its own, it wouldn't be decisive, but in a time of sinking market share, you don't want this to happen.

Also, it will be interesting to see how Apple's market share develops, as AS could make the Macs much more attractive - both on performance and price and, of course, because of the iOS compatibility.


>Also, it will be interesting to see how Apple's market share develops, as AS could make the Macs much more attractive - both on performance and price and, of course, because of the iOS compatibility.

Apple's pricing strategy over the past few years has been to reduce the entry-level costs of their devices thus increasing their marketshare, ecosystem lock-in and services revenue. It's the strategy they've employed with the entry level iPad, iPhone SE and Apple Watch SE, and I fully expect them to bring to same strategy to the Mac, now that they no longer have to pay Intel's profit margins.

We'll very likely see a MacBook in the $700 price range in the upcoming months. That would put the price of Apple's entry-level MacBook right around that of an average notebook computer in the United States, while providing far better performance than its competitors in that price range (see A14 benchmarks). This would naturally give Apple a big marketshare boost in the consumer notebook space.

Ultimately nobody but Apple knows exactly what their pricing strategy will be with Apple Silicon based notebooks. However, if I were competing with Apple in this space, I'd be tremendously concerned.


Were I in Apple's shoes, I'd be making a laptop that's cheap enough that everything cheaper is a bad computer. I'd want half of all freshmen to get that laptop. $700 sounds about right, $600 would nail it.


> The Mac isn't a "walled garden".

On the software side you have Gatekeeper and SIP. On the hardware side you have T2, soldered RAM, their weird custom SSDs, and their lawsuits against hackintosh manufacturers.

Yes, there are still ways around some of them, but it's pretty obvious where they want things to go.


Gatekeeper has been around for 10 years and every year people have predicted you soon wouldn't be able to install software outside of the App Store. It never came to pass. As a development machine, it just wouldn't work.


The frog needs to be boiled slowly. Very slowly.


No. For a general purpose machine, especially for developers, you need the ability to access your files freely and to run any program. If that is no longer given, all developers and most users would drop the platform. Also, it completely doesn't make sense as long as you allow virtualisation. Which Apple not only does, they even demoed this capabiltiy in the keynote.


Lawsuits against hackintosh manufacturers seem fair enough to me.


I think that’s a short term take. Apple may be able to reduce the price of their laptops by hundreds of dollars, offer better performance, offer better battery life, offer more desirable form factors, and maintain their margins.

It seems unlikely to me that PC OEMs will just stand by and take it. Windows on ARM already exists and this may be the spark that really ignites that fire.


That is my hope too. So far ARM on the desktop/laptop has lagged because of the lack of both suitable hardware and software. There are a few interesting Windows-on-ARM laptops around - I had been eyeing the Galaxy Book S - but they lag behind as there is very little native software for them. Which is a chicken-and-egg problem. As long as the number of ARM machines is low, software companies save money by ignoring them. And that means the numbers will stay low.

With Apple Silicon, the game changes a bit. Not only is there the prospect of the AS being really fast, as Apple is going to make a complete transition, software companies have to support AS, if they want to continue to sell to Mac users. And suddenly, any Linux user also has a great ARM platform to work on, via a VM running on the Mac.

This might even have a direct impact on the Windows software offerings. So far, they could sell to Mac users running an x86 VM. This will no longer be possible. As much as Mac users contributed to Windows vendors' revenue, they have to either give up the Mac market, make a native Mac application, or hope that Windows on ARM becomes available in a VM on the Mac. But then, they at least need to fully support Windows on ARM.


I suspect it’s mostly price that makes it unattractive to users. Worrying about the walled garden is a fringe concern.


They have almost 0 computer market share in developing markets, nobody knows how to use apple, nobody knows their stack and their programming languages, all the infrastructure is built around windows.

With smartphones it's not such big a problem cause you don't do anything productive on smartphones anyway.

With PC/laptops it's a big disadvantage I don't see changing in a few years just because of small performance advantages, even if the prices weren't an issue.


This depends on the use case. For phones and consumer applications, the walled garden is often tolerable. But imagine trying to run a cloud service, a FinTech system, a factory, an ERP system, etc in the walled garden.


The Mac isn't a walled garden, why should servers with Apple Silicon be?


Not as long as Windows is not a default OS.


If, and it's a big if, Apple wanted to build "Darwin in the cloud", they could make the Big Three into the Big Four pretty quick.

Those enormous piles of cash could build out a world-class service that would be attractive both to developers servicing iOS applications, and just to developers, period, a lot of whom use Macs.

That would indirectly take a chunk out of Intel's server dominance, and might get the other players thinking about whether Intel is the best platform for them to stick with as well.


> > what if Apple outperforms both Intel and AMD with their Apple Silicon?

> It wouldn't matter at all to either AMD or Intel until Apple starts supplying these chips to other OEM's. Till then, it will just be used in the walled garden of Apple products, which is the very thing that makes it very unattractive for many users.

Well, there is another possible path: if Apple Silicon is seen as a PoC, someone else could decide to pursue the server market with an Apple Silicon inspired ARM-based offering. Perhaps aimed at FaaS deployments or something similar.


There are already ARM-based servers available in AWS. The big problem is: almost no one has an ARM desktop machine, which is where the software development happens. Linus once commented that ARM doesn't sell so well on the server because developers are lacking desktop machines with that architecture. Apple Silicon could close that gap. You can develop with a Linux VM on AS, and deploy on ARM in the cloud.


Around the first .com wave, I used to develop on Windows (C and C++) and deploy on SPARC/PowerPC/PA-RISC servers.

The only x86 servers were running a mix of NT 4.0 and Windows 2000.

Apparently it worked.


Of course it works. Otherwise there would be no ARM on the servers. But it is second class compared to a setup where the development and deployment happen on the same platform. Having good ARM machines available on the desktop will give ARM on the server a boost. As I wrote, don't take my word for it, listen to what Linus had to say on that topic.


Ironically UNIX is exactly one kind of platform where remote development is first class.

One just needs to properly configure mount points and remote sessions.

There was hardly any difference between my X sessions and local development.


Sorry, in general that is not true. You need a very good network connection, in both bandwidth and latency, to make remote development workable. Still, it never equals local development. If you follow the discussions here on Hacker News about which terminal software has the smallest latency, remote development can never compete with that.


Apparently having UNIX servers on premises is a forgotten art.


It is. Even if companies have their own hardware, it is often in separate compute centers. And of course anything on AWS or Azure is not on premise.


Not everyone is FAANG, or pretends to be one.


Nobody cares about the server market; literally, if it runs on an abacus fast enough, it's OK.

Developers! DEvelopers! Developers!

ARM should be in the consumer space first: millions of laptops for developers running Windows and Linux.


With the cloud and managed runtimes (or stuff like POSIX), I don't care one second what CPUs are running there (beyond endianness and using the right widths). I never hand-wrote Assembly for such kinds of deployments and the optimizer was good enough for our workloads.

The only issue would be libraries available only in binary form for AOT compiled languages.


someone else could decide to pursue the server market with an Apple Silicon inspired ARM-based offering

In other words Nuvia.


I could see Apple come out with their own in-house Cloud Computing platform within the next 2 years.

  - They would be vertically integrated with Apple Silicon
  
  - Apple themselves are increasingly depending on cloud services

  - Async/Await in Swift will likely land next year, making Server-Side Swift much more appealing

  - Apple is greatly increasing cloud / Kubernetes hires

  - Could share a single Apple Silicon ARM architecture from client(ios) to dev(mac) to cloud


Unless something has changed radically since MacOS Server failed, Apple would either need to do massive software engineering of a sort that they are unfamiliar with or they would need to use a different OS entirely to have a credible cloud offering. Mac OS is unlikely to be able to compete with Linux or FreeBSD in the near future even if the CPU is magically better.


They could have a very bare-bones macOS-based server operating system specialised for running virtual machines. Don't they have their own xhyve virtualisation layer?


Federighi mentioned during some interviews after the Big Sur presentation, when asked about Linux on AS Macs, that macOS is a great hypervisor for virtualisation. Running Linux in the cloud would be equally easy.


A lot of the prelim work is probably already being done with their docker optimizations.


Apple’s in-house cloud stuff is mostly Java on commodity Linux servers. I don’t think they are looking to have macOS compete with Linux.

Most of the special sauce from GCP and AWS is the integrated software services anyway.


> Apple’s in-house cloud stuff is mostly Java

Can you give some more details? I am not really familiar with the Apple ecosystem.

But wouldn't most of it (storage, web, streaming) be based on commodity C/C++ projects?


It wouldn’t have to be exclusively macOS. You could likely choose your OS or it could be some kind of containerization.


What happened when Apple outperformed Intel and AMD with PowerPC? People kept buying Windows PCs.

What if Apple outperforms both Intel and AMD with their Apple Silicon? People will keep buying Windows and ChromeOS PCs.

AWS is testing the server market right now with Graviton2, basically a drop-in replacement for x86 that's faster and cheaper. It's going to be fascinating to watch and I hope AWS reports architecture usage share over time.


Is Graviton2 really faster? I don't remember the exact details, but didn't they still have a (long) way to go until reaching Intel/AMD performance? If I remember correctly they are currently only interesting because of their price.



From running my ASP.NET Core API on Intel vs Graviton2, Graviton turned out to be roughly 15% faster. But the SSR of Vue turned out to be roughly 10% slower.


That is almost 20 years in the past; many things have changed. Most importantly, the iPhone has happened. I am not claiming that Apple completely takes over the PC market, just that their market share could rise considerably. Having the better processors and iPhone compatibility could do the trick.


Apple didn't make PowerPC chips. That's a huge difference.

First, Apple wasn't a supplier. If Silicon takes off, they could become a supplier. Second, the reason they're switching to Silicon is the same reason they switched from IBM to Intel - the chip suppliers didn't react to the changing consumer landscape.

By making consumer devices, Apple is much more in tune with that landscape than IBM/Intel is. They can adapt chips much better and won't be handcuffed by chip suppliers in the future.


Moving to their own chips also allows for much more aggressive pricing.


The arm MacBooks will mean developing ARM code for your server is easier than x86.

I highly suspect we’ll see a switch from intel to arm in the server space in next 5-10 years.

The cloud vendors have so much to gain by building their own arm chips.


I wonder whether Intel/AMD will start offering x86_64 licensing if AArch64 takes off on servers in a major way. You'd think that they would prefer to have competitors on their own architecture/terms than something out of their control.


I wonder how much easier? Could devs use the Rosetta emulator to do x86 stuff on the ARM Macs anyway?


Nvidia bought ARM so it would be interesting to see where this goes.


> and that's the problem Intel is facing. It's just a matter of time when developers will make use of higher core numbers, and distributing the workload across more cores will also reduce the power consumption of the CPU, which is essential for light and portable laptops.

I'm not convinced; parallelism is not inevitable for individual applications - it can be hard or simply not applicable. I understand there are certain CPU-intensive programs where the benefit is too great to ignore, but for the rest it still seems like it's not worth the developer's time.


This isn't 2010, using multiple cores is standard practice outside of mindless stuff like webdev.


What makes you think web dev is mindless or cannot benefit from multiple cores?


I can't wait for hardware to have thousands of cores by default. In a way, if you program for iOS, that's already the case if you can target Apple Metal with your problem successfully. Many-cores is certainly worth developer's time. It's just that many developers just don't have the education to make use of it. And software development itself is not good enough right now to make consistent and easy use of it.


Concurrency, as they say, is a bitch. I would hate to program against that model.

The best model I've found so far is cooperative multithreading, that is, coroutines that explicitly yield control, combined with well-isolated worker threads and a rendezvous mechanism for passing parameters and getting results back. I can write code all day long in this model and not run into very hard to find concurrency issues . . . or at least, not as many. I've written a ton of code in the "every thread for itself, have fun with locking" universe, and man, life is just too short for that stuff.
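
To make that concrete, here's a minimal sketch of the model in Python (nothing from a real codebase, names made up): asyncio coroutines that yield control explicitly, a small pool of well-isolated worker threads, and the awaited future as the rendezvous.

    import asyncio
    from concurrent.futures import ThreadPoolExecutor

    workers = ThreadPoolExecutor(max_workers=4)   # well-isolated worker threads

    def crunch(data):
        # runs on a worker thread; shares nothing with the coroutines
        return sum(x * x for x in data)

    async def handle(job):
        loop = asyncio.get_running_loop()
        # explicit yield point: the coroutine suspends here, and the
        # rendezvous is the await on the worker's future
        return await loop.run_in_executor(workers, crunch, job)

    async def main():
        jobs = [range(n) for n in (10, 100, 1000)]
        print(await asyncio.gather(*(handle(j) for j in jobs)))

    asyncio.run(main())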

Naturally you run out of processor headroom in this model because you're essentially single-threaded; message passing between processes and running multiple services, one in each process, is one good approach to scaling. Often you can scale out to the network, too, and with 1K cores you need to start worrying about the blast radius of your services when things eventually go sideways, so getting off the box is a good idea. And things will go sideways.

I can see a thousand cores being useful for embarrassingly parallel things, like classic graphics algorithms where you're doing mostly SIMD with a few variations. But I can't imagine wrangling a zillion cores to do "random logic" type applications where you have to use traditional low-level synchronization primitives. See above: life is short.

Feel free to color me a dinosaur. I've only got another 20 years of my career to figure this out :-)


Which is why originally WinRT had only asynchronous APIs (and Android as well), because Microsoft and Google both wanted to push developers to do the right thing and not code single threaded UIs.

On WinRT, with how everything turned out with UWP, now as per the Reunion reboot they are supporting both models again, as that was one of the reasons why many devs did not jump onto the UWP app model.

On the Android side, they eventually came up with WorkManager and Concurrent. Also for those in the Kotlin train, co-routines.

As for which models are best, I used to like doing thread-based stuff, but came to enjoy using good old processes instead.

Security and stability issues have proven that having a process sandbox, with OS-level IPC, is a much better way to do parallel/concurrent programming, and processes don't even all have to run on the same computer, so one can scale across a cluster as well.
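
For illustration, a toy sketch of that style in Python (not tied to any of the frameworks above): plain OS processes with queues as the IPC; swap the queues for sockets and the same workers can live on other machines.

    from multiprocessing import Process, Queue

    def worker(jobs, results):
        # each worker is its own OS process: separate address space, so
        # crashes and leaks stay inside the sandbox
        for item in iter(jobs.get, None):      # None is the shutdown signal
            results.put((item, item * item))

    if __name__ == "__main__":
        jobs, results = Queue(), Queue()
        procs = [Process(target=worker, args=(jobs, results)) for _ in range(4)]
        for p in procs:
            p.start()
        for n in range(20):
            jobs.put(n)
        for _ in procs:
            jobs.put(None)                     # one stop signal per worker
        print(sorted(results.get() for _ in range(20)))
        for p in procs:
            p.join()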


I mostly agree with you. It doesn't help that software development is so archaic right now. I would like to express just what is necessary for a certain problem, and not address the irrelevant aspects. For example, if I want to express an embarrassingly parallel algorithm, which does something embarrassingly simple, why do I have to drop down to C++11 to get it to run fast on iOS / Apple Metal? Something is needed that is a superset of Mathematica and Isabelle to remove clutter. It is that clutter that makes parallel programming a mess.


Politics mostly. In 2000 no one would have believed what JavaScript JITs are capable of, but without the willingness to spend the necessary resources to make it happen, it is hard to prove it is possible.

So most people just hand-wave such ideas away as daydreaming and go back to their tools as they always did.


Intel has an internal systemic problem. The company is run for shareholders, and it is run by sales, marketing and HR teams. The engineers have become second-class citizens: hardware engineers first, and at the bottom of the pile, software engineers. When an engineering company is run by a sales-marketing-HR combo, whatever you do, the outcomes are not going to get better. Intel is surviving because of the achievements of its founders. They are still reaping the benefits of that. The question is for how long!


Was this written by GPT2 or something?

All the sentences are grammatical, but they didn't say anything


Intel still has an edge in single-core performance for laptop CPUs, and I agree that for a laptop that matters more than multicore due to the sorry state of multithreading in most apps.

That said, the cash cow is servers, and the AMD advantage in cores, RAM and I/O is overwhelming except for a very few niches that care about single-thread performance (high frequency trading is probably most of it).


Peak performance doesn't matter in laptops, due to heat. Performance per watt matters.


Exactly, and Intel has been all about peak/short-burst performance over the past 5-6 generations, if not longer, just so it can keep winning benchmarks.

Now that led them to a new microarchitecture that will likely suck on servers and laptops, due to the high power usage, and might not even win on the desktop compared to Zen 4 CPUs (Alder Lake will mostly compete with Zen 4, not Zen 3).


I dunno, I think my kid's 4900U laptop is the single core king. Right?


No, Tiger Lake is faster (sometimes a little, sometimes a lot). From a sibling comment: https://www.anandtech.com/show/16084/intel-tiger-lake-review...


That compares to 4800U, not 4900U GP mentioned...


Unless the 4900U is twice as fast as the 4800U (it's not), Intel's stuff is still faster.


It is, but Intel still manufactures in larger volumes by far.


Having a worse product in higher volume is hardly an argument that Intel had an edge. If anything it's evidence that they are crashing faster than supply chains can shift.


> Microsoft Excel, PowerPoint, or Chrome run mostly single-core and are everyday consumer tools

As a chromium developer, I'm pretty sure Chrome has been leveraging multi-core for a long time


Excel is also thoroughly multi-core and has been for many years. The formula dependency engine in Excel allows them to chop up workloads efficiently across the cores.
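
Not Excel's actual engine, but a small Go sketch (with made-up formulas) of why a dependency graph exposes parallelism: once the input cells are computed, formulas that don't depend on each other can be evaluated on different cores at the same time.

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        // "Wave 1": cells with no dependencies.
        a1, a2 := 2.0, 3.0

        // "Wave 2": B1 (=A1*10) and B2 (=A2+100) depend only on the A cells,
        // not on each other, so they can run concurrently. A real engine walks
        // the dependency graph to find such independent waves automatically.
        var b1, b2 float64
        var wg sync.WaitGroup
        wg.Add(2)
        go func() { defer wg.Done(); b1 = a1 * 10 }()
        go func() { defer wg.Done(); b2 = a2 + 100 }()
        wg.Wait()

        fmt.Println(b1, b2) // 20 103
    }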


Chromium utilizes multiple threads, but sometimes I encounter single-threaded bottlenecks.


Don't they have a process-per-tab architecture, so that a bad tab won't crash the whole of Chrome?


This is what happens when you put a sales/marketing guy in charge as the CEO.


> Microsoft Excel, PowerPoint, or Chrome run mostly single-core and are everyday consumer tools.

Chrome seems very much not to run single-core; from what I've read of its process/threading architecture, it is both aggressively multiprocess and its processes aggressively multithreaded, with work pushed out as widely as practical, using as many cores as it can, with a strategy around keeping the UI thread and each process's IO thread as responsive as possible.
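
Not Chromium's actual code, but the shape of that pattern in a short Go sketch: heavy work is handed off to other goroutines, and only small finished results get posted back to the single loop that owns the UI state, so it stays responsive.

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        uiQueue := make(chan string, 16) // completed results posted back to the "UI thread"

        // Heavy work (decoding, layout, network) runs off the UI loop.
        for i := 0; i < 3; i++ {
            go func(id int) {
                time.Sleep(50 * time.Millisecond) // pretend this is expensive
                uiQueue <- fmt.Sprintf("resource %d ready", id)
            }(i)
        }

        // The UI loop only handles short tasks, so it never blocks on the heavy work.
        for i := 0; i < 3; i++ {
            fmt.Println("ui:", <-uiQueue)
        }
    }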


I think the aggressively multiprocess bit is about isolation rather than parallelism (background tabs should be mainly idle). On multi-threaded rendering, isn't that what Servo was trying to do? Not sure if Chrome has done much there.


Can someone help me understand around what time did performance per watt become more important than performance per dollar?


It started with the Pentium 4. The tradeoff of power/heat for the increased clock speed stopped paying off. In fairness, Apple/IBM/Motorola also learned this lesson with the G5 but they were minor players in the market and were largely ignored. So it was roughly between 2005 and 2008 when perf/watt became a thing. I think public awareness of it had a lot to do with Steve Jobs relentlessly pushing perf/watt to help explain the PPC to Intel transition as Apple was just starting its public awareness rocket ride thanks to the iPod taking off.


Around Pentium 4 and Centrino, so somewhere around 2002 - 2003 ish.

The initial projection was that Intel could scale their transistors down indefinitely, that we could somehow cool the Pentium 4 effectively, and management thought clock speed sells, so some day they could scale to a 10GHz CPU.

That was when they hit the TDP ceiling, internally dubbed the "thermal wall" by Intel CTO Patrick Gelsinger. Before the Pentium 4, Intel had seen the rise of the laptop computer, where energy efficiency was key. And the Pentium M was a side project from (cough) Patrick Gelsinger, which ultimately saved Intel from the Pentium 4 fiasco.

The market might have unlimited dollars for performance, but it is still limited by TDP.


With work-from-home/COVID lockdown I am more and more interested in going back to a desktop computer (vs. a MacBook Pro). In a desktop, performance per watt effectively doesn't matter. Similarly with a Chromebook as a portal to servers/cloud resources (e.g. run your IDE in a cloud instance or something)...

I wonder if performance per dollar will ever come back?


>In a desktop performance per watt effectively doesn't matter.

Unfortunately it does, just in different sorts of ways. If you want 64 cores and you have a max ~250W TDP CPU (assuming no fancy cooling), you are effectively limited to less than 4W per core, and even less once you exclude memory controllers and interconnect. You could argue you only want 32 or even 16 cores, but scaling up the clock speed curve beyond its optimal point means power increases much faster than linearly, i.e. you are still hitting the same wall.
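
A back-of-the-envelope version of that budget in Go (the 250W and 64-core figures are from the comment above; the cube-of-frequency scaling is a rough textbook model of dynamic power, not a measurement):

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        const tdp, cores = 250.0, 64.0
        fmt.Printf("budget per core: %.1f W\n", tdp/cores) // ~3.9 W, before uncore/IO

        // Dynamic power is roughly C*V^2*f, and V has to rise roughly with f,
        // so P scales about as f^3. Halving the core count at the same total
        // power therefore only buys about 2^(1/3) = 1.26x more clock per core:
        // the same wall.
        fmt.Printf("clock headroom from halving cores: ~%.2fx\n", math.Cbrt(2))
    }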

And I should note that with anything per dollar, the question isn't really technical per se, but a function of the market. AMD is pretty good at performance per dollar right now, and will be more so once they announce Zen 3.

Some day if I could be bothered I should write a blog post on it.


I'd say somewhere in the 2005 ballpark.

Performance per watt matters for mobile, obviously, but also for "hyperscalers". If you plug a chip in and run it at full tilt for five years, the cost of the electricity eclipses the cost of the chip quite handily.
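
Rough numbers behind that claim (the 250W figure, the $0.12/kWh price and the 1.5 PUE are my assumptions, not the commenter's):

    package main

    import "fmt"

    func main() {
        const (
            powerKW       = 0.25          // a 250 W server CPU run flat out
            hours         = 24 * 365 * 5  // five years
            dollarsPerKWh = 0.12
            pue           = 1.5           // facility overhead (cooling etc.)
        )
        energy := powerKW * hours * pue
        fmt.Printf("energy: %.0f kWh, electricity: $%.0f\n", energy, energy*dollarsPerKWh)
        // ~16,400 kWh and roughly $2,000: in the same ballpark as, or more than,
        // the street price of many server CPUs.
    }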


Both have always been important in different contexts.


laptops


2007


I started coding on Android with a physical keyboard, and the main problem so far is bugs that don't always allow me to switch windows with Alt-Tab.


I bet Intel make a comeback, it’s just a matter of when. Maybe they’ll decide to sideline ARM and make RISC-V cores in a few years, would be interesting to see them make a mobile part designed for phones.


> would be interesting to see them make a mobile part designed for phones.

I worked on a well-funded effort to build a top-of-the-line mobile AP (the Broxton half of these [two cancelled programs]). It was a shit show from start to finish. There is so much mismatch between Intel's strengths and what the industry demands that I don't see Intel being successful in this domain if they make another attempt. Things would be worse now than in that attempt, as Intel has suffered significant brain-drain since then.

[two cancelled programs]:https://www.anandtech.com/show/10288/intel-broxton-sofia-sma...


Intel has been behind on process before and come out fine. They've also been ahead on process and technology, and at those times they should have been able to capitalize on it to get into mobile and GPUs early on, I feel. They couldn't/didn't do it.

I was eagerly awaiting Broxton & successors, and now FPGAs and Optane just aren't going to be as big as GPUs and mobile. They have no choice but to keep investing in GPUs and hope for the best in CPUs.


I really don't get why they were trying to build an x86 mobile part; that's just making things very difficult for yourself, no?


x86 works in mobile form factors, as evidenced by the Silvermont-based SoCs that made it into a few phones, including a reasonably successful ZenFone.

The problem I saw was that the design philosophy at Intel was not conducive to fast turnaround, and the mobile market does not wait.


What's to stop Intel from purchasing fab time from TSMC, other than pride?


They are purchasing fab time from TSMC right now because of the failure to produce 10nm for years but both sides are saying that it's only a temporary thing.

Now if you're talking about the future, the biggest hurdle is going to be Apple and AMD, since they got there first and already have contracts for specific amounts of fab time/wafers. Intel coming in now is basically only able to take whatever table scraps are left.


Unless Intel decides to fund the creation of a fab just for themselves with TSMC, but that'd be a much, much more expensive process and would likely mean that they'd be giving up on their own fabs in the not too distant future.


Intel is likely planning to purchase some fab capacity from TSMC already: https://www.cnbc.com/2020/07/27/tsmc-shares-jump-as-intel-fa...

"…U.S. chipmaker Intel said it faces delays for its next-generation chips and could outsource some of the production.…While Intel didn’t name potential third parties, analysts see TSMC as being a contender."


Margins and as many wafers as they want, eventually. If they can get their 7nm process working they will have a lower cost and higher capacity than their competitors since TSMC does exist to make money and other companies want their advanced nodes too.

Also there's a reason Intel's 14nm chips can compete with AMD's 7nm chips. Manufacturing and design working together has advantages of its own.

Finally it seems like they plan to make some chips at TSMC.


The main problem is that the majority of TSMC's capacity is already reserved by Apple, Qualcomm, and AMD, who have been longstanding customers. Another issue is that TSMC is a fab competitor, so they have little incentive to keep Intel competitive. So if you are Intel, you have no real option to go mostly external if you want to maintain the volume you have delivered in the past.


Intel produces military-grade CPUs too, which probably cannot be produced overseas.


In addition to the points raised by others, there would most likely be significant work (dozens of engineers, several months) needed to re-target their designs to TSMC's process, so there are both opportunity costs and actual costs.


Intel already announced that Xe GPUs will be made at "an external fab" (which everyone believes to be TSMC).

https://www.anandtech.com/show/15974/intels-xehpg-gpu-unveil...


[flagged]


I bought a 12 core 3900X. Outside of the very narrow domain of professional tools I built it for, I'd have been better served with a quad core desktop Tiger Lake.

Most things aren't strictly single core any more and you're running multiple apps, but after you pass two cores with HT you've already soaked up most of that extra parallelism for everyday tasks.
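
A quick Amdahl's-law illustration of that point (the 80% parallel fraction is an assumption for everyday desktop workloads, not a measurement):

    package main

    import "fmt"

    // speedup applies Amdahl's law: the serial fraction caps the benefit of more cores.
    func speedup(parallel float64, cores int) float64 {
        return 1 / ((1 - parallel) + parallel/float64(cores))
    }

    func main() {
        for _, n := range []int{1, 2, 4, 8, 12} {
            fmt.Printf("%2d cores: %.2fx\n", n, speedup(0.8, n))
        }
        // 1.00x, 1.67x, 2.50x, 3.33x, 3.75x: most of the win is gone by 4 cores.
    }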

Considering you can get a workstation with 64 cores and 16 is now high end consumer, the single core performance bottleneck is real.

Chrome might spawn a bunch of processes but they aggressively sleep all but your active tabs.


I believe the reasoning goes, Chrome isn't processor limited. Spotify isn't processor limited. Etc...

The need for multi-core performance is rare; I'm potentially writing a program that uses parallel processing for the first time in my life, and the goal is to run it a few times a year. No one would be buying a processor for my sake. They buy computers for the CAD guys: video card first, then RAM, etc.

And in the event that you are doing large-scale performance computing, was Intel a monopoly there?


More to the point, no one buys enthusiast or professional-grade CPUs to run Chrome or Excel. They buy them to run games or do professional compute tasks (video rendering, 3D rendering, photo editing, code compiling, etc.). The companies that work in these spaces have realized that they can't write single-threaded code any more. For example, the two big games I play, Hunt: Showdown (CryEngine) and Modern Warfare (custom engine), are both happy to use all sixteen logical threads of my Ryzen 3700X. Gone are the days when gamers only cared about single-core performance.


> Gone are the days when gamers only cared about single-core performance

Unfortunately that's not true.

I closely follow popular hardware subreddits (which are disproportionately gaming oriented) and read a fair share of comments every day. The old mentality is still prevalent. The limit of multi-core that gamers care about is the number of cores game consoles use. People still happily recommend a 4-core CPU over a 6-core CPU with ever-so-slightly-lower ST performance for new builds.


My comment wasn't meant to say that every person in the world is fully informed. However, this knowledge about multicore scaling is starting to trickle through the community. It is only in the last few years that it has started to be valuable to have more cores for AAA action games, so it is understandable to have outdated knowledge.


>People still happily recommend a 4-core CPU over a 6-core CPU with ever-so-slightly-lower ST performance for new builds.

and that is reasonable if you are focused on the bulk of games available today, and not next year or 5 years from now, and you don't want to do any streaming while gaming.


My whole point was that AAA games coming out today can utilize more than the traditional one or two cores -- more than four even. I'm not sure if it applies to all games but if someone is playing a CPU-intensive game today, the most likely candidate is Warzone, and that thing can put the hurt on eight cores at once. Likely other modern FPSes do also, or will in the very near future.

The only reason to get a new CPU is to play new games, as any decent CPU can run anything older than say three years. If you're buying a CPU for today's games, more cores will tend to help at least among enthusiast CPUs. It is not good advice to forego a large fraction of cores for a small increase in single-thread performance, at least at a given price. Threadrippers are probably not needed though.


Glad I bought AMD years ago at $7 once they had a solid CEO like Su who can execute well.

I have hopes that Swan can kick out the dilettantes. Too much engineering going on at Intel that doesn't translate to actual progress.


Was it Su or was it actually Keller?


It was actually Su. Keller, for all his brilliance, could not turn a company around. After Su, AMD has been very focused and has executed around a plan (marketing insanity notwithstanding).

It's just AMD's good fortune that Intel's consistent stumbling has made their great engineering look even more outstanding in comparison.


Both were required to turn AMD around. Less competent leadership would have squandered the opportunity that Keller gave the company.


Yep, everyone deservedly gives Keller credit for K8, A4, Zen, but Su was behind the Cell, and has now shown her leadership with Zen and RDNA.


Years of ruthless commercial exploitation by Intel, especially against AMD (whom we must praise)... Chipset purchase linkage and all the rest.



