What Transistors Will Look Like at 5nm (semiengineering.com)
101 points by Lind5 on Aug 22, 2016 | 53 comments


>At 5nm, it will cost $500 million or more to design a “reasonably complex SoC,” In comparison, it will cost $271 million to design a 7nm SoC, which is about 9 times the cost for a 28nm planar device, according to Gartner.

So basically with every node improvement, the "design" cost increases by around 50%-100%. This heavily favours Apple's model: you have to sell a lot more SoCs to balance out that $500M design cost for a leading node. Apart from the design cost, does anyone know if the wafer cost increases as well? And at what rate per node?

Thinking about this at Intel's scale: the PC industry is shrinking in units, and CPU die sizes are also getting smaller to protect Intel's profit margin. At what point does the cost increase sway the balance in the other foundries' favour, once Intel's unit sales take longer to recoup the cost than TSMC's?
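To put rough numbers on the break-even point (a minimal sketch: the $500M figure comes from the article, but the per-chip margin below is an assumption I made up for illustration):

    package main

    import "fmt"

    func main() {
        // $500M 5nm design cost from the article; the per-SoC contribution
        // margin is a made-up placeholder, not a real figure.
        nre := 500e6          // design (NRE) cost in USD
        marginPerChip := 10.0 // assumed profit per SoC available to repay the NRE, USD

        fmt.Printf("Units to recover the design cost alone: %.0f million\n",
            nre/marginPerChip/1e6) // 50 million units at $10/chip
    }

At an assumed $10 of margin per chip you'd need on the order of 50 million units before the design cost is even paid back, which is why only the highest-volume SoCs (Apple-style) make sense on the leading node.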


I agree (re: favoring Apple's model). Whether server or mobile, there'll be fewer SKUs, and those that exist need to be blockbuster hits.

It may well be that economics is the limiting factor that ends Intel's long run, not the physical limits themselves.


Intel actually has very few designs. Most of the variety comes from frequency binning and post-test functional unit deactivation as a means of salvaging otherwise defective parts.


Same as with "peak oil" then. Its not that we run out of physically extractable oil, but that the cost of doing so will curtail our current usage of the substance.

It's pretty much the inverse of the Jevons paradox (or maybe everything has a bell curve, and we are so focused on the left half that we forget its mirror on the right).


Actually, people are looking forward to a long period where a certain "20-something" nm node (not Intel's 20-something) becomes so standard that economies of scale make it the one most entities use for their designs, especially designs that aren't mobile or otherwise seriously power constrained.

I gather we're already sort of there, with wafer costs in the $5,000 or so range; the trick, of course, is to limit the NRE to something you can afford. The lowRISC project is going this route; I learned much of this starting from their future production plans.


Wafer cost goes up, just not as fast as transistor count, which is why Moore's law holds: if, say, wafer cost rises 1.5x per node while transistors per wafer double, cost per transistor still drops by about 25%.


The lattice constant for silicon is about half a nanometer (0.543 nm), right? So that means this new "gate" is only about 10 silicon atoms across (5 nm / 0.5 nm ≈ 10)?

That's a very impressive feat. It's hard to imagine CMOS continuing to scale down beyond that point.


Yes. And another problem is that there's such a thing as electron confinement. It's the same effect as in the "particle in a box" solution of the Schrödinger equation where the particle's probability is zero next to an infinitely high potential.

The gate oxide is a very high potential for an electron and therefore the actual current flows close to the center of the wire.

So instead of a diameter of 10 silicon atoms, you now have something like 5 or so where the current flows. Imagine that you place a single atom wrongly in the center of your nanowire. Now you have a scattering target exactly where your first mode (the most populated one) has its maximum. It's effectively like putting a block of concrete on a one-way street. There's no way the electrons could go around the defect. It could completely stop your nanowire from working or at least deteriorate its properties severely.
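For reference, this is just the textbook 1D particle-in-a-box result I'm referring to (a sketch, with the box width L standing in for the wire cross-section):

    \psi_n(x) = \sqrt{\frac{2}{L}} \, \sin\!\left(\frac{n \pi x}{L}\right),
    \qquad E_n = \frac{n^2 \pi^2 \hbar^2}{2 m L^2}

The wavefunction vanishes at the walls (x = 0 and x = L), and the n = 1 mode, the most populated one, peaks at x = L/2, i.e. in the middle of the wire, which is exactly where a misplaced atom hurts the most.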

That's why imec tries to use 3 nanowires per transistor. It'd be quite the showstopper if some of your transistors didn't work at all.


It makes me wonder why we really need such a huge lattice (the silicon bulk) at all. Can't we somehow deposit small pieces of lattice only in the places where a transistor is needed?


There are alternatives, but they are difficult (expensive) to work with and not yet required, so we keep using silicon for the 5nm node.


until peak silicon

EDIT: the analogy with peak oil is not literal; the point is that just as uneconomical sources of oil (and of energy) became more economical when the price went up, the parent's "alternatives" will become more economical when present silicon technology gets more expensive (due to the lattice size, for example, as discussed). Just as with oil, it doesn't necessarily mean we'll give up silicon. Though we might.


Silicon makes up about 28% of the Earth's crust by mass, so I'm not too worried.


What would the remainder be? At this scale there's no such thing as homogeneity; your options are crystal lattices or a lumpy mess of atoms.


Could be anything, as long as it is insulating.

Advantages would be reduced noise coupling between devices, and it would open up the possibility of making multiple layers of transistors.


The problem with this is lack of uniformity. You can have a single perfect crystal lattice that is perfectly uniform over 5 billion transistors, but you cannot deposit small pieces of lattice in 5 billion places without at least one being deformed.


I do not understand the claimed 3X increase in design cost per FinFET node, particularly in the context of digital ICs. Most cell designs are highly repetitive, so the increased design complexity should only add a one-time offset to the total chip cost. Hence, the total design cost should be comparable to previous nodes, i.e. well below 2X. I understand that purely analog designs are a different beast, but it doesn't make any sense to use advanced nodes for those in the first place (ft/fmax drops as a result of higher-than-linearly scaled CGS).


Multi-patterned masks are a big expense in manufacturing, yield, and design/verification. Really, 10nm and below truly need EUV (we used to call it X-ray lithography, but that was already a decade late so they changed the name) to be commercially viable. It brings the mask layers back down to ~60 from ~80 at 10nm... and I assume triple patterning and techniques beyond that at 7nm or 5nm. The mask production and exposure steps balloon without EUV (i.e. sticking with 193nm ArF and crazy-NA immersion tricks), but right now the light sources for EUV aren't bright enough to put wafers through at a reasonable speed. https://en.wikipedia.org/wiki/Next-generation_lithography

Also, when you need almost zero defects and the costs of low yields are so big, a lot (20-30%?) of your budget is spent modeling and verifying your design. Then there is the design of the SoC itself, which only makes sense to do at such geometries if your level of integration is extreme. Once speed and power stopped scaling with geometry (around 55-28nm), the only reason to keep going was integration and $/transistor, but with higher upfront costs you got more to amortize.

Now don't get me wrong, there's good reason to think that 10nm could be a decent node, and finFETs are cool (low leakage and fast). It's just that I suspect multi-wafer stacking is going to give us a big jump too. We may also leave Si behind. But each of these is really a separate jump rather than an incremental one.

The problem is that the obvious treadmill is over. The returns are less certain, and once the money men figure that out, there won't be billions of dollars to spend every year. That's the real end of the semiconductor age.


The "design" cost quoted mostly means mask cost. It gets very expensive to produce for smaller nodes.


> The "design" cost quoted mostly means mask cost.

No, mask costs are only a very small part in development and production of complex ASICs. Some sources:

1) First graph in http://electroiq.com/insights-from-leading-edge/2014/02/gate...

2) First graph in http://www.eetimes.com/author.asp?doc_id=1322021


That's the recurring cost of the mask sets, because masks don't last forever. The setup cost of the first mask set, however, is very high.

As for logic design effort, Verilog is Verilog. Floor-planning and P&R are a bit more complex at smaller nodes, but that's mostly handled by software. Also, TSMC offers you standard cells to drop into your design.

Therefore most of that "design" cost is the foundry making the first set of masks for you, not the incremental mask cost. They're recouping their investment in the process, thus the price is extremely high for newer nodes.


Do you have a source or some links? I would like to know more about this.


It is mostly based on my experience working as an ASIC designer and having taped out several chips.

However, you can consider it logically -- the engineering effort to design the logic, and perform place and route doesn't change much from node to node. You're doing the same work, with the same software; albeit with new libraries provided by your fab.

The cost clearly correlates with smaller nodes because they charge you a ton to do the "setup" for you, i.e. make the first set of masks. Older nodes are now much cheaper than they were because more shops are using them, thus spreading the amortization costs.

AFAIK, a lot of Cortex-M ARM chips (such as the STM32F) are made on 90nm nodes. There, the variety you offer your customer is important, so vendors want lower mask costs to make as many variants as possible. The core itself is so small that going to 28nm wouldn't offer much savings, because the bulk of the cost is in packaging and testing, and at 28nm it would be in the amortized mask cost.
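To make the amortization argument concrete, here is a minimal sketch; every number in it is a placeholder I chose for illustration, not real foundry pricing:

    package main

    import "fmt"

    // perUnitCost spreads a one-time mask/NRE charge over the production
    // volume and adds the recurring per-chip cost (die, packaging, test).
    func perUnitCost(nre, recurring, volume float64) float64 {
        return nre/volume + recurring
    }

    func main() {
        const volume = 10e6 // assumed lifetime volume of one variant
        // Older node: cheap mask set, slightly more expensive die.
        fmt.Printf("90nm: $%.2f per unit\n", perUnitCost(1e6, 1.00, volume))
        // Newer node: pricier mask set, cheaper die.
        fmt.Printf("28nm: $%.2f per unit\n", perUnitCost(5e6, 0.80, volume))
    }

With these (assumed) numbers the 28nm part comes out more expensive per unit ($1.30 vs $1.10), because the bigger amortized mask bill eats the die savings - which is why small, many-variant parts stay on older nodes.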


“My current assumption is that 5nm will happen, but it won’t hit high-volume manufacturing until after 2020,” said Bob Johnson, an analyst at Gartner. “If I were to guess, I’d say 2021 to 2022." ... In R&D, chipmakers are also looking at 3nm and beyond, although it’s unclear if these nodes will ever happen.

What this all seems to be pointing to is that, in all likelihood, the "party" will be over within the next 3 to 6 years. It seems to me that there is an insufficient level of panic in the world over this issue (the imminent end of Moore's Law after 30+ years of non-stop exponential growth). Moore's Law was a massive, earth-shattering, 30+ year continuous exponential curve that we have all been riding. I think the collapse of this curve could have serious implications for more than just chip designers and manufacturers. In my view, some groups that seem over-exposed and under-concerned are the software and startup industries.

For the past 30+ years, the market has seen better/smaller/bigger/faster silicon computing devices arriving with unrelenting regularity. Now that will suddenly stop with one final, ultimate silicon product generation. After which, it sounds like we don't get faster or cheaper CPUs, or bigger CPU caches, or more CPU cores, or more RAM, or better or cheaper GPUs, or bigger or cheaper SSDs, or better/cheaper LED lights or OLED screens, or better/smaller/faster/cheaper smart phones/tablets/digital cameras/sensors/broadband networking, etc? So the whole show just stops, like a high-speed train hitting a wall. And, by all credible accounts, this cataclysmic event is scheduled to occur within the next 3-6 years (barring some unforeseen, miraculous scientific revolution)!?

Once that happens, all computer technology seems destined to a fate similar to many other commoditized, industrial-age technologies. A famous example is the design and performance of commercial aircraft, which has not fundamentally changed in over half a century. Another example is steel manufacturing - nothing about the cost or other performance dimensions of steel has fundamentally changed since the industrial revolution. Steel is now a base commodity, and no one is waiting to upgrade all of their steel structures to some next-generation steel alloy that will justify the massive expense of a complete "hardware upgrade." Other examples include electro-chemical batteries, internal combustion engines, rockets, etc.

If this truly is an end-game within the next 4-6 years, then there will be a few niche, interesting areas in which to eke out performance gains over (let's say) the next 20 to 30 years (e.g. through specialized hardware designs and new software optimizations), but these gains seem very different from the economic miracle of the endless PC/server/smartphone/software upgrade cycle that everyone has become so accustomed to. No such "eke-out optimizations" will be able to compensate for the massive, negative economic impact of Moore's Law ending, considering that our global economy seems to be fundamentally dependent on technology consumption cycles stemming from Moore's Law.

This situation reminds me of the 2008-2009 global financial crisis, where US sub-prime residential real estate prices had started falling in 2006 but everyone kept on ignoring the problem and hoping that it would just go away. It also reminds me of global warming, where the consequences are dire, and there is no solution in sight, but everyone is hoping that some new fundamental scientific discovery will revolutionize the energy industry. It could happen ... we could possibly discover a fundamentally new energy technology that replaces conventional fossil fuels, electro-chemical batteries, etc. And we could possibly discover a fundamentally new science that replaces "Complementary Metal–Oxide–Semiconductor (CMOS)" silicon technology. On the other hand, perhaps the laws of physics do not admit such a possibility? Or worse, even if it is physically possible, what if our global socio-economic system should collapse long before we are able to fund the necessary discovery? For example, is it possible that Moore's Law stalling within the next 5-10 years could trigger a financial crisis so massive that it economically ends Moore's Law for good, before the research can be funded to solve the problem?


Your proposal for immediate panic is an unnecessary overreaction.

> it sounds like we don't get faster or cheaper CPUs

The progress in desktop CPUs has been a near-plateau for several years now.

> bigger CPU caches

Diminishing returns were hit several years ago.

> more CPU cores

Ditto for most use cases.

> more RAM

Ditto for most use cases.

> better or cheaper GPUs

GPUs have become so fast and cheap that integrated GPUs are good enough for most use cases.

> better/cheaper LED lights or OLED screens

I don't think they are related to the semiconductor nodes of digital logic?

> So the whole show just stops, like a high-speed train hitting a wall.

No, the train already started slowing down several kilometers back, and most people on it have realized it. There will be no wall-hitting.

> Steel is now a base commodity

Steel may be a commodity, but the steel-using industry (in this analogy, the software industry) is alive and well.

> no one is waiting to upgrade all of their steel structures to some next generation steel alloy that will justify the massive expense of a complete "hardware upgrade."

This is actually a good thing. Besides, the sentiment "no need to optimize, just wait a year and computers will be fast enough" died over ten years ago and was stupid to begin with.

> our global economy seem to be fundamentally dependent on technology consumption cycles stemming from Moore's Law

It's really, really not. At least not outside of the SV-startup-scene cognitive bubble.


IT infrastructure consumes a non-trivial proportion of global electricity production, ~10% by some estimates. That consumption is growing at an alarming rate. Most of our electricity still comes from coal.

Performance-per-core may have stagnated, but performance-per-watt continues to improve at an exponential rate. Without the efficiency dividend provided by Moore's Law, IT infrastructure could become one of the biggest obstacles in our transition to sustainable energy.


> This is actually a good thing. Besides, the sentiment "no need to optimize, just wait a year and computers will be fast enough" died over ten years ago and was stupid to begin with.

Ruby, Python, PHP :-/

Maybe, with performance not improving over the next few years, developers will wake up to the fact that doing stuff in a slow language limits the scale you can push it to.


Or maybe most developers aren't worried about the scale you can push something to. Most projects never go anywhere. If one turns into a runaway success, you can deal with performance later. Designing everything as if it is going to require Google-scale infrastructure is a good way to get distracted by cool technology and procrastinate on actually solving any useful problems.


Thankfully, programmers are already waking up to the fact that C and Python are not the only choices. Compare the success of Rust.


I was astonished how fast plain old JS has become (probably closer to C++ than to Ruby or Python in terms of performance - even more impressive with support for stuff such as asm.js).

And recent versions of ECMAScript/JS (which are gradually, albeit slowly, becoming more and more available for real-world usage) actually make for a pretty nice language.


If I can summarize what you are saying, we have been experiencing early symptoms of end of Moore's Law for the past 5 to 10 years, and nothing drastic has happened yet, so there's nothing to worry about?

Consider the possibility that such a narrative is simply describing a "slow-motion train wreck" rather than the absence of a "train wreck".

There is no doubt that the slow-down has been occurring for several years. One turning point was the collapse of Dennard Scaling around 2005, which led to stalling CPU clock speeds and the "multi-core crisis." I know that CPU clock speeds plateaued over a decade ago. I have not tracked costs, cache sizes, and other factors as closely, but comments here seem to confirm that many factors have been stalled for the past 5 to 10 years (somewhere between the failure of Dennard Scaling and 28nm nodes). There have been other slow-motion slowdowns as well, including the collapse of PC sales growth and now the emerging collapse of smartphone sales growth. The fact that this has been happening for "a while" does not make me feel any more at ease.

To be frank, I am less confident in your comments regarding the dependence (or lack thereof) of global economic growth on technology consumption. I beg to differ that it is just a "SV-startup-scene cognitive bubble" ... I'm pretty sure technology is systemically important to global economic growth.


This is OT, but I can't wrap my head around the reverence of global economic growth. Surely it is unsustainable in the long run. It would seem far better if people weren't required to buy a new, slightly shinier, device every few years just to keep society from collapsing.

As for the stalling of technological progress & sales, I would venture it's partly because the technology has caught up with other industries; e.g. I can now watch HD movies on demand on my smartphone, at practically no cost. While it may be impractical to design smaller CPUs, there is also little incentive to do so, as people don't really have a need for it (also, see the rise of high-end GPUs for VR gear, since that's what people want). However, I'd say there are other avenues of progress available besides mere clock speed, e.g. energy consumption. I doubt marketing departments around the world will shrug and say 'Well, we had a good run, now it's time to call it a day'. Although that's what I'd hope for.


The thing is, we're not actually all in the same industry. Companies that sell phones might not be so happy if sales level off or decline, but others will do fine. On the software side, there are plenty of ways to innovate without the hardware getting better.

If people stop upgrading phones, all the companies writing software to run on phones can still do their thing. Uber and AirBnB aren't really going to care; their apps already work. New businesses can come along based on mobile phone apps that put today's hardware to good use, even if the specs don't get any better.

Or looking at entertainment, the game industry doesn't actually depend on better hardware. If the VR thing turns out to be a fad, game designers will still manage to come up with innovative new video games for existing platforms, and people will keep playing them. (Minecraft doesn't require the latest hardware.)

Also, the design of phones can improve even if the raw hardware specs for CPU or memory aren't any better. You can see hardware innovations in other areas (like new sensors). Most recently, the fingerprint sensor is a good improvement.

There are all sorts of opportunities for innovation and refinement that have nothing to do with Moore's law.


When phones were wired you didn't 'upgrade' your phone for years if not decades.

All the end of Moore's law indicates is the end of the free lunch: free performance increase for software without any work on that software.

So instead of 'doom and gloom' I predict a very healthy resurgence in bloat reduction and efficiency, something that, as far as I'm concerned, can't happen fast enough.

FWIW my cell phone is 6 years old and as long as it works I'll happily use it. If a phone is a status symbol or a fashion accessory then there are plenty of ways to get people to upgrade even if the underlying tech doesn't change so I'm quite sure that manufacturers will find a way to re-package the same tech in ways that allows them to sell into the elective replacement market.


Most software companies won't have a lot of exposure here, since they just depend on the devices being out there. As long as there's no shortage of ideas, software advancements are still possible.

There is certainly quite a lot of downside for hardware and device companies, though, and it's hard to guess how that will ripple into companies that just do software.


> If I can summarize what you are saying, we have been experiencing early symptoms of end of Moore's Law for the past 5 to 10 years, and nothing drastic has happened yet, so there's nothing to worry about?

Yes, and what's more, the software industry seems to be stronger than ever.

> To be frank, I am less confident in your comments regarding the dependence (or lack thereof) of global economic growth on technology consumption.

Oh no, I think a large part of global economic growth depends on technology, I just don't think it depends on continued exponential improvements in the underlying semiconductor process nodes. There are other ways to innovate than just relying on automatic hardware improvements.


Excellent point. We have received technological advancement reliably and automatically for a long time. "Oops looks like we can have navigation software on the phone, just like that." - "Oh and VR goggles have just become viable, thanks Moore's Law".

There is plenty of stuff to do with our current hardware, but a new breakthrough would be nice.


The jump from 28nm to 14/16nm for graphics cards enabled a massive leap for the mobile graphics cards made by Nvidia. They are no longer a separate product from the desktop cards. The difference is so large that even the lowest-end model, the 1060, is 15% faster than the highest-end mobile graphics card (the 980M) from the last generation.


"GPUs have become so fast and cheap that integrated GPUs are good enough for most use cases."

Except for realistic VR, I'd argue; this will be an issue.


I agree that not having realistic VR due to a lack of hardware performance would be sad, but my arguments mostly concerned the global economic collapse (or rather, a lack thereof) mentioned in the parent comment.


Yes of course, I'm not disagreeing with that - just adding that VR is a difficult case right now and something that I don't think can be solved with software or hardware in the short term.


640K ought to be enough for anyone.


New software optimization is unlikely to stay a niche area where we only manage to eke out a bit of performance. Look at our popular languages today: pretty much none of them even do something as simple as combining a GC with true composition of data types (Go is the only one I can think of, and it makes up the difference elsewhere). This isn't about high-level abstractions being bad, it's about our current arbitrary limits on high-level abstractions being bad. The JIT languages are of course severely limited in the complexity of their optimizations due to time constraints. But most compiled languages experience the same, because they either don't provide a good set of semantics for a compiler to reason about the code (e.g. C/C++), or use one that is so limited that you simply can't get the compiler to generate sufficiently efficient code in some instances (e.g. Haskell).

On top of this, a programmer's input to the optimization process rarely extends past "inline/don't inline this function". Compilers have to make optimizations that are either strictly better or based on a wild guess, instead of using profiling to invoke optimizations either automatically or manually (i.e. overridden by humans after they test the code). Many compilers offer profile-guided optimization, but few projects use it, and the JVM just uses it to smooth over some of the performance loss from lacking things like value types or reified generics. In languages that provide sufficiently low-level operations you can sometimes get what you want by altering the code, but that doesn't always work.
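As one concrete (if Go-flavored) illustration of feeding measurements back into the optimization loop: Go's runtime/pprof collects a CPU profile that a human, or a profile-guided compiler, can then use to decide what to inline, specialize, or rewrite. This is only the data-collection half, and the function being profiled is a made-up stand-in:

    package main

    import (
        "log"
        "os"
        "runtime/pprof"
    )

    // hotLoop is a placeholder for whatever code you actually want covered
    // by the profile.
    func hotLoop() {
        sum := 0
        for i := 0; i < 100000000; i++ {
            sum += i
        }
        _ = sum
    }

    func main() {
        f, err := os.Create("cpu.prof")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Sample the CPU while hotLoop runs; inspect the result with
        // `go tool pprof cpu.prof`.
        if err := pprof.StartCPUProfile(f); err != nil {
            log.Fatal(err)
        }
        defer pprof.StopCPUProfile()

        hotLoop()
    }

The catch, as I said, is that this measured information rarely flows back into the compiler automatically; in most toolchains a human has to read the profile and hand-apply the optimizations.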

I'm excited for us to stop using Moore's law as a crutch (which has stopped on desktop CPUs for a while now as far as performance if not price) and to look at things like improved tooling, improved languages, and hopefully even improved architectures. I find it hard to believe that "5nm transistors arranged into x86 processors running hand-optimized C code" is the epitome of feasible computing in our universe, and that's forgetting that most of our code isn't optimized anywhere near that much.


>"combine a GC with true composition of data types"

Could you elaborate on how combining a garbage collector with composition of data types works as an optimization?

Is this distinctly different than GC with an inheritance object model?


It's not about inheritance, it's about how composition is implemented. In C++, B inheriting from A is implemented the same as B containing a member variable of type A as far as memory layout goes: they form one allocation, with space for A's fields as well as B's. In Java or .NET, fields which are objects are pointers to a separate allocation, like they would be if they were pointed to in C++, even if you assign them once and never change them. This increases indirection, memory and GC overhead, and reduces locality. Since you can nest containment, the effect of this can be quite large. .NET has structs to offset this a bit, but they make you decide per type instead of per variable, and you can't really operate on references to them.

Obviously there are cases where you need that indirection (e.g. if you're swapping out pointers to different objects in that field), but there are plenty of cases where you don't (this usage pattern is a big part of the reason why write barriers work okay for these languages). Also note that you don't need the indirection if all you want to do is use/store a garbage-collected pointer to the field's object elsewhere in your program; the .NET garbage collector already has the ability to deal with pointers within allocations. It's just that they've stuck to not discriminating between these within the language for simplicity (IIRC the .NET runtime can sort of use interior pointers to structs, but this isn't exposed in C#).
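Go happens to support both layouts, so a short sketch may make the contrast concrete (the type names here are made up for illustration, not taken from any real codebase):

    package main

    import "fmt"

    type Point struct{ X, Y float64 }

    // Inline composition: one allocation holding all four float64s.
    // No extra pointers for the GC to trace, and good cache locality.
    type Rect struct {
        Min, Max Point
    }

    // Indirect composition: each Point is its own heap allocation, so the
    // GC has two more pointers to follow per value and the data is
    // scattered across the heap - the Java-style layout.
    type RectIndirect struct {
        Min, Max *Point
    }

    func main() {
        r := Rect{Min: Point{0, 0}, Max: Point{3, 4}}
        ri := RectIndirect{Min: &Point{0, 0}, Max: &Point{3, 4}}
        fmt.Println(r.Max.X-r.Min.X, ri.Max.X-ri.Min.X)
    }

Java forces the second layout for every field that is itself an object; .NET structs recover the first, but, as noted above, only at the granularity of the type rather than the individual use site.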


A GC needs to visit every pointer in live memory. The fewer pointers you have, the quicker the GC can finish its job.


Who has the luxury of a CPU-bound problem these days? The developers of games for handheld consoles, maybe. Most problems are memory- or IO-bound nowadays, and in this era the latest crutch is the SSD. Since random access is faster than on an HDD, developers care less about optimizations. Heck, even in games with fast loading screens there is still low-hanging fruit like "oh, you are in a queue for map X? Let me preload it in the background instead of waiting until the match starts and then loading it", for example.


Well, I agree partly with this, but keep in mind that FinFET is also too expensive for many of the applications you listed. 28nm was actually the last generation for quite a few architectures, especially embedded or otherwise cost-limited ones.

Also, SSD density doesn't exactly depend on Moore's Law per se anyway.


I mostly agree with Adwn here... it's been slowing for quite a while. I expect it to be a bit of a shock, and Intel will have to find something different to do, but the only wall we might hit is fab/equipment building.

Also, OLED and display fab technologies are completely different, and they have years more to go... an OLED today isn't more than 6% total efficiency. What changed there is that resolution isn't the treadmill anymore, so something else (color, power, refresh) will have to take its place.


I think the end of ever-faster hardware will be good for plenty of software people and certainly for the world. The ever-shifting sands of the programming environment and the vast amount of e-waste punishing the physical environment are a wild ride, and there will be a settling.

I cut my teeth in the era of 1K of RAM.


IMO, except for a few small sub-domains like neural networks, we have much more computing power than ideas for things we can do with it, or people who can implement those ideas. The people side of the business - salaries, benefits, offices - is a much bigger cost than machines, especially in SV.


Although the technology itself may cease to advance, I think things will continue to get cheaper for a lot longer than 3-6 years, and this will hide what is happening from consumers for at least a decade.


Since you mentioned steel and airplanes, I need to ask: what were the disastrous effects of the "plateau"?

While airplanes reached their physical limits in the 70s, there were still many revolutions ahead, for example the low-cost revolution that brought air travel to the masses.


This may be slightly unrelated, but I'm curious: have curved silicon dies ever been considered? Say, hollow rings or spheres? Would there be any benefits regarding heat or speed, etc.?



