There is an interesting debate about feature size, though. Devices on silicon were essentially 2D for a long, long time: patterns on the top surface of the silicon. "Feature size" in this environment directly translated into area, which directly translated into die size.
As features got smaller you started getting 'trench FETs' and other tricks to increase the effective size of the gates so that leakage current wasn't insane. So at what point do the circuit elements become fully vertical, which is to say that viewed from the 'top' the transistor is 10 nm on a side, but vertically it's 22 nm 'tall'?
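A back-of-the-envelope sketch of why that verticality pays off, using the common FinFET approximation that the effective gate width is both sidewalls plus the top of the fin (the 10 nm and 22 nm figures are taken from the question above):

    # Effective gate width of a fin vs. a planar device: W_eff = 2*H_fin + W_fin
    fin_width_nm  = 10.0   # footprint as seen from the top
    fin_height_nm = 22.0   # vertical 'height' of the fin

    w_planar = fin_width_nm                      # 2D device: gate width = footprint
    w_finfet = 2 * fin_height_nm + fin_width_nm  # both sidewalls plus the top

    print(f"planar: {w_planar:.0f} nm, fin: {w_finfet:.0f} nm "
          f"({w_finfet / w_planar:.1f}x the gate width in the same footprint)")

Same footprint, roughly 5x the gate width, which is exactly the leakage-taming trade described above.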
And other tricks where the silicon layers are separately tested and 'thinned' and then packaged as a sandwich for final testing, with ion implanters creating the vias between the connecting layers.
Really interesting work going on in that sort of stuff.
Moore first made his claim in 1965. Assuming we have at least 5 more years of wringing out efficiencies with the move to 10nm, that's 55 years of continuous innovation. It's mind-boggling. Maybe there will be a gap until the next major breakthrough, but an entirely new process will come along eventually.
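To put a number on that cadence (a quick sketch, assuming the canonical doubling every two years):

    years = 2020 - 1965    # Moore's 1965 paper through ~5 more years of 10nm
    doublings = years / 2  # one doubling every two years
    print(f"{doublings:.1f} doublings -> {2 ** doublings:,.0f}x the transistors")
    # 27.5 doublings -> roughly 190,000,000x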
My guess is we'll probably need something like graphene. Gallium arsenide seems more like a 10-year stop-gap than a 50-year revolution. The problem is we can't get any "smaller", because we enter the quantum world, and that's something entirely different. However, with graphene we can keep the transistors the same but raise the clock speeds each generation, perhaps up to 1 terahertz or more, with a Moore's-Law-like rate of improvement in performance.
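For what it's worth, if clock speed did start doubling on a Moore's-Law-like cadence, the trip from today's ~4 GHz to 1 THz would be surprisingly short (illustrative arithmetic only):

    import math

    start_ghz, target_ghz = 4.0, 1000.0
    doublings = math.log2(target_ghz / start_ghz)  # ~8 doublings
    print(f"{doublings:.1f} doublings, ~{doublings * 2:.0f} years at one per 2 years")

About 16 years, if such a cadence ever materialized.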
Yeah, transistor sizes might hit their limit pretty soon, but with 3D circuits and new materials, speeds will continue to increase.
I would bet that improvements will become less predictable, though. Unfortunately, investors really dislike unpredictability, so R&D spending will probably drop.
Yeah, I don't think 1 THz is achievable, but we can likely do substantially better than 4 GHz, even when we can't go smaller. Then there are also gains to be made from building bigger, building in 3 dimensions, decreasing waste heat, etc. I suspect chip innovation will get bumpier but will likely maintain its overall pace for a couple more decades - which is all we need to get into really interesting territory re AI and other things.
I asked about this yesterday [1], and I found this article [2]. I can't speak to its validity, but it appears that the 'distances' used in these marketing concepts are not the same distances that are used to measure things. I'm still curious what the actual controllable resolution of the feature sizes is in the '7nm process'. I get the feeling that it's more like 30nm, but the effective feature density is greater than yesterday's because there is more control vertically, diagonally, etc., so they need a smaller number than yesterday's. But still just ~30nm (not actually 7nm).
When Intel says it has a 52nm wire pitch, this means it produces wires that are 26nm wide, with 26nm between them. Producing wires that are thinner, or that are closer together, is unreliable.
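Put differently, pitch is just line width plus spacing, so at the usual 1:1 duty cycle:

    wire_pitch_nm = 52.0
    line_width_nm = wire_pitch_nm / 2         # 26 nm wires...
    space_nm = wire_pitch_nm - line_width_nm  # ...with 26 nm gaps between them
    print(f"{line_width_nm:.0f} nm lines / {space_nm:.0f} nm spaces at {wire_pitch_nm:.0f} nm pitch")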
That said, they are probably able to position those wires with higher precision. Without being involved in the manufacturing myself, I deduce this mainly from looking at optical proximity correction [1]: it's clear, for example, that the final masks used for production have more detail. Another indication is the fact that different masks seem to be aligned to basically nm-level precision (otherwise, the different parts of transistors and the vertical interconnects (vias) between wiring planes would not match up properly). The photographs one sees of the final product also indicate this. This means that the location of wires could theoretically be controlled very precisely, but for a mixture of wavelength and other (chemical? surface tension?) reasons, the wires themselves cannot reliably be made any thinner.
I'd be curious to know how precise this alignment really is, and I've never seen numbers for it, but it must be incredibly precise. Given that a large part of it can be done optically, this is not even that surprising, compared to some of the other magic that's going on here.
As someone nearly irrationally annoyed by the collective inefficiency of all the JavaScript being run in the world, I kind of wish the computational "free lunch" would end sooner rather than later, so the frontend world has to move towards the zero-cost abstractions and techniques being adopted server side. Too bad these hardware engineers are really good at their jobs!
The computational "free lunch", IMHO, ends well before Moore's Law hits the wall.
Far from a well-established point, and please do correct me if I'm wrong, but ever since some point around 2008, the performance gain from a CPU upgrade has been kind of stuck at around 10-15% between generations.
Since IVB (Ivy Bridge) it's been more like 0-5 percent. Intel has decided to focus on reducing power consumption while keeping performance the same, since that's much easier to do now.
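Compounded over a decade, the gap between those regimes and the old cadence is stark (a quick sketch, generously assuming one generation per year):

    for per_gen in (0.05, 0.10, 0.15):
        print(f"{per_gen:.0%}/generation -> {(1 + per_gen) ** 10:.1f}x after 10 generations")
    # 5% -> 1.6x, 10% -> 2.6x, 15% -> 4.0x; doubling every 2 years would be 32x.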
There are no non-leaky zero-cost abstractions - there is no such thing as a free lunch (hah!).
Good abstractions come at a cost. You'll need more CPU power, but it becomes easier to write, read and maintain your code.
If performance becomes an issue, rewrite the critical sections to make less use of abstractions. If performance is a massive issue and it's not doable in a high-level language, then don't use a high-level language.
If CPU power stagnates, it doesn't matter. There is, and always will be, a place for abstractions, no matter what overhead they have.
JavaScript is very efficient in some respects: it trades off speed and memory usage for programmer productivity, safety, security and portability.
> Good abstractions come at a cost. You'll need more CPU power, but it becomes easier to write, read and maintain your code.
I don't believe this has to be true in the future (in fact, I'm not sure I believe it today either). This is a meme that we tell ourselves because we haven't invented clever enough programming languages or abstractions yet. This is exactly what I was referring to in my post. We as an industry would have to solve these problems at a fundamental level.
Take manual memory management, for instance. People used to think that dynamic languages made this so much easier, but it has become much easier to write GC-free programs (see C++11 and Rust).
I want to see more efforts in this kind of direction.
> JavaScript is very efficient in some respects: it trades off speed and memory usage for programmer productivity, safety, security and portability.
I take issue with this. JavaScript is not a productive language at LOC scale. Security is par for the course; modern systems languages are no worse or better. Portability I'll grant.
Last I checked, Java and .NET were the most popular server-side languages, and they are all about costly abstractions: dependency injection, reflection, AbstractFactoryBuilderManagerBean, etc.
In fact, I've yet to see a Java server-side stack trace that's less than 100 lines long. Usually it's more like 1000 lines, with a few RMIs inside. On the other hand, the JS stack traces I've seen (mostly hobby projects, so I may be biased) are usually less than 50 lines, often just 10 or so. Not good, but much better.
The JVM is an insane thing, and with clever enough programmers a lot can be done on top of that.
Were those Java servers written in standard Java, though? Because it's possible to write pretty low-level code in Java if you're willing to compromise on standards compliance. Non-GC'd, direct memory access is possible, I think.
Gallium Arsenide? If that's true, then it will be like a sign of the apocalypse. The old saw has been: "Gallium Arsenide: Technology of the Future! Always was. Always will be!"
"More interesting than 10nm, though, is the news that Intel is looking to move away from silicon FinFETs for its 7nm process. While Intel didn't provide any specifics, we strongly suspect that we're looking at the arrival of transistors based on III-V semiconductors. III-V semiconductors have higher electron mobility than silicon, which means that they can be fashioned into smaller and faster (as in higher switching speed) transistors."
To those unfamiliar with "III-V": Think GaAs (Gallium Arsenide) and friends.
gallium arsenide (GaAs) has six times higher electron mobility than silicon, which allows faster operation... Conversely, silicon is robust, cheap, and easy to process, whereas GaAs is brittle and expensive, and insulation layers can not be created by just growing an oxide layer; GaAs is therefore used only where silicon is not sufficient.
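To first order, the speed claim follows from drift velocity, v = μE: at the same field, six times the mobility means six times the velocity. The mobilities below are standard textbook room-temperature figures; note this low-field picture overstates the real advantage once velocity saturates in short channels:

    mu_si, mu_gaas = 1400.0, 8500.0  # electron mobility, cm^2/(V*s), room temp
    e_field = 1e3                    # illustrative low field, V/cm
    print(f"Si:   {mu_si * e_field:.1e} cm/s")
    print(f"GaAs: {mu_gaas * e_field:.1e} cm/s ({mu_gaas / mu_si:.1f}x)")
    # In short-channel devices velocity saturates, so the practical gain is smaller.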
In many ways, Moore's Law has been a law of economics. Look at the charts in the article; Intel at least intends to continue the drop in cost per transistor, even if the price of a mm^2 of wafer continues to go up.
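A toy model of that decoupling, with numbers invented purely to show the shape of the argument (cost per mm^2 rising while density rises faster, so cost per transistor still falls):

    nodes = [  # (node, $ per mm^2, transistors per mm^2) -- all made up
        ("22nm", 0.10, 15e6),
        ("14nm", 0.13, 30e6),
        ("10nm", 0.17, 55e6),
    ]
    for name, cost_mm2, tx_mm2 in nodes:
        print(f"{name}: ${cost_mm2:.2f}/mm^2, ${cost_mm2 / tx_mm2 * 1e9:.2f} per billion transistors")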
Crystalline III-V channels can be grown epitaxially on silicon substrates, with buffer layers to grade the strain due to crystal lattice mismatch. With silicon wafers heading to 450mm diameter the economics would argue against native III-V substrates.
One advantage of native III-V substrates is they are semi-insulating (very high resistivity) so there is no need for transistor isolation wells. However, insulated substrates could be obtained on silicon by means of wafer bonding with an intermediate dielectric layer.
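For scale, here is the mismatch those buffer layers have to grade, from standard room-temperature lattice constants:

    a_si, a_gaas = 5.431, 5.653  # lattice constants, angstroms
    print(f"GaAs on Si: {(a_gaas - a_si) / a_si:.1%} lattice mismatch")  # ~4.1%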
That's vague. I hope it's just Intel being quiet... I was under the impression that building cutting-edge chips requires deep experience and planning - Intel must know exactly what they're going to move to if they're going to have any prayer of coming out with chips on a timely basis after 10nm in 2017.
The fact is that no one knows, not even Intel. Many hoped-for technologies have failed to pan out before. For example, the industry had placed high hopes on EUV lithography (in 2007 they hoped it would be ready by 2010), but continued problems make that technology look less and less likely today, at least for the short-term. III-V semiconductors are another major hope for the industry (a hope that's been around for decades), but now delays have pushed that from 10 nm to 7 nm. Imec has been working for over 10 years on a way of growing III-V semiconductors in tiny trenches to make it cheap enough, but until they actually ship product, no one knows if it's good enough to work.
It's a scary time for the industry, as Moore's law comes to an end.
(All that said, even if no one knows for sure what's coming next, that doesn't mean nothing will. The semiconductor industry is throwing billions of dollars and thousands of engineers at many potential solutions in parallel. Even if plan A falls through, there is always a heavily researched plan B, C, and D.)
Yes, I think it's accurate. As the article says, the #1 problem with EUV is low power output. Especially with less sensitive photoresists (another problem the article doesn't mention), today's 30 W output is not enough to churn out 100-200 wafers per hour.
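A crude, dose-limited upper bound shows why 30 W falls short. The resist sensitivity and optical-train efficiency below are my guesses, so treat this as the shape of the problem rather than real tool numbers:

    import math

    dose_mj_cm2 = 30.0                 # assumed dose for a less-sensitive resist
    wafer_cm2 = math.pi * 15.0 ** 2    # 300 mm wafer, ~707 cm^2
    source_w, optics_eff = 30.0, 0.01  # 30 W source; assume ~1% reaches the wafer

    exposure_s = (dose_mj_cm2 / 1000 * wafer_cm2) / (source_w * optics_eff)
    print(f"~{exposure_s:.0f} s exposure/wafer -> at most {3600 / exposure_s:.0f} wafers/hour")
    # ~71 s/wafer -> a ~51 wph ceiling, before any stage moves or wafer handling.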
By the way, an interesting alternative to blasting drops of molten tin in vacuum is to just build a multimillion dollar synchrotron and use its x-rays for lithography in a fab. This has a whole bunch of other problems, but it's an idea that engineers are seriously considering.
Most of my knowledge comes from my friend who used to work on the problem of inspecting x-ray masks.
Why the jump from 193nm light to 13.5nm light, with nothing in between? Would 50nm light, for instance, basically be exactly as hard as 13.5nm, so you just go with the smallest possible?
I'm just guessing here. Current technology uses all sorts of tricks to focus/manipulate/use 193nm light so that you can build something smaller than the wavelength itself. However, many of those tricks don't work with EUV, or with wavelengths between EUV and 193nm, because it has to be done in a vacuum. So basically you have to go even smaller to be better.
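The usual rule of thumb here is the Rayleigh criterion, CD = k1·λ/NA, which shows how far each wavelength can be pushed (the k1 and NA values below are typical ballpark figures, not specific tool specs). The other half of the answer is materials: between 193nm and EUV nearly everything absorbs the light (157nm tooling was attempted and abandoned), so once you're forced into reflective optics in a vacuum anyway, you go to a wavelength where multilayer mirrors work well.

    def min_feature(k1, wavelength_nm, na):
        # Rayleigh criterion: smallest printable feature
        return k1 * wavelength_nm / na

    print(f"193 nm immersion: {min_feature(0.28, 193.0, 1.35):.0f} nm")  # ~40 nm
    print(f"EUV at 13.5 nm:   {min_feature(0.40, 13.5, 0.33):.0f} nm")   # ~16 nm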
It surely has to be. The jump from silicon at a mass scale is going to have an incredibly high barrier to entry so I can only see the most entrenched players making it. It seems to me a prime opportunity for the big players to entrench their {mon|du}opoly over the industry.
I know I'd be keeping specifics as tightly controlled as possible until the last moment. It's one of those rare big jumps that really separate the players in the field.
Things actually broke at 90nm. Gate leakage went up enough that most analog/RF circuits scaled their transistors back up to 130-150nm dimensions while the digital guys cashed in the density increase one last time.
65nm was the first node where static RAM cells didn't scale with the rest of the digital circuitry. RAM cells are more sensitive to leakage since they have a "writability constraint" where you have to be able to shove enough electrons from outside the cell, through a transistor, with enough oomph to change the state inside the RAM cell.
40nm was where RAM scaling really broke. Designers had to start jumping through amazing hoops to support tricks for the manufacturing guys to eke out the last jump even for standard digital circuits. Most technologies started trading off multiple gate oxide thicknesses to manage leakage current.
28nm was where everything basically went to hell. The strong form of Moore's Law (twice the transistors for same cost) broke. RAM cells are way off the scaling curve. Leakage is everywhere. Multiple gate oxide thicknesses are the rule, not the exception. Designers are jumping through tremendous hoops for manufacturing (aligning all gates in the same direction over the entire chip, for example).
Below 28nm has been a disaster, and, as pointed out, a lot of the sub-28nm stuff is more marketing than actual physical dimensions.
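The recurring villain in that chronology is leakage, and the reason it got so bad so fast is that direct tunneling through the gate oxide grows roughly exponentially as the oxide thins. A hedged sketch using the oft-quoted rule of thumb of about 10x more leakage per 0.2 nm of thinning (thicknesses are illustrative):

    def relative_leakage(t_ox_nm, t_ref_nm=2.0):
        # ~one decade of tunneling current per 0.2 nm of oxide thinning
        return 10 ** ((t_ref_nm - t_ox_nm) / 0.2)

    for t in (2.0, 1.6, 1.2):
        print(f"t_ox = {t:.1f} nm -> ~{relative_leakage(t):,.0f}x a 2.0 nm oxide's leakage")
    # A ~1.2 nm (roughly 90nm-era) oxide leaks ~10,000x more than a 2.0 nm one.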
We fixed that problem by dropping back to under 2 GHz; focusing on clock rate rather than IPC and energy efficiency was just a wrong turn for Intel and the industry.
I don't know if we'll halt, but we may have a point where things stagnate for a bit. We've been increasing at breakneck speeds for a very long time.
I don't see that as a bad thing. There's plenty of optimization opportunities on the software level. Think of the difference between first-generation and last-generation console games.
Lots of opportunities throughout the rest of the stack too - from novel architectures to on-chip programmable logic, the end of Moore's law puts more pressure on some really interesting innovation.