The end of Moore’s law has been widely reported for the last decade and is well understood. Nobody is likely to see the huge efficiency improvements in general-purpose processors that we saw in the past, and it isn’t news anymore.
Algorithmic improvements, custom domain-specific ASICs, and perhaps quantum computing or other physical processes down the road are where large efficiency deltas might come from; for now, small incremental improvements are here to stay for all chip makers.
Moore's law may be considered dead at Intel, but TSMC does not agree.
>Wong, who is vice president of corporate research at Taiwan Semiconductor Manufacturing Corp, gave a presentation at the recent Hot Chips conference where he claimed that not only is Moore’s Law alive and well, but with the right bag of technology tricks it will remain viable for the next three decades.
“It’s not dead,” he told the Hot Chips attendees. “It’s not slowing down. It’s not even sick.”
I'm not sure chip fabs are ever going to say Moore's law is dead -- I interned at Intel last summer, and Moore's law was pretty much all they could talk about. (In fact, they made very similar claims at the exact same conference [0]).
In fact, multiple gates can be created in the same transistor, in an effect SFN calls “multi-tunnel.” Multiple NOR and OR gates can thus be created from a single Bizen transistor, allowing creation of logic circuits with many fewer devices. This can result in a three-fold increase in gate density with a corresponding reduction in die size for integrated circuits based on the transistors. Summerland said that SFN is also creating a reduced device count processor architecture to enable analogue computing with Bizen transistors.
There's a lot more to the tapering of perf improvements than changes in Moore's law (which is about circuit complexity increasing at a given cost), namely the difficulty of translating the growing transistor budget into IPC improvements or, failing that, of solving parallel programming. And the stagnation of clock speeds.
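A rough back-of-the-envelope illustration of that last point, using Amdahl's law (my own sketch, not anything from the thread): spending the transistor budget on more cores stops paying off once the serial fraction of a workload dominates.

```python
# Sketch: why a growing transistor budget spent on cores stops translating into
# speedup once the serial fraction dominates (Amdahl's law). Numbers illustrative.
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Ideal speedup when `parallel_fraction` of the work scales across cores."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for cores in (1, 2, 4, 8, 16, 64, 256):
    print(f"{cores:4d} cores -> {amdahl_speedup(0.95, cores):5.1f}x speedup")
# With 95% of the work parallel, 256 cores still give under a 19x speedup;
# the rest has to come from IPC, clocks, or better algorithms.
```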
FWIW, I don’t think perf improvements are slowing down, I just think efficiency improvements in ICs are. Flops per watt of general compute isn’t moving quickly, and can’t anymore. But we can still make bigger parallel machines, design better algorithms, solve new problems, etc.
Outside HPC/ML, I think our programs are now trading away useful ops per watt to take some advantage of the elusive beast called thread-level parallelism. A web browser is happy to get a speedup of N by throwing 2N or 4N spinning threads at the problem, as long as correctness and stability can be retained.
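To put crude numbers on that trade (a toy model, nothing measured): if a speedup of N costs 2N or 4N busy threads, the speedup-per-thread proxy for useful ops per watt keeps falling even as wall-clock latency improves.

```python
# Toy model of trading ops-per-watt for thread-level parallelism: speedup / threads
# as a crude proxy for how much of the machine's work is "useful". Illustrative only.
def per_thread_efficiency(speedup: float, threads: int) -> float:
    return speedup / threads

for n in (2, 4, 8):
    for overcommit in (2, 4):
        threads = overcommit * n
        print(f"speedup {n}x from {threads:2d} threads -> "
              f"{per_thread_efficiency(n, threads):.0%} per-thread efficiency")
```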
What makes sense to accelerate, how to integrate it and balance accelerators against general CPUs, and how to expose it all to the programmer all seem like fun and interesting problems.
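One way to make the "what makes sense to accelerate" question concrete is a break-even check: offloading only pays off when the compute time saved beats the data-transfer and launch overhead. A minimal sketch, with hypothetical names and illustrative numbers (not from any particular toolchain):

```python
# Hypothetical back-of-envelope check for whether a kernel is worth offloading
# to an accelerator: the saved compute time has to beat the cost of moving the
# data plus launching the work. All figures are illustrative, not measured.
def worth_offloading(bytes_moved: int,
                     cpu_time_s: float,
                     accel_speedup: float,
                     link_gb_per_s: float = 16.0,     # assumed PCIe-class link
                     launch_overhead_s: float = 10e-6) -> bool:
    transfer_s = bytes_moved / (link_gb_per_s * 1e9)
    accel_time_s = cpu_time_s / accel_speedup + transfer_s + launch_overhead_s
    return accel_time_s < cpu_time_s

# A half-second kernel over ~400 MB is worth shipping out; a 10-microsecond one is not.
print(worth_offloading(bytes_moved=400_000_000, cpu_time_s=0.5,   accel_speedup=20))  # True
print(worth_offloading(bytes_moved=64_000,      cpu_time_s=10e-6, accel_speedup=20))  # False
```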
It is a cool time! Yeah I totally agree, and I think it’s awesome that you’re looking at it as an opportunity to learn and have fun doing it. Some people worry, and others embrace the change and make good things happen. I think I can attest to your vision since I work for a chip maker and I’m involved in the hardware & software design of some domain specific computing - it has been a blast, and we are learning all kinds of fun things.