“The history of the semiconductor industry has been written by transistors, which remain its backbone, and its fate was foretold by Gordon Moore in 1965. For the past 30 years, the semiconductor industry has benefited from an incomparable technological revolution: Moore’s Law. The ongoing debate over the death of Moore’s Law leaves us with two options, both perplexing: either make Moore’s Law immortal, or find some new technological mystique. Everything that comes must eventually go. The appetite of the AI revolution and smart computing has chip designers asking what path comes after Moore’s Law.”
Transistors are approaching their minimum physical size. With growing demand for smartphones and the Internet of Things, diverse arrays of sensors and low-power processors have become critically important for chip companies. Highly integrated chips contain not only a processor but also RAM, power regulation, a gyroscope, an accelerometer and much more, and their functional magic amazes us. Shrinking transistors while packing more onto each chip was never as simple as drafting a law: semiconductor companies have invested heavily in R&D and fabs, making chips far more expensive to develop, but the shrinking delivered advantages that far outweighed its cost.
Each time transistors shrank, the chips made from them became more efficient, which in turn grew the market for them. For some time now, however, making transistors smaller has failed to improve chip efficiency, and the operating speed of high-end chips has plateaued. As the benefits of Moore’s Law diminish, costs rise, because transistors have approached their fundamental limit of smallness.
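The exponential trend behind this economics is easy to sketch. The two-year doubling period comes from the common statement of Moore's Law; the 1971 starting point of roughly 2,300 transistors (the Intel 4004) is an illustrative assumption, not a figure from this article:

```python
# Rough sketch of Moore's Law: transistor count doubles every two years.
# The base year and base count (Intel 4004, ~2,300 transistors in 1971)
# are assumptions used only to illustrate the exponential trend.

def transistors(year, base_year=1971, base_count=2300, doubling_period=2):
    """Projected transistor count under an idealized Moore's Law."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

for y in (1971, 1991, 2011):
    print(y, round(transistors(y)))
```

Twenty years of doubling multiplies the count by roughly a thousand, which is why even a modest slowdown in the doubling period compounds so visibly.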
Who is winning?
Moore’s Law has been an ongoing debate among engineers, scientists and mathematicians. Recently, Intel’s Stacy Smith stated: “Intel manufacturing processes advance according to Moore’s Law, delivering ever more functionality and performance, improved energy efficiency and lower cost-per-transistor with each generation”. Intel has demonstrated that its much-awaited chip, built on a denser 10 nm fabrication process, is out of the development phase, an improvement over its previous 14 nm process. By shrinking from 14 nm to 10 nm, Intel argues that Moore’s Law still holds. On the other side of the coin, Nvidia’s CEO declares the death of Moore’s Law. According to Jen-Hsun Huang, although we can pack more and more transistors into the same area, the performance improvements, which were the main reason for shrinking in the first place, have not been significant.
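The density argument behind such a node shrink can be sketched with back-of-the-envelope arithmetic. Under ideal scaling (an assumption; real process nodes do not scale exactly with their marketing names), the area of each transistor shrinks with the square of the feature size:

```python
# Ideal-scaling sketch: if every linear dimension shrinks from 14 nm to
# 10 nm, each transistor's area shrinks by (10/14)^2, so density rises by
# the inverse factor. Real nodes deviate from this idealization, and node
# names are partly marketing, so treat this as an upper bound.

old_node, new_node = 14.0, 10.0  # nm
density_gain = (old_node / new_node) ** 2
print(f"Ideal density gain: {density_gain:.2f}x")  # about 1.96x, near a doubling
```

This is why a 14 nm to 10 nm transition is often described as roughly one Moore’s Law generation of density.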
Now the question arises: should we follow the decades-old strategy of doubling transistor counts every two years by relying on photolithography, or should we keep exploring new techniques and technologies? For Intel, the issue is the delayed, slowing progress of shrinking technology. Meanwhile, worldwide advances in artificial intelligence, medical electronics and genetic engineering continue, and the technology era will keep growing, demanding efficient, miniaturized electronics.
As different theories about the survival of Moore’s Law emerged, scientists and engineers also began developing new technologies and techniques to protect the future from its death. Much research is under way to improve semiconductor technologies.
Materials beyond Silicon
Graphene, nitrides, ultra-thin materials and many others have emerged as promising candidates, offering better mobility, electronic switching and speed than silicon. Many researchers and scientists claim that Moore’s Law will survive, but with a new semiconductor material rather than silicon. Recently, two new semiconductor materials, hafnium diselenide and zirconium diselenide, have shown properties that could make them successors to silicon. These are 2D materials with high-k dielectrics, and their atomically thin nature offers scaling benefits. The motive for switching to new materials is their potential for miniaturization with enhanced efficiency. Research also suggests that new transistor designs combined with new materials will unlock the potential of new technology.
Quantum computing
Quantum computing is the next step beyond classical digital computing, originating not from Moore’s Law but from advances in classical computing itself. In classical computing, information is encoded in bits, each either one or zero. Quantum computing introduces qubits, which rely on two main principles: superposition and entanglement. Superposition means a qubit can represent both 1 and 0 at the same time. Entanglement means the states of qubits become correlated, so that measuring one determines the state of the other. With these two properties, qubits act as far more sophisticated switches than their predecessors, suited to solving difficult problems in the era of artificial intelligence and neural networks. Quantum optimization and simulation are the main concerns of the quantum computing industry: optimization involves tuning a large number of parameters to make processing faster and faster. Recently, Intel accelerated its quantum computing effort with a 17-qubit chip, and that is how we are progressing.
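Superposition and entanglement can be sketched with a tiny state-vector simulation in plain Python. This is a toy classical model, not a real quantum device, and the two-qubit Bell state below is a standard textbook construction rather than something from this article:

```python
import math

# Toy state-vector model of qubits. A single qubit is a pair of amplitudes
# (a0, a1); measuring it yields 0 with probability a0^2 and 1 with a1^2.

h = 1 / math.sqrt(2)

# Superposition: a Hadamard gate turns |0> = (1, 0) into (h, h), so the
# qubit reads out as 0 or 1 with equal probability.
qubit = (h, h)
probs = [a * a for a in qubit]
print("single qubit:", probs)  # roughly [0.5, 0.5], up to float rounding

# Entanglement: the Bell state (|00> + |11>) / sqrt(2), written over the
# basis |00>, |01>, |10>, |11>. Outcomes 00 and 11 each occur with
# probability 0.5; 01 and 10 never occur, so measuring one qubit
# immediately fixes the other.
bell = (h, 0.0, 0.0, h)
bell_probs = [a * a for a in bell]
print("bell state:", bell_probs)
```

The point of the sketch is the correlation: in the Bell state there is no way to describe either qubit on its own, which is exactly the resource quantum algorithms exploit.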
What are hardware people doing?
Shifting from silicon to new materials is the latest approach, but researchers are also turning to 3D nanofabrication, which promises dramatically higher efficiency at reduced size. It rests on the ability to manipulate materials with atomic precision. In this era of modern computing, the transition beyond it must be made in the atomic control of material placement and interfaces: atomic control of materials assembly enables designed, hierarchical device structures.
It is the era of GPU computing
Computing is no longer an unknown word. A chip can be made faster by raising its clock rate: on every tick of the clock the transistors switch, so a higher clock rate directly lets the chip carry out instructions faster. CPU performance growth has slowed as the cost of further scaling has risen significantly. The GPU has emerged as the new face of computing: its massively parallel architecture completes computational tasks much faster than a CPU.
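The idea behind that parallel architecture can be sketched in plain Python: split a large job into independent chunks that could, in principle, all run at once on separate groups of GPU cores. This is a toy model of data parallelism, and the chunk size is an arbitrary illustrative choice:

```python
# Toy sketch of data parallelism, the idea behind GPU computing: split the
# work into independent chunks, process each chunk in isolation (on a GPU,
# each chunk would map to its own group of cores), then combine the
# partial results.

def chunked(data, size):
    """Yield successive slices of the input list."""
    for i in range(0, len(data), size):
        yield data[i:i + size]

data = list(range(1_000_000))

# Each partial sum depends only on its own chunk, so on parallel hardware
# all of them could execute at the same time.
partials = [sum(chunk) for chunk in chunked(data, 10_000)]
total = sum(partials)

print(total == sum(data))  # the parallel decomposition changes the answer not at all
```

The decomposition works because addition is associative; any reduction with that property (sum, max, dot product) parallelizes the same way, which is why GPUs excel at the linear algebra underneath neural networks.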
A program is a sequential set of instructions processed in order, but in the era of artificial intelligence, machine learning and deep learning, new processing schemes have appeared, such as pipelining, in which the next instruction is started before the previous one has completed. Improving microprocessor performance used to require deploying more and more transistors in a constant area, where a few thousand additional transistors yielded new gains in speed; now GPU computing has made applications and their execution times much faster.
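The benefit of pipelining can be made concrete with cycle counts. The arithmetic below assumes an idealized pipeline with no stalls or hazards; the five-stage depth and the instruction count are illustrative assumptions:

```python
# Idealized pipelining arithmetic: with S stages and N instructions, a
# non-pipelined machine needs S cycles per instruction (S * N in total),
# while a pipeline overlaps instructions and, once full, retires one per
# cycle: S + (N - 1) cycles. Hazards and stalls are ignored here.

def cycles_sequential(n_instr, stages):
    return stages * n_instr

def cycles_pipelined(n_instr, stages):
    return stages + (n_instr - 1)

n, s = 1000, 5
seq = cycles_sequential(n, s)   # 5000 cycles
pipe = cycles_pipelined(n, s)   # 1004 cycles
print(f"speedup: {seq / pipe:.2f}x")  # approaches 5x as n grows
```

In the limit of many instructions the speedup approaches the stage count, which is why deeper pipelines were for years a cheap route to higher throughput.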
Imagine a graph with CPU performance on the horizontal axis and the constantly growing requirement on the vertical axis: what fills the gap between them is an explosion of new architectures. Earlier, hardware people carried all the responsibility of pushing feature sizes down to 10 nm, but with new technologies such as artificial intelligence and neural networks, the paradigm is shifting to the software side. No matter how many processors we use or how many layers we build, the transition from CPU to GPU and FPGA is under way. To secure the future, we must shift our focus to software computing, and new instruction set architectures should be introduced.