Mapping out a future for integrated circuits and computing is paramount. One option for advancing chip performance is the use of different materials, Chudzik says. For instance, researchers are experimenting with cobalt to replace tungsten and copper in order to increase the volume of the wires, and are studying alternatives to silicon, including germanium (Ge), silicon-germanium (SiGe), and III-V materials such as gallium arsenide and indium gallium arsenide. However, these materials present performance and scaling challenges and, even if those problems can be addressed, they would produce only incremental gains that would tap out in the not-too-distant future.
Faced with the end of Moore’s Law, researchers are also focusing attention on new and sometimes entirely different approaches. One of the most promising options is stacking components and scaling from today’s 2D ICs to 3D designs, possibly by using nanowires. “By moving into the third dimension and stacking memory and logic, we can create far more function per unit volume,” Rabaey explains. Yet, for now, 3D chip designs also run into challenges, particularly in terms of cooling: as engineers stack components, the devices have less surface area relative to their volume through which to shed heat. Fabrication is constrained as well. “You suddenly have to do processing at a lower temperature or you damage the lower layers,” he notes.
Consequently, a layered 3D design, at least for now, requires a fundamentally different architecture. “Suddenly, in order to gain denser connectivity, the traditional approach of having the memory and processor separated doesn’t make sense. You have to rethink the way you do computation,” Rabaey explains. It’s not an entirely abstract proposition: without that rethinking, “the advantages that some applications tap into, particularly machine learning and deep learning, which require dense integration of memory and logic, go away.” Adding to the challenge, a 3D design increases the risk of failures within the chip. “Producing a chip that functions with 100% integrity is impossible. The system must be fail-tolerant and deal with errors,” he adds.
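The fault tolerance Rabaey describes is classically achieved through redundancy. As a minimal illustrative sketch (not any specific chip's mechanism), triple modular redundancy (TMR) runs the same computation on three units and takes a majority vote, so a single faulty unit cannot corrupt the result; the replica functions below are invented for the example:

```python
from collections import Counter

def tmr(replicas, x):
    """Triple modular redundancy: run three (possibly faulty)
    replicas of the same computation and return the majority
    result, masking any single failure."""
    results = [f(x) for f in replicas]
    winner, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: more than one replica failed")
    return winner

# One replica is defective, yet the voted output is still correct.
good = lambda x: x * x
bad = lambda x: x * x + 1  # models a faulty unit
print(tmr([good, good, bad], 5))  # -> 25
```

The same voting idea scales to error-correcting codes on memory and checkpoint/restart at the system level; the common thread is that correctness is enforced above the level of any individual, imperfect component.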
Regardless of the approach and the combination of technologies, researchers are ultimately left with no perfect option. Barring a radical breakthrough, they must rethink the fundamental way in which computing and processing take place.
Conte says two possibilities exist beyond pursuing the current technology direction.
One is to make radical changes, but to limit them to those that happen “under the covers” in the microarchitecture. In a sense, this is what took place in 1995, except “today we need to use more radical approaches,” he says. For servers and high-performance computing, for example, ultra-low-temperature superconducting logic is being advanced as one possible solution; the U.S. Intelligence Advanced Research Projects Activity (IARPA) is investing heavily in this approach through its Cryogenic Computing Complexity (C3) program. So far, these non-traditional logic gates have been built only at small scale, and at sizes roughly 200 times larger than today’s transistors.
Another is to “bite the bullet and change the programming model,” Conte says. Although numerous ideas and concepts have been put forward, most center on creating fixed-function (non-programmable) accelerators for critical parts of important programs. “The advantage is that when you remove programmability, you eliminate all the energy consumed in fetching and decoding instructions.” Another possibility, one that is already taking shape, is to move computation away from the CPU and toward the data itself. Such memory-centric architectures, now in development in the lab, could muscle up processing without any improvement in the chips themselves.
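Conte's point about fetch and decode can be made concrete with a toy model. The sketch below, whose instruction set and unit-cost accounting are invented purely for illustration, contrasts a programmable path, which pays a fetch/decode overhead on every instruction, with a fixed-function path that computes the same result with the operations wired in directly:

```python
# Illustrative cost model only: one unit of overhead per instruction
# fetched and decoded, versus zero for a hardwired datapath.

def run_program(program, x):
    """Programmable path: every step first fetches and decodes
    an instruction before doing any useful arithmetic."""
    overhead = 0
    for opcode, operand in program:   # fetch
        overhead += 1                 # model fetch/decode energy
        if opcode == "add":           # decode + execute
            x += operand
        elif opcode == "mul":
            x *= operand
    return x, overhead

def fixed_function(x):
    """Accelerator path: the same computation wired in directly,
    with no instruction stream to fetch or decode."""
    return (x + 3) * 2 + 1, 0

program = [("add", 3), ("mul", 2), ("add", 1)]
print(run_program(program, 5))   # -> (17, 3): right answer, 3 units of overhead
print(fixed_function(5))         # -> (17, 0): same answer, zero overhead
```

The result is identical in both cases; what the accelerator eliminates is the per-instruction bookkeeping, which is exactly the energy Conte says disappears when programmability is removed.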
Finally, researchers are exploring completely different ways to compute, including brain-inspired neuromorphic models and quantum computing, both of which depart from the traditional von Neumann architecture. Rabaey says processors are already heading in this direction. As deep learning and cognitive computing emerge, GPU stacks are increasingly used to accelerate performance at the same or lower energy cost than traditional CPUs. Likewise, mobile chips and the Internet of Things bring entirely different processing requirements into play. “In some cases, this changes the paradigm to lower processing requirements on the system but having devices everywhere. We may see billions or trillions of devices that integrate computation and communication with sensing, analytics, and other tasks.”
In fact, as visual processing, big data analytics, cryptography, AR/VR, and other advanced technologies evolve, it is likely researchers will marry various approaches to produce boutique chips that best fit the particular device and situation. Concludes Conte: “The future is rooted in diversity and building devices to meet the needs of the computer architectures that have the most promise.”