Over the last half-century, as computing has advanced by leaps and bounds, one thing has remained remarkably constant: Moore's Law.

For more than 50 years, this concept, the observation that the number of transistors on a chip doubles roughly every two years, has provided a predictable framework for semiconductor development. It has helped computer manufacturers and many other companies focus their research and plan for the future.

However, there are signs that Moore's Law is reaching the end of its practical path. Although the IC industry will continue to produce smaller and faster transistors over the next few years, these devices cannot operate at optimal clock frequencies because of heat-dissipation limits. This has "brought the rate of progress in computing performance to a snail's pace," wrote IEEE Fellows Thomas M. Conte and Paolo A. Gargini in a 2015 IEEE-RC-ITRS report, On the Foundation of the New Computing Industry Beyond 2020.

Yet the challenges do not stop there. Chip designs cannot be miniaturized indefinitely; at some point over the next several years, current two-dimensional ICs will reach a practical size limit. Although researchers are experimenting with new materials and designs, some radically different, there is currently no clear path forward. In 2015, Gordon Moore himself predicted that the law bearing his name would wither within a decade. The IEEE-RC-ITRS report noted: "A new way of computing is urgently needed."

As a result, the semiconductor industry is in a state of flux. There is growing recognition that research and development must incorporate new circuit designs and rely on entirely different methods to scale computing power further. "For many years, engineers didn't have to work all that hard to scale up performance and functionality," observes Jan Rabaey, professor and EE Division Chair in the Electrical Engineering and Computer Sciences Department at the University of California, Berkeley. "As we reach physical limitations with current technologies, things are about to get a lot more difficult."