SHRIRAM SPARK

THINK DIFFERENT

Month: June 2017

FUTURE OF ARTIFICIAL INTELLIGENCE



Technology moves at breakneck speed, and we now have more power in our pockets than we had in our homes in the 1990s. Artificial intelligence (AI) has been a fascinating concept of science fiction for decades, but many researchers think we’re finally getting close to making AI a reality. NPR notes that in the last few years, scientists have made breakthroughs in “machine learning,” using neural networks, which mimic the processes of real neurons.

This is a type of “deep learning” that allows machines to process information for themselves on a very sophisticated level, allowing them to perform complex functions like facial recognition. Big data is speeding up the AI development process, and we may be seeing more integration of AI technology in our everyday lives relatively soon. While much of this technology is still fairly rudimentary at the moment, we can expect sophisticated AI to one day significantly impact our everyday lives. Here are 6 ways AI might affect us in the future.
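
For readers curious what “mimicking the processes of real neurons” looks like in code, here is a minimal, purely illustrative Python sketch of a single artificial neuron: it takes weighted inputs, sums them, and passes the result through a non-linear activation. The inputs and weights are hypothetical, and real deep-learning systems stack millions of such units across many layers.

import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs plus a bias term, squashed into the range (0, 1)
    # by a sigmoid activation: a loose analogue of a biological neuron "firing".
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Hypothetical inputs and weights, chosen purely for illustration.
print(neuron([0.5, 0.8], [0.9, -0.4], bias=0.1))  # prints a value between 0 and 1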

1. Automated Transportation

We’re already seeing the beginnings of self-driving cars, though the vehicles are currently required to have a driver present at the wheel for safety. Despite these exciting developments, the technology isn’t perfect yet, and it will take a while for public acceptance to bring automated cars into widespread use. Google began testing a self-driving car in 2012, and since then, the U.S. Department of Transportation has released definitions of different levels of automation, with Google’s car classified as the first level down from full automation. Other transportation methods are closer to full automation, such as buses and trains.

2. Cyborg Technology

One of the main limitations of being human is simply our own bodies—and brains. Researcher Shimon Whiteson thinks that in the future, we will be able to augment ourselves with computers and enhance many of our own natural abilities. Though many of these possible cyborg enhancements would be added for convenience, others might serve a more practical purpose. Yoky Matsuoka of Nest believes that AI will become useful for people with amputated limbs, as the brain will be able to communicate with a robotic limb to give the patient more control. This kind of cyborg technology would significantly reduce the limitations that amputees deal with on a daily basis.

3. Taking Over Dangerous Jobs

Robots are already taking over some of the most hazardous jobs available, including bomb defusing. According to the BBC, these machines aren’t quite robots yet: technically they are drones, serving as the physical stand-in for a bomb technician but still requiring a human operator rather than relying on AI. Whatever their classification, they have saved thousands of lives by taking over one of the most dangerous jobs in the world. As the technology improves, we will likely see more AI integration to help these machines do their work.

Other jobs are also being reconsidered for robot integration. Welding, well known for producing toxic substances, intense heat, and earsplitting noise, can now be outsourced to robots in most cases. Robot Worx explains that robotic welding cells are already in use, and have safety features in place to help protect human workers from fumes and other bodily harm.

4. Solving Climate Change

Solving climate change might seem like a tall order from a robot, but as Stuart Russell explains, machines have more access to data than one person ever could—storing a mind-boggling number of statistics. Using big data, AI could one day identify trends and use that information to come up with solutions to the world’s biggest problems.
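
As a toy illustration of what “identifying trends” in data can mean at the simplest level, the Python sketch below fits a straight line to a short, invented series of yearly temperature anomalies using ordinary least squares. All numbers are made up for demonstration; real climate analysis relies on vastly larger datasets and far more sophisticated models.

# Fit a straight-line trend to a hypothetical series of yearly temperature
# anomalies (in degrees Celsius) using ordinary least squares.
years = [2010, 2011, 2012, 2013, 2014, 2015, 2016]
anomalies = [0.72, 0.61, 0.65, 0.68, 0.75, 0.90, 0.99]  # illustrative values only

n = len(years)
mean_x = sum(years) / n
mean_y = sum(anomalies) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, anomalies)) / sum(
    (x - mean_x) ** 2 for x in years
)
print("Estimated trend: {:.3f} degrees per year".format(slope))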

5. Robots as Friends

Who wouldn’t want a friend like C-3PO? At this stage, most robots are still emotionless and it’s hard to picture a robot you could relate to. However, a company in Japan has made the first big steps toward a robot companion—one who can understand and feel emotions. Introduced in 2014, “Pepper” the companion robot went on sale in 2015, with all 1,000 initial units selling out within a minute. The robot was programmed to read human emotions, develop its own emotions, and help its human friends stay happy. Pepper became available in the U.S. in 2016, and more sophisticated friendly robots are sure to follow.

6. Improved Elder Care

For many seniors, everyday life is a struggle, and many have to hire outside help to manage their care, or rely on family members. AI is at a stage where replacing this need isn’t too far off, says Matthew Taylor, computer scientist at Washington State University. “Home” robots could help seniors with everyday tasks and allow them to stay independent and in their homes for as long as possible, which improves their overall well-being.

Although we don’t know the exact future, it is quite evident that interacting with AI will soon become an everyday activity. These interactions will clearly help our society evolve, particularly in regard to automated transportation, cyborgs, handling dangerous duties, solving climate change, friendships and improving the care of our elders. Beyond these six impacts, there are even more ways that AI technology can influence our future, and this very fact has professionals across multiple industries extremely excited for the ever-burgeoning future of artificial intelligence.

Designs on the Future

Mapping out a future for integrated circuits and computing is paramount. One option for advancing chip performance is the use of different materials, Chudzik says. For instance, researchers are experimenting with cobalt to replace tungsten and copper in order to increase the volume of the wires, and studying alternatives to silicon. These include Ge, SiGe, and III-V materials such as gallium arsenide and gallium indium arsenide. However, these materials present performance and scaling challenges and, even if those problems can be addressed, they would produce only incremental gains that would tap out in the not-too-distant future.

Faced with the end of Moore’s Law, researchers are also focusing attention on new and sometimes entirely different approaches. One of the most promising options is stacking components and scaling from today’s 2D ICs to 3D designs, possibly by using nanowires. “By moving into the third dimension and stacking memory and logic, we can create far more function per unit volume,” Rabaey explains. Yet, for now, 3D chip designs also run into challenges, particularly in terms of cooling. The devices have less surface area relative to their volume as engineers stack components. As a result, “You suddenly have to do processing at a lower temperature or you damage the lower layers,” he notes.
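
A rough back-of-the-envelope view of the cooling problem, under the simplifying assumption that a stack is cooled mainly through its top surface: if each of $N$ stacked layers dissipates power $P_{\text{layer}}$ over a footprint of area $A$, the heat flux the package must remove is approximately

$$\frac{P_{\text{total}}}{A} \approx \frac{N \, P_{\text{layer}}}{A},$$

so power density grows roughly linearly with the number of layers, which is why processing has to run at lower power (and temperature) as layers are added.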

Consequently, a layered 3D design, at least for now, requires a fundamentally different architecture. “Suddenly, in order to gain denser connectivity, the traditional approach of having the memory and processor separated doesn’t make sense. You have to rethink the way you do computation,” Rabaey explains. It’s not an entirely abstract proposition. “The bottlenecks that some applications run into—particularly machine learning and deep learning, which require dense integration of memory and logic—go away.” Adding to the challenge: a 3D design increases the risk of failures within the chip. “Producing a chip that functions with 100% integrity is impossible. The system must be fail-tolerant and deal with errors,” he adds.

Regardless of the approach and the combination of technologies, researchers are ultimately left with no perfect option. Barring a radical breakthrough, they must rethink the fundamental way in which computing and processing take place.

Conte says two possibilities exist beyond pursuing the current technology direction.

One is to make radical changes, but limit these changes to those that happen “under the covers” in the microarchitecture. In a sense, this is what took place in 1995, except “today we need to use more radical approaches,” he says. For servers and high-performance computing, for example, ultra-low-temperature superconducting is being advanced as one possible solution. At present, the U.S. Intelligence Advanced Research Projects Activity (IARPA) is investing heavily in this approach within its Cryogenic Computing Complexity (C3) program. So far, these non-traditional logic gates have been produced only at small scale, and at a size roughly 200 times larger than today’s transistors.

Another is to “bite the bullet and change the programming model,” Conte says. Although numerous ideas and concepts have been forwarded, most center on creating fixed-function (non-programmable) accelerators for critical parts of important programs. “The advantage is that when you remove programmability, you eliminate all the energy consumed in fetching and decoding instructions.” Another possibility—and one that is already taking shape—is to move computation away from the CPU and toward the actual data. Essentially, memory-centric architectures, which are in development in the lab, could muscle up processing without any improvements in chips.

Finally, researchers are exploring completely different ways to compute, including brain-inspired neuromorphic models and quantum computing, both of which depart from the traditional von Neumann architecture. Rabaey says processors are already heading in this direction. As deep learning and cognitive computing emerge, GPU stacks are increasingly used to accelerate performance at the same or lower energy cost than traditional CPUs. Likewise, mobile chips and the Internet of Things bring entirely different processing requirements into play. “In some cases, this changes the paradigm to lower processing requirements on the system but having devices everywhere. We may see billions or trillions of devices that integrate computation and communication with sensing, analytics, and other tasks.”

In fact, as visual processing, big data analytics, cryptography, AR/VR, and other advanced technologies evolve, it is likely researchers will marry various approaches to produce boutique chips that best fit the particular device and situation. Concludes Conte: “The future is rooted in diversity and building devices to meet the needs of the computer architectures that have the most promise.”

The Incredible Shrinking Transistor

The history of semiconductors and Moore’s Law follows a long and somewhat meandering path. Conte, a professor at the schools of computer science and engineering at Georgia Institute of Technology, points out that computing has not always been tied to shrinking transistors. “The phenomenon is only about three decades old,” he says. Through the 1970s and into the early 1980s, high-performance computers, such as the CRAY-1, were built using discrete emitter-coupled logic (ECL) components. “It wasn’t really until the mid-1980s that the performance and cost of microprocessors started to eclipse these technologies,” he notes.

At that point, engineers developing high-performance systems began to gravitate toward Moore’s Law and adopt a focus on microprocessors. However, the big returns did not last long. By the mid-1990s, “The delays in the wires on-chip outpaced the delays due to transistor speeds,” Conte explains. This created a “wire-delay wall” that engineers circumvented by using parallelism behind the scenes. Simply put, the hardware extracted instructions and executed them in parallel, independent groups. This was known as the “superscalar era,” and the Intel Pentium Pro microprocessor, while not the first system to use this method, demonstrated the success of this approach.

Around the mid-2000s, engineers hit a power wall. Because the power in CMOS transistors is proportional to the operating frequency, when the power density reached 200 W/cm², cooling became imperative. “You can cool the system, but the cost of cooling something hotter than 150 watts resembles a step function, because 150 watts is about the limit for relatively inexpensive forced-air cooling technology,” Conte explains. The bottom line? Energy consumption and performance would not scale in the same way. “We had been hiding the problem from programmers. But now we couldn’t do that with CMOS,” he adds.
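
The frequency-power relationship Conte describes can be written with the standard first-order model of CMOS dynamic power (a textbook approximation, not a formula from this article):

$$P_{\text{dyn}} \approx \alpha \, C \, V_{dd}^{2} \, f,$$

where $\alpha$ is the switching activity factor, $C$ the switched capacitance, $V_{dd}$ the supply voltage, and $f$ the clock frequency. Once $V_{dd}$ can no longer be scaled down, any increase in $f$ or in the amount of switching logic per unit area pushes power density toward limits like the 200 W/cm² figure above, and total chip power past the roughly 150-watt point where inexpensive forced-air cooling stops being practical.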

No longer could engineers pack more transistors onto a wafer with the same gains. This eventually led to reducing the frequency of the processor core and introducing multicore processors. Still, the problem didn’t go away. As transistors became smaller—hitting approximately 65nm in 2006—performance and economic gains continued to subside, and as nodes dropped to 22nm and 14nm, the problem grew worse.

What is more, all of this has contributed to fabrication facilities becoming incredibly expensive to build, and semiconductors becoming far more expensive to manufacture. Today, there are only four major semiconductor manufacturers globally: Intel, TSMC, GlobalFoundries, and Samsung. That is down from nearly two dozen two decades ago.

To be sure, the semiconductor industry is approaching the physical limitations of CMOS transistors. Although alternative technologies are now in the research and development stage—including carbon nanotubes and tunneling field effect transistors (TFETs)—there is no evidence these next-gen technologies will actually pay off in a major way. Even if they do usher in further performance gains, they can at best stretch Moore’s Law by a generation or two.

The Future of Semiconductors

Over the last half-century, as computing has advanced by leaps and bounds, one thing has remained fairly static: Moore’s Law.

For more than 50 years, this concept has provided a predictable framework for semiconductor development. It has helped computer manufacturers and many other companies focus their research and plan for the future.

However, there are signs that Moore’s Law is reaching the end of its practical path. Although the IC industry will continue to produce smaller and faster transistors over the next few years, these systems cannot operate at optimal frequencies due to heat dissipation issues. This has “brought the rate of progress in computing performance to a snail’s pace,” wrote IEEE fellows Thomas M. Conte and Paolo A. Gargini in a 2015 IEEE-RC-ITRS report, On the Foundation of the New Computing Industry Beyond 2020.

Yet, the challenges do not stop there. There is also the fact that researchers cannot continually miniaturize chip designs; at some point over the next several years, current two-dimensional ICs will reach a practical size limit. Although researchers are experimenting with new materials and designs—some radically different—there currently is no clear path to progress. In 2015, Gordon Moore predicted the law that bears his name would wither within a decade. The IEEE-RC-ITRS report noted: “A new way of computing is urgently needed.”

As a result, the semiconductor industry is in a state of flux. There is a growing recognition that research and development must incorporate new circuitry designs and rely on entirely different methods to scale up computing power further. “For many years, engineers didn’t have to work all that hard to scale up performance and functionality,” observes Jan Rabaey, professor and EE Division Chair in the Electrical Engineering and Computer Sciences Department at the University of California, Berkeley. “As we reach physical limitations with current technologies, things are about to get a lot more difficult.”

MILITARY DRONES (RQ-2A Pioneer)

Contractor
Pioneer UAV Inc.

Service
United States Navy, United States Marine Corps

Propulsion
Sachs SF-350 gasoline engine, 26 horsepower

Weight
Max design gross take-off: 416 pounds (188.69 kg).

Airspeed
110 knots

Ceiling
15,000 ft

The RQ-2A represents one of the U.S. Navy’s first unmanned surveillance drones to enter the fleet. Originally designed jointly by AAI Corp. of the United States and Israel Aircraft Industries, the aircraft was adapted by the Navy for shipboard operation, deploying from recently recommissioned battleships in the 1980s. The UAV was later adopted by the Marine Corps for ground-based operations.

The Pioneer UAV system performs a wide variety of reconnaissance, surveillance, target acquisition and battle damage assessment missions. The low radar cross section, low infrared signature and remote control versatility provide a degree of cover for the aircraft. Pioneer provides the tactical commander with real-time images of the battlefield or target.

In the 1980s, U.S. military operations in Grenada, Lebanon, and Libya identified a need for an on-call, inexpensive, unmanned, over-the-horizon targeting, reconnaissance, and battle damage assessment (BDA) capability for local commanders. As a result, in July 1985, the Secretary of the Navy directed the expeditious acquisition of UAV systems for fleet operations using nondevelopmental technology. A competitive fly-off was conducted and two Pioneer systems were procured in December 1985 for testing during 1986. The initial system was delivered in July 1986 and subsequently deployed on the battleship USS Iowa (BB 61) in December 1986.

During 1987, three additional systems were delivered to the Marine Corps where they were operationally deployed on board LHA-class vessels as well as with several land-based units. Pioneer has operated in many theaters including the Persian Gulf, Bosnia, Yugoslavia and Somalia. Marine Corps Unmanned Aerial Vehicle Squadrons deployed to Iraq in 2003 during Operation Iraqi Freedom and currently support Marine operations in Iraq. The Pioneer is launched using rocket-assisted takeoff or pneumatic rails and is recovered by net at sea or by landing ashore on a 200-by-75-meter unimproved field. The Pioneer carries a payload of 65-100 pounds — including an electro-optical and infrared camera — and can patrol for more than five hours. Control of the RQ-2B can be handed off from control station to control station, thereby increasing the vehicle’s range and allowing launch from one site and recovery at another. With a ManPackable Receiving Station, Pioneer delivers payload imagery directly to forward-deployed Marines. Pioneer has flown other payloads including an acoustic-wave vapor sensor and a hyperspectral imagery sensor.

Desert Shield/Storm Anecdote: The surrender of Iraqi troops to an unmanned aerial vehicle did actually happen. All of the UAV units at various times had individuals or groups attempt to signal the Pioneer, possibly to indicate willingness to surrender. However, the most famous incident occurred when USS Missouri (BB 63), using her Pioneer to spot 16-inch gunfire, devastated the defenses of Faylaka Island off the coast near Kuwait City. Shortly thereafter, while still over the horizon and invisible to the defenders, USS Wisconsin (BB 64) sent her Pioneer over the island at low altitude.

When the UAV came over the island, the defenders heard the obnoxious sound of the two-cycle engine since the air vehicle was intentionally flown low to let the Iraqis know that they were being targeted. Recognizing that with the “vulture” overhead, there would soon be more of those 2,000-pound naval gunfire rounds landing on their positions with the same accuracy, the Iraqis made the right choice and, using handkerchiefs, undershirts, and bedsheets, they signaled their desire to surrender. Imagine the consternation of the Pioneer aircrew who called the commanding officer of Wisconsin and asked plaintively, “Sir, they want to surrender, what should I do with them?”

The RQ-2A Pioneer is operated by four Navy and Marine Corps units: VMU-1 and VMU-2 (USMC), and VC-6 and Training Wing Six (USN). The VC-6 system at Patuxent River Naval Air Station, Maryland, supports software changes, hardware acceptance, test and evaluation of potential payloads and technology developments to meet future UAV requirements. Training Wing Six at Naval Air Station Whiting Field, Florida, trains all Navy and Marine Corps Pioneer operators and maintainers.

