Written by Adrian Yeung on November 07, 2018

We currently create over 2.5 quintillion bytes of data every day [1], and this number is set to increase with the rise of the Internet of Things and the proliferation of applications built on machine learning and artificial intelligence. Processing this massive amount of data will require ever more computing power. Moore’s Law, the observation that the number of transistors on integrated circuits doubles every two years, effectively stopped delivering its usual speed and efficiency gains around 2005, making it unwise to rely on increasing transistor density alone. However, a combination of economic incentives and alternatives to transistor density means that computing power is likely to continue growing steadily until quantum computing takes over.

Autonomous vehicles rely heavily on computing power to support the AI algorithms that form the backbone of their real-time decision making. Waymo is currently seen as the leader in this space, having recently received permission from the California DMV to extend its driverless testing program to the state [2]. Unsurprisingly, for driverless cars to be safe, they require an immense amount of both data and computing power. This is because autonomous vehicles must:

  1. Understand and map the external environment around the car, including vehicles, people, objects, etc.;
  2. Make appropriate real-time decisions, for example, deciding the best course of action when approaching amber traffic lights or when confronted with a hazard.

Driverless cars are equipped with an array of sensors, including lidar, cameras, radar and several others. By combining the data from each sensor, the car can build a detailed map of its external environment and use artificial intelligence to make decisions in real time. Unfortunately, even with 5G, the latency risks mean that the processing needs to happen locally, on hardware inside the car.
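As a purely illustrative sketch of the kind of loop involved (the sensor formats, thresholds and policy below are invented for this example and do not reflect any manufacturer’s pipeline), the perception-and-decision cycle might look something like this:

# Hypothetical sketch of a sensor-fusion -> decision loop, for illustration only.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    lidar_points: list    # e.g. [(x, y, distance_m), ...]
    camera_objects: list  # e.g. ["pedestrian", "traffic_light_amber"]
    radar_tracks: list    # e.g. [{"range_m": 40.0, "closing_speed_mps": 3.0}]

def fuse(frame: SensorFrame) -> dict:
    """Combine the three sensor views into one simple world model."""
    nearest_obstacle_m = min(
        [p[2] for p in frame.lidar_points] +
        [t["range_m"] for t in frame.radar_tracks],
        default=float("inf"),
    )
    return {
        "nearest_obstacle_m": nearest_obstacle_m,
        "hazards": frame.camera_objects,
    }

def decide(world: dict) -> str:
    """Toy policy: brake for close obstacles or amber lights, otherwise cruise."""
    if world["nearest_obstacle_m"] < 20.0 or "traffic_light_amber" in world["hazards"]:
        return "brake"
    return "cruise"

# One iteration of the real-time loop.
frame = SensorFrame(
    lidar_points=[(1.0, 2.0, 35.0)],
    camera_objects=["traffic_light_amber"],
    radar_tracks=[{"range_m": 40.0, "closing_speed_mps": 3.0}],
)
print(decide(fuse(frame)))  # -> "brake"

In a real vehicle, a loop like this runs many times per second over vastly richer data, which is exactly why the latency of a round trip to a remote server is unacceptable and the computing power has to sit in the car.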

Nvidia’s current generation of computers designed for autonomous vehicles is said to perform up to 30 trillion operations per second. To provide some context, this is 16 times faster than the latest generation of Tesla’s Autopilot system and 16,000 times faster than the system used in the winning car at the DARPA Urban Challenge in 2007.
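Taking those figures at face value, a quick back-of-the-envelope division (the intermediate numbers are derived purely from the ratios above, not from published specifications) gives a sense of the implied throughput of the earlier systems:

\[
\frac{3\times 10^{13}\ \text{ops/s}}{16} \;\approx\; 1.9\times 10^{12}\ \text{ops/s (Tesla Autopilot generation)},
\qquad
\frac{3\times 10^{13}\ \text{ops/s}}{16{,}000} \;\approx\; 1.9\times 10^{9}\ \text{ops/s (2007 DARPA winner)}
\]

In other words, roughly a thousandfold gap separates the 2007 hardware from the Tesla-era system, with another sixteenfold jump up to Nvidia’s platform.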

Moore’s Law and its supposed death

Historically, when forecasting the growth of computing power, we have relied on Moore’s Law. Gordon Moore, one of the co-founders of Intel, noticed in 1965 that the number of transistors on integrated circuits had doubled every year, while costs halved and speeds remained constant. He predicted that this trend would continue for at least the next 10 years [3].

This simple prediction forecast computing power extraordinarily well between 1965 and 1975, and although Gordon Moore had to revise his prediction - to transistor density doubling every two years - the revised version has continued to perform well over the past several decades and forms the basis of what we now call Moore’s Law.

Although there was no theory behind his observation, Moore’s Law agreed with a technological rule of thumb called Dennard Scaling. Essentially, Dennard Scaling states that as transistors shrink, power density remains constant: doubling the transistor density roughly doubles the processing power while keeping the chip’s overall energy cost the same.
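A rough sketch of the classical scaling rules shows why this worked. If every linear dimension of a transistor shrinks by a factor of $k$, capacitance and voltage each scale down by $k$ while switching frequency scales up by $k$, so the dynamic power of a single transistor behaves roughly as

\[
P \;=\; C V^{2} f \;\propto\; \frac{1}{k}\cdot\frac{1}{k^{2}}\cdot k \;=\; \frac{1}{k^{2}} .
\]

Since the same shrink packs $k^{2}$ times as many transistors into the same area, power per unit area stays constant: more transistors and higher clock speeds for the same energy budget.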

Around 2005, it became clear that Dennard Scaling would no longer hold. While transistor density continued to double, the accompanying gains in speed and energy efficiency became much smaller than before. Fortunately, increasing transistor density is not the only way to achieve higher speeds or better energy efficiency; by focusing too heavily on Moore’s Law, we risk ignoring other potential avenues of growth.

Economic incentives as the driver of improvements in computing power

Due to economic incentives, even without Dennard Scaling, computing power should continue to increase in the future. If the economic benefits of big data are as large as current forecasts estimate, large financial rewards await the companies that can solve the challenge of increasing computing power first. While CPUs have struggled to keep up with Moore’s Law, GPUs have been able to exceed it. The difference between the fortunes of CPUs and GPUs suggests that the issue may not only be technological, but also economic.

Forecasting the future demand for CPUs and GPUs paints an interesting picture. CPUs were designed around the applications of the 1970s and 80s, and the use cases for much higher CPU processing power are currently unclear. The use cases for more capable GPUs, in contrast, are clear: GPUs are highly suited to workloads like machine learning and big data, which will undoubtedly take advantage of the extra computing power. This difference suggests that, in the future, the higher financial rewards will be found in the GPU market. To explore this further, we can look specifically at Intel and Nvidia.

Intel has recently struggled to move to a smaller process node, having initially planned to release its 10nm CPUs in 2016. These chips have been consistently delayed, as low yields from the current manufacturing process have prevented Intel from mass producing them [4]. Intel has also signalled its intent to expand its presence in the GPU market, hiring AMD’s graphics chief, Raja Koduri, at the end of 2017. Clearly, Intel recognises the financial rewards associated with GPUs.

Nvidia has had much more recent success. At its 2018 GPU Technology Conference, the company highlighted that its GPUs are 25 times faster than they were five years ago; had they merely followed Moore’s Law, they would have been at most 10 times faster. Nvidia has been able to surpass Moore’s Law thanks to simultaneous advances on several fronts: architecture, interconnects, memory technology, algorithms and more. How long Nvidia can keep outperforming Moore’s Law remains unclear, as sustained innovation across so many areas has never been demonstrated over the long run. Apple is another company trying to advance computing power through innovation across multiple fronts.

Forks in the road

The idea of offloading specific tasks to specialised processors is not a new concept. Mainframe manufacturers such as IBM created front-end processors that handled tasks such as multiplexing and storage management back in the 1970s. However, Apple has taken this concept to a new level with its A-series mobile processors. By integrating many different specialised processors on one highly efficient chip, Apple is setting a new benchmark for performance at low power consumption. The A12X processor used in the latest iPad Pro comes extremely close to the desktop performance of the latest four-core Intel i7, as Geekbench scores show.

 

[Image: Geekbench benchmark comparison]

 

In addition, the A12X hands off specific tasks, such as facial and object recognition, to its dedicated neural engine. With Apple and TSMC having cracked the obstacles of 7nm production, we are seeing the dawn of other ways to keep delivering performance gains in a chip that sips power. The i7 consumes 28W; Apple does not publish the power consumption of the A12X, but it does not take much imagination to see that it is likely to use less than half the power of the i7, given that the entire iPad Pro draws less than 18W to run the screen, storage, speakers, battery management, WiFi, cellular and processor.
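A rough bound makes that reasoning explicit, assuming purely for illustration that the display, radios and other components draw at least 4W of the budget (this split is an assumption, not an Apple specification):

\[
P_{\text{A12X}} \;<\; P_{\text{iPad Pro}} - P_{\text{other components}} \;<\; 18\,\text{W} - 4\,\text{W} \;=\; 14\,\text{W} \;=\; \tfrac{1}{2}\times 28\,\text{W}
\]

If the other components account for even a few watts, the chip itself is left with well under half of the i7’s power budget. While Apple continues to make advances, quantum computing may eventually render traditional computing redundant.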

What to expect from quantum computing

Quantum computing offers the promise of breakthroughs in many areas, such as drug discovery and artificial intelligence. Instead of using bits like traditional computers, quantum computers use qubits, which have two useful properties: superposition and entanglement. Two clear use cases for quantum computing are databases and encryption. A quantum computer can search an unsorted database in a time roughly proportional to the square root of the time a traditional computer would need. When dealing with large databases, which will be increasingly common, this offers massive efficiency gains.
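To make that square-root speed-up concrete, here is a rough illustration based on Grover’s search algorithm (the database size is an arbitrary example, not a figure for any particular system):

\[
\text{classical search: } O(N)\ \text{lookups}, \qquad \text{quantum (Grover): } O(\sqrt{N})\ \text{queries}
\]
\[
N = 10^{12}\ \text{records} \;\Rightarrow\; \sim 10^{12}\ \text{classical lookups vs. } \sim 10^{6}\ \text{quantum queries}
\]

The gap only widens as the data grows, which is why search over very large datasets is one of the most commonly cited early applications.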

Google is hoping to reach a giant milestone in quantum computing within the next few months. Quantum supremacy is the idea that a sufficiently powerful quantum computer will be able to perform calculations that are practically impossible for a traditional computer. It is currently undemonstrated, but could mark the start of a quantum computing revolution. Google has asked NASA to help prove that its latest 72-qubit quantum chip has in fact achieved quantum supremacy [5]. However, not everyone agrees with Google’s view: Alibaba published a paper in May suggesting that quantum chips will require lower error rates before they can outperform traditional computers [6].

Predictions for the future

Economic incentives will likely continue to drive advances in the processing power of traditional computers. Quantum computers offer the promise of a revolution in computing, but as with any unproven technology, they could end up as a niche product with only a handful of applications. Google could change the landscape dramatically if it can prove it has achieved quantum supremacy, but traditional computers will remain relevant for years to come.

 

Sources:

 

  1. https://www.forbes.com/sites/bernardmarr/2018/05/21/how-much-data-do-we-create-every-day-the-mind-blowing-stats-everyone-should-read/#207984aa60ba
  2. https://medium.com/waymo/a-green-light-for-waymos-driverless-testing-in-california-a87ec336d657
  3. https://newsroom.intel.com/wp-content/uploads/sites/11/2018/05/moores-law-electronics.pdf
  4. https://appleinsider.com/articles/18/07/27/intel-delays-10nm-cannon-lake-processor-production-to-late-2019
  5. https://www.technologyreview.com/s/612381/google-has-enlisted-nasa-to-help-it-prove-quantum-supremacy-within-months/
  6. https://arxiv.org/abs/1805.01450
