Decoder: Understanding Accelerated Computing

Accelerated computing uses parallel processing to speed up work on demanding applications, from AI and data analytics to simulations and visualizations.

Accelerated computing is the use of specialized hardware to dramatically speed up work, often with parallel processing that handles many frequently occurring tasks at once. It offloads demanding work that can bog down CPUs, processors that typically execute tasks serially.
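To make the serial-versus-parallel contrast concrete, here is a minimal sketch in CUDA C++, the programming model introduced later in this article. The names and structure are illustrative assumptions, not from the article: the CPU version walks the data one element at a time, while the GPU version assigns each element to its own thread. (A complete, runnable host program appears later in the article.)

```cuda
// Serial version: one CPU core walks through every element in turn.
void scale_cpu(float *data, float factor, int n) {
    for (int i = 0; i < n; ++i)
        data[i] *= factor;
}

// Parallel version: the loop disappears. The GPU launches one thread
// per element, and each thread scales exactly one value.
__global__ void scale_gpu(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard the final, partial block
        data[i] *= factor;
}
```

Because the GPU runs thousands of such threads concurrently, throughput comes from the number of parallel processing engines rather than the speed of a single core.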

Born in the PC, accelerated computing came of age in supercomputers. It lives today in your smartphone and every cloud service. And now companies of every stripe are adopting it to transform their businesses with data.

Accelerated computers blend CPUs and other kinds of processors together as equals in an architecture sometimes called heterogeneous computing.

Accelerated Computers: A Look Under the Hood

GPUs are the most widely used accelerators. Data processing units (DPUs) are a rapidly emerging class that offloads and accelerates networking. Each has a role to play along with the host CPU to create a unified, balanced system.

Both commercial and technical systems today embrace accelerated computing to handle jobs such as machine learning, data analytics, simulations and visualizations. It’s a modern style of computing that delivers high performance and energy efficiency.

How PCs Made Accelerated Computing Popular

Specialized hardware called co-processors has long appeared in computers to accelerate the work of a host CPU. These chips first gained prominence around 1980 with floating-point processors that added advanced math capabilities to the PC.

Over the next decade, the rise of video games and graphical user interfaces created demand for graphics accelerators. By 1993, nearly 50 companies were making graphics chips or cards.

In 1999, NVIDIA launched the GeForce 256, the first chip to offload key 3D-rendering tasks from the CPU. It was also the first to use four graphics pipelines for parallel processing.

NVIDIA called it a graphics processing unit (GPU), putting a stake in the ground for a new category of computer accelerators.

How Researchers Harnessed Parallel Processing 

By 2006, NVIDIA had shipped 500 million GPUs. It led a field of just three graphics vendors and saw the next big thing on the horizon.

Some researchers were already developing their own code to apply the power of GPUs to tasks beyond the reach of CPUs. For example, a team at Stanford led by Ian Buck unveiled Brook, the first widely adopted programming model to extend the popular C language for parallel processing.

Buck started at NVIDIA as an intern and now serves as vice president of accelerated computing. In 2006, he led the launch of CUDA, a programming model to harness the parallel-processing engines inside the GPU for any task.
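As a rough sketch of how that model works in practice, consider vector addition, the canonical first CUDA program. The names and sizes below are illustrative assumptions, not drawn from the article: the host CPU allocates GPU memory, copies data over and launches a kernel that runs the same code across thousands of threads, one element per thread.

```cuda
// A minimal, hypothetical CUDA example: element-wise vector addition.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// The kernel runs once per thread; each thread computes one output element.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n)                                      // guard the final, partial block
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;  // one million elements
    size_t bytes = n * sizeof(float);

    // Allocate and fill host (CPU) buffers.
    float *hA = (float *)malloc(bytes);
    float *hB = (float *)malloc(bytes);
    float *hC = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hA[i] = 1.0f; hB[i] = 2.0f; }

    // Allocate device (GPU) buffers and copy the inputs over.
    float *dA, *dB, *dC;
    cudaMalloc((void **)&dA, bytes);
    cudaMalloc((void **)&dB, bytes);
    cudaMalloc((void **)&dC, bytes);
    cudaMemcpy(dA, hA, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(dA, dB, dC, n);

    // Copy the result back to the host and spot-check one value.
    cudaMemcpy(hC, dC, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %.1f\n", hC[0]);  // expect 3.0

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    free(hA); free(hB); free(hC);
    return 0;
}
```

The `<<<blocks, threads>>>` launch syntax is where the parallelism is expressed: instead of iterating, the program asks the GPU to schedule enough threads to cover every element at once.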

Teamed with a G80 processor in 2007, CUDA powered a new line of NVIDIA GPUs that brought accelerated computing to an expanding array of industrial and scientific applications.

HPC + GPUs = Accelerated Science

This family of GPUs destined for the data center expanded on a regular cadence with a succession of new architectures named after innovators — Tesla, Fermi, Kepler, Maxwell, Pascal, Volta, Turing, Ampere.

Like the graphics accelerators of the 1990s, these new GPUs faced many rivals, including novel parallel processors such as the transputer from Inmos.

“But only the GPU survived because the others had no software ecosystem and that was their death knell,” said Kevin Krewell, an analyst at Tirias Research.

Experts in high-performance computing around the world built accelerated HPC systems with GPUs to pioneer science. Their work today spans fields from the astrophysics of black holes to genome sequencing and beyond.

This article first appeared on the NVIDIA blog.

