Today’s embedded solutions are driving higher-performance applications in smaller form factors, from sophisticated industrial control and automation systems that require complex processing algorithms to digital signage applications that demand high-performance graphics. These applications often call for low power consumption and support for open standards to maximize design flexibility. To enable them, developers need embedded processing platforms that deliver advanced performance while helping to reduce time-to-market and development costs.
New highly integrated system-on-chip (SOC) processors are available that combine a high-performance x86 multicore processor, a discrete-class graphics processing unit (GPU), an I/O controller, and error-correction code (ECC) memory support for high reliability, all on a single die. With this level of integration, developers can achieve new levels of processing efficiency while retaining a low-power design and significantly shrinking the board footprint, which reduces manufacturing costs and minimizes design complexity.
This article will describe the benefits, technology, and target markets for single-chip SOCs so developers can make informed decisions about whether this type of solution is right for their next embedded design projects.
A typical processing SOC comprises one or more microcontroller or DSP cores, memory blocks, timing sources, peripherals, external interfaces, analog interfaces, voltage regulators, and power management circuits. The processor is usually powerful enough to run Windows, Linux, Android, or a real-time operating system (RTOS).
Traditionally, SOC processor architectures have not been widely used for graphics-intensive applications. For these workloads, developers typically design systems in which the CPU and GPU are separate processing elements, so the two usually do not work together efficiently. Each has its own memory space, requiring an application to copy data from the CPU to the GPU and back again, and additional chips are needed to build a complete system.
The accelerated processing unit (APU), pioneered by AMD, combines a low-power CPU and a discrete-class GPU with a companion I/O unit in a two-chip architecture (Figure 1). The APU was the first step toward a new generation of SOC processors: it processes large amounts of complex data more efficiently than either a CPU or a GPU alone, but in a larger footprint than a single-chip SOC.
Single-chip SOCs, like their APU predecessors, enable “heterogeneous computing”: systems built from multiple processor types, typically CPUs and GPUs, usually on the same silicon die. Heterogeneous computing has numerous advantages, but most importantly it allows each processing element to do what it does best while working cooperatively with the others. Because the processors share a single memory space, there is no need to copy data back and forth between them.
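The benefit of a shared memory space can be sketched conceptually in Python. This is not real GPU code; the function and buffer names are purely illustrative, and a production design would use an API such as OpenCL. The point is simply that a discrete CPU+GPU design pays for two bus transfers per operation, while a shared-memory SOC pays for none:

```python
# Conceptual sketch only: contrasting a discrete CPU+GPU pipeline
# (explicit host<->device copies) with a shared-memory SOC pipeline
# (CPU and GPU address the same data). All names are hypothetical.

def discrete_pipeline(data):
    """Discrete GPU: each trip across the bus is an explicit copy."""
    copies = 0
    gpu_buffer = list(data)                    # copy host -> device
    copies += 1
    gpu_buffer = [x * x for x in gpu_buffer]   # kernel runs on the device copy
    result = list(gpu_buffer)                  # copy device -> host
    copies += 1
    return result, copies

def shared_memory_pipeline(data):
    """Heterogeneous SOC: the kernel operates on the data in place."""
    copies = 0
    for i, x in enumerate(data):
        data[i] = x * x
    return data, copies

r1, c1 = discrete_pipeline([1, 2, 3])
r2, c2 = shared_memory_pipeline([1, 2, 3])
assert r1 == r2 == [1, 4, 9]
assert c1 == 2 and c2 == 0   # same result, zero copies on the SOC
```

On real hardware the copies dominate for large data sets, which is why eliminating them matters far more than this toy example suggests.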
Using its high-performance vector processing capabilities, the onboard GPU is free to perform parallel operations on very large data sets at far lower power than a CPU could achieve. Meanwhile, the onboard CPU handles the scalar processing tasks that support general-purpose functions such as running the operating system. Heterogeneous computing on an integrated single-chip SOC delivers dramatic performance-per-watt gains compared to ad hoc CPU+GPU chipsets.
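The scalar/vector division of labor described above can be illustrated with a minimal sketch. Again, this is plain Python standing in for the concept, with hypothetical names: the "CPU" side runs branchy control logic, while the "GPU" side applies one uniform operation across a whole data set:

```python
# Conceptual sketch of heterogeneous work division (names are illustrative):
# scalar control flow stays on the CPU; the uniform, data-parallel
# operation is the kind of kernel a GPU executes across many elements.

def cpu_scalar_task(frames):
    """Scalar/control work: decide which frames actually need processing."""
    return [f for f in frames if f["dirty"]]

def gpu_vector_kernel(pixels, gain):
    """Data-parallel work: the same multiply applied to every element."""
    return [min(255, int(p * gain)) for p in pixels]

frames = [
    {"dirty": True,  "pixels": [10, 20, 30]},
    {"dirty": False, "pixels": [40, 50, 60]},
]
for frame in cpu_scalar_task(frames):
    frame["pixels"] = gpu_vector_kernel(frame["pixels"], 1.5)

assert frames[0]["pixels"] == [15, 30, 45]   # processed
assert frames[1]["pixels"] == [40, 50, 60]   # skipped: not dirty
```

Because the kernel applies the identical operation to every element with no branching, it maps naturally onto the wide vector units of a GPU, while the data-dependent selection logic stays on the CPU.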