Accelerator chips increasingly provide the performance boost that device scaling once delivered, changing basic assumptions about how data moves within an electronic system and where it should be processed.
To the outside world, little appears to have changed. But beneath the glossy exterior, and almost always hidden from view, accelerator chips are becoming an integral part of most designs where performance is considered essential. And as the volume of data continues to rise—more sensors, higher-resolution images and video, and more inputs from connecting systems that in the past were standalone devices—that boost in performance is required. So even if systems don’t run noticeably faster on the outside, they need to process much more data without slowing down.
This renewed emphasis on performance has created an almost insatiable appetite for accelerators of all types, even in mobile devices such as a smartphone, where a single ASIC used to be the norm.
“Performance can move total cost of ownership the most,” said Joe Macri, corporate vice president and product CTO at AMD. “Performance is a function of frequency and instructions per cycle.”
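The relationship Macri describes is the classic first-order performance model: instruction throughput is the product of clock frequency and instructions per cycle (IPC). A minimal sketch of that arithmetic, with all function names and numbers illustrative rather than taken from the article:

```python
def instructions_per_second(frequency_hz: float, ipc: float) -> float:
    """First-order performance model: throughput scales with clock
    frequency and with instructions retired per cycle (IPC)."""
    return frequency_hz * ipc

# Hypothetical figures: a 3 GHz general-purpose core retiring
# 4 instructions per cycle.
baseline = instructions_per_second(3e9, 4.0)    # 1.2e10 instructions/s

# An accelerator often wins on effective IPC for its target workload,
# even at a lower clock (again, hypothetical numbers).
accelerated = instructions_per_second(1.5e9, 32.0)  # 4.8e10 instructions/s
```

The second call illustrates why accelerators shine on suitable workloads: raising effective IPC can outweigh a lower clock frequency.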
And this is where accelerators really shine. Included in this class of processors are custom-designed ASICs that offload a particular operation in software, as well as standard GPU chips, heterogeneous CPU cores that can work on jobs in parallel (even within the same chip), and both discrete and embedded FPGAs.
But accelerators also add challenges for design teams. They require more planning and a deeper understanding of how software and algorithms work within a device, and they are highly application-specific. Reuse of accelerators can be difficult, even with programmable logic.
“Solving problems with accelerators requires more effort,” said Steve Mensor, vice president of marketing at Achronix. “You do get a return for that effort. You get way better performance. But those accelerators are becoming more and more…