Dubbed Radeon Instinct, the new lineup is a GPU-based solution for deep learning inference and training. AMD has also released MIOpen, a new free, open-source library and framework for GPU accelerators.
MIOpen is designed for high-performance machine intelligence applications and is optimised for the deep learning frameworks in AMD's ROCm software suite.
The first products are the Radeon Instinct MI6, the MI8, and the MI25. The 150W Radeon Instinct MI6 accelerator is powered by a Polaris-based GPU, packs 16GB of memory (224GB/s peak bandwidth), and can manage up to 5.7 TFLOPS of peak FP16 performance.
The lineup also includes the Fiji-based Radeon Instinct MI8. Like the Radeon R9 Nano, the Radeon Instinct MI8 features 4GB of High-Bandwidth Memory (HBM) with peak bandwidth of 512GB/s. AMD claims the MI8 will offer up to 8.2 TFLOPS of peak FP16 compute performance, with a board power that typically falls below 175W.
The Radeon Instinct MI25 accelerator uses AMD's next-generation Vega GPU architecture and has a board power of approximately 300W. All the Radeon Instinct accelerators are passively cooled, but when installed in a server chassis you can bet there will be plenty of airflow.
Like the recently released Radeon Pro WX series of professional graphics cards for workstations, Radeon Instinct accelerators will be built by AMD. All the Radeon Instinct cards will also support AMD Multiuser GPU (MxGPU) hardware virtualisation.