A researcher working for the US Department of Energy’s Fermi National Accelerator Laboratory has found that GPUs make ideal tools for capturing details about network data.
Fermilab’s Wenji Wu told CIO that GPU-based network monitors keep pace with all the traffic flowing through networks running at more than 10Gbps.
As bandwidth has skyrocketed, network analysis tools have found it hard to keep up. To make matters worse, network admins want to inspect operational data in real time.
All this is done with standard x86 processors or custom ASICs, which are limited in what they can do. CPUs have the memory bandwidth of a goldfish and tend to drop packets. ASICs have the memory bandwidth but are an arse to programme. They also can't split processing duties into parallel tasks, which is very important these days.
In a paper, Wenji wrote that GPUs have "a great parallel execution model." They offer high memory bandwidth, easy programmability, and can split the packet capturing process across multiple cores.
Monitoring networks requires reading all the data packets as they cross the network, which requires more parallelism than you can poke a stick at.
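The reason this parallelises so well is that packets are independent of one another: each one can be decoded without looking at its neighbours, so each GPU thread can take one packet. Wu's prototype is CUDA-based; the sketch below is not Fermilab's code, just an illustration of the same data-parallel decomposition using Python threads in place of GPU threads, parsing a batch of hand-built IPv4 headers.

```python
# Illustrative sketch only: shows the data-parallel idea behind GPU packet
# capture. Each packet is independent, so each worker (a GPU thread in the
# real design) decodes one packet's headers with no coordination.
import struct
from concurrent.futures import ThreadPoolExecutor

def parse_ipv4_header(packet: bytes) -> dict:
    """Decode the fixed 20-byte IPv4 header at the start of `packet`."""
    ver_ihl, tos, total_len = struct.unpack_from("!BBH", packet, 0)
    proto = packet[9]
    src = ".".join(str(b) for b in packet[12:16])
    dst = ".".join(str(b) for b in packet[16:20])
    return {"version": ver_ihl >> 4, "length": total_len,
            "proto": proto, "src": src, "dst": dst}

def capture_batch(packets):
    """Parse a batch of packets in parallel, one worker per packet."""
    with ThreadPoolExecutor() as pool:
        return list(pool.map(parse_ipv4_header, packets))

def make_packet(src: str, dst: str) -> bytes:
    """Build a minimal 20-byte IPv4 header standing in for captured traffic."""
    hdr = bytearray(20)
    hdr[0] = 0x45                       # version 4, IHL 5 (20 bytes)
    struct.pack_into("!H", hdr, 2, 20)  # total length
    hdr[9] = 6                          # protocol: TCP
    hdr[12:16] = bytes(int(x) for x in src.split("."))
    hdr[16:20] = bytes(int(x) for x in dst.split("."))
    return bytes(hdr)

batch = [make_packet("10.0.0.1", "10.0.0.2"),
         make_packet("192.168.1.5", "8.8.8.8")]
headers = capture_batch(batch)
```

On a GPU the same per-packet function would run as thousands of threads at once, which is where the throughput at 10Gbps-plus comes from.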
Wenji has built a prototype at Fermilab to demonstrate the feasibility of a GPU-based network monitor, using an Nvidia M2070 GPU and an off-the-shelf NIC to capture network traffic.
Not only did it not catch fire, it could easily be expanded with additional GPUs, he said.
The GPU-based system was able to speed up performance by as much as 17 times. Compared to a six-core CPU, the speed-up from using a GPU was threefold.
If this is the case, then the makers of commercial network appliances could use GPUs to boost their devices' line rates. Developers could save a bomb by using pre-existing GPU programming models such as Nvidia's CUDA, Wenji said.