Don’t forget to research your memory bandwidth before purchasing this server

Written By Adarsh Shankar Jha

The evolution of per-socket and per-core memory bandwidth

ServeTheHome has written an article that should be of interest to anyone buying a server, or anyone curious about how memory bandwidth has changed on Intel and AMD platforms in the recent past. They track performance from 2013 to the present, including some theoretical results for AMD’s Bergamo, to give a great overview of how the two companies have diverged over the years. It also helps explain why AMD’s EPYC has been able to eat up quite a bit of the market once dominated by Intel’s Xeons.

First they looked at memory channels per socket multiplied by memory bandwidth per DIMM, and you can see small jumps as memory frequency increases, but the big ones come from increasing the number of channels. The transition from DDR4 to DDR5 also had a significant impact on overall bandwidth, as one might expect. A different picture emerges when you look strictly at memory bandwidth per core, with Intel’s results looking essentially flat since 2019. AMD, on the other hand, shows a lot of movement due to its focus on core counts: EPYC core counts have grown past Xeon’s while per-socket memory bandwidth has stayed similar, so each core gets a smaller share.
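To put rough numbers on that relationship, here is a minimal sketch of the arithmetic the article is tracking: theoretical per-socket bandwidth is channels times transfer rate times bus width, and per-core bandwidth simply divides that by the core count. The configurations below (channel counts, DDR speeds, core counts) are illustrative assumptions loosely resembling recent server parts, not figures taken from the ServeTheHome data.

```python
# Minimal sketch: theoretical per-socket and per-core memory bandwidth.
# Peak bandwidth per channel (GB/s) ~= transfer rate (MT/s) * 8 bytes / 1000.

def peak_bandwidth_gbs(channels: int, mt_per_s: int, bus_bytes: int = 8) -> float:
    """Theoretical peak bandwidth in GB/s for a given channel count and DDR speed."""
    return channels * mt_per_s * bus_bytes / 1000

# Hypothetical configurations; swap in the real channel counts, DDR speeds,
# and core counts of the parts you are comparing.
configs = [
    ("8ch DDR4-3200, 40 cores",    8, 3200,  40),
    ("12ch DDR5-4800, 96 cores",  12, 4800,  96),
    ("12ch DDR5-4800, 128 cores", 12, 4800, 128),
]

for label, channels, speed, cores in configs:
    socket_bw = peak_bandwidth_gbs(channels, speed)
    print(f"{label:26s} {socket_bw:6.1f} GB/s per socket, "
          f"{socket_bw / cores:5.2f} GB/s per core")
```

Running that shows the pattern the charts describe: jumping from eight DDR4-3200 channels to twelve DDR5-4800 channels more than doubles per-socket bandwidth, yet per-core bandwidth barely moves, and it shrinks once core counts climb past roughly a hundred.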

These findings could influence which vendor you go with for an upgrade. If your application cares more about memory bandwidth per core than the sheer number of cores you can throw at it, Xeon remains a solid choice. On the other hand, if you are after raw processing power and don’t need to worry as much about feeding those cores with memory-intensive work, EPYC should be seriously considered.
