How much memory do supercomputers have?

I was recently part of an HPC discussion, and one topic that came up was the amount of RAM per CPU core on supercomputers. I realized that everyone had a different idea of what amount was common, and that even though I have used dozens of machines, I didn’t really know what was most typical. I was also curious whether there were any trends over time. Memory per node for OpenMP, and whether machines are geared towards different scientific uses, is a discussion for the future.

It was a bit of a chore to assemble this data. I contacted the folks at Top500, but they didn’t have any more data than what’s on some of the individual system pages, and memory is not included in the spreadsheets here. Anyway, I managed to get 69 of the top 100 systems from the November 2014 list. I could probably get another dozen or two, and maybe I’ll do that in the future. A lot of systems simply don’t have clear data available, and some of the industry systems have no other data available at all. The Top500 list has total memory values for quite a few systems on their individual system pages, but those values include accelerator memory and memory on nonstandard nodes such as login nodes and high-memory nodes. It’s not always clear how many nodes a system has, how many accelerators there are per node, or how much memory the accelerators have, so those amounts can’t always be subtracted from the total. Accelerators or not, quite a few systems have a handful of nonstandard nodes. In pretty much all cases, those nonstandard nodes make up less than 10% of the system. If I was able to get data for the standard nodes, I used that; if I wasn’t, I just used the total memory for the machine, which is why in some cases the calculation came out to a weird number, like 2.03 GB. Close enough. The values I’ve assembled come from Top500, system webpages, Wikipedia, and other sources like press releases and news articles. Email me if you’d like to see the sources, and I’ll spend the time to clean up the spreadsheet.

So, there is really no standard amount of memory per core. It’s not totally clear whether the dates are when each machine was brought online or the date of its first entry into the Top500; those dates should be pretty similar for most machines. The most noticeable change is the move away from 1 and 2 GB/core and towards larger amounts of memory. The 1 GB/core machines were mainly Blue Gene/Q. The 2 GB/core figure came from a variety of machines. The 2.67 GB/core figure became more common in 2014 as Intel moved to 12 cores on Ivy Bridge, and dual-CPU nodes with those chips commonly had 64 GB. The peak around 3.2 GB/core came from a variety of machines. For several of those, I only have total core count and total memory, so I’d have to do more analysis to see what is going on there. For several others, it’s a case of 3.2 GB/core from 64 GB and dual 10-core Intel CPUs. Finally, 4 GB/core appears to be the most consistently chosen value over the last few years.
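For what it’s worth, the per-core arithmetic behind those figures is straightforward. Here’s a small sketch (the node configurations are just the illustrative examples from above, not the full dataset):

```python
def gb_per_core(node_mem_gb, sockets, cores_per_socket):
    """Memory per core for a standard compute node."""
    return node_mem_gb / (sockets * cores_per_socket)

# Dual 12-core Ivy Bridge node with 64 GB -> ~2.67 GB/core
print(round(gb_per_core(64, 2, 12), 2))  # 2.67

# Dual 10-core Intel node with 64 GB -> 3.2 GB/core
print(gb_per_core(64, 2, 10))  # 3.2
```

When only system-wide totals are available, the same division over total memory and total cores is what produces the odd values like 2.03 GB/core, since the nonstandard nodes get averaged in.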

[Chart: GB per core data, Top500 November 2014]