Canada Is Quietly Adding 10 Petaflops to Its Network of Academic Supercomputers

Simon Fraser University (SFU) has officially launched Canada’s most powerful academic supercomputer. The new 3.6-petaflop system, known as “Cedar,” is just the beginning of a big push by the Canadian government to upgrade its network of 50 aging HPC machines serving the nation’s academic research community.

The upgrade, which began last year, will increase the research network’s aggregate performance from 2 to 12 petaflops, while increasing its total storage capacity from 20 to more than 50 petabytes. The effort is being led by Compute Canada, a government organization tasked with deploying these advanced research computing (ARC) systems for the research community.

While Canada’s supercomputing performance is being multiplied six-fold, the number of datacenters housing those systems is being pared down from 27 to between 5 and 10. Four of those facilities will contain the research network’s largest supercomputers, which will form the backbone of the ARC network.

One of these is the new 3.6-petaflop Cedar, which is now running at SFU’s new datacenter located on the Burnaby campus in British Columbia. That system alone has more computational horsepower than the entire remainder of Canada’s supercomputing network dedicated to the research community. Researchers will use the SFU machine to support a wide array of scientific work, including personalized medicine, green energy technology, and artificial intelligence.

Cedar is a heterogeneous cluster composed of Dell server nodes of various flavors, connected with Intel’s Omni-Path network. All nodes are based on Intel Xeon CPUs of the “Broadwell” persuasion. However, most of Cedar’s performance – 2.744 petaflops, to be exact – comes from GPU accelerators, in this case 584 NVIDIA Tesla P100 GPUs, which are spread across 146 of the system’s 902 nodes.
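Those figures can be sanity-checked with some quick arithmetic. A sketch, assuming the quoted 2.744 petaflops is peak double-precision spread evenly across the GPUs (the article doesn't specify precision, so that's an assumption):

```python
# Back-of-envelope check of Cedar's quoted GPU numbers.
# Assumption (not stated in the article): 2.744 petaflops is peak FP64,
# divided evenly across the 584 Tesla P100s.
total_gpu_pflops = 2.744
num_gpus = 584
gpu_nodes = 146

per_gpu_tflops = total_gpu_pflops * 1000 / num_gpus
gpus_per_node = num_gpus / gpu_nodes

print(f"{per_gpu_tflops:.2f} TFLOPS per GPU")   # ~4.70
print(f"{gpus_per_node:.0f} GPUs per node")     # 4
```

The ~4.7 TFLOPS per card matches the P100's published FP64 peak for the PCIe variant, which suggests the quoted figure is indeed a double-precision peak number.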

Other than GPU acceleration, most of the node variation in Cedar has to do with memory capacity, which ranges from 128 GB up to a whopping 3 TB. The 3 TB nodes are backed by four E7 Xeon CPUs, making them the only configuration that doesn’t use a dual-socket E5 Xeon setup.

All Cedar nodes are equipped with local storage. The ones with GPUs have a single 800GB SSD, while the others come with two 480GB SSDs. External storage is provided by DDN gear, specifically the ES14K platform, which in this case comprises 640 8TB SAS drives, backed by SSD-based metadata controllers.

If all goes according to plan, Cedar will soon have company. Next month, another petascale supercomputer, known as Graham, is scheduled to be up and running at the University of Waterloo in Ontario, Canada….

