Square Kilometre Array Engineering Design Work Concluded

May 13, 2019

The SKA’s Science Data Processor (SDP) consortium has concluded its engineering design work, marking the end of five years’ work to design one of two supercomputers that will process the enormous amounts of data produced by the SKA’s telescopes.

The international consortium, led by the University of Cambridge in the UK, has designed the elements that will together form the “brain of the SKA”. In total, close to 40 institutions in 11 countries took part. SDP is the second stage of processing for the masses of digitised astronomical signals collected by the telescope’s receivers, following the correlation and beamforming that takes place in the Central Signal Processor (CSP).
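
For illustration only – the article doesn’t describe the consortium’s correlator designs – cross-correlating the voltage streams from a pair of antennas amounts to multiplying one stream by the complex conjugate of the other in each frequency channel and averaging over time. A minimal numpy sketch, with toy random data standing in for real antenna signals:

    import numpy as np

    # Toy inputs: complex voltage samples from two antennas, already
    # channelised into (n_channels, n_samples) arrays.
    rng = np.random.default_rng(0)
    shape = (64, 4096)  # (n_channels, n_samples)
    antenna_a = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
    antenna_b = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

    # Correlation: multiply one stream by the conjugate of the other and
    # average over time, giving one complex visibility per channel.
    visibilities = (antenna_a * np.conj(antenna_b)).mean(axis=1)
    print(visibilities.shape)  # (64,)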

“It’s been a real pleasure to work with such an international team of experts, from radio astronomy but also the High-Performance Computing industry,” said Maurizio Miccolis, SDP’s Project Manager for the SKA Organisation. “We’ve worked with almost every SKA country to make this happen, which goes to show how hard what we’re trying to do is.”

The role of the consortium was to design the computing hardware platforms, software, and algorithms needed to process science data from CSP into science data products.

“SDP is where data becomes information,” said Rosie Bolton, Data Centre Scientist for the SKA Organisation. “This is where we start making sense of the data and produce detailed astronomical images of the sky.”

To do this, SDP will need to ingest the data and move it through data reduction pipelines at staggering speeds, before forming data packages that will be copied and distributed to a global network of regional centres, where scientists around the world will access them.
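
As a rough sketch of that flow – every stage below is an illustrative placeholder, not the consortium’s actual pipeline design – a reduction pipeline is a chain of stages, each of which shrinks or repackages the stream before it is shipped out:

    import numpy as np

    def ingest(raw):
        """Placeholder for receiving channelised data from CSP."""
        return np.asarray(raw, dtype=complex)

    def calibrate(vis):
        """Placeholder gain correction: normalise by mean amplitude."""
        return vis / np.mean(np.abs(vis))

    def average(vis, factor=4):
        """Reduce data volume by time-averaging groups of samples."""
        usable = len(vis) // factor * factor
        return vis[:usable].reshape(-1, factor).mean(axis=1)

    def package(vis):
        """Bundle the reduced data for distribution to regional centres."""
        return {"n_samples": vis.size, "payload": vis}

    # Run the stages in order; each one reduces or repackages the stream.
    data = np.random.default_rng(1).standard_normal(1024)
    for stage in (ingest, calibrate, average, package):
        data = stage(data)
    print(data["n_samples"])  # 256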

SDP itself will be composed of two supercomputers: one in Cape Town, South Africa, to process data from SKA-mid, and one in Perth, Western Australia, to process data from SKA-low.

“We estimate SDP’s total compute power to be around 250 PFlops – that’s 25% faster than IBM’s Summit, the current fastest supercomputer in the world,” said Maurizio. “In total, up to 600 PB of data will be distributed around the world every year from SDP – that’s enough to fill more than a million average laptops.”

Additionally, because of the sheer quantity of data flowing into SDP – some 5 Tb/s (terabits per second), or 100,000 times the projected global average broadband speed in 2022 – it will need to decide on its own, in near real time, what is noise and what is data worth keeping.
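
The quoted figures hold up to back-of-the-envelope arithmetic; note that Summit’s peak performance and the “average laptop” capacity below are assumptions made for the check, not numbers from the article:

    # Sanity checks on the quoted figures.
    sdp_pflops = 250
    summit_pflops = 200                       # assumed peak for IBM Summit
    print(sdp_pflops / summit_pflops - 1)     # 0.25 -> "25% faster"

    annual_output_pb = 600
    laptop_gb = 500                           # assumed "average laptop" disk
    laptops = annual_output_pb * 1e6 / laptop_gb   # 1 PB = 1e6 GB
    print(f"{laptops:,.0f}")                  # 1,200,000 -> "more than a million"

    ingest_tbps = 5
    print(ingest_tbps * 1e6 / 100_000)        # 50 -> ~50 Mb/s average broadband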

The team also designed SDP to detect and remove man-made radio frequency interference (RFI) – from satellites and other sources – from the data.
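
The article doesn’t say which excision algorithms SDP will use. As an illustration of the general idea only, a common baseline is to flag samples whose amplitude is an outlier against a robust noise estimate, for instance a median/MAD threshold:

    import numpy as np

    def flag_rfi(amplitudes, n_sigma=5.0):
        """Flag amplitude outliers relative to the median, using the MAD
        as a robust noise estimate. Illustrative only; production flaggers
        (e.g. SumThreshold) are considerably more sophisticated."""
        median = np.median(amplitudes)
        mad = np.median(np.abs(amplitudes - median))
        sigma = 1.4826 * mad  # MAD -> std deviation for Gaussian noise
        return np.abs(amplitudes - median) > n_sigma * sigma

    rng = np.random.default_rng(2)
    spectrum = rng.normal(1.0, 0.1, 1024)   # clean band
    spectrum[100:105] += 10.0               # inject a narrowband interferer
    print(flag_rfi(spectrum).sum())         # 5 channels flagged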

“By pushing what’s technologically feasible and developing new software and architecture for our HPC needs, we also create opportunities to develop applications in other fields,” added Maurizio.

High-Performance Computing plays an increasingly vital role in enabling research in fields such as weather forecasting, climate research, drug development and many others where cutting-edge modelling and simulations are essential.

Prof. Paul Alexander, Consortium Lead at the University of Cambridge, concluded: “I’d like to thank everyone involved in the consortium for their hard work over the years. Designing this supercomputer wouldn’t have been possible without such an international collaboration behind it.”
