Supercomputing research collaboration to bring fusion energy to UK grid in 2040s


The UK Atomic Energy Authority (UKAEA) and the University of Cambridge are collaborating with Dell Technologies and Intel to access the supercomputing resources needed to get green fusion power onto the energy grid within the next 20 years.  

With the UK government’s 2050 net-zero economy target looming large, and with plans afoot to decommission the country’s ageing and environmentally unfriendly fossil fuel plants, efforts are being made on multiple fronts to ramp up the amount of renewable energy available via the UK grid.

The UKAEA is among the parties seeking out alternative forms of green power to plug the gap and is pioneering the use of fusion energy. Generating fusion power is a notoriously difficult scientific and engineering challenge, but one that needs to be overcome.

Energy security and net-zero secretary Grant Shapps said: “The world needs fusion energy like never before, [and it] has the potential to provide a ‘baseload’ power, underpinning renewables like wind and solar, which is why we’re investing over £700m to make the UK a global hub for fusion energy.”

Fusion energy generation involves heating a mixture of two forms of hydrogen – deuterium and tritium – to extreme temperatures, creating a controlled plasma in which the hydrogen nuclei fuse to form helium, releasing energy that can be harnessed to generate electricity. This is a process the UKAEA is looking to replicate in power plants.

The difficulty lies in the fact that the temperatures needed to create the plasma are around 10 times hotter than the core of the sun, but – if the process can be made to work – it would pave the way for a new source of energy that emits no greenhouse gases and carries a low risk of generating radioactive by-products.
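For the technically minded, the energy released by each deuterium-tritium reaction – roughly 17.6 MeV – can be recovered from the mass defect between the reactants and the products, via E = Δm c². The following minimal sketch uses published atomic masses and the standard conversion of 1 u ≈ 931.494 MeV; it is illustrative only, not code from the project:

```cpp
#include <cstdio>

// Illustrative sketch: energy released by deuterium-tritium fusion,
// estimated from the mass defect (E = Δm·c²).
// Masses are in unified atomic mass units (u); 1 u ≈ 931.494 MeV/c².
int main() {
    const double m_deuterium = 2.014102; // u
    const double m_tritium   = 3.016049; // u
    const double m_helium4   = 4.002602; // u
    const double m_neutron   = 1.008665; // u
    const double u_to_mev    = 931.494;  // MeV per u

    double mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron);
    double energy_mev  = mass_defect * u_to_mev;

    printf("Energy per D-T fusion reaction: %.1f MeV\n", energy_mev); // ~17.6 MeV
    return 0;
}
```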

“Fusion is the natural process that powers the heart of our sun and causes all of the stars in the night sky to shine, [and] our aim is to try to harness that energy here on Earth to produce a clean, green form of energy production,” said Rob Akers, head of advanced computing at the UKAEA, in a video shown during a press-only roundtable to discuss the project.

“The challenge we have on our hands is that there isn’t enough time for using test-based design to work out what this [fusion] power plant needs to look like. We’ve got to design… in the virtual world using huge amounts of supercomputing and artificial intelligence [AI].”

The UKAEA has stated that it needs to get a sustainable source of fusion energy onto the grid in the 2040s, and is developing a prototype design of the required power plant, known as STEP (Spherical Tokamak for Energy Production), which is to be built in Nottinghamshire.

It has collaborated with Dell, Intel and the University of Cambridge to develop a digital twin of the site, which is designed to be “highly immersive” and will be hosted in a virtual environment known as the “industrial metaverse”.

Specifically, this collaboration will look at how exascale supercomputers and AI can deliver the digital twin design so that UKAEA reaches its goal of having an on-grid fusion power plant by the 2040s.

“The collaboration brings together world-class research and innovation, and supports the government’s ambitions to make the UK a scientific and technological superpower,” said the organisations, in a group statement.

“It aims to make the next generation of high-performance computers (HPC) accessible, practical to use and vendor agnostic.”

During the roundtable discussion, Paul Calleja, director of research computing services at the University of Cambridge, went into more detail about the cost and skills challenges involved with large-scale supercomputing projects like this.

“From an IT perspective, exascale today represents systems costing north of £600m of capital to deploy [and] they consume north of 20MW of power. So, that costs £50m a year just to plug it in,” he said.
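That running-cost figure is straightforward to sanity-check: a machine drawing 20MW continuously consumes about 175GWh a year, so an electricity price in the region of £0.28/kWh (an assumed figure for illustration, not one quoted in the discussion) puts the annual bill close to £50m. A minimal sketch:

```cpp
#include <cstdio>

// Back-of-the-envelope check of the quoted running cost.
// The 20MW draw is from the article; the electricity price is an
// assumed illustrative figure, not a quoted one.
int main() {
    const double power_kw       = 20000.0;     // 20 MW continuous draw
    const double hours_per_year = 24.0 * 365.0;
    const double price_per_kwh  = 0.28;        // assumed GBP/kWh (hypothetical)

    double kwh_per_year  = power_kw * hours_per_year;    // ~175 GWh
    double cost_per_year = kwh_per_year * price_per_kwh; // GBP

    printf("Annual energy: %.0f GWh\n", kwh_per_year / 1e6);
    printf("Annual cost:   ~GBP %.0fm\n", cost_per_year / 1e6); // ~GBP 49m
    return 0;
}
```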

“These systems are [also] very difficult to exploit. You may run applications on them that can only get a fraction of the peak performance because of some bottleneck in [the] scalability of the code… so these are very specific, difficult systems.”

The University of Cambridge has been a collaboration partner of the UKAEA for four years, and Calleja said it has come to realise that the best way to approach a project like this is to work together.

“When you want to design a single supercomputer, you really need to do that as a collaboration between hardware providers, scientists and application developers – all working together looking at the holistic problem. That doesn’t happen often,” Calleja continued.

“We work closely with our industrial partners at Dell and Intel, and with people in my team, to understand how to string these things together.”

The university has three generations of Intel x86 systems in operation, but these are not suitable for a deployment like this from a performance point of view, so the group is looking to use Intel’s new Data Center GPU Max technology.

“Intel’s GPU technology gives us that step function in performance per watt, [and] that’s really what this is about: performance per watt is a key metric along with how many bytes we can move around the system,” he added.
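To put that metric in context: an exascale machine sustains on the order of 10^18 floating-point operations per second, so within the 20MW envelope Calleja quoted earlier it must deliver roughly 50 gigaflops per watt. The arithmetic, as an illustrative sketch:

```cpp
#include <cstdio>

// Illustrative performance-per-watt calculation for an exascale system.
// 1 exaFLOPS = 1e18 floating-point operations per second (standard definition);
// the 20MW power envelope is the figure quoted in the article.
int main() {
    const double peak_flops = 1e18; // 1 exaFLOPS
    const double power_w    = 20e6; // 20 MW

    double flops_per_watt = peak_flops / power_w;
    printf("Required efficiency: %.0f GFLOPS/W\n", flops_per_watt / 1e9); // 50 GFLOPS/W
    return 0;
}
```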

And because of the complexities involved in pulling together a system like this, building it on open source technologies is a must.

“We might work with Intel today, but who knows what’s going to happen in the future, so we don’t want to be locked into a particular vendor… and here Intel has got a really interesting programming environment called oneAPI, which is based largely on SYCL [a royalty-free, cross-platform abstraction layer],” he said.

“And this oneAPI SYCL environment gives us a really nice way to develop codes that if we wish can run on Intel GPUs, [but] we can also run those codes on Nvidia GPUs and even AMD GPUs with minimal recoding.”
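To give a flavour of what that portability looks like, below is a minimal SYCL 2020 kernel – a generic vector addition for illustration, not code from this project – where the same source can be built for Intel, Nvidia or AMD GPUs by retargeting the compiler backend.

```cpp
#include <sycl/sycl.hpp>
#include <vector>
#include <cstdio>

// Minimal SYCL 2020 example (generic vector addition, not project code).
// The same source can target Intel, Nvidia or AMD GPUs by switching the
// compiler backend, which is the portability point Calleja describes.
int main() {
    constexpr size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    sycl::queue q; // selects a default device (a GPU if one is available)
    {
        sycl::buffer<float> buf_a(a.data(), sycl::range<1>(n));
        sycl::buffer<float> buf_b(b.data(), sycl::range<1>(n));
        sycl::buffer<float> buf_c(c.data(), sycl::range<1>(n));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(buf_a, h, sycl::read_only);
            sycl::accessor B(buf_b, h, sycl::read_only);
            sycl::accessor C(buf_c, h, sycl::write_only);
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    } // buffers go out of scope here, copying results back to the host vectors

    printf("c[0] = %.1f\n", c[0]); // expect 3.0
    return 0;
}
```

With Intel’s oneAPI DPC++ compiler, for instance, this builds with `icpx -fsycl`; Codeplay’s oneAPI plugins supply the Nvidia and AMD backends for the same source.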

He added: “All our work, where possible, uses open standard hardware implementations that use open source software implementations [and we can] make those blueprints available.”

And this is important because it means other companies and universities will be able to reap the benefits of this work.

“How do you make these supercomputers accessible to a broad range of scientists and engineers? [We’re doing this by] developing a novel cloud-native operating system environment, which we call Scientific OpenStack, developed with a UK SME called StackHPC,” said Calleja.

“[As] part of the democratisation [push], you have to make these systems accessible to companies and scientists that are not used to supercomputing technologies. So that middleware layer is really important.”


