Japanese astronomers have developed a new artificial intelligence (AI) technique to remove noise in astronomical data caused by random variations in galaxy shapes. After extensive training and testing on large mock datasets created by supercomputer simulations, they applied the new tool to actual data from Japan’s Subaru Telescope and found that the mass distribution derived using this method is consistent with the currently accepted models of the Universe. This is a powerful new tool for analyzing big data from current and planned astronomy surveys.

Wide area survey data can be used to study the large-scale structure of the Universe through measurements of gravitational lensing patterns. In gravitational lensing, the gravity of a foreground object, like a cluster of galaxies, can distort the image of a background object, such as a more distant galaxy. Some examples of gravitational lensing are obvious, such as the “Eye of Horus.” The large-scale structure, consisting mostly of mysterious “dark” matter, can distort the shapes of distant galaxies as well, but the expected lensing effect is subtle. Averaging over many galaxies in an area is required to create a map of foreground dark matter distributions.
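To make the averaging step concrete, the sketch below (Python with NumPy) bins hypothetical galaxy ellipticity measurements onto a coarse sky grid and averages them in each cell. All of the numbers here, including the galaxy count, ellipticity scatter, and grid size, are illustrative assumptions rather than values from the study.

```python
import numpy as np

# Hypothetical catalog: sky positions (deg) and two measured ellipticity
# components per galaxy. Real surveys provide these quantities per object.
rng = np.random.default_rng(0)
n_gal = 100_000
ra = rng.uniform(0.0, 3.0, n_gal)      # right ascension within a 3x3 deg patch
dec = rng.uniform(0.0, 3.0, n_gal)     # declination
e1 = rng.normal(0.0, 0.26, n_gal)      # ellipticity component 1 (shape-noise dominated)
e2 = rng.normal(0.0, 0.26, n_gal)      # ellipticity component 2

# Bin galaxies onto a coarse grid and average their ellipticities.
# The random intrinsic shapes largely cancel, leaving an estimate of
# the lensing distortion (shear) in each cell.
n_cells = 30
counts, xedges, yedges = np.histogram2d(ra, dec, bins=n_cells)
sum_e1, _, _ = np.histogram2d(ra, dec, bins=n_cells, weights=e1)
sum_e2, _, _ = np.histogram2d(ra, dec, bins=n_cells, weights=e2)
shear1 = np.divide(sum_e1, counts, out=np.zeros_like(sum_e1), where=counts > 0)
shear2 = np.divide(sum_e2, counts, out=np.zeros_like(sum_e2), where=counts > 0)

print("mean galaxies per cell:", counts.mean())
print("typical shear uncertainty per cell:", 0.26 / np.sqrt(counts.mean()))
```

Because the intrinsic shapes only average toward zero rather than cancel exactly, the residual scatter in each cell is the shape noise discussed next.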

But this technique of looking at many galaxy images runs into a problem; some galaxies are just innately a little funny looking. It is difficult to distinguish between a galaxy image distorted by gravitational lensing and a galaxy that is intrinsically distorted. This intrinsic variation is referred to as shape noise and is one of the limiting factors in research studying the large-scale structure of the Universe.

To compensate for shape noise, a team of Japanese astronomers first used ATERUI II, the world’s most powerful supercomputer dedicated to astronomy, to generate 25,000 mock galaxy catalogs based on real data from the Subaru Telescope. They then added realistic noise to these perfectly known artificial datasets, and trained an AI to statistically recover the lensing dark matter from the mock data.
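A minimal sketch of how such noisy/noise-free training pairs could be assembled is shown below, assuming the mock catalogs have already been converted into noise-free lens (convergence) maps. The function name, noise model, and parameter values are illustrative assumptions, not the team’s actual pipeline.

```python
import numpy as np

def make_training_pair(clean_map, galaxies_per_pixel=30.0, sigma_e=0.26, rng=None):
    """Return a (noisy, clean) lens-map pair for training.

    clean_map          : 2D array, a noise-free convergence map from simulation
    galaxies_per_pixel : assumed mean number of source galaxies per map pixel
    sigma_e            : assumed per-galaxy intrinsic ellipticity dispersion
    """
    if rng is None:
        rng = np.random.default_rng()
    # Shape noise per pixel scales as sigma_e / sqrt(number of galaxies).
    noise_sigma = sigma_e / np.sqrt(galaxies_per_pixel)
    noisy_map = clean_map + rng.normal(0.0, noise_sigma, clean_map.shape)
    return noisy_map, clean_map

# Illustrative use with a placeholder "simulation" map.
clean = np.zeros((64, 64))
noisy, clean = make_training_pair(clean)
```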

After training, the AI was able to recover previously unobservable fine details, helping to improve our understanding of cosmic dark matter. Then, applying this AI to real data covering 21 square degrees of the sky, the team found a distribution of foreground mass consistent with the standard cosmological model.

Schematic of the artificial intelligence used in this study, a generative adversarial network (GAN). The first network, called the image generator G, estimates and outputs a denoised lens map from a noisy lens map. The second network, the image discriminator D, compares the lens map created by G with the true noise-free lens map and tries to identify the image created by G as a fake. By inputting a large number of noisy/noise-free lens map pairs into the two networks, G is trained to make lens maps that are closer to the originals, and D is trained to more accurately spot the fakes made by G. In this study, 25,000 pairs of noisy and noise-free lens maps obtained from numerical simulations using ATERUI II were used to create a stable network. Finally, the trained image generator G estimates a denoised lens map from the actually observed noisy lens map. Credit: NAOJ
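The caption describes a conditional, image-to-image GAN. As a rough sketch only, the following PyTorch-style training loop shows how a generator G and discriminator D of this kind are typically updated against each other; the tiny network definitions, loss choice, and random placeholder maps are assumptions for illustration and do not reproduce the architecture used in the study.

```python
import torch
import torch.nn as nn

# Placeholder networks; the study's actual architectures are not reproduced here.
G = nn.Sequential(                      # image generator: noisy map -> denoised map
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
D = nn.Sequential(                      # image discriminator: map -> real/fake score
    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(noisy, clean):
    """One update on a batch of (noisy, noise-free) lens-map pairs."""
    # 1. Train D to label true noise-free maps as real and G's outputs as fake.
    fake = G(noisy).detach()
    d_loss = bce(D(clean), torch.ones(clean.size(0), 1)) + \
             bce(D(fake), torch.zeros(fake.size(0), 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train G to produce denoised maps that D classifies as real.
    denoised = G(noisy)
    g_loss = bce(D(denoised), torch.ones(noisy.size(0), 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()

# Illustrative call with random placeholder maps (batch of 8, 64x64 pixels).
noisy = torch.randn(8, 1, 64, 64)
clean = torch.randn(8, 1, 64, 64)
print(train_step(noisy, clean))
```

After enough such updates on simulated pairs, only the trained generator G is needed at analysis time: it takes an observed noisy lens map and outputs its denoised estimate, as described in the caption above.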

“This research shows the benefits of combining different types of research: observations, simulations, and AI data analysis,” says Masato Shirasaki, the leader of the team. “In this era of big data, we need to step across traditional boundaries between specialties and use all available tools to understand the data. If we can do this, it will open new fields in astronomy and other sciences.”