The Milky Way contains more than 100 billion stars, each following its own evolutionary path through birth, life, and sometimes violent death.
For decades, astrophysicists have dreamed of creating a complete simulation of our galaxy, a digital twin that could test theories about how galaxies form and evolve. That dream has always crashed against an impossible computational wall.
Until now.
Researchers led by Keiya Hirashima at RIKEN's Center for Interdisciplinary Theoretical and Mathematical Sciences have achieved what seemed beyond reach: a simulation representing every single one of those 100 billion stars over 10,000 years of galactic time.
The breakthrough, presented at this year's Supercomputing Conference, came from an unexpected marriage of artificial intelligence and traditional physics simulation.

The problem wasn't merely one of scale, though the numbers are staggering.
Previous state-of-the-art galaxy simulations could handle roughly one billion solar masses, meaning their smallest "particle" represented a cluster of about 100 stars.
Individual stellar events were averaged away, lost in the noise. Capturing what happens to single stars requires tiny time steps through the simulation, short enough to catch rapid changes such as supernova explosions.
But smaller time steps demand vastly more computing power. With conventional methods, simulating the Milky Way at the resolution of individual stars would require 315 hours of supercomputer time for every million years of galactic evolution.
Modelling even one billion years would consume 36 years of real time.
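
The arithmetic behind that figure is straightforward; a quick back-of-the-envelope check in Python, using only the numbers quoted above, reproduces it:

```python
# Back-of-the-envelope check of the figures quoted above (values from the
# article; the conversion itself is simple arithmetic).
HOURS_PER_MYR = 315   # supercomputer hours per million simulated years
TARGET_MYR = 1_000    # one billion years = 1,000 million years

total_hours = HOURS_PER_MYR * TARGET_MYR
total_years = total_hours / 24 / 365.25
print(f"{total_hours:,} hours = {total_years:.0f} years of wall-clock time")
# -> 315,000 hours = 36 years of wall-clock time
```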
Adding more processor cores doesn't solve the problem either: beyond a certain point, efficiency plummets while energy consumption skyrockets.
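
The article doesn't spell out why parallel scaling stalls, but Amdahl's law, a standard model of strong scaling, illustrates the general effect: any serial fraction of the workload caps the achievable speedup no matter how many cores are added. The 5 percent serial fraction below is purely illustrative, not a measured property of these simulations:

```python
# Illustrative only: Amdahl's law for strong scaling. The 5% serial
# fraction is an assumption, not a figure from the RIKEN work.
def amdahl_speedup(cores: int, serial_fraction: float = 0.05) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (1, 16, 256, 4096, 65536):
    s = amdahl_speedup(cores)
    print(f"{cores:>6} cores -> {s:6.1f}x speedup ({s / cores:5.1%} efficiency)")
# Efficiency collapses long before the speedup nears its 20x ceiling.
```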
Hirashima's team found their solution in a deep learning surrogate model.
They trained an AI on high-resolution simulations of supernovae, teaching it to predict how gas expands during the 100,000 years following an explosion.

This AI shortcut handles the rapid small-scale physics without dragging down the rest of the model, allowing the simulation to simultaneously track both galaxy-wide dynamics and individual stellar catastrophes.
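
In outline, a surrogate like this replaces the expensive inner solver with a learned function: the conditions around an explosion go in, and the predicted gas state some time later comes out at the cost of a single forward pass. The sketch below is a minimal, hypothetical stand-in; the inputs, outputs, and architecture are assumptions for illustration, not the team's actual model:

```python
# Hypothetical sketch of a supernova-feedback surrogate: a small MLP that
# maps local pre-explosion conditions to the gas state ~100,000 years later.
# Input/output choices and sizes are illustrative assumptions only.
import torch
import torch.nn as nn

class SupernovaSurrogate(nn.Module):
    def __init__(self, n_in: int = 4, n_out: int = 3, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_out),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [batch, 4] -- e.g. local gas density, temperature, metallicity,
        # explosion energy. Returns [batch, 3] -- e.g. shell radius,
        # shell velocity, remaining thermal energy.
        return self.net(x)

model = SupernovaSurrogate()
local_conditions = torch.rand(8, 4)        # stand-in for simulation data
predicted_state = model(local_conditions)  # one cheap pass per supernova
```

Trained on high-resolution supernova runs, a network of this kind answers in a fraction of a second what would otherwise force the entire galaxy onto the supernova's tiny time step.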
The performance gains are remarkable. What would have taken 36 years now requires just 115 days.
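
Taken together, those two wall-clock figures imply a speedup of roughly a hundredfold, which a one-line check confirms:

```python
# Speedup implied by the article's two wall-clock figures.
baseline_days = 36 * 365.25   # conventional approach: 36 years
surrogate_days = 115          # AI-assisted approach
print(f"{baseline_days / surrogate_days:.0f}x faster")  # -> 114x faster
```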
The team verified their results with large-scale tests on RIKEN's Fugaku supercomputer and The University of Tokyo's Miyabi system, confirming that the AI-enhanced simulation produces accurate results at an unprecedented scale.

This approach could transform how we model any system involving vastly different scales of space and time.
Climate science, weather prediction, and ocean dynamics all face similar challenges, needing to link processes that range from molecular to planetary scales.
This article was originally published by Universe Today.
