(Inside Science) -- In Cixin Liu’s 2008 science fiction novel The Three-Body Problem, an alien civilization struggles to accurately predict the movements of the three suns its planet orbits. Throughout the imaginative saga, the three alien suns sometimes dance perilously close to the planet, while at other times they slingshot one another far into space, leaving the planet with apocalyptic weather.
If the math behind merely three moving bodies is already next to unsolvable, what about a problem with four bodies, or a hundred, or a million?
"In many-body problems, there are exponentially more possible configurations, and it's simply impossible to consider them all," said Mari-Carmen Bañuls, a physicist from the Max Planck Institute of Quantum Optics in Garching, Germany.
Computational physicists are now proposing to use a process called deep learning to help us find answers to some of these seemingly unanswerable questions. Simon Trebst, a physicist from the University of Cologne in Germany, presented his team's research on the topic in Los Angeles last month during a meeting of the American Physical Society.
Many-body problems arise in any research that involves multiple interacting bodies, whether those bodies are molecules in a chemical solution, electrons inside a magnet, or something else. Each additional "body" in a many-body system makes it exponentially more difficult to calculate the system's precise behavior.
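To see what "exponentially more configurations" means, consider a minimal sketch: a hypothetical toy system of N two-state bodies (say, spins that point either up or down). The spin model and the function name here are illustrative, not drawn from the article's research.

```python
from itertools import product

# Each of n two-state bodies can be up (+1) or down (-1),
# so the joint system has 2**n possible configurations.
def num_configurations(n):
    return 2 ** n

# For tiny systems we can still enumerate every configuration explicitly...
configs = list(product([+1, -1], repeat=4))
print(len(configs))  # 16 configurations for just 4 spins

# ...but the count explodes: 50 spins already exceed a quadrillion.
print(num_configurations(50))  # 1125899906842624
```

This is why, as Bañuls notes, considering every configuration of a large system is simply impossible, and why researchers look for shortcuts instead.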
Instead of trying to understand these systems through brute force calculation, scientists often focus on finding solutions for just the meaningful parameters, such as the rate of a reaction between two chemical solutions, or the strength of the magnetic field produced by the interacting electrons. But even with these shortcuts, most many-body problems still require a supercomputer to run for hours or even days to complete a single calculation, and every once in a while, a component of the calculation fluctuates around zero and causes the whole thing to fall apart.
This is known as the sign problem, and is what Trebst and his colleagues set out to solve using deep learning. Part of their ongoing effort was published in Scientific Reports last year.
To understand the significance of the sign problem, look no further than an election. To declare the winner of an election, we count votes. In a very close race, an otherwise insignificant counting error can be decisive for the overall outcome. For example, in the 2000 presidential election George W. Bush and Al Gore both received 48.8 percent of the nearly 6 million votes cast in Florida. In the official count, Bush received 537 more votes and won the state's electoral votes -- and the presidency itself. Now, if just one in 10,000 Florida votes had been counted incorrectly, there would have been roughly 600 miscounted votes, and potentially a different outcome.
When researchers use computer models to extract certain properties from a group of interacting bodies -- be they molecules, stars or something else -- sometimes the interactions compete against each other like Republicans and Democrats. The quantity being sought is the small net difference between large competing contributions, so even a tiny relative error in each contribution can overwhelm the final answer. Oftentimes, the precision required to predict, for example, the magnetic properties of a material is simply too much for even a state-of-the-art supercomputer to handle.
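The election analogy can be made concrete with a toy numerical sketch. The setup below is illustrative only (it is not a real quantum simulation): we sum many large contributions with competing signs whose true total is exactly zero, then perturb each term by a tiny relative error, mimicking statistical noise.

```python
import random

random.seed(0)

# 10,000 large contributions with alternating signs; the true net sum is 0.
terms = [(-1) ** i * 1_000_000.0 for i in range(10_000)]
true_total = sum(terms)

# Perturb every term by a tiny relative error (at most 1 part in 10 million).
noisy = [t * (1 + random.uniform(-1e-7, 1e-7)) for t in terms]
noisy_total = sum(noisy)

# Each term is accurate to within 0.1 out of a million, yet the net result
# (which should be 0) is now swamped by the accumulated noise.
print(true_total)
print(noisy_total)
```

Run it and the "true" total is 0, while the noisy total comes out on the order of whole units, an error enormously larger than the answer itself. That, in miniature, is the kind of precision catastrophe the sign problem produces.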
Trebst and his colleagues wanted to see if a deep learning computer could get around these precision requirements without brute force calculation. Similar to the way pattern recognition algorithms can identify faces, their algorithm can identify the specific conditions under which a type of material transitions from an insulator to a conductor, after being trained on a suitable amount of data.
To test their algorithm, they chose a problem that can be set up in two ways: one setup was relatively simple, while the other harbored a potential sign problem. First, they solved the problem the "easy" way using traditional methods; then they used the new algorithm to solve the same problem with the sign problem in the way. Instead of faltering, the new algorithm powered through the sign problem and arrived at the correct conclusion.
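The train-then-apply workflow can be sketched in miniature. This is a hypothetical one-feature logistic regression, not the team's actual deep network: we invent a synthetic "order parameter" feature whose distribution differs between two phases, train on configurations from a setup where the phase labels are known, and then classify unseen configurations.

```python
import math
import random

random.seed(1)

# Hypothetical stand-in: each "configuration" is summarized by one noisy
# feature whose typical value differs between the two phases.
def sample(phase):
    center = 0.8 if phase == 1 else 0.2
    return center + random.gauss(0, 0.1)

# Labeled training data from the "easy" setup (phase known in advance).
data = [(sample(p), p) for p in (0, 1) for _ in range(200)]
random.shuffle(data)

# Logistic regression: p(phase = 1) = sigmoid(w*x + b), fit by SGD.
w, b = 0.0, 0.0
for _ in range(2000):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))
        grad = p - y          # gradient of the log loss w.r.t. the logit
        w -= 0.1 * grad * x
        b -= 0.1 * grad

# Apply the trained classifier to configurations it has never seen.
def predict(x):
    return 1 if w * x + b > 0 else 0
```

The real work replaces this single hand-picked feature with raw many-body configurations and a deep network that learns its own features, but the logic is the same: learn the phase boundary where labels are cheap, then deploy the classifier where traditional methods break down.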
"For problems where we have no quantitative or qualitative handle on, this can really help us," said Trebst.
In the future, this approach may be used to find solutions to previously unsolvable many-body problems that don't have an alternative route around the sign problem. However, this approach also creates a conundrum -- if there is no existing solution to a problem and the deep learning computer doesn't "show its work," how can we tell if it has gotten it right?
Bañuls said that as long as the researchers are evaluating something physical, they will have a reference point. "Ultimately if we're talking about a new material that we're inventing a new model for, we can always go back to experiments," said Bañuls.