Sunday, September 29, 2019

Boltzmann generators: Sampling equilibrium states of many-body systems with deep learning

Frank Noé, Simon Olsson, Jonas Köhler, Hao Wu (2019)
Highlighted by Jan Jensen

Figure 1A from the paper (Copyright © 2019 The Authors, some rights reserved)

The paper presents a novel method for predicting free energy differences far more efficiently than conventional sampling. Currently, this is usually done by running MD simulations and counting how often each state is visited along the trajectory. However, transitions between states are rare events, which means very long and costly trajectories are needed, even when using various tricks to force the transitions.
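To make the conventional approach concrete, here is a minimal sketch (with a made-up toy trajectory, not data from the paper) of how a free energy difference follows from state populations via ΔF = -kT ln(N_B/N_A):

```python
import numpy as np

kT = 2.494  # k_B * T in kJ/mol at 300 K

# Toy trajectory of state labels (0 = state A, 1 = state B); in practice
# these would come from discretizing a long MD simulation.
rng = np.random.default_rng(0)
traj = rng.choice([0, 1], size=100_000, p=[0.8, 0.2])

n_a = np.sum(traj == 0)
n_b = np.sum(traj == 1)
delta_F = -kT * np.log(n_b / n_a)  # free energy of B relative to A
print(f"dF(B-A) = {delta_F:.2f} kJ/mol")
```

The catch is that this estimate is only reliable if the trajectory crosses between the states many times, which is exactly what rare transitions prevent.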

The solution Noé et al. present is to "train a deep invertible neural network to learn a coordinate transformation from x to a so-called “latent” representation z, in which the low-energy configurations of different states are close to each other and can be easily sampled."

For example, the user supplies a few example configurations of each state [px(x)] and trains the NN to find a new set of variables (z) with a much simpler probability distribution [pz(z)], minimising the difference between the two distributions using the Kullback-Leibler divergence as a loss function.
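As a rough sketch of what such training objectives look like (placeholder code, not the authors' implementation): for an invertible map x = F(z) with a Gaussian latent distribution, the paper's "training by example" and "training by energy" losses both reduce to simple expectations involving the energy and the Jacobian determinant. Here `flow` and `energy_fn` are hypothetical stand-ins for the invertible network and the reduced potential energy:

```python
import torch

def ml_loss(flow, x_batch):
    # "Training by example": map the supplied configurations to latent
    # space and maximize their likelihood under the Gaussian prior.
    z, log_det = flow.inverse(x_batch)
    return (0.5 * (z ** 2).sum(dim=1) - log_det).mean()

def kl_loss(flow, energy_fn, batch_size=256, dim=2):
    # "Training by energy": minimize KL(generated || Boltzmann), which up
    # to an additive constant equals E_z[u(F(z)) - log|det dF/dz|].
    z = torch.randn(batch_size, dim)
    x, log_det = flow(z)
    return (energy_fn(x) - log_det).mean()
```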

Since the NN is invertible, one can now sample repeatedly from pz(z) and transform the samples back to x-space to get a more accurate px(x), which can then be reweighted to give the Boltzmann distribution. Since pz(z) is a simple Gaussian, most of the sampled structures will have a high Boltzmann probability, so you don't have to sample that many structures.
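A sketch of this sampling-and-reweighting step, with the same hypothetical `flow` and `energy_fn` as above: the generator's own density q(x) follows from the change-of-variables formula, and exp(-u(x))/q(x) gives the importance weight of each generated structure:

```python
import torch

def sample_with_weights(flow, energy_fn, n_samples=10_000, dim=2):
    z = torch.randn(n_samples, dim)                # draw from pz(z)
    x, log_det = flow(z)                           # transform to x-space
    log_q = -0.5 * (z ** 2).sum(dim=1) - log_det   # log q(x), up to a constant
    log_w = -energy_fn(x) - log_q                  # log importance weight
    w = torch.exp(log_w - log_w.max())             # shift for numerical stability
    return x, w / w.sum()

# Any equilibrium observable is then a weighted average over the samples.
```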

The main technical advance is the use of an invertible NN, which allows you to go both from x to z and from z to x. This is done by using an NN architecture in which only simple mathematical operations (addition and multiplication) that can easily be reversed (subtraction and division) are allowed.
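The paper builds this from RealNVP-style coupling layers. A simplified PyTorch sketch (fixed half-and-half split, small dense network; details differ from the paper): half of the coordinates pass through unchanged, while the other half are scaled and shifted by amounts computed from the first half, so the inverse is just divide-and-subtract and the log-determinant comes out for free:

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.half = dim // 2
        # Small network that computes the log-scale and shift for the
        # second half of the coordinates from the first half.
        self.net = nn.Sequential(
            nn.Linear(self.half, 64), nn.ReLU(),
            nn.Linear(64, 2 * (dim - self.half)),
        )

    def forward(self, z):
        z1, z2 = z[:, :self.half], z[:, self.half:]
        log_s, t = self.net(z1).chunk(2, dim=1)
        x2 = z2 * torch.exp(log_s) + t           # multiply and add
        return torch.cat([z1, x2], dim=1), log_s.sum(dim=1)

    def inverse(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(x1).chunk(2, dim=1)
        z2 = (x2 - t) * torch.exp(-log_s)        # subtract and divide
        return torch.cat([x1, z2], dim=1), -log_s.sum(dim=1)
```

Stacking several such layers (swapping which half is transformed each time) gives an expressive network whose inverse and Jacobian are always available.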

It would be very interesting to see if a similar approach can be used for inverse design.


This work is licensed under a Creative Commons Attribution 4.0 International License.
