Description
A trivializing map is a field transformation whose Jacobian determinant exactly cancels the interaction terms in the action, providing a representation of the theory in terms of a deterministic transformation of a distribution from which sampling is trivial. A series of seminal studies has demonstrated that approximations of trivializing maps can be 'machine-learned' by a class of invertible neural models called Normalizing Flows, constructed such that the Jacobian determinant of the transformation can be efficiently computed. Asymptotically exact sampling from the theory of interest can be performed by drawing samples from a simple distribution, passing them through the network, and reweighting the resulting configurations (e.g. using a Metropolis test). From a theoretical perspective, this approach has the potential to become more efficient than traditional Markov Chain Monte Carlo sampling techniques, where autocorrelations severely diminish the sampling efficiency on the approach to the continuum limit. A major caveat is that it is not yet well understood how the size of models and the cost of training them are expected to scale. In previous work, we conducted an exploratory scaling study using two-dimensional $\phi^4$ theory with up to $20^2$ lattice sites, which suggested that training costs grow very quickly indeed. We present updated results using a more scalable architecture utilising convolutional neural networks, and discuss various factors contributing to the scalability of these methods.
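The reweighting step described above can be illustrated with a minimal sketch. The following Python/NumPy example is purely illustrative: the trained normalizing flow is replaced by a Gaussian stand-in, the two-dimensional $\phi^4$ action uses one common lattice parameterization with placeholder couplings, and the lattice size and sample count are arbitrary. It only outlines the independence-Metropolis accept/reject that corrects samples from an approximate model density towards the target $e^{-S}/Z$; it is not the architecture or code used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8                   # lattice extent (illustrative, not from the abstract)
m2, lam = -4.0, 8.0     # hypothetical bare couplings

def phi4_action(phi):
    """Two-dimensional phi^4 action with periodic boundaries
    (one common lattice parameterization; couplings are placeholders)."""
    kinetic = sum(-2.0 * phi * np.roll(phi, -1, axis=mu) for mu in (0, 1))
    potential = (4.0 + m2) * phi**2 + lam * phi**4
    return np.sum(kinetic + potential)

def draw_from_model(n):
    """Stand-in for a trained normalizing flow: returns samples and their
    log-density log q(phi). A real flow would push Gaussian noise through
    invertible layers and accumulate the log Jacobian determinant."""
    phi = rng.normal(size=(n, L, L))
    logq = -0.5 * np.sum(phi**2, axis=(1, 2)) - 0.5 * L * L * np.log(2 * np.pi)
    return phi, logq

# Independence Metropolis: accept a proposal phi' drawn from the model with
# probability min(1, exp(-S(phi') + S(phi)) * q(phi) / q(phi')), which corrects
# for the mismatch between the model density q and the target exp(-S)/Z.
proposals, logq = draw_from_model(10_000)
logp = np.array([-phi4_action(p) for p in proposals])
log_weights = logp - logq

chain = [0]
for i in range(1, len(proposals)):
    if np.log(rng.uniform()) < log_weights[i] - log_weights[chain[-1]]:
        chain.append(i)           # accept: move to the proposed configuration
    else:
        chain.append(chain[-1])   # reject: repeat the current configuration

acceptance = np.mean(np.diff(chain) != 0)
print(f"Metropolis acceptance rate: {acceptance:.2f}")
```

In practice the acceptance rate of this test is a common measure of how well the flow approximates the target distribution; with the Gaussian stand-in above it will be very low, whereas a well-trained flow pushes it towards one.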