Speaker
Description
Neural samplers generally generate configurations from a distribution that only approximates the desired target distribution. Importance weights, defined as the ratio of the target probability of a sample to the probability with which it was actually generated, account for this discrepancy and allow the distribution to be corrected, either by reweighting the samples or by accepting/rejecting them with the Metropolis algorithm. Ideally, these weights would be distributed around one with small variance. It turns out, however, that for a poorly trained sampler the weight distribution can be heavy-tailed, even to the point of having infinite variance. In my talk I will discuss possible causes of this behavior and its connection with mode collapse.
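The heavy-tail phenomenon can be illustrated with a toy Gaussian example (not taken from the talk; all names and parameter values below are illustrative). A standard-normal target is sampled from a mismatched zero-mean normal "sampler"; the importance weight of each sample is the ratio of the two densities. For Gaussians one can check analytically that the true weight variance is infinite once the sampler's variance drops below half the target's, and the empirical weights then become heavy-tailed:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_pdf(x, sigma):
    # log density of a zero-mean normal with standard deviation sigma
    return -0.5 * (x / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def weight_variance(sigma_q, n=100_000):
    # draw from the "sampler" N(0, sigma_q); the target is N(0, 1)
    x = rng.normal(0.0, sigma_q, n)
    # importance weight w = p_target(x) / q_sampler(x)
    w = np.exp(log_pdf(x, 1.0) - log_pdf(x, sigma_q))
    w = w / w.mean()  # self-normalize so the weights center around one
    return w.var()

# slightly too wide a sampler: benign, small weight variance
var_wide = weight_variance(1.1)
# too narrow a sampler (sigma_q**2 < 1/2): infinite true weight variance,
# so the empirical variance is large and unstable across runs
var_narrow = weight_variance(0.6)
print(f"sigma_q=1.1: empirical weight variance {var_wide:.3f}")
print(f"sigma_q=0.6: empirical weight variance {var_narrow:.3f}")
```

In this sketch the empirical variance for the too-narrow sampler is dominated by a few rare samples in the target's tails that receive enormous weights, which is exactly the regime where reweighting and Metropolis accept/reject become inefficient.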