The International Symposium on Lattice Field Theory is an annual conference that attracts scientists from around the world. Originally a forum for physicists to discuss recent developments in lattice gauge theory, it is now the largest conference of its type and has grown to include areas such as algorithms and machine architectures, code development, chiral symmetry, physics beyond the Standard Model, and strongly interacting phenomena in low dimensions.
The 39th Lattice conference will take place in Bonn, Germany, from August 8 to 13, 2022.
The scientific programme of this conference will include plenary talks and parallel sessions on the following topics:
The conference is supported by:
Recent results from lattice simulations of QCD at nonzero temperature and/or density, and/or in the presence of magnetic fields, will be reviewed. Progress in our understanding of the phases and boundaries of the phase diagram, as well as in the calculation of thermodynamic quantities with relevant phenomenological consequences, will be discussed.
As precision tests of the Standard Model become more accurate, the need for fine lattices increases. However, as we approach the continuum limit we enter the critical region of the theory and encounter critical slowing down. Among the many studies tackling this problem, we develop the idea of the trivializing map, whose use in lattice calculations was proposed by Lüscher. Under this field transformation, the theory of interest is mapped to the strong-coupling limit. Lüscher gave an analytic formula that constructs the trivializing map as a t-expansion, where t is the trivializing-flow time. In this work, we instead use the Schwinger-Dyson equations to obtain the trivializing map approximately. In this method, we choose by hand a set of Wilson loops to include in the flow kernel and determine their coefficients from the expectation values of the Wilson loops. The advantages of this method over the t-expansion are two-fold: (1) because the basis can be chosen freely, we circumvent the rapid growth in the number of Wilson loops required as the order of the t-expansion increases; (2) because the coefficients are determined from a non-perturbative evaluation of the expectation values, we can expect a reasonable approximation of the trivializing map even at large beta. In this talk, we show preliminary results from applying our method to pure Yang-Mills theory.
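The Schwinger-Dyson equations invoked here follow from integration by parts, ⟨O'(x)⟩ = ⟨O(x) S'(x)⟩ for expectation values under exp(-S). As a minimal, purely illustrative sketch (not the authors' code; all parameters hypothetical), one can verify the identity in a zero-dimensional quartic "theory", where with O(x) = x it reads ⟨x² + 4g x⁴⟩ = 1:

```python
import numpy as np

# Zero-dimensional toy action S(x) = x^2/2 + g*x^4.
# Schwinger-Dyson identity with O(x) = x:  <x^2 + 4 g x^4> = 1.
def sd_check(g=0.5, n_samp=200_000, step=1.0, seed=1):
    rng = np.random.default_rng(seed)
    S = lambda x: 0.5 * x**2 + g * x**4
    x, xs = 0.0, []
    for _ in range(n_samp):
        prop = x + step * rng.normal()
        if rng.random() < np.exp(S(x) - S(prop)):  # Metropolis accept/reject
            x = prop
        xs.append(x)
    xs = np.array(xs[n_samp // 10:])               # drop thermalization
    return np.mean(xs**2 + 4 * g * xs**4)          # should be close to 1

print(sd_check())  # ≈ 1.0
```

In the abstract's setting the same kind of non-perturbative relation, evaluated on Wilson-loop expectation values, fixes the coefficients of the chosen flow-kernel basis.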
In this talk, we review recent advances in applying quantum computing to lattice field theory. Quantum technology offers the prospect of efficiently simulating sign-problem-afflicted regimes of lattice field theory, such as the presence of topological terms, chemical potentials, and out-of-equilibrium dynamics. First proof-of-concept simulations of Abelian and non-Abelian gauge theories in (1+1)D and (2+1)D have been accomplished, and resource-efficient formulations of gauge theories for quantum computation have been proposed. The path towards quantum simulations of (3+1)D particle physics requires many incremental steps, including algorithmic development, hardware improvement, methods for circuit design, and error mitigation and correction techniques. After reviewing these requirements and recent advances, we discuss the main challenges and future directions.
Reaching Exascale compute performance at an affordable budget requires increasingly heterogeneous HPC systems, which combine general purpose processing units (CPUs) with acceleration devices such as graphics processing units (GPUs) or many-core processors. The Modular Supercomputing Architecture (MSA) developed within the EU-funded DEEP project series breaks with traditional HPC system architectures by orchestrating these heterogeneous computing resources at system-level, organizing them in compute modules with different hardware and performance characteristics. Modules with disruptive technologies, such as quantum devices, can also be included in a modular supercomputer to satisfy the needs of specific user communities. The goal is to provide cost-effective computing at extreme performance scales fitting the needs of a wide range of Computational Sciences.
This approach brings substantial benefits for heterogeneous applications and workflows. In a modular supercomputer, each application can dynamically decide which kinds and how many nodes to use, mapping its intrinsic requirements and concurrency patterns onto the hardware. Codes that perform multi-physics or multi-scale simulations can run across compute modules due to a global system-software and programming environment. Application workflows that execute different actions after (or in parallel) to each other can also be distributed in order to run each workflow-component on the best suited hardware, and exchange data either directly (via message-passing communication) or via the filesystem. A modular supercomputing system can supply any combination or ratio of resources across modules and is not bound to fixed associations between, for instance, CPUs and accelerators as will be found in clusters of heterogeneous nodes. It is therefore ideal for supercomputer centers running a heterogeneous mix of applications (higher throughput and energy efficiency).
This talk will describe the Modular Supercomputing Architecture, which constitutes the central element in Europe's roadmap to Exascale computing, including its history, its role in Europe's Exascale computing strategy, its hardware and software elements, and experiences from mapping applications and workflows to MSA systems.
Emerging sampling algorithms based on normalizing flows have the potential to solve ergodicity problems in lattice calculations. Furthermore, it has been noted that flows can be used to compute thermodynamic quantities which are difficult to access with traditional methods. This suggests that they are also applicable to the density-of-states approach to complex action problems. In particular, flow-based sampling may be used to compute the density directly, in contradistinction to the conventional strategy of reconstructing it via measuring and integrating the derivative of its logarithm. By circumventing this procedure, the accumulation of errors from the numerical integration is avoided completely and the overall normalization factor can be determined explicitly. In this proof-of-principle study, we demonstrate our method in the context of two-component scalar field theory where the O(2) symmetry is explicitly broken by an imaginary external field. First, we concentrate on the zero-dimensional case which can be solved exactly. We show that with our method, the Lee-Yang zeroes of the associated partition function can be successfully located. Subsequently, we confirm that the flow-based approach correctly reproduces the density computed with conventional methods in one- and two-dimensional models.
Normalizing flows (NFs) are a class of machine-learning algorithms that can be used to efficiently evaluate posterior approximations of statistical distributions. NFs work by constructing invertible and differentiable transformations that map sufficiently simple distributions to the target distribution; they provide a new, promising route to study quantum field theories regularized on a lattice. In this contribution, based on our recent work [arXiv:2201.08862], I explain how to combine NFs with stochastic updates, demonstrating that this theoretical framework is the same that underlies Monte Carlo simulations based on Jarzynski's equality, and present examples of applications for the evaluation of free energies in lattice field theory.
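The free-energy applications mentioned above rest on the fact that reweighting samples from a tractable density q by exp(-S)/q gives an unbiased estimator of the partition function. A static caricature of this (not the authors' code; the quartic action, Gaussian "flow" q, and all parameters are hypothetical) in zero dimensions:

```python
import numpy as np

# Estimate Z = ∫ exp(-S(x)) dx, hence F = -log Z, by importance
# weights w = exp(-S)/q with q a Gaussian stand-in for a flow model.
rng = np.random.default_rng(0)
g, sigma, n = 0.4, 1.2, 500_000
S = lambda x: 0.5 * x**2 + g * x**4              # target action

x = sigma * rng.normal(size=n)                   # samples from q
log_q = -0.5 * (x / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))
Z_est = np.mean(np.exp(-S(x) - log_q))           # mean reweighting factor

xs = np.linspace(-6.0, 6.0, 120_001)             # brute-force check
Z_exact = np.sum(np.exp(-S(xs))) * (xs[1] - xs[0])
print(Z_est, Z_exact, -np.log(Z_est))
```

In the nonequilibrium (Jarzynski) setting the single reweighting factor is replaced by an exponentiated work accumulated along a sequence of stochastic updates, but the averaging structure is the same.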
This study explores the use of a kernel in complex Langevin simulations of quantum real-time dynamics on the Schwinger-Keldysh contour. We give several examples in which we use a systematic scheme to find kernels that restore the correct convergence of complex Langevin. The scheme combines prior knowledge of the system with the criteria for correct convergence of complex Langevin to construct a kernel. This allows us to simulate up to
In this talk, we discuss gauge-equivariant architectures for flow-based sampling in fermionic lattice field theories with pseudofermions. We also discuss how flow-based sampling approaches can be improved by combination with standard techniques such as even/odd preconditioning and the Hasenbusch factorization. Numerical demonstrations in two-dimensional U(1) and SU(3) theories with
Automatic differentiation (AD) techniques allow one to determine the Taylor expansion of any deterministic function. The generalization of these techniques to stochastic problems is not trivial. In this work we explore two approaches to extending the ideas of AD to stochastic processes: one based on reweighting, and another based on the ideas of numerical stochastic perturbation theory (NSPT) in the Hamiltonian formalism. We show that, when convergence can be guaranteed, the NSPT-based approach converges to the Taylor expansion with a much smaller variance.
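The reweighting route can be sketched in one line of algebra: for S_g(x) = x²/2 + g x⁴ one has ⟨O⟩_g = ⟨O e^{-g x⁴}⟩₀ / ⟨e^{-g x⁴}⟩₀, so the first Taylor coefficient at g = 0 is the connected correlator -(⟨O x⁴⟩ - ⟨O⟩⟨x⁴⟩). A hypothetical numerical check (not the authors' code), using Gaussian moments ⟨x²⟩ = 1, ⟨x⁴⟩ = 3, ⟨x⁶⟩ = 15:

```python
import numpy as np

# Stochastic "AD" via reweighting: d<x^2>/dg at g=0 equals
# -(<x^6> - <x^2><x^4>) = -(15 - 3) = -12 for a unit Gaussian.
rng = np.random.default_rng(2)
x = rng.normal(size=2_000_000)        # samples of the free (g=0) theory
O, V = x**2, x**4
dO_dg = -(np.mean(O * V) - np.mean(O) * np.mean(V))
print(dO_dg)   # exact value is -12
```

Higher Taylor coefficients follow from higher connected correlators, which is exactly where the variance growth that motivates the NSPT alternative shows up.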
The numerical sign problem has been a major obstacle to first-principles calculations of many important systems, including QCD at finite density. The worldvolume tempered Lefschetz thimble method is an HMC algorithm which solves the sign and ergodicity problems simultaneously. In this algorithm, configurations explore an extended configuration space (the worldvolume) that includes a region where the sign problem disappears and a region where the ergodicity problem is mild. The computational cost of the algorithm is expected to be much lower than that of other related algorithms based on Lefschetz thimbles, because one no longer needs to compute the Jacobian of the Picard-Lefschetz gradient flow when generating configurations. In this talk, after reviewing the basics of the method, we apply it to various lattice field theories suffering from the sign problem and report on the numerical results, together with the scaling of the computational cost with the lattice volume.
We present results for the energy levels of two pions and a kaon, and of two kaons and a pion, all at maximal isospin, on CLS ensembles D200 and N203, with pion/kaon masses of 200/480 MeV and 340/440 MeV, respectively. We use multiple frames and have determined many energy levels on each ensemble. We fit these levels, together with those for
The quest of unraveling the nature of excited hadrons necessarily involves determination of universal (reaction independent) parameters of these states. Such determinations require input, either from experiment or theory.
Lattice gauge theory is the only tool available to tackle the non-perturbative dynamics of QCD, encoded in the finite-volume interaction spectra it determines. Many insights have been gained on resonant two-body systems in the past by studying such spectra. Now, with the advent of three-body finite-volume methods, advances are being made towards more complex systems. This progress will be discussed in the talk, including theoretical developments and applications to phenomenologically interesting systems.
We study a three-particle resonance in Euclidean Lattice
Recent years have witnessed a rapid growth of interest in the three-body problem on the lattice. In this connection, the derivation of a relativistic-invariant three-particle quantization condition, which relates the finite-volume lattice spectrum to the infinite-volume observables in the three-particle sector, has become a major challenge. First and foremost, providing a manifestly relativistic-invariant framework is important because the typical momenta of light particles studied on the lattice are generally not small compared to their mass. Moreover, Lorentz invariance puts stringent constraints on the possible form of the two- and three-body interactions, reducing the number of effective couplings needed for their parameterization. These constraints are absent in non-invariant formulations, leading to an inflation of the number of independent parameters.
In the literature, there exist three different but conceptually equivalent formulations of the three-particle quantization condition. In this talk, I shall put the issue of the relativistic covariance of these formulations under renewed scrutiny. A novel formulation is suggested, which is devoid of some shortcomings of the existing approaches related to the explicit non-covariance of the three-particle propagator. The proposed approach is based on the "covariant" NREFT framework. We reformulate this framework, choosing the quantization axis along an arbitrary timelike unit vector v^μ, demonstrate the explicit relativistic invariance of the infinite-volume Faddeev equations, and derive the modified quantization condition. The relativistic invariance is tested numerically, using synthetic data for the energy levels in different moving frames.
In this talk, I will present our recent results on two- and three-particle scattering in the O(3) non-linear sigma model in 1+1 dimensions. We focus on the isospin-1 and 2 channels for the two-particle case, and the isospin-2 and 3 channels for three particles. We perform numerical simulations at four values of the physical volume and three lattice spacings, using a three-cluster generalization of the cluster update algorithm. The lattice results for two particles are then compared against exact analytic predictions of the finite-volume energy levels, obtained by combining analytic results for the phase shifts with the (1+1)-dimensional two-particle scattering formalism. For the three-particle results, we use the relativistic field theory (RFT) approach to constrain the scheme-dependent three-body interaction.
We investigate the energy levels corresponding to the Roper resonance based on a two-flavor chiral effective Lagrangian at leading one-loop order. We show that the Roper mass can be extracted from these levels for not too large lattice volumes.
Further, to include three body dynamics, such as
The Bielefeld-Parma collaboration has recently put forward a method to investigate the QCD phase diagram based on the computation of Taylor series coefficients at both zero and imaginary values of the baryonic chemical potential. The method is based on the computation of multi-point Padé approximants. We review the methodological aspects of the computation and, in order to gain confidence in the approach, we report on the application of the method to the two-dimensional Ising model (probably the most popular arena for testing tools in the study of phase transitions). Besides showing the effectiveness of the multi-point Padé approach, we discuss what these results can suggest in view of further progress in the study of the QCD phase diagram.
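The core of a multi-point Padé analysis is fitting a rational function through function values at several points and reading off singularities from the zeros of the denominator. A toy sketch (illustrative only, not the collaboration's code): the model function f(x) = 1/(1 + x²) plays the role of an observable with "Lee-Yang-like" singularities at x = ±i, which a [1/2] approximant recovers from real-valued data alone:

```python
import numpy as np

# Multi-point Padé: fit f ≈ (a0 + a1 x)/(1 + b1 x + b2 x^2) through
# sample values; linearizing gives a0 + a1 x - b1 x f - b2 x^2 f = f.
f = lambda x: 1.0 / (1.0 + x**2)
xs = np.array([-0.4, -0.2, 0.0, 0.2, 0.4, 0.6])
fs = f(xs)

A = np.column_stack([np.ones_like(xs), xs, -xs * fs, -xs**2 * fs])
a0, a1, b1, b2 = np.linalg.lstsq(A, fs, rcond=None)[0]
poles = np.roots([b2, b1, 1.0])        # zeros of the denominator
print(poles)                           # close to ±1j
```

In the QCD application the input data are Taylor coefficients and values at imaginary chemical potential, but the pole-extraction step is the same.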
We report updated results on the determination of Lee-Yang edge (LYE) singularities in
We calculate Fourier coefficients of the net-baryon number as a function of a purely imaginary chemical potential. The asymptotic behavior of these coefficients is governed by the singularity structure of the QCD partition function and thus encodes information on phase transitions. Although it is not easy to obtain a high number of Fourier coefficients from lattice QCD data directly, models for these coefficients have been constructed in the past. We investigate to what extent our data is consistent with those models and estimate the position of the nearest singularities in the complex chemical potential plane. Our lattice data has been obtained from simulations with (2+1)-flavors of highly improved staggered quarks (HISQ) at imaginary chemical potential on masses. For the calculation of the Fourier coefficients we apply asymptotic numerical quadrature designed for highly oscillatory integrals.
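As a sketch of the extraction step (hypothetical model density, not the HISQ data): for a density n(θ) = Σ_k b_k sin(kθ) at imaginary chemical potential μ/T = iθ, each coefficient is b_k = (2/π) ∫₀^π n(θ) sin(kθ) dθ. For small k plain quadrature suffices; it is at large k, where the integrand oscillates rapidly, that dedicated oscillatory quadrature rules become essential.

```python
import numpy as np

# Recover Fourier coefficients of a model "net-baryon density"
# n(θ) = Σ_k b_k sin(kθ) by quadrature on [0, π].
b_true = {1: 0.50, 2: -0.12, 3: 0.03}
theta, dtheta = np.linspace(0.0, np.pi, 20_001, retstep=True)
n = sum(b * np.sin(k * theta) for k, b in b_true.items())

def b_coeff(k):
    # b_k = (2/π) ∫ n(θ) sin(kθ) dθ; the integrand vanishes at the
    # endpoints, so a plain Riemann sum equals the trapezoidal rule
    return (2.0 / np.pi) * dtheta * np.sum(n * np.sin(k * theta))

print([b_coeff(k) for k in (1, 2, 3, 4)])
```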
Knowledge of the screening masses at finite chemical potential can provide insight into the nature of the QCD phase diagram. However, lattice studies at finite chemical potential suffer from the well-known issue of the sign problem, which has made the calculation of observables such as screening correlators and screening masses at finite chemical potential quite challenging. One way to proceed is by expanding the observable in a Taylor series in the chemical potential and hence calculating the finite-density corrections to the observable. In this talk, we will use this approach to calculate the screening mass of the pseudoscalar meson at finite temperatures and chemical potential by expanding the screening correlator in a Taylor series in the chemical potential. We will present our results for the second derivative of the screening mass w.r.t. the chemical potential. Our calculation was done on
In this talk we present our study of the electromagnetic conductivity in dense quark-gluon plasma obtained within lattice simulations with
We study the (2+1)-dimensional Gross-Neveu model in an external magnetic field. The model, which serves as a toy model for QCD, has been predicted by mean-field studies to exhibit a very rich phase structure in the plane spanned by temperature and chemical potential as the external field is varied. We investigate what remains of this phase structure beyond the mean-field approximation. Our lattice results are consistent with the magnetic catalysis scenario, i.e. an increase of the chiral condensate with the magnetic field, both at finite temperature and chemical potential.
We show that a recently discovered non-perturbative field-theoretical mechanism that gives mass to elementary fermions is also capable of generating a mass for the electro-weak bosons, and can thus be used as a viable alternative to the Higgs scenario. A detailed analysis of this remarkable feature shows that the non-perturbatively generated fermion and
We present an update of our results for the ongoing work on the four-supercharge two-dimensional Yang–Mills theory discretized on a Euclidean torus using thermal boundary conditions. Although the theory under consideration does not have a gravity dual, we investigate whether it has features qualitatively similar to its sixteen-supercharge counterpart. Our investigation hints at a possible ‘spatial deconfinement’ transition in this theory similar to the maximal one with sixteen supercharges. We also analyse the behaviour of the scalars, Wilson lines, and the absence of supersymmetry breaking with a relatively large-N setup and various lattice sizes in different coupling (temperature) regimes and draw comparisons with the two-dimensional maximally supersymmetric Yang–Mills theory.
We present an update of our ongoing study of the SU(2) gauge theory with one flavor of Dirac fermion in the adjoint representation. Compared to our previous results we now have data at larger lattice volumes, smaller values of the fermion mass, and also larger values of
In this work we present perturbative results for the renormalization of the supercurrent operator,
Supersymmetry on the lattice is explicitly broken by the gluino mass and lattice artifacts. However, it can be restored in the continuum limit by fine-tuning the parameters based on the renormalized Ward identities. In the renormalization step, not only the mass but also the renormalization of the supercurrent needs to be addressed. Here we present a lattice investigation to obtain the renormalization factors of the supercurrent for
Vector Boson scattering (VBS) is a central process in the search for physics beyond the SM at collider experiments. To correctly identify SM and BSM physics, such as composite Higgs scenarios, at these experiments, it is crucial to gain a clear picture of VBS-like processes.
In our study we therefore analyse this process in a reduced SM setup for different physical scenarios. To this end we apply a Lüscher-type analysis to extract scattering properties and compare the results with (augmented) perturbative tree-level predictions.
We show that the nonperturbative approach suggests a composite structure for the scalar degree of freedom, in line with previous investigations. Furthermore, we present an alternative way of extracting resonance-like states from the spectrum by using the perturbative prediction as a tool.
COLA is a software library for lattice QCD written in modern Fortran and NVIDIA CUDA. Intel and NVIDIA have dominated the HPC domain for a long time, but the status quo has changed with the recent advent of AMD-based systems in the supercomputing Top500 list. Setonix is a next-generation Cray AMD machine currently being installed at the Pawsey Supercomputing Centre in Perth, Australia, featuring both AMD CPUs and AMD Instinct GPUs. This talk will describe first experiences with porting COLA to the AMD platform.
Lyncs-API is a Python API for lattice QCD. It aims to create a complete framework for easily running applications via Python. It implements low- and high-level tools, including interfaces to common LQCD libraries. Last year, at this conference, we presented the API to the community for the first time. In this talk we will give a status update on its development and show the potential of the API via some applications we have implemented this year.
We present progress in interfacing the Hybrid Monte Carlo implementation in the tmLQCD software suite with the QUDA library and compare its performance to that of our top-of-the-line algorithms on CPU machines. We discuss the main challenges and overheads of our approach and scrutinize its fundamental architectural limitations before exploring ongoing improvements as well as current and future simulations.
In this talk we present work on extending the set of solvers for the inversion of the Dirac matrix for Wilson-Clover type fermions in Grid. Particular emphasis is put on the inexact deflation method put forward by Lüscher. Besides providing fast solves for configurations at the physical point one of the method’s central advantages is that it can be included into the HMC algorithm at relatively low computational cost. We assess the performance of our implementation of the algorithm on both CPU and GPU architectures and carry out comparisons with other solvers.
We report novel results for the three-gluon vertex from quenched lattice-QCD simulations. Using the standard Wilson action, we have computed the three-gluon vertex beyond the usual kinematic restriction to the symmetric (q² = r² = p²) and soft-gluon (p = 0) cases, where it depends on a single momentum scale. We will present a detailed analysis of the asymmetric case (r² = q² ≠ p²), where the transversely projected vertex can be cast in terms of three independent tensors.
The lattice data show a clear dominance of the form factor corresponding to the tree-level tensor.
For the general kinematical configuration (q² ≠ r² ≠ p²), we have computed the projection of the three-gluon vertex that provides the relevant information on the ghost-gluon kernel-related function W(q²), which appears in the recently discussed smoking-gun signals of the Schwinger mechanism in QCD. This projection exhibits a striking scaling in terms of (q² + r² + p²)/2.
In this talk we present numerical simulations of N = 4 super Yang-Mills for the three-color gauge theory over a wide range of 't Hooft couplings 5 ≤ λ ≤ 30, using a supersymmetric lattice action. By explicit computation of the fermion Pfaffian, we present evidence that the theory possesses no sign problem and exists in a single phase out to arbitrarily strong coupling. Furthermore, preliminary work shows that the non-Abelian Coulomb potential extracted from Polyakov loop correlators exhibits the 1/R scaling and a dependence on the square root of the 't Hooft coupling at large values of λ, as expected from holographic calculations.
Master-field simulations offer an approach to lattice QCD in which calculations are performed on a small number of large-volume gauge-field configurations. This is advantageous for simulations in which the global topological charge is frozen due to a very fine lattice spacing, as the effect of this on observables is suppressed by the spacetime volume. Here we make use of the recently developed Stabilised Wilson Fermions to investigate a variation of the master-field approach in which only the temporal direction (T) is taken larger than in traditional calculations. As compared to a hyper-cubic master-field geometry, this has the advantage that finite-L effects can be useful, e.g. for multi-hadron observables, while compared to open boundary conditions time-translation invariance is not lost.
In this proof-of-concept contribution, we study the idea of using very cold, i.e. long-T, lattices to topologically 'defrost' observables at fine lattice spacing. We identify the scalar-scalar meson two-point correlation function as a useful probe and present first results from Nf = 3 ensembles with time extents up to T = 2304 and a lattice spacing of a = 0.055 fm.
We study two different SU(2) gauge-scalar theories in 3 and 4 spacetime dimensions. Firstly, we focus on the 4-dimensional theory with 2 sets of fundamental scalar (Higgs) fields, which is relevant to the 2 Higgs Doublet Model (2HDM), a proposed extension to the Standard Model of particle physics. The goal is to understand the particle spectrum of the theory at zero temperature and the electroweak phase transition at finite temperature. We present exploratory results on scale setting and the multi-parameter phase diagram of this theory.
On the other hand, we are interested in the 3-dimensional SU(2) theory with multiple Higgs fields in the adjoint representation, which can be mapped to cuprate systems in condensed matter physics that host a rich phase diagram including high-Tc superconductivity. It has been proposed that the theory with 4 adjoint Higgs fields can be used to explain the physics of hole-doped cuprates for a wide range of parameters, while the theory with 1 real adjoint Higgs field would describe the physics of electron-doped cuprates. We show exploratory results on the phase diagram of these theories.
Topological Data Analysis (TDA) is a field that leverages tools and ideas from algebraic topology to provide robust methods for analysing geometric and topological aspects of data. One of the principal tools of TDA, persistent homology, produces a quantitative description of how the connectivity and structure of data changes when viewed over a sequence of scales. We propose that this presents a means to directly probe topological objects in gauge theories. In this talk I will present recent work on using persistent homology to detect center vortices in SU(2) lattice gauge theory configurations in a gauge-invariant manner. I will introduce the basics of persistence, describe our construction, and demonstrate that the result is sensitive to vortices. Moreover, I will discuss how with simple machine learning, one can use the resulting persistence to quantitatively analyse the deconfinement transition via finite-size scaling, providing evidence on the role of vortices in relation to confinement in Yang-Mills theories.
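The simplest instance of persistence is dimension zero: sweeping a threshold upward through a function, connected components of the sublevel sets are born at local minima and die when two components merge, with the elder rule pairing each merge with the younger minimum. A minimal union-find sketch for a 1-D signal (illustration only; lattice studies use dedicated TDA libraries such as GUDHI or Ripser):

```python
# 0-dimensional sublevel-set persistence of a 1-D signal.
def persistence_0d(values):
    n = len(values)
    parent = list(range(n))

    def find(i):                      # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    pairs, born = [], [False] * n
    for i in sorted(range(n), key=lambda k: values[k]):   # sweep upward
        born[i] = True
        for j in (i - 1, i + 1):
            if 0 <= j < n and born[j]:
                ri, rj = find(i), find(j)
                if ri != rj:
                    old, young = sorted((ri, rj), key=lambda r: values[r])
                    if values[young] < values[i]:   # skip trivial pairs
                        pairs.append((values[young], values[i]))
                    parent[young] = old             # elder rule: young dies
    root = find(min(range(n), key=lambda k: values[k]))
    pairs.append((values[root], float("inf")))      # essential class
    return pairs

print(persistence_0d([0.0, 2.0, 1.0, 3.0, 0.5, 2.5]))
```

The long-lived pairs in the resulting diagram are the robust topological features; higher-dimensional persistence, as used for vortex detection, generalizes the same birth-death bookkeeping to loops and voids.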
The Hamiltonian formalism for lattice gauge theories has experienced a resurgence of interest in recent years due to its relevance for quantum simulation, a major goal of which is the solution of sign problems in QCD. The particular formulation of the Hamiltonian formalism is itself an important design decision, where factors to consider include (non)locality of the degrees of freedom, (non)Abelian constraints, and computational costs associated with simulating the Hamiltonian.
This work represents a key step toward understanding the costs and benefits associated with the loop-string-hadron (LSH) formulation of lattice gauge theories by generalizing the original SU(2) construction to SU(3) (in 1+1 dimensions). We show that the SU(3) LSH construction is indeed a straightforward generalization of its SU(2) counterpart with all salient theoretical features left intact---particularly the conversion of SU(3) Clebsch-Gordan coefficients into explicit functions of LSH number operators. The validity of the LSH approach is underscored by demonstrating numerical agreement with the better-known purely-fermionic formulation of the theory (with open boundary conditions).
The standard method for determining matrix elements in lattice QCD requires the computation of three-point correlation functions. This has the disadvantage of requiring two large time separations: one between the hadron source and the operator, and the other from the operator to the hadron sink. Here we consider an alternative formalism, based on the Dyson expansion leading to the Feynman-Hellmann theorem, which only requires the computation of two-point correlation functions. Both the case of degenerate energy levels and that of quasi-degenerate energy levels, which correspond to diagonal and transition matrix elements respectively, are considered in this formalism. Numerical results for the Sigma to nucleon transition are presented in a further contribution by M. Batelaan.
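The Feynman-Hellmann theorem underlying this approach states that for H(λ) = H₀ + λV, the energy shift obeys dE_n/dλ = ⟨n|V|n⟩, so a matrix element can be read off from the λ-dependence of an energy (on the lattice, of the effective energy of a two-point correlator). A small numerical illustration with hypothetical random Hermitian matrices standing in for H₀ and the perturbation:

```python
import numpy as np

# Feynman-Hellmann check: dE_0/dλ = <0|V|0> for H(λ) = H0 + λ V.
rng = np.random.default_rng(7)
A = rng.normal(size=(4, 4)); H0 = (A + A.T) / 2   # toy Hamiltonian
B = rng.normal(size=(4, 4)); V = (B + B.T) / 2    # toy perturbation

def ground_energy(lam):
    return np.linalg.eigvalsh(H0 + lam * V)[0]

lam, eps = 0.3, 1e-5
E, psi = np.linalg.eigh(H0 + lam * V)
fh = psi[:, 0] @ V @ psi[:, 0]                    # <0|V|0>
fd = (ground_energy(lam + eps) - ground_energy(lam - eps)) / (2 * eps)
print(fh, fd)   # the two derivatives agree
```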
Theoretical calculations of the transition form factors of the hyperons are an important component of the determination of the CKM matrix elements. Historically, these calculations have been performed by extracting the form factors from ratios of lattice three-point and two-point functions; this requires carefully balancing control over excited states against the preservation of a strong signal. We present a novel method which uses the Feynman-Hellmann approach to relate a shift in energy due to a perturbation to the required form factors; this method requires only the calculation of two-point functions. The formalism of this method is expanded on in the presentation by R. Horsley; the details of the numerical computation and the results for the Sigma to nucleon transition will be presented here.
Multi-particle states with additional pions are expected to result in a non-negligible excited-state contamination in lattice simulations at the physical point. We show that heavy meson chiral perturbation theory (HMChPT) can be employed to calculate the contamination due to two-particle
Combining experimental input, perturbative calculations, and form factors computed in lattice QCD simulations, it is possible to deduce
This talk presents our recent computations of the dominant
I will describe recent progress in the development of custom machine learning architectures based on flow models for the efficient sampling of gauge field configurations. I will present updates on the status of this program and outline the challenges and potential of the approach.
We present our attempts to control the sign problem with the path optimization method, with emphasis on the efficiency of the neural network. We find that a gauge-invariant neural network is successful in the 2-dimensional U(1) gauge theory with a complex coupling. We also investigate possible improvements to the learning process.
We present a novel strategy to strongly reduce the severity of the sign problem, using line integrals along paths of changing imaginary action. Highly oscillating regions along these paths cancel out, decreasing their contributions. As a result, sampling with standard Monte Carlo techniques becomes possible in cases which otherwise require methods that take advantage of complex analysis, such as Lefschetz thimbles or complex Langevin. We lay out how to write down an ordinary differential equation for the line integrals. As an example of its usage, we apply the results to a 1d quantum-mechanical anharmonic oscillator with a
At fine lattice spacings, lattice simulations are plagued by slow (topological) modes that give rise to large autocorrelation times. These in turn lead to statistical and systematic errors that are difficult to estimate. We study the problem and possible algorithmic solutions in 4-dimensional SU(3) gauge theory, with special focus on instanton updates and metadynamics.
A trivializing map is a field transformation whose Jacobian determinant exactly cancels the interaction terms in the action, providing a representation of the theory in terms of a deterministic transformation of a distribution from which sampling is trivial. A series of seminal studies have demonstrated that approximations of trivializing maps can be 'machine-learned' by a class of invertible neural models called Normalizing Flows, constructed such that the Jacobian determinant of the transformation can be efficiently computed. Asymptotically exact sampling from the theory of interest can be performed by drawing samples from a simple distribution, passing them through the network, and reweighting the resulting configurations (e.g. using a Metropolis test). From a theoretical perspective, this approach has the potential to become more efficient than traditional Markov Chain Monte Carlo sampling techniques, where autocorrelations severely diminish the sampling efficiency on the approach to the continuum limit. A major caveat is that it is not yet well-understood how the size of models and the cost of training them is expected to scale. In previous work, we conducted an exploratory scaling study using two-dimensional
The recent introduction of machine learning techniques, especially normalizing flows, for the sampling of lattice gauge theories has raised hopes of improving the sampling efficiency of the traditional HMC algorithm. However, naive usage of normalizing flows has been shown to lead to bad scaling with the volume. In this talk we propose using local normalizing flows at a scale given by the correlation length. Even though naively these transformations have a very small acceptance, when combined with HMC they lead to algorithms with high acceptance and reduced autocorrelation times compared with HMC. Several scaling tests are performed in the
Finite-volume pionless effective field theory is an efficient framework with which to perform the extrapolation of finite-volume lattice QCD calculations of multi-nucleon spectra and matrix elements to infinite volume and to nuclei with larger atomic number. In this contribution, a new implementation of this framework based on correlated Gaussian wavefunctions optimized using differentiable programming and using a solution of a generalised eigenvalue problem is discussed. This approach is found to be more efficient than previous stochastic implementations of the variational method, as it yields comparable representations of the wavefunctions of nuclei with atomic number
This talk presents a new method for computing correlators for systems of many identical mesons. The method allows the computation of every meson correlator up to N mesons from propagators using only a single N by N eigendecomposition. This pushes the frontier of many-meson calculations from dozens to thousands, and as a demonstration I will present the computation of the maximal-isospin pion correlator for systems from 1 up to 6144 pions on an ensemble of Wilson fermions with slightly heavier than physical pions (
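The combinatorics of obtaining all n-meson correlators from a single eigendecomposition can be illustrated with a hypothetical toy: for eigenvalues of an N by N "propagator" matrix, the simplest identical-meson contractions reduce (up to normalization) to elementary symmetric polynomials, which Newton's identities generate recursively from power sums at O(N^2) cost after one eigendecomposition. This sketch is an assumption-laden illustration, not the talk's actual algorithm.

```python
import numpy as np

def n_meson_correlators(M):
    # Toy illustration: with eigenvalues lam_i of an N x N "propagator"
    # matrix M, the 1..N identical-meson correlators (in the simplest
    # case, up to normalization) are the elementary symmetric polynomials
    # e_n(lam), generated from power sums via Newton's identities --
    # one eigendecomposition, then an O(N^2) recursion.
    lam = np.linalg.eigvals(M)
    N = len(lam)
    p = [np.sum(lam**k) for k in range(1, N + 1)]   # power sums p_k
    e = [1.0 + 0.0j]                                # e_0 = 1
    for n in range(1, N + 1):
        # Newton's identity: n*e_n = sum_{k=1}^{n} (-1)^(k-1) e_{n-k} p_k
        s = sum((-1) ** (k - 1) * e[n - k] * p[k - 1] for k in range(1, n + 1))
        e.append(s / n)
    return e[1:]                                    # e_1 .. e_N

corr = n_meson_correlators(np.diag([1.0, 2.0, 3.0]))
```

For the diagonal test matrix the recursion reproduces the hand-computed symmetric polynomials e_1 = 6, e_2 = 11, e_3 = 6.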
The formalism for relating finite-volume energies and matrix elements to scattering and decay amplitudes has been established for three-pion states with all possible isospins in the so-called RFT (relativistic field theory) method. This necessarily leads to coupled-channel systems. The three-pion I=1 channel, for example, includes all two-pion isospins as sub-channels. In this talk I describe issues and strategies in implementing both the scattering and decay formalism in practice and show examples of the relations between finite- and infinite-volume quantities. I also describe an open source python library that supports the practical implementation.
The Lüscher scattering formalism, the standard approach for relating the discrete finite-volume energy spectrum to two-to-two scattering amplitudes, fails when analytically continued so far below the infinite-volume two-particle threshold that one encounters the t-channel cut. This is relevant, especially in baryon-baryon scattering applications, as finite-volume energies can be observed in this below-threshold regime, and it is not clear how to make use of them. In this talk we present a generalisation of the scattering formalism that resolves this issue, allowing one to also constrain scattering amplitudes on the t-channel cut.
The γ⋆γ⋆ → ππ scattering amplitude can help constrain hadronic contributions to the anomalous magnetic moment of the muon, as well as structural information on glueball and tetraquark candidates. To leading order in QED, this amplitude can be accessed from matrix elements of non-local products of electromagnetic currents evaluated in an infinitely large Minkowski spacetime. In this talk, we present a model-independent formalism to determine this amplitude from finite, Euclidean spacetime correlation functions.
Determining the internal structure of hadrons is a necessary step to advance our understanding of the dynamics of confined partons. Extracting form factors of resonances directly from lattice QCD requires a formal connection between the finite volume Euclidean correlation functions and the infinite volume Minkowski amplitudes. In this talk we describe a novel procedure to extract transitions that couple states with at most two nucleons by exploiting the finite volume of the lattice. Building on previous work pertaining to spinless systems, we describe how to achieve the description of the spin degrees-of-freedom given their non-trivial finite-volume interaction with an external local current of arbitrary Lorentz structure. We will present the main ingredients of our derivation, and an outlook for future calculations where we discuss a case study of the significance of the finite-volume corrections as a function of the binding energy of a deuteron-like state.
In QCD at large enough isospin chemical potential, Bose-Einstein Condensation (BEC) takes place, separated from the normal phase by a phase transition. From previous studies the location of the BEC line at the physical point is known. In the chiral limit, condensation happens already at infinitesimally small isospin chemical potential for zero temperature. Depending on the shape of the BEC boundary, the zero-density chiral transition might then be affected by its proximity. As a first step towards the chiral limit, we perform simulations of 2+1-flavor QCD at half the physical quark masses. The position of the BEC transition is then extracted and compared with the results at physical masses.
At finite baryon chemical potential, the sign problem hinders Monte Carlo simulations; this can be remedied by a dual representation that makes the sign problem mild. In the strong coupling limit, the dual formulation with staggered quarks is well established. We have used this formulation to study the quark mass dependence of the baryon mass and the nuclear transition, which allows us to quantify the nuclear interaction. The results obtained are also compared with mean field theory.
The Hamiltonian formulation of Lattice QCD with staggered fermions in the strong coupling limit has no sign problem at non-zero baryon density and allows for Quantum Monte Carlo simulations. We have extended this formalism to two flavors, and after a resummation there is no sign problem for both non-zero baryon and isospin chemical potential. We report on recent progress on the implementation of the Quantum Monte Carlo simulations and present results on the baryon and isospin densities in the chiral limit. These will be compared with mean-field theory.
The thermodynamics of QCD with sufficiently heavy dynamical quarks can be described by a three-dimensional Polyakov loop effective theory, after a truncated character and hopping expansion. We investigate the resulting phase diagram for low temperatures by mean field methods. Taking into account chemical potentials both for baryon number and isospin, we obtain clear signals for a liquid-gas type transition to baryon matter at
At low temperature and large chemical potential QCD might exhibit a chiral inhomogeneous phase, as indicated by various simple low-energy models. One of these models is the 3+1-dimensional Nambu-Jona-Lasinio model, which is non-renormalizable -- rendering the results possibly dependent on the employed regularization scheme. While most previously published results regarding the inhomogeneous phase in this model were obtained with the Pauli-Villars or similar regularizations, this talk explores the dependence of this phase on different lattice regularizations. Furthermore, the lattice approach allows us to determine the energetically preferred shape of the condensate without a specific ansatz.
We studied the 2+1 dimensional XY model at nonzero chemical potential on deformed integration manifolds with the aim of alleviating its sign problem. We investigated several proposals for the deformations and managed to considerably improve on the severity of the sign problem with respect to standard reweighting approaches. In this talk I present numerical evidence that a significant reduction of the sign problem can be achieved which is exponential in both the squared chemical potential and the spatial volume. Furthermore, I discuss a new approach to the optimization procedure, based on reweighting, that substantially reduces its computational cost.
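The baseline against which such deformations are compared is plain phase reweighting, whose severity is quantified by the average sign. A toy Gaussian model (an illustrative assumption, not the XY model of the talk) shows the characteristic exponential decay:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Gaussian model with complex action S = x^2/2 + i*mu*x: sample with
# respect to exp(-x^2/2) and reweight by the phase exp(-i*mu*x).  The
# "average sign" <exp(-i*mu*x)> quantifies the severity of the sign
# problem; for this toy model it equals exp(-mu^2/2) exactly, i.e. it
# decays exponentially in the squared chemical-potential-like parameter.
def average_sign(mu, n_samples=100000):
    x = rng.normal(size=n_samples)
    return np.mean(np.exp(-1j * mu * x))

s_small = abs(average_sign(0.5))   # mild sign problem
s_large = abs(average_sign(2.0))   # severe: average sign ~ exp(-2)
```

When the average sign is exponentially small, the statistical error of any reweighted observable blows up by its inverse, which is exactly what manifold deformations aim to tame.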
We use one-flavour QCD (
Composite Higgs models are a popular solution to the Naturalness problem in the Higgs sector, where the mass of the Higgs boson is explained in terms of Goldstone dynamics. We address a composite model described by a
We present ongoing investigations of maximally supersymmetric Yang--Mills (
Maximally supersymmetric Yang--Mills theory (
On behalf of the Lattice Strong Dynamics (LSD) collaboration, we present first results for the SU(4) gauge theory Stealth Dark Matter hadron spectrum using stochastic Laplacian Heaviside (sLapH) smearing. We compare our results to previous work in the context of our Stealth Dark Matter baryon scattering project.
In the context of Strongly Interacting Dark Matter theories, dark isosinglet mesons might play an important role in the low-energy dynamics and might provide crucial signatures in collider and direct detection searches. We present first results in
Bandwidth and latency are central performance limiters for Lattice QCD. One way to overcome bandwidth limits is to reduce the number of bits needed, e.g., with mixed-precision solvers. These provide great speedups but increase the relative importance of latency. We discuss techniques that QUDA uses to reduce latencies from GPU-CPU and GPU-network transfers and their impact on strong-scaling HMC simulations, where these matter most.
MPI Job Manager (MPI_JM) is a "scheduler" designed to enable users to make maximum use of heterogeneous architectures, particularly for workloads in which a "swarm" of independent MPI tasks is required for a complete calculation - such as lattice QCD calculations of correlation functions on pre-existing configurations. MPI_JM manages all these tasks through lightweight C++ code supported by Python3. MPI_JM allows users to describe the resource requirements of their tasks (GPU-intense, CPU-only, number of nodes, wall clock time, etc.) as well as their dependencies. MPI_JM then schedules these tasks within an allocation on an HPC platform based upon user-defined priorities and dependencies. Jobs with GPU-intense and CPU-only requirements are placed on the same nodes, maximizing the use of all node resources. This is all managed with a single mpirun
call, minimizing the requirements of the service nodes that manage an HPC system. Planned features include (among others):
Multiple job-configurations: as the wall clock of the allocation nears the end, the optimal run configuration may not have enough time to complete, but doubling the nodes at a performance loss would allow a job to complete in time. MPI_JM can try alternate configurations specified by the user, to use up the otherwise idle cycles towards the end of a job allocation
Try again: sometimes, the GPUs on a node will just fail to start up in time, causing a job to time out. MPI_JM can be instructed to try N-times before giving up and trying a new job, or removing those nodes from the allowed ones to be used in the allocation.
Use real wall-clock time rather than a user-specified estimate: Optionally, MPI_JM will track the performance of similar jobs in a database, and then use this information to provide more reliable estimates of wall-clock time requirements than what is specified by the user.
etc.
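The core priority-plus-dependency scheduling idea described above can be sketched in a few lines (illustrative only; MPI_JM's real implementation is C++/Python3 and additionally tracks node resources, and all task names here are hypothetical):

```python
import heapq

# Minimal sketch of priority- and dependency-aware task scheduling.
# Tasks become "ready" once all their dependencies complete; ready tasks
# are launched in user-defined priority order (lower number = more urgent).
def schedule(tasks):
    deps = {name: set(d) for name, _, d in tasks}
    prio = {name: p for name, p, _ in tasks}
    ready = [(prio[n], n) for n in deps if not deps[n]]
    heapq.heapify(ready)
    order = []
    while ready:
        _, name = heapq.heappop(ready)
        order.append(name)                   # "launch" the task
        for other in deps:                   # unblock dependents
            if name in deps[other]:
                deps[other].discard(name)
                if not deps[other]:
                    heapq.heappush(ready, (prio[other], other))
    return order

order = schedule([
    ("propagators", 0, []),                        # e.g. GPU-intense solve
    ("smearing", 0, []),
    ("contractions", 1, ["propagators"]),          # e.g. CPU-only step
    ("analysis", 2, ["contractions", "smearing"]),
])
```

A real scheduler would additionally map ready tasks onto concrete nodes and respect wall-clock budgets, but the dependency bookkeeping is the same.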
The 2D O(N) non-linear sigma models are exactly solvable theories and, on the lattice, they have many applications from statistical mechanics to QCD toy models. In this talk, I will consider a particular generalization of the O(N) model, i.e. the non-linear sigma model on the supersphere. The global symmetry group of this model – the OSp(N+2M|2M) supergroup – mixes bosonic and fermionic degrees of freedom, hence the sigma model can be thought of as a toy model for string worldsheet theories with target space supersymmetry. In this talk, I will describe the non-linear sigma model on the supersphere, its discretization on the lattice, its renormalization properties, and the relation between this model and its non-supersymmetric equivalent. I will also present our strategy for numerical simulations and some preliminary numerical results.
A generalization of Wilsonian lattice gauge theory may be obtained by considering the possible self-adjoint extensions of the electric field operator in the Hamiltonian formalism. In the special case of
Recent studies on the 't Hooft anomaly matching condition have suggested a nontrivial phase structure in 4D SU(
In the large-
in the confined phase, while it restores in the deconfined phase, which is indeed one of the possible scenarios.
However, at small
with the consequence of the anomaly matching condition.
Here we investigate this issue for
The crucial point to note is that the CP restoration can be probed by the sudden change of the tail of the topological charge distribution at
which can be seen by simulating the theory at imaginary
Our results suggest that the CP restoration at
higher than the deconfining temperature, unlike the situation in the large-
The 3D Ising conformal field theory (CFT) describes different physical systems, such as uniaxial magnets or fluids, at their critical points. In absence of an analytical solution for the 3D Ising model, the scaling dimensions and operator product expansion (OPE) coefficients characterizing this CFT must be determined numerically. The currently most-cited values for these quantities have been obtained from the conformal bootstrap, while lattice calculations have so far only produced reliable results for the scaling dimensions involved in calculating the critical exponents. Using Quantum Finite Elements to investigate critical
For the 2d Ising model on a triangular lattice, we determine the exact values of the three critical coupling coefficients which restore conformal invariance in the continuum limit as a function of an affine transformation of the triangle geometry. On a torus with a non-trivial modular parameter, we present numerical results showing agreement with the exact CFT solution. Finally, we discuss how this method may be applied to simulate the critical Ising model on curved 2d simplicial manifolds.
We study the massless Schwinger model with an additional 4-fermi interaction and a topological term. For topological angle
CKM matrix elements can be obtained from lattice determinations of semileptonic decay form factors by combining them with experimental results for decay rates. We give a status update on our study using the Domain Wall Fermion action for up/down, strange and charm quarks to determine semileptonic form factors for
We present HPQCD's improved scalar, vector and tensor form factors for
We compare Standard Model observables using our form factors to experimental measurements for the rare flavour changing neutral current processes
We study, with lattice QCD, the radiative leptonic decays
We present a strategy for the extraction of the SD form factors and implement it in an exploratory lattice computation of the decay rates for the four channels of kaon decays (
It is the SD form factors which describe the interaction between the virtual photon and the internal hadronic structure of the decaying meson, and in our procedure we separate the SD and point-like contributions to the amplitudes. The form factors are extracted with good precision and used to reconstruct the branching ratio values, which are compared with the available experimental data.
These are very suppressed processes, which thus provide an excellent test of the Standard Model, and provide a useful avenue for the search for signatures of new physics.
In the region of hard photon energies, radiative leptonic decays represent important probes of the internal structure of hadrons.
Moreover, radiative decays can provide independent determinations of Cabibbo-Kobayashi-Maskawa matrix elements with respect to purely leptonic or semileptonic channels.
Prospects for a precise determination of leptonic decay rates with emission of a hard photon are particularly interesting, especially for the decays of heavy mesons for which currently only model-dependent predictions, based on QCD factorization and sum rules, are available to compare with existing experimental data.
We present a non-perturbative lattice calculation of the structure-dependent form factors which contribute to the amplitudes for the radiative decays
With moderate statistics, thanks to the use of a sine-cardinal-reconstruction technique and improved estimators, we are able to provide rather precise, first-principles results for the form factors in the full kinematical (photon-energy) range for both light and heavy mesons.
We have developed a strategy to implement RI/MOM schemes for quark bilinear and four-quark operators. In these schemes, the momentum transfer is not restricted to the exceptional point or to the symmetric point. In particular, we study the convergence of the perturbative series and the potential to reduce some systematic errors (discretisation and chiral symmetry breaking effects). Notably, we observe a significant reduction of the pseudo-Goldstone pole contributions, which could lead to a significant improvement for the renormalisation of some four-quark operators.
Structure and geometry of 12C from a Wigner SU(4) symmetric interaction
The carbon-12 nucleus, one of the most crucial elements for life, is full of interesting structures and multifaceted complexity. One famous example is the first excited 0+ state, the so-called Hoyle state. It cannot be described by most ab initio calculations. Moreover, the lack of a model-independent description of its shape also hinders an understanding of its geometric properties. Here we present calculations of 12C by nuclear lattice effective field theory using a simple nucleon–nucleon interaction that is independent of spin and isospin and therefore invariant under Wigner’s SU(4) symmetry. Despite the simplicity of the interaction, the agreement with experiment is impressive, not only for all the low-lying levels including the Hoyle state, but also for properties such as the charge radius, density profiles, and B(E2) transitions. Furthermore, we provide the first model-independent tomographic scan of the three-dimensional geometry of these nuclear states, which reveals many interesting shapes and features.
A recently re-discovered variant of the Backus-Gilbert algorithm for spectral reconstruction enables the controlled determination of smeared spectral densities from lattice field theory correlation functions. The particular advantage of this model-independent approach is the a priori specification of the kernel with which the underlying spectral density is smeared, allowing for variation of its peak position, smearing width, and functional form. If the unsmeared spectral density is sufficiently smooth in the neighborhood of a particular energy, it can be obtained from an extrapolation to zero smearing kernel width at fixed peak position.
The determination of scattering amplitudes is a natural application. As a proof-of-principle test, an inclusive rate is computed in the two-dimensional O(3) sigma model from a two-point correlation function of conserved currents. The results at finite and zero smearing radius are in good agreement with the known analytic form up to energies at which 40-particle states contribute, and are sensitive to the 4-particle contribution to the inclusive rate. The straightforward adaptation to compute the R-ratio in lattice QCD from two-point functions of the electromagnetic current is briefly discussed.
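The basic kernel-approximation step behind such reconstructions can be sketched in a toy setting (a deliberately simplified stand-in for Backus-Gilbert-type methods, with an assumed single-state spectral density; all parameters are illustrative):

```python
import numpy as np

# Toy sketch of smeared spectral reconstruction: choose coefficients g_t
# so that sum_t g_t * exp(-omega*t) approximates a chosen Gaussian
# smearing kernel; the smeared density is then sum_t g_t * C(t), since
# C(t) = integral d(omega) rho(omega) exp(-omega*t).
omegas = np.linspace(0.05, 5.0, 400)
ts = np.arange(1, 15)

def coefficients(center, width):
    A = np.exp(-np.outer(omegas, ts))                    # basis e^{-omega*t}
    k = np.exp(-0.5 * ((omegas - center) / width) ** 2)  # target kernel
    g, *_ = np.linalg.lstsq(A, k, rcond=None)
    return g

# Synthetic data: one sharp state at omega0, so C(t) = exp(-omega0*t) and
# the smeared density should reproduce the kernel evaluated at omega0.
omega0 = 1.0
C = np.exp(-omega0 * ts)
rho_peak = coefficients(center=1.0, width=0.4) @ C   # kernel centered on the state
rho_off = coefficients(center=2.0, width=0.4) @ C    # kernel centered away from it
```

The a priori choice of kernel (center, width, functional form) is exactly the freedom highlighted above; scanning the center maps out the smeared spectral density.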
A very rich place to look for phenomena to challenge our current understanding of physics is the flavor sector of the Standard Model (SM). In particular, the
Recently, there have been interesting efforts in Lattice QCD (LQCD) trying to shed some light on the current situation. Calculations of the form factors of the gold-plated channels
norm, and when combined with the latest data coming from
In this talk, I will review the current status of the form factor LQCD calculations at non-zero recoil of the
We review recent progress on heavy flavor physics from lattice QCD.
One of the most direct predictions of QCD is the existence of color-singlet states called glueballs, which emerge as a consequence of the gluon field self-interactions. Despite the outstanding success of QCD as a theory of the strong interaction and decades of experimental and theoretical efforts, all but the most basic properties of glueballs are still being debated. In this talk, I will review efforts aimed at understanding glueballs and the current status of glueball searches, including recent experimental results and lattice calculations.
The study of real-time evolution of quantum field theories is known to be an extremely challenging problem for classical computers. Due to a fundamentally different computational strategy, quantum computers hold the promise of allowing for detailed studies of these dynamics from first principles. However, much like with classical computations, it is important that quantum algorithms do not have a cost that scales exponentially with the volume. In this paper, we present an interesting test case: a formulation of a compact U(1) gauge theory in 2+1 dimensions. A naive implementation onto a quantum circuit has a gate count that scales exponentially with the volume. We discuss how to break this exponential scaling by performing an operator redefinition that reduces the non-locality of the Hamiltonian and also provide explicit implementations using the Walsh function formalism. While we study only one theory as a test case, we expect the exponential gate scaling to persist for formulations of other gauge theories, including non-Abelian theories in higher dimensions.
We propose a variational quantum eigensolver suitable for exploring the phase structure of the multi-flavor Schwinger model in the presence of a chemical potential. The parametric ansatz we design incorporates the symmetries of the model and can be implemented on both measurement-based and circuit-based quantum hardware. We numerically demonstrate that our ansatz is able to capture the phase structure of the model and allows for faithfully approximating the ground state. Our results show that our approach is suitable for current intermediate-scale quantum hardware and can be readily implemented on existing quantum devices.
With the long term perspective of using quantum computers for lattice gauge theory simulations, an efficient method of digitizing gauge group elements is needed. We thus present our results for a handful of discretization approaches for the non-trivial example of
Simulating SU
Sign problems in Monte Carlo simulations have long hindered studies of phase diagrams of lattice gauge theories (LGTs) at finite densities. Quantum computation of LGTs does not encounter sign problems, but preparing thermal states needed for a complete phase-diagram analysis on quantum devices is a difficult and resource-intensive process. Thermal Pure Quantum (TPQ) states have been proposed in recent years as an efficient method to reliably estimate thermal expectation values on a quantum computer. We propose a new form of TPQ states, called Physical Thermal Pure Quantum (PTPQ) states, to quantum compute thermal expectation values and non-equal time correlation functions of LGTs at finite temperature and density. We illustrate the approach by computing the chiral phase diagram of a toy theory accessible to near-term quantum hardware, 1+1 dimensional
Quantum computing is a promising new computational paradigm which may allow one to address exponentially hard problems inaccessible in Euclidean lattice QCD. These include real-time dynamics, matter at non-zero baryon density, and field theories with non-trivial CP-violating terms, and can often be traced to the sign problem that makes stochastic sampling methods inapplicable. As a prototypical example we consider a low-dimensional theory, Quantum Electrodynamics in 1+1 space-time dimensions with a theta term. Using staggered fermions, this model can be mapped to a quantum Ising-like model with nearest-neighbor interactions which is well suited for digital gate-based quantum computers. We study and compare properties of three algorithms that can be employed for the initial state preparation: Quantum Adiabatic Evolution (QAE), the Quantum Approximate Optimization Algorithm (QAOA), and the recently proposed Rodeo Algorithm. Understanding their convergence properties may be helpful for designing optimal algorithms with a minimal number of CNOT gates for near-term noisy intermediate-scale quantum (NISQ) devices that are currently within technological reach.
Recently, a doubly charmed tetraquark
The doubly charm tetraquark with exotic quark composition
We study a doubly-bottomed tetra-quark state
Employing
By extrapolating results at
A comparison shows that the effect from virtual
We report progress on finite-volume determinations of heavy-light-meson -- Goldstone-boson scattering phase shifts using the Lüscher method on CLS 2+1-flavor gauge field ensembles. In a first iteration we will focus on D-meson -- pion scattering in the elastic scattering region at various pion masses using ensembles with three lattice spacings. We employ ensembles on the CLS quark-mass trajectory with a fixed trace of the quark-mass matrix as well as ensembles with a strange-quark mass fixed close to its physical value, which will allow us to study both the light- and the strange-quark-mass dependence of positive-parity heavy-light hadrons close to threshold.
We present an investigation of the spectrum of exotic charmonium-like mesons using lattice QCD. The focus is on
Optimized meson operators in the distillation framework are used to study the charmonium spectrum in two ensembles with two heavy dynamical quarks at half the physical charm quark mass but different lattice spacings. The use of optimal meson distillation profiles is shown to increase the overlap with the ground state significantly, as well as grant access to excited states, for multiple quantum numbers including hybrid states with very little additional cost. These same operators are also employed for the calculation of meson-glueball mixing.
We present results for the electromagnetic form factors of the proton and neutron computed on the Coordinated Lattice Simulations (CLS) ensembles with
We present results for the nucleon electromagnetic form factors using
We present the results of a complete lattice calculation of the gravitational form factors (GFFs) of the proton and pion, including glue as well as connected and disconnected quark contributions, on an ensemble with 2+1 flavors of Wilson fermions with close-to-physical pion mass of 170 MeV. We use these results to predict full, physical densities of energy, pressure, and shear forces inside the proton and pion via the relation of GFFs with the energy-momentum tensor.
Nucleon isovector form factors calculated on a 2+1-flavor domain-wall-fermion ensemble with strange and degenerate up and down quarks at physical mass and lattice cutoff,
I will discuss progress on computing nucleon elastic form factors with the stochastic LapH method.
OR, if these results are not yet ready,
I will discuss preliminary results on the nucleon-pion sigma term determined with O(30) HISQ ensembles with MDWF valence fermions. The nucleon spectrum results are determined at 7 pion masses in the range 130 < Mpi < 400 MeV, four lattice spacings in the range 0.06 < a < 0.15 fm, and several volumes. The nucleon-pion sigma term is determined through a derivative of the extrapolation of the nucleon mass to the physical point.
We present results for the isovector axial form factor of the nucleon computed on a set of
We use the summed operator insertion method (summation method) to suppress the contamination from excited states, and use the
Recently an approximate SU(4) chiral spin-flavour symmetry was discovered in multiplet patterns of QCD meson correlation functions, in a temperature range above the chiral crossover. This symmetry is larger than the full chiral symmetry of QCD with massless u, d quarks. It can only arise effectively when color-electric quark-gluon interactions dominate the effective Dirac action of QCD, which suggests that quarks remain bound in such a regime. At temperatures about two to three times the crossover temperature, this pattern disappears again, and the usual chiral symmetry is recovered. We present additional evidence for this phenomenon based on meson screening masses, and discuss how this chiral spin symmetric band continues into the QCD phase diagram.
We investigate the phase structure of QCD with three degenerate quark flavors at finite temperature using Mobius domain wall fermions. To locate the critical endpoint and explore the order of phase transition on the diagonal line of the Columbia plot, we performed simulations at temperatures 131 and 196 MeV with lattice spacing
The global center symmetry of quenched QCD at zero baryonic chemical potential is broken spontaneously at a critical temperature
The so-called Columbia plot summarises the order of the QCD thermal transition as a function of the number of quark flavours and their masses. Recently, it was demonstrated that the first-order chiral transition region, as seen for
QCD with infinite heavy quark masses exhibits a first-order thermal transition which is driven by the spontaneous breaking of the global
Decreasing the quark masses weakens the transition until the latent heat vanishes at the critical mass. We give an update on our exploration of the heavy mass region with three flavors of staggered quarks.
The QCD crossover is marked by the rapid change in various observables such as the chiral condensate, the Polyakov loop or the topological susceptibility. We studied the topological properties in pure SU(3) gauge theory where the transition is first order.
Our study focused on the topological susceptibility and the
In our previous work, we showed that unresolved excited state contaminations provide a major source of systematic uncertainty in the calculation of the nucleon electric dipole moment due to the QCD topological term theta. Here we extend this result to the calculation of the nucleon electric dipole moment due to the quark chromo-electric dipole moment operator. We also show quantitatively the impact of mixing of the latter with lower-dimensional operators on the lattice. Finally, we present preliminary results from a unitary clover-on-clover calculation for the QCD topological term.
We report our calculation of the neutron electric dipole moment (EDM) induced by the theta term. We use overlap fermions on three 2+1-flavor RBC/UKQCD domain wall lattices with pion mass ranging from ~300 to ~500 MeV. The use of lattice chiral fermions guarantees a correct chiral limit even at finite lattice spacings and enables us to reliably extrapolate our result from heavy pion masses to the physical point. Furthermore, by utilizing the partially-quenched chiral extrapolation formula, several valence pion points are added to better constrain the chiral extrapolation. With the help of the cluster decomposition error reduction (CDER) technique and a large amount of statistics accumulated, the statistical uncertainty is effectively controlled. We also carefully check the systematic uncertainties from the two-state fits, the momentum extrapolation, the chiral extrapolation and the CDER technique.
Low-energy precision measurements of neutron properties provide unique probes of novel physics at the TeV scale. Precision studies of neutron decay observables are susceptible to beyond the Standard Model (BSM) tensor and scalar interactions. The neutron electric dipole moment is also highly sensitive to new BSM CP-violating interactions. To fully utilise the potential of future experimental neutron physics programs, matrix elements of appropriate low-energy effective operators within neutron states must be precisely calculated. We present results from the QCDSF/UKQCD/CSSM collaboration for the isovector charges
By far the biggest contribution to hadronic vacuum polarization (HVP) arises from the two-pion channel. Its quark-mass dependence can be evaluated by combining dispersion relations with chiral perturbation theory, providing guidance on the functional form of chiral extrapolations, or even interpolations around the physical point. In addition, the approach allows one to estimate in a controlled way the isospin-breaking corrections that arise from the pion mass difference. As an application, I will present an updated estimate of phenomenological expectations for electromagnetic and strong isospin-breaking corrections to the HVP contribution in the anomalous magnetic moment of the muon.
In this contribution we report on progress in the determination of the isospin breaking corrections to the vector-vector correlator in QCD from the RBC/UKQCD collaborations. They are relevant to estimate the hadronic contributions to the muon anomalous magnetic moment directly from first-principles lattice QCD simulations, and indirectly from cross sections measured in tau decay experiments.
We present a calculation of the intermediate window quantity of the hadronic vacuum polarization contribution to the muon g-2 using a Lorentz-covariant coordinate-space method at a fixed pion mass of ~350 MeV. This method is more flexible in the choice of the integration kernel than the time-momentum representation and gives a different perspective on the systematic errors of the g-2 calculation. It furthermore serves as a check of the recent results of the Mainz group.
Standard local updating algorithms experience a critical slowing down close to the continuum limit, which is particularly severe for topological observables. In practice, the Markov chain tends to remain trapped in a fixed topological sector. This problem further worsens at large
To mitigate it, we adopt the parallel tempering on boundary conditions proposed by M. Hasenbusch. This algorithm allows us to reduce the auto-correlation time of the topological charge by up to several orders of magnitude.
With this strategy we are able to provide the first computation of low-lying glueball masses at large
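The swap step at the heart of the parallel-tempering algorithm can be sketched as follows (a toy illustration, not the actual implementation; the function and variable names are ours):

```python
import math

def swap_probability(s_r_ur, s_r_us, s_s_us, s_s_ur):
    """Metropolis acceptance probability for exchanging the configurations
    U_r and U_s of two replicas r and s, which differ only in their
    boundary conditions. s_x_uy denotes the action of replica x
    evaluated on configuration U_y."""
    delta_s = (s_r_us + s_s_ur) - (s_r_ur + s_s_us)
    return min(1.0, math.exp(-delta_s))

# Example: if each configuration has the same action under both replicas'
# boundary conditions, the exchange is always accepted.
p_accept = swap_probability(1.3, 1.3, 0.7, 0.7)
```

Periodically proposing such swaps lets configurations drift between the replica with open boundaries (where topology changes easily) and the target periodic replica.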
In order to understand the puzzle of the free energy of an individual quark in QCD, we explicitly construct ensembles with quark numbers
Quark confinement is perhaps the most important emergent property of the theory of quantum chromodynamics. I review recent results studying centre vortices in SU(3) lattice gauge theory with dynamical quarks. Starting from the original Monte Carlo gauge fields, a vortex identification procedure yields vortex-removed and vortex-only backgrounds. The comparison between the original `untouched' Monte Carlo gauge fields and these so-called vortex-modified ensembles supports the notion that centre vortices are fundamental to confinement in full QCD.
We compute the topological susceptibility of
The Hamiltonian approach can be used successfully to study the real time evolution of a non-Abelian lattice gauge theory on the available noisy quantum computers. In this talk, results from the real time evolution of SU(2) pure gauge theory on IBM hardware are presented. The long real time evolution spanning dozens of Trotter steps with hundreds of CNOT gates and the observation of a traveling excitation on the lattice were made possible by using a comprehensive set of error mitigation techniques. Our novel tool is self-mitigation, in which the same physics circuit is also used as a noise-mitigation circuit.
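The idea behind self-mitigation can be illustrated with a toy depolarizing-noise model (our own sketch, not the authors' procedure: the mitigation circuit is built from the same gates as the physics circuit but its ideal expectation value is known):

```python
def self_mitigated(obs_noisy, mitigation_noisy, mitigation_ideal=1.0):
    """Divide out the noise factor measured with a 'mitigation circuit'
    made of the same gates as the physics circuit (e.g. forward then
    backward Trotter steps), whose ideal expectation value is known."""
    return obs_noisy * mitigation_ideal / mitigation_noisy

# Toy noise model: depolarizing noise rescales every expectation by f.
f, obs_true = 0.7, 0.6
obs_noisy = f * obs_true          # measured with the physics circuit
mitigation_noisy = f * 1.0        # measured with the mitigation circuit
obs_corrected = self_mitigated(obs_noisy, mitigation_noisy)
```

Because both circuits share the same gate structure, the assumption is that they suffer a similar noise factor, which then cancels in the ratio.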
Studies of the Schwinger model in the Hamiltonian formulation have hitherto used the Kogut-Susskind staggered approach. However, Wilson fermions offer an alternative approach and are often used in Monte Carlo simulations. Tensor networks allow the exploration of the Schwinger model even with a topological θ-term, where Monte Carlo methods would suffer from the sign problem. Here, we study the one-flavour Schwinger model with Wilson fermions and a topological θ-term using Matrix Product States (MPS) methods in the Hamiltonian formulation. The mass parameter in this model receives an additive renormalization shift from the Wilson term. In order to perform a continuum extrapolation, the knowledge of this shift is important. We present a method suitable for tensor networks that determines the mass renormalization using observables such as the electric field density, which vanish when the renormalized mass is zero. Using this shift, the continuum extrapolation is performed for various observables.
Quantum simulations of QCD require digitization of the infinite-dimensional gluon field. Schemes for doing this with the minimum number of qubits are desirable. A practical digitization for SU(3) gauge theories via its discrete subgroup S(1080) has been shown to allow classical simulations down to a=0.08 fm and to reproduce thermal properties and the glueball spectrum using modified and improved actions. Together with primitive gates and improved Hamiltonians for non-abelian gauge theories, the time is approaching when more realistic quantum resource estimates will become possible.
Many interesting physical systems can be modelled as open quantum systems. Non-Hermitian Hamiltonians are known to describe, or at least approximate, some of these open quantum systems well. Recently, there has been increased interest in quantum algorithms for simulating such Hamiltonians, such as the Quantum Imaginary Time Evolution algorithm, and others based on trace-preserving quantum operations using an enlarged Hilbert space. The focus of our work is on testing the near-term applicability of some of these NISQ-era algorithms on real, noisy quantum hardware. We will look at the 1D quantum Ising model in complex parameter space. Such models have a rich phase structure in the complex plane, and studying them would allow us to explore critical regions such as Lee-Yang edges and Fisher zeros. We will also discuss the applicability of these algorithms for ground-state preparation.
We determine the gradient flow scale
The OpenLat initiative presents its results of lattice QCD simulations using Stabilized Wilson Fermions (SWF) using 2+1 quark flavors. Focusing on the
We present the results of basic gauge observables and of hadron masses, and their statistical properties such as the autocorrelation. For the determination of the hadron masses we used a Bayesian analysis framework with constraints and model averaging, to obtain results as unbiased as possible.
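A minimal sketch of model averaging with AIC-style weights (our own illustration; the exact weighting scheme used in the analysis may differ):

```python
import math

def aic_model_average(fits):
    """Model-average fit results. fits is a list of (value, stat_err, AIC)
    tuples; weights are w_i ~ exp(-AIC_i/2), and the returned error
    combines the weighted statistical variance with the model-spread
    variance, so disagreement between models inflates the uncertainty."""
    aic_min = min(a for _, _, a in fits)
    w = [math.exp(-0.5 * (a - aic_min)) for _, _, a in fits]
    norm = sum(w)
    w = [x / norm for x in w]
    mean = sum(wi * v for wi, (v, _, _) in zip(w, fits))
    var_stat = sum(wi * e * e for wi, (_, e, _) in zip(w, fits))
    var_model = sum(wi * (v - mean) ** 2 for wi, (v, _, _) in zip(w, fits))
    return mean, math.sqrt(var_stat + var_model)

# Two equally plausible fit windows giving slightly different masses:
mass, err = aic_model_average([(1.00, 0.10, 10.0), (1.20, 0.10, 10.0)])
```

Fits with poor AIC are exponentially suppressed, so the average is dominated by the models the data actually support.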
We compute the static energy of a quark-antiquark pair in lattice QCD using a method which is not based on Wilson loops, but where the trial states are formed by eigenvector components of the covariant lattice Laplace operator. The computational effort of this method is significantly lower than the standard Wilson loop calculation, when computing the static potential not only for on-axis, but also for many off-axis quark-antiquark separations, i.e., when a fine spatial resolution is required, e.g., for string breaking calculations. We further improve the signal by using multiple eigenvector pairs, weighted with Gaussian profile functions of the eigenvalues, providing a basis for a generalized eigenvalue problem (GEVP), as it was recently introduced to improve distillation in meson spectroscopy. We show results from the new method for the static potential with dynamical fermions and demonstrate its efficiency compared to traditional Wilson loop calculations.
We present SU(3) lattice Yang-Mills data for hybrid static potentials from five ensembles with different small lattice spacings and the corresponding parametrizations for quark-antiquark separations
We study four-quark systems using lattice QCD, which consist of two heavy antiquarks (either
Moreover, we study the overlaps of trial states generated by our interpolating operators and low-lying energy eigenstates to obtain insights regarding the composition of the latter.
We present the leading-order mixed-action effect
for clover or overlap valence fermion actions on gauge ensembles with several kinds of sea fermion actions, across a widely used range
of the lattice spacing, on gauge ensembles with dynamical chiral sea fermions like the domain-wall or HISQ fermion. When the clover sea fermion action, which has explicit chiral symmetry breaking, is used in the ensemble,
used.
We present the current status of our analysis of nucleon structure observables, including isovector charges and twist-2 matrix elements as well as the nucleon mass. Results are computed on a large set of CLS
The study of resonance form factors in lattice QCD is a challenging endeavor. Namely, the infinite-volume limit,
In this talk, I shall discuss a novel method to tackle this problem in which the difficulty, related to the presence of the triangle diagram, never emerges. The approach is based on the study of two-particle scattering in a static, spatially periodic external field by using a generalization of the Lüscher method in the presence of such a field. In addition, I shall demonstrate that the resonance form factor in the Breit frame is given by the derivative of a resonance pole position in the complex plane with respect to the coupling constant of the external field. This result is a generalization of the well-known Feynman-Hellmann theorem for the form factor of a stable particle.
We present results of nucleon structure studies measured in 2+1 flavor QCD with the physical light quarks (
We report on the recent progress of our analysis of nucleon sigma terms, as well as the singlet axial and tensor nucleon charges.
These are extracted from the CLS gauge configurations, which utilise the Lüscher-Weisz gluon action and the Sheikholeslami-Wohlert fermion action with
We have employed a variety of methods to determine the necessary correlation functions, including the sequential source method for connected contributions, and the truncated solver method for disconnected contributions.
Extrapolation to the physical point accounts for leading-order discretisation, chiral, and finite-volume effects.
We present an analysis of the pion-nucleon sigma term on the CLS ensembles with
A lot of progress has been made in the direct determination of nucleon sigma terms. Using similar methods we consider the sigma terms of the other octet baryons as well. These are determined on CLS gauge field ensembles employing the Lüscher-Weisz gluon action and the Sheikholeslami-Wohlert fermion action with
Using the high-statistics datasets of the HotQCD Collaboration, generated with the HISQ (2+1)-flavor action for light and strange quarks, and treating the charm sector in the quenched approximation, we analyze the second- and fourth-order cumulants of charm fluctuations and the correlations of charm with lighter conserved flavor quantum numbers. We can make use of a factor of 100 larger statistics on
and of datasets on lattices with temporal extent
that have never been used in studies of charm fluctuations. This allows us to perform the continuum limit for charm fluctuations in the quenched approximation.
By analyzing correlations of charm fluctuations with baryon-number and electric-charge fluctuations, we can project onto charmed-baryon and charmed-meson correlations and compare the results with quark-model extended hadron resonance gas model calculations. We aim at a precise determination of the dissociation temperature of charmed hadrons and will probe the sensitivity of the fluctuation observables to the presence of multiply charmed baryons.
We discuss results about inhomogeneous chiral phases, i.e. phases where in addition to chiral symmetry also translational symmetry is broken, in the
The thermal photon emission rate is determined by the spatially transverse, in-medium spectral function of the electromagnetic current. Accessing the spectral function using Euclidean data is, however, a challenging problem due to the ill-posed nature of inverting the Laplace transform. In this contribution, we present first results on testing the proposal of directly computing the analytic continuation of the retarded correlator at fixed, vanishing virtuality of the photon via the calculation of the appropriate Euclidean correlator at imaginary spatial momentum. We employ two flavors of dynamical Wilson fermions at a temperature of 250 MeV.
We study the Wilson line correlation function in Coulomb gauge on
We present a lattice determination of the disconnected contributions to the leading-order hadronic vacuum polarization (HVP) to the muon anomalous magnetic moment in the so-called short and intermediate time-distance windows. We employ gauge ensembles produced by the Extended Twisted Mass Collaboration (ETMC) with
We present new lattice results of the ETM Collaboration for the SM prediction of the so-called intermediate window (W) and short-distance (SD) contributions to the leading-order hadronic vacuum polarization (HVP) term of the muon anomalous magnetic moment,
Our results are obtained from extensive simulations of twisted mass lattice QCD with dynamical up, down, strange and charm quarks at physical mass values, different volumes, and lattice spacings down to
With the publication of the new measurement of the anomalous magnetic moment of the muon, the discrepancy between experiment and the data-driven theory prediction has increased to
We present our calculation of the intermediate distance window contribution using
Our result at the physical point displays a tension of
We employ
We discuss the conversion of our lattice result for the hadronic running of the electromagnetic coupling,
We investigate the isospin symmetry breaking effects in the two-flavour Schwinger model. Specifically, we check a prediction by Howard Georgi about automatic fine-tuning effects, i.e. that the isospin breaking is suppressed exponentially in the fermion mass
We study non-invertible defects constructed from dualities in the Cardy-Rabinovici model. The Cardy-Rabinovici model is a four-dimensional
In this contribution, we report on our study of the properties of the Wilson flow and on the calculation of the topological susceptibility of
The Wilson flow is shown to scale according to the quadratic Casimir operator of the gauge group, as was already observed for
for a large interval of the inverse coupling for each probed value of
The continuum limit of the topological susceptibility is computed and it is conjectured that it scales with the dimension of the group. Our estimates of the topological susceptibility and the
measurements performed in the
Charged particles in an Abelian Coulomb phase are non-local infra-particles that are surrounded by a cloud of soft photons which extends to infinity. Gauss' law prevents the existence of charged particles in a periodic volume. In a C-periodic volume, which is periodic up to charge conjugation, on the other hand, charged particles can exist. This includes vortices in the 3-d XY-model, magnetic monopoles in 4-d U(1) gauge theory, as well as protons and other charged particles in QCD coupled to QED. In four dimensions non-Abelian charges are confined. Hence, in an infinite volume non-Abelian infra-particles cost an infinite amount of energy. However, in a C-periodic volume non-Abelian infra-particles (whose energy increases linearly with the box size) can indeed exist. Investigating these states holds the promise of deepening our understanding of confinement.
We present our ongoing study of a set of solutions to the
We report recent progress in determining
We present the first lattice study of dibaryons with the highest bottom number. Utilizing a set of state-of-the-art lattice QCD ensembles and methodologies, we determine the ground state of the dibaryon composed of two
The dominant contribution to the long distance region of any meson correlation function comes from the quark propagator's eigenmodes with the smallest eigenvalues. As precision demands for this region increase, methods that offer an exact determination of these low modes have become widely adopted as an effective tool for noise reduction. This work explores the effect of exact low modes on noise reduction for all-to-all as well as traditional wall-to-all propagator techniques. We focus on the connected light quark vector current two-point correlation function, a key observable for the hadronic vacuum polarization contribution to the muon's anomalous magnetic moment. For this analysis we use MILC's 2+1+1 Highly Improved Staggered Quark (HISQ) ensembles at lattice spacings as small as ~0.06 fm at physical mass.
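The split into an exact low-mode piece plus a stochastic remainder can be sketched on a toy diagonal operator (our own illustration only, not the production code; a diagonal D stands in for the quark propagator's eigendecomposition):

```python
import random

def deflated_trace_inverse(eigvals, n_low, n_noise, rng):
    """Toy split of tr(D^{-1}) for a diagonal operator D = diag(eigvals):
    the n_low smallest eigenvalues are summed exactly (the 'low modes'),
    and the remainder is estimated stochastically with Gaussian noise
    (a Hutchinson-style estimator)."""
    lam = sorted(eigvals)
    exact_low = sum(1.0 / l for l in lam[:n_low])
    high = lam[n_low:]
    est = 0.0
    for _ in range(n_noise):
        z = [rng.gauss(0.0, 1.0) for _ in high]
        est += sum(zi * zi / l for zi, l in zip(z, high))  # z^T D_high^{-1} z
    return exact_low + (est / n_noise if n_noise else 0.0)
```

Because the smallest eigenvalues dominate 1/λ, treating them exactly removes the largest contributions from the noisy part of the estimate, which is the essence of the noise reduction discussed above.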
We report recent progress in data analysis on two-point and three-point correlation functions. The data set of measurement is obtained using the Oktay-Kronfeld (OK) action for the heavy quarks (valence quarks) and the HISQ action for the light quarks on MILC HISQ a12m220 ensemble (
Inclusive hadronic
In this study, we explore the distribution of the energy-momentum tensor around a static quark and antiquark in SU(3) pure gauge theory at finite temperature. Doubly extrapolated transverse distributions on the mid-plane of the flux tube are presented for the first time at nonzero temperature. We also investigate the spatial distributions of the flux tube on the source plane, obtained from the stress tensor, for several
We compute the spectra of flux tubes formed between a static quark antiquark pair up to a significant number of excitations and for eight symmetries of the flux tubes, up to
A study of heavy-light meson spectroscopy, specifically the excited and exotic spectra of
In this work, we calculate the fine tuning of parameters in N = 1 Supersymmetric QCD, discretized on a Euclidean lattice. Specifically, we study the renormalization of the Yukawa (gluino-quark-squark interactions) and the quartic (four-squark interactions) couplings. At the quantum level, these interactions suffer from mixing with other operators which have the same transformation properties. We exploit the symmetries of the action, such as charge conjugation and parity, in order to reduce the list of the mixing patterns. To deduce the renormalizations and the mixing coefficients we compute, perturbatively to one loop and to the lowest order in the lattice spacing, the relevant three-point and four-point Green’s functions using both dimensional and lattice regularizations. Our lattice formulation involves the Wilson discretization for the gluino and quark fields; for gluons we employ the Wilson gauge action; for scalar fields (squarks) we use naive discretization. We obtain analytic expressions for the renormalization and mixing coefficients of the Yukawa couplings; they are functions of the number of colors Nc, the gauge parameter α, and the gauge coupling g. Furthermore, preliminary results on the quartic couplings are also presented.
Lattice scales defined using gradient flow are typically very precise, while also easy to calculate. However, results from different definitions of the flow and of the operator can differ, suggesting possible systematic effects. Using the set of RBC-UKQCD 2+1 flavor domain wall fermion and Iwasaki gauge action ensembles, we explore differences between
The Lambda parameter of three-flavor QCD is obtained by computing the running of a renormalized finite-volume coupling from hadronic to very high energies, where connection with perturbation theory can safely be made. The theory of decoupling allows us to perform the bulk of the computation in pure gauge theory. The missing piece is then an accurate matching of a massive three-flavor coupling with the pure gauge one, in the continuum limit of both theories. A big challenge is to control the simultaneous continuum and decoupling limits, especially when chiral symmetry is broken by the discretization.
We refine our previous study of a
This leads to a coupled-channel Schrödinger equation where the two channels correspond to
The calculation of disconnected diagram contributions to physical signals is a computationally expensive task in Lattice QCD. To extract the physical signal, the trace of the inverse lattice Dirac operator, a large sparse matrix, must be stochastically estimated. Because the variance of the stochastic estimator is typically large, variance reduction techniques must be employed. Multilevel Monte Carlo (MLMC) methods reduce the variance of the trace estimator by utilizing a telescoping sequence of estimators. Frequency splitting is one such method that uses a sequence of inverses of shifted operators to estimate the trace of the inverse lattice Dirac operator; however, there is no a priori way to select the shifts that minimize the cost of the multilevel trace estimation. We present a sampling and interpolation scheme that is able to predict the variances associated with frequency splitting under displacements of the underlying space-time lattice. The interpolation scheme is able to predict the variances to high accuracy and therefore choose shifts that correspond to an approximate minimum of the cost for the trace estimation. We show that frequency splitting with the chosen shifts displays significant speedups over multigrid deflation.
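The telescoping structure behind frequency splitting can be sketched on a toy diagonal operator (our own illustration; in practice each level trace is estimated stochastically, and it is those level variances that the shift-selection scheme predicts):

```python
def trace_inv(eigvals, shift=0.0):
    """Exact tr[(A + shift)^{-1}] for a diagonal toy operator A."""
    return sum(1.0 / (l + shift) for l in eigvals)

def frequency_split_levels(eigvals, shifts):
    """Level traces of the frequency-splitting telescoping sum
        tr A^-1 = sum_k tr[(A+s_k)^-1 - (A+s_{k+1})^-1] + tr[(A+s_m)^-1],
    with s_0 = 0, for a diagonal toy operator A = diag(eigvals)."""
    s = [0.0] + list(shifts)
    levels = [trace_inv(eigvals, s[k]) - trace_inv(eigvals, s[k + 1])
              for k in range(len(s) - 1)]
    levels.append(trace_inv(eigvals, s[-1]))
    return levels
```

By construction the levels sum exactly to tr A^{-1}; the gain comes from each difference of shifted inverses having a much smaller stochastic variance than the original estimator.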
Staggered fermions, Karsten-Wilczek (KW) fermions and Borici-Creutz (BC) fermions all retain a remnant chiral symmetry. The price to be paid is that they are doubled, and the resulting taste symmetry is broken by cut-off effects. We measure the size of the taste symmetry violation by determining the low-lying eigenvalues of these fermion operators in the two-dimensional Schwinger model, which admits, like QCD, a global topological charge of a given gauge configuration. A first result is that it matters whether the pertinent eigenmode is a would-be zero-mode or a non-topological mode. The intra-pair splittings of the fermion formulations mentioned are found to depend sensitively on the gauge coupling
We report recent progress in data analysis on two-point correlation functions with HYP-smeared staggered fermions using a sequential Bayesian fitting method. We present details on the data analysis and preliminary results for the meson spectrum.
We describe our implementation of a multigrid solver for Wilson clover fermions, which increases parallelism by solving for multiple right-hand sides (MRHS) simultaneously. The solver is based on Grid and thus runs on all computing architectures supported by the Grid framework. We present detailed benchmarks of the relevant kernels, such as the hopping and clover terms on the various multigrid levels, intergrid operators, and reductions. The benchmarks were performed on the JUWELS Booster system at FZ Jülich, which is based on Nvidia A100 GPUs. For example, solving a
QCD sum-rule mass predictions for tetraquark states provide insights on the interpretations and internal structure of experimentally-observed exotic mesons. However, the overwhelming majority of tetraquark QCD sum-rule analyses have been performed at leading order (LO), which raises questions about the underlying theoretical uncertainties from higher-loop corrections. The impact of next-to-leading order (NLO) perturbative effects are systematically examined in scalar (
When comparing the Lagrangian and Hamiltonian formulations of lattice gauge theories, a matching procedure is required to match the parameters and observables between these two formulations. For this, we take the continuum limit in time direction on the Lagrangian side, while keeping the spatial lattice spacing fixed. We study several observables for this nonperturbative matching and compare different ways to take the temporal continuum limit. We apply our approach to the pure U(1) lattice gauge theory in 2+1 dimensions.
The problem of reconstructing the decay rates and corresponding amplitudes of the single-exponential components of a noisy multi-exponential signal is common in many areas of physics and engineering besides lattice field theory, and it can be helpful to study the methods devised for that purpose in those contexts in order to get a better handle on the problem of extracting masses and matrix elements from lattice correlators. Here we consider the use of Padé and Padé-Laplace methods, which have found wide use in laser fluorescence spectroscopy and beyond, emphasizing the importance of using robust Padé approximants to avoid spurious poles. To facilitate the accurate evaluation of the Laplace transform required for the Padé-Laplace method, we also present a novel approach to the numerical quadrature of multi-exponential functions.
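For the simplest single-exponential case the method can be sketched in a few lines: the pole of the [0/1] Padé approximant of the Laplace transform, built from two Taylor coefficients, sits exactly at minus the decay rate. This is our own toy sketch using plain trapezoidal quadrature, not the quadrature approach of the talk:

```python
import math

def pade_laplace_rate(signal, t_max=50.0, dt=1e-3, p0=1.0):
    """Recover the decay rate k of a single-exponential signal A*exp(-k*t)
    from the [0/1] Pade approximant of its Laplace transform about p0.
    With c0 = L(p0) and c1 = L'(p0), the approximant's pole lies at
    p0 + c0/c1, which for a pure exponential equals -k exactly."""
    n = int(t_max / dt)
    ts = [i * dt for i in range(n + 1)]
    f = [signal(t) * math.exp(-p0 * t) for t in ts]
    c0 = dt * (sum(f) - 0.5 * (f[0] + f[-1]))       # L(p0), trapezoid rule
    g = [-t * x for t, x in zip(ts, f)]
    c1 = dt * (sum(g) - 0.5 * (g[0] + g[-1]))       # L'(p0)
    return -(p0 + c0 / c1)

# Recover k = 0.8 from the signal 2.5*exp(-0.8*t):
k_est = pade_laplace_rate(lambda t: 2.5 * math.exp(-0.8 * t))
```

For M exponentials one instead builds an [M-1/M] approximant, whose poles give all M rates at once; robust variants are needed there to suppress spurious poles from noise.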
Wilson-like Dirac operators can be written in the form
We find
We report recent progress in the data analysis of the correlation functions of the semileptonic decays
The data set is obtained using the MILC HISQ ensemble for the light quarks and the Oktay-Kronfeld (OK) action for the heavy quarks: a12m310 (
We use a sequential Bayesian method for the analysis and adopt Newton's method to find a better initial guess.
We investigate
In this study, we calculate the effect of self-interacting dark matter on neutron stars. Properties like the mass, radius, and tidal deformability are affected by the presence of dark matter in neutron stars. We show that the Love number can be used to probe the presence and properties of dark matter inside neutron stars in future gravitational-wave measurements.
Low energy effective models are a useful tool to understand the mechanisms behind physical processes in QCD. They additionally provide ways to probe into regions of the QCD phase diagram that are harder to simulate on the lattice, e.g., small temperature, due to their lower UV cutoff, as well as more direct comparison with functional methods such as fRG. We present here lattice simulations of such an effective model: the quark-meson model. We simulate the theory via Stochastic Quantisation and report on the effects of employing coloured noise, a method that allows control over the momentum scale of the simulation.
We compute the pion and kaon matrix elements with non-local staple-shaped operators using an
Bridge++ is a general-purpose code set for lattice QCD simulations aiming at readable, extensible, and portable code while keeping practically high performance. The new version 2.0 employs machine-dependent optimization, extended from a fixed double-precision data layout to a flexible float/double-precision data layout. In this talk, we report the performance on the supercomputer Fugaku with Arm A64FX-SVE by Fujitsu.
Modern B-factory experiments, such as Belle II, are able to investigate physics anomalies with some of the largest datasets ever produced. High-luminosity datasets allow for precision measurements of exclusive B-decays, such as B → ℓν, which in turn reduce the error in calculations of the corresponding CKM matrix element, Vub. This is especially important given the current tension between determinations of Vub via exclusive decays and inclusive ones, the latter of which could hint at the presence of beyond-Standard-Model processes. While the experimental error in Vub can be constrained with larger datasets, controlling the error contributions from the relevant theory parameters, such as the B(s) meson decay constant fB(s), requires novel analysis.
This work will present the continuing efforts of the UKQCD/QCDSF/CSSM groups towards improving calculations of fB(s) with lattice QCD techniques. This is performed on 2+1-flavour gauge ensembles, where SU(3)f symmetry is broken in a controlled way. The heavy b-quark is treated with an anisotropic clover-improved action and tuned to the physical properties of the B and Bs mesons. Such a tuning requires fitting approximately 1600 correlation functions, where individually optimising the bounds of each fit is no longer feasible and may lead to systematic fit uncertainties that are difficult to quantify. A weighted average across multiple fitting regions is implemented so as to improve practicality and reduce the potential for bias in the final determination of fB(s).
High-statistics results for quantities like the gradient flow scale, the quark masses, the low-lying baryon spectrum and the baryon octet sigma terms determined on CLS ensembles with
Our exploratory study looks for direct access to the hadronic transition amplitude at the resonance without resorting to the Lüscher formalism. We study the decay
twisted boundary conditions to the quenched charm quark, circumventing possible problems with final state interactions. If successful, we could compute the dependence of the transition amplitude on the charm-quark mass, and test the predictions made by phenomenological quark pair creation models. Finally, we investigate if and to what extent an explicit extraction of the excited state
Fourier acceleration is a technique used in Hybrid Monte Carlo (HMC) simulations to decrease the autocorrelation length. In the weak interaction limit, Fourier acceleration eliminates the problem of critical slowing down. In this work, we show that by properly tuning the kinetic term in HMC simulations, Fourier acceleration can be applied effectively to a strongly interacting
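For a free field the effect of the tuned kinetic term can be seen directly in the HMC mode frequencies (a standard observation, sketched here with our own function names):

```python
import math

def hmc_mode_frequencies(n_sites, mass, fourier_accel):
    """Oscillation frequencies of the HMC modes of a free 1-d lattice field.
    Mode k of the free action has omega_k^2 = (4 sin^2(pi k/N) + m^2) / M_k,
    where M_k is the momentum-space kernel of the kinetic term. Standard HMC
    uses M_k = 1, so modes span a wide frequency range (critical slowing
    down as m -> 0); Fourier acceleration sets M_k = 4 sin^2(pi k/N) + m^2,
    making every mode oscillate at omega = 1."""
    freqs = []
    for k in range(n_sites):
        w2 = 4.0 * math.sin(math.pi * k / n_sites) ** 2 + mass ** 2
        m_k = w2 if fourier_accel else 1.0
        freqs.append(math.sqrt(w2 / m_k))
    return freqs
```

In the interacting theory the optimal kernel is no longer known exactly, which is why tuning it, as described above, is the nontrivial step.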
The Collins-Soper (CS) evolution kernel is critical to relate transverse-momentum-dependent parton distribution functions (TMDPDFs) at different scales. When the parton transverse momentum is small,
Isospin breaking corrections become relevant when aiming to quantify hadronic observables with uncertainties below the percent level. Discretising QED on the lattice is a non-trivial task and several suggested methodologies are available in the literature. Our work uses massive QED, which provides a fully local prescription of QED on the lattice. We present a status update of our ongoing computation of isospin breaking corrections to the spectrum and provide an outlook on future computations.
The automatic fine-tuning of isospin breaking effects by conformal coalescence found by Howard Georgi in the 2-flavor Schwinger model is studied. Numerical investigation of meson mass splitting confirms the exponential suppression of symmetry breaking effects.
The stochastic LapH method has proven to be successful in hadronic calculations. In this work, with charm light spectroscopy in mind, we set up and optimise the LapH procedure limiting ourselves to the evaluation of 2-point mesonic functions. The calculations are performed on CLS ensembles with
We present first results of a recently started lattice QCD investigation of antiheavy-antiheavy-light-light tetraquark systems including scattering interpolating operators in correlation functions both at the source and at the sink. In particular, we discuss the importance of such scattering interpolating operators for a precise computation of the low-lying energy levels in
Topological freezing is a well known problem in lattice simulations: with shrinking lattice spacing, a transition between topological sectors becomes increasingly improbable, leading to a problematic increase of the autocorrelation time. We present our investigation of metadynamics as a solution for topological freezing in the Schwinger model. Specifically, we take a closer look at the collective variable used in this process and its scaling behaviour. We visualize the effects of topological freezing and how metadynamics helps in that respect. Possible implications for and differences to four-dimensional SU(3) are briefly discussed.
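The bias-potential update at the core of metadynamics can be sketched as follows (a generic toy, not our production code; in the application above the collective variable would be a smeared topological charge):

```python
import math

class Metadynamics:
    """Gaussian-hill bias on a collective variable Q. Hills deposited at
    visited values of Q gradually fill the wells of the free-energy
    landscape, pushing the simulation out of the topological sector it
    would otherwise freeze in."""
    def __init__(self, height=0.1, width=0.5):
        self.height, self.width = height, width
        self.centers = []            # locations of deposited hills

    def deposit(self, q):
        """Drop a Gaussian hill at the current value of Q."""
        self.centers.append(q)

    def bias(self, q):
        """Total bias potential V_bias(Q)."""
        return sum(self.height * math.exp(-((q - c) ** 2) / (2 * self.width ** 2))
                   for c in self.centers)

    def bias_force(self, q):
        """-dV_bias/dQ, added to the molecular-dynamics force."""
        return sum(self.height * (q - c) / self.width ** 2
                   * math.exp(-((q - c) ** 2) / (2 * self.width ** 2))
                   for c in self.centers)
```

The properties of the collective variable (smoothness, scaling with the lattice spacing) determine how efficiently the accumulated bias drives tunnelling between sectors, which is the scaling behaviour investigated in the talk.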
We report on the non-perturbative determination of the improvement coefficient
Our computational method exploits the PCAC relation for two different pseudo-scalar states within the Schrödinger functional, which are modelled by altering the spatial structures at the boundaries via properly chosen wavefunctions.
The lattice spacings considered span a range that matches the gauge field ensembles with the stabilised Wilson-Clover action being generated by the OPEN LATtice initiative.
In the same framework and using chiral Ward identities, we also present preliminary results on the renormalisation constants
Stabilized Wilson fermions are a reformulation of Wilson clover fermions that incorporates several numerical stabilizing techniques, as well as a local change of the fermion action: the original clover term is replaced with an exponentiated version of it. We intend to apply the stabilized Wilson fermions toolbox to the thermodynamics of QCD, starting on the Nf=3 symmetric line of the Columbia plot, and to compare the results with those obtained with other fermion discretizations.
Computations within theories with complex actions are generally inaccessible to standard numerical techniques, as they typically suffer from the numerical sign problem. The complex Langevin (CL) method aims to resolve this problem. In recent years CL has been successfully applied to various problems, e.g. the QCD equation of state at finite chemical potential, and it may therefore also represent a promising method for other applications with similar numerical issues. However, CL in its original formulation is numerically unstable and therefore needs to be artificially stabilised to avoid wrong attractors of the distribution function as well as runaway instabilities.
In this work, we study the application of modern stabilisation techniques such as dynamical stabilisation and gauge cooling to CL simulations of real-time SU(2) Yang-Mills theory. We present preliminary numerical results demonstrating that stabilisation techniques may extend the applicability of CL in real-time gauge theories.
This poster reviews the recent HPQCD calculation of
Increasing GPU power across a competitive market of various GPU manufacturers and GPU based supercomputers pushes lattice programmers to develop code usable for multiple APIs. In this poster we showcase SIMULATeQCD, a SImple MUlti-GPU LATtice code for QCD calculations, developed and used by the HotQCD collaboration for large-scale projects on both NVIDIA and AMD GPUs. Our code has been made publicly available on GitHub. We explain our design strategy, give a list of available features and modules, and provide our most recent benchmarks on state-of-the-art supercomputers.
We present preliminary results for the leading strange and charm connected contributions to the hadronic vacuum polarization contribution to the muon's g-2. Measurements are performed on the RC collaboration’s QCD ensembles, with
The I=1/2 and I=3/2 nucleon-pion scattering lengths are determined from a high-statistics computation on a single ensemble of gauge field configurations from the CLS consortium with dynamical up, down, and strange quarks and a pion mass
The quark-gluon vertex is an important object in QCD. Studies have shown that this quantity can be relevant for the pattern of dynamical chiral symmetry breaking in the vacuum. The goal of our project is to obtain the quark-gluon vertex at finite temperature, around the deconfinement/chiral transition, using the tools provided by lattice QCD. This will be the first time that the quark-gluon vertex at finite temperature is determined using lattice QCD. The propagators, which are a by-product of this project, are also of interest in themselves. In this poster, we describe our motivations and goals, give some details of the determination, and report on the status of the calculation.
We give an update on the ongoing effort of the RC
Gauge covariant smearing based on the 3D lattice Laplacian can be used to create extended operators that have better overlap with hadronic ground states. This is often done iteratively. For staggered quarks using two-link parallel transport preserves taste properties. We found that such iterative smearing was taking an inordinate amount of time when done on the CPU, so we have implemented the procedure in QUDA.
Instead of carrying out two consecutive parallel transports between nearest neighbor sites on each smearing iteration, we calculate the product of the two links joining next-to-nearest-neighbor sites once and reuse it for all iterations. This reduces both required floating point operations and communications.
We present the performance of this code on some recent GPUs.
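The saving described above — computing the product of the two links joining next-to-nearest-neighbour sites once and reusing it on every smearing iteration — can be checked in a stripped-down 1D toy model. This is our own sketch with made-up names and a 2x2 "SU(2)" toy gauge field, not QUDA's API; it verifies that the cached two-link transport reproduces the iteration that multiplies the two links every time.

```python
import numpy as np

rng = np.random.default_rng(1)
L = 8  # 1D toy lattice

def random_su2():
    # random SU(2) matrix from a unit quaternion
    a = rng.standard_normal(4)
    a /= np.linalg.norm(a)
    return np.array([[a[0] + 1j * a[1], a[2] + 1j * a[3]],
                     [-a[2] + 1j * a[3], a[0] - 1j * a[1]]])

U = np.array([random_su2() for _ in range(L)])   # U[x] transports x -> x+1

# One-time precomputation: V[x] = U[x] U[x+1] transports x -> x+2.
V = np.array([U[x] @ U[(x + 1) % L] for x in range(L)])

def smear_iter_cached(phi, V):
    # one smearing iteration using the cached two-link products
    out = np.empty_like(phi)
    for x in range(L):
        out[x] = phi[x] + 0.1 * (V[x] @ phi[(x + 2) % L]
                                 + V[(x - 2) % L].conj().T @ phi[(x - 2) % L])
    return out

def smear_iter_naive(phi, U):
    # same iteration, re-multiplying the two links on every call
    out = np.empty_like(phi)
    for x in range(L):
        fwd = U[x] @ U[(x + 1) % L]
        bwd = (U[(x - 2) % L] @ U[(x - 1) % L]).conj().T
        out[x] = phi[x] + 0.1 * (fwd @ phi[(x + 2) % L]
                                 + bwd @ phi[(x - 2) % L])
    return out

phi = rng.standard_normal((L, 2)) + 1j * rng.standard_normal((L, 2))
a = smear_iter_cached(phi, V)
b = smear_iter_naive(phi, U)
print(np.max(np.abs(a - b)))   # identical up to floating point
```

Over many iterations the cached version performs one matrix-matrix product per site in total instead of one per iteration, which is the source of the reduction in floating point operations and communications.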
We give an update on our ongoing studies of the light composite scalar in eight-flavor SU(3) gauge theory. The chiral limit of this theory can serve as the strong dynamics input to a number of composite Higgs models. Composite Higgs models of this type naturally produce
Semileptonic heavy-to-heavy and heavy-to-light
The structure of hadrons relevant for deep-inelastic scattering is completely characterised by the Compton amplitude. The standard approach in structure function calculations is to utilise the operator product expansion, where one computes the local matrix elements. However, it is well established that tackling anything beyond leading twist presents additional challenges that are not easily overcome, complicating investigations of hadron structure at a deeper level. Alternatively, it is possible to calculate the Compton amplitude directly by taking advantage of the Feynman-Hellmann approach. By working with the physical amplitude, the intricacies of operator mixing and renormalisation are circumvented. Additionally, higher-twist contributions become more accessible given precise enough data.
In this talk, we focus on the QCDSF/UKQCD Collaboration's advances in calculating the forward Compton amplitude via an implementation of the second-order Feynman-Hellmann theorem. We highlight our progress in investigating the low moments of the unpolarised structure functions of the nucleon. We also take a first look at our progress on the polarised and off-forward cases.
Next-generation high-precision neutrino scattering experiments have the goal of measuring the as-yet-unknown parameters governing neutrino oscillation. This effort is hampered by the use of large nuclear targets: secondary interactions within a nucleus can confuse the interpretation of experimental data, leading to ambiguities about the initial neutrino interaction in scattering events. The distribution of energies for neutrino events must instead be inferred from the responses of a sum of dissimilar event topologies. For this reason, precise neutrino cross sections on nucleon targets are of vital importance to the neutrino oscillation experimental program. On the other hand, the necessary experimental data for neutrino scattering with elementary targets are scarce because of the weak interaction cross section, which leads to poorly-constrained nucleon and nuclear cross sections.
Lattice QCD is uniquely positioned to provide the requisite nucleon amplitudes needed to enable high-precision oscillation experiments. In particular, LQCD has the ability to probe axial matrix elements that are challenging to isolate or completely inaccessible to experiments. In this talk, I will discuss some of my work to quantify neutrino cross sections with realistic uncertainty estimates, primarily focusing on neutrino quasielastic scattering and the nucleon axial form factor. I will also outline how the needs of next-generation neutrino oscillation experimental programs can be met with modern dedicated LQCD computations.
We review progress on the lattice QCD calculation of parton structure in the nucleon, specifically that of the gluon. The structure of a hadron is typically described by
First-principles calculations of multi-hadron dynamics are a crucial goal for lattice QCD calculations. Significant progress has been achieved in developing, implementing and applying theoretical tools that connect finite-volume quantities to their infinite-volume counterparts. In this talk, I will review some recent theoretical developments and numerical results regarding multi-particle quantities in a finite volume. The focus will be on properties of resonances and on observables involving nucleons.
I review recent progress on lattice calculations of hadron spectroscopy and interactions. The methods to precisely determine the energy eigenstates on the lattice, and subsequently to extract the scattering information, have matured in recent years. After briefly introducing the methodology, I present new results from the last couple of years, with a focus on exotic hadrons beyond the conventional quark model, such as multi-quark states and glueballs. I will also discuss the existing challenges and future paths.
The International Lattice Data Grid (ILDG) started almost 20 years ago as a global community initiative to enable and coordinate the sharing of gauge configurations within the lattice QCD community. We outline the basic ideas of the ILDG and explain the urgent need to fully support the now-established FAIR data management practices. We will report on recent activities within the ILDG and on ongoing efforts to migrate to modern technologies.
Using D-Wave's quantum annealer as a computing platform, we study lattice gauge theory with discrete gauge groups. As digitization of continuous gauge groups necessarily involves an approximation of the symmetry, we extend the formalism of previous studies on the annealer to finite, simply reducible gauge groups. As an example we use the dihedral group
Future quantum computers will enable the study of real-time dynamics of non-perturbative quantum field theories without the introduction of the sign problem. We present ongoing progress on low-dimensional lattice systems which will serve as suitable testbeds for near-term quantum devices. The two systems studied to date are 0+1 dimensional supersymmetric quantum mechanics and the Wess-Zumino model in 1+1 dimensions. In both we comment on whether supersymmetry is dynamically broken for various superpotentials.
We present a tensor-network method for strong-coupling QCD with staggered quarks at nonzero chemical potential. After integrating out the gauge fields at infinite coupling, the partition function can be written as a full contraction of a tensor network consisting of coupled local numeric and Grassmann tensors. To evaluate the partition function and to compute observables, we develop a Grassmann higher-order tensor renormalization group method, specifically tailored for this model. We apply the method to the two-dimensional case and validate it by comparing results for the partition function, the chiral condensate and the baryon density with exact analytical expressions on small lattices up to volumes of
We propose a method to represent the path integral over gauge fields as a tensor network. We introduce a trial action with variational parameters and generate gauge field configurations with the weight defined by the trial action. We construct initial tensors with indices labelling these gauge field configurations. We perform the tensor renormalization group with the initial tensors and optimize the variational parameters. As a first step to the TRG study of non-Abelian gauge theory in more than two dimensions, we apply this method to three-dimensional pure SU(2) gauge theory. Our result for the free energy agrees with the analytical results in weak and strong coupling regimes.
Tensor renormalization group (TRG) has attractive features like the absence of sign problems and the accessibility of the thermodynamic limit, and many applications to lattice field theories have been reported so far. However, it is known that the TRG has a fictitious fixed point, the so-called CDL tensor, which degrades the accuracy of numerical results. Improved coarse-graining methods attempt to remove the CDL structure from tensor networks, and such approaches have been shown to be beneficial for two-dimensional spin systems. We discuss how to adapt the removal of the CDL structure to tensor networks including fermions, and we will show numerical results, including comparisons to the plain TRG in which significant differences are found.
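The common starting point of these TRG studies is rewriting a partition function as a full contraction of rank-4 site tensors. As a minimal, self-contained illustration (our own conventions, using 2D Ising rather than any of the models above), the sketch below builds the standard initial tensor from the square root of the bond weight matrix and checks the exact contraction on a tiny periodic 2x2 lattice against brute-force spin summation.

```python
import numpy as np

beta = 0.4
s_vals = np.array([1.0, -1.0])

# Bond Boltzmann weight M[s, s'] = exp(beta * s * s'), split as M = W W^T.
M = np.exp(beta * np.outer(s_vals, s_vals))
lam, Q = np.linalg.eigh(M)
W = Q @ np.diag(np.sqrt(lam))   # eigenvalues 2cosh(beta), 2sinh(beta) > 0

# Site tensor T[l, r, u, d] = sum_s W[s,l] W[s,r] W[s,u] W[s,d].
T = np.einsum('sl,sr,su,sd->lrud', W, W, W, W)

# Exact contraction of the 2x2 periodic network (8 bonds, doubled by wrap):
# sites A=(0,0), B=(1,0), C=(0,1), D=(1,1); each letter labels one bond.
Z_tn = np.einsum('baef,abgh,dcfe,cdhg->', T, T, T, T)

# Brute force: each site contributes its +x and +y bond (wrap included).
Z_bf = 0.0
for bits in range(16):
    s = np.array([1 if bits >> i & 1 else -1 for i in range(4)]).reshape(2, 2)
    E = np.sum(s * np.roll(s, -1, axis=0)) + np.sum(s * np.roll(s, -1, axis=1))
    Z_bf += np.exp(beta * E)

print(Z_tn, Z_bf)   # agree to machine precision
```

A TRG algorithm then coarse-grains T by SVD splitting and recombination instead of contracting the network exactly; the CDL discussion above concerns short-range correlations that survive this coarse-graining.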
Motivated by attempts to quantum simulate lattice models with continuous Abelian symmetries using discrete approximations, we consider an extended-O(2) model that differs from the ordinary O(2) model by an explicit symmetry breaking term. Its coupling allows to smoothly interpolate between the O(2) model (zero coupling) and a
Previous lattice QCD calculations of nucleon transverse momentum-dependent parton distributions (TMDs) focused on the case of transversely polarized nucleons, and thus did not encompass two leading-twist TMDs associated with longitudinal polarization, namely, the helicity TMD and the worm-gear TMD corresponding to transversely polarized quarks in a longitudinally polarized nucleon. Based on a definition of TMDs via hadronic matrix elements of quark bilocal operators containing staple-shaped gauge connections, TMD observables characterizing the aforementioned two TMDs are evaluated, utilizing an RBC/UKQCD domain wall fermion ensemble at the physical pion mass.
We report the first lattice QCD calculation of pion valence quark distribution with next-to-next-to-leading order perturbative matching correction, which is done using two fine lattices with spacings
We report a state-of-the-art lattice QCD calculation of the isovector quark transversity distribution of the proton in the continuum and physical limit using large-momentum effective theory. The calculation is done at three lattice spacings
We present an exploratory study of the quasi-beam function on a
In this talk I will show our calculations of Collins-Soper kernel and soft function on a newly generated 2+1 flavor clover fermion CLS ensemble of size
We present results for the parton distribution functions (PDFs) of the nucleon at the physical point from lattice QCD utilizing a next-to-next-to-leading order (NNLO) matching. We consider two different strategies in our calculation. The first makes use of the short-distance factorization formalism to extract the first few Mellin moments in a model-independent way. In the second approach, we consider a matching in Bjorken-x space using the recently developed hybrid renormalization scheme.
We study the phase structure and critical point of finite-temperature QCD with heavy quarks by applying the hopping parameter expansion (HPE). We first study finite-size effects on the critical point on
We present the latest results from the use of the Backus-Gilbert method for reconstructing the spectra of NRQCD bottomonium mesons using anisotropic FASTSUM ensembles at non-zero temperature. We focus in particular on results from the
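The Backus-Gilbert method referred to above estimates a spectral function as a linear combination of correlator timeslices whose coefficients minimize the spread of the resolution function subject to a normalization constraint. The following toy sketch (generic method, our own notation and regularisation choice, not the FASTSUM analysis code) constructs those coefficients for a mock exponential kernel and checks the built-in normalization.

```python
import numpy as np

# Mock setup: G(tau) = int dw K(tau, w) rho(w) with K(tau, w) = exp(-w tau).
taus = np.arange(1.0, 11.0)            # "timeslices"
ws = np.linspace(0.0, 4.0, 400)        # frequency grid
dw = ws[1] - ws[0]
K = np.exp(-np.outer(taus, ws))        # K[i, :] = exp(-w * tau_i)

def bg_coeffs(w0, lam=1e-6):
    # spread matrix A_ij = int dw K_i K_j (w - w0)^2 and normalisation R
    A = (K * (ws - w0) ** 2 * dw) @ K.T
    R = K.sum(axis=1) * dw
    # Tikhonov-regularised solve; rho_hat(w0) = sum_i q_i G(tau_i)
    Ainv_R = np.linalg.solve(A + lam * np.eye(len(taus)), R)
    return Ainv_R / (R @ Ainv_R)       # enforces sum_i q_i R_i = 1

q = bg_coeffs(1.0)
# sanity check: the resolution function sum_i q_i K_i(w) integrates to one
norm = (q @ K).sum() * dw
print(norm)
```

The width of the resolution function sum_i q_i K_i(w) around w0 then quantifies the frequency resolution achievable from the available timeslices, which is the central systematic in such reconstructions.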
We report preliminary progress in the calculation of the thermal interquark potential of bottomonium using the HAL QCD method with NRQCD quarks. We exploit the fast Fourier transform algorithm, using a momentum space representation, to efficiently calculate NRQCD correlation functions of non-local mesonic S-wave states, and thus obtain the central potential for various temperatures. This work was performed on our anisotropic 2+1 flavour "Generation 2" FASTSUM ensembles.
The heavy quark diffusion coefficient is encoded in the spectral functions of the chromo-electric and chromo-magnetic correlators, of which the latter describes the T/M contribution. We study these correlators at two different temperatures, T=1.5Tc and T=10⁴Tc, in the deconfined phase of SU(3) gauge theory. We use gradient flow for noise reduction. We perform both the continuum and the zero-flow-time limits to extract the heavy quark diffusion coefficient. Our results imply that the mass-suppressed effects in the heavy quark diffusion coefficient are 20% for bottom quarks and 34% for charm quarks at T=1.5Tc.
We present a novel approach to nonperturbatively estimate the heavy quark momentum diffusion coefficient, which is a key input for the theoretical description of heavy quarkonium production in heavy ion collisions, and is important for the understanding of the elliptic flow and nuclear suppression factor of heavy flavor hadrons. In the heavy quark limit, this coefficient is encoded in the spectral functions of color-electric and color-magnetic correlators that we calculate on the lattice to high precision by applying gradient flow. For the first time we apply the method to 2+1 flavor ensembles with temperatures between 200-350 MeV. Using our experience from quenched QCD, where we performed a detailed study of the lattice spacing and flow time dependence, we estimate the heavy quark diffusion coefficient using theoretically well-established model fits for the spectral reconstruction.
We present full QCD correlator data and corresponding reconstructed spectral functions in the pseudoscalar channel. Correlators are obtained using clover-improved Wilson fermions on
The renormalization group (RG)
Our results are based on gradient flow measurements performed on dynamical gauge field configurations generated using Möbius domain wall fermions and Symanzik gauge action. In the case of
We report on numerical results of masses and decay constants of the lightest pseudoscalar, vector and axial vector mesons in
Chimera baryons are an important feature of composite Higgs models, since they play the role of the top partner in partial top compositeness. In the realisation of the mechanism provided by
Many models of composite dark matter feature a first-order confinement transition in the early universe, which would produce a stochastic background of gravitational waves that will be searched for by future gravitational-wave observatories. I will present work in progress using lattice field theory to predict the properties of such first-order transitions and the resulting spectrum of gravitational waves. Targeting both the thermal as well as the bulk phase transitions of SU(N) Yang-Mills theories, this work employs the Logarithmic Linear Relaxation (LLR) density of states algorithm to avoid long autocorrelations.
In the Holographic Model, the two-point function of Energy-Momentum Tensor (EMT) of the dual QFT can be mapped into the power spectrum of the Cosmic Microwave Background in the gravitational theory. However, the presence of divergent contact terms poses challenges in extracting a renormalized EMT two-point function on the lattice. Using a
The infrared effective theory of adjoint QCD with one Dirac flavour is still under debate. The theory could be confining, conformal, or fermionic fields could become the lightest fields in the IR. Chiral symmetry seems to be important for answering this question. Previous investigations have considered Wilson fermions, which break chiral symmetry. We present here the first results for this theory based on overlap fermions. These indicate chiral symmetry breaking and the formation of a fermion condensate. We have also investigated the running coupling of the theory, which indicates no IR conformality in the energy region we have explored.
We determine the strange quark mass and the isospin averaged up/down quark mass from QCD in the isospin limit. We utilize 46 CLS ensembles generated with
We present preliminary results for a scale setting procedure based on a mixed action strategy, consisting of Wilson twisted mass valence fermions at maximal twist on CLS ensembles with
We report on the determination of light quark masses with three sea-quark flavours based on a mixed action with maximally twisted valence fermions on CLS ensembles with O(
The decay constants of the kaon and pion provide important input into the determination of light CKM matrix elements. Here we present current progress in computing these quantities using the ensembles and analysis techniques employed by the Budapest-Marseille-Wuppertal collaboration in our recent determination of
We present our calculation of the radiative correction to pion and nucleon decay given by the
The pion box contribution is computed on five 2+1+1-flavor HISQ ensembles with the clover action. The preliminary nucleon box contribution is being analyzed on one ensemble. In both contributions, the loop momentum is integrated via discrete sums.
We report on our first set of results for charm physics, using a mixed-action setup with maximally twisted valence fermions on CLS
We show how staggered fermions can be coupled to gravity by generalizing them to Kaehler-Dirac fermions. The latter experience a perturbative gravitational anomaly which breaks a U(1) symmetry down to Z_4. This anomaly is captured exactly by the lattice theory. Furthermore, we show that this theory exhibits a second, non-perturbative 't Hooft anomaly which can be seen by considering propagation on non-orientable spaces. This anomaly can be cancelled for multiples of two Kaehler-Dirac fields. This observation explains recent work that shows that multiples of two staggered fermions can be gapped without breaking symmetries.
This research aims to analyze the integrability condition of the chiral determinant of 4D overlap fermions and to construct lattice chiral gauge theories.
We investigate the Casimir effect for relativistic lattice fermions, such as the naive fermion, Wilson fermion, and overlap fermion with the periodic or antiperiodic boundary condition. We also discuss anomalous behaviors for nonrelativistic particles. We apply our approaches to condensed matter systems described by low-energy effective Hamiltonian of Dirac semimetals such as Cd3As2 and Na3Bi.
Lattice simulations of Yang-Mills theories coupled with
Standard lattice formulations of non-relativistic Fermi gases with two spin components suffer from a sign problem in the cases of repulsive contact interactions and attractive contact interactions with spin imbalance. We discuss the nature of this sign problem and the applicability of the complex Langevin method in both cases. For repulsive interactions, we find the results to converge well using adaptive step size scaling and a Gaussian regulator to modify the lattice action. Finally, we present results on density profiles and correlations of a harmonically trapped, one dimensional system in both position and momentum space, which are also directly accessible via cold atoms experiments.
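A minimal sketch of the complex Langevin method with an adaptive step size, as invoked above, can be given for an exactly solvable complex Gaussian "action" S(z) = sigma z^2 / 2 with Re(sigma) > 0, for which the exact result is <z^2> = 1/sigma. All choices below (the model, cutoff, step sizes) are ours for illustration; the Gaussian regulator modifying the lattice action mentioned in the abstract is omitted, since this toy drift needs no regularisation.

```python
import numpy as np

rng = np.random.default_rng(7)
sigma = 1.0 + 1.0j                  # complex coupling; exact <z^2> = 1/sigma
n_chains, n_steps, burn = 4096, 4000, 2000
dt0, drift_cut = 0.01, 10.0

z = np.zeros(n_chains, dtype=complex)   # complexified field, one per chain
acc, n_acc = 0.0 + 0.0j, 0
for step in range(n_steps):
    drift = -sigma * z                  # CL drift term, -dS/dz
    # adaptive step size: shrink the step wherever the drift gets large
    dt = dt0 / np.maximum(1.0, np.abs(drift) / drift_cut)
    # real Gaussian noise, complex drift: the field explores the complex plane
    z = z + drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal(n_chains)
    if step >= burn:
        acc += np.sum(z * z)
        n_acc += n_chains

z2_mean = acc / n_acc
print(z2_mean, 1.0 / sigma)             # agree at the few-percent level
```

The same stochastic process with an interacting drift develops large-drift excursions, which is where adaptive step scaling (and, in the abstract's setting, the Gaussian regulator) becomes essential for correct convergence.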
The non-local dependence of the fermion determinant on the gauge field limits our ability to simulate Quantum Chromodynamics on the lattice. Here we present a factorization of the gauge field dependence of the fermion determinant based on an overlapping four-dimensional domain decomposition of the lattice. The resulting action is block-local in the gauge and auxiliary bosonic fields. Possible applications are multi-level integration, master-field simulations, and more efficient parallelizations of Monte Carlo algorithms and codes.
In this talk we present the first RBC-UKQCD lattice calculation of the leading isospin-breaking corrections to the ratio of leptonic decay rates of kaons and pions into muon and neutrino,
Analytical techniques to derive the finite-volume dependence of observables calculated in lattice simulations can be used to improve numerical determinations. With the need for (sub-)percent precision in lattice predictions, also isospin-breaking effects have to be considered. When including electromagnetism in the so-called QED
In lattice calculations including isospin-breaking effects, low-energy Standard Model predictions can be unambiguously obtained by providing external inputs to define the quark masses, the QCD scale and the value of the electromagnetic coupling. However, there is phenomenological interest in defining an isospin-symmetric value of a given observable, or in separating the corrections coming from strong and electromagnetic isospin-breaking effects. This separation is known to be prescription-dependent, and a variety of such prescriptions is used across the lattice community. Since these quantities are actively used, for example in the context of the muon g-2 or radiative corrections to weak decays, the question of quantifying the scheme dependence is relevant. In this talk we discuss a general framework to describe these ambiguities, and how to estimate them using lattice data or effective field theories.
We present the comparison of preliminary results of decays from two related projects: the first one is based on unitary Wilson fermions, and the second uses valence quarks rotated to maximal twist. While these projects differ in their goals and strategies, both studies are performed on CLS techniques. The universality test can then be used as a non-trivial validation of our calculations, in particular regarding the notoriously difficult control of excited-state contributions to form factors. Finally, we will discuss the scaling of these two fermionic actions, compared to their theoretical merits, with a focus on artefacts.
We present our results for the kaon semileptonic form factors using the two sets of the PACS10 configuration, whose physical volumes are more than (10 fm). The configurations were generated using the Iwasaki gauge action and stout-smeared nonperturbatively momentum transfer dependence of the form factors in the continuum limit, we evaluate the slope and curvature for the form factors at zero momentum transfer. Furthermore, we calculate the phase space factor, which is used to obtain compared with previous lattice results and experimental values.
Worldline representations were established as a powerful tool for studying bosonic lattice field theories at finite density. For fermions, however, the worldlines may still carry signs that originate from the Dirac algebra and from the Grassmann nature of the fermion fields. We show that a density of states approach can be set up to deal with this remaining sign problem, where finite density is implemented by working with a fixed winding number of the fermion worldlines. We discuss the approach in detail and show first results of a numerical implementation in two dimensions.
The determination of entanglement measures in SU(N) gauge theories is a non-trivial task. With the so-called "replica trick", a family of entanglement measures, known as "Rényi entropies", can be determined with lattice Monte Carlo. Unfortunately, the standard implementation of the replica method for SU(N) lattice gauge theories suffers from a severe signal-to-noise ratio problem, rendering high-precision studies of Rényi entropies prohibitively expensive.
In this work, we propose a method to overcome the signal-to-noise ratio problem and show some first results for SU(N) in 3 and 4 dimensions.
We develop a method to improve the statistical errors of higher moments using machine learning techniques. We present here results for the dual representation of the Ising model with an external field, derived via the high temperature expansion. We compare two ways of measuring the same set of observables via machine learning: the first gives all higher moments but has larger statistical errors, while the second provides only the two-point function, with small statistical errors. We use a decision tree method to train on the correlations between the higher moments and the two-point function, using the accurate two-point function data as input.
Supervised machine learning with a decoder-only CNN architecture is used to interpolate the chiral condensate in QCD simulations with five degenerate quark flavors using the HISQ action. From this, a model for the probability distribution of the chiral condensate as a function of lattice volume, light quark mass and gauge coupling is obtained. Using the model, first-order and crossover regions can be classified, and the boundary between these regions can be marked by a critical mass. An extension of this model to studies of phase transitions in QCD with a variable number of flavors is expected to be possible.
Deep generative models such as normalizing flows are suggested as alternatives to standard methods for generating lattice gauge field configurations. Previous studies on normalizing flows demonstrate proof of principle for simple models in two dimensions. However, further studies indicate that the training cost can be, in general, very high for large lattices. The poor scaling traits of current models indicate that moderate-size networks cannot efficiently handle the inherently multi-scale aspects of the problem, especially around critical points. In this talk, we explore why current models lead to poor acceptance rates for large lattices and explain how to use effective field theories as a guide to design models with improved scaling costs. Finally, we discuss alternative ways of handling poor acceptance rates for large lattices.
Many fascinating systems suffer from a severe (complex action) sign problem preventing us from simulating them with Markov chain Monte Carlo. One promising method to alleviate the sign problem is the transformation towards Lefschetz thimbles. Unfortunately, this suffers from poor scaling, originating in the numerical integration of the flow equations and the evaluation of an induced Jacobian. In this talk we present a new, preliminary neural-network architecture based on complex-valued affine coupling layers. This network performs such a transformation efficiently, ultimately allowing the simulation of systems with a severe sign problem. We test this method on the Hubbard model at finite chemical potential, modelling strongly correlated electrons on a spatial lattice of ions.
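The key property of an affine coupling layer — also in the complex-valued setting above — is that its inverse and Jacobian determinant are available in closed form, so no flow equations need to be integrated. The sketch below is our own minimal construction (toy linear "networks" for the scale and shift, made-up sizes), not the talk's architecture: half the degrees of freedom are frozen, the other half are updated holomorphically, and invertibility plus the log-det identity are checked exactly.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8                                    # sites, split into two halves
half = n // 2
W1 = 0.1 * (rng.standard_normal((half, half))
            + 1j * rng.standard_normal((half, half)))
W2 = 0.1 * (rng.standard_normal((half, half))
            + 1j * rng.standard_normal((half, half)))

def nets(frozen):
    # toy "networks": complex linear maps giving scale s and shift t
    return W1 @ frozen, W2 @ frozen

def forward(x):
    xa, xb = x[:half], x[half:]
    s, t = nets(xa)
    yb = xb * np.exp(s) + t              # holomorphic update of one half
    logdet = np.sum(s)                   # complex log-det of the Jacobian
    return np.concatenate([xa, yb]), logdet

def inverse(y):
    ya, yb = y[:half], y[half:]
    s, t = nets(ya)                      # frozen half unchanged: recompute s, t
    xb = (yb - t) * np.exp(-s)
    return np.concatenate([ya, xb]), -np.sum(s)

x = rng.standard_normal(n) + 0j          # start on the real manifold
y, ld = forward(x)
x_back, ld_inv = inverse(y)
print(np.max(np.abs(x_back - x)), ld + ld_inv)   # ~0 and ~0
```

Stacking such layers with alternating frozen halves yields a flexible, cheaply invertible map from the real field manifold into the complex plane, with the total log-det accumulating additively — the ingredient needed for reweighting on the deformed contour.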
We present results of the x-dependence of the unpolarized gluon PDF for the proton. We use an
Precise exploration of the partonic structure of the nucleon is one of the most important aims of high-energy physics. In recent years, it has become possible to address this topic with first-principles lattice QCD investigations. In this talk, we focus on the so-called pseudo-distribution approach to determine the isovector unpolarized PDFs. In particular, we employ three lattice spacings to study discretization effects and extract the distributions in the continuum limit, at a pion mass of around 370 MeV. Also, for the first time with pseudo-PDFs, we explore effects of the 2-loop matching from pseudo- to light-cone distributions.
We present results on the chiral-even twist-3 quark GPDs for the proton using one ensemble of two degenerate light, a strange and a charm quark (
The Parton Distribution Functions (PDFs) encode the non-perturbative collinear dynamics of a hadron probed in inclusive and semi-inclusive scattering processes, and hence provide an avenue to address a number of key questions surrounding the structure of hadrons. This talk will summarize recent efforts of the HadStruc Collaboration to map out the leading-twist quark PDFs of the nucleon using Lattice QCD. This effort hinges on the computation of matrix elements of space-like parton bilinears, which factorize, akin to the QCD collinear factorization of hadronic cross sections, in a short-distance regime into the desired PDFs - ideas codified within the pseudo-distribution formalism. By exploiting the distillation spatial smearing paradigm, matrix elements of sufficient statistical quality are obtained such that the leading-twist PDFs and various systematic effects can be simultaneously quantified. Consistency of our obtained PDFs with phenomenological expectations is also explored.
We present a lattice QCD calculation towards determining gluon helicity distribution and how much of the proton’s spin budget is contributed by gluons. We consider matrix elements of bilocal operators composed of two gluon fields that can be used to determine the polarized gluon Ioffe-time distribution and the corresponding parton distribution function. We employ a high-statistics computation using a
A major focus of the new Electron-Ion Collider will be the experimental determination of generalised parton distributions (GPDs). I will give an outline of the CSSM/QCDSF collaboration's determination of GPD properties from a lattice calculation of the off-forward Compton amplitude (OFCA). By determining the OFCA, we can access phenomenologically important properties such as scaling and non-leading-twist contributions, and the subtraction function. We calculated the OFCA for soft momentum transfer
The FASTSUM collaboration has developed a comprehensive research programme in thermal QCD using 2+1 flavour anisotropic ensembles. In this talk, we summarise our recent results, including hadron spectrum calculations using our “Generation 2L” ensembles. We will also report on our progress in obtaining anisotropic lattices with a temporal spacing of 17 am, half that of our Generation 2L data, which we will use in future studies to reduce systematic effects.
Singly, doubly and triply charmed baryons are investigated at multiple temperatures using the anisotropic FASTSUM 'Generation 2L' ensemble. We discuss the temperature dependence of these baryons' spectrum in both parity channels with a focus on the confining phase. To further qualify the behaviour of these states around the pseudocritical temperature, the parity doubling due to the restoration of chiral symmetry is examined. The addition of heavier 'heavy' quarks and lighter 'light' quarks compared to our previous studies improves our understanding.
We present a strategy to study QCD non-perturbatively on the lattice at very high temperatures. This strategy exploits a non-perturbative, finite-volume definition of the strong coupling constant to renormalize the theory. As a first application we compute the flavour non-singlet meson screening masses over a wide range of temperatures, from
It is known that, contrary to expectations, the order parameter of chiral symmetry breaking, the Dirac spectral density at zero virtuality, does not vanish above the critical temperature of QCD. Instead, the spectral density develops a pronounced peak at zero. We show that the spectral density in the peak has large violations of the expected volume scaling. This anomalous scaling and the statistics of these eigenmodes are consistent with them being produced by mixing instanton and anti-instanton zero modes. Consequently, we show that a nonvanishing topological susceptibility implies a finite density of eigenvalues around zero, which can have implications for the restoration of chiral symmetry above the critical temperature.
The interrelation between quantum anomalies and electromagnetic fields leads to a series of non-dissipative transport effects in QCD. In this work we study anomalous transport phenomena with lattice QCD simulations using improved staggered quarks in the presence of a background magnetic field. In particular, we calculate the conductivities both in the free case and in the interacting case, analysing the dependence of these coefficients on several parameters, such as the temperature and the quark mass.
Gradient flow can be used to describe a Wilsonian renormalization group transformation. In this talk, we use gradient flow to extract running mesonic and baryonic anomalous dimensions for an SU(3) gauge system with
The IKKT matrix model in the large-
Past lattice simulations tentatively suggested that the spectrum of observable particles in BSM theories is qualitatively different from perturbative expectations. We expand on this using a GUT-like toy theory, SU(3) Yang-Mills coupled to a scalar `Higgs' in the fundamental representation. We show the most comprehensive spectroscopy to date, including all channels up to spin 2, and find it indeed in disagreement with perturbative expectations.
The discrepancy can be traced back to nontrivial field-theoretical effects arising from the requirement of gauge invariance. These results still appear to be consistent with a mechanism proposed by Fröhlich, Morchio and Strocchi, giving a possible analytical approach.
Beyond the standard model theories involving early universe first order phase transitions can lead to a gravitational wave background that may be measurable with improved detectors. Thermodynamic observables of the transition, such as the latent heat, determined through lattice simulations can be used to predict the expected signatures from a given theory and constrain physical models. Metastable dynamics around the phase transition make precise determination of these observables difficult and often lead to large uncontrolled numerical errors. In this talk, I will discuss a prototype lattice calculation in which the first order deconfinement transition in the strong Yang-Mills sector of the standard model is analysed using a novel lattice method, the logarithmic linear relaxation method. This method provides a determination of the density of states of the system with exponential error suppression. From this, thermodynamic observables can be reconstructed with a controlled error, providing a promising direction for accurate model predictions.
Computing CP-violating nucleonic matrix elements on the lattice allows one to place theoretical constraints on the couplings of effective interactions related to BSM sources of CP-violation. These interactions are related to local operators that mix under renormalization. Typically, this mixing is parametrized by the only scale available, the lattice spacing, and induces local divergences in the coefficients of lower-dimensional operators, obscuring the continuum limit. The gradient flow has become an attractive method to circumvent this problem. In adopting the flow to define renormalized operators, the renormalization and mixing scales are disentangled, allowing for a clean computation of the corresponding matching (Wilson) coefficients. Perturbative calculations within the gradient flow formalism can be used to fix the high-energy behavior of the matching coefficients, so that the matrix elements are renormalized across a wide range of energy scales. We present results on the renormalization and mixing of the gluon chromoelectric dipole moment (gCEDM) operator to one-loop order in perturbation theory. These include the power-divergent mixing of the gCEDM with the topological charge density and the logarithmic mixing with various dimension-six operators. We also discuss the construction of a basis compatible with the chiral anomaly.
The gradient flow has become a common tool for state-of-the-art lattice calculations. I will present observations and selected results obtained with the gradient flow.
The gradient flow, which exponentially suppresses ultraviolet field fluctuations and thus removes ultraviolet divergences (up to a multiplicative fermionic wavefunction renormalization), can be used to describe real-space Wilsonian renormalization group transformations and determine the corresponding beta function. We recently proposed a new nonperturbative renormalization scheme for local composite fermionic operators that uses the gradient flow and is amenable to lattice QCD calculations. Here we present nonperturbative results for the beta function and the Lambda parameter in two flavour QCD, along with the nonperturbative running of quark bilinear operators, obtained using our gradient flow scheme.
We present new results for the pure-gauge SU(3) static force computed in a novel way on the lattice. We use Wilson loops with a chromoelectric field insertion to measure the force directly and compare this with the traditional approach of taking a numerical derivative of the static potential. Extended Wilson loop calculations suffer from a bad signal-to-noise ratio, and the use of discretized chromoelectric field insertions causes finite-extension effects. We extend our method with the gradient flow to improve the signal-to-noise ratio and to address the finite-extension issues, which also broadens the general usability of operators with chromo-field insertions. Furthermore, we show that the direct measurement of the static force can be used to extract the strong coupling constant
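As a toy illustration of the two routes to the force mentioned above (using a hypothetical Cornell parametrization, not the lattice data of this talk), one can check how well a finite-difference derivative of the static potential recovers the exact force:

```python
import numpy as np

# Toy Cornell potential V(r) = -a/r + sigma*r with illustrative parameters
# (made-up values, not fitted lattice data)
a, sigma = 0.26, 0.18
V = lambda r: -a / r + sigma * r
F_exact = lambda r: a / r**2 + sigma      # the "direct" force F(r) = dV/dr

r = np.linspace(0.4, 1.6, 13)
dr = r[1] - r[0]

# "traditional" route: numerical derivative of the potential
F_numeric = np.gradient(V(r), dr)

# interior points agree with the exact force up to O(dr^2) discretization errors
err = np.max(np.abs(F_numeric[1:-1] - F_exact(r[1:-1])))
print(f"max finite-difference error: {err:.4f}")
```

The finite-difference error grows where the potential curves strongly (small r), which is one motivation for measuring the force directly instead.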
Padé approximants are employed to study the analytic structure of the four-dimensional SU(2) Landau-gauge gluon and ghost propagators in the infrared regime. The approximants, which are model independent, are used as fitting functions to lattice data for the propagators, carefully propagating the uncertainties of the fit procedure and taking into account all possible correlations. Applying this procedure systematically to the gluon propagator data, we observe the presence of a pair of complex poles at
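A minimal sketch of the Padé-fit idea on a synthetic propagator with made-up pole positions (not the SU(2) lattice data): a [1/2] approximant is fitted by linearized least squares and the zeros of its denominator expose a complex-conjugate pole pair.

```python
import numpy as np

# Fit a [1/2] Pade approximant D(p2) = (a0 + a1*p2)/(1 + b1*p2 + b2*p2^2)
# by linearizing: a0 + a1*p2 - b1*D*p2 - b2*D*p2^2 = D, solved by least squares.
rng = np.random.default_rng(1)
p2 = np.linspace(0.01, 4.0, 40)
true = 1.0 / (p2**2 + p2 + 0.5)          # toy propagator: cc poles at p2 = -0.5 +- 0.5i
D = true * (1 + 0.005 * rng.standard_normal(p2.size))   # 0.5% "statistical" noise

A = np.column_stack([np.ones_like(p2), p2, -D * p2, -D * p2**2])
a0, a1, b1, b2 = np.linalg.lstsq(A, D, rcond=None)[0]

poles = np.roots([b2, b1, 1.0])          # denominator zeros in the p^2 plane
print("poles in p^2:", poles)            # expect a complex-conjugate pair near -0.5 +- 0.5i
```

In a realistic analysis one would of course fit with the full data covariance and propagate the fit errors into the pole positions, as described in the abstract.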
In this talk, I will revisit the emergence of de Sitter space in Euclidean dynamical triangulations (EDT). Working within the semi-classical approximation, it is possible to relate the lattice parameters entering the simulations to the partition function of Euclidean quantum gravity. We verify that the EDT geometries behave semi-classically, and by making contact with the Hawking-Moss instanton solution for the Euclidean partition function, we show how to extract a value of the renormalized Newton coupling from the simulations. I will discuss new ways to extract the necessary quantities from the lattice configurations and present an updated value for the renormalized Newton coupling.
In Non-Destructive Testing (NDT), ultrasonic Time Reversal based Nonlinear Elastic Wave Spectroscopy (TR-NEWS) has turned out to be an efficient method. In order to find anomalies in the convolution of scattered phononic waves, one of which is the time-reversed (TR) phonon of the other, it is necessary to perform Fourier transforms of the signals.
The energy flow of the nonlinear waves detected in TR-NEWS has the symmetry structure of quaternions; the paths of phononic waves are confined on a
In the one-loop approximation, we consider 7 A-type loops which sit on
We adopt a model of bosonic phonons propagating in a Fermi sea of neutral Weyl spinors which follow the Clifford algebra. Configurations in momentum space are transformed to real position space via the Clifford Fourier Transform (CFT).
We propose the application of Machine Learning (ML) or Neural Network (NN) techniques for the analysis of
the optimal weights of 20 kinds of topological loops.
We test the approach to the continuum limit of several lattice gauge actions, using lattice spacings in the range typically found in lattice QCD simulations. As observables we use several gradient-flow quantities, which allows us to check the scaling properties of the different discretizations with high statistical precision.
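Such a scaling check amounts to an extrapolation in the lattice spacing; a minimal sketch with synthetic data and made-up coefficients (leading O(a^2) artifacts assumed) looks like:

```python
import numpy as np

# Synthetic continuum-limit study: an observable measured at several lattice
# spacings with leading O(a^2) cutoff effects (illustrative numbers only).
a = np.array([0.050, 0.064, 0.080, 0.100])     # fm, a typical range
c0_true, c2_true = 1.000, 4.0                  # hypothetical continuum value / artifact
rng = np.random.default_rng(0)
O = c0_true + c2_true * a**2 + 1e-4 * rng.standard_normal(a.size)

# a linear fit in a^2 extrapolates to the continuum, a -> 0
c2, c0 = np.polyfit(a**2, O, 1)
print(f"continuum value: {c0:.4f}  (input {c0_true})")
```

Comparing the fitted slope c2 between discretizations is one way to quantify which action scales best.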
In this talk we present results on B-meson semileptonic decays using the highly improved staggered quark (HISQ) action for both valence and 2+1+1 sea quarks. The use of the highly improved action, combined with the MILC collaboration's gauge ensembles with lattice spacings down to ~0.03 fm, allows the b quark to be treated with the same discretization as the lighter quarks. The talk will focus on updated results for
We present new results on semileptonic decays of D-mesons using the highly improved staggered quark (HISQ) action for both valence and 2+1+1 sea quarks. Our calculation uses lattice spacings ranging from 0.12 fm down to 0.042 fm, including several ensembles with physical-mass pions. The focus of the talk will be the vector and scalar form factors (
We discuss progress towards the RBC & UKQCD collaborations' next generation of measurements of Standard Model direct CP-violation in kaon decays with G-parity boundary conditions. We aim to leverage the power of the upcoming exascale computers to take the continuum limit and thus eliminate this dominant lattice systematic error.
Since our recent publication on direct CP violation and the Delta I = 1/2 rule in
In Monte Carlo simulations of lattice quantum field theories, if the variance of an estimator of a particular quantity is formally infinite, or very large compared to the square of the mean, then the expectation of the estimator cannot be reliably obtained with the given sampling procedure. A particularly simple example is given by the Gross-Neveu model, where Monte Carlo calculations involve the introduction of auxiliary bosonic variables through a Hubbard-Stratonovich (HS) transformation. Here, it is shown that the variances of HS estimators for classes of operators involving fermion fields are divergent in this model. To correctly estimate these observables, an infinite sequence of discrete Hubbard-Stratonovich transformations and a reweighting procedure that can be applied to any non-negative observable are introduced.
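A toy illustration of the general phenomenon (not the Gross-Neveu estimator itself): an estimator with finite mean but infinite variance converges erratically, while sampling from a suitably reweighted distribution cures the problem.

```python
import numpy as np

# Toy example: X = u^{-1/2} with u ~ U(0,1) has E[X] = 2 but E[X^2] diverges,
# so batch means of the naive estimator fluctuate strongly.
rng = np.random.default_rng(7)
u = rng.random((1000, 1000))
naive_batch_means = (u ** -0.5).mean(axis=1)
print("spread of naive batch means:", naive_batch_means.std())

# Reweighting: sample u = v^2 (density ~ u^{-1/2}) and attach the weight
# w = 2*sqrt(u); the reweighted estimator X*w is constant here, so its
# variance is finite (in fact zero for this toy case).
v = rng.random(1000)
u2 = v**2
reweighted = (u2 ** -0.5) * 2.0 * np.sqrt(u2)
print("reweighted estimate:", reweighted.mean())
```

The real construction in the talk is of course more involved, but the mechanism is the same: move the singular behaviour from the estimator into the sampling measure.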
The study of autocorrelation times of various meson operators and the topological charge revealed the presence of hidden harmonic oscillations of the autocorrelations (for the HMC).
These modes can be extracted by smoothing the observables with respect to the Monte Carlo time. While this smoothing procedure removes the largest share of the operator's signal, it cannot be excluded that physically relevant contributions remain coupled to the oscillations. Furthermore, common statistical error analysis relies on binning and is thus not suited to remove non-decaying forms of autocorrelation.
I present a new error analysis framework that is based on defining an effective number of independent measurements via the ratio of the entropy of the correlated data distribution excluding autocorrelation and the entropy of the distribution including autocorrelation.
This framework is used to show that the autocorrelation oscillations are significant. I argue that the oscillations could be understood in terms of a 5D theory involving the molecular dynamics momenta and are manifestations of the theoretical modes used by the Fourier acceleration (FA) approach. FA might control the modes and suppress their impact on the simulated physics.
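A sketch of how a hidden harmonic component in the autocorrelations can be exposed, using a synthetic Monte Carlo history (the frequencies and amplitudes below are made up, purely for illustration):

```python
import numpy as np

# Synthetic MC history: an AR(1) "slow mode" plus a small harmonic component,
# mimicking an oscillating autocorrelation function.
rng = np.random.default_rng(3)
N, omega = 20000, 0.3
noise = rng.standard_normal(N)
ar = np.zeros(N)
for t in range(1, N):
    ar[t] = 0.9 * ar[t - 1] + noise[t]
x = ar + 0.5 * np.sin(omega * np.arange(N))

# normalized autocorrelation function via FFT (zero-padded -> linear ACF)
xc = x - x.mean()
f = np.fft.rfft(xc, n=2 * N)
acf = np.fft.irfft(f * np.conj(f))[:N] / (xc @ xc)

# the oscillation shows up as a peak in the spectrum of the ACF at omega
spec = np.abs(np.fft.rfft(acf[:5000]))
k = np.argmax(spec[5:]) + 5            # skip the zero-frequency AR(1) part
print("detected angular frequency:", 2 * np.pi * k / 5000)
```

Binning-based error analysis would average over such oscillations rather than resolve them, which is the motivation for the entropy-based framework described above.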
When lattice QCD is formulated in sectors of fixed quark numbers, the canonical fermion determinants can be expressed explicitly in terms of transfer matrices. This in turn provides a complete factorization of the fermion determinants in temporal direction. Here we present this factorization for Wilson-type fermions and provide explicit constructions of the transfer matrices. Possible applications of the factorization include multi-level integration schemes and the construction of improved estimators for generic n-point correlation functions.
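A toy numerical check of this kind of temporal factorization, with random matrices standing in for the spatial blocks and an antiperiodic temporal boundary (schematic only, not the Wilson-fermion transfer-matrix construction itself):

```python
import numpy as np

# For a cyclic lower-bidiagonal matrix with unit diagonal blocks, hopping
# blocks -B_t from slice t to t+1, and an antiperiodic temporal boundary,
# the determinant factorizes as det(M) = det(1 + B_{Nt-1} ... B_1 B_0).
rng = np.random.default_rng(0)
Nt, n = 4, 3                          # time slices, internal "spatial" dimension
B = rng.standard_normal((Nt, n, n))

M = np.zeros((Nt * n, Nt * n))
for t in range(Nt):
    M[t*n:(t+1)*n, t*n:(t+1)*n] = np.eye(n)
for t in range(Nt - 1):
    M[(t+1)*n:(t+2)*n, t*n:(t+1)*n] = -B[t]
M[0:n, (Nt-1)*n:Nt*n] = +B[Nt-1]      # antiperiodic boundary: flipped sign

P = np.eye(n)
for t in range(Nt):
    P = B[t] @ P                      # ordered product B_{Nt-1} ... B_0
ok = np.allclose(np.linalg.det(M), np.linalg.det(np.eye(n) + P))
print(ok)
```

The full fermion determinant on the big (Nt·n)-dimensional space reduces to a determinant on a single time slice, which is what makes temporal multi-level schemes conceivable.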
The trace of a function
Recently, an enhancement of Hutchinson's method has been proposed, termed
In this talk, we combine
In lattice QCD, the trace of the inverse of the discretized Dirac operator appears in the disconnected fermion loop contribution to an observable. As simulation methods get more and more precise, these contributions become increasingly important. Hence, we consider here the problem of computing the trace
The Hutchinson method, which is very frequently used to stochastically estimate the trace of the function of a matrix, approximates the trace as the average over estimates of the form
In recent work, we have introduced multigrid multilevel Monte Carlo: having a multigrid hierarchy with operators
In this talk, we explore the use of exact deflation in combination with the multigrid multilevel Monte Carlo method, and demonstrate how this leads to both algorithmic and computational gains.
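A minimal dense-matrix sketch of combining exact deflation with Hutchinson-type trace estimation (a toy SPD matrix stands in for the Dirac operator; in practice one would use solves and approximate eigenvectors rather than explicit inverses):

```python
import numpy as np

# Deflated stochastic estimation of tr(A^{-1}): the k lowest modes are summed
# exactly, Rademacher noise covers the orthogonal remainder.
rng = np.random.default_rng(2)
n, k, ns = 200, 20, 200
Q = np.linalg.qr(rng.standard_normal((n, n)))[0]
lam = np.linspace(0.01, 5.0, n)            # spectrum with a few small modes
A = Q @ np.diag(lam) @ Q.T                 # SPD stand-in for the operator

w, V = np.linalg.eigh(A)
Vk = V[:, :k]                              # k lowest eigenmodes
exact_part = np.sum(1.0 / w[:k])           # their trace contribution, exactly

Ainv = np.linalg.inv(A)                    # toy size only; use solves in practice
est = 0.0
for _ in range(ns):
    z = rng.choice([-1.0, 1.0], size=n)    # Rademacher noise vector
    zp = z - Vk @ (Vk.T @ z)               # project out the deflated subspace
    est += zp @ (Ainv @ zp)
trace_est = exact_part + est / ns
print("estimate:", trace_est, " exact:", np.trace(Ainv))
```

Removing the largest 1/λ contributions from the stochastic part is what reduces the Hutchinson variance; in the multilevel method this is combined with coarse-grid corrections.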
Most Monte Carlo algorithms applied to lattice gauge theories, among other fields, satisfy the detailed balance condition (DBC) or break it in a very controlled way. While DBC is not essential for correctly simulating a given probability distribution, it ensures proper convergence after the system has equilibrated. Powerful as it is from this perspective, it also puts strong constraints on the algorithms.
In this talk, I will discuss how breaking DBC can accelerate equilibration and how it can be tailored to improve the sampling of specific observables. By focusing on the case of the so-called Skewed Detailed Balance Condition, I will discuss applications in lattice gauge theories and the perspective of improving sampling over topology, for theories with distinct topological sectors.
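A minimal toy example in this spirit (a lifted Metropolis walk on a ring, which breaks detailed balance while preserving the target distribution; this is a generic illustration, not the skewed-DBC algorithm of the talk):

```python
import numpy as np

# Non-reversible ("lifted") Metropolis on a ring of K states: a direction
# variable sigma is carried along, proposals always go in direction sigma,
# and sigma is flipped only on rejection. Detailed balance is violated,
# yet the target distribution remains invariant.
K, beta, nsteps = 32, 1.5, 200_000
x_grid = np.arange(K)
target = np.exp(beta * np.cos(2 * np.pi * x_grid / K))
target /= target.sum()

rng = np.random.default_rng(5)
x, sigma = 0, 1
counts = np.zeros(K)
for _ in range(nsteps):
    y = (x + sigma) % K
    if rng.random() < min(1.0, target[y] / target[x]):
        x = y                      # accepted: keep moving the same way
    else:
        sigma = -sigma             # rejected: reverse direction
    counts[x] += 1

print("max |empirical - target|:", np.abs(counts / nsteps - target).max())
```

The persistent direction suppresses diffusive back-and-forth motion, which is the same intuition behind using skewed balance conditions to improve tunnelling between topological sectors.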
We investigate the glueball spectrum for
It is a fundamental question: what is the origin of the glueball masses? In pure Yang-Mills theory there is no mass scale at the classical level, while the breaking of scale invariance is induced by quantum effects. This is the trace anomaly, which is associated with the non-vanishing trace of the energy-momentum tensor (EMT) operator. In this context, the origin of the glueball masses can be attributed to the trace anomaly. Our purpose is to quantify how much the trace anomaly contributes to the glueball masses by using lattice simulations. Once one has the renormalized EMT operator
Lattice simulations of QED in 2+1 dimensions are performed in both the Lagrangian and the Hamiltonian formalism. Though equivalent in the continuum limit, at finite lattice spacing there is no trivial correspondence between the physical parameters, and a matching is required. This can be done non-perturbatively, by finding the Hamiltonian parameters that reproduce the
The RBC and UKQCD Collaborations continue to generate 2+1 flavor domain wall fermion ensembles to support a variety of physics goals. With the current set of ensembles, which includes one with physical quark masses and an inverse lattice spacing of 2.7 GeV, we can revisit the scale setting approach we have previously used in Phys. Rev. D 93 (2016) 7, 074505. This global-fit approach involves a simultaneous fit to a number of observables on a collection of ensembles, using an expansion in light-quark masses and finite lattice spacing errors, along with an expansion about the physical strange quark mass. We report on our results to date, along with our estimates of the systematic errors in our procedure.
We present first results from our effort to incorporate isospin-breaking effects stemming from the non-degeneracy of the light quark masses and electromagnetic interactions into the determination of the lattice scale. To this end we compute the masses of octet and decuplet baryons on isospin-symmetric ensembles generated by the CLS effort for
effects perturbatively. We show leading-order results for baryon masses on two ensembles with
We present results for the static energy in (
The nucleon transverse quark spin densities are presented. The densities are extracted from the unpolarized and transversity generalized form factors using three Nf = 2+1+1 twisted mass fermion ensembles simulated with physical quark masses. The results obtained for three lattice spacings are extrapolated to the continuum limit directly at the physical pion mass. The isovector tensor anomalous magnetic moment is determined to be κT = 1.051(94), which confirms a large and negative Boer-Mulders function, h⊥1, in the nucleon.
It is often taken for granted that Generalized Parton Distributions (GPDs) are defined in the "symmetric" frame, where the transferred momentum is symmetrically distributed between the incoming/outgoing hadrons. However, such frames pose additional computational challenges for lattice QCD practitioners. In this talk, we lay the foundation for lattice QCD calculations of GPDs in non-symmetric frames, where the transferred momentum is not symmetrically distributed between the incoming/outgoing hadrons. The novelty of our approach relies on the parameterization of the matrix elements in terms of the so-called Generalized Ioffe-time Distributions (ITD), which helps in not only isolating but also reducing part of the higher-twist contaminations as a byproduct. This work opens possibilities for faster and more effective computations of GPDs.
We present a numerical investigation of a novel Lorentz covariant parametrization to extract x-dependent GPDs using off-forward matrix elements of momentum-boosted hadrons coupled to non-local operators. The novelty of the method is the implementation of a non-symmetric frame for the momentum transfer between the initial and final hadron state and the parametrization of the matrix elements into generalized Ioffe-time distributions (ITD), which are frame independent. The generalized ITD can then be related to the standard light-cone GPDs, which are frame-dependent. GPDs are defined in the symmetric (Breit) frame, which requires a separate calculation for each momentum transfer value, increasing the computational cost significantly. The proposed method is powerful, as one can extract the GPDs at more than one momentum transfer value within the same computational cost. For this proof-of-concept calculation, we use one ensemble of
Distribution amplitudes (DAs) describe the momentum distribution of a meson’s constituent partons and are of great importance in quantum chromodynamics (QCD) experiments and phenomenology. The advent of large-momentum effective theory (LaMET) in 2013 made the determination of DAs amenable to lattice calculations. Parton physics is described in the limit of infinite momentum, and corrections to LaMET calculations are quadratic in
Information about double parton distributions (DPDs) can be obtained by calculating four-point functions on the lattice. We continue our study of the first DPD Mellin moment of the unpolarized proton by considering interference effects with respect to the quark flavor. In our simulation we employ an
We compute the equation of state of isospin asymmetric QCD at zero and non-zero temperatures using direct simulations of lattice QCD with three dynamical flavors at physical quark masses. In addition to the pressure, the trace anomaly and the approach to the continuum, we will particularly discuss the extraction of the speed of sound. Furthermore, we will discuss first steps towards the extension of the EoS to small non-zero baryon chemical potentials via Taylor expansion.
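Schematically, once the pressure and energy density are tabulated as functions of temperature, the speed of sound follows from numerical derivatives, c_s^2 = dp/de = (dp/dT)/(de/dT). The sketch below uses an illustrative conformal toy parametrization (e = 3p), not lattice data, for which c_s^2 = 1/3 exactly:

```python
import numpy as np

# Toy equation of state: p/T^4 approximately constant, conformal relation e = 3p.
T = np.linspace(0.15, 0.50, 100)            # GeV, illustrative range
p = 5.2 * T**4                              # made-up pressure parametrization
e = 3 * p                                   # conformal toy case

# speed of sound squared from the chain rule along the T-axis
cs2 = np.gradient(p, T) / np.gradient(e, T)
print("c_s^2 in the middle of the range:", cs2[50])
```

With real lattice data the derivatives amplify statistical noise, which is why the extraction of the speed of sound deserves the dedicated discussion mentioned above.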
Recently interest in calculations of the speed of sound in QCD under conditions like constant temperature
We present here results on
Using the scaling functions corresponding to the 3-
We update the pressure, energy density and entropy density calculations at non-zero chemical potentials based on Taylor expansion up to 6th order performed by the HotQCD Collaboration in 2017. The HotQCD collaboration has now accumulated an order of magnitude larger statistics for lattices with temporal extent Nt=8 and 12 and added results for Nt=16 that were not available previously. For Nt=8 we also calculated the 8th-order expansion coefficients. Furthermore, we showed that the straightforward Taylor series expansion for the pressure provides a well-controlled description of the pressure up to
In this talk, we will use the high-statistics results on Taylor expansion coefficients, calculated with HISQ fermions and extrapolated to the continuum limit, for a determination of the QCD equation of state under conditions relevant for the description of the hot and dense matter created in heavy-ion collisions. We determine the energy density and pressure along lines of constant entropy per net baryon number. We furthermore use the eighth-order Taylor series for the pressure to construct Padé-resummed thermodynamic observables along lines of fixed entropy per net baryon number, and comment on the location of singularities in the complex chemical potential plane that influence the convergence of the Taylor series for bulk thermodynamic observables. At low temperature we compare our results with hadron resonance gas (HRG) model calculations based on the recently constructed QMHRG2020 hadron list, which, in addition to the hadronic resonances listed by the Particle Data Group, also includes resonances calculated in relativistic quark models.
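The gain from Padé resummation of a truncated Taylor series can be illustrated on a toy function with a finite radius of convergence (the function and coefficients below are illustrative stand-ins, not lattice Taylor coefficients):

```python
import numpy as np

# sqrt(1+x) has convergence radius 1, so its 4th-order Taylor series degrades
# at x = 1.5, while the [2/2] Pade built from the same coefficients does not.
c = np.array([1.0, 0.5, -0.125, 0.0625, -0.0390625])   # Taylor coeffs of sqrt(1+x)

# [2/2] Pade: denominator 1 + q1*x + q2*x^2 from a 2x2 linear system
q1, q2 = np.linalg.solve([[c[2], c[1]], [c[3], c[2]]], [-c[3], -c[4]])
p0, p1, p2 = c[0], c[1] + c[0]*q1, c[2] + c[1]*q1 + c[0]*q2

x = 1.5
taylor = np.polyval(c[::-1], x)
pade = (p0 + p1*x + p2*x**2) / (1 + q1*x + q2*x**2)
exact = np.sqrt(1 + x)
print(f"Taylor error {abs(taylor-exact):.4f}, Pade error {abs(pade-exact):.4f}")
```

The zeros of the Padé denominator also give a rough handle on the location of the nearest singularity, which is the spirit of the complex-plane analysis mentioned above.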
We present a new way of calculating the QCD equation of state (EoS) at finite chemical potential. Our method derives from the previously published method of exponential resummation. While exponential resummation does resum Taylor coefficients to all orders in
In this talk, we consider only isospin chemical potential (density) and perform a cumulant expansion of the exponential resummation formula, in which each of the terms is carefully evaluated using the unbiased powers of the operators. We compare our results with both exponential resummation as well as Taylor series expansion, and find that our formalism has the potential to manifest the actual fluctuations of the different-ordered operators
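A toy numerical check of a truncated cumulant expansion against direct exponential averaging (a scalar Gaussian random variable stands in for the reweighting operator; for Gaussian X the cumulant series terminates at second order):

```python
import numpy as np

# Check ln<exp(X)> = k1 + k2/2! + k3/3! + ... on a Gaussian toy variable,
# where the exact answer is mu + sigma^2/2 = 0.3 + 0.02 = 0.32.
rng = np.random.default_rng(4)
X = 0.3 + 0.2 * rng.standard_normal(2_000_000)

direct = np.log(np.mean(np.exp(X)))               # "exponential resummation"
k1, k2 = X.mean(), X.var()
k3 = np.mean((X - k1)**3)                         # vanishes for Gaussian X
cumulant = k1 + k2 / 2 + k3 / 6                   # truncated cumulant expansion
print(f"direct {direct:.5f}  vs  cumulant {cumulant:.5f}")
```

For the genuinely non-Gaussian operators of the lattice calculation, the order at which the truncated expansion stabilizes is precisely the diagnostic discussed in the talk.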
I will present recent results on the lattice QCD equation of state from the direct reweighting approach advocated recently in 2004.10800 and 2108.09213, including direct results up to a baryochemical potential-to-temperature ratio of
We calculate a resummed equation of state with lattice QCD simulations at imaginary chemical potentials. This talk presents a generalization of the scheme introduced in our previous work to the case of non-zero
We present results up to
We perform a continuum extrapolation using lattice simulations of the 4stout-improved staggered action with 8, 10, 12 and 16 timeslices.
We present an update on including isospin breaking effects in the determination of HVP using C
We present an update on including isospin breaking effects in the determination of HVP using C
The muon
The ratio of the cross sections for
Quark chromomagnetic dipole operators encode low-energy effects of heavy particles on flavor observables related to neutral Kaon mixing or Kaon decays, for example. However, their renormalization on the lattice is complicated by the power-divergent mixing with lower-dimensional operators. The gradient flow provides a promising scheme to circumvent this problem. The matching to the MSbar scheme can be obtained by a perturbative calculation. In this talk, we report on the results for the matching coefficients through NNLO QCD and discuss the impact of these corrections on the theoretical precision.
Pseudoscalar pole diagrams provide the bulk of the contribution -- as well as a large fraction of the error budget -- of dispersive estimates of the hadronic light-by-light (HLbL) piece of the muon g-2. We report on a calculation of the pion transition form factor
Implementations of measurement kernels in high-level Lattice QCD frameworks enable rapid prototyping but can leave hardware capabilities significantly underutilized. This is an acceptable tradeoff if the time spent in unoptimized routines is generally small. The computational cost of modern spectroscopy projects, however, can be comparable to or even exceed the cost of generating gauge configurations and computing solutions of the Dirac equation. One such key kernel in the stochastic LapH method is the computation of baryon blocks; we discuss several implementation strategies and achieve a 3x speedup over the current implementation on Intel Ice Lake.
Adaptive multigrid methods have proven very successful in dealing with critical slowing down for the Wilson-Dirac solver in lattice gauge theory. New formulations of multigrid methods with staggered fermions are currently being tested on pre-exascale GPU supercomputers such as Summit and Crusher. In this talk, I will discuss our implementation of staggered multigrid codes on the Summit supercomputer and subsequent optimization efforts.
Lyncs-API is a Python API for lattice QCD. One of its goals is to provide a common framework for lattice QCD calculations on different HPC architectures, with and without accelerators, by utilizing different software packages. As such, it contains interfaces to c-lime, DDalphaAMG, tmLQCD, and QUDA. In this talk, we focus on the interface to QUDA, named lyncs-QUDA, and present a small tutorial on how to use the Python interface to perform Hybrid Monte Carlo simulations using computational kernels provided by QUDA.
We give an overview of the mixed-precision Krylov strategies of QUDA. These have evolved over the past decade and utilize a variety of numerical techniques to stabilize the convergence of solvers such as Conjugate Gradient. We describe a recently developed bit packing technique to increase precision at fixed word size. This improvement in precision stabilizes the mixed-precision solvers as the chiral limit is reached.
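A toy version of the underlying idea, trading per-element exponent bits for mantissa bits at a fixed word size (the actual QUDA packing scheme differs in detail): when a block of numbers shares a similar magnitude, 16-bit integers with one common scale retain more precision than element-wise IEEE half precision.

```python
import numpy as np

# Compare element-wise fp16 storage against block quantization: int16
# mantissas plus a single shared scale per block.
rng = np.random.default_rng(6)
v = rng.uniform(0.5, 1.0, 1024)                  # well-conditioned block of values

half = v.astype(np.float16).astype(np.float64)   # plain fp16: 10 mantissa bits

scale = np.abs(v).max() / 32767.0                # shared scale, ~15 mantissa bits
packed = np.round(v / scale).astype(np.int16)
unpacked = packed.astype(np.float64) * scale

print("fp16 max error:  ", np.abs(half - v).max())
print("packed max error:", np.abs(unpacked - v).max())
```

The extra effective mantissa bits are exactly what helps stabilize mixed-precision solvers when the operator becomes ill-conditioned towards the chiral limit.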
As a fully computational discipline, Lattice Field Theory has the potential to give results that anyone with sufficient computational resources can reproduce, going from input parameters to published numbers and plots correct to the last byte. After briefly motivating and outlining some of the key steps in making lattice computations reproducible, I will present the results of a survey of all 1,229 submissions to the hep-lat arXiv in 2021 of how explicitly reproducible each is. I will highlight areas where LFT has historically been well ahead of the curve, and areas where there are opportunities to do more. I will conclude by outlining some potential next steps to embracing reproducible open science as a community.
Open science aims to make scientific research processes, tools and results accessible to all scientific communities, creating trust in science and enabling digital competences to be realized in research, leading to increased innovation. It provides standard and transparent pathways to conducting research and fosters best practices for collecting, analysing, preserving, sharing and reusing data, software, workflows and other outputs through collaborative networks. Open Science appears to be becoming the norm, with its applications spanning the whole research cycle of a project. The importance of making Open Science a reality is reflected in the policy and implementation actions of the European Commission incorporated in research and innovation funding programmes (FP7, Horizon 2020, Horizon Europe) and the development of the European Open Science Cloud (EOSC), as it improves the quality, efficiency and responsiveness of research. EOSC will enable researchers across disciplines and countries to store, curate and share data under a common policy framework with rules of participation and a pre-defined set of technical specifications that are expected to help shape the “Internet of FAIR data and services” in Europe. In this talk we will present the basic Open Science principles, explaining briefly best practices for materialising open science. Subsequently, we will present the results of the landscaping survey of Open Science in the Lattice Gauge Theories community (https://latticesurvey.hpcf.cyi.ac.cy/index.php/157898). Finally, we will provide directions in which the LGT community could move in order to enhance Openness and FAIRness (Findability, Accessibility, Interoperability, Reusability) in Science.
We study U(1) lattice field theory in the Villain formulation and couple electrically as well as magnetically charged bosonic matter. The system has a manifest self-duality that allows one to establish a relation between the weak and strong coupling regimes. The complex action problem can be overcome with a worldline representation, such that numerical simulations are possible. We study the spontaneous breaking of self-duality and present results for the phase diagram.
We present our progress in the use of the non-perturbative renormalization framework based on considering QCD at finite temperature with shifted and twisted (for quarks only) boundary conditions in the compact direction. We report our final results in the application of this method to the non-perturbative renormalization of the flavor-singlet local vector current. We then discuss the more challenging case of the renormalization of the energy-momentum tensor, and show preliminary results on the relevant one-point functions for the computation of the renormalization constants of its non-singlet components.
Numerical Stochastic Perturbation Theory (NSPT) has over the years proved to be a valuable tool, in particular being able to reach unprecedented orders for Lattice Gauge Theories, whose perturbative expansions are notoriously cumbersome. One of the key features of the method is the possibility to expand around non-trivial vacua.
While this idea has been around for a while, and has been implemented in the case of the (non-trivial) background of the Schroedinger Functional, NSPT expansions around instantons have not yet been worked out. Here we present computations for the double-well potential in Quantum Mechanics. We compute a few orders of the expansion of the ground-state energy splitting in the one-instanton sector. We discuss how already-known three-loop results are reproduced and present the current status of higher-order computations.
The Symanzik improvement program for gauge theories is most commonly implemented using forward finite difference corrections to the Wilson action. Symmetric central-difference schemes (see e.g. [1]), naively applied, suffer from a doubling of degrees of freedom, identical to the well-known fermion doubling phenomenon. And while adding a complex Wilson term remedies the problem for fermions, it does not easily transfer to real-valued gauge fields.
In this talk I report on recent progress in formulating symmetric discretization schemes for the classical actions of simple one-dimensional problems [2]. These schemes avoid doubling by exploiting the weak imposition of initial/boundary conditions. Inspired by recent work in the field of numerical analysis of partial differential equations, I construct a regularized summation-by-parts finite difference operator using affine coordinates, which is combined with Lagrange multipliers to impose the boundary conditions weakly. Applications to classical initial-value problems with first- and second-order derivatives are presented.
[1] A. Rothkopf, arXiv:2102.08616
[2] A. Rothkopf, J. Nordström arXiv:2205.14028
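A minimal sketch of the weak-imposition idea for the simplest case, the scalar initial-value problem u' = λu. This is not the authors' code: the second-order summation-by-parts (SBP) operator and the penalty choice below are standard textbook ingredients, shown only to illustrate how a weakly imposed initial condition enters the discrete system.

```python
import numpy as np

def sbp_operators(n, h):
    """Second-order summation-by-parts first-derivative operator D = H^{-1} Q."""
    H = h * np.eye(n)                    # diagonal norm (quadrature) matrix
    H[0, 0] = H[-1, -1] = h / 2
    Q = np.zeros((n, n))                 # almost skew-symmetric part
    for i in range(n - 1):
        Q[i, i + 1] = 0.5
        Q[i + 1, i] = -0.5
    Q[0, 0], Q[-1, -1] = -0.5, 0.5       # Q + Q^T = diag(-1, 0, ..., 0, 1)
    return np.linalg.solve(H, Q), H

def solve_ivp_weak(lam, u0, T=1.0, n=101):
    """Solve u' = lam*u on [0, T]; the initial condition is imposed weakly
    through a penalty term lifted by H^{-1} at t = 0 (SAT technique)."""
    h = T / (n - 1)
    D, H = sbp_operators(n, h)
    e0 = np.zeros(n); e0[0] = 1.0
    p = np.linalg.solve(H, e0)           # H^{-1} e0
    # (D + H^{-1} e0 e0^T - lam*I) u = H^{-1} e0 * u0
    A = D + np.outer(p, e0) - lam * np.eye(n)
    return np.linalg.solve(A, p * u0)

u = solve_ivp_weak(lam=-1.0, u0=1.0)
# u[-1] approximates exp(-1); u[0] is only weakly pinned to u0
```

Note that the discrete solution does not satisfy u(0) = u0 exactly; the penalty enforces it only up to discretization error, which is exactly what avoids the doubling of degrees of freedom discussed above.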
A class of non-linear, massive electrodynamics theories known as Generalized Proca (GP) was proposed in 2014 in the context of classical effective field theories and has held a prominent role in cosmology. As a quantum field theory GP has the potential to describe phenomena in condensed matter, optics, and lattice field theories. In this talk, we show how to quantize a family of GP theories using the symplectic approach, featuring two main advantages: it is algebraically simple and its outcome is amenable to numerical simulations. Additionally, by unveiling the existence of quantum consistency conditions, we conclude that not all classically well-defined (multi-)GP theories are amenable to quantization, and discuss the implications of our results.
Flavor observables are usually computed with the help of the electroweak Hamiltonian which separates the perturbative from the non-perturbative regime. The Wilson coefficients are calculated perturbatively, while matrix elements of the operators require non-perturbative treatment, e.g. through lattice simulations. The resulting necessity to compute the transformation between the different renormalization schemes in the two calculations constitutes an important source of uncertainties. An elegant solution to this problem is provided by the gradient flow formalism because its composite operators do not require renormalization. In this talk we report on the construction of the electroweak Hamiltonian in the gradient flow formalism through NNLO in QCD.
In this talk I will outline a strategy to include the effects of the electromagnetic interactions of the sea quarks in QCD+QED. When computing leading order corrections in the electromagnetic coupling, the sea-quark charges result in quark-line disconnected diagrams which are not easily computed using stochastic estimators. An analysis of their variance can help construct better estimators for the relevant traces of quark propagators. I will present preliminary numerical results for the corresponding contributions to the hadronic spectrum using ensembles of domain-wall fermions from the RBC/UKQCD collaboration.
We develop digital quantum algorithms for simulating a 1+1 dimensional SU(2) lattice gauge theory in the Schwinger boson and loop-string-hadron (LSH) formulations. These algorithms complement and improve on the algorithm by Kan & Nam (arXiv:2107.12769) based on the angular momentum basis, which generalized an earlier algorithm for a U(1) gauge theory (the Schwinger model) [Quantum 4, 306 (2020)]. We share the lessons learned regarding the application of product formulas to time evolution in various formulations of this lattice gauge theory, especially the identification of individually-circuitizable Hamiltonian terms, how to circuitize the SU(2) interactions, and what factors make a given formulation more or less costly. Within this framework, the LSH formulation leads to the least resource-intensive algorithm to date for the model considered.
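The product formulas mentioned above can be illustrated on a toy example. The sketch below (my own illustration, not taken from the contribution: Pauli X and Z stand in for the individually circuitizable Hamiltonian terms) shows the first-order Trotter approximation and its improvement with the number of steps.

```python
import numpy as np

def u_exact(H, t):
    """Exact evolution exp(-i H t) of a Hermitian matrix via diagonalization."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

# Toy Hamiltonian split into two non-commuting terms, each of which
# could be circuitized on its own (Pauli X and Z as stand-ins).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = X + Z

def trotter(t, n):
    """First-order product formula: (e^{-iXt/n} e^{-iZt/n})^n."""
    step = u_exact(X, t / n) @ u_exact(Z, t / n)
    return np.linalg.matrix_power(step, n)

def err(n):
    """Spectral-norm error of the product formula at t = 1."""
    return np.linalg.norm(trotter(1.0, n) - u_exact(H, 1.0), 2)
# err(n) shrinks roughly like 1/n for the first-order formula
```

The cost comparison between formulations in the talk is essentially about how many such exponentials are needed per step and how expensive each is to circuitize.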
Recently we introduced a new gradient-flow-based beta function which is defined over infinite Euclidean space-time to calculate and integrate infinitesimal scale changes in RG flows. It can be applied to the high-precision determination of the strong coupling at the Z-pole in QCD. In this talk we will discuss the results and challenges of the method applied to quenched QCD (pure Yang-Mills theory) as a pilot test for application to full QCD.
We report results on the Schwinger model at finite temperature and density using a variational algorithm for near-term quantum devices. We adapt β-VQE, a classical-quantum hybrid algorithm with a neural network, to evaluate thermal and quantum expectation values and study the phase diagram of the massless Schwinger model as a function of temperature and density. By comparing with the exact variational free energy, we find that the variational algorithm works for the Schwinger model at T>0 and μ>0. As a result, we obtain a qualitative picture of the phase diagram for the massless Schwinger model. This talk is based on arXiv:2205.08860.
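The variational free-energy principle that β-VQE builds on can be shown classically on a two-level toy system (my own illustration; the actual algorithm uses a neural-network distribution and a quantum circuit rather than a grid scan):

```python
import numpy as np

# Variational free energy F[p] = <E>_p - T*S[p], with S[p] = -sum p ln p,
# upper-bounds the exact free energy F = -T ln Z for ANY distribution p.
E = np.array([0.0, 1.0])      # toy energy levels
T = 0.5                       # temperature

ps = np.linspace(1e-6, 1 - 1e-6, 200001)     # prob. of the excited state
F = ps * E[1] + (1 - ps) * E[0] \
    + T * (ps * np.log(ps) + (1 - ps) * np.log(1 - ps))
F_var = F.min()                               # best variational bound
F_exact = -T * np.log(np.sum(np.exp(-E / T)))
# F_var >= F_exact, with equality at the Boltzmann distribution
```

β-VQE minimizes exactly this kind of bound, with the entropy carried by a classical network and the energy expectation evaluated on the quantum device.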
Variational methods can be used to provide robust upper bounds on the energy spectra of hadrons and nuclei, but the presence of small energy gaps for multi-hadron states makes it difficult to ensure that the ground- and lowest-energy excited-states have been identified. I will discuss recent calculations of two-baryon systems using large and varied sets of interpolating operators, including non-local products of plane-wave baryons as well as operators spanning the full Hilbert space of local six-quark operators, to probe for the existence of two-baryon bound states at unphysically large quark masses. Results for baryon-baryon scattering phase shifts and their implications for understanding the quark mass dependence of baryon-baryon interactions will also be discussed.
Substantial progress has been made recently in the generation of master-field ensembles.
This has to be paired with efficient techniques to compute observables on gauge field configurations with a large volume.
Here we present the results of the computation of hadronic observables, including hadron masses and meson decay constants, on large-volume and master-field ensembles with physical volumes of up to
We obtain sub-percent determinations from single gauge configurations with the combined use of position-space techniques, volume averages and master-field error estimation.
We present a fully non-perturbative determination of a relativistic heavy quark action's parameters on the CLS ensembles using neural networks, with a particular focus on the charm sector. We then further illustrate the applicability of such an approach for lattice NRQCD bottom quarks.
In this talk we present an exact distillation setup with stabilised Wilson fermions at the SU(3) flavour symmetric point utilising the flexibility of the Grid and Hadrons software libraries. This work is a stepping stone towards the non-perturbative investigation of hadronic D-decays where we need to control the multi-hadron final states. As a first step we study two-to-two s-wave scattering of pseudoscalar mesons. In particular we examine the reliability of the extraction of finite volume energies as a function of the number of eigenvectors of the gauge-covariant Laplacian entering our distillation setup.
We present results on the phase diagram of Quantum Chromodynamics (QCD) with two light quark flavours at finite chemical potential from first principle lattice simulations. To circumvent the sign problem we use the complex Langevin method. The pion mass is of approximately 480 MeV. We report on the pressure, energy and entropy equations of state. A particular emphasis is put on the “cold” regions of the phase diagram and the observation of the Silver Blaze phenomenon.
Lattice simulations of QCD at non-zero density suffer from the so-called sign problem (complex or negative probabilities), which invalidates importance-sampling methods. We use the complex Langevin equation (CLE) to circumvent the sign problem, measure boundary terms, and use reweighting to test the reliability of the boundary-term observable, confirming expectations from previous studies. We also investigate boundary terms in simulations using the CLE with dynamic stabilization and compare them to results obtained with reweighting.
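For readers unfamiliar with the method, the sketch below shows complex Langevin dynamics on a Gaussian toy model with a complex action (an illustration under my own assumptions, not the QCD simulation of the talk, and without the boundary-term measurement): the real variable is complexified and evolved with the drift of the complex action.

```python
import numpy as np

# Toy model: S(x) = sigma x^2 / 2 with sigma = 1 + i, so exp(-S) is complex
# and the exact result is <x^2> = 1/sigma = 0.5 - 0.5i.
# CLE: complexify x -> z and evolve dz = -S'(z) dt + dW with REAL noise.
rng = np.random.default_rng(1)
sigma = 1.0 + 1.0j
eps, nsteps, ntherm = 0.005, 600_000, 50_000
noise = np.sqrt(2 * eps) * rng.standard_normal(nsteps)

z = 0.0 + 0.0j
acc, count = 0.0 + 0.0j, 0
for n in range(nsteps):
    z += -sigma * z * eps + noise[n]   # drift -S'(z) = -sigma*z, plus noise
    if n >= ntherm:                    # discard thermalization
        acc += z * z
        count += 1
z2 = acc / count
# z2 approaches 1/sigma = 0.5 - 0.5i up to statistical noise
```

In this Gaussian case the CLE provably converges to the correct complex expectation value; the boundary terms studied in the talk are precisely the diagnostic for when this stops being true in interacting theories.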
We obtain the equation of state (
Our result is consistent with several recent works based on effective models which have shown the peak of sound velocity.
We investigate hadron masses in two-color QCD with
Furthermore, we measure hadron masses with isospin
Accurate modeling of the many-body properties of the neutrinosphere appears important for a correct description of core-collapse supernovae. The neutrinosphere is within the region of validity of pionless effective field theory.
We leverage techniques from lattice field theory to do a direct calculation of the many-body physics from leading-order pionless EFT. We present a calculation of thermodynamic observables and the static structure factors of the neutrinosphere accounting for all sources of uncertainty.
I will describe a method to reduce spatial discretization errors in lattice formulations of pionless effective field theory. All
In published work, we reported a study of the H dibaryon in the continuum limit of SU(3)-flavor-symmetric lattice QCD with a pion/kaon/eta mass of roughly 420 MeV, employing finite-volume quantization conditions and distillation. The data were affected by large discretization effects, leading to a small binding energy in the continuum. In this talk, I will present results for nucleon-nucleon scattering based on the same dataset. In the S wave, we find that nucleon-nucleon systems with both isospin zero and one are unbound. We also obtain a nonzero signal for some higher partial waves as well as the mixing between spin-1 coupled S and D waves.
In this talk, I will illustrate an alternative approach to the Luscher formulas for extracting the nuclear force from finite-volume energy levels, using a plane-wave basis and eigenvector continuation. We adopt the formalism of the semilocal momentum-space-regularized chiral nuclear force up to fifth order to investigate two-nucleon energy levels in finite volumes using a plane-wave basis, with no reliance on the partial-wave expansion. In the chiral EFT framework, the long-range one-pion-exchange interaction is included nonperturbatively, and the low-energy constants are determined by fitting lattice QCD data at mpi=450 MeV from the NPLQCD Collaboration. The pion-mass dependence is incorporated self-consistently in the EFT framework. In the calculation, eigenvector continuation is used to accelerate the fitting and uncertainty quantification, and also provides an interface for fitting upcoming lattice QCD results.
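Eigenvector continuation itself is a simple subspace method, sketched below for a generic parametrized Hamiltonian H(c) = H0 + c·H1 (toy random matrices of my own choosing, not the chiral-EFT setup): ground states computed at a few training couplings span a small subspace in which the target-coupling problem is solved by Rayleigh-Ritz.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 40
A = rng.standard_normal((dim, dim)); H0 = (A + A.T) / 2   # toy "free" part
B = rng.standard_normal((dim, dim)); H1 = (B + B.T) / 2   # toy "interaction"
H = lambda c: H0 + c * H1

# Training stage: exact ground-state vectors at a few coupling values.
train_c = [0.0, 0.5, 1.0]
basis = [np.linalg.eigh(H(c))[1][:, 0] for c in train_c]
Y = np.column_stack(basis)                # dim x 3 learned subspace

# Target stage: small generalized eigenvalue problem
#   (Y^T H Y) v = E (Y^T Y) v   (Rayleigh-Ritz in the subspace).
c_target = 0.75
Hp, N = Y.T @ H(c_target) @ Y, Y.T @ Y
E_ec = np.min(np.linalg.eigvals(np.linalg.solve(N, Hp)).real)
E_exact = np.linalg.eigvalsh(H(c_target))[0]
# Rayleigh-Ritz guarantees E_ec >= E_exact; for smoothly varying ground
# states the estimate is typically very accurate at interpolated couplings
```

The speed-up in the fit comes from replacing repeated large diagonalizations by this 3x3 problem once the training vectors are in hand.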
We report on our computation of the eta transition form factor in twisted mass lattice QCD at physical quark masses and at a single lattice spacing. On the lattice, we have access to a broad range of (space-like) photon momenta and can therefore produce data complementary to the experimentally accessible singly virtual case. We use the form factor to determine the eta pole contribution to the hadronic light-by-light scattering in the muon g-2, with an accuracy below 40%. Since so far there are no determinations of this contribution from first principles, even such a crude determination is interesting from a phenomenological point of view.
The rare hyperon decay is a flavour-changing neutral current process, which is highly suppressed within the Standard Model and therefore sensitive to new physics. Due to recent improvements in experimental measurements of this decay, the Standard Model theory prediction must also be improved in order to identify any new physics in this channel. We present updates on our progress towards the first exploratory lattice calculation of the long-distance part of the form factors of this decay, on a 340 MeV pion mass ensemble using domain-wall fermions, as part of the RBC-UKQCD collaboration.
Integrated time-slice correlation functions
the moments method to determine
in the muon g-2 determination or in the determination of smoothed spectral
functions. We show that the short distance part of the integral may lead to
discretization errors when
of the integrand we derive the asymptotic convergence of the integral at small lattice spacing.
For the (tree-level-) normalized moment
we have non-perturbative results down to
mass. A bending of the curve as a function of
spacings. We try to understand the behavior and extract an improved continuum limit.
The parametric error on the QCD-coupling can be a dominant source of uncertainty in several important observables. One way to extract the coupling is to compare high order perturbative computations with lattice evaluated moments of heavy quark two-point functions. The truncation of the perturbative series is a sizeable systematic uncertainty that needs to be under control.
In this talk we give an update on our study [hep-lat/2203.07936] on this issue. We measure pseudo-scalar two-point functions in volumes of
Our results show that both the continuum extrapolations and the extrapolation of the
We perform the complete non-perturbative running of the flavour non-singlet tensor operator from hadronic to electroweak scales in
massless QCD, comparing four different definitions of the renormalisation
constant. We use the same configuration ensembles of arXiv:1802.05243,
subject to Schrödinger Functional (SF) boundary conditions, whereas
we use valence quarks with (
results in
counterterms. Following the recent ALPHA strategy, we exploit two
different running couplings: at high energies (
we use a SF-type coupling, while at low energies (
a Gradient Flow (GF)-type coupling.
The Lorentzian type IIB matrix model is a promising candidate for a nonperturbative formulation of superstring theory. However, it was found recently that a Euclidean space-time appears in the conventional large-
When designing lattice actions, gauge field smearing is frequently used to define the lattice Dirac operator. Since the smearing procedure removes effects of ultraviolet fluctuations, the fermions effectively see a larger lattice spacing than the gauge fields. Creutz ratios, formed from ratios of rectangular Wilson loops, based on smeared gauge fields are an adequate observable to investigate the effect of smearing since they do not need renormalisation and provide a measure of the physical forces felt by the fermions. We study their behaviour at various smearing radii (fixed in lattice units) and in particular how the smearing influences the scaling towards the continuum limit. Since we employ the Wilson gradient flow as smearing, the same Creutz ratios have another, well defined continuum limit, when the flow time is fixed in physical units. We make an attempt to approximately separate the close-to-continuum region for smearing from the one of the physically flowed Creutz ratios.
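The Creutz ratio mentioned above is a simple combination of Wilson loops in which perimeter (self-energy) contributions cancel, leaving a string-tension estimate. A minimal sketch (my own illustration, with a synthetic area-plus-perimeter law standing in for measured loop expectation values):

```python
import numpy as np

def creutz_ratio(W, R, T):
    """chi(R,T) = -ln[ W(R,T) W(R-1,T-1) / (W(R,T-1) W(R-1,T)) ]."""
    return -np.log(W(R, T) * W(R - 1, T - 1) / (W(R, T - 1) * W(R - 1, T)))

# Synthetic Wilson-loop data: area law (string tension sigma) plus a
# perimeter term (coefficient mu) that the ratio must cancel.
sigma, mu = 0.045, 0.21
W = lambda R, T: np.exp(-sigma * R * T - mu * (R + T))

chi = creutz_ratio(W, 3, 3)
# perimeter terms cancel exactly, so chi equals sigma here
```

On real (smeared) configurations the same combination is what isolates the physical force felt by the fermions without requiring renormalisation.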
We report on the development of a lattice formalism for studying the real-time behaviour of radially symmetric configurations of massless scalar fields in radially symmetric, curved spacetimes in 3+1 dimensions. The aim is to numerically study back-reaction effects due to semiclassical gravity in the time evolution of scalar-field configurations, especially those that will eventually evolve into black holes.
The emergence of a strongly coupled ultraviolet fixed point as 4-dimensional fermion-gauge systems cross into the conformal window has long been hypothesized. Using an improved lattice action that includes heavy Pauli-Villars (PV) type bosons, I show that an SU(3) system with 8 fundamental flavors, described by two sets of staggered fermions, has a smooth phase transition from the weak-coupling to a strongly coupled phase.
I investigate the critical behavior of this phase transition using finite size scaling of the renormalized gradient flow coupling. The result of the scaling analysis is not consistent with a first order phase transition, but it is well described by Berezinsky-Kosterlitz-Thouless or BKT scaling. BKT scaling could imply that the 8-flavor system is the opening of the conformal window, an exciting possibility that I study by investigating the renormalization group
The strongly coupled phase exhibits symmetric mass generation (SMG), so the associated fixed point must be related to 't Hooft anomaly cancellation. The existence of a non-perturbative fixed point and an SMG phase could lead to many novel phenomena, justifying future detailed studies of both.
We use exact diagonalization to study quantum chaos in a simple model with two bosonic and one fermionic degree of freedom. Our model has a structure similar to the BFSS matrix model (compactified supersymmetric Yang-Mills theory), and is known to have a continuous energy spectrum. To diagnose quantum chaos, we consider energy level statistics and the out-of-time-order correlators (OTOCs). We find that OTOCs exhibit monotonic growth down to the lowest temperatures, thus indicating Lyapunov instability at all temperatures. This is in contrast to purely bosonic models like pure Yang-Mills theory, which are non-chaotic at low temperatures because of their gapped energy spectrum. Despite the apparently chaotic behavior at all temperatures, we find that the energy level statistics undergoes a sharp transition between non-chaotic, one-dimensional low-energy states and delocalized high-energy states with random-matrix-type statistics of energy levels.
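The level-statistics diagnostic used here can be sketched generically with the standard gap-ratio observable, which needs no unfolding of the spectrum (my own illustration with a GOE random matrix and uncorrelated levels as the two benchmarks, not the model of the talk):

```python
import numpy as np

def mean_gap_ratio(levels):
    """Mean of r_n = min(s_n, s_{n+1}) / max(s_n, s_{n+1}) over consecutive
    level spacings s_n; insensitive to the local density of states."""
    s = np.diff(np.sort(levels))
    return np.mean(np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:]))

rng = np.random.default_rng(0)
n = 2000
M = rng.standard_normal((n, n))
goe = np.linalg.eigvalsh((M + M.T) / np.sqrt(2))   # chaotic benchmark
r_goe = mean_gap_ratio(goe[n // 4: 3 * n // 4])    # bulk of the spectrum
r_poisson = mean_gap_ratio(rng.uniform(0, 1, n))   # uncorrelated levels
# GOE (random-matrix) statistics give <r> near 0.53; Poisson near 0.39
```

A crossover of this ratio between the Poisson and random-matrix values as a function of energy is exactly the kind of transition reported in the abstract.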
The last two decades have witnessed the discovery of tens of hadronic structures beyond the expectations of the traditional quark model; they are candidates for exotic hadrons. Many of these structures are close to the threshold of a pair of hadrons, and thus allow for an EFT treatment. In this talk, I will give an overview of the understanding of such resonances, covering positive-parity heavy mesons and hidden-charm and double-charm near-threshold hadrons.
The past decade has seen rapid developments in flavour physics, in particular driven by the LHCb experiment. A wealth of heavy-hadron states have been discovered, with some of them not fitting in the conventional meson-baryon classification scheme. Precision studies of beauty and charm hadron decays have not only improved our understanding of the flavour structure of the Standard Model, but also revealed a number of intriguing anomalies. This talk will present highlights from the LHCb experiment, focusing on the recent results in hadron spectroscopy and heavy-flavour decays.
Meeting of the International Advisory Committee, on invitation only!
An understanding of the long-running controversy between calculations of nucleon-nucleon interactions using the Luscher spectroscopy method and the HALQCD potential method has advanced significantly in recent years due to the efforts of several groups. In particular, the use of improved operator methods has shed light on possible issues related to excited-state contamination, while the first study of the lattice-spacing dependence in a baryon-baryon system has shown large potential discretization systematics. In this talk, I will present a new study that compares all methods in the literature for computing NN interactions on a single ensemble, in order to discriminate between excited-state contamination and discretization effects, and discuss the conclusions that this controversy has finally brought to light.
Partial quenching can be used to avoid isospin mixing in a theory incorporating a mass twist, but comes at the cost of introducing unitarity violation. This talk will examine pion-pion scattering in partially-quenched twisted-mass lattice QCD using chiral perturbation theory. The specific partially-quenched setup corresponds to that used in numerical lattice QCD calculations of the
Several outstanding puzzles involve electroweak interactions of low-energy nuclear systems. Observables such as long-range matrix elements can be used to study processes such as neutral meson mixing or the substructure of hadrons. Contributions from multi-hadron states to these matrix elements are central to many of these puzzles. In this talk, we present a framework for studying long-range matrix elements from lattice QCD, which extends previous work to include three-hadron on-shell effects. We show the relevant finite-volume scaling relations for connecting correlation functions from lattice QCD to the infinite-volume transition amplitudes.
We present the finite volume contributions to the long distance behavior of the vector correlator, which is dominated by the two-pion scattering states in the I = 1 channel. The finite volume spectroscopy calculations have been performed using the (stochastic) distillation framework on the physical point Nf = 2 + 1 CLS ensemble. We also compute the timelike pion form factor to reconstruct the long distance part of the vector correlator. The reconstructions improve the lattice estimates of hadronic vacuum polarization contribution to the muon anomalous magnetic moment.
We calculate the decay rates for
We present results for an exploratory lattice calculation of the leading parity-violating pion-nucleon coupling
We study the dependence of hadronic resonances on the quark mass through the analysis of data from QCD lattice simulations from various collaborations. Using machine-learning techniques such as the LASSO algorithm, we fit lattice data in order to extrapolate them to the physical point and extract the quark-mass dependence of exotic resonances like the Ds0 and Ds1.
Calculations of nucleon charges and form factors have reached a level of precision requiring a more careful accounting of the contribution of excited states in both the two- and three-point functions. Recently, it was suggested that excited states that are suppressed in two-point functions may be enhanced in certain three-point functions. Such an enhancement increases when using lattice simulations at the physical point where
We present preliminary results of the renormalization functions (RFs) for a number of quark and gluon operators studied in lattice QCD using a gauge-invariant renormalization scheme (GIRS). GIRS is a variant of the coordinate-space renormalization prescription, in which Green's functions of gauge-invariant operators are calculated in position space. A novel aspect is that summations over different time slices of the operators' positions are employed in order to reduce the statistical noise in lattice simulations. We test the reliability of this scheme by calculating RFs for the vector one-derivative quark bilinear operator, which enters the average momentum fraction of the nucleon. We use
We study the electric polarizability of a charged pion from four-point functions in lattice QCD as an alternative to the background field method. We show how to evaluate the correlation functions under special kinematics to access the polarizability. The elastic form factor (charge radius), which is needed in the method, can be obtained from the same four-point functions at large current separations. Preliminary results from the connected quark-line diagrams will be presented.
We investigate an improved analysis of the recently proposed model-independent method to obtain the pion charge radius from the electromagnetic pion three-point function. We discuss a systematic error of the original method in small volumes, and propose an improvement to reduce it. Using the
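For reference, the charge radius that both of the pion studies above target is defined through the slope of the electromagnetic form factor at vanishing virtuality:

```latex
F_\pi(Q^2) \;=\; 1 \;-\; \frac{\langle r^2\rangle}{6}\,Q^2 \;+\; \mathcal{O}(Q^4),
\qquad
\langle r^2\rangle \;=\; -6\,\frac{dF_\pi(Q^2)}{dQ^2}\bigg|_{Q^2=0}.
```

The difficulty on the lattice is that $Q^2$ takes discrete values in a finite volume, which is why model-independent ways of reaching the $Q^2\to 0$ slope are of interest.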
I will give a status report on our calculations of matrix elements of quark bilinear operators between nucleon states. Summary of results for isovector charges, moments, and axial, electric and magnetic form factors will be presented.
The magnetic fields generated in non-central heavy-ion collisions are among the strongest fields produced in the universe, reaching magnitudes comparable to the scale of strong interactions. Backed by model simulations, we expect the resulting field to be spatially modulated, deviating significantly from the commonly considered uniform profile. In this work, we present the next step to improve our understanding of the physics of quarks and gluons in heavy-ion collisions by adding an inhomogeneous magnetic background to our lattice QCD simulations. We simulate 2+1 staggered fermions with physical quark masses for a range of temperatures covering the QCD phase transition. We assume a
We discuss the QCD phase diagram in the presence of a strong magnetic background field. We provide numerical evidence, based on lattice simulations of QCD with 2+1 flavours and physical quark masses, for a crossover transition at
The introduction of parallel electric and magnetic fields in the QCD vacuum enhances the weight of topological sectors with a non-zero topological charge. For weak fields, there is a linear response for the topological charge. We study this linear response which can be interpreted as the axion-photon coupling. In this work we use lattice simulations with improved staggered quarks including background electric and magnetic fields.
The photon emissivity of the quark-gluon plasma (QGP) is an important input to predict the photon yield in heavy-ion collisions, particularly for transverse momenta in the range of 1 to 2 GeV. Photon production in the QGP can be probed non-perturbatively in lattice QCD via (Euclidean) time-dependent correlators. Analyzing the spatially transverse channel, as well as the difference of the transverse and longitudinal channels as a consistency check, we determine the photon emissivity based on continuum-extrapolated correlators in two-flavour QCD. Estimates of the lepton-pair production rate can be derived by combining the two aforementioned channels.
Thermal photons from the QGP provide important information about the interaction among the plasma constituents. The photon production rate from a thermally equilibrated plasma is proportional to the transverse spectral function
We investigate the properties of the pion quasiparticle in the thermal hadronic phase of
The electromagnetic coupling constant,
These non-perturbative effects can be determined from ab initio calculations on the lattice. We present preliminary lattice results for the leading order hadronic contribution to this running at different values of
It is well known that the electromagnetic coupling constant alpha is an energy-scale-dependent quantity. Part of this dependence originates from hadrons and can therefore be computed using lattice QCD. The value at the mass of the Z boson is of particular interest. The large energy range makes a direct simulation unfeasible, so it has to be split into several ranges. Setting the scale of the smaller and finer lattices, which cover the higher energies, is a challenging task. We present a general method to handle this issue in lattice gauge theories. A test of this strategy in two-dimensional QED is carried out, and the hadronic vacuum polarization is computed over an energy range that spans two orders of magnitude.
This talk presents a determination of the short-distance contributions to the unphysical
Experimental searches for neutrinoless double-beta decay aim to determine whether the neutrinos are Dirac or Majorana fermions. Interpreting double-beta half-lives or experimental exclusions in terms of neutrino physics requires knowledge of the nuclear matrix elements, which are currently estimated from various nuclear models and carry a large model uncertainty. This talk will present preliminary results from a first-principles lattice QCD calculation of the short-distance (from a heavy intermediate Majorana neutrino) and long-distance (from a light Majorana neutrino) nuclear matrix elements for the simple
The Budapest-Marseille-Wuppertal collaboration computed the leading hadronic vacuum polarization contribution to the anomalous muon magnetic moment with unprecedented accuracy on the lattice, using staggered fermions. Here we present an improved cross-check of the staggered result for the intermediate window observable using a mixed-action setup: overlap valence quarks on staggered sea ensembles. We focus on the light connected contribution. Details of the overlap fermion formulation and of the methods used for the measurements of the hadronic vacuum polarization are described. We present first results for two different setups on lattices with a spatial extent of 3 fm.
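For orientation, the intermediate window observable is conventionally defined in the time-momentum representation (a sketch following the commonly used RBC/UKQCD conventions, not taken from this contribution):

```latex
a_\mu^{\mathrm{win}} \;=\; \left(\frac{\alpha}{\pi}\right)^{2}
\int_0^\infty dt\; \tilde K(t)\, G(t)\,
\bigl[\Theta(t,t_0,\Delta)-\Theta(t,t_1,\Delta)\bigr],
\qquad
\Theta(t,t',\Delta)=\tfrac12\Bigl[1+\tanh\tfrac{t-t'}{\Delta}\Bigr],
```

where $G(t)$ is the zero-momentum vector correlator, $\tilde K(t)$ the known QED kernel, and the standard window parameters are $t_0=0.4$ fm, $t_1=1.0$ fm, $\Delta=0.15$ fm. The window suppresses both the short-distance region (large discretization effects) and the long-distance tail (large statistical and finite-volume effects), which makes it well suited for cross-checks between fermion formulations.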
We present preliminary results toward the extraction of the transition form factors
We present the status of ongoing work to extract pseudoscalar and vector decay constants for
Our calculation is based on
Using domain wall light, strange and charm quarks, and relativistic
We then extrapolate to physical quark masses and the continuum and compare to predictions by other lattice collaborations and QCD sum rules. Furthermore, we use our results to test heavy quark symmetry relations.
We report on our progress in the non-perturbative calculation of the decay rates for inclusive semi-leptonic decays of charmed mesons from lattice QCD. In view of the long-standing tension in the determination of the CKM matrix elements
We perform a pilot lattice simulation for the
We address the non-perturbative calculation of the decay rate of the inclusive semi-leptonic
We perform a pilot lattice computation for
Among the many anomalies and tensions in flavour physics, one of the most persistent ones is the
Over the years, lattice QCD has been extremely successful in calculating physical quantities needed for the exclusive determination of
Only recently have new methods been proposed for computing inclusive decay rates of semileptonic B decays using lattice QCD. These new methods rely on the extraction of the hadronic spectral density from Euclidean correlators computed on the lattice.
In this talk, one of these new methods will be discussed together with the presentation of the first results of the inclusive decay rate and related observables. We use one of the gauge ensembles provided by the ETM collaboration with an unphysical pion mass and unphysically light
In this talk, we present Lattice QCD measurements for the matrix elements of
We give an update on the ongoing effort of the RC
We determine, from Lattice QCD, the elastic
We present our calculations for the I=1/2,3/2 K-pi scattering length, extracted from the interaction energy of Euclidean two-point functions. We use the domain wall fermion action with physical quark masses at a single lattice spacing. We are specifically interested in the systematic effects due to around-the-world terms on the overall determination of the scattering length. We present our progress and discuss the various systematic effects in our preliminary results.
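For context, the extraction above relies on Lüscher's threshold expansion, which relates the finite-volume two-particle energy shift to the scattering length (schematic, in one common sign convention; $\mu$ denotes the reduced mass of the two-meson system):

```latex
\Delta E_0 \;=\; -\frac{2\pi a_0}{\mu L^3}
\left[\,1 + c_1\,\frac{a_0}{L} + c_2\left(\frac{a_0}{L}\right)^{2}\right]
+ \mathcal{O}(L^{-6}),
\qquad c_1 = -2.837297,\quad c_2 = 6.375183 .
```

The around-the-world contributions mentioned in the abstract are exponentially suppressed terms in the two-point functions themselves, which contaminate the determination of $\Delta E_0$ before this formula is applied.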
Resonances play an important role in Standard Model phenomenology. In particular, hadronic resonances are found in flavour-physics processes, such as
Excited state contamination is one of the most challenging sources of systematics to tackle in the determination of nucleon matrix elements and form factors. The signal-to-noise problem prevents one from considering large source-sink time separations. Instead, state-of-the-art analyses consider multi-state fits. Excited state contributions to the correlation functions are particularly significant in the axial channel. In this work, we confront the problem directly. Since the major source of contamination is understood to be related to pion production, we consider three-point correlators with a
constructed using different bases of
Determining the quark and gluon contributions to the momentum of a hadron is a difficult and computationally expensive problem. The difficulty mainly arises from the calculation of the gluon matrix element, which involves a quark-line disconnected gluon operator that suffers from noisy ultraviolet fluctuations. Furthermore, a complete calculation also requires a determination of the non-perturbative renormalisation of this operator. In this work, we performed a quenched QCD study of the fully renormalised quark and gluon contributions to the pion and nucleon momenta via an adaptation of the Feynman-Hellmann technique. We find the momentum sum rules are satisfied within our uncertainties for both the pion and nucleon for three different values of the quark masses. We also discuss some recent progress on extending this procedure to dynamical simulations.
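The Feynman-Hellmann idea referred to here can be stated schematically: one adds the operator of interest to the action with a source parameter $\lambda$ and reads off the matrix element from the response of the energy (conventions and normalization factors, e.g. a relativistic $1/(2E_H)$, vary between implementations):

```latex
S \;\to\; S(\lambda) = S + \lambda \sum_x \mathcal{O}(x)
\quad\Longrightarrow\quad
\frac{\partial E_H(\lambda)}{\partial \lambda}\bigg|_{\lambda=0}
\;\propto\; \langle H|\,\mathcal{O}\,|H\rangle .
```

The practical advantage is that energy shifts are extracted from two-point functions, so the noisy disconnected three-point function for the gluon operator is traded for spectroscopy on modified ensembles.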
The light-cone distribution amplitude (LCDA) of the pion carries information about the parton momentum distribution and is an important theoretical input into various predictions of exclusive measurements at high energy, including the pion electromagnetic form factor. We present progress towards a lattice calculation of the fourth Mellin moment of the LCDA using the heavy quark operator product expansion (HOPE) method.
We present an update on the calculation of flavor diagonal nucleon axial, scalar and tensor charges on eight 2+1+1-flavor MILC HISQ ensembles using Wilson-clover fermions. We discuss the excited state contributions (ESC) in the connected and disconnected diagrams, nonperturbative calculation of the renormalization constants and flavor mixing in the RI-sMOM scheme. These data are extrapolated to the physical point using a simultaneous chiral-continuum-finite-volume fit.
Investigation of QCD thermodynamics for
We perform finite-temperature 2+1-flavor lattice QCD simulations employing the Möbius domain-wall fermion near the (pseudo-)critical point at Nt=12 and 16. The simulation points are chosen along lines of constant physics, where the quark mass is fixed near its physical value. The input quark masses for the Möbius domain-wall fermion with Ls=12 are tuned by taking the residual mass into account. In this talk, we focus on simulation details and present some preliminary results.
Relativistic rotation changes the QCD critical temperature. Various phenomenological and effective models predict a decrease of the critical temperature in rotating QCD. Nevertheless, lattice simulations show that the critical temperature in gluodynamics increases due to rotation. In QCD, however, rotation acts on both gluons and fermions, and the combination of these effects may lead to unexpected results. In this report, the first lattice results for rotating QCD with dynamical
Polyakov loop effective theories have been shown to successfully describe the thermodynamics of QCD. Furthermore, due to the sign problem, they represent an alternative avenue for investigating the physics at non-zero chemical potential. However, when working with these effective theories, a new set of couplings appears whose expressions in terms of the gauge coupling and
For the exploration of the phase diagram of lattice QCD, effective Polyakov loop theories provide a valuable tool in the strong-coupling and heavy-quark-mass regime. In practice, the evaluation of these theories is limited by the appearance of long-range and multi-point interaction terms. It is well known that for theories with such interactions, mean-field approximations can be expected to yield reliable results. Here, these approximations are applied to such effective theories. Within this framework, the critical endpoint of the deconfinement transition is determined and the results are compared to the literature. The treatment can also be used to investigate the phase diagram at non-zero baryon and isospin chemical potential.
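As background, the self-consistency iteration at the heart of any mean-field treatment can be illustrated on the Ising analogue m = tanh(βzm) with coordination number z; the actual effective-theory couplings are not reproduced here:

```python
import math

# Mean-field fixed-point iteration for the Ising analogue m = tanh(beta*z*m).
# The same solve-for-the-background logic underlies mean-field treatments
# of Polyakov loop effective theories (this toy model is illustrative only).
def mean_field_m(beta, z=4, m0=0.5, tol=1e-12):
    m = m0
    for _ in range(10000):
        m_new = math.tanh(beta * z * m)
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m
```

Below the mean-field critical coupling beta_c = 1/z the iteration collapses to m = 0 (disordered phase); above it, a non-zero magnetisation survives, signalling the transition.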
Fluctuations of conserved charges in a grand canonical ensemble can be computed on the lattice and thus provide theoretical input for freeze-out phenomenology in heavy-ion collisions. Electric charge fluctuations and the corresponding higher-order correlators are extremely difficult to compute, suffering from the most severe lattice artefacts. We present new simulation data with a novel discretization in which these effects are strongly suppressed, and provide continuum-extrapolated results in the temperature region of the chemical freeze-out.
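The correlators in question are cumulants of ln Z with respect to the chemical potentials; a toy sketch of how such susceptibilities are defined and evaluated (the model and the constant z below are illustrative, not lattice data):

```python
import numpy as np
from math import comb

# Conserved-charge fluctuations are cumulants of ln Z in μ/T:
# χ_n = d^n ln Z / d(μ/T)^n at μ = 0.
# Toy model: classical gas of charge ±1 particles, ln Z(μ̂) = 2 z cosh(μ̂),
# with z an illustrative fugacity-like constant.
z = 0.3

def lnZ(mu_hat):
    return 2.0 * z * np.cosh(mu_hat)

def chi(n, h=1e-3):
    # n-th derivative at μ̂ = 0 via central finite differences
    return sum((-1)**k * comb(n, k) * lnZ((n / 2 - k) * h)
               for k in range(n + 1)) / h**n
```

For this toy model the odd cumulants vanish at μ = 0 and χ2 = 2z; on the lattice the derivatives are taken of the measured partition function instead.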
The Fermilab Lattice, HPQCD, and MILC collaborations are engaged in multi-year projects to compute the hadronic vacuum polarization (HVP) contribution to the anomalous magnetic moment of the muon with high precision. In this talk, we present the status of our calculation of the light-quark connected contributions to HVP. The calculation relies on four ensembles of gauge configurations generated by the MILC collaboration. These ensembles have 2+1+1 flavors of dynamical highly-improved staggered quarks (HISQ) with the common up and down quark masses tuned to give a pion mass very close to its physical value. Lattice spacings range from approximately 0.15 fm to 0.06 fm. For most ensembles, the statistics have been increased since our last publication. We have results for various window regions that restrict the contribution to time ranges for which the vector-current correlation is precisely determined.
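The window regions mentioned above weight the vector-current correlator with smooth step functions in Euclidean time; a sketch of the smeared window (the parameter values t0 = 0.4 fm, t1 = 1.0 fm, Δ = 0.15 fm are the choices commonly quoted in the literature for the intermediate window):

```python
import numpy as np

# Smooth window used to split the HVP integrand in Euclidean time:
# Θ(t, t') = (1 + tanh((t - t') / Δ)) / 2,
# intermediate window: W(t) = Θ(t, t0) - Θ(t, t1).
t0, t1, delta = 0.4, 1.0, 0.15  # fm

def theta(t, t_edge):
    return 0.5 * (1.0 + np.tanh((t - t_edge) / delta))

def window(t):
    return theta(t, t0) - theta(t, t1)

t = np.linspace(0.0, 3.0, 301)
w = window(t)   # ~1 inside [t0, t1], ~0 outside
```

Multiplying the integrand by `w` before integrating isolates the time range where the lattice correlator is most precise, which is why window quantities are used for cross-checks between collaborations.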
We discuss ongoing improvements to the hadronic vacuum polarization computation by the Budapest-Marseille-Wuppertal collaboration.
Isospin-breaking corrections to the HVP component of the anomalous magnetic moment of the muon are needed to ensure the theoretical precision of
We present a lattice calculation of the window contribution (
We present new results for the light-quark connected part of the leading order hadronic-vacuum-polarization (HVP) contribution to the muon anomalous magnetic moment, using staggered fermions. We have collected more statistics on previous ensembles, and we added two new ensembles. This allows us to reduce statistical errors on the HVP contribution and related window quantities significantly. We also calculated the current-current correlator to next-to-next-to-leading order (NNLO) in staggered chiral perturbation theory, so that we can correct to NNLO for finite-volume, pion-mass mistuning and taste-breaking effects. We discuss the applicability of NNLO chiral perturbation theory, emphasizing that it provides a systematic EFT approach to the HVP contribution, but not to short- or intermediate-distance window quantities. This makes it difficult to assess systematic errors on the standard intermediate-distance window quantity that is now widely considered in the literature. In view of this, we investigate a longer-distance window, for which EFT methods should be more reliable. Our most important conclusion is that new high-statistics computations at lattice spacings significantly smaller than 0.06 fm are indispensable. The ensembles we use have been generously provided by MILC and CalLat.
This talk gives an update of the HVP contribution to the muon g-2 from the RBC/UKQCD collaborations.
One major systematic uncertainty of lattice QCD results is the continuum extrapolation required to extract the continuum limit at vanishing lattice spacing.
I will present an analysis based on Symanzik Effective Theory for lattice QCD actions with Ginsparg-Wilson and Wilson quarks. This analysis yields various powers
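In practice, the resulting asymptotic forms guide the continuum fits; a minimal sketch with a pure a² ansatz on synthetic data (real analyses include the log-modified powers discussed in the talk):

```python
import numpy as np

# Continuum extrapolation sketch: fit O(a) = O0 + c * a^2 and read off
# the continuum value O0. Data below are synthetic and noise-free;
# Symanzik analyses also include terms like a^2 (log a)^Gamma.
a = np.array([0.10, 0.08, 0.06, 0.05])   # lattice spacings in fm (illustrative)
obs = 1.25 + 0.8 * a**2                  # synthetic observable values

A = np.vstack([np.ones_like(a), a**2]).T  # design matrix for linear least squares
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
O0, c = coef                              # continuum value and a^2 slope
```

With real data the fit would carry statistical errors and the ansatz dependence itself becomes part of the systematic uncertainty.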
Simulating QCD in the traditional way on very large lattices leads to conceptual and technical issues with impact on performance and reliability. In view of master-field simulations, introduced at Lattice 2017, simulations with dynamical fermions are particularly challenging and require additional stabilising measures to reach physical point lattices without compromising the quality of the simulation. The proposed stabilising measures comprise algorithmic changes as well as a new O(a)-improved Wilson action with exponential clover term.
In this talk, the motivation for the stabilising measures and their effects are reviewed, as both standard-sized and master-field simulations profit from their implementation. Furthermore, the current status and prospects of QCD master-field simulations are presented.
The Hubbard model is an important tool for understanding the electrical properties of various materials. More specifically, on the honeycomb lattice it is used to describe graphene, predicting a quantum phase transition from a semimetal to a Mott insulating state. In this talk I will explain two different numerical techniques we employed for simulations of the Hubbard model: the Hybrid Monte Carlo algorithm allowed us to simulate unprecedentedly large lattices, whereas Tensor Networks can be used to avoid the sign problem entirely. The respective strengths and weaknesses of the two methods will be discussed.
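As a rough illustration of the first of these techniques, here is a minimal Hybrid Monte Carlo update for a one-dimensional Gaussian action, standing in for the Hubbard-model action (purely illustrative, not the authors' code):

```python
import numpy as np

# Minimal Hybrid Monte Carlo for the action S(x) = x^2 / 2.
# Molecular-dynamics trajectory (leapfrog) + Metropolis accept/reject.
rng = np.random.default_rng(2)

def leapfrog(x, p, n_steps=10, dt=0.1):
    p -= 0.5 * dt * x              # half kick: dS/dx = x
    for _ in range(n_steps - 1):
        x += dt * p                # drift
        p -= dt * x                # full kick
    x += dt * p
    p -= 0.5 * dt * x              # final half kick
    return x, p

def hmc_sample(n):
    x, out = 0.0, []
    for _ in range(n):
        p = rng.standard_normal()                  # refresh momentum
        H_old = 0.5 * p * p + 0.5 * x * x
        x_new, p_new = leapfrog(x, p)
        H_new = 0.5 * p_new * p_new + 0.5 * x_new * x_new
        if rng.random() < np.exp(H_old - H_new):   # Metropolis step
            x = x_new
        out.append(x)
    return np.array(out)

samples = hmc_sample(5000)  # should sample a unit Gaussian
```

The scaling advantage of HMC comes from the global molecular-dynamics update, which is what makes the large Hubbard lattices mentioned above tractable.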
The talk will review recent algorithms that are enabling state-of-the-art lattice QCD simulations. We will begin with an overview of the developments that have been crucial in simulating fermions at physical quark masses and fine lattice spacings. This will include an overview of iterative linear solver methods, such as multi-grid methods, and of challenges arising from large-scale Markov-chain Monte Carlo simulations on state-of-the-art HPC machines.
We will also discuss promising methods that could potentially alleviate topological freezing, to enable simulations at very fine lattice spacings close to the continuum.
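As an illustration of the coarse-grid-correction idea underlying such multi-grid methods, here is a two-grid solver for the 1D Laplacian (a toy problem, not lattice QCD code):

```python
import numpy as np

# Two-grid cycle for the 1D Laplacian: smooth, restrict the residual,
# solve exactly on the coarse grid, prolong the correction, smooth again.
n = 63                                   # fine-grid interior points
h = 1.0 / (n + 1)
A = (np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1)) / h**2
b = np.ones(n)

nc = (n - 1) // 2                        # coarse grid
hc = 2.0 * h
Ac = (np.diag(np.full(nc, 2.0)) + np.diag(np.full(nc - 1, -1.0), 1)
      + np.diag(np.full(nc - 1, -1.0), -1)) / hc**2

def smooth(x, nsweeps=3, omega=2.0 / 3.0):
    for _ in range(nsweeps):             # damped Jacobi: removes rough error modes
        x = x + omega * (b - A @ x) * h**2 / 2.0
    return x

def restrict(r):                         # full weighting: fine -> coarse
    return 0.25 * r[0:-2:2] + 0.5 * r[1:-1:2] + 0.25 * r[2::2]

def prolong(e):                          # linear interpolation: coarse -> fine
    out = np.zeros(2 * e.size + 1)
    out[1::2] = e
    out[2::2] += 0.5 * e
    out[:-2:2] += 0.5 * e
    return out

x = np.zeros(n)
for _ in range(10):
    x = smooth(x)
    r = b - A @ x
    x = x + prolong(np.linalg.solve(Ac, restrict(r)))  # coarse-grid correction
    x = smooth(x)

residual = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
```

Lattice QCD multi-grid solvers replace the geometric coarse grid with an adaptively constructed near-null-space basis for the Dirac operator, but the cycle structure is the same.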
The first step in any QFT calculation of a phenomenological observable is the matching of the theory to Nature. The matching procedure fixes the parameters of the theory in terms of an equal number of external inputs that, if the theory is expected to reproduce observations, must be experimentally measured physical quantities. At the (sub)percent level of accuracy QED radiative corrections become important and it is QCD+QED that is expected to describe the hadronic Universe. At this level of precision phenomenological predictions deriving from lattice QCD calculations do depend on the choice of the external inputs used to match/define the approximate theory.
In the first part of this talk I will concentrate on the theoretical aspects of the matching procedure of lattice QCD and of the lattice calculation of strong-isospin and QED radiative corrections to hadronic observables. In the second part I will concentrate on the so-called theory scales. Relying heavily on the work recently done by the scale-setting working group for the latest edition of the FLAG review, I will discuss the numerical results obtained by the different collaborations for these useful auxiliary quantities.
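For orientation, converting dimensionless lattice numbers into physical units with such a theory scale works as follows (the w0 value and the ensemble numbers below are illustrative assumptions, not FLAG results):

```python
# Scale-setting sketch: once a theory scale such as w0 is known in
# physical units, dimensionless lattice outputs convert to MeV.
w0_phys_fm = 0.1715        # fm; a value of the order quoted for w0 (assumption)
w0_over_a  = 1.80          # measured on a hypothetical ensemble (assumption)

a_fm = w0_phys_fm / w0_over_a      # lattice spacing in fm
hbar_c = 197.327                   # MeV * fm

aM = 0.45                          # dimensionless lattice mass (assumption)
M_MeV = aM * hbar_c / a_fm         # mass in physical units
```

The talk's point is precisely that the numbers standing in for `w0_phys_fm` here must ultimately trace back to experimentally measured inputs through the matching procedure.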