Francesco A. Evangelista*
Department of Chemistry and Cherry Emerson Center for Scientific Computation, Emory University, Atlanta, GA 30322, USA. E-mail: francesco.evangelista@emory.edu
First published on 10th October 2024
The Faraday Discussion on Correlated electronic structure took place from the 17th to the 19th of July 2024 in London, UK. The Discussion encompassed various facets of electron correlation, ranging from its formal definition and quantification to emerging frontiers in electronic structure theory, with applications in the solid state, integration with machine learning, and quantum computing.
In the Spiers Memorial Lecture (https://doi.org/10.1039/D4FD00141A), Chan argued that for chemically relevant problems, combinatorial computational complexity is unlikely. Chan proposed dividing the computational complexity of quantum chemistry tasks into two components: a ‘preparation’ cost and a ‘refinement’ cost. The preparation cost refers to the effort required to identify a good starting approximation to the ground state, while the refinement cost refers to the effort needed to improve this initial state. Chan further discussed how the observed locality of chemistry—referring to the limited interaction distance and ability to reason in terms of localised bonds in chemical systems—and numerical evidence from approximate quantum chemistry methods (classical heuristics) support a conjecture that the cost of the refinement step is polynomial in the system size L and the inverse target accuracy 1/ε:
C_refine(L, ε) = O(poly(L)·poly(1/ε)).    (1)
Formalising the computational cost into such a form, even if generic and not directly useful for estimating the actual cost of quantum chemistry computations, is still of great importance as it facilitates comparing classical heuristics with quantum algorithms. Using this classical heuristic cost conjecture, Chan examined the advantages of quantum algorithms, concluding that quantum advantage in chemically relevant systems will most likely arise from the refinement component. In the author's opinion, eqn (1) could serve as a foundation for productive exchange between the quantum chemistry and quantum computing communities and open new lines of inquiry that combine formal analyses and numerical evidence to better understand the computational complexity of classical quantum chemistry methods.
Another theme that emerged from the Discussion was the exploitation of both manifest and hidden spin symmetries in quantum chemistry. Li Manni's contribution (https://doi.org/10.1039/D4FD00061G) focused on the quantum anamorphosis approach, which compresses wave functions by exploiting local spin symmetries within a spin-adapted basis. This approach involves identifying sets of orbitals whose corresponding spin operator commutes with the molecular Hamiltonian. Also focusing on spin symmetry, two other contributions explored its application in multi-reference theories. Guo introduced a spinless formulation of the linearised adiabatic connection and compared its performance with second-order n-electron valence perturbation theory (NEVPT2), highlighting its potential advantages in certain scenarios (https://doi.org/10.1039/D4FD00054D). Gunasekera reported on a spin-adapted coupled-cluster (CC) formalism based on Lindgren's normal-ordered exponential ansatz, specifically tailored for open-shell configuration state functions (https://doi.org/10.1039/D4FD00044G). Both contributions addressed the problem of reducing the computational cost and enforcing spin symmetry in multi-reference methods.
Regarding stochastic methods, Filip's paper (https://doi.org/10.1039/D4FD00035H) reported two algorithmic developments aimed at accelerating quantum Monte Carlo computations. The first development involves using a BK-tree-based search algorithm to reduce the computational scaling associated with searching for determinants in lists during multi-reference coupled-cluster Monte Carlo (MR-CCMC) computations. The second development utilises a Chebyshev expansion of the exponential projector, along with its infinite time limit (known as the wall-Chebyshev function), to further accelerate quantum Monte Carlo simulations.
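The first of these developments rests on a metric-tree data structure. As an illustration only—the distance measure and implementation details of the actual MR-CCMC code are described in the paper—a minimal BK-tree over determinants encoded as bit strings, using Hamming distance as the metric, can be sketched as follows:

```python
def hamming(a, b):
    """Hamming distance between two determinants stored as bit strings."""
    return bin(a ^ b).count("1")

class BKTree:
    """Metric tree for locating stored determinants close to a query."""

    def __init__(self):
        self.root = None  # each node is [determinant, {distance: child node}]

    def insert(self, det):
        if self.root is None:
            self.root = [det, {}]
            return
        node = self.root
        while True:
            d = hamming(det, node[0])
            if d == 0:
                return  # determinant already stored
            if d not in node[1]:
                node[1][d] = [det, {}]
                return
            node = node[1][d]

    def query(self, det, radius):
        """All stored determinants within Hamming distance `radius` of `det`."""
        hits, stack = [], [self.root] if self.root else []
        while stack:
            item, children = stack.pop()
            d = hamming(det, item)
            if d <= radius:
                hits.append(item)
            # Triangle inequality: only branches with |d_child - d| <= radius
            # can contain a match, so the rest of the tree is pruned.
            for dc, child in children.items():
                if d - radius <= dc <= d + radius:
                    stack.append(child)
        return hits

# Toy determinant list (illustrative bit strings, not from the paper).
dets = [0b001011, 0b001101, 0b110100, 0b111000, 0b000111]
tree = BKTree()
for det in dets:
    tree.insert(det)
near = sorted(tree.query(0b001111, radius=1))
```

The pruning via the triangle inequality is what reduces the cost relative to a linear scan through the determinant list.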
Software that facilitates the implementation and application of electronic structure methods is a critical component of research for many of the participants in this Discussion. These projects often involve large communities of developers with varying levels of proficiency in software development and diverse programming styles. Neese's contribution (https://doi.org/10.1039/D4FD00056K) provided an in-depth analysis of the challenges associated with developing software in a sustainable manner, focusing specifically on the ORCA package. His analysis highlighted key issues such as maintaining code quality, integrating contributions from a wide range of developers, and ensuring long-term software sustainability.
Two contributions in this Discussion focused on advancing the GW method. Harsha's paper emphasised the importance of relativistic effects beyond pseudopotentials in GW computations on solids, demonstrating how neglecting these effects can lead to inaccuracies (https://doi.org/10.1039/D4FD00043A). Loos' contribution (https://doi.org/10.1039/D4FD00037D) explored the use of the cumulant expansion for the Green's function within the GW framework, showing that while this modified approach can lead to improvements, they are not systematic. The discussion of these works highlighted several critical open issues. Converging to the basis-set limit remains problematic in correlated solid-state computations, particularly due to the severity of linear dependencies. Furthermore, the accuracy of the GW method itself is limited. These challenges underscore the need for further methodological advancements to fully realise the potential of Green's function-based approaches in quantum chemistry.
An alternative strategy for performing computations on solids is quantum mechanical embedding, which exploits the locality of correlation effects to map the full system into a local impurity problem treated with a high-level theory embedded in a surrounding bath typically approximated at the mean-field level. An example is dynamical mean-field theory (DMFT),6,7 which is well-suited for addressing strong correlation effects within the impurity by incorporating a self-energy correction. Regarding DMFT, Zhu's contribution focused on preserving translational symmetry within calculations on systems with periodic boundary conditions (https://doi.org/10.1039/D4FD00068D), exploring the use of overlapping, atom-centered impurity fragments in ab initio all-orbital DMFT for weakly correlated 2D materials. This approach improved the description of spectral functions; however, achieving systematic convergence towards the full system limit while maintaining translational symmetry remains challenging. Mejuto-Zaera introduced a quantum-embedding method that incorporates ghost particles—an approach designed to more effectively capture non-local correlations (https://doi.org/10.1039/D4FD00053F). This method was tested on molecular bond-breaking processes, demonstrating its potential to enhance the accuracy of computations by addressing non-local effects that are typically difficult to model.
Over the past two decades, considerable effort has been devoted to improving the accuracy of quantum chemistry methods, particularly by introducing explicit dependence on interelectron coordinates, as seen in the development of f12 methods.8 More recently, an approach has emerged using basis-set corrections based on Kohn–Sham density functional theory (DFT) that promises to achieve the same accuracy as f12 methods at a much lower cost.9 Giner's contribution to this Discussion applied these DFT-based corrections to excited-state computations using the equation-of-motion coupled-cluster singles and doubles (EOM-CCSD) method, showing the potential of this approach to improve basis-set convergence in correlated excited-state calculations (https://doi.org/10.1039/D4FD00033A).
Another solution to the basis-set incompleteness error is offered by the transcorrelated Hamiltonian method, first proposed by Boys and Handy.10 This method aims to remove divergences in the Coulomb operator at the electron–electron coalescence point via a similarity transformation of the Hamiltonian. It has gained renewed attention,11,12 and was the focus of three contributions in this Discussion. Reiher's paper (https://doi.org/10.1039/D4FD00060A) explored several aspects of the transcorrelated method in combination with the density-matrix renormalisation group.13 This work focused on the selection of correlators and analysed how the parameters within these correlators influence the accuracy of the computed results. Kats introduced an innovative approach that combines bi-orthogonal orbital optimisation with an approximate transcorrelation scheme termed xTC (https://doi.org/10.1039/D4FD00036F). The xTC method combines on-the-fly integral evaluation with the generation of modified integrals, including up to two-electron terms. This flexibility allows xTC to be applied alongside most standard electronic structure methods, as demonstrated in its implementation for second-order Møller–Plesset perturbation theory (MP2), distinguishable cluster singles and doubles (DCSD), and Λ coupled-cluster with singles, doubles, and perturbative triples [ΛCCSD(T)]. Dobrautz explored the use of the transcorrelated method in combination with adaptive variational quantum imaginary time evolution (AVQITE), a hybrid quantum-classical algorithm (https://doi.org/10.1039/D4FD00039K). The transcorrelated method was shown to enhance basis-set convergence and reduce the circuit depth of the AVQITE algorithm, making the approach more computationally efficient.
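The defining algebraic feature shared by these contributions is that the transcorrelated Hamiltonian is obtained by a (generally non-unitary) similarity transformation, which preserves the spectrum while producing a non-Hermitian effective operator. This can be illustrated with small matrices—here a generic random matrix stands in for the Jastrow correlator, so the sketch shows only the isospectral, non-Hermitian character of the transformation, not an actual transcorrelation:

```python
import numpy as np

def expm_taylor(A, order=30):
    """Matrix exponential via a truncated Taylor series (adequate for small ||A||)."""
    out = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, order + 1):
        term = term @ A / k
        out = out + term
    return out

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
Hmat = 0.5 * (M + M.T)                 # toy Hermitian "Hamiltonian"
T = 0.1 * rng.standard_normal((n, n))  # generic (non-anti-Hermitian) correlator
S = expm_taylor(T)
Hbar = np.linalg.inv(S) @ Hmat @ S     # similarity transform: e^{-T} H e^{T}

# The spectrum is unchanged, but Hbar is no longer Hermitian.
eH = np.sort(np.linalg.eigvalsh(Hmat))
eHbar = np.sort(np.linalg.eigvals(Hbar).real)
```

Because the transformation is not unitary, left and right eigenvectors differ—one reason bi-orthogonal formulations such as the one discussed by Kats arise naturally in this context.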
While these contributions demonstrated the promise of the transcorrelated method, they also highlighted the need for further refinement, particularly concerning the choice of correlators. During the Discussion, two key strategies were proposed to address this issue: developing improved, molecule-independent correlation factors or finding efficient ways to optimise the parameters within the correlator deterministically. An interesting aspect raised in the Discussion is that the transcorrelated method introduces three-body interactions, which are absent from traditional chemical Hamiltonians. This raises the question of how to manage the growth of operator rank in similarity transformation-based theories and emphasises the importance of understanding and efficiently incorporating these induced many-body interactions into quantum chemical computations.
Rubenstein's contribution (https://doi.org/10.1039/D4FD00051J) explored how machine learning, specifically Gaussian processes, can be employed to extrapolate finite-size many-body simulations to their thermodynamic limit. This approach offers a promising way to overcome the limitations of finite-size effects in electronic structure calculations. Another approach to integrating machine learning in electronic structure is by encoding information about a quantum state. Chen's paper focused on variational quantum Monte Carlo (VMC) using real-space deep neural network wave functions for solids (https://doi.org/10.1039/D4FD00071D). This work introduced new methods for extending the approach beyond energy calculations to also evaluate forces on nuclei, potentially broadening the applicability of deep neural network VMC.
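The extrapolation idea in Rubenstein's contribution can be illustrated with a toy Gaussian-process regression in pure NumPy. The finite-size energies below follow the hypothetical form E(N) = E_inf + a/N and are purely illustrative, not data from the paper; the model is trained on energies as a function of 1/N and queried at 1/N = 0:

```python
import numpy as np

def rbf(x1, x2, ell):
    """Squared-exponential kernel between two sets of 1D points."""
    return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ell ** 2)

# Hypothetical finite-size energies per particle, E(N) = E_inf + a/N.
N = np.array([8.0, 16.0, 32.0, 64.0, 128.0])
E_inf, a = -1.0, 0.5
x = 1.0 / N
y = E_inf + a * x

ell, jitter = 0.05, 1e-8           # kernel length scale and numerical jitter
y_mean = y.mean()                  # centre the data; add the mean back later
K = rbf(x, x, ell) + jitter * np.eye(len(x))
alpha = np.linalg.solve(K, y - y_mean)

x_star = np.array([0.0])           # thermodynamic limit corresponds to 1/N -> 0
E_tdl = y_mean + (rbf(x_star, x, ell) @ alpha)[0]
```

A practical advantage of the Gaussian-process formulation over a fixed polynomial fit is that it also supplies a predictive variance, giving an error bar on the extrapolated thermodynamic-limit energy.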
Another noteworthy contribution by Atalar (https://doi.org/10.1039/D4FD00062E) focused on using eigenvector interpolation to perform nonadiabatic dynamics computations. This approach uses a fixed set of non-orthogonal many-body states (with geometry-dependent orbitals) as a training set to diagonalise the ab initio Hamiltonian at arbitrary geometries. The authors then extended this method to compute multiple electronic states along with their gradients and nonadiabatic couplings, providing all the necessary information for fewest-switches surface hopping14 simulations in nonadiabatic molecular dynamics.
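The machinery underlying this interpolation is a Rayleigh–Ritz (generalised eigenvalue) problem in the span of the non-orthogonal training states. A minimal sketch, with small random matrices standing in for the ab initio Hamiltonian and a single parameter standing in for the geometry:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n)); H0 = 0.5 * (A + A.T)
B = rng.standard_normal((n, n)); H1 = 0.5 * (B + B.T)

def H(t):
    """Toy one-parameter Hamiltonian standing in for H at geometry t."""
    return H0 + t * H1

# Training set: lowest two eigenvectors at three "geometries".
V = np.hstack([np.linalg.eigh(H(t))[1][:, :2] for t in (0.0, 0.5, 1.0)])

t_new = 0.25
Hs = V.T @ H(t_new) @ V      # Hamiltonian projected onto the training states
S = V.T @ V                  # overlap matrix of the non-orthogonal basis

# Generalised eigenproblem Hs c = E S c via canonical orthogonalisation.
sw, sv = np.linalg.eigh(S)
keep = sw > 1e-10            # discard (near-)linearly dependent combinations
X = sv[:, keep] / np.sqrt(sw[keep])
E_sub = np.linalg.eigvalsh(X.T @ Hs @ X)[0]
E_exact = np.linalg.eigvalsh(H(t_new))[0]
```

By the variational principle, the subspace ground-state energy bounds the exact one from above; retaining several roots of the small eigenproblem is what gives access to multiple electronic states at once.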
These contributions are particularly exciting because they demonstrate creative ways to integrate machine learning with electronic structure methods in a manner consistent with the underlying physics. While data-driven approaches that bypass the costly representation of the Hamiltonian on a finite basis are likely to be faster, machine-learning methods grounded in physics could retain essential properties of conventional computational methods, such as the ability to systematically improve results. This balance between efficiency and physical motivation may prove crucial in advancing the field of quantum chemistry.
Grüneis' contribution focused on the application of periodic coupled-cluster theory to the adsorption of CO on a Pt(111) surface (https://doi.org/10.1039/D4FD00085D). He discussed the critical issues of basis-set incompleteness and finite-size errors, emphasising the necessity of using triples corrections that include additional ring terms to avoid divergences in the (T) energy. This refinement is crucial for ensuring the accuracy and stability of the results in such complex systems. Berkelbach also contributed to this theme (https://doi.org/10.1039/D4FD00041B), presenting an application of CCSD(T) combined with a periodic extension of the local natural orbital approximation to the adsorption and vibrational spectroscopy of CO on the MgO surface, showcasing the potential of CCSD(T) to provide detailed insights into surface phenomena.
Another theme explored in the Discussion was the treatment of strongly correlated solids. Contreras-García's paper (https://doi.org/10.1039/D4FD00073K) tackled the challenge of modelling superconductivity using conventional DFT, focusing on how to reconstruct the superconducting one-body reduced density matrix from a DFT calculation on the normal state of superconductors. This method provides a novel way to bridge conventional DFT with the complex physics of superconductivity. Lepetit's work offered a comprehensive study of the magnetic structure of a high-temperature spin-driven multiferroic system (Cu2OCl2) (https://doi.org/10.1039/D4FD00042K). To achieve this, state-of-the-art quantum chemistry techniques were combined with classical Monte Carlo simulations of a parameterised Heisenberg Hamiltonian, providing detailed insights into the magnetic properties of this complex material. Together, these two contributions demonstrate how challenging problems in solid-state physics can be addressed using a combination of molecular and solid-state tools. Both papers not only pushed the boundaries of computational techniques but also connected quantitative results with interpretable physical models.
As noted in the opening lecture, many of these methods systematically converge to the exact solution of the Schrödinger equation and are well-suited to be combined with adaptive numerical truncation techniques that exploit sparsity in the parameterisation. When these methods are combined with the principle of locality, they offer a way to solve nearly all chemically relevant problems. A prime example is compressing the CC parameterisation using pair natural orbitals (PNOs)20 to achieve reduced scaling.21 Although the foundational principles of the PNO technique were established nearly 30 years earlier, the computational overhead associated with working in this representation prevented its widespread adoption until advances in computing power made it feasible. This situation is reminiscent of developments in machine learning, where recent innovations22–24 have built upon earlier ideas, with their success being driven in part by the dramatic increase in available data and computational power afforded by graphics processing units. Similarly, it is possible that for many quantum chemistry methods and their adaptive variants, our current computational resources still fall short of the thresholds necessary for achieving the low polynomial scaling expected from the principle of locality. As computing technology continues to advance, these methods may reveal their full potential.
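The compression step at the heart of the PNO technique—diagonalising an approximate pair density and discarding natural orbitals whose occupation numbers fall below a threshold—can be sketched on a toy density matrix. The occupation spectrum below is illustrative only, chosen to decay rapidly as pair densities typically do:

```python
import numpy as np

# Toy symmetric pair density matrix with a rapidly decaying occupation
# spectrum (10^-1 ... 10^-8); purely illustrative numbers.
n = 8
occ = 10.0 ** -np.arange(1, n + 1)
Q, _ = np.linalg.qr(np.random.default_rng(2).standard_normal((n, n)))
D = Q @ np.diag(occ) @ Q.T

tau = 5e-4                          # PNO occupation-number threshold
w, U = np.linalg.eigh(D)            # natural occupations and orbitals (ascending)
keep = w > tau
pnos = U[:, keep]                   # retained pair natural orbitals
retained = w[keep].sum() / w.sum()  # fraction of the pair occupation kept
```

Because the occupation spectrum decays quickly, only a handful of natural orbitals survive the truncation while almost all of the pair occupation is retained—this is the source of the reduced scaling, at the price of working in a pair-specific orbital representation.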
Another notable trend has been the cross-pollination of well-established quantum chemistry methods—DFT and coupled-cluster theory—with emerging many-body methods from the physics community. In recent years, this trend has further evolved, integrating new ideas and perspectives from machine learning and quantum computing into quantum chemistry methods. These latter influences have been seen by many as a diversion for a community laser-focused on solving the electronic structure problem using rigorous numerical methods optimised for classical computing hardware. However, these should be recognised as positive developments, injecting the field with fresh ideas and keeping it dynamic and innovative. This was evident in this Discussion, where papers on machine learning and quantum computing were abundant. These new influences have also challenged some of the traditional constraints of quantum chemistry that were once considered inviolable—such as size extensivity, orbital invariance, and uniqueness of solutions. The field has consequently seen a revival of older ideas like selected configuration interaction, demonstrating how quantum chemistry continues to evolve by reinterpreting and adapting past concepts to fit the needs of modern computational contexts. This adaptability is crucial for the field's continued growth and relevance.
Existing quantum chemistry methods still face severe limitations for systems where both strong (static) and weak (dynamical) correlations play a significant role. While some contributions to this Discussion focused on addressing problems involving both forms of correlation, much work is still needed to achieve the same level of accuracy and user-friendliness—often referred to as a ‘black-box’ character—seen in conventional methods. One major obstacle is the lack of well-understood guiding principles for effectively combining methods that specialise in either strong or weak electron correlations. To overcome this challenge, the community may need to identify new unifying principles in quantum chemistry that can help reframe existing paradigms as special cases within a broader, more general framework. Some efforts toward this grand unification of quantum chemistry are already visible, particularly in recent works that explore connections between coupled-cluster theory and other methods.25–28 Developments along these lines hold the potential to deepen our understanding and expand the applicability of quantum chemistry to a wider range of complex systems.
Lastly, it is worth noting that this Discussion primarily focused on electron correlation arising from the bare Coulomb interaction. However, many other factors may modulate the importance of electron correlation. These include electron–phonon coupling, spin–orbit and spin–phonon coupling, geometric frustration, strong coupling to external fields, and finite temperature effects. Multiparticle generalisations of quantum chemistry methods29–31 where electron correlation is influenced by the interplay of various degrees of freedom represent a vast and largely unexplored landscape for the electronic structure community.
This journal is © The Royal Society of Chemistry 2024