
Thursday, April 30, 2026

Density Functional Theory Surrogate Enables Fast and Broad Computational Evaluation of Homogeneous Transition Metal Catalytic Energy Landscapes

Kevin P. Quirion, Wang-Yeuk Kong, Britton Stanley, Jyothish Joy, and Daniel H. Ess (2026)
Highlighted by Jan Jensen


It has been about 10 months since Meta FAIR released the Universal Models for Atoms, or UMA, machine-learning interatomic potentials. Since then, the first independent benchmarking studies have begun to appear, and this paper by Quirion and co-workers asks a very practical question: can UMA be used as a fast surrogate for DFT in homogeneous organometallic catalysis?

The authors examine seven catalytic/organometallic case studies taken from the literature, including Ir pincer alkane dehydrogenation, Rh hydroformylation, Ru olefin metathesis, Pd Buchwald–Hartwig amination, Cu-catalyzed difluorocarbene insertion, Ni asymmetric radical capture/reductive elimination, and a dinuclear Ni–Ni naphthyridine-diimine cycloaddition.

For literature geometries, they recompute reaction energies using ωB97M-V/def2-TZVPD single points, which is close to the level of theory that UMA is trained to reproduce. They then compare these values to UMA-S and UMA-M single-point energies, and in many cases also to UMA-optimized structures and energies. 

The headline result is encouraging: in most cases, UMA tracks ωB97M-V very well, often within a few kcal/mol and with good agreement in relative barriers and reaction-profile shapes. This is particularly impressive because the systems include different metals, oxidation-state changes, large ligands, charged species, and transition states. For routine conformer screening, preliminary mechanism mapping, or fast evaluation of many candidate catalysts, this suggests UMA could be genuinely useful.

There are, however, two important problem cases.

The first is the Cu-catalyzed difluorocarbene insertion, where the key issue is an open-shell singlet intermediate. UMA could not locate the TS1e transition state during optimization or NEB, gave unphysical conformational changes when optimizing the singlet 3e, and predicted the triplet state of 3e to be much lower than the singlet. At first glance this looks like a UMA failure, but ωB97M-V itself has similar problems with the singlet–triplet energetics. So this is not simply a machine-learning-potential problem. UMA is trained to reproduce ωB97M-V-like energies and forces; it should not be expected to magically repair failures of the underlying DFT reference method. The more specific concern is that UMA also has practical difficulties optimizing the open-shell singlet surface and locating the associated transition state. It was not tested whether ωB97M-V had the same problem.

The second problem case is the dinuclear Ni–Ni naphthyridine-diimine diene cycloaddition. Here UMA struggles with the relative spin states and barriers. In particular, it does not reproduce the same doublet/quartet ordering as ωB97M-V, and it overstabilizes some parts of the profile. This is perhaps less surprising because OMol25 did not include multinuclear transition-metal complexes, and the authors note that the naphthyridine-diimine ligand is not represented in the training set. Interestingly, the optimized geometries are not disastrous: UMA-S gives heavy-atom RMSDs of roughly 0.22 Å for the doublet and 0.36 Å for the quartet relative to the reported M06-L structures. So the failure is more severe for relative energetics and spin-state ordering than for generating plausible structures.
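Heavy-atom RMSD comparisons like these are easy to reproduce once two structures are superimposed. A minimal stdlib sketch, assuming the geometries are already optimally aligned (a real workflow would first apply a Kabsch superposition, e.g. via ASE or a cheminformatics toolkit):

```python
import math

def heavy_atom_rmsd(coords_a, coords_b, symbols):
    """RMSD over non-hydrogen atoms of two pre-aligned structures.

    coords_*: lists of (x, y, z) tuples in Angstrom, same atom order.
    symbols:  element symbols; hydrogens are excluded from the sum.
    NOTE: assumes the structures are already optimally superimposed.
    """
    sq_sum, n = 0.0, 0
    for a, b, sym in zip(coords_a, coords_b, symbols):
        if sym == "H":
            continue  # heavy atoms only
        sq_sum += sum((ai - bi) ** 2 for ai, bi in zip(a, b))
        n += 1
    return math.sqrt(sq_sum / n)

# Toy example: one heavy atom displaced by 0.3 A along x.
ref = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (2.0, 1.0, 0.0)]
new = [(0.0, 0.0, 0.0), (1.8, 0.0, 0.0), (2.0, 1.0, 0.0)]
print(round(heavy_atom_rmsd(ref, new, ["Ni", "C", "H"]), 3))  # → 0.212
```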

Overall, the study is a strong endorsement of UMA as a practical tool for organometallic mechanism work, provided it is used with the same caution one would apply to DFT. UMA appears especially promising for rapid conformer screening, approximate reaction-profile generation, and preoptimization before higher-level single-point calculations.

One unresolved issue is training-set overlap. The authors write that the OMol25 training database is so large that it “cannot be easily queried,” and that UMA does not provide an intrinsic nearest-neighbor or structure-comparison analysis for new inputs. That is a real limitation: if a benchmark system, or something very close to it, is already in the training data, the benchmark is much less informative about out-of-distribution generalization.

At the same time, the paper also states that the authors queried the dataset for the naphthyridine-diimine ligand and provide code in the Supporting Information. So the situation is somewhat unclear. The database may be inconvenient to search, but it does not seem impossible to search. For future UMA benchmark studies, it would be very useful to include at least a basic training-set check: for example, filtering OMol25 by metal, composition, charge, spin state, ligand identity, and local coordination environment. This would help distinguish cases where UMA is genuinely extrapolating from cases where it is interpolating within a familiar chemical neighborhood.
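Such a check could be as simple as filtering dataset records on a few metadata fields. A hypothetical sketch — the record schema and ligand labels here are invented for illustration and do not reflect how OMol25 is actually stored or queried:

```python
def matches_query(record, metal=None, charge=None, spin=None, ligand=None):
    """Return True if a (hypothetical) training-set record matches the query.

    `record` is assumed to look like
    {"metals": {"Ni"}, "charge": 0, "spin": 2, "ligands": {"NDI"}};
    the real OMol25 storage format will differ.
    """
    if metal is not None and metal not in record["metals"]:
        return False
    if charge is not None and record["charge"] != charge:
        return False
    if spin is not None and record["spin"] != spin:
        return False
    if ligand is not None and ligand not in record["ligands"]:
        return False
    return True

# Invented mini-dataset for illustration.
dataset = [
    {"metals": {"Ni"}, "charge": 0, "spin": 1, "ligands": {"bipy"}},
    {"metals": {"Ni"}, "charge": 0, "spin": 2, "ligands": {"NDI"}},
    {"metals": {"Pd"}, "charge": 0, "spin": 1, "ligands": {"PPh3"}},
]
hits = [r for r in dataset if matches_query(r, metal="Ni", ligand="NDI")]
print(len(hits))  # → 1
```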

Wednesday, March 25, 2026

Stochastic tensor contraction for quantum chemistry

Jiace Sun and Garnet Kin-Lic Chan (2026)
Highlighted by Jan Jensen


What this paper lacks in terms of a punchy title, it makes up for in content. I guess I would have gone with something like "Monte Carlo Meets Coupled Cluster: Slashing the Cost of CCSD(T)" or "Stochastic Tensor Contraction Pushes CCSD(T) Toward Mean-Field Cost".

Anyway, tensor contraction is the algebraic core of much of quantum chemistry: large multidimensional arrays representing amplitudes and integrals are multiplied and summed over shared indices to produce energies and intermediates. It matters because these contractions set the scaling wall for methods like CCSD(T), where the formal cost rises far faster than Hartree–Fock. 

This study uses importance sampling to evaluate the tensor contractions. Importance sampling means drawing the most important terms in a sum more often than the unimportant ones, while reweighting so the final estimator stays unbiased. Here, Sun and Chan use it to evaluate high-order tensor contractions stochastically.
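The core idea is easy to demonstrate on an ordinary sum. A minimal sketch — not the paper's algorithm, which samples the index tuples of high-order tensor contractions rather than a 1-D list, but the same unbiasedness trick:

```python
import bisect
import random

def stochastic_sum(terms, n_samples, seed=0):
    """Unbiased importance-sampled estimate of sum(terms).

    Each sample draws an index with probability proportional to |term|
    and is reweighted by term/prob, so the average stays unbiased while
    large terms are visited most often (low variance for skewed sums).
    """
    rng = random.Random(seed)
    total = sum(abs(t) for t in terms)
    probs = [abs(t) / total for t in terms]
    cdf, acc = [], 0.0
    for p in probs:
        acc += p
        cdf.append(acc)  # cumulative distribution for inverse-transform sampling
    est = 0.0
    for _ in range(n_samples):
        i = min(bisect.bisect_left(cdf, rng.random()), len(terms) - 1)
        est += terms[i] / probs[i]
    return est / n_samples

terms = [1000.0, -2.0, 0.5, 0.01, 3.0]  # one term dominates the sum
print(sum(terms), stochastic_sum(terms, 5000))
```

Because the dominant term is sampled almost every time, few samples are wasted on the negligible ones, which is exactly why skewed sums like amplitude contractions are attractive targets.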

The headline result is that stochastic tensor contraction (STC) drives the scaling of CCSD(T) down dramatically: from the usual O(N^6) and O(N^7) down to O(N^4). In practice, water-cluster tests show very large FLOP reductions and wall-time crossovers at surprisingly small sizes. 

Figure 7 in the paper is the real selling point, because it compares against the incumbent approximate workhorse, DLPNO-CCSD(T), on 20 realistic molecules. STC is faster than DLPNO for every system in the set, with speedups ranging from 2.5× to 32×, while also delivering smaller errors than all DLPNO/Normal results and 15 of 20 DLPNO/Tight results. Just as importantly, the STC errors stay tightly clustered around the chosen target of 0.2 kcal/mol, whereas DLPNO errors vary much more from system to system. That makes STC look not just fast, but controllable. 

Table 3 sharpens that message. Averaged over the benchmark set, STC has a mean absolute error of 0.2 kcal/mol at a geometric mean runtime of 10.7 min, compared with 3.00 kcal/mol / 58 min for DLPNO/Normal, 0.70 kcal/mol / 159 min for DLPNO/Tight, and 773 min for exact CCSD(T). So the paper’s central claim is not merely better asymptotic scaling, but a roughly order-of-magnitude win in both time and error relative to state-of-the-art local correlation in this benchmark. 
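The two summary statistics used in Table 3 are simple to compute; here is a stdlib sketch with made-up numbers, not the paper's raw data:

```python
import math

def mae(errors):
    """Mean absolute error."""
    return sum(abs(e) for e in errors) / len(errors)

def geometric_mean(values):
    """Geometric mean via log-average; sensible for runtimes spanning decades."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Illustrative numbers only.
errors_kcal = [0.15, -0.25, 0.20, -0.10, 0.30]
runtimes_min = [5.0, 20.0, 12.0, 8.0, 15.0]
print(round(mae(errors_kcal), 2), round(geometric_mean(runtimes_min), 1))  # → 0.2 10.8
```

The geometric mean is the natural choice when a few slow systems would otherwise dominate an arithmetic average.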

One caveat: while the speed-up is undeniably impressive, another likely limiting factor is memory. The paper notes the use of density fitting “to reduce memory requirements,” but does not really quantify memory use or memory scaling in the same systematic way as FLOPs and wall time. Given that modern CC implementations are often limited as much by storage and movement of intermediates as by raw arithmetic, that omission stands out. 

Overall, this is prototype code, but very exciting prototype code. It will be very interesting to see whether this stochastic route can mature into something that genuinely displaces DLPNO-CCSD(T) as the default reduced-cost gold-standard method. Code: GitHub repository



This work is licensed under a Creative Commons Attribution 4.0 International License.



Saturday, February 28, 2026

Classical solution of the FeMo-cofactor model to chemical accuracy and its implications

Huanchen Zhai, Chenghan Li, Xing Zhang, Zhendong Li, Seunghoon Lee, and Garnet Kin-Lic Chan (2026)
Highlighted by Jan Jensen



The FeMo cofactor in nitrogenase enzymes is often mentioned as the killer application of quantum computing (QC) in chemistry. That is due to its complex electronic structure, which has made it difficult to model accurately. However, Chan and co-workers now claim to have computed the electronic energy to chemical accuracy, by their own estimate, by conventional means.

They have done so by a series of calculations as indicated in the figure above. The CPU requirements are not given in detail, but the authors point out that no supercomputer was needed. 

Interestingly, the authors found that the ground state wavefunction is not inherently strongly multireference. Rather, the main challenge is to identify the correct (mostly) single-reference state.

Where does that leave chemical applications of QC? For one thing, it moves the goalposts further back. The active space used here is the one typically used to estimate QC requirements, but it may have to be expanded to include MOs from the surrounding protein to accurately capture the chemistry, which would require an even larger quantum computer. But that will be even further into the future, with plenty of time for conventional approaches to get there first.

In my opinion, the case for QC-based quantum chemistry was never very strong, and this study is just another blow.

Wednesday, January 28, 2026

Predicting Enantioselectivity via Kinetic Simulations on Gigantic Reaction Path Networks

Yu Harabuchi, Ruben Staub, Min Gao, Nobuya Tsuji, Benjamin List, Alexandre Varnek, and Satoshi Maeda (2026)
Highlighted by Jan Jensen



Automated prediction of chemical reaction networks has thus far been limited to relatively small systems, typically with fewer than 50 atoms (including hydrogens), due to computational expense. This study goes significantly beyond that by studying a system with 228 atoms.

This is made possible by three things: 

1. While the system is big, the reaction is relatively simple, so the reaction network is relatively small. 

The reaction is an acid-catalysed cyclisation involving relatively small, chemically simple molecules. It is the (chiral) acid catalyst that contributes most of the atoms. The reaction itself has three steps: protonation of the alkene group, intramolecular C-O bond formation on the activated alkene, and deprotonation of the O to regenerate the catalyst. Most of the atoms are chemically inert, and there are 12 chemically active atoms (defined by the user). In all, the study identified 74 possible intermediates/products, and only about half of those are chemically distinct if you ignore chirality.

2. Cheap surrogate energy function

They use a Δ-ML approach that corrects the xTB energy and gradient to obtain better accuracy. The ML model is trained on-the-fly against DFT calculations. 

3. Massive computational resources 

In spite of 1 and 2, this study required massive computational resources. The authors don't address this point specifically, other than to mention that it requires millions of gradient evaluations, but Maeda stressed it during his talk at WATOC last year.

So this is not exactly a routine application.
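The Δ-ML idea in point 2 can be caricatured as learning the low-level-to-high-level energy difference as a function of some descriptor. A deliberately tiny linear sketch, standing in for the much richer on-the-fly model used in the study:

```python
def fit_delta_model(descriptors, e_low, e_high):
    """Fit a one-parameter linear Delta-ML correction by least squares.

    Learns delta = a*d + b so that e_low + delta approximates e_high.
    A toy stand-in for the paper's on-the-fly ML correction, which uses
    atomic descriptors and a far more flexible model.
    """
    deltas = [h - l for h, l in zip(e_high, e_low)]
    n = len(descriptors)
    mean_d = sum(descriptors) / n
    mean_y = sum(deltas) / n
    cov = sum((d - mean_d) * (y - mean_y) for d, y in zip(descriptors, deltas))
    var = sum((d - mean_d) ** 2 for d in descriptors)
    a = cov / var
    b = mean_y - a * mean_d
    return a, b

def corrected_energy(e_low, descriptor, model):
    """Low-level energy plus the learned correction."""
    a, b = model
    return e_low + a * descriptor + b

# Toy data: the "DFT" energy is the "xTB" energy plus a descriptor-linear error.
desc = [1.0, 2.0, 3.0, 4.0]
e_xtb = [-10.0, -12.0, -15.0, -11.0]
e_dft = [-10.5, -13.0, -16.5, -13.0]

model = fit_delta_model(desc, e_xtb, e_dft)
print(round(corrected_energy(-14.0, 2.5, model), 2))  # → -15.25
```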