Thursday, April 30, 2026

Density Functional Theory Surrogate Enables Fast and Broad Computational Evaluation of Homogeneous Transition Metal Catalytic Energy Landscapes

Kevin P. Quirion, Wang-Yeuk Kong, Britton Stanley, Jyothish Joy, and Daniel H. Ess (2026)
Highlighted by Jan Jensen


It has been about 10 months since Meta FAIR released the Universal Models for Atoms, or UMA, machine-learning interatomic potentials. Since then, the first independent benchmarking studies have begun to appear, and this paper by Quirion and co-workers asks a very practical question: can UMA be used as a fast surrogate for DFT in homogeneous organometallic catalysis?

The authors examine seven catalytic/organometallic case studies taken from the literature, including Ir pincer alkane dehydrogenation, Rh hydroformylation, Ru olefin metathesis, Pd Buchwald–Hartwig amination, Cu-catalyzed difluorocarbene insertion, Ni asymmetric radical capture/reductive elimination, and a dinuclear Ni–Ni naphthyridine-diimine cycloaddition.

For literature geometries, they recompute reaction energies using ωB97M-V/def2-TZVPD single points, which is close to the level of theory that UMA is trained to reproduce. They then compare these values to UMA-S and UMA-M single-point energies, and in many cases also to UMA-optimized structures and energies. 

The headline result is encouraging: in most cases, UMA tracks ωB97M-V very well, often within a few kcal/mol and with good agreement in relative barriers and reaction-profile shapes. This is particularly impressive because the systems include different metals, oxidation-state changes, large ligands, charged species, and transition states. For routine conformer screening, preliminary mechanism mapping, or fast evaluation of many candidate catalysts, this suggests UMA could be genuinely useful.

There are, however, two important problem cases.

The first is the Cu-catalyzed difluorocarbene insertion, where the key issue is an open-shell singlet intermediate. UMA could not locate the TS1e transition state during optimization or NEB, gave unphysical conformational changes when optimizing the singlet 3e, and predicted the triplet state of 3e to be much lower than the singlet. At first glance this looks like a UMA failure, but ωB97M-V itself has similar problems with the singlet–triplet energetics. So this is not simply a machine-learning-potential problem. UMA is trained to reproduce ωB97M-V-like energies and forces; it should not be expected to magically repair failures of the underlying DFT reference method. The more specific concern is that UMA also has practical difficulties optimizing the open-shell singlet surface and locating the associated transition state. It was not tested whether ωB97M-V had the same problem.

The second problem case is the dinuclear Ni–Ni naphthyridine-diimine diene cycloaddition. Here UMA struggles with the relative spin states and barriers. In particular, it does not reproduce the same doublet/quartet ordering as ωB97M-V, and it overstabilizes some parts of the profile. This is perhaps less surprising because OMol25 did not include multinuclear transition-metal complexes, and the authors note that the naphthyridine-diimine ligand is not represented in the training set. Interestingly, the optimized geometries are not disastrous: UMA-S gives heavy-atom RMSDs of roughly 0.22 Å for the doublet and 0.36 Å for the quartet relative to the reported M06-L structures. So the failure is more severe for relative energetics and spin-state ordering than for generating plausible structures.

Overall, the study is a strong endorsement of UMA as a practical tool for organometallic mechanism work, provided it is used with the same caution one would apply to DFT. UMA appears especially promising for rapid conformer screening, approximate reaction-profile generation, and preoptimization before higher-level single-point calculations.

One unresolved issue is training-set overlap. The authors write that the OMol25 training database is so large that it “cannot be easily queried,” and that UMA does not provide an intrinsic nearest-neighbor or structure-comparison analysis for new inputs. That is a real limitation: if a benchmark system, or something very close to it, is already in the training data, the benchmark is much less informative about out-of-distribution generalization.

At the same time, the paper also states that the authors queried the dataset for the naphthyridine-diimine ligand and provide code in the Supporting Information. So the situation is somewhat unclear. The database may be inconvenient to search, but it does not seem impossible to search. For future UMA benchmark studies, it would be very useful to include at least a basic training-set check: for example, filtering OMol25 by metal, composition, charge, spin state, ligand identity, and local coordination environment. This would help distinguish cases where UMA is genuinely extrapolating from cases where it is interpolating within a familiar chemical neighborhood.
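A minimal version of such a check could look like the sketch below (the record fields and the `entries` list are hypothetical stand-ins for however OMol25 entries are actually stored; a real check would also compare ligands and local coordination environments):

```python
# Toy training-set overlap check: filter dataset entries by metal, total
# charge, and spin multiplicity before trusting a benchmark as
# out-of-distribution. The record format is a hypothetical stand-in.

def find_neighbors(entries, metal, charge, spin):
    """Return training entries matching the benchmark system's metal,
    total charge, and spin multiplicity."""
    return [e for e in entries
            if e["metal"] == metal
            and e["charge"] == charge
            and e["spin"] == spin]

# Hypothetical training entries
entries = [
    {"metal": "Ni", "charge": 0, "spin": 2},
    {"metal": "Ni", "charge": 0, "spin": 4},
    {"metal": "Pd", "charge": 1, "spin": 1},
]

hits = find_neighbors(entries, metal="Ni", charge=0, spin=2)
print(len(hits))  # → 1
```

Even a crude filter like this would let a benchmark paper say whether each test system sits in a well-populated or empty region of the training data.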

Wednesday, March 25, 2026

Stochastic tensor contraction for quantum chemistry

Jiace Suna and Garnet Kin-Lic Chan (2026)
Highlighted by Jan Jensen


What this paper lacks in terms of a punchy title, it makes up for in content. I guess I would have gone with something like "Monte Carlo Meets Coupled Cluster: Slashing the Cost of CCSD(T)" or "Stochastic Tensor Contraction Pushes CCSD(T) Toward Mean-Field Cost". 

Anyway, tensor contraction is the algebraic core of much of quantum chemistry: large multidimensional arrays representing amplitudes and integrals are multiplied and summed over shared indices to produce energies and intermediates. It matters because these contractions set the scaling wall for methods like CCSD(T), where the formal cost rises far faster than Hartree–Fock. 

This study uses importance sampling to evaluate these tensor contractions stochastically. Importance sampling means drawing the most important terms in a sum more often than the unimportant ones, while reweighting so that the final estimator stays unbiased. Here, Sun and Chan apply this idea to the high-order tensor contractions of coupled cluster theory.
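As a toy illustration of the reweighting trick (this is my own minimal sketch of importance sampling for a plain sum, not the paper's actual algorithm, which works on tensor contractions):

```python
import random

def stochastic_sum(x, p, n_samples, seed=0):
    """Unbiased Monte Carlo estimate of sum(x): draw index i with
    probability p[i], then average x[i] / p[i] over the samples."""
    rng = random.Random(seed)
    idx = rng.choices(range(len(x)), weights=p, k=n_samples)
    return sum(x[i] / p[i] for i in idx) / n_samples

# A sum dominated by a few large terms, with mixed signs
x = [10.0, -1.0, 0.1, -0.01]
norm = sum(abs(v) for v in x)
p = [abs(v) / norm for v in x]   # sample the big terms more often

est = stochastic_sum(x, p, n_samples=20000)
exact = sum(x)
print(est, exact)                # estimate vs exact sum
```

Dividing each drawn term by its sampling probability is what keeps the estimator unbiased: frequently-drawn large terms are weighted down by exactly the factor by which they are oversampled.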

The headline result is that stochastic tensor contraction (STC) drives the scaling of CCSD(T) down dramatically: from the usual O(N^6) and O(N^7) down to O(N^4). In practice, water-cluster tests show very large FLOP reductions and wall-time crossovers at surprisingly small sizes. 

Figure 7 in the paper is the real selling point, because it compares against the incumbent approximate workhorse, DLPNO-CCSD(T), on 20 realistic molecules. STC is faster than DLPNO for every system in the set, with speedups ranging from 2.5× to 32×, while also delivering smaller errors than all DLPNO/Normal results and 15 of 20 DLPNO/Tight results. Just as importantly, the STC errors stay tightly clustered around the chosen target of 0.2 kcal/mol, whereas DLPNO errors vary much more from system to system. That makes STC look not just fast, but controllable. 

Table 3 sharpens that message. Averaged over the benchmark set, STC has a mean absolute error of 0.2 kcal/mol at a geometric mean runtime of 10.7 min, compared with 3.00 kcal/mol / 58 min for DLPNO/Normal, 0.70 kcal/mol / 159 min for DLPNO/Tight, and 773 min for exact CCSD(T). So the paper’s central claim is not merely better asymptotic scaling, but a roughly order-of-magnitude win in both time and error relative to state-of-the-art local correlation in this benchmark. 

One caveat: while the speed-up is undeniably impressive, another likely limiting factor is memory. The paper notes the use of density fitting “to reduce memory requirements,” but does not really quantify memory use or memory scaling in the same systematic way as FLOPs and wall time. Given that modern CC implementations are often limited as much by storage and movement of intermediates as by raw arithmetic, that omission stands out. 

Overall, this is prototype code, but very exciting prototype code. It will be very interesting to see whether this stochastic route can mature into something that genuinely displaces DLPNO-CCSD(T) as the default reduced-cost gold-standard method. Code: GitHub repository



This work is licensed under a Creative Commons Attribution 4.0 International License.



Saturday, February 28, 2026

Classical solution of the FeMo-cofactor model to chemical accuracy and its implications

Huanchen Zhai, Chenghan Li, Xing Zhang, Zhendong Li, Seunghoon Lee, and Garnet Kin-Lic Chan (2026)
Highlighted by Jan Jensen



The FeMo cofactor in nitrogenase enzymes is often mentioned as the killer application of quantum computing (QC) in chemistry. That is due to its complex electronic structure, which has made it difficult to model accurately. However, Chan and co-workers now claim to have computed the electronic energy to what they estimate is chemical accuracy by conventional means.

They have done so by a series of calculations as indicated in the figure above. The CPU requirements are not given in detail, but the authors point out that no supercomputer was needed. 

Interestingly, the authors found that the ground state wavefunction is not inherently strongly multireference. Rather, the main challenge is to identify the correct (mostly) single-reference state.

Where does that leave chemical applications of QC? For one thing, it moves the goalposts further back. The active space used here is the one typically used to estimate QC requirements, but it may have to be expanded to include MOs from the surrounding protein to accurately capture the chemistry, which would require an even larger quantum computer. But that will be even further into the future, with plenty of time for conventional approaches to get there first. 

In my opinion, the case for QC-based quantum chemistry was never very strong, and this study is just another blow.

Wednesday, January 28, 2026

Predicting Enantioselectivity via Kinetic Simulations on Gigantic Reaction Path Networks

Yu Harabuchi, Ruben Staub, Min Gao, Nobuya Tsuji, Benjamin List, Alexandre Varnek, and Satoshi Maeda (2026)
Highlighted by Jan Jensen



The automated prediction of chemical reaction networks has thus far been limited to relatively small systems, typically with fewer than 50 atoms (including Hs), due to computational expense. This study goes significantly beyond this by studying a system with 228 atoms.

This is made possible by three things: 

1. While the system is big, the reaction is relatively simple, so the reaction network is relatively small. 

The reaction is an acid-catalysed cyclisation involving relatively small and chemically simple molecules. It is the (chiral) acid catalyst that contributes most of the atoms. The reaction itself has three steps: protonation of the alkene group, intramolecular C-O bond formation on the activated alkene, and deprotonation of the O to regenerate the catalyst. Most of the atoms are chemically inert, and there are 12 chemically active atoms (defined by the user). In all, the study identified 74 possible intermediates/products, and only about half of those are chemically distinct if you ignore chirality. 

2. Cheap surrogate energy function

They use a Δ-ML approach that corrects the xTB energy and gradient to obtain better accuracy. The ML model is trained on-the-fly against DFT calculations. 
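The Δ-ML idea itself is simple: learn the difference between the cheap and expensive energies, then add it back to the cheap result. A minimal pure-Python caricature (a one-dimensional "feature" and a least-squares fit stand in for the real on-the-fly ML model; all numbers are made up):

```python
# Toy Δ-ML: fit a correction E_DFT - E_xTB as a function of a structural
# feature, then predict E ≈ E_xTB + correction. The linear fit is a
# stand-in for the real ML model trained on-the-fly against DFT.

def fit_linear(xs, ys):
    """Ordinary least squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical training data: feature value, cheap (xTB-like) energy,
# and reference (DFT-like) energy for a few structures.
feat  = [0.0, 1.0, 2.0, 3.0]
e_xtb = [0.0, 2.0, 4.0, 6.0]
e_dft = [1.0, 3.5, 6.0, 8.5]   # differs from e_xtb by 1.0 + 0.5*feat

a, b = fit_linear(feat, [d - x for d, x in zip(e_dft, e_xtb)])

def delta_ml_energy(f, e_cheap):
    """Corrected energy: cheap energy plus the learned correction."""
    return e_cheap + a * f + b

print(delta_ml_energy(4.0, 8.0))  # → 11.0
```

The point of the construction is that the correction surface is much smoother than the total energy, so a modest amount of DFT training data goes a long way.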

3. Massive computational resources 

In spite of 1 and 2, this study required massive computational resources. The authors don't address this point specifically, other than to mention that it requires millions of gradient evaluations, but Maeda stressed this point during his talk at WATOC last year. 

So this is not exactly a routine application. 

Wednesday, December 31, 2025

One step retrosynthesis of drugs from commercially available chemical building blocks and conceivable coupling reactions

Babak Mahjour, Felix Katzenburg, Emil Lammi, and Tim Cernak (2025)
Highlighted by Jan Jensen

What are important reactions that we currently can't perform? I asked myself this a few years ago and found that there were very few papers in the literature that addressed this. It turns out that I possessed the skills to figure it out for myself if I had only had the idea. The idea being that "the most valuable couplings would utilize the most abundant building blocks to form the most common types of bonds found in [a] target dataset."

As an example, the authors took a list of 9028 known drugs and asked how many could potentially be made in a single step from molecules in the MilliporeSigma catalog by hypothetical coupling reactions. The answer turns out to be 2573 (28%), which is a surprisingly large number. The most common reaction was the coupling of alkyl alcohols and alkyl amines, followed by alkyl acid–alkyl amine and alkyl acid–alkyl alcohol couplings. These are all reactions for which there's no robust and generally applicable synthetic protocol, although, AFAIK, Zhang and Cernak took a stab at the alkyl acid–alkyl amine coupling. 
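The counting exercise itself reduces to a simple set lookup. A toy version (fragments are plain strings and the disconnections are listed by hand; the real paper works with actual structures and bond types):

```python
# Toy retrosynthesis count: a drug is "one step away" if some disconnection
# splits it into two fragments that are both in the building-block catalog.

catalog = {"PhOH", "MeNH2", "PhCOOH", "EtOH"}   # hypothetical catalog

# Hypothetical drugs, each with its candidate single-bond disconnections
drugs = {
    "drugA": [("PhOH", "MeNH2")],      # alcohol + amine coupling
    "drugB": [("PhCOOH", "MeNH2")],    # acid + amine coupling
    "drugC": [("PhOH", "iPrNH2")],     # amine not in catalog
}

makeable = [d for d, cuts in drugs.items()
            if any(a in catalog and b in catalog for a, b in cuts)]
print(sorted(makeable))  # → ['drugA', 'drugB']
```

Tallying which bond type each successful disconnection uses is what yields the paper's ranking of the most valuable hypothetical couplings.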

I really wish there were more papers like this. Identifying important questions to work on is just as important as solving them, and the latter is almost always a communal effort.





Thursday, November 27, 2025

From Random Determinants to the Ground State

Hao Zhang and Matthew Otten (2025)
Highlighted by Jan Jensen


The paper introduces a method they call TrimCI that very efficiently finds a relatively small set of determinants that accurately describes strongly correlated systems. (Well, it actually works for any system, but the main advantage is for strongly correlated systems). 

Unlike most new correlation methods, this one is actually simple enough to describe in a few sentences. TrimCI starts by constructing a set of orthogonal (non-optimised!) MOs (e.g. by diagonalising the AO overlap matrix). From these MOs you construct a small number of random determinants (e.g. 100) and construct the wavefunction (i.e. construct the Hamiltonian matrix and diagonalise it, as per usual). Then you compute all the Hamiltonian matrix elements $H_{ij}$ between this wavefunction and the remaining determinants, and add determinants with sufficiently large $|H_{ij}|$ to the wavefunction. Finally, there is the trimming step, "which removes negligible basis states by first diagonalising randomised blocks of the core and then performing a global diagonalising step on the surviving set." And repeat.
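A toy version of this grow-and-trim loop can be run on a model Hamiltonian matrix instead of actual determinants (the thresholds, the power-iteration eigensolver, and the model matrix below are all my own choices for illustration, not the paper's):

```python
import random

def lowest_eig(H, iters=2000):
    """Lowest eigenvalue/eigenvector of a small symmetric matrix via
    power iteration on (sigma*I - H), sigma from a Gershgorin bound."""
    n = len(H)
    sigma = max(H[i][i] + sum(abs(H[i][j]) for j in range(n) if j != i)
                for i in range(n)) + 1.0
    v = [1.0] * n
    for _ in range(iters):
        w = [sigma * v[i] - sum(H[i][j] * v[j] for j in range(n))
             for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    e = sum(v[i] * H[i][j] * v[j] for i in range(n) for j in range(n))
    return e, v

def trimci(H, n_start=3, add_thresh=0.02, trim_thresh=0.05, n_iter=4, seed=1):
    """Toy TrimCI-style loop: grow a random selection of basis states by
    Hamiltonian coupling, then trim states with negligible coefficients."""
    n = len(H)
    sel = random.Random(seed).sample(range(n), n_start)  # random start
    for _ in range(n_iter):
        sub = [[H[i][j] for j in sel] for i in sel]
        _, c = lowest_eig(sub)
        for j in range(n):  # add states that couple strongly to psi
            hij = sum(ci * H[i][j] for i, ci in zip(sel, c))
            if j not in sel and abs(hij) > add_thresh:
                sel.append(j)
        sub = [[H[i][j] for j in sel] for i in sel]
        _, c = lowest_eig(sub)
        sel = [i for i, ci in zip(sel, c) if abs(ci) > trim_thresh]  # trim
    sub = [[H[i][j] for j in sel] for i in sel]
    return lowest_eig(sub)[0], sel

# Model "Hamiltonian": increasing diagonal, constant off-diagonal coupling
n = 8
H = [[(i if i == j else 0.3) for j in range(n)] for i in range(n)]
e_full, _ = lowest_eig(H)
e_trim, sel = trimci(H)
print(len(sel), e_trim - e_full)  # compact selection, small energy error
```

Because the trimmed selection spans a subspace of the full basis, its lowest eigenvalue is variationally bounded from below by the exact one; the toy run recovers it to within a small error using only the dominant states.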


The authors find that this approach converges much more quickly than other similar methods, using many fewer determinants. Another big advantage is that the method does not require a single-determinant ground state as a starting point and is thus not sensitive to how much such a determinant deviates from the actual wavefunction.

So, what's the catch here? In order to be practically useful, we need to compute energy differences with mHa accuracy, and I did not see any TrimCI results for chemical systems where the energy had converged to that kind of accuracy. It's possible that error cancellation can help here, but that needs to be investigated. The authors do look at extrapolation, which looks promising, but needs to be systematically investigated. Yet another option is to use the (compact) TrimCI wavefunction as an ansatz for dynamic-correlation methods.

It's also not clear what AO basis set is used for some of these calculations (including the one shown above). I suspect small basis sets are used, and even FCI energies with very small basis sets are of limited practical use. Are the TrimCI calculations on large systems still practical with more realistic basis sets?

Nevertheless, this seems like a very promising step in the right direction.





Friday, October 31, 2025

Electron flow matching for generative reaction mechanism prediction

Joonyoung F. Joung, Mun Hong Fong, Nicholas Casetti, Jordan P. Liles, Ne S. Dassanayake & Connor W. Coley (2025)
Highlighted by Jan Jensen



While the title says reaction mechanism prediction, it's really reaction mechanism-based reaction outcome prediction. The approach uses flow matching (a generalization of diffusion-based approaches) to predict changes to the bond-electron (BE) matrix (basically a connectivity matrix with the lone-pair electron count on the diagonal), thus ensuring mass and charge conservation because changes in the BE matrix are constrained to sum to 0. The method is trained on 1.4 million elementary reaction steps derived primarily from the USPTO dataset.  
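The conservation argument is easy to see in a toy example (my own minimal sketch of the standard BE-matrix bookkeeping, not the paper's code): off-diagonal entries are bond orders, diagonal entries are free (lone-pair) electrons, so the total valence electron count is the sum of all entries, and any reaction matrix whose entries sum to zero conserves electrons.

```python
# Toy bond-electron (BE) matrix bookkeeping: applying a reaction matrix R
# whose entries sum to zero leaves the total electron count unchanged.

def electron_count(B):
    return sum(sum(row) for row in B)

def apply_reaction(B, R):
    assert sum(sum(row) for row in R) == 0, "R must conserve electrons"
    return [[b + r for b, r in zip(brow, rrow)]
            for brow, rrow in zip(B, R)]

# Heterolytic A-B bond cleavage: the bonding pair becomes a lone pair on A.
B = [[4, 1],
     [1, 0]]            # A has 4 lone electrons and one A-B single bond
R = [[+2, -1],
     [-1,  0]]          # break the bond (-1 on each off-diagonal), +2 on A

B_new = apply_reaction(B, R)
print(electron_count(B) == electron_count(B_new))  # → True
```

Any change the model proposes is expressed as such an R, which is why mass and charge conservation come for free rather than having to be learned.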

Recursive predictions yield a complete reaction mechanism step by step, starting from the reactants. (I assume the products are defined as the state where no more changes are predicted.) The method is probabilistic, so several different reaction outcomes are possible if the process is repeated, and these are ranked according to frequency. Another option is to use DFT calculations to rank the different mechanisms.
 
Like any ML method, its applicability is tied to the training set. For example, of 22,000 reactions from patents reported in 2024 that were not assigned a specific reaction class in the Pistachio dataset, the approach successfully recovered products in only 351 cases. However, the authors show that a new reaction class can be added with as few as 32 examples.