Wednesday, March 25, 2026

Stochastic tensor contraction for quantum chemistry

Jiace Sun and Garnet Kin-Lic Chan (2026)
Highlighted by Jan Jensen


What this paper lacks in terms of a punchy title, it makes up for in content. I guess I would have gone with something like "Monte Carlo Meets Coupled Cluster: Slashing the Cost of CCSD(T)" or "Stochastic Tensor Contraction Pushes CCSD(T) Toward Mean-Field Cost".

Anyway, tensor contraction is the algebraic core of much of quantum chemistry: large multidimensional arrays representing amplitudes and integrals are multiplied and summed over shared indices to produce energies and intermediates. It matters because these contractions set the scaling wall for methods like CCSD(T), where the formal cost rises far faster with system size than that of Hartree–Fock.
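
To make the contraction idea concrete, here is a minimal NumPy sketch (mine, not the authors' code) of a CCSD-like term: doubles amplitudes contracted with a block of two-electron integrals over two shared virtual indices, the kind of step that drives the steep formal scaling.

import numpy as np

# Toy dimensions: o occupied and v virtual orbitals (illustrative only)
o, v = 10, 40

t2 = np.random.rand(o, o, v, v)  # doubles amplitudes t2[i,j,c,d]
W  = np.random.rand(v, v, v, v)  # "ladder"-type integrals W[a,b,c,d]

# Contract over the shared virtual indices c,d:
# out[i,j,a,b] = sum_{c,d} t2[i,j,c,d] * W[a,b,c,d]
# Cost ~ o^2 * v^4, i.e. an O(N^6)-type step in CCSD
out = np.einsum("ijcd,abcd->ijab", t2, W)
print(out.shape)  # (10, 10, 40, 40)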

This study uses importance sampling to evaluate these tensor contractions. Importance sampling means drawing the most important terms in a sum more often than the unimportant ones, while reweighting so that the final estimator stays unbiased. Here, Sun and Chan apply it to high-order tensor contractions, turning the deterministic sums into stochastic estimates.
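
As a cartoon of the idea (my sketch, not the paper's algorithm), importance sampling replaces a long sum with an average over randomly drawn terms: each term is drawn with probability proportional to a cheap proxy for its magnitude and then divided by that probability, so the expectation equals the exact sum.

import numpy as np

rng = np.random.default_rng(0)

# A long sum S = sum_k x_k that we want to estimate without touching every term
x = rng.standard_normal(1_000_000) * np.exp(-10 * rng.random(1_000_000))

# Sampling probabilities proportional to |x_k| (in a real tensor contraction the
# proxy would come from cheaply computable factors, not the terms themselves)
p = np.abs(x) / np.abs(x).sum()

n_samples = 10_000
idx = rng.choice(x.size, size=n_samples, p=p)

# Reweight by 1/p_k so the estimator is unbiased:
# E[x_k / p_k] = sum_k p_k * (x_k / p_k) = sum_k x_k
estimate = np.mean(x[idx] / p[idx])
print(f"exact {x.sum():.4f}   stochastic {estimate:.4f}")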

The headline result is that stochastic tensor contraction (STC) drives the scaling of CCSD(T) down dramatically: from the usual O(N^6) and O(N^7) down to O(N^4). In practice, water-cluster tests show very large FLOP reductions and wall-time crossovers at surprisingly small sizes. 

Figure 7 in the paper is the real selling point, because it compares against the incumbent approximate workhorse, DLPNO-CCSD(T), on 20 realistic molecules. STC is faster than DLPNO for every system in the set, with speedups ranging from 2.5× to 32×, while also delivering smaller errors than all DLPNO/Normal results and 15 of 20 DLPNO/Tight results. Just as importantly, the STC errors stay tightly clustered around the chosen target of 0.2 kcal/mol, whereas DLPNO errors vary much more from system to system. That makes STC look not just fast, but controllable. 

Table 3 sharpens that message. Averaged over the benchmark set, STC has a mean absolute error of 0.2 kcal/mol at a geometric mean runtime of 10.7 min, compared with 3.00 kcal/mol / 58 min for DLPNO/Normal, 0.70 kcal/mol / 159 min for DLPNO/Tight, and 773 min for exact CCSD(T). So the paper’s central claim is not merely better asymptotic scaling, but a roughly order-of-magnitude win in both time and error relative to state-of-the-art local correlation in this benchmark. 

One caveat: while the speed-up is undeniably impressive, another likely limiting factor is memory. The paper notes the use of density fitting “to reduce memory requirements,” but does not really quantify memory use or memory scaling in the same systematic way as FLOPs and wall time. Given that modern CC implementations are often limited as much by storage and movement of intermediates as by raw arithmetic, that omission stands out. 

Overall, this is prototype code, but very exciting prototype code. It will be very interesting to see whether this stochastic route can mature into something that genuinely displaces DLPNO-CCSD(T) as the default reduced-cost gold-standard method. Code: GitHub repository



This work is licensed under a Creative Commons Attribution 4.0 International License.



Saturday, February 28, 2026

Classical solution of the FeMo-cofactor model to chemical accuracy and its implications

Huanchen Zhai, Chenghan Li, Xing Zhang, Zhendong Li, Seunghoon Lee, and Garnet Kin-Lic Chan (2026)
Highlighted by Jan Jensen



The FeMo cofactor in nitrogenase enzymes is often mentioned as the killer application of quantum computing (QC) in chemistry. That is due to its complex electronic structure, which has made it difficult to model accurately. However, Chan and co-workers now claim to have computed the electronic energy to chemical accuracy (by their estimate) by conventional means.

They have done so through a series of calculations, as indicated in the figure above. The CPU requirements are not given in detail, but the authors point out that no supercomputer was needed.

Interestingly, the authors found that the ground state wavefunction is not inherently strongly multireference. Rather, the main challenge is to identify the correct (mostly) single-reference state.

Where does that leave chemical applications of QC? For one thing, it moves the goalposts further back. The active space used here is the one typically used to estimate QC requirements, but it may have to be expanded to include MOs from the surrounding protein to accurately capture the chemistry, which would require an even larger quantum computer. That will be even further into the future, with plenty of time for conventional approaches to get there first.

In my opinion, the case for QC-based quantum chemistry was never very strong, and this study is just another blow.

Wednesday, January 28, 2026

Predicting Enantioselectivity via Kinetic Simulations on Gigantic Reaction Path Networks

Yu Harabuchi, Ruben Staub, Min Gao, Nobuya Tsuji, Benjamin List, Alexandre Varnek, and Satoshi Maeda (2026)
Highlighted by Jan Jensen



The automated prediction of chemical reaction networks has thus far been limited to relatively small systems, typically with fewer than 50 atoms (including hydrogens), due to computational expense. This study goes significantly beyond that by treating a system with 228 atoms.

This is made possible by three things: 

1. While the system is big, the reaction is relatively simple, so the reaction network is relatively small. 

The reaction is an acid-catalysed cyclisation involving a relatively small and chemically simple molecule. It is the (chiral) acid catalyst that contributes most of the atoms. The reaction itself has three steps: protonation of the alkene group, intramolecular C-O bond formation on the activated alkene, and deprotonation of the O to regenerate the catalyst. Most of the atoms are chemically inert; only 12 atoms are chemically active (as defined by the user). In all, the study identified 74 possible intermediates/products, and only about half of those are chemically distinct if you ignore chirality.

2. Cheap surrogate energy function

They use a Δ-ML approach that corrects the xTB energy and gradient to obtain better accuracy. The ML model is trained on-the-fly against DFT calculations (a generic sketch of the Δ-ML idea is shown at the end of this post).

3. Massive computational resources 

In spite of 1 and 2, this study required massive computational resources. The authors don't address this point specifically, other than to mention that it requires millions of gradient evaluations, but Maeda stressed this point during his talk at WATOC last year.

So this is not exactly a routine application.
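
As a rough illustration of the Δ-ML idea in point 2 (a generic sketch with made-up data, not the authors' on-the-fly workflow), a model is fitted to the difference between the expensive (DFT-level) and cheap (xTB-level) energies, and the corrected surrogate is the cheap energy plus the learned correction:

import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Hypothetical placeholders: descriptor vectors for a set of geometries, with
# energies from a cheap (xTB-level) and an accurate (DFT-level) method
X        = np.random.rand(200, 30)              # 200 training geometries
E_cheap  = np.random.rand(200)                  # xTB-level energies
E_target = E_cheap + 0.1 * np.random.rand(200)  # DFT-level energies

# Delta-ML: learn only the (hopefully small and smooth) difference
delta_model = KernelRidge(kernel="rbf", alpha=1e-6, gamma=0.5)
delta_model.fit(X, E_target - E_cheap)

def corrected_energy(x_new, e_cheap_new):
    """Cheap energy plus learned correction, approximating the expensive energy."""
    return e_cheap_new + delta_model.predict(x_new.reshape(1, -1))[0]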