Wednesday, December 30, 2015

Scalable Quantum Simulation of Molecular Energies

P. J. J. O’Malley, R. Babbush, I. D. Kivlichan, J. Romero, J. R. McClean, R. Barends, J. Kelly, P. Roushan, A. Tranter, N. Ding, B. Campbell, Y. Chen, Z. Chen, B. Chiaro, A. Dunsworth, A. G. Fowler, E. Jeffrey, A. Megrant, J. Y. Mutus, C. Neill, C. Quintana, D. Sank, A. Vainsencher, J. Wenner, T. C. White, P. V. Coveney, P. J. Love, H. Neven, A. Aspuru-Guzik, and J. M. Martinis (2015)
Contributed by Jan Jensen

This paper describes the first electronic structure calculation, the potential energy surface (PES) of H$_2$, performed on a quantum computer without "exponentially costly precompilation".  I know very little about quantum computing, so bear with me as I try to explain this paper to myself in terms even I can begin to understand.

From Wikipedia
A classical computer has a memory made up of bits, where each bit is represented by either a one or a zero. A quantum computer maintains a sequence of qubits. A single qubit can represent a one, a zero, or any quantum superposition of those two qubit states; a pair of qubits can be in any quantum superposition of 4 states, and three qubits in any superposition of 8 states. In general, a quantum computer with $n$ qubits can be in an arbitrary superposition of up to 2$^n$ different states simultaneously (this compares to a normal computer that can only be in one of these 2$^n$ states at any one time). A quantum computer operates by setting the qubits in a controlled initial state that represents the problem at hand and by manipulating those qubits with a fixed sequence of quantum logic gates. The sequence of gates to be applied is called a quantum algorithm. The calculation ends with a measurement, collapsing the system of qubits into one of the 2$^n$ pure states, where each qubit is zero or one, decomposing into a classical state. The outcome can therefore be at most $n$ classical bits of information. Quantum algorithms are often non-deterministic, in that they provide the correct solution only with a certain known probability.
The paper describes two experiments and I'll focus on the first one. The experiment uses two Xmon transmon qubits, which is a kind of superconducting charge qubit where the two qubit states are uncharged and charged superconducting islands (essentially circuits made of superconducting aluminum film deposited on a sapphire substrate).

The molecular Hamiltonian of H$_2$ within a minimal basis set is rewritten as
$$H = g_0 \textbf{1} + g_1Z_0 + g_2Z_1 + g_3Z_0Z_1 + g_4X_0X_1+g_5Y_0Y_1 $$
The expectation values of {$X_i,Y_i,Z_i$} are measured (quantum computed) and {$g_i$} are numbers that can be (classically) computed and that depend on the H-H bond length ($R$).  Since there are only two orbitals, the energy depends on a single variational parameter ($\theta$), so the objective is to find the value of $\theta$ that minimizes the energy for a given value of $R$.

So the authors set up a quantum algorithm that outputs {$X_i,Y_i,Z_i$} given an initial state that depends on $\theta$.  This is done for thousands of different values of $\theta$, and for each value of $R$ the value of $\theta$ that minimizes the energy defined by the equation above is found by classical minimization, yielding a PES (see Figure 2 from the paper, reproduced below).
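As a toy illustration of this hybrid quantum-classical loop, the quantum measurement step can be replaced by its known analytic result for a two-qubit ansatz of the form $\cos\theta\,|01\rangle + \sin\theta\,|10\rangle$; note that the $g_i$ coefficients below are illustrative placeholders I made up, not the values tabulated in the paper:

```python
import math

# Illustrative g_i coefficients for one bond length R.
# These are placeholders, NOT the values from the paper.
g = [-0.45, 0.39, -0.39, 0.01, 0.18, 0.18]

def expectations(theta):
    """Classical stand-in for the quantum measurement step: for the
    ansatz cos(theta)|01> + sin(theta)|10>, the six expectation values
    appearing in the Hamiltonian can be worked out analytically."""
    c2, s2 = math.cos(2 * theta), math.sin(2 * theta)
    return {"1": 1.0, "Z0": c2, "Z1": -c2, "Z0Z1": -1.0,
            "X0X1": s2, "Y0Y1": s2}

def energy(theta):
    """E(theta) = g0<1> + g1<Z0> + g2<Z1> + g3<Z0Z1> + g4<X0X1> + g5<Y0Y1>"""
    e = expectations(theta)
    terms = ["1", "Z0", "Z1", "Z0Z1", "X0X1", "Y0Y1"]
    return sum(gi * e[t] for gi, t in zip(g, terms))

# Classical outer loop: scan theta and keep the minimum, standing in
# for the classical minimization performed at each bond length R.
thetas = [i * math.pi / 1000 for i in range(1000)]
theta_min = min(thetas, key=energy)
```

In the actual experiment the `expectations` function is replaced by repeated state preparation and measurement on the qubits; everything else stays classical, which is what keeps the circuit depth small.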

The "exponentially costly precompilation" mentioned above refers to the fact that the conventional quantum algorithm approach, quantum phase estimation (QPE) (source):
requires a large number of n-qubit quantum controlled operations to be performed in series—placing considerable demands on the number of components and coherence time—while the inherent parallelism of our scheme enables a small number of n-qubit gates to be exploited many times, drastically reducing these demands. 

This work is licensed under a Creative Commons Attribution 4.0 International License.

Wednesday, December 23, 2015

Efficient ab initio free energy calculations by classically assisted trajectory sampling

Hugh F. Wilson Computer Physics Communications 2015, 197, 1
Contributed by David Bowler
Reposted from Atomistic Computer Simulations with permission

Ab initio thermodynamics is both extremely challenging and extremely important. The challenge arises from the need to sample an energy distribution sufficiently well to converge calculations; the importance comes from the insight that we can gain into experimentally inaccessible situations (I have several colleagues who work on iron in the Earth’s core which is not readily accessible experimentally). A new paper[1] suggests an approach to ab initio thermodynamics that will be extremely helpful for certain calculations (and potentially useful for general calculations). I have written about calculations on liquid iron in Section 4.6 of the book, and on general approaches to thermodynamics in Chapter 6.
When finding average values of variables at finite temperature, we have to sample over a set of micro-states which are distributed according to a potential energy, U1(r), with a Boltzmann factor that depends on the potential giving the probability of each state. The standard approach to this is to use either MD or Monte Carlo (MC) to sample the potential energy surface, possibly using a weighting scheme to speed up convergence. This tends to be quite expensive when using ab initio methods where a long MD run may be required.
The key insight of the new method is that we can perform the same averaging using a set of micro-states that are distributed according to a different potential energy, U0(r), with the Boltzmann factor now depending on the difference between the two potentials, U1(r) - U0(r). If the new potential is significantly cheaper than the first, then we can perform a long sampling run using this potential and draw the micro-states from its distribution, significantly reducing the number of expensive calculations that need to be performed.
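A minimal sketch of the reweighting idea, with a 1-D harmonic potential U0 standing in for the cheap classical potential and a slightly stiffer U1 standing in for the expensive ab initio one (all names and numbers here are illustrative, not from the paper):

```python
import math
import random

random.seed(1)
beta = 1.0                        # 1/kT in reduced units
def U0(x): return 0.5 * x * x     # cheap potential (sampled directly)
def U1(x): return 0.6 * x * x     # "expensive" potential (toy stand-in)

# Long Metropolis Monte Carlo run on the cheap potential U0.
samples, x = [], 0.0
for step in range(200000):
    x_new = x + random.uniform(-0.5, 0.5)
    if random.random() < math.exp(-beta * (U0(x_new) - U0(x))):
        x = x_new
    if step % 10 == 0:
        samples.append(x)

# Reweight each micro-state by exp(-beta*(U1 - U0)) to turn averages
# over the U0 ensemble into averages over the U1 ensemble.
w = [math.exp(-beta * (U1(s) - U0(s))) for s in samples]
avg_x2 = sum(wi * s * s for wi, s in zip(w, samples)) / sum(w)

# The same weights give the free energy difference between the two
# potentials (free energy perturbation); for two harmonic wells the
# exact answer is 0.5*ln(k1/k0) = 0.5*ln(1.2).
delta_F = -math.log(sum(w) / len(w)) / beta
```

In the scheme the paper analyzes, the expensive U1 only has to be evaluated on the (thinned, decorrelated) micro-states drawn from the cheap run, which is where the order-of-magnitude savings comes from.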
This paper presents a careful analysis of how the accuracy of the cheap method (here taken to be a classical potential, ideally fitted to some ab initio MD) affects the sampling. While the method is efficient for standard averages, it is outstanding for thermodynamic integration, where it can reduce the number of simulations by an order of magnitude or more. It is clear that the method was developed in this context, where the absolute free energy is required. For ab initio thermodynamics, this is a significant step forward.
[1] Comp. Phys. Commun. 197, 1 (2015) DOI:10.1016/j.cpc.2015.07.008

Tuesday, December 8, 2015

[5]Radialene

Mackay, E. G.; Newton, C. G.; Toombs-Ruane, H.; Lindeboom, E. J.; Fallon, T.; Willis, A. C.; Paddon-Row, M. N.; Sherburn, M. S.  J. Am. Chem. Soc. 2015, 137, 14653
Contributed by Steven Bachrach
Reposted from Computational Organic Chemistry with permission

It may come as something of a surprise that [5]radialene 1 has only just now been synthesized.1 What makes this especially intriguing is that [3]radialene 2, [4]radialene 3, and [6]radialene 4 have been known for years.
Paddon-Row, Sherburn, and coworkers speculated that [5]radialene must undergo polymerization much more rapidly than the other radialenes. They computed the activation barrier for the Diels-Alder dimerization of the radialenes at G4(MP2). (The optimized structure of 1 and the transition state for the dimerization of 1 are shown in Figure 1.) The activation barrier for the dimerization of 1 is computed to be only 14.3 kJ mol-1, much lower than for the dimerization of 3 (59.2 kJ mol-1) or 4 (31.5 kJ mol-1).


Figure 1. G4(MP2) optimized geometries of 1 and the TS for the dimerization of 1.

Application of the distortion/interaction energy model helps to explain why 1 is the outlier among the radialenes. The distortion energy required to bring two molecules of 1 to the transition state geometry is about 63 kJ mol-1, much less than for [4]radialene (102 kJ mol-1) or [6]radialene (96 kJ mol-1). The reason is that [5]radialene is nearly planar, so pyramidalization at only one carbon is needed to reach the TS geometry. For 4, which adopts a chair geometry, significant distortion is needed to bring the double bonds into conjugation. For 3, the high distortion energy is due to the significant pyramidalization energy required.
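For readers unfamiliar with the model: the distortion/interaction (activation strain) analysis partitions the activation energy as

$$\Delta E^\ddagger = \Delta E^\ddagger_{\mathrm{dist}} + \Delta E^\ddagger_{\mathrm{int}}$$

where $\Delta E^\ddagger_{\mathrm{dist}}$ is the energy needed to deform the isolated reactants into their transition-state geometries and $\Delta E^\ddagger_{\mathrm{int}}$ is the interaction energy between the distorted fragments. On the numbers quoted above for 1, the interaction term must supply roughly 63 - 14 ≈ 49 kJ mol-1 of stabilization.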

Another interesting note is that the TSs for the Diels-Alder reactions of the radialenes are bis-pericyclic. The authors point out that dynamic effects may be important, though they did not perform any MD studies.

These computations guided the synthesis of 1: it was prepared as a complex with two equivalents of Fe(CO)3, and the metals were then driven off with cerium ammonium nitrate in acetone at -78 °C. The free [5]radialene was detected by NMR; it decomposes with a half-life of about 16 min at -20 °C.


(1) Mackay, E. G.; Newton, C. G.; Toombs-Ruane, H.; Lindeboom, E. J.; Fallon, T.; Willis, A. C.; Paddon-Row, M. N.; Sherburn, M. S. "[5]Radialene," J. Am. Chem. Soc. 2015, 137, 14653–14659, DOI:10.1021/jacs.5b07445.


1: InChI=1S/C10H10/c1-6-7(2)9(4)10(5)8(6)3/h1-5H2

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

Thursday, December 3, 2015

Small Atomic Orbital Basis Set First-Principles Quantum Chemical Methods for Large Molecular and Periodic Systems: A Critical Analysis of Error Sources

Sure, R.; Brandenburg, J. G.; Grimme, S. ChemistryOpen, EarlyView, DOI: 10.1002/open.201500192 (CC by-nc-nd)
Contributed by Grant Hill

The use of density functional theory (DFT) calculations to produce insights into the chemistry unveiled by experiment is widespread due to its relative ease-of-use and ability to explain many chemical phenomena of interest. In particular, the B3LYP hybrid functional [1] and the Pople-type 6-31G* basis set [2] are incredibly popular, with some referring to this as Default Favourite Theory (a play on DFT).[3] A recent review by Grimme and co-workers sets out a case against using this B3LYP/6-31G* model chemistry by careful examination of errors.

The review contends that the use of small basis sets such as 6-31G* leads to relatively large errors due to both basis set superposition error (BSSE) and basis set incompleteness error (BSIE). It's not entirely clear how to separate these two terms and as a result the review mostly focuses on the BSSE element, with some emphasis on intramolecular BSSE in addition to the more familiar intermolecular BSSE. The question of why B3LYP/6-31G* still performs well in a number of cases is then examined in terms of a fortuitous cancellation of errors between BSSE and London forces (dispersion energy). It is demonstrated that this cancellation cannot be relied upon in all cases and a convincing case is made for choosing different basis sets and methods. While the review mostly focuses on intermolecular interactions, there is some generalisation to other problems of interest.
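As background (this is the standard Boys-Bernardi counterpoise scheme, not a formula quoted from the review), intermolecular BSSE for a dimer AB is usually estimated by recomputing each monomer in the full dimer basis using ghost functions:

$$\Delta E^{\mathrm{CP}}_{\mathrm{int}} = E_{AB}(AB) - E_{A}(AB) - E_{B}(AB)$$

where the argument in parentheses denotes the basis set used, so $E_{A}(AB)$ is the energy of monomer A computed with the basis functions of both A and B present. No comparably clean prescription exists for intramolecular BSSE, which is part of why separating BSSE from BSIE is difficult.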

A number of alternative methods for including dispersion in DFT calculations are reviewed, and final recommendations are made that can easily be incorporated into the workflow of a non-specialist, without a significant increase in computational cost. This includes the use of the def2-SVP basis set of Weigend, Ahlrichs and co-workers.[4] This review should make for interesting reading for anyone routinely using DFT methods in conjunction with double-zeta basis sets.

[1]a) Stephens, P. J.; Devlin, F. J.; Chablowski, C. F.; Frisch, M. J. J. Phys. Chem. 1994, 98, 11623. b) Becke, A. D. J. Chem. Phys. 1993, 98, 5648.
[2] Hehre, W. J.; Ditchfield, R.; Pople, J. A. J. Chem. Phys. 1972, 56, 2257.
[3] I first heard this at the Computational Molecular Science conference in 2008, but I can't recall the originator. The late Nick Handy responded by suggesting that DFT could instead be "Damn Fine Theory".
[4] See Weigend, F.; Ahlrichs, R. Phys. Chem. Chem. Phys. 2005, 7, 3297 and references therein. These basis sets are available to download from the EMSL basis set exchange in formats suitable for most electronic structure packages.

Tuesday, December 1, 2015

Bis-corannulene Receptors for Fullerenes Based on Klärner’s Tethers: Reaching the Affinity Limits

Abeyratne Kuragama, P. L.; Fronczek, F. R.; Sygula, A. Org. Lett. 2015, ASAP
Contributed by Steven Bachrach
Reposted from Computational Organic Chemistry with permission

Capturing buckyballs involves molecular design based on non-covalent interactions. This poses interesting challenges for both the designer and the computational chemist. The curved surface of the buckyball demands a sequestering agent with a complementary curved surface, likely an aromatic one to facilitate π-π stacking interactions. For the computational chemist, weak interactions such as dispersion and π-π stacking demand special attention, in particular the use of density functionals designed to account for them.

Two very intriguing new buckycatchers were recently prepared in the Sygula lab, and also examined by DFT.1 Compounds 1 and 2 make use of the scaffold developed by Klärner.2 In these two buckycatchers the tongs are corannulenes, providing a curved aromatic surface to match the C60 and C70 surfaces. They differ in the length of the connector unit.
B97-D/TZVP computations of the complex of 1 and 2 with C60 were carried out. The optimized structures are shown in Figure 1. The binding energies (computed at B97-D/QZVP*//B97-D/TZVP) of these two complexes are really quite large. The binding energy for 1:C60 is 33.6 kcal mol-1, comparable to some previous Buckycatchers, but the binding energy of 2:C60 is 50.0 kcal mol-1, larger than any predicted before.
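For clarity, the binding energy quoted here is presumably the usual supermolecular complexation energy, taken with a positive sign for favourable binding (my notation, not the paper's):

$$E_{\mathrm{bind}} = -\left[E(1{:}\mathrm{C}_{60}) - E(1) - E(\mathrm{C}_{60})\right]$$

so a larger positive value means stronger host-guest binding.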


Figure 1. B97-D/TZVP optimized geometries of 1:C60 and 2:C60.

Measurement of the binding energy using NMR was complicated by competition between one and two molecules of 2 binding to the buckyballs. Nonetheless, the experimental data show that 2 binds to C60 and C70 more effectively than any previous host. The authors were also able to obtain a crystal structure of 2:C60.


(1) Abeyratne Kuragama, P. L.; Fronczek, F. R.; Sygula, A. "Bis-corannulene Receptors for Fullerenes Based on Klärner’s Tethers: Reaching the Affinity Limits," Org. Lett. 2015, ASAP, DOI:10.1021/acs.orglett.5b02666.
(2) Klärner, F.-G.; Schrader, T. "Aromatic Interactions by Molecular Tweezers and Clips in Chemical and Biological Systems," Acc. Chem. Res. 2013, 46, 967-978, DOI: 10.1021/ar300061c.


1: InChI=1S/C62H34O2/c1-63-61-57-43-23-45(41-21-37-33-17-13-29-9-5-25-3-7-27-11-15-31(35(37)19-39(41)43)53-49(27)47(25)51(29)55(33)53)59(57)62(64-2)60-46-24-44(58(60)61)40-20-36-32-16-12-28-8-4-26-6-10-30-14-18-34(38(36)22-42(40)46)56-52(30)48(26)50(28)54(32)56/h3-22,43-46H,23-24H2,1-2H3/t43-,44+,45+,46-
2: InChI=1S/C66H36O2/c1-67-65-51-24-45-43-23-44(42-20-38-34-16-12-30-8-4-27-3-7-29-11-15-33(37(38)19-41(42)43)59-55(29)53(27)56(30)60(34)59)46(45)25-52(51)66(68-2)64-50-26-49(63(64)65)47-21-39-35-17-13-31-9-5-28-6-10-32-14-18-36(40(39)22-48(47)50)62-58(32)54(28)57(31)61(35)62/h3-22,24-25,43-44,49-50H,23,26H2,1-2H3/t43-,44+,49+,50-

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.