Sunday, September 29, 2024

Toy Models of Superposition

Nelson Elhage, Tristan Hume, Catherine Olsson, Nicholas Schiefer, Tom Henighan, Shauna Kravec, Zac Hatfield-Dodds, Robert Lasenby, Dawn Drain, Carol Chen, Roger Grosse, Sam McCandlish, Jared Kaplan, Dario Amodei, Martin Wattenberg, Christopher Olah (2022)
Highlighted by Jan Jensen



Most NNs are notoriously hard to interpret. While there are a few cases, mostly in image classification, where some features (like lines or corners) can be assigned to particular neurons, in general it seems like every part of the NN contributes to every prediction. This paper provides some powerful insight into why this is, by analysing simple toy models.

The study builds on the idea that the output of a hidden layer is an N-dimensional embedding vector (V) that encodes a feature of the data (N is the number of neurons in the layer). You might have seen this famous example from language models: V("king") - V("man") + V("woman") = V("queen").
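
As a quick sanity check of that famous example, here is a minimal sketch using the gensim library and a small set of pretrained GloVe vectors (model name from gensim's model zoo; downloaded on first use):

```python
import gensim.downloader as api

# Load a small set of pretrained word vectors (~70 MB, downloaded once).
wv = api.load('glove-wiki-gigaword-50')

# V("king") - V("man") + V("woman") should land near V("queen").
print(wv.most_similar(positive=['king', 'woman'], negative=['man'], topn=3))
```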

Naively, one would expect that an N-neuron layer can encode N different features, since there are only N mutually orthogonal vectors. However, the paper points out that the number of almost-orthogonal vectors (say, with pairwise angles between 89° and 91°) increases exponentially with N, so that NNs can represent many more features than they have dimensions, which they term "superposition".
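
The "almost orthogonal" claim is easy to check numerically: random unit vectors in high dimensions are almost always nearly orthogonal to each other. A minimal numpy sketch (the sample size is arbitrary):

```python
import numpy as np

# Pairwise angles between random unit vectors in R^n concentrate around
# 90 degrees as n grows, leaving room for many almost-orthogonal directions.
rng = np.random.default_rng(0)

for n in (10, 100, 1000):
    v = rng.standard_normal((500, n))
    v /= np.linalg.norm(v, axis=1, keepdims=True)    # normalise each row
    cos = (v @ v.T)[np.triu_indices(500, k=1)]       # pairwise cosines
    angles = np.degrees(np.arccos(np.clip(cos, -1, 1)))
    print(f"n={n}: mean angle {angles.mean():.1f} deg, std {angles.std():.2f} deg")
```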

Since most features are stored in these almost-orthogonal (rather than strictly orthogonal) vectors, they will necessarily have many non-zero components and thus cannot be assigned to a specific neuron. The authors further show that superposition is driven by data sparsity, i.e. few examples of a particular input feature: more data sparsity, more superposition, less interpretability.

The paper is very thorough and there are many more insights that I have skipped. But I hope this highlight has made you curious enough to have a look at the paper. I can also recommend this brilliant introduction to superposition by 3Blue1Brown to get you started.

Now, it's important to note that these insights are obtained by analysing simple toy problems. It will be interesting to see if and how they apply to real-world applications, including chemistry. 


This work is licensed under a Creative Commons Attribution 4.0 International License.



Wednesday, August 28, 2024

Variational Pair-Density Functional Theory: Dealing with Strong Correlation at the Protein Scale

Mikael Scott, Gabriel L. S. Rodrigues, Xin Li, and Mickael G. Delcey (2024)
Highlighted by Jan Jensen

As I've said before, one of the big problems in quantum chemistry is that we still can't routinely predict the reactivity of transition metal (TM)-containing compounds with the same degree of accuracy as we can for organic molecules. This paper might offer a solution by combining CASSCF with DFT in a variational way.

While such a combination has been tried before, those implementations basically compute the DFT energy from the CASSCF density. If you haven't heard of this approach, it's probably because it didn't work very well.

This paper presents a variational implementation, where you minimise the energy of a CASSCF wavefunction subject to an exchange-correlation density functional, and the results are significantly better - in some cases approaching chemical accuracy! This is pretty impressive given that they used off-the-shelf GGA functionals (BLYP and PBE), so further improvements in accuracy with bespoke functionals are quite likely.
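
For context, in the earlier (non-variational) MC-PDFT-style approaches the energy is evaluated on a converged CASSCF wavefunction; schematically (my notation, not necessarily the paper's):

$$E = V_{\text{nn}} + \sum_{pq} h_{pq} D_{pq} + \frac{1}{2}\sum_{pqrs} g_{pqrs} D_{pq} D_{rs} + E_{\text{ot}}[\rho, \Pi]$$

Here $D$ is the one-electron density matrix, $\rho$ the density, $\Pi$ the on-top pair density, and $E_{\text{ot}}$ an on-top functional derived from a standard functional such as BLYP or PBE. The variational approach of this paper instead minimises this kind of energy expression directly with respect to the wavefunction parameters.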

Oh, and one of the applications presented in the paper is a multiconfigurational calculation on an entire metalloprotein!



This work is licensed under a Creative Commons Attribution 4.0 International License.



Tuesday, July 30, 2024

Reproducing Reaction Mechanisms with Machine Learning Models Trained on a Large-Scale Mechanistic Dataset

Joonyoung F. Joung, Mun Hong Fong, Jihye Roh, Zhengkai Tu, John Bradshaw, and Connor Wilson Coley (2024)
Highlighted by Jan Jensen

Figure 1 from the paper. (c) the authors 2024

If you don't follow this particular subject, you might be surprised to learn that there isn't a large database of elementary reactions relevant to organic synthesis. Until now. 

While datasets such as Reaxys contain millions of reactions, these are typically multistep reactions. That's mostly fine for training retrosynthesis algorithms (although the authors discuss some disadvantages), but it presents a challenge if you want to use more physically based methods such as QM to predict reactivity. For example, while there are some databases of transition states (TSs), they are typically for synthetically irrelevant reactions. So, while very promising methods have been developed for TS prediction, they have been trained on these datasets and thus have limited practical applicability to synthesis.

This paper is an important step towards fixing this:

"We  identified the most popular 86 reaction types in Pistachio and curated elementary reaction templates (Figure 1c) for each of these 86 reaction types with 175 different reaction conditions (e.g., types of mechanisms). ... By applying these expert elementary reaction templates to the reactants in Pistachio, we obtained the recorded products as well as unreported  byproducts and side  products. We systematically  selected  and  preserved  the  mechanistic  pathways leading to the formation of the recorded product for  each  entry,  resulting in a comprehensive dataset comprising 1.3 million overall reactions and 5.8 million elementary reactions."

The next step is now to use this data to obtain TSs for these elementary reactions - a difficult but important challenge for the CompChem community.



This work is licensed under a Creative Commons Attribution 4.0 International License.



Sunday, June 30, 2024

Using GNN property predictors as molecule generators

Félix Therrien, Edward H. Sargent, and Oleksandr Voznyy (2024)
Highlighted by Jan Jensen

Figure 1 from the paper. (c) 2024 the authors

Now this is a very neat idea. Normally, we use backpropagation to alter the weights in order to minimise the difference between the output and the ground truth. Instead, the authors use backpropagation to alter the input to minimise the difference between the output and a desired value. In this case the input is the molecular adjacency matrix, and the result is a molecule with the desired property.

It's one of those "why didn't I think of this?" ideas, but, in practice, there are a few tricky problems to overcome. These include recasting the integer adjacency matrix as a smooth float matrix, finding the right constraints to yield valid molecules, and finding the right loss function. The authors manage to find clever solutions to all these problems and show that this simple idea actually works quite well. As I read it, the current implementation is limited to HCNOF molecules, but generalising it should not be an insurmountable task.
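
Here is a minimal sketch of the input-optimisation idea in PyTorch. Everything here is illustrative: `model` stands in for a trained property predictor, and the authors' actual relaxation and constraints on the adjacency matrix are more sophisticated than this toy version.

```python
import torch

def invert_predictor(model, A_init, target, steps=500, lr=0.01):
    # Optimise the *input* (a relaxed, float-valued adjacency matrix)
    # while the trained model's weights stay frozen.
    A = A_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([A], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        A_sym = 0.5 * (A + A.T)                # keep the graph undirected
        loss = (model(A_sym) - target) ** 2    # drive the prediction to the target
        loss.backward()                        # gradients w.r.t. A, not the weights
        opt.step()
        with torch.no_grad():
            A.clamp_(0.0, 1.0)                 # keep entries in a bond-like range
    return A.detach()
```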

Even if this approach doesn't turn out to be the best generative model, it is one of those obvious (in hindsight) methods that have to be tested to justify more complicated approaches.



This work is licensed under a Creative Commons Attribution 4.0 International License.



Thursday, May 30, 2024

FragGT: Fragment-based evolutionary molecule generation with gene types

Joshua Meyers and Nathan Brown (2024)
Highlighted by Jan Jensen


Figure 1 from the paper. (c) The authors. Reproduced under the CC-BY license

Genetic algorithms (GAs) that make changes at the atom level (as opposed to the fragment level) allow for a very fine-grained search of chemical space. However, some of the resulting molecules are not chemically sensible, and one usually has to include a synthetic accessibility constraint in the scoring function.

An alternative approach is to use fragments and build synthetic accessibility into the fragmentation scheme, which is what this study does. Specifically, the authors use the BRICS fragmentation scheme implemented in RDKit, together with the corresponding combination rules, to turn the genes into molecules.
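
Both the decomposition and the recombination rules are available in RDKit, so the basic building blocks are easy to play with. A minimal sketch (the input molecule is just an example):

```python
from rdkit import Chem
from rdkit.Chem import BRICS

# Decompose a molecule into BRICS fragments (the "genes")...
mol = Chem.MolFromSmiles('CC(=O)Nc1ccc(O)cc1')   # paracetamol, as an example
fragments = sorted(BRICS.BRICSDecompose(mol))
print(fragments)

# ...and reassemble fragments into new molecules following the BRICS rules.
blocks = [Chem.MolFromSmiles(f) for f in fragments]
for i, new_mol in enumerate(BRICS.BRICSBuild(blocks)):
    new_mol.UpdatePropertyCache(strict=False)    # products come unsanitised
    print(Chem.MolToSmiles(new_mol))
    if i >= 4:                                   # just show a few
        break
```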

The authors do indeed find that the resulting molecules look more reasonable (though this is not quantified). However, they note that the method is a "relatively inefficient explorer of chemical space", requiring a large number of scoring function evaluations.

The problem is probably the short-chromosome/many-genes problem: GAs do best at optimising long chromosomes made of only a few different genes, while the opposite is the case here. There are 211,388 unique BRICS fragments and each molecule contains only around 10 fragments, so you need to run a lot of evaluations to make sure that all (reasonably) possible genes have been sampled at each position.

This presents a very interesting open challenge to the community.


This work is licensed under a Creative Commons Attribution 4.0 International License.



Tuesday, April 30, 2024

Invalid SMILES are beneficial rather than detrimental to chemical language models

Michael A. Skinnider (2024)
Highlighted by Jan Jensen

Figure 3c from the paper. (c) The author. Reproduced under the CC-BY License

Language models (LMs) don't always produce valid SMILES, and while for modern methods the percentage of invalid SMILES tends to be relatively small, much effort has been expended on making it as small as possible. SELFIES was invented as a way to make this percentage zero, since SELFIES is designed to always decode to a valid molecule.

However, several studies have shown that SMILES-based LMs tend to produce molecular distributions that are closer to the training set than SELFIES-based ones do. This paper figures out why, and the reason turns out to be both trivial and profound at the same time.

It turns out that the main difference between the molecules produced using SMILES and SELFIES is that the former contain a much larger proportion of aromatic atoms. Furthermore, this difference goes away if the SELFIES-based method is allowed to make molecules with pentavalent carbons, which are subsequently discarded when converting from SELFIES to SMILES.

The reason is that in order to generate a valid SMILES or SELFIES string for an aromatic molecule you have to get the sequence of characters exactly right. If it goes wrong for SMILES the string is discarded, but if it goes wrong for SELFIES the string is usually turned into a valid non-aromatic molecule, i.e. the mistake is not discarded.

For example, the correct SMILES string for benzene is "c1ccccc1", and generated strings with one more or one less "c" character ("c1cccccc1" and "c1cccc1") are invalid and will be removed. The corresponding SELFIES string for benzene is "[C][=C][C][=C][C][=C][Ring1][=Branch1]", but generated strings with one more or one less [C] character will result in non-aromatic molecules with SMILES strings like "C=C1C=CC=CC1" and "C1=CC=CC=1".
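
You can reproduce this asymmetry directly with the selfies package (a minimal sketch; the exact decoded structures can depend on the library version):

```python
import selfies as sf
from rdkit import Chem

# A mangled SMILES simply fails to parse and gets discarded...
print(Chem.MolFromSmiles('c1cccc1'))   # None: one "c" too few is invalid

# ...while a mangled SELFIES still decodes to *some* valid molecule.
benzene = '[C][=C][C][=C][C][=C][Ring1][=Branch1]'
mangled = '[C][=C][C][=C][C][=C][C][Ring1][=Branch1]'   # one [C] too many
print(sf.decoder(benzene))   # benzene, as a Kekulé SMILES
print(sf.decoder(mangled))   # a valid but non-aromatic molecule
```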

There are a lot of ML papers that simply observe what works best, but very few that determine why. This is one of them, and it is very refreshing!



This work is licensed under a Creative Commons Attribution 4.0 International License.



Sunday, March 31, 2024

An evolutionary algorithm for interpretable molecular representations

Philipp M. Pflüger, Marius Kühnemund, Felix Katzenburg, Herbert Kuchen, and Frank Glorius (2024)
Highlighted by Jan Jensen

Parts of Figures 2 and 6 combined. (c) 2024 Elsevier, Inc

This paper presents a novel approach to XAI that allows for direct comparison with chemical intuition. Molecular fingerprints (FPs; either binary or count-based) are defined using randomly generated SMARTS patterns, and a genetic algorithm (GA) is then used to find the optimum fingerprint of a given length. Here the optimum is defined as the fingerprint giving the lowest error when used with CatBoost. The GA search requires training many thousands of models, so the approach is not practical for more computationally expensive ML models.
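
The representation itself is easy to sketch with RDKit: a fingerprint is just a vector of SMARTS match counts. In this minimal sketch the patterns are illustrative placeholders, not ones found by the GA:

```python
from rdkit import Chem
import numpy as np

# Illustrative SMARTS patterns standing in for a GA-optimised fingerprint.
smarts_patterns = ['[OH]', 'c1ccccc1', '[NX3]', 'C=O']
queries = [Chem.MolFromSmarts(s) for s in smarts_patterns]

def smarts_count_fingerprint(mol):
    # One count per pattern: how many times each SMARTS matches the molecule.
    return np.array([len(mol.GetSubstructMatches(q)) for q in queries])

mol = Chem.MolFromSmiles('CC(=O)Nc1ccc(O)cc1')
print(smarts_count_fingerprint(mol))   # e.g. [1 1 1 1]
```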

Nevertheless, the authors show that CatBoost is competitive with more sophisticated ML models even when using FP lengths as low as 256 (or even 32 in some cases). One can then analyse the SMARTS patterns to gain chemical insights. 

Even more interestingly, one can use the approach to compare directly with chemical intuition. The authors did this by asking five groups of chemists to come up with the 16 structural features that best explain the Doyle-Dreher dataset of 3,960 Buchwald-Hartwig cross-coupling yields. ML models based on the corresponding FPs tended to perform worse than the 16-bit FPs found by the GA. However, there were also many similarities between the FPs, indicating that the method can extract features that agree with chemical intuition.


This work is licensed under a Creative Commons Attribution 4.0 International License.