Thursday, September 30, 2021

Benchmarking molecular feature attribution methods with activity cliffs

José Jiménez-Luna, Miha Skalic, and Nils Weskamp (2021)
Highlighted by Jan Jensen


Figure 1 from the paper. (c) The authors 2021. Reproduced under the CC-BY-NC license.

This is a follow-up of sorts to a previous post on trying to explain ML models using feature attribution. While the idea is very attractive, it is not obvious how best to benchmark such methods for chemical applications, since it's rarely clear what the right answer is. Most benchmarking so far has therefore been done on toy problems that basically amount to substructure identification.

This paper suggests that a solution is to use activity cliffs in protein-ligand binding data, i.e. pairs of molecules where a small structural change leads to a large change in binding affinity. The idea is that atom attribution algorithms should identify these structural differences, as illustrated in the figure above. The paper goes on to test this premise for an impressive number of feature attribution algorithms on an equally impressive number of datasets.
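To make the benchmark idea concrete, here is a minimal sketch (not the authors' actual code or metric): treat the atoms that differ between the two molecules in a cliff pair, i.e. those outside their maximum common substructure, as the "ground truth" an attribution method should highlight, and score the method by how much attribution lands on those atoms. The RDKit calls are standard, but the example pair and the per-atom attribution scores are hypothetical placeholders; in the paper the scores come from the trained models being benchmarked.

# Sketch of the activity-cliff attribution benchmark, assuming RDKit is installed.
from rdkit import Chem
from rdkit.Chem import rdFMCS


def uncommon_atoms(mol_a, mol_b):
    """Indices of atoms in mol_a that lie outside the MCS with mol_b."""
    mcs = rdFMCS.FindMCS([mol_a, mol_b])
    core = Chem.MolFromSmarts(mcs.smartsString)
    matched = set(mol_a.GetSubstructMatch(core))
    return [i for i in range(mol_a.GetNumAtoms()) if i not in matched]


def attribution_hit_rate(attributions, cliff_atoms):
    """Fraction of the top-|cliff_atoms| attributed atoms that fall on the cliff region."""
    k = max(len(cliff_atoms), 1)
    ranked = sorted(range(len(attributions)), key=lambda i: attributions[i], reverse=True)
    return len(set(ranked[:k]) & set(cliff_atoms)) / k


# Hypothetical cliff-like pair: phenol vs. a nitro analogue (for illustration only).
mol_a = Chem.MolFromSmiles("Oc1ccccc1")
mol_b = Chem.MolFromSmiles("O=[N+]([O-])c1ccccc1")

cliff = uncommon_atoms(mol_a, mol_b)

# Placeholder per-atom attributions for mol_a, e.g. as produced by a GNN explainer.
fake_attributions = [0.9] + [0.1] * (mol_a.GetNumAtoms() - 1)

print("atoms outside MCS:", cliff)
print("hit rate:", attribution_hit_rate(fake_attributions, cliff))

In this toy case the single atom outside the common benzene core carries the highest attribution, giving a hit rate of 1.0; the paper's finding is that real attribution methods rarely achieve this kind of agreement on held-out cliff pairs.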

The main conclusion is that none of the methods work unless the molecule pairs are included in the training set! Thus the authors ...
"... discourage the overall use of modern feature attribution methods in prospective lead optimization applications, and particularly those that work in combination with message-passing neural networks."
However, this paper by Cruz-Monteagudo et al. argues that ML models in general should fail to predict activity cliffs. One way to view activity cliffs is as exceptions that test the rules, while ML models are supposed to learn the rules. The only way to predict the exceptions is to memorise them, i.e. to overfit.

On the other hand, the examples shown above are, in my opinion, pretty drastic changes in structure that may not fit the conventional definition of activity cliffs and could conceivably be explained with learned rules. Clearly, the feature attribution methods tested by Jiménez-Luna et al. are not up to the task, or perhaps such methods simply require a larger training set to work. One key question the authors didn't discuss is whether the ML models also fail to predict the change in binding affinity, in addition to failing to attribute it correctly.



This work is licensed under a Creative Commons Attribution 4.0 International License.