Mayk Caldas Ramos and Andrew D. White*
Chemical Engineering Department, University of Rochester, Rochester, NY 14642, USA. E-mail: andrew.white@rochester.edu
First published on 13th March 2024
Aqueous solubility is a valuable yet challenging property to predict. Computing solubility using first-principles methods requires accounting for the competing effects of entropy and enthalpy, resulting in long computations for relatively poor accuracy. Data-driven approaches, such as deep learning, offer improved accuracy and computational efficiency but typically lack uncertainty quantification. Additionally, ease of use remains a concern for any computational technique, resulting in the sustained popularity of group-based contribution methods. In this work, we addressed these problems with a deep learning model with predictive uncertainty that runs on a static website (without a server). This approach moves computing needs onto the website visitor's device without requiring installation, removing the need to pay for and maintain servers. Our model achieves satisfactory results in solubility prediction. Furthermore, we demonstrate how to create molecular property prediction models that balance uncertainty and ease of use. The code is available at https://github.com/ur-whitelab/mol.dev, and the model is usable at https://mol.dev.
Solubility prediction is essential, driving the development of various methods, from physics-based approaches—including first principles,10,11 semi-empirical equations,12–14 molecular dynamics (MD),15–18 and quantum computations19—to empirical methods like quantitative structure–property relationship (QSPR)20–23 and multiple linear regression (MLR).24,25 Despite their sophistication, physics-based models often present complexity that limits accessibility to advanced users26 and do not guarantee higher accuracy than empirical methods.27 Data-driven models emerge as efficient alternatives, capable of outperforming physics-based models.26 However, achieving accurate and reliable solubility predictions remains a significant challenge.26,28
To address the persistent issues of systematic bias and non-reproducibility in aqueous solubility datasets, Llinàs et al.29,30 introduced two solubility challenges featuring consistent data. The first challenge ranked participants based on the root mean square error (RMSE) and the accuracy within a ±0.5 logS error range.31 The second challenge revealed that despite the freedom in method selection, all entries relied on QSPR or machine learning (ML) techniques,32 yet did not achieve a notable improvement over the first challenge.31 These challenges highlighted the importance of data quality over model selection for accurate solubility predictions.32 Sorkun et al.28 further emphasized this by demonstrating how data quality assessments on subsets of the AqSolDB1 significantly impacted model performance.28
McDonagh et al.27 demonstrated that cheminformatics methods surpass first-principles theoretical calculations in calculating solubilization free energies, highlighting the superior accuracy of cheminformatics and the efficacy of random forest models, evidenced by an RMSE of 0.93 on Llinàs' first dataset.29 Data-driven approaches, particularly feature-based models, have contributed to accurate aqueous solubility prediction. Delaney25 used MLR to develop a model called Estimated SOLubility (ESOL), fitted on a dataset of 2874 small organic molecules with an average absolute error (AAE) of 0.83. Comparable performance has been achieved using various methods, including MLR,24 Gaussian processes,33 undirected graph recurrent neural networks (UG-RNN),34 deep neural networks (DNN),35 and random forests (RF).36,37
Recently, transformer38 models have been applied to compute the solubility of small molecules.39–43 Francoeur and Koes41 developed SolTranNet, a transformer model trained on AqSolDB1 solubility data. Notably, this architecture achieves an RMSE of only 0.278 when trained and evaluated on the original ESOL25 dataset using a random split, yet shows an RMSE of 2.99 when trained on AqSolDB1 and evaluated on ESOL. This suggests that the molecules present in ESOL may have low variability, meaning that samples in the test set are similar to samples in the training set. Hence, models trained on the ESOL training set perform excellently when evaluated on the ESOL test set.
Solubility models should ideally combine accuracy with ease of access. Thus, a common idea is to use web servers to provide easier public access. However, web servers demand continuous financial and time investments for maintenance, leading to the eventual disappearance of some, despite having institutional or government backing.44 For instance, eight out of 89 web server tools featured in the 2020 Nucleic Acids Research special web server issue were offline by the end of 2022.45 Moreover, computational demands can be significant, with tools like RoseTTAFold46 and ATB47 requiring hours to days for job completion, thus creating potential delays due to long queues and wait times.48
An alternative approach is to perform the computation directly on the user's device, removing the need for server maintenance and costs. This method allows hosting the website as a static file on platforms such as GitHub, with potential archiving on the Internet Archive.† We explored this approach in Ansari and White49 for bioinformatics. Our web application implements a deep ensemble50 of recurrent neural networks (RNNs) capable of extracting information directly from molecular string representations, such as SMILES51 or SELFIES,52 which can be quickly and easily obtained.53,54
The primary difficulty lies in the application's dependence on the device's capabilities, which is especially limiting on smartphones with few resources. In such low-resource settings, transformer models38 become impractical: their large size exceeds smartphone memory and leads to prolonged inference times. Additionally, our model implements a deep ensemble to calibrate uncertainties, making the use of transformers even less feasible. In contrast, using descriptors is an easy way to convey physical information to the model and, consequently, enables smaller models. However, descriptor computation is time-intensive; in our tests, using PaDEL to compute descriptors for all molecules in AqSolDB took roughly 20 hours. Furthermore, feature-based model development requires specialized knowledge for feature selection,55 and is limited to the regions of chemical space these descriptors cover.56 Even using such an application may require specialized data, as Kurotani et al.37 illustrate. RNNs present an alternative that extracts properties directly from string representations while allowing for adaptable computational resource management.
In this work, we developed a front-end application using the JavaScript (JS) implementation of the TensorFlow framework.57 Our application can be used to predict the solubility of small molecules with uncertainty. To calibrate the confidence of the prediction, our model implements a deep ensemble approach,50 which allows an uncertainty to be reported alongside each prediction. Our solution implements a deep ensemble of RNN models designed to achieve satisfactory performance while running in environments without strong computational resources. This application runs locally on the user's device and can be accessed at https://mol.dev/. Mol.dev does not save data input for predictions in any way.
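Conceptually, deployment only requires converting the trained Keras ensemble into a browser-loadable bundle. The sketch below illustrates this step with the tensorflowjs converter; the directory name and surrounding tooling are illustrative assumptions, and the actual export scripts in the mol.dev repository may differ.

```python
# Sketch: export a trained Keras model to a TensorFlow.js bundle that a static
# site can load in the browser with tf.loadLayersModel("web_model/model.json").
# The output directory name is an illustrative assumption.
import tensorflowjs as tfjs


def export_for_web(model, out_dir: str = "web_model") -> None:
    """Write model.json plus binary weight shards consumable by TensorFlow.js."""
    tfjs.converters.save_keras_model(model, out_dir)
```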
We augmented AqSolDB to 96625 molecules using SMILES randomization.61,62 Each entry of AqSolDB was used to generate at most ten new unique randomized SMILES strings. Training the model on multiple representations of the same molecule improves its ability to learn the chemical space constraints of the training set, as demonstrated in previous studies.61,62 Duplicates were removed.
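As an illustration, SMILES randomization of this kind can be performed with RDKit's doRandom option; the function below is a minimal sketch, not the exact augmentation code used to build the training set.

```python
# Minimal sketch of SMILES-randomization augmentation using RDKit; the exact
# implementation used to build the augmented training set may differ.
from rdkit import Chem


def randomized_smiles(smiles: str, n_max: int = 10) -> list[str]:
    """Generate up to n_max unique, randomized SMILES strings for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return []
    variants: set[str] = set()
    # Oversample because randomly rooted/ordered SMILES can collide.
    for _ in range(10 * n_max):
        variants.add(Chem.MolToSmiles(mol, canonical=False, doRandom=True))
        if len(variants) >= n_max:
            break
    return list(variants)


print(randomized_smiles("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin, up to 10 variants
```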
After shuffling, the augmented dataset was split 80%/20% into training and test sets, respectively. The curated datasets for the solubility challenges29,32 were used as withheld validation data to evaluate the model's ability to predict solubility for unseen compounds. We refer to the first solubility challenge dataset as “solubility challenge 1” and to the two sets from the second solubility challenge as “solubility challenge 2_1” and “solubility challenge 2_2”, respectively. Molecules in these three validation datasets do not appear in the training or test sets.
Given a model that outputs two values, μ(x) and σ²(x), that characterize a normal distribution N(μ(x), σ²(x)), a deep ensemble combines M such models to estimate prediction uncertainty. For a given data point x, the ensemble predictions are computed as follows:

$$\hat{\mu}(x) = \frac{1}{M}\sum_{m=1}^{M}\mu_m(x) \qquad (1)$$

$$\hat{\sigma}^2(x) = \frac{1}{M}\sum_{m=1}^{M}\left[\sigma_m^2(x) + \mu_m^2(x)\right] - \hat{\mu}^2(x) \qquad (2)$$
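As a minimal numerical illustration of eqn (1) and (2), the sketch below combines per-model means and variances into the ensemble prediction and also splits the total variance into its aleatoric and epistemic parts (the quantities referred to as AU and EU later in the text). The arrays are illustrative, not real model outputs.

```python
import numpy as np

# Per-model predictions for a single input x (illustrative values only).
mus = np.array([-2.1, -1.9, -2.3, -2.0])      # mu_m(x) for an ensemble of M = 4
sigma2s = np.array([0.30, 0.25, 0.40, 0.35])  # sigma_m^2(x)

mu_hat = mus.mean()                                  # eqn (1)
sigma2_hat = (sigma2s + mus**2).mean() - mu_hat**2   # eqn (2)

# Equivalent decomposition of eqn (2) into aleatoric and epistemic parts:
aleatoric = sigma2s.mean()               # average predicted noise (AU)
epistemic = (mus**2).mean() - mu_hat**2  # disagreement between members (EU)
assert np.isclose(sigma2_hat, aleatoric + epistemic)
print(mu_hat, sigma2_hat)
```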
We used a deep neural network (DNN) implemented using Keras66 and TensorFlow67 to build the deep ensemble. Our DNN model uses Self-referencing embedded strings (SELFIES)52 tokens as input. A pre-defined vocabulary was created by analyzing all training data. Each unique SELFIES group was assigned to a corresponding integer, yielding 273 distinct tokens. Simplified molecular-input line-entry system (SMILES)51 or SELFIES52 molecule representations are converted to tokens based on the pre-defined vocabulary. Fig. 1 illustrates the model architecture. The network can be divided into three sections: (i) embedding, (ii) bi-RNN, and (iii) fully connected NN.
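The tokenization step can be sketched with the selfies package as below; the toy three-molecule vocabulary stands in for the real 273-token vocabulary built from the full training set, and reserving index 0 for a [nop] padding symbol is an assumption made for this sketch.

```python
# Sketch of SELFIES tokenization; the toy vocabulary stands in for the real
# 273-token vocabulary built from the full augmented training set.
import selfies as sf

train_smiles = ["CCO", "c1ccccc1O", "CC(=O)O"]
train_selfies = [sf.encoder(s) for s in train_smiles]

# Map every SELFIES symbol seen in training to an integer; 0 is reserved for
# padding (an assumption made for this sketch).
vocab = {"[nop]": 0}
for i, symbol in enumerate(sorted(sf.get_alphabet_from_selfies(train_selfies))):
    vocab[symbol] = i + 1


def tokenize(smiles: str) -> list[int]:
    """Convert a SMILES string into a list of integer SELFIES tokens."""
    return [vocab[symbol] for symbol in sf.split_selfies(sf.encoder(smiles))]


print(tokenize("CCO"))  # e.g. [2, 2, 4], depending on the toy vocabulary
```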
The embedding layer converts a list of discrete tokens into a fixed-length vector space. Working in a continuous vector space has two main advantages: it provides a more compact representation, and semantically similar symbols can be placed close together in the vector space. Our embedding layer has an input dimension of 273 (the vocabulary size) and an output dimension of 64.
Following the embedding layer, the data are fed into bidirectional recurrent neural network (RNN) layers. We used two RNN layers, each containing 64 units. The effect of using Gated Recurrent Unit (GRU) or Long Short-Term Memory (LSTM)68 layers as the RNN layers was investigated (refer to Section 3.1). The use of a bi-RNN was motivated by our previous work,49 in which LSTMs improved the model's performance when predicting peptide properties from their sequences. More details regarding RNN, LSTM, and GRU layers can be found in ref. 69.
The output from the bi-LSTM stack undergoes normalization via Layer Normalization.70 There is no agreement on why Layer Normalization improves the model's performance.71–74 The absence of a comprehensive theoretical understanding of normalization effects hinders the evolution of novel regularization schemes.75 Despite the limited understanding, Layer Normalization is employed due to its demonstrated effectiveness.74
After normalization, the data are processed through dense layers of 32 and 16 units. The 16-unit layer's output feeds two parallel 1-unit layers: one with a linear activation and one with a softplus activation, producing μ and σ², respectively.
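A minimal Keras sketch of a single ensemble member with the layer sizes described above follows. The hidden-layer activations, dropout placement, and padding/masking choice are assumptions made for illustration; the exact implementation is available in the mol.dev repository.

```python
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 273  # size of the pre-defined SELFIES vocabulary


def build_member() -> tf.keras.Model:
    """One ensemble member: embedding -> bi-RNN stack -> LayerNorm -> mu/sigma^2 heads."""
    tokens = layers.Input(shape=(None,), dtype="int32")
    # mask_zero assumes index 0 is padding (an assumption of this sketch).
    x = layers.Embedding(VOCAB_SIZE, 64, mask_zero=True)(tokens)
    x = layers.Dropout(0.35)(x)
    x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)
    x = layers.Bidirectional(layers.LSTM(64))(x)
    x = layers.LayerNormalization()(x)
    x = layers.Dense(32, activation="relu")(x)  # activation is an assumption
    x = layers.Dropout(0.35)(x)
    x = layers.Dense(16, activation="relu")(x)  # activation is an assumption
    x = layers.Dropout(0.35)(x)
    mu = layers.Dense(1)(x)                             # linear head -> mean
    sigma2 = layers.Dense(1, activation="softplus")(x)  # softplus head -> variance
    return tf.keras.Model(tokens, layers.Concatenate()([mu, sigma2]))
```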
Negative log-likelihood loss l was used to train the model. It is defined from the probability of observing the label y given the input x:

$$l(x, y) = -\log p\left(y \mid x\right) = \frac{\log \sigma^2(x)}{2} + \frac{\left(y - \mu(x)\right)^2}{2\,\sigma^2(x)} + \text{constant} \qquad (3)$$
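A minimal TensorFlow sketch of this loss for the two-output model above is shown below; the constant term is dropped, and the small epsilon added to the variance is an assumption for numerical stability.

```python
import tensorflow as tf


def nll_loss(y_true, y_pred, eps: float = 1e-6):
    """Gaussian negative log-likelihood, eqn (3), without the constant term.

    y_pred packs [mu, sigma^2] along the last axis, matching the two output
    heads of the model sketch above; eps is an assumption added for stability.
    """
    mu = y_pred[..., 0]
    sigma2 = y_pred[..., 1] + eps
    y = tf.reshape(tf.cast(y_true, mu.dtype), tf.shape(mu))
    return tf.reduce_mean(0.5 * tf.math.log(sigma2) + 0.5 * tf.square(y - mu) / sigma2)
```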
During the training phase, dropout layers with a rate of 0.35 were added after the embedding layer and after each dense layer to mitigate over-fitting.76 Models were trained using the Adam77 optimizer with a fixed learning rate of 0.0001 and default values for β1 and β2 (0.9 and 0.999, respectively).
Our model employs adversarial training, following the approach proposed by Lakshminarayanan et al.,50 to improve the robustness of our model predictions. Because the input for our model is a discrete sequence, we generate adversarial examples by modifying the embedded representation of the input data. Each iteration of the training phase consists of first computing the loss using eqn (3) and then a second step in which the loss is also computed on a perturbed input x′ to smooth the model's prediction:

$$x' = x + \epsilon\,\operatorname{sign}\!\left(\nabla_{x}\, l(x, y)\right) \qquad (4)$$

where ε is the magnitude of the adversarial perturbation.
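The training step below sketches this two-pass scheme with the fast gradient sign perturbation of Lakshminarayanan et al.50 applied in embedding space. Splitting the network into an `embed` stage and a `head` stage, and the value of `EPS`, are assumptions made for illustration; `nll_loss` is the Gaussian negative log-likelihood from the previous sketch.

```python
import tensorflow as tf

EPS = 0.01  # perturbation magnitude; an illustrative value, not the paper's setting


def adversarial_step(embed, head, optimizer, tokens, y_true):
    """One training iteration: clean NLL plus NLL on a perturbed embedding.

    `embed` maps tokens to embedded vectors and `head` maps embeddings to
    [mu, sigma^2]; splitting the network this way is an assumption for
    illustration. `nll_loss` is the Gaussian NLL from the previous sketch.
    """
    with tf.GradientTape() as tape:
        e = embed(tokens)
        with tf.GradientTape() as inner:
            inner.watch(e)
            loss_clean = nll_loss(y_true, head(e))        # first pass, eqn (3)
        grad_e = tf.stop_gradient(inner.gradient(loss_clean, e))
        e_adv = e + EPS * tf.sign(grad_e)                  # perturbed input, eqn (4)
        loss = loss_clean + nll_loss(y_true, head(e_adv))  # smoothing second pass
    variables = embed.trainable_variables + head.trainable_variables
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss
```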
Details of the model performance, limitations, training data, ethical considerations, and caveats are available as a model card78 at http://mol.dev/.
Table 1 RMSE, MAE, and Pearson correlation coefficient (r) for each model evaluated on the withheld solubility challenge datasets. Model names encode the ensemble size (4, 8, or 10), the recurrent cell (GRU or LSTM), training on the augmented dataset (Aug), and training without adversarial examples (NoAdv).

Model | Challenge 1 RMSE | Challenge 1 MAE | Challenge 1 r | Challenge 2_1 RMSE | Challenge 2_1 MAE | Challenge 2_1 r | Challenge 2_2 RMSE | Challenge 2_2 MAE | Challenge 2_2 r
---|---|---|---|---|---|---|---|---|---
RF | 1.121 | 0.914 | 0.547 | 0.950 | 0.727 | 0.725 | 1.205 | 1.002 | 0.840 |
DNN | 1.540 | 1.214 | 0.433 | 1.315 | 1.035 | 0.651 | 1.879 | 1.381 | 0.736 |
DNNAug | 1.261 | 1.007 | 0.453 | 1.371 | 1.085 | 0.453 | 2.189 | 1.710 | 0.386 |
kde4GRU | 1.610 | 1.145 | 0.462 | 1.413 | 1.114 | 0.604 | 1.488 | 1.220 | 0.704 |
kde4LSTM | 1.554 | 1.191 | 0.507 | 1.469 | 1.188 | 0.650 | 1.523 | 1.161 | 0.706 |
kde4GRU-NoAdv | 1.729 | 1.348 | 0.525 | 1.483 | 1.235 | 0.622 | 1.954 | 1.599 | 0.517 |
kde4LSTM-NoAdv | 1.425 | 1.114 | 0.505 | 1.258 | 0.972 | 0.610 | 1.719 | 1.439 | 0.609 |
kde4GRUAug | 1.329 | 1.148 | 0.426 | 1.354 | 1.157 | 0.674 | 1.626 | 1.340 | 0.623 |
kde4LSTMAug | 1.273 | 0.984 | 0.473 | 1.137 | 0.932 | 0.639 | 1.511 | 1.128 | 0.717 |
kde8LSTMAug | 1.247 | 0.984 | 0.542 | 1.044 | 0.846 | 0.701 | 1.418 | 1.118 | 0.729 |
kde10LSTMAug-NoAdv | 1.689 | 1.437 | 0.471 | 1.451 | 1.238 | 0.676 | 1.599 | 1.405 | 0.699 |
kde10LSTMAug | 1.095 | 0.843 | 0.559 | 0.983 | 0.793 | 0.724 | 1.263 | 1.051 | 0.792 |
We trained models with four members in the deep ensemble using either GRU or LSTM layers. Metrics can be found in Table 1; for an explanation of the naming syntax used in this work, refer to the Table 1 caption. Using LSTM resulted in a decrease in RMSE and MAE and an increase in the correlation coefficient, indicating better performance. For solubility challenges 1, 2_1, and 2_2, the kde4GRUAug model yielded RMSE values of 1.329, 1.354, and 1.626, respectively, while the kde4LSTMAug model achieved 1.273, 1.137, and 1.511, respectively. This trend was also observed for the models trained without data augmentation (see Table 1). Given that LSTM performed better for this model and data, only bi-LSTM layers are considered in the remainder of the discussion. These results are in accordance with our previous work,49 in which LSTM layers also improved model performance.
Augmenting the dataset had a significant impact on the metrics. Improvements of ∼0.5 in RMSE were observed when evaluating on challenge datasets 1 and 2_1, and of ∼0.2 on 2_2 (see Table 1). Concerning the first two datasets, data augmentation improved every model used in this study. Surprisingly, however, data augmentation degraded the performance of the DNN model on the solubility challenge 2_2 dataset. This behavior was not further investigated.
Comparing kde4LSTM-NoAdv and kde4LSTM shows that adversarial training can decrease model performance. As seen in Table 1, adversarial perturbation increased the RMSE from 1.425 to 1.554 and from 1.258 to 1.469 on solubility challenge datasets 1 and 2_1, respectively, but decreased it from 1.719 to 1.523 on dataset 2_2. Overall, adversarial perturbation changed kde4LSTM's RMSE by roughly ±0.2.
The inconsistent effect of adversarial training was further investigated using models trained on the augmented dataset. Because its training data lack multiple string representations of each molecule, kde4LSTM may have generalization problems. Such a generalization issue could direct the adversarial perturbation in a non-physical direction, since the model does not have complete knowledge of the chemical representation space. This hypothesis is reinforced by comparing kde10LSTMAug-NoAdv and kde10LSTMAug: when adversarial training is applied to a model trained on the augmented dataset, the performance improvement is larger (∼0.5) and consistent across all the test datasets.
Besides the immediate improvement in RMSE, increasing the ensemble size also improves the model's uncertainty estimates. Fig. 2 shows the density distributions of the aleatoric variance and the epistemic variance (related to the aleatoric uncertainty, AU, and the epistemic uncertainty, EU, respectively) for kde4LSTMAug (top six panels) and kde10LSTMAug (bottom six panels).
The increase in ensemble size led to a decrease in both uncertainties. AU distributions for kde4LSTMAug are centered around 4 logS², displaying a long tail that extends to values as high as 20 logS² in the worst case (solubility challenge 2_2). A similar trend is observed in the EU distributions. In contrast, the kde10LSTMAug model yields narrower distributions: the means remain relatively unchanged, but the tails are noticeably shorter, with the AU distribution ending at values around 10 logS².
Table 2 Comparison of RMSE and MAE between the models developed in this work and models from the literature, evaluated on the solubility challenge datasets and ESOL

Model | Challenge 1 RMSE | Challenge 1 MAE | Challenge 2_1 RMSE | Challenge 2_1 MAE | Challenge 2_2 RMSE | Challenge 2_2 MAE | ESOL RMSE | ESOL MAE
---|---|---|---|---|---|---|---|---
RF | 1.121 | 0.914 | 0.950 | 0.727 | 1.205 | 1.002 | |||
DNN | 1.540 | 1.214 | 1.315 | 1.035 | 1.879 | 1.381 | |||
DNNAug | 1.261 | 1.007 | 1.371 | 1.085 | 2.189 | 1.710 | |||
kde4LSTMAug | 1.273 | 0.984 | 1.137 | 0.932 | 1.511 | 1.128 | 1.397 | 1.131 | |
kde8LSTMAug | 1.247 | 0.984 | 1.044 | 0.846 | 1.418 | 1.118 | 1.676 | 1.339 | |
kde10LSTMAug | 1.095 | 0.843 | 0.983 | 0.793 | 1.263 | 1.051 | 1.316 | 1.089 | |
Linear regression25 | | | | | | | | 0.75 |
UG-RNN34 | 0.90 | 0.74 | |||||||
RF w/CDF descriptors27 | 0.93 | ||||||||
RF w/Morgan fingerprints36 | | 0.64 |
Consensus88 | 0.91 | ||||||||
GNN89 | ∼1.10 | | 0.91 | | 1.17 | |
SolvBERT90 | | | | | | | 0.925 | |
SolTranNet41 | | | 1.004 | | 1.295 | | 2.99 | |
SMILES-BERT91 | | | | | | | 0.47 | |
MolBERT40 | | | | | | | 0.531 | |
RT42 | | | | | | | 0.73 | |
MolFormer43 | | | | | | | 0.278 | |

^a Has overlap between training and test sets. ^b Pre-trained model was fine-tuned on ESOL.
Fig. 3 Parity plots for two selected models evaluated on the solubility challenge datasets: (i) kde4LSTMAug (top row) and (ii) kde10LSTMAug (bottom row). The left, middle, and right columns show the parity plots for solubility challenge 1,29 2-set1, and 2-set2,30 respectively. The Pearson correlation coefficient is displayed together with the RMSE and MAE. “acc-0.5” stands for the ±0.5 log% metric. Red dashed lines show the error limits within which a prediction is counted as correct when computing the ±0.5 log%. The correlation between predicted values and labels increases as more models are added to the ensemble; RMSE and MAE follow the same pattern. However, the ±0.5 log% decreases on set-2 of the second solubility challenge dataset (SolChal2-set2): while kde10LSTMAug improved the predictions of molecules that kde4LSTMAug predicted poorly, the predictions of molecules with smaller errors did not improve as much.
Comparing the performance of different models is a complex task, as performance metrics cannot be directly compared across models evaluated on distinct datasets. To address this issue, Panapitiya et al.89 curated a large and diverse dataset to train models with various architectures and molecular representations. They also compared the performance of these models on datasets from the literature.24,25,29,30,88,92–97 Although their models achieved an RMSE of ∼1.1 on their test set, using descriptors as molecular representations resulted in RMSE values ranging from 0.55 to ∼1.35 when applied to other datasets from the literature. According to their study, the solubility challenge datasets by Llinàs et al.29,30 were found to be particularly challenging due to their larger reproducibility error. Therefore, we focused on the Llinàs datasets to compare our performance with the literature.
Focusing on the solubility challenge 1 dataset,29 kde10LSTMAug is only ∼0.2 RMSE units worse than the best model available in the literature.34 The RMSEs of the challenge participants were not reported.31 The primary metric used to evaluate participants was the percentage of predictions within an error of ±0.5 logS units (called ±0.5 log%). By this metric, kde10LSTMAug correctly predicts 44.4% of the molecules, which would place our model among the top 35% of participants. The best-performing participant reported a ±0.5 log% of 60.7%.
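For clarity, the ±0.5 log% metric is simply the fraction of molecules whose predicted logS falls within 0.5 log units of the measured value, as in the short sketch below (arrays are illustrative):

```python
import numpy as np


def half_log_accuracy(y_true, y_pred) -> float:
    """Percentage of predictions within +/-0.5 logS of the measured value."""
    err = np.abs(np.asarray(y_pred) - np.asarray(y_true))
    return 100.0 * float(np.mean(err <= 0.5))


# Illustrative values only: two of three predictions fall within +/-0.5 logS.
print(half_log_accuracy([-3.2, -1.0, -4.5], [-2.9, -1.8, -4.4]))  # ~66.7
```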
The architectures of the models were not published in the findings of the first challenge.31 Nevertheless, the findings of the second challenge32 investigated the participants more thoroughly: participants were asked to identify their models' architectures and descriptors. The challenge is divided into two datasets. Set-1 contains logS values with an average interlaboratory reproducibility of 0.17 logS. Our kde10LSTMAug achieves an RMSE of 0.983 and a ±0.5 log% of 40.0% on this dataset; it therefore performs better than 62% of the published RMSE values and 50% of the published ±0.5 log% values. The best-performing entry was an artificial neural network (ANN) that correctly predicted 61% (±0.5 log%) of the molecules' logS using a combination of molecular descriptors and fingerprints. The second dataset (set-2) contains molecules whose solubility measurements are more challenging, with an average reproducibility error of 0.62 logS. On set-2, kde10LSTMAug achieves an RMSE of 1.263 and a ±0.5 log% of 23.3%; it performs better than 82% of the entries when considering the RMSE. Surprisingly, the ±0.5 log% does not follow this strong performance, exceeding only 32% of the entries. With respect to the literature, kde10LSTMAug has an RMSE only ∼0.1 higher than a GNN that used an extensive set of numeric and one-hot descriptors in its feature vector,89 and it performs better than a transformer model that uses SMILES strings and an adjacency matrix as inputs.41 The performance of these models is available in Table 2.
Notably, all participants in the second solubility challenge submitted some form of QSPR or descriptor-based ML model. Using descriptors provides an easy way to ensure model invariance with respect to the molecular representation and can be more informative, since descriptors can be physical quantities. However, selecting appropriate descriptors is crucial for developing descriptor-based ML models and often requires specialists with strong intuition about which physical and chemical properties are relevant for predicting the target quantity. Feature-based models are still considered the state of the art (SOTA) for solubility prediction. Recently, studies investigating different descriptors and fingerprints were performed.36,98 These studies showed that, similarly to the impact of data quality,28 the molecular representation also has a great impact on model performance. Although Tayyebi et al.36 achieved an MAE of 0.64 on solubility challenge 1 using Morgan fingerprints (MF), Zagidullin et al.98 reported poor performance when using MF. Our approach, in contrast, extracts information from simple string representations, a more straightforward form of raw data. Furthermore, we achieved performance close to the state of the art while balancing model size and complexity and using only a raw string input. This simplicity enables running the model on devices with limited computing power.
Lastly, transformer models have been used to address the problem of accurately predicting the solubility of small compounds. The typical workflow for transformers involves pre-training the model on a large dataset and subsequently fine-tuning it for a specific downstream task using a smaller dataset. Most existing models were either trained on the ESOL25 dataset or pre-trained on a larger dataset and fine-tuned using ESOL; hence, the generalizability of those models cannot be verified. Francoeur and Koes41 considered two versions of their model, SolTranNet. The first version was trained on the ESOL dataset using random splits and achieved an RMSE of 0.278. The deployed version of SolTranNet was instead trained on AqSolDB;1 when evaluated on ESOL, it presented an RMSE of 2.99. Our model achieved an RMSE of 1.316 on ESOL, outperforming the deployed version of SolTranNet, but it cannot be directly compared with models trained on ESOL.
Our model, based on a deep ensemble of recurrent neural networks (RNNs), was trained on the AqSolDB dataset using SMILES randomization for data augmentation and validated on the solubility challenges by Llinàs et al.29,30 It directly processes molecular string representations, such as SMILES or SELFIES, to predict solubility without relying on pre-selected descriptors. This approach not only simplifies the prediction process but also enhances its applicability across a broader chemical space. In addition, we show that this deep ensemble RNN model achieves performance similar to a random forest (RF) using PaDEL descriptors; RFs with descriptors have been shown to perform relatively well on other datasets.
By carefully balancing performance and complexity, we developed a model with acceptable accuracy that is not computationally intensive, which enables us to host it on a static website using TensorFlow JS. Our model was designed to operate on devices with limited computational resources, aiming to broaden the accessibility of advanced solubility prediction tools: the application runs satisfactorily on laptops and smartphones. This approach ensures wider applicability, catering to users without access to high-performance computing facilities, improving usability and flexibility, and decreasing implementation costs. We believe this is a considerable step toward improving the usability of deep learning models and bringing them to a broader scientific community.
Footnote
† https://archive.org/ |
This journal is © The Royal Society of Chemistry 2024 |