Swarn Jha*a, Matthew Yenb, Yazmin Soto Salinasa, Evan Palmera, John Villafuertea and Hong Liang*ac
aJ. Mike Walker ‘66 Department of Mechanical Engineering, Texas A&M University, College Station, TX 77843-3123, USA. E-mail: swarn.jha14@tamu.edu; hliang@tamu.edu; Fax: +1 979-845-3081; Tel: +1 979-862-2623
bArtie McFerrin Department of Chemical Engineering, Texas A&M University, College Station, TX 77843-3123, USA
cDepartment of Materials Science and Engineering, Texas A&M University, College Station, TX 77843-3123, USA
First published on 4th January 2023
Machine learning (ML) has been the focus in recent studies aiming to improve battery and supercapacitor technology. Its application in materials research has demonstrated promising results for accelerating the discovery of energy materials. Additionally, battery management systems incorporating data-driven techniques are expected to provide accurate state estimation and improve the useful lifetime of batteries. This review briefly describes the ML process and common algorithms, along with the advantages, disadvantages, and limitations of first-principles materials science research techniques. The focus of discussion is on the latest approaches, algorithms, and model accuracies for screening materials, determining structure–property relationships, optimizing electrochemical performance, and monitoring electrochemical device health. We emphasize the current challenges of ML-based energy materials research, including limited data availability, sparse datasets, and high dimensionality, which can lead to low generalizability and overfitting. An analysis of ML models is performed to identify the most robust algorithms and important input features in specific applications for batteries and supercapacitors. The accuracy of various algorithms for predicting remaining useful life, cycle life, state of charge, state of health, and capacitance has been collected. Given the wide range of methods for developing ML models, this manuscript provides an overview of the most robust models developed to date and a starting point for future researchers at the intersection of ML and energy materials. Finally, an outlook on areas of high-impact research in ML-based energy storage is provided.
ML algorithms can be placed into four main categories: supervised, unsupervised, reinforcement, and semi-supervised learning.11 Supervised learning algorithms utilize labeled data, where the model determines the relationship between known features and an output. Meanwhile, unsupervised learning algorithms are utilized when the data has no labels. This allows users to identify hidden patterns and gain insights from a large, complex dataset that has no ground truth. Semi-supervised learning algorithms are the middle ground between supervised and unsupervised learning, where the available data has both labeled and unlabeled observations. Reinforcement learning also does not use ground truth labels but is capable of taking actions that maximize rewards to ultimately reach a solution.12 These algorithms provide flexible and efficient approaches to discovering new understandings from “big data”. The decision tree in Fig. 1 illustrates the situations in which each type of ML algorithm can be used.
Fig. 1 Decision tree identifying when supervised, semi-supervised, unsupervised, and reinforcement learning should be used.
Most approaches in physical science studies rely on supervised learning algorithms to train models that make predictions.13 In prediction tasks, each data point is described by a set of features or descriptors, and the output ground-truth label of a data point is already known. A trained supervised ML model uses this set of features to map the relationship to its respective output label. Prediction tasks that involve supervised ML include regression and classification, where the output is a continuous value and a categorical label, respectively. A key element of supervised learning is that error can be evaluated quantitatively based on discrepancies between the prediction and the actual output, also known as a loss function. During training, an optimization algorithm like gradient descent is used to minimize this loss function and achieve the minimum error.
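The loss-minimization step described above can be sketched with a toy example (all data synthetic): plain gradient descent on a mean-squared-error loss recovers the slope and intercept of a noisy linear relationship.

```python
import numpy as np

# Toy supervised task: recover y = 2x + 1 by minimizing a mean-squared-error
# loss with plain gradient descent (all data synthetic).
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.01, 50)

w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    y_hat = w * x + b                       # model prediction
    grad_w = 2 * np.mean((y_hat - y) * x)   # dL/dw for the MSE loss
    grad_b = 2 * np.mean(y_hat - y)         # dL/db
    w -= lr * grad_w
    b -= lr * grad_b
```

After training, `w` and `b` approach the true slope 2 and intercept 1, illustrating how the optimizer drives the discrepancy between prediction and ground truth toward its minimum.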
Although less commonly used in physical science studies, unsupervised learning algorithms are important for descriptive tasks, like clustering and anomaly detection.14 As with supervised ML, each data point is described by a set of features. However, the output label of a data point is unknown, and rather than predicting a continuous value, the output is the cluster or clusters to which a data point belongs. The purpose of unsupervised ML is to enable the automatic labeling of data points in a dataset, which would be extremely time-consuming to perform manually. The challenge of unsupervised learning tasks is that without any ground-truth labels, determining model performance is much more difficult. Evaluating unsupervised models requires internal validation metrics, which rely on quantifying how similar data points within the same cluster are to each other and how different they are from data points in other clusters.
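The clustering-and-internal-validation idea above can be sketched as follows (a minimal k-means on two synthetic point clouds, scored with the silhouette coefficient; no ground-truth labels are used at any point):

```python
import numpy as np

# Cluster two synthetic "material families" with a minimal k-means, then
# score the result with the silhouette coefficient, an internal validation
# metric computed without any ground-truth labels.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])

centroids = np.array([X[0], X[-1]])          # one seed from each region
for _ in range(20):
    d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
    labels = d.argmin(axis=1)                # assign to nearest centroid
    centroids = np.array([X[labels == k].mean(axis=0) for k in range(2)])

# Silhouette: s = (b - a) / max(a, b), where a is the mean intra-cluster
# distance and b the mean distance to points in the other cluster
D = np.linalg.norm(X[:, None] - X[None], axis=2)
sil = []
for i in range(len(X)):
    same = labels == labels[i]
    a = D[i, same & (np.arange(len(X)) != i)].mean()
    b = D[i, ~same].mean()
    sil.append((b - a) / max(a, b))
mean_sil = float(np.mean(sil))
```

A mean silhouette near 1 indicates compact, well-separated clusters; values near 0 or below flag poor structure, which is how model quality is judged when no labels exist.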
Semi-supervised learning is especially useful in physical science studies, where small labeled datasets are commonly an issue. One important application is active learning, where the algorithm is initially trained on a labeled dataset and then poses queries to the user in a human-in-the-loop process to determine the label for the queried data point.15 This framework has excellent potential in accelerating the search for optimal material designs through guided experimentation, as will be demonstrated throughout this review.
Reinforcement learning is different from both supervised and unsupervised learning in that no initial dataset is needed. The concept is inspired by behavioral psychology, where the algorithm learns through trial and error, ultimately aiming to maximize a reward function while interacting with its environment.16 It also has great potential for accelerating the materials design processes.
Y = βx + C + ε | (1) |
Y = β1x1 + β2x2 +…+ βnxn +C +ε | (2) |
Linear regression is attractive due to its excellent interpretability, unparalleled performance when dealing with linear data, as well as its simplicity. For example, Fig. 2a shows how the data points would be modeled by linear regression, where increasing x values lead to increasing y values. On the other hand, dealing with linear regression introduces some disadvantages that include the assumption of linearity between input and output variables, as well as being insufficient in modeling complex relationships.23 However, nonlinearity may be accounted for by using the “kernel trick”, which transforms a feature before performing the regression.22
(3) |
The classification with the highest probability is the output. This is demonstrated in Fig. 2b, where data points lie between two categories (binary classification) based on the independent variable. Logistic regression is resilient to overfitting, has high computational efficiency, avoids making assumptions regarding the distribution of classes, and achieves high accuracy with linear problems. However, logistic regression is prone to overfitting and is unable to accurately model nonlinear decision boundaries.25
Fig. 4 (a) Active learning coupled with MD to simulate the collective diffusion of Li-ions in Li3B7O12, where colored arrows illustrate simultaneous movement direction. Reprinted with permission from ref. 54. Copyright 2020 American Chemical Society. (b) General workflow for developing machine learning models and their application in energy materials research. Some applications of trained machine learning models include guided experimentation and data mining. Created with http://biorender.com/. |
By combining ML algorithms with first-principles computations, the result is a trained model that can quickly derive material properties. This allows researchers to screen for candidate materials from large materials databases and predict properties of novel materials without having to perform any more time-consuming computational simulations.40,55–57 A general workflow for developing ML models and their application in materials science research is illustrated in Fig. 4b. Researchers that have applied ML methods to DFT calculations discovered that optimized models were capable of making accurate predictions of material properties outside of the training dataset.58 Some researchers have even bypassed performing DFT calculations by using easily accessible material properties from previous experiments or materials databases to make predictions for other properties, reducing the computational cost of the materials screening process by several orders of magnitude.59,60 A hierarchy demonstrating materials design through a combination of ML, first-principles methods, and experimentation is illustrated in Fig. 5. At the bottom is experimentation, the slowest among the methods, which provides the ground truth information. In the next layer up is computational simulation like DFT and MD, which is faster than experimentation, but still computationally inefficient and constrained in its abilities. At the top is ML with the highest speed for properties prediction, but can have limited accuracy and generalizability. Data from experiments and computational simulations are leveraged for training and improving models that map features to target properties in ML models. For materials discovery and design through ML, the ultimate goal is to accelerate the identification of new materials with excellent properties outside of the given training dataset. Thus, model extrapolation is necessary, followed by verification through theoretical simulations and experimentation.
Fig. 5 Hierarchy of materials design through ML involving a framework of experimentation, computational simulations, and ML model training. Adapted from ref. 118, 196–198 with permission from Elsevier and the Royal Society of Chemistry. Copyright 2018 Elsevier. Copyright 2018 Royal Society of Chemistry. Created with http://biorender.com/. |
Determining structure–function relationships from training datasets through ML is key to significantly accelerating materials discovery and design.61 ML was first developed around the 1950s. Some techniques commonly used in ML, such as linear and logistic regression, were introduced much earlier, as shown in the timeline of Fig. 6a. In the general ML workflow, the first step involves data collection. This can be done through experimentation or first-principles calculations or can be collected from public databases. Next, pre-processing data includes splitting the dataset into training and test sets, normalization, and homogenization. Then, feature engineering can be carried out to reduce dimensionality, remove highly correlated features, decrease model complexity, and improve model accuracy.62 Working with high-dimensional data can lead to sparse and computationally intensive models. Some commonly used dimensionality reduction techniques include principal component analysis (PCA), Pearson correlation, and least absolute shrinkage and selection operator (LASSO). During model training, techniques such as grid search and cross-validation are used for hyperparameter optimization. Evaluation metrics, commonly including coefficient of determination (R2), root mean squared error (RMSE), mean absolute error (MAE), and mean absolute percent error (MAPE) are then used to quantify model accuracy.63 Ultimately, ML can be used to generate knowledge-based rules for screening a large number of materials.64 Another application of ML is active learning systems capable of guiding experimentation to achieve much faster material design and optimization.65 ML has also been introduced for estimating the health of energy storage devices like batteries and supercapacitors with computational efficiency useful for real-time management.34 As shown in Fig. 
6b and c, the number of publications regarding the application of ML in battery and supercapacitor research each year in the past decade has grown substantially.
ML has demonstrated high-quality results for designing energy storage materials that have been experimentally validated by researchers in a design-to-device process. The design-to-device process is key for confirming the accuracy of ML and simulated optimal materials design. Liow et al. demonstrated a design-to-device approach by training a gradient boosting regression (GBR) model to predict the discharge capacities of Li-ion batteries (LIBs) based on cathode synthesis variables.66 Experimental validation of the optimal synthesis parameters demonstrated a high discharge capacity of 209.5 mA h g−1 and coulombic efficiency of 86%. Similarly, for a supercapacitor design-to-device approach, Ghosh et al. used a RF model to predict the specific capacitance of a novel electrode material, cerium oxynitride, and observed a RMSE of 0.3 F g−1.67 With experimental validation, the synthesized supercapacitor achieved a high specific capacitance of 214 F g−1 and high cyclic stability of 100% after ∼10000 cycles. Park et al. employed a combined ML-DFT screening using a multilayer perceptron (MLP) model to screen formation energies of layered-oxide K-ion battery cathode materials.68 After screening potential candidates, the predicted material with the greatest stability was K0.3Mn0.9Cu0.1O2, which was then synthesized to experimentally validate the simulated data. The synthesized compound proved to be in good agreement with the predicted values, with the synthesized battery demonstrating high discharge capacity, power density, and cyclic stability. Dave et al.145 proposed a closed-loop experimental process that combines robotics for automated experimentation with a Bayesian optimization framework to optimize the ionic conductivity of non-aqueous electrolytes. After 42 experimental iterations, 6 electrolytes were measured with ionic conductivities above 12 mS cm−1. 
When applied in a cell for validation, the proposed electrolyte demonstrated improved discharging capacity after 4C fast-charging. Overall, this achievement demonstrated a 6-fold time acceleration compared to random search experimentation. Verduzco et al. demonstrated the effectiveness of ML in accelerating the search for high ionic conductivity lithium lanthanum zirconium oxide (LLZO) garnets.69 Emulating the search for the highest ionic conductivity LLZO garnet as of 2020, the researchers utilized an active learning framework to guide which experimental data points to train the RF model on and predict the best candidate composition. Ultimately, 30% fewer experimental investigations of different LLZO garnets published since 2005 were necessary to identify the highest ionic conductivity composition. As described above, ML has already demonstrated its potential for accelerating the search for novel high-performing energy storage materials through studies that have carried out the entire design-to-device process.
Previous reviews on ML-guided energy materials and energy storage devices research have demonstrated a trend of novel approaches allowing for an expanding number of more complex applications. Chen et al. provided a critical review of the applications of ML to predict ionic conductivity, elastic moduli, and interatomic potentials of Li compounds.70 Wang et al. focused on recent applications of ML for predicting the electrochemical performance of carbon-based supercapacitors.71 Liu et al. compiled several studies regarding the application of ML to predict the properties of rechargeable battery electrolytes and electrode materials.72 Kang et al. portrayed how ML applications for energy materials have assisted quantum chemistry techniques, allowing for the prediction of electronic structure, crystal structure, LIB performance, and the optimization of experimental studies.73 These reviews are mainly focused on the novel applications of ML research in energy materials and storage devices, but there is a need for a review of the predictive accuracy of these approaches to highlight the most promising approaches and the most robust models for specific applications.
This review compares and provides insights into the accuracies of recently developed ML approaches in energy materials research. Here, we provide a review of recently developed ML-guided models applied to predicting energy material properties, discovering materials, and predicting and optimizing the performance of batteries and supercapacitors. We also provide a direct comparison of the various ML algorithms that have been proposed to clearly illustrate their prediction accuracies, highlight the most successful models, and recommend future directions for ML studies in energy storage technology. Finally, we highlight opportunities for future research in applying ML to aid advancements in energy materials.
Fig. 7 2D representation of homobenzylic ether molecules by reducing 49 features into 2 principal components, and color-coded by oxidation potential. Reprinted with permission from ref. 57. Copyright 2020 American Chemical Society. |
Fig. 8 (a) GPR prediction of optimal oxidation potentials (1.40–1.70 V) guided by Bayesian optimization, allowing for a 5-fold increase in efficiency over DFT. Reprinted with permission from ref. 57. Copyright 2020 American Chemical Society.85 (b) Examples of designed features based on the number of atoms with the same coordination number, rings, and radicals. Reprinted with permission from ref. 85. Copyright 2018 American Chemical Society.90 |
The redox stability of electrolytes is also essential for ensuring electrochemical stability at the anode and cathode. Okamoto & Kubo applied Gaussian KRR and GBR to predict the redox potentials of electrolyte additives for LIBs using a dataset of 149 entries.85 The researchers developed a method for describing the molecular structures using a total of 22 structural features. The features, shown in Fig. 8b, describe the number of atoms with the same coordination number, the number of rings, and indicate any radicals. GBR demonstrated the highest accuracy for predicting both reduction and oxidation potentials, achieving R2 scores of 0.851 and 0.643 in the test set, respectively. In contrast to the two previous studies mentioned, performing feature elimination led to slightly lower accuracy. In the test set, model accuracy suffered due to the presence of outliers. Under closer examination, the outliers predicted by ML helped to reveal the possible underestimation of reduction potentials by the ab initio molecular orbital calculations used for ground truth values. This interesting find illustrates how ML can reveal patterns in the dataset that otherwise would have gone unnoticed.
Radical redox-active polymers have gained attention recently as a cleaner and low-cost alternative to metal-based materials in batteries.86 This has attracted the attention of ML researchers looking to quickly predict the properties of polymeric materials and avoid the high computational cost of atomistic simulations. Li & Tabor trained a GPR model using electron affinity estimated through the semi-empirical GFN2-xTB method, molecular fingerprints, and 3D descriptors, a relatively simple set of input features, to predict the reduction potential of polymers.87 The cross-validation results achieved an R2 of 0.83 when compared to experimental values, demonstrating the strong potential for this method to lower the computational cost of DFT calculations by coupling GPR with a semi-empirical method.
With a small dataset, low generalizability and low predictive accuracy are difficult to overcome, even with feature elimination. From these studies, some of the most important features identified include electron affinity, HOMO–LUMO gap, number of tricoordinate carbons, number of tetracoordinate carbons, number of monodentate oxygen atoms, and number of bidentate oxygen atoms. Table 1 summarizes the ML models mentioned here trained for redox potential prediction. Among the studies mentioned here, it is unclear whether KRR or GBR has better performance for redox potential prediction. However, GPR has gained attention for this specific application, achieving an RMSE below 0.01 V due to its nonparametric form, making it useful for more complex mapping functions.57 GPR is widely used among computational chemists and material scientists due to its ability to interpolate between high dimensional data points and its probabilistic nature that enables uncertainty quantification.88 Through uncertainty quantification, ML can be utilized to expedite high-throughput screening and accelerate the discovery of materials with extraordinary properties in an active learning process.
Machine learning algorithm | Advantages | Disadvantages | Performance | Ref. |
---|---|---|---|---|
ANN | Effective non-linear modeling, good generalizability, implicit feature selection | Requires extensive hyperparameter tuning, prone to overfitting, error increases with feature elimination | RMSE = 0.179 V | 58 |
KRR | Fits non-linear data well, relatively simple | Strong reliance on kernel selection, unstable with multicollinearity, requires composite features | RMSE = 0.158 V | 58 |
R 2 = 0.801 (reduction potential) | 85 | |||
R 2 = 0.512 (oxidation potential) | 85 | |||
GBR | Effective non-linear modeling, implicitly selects features | Large number of hyperparameters, requires a large training dataset | RMSE = 0.212 V | 58 |
R 2 = 0.851 (reduction potential) | 85 | |||
R 2 = 0.643 (oxidation potential) | 85 | |||
GPR | High accuracy, provides uncertainty quantification | Poor efficiency with large datasets | RMSE = 0.097 V | 57 |
RMSE = 0.28 V (reduction potential) | 87 |
Fig. 9 Time comparison of GN paired with Bayesian optimization (GN-BO) to DFT paired with particle swarm optimization (DFT-PSO) for prediction of 25 different crystal structures. GN-BO requires three orders of magnitude less time compared to DFT-PSO. Reprinted with permission from ref. 90. Copyright 2022 Springer Nature. |
Current research in crystal structure prediction has shown the prevalence of active learning frameworks, specifically BO, and the importance of uncertainty quantification. This approach can drastically decrease computational cost by a few orders of magnitude compared to DFT. However, it is still a challenge for researchers to identify an ML model with high prediction accuracy for predicting crystal structures. Table 2 summarizes the performance of the ML models in the studies mentioned here. GNs show good potential as they can easily represent the structural features as crystal graphs, but this area of research still needs more attention.
Machine learning algorithm | Advantages | Disadvantages | Performance | Ref. |
---|---|---|---|---|
RF | High accuracy, not prone to overfitting, provides feature importance, implicitly selects features | Computationally intensive with larger datasets and more trees | Accuracy = 85–97% | 55 |
SVM | Self-adjusts kernel function, scales up to high-dimensional data well | Strong reliance on kernel selection | Accuracy = 87–93% | 55 |
GN | Enables physically meaningful descriptors for crystal structure | Currently low accuracy | MAE = 20.8 meV per atom | 89 |
Shen et al. performed least squares regression (LSR) to predict the dielectric breakdown of polymer-based nanocomposites as a function of dielectric constant, electrical conductivity, and Young's Modulus, obtaining an R2 value of 0.91 for the training/test set combined.96 The predictive function generated by LSR demonstrated that adding nanofillers with low dielectric constant and low electrical conductivity to the polymer nanocomposite improves dielectric breakdown strength. Kim et al. utilized KRR, RF, and LASSO to predict the intrinsic dielectric breakdown field of dielectrical insulators with a dataset of 82 entries.93 Though all three models achieved similar predictive accuracies, only LASSO generated an explicit functional formula, which related band gap and maximum phonon frequency to the dielectric breakdown field, achieving a test set R2 of 0.72. Their LASSO model revealed that boron-containing compounds were most consistently identified as having a high dielectric breakdown. Using the same dataset as Kim et al., Yuan et al. used genetic programming to search for the best function for predicting dielectric breakdown, also finding that band gap and maximum phonon frequency were the most important features.97 Their best model had a lower RMSE of 237 MV m−1 compared to 692 MV m−1 using the LASSO model determined by Kim et al. The genetic programming method was able to improve upon prediction accuracy, while also balancing speed and accuracy. However, in both approaches, overfitting to training data was a common risk. Kumar et al. proposed an approach that addresses the issue of instability and overfitting of ML models to small datasets.59 Using the same dataset of materials and 8 features used by Kim et al., Bootstrapped projected gradient descent was used to select the most important features along with dimensional analysis through the Buckingham-Pi Theorem. 
The researchers arrived at an equation relating band gap and nearest-neighbor distance to the intrinsic breakdown field, achieving an R2 of 0.85, outperforming the LASSO model obtained by Kim et al.
Currently, low data availability on dielectric breakdown has significantly affected the accuracy of trained ML models. The most common ML pipeline for predicting dielectric breakdown involves using a nonlinear function to generate compound features. Band gap, nearest neighbor distance, and phonon cutoff frequency have been found to have high importance in dielectric breakdown prediction. Table 3 summarizes the performance of the ML models used in the studies mentioned here. Future studies should continue exploring more ML algorithms and investigate how to overcome the issue of low predictive accuracy and limited generalizability when training on small datasets.
Machine learning algorithm | Advantages | Disadvantages | Performance | Ref. |
---|---|---|---|---|
LSR | High accuracy, simple and efficient, outputs mathematical function relating descriptors to target | Strong sensitivity to outliers, requires iterative optimization and good starting parameter values | R 2 = 0.91 | 96 |
KRR | Performs well with high dimensionality, calculates feature importance | Strong reliance on kernel selection | R 2 = 0.69 | 93 |
RF | Calculates feature importance, performs well with high dimensionality | Computationally intensive with larger datasets and more trees, low accuracy | R 2 = 0.64 | 93 |
LASSO | Performs feature selection, outputs mathematical function relating descriptors to target | Unstable with multicollinearity, requires composite features | R 2 = 0.72 | 93 |
Genetic programming | Outputs mathematical function relating descriptors to target, automatically generates composite descriptors | Prone to overfitting, may converge at local minimum | RMSE = 1.48 MV m−1 | 97 |
Materials databases are commonly insufficient in size and diversity for training highly accurate ML models, resulting in underfitting. Zhang et al. found that increasing model accuracy is associated with increasing degrees of freedom (and data size), as demonstrated in Fig. 10a.103 This higher degree of freedom leads to lower accuracy in unknown domains. By incorporating a crude estimation property, or a low accuracy prediction of the target property through non-expensive Generalized Gradient Approximation (GGA) calculations, the researchers decreased the RMSE of a KRR model by 33% (0.34 eV) in band gap prediction. This demonstrates a feasible approach for increasing the accuracy of models with high bias, when collecting more data is too expensive or not an option.
Fig. 10 (a) KRR cross-validation RMSE achieved for prediction of band gaps as a function of the average degree of freedom and data size of the training set. Reprinted with permission from ref. 103. Copyright 2018 Springer Nature. (b) Convex hull of Li–Ge calculated through DFT and predicted using SVR and KRR models. Reprinted with permission from ref. 105. Copyright 2019 Elsevier. |
Among recent studies on band gap prediction, SVR, GPR, and KRR are the most commonly used. Table 4 summarizes these studies and compares their performances. GPR was demonstrated to have very low prediction error out of these algorithms. As mentioned in a few studies, the lack of data on wide band gap materials has limited the accuracy and generalizability of ML. A potential workaround is to limit the dataset to materials with more narrow band gaps at the expense of generalizability. Another proposal is the use of a crude estimation property that can improve model accuracy with inexpensive computations. Further studies should be performed on overcoming low data availability in band gap prediction.
Machine learning algorithm | Advantages | Disadvantages | Performance | Ref. |
---|---|---|---|---|
SVM | High accuracy, excellent generalizability | Strong reliance on kernel selection | RMSE = 0.45 eV | 101 |
RMSE = 0.17 eV | 102 | |||
KNN | Simple to implement, nonparametric modeling | Poor efficiency with larger dataset, challenging optimization of number of neighbors, sensitive to outliers | RMSE = 0.54 eV | 101 |
KRR | Relatively simple | Low accuracy, strong reliance on kernel selection | RMSE = 0.72 eV | 101 |
RMSE = 0.19 eV | 102 | |||
GPR | High accuracy, highly efficient, improves with more features | Poor efficiency with larger datasets | RMSE = 0.14 eV | 102 |
Bootstrap aggregating | Not prone to overfitting, calculates feature importance | Low interpretability | RMSE = 0.16 eV | 102 |
LASSO | Outputs mathematical function relating descriptors to target | Unstable with multicollinearity | RMSE = 0.71 eV | 103 |
Noh et al. proposed an approach that applies uncertainty quantification to crystal graph convolutional neural networks (CNNs) through Monte Carlo sampling and dropout for formation energy prediction.107 In each iteration, interlayer connections between neurons are randomly dropped with a 0.2 probability, and 200 random samples are taken to produce a Bayesian approximation. The Monte Carlo sampling mean and variance correspond with the formation energy prediction and uncertainty, respectively. The materials that pass the formation energy screening criteria within a 95% confidence interval undergo detailed DFT relaxations to refine the prediction of the most promising material candidates. With this scheme and a dataset consisting of >7000 materials, 67% of the materials selected by a direct DFT screening method were successfully identified, while also reducing the required number of DFT calculations by a factor >50. Without uncertainty quantification, only 39% of the materials were successfully identified. This study demonstrates the significant performance boost to high-throughput screening offered by uncertainty quantification of ML models. Further exploration of this uncertainty quantification approach should be applied to other material properties and algorithms in high-throughput screening applications.
Overall, KRR and ANN have both demonstrated good accuracy levels for formation energy prediction, even achieving high accuracy in predicting the convex hull of Li–Ge shown in Fig. 10b. Table 5 summarizes the ML models mentioned here for predicting formation energies. Further studies should continue exploring how crystal structure information can be represented in input formats understandable by ML models. Other algorithms, such as graph neural networks should be further explored for formation energy prediction, as their accuracy is currently low.
Machine learning algorithm | Advantages | Disadvantages | Performance | Ref. |
---|---|---|---|---|
KRR | Relatively simple, fits non-linear data well | Strong reliance on kernel selection, does not scale to larger datasets well | RMSE = 20.3 meV per atom | 105 |
SVR | Efficient with large datasets | Strong reliance on kernel selection | RMSE = 20.9 meV per atom | 105 |
ANN | Dropout reduces model size with little impact on accuracy, excellent generalizability, capable of transfer learning | Requires extensive hyperparameter tuning, prone to overfitting | RMSE = 66.1 meV per atom | 104 |
Research on “beyond Li-ion” technologies such as Na, K, Ca, Mg, and Al-ion batteries has been gaining attention recently for their potential to achieve lower costs and greater energy density.120 Joshi et al. trained deep neural network, SVR, and KRR models to predict the voltage range with a training set of 3977 electrode materials consisting of Li-ion and several alternative metal-ion compounds.121 PCA reduced the feature vector dimensionality from around 237 to 80, allowing all three ML methods to achieve a test set MAE of 0.42–0.46 V. Surprisingly, when the models were used to predict Na electrode voltage ranges, the MAE improved from 0.93–1.25 V when using the entire training dataset to 0.62–0.70 V when using Li materials only. Though Na materials were not contained in either of these datasets, the dataset containing only Li materials was more coherent than the dataset spanning 6 different metal-ion chemistries, of which 65% were Li materials. This demonstrates the importance of a coherent training set, and the detriment that can be caused by including sparse material data. However, this issue may be overcome through the use of transfer learning to predict the properties of the sparser materials.
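Dimensionality reduction of the kind Joshi et al. applied can be sketched with a plain SVD-based PCA. The random matrix below is a placeholder for real electrode descriptors; only the 237-to-80 reduction mirrors the study.

```python
# Minimal PCA via SVD: project centered features onto the top-k principal axes.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 237))          # 500 hypothetical electrode materials

Xc = X - X.mean(axis=0)                  # center each feature
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 80
X_reduced = Xc @ Vt[:k].T                # (500, 80) low-dimensional features

# Fraction of total variance retained by the first k components.
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
print(f"reduced shape: {X_reduced.shape}, variance retained: {explained:.2f}")
```

The reduced matrix would then be passed to the downstream regressor (DNN, SVR, or KRR) in place of the raw descriptors.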
Houchins et al. trained a Behler–Parrinello neural network model for predicting energy–volume curves, entropy, and Gibbs energy of NMC cathodes with a dataset of 12962 points, performing in good agreement with DFT calculations and experimental measurements.117 Then, using grand canonical Monte Carlo simulations to simulate Li-vacancy ordering or discharging/charging of the cathode, the neural network was able to accurately predict the voltage profiles of the NMC cathode and several other cathode materials. Okubo et al. investigated how well RF can predict the improvement of cathode capacity retention based on coating, doping, electrolyte additives, functional binders, cut-off voltage, and C-rate.122 The test set achieved an R2 of only 0.52, likely a result of the limited dataset size. Most notably, the C-rate and cut-off voltage were identified as having the largest influence on capacity retention improvement. Takagishi et al. developed an ANN regression model for predicting the specific resistance of Li-ion electrode structures, which performed with an R2 of 0.99 on validation data.123 Then, BO was applied to solve the inverse problem of optimizing the process parameters for manufacturing a battery electrode with the minimum specific resistance, which was predicted to be 47 Ω m. This framework is demonstrated in Fig. 11.
Fig. 11 Workflow for predicting Li-ion battery electrode specific resistance by utilizing 3D physio-chemical simulations and an ANN model. Further, Bayesian optimization was performed to identify the optimal electrode properties and processing parameters. Reprinted with permission from ref. 123. Copyright 2019 Multidisciplinary Digital Publishing Institute. |
Data from both successful and failed experiments can be leveraged by ML to assist synthetic chemists in determining optimal synthesis conditions. Moosavi et al. developed an approach based on the combination of experimentation, ML, and GA to determine the optimal conditions for synthesizing metal organic frameworks (MOFs) by varying 9 parameters, ultimately achieving a BET surface area of 2045 m2 g−1.124 The idea behind this approach is to use GAs to guide the selection of the 9 experimental parameters while analyzing crystallinity and phase purity as the objective function. After collecting data from over 120 failed and partly successful experiments, the researchers trained a RF model on the 9 parameters to predict crystallinity and phase purity. From this trained RF model, the importance of each parameter was obtained and used as a quantified form of chemical intuition to search the chemical space more effectively in subsequent experiments. In other words, by varying the most important parameters as determined by RF much more frequently than the least important ones, the chemical space can be explored more efficiently without sacrificing sampling accuracy, significantly increasing the probability of finding successful synthesis conditions. This study demonstrates the value of utilizing both failed and successful experiments in training ML models to obtain the chemical intuition and accuracy needed to search chemical spaces efficiently. However, researchers and journals typically do not publish failed experimental data, wasting valuable information that could be used in training less biased and more useful ML models.
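The importance-guided search described above can be illustrated as follows. The parameter names and importance values are invented for this sketch; only the idea of perturbing important parameters more frequently is taken from the study.

```python
# Sketch: vary each synthesis parameter with probability proportional to its
# RF-derived importance (the importance vector here is a made-up stand-in for
# RandomForest's feature_importances_).
import numpy as np

rng = np.random.default_rng(2)

params = ["temperature", "time", "solvent_ratio", "metal_conc", "linker_conc",
          "modulator", "pH", "stir_rate", "volume"]
importance = np.array([0.30, 0.05, 0.20, 0.15, 0.10, 0.08, 0.05, 0.04, 0.03])

def propose_experiment(baseline, step=0.1):
    """Perturb each parameter with probability proportional to its importance."""
    p_vary = importance / importance.max()      # most important varied every time
    vary = rng.random(len(baseline)) < p_vary
    return baseline + vary * rng.normal(scale=step, size=len(baseline))

baseline = np.zeros(len(params))
trials = np.array([propose_experiment(baseline) for _ in range(1000)])
vary_freq = (trials != 0).mean(axis=0)
for name, f in zip(params, vary_freq):
    print(f"{name:>14s} varied in {f:.0%} of proposals")
```

In the actual workflow these proposals would feed a GA whose objective is the measured crystallinity and phase purity of each synthesized sample.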
Quantum mechanical approaches aided by data-driven ML have been proposed to discover structure–property relationships in electrode materials. Deringer et al. utilized Gaussian approximation potential (GAP) trained with DFT data to predict the energies of porous and graphitic carbon systems, with accuracy within 2 kJ mol−1 of DFT calculations.118 Then, nuclear magnetic resonance, pair distribution function, ab initio random structure searching, and MD simulations were utilized to understand the mechanism of Na intercalation useful for both Na-ion batteries and supercapacitors. Fig. 12 illustrates examples of the simulated porous carbon structures using the researchers' proposed GAP-driven MD method. The smaller boxes at the bottom provide a closer view of some examples of porous structures. Dou & Fyta demonstrated the prediction of adsorption energies of alkali elements on 2D transition metal dichalcogenides using DFT-calculated energy features.125 An R2 of 0.97 was achieved using ordinary least squares regression, which also revealed a strong linear correlation with the energy of the lowest unoccupied state. Choi et al. utilized the 3D distribution of electrostatic potentials (ESP) as features for an ANN model capable of predicting discharge energy density, capacity fading, and discovering novel inorganic crystalline cathode materials for LIBs.116 PCA was utilized to reduce the dimensionality of the ESP vectors, and the trained ANN model achieved an average test set percent error of 12.1% and 19.6% for predicting discharge energy density and capacity fading, respectively.
Fig. 12 Gaussian approximation potential-driven MD simulations used to generate structural models of disordered carbon fragments with varying porosity. Reprinted with permission from ref. 118. Copyright 2018 Royal Society of Chemistry. |
The high accuracies achieved by DFT-driven ML models demonstrate their feasibility for rapid electrode optimization and for providing insights into energy storage materials, with ANN obtaining some of the highest accuracies. Further research should explore ML potentials for atomistic simulations that can be used to study electrode materials several orders of magnitude faster than quantum mechanical methods.126 ML potentials are an emerging area of research that has gained extensive focus recently for its ability to provide atomistic insights into complex materials and even generate new materials by modeling the atomic energies and interatomic potentials.127 However, there are still ongoing challenges that limit their accuracy and transferability. One of these limitations is the construction of ML potentials for molecules containing several different elements, since the configuration space grows rapidly to a prohibitively large size.128 This consequently requires approximations that limit the accuracy or the transferability of the potentials. As with ML models in general, a large dataset is necessary for training an accurate ML potential model, where the dataset is typically generated by computationally intensive DFT simulations.129 Another current limitation is that ML potentials typically only take into account local electrostatic interactions, so systems where long-range interactions are prevalent are modeled less accurately.130 The interested reader is referred to state-of-the-art ML potentials research.131–134
The ionic conductivity characterizes the mobility of ions traveling through the electrolyte.136 This property is the main bottleneck for realizing all-solid-state batteries, a technology that promises significantly enhanced energy density, charge rate, and safety.137 Research on solid electrolytes has made slow progress following the high-throughput DFT simulation process.40 Sendek et al. utilized logistic regression to identify solid-state superionic conductors from 21 known materials with 90.5% accuracy, while a guess-and-check method had 14.3% accuracy and predictions by a group of 6 PhD students had 25% accuracy.40 The significant improvement in prediction accuracy demonstrates the potential of ML for accelerating the search for solid electrolyte materials. It is worth noting that DFT-calculated ionic conductivity may stray from experimental values, as seen by Sendek et al., where uncharacterized factors can influence experimental measurements. Wang et al. utilized a BO framework to automate coarse-grained (CG) MD simulations of solid polymer electrolyte (SPE) materials.138 The researchers were able to gain a strong understanding of how each CG parameter individually and jointly influences ionic conductivity to identify SPE candidates with optimal ionic conductivity, thus significantly accelerating the search for highly conductive SPE materials. In a study by Xu et al. to predict the ionic conductivity of Li and Na-based superionic conductors, the researchers first selected 8 features from a set of 47 by calculating Pearson correlation coefficients.60 A logistic regression model achieved 84.2% and 76.3% accuracy on test sets of unseen NASICON and LISICON compounds, respectively, while classifying ionic conductivities above and below 1 × 10−6 S cm−1.
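A minimal sketch of threshold classification with logistic regression, in the spirit of the Sendek et al. and Xu et al. screens. The synthetic 8-feature dataset and training hyperparameters below are assumptions, not the published models; the label plays the role of "conductivity above/below threshold".

```python
# Logistic regression by plain gradient descent on synthetic conductor data.
import numpy as np

rng = np.random.default_rng(3)

# 200 hypothetical compounds, 8 descriptors; label 1 = "superionic".
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)
y = (X @ true_w + 0.3 * rng.normal(size=200) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b, lr = np.zeros(8), 0.0, 0.1
for _ in range(2000):
    p = sigmoid(X @ w + b)
    w -= lr * X.T @ (p - y) / len(y)   # gradient of the cross-entropy loss
    b -= lr * (p - y).mean()

accuracy = ((sigmoid(X @ w + b) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.1%}")
```

The learned weights also give a crude feature ranking, analogous to selecting descriptors by their fitted coefficients.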
Wang et al. and Xu et al. both lacked structural features for training their ML models; Kajita et al., however, incorporated SOAP (smooth overlap of atomic positions) and R3DVS (reciprocal 3D voxel space) features.139 In an approach employing an ensemble of three ML methods, one using chemical and physical properties in a partial least squares regression (PLS) model, the second using SOAP features in a KRR model, and the last using R3DVS features in a 3D CNN, the researchers trained their model to discover novel oxygen-ion conductors from a small dataset of 29 oxygen-ion conductors. This approach is shown in Fig. 13, where the average ionic conductivity from the three models is output. Additionally, five oxygen-ion conductor compounds were successfully identified from a dataset containing 13384 oxides, demonstrating the efficiency of ensemble-scope feature learning even with limited data. Wheatle et al. performed BO to optimize the ionic conductivity and viscosity of a polymer blend electrolyte simulated through CG MD.140 However, the results of this study were not in agreement with the literature, demonstrating a limitation of modeling polar molecules through CG simulations.
Fig. 13 Scheme of three different ML approaches for predicting ionic conductivities of oxygen-ion conductors. Partial least squares (PLS) regression, kernel ridge regression using SOAP features, and a 3D CNN using R3DVS features were developed. Reprinted with permission from ref. 139. Copyright 2020 Springer Nature. |
Gao et al. demonstrated the optimization of electrolyte channel geometric structure for enhancing specific energy, capacity, and power while reducing lithium plating in thick-electrode LIBs by using a deep neural network.141 Their finite element method-verified results demonstrate a potential 78.73% increase in specific energy compared to conventional cells. Fig. 14a–d shows the deep neural network's predictions of specific energy, specific power, specific capacity, and the resulting Ragone plot. Ahmad et al. employed a series of ML models to screen inorganic solid electrolytes that can suppress dendritic growth and achieve high ionic conductivity in Li metal batteries.142 First, a crystal graph CNN was used to predict shear and bulk moduli from structural features, then GBR and KRR were utilized to predict elastic constants, and finally the logistic regression proposed by Sendek et al. was used to screen superionic conductors. From a dataset of over 12,950 solid electrolytes, 6 candidates were successfully screened, demonstrating the effectiveness of chaining multiple ML models into a screening pipeline.
Fig. 14 Deep neural network prediction of (a) specific energy, (b) specific power, (c) specific capacity, and (d) Ragone plot based on varying tapered width of electrolyte channels in Li-ion batteries. WEA and WEC are electrolyte channel width and WH is periodic width. Reprinted with permission from ref. 141. Copyright 2020 IOP Publishing. |
Suzuki et al. employed an RF recommender system to propose unknown chemically relevant compositions of Li-conducting oxides.143 The system demonstrated the ability to discover novel materials in a third of the time compared to a random material search. Liu et al. developed an automated high-throughput screening method for determining the optimal cations for doping a garnet-type Li7La3Zr2O12 (LLZO) solid-state electrolyte to be used in a Li metal battery.144 The researchers utilized a dataset of 100 doped-LLZO compounds and up to 15 features mostly from DFT calculations. Using SVM for classifying thermodynamically stable and unstable Li–LLZO interfaces, the researchers were able to uncover the large dependence on chemical bond strength between the dopant and oxygen. Additionally, a KRR model was trained to predict reaction energies at the Li–LLZO interface for the different dopants, which achieved a test set R2 of 0.92.
In an approach that introduces complete automation of experimentation through robotics guided by BO, Dave et al. focused on optimizing aqueous electrolyte mixtures.145 The researchers performed 70 experimental iterations in 2 weeks, where the ML-guided optimization was completely responsible for converging onto the Na and Li aqueous electrolyte blends with the highest electrochemical stability window. As shown in Fig. 15a and b, the highest stability windows for the Na and Li electrolytes reached 3.04 V and 2.74 V, respectively.
Fig. 15 Fully automated experimentation guided by Bayesian optimization for identifying the optimal blend of (a) Na and (b) Li salt aqueous electrolyte systems within 70 experimental iterations. The machine learning guided system was able to search a wide design space and quickly converge on the blends that maximize electrochemical stability. Reprinted with permission from ref. 145. Copyright 2020 Elsevier. (c) Natural language processing text mined temperatures for processing of LLZO garnet solid electrolytes and (d) sintering temperatures grouped by article publication year. Reprinted with permission from ref. 146. Copyright 2020 Elsevier. |
Though most researchers train their ML models using DFT-based and experimental data from published material databases, there are still findings in the scientific literature not yet captured in any database. Natural language processing (NLP) offers a way to keep pace with the rapid rate at which scientific articles are published. Mahbub et al. utilized NLP to extract the processing temperatures used for synthesizing Li solid-state electrolytes from various precursors.146 After extracting data from 891 articles on solid-state electrolyte synthesis, the researchers were able to reveal trends in the processing temperatures utilized for synthesizing different solid-state electrolyte materials, and also identify the resulting ionic conductivities for each set of synthesis parameters, as shown in Fig. 15c and d. This technique is feasible for revealing trends and findings across thousands of published articles, which may prove especially useful for materials science research due to its high-dimensional nature. By creating more comprehensive training datasets, NLP can also prove useful for improving ML model accuracy. Huang & Cole modified an NLP toolkit called ChemDataExtractor to extract inorganic battery electrode and electrolyte material measurements including capacity, voltage, electrical conductivity, coulombic efficiency, and energy density.147 This project produced the first automatically generated database of battery materials, which is now publicly available. ChemDataExtractor extracted 292313 data records from 229061 academic papers published between 1996–2019 related to batteries. However, text mining is not perfect and requires creating specific rule-based phrase parsers for extracting specific property–value pairs, which can be tedious.
The most common error encountered was the mismatching of property data to the chemical compound when more than one compound or value occurs in the same sentence. Another common error was the extraction of incomplete composite material names or invalid chemicals. Overall, the database achieved a precision of 80%, which is relatively high and could potentially be used to aid in training ML models for battery materials design.
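A toy version of the rule-based parsing described above might look like the following. The regex patterns and the example sentence are invented for illustration; production pipelines such as ChemDataExtractor are far more elaborate and handle exactly the compound/value mismatching errors discussed above.

```python
# Toy rule-based extraction of property-value pairs from battery text.
import re

text = ("The LiFePO4 cathode delivered a capacity of 158 mAh g-1 at 0.1 C. "
        "The cell showed a voltage of 3.4 V and a coulombic efficiency of 99.2 %.")

# One hand-written pattern per target property: a number followed by its unit.
patterns = {
    "capacity":   r"capacity of\s+([\d.]+)\s*mAh g-1",
    "voltage":    r"voltage of\s+([\d.]+)\s*V",
    "efficiency": r"coulombic efficiency of\s+([\d.]+)\s*%",
}

records = {prop: float(m.group(1))
           for prop, pat in patterns.items()
           if (m := re.search(pat, text))}
print(records)
```

Even this toy parser shows why rule writing is tedious: every unit variant ("mA h g−1", "mAh/g") needs its own pattern, and nothing here links a value back to the correct compound when two appear in one sentence.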
Fig. 16 (a) Relative contributions of features used to predict supercapacitor capacitance from structural features of a carbon-based supercapacitor as determined by ANN. SSA and PV were identified as having the highest relative importance for predicting capacitance. Adapted with permission from ref. 154. Copyright 2019 Royal Society of Chemistry. (b) Pearson correlation coefficient matrix for all structural features and capacitance from porous carbon-based supercapacitors. Reprinted with permission from ref. 156. Copyright 2021 Elsevier. |
In an optimization problem, Mathew et al. performed particle swarm optimization from an ANN regression model to identify the optimal process parameters based on the resulting specific capacitance and equivalent series resistance of an activated carbon supercapacitor.158 The model achieved an R2 of 0.998 and 0.979 for predicting specific capacitance and equivalent series resistance, respectively. However, the optimized synthesis parameters demonstrated no improvement over the highest-performing supercapacitor generated during the experimental trials. This demonstrates the limited ability of the ANN model to extrapolate outside of the dataset.
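The particle swarm optimization step from Mathew et al. can be sketched over any trained surrogate. Here a simple analytic function stands in for the ANN's capacitance prediction, and the swarm hyperparameters are illustrative defaults rather than the study's settings.

```python
# Bare-bones particle swarm optimization over a stand-in surrogate model.
import numpy as np

rng = np.random.default_rng(4)

def surrogate(x):
    """Placeholder 'trained model': maximum at x = (0.3, -0.5)."""
    return -np.sum((x - np.array([0.3, -0.5])) ** 2, axis=-1)

n, dim = 30, 2
pos = rng.uniform(-1, 1, size=(n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), surrogate(pos)
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Inertia + cognitive + social terms (w = 0.7, c1 = c2 = 1.5).
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = surrogate(pos)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("optimum found near:", np.round(gbest, 3))
```

The caveat from the study applies here too: the swarm can only converge to the best point of the *surrogate*, so a model that extrapolates poorly cannot propose conditions beyond what the training data supports.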
Electrolytes are also important for determining supercapacitor capacitance due to their influence on ionic conductivity and the formation of the electrical double layer.135 In a study performed by Yang et al. to uncover the relationship between surface potential, pore curvature, and electrolyte concentration on capacitance, an RT model was trained on classical DFT calculations.159 RT, specifically M5P, obtained an R2 of 0.84 for capacitance prediction of supercapacitors and generated an equation relating capacitance to pore radius, potential window, and electrolyte concentration. This equation allows for excellent interpretability, revealing that capacitance increases with greater electrolyte concentration, smaller electrode particle radius, and lower surface potential. Oladipo et al. considered the effects of doping percentage and electrolyte on the capacitance and energy density of biocarbon-based supercapacitors.160 Several experimental samples were prepared and tested while varying S-doping, N-doping, electrolyte type, and electrolyte concentration. An ANN model was trained with Levenberg–Marquardt backpropagation, achieving an average R2 of 0.96, where electrolyte type and concentration were the most important features. Supercapacitor capacitance can be enhanced through the addition of small solvent molecules, but the underlying mechanisms are not well understood.161 Su et al. used SVR, ANN, M5P, M5 rule, and linear regression to reveal insights into the impact of solvent effects on the capacitance of supercapacitors.162 This study used a dataset consisting of 13 different solvents used for supercapacitors collected from a previous experiment by Hou et al.163 The M5P model achieved the highest R2 of 0.79 and revealed that the solvent molecule size, dielectric constant, and dipole moments have the greatest importance for modeling capacitance. 
These findings were then followed up by classical DFT used to model the molecular structure of the electric double layer. To demonstrate the practicality of their ANN model, Rahimi et al. maximized the performance of an activated carbon-based supercapacitor using a GA, following the workflow in Fig. 17a.164 A maximum specific capacitance of 550 F g−1 at 1 A g−1 was achieved through optimization of physiochemical and operational features. Surface area and electrolyte properties are among the most important features for determining capacitance. Further studies should combine both structural and electrolyte properties to determine how both influence overall capacitance behavior.
Fig. 17 (a) ML workflow utilizing MLP and GA to determine optimal operational conditions, physical properties, and chemical features for an activated carbon-based supercapacitor. Reprinted with permission from ref. 164. Copyright 2022 Elsevier. (b) Supercapacitor retention prediction using a GPR model with only 100 training cycles. Reprinted with permission from ref. 166. Copyright 2021 Springer Nature. |
Currently, only a limited number of ML models and approaches have been proposed for use in supercapacitor capacitance prediction. Researchers should continue exploring the accuracy of other ML approaches and various optimization strategies to improve model parameterization. Given the large number of features that influence capacitance, active learning models such as BO could prove to be useful for researchers in this area. Currently, few studies have applied this active learning design process to supercapacitor experiments. Making use of GPR uncertainty quantification and optimization algorithms like BO in an active learning approach would thus be highly impactful in supercapacitor materials research.
Capacitance prediction of supercapacitors through ML models has achieved excellent accuracy and provided insights into how various features impact performance. As demonstrated through the collected studies in Table 6, SSA has widely been one of the most important features for predicting specific capacitance. This is due to the role that electrode porosity plays in promoting ion diffusion. However, electrochemical kinetics based on pore size distribution is not straightforward. Different pore sizes affect performance in various ways, making the optimization of pore sizes, and of the processing parameters necessary to synthesize a specific pore size, a critical issue.165 Further research should consider RT models for modeling capacitance as a function of surface area and pore features. The advantage of RT models such as M5P is their ability to generate equations that relate features to the target property, making for easy interpretation and target optimization.159 Besides surface area, operational conditions, electrolyte type, and chemical features are highly important for predicting capacitance. This can be attributed to the influence that the electrolyte system has on ionic conductivity and operating voltage window, and thus capacitance.135
Model | Features | RMSE (R2) | Ref. |
---|---|---|---|
a The listed features are in descending relative importance. | |||
RT | Specific surface area, pore volume, pore size, voltage window, ID/IG, N-doping%, O-doping% | 67.62 (0.76) | 154 |
ANN | Specific surface area, B-doping%, electrolyte concentration, pore size, voltage window | 0.0089 (0.93) | 155 |
XGBoost | Micropore surface area percentage, specific surface area, pore size, mesopore surface area, mesopore volume percentage | 25.50 (0.80) | 156 |
SVM (grey wolf optimization) | Specific surface area, N-doping%, pore size, voltage window, ID/IG | 39.22 (0.90) | 157 |
ANN | Electrolyte type, electrolyte concentration, S/N co-doping%, S-doping%, N-doping% | 0.385 (0.96) | 160 |
ANN | Current density, micropore volume, micropore surface area, oxidized N-group%, pyrolytic-N group%, carboxyl-O group%, micropore volume/total pore volume, nitrogen/oxygen%, potential window, micropore surface area/specific surface area, hydroxyl-O group%, total pore volume, graphitic-N group, nitrogen%, pyridinic-N group% | 10.81 (0.97) | 164 |
Model | Application | Features | Performance | Ref. |
---|---|---|---|---|
ANN | Specific capacitance, power density, and energy density | Mesopore surface area, micropore surface area, scan rate | R2 = 0.956 (capacitance), 0.964 (power density), 0.921 (energy density) | 167 |
RF | Specific capacitance and retention (classification) | Current collector, compositing, material, potential window, morphology, oxide/nitride, specific surface area, current density | R2 = 0.593 (capacitance), RMSE = 0.44 (retention) | 67 |
ANN | Specific capacitance and retention | Cycle number, lignin weight%, transition metal oxide weight%, binder weight% | R2 = 0.859 (capacitance), MAPE = 6.37% (retention) | 23 |
GPR | Retention | Cycle number | RMSE = 0.0056 F | 166 |
ANN | Cyclic voltammetry | Potential, oxidation/reduction, doping concentration | R2 = 0.95 | 168 |
Current studies still lack an understanding of how supercapacitor materials, pore size distribution, SSA, surface chemistries, morphologies, potential window, electrolyte, and operating conditions together influence energy storage. Taking these features into account together could yield a more generalizable model with more practical use in screening materials. Given that these studies rely on data collected from the literature, this would require researchers to report all of these details. For now, recent training and optimization of ML models have demonstrated compelling evidence of high accuracy, but remain limited to specific datasets containing few features.
Fig. 18 Average RMSE of various ML models for cycle life prediction of Li-ion batteries (BL = broad learning, ELM = extreme learning machine, ABC = artificial bee colony, GRNN = general regression neural network, PSO = particle swarm optimization, GBRT = gradient boosted RT).177,179,180,199–201 |
SOC and SOH are important indicators of a battery's capacity and are essential for battery management.181 The SOC is calculated by dividing the battery's capacity at its current state by the fully charged capacity. Understanding the SOC of a battery is fundamental to calculating a battery's energy availability, analogous to a fuel gauge. This helps to protect the battery from overcharging and overdischarging, therefore optimizing its performance and lifetime.182 Currently, electrochemical models (physics-based) and equivalent circuit models (empirical-based) are the most widely used for predicting SOC.183 However, these methods are limited by high computational cost and low accuracy, respectively. For example, Coulomb counting involves calculating the net charge of a cell through the integral of charge and discharge currents over time.184 This method is fairly accurate but does not account for varying operating conditions, self-discharge, or coulombic efficiency, and typically relies on sampled measurements of current.185 These errors accumulate and lead to low accuracy over time, especially as the battery capacity fades. Several ML-based approaches have been explored for SOC prediction. The main challenge is the large amount of training data necessary to model different battery chemistries under varying operating conditions. ML methods are used to identify correlations between these features and battery capacity. The ML algorithms analyzed in Fig. 19a depict the difference in accuracy for predicting the SOH of LIBs. The figure illustrates that SVM had the highest RMSE while XGBoost and RF had the lowest, meaning XGBoost and RF can predict the SOH with higher accuracy than the other ML methods examined. The decision-tree ensembles underlying both XGBoost and RF appear to be what sets them apart from other commonly used algorithms.
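Coulomb counting, as described above, amounts to a running integral of sampled current scaled by capacity. The current profile and rated capacity below are made-up values, and the sketch deliberately ignores coulombic efficiency and self-discharge, which is exactly where the accumulated drift comes from.

```python
# Coulomb-counting SOC estimate: SOC(k) = SOC(k-1) + I*dt / (3600 * C).
import numpy as np

capacity_ah = 2.0                        # assumed rated capacity, ampere-hours
dt_s = 1.0                               # sampling interval, seconds
current_a = np.full(3600, -1.0)          # discharge at 1 A for one hour

soc = np.empty_like(current_a)
soc[0] = 1.0                             # start fully charged
for k in range(1, len(current_a)):
    # Discharge current is negative, so SOC decreases over time.
    soc[k] = soc[k - 1] + current_a[k] * dt_s / (3600.0 * capacity_ah)

print(f"SOC after 1 h at 1 A discharge: {soc[-1]:.3f}")
```

Drawing 1 A for an hour from a 2 A h cell should leave roughly 50% SOC; any error in the sampled current or in the assumed capacity accumulates cycle after cycle, which motivates the ML-based estimators discussed here.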
Fig. 19 (a) Average RMSE of various ML models for predicting the state of health of lithium-ion batteries (LR = linear regression, EL = extreme learning, NN = neural network).202–204 (b) Average RMSE of various models for the prediction of state of charge for Li-ion batteries (AR-GPR = Autoregressive Recurrent GPR, R-GPR-2 = Recurrent GPR with 2-tap delay, R-GPR-1 = Recurrent GPR with 1-tap delay, RGPR = Regular GPR, G = Gaussian, 2G = 2 side Gaussian, RVM = relevance vector machine).204,205 |
Several ML-based SOC estimation methods were compared as depicted in Fig. 19b to determine the most robust model. From our literature review, RVM had the highest RMSE and GPR-based models had the lowest RMSE. Although both use the Bayesian approach to make predictions, GPR's nonparametric form allows it to have simple training, while also generating an accurate predictive function.186
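The posterior mean and variance that make GPR attractive can be computed directly for a toy 1-D problem. The RBF kernel, its length scale, and the synthetic data below are assumptions for illustration, not any published SOC model.

```python
# Minimal Gaussian process regression with an RBF kernel (Cholesky solve).
import numpy as np

def rbf(a, b, length=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(5)
x = np.linspace(0, 5, 20)                     # training inputs
y = np.sin(x) + 0.05 * rng.normal(size=x.size)  # noisy observations

noise = 0.05 ** 2
K = rbf(x, x) + noise * np.eye(x.size)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

x_star = np.array([2.5])                      # query point
k_star = rbf(x, x_star)
mean = (k_star.T @ alpha).item()              # posterior mean
v = np.linalg.solve(L, k_star)
var = (rbf(x_star, x_star) - v.T @ v).item()  # posterior variance

print(f"prediction {mean:.3f} ± {np.sqrt(max(var, 0.0)):.3f} "
      f"(true {np.sin(2.5):.3f})")
```

The variance term is the quantity the recurrent GPR variants exploit: predictions far from the training data come back with wide error bars rather than a silently wrong point estimate.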
Remaining useful life (RUL) prediction involves the forecasting of a battery's capacity as it degrades with continued charge–discharge cycles. Recently, researchers have combined filtering algorithm-based approaches with data-driven ML models. Among the approaches explored in this study seen in Fig. 20a, the models that have performed with the greatest accuracy based on RMSE are RVR-KF,187 LSTM-GPR,188 and RVR-PF.189 RVR-KF and RVR-PF both employ relevance vector regression, a data-driven approach using Bayesian inference to produce output in a probabilistic form.189 Kalman and particle filtering are model-based methods that estimate the state of a dynamic system using measurements from a battery model, while data-driven RVR utilizes historical data to generate a regression between the features and RUL.187 The fusing of the model-based filtering algorithms and data-driven RVR results in excellent forecasting accuracy by allowing RVR to perform parameterization of the battery models used by the filtering algorithm. Besides this approach, LSTM-GPR is a data-driven approach that combines the benefits of learning long-term capacity degradation through LSTM and capturing local fluctuations from the capacity regeneration phenomena through GPR.188 This approach demonstrated a significant improvement over using only GPR for LIB RUL prediction. Studies that utilized filtering algorithms alone tend to have lower performance compared to both data-driven and fusion methods.
Fig. 20 (a) Boxplots of various models' RMSE for RUL prediction of LIBs (RVR = relevance vector regression, PF = particle filter, KF = Kalman filter, SVR = support vector regression).187–189,206–212 (b) Violin plots of various models' RMSE for RUL prediction of supercapacitors (LSTM = long-short term memory).190,191,213 |
Fewer models have been proposed for supercapacitor RUL prediction, as shown in Fig. 20b, compared to battery RUL prediction. Data-driven methods, particularly those based on recurrent neural networks (RNN), have been widely explored by researchers. LSTM and the gated recurrent unit, both varieties of RNN, have demonstrated improved accuracy over the standard RNN. LSTM, a deep learning network with a memory unit capable of capturing long-term dependencies, has also shown high predictive accuracy when applied to offline data.190 Further improvement of LSTM has been achieved through the use of genetic and hybrid genetic algorithms, which aid parameter optimization and significantly speed up convergence to the global optimum.191 Future studies should explore the application of hybrid data-driven and filtering algorithms, similar to those proposed for battery RUL prediction.
Currently, ML has become a promising route to many computationally intensive or time-consuming problems in materials research, but still lacks accuracy and generalizability. For redox potential prediction, GPR has demonstrated one of the highest accuracies, with the added benefit of being able to pair with active learning for guided experimentation. Active learning approaches, especially BO, have demonstrated promising results for accelerating material screening efforts by using uncertainty quantification to search for target properties much faster than trial-and-error experimentation and DFT methods. Crystal structure prediction by random forest has demonstrated high accuracy, while GNs have been gaining significant attention recently due to their unparalleled ability to represent structural features in crystal graphs. However, GNs are still relatively new, and further research is needed to implement them. Dielectric breakdown prediction through least squares regression has demonstrated very high accuracy, while also being a very simple method that can generate a mathematical function relating the descriptors to dielectric breakdown. In band gap prediction, GPR has achieved among the highest prediction accuracies, even with high-dimensional data. Formation energy prediction by kernel ridge regression and support vector regression has achieved low prediction error, but more studies are needed to compare their performances. Based on these studies, there is exciting potential for GPR to predict properties beyond the ones studied so far, due to its unique uncertainty quantification and ability to handle high-dimensional data. These strengths are particularly valuable in guiding experimentation, where optimization of a large design space by trial and error would be impractical.
In applications of materials design for battery and supercapacitor electrodes and electrolytes, many studies have demonstrated ML models for optimizing composition and synthesis parameters in an inverse design approach. Once a model has been trained to predict a target property from a set of features, solving the inverse problem involves identifying the feature values that achieve the target output. In materials design, for example, this entails specifying a desired specific capacitance, and the trained model determines the synthesis parameters necessary to achieve this target value.
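A minimal sketch of this inverse workflow, assuming a hypothetical trained model (here replaced by an invented stand-in function) and a brute-force grid search over two synthesis parameters; real studies use gradient-based or Bayesian search over many more features:

```python
import numpy as np

def predicted_capacitance(ssa, conc):
    """Stand-in for a trained forward model mapping surface area (m2/g)
    and electrolyte concentration (M) to specific capacitance (F/g).
    The functional form and coefficients are invented for illustration."""
    return 0.08 * ssa + 30.0 * np.tanh(conc)

def inverse_design(target, ssa_grid, conc_grid):
    """Return the (ssa, conc) pair whose predicted capacitance is
    closest to the target value."""
    best, best_err = None, np.inf
    for ssa in ssa_grid:
        for conc in conc_grid:
            err = abs(predicted_capacitance(ssa, conc) - target)
            if err < best_err:
                best, best_err = (ssa, conc), err
    return best, best_err

params, err = inverse_design(150.0,
                             ssa_grid=np.linspace(500, 2000, 31),
                             conc_grid=np.linspace(0.1, 6.0, 30))
```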
There are still shortcomings of ML, including low data availability, sparsity of data points, poor generalizability of models to data outside the training set, and a lack of published data on unsuccessful experiments. Small datasets limit an ML algorithm's ability to correlate features with a target output. In addition, the extensive domain of materials data commonly leads to diverse and sparse datasets, which can limit a model's ability to identify patterns. Insufficient training data makes generalizing to materials outside the range of the training dataset less accurate. Moreover, the absence of valuable information pertaining to failed experiments often proves detrimental to model practicality, as it prevents key data-driven relationships from being uncovered. Published datasets typically contain “islands” of successful experiments and data points, which can introduce bias into a model.192
One method for overcoming the challenge of small datasets is transfer learning, which takes parameters from a pretrained artificial neural network or random forest model to initialize a model for a related task. This approach has already been demonstrated to improve performance over training from scratch on a small materials dataset. However, a greater variety of transfer learning studies is currently needed. Also, the lack of transparency and interpretability calls for further research demonstrating how to gain insights from the transfer of structure–property relationships between models. Transfer learning has good potential for improving ML model accuracy in the low-data situations common in materials research, while also leveraging large publicly available databases. Another promising tool for overcoming small datasets is NLP, which uses ML to extract data from text-based information. This has exceptional potential, as it allows for automatic and efficient dataset generation, as with the ChemDataExtractor tool, by extracting data points from thousands of published articles. However, further studies are still needed to improve the accuracy of rule-based data extraction.
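One common caricature of transfer learning is to freeze a pretrained feature extractor and refit only the output layer on the small target dataset. The sketch below uses random weights as a stand-in for pretrained ones and ridge regression for the head; the dataset, dimensions, and regularization strength are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained hidden layer: in real transfer learning these
# weights would come from a network trained on a large source dataset.
W_pre = rng.normal(size=(3, 32))
b_pre = rng.normal(size=32)

def features(X):
    """Frozen 'pretrained' feature extractor."""
    return np.tanh(X @ W_pre + b_pre)

def fit_head(X_small, y_small, lam=1e-2):
    """Fine-tune only the linear output head on the small target dataset
    via closed-form ridge regression."""
    H = features(X_small)
    return np.linalg.solve(H.T @ H + lam * np.eye(H.shape[1]), H.T @ y_small)

# Tiny target dataset (hypothetical 3-feature material descriptors)
X = rng.normal(size=(20, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=20)
beta = fit_head(X, y)
train_mse = float(np.mean((features(X) @ beta - y) ** 2))
```

Because only the small head is fit, the 20-sample dataset is enough; training the whole extractor on it would overfit badly.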
Predicting the cycle life of Li-ion batteries through ML provides a highly efficient method for quickly assessing health and reliability. Hybrid models have been demonstrated to provide higher accuracies than standalone ML models. General regression neural network-based hybrid models have demonstrated very high accuracies in predicting cycle life. One of the highest performing ML algorithms is a model based on broad learning and an extreme learning machine. This method avoids time-consuming, complex deep neural networks by using extreme learning machines, while broad learning allows the use of high-dimensional data.
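The extreme learning machine itself is simple to sketch: hidden-layer weights are random and fixed, and only the output weights are solved in closed form by least squares. The features and target below are invented toy data; a real cycle-life model would use measured degradation features.

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_fit(X, y, n_hidden=50):
    """Extreme learning machine: random fixed hidden layer, output
    weights solved in one least-squares step (no backpropagation)."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Synthetic stand-in for cycle-life features (values are illustrative)
X = rng.uniform(-1, 1, size=(200, 4))
y = np.sin(X[:, 0]) + X[:, 1] ** 2
W, b, beta = elm_fit(X[:150], y[:150])
rmse = float(np.sqrt(np.mean((elm_predict(X[150:], W, b, beta) - y[150:]) ** 2)))
```

The single linear solve is what makes ELMs fast to train compared with deep networks, at the cost of relying on random, untuned hidden features.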
Hybrid models combining model-based filtering algorithms and ML models have demonstrated very promising accuracy for Li-ion battery remaining useful life prediction. Approaches that use only model-based filtering algorithms are limited in their ability to learn long-term dependencies due to the particle impoverishment problem, and demonstrate much lower performance than hybrid model-based and data-driven approaches. ML algorithms alone have achieved greater accuracy than filtering alone, but have not outperformed hybrid approaches. Few studies have explored the application of ML to supercapacitor RUL prediction; mostly recurrent neural networks have been explored, the most accurate of which have been based on LSTM optimized within a genetic algorithm framework.
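A highly simplified hybrid sketch, assuming an exponential capacity-fade curve as the model-based part and a crude residual correction as the data-driven part. Real hybrids use particle or Kalman filters fused with RNNs; the synthetic fade curve and 80% end-of-life threshold here are illustrative only.

```python
import numpy as np

def fit_fade_model(cycles, capacity):
    """Model-based part: fit C(k) = a * exp(-b k) by log-linear least squares."""
    A = np.vstack([np.ones_like(cycles), -cycles]).T
    coef, *_ = np.linalg.lstsq(A, np.log(capacity), rcond=None)
    return np.exp(coef[0]), coef[1]          # a, b

def hybrid_rul(cycles, capacity, threshold, horizon=2000):
    """Hybrid sketch: extrapolate the fade model, shifted by a crude
    data-driven offset (mean of recent residuals), then threshold."""
    a, b = fit_fade_model(cycles, capacity)
    resid = capacity - a * np.exp(-b * cycles)
    offset = resid[-10:].mean()              # stand-in for an ML corrector
    future = np.arange(cycles[-1] + 1, cycles[-1] + horizon)
    pred = a * np.exp(-b * future) + offset
    below = np.nonzero(pred < threshold)[0]
    return int(future[below[0]] - cycles[-1]) if below.size else None

# Synthetic capacity-fade history (illustrative numbers, not measured data)
k = np.arange(1, 101, dtype=float)
cap = 1.0 * np.exp(-0.002 * k)
rul = hybrid_rul(k, cap, threshold=0.8)      # cycles remaining to 80% capacity
```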
Carbon-based supercapacitors have been studied through the application of ML methods, especially for capacitance modeling. The highest accuracy models for predicting capacitance have been ANN, RT, and SVM. Features including SSA, pore size, electrolyte type, and electrolyte concentration have repeatedly demonstrated high relative importance in ML models. M5P models applied to supercapacitors have provided useful insights through generated equations relating features such as pore size, potential window, electrolyte concentration, and solvent molecule properties to capacitance. Linear regression models have also been widely used for supercapacitor modeling due to their simplicity, but to little avail: supercapacitor properties exhibit complex nonlinear interactions that simple linear models cannot capture.
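Relative feature importance of the kind reported in these studies can be estimated model-agnostically by permutation: shuffle one feature and measure how much the prediction error grows. The linear model and synthetic three-feature dataset below are invented stand-ins (think SSA, pore size, and an irrelevant feature), not data from the cited works.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic capacitance data: feature 0 dominates, feature 1 matters less,
# feature 2 is pure noise (all coefficients are illustrative).
n = 300
X = rng.normal(size=(n, 3))
y = 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.1 * rng.normal(size=n)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # simple fitted model
base_mse = np.mean((X @ coef - y) ** 2)

def permutation_importance(j, n_repeats=10):
    """MSE increase when feature j is shuffled, averaged over repeats."""
    scores = []
    for _ in range(n_repeats):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        scores.append(np.mean((Xp @ coef - y) ** 2) - base_mse)
    return float(np.mean(scores))

importance = [permutation_importance(j) for j in range(3)]
```

Shuffling the dominant feature inflates the error most, reproducing the kind of importance ranking (SSA, pore size, electrolyte) reported across the surveyed models.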
As future prospects, we envision several areas of high-impact research related to ML-assisted materials design and performance improvement in batteries and supercapacitors. One important area is the use of active learning, such as the combination of GPR with BO, to guide experimentation in the lab when a prohibitively large number of parameters must be optimized. With this approach, an expedited optimization process that balances exploitation and exploration requires far fewer iterations, reducing costs and accelerating the search for target properties. Another significant area for research is NLP for automatically extracting data from text, which has huge potential for generating databases from published articles. Additional research should focus on the continued development of graph neural networks, currently a fast-growing topic. They are particularly useful for representing crystal structures in property prediction, but further research is needed on the development of new material representations.
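The exploitation-exploration balance in BO comes from the acquisition function. Below is the standard expected-improvement formula (written for minimization) applied to two hypothetical candidates with equal predicted means: the more uncertain candidate scores higher, which is the exploration bonus. The numbers are invented.

```python
import math

def expected_improvement(mean, std, best, xi=0.01):
    """EI for minimization: expected amount by which a candidate beats
    the current best observation, given GP posterior mean and std."""
    out = []
    for m, s in zip(mean, std):
        if s == 0.0:
            out.append(0.0)
            continue
        imp = best - m - xi                   # hoped-for improvement
        z = imp / s
        cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
        pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
        out.append(imp * cdf + s * pdf)
    return out

# Two candidates with identical predicted values but different uncertainty
ei = expected_improvement(mean=[0.5, 0.5], std=[0.05, 0.5], best=0.45)
```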
This review provided insights into the predictive accuracy of recent ML-based approaches for predicting material properties and battery and supercapacitor health prognostics. By combining first-principles and ML approaches, researchers have been able to rapidly predict material properties and accelerate the screening and discovery of novel materials. In addition, the complex, high-dimensional data involved in material property datasets and device prognostics are ideal conditions for ML approaches.
• Automation of experiments through active learning needs focus: active learning models like BO have already demonstrated promising results for accelerating materials screening efforts. By automating the learning process through posing queries, active learning has proven to be especially advantageous for accurately identifying trends with a small amount of training data. The ability to learn from small datasets is crucial in ML applications in materials science research. Further research should focus on utilizing uncertainty quantification to enable active learning models and perform guided experimentation, which would significantly reduce the time and cost of carrying out trial-and-error experiments.
• Different variations of ensemble ML models need to be developed: ensemble ML makes use of multiple models for making predictions. This approach can be advantageous for application in material datasets due to the high variation in the accuracy of ML models based on available data, noise levels, and input features. As demonstrated in this review paper, it is difficult to pinpoint a single ML model that has the highest accuracy for all applications. By employing multiple models, the most accurate ones can be used for individual predictive tasks that can later be combined in an ensemble approach. Future research should further explore different variations of ensemble ML for predicting material properties and gaining a better understanding of how different material properties affect a target property.
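A deliberately minimal illustration of the ensemble effect, using two invented base predictors with opposite biases so that simple averaging cancels their errors; real ensembles (bagging, boosting, stacking) are of course far richer than this sketch.

```python
import numpy as np

# Ground truth and two deliberately biased base predictors (illustrative)
y_true = np.linspace(0.0, 1.0, 50)
pred_a = y_true + 0.2          # model A systematically overestimates
pred_b = y_true - 0.2          # model B systematically underestimates

def mse(pred):
    return float(np.mean((pred - y_true) ** 2))

# Averaging the two models cancels their opposite biases
ensemble = 0.5 * (pred_a + pred_b)
```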
• Further research is needed to implement graph neural networks: graph-based representations of molecules and crystal structures have opened up a novel method for applying deep learning. Graph neural networks have already demonstrated promising applications through their ability to learn material systems spanning both quantum chemistry and solid-state physics.193 However, current challenges in this emerging field limit accuracy, scalability, and generalizability. Future research should focus on the continued development of novel architectures that incorporate physical principles and additional material information, as well as the development of a labeled materials dataset for model training.194
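The core operation of such networks can be sketched in a few lines: each node averages features over its neighborhood (plus itself) and applies a learned linear map. The three-node path graph below stands in for a crystal graph, and the random weight matrix is purely illustrative; real models stack many such layers with trained weights.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: mean-aggregate neighbor (and self)
    features, then apply a linear map and ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # row-normalize by degree
    return np.maximum(0.0, D_inv @ A_hat @ H @ W)

# Toy 3-node "crystal graph": a path 0-1-2 with 2-dim node features
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.array([[1., 0.],
              [0., 1.],
              [1., 0.]])
rng = np.random.default_rng(3)
W = rng.normal(size=(2, 4))                   # untrained, illustrative weights
H_next = gcn_layer(A, H, W)
```

Because nodes 0 and 2 have identical features and symmetric neighborhoods, the layer maps them to identical outputs, showing how the representation respects graph structure.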
• NLP capabilities for data extraction need improvement: NLP text mining has promising applications for energy materials research, especially for creating more comprehensive training datasets that could improve ML model accuracy. NLP depends strongly on the quality of the input text: the data labels and the amount of data that can be extracted from articles depend on the details provided and the terms used by the authors. Therefore, a standardized method for reporting experimental parameters and results would improve the efficiency of text mining. Further research should consider how NLP models can be trained to understand synonyms, the possible meanings of a word, and context. This is crucial for improving the accuracy and usefulness of NLP in automating the collection of data from literature where the data cannot be found in a database.
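At its simplest, rule-based extraction of the kind these tools perform is a pattern match over text. The sketch below pulls specific-capacitance values reported as "F g-1" from an invented abstract; real extractors such as ChemDataExtractor use far richer grammars and chemistry-aware tokenization, and this fragility to phrasing is exactly why standardized reporting would help.

```python
import re

# Minimal rule: a number followed by the unit "F g-1" (ASCII hyphen only;
# real text also uses Unicode minus signs and superscripts)
PATTERN = re.compile(r"(\d+(?:\.\d+)?)\s*F\s*g-1")

def extract_capacitances(text):
    """Return all capacitance values (in F g-1) mentioned in the text."""
    return [float(m) for m in PATTERN.findall(text)]

abstract = ("The electrode delivered a specific capacitance of 245 F g-1 "
            "at 0.5 A g-1, retaining 198.5 F g-1 after 10000 cycles.")
values = extract_capacitances(abstract)
```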
• The viability of data augmentation for materials datasets needs more exploration: data augmentation is a promising tool for improving the predictive ability of ML models in unseen domains, an especially critical challenge given the low data availability of many materials datasets. Kim et al. applied data augmentation to a deep neural network and were able to gradually expand the domain of a dataset without degrading model accuracy.195 Future studies should explore other generative-model approaches to improve the generalizability of ML applied to materials datasets.
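One elementary augmentation strategy (not the specific method of Kim et al.) is to replicate each sample with small feature-scaled Gaussian jitter while keeping labels fixed; the dataset size, number of copies, and noise fraction below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def augment(X, y, n_copies=4, noise_frac=0.02):
    """Expand a small dataset by appending copies jittered with Gaussian
    noise scaled to each feature's standard deviation."""
    scale = noise_frac * X.std(axis=0)
    Xs = [X] + [X + rng.normal(scale=scale, size=X.shape) for _ in range(n_copies)]
    ys = [y] * (n_copies + 1)
    return np.vstack(Xs), np.concatenate(ys)

# Small synthetic materials dataset: 25 samples, 3 descriptors
X = rng.normal(size=(25, 3))
y = rng.normal(size=25)
X_aug, y_aug = augment(X, y)   # 5x the original sample count
```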
• More studies are needed on the optimization of supercapacitor electrolytes by ML: the electrolyte system influences capacitance, pseudocapacitance, energy density, power density, and cycle life, demonstrating its importance for electrochemical performance optimization.135 There is a lack of studies on the topic of optimizing supercapacitor electrolytes using ML models. ML can be used to reveal the structure–property relationships of electrolytes to improve the performance of supercapacitors.
• Hybrid models for supercapacitor RUL prediction need more investigation: hybrid models fusing model-based and data-driven ML approaches have demonstrated high accuracy for RUL prediction of LIBs. However, current supercapacitor RUL prediction approaches have been limited to ANN-based models. Future research should investigate the accuracy of filtering algorithms combined with data-driven models like RVR or ANN.
• Selecting features for supercapacitor capacitance prediction needs attention: future ML studies examining the influence of electrolytes on supercapacitor capacitance should use features for ion concentration, size, and type, while simultaneously taking into account pore size distribution. This could help provide insight into the interaction between the pore structure and ion transport for tailoring surface area to a specific electrolyte.165
• A convention for reporting model performance metrics is needed: currently, there is no standard method for reporting model performance, which makes it difficult to compare model accuracies. Inconsistencies observed while conducting this literature review include the choice of performance metrics (e.g., R2, MSE, RMSE, MAPE, MAE), train/test/validation split ratios, the number of folds used for cross-validation, whether reported scores are averages over cross-validation folds, whether scores come from the train or test set, and whether scores are computed on standardized, normalized, or original values.
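To make such comparisons unambiguous, the metric definitions themselves are straightforward to state in code; the sketch below computes the five metrics listed above from scratch on invented numbers, which also makes explicit that RMSE is derived from MSE and that MAPE is undefined when a true value is zero.

```python
import math

def metrics(y_true, y_pred):
    """Common regression metrics reported across the surveyed papers."""
    n = len(y_true)
    errors = [yt - yp for yt, yp in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / n
    mae = sum(abs(e) for e in errors) / n
    mape = 100.0 * sum(abs(e / yt) for e, yt in zip(errors, y_true)) / n
    mean_y = sum(y_true) / n
    ss_tot = sum((yt - mean_y) ** 2 for yt in y_true)
    r2 = 1.0 - (mse * n) / ss_tot
    return {"MSE": mse, "RMSE": math.sqrt(mse), "MAE": mae,
            "MAPE": mape, "R2": r2}

m = metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```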
• Transfer learning from pretrained models in materials science requires further exploration: neural network-based models generally require large training datasets to achieve reliable prediction accuracy, but the current bottleneck lies in the commonly small and inhomogeneous datasets found in materials databases. The approach of transfer learning has been explored extensively for improving the learned structure–property relationships in smaller datasets by using models pretrained on larger, related materials datasets, or different material properties. Researchers have already shown its excellent potential for achieving high prediction accuracy in small datasets. However, there is a lack of understanding on how to gain insights into the relationships between the pretrained model and the derived transfer learning model. This could provide a stronger scientific understanding of the structure–property relationships of different materials and properties.
This journal is © The Royal Society of Chemistry 2023