Gerliz M. Gutiérrez-Finol,
Aman Ullah,
María González-Béjar and
Alejandro Gaita-Ariño*
Instituto de Ciencia Molecular (ICMol), Universitat de València, Paterna, Spain. E-mail: alejandro.gaita@uv.es
First published on 8th January 2025
As scientists living through a climate emergency, we have a responsibility to lead by example, or at least to be consistent with our understanding of the problem. This common goal of reducing the carbon footprint of our work can be approached through a variety of strategies. For theoreticians, this includes not only optimizing algorithms and improving computational efficiency but also adopting a frugal approach to modeling. Here we present and critically illustrate this principle. First, we compare two models of very different levels of sophistication which nevertheless yield the same qualitative agreement with an experiment involving electric manipulation of molecular spin qubits, while presenting a difference in cost of >4 orders of magnitude. As a second stage, an already minimalistic model of the potential use of single-ion magnets to implement a network of probabilistic p-bits, programmed in two different programming languages, is shown to present a difference in cost of a factor of ≃50. In both examples, the computationally expensive version of the model was the one that was published. As a community, we still have a lot of room for improvement in this direction.
Green foundation
1. Publicly available data on resource distribution by discipline is often limited, underscoring the need for greater transparency in facility usage reports. By focusing on optimizing computational workflows in high-demand areas, the scientific community can make significant progress in reducing the overall carbon footprint of scientific computing, showing that even small efforts, like embracing frugal computing or other greener practices, can contribute to larger sustainability goals.
2. Here we present two general cases from the fields of computational chemistry and molecular magnetism (which can be generalized to other theoretical chemistry workflows), proving that, by changing the paradigm, it is possible to obtain good results while reducing computational cost.
3. We emphasize the idea of prioritizing energy-efficient algorithms and minimizing resource-intensive operations, so that simulations can achieve their research goals while simultaneously reducing their environmental impact.
The role of science in this crisis has been twofold: quantifying and explaining the processes, and also pointing towards our possible ways out. Abundant datasets have been employed to estimate climate change projections and key indicators related to forcing of the climate system, including emissions of greenhouse gases and short-lived climate forcers, greenhouse gas concentrations, radiative forcing, surface temperature changes, the Earth's energy imbalance, warming attributed to human activities, the remaining carbon budget, and estimates of global temperature extremes. In a never-ending string of records, the Copernicus Climate Change Service reported that June 2024 was the thirteenth month in a row that was the warmest in the ERA5 data record for the respective month of the year. Indeed, over the 2013–2022 period, human-induced warming has been increasing at an unprecedented rate of over 0.2 °C per decade.8
For a more solid perspective, let us explicitly ground our arguments on the Intergovernmental Panel on Climate Change (IPCC). Recent IPCC reports point out how digital technologies, analytics and connectivity consume large amounts of energy, implying higher direct energy demand and related carbon emissions. Thus, the demand for computing services increased by 550% since 2010 and was, by 2022, estimated at 1% of global electricity consumption. Climate concerns have prompted the development and implementation of green computing policies; green computing refers to the environmentally responsible use of computers and related technology, focusing on minimizing the environmental impact of information technology (IT) operations by promoting energy efficiency, reducing waste, and encouraging sustainable practices throughout the lifecycle of computer systems. This includes many strategies, such as designing energy-efficient hardware, utilizing cloud computing, adopting virtualization, recycling electronic waste, and implementing eco-friendly policies in IT environments, all aiming to reduce the carbon footprint of computing activities and promote sustainability in the tech industry. Despite the green computing policies implemented by chip manufacturers and computer companies, the increase in demand is not fully compensated by efficiency improvements, resulting in an energy demand rising by 6% between 2000 and 2018.9 This is an example of the uneven policy coverage that exists (high confidence) across sectors: policies implemented by the end of 2020 were projected to result in higher global greenhouse gas emissions in 2030 than the emissions implied by the Nationally Determined Contributions (high confidence). Without a strengthening of policies, global warming of 3.2 °C [2.2 to 3.5 °C] by 2100 was projected (medium confidence).10 When the potential of demand-side mitigation is considered, a 73% reduction in electricity use (before additional electrification) is considered achievable by 2050.10 To the best of our knowledge, IPCC reports have not yet estimated the projected growth of the electrical demand in computing due to so-called “generative artificial intelligence”, although informal estimates, extrapolating from recent trends, anticipate a growth that is incompatible with any reasonable climate goal.
Of course, the most relevant decisions towards mitigation and adaptation are in the hands of policymakers and mostly outside the hands of those of us working in academia, and yet an obvious question is: do we need to make any changes in our academic policies and in our day-to-day work? The first aspect in which academia has answered “yes” concerns air travel, and there are some ongoing efforts to reduce our collective carbon footprint in that direction.11 Indeed, an extensive internal study at the Institute for Chemical Research of Catalonia resulted in an estimated footprint of 48 tons of CO2 equivalents per centre user in 2022, where business-related travel accounted for over 80% of the centre's emissions. This is not to be understood as a universal behaviour: a wider study comprising over 100 research centres in France found a much lower per-person research footprint of about 5 tons of CO2 equivalents, with different contributions such as purchases, commute, travel and heating each having comparable weight; this would be roughly comparable to their average footprint as consumers. Evidently, different research environments favour different sources as the main contributions and, crucially, also present very different total carbon footprints.12
The twelve principles of green chemistry constitute a foundational guideline for the development of green chemistry. Principle 1 is prevention, which emphasizes that it is preferable to prevent waste generation rather than dealing with its treatment or cleanup after it has been created. Principle 6 is design for energy efficiency, and refers to energy requirements of chemical processes and how to minimize their environmental and economic impacts.13
In this context, there is little doubt that computer-based predictive tools are contributing to reducing the CO2 footprint. Currently, researchers can design safer chemicals and processes, anticipate potential environmental impacts, and optimize reactions for minimal energy and resource consumption while reducing waste generation. For instance, in the field of toxicology, the properties of candidate compounds can be predicted before the actual synthesis, thereby reducing the need for extensive experimental trials.13,14 However, in the context of green chemistry, the environmental impact of resource- and time-consuming computational modelling, as opposed to low-tier computational models, remains barely explored.12
More recently, we have seen increased concerns regarding the carbon footprint of computations,15 and only lately have tools and guidelines been widely available to computational scientists to allow them to estimate their carbon footprint and be more environmentally sustainable.16,17 As far as is known, emission-reduction strategies are especially impactful in high-demand fields, where optimizing workflows and adopting sustainable practices, such as using carbon-tracking tools like CodeCarbon, can lower emissions by up to 23%. These tools not only track energy use but also raise awareness, helping researchers reduce the environmental impact of computational tasks.18
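To make this concrete, the listing below is a minimal sketch of how such a tracker can be wrapped around an existing calculation. It assumes the standard CodeCarbon interface (an EmissionsTracker with start/stop methods returning kg of CO2 equivalents); the run_simulation() function is a hypothetical placeholder for the actual computational task.

```python
# Minimal sketch: measuring the footprint of a calculation with CodeCarbon.
from codecarbon import EmissionsTracker

def run_simulation():
    # Hypothetical placeholder standing in for the real workload.
    return sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker(project_name="frugal_modelling_demo")
tracker.start()
result = run_simulation()
emissions_kg = tracker.stop()  # estimated kg of CO2 equivalents for this run
print(f"result = {result}, estimated footprint = {emissions_kg:.6f} kg CO2e")
```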
During the peer review of this work, an excellent tutorial review appeared on this topic.19 The exploding footprint of so-called “generative artificial intelligence” is of course a cause for major concern from this perspective, and we should note that it also affects academia. Early studies estimate that a substantial and rapidly increasing fraction of academic content is being produced by means of ChatGPT.20
Zooming momentarily out from scientific computing to computing in general, computing's global share of carbon emissions has been estimated to range from as low as 1.8% if one focuses on operating costs to as high as 3.9% if the full supply chain is taken into account, meaning it is comparable to air travel.21 The exact contribution of scientific computing to global carbon emissions remains unclear, but it is a significant part of ICT emissions (2–3.9% of global CO2), with data centres responsible for up to 45%.22,23 In this line of thought, while most economic sectors are starting to design or implement plans to reduce carbon emissions, computing's emissions are still strongly on the rise. This is despite continuous improvements in computational efficiency, including efforts towards green computing, as these have been consistently overtaken by increases in demand.24 Indeed, emissions from computing, accounting for the production of the devices, have been projected to be close to 80% of our emissions budget by 2040 if warming is to be limited to 1.5 °C.25
The current trajectory of “business as usual” in computing is unsustainable, as it increasingly strains the planet's environmental limits. This has led to a call for “frugal computing”, which emphasizes treating computing as a finite resource. Achieving zero-carbon computing is an urgent priority, requiring us to either “do more with less” or even “do less with much less”. For instance, initiatives such as the EU Directive 2024/1760 encourage institutions to adopt sustainable practices, showing that even small efforts, like embracing frugal computing, can contribute to larger sustainability goals: many minor additive actions add up to a big collective one and, similarly to kindness, “no act of sustainability, no matter how small, is ever wasted”.26
As we know, frugal computing focuses on minimizing hardware and energy resources while maintaining accessibility and affordability. Frugal modeling, in a similar vein, aims to develop models that require minimal computational power, data, and complexity. These models are designed to be efficient, simplified, and resource-conscious, allowing them to perform essential functions without excessive overhead or reliance on advanced infrastructure. Following these ideas, we define frugal modeling as an approach that begins by considering the scientific question to be answered and the carbon footprint that is justifiable to emit in the process. When including the carbon footprint in the cost–benefit analysis, the goal is not only to optimize the efficiency of the method but also to limit the overall damage to the climate.27 In other words, frugal modelling consciously avoids the “rebound effect”: improvements in efficiency should not be offset by running significantly more calculations, which would result in an increasing overall carbon footprint. This was stated as “Rule 9: be aware of unanticipated consequences of improved software efficiency” in Lannelongue et al.'s “ten simple rules to make your computing more environmentally sustainable”.27 It is equally important to avoid perverse economic incentives, where sunk and fixed costs, such as the purchase of a supercomputer or the salary of a researcher, make it seem economically optimal to maximize the comparatively lower cost of keeping the supercomputer running at full capacity. Paradoxically, what can seem like avoiding computational waste from the point of view of the system administrator can actually be wasteful if the extra calculations performed to keep the supercomputer from being idle do not bring any improved scientific insights.
In terms of implementation, frugal modeling emphasizes critical scientific thinking to minimize waste while pursuing knowledge of the highest available epistemic quality, if possible beyond a “justified true belief”.28 We do not need to settle for lower quality science, but we need to be held accountable for our carbon footprint, and this includes avoiding emissions that do not significantly improve the quality of the generated knowledge. Similar to the principles of green chemistry, where the use of problematic solvents in a method must be justified by demonstrating that no other viable alternatives exist for achieving the desired outcome, we propose that for models requiring thousands or tens of thousands of processor hours, a good faith analysis should be conducted to show that addressing the same problem with less harmful methods is unfeasible. Furthermore, a convincing justification must be provided that answering the specific scientific question at hand warrants the associated climate impact. For complicated questions, it is often the case that, in the sequence of stages taken to answer a scientific question, the stage that we can improve by throwing more computing power at it is not the same one that limits the actual knowledge we can obtain. We will see an example below in the case study of spin states vs. molecular distortions.
Unfortunately, as a community, we have become accustomed to employing increasingly unsustainable amounts of computing resources in our calculations, meaning business as usual is not an option compatible with our societal commitments to a lesser climate catastrophe. Let us emphasize again here that more efficient does not equal more frugal. There is a continuous striving for efficiency in computing, also in theoretical chemistry (see e.g. ref. 29), but as long as this is oriented to optimizing the return on investment, it is likely to produce increased emissions, as has been happening historically in this as in other economic sectors.30 Fortunately, it is possible to answer plenty of interesting questions in nanoscience, chemistry, physics and materials modelling while employing a frugal approach. In particular, we will focus here on our own field of expertise, namely magnetic molecules, although the general ideas may be extrapolated to many other fields as well as to experimental chemistry.
Magnetic molecules have been studied for decades, firstly as controllable models for interactions and phenomena in solid-state physics; more recently, molecular nanomagnets have been presented as candidates for bits,31–34 qubits,35,36 p-bits,37 and also as components for nanotechnological devices.38–41 Manipulation of individual spins, once a distant dream, is today a practical reality, if not one of immediate practical applicability. In parallel, advances have been made in modelling the influence of the chemical environment on said spin states and their dynamics,42–48 with wildly different computational costs, as we will see below in some detail. Indeed, a frugal approach is possible in this field thanks to efforts over many years resulting in the development of analytical approaches and semi-empirical methods.49,50
Herein we present a couple of case studies focusing firstly on the contents of the models and secondly on their implementation. In the next section, “choosing and solving affordable models”, we will present different alternatives for the modelling of the electric field modulation of the ligand field for the coherent control of the spin states in molecular spin qubits.51 In the section “coding and running inexpensive implementations” we will present a frugal model of magnetic molecules as probabilistic bits,37 which can also serve, less frugally, to model their macroscopic magnetic properties.52 As we will see in these examples, models that can be similarly useful in practice can have costs varying by many orders of magnitude. Additionally, re-implementing an already frugal model in a more efficient implementation can further increase the savings. Note however that unless frugality is maintained as a boundary condition, mere computational efficiency will often just lead to a rebound effect, i.e. increased use (precisely because of the improved return on investment) and increased emissions, the same as in other sectors.30 We will include some further context in the conclusions to clarify how the efforts we propose fit within the scope of green chemistry.
To illustrate this problem, let us focus on the different pathways that one can choose to model the effect of the ligand field on the magnetic and spectroscopic properties of metal ions, a question that has received some attention in the past decade in the context of the so-called single-ion magnets and molecular spin qubits, since the spin dynamics of magnetic molecules are in large part based on the variation of the energies of the different spin states with distortions of the molecular structure.31,32,46–48,53–55
The widely accepted standard in this field is ab initio calculations, where complete active space perturbation theory (CASPT2) or n-electron valence state perturbation theory (NEVPT) are considered superior to the complete active space self-consistent field (CASSCF) for fundamental reasons, and MOLCAS or ORCA are employed as standard computational codes. A comparatively fringe modelling approach is based on effective charges acting on the f orbitals; this is widely considered much less exact, again for fundamental reasons. It is not often that the predictive power of the two tools is compared with the measuring stick of experimental spectroscopic information, but in at least one example where this was done, we found no clear benefit in the extra computational cost of using more sophisticated models: for the task of predictively estimating the energy-level distribution, including the energy of the first excited state, CASPT2 did not prove to be superior to CASSCF, and CASSCF was not found to be superior to the radial effective charge (REC) model.56 Wider and very critical reviews have also found a similar trend when comparing effective theories vs. ligand-field theory vs. ab initio calculations, in the sense that neither of the approaches is a good fit for experimental results. This means that both kinds of methods demonstrably fail at providing us with high-quality knowledge, although they do so in different ways: effective theories can miss important parts of the physics, and high-level ab initio calculations tempt us to lose a critical perspective.57 It has been argued that CASxxx methods in particular have to be considered qualitative with respect to magnetochemical properties.57 This is indeed a general problem when striving for a frugal approach: the need for benchmarking, which ideally should be done against experiments rather than against another theoretical method.
In understanding complex electronic systems, two key methods, CASSCF-SO and crystal field parameters (CFPs), play an essential role. The CASSCF-SO (complete active space self-consistent field with spin–orbit coupling) method is a quantum chemistry approach tailored to systems with significant electron correlation and spin–orbit effects, particularly valuable for transition metals and lanthanides with complex electronic interactions.58 In CASSCF-SO, an active space of molecular orbitals is optimized, capturing essential electron interactions, while spin–orbit coupling is added post-CASSCF to account for relativistic effects.59 Meanwhile, crystal field parameters (CFPs) describe the splitting of the electronic states of a metal ion in the presence of surrounding ligands, which is crucial for predicting magnetic and optical properties. CFPs, commonly derived from experimental spectra or ab initio methods, quantify the strength and symmetry of the crystal field and are essential in coordination chemistry and materials science.
As we know, spin dynamics in molecules are strongly influenced by their vibrational degrees of freedom, whose number increases with the number of atoms in the system (3n − 6, where n is the number of atoms). In the case of single-molecule magnets for memory storage applications, a record blocking temperature of 60 K was achieved,32 which was later increased to 80 K.31 This success was due to mitigating the resonance between spin and vibrational degrees of freedom. Initially, ab initio methods like CASSCF-SO were employed to study these kinds of interactions, requiring significant computational resources. However, a semi-empirical approach (the REC model) was later applied to an 80 K molecule,55 yielding accurate predictions of spin-vibration interactions with significantly reduced computational demand. This validated the REC model as a practical and precise tool for exploring these phenomena.
Herein, we present a computationally inexpensive methodology to explore both vibronic couplings and spin-electric couplings (SECs). This methodology consists of three steps. In the first step, the spin energy levels are determined at a multireference level (e.g. CASSCF with spin–orbit coupling, CASSCF-SO) in the crystal geometry; the obtained energy levels are employed to parameterize the ligand field Hamiltonian within the REC model implemented in the SIMPRE code.60,61 Alternatively, experimental spectroscopic information can be used for this step. In the second step, the geometry is optimized using density functional theory (DFT) to determine the vibrational frequencies and their corresponding displacement vectors. To determine vibronic couplings, the final step consists of generating distorted geometries along the normal vectors and employing the REC model to determine the spin energy levels and crystal field parameters (CFPs). To estimate SECs, an additional step is required in which the dipole moment is determined along each vibrational coordinate at the DFT level to construct a new charge-affected vibrational basis; alternatively, this can be obtained inexpensively by employing effective charges. In a final step, spin levels and CFPs are determined using the REC model.
This process is computationally very demanding when performed solely at the ab initio level instead of with an effective charge model. We applied this scheme to the spin-qubit candidate [Ho(W5O18)2]9− (in short, HoW10) and compared the vibronic couplings and SECs with those already determined solely at the ab initio level.51,62 The equilibrium spin energy levels and wave function composition of HoW10 are provided in ESI Table S1.† This scheme has already proven effective for determining the key vibrations responsible for spin relaxation in molecular nanomagnets.55
The vibronic couplings are obtained by distorting the equilibrium geometry along each normal mode coordinate (x_i). The evolution of each crystal field parameter B_k^q was fitted to a second-order polynomial, whose first derivative versus x_i at the equilibrium geometry gives the vibronic coupling of that CFP with the normal mode, i.e. ∂B_k^q/∂x_i. The overall effect can be obtained by averaging over the different ranks (k, q) of the CFPs, as in eqn (1):63

S_i = sqrt[ (1/N_CFP) Σ_{k,q} (∂B_k^q/∂x_i)^2 ]  (1)

where N_CFP is the number of CFPs included in the average.
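As an illustration of this step, the sketch below assumes that the CFPs have already been evaluated (e.g. with the REC model in SIMPRE) at a few displacements along one normal mode; it fits each CFP to a second-order polynomial, takes the first derivative at the equilibrium geometry, and combines the derivatives as in eqn (1). The numerical values are placeholders, not HoW10 data.

```python
import numpy as np

# Displacements along one normal mode and the corresponding CFP values,
# one row per (k, q) pair. These numbers are placeholders standing in for
# REC/SIMPRE output at the distorted geometries.
x = np.array([-0.02, -0.01, 0.0, 0.01, 0.02])
cfps_cm1 = np.array([
    [1.020, 1.010, 1.000, 0.989, 0.981],   # e.g. B_2^0
    [0.150, 0.152, 0.155, 0.159, 0.162],   # e.g. B_4^0
    [0.010, 0.011, 0.012, 0.013, 0.015],   # e.g. B_6^0
])

derivatives = []
for cfp in cfps_cm1:
    a2, a1, a0 = np.polyfit(x, cfp, deg=2)  # second-order polynomial fit
    derivatives.append(a1)                  # dB/dx evaluated at x = 0 (equilibrium)
derivatives = np.array(derivatives)

# Vibronic coupling strength of this mode, eqn (1): rms average over (k, q).
S_i = np.sqrt(np.mean(derivatives ** 2))
print(f"S_i = {S_i:.4f} cm^-1 per unit displacement")
```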
The obtained vibronic couplings of each vibrational mode are plotted in Fig. 1 and compared with those previously determined using the CASSCF-SO method. The vibronic couplings obtained using the REC model differ, on average, by ±0.022 cm−1 from the CASSCF-SO results, an overall satisfactory agreement in which one nevertheless finds substantial relative deviations for several vibrational modes. These differences between the estimates of the two models are generally attributed to the fact that the REC model does not consider the second coordination sphere of HoW10. The detailed values of the vibronic couplings for each vibrational mode are provided in ESI Table S2.†
Fig. 1 Vibronic coupling strength Si, calculated for each vibrational frequency of HoW10 using both the CASSCF-SO method and the REC model.62 Note that the modes are merely ordered by increasing energy and have not been reordered by similarity of the displacement vectors.
To seek insight into the spin-electric couplings, we established a relation between the spin Hamiltonian and the molecular distortion as a function of the dipole moment. The SECs are defined as shifts in the transition frequency (δf) between two spin energy levels. The relationship between the applied electric field (E-field) and the spin state is determined by noting that the E-field will cause a change in the dipole moment (δp) of the molecule, lowering the electric potential energy. The stabilization via the electric potential is compensated by the elastic cost of distorting the molecular structure; in other words, for each mode the field-induced force is balanced by the elastic restoring force, κ_i x_i = (∂p/∂x_i)·E. Thus, by calculating the electric dipole moment as a function of the mode displacements, we can quantitatively extract the displacements as a function of the applied E-field. Each normal mode is associated with a force constant κ_i and a reduced mass μ_i, yielding an eigenfrequency ω_i = (κ_i/μ_i)^1/2. The electric dipole p depends on the displacement x_i of each mode, and this determines the coupling of the mode to an applied E-field or to incident light, that is, its infrared intensity. By a linear combination of all normal modes, we can find an effective displacement as a function of the E-field, e.g. x_eff(E) = Σ_i x_i(E) = Σ_i (∂p/∂x_i)·E/κ_i.
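Under the approximations just described, this step reduces to a few array operations. The sketch below illustrates it with hypothetical dipole derivatives, force constants and field; the real calculation uses the DFT (or effective charge) dipole derivatives of all 3n − 6 modes.

```python
import numpy as np

# Placeholder data for three normal modes (a real calculation uses all 3n - 6 modes).
dp_dx = np.array([[0.02, 0.00, 0.01],    # dipole derivative of mode 1 (arbitrary units)
                  [0.00, 0.05, 0.00],    # mode 2
                  [0.01, 0.01, 0.03]])   # mode 3
kappa = np.array([1.2, 0.8, 2.5])        # force constants (consistent units)
E = np.array([0.0, 0.0, 1.0e-3])         # applied electric field vector

# Force balance for each mode: kappa_i * x_i = (dp/dx_i) . E
x_modes = (dp_dx @ E) / kappa

# Effective displacement as a linear combination of all normal-mode displacements;
# feeding the distorted geometry back into the REC model then yields delta_f(E).
x_eff = x_modes.sum()
print("per-mode displacements:", x_modes)
print("effective displacement x_eff:", x_eff)
```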
Note that this model, while reasonable, relies on a series of approximations. The molecular structure, electric dipole and vibrational modes obtained by DFT do not correspond exactly to what happens within the crystal: e.g. we are not considering the displacement of crystallization water molecules and counterions, nor the distortion of the orbitals as a result of the electric field. Within this framework, one can estimate how the spin state evolves as a function of x_eff, using the REC model described above to determine δf. The results are shown in Fig. 2 and compared with the SECs determined at the CASSCF-SO level. We repeated this process for both the DFT-optimized geometry and the crystallographic geometry for different applied E-fields (given the distance between the two plates between which the sample is placed, one can convert the E-field to voltage units (V)).
Fig. 2 The shift in transition frequency (δf) versus applied voltage V, showing a linear E-field coupling in HoW10. Calculations correspond to the DFT-optimized structure (left) and to the crystallographic structure (right).51
A linear increase in the transition frequency was observed for both the optimized and the crystal geometries, in accordance with experiment and with previous determinations at the CASSCF-SO level. From Fig. 2, one can see that the REC values are very sensitive to the geometry, whereas the CASSCF-SO values are comparatively stable. The overall tendency is nevertheless well reproduced, and moreover both methodologies are similarly inaccurate: for the optimized structure both techniques underestimate the shift, with REC being off by a factor of <2 and CASSCF-SO by a factor of >5, whereas for the crystal structure they fail in different directions, REC being off by a factor of <5 and CASSCF-SO by just a factor of >2. In any case, the spin-electric coupling constants extracted from this linear progression, in units of Hz V−1 m, are provided in Table 1; both methodologies result in the same order of magnitude for the SEC constant. The reason for the qualitative coincidence between the two very different models of δf vs. voltage is likely that neither is perfect but both are good enough, their exactness being actually limited by the many approximations in the previous parts of the model, as detailed above. The detailed values of the SECs for different voltages are provided in ESI Table S3.†
Table 1 Spin-electric coupling (SEC) constants (Hz V−1 m)

| | Exp. | CASSCF-SO (opt.) | CASSCF-SO (crys.) | REC (opt.) | REC (crys.) |
|---|---|---|---|---|---|
| SEC (Hz V−1 m) | 11.4 | 2.0 | 4.5 | 6.2 | 54.5 |
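For completeness, the sketch below shows how such an SEC constant is extracted as the slope of the linear relation between δf and the applied field; the (δf, E) pairs are placeholder values, not the data behind Table 1.

```python
import numpy as np

# Placeholder data: transition-frequency shift (Hz) vs applied E-field (V/m).
E_field = np.array([0.0, 1.0e5, 2.0e5, 3.0e5, 4.0e5])   # V m^-1
delta_f = np.array([0.0, 0.9e6, 2.1e6, 3.0e6, 4.1e6])   # Hz

# The SEC constant is the slope of the linear fit, in Hz V^-1 m.
sec_constant, intercept = np.polyfit(E_field, delta_f, deg=1)
print(f"SEC constant = {sec_constant:.1f} Hz V^-1 m")
```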
For the optimization and vibrational frequency calculations, the computing time was ≈160 h using density functional theory (DFT) as implemented in Gaussian16. For the spin energy levels, using the ab initio approach (CASSCF followed by the restricted active space state interaction, RASSI) implemented in Molcas, a processing time of ≈10.33 h was spent using 4 i9 processors in MPI parallel processing and 64 GB of memory on a local server. For the determination of the spin energy levels for the spin-electric coupling at the fully ab initio level, a total of 13 geometries including the equilibrium one were calculated, using approximately 134.33 hours of processing time. For the vibronic couplings, 135 × 6 × 10.33 h (no. of modes × no. of geometries per mode × processing time per geometry) ≈ 8370 h of computation were needed. When the corresponding task was performed using the REC model implemented in SIMPRE, the total processing time was 13 × 1 = 13 seconds to determine the spin-electric couplings and 135 × 6 × 1 = 810 seconds (13.5 minutes) for the spin-phonon couplings.
The associated energy expense of the ab initio approach (considering the electricity cost, rather than the full supply chain) is about 1 MW h, with a carbon footprint which can be estimated (assuming the average energy mix in Spain) to be of the order of 200 kg of CO2 equivalents; see Table 2 for the detailed computational time and energy cost at each step.64 This is comparable to a thousand km in a passenger car, or similar to the per-passenger emissions of a medium-distance flight. This is not an absurd cost, but it is not environmentally negligible either. When intensive calculations result in a carbon footprint comparable to that of flying, this cost should be taken into account by environmentally conscious researchers.15 In the REC approach, in contrast, the SEC estimation would have a negligible carbon footprint, about 10⁴ times smaller. Actually, in that case the total cost would be dominated by the initial DFT cost of structure optimization and calculation of the vibrational modes, so the actual factor in the savings is about 50, i.e. less than 5 kg of CO2 equivalents, enough to consider the inexpensive method as environmentally acceptable.
Table 2 Detailed computational time and energy cost at each step

| Step | Computation time (h) | Energy consumed (kW h) |
|---|---|---|
| DFT (opt., freq.) | 160 | 26 |
| CASSCF-SO | 10 | 1.27 |
| REC | 0.00028 | 0.00005 |
| Ab initio/Molcas: spin-electric couplings | 294 | 43 |
| Ab initio/Molcas: spin-vibrational couplings | 8540 | 1056 |
| REC/SIMPRE: spin-electric couplings | 170 | 28 |
| REC/SIMPRE: spin-vibrational couplings | 171 | 28 |
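As a sanity check of the orders of magnitude quoted above, the following sketch reproduces the accounting in a few lines of arithmetic. The assumed average power draw of the server and the assumed carbon intensity of the Spanish grid are illustrative round numbers, not measured values.

```python
# Rough reproduction of the cost accounting above (assumed, not measured, parameters).
POWER_KW = 0.125        # assumed average power draw while computing (kW)
CO2_KG_PER_KWH = 0.19   # assumed average carbon intensity of the Spanish grid (kg CO2e per kW h)

def footprint(hours, power_kw=POWER_KW, intensity=CO2_KG_PER_KWH):
    energy_kwh = hours * power_kw
    return energy_kwh, energy_kwh * intensity

# Ab initio route: DFT (160 h) + 13 geometries x 10.33 h for the SECs
# + 135 modes x 6 geometries x 10.33 h for the vibronic couplings.
hours_ab_initio = 160 + 13 * 10.33 + 135 * 6 * 10.33
# REC route: the same DFT step + one CASSCF-SO reference + seconds of SIMPRE time.
hours_rec = 160 + 10.33 + (13 + 810) / 3600

for label, hours in [("ab initio", hours_ab_initio), ("REC", hours_rec)]:
    kwh, kg = footprint(hours)
    print(f"{label:10s}: {hours:8.1f} h  ~{kwh:7.1f} kW h  ~{kg:6.1f} kg CO2e")
```

With these assumptions the ab initio route lands at roughly 1 MW h and ≈200 kg of CO2 equivalents, while the REC route stays below 5 kg, i.e. the factor of ≈50 discussed above.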
Even more promising, an analytical scheme was recently demonstrated to estimate nonadiabatic couplings and state-specific energy gradients for the crystal field Hamiltonian describing lanthanide single-ion magnets.54 Within this scheme, a single-point calculation of the desired accuracy, or experimental spectroscopic information, can be employed to fine-tune the parameters of a very inexpensive model, which then allows taking analytical derivatives for any desired perturbation of the molecular geometry, thereby saving the hundreds of calculations that would be required to estimate the same derivatives numerically. As a result, even within frugal modelling constraints, it is possible to model spin relaxation using sophisticated nonadiabatic molecular dynamics. This clearly points towards the direction we need to follow to keep producing good science that is compatible with the planetary boundaries: we need to know our systems well, find or develop a minimal model that recovers a good part of the relevant physics, and then, if possible, employ an analytical approach to solve the problem, rather than brute-force throwing computational power at it (Table 3).
Table 3 Processing time of each function of the program, as implemented in Matlab and in Python. The times of the three most time-consuming steps are marked in bold.

| Operation | File | Time in s (Matlab) | Time in s (Python) |
|---|---|---|---|
| Read input data from EXCEL file | User configurations | 0.2346 | 0.2080 |
| Read system characteristics from EXCEL file | Read_data | 0.0725 | 0.0094 |
| Calculate magnetic relaxation | Mag_relaxation | 0.0021 | 0.0010 |
| Calculate probabilities of each spin to flip in the 1st p-bit | Bolztmann_distribution | 0.0010 | 0.0010 |
| Iteration process (“for” loop) for the 1st p-bit | Changeable_field | **305** | **16800** |
| Calculate probabilities of each spin to flip in the 2nd p-bit | Bolztmann_distribution | 0.0473 | 0.2630 |
| Iteration process (“for” loop) for the 2nd p-bit | Changeable_field | **275** | **16900** |
| Average p-bit states over time | Mean_matrix_state | 0.0460 | 0.0036 |
| Association analysis between both p-bits | Association | 0.1484 | 2.3946 |
| Plotting results | Plotting | 0.7876 | 0.4762 |
| Total time | | **581** | **33700** |
STOSS consists of a custom implementation of a Markov Chain Monte Carlo algorithm for each of the N independent particles (in this case, effective spins S = 1/2). The relative Markov chain probabilities for the spin flip between the ground and excited spin states correspond to the relative Boltzmann populations of the two effective spin states Ms = +1/2 and Ms = −1/2. Each computational step has an associated natural time duration that is derived from parameterized average spin dynamics, thus the model allows one to follow N independent time trajectories. There are three main scenarios studied using experimental data for comparison, and all the details can be found in the ESI† of Gutiérrez-Finol et al.37
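The listing below is not the published STOSS code but a minimal sketch of the kind of Markov chain update just described: N independent effective S = 1/2 spins whose flip probabilities follow the relative Boltzmann populations of the two states. The energy splitting, attempt rate, ensemble size and number of steps are placeholders; in the two-p-bit case study below, the splitting of the second ensemble would additionally depend on the field generated by the state of the first one.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

N = 100_000            # number of molecules contributing to one p-bit (placeholder)
steps = 1_000          # number of Markov-chain steps (placeholder)
delta_e_over_kT = 0.5  # splitting between the two effective spin states, in units of kT (placeholder)
attempt_rate = 0.1     # flip-attempt probability per step, standing in for the parameterized spin dynamics

# Boltzmann weight of the ground state fixes the probability of ending up in Ms = +1/2.
p_ground = 1.0 / (1.0 + np.exp(-delta_e_over_kT))
spins = rng.choice([1, -1], size=N)          # +1 for Ms = +1/2, -1 for Ms = -1/2

magnetization = np.empty(steps)
for t in range(steps):
    attempts = rng.random(N) < attempt_rate                 # which spins attempt a transition
    new_states = np.where(rng.random(N) < p_ground, 1, -1)  # Boltzmann-weighted outcome
    spins = np.where(attempts, new_states, spins)
    magnetization[t] = spins.mean()                         # collective signal read out as the p-bit state

print("final average magnetization:", magnetization[-1])
```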
Different implementations of the same program, even if following a given algorithm as closely as possible, have distinct costs in terms of memory use, runtime and energy consumption. This has often been analyzed, in particular for implementations employing different programming languages.66 In the case of Matlab vs. Python, a major difference arises in “for” loops, where Matlab is faster than Python. More generally, the M language has a strongly typed syntax, often resulting in improvements in memory usage and processing time: identifying the type of each variable at compile time allows the compiler to optimize the code, saving time and using the minimum amount of memory. That being said, we are not claiming that the behaviour we present here is univocally associated with implementing the model in Python vs. in the M language, since many different approaches are always possible even within a given language and algorithm.
For this research, the study was carried out using a desktop computer (11th Gen Intel(R) Core(TM) i9-11900K processor @ 3.50 GHz, 16 GB of installed memory with 15.8 GB available) running Windows 10 Enterprise (version 22H2). The following Python (3.10, 64-bit) modules were used: NumPy/SciPy (using the Intel Math Kernel Library extension), Matplotlib, Pandas, collections, random, math, and time. On the other hand, we used Matlab R2023b, 64-bit.
We performed the calculations for the third case study of a lanthanide-based, molecular spin p-bit network,37 which corresponds to the longest computation in that paper. Keeping the original case in mind, we implemented the simulation maintaining the same conditions in both scenarios. We explored a 2-p-bit architecture where each p-bit is constituted by the collective signal of 10⁶ magnetic molecules which evolve freely, with the molecules corresponding to the first p-bit evolving in the absence of a magnetic field and those corresponding to the second p-bit evolving in the presence of a magnetic field determined by the state of the first p-bit. The program is divided into seven functions, each accomplishing a specific task; two of the functions are reused for the calculation of the probability of each p-bit to change its state. Table 3 presents the processing time for each function; as it shows, by far the largest difference comes from the “for” loops, where the program iterates over each spin at each time step.
In this case the total runtime is rather short either way, as are the associated emissions. However, should one employ STOSS to fit experimental data, as was done recently,52 one would be tempted to explore a wide parameter range, and that would give rise to a larger carbon footprint if the more expensive implementation were used. A good approach here would be, as soon as the expected calculation time veers into the hundreds or thousands of hours, to start thinking about a less expensive implementation. Of course it can be even better to optimize the model rather than just the implementation, and we are indeed working on that. Additionally, we now present an updated version of the program, implementing several optimizations to enhance performance. By leveraging the vectorized operations and built-in functions available in the MATLAB programming language, we significantly reduced the computational time. These optimizations allow for more efficient data handling and processing, minimizing the need for iterative loops. As a result, the program is now capable of delivering faster results without compromising accuracy, offering an overall improvement in both speed and functionality.
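The kind of optimization described here is easy to illustrate in NumPy terms (the same reasoning applies to MATLAB's built-in vectorized operations): the sketch below compares an explicit per-spin “for” loop with a vectorized update of the whole ensemble, on placeholder data.

```python
import numpy as np
import time

rng = np.random.default_rng(0)
N = 1_000_000
p_flip = 0.3
spins = rng.choice([1, -1], size=N)

# Explicit "for" loop over every spin: the costly pattern behind Table 3.
t0 = time.perf_counter()
looped = spins.copy()
for i in range(N):
    if rng.random() < p_flip:
        looped[i] = -looped[i]
t_loop = time.perf_counter() - t0

# Vectorized equivalent: one array operation instead of N iterations.
t0 = time.perf_counter()
mask = rng.random(N) < p_flip
vectorized = np.where(mask, -spins, spins)
t_vec = time.perf_counter() - t0

print(f"loop: {t_loop:.3f} s, vectorized: {t_vec:.4f} s")
```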
Simulations have the potential to drive valuable discoveries in green chemistry, such as optimizing processes or identifying effective catalysts. However, we believe that these computational efforts should also adhere to green coding practices to maximize sustainability. By prioritizing energy-efficient algorithms and minimizing resource-intensive operations, simulations can achieve their research goals while simultaneously reducing their environmental impact. In this context, it is crucial that computational chemistry integrates energy-efficient strategies and minimizes the carbon footprint of simulations, ensuring that progress in green chemistry is complemented by the adoption of greener computational practices. Frugality should be explored to ascertain the right balance between accuracy and efficiency for each simulation.
Quantifying the CO2 footprint of chemistry laboratories is far from a trivial task and requires an exhaustive life cycle assessment (LCA). An LCA should account for everything: materials acquisition and input, manufacturing and production, packaging and distribution, product use, disposal and recycling.67 Consequently, in the framework of green chemistry, we urge the implementation of sustainability objectives also in computational chemistry research, not only to design safer chemicals and processes and to avoid physical experiments, but also to reduce energy consumption and minimize waste. Just as we aim to prevent and minimize when designing a chemical reaction, we should also standardize preventive practices during computational experiments.
Here we need to be aware of the urgency of the climate crisis and thus avoid relying on unproven solutions and risky bets on techno-optimistic futures. In particular, in the field of theoretical and quantum chemistry, it would be irresponsible to wait until some supposedly energy efficient quantum computers start solving practical problems. Presently, both the operation of quantum computers and the simulation of quantum circuits carry a considerable carbon footprint and are not being used to avoid the footprint of conventional supercomputers.68,69 The same can be said of AI-based solutions, which may well bring increased productivity but have so far substantially increased the overall computing carbon footprint, mainly through the training cost of the models, with no clear path to net reduction.
We presented here two particular examples from the field of computational materials science illustrating an extremely common situation: when confronted with a calculation with a large carbon footprint, one can often choose either to solve a different model or to solve the same model via a different implementation, and obtain significant savings in carbon footprint without making any significant sacrifices in scientific yield. Currently this is mostly overlooked, and if anything the most expensive methods tend to enjoy a higher prestige and are considered more trustworthy. Often, this means that wasteful methods allow for easier or better publishing venues. Indeed, this is a general problem rather than one exclusive to computation: a wasteful excess of experimental techniques is rarely if ever seen as a problem from the publishing perspective, but only as a matter of money.
In our case, the research presented in Liu et al.51 was originally submitted with the frugal method, but during the refereeing process we were requested to switch to the method that is in principle more exact but which, as we show here, is actually wasteful in this case. As a community we need to do better, and the main factor is choosing affordable models to solve problems, with a secondary but sizeable contribution from finding an inexpensive way to implement these models. Crucially, we need to beware of Jevons' paradox or rebound effect,30 meaning that a more efficient method, if not coupled to resource consciousness, by itself leads to an increased usage which often overshoots the savings. This can be seen as a particular case of the necessary attitudinal change in all of chemistry.70 Thus, we call herein for “frugality” rather than for “efficiency”. More generally, just as we consider the ethical repercussions of animal experimentation, or of dealing with patient data, eventually we will need to include the risk of carbon footprint wastefulness as an ethical concern in research.
Footnotes
† Electronic supplementary information (ESI) available. See DOI: https://doi.org/10.1039/d4gc04900d
‡ During the refereeing process A.G.A. was caught in the disaster zone of the 2024 Spanish floods, which were fueled by sea warming. This paper is gratefully dedicated to the autonomous volunteers that spearheaded the cleanup and relief efforts.