Learning and spiking dynamics in brain-like nanoscale networks

B. L. Monaghana, Z. E. Heywooda, S. J. Studholmea, F. Houardb, J. Grisoliab, S. Tricardb and S. A. Brown *a
aThe MacDiarmid Institute for Advanced Materials and Nanotechnology, School of Physical and Chemical Sciences, University of Canterbury, Christchurch, New Zealand. E-mail: simon.brown@canterbury.ac.nz
bLaboratoire de Physique et Chimie des Nano-Objets, INSA, CNRS, Université de Toulouse, 135, av de Rangueil, 31077 Toulouse Cedex 4, France

Received 10th June 2025, Accepted 21st July 2025

First published on 23rd July 2025


Abstract

Neuromorphic approaches to computation are driven by both the low-power operation of the biological brain and ever-increasing energy consumption of modern computing systems. Percolating networks of nanoparticles are promising candidates for self-assembled neuromorphic hardware systems as they exhibit a range of brain-like properties, including neuron-like spiking dynamics and critical behaviour. Here we show that random placement of synaptic memristors within these neuron-like networks leads to changes in the spiking dynamics and to learning behaviour. We consider two models of the memristors and show that different types of memristive hysteresis lead to differing effects on the network-level spiking dynamics. We then demonstrate that mixtures of neurons and synapses exhibit potentiation and de-potentiation, i.e. learning and forgetting. These results suggest that the addition of synaptic ‘memory’ to self-assembled networks provides functionality that could enable new types of computation.



S. A. Brown

We are very pleased to publish this, our third article in Nanoscale Horizons, as part of the 10th anniversary issue. It describes new simulations that capture the potential for nanoscale memristors to be added to self-assembled devices in order to modify performance and to provide the basis for new applications. I have appreciated the very reasonable and thoughtful approach taken by the Editors at this journal and I hope that we will continue to publish here.



New concepts

Percolating networks of nanoparticles exhibit brain-like properties and so are promising candidates for fabrication of self-assembled nanoscale hardware variants of neural networks. However, to date, the absence of intrinsic memory has meant that learning must be performed outside the networks. Here we replace a portion of the “spiking neurons” in the networks with memristive “synapses”, and demonstrate biologically realistic potentiation behaviour, i.e. learning. More specifically, we show that different memristor properties lead to learning behaviour on different timescales and to different neural spiking dynamics in the networks. This work therefore provides a key step towards building more realistically brain-like nanoscale networks, incorporating synaptic functionality, and enabling new types of neuromorphic computation with self-assembled networks.

1 Introduction

As global demand for high-performance computing systems increases, the scale and performance limits of conventional transistor-based architectures have become increasingly apparent, and concerns about the energy consumption of these systems have grown. In response, alternative (‘neuromorphic’) approaches to computation inspired by the human brain have been proposed to mitigate and overcome these issues.1,2 The brain is a very high-performance cognitive system that operates with remarkable energy efficiency,3 and so the development of computers that take inspiration from the brain could provide powerful and energy efficient new information processing systems.

Self-assembled nanoscale systems such as networks of nanowires and nanoparticles are promising for neuromorphic computing due to their small scale, inherent brain-like properties, and low power consumption.4,5 In both nanowire networks (NWNs)6–10 and percolating networks of nanoparticles (PNNs)11–17 brain-like dynamics emerge from the complex collective response of the junctions between the nanowires or nanoparticles. We focus here on PNNs because – as will be explained below – they are remarkably robust and can be operated in a regime where information is processed through neuron-like spikes, as in the brain. Spiking is believed to provide significant computational advantages, such as low power consumption and suitability for tasks that require processing of temporal information, e.g. real-time decision-making.18

1.1 Percolating networks of nanoparticles

PNNs exhibit small-world and scale-free topologies, stochastic spiking, and brain-like dynamics with long-range temporal correlations.19–22 As shown schematically in Fig. 1, PNNs are comprised of metallic nanoparticles deposited onto an atomically smooth insulating substrate. Particles are deposited until the surface coverage p reaches the percolation threshold (pc ∼ 0.68 for 2D continuum percolation23). In this regime the conductance of the PNNs is dominated by tunnel gaps between highly conducting groups of nanoparticles (Fig. 1).
Fig. 1 Schematic of a PNN showing groups of nanoparticles (pale blue), spiking tunnel gaps (red), and introduced memristors (orange). In this example 30% of the original tunnel gaps have been replaced by memristors.

When a voltage Vapp is applied to the input electrodes, electric field-driven surface diffusion processes11 cause atoms to migrate within the tunnel gaps. Fig. 2a and b show the initial growth at low voltages of a ‘hillock’ within a tunnel gap. The hillock decreases the tunnelling distance and hence increases the conductance of the gap (if Vapp is decreased, surface energy effects cause the hillock to decay, decreasing the conductance). These dynamics have been shown to lead to memristive behaviour24 (see discussion of synapses below) and allow implementation of neuromorphic computational schemes such as reservoir computing25–28 (note that related computations have been performed with NWNs7–9), in addition to computation schemes based on neuron-like spiking.29,30


Fig. 2 Hillock and filament formation in PNNs. (a) A schematic of a tunnel gap that separates two nanoparticles prior to the formation of a hillock. (b) In response to an applied network voltage, the electric field in the gap forms a hillock which decreases the size of the tunnel gap and increases its conductance. Hillock growth (relaxation) is driven by the electric field (surface tension).24,25 (c) After a sufficiently long time, or at sufficiently large applied voltages, an atomic-scale filament forms that fully bridges the tunnel gap. Electromigration later leads to breaking of the filament.

Fig. 2c shows that high voltages lead to formation of atomic-scale filaments that bridge the tunnel gap, causing a sudden increase in conductance. The filaments later break due to electromigration effects.11 The formation and destruction of these filaments generates neuron-like spiking events,19 and it has been shown that the atomic-scale dynamics resemble leaky integrate-and-fire behaviour in biological neurons.20 Together with the scale-free topology of the PNNs,12,21 these dynamics result in highly correlated scale-invariant bursts of network activity – ‘avalanches’ – that are quantitatively similar to those in the human brain. The avalanches meet strict criteria for criticality19 which is associated with optimum computation and is thought to be the operating point of the brain.31,32 Several types of computation have been successfully demonstrated that exploit this spiking,29,33 but exploitation of criticality for specific tasks is a remaining challenge.30

Experimental fabrication of PNNs has been described in detail in ref. 12. The physical devices are remarkably robust and can be easily integrated with CMOS electronics, making them attractive for real-world applications.26 Physically realistic simulations of both the electrical and neuromorphic properties of PNNs have previously been shown to be in excellent agreement with experimental results,20 confirming that a model of percolation with tunnelling34 provides an accurate description of the real physical system. We highlight that simulations include a model of neuron-like spiking based on the formation and breaking of atomic-scale filaments (see Methods, eqn (2) and (3)). Here we label this the ‘Type A’ model, in anticipation of the models of memristive/synaptic behaviour that will be introduced below.

1.2 The need for synaptic plasticity

A key feature of the biological brain is the connectivity between neurons, which is provided by synapses. Modulation of synaptic connections is a key mechanism for the formation and storage of memory.35 In PNNs, the (volatile) memristive behaviour in the low voltage regime provides short term memory but in the high voltage spiking regime there is no synaptic/memory mechanism: the connectivity of the network of neurons is fixed and learning must be performed outside the network.29 Incorporation of synapses within these spiking neural networks would be a significant step towards the construction of more biologically-realistic self-assembled systems, as well as possibly allowing tuning of the networks between critical and non-critical states and enabling a range of different computational algorithms to be implemented. Learning is a key neuromorphic behaviour that is necessary for a range of applications, including associative learning,36 unsupervised learning,37 and reservoir computing,26–28 and has been especially emphasised in the literature on related nanowire devices.7,38,39

Fig. 1 shows schematically the replacement of some spiking tunnel gaps (red) with new memristive elements (orange). We expect that such memristive synapses can be incorporated into PNNs experimentally by methods such as deposition of inorganic coating materials,40 by controlled sulphidisation of Ag or Cu PNNs,6,41 or by the introduction of novel memristive molecules.42–44 First steps in this direction were taken in ref. 45, although we note the network architecture in that case is different to that of the PNNs considered here.

Here we show, using physically realistic simulations, that the addition of memristive synapses to PNNs controls the connectivity between neuron-like tunnel gaps and significantly modifies the network properties. We begin by considering two different models of memristive synapses and show that different parameterisations in the models lead to very different hysteresis of the individual memristors, which in turn leads to different hysteresis in the networks. We then show that when the individual memristor characteristics are tuned appropriately, the inclusion of ‘synapses’ in the networks significantly changes the spiking dynamics (e.g. patterns of long-range temporal correlations). We further demonstrate that the mixed networks exhibit clear learning behaviour: in response to sequences of voltage pulses the plasticity of the synapses leads to increases in network conductance, increased connectivity between neurons, and hence to increased neuronal spiking. In the absence of stimulus the synapses de-potentiate, and the network conductance and spike rate both decrease. Both learning and forgetting46 are essential for various types of brain-like computation (see ref. 30, 37 and 47–49 and references therein). We emphasise that the results presented here are from simulations, but that they pave the way for new experiments which we hope to report on in the near future.

2 Results

The focus of this paper is on the changes in network dynamics that result from the introduction of memristive synapses. We begin by discussing the dynamics of the individual memristors and showing the effects of incorporating new memristors into the networks in the low voltage regime (i.e. in the absence of spiking). We then demonstrate their effects on the spiking dynamics in the high-voltage regime, where the behaviour of the memristors complements that of the spiking tunnel gaps. Lastly, we demonstrate the network plasticity that results from inclusion of memristors with appropriately chosen parameters.

2.1 Memristor models

Two memristor models were investigated to complement the spiking tunnel gaps governed by the Type A model discussed above. The first is a model that has previously been used to simulate memristive tunnel gaps in PNNs (i.e. the formation of hillocks, as in Fig. 2) in the low-voltage regime.24,25 It models atomic-scale dynamics that are linear in response to the local gap voltage Vg. We label this model ‘Type B’, and it is governed by eqn (4) and (5) (see Section 5.3).

The second memristor model was developed to capture electrochemical effects in memristive junctions between Ag NWNs.8,39 This model is governed by eqn (7)–(10) (see Section 5.4). This ‘Type C’ model provides greater ability to precisely tune the size and shape of memristive hysteresis loops due to its large parameter space and its exponential dependence on the local voltage.50 Note that both models describe volatile memristors since this allows for ‘forgetting’ (see Section 2.6). Both models are unipolar to maintain consistency with previous work.8,20

We emphasise that the Type B model is derived from the Type A model, whilst the Type C model is an entirely separate model that was developed as a way of modelling very different memristive devices. The Type B model is similar to the Type A model in the low voltage regime, where only continuous changes of the gap resistance are allowed. The main difference is that at high voltages the Type A model generates discontinuous changes in resistance, i.e. spikes. This spiking is the result of filament formation and breaking in “virgin” nanogaps between nanoparticles, whereas memristive behaviour is a property of additional memristors (e.g. molecules or sulphides).

This paper discusses the effect of replacing (randomly chosen) Type A tunnel gaps with Type B or C memristors. We label the networks according to the type of memristor and the ratio: for example, an ‘A:C = 75:25 network’ refers to one in which 25% of its tunnel gaps were replaced with Type C memristors. Section S1 (ESI) discusses the effects of different parameter choices in each model. In particular, Section S1.4 (ESI) shows that the relative conductances of the gaps/memristors do not have a significant impact on the observed network properties.
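As an illustration of this labelling convention, the following minimal sketch randomly converts a chosen fraction of tunnel-gap edges into memristors. The edge indexing, the assign_edge_types helper and its parameter names are hypothetical and are not taken from the actual simulation code.

```python
import random

def assign_edge_types(n_edges, frac_memristor=0.25, mem_type="C", seed=1):
    """Randomly mark a fraction of tunnel-gap edges as memristors (hypothetical helper).

    Edges are indexed 0..n_edges-1; the real simulation stores gap lengths,
    positions, etc. alongside each edge.
    """
    rng = random.Random(seed)
    edge_types = ["A"] * n_edges                          # all gaps start as spiking Type A gaps
    for i in rng.sample(range(n_edges), int(frac_memristor * n_edges)):
        edge_types[i] = mem_type                          # replace with a Type B or Type C memristor
    return edge_types

# An 'A:C = 75:25 network' then corresponds to:
types = assign_edge_types(n_edges=1000, frac_memristor=0.25, mem_type="C")
print(types.count("A"), types.count("C"))                 # -> 750 250
```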

2.2 Hysteresis of single memristors and networks of memristors

Memristors are typically characterised by performing current–voltage (I–V) measurements, and the observed hysteresis loops are signatures of both non-linear and memory effects. Plotting the data as conductance–voltage (G–V) curves can be useful in clarifying the amount of hysteresis in some cases.

Fig. 3 compares the hysteresis of single Type B and C memristors (dashed lines) with the hysteresis of 100% memristor networks (solid lines). G–V and I–V curves are shown in panels (a), (e) and (b), (f) respectively. Note that the voltage V applied to the individual memristors is 10 times lower, because in the networks the applied voltage Vapp is dropped across approximately 10 gaps in series. The conductance and current of the single memristors are normalised to be equal to those of the 100% networks at Vmax.


Fig. 3 Hysteresis of single memristors and networks of memristors (dashed and solid lines respectively). Green and blue curves are for the Type B and Type C memristor cases. The hysteresis is shown in several forms: (a) and (e) G–V curves. (b) and (f) I–V curves. (c) and (g) G–t traces. (d) and (h) I–t traces. In general, plotting conductance enhances the visibility of the hysteresis. The voltage ramps (grey dashed lines) have Vmax = 2 V for the network of 100% memristors (panels (c), (d), (g) and (h)) and Vmax = 0.2 V for the single memristors. Vmin = 0 V, period = 200 timesteps. The conductance and current of the single memristors are scaled such that the values at the peak voltage are equal to those of the 100% networks to clearly show the hysteresis (this results in the dashed and solid lines being almost identical in panels (e)–(h)).

Fig. 3a–d show that for the Type B memristors there is a significant difference in the shape of the hysteresis for a single memristor and a network of memristors. The difference is due to the heterogeneity of the Type B memristors (the conductance of each memristor depends on the length of the tunnel gap within which it is located; see eqn (5)). In contrast, Fig. 3e–h show that the hysteresis measured for a single Type C memristor is very similar to that of the network (Type C memristors are homogeneous, with identical parameters). Networks containing a mixture of Type A gaps and Type B/C memristors exhibit hysteresis that is similar to that of the 100% memristor networks (Fig. S2 (ESI)), as long as the applied voltage is small enough to avoid spiking. Note that the network currents are orders of magnitude higher than those measured in experiments11 because the simulation parameters were chosen for consistency with ref. 20 and 22. Simulated currents could be scaled to be more realistic but we choose to maintain consistency with previous work, as this scaling does not impact the results. See Methods (Section 5.2) for more detail.
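To illustrate how such hysteresis curves are generated, the sketch below drives a single Type C memristor (eqn (7)–(10) and Table 1 in the Methods) with a triangular voltage ramp like the one in Fig. 3 and records its conductance. The explicit Euler update, the unit timestep and the helper names are assumptions made for this illustration only, not the integration scheme of the actual simulations.

```python
import numpy as np

# Type C parameters from Table 1; the Euler step, unit timestep and ramp details are assumptions.
KP0, KD0, ETA_P, ETA_D = 5e-4, 5e-2, 10.0, 10.0   # s^-1, s^-1, V^-1, V^-1
G_MIN, G_MAX = 0.0, 1.0                           # Ohm^-1

def triangular_ramp(v_max, period, n_cycles=1):
    """Triangular voltage ramp 0 -> v_max -> 0, repeated n_cycles times."""
    half = period // 2
    up = np.linspace(0.0, v_max, half, endpoint=False)
    down = np.linspace(v_max, 0.0, period - half)
    return np.tile(np.concatenate([up, down]), n_cycles)

def sweep_type_c(voltages, g0=0.0, dt=1.0):
    """Integrate dg/dt = kP(V)(1 - g) - kD(V)g (eqn (7)) with an explicit Euler step."""
    g, G = g0, []
    for v in voltages:
        kp = KP0 * np.exp(ETA_P * v)                  # eqn (8)
        kd = KD0 * np.exp(-ETA_D * v)
        g = min(max(g + dt * (kp * (1.0 - g) - kd * g), 0.0), 1.0)
        G.append(G_MIN * (1.0 - g) + G_MAX * g)       # eqn (10)
    return np.array(G)

V = triangular_ramp(v_max=0.2, period=200)            # single-memristor ramp as in Fig. 3
G = sweep_type_c(V)
# Plotting G against V traces out the hysteresis loop: the up and down branches differ
# because g lags behind the voltage.
```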

Section S2 (ESI) discusses how the applied voltage is distributed in the network. Since the network is complex, scale-free and heterogeneous (comprising tunnel gaps, filaments and memristors) there is a broad distribution of the resultant voltages Vg measured across individual memristors. This in turn leads to a wide range of hysteresis curves for individual memristors, and contributes to the complex dynamics that emerge within the networks.

2.3 Network topology, voltage and current distributions

Fig. 4 compares the network topology, voltage, and current distributions for a 100% Type A gap network (left column), an A:B = 75:25 network (middle column), and an A:C = 75:25 network (right column). Graph representations of each network are shown in Fig. 4a–c. Each filled circle represents the geometric centre of a group of particles and the links between the nodes (called ‘edges’) represent the tunnel gaps between each group. Note that the tunnel gaps into which new memristors are inserted are the same in panels (b) and (c): the nodes and edges are unchanged, but the type of edge changes.
Fig. 4 Physical characteristics of simulated PNNs. 100% Type A gap network (left column), A:B = 75:25 network (middle column), A:C = 75:25 network (right column). (a)–(c) Graph representations. Nodes represent the geometric centres of the nanoparticle groups while edges represent the tunnel gaps between groups. Type B (Type C) memristors are indicated by green (blue) edges; note that for clarity the coloured edges are slightly thicker than the black edges, which may give the impression that there are more than 25% memristors. (d)–(f) Voltage distribution across the PNNs shown in (a)–(c). A DC bias is applied to the left side of the networks while the right side is held at ground potential. Particles are represented by discs which overlap to form groups, and the colours represent the potential on the groups (relative to ground). Groups that are connected to the electrodes are coloured black. (g)–(i) Current maps for the networks shown in (a)–(c). Note that the pale blue lines represent tunnel gaps carrying no current. Here the input and output electrodes are orange and pink respectively. (d)–(i) are representative snapshots of each network at timestep 500 000. Vapp = 2 V for all panels.

Fig. 4d–f and g–i show the voltages at each group of nanoparticles and the currents through each tunnel gap respectively. In panels (d)–(f) the nanoparticle groups are coloured by their potential to ground, and in panels (g)–(i) the tunnel gaps (including both Type A gaps and memristors) are coloured by the magnitude of their current flow. The maps of voltages and currents are similar for all three cases showing that the inclusion of memristors does not significantly alter the network topology.

Section S3 (ESI) discusses in detail the differences between networks containing Type B and Type C memristors, and in particular compares the distributions of the voltages across each memristor and their conductances. The voltage distributions for networks with Type B memristors are shown to more closely follow power law distributions, consistent with critical dynamics. It is shown that this is at least partially because the range of conductances of the Type B memristors is larger than for the Type C memristors, and consequently that the distributions of voltages and currents in the networks with Type C memristors are more uniform.

2.4 Spiking dynamics

Fig. 5 compares the output currents for networks containing Type B and C memristors with those for a 100% Type A gap network. In Fig. 5a and b, the spiking in the network of Type A gaps is characterised by avalanches of spiking events,19 with a low baseline current. Each change in the current corresponds to formation or breaking of an atomic scale filament. In Fig. 5c and d, the spiking behaviour can still be observed in the A:B = 75:25 network but the inclusion of Type B memristors leads to some continuous changes in current (panel (d)) and to subtle differences in the spiking patterns (discussed in Section 2.5). The baseline current is higher (when the voltage is applied the conductance of the individual memristors increases, causing an increase in the network conductance) and the sizes of the spikes are generally smaller than those in panels (a) and (b). In Fig. 5e and f the spiking activity of the A:C = 75:25 network is again subtly different, and in panel (f) the continuous conductance changes of the Type C memristors are even more evident. A further increase in the base conductance can also be observed, and the size of the spikes is again smaller, as the average conductance of the Type C memristors is higher.
Fig. 5 PNN spiking activity for DC applied voltages. (a) and (b) 100% Type A network. (c) and (d) A:B = 75:25 network. (e) and (f) A:C = 75:25 network. The left column shows the final 100 000 timesteps of the simulation, while the right column shows the last 5000 timesteps of the same simulation to better show the spiking. Clearly, the spiking dynamics differ significantly between each case. The 100% Type A network shown in (a) and (b) is dominated by large spiking events that decay quickly, while the networks with included memristors shown in (c)–(f) have smaller spiking events that are complemented by continuous changes in the current. Note that Vapp in panels (c)–(f) is higher (3 V compared to 2 V in panels (a) and (b)) to obtain comparable spiking rates (this is required because some Type A gaps have been replaced by memristors). Note that the current scales observed in panels (c)–(f) are much higher than those in panels (b) and (f) of Fig. 3. This is due to the presence of spiking Type A gaps in the networks depicted in this figure, which yield very high conductances (∼10 Ω−1) during a spike. We emphasise that the base currents seen in panels (c)–(f) correspond well to the current values observed in Fig. 3.

The key point is that avalanches of spikes are observed in Fig. 5 in all cases, i.e. the spiking dynamics are not fundamentally changed by replacing 25% of the tunnel gaps by memristors. The next sections describe the changes in dynamics in more detail.

2.5 Temporal correlations

Temporal correlations in the measured spike trains are typically characterised by examining distributions of inter-event intervals (IEIs) and autocorrelation functions (ACFs).19,20 Fig. 6a–c compare the IEI distributions for a 100% Type A gap network (red), an A:B = 75:25 network (green) and an A:C = 75:25 network (blue). Fig. 6d–f show the corresponding ACFs.
Fig. 6 Inter-event intervals (IEIs) and autocorrelation functions (ACFs). (a)–(c) IEI distributions (PDFs) for a 100% Type A network, an A:B = 75:25 network and an A:C = 75:25 network. P(IEI) is the probability density for the distributions. (d)–(f) Corresponding ACFs. PDFs calculated with linear (log) bin sizes are plotted as grey (coloured) points. Fits to the IEI distributions are shown as black dashed lines. The data in all panels are obtained from the final 400 000 timesteps of a 500 000 timestep simulation. Note that Vapp is higher in panels (b, c, e, f) (3 V compared to 2 V in panels (a) and (d)) to obtain comparable spiking rates in the networks with included memristors.

In Fig. 6a and b, the power law fits to the tails of the IEI distributions indicate that the critical dynamics previously reported19,20 in networks of Type A gaps persist after the addition of Type B memristors. However in Fig. 6c, the long-range temporal correlations appear to be more strongly impacted by the presence of Type C memristors, as the range of IEIs is narrower than that in panel (b).

Fig. 6d–f show that long range temporal correlations persist when memristive synapses are added to PNNs. The ACFs are approximately power laws in all cases but the slopes depend on details such as the homogeneity and relative conductances of the memristors in each model – see Section S1 (ESI). It is likely that the memristive synapses tune the networks away from criticality, but a detailed investigation would be required to confirm this. Such an investigation would require very long simulations and a careful consideration of the possibility of emerging concepts such as quasi-criticality.51
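For readers who wish to reproduce this style of analysis, the sketch below extracts inter-event intervals from a list of event times, bins them logarithmically (analogous to the coloured points in Fig. 6a–c) and computes a normalised autocorrelation of an event-rate time series. The function names, bin count and lag range are illustrative choices, not the analysis pipeline actually used for Fig. 6.

```python
import numpy as np

def inter_event_intervals(event_times):
    """IEIs: differences between consecutive event (spike) times."""
    t = np.sort(np.asarray(event_times, dtype=float))
    return np.diff(t)

def iei_pdf_logbins(ieis, n_bins=30):
    """Probability density of the IEIs using logarithmically spaced bins."""
    ieis = ieis[ieis > 0]
    edges = np.logspace(np.log10(ieis.min()), np.log10(ieis.max()), n_bins + 1)
    pdf, edges = np.histogram(ieis, bins=edges, density=True)
    centres = np.sqrt(edges[:-1] * edges[1:])          # geometric bin centres
    return centres, pdf

def autocorrelation(rate, max_lag=1000):
    """Normalised autocorrelation of a (mean-subtracted) event-rate time series."""
    x = np.asarray(rate, dtype=float) - np.mean(rate)
    return np.array([1.0] + [np.corrcoef(x[:-k], x[k:])[0, 1] for k in range(1, max_lag)])
```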

2.6 Potentiation and de-potentiation

Persistent changes in synaptic strength that result from stimuli (or absence thereof) are called potentiation and de-potentiation, and are thought to help facilitate the formation of memory in the brain.35,52 The insertion of memristors into PNNs is intended to provide synaptic plasticity in the form of continuous changes in the conductance of the connections between groups of nanoparticles.

The relatively small sizes of the networks (200 × 200) discussed in the preceding sections lead to strong stochastic effects (since in some cases there are just a few memristors on the dominant current paths between the electrodes). Hence in this section we focus on larger networks (i.e. 800 × 800) where stochastic effects are less dominant. Results for 200 × 200 networks are provided in Sections S4.2 and S4.3 (ESI) for comparison. The Type A parameters were tuned for the larger network size in order to facilitate comparison with smaller networks (see Section 5.2 for detail).

Fig. 7 shows the response of an A:C = 75:25 network to a sequence of seven voltage pulses. Fig. 7a shows the total current (blue curve) as well as the current measured at each electrode (other colours): clearly, with each subsequent pulse, the currents increase and there is more spiking activity (see Fig. 7b). These effects are demonstrated even more clearly in Fig. 7c by the moving averages of the conductance (〈G〉, purple) and event rate (〈ER〉, blue). The increasing network activity in response to a series of voltage pulses is an example of potentiation, which indicates that the Type C memristors perform a synaptic role within the network and allow the network to ‘learn’.


Fig. 7 Demonstration of potentiation effects for an A:C = 75:25 network. (a) The total output current (blue) and output currents from individual groups (other colours) as a function of time. The applied voltage Vapp (Vmin = 1.25 V, Vmax = 6 V, pulse length = pulse spacing = 250 timesteps) is plotted in grey in all panels. (b) The corresponding changes in conductance ΔG, showing that the amplitude and frequency of the spikes increase for successive input pulses. (c) Moving averages (over 100 timesteps) of the network conductance (〈G〉, purple) and network event rate (〈ER〉, blue).

The volatility of the memristors (see Section 2.1) also allows the networks to ‘forget’. Fig. 7c shows a decay in both 〈G〉 and 〈ER〉 in the absence of stimulus, i.e. de-potentiation. Fig. S18 (ESI) shows that if the spacing between stimuli is larger, the memristors de-potentiate further so that the response to subsequent pulses is lowered (smaller increases in 〈G〉 and 〈ER〉). Similar potentiation and de-potentiation are also observed in PNNs with Type B memristors, as shown in Fig. S15 and S16 (ESI). The details of the potentiation and de-potentiation processes depend on specific attributes of the networks and memristor models, but Section S4 (ESI) shows that qualitatively similar behaviour is observed for a range of parameters.

A range of simulation parameters were tuned to provide the memory observed in Fig. 7, including the size of the PNN, the number and configuration of the input and output electrodes, the ratio of Type A gaps to Type C memristors, and the distribution of the memristors in the network. The characteristics of the applied voltage pulses – including their number, size (i.e. Vmin and Vmax), length, and separation – also play a role, e.g. different learning responses are induced by small, rapid voltage pulses compared to large, infrequent pulses. This tunability provides the flexibility to optimise memory effects for different computational tasks.
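As a sketch of the stimulation protocol and the smoothed observables in Fig. 7c, the snippet below builds a rectangular pulse train with the quoted pulse parameters and applies a 100-timestep moving average to recorded conductance and event-rate traces. The array names G and events stand in for simulation outputs and are placeholders.

```python
import numpy as np

def pulse_train(n_pulses, v_min, v_max, pulse_len, spacing):
    """Rectangular voltage pulses: v_max during each pulse, v_min in between (as in Fig. 7)."""
    cycle = np.concatenate([np.full(pulse_len, v_max), np.full(spacing, v_min)])
    return np.tile(cycle, n_pulses)

def moving_average(x, window=100):
    """Boxcar moving average over `window` timesteps, as used for <G> and <ER> in Fig. 7c."""
    return np.convolve(x, np.ones(window) / window, mode="same")

V_app = pulse_train(n_pulses=7, v_min=1.25, v_max=6.0, pulse_len=250, spacing=250)
# Given recorded traces from a simulation (placeholders):
#   G_avg  = moving_average(G, 100)        # <G>,  smoothed network conductance
#   ER_avg = moving_average(events, 100)   # <ER>, smoothed event rate
```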

3 Discussion

3.1 Memristor ratio

We have focused here on PNNs in which 25% of the Type A gaps have been replaced with memristors. This allows us to focus on the correlated spiking dynamics in the observed output signals (memristive hysteresis effects are present simultaneously, see e.g. Fig. 5f). We have studied networks with higher proportions of memristors and find similar hysteresis effects at low voltages (Fig. S2 (ESI)); however, as the number of spiking gaps decreases, the number of observed spikes decreases. Thus the complexity of the spiking patterns is also reduced, and in the limit that all of the gaps are memristive (i.e. all the Type A gaps are replaced) there is no spiking. In this limit the continuous changes in the outputs from the networks (see Fig. 3) are ideal for implementation of brain-inspired algorithms based on reservoir computing (RC).25,26 Simulations of RC using networks of Type B and Type C memristors will be presented elsewhere.

In the biological brain, the ratio of synapses to neurons is on the order of 1000,53 whereas in the simulated PNNs the ratio of Type B/C memristors (synapses) to Type A gaps (neurons) is much lower. The synapse-to-neuron ratio that can be achieved in PNNs is limited by the mean degree (i.e. the mean number of connections per node,54 which is ∼10). This is not necessarily an issue for the use of PNNs for computation – in fact, it is worth emphasising that the aim in the field of neuromorphic computing is to investigate what can be achieved with a limited number of brain-inspired elements rather than attempting to faithfully replicate all the details of the biological brain.

3.2 Experimental realisation

As mentioned in the introduction we believe that there are several possible routes to the addition of memristive synapses to spiking PNNs, including deposition of inorganic coating materials,40 controlled sulphidisation of Ag or Cu PNNs,6,41 and the introduction of novel memristive molecules.42–44 We believe that the latter method is particularly promising, especially as a range of new molecular synapses are becoming available. Simple drop-casting methods should be sufficient to achieve mixed memristive/spiking networks, as it is usually straightforward to dilute the deposited solution in order to achieve a sparse random placement of molecules. The choice of molecule could also allow for precise selection of memristive properties, analogous to tuning the memristor parameters in the simulations described here. Of course it will be essential to build PNNs from non-reactive materials that can survive exposure to air, moisture and solvents.

4 Conclusion

We have shown that the addition of synapse-like memristive elements to the neuron-like spiking gaps that are inherently present in percolating networks of nanoparticles leads to significant changes in network dynamics. We highlight that the strengthening of Type B/C synaptic connections leads to increases in the connectivity between Type A neurons and consequently to increased neuronal spiking across the network. This results in biologically-realistic potentiation and de-potentiation of the networks in response to repeated input pulses, i.e. in learning and forgetting, which could be valuable for a variety of styles of neuromorphic computation.30,37,47–49

5 Methods

This section describes the basic features of the simulations. Note that a detailed comparison of the Type B and Type C memristor models is presented in Section S1 (ESI).

5.1 Simulations

Results from experimental PNNs are well-described by continuum percolation models (see ref. 20 and references therein). Deposited particles are represented by disks that overlap after deposition, and overlapping disks form conducting nanoparticle groups. Deposition is ceased just before the 2D continuum percolation threshold (surface coverage p < pc ∼ 0.68)23 is reached so that no single group fully spans the network, and thus the network conductance results from tunnelling through gaps between the groups. We focus on networks with a surface coverage of p = 0.65 since this value is close enough to pc to be near criticality but far enough away to ensure that unrealistic configurations (such as those dominated by very large groups) are avoided.20 Similar results are obtained for 0.64 < p < pc.21 The conductance of each gap is determined by its length Li by
 
Gi = αe−βLi, (1)
where α = 1 Ω−1 and β = 200 pd−1 are constants.34 Note that the parameter β is in units of inverse particle diameters (pd−1), and so the value of β = 200 pd−1 corresponds well with typical literature values34,55 of ∼10 nm−1 given experimental particle diameters of ∼20 nm. Lengths in the simulations are measured in units of the particle diameter, which is normalised to 1.

When an external voltage stimulus is applied to the nanoparticle groups chosen as the input electrodes, current flows through the network tunnel gaps as determined by Kirchhoff's laws.20 The current that flows through the groups designated as the output electrodes is recorded.
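The following toy sketch shows how eqn (1) and Kirchhoff's laws combine to give the network response: gap conductances are computed from gap lengths, a weighted graph Laplacian is assembled, the electrode potentials are imposed, and the node potentials and gap currents are solved for. This is a minimal dense-matrix illustration only; the network construction, electrode choice and (sparse) solvers used in the actual simulations are not reproduced here.

```python
import numpy as np

def gap_conductance(L, alpha=1.0, beta=200.0):
    """Eqn (1): tunnelling conductance of a gap of length L (in particle diameters)."""
    return alpha * np.exp(-beta * L)

def solve_network(n_nodes, edges, in_nodes, out_nodes, v_app):
    """Toy nodal analysis: solve Kirchhoff's laws for the potentials of the nanoparticle groups.

    edges: list of (i, j, G) tuples; nodes in in_nodes are held at v_app, out_nodes at 0 V.
    Dense matrices are used here for brevity; the real simulations use large sparse systems.
    """
    A = np.zeros((n_nodes, n_nodes))
    for i, j, G in edges:                               # weighted graph Laplacian
        A[i, i] += G; A[j, j] += G
        A[i, j] -= G; A[j, i] -= G
    b = np.zeros(n_nodes)
    for n, v in [(n, v_app) for n in in_nodes] + [(n, 0.0) for n in out_nodes]:
        A[n, :] = 0.0; A[n, n] = 1.0; b[n] = v          # impose the electrode potentials
    V = np.linalg.solve(A, b)
    currents = [(i, j, G * (V[i] - V[j])) for i, j, G in edges]
    return V, currents

# Toy example: three groups in series between the input and output electrodes.
edges = [(0, 1, gap_conductance(0.010)), (1, 2, gap_conductance(0.015))]
V, I = solve_network(3, edges, in_nodes=[0], out_nodes=[2], v_app=2.0)
```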

5.2 Type A model

The formation and destruction of the atomic-scale filaments in individual Type A gaps is governed by a deterministic electric-field driven model.20 The length of a growing filament di in gap i changes as a result of the local electric field Ei according to
 
[eqn (2): image not reproduced]
and the width of each fully-formed filament wi (which is initially w0) thins due to its current Ii according to
 
[eqn (3): image not reproduced]
where ET and IT are the electric field and current thresholds required for filament formation and destruction, respectively, and rd and rw are the respective rates of formation and destruction. In the simulations ET = 10 V pd−1 and IT = 0.01 A, which are consistent with experimental estimates.11 The conductance of a Type A gap depends on whether a filament has formed: if no filament is present it is calculated from the gap length Li using eqn (1), and once a filament has formed it is set to Gi = 10 Ω−1. The conductance parameters Gon = 10 Ω−1 and α = 1 Ω−1 and the constants ET, IT, rd, and rw could be scaled so that the magnitudes of the voltages, electric fields, and currents match experimental values more closely, but this does not change the overall results, so we instead retain values that are consistent with previous work.20,22

rd and rw were tuned slightly for the 800 × 800 networks, which were used to observe the synaptic memory effects (see Section 2.6 and Section S4 (ESI)). This tuning increases the rate of Type A spiking so that the spiking event rate matches that of the 200 × 200 networks.
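To make the switching logic concrete, the sketch below implements a single Type A gap with assumed threshold-linear growth and thinning laws; the true functional forms are those of eqn (2) and (3) in ref. 20 (not reproduced above), and the rate values used here are illustrative placeholders rather than the simulation values.

```python
import math

# Minimal sketch of one Type A gap. The threshold-linear growth and thinning laws below are
# ASSUMED illustrative forms; the actual laws are eqn (2) and (3) of ref. 20 (not reproduced).
ALPHA, BETA = 1.0, 200.0      # eqn (1) parameters (Ohm^-1, pd^-1)
E_T, I_T = 10.0, 0.01         # field and current thresholds (V pd^-1, A)
G_ON = 10.0                   # conductance of a gap bridged by a filament (Ohm^-1)
R_D, R_W = 0.1, 0.1           # illustrative formation/destruction rates (placeholder values)

class TypeAGap:
    def __init__(self, length, w0=1.0):
        self.L = length       # gap length (particle diameters)
        self.d = 0.0          # length of the growing filament
        self.w = w0           # width of a fully formed filament
        self.bridged = False

    def conductance(self):
        return G_ON if self.bridged else ALPHA * math.exp(-BETA * self.L)   # eqn (1)

    def step(self, E, I, dt=1.0):
        """Advance one timestep given the local field E and current I."""
        if not self.bridged:
            if E > E_T:
                self.d += R_D * (E - E_T) * dt        # assumed form of the growth law, eqn (2)
            if self.d >= self.L:                      # filament bridges the gap: conductance jumps up
                self.bridged = True
        else:
            if I > I_T:
                self.w -= R_W * (I - I_T) * dt        # assumed form of the thinning law, eqn (3)
            if self.w <= 0.0:                         # electromigration breaks the filament: spike down
                self.bridged, self.d, self.w = False, 0.0, 1.0
```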

5.3 Type B model

The Type B model25 describes the height of the partial hillocks that form in the tunnel gaps under an electric field, and is governed by the equation
 
[eqn (4): image not reproduced]
where z is the hillock height, D is the total gap length, V is the gap potential, T is the characteristic time scale, and μ and κ are scaling parameters. This model has been shown to be equivalent to existing models of voltage-driven memristance,24 and PNNs with tunnel gaps governed by this model in a low-voltage regime have been shown to perform well in RC tasks.24,25 The characteristic time T = 20 s was chosen to tune the time scale of the Type B memristors to be comparable with that of the Type C memristors.

The conductance of memristor i is calculated by

 
Gi = αe−β(Di−zi), (5)
and so the current response to the gap voltage remains nonlinear, regardless of the linearity of the equation governing the hillock height z. See Table 1 for the parameters of the Type B model and Section S1.4 (ESI) for detail on scaling the conductance parameter α.

Table 1 Memristor parameters used for all simulations. The Type B parameter values are the same as those used in previous work,24,25 except that α was increased to 10 Ω−1 in order to increase the conductance of the Type B memristors and hence their influence on the networks. The Type C parameter values were chosen to yield wide yet realistic hysteresis (see Fig. 3). See Section S1.4 (ESI) for detail on the effect of varying α and Gmax. Note that κ is dimensionless.
Type B: μ = 3.46 × 10−5 nm−2 V−1, κ = 3.8 × 10−2, α = 10 Ω−1, β = 200 pd−1, T = 20 s.
Type C: κP0 = 5 × 10−4 s−1, κD0 = 5 × 10−2 s−1, ηP = 10 V−1, ηD = 10 V−1, Gmin = 0 Ω−1, Gmax = 1 Ω−1.


In order to prevent numerical instabilities and unwanted spiking, the heights z were artificially limited to grow no larger than half of the gap lengths such that

 
z ≤ D/2, (6)
at all points of the simulation.
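A minimal sketch of the Type B conductance calculation and the height clipping follows. The hillock growth increment from eqn (4) is not reproduced, so it enters only as an externally supplied value dz, and the helper names are illustrative rather than taken from the simulation code.

```python
import math

ALPHA_B, BETA = 10.0, 200.0   # Table 1 (Type B): Ohm^-1, pd^-1

def type_b_conductance(D, z):
    """Eqn (5): conductance of a Type B memristor with gap length D and hillock height z
    (both in particle diameters)."""
    return ALPHA_B * math.exp(-BETA * (D - z))

def update_height(z, dz, D):
    """Apply a height increment dz (obtained from eqn (4), not reproduced here) and clip the
    result so that z never exceeds half the gap length, as required by eqn (6)."""
    return min(max(z + dz, 0.0), 0.5 * D)
```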

5.4 Type C model

The Type C model8,39 is based on the rate-balance equation
 
dg/dt = κP(V)(1 − g) − κD(V)g, (7)
where 0 ≤ g ≤ 1 is the normalised conductance of the memristor. The parameters κP(V) and κD(V) are the potentiation and depression rate coefficients, respectively, and are functions of the local voltage such that
 
κP = κP0 exp(ηPV) and κD = κD0 exp(−ηDV), (8)
where κP0,D0 and ηP,D are sub-parameters. For a constant applied voltage, the model8 yields the exponential behaviour seen in electrochemical memristors:50
 
g(t) = κP/(κP + κD) + [g(0) − κP/(κP + κD)]exp[−(κP + κD)t] (9)
for t > 0. The memristor conductance is then given by
 
G(t) = Gmin(1 − g(t)) + Gmaxg(t), (10)
where Gmin and Gmax are the minimum and maximum conductances of the memristor, respectively. The parameters of this model are listed in Table 1.
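For a constant local voltage the rate-balance equation relaxes exponentially towards κP/(κP + κD), which provides a convenient check on any discrete-time implementation. The sketch below compares that closed-form relaxation (eqn (9)) with a simple explicit Euler integration of eqn (7) using the Table 1 parameters; the Euler scheme, the unit timestep and the illustrative voltage are assumptions, not the integration scheme of the actual simulations.

```python
import numpy as np

KP0, KD0, ETA_P, ETA_D = 5e-4, 5e-2, 10.0, 10.0   # Table 1 (Type C): s^-1, s^-1, V^-1, V^-1
G_MIN, G_MAX = 0.0, 1.0                           # Ohm^-1

def rates(v):
    """Potentiation and depression rate coefficients, eqn (8)."""
    return KP0 * np.exp(ETA_P * v), KD0 * np.exp(-ETA_D * v)

def g_exact(t, v, g0=0.0):
    """Closed-form relaxation of eqn (7) for a constant voltage applied at t = 0 (eqn (9)):
    g tends to kP/(kP + kD) with time constant 1/(kP + kD)."""
    kp, kd = rates(v)
    g_inf = kp / (kp + kd)
    return g_inf + (g0 - g_inf) * np.exp(-(kp + kd) * t)

def g_euler(n_steps, v, g0=0.0, dt=1.0):
    """Explicit Euler integration of eqn (7) (assumed scheme, unit timestep)."""
    kp, kd = rates(v)
    g = g0
    for _ in range(n_steps):
        g += dt * (kp * (1.0 - g) - kd * g)
    return g

v = 0.5                                            # illustrative constant local voltage (V)
print(g_exact(500.0, v), g_euler(500, v))          # the two values agree closely
g = g_exact(500.0, v)
G = G_MIN * (1.0 - g) + G_MAX * g                  # memristor conductance, eqn (10)
```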

An advantage of the Type C model is its large number of parameters, which allows very fine control over the shape of the memristor hysteresis. Fig. S1 (ESI) shows a range of examples of Type C hysteresis afforded by this model.

Author contributions

BM and ZH performed the simulations. BM, ZH and SAB performed data analysis. BM, ZH and SJS contributed to model development. ST, JG, FH and SAB conceptualised the project. BM and SAB wrote the first draft. All authors contributed to discussions and to writing the manuscript.

Conflicts of interest

There are no conflicts to declare.

Data availability

Data for this article are available via open science framework at https://doi.org/10.17605/OSF.IO/49DKB.

Acknowledgements

We thank Carlo Ricciardi and Gianluca Milano for useful discussions and for introducing us to the Type C memristor model. We also acknowledge useful discussions with Ilia Valov, Jamie Steel, Joshua Mallinson, Hamish Mountford, Philip Bones and Matthew Arnold. This project was financially supported by the Dumont D’Urville NZ-France S&T Support Programme, the MacDiarmid Institute for Advanced Materials and Nanotechnology and the Marsden Fund. Financial support from Agence Nationale de la Recherche (DINAPO grant ANR-23-CE09-0006-02, and MOMA grant ANR-23-ERCC-0008-01) is acknowledged, as well as from the Occitanie region and from INSA Toulouse. This study has been partially supported through the EUR grant NanoX no. ANR-17-EURE-0009 in the framework of the Programme des Investissements d’Avenir.

References

1. D. Marković, A. Mizrahi, D. Querlioz and J. Grollier, Nat. Rev. Phys., 2020, 2, 499–510.
2. A. Mehonic and A. J. Kenyon, Nature, 2022, 604, 255–260.
3. V. Balasubramanian, Proc. Natl. Acad. Sci. U. S. A., 2021, 118, e2107022118.
4. Z. Kuncic and T. Nakayama, Adv. Phys.:X, 2021, 6, 1894234.
5. A. Vahl, G. Milano, Z. Kuncic, S. A. Brown and P. Milani, J. Phys. D: Appl. Phys., 2024, 57, 503001.
6. A. Z. Stieg, A. V. Avizienis, H. O. Sillin, C. Martin-Olmos, M. Aono and J. K. Gimzewski, Adv. Mater., 2012, 24, 286–293.
7. J. Hochstetter, R. Zhu, A. Loeffler, A. Diaz-Alvarez, T. Nakayama and Z. Kuncic, Nat. Commun., 2021, 12, 4008.
8. G. Milano, G. Pedretti, K. Montano, S. Ricci, S. Hashemkhani, L. Boarino, D. Ielmini and C. Ricciardi, Nat. Mater., 2022, 21, 195–202.
9. H. Tanaka, S. Azhari, Y. Usami, D. Banerjee, T. Kotooka, O. Srikimkaew, T. T. Dang, S. Murazoe, R. Oyabu, K. Kimizuka and M. Hakoshima, Neuromorphic Comput. Eng., 2022, 2, 022002.
10. F. Caravelli, G. Milano, C. Ricciardi and Z. Kuncic, Ann. Phys., 2023, 535, 2300090.
11. A. Sattar, S. Fostner and S. A. Brown, Phys. Rev. Lett., 2013, 111, 136808.
12. S. K. Bose, J. B. Mallinson, R. M. Gazoni and S. A. Brown, IEEE Trans. Electron Devices, 2017, 64, 5194–5201.
13. C. Minnai, A. Bellacicca, S. A. Brown and P. Milani, Sci. Rep., 2017, 7, 7955.
14. N. Carstens, T. Strunskus, F. Faupel, A. Hassanien and A. Vahl, Part. Part. Syst. Charact., 2023, 40, 2200131.
15. T. S. Rao, I. Mondal, B. Bannur and G. U. Kulkarni, Discover Nano, 2023, 18, 124.
16. O. Gronenberg, B. Adejube, T. Hemke, J. Drewes, O. H. Asnaz, F. Ziegler, N. Carstens, T. Strunskus, U. Schürmann, J. Benedikt, T. Mussenbrock, F. Faupel, A. Vahl and L. Kienle, Adv. Funct. Mater., 2024, 34, 2312989.
17. A. J. T. V. D. Ree, M. Ahmadi, G. H. T. Brink, B. J. Kooi and G. Palasantzas, Phys. Rev. Mater., 2025, 9, 036001.
18. M. Davies, N. Srinivasa, T.-H. Lin, G. Chinya, Y. Cao, H. Choday, G. Dimou, P. Joshi, N. Imam, S. Jain, Y. Liao, C.-K. Lin, A. Lines, R. Liu, D. Mathaikutty, S. Mccoy, A. Paul, J. Tse, G. Venkataramanan, Y.-H. Weng, A. Wild, Y. Yang and H. Wang, IEEE Micro, 2018, 38, 82–99.
19. J. B. Mallinson, S. Shirai, S. K. Acharya, S. K. Bose, E. Galli and S. A. Brown, Sci. Adv., 2019, 5, eaaw8438.
20. M. D. Pike, S. K. Bose, J. B. Mallinson, S. K. Acharya, S. Shirai, E. Galli, S. J. Weddell, P. J. Bones, M. D. Arnold and S. A. Brown, Nano Lett., 2020, 20, 3935–3942.
21. S. Shirai, S. K. Acharya, S. K. Bose, J. B. Mallinson, E. Galli, M. D. Pike, M. D. Arnold and S. A. Brown, Network Neurosci., 2020, 4, 432–447.
22. S. K. Acharya, E. Galli, J. B. Mallinson, S. K. Bose, F. Wagner, Z. E. Heywood, P. J. Bones, M. D. Arnold and S. A. Brown, ACS Appl. Mater. Interfaces, 2021, 13, 52861–52870.
23. D. Stauffer and A. Aharony, Introduction to Percolation Theory, Taylor and Francis, 2nd edn, 1994.
24. R. K. Daniels, J. B. Mallinson, Z. E. Heywood, P. J. Bones, M. D. Arnold and S. A. Brown, Neural Networks, 2022, 154, 122–130.
25. J. B. Mallinson, Z. E. Heywood, R. K. Daniels, M. D. Arnold, P. J. Bones and S. A. Brown, Nanoscale, 2023, 15, 9663–9674.
26. J. B. Mallinson, J. K. Steel, Z. E. Heywood, S. J. Studholme, P. J. Bones and S. A. Brown, Adv. Mater., 2024, 36, 2402319.
27. Z. E. Heywood, J. B. Mallinson, P. J. Bones and S. A. Brown, Neuromorphic Comput. Eng., 2024, 4, 034011.
28. Z. E. Heywood, B. L. Monaghan, J. B. Mallinson and S. A. Brown, 2025, submitted.
29. S. J. Studholme, Z. E. Heywood, J. B. Mallinson, J. K. Steel, P. J. Bones, M. D. Arnold and S. A. Brown, Nano Lett., 2023, 23, 10594–10599.
30. S. J. Studholme, J. B. Mallinson, J. K. Steel and S. A. Brown, Neuromorphic Comput. Eng., 2025, 5, 014017.
31. W. L. Shew and D. Plenz, Neuroscientist, 2013, 19, 88–100.
32. J. O’Byrne and K. Jerbi, Trends Neurosci., 2022, 45, 820–837.
33. S. J. Studholme and S. A. Brown, ACS Nano, 2024, 18, 28060–28069.
34. S. Fostner, R. Brown, J. Carr and S. A. Brown, Phys. Rev. B: Condens. Matter Mater. Phys., 2014, 89, 075402.
35. T. Takeuchi, A. J. Duszkiewicz and R. G. Morris, Philos. Trans. R. Soc., B, 2014, 369, 20130288.
36. H. G. Manning, F. Niosi, C. G. da Rocha, A. T. Bellew, C. O'Callaghan, S. Biswas, P. F. Flowers, B. J. Wiley, J. D. Holmes, M. S. Ferreira and J. J. Boland, Nat. Commun., 2018, 9, 3219.
37. W. Wang, G. Pedretti, V. Milo, R. Carboni, A. Calderoni, N. Ramaswamy, A. S. Spinelli and D. Ielmini, Sci. Adv., 2018, 4, eaat4752.
38. G. Milano, K. Montano and C. Ricciardi, J. Phys. D: Appl. Phys., 2023, 56, 084005.
39. E. Miranda, G. Milano and C. Ricciardi, IEEE Trans. Nanotechnol., 2020, 19, 609–612.
40. N. Carstens, B. Adejube, T. Strunskus, F. Faupel, S. Brown and A. Vahl, Nanoscale Adv., 2022, 4, 3149–3160.
41. Z. Xu, Y. Bando, W. Wang, X. Bai and D. Golberg, ACS Nano, 2010, 4, 2515–2522.
42. Y. Wang, Q. Zhang, H. P. Astier, C. Nickle, S. Soni, F. A. Alami, A. Borrini, Z. Zhang, C. Honnigfort, B. Braunschweig, A. Leoncini, D. C. Qi, Y. Han, E. del Barco, D. Thompson and C. A. Nijhuis, Nat. Mater., 2022, 21, 1403–1411.
43. Y. Zhang, L. Liu, B. Tu, B. Cui, J. Guo, X. Zhao, J. Wang and Y. Yan, Nat. Commun., 2023, 14, 247.
44. R. S. Williams, S. Goswami and S. Goswami, Nat. Mater., 2024, 23, 1475–1485.
45. C. Huez, D. Guérin, F. Volatron, A. Proust and D. Vuillaume, Nanoscale, 2024, 16, 21571–21581.
46. R. L. Davis and Y. Zhong, Neuron, 2017, 95, 490–503.
47. S. Wu, K. Y. M. Wong and M. Tsodyks, Front. Comput. Neurosci., 2013, 7, 188.
48. S. Lim, J. L. McKee, L. Woloszyn, Y. Amit, D. J. Freedman, D. L. Sheinberg and N. Brunel, Nat. Neurosci., 2015, 18, 1804–1810.
49. B. Voloh, M. Oemisch and T. Womelsdorf, Nat. Commun., 2020, 11, 4669.
50. I. Valov, R. Waser, J. R. Jameson and M. N. Kozicki, Nanotechnology, 2011, 22, 254003.
51. L. J. Fosque, R. V. Williams-García, J. M. Beggs and G. Ortiz, Phys. Rev. Lett., 2021, 126, 098101.
52. A. Citri and R. C. Malenka, Neuropsychopharmacology, 2008, 33, 18–41.
53. S. Budday, P. Steinmann and E. Kuhl, Front. Cell. Neurosci., 2015, 9, 257.
54. R. K. Daniels, M. D. Arnold, Z. E. Heywood, J. B. Mallinson, P. J. Bones and S. A. Brown, Phys. Rev. Appl., 2023, 20, 034021.
55. H. Moreira, J. Grisolia, N. M. Sangeetha, N. Decorde, C. Farcau, B. Viallet, K. Chen, G. Viau and L. Ressier, Nanotechnology, 2013, 24, 095701.

Footnote

Electronic supplementary information (ESI) available. See DOI: https://doi.org/10.1039/d5nh00407a

This journal is © The Royal Society of Chemistry 2025