Harnessing a silicon carbide nanowire photoelectric synaptic device for novel visual adaptation spiking neural networks

Zhe Feng a, Shuai Yuan b, Jianxun Zou a, Zuheng Wu *a, Xing Li a, Wenbin Guo a, Su Tan a, Haochen Wang a, Yang Hao a, Hao Ruan a, Zhihao Lin a, Zuyu Xu a, Yunlai Zhu a, Guodong Wei *b and Yuehua Dai *a
aSchool of Integrated Circuits, Anhui University, Hefei, Anhui 230601, China. E-mail: wuzuheng@ahu.edu.cn; daiyuehua2013@163.com
bXi’an Key Laboratory of Compound Semiconductor Materials and Devices, School of Physics & Information Science, Shaanxi University of Science and Technology, Xi’an 710021, Shaanxi, China. E-mail: wgd588@163.com

Received 23rd May 2024, Accepted 5th August 2024

First published on 6th August 2024


Abstract

Visual adaptation is essential for optimizing the image quality and sensitivity of artificial vision systems under real-world lighting conditions. However, traditional artificial vision systems require additional modules to implement visual adaptation, which introduces time delays and can increase power consumption. Here, an ITO/PMMA/SiC-NWs/ITO photoelectric synaptic device is developed for compact artificial vision systems with a visual adaptation function. Theoretical calculations and experimental results demonstrate that the heating effect induced by increasing light intensity endows the photoelectric synaptic device with the visual adaptation function. Additionally, a visual adaptation artificial neuron (VAAN) circuit was implemented by incorporating the photoelectric synaptic device into a LIF neuron circuit. The output frequency of this VAAN circuit initially increases and then decreases with gradual light intensification, reflecting the dynamic process of visual adaptation. Furthermore, a visual adaptation spiking neural network (VASNN) was constructed to evaluate the photoelectric synaptic device-based visual system on perception tasks. The results indicate that, in the task of traffic sign detection under extreme weather conditions, an accuracy of 97% was achieved (approximately 12% higher than that without a visual adaptation function). Our research provides a biologically plausible hardware solution for visual adaptation in neuromorphic computing.



New concepts

Here, an ITO/PMMA/SiC-NWs/ITO photoelectric synaptic device is developed for compact artificial vision systems with a visual adaptation function. Theoretical calculations and experimental results demonstrate that the heating effect induced by increasing light intensity endows the photoelectric synaptic device with the visual adaptation function. Additionally, a visual adaptation artificial neuron (VAAN) circuit is implemented by incorporating the photoelectric synaptic device into a LIF neuron circuit; its output frequency increases and then decreases with the gradual intensification of light, reflecting the dynamic process of visual adaptation. Furthermore, a visual adaptation spiking neural network (VASNN) was constructed to evaluate the photoelectric synaptic device-based visual system on perception tasks. The results indicate that, in the task of traffic sign detection under extreme weather conditions, an accuracy of 97% was achieved (approximately 12% higher than that without a visual adaptation function). Our research provides a biologically plausible hardware solution for visual adaptation in neuromorphic computing.

Introduction

The retina plays a key role in the visual perception system, with its function extending beyond merely decoding visual stimuli.1,2 In fact, as part of the retina, various visual neurons, through their complex and layered structure, not only process basic shapes but also interpret complex scenes, enabling the perception of visual information.3,4 Among these functions, visual adaptation is a core aspect of the perception process, showcasing the ability of visual neurons to adjust their sensitivity to light intensity.5 When the light intensity changes, the dynamic adjustment of the retinal output threshold triggers biochemical reactions in the photoreceptor cells, thereby altering their response to light.6,7 As light intensity gradually increases, these photoreceptor cells are progressively activated. Notably, once the light intensity reaches a certain threshold, the photoreceptor cells become saturated. Subsequently, they begin to adapt to higher brightness levels, gradually reducing their response. The adaptation process includes adjustments in the photoreceptor cells' response gain and dynamic changes in the response range, which are crucial for maintaining stable visual perception under varying environmental lighting conditions.8,9

In recent years, significant advancements have been made in integrating visual adaptation mechanisms into artificial vision systems.10,11 Particularly, with the advancement of neuromorphic engineering, the development of silicon retinas has been accelerated.12,13 These retinas not only successfully mimic the adaptive characteristics of biological retinas but also have achieved breakthroughs in the field of computer vision. Moreover, algorithms inspired by biological visual adaptation have been innovatively applied, greatly enhancing the performance of computer vision systems in dynamic environments.14,15 These artificial visual adaptation mechanisms significantly enhance the robustness and flexibility of visual systems, playing a crucial role in scenarios with frequent changes in lighting conditions, such as outdoor surveillance, robotic navigation, and image enhancement applications.16,17

In the development of neuromorphic systems, traditional silicon-based devices have always played a foundational role. However, they exhibit some limitations when combined with preprocessing algorithms. Under the von Neumann architecture, visual sensors are physically separated from the memory and processing units, so the captured visual information must be transferred to subsequent information processing units.18,19 This mode of information transmission and exchange increases time delays and energy consumption. Directly processing optical information within the visual sensor can mitigate the impacts caused by this physical separation. Utilizing the special physical properties of visual sensors for optical information preprocessing, such as enhancing contrast and reducing noise, has been proven to improve the accuracy and efficiency of image recognition.20–22 Furthermore, some studies have accomplished image recognition tasks directly within the sensor by emulating the synaptic weight characteristics of photodetectors.23,24 However, the potential of combining visual adaptation properties with sensor-computing integrated systems has rarely been explored in current research. Therefore, further investigation into the compatibility between visual adaptation and emerging devices to promote their practical application is particularly important.

Here, an ITO/PMMA/SiC-NWs/ITO photoelectric synaptic device is developed for compact artificial vision systems with a visual adaptation function. Theoretical calculations and experimental results demonstrate that the heating effect induced by increasing light intensity endows the photoelectric synaptic device with the visual adaptation function. Additionally, a visual adaptation artificial neuron (VAAN) circuit is implemented by incorporating the photoelectric synaptic device into a leaky integrate-and-fire (LIF) neuron circuit; its output frequency increases and then decreases with the gradual intensification of light, reflecting the dynamic process of visual adaptation. Furthermore, a visual adaptation spiking neural network (VASNN) was constructed to evaluate the photoelectric synaptic device-based visual system on perception tasks. The results indicate that, in the task of traffic sign detection under extreme weather conditions, an accuracy of 97% was achieved (approximately 12% higher than that without a visual adaptation function). Our research provides a biologically plausible hardware solution for visual adaptation in neuromorphic computing.

Results and discussion

As shown in Fig. 1a, visual adaptation is a biological process that enables the visual system to function effectively under different lighting conditions. This process involves multiple levels of visual processing, from photoreceptors on the retina, such as rods and cones, to ganglion cells. Specifically, the retina adjusts its response to light intensity and contrast to avoid response saturation under bright light conditions. For example, under intense illumination, retinal ganglion cells experience a reduction in gain and undergo sustained hyperpolarization, which together modulate their responses to light stimuli. These adjustments are achieved through the input from bipolar cells, the process of pulse generation, and other synaptic and intrinsic mechanisms. This optimizes the processing of light signals by the retina, ensuring efficient encoding and transmission of information across various visual environments.5
Fig. 1 Schematic of the visual adaptation process and of artificial vision systems with a visual adaptation function. (a) The biological visual adaptation process begins with photoreceptors in the retina; multilayer visual processing regulates responses to light intensity, preventing response saturation under high-light conditions. (b) Schematic of a traditional artificial vision system with a visual adaptation function. Photosensitive CMOS circuits and analog-to-digital converters (ADCs) are used to capture and preprocess light signals. The signals are then processed through a visual adaptation module and a spiking neural network. (c) Schematic of an artificial vision system with a visual adaptation function based on the developed ITO/PMMA/SiC-NWs/ITO photoelectric synaptic device. The intrinsic visual adaptation properties of the photoelectric synaptic device enable the development of a more compact artificial vision system with a visual adaptation function.

In contrast, traditional artificial vision systems, as shown in Fig. 1b, use photosensitive CMOS circuits as pixel arrays to encode light intensity information into analog voltage signals. However, such a system requires an analog-to-digital converter (ADC) to transform the analog signals into digital ones, and the data undergo preprocessing through a visual adaptation module before learning and inference are performed by a spiking neural network for tasks such as classification or regression. In fact, the digital conversion and visual adaptation modules introduce additional response time and power consumption, which is unfavorable for developing compact artificial vision systems. To address this issue, a novel VAAN circuit (Fig. 1c) was designed, capable of generating spike information with visual adaptation characteristics based on changes in light intensity. Since the output response of the developed ITO/PMMA/SiC-NWs/ITO photoelectric synaptic device exhibits visual adaptation properties, the spike signals output by the neuron circuit are also encoded according to these characteristics. This design leverages the physical properties of the device to integrate photoreception and visual adaptation functionalities. Compared to traditional artificial vision systems, this approach avoids the complexity of analog-to-digital conversion and the need for an additional visual adaptation module, making the system more intuitive and efficient.

Fig. 2a presents the structure of the developed ITO/PMMA/SiC-NWs/ITO photoelectric synaptic device, consisting of an ∼60 nm PMMA layer and a silicon carbide nanowire film sandwiched between two ∼1 mm wide ITO electrodes (Fig. S1, ESI). SiC-NWs were chosen for their excellent photosensitive properties and structural features capable of emulating neural behavior.25 ITO was selected as the top electrode for its transparent transmission of light signals and support for unobstructed current flow. Furthermore, to isolate the electrodes from the nanowire layer and prevent short circuits while stabilizing the accumulation and release of charges in the SiC-NW film, PMMA, a transparent insulating material, was used as the middle layer. The SiC-NW film was prepared by electrophoretic deposition (EPD), a simple and fast method suitable for manufacturing large-area SiC-NW functional films (preparation method shown in Fig. S1 and S2, ESI). The SiC NWs were characterized by X-ray diffraction (XRD). The XRD results clearly show sharp and distinct peaks at 35.653°, 41.402°, 59.989°, 71.776°, and 75.507°, corresponding to the (111), (200), (220), (311), and (222) crystal planes of 3C–SiC, indicating that the SiC NWs have a high-quality crystal structure (Fig. 2b). Moreover, to characterize the SiC NWs in detail, Fig. 2c reveals a single SiC-NW with a diameter of about 200 nm. The high-resolution transmission electron microscopy (HRTEM) images (Fig. 2d and e) clearly show the defect-free atomic structure of the SiC-NW, with Si–C atoms arranged in an orderly manner, forming a distinct hexagonal pattern with a lattice spacing of 2.48 Å. This suggests that devices based on SiC-NWs may possess excellent electronic performance and low noise levels, with the potential to improve device stability.26


Fig. 2 Material property characterization. (a) Schematic structure of the photoelectric synaptic device, including a top ITO electrode, a PMMA layer, a silicon carbide nanowire film, and a bottom ITO electrode. (b) X-ray diffraction (XRD) results confirming the high-quality crystal structure of the SiC NWs. (c) SEM image revealing a single SiC-NW with a diameter of about 200 nm. (d) and (e) High-resolution transmission electron microscopy (HRTEM) images presenting the defect-free atomic structure of the SiC-NW and the hexagonal arrangement of its crystal planes.

Here, electrical stimulation (a direct current voltage of 2 V) and light pulses of varying power densities (ranging from 36 to 326 mW cm−2) were applied to the device, exploring its response characteristics under both continuous and discrete light pulse conditions (Fig. 3a and b). Under discrete light pulse stimulation, gradually increasing light pulses with 5-second intervals were used, while continuous light pulse stimulation was achieved through continuously increasing light pulses. As the light intensity increases from 36 mW cm−2 to 195 mW cm−2, the device conductance can be modulated, which resembles the plasticity of neural synapses.27 We observe that the long-term plasticity of our proposed photoelectric synaptic device originates from the persistent photoconductance (PPC) phenomenon. PPC is a phenomenon in which a material maintains higher conductivity even after the cessation of illumination. This effect is usually related to the slow recombination of photogenerated carriers or the presence of trap states.28 In our SiC nanowires, PPC likely originates from surface states or bulk defects that can trap photogenerated electrons, thus prolonging their lifetime.29 This mechanism enables our device to maintain a high-conductivity state for an extended period after light stimulation, thus mimicking the long-term plasticity of neural synapses.30 Interestingly, with a further increase in light intensity to 325 mW cm−2, under either continuous or discrete light pulse stimulation, the device's output response no longer increased monotonically but instead decreased slightly. As illustrated in Fig. 3c, our device exhibits a light response characteristic that shares similarities with Weber's law, demonstrating excellent adaptive capability. While not strictly adhering to Weber's law, our device shows decreasing response sensitivity (ΔI/ΔL) with increasing background light intensity, mimicking the adaptive behavior of biological visual systems. This feature enables effective signal detection over a wide range of light intensities, highlighting the device's potential in simulating visual adaptation processes.31 Furthermore, the device's synaptic characteristics were thoroughly investigated. Detailed experimental analysis found that the device could not only mimic typical synaptic behaviors such as long-term plasticity but also exhibited precisely controllable response characteristics, further confirming its potential in simulating biological neural systems (details on the synaptic characteristics can be found in Fig. S3 and S4, ESI).
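
To make the observed rise-then-fall trend concrete, the following minimal phenomenological sketch (Python) couples a slowly decaying persistent-photoconductance term with a light-induced heating term that suppresses the read-out current. The rate constants, the logistic suppression form, and the pulse duration are illustrative assumptions, not parameters fitted to the measured data.

```python
import numpy as np

# Minimal phenomenological sketch (not the authors' fitted model): the SiC-NW
# photoconductance g builds up under light and decays slowly afterwards
# (persistent photoconductance), while light-induced heating widens the PMMA
# barrier and suppresses the read-out current, reproducing the rise-then-fall
# trend of the peak response with power density. All parameters are illustrative.

DT = 0.01        # time step (s)
TAU_PPC = 20.0   # slow decay of the trapped-carrier conductance (s), assumed
TAU_T = 5.0      # thermal relaxation time of the stack (s), assumed
K_PHOTO = 1e-4   # conductance gain per unit power density, assumed
K_HEAT = 0.04    # heating rate per unit power density, assumed
T_CRIT = 40.0    # temperature rise at which barrier widening dominates, assumed
V_READ = 2.0     # d.c. read voltage (V), as used in the measurements

def peak_response(power_mw_cm2, pulse_s=10.0):
    """Peak output current (arbitrary units) during a single light pulse."""
    g, temp, peak = 0.0, 0.0, 0.0
    for _ in range(int(pulse_s / DT)):
        g += DT * (K_PHOTO * power_mw_cm2 - g / TAU_PPC)        # PPC build-up and slow decay
        temp += DT * (K_HEAT * power_mw_cm2 - temp / TAU_T)     # light-induced self-heating
        barrier = 1.0 / (1.0 + np.exp((temp - T_CRIT) / 5.0))   # thermally suppressed tunnelling
        peak = max(peak, V_READ * g * barrier)
    return peak

for p in (36, 100, 195, 260, 326):
    print(f"{p:3d} mW cm^-2 -> peak response ~ {peak_response(p):.3f} (arb. units)")
```

With these illustrative values, the peak response grows from 36 to 195 mW cm−2 and then falls at higher power densities, qualitatively matching the trend in Fig. 3c.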


image file: d4nh00230j-f3.tif
Fig. 3 Characteristics of the photoelectric synaptic device. (a) and (b) The device's light response under continuous and discrete light pulse conditions, showing responses to different light power density stimuli and revealing visual adaptation properties. (c) Peak response current under light pulse stimulation, showing that the device response first increases and then decreases with light intensity. (d) Temperature-dependent output response of the ITO/PMMA/SiC-NWs/ITO photoelectric synaptic device. (e) Schematic of the light-illumination-induced structural variation of the ITO/PMMA/SiC-NWs/ITO photoelectric synaptic device. (f) Schematic of the light-illumination-induced energy band variation of the ITO/PMMA/SiC-NWs/ITO photoelectric synaptic device.

Next, the mechanism behind the visual adaptation characteristics of the developed ITO/PMMA/SiC-NWs/ITO photoelectric synaptic device was investigated. When light irradiates the device, it induces a heating effect. This heating effect, particularly strong under intense illumination, may lead to changes in the intermolecular gaps of PMMA, thereby affecting the carrier transmission rate. This physical process is expected to change the material resistance and may impact the overall performance of the device. Specifically, Fig. 3e illustrates that under illumination the intermolecular gaps of PMMA expand, leading to adjustments in the band structure and directly increasing the material's resistance. Fig. 3f shows that as the temperature rises, the widening of the PMMA bandgap reduces the probability of Fowler–Nordheim (FN) tunneling, leading to decreased carrier tunneling and thus a reduced current output from the device.
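
For reference, the dependence of the tunneling current on the barrier height can be written in the standard Fowler–Nordheim form (a textbook expression, not a fit to the present device):

$$ J_{\mathrm{FN}} \propto E^{2}\exp\!\left(-\frac{8\pi\sqrt{2m^{*}}\,\phi_{B}^{3/2}}{3hqE}\right), $$

where E is the electric field across the PMMA layer, φB is the effective tunneling barrier height, m* is the carrier effective mass, q is the elementary charge, and h is Planck's constant. Because JFN depends exponentially on φB3/2, even a barrier widening on the order of 0.1 eV (see the DFT results below) can noticeably suppress the tunneling current.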

To accurately capture these microscopic changes, first-principles calculations based on density functional theory (DFT) with projector augmented wave (PAW) pseudopotentials were carried out. These calculations help to understand in detail how the intermolecular gaps of PMMA change with temperature and how these changes affect the performance of photoelectric synaptic devices at the atomic level. Specifically, using the Quantum ESPRESSO software with the generalized gradient approximation (GGA) and the Perdew–Burke–Ernzerhof (PBE) exchange–correlation functional, the behavior of PMMA chains was simulated in a 30 × 30 × 30 Å supercell, with the Brillouin zone sampled at the Γ point. The convergence thresholds for energy and force were set to 1 × 10−5 eV and 0.02 eV Å−1, respectively. The calculations revealed that as the temperature increases, the widening of the gap between PMMA polymer molecular chains leads to a bandgap increase of 0.11 eV, directly affecting the material resistance and thereby revealing the potential impact of thermal effects on device performance (Fig. S5 and Table S1, ESI).
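
For readers wishing to set up a comparable calculation, the sketch below writes a Quantum ESPRESSO-style relaxation input matching the stated settings (PBE-GGA, PAW, 30 × 30 × 30 Å supercell, Γ-point sampling, 1 × 10−5 eV and 0.02 eV Å−1 thresholds). The pseudopotential file names, wavefunction cutoff, atom count, and eV-to-Ry conversions are illustrative assumptions and are not taken from the ESI.

```python
# Hypothetical sketch of a Quantum ESPRESSO relaxation input for a PMMA segment.
# Pseudopotential names, the ecutwfc cutoff and the atom count are assumptions.

EV_TO_RY = 1.0 / 13.605693      # eV -> Ry
ANG_TO_BOHR = 1.0 / 0.529177    # Angstrom -> Bohr

pw_input = f"""&CONTROL
  calculation   = 'relax'
  etot_conv_thr = {1e-5 * EV_TO_RY:.3e}                 ! 1e-5 eV
  forc_conv_thr = {0.02 * EV_TO_RY / ANG_TO_BOHR:.3e}   ! 0.02 eV/Angstrom
  pseudo_dir    = './pseudo'
/
&SYSTEM
  ibrav     = 1
  celldm(1) = {30.0 * ANG_TO_BOHR:.4f}   ! 30 Angstrom cubic supercell
  nat       = 15                         ! one C5H8O2 repeat unit (placeholder)
  ntyp      = 3
  ecutwfc   = 50.0                       ! assumed cutoff (Ry)
/
&ELECTRONS
  conv_thr = 1.0d-8
/
&IONS
/
ATOMIC_SPECIES
  C 12.011 C.pbe-paw.UPF
  H 1.008  H.pbe-paw.UPF
  O 15.999 O.pbe-paw.UPF
K_POINTS gamma
"""

with open("pmma_relax.in", "w") as fh:
    fh.write(pw_input)
# NB: an ATOMIC_POSITIONS card with the coordinates of the PMMA segment must be
# appended before running pw.x; it is omitted from this sketch.
```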

Furthermore, experimental studies were conducted by measuring the output response of the developed ITO/PMMA/SiC-NWs/ITO photoelectric synaptic device at different temperatures without light exposure (Fig. 3d). It was found that as the temperature increased, the device's output response initially increased but began to decrease once the temperature exceeded 50 °C. Additionally, using a thermocouple, temperature changes under different light power conditions were measured under 365 nm light irradiation (Fig. S6, ESI), further confirming the significant rise in device temperature at higher light intensities. These experimental results demonstrate that the heating effect induced by light illumination promotes thermal expansion of the PMMA units, thereby affecting the size of the PMMA molecular gaps and endowing the ITO/PMMA/SiC-NWs/ITO photoelectric synaptic device with visual adaptation characteristics.

To delve deeper into the capability of the photoelectric device to simulate biological visual adaptation, the photoelectric synaptic device was modeled and incorporated into a LIF neuron circuit based on a threshold switching memristor (TSM) to construct the VAAN (Supplementary Notes 1 and 2 and Fig. S7–S9, ESI).32 As shown in Fig. 4a, in the VAAN circuit the photoelectric synaptic device is responsible for converting light intensity information into electrical current signals. The TSM-based LIF neuron circuit receives these current signals and converts them into output spikes. To further verify the performance of the VAAN circuit, different light intensities were set and the brightness of grayscale images was adjusted to simulate the visual information reading process under varying lighting conditions (for specific details, see Supplementary Note 1, ESI). The results shown in Fig. 4b and c reflect the dependence of the spike voltage and spike frequency on light intensity, demonstrating that despite changing light intensities, the VAAN circuit can still maintain consistent spike signal output. Fig. 4d presents a comparison between the VAAN circuit and traditional visual sensors in processing images under different lighting conditions. The figure contains two rows of image sequences, each representing a different image processing method. The upper row shows the output of traditional visual sensors and LIF neurons, while the lower row displays the output of the VAAN circuit. From left to right, the exposure gradually increases, simulating changes in lighting from dark to bright. In the output of the traditional visual sensors, we can observe that as the exposure increases, the images progress from being too dark to overly bright, eventually leading to loss of detail. In contrast, the output of the VAAN circuit demonstrates excellent adaptability: under low-light conditions, it can enhance the brightness and contrast of the image, making details of the originally dark image clearly visible; under high-light conditions, it avoids overexposure by adjusting its sensitivity, maintaining the overall structure and key features of the image. This comparison clearly demonstrates the visual adaptation capability of the VAAN circuit. Whether in low- or high-light conditions, the VAAN circuit can maintain image clarity and detail, while traditional methods struggle to cope with such a wide dynamic range of light changes. This adaptability is crucial for maintaining stable visual perception in changing environments, especially in application scenarios that require accurate identification and classification, such as autonomous driving or security systems.
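
A behavioral sketch of the VAAN concept is given below: an adaptive photocurrent with the same rise-then-fall dependence on power density as in Fig. 3c charges a software LIF membrane, so the output spike rate first increases and then decreases with light intensity. The TSM circuit dynamics are abstracted away, and all component values (capacitance, leak resistance, threshold, responsivity) are assumed for illustration.

```python
import numpy as np

# Behavioural sketch of the VAAN concept (TSM circuit abstracted, all values assumed):
# an adaptive photocurrent charges a leaky integrate-and-fire membrane, so the
# output spike rate first increases and then decreases as the light intensifies.

C_MEM = 1e-6    # membrane capacitance (F), assumed
R_LEAK = 1e5    # leak resistance (ohm), assumed
V_TH = 0.3      # firing threshold (V), assumed
DT = 1e-4       # simulation time step (s)

def photo_current(power_mw_cm2):
    """Adaptive photocurrent (A): photoconductive gain times a thermal
    suppression factor that sets in at high power densities (both assumed)."""
    gain = 1e-7 * power_mw_cm2
    heat = 1.0 / (1.0 + np.exp((0.2 * power_mw_cm2 - 40.0) / 15.0))
    return gain * heat

def spike_rate(power_mw_cm2, duration_s=1.0):
    """Output spike rate (Hz) of the LIF membrane under constant illumination."""
    v, spikes = 0.0, 0
    i_in = photo_current(power_mw_cm2)
    for _ in range(int(duration_s / DT)):
        v += DT * (i_in - v / R_LEAK) / C_MEM   # leaky integration
        if v >= V_TH:                           # fire and reset
            spikes += 1
            v = 0.0
    return spikes / duration_s

for p in (36, 100, 195, 260, 326):
    print(f"{p:3d} mW cm^-2 -> ~{spike_rate(p):.0f} Hz")
```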


Fig. 4 Evaluation of the visual adaptation performance of the VAAN circuit. (a) Schematic of the VAAN circuit, in which the input resistor of a traditional LIF neuron circuit is replaced by the photoelectric synaptic device. (b) The relationship between the output of the VAAN and its spike frequency. (c) Extracted output spikes of the VAAN under varying light intensity. (d) Comparison of the image processing capabilities of the VAAN circuit and traditional visual sensors under different lighting conditions. Upper row: output of traditional visual sensors and LIF neurons; lower row: output of the VAAN circuit. The exposure gradually increases from left to right, simulating changes in lighting from dark to bright.

The visual adaptation feature of our VAAN circuit offers significant advantages under extreme weather conditions such as rainy and foggy days, where light conditions change rapidly. The circuit's quick response adjustment prevents information loss in low light and avoids saturation under sudden bright conditions, such as headlights emerging in fog. The initial increase in response sensitivity enhances image contrast, aiding the identification of low-contrast objects such as road signs or pedestrians in poor visibility. Conversely, when encountering sudden strong light sources such as oncoming high beams, the subsequent decrease in sensitivity prevents visual system overload and maintains overall scene visibility. This adaptive response also helps filter out rapidly changing visual noise such as raindrops or fog particles, emphasizing more stable visual information. Moreover, by dynamically adjusting its response, the VAAN achieves energy efficiency by reducing unnecessary signal processing while maintaining essential information, which is particularly valuable in resource-constrained systems.

To demonstrate the application potential of the VAAN circuit in real-world scenarios, a visual adaptation spiking neural network (VASNN) was designed for traffic sign recognition tasks under different weather conditions. The architecture of the network, as shown in Fig. 5a, incorporates 728 VAAN circuits forming the input layer, which process the received light intensity signals and mimic the human eye's response to changes in illumination. Subsequently, signal feature extraction is performed through two 3 × 3 × 10 convolutional layers, simulating the functionality of the brain's primary visual processing areas. Finally, a 490 × 43 fully connected layer, using a memristor-based LIF neuron circuit for signal decoding, reflects the brain's higher-level processing of visual information. Notably, both the convolutional and fully connected layers of the entire network are composed of memristor arrays. The light intensity signals converted by the VAAN circuits are integrated and batch-fed into a 9 × 20 memristor array for two rounds of convolutional processing, with the processed results temporarily stored. The weighted pulse signals after convolutional processing are then aggregated and transmitted to the next memristor array for further inference.
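
As a software-level sketch, the topology described above can be written as follows. The strides and padding needed to arrive at the 490-element feature vector are not specified in the text and are therefore assumptions, and memristor-array non-idealities as well as the VAAN front-end are omitted.

```python
import torch
import torch.nn as nn

# Sketch of the VASNN topology as a conventional tensor model. Assumptions:
# stride-2, padding-1 convolutions are used so that 28x28 -> 14x14 -> 7x7x10 = 490;
# a simple leaky readout with a fixed threshold stands in for the memristor LIF layer.

class VASNNSketch(nn.Module):
    def __init__(self, num_classes: int = 43, beta: float = 0.9, v_th: float = 1.0):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 10, 3, stride=2, padding=1, bias=False)   # 28 -> 14
        self.conv2 = nn.Conv2d(10, 10, 3, stride=2, padding=1, bias=False)  # 14 -> 7
        self.fc = nn.Linear(7 * 7 * 10, num_classes, bias=False)            # 490 -> 43
        self.beta, self.v_th = beta, v_th   # leak factor and threshold, assumed

    def forward(self, spike_train: torch.Tensor) -> torch.Tensor:
        """spike_train: (T, batch, 1, 28, 28) binary frames from the VAAN layer."""
        T, batch = spike_train.shape[:2]
        v = torch.zeros(batch, self.fc.out_features)       # readout membrane potential
        counts = torch.zeros(batch, self.fc.out_features)  # output spike counts
        for t in range(T):
            x = torch.relu(self.conv1(spike_train[t]))
            x = torch.relu(self.conv2(x))
            v = self.beta * v + self.fc(x.flatten(1))      # leaky integration
            spikes = (v >= self.v_th).float()
            counts += spikes
            v = v * (1.0 - spikes)                         # reset fired neurons
        return counts / T                                  # output spike rates

# Example: 20 time steps of random input spikes for a batch of 4 images.
net = VASNNSketch()
dummy = (torch.rand(20, 4, 1, 28, 28) < 0.1).float()
print(net(dummy).shape)   # torch.Size([4, 43])
```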


Fig. 5 The VASNN for traffic sign recognition tasks. (a) Schematic of the VASNN architecture, including 728 VAAN circuits in the input layer, two convolutional layers, and a fully connected output layer. (b) Validation results after a fixed number of system iterations; as the number of iterations increases, the recognition accuracy gradually improves. (c) Comparison of recognition accuracy under extreme weather conditions, showing the VASNN's significant advantage in processing images in complex environments. 'Original' represents the idealized VAAN without circuit non-idealities, 'VAAN' is the model considering parasitic capacitances, resistances, and 5% Gaussian noise in the memristors, and 'LIF' is the traditional spiking neural network without visual adaptation. (d) and (e) The recognition accuracy of the VASNN under different noise ratios and fog levels, demonstrating that even under adverse conditions, the VASNN maintains a high recognition accuracy.

During the training phase, 28 × 28 pixel traffic sign images are converted into light intensity signals, further simulating the visual reception characteristics of the human eye under variable weather conditions. For example, to simulate the visual effects of rainy and foggy days, these images undergo random noise addition and exposure adjustment. Moreover, the VASNN employs surrogate gradient descent for weight updates.33 The network's ability to process images under extreme weather conditions was assessed by sampling and normalizing the final decoded pulse frequency of the LIF neurons (Fig. 5b). As shown in Fig. 5c, using a test set of 1000 images, our VAAN, which incorporates realistic circuit characteristics, demonstrates approximately 12% higher recognition accuracy under extreme weather conditions compared to the traditional LIF. While the 'Original' model, representing an idealized VAAN without circuit non-idealities, shows the best performance, it serves primarily as a theoretical upper bound. The VAAN, despite slight performance degradation due to practical circuit constraints, still significantly outperforms the traditional LIF, highlighting the advantage of our proposed visual adaptation mechanism in extracting key information. Furthermore, to understand the impact of extreme weather conditions on VASNN performance, a detailed evaluation of the network's inference capability was conducted by simulating different degrees of adverse weather. Using 1000 images as the validation set, the 'weather noise ratio' and 'proportion of atomization degree' were adjusted to simulate the impact of rainy and foggy weather. Fig. 5d and e present the performance comparison between the VAAN and traditional LIF neurons under varying 'weather noise ratio' and 'proportion of atomization degree', simulating worsening rain and fog conditions, respectively.
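
The surrogate-gradient trick and the rain/fog degradation referred to above can be sketched as follows. The sigmoid-derivative surrogate follows the standard formulation of ref. 33, while the augmentation function, its parameter names, and the haze level are illustrative choices rather than the exact preprocessing used for Fig. 5.

```python
import torch

SLOPE = 5.0  # steepness of the surrogate derivative, assumed

class SurrogateSpike(torch.autograd.Function):
    """Hard threshold in the forward pass; smooth sigmoid derivative in the
    backward pass (the standard surrogate-gradient trick, see ref. 33)."""
    @staticmethod
    def forward(ctx, v_minus_th):
        ctx.save_for_backward(v_minus_th)
        return (v_minus_th >= 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_th,) = ctx.saved_tensors
        sig = torch.sigmoid(SLOPE * v_minus_th)
        return grad_output * SLOPE * sig * (1.0 - sig)

def weather_augment(img: torch.Tensor, noise_ratio: float, fog_level: float) -> torch.Tensor:
    """Illustrative rain/fog degradation: random-pixel noise plus a blend toward a
    bright uniform haze; the 0.8 haze value is an assumption."""
    noisy = torch.where(torch.rand_like(img) < noise_ratio, torch.rand_like(img), img)
    return (1.0 - fog_level) * noisy + fog_level * 0.8

# Usage sketch: spikes = SurrogateSpike.apply(membrane_v - v_threshold)
img = torch.rand(1, 28, 28)
print(weather_augment(img, noise_ratio=0.2, fog_level=0.3).shape)  # torch.Size([1, 28, 28])
```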

In Fig. 5d, we observe that both VAAN and LIF show decreased performance as the weather noise ratio increases. However, VAAN maintains a higher recognition accuracy throughout the noise range. For instance, at a 20% noise ratio, VAAN achieves approximately 85% accuracy compared to 72% for LIF. Notably, under high noise conditions (e.g., 60% noise ratio), the performance gap between the two methods narrows, but VAAN still slightly outperforms LIF.

Fig. 5e reveals the performance changes as the atomization degree increases. At low atomization levels (0–20%), VAAN demonstrates a significant advantage. For example, at 10% atomization, VAAN's accuracy is about 90%, while LIF is around 80%. However, as the atomization degree further increases, the performance gap between the two methods gradually narrows, and at high atomization levels (e.g., 60%), LIF's performance slightly surpasses that of VAAN. These results indicate that VAAN offers significant advantages under moderately challenging weather conditions (moderate levels of rain and fog). This can be attributed to VAAN's visual adaptation mechanism, which better handles dynamic light changes under these conditions. However, in extremely severe weather conditions, both systems face challenges, and the performance gap narrows. It is worth noting that while VAAN performs better in most cases, traditional LIF neurons still demonstrate their advantages under certain extreme conditions, especially in highly atomized environments. This suggests that in different application scenarios, the characteristics of both methods may need to be considered comprehensively. Accordingly, exploring further methods to improve recognition accuracy will be the focus of our future research.

Conclusions

In this work, an ITO/PMMA/SiC-NWs/ITO photoelectric synaptic device was developed for compact artificial vision systems with a visual adaptation function. The ITO/PMMA/SiC-NWs/ITO photoelectric synaptic device exhibits typical synaptic and visual adaptation characteristics. Theoretical calculations and experimental results demonstrated that the visual adaptation characteristics of the ITO/PMMA/SiC-NWs/ITO photoelectric synaptic device originate from the heating effect induced by increasing light intensity. Furthermore, a visual adaptation artificial neuron (VAAN) circuit was designed by incorporating the photoelectric synaptic device into a traditional LIF neuron circuit; its output frequency increases and then decreases with the gradual intensification of light, reflecting the dynamic process of visual adaptation. In addition, a visual adaptation spiking neural network (VASNN) was constructed to evaluate the photoelectric synaptic device-based visual system on perception tasks. The results indicate that, in the task of traffic sign detection under extreme weather conditions, an accuracy of 97% was achieved (approximately 12% higher than that without a visual adaptation function). Our research provides a biologically plausible hardware solution for visual adaptation in neuromorphic computing.

Methods

Fabrication of samples

The fabrication process of the synaptic devices was as follows. First, 50 mg of SiC NWs, 75 mg of sodium dodecylbenzenesulfonate, 2.5 mg of aluminum nitrate, and 100 mL of isopropanol were mixed and subjected to ultrasonic treatment (ultrasonic power 800 W, 30 min) to obtain a SiC-NWs deposition solution. Subsequently, the SiC-NW film was prepared on an ITO substrate using the EPD method, with deposition parameters set at 60 V for 6 min. After deposition, the ITO substrate with the SiC-NW film was dried in a vacuum drying oven at 60 °C for 6 h. Then, 500 μL of PMMA solution (0.1 g mL−1) was drop-cast onto the obtained SiC-NW thin film, and the top ITO electrode was placed on top of the PMMA at a 90-degree angle to the bottom ITO. Finally, the encapsulated device was obtained through a hot-pressing process (pressure ∼6800 Pa, temperature 120 °C, 120 s).

Characterization

The obtained materials were characterized using X-ray diffraction (XRD, Smart Lab 9 kW, Hitachi, Japan), field emission scanning electron microscopy (FESEM, S-4800, Hitachi, Japan), transmission electron microscopy (TEM, Tecnai G2 F20, FEI, USA), and double spherical aberration-corrected transmission electron microscopy (ACTEM, JEM GRAND ARM 300F, JEOL, Japan). Electrical and optoelectronic measurements were performed using a four-probe station and a semiconductor characterization system (Keithley 4200-SCS).

Author contributions

ZF, SY and JXZ contributed equally to this work. Conceptualization: ZF, ZHW. Methodology: ZF, SY, JXZ, XL, WBG, ST. Investigation: ZF, SY, JXZ, HCW, YH, HR, ZHL. Data curation: ZHW, ZYX. Formal analysis: ZHW. Visualization: ZF, SY, JXZ, ZHW. Funding acquisition: ZHW, YHD, ZYX. Project administration: ZHW, GDW, YHD. Supervision: ZHW, GDW, YHD. Writing – original draft: ZF. Writing – review and editing: ZHW.

Data availability

The data supporting this study's findings are available from the corresponding author upon reasonable request.

Conflicts of interest

The authors declare no competing financial interest.

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (NSFC) under Grant Nos. 62274002, 62304001, 61874001 and 62201005, in part by the Anhui Provincial Natural Science Foundation under Grant No. 2308085QF213, and in part by the Natural Science Research Project of Anhui Educational Committee under Grant No. 2023AH050072.

References

  1. J. B. Demb and J. H. Singer, Functional circuitry of the retina, Ann. Rev. Vis. Sci., 2015, 1(1), 263–289,  DOI:10.1146/annurev-vision-082114-035334.
  2. R. Sabesan, B. P. Schmidt, W. S. Tuten and A. Roorda, The elementary representation of spatial and color vision in the human retina, Sci. Adv., 2016, 2(9), e1600797,  DOI:10.1126/sciadv.1600797.
  3. E. Herrera, L. Erskine and C. Morenilla-Palao, Guidance of retinal axons in mammals, in Seminars in cell & developmental biology, Academic Press, 2019, vol. 85, pp. 48–59 DOI:10.1016/j.semcdb.2017.11.027.
  4. K. T. Ngo, I. Andrade and V. Hartenstein, Spatio-temporal pattern of neuronal differentiation in the Drosophila visual system: A user's guide to the dynamic morphology of the developing optic lobe, Dev. Biol., 2017, 428(1), 1–24,  DOI:10.1016/j.ydbio.2017.05.008.
  5. A. S. Prayag, R. P. Najjar and C. Gronfier, Melatonin suppression is exquisitely sensitive to light and primarily driven by melanopsin in humans, J. pineal Res., 2019, 66(4), e12562,  DOI:10.1111/jpi.12562.
  6. M. Y. Lipin and J. Vigh, Calcium spike-mediated digital signaling increases glutamate output at the visual threshold of retinal bipolar cells, J. Neurophysiol., 2015, 113(2), 550–566,  DOI:10.1152/jn.00378.2014.
  7. M. Rivlin-Etzion, W. N. Grimes and F. Rieke, Flexible neural hardware supports dynamic computations in retina, Trends Neurosci., 2018, 41(4), 224–237,  DOI:10.1016/j.tins.2018.01.009.
  8. R. E. Mazade, M. D. Flood and E. D. Eggers, Dopamine D1 receptor activation reduces local inner retinal inhibition to light-adapted levels, J. Neurophysiol., 2019, 121(4), 1232–1243,  DOI:10.1152/jn.00448.2018.
  9. D. S. Joyce, B. Feigl and A. J. Zele, The effects of short-term light adaptation on the human post-illumination pupil response, Invest Ophthalmol. Visual Sci., 2016, 57(13), 5672–5680,  DOI:10.1167/iovs.16-19934.
  10. V. M. Patel, R. Gopalan, R. Li and R. Chellappa, Visual domain adaptation: A survey of recent advances, IEEE Signal Process. Mag., 2015, 32(3), 53–69,  DOI:10.1109/MSP.2014.2347059.
  11. I. Ullah, S. An, M. Kang, P. Chikontwe, H. Lee, J. Choi and S. H. Park, Video domain adaptation for semantic segmentation using perceptual consistency matching, Neural Networks, 2024, 179, 106505,  DOI:10.1016/j.neunet.2024.106505.
  12. K. Roy, A. Jaiswal and P. Panda, Towards spike-based machine intelligence with neuromorphic computing, Nature, 2019, 575(7784), 607–617,  DOI:10.1038/s41586-019-1677-2.
  13. G. Giamougiannis, A. Tsakyridis, M. Moralis-Pegios, A. R. Totovic, M. Kirtas, N. Passalis, A. Tefas, D. Lazovsky and N. Pleros, Universal linear optics revisited: new perspectives for neuromorphic computing with silicon photonics, IEEE J. Sel. Top. Quantum Electron., 2022, 29(2), 1–6,  DOI:10.1109/JSTQE.2022.3228318.
  14. K. F. Yang, C. Cheng, S. X. Zhao, H. M. Yan, X. S. Zhang and Y. J. Li, Learning to adapt to light, Int. J. Comput. Vis., 2023, 131(4), 1022–1041,  DOI:10.1007/s11263-022-01745-y.
  15. H. Yin, P. X. Liu and M. Zheng, Stereo visual odometry with automatic brightness adjustment and feature tracking prediction, IEEE Trans. Instrum. Meas., 2022, 72, 1,  DOI:10.1109/TIM.2022.3223070.
  16. S. C. Huang, D. W. Jaw, B. H. Chen and S. Y. Kuo, An efficient single image enhancement approach using luminance perception transformation, IEEE Trans. Emerg. Top. Comput., 2019, 9(2), 1083–1094,  DOI:10.1109/TETC.2019.2943231.
  17. S. Kim, R. Lussi, X. Qu, F. Huang and H. J. Kim, Reversible data hiding with automatic brightness preserving contrast enhancement, IEEE Trans. Circuits Syst. Video Technol., 2018, 29(8), 2271–2284,  DOI:10.1109/TCSVT.2018.2869935.
  18. Z. Wang, T. Wan, S. Ma and Y. Chai, Multidimensional vision sensors for information processing, Nat. Nanotechnol., 2024, 1–2,  DOI:10.1038/s41565-024-01665-7.
  19. T. Ferreira De Lima, A. N. Tait, A. Mehrabian, M. A. Nahmias, C. Huang, H. T. Peng, B. A. Marquez, M. Miscuglio, T. El-Ghazawi, V. J. Sorger and B. J. Shastri, Primer on silicon neuromorphic photonic processors: architecture and compiler, Nanophotonics, 2020, 9(13), 4055–4073,  DOI:10.1515/nanoph-2020-0172.
  20. Y. Chai, In-sensor computing for machine vision, Nature, 2020, 579(7797), 32–33,  DOI:10.1038/d41586-020-00592-6.
  21. T. Wan, B. Shao, S. Ma, Y. Zhou, Q. Li and Y. Chai, In-sensor computing: materials, devices, and integration technologies, Adv. Mater., 2023, 35(37), 2203830,  DOI:10.1002/adma.202203830.
  22. F. Zhou and Y. Chai, Near-sensor and in-sensor computing, Nat. Electron., 2020, 3(11), 664–671,  DOI:10.1038/s41928-020-00501-9.
  23. H. Seung, C. Choi, D. C. Kim, J. S. Kim, J. H. Kim, J. Kim, S. I. Park, J. A. Lim, J. Yang, M. K. Choi and T. Hyeon, Integration of synaptic phototransistors and quantum dot light-emitting diodes for visualization and recognition of UV patterns, Sci. Adv., 2022, 8(41), eabq3101,  DOI:10.1126/sciadv.abq3101.
  24. J. Xue, Z. Zhu, X. Xu, Y. Gu, S. Wang, L. Xu, Y. Zou, J. Song, H. Zeng and Q. Chen, Narrowband perovskite photodetector-based image array for potential application in artificial vision, Nano Lett., 2018, 18(12), 7628–7634,  DOI:10.1021/acs.nanolett.8b03209.
  25. L. A. Liu, J. Zhao, G. Cao, S. Zheng and X. Yan, A Memristor-Based Silicon Carbide for Artificial Nociceptor and Neuromorphic Computing, Adv. Mater. Technol., 2021, 6(12), 2100373,  DOI:10.1002/admt.202100373.
  26. R. Bange, E. Bano, L. Rapenne and V. Stambouli, Superior long term stability of SiC nanowires over Si nanowires under physiological conditions, Mater. Res. Express, 2018, 6(1), 015013,  DOI:10.1088/2053-1591/aae32a.
  27. Y. Sun, L. Qian, D. Xie, Y. Lin, M. Sun, W. Li, L. Ding, T. Ren and T. Palacios, Photoelectric synaptic plasticity realized by 2D perovskite, Adv. Funct. Mater., 2019, 29(28), 1902538,  DOI:10.1002/adfm.201902538.
  28. S. Jeon, S. E. Ahn, I. Song, C. J. Kim, U. I. Chung, E. Lee, I. Yoo, A. Nathan, S. Lee, K. Ghaffarzadeh and J. Robertson, Gated three-terminal device architecture to eliminate persistent photoconductivity in oxide semiconductor photosensor arrays, Nat. Mater., 2012, 11(4), 301–305,  DOI:10.1038/NMAT3256.
  29. L. Hu, J. Yang, J. Wang, P. Cheng, L. O. Chua and F. Zhuge, All-optically controlled memristor for optoelectronic neuromorphic computing, Adv. Funct. Mater., 2021, 31(4), 2005582,  DOI:10.1002/adfm.202005582.
  30. J. Yang, L. Hu, L. Shen, J. Wang, P. Cheng, H. Lu, F. Zhuge and Z. Ye, Optically driven intelligent computing with ZnO memristor, Fundam. Res., 2022, 4(1), 158–166,  DOI:10.1016/j.fmre.2022.06.019.
  31. J. L. Pardo-Vazquez, J. R. Castiñeiras-de Saa, M. Valente, I. Damião, T. Costa, M. I. Vicente, A. G. Mendonça, Z. F. Mainen and A. Renart, The mechanistic foundation of Weber's law, Nat. Neurosci., 2019, 22(9), 1493–1502,  DOI:10.1038/s41593-019-0439-7.
  32. Z. Wu, J. Lu, T. Shi, X. Zhao, X. Zhang, Y. Yang, F. Wu, Y. Li, Q. Liu and M. Liu, A habituation sensory nervous system with memristors, Adv. Mater., 2020, 32(46), 2004398,  DOI:10.1002/adma.202004398.
  33. E. O. Neftci, H. Mostafa and F. Zenke, Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks, IEEE Signal Process. Mag., 2019, 36(6), 51–63,  DOI:10.1109/MSP.2019.2931595.

Footnotes

Electronic supplementary information (ESI) available. See DOI: https://doi.org/10.1039/d4nh00230j
These authors contributed equally to this work.

This journal is © The Royal Society of Chemistry 2024