Open Access Article
This Open Access Article is licensed under a Creative Commons Attribution 3.0 Unported Licence.

Flexible, wearable mechano-acoustic sensors for body sound monitoring applications

Tran Bach Dang a, Thanh An Truonga, Chi Cong Nguyen a, Michael Listyawana, Joshua Sam Sapersa, Sinuo Zhaoa, Duc Phuc Truongb, Jin Zhanga, Thanh Nho Do cd and Hoang-Phuong Phan *ad
aSchool of Mechanical and Manufacturing Engineering, UNSW Sydney, Kensington Campus Sydney, NSW 2052, Australia. E-mail: hp.phan@unsw.edu.au
bSchool of Mechanical Engineering, Hanoi University of Science and Technology, Hanoi, Vietnam
cGraduate School of Biomedical Engineering, UNSW Sydney, Kensington Campus Sydney, NSW 2052, Australia
dTyree Foundation Institute of Health Engineering, UNSW Sydney, Kensington Campus, Sydney, NSW 2052, Australia

Received 6th December 2024, Accepted 7th February 2025

First published on 11th February 2025


Abstract

Body sounds serve as a valuable source of health information, offering insights into systems such as the cardiovascular, pulmonary, and gastrointestinal systems. Additionally, body sound measurements are easily accessible, fast, and non-invasive, which has led to their widespread use in clinical auscultation for diagnosing health conditions. However, conventional devices like stethoscopes are constrained by rigid and bulky designs, limiting their potential for long-term monitoring and often leading to subjective diagnoses. Recently, flexible, wearable mechano-acoustic sensors have emerged as an innovative alternative for body sound auscultation, offering significant advantages over conventional rigid devices. This review explores these advanced sensors, delving into their sensing mechanisms, materials, configurations, and fabrication techniques. Furthermore, it highlights various health monitoring applications of flexible, wearable mechano-acoustic sensors based on body sound auscultation. Finally, the existing challenges and promising opportunities are addressed, providing a snapshot of the current state of the field and strategies for future approaches in this rapidly evolving area.



Tran Bach Dang

Tran Bach Dang is a Ph.D. student at the School of Mechanical and Manufacturing Engineering, UNSW Sydney. His research focuses on flexible, wearable, implantable mechano-acoustic sensors for health monitoring applications. He obtained his Bachelor of Engineering in Mechatronics from Hanoi University of Science and Technology (HUST, Vietnam) in 2023.


Chi Cong Nguyen

Chi Cong Nguyen is an Associate Lecturer at the School of Mechanical and Manufacturing Engineering, UNSW Sydney. His research focuses on the development of medical devices using soft robotics and flexible biosensors for a wide range of healthcare applications. He obtained his B.E. in Mechatronics from Hanoi University of Science and Technology (HUST, Vietnam) in 2019. Following his graduation, he worked as a Mechanical Designer on a 5G project at Viettel High Technology Industries Corporation (VHT) in Vietnam. In 2021, he was awarded a full scholarship from the Vingroup Scholarship Program in Vietnam to pursue a PhD at the Graduate School of Biomedical Engineering (GSBmE), UNSW Sydney.


Thanh Nho Do

Thanh Nho Do is a Scientia Senior Lecturer at the Graduate School of Biomedical Engineering, UNSW Sydney. He received his PhD in surgical robotics from the School of Mechanical & Aerospace Engineering, NTU, Singapore, followed by a postdoc at the California NanoSystems Institute, USA. He currently holds the prestigious UNSW Scientia and CINSW Career Development Fellowships. His work has been recognized with numerous awards, including the Google Research Scholar Award, the Young Tall Poppy Science Award, and recognition by the WHO, and has been featured in major media (e.g., Reuters, Washington Post, New York Post, IEEE Spectrum). His research focuses on flexible surgical devices, soft robotics, artificial organs, wearables, and haptics.


Hoang-Phuong Phan

Hoang-Phuong Phan is an Associate Professor at the School of Mechanical and Manufacturing Engineering, UNSW Sydney. He obtained his B.E. and M.E. from The University of Tokyo, Japan in 2011 and 2013, and his Ph.D. from Griffith University, Australia in 2016. His research interests include integrated sensors, flexible electronics, and 3D microarchitectures. Prof. Phan was a visiting scholar at AIST, Japan in 2016, Stanford University, USA in 2017, and Northwestern University, USA in 2019. He has received the Springer Outstanding Thesis Award, the ANN Fellowship, the GU Postdoctoral Fellowship, an ARC DECRA Fellowship, the Griffith Vice-Chancellor Research Excellence Award, and an ARC Future Fellowship.


1. Introduction

The human body functions through the coordinated operation and interaction of its organs. The activities of several organs, such as the heart, lungs, and bowel, involve mechanical motions, producing vibrations and contractions that are transmitted through body tissues and the skin, manifesting as body sounds. In a broad sense, body sounds are all the rhythmic signals emitted by the body, whether audible or inaudible. For example, the activity of the heart causes audible sounds that can be heard from the chest wall, and it also drives a pulse wave that expands through blood vessels and can be felt at points such as the wrist or throat.1 Body sounds are a valuable source of health information, providing insights into systems such as the cardiovascular, pulmonary, and gastrointestinal systems.2 Even small structural changes in organs can be detected and recorded through these sound patterns. Moreover, body sound measurements are easily accessible, fast, and non-invasive, which has led to their widespread use in clinical auscultation for diagnosing health conditions.

Stethoscopes are among the most common medical instruments used for body sound measurement. Developed from a concept invented more than 200 years ago, with key components including a small disc-shaped resonator placed against the skin and tubing connected to two earpieces,3 stethoscopes are available worldwide and, alongside thermometers, are among the first medical tools clinicians use to assess the symptoms and physical status of patients. Despite their low cost and effectiveness in assessing body sounds, analog stethoscopes still pose several limitations. In particular, their rigid and bulky housing prevents continuous monitoring over long periods. Diagnosis with these devices depends heavily on the clinician's expert knowledge and experience, and the measured heart sounds cannot be shared among doctors or between healthcare providers and patients. In some cases, the human ear is less sensitive to low-frequency signals,4 including heart and lung sounds, making the assessment highly subjective. Furthermore, auscultation performed in clinical settings may capture abnormal physiological signals that do not appear during patients' routine activities, owing to the change in environment. For example, white coat hypertension, also known as white coat syndrome, is a form of labile hypertension in which people exhibit blood pressure above the normal range in a clinical setting but not under other conditions. Such signals do not accurately reflect the true physical status of patients outside the clinic. The demand for reducing the subjectivity of auscultation and minimizing the need for frequent hospital visits has driven the research and development of digital stethoscopes and acoustic sensing devices, such as inertial sensing units.
These devices enable measurements in ambulatory environments, offering wireless platforms that address the limitations of traditional auscultation by facilitating data sharing and post-measurement analysis through machine learning algorithms. However, these devices are often rigid and bulky, limiting their suitability for continuous monitoring.

Flexible, wearable mechano-acoustic sensors have emerged as an innovative solution for body sound auscultation, providing several advantages over conventional rigid devices. Advances in micromachining have enabled the development of miniature mechano-acoustic sensors, such as MEMS microphones and accelerometers, with footprints as small as a few square millimeters. These sensors can be integrated into flexible printed circuit boards (fPCB) to create compact wearable devices.1,5–9 The introduction of fully flexible sensors has further improved skin contact and enhanced sensitivity. Some flexible sensors are permeable,10 self-adhesive,11 and transparent,12 making them more suitable for long-term wear by reducing discomfort and skin irritation. These attributes allow such devices to be comfortably attached to the skin for prolonged monitoring, reducing artifacts and improving the overall user experience. With those advantages, acoustic sensors in soft, wearable form factors have demonstrated their capability to continuously capture distinct soundwaves from the human body. Advances in material engineering, soft lithography fabrication, wireless communication, and data processing techniques (e.g., machine learning) have further supported the translation of these devices toward practical applications. Several flexible acoustic sensors have undergone clinical validation, underscoring their potential for real-world healthcare monitoring and diagnostics.

Considering the significant progress and high activity in this research area, this review highlights recent advances in flexible, wearable mechano-acoustic sensors for monitoring body sounds in healthcare applications (Fig. 1a). Firstly, it introduces a range of body sounds, including heart sounds, breath sounds, bowel sounds, and cough and swallow sounds, along with their importance in auscultation (section 2). Secondly, the paper continues with emerging sensing mechanisms, including piezoresistive, capacitive, piezoelectric, and triboelectric effects, with a focus on their working principles and materials (section 3). Sensor configurations, including acoustic sensors, accelerometers, and pressure/strain sensors, are then presented (section 4), together with advances in fabrication techniques (section 5). The applications of mechano-acoustic sensors in health monitoring are subsequently discussed in detail (section 6). Finally, the paper concludes with a perspective on the future directions and potential of this rapidly growing research field.


Fig. 1 Mechano-acoustic sensing for body sound monitoring. Created with BioRender.com. (a) Flexible, wearable mechano-acoustic devices attached to human skin to capture several body sounds. (b) Typical sensing mechanisms of flexible, wearable mechano-acoustic sensors, including, left to right, piezoresistive, capacitive, and piezoelectric.

2. Body sounds – a valuable source of health information

The pattern of body sounds serves as a valuable source of health information and is widely used for early disease diagnosis and monitoring. Non-speech body sounds, such as those produced by the heart, lungs, or gastrointestinal system, predominantly occupy the lower frequency spectrum, ranging from 20 Hz to 1300 Hz, whereas speech and environmental sounds are typically found in the higher frequency spectrum of 300 Hz to 3500 Hz.2 Most body sounds exhibit their greatest intensity within the 20 Hz to 200 Hz band and attenuate significantly as frequency increases. The concentration of body sounds in low-frequency ranges necessitates a focus on subtle changes at these frequencies, in other words, higher frequency resolution at low frequencies. Body sound auscultation therefore requires specialized acoustic sensors with high sensitivity to low-frequency variations to effectively detect and differentiate between the acoustic patterns of body sounds. Each type of body sound, such as those originating from the heart (e.g., murmurs or rhythm abnormalities) or lungs (e.g., wheezing or crackles), exhibits distinct characteristics associated with specific anatomical locations, aligning with the function and position of the underlying organs. Body sounds heard through the skin are often complex, consisting of multiple components. For instance, heart sounds can be picked up clearly from the chest wall yet are often mixed with lung sounds. Separating these sounds for effective auscultation requires a thorough understanding of their distinct characteristics, mainly frequency, amplitude, and duration. This section summarizes the clinical information of various body sounds (heart, breath, cough, swallowing, and bowel), as well as their mechanisms and distinctive characteristics.
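To make the low-frequency emphasis concrete, the sketch below isolates the 20–200 Hz band, where most body sounds concentrate, using a Butterworth band-pass filter. This is a minimal illustration in Python with SciPy; the sampling rate, filter order, and band edges are assumptions for the example, not parameters of any device discussed in this review.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 4000  # sampling rate in Hz (assumed for this example)

def bodysound_bandpass(signal, low_hz=20.0, high_hz=200.0, fs=FS, order=4):
    """Isolate the low-frequency band where most body sounds concentrate."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    # Zero-phase (forward-backward) filtering preserves event timing.
    return sosfiltfilt(sos, signal)

# Synthetic demo: a 50 Hz "body-sound-like" tone plus a 1 kHz interferer.
t = np.arange(0, 2.0, 1 / FS)
x = np.sin(2 * np.pi * 50 * t) + 0.8 * np.sin(2 * np.pi * 1000 * t)
y = bodysound_bandpass(x)  # the 1 kHz component is strongly attenuated
```

In practice the band edges would be tuned per sound type (e.g., wider for tracheal breath sounds), which is why the helper exposes them as parameters.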

Heart sound

The heart sound is one of the most critical physiological signals in clinical auscultation. For decades, it has been extensively investigated for the diagnosis of heart diseases, as it provides essential information that aids in identifying various pathological conditions of the heart, such as heart failure, valvular disease, and cardiomyopathy.

Heart sounds are generated by the flow of blood during cardiac activity as the heart valves open and close.3 The sudden opening and closing of cardiac valves generate pulse waves that propagate throughout the cardiovascular system, leading to the dilation and contraction of blood vessels. Audible sounds caused by these mechanisms are typically auscultated at four specific sites on the chest wall: the aortic area, pulmonic area, tricuspid area, and mitral area.13 These pulse waves also manifest as skin vibrations that can be detected at various locations on the human body, including the fingertips, wrists, throat, and chest. A typical heart sound signal consists of four primary components. The first heart sound (S1) occurs during ventricular systole, typically lasting between 0.1 and 0.12 seconds, within a frequency range of 40 Hz to 60 Hz. The second heart sound (S2) occurs during ventricular diastole, lasting approximately 0.08 seconds, with a frequency range of 60 Hz to 100 Hz.14,15 The third and fourth heart sounds (S3 and S4) are relatively faint, occurring within a frequency range of 15 Hz to 75 Hz. S3 is produced at the beginning of diastole, while S4 occurs during late diastole.16 Abnormalities in these sounds have been shown to be indicators of heart failure during the diastolic phase. The auscultation of S3 and S4 is crucial for noninvasive diagnosis and early detection of myocardial ischemia.17
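The component characteristics above can be collected into a simple lookup table that a processing pipeline might use to choose an analysis band. The dictionary below restates the values from the text; the helper function `analysis_band` is a hypothetical utility for illustration, not from any cited work.

```python
# Heart-sound component characteristics as summarized in the text.
HEART_SOUND_BANDS = {
    "S1": {"freq_hz": (40, 60), "duration_s": (0.10, 0.12)},   # ventricular systole
    "S2": {"freq_hz": (60, 100), "duration_s": (0.08, 0.08)},  # ventricular diastole
    "S3": {"freq_hz": (15, 75), "duration_s": None},           # early diastole, faint
    "S4": {"freq_hz": (15, 75), "duration_s": None},           # late diastole, faint
}

def analysis_band(components):
    """Smallest frequency band covering the requested components."""
    lows, highs = zip(*(HEART_SOUND_BANDS[c]["freq_hz"] for c in components))
    return (min(lows), max(highs))

band = analysis_band(["S1", "S2", "S3", "S4"])  # (15, 100)
```

Capturing all four components thus requires a sensor and filter chain responsive down to roughly 15 Hz, well below the range where the human ear is sensitive.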

Abnormal heart sounds have been shown to correlate with various cardiovascular diseases. For example, researchers have observed differences in heart sound patterns between healthy individuals and patients with valvular heart disease. Healthy heart sounds consist solely of the fundamental S1 and S2 patterns, which result from the contraction and relaxation of the heart. In contrast, unhealthy heart sounds display additional noisy patterns alongside S1 and S2, such as systolic or diastolic murmurs.18,19 The systolic murmurs of the mitral and tricuspid valve regurgitation during systole have acoustic signatures of constant intensity and high frequency. In contrast, diastolic murmurs are often detected in patients with aortic or pulmonic valve regurgitation.7 Congenital heart disease (CHD) is the most prevalent type of birth defect. It presents at birth and potentially affects the structure of the heart and its normal functioning. Patients with CHD exhibit additional sounds, such as S3 and S4 sounds, murmurs, and clicks. Variations in the heart structures associated with different categories of CHD produce differences in the heart sounds, which finally manifest as additional pathological heart sounds at various stages of the cardiac cycle.20

Besides the appearance of abnormal sounds, the spectrum pattern of heart sounds holds significant clinical value in differentiating among various types of heart valve diseases. For instance, coronary artery disease (CAD), caused by the deposition of materials within or beneath the intima of the arteries, alters the frequency patterns of heart sounds. Several studies on analyzing diastolic function have shown that CAD is associated with an increase of energy occurring in the frequency range below 200 Hz.21,22

Breath sound

Breath sounds serve as a valuable source of data on respiratory patterns. The distinction between normal breath sounds and those accompanied by adventitious sounds, such as wheezing and crackles, provides critical information on the physiology and pathology of respiration, including lung condition, airway obstruction, and airway dimensions.26 These sounds are generated by turbulent and vorticose airflow moving through the tracheobronchial tree of the lungs and are typically recorded over the trachea or lungs using acoustic sensors. The characteristics of respiratory sound signals depend strongly on the recording location.26 The frequency of breath sounds ranges from 100 Hz to 4000 Hz, depending on the position at which they are recorded. Since the chest acts as an attenuator and low-pass filter, breath sounds recorded over the lung area typically fall within the range of 100 Hz to 1000 Hz. In contrast, breath sounds recorded at the trachea are usually accompanied by noise with resonances, primarily within the 100 Hz to 3000 Hz range.24 For acoustic flow estimation, the tracheal respiratory sound signal is preferred over lung sounds due to its high intensity and sensitivity to changes in respiratory flow.27

Breath sound analysis provides valuable insights into pulmonary conditions and heart failure, offering important diagnostic information for both respiratory and cardiovascular diseases. Malmberg et al.28 demonstrated that spectral analysis of breath sounds can effectively indicate airway obstruction during bronchial challenge tests in children. Through experiments, they observed an increase in the frequency content of breath sounds in children with asthma, likely caused by inhaled airflow limitation. Alshaer et al.29 demonstrated a strong correlation between the breath sound envelope and the detection of apneas and hypopneas, which are the primary causes of sleep-disordered breathing. Furthermore, the identification of continuous adventitious breath sounds, such as wheezing and crackles during the respiratory cycle, is important in diagnosing obstructive airway pathologies.30 Table 1 summarizes the acoustic characteristics of adventitious sounds and possible lung diseases.

Table 1 Acoustic characteristics of adventitious sounds and possible lung diseases (columns: location best heard,23 acoustics,24 characteristics,23 and possible lung diseases25)

Crackles. Best heard: peripheral lung. Acoustics: rapidly dampened wave deflection; frequency 100–2000 Hz; duration < 20 ms. Characteristics: discontinuous; high-pitched in fine crackles, low-pitched in coarse crackles; inspiratory. Possible lung diseases: alveolitis, pulmonary fibrosis, atelectasis, congestive heart failure.

Wheezes. Best heard: bronchi. Acoustics: sinusoid; frequency 100–1000 Hz; duration > 80 ms. Characteristics: continuous; high-pitched; expiratory > inspiratory. Possible lung diseases: obstructive lung diseases (e.g. asthma), cystic fibrosis.

Rhonchi. Best heard: bronchi. Acoustics: series of sinusoids; frequency < 300 Hz; duration > 100 ms. Characteristics: continuous; low-pitched; expiratory > inspiratory. Possible lung diseases: chronic bronchitis, tumors, pneumonia, obstructive pulmonary diseases.

Stridor. Best heard: larynx, trachea. Acoustics: sinusoid; frequency > 500 Hz. Characteristics: continuous; high-pitched; inspiratory. Possible lung diseases: laryngitis, laryngomalacia, anatomic hypothesis.

Pleural friction rub. Best heard: chest wall. Acoustics: rhythmic succession of short sounds; frequency < 350 Hz; duration > 15 ms. Characteristics: continuous; low-pitched; inspiratory and expiratory. Possible lung diseases: inflammation causing roughness of the surfaces of the visceral and parietal pleura.


Cough sound

Coughing is one of the body's airway protection mechanisms, preventing the entry of noxious materials into the respiratory system.31 Cough sounds have been utilized in the auscultation of over 100 diseases related to respiration and other medically relevant conditions. Analyzing the spectral patterns of cough sounds can reveal changes in the structural properties of tissues during therapy.32

Cough signals can be readily detected from tracheal sounds due to their distinct patterns compared to other body sounds.33 However, their characteristics have been found to vary significantly based on gender, type of sputum, and body structure.34 In 1996, Korpáš et al.32 demonstrated that the intensity of cough sounds in patients with airway inflammation is significantly higher than in healthy subjects. Additionally, a study by Singh et al.35 found that the fundamental frequency of cough sounds tends to decrease with the age of the speaker. For example, the frequency for the 14- to 20-year-old age group was 400 Hz to 600 Hz, while that for speakers aged over 40 years was 200 Hz to 400 Hz.

Depending on the condition of the airways, coughs can be classified into two categories: wet cough, which produces sputum, and dry cough, which does not. Wet coughs are widely considered to result from viral or bacterial infections and are often contagious, while dry coughs may result from conditions such as asthma, gastroesophageal reflux, postnasal drip, sinusitis, and viral infections of the upper respiratory tract. The detection of these cough types assists pulmonologists in the differential diagnosis of conditions such as pneumonia and bronchiolitis, particularly in children under the age of two.36 With advances in signal processing techniques, machine learning has been widely employed for wet/dry cough classification37–39 and for the detection of various diseases. In recent years, cough sounds, along with breath sounds, have emerged as two common physiological signals investigated and utilized for the diagnosis of COVID-19.33,40–44 In addition, neuromuscular disorders, which affect the peripheral nervous system, can be diagnosed through cough impairment. A study by Recasens et al.45 demonstrated a reliable estimation of cough peak flow (CPF) in patients with neuromuscular disorders, where a CPF of less than 270 l min−1 is considered abnormal. Another approach, proposed by Infante et al.,46 involved a machine learning model based on cough sound analysis, achieving an accuracy of 74% for distinguishing between healthy and unhealthy subjects, and 80% between obstructive and non-obstructive conditions.
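As a rough sketch of how machine-learning pipelines of this kind typically begin, the code below extracts two common frame-level audio features, spectral centroid and zero-crossing rate, that a downstream cough classifier could consume. The feature choice, frame length, and sampling rate are illustrative assumptions, not the features used in the cited studies.

```python
import numpy as np

def spectral_centroid(frame, fs):
    """Amplitude-weighted mean frequency of one analysis frame."""
    spec = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), 1 / fs)
    return float(np.sum(freqs * spec) / np.sum(spec))

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs with a sign change."""
    return float(np.mean(np.abs(np.diff(np.signbit(frame).astype(int)))))

def cough_features(signal, fs, frame_len=1024):
    """Frame-wise (centroid, ZCR) pairs for a downstream classifier."""
    n = len(signal) // frame_len
    feats = []
    for i in range(n):
        frame = signal[i * frame_len:(i + 1) * frame_len]
        feats.append((spectral_centroid(frame, fs), zero_crossing_rate(frame)))
    return np.array(feats)

# Sanity check on a synthetic 500 Hz tone: its centroid sits near 500 Hz.
fs = 8000
t = np.arange(0, 0.5, 1 / fs)
feats = cough_features(np.sin(2 * np.pi * 500 * t), fs)
```

Real systems add many more features (e.g., cepstral coefficients) and a trained model, but the framing-plus-features structure is the common starting point.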

Swallow sound

Similar to coughing, swallowing is another airway protective behavior that involves the movement of substances from the oral cavity through the pharynx and into the esophagus. Monitoring swallowing sounds is a crucial method for the early diagnosis of swallowing disorders, including dysphagia – a health issue that can cause difficulty in swallowing and may lead to potentially fatal consequences.47

Swallowing sounds are associated with pharyngeal reverberations resulting from the opening and closing of valves, as well as the vibrations of the vocal tract.48 Acoustically, they are complex and influenced by various factors, including age, gender, bolus volume, and different swallowing efforts such as forceful, normal, or easy swallowing.47 Lima Nunes et al.49 demonstrated that men exhibited lower frequencies and shorter durations for liquid swallowing compared to women. Additionally, they observed that in older age groups, swallowing time tended to decrease, and the peak frequency for liquid swallowing was higher than that for saliva. A frequency range of 150–450 Hz has been demonstrated to yield the highest sensitivity for detecting spontaneous swallows.50 The site over the lateral border of the trachea immediately inferior to the cricoid cartilage has been shown to be optimal for detecting swallowing sounds.51,52

Swallow records reveal the eating habits and ingestive behaviors of patients suffering from eating disorders, serving as a valuable source of data for obesity monitoring and treatment. Monitoring ingestive behavior (MIB) has been widely used in active weight control programs by providing the objective feedback needed for diet management.53 There have been several efforts at food type classification and volume estimation using swallowing patterns.53–57 Furthermore, swallow sound pattern analysis provides valuable information on swallowing diseases. For instance, the mean swallow duration for neurological patients with dysphagia was found to be 1402.1 ms for a 10 ml liquid bolus of water, much longer than the mean swallow duration of 440 ms in healthy individuals.58 Additionally, in male and female subjects with Parkinson's disease, swallow reflexes were triggered over three times more frequently than in age-matched controls. This increased frequency of swallowing in Parkinson's disease patients is often due to laryngeal bobbing, a failed attempt to achieve full laryngeal elevation and open the cricopharyngeal sphincter.59 In another approach to distinguishing between healthy subjects and those with swallowing disorders, Dudik et al.60 compared data from patients with dysphagia but without stroke to previous data collected from healthy individuals. They identified significant differences in center frequency, peak frequency, and bandwidth, highlighting the potential diagnostic value of these acoustic features in detecting swallowing abnormalities.

Bowel sound

Bowel sounds are closely associated with vital digestive processes that reflect health conditions and are influenced by a wide range of intrinsic and extrinsic factors.61 They can be considered a vital sign, comparable to heart sounds, particularly when intestinal function is impaired or disrupted. Bowel sound monitoring is particularly important for the early resumption of oral feeding in patients after surgery, reducing the incidence of postoperative ileus (POI).62 Bowel sounds can occur as isolated single bursts or as consecutive patterns with very short intervals between occurrences, called multiple bursts.63 Although bowel sounds are produced regularly, knowledge of their mechanisms remains limited due to their random occurrence and variability. Understanding the relationship between bowel movements, the movement of luminal contents, and bowel sounds remains challenging, primarily because of the absence of a comprehensive theoretical model.

In recent studies, physiologists attribute bowel sounds to peristaltic movements, which involve the contraction and relaxation of the gut walls, propelling intraluminal liquids and gases. This process creates audible sounds that are indicative of intestinal activity and can provide insight into the functioning of the digestive system.64–66 The dominant frequency of bowel sounds has been reported to range between 100 Hz and 300 Hz, with none of the recordings exhibiting a dominant frequency above 1000 Hz.67 A study by Craine et al.68 indicated that the frequency of bowel sounds is predominantly centered around 300 Hz, with an approximately Gaussian profile and a full width at half maximum of about 150 Hz.

Bowel sounds, produced by the movement of intestinal contents and gas during peristalsis, are clinically recognized as useful indicators of intestinal function.68,69 For instance, hyperactive bowel sounds, described as “loud”, “high-pitched”, or “tinkling”, are often associated with conditions such as diarrhea or early-stage intestinal obstruction. In contrast, hypoactive bowel sounds, characterized by significantly diminished or absent sound, are linked to conditions like bowel obstruction, paralytic ileus, bowel torsion, or peritonitis, all of which may result in reduced peristalsis.70 Patients suffering from bowel diseases, such as colon cancer and irritable bowel syndrome (IBS), or medical and neurological conditions that affect the intestinal tract, often experience motility and functional bowel disorders that result in changes in bowel sound patterns. A significant difference in the sound-to-sound interval has been observed between patients with IBS and healthy individuals.68 For healthy subjects, the interval is approximately 1931 ± 365 ms, whereas for the IBS group it is significantly shorter, around 452 ± 35 ms. This difference highlights the altered bowel motility patterns associated with IBS. The dominant frequency and duration of bowel sounds have also been shown to differ with the type of intestinal obstruction.67 In acute large bowel obstruction, the sound duration was significantly longer, with a median of 0.81 seconds compared to 0.55 seconds in acute small bowel obstruction (P = 0.021). Additionally, the dominant frequency was notably higher in large bowel obstruction, at 440 Hz, compared to 288 Hz in small bowel obstruction. These findings suggest that bowel sound analysis could serve as a useful non-invasive indicator for differentiating between types of bowel obstruction.
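The dominant-frequency comparisons above can be estimated from a recording with a standard power-spectral-density peak search. The sketch below uses Welch's method from SciPy on a synthetic tone; the signal, sampling rate, and segment length are illustrative assumptions, not values from the cited studies.

```python
import numpy as np
from scipy.signal import welch

def dominant_frequency(signal, fs, nperseg=1024):
    """Frequency at the peak of the Welch power spectral density."""
    freqs, psd = welch(signal, fs=fs, nperseg=nperseg)
    return float(freqs[np.argmax(psd)])

# Synthetic demo: a 440 Hz tone (the large-obstruction figure cited in the
# text) buried in mild background noise.
fs = 4000
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 440 * t) + 0.1 * rng.standard_normal(t.size)
f_dom = dominant_frequency(x, fs)  # close to 440 Hz
```

On real bowel-sound recordings, segmentation into individual bursts would precede this step, since the dominant frequency is defined per sound event rather than per recording.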

3. Sensing mechanisms and materials

Body sounds produce various types of mechanical stimuli on the skin surface, including strain, pressure, and vibration. Electromechanical transducers are crucial for detecting and analyzing the mechano-acoustic signals generated by the human body. This section provides an overview of common sensing mechanisms such as piezoresistive, capacitive, and piezoelectric effects that can be employed in the development of miniaturized, wearable sensors for measuring the aforementioned body sounds.

Piezoresistive

Piezoresistive sensors function based on the principle that a mechanical load deforms the sensing element, resulting in a change in its electrical resistance. This deformation occurs when a sensor is subjected to acoustic waves, allowing it to convert sound pressure into an electrical output. This signal can then be processed and analyzed to provide information on the acoustic characteristics. For isotropic electrical conductors, the relative change in resistance can be expressed in terms of strain as follows:71
 
$$\frac{\Delta R}{R} = \left(1 + 2\nu\right)\frac{\Delta l}{l} + \frac{\Delta\rho}{\rho} \qquad (1)$$
where l is the length, ν is Poisson's ratio of the material, and ρ is the resistivity. Generally, the change in resistance of a stressed metal is predominantly influenced by alterations in its geometry, whereas for a semiconductor, the change in resistance primarily relies on variations in the resistivity.72
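Equation (1) can be illustrated numerically. The sketch below evaluates the relative resistance change for a metallic element, where the resistivity term is negligible and the gauge factor reduces to 1 + 2ν; the Poisson's ratio and strain values are assumed for illustration.

```python
def relative_resistance_change(strain, poisson_ratio, d_rho_over_rho=0.0):
    """Eq. (1): dR/R = (1 + 2*nu) * strain + d_rho/rho.

    For metals the resistivity term is ~0, so the gauge factor
    (dR/R divided by strain) reduces to 1 + 2*nu.
    """
    return (1.0 + 2.0 * poisson_ratio) * strain + d_rho_over_rho

# Example: an assumed Poisson's ratio of 0.44 at 0.1% strain.
strain = 1e-3
dr_over_r = relative_resistance_change(strain, poisson_ratio=0.44)
gauge_factor = dr_over_r / strain  # 1 + 2*0.44 = 1.88
```

The resulting gauge factor of order 2 is consistent with the text's point that metals are far less sensitive than semiconductors, whose resistivity term dominates.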

The piezoresistive effect has been employed in a diverse range of materials, including metals, semiconductors, graphene, and hydrogels. In metallic materials, electrical resistivity typically remains constant under mechanical loads, so resistance changes are primarily attributable to geometric modifications. This characteristic leads to a relatively low sensitivity compared to semiconductors. However, the fabrication of metallic materials, particularly on flexible substrates such as polyimide, is simpler and requires fewer patterning and transfer steps. Hence, metallic materials have been widely employed in a broad range of strain sensors, including acoustic sensors. Metals such as gold (Au), silver (Ag), platinum (Pt), copper (Cu), titanium (Ti), and aluminum (Al) are known for their excellent ductility, making them suitable for flexible electronic devices and systems. Ag, Au, and Al provide good conductivity but exhibit high chemical reactivity that may result in the release of metal ions when subjected to prolonged contact with body tissues. To address these limitations, they are typically encapsulated within a biocompatible polymer to prevent exposure to skin chemicals such as sweat. For instance, Cu-on-polyimide has been utilized as an industry-standard material for manufacturing fPCB circuits. These Cu-based circuits are typically packaged within stretchable substrates such as Silbione™ rubber or Ecoflex to enhance their attachment to the skin.6,8,9 Furthermore, matrices of Ag nanowires (NWs) mixed with PDMS or Ecoflex offer exceptional mechanical and electrical properties; they have been widely employed in wearable thermotherapy patches and can be extended to mechano-acoustic sensing applications.73 On the other hand, biocompatible metals such as Pt, Au, and Ti can serve as metallic contacts, sensing elements, or in some cases ECG (electrocardiography) electrodes in wearable sensors, without requiring sophisticated encapsulation layers. For instance, a pressure sensor based on a 50 nm-thick Cr/Au film was demonstrated to pick up the small skin vibrations caused by vessel expansion under blood pulse waves, capturing the heartbeat from the wrist.1 Using a full Wheatstone bridge for signal readout, the sensor achieved a sensitivity of Vp/Vref = 0.0031 mmHg−1 (here Vref and Vp are the input and output voltages of the Wheatstone bridge, respectively).
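Given the reported full-bridge sensitivity of Vp/Vref = 0.0031 mmHg−1, the expected output voltage for a given pressure follows directly. The sketch below evaluates this relation; the excitation voltage and pulse-pressure value are assumed for illustration and are not from the cited work.

```python
def bridge_output_v(pressure_mmHg, v_ref, sensitivity_per_mmHg=0.0031):
    """Full Wheatstone bridge output: Vp = (Vp/Vref per mmHg) * Vref * P."""
    return sensitivity_per_mmHg * v_ref * pressure_mmHg

# Assumed example: a 40 mmHg pulse pressure with 1 V bridge excitation.
vp = bridge_output_v(pressure_mmHg=40.0, v_ref=1.0)  # about 0.124 V
```

Because the bridge output scales with the excitation voltage, the dimensionless Vp/Vref sensitivity is the figure of merit that allows comparison across readout circuits.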

Recent improvements in the sensitivity of metallic piezoresistive sensors involve nanowire and surface engineering techniques. Metallic nanowires exhibit a greater piezoresistive effect due to their high aspect ratio, enabling a higher sensitivity to small forces than traditional bulk materials.72 The utilization of high-conductivity metallic nanowires, such as silver and gold, further contributes to reduced power consumption for wearable applications.1 Another method recently introduced to enhance the sensitivity of metallic materials is surface engineering, such as microcracking. These sensors operate on the principle that stretching enlarges the cracks and disconnects conductive paths, increasing the resistance. This mechanism enables microcrack sensors to capture the ultrasmall pressures caused by sound waves. For instance, Gong et al.74 developed hierarchically resistive (HR) wearable sensors designed for body sound monitoring from the throat. The sensor includes a platinum film engineered with 20 nm cracks on its surface (cracked Pt) for acoustic signal detection, and two other layers based on gold nanowires (v-AuNWs and u-AuNWs) for detecting larger signals such as finger touch or throat movement. The cracked Pt layer was demonstrated to reach a detection limit as low as 0.01% strain with a sensitivity as high as 0.33 Pa−1, enabling it to detect signals such as the human carotid artery pulse, respiration, and speech.

While metallic sensing elements typically offer a gauge factor below 10, several semiconductors exhibit a gauge factor of up to 200,75 making them suitable for the development of wearable acoustic sensors. Furthermore, their electrical conductivity can be tuned by varying the carrier concentration, facilitating miniaturization and integration with external conditioning circuits. Their unique mechanical properties (e.g., a high Young's modulus of 169 GPa in 〈110〉 Si) and chemical stability are key characteristics enabling the development of free-standing microstructures such as cantilevers and diaphragms, which are critically important for acoustic sensors.76,77 Examples of wearable acoustic sensors for the detection of low-frequency acoustic signals include the work of Nguyen et al.,77 who utilized a silicon-based cantilever achieving a resolution of approximately 0.2 mPa over the frequency range of 0.1–250 Hz. The highly flexible structure allowed the cantilever to attain a sensitivity of over 10−2 Pa−1. The high sensitivity and low-frequency detection capability were achieved by employing narrow hinges (10 μm in width) in the cantilever while retaining a small sensing-element footprint (300 μm × 300 μm).
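The gauge-factor comparison above follows from the standard piezoresistive relation ΔR/R = GF × ε. A short sketch with representative values (the specific gauge factors and strain level are illustrative, chosen near the ranges quoted in the text):

```python
# Relative resistance change from the gauge-factor relation dR/R = GF * strain.
# GF = 2 is typical of metal films; GF = 200 is the upper semiconductor
# value quoted in the text. The strain level is an illustrative example.

def delta_r_over_r(gauge_factor: float, strain: float) -> float:
    """Fractional resistance change for a given gauge factor and strain."""
    return gauge_factor * strain

metal_gf, si_gf = 2.0, 200.0
strain = 1e-4  # 0.01% strain, near the detection limits cited above

# The semiconductor element produces a 100x larger signal for the same strain:
print(delta_r_over_r(metal_gf, strain))
print(delta_r_over_r(si_gf, strain))
```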

Most recently, low-dimensional (e.g., 1D and 2D) materials such as graphene,78–80 carbon nanotubes (CNTs),81–83 and MXenes81,84–86 have emerged as candidates for flexible sensors due to their outstanding properties (Fig. 2a). Mixing these highly conductive, low-dimensional materials with soft polymeric substrates offers much higher stretchability compared to traditional strain sensors based on metals or semiconductors, which usually exhibit narrow sensing ranges below 5%.87,88 One of the most commonly used materials is graphene, which is composed of a single layer of carbon atoms arranged in a hexagonal honeycomb lattice and demonstrates exceptional electrical conductivity and mechanical strength. Employing reduced graphene oxide (rGO) with a gauge factor ranging from 16.2 to 150, Liu et al.78 reported a fish-scale-like sensor for capturing heart pulses that offers an extensive sensing range of up to 82% strain and a detection limit down to 0.1% strain. Another class of low-dimensional materials used in acoustic sensors is carbon nanotubes (CNTs), cylindrical structures formed by rolling up graphene sheets. Their high aspect ratio and nanoscale dimensions enhance their sensitivity to acoustic vibrations. Liu et al.89 developed a strain sensor utilizing a thickness-gradient film of single-wall carbon nanotubes (SWCNTs) on an elastic polydimethylsiloxane (PDMS) substrate through a self-pinning method. The resulting material exhibited a remarkable gauge factor of up to 161 for strains below 2% and the capability to withstand uniaxial strains exceeding 150%. MXenes are a new class of two-dimensional transition metal carbides and nitrides with exceptional electrical properties. Composites based on MXenes offer additional tunability and can significantly enhance the performance of piezoresistive acoustic sensors.
By utilizing the layer-by-layer (LBL) spray coating technique, Cai et al.81 developed sandwich-like Ti3C2Tx MXene/CNT sensing layers fabricated from delaminated Ti3C2Tx MXene flakes incorporated with single-walled carbon nanotubes (SWCNTs). The wearable sensor, with a thickness of less than 2 μm, can detect a large range of deformation with a detection limit of 0.1% strain. It exhibited a high stretchability of 130% with a gauge factor of up to 772.6, enabling real-time monitoring of large-scale motions and detection of several vocal sounds from the human throat. Despite their unique characteristics, 2D materials still present a number of limitations, such as hysteresis, poor linearity, and instability under environmental variations. Additionally, adverse conditions, such as sweating and high humidity, can significantly impact the stability of sensing materials and compromise sensor performance. For instance, MXene can be oxidized under ambient conditions, reducing the lifetime of the sensors and causing consistency and reliability issues. To address these challenges, one promising approach involves surface modification techniques, such as functionalization or passivation, to shield the MXene surface from oxidative degradation. Recently, biomimetic superhydrophobic surfaces have been used to enhance material stability, significantly improving environmental adaptability, durability, and resistance to corrosion and liquid exposure.90 For example, He et al.91 developed a cotton-based superhydrophobic polypyrrole (PPy)/MXene pressure sensor with outstanding sensing performance and excellent stability. The sensor, consisting of MXene entirely covered by PPy, demonstrated stability in wet and corrosive environments, with favorable long-term performance over 1000 cycles.
Additionally, with a wide detection range of 0–80 kPa and a high sensitivity of −20.1 kPa−1 for the 0–2 kPa range, the sensor can be mounted on various body parts, such as fingers, elbows, and wrists, to monitor physiological signals.


Fig. 2 Typical materials for different sensing mechanisms. (a) Piezoresistive, (b and c) capacitive, and (d and e) piezoelectric. (a) MXene-based piezoresistive sensor with Bionic Intermittent Structure (BIS). Scale bar: 0.5 cm. Reproduced with permission.85 Copyright 2023, Wiley-VCH. (b) Capacitive pressure sensors using pyramidal microstructured PDMS. Reproduced with permission.119 Copyright 2014, Wiley-VCH. (c) Capacitive pressure sensor with natural-material-structured dielectric layers. Reproduced with permission.122 Copyright 2018, Wiley-VCH. (d) Piezoelectric sensors with island-bridge structures based on multi-walled carbon nanotubes (MWCNTs). Reproduced with permission.132 Copyright 2017, Elsevier. (e) Piezoelectric PVDF nanofibers with a diameter of 310 ± 60 nm. Reproduced with permission.136 Copyright 2016, Springer Nature.

In addition to these piezoresistive materials, various filler compositions, such as polyurethane (PU),92 polydimethylsiloxane (PDMS),83 and poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate) (PEDOT:PSS),12,93 have been employed to tune both the electrical and mechanical properties of sensors. Among these conductive polymers, PEDOT:PSS has been one of the most widely utilized materials due to its excellent dispersibility in water and polar solvents, biocompatibility, high electrical conductivity, and remarkable stability.94 The chemical stability of the polymers also enables long-term reliable performance in wearable sensors under various biological environments, as well as under high temperatures and highly corrosive conditions.95 Recently, hydrogels82 have emerged as promising candidates for long-term wearable devices due to their biocompatibility, exceptional stretchability exceeding 1200%, and self-healing capabilities.96 However, due to their high water content, hydrogel-based sensors are relatively sensitive to ambient temperature, leading to reduced stability and limiting their utility in diverse monitoring applications.97 This issue can be addressed by incorporating organic solvents into hydrogels, forming binary-solvent-based organic conductive hydrogels.98 This crosslinking strategy disrupts the strong hydrogen bonding between water molecules, effectively decreasing water evaporation and freezing within the hydrogel. The solvent can strengthen the matrix, making the sensor highly suitable for prolonged and continuous monitoring of physical activities under environmental changes.

Capacitive

Capacitive sensors operate based on the change of the dielectric constant or geometrical dimensions of capacitors under acoustic pressure. These sensors consist of two conductive plates (electrodes) that are placed parallel to each other, separated by a dielectric material (an insulating layer). The simplest configuration is two parallel flat plates that form a capacitance of:99
 
C = ε0εrA/d (2)
where A is the overlapped area of the two plates, ε0 is the permittivity of free space, εr is the dielectric constant of the material between the plates, and d is the separation between the plates. Changes in capacitance primarily occur due to variations in the dielectric constant of the dielectric layer εr, the distance between the electrode layers d, and the area of overlap between them A. The common mechanisms of capacitive sensors involve the variation of the first two parameters,100 with the electrodes typically made from either thin film membranes or flexible patches.
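The parallel-plate relation can be evaluated directly to see how gap compression changes the capacitance. A minimal sketch (the PDMS dielectric constant, plate area, and gap are illustrative values, not taken from a specific device):

```python
# Parallel-plate capacitance from eqn (2): C = eps0 * eps_r * A / d.
# Example values: PDMS-like dielectric (eps_r = 2.5), 1 mm^2 plates,
# 10 um gap. All parameters here are illustrative assumptions.

EPS0 = 8.854e-12  # F/m, permittivity of free space

def capacitance(eps_r: float, area_m2: float, gap_m: float) -> float:
    """Capacitance in farads of an ideal parallel-plate capacitor."""
    return EPS0 * eps_r * area_m2 / gap_m

c0 = capacitance(eps_r=2.5, area_m2=1e-6, gap_m=10e-6)
# Compressing the gap by 10% (10 um -> 9 um) raises C by ~11%:
c1 = capacitance(eps_r=2.5, area_m2=1e-6, gap_m=9e-6)
print(c0, c1 / c0)
```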

In the parallel thin-film membrane configuration, diaphragm properties such as Young's modulus, Poisson's ratio, thickness, density, and geometric shape are the key features determining the performance of acoustic sensors.72 A variety of membrane materials have been employed, including mylar,101 metals,102,103 p-doped silicon,104 silicon nitride,105–107 silicon carbide,108 polysilicon,109,110 polyimide,111 and graphene.112 Several research groups have utilized metals as diaphragm materials.102,103 Although metals can be easily patterned, they typically exhibit lower mechanical sensitivity and are more likely to fail prematurely due to fatigue compared to other materials. The mylar diaphragm, first introduced by Hohm et al.,105 showed higher durability; however, as it was found to wrinkle under compressive stress, the authors proposed the use of silicon nitride (Si3N4) to enhance the robustness of the sensors. Silicon nitride withstands tensile stress well and offers advantages in process integration, but it still exhibits relatively high intrinsic stress, which affects sensor performance. Additionally, as an insulating material, Si3N4 requires the deposition of metal electrodes on the membrane for electrical functionality, adding complexity to the sensor design.113 Pedersen et al.111 developed polyimide diaphragms featuring several key advantages, such as reasonable stress values and a low processing temperature (typically below 300 °C) suitable for integration with other components and materials without risking thermal damage. Like Si3N4, polyimide requires metal deposition and patterning on its top surface for sensing acoustic waves. Compared to these non-conductive membranes, Si (single crystal or polycrystalline) can serve as both the mechanical supporting layer and the electrode.
Advancements in material engineering have enabled the development of ultrathin Si membranes with thicknesses of a few hundred nanometers, enhancing the sensitivity of Si-based capacitive sensors. The use of polysilicon, which can be formed with low-temperature processing, can reduce residual stress, further improving the performance of acoustic sensors.109,114

Thin-film capacitive sensors are composed of flexible electrodes and elastic dielectric layers. Increasing the dielectric constant is an effective strategy to enhance the sensitivity through changes in the dielectric properties, achieving both a high initial capacitance and large variations in capacitance. Commonly utilized dielectric layers include PDMS, Ecoflex, and PET films, with dielectric constants of 2.3–2.8,114 2.17,115 and 3.5,116 respectively. Geometric changes, such as a reduction in the separation distance d between the electrodes in response to pressure, depend on the stiffness of the dielectric layer. Therefore, reducing the elastic modulus of the dielectric layer can improve the sensitivity of capacitive sensors.100 A common approach based on this concept involves engineering the dielectric layer surface with microstructures. For instance, structuring the dielectric with air pockets effectively increases the permittivity as air is displaced during deformation. This approach also softens the dielectric layers, further increasing the deformability of the sensor. Several types of microstructures have been proposed, including electrode microarrays,117 micropyramid structures,117–119 abrasive-paper-templated surfaces,120 human-tissue-inspired structures,121 and plant-leaf-templated structures122–124 (Fig. 2b and c). Micropyramids are a widely used architecture due to their simple fabrication and high sensitivity. An 8 × 8 pixel pressure sensor pad employing a microstructured PDMS film, proposed by Mannsfeld et al.,117 offers a sensitivity as high as 0.55 kPa−1, much higher than the 0.048 kPa−1 achieved in previous studies without microstructures.125 Yang et al.126 employed porous materials that incorporate air into the dielectric layer, effectively reducing its elastic modulus and achieving a sensitivity of 44.5 kPa−1 in a low-pressure regime below 100 Pa. The proposed sensor utilized a dielectric layer combining porous and micropatterned structures, resulting in a significant improvement in sensing performance and enabling it to capture wrist pulses and gentle airflow and to detect the landing of fruit flies with a small pressure of 0.14 Pa.
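The sensitivity values quoted throughout this subsection (in kPa−1) are the normalized figure of merit S = (ΔC/C0)/ΔP. A short sketch of how S is computed from measured capacitances (the capacitance and pressure values are illustrative, chosen to reproduce the 0.55 kPa−1 micropyramid figure):

```python
# Normalized capacitive pressure sensitivity S = (dC/C0) / dP in kPa^-1,
# the figure of merit quoted for the microstructured sensors above.
# The numbers below are illustrative, not measured device data.

def sensitivity_per_kpa(c0: float, c: float, pressure_kpa: float) -> float:
    """Sensitivity in kPa^-1 from baseline and loaded capacitance."""
    return ((c - c0) / c0) / pressure_kpa

# A sensor whose capacitance rises from 10 pF to 10.055 pF under 0.01 kPa
# has S = 0.55 kPa^-1, matching the micropyramid example in the text:
print(sensitivity_per_kpa(10e-12, 10.055e-12, 0.01))
```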

Piezoelectric

Broad bandwidth, high sensitivity, and self-powered operation make piezoelectric materials a preferred option for vibration and sound detection.127 Unlike capacitive and piezoresistive transducers, which require external power sources, piezoelectric devices offer a unique capability for self-sensing and self-powered wearable systems. Piezoelectric transducers, while functioning as acoustic sensors, can also serve as actuators that propagate sound waves into the body for ultrasound scans. Piezoelectricity refers to the ability of certain materials to generate an electric charge in response to applied mechanical stress, which can be approximated by static linear relations between the electrical and mechanical variables:128
 
D = εTE + dT (Direct effect) (3)
 
S = sET + dE (Converse effect) (4)
where S is the strain tensor, T is the stress tensor, E is the electric field vector, D is the electric displacement vector, sE is the elastic compliance matrix at constant electric field, d is the matrix of piezoelectric constants, and εT is the permittivity measured at constant stress. The magnitude of the piezoelectric effect is typically quantified by the piezoelectric constants d,129 representing the ability of a material to convert mechanical stress into electrical charge. The larger the piezoelectric coefficient, the more effectively the material converts mechanical energy into electrical energy, making it a key factor in evaluating piezoelectric performance for sensing, actuation, and energy harvesting applications. To characterize piezoelectric materials, two key coefficients are typically considered: d33 and d31. For d33, the force is applied along the polarization axis and the charge is collected along the same axis, whereas for d31 the force is applied perpendicular to the polarization axis while the charge is collected along it.130
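As a numerical illustration of the constitutive relations above, the sketch below evaluates the one-dimensional electric-displacement form D3 = d33T3 (zero applied field). The d33 values follow Table 2; the stress level and function names are illustrative assumptions:

```python
# Direct piezoelectric effect along the poling axis, 1-D reduction of the
# electric-displacement relation: D3 = d33 * T3 at zero electric field.
# d33 values follow Table 2; the 1 kPa stress is an illustrative example.

def charge_density(d33_pC_per_N: float, stress_Pa: float) -> float:
    """Surface charge density in C/m^2 for stress along the poling axis."""
    return d33_pC_per_N * 1e-12 * stress_Pa

# 1 kPa applied to PZT-5H (d33 = 593 pC/N) vs. quartz (d33 = 2.3 pC/N):
q_pzt = charge_density(593, 1e3)
q_quartz = charge_density(2.3, 1e3)
print(q_pzt / q_quartz)  # the ceramic yields ~258x more charge
```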

These parameters have been explored in several natural materials, including quartz (SiO2), topaz, and organic materials such as silk, wood, rubber, bone, and hair.131 Although natural crystal-based materials show a high mechanical quality factor (Qm), their manufacturing process is difficult and expensive. Advances in materials science have enabled the development of a broad range of highly efficient piezoelectric materials, including semiconductors (e.g. GaN, ZnO), ceramics (e.g. PZT, BaTiO3, LiNbO3), polymers (e.g. PVDF, PLLA), and composite materials.

Ceramic materials with high piezoelectric constants are extensively used in medical applications and underwater communication due to their exceptional properties.72 They possess large piezoelectric and dielectric coefficients, high electromechanical coupling factors, and efficient energy conversion rates. For instance, barium titanate (BaTiO3) has a piezoelectric constant d33 of 190 pC N−1, significantly greater than that of natural materials like quartz (d33 = 2.3 pC N−1). However, piezoceramics are brittle and have low stretchability, making them easily damaged under large mechanical strains. As a potential solution to this limitation, the island-bridge structure, consisting of movable floating islands and flexible serpentine connections, significantly enhances the overall stretchability of the device132 (Fig. 2d). Complex and expensive fabrication processes, along with mechanical brittleness, remain the main challenges for ceramic piezoelectric materials in flexible, wearable acoustic sensors.

Despite having smaller piezoelectric effects, polymer materials offer mechanical flexibility and are highly conformable to human skin, enhancing their suitability for wearable sensor applications. Piezoelectric polyvinylidene fluoride (PVDF) sensors have attracted attention for wearable applications due to their numerous advantages, including flexibility, wide frequency response, low cost, ease of fabrication, biocompatibility, and air permeability.133 These features make PVDF an ideal material for developing wearable sensors that can effectively detect and monitor physiological signals.134–136 A belt-type device based on a 30 μm thick PVDF film was introduced by Choi et al.134 to capture cardiorespiratory signals from the chest wall. The PVDF sensor demonstrated the capability to detect low-frequency chest movements down to 0.3 Hz with an SNR of 18.06. Stretchability of up to 30% allows integration with PDMS substrates, enabling conformal contact with human skin and making such sensors suitable for capturing subtle skin deformations on the human wrist (Table 2).

Table 2 Piezoelectric coefficients for different piezoelectric inorganic and organic materials137

Category  | Material | Type           | d33 (pC N−1) | d31 (pC N−1)
----------|----------|----------------|--------------|-------------
Inorganic | PMN-PT   | Single crystal | 2000–3000    | —
Inorganic | Quartz   | Single crystal | 2.3          | −67
Inorganic | ZnO      | Crystal        | 6–13         | −5
Inorganic | GaN      | Crystal        | 2–4          | −1.5
Inorganic | AlN      | Ceramic        | 3–6          | −2
Inorganic | PZT-5H   | Ceramic        | 593          | −274
Inorganic | BaTiO3   | Ceramic        | 190          | −78
Inorganic | LiNbO3   | Ceramic        | 16           | −1
Organic   | PVDF     | Polymer        | −33          | 23
Organic   | PLLA     | Polymer        | 6–12         | —


The low piezoelectric coefficients of piezoelectric polymers can be addressed by utilizing piezoelectric composites that combine ceramics and polymers, offering advantages such as good flexibility, ease of processing, and a high piezoelectric constant, making them ideal for wearable applications.138,139 For instance, incorporating graphene oxide into dissolved PVDF forms a PVDF/GO piezoelectric material.138 This material achieves a high sensitivity of 4.3 mV Pa−1 and the ability to detect pressures as low as 10 Pa, suitable for capturing vocal vibrations when attached to the throat. Furthermore, the sensor can function as a nanogenerator, generating up to 1.2 nW m−2 of power harvested from respiration when employed on a face mask. BaTiO3 particles can also be added to polymers to improve their piezoelectric effect. The addition of BaTiO3 further enhances the piezoelectric constant d31 of P(VDF-TrFE) copolymer from 8.3 to 46 pC N−1,139 maintaining high stretchability and providing high acoustic sensing capabilities. These sensors can be incorporated into woven fabrics and integrated into clothing, enabling the detection of heart sounds and human voice vibrations transduced through the chest wall.
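The quoted sensitivity of the PVDF/GO sensor (4.3 mV Pa−1) can be related to acoustic levels using the standard dB SPL to pascal conversion. A small sketch (the speech level chosen is an assumed example, not from the cited work):

```python
# Output voltage of a piezoelectric sensor for a given sound pressure level.
# Sensitivity 4.3 mV/Pa is the PVDF/GO figure quoted in the text; the
# dB SPL -> Pa conversion is standard; the 60 dB SPL input is an assumption.

P_REF = 20e-6  # Pa, reference pressure for dB SPL

def spl_to_pascal(spl_db: float) -> float:
    """Convert a sound pressure level in dB SPL to pascals."""
    return P_REF * 10 ** (spl_db / 20)

def output_mv(spl_db: float, sensitivity_mv_per_pa: float = 4.3) -> float:
    """Sensor output in millivolts for a given SPL."""
    return sensitivity_mv_per_pa * spl_to_pascal(spl_db)

# Normal speech at ~60 dB SPL corresponds to 0.02 Pa, giving ~0.086 mV:
print(output_mv(60.0))
```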

Advances in micromachining have improved piezoelectric performance with the development of nanofiber and microstructured materials. PVDF nanofibers exhibit significant enhancements over thin films, including smaller diameters, higher piezoelectric effects due to a higher length-to-diameter ratio, and higher surface-to-weight ratio. Nanofiber-based PVDF devices have achieved sensitivities as high as 266 mV Pa−1,136 over five times greater than traditional PVDF film devices (Fig. 2e). The sensitivity can be further extended by employing three-dimensional topologies such as microstructures, greatly improving performance characteristics. Inspired by human skin, multimodal electronic skin (e-skin) has been developed, mimicking the diverse sensory structures and functions of human fingertips.140 This e-skin employs flexible and microstructured ferroelectric films composed of PVDF and rGO, enabling the detection and differentiation of acoustic pulse waves and airflow pressures as low as 0.6 Pa. These innovations highlight the versatility and potential of advanced piezoelectric materials in wearable sensing technologies.

4. Mechano-acoustic sensors configuration

In addition to sensing mechanisms, the configuration of mechano-acoustic sensors plays a key role in determining the measurement bandwidth, range, and resolution. Depending on the application, these sensors are generally categorized as microphones, accelerometers, or flexible pressure/strain sensors. While soft polymeric materials can conformally attach to human skin due to their inherent flexibility, rigid MEMS microphones and accelerometers, with their small footprints, can be integrated into fPCBs, allowing for stable contact with the skin and minimizing artifacts. This section discusses mechano-acoustic sensor architectures and their characteristics.

Microphones

Microphones are transducers that convert sound pressure into electrical signals. Typical microphones can pick up frequencies ranging from 20 Hz to 20 kHz, covering most audible body sounds, such as cardiorespiratory and cough sounds9,76,141,142 and bowel sounds.66,70,143

Most of the structural designs of microphones utilize either diaphragms or cantilever beams. When the sound pressure impacts the microphones, it applies a force that causes the thin diaphragm or cantilever beam to vibrate at a frequency matching the sound wave. This vibration results in deflection or bending of the diaphragm or beam in response to the sound pressure. A sensing mechanism – mostly using piezoelectric, capacitive, or resistive sensors – detects this deflection and converts it into an electrical signal. This electrical signal corresponds to the intensity and frequency of the sound, enabling the acoustic sensors to capture audio or pressure variations. Each of the three primary sensing mechanisms – piezoresistive, capacitive, and piezoelectric – offers distinct advantages and limitations. The comparison of sensitivity and dynamic range between these mechanisms is shown in Table 3.

Table 3 Comparative analysis of three different transduction schemes144
Parameter             | Piezoresistive   | Capacitive       | Piezoelectric
----------------------|------------------|------------------|----------------
Input power           | Required         | Required         | None
Sensitivity (μV Pa−1) | Low (0.1 to 100) | Good (400 to 1000) | Medium (10 to 500)
Dynamic range         | Relatively wide  | Narrow           | Wide


Fig. 3 presents typical structural designs of acoustic sensors, including diaphragms, cantilevers, and structured membranes. Diaphragm microphones suffer from residual stress after manufacturing, often caused by their fixed boundaries and various factors during fabrication (e.g., thermal expansion) that may induce unwanted tension across the diaphragm. Due to these stresses, buckling in the radial or circumferential direction can occur in the membrane,145 limiting the mechanical properties and sensitivity. Several efforts have been made to address this issue. Membrane cuts can reduce buckling: buckling in the radial direction is addressed by making cuts in the circumferential direction,146 while buckling in the circumferential direction is addressed by cuts in the tangential direction (Fig. 3b). Corrugated diaphragms and spring-supported diaphragms are other techniques used to reduce the initial stresses in thin-film diaphragms. Corrugations are usually incorporated along the edges of the membrane to maintain a flat central diaphragm area,147–149 but the corrugation depth needs to be chosen carefully, as increasing it can effectively reduce buckling but also degrades the mechanical sensitivity.107 Spring-supported diaphragms combine a rigid diaphragm with flexible springs, so the deformation caused by residual stress can be significantly reduced. This design was further improved in sensitivity (SNR) and bandwidth by using a flexible V-shaped spring, silicon nitride electrical isolation, and a ring-type oxide/polySi mesa, respectively.150


Fig. 3 Typical configuration of wearable microphones. (a) Configurations of diaphragm microphones based on different sensing mechanisms, left to right: piezoresistive, piezoelectric, and capacitive. (b) Membrane modification strategies to reduce initial residual stress, left to right: membrane with surface cutting, membrane with spring supports, and membrane with corrugations. (c) Configurations of cantilever microphones based on different sensing mechanisms, left to right: piezoresistive, piezoelectric.

Although those structural modification strategies can significantly reduce diaphragm buckling, they are generally complex and usually require multiple fabrication steps. Cantilever sensors can overcome these limitations owing to their unique structural design. The single-edge anchoring configuration, instead of a fully clamped membrane, minimizes the buildup of initial stress that commonly occurs in diaphragms. This structure also allows pressure to be released through the air gaps, avoiding the air trapping that frequently occurs in diaphragm designs.151 Compared to the fully clamped structure, the cantilever structure offers a higher sensitivity due to its lower mechanical stiffness.152 Square and triangular cantilever diaphragm sensors have been introduced by Fang's group by cutting a silicon membrane into four separate blades.150,153,154 Each of these cantilevers is suspended by one edge over the cavity chamber. For the same diaphragm, the square cantilever was found to provide larger stress and a wider stress distribution than the triangular shape, significantly increasing the sensitivity.150 To further enhance the sensing performance, serpentine support beams were employed to obtain a higher aspect ratio and thus increase the sensitivity. A piezoelectric MEMS resonant microphone array proposed by Liu et al.152 was demonstrated to reach a sensitivity as high as 131.4 mV Pa−1. This high sensitivity enables the device to capture respiration and detect wheezing in lung sounds.

Despite these advantages, air leakage through the gap acts like a high-pass filter,151 making cantilever microphones less sensitive to low-frequency acoustic signals, typically below 20 Hz. This limitation prevents these sensors from capturing several important body sounds, such as the S3 and S4 heart sounds, which can reach as low as 10 Hz. These frequencies are critical in cardiovascular diagnostics, as they provide insights into heart valve activity and potential abnormalities. Reducing the gap surrounding the cantilever slows the air leakage rate, maintains the chamber pressure, and thus allows the sensor to detect low-frequency acoustic signals. A four-triangular-blade piezoelectric sensor with 1.36 μm gaps was introduced by Tseng et al.,155 maintaining a high sensitivity at 10 Hz. By further reducing the gap to 1 μm, Nabeshima et al.151 demonstrated a low-frequency detection limit of 0.7 Hz, capable of capturing vessel expansion caused by heart pulse waves at the throat.
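The impact of the leakage cutoff on low-frequency body sounds can be illustrated with a simple model. The sketch below assumes the leakage path behaves as a single-pole high-pass filter; this is a simplification, and the actual response depends on the gap geometry:

```python
# Illustrative single-pole high-pass model of cantilever-gap air leakage.
# The one-pole assumption and the cutoff values are simplifications chosen
# to mirror the 20 Hz and 0.7 Hz limits discussed in the text.

import math

def highpass_gain(f_hz: float, f_cutoff_hz: float) -> float:
    """Magnitude response of a first-order high-pass filter (0..1)."""
    ratio = f_hz / f_cutoff_hz
    return ratio / math.sqrt(1.0 + ratio ** 2)

# A 10 Hz heart-sound component is attenuated to ~45% with a 20 Hz cutoff,
# but passes almost unattenuated when the cutoff is lowered to 0.7 Hz:
print(highpass_gain(10, 20))
print(highpass_gain(10, 0.7))
```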

Accelerometers

Accelerometers are widely utilized for measuring vibrations and acceleration, including in health and medical monitoring applications such as tracking heart rate, respiration, and body motion. Unlike acoustic sensors, which detect sound pressure, accelerometers have a unique element, namely a proof mass, that significantly influences their sensing performance. The proof mass is typically suspended within a fixed frame using cantilever beam structures,156 allowing it to move in response to inertial forces. When the sensor is accelerated, the proof mass displaces due to inertia, which can be detected using one of the sensing mechanisms discussed above. Compared to the thin cantilevers or diaphragm structures used in acoustic sensors, the addition of a proof mass provides greater motion inertia. This makes accelerometers more sensitive to physical motion, such as movement or acceleration, rather than sound pressure. The higher inertia generally gives accelerometers a lower dynamic range than acoustic sensors. Advances in micromachining have made MEMS-based accelerometers widely utilized due to their tiny footprint, high sensitivity, and low power consumption. Generally, piezoresistive and capacitive transduction are the most prominent sensing schemes used in MEMS accelerometers.87
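The proof-mass behavior described above can be sketched with the standard spring-mass model: well below the natural frequency, the displacement is x = ma/k = a/ωn². The parameter values below are illustrative, not taken from a specific device:

```python
# Quasi-static proof-mass displacement of a spring-mass accelerometer:
# x = m*a/k = a / omega_n^2, valid for excitation well below the natural
# frequency. The 1 kHz resonance and 1 g input are illustrative values.

import math

def displacement(accel_m_s2: float, f_natural_hz: float) -> float:
    """Proof-mass displacement in meters for a given acceleration."""
    omega_n = 2 * math.pi * f_natural_hz
    return accel_m_s2 / omega_n ** 2

# A device with a 1 kHz natural frequency sensing a 1 g (9.81 m/s^2)
# input displaces by roughly a quarter of a micrometer:
print(displacement(9.81, 1000.0))
```

The a/ωn² scaling shows the basic trade-off: lowering the natural frequency (larger proof mass, softer springs) increases displacement and hence sensitivity, at the cost of bandwidth.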

Piezoresistive accelerometers are among the first commercialized acceleration sensors. Their design structures include single-, dual-, and quad-cantilever-beam configurations (Fig. 4a and b). MEMS-based piezoresistive accelerometers have several advantages, such as a simple design, robustness, and a simple manufacturing process. They generally offer a wide bandwidth but are limited by their relatively low sensitivity, making them preferred for impulse/impact detection rather than for wearable health monitoring.157 Capacitive accelerometers (a typical design is shown in Fig. 4c), in contrast, are highly sensitive to small movements and can detect subtle changes in motion, suiting them to monitoring low-g forces and low-frequency vibrations. With those advantages, capacitive MEMS accelerometers have been utilized in high-precision applications, including health monitoring. Despite these benefits, the nonlinear response of conventional capacitive accelerometers is a limitation, making the signal readout process complex. To overcome this nonlinearity, differential capacitive microsensors were introduced using a parallel comb structure (Fig. 4d). Under an inertial force, the movement of the proof mass causes the capacitance to increase on one side of the lateral comb and decrease on the other side, resulting in good linearity.158 Examples of this approach include the work reported by Lou et al.,159 whose device comprises a proof mass, suspending serpentine springs, and comb fingers. The device acts as a full-bridge capacitive sensor; each half-capacitive bridge is split into two parts located at two cross-axis corners. This differential layout helps cancel common-mode input noise such as substrate coupling, power supply coupling, and cross-axis excitation, and features a linear range of ±13 g.
Lateral comb structures are also widely used in tri-axis accelerometers, capable of detecting motion along all three axes.160–162 Commercial tri-axis accelerometers, such as the ADXL335,7,163 and the LSM6DSL,9 have been adopted for body sound monitoring because of their low cost, convenience, and compact size. They are particularly effective at picking up low-frequency body sounds (including heartbeats) and movements (such as chest movements and body motions), with sensitivities as high as 300 mV g−1.


Fig. 4 Typical configurations of wearable accelerometers. (a) Configuration of piezoresistive cantilever accelerometer. (b) From left to right: piezoresistive cantilever accelerometers with single cantilever-beam, dual cantilever-beam, and quad cantilever-beam structures. (c) Configuration of differential capacitive accelerometer. (d) Differential capacitive accelerometer with lateral comb structure: configuration (left) and schematic (right).

Advances in nanoengineering have enabled the development of capacitive accelerometers with ultra-thin electrode separation, which offer higher bandwidth and improved sensitivity. These innovations make it possible to place the sensor directly on the skin to detect body sounds transmitted through it; such specialized sensors are known as accelerometer contact microphones (ACMs).164,165 For example, Gupta et al.164 introduced a wearable ACM capable of capturing a wide range of mechano-acoustic physiological signals. This device was fabricated using a MEMS process on a silicon-on-insulator wafer with a 40 μm thick device layer and 270 nm capacitive gaps. The ultra-thin gap enables a sensitivity as high as 76 mV g−1 and a linear response over ±16 g. The accelerometer can pick up a broad frequency range, from below 1 Hz to 12 kHz, covering heart and respiratory rates, heart sounds, lung sounds, and the body motion and position of an individual.

Flexible pressure sensors and strain sensors

Flexible thin patch sensors represent a class of mechano-acoustic sensors widely used for monitoring body signals. Compared to rigid MEMS acoustic sensors and accelerometers, these devices typically utilize soft substrates and offer more conformal contact with human skin. Flexible pressure/strain sensors operate based on the deformation of the skin surface to which they are attached. Vibrations from the skin induce stress in the sensor, which is converted into electrical signals through sensing mechanisms such as piezoresistive, piezoelectric, triboelectric, or capacitive effects.130
Sensors with microstructured materials. Porous and microstructured materials have been used in various types of sensors, including piezoresistive, capacitive, and piezoelectric devices86,121,126,166 (Fig. 5a–c). Porous materials enhance both electrical and mechanical properties, while microstructures improve the contact between electrodes and sensing layers. In one such example, Park et al.166 proposed a piezoresistive sensor using interlocked microdome arrays that increase the contact area between electrodes. The sensor was made by micromolding a composite of carbon nanotubes (CNTs) and PDMS prepolymer into films with 3 × 4 μm microdome structures. These films, combined face-to-face, form a piezoresistive sensor with a superior sensitivity of 15.1 kPa−1 and a minimum detectable pressure of 0.2 Pa, enabling it to accurately monitor breathing patterns. Sensitivity was further improved in work reported by Ma et al.,86 which introduced a piezoresistive sensor based on an ultralight, superelastic aerogel fabricated by mixing reduced graphene oxide (rGO) with MXene. The MX/rGO aerogel not only combines the large specific surface area of rGO with the high conductivity of MXene (Ti3C2Tx) but also exhibits a rich porous structure, leading to significantly enhanced performance compared with single-component rGO or MXene. The piezoresistive sensor based on the MX/rGO aerogel shows extremely high sensitivity (22.56 kPa−1), fast response time (<200 ms), and good stability over 10 000 cycles. With the ability to capture pressures below 10 Pa, the sensor can pick up heart pulses in adults. While microstructured and porous materials offer highly sensitive pressure sensing, their fabrication still involves many steps and is costly due to mold requirements. Thin metallic serpentine wires, in contrast, offer simple and low-cost fabrication.
By etching serpentine patterns with micro-scale widths (3 μm) into ultra-thin metal films (50 nm), Park et al.1 developed temperature-independent strain gauge sensors with a Wheatstone full-bridge configuration that exhibited high sensitivity and a linear response to applied pressures in the range of 0–200 mmHg. Arranging two strain gauges above and below the neutral axis of a polyimide film effectively cancels out the influence of temperature fluctuations. Furthermore, the full-bridge configuration exhibited significantly enhanced sensitivity, achieving a pressure–voltage slope of 0.0031 mmHg−1, roughly three times higher than the 0.0009 mmHg−1 of a quarter-bridge configuration.
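The sensitivity gain of the full bridge over the quarter bridge follows from the standard Wheatstone relations: with one active gauge the output is approximately ΔR/(4R), while four active gauges (two in tension, two in compression) yield ΔR/R, an ideal factor of four (the measured ratio of 0.0031/0.0009 ≈ 3.4 is close to this). A textbook sketch, not the authors' actual circuit:

```python
def quarter_bridge(dr_over_r, v_ex=1.0):
    # One active gauge (R + dR) against three fixed R: exact output,
    # slightly nonlinear in dR/R, ~ (dR/R)/4 for small changes
    x = dr_over_r
    return v_ex * x / (2 * (2 + x))

def full_bridge(dr_over_r, v_ex=1.0):
    # Two gauges at +dR and two at -dR: exact output, linear in dR/R
    return v_ex * dr_over_r

x = 0.01                          # 1% resistance change
q = quarter_bridge(x)
f = full_bridge(x)
print(f"quarter: {q:.6f} V, full: {f:.6f} V, ratio ≈ {f/q:.2f}")
```

The full bridge is also exactly linear in ΔR/R, whereas the quarter bridge carries a small nonlinear term, which is another reason full-bridge layouts are favored for precision pressure sensing.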
Fig. 5 Typical configurations of flexible, wearable sensors, with (a–c) microstructured materials, (d–e) hole patterns, and (f–g) sensor array structures. (a) Flexible sensor based on conductive composite elastomers with interlocked microdome-array structures. Scale bar: 5 μm. Reproduced with permission.166 2014, ACS Publications. (b) Flexible pressure sensor based on PDMS with porous-pyramid microstructures. Reproduced with permission.126 2019, ACS Publications. (c) Flexible piezoresistive sensor based on metal thin films with microwire patterns. Reproduced with permission.1 2024, Springer Nature. (d) Flexible acoustic sensor with eight holes patterned around the rim of each diaphragm. Reproduced with permission.167 2019, Springer Nature. (e) Flexible acoustic sensor with holes patterned on the backplate. Reproduced with permission.168 2022, Wiley-VCH. (f) Active-matrix flexible pressure 6 × 6 sensor array. Reproduced with permission.173 2023, Wiley-VCH. (g) Flexible sensor array based on integrated all-nanofiber networked electrodes. Reproduced with permission.171 2022, Elsevier.
Sensors with hole patterns. Capacitive flexible sensors based on hole-patterned structures can improve the frequency range and establish intimate contact with the skin. These sensors consist of a low-stress bending membrane and a high-stress perforated membrane; holes added to the surface improve the acoustic sensitivity and frequency response (Fig. 5d and e). An example of this approach is an ultrathin (<5 μm), conformable vibration sensor introduced by Lee et al.,167 based on hole-patterned diaphragm structures in a polymer film. The holes around the rim reduce the stiffness of each diaphragm and the air damping underneath. The design not only offers high sensitivity (5.5 V Pa−1) to the human voice but also enables noise-canceling functionality even in challenging acoustic environments. However, the sensor is limited by its poor frequency response and narrow acoustic pressure range. The authors subsequently employed SU8 for the capacitive diaphragm structure, owing to its high processability, relatively low Young's modulus and dissipation factor, and low curing temperature.168 Hole patterns applied to the backplate form a perforated structure with an open-area fraction of 44%, significantly reducing the air damping. The SU8 sensor, with a small footprint of less than 9 mm2, achieved high sound-sensing quality, featuring a flat frequency response (15–10 000 Hz) and high sensitivity (22.4 mV Pa−1).
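Sensitivities quoted in V Pa−1 can be related to everyday sound levels through the standard dB SPL scale, p = p_ref · 10^(L/20) with p_ref = 20 μPa. A quick check using the 22.4 mV Pa−1 figure from the text (the 94 dB example level, a common microphone calibration point, is an assumption here):

```python
# Relating sensitivities quoted in V/Pa to sound levels on the dB SPL scale:
# p = p_ref * 10**(L/20), with the standard reference p_ref = 20 uPa.
# The 94 dB example level is an assumption (a common calibration point).
P_REF = 20e-6                      # Pa, reference pressure for dB SPL

def spl_to_pa(level_db):
    """Convert a sound pressure level in dB SPL to pressure in Pa."""
    return P_REF * 10 ** (level_db / 20)

p = spl_to_pa(94)                  # 94 dB SPL corresponds to ~1 Pa
v_out = 22.4e-3 * p                # SU8 sensor sensitivity of 22.4 mV/Pa
print(f"{p:.3f} Pa -> {v_out * 1e3:.1f} mV output")
```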
Sensor array configurations. The attachment location is critically important for targeting the signal of interest and minimizing the influence of undesired acoustic sources, and optimizing it usually requires expert knowledge. Sensor arrays improve the convenience of locating and installing sensors on human skin (Fig. 5f and g). According to the readout method, matrix sensor arrays can be classified into passive and active matrix arrays. In passive arrays, electrodes are laid directly on the material, while in active arrays, active components (e.g., transistors) are tightly integrated with each pixel element.169 Generally, passive arrays are easier to fabricate and can be used for wearable applications, such as tactile sensors.170,171 However, electrical crosstalk within the array can lead to inaccurate resistance measurement. For body sound monitoring, active arrays have been widely utilized, including flexible thin-film transistors (TFTs).169 For example, Baek et al.172 introduced spatiotemporal measurements of arterial pulse waves using wearable active-matrix pressure sensors. The proposed arrays consist of inkjet-printed organic TFTs in a 10 × 10 active matrix integrated with piezoresistive sensor sheets, achieving a high sensitivity of 16.8 kPa−1 with a low power consumption of 101 nW. Another 6 × 6 capacitive sensor array, based on an FEP-Air-FEP sandwich structure, was proposed by Han et al.173 to record heart sounds simultaneously at different locations on the chest, including the aortic, pulmonic, Erb's point, tricuspid, and mitral regions. The device exhibits an excellent dynamic sensitivity of 591 pC kPa−1 in the range of 0–8 kPa with a 600 Hz bandwidth, allowing the capture of heart, breath, and Korotkoff sounds.

5. Fabrication technologies for mechano-acoustic sensors

Fabrication techniques play a crucial role in determining the feasibility of acoustic sensors for real-world and clinical applications, as they affect factors such as cost, efficiency, and sensing performance. This section introduces the widely used fabrication methods for mechano-acoustic sensors.

MEMS (micro-electro-mechanical systems) technique

MEMS technology serves as the industrial standard for the fabrication of acoustic sensors due to its mature processes, high scalability, and ability to achieve small footprints, Fig. 6a. MEMS processes are compatible with various sensing mechanisms, including piezoresistive,76,77,151 capacitive,164,165 and piezoelectric effects.17,152 In wearable devices, where compactness is critical, MEMS offers significant advantages by minimizing the geometric mismatch between the rigid sensor platform and the soft, stretchable surfaces of human organs and skin. The fabrication of MEMS-based acoustic sensors typically begins with defining the sensing structure, followed by metallization to form electrical components, backside etching to open the air cavities, oxide removal to release the sensing structure, and finally bonding the microfabricated device to a rigid substrate, such as glass, to create an enclosed chamber.
Fig. 6 Fabrication technologies for mechano-acoustic sensors. (a) MEMS technology. (b) Mold casting technology. Reproduced with permission.179 2020, MDPI. (c) Thermal drawing technology. Reproduced with permission.72 2023, Wiley-VCH. (d) Inkjet printing technique. Reproduced with permission.172 2022, ACS Publications.

The materials commonly used in MEMS fabrication include zinc oxide (ZnO), lead zirconate titanate (PZT),153 and silicon (Si),76,77,151 with silicon being the most popular platform due to its availability, low cost, and compatibility with well-established microfabrication processes. For instance, Nguyen et al.76,77 developed a silicon acoustic sensor on a silicon-on-insulator (SOI) wafer. The cantilever-based sensing element was formed in a thin Si film (300 nm × 100 μm × 100 μm) by arsenic ion implantation into 〈100〉 Si, giving a carrier concentration of approximately 1019 cm−3 and a diffusion depth of ∼100 nm. Metal layers (Au/Cr) were then deposited on the piezoresistive layer to create contacts. The Si sensing element and electrodes were subsequently patterned by wet etching and reactive ion etching (RIE), respectively. The process continued with back-side lithography followed by Si dry etching using deep reactive ion etching (DRIE). Finally, the cantilever diaphragm was released using vapor HF to remove the buried oxide layer. The complete sensor, with a small footprint of 1.5 × 1.5 mm, can be mounted on rigid or flexible printed circuit boards (PCBs) by wire bonding for various health monitoring applications.76,151,174

While Si devices are a preferred choice for wearable acoustic sensors, they are often affected by temperature fluctuations and light exposure, which can compromise their sensing performance in extreme environments. To address these limitations, wide-bandgap materials such as silicon carbide (SiC) and gallium nitride (GaN) have been proposed, offering excellent thermal stability and high optical transparency, making them suitable for high-temperature applications and real-time optical observations.

Mold casting technique

The use of master molds offers several advantages for the fabrication of soft substrates, especially microstructured PDMS layers,117,175–179 Fig. 6b. These benefits include ease of customization for tailored product development, suitability for large-scale fabrication, and a significant reduction in material waste. This method provides an efficient and sustainable pathway for producing advanced sensing devices while maintaining flexibility and adaptability in design.

There are several approaches to manufacturing microstructures for mold casting. One relies on conventional photolithography to prepare a patterned silicon template, with typical structures including micropyramids117 and micropillars.175 Despite its high accuracy, this technique is usually complicated and time-consuming. Another approach exploits naturally occurring biomaterials, such as lotus leaves,176–178 to directly template the microstructure arrays. This approach is simple and cost-effective; however, it is significantly limited by the uniformity of the resulting microstructures. In particular, their shape, dimensions, and spacing cannot be freely controlled, as they are dictated by the inherent properties of the natural biomaterial.121

3D printing techniques have also been utilized for mold fabrication, offering numerous advantages including fast prototype production, ease of sensor structure customization, reduced fabrication costs, and simplified manufacturing processes.179 For example, Zhang et al.180 introduced a capacitive soft pressure sensor with bonded microstructure interfaces. The PDMS microstructured dielectric layers were formed by mold casting using a 3D-printed mold. Specifically, a resin template with microcone arrays (50 μm in diameter and 40 μm in height) was fabricated using high-precision 3D printing (NanoArch S130, BMF Precision Tech, Inc.). A mixture of PDMS base and curing agent (mass ratio 5 : 1) was then cast onto the microcone array mold. The templated PDMS layer, after curing at 80 °C for 30 minutes, was peeled off, serving as a reverse template. This reverse template can in turn be used to produce dielectric layers by mold casting with the same micropatterns as the designed resin template. The mold-cast structure enables contact-separation behavior at the electrode–dielectric interface, resulting in an excellent detection limit of 0.007 Pa and a frequency range of up to 10 kHz.

Thermal drawing technique

The thermal drawing technique has emerged as a promising method for fabricating innovative flexible and wearable devices, Fig. 6c. This technique thermally stretches a macroscale preform (in which various functional materials are strategically arranged) into a microscale fiber device with intricate geometries and architectures. The process begins by feeding the multimaterial preform into a furnace, where its constituent materials are heated to their softening or melting points. The fiber is then drawn from the softened preform and undergoes controlled necking to achieve a consistent diameter, accomplished by external forces such as those exerted by turning capstans. Thermal drawing yields a down-scaled fiber that retains the geometry, composition, and cross-sectional structure of the original preform at a significantly reduced diameter. The resulting fiber, with a much higher aspect ratio, offers improved flexibility and sensing performance compared with the input preform.
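At steady state, the draw-down ratio is set by mass conservation: the volumetric feed rate equals the take-up rate, v_feed · A_preform = v_draw · A_fiber, so the diameter shrinks by √(v_feed/v_draw). A back-of-the-envelope sketch with assumed, not cited, speeds:

```python
import math

# Back-of-the-envelope draw-down estimate for thermal drawing, from mass
# conservation at steady state: v_feed * A_preform = v_draw * A_fiber.
# All numbers are assumptions for illustration, not from a cited process.
d_preform = 25.0e-3      # m, preform diameter
v_feed = 1.0e-3 / 60.0   # m/s, preform feed speed (1 mm/min)
v_draw = 1.0 / 60.0      # m/s, capstan draw speed (1 m/min)

# Cross-sectional area scales with v_feed/v_draw, so diameter scales with sqrt
d_fiber = d_preform * math.sqrt(v_feed / v_draw)
print(f"fiber diameter ≈ {d_fiber * 1e6:.0f} μm")
```

A 1000:1 speed ratio thus reduces a 25 mm preform to a submillimeter fiber, consistent with the feature sizes reported for thermally drawn sensing fibers.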

Among the materials employed in this process, PVDF-based piezoelectric polymers stand out due to their lower processing temperature and compatibility with the diverse materials used in fiber devices. One such example was introduced by Yan et al.,139 beginning with the construction of a macroscopic preform consisting of P(VDF-TrFE) piezoelectric material (with a relatively low melting point of 150 °C) loaded with BaTiO3 ceramic particles. Carbon-loaded polyethylene (CPE) was added because its high viscosity at the draw temperature delays the onset of capillary instability in the low-viscosity crystalline piezoelectric domain. The whole assembly was encapsulated in an elastic poly(styrene-b-(ethylene-co-butylene)-b-styrene) (SEBS) cladding. The preform was then thermally drawn into a fiber in a three-zone vertical tube furnace with top-, middle-, and bottom-zone temperatures of 120 °C, 252 °C, and 80 °C, respectively. During drawing, four copper wires were introduced into the hollow channels of the CPE, providing conductivity across two length scales: the microscale cross-section and the meter-scale fiber length. Tens of meters of sensing fiber with submillimeter features were thereby achieved.

Thermal drawing offers several advantages over traditional fabrication methods, making it increasingly popular for producing advanced fiber-shaped piezoelectric acoustic sensors. Notable benefits include single-step device fabrication, scalable manufacturing, and compatibility with other techniques.72 Furthermore, these characteristics facilitate the creation of highly complex, functional fibers in a streamlined manner. However, the integration of diverse materials during thermal drawing poses challenges due to differences in thermal, mechanical, and chemical properties among the constituent materials. Such discrepancies can lead to structural deformations or failures during the drawing process. Addressing these issues requires careful selection and compatibility assessments of materials to ensure they maintain the intricate transverse structure and perform cohesively throughout the process.181

Inkjet printing technique

In recent years, inkjet printing has garnered significant research attention due to its versatility as a mask-free, non-contact patterning technology, Fig. 6d. By programming the motion of the printing nozzle, this method enables the deposition of various materials, such as polymers, metals, carbon, and other 2D materials, onto diverse substrates. Technically, inkjet printing operates in two modes: (1) drop-on-demand (DoD) printing, which delivers droplets generated by thermal bubbles or a piezoelectric actuator, and (2) continuous inkjet (CIJ) printing, which generates a continuous ink stream through a nozzle using an electrostatic or magnetic field.182 The DoD technique offers the advantage of reduced consumption of costly ink materials, owing to micro-droplet deposition and precise programmable patterning, making it a promising method for flexible and wearable sensor fabrication. For instance, Baek et al.172 proposed inkjet-printed thin-film transistor arrays integrated with piezoresistive sheets. Ag ink was used to form the transistor bottom-gate electrodes and word lines with a drop-on-demand inkjet printer (DMP 2850, Fujifilm Dimatix). An Ag nanoparticle ink (55 wt% Ag nanoparticles, average diameter 7 nm, in tetradecane) was printed through a single nozzle for a reliable printing process. During printing, the cartridge and platen temperatures were set at 40 °C and 50 °C, respectively, and the printed patterns were then sintered at 120 °C for 30 minutes to form the sensor array layout.

Inkjet printing offers high manufacturing throughput, scalability for large-area patterning, excellent biocompatibility, and precise deposition capabilities on a wide variety of substrates.182 It provides the flexibility to create geometries from computer-aided design (CAD) digital patterns and is compatible with a wide range of printable materials,183 offering simpler and more innovative alternatives for producing flexible PCBs compared to conventional methods.

6. Health monitoring applications of mechano-acoustic sensors

Cardiovascular monitoring

Blood flow measurements can be captured from several positions on the body, including the fingertips76 (Fig. 7a), wrist,78,86,126,186 and throat1 (Fig. 7b), providing vital information for diagnosing cardiovascular diseases. Blood pressure (BP) and flow velocity obtained from wearable acoustic transducers can reveal clinical insights into heart failure, carotid stenosis, and renal failure.187 The most common technique for measuring blood pressure uses a pressure cuff with a stethoscope; despite its high accuracy, this technique is not always convenient for the user and is unsuitable for long-term monitoring. Alternative approaches have been introduced to overcome these limitations.
Fig. 7 Applications of flexible, wearable mechano-acoustic sensors in cardiovascular monitoring. (a) Flexible, wearable mechano-acoustic sensors for real-time monitoring of blood pulse at human fingertip. Reproduced with permission.76 2024, Wiley-VCH. (b) Soft, full Wheatstone bridge 3D piezoresistive pressure sensors for blood pulse wave and blood pressure measurement at wrist and throat. Reproduced with permission.1 2024, Springer Nature. (c) Wearable piezoelectric sensor for continuous blood pressure monitoring at the wrist. Reproduced with permission.185 2023, Wiley-VCH. (d) Wearable piezoelectric sensor for cuffless blood pressure estimation at the wrist. Reproduced with permission.184 2022, MDPI. (e) Precision wearable accelerometer contact microphones for longitudinal monitoring of mechano-acoustic cardiopulmonary signals from the chest wall. Reproduced with permission.164 2020, Springer Nature. (f) Epidermal mechano-acoustic sensing electronics for cardiovascular diagnostics and human-machine interfaces. Reproduced with permission.7 2016, Science.

One such method is pulse wave velocity (PWV) measurement, which is closely related to BP and can be used to estimate vascular stiffness and central arterial blood pressure through the Moens–Korteweg and Hughes equations.188 PWV can be calculated from the pulse transit time (PTT), the time taken for a pulse wave to travel between two locations in the cardiovascular system. Guo et al.184 developed a compact cuffless BP measurement device using a piezoelectric sensor array to measure the PWV (Fig. 7d). An optical sensor attached to the arm measured the photoplethysmography (PPG) intensity ratio (PIR) signal to estimate the arterial parameters of patients. The proposed device showed high BP estimation accuracy, with errors of 0.75 ± 3.9 mmHg for systolic blood pressure (SBP), 1.1 ± 3.12 mmHg for diastolic blood pressure (DBP), and 0.49 ± 2.82 mmHg for mean arterial pressure (MAP).
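The PTT → PWV → stiffness chain can be sketched with the textbook relations, PWV = Δd/PTT and the Moens–Korteweg equation PWV = √(Eh/(ρD)), which links PWV to the effective wall modulus E. This is not the calibration used in ref. 184; every number below is a hypothetical example:

```python
# Hedged illustration of the PTT -> PWV -> wall-stiffness chain using the
# textbook Moens-Korteweg relation, PWV = sqrt(E*h / (rho*D)). This is not
# the calibration of ref. 184; every number below is hypothetical.
distance = 0.25          # m, path length between the two measurement sites
ptt = 0.05               # s, measured pulse transit time
pwv = distance / ptt     # m/s, pulse wave velocity

rho = 1060.0             # kg/m^3, blood density
h = 0.5e-3               # m, arterial wall thickness (assumed)
D = 4.0e-3               # m, vessel diameter (assumed)
E = pwv ** 2 * rho * D / h   # effective elastic modulus of the arterial wall
print(f"PWV = {pwv:.1f} m/s, E ≈ {E / 1e3:.0f} kPa")
```

In practice, devices such as that of ref. 184 replace the explicit mechanical model with a subject-specific calibration, but the monotonic PWV–stiffness–BP relationship above is what makes PTT a usable BP surrogate.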

Another method exploits the high correlation between blood pressure and the amplitude of vessel expansion caused by the blood pulse, simplifying both the measurement and the calculation procedure. For example, Min et al.185 proposed a wearable piezoelectric bracelet attached to the wrist for continuous blood pressure measurement (Fig. 7c). To convert sensor output signals to BP values, the authors used a linear transfer function. The device achieved a mean error and standard deviation of −0.89 ± 6.19 mmHg for SBP and −0.32 ± 5.28 mmHg for DBP. To enhance user comfort, the Rogers research group proposed a wireless, flexible device based on a strain gauge Integrated Smart Sensor (3MIS) for blood pressure estimation. A dimensionless factor k, which depends on the mechanical properties of the phantom skin, is introduced to convert the sensor output to BP values and is experimentally acquired with a reference system.

In addition to blood pressure and velocity, heart sounds are critically important signals for assessing and monitoring potential heart diseases. The stethoscope is the primary tool used in clinical settings to obtain heart sounds. However, traditional stethoscopes depend heavily on the clinical experience of doctors, and their rigid, bulky form factor hampers long-term and continuous medical assessment.189,190 Wearable sensors worn on the chest wall offer long-term, convenient monitoring of heart sounds. S1 and S2 heart sounds, with a frequency range of 30–100 Hz (ref. 14 and 15), can be clearly detected by acoustic sensors or accelerometers.9,10,141,142,163–165 Wireless continuous auscultation using a soft wearable stethoscope (SWS) system was introduced by Lee et al.142 The device utilizes commercial MEMS acoustic sensors integrated with a Bluetooth circuit on a flexible PCB. The flexible PCB with a stretchable serpentine interconnect structure minimizes the device thickness and facilitates conformal attachment to the chest wall, capturing cardiac signals with an SNR of up to 14.8 dB. To provide even more convenient and comfortable monitoring, a T-shirt woven fabric sensor capable of auscultating cardiac sound signals from the chest was reported.139 Information on the cardiovascular system and heart sounds was recorded by the acoustic shirt with an SNR as high as 30 dB.

Auscultation of the S3 heart sound is critical for cardiovascular monitoring; however, detecting S3 is particularly challenging for most acoustic transducers due to its low frequency and weak amplitude. To address this, Gupta et al.164 developed a precision wearable accelerometer-based contact microphone capable of detecting pathological S3 heart sounds in patients with preexisting conditions (Fig. 7e). The device not only captures the subtle S3 heart sound, which typically occurs approximately 150 ms after the S2 sound, but also simultaneously monitors shallow breathing patterns. This dual functionality provides valuable diagnostic insights, with the S3 sound serving as a key early marker for patients with reduced cardiac output associated with congestive heart failure. In patients with cardiovascular pathologies, murmurs are often present in addition to the signatures associated with S1 and S2. An accelerometer-based epidermal mechano-acoustic sensor introduced by Liu et al.7 demonstrated the ability to capture murmurs during cardiac valve closure and opening periods, detecting the constant-intensity murmur of an elderly female diagnosed with mild tricuspid and pulmonary regurgitation. By integrating the accelerometer with a pair of conformal capacitive electrodes laminated onto the sternum, the device enables simultaneous measurement of seismocardiography (SCG) and ECG. This allows the concurrent capture of electrophysiological and mechanical data for cardiac auscultation, providing insights into heart motion: electrical signals followed by mechanical coupling and a sequence of mechano-acoustic signatures as the heart chambers contract and the valves close. Furthermore, irregular beat rates were observed in patients with similar diseases.
The murmurs were absent at the aortic site, highlighting the importance of changing recording positions during auscultation to ensure a comprehensive diagnosis.7 To minimize inaccurate measurements resulting from sensor placement, Han et al.173 developed a 6 × 6 sensor array capable of simultaneously mapping heart sounds over a broad area of the chest. This platform can detect pulse waveforms corresponding to the pressures of the right atrium, right ventricle, and pulmonary artery. The cardiac sound provides valuable information about the physiological activity of right atrial contraction and relaxation, as well as the opening and closing of the tricuspid valve. By allowing direct comparison of sound volume and frequency across locations, the sensor array eliminates the need for frequent position changes during auscultation. Table 4 summarizes the aforementioned applications of mechano-acoustic sensors in cardiovascular monitoring along with their key features and performance indicators.

Table 4 Summary and comparison of wearable devices for cardiovascular monitoring
Device description Sensors Detectable signals Performance Ref.
Cuffless arterial compliance sensor Piezoelectric pressure sensor Blood pulse wave velocity Blood pressure measurement error: SBP (0.75 ± 3.9 mmHg), DBP (1.1 ± 3.12 mmHg), MAP (0.49 ± 2.82 mmHg) 184
Optical sensor PPG
Wearable piezoelectric blood-pressure sensor Flexible piezoelectric pressure sensor Blood pulse wave Sensitivity: 0.062 kPa−1 185
Blood pressure measurement error: SBP (−0.89 ± 6.19 mmHg), DBP (−0.32 ± 5.28 mmHg)
Soft, full Wheatstone bridge 3D pressure sensors Piezoresistive pressure sensor Blood pulse wave Temperature-independent 1
Sensitivity: 0.0031 mmHg−1
Heart rate measurement error: 1.779 ± 1.96 bpm
Blood pressure measurement error: MAP (2.153 ± 1.96 mmHg)
Single fiber enables acoustic fabrics via nanometer-scale vibrations Flexible piezoelectric fiber sensor Heart sound Minimum sound-detection capability: 0.002 Pa (40 dB) 139
Sensitivity: 19.6 mV measured at 94 dB and 1 kHz
Precision wearable accelerometer contact microphones Capacitive MEMS accelerometer Heart sound, SCG, lung sound, chest wall motion Bandwidth: <1 Hz to 12 kHz 164
Sensitivity: 76 mV g−1
Capability of capturing S3 heart sounds.
Epidermal mechano-acoustic sensing electronics Commercial MEMS accelerometer ECG, SCG Bandwidth: 0.5 Hz to 550 Hz 7
Capability of capturing heart murmur sounds
Speech recognition with 90% accuracy
Wearable piezoelectret patches Flexible piezoelectret pressure sensor Heart sound, Korotkoff sound Dynamic sensitivity of 591 pC kPa−1 in the pressure range 0–8 kPa and 290 pC kPa−1 in the pressure range above 8 kPa 173
Bandwidth: ∼0 Hz to 600 Hz with a frequency resolution < 0.1 Hz


Pulmonary disease monitoring

Breath analysis has been a cornerstone of clinical diagnostics, providing valuable information on an individual's overall systemic health.193,194 The prevalence of pulmonary and respiratory diseases, coupled with worsening air quality in industrialized areas, underscores the growing importance of advanced technologies for breath assessment. In this context, wearable acoustic sensors integrated into smart face masks and respirators have emerged as a popular solution for breath monitoring, providing real-time insights into respiratory health (Table 5).191,192,195 For example, Zhong et al.191 introduced a wireless smart face mask that incorporates an ultrathin, self-powered pressure sensor to monitor breathing patterns (Fig. 8a). In this work, the continuous wavelet transform (CWT) was utilized to extract frequency and magnitude parameters from various breathing conditions, and the system effectively distinguished abnormal patterns such as coughing, fast breathing, and breath holding. Further advancements in this domain include the application of machine learning for enhanced diagnostic capabilities. For instance, Zhang et al.192 employed a bagged decision tree algorithm on acoustic data from face mask sensors to classify respiratory health conditions (Fig. 8b). Their approach achieved a high accuracy of 95.5% in differentiating between healthy individuals and patients with chronic respiratory diseases, such as asthma, bronchitis, and chronic obstructive pulmonary disease (COPD).
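As a rough illustration of CWT-based feature extraction of the kind used in ref. 191 (the Ricker wavelet, scale range, and synthetic signal here are assumptions, not the authors' pipeline), the dominant breathing frequency can be read off the scale with the strongest wavelet response:

```python
import numpy as np

def ricker(points, a):
    # L2-normalized Ricker ("Mexican hat") wavelet of width a
    t = np.arange(points) - (points - 1) / 2
    w = (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)
    return w / np.linalg.norm(w)

def cwt(signal, widths):
    # Naive continuous wavelet transform by direct convolution
    out = np.empty((len(widths), len(signal)))
    for i, a in enumerate(widths):
        wavelet = ricker(min(10 * int(a), len(signal)), a)
        out[i] = np.convolve(signal, wavelet, mode="same")
    return out

fs = 50.0                                   # Hz, assumed sampling rate
t = np.arange(0, 30, 1 / fs)                # 30 s window
breath = np.sin(2 * np.pi * 0.3 * t)        # 0.3 Hz ~ 18 breaths per minute
widths = np.arange(10, 120, 5)
scalogram = np.abs(cwt(breath, widths))

# For an L2-normalized Ricker, the response peaks near width a = sqrt(2.5)/omega,
# so the best-matching width maps to frequency f = fs*sqrt(2.5)/(2*pi*a)
best = widths[np.argmax(scalogram.mean(axis=1))]
f_dom = fs * np.sqrt(2.5) / (2 * np.pi * best)
print(f"dominant width {best} -> breathing frequency ≈ {f_dom:.2f} Hz")
```

Features such as the dominant frequency and the scalogram magnitude at that scale are the kind of inputs that classifiers (e.g., the bagged decision trees of ref. 192) can then use to separate normal and abnormal breathing patterns.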
image file: d4nr05145a-f8.tif
Fig. 8 Applications of flexible, wearable mechano-acoustic sensors in pulmonary disease monitoring. (a) Smart face mask based on an ultrathin pressure sensor for wireless monitoring of breath conditions. Reproduced with permission.191 2022, Wiley-VCH. (b) Biodegradable smart face masks based on PLA electret fabric for chronic respiratory disease diagnosis. Reproduced with permission.192 2022, ACS Publications. (c) Soft wearable stethoscope designed for automated pulmonary disease diagnosis. Reproduced with permission.142 2022, Science. (d) Precision accelerometer contact microphones for the detection of pathological mechano-acoustic signatures in patients with pulmonary disorders. Reproduced with permission.164 2020, Springer Nature. (e) Wireless broadband acousto-mechanical sensing system for continuous physiological monitoring. Reproduced with permission.222 2023, Elsevier.
Table 5 Summary and comparison of wearable devices for pulmonary monitoring
Device description Sensors Detectable signals Performance Ref.
Smart face mask based on an ultrathin pressure sensor Flexible piezoelectric pressure sensor Breath airflow signals Dynamic sensitivity of 0.19 V Pa−1 in the pressure range 0–30 Pa and 0.048 V Pa−1 in the pressure range 30–145 Pa 191
Biodegradable smart face masks Flexible electret pressure sensor Breath airflow signals Sensitivity: linear response with applied pressure, from 0.12 V at 4 Pa to 0.64 V at 166 Pa 192
Distinguishing the healthy group and three groups of chronic respiratory diseases (asthma, bronchitis, and chronic obstructive pulmonary disease) with 95.5% accuracy
Precision accelerometer contact microphones Capacitive MEMS accelerometer Heart sound, SCG, lung sound, chest wall motion Ultra-low noise performance (<10 μg √Hz−1) 165
Bandwidth: >10 kHz
Sensitivity: 271 mV g−1 with a linear response in acceleration range ±4 g
Capability of capturing wheeze, bronchial, and crackle from COPD patients
Soft wearable stethoscope Commercial MEMS microphone Heart/lung sounds, chest wall motion Automated diagnosis of four types of abnormal lung sounds (crackle, wheeze, stridor, and rhonchi) with 95% accuracy 142
Capability of detecting disordered breathing for home sleep
Wireless broadband acousto-mechanical sensing system Commercial MEMS microphone/accelerometer Body movement/angle, lung/intestinal/heart sounds Heart rate measurement error: 0.015 ± 0.85 bpm 9
Respiratory rate measurement error: 0.44 ± 2.13 bpm


Another form of face mask is the respirator, which offers air filtering functionality to enhance breath quality. However, the mismatch between dynamic environmental conditions and the static design of nonadaptive respirators often results in physiological and psychological discomfort, limiting their widespread adoption. To address this limitation, Shin et al.200 proposed an adaptive respiratory protection system featuring a dynamic air filter (DAF). This system integrates a digital barometer inside the face mask to capture the wearer's breathing signals. These signals, combined with the expansion state of the DAF, are processed using a long short-term memory (LSTM) algorithm to predict changes in the wearer's respiratory patterns. The inference result, along with ambient condition data recorded from a particulate matter (PM) sensor, is used to adjust a stretchable elastic fiber membrane (EFM) air filter to the desired state, optimizing filtration in real time. While several approaches have been developed to enhance the stability and comfort of face masks,194,200 mask-mounted sensors can be inconvenient during routine activities and are susceptible to noise for pulmonary monitoring purposes; for instance, deformation of the mask and vocal noise may impact sensing accuracy. Wearable electronic stethoscopes that can be comfortably attached to the chest offer an alternative approach for detecting and diagnosing respiratory disorders. For instance, the ACM platform reported by Gupta et al.164 offers a broad measurement bandwidth from below 1 Hz up to 12 kHz, allowing the sensor to capture both low-frequency chest wall motion and high-frequency lung sounds. Combining these signals helps elucidate the respiratory rate and breathing patterns, which can potentially predict the early onset of chronic cardiopulmonary conditions.
In addition to breathing patterns, abnormal breath sounds such as wheezes, rhonchi, and crackles serve as useful indicators of pulmonary disorders. The same research group further applied their ACM devices, using a single integrated sensor, for episodic and longitudinal assessment of lung sounds, breathing patterns, and respiratory rates.165 The device demonstrated its capability to capture wheeze, bronchial, and crackle sounds with results comparable to an Eko stethoscope. Beyond data quality, applying machine learning to datasets obtained from wearable stethoscopes can support the interpretation of pulmonary disease diagnoses. Lee et al.142 proposed a soft wearable stethoscope (SWS) with embedded CNN-based machine learning for chronic obstructive pulmonary disease (COPD) and cardiovascular disease (CVD) auscultation (Fig. 8c). A clinical study with multiple patients and control subjects demonstrated the unique advantage of wearable auscultation with embedded machine learning for automated diagnosis of four types of abnormal lung sounds (crackle, wheeze, stridor, and rhonchi) with 94.78% accuracy. Displaying the measured signals on a mobile app, combined with an abnormal-signal detection algorithm, suggests the feasibility of wearable stethoscopes for remote sensing applications.

Airflow inferred from lung sounds also provides valuable information on lung conditions. In this regard, the Rogers group9 conducted a pilot study involving 13 broadband acousto-mechanical sensing (BAMS) devices mounted on the anterior and posterior chest of 20 healthy participants and 35 patients with chronic lung disease, creating a high-resolution, spatiotemporal mapping of the lung (Fig. 8e). The measurements indicated that patients with a history of resection surgery of the right upper and lower lobes and the left upper lobe showed decreased pulmonary function in the resected regions, resulting in reduced airflow rates and lower sound intensity. The study also reported differences in lung sound intensities and frequencies between healthy subjects and patients with chronic lung disease: 54 dB compared with ∼36 dB and 219 Hz compared with 256 Hz, respectively.
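Such intensity comparisons amount to converting signal RMS amplitude into decibels relative to a common reference. A minimal numpy sketch, where the reference level and the synthetic "healthy" and "disease" signals are illustrative assumptions:

```python
import numpy as np

def rms_db(x, ref=1.0):
    """Signal level in dB relative to a reference RMS amplitude."""
    rms = np.sqrt(np.mean(np.asarray(x, dtype=float) ** 2))
    return 20.0 * np.log10(rms / ref)

# Illustrative surrogates: a stronger 219 Hz tone vs. a weaker 256 Hz tone,
# mimicking the healthy-vs-disease intensity/frequency contrast in the text.
fs = 4000
t = np.arange(0, 1, 1 / fs)
healthy = 1.0 * np.sin(2 * np.pi * 219 * t)
disease = 0.125 * np.sin(2 * np.pi * 256 * t)   # 8x smaller amplitude
delta_db = rms_db(healthy) - rms_db(disease)    # ~18 dB difference
```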

Sleep monitoring

Sleep monitoring has been a highly active research area aimed at improving quality of life. Sleep quality can be evaluated by monitoring sleep breathing and estimating sleep stages. The cyclical pattern of sleep is composed of a rapid eye movement (REM) phase and a non-REM (NREM) phase. The NREM phase is generally divided into four stages, namely Stage 1, Stage 2, Stage 3, and Stage 4. Knowledge of these stages allows further inference of derived sleep metrics. In a typical clinical setting, polysomnography (PSG) is considered the gold standard for characterizing human sleep; it infers the different sleep stages and provides an indirect measure of sleep quality. Unfortunately, this technique is expensive and requires supervision by a medical professional during the measurement.

The use of mechano-acoustic devices to quantify sleep patterns represents a promising solution in advanced clinical diagnostics (Table 6). Body sounds and movements play an important role and are widely used in sleep stage estimation.5,196,198,199,201–203 A widely adopted method involves extracting cardiovascular and body motion signals using mechano-acoustic sensors placed on the skin. These sensors collect data that, when processed with machine learning algorithms, can be used to estimate sleep stages accurately. For example, Lee et al.5 proposed an approach in which a multiband z-axis signal from an accelerometer was utilized to extract body signals from the human chest (Fig. 9a). Specifically, the 0.1–0.8 Hz band captured chest motion during respiration, sub-bands between 0.8 and 20 Hz captured body motions, and the 20–80 Hz band represented cardiac signals. The study employed a hidden Markov model (HMM) to classify sleep stages, achieving 82% accuracy for binary wake/asleep detection and 56% accuracy for three-stage classification. To enhance the accuracy of sleep stage estimation, mechano-acoustic sensors have been combined with ECG and PPG systems. Typically, accelerometers are worn on the wrist to monitor body movements during sleep, while ECG and PPG capture cardiovascular signals. This integration improved three-stage detection accuracy to 69% and 72.9%, as reported by Beattie et al.198 and Fonseca et al.,197 respectively. A soft, wireless, highly integrated device with ECG, PPG, and accelerometers was introduced by Zavanelli et al.196 (Fig. 9b). To estimate the sleep stage, in addition to ECG and PPG signals, SCG vibrations were recorded from the y-axis of the accelerometer. These signals were sampled at rates of 500 Hz, 120 Hz, and 200 Hz and filtered using a third-order Butterworth band-pass filter set to 4–24 Hz for SCG, 0.5–50 Hz for ECG, and 0.3–7 Hz for PPG. A feedforward neural network (FFNN) trained on the processed data achieved a high three-stage classification accuracy of 82.4%.
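The per-channel band-pass pre-processing described above can be sketched with scipy. The text specifies third-order Butterworth filters and the listed bands and sample rates; the zero-phase (filtfilt) application and the synthetic SCG trace are assumptions for illustration:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(x, lo_hz, hi_hz, fs, order=3):
    """Zero-phase third-order Butterworth band-pass filter."""
    sos = butter(order, [lo_hz, hi_hz], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# Channel settings from the study: modality -> (sample rate Hz, band Hz).
CHANNELS = {
    "scg": (200, (4.0, 24.0)),
    "ecg": (500, (0.5, 50.0)),
    "ppg": (120, (0.3, 7.0)),
}

# Illustrative SCG trace: a 10 Hz in-band component plus strong 0.2 Hz drift.
fs, (lo, hi) = CHANNELS["scg"]
t = np.arange(0, 10, 1 / fs)
raw = np.sin(2 * np.pi * 10 * t) + 2.0 * np.sin(2 * np.pi * 0.2 * t)
clean = bandpass(raw, lo, hi, fs)   # drift removed, cardiac band retained
```

The filtered channels would then be windowed into feature vectors for the downstream sleep-stage classifier.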


image file: d4nr05145a-f9.tif
Fig. 9 Applications of flexible, wearable mechano-acoustic sensors in sleep monitoring. (a) Mechano-acoustic sensors placed at the suprasternal notch for physiological processes and sleep stage estimation. Reproduced with permission.72 2023, Wiley-VCH. (b) Soft, wireless sternal patches for detection of sleep apnea and sleep stages. Reproduced with permission.196 2021, Science.
Table 6 Summary and comparison of wearable devices for sleep stage detection
Device description Sensors Detectable signals Sleep detection accuracy Ref.
Soft wireless device placed at the suprasternal notch MEMS accelerometer Mechano-acoustic signals Wake detection: 72.7% 5
NREM detection: 65%
REM detection: 56.3%
Three-stage detection: 56%
Soft, wireless sternal patch Optical sensor, MEMS accelerometer, ECG sensor ECG, PPG, SCG, and ACC Wake detection: 100% 196
NREM detection: 80.9%
REM detection: 70.4%
Three-stage detection: 82.4%
Rigid wristwatch Optical sensor, MEMS accelerometer ACC and PPG Wake detection: 91.5% 197
NREM detection: 65.7%
REM detection: 78.9%
Three-stage detection: 72.9%
Wrist-worn device Optical sensor, MEMS accelerometer ACC and PPG Wake detection: 69.3% 198
NREM detection: 83.4%
REM detection: 71.6%
Three-stage detection: 69%
Flexible, wireless patch MEMS accelerometer, ECG, and temperature sensor ACC, ECG, and TEMP Wake detection: 73.3% 199
NREM detection: 59%
REM detection: 56%
Three-stage detection: 62.1%


Sleep apnea is a sleep disorder in which breathing repeatedly stops and starts during sleep. It is often associated with snoring, which is linked to other respiratory symptoms such as wheezing and chronic bronchitis. People with asthma and sleep-disordered breathing experience reduced sleep quality and decreased nocturnal oxygen saturation. Wearable sensors have emerged as valuable tools for detecting these patterns and facilitating early screening and treatment.9,142,165,196,202 For detecting apnea and other abnormal sleep-breathing events, sensor placement at the suprasternal notch or on the chest wall is preferred, as these locations conveniently capture airflow, respiration sounds, and chest movements, which are considered important signs of these sleep disorders. High-sensitivity accelerometers are particularly suitable for this application, as their low-frequency response can detect body motions, while their high-frequency sensitivity captures sounds generated by airflow through the trachea. Gupta et al.165 introduced an approach utilizing an accelerometer contact microphone attached over the lung area to monitor breathing patterns. Their study identified the characteristic Cheyne-Stokes respiration pattern in patients with acute decompensated heart failure (ADHF), a condition commonly observed in advanced heart failure. This pathognomonic breathing pattern features cyclical periods of rapid breaths followed by an absence of respiratory signals, indicative of apnea. In another study, Lee et al.142 used a digital stethoscope to record lung sounds, visualized as spectrograms, to analyze apnea/hypopnea events and differentiate types of snoring. The frequency spectrum of lung sounds revealed distinct snoring patterns, such as tongue snoring and palatal snoring. Tongue snoring during inhalation exhibited power concentrated below 500 Hz, accompanied by distinct peaks from 500 Hz to 1 kHz, with a noticeable reduction in signal power during exhalation.
In contrast, tongue snoring during exhalation displayed a gradual increase in signal power up to 250 Hz, with distinct signal peaks observed. Palatal snoring during inhalation presented a similar power distribution across the frequency spectrum, except for a unique pattern between 350 and 400 Hz. These detailed analyses provide crucial insights into the respiratory dynamics associated with sleep-disordered breathing.
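The spectrogram-based separation of snoring energy into frequency bands can be sketched as follows; the synthetic tone mixture and the exact band edges are illustrative assumptions, not the recordings from ref. 142:

```python
import numpy as np
from scipy.signal import spectrogram

def band_power(x, fs, f_lo, f_hi, nperseg=256):
    """Total spectrogram power of x between f_lo and f_hi (Hz)."""
    f, t, Sxx = spectrogram(x, fs=fs, nperseg=nperseg)
    mask = (f >= f_lo) & (f < f_hi)
    return Sxx[mask].sum()

# Illustrative snoring surrogate: energy concentrated below 500 Hz with a
# weaker component between 500 Hz and 1 kHz, as described for tongue snoring.
fs = 4000
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 300 * t) + 0.3 * np.sin(2 * np.pi * 700 * t)
low = band_power(x, fs, 0, 500)      # dominant low-frequency power
high = band_power(x, fs, 500, 1000)  # weaker 500 Hz-1 kHz peaks
```

Comparing such band powers between inhalation and exhalation segments is one way to quantify the spectral patterns reported in the text.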

Bowel monitoring

Bowel sounds provide valuable physiological insights into intestinal function. However, despite their regular production, the random frequency and variability of these sounds pose challenges for continuous monitoring with conventional devices like stethoscopes, owing to their bulkiness. Advanced wearable devices for bowel sound monitoring, on the other hand, can provide real-time information on abdominal and intestinal activities. Numerous studies have been conducted to capture bowel sounds,62,204–207 highlighting the significant advantages of wearable devices (Table 7). For example, Zhou et al.62 proposed a graphene-based strain sensor with a sandwiched structure tailored to capturing bowel sounds (Fig. 10a). To avoid interference from heart sounds and abdominal aortic pulsations and to optimize bowel sound capture, the ileocecal region was selected for measurement. On the assumption that bowel sounds are typically characterized by considerable variation in frequency and tone, this study provided a new way to determine the functional condition of the intestine. However, more data need to be collected to further refine the reference ranges of the minimum and difference values of bowel sound amplitude. Machine learning offers an efficient solution for detecting and analyzing bowel sounds, enabling more effective data collection and interpretation. Examples of machine learning for bowel sound recognition include a CNN-based segmentation approach reported by Zhao et al.204 and an SVM classification developed by Yin et al.205 Both methods demonstrated impressive performance, achieving accuracy rates exceeding 90% (Fig. 10b). In an attempt to improve patient comfort, Baronetto et al.63 proposed the Gastro Digital Shirt, a smart T-shirt for capturing abdominal sounds produced during digestion. The garment prototype featured an array of eight miniaturized microphones connected to a low-power wearable computer and was designed for long-term recording.
Using a large dataset of 3046 individually annotated bowel sound instances and a hierarchical agglomerative clustering algorithm, the analysis identified four bowel sound types based on their spectral and temporal features. The study showed that the most frequently occurring types belong to two clusters containing single and multiple bursts (SB and MB). A survey of people with different intestinal conditions70 was conducted on healthy male subjects together with patients with mechanical intestinal obstruction (MIO) or paralytic ileus (Fig. 10c). A 5-hour measurement of bowel sounds after food intake in a silent room revealed that MIO patients exhibited the highest number of peaks (233), far more than patients with paralytic ileus (22). There were also significant differences in the peak values and positions of their power envelope curves.
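Hierarchical agglomerative clustering of per-event spectral/temporal features, as used in that analysis, can be sketched with scipy. The two synthetic feature dimensions, the cluster count, and the linkage method are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)

# Illustrative per-event features: [peak frequency (Hz), burst duration (ms)],
# drawn as two synthetic groups standing in for single/multiple-burst sounds.
single_bursts = rng.normal([400.0, 20.0], [30.0, 3.0], size=(40, 2))
multi_bursts = rng.normal([900.0, 120.0], [40.0, 10.0], size=(40, 2))
features = np.vstack([single_bursts, multi_bursts])

# Standardize features, build the agglomerative tree, and cut into k clusters.
z = (features - features.mean(axis=0)) / features.std(axis=0)
tree = linkage(z, method="average")
labels = fcluster(tree, t=2, criterion="maxclust")
```

With real recordings, the feature matrix would hold one row per annotated bowel sound burst, and the cut level would be chosen to yield the reported four types.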
image file: d4nr05145a-f10.tif
Fig. 10 Applications of flexible, wearable mechano-acoustic sensors in bowel monitoring. (a) Graphene-based strain sensor with a sandwiched structure for bowel sound monitoring.62 2022, RSC Publications. (b) Flexible skin-mounted wireless acoustic devices for bowel sound monitoring and intestinal condition evaluation. Reproduced with permission.223 2020, Springer Nature.
Table 7 Summary and comparison of wearable devices for bowel monitoring
Device description Applications Performance Ref
Wearable devices for long-term bowel sound monitoring Bowel sound recognition Bowel sound recognition with 97.0% sensitivity and 91.7% accuracy 204
Wearable health monitoring system for bowel sound recognition Bowel sound recognition Bowel sound recognition with 86.8% sensitivity and 90.1% accuracy 205
Smart shirt for digestion acoustics monitoring Bowel sound recognition Detection of the presence of four bowel sound types based on their spectral and temporal features, with Cohen's Kappa of 0.7 63
Flexible skin-mounted wireless acoustic device Intestinal condition monitoring Bowel sound classification between the normal subject and patients with MIO or paralytic ileus with 76.89% accuracy 70
Flexible dual-channel digital auscultation patch Intestinal condition monitoring Evaluated the recovery of intestinal peristalsis function in patients with POI and provided guidelines for the feeding time for speeding recovery based on intestinal rate 143


One of the most important clinical applications of bowel sound monitoring is capturing the occurrence frequency of bowel sounds in patients with postoperative ileus (POI). This assists in assessing the recovery of patients' intestinal function, selecting the right feeding time, and accelerating recovery. POI is a common physiological response to abdominal surgery, characterized by symptoms such as the cessation of intestinal peristalsis and the inability to move intestinal contents forward. During POI, patients are unable to consume food until the condition resolves. Traditionally, judging the time of POI relief relies on the doctor observing when the patient begins to pass gas or defecate. This method is intermittent and depends on subjective auscultation by physicians. Affected by the noisy ward environment, this evaluation can be inaccurate and delayed. This limitation can be addressed by long-term wearable devices. As such, a dual-channel digital auscultation patch introduced by Wang et al.143 was attached to the abdomen of patients with POI to capture bowel sounds after surgery. Ambient ward noise is eliminated using an active noise-reduction algorithm, while other noise sources, such as frictional noise, are removed using multichannel cross-validation. Through this approach, the number of bowel sounds in the daily collected data is objectively and quantitatively identified.

From the daily change in occurrence frequency, the curve of median intestinal rate versus postoperative day can be constructed and analyzed. From one to three days after the operation, the intestines were in a state of paralysis and bowel sounds almost disappeared, averaging fewer than two events per minute under long-term monitoring. On the fourth day after the operation, the occurrence frequency of bowel sounds began to increase, reaching five times per minute, exceeding the average level of two times per minute in the normal state. The results obtained from the wearable acoustic sensors indicated that the paralyzed state of the intestine had been relieved and peristalsis function restored. The study also suggested that timely feeding from the fourth day after the operation could speed up the patient's recovery.
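Counting bowel-sound events per minute from a denoised acoustic envelope, the core of this occurrence-frequency analysis, can be sketched with scipy's peak detection. The synthetic envelope, threshold, and minimum event spacing are illustrative assumptions:

```python
import numpy as np
from scipy.signal import find_peaks

def bowel_rate_per_min(envelope, fs, height=0.5, min_gap_s=1.0):
    """Count envelope bursts above `height`, at least `min_gap_s` apart,
    and convert the count to events per minute."""
    peaks, _ = find_peaks(envelope, height=height, distance=int(min_gap_s * fs))
    minutes = len(envelope) / fs / 60.0
    return len(peaks) / minutes

# Illustrative 60 s envelope: five Gaussian bursts on a low noise floor.
fs = 100
t = np.arange(0, 60, 1 / fs)
env = 0.05 * np.abs(np.random.default_rng(2).standard_normal(t.size))
for centre in (5, 17, 28, 41, 55):
    env += np.exp(-((t - centre) ** 2) / (2 * 0.2 ** 2))
rate = bowel_rate_per_min(env, fs)   # 5 events over 1 minute
```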

Swallow monitoring

Mechano-acoustic sensors have been utilized as non-invasive tools for monitoring and capturing swallowing patterns (Table 8). Surface electromyography (sEMG) is a preferred non-invasive method for health assessment in clinical settings but still faces challenges. When used alone, sEMG is limited in the types of activities it can monitor: it records the electrical activity of muscles during active contraction, but cannot monitor the relaxation of the swallowing muscles.57 To address these limitations, an alternative approach combines sEMG with strain sensors. Continuous sensing of mechanical strain on the skin surface can capture the contractions and relaxations of the submental muscles during swallowing, helping improve the performance of wearable devices in swallowing assessments.
Table 8 Summary and comparison of wearable devices for swallow monitoring
Device description Swallow monitoring applications Sensors Detectable signals Results Ref
Stretchable derivatives of PEDOT:PSS, graphene, metallic nanoparticles External measurement of swallowed volume during exercise Flexible piezoresistive strain gauge sensor, sEMG sensor sEMG, throat movement The prediction results for walking were significantly better than for biking, with the prediction error ranging from 30–50% compared to 25–65%, respectively 56
Metallic nanoislands on graphene Swallow monitoring in head and neck cancer patients Flexible piezoresistive strain gauge sensor Throat movement Bolus type identification (water bolus, yogurt bolus, and cracker bolus) with 86.4% accuracy 57
Swallow classification between healthy subject and dysphagic patient with 94.7% accuracy
Soft skin-interfaced mechano-acoustic sensors Real-time monitoring and patient feedback on respiratory and swallowing biomechanics MEMS accelerometer Chest wall motion Detection of swallow events while eating, drinking, and intermittent un-cued saliva swallowing with 89.6% sensitivity and 87.8% precision 6
Throat movement
Epidermal graphene sensors Estimating swallowed volume Flexible piezoresistive strain gauge sensor Throat movement Estimation of unknown swallowed volumes cumulatively between 5 and 30 ml of water with 92% accuracy 55


Assessment of liquid intake provides valuable information on an individual's hydration status. In this regard, the swallowed volume can be estimated by recording swallowing signals at the throat.55–57 Polat et al.,56 for instance, introduced an external measurement of swallowed volume during exercise using a wearable sensor based on a piezoresistive Gr/AuNI/PEDOT:PSS "dough" strain gauge and sEMG electrodes attached to the throat (Fig. 11a). The study tested volumes between 10 and 30 ml in 5 ml increments on participants walking or biking on exercise equipment while completing their swallow therapy exercises. Machine learning was applied to predict the liquid intake volume from the sensor data. From the sEMG signals, the summation, width, and low-frequency power were extracted, while the peak-to-peak width and peak skew were derived from the strain signals; the peak offset was taken between the sEMG and strain signals. The prediction results for walking were significantly better than for biking, with prediction errors ranging from 30–50% compared to 25–65%, respectively. The data imply that the intensity of routine activities has a marked impact on the estimation. Additionally, the results highlighted a larger error in predicting the smallest volume (10 ml) compared with the intermediate volumes (15, 20, and 25 ml), which was also observed in their previous study on participants sitting still55 (Fig. 11b). This error could be attributed to the premature, involuntary movement of the liquid bolus from the oral cavity into the pharynx that occurs before swallowing smaller volumes, causing swallow disruptions.
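The hand-crafted features named above can be sketched in numpy. The exact definitions below (threshold fraction, cutoff frequency, moment-based skew) are plausible reconstructions for illustration, not the authors' code:

```python
import numpy as np

def semg_features(x, fs, lf_cutoff=50.0, thresh_frac=0.1):
    """Summation, active width (s), and low-frequency power fraction of sEMG."""
    x = np.asarray(x, dtype=float)
    summation = np.sum(np.abs(x))
    active = np.abs(x) > thresh_frac * np.max(np.abs(x))
    width = np.count_nonzero(active) / fs
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    lf_power = spectrum[freqs < lf_cutoff].sum() / spectrum.sum()
    return summation, width, lf_power

def strain_features(x, fs):
    """Peak-to-peak width (s) and peak skew of a strain swallow trace."""
    x = np.asarray(x, dtype=float)
    p2p_width = abs(np.argmax(x) - np.argmin(x)) / fs
    centred = x - x.mean()
    skew = np.mean(centred**3) / (np.std(x) ** 3 + 1e-12)
    return p2p_width, skew

# Illustrative swallow trace: a strain hump with its trough 0.4 s after the peak.
fs = 500
t = np.arange(0, 2, 1 / fs)
strain = np.exp(-((t - 0.8) ** 2) / 0.01) - 0.6 * np.exp(-((t - 1.2) ** 2) / 0.01)
p2p, skew = strain_features(strain, fs)
```

Such per-swallow feature vectors, paired with known volumes, would form the training set for the volume-prediction model.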


image file: d4nr05145a-f11.tif
Fig. 11 Applications of flexible, wearable mechano-acoustic sensors in swallowing monitoring. (a) Stretchable sensor based on PEDOT:PSS, graphene, metallic nanoparticles for measuring of swallowed volume during exercise. Reproduced with permission.56 2023, Wiley-VCH. (b) Epidermal graphene sensors for estimation of swallowed volume. Reproduced with permission.55 2021, ACS Publications. (c) Soft skin-interfaced mechano-acoustic sensors for real-time monitoring and patient feedback on respiratory and swallowing biomechanics. Reproduced with permission.6 2022, Springer Nature.

Furthermore, there are ongoing attempts to use strain sensors to identify and monitor Parkinson's disease and dysphagia in patients. Tracking patients' swallowing activities and their responses to different food types and volumes reveals valuable information for clinical treatment. For example, Kim et al.208 introduced a flexible submental sensor patch with remote monitoring controls for the management of oropharyngeal swallowing disorders. This sensor patch was optimally designed to enable accurate recording of submental muscle activity, including burst duration and amplitude, during swallowing in dysphagia patients under treatment. Another study by Ramírez et al.57 developed a smart patch for monitoring swallowing activity in head and neck cancer patients. By employing machine learning, the system achieved a cross-validated accuracy of 86.4% in classifying three types of food boluses: water, yogurt, and cracker. Moreover, the system demonstrated the ability to detect dysphagia early, distinguishing healthy subjects from dysphagic patients with 94.7% accuracy.

Beyond detection, advanced wearable sensors can also assist in therapeutic treatments for dysphagia. Technically, these treatments often include interventions by speech-language pathologists designed to improve the physiology of the swallowing mechanism by training patients to initiate swallowing with sufficient frequency and during the expiratory phase of the breathing cycle (exhale/swallow/exhale). Such therapy currently requires bulky, expensive equipment to synchronously record swallows and respirations, confining its use to clinical settings. In an attempt to overcome these challenges, Kang et al.6 introduced a wireless, wearable technology that enables continuous mechano-acoustic tracking of respiratory activities and swallows through movements and vibratory processes monitored at the skin surface (Fig. 11c). Two separate accelerometers were attached to the suprasternal notch and the laryngeal prominence to capture respiration and swallowing signals. The respiratory-swallow phase pattern was recorded and compared with the optimal pattern, and patients were alerted via a haptic feedback patch attached to their arms.

7. Conclusion and perspectives

Driven by the growing demand for comprehensive health assessments, flexible wearable mechano-acoustic sensors have seen significant advancements in addressing the limitations of traditional bulky equipment. These innovations offer a new approach for long-term, ambulatory monitoring and objective assessment of body sounds, enhancing both functionality and user comfort. The development of miniature MEMS acoustic sensors with footprints of a few millimeters represents a significant breakthrough, enabling compact and powerful sensing capabilities for capturing body sounds. These sensors can be integrated into flexible circuit boards, forming wearable devices with dimensions of just a few square centimeters. Such designs are lightweight and conformal, making them ideal for comfortable, unobtrusive wear. The introduction of fully flexible sensors has further improved wearing comfort and minimized motion artifacts, ensuring more accurate measurements. Some of these sensors are tailored to be biodegradable,209 gas-permeable, and transparent,210 reducing the skin irritation and discomfort associated with prolonged use. These features are particularly suitable for continuous health monitoring over extended periods. Liquid metals (LM), such as eutectic gallium-indium (EGaIn), are highly promising materials for soft electronics due to their unique and versatile properties, including exceptional conformability, biocompatibility, permeability, self-healing capability, and recyclability.211 Such properties enable the application of LM-based materials in various areas, including radio frequency electronics and soft circuit connections for flexible, wearable devices.

Regarding materials and designs, silicon MEMS microphones and acceleration sensors exhibit a high technology readiness level (TRL) owing to their mature manufacturing capabilities, worldwide availability, and well-established sensing mechanisms. The use of these MEMS microphones as surface-mount devices (SMDs) facilitates integration with fPCBs through automated pick-and-place tools and chip bonding processes. However, a limitation of MEMS microphones is their rigidity, which may compromise the mechanical flexibility of wearable acoustic devices and induce artifact signals due to differences in the mechanical properties of tissues and electronics. A potential solution to this issue is the implementation of a transfer-printing process to create flexible inorganic acoustic sensors on polymeric substrates, as demonstrated in recent work by Yang et al.212 This approach enhances device compliance and integration with human skin. Another drawback of existing MEMS sensors is their narrow measurement range. For instance, commercially available MEMS microphones typically have a cut-off frequency of 35 Hz,142 which hinders the detection of low-frequency body sounds. A proposed solution involves combining MEMS microphones with acceleration sensors, which are sensitive to low frequencies. However, this approach may increase the system footprint and cost due to the need for multiple devices, additional metal interconnects, and extra SMD components for associated amplification circuits. The development of monolithic sensors, such as cantilevers capable of detecting a broad range of frequencies, represents an exciting research direction to address this limitation. An alternative to inorganic semiconductor-based sensors is the use of conductive polymers for body sound detection. Their intrinsic mechanical stretchability, combined with the ability to engineer sensitivity and measurement range, is expected to enhance device performance.
However, compared to Si-based devices, scalable manufacturing of polymeric sensors poses a significant challenge. High-yield fabrication processes such as inkjet or 3D printing are potential solutions to this problem. In addition to scalable manufacturing, another key technological issue is the development of stretchable circuits for polymeric sensors. In many cases, mechanical failure occurs at the interface between the soft sensor and the fPCBs due to significant differences in material properties. Developing fully stretchable devices using polymeric materials thus remains a critical research question to realize the unique potential of this class of materials.

Power management is another critical aspect of wearable technology. Multi-modal sensing and long-term operation demand higher energy capacities, which often result in increased battery size and weight. Larger batteries can compromise the overall device dimensions, reduce wearing comfort, and may influence the epidermal vibration under sound pressure and hence impact the measurement accuracy. Lithium-ion polymer (LiPo) batteries have been the mainstream power source for supporting intermittent sound measurements over several days. Despite advancements in battery technology, they remain one of the largest components in wearable acoustic systems, contributing to increased device size and weight. For applications such as sleep quality monitoring where acoustic sensors are directly attached to the nose or integrated into a facemask, minimizing device size and weight is crucial to enhance user comfort and prevent sleep interference. To address this challenge, wireless charging using NFC has emerged as a promising solution due to its biocompatibility and safety. In controlled environments such as hospitals, wireless power transmission systems can be installed beneath patient beds to continuously power wearable devices. This approach eliminates the need for bulky batteries, significantly reducing device size and weight while enabling long-term, uninterrupted use. However, the short communication range of NFC limits user mobility. Perhaps an ultimate solution could involve the development of energy-harvesting devices capable of collecting energy from the human body (e.g., using piezoelectric materials to harness body motion) or the surrounding environment (e.g., outdoor and indoor illumination). A recent study by the Gao group213 demonstrated the use of flexible solar panels to convert photoenergy from indoor illumination into electrical power for wearable chemical sensors. Similar concepts can be adapted to meet the power demands of wearable mechano-acoustic sensors. 
Enzymatic biofuel cells (EBFCs), which use physiological glucose or lactate as fuels to convert chemical energy into electrical energy, represent another promising power source. The chemical energy harvested by EBFCs can be drawn from abundant biofuels in human body fluids, such as sweat, tears, blood, and saliva.214,215 These biofuels are renewable and can provide a power supply of up to 100 W, meeting the demands of low-energy bioelectronics, which typically range from 200 μW to 1 W.216,217 Compared to other energy harvesters that rely on solar or biomechanical energy, EBFCs offer distinct advantages: continuous power generation, biocompatible interfaces free from toxic materials, a simple configuration that eliminates the need for additional packaging, and biodegradability. These features make them a highly attractive solution for powering wearable and implantable bioelectronic devices.

In addition to power management, data transmission presents a significant challenge in the system-level integration of wireless devices. Various wireless communication methods, including NFC, RFID, Wi-Fi, and Bluetooth, have been introduced, offering advantages such as tether-free operation, ease of use, and reduced motion artifacts. Among these techniques, NFC and RFID stand out as battery-free options, but their transmission rates are relatively low. NFC offers high security and convenient connection but is constrained by a limited range (≈5–20 cm) and low data rates. As a result, NFC is better suited for on-demand measurements, such as intermittent blood pressure readings,218 than for continuous monitoring of body sounds. In contrast, RFID enables real-time wireless data exchange via electromagnetic waves, allowing real-time measurement of body signals. However, the operational range of RFID is limited, and its transmission stability is sensitive to geometric variations between the reader and the device, restricting its application in flexible wearable devices. To overcome these limitations, Wi-Fi and Bluetooth have been explored for data acquisition and transmission through radio-frequency (RF) signals. Wi-Fi offers a long transfer range of up to 70 m and a high transmission rate, and it has been used in applications such as respiration and heart rate monitoring.219 However, because data are transmitted over relatively long distances, Wi-Fi communication typically requires high power consumption,220 limiting its suitability for long-term wearable devices. Conversely, Bluetooth, with a transfer range of about 30 m, consumes roughly 30% less power than Wi-Fi while maintaining a stable connection between wearable devices and nearby user interfaces or processing centers, making it more practical for wearable applications.
Bluetooth Low Energy (BLE), a power-efficient variant of Bluetooth, minimizes energy consumption at the cost of data rate. This makes BLE suitable for battery-operated devices that must run on minimal power and transmit only small data sets. To address the data-rate limitation, recent studies have incorporated external memory into wearable devices, enabling high-sample-rate recording with periodic data transmission. Reducing the transfer frequency cuts power consumption by up to 60%,8 supporting continuous monitoring on a single device for over 24 hours with a small lithium–polymer battery8,9,221 and presenting a promising data-transmission solution for wireless, wearable mechano-acoustic sensors.
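The buffer-and-burst strategy above can be sketched with a simple average-current model: rather than keeping the radio on to stream each sample, samples are logged to external memory and the radio wakes only for short periodic bursts. The current figures below are assumed values for illustration, not specifications of any particular BLE radio.

```python
# Illustrative model of buffered, periodic BLE transmission versus
# continuous streaming. All currents are assumed values.

RADIO_MA = 6.0   # assumed radio current while transmitting
MEM_MA = 1.5     # assumed current while logging to external flash
IDLE_MA = 0.05   # assumed idle/housekeeping current

def avg_current_stream():
    # Continuous streaming: radio active essentially all the time.
    return RADIO_MA

def avg_current_burst(burst_s=2.0, period_s=60.0):
    # Radio on only during short bursts; memory logging in between.
    radio_frac = burst_s / period_s
    return radio_frac * RADIO_MA + (1 - radio_frac) * (MEM_MA + IDLE_MA)

stream = avg_current_stream()
burst = avg_current_burst()
saving = 100 * (1 - burst / stream)
print(f"streaming: {stream:.2f} mA, burst: {burst:.2f} mA, saving: {saving:.0f}%")
```

With these assumed currents, a 2 s burst every 60 s lowers the average draw by well over half, consistent in spirit with the reported reductions from lowering the transfer frequency.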

Acoustic sensors are sensitive not only to body sounds but also to acoustic noise from the surrounding environment and from human motion. Ensuring high-quality signals is imperative for precise and reliable diagnosis. Several devices, including acoustic and pressure patches, have demonstrated continuous measurement of heart pulse, blood pressure, and bowel sounds. However, most measurements require users to maintain a stable posture, and the influence of body movement on recorded data has not been fully addressed. Integrating multimodal sensors, such as accelerometers and motion sensors, could help minimize or cancel artifact signals caused by body movement. This approach could also enable measurements during dynamic activities, including sports, thereby expanding the applications of acoustic sensors beyond healthcare into high-performance sports. In addition, applying machine learning to detect artifact signals and to recognize distinctive sound patterns from different parts of the human body can underpin reliable measurement and diagnosis. The use of machine learning and artificial intelligence (AI) in wearable acoustic sensors to acquire and interpret meaningful body sounds is expected to be a highly active area of research in the coming years. Advances in software development also facilitate data sharing and access for home-based monitoring and telehealth, but they simultaneously raise concerns regarding security and privacy. Further efforts spanning wearable acoustic sensor development, user education, and ethical considerations are critically important for deploying AI in wearable acoustic devices and other medical applications.
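One concrete way to exploit a co-located accelerometer, as suggested above, is adaptive noise cancellation: the motion channel serves as the reference input to a least-mean-squares (LMS) filter that subtracts the motion-correlated component from the acoustic recording. The sketch below uses entirely synthetic signals and illustrative parameters; real systems would tune the tap count and step size to the sensor bandwidth.

```python
# Sketch of accelerometer-referenced motion-artifact cancellation using an
# LMS adaptive filter. Signals and parameters are synthetic and illustrative.
import numpy as np

def lms_cancel(acoustic, motion_ref, n_taps=8, mu=0.01):
    """Remove the component of `acoustic` that is linearly predictable
    from the motion reference, sample by sample."""
    w = np.zeros(n_taps)
    cleaned = np.zeros_like(acoustic)
    for n in range(n_taps - 1, len(acoustic)):
        x = motion_ref[n - n_taps + 1:n + 1][::-1]  # newest sample first
        e = acoustic[n] - w @ x                     # cleaned sample = error
        w += 2 * mu * e * x                         # LMS weight update
        cleaned[n] = e
    return cleaned

rng = np.random.default_rng(0)
t = np.arange(4000) / 1000.0
heart = 0.1 * np.sin(2 * np.pi * 25 * t)                  # stand-in body sound
motion = rng.standard_normal(t.size)                      # accelerometer trace
artifact = np.convolve(motion, [0.5, 0.3, 0.2])[:t.size]  # motion leaking in
measured = heart + artifact
cleaned = lms_cancel(measured, motion)

before = np.mean(artifact[2000:] ** 2)
after = np.mean((cleaned[2000:] - heart[2000:]) ** 2)
print(f"artifact power: {before:.4f} -> {after:.4f}")
```

Once the filter has converged, the residual motion artifact is far below its original power while the underlying periodic signal is preserved; practical variants (normalized LMS, multi-axis references) follow the same structure.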

Author contributions

Tran Bach Dang: writing – original draft; Hoang-Phuong Phan: review – editing, supervision; Thanh An Truong: writing, review – editing; Chi Cong Nguyen: review – editing; Michael Listyawan: review – editing; Joshua Sam Sapers: review – editing; Sinuo Zhao: review – editing; Duc Phuc Truong: review – editing.

Data availability

Data will be made available on request.

No primary research results, software or code have been included and no new data were generated or analysed as part of this review.

Conflicts of interest

There are no conflicts to declare.

Acknowledgements

H. P. P. acknowledges funding from the Australian Research Council (DP230101312, FT240100203). This work was performed in part at the NSW Node of the Australian National Fabrication Facility.

References

  1. Y. Park, H. Luan, K. Kwon, T. S. Chung, S. Oh, J. Y. Yoo, G. Chung, J. Kim, S. Kim, S. S. Kwak, J. Choi, H. P. Phan, S. Yoo, H. Jeong, J. Shin, S. M. Won, H. J. Yoon, Y. H. Jung and J. A. Rogers, npj Flex. Electron., 2024, 8, 1–8 Search PubMed.
  2. T. Rahman, A. T. Adams, M. Zhang, E. Cherry, B. Zhou, H. Peng and T. Choudhury, in MobiSys 2014 - Proceedings of the 12th Annual International Conference on Mobile Systems, Applications, and Services, Association for Computing Machinery, 2014, pp. 2–13.
  3. A. Roguin, Clin. Med. Res., 2006, 4, 230 CrossRef PubMed.
  4. S. Alsmadi and Y. P. Kahya, Comput. Biol. Med., 2008, 38, 53–61 CrossRef PubMed.
  5. K. H. Lee, X. Ni, J. Y. Lee, H. Arafa, D. J. Pe, S. Xu, R. Avila, M. Irie, J. H. Lee, R. L. Easterlin, D. H. Kim, H. U. Chung, O. O. Olabisi, S. Getaneh, E. Chung, M. Hill, J. Bell, H. Jang, C. Liu, J. Bin Park, J. Kim, S. B. Kim, S. Mehta, M. Pharr, A. Tzavelis, J. T. Reeder, I. Huang, Y. Deng, Z. Xie, C. R. Davies, Y. Huang and J. A. Rogers, Nat. Biomed. Eng., 2020, 4, 148–158 CrossRef PubMed.
  6. Y. J. Kang, H. M. Arafa, J. Y. Yoo, C. Kantarcigil, J. T. Kim, H. Jeong, S. Yoo, S. Oh, J. Kim, C. Wu, A. Tzavelis, Y. Wu, K. Kwon, J. Winograd, S. Xu, B. Martin-Harris and J. A. Rogers, npj Digit. Med., 2022, 5, 1–13 CrossRef PubMed.
  7. Y. Liu, J. J. S. Norton, R. Qazi, Z. Zou, K. R. Ammann, H. Liu, L. Yan, P. L. Tran, K. I. Jang, J. W. Lee, D. Zhang, K. A. Kilian, S. H. Jung, T. Bretl, J. Xiao, M. J. Slepian, Y. Huang, J. W. Jeong and J. A. Rogers, Sci. Adv., 2016, 2, 1601185 CrossRef PubMed.
  8. A. Tzavelis, J. Palla, R. Mathur, B. Bedford, Y. H. Wu, J. Trueb, H. S. Shin, H. Arafa, H. Jeong, W. Ouyang, J. Y. Kwak, J. Chiang, S. Schulz, T. M. Carter, V. Rangaraj, A. K. Katsaggelos, S. A. McColley and J. A. Rogers, IEEE J. Biomed. Health Inform., 2024, 28, 5941–5952 Search PubMed.
  9. J. Y. Yoo, S. Oh, W. Shalish, W. Y. Maeng, E. Cerier, E. Jeanne, M. K. Chung, S. Lv, Y. Wu, S. Yoo, A. Tzavelis, J. Trueb, M. Park, H. Jeong, E. Okunzuwa, S. Smilkova, G. Kim, J. Kim, G. Chung, Y. Park, A. Banks, S. Xu, G. M. Sant'Anna, D. E. Weese-Mayer, A. Bharat and J. A. Rogers, Nat. Med., 2023, 29, 3137–3148 CrossRef CAS PubMed.
  10. O. G. Nayeem, S. Lee, H. Jin, N. Matsuhisa, H. Jinno, A. Miyamoto, T. Yokota and T. Someya, Proc. Natl. Acad. Sci. U. S. A., 2020, 117, 7063–7070 CrossRef PubMed.
  11. S. Wang, Y. Fang, H. He, L. Zhang, C. Li and J. Ouyang, Adv. Funct. Mater., 2021, 31, 2007495 CAS.
  12. B. U. Hwang, J. H. Lee, T. Q. Trung, E. Roh, D. Il Kim, S. W. Kim and N. E. Lee, ACS Nano, 2015, 9, 8801–8810 CAS.
  13. H. Ren, H. Jin, C. Chen, H. Ghayvat and W. Chen, IEEE J. Transl. Eng. Health Med., 2018, 6, 1 Search PubMed.
  14. S. Li, F. Li, S. Tang and W. Xiong, Hindawi Limited, 2020, preprint,  DOI:10.1155/2020/5846191.
  15. C. J. She, X. F. Cheng and K. Wang, Sensors, 2021, 22, 181 CrossRef PubMed.
  16. S. M. E. A. Debbal, J. Med. Eng. Technol., 2020, 44, 396–410 CrossRef PubMed.
  17. Y. L. Tseng, P. Y. Ko and F. S. Jaw, Biomed. Eng. Online, 2012, 11, 8 CrossRef PubMed.
  18. M. Guven and F. Uysal, Sensors, 2023, 23, 5835 CrossRef PubMed.
  19. I. Maglogiannis, E. Loukis, E. Zafiropoulos and A. Stasis, Comput. Methods Programs Biomed., 2009, 95, 47–61 CrossRef PubMed.
  20. W. Xu, K. Yu, J. Ye, H. Li, J. Chen, F. Yin, J. Xu, J. Zhu, D. Li and Q. Shu, Artif. Intell. Med., 2022, 126, 102257 CrossRef PubMed.
  21. S. E. Schmidt, L. H. Madsen, J. Hansen, H. Zimmermann, H. Kelbæk, S. Winter, D. Hammershøi, E. Toft, J. J. Struijk and P. Clemmensen, Cardiovasc. Eng. Technol., 2022, 13, 864–871 CrossRef PubMed.
  22. S. Winther, S. E. Schmidt, N. R. Holm, E. Toft, J. J. Struijk, H. E. Bøtker and M. Bøttcher, Int. J. Cardiovasc. Imaging, 2016, 32, 235–245 CrossRef PubMed.
  23. Y. Kim, Y. K. Hyon, S. S. Jung, S. Lee, G. Yoo, C. Chung and T. Ha, Sci. Rep., 2021, 11, 17186 CrossRef CAS PubMed.
  24. S. Reichert, R. Gass, C. Brandt and E. Andrès, Analysis of Respiratory Sounds: State of the Art, 2008, vol. 2 Search PubMed.
  25. A. Kandaswamy, C. S. Kumar, R. P. Ramanathan, S. Jayaraman and N. Malmurugan, Comput. Biol. Med., 2004, 34, 523–537 CrossRef CAS PubMed.
  26. S. Reichert, R. Gass, C. Brandt and E. Andrès, Clin. Med. Circ. Respirat. Pulm. Med., 2008, 2, CCRPM.S530 Search PubMed.
  27. S. Huq and Z. Moussavi, Med. Biol. Eng. Comput., 2012, 50, 297–308 CrossRef PubMed.
  28. L. P. Malmberg, R. Sorva and A. R. A. Sovijärvi, Pediatr. Pulmonol., 1994, 18, 170–177 CrossRef CAS PubMed.
  29. H. Alshaer, G. R. Fernie, E. Maki and T. D. Bradley, Sleep Med., 2013, 14, 562–571 CrossRef PubMed.
  30. S. A. Taplidou, L. J. Hadjileontiadis, I. K. Kitsas, K. I. Panoulas, T. Penzel, V. Gross and S. M. Panas, in Annual International Conference of the IEEE Engineering in Medicine and Biology - Proceedings, 2004, vol. 26 V, pp. 3832–3835.
  31. K. F. Chung, Pulm. Pharmacol., 1996, 9, 373–377 CrossRef CAS PubMed.
  32. J. Korpáš, J. Sadloňová and M. Vrabec, Pulm. Pharmacol., 1996, 9, 261–268 CrossRef PubMed.
  33. G. Sharma, K. Umapathy and S. Krishnan, Biomed. Signal Process. Control, 2022, 76, 103703 CrossRef PubMed.
  34. A. Murata, Y. Taniguchi, Y. Hashimoto, Y. Kaneko, Y. Takasaki and S. Kudoh, Intern. Med., 1998, 37, 732–735 CAS.
  35. V. P. Singh, J. M. S. Rohith and V. K. Mittal, in 12th IEEE International Conference Electronics, Energy, Environment, Communication, Computer, Control: (E3-C3), INDICON 2015, Institute of Electrical and Electronics Engineers Inc., 2016.
  36. K. P. Dawson, C. W. Thorpe and L. J. Toop, J. Paediatr. Child Health, 1991, 27, 4–6 CAS.
  37. Y. A. Amrulloh, I. H. Priastomo, E. S. Wahyuni and R. Triasih, in Proceedings of 2018 2nd International Conference on Biomedical Engineering: Smart Technology for Better Society, IBIOMED 2018, 2018, pp. 111–114.
  38. A. Renjini, M. N. S. Swapna, K. N. Satheesh Kumar and S. I. Sankararaman, Physica A, 2023, 626, 129039 Search PubMed.
  39. V. Swarnkar, U. R. Abeyratne, A. B. Chang, Y. A. Amrulloh, A. Setyati and R. Triasih, Ann. Biomed. Eng., 2013, 41, 1016–1028 Search PubMed.
  40. M. Faezipour and A. Abuzneid, Telemed. e-Health, 2020, 26, 1202–1205 CrossRef PubMed.
  41. K. K. Lella and A. Pja, Alexandria Eng. J., 2022, 61, 1319–1334 Search PubMed.
  42. T. Rahman, N. Ibtehaz, A. Khandakar, M. S. A. Hossain, Y. M. S. Mekki, M. Ezeddin, E. H. Bhuiyan, M. A. Ayari, A. Tahir, Y. Qiblawey, S. Mahmud, S. M. Zughaier, T. Abbas, S. Al-Maadeed and M. E. H. Chowdhury, Diagnostics, 2022, 12, 920 CrossRef CAS PubMed.
  43. Z. Ren, Y. Chang, K. D. Bartl-Pokorny, F. B. Pokorny and B. W. Schuller, J. Voice, 2024, 38, 1264–1277 CrossRef PubMed.
  44. S. Sangle and C. Gaikwad, in 2021 International Conference on Decision Aid Sciences and Application, DASA 2021, Institute of Electrical and Electronics Engineers Inc., 2021, pp. 182–186.
  45. B. B. Recasens, A. Balañá Corberó, J. M. M. Llorens, A. Guillen-Sola, M. V. Moreno, G. G. Escobar, Y. Umayahara, Z. Soh, T. Tsuji and M. Á. Rubio, Muscle Nerve, 2024, 69, 213–217 CrossRef PubMed.
  46. C. Infante, D. Chamberlain, R. Fletcher, Y. Thorat and R. Kodgule, in GHTC 2017 - IEEE Global Humanitarian Technology Conference, Proceedings, 2017, 2017-January, 1–10.
  47. D. Li, J. Wu, X. Jin, Y. Li, B. Tong, W. Zeng, P. Liu, W. Wang and S. Shang, Interdiscip. Nurs. Res., 2023, 2, 250–256 CrossRef.
  48. K. V. M. Taveira, R. S. Santos, B. L. C. de Leão, J. Stechman Neto, L. Pernambuco, L. K. da Silva, G. De Luca Canto and A. L. Porporatti, Elsevier Editora Ltda, 2018, preprint,  DOI:10.1016/j.bjorl.2017.12.008.
  49. E. de Lima Nunes, L. Menzen and M. Cristina de Almeida Freitas Cardoso, Open J. Otolaryngol., 2019, 2, 10–16 CrossRef.
  50. M. Golabbakhsh, A. Rajaei, M. Derakhshan, S. Sadri, M. Taheri and P. Adibi, Dysphagia, 2014, 29, 572–577 CrossRef PubMed.
  51. Q. Pan, N. Maeda, Y. Manda, N. Kodama and S. Minagi, J. Oral Rehabil., 2016, 43, 840–846 CrossRef CAS PubMed.
  52. K. Takahashi, M. E. Groher and K. ichi Michi, Dysphagia, 1994, 9, 54–62 CAS.
  53. O. Makeyev, P. Lopez-Meyer, S. Schuckers, W. Besio and E. Sazonov, Biomed. Signal Process. Control, 2012, 7, 649–656 CrossRef PubMed.
  54. J. H. Lim, P. M. Djuric and M. Stanacevic, in International Conference on Electrical, Computer, and Energy Technologies, ICECET 2021, Institute of Electrical and Electronics Engineers Inc., 2021.
  55. B. Polat, L. L. Becerra, P. Y. Hsu, V. Kaipu, P. P. Mercier, C. K. Cheng and D. J. Lipomi, ACS Appl. Nano Mater., 2021, 4, 8126–8134 CrossRef CAS.
  56. B. Polat, T. Rafeedi, L. Becerra, A. X. Chen, K. Chiang, V. Kaipu, R. Blau, P. P. Mercier, C. Cheng and D. J. Lipomi, Adv. Sens. Res., 2023, 2, 2200060 CrossRef.
  57. J. Ramírez, D. Rodriquez, F. Qiao, J. Warchall, J. Rye, E. Aklile, A. S. C. Chiang, B. C. Marin, P. P. Mercier, C. K. Cheng, K. A. Hutcheson, E. H. Shinn and D. J. Lipomi, ACS Nano, 2018, 12, 5913–5922 CrossRef PubMed.
  58. A. Santamato, F. Panza, V. Solfrizzi, A. Russo, V. Frisardi, M. Megna, M. Ranieri and P. Fiore, J. Rehabil. Med., 2009, 41, 639–645 CrossRef PubMed.
  59. L. Marks and J. Weinreich, in International Journal of Language and Communication Disorders, Taylor and Francis Ltd, 2001, vol. 36, pp. 288–291 Search PubMed.
  60. J. M. Dudik, A. Kurosu, J. L. Coyle and E. Sejdić, Biomed. Eng. Online, 2018, 17, 69 CrossRef PubMed.
  61. J. K. Nowak, R. Nowak, K. Radzikowski, I. Grulkowski and J. Walkowiak, MDPI AG, 2021, preprint,  DOI:10.3390/s21165294.
  62. M. Zhou, Y. Yu, Y. Zhou, L. Song, S. Wang and D. Na, RSC Adv., 2022, 12, 29103–29112 RSC.
  63. A. Baronetto, L. S. Graf, S. Fischer, M. F. Neurath and O. Amft, in Proceedings - International Symposium on Wearable Computers, ISWC, Association for Computing Machinery, 2020, pp. 17–21.
  64. X. Du, G. Allwood, K. M. Webberley, A. Osseiran, W. Wan, A. Volikova and B. J. Marshall, J. Acoust. Soc. Am., 2018, 144, EL485–EL491 CrossRef PubMed.
  65. S. Nakagawa, S. N. Saito, S. Otsuka, S. Hori and M. Honda, in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society, EMBS, Institute of Electrical and Electronics Engineers Inc., 2023.
  66. Z. Zhao, F. Li, Y. Xie, Y. Wu and Y. Wang, IEEE Trans. Mob. Comput., 2024, 23, 3213–3227 Search PubMed.
  67. S. S. Ching and Y. K. Tan, Baishideng Publishing Group Co, 2012, preprint,  DOI:10.3748/wjg.v18.i33.4585.
  68. B. L. Craine, M. Silpa and C. J. O'Toole, Dig. Dis Sci., 1999, 44, 1887–1892 CrossRef CAS PubMed.
  69. R. Ranta, V. Louis-Dorr, C. Heinrich, D. Wolf and F. Guillemin, IEEE Trans. Biomed. Eng., 2010, 57, 1507–1519 Search PubMed.
  70. F. Wang, D. Wu, P. Jin, Y. Zhang, Y. Yang, Y. Ma, A. Yang, J. Fu and X. Feng, Sci. China Inform. Sci., 2019, 62, 202402 CrossRef.
  71. J. C. Doll and B. L. Pruitt, Piezoresistor design and applications, Springer New York, 2013 Search PubMed.
  72. Z. Lin, S. Duan, M. Liu, C. Dang, S. Qian, L. Zhang, H. Wang, W. Yan and M. Zhu, John Wiley and Sons Inc, 2024, preprint,  DOI:10.1002/adma.202306880.
  73. Y. Jung, I. Ha, M. Kim, J. Ahn, J. Lee and S. H. Ko, Nano Energy, 2023, 105, 107979 CAS.
  74. S. Gong, X. Zhang, X. A. Nguyen, Q. Shi, F. Lin, S. Chauhan, Z. Ge and W. Cheng, Nat. Nanotechnol., 2023, 18, 889–897 CAS.
  75. S. Yang and N. Lu, Sensors, 2013, 13, 8577–8594 CAS.
  76. T. T. Hoang, A. M. Cunio, S. Zhao, T. Nguyen, S. Peng, S. Liaw, T. Barber, J. Zhang, S. Farajikhah, F. Dehghani, T. N. Do and H. Phan, Adv. Sens. Res., 2024, 3, 2400039 Search PubMed.
  77. Y. Okamoto, T. V. Nguyen, H. Takahashi, Y. Takei, H. Okada and M. Ichiki, Sci. Rep., 2023, 13, 6503 CAS.
  78. Q. Liu, J. Chen, Y. Li and G. Shi, ACS Nano, 2016, 10, 7901–7906 CAS.
  79. L. Q. Tao, H. Tian, Y. Liu, Z. Y. Ju, Y. Pang, Y. Q. Chen, D. Y. Wang, X. G. Tian, J. C. Yan, N. Q. Deng, Y. Yang and T. L. Ren, Nat. Commun., 2017, 8, 14579 CrossRef CAS PubMed.
  80. Y. Wang, L. Wang, T. Yang, X. Li, X. Zang, M. Zhu, K. Wang, D. Wu and H. Zhu, Adv. Funct. Mater., 2014, 24, 4666–4670 CrossRef CAS.
  81. Y. Cai, J. Shen, G. Ge, Y. Zhang, W. Jin, W. Huang, J. Shao, J. Yang and X. Dong, ACS Nano, 2018, 12, 56–62 CrossRef CAS PubMed.
  82. X. Sun, Z. Qin, L. Ye, H. Zhang, Q. Yu, X. Wu, J. Li and F. Yao, Chem. Eng. J., 2020, 382, 122832 CrossRef CAS.
  83. Y. He, D. Wu, M. Zhou, Y. Zheng, T. Wang, C. Lu, L. Zhang, H. Liu and C. Liu, ACS Appl. Mater. Interfaces, 2021, 13, 15572–15583 CrossRef CAS PubMed.
  84. G.-Y. Gou, X.-S. Li, J.-M. Jian, F. Wu, J. Ren, X.-S. Geng, J.-D. Xu, Y.-C. Qiao, Z.-Y. Yan, G. Dun, C. W. Ahn, Y. Yang and T.-L. Ren, Two-stage amplification of an ultrasensitive MXene-based intelligent artificial eardrum, 2022, vol. 8 Search PubMed.
  85. S. Wang, W. Deng, T. Yang, Y. Ao, H. Zhang, G. Tian, L. Deng, H. Huang, J. Huang, B. Lan and W. Yang, Adv. Funct. Mater., 2023, 33, 2214503 CrossRef CAS.
  86. Y. Ma, Y. Yue, H. Zhang, F. Cheng, W. Zhao, J. Rao, S. Luo, J. Wang, X. Jiang, Z. Liu, N. Liu and Y. Gao, ACS Nano, 2018, 12, 3209–3216 CrossRef CAS PubMed.
  87. A. A. Barlian, W. T. Park, J. R. Mallon, A. J. Rastegar and B. L. Pruitt, Institute of Electrical and Electronics Engineers Inc., 2009, preprint,  DOI:10.1109/JPROC.2009.2013612.
  88. M. Hempel, D. Nezich, J. Kong and M. Hofmann, Nano Lett., 2012, 12, 5714–5718 CrossRef CAS PubMed.
  89. Z. Liu, D. Qi, P. Guo, Y. Liu, B. Zhu, H. Yang, Y. Liu, B. Li, C. Zhang, J. Yu, B. Liedberg and X. Chen, Adv. Mater., 2015, 27, 6230–6237 CrossRef CAS PubMed.
  90. Y. Wang, W. Cai, Y. Zhang, J. Ji, H. Zheng, D. Yan and X. Liu, Discover Nano, 2024, 19, 1–31 CrossRef CAS PubMed.
  91. J. He, F. Shi, Q. Liu, Y. Pang, D. He, W. Sun, L. Peng, J. Yang and M. Qu, Colloids Surf., A, 2022, 642, 128676 CrossRef CAS.
  92. L. Huang, X. Huang, X. Bu, S. Wang and P. Zhang, Sens. Actuators, A, 2023, 359, 114508 CrossRef CAS.
  93. M. Bhattacharjee, M. Soni, P. Escobedo and R. Dahiya, Adv. Electron. Mater., 2020, 6, 2000445 CrossRef CAS.
  94. G. Liu, Z. Lv, S. Batool, M. Z. Li, P. Zhao, L. Guo, Y. Wang, Y. Zhou and S. T. Han, Small, 2023, 19, 2207879 CrossRef CAS PubMed.
  95. Y. Ming, P. Kelin, L. Zhen, G. Yanan, T. Qingquan, Z. Zhixiong and C. Yu, Chem. Eng. J., 2025, 159659 Search PubMed.
  96. S. Lee, M. Kim, J. Choi and S. Y. Kim, Mater. Today Chem., 2023, 29, 101434 CrossRef CAS.
  97. W. Peng, L. Han, H. Huang, X. Xuan, G. Pan, L. Wan, T. Lu, M. Xu and L. Pan, J. Mater. Chem. A, 2020, 8, 26109–26118 RSC.
  98. H. Wu, Z. Pang, L. Ji, X. Pang, Y. Li and X. Yu, Chem. Eng. J., 2024, 497, 154883 CrossRef CAS.
  99. M. Amjadi, K. U. Kyung, I. Park and M. Sitti, Adv. Funct. Mater., 2016, 26, 1678–1698 CrossRef CAS.
  100. J. Yang, D. Tang, J. Ao, T. Ghosh, T. V. Neumann, D. Zhang, E. Piskarev, T. Yu, V. K. Truong, K. Xie, Y. C. Lai, Y. Li and M. D. Dickey, Adv. Funct. Mater., 2020, 30, 2002611 CrossRef CAS.
  101. D. Hohm and R. Gerhard-Multhaupt, J. Acoust. Soc. Am., 1984, 75, 1297–1298 CrossRef CAS.
  102. B. A. Ganji and B. Y. Majlis, Sens. Actuators, A, 2009, 149, 29–37 CrossRef CAS.
  103. J. Lee, S. C. Ko, C. H. Je, M. L. Lee, C. A. Choi, Y. S. Yang, S. Heo and J. Kim, in Proceedings of IEEE Sensors, 2009, pp. 1313–1316.
  104. J. A. Voorthuyzen, P. Bergveld and A. J. Sprenkels, IEEE Trans. Electr. Insul., 1989, 24, 267–276 CrossRef CAS.
  105. D. Hohm and G. Hess, J. Acoust. Soc. Am., 1989, 85, 476–480 CrossRef.
  106. Y. B. Ning, A. W. Mitchell and R. N. Tait, Sens. Actuators, A, 1996, 53, 237–242 CrossRef CAS.
  107. P. R. Scheeper, W. Olthuis and P. Bergveld, Sens. Actuators, A, 1994, 40, 179–186 CrossRef.
  108. S. A. Zawawi, A. A. Hamzah, B. Y. Majlis and F. Mohd-Yasin, in IEEE International Conference on Semiconductor Electronics, Proceedings, ICSE, 2016, vol. 2016-September, pp. 25–28.
  109. Q. Zou, Z. Li and L. Liu, J. Microelectromech. Syst., 1996, 5, 197–204 CrossRef CAS.
  110. P. C. Hsu, C. H. Mastrangelo and K. D. Wise, in Proceedings of the IEEE Micro Electro Mechanical Systems (MEMS), 1998, pp. 580–585.
  111. M. Pedersen, W. Olthuis and P. Bergveld, J. Microelectromech. Syst., 1998, 7, 387–394 CrossRef CAS.
  112. D. Todorović, A. Matković, M. Milićević, D. Jovanović, R. Gajić, I. Salom and M. Spasenović, 2D Mater., 2015, 2, 045013 CrossRef.
  113. A. Torkkeli, O. Rusanen, J. Saarilahti, H. Seppä, H. Sipola and J. Hietanen, Sens. Actuators, A, 2000, 85, 116–123 CAS.
  114. P. K. Sharma, N. Gupta and P. I. Dankov, AEÜ - Int. J. Electron. Commun., 2020, 127, 153455 CrossRef.
  115. Z. Xu, S. Zheng, X. Wu, Z. Liu, R. Bao, W. Yang and M. Yang, Composites, Part A, 2019, 125, 105527 CrossRef CAS.
  116. M. Konieczna, E. Markiewicz and J. Jurga, Polym. Eng. Sci., 2010, 50, 1613–1619 CrossRef CAS.
  117. S. C. B. Mannsfeld, B. C. K. Tee, R. M. Stoltenberg, C. V. H. H. Chen, S. Barman, B. V. O. Muir, A. N. Sokolov, C. Reese and Z. Bao, Nat. Mater., 2010, 9, 859–864 CrossRef CAS PubMed.
  118. G. Schwartz, B. C. K. Tee, J. Mei, A. L. Appleton, D. H. Kim, H. Wang and Z. Bao, Nat. Commun., 2013, 4, 1859 CrossRef PubMed.
  119. B. C. K. Tee, A. Chortos, R. R. Dunn, G. Schwartz, E. Eason and Z. Bao, Adv. Funct. Mater., 2014, 24, 5427–5434 CrossRef CAS.
  120. X. Tang, C. Wu, L. Gan, T. Zhang, T. Zhou, J. Huang, H. Wang, C. Xie and D. Zeng, Small, 2019, 15, 1804559 CrossRef PubMed.
  121. H. Niu, S. Gao, W. Yue, Y. Li, W. Zhou and H. Liu, Small, 2020, 16, 1904774 CrossRef CAS PubMed.
  122. Y. Wan, Z. Qiu, J. Huang, J. Yang, Q. Wang, P. Lu, J. Yang, J. Zhang, S. Huang, Z. Wu and C. F. Guo, Small, 2018, 14, 1801657 CrossRef PubMed.
  123. K. Xia, C. Wang, M. Jian, Q. Wang and Y. Zhang, Nano Res., 2018, 11, 1124–1134 CrossRef CAS.
  124. G. Yao, L. Xu, X. Cheng, Y. Li, X. Huang, W. Guo, S. Liu, Z. L. Wang and H. Wu, Adv. Funct. Mater., 2020, 30, 1907312 CrossRef CAS.
  125. Y. Sun, H. Tai, Z. Yuan, Z. Duan, Q. Huang and Y. Jiang, Part. Part. Syst. Charact., 2021, 38, 2100019 CrossRef CAS.
  126. J. C. Yang, J. O. Kim, J. Oh, S. Y. Kwon, J. Y. Sim, D. W. Kim, H. B. Choi and S. Park, ACS Appl. Mater. Interfaces, 2019, 11, 19472–19480 CrossRef CAS PubMed.
  127. S. Mohith, A. R. Upadhya, K. P. Navin, S. M. Kulkarni and M. Rao, IOP Publishing Ltd, 2021, preprint,  DOI:10.1088/1361-665X/abc6b9.
  128. L. Sui, X. Xiong and G. Shi, Phys. Procedia, 2012, 25, 1388–1396 CrossRef.
  129. Y. Meng, G. Chen and M. Huang, MDPI, 2022, preprint,  DOI:10.3390/nano12071171.
  130. A. Sharma, M. Z. Ansari and C. Cho, Elsevier B.V., 2022, preprint,  DOI:10.1016/j.sna.2022.113934.
  131. S. Bairagi, I. Shahid-ul, M. Shahadat, D. M. Mulvihill and W. Ali, Elsevier Ltd, 2023, preprint,  DOI:10.1016/j.nanoen.2023.108414.
  132. Z. Lou, S. Chen, L. Wang, R. Shi, L. Li, K. Jiang, D. Chen and G. Shen, Nano Energy, 2017, 38, 28–35 CrossRef CAS.
  133. Y. Xin, C. Guo, X. Qi, H. Tian, X. Li, Q. Dai, S. Wang and C. Wang, Ferroelectrics, 2016, 500, 291–300 CrossRef CAS.
  134. S. Choi and Z. Jiang, Sens. Actuators, A, 2006, 128, 317–326 CrossRef CAS.
  135. W. Dong, L. Xiao, W. Hu, C. Zhu, Y. Huang and Z. Yin, Trans. Inst. Meas. Control, 2017, 39, 398–403 CrossRef.
  136. C. Lang, J. Fang, H. Shao, X. Ding and T. Lin, Nat. Commun., 2016, 7, 11108 CrossRef CAS PubMed.
  137. A. Zaszczyńska, A. Gradys and P. Sajkiewicz, MDPI AG, 2020, preprint,  DOI:10.3390/polym12112754.
  138. K. Roy, S. K. Ghosh, A. Sultana, S. Garain, M. Xie, C. R. Bowen, K. Henkel, D. Schmeiβer and D. Mandal, ACS Appl. Nano Mater., 2019, 2, 2013–2025 CrossRef CAS.
  139. W. Yan, G. Noel, G. Loke, E. Meiklejohn, T. Khudiyev, J. Marion, G. Rui, J. Lin, J. Cherston, A. Sahasrabudhe, J. Wilbert, I. Wicaksono, R. W. Hoyt, A. Missakian, L. Zhu, C. Ma, J. Joannopoulos and Y. Fink, Nature, 2022, 603, 616–623 CrossRef CAS PubMed.
  140. J. Park, M. Kim, Y. Lee, H. S. Lee and H. Ko, Sci. Adv., 2015, 1, 1500661 CrossRef PubMed.
  141. Y. Cotur, M. Kasimatis, M. Kaisti, S. Olenik, C. Georgiou and F. Güder, Adv. Funct. Mater., 2020, 30, 1910288 CrossRef CAS PubMed.
  142. S. H. Lee, Y. S. Kim, M. K. Yeo, M. Mahmood, N. Zavanelli, C. Chung, J. Y. Heo, Y. Kim, S. S. Jung and W. H. Yeo, Sci. Adv., 2022, 8, eabo5867 Search PubMed.
  143. G. Wang, Y. Yang, S. Chen, J. Fu, D. Wu, A. Yang, Y. Ma and X. Feng, IEEE J. Biomed. Health Inform., 2022, 26, 2951–2962 Search PubMed.
  144. A. Kumar, A. Varghese, D. Kalra, A. Raunak, Jaiverdhan, M. Prasad, V. Janyani and R. P. Yadav, Elsevier Ltd, 2024, preprint,  DOI:10.1016/j.mssp.2023.107879.
  145. M. A. Shah, I. A. Shah, D. G. Lee and S. Hur, Hindawi Limited, 2019, preprint,  DOI:10.1155/2019/9294528.
  146. M. C. Yew, C. W. Huang, W. Lin, C. H. Wang and P. Chang, in IMPACT Conference 2009 International 3D IC Conference - Proceedings, 2009, pp. 323–326.
  147. P. R. Scheeper, A. G. H. van der Donk, W. Olthuis and P. Bergveld, Sens. Actuators, A, 1994, 44, 1–11 Search PubMed.
  148. W. J. Wang, R. M. Lin and Y. Ren, Microelectron. Int., 2003, 20, 36–40 CrossRef.
  149. Y. Zhang and K. D. Wise, J. Microelectromech. Syst., 1994, 3, 59–68 CrossRef.
  150. S. C. Lo, S. K. Yeh, J. J. Wang, M. Wu, R. Chen and W. Fang, in Proceedings of the IEEE International Conference on Micro Electro Mechanical Systems (MEMS), 2018, vol. 2018-January, pp. 1064–1067.
  151. T. Nabeshima, T. V. Nguyen and H. Takahashi, Micromachines, 2022, 13, 645 CrossRef PubMed.
  152. H. Liu, S. Liu, A. A. Shkel, Y. Tang and E. S. Kim, in Proceedings of the IEEE International Conference on Micro Electro Mechanical Systems (MEMS), 2020, vol. 2020-January, pp. 857–860.
  153. Y. C. Chen, S. C. Lo, H. H. Cheng, M. Wu, I. Y. Huang and W. Fang, in Proceedings of IEEE Sensors, 2019, pp. 1–4.
  154. S. Da Wang, Y. C. Chen, S. C. Lo, Y. J. Wang, M. Wu and W. Fang, in Proceedings of IEEE Sensors, 2021 IEEE Sensors, 2021, pp. 1–4.
  155. S.-H. Tseng, S.-C. Lo, Yu.-C. Chen, Ya.-C. Lee, M. Wu and W. Fang, in 2020 IEEE 33rd International Conference on Micro Electro Mechanical Systems (MEMS), 2020, pp. 845–848.
  156. A. Albarbar, S. Mekid, A. Starr and R. Pietruszkiewicz, Sensors, 2008, 8, 784–799 CrossRef PubMed.
  157. W. Babatain, S. Bhattacharjee, A. M. Hussain and M. M. Hussain, ACS Appl Electron. Mater., 2021, 3, 504–531 CrossRef CAS.
  158. R. Gao and L. Zhang, IEEE Instrum. Meas. Mag., 2004, 7, 20–26 CrossRef.
  159. H. Luo, G. Zhang, L. R. Carley and G. K. Fedder, J. Microelectromech. Syst., 2002, 11, 188–195 CrossRef CAS.
  160. A. Aydemir and T. Akin, in Procedia Engineering, Elsevier Ltd, 2015, vol. 120, pp. 727–730 Search PubMed.
  161. C. M. Sun, M. H. Tsai, Y. C. Liu and W. Fang, IEEE Trans. Electron Devices, 2010, 57, 1670–1679 CAS.
  162. H. Sun, D. Fang, K. Jia, F. Maarouf, H. Qu and H. Xie, IEEE Sens. J., 2011, 11, 925–933 CAS.
  163. E. Marcelli, A. Capucci, G. Minardi and L. Cercenelli, ASAIO J., 2017, 63, 73–79 CrossRef PubMed.
  164. P. Gupta, M. J. Moghimi, Y. Jeong, D. Gupta, O. T. Inan and F. Ayazi, npj Digit. Med., 2020, 3, 19 Search PubMed.
  165. P. Gupta, H. Wen, L. Di Francesco and F. Ayazi, Sci. Rep., 2021, 11, 13427 CAS.
  166. J. Park, Y. Lee, J. Hong, M. Ha, Y. Do Jung, H. Lim, S. Y. Kim and H. Ko, ACS Nano, 2014, 8, 4689–4697 CAS.
  167. S. Lee, J. Kim, I. Yun, G. Y. Bae, D. Kim, S. Park, I. M. Yi, W. Moon, Y. Chung and K. Cho, Nat. Commun., 2019, 10, 2468 Search PubMed.
  168. S. Lee, J. Kim, H. Roh, W. Kim, S. Chung, W. Moon and K. Cho, Adv. Mater., 2022, 34, 2109545 CrossRef CAS PubMed.
  169. S. Xu, Z. Xu, D. Li, T. Cui, X. Li, Y. Yang, H. Liu and T. Ren, MDPI, 2023, preprint,  DOI:10.3390/polym15122699.
  170. S. Sundaram, P. Kellnhofer, Y. Li, J. Y. Zhu, A. Torralba and W. Matusik, Nature, 2019, 569, 698–702 CrossRef CAS PubMed.
  171. P. Wang, G. Sun, W. Yu, G. Li, C. Meng and S. Guo, Nano Energy, 2022, 104, 107883 CrossRef CAS.
  172. S. Baek, Y. Lee, J. Baek, J. Kwon, S. Kim, S. Lee, K. P. Strunk, S. Stehlin, C. Melzer, S. M. Park, H. Ko and S. Jung, ACS Nano, 2022, 16, 368–377 CrossRef CAS PubMed.
  173. L. Han, W. Liang, Q. Xie, J. J. Zhao, Y. Dong, X. Wang and L. Lin, Adv. Sci., 2023, 10, 2301180 CrossRef CAS PubMed.

This journal is © The Royal Society of Chemistry 2025