Emily Clements, Charlotte van der Nagel, Katherine Crank, Deena Hannoun and Daniel Gerrity*
Southern Nevada Water Authority, P.O. Box 99954, Las Vegas, NV 89193, USA. E-mail: daniel.gerrity@snwa.com
First published on 3rd January 2025
Potable water reuse is becoming more common as communities deal with increased water demands and climate change. Understanding the risks associated with potable reuse is essential to ensuring that public health is protected from waterborne pathogens. This paper provides a review of the studies that have performed quantitative microbial risk assessments (QMRAs) on potable reuse. The 30 articles included here studied direct potable reuse (DPR), indirect potable reuse (IPR), and/or de facto reuse (DFR), and a variety of pathogens, including norovirus, adenovirus, Cryptosporidium, Giardia, Campylobacter, and Salmonella. The QMRAs were either ‘top-down’ or regulations-focused, where log reduction targets (LRTs) were determined based on initial (e.g., raw wastewater) pathogen concentrations and risk goals (e.g., the 10−4 annual risk benchmark), or ‘bottom-up’ or risk-estimation-focused, where risks were calculated based on known pathogen concentrations and observed/credited log reduction values (LRVs). Some studies incorporated process failures and pathogen decay, which were often a driving factor for risk, but several studies omitted one or both. Many studies compared multiple treatment trains (e.g., carbon-based advanced treatment (CBAT) vs. reverse-osmosis-based advanced treatment (RBAT)). They found that treatment-based differences were pathogen-dependent because certain processes are better able to inactivate or remove certain pathogens. Many factors influence the risks reported in the various studies, including the assumed ratios of gene copies to infectious units (GC:IU), assumptions related to ingestion volume and frequency, dynamic vs. static modeling, and Bayesian approaches. The LRTs for the top-down QMRAs varied within and between studies, depending partially on the pathogen concentrations used and whether redundancy was included. The key finding from this review is that while QMRAs often have different goals warranting different assumptions, it is essential that researchers report these assumptions and their justifications so that policymakers and regulators fully understand their implications, thereby avoiding overly stringent or nonprotective regulations.
Water impact: We conducted a comprehensive literature review on quantitative microbial risk assessments (QMRAs) for potable reuse, which will likely become more necessary due to climate change and drought. This review provides timely and critical insights into potable reuse QMRAs to inform future research and policy development for water reuse by identifying gaps, challenges, and best practices in conducting and reporting QMRAs.
Prior to introducing potable reuse in communities, it is essential to assess the risks associated with waterborne diseases that could be acquired through this process. Quantitative microbial risk assessment (QMRA) is a tool commonly used to assess the likelihood of infection and/or illness resulting from pathogen exposure. The four steps are hazard identification, exposure assessment, dose–response modeling, and risk characterization.4,5 While QMRA has been used extensively to analyze the risk of non-potable reuse, including agricultural reuse and other purpose-driven applications,6–11 there have been fewer studies on potable reuse.
As potable reuse regulatory development and project implementation occur, it is essential to understand the microbial risks presented by these systems, including how they can be estimated and ultimately managed. Therefore, the goal of this study was to review the studies that have used QMRA to assess the risks from potable reuse and highlight the implications of various assumptions made during the risk assessment. QMRAs are inherently a product of their assumptions, and if those assumptions are not clear, a QMRA can be misinterpreted. This review will also examine the pathogens driving risks, highlight risk mitigation strategies expected to be most effective, and compare log reduction targets (LRTs) and log reduction value (LRV) assumptions from different studies, as these affect the development of regulations.
During the review of the selected papers, two additional papers were identified that did not use the term quantitative microbial risk assessment, likely because they were published before QMRA was a common term; however, these resources performed a QMRA on potable reuse.12,13 This brought the total number to 30 studies of QMRA for potable reuse.
Table S1† summarizes the studies which have performed QMRA for potable reuse. It includes the target pathogens for each QMRA, the type of potable reuse project (DPR, IPR, and/or DFR), the associated treatment train(s), and the QMRA approach (i.e., top-down or regulations-focused vs. bottom-up or risk-estimation-focused). Top-down QMRAs aim to identify LRTs based on initial (e.g., raw wastewater) pathogen concentrations and assumed risk goals (e.g., 10−4 annual risk benchmark). Bottom-up QMRAs estimate risk based on known pathogen concentrations and LRVs achieved by or credited to the treatment train, with the conservative practice of LRV crediting generally resulting in greater estimated risks. Those calculated risks are typically compared against a risk benchmark to determine whether the system is adequately protective of public health. These risk benchmarks are often based either on a probability of infection (Pinf), with a typical target of <10−4 infections per person per year (pppy), or a metric that considers health outcomes (e.g., disability adjusted life years (DALYs)), with a typical target of <10−6 DALYs pppy.18
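To make the top-down calculation concrete, the sketch below back-calculates an LRT from an assumed raw wastewater concentration, a daily ingestion volume, and the 10−4 annual infection benchmark using an exponential dose–response model. All numerical inputs (the concentration, ingestion volume, and dose–response parameter r) are hypothetical placeholders rather than values from any of the reviewed studies.

```python
import numpy as np

def exponential_dose_response(dose, r):
    """Exponential dose-response model: P(infection) = 1 - exp(-r * dose)."""
    return 1.0 - np.exp(-r * dose)

# --- Hypothetical inputs (placeholders, not values from the reviewed studies) ---
c_raw = 1.0e3              # pathogen concentration in raw wastewater (organisms per L)
volume_per_day = 2.0       # ingested volume (L per day)
events_per_year = 365
r = 0.09                   # exponential dose-response parameter (hypothetical)
annual_benchmark = 1.0e-4  # 10^-4 annual probability of infection

# Convert the annual benchmark to an equivalent per-event (daily) risk target
daily_target = 1.0 - (1.0 - annual_benchmark) ** (1.0 / events_per_year)

# Invert the dose-response model to find the allowable dose per ingestion event
allowable_dose = -np.log(1.0 - daily_target) / r

# Allowable finished-water concentration and the resulting log reduction target
c_finished = allowable_dose / volume_per_day
lrt = np.log10(c_raw / c_finished)

print(f"Daily risk target:    {daily_target:.2e}")
print(f"Allowable dose/event: {allowable_dose:.2e} organisms")
print(f"Log reduction target: {lrt:.1f} log10")
```

The same structure runs in reverse for a bottom-up QMRA: fix the LRVs, compute the finished-water concentration and dose, and compare the resulting annual risk against the benchmark.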
Table S1† also includes other factors that impact the risk calculation, including the volume of water consumed and ingestion frequency. If the pathogen concentrations used in a QMRA are based on molecular methods (i.e., polymerase chain reaction (PCR)), the number of gene copies (GC) often needs to be converted to infectious units (IU) for the risk assessment, as dose–response models are often developed based on infectious doses. A conservative GC:IU ratio of 1.0 assumes every gene copy equates to one infectious pathogen. However, molecular methods often overestimate infectious pathogen exposure because die-off/inactivation generally does not result in a corresponding level of genome damage. Therefore, GC:IU ratios can be significantly greater than 1.0 under real-world conditions.19 Some QMRAs incorporate failures, sensitivity analyses, and/or pathogen decay linked to retention time in the environmental buffer. Studies differ based on the dose–response curves used for a given pathogen, although some studies directly compare multiple dose–response models to understand the implications of this assumption on resulting risk estimates. The decisions researchers made in developing their QMRAs and the implications of those decisions are discussed in more detail throughout this paper.
Not every study performed a simple top-down or bottom-up QMRA. Two studies focused on stormwater for potable reuse and performed a blended top-down/bottom-up QMRA.42,43 Since the studies used the same pathogen concentrations and acceptable risk threshold, they both arrived at the same LRTs. However, they then evaluated different treatment trains, specifically by varying the level of aquifer treatment, to determine if the corresponding LRVs would be sufficient to mitigate risk for the different pathogens, albeit without directly calculating risk.
MacNevin and Zornes44 performed a bottom-up QMRA but iterated over different LRVs to determine the minimum required LRV to consistently achieve a 10−4 annual risk of infection, providing a similar result to following a top-down approach. They used the concentrations of Cryptosporidium and Giardia at 20 different water reclamation facilities and started with an LRV of 4. They then increased the LRV by 0.5 at each facility to determine if the annual risk of infection was less than 10−4 every year for 1000 simulations. This resulted in a total of 40 values of minimum LRVs, ranging from 5 to 10, for both Cryptosporidium and Giardia. MacNevin and Zornes44 compared these results in the context of two potential treatment trains: (1) reverse-osmosis-based advanced treatment (RBAT: UF-RO-UVAOP-ESB + Cl2) with LRVs of 12/15/12 and (2) carbon-based advanced treatment (CBAT: O3-BAF-UF-UVAOP-ESB + Cl2) with LRVs of 16/16/11 for viruses, Giardia, and Cryptosporidium, respectively. All facilities would be able to surpass the LRTs with either treatment train.
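A minimal sketch of this iterative minimum-LRV search is shown below, assuming a hypothetical lognormal raw-water protozoan concentration distribution and an exponential dose–response model; the distribution parameters, dose–response coefficient, and ingestion volume are illustrative assumptions, not the values used by MacNevin and Zornes.44

```python
import numpy as np

rng = np.random.default_rng(42)

def annual_risk(conc_per_L, lrv, r=0.057, volume_L=2.0):
    """Annual probability of infection from daily ingestion of treated water.
    r is a hypothetical exponential dose-response parameter."""
    dose = conc_per_L * 10.0 ** (-lrv) * volume_L   # organisms ingested each day
    p_daily = 1.0 - np.exp(-r * dose)               # exponential dose-response
    return 1.0 - np.prod(1.0 - p_daily)             # combine 365 daily exposures

def simulate_year():
    # Hypothetical raw-water protozoan concentrations (organisms/L), lognormal
    return rng.lognormal(mean=np.log(10.0), sigma=1.0, size=365)

benchmark = 1.0e-4
lrv = 4.0
while True:
    risks = [annual_risk(simulate_year(), lrv) for _ in range(1000)]
    if max(risks) < benchmark:   # every simulated year meets the benchmark
        break
    lrv += 0.5                   # step the LRV up by 0.5, as in the study design

print(f"Minimum LRV meeting 1e-4 in all 1000 simulated years: {lrv}")
```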
Soller et al.45 did much of their analysis with the bottom-up approach to determine risks for specific scenarios, but also did a top-down assessment for DPR to determine the LRTs needed to consistently meet the benchmark risk levels. They found that a 14 log reduction of viruses, with norovirus as the model pathogen, and an 11+ log reduction of Cryptosporidium and Giardia resulted in around 95% of the simulations having an annual risk of infection less than 10−4. They demonstrated that 12/10/10 log reductions for viruses, Giardia, and Cryptosporidium, respectively, were insufficient to achieve the 10−4 annual risk benchmark in any of their simulations, which contradicts the findings of MacNevin and Zornes44 for protozoa. This is potentially problematic considering that the “12/10/10” framework has been adopted for IPR in California46,47 and Nevada,48 and now for DPR in Colorado.49 Differences in assumptions between MacNevin and Zornes44 and Soller et al.45 included different starting concentrations of protozoa, with MacNevin and Zornes44 having significantly lower concentrations, the use of point-estimate LRVs associated with treatment processes44 vs. uniform distributions,45 and different dose–response models. A more recent top-down QMRA from Gerrity et al.39 yielded scenarios that generally supported both Soller et al.45 and MacNevin and Zornes,44 depending on whether the pathogen concentrations were assumed to be maximum values or 97.4th percentile values, respectively.
Church et al.50 used QMRA to develop tentative standards for reuse of dishwashing graywater on military bases for potable use. They followed a top-down approach, but instead of trying to determine LRTs, they determined the final maximum allowable concentrations of norovirus, Salmonella, and E. coli O157:H7 for dishwashing, showering, and drinking, without specifying a certain type of treatment. Church et al.50 found that the maximum allowable concentration for potable reuse was lowest for E. coli O157:H7 (2.7 × 10−6 colony forming units (CFU) per mL). Since E. coli can be monitored with culture-based methods easily and in a cost-effective manner, Church et al.50 suggested converting E. coli O157:H7 to total culturable E. coli with a ratio and applying a 10-fold safety factor. This resulted in a recommended maximum final concentration of E. coli of 1.6 × 10−2 CFU mL−1 when treating recycled dishwashing water for DPR. Overall, top-down QMRAs are useful for identifying LRTs and creating regulations, while bottom-up QMRAs can be used to evaluate the expected performance of an existing treatment train or to determine the inherent safety factor.
DPR can either utilize raw water augmentation or treated water augmentation (Fig. 2). For raw water augmentation, the treated recycled water can be added back to an environmental buffer (i.e., aquifer, river, or lake) upstream of a drinking water treatment plant or blended directly with the water prior to treatment. This can be distinguished from IPR based on the residence time in the environmental buffer, with some regulatory frameworks requiring minimum storage times for an IPR designation (e.g., a minimum of two months in California46). Bailey et al.23 focused their risk assessment on raw water augmentation, with a retention time of 5 days and a mixing ratio of 20% recycled water and 80% surface water. Treated water augmentation occurs when the recycled water is blended directly into the distribution system. Amoueyan et al.21 studied the risks of both types of DPR. For IPR, the environmental buffer can either be surface water or groundwater, depending on the needs of a particular community. DFR is similar to IPR, but the treated wastewater at the drinking water intake is unplanned or incidental and often lacks additional/advanced treatment.
Fig. 2 Differences between DFR, IPR, and DPR (raw water augmentation and treated water augmentation).
Both Soller et al.31 and Amoueyan et al.21 compared DFR, IPR, and DPR and found the risks of IPR and DPR to be lower than the risk of DFR if the advanced water treatment (AWT) facilities are operating within design specifications. Amoueyan et al.21,22 found that the lowest risk occurred for DPR with no conventional source water (e.g., surface water or groundwater), and that the risk for IPR was dominated by pathogens assumed to be present in the conventional source water (i.e., not derived from local wastewater), leading to lower risks with greater recycled water contributions (RWCs). Other studies did not account for pathogen concentrations in the traditional source water and therefore found increased risk with higher percentages of recycled water.27 Future assessments of risk in IPR systems should consider pathogen concentrations in the source water, unless there are site-specific data to support their omission, as this would allow for a fair assessment of the relative risk impact of recycled water vs. conventional source water. This could prevent expensive additions to the advanced treatment train on the recycled water side when the driver of risk is actually the conventional source water.
IPR is already implemented in many places, including in the U.S. in states such as California, Virginia, Texas, and Georgia, as well as outside the U.S. in South Africa, Australia, and the United Kingdom.51 However, IPR is not a viable option for all communities, especially communities that lack access to a reservoir or aquifer with an adequate residence time or dilution ratio to sufficiently mitigate risk or meet regulatory requirements. Constructing and maintaining pipelines and pumping the treated water to reservoirs, where the water will be treated again after it is withdrawn, can be barriers for IPR implementation in some communities. Therefore, DPR may be the most sustainable option for certain communities, assuming DPR projects can be permitted. However, DPR greatly reduces the time available for detection and remediation of treatment issues (i.e., the response retention time or RRT).19 Adding an engineered storage buffer (ESB) to a DPR treatment train increases the RRT, allowing for risk reduction through mitigation of off-specification treatment.26 This potentially increases the attractiveness of DPR from the perspective of regulators and other stakeholders.
There is still hesitance about including norovirus in QMRAs for water reuse because there are no widely used, standardized culture methods to measure norovirus infectivity, and there is uncertainty around how to utilize molecular (e.g., qPCR) norovirus concentrations.26,29 Moreover, there are multiple dose–response models for norovirus that provide different results, and there is no consensus on which is most appropriate. As can be seen in Fig. 3, the same dose of norovirus (100 infectious units) can result in an order of magnitude difference in the probability of infection. At this dose, the dose–response functions without an aggregation parameter predict the following probabilities of infection: the hypergeometric 1F1 model52 predicts 53%, the fractional Poisson53 predicts 72%, and the beta-Poisson54 predicts 14%. Meanwhile, the fractional Poisson with an aggregation parameter53,55,56 predicts only 6%. The goal of an aggregation parameter is to prevent overestimation of infection by accounting for incomplete mixing of norovirus with a water body, which was observed in the inoculum used in a human trial.57 The drawback of including an aggregation parameter, however, is the unknown extent of aggregation or disaggregation of norovirus in environmental waters, leading some studies to consider the aggregated models as less conservative (i.e., they predict lower probabilities of illness). Chaudhry et al.24 found that using the fractional Poisson aggregated dose–response model for norovirus resulted in a three orders of magnitude lower median risk than using the disaggregated model. Soller et al.58 took an approach of modeling norovirus risk within two “bounds”, where the lower bound was set as the aggregated fractional Poisson and the upper bound was set to the hypergeometric 1F1. Lim et al.27 also justified the use of the disaggregated hypergeometric 1F1 as being more conservative; however, in the range simulated in Fig. 3, the fractional Poisson predicts the highest probability of infection at lower doses, indicating that the Messner et al.53 model (not considered in that study) would be a potentially better choice for an upper bound or conservative model. A more in-depth discussion and full comparison of norovirus dose–response models was published by Van Abel et al.54 Many of the reviewed QMRAs19,21,22,24,29,31,32,45 included multiple dose–response models for norovirus and Cryptosporidium due to the differences in predicted risks.
Fig. 3 Impact of norovirus dose–response model on risk: hypergeometric 1F1 (no aggregation) from Teunis et al.,52 fractional Poisson with aggregation from Atmar et al.55,56 and Messner et al.,53 fractional Poisson with no aggregation from Messner et al.,53 and approximate beta-Poisson (no aggregation) from Van Abel et al.54
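For reference, the fractional Poisson model can be written as Pinf(d) = P × (1 − exp(−d/μ)), where P is the susceptible fraction of the population and μ is the mean aggregate size. The short sketch below reproduces the approximate disaggregated (~72%) and aggregated (~6%) probabilities quoted above for a 100 IU dose; the parameter values (P ≈ 0.72 and μ ≈ 1100 for the aggregated inoculum) are approximate readings of Messner et al.53 and should be treated as illustrative.

```python
import numpy as np

def fractional_poisson(dose, p_susceptible=0.72, mean_aggregate=1.0):
    """Fractional Poisson dose-response: P * (1 - exp(-dose/mu))."""
    return p_susceptible * (1.0 - np.exp(-dose / mean_aggregate))

dose = 100.0  # infectious units

# Disaggregated virus (mean aggregate size of 1)
print(f"Disaggregated: {fractional_poisson(dose, mean_aggregate=1.0):.2f}")     # ~0.72

# Aggregated virus (mean aggregate size ~1100; approximate, illustrative value)
print(f"Aggregated:    {fractional_poisson(dose, mean_aggregate=1100.0):.2f}")  # ~0.06
```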
Drawbacks for norovirus inclusion in QMRAs are not limited to the dose–response model. Norovirus has multiple molecular assays that capture different strains, and some people are resistant to certain strains,39,59 complicating the interpretation of the molecular results. Although it was included in their sampling campaign, Bailey et al.23 chose not to include norovirus in their risk assessment because they did not detect any gene copies in their recycled or surface water samples. Pecson et al.19 found that the uncertainty in the LRTs for norovirus spanned over 10 orders of magnitude and therefore suggested a hybrid approach for characterizing gastrointestinal virus risk in reuse QMRAs: using enterovirus occurrence data (enteroviruses are culturable and present at high concentrations in wastewater) combined with the rotavirus dose–response model (rotavirus is highly infectious).
Soller et al.45 argue that norovirus should be included in risk assessments because it causes approximately 20 million illnesses a year in the U.S.,60 more than half of the illnesses caused by all foodborne pathogens.61 They also argue that newer dose–response models for norovirus can capture uncertainty.58 Soller et al.45 also note that though norovirus is not easily culturable, the GC:IU ratios for other enteric viruses are sometimes low (i.e., molecular data ≈ culture data), recently excreted viruses are likely mostly infectious, and it is better to use conservative estimates.62 These all support the inclusion of norovirus molecular data in reuse QMRAs, particularly when characterizing influent wastewater concentrations. In contrast, a dynamic QMRA, where community transmission was taken into account, found that waterborne norovirus likely contributes no appreciable risk to public health, because the risk for this specific organism in a community is dominated by secondary infections and foodborne transmission.20
Three studies have also used a surrogate enteric virus in their QMRA.12,13,25 While Tanaka et al.13 used concentrations from an enteric virus database with 377 samples from unchlorinated secondary effluents, Asano et al.12 used an enteric virus database with 424 secondary effluent samples and 84 tertiary effluent samples. Gerrity et al.25 used SARS-CoV-2 concentrations in wastewater with the hypergeometric dose–response model for norovirus based on the assumption that SARS-CoV-2 concentration dynamics were comparable to norovirus. While their calculated relative risks do not correspond to risk for actual pathogens, Gerrity et al.25 were able to gain insights about how incidental dispersion or engineered mixing could be implemented to attenuate pathogen concentration spikes and ultimately reduce high-end risk estimates. Though Asano et al.12 used the same concentrations for all the enteric viruses, they modeled the risk separately for poliovirus 1, poliovirus 3, and echovirus 12 due to different infectivities.
However, determining accurate and appropriate values can be difficult. Molecular methods measure the number of gene copies rather than the number of infectious pathogens, so gene copies must be converted to infectious units (GC:IU ratios or harmonization factors). The GC:IU ratio has a large impact on risk,19,25,64 and the numbers can vary widely. As discussed by Gerrity et al.,25 conservative approaches assume a GC:IU ratio of 1.0, where every gene copy is assumed to equate to an infectious pathogen, but the actual number of infectious units might be orders of magnitude lower due to inactivation/degradation.19 Most studies assumed all gene copies were infectious, but Bailey et al.23 applied pathogen-specific percentages of infectious units. They used point estimates of 38.5% infectious for adenovirus (2.6:1 GC:IU), 65% for Salmonella (1.5:1 GC:IU), 25% for Cryptosporidium (4:1 GC:IU), and 13% for Giardia (7.7:1 GC:IU). Gerrity et al.39 modeled the GC:IU ratios for norovirus, enterovirus, and adenovirus as log10-uniform distributions from 1:1 to 200:1, while Amoueyan et al.21 used a 700:1 point estimate GC:IU ratio for adenovirus. Culture methods, on the other hand, may underestimate the number of infectious viruses present. One proposed option to address this is to assume that only 10% of the viruses present are culturable,65 and this 10-fold correction factor has recently been applied to enterovirus culture data.19,39
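As a small illustration of this conversion step, the sketch below applies either a point-estimate GC:IU ratio or a log10-uniform distribution of ratios (the type of distribution described for Gerrity et al.39) to a molecular concentration before the dose calculation; the concentration, ingestion volume, and point-estimate ratio are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

conc_gc_per_L = 1.0e5  # qPCR-measured concentration (gene copies/L), placeholder
volume_L = 2.0         # daily ingestion volume (L)

# Point-estimate conversion with an assumed GC:IU ratio
gc_to_iu = 200.0       # e.g., 200 gene copies per infectious unit (assumption)
dose_iu = conc_gc_per_L * volume_L / gc_to_iu

# Distribution-based conversion: GC:IU sampled log10-uniformly between 1:1 and 200:1
ratios = 10.0 ** rng.uniform(np.log10(1.0), np.log10(200.0), size=10_000)
dose_iu_dist = conc_gc_per_L * volume_L / ratios

print(f"Point-estimate dose:   {dose_iu:.1f} IU")
print(f"Median dose (dist.):   {np.median(dose_iu_dist):.1f} IU")
print(f"95th pct dose (dist.): {np.percentile(dose_iu_dist, 95):.1f} IU")
```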
Low concentrations of pathogens can be difficult to measure, so using larger sample volumes, or more specifically larger equivalent sample volumes (ESVs),66 can provide more data with fewer non-detects. For example, Pecson et al.19 used 1 L samples to identify Cryptosporidium and the detection rate was 98%, compared to 40% with 50 μL samples.67 This would not be of concern for top-down QMRAs if the LRTs are determined from the highest pathogen concentrations, but for bottom-up QMRAs using pathogen concentration distributions, the lowest concentrations would be censored and potentially omitted, resulting in overestimations of risk. The pathogen concentrations in the assessment also depend on the probability density functions (PDFs) used to model them. Zhiteneva et al.68 performed a review summarizing assumptions made when selecting the PDFs for source water, treatment steps, and the dose–response models for potable and non-potable reuse. PDFs capture variability in the system and provide a range of final risk estimates. Each dataset needs to be individually fitted to a PDF, and a poorly chosen PDF can over- or underestimate risk.
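As a sketch of the distribution-fitting step with censored data, the example below fits a lognormal (normal in log10 space) distribution to a small, hypothetical concentration dataset by maximum likelihood, letting non-detects contribute through the cumulative probability below the detection limit rather than being discarded; the data and detection limit are invented for illustration.

```python
import numpy as np
from scipy import stats, optimize

# Hypothetical pathogen concentrations (organisms/L) and censoring information
detects = np.array([0.8, 2.5, 12.0, 0.3, 5.1, 1.7, 30.0, 0.9])
n_nondetects = 4
detection_limit = 0.1

def neg_log_likelihood(params):
    mu, sigma = params  # mean and std dev of log10 concentration
    if sigma <= 0:
        return np.inf
    log_c = np.log10(detects)
    # Detected samples contribute the normal pdf of their log10 concentration
    ll = stats.norm.logpdf(log_c, loc=mu, scale=sigma).sum()
    # Non-detects contribute the probability of falling below the detection limit
    ll += n_nondetects * stats.norm.logcdf(np.log10(detection_limit), loc=mu, scale=sigma)
    return -ll

result = optimize.minimize(neg_log_likelihood, x0=[0.0, 1.0], method="Nelder-Mead")
mu_hat, sigma_hat = result.x
print(f"Fitted log10 mean: {mu_hat:.2f}, log10 std dev: {sigma_hat:.2f}")
```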
Historically, QMRAs have relied on limited data. However, the rise of wastewater surveillance for SARS-CoV-2 during the COVID-19 pandemic has caused a substantial increase in wastewater biobanks, with additional reuse-relevant pathogen datasets based on these samples being published. This increase in pathogen data highlights the importance of reviews such as Zhiteneva et al.68 and Darby et al.69 These papers focus on identifying and aggregating high-quality pathogen data, and they provide criteria on fitting data to distributions, guiding future pathogen data collection and selection for QMRAs.
Dispersion/mixing of pathogens in sewer collection systems and wastewater treatment plants (e.g., in clarifiers and aeration basins) results in overall ‘averaging’ of pathogen concentrations over time, effectively attenuating high end concentrations but also elevating low-end concentrations. This may inflate measures of central tendency by increasing risk for most ingestion events, but it will also reduce risks at the upper percentiles that often drive LRT determinations.25 The attenuation effect is particularly apparent for intermittent spikes in influent pathogen concentration (i.e., outlier events).25 Some QMRAs use point estimates based on maximum influent pathogen concentrations, but those data points may be spikes (i.e., outliers) that might actually be attenuated after accounting for dispersion. However, this benefit of dispersion is only realized with intermittent spikes; if high concentrations last for an extended period of time (e.g., during a community outbreak), the effects of dispersion might be negligible. Just as there are stipulations for chemical peak averaging,17 implementing similar guidelines for pathogens could be beneficial, considering the significant impact it could have on pathogen concentrations and resulting LRTs or credited LRVs.
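A toy illustration of this attenuation effect is shown below: applying a moving average, used here as a crude stand-in for dispersion/mixing, to a synthetic concentration time series with a single spike lowers the peak substantially while barely moving the median. The time series is entirely synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic daily influent concentrations (lognormal baseline with one spike)
conc = rng.lognormal(mean=np.log(100.0), sigma=0.3, size=365)
conc[180] = 1.0e5   # intermittent spike (an outlier event)

# Crude dispersion model: 7-day moving average ('averaging' over residence time)
window = 7
dispersed = np.convolve(conc, np.ones(window) / window, mode="same")

print(f"Peak before dispersion: {conc.max():.0f}")
print(f"Peak after dispersion:  {dispersed.max():.0f}")
print(f"Median before/after:    {np.median(conc):.0f} / {np.median(dispersed):.0f}")
```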
The differences in decay rates for the different pathogens impact the needed retention times for risk reduction. For DFR, Amoueyan et al.22 found that risk associated with wastewater-derived Cryptosporidium exhibited a meaningful increase with fewer than 105 days of storage in the environmental buffer, while Amoueyan et al.20 found that a reservoir storage time of at least 30 days could potentially reduce risk from norovirus in a DFR system below that of DPR, using bacteriophage MS2 decay rates as a surrogate for norovirus.
Even 1% of wastewater effluent in the drinking water source can have important health risk implications for reuse.24,31 Soller et al.31 included Cryptosporidium, Giardia, and norovirus in their analysis, and used residence times of 2–360 days for DFR and 30–360 days for IPR. They found that simulations with a retention time less than 180 days exceeded the annual risk benchmark of 10−4, even with an RWC of 1% for DFR. With more than 10% wastewater contribution with DFR, more than 180 days were needed to consistently achieve a probability of annual infection of less than 10−4. Approximately 90 days in the reservoir were required to consistently meet the annual risk benchmark of 10−4 for IPR with surface water augmentation. For DFR, Lim et al.27 also found a negative correlation of risk with the residence time in the lake (between 270 and 360 days), and a positive correlation of risk with RWC, because they assumed the source water was pathogen free. Tanaka et al.13 and Asano et al.12 both assumed a residence time of 6 months in the reservoir, while Zhiteneva et al.33 modeled their residence time between 50 and 120 days. In California, to be considered IPR rather than DPR with raw water augmentation, the retention time must be at least 180 days, or the project can apply to the State Board for approval of a reduced theoretical retention time, though it can be no less than 60 days.46
Page et al.28 studied the impact of aquifer treatment for urban stormwater in a managed aquifer recharge system. They found that the aquifer alone resulted in LRVs of 1.4, 2.6, and >6.0 for rotavirus, Cryptosporidium, and Campylobacter, respectively, based on diffusion chamber studies. They used different decay rates for the pathogens in the wetlands and the aquifer, with higher average decay rates for rotavirus and Cryptosporidium in the wetland, and a higher decay rate for Campylobacter in the aquifer. The importance of the aquifer as a treatment barrier depended on the pre- and post-treatment processes, but the estimated risk was less than 10−6 DALYs pppy with adequate treatment and retention time.28
Pathogen decay is not always included in QMRAs, even when studying DFR or IPR,24,30 but it can have a large impact on the risk and can vary seasonally. Though Lim et al.27 did not include a temperature component in their decay equations, they highlighted it as a parameter to be incorporated in future models. Bailey et al.23 modeled pathogen decay at different temperatures (4 and 20 °C), but their retention time was only 5 days. Amoueyan et al.21 did include the temperature component, using higher decay coefficients at higher temperatures, potentially allowing for a more accurate assessment of decay. Pathogen decay rates depend on a variety of factors including temperature, sunlight, and salinity, and the experimental decay rates for the same viral types can vary by over an order of magnitude.71–73 However, the exact impact of these factors on the decay rates of different pathogens, and how they interact, is still unknown, so more data are needed. Collecting these data is essential because it could elucidate what LRVs could be credited for different pathogens at various retention times, helping to reduce reliance on engineered treatment processes by leveraging natural management barriers.
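Where first-order decay is assumed, the LRV contributed by an environmental buffer follows directly from the rate constant and retention time as LRV = k·t/ln(10). The sketch below tabulates this for two hypothetical rate constants representing colder and warmer water; the k values are illustrative, not measured decay rates.

```python
import numpy as np

def decay_lrv(k_per_day, retention_days):
    """LRV from first-order decay: C/C0 = exp(-k*t), so LRV = k*t/ln(10)."""
    return k_per_day * retention_days / np.log(10.0)

# Hypothetical first-order decay rates (1/day) at two temperatures
k_4C, k_20C = 0.02, 0.10

for t in (5, 30, 90, 180):
    print(f"{t:>3} days: LRV = {decay_lrv(k_4C, t):.1f} at 4 degC, "
          f"{decay_lrv(k_20C, t):.1f} at 20 degC")
```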
Fig. 4 Conceptual diagram of the credited effectiveness of different treatment processes on protozoa and viruses. LRVs from Soller et al.45 Note that observed treatment efficacy may be substantially different from credited treatment efficacy, resulting in an LRV ‘gap’.
As noted earlier, observed LRVs, which are measured experimentally and represent the actual inactivation or removal of microorganisms from the water, are often not the same as the credited or regulatory LRVs. For example, Amoueyan et al.21 incorporated mean observed LRVs for microfiltration (MF) of 4.60, 2.40, and 3.65 for Cryptosporidium, norovirus, and adenovirus, respectively, but noted that the corresponding credited LRVs would likely be 4, 0, and 0 in an actual system. Amoueyan et al.20 estimated risk for norovirus using both LRV approaches and found the risk was orders of magnitude higher using regulatory LRVs—sometimes yielding 95th percentiles exceeding 10−4 pppy. When lower credited LRVs result in overestimated risk to consumers, the outcome may be overdesigned and potentially cost-prohibitive projects.
Chaudhry et al.24 conducted a literature review to incorporate observed LRVs and found membrane processes were the most effective at reducing overall risk, despite UV generally being the most robust from a crediting perspective. Many potable reuse systems will employ UV doses well in excess of 200–300 mJ cm−2 in order to target photolysis of N-nitrosodimethylamine (NDMA) and/or oxidation of recalcitrant compounds such as 1,4-dioxane (i.e., UV AOP), yielding LRV credits of up to 6 for all pathogen groups. In contrast, the mean LRV for UV in Chaudhry et al.24 was 2.2 for Cryptosporidium and 5.0 for norovirus. The Cryptosporidium LRV74 was based on a UV dose of 1.8 mJ cm−2, and the norovirus LRV75 was based on a UV dose of 127 mJ cm−2 (with MS2 as a surrogate). Since the LRV credits were limited by the lower assumed UV doses, Chaudhry et al.24 found that RO—and not UV—resulted in the largest risk reduction when it was employed. When RO was not used, MF and NF reduced risk most in their treatment train. Other studies also describe the significance of UV design dose on the resulting pathogen risk. For example, Soller et al.32,45 found that reducing the UV dose from 800 mJ cm−2 (i.e., UV AOP) to 12 mJ cm−2 (i.e., closer to traditional wastewater treatment) increased the risk of infection by four orders of magnitude, making the low dose UV treatment trains unable to meet the benchmark risk levels.
Annual risks of infection were sometimes lower for CBAT (O3-BAF-UF-ESB + Cl2) vs. RBAT (MF-RO-UV-ESB + Cl2),45 which was consistent with Amoueyan et al.,20,21 who also compared CBAT (UF-O3-BAC-UV-ESB + Cl2) vs. RBAT (MF-RO-UV-ESB + Cl2). Although risk may have been lower with CBAT because risk incorporates both pathogen load and treatment, RBAT was sometimes superior from a treatment perspective (i.e., higher overall LRVs), particularly for Cryptosporidium.32,45 Ozone is a robust barrier in terms of bulk organic matter transformation, trace organic compound oxidation, and microbial inactivation,76 yet protozoan pathogens (namely Cryptosporidium) still demonstrate resistance.77 On the other hand, membrane-based treatment can be a challenge in terms of regulatory virus LRV crediting but is generally accepted as a robust barrier for protozoan pathogens from both a regulatory and observed LRV perspective (Fig. 4). Remy et al.30 evaluated a unique treatment train consisting of filtration, reverse electrodialysis, micro-grain activated carbon (μGAC), and UV as advanced tertiary treatment before reservoir augmentation. They found that this train consistently yielded higher risks than a more conventional potable reuse train with UF and RO with a 5% bypass. However, both treatment trains were able to meet the 10−6 DALY benchmark for viruses, bacteria, Giardia, and Cryptosporidium.
The averaging effect of multiple daily ingestion events is similar to the dispersion effect discussed in Gerrity et al.25 To further explore the risks associated with different consumption patterns, Jones et al.26 modeled 1, 8, or 96 ingestion events per day, which captured different pathogen concentrations and different log reductions for each treatment process, based on different possible failure analyses. Consuming water only once per day results in a risk profile with a larger range than when water is consumed multiple times per day. However, Jones et al.26 also found higher median risks for multiple consumptions per day, again because ‘averaging’ has a disproportionate effect on the lower percentiles of risk.
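A minimal Monte Carlo sketch of this ingestion-frequency effect is shown below: splitting the same 2 L daily volume across 1 vs. 96 events samples more of the concentration distribution each day, which raises the median daily risk while shrinking the upper percentiles. The finished-water concentration distribution and dose–response parameter are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
r = 0.05            # exponential dose-response parameter (hypothetical)
daily_volume = 2.0  # L/day
n_days = 10_000

def daily_risk(events_per_day):
    # Each ingestion event draws an independent finished-water concentration
    conc = rng.lognormal(mean=np.log(1e-4), sigma=1.5, size=(n_days, events_per_day))
    dose = conc * (daily_volume / events_per_day)     # organisms per event
    p_event = 1.0 - np.exp(-r * dose)
    return 1.0 - np.prod(1.0 - p_event, axis=1)       # combine events within a day

for n in (1, 96):
    risk = daily_risk(n)
    print(f"{n:>2} events/day: median {np.median(risk):.2e}, "
          f"95th pct {np.percentile(risk, 95):.2e}")
```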
Using one risk endpoint versus another can sometimes lead to opposing conclusions. For example, Lim et al.27 performed a risk assessment for norovirus and Cryptosporidium for DFR and found a higher risk of infection for norovirus (4.4 × 10−2 to 6.4 × 10−1 pppy) than Cryptosporidium (1.2 × 10−4 to 8.8 × 10−3 pppy), but a greater disease burden for Cryptosporidium (7.1 × 10−8 to 5.3 × 10−6 DALYs pppy) than norovirus (6.2 × 10−11 to 3.0 × 10−8 DALYs pppy). This difference is caused by the assumption that a Cryptosporidium infection will have a greater (i.e., more severe) negative health impact than a norovirus infection. When deciding on the preferred risk endpoint, the intended audience is an important consideration. DALYs are potentially more appropriate for communicating and comparing risks outside the U.S., as they are recommended by the WHO and used globally (e.g., in Australia).35 However, regulatory development for potable reuse in the U.S. has primarily focused on probability of infection.39
Remy et al.30 and Zhiteneva et al.33 focused on the DALY framework and found risk was driven by Cryptosporidium. Although Remy et al.30 found that Cryptosporidium led to higher DALY estimates than rotavirus, Page et al.28 estimated higher DALYs for rotavirus than Campylobacter and Cryptosporidium. This highlights how disease burden may need to be reevaluated over time, at least in certain regions. The rotavirus vaccine RV5 was introduced in the U.S. in 2006, and the RV1 vaccine was introduced in 2008. Both vaccines are effective in reducing risk and disease burden.78 As new vaccines are developed and dose–response models are created, the pathogens targeted by regulations may need to change to be properly representative. For example, Bailey et al.23 published a QMRA in 2020 and found that adenovirus actually yielded the highest risk when compared to Salmonella, Cryptosporidium, and Giardia. This was due to adenovirus' higher concentrations in recycled water and surface water, presumably due to inadequate disinfection during wastewater treatment that incorporated chloramination and UV. Kimbell et al.34 found that adenovirus also had the highest risk when failures were modeled, compared to a generic enteric virus, Cryptosporidium, and Giardia, though without failures, the generic enteric virus had higher average risks. In either case, viruses dominated the risk calculation because of their higher concentrations.
One study developed a unique alternative to the common risk benchmarks. Church et al.50 chose a target of one illness per 50 000 exposures (daily probability of illness of 2 × 10−5 per person), meaning that if a city had 50 000 people drinking once per day, one person per day would get ill on average. This benchmark was chosen because it was two orders of magnitude less than the number of food and water-related illnesses in a military field setting, allowing reuse to contribute up to 1% of the health burden. For reference, this would be much higher (less conservative) than the aforementioned 2.7 × 10−7 daily probability of infection.
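For reference, daily and annual probabilities are related by P_annual = 1 − (1 − P_daily)^365, so the 2.7 × 10−7 daily infection benchmark corresponds to roughly 10−4 per year, as the short check below confirms.

```python
p_daily = 2.7e-7
p_annual = 1.0 - (1.0 - p_daily) ** 365
print(f"Annual risk for daily risk of {p_daily:.1e}: {p_annual:.2e}")   # ~9.9e-5

# Church et al.'s illness-based target, for comparison (2e-5 per person per day)
p_daily_church = 2.0e-5
print(f"Annual (illness) for {p_daily_church:.0e}/day: "
      f"{1.0 - (1.0 - p_daily_church) ** 365:.2e}")                     # ~7.3e-3
```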
Jones et al.26 compared no failure, real failure values from the literature, and total failure. Total failure of UV-AOP (which was simulated to last 15 minutes due to online monitoring and subsequent diversion to an ESB) was the largest driver for increased risk, due to its 6 log credit during normal operation. Jones et al.26 found that the hypothetical failure increased the risk for higher percentile annual infection probabilities by up to six orders of magnitude, but the ESB ensured the annual risk of infection still complied with the WHO annual risk limit. Pecson et al.29 assumed a maximum of one critical failure per year per process, where the LRV for that process became 0, which was likely a conservative estimate. They reported median, 95th, and 99th percentile annual risks of infection with and without failures for Cryptosporidium and enterovirus. The median risks of infection without failures were 4.9 × 10−11 and 1.5 × 10−14 for Cryptosporidium and enterovirus, respectively. With failures, the median risks of infection increased to 1.4 × 10−7 for both Cryptosporidium and enterovirus, and the 99th percentiles increased to 1.1 × 10−5 and 2.1 × 10−5 for Cryptosporidium and enterovirus, respectively. Since Jones et al.26 and Pecson et al.29 were not modeling the impact of compound failures, the risks during failure events were lower than those found by Amoueyan et al.21 This highlights the importance of preventing failures and ensuring any treatment train is robust and reliable.
Bailey et al.23 measured pathogen concentrations in recycled water after conventional wastewater treatment and assumed a worst-case scenario for the LRV at the drinking water treatment plant using real-world data from Hijnen and Medema.79 They compared risks from these worst-case scenarios to baseline scenarios, specifically U.S. EPA's LRVs (4/3/2 for virus/Giardia/Crypto) and the WHO's DALY-based LRVs (4/3/3 for virus/Giardia/Crypto) for conventional drinking water treatment. They found that the mean and 95th percentile annual risk for Salmonella, Cryptosporidium, and Giardia for the worst-case scenarios were always within an order of magnitude of the baseline conditions. For adenovirus, the mean and 95th percentile annual risk of infection was between 1 and 2logs higher for the worst-case scenarios. Because Bailey et al.23 used observed data, these worst-case scenarios had less of an impact than some of the modeled failures elsewhere in the literature (e.g., Pecson et al.29).
Pecson et al.19 suggested incorporating 4 log treatment redundancy to protect against undetected failures, for final LRTs of 17/14/14 for viruses/Giardia/Cryptosporidium. Gerrity et al.39 assessed the NWRI Expert Panel recommendations for DPR, which included a recommended 5 log redundancy.80 Following the Expert Panel's approach, the top-down QMRA suggested that a 5 log redundancy was sufficient to achieve a 2.7 × 10−7 daily risk benchmark at the 99th percentile, except for Giardia with a slightly higher daily risk.39 Rather than incorporating redundancy, Gerrity et al.39 proposed an alternative approach that quantifies a system's LRV tolerance to off-specification conditions. They found that for baseline LRVs of 15/11/11 in a DPR system, off-specification operation with an LRV of 12 for viruses or 8 for Giardia and Cryptosporidium would still satisfy the annual risk benchmark assuming the reduced LRV occurred fewer than 12 days per year for viruses or 3 days per year for the protozoa. This suggests a built-in redundancy of 3 logs for short-term off-specification conditions or failures.
Despite the potentially significant impact of failures, potable reuse treatment trains have been found to be robust and reliable. Pecson et al.81 assessed the mechanical reliability of a DPR treatment train using operator logs of all mechanical issues over a year and found no critical failures, demonstrating the potential reliability of advanced treatment for DPR. Amoueyan et al.22 found that some failures can be inconsequential because of the overall robustness and redundancy of advanced treatment in DPR or the resiliency afforded by the environmental buffer in IPR.
Simpler QMRAs can also be performed, such as by using conservative point estimates instead of distributions for pathogen concentrations. For example, Page et al.40 used the 95th percentile pathogen concentrations to determine the LRTs for Cryptosporidium, Campylobacter, and viruses for urban stormwater reuse, and Gerrity et al.39 used both the maximum point value and the 97.4th percentile point value from pathogen distributions. Percentile is linked to sample size, so the 97.4th percentile was chosen because this percentile within a 10 000 point dataset is statistically equivalent to the maximum value of a 24 point dataset, as can be shown from Blom's equation;82 24 samples might represent the required minimum sample size for pathogen monitoring campaigns aimed at developing LRTs. In other words, the maximum value from a 10 000 point distribution might be considered overly conservative when compared against the maximum from a dataset with only 24 values. Asano et al.12 used four point estimates for pathogen concentrations: the maximum and 90th percentile concentrations of the secondary effluent at the WWTP (assuming an additional LRV of 5 for tertiary treatment), the maximum value detected in the tertiary effluent, and the limit of detection for a tertiary treated wastewater effluent sample. Point estimates with conservative values are useful for creating point estimate regulatory LRTs, while using distributions of concentrations allows the risk distributions and central tendencies to be quantified and more fully characterized.19
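Blom's equation assigns the i-th smallest of n observations the plotting position (i − 0.375)/(n + 0.25), so the maximum of a 24-point dataset sits near the 97.4th percentile; the short check below confirms the equivalence referenced above.

```python
def blom_percentile(i, n):
    """Blom's plotting position for the i-th smallest of n observations."""
    return (i - 0.375) / (n + 0.25)

# Percentile assigned to the maximum (i = n) of a 24-point dataset
print(f"Maximum of n=24:     {100 * blom_percentile(24, 24):.1f}th percentile")   # ~97.4

# For comparison, the maximum of a 10 000 point dataset
print(f"Maximum of n=10 000: {100 * blom_percentile(10_000, 10_000):.2f}th percentile")
```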
QMRAs can also be performed dynamically or statically. In static QMRAs, the probability of infection is modeled from a single exposure event without time dependence or system feedback through community spread (Fig. 6). For waterborne diseases, static QMRAs could underestimate overall risk by not including time-dependent secondary transmission, or overestimate the risk by not including the possibility of someone entering an immune state after exposure to the waterborne pathogens.20 While most QMRAs are static, dynamic QMRAs offer time-dependent pathogen loads and the ability to explore the relative contribution of waterborne pathogens to the total number of illnesses. Amoueyan et al.20 used a dynamic QMRA to determine the relative importance of norovirus transmission pathways: foodborne, person-to-person, and person-to-sewage-to-person. They modeled different epidemiological states, such as susceptible, exposed, diseased, carrier, and post-infection (or recovered), using ordinary differential equations, similar to Eisenberg et al.83 Eisenberg et al.83 created a dynamic process model for a Cryptosporidium outbreak that included a 10-state compartmental model of the population, where people could move between susceptible, infected, diseased, or immune states. The number of current infections influenced the infection rate both through person-to-person transmission and through person-to-sewage-to-person transmission. Overall, Amoueyan et al.20 found that waterborne norovirus did not appreciably contribute to the public health risk in their model, because secondary and foodborne transmission dominated the overall risk calculation. Barker et al.38 also included a secondary attack rate, quantifying the percentage of people who would become sick after contact with an infected person. They found that small communities might need additional treatment due to this secondary transmission and the increased contact between members in a small community relative to a large city.
Fig. 6 Differences between dynamic and static QMRAs. Cww is the pathogen concentration in wastewater, Cdw is the pathogen concentration in drinking water, and Pinf is the probability of infection.
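A minimal sketch of a dynamic formulation is shown below, using a simple susceptible–infected–recovered structure in which a waterborne dose term and a person-to-person term both feed the force of infection; all rates and the dose term are hypothetical, and published models such as Amoueyan et al.20 and Eisenberg et al.83 use more states and more detailed couplings.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters (all illustrative)
beta_p2p = 0.15      # person-to-person transmission rate (1/day)
beta_water = 1e-6    # infection rate per unit waterborne dose (1/day)
dose_water = 10.0    # daily waterborne dose term (arbitrary units)
gamma = 0.25         # recovery rate (1/day)
omega = 1.0 / 180.0  # loss-of-immunity rate (1/day)

def sir_water(t, y):
    s, i, r_ = y
    force = beta_p2p * i + beta_water * dose_water   # combined force of infection
    ds = -force * s + omega * r_
    di = force * s - gamma * i
    dr = gamma * i - omega * r_
    return [ds, di, dr]

sol = solve_ivp(sir_water, t_span=(0, 365), y0=[0.999, 0.001, 0.0],
                t_eval=np.linspace(0, 365, 366))
print(f"Prevalence after 1 year: {sol.y[1, -1]:.4f}")
```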
Zhiteneva et al.33 proposed using Bayesian networks as a solution to limited local data availability, where local pathogen data could be combined with pathogen datasets from literature reviews. Bayesian modeling uses Bayes' theorem to update the probabilities of an outcome as more information becomes available.84 Bayesian networks are graphical models that represent a large amount of data using nodes for random variables connected by their probabilistic dependencies. While Monte Carlo simulations are better suited for prediction because of their continuous distributions, Bayesian networks can be used for forward and backward inference, which could be used to determine how processes perform under certain risk scenarios.33 Bayesian hierarchical modeling (BHM), which is better able to account for variability within and between groups of data, reduces local parameter uncertainty compared to separate modeling while still letting local data dominate.41 Seis et al.41 used both local and external pathogen concentrations and compared BHM to separate modeling, where each treatment plant is different and results from one do not influence results from another; complete pooling, where every treatment plant has the same mean and standard deviation; and no pooling, where the treatment plants have different means but a common standard deviation. They included a classical Bayesian hierarchical framework, where a unique mean is estimated for every treatment plant, with the assumption that the local means come from a common, normal distribution. Seis et al.41 also used extended hierarchical modeling, letting the individual within-treatment-plant variances differ by plant, which added hyperparameters to the model. In both cases, the parameters are estimated at the total-dataset and individual-treatment-plant levels simultaneously, and information is shared across treatment plants.41 They found that BHM reduced parameter uncertainty, particularly when local data were sparse, while letting local data dominate. Seis et al.41 recommended including external information, such as from meta-analyses of pathogen concentrations, even when local data are available. Widespread use of Bayesian modeling for QMRA could provide more robust analyses, particularly in data-scarce scenarios, by allowing local pathogen concentrations to be supplemented by larger datasets. Bayesian modeling also enables the creation of prediction intervals, quantifying the uncertainty around predictions. While these could be useful for a greater understanding of risks, the communication of these prediction intervals would be important to prevent unnecessary alarm or unwarranted complacency.
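To illustrate the partial-pooling idea behind BHM, the sketch below applies the analytic normal–normal shrinkage estimator to hypothetical plant-level log10 concentration means: plants with little local data are pulled strongly toward the shared mean, while data-rich plants are barely adjusted. This is a simplified stand-in for the full MCMC-based hierarchical models used by Seis et al.;41 the data, within-plant variance, and between-plant variance are invented.

```python
import numpy as np

# Hypothetical log10 concentration means and sample sizes for several plants
plant_means = np.array([2.1, 3.0, 2.6, 3.4])   # local sample means (log10 units)
plant_n     = np.array([30,  4,   12,  3])     # local sample sizes
sigma_within = 0.8    # within-plant std dev (log10 units), assumed known here
tau_between  = 0.4    # between-plant std dev (hyperparameter), assumed known here
grand_mean   = plant_means.mean()              # stand-in for the population mean

# Normal-normal partial pooling: each plant's estimate is shrunk toward the
# grand mean, with more shrinkage when local data are sparse (small n)
se2 = sigma_within**2 / plant_n
weight = tau_between**2 / (tau_between**2 + se2)
pooled = weight * plant_means + (1.0 - weight) * grand_mean

for m, n, p in zip(plant_means, plant_n, pooled):
    print(f"local mean {m:.2f} (n={n:>2}) -> partially pooled {p:.2f}")
```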
Study | Type | Virus | Giardia | Crypto | Bacteria | Notes
---|---|---|---|---|---|---
Barker et al. (2013)38 | DPR | 6.9 | 8 | 7.4 | | LRTs for municipal sewage scenario
 | DPR | 12.1 | 10.4 | 12.3 | | LRTs for outbreak conditions
Gerrity et al. (2023)39 | DPR | 13 | 10 | 10 | | Used 97.4th percentile pathogen concentrations and included a 10-fold safety factor for viable but nonculturable enterovirus; described tolerance to off-specification conditions rather than redundancy
 | DPR | 15 | 11 | 11 | | Used maximum pathogen concentrations; described tolerance to off-specification conditions rather than redundancy
MacNevin and Zornes (2020)44 | DPR | | 5 | 5 | | Minimum LRTs for any WWTP
 | DPR | | 9.5 | 10 | | Maximum LRTs for any WWTP
Page et al. (2015)40 | General reuse | 5.8 | 4.8 | 4.8 | 5.3 | LRTs based on stormwater
Page et al. (2015, 2016)42,43 | IPR | 5.5 | 4.9 | 4.9 | 5.5 | LRTs based on stormwater
Pecson et al. (2023)19 | DPR | 17 | 14 | 14 | | Included 4 log treatment redundancy
Seis et al. (2020)41 | IPR | <12 | | | | Compared different modeling approaches for concentration data: separate point estimate
 | IPR | >16 | | | | Compared different modeling approaches for concentration data: separate modeling
Soller et al. (2018)45 | DPR | 14 | 12 | 12 | | 95% of simulations have cumulative annual risks less than 10−4
 | DPR | 15 | 13 | 13 | | 100% of simulations have cumulative annual risks less than 10−4
 | DPR | 16 | 11 | 11 | | 100% of simulations have cumulative annual risks less than 10−4
California Regulations17,46 | DPR | 20 | 14 | 15 | | Included 4 log redundancy to account for a 6 log treatment failure
 | IPR | 12 | 10 | 10 | | Used maximum point estimates
Colorado Regulations49 | DPR | 12 | 10 | 10 | | Could be as low as 8/6/5.5 (virus/Giardia/Crypto) if justified by pathogen monitoring
Nevada Regulations48 | IPR | 12 | 10 | 10 | |
Texas Regulations85 | DPR | 8 | 6 | 5.5 | | Minimum LRTs, with actual LRTs potentially higher based on monitoring data; LRV calculation begins after WWTP
Florida Regulations34 | IPR | 14 | 12 | 12 | |
Gerrity et al.39 summarized the regulations for IPR and DPR in the United States, in addition to performing bottom-up and top-down DPR QMRAs. For IPR, California requires LRVs of 12/10/10 for viruses, Giardia, and Cryptosporidium, respectively, although additional stipulations are required for surface water augmentation vs. groundwater replenishment. Colorado also implemented the 12/10/10 framework but for DPR,49 and in Texas, where the LRV calculation begins in the treated wastewater effluent, minimum LRTs of 8/6/5.5 are required for DPR.39,46,85 LRTs for DPR in Texas may be higher if warranted by the pathogen monitoring campaign required for each case-by-case DPR permit. For DPR, California targeted a 2.7 × 10−7 daily risk of infection benchmark, rather than an annual risk of 10−4. While this does not impact point estimate QMRAs, it does impact the results for more complicated, Monte Carlo QMRAs by eliminating the aforementioned averaging effect in the annual risk calculation. California found baseline LRVs of 16/10/11 to be adequately protective of public health, and this determination assumed prior point estimate concentrations for Giardia and Cryptosporidium, a peak norovirus concentration reported in the literature,63 and a daily ingestion volume of 2 L spread equally over 96 ingestion events per day.39 However, California set its final LRTs at 20/14/15 to account for a 6 log treatment failure necessitating a 4 log treatment redundancy.17
Regulations are often developed using point estimates based on maximum concentrations, assumed GC:IU ratios of 1 when using molecular data (e.g., norovirus), and conservative dose–response models. Care should be taken when using maxima, as these peak concentrations are often not comparable across studies. An alternative approach involves using percentiles based on Blom's equation,82 for example, from which 95th or 97.4th percentile concentrations can be determined from individual studies (e.g., a site-specific sampling campaign) or across multiple studies.69 Choosing a single measured point also makes the final risk estimates more susceptible to potentially non-representative site-specific conditions,86 or even error from laboratory analysis. An expert panel from the National Water Research Institute found that California's DPR regulations resulted in inherent conservatism of 9–11 logs, which could result in overdesigned and unsustainable potable water reuse systems.39,80 In other words, overly conservative LRTs can increase capital and operations and maintenance costs while potentially yielding no appreciable improvement in public health protection. These scenarios, and their long-term implications, can be mitigated by using either distributions or percentile point estimates in a QMRA, rather than maximum values.
Barker et al.38 studied reuse in a small, remote community in Antarctica and compared municipal sewage pathogen loads with estimated loads during a gastroenteritis outbreak. They found that higher LRVs were needed in small communities to meet the benchmark of 10−6 DALYs due to the greater degree of contact between community members in a small population. If regulations are created from the pathogen levels in larger communities and applied to smaller communities with high contact, they might not be protective; conversely, LRTs developed for small communities may be overly stringent for large communities. Therefore, it is important to consider the local context before guidelines from one location are applied to another, highlighting the benefits of allowing tailored LRTs for different communities.
Commonly used risk benchmarks include a 10−4 annual probability of infection pppy, as well as 10−6 DALYs pppy. However, it is possible to meet one of these benchmarks and not the other, depending on the severity of the disease. Lim et al.27 found that risks from Cryptosporidium and norovirus were both mostly within the acceptable range of the WHO benchmark of 10−6 DALYs but consistently exceeded the 10−4 risk of infection benchmark. This highlights the need to determine which benchmarks are most relevant in a given context to protect public health without being unnecessarily stringent.
The focus of this review is the risk from microbial hazards, but depending on the level of treatment, there are also chemicals that could accumulate in potable reuse systems that could be harmful to public health, including heavy metals, disinfection byproducts, pharmaceuticals, and per- and polyfluoroalkyl substances (PFAS).87,88 Keller et al.89 conducted a review on technological, economic, and environmental considerations of DPR and included a partial list of chemicals of concern after advanced treatment. There could also be problems with public acceptance of potable reuse due to the so-called ‘yuck factor’.90
Remy et al.30 performed a life cycle assessment and a chemical risk assessment alongside their QMRA, which is important for understanding the cumulative health impact of recycled water. They found that while the proposed treatment would meet the 10−6 DALYs pppy target for pathogens, there would be an increase in constituents of emerging concern (CECs) in the IPR reservoir. Germany has health-based precautionary values for iopromide, iomeprol, gabapentin, and EDTA that Remy et al.30 found could be exceeded in the reservoir. The concentrations of glyphosate and its degradation product AMPA would exceed the EU guideline for pesticides (1 μg L−1)91 if that guideline were applied to these chemicals.30 Though this may be comparable to current wastewater treatment plants discharging to rivers without tertiary treatment, it highlights the importance of considering both chemical and microbial hazards. Their life cycle assessment found that IPR is competitive with water importation and seasonal storage in terms of energy consumption and emissions and is superior to seawater desalination. Kobayashi et al.35 also performed a life cycle assessment and highlighted how the local and global impacts of IPR differ: reducing the local impact from pathogens resulted in a higher global ‘cost’ due to emissions leading to climate change. Page et al.36 included other hazards to humans and the environment in their assessment, including nutrients and chemicals. They found that while the risks from organic chemicals were low, elevated iron levels exceeded potable water guidelines if post-recovery aeration was not employed. Dow et al.92 found that while DPR could significantly reduce energy costs due to reduced pumping requirements from Lake Mead into the Las Vegas Valley, the net present value of DPR ranged from $1.0 to 4.0 billion, compared to $0.6 billion for the status quo IPR approach. Pairing life cycle assessment and chemical risk assessment with potable reuse QMRA would be a beneficial addition to future studies.
As regulations are established and potable reuse becomes more widespread, it is crucial to protect human health without imposing excessively stringent requirements that are prohibitively expensive and do not necessarily enhance public health protection. One possible path forward is for regulations to become more flexible, as was done in Colorado, where LRTs can be reduced if regular sampling provides sufficient evidence that human health would still be protected. By incorporating QMRA for potable reuse, LRTs could be developed for specific contexts, ensuring that health risks are accurately assessed and managed. Continuous monitoring and adaptive management strategies could be implemented to ensure ongoing compliance and safety, providing a dynamic response to emerging data and technological advancements. Due to the rise in wastewater surveillance for public health purposes, more robust and extensive pathogen datasets are expected to be published. Pathogen concentration variability and its driving factors will be better characterized, which could reduce potentially unnecessary redundancies that have been built into QMRAs due to uncertainty. Implementing flexible regulations could promote the sustainable and safe expansion of potable reuse systems. Finally, recent publications demonstrate the value and importance of simultaneously evaluating microbial and chemical risks in the context of sustainability and life cycle assessment.
DALY | Disability adjusted life year |
NoV | Norovirus |
AdV | Adenovirus |
EnV | Enterovirus |
Crypto | Cryptosporidium |
Campy | Campylobacter |
QMRA | Quantitative microbial risk assessment |
DPR | Direct potable reuse |
PCR | Polymerase chain reaction |
RWC | Recycled water contribution |
DFR | De facto reuse |
IPR | Indirect potable reuse |
SW | Surface water |
GW | Groundwater |
LRT | Log reduction target |
Pinf | Probability of infection |
LRV | Log reduction value |
FAT | Full advanced treatment |
DWTP | Drinking water treatment plant |
CF | Cartridge filter |
UV | Ultraviolet |
MF | Microfiltration |
UF | Ultrafiltration |
NF | Nanofiltration |
ESB | Engineered storage buffer |
DW | Drinking water |
pppy | Per person per year |
TT | Treatment train |
CBAT | Carbon-based advanced treatment |
BNR | Biological nutrient removal |
RO | Reverse osmosis |
AOP | Advanced oxidation process |
BAF | Biologically active filtration |
WWTP | Wastewater treatment plant |
WW | Wastewater |
Cl2 | Chlorination |
GC | Gene copies |
IU | Infectious units |
RBAT | Reverse-osmosis-based advanced treatment |
MBR | Membrane bioreactor |
Footnote
† Electronic supplementary information (ESI) available: The ESI includes a table summarizing the 30 studies that conducted quantitative microbial risk assessments for potable reuse. See DOI: https://doi.org/10.1039/d4ew00661e |