Shikha Dhiman,ab Teodora Andrian,c Beatriz Santiago Gonzalez,d Marrit M. E. Tholen,d Yuyang Wangbe and Lorenzo Albertazzi*cd
aLaboratory of Macromolecular and Organic Chemistry, Eindhoven University of Technology, P. O. Box 513, 5600 MB Eindhoven, The Netherlands
bInstitute for Complex Molecular Systems, Eindhoven University of Technology, P. O. Box 513, 5600 MB Eindhoven, The Netherlands
cInstitute of Bioengineering of Catalonia (IBEC), Barcelona Institute of Science and Technology, Barcelona, Spain
dDepartment of Biomedical Engineering, Institute of Complex Molecular Systems, Eindhoven University of Technology, Eindhoven, The Netherlands. E-mail: l.albertazzi@tue.nl
eDepartment of Applied Physics, Eindhoven University of Technology, Postbus 513, 5600 MB Eindhoven, The Netherlands
First published on 1st December 2021
The characterization of newly synthesized materials is a cornerstone of all chemistry and nanotechnology laboratories. For this purpose, a wide array of analytical techniques have been standardized and are used routinely by laboratories across the globe. With these methods we can understand the structure, dynamics and function of novel molecular architectures and relate them to the desired performance, guiding the development of the next generation of materials. Moreover, one of the challenges in materials chemistry is the lack of reproducibility due to incomplete reporting of sample preparation protocols. In this context, the recent adoption of the reporting standard MIRIBEL (Minimum Information Reporting in Bio–Nano Experimental Literature) for material characterization and details of experimental protocols aims to provide complete, reproducible and reliable sample preparation for the scientific community. Thus, MIRIBEL should be immediately adopted in publications by scientific journals to overcome this challenge. Besides current standard spectroscopy and microscopy techniques, novel technologies are constantly being developed to help chemists unveil the structure of complex materials. Among them, super-resolution microscopy (SRM), an optical technique that bypasses the diffraction limit of light, has enabled the study of synthetic materials with multicolour capability and minimal invasiveness at nanometric resolution. Although still in its infancy, the potential of SRM to unveil the structure, dynamics and function of complex synthetic architectures has been highlighted in pioneering reports over the last few years. Currently, SRM remains a sophisticated technique with many challenges in sample preparation, data analysis, environmental control and automation, and its instrumentation is still expensive. Therefore, SRM is currently limited to expert users and is not implemented in characterization routines.
This perspective discusses the potential of SRM to transition from a niche technique to a standard routine method for material characterization. We propose a roadmap for the necessary developments required for this purpose based on a collaborative effort from scientists and engineers across disciplines.
In this perspective, we shall discuss a crucial question for the development of SRM in the materials field: "can SRM become a standardized routine analytical technique in the fields of chemistry and materials chemistry?". The development pathways of other routine methods, such as atomic force microscopy (AFM), electron microscopy (EM) and light scattering, will serve as sources of inspiration and benchmarks. We shall then provide a roadmap for the developments required in SRM over the next decade for its application as a routine technique for materials characterization, and discuss the challenges along this path.
Owing to differences in their underlying principles, operational requirements, sample preparation methods and outputs, all these techniques exhibit their own advantages and intrinsic limitations. Both AFM34 and EM33 can spatially resolve structures in the nanometric regime without any labelling requirement. However, it is challenging to study the structure of samples under native conditions, as these techniques require invasive sample preparation methods, such as drying or freezing, that can destroy or alter the sample itself. Moreover, owing to the limited penetration through the sample, only surfaces or thin sections can be analysed, and it is impossible to deconvolute a multicomponent system. Overcoming these issues, FM allows multicolour imaging under native conditions using different molecular labels and facilitates the study of dynamics and changes in real time.35 However, this comes at a heavy cost in spatial resolution (∼200 nm) owing to the diffraction limit of light (Fig. 1).
This contrast leads us to two conclusions: (i) multiple microscopy and spectroscopy techniques are always needed to provide a conclusive understanding of structure, dynamics and function; (ii) there is a gap between high-resolution (∼1 nm) label-free methods and lower-resolution multicolour, molecularly specific methods. SRM can contribute strongly in this regard, offering additional possibilities such as: (i) nanometric spatial resolution (5–25 nm), (ii) native or mild sample conditions, (iii) resolution of 3D molecular architecture, (iv) multicolour imaging, (v) possible real-time study of dynamics, and (vi) quantitative molecular analysis.
Based on their approach to overcome the diffraction limit, there are three major types of SRM methods: structured illumination microscopy (SIM), stimulated emission depletion (STED) microscopy and single-molecule localization microscopy (SMLM) (Fig. 1).36 An extensive description of the principles of these techniques is provided elsewhere.1,3
Owing to their different principles, the various SRM methods present distinct advantages and disadvantages. SIM relies on mathematical deconvolution of interference patterns (moiré effect) generated by spatially structured illumination.37,38 A spatial resolution of ∼120 nm and a temporal resolution of a few seconds are achievable.39 Therefore, despite the limited gain in resolution, SIM provides faster and less invasive sub-diffraction imaging, making it ideal for sensitive and dynamic samples.
STED uses a donut-shaped illumination beam with an interior excitation laser and an exterior depletion laser.40,41 This reduces the effective point spread function (PSF) of the illumination, thereby enhancing the spatial resolution to ∼50 nm.42 STED provides excellent spatial resolution in a reasonable time (seconds) and requires minimal post-processing, which makes it very appealing for a variety of samples, with the limitation that the very high laser power used may result in sample damage and photobleaching. Lastly, SMLM relies on the accurate localization of individual fluorophore emissions, achieved by the stochastic activation of dyes under wide-field illumination.43,44 With extensive post-processing, all the molecules in the sample are identified and their x, y and z positions determined with nanometric precision, resulting in a diffraction-unlimited image. Resolution down to a few nanometres is possible, at the cost of slow temporal resolution compared with SIM and STED.45,46 Based on the process of dye activation, SMLM methods are further referred to as stochastic optical reconstruction microscopy (STORM),47 direct stochastic optical reconstruction microscopy (dSTORM),48 photoactivated localization microscopy (PALM),49 and point accumulation for imaging in nanoscale topography (PAINT).50 These three SRM methods and their use to study complex synthetic materials and nanomedicine are extensively discussed in previous reviews.3,29,30
In contrast, computational post-processing super-resolution techniques can alternatively be used to overcome the diffraction limit by analysing random fluctuations of single-fluorophore signals. One of these is super-resolution optical fluctuation imaging (SOFI), fundamentally a post-processing method that generates super-resolved images from a diffraction-limited image time series. SOFI relies on high-order statistical analysis of the temporal fluctuations recorded in a sequence of images.51 Since the fluorescence signal from a fluctuating emitter shows statistically similar behaviour over time, its autocorrelation displays a high value, whereas background signals remain uncorrelated and show a low value. SOFI is a purely software-based technique and a strong alternative to SMLM-type localization analysis.52 Following a similar principle, super-resolution radial fluctuation (SRRF) has been developed as an improved analysis framework to tackle the non-linear response to brightness in high-order correlations, and has been shown to resolve SMLM data with a resolution of at least 50 nm, comparable to localization-based SMLM analysis.53 Compared with localization analysis, SOFI tolerates a larger number of emitters per diffraction-limited spot, lower signal-to-noise ratios and higher frame rates, and can be readily applied to time-series data from any microscopy measurement.
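As a toy illustration of the SOFI principle (a minimal sketch of our own; the `sofi2` helper, the choice of frame lag and the simulated blinking trace are all assumptions for demonstration, not a published implementation):

```python
import numpy as np

def sofi2(stack, lag=1):
    """Toy second-order SOFI: per-pixel temporal autocorrelation of the
    intensity fluctuations at a given frame lag. stack has shape (T, H, W)."""
    delta = stack - stack.mean(axis=0, keepdims=True)  # keep fluctuations only
    # a lag > 0 suppresses shot/readout noise that is uncorrelated between frames
    return (delta[:-lag] * delta[lag:]).mean(axis=0)

# demo: one pixel carries a correlated blinking emitter, the rest only noise
rng = np.random.default_rng(0)
T = 2000
stack = rng.normal(0.0, 1.0, size=(T, 2, 2))           # uncorrelated background
blink = np.repeat(rng.random(T // 10) < 0.3, 10) * 10  # on-states last 10 frames
stack[:, 0, 0] += blink                                # blinking emitter pixel
img = sofi2(stack)
```

In the resulting SOFI image, the autocorrelation is large at the blinking pixel and near zero at the noise-only pixels, which is how uncorrelated background is rejected.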
Overall, SRM is an ideal bridge between AFM/EM and diffraction-limited optical methods (e.g. confocal microscopy), with clear potential to serve material characterization. Notably, the contrasting advantages and limitations of these microscopy techniques have motivated correlative imaging, in which the same target area is probed either simultaneously or in tandem to extract the maximum information about structure, dynamics and function.
Correlative microscopy, combining light with electron microscopy (CLEM), light with AFM and, more recently, (cryo-)electron microscopy with SRM, has been demonstrated to uncover features overlooked by, or add a new dimension of information to, any single technique.54–56 The new opportunities of correlative microscopy come with their own challenges in combining contrasting techniques in terms of sample preparation and setups, but the field is advancing to overcome these limitations.57
Although SRM has proved able to provide important information, many of these characteristics are either only partially realized or still challenging to achieve. Hence, a multitude of developments in various aspects of engineering and chemistry are required to promote SRM from a sophisticated to a routine technique. We discuss these features in detail below. In light of the wider literature and its ability to provide molecular information with maximal resolution, this perspective will mostly focus on SMLM. However, we do not exclude that SIM and STED may also play an important role in the future.
The extensive interest in SRM has triggered the development of a plethora of organic dyes fulfilling SMLM requirements, and many of them are now commercially available. The material of interest can be covalently or non-covalently functionalized with these labels.42 More recently, new nanoparticle-based probes such as quantum dots, carbon dots, or upconversion nanoparticles have shown great promise for imaging due to their remarkable photostability and brightness (Fig. 3).60,61
Despite the plethora of available dyes, several challenges hamper the streamlined labelling of synthetic samples. Interestingly, many of these challenges are not encountered with biological samples. Firstly, many materials may be perturbed by some of the labels owing to dye hydrophobicity or charge. This is particularly true for supramolecular materials, as dyes can interfere with the self-assembly process. Secondly, the chemical nature of the material of interest can perturb the performance of the dyes by changing their local environment. It therefore comes as no surprise that dyes known to generally perform well in SMLM can give suboptimal results for specific materials, while mediocre SMLM dyes can be very suitable for others. Consequently, a general workflow for labelling synthetic materials is still to be defined.
On the other hand, buffer formulations, in which dyes undergo intermolecular reactions with external additives such as thiols, ascorbic acid and metal ions, strongly influence dye properties and therefore dye performance. Buffers are known to influence both the photoswitching behaviour of STORM dyes and the binding kinetics of PAINT imagers. This was exhaustively studied by Dempsey et al.,62 who tested 26 organic dyes in different buffers. However, the dyes were studied only in a biological environment. In this context, more in-depth investigations tailored to materials, elucidating the physical mechanisms behind the photoswitching process, are still needed to understand and optimize the impact of buffers on dye-labelled materials.63
Besides the composition, the preparation and stability of buffer formulations are further factors that make sample preparation protocols lengthy and tedious.64 Buffer formulations usually need to be freshly prepared before each measurement, introducing possible errors and variation between measurements. Recently, a few formulations with long shelf stability or quick preparation have been proposed, and some of them are already commercially available.65,66
An interesting approach that can further alleviate the influence of imaging conditions on the photoswitching of dyes is the use of spontaneously blinking probes, which require neither special buffers nor the high-power laser irradiation (∼1 kW cm−2) typically needed for photoswitching.62,67,68 Accordingly, we also envision great potential for label-free techniques in this area, e.g. Raman scattering, optical absorption and the nonlinear response of thermoreflectance.69,70,72
Finally, one should not forget the importance of using impurity-free and background-free coverslips for sample preparation. Coverslips are typically contaminated with a grease film or dust particles that need to be removed to promote attachment of the sample to the glass, avoiding non-specific interactions and reducing background noise from fluorescent impurities. Furthermore, specific coatings that favour the physisorption of the materials of interest can be used.71 For routine characterization, it is not acceptable to spend days finding the right surface chemistry to achieve sample immobilization (as often happens now for new samples), and a universal ready-to-use sample chamber is part of the necessary developments towards an SRM characterization workflow. The recent acceptance of the reporting standard MIRIBEL (Minimum Information Reporting in Bio–Nano Experimental Literature) for material characterization and detailed experimental protocols aims to provide complete, reproducible and reliable sample preparation. The adoption of MIRIBEL as a mandatory requirement for publication will ease the challenges of reproducing experiments and facilitate the smooth advancement of scientific research.72–74
Overall, for SMLM to become a routine technique, we need a collective effort from synthetic organic chemists for dye synthesis, and from microscopists and nanotechnologists to optimize sample preparation methods. The focus should be on making this crucial step easier, quicker and more general for a broader research community and a wider variety of samples.
Fig. 4 Data reporting needed for different types of synthetic materials. Created with https://Biorender.com.
We can identify two types of typical readouts: morphological features (e.g. size and shape) and molecule counting/localization. While the first type of information is also provided by other techniques such as DLS, TEM or AFM, the latter is unique to SMLM thanks to its single-molecule sensitivity and specific labelling.
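As a minimal sketch of extracting a morphological readout from SMLM data (the `radius_from_locs` helper and the uniformly labelled disc model are our own illustrative assumptions, not a standard analysis routine):

```python
import numpy as np

def radius_from_locs(xy):
    """Estimate a particle radius (same units as xy) from an (N, 2) array of
    localizations via the radius of gyration. For a uniformly labelled disc of
    radius R, <r^2> = R^2 / 2, hence R = sqrt(2) * Rg (a simplifying model)."""
    centred = xy - xy.mean(axis=0)                     # recentre the cloud
    rg = np.sqrt((centred ** 2).sum(axis=1).mean())    # radius of gyration
    return np.sqrt(2.0) * rg

# demo: 20 000 localizations sampled uniformly over a disc of radius 50 nm
rng = np.random.default_rng(1)
n = 20_000
r = 50.0 * np.sqrt(rng.random(n))       # sqrt gives uniform area density
theta = 2.0 * np.pi * rng.random(n)
xy = np.column_stack((r * np.cos(theta), r * np.sin(theta)))
est = radius_from_locs(xy)              # recovers ~50 nm
```

A real pipeline would first correct for localization precision and labelling density, but the principle of computing shape descriptors directly from localization coordinates is the same.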
Currently, ensemble methods such as western blot, BCA assay and ζ-potential measurements are used as indicators of changes in the surface physical features and biomolecule functionalization of materials.75–79 By using STORM or DNA-PAINT, these characteristics can be resolved at the single-molecule level. Furthermore, counting of absolute numbers of molecules can be achieved. An ideal technique in this framework is qPAINT, which is based on the predictable kinetics of the transient binding between dye-labelled imager strands and docking strands. With this kinetic information, the number of target molecules can be calculated, avoiding issues related to dye photophysics and photobleaching. Since both STORM and DNA-PAINT allow multicolour imaging, the number and distribution of multiple different functionalities can be quantified at both the intra- and interparticle level.24,71,72 Moreover, 3D visualization of materials is possible, which can provide additional information with respect to standard analysis techniques.80
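The qPAINT counting logic can be sketched numerically as follows (a simplified illustration; the `qpaint_count` helper and the rate constants below are assumed values for demonstration, not experimental parameters). With N docking sites, imager binding events arrive at rate k_on·c·N, so the mean dark time between events is 1/(k_on·c·N):

```python
import numpy as np

def qpaint_count(dark_times, k_on, c_imager):
    """qPAINT-style counting: N docking sites give a mean dark time of
    1 / (k_on * c * N), so N = 1 / (k_on * c * mean dark time)."""
    return 1.0 / (k_on * c_imager * np.mean(dark_times))

# demo: simulate dark times for N = 12 sites, k_on = 1e6 /(M s), c = 5 nM
rng = np.random.default_rng(2)
k_on, c, N_true = 1e6, 5e-9, 12
dark = rng.exponential(1.0 / (k_on * c * N_true), size=5000)  # exponential waits
N_est = qpaint_count(dark, k_on, c)                           # recovers ~12
```

Because the count comes from binding kinetics rather than from fluorophore photophysics, photobleaching and blinking heterogeneity drop out of the measurement, which is the key advantage noted above.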
When discussing material characterization, spectroscopy also plays a pivotal role. The combination of fluorescence spectroscopy and sensor probes allows important properties such as polarity, viscosity and local pH to be measured. Recently, a combination of SMLM with spectroscopy has been proposed and named spectral SMLM (sSMLM). Nile Red (NR) is the probe most used to demonstrate the capabilities of spectrally resolved PAINT (sPAINT).81 NR is a polarity-sensitive dye and exhibits a unique spectral variation upon binding to particles, which allows non-specific binding events on the particle surface to be reduced. This reduces variation in size measurements, improving the localization density and resolution while providing a spectroscopic evaluation of polarity at the same time.81,82 3D sPAINT imaging even allows the local polarity of a material to be determined in three dimensions.83
As discussed above, correlative microscopy combining SRM with EM/AFM can unveil new information about samples by combining the quantitative readouts of multiple techniques. Recently, it has been shown that TEM and SMLM can be combined in a super-resCLEM workflow to obtain additional information about heterogeneity in a PLGA–PEG based nanoparticle system functionalized with ssDNA.84 Remarkably, a high level of heterogeneity in both size and ligand distribution was found within a single batch, which results in varied therapeutic efficacy between particles of the same batch.84 The ability to combine SRM with spectroscopy and other imaging techniques paves the way towards multiparametric imaging of materials at the single-particle level, providing multiple informative and quantitative readouts that can be used to optimize the synthetic route.85–87
Among the several challenges to accomplishing this, the suppression of various sources of error offers significant promise to accelerate the transition. One of them is the susceptibility to aberrations,88 which is particularly detrimental when imaging thick samples such as supramolecular 3D scaffolds. A possible approach to mitigating this problem is the introduction of adaptive optics, using for example deformable mirror devices to compensate for refractive index changes.89 Another source of misleading data is mislocalization,90 where the detected emitter position deviates from the actual emission position owing to coupling of the dye to a plasmonic nanoparticle, compromising the accuracy of super-resolution imaging in plasmon-enhanced fluorescence microscopy.
Moreover, an additional limitation is the localization precision, which depends on many factors, typically related to the signal-to-noise ratio, and will differ from lab to lab. It is therefore particularly important to measure the brightness and photophysical properties of single dyes in the sample and always report these data.
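In this context, the widely used Thompson–Larson–Webb estimate relates localization precision to photon count, PSF width, pixel size and background noise; a back-of-the-envelope sketch (the parameter values below are illustrative assumptions, not measured values):

```python
import numpy as np

def thompson_precision(s, a, N, b):
    """Thompson-Larson-Webb 2D localization precision (same units as s and a):
    s = PSF standard deviation, a = pixel size, N = photons collected,
    b = background noise (standard deviation, photons per pixel)."""
    var = (s**2 + a**2 / 12.0) / N + (8.0 * np.pi * s**4 * b**2) / (a**2 * N**2)
    return np.sqrt(var)

# illustrative numbers: 150 nm PSF sigma, 100 nm pixels, 5000 photons, b = 2
sigma_loc = thompson_precision(s=150.0, a=100.0, N=5000, b=2.0)  # a few nm
```

Reporting the measured photon counts and background alongside such an estimate makes the stated resolution of an SMLM dataset comparable between laboratories.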
Finally, we also foresee a further dramatic increase in ready-to-use software for data analysis,91 and online research data platforms92 easing complex analysis workflows and enabling anyone to use advanced algorithms.
In order to enhance the reproducibility of SRM data, the standardization of protocols and the complete reporting of the experimental conditions are crucial as already discussed in several texts such as the ‘Nanotechnology Standards’93 and MIRIBEL standards.94 Furthermore, extensive protocols by pioneering groups would greatly help novice and even expert researchers to adopt SMLM protocols as part of routine research, expanding their use between nanomaterial research groups.50
Fig. 4 summarizes the data reporting needed for different types of materials. Briefly, certain fundamental parameters should be reported: firstly, details of the materials used for nanomaterial formulation, synthesis and labelling protocols, information on the ligands, epitopes and fluorophores used, and complementary physicochemical characterization details, as thoroughly outlined in the MIRIBEL standards. Secondly, imaging details such as buffer components, microscope details, laser wavelength and intensity, camera type, exposure time and objective details (e.g. magnification, numerical aperture and immersion oil).50 Lastly, data analysis details, including the reconstruction software, analysis parameters (e.g. thresholds and filters applied), controls such as the background signal in the absence of nanomaterials, the algorithms used for data analysis and the full x, y and z coordinates.44,50 Maintaining consistency of these protocols and parameters between research laboratories would ensure that results arising from SMLM analysis of bio–nano interactions are reproducible, reliable and valuable to the scientific community.
To improve the reproducibility of SMLM systems, test and calibration procedures are required before starting a new experiment, to ensure optimal system performance and quantitative analysis.95 Nanoparticles such as microspheres and commercially available DNA origami test samples, such as GATTAquant slides, are used for 2D and 3D SRM validation procedures.95–101 Similarly, software packages for the analysis, visualization and quantification of SMLM images must also be validated. These packages can influence the data output, and generally use different parameters and terminologies, making the selection of optimal software difficult for the end user. Various commonly used packages, such as ThunderSTORM,99 rapidSTORM,100 simpleSTORM101 and QuickPALM,102 have been tested and compared using realistic simulated data, permitting researchers to pinpoint the optimal analysis software for their experiments.103,104 The implementation of standardized and user-friendly SMLM protocols and software packages will ensure that results between different laboratories are comparable, and that scientists get the best information out of their imaging data.
Firstly, through the rapid development of scientific CMOS (sCMOS) cameras, the imaging FOV has improved from ∼50 × 50 μm2 to ∼500 × 500 μm2 and frame rates from ∼30 fps to 400 fps compared with electron-multiplying charge-coupled device (EM-CCD) cameras.105,106
Secondly, the choice of fluorophore plays a crucial part in (d)STORM and (f)PALM. Photophysical properties, such as the number of photons detected per switching event, and the laser power intensity affect the time the fluorophore spends in the “on” state;62 switching events with high photon numbers improve the optical resolution obtained. DNA-PAINT, based on the transient binding and unbinding of complementary fluorophore-labelled imager strands and target-bound docking strands, achieves programmable dye interactions and is independent of the number of dyes used. However, it suffers from slow image acquisition, which can last several hours for ultra-resolution imaging (<5 nm).107 Nevertheless, a 10× increase in DNA-PAINT acquisition speed without sacrificing the advantages of the technique, for example imaging a 1 mm2 area in 8 hours, has recently been achieved through careful optimization of the oligonucleotide sequences and buffer conditions, paving the way for relatively fast high-throughput studies.108
Thirdly, high-throughput imaging can be achieved by simultaneously imaging multiple targets, i.e. multiplexing. In STORM and PALM this is achieved by using fixed labels (dyes attached to the structures of interest) with minimal spectral overlap to give low cross-talk.62,109 For example, two-colour dSTORM has been used to study the intracellular trafficking of siRNA-loaded polyplexes,110 polystyrene NPs111 and DNA-loaded polyplexes, and the stability of mesoporous silica NPs112 and polyplexes113 in serum, highlighting its potential for unveiling intracellular trafficking and drug delivery in nanomedicine. However, although this is a fairly easy protocol, it is limited by the availability of fluorescent probes with distinct emission spectra, which in most cases restricts imaging to three or four dyes.62 Newer approaches have been implemented, such as maS3TORM (Fig. 5), which uses multiplexed automated serial staining114,115 or quenching.116
Fig. 5 Automated maS3TORM setup and workflow, showing a photograph (upper left panel), schematic outlines (lower left panel) and experimental details of the automated multiplex system (right panel). Reused with permission from ref. 115.
In contrast, in exchange-PAINT the same dye and laser are used to image all structures of interest. This eliminates the need for spectrally distinct dyes, since multiplexing is limited only by the number of available docking strand sequences. This approach has been applied to polystyrene NPs to study the intraparticle distribution of different antibodies biofunctionalized on the NP surface.
Its multiplexing ability has also been demonstrated in 10 colours on DNA origami and in 4 colours on fixed cells,117 and more recently a method based on barcoding and precisely engineered blinking kinetics allowed 124-colour multiplexing in vitro and in situ within minutes.118 Depending on the question at hand, and with further development towards automation, we may see exchange-PAINT become a standard tool for studying nanomaterials in complex biomolecular systems.
A convergent method is high-content screening SMLM (HCS-SMLM), which provides an automated microscope for data acquisition, extraction and analysis, and can achieve 3D dSTORM imaging of a whole 96-well plate and about 100 cells within at most 10 hours. HCS-DNA-PAINT would allow quantification at the molecular level, but large-scale screening is still greatly limited by the time required. Potentially, with improvements such as optimized blinking kinetics, the application of this versatile HCS-SMLM system to drug screening could allow the effects of various nanomaterials and drugs on the organization and dynamics of target proteins to be studied at nanoscale resolution.119
In this section, we discuss key attributes of data acquisition and analysis software that will help transform SRM into a routine technique. For microscope control/data acquisition software, we propose that speed and performance, hardware compatibility, flexibility in data manipulation and a user-friendly interface are the key aspects, while for data analysis software we highlight efficiency and accuracy in extracting quantitative information from large image datasets. For both control and analysis software, we point out the advantage of fully harnessing the power of community contributions, meaning that open-source programs are essential. To benefit from the frontiers of data science, software should ideally be available in an accessible form or written in popular, community-driven programming languages, as exemplified by recent development trends using Python and R.
Scientific cameras and commercial super-resolution microscopes are usually equipped with closed-source control and acquisition software. A review of standard commercial software is covered in more exhaustive books and reviews.120 These commercial programs are efficient and beginner-friendly when used in combination with the corresponding hardware. However, they lack the ability to coordinately control multiple instruments in the same imaging system from one graphical user interface (GUI). National Instruments' LabVIEW is the most popular choice for coordinated instrument control, but it remains proprietary, closed-source software.
For SRM to become a routine technique, in addition to closed software, we believe that lightweight, cost-friendly, yet powerful solutions will play an indispensable role. Multiple open-source, research-driven programs are emerging as alternatives to commercial offerings, and some are even specialized for SRM. μManager is a free open-source program for microscope control with a wide range of hardware support and a sizable community of users.121 μManager, as an ImageJ plug-in, is designed for microscope control in general, but owing to its openness, plug-ins and functionalities for SRM have been developed.122 Tormenta and the Python Microscope Environment (PyME),123 developed in Python, are fully fledged software environments equipped with key features including full synchronization of microscope hardware and support for localization-based SRM.124 They benefit from the additional flexibility and larger community support that Python brings, and have become reliable and efficient SRM software. Pycro-Manager, written in Python, has recently emerged as a powerful environment that combines the strengths of both Python and μManager by exposing existing μManager functionality directly in Python.125 Pycro-Manager allows the user to exploit strong computational capabilities through Python libraries such as NumPy and SciPy, and brings state-of-the-art developments in machine learning into the loop.126,127
In SRM, and especially SMLM, data analysis is a complex process and one of the most limiting factors for new SRM users. It is generally a daunting task for a novice to carry out the data analysis, from choosing the right statistical methods and parameters to properly visualizing and rendering a bioimage for unbiased interpretation of the results.128 Typical SMLM analysis can be divided into two parts: Gaussian fitting for molecule localization, and post-fit localization analysis. Gaussian fitting determines the brightness, location and time of detection needed to interpret single-molecule images and kinetics, whereas post-fit localization analysis, involving density-based segmentation and cluster analysis, reveals the super-resolved features of the sample.129 An ideal software environment for routine SRM measurements should therefore first efficiently and accurately localize single molecules from raw images, then quantitatively interpret the results with high performance using correct statistical models, and finally provide high-quality data visualization, preferably all in a user-friendly GUI.
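The first of these two steps can be sketched minimally as follows (our own simplified stand-in: a background-subtracted weighted centroid with a moment-based width, rather than the full maximum-likelihood Gaussian fit used by production SMLM software):

```python
import numpy as np

def localize_spot(img):
    """Localize a single emitter in a small ROI: subtract the median as the
    background estimate, then take the intensity-weighted centroid and a
    width from the second moment (a crude but fast Gaussian-fit surrogate)."""
    w = np.clip(img - np.median(img), 0.0, None)  # background-subtracted weights
    ys, xs = np.indices(img.shape)
    total = w.sum()
    x0 = (w * xs).sum() / total                   # centroid position
    y0 = (w * ys).sum() / total
    sx = np.sqrt((w * (xs - x0) ** 2).sum() / total)  # moment-based width
    return x0, y0, sx, total  # position, width, crude photon count

# demo: synthetic spot at (x, y) = (6.3, 4.7), sigma = 1.5 px, background 10
ys, xs = np.indices((13, 13))
img = 100.0 * np.exp(-((xs - 6.3) ** 2 + (ys - 4.7) ** 2) / (2 * 1.5 ** 2)) + 10.0
x0, y0, sx, _ = localize_spot(img)  # recovers the sub-pixel position
```

Even this crude estimator recovers the emitter position with sub-pixel accuracy; the full Gaussian (or MLE) fit refines this further and provides the uncertainty estimates needed downstream.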
While closed-source software bundles exist for commercial SRM systems, we envision that open-source SRM software developed by the research community will have greater impact and drive the continuing progress of routine SRM techniques. This is mainly because, in quantitative SRM data analysis, it is often essential to fully control and test parameter choices to make sure that the reconstructed image yields reproducible and robust interpretations. Such full control is usually not possible with commercial software environments, which hide processing steps in black boxes and hamper access to manageable software. For materials chemists, tailoring the analysis to extract many specific features from a range of different materials is crucial.
High-performance SMLM analysis packages available as ImageJ plugins include, for example, ThunderSTORM130 and NanoJ,131 and more exhaustive reviews with longer lists can be found elsewhere.132,133 These plugins are well integrated into ImageJ, itself a fully featured image processing tool, and allow non-expert users to start SRM analysis with a few clicks while keeping parameter choices easily accessible. In other programming languages, SMAP (super-resolution microscopy analysis platform) in MATLAB,134 Picasso in Python135 and SMoLR (Single Molecule Localization in R)136 have emerged as fully functional SRM software packages that can independently perform all steps of the analysis with high efficiency and performance. These software packages have been extensively benchmarked, and will continue to be examined, and improved, by the ever-growing SRM community.133
We envision that the boundary between data acquisition and analysis software will finally be broken, and that a single SRM software package compatible with multiple SRM modalities will become a major part of routine SRM. Conventionally, data acquisition and analysis are a two-step process in which acquisition parameters are extensively and manually optimized prior to imaging. This is tedious and can yield irreproducible data even when the same parameters are reused, because the outcome is strongly influenced by physical factors such as laser power and stage stability. A smart software that optimizes the acquisition parameters on the fly, by analyzing the data as they are acquired, is therefore needed. Recent machine-learning approaches for this purpose can create fully automated systems that acquire higher-quality images.137,138 A complex technique such as SRM can only become routine if the user can easily acquire and analyze high-quality data. Although most existing SRM software packages are designed for biological samples, we envision that dedicated functionality will be developed for materials science as well.
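A toy sketch of what such closed-loop acquisition could look like: after each frame, a quick on-line analysis estimates the number of active emitters and nudges the activation laser power toward a target density suitable for single-molecule fitting. `acquire_frame` is a stub camera model and the proportional gain is illustrative; neither corresponds to a real instrument API.

```python
import numpy as np

def acquire_frame(power, rng):
    """Stub camera model: the number of active emitters per frame
    grows roughly linearly with activation laser power."""
    return rng.poisson(2.0 * power)

def run_feedback(target=20, n_frames=100, gain=0.02, seed=1):
    """Proportional feedback loop: adjust the activation power so the
    per-frame emitter density stays near the sparsity optimum for SMLM."""
    rng = np.random.default_rng(seed)
    power = 1.0
    densities = []
    for _ in range(n_frames):
        n = acquire_frame(power, rng)
        densities.append(n)
        power = max(0.1, power + gain * (target - n))  # clamp at minimum power
    return power, densities

power, densities = run_feedback()
mean_tail = sum(densities[-20:]) / 20  # density after the loop has settled
```

A real implementation would replace the stub with the instrument-control layer and the emitter count with a fast on-line localization step, but the control-loop structure is the same.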
Microfluidics has been integrated into several optical imaging methods and has proven capable of making imaging instruments cheaper and smaller with minimal impact on imaging quality.142,143 These advantages of integrated microfluidics are also appealing for further SRM imaging modalities. In live-cell imaging, custom-designed microfluidic channels have contributed significantly to automated cell culturing,144 bacteria immobilization145 and control of the local temperature and medium.146
We believe that automated, high-precision and rapidly equilibrating environmental control, including temperature, gas atmosphere and solution conditions such as pH, solvent and buffer, via microfluidic or flow setups will open up enormous opportunities for scientists to explore unprecedented properties of synthetic and biological materials.
To improve this, the price of the setup has to be lowered without compromising the performance of the technique or the quality of the information acquired. This can be achieved in several ways: (i) reducing the price of individual components; (ii) improving the instrument design; and (iii) building more dedicated, tailored setups for chemical analysis in which only the necessary components are used.
The first step towards a more cost-effective setup is removing the microscope body and the ultra-stable optical table. This not only reduces costs but also results in a more compact setup that saves lab space and allows more downstream modification and customization. Furthermore, a compact design, such as the liteTIRF or miCube setup, reduces mechanical and thermal drift, eliminating the need for the drift-correction modules otherwise required when imaging large numbers of frames or in 3D (Fig. 6).85,147 Additionally, omitting the optical table enables point-of-care operation in field studies and in-class demonstrations for students. Commercially, the ONI Nanoimager has already made progress in this direction: it is a small device that requires no optical table, allows temperature control between 20 and 45 °C, and offers microfluidics options.148
Fig. 6 (a) LiteTIRF microscope (reprinted from ref. 85), (b) miCube set-up (reprinted from ref. 147), (c) ONI Nanoimager (reprinted from ref. 148), and (d) UC2 microscope (reprinted from ref. 153).
Second, the use of lower-grade components can also significantly reduce the cost of the setup. Commercial microscopes often contain expensive high-end components that can easily be substituted with standard off-the-shelf alternatives. The scientific-grade laser, for instance, can be replaced with an industry-grade high-power laser diode; a multi-mode fiber, a rotating diffuser and a fiber-based beam shaper are then needed to achieve uniform illumination.149–151
Regarding the camera, sCMOS cameras offer improved imaging speed and a larger field of view at a significantly lower price than EMCCD cameras. The numerical aperture (NA) of the objective determines the photon-collection efficiency and thus the localization precision of the resulting SMLM image. Commercial microscopes are generally equipped with a high-NA objective lens (NA > 1.4), yet replacing it with a low-cost objective, e.g. an oil-immersion objective with an NA of 1.3, has been shown to yield a localization precision nearly identical to that of a standard SMLM setup.150
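The trade-off behind this observation can be estimated with the widely used Thompson–Larson–Webb formula for lateral localization precision; the wavelength, photon counts, pixel size and background level below are illustrative assumptions, not values from ref. 150.

```python
import math

def thompson_precision(s_nm, pixel_nm, photons, bg):
    """Thompson–Larson–Webb estimate of lateral localization precision (nm):
    sigma^2 = (s^2 + a^2/12)/N + 8*pi*s^4*b^2/(a^2*N^2),
    with PSF width s, pixel size a, photon count N and background b."""
    var = (s_nm ** 2 + pixel_nm ** 2 / 12.0) / photons \
        + 8.0 * math.pi * s_nm ** 4 * bg ** 2 / (pixel_nm ** 2 * photons ** 2)
    return math.sqrt(var)

wavelength, pixel, bg = 647.0, 100.0, 20  # nm, nm, background photons/pixel

results = {}
for na, photons in [(1.49, 1000), (1.30, 900)]:  # slightly fewer photons at lower NA
    s = 0.21 * wavelength / na  # rough Gaussian approximation of the PSF width
    results[na] = thompson_precision(s, pixel, photons, bg)
    print(f"NA {na}: PSF width {s:.0f} nm, precision {results[na]:.1f} nm")
```

Because the precision scales with the PSF width divided by the square root of the photon count, the modest broadening and photon loss at NA 1.3 degrade the precision only by a factor well below two, consistent with the near-identical performance reported.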
Another approach under investigation is mobile microscopy, which would give researchers a hand-held device capable of acquiring SMLM images with little more than a smartphone. This combines acquisition and processing in a single device and removes barriers between educational and laboratory environments. However, the camera chips in consumer smartphones are not scientific-grade CMOS sensors, and acquiring high-quality RAW data from them is complicated.152
Third, cheaper manufacturing protocols can reduce the cost. Over the years, 3D printers have become standard equipment in research laboratories, and open-source projects such as UC2 and OpenFlexure make it possible to print and assemble a microscope within the research facility itself. This not only reduces manufacturing costs but also eliminates shipping and enables the use of SMLM in developing nations (Fig. 6). The modular approach of the UC2 microscope allows fast modification of the setup, which is essential in the development stage of an experiment, while in the OpenFlexure microscope the optics module is interchangeable, facilitating multiple imaging modes. Although these systems are still far from standard SRM, the first examples of sub-diffraction imaging have been demonstrated.153–155
In our opinion, all the methods discussed above are good examples of cost reduction in SMLM. A major drawback of these systems, however, is that users must assemble them themselves and require specific training, which raises the barrier to adoption; sharing knowledge and experience on online forums will help lower it. Commercial microscopes, on the other hand, provide a ready-to-use setup and a support team in case the instrument malfunctions. Combining the experience of open-science projects with that of commercial instruments will bring a new wave of affordable, high-performance SRM instruments.
We believe that these advances in the next decade will expand the application of SRM to new discoveries and insights into the structure, dynamics and function of complex synthetic and biological systems.