Helen R. Saibil
ISMB, Birkbeck, University of London, UK. E-mail: h.saibil@bbk.ac.uk
First published on 11th October 2022
This article provides an introductory background and overview of the discussion meeting. It begins with an account of a few key milestones in the development of the cryo EM field, followed by an overview of the presentations that will form the basis of the discussion.
The basic ingredients were in place before 1990, but the significance of these developments was not appreciated by many outside the immediate field, aside from a very small number of far-sighted protein crystallographers. The low resolution of EM maps inspired the term blobology. Nevertheless, the blobby maps gave useful information about the shapes of large macromolecular machines in different functional states. In some cases, this could be combined with crystal structures of the components, showing how they fit together. Cell tomography was also developing at that early stage but did not yet attract much attention from structural biologists.
The notion that biological structures could be determined from transmission EM images by 3D reconstruction from 2D projections was developed by Klug and colleagues,1 and the idea of collecting tilt series for tomography by Hoppe.2 The Laboratory of Molecular Biology in Cambridge focussed on ordered assemblies, and the pioneering work on 2D crystals of bacteriorhodopsin demonstrated that features of molecular structure such as transmembrane alpha helices could be directly visualised from EM images of the crystals.3 A major problem to be solved was the detection of weak structural signals buried in a high background of noise. The bacteriorhodopsin crystals could not be discerned from the background noise, but the Fourier transforms of the apparently blank images revealed sharp diffraction spots, which could then be separated from the noise to reveal the structure.
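The principle behind this Fourier detection step can be illustrated with a toy numerical sketch (an invented example, not the original analysis): a weak periodic lattice that is invisible against heavy noise in real space still produces sharp peaks in the power spectrum, well above the noise floor.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# Synthetic "2D crystal": a weak periodic lattice buried in strong noise.
y, x = np.mgrid[0:n, 0:n]
lattice = 0.1 * (np.cos(2 * np.pi * x / 16) + np.cos(2 * np.pi * y / 16))
image = lattice + rng.normal(0.0, 1.0, (n, n))   # signal well below the noise

# The image itself looks blank, but the power spectrum concentrates the
# periodic signal into sharp diffraction spots.
power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2

# The lattice spacing of 16 pixels puts a spot n/16 pixels from the centre.
centre = n // 2
spot = power[centre, centre + n // 16]
noise_floor = np.median(power)
ratio = spot / noise_floor
print(ratio)   # the spot stands far above the noise floor
```

Once the spots are identified, the rest of the spectrum can be masked out and the image back-transformed, which is the essence of separating the crystal signal from the noise.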
Another big problem to overcome was the dehydration of biological samples in the high vacuum of the EM column. Dubochet and colleagues invented a method for capturing the natively hydrated state of thin samples by cooling rapidly enough to prevent ice crystallisation, producing the vitrified form.4 The microscope technology was developed to transfer and maintain the sample at cryo temperature. This also had the additional major benefit of slowing the effects of electron irradiation damage, which is the ultimate limit on resolving radiation sensitive biological molecules.
Henderson found that the resolution of the 2D crystal analysis was limited by imperfections in the lattice order, and he used cross correlation as a measure of similarity to find the actual positions of unit cells and correct them, a procedure he called unbending. The notion of finding copies of a structural unit by cross correlation was also used for correlation averaging, an early form of single particle analysis in 2D.5 Extracting images of particles from micrographs of a purified population and grouping similar ones formed the basis of single particle analysis, a powerful and very broadly applicable method that has fuelled the revolution in cryo EM.
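The unit-cell search can be sketched as template matching by cross correlation. The example below is an illustrative toy (the motif, position and noise level are invented), not the unbending software itself: the correlation peak locates a noisy copy of the template, so a displaced unit cell can be shifted back onto the ideal lattice.

```python
import numpy as np
from numpy.fft import fft2, ifft2

rng = np.random.default_rng(1)

# A small "motif" (template) placed at a known but displaced position,
# buried in noise -- a toy stand-in for a distorted crystal unit cell.
n, t = 128, 16
template = np.zeros((t, t))
template[4:12, 4:12] = 1.0

field = rng.normal(0, 0.5, (n, n))
true_pos = (37, 81)                       # top-left corner of the motif
field[true_pos[0]:true_pos[0] + t, true_pos[1]:true_pos[1] + t] += template

# Cross correlation via FFT: the peak position gives the motif's location.
padded = np.zeros((n, n))
padded[:t, :t] = template
cc = np.real(ifft2(fft2(field) * np.conj(fft2(padded))))
peak = np.unravel_index(np.argmax(cc), cc.shape)
print(peak)   # recovers the displaced position despite the noise
```

Repeating this search over the whole micrograph, and interpolating each unit cell back to its ideal lattice position, is the cross-correlation idea underlying both unbending and correlation averaging.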
Moving to 3D, images of single particles can be considered as 2D projections of the particle from different views. Many of the single particle methods use projection matching, in which images are compared to projections of a model 3D map. With the orientations assigned, it is possible to build the 3D structure of the molecule or assembly of interest. Choosing the initial model carries the risk of reference bias, but recent methods perform very efficient searches of the data set to generate appropriate models, and the high resolution of current structures makes it easier to recognise errors.
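Projection matching can be sketched in a reduced 2D-to-1D setting (the real case is 3D-to-2D, and the "molecule", angles and noise here are invented): reference projections of a model are computed over a grid of orientations, and each noisy observation is assigned the orientation with the highest normalised cross correlation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "molecule": a cloud of point scatterers in 2D.  A 1D projection at
# angle theta is the histogram of the points along that direction.
points = rng.normal(0, 1, (200, 2)) * [2.0, 0.5]   # elongated shape
bins = np.linspace(-4, 4, 33)

def project(theta):
    coords = points @ [np.cos(theta), np.sin(theta)]
    return np.histogram(coords, bins=bins)[0].astype(float)

# Reference projections of the model over a grid of angles.
angles = np.deg2rad(np.arange(0, 180, 5))
refs = [project(a) for a in angles]

# A noisy experimental "image" taken at an unknown orientation.
obs = project(np.deg2rad(40)) + rng.normal(0, 1.0, 32)

# Assign the orientation with the highest normalised cross correlation.
def ncc(a, b):
    a = a - a.mean(); b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [ncc(r, obs) for r in refs]
best = float(np.rad2deg(angles[int(np.argmax(scores))]))
print(best)   # close to the true 40 degree orientation
```

With orientations assigned to every particle image in this way, the 3D map can be reconstructed and the procedure iterated against the improved model.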
Single particle cryo EM is now a big success story, although challenges remain, with scope for further improvement. Key milestones in the development of single particle analysis have been improvements in the detection of weak signals in the low signal to noise data, and in particle alignment and classification, using projection matching or statistical analysis to compare images. An original and important innovation introduced by van Heel and Frank6 was the classification of single particle images into similar subsets. Although very crude at first, using negative stain images in 2D, the idea of distinguishing structural variations, whether arising from differences in projection orientation, conformational flexibility or partial occupancy of components, has been central to the study of macromolecular assemblies.7 Flexibility and compositional variation are key characteristics of working machines, and the important functional states could start to be resolved by separating a data set of particles into structural classes.
This was originally done by statistical methods such as principal component analysis or multivariate statistical analysis to find the major components of variation between images in the data set. Current software mostly uses projection matching as a basis for clustering images, greatly enhanced by the use of Bayesian approaches for avoiding false minima in noisy matches. The probabilistic approach was implemented in X-ray crystallography by Bricogne.8 It has been very effectively implemented for single particle reconstruction, along with a noise model, in Relion by Scheres,9 and combined with hardware advances to give the explosion of progress towards atomic resolution that has led to the RSC considering this as a suitable topic for a Faraday Discussion. The automation of complex image processing decisions on how to treat the noise at each stage of refinement opened the field to a much wider community of users, instead of being limited to a few experts.
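The statistical classification idea can be sketched with principal component analysis on synthetic data (an illustrative toy, not any production package): the leading component of variation separates two structural classes even when individual images are dominated by noise.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two structural "classes": toy 1D images with a peak on the left or the
# right, plus heavy noise -- standing in for two conformational states.
n_px, n_per = 64, 150
x = np.arange(n_px)
class_a = np.exp(-0.5 * ((x - 20) / 4.0) ** 2)
class_b = np.exp(-0.5 * ((x - 44) / 4.0) ** 2)
labels = np.repeat([0, 1], n_per)
data = (np.where(labels[:, None] == 0, class_a, class_b)
        + rng.normal(0, 0.5, (2 * n_per, n_px)))

# PCA via SVD of the mean-centred data: the leading principal component
# captures the main axis of variation between the two classes.
centred = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ vt[0]

# Classify by the sign of the projection onto the first component.
pred = (scores > 0).astype(int)
accuracy = max(np.mean(pred == labels), np.mean(pred != labels))  # sign is arbitrary
print(accuracy)
```

Real packages work on 2D images with many components and more sophisticated clustering, but the underlying idea of projecting noisy images onto the major axes of variation is the same.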
A major problem holding back progress was the low sensitivity, slow speed and low resolution of film and CCD detectors. The development of direct electron detectors vastly improved the sensitivity and speed of electron image recording. This helped greatly with another big problem, movement of the vitrified sample, particularly when irradiated with electrons. Because direct detector recording is so much faster, it became possible to record the images as movies, fractionating the exposure into a series of short frames that can be mutually aligned by cross correlation, enabling the detection and correction of movement. More stable electron sources and sample stages also contribute to the improvements. Further improvements come from detailed attention to the sample support. Optimising the grid material and hole size enabled Russo and colleagues to perform a remarkable feat – the extrapolation of image data to zero electron dose, in other words what the electron image would have been before any electrons hit (and damaged) the sample.10,11 The speed and efficiency of direct detectors (especially relative to the initial, laborious photographic film) and the automation of data collection12 provided another essential advance – vastly increased image data sets, which fuel the development of increasingly sophisticated statistical methods for image alignment and classification. Massively increased computing power enables refinement of electron optical and alignment parameters on a per particle basis, e.g. ref. 13. More recent innovations in electron optics, increasing the use of beam deflections to rapidly collect a group of images without the more time-consuming mechanical stage movements, have sped up data collection still more.
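The movie-frame alignment step can be sketched as follows (a toy with an invented specimen and drift values, not any specific motion-correction program): each frame is cross correlated against the first, the correlation peak gives the integer drift, and the frames are shifted back before averaging.

```python
import numpy as np
from numpy.fft import fft2, ifft2

rng = np.random.default_rng(4)
n = 64

# Toy "specimen" of sparse point features, imaged as a movie of noisy
# frames, each translated to mimic beam-induced motion during exposure.
specimen = np.zeros((n, n))
specimen[rng.integers(0, n, 60), rng.integers(0, n, 60)] = 1.0
true_shifts = [(0, 0), (1, 2), (3, 3), (4, 5)]
frames = [np.roll(specimen, s, axis=(0, 1)) + rng.normal(0, 0.3, (n, n))
          for s in true_shifts]

# Cross correlate each frame against the first; the correlation peak gives
# the integer drift, which is undone before averaging the frames.
ref = fft2(frames[0])
found, aligned = [], []
for f in frames:
    cc = np.real(ifft2(fft2(f) * np.conj(ref)))
    dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
    found.append((int(dy), int(dx)))
    aligned.append(np.roll(f, (-dy, -dx), axis=(0, 1)))

average = np.mean(aligned, axis=0)   # motion-corrected average
print(found)   # drifts recovered relative to the first frame
```

Production software additionally fits sub-pixel and dose-dependent local motion, and weights frames by accumulated radiation damage, but the cross-correlation principle is as above.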
The steady increase in processing power has enabled the development of a new generation of software methods incorporating machine learning. For example, neural networks have been applied to the analysis of structural heterogeneity arising from continuous flexibility in a single particle data set in software such as CryoDRGN.14 Atomic model building has had its own revolution with the application of machine learning to structure prediction by AlphaFold2.15 In addition, there are powerful strategies to tackle flexibility and heterogeneity, using approaches such as localised refinement and multibody refinement.
Near atomic resolution was achieved before the detector revolution on rigid icosahedral viruses: a 3.9 Å virus structure was determined from 12000 images recorded on a Polara microscope with a CCD camera at 1 Å pixel size.16 The images were collected as focal pairs so that particles could be detected on the second, more defocussed image, while the structure determination used the closer to focus particles, assumed to be at the same positions. In 2020, atomic resolution was reached17–19 and the lower limit for particle size is going down.
Tomography and sub tomogram averaging are now the frontier methods of structural biology in situ. Data collection for tomography is slower than for single particle analysis, and particle picking from tomograms is tricky. Processing is harder because of the missing information in tomograms, owing to the limitation at high tilt for the typical thin sheet specimen geometry, and because of the huge attrition in signal from accumulated radiation damage.
The Chameleon (SPT Labtech, Cambridge UK), a development of the Spotiton device in the Carragher lab, uses self-wicking grids to get a rapid spread of the droplets on the grid so that it can be plunge frozen within 50 ms to a few hundred ms.22 For some samples this reduces preferred orientation. Here, Klebl et al. have developed an ultra fast, sub millisecond grid preparation method (Klebl, Muench et al., https://doi.org/10.1039/D2FD00079B). They demonstrate that the method can capture the very short lived acto-myosin-ATP complex, but also show that there is still some preferred orientation of ribosomes even at these short times.
Bakker addresses problems of specific samples by devising plunge freezing protocols with controlled temperature, O2 and light conditions, a reminder that every sample has its own problems and that experimental design needs to take these into account (Bakker et al., https://doi.org/10.1039/D2FD00060A). Rapid application of sample after microfluidic mixing, together with the use of detergent molecules and environmental control to modulate the interface properties, is discussed by Braun et al. (https://doi.org/10.1039/D2FD00066K). An attractive but very challenging idea is to use the separation power of electrospray mass spectrometry to deliver specific molecules to the grid, presented by Esser (Rauschenbach et al., https://doi.org/10.1039/D2FD00065B). At present molecules are dehydrated and frozen 2–30 min after landing on the grid. The overall shapes of the proteins can be discerned but internal details are not visible.
Correlative light and electron microscopy (CLEM) is a key requirement for connecting in situ cell biology to molecular and atomic structures. Frank covers the basic practicalities and challenges of imaging fluorescently labelled regions of brain by plunge freezing of homogenised brain tissue, cryo sectioning, or focussed ion beam milling of high-pressure frozen brain slices, requiring lift out of smaller sample regions (Frank et al., https://doi.org/10.1039/D2FD00081D). These are increasingly difficult and non-routine methods. Targeting specific sites in brain tissue is one of the most challenging areas of CLEM.
An elegant method using scanning transmission EM for tomography of thick specimens up to 1 μm or more has been developed by Elbaum et al. (https://doi.org/10.1039/D2FD00088A). It is applied in this case to the hemozoin crystals sequestered by malaria parasites. Cryo STEM tomography works well with crystals, and is here used to prove that there is no lipid envelope surrounding the crystals. The method involves a compromise between sample thickness and resolution.
The CCP EM team, represented by Joseph, discuss a variety of tools for model validation and fit validation (Joseph, Winn et al., https://doi.org/10.1039/D2FD00103A). Their analysis of the structural database shows that many of the deposited EM derived models can be improved because they do not optimally fit the map. Vilas discusses many possible criteria for map validation, given that the gold standard Fourier shell correlation (FSC) is often over optimistic (Sorzano et al., https://doi.org/10.1039/D2FD00059H).
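The Fourier shell correlation used in such validation can be sketched numerically (a toy calculation with a synthetic map, not the validation tools under discussion): two half maps of the same structure with independent noise are compared, shell by shell, in Fourier space.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 64

# Two independent "half maps" of the same toy 3D structure, each with its
# own noise, as in a gold standard comparison of half-set reconstructions.
z, y, x = np.mgrid[0:n, 0:n, 0:n] - n // 2
structure = np.exp(-(x**2 + y**2 + z**2) / 40.0)
half1 = structure + rng.normal(0, 0.05, (n, n, n))
half2 = structure + rng.normal(0, 0.05, (n, n, n))

# FSC: normalised cross correlation of the two maps' Fourier coefficients,
# computed shell by shell in spatial frequency.
f1 = np.fft.fftshift(np.fft.fftn(half1))
f2 = np.fft.fftshift(np.fft.fftn(half2))
radius = np.sqrt(x**2 + y**2 + z**2).round().astype(int)

fsc = []
for r in range(1, n // 2):
    shell = radius == r
    num = np.sum(f1[shell] * np.conj(f2[shell]))
    den = np.sqrt(np.sum(np.abs(f1[shell])**2) * np.sum(np.abs(f2[shell])**2))
    fsc.append(float(np.real(num) / den))

# FSC falls from ~1 at low frequency towards 0 where noise dominates; the
# commonly quoted resolution estimate is the frequency of the 0.143 crossing.
print(fsc[0], fsc[-1])
```

Part of the debate over validation is precisely that this curve can be inflated, for example by masking or by residual correlations between the half sets, which is why additional map validation criteria are being proposed.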
Russo et al. consider the smallest single particle that it is in principle possible to identify in situ, giving a tour of the technical limitations and potential solutions (Russo et al., https://doi.org/10.1039/D2FD00076H). One conclusion is that it will be necessary to abandon the familiar projection approximation to allow more accurate restoration of the structure information from the images. Various technical improvements are considered that are expected to give further gains in contrast and signal to noise, and to reduce electron beam damage. Improved contrast without image degradation is expected with the development of laser phase plates, because they put no additional physical material in the beam path. Improvements in signal could be provided by chromatic aberration correctors, and liquid He cooling is reconsidered to explore lower temperature phases of amorphous ice to reduce radiation damage. Finally, further optimisation of specimen supports might facilitate progress beyond the current physical limitations.
Daum returns to the point that even highly symmetric biological structures can deviate in detail from exact symmetry, specifically considering helical assemblies (Daum et al., https://doi.org/10.1039/D2FD00051B). Even if the symmetry assignment appears correct, relaxing the symmetry can reveal local variations and give higher resolution. As with the lattice disorder addressed by early unbending, imposing symmetry on a structure that deviates from exact symmetry leads to loss of information.
The Faraday meeting covered all these topics and a broad range of related issues in a lively series of discussions.
This journal is © The Royal Society of Chemistry 2022