Machine learning analysis of self-assembled colloidal cones
Abstract
Optical and confocal microscopy are used to image the self-assembly of microscale colloidal particles. The density and size of self-assembled structures are typically quantified by hand, a process that is extremely tedious. Here, we investigate whether machine learning can improve the speed and accuracy of identification. This method is applied to confocal images of dense arrays of two-photon lithographed colloidal cones. RetinaNet, a deep learning implementation based on a convolutional neural network, is used to identify self-assembled stacks of cones. Synthetic data is generated using Blender to supplement the experimental training data for the machine learning model. This synthetic data captures key characteristics of confocal images, including slicing in the z-direction and Gaussian noise. We find that the best performance is achieved by a model trained on a mixture of synthetic and experimental data. This model achieves a mean Average Precision (mAP) of ∼85%, and accurately measures the degree of assembly and the distribution of self-assembled stack sizes for different cone diameters. Minor discrepancies between machine learning and hand-labeled data are discussed in terms of the quality of the synthetic data and differences between cones of different sizes.