Image-based phenotyping of disaggregated cells using deep learning

Abstract
The ability to phenotype cells is fundamentally important in biological research and medicine. Current methods rely primarily on fluorescence labeling of specific markers, but there are many situations where this approach is unavailable or undesirable. Machine learning has been applied to image cytometry, but its use has been limited by cell agglomeration, and it is unclear whether this approach can reliably phenotype cells that are difficult to distinguish by eye. Here, we show that disaggregated single cells can be phenotyped with high accuracy using low-resolution bright-field and non-specific fluorescence images of the nucleus, cytoplasm, and cytoskeleton. Specifically, we trained a convolutional neural network (CNN) on automatically segmented images of cells from eight standard cancer cell lines. When tested on separately acquired images, these cells were identified with an average F1-score of 95.3%. Our results demonstrate the potential to develop an "electronic eye" that phenotypes cells directly from microscopy images.

Berryman et al. demonstrate that disaggregated cells can be phenotyped with high accuracy from bright-field and non-specifically stained microscopy images using a trained CNN. This approach enables identification of cell types without the need for specific markers.
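The reported 95.3% figure is an F1-score averaged over the eight cell lines. As a minimal sketch of how such a macro-averaged F1 is computed from a multi-class confusion matrix (the row/column convention and function name here are illustrative, not from the paper):

```python
# Macro-averaged F1-score over a multi-class confusion matrix,
# where confusion[i][j] counts cells of true class i predicted as
# class j (layout convention assumed for illustration).

def macro_f1(confusion):
    """Average the per-class F1-scores of a square confusion matrix."""
    n = len(confusion)
    scores = []
    for c in range(n):
        tp = confusion[c][c]                                  # true positives
        fp = sum(confusion[r][c] for r in range(n)) - tp      # false positives
        fn = sum(confusion[c]) - tp                           # false negatives
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        scores.append(f1)
    return sum(scores) / n

# Toy 3-class example (not the paper's data):
cm = [[9, 1, 0],
      [0, 8, 2],
      [1, 0, 9]]
print(round(macro_f1(cm), 3))  # → 0.866
```

Macro-averaging weights each cell line equally regardless of how many cells of each line were imaged, which is the usual choice when per-class performance matters as much as overall accuracy.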
Funding Information
  • Mitacs (IT09621, IT13817)
  • Gouvernement du Canada | Natural Sciences and Engineering Research Council of Canada (2020-05412, 2015-06541, 2020-00530, 508392-17)
  • Gouvernement du Canada | Canadian Institutes of Health Research (381129, 322375, 426032)
  • Michael Smith Foundation for Health Research (18714)