Motivation

The practical needs of wet-lab experiments and measurement workflows led us to design a system for trapping single yeast cells and studying their oscillation patterns. However, we encountered several challenges:

1. The oscillation cycle of the engineered yeast is roughly 10 hours, while its replicative life span (RLS) exceeds 70 hours, necessitating the processing of a vast number of images.

2. Precisely segmenting multiple yeast cells from the images is difficult and inconvenient.

3. No integrated system is currently available that covers the full workflow from submatrix segmentation to submatrix and yeast recognition.

Requirement Analysis

First, we have identified our target audience as researchers in fluorescence imaging and related industry fields. Second, we aim to develop user-friendly software that enables interactive use and interpretation of experimental results. Lastly, we are committed to protecting user privacy.

Therefore, we have designed a model with the following characteristics:

1. Effective data processing: capable of handling large volumes of images.

2. Accurate image segmentation: capable of segmenting images accurately at the desired resolution.

3. Dot recognition: the model should have high sensitivity in identifying submatrices that contain dots.

Software Structure

• Overview

To tackle the challenges in image processing, our software, LuminoSeg, integrates multiple deep learning models and is divided into three main components: dot matrix segmentation using YOLOv8, dot matrix recognition using a CNN, and single-yeast-cell segmentation using Cellpose cyto3.

Figure 1. General Overview of LuminoSeg

• Dot matrix segmentation

Figure 2. The YOLOv8 structure for dot matrix segmentation.

In this research, we employed the YOLOv8 model to detect and segment cell traps within microscopic images. The YOLO model operates through a series of steps: converting input data into tensors, normalization, convolution, activation functions, and prediction. YOLO relies solely on convolutional layers, making it a fully convolutional network (FCN). In the YOLOv3 paper, the authors introduced a more advanced feature extractor called Darknet-53 [1]. As its name suggests, it contains 53 convolutional layers, each followed by a batch normalization layer and a Leaky ReLU activation function [2]. Our images involve three classes: cell trap, dot, and background. Instead of pooling layers, a convolutional layer with a stride of 2 is used to downsample the feature maps. Given our small dataset, we found that the detection accuracy of well-established SSD models was far lower than that of YOLO models. Through the convolutional neural network (CNN) architecture design of its P4 feature level, YOLOv8 achieves high-resolution detection, allowing precise localization and segmentation of submatrices. We manually annotated the trap block diagrams in 76 raw images and trained the model until its detection confidence reached 0.9 or higher.
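As an illustration, the sketch below shows how a YOLOv8 segmentation model can be fine-tuned and applied with the ultralytics Python package; the dataset file name, checkpoint, and hyperparameters are placeholders rather than our exact configuration.

```python
# Sketch of fine-tuning and running a YOLOv8 segmentation model with ultralytics.
# "cell_traps.yaml" and the hyperparameters are illustrative assumptions.
from ultralytics import YOLO

# Start from a pretrained segmentation checkpoint and fine-tune on the
# manually annotated cell-trap dataset (classes: cell trap, dot, background).
model = YOLO("yolov8n-seg.pt")
model.train(data="cell_traps.yaml", epochs=100, imgsz=640)

# Keep only detections whose confidence reaches 0.9, matching the threshold above.
results = model.predict("raw_frame.png", conf=0.9)
for r in results:
    print(r.boxes.cls, r.boxes.conf)  # class ids and confidences
    # r.masks holds the per-instance segmentation masks, if any were found
```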

• Dot matrix recognition

Manually annotating the massive amount of image data obtained from microscopy is not only time-consuming but also labor-intensive. To address this, we implemented machine learning models for the automatic recognition of binary matrix values, streamlining the batch-processing workflow. Focusing on the binary nature of these matrices, we simplified the model by using only the grayscale channel and applied histogram equalization to boost contrast. We used CNNs, renowned for their prowess in image processing, with a compact architecture consisting of two convolutional layers followed by two fully connected layers (Fig 3). After comparing and testing different models, we found that this compact CNN is sufficient for our needs. To enhance robustness and reduce overfitting, we expanded the dataset by generating additional images through transformations of the manually annotated samples. This approach significantly improves the efficiency of biological image processing with machine learning while maintaining good model performance (see Model Performance for details).

Fig 3. The CNN structure for dot recognition.
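The following sketch outlines a compact CNN of this kind in PyTorch, together with the grayscale loading and histogram-equalization preprocessing described above; the layer widths, 64x64 input size, and two-class output are illustrative assumptions, not the exact architecture of Fig 3.

```python
# Sketch of a two-conv, two-FC dot-recognition CNN on equalized grayscale crops.
# Layer sizes, the 64x64 input, and the two-class output are assumptions.
import cv2
import torch
import torch.nn as nn

class DotCNN(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def preprocess(path: str) -> torch.Tensor:
    # Grayscale channel only, contrast boosted by histogram equalization.
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.equalizeHist(img)
    img = cv2.resize(img, (64, 64))
    return torch.from_numpy(img).float().div(255).unsqueeze(0).unsqueeze(0)

model = DotCNN(n_classes=2)                       # dot present / absent (assumed labeling)
logits = model(preprocess("submatrix_crop.png"))  # hypothetical crop file
```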

• Cell segmentation

Cellpose, which is built on a U-Net-style architecture, is suited to accurate cell segmentation across diverse image types without the need for model retraining or parameter tuning. It downsamples and then upsamples feature maps, with skip connections bridging layers of equivalent size and global skip connections that summarize the lowest-resolution representation. In this way, information computed at low resolution is made available to all subsequent stages of processing.

Building on the latest iteration of the Cellpose model, cyto3, we trained our model to precisely recognize and segment yeast cells [3]. In our implementation, each image is tagged with an identifying number; the model takes the image as input, applies denoising, and calibrates the cell diameter in pixels to optimize recognition accuracy. We then manually corrected incorrectly segmented yeast cells, ensuring precise segmentation (Fig 4).

Figure 4. Results of the trained cyto3 model. Each colored dot represents an identified yeast cell.
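A minimal sketch of running the pretrained cyto3 model through the Cellpose Python API is shown below; the file name and settings are placeholders, and our actual pipeline additionally applies denoising and manual mask correction.

```python
# Minimal sketch of running the pretrained cyto3 Cellpose model on a
# single-channel yeast image; file name and settings are illustrative only.
from cellpose import models, io

img = io.imread("trap_crop.tif")

# channels=[0, 0]: segment on the grayscale channel with no nuclear channel;
# diameter=None lets Cellpose estimate the cell diameter in pixels.
model = models.Cellpose(gpu=False, model_type="cyto3")
masks, flows, styles, diams = model.eval(img, diameter=None, channels=[0, 0])

print("detected cells:", masks.max())  # labels 1..N, background is 0
```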

• Fluorescence detection and visualization

Once cell segmentation is complete, ImageJ is used to perform the subsequent visualizations (Fig 5). The fluorescence data are then processed to visualize the oscillation curve, and period analysis is performed using the Fast Fourier Transform (FFT) in MATLAB.

Figure 5. Visualization and FFT of the fluorescence data.
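Our period analysis is done with MATLAB's fft, but the same computation can be sketched in Python/NumPy as below; the 0.5 h imaging interval and the synthetic example trace are assumptions for illustration only.

```python
# NumPy equivalent of the MATLAB FFT period analysis; the 0.5 h sampling
# interval and the synthetic trace are assumptions for illustration.
import numpy as np

dt_hours = 0.5                              # assumed imaging interval
t = np.arange(0, 70, dt_hours)              # ~70 h replicative life span
trace = np.sin(2 * np.pi * t / 10) + 0.1 * np.random.randn(t.size)  # ~10 h cycle

# Remove the mean so the zero-frequency bin does not dominate the spectrum.
spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
freqs = np.fft.rfftfreq(t.size, d=dt_hours)  # cycles per hour

peak = freqs[np.argmax(spectrum[1:]) + 1]    # skip the DC bin
print(f"dominant period: {1 / peak:.1f} h")
```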

Model Performance

To enhance the CNN model's accuracy and generalization capability, we built the training set from manually annotated images. Additionally, we applied various image augmentations, including affine transformations, random cropping, and slight rotations, to artificially expand the dataset, resulting in a total of 6,464 images. Of these, 80% were randomly assigned to the training set and 20% to the validation set. After training the model for 80 epochs, the following results were achieved:

• Best Model Training Accuracy: 97.60%

• Best Model Validation Accuracy: 99.30%

• Best Model Training F1-Score: 0.976

• Best Model Validation F1-Score: 0.993

These results indicate that the model achieved high precision and generalization performance.
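For reference, the sketch below shows one way to express the augmentation and the 80/20 split with torchvision; the exact transform parameters and folder layout are assumptions, and the transforms are applied on the fly rather than pre-generating the expanded 6,464-image dataset.

```python
# Sketch of the augmentation and 80/20 split described above; transform
# parameters and the "dot_dataset/" layout (one subfolder per class) are assumptions.
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

augment = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.RandomAffine(degrees=5, translate=(0.05, 0.05)),  # affine / slight rotations
    transforms.RandomResizedCrop(64, scale=(0.9, 1.0)),          # random cropping
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("dot_dataset/", transform=augment)
n_train = int(0.8 * len(dataset))
train_set, val_set = random_split(
    dataset, [n_train, len(dataset) - n_train],
    generator=torch.Generator().manual_seed(0),  # reproducible split
)
```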

Future Work

While our model has successfully reduced manual effort and enhanced reproducibility and scalability for large datasets, there is still room for further improvement. Here are the key areas we plan to address:

1. Enhancing model adaptability to different image types

Our current use of YOLO allows images of varying sizes to be processed. However, detecting subtle differences between diagrams can still be challenging, potentially leading to segmentation errors and data loss. To address this, we will explore integrating deeper network architectures or attention mechanisms to improve feature representation and model performance.

2. Innovating evaluation techniques

We will develop a model capable of online updates and real-time user interaction to facilitate continuous learning and improvement, moving beyond traditional manual evaluation methods.

3. Meeting customized requirements

We will engage with microfluidics and fluorescence microscopy experts to further customize the model, ensuring it meets the specialized needs of these fields.

References

1. Redmon J, Farhadi A. YOLOv3: An Incremental Improvement. ArXiv [Internet]. 2018 Apr 8; Available from: https://www.semanticscholar.org/paper/YOLOv3%3A-An-Incremental-Improvement-Redmon-Farhadi/ebc96892b9bcbf007be9a1d7844e4b09fde9d961

2. Ultralytics. Object Detection Datasets Overview [Internet]. Available from: https://docs.ultralytics.com/datasets/detect

3. Stringer C, Wang T, Michaelos M, Pachitariu M. Cellpose: a generalist algorithm for cellular segmentation. Nat Methods. 2021 Jan;18(1):100–6.
