Software

Introduction


The software we created is an integral part of our hardware, PrymChip. It was written in Python to enable quantitative analysis of the green fluorescence signal detected in PrymChip (for more information, visit our hardware site). The software analyzes photos taken of the PrymChip's reaction chamber in the detection box, allowing measurement of the fluorescein concentration in the device. By taking a photo of the signal generated in the chamber and uploading it into our script, the user can estimate the concentration of fluorescein present in the detection box. The result is based on a calibration curve built from previously prepared and analyzed photos of various fluorescein solutions. Our work is based on the publication 'Machine Learning-Driven and Smartphone-Based Fluorescence Detection for CRISPR Diagnostic of SARS-CoV-2' [1].

Figure 1. PrymChip used for detecting the green fluorescence signal via a smartphone.


How Does the Software Work?


Libraries

We used the following libraries in our script:

Figure 2. Part 1 of the software.


Preprocessing of Images

Firstly, the images are prepared to enhance the detection of green fluorescence. Contrast enhancement is applied to the original images to make the green areas more prominent. Subsequently, Gaussian blur is used to smooth the images, reducing noise and improving the accuracy of green area detection. Finally, the images are converted from BGR to HSV color space, allowing for better color identification, as HSV separates color information from the intensity.

Figure 3. Part 2 of the software.


Creating a Mask

Next, the green areas in the photos are isolated and enhanced. A mask is created for the detected green areas based on a specified range of pixel values that correspond to the green color. This is a binary operation where green areas are marked as white, while all other areas are black. Morphological operations called closing and opening are applied to fill small holes in the white regions of the mask, effectively connecting nearby green areas that may have been separated due to noise. Additionally, small objects are removed from the mask to clean up the edges, ensuring that only significant green areas were highlighted. Finally, the cleaned mask is used to extract the corresponding pixels from the original image, retaining only the areas that represent green fluorescence.

Figure 4. Part 3 of the software.


Extracting Pixel Values

Here, green pixels are isolated in the HSV image, retaining only the areas corresponding to green fluorescence. The average pixel values of the extracted green pixels are computed to obtain the mean color values for these areas. The intensity of the green channel from the mean value is returned and subsequently used for analysis or prediction. Additionally, the images are cropped so that they can be easily seen on a screen.

Figure 5. Part 4 of the software.
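The intensity extraction can be sketched as below. The function name is an assumption, and we average in BGR space here for simplicity; the script may equally average in HSV before reading off the green channel:

```python
import numpy as np

def green_intensity(bgr, mask):
    """Mean green-channel value of the pixels inside the mask."""
    pixels = bgr[mask > 0]            # only pixels flagged as green
    if pixels.size == 0:
        return 0.0                    # no green fluorescence detected
    mean_bgr = pixels.mean(axis=0)    # average B, G, R values
    return float(mean_bgr[1])         # green channel is index 1 in BGR
```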


Preparing the Data

This section specifies the directory containing the images used for training the model. Known fluorescein concentrations corresponding to these images are provided. The list of intensities corresponds to the fluorescein concentrations, while the list of features stores the green channel intensity for each image. The images are resized for display purposes, allowing for better visibility on the screen.

Figure 6. Part 5 of the software.
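An illustrative setup for the training data is sketched below. The directory name, file-naming pattern, and concentration values are placeholders, not our actual calibration set:

```python
import os

image_dir = "calibration_photos"                      # placeholder directory
concentrations = [100.0, 50.0, 25.0, 12.5, 6.25, 3.13]  # µM, one per photo
image_paths = [os.path.join(image_dir, f"sample_{i}.jpg")
               for i in range(len(concentrations))]
features = []                  # will hold one green-channel intensity per image
intensities = concentrations   # known concentrations used as training targets
```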


Preparing Images

In this step, a series of images is processed to detect green fluorescence and extract the corresponding concentrations, preparing the training data. Six images from the calibration curve are read and preprocessed to create a mask and calculate the intensity of the green channel in the detected areas. The known concentration for each image is stored alongside the detected intensity.

Figure 7. Part 6 of the software.


Polynomial Regression

This section trains a polynomial regression model and defines a function to classify fluorescence in new images. The features and intensities arrays are used as inputs for the polynomial regression model. The model is fitted with the training data, allowing it to learn the relationship between green fluorescence intensity and known concentrations.

The classify_fluorescence function takes an image as input, preprocesses it to detect green fluorescence, and extracts the intensity of the green channel. Using the trained model, it predicts the concentration of green fluorescence based on the extracted intensity.

Figure 8. Part 7 of the software.
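The regression step can be sketched as below. The polynomial degree (2) is an assumption, and the training numbers in the usage example are synthetic placeholders, not our calibration data:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

def train_model(features, concentrations, degree=2):
    """Fit a polynomial regression of concentration against green intensity."""
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(np.asarray(features).reshape(-1, 1), np.asarray(concentrations))
    return model

def classify_fluorescence(model, intensity):
    """Predict fluorescein concentration from a green-channel intensity."""
    return float(model.predict([[intensity]])[0])
```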


Testing Images

Finally, the fluorescence detection is tested on images with unknown fluorescein concentrations. The location of the tested image is specified, and the classify_fluorescence function is used to process the image, detect fluorescence, and predict the concentration of fluorescein based on the detected green signal. The result is printed with two decimal places, and the image is displayed in a separate window.

Figure 9. Part 8 of the software.


Experimental Work

The software was validated through experimental work by preparing a series of fluorescein concentrations and placing them in the PrymChip to excite their green fluorescence in the detection box. A series of photos was then taken, corresponding to fluorescein solutions prepared in PBS, with concentrations ranging from 0.78 µM to 100 µM, including a blank sample containing only PBS solution. These results are presented in Figure 10 and Figure 11.

Testing the software on fluorescein solutions was the most convenient and reliable option, as the excitation (~490 nm) and emission (~520 nm) maxima suggest that fluorescein or one of its derivatives is used in the RNase Alert reporter normally utilized in the SHERLOCK tests conducted by our team.

Figure 10. Series of fluorescein solutions and PBS solution. Photos taken by a Samsung smartphone.


Figure 11. Series of fluorescein solutions and PBS solution. Photos taken by a Google Pixel smartphone.


Additionally, three different concentrations of fluorescein were prepared to assess whether the software can accurately detect the corresponding fluorescein concentrations. Photos of the samples were taken and are presented in Figure 12 and Figure 13.

Figure 12. Test samples. (A) 8 µM solution. (B) 2 µM solution. (C) 1 µM solution. Photos taken by a Samsung smartphone.


Figure 13. Test samples. (A) 8 µM solution. (B) 2 µM solution. (C) 1 µM solution. Photos taken by a Google Pixel smartphone.


The software performed effectively, detecting the signal present in the photos. Since the two series of fluorescein concentrations were photographed with different phones, the three test concentrations were analyzed separately against the calibration curve of the phone that captured them. The lowest fluorescein concentrations were not detected by the software (Figure 14). For the Samsung calibration curve, six photos were used (100 µM to 3.13 µM), while for the Google Pixel, only five (100 µM to 6.25 µM). The results of the analysis are gathered in Table 1.


Table 1. Concentrations of the tested fluorescein solutions and the results obtained from the calibration curves using different phones.

Real fluorescein concentration (µM) | Concentration detected by Samsung phone (µM) | Concentration detected by Google Pixel phone (µM)
8                                   | 7.22                                         | 0
2                                   | 2.84                                         | 0
50                                  | 53.60                                        | 42.88


Figure 14. Signal detected on the Samsung photos used for calibration (100 µM - 3.13 µM).


The results show that the software analyzed the Samsung photos more accurately, with the measured fluorescein concentrations closely matching the actual values, whereas the photos from the Google Pixel were less accurate. The fit of the regression to the Samsung phone data was 0.84, while for the Google Pixel it was only 0.53. The Google Pixel struggled to detect the green fluorescence at lower fluorescein concentrations. This demonstrates that while the software can be effective and reliable, its performance depends on the phone and camera settings used for analysis.


Other Projects


The software is undoubtedly useful for other projects, especially those based on our PrymChip. By modifying the design of our device, such as changing the LEDs and filter colors in the detection box, the software can be adapted for testing fluorescence other than the green signal. Additionally, the fluorescein solution can be replaced with any other chemical that emits green fluorescence. By preparing a custom calibration curve from photos registered in the PrymChip, the software can then be used to assess the concentration of that chemical in unknown samples.

Furthermore, by enhancing our environmental detection method and implementing the SHERLOCK detection system, the software could be used to determine the concentration of Prymnesium parvum in water samples based on the emitted fluorescence signal. Instead of focusing solely on Prymnesium parvum, the SHERLOCK detection system could also be designed to detect different species in environmental samples.


Integration with Other Tools


The software can easily be integrated with external tools and applications. It already utilizes external libraries such as OpenCV for image processing, NumPy for numerical operations, and Scikit-learn for machine learning. These examples illustrate how the code can work with various external packages.

Additionally, the software could be extended to use APIs (web services or data sources), allowing connections to online image sources or cloud services for processing and predictions. This would be particularly beneficial in developing a mobile app for users to analyze their photos using our software.

This flexibility means the software can interact with other tools or systems, making it adaptable and easy to expand.


User Experience Testing


The software can easily be modified by individuals with a basic understanding of Python programming. However, for those who lack programming experience, modifying the script and testing photos with it can be challenging without proper instructions. The software was initially intended to be integrated into a mobile app written in Kivy. After receiving feedback at the SEMPOWISKO 2024 conference, we learned that potential users preferred the option to modify the code directly to insert their photos.

It is important to note that many attendees we spoke to at the conference had not tested the functional device themselves and likely had some programming experience. Conversely, when the fully functional device was tested by a biologist, Dr. Grzegorz Bereta (see our Human Practices site), he found it difficult to change the calibration curve photos taken with his smartphone and to measure the concentrations from three fluorescein samples not included in the curve.

He acknowledged that the app we envisioned would be a valuable tool for biologists, as it would allow users to simply download the app, upload a photo from their phone gallery, and receive the resulting fluorescein concentration. Transferring photos from a smartphone to a laptop or PC is often time-consuming and cumbersome.


Contribution to Future iGEM Teams


The software was designed to contribute to future iGEM teams, enabling them to use it freely with our hardware device. The complete script is available in the iGEM repository. Within the scripts, there are numerous comments that help users easily understand the purpose of each line of code.

The software is also readily adjustable for photos taken in the PrymChip by other teams. By simply changing the concentrations in the prepared calibration curve and inserting their own photos, users can obtain a fully prepared script for analyzing their images and assessing the concentrations of samples that were not included in the calibration photos but fall within the calibration range.


Future of PrymChip Software


We believe that our software is an integral part of the PrymChip and can be genuinely useful for future iGEM teams in developing experiments based on fluorescence measurements. It can be easily modified by changing the script or by substituting the photos used for training the model, such as those taken with a different phone model.

There is also potential for further improvements by adapting the SHERLOCK detection system within the PrymChip. With a few additions to the script, it could not only display the detected fluorescence signal but also, using appropriate calibration curve photos, determine the concentration of Prymnesium parvum in the samples—especially when employing the environmental isolation and detection method. This functionality could be extended to detect any other organism for which the SHERLOCK method has been designed.

One of the crucial improvements would be the development of a mobile app for this software.


References


[1] Samacoits A, Nimsamer P, Mayuramart O, Chantaravisoot N, Sitthi-amorn P, Nakhakes C, Luangkamchorn L, Tongcham P, Zahm U, Suphanpayak S, Padungwattanachoke N, Leelarthaphin N, Huayhongthong H, Pisitkun T, Payungporn S, Hannanta-anan P. Machine learning-driven and smartphone-based fluorescence detection for CRISPR diagnostic of SARS-CoV-2. ACS Omega. 2021;6(4):2727-2733. doi: 10.1021/acsomega.0c04929.
