Sumona Biswas’s research while affiliated with Indian Institute of Information Technology Guwahati and other places


Publications (7)


Feature Fusion GAN Based Virtual Staining on Plant Microscopy Images
  • Article

March 2024 · 5 Reads · 2 Citations

IEEE/ACM Transactions on Computational Biology and Bioinformatics

Sumona Biswas

Virtual staining of microscopy specimens using GAN-based methods could resolve critical concerns of the manual staining process, as shown in recent studies on histopathology images. However, most of these works use a basic GAN framework that ignores microscopy image characteristics, and their performance was evaluated only with structural and error statistics (SSIM and PSNR) between synthetic and ground-truth images, without considering any color space, even though virtual staining is fundamentally a color transformation. Moreover, major aspects of staining, such as color, contrast, focus, and image realness, were ignored entirely. Modifying the GAN architecture to incorporate microscopy image features may therefore be better suited to virtual staining, and its implementation needs to be examined across the various aspects of the staining process. We therefore designed a new feature-fusion GAN for virtual staining and assessed its performance with a multi-evaluation framework that includes qualitative metrics (based on histogram correlation of color and brightness), quantitative metrics (SSIM and PSNR), focus aptitude (Brenner metrics and spectral moments), and influence on perception (semantic perceptual influence score). For experimental validation, cell boundaries were highlighted by two different staining reagents, Safranin-O and Toluidine Blue-O, on plant microscopy images of potato tuber. We evaluated virtually stained image quality against ground truth in the RGB and YCbCr color spaces using the defined metrics, and the results are very consistent. The impact of feature fusion has also been demonstrated. Collectively, this study could serve as a baseline for guiding architectural upgrades of deep pipelines for virtual staining of diverse microscopy modalities, followed by future benchmark methodologies or protocols.
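
As a concrete illustration of the qualitative side of such a framework, the sketch below computes a histogram correlation between two image channels in pure Python. The bin count, function names, and toy pixel values are illustrative assumptions, not the authors' implementation.

```python
def channel_histogram(pixels, bins=16, max_val=255):
    """Quantize one channel's intensities into a normalized histogram."""
    hist = [0] * bins
    for v in pixels:
        hist[min(v * bins // (max_val + 1), bins - 1)] += 1
    total = sum(hist)
    return [h / total for h in hist]

def histogram_correlation(pix_a, pix_b, bins=16):
    """Pearson correlation between two channel histograms (1.0 = identical)."""
    ha, hb = channel_histogram(pix_a, bins), channel_histogram(pix_b, bins)
    ma, mb = sum(ha) / bins, sum(hb) / bins
    num = sum((a - ma) * (b - mb) for a, b in zip(ha, hb))
    den = (sum((a - ma) ** 2 for a in ha)
           * sum((b - mb) ** 2 for b in hb)) ** 0.5
    return num / den if den else 0.0

# a channel compared with itself correlates perfectly
ground_truth = [10, 40, 60, 90, 200, 230]
assert abs(histogram_correlation(ground_truth, ground_truth) - 1.0) < 1e-9
```

The same comparison can be run per channel in RGB and again after a YCbCr conversion, which is the kind of cross-color-space consistency check the abstract describes.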


A Low-Cost Vegetable Quality Assessment System Based on Microscopy Images in Deep Learning Edge Computing: A Pilot Study on Potato Tuber

January 2024

IEEE Transactions on Consumer Electronics

This work details the design and development of a microscopy image-based vegetable quality assessment system (prototype) using a deep learning (DL) technique on an edge device. Current automated machine learning methods primarily use outer-surface images of vegetables/fruits and often lack precise quantification of nutrient content such as carbohydrates, minerals, and vitamins. Such nutrient ingredients can instead be assessed by examining micro-level cell attributes in microscopy images within a DL framework. However, microscopy/DL-based vegetable quality detection on resource-constrained edge devices poses significant challenges. To address these problems, a portable, cost-effective, efficient, real-time prototype has been realized. It comprises a microscopy image generation module using a low-cost Foldscope lens coupled with a smartphone, and on-device analysis via a new lightweight DL architecture and segmentation algorithm. The analysis runs in a smartphone application, which provides bandwidth and energy efficiency, user privacy, and local processing without external servers. For system validation, a pilot study has been conducted on the widely consumed potato tuber, with starch presence as the key quality metric. The system assesses cell attributes, i.e., a starch quantity of 10–25%, in ~24 s with highly consistent results. In a comparative study, the network outperforms existing state-of-the-art lightweight networks, achieving the highest recognition accuracy of up to 88.8% and an F1-score of 85.83 with fewer parameters (1.5M) and FLOPs (118M). The study thus demonstrates the system's applicability for vegetable quality assessment in an easy, affordable, and effective way, and the proposed idea can be extended to other vegetables/fruits.
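
The starch-quantity figure reported above ultimately reduces to counting segmented pixels. A minimal sketch of that final step, assuming a hypothetical binary mask produced by the segmentation algorithm:

```python
def starch_percentage(mask):
    """Percentage of pixels flagged as starch in a binary segmentation mask."""
    flagged = sum(sum(row) for row in mask)
    total = sum(len(row) for row in mask)
    return 100.0 * flagged / total

# hypothetical 3x4 mask from the segmentation step: 3 of 12 pixels are starch
mask = [[0, 1, 1, 0],
        [0, 0, 1, 0],
        [0, 0, 0, 0]]
assert starch_percentage(mask) == 25.0
```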


MicrosMobiNet: A Deep Lightweight Network With Hierarchical Feature Fusion Scheme for Microscopy Image Analysis in Mobile-Edge Computing

January 2023 · 11 Reads · 2 Citations

IEEE Internet of Things Journal

In recent lightweight deep architectures for edge devices, most works follow the typical MobileNet pipeline designed for general computer vision tasks, which is not well suited to microscopy image analysis. Indeed, a dedicated lightweight network for highly complex microscopy image analysis has not been attempted so far. This work therefore proposes a new deep lightweight network, "MicrosMobiNet", with a multi-scale feature extraction mechanism for bright-field microscopy image analysis in a mobile-edge computing framework. It has three key attributes: depth-wise separable convolution to keep the network lightweight, multiple kernels with hierarchical feature fusion to extract complex features, and residual connections to keep the network deep. Experimental validation has been conducted on two microscopy image datasets, plant (potato tuber) and histopathology (cancer cell), generated by two different image acquisition modalities. Multi-class and multi-label classification tasks have been evaluated by measuring accuracy, F1-score, and error, and an ablation study verifies the network's key attributes. The results show that MicrosMobiNet achieves classification accuracy of up to 98.43% and 96.25% for plant and cancer cells, with minimum errors of 8.38% and 10.03%, respectively. In a comparative study, MicrosMobiNet outperforms existing lightweight state-of-the-art methods with fewer parameters (1.9M) and a lower FLOPs count (42M). Finally, the network has been deployed on an edge device, a smartphone (Android platform), where it runs satisfactorily at high speed (140 ms) and with very low memory use (7.4 MB). Hence, the network demonstrates its suitability for bright-field microscopy image analysis on mobile-edge computing platforms in a lightweight deep learning framework.
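
The parameter savings that depth-wise separable convolution provides can be checked with simple arithmetic; the layer sizes below are illustrative, not taken from MicrosMobiNet:

```python
def standard_conv_params(k, c_in, c_out):
    """Weight count of a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """One depthwise k x k filter per input channel, plus a 1 x 1 pointwise projection."""
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 64, 128)        # 73,728 weights
sep = depthwise_separable_params(3, 64, 128)  # 576 + 8,192 = 8,768 weights
assert (std, sep) == (73728, 8768)
```

The roughly 8x reduction for this toy layer shows why the design keeps the total parameter count low while leaving the receptive field unchanged.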


A Large-Scale Fully Annotated Low-Cost Microscopy Image Dataset for Deep Learning Framework

July 2021 · 32 Reads · 4 Citations

IEEE Transactions on NanoBioscience

This work presents a large-scale, three-fold annotated, low-cost microscopy image dataset of potato tubers for plant cell analysis in a deep learning (DL) framework, with significant potential to advance plant cell biology research. Low-cost microscopes coupled with new-generation smartphones could open new avenues in DL-based microscopy image analysis, offering benefits that include portability, ease of use, and easy maintenance. However, successful application demands a large number of properly annotated, diverse microscopy images, a need that has not been adequately addressed and that constrains advanced image-processing-based plant cell research. Therefore, in this work, a low-cost microscopy image database of potato tuber cells with a total of 34,657 images has been generated using a Foldscope (costing around 1 USD) coupled with a smartphone. The dataset includes 13,369 unstained and 21,288 stained (safranin-o, toluidine blue-o, and lugol's iodine) images with three-fold annotation based on weight, section areas, and tissue zones of the tubers. The physical image quality (e.g., contrast, focus, geometrical attributes) and its applicability in the DL framework (CNN-based multi-class and multi-label classification) have been examined, and the results are compared with a traditional microscope image set. The results show that the dataset is highly compatible with the DL framework.


Demonstration of potato tuber anatomy, sample preparation, and the image acquisition setup for microstructure visualization: (a) A potato tuber sample. (b) Longitudinal cross-section of a tuber. The samples have been divided into three parts, named Z1, Z2, and Z3, nearest the bud, middle, and stem respectively, as indicated by dotted lines for microscopic observation. (c) Transverse cross-sections of the tuber, where the sample collection areas, inner and outer core, are highlighted by red circles. (d) Tissue samples have been collected from the specified zones using a cork borer of 4 mm diameter. (e) Thin free-hand unstained sections have been obtained. The stained samples have been prepared using safranin-o (1%), toluidine blue-o (0.05%), and lugol's iodine. (f) Image capture setup, in which the smartphone camera is fixed on the microscope eyepiece with an adaptor. The two types of microscopy images, unstained and stained, have been captured independently without drying the sections.
A schematic diagram displaying inner and outer core cell characteristics of a potato tuber.
Example unstained and stained images of large (80–100 g) potato tubers. Rows and columns indicate tissue zones (inner and outer core) and staining agents, respectively. The first column shows the unstained images, while the subsequent columns show images stained with safranin-o, toluidine blue-o, and lugol's iodine. The images are from the (a) bud region (Z1), (b) middle region (Z2), and (c) stem region (Z3). Note: a scale bar appears at the top-left corner of each unstained image.
Example unstained and stained images of small (15–25 g) potato tubers. Rows and columns indicate tissue zones (inner and outer core) and staining agents, respectively. The first column shows the unstained images, while the subsequent columns show images stained with safranin-o, toluidine blue-o, and lugol's iodine. The images are from the (a) bud region (Z1), (b) middle region (Z2), and (c) stem region (Z3). Note: a scale bar appears at the top-left corner of each unstained image.
Steps involved in generating the ground-truth segmentation labels for the inner core tissues. The original images are pre-processed using the rolling-ball algorithm and bandpass filtering. Next, adaptive thresholding is applied to obtain binary images. Morphological operations are then performed to refine the cell boundaries and remove the starch granules. By varying fcl, fch, and R at the pre-processing step, candidate binary images are generated, and the best one is selected for manual correction.
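
A minimal stand-in for the adaptive-thresholding step, using a local-mean rule on a toy grid (the caption's pre-processing parameters fcl, fch, and R are not modeled here):

```python
def adaptive_threshold(img, window=3, offset=0):
    """Binarize each pixel against the mean of its window x window neighborhood.

    `img` is a 2-D list of grayscale intensities; border pixels use a
    clipped neighborhood.
    """
    h, w = len(img), len(img[0])
    r = window // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[ny][nx]
                    for ny in range(max(0, y - r), min(h, y + r + 1))
                    for nx in range(max(0, x - r), min(w, x + r + 1))]
            local_mean = sum(vals) / len(vals)
            out[y][x] = 1 if img[y][x] > local_mean + offset else 0
    return out

img = [[10, 10, 10],
       [10, 200, 10],
       [10, 10, 10]]
binary = adaptive_threshold(img)
assert binary[1][1] == 1           # the bright pixel survives thresholding
assert sum(map(sum, binary)) == 1  # the flat background is suppressed
```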


A large-scale optical microscopy image dataset of potato tuber for deep learning based plant cell assessment
  • Article
  • Full-text available

October 2020 · 6,135 Reads · 18 Citations

Scientific Data

We present a new large-scale, three-fold annotated microscopy image dataset aimed at advancing plant cell biology research by exploring different cell microstructures, including cell size and shape, cell wall thickness, and intercellular space, in a deep learning (DL) framework. The dataset includes 9,811 unstained and 6,127 stained (safranin-o, toluidine blue-o, and lugol's-iodine) images with three-fold annotation covering physical, morphological, and tissue grading, based on weight, section area, and tissue zone respectively. In addition, we prepared ground-truth segmentation labels for three different tuber weights. We validated the pertinence of the annotations by performing multi-label cell classification with a convolutional neural network (CNN), VGG16, on unstained and stained images. Accuracy reaches up to 0.94, while the F2-score reaches 0.92. Furthermore, the ground-truth labels have been verified by a semantic segmentation algorithm using the UNet architecture, which achieves a mean intersection over union of up to 0.70. Hence, the overall results show that the data are highly usable and could enrich the domain of DL-based microscopy plant cell analysis.
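
The reported F2-score and mean intersection over union follow standard definitions; a minimal sketch with illustrative counts, not the paper's actual confusion matrix:

```python
def fbeta_score(tp, fp, fn, beta=2.0):
    """F-beta score; beta=2 weights recall twice as heavily as precision."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

def iou(pred, gt):
    """Intersection over union of two flat binary masks."""
    inter = sum(1 for p, g in zip(pred, gt) if p and g)
    union = sum(1 for p, g in zip(pred, gt) if p or g)
    return inter / union

# illustrative counts only
assert 0.93 < fbeta_score(90, 10, 5) < 0.94
assert abs(iou([1, 1, 0, 0], [1, 0, 1, 0]) - 1 / 3) < 1e-9
```

Mean IoU, as reported for the UNet verification, is simply this per-class IoU averaged over classes.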


Usability of Foldscope in Food Quality Assessment Device

November 2019 · 71 Reads · 1 Citation

Lecture Notes in Computer Science

This work focuses on quality assessment of agricultural products based on microscopic images generated by the Foldscope. Microscopic image-based food quality assessment is an efficient method, but its system complexity, cost, bulky size, and the special expertise it requires limit its usability. To counter these issues, the Foldscope, which is small, lightweight, cheap, and easy to use, has been evaluated as a food quality assessment device. For this purpose, measuring the starch content of potato was selected as the compatibility check: microscopic images were taken with two imaging modalities, a conventional microscope and the Foldscope, and the results were compared. Image processing techniques, morphological filtering followed by Otsu's method, were employed to detect starch efficiently. In total, 20 images from each system were captured. Following the experiment, the starch presence (in %) estimated from the microscope and Foldscope images is 23.50 ± 0.79 and 24.29 ± 0.73 respectively, which is consistent. These results reveal that the Foldscope can be used in a food quality assessment system, which could make such devices simple, portable, and handy.
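
Otsu's method itself is a standard histogram-based threshold search; a self-contained sketch on toy bimodal data (the pixel values are illustrative, not from the paper):

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_b = 0      # background pixel count
    sum_b = 0.0  # background intensity sum
    for t in range(levels):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b                   # background mean
        m_f = (total_sum - sum_b) / w_f     # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# bimodal toy data: dark background around 20, bright starch grains around 200
pixels = [18, 20, 22, 19, 21] * 4 + [198, 200, 202, 199] * 3
t = otsu_threshold(pixels)
assert 22 <= t < 198                 # the threshold lands between the modes
# pixels above the threshold count as starch
assert 100 * sum(v > t for v in pixels) / len(pixels) == 37.5
```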


Citations (4)


... where SSIM_Y, SSIM_Cb, and SSIM_Cr are the SSIM values between H&E and PHH3 images for the luminance, blue-difference, and red-difference chroma components, respectively [42]. In the case of perfectly registered images, the Y-biased weighted SSIM is equal to 1.0, since its definition is a linear convex combination of SSIM values. ...
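
The convex-combination property the excerpt describes is easy to verify numerically; the weights below are illustrative assumptions, since the excerpt does not give the actual values:

```python
def y_biased_ssim(ssim_y, ssim_cb, ssim_cr, w_y=0.8, w_cb=0.1, w_cr=0.1):
    """Convex combination of per-channel SSIM values, biased toward luminance."""
    assert abs(w_y + w_cb + w_cr - 1.0) < 1e-9  # convexity: weights sum to 1
    return w_y * ssim_y + w_cb * ssim_cb + w_cr * ssim_cr

# perfectly registered images: every per-channel SSIM is 1, so the score is 1
assert abs(y_biased_ssim(1.0, 1.0, 1.0) - 1.0) < 1e-12
```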

Reference:

Computational Synthesis of Histological Stains: A Step Toward Virtual Enhanced Digital Pathology
Feature Fusion GAN Based Virtual Staining on Plant Microscopy Images
  • Citing Article
  • March 2024

IEEE/ACM Transactions on Computational Biology and Bioinformatics

... The need for usable microscopes under a wide variety of circumstances led to the creation of the Foldscope ( Figure 2) at a Stanford University (California) laboratory. This "origami microscope" simplifies all the parts of a conventional microscope, which allows it to be affordable and accessible outside of hospitals and research centers [11,12,18]. The Foldscope was first presented in 2014, and its use in biomedical sciences has become so popular that nowadays, more than 1 million Foldscopes have been sold. ...

A Large-Scale Fully Annotated Low-Cost Microscopy Image Dataset for Deep Learning Framework
  • Citing Article
  • July 2021

IEEE Transactions on NanoBioscience

... In biological microscopy, deep learning has demonstrated promising performance in a range of segmentation applications including semantic segmentation of human oocyte (Targosz et al., 2021), semantic and instance segmentation for cell nuclei (Caicedo et al., 2019) and semantic segmentation of potato tuber (Biswas and Barma, 2020). Examples of plant phenotyping applications include semantic and instance segmentation for plant leaf detection and counting (Aich and Stavness, 2017; Giuffrida et al., 2018; Itzhaky et al., 2018; Jiang et al., 2019; Fan et al., 2022), semantic and instance segmentation for crop phenotyping (Jiang and Li, 2020), grapevine leaf semantic segmentation (Tamvakis et al., 2022), barley seed detection from instance segmentation (Toda et al., 2020) and many other applications (Kolhar and Jagtap, 2023). ...

A large-scale optical microscopy image dataset of potato tuber for deep learning based plant cell assessment

Scientific Data

... Hdioud Boutaina [6] et al. detected shadows in the HSV color space using dynamic thresholds. In addition, there are a series of image enhancement algorithms based on HSV space [7][8][9][10][11][12][13] which are frequently mentioned. However, due to the need to process large batches of data and the high demands on the speed of the algorithms with limited computational resources, especially for real-time, the interconversion between different colour spaces can no longer be limited to traditional algorithms alone. ...
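
The RGB/HSV interconversion the excerpt refers to is available in Python's standard library via `colorsys`; a small sketch of a value-channel enhancement in HSV space (the 1.2 gain is an arbitrary illustration):

```python
import colorsys

# colorsys works on floats in [0, 1]; scale 8-bit RGB accordingly
def rgb8_to_hsv(r, g, b):
    """Convert an 8-bit RGB pixel to HSV (h, s, v each in [0, 1])."""
    return colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)

def hsv_to_rgb8(h, s, v):
    """Inverse conversion back to 8-bit RGB."""
    r, g, b = colorsys.hsv_to_rgb(h, s, v)
    return round(r * 255), round(g * 255), round(b * 255)

# pure red: hue 0, full saturation, full value
assert rgb8_to_hsv(255, 0, 0) == (0.0, 1.0, 1.0)

# boosting only the value channel brightens a pixel without shifting its hue,
# the property HSV-space enhancement relies on
h, s, v = rgb8_to_hsv(120, 60, 30)
assert hsv_to_rgb8(h, s, min(1.0, v * 1.2))[0] > 120
```

Per-pixel round-trips like this are what becomes the bottleneck on large batches, motivating the excerpt's point about moving beyond traditional conversion algorithms for real-time use.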

Foldscope Image Enhancement in HSV Space by PSO Optimization Technique
  • Citing Conference Paper
  • November 2019