Article

Textural Features for Image Classification

Authors:
  • Teachers Training Institute of Penang Campus

... Haralick texture features, derived from the gray-level cooccurrence matrix (GLCM) [64,67], play a crucial role in our methodology. For each pixel in the CT image, we compute the GLCM, capturing the frequency of co-occurring intensities at a specified distance and orientation. ...
... The texture of a lung nodule in CT images encapsulates critical information regarding its heterogeneity, which can be indicative of its nature and potential malignancy [67,69,74]. To systematically quantify these textural patterns, we employ two principal statistical methods: the Gray Level Cooccurrence Matrix (GLCM) and the Gray Level Run Length Matrix (GLRLM). ...
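As a rough illustration of the GLCM construction these excerpts describe (counting how often gray-level pairs co-occur at a given pixel offset), here is a minimal NumPy sketch for a single offset; the function name, offset convention, and toy image are illustrative, not taken from the cited papers:

```python
import numpy as np

def glcm(image, levels, offset=(0, 1), normalize=True):
    """Gray-level co-occurrence matrix for one (row, col) offset."""
    img = np.asarray(image, dtype=np.intp)
    dr, dc = offset
    rows, cols = img.shape
    # Reference pixels and their neighbors displaced by (dr, dc).
    ref = img[max(0, -dr):rows - max(0, dr), max(0, -dc):cols - max(0, dc)]
    nbr = img[max(0, dr):rows - max(0, -dr), max(0, dc):cols - max(0, -dc)]
    m = np.zeros((levels, levels), dtype=float)
    # Count each (reference level, neighbor level) pair.
    np.add.at(m, (ref.ravel(), nbr.ravel()), 1)
    if normalize:
        m /= m.sum()
    return m
```

Per-pixel GLCMs, as in the nodule pipeline above, would apply the same counting inside a window centered on each pixel, typically accumulated over several distances and orientations.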
Article
Full-text available
Lung cancer remains a leading cause of global cancer mortality, demanding improved early detection and personalized therapies. This study introduces an innovative framework integrating deep learning (DL), radiomics, and explainable AI (XAI) to significantly advance lung nodule analysis and predict actionable genetic mutations. We synergize advanced neural networks (U-Net, DeepLabV3 for segmentation; YOLOv7 for detection) with comprehensive radiomic features (texture, shape, intensity) to achieve state-of-the-art performance. Our approach yields a segmentation Dice coefficient of 93.5% and accuracy of 98.5%, outperforming existing methods by 5-8%, while effectively reducing false positives. Crucially, we leverage XAI, specifically Grad-CAM visualizations with ResNet models, to link nodule morphology, particularly edge-associated vascular patterns, to clinically relevant EGFR and KRAS mutations. This provides unprecedented transparency, bridging AI-driven image analysis with underlying oncogenic mechanisms. Our ResNet-based classifiers achieve high accuracy for mutation prediction (97.6% EGFR, 97.7% KRAS), validated through stratified 10-fold cross-validation. By demystifying model decisions, XAI fosters clinical trust and empowers clinicians to correlate imaging phenotypes with genotypic alterations, paving the way for more precise diagnostics and treatment planning. We also discuss model fairness, computational efficiency, and outline clear future directions for clinical translation.
... Thirteen Haralick features (angular second moment, contrast, correlation, sum of squares variance, inverse difference moment, sum average, sum variance, difference variance, sum entropy, difference entropy, entropy, information measure of correlation 1 and information measure of correlation 2) were measured to evaluate the quality of interpolated images [42,43]. The results of each score were averaged for the different tested scenarios (authentic, InterpolAI skip1, InterpolAI skip3, InterpolAI skip7, linear skip1, linear skip3, linear skip7, XVFI skip1, XVFI skip3 and XVFI skip7) (Supplementary Table 1), which allowed for principal component analysis (PCA) to be carried out (Fig. 2e). ...
... Thirteen Haralick texture features were calculated to provide a quantitative representation of the texture patterns within an image, offering insights into their spatial arrangements and relationships [42,43]. The 13 features measure the angular second moment, contrast, correlation, sum of squares variance, inverse difference moment, sum average, sum variance, difference variance, sum entropy, difference entropy, entropy, information measure of correlation 1 and information measure of correlation 2 (refs. ...
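Three of the thirteen features listed above (angular second moment, contrast, and entropy) can be computed from a normalized GLCM as follows; this is a minimal sketch assuming the matrix P sums to one, not the cited implementation (log base 2 is one common convention, but it varies between libraries):

```python
import numpy as np

def haralick_subset(P, eps=1e-12):
    """Three of Haralick's thirteen features from a normalized GLCM P."""
    P = np.asarray(P, dtype=float)
    i, j = np.indices(P.shape)
    asm = np.sum(P ** 2)                     # angular second moment: uniformity
    contrast = np.sum((i - j) ** 2 * P)      # weight by squared level difference
    entropy = -np.sum(P * np.log2(P + eps))  # randomness of pixel-pair levels
    return asm, contrast, entropy
```

A GLCM concentrated on a single diagonal cell yields maximal uniformity and zero contrast and entropy; a flat GLCM yields the opposite extremes.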
Article
Full-text available
Recent advances in imaging and computation have enabled analysis of large three-dimensional (3D) biological datasets, revealing spatial composition, morphology, cellular interactions and rare events. However, the accuracy of these analyses is limited by image quality, which can be compromised by missing data, tissue damage or low resolution due to mechanical, temporal or financial constraints. Here, we introduce InterpolAI, a method for interpolation of synthetic images between pairs of authentic images in a stack of images, by leveraging frame interpolation for large image motion, an optical flow-based artificial intelligence (AI) model. InterpolAI outperforms both linear interpolation and state-of-the-art optical flow-based method XVFI, preserving microanatomical features and cell counts, and image contrast, variance and luminance. InterpolAI repairs tissue damages and reduces stitching artifacts. We validated InterpolAI across multiple imaging modalities, species, staining techniques and pixel resolutions. This work demonstrates the potential of AI in improving the resolution, throughput and quality of image datasets to enable improved 3D imaging.
... Despite these significant technological advances, the GEE platform has not yet fully supported all the requirements of SAR image processing, especially in the context of ice mapping. The present study addresses this limitation by integrating Google Colaboratory (Google Colab) and Python Application Programming Interfaces (APIs) to enhance the Sentinel-1 Gray Level Co-occurrence Matrix (GLCM) texture (Haralick et al. 1973) analysis and classification capabilities within GEE. Incorporating the SAR GLCM texture analysis into ice monitoring is particularly important to enhance detecting ice type features (Clausi 1996; Leigh et al. 2013; De Roda Husman et al. 2021). ...
... This texture analysis plays an important role in ice studies because it can discern ice surface variations, which may not be easily discernible through the standard SAR backscatter imagery alone (Barber and Ledrew 1991; Leigh et al. 2013). GLCM analysis quantifies the frequency of occurrence of pixel pair values within a specified spatial relationship, thereby creating a collection of the varying combinations of gray level intensities across the image pixels (Haralick et al. 1973; Soh and Tsatsoulis 1999). This statistical data forms a matrix representing the pixel pairs' spatial dependencies. ...
Article
Full-text available
In regions where ice coverage significantly impacts local economies and daily life, the continuous mapping of ice types is indispensable. This study focuses on developing an automated methodology for monitoring freshwater ice, which is crucial for ensuring the safety of winter activities and transportation. The proposed processing workflow integrates cloud computing capabilities utilizing services from Google Earth Engine (GEE) and Google Colab, along with Synthetic Aperture Radar (SAR) imagery from the open Sentinel-1 collection. Despite GEE’s extensive remote sensing capabilities, it lacks built-in support for Grey Level Co-occurrence Matrix (GLCM) texture calculations. Our approach addresses this gap by incorporating GLCM data analysis and clustering techniques into the GEE workflow. The methodology employs Sentinel-1 backscatter information and GLCM texture analysis within the GEE Python API framework to enhance the ice condition monitoring. A key component of this approach is the separability analysis, which identifies the most effective GLCM parameters for distinguishing different types of ice. The classification of freshwater ice types using Sentinel-1 C-band VV backscattering and GLCM texture features provided valuable insights into the challenges of ice classification. The unsupervised model achieved an overall accuracy of 79%, demonstrating good performance in distinguishing between freshwater ice types. We demonstrate the practical application of this methodology in two study regions: Lake Saint-Pierre and Yamaska River in Quebec, Canada. Furthermore, the proposed method in this study can be applied to other regions.
... Recently, generic approaches evolved to use a combination of sharpness, textural, and statistical features to evaluate retinal image quality. Davis et al. [3] calculated different Haralick measures (entropy and contrast) [4], statistical features (mean, variance, kurtosis, skewness, first quartile (Q1) and third quartile (Q3)) along with spatial frequency to completely characterize the luminance and sharpness of retinal images computed from seven local regions covering the entire retinal image. Fasih et al. [49] combined cumulative probability of blur detection (CPBD) [5] along with Run Length Matrix (RLM) features [6] for quality evaluation calculated from the macula and OD regions. ...
Conference Paper
Full-text available
A wide-ranging, routine screening of the enormous numbers of potential ocular patients is made possible by automatic retinal screening systems (ARSS), which only offer professional treatment when early disease signs are identified. Serious vision impairments brought on by extensive disease progressions of silent retinal illnesses, such as diabetic retinopathy, can be avoided or delayed with early identification and appropriate treatment. However, it was discovered that the calibre of the retinal pictures that were processed had a significant impact on how reliable these systems were. This thesis presents a no-reference comprehensive wavelet-based retinal image quality assessment (RIQA) method for ARSS-based early detection of diabetic retinopathy.
... 2) Perimeter feature value: this feature gives the actual measurement of the nucleus shape by tracing the nucleus cell border. 3) Distance to centroid of all nuclei feature value: this feature gives the geometrical distance between the nucleus cell center and all pixels (area) around the boundary. 4) Distance to c-nearest nuclei (distance to c-NN) feature: this feature gives the total geometrical distance to the nearest nuclei cells. 5) Mean R value feature: this feature gives the mean (sum of the color pixel values divided by the total number of pixels) of the RED color channel. ... values that are co-occurrence features, proposed by [28], calculated from GLCMs at angles of 0, 45, 90, and 135 degrees and eight gray levels. 12) Contrast feature: this feature gives the intensity-level differences (contrast) between each pixel and its neighbor in the tested area. 13) Correlation feature: this feature measures how strongly each pixel is correlated with its neighbors in the tested area. ...
Article
Full-text available
Breast cancer is becoming a leading cause of death among women worldwide; early tumor detection in the diagnosis stage is obtained by cytological testing of breast images, based mainly on cell morphology and architecture distribution. Accurate diagnosis of this disease can improve patient survival. This paper presents an analysis of digital histopathology of breast cancer based on cytological images of Fine Needle Biopsy (FNB). The main approach of this study relies on a localization approach for nuclei cells (cell detection). The nuclei are estimated as circular shapes using the Circular Hough Transform (CHT). The cells detected by the CHT are then filtered to keep only high-quality, accurately estimated cells for further analysis using a supervised learning approach. A Support Vector Machine (SVM) is used to filter the nuclei cells and classify the detected circles as correct (cells) or incorrect. A set of 25 features was extracted from the remaining filtered nuclei set, and 50 features were produced by calculating the mean and variance of each feature. Support Vector Machine (SVM) and Backpropagation Neural Network (BNN) are the two classification algorithms used for the biopsies in the final stage. The complete diagnostic procedure was tested on a total of 130 microscopic images of fine needle biopsies obtained from patients and achieved 99.88% classification accuracy using a Resilient Backpropagation Neural Network (RBNN) with only 27 of the 50 features, selected using a mutual information approach to distinguish between benign and malignant cases. These results show that our proposed method is very promising compared to previously reported results, providing valuable, accurate, and stable diagnostic information.
... A significant challenge in the categorization of visual information from extensive blocks of resolution cells lies in establishing a relevant set of features that effectively represent the pictorial data within these blocks. After these features are established, various pattern-recognition methods can be employed to categorize the image blocks [3]. ...
Conference Paper
Full-text available
The detection of crop diseases is a critical issue in agriculture, particularly in areas where farming is a primary livelihood. Traditional methods for identifying diseases in crops are often slow, necessitate specialized knowledge, and frequently lead to delays in intervention. This project leverages machine learning and image processing techniques to automate the identification of plant diseases through leaf images. Utilizing Convolutional Neural Networks (CNNs), the system achieves high accuracy in detecting and classifying diseases, thereby minimizing the need for manual labor and expert intervention. To enhance the quality and clarity of the images processed by the CNN model, several image pre-processing techniques, such as segmentation and feature extraction, were implemented. This optimization allows the CNN to more effectively recognize disease patterns. The increased volume of image data enables broader and real-time application, assisting farmers in promptly identifying crop diseases. Ultimately, the integration of machine learning and image processing in crop disease detection supports sustainable agricultural practices by facilitating timely diagnoses and predicting disease severity, thereby safeguarding crop yields. This project thus represents a significant step towards enhancing food security and improving agricultural productivity.
... The selection consisted of discarding highly correlated textural measures [11,12]. For this, the Spearman correlation matrix was computed and a threshold of 0.9 in absolute value was applied. ...
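A minimal sketch of the pruning step described in this excerpt (discarding features whose absolute Spearman correlation with an already-kept feature exceeds 0.9), using SciPy; the greedy keep-first strategy and the toy data are assumptions, not the cited procedure:

```python
import numpy as np
from scipy.stats import spearmanr

def drop_correlated(X, threshold=0.9):
    """Keep feature columns greedily; drop any column whose |Spearman rho|
    with an already-kept column exceeds the threshold."""
    rho, _ = spearmanr(X)            # (n_features x n_features) for >2 columns
    rho = np.abs(np.asarray(rho))
    kept = []
    for j in range(X.shape[1]):
        if all(rho[j, k] <= threshold for k in kept):
            kept.append(j)
    return kept
```

Spearman (rank) correlation is often preferred over Pearson here because textural measures are frequently related monotonically but not linearly.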
Conference Paper
Full-text available
Pará is the main producer of cocoa (Theobroma cacao) in Brazil, playing an important economic role. In recent years, the Itatá Settlement Project, in the southwest of the state, has undergone transformations in its land use, with the conversion of primary forests into agricultural areas, especially for cocoa cultivation. This study evaluates Random Forest models to map the presence of cocoa in the region using Sentinel-2 imagery. The results indicate that the model using textural-measure attributes and distance to roads performed best among the evaluated models, showing higher accuracy in identifying cocoa, despite challenges related to its similarity to secondary vegetation. The research contributes to understanding the land-use dynamics associated with cocoa cultivation in Itatá.
... Radiomic features can be broadly divided into statistical features (including histogram- and texture-based ones), model-based, transform-based, and shape-based features [9]. ...
Article
Background. Automated quantitative analysis of radiographic phenotyping refers to a modern digital research method that allows differential diagnosis of various pathological conditions of the maxillofacial region. Radiological data reflect the characteristics of tissues and lesions, such as heterogeneity and shape, and can, alone or in combination with demographic, histological, genomic or proteomic data, be used to solve clinical problems. Ultrasound is one of the most widely used imaging techniques worldwide. Due to its safety, low cost and accessibility, it is often used as a non-invasive diagnostic and follow-up method in various applications. The aim of the study was to evaluate the possibilities of radiomic analysis in the differential diagnosis of the maxillofacial region masses for further development of an artificial intelligence-based program that can make a preliminary diagnosis using radiomic analysis of ultrasound images. Material and methods. Literature review, examination results of 77 patients with various pathological conditions of the maxillofacial region aged from 25 to 72 years, 56 females and 21 males (the diagnosis was confirmed radiologically and pathologically), statistical analysis of the results. Results. According to the literature review, Loïc Duron et al., 2021 proved the possibility of using radiomic analysis of ultrasound images for the diagnosis of pathological conditions of the head and neck. The most frequent cases out of 77 were neoplasms (pleomorphic adenoma) – 29 (78.39%) and cysts – 8 (21.62%) of the large salivary glands. After pathological confirmation of the diagnosis, the ultrasound images obtained were subjected to manual segmentation, then quantitative analysis using the Slicer 5.6.1 software, as a result of which radiomic features (n=120), represented by digital values, were calculated. Principal component analysis confirmed the presence of radiomic features characteristic of only one condition. 
Further, we selected features (n=50) with a coefficient of repeatability below 1. Of these, 5 radiomic features were characteristic of only one condition, which can be interpreted as a potential imaging biomarker for these nosologies. Conclusion: Five imaging biomarkers for the diagnosis of pleomorphic adenomas and large salivary gland cysts were identified (Original Glcm JointAverage, Original Glrlm RunEntropy and Original Glszm GreyLevelNonUniformityNormalized for pleomorphic adenomas, Original Glszm GreyLevelVariance and Original Glcm SumEntropy for cysts). Further research is needed to obtain more data. This radiomics model facilitates proper patient routing and selection of the optimal treatment method.
... The GLCM, introduced by Haralick et al. (1973), is a tool used to analyse texture by examining the spatial relationships in a 2-dimensional image. The GLCM analyses texture by counting how often pairs of pixels with specific grey levels appear together at a certain distance and direction from each other in an image. ...
Article
Full-text available
This study presents an automated method for objectively measuring rock heterogeneity via raw X-ray micro-computed tomography (micro-CT) images, thereby addressing the limitations of traditional methods, which are time-consuming, costly, and subjective. Unlike approaches that rely on image segmentation, the proposed method processes micro-CT images directly, identifying textural heterogeneity. The image is partitioned into subvolumes, where attributes are calculated for each one, with entropy serving as a measure of uncertainty. This method adapts to varying sample characteristics and enables meaningful comparisons across distinct sets of samples. It was applied to a dataset consisting of 4935 images of cylindrical plug samples derived from Brazilian reservoirs. The results showed that the selected attributes play a key role in producing desirable outcomes, such as strong correlations with structural heterogeneity. To assess the effectiveness of our method, we used evaluations provided by four experts who classified 175 samples as either heterogeneous or homogeneous, where each expert assessed a different number of samples. One of the presented attributes demonstrated a statistically significant difference between the homogeneous and heterogeneous samples labelled by all the experts, whereas the other two attributes yielded nonsignificant differences for three out of the four experts. The method was shown to better align with the expert choices than traditional textural attributes known for extracting heterogeneous properties from images. This textural heterogeneity measure provides an additional parameter that can assist in rock characterization, and the automated approach ensures easy reproduction and high cost-effectiveness.
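The entropy-per-subvolume idea in the abstract above can be sketched as follows; this is a minimal version assuming gray-level histograms and log base 2, not the authors' exact attribute set:

```python
import numpy as np

def block_entropy(block, bins=256, value_range=(0, 256)):
    """Shannon entropy (bits) of a block's gray-level histogram:
    0 for a perfectly homogeneous block, larger for more varied textures."""
    hist, _ = np.histogram(block, bins=bins, range=value_range)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 := 0)
    return float(-np.sum(p * np.log2(p)))
```

Partitioning a micro-CT volume into subvolumes and comparing such per-block attribute values is one way to turn a qualitative impression of heterogeneity into a number that can be ranked across samples.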
... from the statistical interrelationships between neighboring voxels [26,27]. These features are calculated by constructing a specific matrix that captures the spatial distribution of voxel intensities, and various measurements can be obtained from this matrix using different computational approaches [28]. The first among the second-order features is the GLCM, which quantifies the frequency of voxel pairs with the ...
Article
Full-text available
Introduction This study aimed to evaluate the repeatability of radiomic features extracted from cone-beam computed tomography (CBCT) images and to identify specific radiomic features suitable for use in CBCT studies. Methods In this study, radiomic analysis was conducted using the 3D Slicer program on two CBCT scans obtained at different time points from each of 33 individuals, using the same CBCT device. A total of 107 radiomics features were extracted from the segmented C2 (Axis) data in all cases. The results of the radiomic analysis were evaluated using the Intraclass Correlation Coefficient (ICC) in the SPSS program to assess repeatability. Results As a result of the analysis, 25 out of 107 radiomic features demonstrated excellent repeatability (ICC > 0.90). These included nine of the 14 shape-based features, four of the 18 first-order features, two of the 24 GLCM features, three each from the 14 GLDM, 14 GLRLM, and 16 GLSZM features, and one of the five NGTDM features. Conclusion The grey-value dependent second-order radiomic features obtained from CBCT images have been found to exhibit lower repeatability compared to shape and first-order features. The poor repeatability of grayscale parameters should be taken into account when performing radiomics analysis of CBCT volumes.
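The repeatability analysis described above was run in SPSS; as a rough NumPy equivalent, a single-measure, absolute-agreement ICC(2,1) for an n-subjects-by-k-scans matrix can be sketched as follows (the specific ICC form used in the study is an assumption):

```python
import numpy as np

def icc2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    Y is an (n subjects x k raters/scans) matrix."""
    Y = np.asarray(Y, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    row_m = Y.mean(axis=1)
    col_m = Y.mean(axis=0)
    msr = k * np.sum((row_m - grand) ** 2) / (n - 1)   # between-subject mean square
    msc = n * np.sum((col_m - grand) ** 2) / (k - 1)   # between-scan mean square
    sse = np.sum((Y - row_m[:, None] - col_m[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                    # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)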
... Early detection will not only help provide sufficient treatment for the patients but also help cut down on personal expenses. The approach to predicting CKD is done by identifying the correct metrics and finding the relevant attributes that can be used to increase the model's accuracy (Haralick et al., 1973). The priority of this study lies in the fact that a detailed analysis of attributes and variables can be done to predict Chronic Kidney Disease. ...
... As textural features convey important patterns for identification of and distinguishing between objects of interests within an image (Haralick et al., 1973), we computed and constructed seven Grey-Level Co-occurrence Matrix (GLCM) features using the .glcmTexture() function in GEE. ...
Article
Full-text available
Barishal, known as the “Land of Paddy, Rivers, and Canals,” bears immense historical, social, and economic importance as a major agricultural, cultural, and financial hub of Bangladesh. However, the absence of a comprehensive LULC inventory, high landscape variability, and the limited availability of high quality remotely sensed data make the quantification of LULC changes in the region extremely challenging. This study is the first of its kind in Bangladesh to propose a cloud-based, open-source, machine learning framework that integrates spectral, textural, and topographic information to assess the spatio-temporal dynamics of LULC change in Barishal over 35 years. The performance of four machine learning algorithms (Support Vector Machine, Classification and Regression Tree, K-Nearest Neighbor, and Random Forests) were evaluated to ensure classification reliability. Results indicate that Random Forest outperforms other classifiers, achieving an average accuracy of 99 % across all study periods, making it the most suitable model for classifying heterogeneous landscapes. An analysis of multi-temporal LULC maps reveals a net increase in wetland (0.35 %), built-up (1.81 %), vegetation (8.48 %), and a net decrease in agriculture (−10.33 %) and bare soil (−0.36 %), primarily due to indiscriminate land use transitions. The study establishes a comprehensive and reliable baseline for Barishal’s LULC and introduces a rapid, open data driven approach for mapping complex, heterogeneous coastal landscapes globally. The spatio-temporal patterns of LULC underscore the urgent need for climate-resilient planning in Barishal and provide valuable insights for evidence-based policymaking necessary in implementing SDG 11: Sustainable Cities and Communities and SDG 15: Life on Land.
... Intensity values were normalized using the min-max normalization technique [44]. Then, radiomic features were extracted from the preprocessed images using in-house software and the package collageradiomics developed with Python [45][46][47][48]. For the 3D analysis, 763 radiomic features were extracted within the entire volume of the cyst. ...
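The min-max normalization mentioned in this excerpt is a simple linear rescaling of intensities to [0, 1]; a minimal sketch (the epsilon guard against constant images is an addition, not from the cited work):

```python
import numpy as np

def min_max_normalize(img, eps=1e-12):
    """Rescale intensities linearly so the minimum maps to 0 and the maximum to 1."""
    img = np.asarray(img, dtype=float)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + eps)
```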
Preprint
Full-text available
Distinguishing high-risk intraductal papillary mucinous neoplasms (IPMNs), pancreatic cysts requiring surgery, from low-risk lesions remains a clinical challenge, often resulting in unnecessary procedures due to limited specificity of current methods. While radiomics and deep learning (DL) have been explored for pancreatic cancer, cyst-level malignancy risk stratification of IPMNs remains untapped. We conducted a multi-institutional study (seven centers, 359 T2W MRI images) to assess the feasibility of AI for predicting IPMN dysplasia grade using cyst-level image features. We developed and compared 2D and 3D radiomics-only, deep learning (DL)-only, and radiomics-DL fusion models, using expert radiologist scoring as a baseline reference. Model performance was evaluated using held-out test data. The radiomics-DL fusion model showed the highest discriminatory ability on the test set (AUC 0.692), outperforming the radiomics-only model (AUC 0.665). Expert accuracy varied widely (37.4%-66.7%). The fusion model integrating deep learning and radiomics features from routine T2W MRI (AUC: 0.692) demonstrates potential for objective, cyst-level risk stratification of IPMNs in a multi-center cohort, outperforming both radiomics-only models and expert radiologists. While performance requires improvement for standalone clinical use, this approach offers a scalable, non-invasive method to potentially improve diagnostic accuracy and reduce unnecessary surgical interventions.
... dev, RMS, variance, smoothness, skewness and kurtosis were computed to characterize the overall distribution of the intensity (Echegaray et al. 2018). With the aim to characterize the textural features of the image, a grayscale co-occurrence matrix was generated (Haralick et al. 1973). Subsequently, four quantitative parameters: contrast, correlation, energy and homogeneity, were calculated. ...
Article
Full-text available
Study advances current diagnostic efficiency of canine/feline (sub-)cutaneous tumors using machine learning and multimodal imaging data. White light (WL), fluorescence (FL) and ultrasound (US) imaging were combined into hybrid approaches to differentiate between malignant mastocytomas, soft tissue sarcomas and benign lipomas. Support Vector Machine and Ensemble classifiers were optimized via sequential feature selection. US radio-frequency signals were quantitatively analyzed to derive the colormaps of six US estimates, corresponding to spectral and temporal domains of the acoustic field. This resulted in the quantification of 72 morphological features for US; as well as 24 and 12 – for WL and FL data, respectively. Resulting classification efficiency for mastocytoma and sarcoma using US data was >75%; US+FL − 75–80%; US+WL − 85–90% and US+OPTICS − 90–95%. ∼100% classification efficiency was achieved for the differentiation between benign and malignant tumors even using single WL feature for Ensemble classifier. US features, resulting in inferior classification efficiency, were competitive to superior optical, as they were selected during optimization to be added to or replace optical counterparts. Additional tissue differentiation was performed on z-stacks of US colormaps, obtained using 3D arrays of US radio-frequency signals. This resulted in ∼70% differentiation efficiency for mastocytoma and sarcoma as well as >95% for benign and malignant tissues. The obtained additional metric of classification efficiency provides complementary diagnostic support, which for Support Vector Machine can be expressed as: 90.3 ± 1.9% (US+WL)×71.2 ± 0.6% (USDepth Profile). This hybrid criterion adds robustness to diagnostic model and may be very beneficial to characterize heterogeneous tissues.
... Various approaches have been developed to effectively preserve and utilize spatial structure information in image processing tasks. These include texture analysis methods that capture local patterns and regularities [17], graph-based techniques that model relationships between image regions [18,19], geometric approaches that analyze shapes and spatial configurations [20], deep learning architectures specifically designed to maintain spatial correlations [21], etc. These approaches leverage the inherent continuity and gradual variations present in natural images to preserve essential spatial relationships. ...
Article
Full-text available
Existing Robust Sparse Principal Component Analysis (RSPCA) does not incorporate the two-dimensional spatial structure information of images. To address this issue, we introduce a smooth constraint that characterizes the spatial structure information of images into conventional RSPCA, generating a novel algorithm called Robust Sparse Smooth Principal Component Analysis (RSSPCA). The proposed RSSPCA achieves three key objectives simultaneously: robustness through L1-norm optimization, sparsity for feature selection, and smoothness for preserving spatial relationships. Within the Minorization-Maximization (MM) framework, an iterative process is designed to solve the RSSPCA optimization problem, ensuring that a locally optimal solution is achieved. To evaluate the face reconstruction and recognition performance of the proposed algorithm, we conducted comprehensive experiments on six benchmark face databases. Experimental results demonstrate that incorporating robustness and smoothness improves reconstruction performance, while incorporating sparsity and smoothness improves classification performance. Consequently, the proposed RSSPCA algorithm generally outperforms existing algorithms in face reconstruction and recognition. Additionally, visualization of the generalized eigenfaces provides intuitive insights into how sparse and smooth constraints influence the feature extraction process. The data and source code from this study have been made publicly available on the GitHub repository: https://github.com/yuzhounh/RSSPCA.
... For each cell, a set of quantitative morphological (e.g., area, circularity, and equivalent diameter) and texture-based (e.g., mean, entropy, and fractal-based features) features were calculated. These features have been extensively detailed in previous works for tissue and cell characterization (48)(49)(50)(51)(52)(53)(54). Feature ranking was then performed using chi-square tests to identify features relevant for characterization of T cell viability and activation state. ...
Preprint
T cell characterization is critical for understanding immune function, monitoring disease progression, and optimizing cell-based therapies. Current technologies to characterize T cells, such as flow cytometry, require fluorescent labeling and typically are destructive endpoint measurements. Non-destructive, label-free imaging methods have been proposed, but face limitations with throughput, specificity, and system complexity. Here we demonstrate deep-ultraviolet (UV) microscopy as a label-free, non-destructive, fast and simple imaging approach for assessing T cell viability, activation state, and subtype with high accuracy. Using static deep-UV images, we characterize T cell viability and activation state, demonstrating excellent agreement with flow cytometry measurements. We further apply dynamic deep-UV imaging to quantify intracellular activity, enabling fast and accurate subtyping of CD4⁺ and CD8⁺ T cells. These results corroborate recent studies on metabolic activity differences between these subtypes, but now, with deep-UV microscopy, they are enabled by a non-destructive, fast, low-cost and simple approach. Together, our results demonstrate deep-UV microscopy as a powerful tool for high-throughput immune cell characterization, with broad applications in immunology research, immune monitoring, and development of emerging cell-based therapies.
... 4) Texture and Structure Features: AIGI are often characterized by unnatural textures such as unusual smoothness, repetitive patterns, and inconsistent spatial structures [28]. We derive texture descriptors from the gray level co-occurrence matrix (GLCM) [29] and structure descriptors from the histogram of oriented gradients (HoG) [30] to account for such effects [31]. 5) Frequency Features: generative AI often introduces atypical frequency signatures in images [32], an effect we capture using features derived from a 2D wavelet decomposition of the images with Daubechies wavelets. ...
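The GLCM underlying these texture descriptors can be sketched in a few lines. A minimal NumPy version for a single offset (the function name and toy image below are illustrative, not from the cited work):

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Count co-occurring gray-level pairs at offset (dy, dx), then normalize
    so the matrix is a joint probability distribution."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

# 2x2 image with two gray levels: horizontal pairs are (0,0) and (1,1)
img = np.array([[0, 0], [1, 1]])
P = glcm(img, levels=2)
```

Varying `dx`, `dy` gives the different distances and orientations; descriptors such as contrast or homogeneity are then weighted sums over `P`.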
Preprint
The rapid advancement of generative AI has enabled the creation of highly photorealistic visual content, offering practical substitutes for real images and videos in scenarios where acquiring real data is difficult or expensive. However, reliably substituting real visual content with AI-generated counterparts requires robust assessment of the perceived realness of AI-generated visual content, a challenging task due to its inherent subjective nature. To address this, we conducted a comprehensive human study evaluating the perceptual realness of both real and AI-generated images, resulting in a new dataset, containing images paired with subjective realness scores, introduced as RAISE in this paper. Further, we develop and train multiple models on RAISE to establish baselines for realness prediction. Our experimental results demonstrate that features derived from deep foundation vision models can effectively capture the subjective realness. RAISE thus provides a valuable resource for developing robust, objective models of perceptual realness assessment.
... neighboring pixel pairs with similar intensities more heavily (Figure 5B); sum average, representing the average sum of pixel pairs (Figure 5C); entropy, reflecting the degree of randomness in pixel intensity (Figure 5D); difference entropy, describing the randomness in differences between neighboring pixel intensities (Figure 5E); and difference variance, which quantifies the variation of intensity differences (Figure 5F) (Haralick et al., 1973; Soh & Tsatsoulis, 1999). Together, these metrics provide complementary information about cytoskeleton organization, offering insights into structural changes associated with drug resistance. ...
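The statistics named in this snippet all follow directly from a normalized GLCM. A pure-NumPy sketch under that assumption (function and variable names are illustrative):

```python
import numpy as np

def haralick_stats(P):
    """Selected Haralick statistics from a normalized GLCM P
    (rows/columns index gray levels)."""
    n = P.shape[0]
    i, j = np.indices(P.shape)
    # marginal distribution of |i - j|, used by the "difference" statistics
    p_diff = np.array([P[np.abs(i - j) == k].sum() for k in range(n)])
    k = np.arange(n)
    sum_average = ((i + j) * P).sum()                      # mean of i + j
    entropy = -(P[P > 0] * np.log2(P[P > 0])).sum()        # pair randomness
    diff_entropy = -(p_diff[p_diff > 0] * np.log2(p_diff[p_diff > 0])).sum()
    diff_variance = ((k - (k * p_diff).sum()) ** 2 * p_diff).sum()
    return sum_average, entropy, diff_entropy, diff_variance

# uniform 2x2 GLCM: every gray-level pair is equally likely
P = np.full((2, 2), 0.25)
sa, ent, dent, dvar = haralick_stats(P)
```

On this uniform toy matrix, entropy is maximal for its size, which matches the interpretation of entropy as pixel-pair randomness.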
Preprint
Full-text available
Accurate cell segmentation is an essential step in the quantitative analysis of fluorescence microscopy images. Pre-trained deep learning models for automatic cell segmentation, such as Cellpose, offer strong performance across a variety of biological datasets but may still introduce segmentation errors. While training custom models can improve accuracy, it often requires programming expertise and significant time, limiting the accessibility of automatic cell segmentation for many wet lab researchers. To address this gap, we developed Toggle-Untoggle, a desktop application that combines automated segmentation using the Cellpose cyto3 model with a user-friendly graphical interface for intuitive segmentation quality control. Our app allows users to refine results by interactively toggling individual segmented cells on or off without the need to manually edit segmentation masks, and to export morphological data and cell outlines for downstream analysis. Here we demonstrate the utility of Toggle-Untoggle in enabling accurate, efficient single-cell analysis on real-world fluorescence microscopy data, with no coding skills required.
... For instance, color-based retrieval employed histograms in RGB, HSV, or LAB spaces to capture global color distribution. Texture features like Gabor filters and co-occurrence matrices were used to encode patterns and granularity in images [8]. However, these traditional CBIR methods often failed to align with human perception of image similarity, especially when high-level semantic information was required. ...
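The global color histogram described above can be sketched as follows; histogram intersection is one common similarity choice, used here purely for illustration:

```python
import numpy as np

def rgb_histogram(img, bins=4):
    """Global color histogram: per-channel binned counts,
    concatenated and normalized to sum to 1."""
    hist = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0]
            for c in range(3)]
    h = np.concatenate(hist).astype(float)
    return h / h.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical distributions."""
    return float(np.minimum(h1, h2).sum())

img = np.zeros((8, 8, 3), dtype=np.uint8)  # an all-black toy image
h = rgb_histogram(img)
sim = histogram_intersection(h, h)
```

The same pattern applies in HSV or LAB after a color-space conversion; only the channel ranges change.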
Article
This project presents a Content-Based Image Retrieval (CBIR) system that utilizes deep learning to improve the efficiency and accuracy of image similarity search. The system leverages a pre-trained ResNet-50 convolutional neural network, repurposed as a deep feature extractor by removing its final classification layers. Input images are first pre-processed and then passed through the network to extract high-dimensional feature vectors that capture rich visual semantics. These deep features are compared using cosine similarity to identify visually similar images. The system supports real-time image uploads and retrieval by matching queries against a precomputed dataset of image features, enabling fast and responsive search capabilities. By replacing traditional handcrafted features with deep feature representations, the system achieves significantly higher retrieval accuracy and robustness across various image types and domains. This approach demonstrates strong potential for practical deployment in areas such as digital asset management, visual search engines, and e-commerce product discovery, where visual similarity plays a critical role. Overall, the integration of deep learning into CBIR systems represents a significant advancement in the field of image search and retrieval. Keywords: CBIR, Deep Learning, ResNet-50, Feature Extraction, Image Similarity
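The retrieval step of such a system (cosine similarity over deep feature vectors) can be sketched independently of the network itself. In this illustration, random vectors stand in for ResNet-50 embeddings; a real system would substitute the extractor's outputs:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query, index, top_k=3):
    """Rank indexed feature vectors by cosine similarity to the query."""
    scores = [(i, cosine_sim(query, f)) for i, f in enumerate(index)]
    return sorted(scores, key=lambda s: s[1], reverse=True)[:top_k]

rng = np.random.default_rng(0)
index = [rng.normal(size=2048) for _ in range(5)]      # stand-ins for embeddings
query = index[2] + rng.normal(scale=0.01, size=2048)   # near-duplicate of item 2
best = retrieve(query, index, top_k=1)[0][0]
```

Precomputing and normalizing the index vectors turns each query into a single matrix-vector product, which is what makes the real-time matching feasible.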
... The co-occurrence matrix, which is used to detect image patterns, is a second-order texture statistical descriptor that counts the number of pairs of pixels with the same gray level in its neighborhood and performs various statistical calculations based on that [23]. For a gray image, the GLCM algorithm calculates the joint probability of intensities occurring at certain distances and angles that can be adjusted [24]. For the neighborhood distance with a value of 1, only the patterns in the neighboring pixels are recognized relative to the central pixel. ...
Article
Full-text available
Plant disease is one of the most threatening factors in agriculture, causing a decrease in the quality and quantity of produced products. Some diseases can be identified and recognized by the appearance of symptoms on the leaves of the plant. Non-destructive and accurate techniques for the detection of diseases could be practical in increasing productivity and decreasing the waste of products. In this research, nine types of common diseases in tomatoes were evaluated by the machine vision method using Gray Level Co-occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRM), and Local Binary Pattern (LBP) texture features. Linear Discriminant Analysis (LDA), Artificial Neural Network (ANN), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM) were used to model the dataset. The best algorithm and model were introduced using the SVM and KNN models. With the ANN model, the best results were obtained with the GLCM feature, while in other models, the features extracted from the GLRM algorithm exhibited the best outcomes. The SVM model with the cubic kernel function yielded the best results, with accuracies of 97.43% and 91.38% in the training and test steps. The sensitivity and specificity of this modeling were 99.46% and 95.59% as well as 97.91% and 86.75% for the training and test datasets, respectively. In addition, the results were improved using the Genetic Bee Colony (GBC) feature reduction algorithm. The results exhibited acceptable performance in detecting healthy and unhealthy leaves, and in accurately diagnosing the type of tomato disease.
... Classical Texture Descriptors: For classical texture analysis, we employed the Gray-Level Co-occurrence Matrix (GLCM) [46], a widely used method for capturing spatial relationships between pixel intensities. GLCM has also been extensively applied in texture analysis and has shown significant success in haptic studies for characterizing surface properties [14]. ...
Preprint
Full-text available
Accurate prediction of perceptual attributes of haptic textures is essential for advancing VR and AR applications and enhancing robotic interaction with physical surfaces. This paper presents a deep learning-based multi-modal framework, incorporating visual and tactile data, to predict perceptual texture ratings by leveraging multi-feature inputs. To achieve this, a four-dimensional haptic attribute space encompassing rough-smooth, flat-bumpy, sticky-slippery, and hard-soft dimensions is first constructed through psychophysical experiments, where participants evaluate 50 diverse real-world texture samples. A physical signal space is subsequently created by collecting visual and tactile data from these textures. Finally, a deep learning architecture integrating a CNN-based autoencoder for visual feature learning and a ConvLSTM network for tactile data processing is trained to predict user-assigned attribute ratings. This multi-modal, multi-feature approach maps physical signals to perceptual ratings, enabling accurate predictions for unseen textures. To evaluate predictive accuracy, we employed leave-one-out cross-validation to rigorously assess the model's reliability and generalizability against several machine learning and deep learning baselines. Experimental results demonstrate that the framework consistently outperforms single-modality approaches, achieving lower MAE and RMSE, highlighting the efficacy of combining visual and tactile modalities.
Article
Active contour and active surface models are image segmentation methods which offer a solid mathematical background, reduced computational time, smooth boundaries and, in many cases, also robustness in presence of noise. In other cases, due to the complexity of the images, active contour-surface models do not provide good results. However, their performance can be improved by taking into account more strategic image features that affect the evolution of the active contours-surfaces. This review seeks to explore the features used in literature for this goal, the related topic of feature reduction/selection, and the type of images involved. Considerations about limitations and possible future extensions are also presented.
Article
Full-text available
This study aims to establish the foundations for a semi-automated methodology to identify and quantify polyvinyl chloride microplastics (MP-PVC) in environmental samples, utilising Nile Red (NR) dye and entropy analysis of scanning electron microscopy (SEM) images. A specific protocol for NR staining and MP-PVC visualisation was developed. Controlled UV-C radiation exposure was used to simulate the aging process of MP-PVC. Fourier-transform infrared spectroscopy (FTIR) was employed to measure the chemical alterations in MP-PVC. The results identified the optimal NR staining protocol. Aging significantly altered the morphology and chemical composition of MP-PVC, accelerating degradation processes. SEM analysis, coupled with image entropy quantification, enabled monitoring based on statistical testing of the degradation progression, highlighting increased surface complexity and heterogeneity over time. FTIR results corroborated SEM observations, confirming oxidation processes and other chemical modifications. The combination of these techniques offers a comprehensive characterisation of UV-C aging effects on MP-PVC, contributing to the development of more effective tools for monitoring and managing these pollutants in aquatic environments.
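The image-entropy quantification used in studies like this one can be illustrated with the standard Shannon entropy of the gray-level histogram (a generic sketch, not the authors' exact pipeline):

```python
import numpy as np

def image_entropy(img, levels=256):
    """Shannon entropy of the gray-level histogram, in bits.
    Higher values indicate a more heterogeneous surface."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

flat = np.zeros((16, 16), dtype=np.uint8)               # homogeneous surface
varied = np.arange(256, dtype=np.uint8).reshape(16, 16) # maximally varied surface
e_flat = image_entropy(flat)
e_varied = image_entropy(varied)
```

Rising entropy over an aging series is the kind of statistic the degradation monitoring described above would track.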
Article
Breast cancer remains a leading cause of cancer-related deaths among women worldwide, highlighting the urgent need for early detection. While mammography is the gold standard, it faces cost and accessibility barriers in resource-limited areas. Infrared thermography is a promising cost-effective, non-invasive, painless, and radiation-free alternative that detects tumors by measuring their thermal signatures through thermal infrared radiation. However, challenges persist, including limited clinical validation, lack of Food and Drug Administration (FDA) approval as a primary screening tool, physiological variations among individuals, differing interpretation standards, and a shortage of specialized radiologists. This survey uniquely focuses on integrating texture analysis and machine learning within infrared thermography for breast cancer detection, addressing the existing literature gaps, and noting that this approach achieves high-ranking results. It comprehensively reviews the entire processing pipeline, from image preprocessing and feature extraction to classification and performance assessment. The survey critically analyzes the current limitations, including over-reliance on limited datasets like DMR-IR. By exploring recent advancements, this work aims to reduce radiologists’ workload, enhance diagnostic accuracy, and identify key future research directions in this evolving field.
Article
Global Navigation Satellite Systems (GNSSs) are widely used for positioning, timing, and navigation services. Such widespread usage makes them exposed to various threats, including malicious attacks such as spoofing attacks. The availability of low-cost devices such as software-defined radios enhances the viability of performing such attacks. Efficient spoofing detection is of essential importance for the mitigation of such attacks. Although various methods have been proposed for that purpose, it is still an important research topic. In this paper, we investigate a spoofing detection method based on the integrated usage of the discrete wavelet transform (DWT) and machine learning (ML) techniques and propose efficient solutions. A series of experiments using different wavelets and machine learning techniques for the Global Positioning System (GPS) and Galileo systems are performed. Moreover, the impact of the usage of different types of training data is explored. Following the computational complexity analysis, the potential for complexity reduction is investigated and computationally efficient solutions are proposed. The obtained results show the efficacy of the proposed approach.
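The DWT feature-extraction stage of such a pipeline can be sketched with the Haar wavelet, the simplest member of the Daubechies family (a generic illustration; the study above evaluates several wavelets and classifiers):

```python
import numpy as np

def haar_step(x):
    """One level of the 1-D Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def dwt_features(x):
    """Simple energy features of the detail band, usable as ML inputs."""
    _, d = haar_step(x)
    return np.array([np.mean(d ** 2), np.max(np.abs(d))])

sig = np.array([1.0, 1.0, 1.0, 1.0])  # a constant signal has no detail energy
feats = dwt_features(sig)
```

Spoofing artifacts tend to show up as anomalies in such detail-band statistics, which is what the downstream ML classifier is trained on.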
Article
Full-text available
Lung cancer, the leading cause of cancer-related deaths globally, includes non-small cell lung cancer (NSCLC) (85% of cases) and small cell lung cancer (SCLC) (13-15%). While accurate diagnosis and treatment selection are critical, the absence of reliable predictive or prognostic biomarkers remains a significant challenge. This study explored the combined use of radiomics from CT scans and pathomics from H&E slides in three contexts: (1) predicting disease recurrence in early-stage NSCLC, (2) predicting immunotherapy response in advanced-stage NSCLC, and (3) predicting chemotherapy response in SCLC. The integrated radio-pathomic model significantly outperformed individual models. In early-stage NSCLC (N = 194), it achieved an HR of 8.35 (C-index: 0.71, p = 0.0043). In advanced-stage NSCLC (N = 35), the combined model improved predictive performance (AUC: 0.75, p = 0.042). In SCLC (N = 50), the integrated model showed an AUC of 0.78, surpassing both radiomic and pathomic models. These findings highlight the potential of combining radiomics and pathomics for improved lung cancer risk stratification and treatment prediction.
Article
Full-text available
Introduction: Carpal Tunnel Syndrome (CTS) is a prevalent neuropathy requiring accurate, non-invasive diagnostics to minimize patient burden. This study evaluates the New Energy Vision (NEV) camera, an RGB-based multispectral imaging tool, to detect CTS through skin texture and color analysis, developing a machine learning algorithm to distinguish CTS-affected hands from controls. Methods: A two-part observational study included 103 participants (50 controls, 53 CTS patients) in Part 1, using NEV camera images to train a Support Vector Machine (SVM) classifier. Part 2 compared median nerve-damaged (MED) and ulnar nerve-normal (ULN) palm areas in 32 CTS patients. Validations included nerve conduction tests (NCT), Semmes–Weinstein monofilament testing (SWMT), and the Boston Carpal Tunnel Questionnaire (BCTQ). Results: The SVM classifier achieved 93.33% accuracy (confusion matrix: [[14, 1], [1, 14]]), with 81.79% cross-validation accuracy. Part 2 identified significant differences (p < 0.05) in color proportions (e.g., red_proportion) and Haralick texture features between MED and ULN areas, corroborated by BCTQ and SWMT. Conclusions: The NEV camera, leveraging multispectral imaging, offers a promising non-invasive CTS diagnostic tool through the detection of nerve-related skin changes. Further validation is needed for clinical adoption.
Article
Full-text available
Agriculture is the key foundation of the national economy, impacting livelihoods and security. Accurate crop distribution data is essential for guiding agricultural production and addressing food security issues. This study analyzes remote sensing feature extraction using spatial, temporal, and spectral information, with Jilin Province and Yitong County as the focus. Based on remote sensing image data from Sentinel-2, Landsat-8, and Landsat-7 covering the entire growth cycle of vegetation, a feature space was constructed and optimized. Four commonly used pixel-based and object-oriented classification methods, as well as deep learning methods, were used to extract crop planting information in the test area. Finally, by validating and comparing the crop identification accuracy of five classification methods, the method with the highest classification accuracy and best overall mapping effect was applied to the fine identification of crops in Jilin Province from 2002 to 2022. The driving force analysis was conducted using the geographical detector. The research results show that: ① The decision tree had the highest classification accuracy at 93.66% with a Kappa coefficient of 0.93. ② Corn and rice areas in Jilin showed a rising trend, with corn increasing by nearly 15%. ③ Changes in crop areas were significantly affected by vegetation coverage and proximity to residential areas, with q values above 0.5. The study provides a replicable and efficient remote sensing monitoring approach tailored for Jilin Province and comparable areas. This method not only facilitates extensive agricultural surveillance but also delivers robust technical assistance for precision agriculture and land resource management. Future work will focus on quantifying the influence of individual feature variables on classification accuracy and exploring combinations of different classification methods to enhance performance.
Preprint
Ultra-high-resolution image synthesis holds significant potential, yet remains an underexplored challenge due to the absence of standardized benchmarks and computational constraints. In this paper, we establish Aesthetic-4K, a meticulously curated dataset containing dedicated training and evaluation subsets specifically designed for comprehensive research on ultra-high-resolution image synthesis. This dataset consists of high-quality 4K images accompanied by descriptive captions generated by GPT-4o. Furthermore, we propose Diffusion-4K, an innovative framework for the direct generation of ultra-high-resolution images. Our approach incorporates the Scale Consistent Variational Auto-Encoder (SC-VAE) and Wavelet-based Latent Fine-tuning (WLF), which are designed for efficient visual token compression and the capture of intricate details in ultra-high-resolution images, thereby facilitating direct training with photorealistic 4K data. This method is applicable to various latent diffusion models and demonstrates its efficacy in synthesizing highly detailed 4K images. Additionally, we propose novel metrics, namely the GLCM Score and Compression Ratio, to assess the texture richness and fine details in local patches, in conjunction with holistic measures such as FID, Aesthetics, and CLIPScore, enabling a thorough and multifaceted evaluation of ultra-high-resolution image synthesis. Consequently, Diffusion-4K achieves impressive performance in ultra-high-resolution image synthesis, particularly when powered by state-of-the-art large-scale diffusion models (e.g., Flux-12B). The source code is publicly available at https://github.com/zhang0jhon/diffusion-4k.
Preprint
Full-text available
The growth and characterization of materials using empirical optimization typically requires a significant amount of expert time, experience, and resources. Several complementary characterization methods are routinely performed to determine the quality and properties of a grown sample. Machine learning (ML) can support the conventional approaches by using historical data to guide and provide speed and efficiency to the growth and characterization of materials. Specifically, ML can provide quantitative information from characterization data that is typically obtained from a different modality. In this study, we have investigated the feasibility of projecting the quantitative metric from microscopy measurements, such as atomic force microscopy (AFM), using data obtained from spectroscopy measurements, like Raman spectroscopy. Generative models were also trained to generate the full and specific features of the Raman and photoluminescence spectra from each other and the AFM images of the thin-film MoS2. The results are promising and have provided a foundational guide for the use of ML for the cross-modal characterization of materials for their accelerated, efficient, and cost-effective discovery.
Book
Full-text available
The TAIMA'23 workshops aim to take stock of the state of the art and the latest advances in the technologies, methodologies, and potential applications emerging in the field of information processing and analysis, bringing together the scientific community around renowned international experts. They thus provide an opportunity for scientific exchange centered on new technologies arising from the latest fundamental and applied advances in various research areas, including (non-exhaustively) machine and deep learning, computer vision, compression, shape description, and remote sensing. Particular attention is given to application domains of regional interest, such as environmental protection and sustainable management, and the management of water resources.
Article
Full-text available
Objective. Low-dose computed tomography (LDCT) effectively reduces radiation exposure to patients, but introduces severe noise artifacts that affect diagnostic accuracy. Recently, Transformer-based network architectures have been widely applied to LDCT image denoising, generally achieving superior results compared to traditional convolutional methods. However, these methods are often hindered by high computational costs and difficulties in capturing complex local contextual features, which negatively impact denoising performance. Approach. In this work, we propose CT-Denoimer, an efficient CT Denoising Transformer network that captures both global correlations and intricate, spatially varying local contextual details in CT images, enabling the generation of high-quality images. The core of our framework is a Transformer module that consists of two key components: the multi-Dconv head transposed attention (MDTA) and the mixed contextual feed-forward network (MCFN). The MDTA block captures global correlations in the image with linear computational complexity, while the MCFN block manages multi-scale local contextual information, both static and dynamic, through a series of Enhanced Contextual Transformer modules. In addition, we incorporate operation-wise attention layers to enable collaborative refinement in the proposed CT-Denoimer, enhancing its ability to more effectively handle complex and varying noise patterns in LDCT images. Main results. Extensive experimental validation on both the AAPM-Mayo public dataset and a real-world clinical dataset demonstrated the state-of-the-art performance of the proposed CT-Denoimer. It achieved a peak signal-to-noise ratio of 33.681 dB, a structural similarity index measure of 0.921, an information fidelity criterion of 2.857 and a visual information fidelity of 0.349. Subjective assessment by radiologists gave an average score of 4.39, confirming its clinical applicability and clear advantages over existing methods. Significance.
This study presents an innovative CT denoising Transformer network that sets a new benchmark in LDCT image denoising, excelling in both noise reduction and fine structure preservation.
Article
Full-text available
Under the influence of human activities and climate change, pine wilt disease (PWD) has caused significant damage to Masson's pine (Pinus massoniana Lamb.) forests in subtropical China. Existing research has struggled to accurately capture the large-scale spatial distribution of PWD, particularly for precise extraction at the provincial level. This study focuses on Fujian province and proposes a novel method for extracting PWD information at the sub-stand level. This approach uses forest age, canopy height, and temporal vegetation indices (VIs) data for deadwood distribution sub-stands to identify suspected outbreak areas. In key counties and cities, high-resolution satellite imagery (GF-2 and GF-7) was used to construct a bi-level scale-set model (BSM) for efficient image segmentation, followed by selection of the best classification algorithm for data extraction. For non-key counties, Sentinel imagery with 10-meter resolution was used on the GEE cloud platform with random forest (RF) classification. The results showed an overall annual extraction accuracy exceeding 90%, and statistical analysis revealed a significant reduction in the number of dead trees from 2021 to 2022, indicating effective control measures. This study demonstrated that multi-source remote sensing data can efficiently extract PWD distribution information, fill data gaps for provincial-level monitoring, and support forest pest management.
Article
Glioblastoma (GBM) often exhibits distinct anatomical patterns of relapse after radiotherapy. Tumour cell migration along myelinated white matter tracts is a key driver of disease progression. The failure of conventional imaging to capture subclinical infiltration has driven interest in advanced imaging biomarkers capable of quantifying tumour–brain interactions. Diffusion tensor imaging (DTI), radiomics, and connectomics represent a triad of innovative, non-invasive approaches that map white matter architecture, predict recurrence risk, and inform biologically guided treatment strategies. This review examines the biological rationale and clinical applications of DTI-based metrics, radiomic signatures, and tractography-informed connectomics in GBM. We discuss the integration of these modalities into machine learning frameworks and radiotherapy/surgical planning, supported by landmark studies and multi-institutional data. The implications for personalised neuro-oncology are profound, marking a shift towards risk-adaptive, tract-aware treatment strategies that may improve local control and preserve neurocognitive function.
Preprint
Full-text available
The stink bug complex is a major agricultural pest for soybean crops, significantly reducing productivity. Genetic resistance is the most effective control strategy, but its quantitative nature and labor-intensive phenotyping make its implementation in breeding programs challenging. This study explored high-throughput phenotyping (HTP) using unmanned aerial vehicles (UAVs) equipped with RGB cameras to evaluate a soybean population and identify stink bug resistance by correlating image-derived features and machine learning (ML) models. Using an alpha-lattice design with three replications, we phenotyped 304 soybean lines over two seasons under natural stink bug infestations. We manually evaluated five traits associated with stink bug resistance and correlated them with color, texture, and histogram features from aerial images. Three ML models (AdaBoost, SVM, and MLP) were tested to predict these traits. VIs, especially the Visible Atmospherically Resistant Index at the first percentile (VARI–P25) and texture-based indices at 45° and 135°, effectively predicted traits in stressed environments, particularly during flights near maturation. While ML models showed good predictive ability for yield, healthy seed weight, and maturity, they were less effective for stink bug resistance. Increasing the number of UAV flights modestly improved predictive accuracy, though predicting traits across different seasons remained challenging. Despite this, indices like VARI–P25 were valuable for screening and excluding less promising genotypes, optimizing breeding program resources. This pioneering work offers valuable insights and highlights the need for further research to optimize resistance selection, promising significant advances in soybean breeding for stink bug resistance.
Article
Full-text available
This study evaluates remote sensing features to resolve problems associated with feature redundancy, low efficiency, and insufficient input feature analysis in bushfire detection. It calculates spectral features, remote sensing indices, and texture features from Sentinel-2 data for the Blue Mountains region of New South Wales, Australia. Feature separability was evaluated with three measures: J-M distance, discriminant index, and mutual information, leading to an assessment of the best remote sensing features. The results show that for post-fire smoke detection, the best features are the normalized difference vegetation index (NDVI), the B1 band, and the angular second moment (ASM) in the B1 band, with respective scores of 0.900, 0.900, and 0.838. For burned land detection, the best features are NDVI, the B2 band, and correlation (Corr) in the B5 band, with corresponding scores of 1.000, 0.9436, and 0.9173. These results demonstrate the effectiveness of NDVI, the B1 and B2 bands, and specific texture features in the post-fire analysis of remote sensing data. These findings provide valuable insights for the monitoring and analysis of bushfires and offer a solid foundation for future model construction, fire mapping, and feature interpretation tasks.
Article
A procedure is developed to extract numerical features which characterize the pore structure of reservoir rocks. The procedure is based on a set of descriptors which give a statistical description of porous media. These features are evaluated from digitized photomicrographs of reservoir rocks, and they characterize the rock grain structure in terms of (1) the linear dependency of grey tones in the photomicrograph image, (2) the degree of "homogeneity" of the image, and (3) the angular variations of the image grey-tone dependencies. On the basis of these textural features, a simple identification rule using piecewise linear discriminant functions is developed for categorizing the photomicrograph images. The procedure was applied to a set of 243 distinct images comprising 6 distinct rock categories. The coefficients of the discriminant functions were obtained using 143 training samples. The remaining (100) samples were then processed, each sample being assigned to one of 6 possible sandstone categories. Eighty-nine per cent of the test samples were correctly identified.
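The linear discriminant rule described above can be illustrated with the nearest-mean classifier, which is itself a set of linear discriminant functions; the two rock classes and feature values below are hypothetical, not the paper's six categories or learned coefficients:

```python
import numpy as np

def fit_centroids(X, y):
    """Per-class mean feature vectors from training samples."""
    classes = sorted(set(y))
    return classes, np.array([X[np.array(y) == c].mean(axis=0) for c in classes])

def classify(x, classes, centroids):
    """Nearest-mean rule, written as linear discriminant functions
    g_c(x) = mu_c . x - 0.5 * ||mu_c||^2, choosing the argmax."""
    g = centroids @ x - 0.5 * (centroids ** 2).sum(axis=1)
    return classes[int(np.argmax(g))]

# hypothetical 2-D texture features for two rock classes
X = np.array([[0.1, 0.2], [0.0, 0.1], [0.9, 1.0], [1.0, 0.8]])
y = ["shale", "shale", "sandstone", "sandstone"]
cls, mu = fit_centroids(X, y)
pred = classify(np.array([0.95, 0.9]), cls, mu)
```

Piecewise linear rules generalize this by allowing several linear functions per class, but the train-then-assign workflow is the same as in the experiment described above.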