Citation: Tanut, B.; Tatomwong, W.; Buachard, S. Developing a Colorimetric Equation and a Colorimetric Model to Create a Smartphone Application That Identifies the Ripening Stage of Lady Finger Bananas in Thailand. Sensors 2023, 23, 6387. https://doi.org/10.3390/s23146387
Academic Editor: Steve Vanlanduit
Received: 29 May 2023
Revised: 4 July 2023
Accepted: 10 July 2023
Published: 13 July 2023
Copyright: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Article
Developing a Colorimetric Equation and a Colorimetric Model to Create a Smartphone Application That Identifies the Ripening Stage of Lady Finger Bananas in Thailand
Bhoomin Tanut 1, Watcharapun Tatomwong 2 and Suwichaya Buachard 3,*
1 Department of Computer Science and Information Technology, Faculty of Science and Technology, Kamphaeng Phet Rajabhat University, Kamphaeng Phet City 65000, Thailand; bhoomin_t@kpru.ac.th
2 Wireless Engineering Department, National Telecom Public Company Limited, Bangkok 10210, Thailand; watcharapun.t@ntplc.co.th
3 Department of Biology, Faculty of Science and Technology, Kamphaeng Phet Rajabhat University, Kamphaeng Phet City 65000, Thailand
* Correspondence: suwichaya_b@kpru.ac.th; Tel.: +66-087-139-2621
Abstract: This article develops a colorimetric equation and a colorimetric model to create a smartphone application that identifies the ripening stage of the lady finger banana (LFB) (Musa AA group 'Kluai Khai', กล้วยไข่ "gluay kai" in Thai). The mobile application photographs an LFB, automatically analyzes the color of the banana, and tells the user the number of days until the banana ripens and the number of days the banana will remain edible. The application is called the Automatic Banana Ripeness Indicator (ABRI, pronounced like "Aubrey"), and the rapid analysis that it provides is useful to anyone involved in the storage and distribution of bananas. The colorimetric equation interprets the skin color with the CIE L*a*b* color model in conjunction with the Pythagorean theorem. The colorimetric model has three parts. First, COCO-SSD object detection locates and identifies the banana in the image. Second, the Automatic Power-Law Transformation, developed here, adjusts the illumination to a standard derived from the average of a set of laboratory images. After removing the image background and converting the image to L*a*b*, the data are sent to the colorimetric equation to calculate the ripening stage. Results show that ABRI correctly detects a banana with 91.45% accuracy and the Automatic Power-Law Transformation correctly adjusts the image illumination with 95.72% accuracy. The colorimetric equation correctly identifies the ripening stage of all incoming images. ABRI is thus an accurate and robust tool that quickly, conveniently, and reliably provides the user with any LFB's ripening stage and the remaining days for consumption.
Keywords: colorimetric equation; colorimetric detection model; COCO‑SSD object detection model;
automatic power‑law transformation method
1. Introduction
Bananas, which belong to the genus Musa in the family Musaceae and the order Scitamineae, are a popular fruit around the world. Bananas grow in tropical and subtropical areas of more than 130 countries. The majority of edible cultivars are allopolyploid triploids with a genome type of AAA (dessert bananas), AAB (plantains), or ABB (other cooking bananas) [1]. In 2020, 119,833,677 tonnes of bananas were produced worldwide, of which 64,725,093 tonnes or 54.4% were produced in Asia. In Southeast Asia specifically, 32,859,278 tonnes of bananas were produced, with Thailand producing 1,360,670 tonnes [2]. Bananas are one of the most widely cultivated tropical fruits, and they can be grown on small-scale farms with low production costs, yielding the first harvests 14 months after planting and continuing for up to 10 years. There are five important commercial banana cultivars in Thailand: Kluai Hom Thong, Kluai Namwa, Kluai Khai, Kluai Hakmuk, and Kluai Lep Mu Nang. Harvesting bananas provides farmers with income throughout the year [3], and bananas can be grown as a main crop or in mixed crops.
Sensors 2023,23, 6387. https://doi.org/10.3390/s23146387 https://www.mdpi.com/journal/sensors
The lady finger banana (LFB) (Musa genome subgroup AA 'Kluai Khai', กล้วยไข่ "gluay kai" in Thai) has relatively small fruits that are particularly sweet and flavorful. Chanthaburi Province grows the most LFBs of any Thai province (approximately 10,173.23 acres in 2014), followed by Nakhon Sawan, Phetchaburi, Tak, Chumphon, Kamphaeng Phet, Rayong, and Sukhothai. Some LFBs are exported unprocessed to China, Japan, and Vietnam [4], while others are sent to food processing plants to be peeled, sliced, and canned in syrup. The main international markets for canned LFBs are the United States and France. Kamphaeng Phet Province has a specific type of LFB called Kluai Khai Kamphaeng Phet (KP-LFB), which are generally even smaller and sweeter than other LFBs, with a distinctive flavor. Kluai Khai Kamphaeng Phet was registered as an official geographical indication (GI) in 2016 [5]. KP-LFB have not been specially bred for convenient marketing characteristics. They tend to be more fragile than other banana types, bruising easily. As they ripen, they also produce more freckles than other banana types, which decreases their visual appeal and sale value. Thus, careful and timely transportation of KP-LFB bound for far destinations is particularly important [6]. Since freckling is a function of the ripening process, understanding ripeness levels and being able to identify them can help farmers estimate how many days remain before bananas are ripe. This, in turn, can help farmers plan ahead and sell more bananas in peak condition. It should be noted that KP-LFB handled under normal local conditions in Kamphaeng Phet Province are not temperature-controlled or humidity-controlled and are not treated with ethylene as a ripening tool. This study is not about Cavendish banana varieties or the conditions under which Cavendish bananas are handled in other places. This study develops a solution for a local challenge. That said, the techniques used here could be included in solutions for other challenges in other places.
Various technologies and tools have been developed for identification of fruit ripening stages. Researchers have used spectrometers, sensors, and computer software for this purpose. These tools are well suited for laboratory investigation, because in a lab it is easier to control light interference and environmental factors that affect changes in color value. The following are examples of related research confined to a laboratory, i.e., not mobile applications. Prommee et al. [7] developed a device and software to identify and predict banana quality. They converted RGB color values to L*a*b* color values, which were then used to identify banana ripeness stages. Ringer et al. [8] used real-time in situ techniques to determine the ripening stage of bananas non-invasively. Their work involved gloss measurement on the banana skin surface using a luster sensor. This method was able to identify seven ripeness levels of bananas, from unripe (called level R1) to overripe (called level R7). Prabha, S. et al. [9] used image processing to assess banana fruit maturity, and their method can identify three stages: under-mature, mature, and over-mature. Zhu, L. et al. [10] developed a banana grading system which uses the support vector machine and YOLO algorithms. They extract three features from the image: the banana shape, the skin color (RGB converted to HSV), and the skin texture. The software can identify three classes: unripe, ripe, and over-ripe. Yusuf A. R. et al.'s study [11] developed a method for the identification of Cavendish banana maturity which applied convolutional neural networks (CNN) and can identify four ripening stages: unripe, almost ripe, ripe, and overripe. A. Anushya's study [12] used the Gray Level Co-Occurrence Matrix (GLCM) and classification algorithms (decision trees and neural networks) to define banana ripening stages using classifiers. The three classes are almost ripe, ripe, and overripe. A study by Zhang, Y. et al. [13] used deep learning for the classification of bananas' seven ripening stages. They applied a convolutional neural network architecture, resulting in indicators that can help differentiate the subtle differences among subordinate classes of bananas in the ripening state (R1–R7). In contrast to the research described above, the examples below were developed for use on a smartphone for convenience and portability. Intaravanne, Y. et al. [14] developed a mobile application for estimation of banana ripeness that used two-dimensional spectral analysis. This method can identify three stages: immature, ripe, and overripe. Kipli, K. et al. [15] used Google Cloud Vision and applied an
RGB color system for banana ripeness evaluation via mobile application. That application can classify the same three stages of ripeness as in the study by Intaravanne et al. [14]. The previous studies mentioned above use methods that differ from each other in various ways, but one thing that most of the studies have in common is that they identify three to five stages of ripening. To date, there is no known mobile application for quickly and conveniently assessing the ripeness of lady finger bananas using image processing.
The current study develops a colorimetric mobile application that works in all light conditions because it is based on a new colorimetric equation and a new automatic illumination adjustment method called the Automatic Power-Law Transformation (APLT) method. There are three main components in the development of this application. First, a colorimetric equation is created to refit the color of the banana skin into the CIE L*a*b* color system [16]. The second component is an object detection model that uses COCO-SSD in the mobile application [17] to detect a banana within an image. The final component in the development of the application in this study is an Automatic Power-Law Transformation method applied in image processing that calculates energy and adjusts light quality. The objective is for the application to function well in any light environment with accuracy comparable to laboratory conditions. The application output consists of an indication of the ripeness stage of the banana and an estimate of the remaining number of days in which it can be consumed, referred to in this study as the remaining days for consumption (RDFC), which includes the current day but not the day that the banana becomes overly ripe. The application developed here can benefit all people who work in the banana business and, by extension, the public, who can have access to quality bananas with greater reliability.
2. Materials and Methods
The colorimetric equation and colorimetric application in this study have been developed using three kinds of software. A BLUE-Wave Miniature Spectrometer with SpectraWiz Spectrometer Software v5.1 [18] is used for analysis of banana skin colors to establish the colorimetric equation. MATLAB 2015B [19] is used to analyze the banana skin color in each incoming image. Finally, Visual Studio Code 2020 [20] is used to build the mobile application. Testing was carried out using a spectrometer (BLUE-Wave model, StellarNet Inc., Tampa, FL, USA) for collection of banana skin color and a smartphone (iPhone XS Max, Apple, Cupertino, CA, USA, with iOS 14.6) [21]. Figure 1 shows a schematic diagram of the five steps involved in development of this application. The five main stages are the following: data collection, colorimetric equation creation using color data, colorimetric model creation using images, colorimetric model adjustment, and application development.
2.1. Data Collection
In order to observe the ripening process of KP-LFB and divide the process into ripening stages, KP-LFB were harvested at a farm in Kamphaeng Phet City, approximately 45–50 days after the first appearance of the inflorescence (flower spike), as described by Tongpoolsomjit et al. [22]. At the time of cutting, the bananas were light green with some yellow, but still 75–80%, because as Tongpoolsomjit et al. explain, bananas cut earlier will not ripen, only shrivel. The dataset for the colorimetric equation and colorimetric application in this study comes from 80 of these harvested bananas. The changing skin color of these bananas was captured every day in two ways: first, with a spectrometer; and then, with a smartphone camera in a light box, for approximately 6–18 days until they reached stage R7, as described by Bains et al. [23]. Pictures of the data collection are shown in Figure 2.
Figure 1. Schematic diagram of the five stages of application development.
Figure 2. Banana skin color was captured in two ways: using a spectrometer (A) and using a smartphone camera with a light box (B).
When using the spectrometer, the color was recorded at 3 points spread across the length of the banana, but the tips were avoided, as shown in Figure 2A. The results from these three points were later averaged. The smartphone pictures were taken from the open front of a light box with dimensions of 24 × 24 × 24 cm. The box was equipped with a 20-watt strip of 20 tiny LED lights, as shown in Figure 2B. The average color of the smartphone image was then obtained using image processing techniques that first removed the image background.
This study has four experimental groups, which represent a matrix of two variables: the temperature during ripening, which was either 25 °C or 30 °C (based on local storage temperatures), and the presence or absence of wax paper wrapped around each individual banana. The four groups were called P25, P30, N25, and N30, where P means "paper used", N means "no paper used", and the numbers are the temperatures. Each experimental group consisted of 20 bananas (n = 20). The purpose of these experimental groups is not to find an optimal temperature or wrapping option, but rather to recognize that people handling bananas might store them at a variety of temperatures, either with or without some form of wrapping. Thus, the four experimental groups represent some normal variations that occur in actual everyday conditions. The four experimental groups and some snapshots of their ripening process are shown in Figure 3.
There were two cabinets where the bananas were ripened: one cabinet maintained at 25 °C for groups P25 and N25, and the other cabinet maintained at 30 °C for groups P30 and N30. In the wax paper groups P25 and P30, each banana was individually wrapped. The purpose of wrapping these bananas was to investigate the effect of reducing circulation of the naturally produced ripening gas ethylene. Wax paper is a material readily available to people who handle bananas, and it does not promote decomposition of the bananas the way sealed plastic wrap can.
Figure 3. The four experimental groups used for data collection in this study are shown at Day 3, Day 5, Day 7, and Day 9 after harvesting. P25 = wrapped in wax paper and ripened at 25 °C, P30 = wrapped in wax paper and ripened at 30 °C, N25 = no wax paper and ripened at 25 °C, N30 = no wax paper and ripened at 30 °C.
All 80 bananas (4 groups × 20 samples) were measured with a spectrometer and photographed with a smartphone once a day. They were measured for color by both methods daily until the banana skin color was substantially discolored. This produced a total of 9 days of data for all 80 bananas. The spectrometer measured all three CIE L*a*b* color values from the banana skin and the smartphone simply captured an image. RGB color data were later pulled from the phone images using MATLAB and then converted to CIE L*a*b* values. Thus, the nine-day data collection process yielded a total of 1440 records with L*a*b* values, consisting of 720 records from the spectrometer and 720 records from the smartphone.
2.2. Creating a Colorimetric Equation to Transform the Spectrometer Data
The purpose of the colorimetric equation in this study is to convert CIE L*a*b* color
data to a single variable representing hue, which will be θ. The CIE L*a*b* color system
(CIELAB), shown in Figure 4A, is based on three axes with different colors at each end: (1) red and green, (2) yellow and blue, and (3) white and black. This system was developed in 1976 by the International Commission on Illumination (CIE), which is an organization devoted to international cooperation and exchange of information among its member countries on all matters relating to the science and art of lighting. In CIELAB, the L* axis indicates lightness, and the a* and b* axes indicate chromaticity. L* is represented as a vertical axis with values from 0 (black) to 100 (white). The a* axis represents red as +a* (positive) and green as −a* (negative). The yellow and blue components are represented on the b* axis as +b* (positive) and −b* (negative) values, respectively [16].
The θ values in this study will range from 0° to 90°, and these represent angles between green and yellow/brown in the CIE L*a*b* color space. In a color wheel, shown in Figure 4B, the θ values correspond to the 90 degrees starting at hue angle = 150° (dark green, θ = 0°) and finishing at hue angle = 40° (brown, θ = 90°). Expressing and handling the hue as θ facilitates processing of collected data and allows for creation of the eventual phone application.
Figure 4. Elements related to the colorimetric equation in this study: (A) the CIE color space [16], (B) a color (hue) wheel [19], (C) a right triangle [22], (D) location of the hue plane between green and yellow, here called the Pythagorean Triangle Area (PTA), and (E,F) two examples of L*a*b* results plotted in the PTA.
The colorimetric equation in this study applies the CIE L*a*b* color system [16] together with the Pythagorean theorem [24]. Specifically, the Pythagorean theorem is applied to navigate the relationship between the −a* and +b* axes in CIE L*a*b*. The colorimetric equation here involves a right triangle. A right triangle consists of a hypotenuse ("h") side, a base ("b", or "adjacent") side, and a perpendicular ("p", or "opposite") side, as shown in Figure 4C. "Adjacent" and "opposite" refer to the side's position relative to the desired unknown angle. A right triangle is usually displayed with the base side horizontal at the bottom and the perpendicular side vertical. Trigonometric functions can be used to calculate the angles of a right triangle placed inside the hue plane of the CIE color space between green and yellow. This space will be referred to here as the Pythagorean Triangle Area (PTA), and it is shown in Figure 4D.
Spectrometer results from the 20 samples in each experimental group were averaged together. An example of a result is L* = 51.23, a* = −19.97, and b* = 56.35. The L* result is discarded here, because it is not related to hue. Next, the a* and b* values are plotted in
the PTA, and their intersection is the location of that color. Figure 4E,F show two random samples after they have been plotted into the PTA.
Inside the Pythagorean Triangle Area, Figure 4E,F show how the plotted a* and b* coordinates define a right triangle with an unknown angle. This unknown angle will be the θ value used to represent banana skin hue in this study, and it will be called the angle of ripening. This angle of a right triangle could normally be calculated from the lengths of the opposite and adjacent sides by using the tangent formula. However, because the current adjacent side is on the −a* axis, the result would be negative. Therefore, the equation for finding this angle is adjusted by using the absolute value of the −a* axis, as shown in Equation (1).
tan(θ) = B / |A|    (1)
where θ is the angle of ripening, A is the value of a* (CIELAB), and B is the value of b* (CIELAB).
The CIELAB color values collected from the banana skins with the spectrometer are applied to Equation (1), which combines the CIELAB data with the Pythagorean theorem. Equation (1) will also be referred to as the colorimetric equation in this study. The result is θ (hue) values for all the spectrometer data, and this process will be referred to as Colorimetric Extraction from Color Data (CEFCD).
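As an illustration, the colorimetric equation can be applied directly to any a*/b* pair. The sketch below (in Python; the function name is ours, not part of the application) computes the angle of ripening from Equation (1):

```python
import math

def angle_of_ripening(a_star: float, b_star: float) -> float:
    """Angle of ripening (theta, in degrees) from CIELAB a* and b*.

    Implements Equation (1): tan(theta) = b* / |a*|, using the absolute
    value of a* because green bananas lie on the negative a* axis.
    """
    return math.degrees(math.atan2(b_star, abs(a_star)))

# Example from the text: L* = 51.23, a* = -19.97, b* = 56.35 (L* is discarded)
theta = angle_of_ripening(-19.97, 56.35)  # roughly 70 degrees, near the yellow end
```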
2.3. Creating a Colorimetric Model to Pull Data from the Smartphone Images
The colorimetric model in this study has two objectives: to capture only relevant color information from a smartphone image of a banana; and to adjust the brightness of the captured color information to a standardized level. The brightness needs to be standardized because the final phone application needs to be able to operate in uncontrolled lighting conditions, so the initial brightness of images will vary. The colorimetric model meets these two objectives using two processes: the object detection process and the energy calculation and illumination adjustment process. Each of these is explained below.
2.3.1. Object Detection Process
The banana in each image is isolated from its background using the COCO-SSD object detection model [17]. COCO-SSD is able to detect and classify 90 different classes of objects in an image based on its previous training with an image database called the COCO Dataset [25]. That dataset includes bananas, so COCO-SSD has the ability to detect a banana in an image. The performance of the COCO-SSD model in the context of this study will be evaluated in two ways. First, the ability of COCO-SSD to correctly demarcate the object will be evaluated using the Intersection Over Union (IOU) method [26] to calculate the average and standard deviation (SD) of the overlapping area between a rectangular demarcation of the banana's dimensions generated by COCO-SSD and a similar rectangular demarcation generated by the researcher using visual examination together with the LabelImg [27] graphical image annotation tool. Second, the accuracy rate of COCO-SSD in correctly identifying the object as a banana will be calculated.
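For reference, the IOU between a COCO-SSD box and a manually labeled box can be computed as in the following sketch (boxes are assumed to be (x1, y1, x2, y2) corner coordinates; this is a generic illustration, not the authors' evaluation code):

```python
def iou(box_a, box_b):
    """Intersection Over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the overlapping rectangle
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    # Overlap area is zero when the boxes do not intersect
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

An IOU of 1.0 means the predicted and labeled rectangles coincide exactly; values near 0 indicate little or no overlap.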
2.3.2. Energy Calculation and Illumination Adjustment Process
The ultimate purpose of this procedure is to adjust images taken by the application user under varying light conditions so that all images will adopt a uniform brightness that is based on the illumination environment of the original laboratory lightbox pictures. This is carried out through the following four steps.
Step 1: First, the overall brightness (OB) of each standard lightbox image (LI) is measured. The formula used here for calculating the OB of any image is based on the Minkowski norm formula [28], and it is shown in Equation (2).
OB(any_image) = (1/3)·[(Σ_{x=1..M} Σ_{y=1..N} R(x,y)^p) / MN]^{1/p} + (1/3)·[(Σ_{x=1..M} Σ_{y=1..N} G(x,y)^p) / MN]^{1/p} + (1/3)·[(Σ_{x=1..M} Σ_{y=1..N} B(x,y)^p) / MN]^{1/p}    (2)
where OB(any_image) is the overall brightness of all the pixels in the image; M and N are the vertical and horizontal pixel counts of the image, respectively; R, G, and B are the intensity values of the red, green, and blue channels, respectively, in one pixel of the RGB image; and p is the weight of each gray value in the brightness value (p = 2).
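A minimal sketch of Equation (2) in Python, treating the image as a flat list of RGB tuples (illustrative only; the paper computes this in MATLAB):

```python
def overall_brightness(pixels, p=2):
    """Overall brightness (OB) of an RGB image per Equation (2).

    pixels: list of (R, G, B) tuples covering all M*N pixels.
    Each channel contributes one third of the Minkowski p-norm mean
    of its intensities (p = 2 in this study).
    """
    n = len(pixels)
    ob = 0.0
    for ch in range(3):  # R, G, B channels, weighted equally at 1/3
        mean_pow = sum(px[ch] ** p for px in pixels) / n
        ob += (1 / 3) * mean_pow ** (1 / p)
    return ob
```

For a uniform gray image of intensity v, each channel term reduces to v, so OB equals v, matching intuition for a flat brightness measure.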
Step 2: Now, the OB values of all the LIs will be averaged. The average OB of all the LIs is calculated using Equation (3).
OB̄(LI) = [Σ_{i=1}^{N} OB(LI_i)] / N    (3)
where OB̄(LI) is the average overall brightness of all the lightbox images (LI), and N is the total number of lightbox images.
The Standard Brightness (SB) to which each incoming Unadjusted Image (UI) from the app will be adjusted is a collection of data consisting of 10 components, called Brightness Components (BC) here; the OB̄(LI), or average overall brightness of the LIs, is only one of these BCs. The average brightness, contrast, and gradient of each color channel from all pixels are also calculated here. That means that there are an additional nine BCs: Brightness of channel R (BR), Brightness of channel G (BG), Brightness of channel B (BB), Contrast of channel R (CR), Contrast of channel G (CG), Contrast of channel B (CB), Gradient of channel R (GR), Gradient of channel G (GG), and Gradient of channel B (GB) [28]. Although here in Step 2, the 10 BCs have been used to represent the brightness of all LIs as SB, the same 10 BCs can be calculated for any image to similarly represent its brightness, as will be seen in the following Step 3.
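The exact per-channel definitions of brightness, contrast, and gradient are deferred to [28]; as a hedged sketch, one common set of choices is the mean for brightness, the standard deviation for contrast, and the mean absolute horizontal difference for gradient:

```python
import statistics

def brightness_components(channel):
    """Sketch of the per-channel Brightness Components (BC).

    channel: 2-D list (rows of intensities) for one color channel.
    The definitions below (mean, population standard deviation, mean
    absolute horizontal difference) are assumptions standing in for
    the formulas in [28], not the paper's exact computation.
    """
    flat = [v for row in channel for v in row]
    brightness = statistics.fmean(flat)
    contrast = statistics.pstdev(flat)
    diffs = [abs(row[x + 1] - row[x]) for row in channel for x in range(len(row) - 1)]
    gradient = statistics.fmean(diffs) if diffs else 0.0
    return brightness, contrast, gradient
```

Calling this once per channel of an image yields nine of the ten BCs; the tenth is the overall brightness from Equation (2).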
Step 3: Once the OB̄(LI) and other BCs of the LIs have been calculated to establish the SB, the overall brightness of an unadjusted image (OB(UI)) will now be calculated, using Equation (2) as before. The other 9 BCs will also be calculated from the UI here in Step 3.
Step 4: After the OB(UI) is known, the UI will go through the Power-Law method, which, in this study, will create 100 alternative image versions, called Candidate Images (CI), each with a different brightness adjustment, based on all possible combinations of 10 different C constants (set at 1–10) and 10 different gamma (γ) constants (which vary, depending on the UI, as will be explained later). This method is shown in Equation (4). The Power-Law method adjusts brightness by changing the R, G, and B channel intensity levels in each pixel, depending on the interaction of C and gamma. Figure 5 shows an example of how altering gamma can affect channel intensity level.
CI = c · UI^γ    (4)
where CI is the Candidate Image, UI is the Unadjusted Image, c is a constant (positive number), and γ is the gamma constant (positive number).
Figure 5. An example of how altering γ can affect channel intensity level (when c is fixed at 1), as modified from [29].
The Power-Law method [30] can increase or decrease the brightness of the UI to varying degrees by adjusting c and gamma, but the challenge when using this method is to find the optimal c and gamma. A process of trial and error is used here. More specifically, gamma alone will determine whether brightness increases or decreases, but c can affect the amount of increase or decrease. This study uses the results of the OB calculations to find out if brightness needs to be increased or decreased, and the answer will decide which of two sets of gammas is used for optimization, thus halving the time required to determine the best gamma. Whether the brightness needs to be increased or decreased is found as follows:
If OB(UI) > OB̄(LI), the UI has higher overall brightness than the LIs. In this case, a set of gammas greater than or equal to one will be used for optimization, as follows: c = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] and γ = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].
If OB(UI) < OB̄(LI), the UI has lower overall brightness than the LIs. In this case, a set of gammas less than one will be used for optimization, as follows: c = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] and γ = [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9].
In either case above, every possible combination of the specified c and γ values was applied to one UI using the Power-Law method, resulting in 100 alternative versions (10 possible c values × 10 possible γ values) from each UI. These alternative versions are the Candidate Images (CIs) mentioned earlier. They have this name because from these 100 candidates, the optimal c/γ combination will be selected. In order to make that selection, all the BCs of each CI are compared with all the average BCs of the LIs, i.e., the Standard Brightness, and the differences are summed together using the city block distance method [31] to establish the Brightness Difference (BD) between each CI and the Standard Brightness. An ideal BD would be 0, indicating that the brightness of that CI matches the Standard Brightness. The Brightness Difference of each candidate image was calculated using Equation (5).
BD(any c,γ) = |OB̄(LI) − OB(CI)(c,γ)| + |BR of SB − BR of CI(c,γ)| + |CR of SB − CR of CI(c,γ)| + ... + |GB of SB − GB of CI(c,γ)|    (5)
where BD(any c,γ) is the Brightness Difference between the Candidate Image that resulted from c and γ, and the Standard Brightness; OB̄(LI) is the average overall brightness of all the LIs; OB(CI)(c,γ) is the overall brightness of the Candidate Image that resulted from c and γ; BR of SB is the brightness of the red channel of the Standard Brightness; BR of CI(c,γ) is the brightness of the red channel of the Candidate Image that resulted from c and γ; CR of SB is the contrast of the red channel of the Standard Brightness; CR of CI(c,γ) is the contrast of the red channel of the Candidate Image that resulted from c and γ; GB of SB is the average gradient of the blue channel of the Standard Brightness; and GB of CI(c,γ) is the average gradient of the blue channel of the Candidate Image that resulted from c and γ.
After the Brightness Difference has been calculated for all the Candidate Images of the Unadjusted Image, the CI with the lowest BD will be selected as optimal and thereafter be called the Adjusted Image for further use, because the brightness of the UI has been adjusted as close as possible to the Standard Brightness of the lightbox images. This methodology utilizing the Power-Law method in combination with the energy calculation process based on the Minkowski norm formula is referred to here as Automatic Power-Law Transformation (APLT).
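The Step 3–4 search can be sketched as a grid search over (c, γ) that keeps the candidate with the smallest city-block distance to the Standard Brightness. The sketch below is illustrative, not the authors' implementation: it assumes channels normalized to [0, 1] and lets the caller supply the BC-extraction function, so any subset of the ten BCs can stand in for the full vector:

```python
def apl_transform(ui_pixels, standard_bcs, bcs_of, gamma_set, c_set=range(1, 11)):
    """Grid search over (c, gamma): apply Equation (4) per channel, then
    keep the Candidate Image whose Brightness Components are closest to
    the Standard Brightness under city-block distance (Equation (5)).

    ui_pixels: list of (R, G, B) tuples with channels scaled to [0, 1].
    standard_bcs: tuple of reference Brightness Components (the SB).
    bcs_of: caller-supplied function mapping a pixel list to its BC tuple.
    """
    best_ci, best_bd = None, float("inf")
    for c in c_set:
        for g in gamma_set:
            # Equation (4): CI = c * UI ** gamma, clamped to the valid range
            ci = [tuple(min(1.0, c * ch ** g) for ch in px) for px in ui_pixels]
            # Equation (5): city-block distance between BC vectors
            bd = sum(abs(s - v) for s, v in zip(standard_bcs, bcs_of(ci)))
            if bd < best_bd:
                best_ci, best_bd = ci, bd
    return best_ci, best_bd
```

For a dim uniform image with the fractional gamma set, the search lands on the (c, γ) pair that raises the image brightness to match the reference, with BD driven toward 0.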
The effectiveness of APLT will be validated by creating 2 new copies of each of the original 720 unadjusted smartphone images and intentionally corrupting their brightness either upwards or downwards to confirm that APLT is later able to accurately correct their brightness. The images thus used in this experiment will be called Validation Images (VIs). Half of the VIs (720 images) will have their brightness decreased by 50% using the MATLAB "Imadjust" function (RGB,[],[],0.5) [32]. The second half of the VIs (the other 720 images) will have their brightness increased by 50%, also using the Imadjust function (RGB,[],[],1.5). All 1440 VIs will then go through APLT. Finally, the resulting brightness of the VIs will be compared to the Standard Brightness.
2.3.3. Removing the Image Background
An original image from the smartphone has now passed through object detection to isolate the banana from the background as well as the energy calculation and illumination adjustment process to convert the image to Standard Brightness. However, there are still some remnants of the background, because the banana was isolated as a rectangle. The current step will remove remnants of the background by first finding the optimal global threshold of the Adjusted Image using Otsu's method [33] and then setting each background pixel to black and each non-background pixel to white by comparing the threshold value to the intensity of the pixel in a grayscale version of the image, as shown in Equation (6).
g(x,y) = { 1, if intensity of f(x,y) > T; 0, if intensity of f(x,y) ≤ T }    (6)
where g(x,y) is the processed pixel at (x,y) that has been set to black or white; f(x,y) is the unprocessed pixel at (x,y) in the grayscale image; and T is a threshold determined separately by Otsu's Method.
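A compact sketch of this background-removal step (Otsu's threshold on a flat list of grayscale intensities, followed by the Equation (6) binarization; illustrative, not the study's MATLAB code):

```python
def otsu_threshold(gray):
    """Otsu's method: the threshold maximizing between-class variance.

    gray: flat list of integer intensities in 0..255.
    """
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    total = len(gray)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_b, w_b = 0.0, 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]          # background weight: pixels with intensity <= t
        if w_b == 0:
            continue
        w_f = total - w_b       # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b       # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(gray, t):
    """Equation (6): 1 (white, banana) if intensity > T, else 0 (black)."""
    return [1 if v > t else 0 for v in gray]
```

On a strongly bimodal image (dark background, bright banana), the threshold falls between the two intensity clusters, so the mask keeps exactly the bright region.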
2.3.4. Convert Adjusted RGB Data to L*a*b* Data
Remember that the spectrometer reported on three points along the length of the banana, and data from those points were averaged to resemble L*a*b* readings from a single point for use in the colorimetric equation. However, the current adjusted image is still an image with many pixels. Thus, these many pixels similarly need to be averaged together to resemble one pixel, after which that averaged RGB information will be converted to L*a*b* for use in the colorimetric equation. To do this, all the red channel values of non-black pixels in the current image will be averaged together to obtain one average red channel value. In the same way, the green and blue channel values will each be averaged from all non-black pixels. The set of these three channel averages is like RGB values from a single point.
The last step here is to convert the averaged RGB values to CIE L*a*b* color values and
then send the L*a*b* color values to the colorimetric Equation (1) to obtain the resulting
θ value. The complete process of converting the raw smartphone image to a θ value is
referred to here as colorimetric extraction from image (CEFI).
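The tail end of CEFI (averaging the non-black pixels, converting the mean RGB to CIE L*a*b*, and computing θ) can be sketched as follows. The sRGB-to-Lab conversion uses the standard D65 formulas; the helper names are illustrative, not the authors' implementation.

```python
import numpy as np

def srgb_to_lab(rgb):
    # One sRGB triple (0-255) -> CIE L*a*b* under a D65 white point
    c = np.asarray(rgb, dtype=float) / 255.0
    c = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    M = np.array([[0.4124564, 0.3575761, 0.1804375],   # linear sRGB -> XYZ
                  [0.2126729, 0.7151522, 0.0721750],
                  [0.0193339, 0.1191920, 0.9503041]])
    xyz = (M @ c) / np.array([0.95047, 1.0, 1.08883])  # normalize by D65 white
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16.0 / 116.0)
    return 116 * f[1] - 16, 500 * (f[0] - f[1]), 200 * (f[1] - f[2])

def cefi_theta(img):
    # Average R, G, B over non-black pixels, then theta = arctan(b* / |a*|)
    px = img.reshape(-1, 3)
    px = px[px.any(axis=1)]          # drop the black background pixels
    _, a, b = srgb_to_lab(px.mean(axis=0))
    return np.degrees(np.arctan(b / abs(a)))
```

For a uniformly yellow patch (ripe skin color) this lands near the high end of the θ scale, since b* is large and positive while a* is small.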
2.4. Colorimetric Model Adjustment
Data obtained from the spectrometer are considered better than data obtained from
a smartphone, because the smartphone image is subject to uncontrolled lighting, whereas
Sensors 2023,23, 6387 11 of 19
the spectrometer is not. In order to look for any consistent variation between θ values
derived from colorimetric extraction from color data (CEFCD) and θ values derived from
colorimetric extraction from image (CEFI), results from the two methods were compared.
If there is a consistent error rate, an error constant can be incorporated into the colorimetric
model as an adjustment to bring the smartphone result closer to the spectrometer result.
To test for this error rate, the original lightbox images were processed through CEFI, as
if they were smartphone images produced without a lightbox. The θ values from those
results were then compared, banana by banana, to the θ values from the original
spectrometer data (using the same bananas) that had passed through CEFCD. This comparison
was done for all the experimental groups (P25, P30, N25, and N30) and on all the
experiment days. The error rate was calculated using Equation (7).
E_θ = ( |θ_{1,P25 CEFCD} − θ_{1,P25 CEFI}| + |θ_{2,P25 CEFCD} − θ_{2,P25 CEFI}| + ... + |θ_{n,N30 CEFCD} − θ_{n,N30 CEFI}| ) / (M/2)          (7)
where E_θ is the error rate between the θ values of colorimetric extraction from color data
(CEFCD) and colorimetric extraction from image (CEFI) for all experimental groups and
all experiment days; θ_{1,P25 CEFCD} is the θ value of colorimetric extraction from color data
(CEFCD) on Day 1 for P25; θ_{1,P25 CEFI} is the θ value of colorimetric extraction from image
(CEFI) on Day 1 for P25; and M is the total number of θ values.
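Equation (7) reduces to a mean absolute difference over the paired θ values. A minimal sketch (the function name is illustrative):

```python
def theta_error(theta_cefcd, theta_cefi):
    # Equation (7): M counts the theta values of both methods together,
    # so M/2 is the number of CEFCD-CEFI pairs being compared
    m = len(theta_cefcd) + len(theta_cefi)
    total = sum(abs(a - b) for a, b in zip(theta_cefcd, theta_cefi))
    return total / (m / 2)
```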
If the error rate is found to be significant and consistent, then it can be added as an
error constant in the original colorimetric equation. This modified calculation is shown in
Equation (8).
θ = tan⁻¹(B/|A|) + E_θ          (8)
where θ is the angle of ripening, A is the value of a* (CIELAB), B is the value of
b* (CIELAB), and E_θ is the error constant from Equation (7).
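One reading of Equation (8), applying the degree-valued error constant to the angle itself, can be sketched as follows (the 2.8° default is the error constant reported in the Results section; the function name is illustrative):

```python
import math

def ripening_angle(a_star, b_star, e_theta=2.8):
    # theta = arctan(b* / |a*|) plus the error constant, all in degrees
    return math.degrees(math.atan(b_star / abs(a_star))) + e_theta
```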
The angle of ripening from the final version of the colorimetric equation will be
compared with the daily visual observations of the banana skins to map out equivalents
between theta values and the remaining days for consumption (RDFC).
2.5. Application Development
The Automatic Banana Ripeness Indicator (ABRI, pronounced like "Aubrey")
application was developed and designed following the principles of the System Development Life
Cycle (SDLC) [34] for software development. The application has a responsive design that
automatically adjusts for different-sized screens and viewports [35], and it works on any
mobile device running any operating system. The application has two main screens, behind
which all the image processing procedures described to this point run. The first screen
is for photographing a banana, displaying the photo, and initiating colorimetric analysis.
The second screen displays the results of the analysis, indicating the ripeness stage of the
banana and an estimate of the remaining number of days in which it can be consumed.
The application will be evaluated for usability and validity by knowledgeable people in
relevant fields.
3. Results
In the course of this research to develop a colorimetric equation and colorimetric
application to conveniently identify the ripening stages of Kamphaeng Phet lady finger bananas,
an important challenge to overcome is the variety of light environments in which the app
users will ultimately photograph bananas. This section presents the results of the three
experiments described earlier to test and validate three important steps of the development
process. Information on the finished app is also provided here.
3.1. The Object Detection Process Experiment Result
The banana in each of the 720 banana smartphone photos from the data collection
process was demarcated with a rectangle in two ways: first, it was demarcated manually
using the LabelImg graphical image annotation tool; then, it was automatically determined
using the COCO-SSD object detection model. The results of the two methods were
compared. A sample output from LabelImg is shown in Figure 6A, and a sample output
from COCO-SSD is shown in Figure 6B.
Figure 6. Sample detection results showing (A) demarcation using the graphical image annotation
tool LabelImg, (B) demarcation using the COCO-SSD model, and (C) the two demarcated areas
superimposed to show the amount of overlap.
In order to evaluate the model's ability to demarcate bananas accurately, the results
of the two demarcation procedures described above were compared using the Intersection
over Union (IOU) method, which calculates the area where the two rectangles intersect
divided by the total area of the united rectangles (counting the intersection area only once).
The average IOU result for all 720 images was 0.857 with an S.D. of 0.137. These two figures
show that the model can demarcate bananas at high levels of effectiveness and reliability.
Next, the rate at which the COCO-SSD model was able to correctly classify (identify) the photo
as a banana was calculated. The classification results are shown in Figure 7.
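The IOU comparison described above can be sketched for axis-aligned boxes as follows (corner-format boxes are assumed; this is not the authors' code):

```python
def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2); intersection area over union area,
    # with the intersection counted only once in the union
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

An IOU of 1.0 means the manual and automatic rectangles coincide exactly; the study's average of 0.857 indicates substantial overlap on most images.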
Figure 7. The classification results of the COCO-SSD model when applied to the 720 banana images
in this study: (A) the number of correct and incorrect predictions by COCO-SSD and (B) a breakdown
of the predictions other than "Banana".
The COCO-SSD model correctly classified 662 of the 720 images as a banana,
corresponding to a high accuracy rate of 91.94%. Visual examination of the photos that were
misclassified suggests that the issues were related variously to: the distance at which the
photo was taken (in the cases of Frisbee, Surfboard, and Apple), a light reflection in the
photo (Dining Table), and a significant number of dark spots on the banana (Pizza, Bird,
Airplane, Cake). The distance issue is addressed in the application, which constantly
displays the current classification prediction. The user can move closer or further away
until COCO-SSD displays the word "Banana" and then take the photo.
The results of the demarcation and classication experiments indicate that the COCO‑
SSD algorithm functions well in this study for the intended purposes.
3.2. Energy Calculation and Illumination Adjustment Process Experiment Result
Table 1 shows all 10 of the Brightness Components (BCs) of the Standard Brightness
(SB), which was calculated and averaged from all the lightbox images. The goal of this
evaluation was to calculate the Brightness Components of the intentionally brightened or
darkened Validation Images (VIs) after they passed through the energy calculation and
illumination adjustment process and then compare those BCs of the adjusted VIs to the BCs
of the SB, since the goal of the Automatic Power-Law Transformation (APLT) for
illumination adjustment is to move the BCs of the VIs as close as possible to the BCs of the SB.
Table 1. All 10 Brightness Components of the Standard Brightness, which was calculated and
averaged from all the lightbox images.

          Brightness               Contrast                Average Gradients
  OB      r       g       b       r       g       b       r       g       b
  6.14    117.91  117.44  67.88   45.44   45.70   71.96   18.02   18.54   19.35
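The exact definitions of the nine per-channel components are not reproduced here. A plausible sketch, treating brightness as the channel mean, contrast as the standard deviation, and average gradient as the mean gradient magnitude (all of these definitions are assumptions), is:

```python
import numpy as np

def brightness_components(img):
    # img: H x W x 3 float array in [0, 255]; returns, per channel,
    # assumed stand-ins for the paper's (brightness, contrast, average gradient)
    comps = {}
    for i, ch in enumerate("rgb"):
        c = img[..., i].astype(float)
        gy, gx = np.gradient(c)
        comps[ch] = (c.mean(), c.std(), np.hypot(gx, gy).mean())
    return comps
```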
Remember that various gamma and c values were trialed to adjust the BCs of each VI,
and the particular combination of gamma and c that gave the best result for each image
was selected by Equation (5). Figure 8 shows a graph of all the finally selected gamma and
c values that were required to adjust the Brightness Components of the Validation Images
as close as possible to the Brightness Components of the Standard Brightness.
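The per-image search for the best gamma can be sketched as a grid search that minimizes the city-block (L1) distance between a VI's components and the SB's. The grid bounds and the feature function below are illustrative assumptions; Equation (5) itself is not reproduced here.

```python
import numpy as np

def apply_gamma(img, gamma, c=1.0):
    # power-law transform on an 8-bit-range image
    return np.clip(255.0 * c * (img / 255.0) ** gamma, 0.0, 255.0)

def best_gamma(img, target, feature_fn, gammas=np.arange(0.4, 3.01, 0.1)):
    # pick the gamma whose adjusted image has features closest (L1) to the target
    return min(gammas,
               key=lambda g: np.abs(feature_fn(apply_gamma(img, g)) - target).sum())
```

With a feature function returning the components of Table 1, a darkened VI would select a gamma below 1 and a brightened VI a gamma above 1, matching the trend in Figure 8.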
As one would expect, the intentionally darkened VIs had an overall brightness (OB)
value less than the OB of the established Standard Brightness (SB), and the optimized
gamma used to brighten these images to the SB varied between 0.4 and 1, depending on
the image, as shown in Figure 8A. In the case of the VIs that were intentionally brightened,
their OB was greater than the OB of the SB, so those VIs were adjusted to the SB using
optimized gamma values between 2 and 3, as shown in Figure 8B.
Figure 8. Optimized gamma values for the Validation Images after Automatic Power-Law
Transformation (APLT), showing results for (A) images that were intentionally darkened and (B) images
that were intentionally brightened. The optimized c value was always 1.
In order to compare the brightness components of the adjusted validation images to
the Standard Brightness components and thereby evaluate the accuracy of the illumination
adjustment process, the overall brightness (OB) component was disregarded, because
it is a composite value. The remaining nine brightness components each have possible values
ranging from 0 to 255. Therefore, if each of a single adjusted validation image's nine BCs is
subtracted from the corresponding SB BC, the maximum absolute difference would be 255. A
difference of 255 would indicate a completely wrong adjustment, and a difference of 0 would
indicate a perfect adjustment. Therefore, 100% minus the difference of the components
divided by 255 and multiplied by 100 gives the accuracy of that particular image's adjustment.
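The per-image accuracy computation just described can be sketched as follows (the function name is illustrative):

```python
import numpy as np

def adjustment_accuracy(vi_bcs, sb_bcs):
    # nine brightness components per image, each in [0, 255];
    # 100% = perfect match with the SB, 0% = maximally wrong adjustment
    diff = np.abs(np.asarray(vi_bcs, float) - np.asarray(sb_bcs, float))
    return 100.0 * (1.0 - diff.mean() / 255.0)
```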
This calculation was carried out for all 1440 adjusted validation images, and the
average accuracy and S.D. were then found. The result is that the validation images were
adjusted to Standard Brightness with an accuracy of 95.72% (S.D. 5.11). Thus, the
illumination adjustment process developed in this research was very effective at adjusting and
standardizing the brightness of the test images that were intentionally lightened and
darkened for this evaluation, and the process is a good tool for adjusting and standardizing the
brightness of images taken by users of the app under uncontrolled light conditions.
3.3. The Colorimetric Equation Creation Experiment Results
The theta values calculated from the spectrometer data were compared to the theta
values calculated from the corresponding smartphone lightbox images for all four
experimental groups and for all nine experiment days, and the results are shown in Figure 9.
Figure 9. Theta results from the spectrometer are compared with those from the corresponding
smartphone images in all four experimental groups and on all nine experiment days.
The average error rate of the smartphone theta values compared to the spectrometer
theta values was calculated across all four experiment groups and all nine experiment days
using Equation (7), and the result was 2.8° ± 0.2°. Therefore, an error constant of 2.8 was
incorporated into Equation (8). Figure 10 shows the relationships between experiment day,
ripeness stage, theta range, hue range, and remaining days for consumption (RDFC) for
all experimental groups averaged together. As in other related studies on banana ripening,
the passing of one Ripeness Stage takes approximately 24 h. The observed theta range within
each Ripeness Stage varied from 4° to 5°, and the observed hue range within each Ripeness
Stage also varied from 4° to 5°. The first day that the bananas were ripe enough to eat
was experiment Day 5, corresponding to Ripeness Stage 5, and they remained edible for a
total of five days (Ripeness Stages 5–9) before they became overripe. The remaining days
for consumption (RDFC) count includes the current day. Bananas cannot be cut before
Stage 1, because they will only shrivel, not ripen, and bananas cannot be eaten after Stage
9, because they will be overripe.
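The stage-to-RDFC mapping implied by Figure 10 (edible at Stages 5–9, with RDFC counting the current day and one stage lasting roughly 24 h) can be sketched as:

```python
def rdfc(stage):
    # remaining days for consumption, counting the current day;
    # stages below 5 are not yet ripe, stages above 9 are overripe
    if stage < 5:
        return None          # will become edible in (5 - stage) days
    if stage > 9:
        return 0             # overripe, no longer edible
    return 10 - stage        # Stage 5 -> 5 days, ..., Stage 9 -> 1 day
```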
Figure 10. The relationships between experiment day, ripeness stage, theta range, hue range, and
remaining days for consumption (RDFC) for all experimental groups averaged together.
3.4. Application Development
The research in this study was developed into an easy-to-use progressive web
application that conveniently processes banana photos directly from the user's mobile device
camera. The application is accessible from any mobile device and any operating system, and
it is called the Automatic Banana Ripeness Indicator (ABRI, pronounced like "Aubrey").
Figure 11 shows an example of the ABRI app in use.
Figure 11. An example of the ABRI app in use: (A) The banana is brought into the camera frame,
(B) the banana is detected by the app, and (C) analysis results on the ripeness of the banana
are displayed.
When the ABRI icon is clicked, the application immediately opens the device camera
to accept a banana photo. When a banana is brought into the camera frame, the app
automatically detects it and highlights it with a yellow detection box. When the user
then takes the photo, it is automatically processed and analyzed as described earlier in this
study, and the user is taken immediately to the results display page.
After completion, the application was evaluated for usability and accuracy by five
knowledgeable people: two people in the banana business and three biologists. The five
evaluators gave ABRI good marks, with an average of 4.51 points out of 5 possible points
across all the evaluation questions. The ABRI application is now ready to use to
conveniently and accurately identify the ripening stage of Kluai Khai bananas.
4. Discussion
The results of the colorimetric equation and colorimetric detection model for
identification of the ripening stage of the 'Kluai Khai' banana were well aligned with the
spectrometer results, providing an effective level of accuracy. A central strength of the application
lies in its ability to accept photos taken in a variety of lighting environments, because the
Automatic Power-Law Transformation (APLT) method automatically adjusts the image
brightness to an established standard. This is a valuable feature, because unstable lighting
is a challenge that always affects the field of image processing. An overview of the
Automatic Banana Ripeness Indicator (ABRI) developed in the current study in comparison
with previous studies is shown in Table 2, looking at key features, platforms, and accuracy.
Table 2. Overview of the Automatic Banana Ripeness Indicator (ABRI) in comparison with previous
studies, looking at key features, platforms, and accuracy.
Study Contact CE Platform RS RDFC Images Classifier OA (%)
Current Study (ABRI) No No Mobile 9 Yes Yes Yolo + APLT + CIEPT 92.88
[7] No Yes Computer No Yes Yes CIE‑L*a*b* No
[8] Yes Yes Computer 7 No No DA-meter No
[9] No Yes Computer 3 No Yes Size + MCI 85.00
[10] No Yes Mobile 4 No Yes Yolo + SVM 96.40
[11] No Yes Computer 4 No Yes Yolo + CNN 71.95
[12] No Yes Computer 3 No Yes GLCM + NN 75.67
[13] No Yes Computer 14 No Yes CNN 91.00
[14] No Yes Mobile 3 No Yes SA No
[15] No Yes Mobile 3 No Yes DM No
Contact = There is physical contact with the banana skin, CE = Controlled Environment, RS = Ripening Stage,
RDFC = Remaining Days For Consumption, OA = Overall Accuracy, APLT = Automatic Power-Law
Transformation, CIEPT = CIE L*a*b* Color Scale with Pythagorean Theorem, MCI = Mean Color Intensity,
SVM = Support Vector Machines, CNN = Convolutional Neural Networks, GLCM = Gray-Level Co-occurrence
Matrix, SA = Spectral Analysis, and DM = Data Mining.
Each of the studies compared in Table 2 was based on a particular platform and had
its own objectives; thus, the key features vary. A study can be "contact" or "non-contact".
A contact study involves a sensor actually contacting the skin of the banana to collect color
information [19], and this can potentially bruise the banana. Non-contact studies use a
camera to collect color information without contacting the skin of the banana [7,9–15]. Studies
can also be divided by platform, with some studies running on a computer [7,9,11–13] and
other studies running on a smartphone [10,14,15]. A computer can provide higher
computing power, while a smartphone can provide more convenience and portability. That
said, software with special functionality such as object detection and illumination
adjustment can be added to smartphones to compensate for some shortcomings compared to
a computer. Unfortunately, both computer and smartphone platforms that are used in
an uncontrolled environment are subject to unstable lighting. This study overcame that
challenge with the APLT method. The research of Zhu, L. et al. [10] included object
detection using the Yolo algorithm, and it classified bananas into four ripening stages using the
SVM algorithm. However, Zhu et al. were forced to use only lightbox images, because no
way to adjust for uncontrolled lighting was included. Intaravanne, Y. et al. [14] applied
frequency domain analysis to classify bananas into three ripening stages, but they did not
include lighting adjustment either. The study of Kipli, K. et al. [15] was similar to that of
Intaravanne, Y. et al., but it applied data mining to accomplish classification of ripening
stages. The main characteristics of the current study that set it apart from others are the
use of the APLT method and the colorimetric equation as well as the remaining days for
consumption feature that is output by the ABRI app. The fact that ABRI is written as a
progressive web application also adds flexibility and makes installation unnecessary.
5. Conclusions
The Automatic Banana Ripeness Indicator (ABRI) smartphone application developed
here is an accurate and robust tool that quickly, conveniently, and reliably provides the
user with any LFB's ripening stage and the remaining days for consumption. At present,
it has the limitation of processing only this one variety of banana, which is important in this
area. Going forward, the authors plan to expand upon these analytical capabilities to enable
processing of multiple varieties of bananas. The techniques described here also lend
themselves to adaptation for developing related analyses of other important agricultural
products in the future.
Author Contributions: Conceptualization, B.T. and S.B.; methodology, B.T.; software, W.T. and B.T.;
validation, W.T. and B.T.; formal analysis, B.T. and S.B.; investigation, S.B.; resources, B.T.; data
curation, B.T.; writing—original draft preparation, B.T. and S.B.; writing—review and editing, B.T. and
S.B.; visualization, B.T. and S.B.; supervision, B.T.; project administration, B.T. and S.B.; funding
acquisition, B.T. and S.B. All authors have read and agreed to the published version of the manuscript.
Funding: This study was made possible through a grant from the Research and Development
Institute of Kamphaeng Phet Rajabhat University (fiscal year 2021).
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Raw data can be requested through the corresponding author.
Acknowledgments: The authors would like to thank FD Green (Thailand) Co., Ltd. in Kamphaeng
Phet City, Computer Vision and Human Interaction Technologies Laboratory at Kamphaeng Phet
Rajabhat University (KPRU Vision Lab), and Biotechnology Laboratory at Kamphaeng Phet Rajabhat
University (KPRU Biotech Lab) for their kind assistance with this project. Thank you also to Paul
Freund of Naresuan University Writing Clinic (DIALD) for editing this manuscript.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Wongwaiwech, D.; Kamchonemenukool, S.; Ho, C.T.; Li, S.; Thongsook, T.; Majai, N.; Premjet, D.; Sujipuli, K.; Weerawatanakorn, M. Nutraceutical Difference between Two Popular Thai Namwa Cultivars Used for Sun Dried Banana Products. Molecules 2022, 27, 5675. [CrossRef] [PubMed]
2. Ritchie, H.; Rosado, P.; Roser, M. Agricultural Production. Available online: https://ourworldindata.org/agricultural‑production
(accessed on 21 November 2022).
3. Tree Hirunyalawan, V.V.R. Knowledge sharing from farmer/processor and the perceived benets of processed banana con‑
sumers. Kasetsart J. Soc. Sci. 2021,42, 249–254.
4. Chobchainphai, S.; Wasusri, T.; Srilaong, V. The Analysis of production for Khai bananas supply chains a case study of Chan‑
thaburi and Phetchaburi provinces. Thai Agric. Res. J. 2014,32, 16–34.
5. Property, D.o.I. Thai Geographical Indication (GI). Available online: http://www.ipthailand.go.th/images/gibook/GI_Book_111.pdf (accessed on 22 November 2022).
6. Linh, C.N.; Joomwong, A. Effect of 1-MCP in Combination with Heat Treatment on Preservative Quality of Banana (Cv. Kluai Khai) Fruits. Agric. Sci. J. 2011, 42, 341–344.
7. Prommee, W.; Chaiwut, A.; Nanthasen, T. Thai Banana Quality Predictor by Computer Vision. J. Sci. Technol. Ubon Ratchathani Univ. 2019, 21, 128–135.
8. Ringer, T.; Blanke, M. Non‑invasive, real time in‑situ techniques to determine the ripening stage of banana. J. Food Meas. Charact.
2021,15, 4426–4437. [CrossRef]
9. Surya Prabha, D.; Satheesh Kumar, J. Assessment of banana fruit maturity by image processing technique. J. Food Sci. Technol.
2015,52, 1316–1327. [CrossRef]
10. Zhu, L.; Spachos, P. Support vector machine and YOLO for a mobile food grading system. Internet Things 2021,13, 100359.
[CrossRef]
11. Ramadhan, Y.A.; Djamal, E.C.; Kasyidi, F.; Bon, A.T. Identification of Cavendish Banana Maturity Using Convolutional Neural Networks. In Proceedings of the 5th North American International Conference on Industrial Engineering and Operations Management, Detroit, MI, USA, 10–14 August 2020.
12. Anushya, A. Quality Recognition of Banana Images using Classifiers. Int. J. Comput. Sci. Mob. Comput. 2020, 9, 100–106.
13. Zhang, Y.; Lian, J.; Fan, M.; Zheng, Y. Deep indicator for fine-grained classification of banana's ripening stages. EURASIP J. Image Video Process. 2018, 2018, 46. [CrossRef]
14. Intaravanne, Y.; Sumriddetchkajorn, S.; Nukeaw, J. Cell phone‑based two‑dimensional spectral analysis for banana ripeness
estimation. Sens. Actuators B Chem. 2012,168, 390–394. [CrossRef]
15. Kipli, K.; Zen, H.; Sawawi, M.; Noor, M.S.M.; Julai, N.; Junaidi, N.; Razali, M.I.S.M.; Chin, K.L.; Masra, S.M.W. Image Processing
Mobile Application For Banana Ripeness Evaluation. In Proceedings of the 2018 International Conference on Computational
Approach in Smart Systems Design and Applications (ICASSDA), Kuching, Malaysia, 15–17 August 2018; pp. 1–5.
16. Ly, B.; Dyer, E.; Feig, J.; Chien, A.; Bino, S. Research Techniques Made Simple: Cutaneous Colorimetry: A Reliable Technique
for Objective Skin Color Measurement. J. Investig. Dermatol. 2020,140, 4–5. [CrossRef] [PubMed]
17. Zhai, S.; Shang, D.; Wang, S.; Dong, S. DF‑SSD: An Improved SSD Object Detection Algorithm Based on DenseNet and Feature
Fusion. IEEE Access 2020,8, 24344–24357. [CrossRef]
18. StellarNet Inc. BLUE‑Wave Miniature Spectrometer. Available online: https://www.stellarnet.us/spectrometers/blue‑wave‑
miniature‑spectrometers/ (accessed on 5 October 2021).
19. Alexander, J.; Gildea, L.; Balog, J.; Speller, A.; McKenzie, J.; Muirhead, L.; Scott, A.; Kontovounisios, C.; Rasheed, S.; Teare, J.; et al. A novel methodology for in vivo endoscopic phenotyping of colorectal cancer based on real-time analysis of the mucosal lipidome: A prospective observational study of the iKnife. Surg. Endosc. 2016, 31, 1362–1363. [CrossRef] [PubMed]
20. Visual Studio Code. Available online: https://code.visualstudio.com/updates/v1_60 (accessed on 5 October 2021).
21. Support Apple Team. iPhone XS Max—Technical Specifications. Available online: https://support.apple.com/kb/SP780?viewlocale=en_US&locale=th_TH (accessed on 5 October 2021).
22. Tongpoolsomjit, K.; Grandmottet, F.; Boonruangrod, R.; Krueajan, A.; Viyoch, J. Determination of β-carotene content in Musa AA pulp (Kluai Khai) at different ripening stage and harvest period in Thailand. Emir. J. Food Agric. 2020, 32, 442–443. [CrossRef]
23. Bains, B.; Sharma, M.; Singh, S. Quality regulation in banana through post‑harvest treatment with ethylene and ethylene in‑
hibitors. Res. Crop. 2017,18, 657–658. [CrossRef]
24. Wittmann, E.C. Designing teaching: The Pythagorean theorem. In Connecting Mathematics and Mathematics Education: Collected Papers on Mathematics Education as a Design Science; Wittmann, E.C., Ed.; Springer International Publishing: Cham, Switzerland, 2021; pp. 95–108.
25. Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Proceedings of the European Conference on Computer Vision (ECCV 2014), Zurich, Switzerland, 6–12 September 2014; pp. 740–755.
26. Rezatofighi, H.; Tsoi, N.; Gwak, J.; Sadeghian, A.; Reid, I.; Savarese, S. Generalized Intersection Over Union: A Metric and a Loss for Bounding Box Regression. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 658–666.
27. Torralba, A.; Russell, B.C.; Yuen, J. LabelMe: Online Image Annotation and Applications. Proc. IEEE 2010,98, 1467–1484.
[CrossRef]
28. Tanut, B.; Riyamongkol, P. The Development of a Defect Detection Model from the High‑Resolution Images of a Sugarcane
Plantation Using an Unmanned Aerial Vehicle. Information 2020,11, 136. [CrossRef]
29. Tsai, C.‑M. Adaptive Local Power‑Law Transformation for Color Image Enhancement. Appl. Math. Inf. Sci. 2013,7, 2020–2021.
[CrossRef]
30. Yelmanov, S.; Hranovska, O.; Romanyshyn, Y. Image Enhancement Technique based on Power‑Law Transformation of Cumu‑
lative Distribution Function. In Proceedings of the 2019 3rd International Conference on Advanced Information and Communi‑
cations Technologies (AICT), Lviv, Ukraine, 2–6 July 2019; pp. 120–124.
31. Mitra, D.; Sarkar, P.; Roy, P. Face Recognition by City-Block Distance Classifier in Supervised Machine Learning. IJRAR 2019, 6, 186–187. [CrossRef]
32. Gupta, S.; Porwal, R. Appropriate Contrast Enhancement Measures for Brain and Breast Cancer Images. Int. J. Biomed. Imaging
2016,2016, 5–6. [CrossRef] [PubMed]
33. Liu, D.; Yu, J. Otsu Method and K‑means. In Proceedings of the 2009 Ninth International Conference on Hybrid Intelligent
Systems, Shenyang, China, 12–14 August 2009; pp. 344–349.
34. Kyeremeh, K. Overview of System Development Life Cycle Models. SSRN Electron. J. 2019, 15–16. [CrossRef]
35. Almeida, F.; Monteiro, J. The role of responsive design in web development. Webology 2017,14, 48–65.
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.