*Corresponding author
E-mail address: karli.setiawan@binus.ac.id
Received August 06, 2022
Available online at http://scik.org
Commun. Math. Biol. Neurosci. 2022, 2022:98
https://doi.org/10.28919/cmbn/7655
ISSN: 2052-2541
COMPARISON OF DEEP LEARNING SEQUENCE-TO-SEQUENCE MODELS
IN PREDICTING INDOOR TEMPERATURE AND HUMIDITY IN SOLAR
DRYER DOME
KARLI EKA SETIAWAN1,*, GREGORIUS NATANAEL ELWIREHARDJA2, BENS PARDAMEAN1,2
1Computer Science Department, BINUS Graduate Program, Master of Computer Science Program, Bina Nusantara
University, Jakarta, Indonesia 11480
2Bioinformatics and Data Science Research Center, Bina Nusantara University, Jakarta, Indonesia 11480
Copyright © 2021 the author(s). This is an open access article distributed under the Creative Commons Attribution License, which permits
unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract: Solar Dryer Dome (SDD), which is an agriculture facility for preserving and drying agriculture products,
needs an intelligent system for predicting future indoor climate conditions, including temperature and humidity. An
accurate indoor climate prediction can help to control its indoor climate conditions by efficiently scheduling its
actuators, which include fans, heaters, and dehumidifiers that consume a lot of electricity. This research implemented
deep learning architectures to predict future indoor climate conditions such as indoor temperature and indoor humidity
using a dataset generated from the SDD facility in Sumedang, Indonesia. This research compared adapted sequenced
baseline architectures with sequence-to-sequence (seq2seq) or encoder-decoder architectures in predicting sequence
time series data as the input and output of both architecture models which are built based on Recurrent Neural Network
(RNN) layers such as Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM). The result shows that the
adapted sequence baseline model using GRU is the best model, whereas seq2seq models yield bigger Mean Absolute
Error (MAE) values by almost ten times. Overall, all the proposed deep learning models are categorized as extremely
strong with .
Keywords: deep learning; solar dryer dome; sequence-to-sequence prediction; indoor climate prediction.
2010 AMS Subject Classification: 68T05, 62H20, 97R40, 97R30.
1. INTRODUCTION
Indonesia has implemented the Indonesia Agriculture 4.0 programs, under which the agricultural
system should incorporate Artificial Intelligence (AI) or Machine Learning (ML), the Internet of
Things (IoT), and cyber-physical systems. One of these programs is Smart Dome 4.0, a low-cost,
eco-friendly, and sophisticated program to support Indonesian farmers in preserving their
agricultural products [1]. The purpose of building a Solar Dryer Dome (SDD) is food
preservation and maintaining the product’s nutritional content because agricultural products
require a long time to process before they are delivered to consumers [2]. SDD overcomes the
many shortcomings of traditional open-air sun drying, such as longer drying processes, exposure
to rain and dust, droppings from birds and other flying animals, fungal growth, inappropriate
humidity, and color change.
One weakness of SDD is its need for a continuous power source to provide suitable indoor
environmental conditions, with a constant supply of electricity for operating actuators such as
fans, heating systems, and dehumidifiers [1] [3] [4]. SDD uses green energy, collecting solar
energy through a solar panel during the day and storing it in a battery for use at
night. Indonesia, a country with two seasons, has varied solar radiation distributions, which can
pose a problem for SDD solar energy absorption [5]. Days that are dark and rainy throughout are
also problematic, and in mountain areas dramatic weather changes also
affect the indoor SDD climate significantly. Another study on SDD concluded that controlling the
indoor climate by raising indoor temperature and lowering indoor humidity consumes the most power
[6]. This makes predicting environmental parameters for scheduling the actuators important for
SDD: applying actuator scheduling can reduce SDD power consumption by using energy sparingly.
Predicting indoor climate for controlling SDD environmental conditions in order to achieve
power consumption efficiency and the best quality of dried agricultural products is one of the most
important and difficult tasks to perform for SDD [7]. Deep learning (DL), a method that learns the
distribution of the data to be modeled automatically, can be applied to address these challenges in
the agriculture sector, especially DL with RNNs [8] [9]. Many reports show how well RNNs can
solve various challenges where the data is sequential [10] [11]. RNNs can learn historical
information in time series data with the aim of predicting future results [12]. Later, improved
RNN variants such as LSTM and GRU appeared, which are designed for learning long-term dependencies [13].
Sequence-to-sequence (seq2seq) or encoder-decoder is one of the deep learning architectures
which is popularly implemented in Natural Language Processing (NLP), which can output
sequence data from sequence data input [14]. Since the input and output data are sequences, this
research compared both the adapted sequence baseline architecture and the seq2seq architecture,
applying RNN layers in each, so the four proposed models are the adapted sequence baseline with
stacked GRU, the adapted sequence baseline with stacked LSTM, seq2seq with GRU, and seq2seq
with LSTM.
2. RELATED WORKS
For many years, DL has been a major advance in ML research, solving high-dimensional data
problems [15]. DL is used in many domains of science, business, and government. The most popular
deep learning methods used for indoor climate prediction problems are Long Short-Term Memory
(LSTM) and the Gated Recurrent Unit (GRU), a simplified LSTM.
Several studies have used deep learning for indoor climate prediction. The closest work to our
research was done by Gunawan et al. [16]. They developed four deep learning models for predicting
indoor temperature and humidity: LSTM, GRU, Transformer, and Transformer with learnable positional
encoding. Their datasets contained indoor temperature, indoor humidity, and two lighting variables
in three different places. Their results show that
the GRU model was superior for all humidity predictions, and the LSTM model was superior for
2 of 3 temperature predictions.
Another related research was about predicting indoor climate parameters such as temperature,
humidity, and carbon dioxide concentration inside a greenhouse for tomato plants by Ali and
Hassanein [17]. They used the LSTM model as their prediction model. Similarly to Ali and
Hassanein, Jung et al. predicted indoor climate parameters such as temperature, humidity, and
carbon dioxide concentration inside a greenhouse, but they compared three different models:
ANN-Backpropagation, the Nonlinear Autoregressive Exogenous model (NARX), and LSTM, with datasets
obtained from a Davis Wireless Vantage Pro2 weather station (Davis Instruments, California, USA)
and an HMP 35 probe (Vaisala, Helsinki, Finland) [18]. The results concluded that LSTM was
superior to ANN-Backpropagation and NARX.
Another indoor climate prediction study was done by Liu et al. [19]. Their research applied a
time sliding window to their LSTM model to learn changes in the environmental climate over a
short period of time. Their datasets came from tomato, cucumber, and chili greenhouses and
contained indoor temperature and humidity, light intensity, carbon dioxide concentration, soil
temperature, and soil humidity. Their modified LSTM outperformed the GRU model.
Elhariri and Taie conducted a similar study to SDD in which they experimented with Heating,
Ventilation, and Air Conditioning (HVAC), an indoor system similar to SDD [20]. They compared
LSTM and GRU models to predict the future microclimate inside smart buildings by using UCI
Machine Learning Repository SML2010 datasets containing indoor temperature and humidity,
carbon dioxide concentration, and outdoor temperature and humidity. The result showed that the
GRU model was the best in their case.
The research closest to ours that implemented the seq2seq architecture as a time series
prediction model was done by Fang et al. [21]. They predicted the indoor climate inside the
GreEn-ER building in the center of Grenoble, France, with datasets containing indoor temperature
and carbon dioxide. They proposed three seq2seq models, LSTM-Dense, LSTM-LSTM, and
LSTM-Dense-LSTM, which outperformed the LSTM and GRU baseline models.
Inspired by the success of the seq2seq architecture of Fang et al., this research implemented
LSTM and GRU in our seq2seq models to be compared with our proposed adapted baseline models with
stacked LSTM and stacked GRU.
SDD can accomplish many tasks by accurately predicting future indoor climates, such as
regulating indoor climatic conditions, attaining ideal agricultural product drying conditions, and
minimizing energy use [22]. Inspired by Fang et al. and Gunawan et al., this study proposed four
models to be compared: the adapted GRU-based sequence baseline model, the adapted LSTM-based
sequence baseline model, seq2seq GRU, and seq2seq LSTM [16] [21].
3. DATA AND METHODOLOGY
3.1. Datasets
The dataset used in this experiment was generated from an SDD facility in Sumedang, a town in
West Java, Indonesia. The facility can be seen in Figure 1.
FIGURE 1. SDD Facility in Sumedang, Indonesia.
The datasets were generated from two indoor sensors and an outdoor sensor, containing
temperature and humidity data for each sensor over 12 days of recording. The dataset on the SDD
is depicted in Figure 2, with temperature represented by a blue line and humidity represented by
an orange line.
FIGURE 2. Datasets Obtained from Sensor.
Although all models forecasted all six features, using all six as both input and output, this
research only addressed indoor temperature and indoor humidity, because these are the two primary
factors affecting SDD that need to be monitored and controlled. The results for outdoor
temperature and outdoor humidity were therefore ignored.
3.2. Long Short-Term Memory
Due to its capacity for memorizing temporal information over a large number of timesteps,
Long Short-Term Memory (LSTM) is frequently employed in classification and regression tasks
involving sequential data [23]. LSTM is composed of three gates, which are the input gate, output
gate, and forget gate. LSTM is well suited to handling time series predictions and is also a
solution for problems that require temporal memory [24].
󰇛󰇜󰇛󰇛󰇜󰇛󰇜󰇛󰇜󰇜
(1)
The input gate in equation (1) is denoted as 󰇛󰇜 where 󰇛󰇜 , 󰇛󰇜 , and 󰇛󰇜 are the
representations for the input data, last iteration output data, and last iteration cell value respectively
with , , and as weight values. The bias vector of input gate in LSTM is indicated by the
symbol of . The symbol of denotes the sigmoid activation function.
󰇛󰇜󰇛󰇛󰇜󰇛󰇜󰇛󰇜󰇜
(2)
The forget gate in equation (2) is denoted as 󰇛󰇜 which eliminates the information from
previous cell state where , , and symbolize the weight values for input data, last
iteration output data, and last iteration cell value respectively. The bias vector of forget gate in
LSTM is symbolized as .
󰇛󰇜󰇛󰇜󰇛󰇜󰇛󰇜
(3)
The cell value in equation (3) is denoted as  where 󰇛󰇜 is the block input.
󰇛󰇜󰇛󰇛󰇜󰇛󰇜󰇛󰇜󰇜
(4)
The output gate in equation (4) is denoted as 󰇛󰇜 where , , and are the weight
values for input data, last iteration output data, and last iteration cell value respectively.
󰇛󰇜󰇛󰇛󰇜󰇜󰇛󰇜
(5)
The block output of LSTM in equation (5) is denoted as 󰇛󰇜 which combine current cell
value and the output gate in LSTM where 󰇛󰇜 is hyperbolic tangent function.
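As a concrete illustration of equations (1)-(5), a single LSTM step can be sketched in NumPy. This is an illustrative sketch, not the authors' Keras implementation; the layer sizes and random weights are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x_t, h_prev, c_prev, p):
    """One LSTM step following equations (1)-(5)."""
    i = sigmoid(p["Wi"] @ x_t + p["Ui"] @ h_prev + p["Vi"] @ c_prev + p["bi"])  # input gate, eq. (1)
    f = sigmoid(p["Wf"] @ x_t + p["Uf"] @ h_prev + p["Vf"] @ c_prev + p["bf"])  # forget gate, eq. (2)
    z = np.tanh(p["Wz"] @ x_t + p["Uz"] @ h_prev + p["bz"])                     # block input
    c = f * c_prev + i * z                                                       # cell value, eq. (3)
    o = sigmoid(p["Wo"] @ x_t + p["Uo"] @ h_prev + p["Vo"] @ c_prev + p["bo"])  # output gate, eq. (4)
    h = o * np.tanh(c)                                                           # block output, eq. (5)
    return h, c

# Hypothetical sizes: 6 input features (as in the SDD dataset), 4 hidden units.
rng = np.random.default_rng(0)
n_in, n_hid = 6, 4
weight_names = ["Wi", "Ui", "Vi", "Wf", "Uf", "Vf", "Wz", "Uz", "Wo", "Uo", "Vo"]
p = {k: rng.normal(scale=0.1, size=(n_hid, n_in if k.startswith("W") else n_hid))
     for k in weight_names}
p.update({b: np.zeros(n_hid) for b in ["bi", "bf", "bz", "bo"]})

h, c = lstm_cell(rng.normal(size=n_in), np.zeros(n_hid), np.zeros(n_hid), p)
```

In practice a Keras LSTM layer performs these steps internally across all timesteps; the sketch only makes the gate arithmetic explicit.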
3.3. Gated Recurrent Unit
Gated Recurrent Unit (GRU) is a simplified LSTM with one less gate [25]. GRU mostly outperforms
LSTM in many cases [26]. Where LSTM has three gates (input, forget, and output), GRU has two:
the reset gate, symbolized as $r(t)$, and the update gate, symbolized as $z(t)$ [27].

$$r(t) = \sigma\big(W_r x(t) + U_r h(t-1) + b_r\big) \qquad (6)$$

$$z(t) = \sigma\big(W_z x(t) + U_z h(t-1) + b_z\big) \qquad (7)$$

$$\tilde{h}(t) = \tanh\big(W_h x(t) + U_h (r(t) \odot h(t-1)) + b_h\big) \qquad (8)$$

$$h(t) = (1 - z(t)) \odot h(t-1) + z(t) \odot \tilde{h}(t) \qquad (9)$$

where $\odot$ is the element-wise multiplier; $W$ and $U$ are weight values; $x(t)$ is the input
data; $\tilde{h}(t)$ is the candidate state; $h(t)$ is the output; $b$ are bias vectors; and
$\sigma$ and $\tanh$ are the sigmoid and hyperbolic tangent activation functions.
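Equations (6)-(9) can likewise be sketched as a single GRU step in NumPy; again this is an illustrative sketch with hypothetical sizes and random weights, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x_t, h_prev, p):
    """One GRU step following equations (6)-(9)."""
    r = sigmoid(p["Wr"] @ x_t + p["Ur"] @ h_prev + p["br"])             # reset gate, eq. (6)
    z = sigmoid(p["Wz"] @ x_t + p["Uz"] @ h_prev + p["bz"])             # update gate, eq. (7)
    h_cand = np.tanh(p["Wh"] @ x_t + p["Uh"] @ (r * h_prev) + p["bh"])  # candidate state, eq. (8)
    return (1.0 - z) * h_prev + z * h_cand                              # output, eq. (9)

# Hypothetical sizes: 6 input features, 4 hidden units.
rng = np.random.default_rng(1)
n_in, n_hid = 6, 4
p = {k: rng.normal(scale=0.1, size=(n_hid, n_in if k.startswith("W") else n_hid))
     for k in ["Wr", "Ur", "Wz", "Uz", "Wh", "Uh"]}
p.update({b: np.zeros(n_hid) for b in ["br", "bz", "bh"]})

h = gru_cell(rng.normal(size=n_in), np.zeros(n_hid), p)
```

The missing third gate (and the absence of a separate cell state) is what makes GRU cheaper to train than LSTM.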
3.4. Prediction Model Architectures
3.4.1 Adapted Baseline Sequence Models
This research modified the models implemented by Gunawan et al. to handle sequence inputs
and sequence outputs [16]. The adapted baseline sequence models consisted of the two most
popular RNN layers, which are LSTM and GRU, which can be seen in Figure 3.
FIGURE 3. Adapted Baseline Architecture with Stacked LSTM (a) and Stacked GRU (b).
Since the datasets in this research were processed as 3-dimensional (3D) data because of the
sliding window process, the 3D data input $(c, t, f)$ of the adapted baseline sequence models was
reshaped to 2D data $(c, t \times f)$, with $c$ representing the number of sliding windows, $t$
representing the timesteps, and $f$ representing the number of features. The reshaping process is
illustrated in Figure 4. Because the output of the adapted baseline models is 2D data
$(c, t \times f)$, the output needed to be reshaped back to 3D data $(c, t, f)$.
FIGURE 4. Reshaping Illustration.
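The reshaping described above can be sketched with NumPy; the window count, timestep counts, and feature count below are hypothetical stand-ins for the paper's $(c, t, f)$ shapes.

```python
import numpy as np

# Hypothetical sizes: c = 100 windows, t = 150 input timesteps, f = 6 features.
c, t, f = 100, 150, 6
windows_3d = np.arange(c * t * f, dtype=float).reshape(c, t, f)

# Flatten each window so the adapted baseline models see 2D input (c, t*f) ...
flat_2d = windows_3d.reshape(c, t * f)

# ... and reshape the 2D model output back to 3D (c, t_out, f), here with 5 output timesteps.
t_out = 5
output_2d = np.zeros((c, t_out * f))
output_3d = output_2d.reshape(c, t_out, f)
```

`reshape` only changes the view of the data, so the round trip is lossless.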
The hyperparameters of the adapted baseline models followed the baseline settings of Gunawan et
al.: 128 neurons in both the LSTM and GRU layers, a batch size of 64, a learning rate of 0.001,
and 100 epochs with the Adam optimization algorithm [16].
3.4.2 Sequence-to-sequence (Seq2seq) or Encoder-Decoder Models
The encoder-decoder or sequence-to-sequence (seq2seq) is also part of deep learning, which
originated from machine translation problems, where at the beginning of its appearance, the
seq2seq architecture could empirically perform well for translation tasks from English to French
[28]. Seq2seq consists of two Recurrent Neural Networks (RNN) which act as the encoder and
decoder. The Seq2seq architecture is mostly used for language processing models [29] and has
rarely been used for indoor climate forecasting [21]. The seq2seq model has also performed well in
predicting time-series data, such as the Beijing PM2.5 dataset, energy consumption in Sceaux,
highway traffic in the UK, Italian air quality, and California traffic with the PeMS-Bay dataset [30].
FIGURE 5. Architecture of LSTM Seq2seq.
FIGURE 6. Architecture of GRU Seq2seq.
This research implemented two simple seq2seq architectures, with RNN layers (LSTM and GRU) used
in the encoder and decoder, which can be seen in Figures 5 and 6. Both
seq2seq architectures implemented batch normalization between the encoder and decoder, because
seq2seq is more complex than our adapted baseline models. Batch normalization can improve
accuracy and generalization [12] and accelerate the training process, making it one of the
favorite techniques in deep learning [31].
To obtain suitable hyperparameter settings for the seq2seq models, this research observed early
training behavior in short runs of 10 epochs using random search, since this can hint at suitable
model settings without consuming time and expensive computational resources [32]. The random
search showed that 64 neurons in each GRU and LSTM layer and a learning rate of 0.00001 provided
the best results. This research then used a batch size of 64, 100 epochs, and the Adam
optimization algorithm.
Adam was chosen because it has advantages over other adaptive learning rate optimization
algorithms, including the ability to handle non-stationary objectives like RMSProp and manage
sparse gradients like AdaGrad [33].
3.5. Pearson Correlation
󰇛󰇜󰇛󰇜
󰇛󰇜󰇛󰇜
(10)
To find out the correlation between each parameter, Pearson Correlation Coefficient (PCC),
which is denoted as  was implemented where and are the compared parameters,  and
are the mean value of and respectively [34]. The PCC results will be in the range [-1,1]
where  means that the correlation is extremely negative and  is conversely
[35].
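Equation (10) can be sketched directly in NumPy; the small temperature and humidity arrays below are made-up values chosen only to illustrate a strong negative correlation like the one reported for the SDD dataset.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient per equation (10)."""
    xd, yd = x - x.mean(), y - y.mean()
    return (xd * yd).sum() / np.sqrt((xd ** 2).sum() * (yd ** 2).sum())

# Hypothetical readings: humidity falls as temperature rises.
temp = np.array([30.0, 32.5, 35.0, 33.0, 31.0])
hum = np.array([70.0, 64.0, 58.0, 62.0, 67.0])
r = pearson(temp, hum)
```

A parameter correlated with itself gives $r_{xy} = 1$, the upper end of the range.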
3.6. Performance Metrics
The most commonly used performance metrics which are implemented in regression analysis
cases in machine learning studies are Mean Absolute Error (MAE), Mean Square Error (MSE),
and Root Mean Square Error (RMSE) [36]. In fact, each error measurement has different
disadvantages that can lead to inaccurate evaluation of forecasting results, which makes it not
recommended to only use one measurement [37]. This research aimed to forecast indoor
temperature and humidity in the future, which made MAE and RMSE an ideal choice for collecting
error information in the models. This research also implemented the coefficient of determination
($R^2$) because of its potential to compare ground truth elements with predicted data considering
their distribution [36].
$$\mathrm{MAE} = \frac{1}{n}\sum_{t=1}^{n}\left| y_t - \hat{y}_t \right| \qquad (11)$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{t=1}^{n}\left( y_t - \hat{y}_t \right)^2} \qquad (12)$$

$$R^2 = 1 - \frac{\sum_{t=1}^{n}(y_t - \hat{y}_t)^2}{\sum_{t=1}^{n}(y_t - \bar{y})^2} \qquad (13)$$

In the MAE, RMSE, and $R^2$ equations, $\hat{y}_t$ is the predicted value at time $t$, $y_t$ is
the ground truth data at time $t$, and $\bar{y}$ represents the mean of the ground truth data.
Both MAE and RMSE results lie in the range $[0, \infty)$, with values closer to 0 being better;
meanwhile, the $R^2$ result lies in the range $(-\infty, 1]$, with values closer to 1 being better.

The $R^2$ value describes the proportion of variance in one variable that is explained by another
variable [38]. $R^2$ can be categorized as strong or weak according to the thresholds given in
[39], with a moderate range between the two.
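Equations (11)-(13) translate directly into NumPy; the small ground-truth and prediction arrays below are hypothetical values for illustration only.

```python
import numpy as np

def mae(y, y_hat):
    """Mean Absolute Error, eq. (11)."""
    return np.mean(np.abs(y - y_hat))

def rmse(y, y_hat):
    """Root Mean Square Error, eq. (12)."""
    return np.sqrt(np.mean((y - y_hat) ** 2))

def r2(y, y_hat):
    """Coefficient of determination, eq. (13)."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical ground truth and predictions (e.g., indoor temperature in deg C).
y = np.array([30.0, 31.0, 33.0, 32.0])
y_hat = np.array([30.1, 30.9, 33.2, 31.8])
```

Using the three metrics together, as the text recommends, guards against the blind spots of any single one.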
4. EXPERIMENTS
4.1. Experimental Environments
In this research, the TensorFlow Python library version 2.8.2 and the Keras library version 2.8.0
were used to train the models. The operating system was Ubuntu 20.04. The graphics card used in
this research was an NVIDIA Quadro RTX 8000, with 127 GB of RAM.
4.2. Features Correlation in Dataset
Figure 7 depicts the PCC values among all parameters, where temp1, hum1, temp2, and hum2 are
indoor parameters and temp3 and hum3 are outdoor parameters. The PCC results show extremely
strong positive correlations among the temperature parameters, with values of 0.97 to 1, and
likewise among the humidity parameters. Meanwhile, the correlations between temperature and
humidity parameters show extremely strong negative PCC values of less than -0.92.
FIGURE 7. PCC Values among All Dataset Parameters.
4.3. Preprocessing Dataset
The dataset contained a large amount of appropriate time series data, which made it suitable for
deep learning models [40] [41], as shown in Figure 2. This study divided the dataset into two
parts, training data and test data, with percentages of 80% and 20% respectively. The models
presented in this study consumed the training data in the training step, with 80% of the training
data being used to train the models and 20% of the training data being used to validate them.
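The nested 80/20 split described above can be sketched as follows; the row count is hypothetical, and the split is taken in time order rather than shuffled, as is usual for time series data.

```python
import numpy as np

n = 1000                           # hypothetical number of time-ordered records
data = np.arange(n)                # stand-in for the dataset rows

split = int(n * 0.8)               # first 80% for training, last 20% for testing
train, test = data[:split], data[split:]

fit_split = int(len(train) * 0.8)  # 80% of the training data fits the model,
train_fit, val = train[:fit_split], train[fit_split:]  # the remaining 20% validates it
```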
Because extremely high or low data values can cause the models to overfit, data standardization
was used in this experiment to help the models learn the data [42].
󰇛󰇜󰇛󰇜
󰇛󰇜
(14)
In this study, Z-score standardization, designated as 󰇛󰇜 with mean 󰇛󰇜 and standard
deviation as (󰇜 and, is used. Figures 8 show the outcomes of applying Z-score standardization to
our datasets.
Figure 8 shows how the standardized datasets were divided into two parts: training and testing.
Relative humidity data in orange and temperature data in blue were used to train the models, while
relative humidity data in red and temperature data in green were used to test them.
FIGURE 8. Data Standardization Results.
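Equation (14) can be sketched in NumPy; the temperature values below are hypothetical.

```python
import numpy as np

def z_score(x):
    """Z-score standardization per equation (14): (x - mean) / std."""
    return (x - x.mean()) / x.std()

# Hypothetical indoor temperature readings in deg C.
temps = np.array([28.0, 35.0, 42.0, 39.0, 31.0])
z = z_score(temps)
```

After standardization the data has zero mean and unit standard deviation, which is what keeps extreme raw values from dominating training; predictions are mapped back to real units by inverting the transform.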
Accordingly, the data needed to be transformed from 2D data illustrated as $(n, f)$, where $n$
represents the amount of data and $f$ represents a data feature (indoor temperature 1, indoor
humidity 1, indoor temperature 2, indoor humidity 2, outdoor temperature, and outdoor humidity),
into 3D data illustrated as $(c, t, f)$, where $c$ represents the number of smaller data
partitions and $t$ represents the number of timesteps of the input or output data. This research
aims to
predict the next 5 timesteps based on the 150 previous timesteps, because five timesteps of
predicted data should be enough to support SDD operations. Figure 9 illustrates the sliding
window process.
FIGURE 9. Input and Output Data after Sliding Window Process.
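The sliding window process above (150 input timesteps predicting the next 5) can be sketched as follows; the series length is hypothetical, and this is an illustrative sketch rather than the authors' code.

```python
import numpy as np

def sliding_windows(data, n_in=150, n_out=5):
    """Cut a (rows, features) series into overlapping (input, output) pairs:
    each sample uses n_in past timesteps to predict the next n_out."""
    X, y = [], []
    for start in range(len(data) - n_in - n_out + 1):
        X.append(data[start:start + n_in])
        y.append(data[start + n_in:start + n_in + n_out])
    return np.array(X), np.array(y)

# Toy stand-in for the 6-feature SDD time series.
series = np.random.default_rng(2).normal(size=(1000, 6))
X, y = sliding_windows(series)
```

Each produced sample shifts the window by one timestep, which is what turns the flat series into the 3D $(c, t, f)$ tensors the models consume.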
4.4. Model Training
Figure 10 shows that both the adapted LSTM and GRU baseline models were better at learning the
datasets containing extremely strong positive and negative PCC values. The adapted baseline
sequence models reached a loss value below 0.01 MAE in both the training and validation processes.
Meanwhile, the seq2seq LSTM and GRU models only reached a loss value of around 0.05 MAE in both
training and validation. The training and validation processes suggest that the seq2seq models
were too complicated to learn these datasets containing only extremely strong PCC values, while
the adapted baseline models were simple enough and well suited to learn them.
FIGURE 10. Train and Validation Loss Plot in MAE of All Models.
4.5. Model Results and Comparison
This research compared all prediction models by using the testing data, which was untrained data,
or a fifth of the original datasets. The untrained data, depicted as the red and green lines in
Figure 8, was fed into each model, yielding predicted data $(\hat{y})$. Since the prediction
results were standardized with the Z-score, they needed to be converted back to their real
ranges. Both the prediction results $(\hat{y})$ and the ground truth data $(y)$ contained 6
features (indoor temperature and humidity 1, indoor temperature and humidity 2, and outdoor
temperature and humidity) over 5 timesteps, and were compared using MAE, RMSE, and $R^2$ as
performance metrics.
A quick glance at Tables 1 and 2 shows that indoor temperature prediction testing using the
adapted GRU baseline produced the best results in MAE, RMSE, and $R^2$. The results show
significant differences between the adapted sequence baseline models and the seq2seq models: the
adapted LSTM baseline outperformed the seq2seq LSTM model by an average MAE difference of 0.3473
across both indoor temperature predictions, and the adapted GRU baseline outperformed the seq2seq
GRU model by an average MAE difference of 0.3766.
TABLE 1. Indoor Temperature 1 Testing Results.
| Time    | Error Metrics | Adapted LSTM Baseline | Seq2seq LSTM |
|---------|---------------|-----------------------|--------------|
| Overall | MAE           | 0.064542              | 0.418707     |
| Overall | RMSE          | 0.093637              | 0.626587     |
| Overall | R²            | 0.999916              | 0.996169     |
| t+1     | MAE           | 0.052800              | 0.398213     |
| t+1     | RMSE          | 0.068675              | 0.585514     |
| t+1     | R²            | 0.999955              | 0.996666     |
| t+2     | MAE           | 0.052707              | 0.392174     |
| t+2     | RMSE          | 0.076581              | 0.595027     |
| t+2     | R²            | 0.999944              | 0.996542     |
| t+3     | MAE           | 0.061645              | 0.415084     |
| t+3     | RMSE          | 0.088463              | 0.618344     |
| t+3     | R²            | 0.999925              | 0.996287     |
| t+4     | MAE           | 0.072228              | 0.437910     |
| t+4     | RMSE          | 0.102411              | 0.650243     |
| t+4     | R²            | 0.999899              | 0.995878     |
| t+5     | MAE           | 0.083332              | 0.450155     |
| t+5     | RMSE          | 0.122248              | 0.678977     |
| t+5     | R²            | 0.999856              | 0.995474     |
TABLE 2. Indoor Temperature 2 Testing Results.
| Time    | Error Metrics | Adapted LSTM Baseline | Seq2seq LSTM |
|---------|---------------|-----------------------|--------------|
| Overall | MAE           | 0.072104              | 0.412551     |
| Overall | RMSE          | 0.114922              | 0.624098     |
| Overall | R²            | 0.999870              | 0.996158     |
| t+1     | MAE           | 0.056056              | 0.432378     |
| t+1     | RMSE          | 0.088789              | 0.605891     |
| t+1     | R²            | 0.999923              | 0.996466     |
| t+2     | MAE           | 0.059912              | 0.364134     |
| t+2     | RMSE          | 0.087589              | 0.565115     |
| t+2     | R²            | 0.999925              | 0.996854     |
| t+3     | MAE           | 0.075596              | 0.403863     |
| t+3     | RMSE          | 0.114746              | 0.608699     |
| t+3     | R²            | 0.999871              | 0.996352     |
| t+4     | MAE           | 0.081308              | 0.423658     |
| t+4     | RMSE          | 0.125917              | 0.647581     |
| t+4     | R²            | 0.991026              | 0.995839     |
| t+5     | MAE           | 0.087647              | 0.438720     |
| t+5     | RMSE          | 0.146487              | 0.686407     |
| t+5     | R²            | 0.999790              | 0.995280     |
The results of indoor humidity prediction testing based on Tables 3 and 4 show a trend similar to
the indoor temperature results: the adapted sequence baseline models outperformed the seq2seq
models, where the adapted LSTM baseline model performed better than the seq2seq LSTM model with
an average MAE difference across both indoor humidity predictions of 0.9127, and the adapted GRU
baseline model performed better than the seq2seq GRU model with an average MAE difference of
0.6046.
TABLE 3. Indoor Humidity 1 Testing Results.
| Time    | Error Metrics | Adapted LSTM Baseline | Seq2seq LSTM |
|---------|---------------|-----------------------|--------------|
| Overall | MAE           | 0.087518              | 1.124548     |
| Overall | RMSE          | 0.125062              | 1.356994     |
| Overall | R²            | 0.999965              | 0.996109     |
| t+1     | MAE           | 0.076289              | 0.977144     |
| t+1     | RMSE          | 0.103721              | 1.221450     |
| t+1     | R²            | 0.999976              | 0.996823     |
| t+2     | MAE           | 0.080836              | 1.109321     |
| t+2     | RMSE          | 0.108182              | 1.312224     |
| t+2     | R²            | 0.999974              | 0.996356     |
| t+3     | MAE           | 0.086753              | 1.214375     |
| t+3     | RMSE          | 0.119617              | 1.428900     |
| t+3     | R²            | 0.999968              | 0.995716     |
| t+4     | MAE           | 0.092664              | 1.199042     |
| t+4     | RMSE          | 0.134137              | 1.431309     |
| t+4     | R²            | 0.999959              | 0.995691     |
| t+5     | MAE           | 0.101051              | 1.122856     |
| t+5     | RMSE          | 0.153102              | 1.379455     |
| t+5     | R²            | 0.999947              | 0.995961     |
Table 3 reveals an interesting exception: the adapted LSTM baseline model outperformed all models
in predicting indoor humidity 1, while the adapted GRU baseline outperformed all models in
predicting indoor temperature 1, indoor temperature 2, and indoor humidity 2.
TABLE 4. Indoor Humidity 2 Testing Results.
| Time    | Error Metrics | Adapted LSTM Baseline | Seq2seq LSTM |
|---------|---------------|-----------------------|--------------|
| Overall | MAE           | 0.098777              | 0.887168     |
| Overall | RMSE          | 0.169456              | 1.157673     |
| Overall | R²            | 0.999938              | 0.997292     |
| t+1     | MAE           | 0.068244              | 0.917527     |
| t+1     | RMSE          | 0.113080              | 1.162305     |
| t+1     | R²            | 0.999972              | 0.997284     |
| t+2     | MAE           | 0.099880              | 0.849962     |
| t+2     | RMSE          | 0.154163              | 1.124332     |
| t+2     | R²            | 0.999949              | 0.997445     |
| t+3     | MAE           | 0.089917              | 0.890735     |
| t+3     | RMSE          | 0.162347              | 1.153160     |
| t+3     | R²            | 0.999943              | 0.997315     |
| t+4     | MAE           | 0.104501              | 0.898606     |
| t+4     | RMSE          | 0.181975              | 1.171240     |
| t+4     | R²            | 0.999928              | 0.997226     |
| t+5     | MAE           | 0.131342              | 0.879009     |
| t+5     | RMSE          | 0.218064              | 1.176594     |
| t+5     | R²            | 0.999897              | 0.997191     |
The results in Tables 1, 2, 3, and 4 show that in overall prediction, the adapted baselines with
LSTM and GRU both outperformed the seq2seq models with LSTM and GRU. In predicting indoor
temperature, the adapted baseline model with GRU was the best. Meanwhile, in predicting indoor
humidity, both the adapted baseline models
with GRU and LSTM were comparable. The seq2seq model, a deep learning architecture popular in
natural language processing [43], is more complex than the adapted baseline model, and proved too
complex to handle a dataset containing extremely strong positive PCC values between the same
temperature or humidity parameters and extremely strong negative PCC values between temperature
and humidity parameters. A quick glance at all the testing result tables shows that the seq2seq
models produced roughly ten times higher errors than the adapted sequence baseline models, but
all models were still good at predicting indoor climate, with an average error of around 0.5 for
indoor temperature prediction and 1.2 for indoor humidity prediction. By the coefficient of
determination, all of the models can be categorized as strong, with $R^2 \geq 0.99$ [39].
The dataset used in this research contained a large amount of real-time sensor data, which may
contain noise [44]. For future work, the next research will implement Kalman filtering to reduce
this noise in order to increase the accuracy of all models.
5. CONCLUSION
The results show that in processing a dataset containing only extremely strong positive or
extremely strong negative PCC values between its parameters, both of our adapted baseline models
outperformed both of our seq2seq models. All models were good at predicting indoor temperature
and humidity because they had relatively small MAE and RMSE values. The coefficient of
determination values of all models were also categorized as strong, with $R^2 \geq 0.99$.
The seq2seq models still have the potential to be improved, for example by implementing attention
layers. In future research, there is a plan to improve the seq2seq architectures by adding an
attention layer and stacking RNN layers inside both the encoder and decoder, and to extend the
task to a more complex problem, such as increasing the number of timesteps in both the input and
output. Because the dataset contains a large amount of time series data captured by sensors
inside the SDD, a future study will also investigate reducing noise from the dataset using a
filtering technique such as Kalman filtering.
ACKNOWLEDGEMENTS
The experiments in this study used a computer with NVIDIA Quadro RTX 8000 with 127GB
of RAM facilitated by NVIDIA-BINUS Artificial Intelligence Research and Development Center
(NVIDIA-AIRDC). The authors are grateful to the Electrical Engineering Department at Trisakti
University for helping to provide the dataset from SDD facility in Sumedang, Indonesia.
CONFLICT OF INTERESTS
The authors declare that there is no conflict of interests.
REFERENCES
[1] A.S. Budiman, F. Gunawan, E. Djuana, et al. Smart dome 4.0: Low-cost, independent, automated energy system
for agricultural purposes enabled by machine learning, J. Phys.: Conf. Ser. 2224 (2022), 012118.
https://doi.org/10.1088/1742-6596/2224/1/012118.
[2] G. Srinivasan, P. Muthukumar, A review on solar greenhouse dryer: Design, thermal modelling, energy, economic
and environmental aspects, Solar Energy. 229 (2021), 3–21.
https://doi.org/10.1016/j.solener.2021.04.058.
[3] F.E. Gunawan, A.S. Budiman, B. Pardamean, et al. Design and energy assessment of a new hybrid solar drying
dome - Enabling Low-Cost, Independent and Smart Solar Dryer for Indonesia Agriculture 4.0, IOP Conf. Ser.:
Earth Environ. Sci. 998 (2022), 012052. https://doi.org/10.1088/1755-1315/998/1/012052.
[4] R.E. Caraka, R.C. Chen, S.A. Bakar, Employing best input SVR robust lost function with nature-inspired
metaheuristics in wind speed energy forecasting, IAENG Int. J. Comput. Sci. 47 (2020), 572–584.
[5] R.E. Caraka, B.D. Supatmanto, M. Tahmid, et al. Rainfall forecasting using PSPline and rice production with
ocean-atmosphere interaction, IOP Conf. Ser.: Earth Environ. Sci. 195 (2018) 012064.
https://doi.org/10.1088/1755-1315/195/1/012064.
[6] D.N.N. Putri, D.P. Adji, Stevanus, et al. Power system design for solar dryer dome in agriculture, in: 3rd
International Conference on Sustainable Engineering and Creative Computing (ICSECC) 2021, Cikarang,
Indonesia.
[7] R.E. Caraka, R.C. Chen, H. Yasin, et al. Hybrid vector autoregression feedforward neural network with genetic
algorithm model for forecasting space-time pollution data, Indonesian J. Sci. Technol. 6 (2021), 243–266.
https://doi.org/10.17509/ijost.v6i1.32732.
[8] F.Q. Lauzon, An introduction to deep learning, in: 2012 11th International Conference on Information Science,
Signal Processing and Their Applications (ISSPA), IEEE, Montreal, QC, Canada, 2012: pp. 1438–1439.
https://doi.org/10.1109/ISSPA.2012.6310529.
[9] H. Prabowo, A.A. Hidayat, T.W. Cenggoro, et al. Aggregating time series and tabular data in deep learning model
for university students’ GPA prediction, IEEE Access. 9 (2021), 87370–87377.
https://doi.org/10.1109/access.2021.3088152.
[10] A. Sherstinsky, Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network,
Physica D: Nonlinear Phenomena. 404 (2020), 132306. https://doi.org/10.1016/j.physd.2019.132306.
[11] A. Budiarto, R. Rahutomo, H.N. Putra, et al. Unsupervised news topic modelling with Doc2Vec and spherical
clustering, Procedia Computer Sci. 179 (2021), 40–46. https://doi.org/10.1016/j.procs.2020.12.007.
[12] Z. Shen, Y. Zhang, J. Lu, et al. A novel time series forecasting model with deep learning, Neurocomputing. 396
(2020), 302–313. https://doi.org/10.1016/j.neucom.2018.12.084.
[13] A. Shewalkar, D. Nyavanandi, S.A. Ludwig, Performance evaluation of deep neural networks applied to speech
recognition: RNN, LSTM and GRU, J. Artif. Intell. Soft Comput. Res. 9 (2019) 235–245.
https://doi.org/10.2478/jaiscr-2019-0006.
[14] S. Hwang, G. Jeon, J. Jeong, et al. A novel time series based Seq2Seq model for temperature prediction in firing
furnace process, Procedia Computer Sci. 155 (2019), 19–26. https://doi.org/10.1016/j.procs.2019.08.007.
[15] Y. LeCun, Y. Bengio, G. Hinton, Deep learning, Nature. 521 (2015), 436–444.
https://doi.org/10.1038/nature14539.
[16] F.E. Gunawan, A.S. Budiman, B. Pardamean, et al. Multivariate time-series deep learning for joint prediction of
temperature and relative humidity in a closed space, in: 2021 International Conference on Computer Science and
Computational Intelligence, 2021.
[17] A. Ali, H.S. Hassanein, Wireless sensor network and deep learning for prediction greenhouse environments, in:
2019 International Conference on Smart Applications, Communications and Networking (SmartNets), IEEE,
Sharm El Sheikh, Egypt, 2019: pp. 1–5. https://doi.org/10.1109/SmartNets48225.2019.9069766.
[18] D.H. Jung, H.S. Kim, C. Jhin, et al. Time-serial analysis of deep neural network models for prediction of climatic
conditions inside a greenhouse, Computers Electron. Agric. 173 (2020), 105402.
https://doi.org/10.1016/j.compag.2020.105402.
[19] Y. Liu, D. Li, S. Wan, et al. A long short-term memory-based model for greenhouse climate prediction, Int. J.
Intell. Syst. 37 (2021), 135–151. https://doi.org/10.1002/int.22620.
[20] E. Elhariri, S.A. Taie, H-Ahead multivariate microclimate forecasting system based on deep learning, in: 2019
International Conference on Innovative Trends in Computer Engineering (ITCE), IEEE, Aswan, Egypt, 2019:
pp. 168–173. https://doi.org/10.1109/ITCE.2019.8646540.
[21] Z. Fang, N. Crimier, L. Scanu, A. Midelet, A. Alyafi, B. Delinchant, Multi-zone indoor temperature prediction
with LSTM-based sequence to sequence model, Energy Build. 245 (2021), 111053.
https://doi.org/10.1016/j.enbuild.2021.111053.
[22] R.E. Caraka, R.C. Chen, T. Toharudin, et al. Evaluation performance of SVR genetic algorithm and hybrid PSO
in rainfall forecasting, ICIC Express Lett. Part B. Appl. 11 (2020), 631-639.
https://doi.org/10.24507/icicelb.11.07.631.
[23] Z. Han, J. Zhao, H. Leung, et al. A review of deep learning models for time series prediction, IEEE Sensors J. 21
(2021), 7833–7848. https://doi.org/10.1109/jsen.2019.2923982.
[24] G. Van Houdt, C. Mosquera, G. Nápoles, A review on the long short-term memory model, Artif. Intell. Rev. 53
(2020), 5929–5955. https://doi.org/10.1007/s10462-020-09838-1.
[25] J. Chung, C. Gulcehre, K. Cho, et al. Empirical evaluation of gated recurrent neural networks on sequence
modeling, (2014). http://arxiv.org/abs/1412.3555.
[26] R. Dey, F.M. Salem, Gate-variants of gated recurrent unit (GRU) neural networks, in: 2017 IEEE 60th
International Midwest Symposium on Circuits and Systems (MWSCAS), IEEE, Boston, MA, 2017: pp. 1597–
1600. https://doi.org/10.1109/MWSCAS.2017.8053243.
[27] K. Lu, X.R. Meng, W.X. Sun, et al. GRU-based encoder-decoder for short-term CHP heat load forecast, IOP
Conf. Ser.: Mater. Sci. Eng. 392 (2018), 062173. https://doi.org/10.1088/1757-899x/392/6/062173.
[28] K. Cho, B. van Merrienboer, C. Gulcehre, et al. Learning phrase representations using RNN encoder–decoder
for statistical machine translation, in: Proceedings of the 2014 Conference on Empirical Methods in Natural
Language Processing (EMNLP), Association for Computational Linguistics, Doha, Qatar, 2014: pp. 1724–1734.
https://doi.org/10.3115/v1/D14-1179.
[29] H. Yousuf, M. Lahzi, S.A. Salloum, et al. A systematic review on sequence-to-sequence learning with neural
network and its models, Int. J. Electric. Computer Eng. 11 (2021), 2315-2326.
https://doi.org/10.11591/ijece.v11i3.pp2315-2326.
[30] S. Du, T. Li, Y. Yang, et al. Multivariate time series forecasting via attention-based encoder–decoder framework,
Neurocomputing. 388 (2020), 269–279. https://doi.org/10.1016/j.neucom.2019.12.118.
[31] N. Bjorck, C.P. Gomes, B. Selman, et al. Understanding batch normalization. In: S. Bengio, H. Wallach, H.
Larochelle, K. Grauman, N. Cesa-Bianchi, R. Garnett, (eds.) Advances in Neural Information Processing
Systems 31, pp. 7694–7705. Curran Associates, Inc. (2018).
[32] L.N. Smith, A disciplined approach to neural network hyper-parameters: Part 1 - learning rate, batch size,
momentum, and weight decay, (2018). http://arxiv.org/abs/1803.09820.
[33] D.P. Kingma, J.L. Ba, Adam: A method for stochastic optimization, in: 3rd International Conference on Learning
Representations (ICLR), 2015: pp. 1–15. https://doi.org/10.48550/arXiv.1412.6980.
[34] I. Jebli, F.Z. Belouadha, M.I. Kabbaj, et al. Prediction of solar energy guided by pearson correlation using
machine learning, Energy. 224 (2021), 120109. https://doi.org/10.1016/j.energy.2021.120109.
[35] Y. Liu, Y. Mu, K. Chen, et al. Daily activity feature selection in smart homes based on pearson correlation
coefficient, Neural Process. Lett. 51 (2020), 1771–1787. https://doi.org/10.1007/s11063-019-10185-8.
[36] D. Chicco, M.J. Warrens, G. Jurman, The coefficient of determination R-squared is more informative than
SMAPE, MAE, MAPE, MSE and RMSE in regression analysis evaluation, PeerJ Computer Sci. 7 (2021), e623.
https://doi.org/10.7717/peerj-cs.623.
[37] M.V. Shcherbakov, A. Brebels, N.L. Shcherbakova, et al. A survey of forecast error measures, World Appl. Sci.
J. 24 (2013), 171–176.
[38] P. Schober, C. Boer, L.A. Schwarte, Correlation coefficients, Anesthesia Analgesia. 126 (2018), 1763–1768.
https://doi.org/10.1213/ane.0000000000002864.
[39] J.F. Hair, C.M. Ringle, M. Sarstedt, PLS-SEM: Indeed a silver bullet, J. Market. Theory Practice. 19 (2011),
139–152. https://doi.org/10.2753/mtp1069-6679190202.
[40] T.W. Cenggoro, F. Tanzil, A.H. Aslamiah, E.K. Karuppiah, B. Pardamean, Crowdsourcing annotation system of
object counting dataset for deep learning algorithm, IOP Conf. Ser.: Earth Environ. Sci. 195 (2018), 012063.
https://doi.org/10.1088/1755-1315/195/1/012063.
[41] B. Pardamean, H.H. Muljo, T.W. Cenggoro, et al. Using transfer learning for smart building management system,
J. Big Data. 6 (2019), 110. https://doi.org/10.1186/s40537-019-0272-6.
[42] A. Chauhan, Time series data mining for solar active region classification, 2017.
https://doi.org/10.13140/RG.2.2.15327.05283.
[43] I. Sutskever, O. Vinyals, Q.V. Le, Sequence to sequence learning with neural networks. In: Ghahramani Z,
Welling M, Cortes C, Lawrence ND, Weinberger, KQ (eds) Advances in neural information processing systems.
Curran Associates, Inc, (2014), 3104–3112.
[44] S. Park, M.S. Gil, H. Im, et al. Measurement noise recommendation for efficient kalman filtering over a large
amount of sensor data, Sensors. 19 (2019), 1168. https://doi.org/10.3390/s19051168.