USE OF NEURAL NETWORKS FOR THE EVALUATION OF
CONCRETE CORE STRENGTHS
S Tapkin
Civil Eng. Department
Anadolu University
Eskisehir, Turkey
cstapkin@anadolu.edu.tr
M Tuncan
Civil Eng. Department
Anadolu University
Eskişehir, Turkey
mtuncan@anadolu.edu.tr
K Ramyar
Civil Eng. Department
Ege University
Izmir, Turkey
kambiz.ramyar@ege.edu.tr
O Arioz
Civil Eng. Department
Anadolu University
Eskişehir, Turkey
oarioz@anadolu.edu.tr
A Tuncan
Civil Eng. Department
Anadolu University
Eskişehir, Turkey
atuncan@anadolu.edu.tr
Abstract- This paper examines a method to evaluate
concrete core strengths by using artificial neural
networks. Eight different concrete mixtures were
prepared by using two different aggregates of four
different maximum sizes. Beam specimens were cast
by prepared mixtures. Cores with different diameters
and length-to-diameter ratios were drilled from beam
specimens. Compressive strength tests were carried
out on core specimens at different ages. The
parameters influencing the strength of cores were
used as input for neural network architecture and the
core strengths were evaluated. The outputs of the
proposed network were assessed by the root mean
squared error (RMSE). The proposed architecture
gave reliable estimates of the concrete core strength,
and the RMSE values were low. In conclusion, the
results revealed that feed forward back propagation
neural networks can provide a reasonable evaluation
of core strengths.
Keywords: Neural networks, Back propagation,
Compressive strength, Core strength
1. INTRODUCTION
Strength is one of the most important
properties of concrete [1-4]. The quality control of
concrete in structures is generally carried out on
standard test specimens [5,6]. However, it is
difficult to assess the actual strength of concrete in
structures since the compaction and curing received
by the in-situ concrete and those received by the
standard specimens are quite different [7]. This
becomes more pronounced for larger members [8].
On the other hand, it is sometimes necessary to
know the strength of concrete in a structure [9].
Although it is expensive, the core test is one of the
most reliable methods to determine the strength of
concrete in structures [2]. However, the results of
the core tests should be carefully interpreted since
the strength of cores is influenced by a number of
factors such as diameter, length-to-diameter (l/d)
ratio, and the moisture conditions of the cores
[2,3,11-16]. Moreover, the maximum size of the
aggregate in concrete mixture plays an important
role for the evaluation of the test results [3,17]. This
is strongly emphasized in recently published
Turkish Standard, TS EN 12504-1 [18].
In the present study, the effects of core
diameter, l/d ratio as well as the type and maximum
size of the aggregate and the age of the concrete on
the core strengths were examined by means of
neural networks.
2. HISTORY OF ARTIFICIAL NEURAL
NETWORKS
The progressive development of
neurobiology has enabled scientists to develop
mathematical models of neurons for the simulation
of neural behaviour. In the early 1940s, one of the
first abstract models of a neuron was introduced by
McCulloch and Pitts [19]. Hebb proposed a
learning law explaining how a network of neurons
learned [20]. Minsky and Rosenblatt followed this
notion through the next two decades [21, 22]. Later,
Minsky and Papert pointed out theoretical
limitations of single-layer neural network models
[23]. Research on artificial neural networks fell
into a dormant era for nearly two decades owing
to this pessimistic projection. In spite of the
negative atmosphere, some researchers still
continued with their research and produced
valuable results. For example, Anderson and
Grossberg did important studies on psychological
models and Kohonen developed associative
memory models [24-26]. In the early 1980s, the
neural network approach was resurrected. Hopfield
introduced the idea of energy minimization in
physics into neural networks [27]. His influential
paper endowed this technology with renewed
momentum. Feldman and Ballard made the term
“connectionist” popular [28]. Sometimes,
connectionism is also referred to as subsymbolic
processing, which has become the study of cognitive
and artificial intelligence systems inspired by neural
networks [29]. Unlike symbolic artificial
intelligence, connectionism emphasized the
capability of learning and discovering
representations. Gradually, connectionism has
become a common ground between traditional
artificial intelligence and neural network research.
In the middle 1980s, Rumelhart and McClelland
generated great impacts on computer, cognitive and
biological sciences [30]. Notably, the
backpropagation learning algorithm developed by
Rumelhart, Hinton and Williams offers a powerful
solution to training a multilayer neural network and
shattered the curse imposed on perceptrons [31].
However, it should be noted that the idea of
backpropagation had been developed by Werbos
and Parker independently [32, 33]. The symbolic
approach, which has long dominated the field of
artificial intelligence, has recently been challenged by the
neural network approach. There have been
speculations about whether one approach should
substitute for another or whether the two
approaches should coexist and combine. More
evidence favours the integration alternative in
which the low-level pattern recognition capability
offered by the neural network approach and the
high-level cognitive reasoning ability provided by
the symbolic approach complement each other. The
optimal architecture of future intelligent systems
may well involve their integration in one way or
another.
3. DATA SET FOR TRAINING AND TESTING
OF THE NEURAL NETWORK
In this study, the core strengths were
analyzed by feed forward back propagation neural
networks. The reason for utilizing feed forward back
propagation was that it has been used widely in
almost every study concerning neural network
applications. Unlike other studies, which have only
one hidden layer in their architectures [34-41], the
present architecture has two hidden layers. The
training time does not differ much with the two-
hidden-layer architecture, and this gives a more
flexible approach to the solution. The gradient
descent algorithm was used in the training process.
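For reference, gradient descent adjusts each connection weight in the direction that reduces the network error; in its generic textbook form (written here generally, not reproduced from the toolbox implementation), with learning rate $\eta$ and error function $E$, the update is

$$w_{ij} \leftarrow w_{ij} - \eta \, \frac{\partial E}{\partial w_{ij}}$$

where $w_{ij}$ is the weight of the connection from neuron $i$ to neuron $j$.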
There are several studies on the application of
neural networks to predict the compressive strength
of concrete through input parameters such as type
and dosage of the cement, water-cement ratio,
fineness modulus of sand, sand-aggregate ratio,
slump, type and dosage of admixtures, etc. [34-41].
The use of test results in neural network
approaches is a fairly new concept. In a recent
study, Hola and Schabowicz presented the non-destructive
assessment of concrete strength using artificial
intelligence [42]. Core test results, however, have not
yet been utilised in a neural network approach. In this
study, the type and maximum
size of the aggregate used in concrete mixture,
diameter, length-to-diameter ratio and the age of the
concrete cores were used as input parameters for
the estimation of concrete core strength by means
of artificial intelligence. Both the two-hidden-layer
architecture and the gradient descent algorithm
have been utilised.
In this study, the neural network toolbox of
MATLAB was used. The reason for using this
software was that it provides quick and reliable results.
Two main data sets were analysed. One of them
was for cores removed from crushed limestone
aggregate-containing concrete. The other one was
for cores drilled from natural aggregate-containing
concrete. Table 1 presents designations, mix
proportions and some properties of the concrete
mixtures.
Cores with diameters of 144, 94, 69 and 46 mm
were obtained and cut to six different l/d
ratios which were selected as 2, 1.75, 1.5, 1.25, 1,
and 0.75. The cores were tested at the ages of 7, 28,
and 90 days and the compressive strength values
were calculated by taking the average of at least six
specimens.
Table 1. Constituents and some properties of concrete mixtures

Mixture  Coarse Aggregate (kg/m3)  Fine Aggregate (kg/m3)  Cement (kg/m3)  Water (kg/m3)  w/c   Type of Aggregate   Maximum Aggregate Size (mm)
MIX-A    696                       1043                    356             215             0.6   Crushed Limestone   10
MIX-B    729                       1094                    331             200             0.6   Crushed Limestone   15
MIX-C    1034                      846                     315             190             0.6   Crushed Limestone   22
MIX-D    1128                      752                     315             190             0.6   Crushed Limestone   30
MIX-E    507                       1259                    356             195             0.55  Natural Aggregate   10
MIX-F    833                       994                     331             181             0.55  Natural Aggregate   15
MIX-G    1158                      706                     315             173             0.55  Natural Aggregate   22
MIX-H    1300                      565                     315             173             0.55  Natural Aggregate   30
4. CONSTRUCTION OF NEURAL NETWORK
MODEL
The problem can be defined as a nonlinear
input-output relation between the influencing
factors (core diameter, l/d ratio, maximum
aggregate size and age of concrete) and
compressive strength values at 7, 28 and 90 days.
Fig.2 illustrates the architecture of the neural
network applied in the present study. There are four
nodes in the input layer, corresponding to the above-
mentioned four factors, and one node in the output layer,
corresponding to the compressive strength. Numerous
trials were carried out to determine the number of
neurons in the two hidden layers. This
procedure was performed for cores drilled from
both crushed limestone aggregate and natural
aggregate-containing concretes. Different optimum
hidden neuron numbers were obtained for different
cases. In this study, the neurons of neighbouring
layers were fully connected.
Each batch of data was divided into two sets,
one for the network learning called training set, and
the other for testing the network called testing set.
Each set was composed of 144 pairs of input and
output vectors. Each input pair was calculated by
taking the average of at least six specimens. An
input vector consisted of four components and an
output vector had only one component.
In general, the network parameters were as
follows: the number of training samples for each
concrete core property was 144; the number of input
layer neurons was 4; the number of hidden layer
neurons ranged between 5 and 50; the number of
output layer neurons was 1; the back-propagation
learning rule was the gradient descent algorithm; the
activation functions were logarithmic sigmoid; the
learning rate was 0.3; and the number of epochs
varied from training to training. In fact, the number
of training samples was more than 144, and different
combinations of the number of hidden neurons and
activation functions were used in training the neural
network architecture to obtain the optimum number
of hidden neurons.
The network was tested with 144 pairs. It was
found that the logarithmic sigmoid activation
function served our purpose very well. Therefore,
logarithmic sigmoid activation function was used
throughout the analyses.
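The analyses themselves were carried out with the MATLAB neural network toolbox; purely as an illustration of the configuration described above, the following sketch shows a comparable network in Python/NumPy: four inputs, two fully connected hidden layers, one output, logarithmic sigmoid activations throughout, and plain gradient descent. The helper names (logsig, train, predict), the weight initialisation and the epoch count are illustrative assumptions rather than the authors' code, and the inputs and target strengths are assumed to be normalised to the (0, 1) range before training.

import numpy as np

def logsig(x):
    # Logarithmic sigmoid activation; maps any real value into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def train(X, y, hidden=40, lr=0.3, epochs=600, seed=0):
    # Train a 4-input, two-hidden-layer, 1-output network by gradient descent.
    # X: (n_samples, 4) normalised inputs; y: (n_samples,) normalised strengths.
    rng = np.random.default_rng(seed)
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float).reshape(-1, 1)
    n_in, n = X.shape[1], X.shape[0]
    W1 = rng.normal(scale=0.1, size=(n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.1, size=(hidden, hidden)); b2 = np.zeros(hidden)
    W3 = rng.normal(scale=0.1, size=(hidden, 1)); b3 = np.zeros(1)
    for _ in range(epochs):
        # Forward pass through the two hidden layers and the output layer.
        a1 = logsig(X @ W1 + b1)
        a2 = logsig(a1 @ W2 + b2)
        out = logsig(a2 @ W3 + b3)
        # Back-propagate the squared error; the derivative of logsig is a * (1 - a).
        d3 = (out - y) * out * (1 - out)
        d2 = (d3 @ W3.T) * a2 * (1 - a2)
        d1 = (d2 @ W2.T) * a1 * (1 - a1)
        # Gradient descent weight updates with learning rate lr.
        W3 -= lr * (a2.T @ d3) / n; b3 -= lr * d3.mean(axis=0)
        W2 -= lr * (a1.T @ d2) / n; b2 -= lr * d2.mean(axis=0)
        W1 -= lr * (X.T @ d1) / n; b1 -= lr * d1.mean(axis=0)
    return W1, b1, W2, b2, W3, b3

def predict(params, X):
    # Forward pass only; returns normalised strength predictions of shape (n, 1).
    W1, b1, W2, b2, W3, b3 = params
    a1 = logsig(np.asarray(X, dtype=float) @ W1 + b1)
    a2 = logsig(a1 @ W2 + b2)
    return logsig(a2 @ W3 + b3)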
Fig.2. Neural network architecture (input layer: core diameter, core l/d ratio, maximum aggregate size, concrete age; two hidden layers; output layer: core compressive strength)

Fig.3. Sample training performed through the analyses
Fig.3 shows a typical training session
performed in this study. As the data set was
representative of the test data, the learning process
terminated after approximately 200 epochs. As the
analyses proceeded, it was seen that the epoch number
rose to a maximum of 600. The testing set was employed
to evaluate the confidence in the performance of the
trained network. One hundred and forty-four testing
vectors of the batch of data were used to test the
neural network model. The training was conducted
with the 10 and 15 mm maximum aggregate sizes and
the testing was carried out with the 22 and 30 mm
maximum aggregate sizes.
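Purely to make the data layout concrete, the following sketch (an assumed structure, not the authors' actual data files) enumerates the factor combinations behind the 144 training and 144 testing vectors: for each aggregate type, two maximum aggregate sizes combined with four core diameters, six l/d ratios and three test ages give 2 x 4 x 6 x 3 = 144 cases.

from itertools import product

diameters = [144, 94, 69, 46]                 # core diameter (mm)
ld_ratios = [2, 1.75, 1.5, 1.25, 1, 0.75]     # length-to-diameter ratio
ages = [7, 28, 90]                            # concrete age (days)
train_sizes, test_sizes = [10, 15], [22, 30]  # maximum aggregate size (mm)

# One input vector per (diameter, l/d, maximum aggregate size, age) combination.
train_inputs = [(d, ld, s, a) for s, d, ld, a
                in product(train_sizes, diameters, ld_ratios, ages)]
test_inputs = [(d, ld, s, a) for s, d, ld, a
               in product(test_sizes, diameters, ld_ratios, ages)]

print(len(train_inputs), len(test_inputs))  # 144 144
# The matching outputs are the measured core strengths, each the average of at
# least six specimens, normalised before training.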
The target outputs of the output neuron were
taken as the actual compressive strengths obtained
from the core tests. The training data set
was normalised before the analyses and the predictive
capabilities of the feed forward back-propagation
neural network were examined. The basis of this
discussion was to demonstrate the prediction
performance of these models by comparing their
levels of prediction rather than to illustrate how well
the models predict a given set of data. The prediction
performances were compared with the Root Mean
Squared Error (RMSE) values. The lower the RMSE, the better the estimate.
RMSE values can be obtained by the following
standard formula:

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{j=1}^{N}\left(X_j - \bar{X}_j\right)^2} \qquad \text{(Eq.1)}$$

where
$N$ = number of observations,
$X_j$ = predicted values, and
$\bar{X}_j$ = observed values.
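A minimal sketch of this comparison, assuming the strengths have already been normalised as described above, is given below; the rmse helper name is illustrative.

import numpy as np

def rmse(predicted, observed):
    # Root mean squared error of Eq.1, on the normalised strength values.
    predicted = np.asarray(predicted, dtype=float).ravel()
    observed = np.asarray(observed, dtype=float).ravel()
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))

# Example usage with the illustrative network sketched earlier:
# error = rmse(predict(params, X_test), y_test)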
In other words, agreement over the data set as a
whole is ensured, and the behaviour of the entire system,
rather than of individual data points, can be monitored in this way.
Therefore, it is much easier to decide the number of
hidden neurons that can be utilised in the hidden
layers. This is solely done on a root mean squared
error minimisation basis. This means that when the
value of the root mean squared error for the whole set
of data is minimum, the optimum number of hidden
neurons is determined. Many trials were carried out to
determine the optimum number of hidden neurons. It
was found that the optimum number of hidden
neurons was 40 and 35 for cores obtained from
crushed aggregate-containing and natural aggregate-
containing concrete, respectively. After obtaining the
number of hidden neurons, some further analyses
were also carried out to determine the optimum
learning rate. Fig.4 shows the RMSE values for
different hidden neuron numbers. It can be seen that
the smallest RMSE value was obtained with 40 hidden
neurons. The learning rates were found to be 0.3 and
0.5 for cores drilled from crushed limestone and
natural aggregate-bearing concretes, respectively.
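The search itself can be pictured as a simple loop, sketched below with the illustrative train, predict and rmse helpers from the earlier snippets; the candidate counts follow Fig.4, and the evaluation is assumed to be over the whole normalised data set, as described above.

def select_hidden_neurons(X_train, y_train, X_all, y_all,
                          candidates=(25, 30, 35, 40, 45, 50), lr=0.3):
    # Train one network per candidate hidden neuron count and keep the count
    # giving the smallest RMSE over the whole normalised data set.
    scores = {}
    for hn in candidates:
        params = train(X_train, y_train, hidden=hn, lr=lr)
        scores[hn] = rmse(predict(params, X_all), y_all)
    best = min(scores, key=scores.get)   # e.g. 40 for crushed limestone cores
    return best, scores

A similar loop over learning rate values from 0.1 to 0.6, with the chosen hidden neuron count fixed, reproduces the second stage of the search.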
5. CONSISTENCY BETWEEN NEURAL
NETWORK MODELLING AND
EXPERIMENTS
When the simulation results for the optimum
hidden neuron numbers were further analyzed, it was
seen that the modelling results were reasonably good
for such a large data set. RMSE values of 0.0708 and
0.1006 are fairly representative for crushed limestone
and natural aggregate-bearing cores, respectively. It is
not surprising to observe some fluctuations in the root
mean squared errors due to the nature of the back
propagation algorithm. However, it was observed that
the modelling results were very close to the real
compressive strength test results.
As the data set is very large, the analyses
gave fairly reasonable results and show the behaviour
of the whole system. As the data sets are composed
mainly of four parameters acting together as a whole unit,
there was no means of showing the effect of each of
these parameters on concrete strength individually.
Therefore, the above-given root mean squared error
values give the most realistic representation
of the analysis results.
According to Fig.4, the RMSE values range
between 0.07 and 0.13. There was a regular pattern of
spread in the RMSE values as the graph was
analyzed. Since the minimum RMSE value was
important, the optimum hidden neuron number for
cores drilled from crushed aggregate-containing
concrete was forty. Further analyses were carried out on
the forty-hidden-neuron network architecture,
and it was found that the optimum learning rate
was 0.3. Similar analyses were carried out on results
obtained from natural aggregate-bearing concrete, and
the optimum hidden neuron number was found to be
thirty-five. Further analyses were carried out on the
thirty-five-hidden-neuron network architecture (Fig.3). It
was found that the optimum learning rate was 0.5.
This type of error presentation is more realistic
and meaningful. In this way, a more visual insight into
the performance over the whole data set can be obtained. A
new perspective on neural network training and
testing can be gained with the help of the RMSE and
learning rate graphs.
Fig.4. RMSE values vs. hidden neuron number for crushed
aggregate-containing cores

Fig.5. Different learning rate values for forty hidden
neurons for crushed aggregate-containing cores

Fig.6. RMSE values vs. hidden neuron number for natural
aggregate-containing cores

Fig.7. Different learning rate values for thirty-five hidden
neurons for natural aggregate-containing cores
6. CONCLUSIONS
The core strength test results were analyzed by
means of a multi-layer feed forward back propagation neural
network model. In this analysis, the gradient descent algorithm
and two hidden layers were employed. The following
conclusions can be drawn from this study:
1. The results obtained from the analyses show that the
   prediction of the compressive strength of concrete core
   specimens by artificial neural networks, particularly with
   the gradient descent algorithm and a two-hidden-layer
   architecture, is a viable method. This was mainly
   evidenced by the RMSEs calculated for the gradient
   descent network. Moreover, the differences between the
   RMSEs enabled the determination of the optimum hidden
   neuron numbers and learning rates, which facilitates the
   estimation of the core strengths.
2. The average compressive strengths of concrete cores
   determined by the artificial neural networks and by
   destructive tests during the investigation were very
   similar to each other. The calculated RMSEs were
   notably low, indicating that the estimations were
   representative of the real results.
3. The responsible person on site can estimate the
   compressive strength of similar concretes
   incorporated in building structures by means of the
   neural network, without needing to determine
   correlations or to fit hypothetical scaling curves. The
   required optimum hidden neuron number and
   learning rate values for better predictions can be
   obtained by means of the RMSE values. A neural network
   model can be constructed to provide a quick and
   dependable means of predicting the core strengths. This
   model may convert the strength of a non-standard core to
   that of a standard core recommended by relevant
   standards and specifications. Neural networks will be
   useful to civil engineers, especially those dealing with
   materials engineering, to evaluate core strength, and will
   provide a sound basis for these and similar types of analyses.
7. ACKNOWLEDGEMENTS
The authors would like to acknowledge the
financial and technical support provided by the Scientific
Research Projects Commission of Anadolu University,
Turkey (Project No. 03 02 23). The authors also thank
Research Assistant Kadir Kilinc for his great efforts in the
preparation of the manuscript.
REFERENCES
[1] G.E. Troxell, H.E. Davis and J.W. Kelly, Composition and
Properties of Concrete, McGraw-Hill Book Company, New
York (1968).
[2] A.M. Neville, Properties of Concrete, Addison-Wesley
Longman, U.K. (1995).
[3] O. Arioz, Determination of Concrete Strength by Standard,
Destructive, Semi-Destructive and Nondestructive Methods,
Ph.D. Thesis, Anadolu University, Eskisehir (2005) p. 233.
[4] H.Y. Qasrawi, Concrete strength by combined non destructive
methods simply and reliably predicted, Cem Concr Res 30
(2000) 739-746.
[5] I.N. Prassianakis and P. Giokas, Mechanical properties of old
concrete using destructive and ultrasonic non-destructive
testing methods, Magazine of Concrete Research 55 (2003)
[6] M.N. Soutsos, J.H. Bungey, A.E. Long and G.D. Henderson,
In-situ strength assessment of concrete, The European Concrete
Frame Building Project, (Cardington, U.K., 2000).
[7] J.H. Bungey and M.N. Soutsos, Reliability of partially-
destructive tests to assess the strength of concrete on site,
Construction and Building Materials 15 (2001) 81-92.
[8] B. Miao, P.C. Aitcin, W.D. Cook and D. Mitchell, Influence of
concrete strength on in-situ properties of large columns, ACI
Materials Journal 90 (1993) 214-219.
[9] P.J.E. Sullivan, Testing and evaluation of concrete strength in
structures, ACI Materials Journal September-October (1991)
530-535.
[10] F.M. Bartlett, Precision of in-place concrete strengths predicted
using core strength correction factors obtained by weighted
regression analysis, Structural Safety 19 (1997) 397-410.
[11] T.Y. Erdoğan, Concrete, METU Publisher, Ankara (2003).
[12] E. Arıoğlu and N. Arıoğlu, Testing of Concrete Core
Samples and Evaluations, Evrim Publisher, Istanbul (in
Turkish) (1998).
[13] F.M. Bartlett and J.G. MacGregor, Statistical analysis of the
compressive strength of concrete in structures, ACI Materials
Journal 93 (1996) 158-168.
[14] F.M. Bartlett and J.G. MacGregor, Effect of core diameter on
concrete core strengths, ACI Materials Journal 91 (1994) 460-
470.
[15] F.M. Bartlett and J.G. MacGregor, Effect of core length-to-
diameter ratio on concrete core strengths, ACI Materials
Journal 91 (1994) 339-348.
[16] F.M. Bartlett and J.G. MacGregor, Effect of moisture condition
on concrete core strengths, ACI Materials Journal 91 (1993)
227-236.
[17] J.H. Bungey, Determining concrete strength by using small
diameter cores, Magazine of Concrete Research 31 (1979) 91-
98.
[18] TS EN 12504-1, Testing concrete in structures-part 1: core
specimens-taking, examining and testing in compression,
Turkish Standards Institute (Ankara 2002).
[19] W.S. McCulloch and W. Pitts, A logical calculus of the ideas
immanent in nervous activity, Bull. Math. Biophysics 5 (1943)
115-133.
[20] D.O. Hebb, The Organization of Behaviour, Wiley, New York
(1949).
[21] M. Minsky, Neural Nets and the Brain-Model Problem, Ph.D.
Thesis, Princeton University (1954).
[22] F. Rosenblatt, The Perceptron: A probabilistic model for
information storage and organization in the brain,
Psychological Review 65 (1958) 386-407.
[23] M. Minsky and S. Papert, Perceptrons, MIT Press, Cambridge,
MA. (1969).
[24] J.A. Anderson, J.W. Silverstein, S.A. Ritz and R.S. Jones,
Distinctive features, categorical perception, and probability
learning: some applications of a neural model, Psychological
Review 84 (1977) 413-451.
[25] S. Grossberg, How does a brain build a cognitive code?,
Psychological Review 87 (1980) 1-51.
[26] T. Kohonen, Associative Memory: A System Theoretical
Approach, Springer-Verlag, New York (1977).
[27] J.J. Hopfield, Neural networks and physical systems with
emergent collective computational abilities, Proceedings of the
National Academy of Science 79 (1982) 2554-2558.
[28] J.A. Feldman and D.H. Ballard, Connectionist models and their
properties, Cognitive Science 6 (1982) 205-254.
[29] P. Smolensky, On the proper treatment of connectionism,
Behavioural and Brain Sciences 11 (1988).
[30] D.E. Rumelhart, J.L. McClelland and the PDP Research Group
(eds.), Parallel Distributed Processing: Explorations in the
Microstructures of Cognition, Vol.1 and Vol.2, MIT Press,
Cambridge, MA (1986).
[31] D.E. Rumelhart, G.E. Hinton and R.J. Williams, Learning
internal representation by error propagation, Parallel
Distributed Processing: Explorations in the Microstructures of
Cognition, Vol.1, MIT Press, Cambridge, MA (1986).
[32] P.J. Werbos, Beyond Regression: New Tools for Prediction
and Analysis in the Behavioural Sciences, Ph.D. Thesis,
Harvard University (1974).
[33] D.B. Parker, Learning Logic, Invention Report S81-64, File 1,
Office of Technology Licensing, Stanford University (1982).
[34] I.-C.Yeh, Modelling of strength of high-performance concrete
using artificial neural networks, Cem Concr Res 28 (1998)
1797-1808.
[35] W.P.S. Dias and S.P. Pooliyadda, Neural networks for
predicting properties of concretes with admixtures,
Construction and Building Materials 15 (2001) 371-379.
[36] L. Ren and Z. Zhao, An optimal neural network and concrete
strength modelling, Advances in Engineering Software 33
(2002) 117-130.
[37] M.N.S. Hadi, Neural networks applications in concrete
structures, Computers & Structures 81 (2003) 373-381.
[38] S.-C. Lee, Prediction of concrete strength using artificial neural
networks, Engineering Structures 25 (2003) 849-857.
[39] M. Sebastia, I.F. Olmo and A. Irabien, Neural network
prediction of unconfined compressive strength of coal fly ash-
cement mixtures, Cem Concr Res 33 (2003) 1137-1146.
[40] J. Bai, S. Wild, J.A. Ware and B.B. Sabir, Using neural
networks to predict workability of concrete incorporating
metakaolin and fly ash, Advances in Engineering Software 34
(2003) 663-669.
[41] N. Hong-Guang and W. Ji-Zong, Prediction of compressive
strength of concrete by neural networks, Cem Concr Res 30
(2000) 1245-1250.
[42] J. Hola and K. Schabowicz, New technique of non-destructive
assessment of concrete strength using artificial intelligence,
NDT & E International (2004) 1-9.