Article
Full-text available

Hojas de cálculo para la simulación de redes de neuronas artificiales (RNA) [Spreadsheets for the simulation of artificial neural networks (ANNs)]

Authors: J. García, A. M. López, J. B. Romero, A. R. García, C. Camacho, J. L. Cantero, M. Atienza, R. Salas

Abstract

The use of Artificial Neural Networks (ANNs) in time-series prediction, classification, and pattern-recognition problems has grown considerably in recent years. General-purpose mathematical packages such as MATLAB and MATHCAD, and statistical applications such as SPSS and S-PLUS, include tools for implementing ANNs. Specific programs such as NeuralWare, EasyNN, and Neuron add to this software offering. From an educational point of view, student access to these programs can be difficult, since they are not designed as teaching tools. Spreadsheets such as Excel and Gnumeric, on the other hand, include utilities that allow ANNs to be implemented and are readily accessible to students. The aim of this work is to provide a short tutorial on using Excel to implement an ANN that fits the values of a time series of alpha brain activity and helps students understand how these computing devices work.
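The paper builds this network cell by cell in Excel; purely as orientation, the sketch below reproduces the same kind of computation in Python rather than in a spreadsheet. It is a minimal sketch, not the paper's worksheet: the sliding-window input, the ten sigmoid hidden units, the linear output, the synthetic stand-in series, and the fixed-step gradient-descent training are all illustrative assumptions rather than the values used with the alpha-activity data.

```python
import numpy as np

# Toy stand-in for the alpha-activity series (the paper uses real EEG data).
rng = np.random.default_rng(0)
t = np.arange(0, 20, 0.05)
series = np.sin(2 * np.pi * 0.5 * t) + 0.1 * rng.standard_normal(t.size)

# Sliding windows: predict x[t] from the previous `lag` samples.
lag = 5
X = np.array([series[i:i + lag] for i in range(series.size - lag)])
y = series[lag:]

n_hidden = 10
W1 = rng.normal(scale=0.5, size=(lag, n_hidden))   # input -> hidden weights
b1 = np.zeros(n_hidden)                            # hidden biases
W2 = rng.normal(scale=0.5, size=n_hidden)          # hidden -> output weights
b2 = 0.0                                           # output bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for epoch in range(2000):
    # Forward pass: sigmoid hidden layer, linear output.
    h = sigmoid(X @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y

    # Backward pass: gradients of one-half the mean squared error.
    grad_W2 = h.T @ err / X.shape[0]
    grad_b2 = err.mean()
    delta_h = np.outer(err, W2) * h * (1 - h)
    grad_W1 = X.T @ delta_h / X.shape[0]
    grad_b1 = delta_h.mean(axis=0)

    # Fixed-step gradient descent update.
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print("final MSE:", np.mean(err ** 2))
```

In a spreadsheet, each of these arrays is a block of cells, the hidden activations and output are formula columns, and the update step is replaced by Excel's Solver minimising the error cell.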
QÜESTIIÓ, vol. 26, 1-2, p. 289-305, 2002

HOJAS DE CÁLCULO PARA LA SIMULACIÓN DE REDES DE NEURONAS ARTIFICIALES (RNA)

J. GARCÍA¹, A. M. LÓPEZ²,⁶, J. B. ROMERO³, A. R. GARCÍA⁴, C. CAMACHO², J. L. CANTERO⁵, M. ATIENZA⁵ and R. SALAS⁵

Spreadsheets for the simulation of artificial neural networks (ANNs)

Keywords: artificial neural networks, spreadsheet, supervised learning

AMS classification (MSC2000): 97U70

¹ Departamento de Ingeniería del Diseño, Universidad de Sevilla
² Departamento de Psicología Experimental, Universidad de Sevilla
³ Departamento de Economía Aplicada I, Universidad de Sevilla
⁴ I.E.S. Los Viveros, Sevilla
⁵ Laboratorio de Sueño y Cognición
⁶ Correspondence should be addressed to Ana María López Jiménez, Departamento de Psicología Experimental, Universidad de Sevilla, Avda. Camilo José Cela s/n, 41005 Sevilla, España. Tel.: (34) 954 557 812. Fax: (34) 954 551 784.

- Received March 2001.
- Accepted December 2001.
ENGLISH SUMMARY

SPREADSHEETS FOR THE SIMULATION OF ARTIFICIAL NEURAL NETWORKS (ANNs)

J. GARCÍA¹, A. M. LÓPEZ²,⁶, J. B. ROMERO³, A. R. GARCÍA⁴, C. CAMACHO², J. L. CANTERO⁵, M. ATIENZA⁵ and R. SALAS⁵

In recent years, the use of Artificial Neural Networks (ANNs) to solve prediction problems in time series, classification and pattern recognition has increased considerably. General-purpose mathematical programs such as MATLAB and MATHCAD, and mathematical and statistical programs such as SPSS and S-PLUS, incorporate tools that allow the implementation of ANNs. In addition, specific programs such as NeuralWare, EasyNN or Neuron complete the software offer for using ANNs.

From an educational point of view, an aspect that concerns the authors of this work, student access to these programs can be expensive or, in some cases, inadvisable given the few possibilities they provide as didactic instruments: these programs are usually easy to use but do not facilitate understanding of the technique employed. On the other hand, spreadsheets like Excel or Gnumeric incorporate tools that allow all of the calculations needed to implement an ANN. These programs are so widely used that they can be found in many university laboratories, as well as among psychology, economics, science and engineering students, to mention a few. This paper provides a small tutorial on the use of a spreadsheet, specifically Excel, to implement an ANN to adjust the values of a time series corresponding to cerebral alpha activity.

Keywords: artificial neural network, spreadsheet, supervised learning

AMS classification (MSC2000): 97U70
... NNS was used to identify how the different water variables linked to Chl-a concentration produce changes. The construction of the NNS followed García (2002), using Microsoft Excel® software. The NNS was set up with seven input variables (Fig. 2, Table 2), a hidden layer of 10 neurons, and one output neuron. ...
... b Y represents the sum of each hidden-layer neuron's output multiplied by its weight factor. c Bias is an input variable to the NNS that does not depend on the water data (García, 2002). ... (A sketch of this forward pass appears after this entry.)
Article
In recent decades, there has been increasing eutrophication of rivers and lagoons in Uruguay and solutions leading to water purification are being sought. The growing pollution has been attributed to nitrogen and phosphorus compounds exported from the river basins with intensification of agricultural production and the absence of tertiary treatment for urban and industrial effluents. Although nitrogen and phosphorus are relevant to eutrophication, there are also other factors that can promote eutrophication and algal blooms. This paper reports a broad analysis of water quality variables recorded over 9 years (2009-2018) at 17 sampling stations on the Uruguay River and 16 sampling stations on the Río Negro, and explores their relationship with the changes of chlorophyll a (Chl-a) concentrations using a generalized linear model and a neural network simulation (NNS). The input variables were total phosphorus; total suspended solids; electrical conductivity of water (ECw); alkalinity; water temperature (T); water pH (pH); and sampling month. The NNS explained 79% of Chl-a variations and showed the most relevant variables to be T, ECw, and pH. Moreover, the NNS showed that replacement of current land uses by natural prairie would not significantly reduce Chl-a concentrations. The results showed that the main factors that drive Chl-a concentrations (i.e., algae) are not directly linked to agricultural land use.
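The citation contexts above describe a network with seven input variables, a ten-neuron hidden layer, one output neuron, and a bias treated as an extra input whose output Y is the weighted sum of the hidden-layer outputs. The sketch below is a minimal illustration of that forward pass only, assuming sigmoid hidden units, a linear output, and placeholder weights; it is not the authors' spreadsheet or their trained network.

```python
import numpy as np

rng = np.random.default_rng(1)

n_inputs, n_hidden = 7, 10
W_hidden = rng.normal(size=(n_inputs + 1, n_hidden))  # +1 row for the bias input
W_output = rng.normal(size=n_hidden)                   # one weight per hidden neuron

def forward(x):
    """Forward pass of a 7-10-1 network; x holds the seven water variables."""
    x_with_bias = np.append(x, 1.0)                    # bias enters as a constant input
    hidden = 1.0 / (1.0 + np.exp(-(x_with_bias @ W_hidden)))
    # Y is the sum of each hidden neuron's output times its weight factor.
    return hidden @ W_output

print(forward(rng.random(7)))   # placeholder inputs; real runs use the water data
```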
... 23 different networks were tested. Following suggestions from [28], each network was trained using Microsoft Excel spreadsheets and the Premium Solver macro, version 12.5, designed for Excel 2007 by Frontline Systems. The generalised reduced gradient algorithm was used to train the networks. ... (A Python analogue of this Solver-style training appears after this entry.)
Article
Full-text available
The municipality of La Virginia (Risaralda, Colombia) is constantly affected by floods that originate from increased water levels in the Cauca River. Disaster relief agencies do not currently have adequate monitoring systems to identify potential overflow events in time-series observations to prevent flood damage to homes or injury to the general population. In this paper, various simulation models are proposed for the prediction of flooding that contribute as a technical tool to the development and implementation of early warning systems to improve the responsiveness of disaster relief agencies. The models, which are based on artificial neural networks, take hydroclimatological information from different stations along the Cauca River Basin, and the trend indicates the average daily level of the river within the next 48 hours. This methodology can be easily applied to other urban areas exposed to flood risks in developing countries.
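The citation context opening this entry trains each network in Excel with the Premium Solver add-in and the generalised reduced gradient (GRG) algorithm, i.e. a general-purpose optimiser pointed at an error cell. A rough Python analogue, under placeholder data and network size: flatten all weights into one vector, define the sum of squared errors as a function of that vector, and hand the function to an optimiser. SciPy does not ship GRG, so L-BFGS-B is used below as a stand-in gradient-based method.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Placeholder data: 50 samples of 4 hydroclimatological inputs and one target level.
X = rng.random((50, 4))
y = rng.random(50)

n_in, n_hid = X.shape[1], 6

def unpack(w):
    """Split the flat weight vector into the network's weight matrices."""
    i = 0
    W1 = w[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = w[i:i + n_hid]; i += n_hid
    W2 = w[i:i + n_hid]; i += n_hid
    b2 = w[i]
    return W1, b1, W2, b2

def sse(w):
    """Sum of squared errors -- the cell a spreadsheet Solver would minimise."""
    W1, b1, W2, b2 = unpack(w)
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    return np.sum((h @ W2 + b2 - y) ** 2)

w0 = rng.normal(scale=0.5, size=n_in * n_hid + n_hid + n_hid + 1)
result = minimize(sse, w0, method="L-BFGS-B")   # stand-in for Premium Solver's GRG
print(result.fun)
```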
Conference Paper
Full-text available
The use of Artificial Neural Networks (ANNs) in time-series prediction problems is increasingly common. Agriculture continuously generates large amounts of data on climate, plant biometrics, harvest days, and production. Programming languages and specialized software incorporate tools for implementing these Machine Learning (ML) algorithms. From an educational or basic, simple-implementation point of view, access to these programs can be difficult, since they are not designed as teaching or business tools. On the other hand, spreadsheets such as Excel include utilities that allow ANNs to be implemented and are easily accessible to everyone. The aim of this work is to provide a short tutorial on using Excel to implement an ANN that lets us fit the values of a time series and allows anyone to understand how these calculation tools work.
Conference Paper
One of the main sources of fresh water is snowmelt, so modelling melt at fine temporal scales, that is, scales finer than monthly, is of great interest and importance for water-resource management and use. This work presents the construction of artificial neural networks (ANNs) for forecasting snowmelt streamflow at daily, weekly, and fortnightly temporal scales. Streamflow, extreme temperature, and precipitation data from the Mapocho en Los Almendros station were used, together with relative humidity, wind speed, temperature, and precipitation data from the Valle Nevado station. The results show ANNs with Pearson correlation coefficients (R²) above 0.99 for the daily and fortnightly scales and above 0.98 for the weekly scale. When simulating the optimal ANNs for the 2013-2014 melt season, the forecast flows had mean absolute errors of 5.6% (0.18 m³/s) for the daily scale, 11.6% (0.39 m³/s) for the weekly scale, and 11.5% (0.38 m³/s) for the fortnightly scale. Regarding the ANN input data, it is concluded that the most significant inputs are those related to streamflow and maximum temperature, and that relative humidity is also a significant input for forecasting snowmelt flows.
Article
Full-text available
This paper informs a statistical readership about Artificial Neural Networks (ANNs), points out some of the links with statistical methodology and encourages cross-disciplinary research in the directions most likely to bear fruit. The areas of statistical interest are briefly outlined, and a series of examples indicates the flavor of ANN models. We then treat various topics in more depth. In each case, we describe the neural network architectures and training rules and provide a statistical commentary. The topics treated in this way are perceptrons (from single-unit to multilayer versions), Hopfield-type recurrent networks (including probabilistic versions strongly related to statistical physics and Gibbs distributions) and associative memory networks trained by so-called unsupervised learning rules. Perceptrons are shown to have strong associations with discriminant analysis and regression, and unsupervised networks with cluster analysis. The paper concludes with some thoughts on the future of the interface between neural networks and statistics.
Article
Full-text available
Contents: An introduction to neural computation; Fundamentals of neural networks; Characteristics of neural networks; Feed-forward neural networks; The Hopfield model; The adaptive resonance model (ART); The Kohonen model; Stochastic networks; Neural networks and fuzzy logic.
Article
Full-text available
Minimisation methods for training feed-forward networks with back-propagation are compared. Feed-forward neural network training is a special case of function minimisation in which no explicit model of the data is assumed. Therefore, and because of the high dimensionality of the data, linearising the training problem through orthogonal basis functions is not desirable. The focus is on function minimisation on any basis. Three feed-forward learning problems are tested with five methods. It is shown that, owing to its fixed step size, standard error back-propagation performs well at avoiding local minima. However, methods that use not only the local gradient but also the second derivative of the error function, constructed from a sequence of local gradients, need much shorter training times, and conjugate gradient with Powell restarts is shown to be the superior method.
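As a hedged illustration of the comparison drawn here, the sketch below minimises the same toy network error twice: once with a fixed-step gradient-descent loop (the standard back-propagation case) and once with SciPy's nonlinear conjugate gradient. The data, network size, step size, and iteration count are arbitrary, the gradient is computed numerically rather than by back-propagation, and SciPy's CG is a generic stand-in rather than the Powell-restart variant studied in the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
X, y = rng.random((40, 3)), rng.random(40)
n_in, n_hid = 3, 5
n_w = n_in * n_hid + n_hid            # hidden weights + output weights (biases omitted)

def loss(w):
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    W2 = w[n_in * n_hid:]
    h = np.tanh(X @ W1)
    return 0.5 * np.sum((h @ W2 - y) ** 2)

def grad(w, eps=1e-6):
    """Central-difference gradient, standing in for back-propagated derivatives."""
    g = np.zeros_like(w)
    for i in range(w.size):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (loss(w + e) - loss(w - e)) / (2 * eps)
    return g

w0 = rng.normal(scale=0.3, size=n_w)

# Fixed-step "back-propagation": plain gradient descent with a constant step size.
w = w0.copy()
for _ in range(500):
    w -= 0.01 * grad(w)
print("fixed-step GD loss:", loss(w))

# Conjugate gradient on the same error surface.
res = minimize(loss, w0, jac=grad, method="CG")
print("conjugate gradient loss:", res.fun)
```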
Article
In this paper we demonstrate that finite linear combinations of compositions of a fixed, univariate function and a set of affine functionals can uniformly approximate any continuous function of n real variables with support in the unit hypercube; only mild conditions are imposed on the univariate function. Our results settle an open question about representability in the class of single hidden layer neural networks. In particular, we show that arbitrary decision regions can be arbitrarily well approximated by continuous feedforward neural networks with only a single internal, hidden layer and any continuous sigmoidal nonlinearity. The paper discusses approximation properties of other possible types of nonlinearities that might be implemented by artificial neural networks.
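For orientation, the approximating sums this abstract refers to have the following form, where sigma is a continuous sigmoidal nonlinearity; the symbols N, alpha_j, y_j and theta_j are the standard notation for this result (Cybenko's theorem) and are not taken from elsewhere on this page.

$$
G(x) \;=\; \sum_{j=1}^{N} \alpha_j \,\sigma\!\left(y_j^{\mathsf{T}} x + \theta_j\right),
\qquad x \in [0,1]^n,
$$

and the theorem states that such sums are dense in $C([0,1]^n)$: for any continuous $f$ and any $\varepsilon > 0$ there exist $N$, $\alpha_j$, $y_j$, $\theta_j$ with $\sup_{x \in [0,1]^n} |G(x) - f(x)| < \varepsilon$.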
Book
Ripley brings together two crucial ideas in pattern recognition: statistical methods and machine learning via neural networks. He brings unifying principles to the fore, and reviews the state of the subject. Ripley also includes many examples to illustrate real problems in pattern recognition and how to overcome them.
Article
Hypergeometric functions of matrix argument are used in several fields of mathematics. Since there are no general closed-form expressions for this family of functions, this article seeks a good approximation by computing zonal polynomials of high degree and expanding the functions in a truncated series.
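For context on the truncated series mentioned here, the hypergeometric function of matrix argument is conventionally defined by a series over integer partitions in terms of zonal polynomials; the display below uses that standard definition (with generalised Pochhammer symbols $(a)_\kappa$ and zonal polynomials $C_\kappa$) and is not the article's own approximation scheme, which is not reproduced here.

$$
{}_pF_q(a_1,\dots,a_p;\, b_1,\dots,b_q;\, X)
  \;=\; \sum_{k=0}^{\infty} \sum_{\kappa \vdash k}
        \frac{(a_1)_\kappa \cdots (a_p)_\kappa}{(b_1)_\kappa \cdots (b_q)_\kappa}
        \,\frac{C_\kappa(X)}{k!},
$$

where the inner sum runs over partitions $\kappa$ of $k$; a practical approximation truncates the outer sum at some maximum degree $K$ and requires the zonal polynomials up to that degree.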
Article
An abstract is not available.
Book
This book provides the first comprehensive treatment of feed-forward neural networks from the perspective of statistical pattern recognition. After introducing the basic concepts of pattern recognition, the book describes techniques for modelling probability density functions, and discusses the properties and relative merits of the multi-layer perceptron and radial basis function network models. It also motivates the use of various forms of error functions, and reviews the principal algorithms for error function minimization. As well as providing a detailed discussion of learning and generalization in neural networks, the book also covers the important topics of data processing, feature extraction, and prior knowledge. The book concludes with an extensive treatment of Bayesian techniques and their applications to neural networks.
Book
This book offers a new approach to self-organization, adaptation, learning, and memory, giving rise to courses for graduate students in information science, computer science, psychology, theoretical biology, and physics.