
Artificial Neural Networks (ANNs) are based on highly simplified brain dynamics and have been used as powerful computational
tools to solve complex pattern recognition, function estimation, and classification problems. Throughout their development,
ANNs have been evolving towards more powerful and more biologically realistic models. In the last decade, third-generation Spiking Neural Networks (SNNs) have been developed, which comprise spiking neurons. Information transfer in these neurons models the information transfer in biological neurons, i.e., via the precise
timing of spikes or a sequence of spikes. Addition of the temporal dimension for information encoding in SNNs yields new insight
into the dynamics of the human brain and has the potential to result in compact representations of large neural networks.
As such, SNNs have great potential for solving complicated time-dependent pattern recognition problems defined by time series
because of their inherent dynamic representation. This article presents an overview of the development of spiking neurons
and SNNs within the context of feedforward networks, and provides insight into their potential for becoming the next generation
neural networks.


... The field of SN P systems, and membrane computing in general, is an ever-evolving area of computer science that requires more research, as it changes the way people perceive how computations are traditionally made [3]. As part of the latest evolution of neural networks [4], much research is still being conducted into which specific problems SN P systems can be applied to. Figure 1 shows a simple SN P system, system Π 3k3 , that sends spikes to its environment after 3k + 3 time steps [2]. ...

... A branch of membrane computing [3] and considered part of the third generation of neural networks [4], spiking neural P systems (SN P systems) are defined as a set of neurons that communicate with each other through spikes and that follow a set of rules [2]. All neurons of the system follow a global clock, and the encoding of information based on the time a spike arrives, as well as the configuration of each component in the system, is what decides its final output. ...

... The sequence in which the spikes enter the environment is referred to as a spike train. Unlike in other examples of neural networks [4], the spikes in an SN P system are all identical. As such, it is when the spikes appear, based on the rules within a neuron and its initial configuration, that matters. ...

Under the active area of membrane computing, Spiking Neural P systems (SN P systems) are models of computation that take inspiration from biological neurons, e.g. the sending of spikes through the synapses connecting the neurons. As more research is done in this area, previous works have focused on creating simulators to aid in the creation, experimentation, and understanding of SN P systems. Most simulators are text-based, with little or no visualization of the systems and their computations. In this work we introduce a novel tool known as WebSnapse. WebSnapse is a web-based simulator which addresses the need for a visual tool for the study and experimentation (e.g. creation, modification) of SN P systems and their computations. We list some limitations of WebSnapse, e.g. in terms of the amount of memory allocated to the web browser during simulations. Any modern web browser, including those on some mobile devices such as phones or tablet computers, can run WebSnapse. In this way, both touch and mouse-based inputs are available to the user in learning about SN P systems in WebSnapse. Our results and testing show promise in the use of web-based technologies for visualising SN P systems and their computations to aid both old and new users.

... The interactions between billions of neurons and trillions of synapses in the brain are difficult to simulate quickly, even for supercomputers, because brains and computers work differently. Brains can be viewed as collections of billions of processing nodes that communicate with sparse events called spikes [2], whereas computers run imperative programs, vulnerable to the von Neumann bottleneck [3]. Power inefficiency is another problem. ...

... Column K in Table 2 specifies the number of thalamic neurons from which each population's neurons receive spikes, and parameter v_th in Table 1 the thalamic neurons' mean spiking frequency. The expected number of thalamic spikes received per second by each neuron of a population is v_th · K. For example, every neuron in population L23/exc receives about 8 · 1,600 = 12,800 thalamic spikes per second. ...
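The arithmetic in the excerpt above can be sketched with a short helper. The function name and signature are ours for illustration; the values (v_th = 8 Hz, K = 1,600 for population L23/exc) are those quoted in the excerpt.

```python
# Expected thalamic input rate per neuron: nu_th * K,
# where nu_th is the thalamic mean spiking frequency (Hz)
# and K is the number of thalamic neurons projecting to each
# neuron of the population.

def expected_thalamic_rate(nu_th_hz: float, k_inputs: int) -> float:
    """Expected number of thalamic spikes received per second."""
    return nu_th_hz * k_inputs

# Example from the excerpt: nu_th = 8 Hz and K = 1,600 inputs.
rate = expected_thalamic_rate(8.0, 1600)
print(rate)  # 12800.0
```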

Spiking Neural Networks (SNNs) are models that mimic and replicate the computational properties of the biological brain. Computation is performed using neurons that transmit information on axons between each other via synapses. SNNs have several important application areas, ranging from (brain-like) artificial intelligence to complex brain simulations. Most SNN simulations today are carried out on systems such as CPUs and GPUs, which fit SNNs poorly and often yield slow solutions that consume needlessly large amounts of energy. In this work, we present algorithms for efficient simulation of SNNs on Field-Programmable Gate Arrays (FPGAs), driven by our hypothesis that said devices can be much more power-efficient without sacrificing execution performance. We also provide an in-depth analysis and discussion of our algorithms and techniques. We target the important Potjans-Diesmann model, a well-known cortical microcircuit often used for assessing SNN simulation performance. By utilizing high-level synthesis (HLS) targeting the latest Intel Agilex 7 FPGA, we show that our best simulator can execute the microcircuit 25% faster than real-time and require only 21 nJ per synaptic event. Our result surpasses the state-of-the-art for single-device simulation, and the energy use is the lowest among published results.

... From a biophysical point of view, action potentials are the result of currents flowing through ion channels in the membrane of nerve cells. The integrate-and-fire neuron model [36,37] focuses on the dynamics of these currents and the resulting changes in membrane potential. Therefore, despite numerous simplifications, these models can capture the essence of neuronal behavior in terms of dynamic systems. ...

... An increase in potential above a certain threshold value produces an action potential (i.e., an impulse in the form of Dirac's delta), and then the membrane potential is reset to the resting level. The leaky integrate-and-fire (LIF) neuron model [36,37] is an extended model of the integrate-and-fire neuron, in which the issue of time-independent memory is solved by equipping the cell membrane with a so-called leak. This mechanism causes ions to diffuse in the direction of lowering the potential to the resting level or another set level. ...
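The LIF dynamics described above can be sketched in a few lines: inputs are integrated on the membrane, the leak pulls the potential back toward the resting level, and crossing the threshold emits a spike and resets the potential. The parameter values and Euler discretization below are illustrative assumptions, not taken from the cited model.

```python
# Minimal leaky integrate-and-fire (LIF) sketch.
# Euler integration of dV/dt = (-(V - v_rest) + I) / tau_m,
# with a spike-and-reset rule at the threshold.

def simulate_lif(input_current, dt=1.0, tau_m=20.0,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Return the spike times (in steps) for a given input sequence."""
    v = v_rest
    spikes = []
    for t, i_t in enumerate(input_current):
        v += dt * (-(v - v_rest) + i_t) / tau_m  # integrate + leak
        if v >= v_thresh:
            spikes.append(t)   # threshold crossing -> action potential
            v = v_reset        # reset to the resting/reset level
    return spikes

# Constant supra-threshold drive produces a regular spike train;
# zero drive produces none.
print(len(simulate_lif([1.5] * 200)) > 0)   # True
print(simulate_lif([0.0] * 200))            # []
```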

Recently, artificial intelligence (AI)-based algorithms have revolutionized the medical image segmentation processes. Thus, the precise segmentation of organs and their lesions may contribute to an efficient diagnostics process and a more effective selection of targeted therapies, as well as increasing the effectiveness of the training process. In this context, AI may contribute to the automatization of the image scan segmentation process and increase the quality of the resulting 3D objects, which may lead to the generation of more realistic virtual objects. In this paper, we focus on the AI-based solutions applied in medical image scan segmentation and intelligent visual content generation, i.e., computer-generated three-dimensional (3D) images in the context of extended reality (XR). We consider different types of neural networks used with a special emphasis on the learning rules applied, taking into account algorithm accuracy and performance, as well as open data availability. This paper attempts to summarize the current development of AI-based segmentation methods in medical imaging and intelligent visual content generation that are applied in XR. It concludes with possible developments and open challenges in AI applications in extended reality-based solutions. Finally, future lines of research and development directions of artificial intelligence applications, both in medical image segmentation and extended reality-based medical solutions, are discussed.

... 11 For example, the feed values of first-generation neural networks must be discrete, which limits them to Boolean functions only. 12,13 Furthermore, although second-generation neural networks, unlike first-generation ones, are able to process continuous input and output values, they adopt frequency coding that is not suited to certain biological neurons. 12,13 To address this problem, SNNs encode the information by spikes. ...

... 12,13 Furthermore, although second-generation neural networks, unlike first-generation ones, are able to process continuous input and output values, they adopt frequency coding that is not suited to certain biological neurons. 12,13 To address this problem, SNNs encode the information by spikes. 14,15 On the other hand, the "integrate-and-fire" type of spiking neurons is used primarily, 6,16 while in neurobiological systems other types of brain cells, such as astrocytes, 17 exist besides the spiking neurons. ...

Biological brains have a natural capacity for resolving certain classification tasks. Studies on biologically plausible spiking neurons, architectures and mechanisms of artificial neural systems that closely match biological observations while giving high classification performance are gaining momentum. Spiking neural P systems (SN P systems) are a class of membrane computing models and third-generation neural networks that are based on the behavior of biological neural cells and have been used in various engineering applications. Furthermore, SN P systems are characterized by a highly flexible structure that enables the design of a machine learning algorithm by mimicking the structure and behavior of biological cells without the over-simplification present in neural networks. Based on this aspect, this paper proposes a novel type of SN P system, namely, the layered SN P system (LSN P system), to solve classification problems by supervised learning. The proposed LSN P system consists of a multi-layer network containing multiple weighted fuzzy SN P systems with adaptive weight adjustment rules. The proposed system employs specific ascending dimension techniques and a selection method of output neurons for classification problems. The experimental results obtained using benchmark datasets from the UCI machine learning repository and the MNIST dataset demonstrated the feasibility and effectiveness of the proposed LSN P system. More importantly, the proposed LSN P system is the first SN P system to demonstrate sufficient performance for addressing real-world classification problems.

... Here we describe an LIF (leaky integrate-and-fire) neuron [21], one of the simplest spiking neuron models. The state of the neuron, known as the membrane potential V_m(t), evolves over time depending on its previous state as well as its inputs [22]. The equivalent circuit of an LIF neuron is shown in Figure 1. ...

Analyzing electroencephalogram (EEG) signals to detect the epileptic seizure status of a subject presents a challenge to existing technologies aimed at providing timely and efficient diagnosis. In this study, we aimed to detect interictal and ictal periods of epileptic seizures using a spiking neural network (SNN). Our proposed approach provides an online and real-time preliminary diagnosis of epileptic seizures and helps to detect possible pathological conditions. To validate our approach, we conducted experiments using multiple datasets. We utilized a trained SNN to identify the presence of epileptic seizures and compared our results with those of related studies. The SNN model was deployed on Xylo, a digital SNN neuromorphic processor designed to process temporal signals. Xylo efficiently simulates spiking leaky integrate-and-fire neurons with exponential input synapses. Xylo has much lower energy requirements than traditional approaches to signal processing, making it an ideal platform for developing low-power seizure detection systems. Our proposed method has a high test accuracy of 93.3% and 92.9% when classifying ictal and interictal periods. At the same time, the application has an average power consumption of 87.4 µW (I/O power) + 287.9 µW (computational power) when deployed on Xylo. Our method demonstrates excellent low-latency performance when tested on multiple datasets. Our work provides a new solution for seizure detection, and it is expected to be widely used in portable and wearable devices in the future.

... Translating the brain's computational principles into artificial systems has led to the development of various systems that all aspire to reproduce the synaptic plasticity of the brain. These systems range from nanowire-based networks (Caravelli et al., 2023; Loeffler et al., 2023; Milano et al., 2022) to Spiking Neural Networks (SNNs), which are considered the third generation of Artificial Neural Network (ANN) models (Maass, 1997; Ghosh-Dastidar and Adeli, 2009). An SNN encodes information in the timing of spikes and utilizes a dedicated learning rule, Spike-Timing-Dependent Plasticity (STDP) (Caporale and Dan, 2008), that modulates synaptic strengths, either strengthening or weakening them, based on the relative timing between spikes. ...
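The pairwise STDP rule described above can be sketched as follows: a synapse is strengthened when the presynaptic spike precedes the postsynaptic spike (causal pairing) and weakened otherwise. The exponential window and the constants below are common illustrative choices, not the B2STDP rule of the cited work.

```python
import math

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post -> potentiation
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:  # post before pre -> depression
        return -a_minus * math.exp(dt / tau)
    return 0.0    # coincident spikes: no change in this sketch

print(stdp_delta_w(10.0, 15.0) > 0)   # causal pair strengthens: True
print(stdp_delta_w(15.0, 10.0) < 0)   # anti-causal pair weakens: True
```

Pairs with a small absolute timing difference produce the largest changes, matching the exponential STDP window reported in the biological literature.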

In this study, we explore spintronic synapses composed of several Magnetic Tunnel Junctions (MTJs), leveraging their attractive characteristics such as endurance, nonvolatility, stochasticity, and energy efficiency for hardware implementation of unsupervised neuromorphic systems. Spiking Neural Networks (SNNs) running on dedicated hardware are suitable for edge computing and IoT devices where continuous online learning and energy efficiency are important characteristics. We focus in this work on synaptic plasticity by conducting comprehensive electrical simulations to optimize the MTJ-based synapse design and find the accurate neuronal pulses that are responsible for the Spike Timing Dependent Plasticity (STDP) behavior. Most proposals in the literature are based on hardware-independent algorithms that require the network to store the spiking history to be able to update the weights accordingly. In this work, we developed a new learning rule, the Bi-Sigmoid STDP (B2STDP), which originates from the physical properties of MTJs. This rule enables immediate synaptic plasticity based on neuronal activity, leveraging in-memory computing. Finally, the integration of this learning approach within an SNN framework leads to a 91.71% accuracy in unsupervised image classification, demonstrating the potential of MTJ-based synapses for effective online learning in hardware-implemented SNNs.

... In the context of artificial neural networks, saccades were studied with two distinct motivations: predicting human saccadic movements 21,22 , and exploiting saccadic input, primarily in terms of reducing the amount of input data 23 ; yet not from the perspective of their computational properties. Furthermore, signals from the eyes are processed by biological neurons, which can be modeled as spiking neural networks (SNNs), a very efficient class of biologically inspired neural networks [24][25][26][27]. They pass information through sequences of spikes, realizing efficient sparse communication with all-or-none events. Concurrently, their rich internal dynamics, modeling the temporal integration of incoming spikes from the synapses at the dendrites, make them well suited to tackling problems that involve the temporal evolution of the input information fed into the network 28 . ...

The visual oddity task was conceived to study the universal, ethnicity-independent analytic intelligence of humans from the perspective of comprehension of spatial concepts. Advancements in artificial intelligence have led to important breakthroughs, yet excelling at such abstract tasks remains challenging. Current approaches typically resort to non-biologically-plausible architectures with ever-growing models consuming substantially more energy than the brain. Motivated by the brain's efficiency and reasoning capabilities, we present a biologically inspired system that receives inputs from synthetic eye movements, reminiscent of saccades, and processes them with neuronal units incorporating dynamics of neocortical neurons. We introduce a procedurally generated visual oddity dataset to train an architecture extending conventional relational networks and our proposed system. We demonstrate that both approaches are capable of abstract problem-solving at high accuracy, and we uncover that both share the same essential underlying mechanism of reasoning in seemingly unrelated aspects of their architectures. Finally, we show that the biologically inspired network achieves superior accuracy, learns faster and requires fewer parameters than the conventional network.

... When the network structure and activation function are set, the weights between connected neurons can be trained using predefined learning rules. The resulting mathematical model, which ultimately implements a function, can be used to effectively handle classification and regression tasks (Ghosh-Dastidar and Adeli 2009). The second-generation artificial neural network has the advantages of high parallelism, good fault tolerance, and strong self-learning ability, which has aroused strong interest among researchers in various fields. ...

With the rapid development of technology, we have entered the era of big data, and artificial neural networks have become popular in various fields, for example, image classification, speech recognition, and autonomous driving. Applying artificial neural networks to the field of optics can greatly improve not only computational efficiency but also imaging quality. Optical neural networks, which use photons as the medium, not only mitigate the shortcomings of traditional neural networks but also offer high speed and low loss. In recent years, optical instruments have developed rapidly and are used in precision measurement and aerospace. The manufacture of an optical instrument involves three main stages: processing, assembly, and testing. However, errors cannot be avoided during processing and assembly. Therefore, after installation and adjustment, the position of the system's optical axis must be measured again, and the remeasured optical axis used as the reference, to ensure accuracy in subsequent use. If the measured optical axis and the actual optical axis are not consistent in position, eccentricity results, degrading subsequent imaging performance. To solve this problem, this paper proposes a calibration method for optical instruments based on an artificial neural network.

... The error is computed by Equation (1). In this work, Adam, a gradient-descent-based optimization algorithm, is employed to minimize the loss [43]. To be specific, the gradients of the loss are propagated backward all the way down to the input layer through the hidden layers using the recursive chain rule. ...
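As a hedged illustration of the Adam optimizer mentioned above, a single scalar update step might look like the following. The function is ours; the hyperparameter defaults are the commonly published ones and are not taken from the cited work [43].

```python
import math

def adam_step(param, grad, m, v, t, lr=1e-3,
              beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update on a scalar parameter (t is the step count, >= 1)."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# One step on a scalar parameter with gradient 2.0:
p, m, v = adam_step(1.0, 2.0, 0.0, 0.0, t=1)
print(round(p, 4))  # 0.999
```

Note that on the first step the bias-corrected update has magnitude close to the learning rate regardless of the gradient's scale, which is one reason Adam is robust to poorly scaled gradients.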

Increasing violence in workplaces such as hospitals seriously challenges public safety. However, it is time- and labor-consuming to visually monitor masses of video data in real time. Therefore, automatic and timely violent activity detection from videos is vital, especially for small monitoring systems. This paper proposes a two-stream deep learning architecture for video violent activity detection named SpikeConvFlowNet. First, RGB frames and their optical flow data are used as inputs for each stream to extract the spatiotemporal features of videos. After that, the spatiotemporal features from the two streams are concatenated and fed to the classifier for the final decision. Each stream utilizes a supervised neural network consisting of multiple convolutional spiking and pooling layers. Convolutional layers are used to extract high-quality spatial features within frames, and spiking neurons can efficiently extract temporal features across frames by remembering historical information. The spiking neuron-based optical flow can strengthen the capability of extracting critical motion information. This method combines their advantages to enhance the performance and efficiency for recognizing violent actions. The experimental results on public datasets demonstrate that, compared with the latest methods, this approach greatly reduces parameters and achieves higher inference efficiency with limited accuracy loss. It is a potential solution for applications in embedded devices that provide low computing power but require fast processing speeds.

... However, ANNs usually require floating-point multiplications between the synaptic weights and the activations, making them quite inefficient in terms of power consumption [8,9]. Inspired by the computationally efficient brain, spiking neural networks (SNNs) constitute a class of neural networks [10][11][12][13][14] in which neuronal information is communicated asynchronously through sequences of spikes. The spike trains are commonly interpreted as sequences of sparse binary signals, leading to efficient operation; they are therefore important for many applications [15][16][17]. ...

Spiking neural networks (SNNs) are computationally powerful, biologically inspired models in which neurons communicate through sequences of spikes, regarded here as sparse binary sequences of zeros and ones. In neuroscience it is conjectured that time encoding, where the information is carried by the temporal position of spikes, plays a crucial role at least in some parts of the brain, where estimation of the spiking rate with a large latency cannot take place. Motivated by the efficiency of temporal coding compared with the widely used rate coding, the goal of this paper is to develop and train an energy-efficient time-coded deep spiking neural network system. To ensure that the similarity among input stimuli is translated into a correlation of the spike sequences, we introduce correlative temporal encoding and extended correlative temporal encoding techniques to map analog input information into input spike patterns. Importantly, we propose an implementation where all multiplications in the system are replaced with at most a few additions. As a more efficient alternative to both rate-coded SNNs and artificial neural networks, such a system represents a preferable solution for the implementation of neuromorphic hardware. We consider data classification tasks where input spike patterns are presented to a feed-forward architecture with leaky integrate-and-fire neurons. The SNN is trained by backpropagation through time with the objective to match sequences of output spikes with those of specifically designed target spike patterns, each corresponding to exactly one class. During inference, the target spike pattern with the smallest van Rossum distance from the output spike pattern determines the class. Extensive simulations indicate that the proposed system achieves a classification accuracy on par with that of state-of-the-art machine learning models.
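The van Rossum distance used above for inference can be sketched as follows: each spike train is convolved with a causal exponential kernel, and the L2 distance between the filtered traces is computed on a discrete time grid. The kernel time constant, grid resolution, and normalization below are illustrative assumptions, not the cited paper's exact implementation.

```python
import math

def filtered_trace(spike_times, t_grid, tau=10.0):
    """Convolve a spike train with a causal exponential kernel."""
    return [sum(math.exp(-(t - s) / tau) for s in spike_times if s <= t)
            for t in t_grid]

def van_rossum_distance(train_a, train_b, t_max=100.0, dt=0.1, tau=10.0):
    """L2 distance between the exponentially filtered spike trains."""
    grid = [i * dt for i in range(int(t_max / dt))]
    fa = filtered_trace(train_a, grid, tau)
    fb = filtered_trace(train_b, grid, tau)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(fa, fb)) * dt / tau)

# Identical trains have zero distance; the distance grows as they differ.
print(van_rossum_distance([10, 30], [10, 30]))       # 0.0
print(van_rossum_distance([10, 30], [20, 40]) > 0)   # True
```

At inference time, one would compute this distance between the network's output spike pattern and each class's target pattern, and pick the class with the smallest distance.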

... Third-generation neural network technology, known as SNNs, employs neuron models driven by the biological mechanics of neuronal signaling (Ghosh-Dastidar & Adeli, 2009). SNNs have a structure resembling that of regular neural networks. ...

In this paper, a handwritten digit classification system is proposed based on the Discrete Wavelet Transform and a Spiking Neural Network. The system consists of three stages. The first stage preprocesses the data, and the second stage performs feature extraction based on the Discrete Wavelet Transform (DWT). The third stage performs classification and is based on a Spiking Neural Network (SNN). To evaluate the system, two standard databases are used: the MADBase database and the MNIST database. The proposed system achieved a high classification accuracy of 99.1% on the MADBase database and 99.9% on the MNIST database.

... Another gap in the current literature is the lack of model-specific proposals suitable for detecting OoD instances in Spiking Neural Networks (SNNs). These models, often referred to as the third generation of artificial neural networks, are the evolution of present-day neural networks by virtue of their energy efficiency when implemented on specialized neuromorphic hardware, and their ability to model complex temporal dynamics thanks to their event-based nature [10,11]. Despite the intense research activity around this family of models in recent times, no prior work exists dealing with the detection of OoD instances by leveraging the specifics of the spike-based operation of SNNs. ...

Research around Spiking Neural Networks has ignited during the last years due to their advantages over traditional neural networks, including efficient processing and an inherent ability to model complex temporal dynamics. Despite these differences, Spiking Neural Networks face issues similar to those of other neural computation approaches when deployed in real-world settings. This work addresses one of the practical circumstances that can hinder the trustworthiness of this family of models: the possibility of querying a trained model with samples far from the distribution of its training data (also referred to as Out-of-Distribution or OoD data). Specifically, this work presents a novel OoD detector that can identify whether test examples input to a Spiking Neural Network belong to the distribution of the data over which it was trained. For this purpose, we characterize the internal activations of the hidden layers of the network in the form of spike count patterns, which lay a basis for determining when the activations induced by a test instance are atypical. Furthermore, a local explanation method is devised to produce attribution maps revealing which parts of the input instance push most towards the detection of an example as an OoD sample. Experiments are performed over several image classification datasets to compare the proposed detector to other OoD detection schemes from the literature. As the obtained results clearly show, the proposed detector performs competitively against such alternative schemes, and produces relevance attribution maps that conform to expectations for synthetically created OoD instances.
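The idea above, flagging a test input as OoD when its hidden-layer spike-count pattern is atypical relative to the training data, can be illustrated with a deliberately simplified detector. The per-feature z-score test below is our stand-in for the cited work's characterization, and all values are synthetic.

```python
import statistics

def fit_detector(train_counts):
    """train_counts: list of hidden-layer spike-count vectors
    collected while running the trained SNN on training data."""
    cols = list(zip(*train_counts))
    means = [statistics.mean(c) for c in cols]
    stds = [statistics.stdev(c) or 1.0 for c in cols]  # guard zero std
    return means, stds

def is_ood(counts, detector, z_thresh=3.0):
    """Flag a spike-count vector whose worst per-feature z-score
    exceeds the threshold as Out-of-Distribution."""
    means, stds = detector
    z = max(abs(c - m) / s for c, m, s in zip(counts, means, stds))
    return z > z_thresh

det = fit_detector([[10, 5, 0], [12, 4, 1], [11, 6, 0], [9, 5, 1]])
print(is_ood([11, 5, 1], det))    # typical counts -> False
print(is_ood([60, 40, 30], det))  # atypical counts -> True
```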

... Then, inspired by biology, we develop a new approach for solving analytic intelligence tasks. To that end, we first focus on spiking neural networks (SNNs) that represent a very efficient class of biologically inspired neural networks [12], [13], passing information through sequences of spikes. Because of the rich dynamics governing the spiking neurons, SNNs are well suited to tackling problems that involve the temporal evolution of the input information fed into the network [14]. ...

The visual oddity task was conceived as a universal, ethnicity-independent analytic intelligence test for humans. Advancements in artificial intelligence have led to important breakthroughs, yet competing with humans on such analytic intelligence tasks remains challenging and typically resorts to non-biologically-plausible architectures. We present a biologically realistic system that receives inputs from synthetic eye movements (saccades) and processes them with neurons incorporating dynamics of neocortical neurons. We introduce a procedurally generated visual oddity dataset to train an architecture extending conventional relational networks and our proposed system. Both approaches surpass human accuracy, and we uncover that both share the same essential underlying mechanism of reasoning. Finally, we show that the biologically inspired network achieves superior accuracy, learns faster and requires fewer parameters than the conventional network.

... This could be done by adding recurrence to the SOM (Voegtlin, 2002) or by integrating SNNs. SNNs have a distributed network structure (Xin and Embrechts, 2001; Ghosh-Dastidar and Adeli, 2009; Schliebs and Kasabov, 2013), but they do not lose the signal nature of the received data, processing it as spikes in a sequential mode. Algorithms using SNNs have already allowed us to solve quite important and varied problems, such as unsupervised learning (Bohte et al., 2002; Dong et al., 2018), auto-encoding (Kamata et al., 2021), and even supervised AI problems (Kheradpisheh and Masquelier, 2020). ...

The field of artificial intelligence has significantly advanced over the past decades, inspired by discoveries from the fields of biology and neuroscience. The idea of this work is inspired by the process of self-organization of cortical areas in the human brain from both afferent and lateral/internal connections. In this work, we develop a brain-inspired neural model associating Self-Organizing Maps (SOM) and Hebbian learning in the Reentrant SOM (ReSOM) model. The framework is applied to multimodal classification problems. Compared to existing methods based on unsupervised learning with post-labeling, the model enhances the state-of-the-art results. This work also demonstrates the distributed and scalable nature of the model through both simulation results and hardware execution on a dedicated FPGA-based platform named SCALP (Self-configurable 3D Cellular Adaptive Platform). SCALP boards can be interconnected in a modular way to support the structure of the neural model. Such a unified software and hardware approach enables the processing to be scaled and allows information from several modalities to be merged dynamically. The deployment on hardware boards provides performance results of parallel execution on several devices, with the communication between each board through dedicated serial links. The proposed unified architecture, composed of the ReSOM model and the SCALP hardware platform, demonstrates a significant increase in accuracy thanks to multimodal association, and a good trade-off between latency and power consumption compared to a centralized GPU implementation.


... There has been increasing interest in spiking neural networks in recent years. SNNs are seen as potential solutions to the bottlenecks of ANNs in pattern recognition, such as energy efficiency [1]. But current methods such as ANN-to-SNN conversion and back-propagation do not take full advantage of these networks, and unsupervised methods have not yet reached a success comparable to advanced artificial neural networks. ...

There has been increasing interest in spiking neural networks in recent years. SNNs are seen as potential solutions to the bottlenecks of ANNs in pattern recognition, such as energy efficiency. But current methods such as ANN-to-SNN conversion and back-propagation do not take full advantage of these networks, and unsupervised methods have not yet reached a success comparable to advanced artificial neural networks. It is important to study the behavior of SNNs trained with unsupervised learning methods such as spike-timing-dependent plasticity (STDP) on video classification tasks, including mechanisms to model motion information using spikes, as this information is critical for video understanding. This paper presents multiple methods of transposing temporal information into a static format and then transforming the visual information into spikes using latency coding. These methods are paired with two types of temporal fusion, known as early and late fusion, and are used to help the spiking neural network capture the spatio-temporal features of videos. We rely on the architecture of a convolutional spiking neural network trained with STDP, and we test the performance of this network on action recognition tasks. Understanding how a spiking neural network responds to different methods of movement extraction and representation can help reduce the performance gap between SNNs and ANNs. We show the effect of similarity in the shape and speed of certain actions on action recognition with spiking neural networks, and we highlight the effectiveness of some methods compared to others.
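The latency coding mentioned above can be sketched in a few lines. This is a generic time-to-first-spike scheme under assumed parameters (a linear mapping and an illustrative `t_max`), not the paper's exact pipeline:

```python
import numpy as np

def latency_encode(image, t_max=100.0):
    """Latency (time-to-first-spike) code: stronger pixels fire earlier;
    zero-intensity pixels never fire (spike time = infinity)."""
    image = np.asarray(image, dtype=float)
    times = np.full(image.shape, np.inf)
    active = image > 0
    times[active] = t_max * (1.0 - image[active])  # linear latency map
    return times

frame = np.array([[0.0, 0.5],
                  [1.0, 0.25]])
spike_times = latency_encode(frame)
# intensity 1.0 fires at t = 0, intensity 0.5 at t = 50, intensity 0 never
```

The downstream network then sees information purely in the relative timing of these spikes rather than in analog pixel values.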

... Furthermore, the preferences of the spiking neurons in their SCNN change progressively, thus capturing different salient features of the input image. Spiking Neural Networks (SNN) are often referred to as the third-generation neural networks that hold the potential for sparse and low-power computation [3]. However, due to the discrete nature of spiking neuron models, training SNNs with the traditional backpropagation algorithm has been a challenge. ...

We have presented a Spiking Convolutional Neural Network (SCNN) that incorporates retinal foveal-pit inspired Difference of Gaussian filters and rank-order encoding. The model is trained using a variant of the backpropagation algorithm adapted to work with spiking neurons, as implemented in the Nengo library. We have evaluated the performance of our model on two publicly available datasets - one for a digit recognition task and the other for a vehicle recognition task. The network has achieved up to 90% accuracy, where loss is calculated using the cross-entropy function. This is an improvement over the roughly 57% accuracy obtained with the alternate approach of performing the classification without any kind of neural filtering. Overall, our proof-of-concept study indicates that introducing biologically plausible filtering into an existing SCNN architecture works well with noisy input images such as those in our vehicle recognition task. Based on our results, we plan to enhance our SCNN by integrating lateral inhibition-based redundancy reduction prior to rank-ordering, which will further improve the classification accuracy of the network.
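Rank-order encoding, as used above, discards exact spike times and keeps only the order in which neurons fire. A minimal sketch follows; the decay factor `gamma` and the weighting readout are illustrative assumptions, not the Nengo implementation:

```python
import numpy as np

def rank_order_encode(values):
    """Rank-order code: keep only the firing order of the neurons,
    earliest spike = strongest input. Returns neuron indices in order."""
    return np.argsort(-np.asarray(values, dtype=float))

def rank_order_readout(order, gamma=0.9):
    """Readout that weights each input by gamma**rank, so earlier
    spikes contribute more (a common rank-order decoding scheme)."""
    w = np.zeros(len(order))
    w[order] = gamma ** np.arange(len(order))
    return w

order = rank_order_encode([0.2, 0.9, 0.5])   # neuron 1 fires first
weights = rank_order_readout(order)
```

Because only the ordering matters, the code is invariant to global changes in input contrast, which is one reason it pairs well with the DoG filtering described above.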

... However, ANNs that require high-precision arithmetic are in general inefficient in terms of power consumption. SNNs [3] [4] rely on sequences of spikes (ones and zeros) rather than continuous values for neuronal communication and are thus significantly more efficient than other ANNs [5]. Moreover, SNNs are particularly attractive when inputs are sparse and asynchronous, and when learning must be on-line and lifelong. ...

... As brain mechanisms and information processing become better understood, ANNs become more powerful, realistic, and sophisticated. 7 ANNs can be divided into two broad categories: spiking and nonspiking neurons. Nonspiking neural networks correspond to the first generation of ANN models, which mainly describe the nonlinear relationship between the output and the input neural activity using continuous variables. ...

In contrast to previous artificial neural networks (ANNs), spiking neural networks (SNNs) work based on temporal coding approaches. In the proposed SNN, the number of neurons, neuron models, encoding method, and learning algorithm design are described clearly and precisely. It is also shown that optimizing the SNN parameters based on physiology, and maximizing the information they pass, leads to a more robust network. In this paper, inspired by the "center-surround" structure of the receptive fields in the retina and the amount of overlap that they have, a robust SNN is implemented. It is based on the Integrate-and-Fire (IF) neuron model and uses time-to-first-spike coding to train the network by a newly proposed method. The Iris and MNIST datasets were employed to evaluate the performance of the proposed network, whose accuracy, with 60 input neurons, was 96.33% on the Iris dataset. The network was trained in only 45 iterations, indicating a reasonable convergence rate. For the MNIST dataset, when the gray level of each pixel was taken as input to the network, 600 input neurons were required and the accuracy of the network was 90.5%. Next, 14 structural features were used as input; the number of input neurons therefore decreased to 210 and accuracy increased to 95%, meaning that an SNN with fewer input neurons and good skill was implemented. The ABIDE1 dataset was also applied to the proposed SNN. Of the 184 samples, 79 are from healthy individuals and 105 from individuals with autism. One characteristic that can differentiate these two classes is the entropy of the data, so Shannon entropy is used for feature extraction. Applying these values to the proposed SNN, an accuracy of 84.42% was achieved in only 120 iterations, which compares well with recent results.
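The combination of an Integrate-and-Fire neuron with time-to-first-spike coding described above can be sketched as follows; the threshold, step size, and constant-current drive are illustrative assumptions, and the receptive-field structure and training rule are omitted:

```python
def first_spike_time(input_current, threshold=1.0, dt=0.1, t_max=100.0):
    """Non-leaky Integrate-and-Fire neuron under constant current:
    return the time of its first spike, or None if it never fires."""
    v, t = 0.0, 0.0
    while t < t_max:
        v += input_current * dt   # pure integration, no leak
        t += dt
        if v >= threshold:
            return t
    return None

# a stronger input crosses threshold sooner, so its spike arrives earlier;
# this earliest-spike ordering is what time-to-first-spike coding exploits
t_strong = first_spike_time(0.5)
t_weak = first_spike_time(0.1)
```

In a full network, each input feature would drive one such neuron, and the class decision can be read from which output neuron fires first.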

Over the past two decades, the term “intelligent media” has surfaced to describe media that take on problematics of cognition, communication, and sensory perception loosely modeled after human intelligence. Taking the form of hardware‐software assemblages, these novel media demonstrate forms of autonomy that challenge human control and herald a complete redistribution of the sensible and agential. The aim of this article is to illuminate the shifting boundaries of nature and artifice as these figure in relations between humans and computational machines in the emergent computational culture of the 21st century. Its specific focus is on art‐making where the medium or materials of art have been dematerialized and figure as “intelligent” and generative in their own right. Based on a historical discussion of art‐making practices and the analysis of an artistic workshop organized by the authors, this article stakes out a much‐needed study of the other‐than‐human agency of artificial entities.

This article presents a comprehensive analysis of spiking neural networks (SNNs) and their mathematical models for simulating the behavior of neurons through the generation of spikes. The study explores various models, including LIF and NLIF, for constructing SNNs and investigates their potential applications in different domains. However, implementation poses several challenges, including identifying the most appropriate model for classification tasks that demand high accuracy and low performance loss. To address this issue, this study compares the performance, behavior, and spike generation of multiple SNN models using consistent inputs and neurons. Moreover, the study quantifies the number of spiking operations required by each model to process the same inputs and produce equivalent outputs, enabling a thorough assessment of computational efficiency. The findings provide valuable insights into the benefits and limitations of SNNs and their models, underscoring the significance of comparing different models to make informed decisions in practical applications. Additionally, the results reveal essential variations in biological plausibility and computational efficiency among the models, further emphasizing the importance of selecting the most suitable model for a given task. Overall, this study contributes to a deeper understanding of SNNs and offers practical guidelines for using their potential in real-world scenarios.
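As a concrete reference for the LIF model compared in the study, here is a minimal forward-Euler LIF neuron; the time constant, threshold, and drive are arbitrary illustrative values, not the paper's experimental settings:

```python
def lif_spike_count(current, tau=10.0, v_th=1.0, v_reset=0.0,
                    dt=1.0, steps=200):
    """Count output spikes of a leaky integrate-and-fire (LIF) neuron
    driven by a constant input current (forward-Euler integration)."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += dt * (current - v) / tau   # leak pulls v toward `current`
        if v >= v_th:
            spikes += 1
            v = v_reset                 # fire and reset
    return spikes

# sub-threshold drive never fires; supra-threshold drive fires repeatedly
low, high = lif_spike_count(0.8), lif_spike_count(2.0)
```

Counting spikes produced for identical inputs, as done here, is exactly the kind of per-model operation count the study uses to compare computational efficiency.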

There are random uses of androgenic anabolic steroids such as sustanon, especially among young people and adolescents. These drugs have many long-term negative side effects and have therefore become a major health problem. This study was conducted to examine the effect of intramuscular injection of an androgenic anabolic steroid (sustanon) on some hormonal, immunological, and histological parameters in female albino rats. The study was carried out in the animal house (College of Veterinary Medicine, University of Al-Qassim Green). Twenty-four female rats were divided into four groups (6 replicates each); the first, second, and third treatment subgroups were injected with sustanon at concentrations of 0.05, 0.1, and 0.2 mg/kg/day, respectively, for six weeks, while the fourth subgroup served as the control and was injected with physiological normal saline (0.9% NaCl). Blood parameters (RBCs, WBCs, PCV, PLT, Hb) were estimated, and the histological study covered histopathological changes in skeletal muscle tissue (thigh, arm). The results showed a significant increase (p ≤ 0.05) in the levels of Hb, RBCs, WBCs, PCV, and PLT compared with the control group. Histological changes included hypertrophy of muscle fibers in thigh and arm tissue, with hyperplasia of muscle cells in the arm at high doses. The present study concludes that increasing the concentration of sustanon may cause clear pathological changes (physiological and histological) in most of the study parameters.

Using traditional and microwave heating methods, new Copper(II) complexes were prepared containing the mixed ligands isatinazine (IAH2) and benzilthiosemicarbazonebenzelidene (BtscbH), benzilthiosemicarbazone-ortho-hydroxybenzelidene (BtscoH2), benzilthiosemicarbazone-meta-hydroxybenzelidene (BtscmH2), or benzilthiosemicarbazone-para-hydroxybenzelidene (BtscpH2). Physical and chemical techniques were used to characterise the resulting compounds. In neutral (or slightly acidic) media, the ligands produced ionic complexes with the general formula [Cu(IAH2)(LHi)Ac], while in basic medium, neutral complexes with the general formula [Cu(IAH)(LHi-1)] were generated, where LHi = BtscbH, BtscoH2, BtscmH2, or BtscpH2 and LHi-1 = the deprotonated ligands. The hexa-coordinated mononuclear complexes were found to have distorted octahedral geometries. Agar plate diffusion techniques were used to test the biological activity of the ligands and all of the complexes against Staphylococcus aureus, Pseudomonas aeruginosa, Proteus mirabilis, and Escherichia coli. The antifungal activity of all the ligands and compounds was tested in vitro against Aspergillus niger and Candida albicans. No effects were observed.

Vitamin D is one of the necessary substances that must be available in food, and it can be made through the exposure of the skin to the ultraviolet rays in sunlight. Vitamin D has a significant role in metabolic processes and physiological functions, and helps avoid muscle damage and support recovery processes. It also has a role in regulating calcium, as there is a strong relationship between vitamin D and bone health, as well as a role in muscle function and immune responses in athletes and normal people. This article focuses on the role of vitamin D for athletes and non-athletes, both naturally and when taken as a dietary supplement to increase overall effectiveness and athletic performance. It was concluded that athletes should take vitamin D as a nutritional supplement depending on the type of sports activity they perform and the continuity of their sports training.
Key words: Vitamin D, Performance, Physical Activity, Supplementation, Athletes.


The importance of data encryption has grown dramatically, especially for personal data, and the elliptic curve cryptosystem has become a prevalent solution for data security. The security and privacy required to protect data have recently generated much concern within the research community. This paper's objective is to obtain complicated, secure ciphertext and make cryptanalysis difficult. In this paper, we modify the El-Gamal Elliptic Curve Cryptosystem (ECC) by producing new secret keys for encrypting data and embedding messages using the behavior of the Discrete Logarithm Problem (DLP). This modification offers enhanced encryption standards and improved security. The experimental results show that the proposed algorithm is more complex than the original method.
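For orientation, the textbook El-Gamal scheme over elliptic curves that the paper modifies can be sketched on a small teaching curve (y² = x³ + 2x + 2 mod 17, a 19-point group). This is not the paper's modified scheme, and the curve is far too small for real security:

```python
# Toy El-Gamal over the teaching curve y^2 = x^3 + 2x + 2 (mod 17),
# whose group has 19 points with generator G = (5, 1).
p, a = 17, 2
O = None  # point at infinity

def add(P, Q):
    """Elliptic-curve point addition over F_p."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return O
    if P == Q:
        s = (3 * P[0] ** 2 + a) * pow(2 * P[1], -1, p) % p
    else:
        s = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (s * s - P[0] - Q[0]) % p
    return (x, (s * (P[0] - x) - P[1]) % p)

def mul(k, P):
    """Scalar multiplication by double-and-add."""
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

def neg(P):
    return None if P is None else (P[0], -P[1] % p)

G = (5, 1)
d = 7                 # receiver's secret key
pub = mul(d, G)       # public key dG

def encrypt(M, k):    # M: message embedded as a curve point, k: ephemeral key
    return mul(k, G), add(M, mul(k, pub))

def decrypt(C1, C2):
    return add(C2, neg(mul(d, C1)))

M = mul(3, G)                      # e.g. the point 3G = (10, 6)
C1, C2 = encrypt(M, k=5)
assert decrypt(C1, C2) == M        # recover the embedded point
```

Decryption works because C2 − d·C1 = M + k·(dG) − d·(kG) = M; breaking the scheme without d requires solving the DLP in the curve group.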

Spiking Neural Networks (SNNs) are the most common and widely used artificial neural network models in bio-inspired computing. However, SNN simulation requires high computational resources. Therefore, multiple state-of-the-art (SOTA) algorithms explore parallel hardware-based implementations for SNN simulation, such as the use of Graphics Processing Units (GPUs). However, we recognize inefficiencies in the utilization of hardware resources in the current SOTA implementations for SNN simulation, namely the Neuron (N)-, Synapse (S)-, and Action Potential (AP)-algorithm. This work proposes and implements two novel algorithms on an NVIDIA Ampere A100 GPU: the Active Block (AB)- and Single Kernel Launch (SKL)-algorithm. The proposed algorithms consider the available computational resources on both the Central Processing Unit (CPU) and the GPU, leading to a balanced workload for SNN simulation. Our SKL-algorithm is able to remove the CPU bottleneck completely. The average speedups obtained by the best of the proposed algorithms are factors of 0.83×, 1.36×, and 1.55× in comparison to the SOTA algorithms for firing modes 0, 1, and 2, respectively. The maximum speedups obtained are factors of 1.9×, 2.1×, and 2.1× for modes 0, 1, and 2, respectively. Keywords: SNNs, GPUs, Dynamic Parallelism, Grid-stride Loop, Parallelization Algorithms

Spiking Neural Networks (SNNs) are biologically realistic and practically promising for low-power computation because of their event-driven mechanism. Usually, the training of SNNs suffers accuracy loss on various tasks, yielding inferior performance compared with ANNs. A conversion scheme can obtain competitive accuracy by mapping trained ANNs' parameters to SNNs with the same structures. However, an enormous number of time steps are required for these converted SNNs, forfeiting the energy-efficiency benefit. Utilizing both the accuracy advantages of ANNs and the computing efficiency of SNNs, a novel SNN training framework is proposed, namely layer-wise ANN-to-SNN knowledge distillation (LaSNN). To achieve competitive accuracy and reduced inference latency, LaSNN transfers the learning from a well-trained ANN to a small SNN by distilling the knowledge rather than converting the parameters of the ANN. The information gap between heterogeneous ANNs and SNNs is bridged by introducing an attention scheme; the knowledge in an ANN is effectively compressed and then efficiently transferred using our layer-wise distillation paradigm. We conduct detailed experiments to demonstrate the effectiveness, efficacy, and scalability of LaSNN on three benchmark datasets (CIFAR-10, CIFAR-100, and Tiny ImageNet). We achieve competitive top-1 accuracy compared to ANNs and 20x faster inference than converted SNNs with similar performance. More importantly, LaSNN is dexterous and extensible, and can be effortlessly applied to SNNs with different architectures/depths and input encoding methods, contributing to their potential development.
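The layer-wise attention transfer itself is not reproduced here, but the response-based objective at the core of knowledge distillation can be sketched as a temperature-softened KL divergence between teacher (ANN) and student (SNN) outputs; the temperature value and toy logits are illustrative assumptions:

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())          # numerically stable softmax
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=4.0):
    """KL(teacher || student) between temperature-softened distributions,
    the usual response-based knowledge-distillation objective."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * np.log(p / q)))

teacher = [8.0, 2.0, 1.0]
loss_far = distillation_loss(teacher, [0.0, 5.0, 0.0])
loss_near = distillation_loss(teacher, [7.5, 2.5, 1.0])
# the closer the student's outputs match the teacher's, the lower the loss
```

Raising the temperature softens both distributions so the student also learns from the teacher's relative confidence in wrong classes, not only the top prediction.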

Brain-inspired oscillatory neural networks (ONNs) utilize coupled oscillators to emulate biological neuronal dynamics, and are naturally suitable for image and pattern recognition. Many hardware implementations of ONNs have been explored recently using analog or digital CMOS systems, which can be limited by Moore's law. In this work, we used one of the advanced superconducting technologies for coupled oscillator networks to further improve energy efficiency and processing speed. Inductively coupled ring oscillators using rapid single flux quantum (RSFQ) technology were designed and simulated. These Josephson junction (JJ) based oscillators can operate as fast as tens of GHz while consuming an energy of only several aJ per operation. Excluding cryo-cooling factors, the estimated power consumption of our RSFQ oscillator is hundreds of nW at a frequency of 12.5 GHz, several times smaller than that of a typical CMOS ring oscillator. Furthermore, a system for pattern recognition with error detection was designed based on these oscillators. Synchronization dynamics in a pair of coupled oscillators were used to identify whether a test pixel matches a reference pixel. A network comprising 16 pairs of oscillators was wired together, along with synchronization detectors and a fluxon integrator, to demonstrate the pattern recognition function. This work was performed using standard JJ models and demonstrated through WRspice and JoSIM software simulations. The circuits and systems proposed in this work aim to provide insights into the use of superconducting technology for implementing ONNs and may serve as basic building blocks for the design of more complex oscillatory computational networks in image processing and beyond. Single-oscillator modeling and systematic exploration of network function will be performed in future work.
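The pixel-matching principle above — coupled oscillators synchronize when their parameters match — can be illustrated with a generic Kuramoto pair. This is not an RSFQ circuit model; the coupling strength and natural frequencies are arbitrary:

```python
import math

def phase_lock_error(w1, w2, K, dt=0.01, steps=5000):
    """Integrate two Kuramoto-coupled phase oscillators and return the
    final phase difference, wrapped into [0, pi]."""
    th1, th2 = 0.0, 2.0          # start 2 rad apart
    for _ in range(steps):
        d1 = w1 + K * math.sin(th2 - th1)
        d2 = w2 + K * math.sin(th1 - th2)
        th1 += d1 * dt
        th2 += d2 * dt
    diff = abs(th1 - th2) % (2 * math.pi)
    return min(diff, 2 * math.pi - diff)

# equal natural frequencies ("matching pixels") phase-lock under coupling
matched = phase_lock_error(1.0, 1.0, K=0.5)
# a large frequency mismatch with the same weak coupling keeps drifting
mismatched = phase_lock_error(1.0, 3.0, K=0.5)
```

A synchronization detector in such a system only has to threshold the residual phase difference to decide whether the test pixel matches the reference.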

This work proposes a novel modeling approach for analog organic circuits using easily customized circuit topologies and parameters of individual p- and n-type organic field effect transistors (OFETs). Aided by a combination of primitive elements (OFETs, capacitors, resistors), the convoluted behavior of analog organic neuromorphic circuits (ONCs), and even of other general analog organic circuits, can be predicted. The organic log-domain integrator (oLDI) synaptic circuit, the organic differential-pair integrator (oDPI) synaptic circuit, and the organic Axon-Hillock (oAH) somatic circuit are designed and serve as the modular circuit primitives of more complicated ONCs. We first validate our modeling approach by comparing the simulated oDPI and oAH circuit responses to their experimental measurements. Thereafter, the summation effects of the excitatory and inhibitory oDPI circuits in prototyped ONCs are investigated. We also predict the dynamic power dissipation of modular ONCs and show an average power consumption of 2.1 μJ per spike for the oAH soma at a ~1 Hz spiking frequency. Furthermore, we compare our modeling approach with two other representative organic circuit models and show that our approach outperforms both in terms of accuracy and convergence speed.

Understanding a galaxy's central few hundred parsecs is important for knowing how galaxies form and evolve. Since molecular gas is the source of star formation, it is an essential part of the interstellar medium (ISM). In this work, we present high-resolution data from the Atacama Large Millimeter/Sub-millimeter Array (ALMA) of the ¹²CO(J = 6−5) emission line toward the center of the galaxy NGC 34 at a distance of 85.2 Mpc (1 arcsec = 412 pc). The central area of this galaxy has been mapped in the CO emission line with a resolution of 0.27″ × 0.23″ as viewed by ALMA, along with Spitzer 24 μm data. The CO and IR luminosities, molecular gas mass and density, and star formation rate (SFR) and density have been calculated for this galaxy. The molecular gas mass and the star formation rate (SFR) are found to equal 3.52 × 10⁸ M☉ and 1.72 M☉/yr, respectively. The surface density values of the molecular gas mass and SFR indicate that this is a starburst galaxy.

This experiment was carried out in one of the fields (A) affiliated with the College of Agricultural Engineering Sciences, University of Baghdad, in the spring season of 2021, on hybrid tomato plants (Mayai Mayai) to test flower viability, using two factors: the first was three irrigation intervals (2, 4, 6 days), and the second was three concentrations of a compound nano fertilizer (0, 1.5, 2.5 g liter-1), giving 9 treatments with three replications and 27 experimental units distributed randomly to reduce experimental error and obtain the most accurate results. A 3 x 3 x 3 factorial experiment was carried out according to a nested design with a factorial arrangement (RCBD), and the results were analyzed using the statistical program Genstat version 12; means were compared according to the LSD test at the 0.05 level of significance.

Every government seeks to provide the best services and to establish efficiency and quality of performance. This goal can be accomplished by improving the service performance of entire sectors of society. The government of Syria has realized the importance of moving in the direction of information technology; therefore, e-governance initiatives were launched in Syria as part of the country's overall information technology program in the early 21st century. Each government sector has since upgraded its performance through its own websites and e-service applications. However, gaps and loose connections exist among the sectors, which has tarnished the image of Syrian e-governance. This has led to significant questions about the need to modify and enhance such services. Hence, the purpose of this research is to investigate and explore the factors that drive e-governance implementation and affect government performance as well as the government-citizen relationship in Syria. Keywords: E-governance, Government of Syria, Government-Citizen

This paper highlights the role of evolving spiking neural networks (eSNN, an enhanced version of SNN) in predicting medical diagnoses. This article focuses on regression problems under a supervised learning strategy. We trained and tested eSNN on benchmark datasets. Among the three datasets, one is the ICU dataset, which helps predict the recovery ratio of patients who stayed in the ICU. Another dataset is Plasma_Retinol, which predicts the risk of cancer related to certain carotenoids. The pharynx dataset is part of a study conducted in the USA to determine the success rate of two radiation types. The selected datasets are those previously used for biomedical engineering tasks. The evaluation was conducted using regression metrics. From the experimental results, it is concluded that eSNN with standard parameters and without optimization performed well, but there is still room for improvement to achieve the highest possible prediction scores. Keywords: eSNN, Regression, Optimization, Prediction

Humanoid robots, intelligent machines resembling the human body in shape and functions, can not only replace humans in services and dangerous tasks but also deepen our understanding of the human body through the mimicking process. Nowadays, attaching a large number of sensors to obtain more sensory information and efficient computation is the development trend for humanoid robots. Nevertheless, due to the constraints of von Neumann-based structures, humanoid robots are facing multiple challenges, including tremendous energy consumption, latency bottlenecks, and the lack of bionic properties. Memristors, featuring high similarity to biological elements, play an important role in mimicking the biological nervous system. The memristor-based nervous system allows humanoid robots to obtain high energy efficiency and bionic sensing properties similar to those of the biological nervous system. Herein, this article first reviews the biological nervous system and the memristor-based nervous system thoroughly, including their structures and functions. The applications of memristor-based nervous systems are introduced, the difficulties that need to be overcome are put forward, and future development prospects are discussed. This review can hopefully provide an evolutionary perspective on humanoid robots and memristor-based nervous systems. Neuromorphic computing (NC), taking inspiration from biology, would undoubtedly shed light on future advances in humanoid robotics, massive computing, and energy-efficient systems. In this article, to support the stable development of NC, the authors first provide a comprehensive overview of the memristor-based nervous system and its correspondence with the biological nervous system, and then discuss current challenges and prospects.

Facilitated by the emergence of neuromorphic hardware, neuromorphic algorithms mimic the brain’s asynchronous computation to improve energy efficiency, low latency, and robustness, which are crucial for a wide variety of real-time robotic applications. However, the limited on-chip learning abilities hinder the applicability of neuromorphic computing to real-world robotic tasks. Biomimetism can overcome this limitation by complementing or replacing training with the knowledge of the brain’s connectome associated with the targeted behavior. By drawing inspiration from the human oculomotor network, we designed a spiking neural network (SNN) that tracked visual targets in real-time. We deployed the biomimetic controller on Intel’s Loihi neuromorphic processor to control an in-house robotic head. The robot’s behavior resembled the smooth pursuit and saccadic eye movements observed in humans, while the SNN on Loihi exhibited similar performance to a CPU-run PID controller. Interestingly, this behavior emerged from the SNN without training, which places the biomimetic design as an alternative to the energy- and data-greedy learning-based methods. This work reinforces our on-going efforts to devise energy-efficient autonomous robots that mimic the robustness and versatility of their biological counterparts.

The presence of computation and transmission-variable time delays within a robotic control loop is a major cause of instability, hindering safe human-robot interaction (HRI) under these circumstances. Classical control theory has been adapted to counteract the presence of such variable delays; however, the solutions provided to date cannot cope with HRI robotics inherent features. The highly nonlinear dynamics of HRI cobots (robots intended for human interaction in collaborative tasks), together with the growing use of flexible joints and elastic materials providing passive compliance, prevent traditional control solutions from being applied. Conversely, human motor control natively deals with low power actuators, nonlinear dynamics, and variable transmission time delays. The cerebellum, pivotal to human motor control, is able to predict motor commands by correlating current and past sensorimotor signals, and to ultimately compensate for the existing sensorimotor human delay (tens of milliseconds). This work aims at bridging those inherent features of cerebellar motor control and current robotic challenges—namely, compliant control in the presence of variable sensorimotor delays. We implement a cerebellar-like spiking neural network (SNN) controller that is adaptive, compliant, and robust to variable sensorimotor delays by replicating the cerebellar mechanisms that embrace the presence of biological delays and allow motor learning and adaptation.

A multiparadigm general methodology is advanced for development of reliable, efficient, and practical freeway incident detection algorithms. The performance of the new fuzzy-wavelet radial basis function neural network (RBFNN) freeway incident detection model of Adeli and Karim is evaluated and compared with the benchmark California algorithm #8 using both real and simulated data. The evaluation is based on three quantitative measures of detection rate, false alarm rate, and detection time, and the qualitative measure of algorithm portability. The new algorithm outperformed the California algorithm consistently under various scenarios. False alarms are a major hindrance to the widespread implementation of automatic freeway incident detection algorithms. The false alarm rate ranges from 0 to 0.07% for the new algorithm and from 0.53 to 3.82% for the California algorithm. The new fuzzy-wavelet RBFNN freeway incident detection model is a single-station pattern-based algorithm that is computationally efficient and requires no recalibration. The new model can be readily transferred without retraining and without any performance deterioration.

The work zone capacity cannot be described by any mathematical function because it is a complicated function of a large number of interacting variables. In this paper, a novel adaptive neuro-fuzzy logic model is presented for estimation of the freeway work zone capacity. Seventeen different factors impacting the work zone capacity are included in the model. A neural network is employed to estimate the parameters associated with the bell-shaped Gaussian membership functions used in the fuzzy inference mechanism. An optimum generalization strategy is used in order to avoid over-generalization and achieve accurate results. Comparisons with two empirical equations demonstrate that the new model in general provides a more accurate estimate of the work zone capacity, especially when the data for factors impacting the work zone capacity are only partially available. Further, it provides two additional advantages over the existing empirical equations. First, it incorporates a large number of factors impacting the work zone capacity. Second, unlike the empirical equations, the new model does not require selection of various adjustment factors or values by the work zone engineers based on prior experience.

An important advantage of cold-formed steel is the greater flexibility of cross-sectional shapes and sizes available to the structural steel designer. However, the lack of standard optimized shapes makes the selection of the most economical shape very difficult if not impossible. This task is further complicated by the complex and highly nonlinear nature of the rules that govern their design. A general mathematical formulation and computational model is presented for optimization of cold-formed steel beams. The nonlinear optimization problem is solved by adapting the robust neural dynamics model of Adeli and Park, patented recently at the U.S. Patent Office. The basis of the design can be American Iron and Steel Institute (AISI) allowable stress design (ASD) or load and resistance factor design (LRFD) specifications. The computational model has been applied to three different commonly used types of cross-sectional shapes: hat-, I-, and Z-shapes. The robustness and generality of the approach have been demonstrated by application to three different examples. This research lays the mathematical foundation for automated optimum design of structures made of cold-formed shapes. The result would be more economical use of cold-formed steel.

Accurate and timely forecasting of traffic flow is of paramount importance for effective management of traffic congestion in intelligent transportation systems. In this paper, a novel nonparametric dynamic time-delay recurrent wavelet neural network model is presented for forecasting traffic flow. The model incorporates the self-similar, singular, and fractal properties discovered in the traffic flow. The concept of wavelet frame is introduced and exploited in the model to provide flexibility in the design of wavelets and to add extra features such as adaptable translation parameters desirable in traffic flow forecasting. The statistical autocorrelation function is used for selection of the optimum input dimension of traffic flow time series. The model incorporates both the time of the day and the day of the week of the prediction time. As such, it can be used for long-term traffic flow forecasting in addition to short-term forecasting. The model has been validated using actual freeway traffic flow data. The model can assist traffic engineers and highway agencies to create effective traffic management plans for alleviating freeway congestions.
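The autocorrelation-based selection of the input dimension mentioned above can be sketched as follows; the 1/e cutoff used here is one common heuristic and an assumption for illustration, not necessarily the paper's exact criterion.

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Sample autocorrelation r(k) of a time series for lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    var = (x * x).sum()
    return np.array([(x[:len(x) - k] * x[k:]).sum() / var
                     for k in range(max_lag + 1)])

def input_dimension(x, max_lag=50, cutoff=1 / np.e):
    """Smallest lag whose autocorrelation falls below the cutoff --
    a simple heuristic for sizing the input window of a forecaster."""
    r = autocorrelation(x, max_lag)
    below = np.nonzero(r < cutoff)[0]
    return int(below[0]) if below.size else max_lag

# A sine with period 100 stays correlated over many lags, so it calls
# for a much larger input window than white noise does.
sine = np.sin(2 * np.pi * np.arange(1000) / 100)
noise = np.random.default_rng(0).standard_normal(1000)
```

In practice the window chosen this way becomes the number of lagged traffic-flow samples fed to the network at each prediction step.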

We investigate the interaction of an excitable system with a slow oscillation. Under robust and general assumptions compatible with the more stringent assumptions usually made about excitable systems, we show that such a coupled system can display bursting, i.e. a stable solution in which some variable undergoes rapid oscillations followed by a period of quiescence, with both oscillation and quiescence continually repeated. Under a further weak condition, the bursting is "parabolic", i.e. the local frequency of the fast oscillation increases and then decreases within a burst. The technique in this paper involves a nonlinear change of coordinates which transforms the equations into ones closely related to Hill's equation.

Traffic incidents are nonrecurrent and pseudorandom events that disrupt the normal flow of traffic and create a bottleneck in the road network. The probability of incidents is higher during peak flow rates when the systemwide effect of incidents is most severe. Model-based solutions to the incident detection problem have not produced practical, useful results primarily because the complexity of the problem does not lend itself to accurate mathematical and knowledge-based representations. A new multiparadigm intelligent system approach is presented for the solution of the problem, employing advanced signal processing, pattern recognition, and classification techniques. The methodology effectively integrates fuzzy, wavelet, and neural computing techniques to improve reliability and robustness. A wavelet-based denoising technique is employed to eliminate undesirable fluctuations in observed data from traffic sensors. Fuzzy c-mean clustering is used to extract significant information from the observed data and to reduce its dimensionality. A radial basis function neural network (RBFNN) is developed to classify the denoised and clustered observed data. The new model produced excellent incident detection rates with no false alarms when tested using both real and simulated data.

A new dynamic time-delay fuzzy wavelet neural network model is presented for nonparametric identification of structures using the nonlinear autoregressive moving average with exogenous inputs approach. The model is based on the integration of four different computing concepts: dynamic time delay neural network, wavelet, fuzzy logic, and the reconstructed state space concept from the chaos theory. Noise in the signals is removed using the discrete wavelet packet transform method. In order to preserve the dynamics of time series, the reconstructed state space concept from the chaos theory is employed to construct the input vector. In addition to denoising, wavelets are employed in combination with two soft computing techniques, neural networks and fuzzy logic, to create a new pattern recognition model to capture the characteristics of the time series sensor data accurately and efficiently. The model balances the global and local influences of the training data and incorporates the imprecision existing in the sensor data effectively. Experimental results on a five-story steel frame are employed to validate the computational model and demonstrate its accuracy and efficiency.

In this paper we develop and analyze spiking neural network (SNN) versions of resilient propagation (RProp) and QuickProp, both training methods used to speed up training in artificial neural networks (ANNs) by making certain assumptions about the data and the error surface. Modifications are made to both algorithms to adapt them to SNNs. Results generated on standard XOR and Fisher Iris data sets using the QuickProp and RProp versions of SpikeProp are shown to converge to a final error of 0.5, on average 80% faster than using SpikeProp on its own.
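The RProp scheme referenced above adapts a per-weight step size from the sign of successive gradients. A minimal sketch of one widely used variant (iRprop-) on a toy quadratic follows; the constants are the usual defaults and the objective is purely illustrative.

```python
import numpy as np

def rprop_update(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
                 step_min=1e-6, step_max=50.0):
    """One RProp iteration (iRprop- variant) for a parameter vector.

    Per-weight step sizes adapt using only the *sign* of successive
    gradients; the gradient magnitude is deliberately ignored.
    """
    same = grad * prev_grad > 0          # sign kept: grow the step
    flip = grad * prev_grad < 0          # sign flipped: shrink the step
    step = np.where(same, np.minimum(step * eta_plus, step_max), step)
    step = np.where(flip, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(flip, 0.0, grad)     # suppress the update after a flip
    return -np.sign(grad) * step, step, grad

# Toy use: minimize f(w) = w1^2 + w2^2, whose gradient is 2w.
w = np.array([3.0, -2.0])
step = np.full_like(w, 0.1)
prev_grad = np.zeros_like(w)
for _ in range(100):
    delta, step, prev_grad = rprop_update(2 * w, prev_grad, step)
    w += delta
```

Because only gradient signs matter, the same update applies unchanged whether the gradients come from a conventional ANN or from the firing-time error of a SpikeProp-style SNN.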

This book is devoted to an analysis of general weakly connected neural networks (WCNNs) that can be written in the form

    X'_i = F_i(X_i) + e G_i(X_1, ..., X_n),   i = 1, ..., n.   (0.1)

Here, each X_i in R^m is a vector that summarizes all physiological attributes of the ith neuron, n is the number of neurons, F_i describes the dynamics of the ith neuron, and G_i describes the interactions between neurons. The small parameter e indicates the strength of connections between the neurons. Weakly connected systems have attracted much attention since the second half of the seventeenth century, when Christian Huygens noticed that a pair of pendulum clocks synchronize when they are attached to a lightweight beam instead of a wall. The pair of clocks is among the first weakly connected systems to have been studied. Systems of the form (0.1) arise in formal perturbation theories developed by Poincaré, Liapunov, and Malkin, and in averaging theories developed by Bogoliubov and Mitropolsky.

We suggest a simple spiking model-resonate-and-fire neuron, which is similar to the integrate-and-fire neuron except that the state variable is complex. The model provides geometric illustrations to many interesting phenomena occurring in biological neurons having subthreshold damped oscillations of membrane potential. For example, such neurons prefer a certain resonant frequency of the input that is nearly equal to their eigenfrequency, they can be excited or inhibited by a doublet (two pulses) depending on its interspike interval, and they can fire in response to an inhibitory input. All these properties could be observed in Hodgkin-Huxley-type models. We use the resonate-and-fire model to illustrate possible sensitivity of biological neurons to the fine temporal structure of the input spike train. Being an analogue of the integrate-and-fire model, the resonate-and-fire model is computationally efficient and suitable for simulations of large networks of spiking neurons.
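The resonate-and-fire dynamics described above can be written as z' = (b + i*w)z with a spike emitted when the imaginary part of z crosses a threshold. The sketch below (all constants illustrative) reproduces the doublet sensitivity the abstract mentions: two pulses spaced near the eigenperiod excite the neuron, while the same pulses at half that spacing do not.

```python
import numpy as np

def resonate_and_fire(pulses, b=-0.1, w=2.0, threshold=1.0,
                      dt=0.01, steps=5000):
    """Euler integration of dz/dt = (b + i*w) * z with pulse inputs.

    The state z is complex; a spike is emitted when Im(z) reaches the
    threshold, after which z is reset.  `pulses` maps a time step to a
    (real) pulse amplitude.  All constants are illustrative.
    """
    z = 0 + 0j
    spike_times = []
    for k in range(steps):
        z += dt * (b + 1j * w) * z       # damped rotation in the plane
        z += pulses.get(k, 0.0)          # instantaneous input pulse
        if z.imag >= threshold:          # threshold on the imaginary part
            spike_times.append(k * dt)
            z = 0 + 0j                   # reset after firing
    return spike_times

# A doublet spaced near the eigenperiod 2*pi/w resonates and fires;
# the same doublet at half that spacing cancels and stays silent.
resonant = resonate_and_fire({100: 0.7, 414: 0.7})
mistimed = resonate_and_fire({100: 0.7, 257: 0.7})
```

The complex state keeps the model as cheap per step as integrate-and-fire while adding the subthreshold oscillation that produces the resonance.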

The goal of this research is to develop an efficient SNN model for epilepsy and epileptic seizure detection using electroencephalograms (EEGs), a complicated pattern recognition problem. Three training algorithms are investigated: SpikeProp (using both incremental and batch processing), QuickProp, and RProp. Since the epilepsy and epileptic seizure detection problem requires a large training dataset, the efficacy of these algorithms is investigated by first applying them to the XOR and Fisher iris benchmark problems. Three measures of performance are investigated: number of convergence epochs, computational efficiency, and classification accuracy. Extensive parametric analysis is performed to identify heuristic rules and optimum parameter values that increase the computational efficiency and classification accuracy. The result is a remarkable increase in computational efficiency. For the XOR problem, the computational efficiency of SpikeProp, QuickProp, and RProp is increased by a factor of 588, 82, and 75, respectively, compared with the results reported in the literature. EEGs from three different subject groups are analyzed: (a) healthy subjects, (b) epileptic subjects during a seizure-free interval, and (c) epileptic subjects during a seizure. It is concluded that RProp is the best training algorithm because it has the highest classification accuracy among all training algorithms, especially for large training datasets, with about the same computational efficiency provided by SpikeProp. The SNN model for EEG classification and epilepsy and seizure detection uses RProp as the training algorithm. This model yields a high classification accuracy of 92.5%.

Recently, the writers developed a new mesoscopic-wavelet model for simulating freeway traffic flow patterns and extracting congestion characteristics. As an extension of that research, in this paper, a new neural network-wavelet microsimulation model is presented to track the travel time of each individual vehicle for traffic delay and queue length estimation at work zones. The model incorporates the dynamics of a single vehicle in changing traffic flow conditions. The extracted congestion characteristics obtained from the mesoscopic-wavelet model are used in a Levenberg-Marquardt backpropagation (BP) neural network for classifying the traffic flow as free flow, transitional flow, and congested flow with stationary queue. The neural network model is trained using simulated data and tested using both simulated and real data. The computational model presented is applied to five examples of freeways with two and three lanes and one lane closure with varying entry flow or demand patterns. The new microsimulation model is more accurate than macroscopic models and substantially more efficient than microscopic models.

An adaptive computational model is presented for estimating the work zone capacity and queue length and delay, taking into account the following factors: number of lanes, number of open lanes, work zone layout, length, lane width, percentage trucks, grade, speed, work intensity, darkness factor, and proximity of ramps. The model integrates judiciously the mathematical rigor of traffic flow theory with the adaptability of neural network analysis. A radial-basis function neural network model is developed to learn the mapping from quantifiable and nonquantifiable factors describing the work zone traffic control problem to the associated work zone capacity. This model exhibits good generalization properties from a small set of training data, an especially attractive feature for estimating the work zone capacity where only limited data is available. Queue delays and lengths are computed using a deterministic traffic flow model based on the estimated work zone capacity. The result of this research is being used to develop an intelligent decision support system to help work zone engineers perform scenario analysis and create traffic management plans consistently, reliably, and efficiently.
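The radial-basis function mapping described above can be sketched as a Gaussian-activation forward pass. In practice the centers, widths, and output weights would be fit from work-zone data (e.g. centers by clustering, output weights by linear least squares); the values below are placeholders.

```python
import numpy as np

def rbf_predict(x, centers, widths, out_weights):
    """Forward pass of a radial-basis function network.

    Hidden unit j outputs exp(-||x - c_j||^2 / (2 * s_j^2)); the network
    output is a weighted sum of these localized activations.
    """
    d2 = ((centers - x) ** 2).sum(axis=1)       # squared distances to centers
    phi = np.exp(-d2 / (2.0 * widths ** 2))     # Gaussian hidden activations
    return float(phi @ out_weights)

# Placeholder parameters: two hidden units in a 2-D input space.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
widths = np.array([1.0, 1.0])
w_out = np.array([2.0, 0.0])
val = rbf_predict(np.array([0.0, 0.0]), centers, widths, w_out)
```

The localized Gaussian activations are what give RBF networks their good generalization from small training sets: each hidden unit responds only near its own center.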

The metal roofing industry uses a variety of commonly used methods to determine the structural performance (positive load capacity and negative or uplift load capacity) of various cold-formed metal roof panel configurations. The metal roof panel system considered in this paper consists of cold-formed U-shaped panels fabricated from 0.024-in. (0.61-mm) thick coated steel that are attached to cold-formed Z-shaped 0.060-in. (1.52-mm) thick coated steel purlins using concealed clips. Uplift capacity may be calculated for a given panel section according to the approach described in the American Iron and Steel Institute specifications. Its method for calculating load capacity of metal roofing systems is considered unreliable because it typically produces results dramatically different than results obtained from actual testing. A new method of determining the load capacity of U-shaped metal roof panel systems is presented using a counterpropagation neural network. The new method accounts for distortional changes in the geometry of the roof panel system's cross section due to uniform loading, particularly negative (uplift) loading, and the failure modes that prevent the metal roof system from reaching the ultimate load. The proposed methodology provides an accurate and reliable method of determining the structural performance of a metal roof system as an alternative to testing.

Optimization of large structures consisting of thousands of members subjected to the highly nonlinear constraints of the actual commonly used design codes, such as the American Institute of Steel Construction (AISC), Allowable Stress Design (ASD), or Load and Resistance Factor Design (LRFD) specifications (AISC 1989, 1994), requires high-performance computing resources. We have previously developed parallel optimization algorithms on shared memory multiprocessors where a few powerful processors are connected to a single shared memory. In contrast, in a distributed memory machine, a relatively large number of microprocessors are connected to their own locally distributed memories without globally shared memory. In this article, we present distributed nonlinear neural dynamics algorithms for discrete optimization of large steel structures. The algorithms are implemented on a recently introduced distributed memory machine, the CRAY T3D, and applied to the minimum weight design of three large space steel structures ranging in size from 1,310 to 8,904 members. The stability, convergence, and efficiency of the algorithms are demonstrated through examples. For an 8,904-member structure, a high parallel processing efficiency of 94% is achieved using a 32-processor configuration.

Neural network computing has recently been applied to structural engineering problems. Most of the published research is based on a back-propagation neural network (BPN), primarily due to its simplicity. The back-propagation algorithm, however, has a slow rate of learning and is therefore impractical for learning of complicated problems requiring large networks. In this paper, we present application of counterpropagation neural network (CPN) with competition and interpolation layers in structural analysis and design. To circumvent the arbitrary trial-and-error selection of the learning coefficients encountered in the counterpropagation algorithm, a simple formula is proposed as a function of the iteration number and excellent convergence is reported. The CPN is compared with the BPN using two structural engineering examples reported in recent literature. We found superior convergence property and a substantial decrease in the central processing unit (CPU) time for the CPN. In addition, CPN was applied to two new examples in the area of steel design requiring large networks with thousands of links. It is shown that CPN can learn complicated structural design problems within a reasonable CPU time.

Estimation of the cost of a construction project is an important task in the management of construction projects. The quality of construction management depends on accurate estimation of the construction cost. Highway construction costs are very noisy and the noise is the result of many unpredictable factors. In this paper, a regularization neural network is formulated and a neural network architecture is presented for estimation of the cost of construction projects. The model is applied to estimate the cost of reinforced-concrete pavements as an example. The new computational model is based on a solid mathematical foundation making the cost estimation consistently more reliable and predictable. Further, the result of estimation from the regularization neural network depends only on the training examples. It does not depend on the architecture of the neural network, the learning parameters, and the number of iterations required for training the system. Moreover, the problem of noise in the data is taken into account in a rational manner.

This article concludes a series of papers concerned with the flow of electric current through the surface membrane of a giant nerve fibre (Hodgkinet al., 1952,J. Physiol.116, 424–448; Hodgkin and Huxley, 1952,J. Physiol.116, 449–566). Its general object is to discuss the results of the preceding papers (Section 1), to put them into mathematical form (Section 2) and to show that they will account for conduction and excitation in quantitative terms (Sections 3–6).
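The quantitative form referred to in Section 2 is the Hodgkin-Huxley system of four coupled differential equations. The sketch below integrates it with forward Euler using the commonly quoted squid-axon constants in the modern sign convention (resting potential near -65 mV); note this convention is an assumption here, as the original paper works in a shifted voltage variable.

```python
import numpy as np

def hh_step(V, m, h, n, I_ext, dt=0.01):
    """One forward-Euler step of the Hodgkin-Huxley equations.

    Standard squid-axon constants, modern sign convention; units are
    mV, ms, and uA/cm^2, with membrane capacitance C = 1 uF/cm^2.
    """
    # Voltage-dependent rate constants (1/ms)
    am = 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(V + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(V + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
    an = 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(V + 65.0) / 80.0)
    # Ionic currents: sodium, potassium, and leak
    I_Na = 120.0 * m**3 * h * (V - 50.0)
    I_K = 36.0 * n**4 * (V + 77.0)
    I_L = 0.3 * (V + 54.387)
    V += dt * (I_ext - I_Na - I_K - I_L)
    m += dt * (am * (1.0 - m) - bm * m)
    h += dt * (ah * (1.0 - h) - bh * h)
    n += dt * (an * (1.0 - n) - bn * n)
    return V, m, h, n

# A constant 10 uA/cm^2 injection for 50 ms produces repetitive firing;
# spikes are counted as upward crossings of 0 mV.
V, m, h, n = -65.0, 0.05, 0.6, 0.32
spikes, prev = 0, V
for _ in range(5000):
    V, m, h, n = hh_step(V, m, h, n, I_ext=10.0)
    if prev < 0.0 <= V:
        spikes += 1
    prev = V
```

The gating variables m, h, and n play the roles of the sodium activation, sodium inactivation, and potassium activation described in the preceding papers of the series.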

A concurrent adaptive conjugate gradient learning algorithm has been developed for training of multilayer feed-forward neural networks and implemented in C on a MIMD shared-memory machine (CRAY Y-MP/8-864 supercomputer). The learning algorithm has been applied to the domain of image recognition. The performance of the algorithm has been evaluated by applying it to two large-scale training examples with 2,304 training instances. The concurrent adaptive neural networks algorithm has superior convergence property compared with the concurrent momentum back-propagation algorithm. A maximum speedup of about 7.9 is achieved using eight processors for a large network with 4,160 links as a result of microtasking only. When vectorization is combined with microtasking, a maximum speedup of about 44 is realized using eight processors.

An abstract is not available.

An abstract is not available.

This paper presents a generalization of the perceptron learning procedure for learning the correct sets of connections for arbitrary networks. The rule, called the generalized delta rule, is a simple scheme for implementing a gradient descent method for finding weights that minimize the sum squared error of the system's performance. The major theoretical contribution of the work is the procedure called error propagation, whereby the gradient can be determined by individual units of the network based only on locally available information. The major empirical contribution of the work is to show that the problem of local minima is not serious in this application of gradient descent. Keywords: learning; networks; perceptrons; adaptive systems; learning machines; back propagation
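The generalized delta rule amounts to gradient descent in which each unit's delta term is computed from locally available signals. A minimal sketch on the XOR task follows; the network size, learning rate, and random seed are arbitrary choices for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_xor(epochs=10000, lr=0.5, seed=0, hidden=4):
    """Gradient descent with the generalized delta rule on XOR.

    The delta terms use only signals local to each layer: the unit's
    own activation and the error propagated back through its weights.
    """
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)
    W1 = rng.normal(0.0, 1.0, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0, (hidden, 1)); b2 = np.zeros(1)
    losses = []
    for _ in range(epochs):
        hid = sigmoid(X @ W1 + b1)                   # forward pass
        out = sigmoid(hid @ W2 + b2)
        err = out - y
        losses.append(float((err ** 2).sum()))       # sum squared error
        d_out = err * out * (1.0 - out)              # output-layer delta
        d_hid = (d_out @ W2.T) * hid * (1.0 - hid)   # back-propagated delta
        W2 -= lr * hid.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_hid;   b1 -= lr * d_hid.sum(axis=0)
    return losses

losses = train_xor()
```

XOR is the classic demonstration because no single-layer perceptron can represent it, while the hidden layer trained by error propagation can.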

An improved freeway incident-detection model is presented based on speed, volume, and occupancy data from a single detector station using a combination of wavelet-based signal processing, statistical cluster analysis, and neural network pattern recognition. A comparative study of different wavelets (Haar, second-order Daubechies, and second- and fourth-order Coifman wavelets) and filtering schemes is conducted in terms of efficacy and accuracy of smoothing. It is concluded that the fourth-order Coifman wavelet is more effective than other types of wavelets for the traffic incident detection problem. A statistical multivariate analysis based on the Mahalanobis distance is employed to perform data clustering and parameter reduction to reduce the size of the input space for the subsequent step of classification by the Levenberg–Marquardt backpropagation (BP) neural network. For a straight two-lane freeway using real data, the model yields an incident detection rate of 100%, false alarm rate of 0.3%, and detection time of 35.6 seconds.

The first journal article on neural network application in civil/structural engineering was published in this journal in 1989. This article reviews neural network articles published in archival research journals since then. The emphasis of the review is on the two fields of structural engineering and construction engineering and management. Neural network articles published in other civil engineering areas are also reviewed, including environmental and water resources engineering, traffic engineering, highway engineering, and geotechnical engineering. The great majority of civil engineering applications of neural networks are based on the simple backpropagation algorithm. Applications of other recent, more powerful and efficient neural network models are also reviewed. Recent works on integration of neural networks with other computing paradigms such as genetic algorithm, fuzzy logic, and wavelet to enhance the performance of neural network models are presented.

A new non-linear control model is presented for active control of three-dimensional (3D) building structures. Both geometrical and material non-linearities are included in the structural control formulation. A dynamic fuzzy wavelet neuroemulator is presented for predicting the structural response in future time steps. Two dynamic coupling actions are taken into account simultaneously in the control model: (a) coupling between lateral and torsional motions of the structure and (b) coupling between the actuator and the structure. The new neuroemulator is validated using two irregular 3D steel building structures, a 12-story structure with vertical setbacks and an 8-story structure with plan irregularity. Numerical validations in both time and frequency domains demonstrate that the new neuroemulator provides accurate prediction of structural displacement responses, which is required in neural network models for active control of structures. In the companion paper, a floating-point genetic algorithm is presented for finding the optimal control forces needed for active non-linear control of building structures using the dynamic fuzzy wavelet neuroemulator presented in this paper.

A non-parametric system identification-based model is presented for damage detection of highrise building structures subjected to seismic excitations using the dynamic fuzzy wavelet neural network (WNN) model developed by the authors. The model does not require complete measurements of the dynamic responses of the whole structure. A large structure is divided into a series of sub-structures around a few pre-selected floors where sensors are placed and measurements are made. The new model balances the global and local influences of the training data and incorporates the imprecision existing in the sensor data effectively, thus resulting in fast training convergence and high accuracy. A new damage evaluation method is proposed based on a power density spectrum method, called pseudospectrum. The multiple signal classification (MUSIC) method is employed to compute the pseudospectrum from the structural response time series. The methodology is validated using the data obtained for a 38-storey concrete test model. The results demonstrate the effectiveness of the WNN model together with the pseudospectrum method for damage detection of highrise buildings based on a small amount of sensed data. Copyright © 2007 John Wiley & Sons, Ltd.

Artificial neural networks are known to be effective in solving problems involving pattern recognition and classification. The traffic incident-detection problem can be viewed as recognizing incident patterns from incident-free patterns. A neural network classifier has to be trained first using incident and incident-free traffic data. The dimensionality of the training input data is high, and the embedded incident characteristics are not easily detectable. In this article we present a computational model for automatic traffic incident detection using discrete wavelet transform, linear discriminant analysis, and neural networks. Wavelet transform and linear discriminant analysis are used for feature extraction, denoising, and effective preprocessing of data before an adaptive neural network model is used to make the traffic incident detection. Simulated as well as actual traffic data are used to test the model. For incidents with a duration of more than 5 minutes, the incident-detection model yields a detection rate of nearly 100 percent and a false-alarm rate of about 1 percent for two- or three-lane freeways.

In a companion paper, a new non-linear control model was presented for active control of three-dimensional (3D) building structures including geometrical and material non-linearities, coupling action between lateral and torsional motions, and actuator dynamics (Int. J. Numer. Meth. Engng; DOI: 10.1002/nme.2195). A dynamic fuzzy wavelet neuroemulator was presented for predicting the structural response in future time steps. In this paper, a new neuro-genetic algorithm or controller is presented for finding the optimal control forces. The control algorithm does not need the pre-training required in a neural network-based controller, which improves the efficiency of the general control methodology significantly. Two 3D steel building structures, a 12-story structure with vertical setbacks and an 8-story structure with plan irregularity, are used to validate the neuro-genetic control algorithm under three different seismic excitations. Numerical validations demonstrate that the new control methodology significantly reduces the displacements of buildings subjected to various seismic excitations including structures with plan and elevation irregularities. Copyright © 2008 John Wiley & Sons, Ltd.

Estimation of freeway travel time with reasonable accuracy is essential for successful implementation of an advanced traveler information system (ATIS) for use in an intelligent transportation system (ITS). An ATIS consists of a route guiding system that recommends the most suitable route based on the traveler's requirements using the information gathered from various sources such as loop detectors and probe vehicles. This information can be disseminated through mass media or via an on-board satellite-based navigational system. Based on the estimated travel times for various routes, the traveler can make a route choice. In this article, a neural network model is presented for forecasting the freeway link travel time using the counterpropagation neural (CPN) network. The performance of the model is compared with a recently reported freeway link travel forecasting model using the backpropagation (BP) neural network algorithm. It is shown that the new model based on the CPN network, and the learning coefficients proposed by Adeli and Park, is nearly two orders of magnitude faster than the BP network. As such, the proposed freeway link travel-forecasting model is particularly suitable for real-time advanced travel information and management systems.

For a network of spiking neurons that encodes information in the timing of individual spike times, we derive a supervised learning rule, SpikeProp, akin to traditional error-backpropagation. With this algorithm, we demonstrate how networks of spiking neurons with biologically reasonable action potentials can perform complex non-linear classification in fast temporal coding just as well as rate-coded networks. We perform experiments for the classical XOR problem, when posed in a temporal setting, as well as for a number of other benchmark datasets. Comparing the (implicit) number of spiking neurons required for the encoding of the interpolated XOR problem, the trained networks demonstrate that temporal coding is a viable code for fast neural information processing, and as such requires fewer neurons than instantaneous rate-coding. Furthermore, we find that reliable temporal computation in the spiking networks was only accomplished when using spike response functions with a time constant longer than the coding interval, as has been predicted by theoretical considerations.
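SpikeProp builds on the spike response model, in which each input spike contributes a kernel of the form eps(t) = (t/tau) * exp(1 - t/tau) and the neuron fires when the summed potential first reaches threshold. The sketch below computes that firing time; all parameter values (tau, weights, delays, threshold) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def srm_kernel(t, tau=7.0):
    """Spike-response kernel eps(t) = (t/tau) * exp(1 - t/tau) for t > 0."""
    return np.where(t > 0.0, (t / tau) * np.exp(1.0 - t / tau), 0.0)

def firing_time(input_times, weights, delays, threshold=1.0,
                t_max=30.0, dt=0.01):
    """Earliest time the summed postsynaptic potential reaches threshold.

    x(t) = sum_i w_i * eps(t - t_i - d_i); returns None if the neuron
    never fires within t_max.
    """
    ts = np.arange(0.0, t_max, dt)
    x = np.zeros_like(ts)
    for t_i, w, d in zip(input_times, weights, delays):
        x += w * srm_kernel(ts - t_i - d)
    above = np.nonzero(x >= threshold)[0]
    return float(ts[above[0]]) if above.size else None

# Two sufficiently weighted input spikes drive the neuron over threshold;
# halving the weights keeps the potential subthreshold (no output spike).
t_fire = firing_time([0.0, 1.0], [0.8, 0.8], [1.0, 1.0])
t_none = firing_time([0.0, 1.0], [0.4, 0.4], [1.0, 1.0])
```

SpikeProp's learning rule then differentiates this firing time with respect to the weights, which is where the paper's requirement of a time constant longer than the coding interval enters.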

A supervised learning rule for Spiking Neural Networks (SNNs) is presented that can cope with neurons that spike multiple times. The rule is developed by extending the existing SpikeProp algorithm which could only be used for one spike per neuron. The problem caused by the discontinuity in the spike process is counteracted with a simple but effective rule, which makes the learning process more efficient. Our learning rule is successfully tested on a classification task of Poisson spike trains. We also applied the algorithm on a temporal version of the XOR problem and show that it is possible to learn this classical problem using only one spiking neuron making use of a hair-trigger situation.

The computational power of formal models for networks of spiking neurons is compared with that of other neural network models based on McCulloch-Pitts neurons (i.e., threshold gates) or sigmoidal gates, respectively. In particular it is shown that networks of spiking neurons are, with regard to the number of neurons that are needed, computationally more powerful than these other neural network models. A concrete biologically relevant function is exhibited which can be computed by a single spiking neuron (for biologically reasonable values of its parameters), but which requires hundreds of hidden units on a sigmoidal neural net. On the other hand, it is known that any function that can be computed by a small sigmoidal neural net can also be computed by a small network of spiking neurons. This article does not assume prior knowledge about spiking neurons, and it contains an extensive list of references to the currently available literature on computations in networks of spiking neurons and relevant results from neurobiology.

A nonlinear neural dynamics model is presented as a new structural optimization technique and applied to minimum weight design of space trusses subjected to stress and displacement constraints under multiple loading conditions. A pseudo-objective function is formulated for the optimization problem in the form of a Lyapunov function to ensure the global convergence and the stability of the neural dynamic system by adopting an exterior penalty function method. The topology of the neural dynamics model consists of one variable layer and multi-constraint layers. The number of constraint layers corresponds to the number of loading conditions in the structural optimization problem. Design sensitivity coefficients calculated by the adjoint variable method are included in the inhibitory connections from the constraint layers to the variable layer. Optimum weights and design solutions are presented for four example structures and compared with those reported in the literature.

An adaptive conjugate gradient learning algorithm has been developed for training of multilayer feedforward neural networks. The problem of arbitrary trial-and-error selection of the learning and momentum ratios encountered in the momentum backpropagation algorithm is circumvented in the new adaptive algorithm. Instead of constant learning and momentum ratios, the step length in the inexact line search is adapted during the learning process through a mathematical approach. Thus, the new adaptive algorithm provides a more solid mathematical foundation for neural network learning. The algorithm has been implemented in C on a SUN-SPARCstation and applied to two different domains: engineering design and image recognition. It is shown that the adaptive neural networks algorithm has superior convergence property compared with the momentum backpropagation algorithm.

Parallel backpropagation neural networks learning algorithms have been developed employing the vectorization and microtasking capabilities of vector MIMD machines. They have been implemented in C on CRAY Y-MP/864 supercomputer under UNICOS operating system. The algorithms have been applied to two different domains: engineering design and image recognition, and their performance has been investigated. A maximum speedup of about 6.7 is achieved using eight processors for a large network with 5950 links due to microtasking only. When vectorization is combined with microtasking, a maximum speedup of about 33 is realized using eight processors.

A computational approach is presented for predicting the location and time of occurrence of future moderate-to-large earthquakes in an approximate sense based on neural network modeling and using a vector of eight seismicity indicators as input. Two different methods are explored. In the first method, a large seismic region is subdivided into several small subregions and the temporal historical earthquake record is divided into a number of small equal time periods. Seismicity indicators are computed for each subregion for each time period and their relationship to the magnitude of the largest earthquake occurring in that subregion during the following time-period is studied using a recurrent neural network. In the second more direct approach, the temporal historical earthquake record is divided into a number of unequal time periods where each period is defined as the time between large earthquakes. Seismicity indicators are computed for each time-period and their relationship to the latitude and longitude of the epicentral location, and time of occurrence of the following major earthquake is studied using a recurrent neural network.