## No full-text available

To read the full-text of this research, you can request a copy directly from the authors.

Traffic flow data are needed for traffic management and control applications as well as for transportation planning. Such data are usually collected from traffic sensors; however, it is not practical, or even feasible, to deploy traffic sensors on all of a network's links. Instead, the information acquired from a subset of link flows must be extended to estimate the entire network's traffic flow. To this end, this study proposes a robust deep learning architecture based on a stacked sparse autoencoders (SAEs) model for precise estimation of the whole network's traffic flow from an already-deployed sensor set. The proposed architecture has two consecutive components: a deep learning model based on the SAEs and a fully connected layer. First, the SAEs model extracts traffic flow features and learns a meaningful pattern of the relation between the traffic flow data and the network structure. Subsequently, the fully connected layer performs the traffic flow estimation. The whole architecture is then fine-tuned to update its parameters and enhance the estimation. For training, synthetic link flow data are randomly generated from the network's prior demand information. The performance of the proposed model is evaluated and then validated using two real networks. A third, medium-sized real network is used to measure the robustness of applying the proposed methodology to this specific traffic flow estimation problem.
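The two-stage architecture described above (stacked encoders for feature extraction, then a fully connected layer for estimation) can be sketched as a forward pass. This is a minimal illustration, not the authors' implementation: the layer sizes, activation, and weight initialization are assumptions, and the layer-wise sparse pretraining and fine-tuning stages are only noted in comments.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class StackedEncoderEstimator:
    """Sketch: stacked encoder layers followed by a fully connected
    output layer mapping observed link flows to all network link flows."""
    def __init__(self, n_observed, hidden_sizes, n_links):
        sizes = [n_observed, *hidden_sizes]
        # encoder weights (in the paper these are pretrained layer-wise
        # as sparse autoencoders before fine-tuning the whole stack)
        self.encoders = [
            (rng.normal(0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])
        ]
        # fully connected estimation layer
        self.W_out = rng.normal(0, 0.1, (sizes[-1], n_links))
        self.b_out = np.zeros(n_links)

    def forward(self, x):
        h = x
        for W, b in self.encoders:
            h = sigmoid(h @ W + b)          # feature extraction stage
        return h @ self.W_out + self.b_out  # flow estimation stage

# hypothetical dimensions: 8 sensors observed, 30 links to estimate
model = StackedEncoderEstimator(n_observed=8, hidden_sizes=[16, 12], n_links=30)
batch = rng.random((5, 8))     # 5 synthetic observations of the 8 sensors
estimates = model.forward(batch)
print(estimates.shape)         # one flow estimate per link, per observation
```

Training would minimize the error between these estimates and the synthetic link flows generated from the prior demand, first layer-by-layer and then end-to-end.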


... In graphical form, it is how to select a line to separate the network's node pairs; the TSLP mainly answers where, and how many, traffic sensors of a specific type are to be placed in a network for a designated purpose [10,11]. Although TSLP studies differ in solution structures depending on the problem formulation, they share some common guidelines (e.g., coverage rules and link flow independence) [12]. ...

... Recently, [12] obtained results close to these exact link flow inference methods with fewer counting sensors via deep learning neural networks. The proposed technique can learn the latent relationships among a network's flow elements to accurately predict the absent link data. ...

... Algorithm 2 initializes the problem with a set of paths equal to the number of O/D pairs. The central core in steps (7)–(18) attempts to find the best solution to cover the considered paths while perturbing the search in each internal iteration. Two leading diversification operators are used: tolerance probability (Tp) and neighbor search fraction (Nfs). ...

In this study, we present exact and heuristic algorithms for a traffic sensor location problem called the screen line problem: how to locate traffic sensors on a transportation network so that all origin/destination node pairs are fully separated. The problem has two main complexity dimensions that obstruct finding an efficient solution algorithm for large-scale networks: its mathematical formulation, which is proved in the literature to be NP-hard, and an inherent combinatorial complexity due to the need for complete path enumeration over the network. In this study, the problem is reformulated as a set covering problem. Thereafter, the dual formulation is recalled, showing that a shortest-path-based column generation method yields as many paths as necessary and hence circumvents the intractability of the full path enumeration task. This path generation technique enables applying both the proposed heuristic and exact methods to the problem. In addition, the gap between the heuristic and exact algorithms is examined statistically. For evaluation, three networks of different sizes were used to track the scalability of the proposed algorithms. The methodology showed high efficiency in dealing with up to 10,000 demand node pairs, in addition to the capability of producing practical solutions under normal traffic flow conditions. The proposed heuristic algorithm achieves a gap of less than 25% with more than 99% confidence.
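The set-covering view of the screen line problem lends itself to a simple greedy heuristic: repeatedly pick the link that intercepts the most still-uncovered paths. This is a generic sketch of that idea under a toy instance, not the paper's column-generation-based heuristic; the path sets and link names are invented for illustration.

```python
def greedy_screen_line(paths, links):
    """Greedy set-cover heuristic sketch: choose sensor links until
    every enumerated path is intercepted.
    `paths` maps a path id to the set of links the path traverses."""
    uncovered = set(paths)
    chosen = []
    while uncovered:
        # pick the link intercepting the most still-uncovered paths
        best = max(links, key=lambda l: sum(l in paths[p] for p in uncovered))
        covered = {p for p in uncovered if best in paths[p]}
        if not covered:
            break  # remaining paths cannot be covered by any link
        chosen.append(best)
        uncovered -= covered
    return chosen, uncovered

# toy instance: three O/D paths over four candidate links
paths = {
    "p1": {"a", "b"},
    "p2": {"b", "c"},
    "p3": {"c", "d"},
}
sensors, missed = greedy_screen_line(paths, ["a", "b", "c", "d"])
print(sensors, missed)   # two sensors separate all three paths
```

The paper's contribution is precisely to avoid enumerating `paths` up front: shortest-path-based column generation produces new paths only as the dual prices demand them.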

... These models encounter difficulties in dealing with non-homogeneous distribution variables; besides, they are prone to statistical flaws when there is correlation among the independent variables (le Cessie and Van Houwelingen 1994, Midi et al. 2010). The second approach comprises advanced ML methods that can capture the inherent characteristics of the input data interactions (Owais et al. 2020b, Moussa and Owais 2020). Despite their capabilities, they might fail to infer/prioritize the impact of each input variable on the output/target variable. ...

... DL is recognized as one of the most rapidly growing branches of ML. It is the technology in which an exclusive architecture of multilayer "deep" neural networks is used (Ciregan et al. 2012, Owais et al. 2020b). DL has shown pre-eminent performance in many transportation engineering applications, including incident detection and traffic congestion (Chakraborty et al. 2018a, Chakraborty et al. 2018b), traffic flow estimation (Owais et al. 2020b), transportation network reliability analysis (Nabian and Meidani 2018), and pavement performance detection (Zhang et al. 2017, Fan et al. 2018, Dorafshan and Azari 2020, Moussa and Owais 2020). ...

... It is the technology in which an exclusive architecture of multilayer "deep" neural networks is used (Ciregan et al. 2012, Owais et al. 2020b). DL has shown pre-eminent performance in many transportation engineering applications, including incident detection and traffic congestion (Chakraborty et al. 2018a, Chakraborty et al. 2018b), traffic flow estimation (Owais et al. 2020b), transportation network reliability analysis (Nabian and Meidani 2018), and pavement performance detection (Zhang et al. 2017, Fan et al. 2018, Dorafshan and Azari 2020, Moussa and Owais 2020). Several studies have already employed DL in traffic safety analysis. ...

Traffic accidents are rare events with inconsistent spatial and temporal dimensions; thus, accident injury severity (INJ-S) analysis faces a significant challenge in its classification and data stability. While classical statistical models have limitations in accurately modeling INJ-S, advanced machine learning methods have no apparent equations with which to prioritize/analyze the different contributing factors that predict INJ-S levels. Also, the intercorrelations among the input factors can make the results of a typical sensitivity analysis misleading. Rear-end accidents constitute the most frequent type of traffic accident, and therefore their associated INJ-S requires deeper investigation. To resolve these issues, this study presents a sophisticated approach based on a deep learning paradigm combined with a Variance-Based Global Sensitivity Analysis (VB/GSA). The methodology proposes a deep residual neural network (DRNN) structure that, unlike other neural network architectures, utilizes residual shortcuts (i.e., connections). The connections allow the DRNNs to bypass a few layers in the deep network architecture, circumventing the accuracy problems of regular deep-network training. A Monte Carlo simulation with the aid of the trained DRNNs model was conducted to investigate the impact of each explanatory factor on the INJ-S levels based on the VB/GSA. The developed methodology was used to analyze all rear-end accidents in North Carolina from 2010 to 2017. Its performance was evaluated using selected representative indicators and then compared with the well-known ordered logistic regression (OLR) model. The developed methodology achieved an overall accuracy of 83% and attained superior performance compared with the OLR model. Furthermore, the VB/GSA analysis identified the most significant attributes of the rear-end crash INJ-S level.
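The VB/GSA step pairs a trained model with Monte Carlo sampling to attribute output variance to each input. A standard way to do this is the pick-freeze estimator of first-order Sobol indices, sketched below with a toy linear model standing in for the trained DRNN (the paper's actual model, sampling distributions, and index estimator may differ).

```python
import numpy as np

rng = np.random.default_rng(1)

def first_order_sobol(model, d, n=100_000):
    """Pick-freeze Monte Carlo estimator of first-order Sobol indices:
    S_i = Var(E[Y | X_i]) / Var(Y), estimated from two sample matrices."""
    A = rng.standard_normal((n, d))
    B = rng.standard_normal((n, d))
    yA = model(A)
    var, f0 = yA.var(), yA.mean()
    S = np.empty(d)
    for i in range(d):
        C = B.copy()
        C[:, i] = A[:, i]   # freeze factor i, resample all the others
        S[i] = (np.mean(yA * model(C)) - f0 ** 2) / var
    return S

# toy stand-in for the trained severity model: Y = 2*X0 + X1
toy = lambda X: 2 * X[:, 0] + X[:, 1]
S = first_order_sobol(toy, d=2)
print(np.round(S, 2))   # ≈ [0.8, 0.2]: X0 explains ~80% of the variance
```

For the linear toy model the analytic answer is S = [4/5, 1/5], so the estimator can be checked directly; with a trained neural network the same loop runs unchanged, with `model` replaced by the network's batched forward pass.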

... One of the oldest ML techniques is the Artificial Neural Network (ANN) paradigm, developed by [50]. Over the last two decades, ANNs have increasingly been applied to develop predictive models from data due to their ability to recognize and learn the trends of data as well as the latent relationships among them [51]. That makes them an excellent substitute for typical physical models in analyzing complex relationships involving multiple input variables [52,53]. ...

... A fast-growing branch of ML is Deep Learning (DL) technology, in which a unique architecture of multi-layer "deep" neural networks is utilized [51,74]. DL has shown superior performance in a wide range of civil engineering applications: traffic flow estimation [51,75–78], traffic congestion and incident detection [79–81], transportation network reliability analysis [82–85], pavement crack detection [86–88], and structural damage detection [89,90]. ...

... A fast-growing branch of ML is Deep Learning (DL) technology, in which a unique architecture of multi-layer "deep" neural networks is utilized [51,74]. DL has shown superior performance in a wide range of civil engineering applications: traffic flow estimation [51,75–78], traffic congestion and incident detection [79–81], transportation network reliability analysis [82–85], pavement crack detection [86–88], and structural damage detection [89,90]. DL's superior performance relies on its ability to learn highly complex features (representations) of the raw data better than other ML tools [91–93]. ...

The dynamic modulus (E*) of hot-mix asphalt mixtures is one of the most tedious and time-consuming material properties to measure in the laboratory. Its testing requires costly, advanced equipment and skills that are not yet accessible in most laboratories. Thus, many studies have been dedicated to developing E* predictive models. Unfortunately, this is a complex task due to the many input variables and their non-linear effect on E*. This study applies a deep residual neural networks (DRNNs) technique to the problem for the first time to enhance E* prediction capabilities. The proposed DRNNs architecture utilizes residual connections (i.e., shortcuts) that bypass some layers in the deep network structure in order to alleviate the problem of training with high accuracy. An intensive laboratory database is employed in the DRNNs model development, considering all influential input parameters such as mixture gradation, volumetric properties, binder characteristics, and testing conditions. Moreover, a brute-force enumeration is integrated into the model to reduce the number of needed input variables and identify their best combinations. The proposed DRNNs performance, with the best combination of inputs, is then evaluated using representative performance indicators and compared with the well-known E* predictive models, namely the Witczak 1-37A, Witczak 1-40D, and Hirsch models. Finally, a variance-based global sensitivity (VB-GS) analysis is conducted with the aid of Monte Carlo simulation to highlight the effect of each input variable on the E* magnitude in real practice, while removing the potential distortion of results due to correlations among the input variables. Performance evaluation indicators reveal that the DRNNs model outperforms the other E* prediction models. Furthermore, the VB-GS analysis shows that, among all feasible inputs, binder stiffness characteristics and testing temperature are the most significant.
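The brute-force enumeration step can be illustrated generically: score every k-subset of candidate input variables and keep the best combination. The sketch below uses a least-squares fit and synthetic data as a cheap stand-in for retraining the DRNN on each subset, which is what the paper's pipeline would actually do; all names and data here are invented.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

def best_feature_subset(X, y, k):
    """Brute-force enumeration sketch: evaluate every k-combination of
    input variables and return the best-scoring one (in-sample R^2)."""
    best_score, best_combo = -np.inf, None
    for combo in itertools.combinations(range(X.shape[1]), k):
        Xc = X[:, combo]
        coef, *_ = np.linalg.lstsq(Xc, y, rcond=None)
        resid = y - Xc @ coef
        score = 1 - resid.var() / y.var()
        if score > best_score:
            best_score, best_combo = score, combo
    return best_combo, best_score

# synthetic stand-in: only columns 0 and 3 actually drive the target
X = rng.standard_normal((200, 5))
y = 3 * X[:, 0] - 2 * X[:, 3] + 0.01 * rng.standard_normal(200)
combo, r2 = best_feature_subset(X, y, k=2)
print(combo, round(r2, 3))   # the informative pair is recovered
```

With p candidate inputs the loop visits C(p, k) subsets, which is why brute force stays feasible only for the modest input counts typical of E* models.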

... Despite their usefulness in collecting different traffic measures, it is not always practical to install them in all network links. Thus, the problem becomes: how many sensors are needed, where to locate them, and how to estimate and predict the missing measures (Owais, Moussa, and Hussain, 2020). ...

... The methods vary according to the chosen statistical approach, such as least squares (Bell, 1991; Cascetta, 1984), maximizing entropy (Lam and Lo, 1991; Wong et al., 2005), maximum likelihood (Spiess, 1987; Hai Yang, Sasaki, Iida, and Asakura, 1992), and Bayesian inference (Li, 2005; Tebaldi and West, 1998; Wei and Asakura, 2013). Deep learning is also used for the link flow estimation problem because it estimates the network flows from a subset of installed counting sensors using randomly generated synthetic flow data for the training stage (Owais, Moussa, et al., 2020). ...

... While we assume that the network sensors are given and fixed, we also assume that there is no certain information about the O/D demand and, consequently, the link-node pair mapping matrix. To track the observability levels for this set of sensors, we used the synthetic demand generation method presented in (Owais, Moussa, and Hussain, 2020) to generate 100 versions of the A matrix, trying to capture the variation in path choice within the network in real practice. The levels of observability through these 100 iterations are depicted in Fig. 5. ...

Traffic flow data are a significant component of most intelligent transportation systems (ITS). Complete traffic flow data are required for most ITS applications, but providing traffic sensors in all network streets is not practical. Some flow types are difficult to observe directly, such as node pair demand (O/D flow). This study provides a mathematical analysis approach using a factorization scheme to convert conventional traffic assignment mapping into a useful format. The new mapping structure helps in identifying the amount of traffic-counting data (link flows) necessary to solve either the full observability problem for the network or a partial one. Once the required data are provided, the observability problem can be easily solved using backward substitution. In addition, the new format provides the dependencies of the different flow measures in the network. The proposed approach can track the change in the network observability state with the route choice uncertainty. Two fully reported illustrative examples in addition to a real case network are presented to demonstrate the generality of the proposed method and its potential contribution to the observability problem.

... As an important branch of machine learning, neural network models, including artificial neural networks (ANN), fuzzy neural networks (FNN), and radial basis function neural networks (RBFNN), are equipped with a multi-node network memory function to extract complex nonlinear feature information from historical traffic flow. Deep learning models deepen the hierarchical structure of neural networks [18,19]. This multi-hidden-layer design improves the value density of feature information during multilevel parameter transfer, abstracts low-level feature distributions into high-level feature information, and strengthens the feature representation ability of the model compared with traditional neural networks, allowing it to learn deeper traffic flow evolution laws. Deep belief networks (DBNs) [20] expand the network depth by stacking multiple restricted Boltzmann machines, whose hidden layer units are trained to capture the correlation of higher-order data exhibited at the visible layer, thereby enabling the network to more closely approximate the real system energy state of the data. ...

... Let each iteration's population size be n. After the algorithm completes the selection operation in the iteration process, the remaining individuals are sorted by fitness value, and a fitness threshold condition f_t is set to further divide the population into a high-fitness population A and a low-fitness population B. The threshold condition f_t is defined in Eq. (19), where f_i represents an individual's fitness value. For the population crossover operation, the probability p of one parent's fitness relative to the overall fitness of both parents is calculated and used to determine the position of the gene breakpoint c_break. ...

Traffic flow is chaotic due to nonstationary realistic factors, and revealing the internal nonlinear dynamics of chaotic data and making high-accuracy predictions is the key to traffic control and inducement. High-quality phase space reconstruction is the foundation of predictive modeling. Firstly, an improved C-C method based on a fused-norm search domain is proposed to address the issue that the C-C phase space reconstruction method does not meet Euclidean metric accuracy, reducing the reconstruction quality, when the infinity norm metric is used. Secondly, to address the insufficient ability of traditional convolutional combined models to learn the complex phase space laws of chaotic traffic flow, high-dimensional phase space features are extracted using the layer-by-layer pretraining mechanism of convolutional deep belief networks (CDBNs), and temporal features are extracted by combining them with long short-term memory (LSTM). Finally, an improved probabilistic dynamic reproduction-based genetic algorithm (PDRGA) is proposed to address the problem of the hybrid model falling into a local optimum when learning the phase space law. Experiments are conducted in three aspects: phase space reconstruction quality analysis, comparison of optimization algorithm convergence, and prediction model performance comparison. Experimentation with two data sets demonstrates that the improved C-C method combines the high-accuracy metric of the L2 norm with the low operational complexity of the infinity norm, achieving a balance between reconstruction quality and algorithm efficiency. The proposed PDRGA optimization algorithm is a lightweight improvement of the traditional genetic algorithm (GA) and solves the model's tendency to fall into a local optimum by optimizing the initial weights of the CDBN.
Meanwhile, the five error evaluation indexes of the proposed PDRGA-CDBN-LSTM hybrid model are lower than those of the baseline model, providing a new modeling idea for chaotic traffic flow prediction.
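The phase space reconstruction the pipeline above starts from is a time-delay embedding: each scalar observation is expanded into a vector of lagged copies. The sketch below takes the embedding dimension and delay as given, whereas the paper's improved C-C method is one way to choose them; the signal here is a synthetic stand-in for a traffic flow series.

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Takens-style delay embedding sketch: map a scalar series to
    phase-space points (x_t, x_{t+tau}, ..., x_{t+(dim-1)*tau})."""
    n = len(series) - (dim - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this (dim, tau)")
    return np.stack([series[i * tau : i * tau + n] for i in range(dim)], axis=1)

# synthetic quasi-periodic signal standing in for traffic flow data
t = np.arange(200)
x = np.sin(0.3 * t) + 0.5 * np.sin(0.71 * t)
X = delay_embed(x, dim=3, tau=4)
print(X.shape)   # reconstructed phase-space points, one per row
```

These reconstructed vectors are what a downstream model (the CDBN-LSTM hybrid in the paper) consumes in place of the raw one-dimensional series.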

... The system shows that correlating speed violations with high ticket fines helps regulate users' behavior. However, providing these sensors on all the network's streets is a costly and impractical task [7]. Therefore, users tend to identify the roads on which the sensors are installed to avoid being charged, without any tangible change in their driving habits. ...

... In [31,32], bi-level optimization is used, by which the equilibrium between the estimated O/D flows and the users' route choice model is ensured in the final solution. Recently, Owais et al. [7] deployed deep neural networks for flow estimation using an innovative learning architecture based on stacked sparse autoencoders (SAEs) to attain meaningful patterns between the sensor data and the network structure. ...

This study presents a mathematical approach to distributing portable excess-speed detectors in urban transportation networks. These sensors are located in the network so as to separate most of the demand node pairs in the system, resembling the well-known traffic sensor surveillance problem. Newly, however, the locations are permitted to change, introducing a dynamic form of the sensor location problem. The problem is formulated mathematically into three different location problems, namely SLP1, SLP2, and SLP3. The aim is to find the optimal number of sensors to intercept most of the daily traffic for each model objective. The proposed formulations are proven to be NP-hard, so heuristics are employed for the solution. The methodology is applied to AL Riyadh city as a real case study network with 240 demand node pairs and 124 two-way streets. In SLP1, all the demand node pairs are covered by 19% of the network's roads, whereas the SLP2 model gives the best locations for each assumed sensor purchasing budget. The SLP2 solutions range from 24 sensors with 100% path coverage to 1 sensor with nearly 20% path coverage. The SLP3 model redistributes the sensors in the network while maintaining their traffic coverage efficiency. Four location structures manage to cover all the network streets, with coverage ranging between 100% and 60%. The results show the capability of providing satisfactory solutions with a reasonable computing burden.

... However, in the real world, each link's travel time is variable and hard to predict. It depends on the level of traffic on the link, so the algorithm fails to find the optimum path unless the traffic is observed on all links in real time, which is an impractical premise [9–11]. Traffic is also stochastic over time, causing travel time uncertainty and turning the transportation network (TN) into what is called a Stochastic Transportation Network (STN). ...

... It also has a non-path-based solution algorithm, which makes it a non-biased method to evaluate the generated paths. DUE is formulated in Eqs. (11)–(14). To solve this set of equations for any time slot, the convex combination method is adapted as follows: ...

Routing problems play a crucial part in urban transportation network operation and management. This study addresses the problem of finding a set of non-dominated shortest paths in stochastic transportation networks. Instead of the previous practice of assuming that travel time variability follows a known probability density function, it is extracted from the existing correlation between the traffic flow and the corresponding links' travel times. The time horizon is divided into intervals/slots in which the network is assumed to experience a static traffic equilibrium, with different traffic conditions for each slot. Starting with a priori demand information, pre-generated paths, and a chosen traffic assignment method, the proposed methodology conducts successive simulations over the network intervals. It draws the travel time probability distributions of both links and paths, considering the correlation among them. Then, a multi-objective analysis is conducted on the generated paths to produce the Pareto-optimal set for each demand node pair in the network. Numerical studies show the methodology's efficiency and generality for any network. The expected travel time and reliability can be drawn for each path in the network.
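The final multi-objective step reduces to non-dominated filtering: keep every path that no other path beats on all objectives at once. A minimal sketch, assuming each path is scored by (expected travel time, a risk measure such as the travel time's standard deviation) with lower being better for both; the candidate paths and scores are invented.

```python
def pareto_paths(paths):
    """Return the non-dominated paths. A path is dominated if some other
    path is at least as good on both objectives and strictly better on one."""
    front = []
    for name, (mean, risk) in paths.items():
        dominated = any(
            (m2 <= mean and r2 <= risk) and (m2 < mean or r2 < risk)
            for n2, (m2, r2) in paths.items() if n2 != name
        )
        if not dominated:
            front.append(name)
    return front

candidates = {
    "p1": (10.0, 4.0),   # fast but unreliable
    "p2": (12.0, 1.5),   # slower but dependable
    "p3": (13.0, 4.5),   # dominated: slower AND riskier than p1 and p2
}
print(pareto_paths(candidates))   # the Pareto-optimal set for this O/D pair
```

The quadratic all-pairs scan is fine for the per-O/D-pair path counts involved; a sort-based sweep would be the usual refinement for larger sets.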

... As a partial remedy, many researchers [5,6] have identified Mobility as a Service (MaaS) as one of the most effective solutions in the different CM paradigms. These approaches for mobility services are now discussed as possible sustainable solutions for transportation planning, promising the enhancement of traffic management and the lessening of congestion [5,7]. MaaS can offer travelers access to several modes of transport without the need to own any vehicle, thereby presenting travelers with seamless and carefree traveling [8]. ...

Mobility as a Service (MaaS), as a part of the smart mobility paradigm, is recognized as one of the most effective solutions for the congestion management (CM) problem in cities. MaaS is a possible sustainable solution for transportation planning, promising the enhancement of traffic management and the lessening of congestion. MaaS can offer travelers access to several modes of transport without the need to own any vehicle, thereby presenting travelers with seamless and carefree traveling. This study aims to develop a methodological framework adapting MaaS as a supportive tool to alleviate traffic congestion. To support this mobility, the users and the drivers should be connected via a single platform based on an Artificial Intelligence algorithm (Reinforcement Learning, for example). Such a strategy would optimize mobility in the area as a whole over time by learning from actions/decisions such as ride-sharing matching, taxi dispatching, in-route guiding, and the generation of intermodal paths. That would help in providing solutions for real-time interaction. Decisions about departure times, paths to follow, and modes of travel would be available for all.

... In addition, Shao et al. (2021) used the same concept while minimizing the error propagation of accumulated counting measurements for each link inference. Interestingly, Owais et al. (2020c) obtained similar results to these link flow inference methods with fewer sensors via deep learning neural networks. The proposed technique can learn the latent relationships among a network's flow elements to accurately predict missing link data (Moussa & Owais, 2020). ...

Traffic flow data is a decisive element in transportation planning and traffic management. Over time, traffic sensors have been recognized as sources of such data. Despite their outstanding capabilities in measuring different traffic flow information types, they are not practical to apply across all transportation network streets or intersections. Thus, the traffic sensor location problem (TSLP) has emerged to answer two typical questions: how many sensors are needed, and what are the best locations for their deployment. This paper reviews the TSLP classes that have been extensively examined in the literature over the last three decades. This study tries to fulfill two major gaps in the existing literature. First, this is the only review article that summarizes the contributions made toward solving the TSLP spanning nearly 30 years. Second, it presents a comprehensive review and analysis of most TSLP studies with a new categorization system. This contribution clarifies the progress made and provides recommendations for further research.

... While loop detectors are installed to collect link flow data, the observation points are often limited to a subset of links, and a large proportion of links still lack direct observations. Thus, unobserved link flows need to be estimated based on available data; this is referred to as the link flow estimation problem in the transportation literature (Abadi et al. 2015; Brunauer et al. 2017; Lederman and Wynter 2011; Owais et al. 2020; Van Oijen et al. 2020). ...

This paper addresses the problem of estimating link flows in a road network by combining limited traffic volume and vehicle trajectory data. While traffic volume data from loop detectors have been the common data source for link flow estimation, the detectors only cover a subset of links. Vehicle trajectory data collected from vehicle tracking sensors are also incorporated these days. However, trajectory data are often sparse in that the observed trajectories only represent a small subset of the whole population, where the exact sampling rate is unknown and may vary over space and time. This study proposes a novel generative modelling framework, where we formulate the link-to-link movements of a vehicle as a sequential decision-making problem using the Markov Decision Process framework and train an agent to make sequential decisions to generate realistic synthetic vehicle trajectories. We use Reinforcement Learning (RL)-based methods to find the best behaviour of the agent, based on which synthetic population vehicle trajectories can be generated to estimate link flows across the whole network. To ensure the generated population vehicle trajectories are consistent with the observed traffic volume and trajectory data, two methods based on Inverse Reinforcement Learning and Constrained Reinforcement Learning are proposed. The proposed generative modelling framework solved by either of these RL-based methods is validated by solving the link flow estimation problem in a real road network. Additionally, we perform comprehensive experiments to compare the performance with two existing methods. The results show that the proposed framework has higher estimation accuracy and robustness under realistic scenarios where certain behavioural assumptions about drivers are not met or the network coverage and penetration rate of trajectory data are low.
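The generative core of the framework above is that a link-to-link policy, rolled out over many synthetic vehicles, yields population trajectories whose link visit counts are the estimated flows. The sketch below hand-fixes a deterministic-looking policy on a toy network purely for illustration; in the paper this policy is what the RL/IRL machinery learns so that the generated flows match the observed volumes and trajectories.

```python
import random
from collections import Counter

random.seed(0)

def generate_link_flows(policy, origin_dist, n_vehicles, absorbing):
    """Roll out synthetic trajectories under a link-to-link transition
    policy and read off link flows as visit counts."""
    flows = Counter()
    for _ in range(n_vehicles):
        # sample the first link from the origin distribution
        link = random.choices(*zip(*origin_dist.items()))[0]
        while True:
            flows[link] += 1
            if link in absorbing:
                break   # trajectory ends at an absorbing (destination) link
            nxt, probs = zip(*policy[link].items())
            link = random.choices(nxt, probs)[0]
    return flows

# toy network: start on link a or b, merge into c, end on d
policy = {
    "a": {"c": 1.0},
    "b": {"c": 1.0},
    "c": {"d": 1.0},
}
flows = generate_link_flows(policy, {"a": 0.5, "b": 0.5}, 1000, absorbing={"d"})
print(flows["c"], flows["d"])   # every trajectory passes through c, then d
```

Constraining or reweighting this rollout so that `flows` matches detector counts on the observed links is exactly where the Inverse RL and Constrained RL variants in the paper come in.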

... The performance of such models is outstanding in many respects (Owais 2020, 2021). At present, DL techniques in urban traffic flow research mainly include traditional neural networks applicable to gridded traffic flows and graph neural networks (GNNs) applicable to networked flows (Bui, Cho, and Yi 2021; Owais, Moussa, and Hussain 2020; Xiong et al. 2020). In contrast to gridding urban traffic as a feature image, the networked approach considers the traffic flow only at node locations. ...

Prompt and accurate traffic flow forecasting is a key foundation of urban traffic management. However, the flows in different areas and feature channels (inflow/outflow) may carry different degrees of importance for forecasting. Many forecasting models inadequately consider this heterogeneity, resulting in decreased predictive accuracy. To overcome this problem, an attention-based hybrid spatiotemporal residual model assisted by spatial and channel information is proposed in this study. By assigning different weights (attention levels) to different regions, the spatial attention module selects relatively important locations from all inputs during modeling. Similarly, the channel attention module selects relatively important channels from the multichannel feature map by assigning different weights. The proposed model provides effective selection and attention results for key areas and channels during forecasting, thereby decreasing the computational overhead and increasing accuracy. In the case of Beijing, the proposed model exhibits a 3.7% lower prediction error, and its runtime is 60.9% less than that of the model without attention, indicating that the spatial and channel attention modules are instrumental in increasing forecasting efficiency. Moreover, in the case of Shanghai, the proposed model outperforms other models in terms of generalizability and practicality.
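The channel-attention mechanism described above can be illustrated in its simplest form: summarize each channel, convert the summaries into normalized weights, and rescale the channels. This is only the skeleton of the idea under assumed shapes; the paper's module learns the summary-to-weight mapping, whereas here it is just a softmax over pooled means.

```python
import numpy as np

def channel_attention(feature_map):
    """Channel attention sketch: global-average-pool each channel,
    softmax the pooled values into weights, and rescale the channels.
    feature_map has shape (channels, height, width)."""
    pooled = feature_map.mean(axis=(1, 2))      # one scalar per channel
    w = np.exp(pooled - pooled.max())
    w /= w.sum()                                # attention weights sum to 1
    return feature_map * w[:, None, None], w

# inflow/outflow grids as a 2-channel feature map (toy values)
fmap = np.stack([np.full((4, 4), 3.0), np.full((4, 4), 1.0)])
out, w = channel_attention(fmap)
print(np.round(w, 3))   # the stronger channel receives the larger weight
```

A learned version replaces the identity on `pooled` with a small trainable network, letting the data decide which channel (inflow or outflow) matters more at each step.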

... Neural networks (NNs) have attracted widespread attention since being applied in numerous fields, including machine learning, deep learning, and engineering data prediction [1–3]. As the fourth two-terminal circuit element, the memristor was predicted to exist by Chua in 1971, and the first memristor prototype was obtained by the HP research team [4–6]. ...

This paper investigates the passivity of multiple weighted coupled memristive neural networks (MWCMNNs) based on the feedback control. Firstly, a kind of memristor-based coupled neural network model with multiple weights is presented for the first time. Furthermore, a novel passivity criterion for MWCMNNs is established by constructing an appropriate Lyapunov functional and developing a suitable feedback controller. In addition, with the assistance of some inequality techniques, sufficient conditions for ensuring the input strict passivity and output strict passivity of MWCMNNs are derived. Finally, the validity of the theoretical results is verified by a numerical example.

... (3) The adjustment of the support stiffness or sleeper spacing leads to fluctuations in the corrugation wavelength and its growth rate, while reducing the support stiffness and the sleeper spacing can suppress the formation of rail corrugation. The above studies coincide well with the conclusions obtained in existing studies and are of great significance to the prevention and maintenance of corrugated wear, while real-time detection of rail corrugation based on deep learning networks [67,68] will be investigated in future research. ...

Urban rail corrugation on curved tracks with small radii causes strong howling during operation, which has been bothering subway operating companies for many years. Therefore, revealing its causes and growth is important for the comfort and safety of subway operation. Current studies believe that the occurrence of rail corrugation is largely due to the resonant vibration of the wheel-rail system. However, little attention has been paid to the key causes of the track resonance and the practical prediction of the occurrence probability of rail corrugation on a given track. This paper intends to solve these issues. Firstly, a practical model for predicting rail corrugation growth is proposed based on the wheel-rail coupling interaction, the key causes of corrugation are investigated, and a sensitivity analysis is carried out, while the corrugation superposition model is introduced to analyze the corrugation evolution as well as to validate the corrugation growth from the aspect of material friction and wear. Secondly, the impact of the key causes on the initiation and development of rail corrugation is investigated based on co-simulation. Finally, case studies validate the proposed theoretical model and method. The results show that the practical prediction model for rail corrugation growth proposed in this paper is able to estimate the occurrence probability of rail corrugation on a specific track, and that the superharmonic resonance of the track directly excited by passing vehicles eventually leads to rail corrugation. It is also found that shortwave corrugation develops more rapidly, and adjusting the support stiffness or sleeper spacing leads to fluctuations in the corrugation wavelength and its wear rate.

... Most of the line design techniques depend on demand information, which adds more complexity to the problem. As with the conventional transportation problem, transit O/D estimation can be based on the well-known step planning models (i.e., trip generation, trip distribution, modal choice, and traffic assignment) [44][45][46][47]. For actual-size networks such as ours, the four models are unlikely to give accurate results due to the high level of uncertainty at the operational stage [48,49]. ...

The overall purpose of this study is to enhance existing transit systems by planning a new underground metro network. The design of a new metro network in existing cities is a complex problem. Therefore, the idea of this study arises from the need to move beyond conventional metro network design and develop a future scheme for forecasting an optimal metro network for these existing cities. Two models are proposed to design metro transit networks based on an optimal cost-benefit ratio. Model 1 presents a grid metro network, and Model 2 presents a ring-radial metro network. The proposed methodology introduces a non-demand criterion for transit system design. The new network design aims to increase overall transit system connectivity by minimizing passenger transfers through the transit network between origin and destination. An existing square city is presented as a case study for both models. It includes twenty-five traffic analysis zones, and thirty-six new metro stations are selected at existing street intersections. TransCAD software is used as a base for coordinating the stations and the metro network lines. A passenger transfer counting algorithm is then proposed to determine the number of transfers needed between stations from each origin to each destination. Thus, a passenger origin/destination transfer matrix is created via the NetBeans program to help determine the number of transfers required to complete the trips on both proposed networks. Results show that Model 2 achieves a cost-benefit ratio (CBR) 41% higher than that of Model 1. Therefore, the ring-radial network is found to be more suitable for existing square cities than the grid network in terms of overall network connectivity.

... Most of the line design techniques depend on demand information, which adds more complexity to the problem. As with the conventional transportation problem, transit O/D estimation can be based on the well-known step planning models (i.e., trip generation, trip distribution, modal choice, and traffic assignment) [49][50][51][52]. For actual-size networks such as ours, the four models are unlikely to give accurate results due to the high level of uncertainty at the operational stage [53,54]. ...

Traffic congestion has become one of the most significant problems in developing countries in recent years. This study investigates the integration of bus and metro systems by proposing new ring lines. The study methodology presents a practical scheme for multiple subway line design that obviates the difficulty of dealing with large-scale networks, which suffer from severe combinatorial complexity that hinders many theoretical design algorithms. The new lines aim to increase the connection between transit modes and consequently the overall transit network efficiency. In the strategic design phase, a mathematical formulation is derived to minimize the passenger transfer number (PTN) among public transportation facilities. A real network of Greater Cairo is used to validate the presented methodology. After testing many solutions using the brute-force technique, two subway lines are recommended, with their station structure, to increase the overall network connectivity by more than 70%.

... Furthermore, it would be helpful to propose a non-demand-based criterion for analyzing demand-coverage imbalances in existing transit networks. This would allow unsatisfied demand centers to be determined and facilitate direct planning in scenarios concerning large-scale networks with unreliable demand information (Owais, Moussa, & Hussain, 2020). Reducing the potential number of transfers from one mode to another could reflect positively on passengers and increase their confidence in public transport (Owais & Osman, 2018). ...

Connectivity is a significant problem in large-scale transit networks because the number of transfers required to conduct a trip is considered a discomfort by transit users. This paper presents a practical solution for an underground metro line planning problem by integrating existing bus and metro networks into a single connected transit network. The proposed method aims to obviate the usual combinatorial complexity when solving a transit route design problem. It aims to increase the overall transit system connectivity by selecting a consistent and non-demand-oriented criterion for the design. The metro lines are designed by minimizing passenger transfers through the transit network according to predefined demand node pairs. The design scheme offers a set of ring route alternatives for a sizeable case study in Greater Cairo. The case study selected sixteen traffic analysis zones, an existing metro network consisting of three main lines (113.6 km long), and twelve main bus lines (487.7 km long) for analysis. TransCAD software was used as the basis for coordinating the stations and lines of both the bus and metro systems. Subsequently, a passenger transfer counting algorithm was implemented to determine the number of transfers required between stations from each origin to each destination. A passenger origin-destination transfer matrix was created using the NetBeans integrated development environment to help determine the number of transfers required to complete trips on the transit network before and after proposing the new line. Based on the evaluation, the ring lines were highly efficient at significantly decreasing passenger transfers between stations with the minimum construction cost. This study will be of value during the strategic stages of the transit line design and will assist in rapidly generating initial solutions when certain demand information is unavailable.
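The passenger transfer counting step described above can be sketched as a breadth-first search over transit lines, where each additional line boarded beyond the first counts as one transfer. The toy network below is hypothetical, not the Greater Cairo case study:

```python
from collections import deque

def min_transfers(lines, origin, dest):
    """Minimum number of transfers needed to travel origin -> dest.

    lines: dict mapping line name -> list of stations it serves.
    Returns the transfer count, or None if dest is unreachable."""
    # Build station -> set of lines serving it
    at = {}
    for name, stops in lines.items():
        for s in stops:
            at.setdefault(s, set()).add(name)
    if origin not in at or dest not in at:
        return None
    # BFS over lines: each hop beyond the first boarding is one transfer
    seen = set(at[origin])
    queue = deque((ln, 0) for ln in at[origin])
    while queue:
        ln, transfers = queue.popleft()
        if dest in lines[ln]:
            return transfers
        for s in lines[ln]:
            for nxt in at[s]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, transfers + 1))
    return None

# Hypothetical 3-line network: line C is reachable from line A only via B
network = {"A": [1, 2, 3], "B": [3, 4, 5], "C": [5, 6]}
```

Aggregating `min_transfers` over all origin-destination pairs yields the kind of transfer matrix the abstract describes.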

... One of the fastest-growing methods in the field of ML, which has gained popularity in recent years due to its outstanding results in numerous engineering application domains, is the deep learning (DL) architecture [70,71]. A DL architecture is an artificial neural network that contains multiple layers (deep networks) between the input and output layers [72]. Multiple layers allow the architecture to progressively extract high-level features from the raw input data [73,74]. ...

Evaluating the expected performance of hot mix asphalt (HMA) is one of the significant aspects of highways research. The dynamic modulus (E*) presents itself as a fundamental mechanistic property that is one of the primary inputs for mechanistic-empirical pavement design models. Unfortunately, E* testing is an expensive and complicated task that requires advanced testing equipment. Moreover, a significant source of difficulty in E* modeling is that many of the factors of variation in the HMA mixture components and testing conditions significantly influence the predicted values. For each laboratory practice, a vast number of mixes are required to estimate E* accurately. This study aims to extend the knowledge/practice of other laboratories to a target one in order to reduce the laboratory effort required for E* determination while attaining accurate E* prediction. Therefore, a transfer learning solution using deep learning (DL) technology is adopted for the problem. With transfer learning, instead of starting the learning process from scratch, knowledge gained when solving a similar problem is reused. A deep convolutional neural network (DCNN) technique, which incorporates a stack of six convolution blocks, is newly adapted for that purpose. Pre-trained DCNNs are constructed using a large data set that comes from different sources to constitute cumulative learning. The constructed pre-trained DCNNs aim to dramatically reduce the effort elsewhere (the target lab) when it comes to the E* prediction problem. Then, the justification for this laboratory effort reduction is investigated by fine-tuning the constructed pre-trained DCNNs using a limited amount of the target lab's data. The performance of the proposed DCNNs is evaluated using representative statistical performance indicators and compared with well-known predictive models (e.g., the η-based Witczak 1-37A, G,δ-based Witczak 1-40D, and G-based Hirsch models). The proposed methodology proves itself as an excellent tool for E* prediction compared with the other models. Moreover, it preserves its accurate performance with less input data by using the learning transferred from the previous phase of the solution.
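The transfer-learning idea described above, pre-training on abundant source-lab data and then fine-tuning on a small target-lab sample, can be illustrated with a deliberately minimal sketch. A one-variable linear model stands in for the paper's DCNNs, and both data-generating relations below are invented for illustration:

```python
def fit(xs, ys, w=0.0, b=0.0, lr=0.01, epochs=2000):
    """1-D linear regression by full-batch gradient descent,
    warm-started at the given (w, b)."""
    n = len(xs)
    for _ in range(epochs):
        dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * dw
        b -= lr * db
    return w, b

# "Source lab": plenty of data following y = 2x + 1 (hypothetical relation)
src_x = [i / 10 for i in range(100)]
src_y = [2 * x + 1 for x in src_x]
w0, b0 = fit(src_x, src_y)                            # pre-training

# "Target lab": only a handful of points from a slightly shifted relation
tgt_x = [0.5, 1.0, 1.5]
tgt_y = [2.1 * x + 1.05 for x in tgt_x]
w1, b1 = fit(tgt_x, tgt_y, w=w0, b=b0, epochs=100)    # fine-tuning
```

The fine-tuned model starts from the source-lab parameters instead of zero, which is the essence of reducing target-lab effort.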

... Past decades have been dedicated to traffic simulation models and algorithms, since demand simulation is essential for the analysis of urban transportation systems [45][46][47]. Specialized simulation engineering can study the formation and dissipation of congestion on roadways, assess the impacts of control strategies, and compare alternative geometric configurations. A comprehensive list of methods and guidelines supporting the use, calibration, and validation of traffic simulators can be found in [48]. ...

The increasing congestion on transportation networks has raised inconvenience among the networks' users, especially at major-city intersections, where problems such as delays and low levels of service are frequently experienced. Roundabouts are often offered as a solution or an alternative to existing signalized intersections on the premise that they substantially reduce delays. However, it is essential to adopt advanced transportation analysis tools to determine whether this solution's performance would last over time. This study focuses on evaluating the performance over time of two real existing roundabouts in the cities of Jeddah and Al-Madinah. It aims at analyzing and assessing the level of service of these two intersections in the case of the existing roundabouts and their parallel signalized-intersection solution, to decide when it is better to convert the existing roundabout into a signalized intersection. Sidra and Synchro, the simulation tools, are fed with synthetically generated demand scenarios that represent the normal increase in traffic over the years. As a result, the underperforming year is easily detected for both intersections. Also, the impact of the rise in left-turn volume is evaluated for both solution types. It is expected that the proposed framework will help practitioners continuously assess the applicability of converting roundabout solutions to signalized intersections, besides determining the span of service.

The primary objective of this study was to evaluate the impacts of traffic states on crash risk in the vicinities of Type A weaving segments. A deep convolutional embedded clustering (DCEC) was developed to classify traffic flow into nine states. The proposed DCEC outperformed the three common clustering algorithms, i.e. K-means, deep embedded clustering, and deep convolutional autoencoders clustering, in terms of silhouette coefficient and Calinski-Harabasz index on the same samples, suggesting that the DCEC provides better clustering performance. The characteristics of the nine traffic states are described for the right and inside lanes separately. The DCEC visualization indicates that the spatiotemporal features of the nine traffic states are different from each other. The empirical analyses suggest that crash severity and the main types of crashes are different across the nine traffic states. The results of the logistic regression model prove that the nine traffic states are significantly associated with crash risk in the vicinities of weaving segments, and each traffic state can be assigned a unique safety level. The convolutional neural network with gated convolutional layers (G-CNN) was developed to predict the crash risk in each traffic state. Compared with the traditional four-state classification based on 4-phase traffic theory, the model incorporating the various crash mechanisms across the nine traffic states provides more accurate predictions.

With the progress in intelligent transportation systems, great interest has been directed towards traffic sensor information for flow estimation problems. Nevertheless, it is a great challenge to locate such traffic sensors on a network so as to attain the maximum benefit from them. Considering the O/D matrix estimation problem, all traffic sensor location models depend crucially on the reliability of the estimated matrix compared with a priori flow information. Thus, the required number of sensors (cost) and their locations for a network vary according to the estimation technique (e.g. least squares, minimizing entropy, maximum likelihood, etc.) as well as the reliability of the a priori information. Alternatively, this study presents a robust traffic sensor location model, which produces different trade-offs between the potential accuracy of the estimated O/D matrix and the cost of sensor installation in polynomial time complexity. The proposed approach searches for the number and locations of sensors that minimize the bound on the maximum possible relative error of the estimated O/D matrix. The traffic sensor location problem is formulated as a set covering problem, then a multi-criteria meta-heuristics algorithm is adopted. The novelty of this work is that it targets the maximum possible relative error directly in the multi-objective design process, which is considered a robust criterion for evaluating a solution set. Moreover, the proposed approach is extended to incorporate the screen line problem in a straightforward manner. For the purpose of validating the feasibility and the effectiveness of the proposed approach, two real networks are used. The results show the capability of producing the Pareto optimal (near optimal) solutions for any network.
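The set-covering core of such formulations can be illustrated with a simple greedy heuristic. The paper itself uses a multi-criteria meta-heuristic; the greedy rule below is only a minimal sketch, and the route/link data are hypothetical:

```python
def greedy_sensor_cover(routes, links):
    """Greedy set cover: pick candidate links (sensor sites) until every
    route is intercepted by at least one instrumented link.

    routes: dict route_id -> set of links the route traverses.
    links:  ordered list of candidate sensor links.
    Returns the chosen links (an approximation, not guaranteed minimal)."""
    uncovered = set(routes)
    chosen = []
    while uncovered:
        # Pick the link intercepting the most still-uncovered routes
        best = max(links, key=lambda l: sum(1 for r in uncovered if l in routes[r]))
        gain = sum(1 for r in uncovered if best in routes[r])
        if gain == 0:
            break  # remaining routes cross no candidate link
        chosen.append(best)
        uncovered -= {r for r in uncovered if best in routes[r]}
    return chosen
```

The greedy rule gives the classical logarithmic approximation guarantee for set cover, which is why it is a common baseline for sensor-location heuristics.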

Urban traffic congestion prediction is a very hot topic due to the environmental and economic impacts it currently implies. In this sense, being able to predict bottlenecks and provide alternatives for the circulation of vehicles becomes an essential task for traffic management. A novel methodology, based on ensembles of machine learning algorithms, is proposed in this paper to predict traffic congestion. In particular, a set of seven machine learning algorithms has been selected to evaluate their effectiveness in traffic congestion prediction. Since all seven algorithms are able to address supervised classification, the methodology has been developed as a binary classification problem. Thus, data collected from sensors located in the Spanish city of Seville are analyzed, and models reaching up to 83% accuracy are generated.
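The combination step of such an ensemble can be sketched as a majority vote over binary classifiers. The threshold rules below are hypothetical stand-ins for the seven trained algorithms:

```python
def majority_vote(classifiers, x):
    """Combine binary classifiers (each returning 0 or 1) by majority vote."""
    votes = sum(clf(x) for clf in classifiers)
    return 1 if votes * 2 > len(classifiers) else 0

# Hypothetical classifiers on a single "occupancy" feature;
# 1 = congested, 0 = free-flowing (thresholds are invented)
clfs = [lambda x: int(x > 0.5),
        lambda x: int(x > 0.6),
        lambda x: int(x > 0.8)]
```

At occupancy 0.7, two of the three voters say "congested", so the ensemble outputs 1; at 0.55, only one does, so it outputs 0.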

This paper compares four different artificial neural network approaches for computer network traffic forecasting: (1) a multilayer perceptron (MLP) using backpropagation as the training algorithm; (2) an MLP with resilient backpropagation (Rprop); (3) a recurrent neural network (RNN); and (4) a deep learning stacked autoencoder (SAE). The computer network traffic is sampled from the traffic of the network devices that are connected to the internet. It is shown herein how simpler neural network models, such as the RNN and MLP, can work even better than a more complex model, such as the SAE. Internet traffic prediction is an important task for many applications, such as adaptive applications, congestion control, admission control, anomaly detection, and bandwidth allocation. In addition, efficient methods of resource management, such as bandwidth management, can be used to gain performance and reduce costs, improving the quality of service (QoS). The popularity of the newest deep learning methods has been increasing in several areas, but there is a lack of studies concerning time series prediction, such as internet traffic prediction.

It has been recognized by many researchers that accurate bus travel time prediction is critical for the successful deployment of traffic signal priority (TSP) systems. Although there exist many studies on travel time prediction for Advanced Traveler Information Systems (ATIS), this problem for TSP purposes is a little different, and the amount of literature is limited. This paper proposes a deep learning based approach for the continuous travel time prediction problem. Parameters of the deep network are fine-tuned following a layer-by-layer pre-training procedure on a dataset generated by traffic simulations. Variables that may affect continuous travel time are selected carefully. Experiments are conducted to validate the performance of the proposed model. The results indicate that the proposed model produces predictions with a mean absolute error of less than 4 seconds, which is accurate enough for TSP operations. This paper also reveals that, besides obvious factors like speed, travel distance, and traffic density, the signal time when the prediction is made is also an important factor affecting travel time.

This paper proposes a two-stage optimization model to determine the origin–destination (O–D) trip matrix and the heterogeneous sensor deployment strategy in an integrated manner for a vehicular traffic network using sensor information from active (camera-based license plate recognition) and passive (vehicle detector) sensors. The first stage solves the heterogeneous sensor selection and location problem to determine the optimal sensor deployment strategy, in terms of the selection of the numbers of the two sensor types and their installation locations, to maximize the traffic information available for the O–D matrix estimation problem. The traffic information includes the observed link flow, path trajectory, and path coverage information. The second stage leverages this traffic information to determine the network O–D matrix that minimizes the error between the observed and estimated traffic flows (link, O–D, and/or path). Correspondingly, two network O–D matrix estimation models are proposed where the link-based model incorporates the flow conservation rule between O–D and link flows and uses the link-node incidence matrix, and the path-based model assumes a given link-path incidence matrix. An iterative solution procedure is designed to determine the network O–D matrix and link flow estimates. Results from numerical experiments suggest that the path-based model outperforms the link-based model in the estimation of network O–D matrices. The relative contributions of combinations of the two sensor types to the network O–D matrix estimation problem are also analyzed. They suggest that active sensors provide valuable path information to solve the O–D matrix estimation problem, but at the cost of a significantly higher unit price. The study results have key implications for heterogeneous sensor selection and location strategies.

Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech.

We demonstrate a new deep learning autoencoder network, trained by a nonnegativity constraint algorithm (NCAE), that learns features which show part-based representation of data. The learning algorithm is based on constraining negative weights. The performance of the algorithm is assessed based on decomposing data into parts, and its prediction performance is tested on three standard image data sets and one text dataset. The results indicate that the nonnegativity constraint forces the autoencoder to learn features that amount to a part-based representation of data, while improving sparsity and reconstruction quality in comparison with the traditional sparse autoencoder and Nonnegative Matrix Factorization. It is also shown that this newly acquired representation improves the prediction performance of a deep neural network.
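A minimal sketch of the nonnegativity idea: train a tiny linear autoencoder and project the weights onto the nonnegative orthant after every update. This hard projection is a simplification of NCAE's soft penalty on negative weights, and the rank-1 toy data are invented for illustration:

```python
import random

def train_nn_autoencoder(data, hidden=1, lr=0.05, epochs=3000, seed=0):
    """Tiny linear autoencoder whose weights are clipped to be
    nonnegative after each SGD step -- a crude stand-in for NCAE's
    soft nonnegativity penalty."""
    rng = random.Random(seed)
    d = len(data[0])
    W = [[rng.uniform(0.0, 0.5) for _ in range(d)] for _ in range(hidden)]  # encoder
    V = [[rng.uniform(0.0, 0.5) for _ in range(hidden)] for _ in range(d)]  # decoder
    for _ in range(epochs):
        x = rng.choice(data)
        h = [sum(W[j][i] * x[i] for i in range(d)) for j in range(hidden)]
        y = [sum(V[i][j] * h[j] for j in range(hidden)) for i in range(d)]
        e = [y[i] - x[i] for i in range(d)]                 # reconstruction error
        gV = [[e[i] * h[j] for j in range(hidden)] for i in range(d)]
        gW = [[sum(e[k] * V[k][j] for k in range(d)) * x[i] for i in range(d)]
              for j in range(hidden)]
        for i in range(d):
            for j in range(hidden):
                V[i][j] = max(0.0, V[i][j] - lr * gV[i][j])  # project to >= 0
        for j in range(hidden):
            for i in range(d):
                W[j][i] = max(0.0, W[j][i] - lr * gW[j][i])  # project to >= 0
    return W, V
```

On data lying on a nonnegative ray, a nonnegative rank-1 factorization reconstructs exactly, so the projection does not hurt reconstruction here; on richer data it forces the additive, parts-based decomposition the abstract describes.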

Static origin-destination (O-D) matrices that specify the number of trips from each origin to each destination are usually needed for several transportation planning and operations decisions. One approach to the estimation of an O-D matrix is to use data from traditional counting sensors on links in conjunction with models or assumptions on how vehicular traffic uses the network. A closely related problem is to locate a given number of counting sensors to obtain good estimates of O-D flows. In this paper, a new linear integer programming model is presented for placing the sensors so as to maximize the reduction in the uncertainties of the route flow estimates. The model assumes a general underlying traffic loading model, as long as the route choice set from each origin to each destination is known and prior route flows and their reliabilities for each O-D route are given. Extensive computational experiments and comparisons with some existing sensor location models indicate that the proposed model consistently gives good estimates of O-D flows.

Path flow estimator (PFE) is a one-stage network observer proposed in the transportation literature to estimate path flows and path travel times from traffic counts in a transportation network. The estimated path flows can further be aggregated to obtain the origin–destination (O–D) flows, which are usually required in many transportation applications. In this paper, we examine the capability of PFE in capturing the total demand of the study network as well as individual O–D demands. Numerical examples are provided to show the effects of the number and locations of traffic counts on the quality of O–D estimates. The results indicate that PFE has the potential to correctly estimate the total demand when proper observations, in terms of the number and their locations, are provided. In general, the spatial distribution of O–D demands is difficult to estimate even when traffic counts are available on all network links.

The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.

In this study, we consider the bi-objective traffic counting location problem for the purpose of origin-destination (O-D) trip table estimation. The problem is to determine the number and locations of counting stations that would best cover the network. The maximal coverage and minimal resource utilization criteria, which are generally conflicting, are simultaneously considered in a multi-objective manner to reveal the tradeoff between the quality and cost of coverage. A distance-based genetic algorithm (GA) is used to solve the proposed bi-objective traffic counting location problem by explicitly generating the non-dominated solutions. Numerical results are provided to demonstrate the feasibility of the proposed model. The primary results indicate that the distance-based GA can produce the set of non-dominated solutions from which the decision makers can examine the tradeoff between the quality and cost of coverage and make a proper selection without the need to repeatedly solve the maximal covering problem with different levels of resource.
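The non-dominated filtering at the heart of such a bi-objective search can be sketched as follows, with candidate solutions represented as hypothetical (cost, coverage) pairs, minimizing cost while maximizing coverage:

```python
def nondominated(solutions):
    """Return the Pareto-optimal subset of (cost, coverage) pairs,
    minimizing cost and maximizing coverage."""
    front = []
    for cost, cov in solutions:
        # A point is dominated if another point is at least as good in
        # both objectives and differs in at least one of them
        dominated = any(c2 <= cost and v2 >= cov and (c2, v2) != (cost, cov)
                        for c2, v2 in solutions)
        if not dominated:
            front.append((cost, cov))
    return sorted(set(front))
```

A distance-based GA evolves the candidate pool; a filter like this extracts the trade-off curve presented to the decision maker.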

Sensors are used to monitor traffic in networks. For example, in transportation networks, they may be used to measure traffic volumes on given arcs and paths of the network. This paper refers to an active sensor when it reads identifications of vehicles, including their routes in the network, that the vehicles actively provide when they use the network. On the other hand, the conventional inductance loop detectors are passive sensors that mostly count vehicles at points in a network to obtain traffic volumes (e.g., vehicles per hour) on a lane or road of the network.
This paper introduces a new set of network location problems that determine where to locate active sensors in order to monitor or manage particular classes of identified traffic streams. In particular, it focuses on the development of two generic locational decision models for active sensors, which seek to answer these questions: (1) “How many and where should such sensors be located to obtain sufficient information on flow volumes on specified paths?”, and (2) “Given that the traffic management planners have already located count detectors on some network arcs, how many and where should active sensors be located to get the maximum information on flow volumes on specified paths?”
The problem is formulated and analyzed for three different scenarios depending on whether there are already count detectors on arcs and, if so, whether all the arcs or only a fraction of them have them. Location of an active sensor results in a set of linear equations in path flow variables, whose solution provides the path flows. The general problem, which is related to the set-covering problem, is shown to be NP-Hard, but special cases are devised, where an arc may carry only two routes, which are shown to be polynomially solvable. New graph theoretic models and theorems are obtained for the latter cases, including the introduction of the generalized edge-covering by nodes problem on the path intersection graph for these special cases. An exact algorithm for the special cases and an approximate one for the general case are presented.
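The linear-equation view described above can be made concrete: each passive count contributes one equation summing the flows of the paths crossing that arc, while an active sensor can pin down an individual path flow. A minimal sketch on a hypothetical three-path example:

```python
def solve(A, b):
    """Solve the square system A x = b by Gauss-Jordan elimination
    with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # pivot row
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * g for a, g in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Hypothetical network with three path flows f1, f2, f3:
# passive count on arc e1 (carries p1, p2) reads 100,
# passive count on arc e2 (carries p2, p3) reads 90,
# an active sensor observes path p3 directly at 40.
A = [[1, 1, 0],
     [0, 1, 1],
     [0, 0, 1]]
b = [100, 90, 40]
flows = solve(A, b)  # -> [50.0, 50.0, 40.0]
```

The sensor-location question in the abstract is exactly which equations to buy so that this system becomes uniquely solvable.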

Successful traffic speed prediction is of great importance for the benefit of both road users and traffic management agencies. To solve the problem, traffic scientists have developed a number of time-series speed prediction approaches, including traditional statistical models and machine learning techniques. However, existing methods remain unsatisfying due to the difficulty of reflecting the stochastic characteristics of traffic flow. Recently, various deep learning models have been introduced to the prediction field. In this paper, a deep learning method, the Deep Belief Network (DBN) model, is proposed for short-term traffic speed prediction. The DBN model is trained in a greedy unsupervised manner and fine-tuned with labeled data. Based on traffic speed data collected from one arterial in Beijing, China, the model is trained and tested for different prediction time horizons. From the experimental analysis, it is concluded that the DBN can outperform the Back Propagation Neural Network (BPNN) and Auto-Regressive Integrated Moving Average (ARIMA) for all time horizons. The advantages of the DBN indicate that deep learning is promising in the traffic research area.

Traffic data provide the basis for both research and applications in transportation control, management, and evaluation, but real-world traffic data collected from loop detectors or other sensors often contain corrupted or missing data points, which need to be imputed for traffic analysis. To this end, we propose a deep learning model named denoising stacked autoencoders for traffic data imputation. We tested and evaluated the model's performance with consideration of both temporal and spatial factors. Through these experiments and evaluation results, we developed an algorithm for the efficient realization of deep learning for traffic data imputation by training the model hierarchically using the full set of data from all vehicle detector stations. Using data provided by Caltrans PeMS, we have shown that the mean absolute error of the proposed realization is under 10 veh/5-min, a better performance compared with other popular models: the history model, the ARIMA model, and the BP neural network model. We further investigated why the deep learning model works well for traffic data imputation by visualizing the features extracted by the first hidden layer. Clearly, this work has demonstrated the effectiveness as well as the efficiency of deep learning in the field of traffic data imputation and analysis.

Global optimization of the energy consumption of dual power source vehicles such as hybrid electric vehicles, plug-in hybrid electric vehicles, and plug-in fuel cell electric vehicles requires knowledge of the complete route characteristics at the beginning of the trip. One of the main characteristics is the vehicle speed profile across the route. The profile will translate directly into energy requirements for a given vehicle. However, the vehicle speed that a given driver chooses will vary from driver to driver and from time to time, and may be slower, equal to, or faster than the average traffic flow. If the specific driver speed profile can be predicted, the energy usage can be optimized across the route chosen. The purpose of this paper is to research the application of Deep Learning techniques to this problem: identifying, at the beginning of a drive cycle, the driver-specific vehicle speed profile for an individual driver's repeated drive cycle, which can be used in an optimization algorithm to minimize the amount of fossil fuel energy used during the trip.

The traditional approach to origin-destination (OD) estimation based on data surveys is highly expensive. Therefore, researchers have attempted to develop reasonable low-cost approaches to estimating the OD vector, such as OD estimation based on traffic sensor data. In this estimation approach, the location problem for the sensors is critical. One type of sensor that can be used for this purpose, on which this paper focuses, is vehicle identification sensors. The information collected by these sensors that can be employed for OD estimation is discussed in this paper. We use data gathered by vehicle identification sensors that include an ID for each vehicle and the time at which the sensor detected it. Based on these data, the subset of sensors that detected a given vehicle and the order in which they detected it are available. In this paper, four location models are proposed, all of which consider the order of the sensors. The first model always yields the minimum number of sensors to ensure the uniqueness of path flows. The second model yields the maximum number of uniquely observed paths given a budget constraint on the sensors. The third model always yields the minimum number of sensors to ensure the uniqueness of OD flows. Finally, the fourth model yields the maximum number of uniquely observed OD flows given a budget constraint on the sensors. For several numerical examples, these four models were solved using the GAMS software. These numerical examples include several medium-sized examples, including an example of a real-world large-scale transportation network in Mashhad.
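One necessary condition the order-aware models above build on can be sketched as a signature test: a sensor set can identify path flows uniquely only if every path produces a distinct ordered subsequence of sensor detections. The paths below are hypothetical:

```python
def distinguishes(paths, sensors):
    """True if every path yields a distinct ordered subsequence of
    detections over the given sensor set -- a necessary condition for
    uniquely identifying path flows from vehicle identification data."""
    signatures = [tuple(link for link in p if link in sensors) for p in paths]
    return len(set(signatures)) == len(signatures)

# Hypothetical paths given as ordered link sequences
paths = [["a", "b", "c"], ["a", "c"], ["c", "b"]]
```

With sensors on `b` and `c`, the three paths read (b, c), (c,), and (c, b), which are all distinct; with a sensor on `c` alone, all three collapse to (c,) and cannot be told apart. The location models in the abstract search for the cheapest sensor set passing such a test.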

The link observability problem is to identify the minimum set of links to be installed with sensors that allow the full determination of flows on all the unobserved links. Inevitably, the observed link flows are subject to measurement errors, which will accumulate and propagate in the inference of the unobserved link flows, leading to uncertainty in the inference process. In this paper, we develop a robust network sensor location model for complete link flow observability, while considering the propagation of measurement errors in the link flow inference. Our model development relies on two observations: (1) multiple sensor location schemes exist for the complete inference of the unobserved link flows, and different schemes can have different accumulated variances of the inferred flows as propagated from the measurement errors. (2) Fewer unobserved links involved in the nodal flow conservation equations will have a lower chance of accumulating measurement errors, and hence a lower uncertainty in the inferred link flows. These observations motivate a new way to formulate the sensor location problem. Mathematically, we formulate the problem as min–max and min–sum binary integer linear programs. The objective function minimizes the largest or cumulative number of unobserved links connected to each node, which reduces the chance of incurring higher variances in the inference process. Computationally, the resultant binary integer linear program permits the use of a number of commercial software packages for its globally optimal solution. Furthermore, considering the non-uniqueness of the minimum set of observed links for complete link flow observability, the optimization programs also consider a secondary criterion for selecting the sensor location scheme with the minimum accumulated uncertainty of the complete link flow inference.
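The min–max criterion above can be sketched on a toy instance: choose which links to leave unobserved (given a sensor budget) so that the largest number of unobserved links incident to any node is minimized. This is a simplified stand-in for the paper's binary integer program — it drops the observability constraints and uses a hypothetical four-node network — but it shows the objective at work.

```python
from itertools import combinations

# Toy network (hypothetical): links as (node, node) pairs.
links = [(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
nodes = {1, 2, 3, 4}
n_sensors = 3                      # sensor budget: 2 links stay unobserved

def worst_node_exposure(unobserved):
    # Largest number of unobserved links incident to any single node.
    return max(sum(1 for (u, v) in unobserved if w in (u, v)) for w in nodes)

best_value, best_unobserved = min(
    (worst_node_exposure(c), c)
    for c in combinations(links, len(links) - n_sensors)
)
```

The optimum leaves a "matching" of node-disjoint links unobserved, so no node's conservation equation involves more than one unobserved flow, which is exactly the intuition behind the paper's variance-reduction argument.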

Many past researchers have ignored the multi-objective nature of the transit route network design problem (TrNDP), recognizing user or operator cost as their sole objective. The main purpose of this study is to identify the inherent conflict among TrNDP objectives in the design process. The conventional scheme for transit route design is addressed. A route constructive genetic algorithm is proposed to produce a vast pool of candidate routes that reflect the objectives of design, and then, a set covering problem (SCP) is formulated for the selection stage. A heuristic algorithm based on a randomized priority search is implemented for the SCP to produce a set of nondominated solutions that achieve different tradeoffs among the identified objectives. The solution methodology has been tested using Mandl's benchmark network problem. The test results showed that the methodology developed in this research not only outperforms solutions previously identified in the literature in both strategic and tactical terms of design, but is also able to produce Pareto (or near-Pareto) optimal solutions. A real-scale network of Rivera was also tested to demonstrate the proposed methodology's reliability for larger-scale transit networks. Although many efficient meta-heuristics have been presented so far for the TrNDP, the presented one may take the lead because it does not require any weight coefficient calibration to address the multi-objective nature of the problem.
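The selection stage keeps nondominated solutions, i.e., candidates not beaten on every objective by some other candidate. A minimal dominance filter makes the idea concrete; the (user_cost, operator_cost) pairs below are hypothetical, not from Mandl's network.

```python
def dominates(b, a):
    # b dominates a if it is no worse in every objective (minimization)
    # and strictly better in at least one.
    return all(y <= x for x, y in zip(a, b)) and any(y < x for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t is not s)]

# Hypothetical (user_cost, operator_cost) values of candidate route sets.
candidates = [(10, 8), (9, 9), (12, 7), (11, 9), (9, 10)]
front = pareto_front(candidates)
```

Note that (11, 9) drops out because (10, 8) beats it on both objectives, while the surviving three each represent a different user/operator tradeoff.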

Accurate and timely traffic flow information is important for the successful deployment of intelligent transportation systems. Over the last few years, traffic data have been exploding, and we have truly entered the era of big data for transportation. Existing traffic flow prediction methods mainly use shallow traffic prediction models and remain unsatisfactory for many real-world applications. This situation inspires us to rethink the traffic flow prediction problem based on deep architecture models with big traffic data. In this paper, a novel deep-learning-based traffic flow prediction method is proposed, which inherently considers spatial and temporal correlations. A stacked autoencoder model is used to learn generic traffic flow features, and it is trained in a greedy layerwise fashion. To the best of our knowledge, this is the first time that a deep architecture model is applied using autoencoders as building blocks to represent traffic flow features for prediction. Moreover, experiments demonstrate that the proposed method for traffic flow prediction has superior performance.
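The building block of such a stacked model is a single autoencoder layer trained to reconstruct its input through a narrower hidden representation. The sketch below trains one such layer by plain gradient descent on synthetic stand-in data; the paper's method additionally adds a sparsity penalty, stacks several layers greedily, and fine-tunes end to end, none of which is shown here.

```python
import numpy as np

# One autoencoder layer on synthetic stand-in "traffic flow" vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
X = (X - X.mean(axis=0)) / X.std(axis=0)

n_in, n_hid, lr = 8, 4, 0.05
W1 = rng.normal(scale=0.1, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.1, size=(n_hid, n_in)); b2 = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(500):
    H = sigmoid(X @ W1 + b1)               # encoder (learned features)
    Xhat = H @ W2 + b2                     # linear decoder
    err = Xhat - X
    losses.append(float((err ** 2).sum(axis=1).mean()))
    dXhat = 2.0 * err / X.shape[0]         # gradient of the loss above
    dW2, db2 = H.T @ dXhat, dXhat.sum(axis=0)
    dH = dXhat @ W2.T
    dZ = dH * H * (1.0 - H)                # sigmoid derivative
    dW1, db1 = X.T @ dZ, dZ.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```

After training, the hidden activations H serve as the learned feature representation that the next layer (or a supervised output layer) consumes.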

Traffic sensors serve an important function in obtaining traffic information. In this paper, a novel traffic sensor location approach is proposed to determine the maximum number of traffic flows by considering the time-spatial correlation. The problem is formulated as three 0-1 programming models to maximise the number of obtained flows under different cases. To solve these novel sensor location problems, an ant colony optimisation algorithm with a local search procedure is designed. Numerical experiments are conducted in both a simulated network and in the Sioux-Falls network. Results demonstrate the effectiveness and robustness of the proposed algorithm, which is believed to possess potential applicability in real surveillance network design.

Path-differentiated congestion pricing is a tolling scheme that imposes tolls on paths instead of individual links. One way to implement this scheme is to deploy automated vehicle identification sensors, such as toll tag readers or license plate scanners, on roads in a network. These sensors collect vehicles’ location information to identify their paths and charge them accordingly. In this paper, we investigate how to optimally locate these sensors for the purpose of implementing path-differentiated pricing. We consider three relevant problems. The first is to locate a minimum number of sensors to implement a given path-differentiated scheme. The second is to design an optimal path-differentiated pricing scheme for a given set of sensors. The last problem is to find a path differentiated scheme to induce a given target link-flow distribution while requiring a minimum number of sensors.

In this paper, we deal with the observability problem in traffic networks and the optimal location of counting and scanning devices. After explaining what we mean by observability, the problems of what to observe, how to observe traffic data, and how to incorporate prior or obsolete information are discussed, together with the cases of genuine and pseudo-samples of flow data. Plate scanning information is dealt with, and the amount-of-information measure corresponding to a subset of scanned links is analysed. Some pivoting and matrix techniques are given for solving the most common problems of observability of traffic flows in a network. Finally, the problem of optimal location of counters and plate scanning cameras is analysed and several examples are given.
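The matrix techniques mentioned above boil down to solving node flow conservation equations for the unobserved flows. A minimal sketch on a hypothetical four-link network: conservation at node B gives x1 = x2 + x3 and at node C gives x2 = x4, so observing x1 and x3 determines x2 and x4 by linear algebra.

```python
import numpy as np

# Node conservation at internal nodes B and C of a toy network:
#   B: x1 = x2 + x3      C: x2 = x4
# Observed: x1 = 100, x3 = 30.  Unknowns: x2, x4.
A = np.array([[1.0,  0.0],    # x2        = x1 - x3
              [1.0, -1.0]])   # x2 - x4   = 0
b = np.array([100.0 - 30.0, 0.0])
x2, x4 = np.linalg.solve(A, b)
```

Here both unobserved flows come out as 70 vehicles; in larger networks the same idea is applied via pivoting on the full conservation system.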

With the advent of intelligent transportation systems, transportation networks have a considerable amount of traffic detectors, and large amounts of streaming data are available to manage and plan a multi-modal network and provide real-time traffic information to travelers. The related problem of optimally locating sensors on the network to estimate flows has been the object of growing interest in the past few years. Available sensors use various technologies and measure different aspects of traffic flows. This paper classifies sensor location problems in the literature in two categories: the sensor location flow-observability problem and the sensor location flow-estimation problem. This paper reviews the existing contributions for the latter of the two problem types and presents a unifying bilevel optimization framework in which the upper level addresses the location decisions and the lower level addresses the estimation variables. Several directions for future research are discussed.

Traffic flow prediction is a fundamental problem in transportation modeling and management. Many existing approaches fail to provide favorable results due to being: 1) shallow in architecture; 2) hand engineered in features; and 3) separate in learning. In this paper we propose a deep architecture that consists of two parts, i.e., a deep belief network (DBN) at the bottom and a multitask regression layer at the top. A DBN is employed here for unsupervised feature learning. It can learn effective features for traffic flow prediction in an unsupervised fashion, which has been examined and found to be effective for many areas such as image and audio classification. To the best of our knowledge, this is the first paper that applies the deep learning approach to transportation research. To incorporate multitask learning (MTL) in our deep architecture, a multitask regression layer is used above the DBN for supervised prediction. We further investigate homogeneous MTL and heterogeneous MTL for traffic flow prediction. To take full advantage of weight sharing in our deep architecture, we propose a grouping method based on the weights in the top layer to make MTL more effective. Experiments on transportation data sets show good performance of our deep architecture. Abundant experiments show that our approach achieved close to 5% improvements over the state of the art. It is also presented that MTL can improve the generalization performance of shared tasks. These positive results demonstrate that deep learning and MTL are promising in transportation research.

Deep learning is currently an extremely active research area in machine learning and pattern recognition society. It has gained huge successes in a broad area of applications such as speech recognition, computer vision, and natural language processing. With the sheer size of data available today, big data brings big opportunities and transformative potential for various sectors; on the other hand, it also presents unprecedented challenges to harnessing data and information. As the data keeps getting bigger, deep learning is coming to play a key role in providing big data predictive analytics solutions. In this paper, we provide a brief overview of deep learning, and highlight current research efforts and the challenges to big data, as well as the future trends.

In recent years, deep neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

Recently, a new methodology (“synergistic sensor location”) has been introduced to efficiently determine all link flows in a road network by using only a subset of the link flow measurements. In this paper, we generalize this previous work by solving the following problem: Suppose that one is only interested in a subset of the link flows, and that certain link flows are known a priori. At a minimum, what link flows are needed to be able to uniquely determine the desired link flows? An algorithm is presented that does not require the need for path enumeration.

The problem of optimally locating sensors on a traffic network to measure flows has been object of growing interest in the past few years, due to its relevance in transportation systems. Different locations of sensors on the network can allow, indeed, the collection of data whose usage can be useful for traffic management and control purposes. Many different models have been proposed in the literature as well as corresponding solution approaches. The proposed existing models differ according to different criteria: (i) sensor types to be located on the network (e.g., counting sensors, image sensors, Automatic Vehicle Identification (AVI) readers), (ii) available a-priori information, and (iii) flows of interest (e.g., OD flows, route flows, link flows). The purpose of this paper is to review the existing contributions and to give a unifying picture of these models by categorizing them into two main problems: the Sensor Location Flow-Observability Problem and the Sensor Location Flow-Estimation Problem. For both the problems, we will describe the corresponding computational complexity and the existing results. After describing various models and identifying their advantages and limitations, we conclude with several promising directions for future research and discuss other classes of location problems that address different objectives than the ones reviewed in the paper.

Sensors are becoming increasingly critical elements in contemporary transportation systems, gathering essential (real-time) traffic information for the planning, management and control of these complex systems. In a recent paper, Hu, Peeta and Chu introduced the interesting problem of determining the smallest subset of links in a traffic network for counting sensor installation, in such a way that it becomes possible to infer the flows on all remaining links. The problem is particularly elegant because of its limited number of assumptions. Unfortunately, path enumeration was required, which – as recognized by the authors – is infeasible for large-scale networks without further simplifying assumptions (that would destroy the assumption-free nature of the problem). In this paper, we present a reformulation of this link observability problem, requiring only node enumeration. Using this node-based approach, we prove a conjecture made by Hu, Peeta and Chu by deriving an explicit relationship between the number of nodes and links in a transportation network, and the minimum number of sensors to install in order to be able to infer all link flows. In addition, we demonstrate how the proposed method can be employed for road networks that already have sensors installed on them. Numerical examples are presented throughout.
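The explicit relationship proved here can be checked numerically: for a connected network, the node-link incidence matrix has rank n − 1, so at least m − (n − 1) link flows must be measured before conservation determines the rest. A sketch on a hypothetical four-node, five-link network:

```python
import numpy as np

# Toy directed network (hypothetical): 4 nodes, 5 links.
links = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
n, m = 4, len(links)

# Node-link incidence matrix: +1 where a link leaves a node, -1 where it enters.
N = np.zeros((n, m))
for j, (u, v) in enumerate(links):
    N[u, j], N[v, j] = 1.0, -1.0

rank = np.linalg.matrix_rank(N)   # n - 1 = 3 for a connected network
min_sensors = m - rank            # links that must carry counting sensors
```

With m = 5 links and rank 3, conservation supplies three independent equations, leaving two link flows that must be measured directly.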

Estimating origin-destination trip matrices from link traffic counts has been a subject of substantial research. It is well known that the accuracy of the resulting estimated origin-destination (O-D) matrix largely depends on the employed estimation approach itself, errors of the input data, and an appropriate set of links from which flow information should be collected. Previous studies have overwhelmingly focused on the development of various estimation models, while paying very limited attention to the traffic counting location and error bound issues. Recognizing their interdependence, this study makes a joint investigation of the traffic counting location, estimation method, and error bound in an integrated manner, while taking into account the effects of various route choice assumptions made in the traffic assignment models and the levels of traffic congestion on the network. A few useful properties of the counting location rules and error bound measures for the O-D matrix estimation problem are demonstrated theoretically and numerically.

Origin-destination (O-D) trip matrix estimation from traffic count surveys is regarded as the most economical and effective methodology in road network analysis for transport planning and traffic management. Despite the numerous mathematical estimation techniques previously developed, the fundamental procedure of selecting count locations itself is a prime determinant in the quality of the ultimate estimation and deserves more in-depth exploration. In this paper, some existing methods being adopted in practice are reviewed. Two basic rules are established based on previous works and are formulated in a linear programming model to determine the best survey locations for O-D estimation. However, technical problems will be incurred when applied to a large network with huge number of variables involved. The proposed maximal O-D selection method is proved to provide results with a comparable level of reliability, and a sensitivity test is conducted with different objective functions to verify this proposed strategic algorithm. This paper examines the efficiency of additional link counts for O-D estimation and recommends an efficient data collection method. The models and algorithms are illustrated with numerical examples.

The focus of this paper is on a certain class of equilibrium traffic assignment problems characterized by a path formulation of the associated mathematical programs. In such cases the equilibration iterations would require path enumeration, and are therefore prohibitively expensive. In this paper we prove that a predetermined sequence of step sizes (in a descent direction) would guarantee, under certain regularity conditions, convergence to the equilibrium solution. This algorithm was suggested in the literature without a proof of convergence, which we give here.
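The predetermined step-size scheme is easiest to see on a tiny instance. Below, a hypothetical two-link toy with logit loading is equilibrated using the classic 1/k step sequence; the linear cost functions and dispersion parameter are illustrative assumptions, not from the paper.

```python
import math

# Toy instance (hypothetical): demand d split over two parallel links.
d, theta = 10.0, 1.0

def logit_load(v1):
    # Link costs linear in flow; logit gives link 1's share of demand.
    c1 = 1.0 + 0.10 * v1
    c2 = 2.0 + 0.05 * (d - v1)
    return d / (1.0 + math.exp(theta * (c1 - c2)))

v1 = d / 2.0
for k in range(1, 2001):
    y = logit_load(v1)            # auxiliary flow at current costs
    v1 += (y - v1) / k            # predetermined step size 1/k
gap = abs(logit_load(v1) - v1)    # fixed-point residual at equilibrium
```

The residual shrinks toward zero without any line search, which is the convergence behaviour the paper establishes under its regularity conditions.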

The widely used BPR volume-delay functions have some inherent drawbacks. A set of conditions is developed which a “well behaved” volume delay function should satisfy. This leads to the definition of a new class of functions named conical volume-delay functions, due to their geometrical interpretation as hyperbolic conical sections. It is shown that these functions satisfy all conditions set forth and, thus, constitute a viable alternative to the BPR type functions.
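A hedged reconstruction of the conical form (following Spiess's parameterization, with beta fixed so that travel time equals free-flow time t0 at zero flow and 2·t0 at capacity, for alpha > 1):

```python
import math

def conical_delay(x, t0=1.0, alpha=4.0):
    # x = volume/capacity ratio. Beta is chosen so that
    # conical_delay(0) = t0 and conical_delay(1) = 2 * t0.
    beta = (2 * alpha - 1) / (2 * alpha - 2)
    return t0 * (2 + math.sqrt(alpha**2 * (1 - x)**2 + beta**2)
                 - alpha * (1 - x) - beta)
```

Unlike high-power BPR curves, this function has a bounded, strictly positive derivative everywhere, which is one of the "well behaved" conditions the paper sets forth.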

Several route choice models are reviewed in the context of the stochastic user equilibrium problem. The traffic assignment problem has been extensively studied in the literature. Several models were developed focusing mainly on the solution of the link flow pattern for congested urban areas. The behavioural assumption governing route choice, which is the essential part of any traffic assignment model, has received comparatively little attention. The core of any traffic assignment method is the route choice model. In the well-known deterministic case, a simple choice model is assumed in which drivers choose their best route. The assumption of perfect knowledge of travel costs has long been considered inadequate to explain travel behaviour. Consequently, probabilistic route choice models were developed in which drivers were assumed to minimize their perceived costs given a set of routes. The objective of the paper is to review the different route choice models used to solve the traffic assignment problem. Focus is on the different model structures. The paper connects some of the route choice models proposed long ago, such as the logit and probit models, with recently developed models. It discusses several extensions to the simple logit model, as well as the choice set generation problem and the incorporation of the models in the assignment problem.
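The simplest of the reviewed models, the multinomial logit, assigns each route a probability proportional to exp(-theta * cost). A minimal sketch (route costs and dispersion parameter theta are illustrative):

```python
import math

def logit_probabilities(costs, theta=0.5):
    # Multinomial logit route choice: P(r) proportional to exp(-theta * cost_r).
    weights = [math.exp(-theta * c) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]

# Three routes with hypothetical perceived costs.
probs = logit_probabilities([10.0, 12.0, 15.0])
```

Cheaper routes get larger shares, but no route's probability is zero, which is the key behavioural difference from the deterministic best-route model.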

This paper contains a quantitative evaluation of probabilistic traffic assignment models and proposes an alternate formulation. The paper also discusses the weaknesses of existing stochastic-network-loading techniques (with special attention paid to Dial's multipath method) and compares them to the suggested approach. The discussion is supported by several numerical examples on small contrived networks. The paper concludes with the discussion of two techniques that can be used to approximate the link flows resulting from the proposed model in large networks.

We present a method for estimating the KL divergence between continuous densities and we prove it converges almost surely. Divergence estimation is typically solved estimating the densities first. Our main result shows this intermediate step is unnecessary and that the divergence can be either estimated using the empirical cdf or k-nearest-neighbour density estimation, which does not converge to the true measure for finite k. The convergence proof is based on describing the statistics of our estimator using waiting-times distributions, as the exponential or Erlang. We illustrate the proposed estimators and show how they compare to existing methods based on density estimation, and we also outline how our divergence estimators can be used for solving the two-sample problem.
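The nearest-neighbour route the paper describes can be sketched for one-dimensional samples with k = 1. This is a minimal sketch in the spirit of the estimator, not the paper's full construction: it compares each sample's nearest-neighbour distance within its own sample to its nearest-neighbour distance in the other sample.

```python
import numpy as np

def knn_kl_divergence(x, y):
    # 1-nearest-neighbour estimate of D(P||Q) from 1-D samples:
    #   D ~= (1/n) * sum_i log(nu_i / rho_i) + log(m / (n - 1))
    # rho_i: distance from x_i to its nearest other sample of P,
    # nu_i:  distance from x_i to its nearest sample of Q.
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, m = len(x), len(y)
    dxx = np.abs(x[:, None] - x[None, :])
    np.fill_diagonal(dxx, np.inf)          # exclude the point itself
    rho = dxx.min(axis=1)
    nu = np.abs(x[:, None] - y[None, :]).min(axis=1)
    return float(np.mean(np.log(nu / rho)) + np.log(m / (n - 1)))

rng = np.random.default_rng(1)
p = rng.normal(0.0, 1.0, 1000)    # samples from P = N(0, 1)
q = rng.normal(3.0, 1.0, 1000)    # samples from Q = N(3, 1); true KL = 4.5
est = knn_kl_divergence(p, q)
```

Note that no density is ever fitted: the estimate is read directly off nearest-neighbour distances, which is the paper's central point.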

The increasing need for mobility has brought about significant changes in transportation infrastructures. Inefficiencies cause enormous losses of time, decrease in the level of safety for both vehicles and pedestrians, high pollution, degradation of quality of life, and huge waste of nonrenewable fossil energy.The scope of this article is to introduce novel functionality for providing knowledge to vehicles, thus jointly managing traffic and safety. This will be achieved through the design of the proposed functionality, which, at a high level, will comprise (1) sensor networks formed by vehicles of a certain vicinity that exchange traffic-related information, (2) cognitive management functionality placed inside the vehicles for inferring knowledge and experience, and (3) cognitive management functionality in the overall transportation infrastructure. The goal of the aforementioned three main components shall be to issue directives to the drivers and the overall transportation infrastructure valuable in context handling.

Information on link flows in a vehicular traffic network is critical for developing long-term planning and/or short-term operational management strategies. In the literature, most studies to develop such strategies typically assume the availability of measured link traffic information on all network links, either through manual survey or advanced traffic sensor technologies. In practical applications, the assumption of installed sensors on all links is generally unrealistic due to budgetary constraints. It motivates the need to estimate flows on all links of a traffic network based on the measurement of link flows on a subset of links with suitably equipped sensors. This study, addressed from a budgetary planning perspective, seeks to identify the smallest subset of links in a network on which to locate sensors that enables the accurate estimation of traffic flows on all links of the network under steady-state conditions. Here, steady-state implies that the path flows are static. A “basis link” method is proposed to determine the locations of vehicle sensors, by using the link-path incidence matrix to express the network structure and then identifying its “basis” in a matrix algebra context. The theoretical background and mathematical properties of the proposed method are elaborated. The approach is useful for deploying long-term planning and link-based applications in traffic networks.
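The basis identification step can be sketched with a greedy rank test over the rows of the link-path incidence matrix: rows kept are the "basis links" where sensors go, and every other link's flow is a fixed linear combination of theirs. The matrix below is a hypothetical five-link, three-path example, not from the paper.

```python
import numpy as np

def basis_links(A, tol=1e-9):
    # Greedy scan keeping each row that increases the matrix rank.
    kept, idx = [], []
    for i, row in enumerate(A):
        if np.linalg.matrix_rank(np.array(kept + [row]), tol=tol) > len(kept):
            kept.append(row)
            idx.append(i)
    return idx

# Hypothetical link-path incidence matrix: rows = links, columns = paths.
A = np.array([[1, 0, 1],
              [0, 1, 0],
              [1, 1, 0],
              [0, 0, 1],
              [0, 0, 1]], dtype=float)
basis = basis_links(A)
```

Here the rank is 3, so three sensors suffice for the five links: rows 3 and 4 are linear combinations of the first three (e.g., row 3 = row 0 + row 1 − row 2), so those link flows are inferred rather than measured.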

This paper describes the validation of a route choice simulator known as VLADIMIR (Variable Legend Assessment Device for Interactive Measurement of Individual Route choice). VLADIMIR is an interactive computer-based tool designed to study drivers’ route choice behaviour. It has been extensively used to obtain data on route choice in the presence of information sources such as Variable Message Signs or In-Car Navigation devices. The simulator uses a sequence of digitized photographs to portray a real network with junctions, links, landmarks and road signs. Subject drivers are invited to make journeys between specified origins and destinations under a range of travel scenarios, during which the simulator automatically records their route choices. This paper describes validation experiments carried out during the period Summer 1994 to Autumn 1995 and reports on the results obtained. Each experiment involved a comparison of routes selected in real life with those driven under simulated conditions in VLADIMIR. The analysis included investigation of the subjects’ own assessment of the realism of the VLADIMIR routes they had chosen, a comparison of models based on the real life routes with models based on VLADIMIR routes, and a statistical comparison of the two sets of routes. After an extensive series of data collection exercises and analyses, we have concluded that a well designed simulator is able to replicate real life route choices with a very high degree of detail and accuracy. Not only was VLADIMIR able to precisely replicate the route choices of drivers who were familiar with the network but it also appears capable of representing the kind of errors made and route choice strategies adopted by less familiar drivers. Furthermore, evidence is presented to suggest that it can accurately replicate route choice responses to roadside VMS information.

There has been substantial interest in development and application of methodology for estimating origin–destination (O–D) trip matrices from traffic counts. Generally, the quality of an estimated O–D matrix depends much on the reliability of the input data, and the number and locations of traffic counting points in the road network. The former has been investigated extensively, while the latter has received very limited attention. This paper addresses the problem of how to determine the optimal number and locations of traffic counting points in a road network for a given prior O–D distribution pattern. Four location rules: O–D covering rule, maximal flow fraction rule, maximal flow-intercepting rule and link independence rule are proposed, and integer linear programming models and heuristic algorithms are developed to determine the counting links satisfying these rules. The models and algorithms are illustrated with numerical examples.
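The O-D covering rule — every O-D pair's flow should be intercepted by at least one counted link — is a set-covering condition, and a greedy heuristic gives a quick feasible solution. The O-D pairs and their traversed links below are hypothetical prior information, not from the paper's examples.

```python
# Hypothetical prior: the set of links each O-D pair's trips traverse.
od_links = {
    'AB': {1, 2},
    'AC': {1, 3},
    'BC': {3, 4},
    'BD': {4, 5},
}
candidates = sorted({l for s in od_links.values() for l in s})

counted, uncovered = [], set(od_links)
while uncovered:
    # Greedy choice: the link intercepting the most still-uncovered pairs.
    best = max(candidates,
               key=lambda l: sum(1 for od in uncovered if l in od_links[od]))
    counted.append(best)
    uncovered -= {od for od in uncovered if best in od_links[od]}
```

Two counting stations cover all four O-D pairs here; the paper's integer linear programs additionally enforce the maximal flow fraction, flow-intercepting, and link independence rules.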

The paper proposes an efficient algorithm for determining the stochastic user equilibrium solution for logit-based loading. The commonly used Method of Successive Averages typically has a very slow convergence rate. The new algorithm described here uses Williams' result [Williams (1977) On the formation of travel demand models and economic evaluation measures of user benefit. Environment and Planning 9A(3), 285–344], which enables the expected value of the perceived travel costs Srs to be readily calculated for any flow vector x. This enables the value of the Sheffi and Powell (1982) objective function [Sheffi, Y. and Powell, W. B. (1982) An algorithm for the equilibrium assignment problem with random link times. Networks 12(2), 191–207], and its gradient in any specified search direction, to be calculated. It is then shown how, at each iteration, an optimal step length along the search direction can be easily estimated, rather than using the pre-set step lengths, thus giving much faster convergence. The basic algorithm uses the standard search direction (towards the auxiliary solution). In addition the performance of two further versions of the algorithm are investigated, both of which use an optimal step length but alternative search directions, based on the Davidon–Fletcher–Powell function minimisation method. The first is an unconstrained and the second a constrained version. Comparisons are made of all three versions of the algorithm, using a number of test networks ranging from a simple three-link network to one with almost 3000 links. It is found that for all but the smallest network the version using the standard search direction gives the fastest rate of convergence. Extensions to allow for multiple user classes and elastic demand are also possible.
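The payoff from optimizing the step length can be demonstrated on a hypothetical two-link logit loading toy. The coarse grid search below is only a stand-in for the paper's analytical optimal step along the search direction, but even so it closes the equilibrium gap far faster than the 1/k steps of the Method of Successive Averages.

```python
import math

# Hypothetical two-link toy: demand d split by a logit model.
d, theta = 10.0, 1.0

def load(v1):
    c1 = 1.0 + 0.10 * v1
    c2 = 2.0 + 0.05 * (d - v1)
    return d / (1.0 + math.exp(theta * (c1 - c2)))

def solve(step_rule, iters=50):
    v1 = d / 2.0
    for k in range(1, iters + 1):
        y = load(v1)                               # auxiliary solution
        v1 += step_rule(v1, y, k) * (y - v1)       # move along search direction
    return abs(load(v1) - v1)                      # remaining fixed-point gap

msa_gap = solve(lambda v, y, k: 1.0 / k)           # Method of Successive Averages

def searched_step(v, y, k):
    # Coarse grid stand-in for the paper's optimal step length.
    grid = [i / 20.0 for i in range(1, 21)]
    return min(grid,
               key=lambda a: abs(load(v + a * (y - v)) - (v + a * (y - v))))

opt_gap = solve(searched_step)
```

After the same number of iterations, the searched-step variant reaches an essentially exact fixed point while MSA still carries a visible residual, mirroring the convergence comparison reported in the paper.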