Journal of Transportation Engineering

Published by American Society of Civil Engineers

Online ISSN: 1943-5436 · Print ISSN: 0733-947X

Articles


Event-Based Modeling of Driver Yielding Behavior at Unsignalized Crosswalks

July 2011 · 524 Reads

This research explores factors associated with driver yielding behavior at unsignalized pedestrian crossings and develops predictive models for yielding using logistic regression. It considers the effect of variables describing driver attributes, pedestrian characteristics, and concurrent conditions at the crosswalk on the yield response. Special consideration is given to 'vehicle dynamics constraints' that form a threshold for the potential to yield. Similarities are identified to driver reaction in response to the 'amber' indication at a signalized intersection. The logit models were developed from data collected at two unsignalized mid-block crosswalks in North Carolina. The data include 'before' and 'after' observations of two pedestrian safety treatments, an in-street pedestrian crossing sign and pedestrian-actuated in-roadway warning lights. The analysis suggests that drivers are more likely to yield to assertive pedestrians who walk briskly in their approach to the crosswalk. In turn, the yield probability is reduced at higher speeds and deceleration rates and when vehicles are traveling in platoons. The treatment effects proved to be significant and increased the propensity of drivers to yield, but their effectiveness may depend on whether the pedestrian activates the treatment. The results of this research provide new insights on the complex interaction of pedestrians and vehicles at unsignalized intersections and have implications for future work towards predictive models for driver yielding behavior. The developed logit models can provide the basis for representing driver yielding behavior in a microsimulation modeling environment.
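
The logit form described above lends itself to a compact illustration. The following sketch fits a binary logit model of the yield/no-yield response on synthetic data with hypothetical predictor names (approach speed, pedestrian walking speed, platooning, treatment activation); it mirrors the general model form only, not the paper's actual specification or coefficients.

```python
# Minimal sketch of a binary logit model for driver yielding; predictor names,
# data, and coefficients are hypothetical, NOT the paper's estimates.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "approach_speed_kph": rng.normal(45, 8, n),      # vehicle speed on approach
    "ped_walk_speed_mps": rng.normal(1.3, 0.2, n),   # proxy for pedestrian assertiveness
    "in_platoon": rng.integers(0, 2, n),             # 1 if the vehicle travels in a platoon
    "treatment_active": rng.integers(0, 2, n),       # 1 if the warning device is activated
})
# Synthetic yield outcome, loosely mimicking the signs of effects reported above
linear_index = (-0.06 * df["approach_speed_kph"] + 1.5 * df["ped_walk_speed_mps"]
                - 0.8 * df["in_platoon"] + 1.0 * df["treatment_active"] + 1.0)
df["yield"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-linear_index)))

X = sm.add_constant(df[["approach_speed_kph", "ped_walk_speed_mps",
                        "in_platoon", "treatment_active"]])
model = sm.Logit(df["yield"], X).fit(disp=False)
print(model.summary())
# Predicted yield probability for one driver/pedestrian encounter
print(model.predict(X.iloc[[0]]))
```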

Relationships Among Urban Freeway Accidents, Traffic Flow, Weather and Lighting Conditions

January 2001 · 710 Reads

Linear and nonlinear multivariate statistical analyses are applied to determine how the types of accidents that occur on heavily used freeways in Southern California are related to traffic flow as well as weather and ambient lighting conditions. Traffic flow is measured in terms of time series of 30-second observations from inductive loop detectors in the vicinity of the accident prior to the time of its occurrence. Results indicate that the type of collision is strongly related to median traffic speed and to temporal variations in speed in the left and interior lanes. Hit-object collisions and collisions involving multiple vehicles that are associated with lane-change maneuvers are more likely to occur on wet roads, while rear-end collisions are more likely to occur on dry roads during daylight. Controlling for weather and lighting conditions, there is evidence that accident severity is influenced more by volume than by speed.

Estimation of Highway Maintenance Marginal Cost under Multiple Maintenance Activities

October 2010 · 436 Reads

This paper focuses on the estimation of highway maintenance marginal costs. Highway maintenance marginal cost has been estimated in the literature using the perpetual overlay indirect approach. This approach assumes that pavement overlay costs dominate maintenance costs and ignores other maintenance activities. This paper focuses on two questions. First, is it acceptable to ignore the less costly activities? Second, if multiple maintenance activities are to be considered, is it acceptable to ignore their interdependence? The results show that less costly maintenance activities cannot be ignored. Furthermore, if multiple activities are to be considered, their interdependence should be taken into account.

Table: Mean and Standard Deviation of Key Variables (time in area, in minutes, by component and geographical area: suburban inside the Beltway, suburban outside the Beltway, rural, total)
Speed and Delay on Signalized Arterials

February 1998 · 243 Reads

This research presents a model to predict the influence of demand and capacity on the running speed of signalized arterials in Montgomery County, Maryland. The model separates the changes to link running speed due to same-direction traffic from the intersection approach delay caused by cross traffic. It is found that flow has a small impact on link speed: each 1,000 vehicles per lane per hour reduces speed by 4-8 km/h. Longer links have higher speeds, indicating that they more closely approximate free-flow conditions. A surprising result comes from measuring the effect of an additional lane on link speed, after controlling for flow per lane: there are slight diseconomies of additional lanes in terms of speed, with each additional lane associated with somewhat slower speeds. Measures of intersection and link travel times are also compared. Although link running times exceed intersection stopped delay, total intersection delay (stopped and approach) exceeds the delay caused by same-direction traffic. This information can inform investment decisions about roadway and intersection improvements.
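
As a rough worked example of the kind of linear link-speed relationship quoted above, the sketch below uses hypothetical coefficients chosen to sit inside the 4-8 km/h-per-1,000-vehicles range reported; it is illustrative only, not the estimated Montgomery County model.

```python
def link_running_speed_kph(flow_vphpl, link_length_km, lanes,
                           base_speed=60.0, flow_coeff=-6.0,
                           length_coeff=3.0, lane_coeff=-1.0):
    """Illustrative linear link-speed relationship with hypothetical coefficients.

    flow_coeff = -6.0 means each 1,000 veh/lane/h reduces speed by about 6 km/h,
    inside the 4-8 km/h range quoted in the abstract; the remaining coefficients
    are placeholders, not the paper's estimates.
    """
    return (base_speed
            + flow_coeff * (flow_vphpl / 1000.0)
            + length_coeff * link_length_km
            + lane_coeff * lanes)

# Example: 800 veh/lane/h on a 1.2 km, 3-lane arterial link
print(round(link_running_speed_kph(800, 1.2, 3), 1), "km/h")
```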

Fig. 2. Nonlinear weighting function
Fig. 3. Simulation results: absolute and weighted total travel time
Balancing Efficiency and Equity of Ramp Meters

February 2005 · 129 Reads

A new freeway ramp control objective, minimizing total weighted travel time, is presented in this study. This new objective function is capable of balancing the efficiency and equity of ramp meters, while the previous metering objective, minimizing total absolute travel time, is purely efficiency-oriented and hence produces the most efficient but least equitable solution. When certain assumptions hold, this metering objective is shown to be equivalent to minimizing nonlinearly weighted ramp delay. A simulation method to achieve the new metering objective is developed and demonstrated, using the example of BEEX, a new ramp control strategy also developed in this study, in a microscopic traffic simulator.
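
The idea of a nonlinearly weighted delay objective can be illustrated with a small sketch. The convex weighting function below (a simple power of individual ramp delay) is an assumption for illustration; the paper's exact weighting function may differ, but the example shows why a weighted objective penalizes inequitable metering plans that the absolute-travel-time objective treats as equivalent.

```python
import numpy as np

def weighted_ramp_delay(delays_s, exponent=2.0):
    """Nonlinearly weighted ramp delay (illustrative form).

    With exponent > 1, long waits concentrated at a single ramp are penalized
    more than the same total delay spread over many vehicles, pushing the
    optimum toward a more equitable metering plan.
    """
    delays = np.asarray(delays_s, dtype=float)
    return float(np.sum(delays ** exponent))

# Two metering plans with the same total (absolute) delay of 600 s:
equitable   = [100, 100, 100, 100, 100, 100]   # delay shared evenly across ramps
inequitable = [550, 10, 10, 10, 10, 10]        # one ramp bears almost all delay

for name, plan in [("equitable", equitable), ("inequitable", inequitable)]:
    print(name, "total:", sum(plan), "weighted:", weighted_ramp_delay(plan))
```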

Fig. 1. Study locations
Fig. 2. Typical freeway section with upstream and downstream detectors
Fig. 5. Queuing input-output diagram: Relationship with phases
A Queuing and Statistical Analysis of Freeway Bottleneck Formation

February 2004 · 267 Reads

A modified approach to treating the traffic flow parameters (flow, density, and speed) is introduced in this paper. A queuing analysis was conducted on traffic flow data from Interstate 94 in the Minneapolis-St. Paul metro area, and a methodology was developed to calibrate loop detector count data. The corrected flow data were then analyzed with queuing methods to compute densities and speeds on freeway sections. Statistical analysis identifies 'active bottleneck' locations on freeways and sections where bottlenecks occur because of disturbances caused by downstream bottlenecks propagating backwards in the form of shockwaves. A sample of six days on Interstate 94 was considered for the analysis. Our analysis reveals that the same section cannot always be characterized as a 'bottleneck' location; at some times it is active and at others it is subject to downstream bottlenecks. Traffic flow characteristics change, and that leads to changing situations on each freeway section.

Figure 1. Illustration of a main road which is crossed by a smaller road and the effect on the noise level contours when the traffic is increased on the main road.
Table 1 Marginal change in noise level as a function of distance
Figure 7. Marginal cost expressed as €/km through the research area for a class 3c vehicle as a function of the total traffic volume.
Figure 8. Map outline over the municipalities used for extending the marginal cost estimation in Lerum to a larger region.
Noise Charges in Road Traffic: A Pricing Schedule Based on the Marginal Cost Principle

January 2008 · 201 Reads

One way of mitigating the negative effects of noise from road traffic is to include the external cost of noise in a road charging system. This study shows how standardized calculation methods for road traffic noise can be used together with monetary estimates of the social cost of noise exposure to calculate charges based on the social marginal cost. Using Swedish data on traffic volume and individuals exposed to road noise, together with official Swedish monetary values for noise exposure, we estimate road-noise charges for light (cars) and heavy (trucks) vehicles.

TABLE 1. Linear Regression Results
TABLE 2. Single Loop Speed Computation Results Using AM Downstream Data
TABLE 4. Single Loop Speed Computation Results Using Midday Upstream Data
Individual Vehicle Speed Estimation Using Single Loop Inductive Waveforms

November 1999 · 469 Reads

Travel time is the reciprocal of speed over a unit distance and is a useful measure of road congestion and traffic system performance. Travel time is also a basic traffic variable used in many Intelligent Transportation System (ITS) strategies such as route guidance, incident detection, and traveler information systems. Previously, speeds were mainly acquired from double inductive loops configured as speed traps, since single loop speed estimates based on the assumption of a constant vehicle length were inaccurate. However, more accurate measurements of speed can now be obtained from single loops by utilizing the inductive waveforms of vehicles that are output by newer detector cards. An algorithm using signal processing and statistical methods was developed to extract speeds from inductive waveforms. The results show that the proposed algorithm performs better than conventional single loop estimation methods. The results also show that the algorithm is robust under different traffic conditions and is transferable across surveillance sites without the need for recalibration. The use of the extensive single loop surveillance infrastructure is a cost-effective way of obtaining more accurate network-wide travel time information. Key words: speed estimation, inductive waveform, vehicle signature, single loop detector.
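
For context, the conventional single-loop estimate that the waveform-based algorithm improves on can be written down directly: speed is inferred from flow and occupancy under an assumed constant effective vehicle length. A minimal sketch, with the effective length as an explicit assumption:

```python
def single_loop_speed_kmh(vehicle_count, occupancy_fraction,
                          interval_s=30.0, effective_length_m=6.5):
    """Conventional single-loop speed estimate (the baseline the paper improves on).

    Assumes a constant effective vehicle length (vehicle plus detection zone);
    the abstract's point is that this assumption makes the estimate inaccurate,
    which waveform-based methods avoid. effective_length_m is an assumption.
    """
    if occupancy_fraction <= 0:
        return float("nan")
    flow_veh_per_s = vehicle_count / interval_s
    speed_m_per_s = flow_veh_per_s * effective_length_m / occupancy_fraction
    return speed_m_per_s * 3.6

# Example: 12 vehicles in a 30 s interval with 10% occupancy
print(round(single_loop_speed_kmh(12, 0.10), 1), "km/h")
```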

Turning Movement Estimation in Real Time

January 1995 · 151 Reads

Fast processors offer exciting opportunities for real-time traffic monitoring. Conventional transportation planning models that assume stable and predictable travel patterns do not lend themselves to on-line traffic forecasting. This paper describes how a new traffic flow inference model has the potential to determine comprehensive flow information in real time. Its philosophical basis is borrowed from the field of operational research, where it has been used for optimizing water and electricity flows. This paper shows how road traffic turning movement flows can be estimated from detected link flows at short recurrent intervals, in real time. The paper details the formulation of the problem and outlines the structure of the data set that provides the detector data for the model input and the observed turning flows for the model evaluation. The theoretical principles that define the model are described briefly. Turning movement flow estimates, at 5-min intervals, from two independent surveys are presented and analyzed. The results show an overall mean coefficient of determination (r²) of 79-82% between observed and modeled turning movement flows.

A Tool to Evaluate the Safety Effects of Changes in Freeway Traffic Flow

January 2003 · 179 Reads

This research involves the development of a tool that can be used to assess the changes in traffic safety tendencies that result from changes in traffic flow. The tool uses data from single inductive loop detectors, converting 30-second observations of volume and occupancy for multiple freeway lanes into traffic flow regimes. Each regime has a specific pattern of crash types, which were determined through nonlinear multivariate analyses of over 1,000 crashes on freeways in Southern California. These analyses revealed ways in which differences in variances in speeds and volumes across lanes, as well as central tendencies of speeds and volumes, combine in complex ways to explain crash taxonomy. This research may provide the foundation to forecast the crash rates, in terms of vehicle miles of travel, for vehicles that are exposed to different traffic flow conditions.

Optimization Models for Transportation Project Programming Process

January 1995 · 251 Reads

Five optimization models are constructed for selecting an optimal subset of projects submitted for a statewide programming process. Our approach develops models that are consistent with user needs and appropriate for the assumptions used in the project prioritization process. Each of the models builds on a basic linear-programming formulation in which a maximization of benefits and minimization of costs is desired. The five models include the following: a priority index that provides a ranking of projects but does not directly facilitate trade-offs between project costs and the ranks (model 1); a model that incorporates a formal approach to making trade-offs between rank and cost (model 2); a model that explicitly includes policy objectives by setting a fixed goal for each objective (model 3); a model that includes a strict budget constraint in addition to requiring that funded projects equal or exceed a fixed goal for each policy objective (model 4); and finally, a model that combines the relative rankings and budgetary constraint (model 5). Models 2-5 are developed in both a continuous and integer variable format, thus generating nine optimization approaches. Models 4 and 5 also introduce a method for determining the improvement in the overall transportation-system performance, given the current budget and decision-maker objectives.
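
The basic formulation, maximizing benefit subject to a budget, can be illustrated with a toy selection problem. The sketch below enumerates subsets of a handful of hypothetical projects; it is a stand-in for the paper's linear- and integer-programming models, not their actual data or objective weights.

```python
from itertools import combinations

# Illustrative project-selection sketch in the spirit of the basic formulation
# (maximize benefit subject to a budget); the project data are hypothetical.
projects = {
    "A": {"cost": 4.0, "benefit": 9.0},
    "B": {"cost": 3.0, "benefit": 6.0},
    "C": {"cost": 5.0, "benefit": 8.0},
    "D": {"cost": 2.0, "benefit": 5.0},
}
budget = 8.0

best_set, best_benefit = (), 0.0
names = list(projects)
for r in range(len(names) + 1):
    for subset in combinations(names, r):
        cost = sum(projects[p]["cost"] for p in subset)
        benefit = sum(projects[p]["benefit"] for p in subset)
        if cost <= budget and benefit > best_benefit:
            best_set, best_benefit = subset, benefit

print("selected:", best_set, "benefit:", best_benefit)   # -> ('A', 'B'), 15.0
```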

Distributed Air‐Traffic Control. II: Explorations in Test Bed

June 1997 · 38 Reads

This is the second of a two-part paper dealing with distributed planning for Air Traffic Control. Three different organizational structures have been implemented: the Local, Centralized Architecture, and the Location Centered, Cooperative Planning System with one- and two-level Coordinator-Coworker Hierarchies. We present an initial, simplified analysis of the speedup obtainable by using the latter two organizational structures. The vehicle of our empirical studies, the Distributed Air Traffic Control Test Bed, is then introduced. We discuss the design and the results of a series of experiments performed. We compare, in the test bed, performance measures of the three systems using the respective organizational structures. The comparisons are made at different levels of traffic density and problem size, in terms of communication overhead and processing time needed for planning. Keywords: Distributed Planning and Problem Solving; Air Traffic Control; System Recovery with Graceful Degradation.

Frontage Roads: An Assessment Of Legal Issues, Design Decisions, Costs, Operations, And Land-Development Differences

December 2001 · 108 Reads

A policy of building frontage roads alongside freeway mainlanes avoids the purchase of access rights when upgrading existing highways to freeway standards, and generally supplements local street networks. It also may affect corridor operations, land values, and development patterns. This paper seeks to provide a comprehensive evaluation of frontage road design policies by summarizing research results related to legal statutes affecting public access to roadways, discussing access policies and practices across the states, comparing land development and operations of corridors with and without frontage roads, summarizing studies on access-right valuation, and evaluating construction cost distinctions. A literature review concluded that a wide variety of options are available to agencies for limiting access to and improving flow and safety along freeway corridors. Statistical analyses of paired corridors suggested that land near frontage roads is associated with lower household incomes, lower population densities, lower percentages of bike trips to work, lower vehicle occupancies for work trips, and higher unemployment rates than those without frontage roads. Lower employment densities along freeway corridors also emerged when frontage roads were present. Operational simulations of various freeway systems demonstrated that frontage roads may improve the operation of freeway mainlanes in heavily developed areas, but not in moderately developed areas (e.g., purely residential). Arterial systems in these simulations were supplemented by frontage roads and thus also performed better in their presence. The financial costs associated with frontage road facilities were found to be considerably higher than those associated with non-frontage road facilities, except in cases of extr...

Figure 3. Capacity Reduction due to various LDT categories in the Right-Turning Traffic
Effect Of Vehicle Type On The Capacity Of Signalized Intersections: The Case of Light-Duty Trucks

March 2000 · 273 Reads

This work analyzes the impacts of different light-duty trucks (LDTs) on the capacity of signalized intersections. Data were collected at two intersections in Austin, Texas, and regression analysis generated estimates of mean headways associated with various categories of LDTs, as well as passenger cars. Using the estimated headways, passenger car equivalents (PCEs) were calculated, and these suggest that the impacts of light-duty trucks should be given special consideration when analyzing the capacity of signalized intersections. For example, a single large sport-utility vehicle in through traffic is equivalent to 1.41 passenger cars, and a van is equivalent to 1.34. Such long headways reduce intersection capacity and increase urban congestion. Keywords: headways, passenger car equivalent (PCE), light-duty truck, capacity, signalized intersection.
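
A PCE obtained from saturation headways feeds directly into a capacity adjustment. The sketch below applies an HCM-style heavy-vehicle adjustment factor to the SUV PCE quoted in the abstract; treating LDTs through this factor, and the base saturation flow value used, are illustrative assumptions rather than the paper's procedure.

```python
def heavy_vehicle_factor(share, pce):
    """HCM-style adjustment factor: f = 1 / (1 + P * (PCE - 1)),
    where P is the proportion of the heavier vehicle type in the traffic stream."""
    return 1.0 / (1.0 + share * (pce - 1.0))

base_saturation_flow = 1900.0            # veh/h/lane under ideal conditions (assumed base value)
for share in (0.0, 0.2, 0.4):
    f = heavy_vehicle_factor(share, pce=1.41)   # large-SUV PCE quoted in the abstract
    print(f"{share:.0%} large SUVs -> saturation flow ~ {base_saturation_flow * f:.0f} veh/h/lane")
```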

A Distributed Approach to Optimized Control of Street Traffic Signals

June 1997 · 186 Reads

The paper describes our long-term activity aimed at the control of traffic signals by a network of distributed processors situated at street intersections. Every processor runs an identical expert system and communicates directly with the four adjacent processors. (However, each expert system may need a somewhat different knowledge base to correspond to the geometry and the average traffic pattern of the associated intersection.) Messages can also reach arbitrarily distant processors, modulated by the needs of intervening ones. The information transmitted can be raw data, processed information, or expert advice. The rule base of the expert systems has a natural segmentation, corresponding to different prevailing traffic patterns and the respective control strategies. Multidimensional learning programs optimize both the hierarchy of the rules and the parameters embedded in individual rules. Different measures of effectiveness can be selected as the criterion for optimization. Traffic ...

3D Calculation of Stopping-Sight Distance from GPS Data

September 2006 · 293 Reads

Sight distance is a key element in highway geometric design. Existing models for evaluating sight distance are applicable only to two-dimensional (2D), separate horizontal and vertical alignments, or to simple elements of these separate alignments (vertical curve, horizontal curve). A new model using global positioning system (GPS) data is presented for determining the available sight distance on 3D combined horizontal and vertical alignments. Piecewise parametric equations in the form of cubic B-splines are used to represent the highway surface and sight obstructions, including tangents (grades), horizontal curves, and vertical curves. The available sight distance is found analytically by examining the intersection between the sight line and the elements representing the highway surface and sight obstructions. A profile of available sight distance can be established and used to evaluate sight-distance deficiency. Application of the new model is illustrated using actual GPS data for highway K-177 in Kansas (United States). The model has been tested and verified on most of the highways in Kansas. Software has been developed and can be used for determining the available sight distance on any highway for which GPS data are available.

Impacts of Shorter Perception-Reaction Time of Adapted Cruise Controlled Vehicles on Traffic Flow and Safety

March 2003 · 101 Reads

Auto manufacturers have begun to market an adaptive cruise control system (ACCS) as an option that promotes driver safety. This paper examines how the presence of vehicles equipped with ACCS affects the stability and safety of a flow consisting of both ACCS and non-ACCS vehicles. We focus on the effects of the short perception-reaction time of the ACCS vehicles. Given a perturbation to the first vehicle in the platoon, the behavior of each following vehicle is tracked for changes in headway, speed, and location using a simulation model. A fuzzy rule-based car-following model is selected as the test bed for simulation after reviewing other models. Simulation is conducted under many scenarios with respect to the number of ACCS vehicles, the perception-reaction times, and the ACCS vehicles' placement in the platoon. We introduce two measures to evaluate stability and safety: one, the pattern of changes in the minimum spacing for each pair of two consecutive vehicles, and two, the total time during which the following vehicles cannot stop safely. It is found that, generally, the shorter perception-reaction time of the ACCS vehicles can shorten the process of achieving stability and can also promote safety for both the ACCS and the non-ACCS vehicles under congested conditions. The findings generate an interesting debate on the benefits that non-ACCS vehicles enjoy as a result of the presence of ACCS vehicles in the traffic flow.

Effect of Hot-Air Lance on Crack Sealant Adhesion

July 1999 · 37 Reads

The hot-air lance (HAL) is widely used in crack-sealing work based on the premise that it improves sealant adhesion. The effectiveness of the HAL in promoting adhesion is uncertain, however. Our goal was to measure the adhesion strength of bituminous crack sealant to dry asphalt concrete (AC) and assess the effect of the HAL on adhesion. To this end, we monitored the use of the HAL in the field and reproduced its effect on asphalt binders and AC pavements by using an automated HAL in a series of laboratory experiments. In those experiments, we looked at the effect of heat treatment on binder oxidation and embrittlement by means of infrared spectroscopy, thin-layer chromatography, and goniometry. We also compared the adhesion strength of three sealants applied to unheated, heated, and overheated AC substrates prepared with quartz or limestone aggregates. The results show that sealant adhesion and failure mechanisms are governed by the sealant source, the type of aggregate in the AC mix, and the heat treatment on the rout prior to pouring the sealant. The HAL does not oxidize the binder, but it may cause embrittlement by raising the asphaltenes content of the binder. Normal heat treatment has little effect on sealant adhesion to dry AC, but overheating can cause a 50% reduction in adhesion strength and lead to premature sealant failure. To retain the possible benefits of the HAL in sealing damp cracks and to prevent overheating, the HAL should be operated at reduced temperatures.

TABLE 1. Grading and Test Requirements for RCA
TABLE 2. Basic Properties of RCA Aggregates
TABLE 3. Comparison of Performance Using K-Theta Model
Resilient Response of Recycled Concrete Road Aggregates

October 2001 · 697 Reads

This paper presents the results of a recent investigation on the performance of four recycled concrete aggregates (RCAs). The materials, obtained by crushing concrete with compressive strength ranging from 15 MPa to 75 MPa, were reconstituted to satisfy the grading requirements for a subbase material. Triaxial specimens were tested under repeated loading one day after compaction. It appears that the original concrete compressive strength, the amount of softer material in the RCA, and the flakiness index of the RCA can significantly affect the resilient modulus. Degradation is mostly related to the crushing of softer and flaky materials within the aggregate matrix. In this regard, the Ten Percent Fines test is suitable as an evaluation test as it does not impose an excessive force on the RCA. The overall results indicated that RCA may be utilized as a subbase or base course material if it can be produced to consistently meet product quality standards.

Cyclic Triaxial Tests on Clay Subgrades for Analytical Pavement Design

May 2004 · 272 Reads

To introduce a performance specification, pavement foundations must be designed using analytical methods incorporating the laboratory-measured parameters of resilient elastic modulus and resistance to permanent deformation of the subgrade and foundation materials. This paper presents results from a program of repeated load triaxial tests performed on a range of fine-grained subgrades prepared in a number of states to evaluate these parameters for various design conditions. The results highlight several difficulties in measuring small strains on ‘undisturbed’ soils over a large strain range and in predicting and modeling long-term behavior. However, testing at higher strains has shown that the deviator stress at which the cumulative permanent deformation starts to increase significantly, termed the ‘threshold stress’, approximates to 50% of the deviator stress at failure. In addition, the resilient modulus of the soils is shown to approach a low asymptotic value at higher deviator stress. Comparison between elastic and plastic behavior shows that the deviator stress at ‘threshold’ coincides with the stiffness asymptote. Using these correlations, a simplified mechanistic design method for pavement foundations is proposed.

Upgrading Arc Median Shortest Path Problem for an Urban Transportation Network

October 2009 · 59 Reads

In this paper, we propose an algorithm for an upgrading arc median shortest path problem for a transportation network. The problem is to identify a set of nondominated paths that minimizes both the upgrading cost and the overall travel time of the entire network. These two objectives are realistic for transportation network problems, but of a conflicting and noncompensatory nature. In addition, unlike the upgrading cost, which is the sum of the arc costs on the path, the overall travel time of the entire network cannot be expressed as a sum of arc travel times on the path. The proposed solution approach to the problem is based on heuristic labeling and exhaustive search techniques, in criteria space and solution space, respectively. The first approach labels each node in terms of upgrading cost and deletes cyclic and infeasible paths in criteria space. The latter calculates the overall travel time of the entire network for each feasible path, deletes dominated paths on the basis of the objective vector, and identifies a set of Pareto optimal paths in the solution space. The computational study, using two small-scale transportation networks, has demonstrated that the proposed algorithm is able to efficiently identify a set of nondominated median shortest paths based on two conflicting and noncompensatory objectives.
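
The dominance test at the heart of identifying nondominated paths is simple to state. The sketch below filters hypothetical (upgrading cost, network travel time) pairs down to the Pareto set; it illustrates only the dominance check, not the paper's labeling or exhaustive search steps that generate the candidate paths.

```python
def nondominated(solutions):
    """Keep the Pareto set of (upgrading_cost, network_travel_time) pairs,
    minimizing both objectives. A solution is dominated if another is at least
    as good on both objectives and strictly better on one."""
    pareto = []
    for i, (ci, ti) in enumerate(solutions):
        dominated = any(
            (cj <= ci and tj <= ti) and (cj < ci or tj < ti)
            for j, (cj, tj) in enumerate(solutions) if j != i
        )
        if not dominated:
            pareto.append((ci, ti))
    return sorted(set(pareto))

# Hypothetical candidate paths: (upgrading cost, overall network travel time)
candidates = [(10, 95), (12, 90), (15, 90), (11, 99), (20, 80), (25, 80)]
print(nondominated(candidates))   # -> [(10, 95), (12, 90), (20, 80)]
```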

Fig. 2. Delay regions for risk of delay estimation
Table 3. Optimal Schedule Comparisons: with and without risk of delay
Fig. 4. Maximising reliability: source delay effects
Fig. 5. Effect of risk of delay importance
Modeling Reliability of Train Arrival Times

November 1996 · 889 Reads

This paper deals with the scheduling of trains so as to minimise train trip times, whilst maximising reliability of arrival times. The amount of risk of delay associated with a schedule is used as the reliability component of a constrained schedule optimisation model. The paper outlines the model developed to quantify the risk of delays to individual trains, as well as to specific track segments and to the schedule as a whole. The risk model, which deals with single track operations, can be used to estimate the likely impact on reliability of arrival times of changes in train frequencies and operating practices; track and station infrastructure investment strategies; and train technology upgrading. An application of the model to the optimisation of schedules on a track corridor is described. The results obtained using the model are compared with the schedules used by train operations planning staff, in terms of overall delay and timetable reliability. The results highlight the significance of including a measure of timetable reliability, such as risk of delays, in the objective function for scheduling optimisation.

Assessment of Construction Smoothness Specification Pay Factor Limits Using Artificial Neural Network Modeling

July 2005 · 38 Reads

Currently, the Indiana Department of Transportation (INDOT) is using the California Profilograph as the standard measuring device in its construction smoothness specifications. The output derived from the profilograph is called the Profile Index (PI). The PI represents the total accumulated deviations of the profilograph output traces beyond a tolerance zone (blanking band). At present, INDOT is using a 0.2-inch blanking band to evaluate the profile traces, which has raised some concerns because some small unpleasant surface irregularities are covered by the blanking band. This study developed a rational method for interpreting profilograph traces using the 0.0-inch blanking band (zero tolerance) method and established the corresponding pavement smoothness specifications. The preliminary PI0.0 smoothness specification was developed by converting the existing PI0.2 specification to the PI0.0 specification, using several Profile Index conversion models. In addition, the current incentive/disincentive policies specified in the smoothness specification are based on subjective engineering judgment, and the extent to which they reflect the long-term benefits of a smoother pavement, in providing a longer service life, is still unknown. Thus, a roughness progression model was developed using the Artificial Neural Network methodology to determine the effect of various initial specification smoothness limits on the future smoothness progression and the pavement service life. Finally, using the developed model, the preliminary converted specification was modified to account for the long-term benefits of the pavement, thereby justifying the incentive/disincentive policies in the specification.

Table 1: Boarding process at a bus stop and at a BRT station 
Table 2: Summary of events 
Table 4: Average walking time for passenger and average lost time for bus (Peak time) 
Influence of Platform Walking on BRT Station Bus Dwell Time Estimation: Australian Analysis

December 2010 · 604 Reads

The common approach to estimating bus dwell time at a Bus Rapid Transit (BRT) station platform is to apply the traditional dwell-time methodology derived for suburban bus stops. Current dwell-time models are sensitive to bus type and fare collection policy, along with the number of boarding and alighting passengers. However, they fall short in accounting for the effects of passengers walking along a relatively longer BRT station platform. The analysis presented in this paper shows that the average walking time of a passenger at a BRT platform is about 10 times that at a bus stop. The requirement of walking to the bus entry door at the BRT station platform may lead to the bus experiencing a higher dwell time. This paper presents a theory for a BRT network that explains the loss of station capacity during peak period operation. It also highlights shortcomings of presently available bus dwell-time models suggested for the analysis of BRT operation.
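
The effect described above can be sketched as an additive walking term on top of a conventional dwell-time calculation. The per-passenger service times, the additive form, and the walk times below are assumptions for illustration, not the calibrated Australian model.

```python
def dwell_time_s(boarding, alighting, board_time_s=3.0, alight_time_s=2.0,
                 walk_time_s=0.0, door_open_close_s=4.0):
    """Illustrative dwell-time sketch: conventional boarding/alighting service
    time plus an extra platform-walking term for long BRT station platforms.
    All per-passenger times are placeholder assumptions."""
    service = max(boarding * board_time_s, alighting * alight_time_s)
    return door_open_close_s + service + walk_time_s

# Same passenger load at a suburban stop vs. a long BRT platform where the
# critical passenger must walk to the entry door (hypothetical walk times,
# roughly reflecting the ~10x walking-time difference noted above).
print("bus stop:    ", dwell_time_s(8, 5, walk_time_s=2.0), "s")
print("BRT platform:", dwell_time_s(8, 5, walk_time_s=20.0), "s")
```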

Automated Road Segmentation Using a Bayesian Algorithm

August 2005 · 117 Reads

Modern road profilers deliver long sequences of measurements on road characteristics including a road's longitudinal and transversal unevenness. These measurements represent adjacent parts of the physical road, and interest focuses more on the overall pattern of these measurements than on each single value. In order to systematically assess the information contained in these measurement series, one typically wishes to partition a given series into segments, where each segment contains measurements which are "similar" to each other but "dissimilar" to the elements in the neighboring segments. An algorithm is suggested that combines a recently developed Bayesian identification of transitions between two homogeneous road sections with a heuristic approach that uses this technique iteratively to find multiple homogeneous sections in arbitrary long measurement series. The approach is demonstrated with narrowly spaced measurement series of the international roughness index as well as rutting.

Road Network Robustness for Avoiding Functional Isolation in Disasters

September 2004 · 466 Reads

In a disaster situation, road networks play a critical role in maintaining routes for evacuation and logistics. In the event of a catastrophic disaster such as an earthquake, part of the road network may easily be broken into isolated components. In this situation, it is critical that individuals in each district have access to vital facilities in their local neighborhood so that no district is isolated. In this paper, we propose using a topological index (TI) to quantify road network dispersiveness/concentration. Dispersiveness/concentration evaluated by the TI is a valuable approach to evaluating the isolation of districts in a city. We used the index to evaluate the dispersiveness/concentration of a road network in the Hanshin region in Japan, which experienced severe damage in the 1995 Hanshin-Awaji Earthquake. In order to construct the graph for calculating the TI, we also present a methodology for specifying effective road links for avoiding functional isolation of districts.

Load Transfer Analyses of Buried Pipe in Different Backfills

November 1997 · 57 Reads

Nonlinear finite-element analyses have been carried out to assess the effects of different trench backfill materials, pipe burial depths, and pipe materials on the amount of traffic load transferred to buried pipe. The analyses show that the use of trench backfills such as controlled low strength material (CLSM), instead of traditional materials such as sand and clay, results in significantly reduced stresses in polyvinyl chloride (PVC) pipe under traffic loading. This finding is in agreement with recent truck load tests carried out in the City of Edmonton, where strains were monitored on buried PVC water mains. The protection of buried pipes under or in CLSM backfill from traffic loading becomes more significant with decreasing pipe burial depth and stiffness. The reasons behind the difference in load transfer between the traditional backfills and CLSM are the high elastic modulus and strength of CLSM, as well as the uniform load transfer along the longitudinal axis of the pipe.

Stresses and Deformation of Buried Pipeline under Wave Loading

October 2001 · 49 Reads

An evaluation of wave-induced pore pressures and effective stresses has been recognized by marine geotechnical engineers as an important factor in the design of offshore pipelines. Most previous investigations of the wave-seabed-pipe interaction problem have not considered the material of the pipeline. Thus, the internal stresses and deformation of the pipeline have not been investigated in the wave-seabed-pipe interaction problem. This paper considers the pipeline itself as an elastic material and links the analysis of the pipeline with the wave-seabed interaction problem. Based on the numerical results presented, the effective stress in the angular direction (σ_θθ) and the shear stress (τ) within the pipe are much larger than the wave-induced pore pressure.

Repeatability in Crack Data Collection on Flexible Pavements: Comparison between Surveys Using Video Cameras, Laser Cameras and a Simplified Manual Survey

July 2005 · 45 Reads

Crack data can be collected using manual or automatic surveys. Traditionally, manual methods are used, and they are still the most common. Changing to automatic systems will enhance the efficiency of data collection as well as its objectivity. In this study, the repeatability of an automatic crack data collection system using video images was evaluated: ten repeated measurements were made on a 10-km-long road section. Cracking was also measured using six laser cameras attached to the same vehicle, and the results from the two methods were compared. Simultaneously, a simplified manual windshield survey was conducted by three different persons, and the repeatability obtained was used for comparison with the repeatability established using the image and laser methods, respectively. The correlation between repeated measurements using the two automatic systems was high, while the repeatability of the manual, subjective method was low. Suitable measures for crack characterization are discussed.

Predicting Traffic-Generated Carbon Dioxide Concentrations in Sydney

September 1997 · 55 Reads

The abilities of two line-source dispersion models to predict road-edge concentrations of carbon dioxide along several roads are tested. The CO2 concentrations predicted by CALINE4 and HIWAY-2, each using a new power-based emissions module, are compared with eight days of experimental data collected at three sites in Sydney during a recent field study carried out by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) on behalf of the Roads and Traffic Authority (RTA) in Sydney to quantify air pollution near roads and highways. The measurements were made at locations up to 60 m downwind from the roadside and at heights up to 10 m above the ground. The comparison of the measured CO2 concentrations and those predicted on the basis of appropriate traffic and meteorological data shows that both CALINE4 and HIWAY-2, when used in conjunction with the new vehicle emissions module, satisfactorily estimate road-edge concentrations of CO2. As such, both dispersion models, CALINE4 and HIWAY-2, in conjunction with the new power-based emissions module, are potentially useful tools for highway planners.

Structural Response of Concrete Pavements under Moving Truck Loads

December 2007 · 499 Reads

While a great deal of research has been conducted on concrete pavement performance and deterioration under static loads, only very limited research has been carried out on its dynamic response. Furthermore, opinions differ as to which type of loading (static or dynamic) results in greater values of slab deflection or flexural stress. In the present study, a test section consisting of two jointed reinforced concrete pavement sections and two jointed plain (unreinforced) concrete pavement sections was constructed and tested under both quasistatic and dynamic truck loads. The truck load was allowed to wander at predetermined locations on the instrumented pavement at speeds from 5 to 55 km/h. Strain gauges and displacement transducers were installed along the test section to monitor the pavement responses. Time history responses of the test section were recorded and used to validate a finite-element model developed on the ANSYS platform for a further sensitivity study on the parameters affecting the dynamic response of concrete pavements. The results indicate the significance of dynamic amplification in concrete pavement design.

Modeling Damage to Rigid Pavements Caused by Subgrade Pumping

January 1996 · 44 Reads

The efficient utilization of construction and maintenance resources requires the ability to effectively predict and model subgrade pumping in rigid concrete pavements. A review of the evolution of a rigid pavement pumping model is presented in this paper. Extensions were made to an existing model to consider the effects of climate and drainage conditions, as well as soil composition on pumping magnitudes. Additional modifications permit the effect of nonconventional vehicle configurations to be considered in the calculation of pumping values. Finally, the process of calculating pumping volumes and distributing void areas was altered to provide a more realistic effect. This improved model allows proposed designs to be analyzed to determine optimum pavement characteristics. The model is implemented in a finite-element–based method for analysis of pavements that also includes the effects of concrete cracking and fatigue. Urgent need for experimental data on the size and growth of voids beneath a pavement slab is pointed out.

Estimation of Remaining Service Life of Flexible Pavements from Surface Deflections

April 2010 · 446 Reads

Remaining service life (RSL) has been defined as the anticipated number of years that a pavement will be functionally and structurally acceptable with only routine maintenance. Usually RSL is computed from pavement condition survey results. This paper presents a methodology whereby RSL is estimated from pavement surface deflections. Deflection data were collected with a Dynatest 8000 falling weight deflectometer (FWD) from 1998 to 2006. A nonlinear regression procedure in the Statistical Analysis Software and the Solver tool in Microsoft Excel were used in model development. The results showed that a sigmoidal relationship exists between RSL and the center (FWD first sensor) deflection. The sigmoidal RSL models have very good fits and can be used to predict RSL at the network level based on the center deflection from the FWD.
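
A sigmoidal RSL-deflection relationship of the general kind described can be fitted with standard nonlinear least squares. The sketch below uses a generic sigmoid and synthetic deflection-RSL pairs; the functional form and all numbers are placeholders, not the paper's fitted models.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoidal_rsl(d0, a, b, c):
    """Generic sigmoid relating RSL (years) to FWD center deflection d0.
    Illustrative functional form; not the paper's model."""
    return a / (1.0 + np.exp(b * (d0 - c)))

# Synthetic (center deflection in microns, RSL in years) observations
d0 = np.array([150, 200, 250, 300, 350, 400, 450, 500], dtype=float)
rsl = np.array([19.0, 17.5, 14.0, 9.5, 6.0, 3.5, 2.0, 1.2])

params, _ = curve_fit(sigmoidal_rsl, d0, rsl, p0=[20.0, 0.02, 300.0])
a, b, c = params
print(f"fit: RSL = {a:.1f} / (1 + exp({b:.3f} * (d0 - {c:.1f})))")
print("predicted RSL at d0 = 275 microns:",
      round(float(sigmoidal_rsl(275.0, *params)), 1), "years")
```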

Discrete-Element Method Investigation of the Resilient Behavior of Granular Materials

July 2004 · 96 Reads

This paper presents the results of numerical simulations of the resilient modulus test used to mechanically characterize the resilient behavior of aggregate materials, commonly used in pavement bases and subbases. The investigation made use of the discrete-element method (DEM) to replicate the particle behavior usually experienced during laboratory sample preparation and testing. The simulations were based on assemblies of circular particles confined between top and bottom rigid boundaries and laterally confined at constant stress. Contact forces and displacements were assumed to obey a linear relationship and shear forces were bounded by a maximum value (Coulomb friction law). Compacted samples were subjected to deviator repeated loads. The investigation showed that the DEM is capable of reproducing the results of the resilient modulus test performed on real granular materials in a qualitative manner. Further, the method predicted the effect of the state of stress depicted by laboratory testing.

Effect of Restriction of Vision on Driving Performance

September 1994 · 41 Reads

The effect of restricting vision on driving performance was investigated in a field study. Commonly occurring binocular visual-field defects were simulated for a group of young normal subjects, and the effect of these defects on their driving performance on a private closed rural road, free of other vehicles, was assessed. The monocular condition did not significantly affect performance for any of the driving tasks assessed. Restriction of the binocular visual field to 40° and less significantly increased the time taken to complete the course and reduced the ability to detect and correctly identify road signs, avoid obstacles, and maneuver through limited spaces. Accuracy of road positioning and reversing were also impaired with field restriction. However, the time taken for many driving tasks, including reversing and maneuvering, and the driver's ability to estimate speed, stopping distance, or reaction time were not affected by a restriction of the binocular visual fields. The results are discussed with regard to their impact on traffic engineering practices.

Microscopic Dual-Regime Model for Single-Lane Roundabouts

June 2009 · 56 Reads

Most microsimulation tools used to model roundabouts encompass classical gap-acceptance algorithms to represent the insertion of approaching vehicles into the circulatory roadway. However, these algorithms fail to reproduce the mean priority-sharing process experimentally observed when the circulatory roadway is congested. This paper fills this gap by proposing an integrated microscopic framework with: (1) a gap-acceptance algorithm giving relevant capacity estimates in the uncongested regime; and (2) a probabilistic rate-based insertion decision module in the congested regime. In this framework, the car-following model can be implemented independently of the insertion decision-making process. Moreover, its direct influence on the insertion decision model is relaxed in the congested regime thanks to a relaxation procedure. The simulation results obtained compare well with field data collected at different sites for both peak and off-peak periods.

Toll Plaza Merging Traffic Control for Throughput Maximization

January 2010 · 844 Reads

A simple real-time merging traffic control concept is proposed for efficient toll plaza management in cases where the total flow exiting from the toll booths exceeds the capacity of the downstream highway (or bridge, or tunnel), leading to congestion and reduced efficiency due to capacity drop. Merging traffic control aims at maintaining the number of vehicles in the merge area close to a critical value that maximizes throughput; to this end, an algorithm known from local ramp metering operations (ALINEA) is proposed. The potential efficiency of the control concept is demonstrated by use of microscopic simulation applied to a particular toll plaza infrastructure with and without merging traffic control. It is shown that the employed feedback regulator is relatively insensitive to various settings, which indicates easy applicability with low fine-tuning needs in potential field applications. The case of partially uncontrolled lanes (e.g., for high-occupancy vehicles, buses, or other vehicles) is also addressed.
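
The feedback law referred to above, ALINEA, is an integral regulator on measured occupancy. The sketch below applies an ALINEA-style update to the release rate from the toll booths so as to hold the merge area near a critical occupancy; the gain, bounds, and occupancy values are illustrative, not the study's settings.

```python
def alinea_update(prev_rate, occupancy_pct, critical_occupancy_pct,
                  gain=70.0, min_rate=200.0, max_rate=1800.0):
    """ALINEA-style integral feedback: r(k) = r(k-1) + K * (o_crit - o(k)).

    Applied here, as in the abstract, to hold the merge area near the critical
    state that maximizes throughput. Occupancies are in percent; the gain
    (veh/h per percentage point) and the rate bounds are typical illustrative
    values, not the study's settings.
    """
    rate = prev_rate + gain * (critical_occupancy_pct - occupancy_pct)
    return min(max(rate, min_rate), max_rate)

# One control update per measurement interval: occupancy above the critical
# value -> release fewer vehicles from the toll booths, and vice versa.
rate = 1200.0                                    # current release rate, veh/h
for occ in (18.0, 24.0, 30.0, 26.0, 21.0):       # measured merge-area occupancy, %
    rate = alinea_update(rate, occ, critical_occupancy_pct=22.0)
    print(f"occupancy {occ:.0f}% -> release rate {rate:.0f} veh/h")
```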

Reset to Zero and Specify Active Safety Systems according to Real-World Needs

May 2010 · 26 Reads

Emergency Brake Assist (EBA), Adaptive Cruise Control (ACC), and alternative instantiations of intelligent vehicle control systems aspire to support the driver in controlling the vehicle and to alleviate the incidents that would lead to collisions and injury. This paper resets to zero and, based on data from the On-The-Spot (OTS) accident study, challenges the capability of active safety systems to address the sources of longitudinal control failures. The road user interactions file from 3024 road accidents in the Thames Valley and Nottinghamshire in the UK was analysed. Interactions where “failure to stop” or “sudden braking” is the precipitating factor are analysed and the main contributory factors are identified. Some of those factors are addressed by current and coming technologies, such as low road friction, excessive speed, and close following, but other common ones are significantly neglected, such as distraction, failure to judge the other person’s path, failure to look, and “look but did not see” instances.

Simulation of Road Surface Profiles

June 2001 · 352 Reads

This paper presents a novel technique for the simulation of shock and vibrations related to road surface irregularities. The technique is based on a recently developed universal road profile classification scheme, which is one of the main outcomes of a project aimed at better understanding the statistical nature of road surfaces and their interactions with road vehicles. The method, which focuses on the nonstationary and non-Gaussian nature of road profiles, is described along with an analysis procedure developed and implemented to automatically detect and extract transient events from the road spatial acceleration data as well as identify stationary segments of similar roughness (RMS). The paper shows how the concept of treating road surface irregularities as two fundamental components, namely, steady-state road surface irregularities and transient events, can be employed for classification and simulation purposes. The simulation technique is based on a universal statistical model of road surface profiles that characterizes the power spectral density of the underlying irregularities, the probability distribution function of the RMS level using the offset Rayleigh distribution function, and the transient density. The transient events are generated with random amplitudes according to the Gaussian distribution, the mean and standard deviation of which are functions of the underlying RMS level. This paper shows how these two components can be combined to numerically synthesize a process that faithfully represents the nonstationary, transient-laden nature of road surface profiles. The synthesized process can be physically realized on a vibration shaker to simulate road profiles.

Railway Route Rationalization: a Valuation Model

March 1985 · 25 Reads

Abandonment of rail branches and secondary lines with low traffic density is an effective means of maintaining railway company profitability in many countries, such as Britain, Canada, Australia, and Japan. In the United States, it was not until passage by Congress of the Staggers Act in 1980 that the industry's abandonment vigor became fully manifested. This is because the Staggers Act substantially reduced the time required for the Interstate Commerce Commission to act upon abandonment applications. An economic model is developed to predict losses on a railroad branch line without going through the cumbersome and lengthy calculations normally undertaken, thus reducing the time necessary for the decision-making process. To this purpose, regression analysis has been performed between losses incurred on 50 railroad branch lines and independent variables extracted from the abandonment applications made for these lines to the Interstate Commerce Commission. The resulting statistically significant model indicates that losses can confidently be predicted by making use of cost and revenue data that are readily obtainable by railway companies: freight revenues, maintenance costs for way and structures, rehabilitation costs, and equipment maintenance costs.

Statistical Approach to Road Segmentation

May 2003 · 93 Reads

A method for segmentation of a road based on surface measurements is presented. This method is based on a statistical model of the measurement series and acknowledges that neighboring measurements are dependent. Sudden changes in the level, in the variance, or in the autocorrelation of a series are detected. No prior knowledge about these quantities is required, and no distributional assumptions about the nature of the sudden change are made. The method allows for an assessment of the information contained in the measurements themselves. Uncertainties about the existence and possible location of a change point are communicated to the user in terms of probabilities. The road engineer may then match the extracted information with available complementary information about, e.g., the age of the pavement and traffic volumes. The focus is on the situation where it is known beforehand that at most one change point is present. Thus, the suggested method is best suited to assist the engineer in a detailed study of selected parts of a given measurement series, or it may be used iteratively when several change points are suspected. The application of this method is demonstrated using a measurement series for the international roughness index, collected with the Swedish road surface tester known as the laser RST vehicle.
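
The idea of reporting change-point uncertainty as probabilities can be illustrated with a stripped-down single change-point model. The sketch below computes a posterior over the location of one change in the mean of a Gaussian series (flat prior over locations, variance profiled out); the paper's method is richer, also handling changes in variance and autocorrelation.

```python
import numpy as np

def changepoint_posterior(y):
    """Posterior over a single change point in the mean of a Gaussian series
    (flat prior over locations, variance profiled out). A simplified sketch of
    communicating change-point uncertainty as probabilities."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    log_lik = np.full(n - 1, -np.inf)
    for k in range(1, n):                     # candidate split: y[:k] | y[k:]
        left, right = y[:k], y[k:]
        rss = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        log_lik[k - 1] = -0.5 * n * np.log(rss / n + 1e-12)
    post = np.exp(log_lik - log_lik.max())
    return post / post.sum()

# Synthetic IRI-like series: a rougher segment begins at index 30
rng = np.random.default_rng(1)
series = np.concatenate([rng.normal(1.5, 0.2, 30), rng.normal(2.4, 0.2, 20)])
posterior = changepoint_posterior(series)
k_hat = int(posterior.argmax()) + 1           # first index of the new segment
print("estimated first index of new segment:", k_hat)
print("probability mass at that location:", round(float(posterior.max()), 2))
```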









