Accurate prediction of rates of road deterioration is important in road management systems, both to ensure efficient prioritization and to set budget levels. The increasing damage to Libyan roads (the research target), resulting from the absence of regular maintenance, reinforces the need to develop a system that predicts road deterioration in order to determine optimal pavement intervention strategies (OIS) and support decision makers. In a pavement management system (PMS), pavement deterioration can be modeled deterministically or probabilistically. This paper proposes a Bayesian linear regression method to develop a performance model in the absence of historical data, using experts’ knowledge as the prior distribution. Libyan road experts who have worked for many years with the Libyan Road and Transportation Agency will be interviewed to supply the input data that feed the Bayesian model. The posterior distribution will then be computed through a likelihood function based on a few inspections. The expected result is a prediction of future pavement conditions based on experts’ knowledge and a few onsite inspections.
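As a concrete illustration of this approach, the sketch below performs a conjugate normal-normal Bayesian update for a linear deterioration model. All numbers (the expert prior, the inspection noise, and the inspection records) are hypothetical placeholders, not values from the Libyan study:

```python
import numpy as np

# Hypothetical expert prior on the deterioration rate b (PCI points lost
# per year of age): expert judgment encoded as b ~ N(mu0, tau0**2).
mu0, tau0 = 3.0, 1.0      # prior mean and std (assumed, for illustration)
sigma = 4.0               # assumed inspection noise std (PCI points)

# A few onsite inspections: section age (years) and measured PCI (0-100).
ages = np.array([4.0, 7.0, 10.0])
pci  = np.array([90.0, 78.0, 62.0])

# Linear deterioration model PCI = 100 - b*age + noise. With a normal prior
# and normal likelihood, the posterior for b is available in closed form.
z = 100.0 - pci                                   # observed condition loss
post_prec = 1.0 / tau0**2 + np.sum(ages**2) / sigma**2
post_mean = (mu0 / tau0**2 + np.sum(ages * z) / sigma**2) / post_prec
post_std  = np.sqrt(1.0 / post_prec)

# Predicted condition at a future age (15 years), blending the expert prior
# with the few inspections:
pred_15 = 100.0 - post_mean * 15.0
```

The posterior mean pulls the expert's prior rate toward the rate implied by the inspections, with a weight that grows as more inspections accumulate.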
Efficient and secure movement of goods and people across the Canada-US border is vital to the economies on both sides of the border. Nearly 30% of Canada-US road trade passes through the existing four-lane Ambassador Bridge between Windsor, Ontario and Detroit, Michigan, with nearly 8,000 truck crossings every day (PBOA, 2015). The enormous movement of commercial vehicles through this corridor is sometimes subject to extended delays, resulting in significant economic losses. To meet increased long-term travel demand and reduce the likelihood of disruption in moving surface trade between the two countries, the Gordie Howe International Bridge (GHIB), a new six-lane bridge across the Detroit River, is being constructed. The new bridge will provide a much-needed additional border crossing option in this busy trade corridor. While new infrastructure such as the GHIB provides system resilience, its introduction should be accompanied by emerging Intelligent Transportation System (ITS) technologies. An example of the latter is the use of Radio Frequency Identification (RFID) technology at border crossing facilities. RFID-enabled documents have an embedded radio chip that allows them to communicate with a ground station typically 10 to 15 feet away from the card. Such touchless technology is believed to reduce the time it normally takes border customs agents to process passenger vehicles crossing the border; vehicles in this situation must travel through RFID-enabled lanes. This paper provides a framework for simulating the potential benefits of using RFID technology at an existing border crossing (the Ambassador Bridge) under various RFID adoption scenarios. The objective is to examine the effect of incremental increases in the number of RFID-equipped lanes, the share of RFID-enabled documents, or combinations of the two.
The movement of individual passenger and commercial vehicles, the interactions among them, and their passage through primary service booths at the existing Ambassador Bridge will be simulated in the VISSIM microsimulation traffic software (PTV America, 2016).
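As a toy illustration of how RFID adoption scenarios translate into processing-time savings, the sketch below compares mean booth service time under different shares of RFID-enabled documents. The 20 s and 60 s service times and the simple averaging logic are assumptions for illustration only; the actual study uses a full VISSIM microsimulation:

```python
import random

def avg_service_time(n_vehicles, rfid_share, t_rfid=20.0, t_conv=60.0, seed=0):
    """Mean primary-booth service time (s) when a given share of vehicles
    carries RFID-enabled documents. The 20 s / 60 s service times are
    illustrative assumptions, not measured Ambassador Bridge values."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_vehicles):
        has_rfid = rng.random() < rfid_share
        total += t_rfid if has_rfid else t_conv
    return total / n_vehicles

base = avg_service_time(10_000, rfid_share=0.0)   # no RFID adoption
half = avg_service_time(10_000, rfid_share=0.5)   # 50% of documents RFID
```

Even this crude average shows the direction of the effect; queueing interactions between RFID and conventional lanes, which the microsimulation captures, would amplify or dampen it.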
Short-term traffic flow prediction is a cornerstone of intelligent traffic operations, management, policy making, and strategy formulation. It is an essential instrument supporting advanced traffic management system (ATMS) implementation and advanced traveller information system (ATIS) services such as congestion mitigation, ramp metering, road pricing, and route guidance. Traffic flow prediction forecasts macroscopic traffic quantities, namely traffic volume, speed, and occupancy (a proxy for density), the three major indicators of traffic state, over a short future time horizon. This research focuses on hourly traffic flow prediction based on advanced time series techniques, and contributes to the literature in several respects: (1) it discovers the existence of a cointegration effect among macroscopic traffic variables, which improves understanding of the traffic data generating process and thereby prediction accuracy; (2) it establishes a vector error correction model for traffic state prediction according to the Granger Representation Theorem; (3) it introduces a regime-switching structure to capture structural breaks in traffic time series and reflect the multiple states of the traffic situation; and (4) it incorporates spatially correlated information from upstream and/or downstream of the prediction location to enhance accuracy. Large-scale experiments show the consistent effectiveness and robustness of the proposed TS-TVEC model. We believe TS-TVEC is a theoretically sound, powerful and competitive method for modelling and forecasting complex multivariate traffic time series where the threshold cointegration effect is non-trivial. The model provides accurate predictions and is potentially applicable to a wide variety of traffic circumstances and to real-time traffic state forecasting.
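The cointegration effect at the heart of the model can be illustrated with a small simulation: two non-stationary series that share a common stochastic trend have a stationary linear combination, and it is this stationary "equilibrium error" that an error correction model feeds back into next-period changes. The loadings and noise scales below are arbitrary illustrative choices, not estimates from real detector data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# Two traffic variables driven by a shared stochastic trend (each is
# non-stationary on its own); the 0.5 loading and unit noise scales are
# arbitrary illustrative choices.
trend = np.cumsum(rng.normal(size=n))     # common random-walk trend
y1 = trend + rng.normal(size=n)
y2 = 0.5 * trend + rng.normal(size=n)

# The cointegrating combination y1 - 2*y2 cancels the shared trend, leaving
# a stationary equilibrium error; a vector error correction model feeds
# this error back into the next-period changes of y1 and y2.
spread = y1 - 2.0 * y2

# spread fluctuates in a narrow band while y1 itself wanders widely
wander, band = np.std(y1), np.std(spread)
```

The contrast between the wandering levels and the tightly bounded spread is exactly what formal cointegration tests detect in the traffic data.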
The Angus L. Macdonald Bridge is one of the two suspension bridges linking the twin cities of Halifax and Dartmouth in NS, Canada. The Macdonald Bridge has been a vital link in the Halifax transportation network, providing continuous 24/7 access across the Halifax Harbour. The bridge remains very busy in peak periods due to its proximity to Downtown Halifax and Dartmouth. Only light vehicles are permitted to cross the bridge, while trucks and other heavy vehicles use the wider Mackay Bridge. The bridge not only accommodates a large commuter traffic volume, but also carries pedestrians and bicyclists across the Halifax Harbour. However, after about 60 years in operation, the time has come to replace the entire deck slab of the Macdonald Bridge for an extended service life and a minimum frequency of maintenance. At the beginning of 2015, the Halifax Harbour Bridge Commission launched the “Big Lift” project to replace 46 deck segments within 18 months, at a total cost of $150 million. Although this is undoubtedly a necessary transportation infrastructure development in Halifax Regional Municipality (HRM), it will likely affect the vicinity of the bridge, traffic flow on the Mackay Bridge, daily travel activity and traffic operations, and transit routes and schedules.
Shock wave theory was proposed by Lighthill and Whitham (1955) and Richards (1956). For a signalized intersection, the theory was developed to analyze the dynamics of queue formation across cycles. In an oversaturated condition, part of the queue cannot be discharged by the end of a cycle. This generates a residual queue that carries over to subsequent cycles, causing delays or even blocking upstream ramps and intersections. Finding the residual queue length is therefore essential to the calculation of travel times and delays. Reviewing the literature, we found nine papers that use the residual queue length equation. However, the equation has been presented in six different forms, some of them completely different from the others. Since the equation is based on a given geometrical proof, it should be identical throughout the literature, but this is not the case. Hence, in this paper we investigate whether the existing variations of the residual queue equation generate the same, correct values.
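One common deterministic form of the recursion, based on conservation of vehicles (arrivals per cycle minus discharge capacity per cycle), can be sketched as follows. The parameter values are illustrative, and the paper itself compares several published variants of the equation:

```python
def residual_queue(q, s, C, g, n_cycles, q0=0.0):
    """Residual queue (vehicles) at the end of each cycle from flow
    conservation: q*C vehicles arrive per cycle and at most s*g can
    discharge during the effective green. This is one common
    deterministic form; the paper compares several published variants.
      q - arrival flow rate (veh/s)      C - cycle length (s)
      s - saturation flow rate (veh/s)   g - effective green time (s)"""
    Q, history = q0, []
    for _ in range(n_cycles):
        Q = max(0.0, Q + q * C - s * g)
        history.append(Q)
    return history

# Oversaturated example: 22.5 veh arrive per 90 s cycle but only 20 veh can
# discharge, so 2.5 veh of residual queue accumulate every cycle.
print(residual_queue(q=0.25, s=0.5, C=90, g=40, n_cycles=4))
# → [2.5, 5.0, 7.5, 10.0]
```

When demand falls below capacity (q*C < s*g) the max(0, ...) term keeps the residual queue at zero, which is where the published variants of the equation should, but do not always, agree.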
This study analyzes the differences in lane change duration (LCD) between cars and heavy vehicles as the lane-changing vehicle (LCV). The study also analyzes the differences in LCD among lane changes for different types of lead and trailing vehicles (cars and heavy vehicles) in the target lane. The study developed an objective method for determining lane change duration using vehicle trajectory data from the US-101 freeway in California, U.S.A. The relationship between LCD and explanatory variables was modeled using a multiple linear regression model. The results show that LCD decreases when the positive speed difference between the LCV and the lead vehicle in the current lane increases, but increases as the spacing between them increases. The results also show that drivers take less time to change lanes when the spacing between the trailing vehicle and the lead vehicle in the target lane is longer. On the contrary, LCD is longer when the lead vehicle's speed is lower and when the LCV is a heavy vehicle.
The City of Toronto Central Business District (CBD) experiences its highest traffic volumes during the A.M. and P.M. peak periods, when travel demand is at its maximum for the day. During these peak periods, congestion from high traffic volumes causes significant delays to passenger vehicles, commercial vehicles, streetcars and buses. In an effort to alleviate this congestion, the City of Toronto, like many major cities around the world, restricts on-street parking on most major streets in the CBD during the peak periods. This policy ensures that the streets' full capacity is utilized, since on-street parking effectively blocks the right-most lane. This paper uses traffic microsimulation to study the impact of illegal parking on congestion during the A.M. peak period in Toronto's CBD. Although simulation models for Toronto's road network exist, these models omit illegal parking and therefore do not account for its adverse effects on network travel times and delays. This research builds on an existing microsimulation model and improves its accuracy and realism by incorporating illegal parking into the model. The paper is organized as follows: after the introduction, a review of past literature on the impacts of illegal parking is presented, followed by an explanation of the data used to generate the microsimulation model and the methodology involved. The preliminary results of the study are then presented, ending with a discussion of the results and conclusions derived from the findings.
Automated vehicles and connected vehicles are highly anticipated. It is expected that over the next few decades innovations along these two technology vectors will transform the automotive industry, personal mobility, public transportation, the taxi industry, land use, urban planning, transportation infrastructure, jobs, vehicle ownership, and many other physical and social aspects of our built world and our daily lives. However certain we may be that fully autonomous vehicles (AVs) will dominate motorized urban and inter-urban transportation in the foreseeable future, everything else, including the timing, cost, labour disruption, congestion, rights of way, and the management of interim fleets of mixed autonomous and non-autonomous vehicles, can only be surmised. The constellation of unpredictable barriers and unforeseen innovations is far more extensive than the cornucopia of potential and hoped-for benefits. We begin with a simple recap of the expected technology trajectory for robotic vehicles. We then consider the dimly understood vehicle-capability landscape with which transportation planners must contend over the next 50 years. Next, we discuss vehicle ownership, arguing that ownership will in the end be more important for sustainability and liveability than the speed with which robotic technology matures and becomes pervasive. Finally, we present a case for robotic public transit and a process, which we call Transit Leap, that uses robotic vehicles to dramatically expand transit ridership.
Roadside inspection of commercial motor vehicles/trucks (CMVs) checks for overweight and unsafe vehicles. Inspections delay vehicles, which entails economic costs. There is a trade-off between the number and thoroughness of inspections, with their attendant delay costs to drivers and vehicles, and the benefits of reduced overweight damage, improved safety, fewer security threats and efficient tax collection. This trade-off is made more efficient when road management can distinguish vehicles that are more likely to be in violation of weight and safety requirements from those that are less likely. If low-risk vehicles could be identified and not delayed, this would save time for those vehicles and make inspection more effective by concentrating on higher-risk trucks. Prescreening is the process of evaluating a vehicle prior to its arrival at a fixed or mobile inspection location and deciding whether the vehicle should be called off the highway for inspection. This paper explores how alternative Vehicle-to-Infrastructure (V2I) prescreening technologies affect vehicle safety and productivity. A simulation model is developed and used to estimate the safety impact and to understand the interrelationships between technology choice, inspection capacity and the road safety inspection process.
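The trade-off described above can be sketched with a toy Monte Carlo model: a prescreening technology with a given detection accuracy determines both the share of violations caught and the share of trucks delayed. All rates below are illustrative assumptions, not outputs of the paper's simulation model:

```python
import random

def prescreen_sim(n_trucks, viol_rate, flag_tpr, flag_fpr, seed=1):
    """Toy Monte Carlo of V2I prescreening. Each truck is in violation with
    probability viol_rate; the technology flags violators with true-positive
    rate flag_tpr and clean trucks with false-positive rate flag_fpr.
    Returns (share of violations caught, share of trucks pulled in)."""
    rng = random.Random(seed)
    caught = flagged = violations = 0
    for _ in range(n_trucks):
        in_violation = rng.random() < viol_rate
        violations += in_violation
        if rng.random() < (flag_tpr if in_violation else flag_fpr):
            flagged += 1
            caught += in_violation
    return caught / max(violations, 1), flagged / n_trucks

# Illustrative rates: 10% of trucks in violation, an 80%-accurate flag with
# a 5% false-positive rate. Roughly 80% of violations are caught while only
# about one truck in eight is delayed for inspection.
catch_share, delayed_share = prescreen_sim(
    50_000, viol_rate=0.1, flag_tpr=0.8, flag_fpr=0.05)
```

Varying the flag accuracy in such a model traces out the trade-off curve between delay imposed on compliant carriers and violations intercepted.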
Built environments should be perceived to be connected, convivial, conspicuous, comfortable and convenient (Gardner et al, 1996). They should also be safe, enhance community participation, encourage physical activity, connect communities, and contribute to the health and wellbeing of local residents. Most industrialised countries are seeking ways to encourage people to walk and cycle rather than use cars, in order to reduce emissions and counter the rise in obesity. Cycling and walking, particularly as part of a person's daily commute, are known to have a number of health benefits. The road traffic environment can have a major impact on people's choices to walk or cycle and how safe they feel doing so, and can influence how connected residents feel to each other. For example, fear of injury is the number one stated reason for Londoners choosing not to cycle. People living in low-income urban areas are most likely to experience the negative impacts of transport in terms of injuries and quality of life, which is a barrier to active travel and the health benefits it confers (Christie et al 2010; Christie et al 2011; Lyons et al 2003; Titheridge et al 2014). In most industrialised countries road safety is measured by counting casualties or casualty rates per head of population. These data are collected by the police and focus on collisions involving a mechanically propelled vehicle. They are generally used to identify intervention policy and practice, and in this respect such approaches tend to be reactive. At a local level, once interventions are implemented, their evaluation generally requires several years of collision data to judge whether they have been successful, because of the relatively low frequency of collisions at local levels. There are thus a number of problems with using casualty data to measure the safety of roads. The aims of ASPIRE are: 1. To understand the key aspects of feeling safe as pedestrians and cyclists in urban communities and how these link to active travel and wellbeing, taking into account contextual factors such as the characteristics of the environment and the traffic within it. 2. To explore how feeling safe differs between low-income communities and more affluent ones. 3. To develop a tool that local authorities can use to design interventions, improve perceived safety, and measure impacts on active travel and wellbeing, especially among the most low-income communities. 4. To understand and be equipped to prompt policy actors to improve the road traffic system (actor analysis; a systems approach dealing with the content, context and process aspects of the road traffic system). This presentation provides an overview of the project and then focuses primarily on its data collection and database management aspects, key issues that must be resolved before the questions above can be addressed.
The first traffic signs were introduced at the end of the 19th century. Over a century later, these signs still play a major role in the traffic operation and safety performance of the ground transportation network. The sheeting material of these signs has evolved and been tested constantly to evaluate retroreflectivity based on use, type, pigments, film and micro-prisms. The objective is to ensure that road signs are visible and legible, and reflect an appropriate amount of light toward approaching road users. However, signs remain a safety bottleneck, especially under poor light conditions. Only one quarter of total driving occurs at night, yet more than half of total collisions occur in the dark. Among all road facilities, intersections are known to be the most vulnerable, with 45% of total collisions. Stop-controlled intersections (SCI) sit at the top of the list with the highest collision rate, accounting for 8% of fatalities and 11% of total collisions with serious injuries. Unfortunately, pedestrians are the leading victims in nighttime SCI collisions; in Canada alone, 2,728 individuals were lost between 1999 and 2011. Twenty-one percent of these collisions are attributed to the environment surrounding the driver, and "inadequate and poorly maintained signs" is often cited as a contributing factor. According to one study, about 60% of SCI collisions are due to deliberate, intentional or unintentional non-compliance with the command of the traffic signs. There is, however, no clear understanding of the balance between intentional and unintentional violations. Nevertheless, misperception or overlooking of the sign is a major contributing factor, arising from poor visibility or illegibility of signs or of road/pavement marking alignments. Poorly visible signs may thus fail to meet the condition "to command drivers' attention" required by the Manual on Uniform Traffic Control Devices.
Signs with enhanced conspicuity could be a suitable alternative.
As the economy has globalized in recent decades, demand for freight transportation has dramatically increased, with road transportation a major mode. According to a United States Department of Transportation (U.S. DOT) report, the tonnage of goods carried by heavy vehicles increased from 12,778 million tons in 2007 to 13,182 million tons in 2012, and is projected to reach 18,786 million tons by 2040 (U.S. DOT, 2014). Similarly, Transport Canada reported that goods movement by heavy vehicles increased to 251.4 billion tonne-kilometers in 2013, a 4.1% increase from 2012 (Transport Canada, 2015). Consequently, as more passenger cars and heavy vehicles share the same roads, keeping roads safe becomes a major challenge. In the U.S., 4,186 large trucks and buses were involved in fatal crashes in 2013, and large truck and bus fatalities per 100 million vehicle miles traveled by all motor vehicles remained steady at 0.142 from 2012 to 2013 (U.S. FMCSA, 2015). It is therefore essential to analyze the safety of mixed car-heavy vehicle traffic flow. The objective of this study is to analyze rear-end collision risk on a freeway using two surrogate safety measures: time-to-collision (TTC) and post-encroachment time (PET). These measures were estimated for different types of lead and following vehicles (car or heavy vehicle) using individual vehicle trajectory data. The differences between the two surrogate safety measures are also discussed.
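The two surrogate measures have simple kinematic definitions, sketched below under the usual constant-speed assumption for TTC; the numbers are illustrative only:

```python
def time_to_collision(gap_m, v_follow, v_lead):
    """TTC (s): time until the follower reaches the lead vehicle's rear if
    both keep their current speeds; defined only when the follower is
    closing. gap_m is bumper-to-bumper spacing in metres, speeds in m/s."""
    closing = v_follow - v_lead
    return gap_m / closing if closing > 0 else float("inf")

def post_encroachment_time(t_lead_exits, t_follow_enters):
    """PET (s): elapsed time between the lead vehicle leaving a conflict
    point and the following vehicle arriving at it."""
    return t_follow_enters - t_lead_exits

# A car closing on a heavy vehicle: 30 m gap, 25 m/s vs 20 m/s.
print(time_to_collision(30.0, 25.0, 20.0))   # → 6.0 seconds
```

Smaller TTC or PET values indicate more severe conflicts, which is why both are computed per lead/follower vehicle-type pair from the trajectory data.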
In the days following the rail disaster at Lac-Mégantic, QC that resulted in 47 fatalities and devastated the downtown, public calls were made for “real-time” information regarding the movement of dangerous goods through communities. The response from Transport Canada was the issuance of Protective Direction (PD) 32, requiring railways to share with emergency organizations, on a yearly basis, aggregate information detailing the amount and type of dangerous goods being transported by rail through their communities (Government of Canada, 2013). Transport Canada and the president of the Canadian Association of Fire Chiefs recently stated publicly that these new data sharing measures are sufficient and questioned the need for data at real-time granularity to support emergency planning, though they acknowledge that some municipalities are still calling for real-time information (CBC, 2015). A better understanding of the perspectives of individual emergency planning officials could shed light on specific concerns regarding data availability, including whether there are opportunities to enhance the type, frequency, and resolution of the data. No published research appears to solicit and present the perspectives of these organizations regarding their preferences relating to the information provided through PD 32 or through alternate sources. This paper presents the results of a survey distributed among emergency planning organizations in communities along rail lines in New Brunswick.
In 2008, Canada’s energy sector appeared to have reached a turning point. The price of crude oil had steadily increased since 2002, causing concern about the impact of higher prices on Canadian consumers. Despite the vast reserves of crude oil in Canada, imports continued to supply almost half of the crude oil refined domestically. That same year, however, higher oil prices also helped catapult energy into Canada’s largest export earner, surpassing motor vehicles and parts. By 2015, energy’s run as Canada’s biggest export had ended, with declining oil prices becoming a cause for concern. The world was grappling with an over-supply of oil that stemmed from changing demand and from the supply of non-conventional crudes. In Canada, growing oil production has altered distribution channels to refineries and markets. Together, these changes may also have consequences for the environment. This article examines trends from 2005 to 2014 in Canadian oil production and distribution, as well as some possible implications. It focuses on two potential environmental concerns arising from these trends: the risk of accidents during transport, and higher GHG emissions related to overall industry growth and the increased extraction from non-conventional reserves.
Economists have long advocated congestion pricing as the best way to tackle traffic congestion. Yet congestion pricing is still fairly rare, and various second-best policies for congestion relief continue to gain attention. A leading candidate is subsidizing transit fares in order to attract people out of their cars. Subsidization is politically popular, but it has several limitations. First, reducing fares below marginal social cost creates a deadweight loss from induced trips and contributes to crowding, which is a serious problem in many cities. Second, if transit is a poor substitute for driving, large fare reductions are needed to make a dent in traffic congestion. Third, if the own-price elasticity of car trips is large, then any potential benefits from congestion relief will be largely offset by latent demand (Duranton and Turner, 2011). Finally, lowering fares exacerbates transit deficits. Cities vary widely in their fare policies. Many levy fares that are constant throughout the day; others have adopted some degree of time variation, either as peak-period surcharges (e.g., London and Washington, D.C.) or off-peak discounts (e.g., Singapore and Melbourne). The main goal of this paper is to analyze optimal fare policies when traffic congestion and transit crowding are both present. We use a dynamic model that accounts for trip-timing decisions and the evolution of transit crowding and traffic congestion over the course of a peak travel period. The focus is on how transit fares should be set to simultaneously address traffic congestion and transit crowding externalities, and how the level and time structure of fares affect the overall efficiency of the two-mode system.
The purpose of this study is to measure the elasticity of vehicle-kilometres travelled in Ontario with respect to fuel prices. This elasticity is useful for understanding the road transportation response that may arise if Ontario were to implement a carbon pricing regime such as a carbon tax or cap-and-trade system, both of which would raise fuel prices. We find that the elasticity of vehicle-kilometres travelled in Ontario with respect to the price of gasoline is within the range of -0.07 to -0.16, and our preferred model yields an elasticity of -0.12. We also find that while fuel economy negatively impacts fuel consumption with an elasticity close to -1.00, it positively impacts vehicle-kilometres overall with an elasticity close to 1.5. This implies that as fuel economy improves, people generally choose to use more road transportation in addition to saving on fuel consumption.
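The headline elasticity translates directly into predicted travel responses under the constant-elasticity relationship (new VKT / old VKT) = (new price / old price) raised to the elasticity. A minimal sketch using the study's preferred estimate of -0.12:

```python
def vkt_change(price_change, elasticity=-0.12):
    """Proportional change in vehicle-kilometres travelled for a given
    proportional fuel-price change, using the constant-elasticity form
    (new VKT / old VKT) = (new price / old price) ** elasticity.
    The -0.12 default is the study's preferred estimate for Ontario."""
    return (1.0 + price_change) ** elasticity - 1.0

# A 10% gasoline price rise (e.g. under carbon pricing):
print(round(vkt_change(0.10) * 100, 2))   # → -1.14 (% change in VKT)
```

In words: a 10% fuel-price increase is predicted to reduce vehicle-kilometres travelled by a little over 1%, consistent with the inelastic demand the study reports.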
In this paper, we present a methodology that extends Statistics Canada's official estimates of Ontario's productivity, which begin in 1997, back to 1985. Using this extended series, we examine the contribution of public capital to productivity, with particular emphasis on transportation capital. We follow the growth accounting framework first introduced by Robert Solow (Solow, 1957) and currently used by Statistics Canada and the OECD (OECD, 2001). We adjust for the lack of disaggregated data for the province of Ontario and estimate productivity at the business sector level. The estimation of public capital's contribution to productivity uses elasticity estimates of output with respect to public capital from Macdonald (2008). Other Canadian researchers have found a similar magnitude for the relationship between public infrastructure and business sector productivity (Harchaoui & Tarkhani, 2002; Brox & Fader, 2005). Estimates of the elasticity of transit infrastructure were taken from Vafa & Georgiev (2013). We note, however, that there may be important differences between the business cost savings attributable to public infrastructure in general and to transportation infrastructure in particular. This question will be the subject of further study.
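The growth accounting framework computes multifactor productivity growth as the Solow residual: output growth minus share-weighted input growth. A minimal sketch with purely illustrative numbers (not Ontario estimates):

```python
def tfp_growth(dY, dK, dL, capital_share):
    """Solow residual: multifactor productivity growth equals output growth
    minus share-weighted growth of capital and labour inputs, the core
    identity of the growth accounting framework. Growth rates are
    proportional changes; the numbers below are purely illustrative."""
    labour_share = 1.0 - capital_share
    return dY - capital_share * dK - labour_share * dL

# Output grows 3%, capital 4%, labour 1%, with a capital share of 0.33:
print(round(tfp_growth(0.03, 0.04, 0.01, 0.33), 4))   # → 0.0101
```

The residual (here about 1 percentage point) is the part of output growth not explained by measured inputs; attributing a slice of it to public and transportation capital is what the elasticity estimates from Macdonald (2008) enable.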
The premise of many infrastructure public-private partnerships (P3s) is to deliver better life-cycle value than conventional procurement approaches. The structure of the project can either enhance or shrink project life-cycle value, the so-called value “pie”. Both the size of the pie (project value created) and the size of its slices (value captured by the partners) depend on a number of technical and contractual considerations. This research demonstrates an early-stage life-cycle evaluation of an infrastructure P3 and explicitly studies the value implications for the project partners. The discussion speaks to managers, policy-makers, and all those concerned with the development of infrastructure projects. The paper starts with an overview of the concepts central to the early-stage life-cycle evaluation of both general and P3 projects. It then presents the essential elements of the analysis of economic value, and illustrates the analysis using a realistic case study of a hypothetical public-private partnership for developing and operating a major international airport.
This paper presents the most significant findings from a forensic evaluation of the long-term cracking performance of asphalt mix designs, including Marshall and Superpave mixes with various performance grades of binder and a RAP content of 25% of the total weight of aggregates. The experiment compared the permeability, stiffness, low-temperature behavior, and oxidation susceptibility of the mixes and correlated those properties with deflection and cracking data from the six LTPP SPS-9A sections on Route 2 in Connecticut. The laboratory mechanical testing included measuring hydraulic conductivity with a Flexible Wall Permeameter, dynamic complex modulus with the Asphalt Mixture Performance Tester (AMPT), creep compliance and tensile strength with the Indirect Tension Test, and fracture properties with the Semi-Circular Bend (SCB) test. The evaluation of field performance included analysis of deflection basins and back-calculated elastic moduli from Falling Weight Deflectometer data, as well as visual evaluation of surface distresses such as cracking and weathering. The forensic laboratory testing revealed reasonable correlations between some laboratory test results and field performance. For instance, the dynamic modulus values measured by the AMPT at 20 °C at the highest and lowest frequencies were found to be similar to the back-calculated asphalt layer moduli. The extent of transverse cracking appeared to be highly associated with the Young's moduli estimated from SCB fracture energy and toughness, while the amount of longitudinal wheelpath cracking correlated better with SCB fracture energy. On the other hand, neither fracture properties nor tensile strength was found to correlate with the extent of longitudinal joint cracking observed. The laboratory testing revealed overall higher stiffness and oxidation in the RAP-containing mixes. The use of these stiffer mixes, however, did not much affect the load-related performance of the experimental pavement sections.
On the other hand, very fast deterioration of the longitudinal joints occurred in all pavement sections, most likely related to the creation of cold joints during paving. This phenomenon has been reduced in current practice with the introduction of wedge joints by the CTDOT.
In 2014, the Wisconsin Department of Transportation (WisDOT) and industry developed a pilot program for hot mix asphalt (HMA) with higher recycled asphalt content that required the use of performance tests during mix design and production. Following the balanced mix design concept, mixture tests were selected to address rutting resistance after short-term aging and durability after long-term aging. The tests selected were the Hamburg Wheel Tracking (HWT) test, the semi-circular bend (SCB) test at intermediate temperatures, and the disc-shaped compact tension (DC(T)) test at low pavement temperatures. Asphalt binder extraction and grading from aged mix were also required. The focus of this paper is to summarize the mixture performance test and recovered binder data gathered during the pilot project on STH 77 in Ashland County, Wisconsin; suggest modifications to the SCB test procedure; and present accelerated aging protocols for continued use of performance testing in practice. Semi-circular bend test results collected during the project at 25°C did not relate well to values published in the literature or show adequate sensitivity to changes in mix properties. The effects of test temperature and an alternative analysis method are presented. Based on the results, recommendations include the use of a climate-based approach for test temperature selection and the inclusion of post-peak analysis to better discriminate between mix compositions and aging conditions. Accelerated long-term aging protocols involving loose mix aging at 135°C for 12 and 24 hours are compared to AASHTO R 30 compacted mix aging using recovered binder and mixture fracture properties. Results show that 12-hour loose mix aging produced recovered binder grading similar to AASHTO R 30, whereas the effect of aging on the mixture fracture tests was inconclusive.
The relationship between laboratory and field aging is investigated through comparison of field cores to laboratory-aged plant-produced mix from a project constructed in southeast Minnesota in 2006. Lastly, the laboratory performance of the high-recycled and conventional mix designs is compared on the basis of mixture cracking resistance and recovered asphalt binder properties after extended aging. The high-recycled mix exhibited equal or better performance relative to the conventional mix across all selected performance tests. This comparative analysis also provides an example of how the inclusion of performance testing can influence the materials selection process and produce test results indicative of improved overall performance of the mix.